Wenxiong Chen and Congming Li
Methods on Nonlinear Elliptic
Equations
SPIN AIMS’ internal project number, if known
– Monograph –
September 22, 2008
AIMS
Preface
In this book we intend to present basic materials as well as real research
examples to young researchers in the ﬁeld of nonlinear analysis for partial
diﬀerential equations (PDEs), in particular, for semilinear elliptic PDEs. We
hope it will become a good reading material for graduate students and a handy
textbook for professors to use in a topic course on nonlinear analysis.
We will introduce a series of wellknown typical methods in nonlinear
analysis. We will ﬁrst illustrate them by using simple examples, then we will
lead readers to the research front and explain how these methods can be
applied to solve practical problems through careful analysis of a series of
recent research articles, mostly the authors’ recent papers.
From our experience, roughly speaking, in applying these commonly used
methods, there are usually two aspects:
i) general scheme (more or less universal) and,
ii) key points for each individual problem.
We will use our own research articles to show readers what the general
schemes are and what the key points are; and with the understandings of these
examples, we hope the readers will be able to apply these general schemes to
solve their own research problems with the discovery of their own key points.
In Chapter 1, we introduce basic knowledge on Sobolev spaces and some
commonly used inequalities. These are the major spaces in which we will seek
weak solutions of PDEs.
Chapter 2 shows how to ﬁnd weak solutions for some typical linear and
semilinear PDEs by using functional analysis methods, mainly, the calculus
of variations and critical point theories including the wellknown Mountain
Pass Lemma.
In Chapter 3, we establish W
2,p
a priori estimates and regularity. We prove
that, in most cases, weak solutions are actually diﬀerentiable, and hence are
classical ones. We will also present a Regularity Lifting Theorem. It is a simple
method to boost the regularity of solutions. It has been used extensively in
various forms in the authors’ previous works. The essence of the approach is
wellknown in the analysis community. However, the version we present here
VI Preface
contains some new developments. It is much more general and is very easy
to use. We believe that our method will provide convenient ways, for both
experts and nonexperts in the ﬁeld, in obtaining regularities. We will use
examples to show how this theorem can be applied to PDEs and to integral
equations.
Chapter 4 is a preparation for Chapter 5 and 6. We introduce Rieman
nian manifolds, curvatures, covariant derivatives, and Sobolev embedding on
manifolds.
Chapter 5 deals with semilinear elliptic equations arising from prescrib
ing Gaussian curvature, on both positively and negatively curved manifolds.
We show the existence of weak solutions in critical cases via variational ap
proaches. We also introduce the method of lower and upper solutions.
Chapter 6 focus on solving a problem from prescribing scalar curvature
on S
n
for n ≥ 3. It is in the critical case where the corresponding variational
functional is not compact at any level sets. To recover the compactness, we
construct a maxmini variational scheme. The outline is clearly presented,
however, the detailed proofs are rather complex. The beginners may skip
these proofs.
Chapter 7 is devoted to the study of various maximum principles, in partic
ular, the ones based on comparisons. Besides classical ones, we also introduce a
version of maximum principle at inﬁnity and a maximum principle for integral
equations which basically depends on the absolute continuity of a Lebesgue
integral. It is a preparation for the method of moving planes.
In Chapter 8, we introduce the method of moving planes and its variant–
the method of moving spheres–and apply them to obtain the symmetry, mono
tonicity, a priori estimates, and even nonexistence of solutions. We also intro
duce an integral form of the method of moving planes which is quite diﬀerent
than the one for PDEs. Instead of using local properties of a PDE, we exploit
global properties of solutions for integral equations. Many research examples
are illustrated, including the wellknown one by Gidas, Ni, and Nirenberg, as
well as a few from the authors’ recent papers.
Chapters 7 and 8 function as a selfcontained group. The readers who are
only interested in the maximum principles and the method of moving planes,
can skip the ﬁrst six chapters, and start directly from Chapter 7.
Yeshiva University Wenxiong Chen
University of Colorado Congming Li
Contents
1 Introduction to Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Approximation by Smooth Functions. . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.5 Compact Embedding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.6 Other Basic Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.6.1 Poincar´e’s Inequality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.6.2 The Classical HardyLittlewoodSobolev Inequality . . . . 36
2 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.1 Second Order Elliptic Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.3 Methods of Linear Functional Analysis . . . . . . . . . . . . . . . . . . . . . 44
2.3.1 Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.3.2 Some Basic Principles in Functional Analysis . . . . . . . . . 44
2.3.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 48
2.4 Variational Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.1 Semilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.2 Calculus of Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.3 Existence of Minimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.4.4 Existence of Minimizers Under Constraints . . . . . . . . . . . 55
2.4.5 Minimax Critical Points . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.4.6 Existence of a Minimax via the Mountain Pass Theorem 65
3 Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.1 W
2,p
a Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.1.1 Newtonian Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.1.2 Uniform Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2 W
2,p
Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2.1 The Case p ≥ 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
VIII Contents
3.2.2 The Case 1 < p < 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2.3 Other Useful Results Concerning the Existence,
Uniqueness, and Regularity . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3 Regularity Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.1 Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.2 Regularity Lifting Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.3.3 Applications to PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.3.4 Applications to Integral Equations . . . . . . . . . . . . . . . . . . . 103
4 Preliminary Analysis on Riemannian Manifolds . . . . . . . . . . . . 107
4.1 Diﬀerentiable Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2 Tangent Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3 Riemannian Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.1 Curvature of Plane Curves. . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.2 Curvature of Surfaces in R
3
. . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.3 Curvature on Riemannian Manifolds . . . . . . . . . . . . . . . . . 117
4.5 Calculus on Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.5.1 Higher Order Covariant Derivatives and the
LaplaceBertrami Operator . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.5.2 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature123
4.6 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5 Prescribing Gaussian Curvature on Compact 2Manifolds . . 127
5.1 Prescribing Gaussian Curvature on S
2
. . . . . . . . . . . . . . . . . . . . . 129
5.1.1 Obstructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.1.2 Variational Approach and Key Inequalities . . . . . . . . . . . 130
5.1.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2 Prescribing Gaussian Curvature on Negatively Curved
Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.2.1 Kazdan and Warner’s Results. Method of Lower and
Upper Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2.2 Chen and Li’s Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6 Prescribing Scalar Curvature on S
n
, for n ≥ 3 . . . . . . . . . . . . . 157
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.2 The Variational Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2.1 Estimate the Values of the Functional . . . . . . . . . . . . . . . . 162
6.2.2 The Variational Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.3 The A Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.1 In the Region Where R < 0 . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.2 In the Region Where R is Small . . . . . . . . . . . . . . . . . . . . . 172
6.3.3 In the Regions Where R > 0. . . . . . . . . . . . . . . . . . . . . . . . 174
Contents IX
7 Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.2 Weak Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.3 The Hopf Lemma and Strong Maximum Principles . . . . . . . . . . 183
7.4 Maximum Principles Based on Comparisons . . . . . . . . . . . . . . . . 188
7.5 A Maximum Principle for Integral Equations . . . . . . . . . . . . . . . . 191
8 Methods of Moving Planes and of Moving Spheres . . . . . . . . 195
8.1 Outline of the Method of Moving Planes . . . . . . . . . . . . . . . . . . . 197
8.2 Applications of the Maximum Principles Based on
Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
8.2.1 Symmetry of Solutions in a Unit Ball . . . . . . . . . . . . . . . 199
8.2.2 Symmetry of Solutions of −´u = u
p
in R
n
. . . . . . . . . . . 202
8.2.3 Symmetry of Solutions for −´u = e
u
in R
2
. . . . . . . . . . . 209
8.3 Method of Moving Planes in a Local Way . . . . . . . . . . . . . . . . . . 214
8.3.1 The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
8.3.2 The A Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8.4 Method of Moving Spheres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.4.1 The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.4.2 Necessary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.5 Method of Moving Planes in Integral Forms and Symmetry
of Solutions for an Integral Equation . . . . . . . . . . . . . . . . . . . . . . . 228
A Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.1 Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.1.1 Algebraic and Geometric Notations . . . . . . . . . . . . . . . . . . 235
A.1.2 Notations for Functions and Derivatives . . . . . . . . . . . . . . 235
A.1.3 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
A.1.4 Notations for Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
A.1.5 Notations from Riemannian Geometry . . . . . . . . . . . . . . . 239
A.2 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
A.3 CalderonZygmund Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . 243
A.4 The Contraction Mapping Principle . . . . . . . . . . . . . . . . . . . . . . . . 246
A.5 The ArzelaAscoli Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
1
Introduction to Sobolev Spaces
1.1 Distributions
1.2 Sobolev Spaces
1.3 Approximation by Smooth Functions
1.4 Sobolev Embeddings
1.5 Compact Embedding
1.6 Other Basic Inequalities
1.6.1 Poincar´e’s Inequality
1.6.2 The Classical HardyLittlewoodSobolev Inequality
In real life, people use numbers to quantify the surrounding objects. In
mathematics, the absolute value [a − b[ is used to measure the diﬀerence
between two numbers a and b. Functions are used to describe physical states.
For example, temperature is a function of time and place. Very often, we use
a sequence of approximate solutions to approach a real one; and how close
these solutions are to the real one depends on how we measure them, that
is, which metric we are choosing. Hence, it is imperative that we need not
only develop suitable metrics to measure diﬀerent states (functions), but we
must also study relationships among diﬀerent metrics. For these purposes,
the Sobolev Spaces were introduced. They have many applications in various
branches of mathematics, in particular, in the theory of partial diﬀerential
equations.
The role of Sobolev Spaces in the analysis of PDEs is somewhat similar
to the role of Euclidean spaces in the study of geometry. The fundamental
research on the relations among various Sobolev Spaces (Sobolev norms) was
ﬁrst carried out by G. Hardy and J. Littlewood in the 1910s and then by S.
Sobolev in the 1930s. More recently, many well known mathematicians, such
as H. Brezis, L. Caﬀarelli, A. Chang, E. Lieb, L. Nirenberg, J. Serrin, and
2 1 Introduction to Sobolev Spaces
E. Stein, have worked in this area. The main objectives are to determine if
and how the norms dominate each other, what the sharp estimates are, which
functions achieve these sharp estimates, and which functions are ‘critically’
related to these sharp estimates.
To ﬁnd the existence of weak solutions for partial diﬀerential equations,
especially for nonlinear partial diﬀerential equations, the method of functional
analysis, in particular, the calculus of variations, has seen more and more
applications.
To roughly illustrate this kind of application, let’s start oﬀ with a simple
example. Let Ω be a bounded domain in R
n
and consider the Dirichlet problem
associated with the Laplace equation:
−´u = f(x), x ∈ Ω
u = 0, x ∈ ∂Ω
(1.1)
To prove the existence of solutions, one may view −´as an operator acting
on a proper linear space and then apply some known principles of functional
analysis, for instance, the ‘ﬁxed point theory’ or ‘the degree theory,’ to derive
the existence. One may also consider the corresponding variational functional
J(u) =
1
2
Ω
[u[
2
d x −
Ω
f(x) ud x (1.2)
in a proper linear space, and seek critical points of the functional in that
space. This kind of variational approach is particularly powerful in dealing
with nonlinear equations. For example, in equation (1.1), instead of f(x), we
consider f(x, u). Then it becomes a semilinear equation. Correspondingly, we
have the functional
J(u) =
1
2
Ω
[u[
2
d x −
Ω
F(x, u) d x, (1.3)
where
F(x, u) =
u
0
f(x, s) d x
is an antiderivative of f(x, ). From the deﬁnition of the functional in either
(1.2) or (1.3), one can see that the function u in the space need not be second
order diﬀerentiable as required by the classical solutions of (1.1). Hence the
critical points of the functional are solutions of the problem only in the ‘weak’
sense. However, by an appropriate regularity argument, one may recover the
diﬀerentiability of the solutions, so that they can still satisfy equation (1.1)
in the classical sense.
In general, given a PDE problem, our intention is to view it as an op
erator A acting on some proper linear spaces X and Y of functions and to
symbolically write the equation as
Au = f (1.4)
1.1 Distributions 3
Then we can apply the general and elegant principles of linear or nonlinear
functional analysis to study the solvability of various equations involving A.
The result can then be applied to a broad class of partial diﬀerential equations.
We may also associate this operator with a functional J(), whose critical
points are the solutions of the equation (1.4). In this process, the key is to
ﬁnd an appropriate operator ‘A’ and appropriate spaces ‘X’ and ‘Y’. As we
will see later, the Sobolev spaces are designed precisely for this purpose and
will work out properly.
In solving a partial diﬀerential equation, in many cases, it is natural to
ﬁrst ﬁnd a sequence of approximate solutions, and then go on to investigate
the convergence of the sequence. The limit function of a convergent sequence
of approximate solutions would be the desired exact solution of the equation.
As we will see in the next few chapters, there are two basic stages in showing
convergence:
i) in a reﬂexive Banach space, every bounded sequence has a weakly con
vergent subsequence, and then
ii) by the compact embedding from a “stronger” Sobolev space into a
“weaker” one, the weak convergent sequence in the “stronger” space becomes
a strong convergent sequence in the “weaker” space.
Before going onto the details of this chapter, the readers may have a glance
at the introduction of the next chapter to gain more motivations for studying
the Sobolev spaces.
In Section 1.1, we will introduce the distributions, mainly the notion of
the weak derivatives, which are the elements of the Sobolev spaces.
We then deﬁne Sobolev spaces in Section 1.2.
To derive many useful properties in Sobolev spaces, it is not so convenient
to work directly on weak derivatives. Hence in Section 1.3, we show that, these
weak derivatives can be approximated by smooth functions. Then in the next
three sections, we can just work on smooth functions to establish a series of
important inequalities.
1.1 Distributions
As we have seen in the introduction, for the functional J(u) in (1.2) or (1.3),
what really involved were only the ﬁrst derivatives of u instead of the second
derivatives as required classically for a second order equation; and these ﬁrst
derivatives need not be continuous or even be deﬁned everywhere. Therefore,
via functional analysis approach, one can substantially weaken the notion of
partial derivatives. The advantage is that it allows one to divide the task of
ﬁnding “suitable” smooth solutions for a PDE into two major steps:
Step 1. Existence of Weak Solutions. One seeks solutions which are less
diﬀerentiable but are easier to obtain. It is very common that people use
“energy” minimization or conservation, sometimes use ﬁnite dimensional ap
proximation, to show the existence of such weak solutions.
4 1 Introduction to Sobolev Spaces
Step 2. Regularity Lifting. One uses various analysis tools to boost the
diﬀerentiability of the known weak solutions and try to show that they are
actually classical ones.
Both the existence of weak solutions and the regularity lifting have become
two major branches of today’s PDE analysis. Various functions spaces and
the related embedding theories are basic tools in both analysis, among which
Sobolev spaces are the most frequently used ones.
In this section, we introduce the notion of ‘weak derivatives,’ which will
be the elements of the Sobolev spaces.
Let R
n
be the ndimensional Euclidean space and Ω be an open connected
subset in R
n
. Let D(Ω) = C
∞
0
(Ω) be the linear space of inﬁnitely diﬀeren
tiable functions with compact support in Ω. This is called the space of test
functions on Ω.
Example 1.1.1 Assume
B
R
(x
o
) := ¦x ∈ R
n
[ [x −x
o
[ < R¦ ⊂ Ω,
then for any r < R, the following function
f(x) =
exp¦
1
x−x
o

2
−r
2
¦ for [x −x
o
[ < r
0 elsewhere
is in C
∞
0
(Ω).
Example 1.1.2 Assume ρ ∈ C
∞
0
(R
n
), u ∈ L
p
(Ω), and supp u ⊂ K ⊂⊂ Ω.
Let
u
(x) := ρ
∗ u :=
R
n
1
n
ρ(
x −y
)u(y)dy.
Then u
∈ C
∞
0
(Ω) for suﬃciently small.
Let L
p
loc
(Ω) be the space of p
th
power locally summable functions for
1 ≤ p ≤ ∞. Such functions are Lebesque measurable functions f deﬁned on
Ω and with the property that
f
L
p
(K)
:=
K
[f(x)[
p
dx
1/p
< ∞
for every compact subset K in Ω.
Assume that u is a C
1
function in Ω and φ ∈ D(Ω). Through integration
by parts, we have, for i = 1, 2, , n,
Ω
∂u
∂x
i
φ(x)dx = −
Ω
u(x)
∂φ
∂x
i
dx. (1.5)
Now if u is not in C
1
(Ω), then
∂u
∂x
i
does not exist. However, the integral on
the right hand side of (1.5) still makes sense if u is a locally L
1
summable
1.1 Distributions 5
function. For this reason, we deﬁne the ﬁrst derivative
∂u
∂x
i
weakly as the
function v(x) that satisﬁes
Ω
v(x)φ(x)dx = −
Ω
u
∂φ
∂x
i
dx
for all functions φ ∈ D(Ω).
The same idea works for higher partial derivatives. Let α = (α
1
, α
2
, , α
n
)
be a multiindex of order
k := [α[ := α
1
+α
2
+ +α
n
.
For u ∈ C
k
(Ω), the regular αth partial derivative of u is
D
α
u =
∂
α1
∂
α2
∂
αn
u
∂x
α1
1
∂x
α2
2
∂x
αn
n
.
Given any test function φ ∈ D(Ω), through a straight forward integration by
parts k times, we arrive at
Ω
D
α
uφ(x) d x = (−1)
α
Ω
uD
α
φd x. (1.6)
There is no boundary term because φ vanishes near the boundary.
Now if u is not k times diﬀerentiable, the left hand side of (1.6) makes no
sense. However the right hand side is valid for functions u with much weaker
diﬀerentiability, i.e. u only need to be locally L
1
summable. Thus it is natural
to choose those functions v that satisfy (1.6) as the weak representatives of
D
α
u.
Deﬁnition 1.1.1 For u, v ∈ L
1
loc
(Ω), we say that v is the αth weak derivative
of u, written
v = D
α
u
provided
Ω
v(x) φ(x) d x = (−1)
α
Ω
uD
α
φd x
for all test functions φ ∈ D(Ω).
Example 1.1.3 For n = 1 and Ω = (−π, 1), let
u(x) =
cos x if −π < x ≤ 0
1 −x if 0 < x < 1.
Then its weak derivative u
(x) can be represented by
v(x) =
−sin x if −π < x ≤ 0
−1 if 0 < x < 1.
6 1 Introduction to Sobolev Spaces
To see this, we verify, for any φ ∈ D(Ω), that
1
−π
u(x)φ
(x)dx = −
1
−π
v(x)φ(x)dx. (1.7)
In fact, through integration by parts, we have
1
−π
u(x)φ
(x)dx =
0
−π
cos xφ
(x)dx +
1
0
(1 −x)φ
(x)dx
=
0
−π
sin xφ(x)dx +φ(0) −φ(0) +
1
0
φ(x)dx
= −
1
−π
v(x)φ(x)dx.
In this example, one can see that, in the classical sense, the function u is
not diﬀerentiable at x = 0. Since the weak derivative is deﬁned by integrals,
one may alter the values of the weak derivative v(x) in a set of measure zero,
and (1.7) still holds. However, it should be unique up to a set of measure zero.
Lemma 1.1.1 (Uniqueness of Weak Derivatives). If v and w are the weak
αth partial derivatives of u, D
α
u, then v(x) = w(x) almost everywhere.
Proof. By the deﬁnition of the weak derivatives, we have
Ω
v(x)φ(x)dx = (−1)
α
Ω
u(x)D
α
φdx =
Ω
w(x)φ(x)dx
for any φ ∈ D(Ω). It follows that
Ω
(v(x) −w(x))φ(x)dx = 0 ∀φ ∈ D(Ω).
Therefore, we must have
v(x) = w(x) almost everywhere .
¯ .
From the deﬁnition, we can view a weak derivative as a linear functional
acting on the space of test functions D(Ω), and we call it a distribution. More
generally, we have
Deﬁnition 1.1.2 A distribution is a continuous linear functional on D(Ω).
The linear space of distributions or the generalized functions on Ω, denoted
by D
(Ω), is the collection of all continuous linear functionals on D(Ω).
1.1 Distributions 7
Here, the continuity of a functional T on D(Ω) means that, for any se
quence ¦φ
k
¦ ⊂ D(Ω) with φ
k
→φ in D(Ω), we have
T(φ
k
)→T(φ), as k→∞;
and we say that φ
k
→φ in D(Ω) if
a) there exists K ⊂⊂ Ω such that suppφ
k
, suppφ ⊂ K, and
b) for any α, D
α
φ
k
→D
α
φ uniformly as k→∞.
The most important and most commonly used distributions are locally
summable functions. In fact, for any f ∈ L
p
loc
(Ω) with 1 ≤ p ≤ ∞, consider
T
f
(φ) =
Ω
f(x)φ(x)dx.
It is easy to verify that T
f
() is a continuous linear functional on D(Ω), and
hence it is a distribution.
For any distribution µ, if there is an f ∈ L
1
loc
(Ω) such that
µ(φ) = T
f
(φ), ∀φ ∈ D(Ω),
then we say that µ is (or can be realized as ) a locally summable function and
identify it as f.
An interesting example of a distribution that is not a locally summable
function is the wellknown Dirac delta function. Let x
o
be a point in Ω. For
any φ ∈ D(Ω), the delta function at x
o
can be deﬁned as
δ
x
o(φ) = φ(x
o
).
Hence it is a distribution. However, one can show that such a delta function is
not locally summable. It is not a function at all. This kind of “function” has
been used widely and so successfully by physicists and engineers, who often
simply view δ
x
o as
δ
x
o(x) =
0, for x = x
o
∞, for x = x
o
.
Surprisingly, such a delta function is the derivative of some function in the
following distributional sense. To explain this, let Ω = (−1, 1), and let
f(x) =
0, for x < 0
1, for x ≥ 0.
Then, we have
−
1
−1
f(x)φ
(x)dx = −
1
0
φ
(x)dx = φ(0) = δ
0
(φ).
Compare this with the deﬁnition of weak derivatives, we may regard δ
0
(x) as
f
(x) in the sense of distributions.
8 1 Introduction to Sobolev Spaces
1.2 Sobolev Spaces
Now suppose given a function f ∈ L
p
(Ω), we want to solve the partial diﬀer
ential equation
∆u = f(x)
in the sense of weak derivatives. Naturally, we would seek solutions u, such
that ∆u are in L
p
(Ω). More generally, we would start from the collections of
all distributions whose second weak derivatives are in L
p
(Ω).
In a variational approach, as we have seen in the introduction, to seek
critical points of the functional
1
2
Ω
[u[
2
dx −
Ω
F(x, u)dx,
a natural set of functions we start with is the collection of distributions whose
ﬁrst weak derivatives are in L
2
(Ω). More generally, we have
Deﬁnition 1.2.1 The Sobolev space W
k,p
(Ω) (k ≥ 0 and p ≥ 1) is the col
lection of all distributions u on Ω such that for all multiindex α with [α[ ≤ k,
D
α
u can be realized as a L
p
function on Ω. Furthermore, W
k,p
(Ω) is a Ba
nach space with the norm
[[u[[ :=
¸
¸
α≤k
Ω
[D
α
u(x)[
p
dx
¸
1/p
.
In the special case when p = 2, it is also a Hilbert space and we usually
denote it by H
k
(Ω).
Deﬁnition 1.2.2 W
k,p
0
(Ω) is the closure of C
∞
0
(Ω) in W
k,p
(Ω).
Intuitively, W
k,p
0
(Ω) is the space of functions whose up to (k −1)th order
derivatives vanish on the boundary.
Example 1.2.1 Let Ω = (−1, 1). Consider the function f(x) = [x[
β
. For
0 < β < 1, it is obviously not diﬀerentiable at the origin. However for any
1 ≤ p <
1
1 −β
, it is in the Sobolev space W
1,p
(Ω). More generally, let Ω be
an open unit ball centered at the origin in R
n
, then the function [x[
β
is in
W
1,p
(Ω) if and only if
β > 1 −
n
p
. (1.8)
To see this, we ﬁrst calculate
f
xi
(x) =
βx
i
[x[
2−β
, for x = 0,
and hence
1.3 Approximation by Smooth Functions 9
[f(x)[ =
β
[x[
1−β
. (1.9)
Fix a small > 0. Then for any φ ∈ D(Ω), by integration by parts, we
have
Ω\B(0)
f
xi
(x)φ(x)dx = −
Ω\B(0)
f(x)φ
xi
(x)dx +
∂B(0)
fφν
i
dS.
where ν = (ν
1
, ν
2
, , ν
n
) is an inwardnormal vector on ∂B
(0).
Now, under the condition that β > 1 −n/p, f
xi
is in L
p
(Ω) ⊂ L
1
(Ω), and
[
∂B(0)
fφν
i
dS[ ≤ C
n−1+β
→0, as →0.
It follows that
Ω
[x[
β
φ
xi
(x)dx = −
Ω
βx
i
[x[
2−β
φ(x)dx.
Therefore, the weak ﬁrst partial derivatives of [x[
β
are
βx
i
[x[
2−β
, i = 1, 2, , n.
Moreover, from (1.9), one can see that [f[ is in L
p
(Ω) if and only if
β > 1 −
n
p
.
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance, proving inequalities, it may feel
inconvenient and cumbersome to manage the weak derivatives directly based
on its deﬁnition. To get around, in this section, we will show that any function
in a Sobolev space can be approached by a sequence of smooth functions. In
other words, the smooth functions are dense in Sobolev spaces. Based on this,
when deriving many useful properties of Sobolev spaces, we can just work on
smooth functions and then take limits.
At the end of the section, for more convenient application of the approxi
mation results, we introduce an Operator Completion Theorem. We also prove
an Extension Theorem. Both theorems will be used frequently in the next few
sections.
The idea in approximation is based on molliﬁers. Let
j(x) =
c e
1
x
2
−1
if [x[ < 1
0 if [x[ ≥ 1.
One can verify that
10 1 Introduction to Sobolev Spaces
j(x) ∈ C
∞
0
(B
1
(0)).
Choose the constant c, such that
R
n
j(x)dx = 1.
For each > 0, deﬁne
j
(x) =
1
n
j(
x
).
Then obviously
R
n
j
(x)dx = 1, ∀ > 0.
One can also verify that j
∈ C
∞
0
(B
(0)), and
lim
→0
j
(x) =
0 for x = 0
∞ for x = 0.
The above observations suggest that the limit of j
(x) may be viewed as a
delta function, and from the wellknown property of the delta function, we
would naturally expect, for any continuous function f(x) and for any point
x ∈ Ω,
(J
f)(x) :=
Ω
j
(x −y)f(y)dy→f(x), as →0. (1.10)
More generally, if f(x) is only in L
p
(Ω), we would expect (1.10) to hold almost
everywhere. We call j
(x) a molliﬁer and (J
f)(x) the molliﬁcation of f(x).
We will show that, for each > 0, J
f is a C
∞
function, and as →0,
J
f→f in W
k,p
.
Notice that, actually
(J
f)(x) =
B(x)∩Ω
j
(x −y)f(y)dy,
hence, in order that J
f(x) to approximate f(x) well, we need B
(x) to be
completely contained in Ω to ensure that
B(x)∩Ω
j
(x −y)dy = 1
(one of the important property of delta function). Equivalently, we need x to
be in the interior of Ω. For this reason, we ﬁrst prove a local approximation
theorem.
Theorem 1.3.1 (Local approximation by smooth functions).
For any f ∈ W
k,p
(Ω), J
f ∈ C
∞
(R
n
) and J
f → f in W
k,p
loc
(Ω) as →0.
1.3 Approximation by Smooth Functions 11
Then, to extend this result to the entire Ω, we will choose inﬁnitely many
open sets O
i
, i = 1, 2, , each of which has a positive distance to the bound
ary of Ω, and whose union is the whole Ω. Based on the above theorem, we are
able to approximate a W
k,p
(Ω) function on each O
i
by a sequence of smooth
functions. Combining this with a partition of unity, and a cut oﬀ function if
Ω is unbounded, we will then prove
Theorem 1.3.2 (Global approximation by smooth functions).
For any f ∈ W
k,p
(Ω), there exists a sequence of functions ¦f
m
¦ ⊂
C
∞
(Ω) ∩ W
k,p
(Ω) such that f
m
→ f in W
k,p
(Ω) as m→∞.
Theorem 1.3.3 (Global approximation by smooth functions up to the bound
ary).
Assume that Ω is bounded with C
1
boundary ∂Ω, then for any f ∈
W
k,p
(Ω), there exists a sequence of functions ¦f
m
¦ ⊂ C
∞
(Ω) = C
∞
(Ω) ∩
W
k,p
(Ω) such that f
m
→ f in W
k,p
(Ω) as m→∞.
When Ω is the entire space R
n
, the approximation by C
∞
or by C
∞
0
functions are essentially the same. We have
Theorem 1.3.4 W
k,p
(R
n
) = W
k,p
0
(R
n
). In other words, for any f ∈ W
k,p
(R
n
),
there exists a sequence of functions ¦f
m
¦ ⊂ C
∞
0
(R
n
), such that
f
m
→f, as m→∞; in W
k,p
(R
n
).
Proof of Theorem 1.3.1
We prove the theorem in three steps.
In step 1, we show that J
f ∈ C
∞
(R
n
) and
J
f
L
p
(Ω)
≤ f
L
p
(Ω)
.
From the deﬁnition of J
f(x), we can see that it is well deﬁned for all x ∈ R
n
,
and it vanishes if x is of distance away from Ω. Here and in the following,
for simplicity of argument, we extend f to be zero outside of Ω.
In step 2, we prove that, if f is in L
p
(Ω),
(J
f)→f in L
p
loc
(Ω).
We ﬁrst verify this for continuous functions and then approximate L
p
functions
by continuous functions.
In step 3, we reach the conclusion of the Theorem. For each f ∈ W
k,p
(Ω)
and [α[ ≤ k, D
α
f is in L
p
(Ω). Then from the result in Step 2, we have
J
(D
α
f)→D
α
f in L
loc
(Ω).
Hence what we need to verify is
12 1 Introduction to Sobolev Spaces
D
α
(J
f)(x) = J
(D
α
f)(x).
As the readers will notice, the arguments in the last two steps only work
in any compact subset of Ω.
Step 1.
Let e
i
= (0, , 0, 1, 0, , 0) be the unit vector in the x
i
direction.
Fix > 0 and x ∈ R
n
. By the deﬁnition of J
f, we have, for [h[ < ,
(J
f)(x +he
i
) −(J
f)(x)
h
=
B2(x)∩Ω
j
(x +he
i
−y) −j
(x −y)
h
f(y)dy.
(1.11)
Since as h→0,
j
(x +he
i
−y) −j
(x −y)
h
→
∂j
(x −y)
∂x
i
uniformly for all y ∈ B
2
(x) ∩ Ω, we can pass the limit through the integral
sign in (1.11) to obtain
∂(J
f)(x)
∂x
i
=
Ω
∂j
(x −y)
∂x
i
f(y)dy.
Similarly, we have
D
α
(J
f)(x) =
Ω
D
α
x
j
(x −y)f(y)dy.
Noticing that j
() is inﬁnitely diﬀerentiable, we conclude that J
f is also
inﬁnitely diﬀerentiable.
Then we derive
J
f
L
p
(Ω)
≤ f
L
p
(Ω)
. (1.12)
By the H¨older inequality, we have
[(J
f)(x)[ = [
Ω
j
p−1
p
(x −y)j
1
p
(x −y)f(y)dy[
≤
Ω
j
(x −y)dy
p−1
p
Ω
j
(x −y)[f(y)[
p
dy
1
p
≤
B(x)
j
(x −y)[f(y)[
p
dy
1
p
.
It follows that
Ω
[(J
f)(x)[
p
dx ≤
Ω
B(x)
j
(x −y)[f(y)[
p
dy
dx
≤
Ω
[f(y)[
p
B(y)
j
(x −y)dx
dy
≤
Ω
[f(y)[
p
dy.
1.3 Approximation by Smooth Functions 13
This veriﬁes (1.12).
Step 2.
We prove that, for any compact subset K of Ω,
J
f −f
L
p
(K)
→0, as →0. (1.13)
We ﬁrst show this for a continuous function f. By writing
(J
f)(x) −f(x) =
B(0)
j
(y)[f(x −y) −f(x)]dy,
we have
[(J
f)(x) −f(x)[ ≤ max
x∈K,y<
[f(x −y) −f(x)[
B(0)
j
(y)dy
≤ max
x∈K,y<
[f(x −y) −f(x)[.
Due to the continuity of f and the compactness of K, the last term in the above
inequality tends to zero uniformly as →0. This veriﬁes (1.13) for continuous
functions.
For any f in L
p
(Ω), and given any δ > 0, choose a a continuous function
g, such that
f −g
L
p
(Ω)
<
δ
3
. (1.14)
This can be derived from the wellknown fact that any L
p
function can be
approximated by a simple function of the form
¸
k
j=1
a
j
χ
j
(x), where χ
j
is the
characteristic function of some measurable set A
j
; and each simple function
can be approximated by a continuous function.
For the continuous function g, (1.13) infers that for suﬃciently small , we
have
J
g −g
L
p
(K)
<
δ
3
.
It follows from this and (1.14) that
J
f −f
L
p
(K)
≤ J
f −J
g
L
p
(K)
+J
g −g
L
p
(K)
+g −f
L
p
(K)
≤ 2f −g
L
p
(K)
+J
g −g
L
p
(K)
≤ 2
δ
3
+
δ
3
= δ.
This proves (1.13).
Step 3.
Now assume that f ∈ W
k,p
(Ω). Then for any α with [α[ ≤ k, we have
D
α
f ∈ L
p
(Ω). We show that
14 1 Introduction to Sobolev Spaces
D
α
(J
f)→D
α
f in L
p
(K). (1.15)
By the result in Step 2, we have
J
(D
α
f)→D
α
f in L
p
(K).
Now what left to verify is, for suﬃciently small ,
D
α
(J
f)(x) = J
(D
α
f)(x), ∀x ∈ K.
To see this, we ﬁx a point x in K. Choose <
1
2
dist(K, ∂Ω), then any
point y in the ball B
(x) is in the interior of Ω. Consequently,
D
α
(J
f)(x) =
Ω
D
α
x
j
(x −y)f(y)dy
=
B(x)
D
α
x
j
(x −y)f(y)dy
= (−1)
α
B(x)
D
α
y
j
(x −y)f(y)dy
=
B(x)
j
(x −y)D
α
f(y)dy
=
Ω
j
(x −y)D
α
f(y)dy.
Here we have used the fact that j
(x−y) and all its derivatives are supported
in B
(x) ⊂ Ω.
This completes the proof of the Theorem 1.3.1. ¯ .
The Proof of Theorem 1.3.2
Step 1. For a Bounded Region Ω.
Let
Ω
i
= ¦x ∈ Ω [ dist(x, ∂Ω) >
1
i
¦, i = 1, 2, 3,
Write O
i
= Ω
i+3
`
¯
Ω
i+1
. Choose some open set O
0
⊂⊂ Ω so that Ω = ∪
∞
i=0
O
i
.
Choose a smooth partition of unity ¦η
i
¦
∞
i=0
associated with the open sets
¦O
i
¦
∞
i=0
,
0 ≤ η
i
(x) ≤ 1, η
i
∈ C
∞
0
(O
i
)
¸
∞
i=0
η
i
(x) = 1, x ∈ Ω.
Given any function f ∈ W
k,p
(Ω), obvious η
i
f ∈ W
k,p
(Ω) and supp(η
i
f) ⊂ O
i
.
Fix a δ > 0. Choose
i
> 0 so small that f
i
:= J
i
(η
i
f) satisﬁes
f
i
−η
i
f
W
k,p
(Ω)
≤
δ
2
i+1
i = 0, 1,
suppf
i
⊂ (Ω
i+4
`
¯
Ω
i
) i = 1, 2,
(1.16)
1.3 Approximation by Smooth Functions 15
Set g =
¸
∞
i=0
f
i
. Then g ∈ C
∞
(Ω), because each f
i
is in C
∞
(Ω), and for
each open set O ⊂⊂ Ω, there are at most ﬁnitely many nonzero terms in the
sum. To see that g ∈ W
k,p
(Ω), we write
g −f =
∞
¸
i=0
(f
i
−η
i
f).
It follows from (1.16) that
∞
¸
i=0
f
i
−η
i
f
W
k,p
(Ω)
≤ δ
∞
¸
i=0
1
2
i+1
= δ.
From here we can see that the series
¸
∞
i=0
(f
i
−η
i
f) converges in W
k,p
(Ω) (see
Exercise 1.3.1 below), hence (g − f) ∈ W
k,p
(Ω), and therefore g ∈ W
k,p
(Ω).
Moreover, from the above inequality, we have
g −f
W
k,p
(Ω)
≤ δ.
Since δ > 0 is any number, we complete step 1.
Remark 1.3.1 In the above argument, neither
¸
f
i
nor
¸
η
i
f converge, but
their diﬀerence converges. Although each partial sum of
¸
f
i
is in C
∞
0
(Ω),
the inﬁnite series
¸
f
i
does not converge to g in W
k,p
(Ω). Then how did we
prove that g ∈ W
k,p
(Ω)? We showed that the diﬀerence (g−f) is in W
k,p
(Ω).
Exercise 1.3.1 .
Let X be a Banach space and v
i
∈ X. Show that the series
¸
∞
i
v
i
converges
in X if
¸
∞
i
v
i
 < ∞.
Hint: Show that the partial sum is a Cauchy sequence.
Step 2. For Unbounded Region Ω.
Given any δ > 0, since f ∈ W
k,p
(Ω), there exist R > 0, such that
f
W
k,p
(Ω\B
R−2
(0))
≤ δ. (1.17)
Choose a cut oﬀ function φ ∈ C
∞
(R
n
) satisfying
φ(x) =
1, x ∈ B
R−2
(0),
0, x ∈ R
n
` B
R
(0);
and [D
α
φ(x)[ ≤ 1 ∀x ∈ R
n
, ∀[α[ ≤ k.
Then by (1.17), it is easy to verify that, there exists a constant C, such that
φf −f
W
k,p
(Ω)
≤ Cδ. (1.18)
Now in the bounded domain Ω ∩ B
R
(0), by the argument in Step 1, there is
a function g ∈ C
∞
(Ω ∩ B
R
(0)), such that
16 1 Introduction to Sobolev Spaces
g −f
W
k,p
(Ω∩B
R
(0))
≤ δ. (1.19)
Obviously, the function φg is in C
∞
(Ω), and by (1.18) and (1.19),
φg −f
W
k,p
(Ω)
≤ φg −φf
W
k,p
(Ω)
+φf −f
W
k,p
(Ω)
≤ C
1
g −f
W
k,p
(Ω∩B
R
(0))
+Cδ
≤ (C
1
+C)δ.
This completes the proof of the Theorem 1.3.2. ¯ .
The Proof of Theorem 1.3.3
We will still use the molliﬁers to approximate a function. As compared to
the proofs of the previous theorem, the main diﬃculty here is that for a point
on the boundary, there is no room to mollify. To circumvent this, we cover
Ω with ﬁnitely many open sets, and on each set that covers the boundary
layer, we will translate the function a little bit inward, so that there is room
to mollify within Ω. Then we will again use the partition of unity to complete
the proof.
Step 1. Approximating in a Small Set Covering ∂Ω.
Let x
o
be any point on ∂Ω. Since ∂Ω is C
1
, we can make a C
1
change
of coordinates locally, so that in the new coordinates system (x
1
, , x
n
), we
can express, for a suﬃciently small r > 0,
B
r
(x
o
) ∩ Ω = ¦x ∈ B
r
(x
o
) [ x
n
> φ(x
1
, x
n−1
)¦
with some C
1
function Φ.
Set
D = Ω ∩ B
r/2
(x
o
).
Shift every point x ∈ D in x
n
direction a units, deﬁne
x
= x +ae
n
.
and
D
= ¦x
[ x ∈ D¦.
This D
is obtained by shifting D toward the inside of Ω by a units. Choose
a suﬃciently large, so that the ball B
(x) lies in Ω ∩ B
r
(x
o
) for all x ∈ D
and for all small > 0. Now, there is room to mollify a given W
k,p
function
f on D
within Ω. More precisely, we ﬁrst translate f a distance in the x
n
direction to become f
(x) = f(x
), then mollify it. We claim that
J
f
→f in W
k,p
(D).
Actually, for any multiindex [α[ ≤ k, we have
D
α
(J
f
) −D
α
f
L
p
(D)
≤ D
α
(J
f
) −D
α
f

L
p
(D)
+D
α
f
−D
α
f
L
p
(D)
.
1.3 Approximation by Smooth Functions 17
A similar argument as in the proof of Theorem 1.3.1 implies that the ﬁrst
term on the right hand side goes to zero as →0; while the second term also
vanishes in the process due to the continuity of the translation in the L
p
norm.
Step 2. Applying the Partition of Unity.
Since ∂Ω is compact, we can ﬁnd ﬁnitely many such sets D, call them D
i
,
i = 1, 2, , N, the union of which covers ∂Ω. Given δ > 0, from the argument
in Step 1, for each D
i
, there exists g
i
∈ C
∞
(
¯
D
i
), such that
g
i
−f
W
k,p
(Di)
≤ δ. (1.20)
Choose an open set D
0
⊂⊂ Ω such that Ω ⊂ ∪
N
i=0
D
i
, and select a function
g
0
∈ C
∞
(
¯
D
0
) such that
g
0
−f
W
k,p
(D0)
≤ δ. (1.21)
Let ¦η
i
¦ be a smooth partition of unity subordinated to the open sets
¦D
i
¦
N
i=0
. Deﬁne
g =
N
¸
i=0
η
i
g
i
.
Then obviously g ∈ C
∞
(
¯
Ω), and f =
¸
N
i=0
η
i
f. Similar to the proof of Theo
rem 1.3.1, it follows from (1.20) and (1.21) that, for each [α[ ≤ k
D
α
g −D
α
f
L
p
(Ω)
≤
N
¸
i=0
D
α
(η
i
g
i
) −D
α
(η
i
f)
L
p
(Di)
≤ C
N
¸
i=0
g
i
−f
W
k,p
(Di)
= C(N + 1)δ.
This completes the proof of the Theorem 1.3.3. ¯ .
The Proof of Theorem 1.3.4
Let φ(r) be a C
∞
0
cut oﬀ function such that
φ(r) =
1 , for 0 ≤ r ≤ 1;
between 0 and 1 , for 1 < r < 2;
0 , for r ≥ 2.
Then by a direct computation, one can show that
φ(
[x[
R
)f(x) −f(x)
W
k,p
(R
n
)
→0, as R→∞. (1.22)
Thus, there exists a sequence of numbers ¦R
m
¦ with R
m
→∞, such that for
18 1 Introduction to Sobolev Spaces
g
m
(x) := φ(
[x[
R
m
)f(x),
we have
g
m
−f
W
k,p
(R
n
)
≤
1
m
. (1.23)
On the other hand, from the Approximation Theorem 1.3.3, for each ﬁxed
m,
J
(g
m
)→g
m
, as →0, in W
k,p
(R
n
).
Hence there exist
m
, such that for
f
m
:= J
m
(g
m
),
we have
f
m
−g
m

W
k,p
(R
n
)
≤
1
m
. (1.24)
Obviously, each function f
m
is in C
∞
0
(R
n
), and by (1.23) and (1.24),
f
m
−f
W
k,p
(R
n
)
≤ f
m
−g
m

W
k,p
(R
n
)
+g
m
−f
W
k,p
(R
n
)
≤
2
m
→0, as m→∞.
This completes the proof of Theorem 1.3.4. ¯ .
Now we have proved all the four Approximation Theorems, which show
that smooth functions are dense in Sobolev spaces W
k,p
. In other words, W
k,p
is the completion of C
k
under the norm  
W
k,p. Later, particularly in the
next three sections, when we derive various inequalities in Sobolev spaces, we
can just ﬁrst work on smooth functions and then extend them to the whole
Sobolev spaces. In order to make such extensions more conveniently, without
going through approximation process in each particular space, we prove the
following Operator Completion Theorem in general Banach spaces.
Theorem 1.3.5 Let D be a dense linear subspace of a normed space X. Let
Y be a Banach space. Assume
T : D → Y
is a bounded linear map. Then there exists an extension
¯
T of T from D to the
whole space X, such that
¯
T is a bounded linear operator from X to Y ,

¯
T = T and
¯
Tx = Tx ∀ x ∈ D.
Proof. Given any element x
o
∈ X, since D is dense in X, there exists a
sequence ¦x
i
¦ ∈ D, such that
x
i
→x
o
, as i→∞.
1.3 Approximation by Smooth Functions 19
It follows that
Tx
i
−Tx
j
 ≤ Tx
i
−x
j
→0, as i, j→∞.
This implies that ¦Tx
i
¦ is a Cauchy sequence in Y . Since Y is a Banach space,
¦Tx
i
¦ converges to some element y
o
in Y . Let
¯
Tx
o
= y
o
. (1.25)
To see that (1.25) is well deﬁned, suppose there is another sequence ¦x
i
¦ that
converges to x
o
in X and Tx
i
→y
1
in Y . Then
y
1
−y
o
 = lim
i→∞
Tx
i
−Tx
i
 ≤ Clim
i→∞
x
i
−x
i
 = 0.
Now, for x ∈ D, deﬁne
¯
Tx = Tx; and for other x ∈ X, deﬁne
¯
Tx by (1.25).
Obviously,
¯
T is a linear operator. Moreover

¯
Tx
o
 = lim
i→∞
Tx
i
 ≤ lim
i→∞
Tx
i
 = Tx
o
.
Hence
¯
T is also bounded. This completes the proof. ¯ .
As a direct application of this Operator Completion Theorem, we prove
the following Extension Theorem.
Theorem 1.3.6 Assume that Ω is bounded and ∂Ω is C
k
, then for any open
set O ⊃ Ω, there exists a bounded linear operator E : W
k,p
(Ω) → W
k,p
0
(O)
such that Eu = u a.e. in Ω.
Proof. To deﬁne the extension operator E, we ﬁrst work on functions u ∈
C
k
(
¯
Ω). Then we can apply the Operator Completion Theorem to extend E
to W
k,p
(Ω), since C
k
(
¯
Ω) is dense in W
k,p
(Ω) (see Theorem 1.3.3). From this
density, it is easy to show that, for u ∈ W
k,p
(Ω),
E(u)(x) = u(x) almost everywhere on Ω
based on
E(u)(x) = u(x) on
¯
Ω ∀ u ∈ C
k
(
¯
Ω).
Now we prove the theorem for u ∈ C
k
(
¯
Ω). We deﬁne E(u) in the following
two steps.
Step 1. The special case when Ω is a half ball
B
+
r
(0) := ¦x = (x
, x
n
) ∈ R
n
[ [x[ < 1, x
n
> 0¦.
Assume u ∈ C
k
(B
+
r
(0)). We extend u to be a C
k
function on the whole
ball B
r
(0). Deﬁne
¯ u(x
, x
n
) =
u(x
, x
n
) , x
n
≥ 0
¸
k
i=0
a
i
u(x
, −λ
i
x
n
) , x
n
< 0.
20 1 Introduction to Sobolev Spaces
To guarantee that all the partial derivatives
∂
j
¯ u(x
, x
n
)
∂x
j
n
up to the order k are
continuous across the hyper plane x
n
= 0, we ﬁrst pick λ
i
, such that
0 < λ
0
< λ
1
< < λ
k
< 1.
Then solve the algebraic system
k
¸
i=0
a
i
λ
j
i
= 1 , j = 0, 1, , k,
to determine a
0
, a
1
, , a
k
. Since the coeﬃcient determinant det(λ
j
i
) is not
zero, and hence the system has a unique solution. One can easily verify that,
for such λ
i
and a
i
, the extended function ¯ u is in C
k
(B
r
(0)).
Step 2. Reduce the general domains to the case of half ball covered in Step
1 via domain transformations and a partition of unity.
Let O be an open set containing
¯
Ω. Given any x
o
∈ ∂Ω, there exists a
neighborhood U of x
o
and a C
k
transformation φ : U→R
n
which satisﬁes
that φ(x
o
) = 0, φ(U ∩ ∂Ω) lies on the hyper plane x
n
= 0, and φ(U ∩ Ω) is
contained in R
n
+
. Then there exists an r
o
> 0, such that B
ro
(0) ⊂⊂ φ(U),
D
x
o := φ
−1
(B
ro
(0)) is an open set containing x
o
, and φ ∈ C
k
(D
x
o). Choose
r
o
suﬃciently small so that D
x
o ⊂ O. All such D
x
o and Ω forms an open
covering of
¯
Ω, hence there exist a ﬁnite subcovering
D
0
:= Ω, D
1
, , D
m
and a corresponding partition of unity
η
0
, η
1
, , η
m
,
such that
η
i
∈ C
∞
0
(D
i
) i = 0, 1, , m
and
m
¸
i=0
η
i
(x) = 1 ∀ x ∈ Ω.
Let φ
i
be the mapping associated with D
i
as described in Step 1. And let
˜ u
i
(y) = u(φ
−1
i
(y) be the function deﬁned on the half ball
B
+
ri
(0) = φ
i
(D
i
∩
¯
Ω) , i = 1, 2, m.
From Step 1, each ˜ u
i
(y) can be extended as a C
k
function ¯ u
i
(y) onto the
whole ball B
ri
(0).
Now, we can deﬁne the extension of u from Ω to its neighborhood O as
E(u) =
m
¸
i=1
η
i
(x)¯ u
i
(φ
i
(x)) +η
0
u(x).
1.4 Sobolev Embeddings 21
Obviously, E(u) ∈ C
∞
0
(O), and
E(u)
W
k,p
(O)
≤ Cu
W
k,p
(Ω)
.
Moreover, for any x ∈
¯
Ω, we have
E(u)(x) =
m
¸
i=1
η
i
(x)¯ u
i
(φ
i
(x)) +η
0
(x)u(x) =
m
¸
i=1
η
i
(x)˜ u
i
(φ
i
(x)) +η
0
(x)u(x)
=
m
¸
i=1
η
i
(x)u(φ
−1
i
(φ
i
(x))) +η
0
(x)u(x) =
m
¸
i=1
η
i
(x)u(x) +η
0
(x)u(x)
=
m
¸
i=0
η
i
(x)
u(x) = u(x).
This completes the proof of the Extension Theorem. ¯ .
Remark 1.3.2 Notice that the Extension Theorem actually implies an im
proved version of the Approximation Theorem under stronger assumption on
∂Ω (be C
k
instead of C
1
). From the Extension Theorem, one can derive im
mediately
Corollary 1.3.1 Assume that Ω is bounded and ∂Ω is C
k
. Let O be an open
set containing
¯
Ω. Let
O
:= ¦x ∈ R
n
[ dist(x, O) ≤ ¦.
Then the linear operator
J
(E(u)) : W
k,p
(Ω)→C
∞
0
(O
)
is bounded, and for each u ∈ W
k,p
(Ω), we have
J
(E(u)) −u
W
k,p
(Ω)
→0 , as →0.
Here, as compare to the previous approximation theorems, the improvement
is that one can write out the explicit form of the approximation.
1.4 Sobolev Embeddings
When we seek weak solutions of partial diﬀerential equations, we start with
functions in a Sobolev space W
k,p
. Then it is natural that we would like to
know whether the functions in this space also automatically belong to some
other spaces. The following theorem answers the question and at meantime
provides inequalities among the relevant norms.
22 1 Introduction to Sobolev Spaces
Theorem 1.4.1 (General Sobolev inequalities).
Assume Ω is bounded and has a C
1
boundary. Let u ∈ W
k,p
(Ω).
(i) If k <
n
p
, then u ∈ L
q
(Ω) with
1
q
=
1
p
−
k
n
and there exists a constant
C such that
u
L
q
(Ω)
≤ Cu
W
k,p
(Ω)
(ii) If k >
n
p
, then u ∈ C
k−[
n
p
]−1,γ
(Ω), and there exists a constant C, such
that
u
C
k−[
n
p
]−1,γ
(Ω)
≤ Cu
W
k,p
(Ω)
where
γ =
1 +
n
p
−
n
p
, if
n
p
is not an integer
any positive number < 1, if
n
p
is an integer .
Here, [b] is the integer part of the number b.
The proof of the Theorem is based upon several simpler theorems. We
ﬁrst consider the functions in W
1,p
(Ω). From the deﬁnition, apparently these
functions belong to L
q
(Ω) for 1 ≤ q ≤ p. Naturally, one would expect more,
and what is more meaningful is to ﬁnd out how large this q can be. And to
control the L
q
norm for larger q by W
1,p
norm–the norm of the derivatives
Du
L
p
(Ω)
=
Ω
[Du[
p
dx
1
p
–would suppose to be more useful in practice.
For simplicity, we start with the smooth functions with compact supports
in R
n
. We would like to know for what value of q, can we establish the in
equality
u
L
q
(R
n
)
≤ CDu
L
p
(R
n
)
, (1.26)
with constant C independent of u ∈ C
∞
0
(R
n
). Now suppose (1.26) holds. Then
it must also be true for the rescaled function of u:
u
λ
(x) = u(λx),
that is
u
λ

L
q
(R
n
)
≤ CDu
λ

L
p
(R
n
)
. (1.27)
By substitution, we have
R
n
[u
λ
(x)[
q
dx =
1
λ
n
R
n
[u(y)[
q
dy, and
R
n
[Du
λ
[
p
dx =
λ
p
λ
n
R
n
[Du(y)[
p
dy.
1.4 Sobolev Embeddings 23
It follows from (1.27) that
u
L
q
(R
n
)
≤ Cλ
1−
n
p
+
n
q
Du
L
p
(R
n
)
.
Therefore, in order that C to be independent of u, it is necessarily that the
power of λ here be zero, that is
q =
np
n −p
.
It turns out that this condition is also suﬃcient, as will be stated in the
following theorem. Here for q to be positive, we must require p < n.
Theorem 1.4.2 (GagliardoNirenbergSobolev inequality).
Assume that 1 ≤ p < n. Then there exists a constant C = C(p, n), such
that
u
L
p
∗
(R
n
)
≤ CDu
L
p
(R
n
)
, u ∈ C
1
0
(R
n
) (1.28)
where p
∗
=
np
n−p
.
Since any function in W
1,p
(R
n
) can be approached by a sequence of func
tions in C
1
0
(R
n
), we derive immediately
Corollary 1.4.1 Inequality (1.28) holds for all functions u in W
1,p
(R
n
).
For functions in W
1,p
(Ω), we can extended them to be W
1,p
(R
n
) functions
by Extension Theorem and arrive at
Theorem 1.4.3 Assume that Ω is a bounded, open subset of R
n
with C
1
boundary. Suppose that 1 ≤ p < n and u ∈ W
1,p
(Ω). Then u is in L
p∗
(Ω)
and there exists a constant C = C(p, n, Ω), such that
u
L
p∗
(Ω)
≤ Cu
W
1,p
(Ω)
. (1.29)
In the limiting case as p→n, p∗ :=
np
n−p
→∞. Then one may suspect that
u ∈ L
∞
when p = n. Unfortunately, this is true only in dimension one. For
n = 1, from
u(x) =
x
−∞
u
(x)dx,
we derive immediately that
[u(x)[ ≤
∞
−∞
[u
(x)[dx.
However, for n > 1, it is false. One counter example is u = log log(1 +
1
x
) on
Ω = B
1
(0). It belongs to W
1,n
(Ω), but not to L
∞
(Ω). This is a more delicate
situation, and we will deal with it later.
24 1 Introduction to Sobolev Spaces
Naturally, for p > n, one would expect W
1,p
to embed into better spaces.
To get some rough idea what these spaces might be, let us ﬁrst consider the
simplest case when n = 1 and p > 1. Obviously, for any x, y ∈ R
1
with x < y,
we have
u(y) −u(x) =
y
x
u
(t)dt,
and consequently, by H¨older inequality,
[u(y) −u(x)[ ≤
y
x
[u
(t)[dt ≤
y
x
[u
(t)[
p
dt
1
p
y
x
dt
1−
1
p
.
It follows that
[u(y) −u(x)[
[y −x[
1−
1
p
≤
∞
−∞
[u
(t)[
p
dt
1
p
.
Take the supremum over all pairs x, y in R
1
, the left hand side of the above
inequality is the norm in the H¨older space C
0,γ
(R
1
) with γ = 1 −
1
p
. This is
indeed true in general, and we have
Theorem 1.4.4 (Morrey’s inequality).
Assume n < p ≤ ∞. Then there exists a constant C = C(p, n) such that
u
C
0,γ
(R
n
)
≤ Cu
W
1,p
(R
n
)
, u ∈ C
1
(R
n
)
where γ = 1 −
n
p
.
To establish an inequality like (1.28), we follow two basic principles.
First, we consider the special case where Ω = R
n
. Instead of dealing with
functions in W
k,p
, we reduce the proof to functions with enough smoothness
(which is just Theorem 1.4.2).
Secondly, we deal with general domains by extending functions u ∈
W
1,p
(Ω) to W
1,p
(R
n
) via the Extension Theorem.
Here, one sees that Theorem 1.4.2 is the ‘key’ and inequality (1.27) there
provides the foundation for the proof of the Sobolev embedding. Often, the
proof of (1.27) is called a ‘hard analysis’, and the steps leading from (1.27)
to (1.28) are called ‘soft analyseses’. In the following, we will show the ‘soft’
parts ﬁrst and then the ‘hard’ ones. First, we will assume that Theorem 1.4.2
is true and derive Corollary 1.4.1 and Theorem 1.4.3. Then, we will prove
Theorem 1.4.2.
The Proof of Corollary 1.4.1.
Given any function u ∈ W
1,p
(R
n
), by the Approximation Theorem, there
exists a sequence ¦u
k
¦ ⊂ C
∞
0
(R
n
) such that u−u
k

W
1,p
(R
n
)
→ 0 as k → ∞.
Applying Theorem 1.4.2, we obtain
u
i
−u
j

L
p
∗
(R
n
)
≤ Cu
i
−u
j

W
1,p
(R
n
)
→ 0, as i, j → ∞.
1.4 Sobolev Embeddings 25
Thus ¦u
k
¦ also converges to u in L
p
∗
(R
n
). Consequently, we arrive at
u
L
p
∗
(R
n
)
= limu
k

L
p
∗
(R
n
)
≤ limCu
k

W
1,p
(R
n
)
= Cu
W
1,p
(R
n
)
.
This completes the proof of the Corollary. ¯ .
The Proof of Theorem 1.4.3.
Now for functions in W
1,p
(Ω), to apply inequality (1.27), we ﬁrst extend
them to be functions with compact supports in R
n
. More precisely, let O be
an open set that covers Ω, by the Extension Theorem (Theorem 1.3.6), for
every u in W
1,p
(Ω), there exists a function ˜ u in W
1,p
0
(O), such that
˜ u = u, almost everywhere in Ω;
moreover, there exists a constant C
1
= C
1
(p, n, Ω, O), such that
˜ u
W
1,p
(O)
≤ C
1
u
W
1,p
(Ω)
. (1.30)
Now we can apply the GagliardoNirenbergSobolev inequality to ˜ u to derive
u
L
p
∗
(Ω)
≤ ˜ u
L
p
∗
(O)
≤ C˜ u
W
1,p
(O)
≤ CC
1
u
W
1,p
(Ω)
.
This completes the proof of the Theorem. ¯ .
The Proof of Theorem 1.4.2.
We ﬁrst establish the inequality for p = 1, i.e. we prove
R
n
[u[
n
n−1
dx
n−1
n
≤
R
n
[Du[dx. (1.31)
Then we will apply (1.31) to [u[
γ
for a properly chosen γ > 1 to extend the
inequality to the case when p > 1.
We need
Lemma 1.4.1 (General H¨older Inequality) Assume that
u
i
∈ L
pi
(Ω) for i = 1, 2, , m
and
1
p
1
+
1
p
2
+ +
1
p
m
= 1.
Then
Ω
[u
1
u
2
u
m
[dx ≤
m
¸
i
Ω
[u
i
[
pi
dx
1
p
i
. (1.32)
26 1 Introduction to Sobolev Spaces
The proof can be obtained by applying induction to the usual H¨older inequal
ity for two functions.
Now we are ready to prove the Theorem.
Step 1. The case p = 1.
To better illustrate the idea, we ﬁrst derive inequality (1.31) for n = 2,
that is, we prove
R
2
[u(x)[
2
dx ≤
R
2
[Du[dx
2
. (1.33)
Since u has a compact support, we have
u(x) =
x1
−∞
∂u
∂y
1
(y
1
, x
2
)dy
1
.
It follows that
[u(x)[ ≤
∞
−∞
[Du(y
1
, x
2
)[dy
1
.
Similarly, we have
[u(x)[ ≤
∞
−∞
[Du(x
1
, y
2
)[dy
2
.
The above two inequalities together imply that
[u(x)[
2
≤
∞
−∞
[Du(y
1
, x
2
)[dy
1
∞
−∞
[Du(x
1
, y
2
)[dy
2
.
Now, integrate both sides of the above inequality with respect to x
1
and x
2
from −∞ to ∞, we arrive at (1.33).
Then we deal with the general situation when n > 2. We write
u(x) =
xi
−∞
∂u
∂y
i
(x
1
, , x
i−1
, y
i
, x
i+1
, , x
n
)dy
i
, i = 1, 2, n.
Consequently,
[u(x)[ ≤
∞
−∞
[Du(x
1
, , y
i
, , x
n
)[dy
i
, i = 1, 2, n.
And it follows that
[u(x)[
n
n−1
≤
n
¸
i=1
∞
−∞
[Du(x
1
, , y
i
, , x
n
)[dy
i
1
n−1
.
Integrating both sides with respect to x
1
and applying the general H¨older
inequality (1.32), we obtain
∞
−∞
[u[
n
n−1
dx
1
≤
∞
−∞
[Du[dy
1
1
n−1
∞
−∞
n
¸
i=2
∞
−∞
[Du[dy
i
1
n−1
dx
1
≤
∞
−∞
[Du[dy
1
1
n−1
n
¸
i=2
∞
−∞
∞
−∞
[Du[dx
1
dy
i
1
n−1
.
1.4 Sobolev Embeddings 27
Then integrate the above inequality with respect to x
2
. We have
∞
−∞
∞
−∞
[u[
n
n−1
dx
1
dx
2
≤
∞
−∞
∞
−∞
[Du[dx
1
dy
2
1
n−1
∞
−∞
∞
−∞
[Du[dy
1
1
n−1
n
¸
i=3
∞
−∞
∞
−∞
[Du[dx
1
dy
i
1
n−1
¸
dx
2
.
Again, applying the General H¨older Inequality, we arrive at
∞
−∞
∞
−∞
[u[
n
n−1
dx
1
dx
2
≤
∞
−∞
∞
−∞
[Du[dy
1
dx
2
1
n−1
∞
−∞
∞
−∞
[Du[dx
1
dy
2
1
n−1
n
¸
i=3
∞
−∞
∞
−∞
∞
−∞
[Du[dx
1
dx
2
dy
i
1
n−1
.
Continuing this way by integrating with respect to x
3
, , x
n−1
, we deduce
∞
−∞
∞
−∞
[u[
n
n−1
dx
1
dx
n−1
≤
∞
−∞
∞
−∞
[Du[dy
1
dx
2
dx
n−1
1
n−1
∞
−∞
∞
−∞
[Du[dx
1
dy
2
dx
3
dx
n−1
1
n−1
∞
−∞
∞
−∞
[Du[dx
1
dx
n−2
dy
n−1
1
n−1
R
n
[Du[dx
1
n−1
.
Finally, integrating both sides with respect to x
n
and applying the general
H¨older inequality, we obtain
R
n
[u[
n
n−1
dx ≤
R
n
[Du[dx
n
n−1
.
This veriﬁes (1.31).
Exercise 1.4.1 Write your own proof with all details for the cases n = 3 and
n = 4.
Step 2. The Case p > 1.
Applying (1.31) to the function [u[
γ
with γ > 1 to be chosen later, and by
the H¨older inequality, we have
R
n
[u[
γn
n−1
dx
n−1
n
≤
R
n
[D([u[
γ
)[dx = γ
R
n
[u[
γ−1
[Du[dx
≤ γ
R
n
[u[
γn
n−1
dx
(γ−1)(n−1)
γn
R
n
[Du[
γn
γ+n−1
dx
γ+n−1
γn
.
28 1 Introduction to Sobolev Spaces
It follows that
R
n
[u[
γn
n−1
dx
n−1
γn
≤ γ
R
n
[Du[
γn
γ+n−1
dx
γ+n−1
γn
.
Now choose γ, so that
γn
γ +n −1
= p, that is
γ =
p(n −1)
n −p
,
we obtain
R
n
[u[
np
n−p
dx
n−p
np
≤ γ
R
n
[Du[
p
dx
1
p
.
This completes the proof of the Theorem. ¯ .
The Proof of Theorem 1.4.4 (Morrey’s inequality).
We will establish two inequalities
sup
R
n
[u[ ≤ Cu
W
1,p
(R
n
)
, (1.34)
and
sup
x=y
[u(x) −u(y)[
[x −y[
1−n/p
≤ CDu
L
p
(R
n
)
. (1.35)
Both of them can be derived from the following
1
[B
r
(x)[
Br(x)
[u(y) −u(x)[dy ≤ C
Br(x)
[Du(y)[
[y −x[
n−1
dy, (1.36)
where [B
r
(x)[ is the volume of B
r
(x). We will carry the proof out in three
steps. In Step 1, we prove (1.36), and in Step 2 and Step 3, we verify (1.34)
and (1.35), respectively.
Step 1. We start from
u(y) −u(x) =
1
0
d
dt
u(x +t(y −x))dt =
s
0
Du(x +τω) ωdτ,
where
ω =
y −x
[x −y[
, s = [x −y[, and hence y = x +sω.
It follows that
[u(x +sω) −u(x)[ ≤
s
0
[Du(x +τω)[dτ.
Integrating both sides with respect to ω on the unit sphere ∂B
1
(0), then
converting the integral on the right hand side from polar to rectangular coor
dinates, we obtain
1.4 Sobolev Embeddings 29
∂B1(0)
[u(x +sω) −u(x)[dσ ≤
s
0
∂B1(0)
[Du(x +τω)[dσdτ
=
s
0
∂B1(0)
[Du(x +τω)[
τ
n−1
τ
n−1
dσdτ
=
Bs(x)
[Du(z)[
[x −z[
n−1
dz.
Multiplying both sides by s
n−1
, integrating with respect to s from 0 to r, and
taking into account that the integrand on the right hand side is nonnegative,
we arrive at
Br(x)
[u(y) −u(x)[dy ≤
r
n
n
Br(x)
[Du(y)[
[x −y[
n−1
dy.
This veriﬁes (1.36).
Step 2. For each ﬁxed x ∈ R
n
, we have
[u(x)[ =
1
[B
1
(x)[
B1(x)
[u(x)[dy
≤
1
[B
1
(x)[
B1(x)
[u(x) −u(y)[dy +
1
[B
1
(x)[
B1(x)
[u(y)[dy
= I
1
+I
2
. (1.37)
By (1.36) and the H¨older inequality, we deduce
I
1
≤ C
B1(x)
[Du(y)[
[x −y[
n−1
dy
≤ C
R
n
[Du[
p
dy
1/p
B1(x)
dy
[x −y[
(n−1)p
p−1
dy
p−1
p
≤ C
1
R
n
[Du[
p
dy
1/p
. (1.38)
Here we have used the condition that p > n, so that
(n−1)p
p−1
< n, and hence
the integral
B1(x)
dy
[x −y[
(n−1)p
p−1
is ﬁnite.
Also it is obvious that
I
2
≤ Cu
L
p
(R
n
)
. (1.39)
Now (1.34) is an immediate consequence of (1.37), (1.38), and (1.39).
30 1 Introduction to Sobolev Spaces
Step 3. For any pair of ﬁxed points x and y in R
n
, let r = [x − y[ and
D = B
r
(x) ∩ B
r
(y). Then
[u(x) −u(y)[ ≤
1
[D[
D
[u(x) −u(z)[dz +
1
[D[
D
[u(z) −u(y)[dz. (1.40)
Again by (1.36), we have
1
[D[
D
[u(x) −u(z)[dz ≤
C
[B
r
(x)[
Br(x)
[u(x) −u(z)[dz
≤ C
R
n
[Du[
p
dz
1/p
Br(x)
dz
[x −z[
(n−1)p
p−1
p−1
p
= Cr
1−n/p
Du
L
p
(R
n
)
. (1.41)
Similarly,
1
[D[
D
[u(z) −u(y)[dz ≤ Cr
1−n/p
Du
L
p
(R
n
)
. (1.42)
Now (1.40), (1.41), and (1.42) yield
[u(x) −u(y)[ ≤ C[x −y[
1−n/p
Du
L
p
(R
n
)
.
This implies (1.35) and thus completes the proof of the Theorem. ¯ .
The Proof of Theorem 1.4.1 (the General Sobolev Inequality).
(i) Assume that u ∈ W
k,p
(Ω) with k <
n
p
. We want to show that u ∈
L
q
(Ω) and
u
L
q
(Ω)
≤ Cu
W
k,p
(Ω)
. (1.43)
with
1
q
=
1
p
−
k
n
. This can be done by applying Theorem 1.4.3 successively on
the integer k. Again denote p
∗
=
np
n−p
, then
1
p
∗
=
1
p
−
1
n
. For [α[ ≤ k−1, D
α
u ∈
W
1,p
(Ω). By Theorem 1.4.3, we have D
α
u ∈ L
p
∗
(Ω), thus u ∈ W
k−1,p
∗
(Ω)
and
u
W
k−1,p
∗
(Ω)
≤ C
1
u
W
k,p
(Ω)
.
Applying Theorem 1.4.3 again to W
k−1,p
∗
(Ω), we have u ∈ W
k−2,p
∗∗
(Ω) and
u
W
k−2,p
∗∗
(Ω)
≤ C
2
u
W
k−1,p
∗
(Ω)
≤ C
2
C
1
u
W
k,p
(Ω)
,
where p
∗∗
=
np∗
n−p
∗
, or
1
p
∗∗
=
1
p
∗
−
1
n
= (
1
p
−
1
n
) −
1
n
=
1
p
−
2
n
.
Continuing this way k times, we arrive at (1.43).
1.4 Sobolev Embeddings 31
(ii) Now assume that k >
n
p
. Recall that in the Morrey’s inequality
u
C
0,γ
(Ω)
≤ Cu
W
1,p
(Ω)
, (1.44)
we require that p > n. However, in our situation, this condition is not neces
sarily met. To remedy this, we can use the result in the previous step. We can
ﬁrst decrease the order of diﬀerentiation to increase the power of summability.
More precisely, we will try to ﬁnd a smallest integer m, such that
W
k,p
(Ω) → W
k−m,q
(Ω),
with q > n. That is, we want
q =
np
n −mp
> n,
and equivalently,
m >
n
p
−1.
Obviously, the smallest such integer m is
m =
¸
n
p
.
For this choice of m, we can apply Morrey’s inequality (1.44) to D
α
u, with
any [α[ ≤ k −m−1 to obtain
D
α
u
C
0,γ
(Ω)
≤ C
1
u
W
k−m,q
(Ω)
≤ C
1
C
2
u
W
k,p
(Ω)
.
Or equivalently,
u
C
k−
[
n
p
]
−1,γ
(Ω)
≤ Cu
W
k,p
(Ω)
.
Here, when
n
p
is not an integer,
γ = 1 −
n
q
= 1 +
¸
n
p
−
n
p
.
While
n
p
is an integer, we have m =
n
p
, and in this case, q can be any number
> n, which implies that γ can be any positive number < 1.
This completes the proof of the Theorem. ¯ .
32 1 Introduction to Sobolev Spaces
1.5 Compact Embedding
In the previous section, we proved that W
1,p
(Ω) is embedded into L
p
∗
(Ω)
with p
∗
=
np
n−p
. Obviously, for q < p
∗
, the embedding of W
1,p
(Ω) into L
q
(Ω)
is still true, if the region Ω is bounded. Actually, due to the strict inequality
on the exponent, one can expect more, as stated below.
Theorem 1.5.1 (RellichKondrachov Compact Embedding).
Assume that Ω is a bounded open subset in R
n
with C
1
boundary ∂Ω.
Suppose 1 ≤ p < n. Then for each 1 ≤ q < p
∗
, W
1,p
(Ω) is compactly imbedded
into L
q
(Ω):
W
1,p
(Ω) →→ L
q
(Ω),
in the sense that
i) there is a constant C, such that
u
L
q
(Ω)
≤ Cu
W
1,p
(Ω)
, ∀ u ∈ W
1,p
(Ω); (1.45)
and
ii) every bounded sequence in W
1,p
(Ω) possesses a convergent subsequence
in L
q
(Ω).
Proof. The ﬁrst part–inequality (1.45)– is included in the general Embedding
Theorem. What we need to show is that if ¦u
k
¦ is a bounded sequence in
W
1,p
(Ω), then it possesses a convergent subsequence in L
q
(Ω). This can be
derived immediately from the following
Lemma 1.5.1 Every bounded sequence in W
1,1
(Ω) possesses a convergent
subsequence in L
1
(Ω).
We postpone the proof of the Lemma for a moment and see how it implies
the Theorem.
In fact, assume that ¦u
k
¦ is a bounded sequence in W
1,p
(Ω). Then there
exists a subsequence (still denoted by ¦u
k
¦) which converges weakly to an ele
ment u
o
in W
1,p
(Ω). By the Sobolev embedding, ¦u
k
¦ is bounded in L
p∗
(Ω).
On the other hand, it is also bounded in W
1,1
(Ω), since p ≥ 1 and Ω is
bounded. Now, by Lemma 1.5.1, there is a subsequence (still denoted by ¦u
k
¦)
that converges strongly to u
o
in L
1
(Ω). Applying the H¨older inequality
u
k
−u
o

L
q
(Ω)
≤ u
k
−u
o

θ
L
1
(Ω)
u
k
−u
o

1−θ
L
p∗
(Ω)
,
we conclude immediately that ¦u
k
¦ converges strongly to u
o
in L
q
(Ω). This
proves the Theorem.
We now come back to prove the Lemma. Let ¦u
k
¦ be a bounded sequence
in W
1,1
(Ω). We will show the strong convergence of this sequence in three
steps with the help of a family of molliﬁers
u
k
(x) =
Ω
j
(y)u
k
(x −y)dy.
1.5 Compact Embedding 33
First, we show that
u
k
→u
k
in L
1
(Ω) as →0 , uniformly in k. (1.46)
Then, for each ﬁxed > 0, we prove that
there is a subsequence of ¦u
k
¦ which converges uniformly . (1.47)
Finally, corresponding to the above convergent sequence ¦u
k
¦, we extract
diagonally a subsequence of ¦u
k
¦ which converges strongly in L
1
(Ω).
Based on the Extension Theorem, we may assume, without loss of gener
ality, that Ω = R
n
, the sequence of functions ¦u
k
¦ all have compact support
in a bounded open set G ⊂ R
n
, and
u
k

W
1,1
(G)
≤ C < ∞, for all k = 1, 2, .
Since every W
1,1
function can be approached by a sequence of smooth
functions, we may also assume that each u
k
is smooth.
Step 1. From the property of molliﬁers, we have
u
k
(x) −u
k
(x) =
B1(0)
j
(y)[u
k
(x −y) −u
k
(x)]dy
=
B1(0)
j
(y)
1
0
d
dt
u
k
(x −ty)dt dy
= −
B1(0)
j
(y)
1
0
Du
k
(x −ty)dt y dy.
Integrating with respect to x and changing the order of integration, we
obtain
u
k
−u
k

L
1
(G)
≤
B1(0)
j(y)
1
0
G
[Du
k
(x −ty)[dxdt dy
≤
G
[Du
k
(z)[dz ≤ u
k

W
1,1
(G)
. (1.48)
It follows that
u
k
−u
k

L
1
(G)
→0 , as →0 , uniformly in k. (1.49)
Step 2. Now ﬁx an > 0, then for all x ∈ R
n
and for all k = 1, 2, , we
have
[u
k
(x)[ ≤
B(x)
j
(x −y)[u
k
(y)[dy
≤ j

L
∞
(R
n
)
u
k

L
1
(G)
≤
C
n
< ∞. (1.50)
34 1 Introduction to Sobolev Spaces
Similarly,
[Du
k
(x)[ ≤
C
n+1
< ∞. (1.51)
(1.50) and (1.51) imply that, for each ﬁxed > 0, the sequence ¦u
k
¦ is uni
formly bounded and equicontinuous. Therefore, by the ArzelaAscoli Theo
rem (see the Appendix), it possesses a subsequence (still denoted by ¦u
k
¦ )
which converges uniformly on G, in particular
lim
j,i→∞
u
j
−u
i

L
1
(G)
= 0. (1.52)
Step 3. Now choose to be 1, 1/2, 1/3, , 1/k, successively, and denote
the corresponding subsequence that satisﬁes (1.52) by
u
1/k
k1
, u
1/k
k2
, u
1/k
k3
,
for k = 1, 2, 3, . Pick the diagonal subsequence from the above:
¦u
1/i
ii
¦ ⊂ ¦u
k
¦.
Then select the corresponding subsequence from ¦u
k
¦:
u
11
, u
22
, u
33
, .
This is our desired subsequence, because
u
ii
−u
jj

L
1
(G)
≤ u
ii
−u
1/i
ii

L
1
(G)
+u
1/i
ii
−u
1/j
jj

L
1
(G)
+u
1/j
jj
−u
jj

L
1
(G)
→0 , as i, j→∞,
due to the fact that each of the norm on the right hand side →0.
This completes the proof of the Lemma and hence the Theorem. ¯ .
1.6 Other Basic Inequalities
1.6.1 Poincar´e’s Inequality
For functions that vanish on the boundary, we have
Theorem 1.6.1 (Poincar´e’s Inequality I).
Assume Ω is bounded. Suppose u ∈ W
1,p
0
(Ω) for some 1 ≤ p ≤ ∞. Then
u
L
p
(Ω)
≤ CDu
L
p
(Ω)
. (1.53)
1.6 Other Basic Inequalities 35
Remark 1.6.1 i) Now based on this inequality, one can take an equivalent
norm of W
1,p
0
(Ω) as u
W
1,p
0
(Ω)
= Du
L
p
(Ω)
.
ii) Just for this theorem, one can easily prove it by using the Sobolev in
equality u
L
p ≤ u
L
q with p =
nq
n−q
and the H¨older inequality. However,
the one we present in the following is a uniﬁed proof that works for both this
and the next theorem.
Proof. For convenience, we abbreviate u
L
p
(Ω)
as u
p
. Suppose inequality
(1.53) does not hold, then there exists a sequence ¦u
k
¦ ⊂ W
1,p
0
(Ω), such that
Du
k

p
= 1 , while u
k

p
→∞, as k→∞.
Since C
∞
0
(Ω) is dense in W
1,p
0
(Ω), we may assume that ¦u
k
¦ ⊂ C
∞
0
(Ω).
Let v
k
=
u
k
u
k

p
. Then
v
k

p
= 1 , and Dv
k

p
→0 .
Consequently, ¦v
k
¦ is bounded in W
1,p
(Ω) and hence possesses a subsequence
(still denoted by ¦v
k
¦) that converges weakly to some v
o
∈ W
1,p
(Ω). From the
compact embedding results in the previous section, ¦v
k
¦ converges strongly
in L
p
(Ω) to v
o
, and therefore
v
o

p
= 1 . (1.54)
On the other hand, for each φ ∈ C
∞
0
(Ω),
Ω
vφ
xi
dx = lim
k→∞
Ω
v
k
φ
xi
dx = − lim
k→∞
Ω
v
k,xi
φdx = 0.
It follows that
Dv
o
(x) = 0 , a.e.
Thus v
o
is a constant. Taking into account that v
k
∈ C
∞
0
(Ω), we must have
v
o
≡ 0. This contradicts with (1.54) and therefore completes the proof of the
Theorem. ¯ .
For functions in W
1,p
(Ω), which may not be zero on the the boundary, we
have another version of Poincar´e’s inequality.
Theorem 1.6.2 (Poincar´e Inequality II).
Let Ω be a bounded, connected, and open subset in R
n
with C
1
boundary.
Let ¯ u be the average of u on Ω. Assume 1 ≤ p ≤ ∞. Then there exists a
constant C = C(n, p, Ω), such that
u − ¯ u
p
≤ CDu
p
, ∀ u ∈ W
1,p
(Ω). (1.55)
36 1 Introduction to Sobolev Spaces
The proof of this Theorem is similar to the previous one. Instead of letting
v
k
=
u
k
u
k

p
, we choose
v
k
=
u
k
− ¯ u
k
u
k
− ¯ u
k

p
.
Remark 1.6.2 The connectness of Ω is essential in this version of the in
equality. A simple counter example is when n = 1, Ω = [0, 1] ∪ [2, 3], and
u(x) =
−1 for x ∈ [0, 1]
1 for x ∈ [2, 3] .
1.6.2 The Classical HardyLittlewoodSobolev Inequality
Theorem 1.6.3 (HardyLittlewoodSobolev Inequality).
Let 0 < λ < n and s, r > 1 such that
1
r
+
1
s
+
λ
n
= 2.
Assume that f ∈ L
r
(R
n
) and g ∈ L
s
(R
n
). Then
R
n
R
n
f(x)[x −y[
−λ
g(y)dxdy ≤ C(n, s, λ)[[f[[
r
[[g[[
s
(1.56)
where
C(n, s, λ) =
n[B
n
[
λ/n
(n −λ)rs
λ/n
1 −1/r
λ/n
+
λ/n
1 −1/s
λ/n
with [B
n
[ being the volume of the unit ball in R
n
, and where
f
r
:= f
L
r
(R
n
)
.
Proof. Without loss of generality, we may assume that both f and g are non
negative and f
r
= 1 = g
s
.
Let
χ
G
(x) =
1 if x ∈ G
0 if x ∈G
be the characteristic function of the set G. Then one can see obviously that
f(x) =
∞
0
χ
{f>a}
(x) da (1.57)
g(x) =
∞
0
χ
{g>b}
(x) db (1.58)
[x[
−λ
= λ
∞
0
c
−λ−1
χ
{x<c}
(x) dc (1.59)
1.6 Other Basic Inequalities 37
To see the last identity, one may ﬁrst write
[x[
−λ
=
∞
0
χ
{x
−λ
>˜ c}
(x) d˜ c,
and then let ˜ c = c
−λ
.
Substituting (1.57), (1.58), and (1.59) into the left hand side of (1.56), we
have
I :=
R
n
R
n
f(x)[x −y[
−λ
g(y) dxdy
= λ
∞
0
∞
0
∞
0
R
n
R
n
c
−λ−1
χ
{f>a}
(x)χ
{x<c}
(x −y)χ
{g>b}
(y) dxdy dc db da
= λ
∞
0
∞
0
∞
0
c
−λ−1
I(a, b, c) dc db da. (1.60)
Let
u(c) = [B
n
[c
n
,
the volume of the ball of radius c, and let
v(a) =
R
n
χ
{f>a}
(x) dx, w(b) =
R
n
χ
{g>b}
(y) dy,
the measure of the sets ¦x [ f(x) > a¦ and ¦y [ g(y) > b¦, respectively. Then
we can express the norms as
f
r
r
= r
∞
0
a
r−1
v(a) da = 1 and g
s
s
= s
∞
0
b
s−1
w(b) db = 1. (1.61)
It is easy to see that
I(a, b, c) ≤
R
n
R
n
χ
{x<c}
(x −y)χ
{g>b}
(y) dxdy
≤
R
n
u(c)χ
{g>b}
(y) dy = u(c)w(b)
Similarly, one can show that I(a, b, c) is bounded above by other pairs and
arrive at
I(a, b, c) ≤ min¦u(c)w(b), u(c)v(a), v(a)w(b)¦. (1.62)
We integrate with respect to c ﬁrst. By (1.62), we have
∞
0
c
−λ−1
I(a, b, c) dc
≤
u(c)≤v(a)
c
−λ−1
w(b)u(c) dc +
u(c)>v(a)
c
−λ−1
w(b)v(a) dc
38 1 Introduction to Sobolev Spaces
= w(b)[B
n
[
(v(a)/B
n
)
1/n
0
c
−λ−1+n
dc +w(b)v(a)
(v(a)/B
n
)
1/n
c
−λ−1
dc
=
[B
n
[
λ/n
n −λ
w(b)v(a)
1−λ/n
+
[B
n
[
λ/n
λ
w(b)v(a)
1−λ/n
=
[B
n
[
λ/n
λ(n −λ)
w(b)v(a)
1−λ/n
(1.63)
Exchanging v(a) with w(b) in the second line of (1.63), we also obtain
∞
0
c
−λ−1
I(a, b, c) dc
≤
u(c)≤w(b)
c
−λ−1
v(a)u(c) dc +
u(c)>w(b))
c
−λ−1
w(b)v(a) dc
≤
[B
n
[
λ/n
λ(n −λ)
v(a)w(b)
1−λ/n
(1.64)
In view of (1.61), we split the bintegral into two parts, one from 0 to a
r/s
and the other from a
r/s
to ∞. By virtue of (1.60), (1.63), and (1.64), we derive
I ≤
n
n −λ
[B
n
[
λ/n
∞
0
v(a)
a
r/s
0
w(b)
1−λ/n
db da +
∞
0
v(a)
1−λ/n
∞
a
r/s
w(b) db da. (1.65)
To estimate the ﬁrst integral in (1.65), we use H¨older inequality with
m = (s −1)(1 −λ/n)
a
r/s
0
w(b)
1−λ/n
b
m
b
−m
db
≤
a
r/s
0
w(b)b
s−1
db
1−λ/n
a
r/s
0
b
−mn/λ
db
λ/n
≤
a
r/s
0
w(b)b
s−1
db
1−λ/n
a
r−1
, (1.66)
since mn/λ < 1. It follows that the ﬁrst integral in (1.65) is bounded above
by
λ
n −s(n −λ)
λ/n
∞
0
v(a)a
r−1
da
∞
0
w(b)b
s−1
db
1−λ/n
=
1
rs
λ/n
1 −1/r
λ/n
. (1.67)
1.6 Other Basic Inequalities 39
To estimate the second integral in (1.65), we ﬁrst rewrite it as
∞
0
w(b)
b
s/r
0
v(a)
1−λ/n
da db,
then an analogous computation shows that it is bounded above by
1
rs
λ/n
1 −1/s
λ/n
. (1.68)
Now the desired HardyLittlewoodSobolev inequality follows directly from
(1.65), (1.67), and (1.68). ¯ .
Theorem 1.6.4 (An equivalent form of the HardyLittlewoodSobolev in
equality)
Let g ∈ L
p
(R
n
) for
n
n−α
< p < ∞. Deﬁne
Tg(x) =
R
n
[x −y[
α−n
g(y)dy.
Then
Tg
p
≤ C(n, p, α)g
np
n+αp
. (1.69)
Proof. By the classical HardyLittlewoodSobolev inequality, we have
< f, Tg >=< Tf, g >≤ C(n, s, α) f
r
g
s
,
where < , > is the L
2
inner product.
Consequently,
Tg
p
= sup
fr=1
< f, Tg > ≤ C(n, s, α)g
s
,
where
1
p
+
1
r
= 1
1
r
+
1
s
=
n+α
n
.
Solving for s, we arrive at
s =
np
n +αp
.
This completes the proof of the Theorem. ¯ .
Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev
inequality, let’s rewrite it as
Tg
nq
n−αq
≤ Cg
q
(1.70)
with
40 1 Introduction to Sobolev Spaces
1 < q :=
np
n +αp
<
n
α
.
Let u = Tg. Then one can show that (see [CLO]),
(−´)
α
2
u = g.
Now inequality (1.70) becomes the Sobolev one
u
nq
n−αq
≤ C(−´)
α
2
u
q
.
2
Existence of Weak Solutions
2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Method of Linear Functional Analysis
2.3.1 Linear Equations
2.3.2 Some Basic Principles in Functional Analysis
2.3.3 Existence of Weak Solutions
2.4 Variational Methods
2.4.1 Semilinear Equations
2.4.2 Calculus of Variations
2.4.3 Existence of Minimizers
2.4.4 Existence of Minimizers under Constrains
2.4.5 Minimax Critical Points
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Many physical models come with natural variational structures where one
can minimize a functional, usually an energy E(u). Then the minimizers are
weak solutions of the related partial diﬀerential equation
E
(u) = 0.
For example, for an open bounded domain Ω ⊂ R
n
, and for f ∈ L
2
(Ω),
the functional
E(u) =
1
2
Ω
[u[
2
dx −
Ω
f udx
possesses a minimizer among all u ∈ H
1
0
(Ω) (try to show this by using the
knowledge from the previous chapter). As we will see in this chapter, the
minimizer is a weak solution of the Dirichlet boundary value problem
42 2 Existence of Weak Solutions
−´u = f(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
In practice, it is often much easier to obtain a weak solution than a classical
one. One of the seven millennium problem posed by the Clay Institute of
Mathematical Sciences is to show that some suitable weak solutions to the
NavieStoke equation with good initial values are in fact classical solutions
for all time.
In this chapter, we consider the existence of weak solutions to second order
elliptic equations, both linear and semilinear.
For linear equations, we will use the LaxMilgram Theorem to show the
existence of weak solutions.
For semilinear equations, we will introduce variational methods. We will
consider the corresponding functionals in proper Hilbert spaces and seek weak
solutions of the equations associated with the critical points of the functionals.
We will use examples to show how to ﬁnd a minimizer, a minimizer under
constraint, and a minimax critical point. The wellknown Mountain Pass
Theorem is introduced and applied to show the existence of such a minimax.
To show that a weak solution possesses the desired diﬀerentiability so that
it is actually a classical one is called the regularity method, which will be
studied in Chapter 3.
2.1 Second Order Elliptic Operators
Let Ω be an open bounded set in R
n
. Let a
ij
(x), b
i
(x), and c(x) be bounded
functions on Ω with a
ij
(x) = a
ji
(x). Consider the second order partial diﬀer
ential operator L either in the divergence form
Lu = −
n
¸
i,j=1
(a
ij
(x)u
xi
)
xj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u, (2.1)
or in the nondivergence form
Lu = −
n
¸
i,j=1
a
ij
(x)u
xixj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u. (2.2)
We say that L is uniformly elliptic if there exists a constant δ > 0, such
that
n
¸
i,j=1
a
ij
(x)ξ
i
ξ
j
≥ δ[ξ[
2
for almost every x ∈ Ω and for all ξ ∈ R
n
.
In the simplest case when a
ij
(x) = δ
ij
, b
i
(x) ≡ 0, and c(x) ≡ 0, L reduces
to −´. And, as we will see later, that the solutions of the general second order
elliptic equation Lu = 0 share many similar properties with the harmonic
functions.
2.2 Weak Solutions 43
2.2 Weak Solutions
Let the operator L be in the divergence form as deﬁned in (2.1). Consider the
second order elliptic equation with Dirichlet boundary condition
Lu = f , x ∈ Ω
u = 0 , x ∈ ∂Ω
(2.3)
When f = f(x), the equation is linear; and when f = f(x, u), semilinear.
Assume that, f(x) is in L
2
(Ω), or for a given solution u, f(x, u) is in
L
2
(Ω).
Multiplying both sides of the equation by a test function v ∈ C
∞
0
(Ω), and
integrating by parts on Ω, we have
Ω
¸
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i=1
b
i
(x)u
xi
v +c(x)uv
¸
dx =
Ω
fvdx. (2.4)
There are no boundary terms because both u and v vanish on ∂Ω. One can see
that the integrals in (2.4) are well deﬁned if u, v, and their ﬁrst derivatives are
square integrable. Write H
1
(Ω) = W
1,2
(Ω), and let H
1
0
(Ω) the completion of
C
∞
0
(Ω) in the norm of H
1
(Ω):
u =
¸
Ω
([Du[
2
+[u[
2
)dx
1/2
.
Then (2.4) remains true for any v ∈ H
1
0
(Ω). One can easily see that H
1
0
(Ω)
is also the most appropriate space for u. On the other hand, if u ∈ H
1
0
(Ω)
satisﬁes (2.4) and is second order diﬀerentiable, then through integration by
parts, we have
Ω
(Lu −f)vdx = 0 ∀v ∈ H
1
0
(Ω).
This implies that
Lu = f , ∀x ∈ Ω.
And therefore u is a classical solution of (2.3).
The above observation naturally leads to
Deﬁnition 2.2.1 The weak solution of problem (2.3) is a function u ∈
H
1
0
(Ω) that satisﬁes (2.4) for all v ∈ H
1
0
(Ω).
Remark 2.2.1 Actually, here the condition on f can be relaxed a little bit.
By the Sobolev embedding
H
1
(Ω) → L
2n
n−2
(Ω),
44 2 Existence of Weak Solutions
we see that v is in L
2n
n−2
(Ω), and hence by duality, f only need to be in
L
2n
n+2
(Ω). In the semilinear case, if f(x, u) = u
p
for any p ≤
n + 2
n −2
or more
generally
f(x, u) ≤ C
1
+C
2
[u[
n+2
n−2
;
then for any u ∈ H
1
(Ω), f(x, u) is in L
2n
n+2
(Ω).
2.3 Methods of Linear Functional Analysis
2.3.1 Linear Equations
For the Dirichelet problem with linear equation
Lu = f(x) , x ∈ Ω
u = 0 , x ∈ ∂Ω,
(2.5)
to ﬁnd its weak solutions, we can apply representation type theorems in Linear
Functional Analysis, such as Riesz Representation Theorem or LaxMilgram
Theorem. To this end, we introduce the bilinear form
B[u, v] =
Ω
¸
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i=1
b
i
(x)u
xi
v +c(x)uv
¸
dx
deﬁned on H
1
0
(Ω), which is the left hand side of (2.4) in the deﬁnition of the
weak solutions. While the right hand side of (2.4) may be regarded as a linear
functional on H
1
0
(Ω);
< f, v >:=
Ω
fvdx.
Now, to ﬁnd a weak solution of (2.5), it is equivalent to show that there
exists a function u ∈ H
1
0
(Ω), such that
B[u, v] =< f, v >, ∀ v ∈ H
1
0
(Ω). (2.6)
This can be realized by representation type theorems in Functional Analysis.
2.3.2 Some Basic Principles in Functional Analysis
Here we list brieﬂy some basic deﬁnitions and principles of functional analysis,
which will be used to obtain the existence of weak solutions. For more details,
please see [Sch1].
Deﬁnition 2.3.1 A Banach space X is a complete, normed linear space with
norm   that satisﬁes
(i) u +v ≤ u +v, ∀u, v ∈ X,
(ii) λu = [λ[u, ∀u, v ∈ X, λ ∈ R,
(iii) u = 0 if and only if u = 0.
2.3 Methods of Linear Functional Analysis 45
The examples of Banach spaces are:
(i) L
p
(Ω) with norm u
L
p
(Ω)
,
(ii) Sobolev spaces W
k,p
(Ω) with norm u
W
k,p
(Ω)
, and
(iii) H¨older spaces C
0,γ
(Ω) with norm u
C
0,γ
(Ω)
.
Deﬁnition 2.3.2 A Hilbert space H is a Banach space endowed with an inner
product (, ) which generates the norm
u := (u, u)
1/2
with the following properties
(i) (u, v) = (v, u), ∀u, v ∈ H,
(ii) the mapping u → (u, v) is linear for each v ∈ H,
(iii) (u, u) ≥ 0, ∀u ∈ H, and
(iv) (u, u) = 0 if and only if u = 0.
Examples of Hilbert spaces are
(i) L
2
(Ω) with inner product
(u, v) =
Ω
u(x)v(x)dx,
and
(ii) Sobolev spaces H
1
(Ω) or H
1
0
(Ω) with inner product
(u, v) =
Ω
[u(x)v(x) +Du(x) Dv(x)]dx.
Deﬁnition 2.3.3 (i) A mapping A : X→Y is a linear operator if
A[au +bv] = aA[u] +bA[u] ∀ u, v ∈ X, a, b ∈ R
1
.
(ii) A linear operator A : X→Y is bounded provided
A := sup
u
X
≤1
Au
Y
< ∞.
(iii) A bounded linear operator f : X→R
1
is called a bounded linear func
tional on X. The collection of all bounded linear functionals on X, denoted by
X
∗
, is called the dual space of X. We use
< f, u >:= f[u]
to denote the pairing of X
∗
and X.
Lemma 2.3.1 (Projection).
Let M be a closed subspace of H. Then for every u ∈ H, there is a v ∈ M,
such that
(u −v, w) = 0, ∀w ∈ M. (2.7)
In other words,
H = M ⊕M
⊥
:= ¦u ∈ H [ u⊥M¦.
46 2 Existence of Weak Solutions
Here v is the projection of u on M. The readers may ﬁnd the proof of the
Lemma in the book [Sch] ( Theorem 13) or as given in the following exercise.
Exercise 2.3.1 i) Given any u ∈ H and u ∈M, there exist a v
o
∈ M such
that
v
o
−u = inf
v∈M
v −u.
Hint: Show that a minimizing sequence is a Cauchy sequence.
ii) Show that
(u −v
o
, w) = 0 , ∀w ∈ M.
Based on this Projection Lemma, one can prove
Theorem 2.3.1 (Riesz Representation Theorem). Let H be a real Hilbert
space with inner product (, ), and H
∗
be its dual space. Then H
∗
can be
canonically identiﬁed with H, more precisely, for each u
∗
∈ H
∗
, there exists
a unique element u ∈ H, such that
< u
∗
, v >= (u, v) ∀v ∈ H.
The mapping u
∗
→ u is a linear isomorphism from H
∗
onto H.
This is a wellknown theorem in functional analysis. The readers may see
the book [Sch1] for its proof or do it as the following exercise.
Exercise 2.3.2 Prove the Riesz Representation Theorem in 4 steps.
Let T be a linear operator from H to R
1
and T ≡0. Show that
i) K(T) := ¦u ∈ H [ Tu = 0¦ is closed.
ii) H = K(T) ⊕K(T)
⊥
.
iii) The dimension of K(T) is 1.
iv) There exists v, such that Tv = 1. Then
Tv = (u, v) , ∀u ∈ H.
From Riesz Representation Theorem, one can derive the following
Theorem 2.3.2 (LaxMilgram Theorem). Let H be a real Hilbert space with
norm  . Let
B : H H→R
1
be a bilinear mapping. Assume that there exist constants M, m > 0, such that
(i) [B[u, v][ ≤ Muv, ∀u, v ∈ H
and
(ii) mu
2
≤ B[u, u], ∀u ∈ H.
Then for each bounded linear functional f on H, there exists a unique
element u ∈ H, such that
B[u, v] =< f, v >, ∀v ∈ H.
2.3 Methods of Linear Functional Analysis 47
Proof. 1. If B[u, v] is symmetric, i.e. B[u, v] = B[v, u], then the conditions
(i) and (ii) ensure that B[u, v] can be made as an inner product in H, and
the conclusion of the theorem follows directly from the Reisz Representation
Theorem.
2. In the case B[u, v] may not be symmetric, we proceed as follows.
On one hand, by the Riesz Representation Theorem, for a given bounded
linear functional f on H, there is an
˜
f ∈ H, such that
< f, v >= (
˜
f, v), ∀v ∈ H. (2.8)
On the other hand, for each ﬁxed element u ∈ H, by condition (i), B[u, ]
is also a bounded linear functional on H, and hence there exist a ˜ u ∈ H, such
that
B[u, v] = (˜ u, v), ∀v ∈ H. (2.9)
Denote this mapping from u to ˜ u by A, i.e.
A : H→H, Au = ˜ u. (2.10)
From (2.8), (2.9), and (2.10), It suﬃce to show that for each
˜
f ∈ H, there
exists a unique element u ∈ H, such that
Au =
˜
f.
We carry this out in two parts. In part (a), we show that A is a bounded
linear operator, and in part (b), we prove that it is onetoone and onto.
(a) For any u
1
, u
2
∈ H, any a
1
, a
2
∈ R
1
, and each v ∈ H, we have
(A(a
1
u
1
+a
2
u
2
), v) = B[(a
1
u
1
+a
2
u
2
), v]
= a
1
B[u
1
, v] +a
2
B[u
2
, v]
= a
1
(Au
1
, v) +a
2
(Au
2
, v)
= (a
1
Au
1
+a
2
Au
2
, v)
Hence
A(a
1
u
1
+a
2
u
2
) = a
1
Au
1
+a
2
Au
2
.
A is linear.
Moreover, by condition (i), for any u ∈ H,
Au
2
= (Au, Au) = B[u, Au] ≤ MuAu,
and consequently
Au ≤ Mu ∀u ∈ H. (2.11)
This veriﬁes that A is bounded.
(b) To see that A is onetoone, we apply condition (ii):
mu
2
≤ B[u, u] = (Au, u) ≤ Auu,
48 2 Existence of Weak Solutions
and it follows that
mu ≤ Au. (2.12)
Now if Au = Av, then from (2.12),
mu −v ≤ Au −Av.
This implies u = v. Hence A is onetoone.
From (2.12), one can also see that R(A), the range of A in H is closed. In
fact, assume that v
k
∈ R(A) and v
k
→v in H. Then for each v
k
, there is a u
k
,
such that Au
k
= v
k
. Inequality (2.12) implies that
u
i
−u
j
 ≤ Cv
i
−v
j
,
i.e. u
k
is a Cauchy sequence in H, and hence u
k
→u for some u ∈ H. Now, by
(2.11), Au
k
→Au and v = Au ∈ H. Therefore, R(A) is closed.
To show that R(A) = H, we need the Projection Lemma 2.3.1. For any
u ∈ H, by the Projection Lemma, there exists a v ∈ R(A), such that
0 = (u −v, A(u −v)) = B[(u −v), (u −v)] ≥ mu −v
2
.
Consequently, u = v ∈ R(A) and hence H ⊂ R(A). Therefore R(A) = H.
This completes the proof of the Theorem. ¯ .
2.3.3 Existence of Weak Solutions
Now we explore that in what situations, our second order elliptic operator
L would satisfy the conditions (i) and (ii) requested by the LaxMilgram
Theorem.
Let H = H
1
0
(Ω) be our Hilbert space with inner product
(u, v) :=
Ω
(DuDv +uv)dx.
By Poincar´e inequality, we can use the equivalent inner product
(u, v)
H
:=
Ω
DuDvdx
and the corresponding norm
u
H
:=
Ω
[Du[
2
dx.
For any v ∈ H, by the Sobolev Embedding, v is in L
2n
n−2
(Ω). And by virtue
of the H¨older inequality,
2.3 Methods of Linear Functional Analysis 49
< f, v >:=
Ω
f(x)v(x)dx ≤
Ω
[f(x)[
2n
n+2
dx
n+2
2n
Ω
[v(x)[
2n
n−2
dx
n−2
2n
,
hence we only need to assume that f ∈ L
2n
n+2
(Ω).
In the simplest case when L = −´, we have
B[u, v] =
Ω
Du Dvdx.
Obviously, condition (i) is met:
[B[u, v][ ≤ u
H
v
H
,
Condition (ii) is also satisﬁed:
B[u, u] = u
2
H
.
Now applying the LaxMilgram Theorem, we conclude that there exists a
unique u ∈ H such that
B[u, v] =< f, v >, ∀v ∈ H.
We have actually proved
Theorem 2.3.3 For every f(x) in L
2n
n+2
(Ω), the Dirichlet problem
−´u = f(x), x ∈ Ω,
u(x) = 0, x ∈ ∂Ω
has a unique weak solution u in H
1
0
(Ω).
In general,
B[u, v] =
Ω
¸
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i=1
b
i
(x)u
xi
v +c(x)uv
¸
dx.
Under the assumption that
a
ij

L
∞
(Ω)
, b
i

L
∞
(Ω)
, c
L
∞
(Ω)
≤ C,
and by H¨older inequality, one can easily verify that
[B[u, v][ ≤ 3Cu
H
v
H
.
Hence condition (i) is always satisﬁed. However, condition (ii) may not auto
matically hold. It depends on the sign of c(x), and the L
∞
norm of b
i
(x) and
c(x).
First from the uniform ellipticity condition, we have
50 2 Existence of Weak Solutions
Ω
n
¸
i,j=1
a
ij
(x)u
xi
u
xj
dx ≥ δ
Ω
[Du[
2
dx = δu
2
H
, (2.13)
for some δ > 0.
From here one can see that if b
i
(x) ≡ 0 and c(x) ≥ 0, then condition (ii) is
met. Actually, the condition on c(x) can be relaxed a little bit, for instance, if
c(x) ≥ −δ/2 (2.14)
with δ as in (2.13), then condition (ii) holds. Hence we have
Theorem 2.3.4 Assume that b
i
(x) ≡ 0 and c(x) satisﬁes (2.14). Then for
every f ∈ L
2n
n+2
(Ω), the problem
Lu = f(x) x ∈ Ω
u(x) = 0 x ∈ ∂Ω
has a unique weak solution in H
1
0
(Ω).
2.4 Variational Methods
2.4.1 Semilinear Equations
To ﬁnd the weak solutions of semilinear equations
Lu = f(x, u)
the LaxMilgram Theorem can no longer be applied. We will use variational
methods to obtain the existence. Roughly speaking, we will associate the
equation with the functional
J(u) =
1
2
Ω
(Lu u)dx −
Ω
F(x, u)dx
where
F(x, u) =
u
0
f(x, s)ds.
We show that the critical point of J(u) is a weak solution of our problem.
Then we will focus on how to seek critical points in various situations.
2.4.2 Calculus of Variations
For a continuously diﬀerentiable function g deﬁned on R
n
, a critical point is
a point where Dg vanishes. The simplest sort of critical points are global or
local maxima or minima. Others are saddle points. To seek weak solutions of
2.4 Variational Methods 51
partial diﬀerential equations, we generalize this concept to inﬁnite dimensional
spaces. To illustrate the idea, let us start from a simple example:
−´u = f(x) , x ∈ Ω;
u(x) = 0 , x ∈ ∂Ω.
(2.15)
To ﬁnd weak solutions of the problem, we study the functional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
f(x)udx
in H
1
0
(Ω).
Now suppose that there is some function u which happen to be a minimizer
of J() in H
1
0
(Ω), then we will demonstrate that u is actually a weak solution
of our problem.
Let v be any function in H
1
0
(Ω). Consider the realvalue function
g(t) = J(u +tv), t ∈ R.
Since u is a minimizer of J(), the function g(t) has a minimum at t = 0;
and therefore, we must have
g
(0) = 0, i.e.
d
dt
J(u +tv) [
t=0
= 0.
Here, explicitly,
J(u +tv) =
1
2
Ω
[D(u +tv)[
2
dx −
Ω
f(x)(u +tv)dx,
and
d
dt
J(u +tv) =
Ω
D(u +tv) Dvdx −
Ω
f(x)vdx.
Consequently, g
(0) = 0 yields
Ω
Du Dvdx −
Ω
f(x)vdx = 0, ∀v ∈ H
1
0
(Ω). (2.16)
This implies that u is a weak solution of problem (2.15).
Furthermore, if u is also second order diﬀerentiable, then through integra
tion by parts, we obtain
Ω
[−´u −f(x)] vdx = 0.
Since this is true for any v ∈ H
1
0
(Ω), we conclude
−´u −f(x) = 0.
52 2 Existence of Weak Solutions
Therefore, u is a classical solution of the problem.
From the above argument, the readers can see that, in order to be a weak
solution of the problem, here u need not to be a minima; it can be a maxima,
or a saddle point of the functional; or more generally, any point that satisﬁes
d
dt
J(u +tv) [
t=0
.
More precisely, we have
Deﬁnition 2.4.1 Let J be a linear functional on a Banach space X.
i) We say that J is Frechet diﬀerentiable at u ∈ X if there exists a con
tinuous linear map L = L(u) from X to X
∗
satisfying:
For any > 0, there is a δ = δ(, u), such that
[J(u +v) −J(u)− < L(u), v > [ ≤ v
X
whenever v
X
< δ,
where
< L(u), v >:= L(u)(v)
is the duality between X and X
∗
.
The mapping L(u) is usually denoted by J
(u).
ii) A critical point of J is a point at which J
(u) = 0, that is
< J
(u), v >= 0 ∀v ∈ X.
iii) We call J
(u) = 0 the EulerLagrange equation of the functional J.
Remark 2.4.1 One can verify that, if J is Frechet diﬀerentiable at u, then
< J
(u), v >= lim
t→0
J(u +tv) −J(u)
t
=
d
dt
J(u +tv) [
t=0
.
2.4.3 Existence of Minimizers
In the previous subsection, we have seen that the critical points of the func
tional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
f(x)udx
are weak solutions of the Dirichlet problem (2.15). Then our next question is:
Does the functional J actually possess a critical point?
We will show that there exists a minimizer of J. To this end, we verify
that J possesses the following three properties:
i) boundedness from below,
ii) coercivity, and
iii) weak lower semicontinuity.
i) Boundedness from Below.
2.4 Variational Methods 53
First we prove that J is bounded from below in H := H
1
0
(Ω) if f ∈ L
2
(Ω).
Again, for simplicity, we use the equivalent norm in H:
u
H
=
Ω
[Du[
2
dx.
By Poincar´e and H¨older inequalities, we have
J(u) ≥
1
2
u
2
H
−Cu
H
f
L
2
(Ω)
=
1
2
u
H
−Cf
L
2
(Ω)
2
−
C
2
2
f
2
L
2
(Ω)
≥ −
C
2
2
f
2
L
2
(Ω)
. (2.17)
Therefore, the functional J is bounded from below.
ii) Coercivity. However, A functional that is bounded from below does not
guarantee that it has a minimum. A simple counter example is
g(x) =
1
1 +x
2
, x ∈ R
1
.
Obviously, the inﬁmum of g(x) is 1, but it can never be attained. If we take a
minimizing sequence ¦x
k
¦ such that g(x
k
)→1, then x
k
goes away to inﬁnity.
This suggests that in order to prevent a minimizing sequence from ‘leaking’ to
inﬁnity, we need to have some sort of ‘coercive’ condition on our functional J.
That is, if a sequence ¦u
k
¦ goes to inﬁnity, i.e. if u
H
→∞, then J(u
k
) must
also become unbounded. Therefore a minimizing sequence would be retained
in a bounded set. And this is indeed true for our functional J. Actually, from
the second part of (2.17), one can see that
if u
k

H
→∞, then J(u
k
)→∞.
This implies that any minimizing sequence must be bounded in H.
iii) Weak Lower Semicontinuity. If H is a ﬁnite dimensional space, then a
bounded minimizing sequence ¦u
k
¦ would possesses a convergent subsequence,
and the limit would be a minimizer of J. This is no longer true in inﬁnite
dimensional spaces. We therefore turn our attention to weak topology.
Deﬁnition 2.4.2 Let X be a real Banach space. If a sequence ¦u
k
¦ ⊂ X
satisﬁes
< f, u
k
> → < f, u
o
> as k→∞
for every bounded linear functional f on X; then we say that ¦u
k
¦ converges
weakly to u
o
in X, and write
u
k
u
o
.
54 2 Existence of Weak Solutions
Deﬁnition 2.4.3 A Banach space is called reﬂexive if (X
∗
)
∗
= X, where X
∗
is the dual of X.
Theorem 2.4.1 (Weak Compactness.) Let X be a reﬂexive Banach space.
Assume that the sequence ¦u
k
¦ is bounded in X. Then there exists a subse
quence of ¦u
k
¦, which converges weakly in X.
Now let’s come back to our Hilbert space H := H
1
0
(Ω). By the Reisz
Representation Theorem, for every linear functional f on H, there is a unique
v ∈ H, such that
< f, u >= (v, u), ∀u ∈ H.
It follows that H
∗
= H, and therefore H is reﬂexive. By the Coercivity and
Weak Compactness Theorem, now a minimizing sequence ¦u
k
¦ is bounded,
and hence possesses a subsequence ¦u
ki
¦ that converges weakly to an element
u
o
in H:
(u
ki
, v)→(u
o
, v), ∀v ∈ H.
Naturally, we wish that the functional J we consider is continuous with respect
to this weak convergence, that is
lim
i→∞
J(u
ki
) = J(u
o
),
then u
o
would be our desired minimizer. However, this is not the case with our
functional, nor is it true for most other functionals of interest. Fortunately,
we do not really need such a continuity, instead, we only need a weaker one:
J(u
o
) ≤ liminf
i→∞
J(u
ki
).
Since on the other hand, by the deﬁnition of a minimizing sequence, we obvi
ously have
J(u
o
) ≥ liminf
i→∞
J(u
ki
),
and hence
J(u
o
) = liminf
i→∞
J(u
ki
).
Therefore, u
o
is a minimizer.
Deﬁnition 2.4.4 We say that a functional J() is weakly lower semicontinuous
on a Banach space X, if for every weakly convergent sequence
u
k
u
o
, in X,
we have
J(u
o
) ≤ liminf
k→∞
J(u
k
).
2.4 Variational Methods 55
Now we show that our functional J is weakly lower semicontinuous in H.
Assume that ¦u
k
¦ ⊂ H, and
u
k
u
o
in H.
Since for each ﬁxed f ∈ L
2n
n+2
(Ω),
Ω
fudx is a linear functional on H, it
follows immediately
Ω
fu
k
dx→
Ω
fu
o
dx, as k→∞. (2.18)
From the algebraic inequality
a
2
+b
2
≥ 2ab,
we have
Ω
[Du
k
[
2
dx ≥
Ω
[Du
o
[
2
dx + 2
Ω
Du
o
(Du
k
−Du
o
)dx.
Here the second term on the right goes to zero due to the weak convergence
u
k
u
o
. Therefore
liminf
k→∞
Ω
[Du
k
[
2
dx ≥
Ω
[Du
o
[
2
dx. (2.19)
Now (2.18) and (2.19) imply the weakly lower semicontinuity. So far, we
have proved
Theorem 2.4.2 Assume that Ω is a bounded domain with smooth boundary.
Then for every f ∈ L
2n
n+2
(Ω), the functional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
fudx
possesses a minimum u
o
in H
1
0
(Ω), which is a weak solution of the boundary
value problem
−´u = f(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
2.4.4 Existence of Minimizers Under Constraints
Now we consider the semilinear Dirichlet problem
−´u = [u[
p−1
u, x ∈ Ω,
u(x) = 0, x ∈ ∂Ω,
(2.20)
with 1 < p <
n+2
n−2
.
56 2 Existence of Weak Solutions
Obviously, u ≡ 0 is a solution. We call this a trivial solution. Of course,
what we are more interested is whether there exists nontrivial solutions. Let
J(u) =
1
2
Ω
[Du[
2
dx −
1
p + 1
Ω
[u[
p+1
dx.
It is easy to verify that
d
dt
J(u +tv) [
t=0
=
Ω
Du Dv −[u[
p−1
uv
dx.
Therefore, a critical point of the functional J in H := H
1
0
(Ω) is a weak solution
of (2.20)
However, this functional is not bounded from below. To see this, ﬁxed an
element u ∈ H and consider
J(tu) =
t
2
2
Ω
[Du[
2
dx −
t
p+1
p + 1
Ω
[u[
p+1
dx.
Noticing p + 1 > 2, we ﬁnd that
J(tu)→−∞, as t→∞.
Therefore, there is no minimizer of J(u). For this reason, instead of J, we will
consider another functional
I(u) :=
1
2
Ω
[Du[
2
dx,
in the constrain set
M := ¦u ∈ H [ G(u) :=
Ω
[u[
p+1
dx = 1¦.
We seek minimizers of I in M. Let ¦u
k
¦ ⊂ M be a minimizing sequence, i.e.
lim
k→∞
I(u
k
) = inf
u∈M
I(u) := m.
It follows that
Ω
[Du
k
[
2
dx is bounded, hence ¦u
k
¦ is bounded in H. By the
Weak Compactness Theorem, there exist a subsequence (for simplicity, we
still denote it by ¦u
k
¦), which converges weakly to u
o
in H. The weak lower
semicontinuity of the functional I yields
I(u
o
) ≤ liminf
k→∞
I(u
k
) = m. (2.21)
On the other hand, since p+1 <
2n
n −2
, by the Compact Sobolev Embedding
Theorem
H
1
(Ω) →→ L
p+1
(Ω),
2.4 Variational Methods 57
we ﬁnd that u
k
converges strongly to u
o
in L
p+1
(Ω), and hence
Ω
[u
o
[
p+1
dx = 1.
That is u
o
∈ M, and therefore
I(u
o
) ≥ m.
Together with (2.21), we obtain
I(u
o
) = m.
Now we have shown the existence of a minimizer u
o
of I in M. To link
u
o
to a weak solution of (2.20), we derive the corresponding EulerLagrange
equation for this minimizer under the constraint.
Theorem 2.4.3 (Lagrange Multiplier). let u be a minimizer of I in M, i.e.
I(u) = min
v∈M
I(v).
Then there exists a real number λ such that
I
(u) = λG
(u),
or
< I
(u), v >= λ < G
(u), v >, ∀v ∈ H.
Before proving the theorem, let’s ﬁrst recall the Lagrange Multiplier for
functions deﬁned on R
n
. Let f(x) and g(x) be smooth functions in R
n
. It is
well known that the critical points ( in particular, the minima ) x
o
of f(x)
under the constraint g(x) = c satisfy
Df(x
o
) = λDg(x
o
) (2.22)
for some constant λ. Geometrically, Df(x
o
) is a vector in R
n
. It can be de
composed as the sum of two perpendicular vectors:
Df(x
o
) = Df(x
o
) [
N
+Df(x
o
) [
T
,
where the two terms on the right hand side are the projection of Df(x
o
)
onto the normal and tangent spaces of the level set g(x) = c at point x
o
. By
deﬁnition, x
o
being a critical point of f(x) on the level set g(x) = c means
that the tangential projection Df(x
o
) [
T
= 0. While the normal projection
Df(x
o
) [
N
is parallel to Dg(x
o
), and we therefore have (2.22). Heuristically,
we may generalize this perception into inﬁnite dimensional space H. At a ﬁxed
point u ∈ H, I
(u) is a linear functional on H. By the Reisz Representation
58 2 Existence of Weak Solutions
Theorem, it can be identiﬁed as a point (or a vector) in H, denoted by
¯
I
(u),
such that
< I
(u), v >= (
¯
I
(u), v), ∀v ∈ H.
Similarly for G
(u), we have
¯
G
(u) ∈ H, and it is a normal vector of M at
point u. At a minimum u of I on the hyper surface ¦w ∈ H [ G(w) = 1¦,
¯
I
(u) must be perpendicular to the tangent space at point u, and hence
¯
I
(u) = λ
¯
G
(u).
We now prove this rigorously.
The Proof of Theorem 2.4.3.
Recall that if u is the minimum of J in the whole space H, then, for any
v ∈ H, we have
d
dt
I(u +tv) [
t=0
. (2.23)
Now u is only the minimum on M, we can no longer use (2.23) because u+tv
can not be kept on M, we can not guarantee that
G(u +tv) :=
Ω
[u +tv[
p+1
dx = 1
for all small t. To remedy this, we consider
g(t, s) := G(u +tv +sw).
For each small t, we try to show that there is an s = s(t), such that
G(u +tv +s(t)w) = 1.
Then we can calculate the variation of J(u +tv +s(t)w) as t→0.
In fact,
∂g
∂s
(0, 0) =< G
(u), w >= (p + 1)
Ω
[u[
p−1
uwdx.
Since u ∈ M,
Ω
[u[
p+1
dx = 1, there exists w (may just take w as u),
such that the right hand side of the above is not zero. Hence, according to the
Implicit Function Theorem, there exists a C
1
function s : R
1
→R
1
, such that
s(0) = 0, and g(t, s(t)) = 1,
for suﬃciently small t. Now u+tv+s(t)w is on M, and hence at the minimum
u of J on M, we must have
d
dt
I(u +tv +s(t)w) [
t=0
=< I
(u), v > + < I
(u), w > s
(0) = 0. (2.24)
2.4 Variational Methods 59
On the other hand, we have
0 =
dg
dt
(0, 0) =
∂g
∂t
(0, 0) +
∂g
∂s
(0, 0)s
(0)
= < G
(u), v > + < G
(u), w > s
(0).
Consequently
s
(0) = −
< G
(u), v >
< G
(u), w >
.
Substitute this into (2.24), and denote
λ =
< I
(u), w >
< G
(u), w >
,
we arrive at
< I
(u), v >= λ < G
(u), v >, ∀v ∈ H.
This completes the proof of the Theorem. ¯ .
Now let’s come back to the weak solution of problem (2.20). Our minimizer
u
o
of I under the constraint G(u) = 1 satisﬁes
< I
(u
o
), v >= λ < G
(u
o
), v >, ∀v ∈ H,
that is
Ω
Du
o
Dvdx = λ
Ω
[u
o
[
p−1
u
o
vdx, ∀v ∈ H. (2.25)
One can see that λ > 0. Let ˜ u = au
o
with λ/a
p−1
= 1. This is possible
since p > 1. Then it is easy to verify that
Ω
D˜ u Dvdx =
Ω
[˜ u[
p−1
˜ uvdx.
Now ˜ u is the desired weak solution of (2.20) in H. ¯ .
Exercise 2.4.1 Show that λ > 0 in (2.25).
2.4.5 Minimax Critical Points
Besides local or global minima and maxima, there are other types of critical
points: saddle points or minimax points. For a simple example, let’s consider
the function f(x, y) = x
2
−y
2
from R
2
to R
1
. (x, y) = (0, 0) is a critical point
of f, i.e. Df(0, 0) = 0. However, it is neither a local maximum nor a local
minimum. The graph of z = f(x, y) in R
3
looks like a horse saddle. If we
go on the graph along the direction of xaxis, (0,0) is the minimum, while
along the direction of yaxis, (0,0) is the maximum. For this reason, we also
call (0,0) a minimax critical point of f(x, y). The way to locate or to show
60 2 Existence of Weak Solutions
the existence of such minimax point is called a minimax variational method.
To illustrate the main idea, let’s still take this simple example. Suppose we
don’t know whether f(x, y) possesses a minimax point, but we do detect a
‘mountain range’ on the graph near xz plane. To show the existence of a mini
max point, we pick up two points on xy plane, for instance, P = (1, 1) and
Q = (−1, −1) on two sides of the ‘mountain range’. Let γ(s) be a continuous
curve linking the two points P and Q with γ(0) = P and γ(1) = Q. To go
from (P, f(P)) to (Q, f(Q)) on the graph, one needs ﬁrst to climb over the
’mountain range’ near xz plane, then to decent to the point (Q, f(Q)). Hence
on this path, there is a highest point. Then we take the inﬁmum among the
highest points on all the possible paths linking P and Q. More precisely, we
deﬁne
c = inf
γ∈Γ
max
s∈[0,1]
f(γ(s)),
where
Γ = ¦γ [ γ = γ(s) is continuous on [0,1], and γ(0) = P, γ(1) = Q¦
We then try to show that there exists a point P
o
at which f attains this
minimax value c, and more importantly, Df(P
o
) = 0.
For a functional J deﬁned on an inﬁnite dimensional space X, we will apply
a similar idea to seek its minimax critical points. Unlike ﬁnite dimensional
space, in an inﬁnite dimensional space, a bounded sequence may not converge.
Hence we require the functional J to satisfy some compactness condition,
which is the PalaisSmale condition.
Deﬁnition 2.4.5 We say that the functional J satisﬁes the PalaisSmale con
dition (in short, (PS)), if any sequence ¦u
k
¦ ∈ X for which J(u
k
) is bounded
and J
(u
k
)→0 as k→∞ possesses a convergent subsequence.
Under this compactness condition, Ambrosetti and Rabinowitz [AR] estab
lish the following wellknown theorem on the existence of a minimax critical
point.
Theorem 2.4.4 (Mountain Pass Theorem). Let X be a real Banach space
and J ∈ C
1
(X, R
1
). Suppose J satisﬁes (PS), J(0) = 0,
(J
1
) there exist constants ρ, α > 0 such that J [
∂Bρ(0)
≥ α, and
(J
2
) there is an e ∈ X ` B
ρ
(0), such that J(e) ≤ 0.
Then J possesses a critical value c ≥ α which can be characterized as
c = inf
γ∈Γ
max
u∈γ[0,1]
J(u)
where
Γ = ¦γ ∈ C([0, 1], R
1
) [ γ(0) = 0, γ(1) = e¦.
2.4 Variational Methods 61
Here, a critical value c means a value of the functional which is attained
by a critical point u, i.e. J
(u) = 0 and J(u) = c.
Heuristically, the Theorem says if the point 0 is located in a valley sur
rounded by a mountain range, and if at the other side of the range, there is
another low point e; then there must be a mountain pass from 0 to e which
contains a minimax critical point of J. In the ﬁgure below, on the graph of
J, the path connecting the two low points (0, J(0)) and (e, J(e)) contains a
highest point S, which is the lowest as compared to the highest points on the
nearby paths. This S is a saddle point, the minimax critical point of J.
The proof of the Theorem is mainly based on a deformation theorem which
exploits the change of topology of the level sets of J when the value of J
goes through a critical value. To illustrate this, let’s come back to the simple
example f(x, y) = x
2
−y
2
. Denote the level set of f by
f
c
:= ¦(x, y) ∈ R
2
[ f(x, y) ≤ c¦.
One can see that 0 is a critical value and for any a > 0, the level sets f
a
and
f
−a
has diﬀerent topology. f
a
is connected while f
−a
is not. If there is no
critical value between the two level sets, say f
a
and f
b
with 0 < a < b, then
one can continuously deform the set f
b
into f
a
. One convenient way is let the
points in the set f
b
‘ﬂow’ along the direction of −Df into f
a
; correspondingly,
the points on the graph ‘ﬂow downhills’ into a lower level. More precisely and
generally, we have
Theorem 2.4.5 (Deformation Theorem). Let X be a real Banach space. As
sume that J ∈ C
1
(X, R
1
) and satisﬁes the (PS). Suppose that c is not a
critical value of J.
62 2 Existence of Weak Solutions
Then for each suﬃciently small > 0, there exists a constant 0 < δ <
and a continuous mapping η(t, u) from [0, 1] X to X, such that
(i) η(0, u) = u, ∀u ∈ X,
(ii) η(1, u) = u, ∀u ∈J
−1
[c −, c +],
(iii) J(η(t, u) ≤ J(u), ∀u ∈ X, t ∈ [0, 1],
(iv) η(1, J
c+δ
) ⊂ J
c−δ
.
Roughly speaking, the Theorem says if c is not a critical value of J, then
one can continuously deform a higher level set J
c+δ
into a lower level J
c−δ
by
‘ﬂowing downhills’. Condition (ii) is to ensure that after the deformation, the
two points 0 and e as in the Mountain Pass theorem still remain ﬁxed.
Proof of the Deformation Theorem.
The main idea is to solve an ODE initial value problem (with respect to
t) for each u ∈ X, roughly look like the following:
dη
dt
(t, u) = −J
(η(t, u)),
η(0, u) = u.
(2.26)
That is, to let the ‘ﬂow’ go along the direction of −J
(η(t, u)), so that the
value of J(η(t, u) would decrease as t increases. However, we need to modify
the right hand side of the equation a little bit, so that the ﬂow will satisfy
other desired conditions.
Step 1 We ﬁrst choose a small > 0, so that the ﬂow would not go too
slow in J
c+
` J
c−
. This is actually guaranteed by the (PS). We claim that
there exist constants 0 < a, < 1, such that
J
(u) ≥ a, ∀u ∈ J
c+
` J
c−
. (2.27)
Otherwise, there would exist sequences a
k
→0,
k
→0 and elements u
k
∈ J
c+
k
`
J
c−
k
with J
(u
k
) ≤ a
k
. Then by virtue of the (PS) condition, there is a
subsequence ¦u
ki
¦ that converges to an element u
o
∈ X. Since J ∈ C
1
(X, R
1
),
we must have
J(u
o
) = c and J
(u
o
) = 0.
This is a contradiction with the assumption that c is not a critical value of J.
Therefore (2.27) holds.
Step 2. To satisfy condition (ii), i.e. to make the ﬂow stay still in the set
M := ¦u ∈ X [ J(u) ≤ c − or J(u) ≥ c +¦,
we could multiply the right hand side of equation (2.26) by dist(η, M), but
this would make the ﬂow go too slow near M. To modify further, we choose
0 < δ < , such that
δ <
a
2
2
, (2.28)
and let
2.4 Variational Methods 63
N := ¦u ∈ X [ c −δ ≤ J(u) ≤ c +δ¦.
Now for u ∈ X, deﬁne
g(u) :=
dist(u, M)
dist(u, M) +dist(u, N)
,
and let
h(s) :=
1 if 0 ≤ s ≤ 1
1
s
if s > 1.
We ﬁnally modify the −J
(η) as
F(η) := −g(η)h(J
(η))J
(η).
The introduction of the factor h() is to ensure the boundedness of F().
Step 3. Now for each ﬁxed u ∈ X, we consider the modiﬁed ODE problem
dη
dt
(t, u) = F(η(t, u))
η(0, u) = u.
One can verify that F is bounded and Lipschitz continuous on bounded sets,
and therefore by the wellknown ODE theory, there exists a unique solution
η(t, u) for all time t ≥ 0.
Condition (i) is satisﬁed automatically. From the deﬁnition of g
g(u) = 0 ∀u ∈ M;
hence condition (ii) is also satisﬁed.
To verify (iii) and (iv), we compute
d
dt
J(η(t, u)) = < J
(η(t, u)),
dη
dt
(t, u) >
= < J
(η(t, u)), F(η(t, u)) >
= −g(η(t, u))h(J
(η(t, u)))J
(η(t, u))
2
. (2.29)
In the above equality, obviously, the right hand side is nonpositive, and
hence J(η(t, u)) is nonincreasing. This veriﬁes (iii).
To see (iv), it suﬃce to show that for each u in N = J
c+δ
` J
c−δ
, we have
η(1, u) ∈ J
c−δ
.
Notice that for η ∈ N, g(η) ≡ 1, and (2.29) becomes
d
dt
J(η(t, u)) = −h(J
(η(t, u)))J
(η(t, u)
2
.
By the deﬁnition of h and (2.27), we have,
64 2 Existence of Weak Solutions
d
dt
J(η(t, u)) =
−J
(η(t, u))
2
≤ −a
2
if J
(η(t, u)) ≤ 1
−J
(η(t, u)) ≤ −a if J
(η(t, u)) > 1.
In any case, we derive
J(η(1, u)) ≤ J(η(0, u)) −a
2
≤ c +δ −a
2
≤ c −δ.
This establishes (iv) and therefore completes the proof of the Theorem. ¯ .
With this Deformation Theorem, the proof of the Mountain Pass Theorem
becomes very simple.
The Proof of the Mountain Pass Theorem.
First, due to the deﬁnition, we have c < ∞. To see this, we take a particular
path γ(t) = te and consider the function
g(t) = J(γ(t)).
It is continuous on the closed interval [0,1], and hence bounded from above.
Therefore
c ≤ max
u∈γ([0,1])
J(u) = max
t∈[0,1]
g(t) < ∞.
On the other hand, for any γ ∈ Γ
γ([0, 1]) ∩ ∂B
ρ
(0) = ∅.
Hence by condition (J
1
),
max
u∈γ([0,1])
J(u) ≥ inf
v∈∂Bρ(0)
J(v) ≥ α.
And it follows that c ≥ α.
Suppose the number c so deﬁned is not a critical value. We will use the
Deformation Theorem to continuously deform a path γ ∈ Γ down to the
level set J
c−δ
to derive a contradiction. More precisely, choose the in the
Deformation Theorem to be
α
2
, and 0 < δ < . From the deﬁnition of c, we
can pick a path γ ∈ Γ, such that
max
u∈γ([0,1])
J(u) ≤ c +δ,
i.e.
γ([0, 1]) ⊂ J
c+δ
.
Now by the Deformation Theorem, this path can be continuously deformed
down to the level set J
c−δ
, that is
η(1, γ([0, 1]) ⊂ J
c−δ
,
or in order words,
2.4 Variational Methods 65
max
u∈η(1,γ([0,1]))
J(u) ≤ c −δ. (2.30)
On the other hand, since 0 and e are in J
c−
, by (ii) of the Deformation
Theorem,
η(1, 0) = 0 and η(1, e) = e.
This means that η(1, γ([0, 1])) is also a path in Γ, and therefore we must have
max
u∈η(1,γ([0,1]))
J(u) ≥ c.
This contradicts with (2.30) and hence completes the proof of the Theorem.
¯ .
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Now we apply the Mountain Pass Theorem to seek weak solutions of the
semilinear elliptic problem
−´u = f(x, u), x ∈ Ω,
u(x) = 0, x ∈ ∂Ω.
(2.31)
Although the method we illustrate here could be applied to a more general
second order uniformly elliptic equations, we prefer this simple model that
better illustrates the main ideas.
We assume that f(x, u) satisﬁes
(f
1
)f(x, s) ∈ C(
¯
Ω R
1
, R
1
),
(f
2
) there exists constants c
1
, c
2
≥ 0, such that
[f(x, s)[ ≤ c
1
+c
2
[s[
p
,
with 0 ≤ p <
n+2
n−2
if n > 2. If n = 1, (f
2
) can be dropped; if n = 2, it suﬃce
that
[f(x, s)[ ≤ c
1
e
φ(s)
,
with φ(s)/s
2
→0 as [s[→∞.
(f
3
)f(x, s) = o([s[) as s→0, and
(f
4
) there are constants µ > 2 and r ≥ 0 such that for [s[ ≥ r,
0 < µF(x, s) ≤ sf(x, s),
where
F(x, s) =
s
0
f(x, t)dt.
Remark 2.4.2 A simple example of such function f(x, u) is [u[
p−1
u.
Theorem 2.4.6 (Rabinowitz [Ra]) If f satisﬁes condition (f
1
) − (f
4
), then
problem (2.31) possesses a nontrivial weak solution.
66 2 Existence of Weak Solutions
To ﬁnd weak solutions of the problem, we seek minimax critical points of
the functional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
F(x, u)dx
on the Hilbert space H := H
1
0
(Ω). We verify that J satisﬁes the conditions in
the Mountain Pass Theorem.
We ﬁrst need to show that J ∈ C
1
(H, R
1
). Since we have veriﬁed that the
ﬁrst term
Ω
[Du[
2
dx
is continuously diﬀerentiable, now we only need to check the second term
I(u) :=
Ω
F(x, u)dx.
Proposition 2.4.1 (Rabinowitz [Ra]) Let Ω ⊂ R
n
be a bounded domain and
let g satisfy
(g
1
)g ∈ C(
¯
Ω R
1
, R
1
), and
(g
2
) there are constants r, s ≥ 1 and a
1
, a
2
≥ 0 such that
[g(x, ξ)[ ≤ a
1
+a
2
[ξ[
r/s
for all x ∈
¯
Ω, ξ ∈ R
1
.
Then the map u(x)→g(x, u(x)) belongs to C(L
r
(Ω), L
s
(Ω)).
Proof. By (g
2
),
Ω
[g(x, u(x))[
s
dx ≤
Ω
(a
1
+a
2
[u[
r/s
)
s
dx
≤ a
3
Ω
(1 +[u[
r
)dx.
It follows that if u ∈ L
r
(Ω), then g(x, u) ∈ L
s
(Ω). That is
g(x, ) : L
r
(Ω)→L
s
(Ω).
Now we verify the continuity of the map. Fix u ∈ L
r
(Ω). Given any > 0, we
want to show that, there is a δ > 0, such that,
whenever φ
L
r
(Ω)
< δ, we have g(, u +φ) −g(, u)
L
s
(Ω)
< . (2.32)
Let
Ω
1
:= ¦x ∈
¯
Ω [ [φ(x)[ ≤ m¦
and
Ω
2
:=
¯
Ω ` Ω
1
.
Let
2.4 Variational Methods 67
I
i
=
Ωi
[g(x, u(x) +φ(x)) −g(x, u(x))[
s
dx, i = 1, 2.
By the continuity of g(x, ), for any η > 0, there exists m > 0, such that
[g(x, u(x) +φ(x)) −g(x, u(x))[ ≤ η, ∀x ∈ Ω
1
.
It follows that
I
1
≤ [Ω
1
[η
s
≤ [Ω[η
s
,
where [Ω[ is the volume of Ω. For the given > 0, we choose η and then m,
so that
[Ω[η
s
<
2
s
,
and therefore
I
1
≤
2
s
. (2.33)
Now we ﬁx this m, and estimate I
2
.
By (g
2
), we have
I
2
≤
Ω2
(c
1
+c
2
([u[
r
+[φ[
r
)) dx ≤ c
1
[Ω
2
[ +c
2
Ω2
[u[
r
dx +c
2
φ
r
L
r
(Ω)
,
(2.34)
for some constant c
1
and c
2
.
Moreover, as φ
L
r
(Ω)
< δ,
δ
r
>
Ω
[φ[
r
dx ≥ m
r
[Ω
2
[. (2.35)
Since u ∈ L
r
(Ω), we can make
Ω2
[u[
r
dx as small as we wish by letting [Ω
2
[
small. Now by (2.35), we can choose δ so small, such that [Ω
2
[ is suﬃciently
small, and hence the right hand side of (2.34) is less than
2
s
. Consequently,
by (2.33), for such a small δ,
I
1
+I
2
≤
2
s
+
2
s
.
This veriﬁes (2.32), and therefore completes the proof of the Proposition. ¯ .
Proposition 2.4.2 (Rabinowitz [Ra]) Assume that f(x, s) satisﬁes (f
1
) and
(f
2
). Let n ≥ 3. Then I ∈ C
1
(H, R) and
< I
(u), v >=
Ω
f(x, u)vdx, ∀v ∈ H.
Moreover, I() is weakly continuous and I
() is compact, i.e.
I(u
k
)→I(u) and I
(u
k
)→I
(u), whenever u
k
u in H;
68 2 Existence of Weak Solutions
Proof. We will ﬁrst show that I is Frechet diﬀerentiable on H and then prove
that I
(u) is continuous.
1. Let u, v ∈ H. We want to show that given any > 0, there exists a
δ = δ(, u), such that
Q(u, v) := [I(u +v) −I(u) −
Ω
f(x, u)vdx[ ≤ v (2.36)
whenever v ≤ δ. Here v denotes the H
1
(Ω) norm of v.
In fact,
Q(u, v) ≤
Ω
[F(x, u(x) +v(x)) −F(x, u(x)) −f(x, u(x))v(x)[dx
=
Ω
[f(x, u(x) +ξ(x)) −f(x, u(x))[ [v(x)[dx
≤ f(, u +ξ) −f(, u)
L
2n
n+2
(Ω)
v
L
2n
n−2
(Ω)
≤ f(, u +ξ) −f(, u)
L
2n
n+2
(Ω)
Kv. (2.37)
Here we have applied the Mean Value Theorem (with [ξ(x)[ ≤ [v(x)[), the
H¨older inequality, and the Sobolev inequality
v
L
2n
n−2
(Ω)
≤ Kv.
In Proposition 2.4.1, let r =
2n
n−2
and s =
2n
n+2
. Then by (f
2
), we deduce
that the map
u(x)→f(x, u(x))
is continuous from L
2n
n−2
(Ω) to L
2n
n+2
(Ω). Hence for any given > 0, we
can choose suﬃciently small δ > 0, such that whenever v < δ, we have
v
L
2n
n−2
(Ω)
< Kδ, and hence ξ
L
2n
n−2
(Ω)
< Kδ, and therefore
f(, u +ξ) −f(, u)
L
2n
n+2
(Ω)
<
K
.
It follows from (2.37) that
Q(u, v) < , whenever v < δ.
This veriﬁes (2.36) and hence I() is diﬀerentiable, and
< I
(u), v >=
Ω
f(x, u(x))v(x)dx. (2.38)
2. To verify the continuity of I
(), we show that for each ﬁxed u,
I
(u +φ) −I
(u)→0, as φ→0. (2.39)
2.4 Variational Methods 69
By deﬁnition,
I
(u +φ) −I
(u) = sup
v≤1
< I
(u +φ) −I
(u), v >
= sup
v≤1
Ω
[f(x, u(x) +φ(x)) −f(x, u(x))]v(x)dx
≤ sup
v≤1
f(, u +φ) −f(, u)
L
2n
n+2
(Ω)
Kv
≤ Kf(, u +φ) −f(, u)
L
2n
n+2
(Ω)
(2.40)
Here we have applied (2.38), the H¨older inequality, and the Sobolev inequality.
Now (2.39) follows again from Proposition 2.4.1. Therefore, I
() is continuous.
3. Finally, we verify the weak continuity of I(u) and the compactness of
I
(u). Assume that
¦u
k
¦ ⊂ H, and u
k
u in H.
We show that
I(u
k
)→I(u) as k→∞.
In fact,
[I(u
k
) −I(u)[ ≤
Ω
[F(x, u
k
(x)) −F(x, u(x))[dx
≤
Ω
[f(x, ξ
k
(x))[ [u
k
(x) −u(x)[dx
≤ f(, ξ
k
)
L
r
(Ω)
u
k
−u
L
q
(Ω)
≤ Cξ
k

L
2n
n−2
(Ω)
u
k
−u
L
q
(Ω)
(2.41)
Here we have applied the Mean Value Theorem and the H¨older inequality with
ξ
k
(x) between u
k
(x) and u(x), and
r =
2n
p(n −2)
, q =
r
r −1
=
2n
2n −p(n −2)
.
It is easy to see that
q <
2n
n −2
.
Since a weak convergent sequence is bounded, ¦u
k
¦ is bounded in H,
and hence by Sobolev Embedding, it is bounded in L
2n
n−2
(Ω), so does ¦ξ
k
¦.
Furthermore, by Compact Embedding of H into L
q
(Ω), the last term of (2.41)
approaches zero as k→∞. This veriﬁes the weak continuity of I().
Using a similar argument as in deriving (2.40), we obtain
I
(u
k
) −I
(u) ≤ Kf(, u
k
) −f(, u)
L
2n
n+2
(Ω)
. (2.42)
70 2 Existence of Weak Solutions
Since p <
n+2
n−2
, we have
2np
n+2
<
2n
n−2
. Then by the Compact Embedding,
H →→ L
2np
n+2
(Ω),
we derive
u
k
→u strongly in L
2np
n+2
(Ω).
Consequently, form Proposition 2.4.1,
f(, u
k
)→f(, u) strongly in L
2n
n+2
(Ω).
Now, together with (2.42), we arrive at
I
(u
k
)→I
(u) as k→∞.
This completes the proof of the proposition. ¯ .
Now let’s come back to Problem (2.31) and prove Theorem 2.4.6. We will
verify that J() satisﬁes the conditions in the Mountain Pass Theorem, and
thus possesses a nontrivial minimax critical point, which is a weak solution
of (2.31).
First we verify the (PS) condition. Assume that ¦u
k
¦ is a sequence in H
satisfying
[J(u
k
)[ ≤ M and J
(u
k
)→0.
We show that ¦u
k
¦ possesses a convergent subsequence.
By (f
4
), we have
µM +J
(u
k
)u
k
 ≥ µJ(u
k
)− < J
(u
k
), u
k
>
= (
µ
2
−1)
Ω
[Du
k
[
2
dx +
Ω
[f(x, u
k
)u
k
−µF(x, u
k
)] dx
≥ (
µ
2
−1)
Ω
[Du
k
[
2
dx +
u
k
(x)≤r
[f(x, u
k
)u
k
−µF(x, u
k
)] dx
≥ (
µ
2
−1)
Ω
[Du
k
[
2
dx −(a
1
r +a
2
r
p+1
)[Ω[
= c
o
u
k

2
−C
o
, (2.43)
for some positive constant c
o
and C
o
. Since J
(u
k
) is bounded, (2.43) implies
that ¦u
k
¦ must be bounded in H. Hence there exists a subsequence (still
denoted by ¦u
k
¦), which converges weakly to some element u
o
in H.
Let A : H→H
∗
be the duality map between H and its dual space H
∗
.
Then for any u, v ∈ H,
< Au, v >=
Ω
Du Dvdx.
Consequently,
2.4 Variational Methods 71
A
−1
J
(u) = u −A
−1
f(, u). (2.44)
From Proposition 2.4.2,
f(, u
k
)→f(, u) strongly in H
∗
.
It follows that
u
k
= A
−1
J
(u
k
) +A
−1
f(, u
k
)→A
−1
f(, u
o
), as k→∞.
This veriﬁes the (PS).
To see there is a ‘mountain range’ surrounding the origin, we estimate
Ω
F(x, u)dx. By (f
3
), for any given > 0, there is a δ > 0, such that, for all
x ∈
¯
Ω,
[F(x, s)[ ≤ [s[
2
, whenever [s[ < δ. (2.45)
While by (f
2
), there is a constant M = M(δ), such that for all x ∈
¯
Ω,
[F(x, s)[ ≤ M[s[
p+1
. (2.46)
Combining (2.45) and (2.46), we have,
[F(x, s)[ ≤ [s[
2
+M[s[
p+1
, for all x ∈
¯
Ω and for all s ∈ R.
It follows, via the Poincar´e and the Sobolev inequality, that
[
Ω
F(x, u)dx[ ≤
Ω
u
2
dx+M
Ω
[u[
p+1
dx ≤ C( +Mu
p−1
)u
2
. (2.47)
And consequently,
J(u) ≥
¸
1
2
−C( +Mu
p−1
)
u
2
. (2.48)
Choose =
1
8C
. For this , we ﬁx M; then choose u = ρ small, so that
CMρ
p−1
≤
1
8
.
Then form (2.48), we deduce
J[
∂Bρ(0)
≥
1
4
ρ
2
.
To see the existence of a point e ∈ H, such that J(e) ≤ 0, we ﬁx any u = 0
in H, and consider
J(tu) =
t
2
2
Ω
[Du[
2
dx −
Ω
F(x, tu)dx. (2.49)
72 2 Existence of Weak Solutions
By (f
4
), we have for [s[ ≥ r,
d
ds
[s[
−µ
F(x, s)
= [s[
−µ−2
s (−µF(x, s) +sf(x, s))
≥ 0 if s ≥ r
≤ 0 if s ≤ −r.
In any case, we deduce, for [s[ ≥ r,
F(x, s) ≥ a
3
[s[
µ
;
and it follows that
F(x, s) ≥ a
3
[s[
µ
−a
4
. (2.50)
Combining (2.49) and (2.50), and taking into account that µ > 2, we conclude
that
J(tu)→−∞, as t→∞.
Now J satisﬁes all the conditions in the Mountain Pass Theorem, hence it
possesses a minimax critical point u
o
, which is a weak solution of (2.31).
Moreover, J(u
o
) > 0, therefore it is a nontrivial solution. This completes the
proof of the Theorem 2.4.6. ¯ .
3
Regularity of Solutions
3.1 W
2,p
a Priori Estimates
3.1.1 Newtonian Potentials,
3.1.2 Uniform Elliptic Equations
3.2 W
2,p
Regularity of Solutions
3.2.1 The Case p ≥ 2.
3.2.2 The Case 1 < p < 2.
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and
Regularity
3.3 Regularity Lifting
3.3.1 Bootstraps
3.3.2 the Regularity Lifting Theorem
3.3.3 Applications to PDEs
3.3.4 Applications to Integral Equations
In the previous chapter, we used functional analysis, mainly calculus of
variations to seek the existence of weak solutions for second order linear or
semilinear elliptic equations. These weak solutions we obtained were in the
Sobolev space W
1,2
and hence, by Sobolev imbedding, in L
p
space for p ≤
n+2
n−2
.
Usually, the weak solutions are easier to obtain than the classical ones, and
this is particularly true for nonlinear equations. However, in practice, we are
often required to ﬁnd classical solutions. Therefore, one would naturally want
to know whether these weak solutions are actually diﬀerentiable, so that they
can become classical ones. These questions will be answered here.
In this chapter, we will introduce methods to show that, in most cases, a
weak solution is in fact smooth. This is called the regularity argument. Very
often, the regularity is equivalent to the a priori estimate of solutions. To see
the diﬀerence between the regularity and the a priori estimate, we take the
equation
74 3 Regularity of Solutions
−´u = f(x)
for example. Roughly speaking, the W
2,p
regularity theory infers that
If u ∈ W
1,p
is a weak solution and if f ∈ L
p
, then u ∈ W
2,p
.
While the W
2,p
a priori estimate says
If u ∈ W
1,p
0
∩ W
2,p
is a weak solution and if f ∈ L
p
, then
u
W
2,p ≤ Cf
L
p.
In the a priori estimate, we preassumed that u is in W
2,p
. It might seem
a little bit surprising at this moment that the a priori estimates turn out to
be powerful tools in deriving regularities. The readers will see in section 3.2
how the a priori estimates and uniqueness of solutions lead to the regularity.
Let Ω be an open bounded set in R
n
. Let a
ij
(x), b
i
(x), and c(x) be bounded
functions on Ω with a
ij
(x) = a
ji
(x). Consider the second order partial diﬀer
ential equation
Lu := −
n
¸
i,j=1
a
ij
(x)u
xixj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u = f(x). (3.1)
We assume L is uniformly elliptic, that is, there exists a constant δ > 0,
such that
n
¸
i,j=1
a
ij
(x)ξ
i
ξ
j
≥ δ[ξ[
2
for a. e. x ∈ Ω and for all ξ ∈ R
n
.
In Section 3.1, we establish W
2,p
a priori estimate for the solutions of (3.1).
We show that if the function f(x) is in L
p
(Ω), and u ∈ W
2,p
(Ω) is a strong
solution of (3.1), then there exists a constant C, such that
u
W
2,p ≤ C(u
L
p +f
L
p).
We will ﬁrst establish this for a Newtonian potential, then use the method of
“frozen coeﬃcients” to generalize it to uniformly elliptic equations.
In Section 3.2, we establish a W
2,p
regularity. We show that if f(x) is in
L
p
(Ω), then any weak solution u of (3.1) is in W
2,p
(Ω).
In Section 3.3, we introduce and prove a regularity lifting theorem. It
provides a simple method for the study of regularity. The version we present
here contains some new developments. It is much more general and very easy
to use. We believe that the method will be helpful to both experts and non
experts in the ﬁeld.
We will also use examples to show how this method can be applied to boost
the regularity of the solutions for PDEs as well as for integral equations.
3.1 W
2,p
a Priori Estimates 75
3.1 W
2,p
a Priori Estimates
We establish W
2,p
estimate for the solutions of
Lu = f(x) , x ∈ Ω. (3.2)
First we start with the Newtonian potential.
3.1.1 Newtonian Potentials
Let Ω be a bounded domain of R
n
. Let
Γ(x) =
1
n(n −2)ω
n
1
[x[
n−2
n ≥ 3,
be the fundamental solution of the Laplace equation. The Newtonian potential
is known as
w(x) =
Ω
Γ(x −y)f(y)dy.
We prove
Theorem 3.1.1 Let f ∈ L
p
(Ω) for 1 < p < ∞, and let w be the Newtonian
potential of f. Then w ∈ W
2,p
(Ω) and
´w = f(x), a.e. x ∈ Ω, (3.3)
and
D
2
w
L
p
(Ω)
≤ Cf
L
p
(Ω)
. (3.4)
Write w = Nf, where N is obviously a linear operator. For ﬁxed i, j,
deﬁne the linear operator T as
Tf = D
ij
Nf = D
ij
R
n
Γ(x −y)f(y)dy
To prove Theorem 3.1.1, it is equivalent to show that
T : L
p
(Ω) −→ L
p
(Ω) is a bounded linear operator . (3.5)
Our proof can actually be applied to more general operators. To this end,
we introduce the concept of weak type and strong type operators.
Deﬁne
µ
f
(t) = [¦x ∈ Ω[ [f(x)[ > t¦[.
For p ≥ 1, a weak L
p
space L
p
w
(Ω) is the collection of functions f that
satisfy
f
p
L
p
w
(Ω)
= sup¦µ
f
(t)t
p
, ∀t > 0¦ < ∞.
76 3 Regularity of Solutions
An operator T : L
p
(Ω) → L
q
(Ω) is of strong type (p, q) if
Tf
L
q
(Ω)
≤ Cf
L
p
(Ω)
, ∀f ∈ L
p
(Ω).
T is of weak type (p, q) if
Tf
L
q
w
(Ω)
≤ Cf
L
p
(Ω)
, ∀f ∈ L
p
(Ω).
Outline of Proof of (3.5).
We decompose the proof into the proofs of the following four lemmas.
First, we use Fourier transform to show
Lemma 3.1.1 T : L
2
(Ω) → L
2
(Ω) is a bounded linear operator. i.e. T is of
strong type (2, 2).
Secondly, we use CalderonZygmund’s Decomposition Lemma to prove
Lemma 3.1.2 T is of weak type (1,1).
Thirdly, we employ Marcinkiewicz Interpolation Theorem to derive
Lemma 3.1.3 T is of strong type (r, r) for any 1 < r ≤ 2.
Finally, by duality, we conclude
Lemma 3.1.4 T is of strong type (p, p), for 1 < p < ∞.
The Proof of Lemma 3.1.1.
First we consider f ∈ C
∞
0
(Ω) ⊂ C
∞
0
(R
n
). Then obviously w ∈ C
∞
(R
n
)
and satisﬁes
´w = f(x) , ∀x ∈ R
n
.
Let
ˆ
f(ξ) =
1
(2π)
n/2
R
n
e
i<x,ξ>
f(x)dx,
be the Fourier transform of f, where
< x, ξ >=
n
¸
k
x
k
ξ
k
is the inner product of R
n
, and i =
√
−1.
We need the following simple property of the transform
¯
D
k
f(ξ) = −iξ
k
ˆ
f(ξ) , and
¯
D
jk
f(ξ) = −ξ
j
ξ
k
ˆ
f(ξ), (3.6)
and the wellknown Plancherel’s Theorem:

ˆ
f
L
2
(R
n
)
= f
L
2
(R
n
)
. (3.7)
It follows from (3.6) and (3.7),
3.1 W
2,p
a Priori Estimates 77
Ω
[f(x)[
2
dx =
R
n
[f(x)[
2
dx
=
R
n
[´w(x)[
2
dx
=
R
n
[
¯
´w(ξ)[
2
dx
=
R
n
[ξ[
4
[ ˆ w(ξ)[
2
dξ
=
n
¸
k,j=1
R
n
ξ
2
k
ξ
2
j
[ ˆ w(ξ)[
2
dξ
=
n
¸
k,j=1
R
n
[
¯
D
kj
w(ξ)[
2
dξ
=
n
¸
k,j=1
R
n
[D
kj
w(x)[
2
dx
=
R
n
[D
2
w[
2
dx.
Consequently,
Tf
L
2
(Ω)
≤ f
L
2
(Ω)
, ∀f ∈ C
∞
0
(Ω). (3.8)
Now to verify (3.8) for any f ∈ L
2
(Ω), we simply pick a sequence ¦f
k
¦ ⊂
C
∞
0
(Ω) that converges to f in L
2
(Ω), and take the limit. This proves the
Lemma. ¯ .
The Proof of Lemma 3.1.2.
We need the following wellknown Calder´onZygmund’s Decomposition
Lemma (see its proof in Appendix C).
Lemma 3.1.5 For f ∈ L
1
(R
n
), ﬁxed α > 0, ∃ E, G such that
(i) R
n
= E ∪ G, E ∩ G = ∅
(ii) [f(x)[ ≤ α, a.e. x ∈ E
(iii) G =
∞
¸
k=1
Q
k
, ¦Q
k
¦: disjoint cubes s.t.
α <
1
[Q
k
[
Q
k
[f(x)[dx ≤ 2
n
α
For any f ∈ L
1
(Ω), to apply the Calder´onZygmund’s Decomposition
Lemma, we ﬁrst extend f to vanish outside Ω. For any given α > 0, ﬁx a
large cube Q
o
in R
n
, such that
Qo
[f(x)[dx ≤ α[Q
o
[.
78 3 Regularity of Solutions
We show that
µ
Tf
(α) := [¦x ∈ R
n
[ [Tf(x)[ ≥ α¦[ ≤ C
f
L
1
(Ω)
α
. (3.9)
Split the function f into the “good” part g and “bad” part b: f = g + b,
where
g(x) =
f(x) for x ∈ E
1
Q
k

Q
k
f(x)dx for x ∈ Q
k
, k = 1, 2, .
Since the operator T is linear, Tf = Tg +Tb; and therefore
µ
Tf
(α) ≤ µ
Tg
(
α
2
) +µ
Tb
(
α
2
).
We will estimate µ
Tg
(
α
2
) and µ
Tb
(
α
2
) separately. The estimate of the ﬁrst one
is easy, because g ∈ L
2
. To estimate µ
Tb
(
α
2
), we divide R
n
into two parts: G
∗
and E
∗
:= R
n
` G
∗
(see below for the precise deﬁnition of G
∗
.) We will show
that
(a) [G
∗
[ ≤
C
α
f
L
1
(Ω)
, and
(b) [¦x ∈ E
∗
[ [Tb(x)[ ≥
α
2
¦[ ≤
C
α
E
∗
[Tb(x)[dx ≤
C
α
f
L
1
(Ω)
.
These will imply the desired estimate for µ
Tb
(
α
2
).
Obviously, from the deﬁnition of g, we have
[g(x)[ ≤ 2
n
α , almost everywhere (3.10)
and
b(x) = 0 for x ∈ E , and
Q
k
b(x)dx = 0 for k = 1, 2, . (3.11)
We ﬁrst estimate µ
Tg
. By Lemma 3.1.1 and (3.10), we derive
µ
Tg
(
α
2
) ≤
4
α
2
g
2
(x)dx ≤
2
n+2
α
[g(x)[dx ≤
2
n+2
α
[f(x)[dx. (3.12)
We then estimate µ
Tb
. Let
b
k
(x) =
b(x) for x ∈ Q
k
0 elsewhere .
Then
Tb =
∞
¸
k=1
Tb
k
.
For each ﬁxed k, let ¦b
km
¦ ⊂ C
∞
0
(Q
k
) be a sequence converging to b
k
in
L
2
(Ω) satisfying
Q
k
b
km
(x)dx =
Q
k
b
k
(x)dx = 0. (3.13)
3.1 W
2,p
a Priori Estimates 79
From the expression
Tb
km
(x) =
Q
k
D
ij
Γ(x −y)b
km
(y)dy,
one can see that due to the singularity of D
ij
Γ(x−y) in Q
k
and the fact that
b
km
may not be bounded in Q
k
, one can only estimate Tb
km
(x) when x is of
a positive distance away from Q
k
. For this reason, we cover Q
k
by a bigger
ball B
k
which has the same center as Q
k
, and the radius of the ball δ
k
is the
same as the diameter of Q
k
. We now estimate the integral in the complement
of B
k
:
R
n
\B
k
[Tb
km
[(x)dx =
Qo\B
k
[
Q
k
D
ij
Γ(x −y)b
km
(y)dy[dx
=
Qo\B
k
[
Q
k
[D
ij
Γ(x −y) −D
ij
Γ(x − ¯ y)]b
km
(y)dy[dx
≤ C δ
k
Qo\B
k
1
[x[
n+1
dx [
Q
k
b
km
(y)dy[
≤ C
1
δ
k
∞
δ
k
1
r
2
dr
Q
k
[b
km
(y)[dy
≤ C
2
Q
k
[b
km
(y)[dy (3.14)
where ¯ y is the center of the cube Q
k
. One small trick here is to add a term
(which is 0 by (3.13)):
Q
k
D
ij
Γ(x − ¯ y)b
km
(y)dy
to produce a helpful factor δ
k
by applying the mean value theorem to the
diﬀerence:
D
ij
Γ(x −y) −D
ij
Γ(x − ¯ y) = (y − ¯ y) D(D
ij
Γ)(x −ξ) ≤ δ
k
[D(D
ij
)(x −ξ)[.
Now letting m→∞ in (3.14), we obtain
R
n
\B
k
[Tb
k
(x)[dx ≤ C
Q
k
[b
k
(y)[dy.
Let
G
∗
=
∞
¸
k=1
B
k
and E
∗
= R
n
` G
∗
.
It follows that
E
∗
[Tb(x)[dx ≤ C
∞
¸
k=1
R
n
\G
∗
[Tb
k
[dx ≤ C
∞
¸
k=1
R
n
\B
k
[Tb
k
[dx
≤ C
∞
¸
k=1
Q
k
[b
k
(y)[dy ≤ C
R
n
[f(x)[dx. (3.15)
80 3 Regularity of Solutions
Obviously
µ
Tb
(
α
2
) ≤ [G
∗
[ +[¦x ∈ E
∗
[ Tb(x) ≥
α
2
¦[. (3.16)
By (iii) in the Calder´onZygmund’s Decomposition Lemma, we have
[G
∗
[ =
∞
¸
k=1
[B
k
[ = C
∞
¸
k=1
[Q
k
[ ≤
C
α
∞
¸
k=1
Q
k
[f(x)[dx =
C
α
R
n
[f(x)[dx.
(3.17)
Write
E
∗
α
= ¦x ∈ E
∗
[ [Tb(x)[ ≥
α
2
¦.
Then by (3.15), we derive
[E
∗
α
[
α
2
≤
E
∗
α
[Tb(x)[dx ≤
E
∗
[Tb(x)[dx ≤ C
R
n
[f(x)[dx. (3.18)
Now the desired inequality (3.9) is a direct consequence of (3.12), (3.16),
(3.17, and (3.18). This completes the proof of the Lemma.
The proof of Lemma 3.1.3. In the previous lemmas, we have shown that
the operator T is of weak type (1, 1) and strong type (2, 2) (of course also weak
type (2, 2)). Now Lemma 3.1.3 is a direct consequence of the Marcinkiewicz
interpolation theorem in the following restricted form:
Lemma 3.1.6 Let T be a linear operator from L
p
(Ω) ∩L
q
(Ω) into itself with
1 ≤ p < q < ∞. If T is of weak type (p, p) and weak type (q, q), then for any
p < r < q, T is of strong type (r, r). More precisely, if there exist constants
B
p
and B
q
, such that
µ
Tf
(t) ≤
B
p
f
p
t
p
and µ
Tf
(t) ≤
B
q
f
q
t
q
, ∀f ∈ L
p
(Ω) ∩ L
q
(Ω),
then
Tf
r
≤ CB
θ
p
B
1−θ
q
f
r
, ∀f ∈ L
p
(Ω) ∩ L
q
(Ω),
where
1
r
=
θ
p
+
1 −θ
q
and C depends only on p, q, and r.
The proof of this lemma can be found in Appendix C.
The proof of Lemma 3.1.4. From the previous lemma, we know that,
for any 1 < r ≤ 2, we have
Tg
L
r
(Ω)
≤ C
r
g
L
r
(Ω)
. (3.19)
3.1 W
2,p
a Priori Estimates 81
Let
< f, g >=
Ω
f(x)g(x)dx
be the duality between f and g. Then it is easy to verify that
< g, Tf >=< Tg, f > . (3.20)
Given any 2 < p < ∞, let r =
p
p−1
, i.e.
1
r
+
1
p
= 1. Obviously, 1 < r < 2. It
follows from (3.19) and (3.20) that
Tf
L
p = sup
g
L
r =1
< g, Tf >= sup
g
L
r =1
< Tg, f >
≤ sup
g
L
r =1
f
L
pTg
L
r ≤ C
r
f
L
p.
This completes the proof of the Lemma.
3.1.2 Uniform Elliptic Equations
In this section, we consider general second order uniform elliptic equations
with Dirichlet boundary condition
Lu := −
¸
n
i,j=1
a
ij
(x)u
xixj
+
¸
n
i=1
b
i
(x)u
xi
+c(x)u = f(x), x ∈ Ω
u(x) = 0, x ∈ ∂Ω.
(3.21)
We assume that Ω is bounded with C
2,α
boundary, a
ij
(x) ∈ C
0
(
¯
Ω), b
i
(x) ∈
L
∞
(Ω), c(x) ∈ L
∞
(Ω), and
λ[ξ[
2
≤ a
ij
ξ
i
ξ
j
≤ Λ[ξ[
2
for some positive constants λ and Λ.
Deﬁnition 3.1.1 We say that u is a strong solution of
Lu = f(x), x ∈ Ω
if u is twice weakly diﬀerentiable in Ω and satisﬁes the equation almost ev
erywhere in Ω.
Based on the result in the previous section–the estimates on the Newtonian
potentials–we will establish a priori estimates on the strong solutions of (3.21).
Theorem 3.1.2 Let u ∈ W
2,p
(Ω)
¸
W
1,2
0
(Ω) be a strong solution of (3.21).
Assume that f ∈ L
p
(Ω). Then
[[u[[
W
2,p
(Ω)
≤ C([[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
) (3.22)
where C is a constant depending on [[b
i
(x)[[
L
∞
(Ω)
, [[c(x)[[
L
∞
(Ω)
, λ, Λ, n, p,
and the domain Ω.
82 3 Regularity of Solutions
Remark 3.1.1 Conditions
b
i
(x), c(x) ∈ L
∞
(Ω)
can be replaced by the weaker ones
b
i
(x), c(x) ∈ L
p
(Ω) , for any p > n.
Proof of Theorem 3.1.2.
We divide the proof into two major parts.
Part 1. Interior Estimate
[[∇
2
u[[
L
p
(K)
≤ C([[∇u[[
L
p
(Ω)
+[[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
) (3.23)
where K is any compact subset of Ω.
Part 2. Boundary Estimate
[[u[[
W
2,p
(Ω\Ω
δ
)
≤ C([[∇u[[
L
p
(Ω)
+[[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
) (3.24)
where Ω
δ
= ¦x ∈ Ω [ d(x, ∂Ω) > δ¦.
Combining the interior and the boundary estimates, we obtain
[[u[[
W
2,p
(Ω)
≤ [[u[[
W
2,p
(Ω\Ω
2δ
)
+[[u[[
W
2,p
(Ω
δ
)
≤ C([[∇u[[
L
p
(Ω)
+[[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
). (3.25)
Theorem 3.1.2 is then a simple consequence of (3.25) and the following
well known GargaliadoJohnNirenberg type interpolation estimate
[[∇u[[
L
p
(Ω)
≤ C[[u[[
1/2
L
p
(Ω)
[[∇
2
u[[
1/2
L
p
(Ω)
≤ [[∇
2
u[[
L
p
(Ω)
+
C
4
[[u[[
L
p
(Ω)
.
Substituting this estimate of [[∇u[[
L
p
(Ω)
into our inequality (3.25), we get:
[[u[[
W
2,p
(Ω)
≤ C[[∇
2
u[[
L
p
(Ω)
+C(
C
4
[[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
).
Choosing <
1
2C
, we can then absorb the term [[∇
2
u[[
L
p
(Ω)
on the right hand
side by the left hand side and arrive at the conclusion of our theorem
[[u[[
W
2,p
(Ω)
≤ C([[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
) (3.26)
For both the interior and the boundary estimates, the main idea is the well
known method of “frozen” coeﬃcients. Locally, near a point x
o
, we may regard
the leading coeﬃcients a
ij
(x) of the operator approximately as the constant
a
ij
(x
o
) (as if the functions a
ij
are frozen at point x
o
), and thus we are able
to treat the operator as a constant coeﬃcient one and apply the potential
estimates derived in the previous section.
3.1 W
2,p
a Priori Estimates 83
Part 1: Interior Estimates.
First we deﬁne the cutoﬀ function φ(s) to be a compact supported C
∞
function:
φ(s) = 1 if s ≤ 1
φ(s) = 0 if s ≥ 2.
Then we quantify the ‘module’ continuity of a
ij
with
(δ) = sup
x−y≤δ;x,y∈Ω;1≤i,j≤n
[a
ij
(x) −a
ij
(y)[
The function (δ) ` 0 as δ ` 0, and it measures the ‘module’ continuity
of the functions a
ij
.
For any x
o
∈ Ω
2δ
, let
η(x) = φ(
[x −x
o
[
δ
) and w(x) = η(x)u(x),
then
a
ij
(x
o
)
∂
2
w
∂x
i
∂x
j
= (a
ij
(x
o
) −a
ij
(x))
∂
2
w
∂x
i
∂x
j
+a
ij
(x)
∂
2
w
∂x
i
∂x
j
= (a
ij
(x
o
) −a
ij
(x))
∂
2
w
∂x
i
∂x
j
+η(x)a
ij
(x)
∂
2
u
∂x
i
∂x
j
+
+ a
ij
(x)u(x)
∂
2
η
∂x
i
∂x
j
+ 2a
ij
(x)
∂u
∂x
i
∂η
∂x
j
= (a
ij
(x
o
) −a
ij
(x))
∂
2
w
∂x
i
∂x
j
+η(x)(b
i
(x)
∂u
∂x
i
+c(x)u −f(x)) +
+ a
ij
(x)u(x)
∂
2
η
∂x
i
∂x
j
+ 2a
ij
(x)
∂u
∂x
i
∂η
∂x
j
:= F(x) for x ∈ R
n
.
In the above, we skipped the summation sign
¸
. We abbreviated
¸
n
i,j=1
a
ij
as a
ij
and so on. Here we notice that all terms are supported in B
2δ
:=
B
2δ
(x
o
) ⊂ Ω. With a simple linear transformation, we may assume a
ij
(x
o
) =
δ
ij
. Apparently both w and
Γ ∗ F :=
1
n(n −2)ω
n
R
n
1
[x −y[
n−2
F(y)dy
are solutions of
´u = F.
Thus by the uniqueness, w = Γ ∗ F (see the exercise after the proof of
the theorem). Now, following the Newtonian potential estimate (see Theorem
3.3), we obtain:
84 3 Regularity of Solutions
[[
2
w[[
L
p
(B
2δ
)
= [[
2
w[[
L
p
(R
n
)
≤ C[[F[[
L
p
(R
n
)
= C[[F[[
L
p
(B
2δ
)
. (3.27)
Calculating term by term, we have
[[F[[
L
p
(B
2δ
)
≤ (2δ)[[
2
w[[
L
p
(B
2δ
)
+[[f[[
L
p
(B
2δ
)
+C([[∇u[[
L
p
(B
2δ
)
+[[u[[
L
p
(B
2δ
)
).
Combining this with the previous inequality (3.27) and choosing δ so small
such that C(2δ) < 1/2, we get
[[
2
w[[
L
p
(B
2δ
)
≤ C(2δ)[[
2
w[[
L
p
(B
2δ
)
+C([[f[[
L
p
(B
2δ
)
+[[∇u[[
L
p
(B
2δ
)
+[[u[[
L
p
(B
2δ
)
)
≤
1
2
[[
2
w[[
L
p
(B
2δ
)
+C([[f[[
L
p
(B
2δ
)
+[[∇u[[
L
p
(B
2δ
)
+[[u[[
L
p
(B
2δ
)
).
This is equivalent to
[[
2
w[[
L
p
(B
2δ
)
≤ C([[f[[
L
p
(B
2δ
)
+[[∇u[[
L
p
(B
2δ
)
+[[u[[
L
p
(B
2δ
)
)
Since u = w on B
δ
(x
o
), we have
[[
2
u[[
L
p
(B
δ
)
= [[
2
w[[
L
p
(B
δ
)
.
Consequently,
[[
2
u[[
L
p
(B
δ
)
≤ C([[f[[
L
p
(B
2δ
)
+[[∇u[[
L
p
(B
2δ
)
+[[u[[
L
p
(B
2δ
)
). (3.28)
To extend this estimate from a δball to a compact set, we notice that the
collection of balls
¦B
δ
(x) [ x ∈ Ω
2δ
¦
forms an open covering of Ω
2δ
. Hence there are ﬁnitely many balls ¦B
δ
(x
i
) [
i = 1, .......m¦ that have already covered Ω
2δ
. We now apply the above esti
mate (3.28) on each of the balls and sum up to obtain
[[
2
u[[
L
p
(Ω
2δ
)
≤
m
¸
i=1
[[
2
u[[
L
p
(B
δ
)(x
i
)
≤
m
¸
i=1
C([[f[[
L
p
(B
2δ
)(x
i
)
+[[∇u[[
L
p
(B
2δ
)(x
i
)
+[[u[[
L
p
(B
2δ
)(x
i
)
)
≤ C([[f[[
L
p
(Ω)
+[[∇u[[
L
p
(Ω)
+[[u[[
L
p
(Ω)
) (3.29)
For any compact subset K of Ω, let δ <
1
2
dist(K, ∂Ω), then K ⊂ Ω
2δ
, and
we derive
[[∇
2
u[[
L
p
(K)
≤ C([[∇u[[
L
p
(Ω)
+[[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
). (3.30)
This establishes the interior estimates.
Part 2. Boundary Estimates
The boundary estimates are very similar. Let’s just describe how we can
apply almost the same scheme here. For any point x
o
∈ ∂Ω, B
δ
(x
o
)
¸
∂Ω
3.1 W
2,p
a Priori Estimates 85
is a C
2,α
graph for δ small. With a suitable rotation, we may assume that
B
δ
(x
o
)
¸
∂Ω is given by the graph of
x
n
= h(x
1
, x
2
, ..., x
n−1
) = h(x
),
and Ω is on top of this graph locally. Let
y = ψ(x) = (x
−x
o
, x
n
−h(x
)),
then ψ is a diﬀeomorphism that maps a neighborhood of x
o
in Ω onto B
+
r
(0) =
¦y ∈ B
r
(0) [ y
n
> 0¦. The equation becomes
−
¸
n
i,j=1
¯ a
ij
(y)u
yiyj
+
¸
n
i=1
¯
b
i
(y)u
yi
+ ¯ c(y)u =
¯
f(y) , for y ∈ B
+
r
(0)
u(y) = 0 , for y ∈ ∂B
+
r
(0) with y
n
= 0;
(3.31)
where the new coeﬃcients are computed from the original ones via the chain
rule of diﬀerentiation. For example,
¯ a
ij
(y) =
∂ψ
i
∂x
l
(ψ
−1
(y))a
lk
(ψ
−1
(y))
∂ψ
j
∂x
k
(ψ
−1
(y)).
If necessary, we make a linear transformation so that ¯ a
ij
(0) = δ
ij
. Since
a plane is still mapped to a plane, we may assume that equation (3.31) is
valid for some smaller r. Applying the method of frozen coeﬃcients (with
w(y) = φ(
2y
r
)u(y)) we get:
´w(y) = F(y) on B
+
r
(0).
Now let ¯ w(y) and
¯
F(y) be the odd extension of w(y) and F(y) from B
+
r
(0)
to B
r
(0), i.e.
¯ w(y) = ¯ w(y
1
, , y
n−1
, y
n
) =
w(y
1
, , y
n−1
, y
n
), if y
n
≥ 0;
−w(y
1
, , y
n−1
, −y
n
), if y
n
< 0.
Similarly for
¯
F(y).
Then one can check that:
´¯ w(y) =
¯
F(y) , for y ∈ B
r
(0).
Through the same calculations as in Part 1, we derive the basic interior esti
mate
[[
2
u[[
L
p
(Br(x
o
))
≤ C([[f[[
L
p
(B2r(x
o
))
+[[∇u[[
L
p
(B2r(x
o
))
+[[u[[
L
p
(B2r(x
o
))
)
≤ C
1
([[f[[
L
p
(B2r(x
o
)∩Ω)
+[[∇u[[
L
p
(B2r(x
o
)∩Ω)
+[[u[[
L
p
(B2r(x
o
)∩Ω)
)
for any x
o
on ∂Ω and for some small radius r. The last inequality in the above
was due to the symmetric extension of w to ¯ w from the half ball to the whole
ball.
86 3 Regularity of Solutions
These balls form a covering of the compact set ∂Ω, and thus has a ﬁnite
covering B
ri
(x
i
) (i = 1, 2, ...., k). These balls also cover a neighborhood of ∂Ω
including Ω`Ω
δ
for some δ small. Summing the estimates up, and following
the same steps as we did for the interior estimates, we get:
[[u[[
W
2,p
(Ω\Ω
δ
)
≤ C([[∇u[[
L
p
(Ω)
+[[u[[
L
p
(Ω)
+[[f[[
L
p
(Ω)
). (3.32)
This establishes the boundary estimates and hence completes the proof of
the theorem. ¯ .
Exercise 3.1.1 Assume that Ω is bounded, and w ∈ W
2,p
0
(Ω). Show that if
−´w = F, then w = Γ ∗ F.
Hint: (a) Show that Γ ∗ F(x) = 0 for x ∈
¯
Ω.
(b) Show that it is true for w ∈ C
2
0
(R
n
).
(c) Show that
´(J
w) = J
(´w)
and thus
J
w = Γ ∗ (J
F).
Let →0 to derive w = Γ ∗ F.
3.2 W
2,p
Regularity of Solutions
In this section, we will use the a priori estimates established in the previous
section to derive the W
2,p
regularity for the weak solutions.
Let L be a second order uniformly elliptic operator in divergence form
Lu = −
n
¸
i,j=1
(a
ij
(x)u
xi
)
xj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u. (3.33)
Deﬁnition 3.2.1 We say that u ∈ W
1,p
0
(Ω) is a weak solution of
Lu = f(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω
(3.34)
if for any v ∈ W
1,q
0
(Ω),
Ω
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i
b
i
(x)u
xi
v +c(x)uv
¸
¸
dx =
Ω
f(x)vdx, (3.35)
where
1
p
+
1
q
= 1.
3.2 W
2,p
Regularity of Solutions 87
Remark 3.2.1 Since C
∞
0
(Ω) is dense in W
1,p
0
(Ω), we only need to require
(3.35) be true for all v ∈ C
∞
0
(Ω); and this is more convenient in many appli
cations.
The main result of the section is
Theorem 3.2.1 Assume Ω is a bounded domain in R
n
. Let L be a uniformly
elliptic operator deﬁned in (3.33) with a
ij
(x) Lipschitz continuous and b
i
(x)
and c(x) bounded.
Assume that u ∈ W
1,p
0
(Ω) is a weak solution of (3.34). If f(x) ∈ L
p
(Ω),
then u ∈ W
2,p
(Ω) .
Outline of the Proof. The proof of the theorem is quite complex and
will be accomplished in two stages.
In stage 1, we ﬁrst consider the case when p ≥ 2, because one can rather
easily show the uniqueness of the weak solutions by multiplying both sides of
the equation by the solution itself and integrating by parts. As one will see,
this uniqueness plays a key role in deriving the regularity.
The proof of the regularity is based on the following fundamental proposi
tion on the existence, uniqueness, and regularity for the solution of the Laplace
equation on a unit ball.
Proposition 3.2.1 Assume f ∈ L
p
(B
1
(0)) with p ≥ 2. Then the Dirichlet
problem
´u = f(x) , x ∈ B
1
(0)
u(x) = 0 , x ∈ ∂B
1
(0)
(3.36)
exists a unique solution u ∈ W
2,p
(B
1
(0)) satisfying
u
W
2,p
(B1)
≤ Cf
L
p
(B1)
. (3.37)
We will prove this proposition in subsection 3.2.1, and then use the
“frozen” coeﬃcients method to derive the regularity for general operator L.
In stage 2 (subsection 3.2.2), we consider the case 1 < p < 2. The main
diﬃculty lies in the uniqueness of the weak solution. To show the uniqueness,
we ﬁrst prove a W
2,p
version of Fredholm Alternative for p ≥ 2 based on
the regularity result in stage 1, which will be used to deduce the existence of
solutions for equation
−
¸
ij
(a
ij
(x)w
xi
)
xj
= F(x).
Then we use this equation with a proper choice of F(x) to derive a contradic
tion if there exists a nontrivial solution of equation
−
¸
ij
(a
ij
(x)v
xi
)
xj
= 0.
88 3 Regularity of Solutions
After proving the uniqueness, to derive the regularity, everything else is
the same as in stage 1.
Remark 3.2.2 Actually, the conditions on the coeﬃcients of L can be weak
ened. This will use the Regularity Lifting Theorem in Section 3.3, and hence
will be deliberated there.
3.2.1 The Case p ≥ 2
We will prove Proposition 3.2.1 and use it to derive the regularity of weak
solutions for general operator L. To this end, we need
Lemma 3.2.1 (Better A Priori Estimates at Presence of Uniqueness.) As
sume that
i) for solutions of
Lu = f(x) , x ∈ Ω (3.38)
it holds the a priori estimate
u
W
2,p
(Ω)
≤ C(u
L
p
(Ω)
+f
L
p
(Ω)
), (3.39)
and
ii) (uniqueness) if Lu = 0, then u = 0.
Then we have a better a priori estimate for the solution of (3.38)
u
W
2,p
(Ω)
≤ Cf
L
p
(Ω)
. (3.40)
Proof. Suppose the inequality (3.40) is false, then there exists a sequence of
functions ¦f
k
¦ with f
k

L
p = 1 and the corresponding sequence of solutions
¦u
k
¦ for
Lu
k
= f
k
(x) , x ∈ Ω
such that
u
k

W
2,p→∞ as k→∞.
It follows from the a priori estimate (3.39) that
u
k

L
p→∞ as k→∞.
Let
v
k
=
u
k
u
k

L
p
and g
k
=
f
k
u
k

L
p
.
Then
v
k

L
p = 1 , and g
k

L
p→0. (3.41)
From the equation
Lv
k
= g
k
(x) , x ∈ Ω (3.42)
3.2 W
2,p
Regularity of Solutions 89
it holds the a priori estimate
v
k

W
2,p ≤ C(v
k

L
p +g
k

L
p). (3.43)
(3.41) and (3.43) imply that ¦v
k
¦ is bounded in W
2,p
, and hence there exists
a subsequence (still denoted by ¦v
k
¦) that converges weakly to v in W
2,p
. By
the compact Sobolev embedding, the same sequence converges strongly to v
in L
p
, and hence v
L
p = 1. From (3.42), we arrive at
Lv = 0 , x ∈ Ω.
Therefore by the uniqueness assumption, we must have v = 0. This contradicts
with the fact v
L
p = 1 and therefore completes the proof. ¯ .
The Proof of Proposition 3.2.1.
For convenience, we abbreviate B
r
(0) as B
r
.
To see the uniqueness, assume that both u and v are solutions of (3.36).
Let w = u −v. Then w weakly satisﬁes
´w = 0 , x ∈ B
1
w(x) = 0 , x ∈ ∂B
1
Multiplying both sides of the equation by w and then integrating on B
1
, we
derive
B1
[w[
2
dx = 0.
Hence w = 0 almost everywhere, this, together with the zero boundary
condition, implies w = 0 almost everywhere.
For the existence, it is wellknown that, if f is continuous, then
u(x) =
B1
G(x, y)f(y)dy
is the solution. Here
G(x, y) = Γ(x −y) −h(x, y)
is the Green’s function, in which
Γ(x) =
1
n(n−2)ωn
1
x
n−2
, n ≥ 3
1
2π
ln [x[ , n = 2
is the fundamental solution of Laplace equation as introduce in the deﬁnition
of Newtonian Potentials, and
h(x, y) =
1
n(n −2)ω
n
1
([x[[
x
x
2
−y[)
n−2
, for n ≥ 3
90 3 Regularity of Solutions
is a harmonic function. The addition of h(x, y) is to ensure that G(x, y) van
ishes on the boundary.
We have obtained the needed estimate on the ﬁrst part
B1
Γ(x −y)f(y)dy
when we worked on the Newtonian Potentials. For the second part, noticing
that h(x, y) is smooth when y is away from the boundary (see the exercise
below), we start from a smaller ball B
1−δ
for each δ > 0. Let
u
δ
(x) =
B1
G(x, y)f
δ
(y)dy,
where
f
δ
(x) =
f(x) , x ∈ B
1−δ
0 , elsewhere .
Now, by our result for the Newtonian Potentials, D
2
u
δ
is in L
p
(B
1
). Applying
the Poincar`e inequality, we derive that u
δ
is also in L
p
(B
1
), and hence it is in
W
2,p
(B
1
). Moreover, due to uniqueness and Lemma 3.2.1, we have the better
a priori estimate
u
δ

W
2,p
(B1)
≤ Cf
L
p
(B1)
. (3.44)
Pick a sequence ¦δ
i
¦ tending to zero, then the corresponding solutions
¦u
δi
¦ is a Cauchy sequence in W
2,p
(B
1
), because
u
δi
−u
δj

W
2,p
(B1)
≤ Cf
δi
−f
δj

L
p
(B1)
→0 , as i, j→∞.
Let
u
o
= lim
i→∞
u
δi
, in W
2,p
(B
1
).
Then u
o
is a solution of (3.36). Moreover, from 3.44, we see that the better a
priori estimate (3.37) holds for u
o
.
This completes the proof of Proposition 3.2.1. ¯ .
Exercise 3.2.1 Show that if [y[ ≤ 1 − δ for some δ > 0, then there exist a
constant c
o
> 0, such that
[x[[
x
[x[
2
−y[ ≥ c
o
, ∀ x ∈ B
1
(0).
Proof of Regularity Theorem 3.2.1 for p ≥ 2.
The general frame of the proof is similar to the W
2,p
a priori estimate in
the previous section. The key diﬀerence here is that in the a priori estimate, we
preassume that u is a strong solution in W
2,p
, and here we only assume that
u is a weak solution in W
1,p
. After reading the proof of the a priori estimates,
the reader may notice that if one can establish a W
2,p
regularity in a small
3.2 W
2,p
Regularity of Solutions 91
neighborhood of each point in Ω, then by an entirely similar argument as in
obtaining the a priori estimates, one can derive the regularity on the whole of
Ω.
As in the proof of the a priori estimates, we deﬁne the cutoﬀ function
φ(s) =
1 , if s ≤ 1
0 , if s ≥ 2.
Let u be a W
1,p
0
(Ω) weak solution of (3.34).
For any
x
o
∈ Ω
2δ
:= ¦x ∈ Ω [ dist(x, ∂Ω) ≥ 2δ¦,
let
η(x) = φ(
[x −x
o
[
δ
) and w(x) = η(x)u(x).
Then w is supported in B
2δ
(x
o
). By (3.35) and a straight forward calculation,
one can verify that, for any v ∈ C
∞
0
(B
2δ
(x
o
)),
B
2δ
(x
o
)
a
ij
(x
o
)w
xi
v
xj
dx =
B
2δ
(x
o
)
[a
ij
(x
o
)−a
ij
(x)]w
xi
v
xj
dx+
B
2δ
(x
o
)
F(x)vdx
where
F(x) = f(x) −(a
ij
(x)η
xi
u)
xj
−b
i
(x)u
xi
−c(x)u(x).
In order words, w is a weak solution of
a
ij
(x
o
)w
xixj
= ([a
ij
(x
o
) −a
ij
(x)]w
xi
)
xj
−F(x) , x ∈ B
2δ
(x
o
)
w(x) = 0 , x ∈ ∂B
2δ
(x
o
).
(3.45)
In the above, we omitted the summation signs
¸
. We abbreviated
¸
n
i,j=1
a
ij
as a
ij
and so on.
With a change of coordinates, we may assume that a
ij
(x
o
) = δ
ij
and write
equation (3.45) as
´w = ([a
ij
(x
o
) −a
ij
(x)]w
xi
)
xj
−F(x) (3.46)
For any v ∈ W
2,p
(B
2δ
(x
o
)), obviously,
([a
ij
(x
o
) −a
ij
(x)]v
xi
)
xj
∈ L
p
(B
2δ
(x
o
)).
Also, one can easily verify that F(x) is in L
p
(B
2δ
(x
o
)). By virtue of Proposi
tion 3.2.1, the operator ´ is invertible. Consider the equation in W
2,p
v = Kv +´
−1
F (3.47)
where
(Kv)(x) = ´
−1
([a
ij
(x
o
) −a
ij
(x)]v
xi
)
xj
.
92 3 Regularity of Solutions
Under the assumption that a
ij
(x) are Lipschitz continuous, one can verify that
(see the exercise below), for suﬃciently small δ, K is a contracting map from
W
2,p
(B
2δ
(x
o
)) to itself. Therefore, there exists a unique solution v of equation
(3.47). This v is also a weak solution of equation (3.46). Similar to the proof
of Proposition 3.2.1, one can show (as an exercise) the uniqueness of the weak
solution of (3.46). Therefore, we must have w = v and thus conclude that w is
also in W
2,p
(B
2δ
(x
o
)). This completes the stage 1 of proof for Theorem 3.2.1.
¯ .
Exercise 3.2.2 Assume that each a
ij
(x) is Lipschitz continuous. Let
(Kv)(x) = ´
−1
([a
ij
(x
o
) −a
ij
(x)]v
xi
)
xj
, x ∈ B
2δ
(x
o
).
Show that, for δ suﬃciently small, the operator K is a contracting map in
W
2,p
(B
2δ
(x
o
)), i.e. there exists a constant 0 < γ < 1, such that
Kφ −Kψ
W
2,p
(B
2δ
(x
o
))
≤ γφ −ψ
W
2,p
(B
2δ
(x
o
))
.
Exercise 3.2.3 Prove the uniqueness of the W
1,p
(B
2δ
(x
o
)) weak solution for
equation (3.46) when p ≥ 2.
3.2.2 The Case 1 < p < 2.
Lemma 3.2.2 Let p ≥ 2. If u = 0 whenever Lu = 0 (in the sense of W
1,p
0
(Ω)
weak solution), then for any f ∈ L
p
(Ω), there exist a unique u ∈ W
1,p
0
(Ω) ∩
W
2,p
(Ω), such that
Lu = f(x) and u
W
2,p
(Ω)
≤ Cf
L
p. (3.48)
Proof. This is actually a W
2,p
version of the Fredholm Alternative.
From the previous chapter, one can see that for α suﬃciently large, by
LaxMilgram Theorem, for any given g ∈ L
2
(Ω), there exist a unique W
1,2
0
(Ω)
weak solution v of the equation
(L +α)v = g , x ∈ Ω.
From the Regularity Theorem in this chapter, we have v ∈ W
2,2
(Ω). There
fore, we can write
v = (L +α)
−1
g
where (L +α)
−1
is a bounded linear operator from L
2
(Ω) to W
2,2
(Ω).
Let ¦f
k
¦ ⊂ C
∞
0
(Ω) be a sequence of smooth approximations of f(x). For
each k, consider equation
(L +α)w −αw = f
k
, (3.49)
3.2 W
2,p
Regularity of Solutions 93
or equivalently,
(I −K)w = F, (3.50)
where I is the identity operator, K = α(L+α)
−1
, and F = (L+α)
−1
f
k
. One
can see that K is a compact operator from W
1,2
0
(Ω) into itself, because of
the compact embedding of W
2,2
into W
1,2
. Now we can apply the Fredholm
Alternative:
Equation (3.50) exists a unique solution if and only if the corresponding
homogeneous equation (I −K)w = 0 has only trivial solution.
The second half of the above is just the assumption of the theorem, hence,
equation (3.50) possesses a unique solution u
k
in W
1,2
0
(Ω), i.e.
u
k
−α(L +α)
−1
u
k
= (L +α)
−1
f
k
. (3.51)
Furthermore, since both α(L + α)
−1
u
k
and (L + α)
−1
f
k
are in W
2,2
(Ω), we
derive immediately that u
k
is also in W
2,2
(Ω), or we can deduce this from the
Regularity Theorem 3.2.1. Since f
k
is in L
p
(Ω) for any p, we conclude from
Theorem 3.2.1 that u
k
is in W
2,p
(Ω), and furthermore, due to uniqueness, we
have the improved a priori estimate (see Lemma 3.2.1)
u
k

W
2,p ≤ Cf
k

L
p , k = 1, 2,
Moreover, for any integers i and j,
L(u
i
−u
j
) = f
i
(x) −f
j
(x) , x ∈ Ω.
It follows that
u
i
−u
j

W
2,p ≤ Cf
i
−f
j

L
p→0 , as i, j→∞.
This implies that ¦u
k
¦ is a Cauchy sequence in W
2,p
and hence it converges
to an element u in W
2,p
(Ω). Now this function u is the desired solution. ¯ .
Proposition 3.2.2 (Uniqueness of W
1,p
0
Weak Solution for 1 < p < ∞. )
Let Ω be a bounded domain in R
n
. Assume that a
ij
(x) are Lipschitz con
tinuous. If u is a W
1,p
0
(Ω) weak solution of
(a
ij
(x)u
xi
)
xj
= 0, (3.52)
then u = 0 almost every where.
Proof. . When p ≥ 2, we have proved the uniqueness. Now assume 1 < p < 2.
Suppose there exist a nontrivial weak solution u. Let ¦u
k
¦ be a sequence
of C
∞
0
(Ω) functions such that
u
k
→u in W
1,p
0
(Ω) , as k→∞.
For each k, consider the equation
94 3 Regularity of Solutions
(a
ij
(x)v
xi
)
xj
= F
k
(x) :=
[u
k
[
p/q
u
k
1 +[u
k
[
2
, (3.53)
where
1
q
+
1
p
= 1.
Obviously, for p < 2, q is greater than 2, and F
k
(x) is in L
q
(Ω). Since we
already have uniqueness for W
1,q
0
weak solution, Lemma 3.2.2 guarantees the
existence of a solution v
k
for equation (3.53).
Through integration by parts several times, one can verify that
0 =
Ω
v
k
(a
ij
(x)u
xi
)
xj
dx =
Ω
[u
k
[
p/q
u
k
1 +[u
k
[
2
udx. (3.54)
Since u
k
→u in W
1,p
, it is easy to see that
[u
k
[
p/q
u
k
1 +[u
k
[
2
→
[u[
p/q
u
1 +[u[
2
in L
q
(Ω), and therefore, by (3.54), we must have u = 0 almost everywhere.
Taking into account that u ∈ W
1,p
0
(Ω), we ﬁnally deduce that u = 0 almost
everywhere. This completes the proof of the proposition. ¯ .
After proving the uniqueness, the rest of the arguments are entirely the
same as in stage 1. This completes stage 2 of the proof for Theorem 3.2.1.
3.2.3 Other Useful Results Concerning the Existence, Uniqueness,
and Regularity
Theorem 3.2.2 Given any pair p, q > 1 and f ∈ L
q
(Ω), if u ∈ W
1,p
0
(Ω) is
a weak solution of
Lu = f(x) , x ∈ Ω, (3.55)
then u ∈ W
2,q
(Ω) and it holds the a priori estimate
u
W
2,q
(Ω)
≤ C(u
L
q +f
L
q ). (3.56)
Proof. Case i) If q ≤ p, then obviously, u is also a W
1,q
0
(Ω) weak solution, and
the results follow from the Regularity Theorem 3.2.1 and the W
2,q
a priori
estimates.
Case ii) If q > p, then we use the Sobolev embedding
W
2,p
→ W
1,p1
,
with
p
1
=
np
n −p
.
If p
1
≥ q, we are done. If p
1
< q, we use the fact that u is a W
1,p1
weak
solution, and by Regularity, u ∈ W
2,p1
. Repeating this process, after a few
steps, we will arrive at a p
k
≥ q.
3.3 Regularity Lifting 95
From this Theorem, we can derive immediately the equivalence between
the uniqueness of weak solutions in any two spaces W
1,p
0
and W
1,q
0
:
Corollary 3.2.1 For any pair p, q > 1, the following are equivalent
i) If u ∈ W
1,p
0
(Ω) is a weak solution of Lu = 0, then u = 0.
ii) If u ∈ W
1,q
0
(Ω) is a weak solution of Lu = 0, then u = 0.
Theorem 3.2.3 (W
2,p
Version of the Fredholm Alternative). Let 1 < p < ∞.
If u = 0 whenever Lu = 0 (in the sense of W
1,p
0
(Ω) weak solution), then for
any f ∈ L
p
(Ω), there exist a unique u ∈ W
1,p
0
(Ω) ∩ W
2,p
(Ω), such that
Lu = f(x) and u
W
2,p
(Ω)
≤ Cf
L
p.
For 2 ≤ p < ∞, we have stated this theorem in Lemma 3.2.2. Now, for
1 < p < 2, after we obtained the W
2,p
regularity in this case, the proof of this
theorem is entirely the same as for Lemma 3.2.2.
3.3 Regularity Lifting
3.3.1 Bootstrap
Assume that u ∈ H
1
(Ω) is a weak solution of
−´u = u
p
(x) , x ∈ Ω (3.57)
Then by Sobolev Embedding, u ∈ L
2n
n−2
(Ω). If the power p is less than the
critical number
n+2
n−2
, then the regularity of u can be enhanced repeatedly
through the equation until we reach that u ∈ C
α
(Ω). Finally the “Schauder
Estimate” will lift u to C
2+α
(Ω) and hence to be a classical solution. We call
this the “Bootstrap Method” as will be illustrated below.
For each ﬁxed p <
n+2
n−2
, write
p =
n + 2
n −2
−δ
for some δ > 0.
Since u ∈ L
2n
n−2
(Ω),
u
p
∈ L
2n
p(n−2)
(Ω) = L
2n
(n+2)−δ(n−2)
(Ω).
The equation (3.57) boosts the solution u to W
2,q1
(Ω) with
q
1
=
2n
(n + 2) −δ(n −2)
.
By the Sobolev Embedding
96 3 Regularity of Solutions
W
2,q1
(Ω) → L
nq
1
n−2q
1
(Ω)
we have u ∈ L
s1
(Ω) with
s
1
=
nq
1
n −2q
1
=
2n
n −2
1
1 −δ
.
The integrable power of u has been ampliﬁed by
1
1−δ
(> 1) times.
Now, u
p
∈ L
q2
(Ω) with
q
2
=
s
1
p
=
4n
(1 −δ)[n + 2 −δ(n −2)]
.
And hence through the equation, we derive u ∈ W
2,q2
(Ω). By Sobolev em
bedding, if 2q
2
≥ n, i.e. if
(1 −δ)[n + 2 −δ(n −2)] −4 ≤ 0 ,
then, we have either u ∈ L
q
(Ω) for any q, or u ∈ C
α
(Ω). We are done. If
2q
2
< n, then u ∈ L
s2
(Ω), with
s
2
=
2n
(1 −δ)[n + 2 −δ(n −2)] −4
.
It is easy to verify that
s
2
≥
2n
(1 −δ)(n −2)
1
1 −δ
= s
1
1
1 −δ
.
The integrable power of u has again been ampliﬁed by at least
1
1−δ
times.
Continuing this process, after a ﬁnite many steps, we will boost u to L
q
(Ω)
for any q, and hence in C
α
(Ω) for some 0 < α < 1. Finally, by the Schauder
estimate, u ∈ C
2+α
(Ω), and therefore, it is a classical solution.
From the above argument, one can see that, if p =
n+2
n−2
, the so called
“critical power”, then δ = 0, and the integrable power of the solution u can not
be boosted this way. To deal with this situation, we introduce a “Regularity
Lifting Method” in the next subsection.
3.3.2 Regularity Lifting Theorem
Here, we present a simple method to boost regularity of solutions. It has been
used extensively in various forms in the authors previous works. The essence of
the approach is wellknown in the analysis community. However, the version
we present here contains some new developments. It is much more general
and is very easy to use. We believe that our method will provide convenient
ways, for both experts and nonexperts in the ﬁeld, in obtaining regularities.
Essentially, it is based on the following Regularity Lifting Theorem.
3.3 Regularity Lifting 97
Let Z be a given vector space. Let  
X
and  
Y
be two norms on Z.
Deﬁne a new norm  
Z
by
 
Z
=
p
 
p
X
+ 
p
Y
For simplicity, we assume that Z is complete with respect to the norm  
Z
.
Let X and Y be the completion of Z under  
X
and  
Y
, respectively.
Here, one can choose p, 1 ≤ p ≤ ∞, according to what one needs. It’s easy to
see that Z = X ∩ Y .
Theorem 3.3.1 Let T be a contraction map from X into itself and from Y
into itself. Assume that f ∈ X, and that there exits a function g ∈ Z such
that f = Tf +g. Then f also belongs to Z.
Proof. Step 1. First show that T : Z −→ Z is a contraction. Since T is a
contraction on X, there exists a constant θ
1
, 0 < θ
1
< 1 such that
Th
1
−Th
2

X
≤ θ
1
h
1
−h
2

X
.
Similarly, we can ﬁnd a constant θ
2
, 0 < θ
2
< 1 such that
Th
1
−Th
2

Y
≤ θ
2
h
1
−h
2

Y
.
Let θ = max¦θ
1
, θ
2
¦. Then, for any h
1
, h
2
∈ Z,
Th
1
−Th
2

Z
=
p
Th
1
−Th
2

p
X
+Th
1
−Th
2

p
Y
≤
p
θ
p
1
h
1
−h
2

p
X
+θ
p
2
h
1
−h
2

p
Y
≤ θh
1
−h
2

Z
.
Step 2. Since T : Z −→ Z is a contraction, given g ∈ Z, we can ﬁnd a
solution h ∈ Z such that h = Th + g. We see that T : X −→ X is also a
contraction and g ∈ Z ⊂ X. The equation x = Tx + g has a unique solution
in X. Thus, f = h ∈ Z since both h and f are solutions of x = Tx +g in X.
3.3.3 Applications to PDEs
Now, we explain how the “Regularity Lifting Theorem” proved in the previous
subsection can be used to boost the regularity of week solutions involving
critical exponent:
−´u = u
n+2
n−2
. (3.58)
Still assume that Ω is a smooth bounded domain in R
n
with n ≥ 3. Let
u ∈ H
1
0
(Ω) be a weak solution of equation (3.58). Then by Sobolev embedding,
u ∈ L
2n
n−2
(Ω).
98 3 Regularity of Solutions
We can split the right hand side of (3.58) in two parts:
u
n+2
n−2
= u
4
n−2
u := a(x)u.
Then obviously, a(x) ∈ L
n
2
(Ω). Hence, more generally, we consider the regu
larity of the weak solution of the following equation
−´u = a(x)u +b(x). (3.59)
Theorem 3.3.2 Assume that a(x), b(x) ∈ L
n
2
(Ω). Let u be any weak solution
of equation (3.59) in H
1
0
(Ω). Then u is in L
p
(Ω) for any 1 ≤ p < ∞.
Remark 3.3.1 Even in the best situation when a(x) ≡ 0, we can only have
u ∈ W
2,
n
2
(Ω) via the W
2,p
estimate, and hence derive that u ∈ L
p
(Ω) for any
p < ∞ by Sobolev embedding.
Now let’s come back to nonlinear equation (3.58). Assume that u is a
H
1
0
(Ω) weak solution. From the above theorem, we ﬁrst conclude that u is
in L
q
(Ω) for any 1 < q < ∞. Then by our Regularity Theorem 3.2.1, u is
in W
2,q
(Ω). This implies that u ∈ C
1,α
(Ω) via Sobolev embedding. Finally,
from the classical C
2,α
regularity, we arrive at u ∈ C
2,α
(Ω).
Corollary 3.3.1 If u is a H
1
0
(Ω) weak solution of equation (3.58), then u ∈
C
2,α
(Ω), and hence it is a classical solution.
Proof of Theorem 3.3.2.
Let G(x, y) be the Green’s function of −´ in Ω. Deﬁne
(Tf)(x) = (−´)
−1
f(x) =
Ω
G(x, y)f(y)dy.
Then obviously
0 < G(x, y) < Γ([x −y[) :=
C
n
[x −y[
n−2
,
and it follows that, for any q ∈ (1,
n
2
),
Tf
L
nq
n−2q
(Ω)
≤ Γ ∗ f
L
nq
n−2q
(R
n
)
≤ Cf
L
q
(Ω)
.
Here, we may extend f to be zero outside Ω when necessary.
For a positive number A, deﬁne
a
A
(x) =
a(x) if [a(x)[ ≥ A
0 otherwise ,
and
a
B
(x) = a(x) −a
A
(x).
3.3 Regularity Lifting 99
Let
(T
A
u)(x) =
Ω
G(x, y)a
A
(y)u(y)dy.
Then equation (3.59) can be written as
u(x) = (T
A
u)(x) +F
A
(x),
where
F
A
(x) =
Ω
G(x, y)[a
B
(y)u(y) +b(y)]dy.
We will show that, for any 1 ≤ p < ∞,
i) T
A
is a contracting operator from L
p
(Ω) to L
p
(Ω) for A large, and
ii) F
A
(x) is in L
p
(Ω).
Then by the “Regularity Lifting Theorem”, we derive immediately that
u ∈ L
p
(Ω).
i) The Estimate of the Operator T
A
.
For any
n
n−2
< p < ∞, there is a q, 1 < q <
n
2
, such that
p =
nq
n −2q
and then by H¨older inequality, we derive
T
A
u
L
p
(Ω)
≤ Γ ∗ a
A
u
L
p
(R
n
)
≤ Ca
A
u
L
q
(Ω)
≤ Ca
A

L
n
2 (Ω)
u
L
p
(Ω)
.
Since a(x) ∈ L
n
2
(Ω), one can choose a large number A, such that
a
A

L
n
2 (Ω)
≤
1
2
and hence arrive at
T
A
u
L
p
(Ω)
≤
1
2
u
L
p
(Ω)
.
That is T
A
: L
p
(Ω)→L
p
(Ω) is a contracting operator.
ii) The Integrability of F
A
(x).
(a) First consider
F
1
A
(x) :=
Ω
G(x, y)b(y)dy.
For any
n
n−2
< p < ∞, choose 1 < q <
n
2
, such that p =
nq
n−2q
. Extending b(x)
to be zero outside Ω, we have
F
1
A

L
p
(Ω)
≤ Γ ∗ b
L
p
(R
n
)
≤ Cb
L
q
(Ω)
≤ Cb
L
n
2 (Ω)
< ∞.
Hence
F
1
A
(x) ∈ L
p
(Ω).
100 3 Regularity of Solutions
(b) Then estimate
F
2
A
(x) =
Ω
G(x, y)a
B
(y)u(y)dy.
By the boundedness of a
B
(x), we have
F
2
A

L
p
(Ω)
≤ a
B
u
L
q
(Ω)
≤ Cu
L
q
(Ω)
.
Noticing that u ∈ L
q
(Ω) for any 1 < q ≤
2n
n −2
, and
p =
2n
n −6
when q =
2n
n −2
,
we conclude that, for the following values of p and dimension n,
1 < p < ∞ when 3 ≤ n ≤ 6
1 < p <
2n
n−6
when n > 6,
F
A
(x) ∈ L
p
(Ω), and hence T
A
is a contracting operator from L
p
(Ω) to L
p
(Ω).
Applying the Regularity Lifting Theorem, we arrive at
u ∈ L
p
(Ω) , for any 1 < p < ∞ if 3 ≤ n ≤ 6
u ∈ L
p
(Ω) , for any 1 < p <
2n
n−6
if n > 6.
The only thing that prevent us from reaching the full range of p is the
estimate of F
2
A
(x). However, from the starting point where u ∈ L
2n
n−2
(Ω), we
have arrived at
u ∈ L
p
(Ω) , for any 1 < p <
2n
n −6
.
Now from this point on, through an entire similar argument as above, we will
reach
u ∈ L
p
(Ω) , for any 1 < p < ∞ if 3 ≤ n ≤ 10
u ∈ L
p
(Ω) , for any 1 < p <
2n
n−10
if n > 10.
Each step, we add 4 to the dimension n for which holds
u ∈ L
p
(Ω) , 1 ≤ p < ∞.
Continuing this way, we prove our theorem for all dimensions. This completes
the proof of the Theorem. ¯ .
Now we will use the similar idea and the Regularity Lifting Theorem to
carry out Stage 3 of the proof of the Regularity Theorem.
Let L be a second order uniformly elliptic operator in divergence form
Lu = −
n
¸
i,j=1
(a
ij
(x)u
xi
)
xj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u. (3.60)
we will prove
3.3 Regularity Lifting 101
Theorem 3.3.3 (W
2,p
Regularity under Weaker Conditions)
Let Ω be a bounded domain in R
n
and 1 < p < ∞. Assume that
i) a
ij
(x) ∈ W
1,n+δ
(Ω) for some δ > 0;
ii)
b
i
(x) ∈
L
n
(Ω), if 1 < p < n
L
n+δ
(Ω), if p = n
L
p
(Ω), if n < p < ∞;
iii)
c(x) ∈
L
n/2
(Ω), if 1 < p < n/2
L
n/2+δ
(Ω), if p = n/2
L
p
(Ω), if n/2 < p < ∞.
Suppose f ∈ L
p
(Ω) and u is a W
1,p
0
(Ω) weak solution of
Lu = f(x) x ∈ Ω. (3.61)
Then u is in W
2,p
(Ω).
Proof. Write equation (3.61) as
−
n
¸
i,j=1
(a
ij
(x)u
xi
)
xj
= −
n
¸
i=1
b
i
(x)u
xi
+c(x)u
+f(x).
Let
L
o
u := −
n
¸
i,j=1
(a
ij
(x)u
xi
)
xj
.
By Proposition 3.2.2, we know that equation L
o
u = 0 has only trivial
solution, and it follows from Theorem 3.2.3, the operator L
o
is invertible.
Denote it by L
−1
o
. Then L
−1
o
is a bounded linear operator from L
p
(Ω) to
W
2,p
(Ω).
For each i and each positive number A, deﬁne
b
iA
(x) =
b
i
(x) , if [b
i
(x)[ ≥ A
0 , elsewhere .
Let
b
iB
(x) := b
i
(x) −b
iA
(x).
Similarly for C
A
(x) and C
B
(x).
Now we can rewrite equation (3.61) as
u = T
A
u +F
A
(u, x) (3.62)
where
T
A
u = −L
−1
o
¸
i
b
iA
(x)u
xi
+c
A
(x)u
102 3 Regularity of Solutions
and
F
A
(u, x) = −L
−1
o
¸
i
b
iB
(x)u
xi
+c
B
(x)u
+L
−1
o
f(x).
We ﬁrst show that
F
A
(u, ) ∈ W
2,p
(Ω) , ∀u ∈ W
1,p
(Ω), f ∈ L
p
(Ω). (3.63)
Since b
iB
(x) and c
iB
(x) are bounded, one can easily verify that
¸
i
b
iB
(x)u
xi
+c
B
(x)u
∈ L
p
(Ω)
for any u ∈ W
1,p
(Ω). This implies (3.63).
Then we prove that T
A
is a contracting operator fromW
2,p
(Ω) to W
2,p
(Ω).
In fact, by H¨older inequality and Sobolev embedding from W
2,p
to W
1,q
,
one can see that, for any v ∈ W
2,p
(Ω),
b
iA
Dv
L
p ≤ C
b
iA

L
nv
W
2,p , if 1 < p < n
b
iA

L
n+δ v
W
2,p , if p = n
b
iA

L
pv
W
2,p , if n < p < ∞,
(3.64)
and
c
A
v
L
p ≤ C
c
A

L
n/2v
W
2,p , if 1 < p < n/2
c
A

L
n/2+δ v
W
2,p , if p = n/2
c
A

L
pv
W
2,p , if n/2 < p < ∞.
(3.65)
Under the conditions of the theorem, the ﬁrst norms on the right hand
side of (3.64) and (3.65), i.e. b
iA

L
n, c
A

L
n/2 can be made arbitrarily small
if A is suﬃciently large. Hence for any > 0, there exists an A > 0, such that

¸
i
b
iA
(x)v
xi
+c
A
(x)v
L
p ≤ v
W
2,p , ∀ v ∈ W
2,p
(Ω).
Taking into account that L
−1
o
is a bounded operator from L
p
(Ω) to
W
2,p
(Ω), we deduce that T
A
is a contract mapping from W
2,p
(Ω) to W
2,p
(Ω).
It follows from the Contract Mapping Theorem, there exist a v ∈ W
2,p
(Ω),
such that
v = T
A
v +F
A
(u, x).
Now u and v satisfy the same equation. By uniqueness, we must have
u = v. Therefore we derive that u ∈ W
2,p
(Ω). This completes the proof of the
theorem. ¯ .
3.3 Regularity Lifting 103
3.3.4 Applications to Integral Equations
As another application of the Regularity Lifting Theorem, we consider the
following integral equation
u(x) =
R
n
1
[x −y[
n−α
u(y)
n+α
n−α
dy , x ∈ R
n
, (3.66)
where α is any real number satisfying 0 < α < n.
It arises as an EulerLagrange equation for a functional under a constraint
in the context of the HardyLittlewoodSobolev inequalities.
It is also closely related to the following family of semilinear partial dif
ferential equations
(−∆)
α/2
u = u
(n+α)/(n−α)
, x ∈ R
n
n ≥ 3. (3.67)
Actually, one can prove (See [CLO]) that the integral equation (3.66) and the
PDE (3.67) are equivalent.
In the special case α = 2, (3.67) becomes
−∆u = u
(n+2)/(n−2)
, x ∈ R
n
.
which is the well known Yamabe equation.
In the context of the HardyLittlewoodSobolev inequality, it is natural
to start with u ∈ L
2n
n−α
(R
n
). We will use the Regularity Lifting Theorem to
boost u to L
q
(R
n
) for any 1 < q < ∞, and hence in L
∞
(R
n
). More generally,
we consider the following equation
u(x) =
R
n
K(y)[u(y)[
p−1
u(y)
[x −y[
n−α
dy (3.68)
for some 1 < p < ∞. We prove
Theorem 3.3.4 Let u be a solution of (3.68). Assume that
u ∈ L
qo
(R
n
) for some q
o
>
n
n −α
, (3.69)
R
n
[K(y)[u(y)[
p−1
[
n
α
dy < ∞ , and [K(y)[ ≤ C(1 +
1
[y[
γ
) for some γ < α.
(3.70)
Then u is in L
q
(R
n
) for any 1 < q < ∞.
Remark 3.3.2 i) If
p >
n
n −α
, [K(x)[ ≤ M, and u ∈ L
n(p−1)
α
(R
n
), (3.71)
then conditions (3.69) and (3.70) are satisﬁed.
104 3 Regularity of Solutions
ii) Last condition in (3.71) is somewhat sharp in the sense that if it is
violated, then equation (3.68) when K(x) ≡ 1 possesses singular solutions
such as
u(x) =
c
[x[
α
p−1
.
Proof of Theorem 3.3.4: Deﬁne the linear operator
T
v
w =
R
n
K(y)[v(y)[
p−1
w
[x −y[
n−α
dy.
For any real number a > 0, deﬁne
u
a
(x) = u(x) , if [u(x)[ > a , or if [x[ > a
u
a
(x) = 0 , otherwise .
Let u
b
(x) = u(x) −u
a
(x).
Then since u satisﬁes equation (3.68), one can verify that u
a
satisﬁes the
equation
u
a
= T
ua
u
a
+g(x) (3.72)
with the function
g(x) =
R
n
K(y)[u
b
(y)[
p−1
u
b
(y)
[x −y[
n−α
dy −u
b
(x).
Under the second part of the condition (3.70), it is obvious that
g(x) ∈ L
∞
∩ L
qo
.
For any q >
n
n−α
, we ﬁrst apply the HardyLittlewoodSobolev inequality
to obtain
T
ua
w
L
q ≤ C(α, n, q)K[u
a
[
p−1
w
L
nq
n+αq
. (3.73)
Then write H(x) = K(x)[u
a
(x)[
p−1
, we apply the H¨older inequality to the
right hand side of the above inequality
R
n
[H(y)[
nq
n+αq
[f(y)[
nq
n+αq
dy
n+αq
nq
≤
R
n
[H(y)[
nq
n+αq
·r
dy
1
r
R
n
[f(y)[
nq
n+αq
·s
dy
1
s
¸
n+αq
nq
=
R
n
[H(y)[
n
α
dy
α
n
w
L
q . (3.74)
Here we have chosen
s =
n +αq
n
and r =
n +αq
αq
.
3.3 Regularity Lifting 105
It follows from (3.73) and (3.74) that
T
ua
w
L
q ≤ C(α, n, q)(
[K(y)[u
a
(y)[
p−1
[
n
α
dy)
α
n
w
L
q . (3.75)
By (3.70) and (3.75), we deduce, for suﬃciently large a,
T
ua
w
L
q ≤
1
2
w
L
q . (3.76)
Applying (3.76) to both the case q = q
o
and the case q = p
o
> q
o
, and by
the Contracting Mapping Theorem, we see that the equation
w = T
ua
w +g(x) (3.77)
has a unique solution in both L
qo
and L
po
∩L
qo
. From (3.72), u
a
is a solution
of (3.77) in L
qo
. Let w be the solution of (3.77) in L
po
∩L
qo
, then w is also a
solution in L
qo
. By the uniqueness, we must have u
a
= w ∈ L
po
∩L
qo
for any
p
o
>
n
n−α
. So does u. This completes the proof of the Theorem.
4
Preliminary Analysis on Riemannian Manifolds
4.1 Diﬀerentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
4.4.1 Curvature of Plane Curves
4.4.2 Curvature of Surfaces in R
3
4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
4.5.1 Higher Order Covariant Derivatives and the LaplaceBertrami Op
erator
4.5.2 Integrals
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
4.6 Sobolev Embeddings
According to Einstein, the Universe we live in is a curved space. We call
this curved space a manifold, which is a generalization of the ﬂat Euclidean
space. Roughly speaking, each neighborhood of a point on a manifold is home
omorphic to an open set in the Euclidean space, and the manifold as a whole
is obtained by pasting together pieces of the Euclidean spaces. In this section,
we will introduce the most basic concepts of diﬀerentiable manifolds, tangent
space, Riemannian metrics, and curvatures. We will try to be as intuitive and
brief as possible. For more rigorous arguments and detailed discussions, please
see, for example, the book Riemannian Geometry by do Carmo [Ca].
4.1 Diﬀerentiable Manifolds
A simple example of a two dimensional diﬀerentiable manifold is a regular
surface in R
3
. Intuitively, it is a union of open sets of R
2
, organized in such a
108 4 Preliminary Analysis on Riemannian Manifolds
way that at the intersection of any two open sets the change from one to the
other can be made in a diﬀerentiable manner. More precisely, we have
Deﬁnition 4.1.1 (Diﬀerentiable Manifolds)
A diﬀerentiable ndimensional manifold M is
(i) a union of open sets, M =
¸
α
V
α
and a family of homeomorphisms φ
α
from V
α
to an open set φ
α
(V
α
) in
R
n
, such that
(ii) For any pair α and β with V
α
¸
V
β
= U = ∅, the images of the
intersection φ
α
(U) and φ
β
(U) are open sets in R
n
and the mappings φ
β
◦φ
−1
α
are diﬀerentiable.
(iii) The family ¦V
α
, φ
α
¦ are maximal relative to the conditions (i) and
(ii).
A manifold M is a union of open sets. In each open set U ⊂ M, one
can deﬁne a coordinates chart. In fact, let φ
U
be the homeomorphism from
U to φ
U
(U) in R
n
, for each p ∈ U, if φ
U
(p) = (x
1
, x
2
, , x
n
) in R
n
, then
we can deﬁne the coordinates of p to be (x
1
, x
2
, x
n
). This is called a local
coordinates of p, and (U, φ
U
) is called a coordinates chart. If a family of
coordinates charts that covers M are well made so that the transition from
one to the other is in a diﬀerentiable way as in condition (ii), then it is called
a diﬀerentiable structure of M. Given a diﬀerentiable structure on M, one
can easily complete it into a maximal one, by taking the union of all the
coordinates charts of M that are compatible with (in the sense of condition
(ii)) the ones in the given structure.
A trivial example of ndimensional manifolds is the Euclidean space R
n
,
with the diﬀerentiable structure given by the identity. The following are two
nontrivial examples.
Example 1. The ndimensional unit sphere
S
n
= ¦x ∈ R
n+1
[ x
2
1
+ +x
2
n+1
= 1¦.
Take the unit circle S
1
for instance, we can use the following four coordinates
charts:
V
1
= ¦x ∈ S
1
[ x
2
> 0¦, φ
1
(x) = x
1
,
V
2
= ¦x ∈ S
1
[ x
2
< 0¦, φ
2
(x) = x
1
,
V
3
= ¦x ∈ S
1
[ x
1
> 0¦, φ
3
(x) = x
2
,
V
4
= ¦x ∈ S
1
[ x
1
< 0¦, φ
4
(x) = x
2
.
Obviously, S
1
is the union of the four open sets V
1
, V
2
, V
3
, and V
4
. On the
intersection of any two open sets, say on V
1
¸
V
3
, we have
x
2
=
1 −x
2
1
, x
1
> 0;
x
1
=
1 −x
2
2
, x
2
> 0.
They are both diﬀerentiable functions. Same is true on other intersections.
Therefore, with this diﬀerentiable structure, S
1
is a 1dimensional diﬀeren
tiable manifold.
4.2 Tangent Spaces 109
Example 2. The ndimensional real projective space P
n
(R). It is the set of
straight lines of R
n+1
which passes through the origin (0, , 0) ∈ R
n+1
, or
the quotient space of R
n+1
` ¦0¦ by the equivalence relation
(x
1
, x
n+1
) ∼ (λx
1
, , λx
n+1
), λ ∈ R, λ = 0.
We denote the points in P
n
(R) by [x]. Observe that if x
i
= 0, then
[x
1
, , x
n+1
] = [x
1
/x
i
, x
i−1
/x
i
, 1, x
i+1
/x
i
, x
n+1
/x
i
].
For i = 1, 2, , n + 1, let
V
i
= ¦[x] [ x
i
= 0¦.
Apparently,
P
n
(R) =
n+1
¸
i=1
V
i
.
Geometrically, V
i
is the set of straight lines of R
n+1
which passes through the
origin and which do not belong to the hyperplane x
i
= 0.
Deﬁne
φ
i
([x]) = (ξ
i
1
, , ξ
i
i−1
, ξ
i
i+1
, , ξ
i
n+1
),
where ξ
i
k
= x
k
/x
i
(i = k) and 1 ≤ i ≤ n + 1.
On the intersection V
i
¸
V
j
, the changes of coordinates
ξ
j
k
=
ξ
i
k
ξ
i
j
k = i, j
ξ
j
i
=
1
ξ
i
j
.
are obviously smooth, thus ¦V
i
, φ
i
¦ is a diﬀerentiable structure of P
n
(R) and
hence P
n
(R) is a diﬀerentiable manifold.
4.2 Tangent Spaces
We know that at every point of a regular curve or at a regular surface, there
exists a tangent line or a tangent plane. Similarly, given a diﬀerentiable struc
ture on a manifold, we can deﬁne a tangent space at each point. For a regular
surfaces S in R
3
, at each given point p, the tangent plane is the space of all
tangent vectors, and each tangent vector is deﬁned as the “velocity in the am
bient space R
3
. Since on a diﬀerential manifold, there is no such an ambient
space, we have to ﬁnd a characteristic property of the tangent vector which
will not involve the concepts of velocity. To this end, let’s consider a smooth
curve in R
n
:
γ(t) = (x
1
(t), , x
n
(t)), t ∈ (−, ), with γ(0) = p.
110 4 Preliminary Analysis on Riemannian Manifolds
The tangent vector of γ at point p is
v = (x
1
(0), , x
n
(0)).
Let f(x) be a smooth function near p. Then the directional derivative of f
along vector v ∈ R
n
can be expressed as
d
dt
(f ◦ γ) [
t=0
=
n
¸
i=1
∂f
∂x
i
(p)
∂x
i
∂t
(0) = (
¸
i
x
i
(0)
∂
∂x
i
)f.
Hence the directional derivative is an operator on diﬀerentiable functions that
depends uniquely on the vector v. We can identify v with this operator and
thus generalize the concept of tangent vectors to diﬀerentiable manifolds M.
Deﬁnition 4.2.1 A diﬀerentiable function γ : (−, )→M is called a (diﬀer
entiable) curve in M.
Let D be the set of all functions that are diﬀerentiable at point p = γ(0) ∈
M. The tangent vector to the curve γ at t = 0 is an operator (or function)
γ
(0) : D→R given by
γ
(0)f =
d
dt
(f ◦ γ) [
t=0
.
A tangent vector at p is the tangent vector of some curve γ at t = 0 with
γ(0) = p. The set of all tangent vectors at p is called the tangent space at p
and denoted by T
p
M.
Let (x
1
, , x
n
) be the coordinates in a local chart (U, φ) that covers p
with φ(p) = 0 ∈ φ(U). Let γ(t) = (x
1
(t), , x
n
(t)) be a curve in M with
γ(0) = p. Then
γ
(0)f =
d
dt
f(γ(t)) [
t=0
=
d
dt
f(x
1
(t), , x
n
(t)) [
t=0
=
¸
i
x
i
(0)
∂f
∂x
i
=
¸
i
x
i
(0)
∂
∂x
i
p
f.
This shows that the vector γ
(0) can be expressed as the linear combination
of
∂
∂xi
p
, i.e.
γ
(0) =
¸
i
x
i
(0)
∂
∂x
i
p
.
Here
∂
∂xi
p
is the tangent vector at p of the “coordinate curve”:
x
i
→φ
−1
(0, , 0, x
i
, 0, , 0);
and
4.2 Tangent Spaces 111
∂
∂x
i
p
f =
∂
∂x
i
f(φ
−1
(0, , 0, x
i
, 0, , 0)) [
xi=0
.
One can see that the tangent space T
p
M is an ndimensional linear space with
the basis
∂
∂x
1
p
, ,
∂
∂x
n
p
.
The dual space of the tangent space T
p
M is called the cotangent space,
and denoted by T
∗
p
M. We can realize it in the following.
We ﬁrst introduce the diﬀerential of a diﬀerentiable mapping.
Deﬁnition 4.2.2 Let M and N be two diﬀerential manifolds and let f :
M→N be a diﬀerentiable mapping. For every p ∈ M and for each v ∈ T
p
M,
choose a diﬀerentiable curve γ : (−, )→M, with γ(0) = p and γ
(0) = v. Let
β = f ◦ γ. Deﬁne the mapping
df
p
: T
p
M→T
f(p)
N
by
df
p
(v) = β
(0).
We call df
p
the diﬀerential of f. Obviously, it is a linear mapping from one
tangent space T
p
M to the other T
f(p)
N; And one can show that it does not
depend on the choice of γ (See [Ca]). When N = R
1
, the collection of all such
diﬀerentials form the cotangent space. From the deﬁnition, one can derive
that
df
p
(
∂
∂x
i
p
) =
∂
∂x
i
p
f.
In particular, for the diﬀerential (dx
i
)
p
of the coordinates function x
i
, we have
(dx
i
)
p
(
∂
∂x
j
p
) = δ
ij
.
And consequently,
df
p
=
¸
i
∂f
∂x
i
p
(dx
i
)
p
.
From here we can see that
(dx
1
)
p
, (dx
2
)
p
, , (dx
n
)
p
form a basis of the cotangent space T
∗
p
M at point p.
Deﬁnition 4.2.3 The tangent space of M is
TM =
¸
p∈M
T
p
M.
And the cotangent space of M is
T
∗
M =
¸
p∈M
T
∗
p
M.
112 4 Preliminary Analysis on Riemannian Manifolds
4.3 Riemannian Metrics
To measure the arc length, area, or volume on a manifold, we need to introduce
some kind of measurements, or ‘metrics’. For a surface S in the three dimen
sional Euclidean space R
3
, there is a natural way of measuring the length of
vectors tangent to S, which are simply the length of the vectors in the ambient
space R
3
. These can be expressed in terms of the inner product <, > in R
3
.
Given a curve γ(t) on S for t ∈ [a, b], its length is the integral
b
a
[γ
(t)[dt,
where the length of the velocity vector γ
(t) is given by the inner product in
R
3
:
[γ
(t)[ =
< γ
(t), γ
(t) >.
The deﬁnition of the inner product <, > enable us to measure not only
the lengths of curves on S, but also the area of domains on S, as well as other
quantities in geometry.
Now we generalize this concept to diﬀerentiable manifolds without using
the ambient space. More precisely, we have
Deﬁnition 4.3.1 A Riemannian metric on a diﬀerentiable manifold M is a
correspondence which associates to each point p of M an inner product < , >
p
on the tangent space T
p
M, which varies diﬀerentiably, that is,
g
ij
(q) ≡<
∂
∂x
i
q
,
∂
∂x
j
q
>
q
is a diﬀerentiable function of q for all q near p.
A diﬀerentiable manifold with a given Riemannian metric will be called a
Riemannian manifold.
It is clear this deﬁnition does not depend on the choice of coordinate
system. Also one can prove that ( see [Ca])
Proposition 4.3.1 A diﬀerentiable manifold M has a Riemannian metric.
Now we show how a Riemannian metric can be used to measure the length
of a curve on a manifold M.
Let I be an open interval in R
1
, and let γ : I→M be a diﬀerentiable
mapping ( curve ) on M. For each t ∈ I, dγ is a linear mapping from T
t
I to
T
γ(t)
M, and γ
(t) ≡
dγ
dt
≡ dγ(
d
dt
) is a tangent vector of T
γ(t)
M. The restriction
of the curve γ to a closed interval [a, b] ⊂ I is called a segment. We deﬁne the
length of the segment by
b
a
< γ
(t), γ
(t) >
1/2
dt.
4.3 Riemannian Metrics 113
Here the arc length element is
ds =< γ
(t), γ
(t) >
1/2
dt. (4.1)
As we mentioned in the previous section, let (x
1
, , x
n
) be the coordinates
in a local chart (U, φ) that covers p = γ(t) = (x
1
(t), , x
n
(t)). Then
γ
(t) =
¸
i
x
i
(t)
∂
∂x
i
p
.
Consequently,
ds
2
= < γ
(t), γ
(t) > dt
2
=
n
¸
i,j=1
<
∂
∂x
i
,
∂
∂x
j
>
p
x
i
(t)dt x
j
(t)dt
=
n
¸
i,j=1
g
ij
(p)dx
i
dx
j
.
This expresses the length element ds in terms of the metric g
ij
.
To derive the formula for the volume of a region (an open connected subset)
D in M in terms of metric g
ij
, let’s begin with the volume of the parallelepiped
form by the tangent vectors. Let ¦e
1
, , e
n
¦ be an orthonormal basis of T
p
M.
Let
∂
∂x
i
(p) =
¸
j
a
ij
e
j
, i = 1, n.
Then
g
ik
(p) =<
∂
∂x
i
,
∂
∂x
k
>
p
=
¸
j l
a
ij
a
kl
< e
j
, e
l
>=
¸
j
a
ij
a
kj
. (4.2)
We know the volume of the parallelepiped form by
∂
∂x1
,
∂
∂xn
is the deter
minant det(a
ij
), and by virtue of (4.2), it is the same as
det(g
ij
).
Assume that the region D is cover by a coordinate chart (U, φ) with co
ordinates (x
1
, x
n
). From the above arguments, it is natural to deﬁne the
volume of D as
vol(D) =
φ(U)
det(g
ij
)dx
1
dx
n
.
A trivial example is M = R
n
, the ndimensional Euclidean space with
∂
∂x
i
= e
i
= (0, , 1, 0, , 0).
The metric is given by
g
ij
=< e
i
, e
j
>= δ
ij
.
114 4 Preliminary Analysis on Riemannian Manifolds
A less trivial example is S
2
, the 2dimensional unit sphere with standard
metric, i.e., with the metric inherited from the ambient space R
3
= ¦(x, y, z)¦.
We will use the usual spherical coordinates (θ, ψ) with
x = sin θ cos ψ
y = sin θ sin ψ
z = cos θ
to illustrate how we use the measurement (inner product) in R
3
to derive the
expression of the standard metric g
ij
in term of the local coordinate (θ, ψ) on
S
2
.
Let p be a point on S
2
covered by a local coordinates chart (U, φ) with
coordinates (θ, ψ). We ﬁrst derive the expression for
∂
∂θ
p
and
∂
∂ψ
p
the
tangent vectors at point p = φ
−1
(θ, ψ) of the coordinate curves ψ = constant
and θ =constant respectively.
In the coordinates of R
3
, we have,
∂
∂θ
p
= (
∂x
∂θ
,
∂y
∂θ
,
∂z
∂θ
) = (cos θ cos ψ, cos θ sin ψ, −sin θ),
and
∂
∂ψ
p
= (
∂x
∂ψ
,
∂y
∂ψ
,
∂z
∂ψ
) = (−sin θ sin ψ, sin θ cos ψ, 0).
It follows that
g
11
(p) =<
∂
∂θ
,
∂
∂θ
>
p
= cos
2
θ cos
2
ψ + cos
2
θ sin
2
ψ + sin
2
θ = 1,
g
12
(p) = g
21
(p) =<
∂
∂θ
,
∂
∂ψ
>
p
= 0,
and
g
22
(p) =<
∂
∂ψ
,
∂
∂ψ
>
p
= sin
2
θ.
Consequently, the length element is
ds =
g
11
dθ
2
+ 2g
12
dθdψ +g
22
dψ
2
=
dθ
2
+ sin
2
dψ
2
,
and the area element is
dA =
det(g
ij
)dθdψ = sin θdθdψ.
These conform with our knowledge on the length and area in the spherical
coordinates.
4.4 Curvature 115
4.4 Curvature
4.4.1 Curvature of Plane Curves
Consider curves on a plane. Intuitively, we feel that a straight line does not
bend at all, and a circle of radius one bends more than a circle of radius 2.
To measure the extent of bending of a curve, or the amount it deviates from
being straight, we introduce curvature.
Let p be a point on a curve, and T(p) be the unit tangent vector at p. Let
q be a nearby point. Let ∆α be the amount of change of the angle from T(p)
to T(q), and ∆s the arc length between p and q on the curve. We deﬁne the
curvature at p as
k(p) = lim
q→p
∆α
∆s
=
dα
ds
(p).
If a plane curve is given parametrically as c(t) = (x(t), y(t)), then the unit
tangent vector at c(t) is
T(t) =
(x
(t), y
(t))
[x
(t)]
2
+ [y
(t)]
2
,
and
ds =
[x
(t)]
2
+ [y
(t)]
2
dt.
By a straight forward calculation, one can verify that
k(c(t)) =
[dT[
ds
=
x
(t)y
(t) −y
(t)x
(t)
¦[x
(t)]
2
+ [y
(t)]
2
¦
3/2
.
If a plane curve is given explicitly as y = f(x), then in the above formula,
replacing t by x and y(t) by f(x), we ﬁnd that the curvature at point (x, f(x))
is
k(x, f(x)) =
f
(x)
¦1 + [f
(x)]
2
¦
3/2
. (4.3)
4.4.2 Curvature of Surfaces in R
3
1. Principle Curvature
Let S be a surface in R
3
, p be a point on S, and N(p) a normal vector
of S at p. Given a tangent vector T(p) at p, consider the intersection of
the surface with the plane containing T(p) and N(p). This intersection is a
plane curve, and its curvature is taken as the absolute value of the normal
curvature, which is positive if the curve turns in the same direction as the
surface’s chosen normal, and negative otherwise. The normal curvature varies
as the tangent vector T(p) changes. The maximum and minimum values of
the normal curvature at point p are called principal curvatures, and usually
denoted by k
1
(p) and k
2
(p).
116 4 Preliminary Analysis on Riemannian Manifolds
2. Gaussian Curvature. One way to measure the extent of bending of a
surface is by Gaussian curvature. It is deﬁned as the product of the two
principal curvatures
K(p) = k
1
(p) k
2
(p).
From this, one can see that no matter which side normal is chosen (say, for
a closed surface, one may chose outer normals or inner normals), the Gaussian
curvature of a sphere is positive, of a one sheet hyperboloids is negative, and
of a torus or a plane is zero. A sphere of radius R has Gaussian curvature
1
R
2
.
Let f(x, y) be a diﬀerentiable function. Consider a surface S in R
3
deﬁned
as the graph of x
3
= f(x
1
, x
2
). Assume that the origin is on S, and the xy
plane is tangent to S, that is
f(0, 0) = 0 and f
1
(0, 0) = 0 = f
2
(0, 0),
where, for simplicity, we write f
i
=
∂f
∂xi
and we will use f
ij
to denote
∂
2
f
∂xi∂xj
.
We calculate the Gaussian curvature of S at the origin 0 = (0, 0, 0).
Let u = (u
1
, u
2
) be a unit vector on the tangent plane. Then by (4.3), the
normal curvature k
n
(u) in u direction is
∂
2
f
∂u
2
[(
∂f
∂u
)
2
+ 1]
3/2
(0)
where
∂f
∂u
and
∂
2
f
∂u
2
are ﬁrst and second directional derivative of f in u direction.
Using the assumption that f
1
(0) = 0 = f
2
(0), we arrive at
k
n
(u) = (f
11
u
2
1
+ 2f
12
u
1
u
2
+f
22
u
2
2
)(0) = u
T
F u,
where
F =
f
11
f
12
f
21
f
22
(0),
and u
T
denotes the transpose of u.
Let λ
1
and λ
2
be two eigenvalues of the matrix F with corresponding unit
eigenvectors e
1
and e
2
, i.e.
F e
i
= λ
i
e
i
, i = 1, 2. (4.4)
Since the matrix F is symmetric, the two eigenvectors e
1
and e
2
are orthog
onal, and it is obvious that
e
T
i
F e
i
= λ
i
, i = 1, 2.
Let θ be the angle between u and e
1
. Then we can express
u = cos θ e
1
+ sin θ e
2
.
It follows that
4.4 Curvature 117
k
n
(u) = u
T
F u = cos
2
θ λ
1
+ sin
2
θ λ
2
.
Assume that λ
1
≤ λ
2
. Then
λ
1
≤ λ
1
+ sin
2
θ (λ
2
−λ
1
) = k
n
(u) = cos
2
θ (λ
1
−λ
2
) +λ
2
≤ λ
2
.
Hence λ
1
and λ
2
are minimum and maximum values of normal curvature,
and therefore they are principal curvatures k
1
and k
2
.
From (4.4), we see that λ
1
and λ
2
are solutions of the quadratic equation
f
11
−λ f
12
f
21
f
22
−λ
= λ
2
−(f
11
+f
22
)λ + (f
11
f
22
−f
12
f
21
) = 0.
Therefore, by deﬁnition, the Gaussian curvature at 0 is
K(0) = k
1
k
2
= λ
1
λ
2
= (f
11
f
22
−f
12
f
21
)(0).
4.4.3 Curvature on Riemannian Manifolds
On a general n dimensional Riemannian manifold M, there are several kinds
of curvatures. Before introducing them, we need a few preparations.
1. Vector Fields and Brackets
A vector ﬁeld X on M is a correspondence that assigns each point p ∈ M
a vector X(p) in the tangent space T
p
M. It is a mapping from M to T M. If
this mapping is diﬀerentiable, then we say the vector ﬁeld X is diﬀerentiable.
In a local coordinates, we can express
X(p) =
n
¸
i
a
i
(p)
∂
∂x
i
.
When applying to a smooth function f on M, it is a directional derivative
operator in the direction X(p):
(Xf)(p) =
n
¸
i
a
i
(p)
∂f
∂x
i
.
Deﬁnition 4.4.1 For two diﬀerentiable vector ﬁelds X and Y, deﬁne the Lie
Bracket as
[X, Y ] = XY −Y X.
If X(p) =
¸
n
i
a
i
(p)
∂
∂xi
and Y (p) =
¸
n
j
b
j
(p)
∂
∂xj
, then
[X, Y ]f = XY f −Y Xf =
n
¸
i,j=1
a
i
∂b
j
∂x
i
−b
i
∂a
j
∂x
i
∂f
∂x
j
.
118 4 Preliminary Analysis on Riemannian Manifolds
2. Covariant Derivatives and Riemannian Connections
To be more intuitive, we begin with a surface S in R
3
. Let c : I→S be a
curve on S, and let V (t) be a vector ﬁeld along c(t), t ∈ I. One can see that
dV
dt
is a vector in the ambient space R
3
, but it may not belong to the tangent
plane of S. To consider that rate of change of V (t) restricted to the surface S,
we make an orthogonal projection of
dV
dt
onto the tangent plane, and denote
it by
DV
dt
. We call this the covariant derivative of V (t) on S – the derivative
viewed by the twodimensional creatures on S.
On an ndimensional diﬀerentiable manifold M, the covariant derivative
is deﬁned by aﬃne connections.
Let D(M) be the set of all smooth vector ﬁelds.
Deﬁnition 4.4.2 Let X, Y, Z ∈ D(M), and let f and g be smooth functions
on M. An aﬃne connection D on a diﬀerentiable manifold M is a mapping
from D(M) D(M) to D(M) which maps (X, Y ) to D
X
Y and satisﬁes
1)D
fX+gY
Z = fD
X
Z +gD
Y
Z
2)D
X
(Y +Z) = D
X
Y +D
X
Z
3)D
X
(fY ) = fD
X
Y +X(f)Y.
Given an aﬃne connection D on M, for a vector ﬁeld V (t) = Y (c(t)) along
a diﬀerentiable curve c : I→M, deﬁne the covariant derivative of V along c(t)
as
DV
dt
= Ddc
dt
Y.
In a local coordinates, let
c(t) = (x
1
(t), x
n
(t)) and V (t) =
n
¸
i
v
i
(t)
∂
∂x
i
.
Then by the properties of the aﬃne connection, one can express
DV
dt
=
¸
i
dv
i
dt
∂
∂x
i
+
¸
i,j
dx
i
dt
v
j
D ∂
∂x
i
∂
∂x
j
. (4.5)
Recall that in Euclidean spaces with inner product <, >, a vector ﬁeld
V (t) along a curve c(t) is parallel if only if
dV
dt
= 0; and for a pair of vector
ﬁelds X and Y along c(t), we have < X, Y >= constant. Similarly, we have
Deﬁnition 4.4.3 Let M be a diﬀerentiable manifold with an aﬃne connec
tion D. A vector ﬁeld V (t) along a curve c(t) ∈ M is parallel if
DV
dt
= 0.
4.4 Curvature 119
Deﬁnition 4.4.4 Let M be a diﬀerentiable manifold with an aﬃne connec
tion D and a Riemannian metric <, >. If for any pair of parallel vector ﬁelds
X and Y along any smooth curve c(t), we have
< X, Y >
c(t)
= constant ,
then we say that D is compatible with the metric <, >.
The following fundamental theorem of Levi and Civita guarantees the
existence of such an aﬃne connection on a Riemannian manifold.
Theorem 4.4.1 Given a Riemannian manifold M with metric <, >, there
exists a unique connection D on M that is compatible with <, >. Moreover
this connection is symmetric, i.e.
D
X
Y −D
Y
X = [X, Y ].
We skip the proof of the theorem. Interested readers may see the book of do
Carmo [ca] (page 55).
In a local coordinates (x
1
, , x
n
), this unique Riemannian connection can
be expressed as
D ∂
∂x
i
∂
∂x
j
=
¸
k
Γ
k
ij
∂
∂x
k
,
where
Γ
k
ij
=
1
2
¸
m
∂g
jm
∂x
i
+
∂g
mi
∂x
j
−
∂g
ij
∂x
m
g
mk
is called the Christoﬀel symbols, g
ij
=<
∂
∂xi
,
∂
∂xj
>, and (g
ij
) is the inverse
matrix of (g
ij
).
3. Curvature
Deﬁnition 4.4.5 Let M be a Riemannian manifold with the Riemannian
connection D. The curvature R is a correspondence that assigns every pair of
smooth vector ﬁeld X, Y ∈ D(M) a mapping R(X, Y ) : D(M)→D(M) deﬁned
by
R(X, Y )Z = D
Y
D
X
Z −D
X
D
Y
Z +D
[X,Y ]
Z, ∀Z ∈ D(M).
Deﬁnition 4.4.6 The sectional curvature of a given two dimensional sub
space E ⊂ T
p
M at point p is
K(E) =
< R(X, Y )X, Y >
[X[
2
[Y [
2
− < X, Y >
2
,
where X and Y are any two linearly independent vectors in E, and
[X[
2
[Y [
2
− < X, Y >
2
is the area of the parallelogram spun by X and Y .
120 4 Preliminary Analysis on Riemannian Manifolds
One can show that K(E) so deﬁned is independent of the choice of X and Y .
It is actually the Gaussian curvature of the section.
Example. Let M be the unit sphere with standard metric, then one can
verify that its sectional curvature at every point is K(E) = 1.
Deﬁnition 4.4.7 Let X = Y
n
be a unit vector in T
p
M, and let ¦Y
1
, , Y
n−1
¦
be an orthonormal basis of the hyper plane in T
p
M orthogonal to X, then the
Ricci curvature in the direction X is the average of the sectional curvature of
the n −1 sections spun by X and Y
1
, X and Y
2
, , and X and Y
n−1
:
Ric
p
(X) =
1
n −1
n−1
¸
i=1
< R(X, Y
i
)X, Y
i
> .
The scalar curvature at p is
R(p) =
1
n
n
¸
i=1
Ric
p
(Y
i
).
One can show that the above deﬁnition does not depend on the choice of
the orthonormal basis. On two dimensional manifolds, Ricci curvature is the
same as sectional curvature, and they both depends only on the point p.
4.5 Calculus on Manifolds
In the previous chapter, we introduced the concept of aﬃne connection D on
a diﬀerentiable manifold M. It is a mapping from Γ(M) Γ(M) to Γ(M),
where Γ(M) is the set of all smooth vector ﬁelds on M. Here we will extend
this deﬁnition to the space of diﬀerentiable tensor ﬁelds, and thus introduce
higher order covariant derivatives. For convenience, we adapt the summation
convention and write, for instance,
X
i
∂
∂x
i
:=
¸
i
X
i
∂
∂x
i
,
Γ
j
ik
dx
k
:=
¸
k
Γ
j
ik
dx
k
,
and so on.
4.5.1 Higher Order Covariant Derivatives and the
LaplaceBertrami Operator
Let p and q be two nonnegative integers. At a point x on M, as usual, we
denote the tangent space by T
x
M. Deﬁne T
q
p
(T
x
M) as the space of (p, q)
tensors on T
x
M, that is, the space of (p +q)linear forms
4.5 Calculus on Manifolds 121
η : T
x
M T
x
M
. .. .
p
T
∗
x
M T
∗
x
M
. .. .
q
→R
1
.
In a local coordinates, we can express
T = T
j1···jq
i1···ip
dx
i1
⊗ ⊗dx
ip
⊗
∂
∂x
j1
⊗ ⊗
∂
∂x
jq
.
Recall that, for the aﬃne connection D, in a local coordinates, if we set
i
= D ∂
∂x
i
,
then
i
∂
∂x
j
(x) = Γ
k
ij
∂
∂x
k
x
,
where Γ
k
ij
are the Christoﬀel symbols. For smooth vector ﬁelds X, Y ∈ Γ(M)
with
X = (X
1
, , X
n
) and Y = (Y
1
, , Y
n
),
by the bilinear properties of the connection, we have
D
X
Y = D
X
i ∂
∂x
i
Y = X
i
i
Y
= X
i
∂Y
j
∂x
i
∂
∂x
j
+Y
j
Γ
k
ij
∂
∂x
k
= X
i
∂Y
j
∂x
i
+Γ
j
ik
Y
k
∂
∂x
j
(4.6)
Now we naturally extend this aﬃne connection D to diﬀerentiable (p, q)
tensor ﬁelds T on M. For a point x on M, and a vector X ∈ T
x
M, D
X
T is
deﬁned to be a (p, q)tensor on T
x
M by
D
X
T(x) = X
i
(
i
T)(x),
where
(
i
T)(x)
j1···jq
i1···ip
=
∂T
j1···jq
i1···ip
∂x
i
x
−
p
¸
k=1
Γ
α
ii
k
(x)T(x)
j1···jq
i1···i
k−1
αi
k+1
···ip
+
q
¸
k=1
Γ
j
k
iα
(x)T(x)
j1···j
k−1
αj
k+1
jq
i1···ip
. (4.7)
Notice that (4.6) is a particular case when (4.7) is applied to the (0, 1)
tensor Y . If we apply it to a (1, 0)tensor, say dx
j
, then
i
(dx
j
) = −Γ
j
ik
dx
k
. (4.8)
Given two diﬀerentiable tensor ﬁelds T and
˜
T, we have
122 4 Preliminary Analysis on Riemannian Manifolds
D
X
(T ⊗
˜
T) = D
X
T ⊗
˜
T +T ⊗D
X
˜
T.
For a diﬀerentiable (p, q)tensor ﬁeld T, we deﬁne T to be the (p +1, q)
tensor ﬁeld whose components are given by
(T)
j1···jq
i1···ip+1
= (
i1
T)
j1···jq
i2···ip+1
.
Similarly, one can deﬁne
2
T,
3
T, .
For example, a smooth function f(x) on M is a (0, 0)tensor. Hence f is
a (1, 0)tensor deﬁned by
f =
i
fdx
i
=
∂f
∂x
i
dx
i
.
And
2
f is a (2, 0)tensor:
2
f =
j
∂f
∂x
i
dx
i
⊗dx
j
=
j
∂f
∂x
i
dx
i
+
∂f
∂x
i
j
(dx
i
)
dx
j
=
∂
2
f
∂x
i
∂x
j
−Γ
k
ij
∂f
∂x
k
dx
i
⊗dx
j
.
Here we have used (4.7) and (4.8). In the Riemannian context,
2
f is called
the Hessian of f and denoted by Hess(f). Its ij
th
component is
(
2
f)
ij
=
∂
2
f
∂x
i
∂x
j
−Γ
k
ij
∂f
∂x
k
.
The trace of the Hessian matrix ((
2
f)
ij
) is deﬁned to be the Laplace
Bertrami operator ´ acting on f,
´f = g
ij
∂
2
f
∂x
i
∂x
j
−Γ
k
ij
∂f
∂x
k
where (g
ij
) is the inverse matrix of (g
ij
). Through a straight forward calcula
tion, one can verify that
´ =
1
[g[
n
¸
i,j=1
∂
∂x
i
(
[g[ g
ij
∂
∂x
j
)
where [g[ is the determinant of (g
ij
).
4.5.2 Integrals
On a smooth ndimensional Riemannian manifold (M, g), one can deﬁne a
natural positive Radon measure on M, and the theory of Lebesgue integral
applies.
4.5 Calculus on Manifolds 123
Let (U
i
, φ
i
)
i
be a family of coordinates charts that covers M. We say that
a family (U
i
, φ
i
, η
i
)
i
is a partition of unity associate to (U
i
, φ
i
)
i
if
(i) (η
i
)
i
is a smooth partition of unity associated to the covering (U
i
)
i
,
and
(ii) for any i, supp η
i
⊂ U
i
.
One can show that, for each family of coordinates chart (U
i
, φ
i
)
i
of M,
there exists a corresponding family of partition of unity (U
i
, φ
i
, η
i
)
i
.
Let f be a continuous function on M with compact support, and given a
family of coordinates charts (U
i
, φ
i
)
i
of M, we deﬁne its integral on M as the
sum of the integrals on open sets in R
n
:
M
fdV
g
=
¸
i
φi(Ui)
(η
i
[g[f) ◦ φ
−1
i
dx
where (U
i
, φ
i
, η
i
)
i
is a family of partition of unity associated to (U
i
, φ
i
)
i
, [g[ is
the determinant of (g
ij
) in the charts (U
i
, φ
i
)
i
, and dx stands for the volume
element of R
n
. One can prove that such a deﬁnition does not depend on the
choice of the charts (U
i
, φ
i
)
i
and the partition of unity (U
i
, φ
i
, η
i
)
i
.
For smooth functions u and v on a compact manifold M, one can verify
the following integration by parts formula
−
M
´
g
uvdV
g
=
M
< u, v > dV
g
, (4.9)
where ´
g
is the LaplaceBertrami operator associated to the metric g, and
< u, v >= g
ij
i
u
j
v = g
ij
∂u
∂x
i
∂v
∂x
j
is the scalar product associated with g for 1forms.
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
A Riemannian metric g on M is said to be pointwise conformal to another
metric g
o
if there exist a positive function ρ, such that
g(x) = ρ(x)g
o
(x), ∀ x ∈ M.
Suppose M is a two dimensional Riemannian manifold with a metric g
o
and
the corresponding Gaussian curvature K
o
(x). Let g(x) = e
2u(x)
g
o
(x) for some
smooth function u(x) on M. Let K(x) be the Gaussian curvature associated
to the metric g. Then by a straight forward calculation, one can verify that
−´
o
u +K
o
(x) = K(x)e
2u(x)
, ∀ x ∈ M. (4.10)
Here ´
o
is the LaplaceBertrami operator associated to g
o
.
124 4 Preliminary Analysis on Riemannian Manifolds
Similarly, for conformal relation between scalar curvatures on ndimensional
manifolds ( n ≥ 3), say on standard sphere S
n
, we have
−´
o
u +
n(n −2)
4
u =
(n −2)
4(n −1)
R(x)u
n+2
n−2
, ∀ x ∈ S
n
, (4.11)
where g
o
is the standard metric on S
n
with the corresponding Laplace
Bertrami operator ´
o
, R(x) is the scalar curvature associated to the point
wise conformal metric g = u
4
n−2
g
o
.
In Chapter 5 and 6, we will study the existence and qualitative properties
of the solutions u for the above two equations respectively.
4.6 Sobolev Embeddings
Let (M, g) be a smooth Riemannian manifold. Let u be a smooth function
on M and k be a nonnegative integer. We denote
k
u the k
th
covariant
derivative of u as deﬁned in the previous section, and its norm [
k
u[ is deﬁned
in a local coordinates chart by
[
k
u[
2
= g
i1j1
g
i
k
j
k
(
k
u)
i1···i
k
(
k
u)
j1···j
k
.
In particular, we have [
0
u[ = [u[ and
[
1
u[
2
= [u[
2
= g
ij
(u)
i
(u)
j
= g
ij
i
u
j
u = g
ij
∂u
∂x
i
∂u
∂x
j
.
For an integer k and a real number p ≥ 1, let (
p
k
(M) be the space of C
∞
functions on M such that
M
[
j
u[
p
dV
g
< ∞, ∀ j = 0, 1, , k.
Deﬁnition 4.6.1 The Sobolev space H
k,p
(M) is the completion of (
p
k
(M)
with respect to the norm
u
H
k,p
(M)
=
k
¸
j=0
M
[
j
u[
p
dV
g
1/p
.
Many results concerning the Sobolev spaces on R
n
introduced in Chapter
1 can be generalized to Sobolev spaces on Riemannian manifolds. For appli
cation purpose, we list some of them in the following. Interested readers may
ﬁnd the proofs in [He] or [Au].
Theorem 4.6.1 For any integer k, H
k,2
(M) is a Hilbert space when equipped
with the equivalent norm
4.6 Sobolev Embeddings 125
u =
k
¸
i=1
M
[
i
u[
2
dV
g
1/2
.
The corresponding scalar product is deﬁned by
< u, v >=
k
¸
i=1
M
<
i
u,
i
v > dV
g
where < , > is the scalar product on covariant tensor ﬁelds associated to the
metric g.
Theorem 4.6.2 If M is compact, then H
k,p
(M) does not depend on the met
ric.
Theorem 4.6.3 (Sobolev Embeddings I)
Let (M, g) be a smooth, compact ndimensional Riemannian manifold. As
sume 1 ≤ q < n and p =
nq
n−q
. Then
H
1,q
(M) ⊂ L
p
(M) .
More generally, if m, k are two integers with 0 ≤ m < k, and if p =
nq
n−q(k−m)
, then
H
k,q
(M) ⊂ H
m,p
(M) .
Theorem 4.6.4 (Sobolev Embeddings II)
Let (M, g) be a smooth, compact ndimensional Riemannian manifold. As
sume that q ≥ 1 and m, k are two integers with 0 ≤ m < k. If q >
n
k−m
,
then
H
k,q
(M) ⊂ C
m
(M) .
Theorem 4.6.5 (Compact Embeddings)
Let (M, g) be a smooth, compact ndimensional Riemannian manifold.
(i) Assume that q ≥ 1 and m, k are two integers with 0 ≤ m < k. Then
for any real number p such that 1 ≤ p <
nq
n−q(k−m)
, the embedding
H
k,q
(M) ⊂ H
m,p
(M)
is compact. In particular, the embedding of H
1,q
(M) into L
p
(M) is compact
if 1 ≤ p <
nq
n−q
.
(ii) For q > n, the embedding of H
k,q
(M) into C
λ
(M) is compact, if
(k−λ) >
n
q
. In particular, the embedding of H
1,q
(M) into C
0
(M) is compact.
126 4 Preliminary Analysis on Riemannian Manifolds
Theorem 4.6.6 (Poincar´e Inequality)
Let (M, g) be a smooth, compact ndimensional Riemannian manifold and
let p ∈ [1, n) be real. Then there exists a positive constant C such that for any
u ∈ H
1,p
(M),
M
[u − ¯ u[
p
dV
g
1/p
≤ C
M
[u[
p
dV
g
1/p
,
where
¯ u =
1
V ol
(M,g)
M
udV
g
is the average of u on M.
5
Prescribing Gaussian Curvature on Compact
2Manifolds
5.1 Prescribing Gaussian Curvature on S
2
5.1.1 Obstructions
5.1.2 Variational Approaches and Key Inequalities
5.1.3 Existence of Weak Solutions
5.2 Gaussian Curvature on Negatively Curved Manifolds
5.2.1 Kazdan and Warner’s Results. Method of Lower and Upper Solu
tions
5.2.2 Chen and Li’s Results
In this and the next chapters, we will present the readers with real re
search examples–semilinear equations arising from prescribing Gaussian and
scalar curvatures. We will illustrate, among other methods, how the calculus
of variations can be applied to seek weak solutions of these equations.
To show the existence of weak solutions for a certain equation, we consider
the corresponding functional J() in an appropriate Hilbert space. According
to the compactness of the functional, people usually divide it into three cases:
a) subcritical case where the level set of J is compact, say, J satisﬁes the
(PS) condition (see also Chapter 2):
Every sequence ¦u
k
¦ that satisﬁes J(u
k
)→c and J
(u
k
)→0 possesses a
convergent subsequence,
b) critical case which is the limiting situation, and can be approximated
by subcritical cases, or
c) super critical case–the remaining cases.
To roughly see when a functional may lose compactness, let’s consider
J(x) = (x
2
−1)e
x
deﬁned on one dimensional space R
1
. One can easily see that
there exists a sequence ¦x
k
¦, for instance, x
k
= −k, such that J(x
k
)→0 and
J
(x
k
)→0 as k→∞, however, ¦x
k
¦ possesses no convergent subsequence be
cause this sequence is not bounded. The functional has no compactness at level
128 5 Prescribing Gaussian Curvature on Compact 2Manifolds
J = 0. We know that in a ﬁnite dimensional space, a bounded sequence has a
convergent subsequence. While in inﬁnite dimensional space, even a bounded
sequence may not have a convergent subsequence. For instance ¦sin kx¦ is a
bounded sequence in L
2
([0, 1]) but does not have a convergent subsequence.
In the situation where the lack of compactness occurs, to ﬁnd a critical
point of the functional, one would try to recover the compactness through
various means. Take again the one dimensional example J(x) = (x
2
− 1)e
x
,
which has no compactness at level J = 0. However, if we restrict it to levels
less than 0, we can recover the compactness. That is, if we pick a sequence
¦x
k
¦ satisfying J(x
k
) < 0 and J
(x
k
)→0, then one can verify that it possesses
a subsequence which converges to a critical point x
0
of J. Actually, this x
0
is
a minimum of J.
To further illustrate the concept of compactness, we consider some exam
ples in inﬁnite dimensional space. Let Ω be an open bounded subset in R
n
,
and let H
1
0
(Ω) be the Hilbert space introduced in previous chapters. Consider
the Dirichlet boundary value problem
−´u = u
p
(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
(5.1)
To seek weak solutions, we study the corresponding functional
J
p
(u) =
Ω
[u[
p+1
dx
on the constraint
H = ¦u ∈ H
1
0
(Ω) [
Ω
[u[
2
dx = 1¦.
When p <
n+2
n−2
, it is in the subcritical case. As we have seen in Chapter
2, the functional satisﬁes the (PS) condition. If we deﬁne
m
p
= inf
u∈H
J
p
(u),
then a minimizing sequence ¦u
k
¦, J
p
(u
k
)→m
p
possesses a convergent subse
quence in H
1
0
(Ω), and the limiting function u
0
would be the minimum of J
p
in H, hence it is the desired weak solution.
When p = τ :=
n+2
n−2
, it is in the critical case. One can ﬁnd a minimizing
sequence that does not converge. For example, when Ω = B
1
(0) is a ball,
consider
u
k
(x) = u(x) =
[n(n −2)k]
n−2
4
(1 +k[x[
2
)
n−2
2
η(x)
where η ∈ C
∞
0
(Ω) is a cut oﬀ function satisfying
η(x) =
1 , x ∈ B1
2
(0)
between 0 and 1 , elsewhere .
5.1 Prescribing Gaussian Curvature on S
2
129
Let
v
k
(x) =
u
k
(x)
Ω
[u
k
[
2
dx
.
Then one can verify that ¦v
k
¦ is a minimizing sequence of J
τ
in H with
J
(v
k
)→0. However it does not have a convergent subsequence. Actually, as
k→∞, the energy of ¦v
k
¦ are concentrating at one point 0. We say that the
sequence “blows up ” at point 0.
In essence, the compactness of the functional J
τ
here relies on the Sobolev
embedding:
H
1
(Ω) → L
p+1
(Ω).
As we learned in Chapter 1, this embedding is compact when p <
n+2
n−2
and is
not when p =
n+2
n−2
.
In the critical case, the compactness may be recovered by considering
other level sets or by changing the topology of the domain. For example, if Ω
is an annulus, then the above J
τ
has a minimizer in H. In this and the next
chapter, the readers will see examples from prescribing Gaussian and scalar
curvature, where the corresponding equations are in critical case and where
various means have been exploited to recover the compactness.
5.1 Prescribing Gaussian Curvature on S
2
Given a function K(x) on two dimensional unit sphere S := S
2
with standard
metric g
o
, can it be realized as the Gaussian curvature of some pointwise
conformal metric g? This has been an interesting problem in diﬀerential ge
ometry and is known as the Nirenberg problem. If we let g = e
2u
g
o
, then it is
equivalent to solving the following semilinear elliptic equation on S
2
:
−´u + 1 = K(x)e
2u
, x ∈ S
2
. (5.2)
where ´is the LaplaceBertrami operator associated with the standard metric
g
o
.
If we replace 2u by u and let R(x) = 2K(x), then equation (5.2) becomes
−´u + 2 = R(x)e
u(x)
. (5.3)
Here 2 and R(x) are actually the scalar curvature of S
2
with standard metric
g
o
and with the pointwise conformal metric e
u
g
o
, respectively.
5.1.1 Obstructions
For equation (5.3) to have a solution, the function R(x) must satisfy the
obvious GaussBonnet sign condition
S
R(x)e
u
dx = 8π. (5.4)
130 5 Prescribing Gaussian Curvature on Compact 2Manifolds
This can be seen by integrating both sides of the equation (5.3), and the fact
that
−
S
´u dx =
S
u 1 dx = 0.
Condition (5.4) implies that R(x) must be positive somewhere. Besides this
obvious necessary condition, there are other necessary conditions found by
Kazdan and Warner. It reads
S
φ
i
R(x)e
u
dx = 0, i = 1, 2, 3. (5.5)
Here φ
i
are the ﬁrst spherical harmonic functions, i.e
−´φ
i
= 2φ
i
(x), i = 1, 2, 3.
Actually, φ
i
(x) = x
i
in the coordinates x = (x
1
, x
2
, x
3
) of R
3
.
Later, Bouguignon and Ezin generalized this condition to
S
X(R)e
u
dx = 0. (5.6)
where X is any conformal Killing vector ﬁeld on standard S
2
.
Condition (5.5) gives rise to many examples of R(x) for which the equa
tion (5.3) has no solution. In particular, a monotone rotationally symmetric
function admits no solution. Then for which kinds of functions R(x), can one
solve (5.3)? This has been an interesting problem in geometry.
5.1.2 Variational Approach and Key Inequalities
To obtain the existence of a solution for equation (5.3), people usually use
a variational approach. First prove the existence of a weak solution, then by
a regularity argument, one can show that the weak solution is smooth and
hence is the classical solution.
To obtain a weak solution, we let
H
1
(S) := H
1,2
(S)
be the Hilbert space with norm
u
1
=
¸
S
([u[
2
+u
2
)dx
1/2
.
Let
H(S) = ¦u ∈ H
1
(S) [
S
udx = 0,
S
R(x)e
u
dx > 0¦.
Then by the Poincar´e inequality, we can use
5.1 Prescribing Gaussian Curvature on S
2
131
u =
¸
S
[u[
2
dx
1/2
as an equivalent norm in H(S).
Consider the functional
J(u) =
1
2
S
[u[
2
dx −8π ln
S
R(x)e
u
dx
in H(S).
To estimate the value of the functional J, we introduce some useful in
equalities.
Lemma 5.1.1 (MoserTrudinger Inequality)
Let S be a compact two dimensional Riemannian manifold. Then there
exists constant C, such that the inequality
S
e
4πu
2
dA ≤ C (5.7)
holds for all u ∈ H
1
(S), satisfying
S
[u[
2
dA ≤ 1 and
S
udA = 0. (5.8)
We skip the proof of this inequality. Interested readers many see the article
of Moser [Mo1] or the article of Chen [C1].
From this inequality, one can derive immediately that
Lemma 5.1.2 There exists constant C, such that for all u ∈ H
1
(S) holds
S
e
u
dA ≤ C exp
1
16π
S
[u[
2
dA+
1
4π
S
udA
. (5.9)
Proof. Let
¯ u =
1
4π
S
u(x)dA
be the average of u on S. Still denote
u =
¸
S
[u[
2
dx
1/2
.
i) For any v ∈ H
1
(S) with ¯ v = 0, let w =
v
v
. Then w = 1 and ¯ w = 0.
Applying Lemma 5.1.1 to such w, we have
S
e
4πv
2
v
2
dA ≤ C. (5.10)
132 5 Prescribing Gaussian Curvature on Compact 2Manifolds
From the obvious inequality
v
4
√
π
−
2
√
πv
v
2
≥ 0,
we deduce immediately
v ≤
v
2
16π
+
4πv
2
v
2
.
Together with (5.10), we have
S
e
v
dA ≤ C exp
1
16π
v
2
, ∀v ∈ H
1
(S), with ¯ v = 0. (5.11)
ii) For any u ∈ H
1
(S), let v = u−¯ u. Then ¯ v = 0, and we can apply (5.11)
to v and obtain
S
e
u−¯ u
dA ≤ C exp
1
16π
u
2
.
Consequently
S
e
u
dA ≤ C exp
1
16π
u
2
+ ¯ u
.
This completes the proof of the Lemma. ¯ .
From Lemma 5.1.2, we can conclude that the functional J() is bounded
from below in H(S). In fact, by the Lemma,
S
R(x)e
u
dA ≤ max R
S
e
u
dA ≤ max R(x) C exp
1
16π
u
2
.
It follows that
J(u) ≥
1
2
u
2
−8π
¸
ln(C max R) +
1
16π
u
2
≥ −8π ln(C max R) (5.12)
Therefore, the functional J is bounded from below in H(S). Now we can take
a minimizing sequence ¦u
k
¦ ⊂ H(S):
J(u
k
)→ inf
u∈H(S)
J(u).
However, as we mentioned before, a minimizing sequence may not be bounded,
that is, it may ‘leak’ to inﬁnity. To prevent this from happening, we need some
coerciveness of the functional. Although it is not true in general, we do have
coerciveness for J in some special subset of H(S), In particular, in the set
of even functions–antipodal symmetric functions. More precisely, we have the
following ”Distribution of Mass Principle” from Chen and Li [CL7] [CL8].
5.1 Prescribing Gaussian Curvature on S
2
133
Lemma 5.1.3 Let Ω
1
and Ω
2
be two subsets of S such that
dist(Ω
1
, Ω
2
) ≥ δ
o
> 0.
Let 0 < α
o
≤
1
2
. Then for any > 0, there exists a constant C = C(α
o
, δ
o
, ),
such that the inequality
S
e
u
dA ≤ C exp
1
32π
+
u
2
+ ¯ u
(5.13)
holds for all u ∈ H
1
(S) satisfying
Ω1
e
u
dA
S
e
u
dA
≥ α
o
and
Ω2
e
u
dA
S
e
u
dA
≥ α
o
. (5.14)
Intuitively, if we think e
u(x)
as the density of a solid sphere S at point
x, then
Ωi
e
u
dA is the mass of the region Ω
i
, and
S
e
u
dA the total mass
of the sphere. The Lemma says that if the mass of the sphere is kind of
distributed in two parts of the sphere, more precisely, if the total mass does
not concentrate at one point, then we can have a stronger inequality than
(5.9). In particular, for even functions, this stronger inequality (5.13) holds.
If we apply this stronger inequality to the family of even functions in H(S),
we have
J(u) ≥
1
2
u
2
−8π
¸
ln(C max R) +
1
32π
+
u
2
≥ −8π ln(C max R) +
1
4
−8π
u
2
.
Choosing small enough, we arrive at
J(u) ≥
1
8
u
2
−C
1
, for all even functions u in H(S). (5.15)
This stronger inequality guarantees the boundedness of a minimizing se
quence, and hence it converges to a weak limit u
o
in H
1
(S). As we will show in
the next section, the functional J is weakly lower semicontinuous, and there
fore, u
o
so obtained is the minimum of J(u), i.e., a weak solution of equation
(5.2). Then by a standard regularity argument, as we will see in the next next
section, we will be able to conclude that u
o
is a classical solution.
In many articles, the authors use the “center of mass ” analysis, i.e., con
sider the center of mass of the sphere with density e
u(x)
at point x:
P(u) =
S
x e
u
dA
S
e
u
dA
,
and use this to study the behavior of a minimizing sequence (See [CD], [CD1],
[CY], and [CY1]). Both analysis are equivalent on the sphere S
2
. However,
134 5 Prescribing Gaussian Curvature on Compact 2Manifolds
since “the center of mass” P(u) lies in the ambient space R
3
, on a general
two dimensional compact Riemannian surfaces, this kinds of analysis can not
be applied. While the “Distribution of Mass” analysis can be applied to any
compact surfaces, even surfaces with conical singularities. Interested reader
may see the author’s paper [CL7] [CL8]for more general version of inequality
(5.13).
Proof of Lemma 5.1.3.
Let g
1
, g
2
be two smooth functions, such that
1 ≥ g
i
≥ 0, g
i
(x) ≡ 1, for x ∈ Ω
i
, i = 1, 2
and supp g
1
∩ supp g
2
= ∅. It suﬃce to show that for u ∈ H
1
(S) with
S
udA =
0, (5.14) implies
S
e
u
dA ≤ C exp
(
1
32π
+)u
2
. (5.16)
Case i) If g
1
u ≤ g
2
u, then by (5.9)
S
e
u
dA ≤
1
α
o
Ω1
e
u
dA ≤
1
α
o
S
e
g1u
dA
≤
C
α
o
exp¦
1
16π
g
1
u
2
+g
1
u¦
≤
C
α
o
exp¦
1
32π
g
1
u +g
2
u
2
+g
1
u¦
≤
C
α
o
exp¦
1
32π
(1 +
1
)u
2
+c
1
(
1
)u
2
L
2¦. (5.17)
for some small
1
> 0. Here we have used the fact that
g
1
u +g
2
u
2
=
S
[(g
1
u +g
2
u)[
2
dA
=
S
[(g
1
+g
2
)u + (g
1
+g
2
)u[
2
dA
≤ u
2
+C
1
uu
L
2 +C
2
u
2
L
2.
In order to get rid of the term u
2
L
2
on the right hand side of the above
inequality, we employ the condition
S
udA = 0.
Given any η > 0, choose a, such that meas¦x ∈ S[u(x) ≥ a¦ = η. Apply
(5.17) to the function (u −a)
+
≡ max¦0, (u −a)¦, we have
S
e
u
dA ≤ e
a
S
e
u−a
dA ≤ e
a
S
e
(u−a)
+
dA
≤ C exp¦
1
32π
(1 +
1
)u
2
+c
1
(
1
)(u −a)
+

2
2
+a¦ (5.18)
5.1 Prescribing Gaussian Curvature on S
2
135
Where C = C(δ, α
o
).
By the H¨ older and Sobolev inequality (See Theorem 4.6.3)
(u −a)
+

2
L
2 ≤ η
1
2
(u −a)
+

2
L
4 ≤ cη
1
2
(u
2
+(u −a)
+

2
L
2) (5.19)
Choose η so small that Cη
1/2
≤
1
2
, then the above inequality implies
(u −a)
+

2
L
2 ≤ 2Cη
1
2
u
2
. (5.20)
Now by Poincar´e inequality (See Theorem 4.6.6)
a η ≤
u≥a
udA ≤
S
[u[dA ≤ cu
hence for any δ > 0,
a ≤ δu
2
+
c
2
4δη
2
. (5.21)
Now (5.16) follows from (5.18), (5.20) and (5.21).
Case ii) If g
2
u ≤ g
1
u, by a similar argument as in case i), we obtain
(5.17) and then (5.16). This completes the proof.
5.1.3 Existence of Weak Solutions
Theorem 5.1.1 (Moser) If the function R is even, that is if
R(x) = R(−x) ∀ x ∈ S
2
,
where x and −x are two antipodal points. Then the necessary and suﬃcient
condition for equation (5.3) to have a solution is R(x) be positive somewhere.
To prove the existence of a weak solution, as in the previous section, we
consider the functional
J(u) =
1
2
S
[u[
2
dx −8π ln
S
R(x)e
u
dx
in H
e
(S) = ¦u ∈ H(S) [ u is even almost everywhere ¦. Let ¦u
k
¦ be a mini
mizing sequence of J in H
e
(S). Then by (5.15), ¦u
k
¦ is bounded in H
1
(S), and
hence possesses a subsequence (still denoted by ¦u
k
¦) that converges weakly
to some element u
o
in H
1
(S). Then by the Compact Sobolev Embedding of
H
1
into L
2
u
k
→u
o
in L
2
(S). (5.22)
Consequently
u
o
(−x) = u
o
(x) almost everywhere , and
S
u
o
(x)dA = 0,
136 5 Prescribing Gaussian Curvature on Compact 2Manifolds
that is u
o
∈ H
e
(S).
Furthermore, as we showed in the previous chapter, the integral is weakly
lower semicontinuous, i.e.
lim
k→∞
S
[u
k
[
2
dA ≥
S
[u
o
[
2
dA.
To show that the functional J is weakly lower semicontinuous:
J(u
o
) ≤ limJ(u
k
) = inf
u∈He(S)
J(u).
We need the following
Lemma 5.1.4 If ¦u
k
¦ converges weakly to u in H
1
(S), then there exists a
subsequence (still denoted by ¦u
k
¦), such that
S
e
u
k
(x)
dA→
S
e
u(x)
dA, as k→∞.
We postpone the proof of this lemma for a moment. Now from this lemma,
we obviously have
S
R(x)e
u
k
dA→
S
R(x)e
uo
dA,
and it follows that J() is weakly lower semicontinuous. On the other hand,
we have
J(u
o
) ≥ inf
u∈He(S)
J(u).
Therefore, u
o
is a minimum of J in H
e
(S).
Consequently, for any v ∈ H
e
(S), we have
0 =< J
(u
o
), v >=
S
< u
o
, v > dA−
8π
S
R(x)e
uo
dA
S
R(x)e
uo
vdA.
For any w ∈ H(S), let
v(x) =
1
2
(w(x) +w(−x)) :=
1
2
(w(x) +w
−
(x)).
Then obviously, v ∈ H
e
(S), and it follows that
0 =< J
(u
o
), v >=
1
2
(< J
(u
o
), w > + < J
(u
o
), w
−
>) =< J
(u
o
), w > .
Here we have used the fact that u
o
(−x) = u
o
(x). This implies that u
o
is a
critical point of J in H
1
(S) and hence a constant multiple of u
o
is a weak
solution of equation (5.3). Now what left is to prove Lemma 5.1.4.
The Proof of Lemma 5.1.4.
5.1 Prescribing Gaussian Curvature on S
2
137
From Lemma 5.1.2, for any real number p > 0, we have
S
e
pu
dA ≤ C exp¦
p
2
16π
S
[u[
2
dA+
p
4π
S
udA¦. (5.23)
And by H¨ oder inequality, for any 0 < p < 2,
S
[(e
u
)[
p
dA =
S
e
pu
[u[
p
dA ≤
S
e
2p
2−p
u
dA
2−p
2
S
[u[
2
dA
p
2
.
(5.24)
Inequalities (5.23) and (5.24) imply that ¦e
u
k
¦ is a bounded sequence in H
1,p
for any 1 < p < 2. By the compact Sobolev embedding of H
1,p
into L
1
, there
exists a subsequence of ¦e
u
k
¦ which converges strongly to e
uo
in L
1
(S). This
completes the proof of the Lemma.
Chen and Ding [CD] generalized Moser’s result to a broader class of sym
metric functions, as we will describe in the following.
Let O(3) be the group of orthogonal transformations of R
3
. Let G be a
closed ﬁnite subgroup of O(3). Let
F
G
= ¦x ∈ S
2
[ gx = x, ∀g ∈ G¦
be the set of ﬁxed points under the action of G.
The action of G on S
2
induces an action of G on the Hilbert space H
1
(S):
g[u](x) → u(gx)
under which the ﬁxed point subspace of H
1
(S) is given by
X = ¦u ∈ H
1
(S) [ u(gx) = u(x), ∀g ∈ G¦.
Assume that R(x) is Gsymmetric, or Ginvariant, i.e.
R(gx) = R(x), ∀g ∈ G. (5.25)
Let X
∗
= X
¸
H(S). Recall that
H(S) = ¦u ∈ H
1
(S) [
S
udA = 0,
S
R(x)e
u
dA > 0¦.
Under the assumption (5.25), the functional
J(u) =
1
2
S
[u[
2
dx −8π ln
S
R(x)e
u
dx
is Ginvariant in X
∗
, i.e.
J(g[u]) = J(u), ∀g ∈ G, ∀u ∈ X
∗
.
We seek a minimum of J in X
∗
. The following lemma guarantees that such a
minimum is the critical point we desired.
138 5 Prescribing Gaussian Curvature on Compact 2Manifolds
Lemma 5.1.5 Assume that u
o
is a critical point of J in X
∗
, then it is also
a critical point of J in H(S).
Proof. First, for any g ∈ G and any u, v ∈ H(S), we have
< J
(u), g[v] >=< J
(g
−1
[u]), v > . (5.26)
Where g
−1
is the inverse of g. This can be easily see from
< J
(u), v >=
S
uvdA−
8π
S
Re
u
dA
S
Re
u
vdA.
Assume
G = ¦g
1
, g
m
¦.
For any w ∈ H(S), let
v =
1
m
(g
1
[w] + +g
m
[w]).
Then for any g ∈ G
g[v] =
1
m
(gg
1
[w] + +gg
m
[w]) = v.
Hence v ∈ X
∗
. It follows that
0 = < J
(u
o
), v >=
1
m
m
¸
i
< J
(u
o
), g
i
[w] >
=
1
m
m
¸
i
< J
(g
−1
i
u
o
), w >=
1
m
m
¸
i
< J
(u
o
), w >
= < J
(u
o
), w > .
Therefore, u
o
is a critical point of J in H(S). ¯ .
Theorem 5.1.2 (Chen and Ding [CD]). suppose that R ∈ C
α
(S
2
) satisﬁes
(5.25) with max
S
R > 0. If F
G
= ∅, let m = max
F
G
R. Then equation (5.2)
has a solution provided one of the following conditions holds:
(i) F
G
= ∅,
(ii) m ≤ 0, or
(iii) m > 0 and c
∗
= inf
X∗
J < −8π ln 4πm.
Remark 5.1.1 In the case when the function R(x) is even, the group G =
¦id, −id¦, where id is the identity transform, i.e.
id(x) = x and −id(x) = −x ∀ x ∈ S
2
.
Obviously, in this case F
G
is empty. Hence the above Theorem contains
Moser’s existence result as a special case.
5.1 Prescribing Gaussian Curvature on S
2
139
To prove this theorem, we need another two lemmas.
Lemma 5.1.6 (Onofri [On]) For any u ∈ H
1
(S) with
S
udA = 0, holds
S
e
u
dA ≤ 4π exp
1
16π
S
[u[
2
dA
. (5.27)
Lemma 5.1.7 Suppose that ¦u
i
¦ is a sequence in H(S) with J(u
i
) ≤ β. If
u
i
→∞, then there exists a subsequence (still denoted by ¦u
i
¦) and a point
ξ ∈ S
2
with R(ξ) > 0, such that
S
R(x)e
ui
dA = (R(ξ) +o(1))
S
e
ui
dA, (5.28)
and
lim
i→∞
J(u
i
) ≥ −8π ln 4πR(ξ). (5.29)
Proof. We ﬁrst show that, there exists a subsequence of ¦u
i
¦ and a point
ξ ∈ S
2
, such that for any > 0, we have
S\B(ξ)
e
ui
dA→0. (5.30)
Otherwise, there exists two subsets of S
2
, Ω
1
and Ω
2
, with dist(Ω
1
, Ω
2
)
≥
o
> 0 such that ¦u
i
¦ satisﬁes all the conditions in Lemma 5.1.3. Hence the
inequality
S
e
ui
dA ≤ C exp(
1
16π
−δ)u
i

2
holds for some δ > 0. Then by the previous argument, we conclude that u
i

is bounded. This contradicts with our assumption, and hence veriﬁes (5.30).
Now (5.30) and the continuity assumption on R(x) implies (5.28).
Roughly speaking, the above argument shows that if a minimizing sequence
of J is unbounded, then passing to a subsequence, the mass of the surface S
with density e
ui(x)
at point x will be concentrating at one point ξ on S.
To verify (5.29), we use (5.28) and Onofri’s Lemma:
J(u
i
) =
1
2
S
[u[
2
dA−8π ln(R(ξ) +o(1))
S
e
ui
dA
=
1
2
S
[u[
2
dA−8π ln(R(ξ) +o(1)) −8π ln
S
e
ui
dA
≥
1
2
S
[u[
2
dA−8π ln(R(ξ) +o(1)) −8π(ln 4π +
1
16π
S
[u[
2
dA)
= −8π ln(4πR(ξ) +o(1)).
140 5 Prescribing Gaussian Curvature on Compact 2Manifolds
Let i→∞, we arrive at (5.29). Since the right hand side of (5.29) is bounded
above by β, we must have R(ξ) > 0. This completes the proof of the Lemma.
Proof of the Theorem. Deﬁne
c
∗
= inf
X∗
J.
As we argue in the previous section (see (5.12)), we have c
∗
> −∞. Let
¦u
i
¦ ⊂ X
∗
be a minimizing sequence, i.e. J(u
i
)→c
∗
, as i→∞.
Similar to the reasoning at the beginning of this section, one can see that
we only need to show that ¦u
i
¦ is bounded in H
1
(S). Assume on the contrary
that u
i
→∞. Then from the proof of Lemma 5.1.7, there is a point ξ ∈ S
2
,
such that for any > 0, we have
S\B(ξ)
e
ui
dA→0. (5.31)
Since u
i
is invariant under the action of G, we must also have
S\B(gξ)
e
ui
dA→0. (5.32)
Because is arbitrary, we deduce that gξ = ξ, that is ξ ∈ F
G
.
Now under condition (i), F
G
is empty. Therefore, ¦u
i
¦ must be bounded.
Moreover, by Lemma 5.1.7,
R(ξ) > 0 and c
∗
≥ −8π ln 4πR(ξ).
Noticing that m ≥ R(ξ) by deﬁnition, this again contradicts with conditions
(ii) and (iii).
Therefore, under either one of the conditions (i), (ii), or (iii), the mini
mizing sequence ¦u
i
¦ must be bounded, and hence possesses a subsequence
converging weakly to some u
o
∈ X
∗
. This a constant multiple of u
o
is the
desired weak solution of equation (5.3).
Here conditions (i) and (ii) in Theorem 5.1.2 can be easily veriﬁed through
the group G or the given function R. However, one may wonder, under what
conditions on R, do we have condition (iii). The following theorem provides a
suﬃcient condition.
Theorem 5.1.3 (Chen and Ding [CD]). Suppose that R ∈ C
2
(S
2
), is G
invariant, and m = max
F
G
R > 0. Let
D = ¦x ∈ S
2
[ R(x) = m¦.
If ´R(x
o
) > 0 for some x
o
∈ D, then
c
∗
< −8π ln 4πm, (5.33)
and therefore (5.3) possesses a weak solution.
5.1 Prescribing Gaussian Curvature on S
2
141
Proof. Choose a spherical polar coordinate system on S := S
2
:
(θ, φ), −
π
2
≤ θ ≤
π
2
, 0 ≤ φ ≤ 2π
with x
o
= (
π
2
, φ) as the north pole.
Consider the following family of functions:
u
λ
(θ, φ) = ln
1 −λ
2
(1 −λsin θ)
2
, v
λ
= u
λ
−
1
4π
S
u
λ
dA, 0 < λ < 1.
One can verify that this family of functions satisfy the equation
−´u
λ
+ 2 = 2e
u
λ
. (5.34)
As λ→1, the family ¦u
λ
¦ ‘blows up’ (tend to ∞) at the one point x
o
,
while approaching −∞ everywhere else on S
2
. Therefore, the mass of the
sphere with density e
u
λ
(x)
is concentrating at point x
o
as λ→1.
It is easy to see that v
λ
∈ H(S) for λ closed to 1. Moreover, we claim that
v
λ
∈ X, i.e. v
λ
(gx) = v
λ
(x), ∀g ∈ G.
In fact, since g is an isometry of S
2
, and gx
o
= x
o
, we have
d(gx, x
o
) = d(gx, gx
o
) = d(x, x
o
),
where d(, ) is the geodesic distance on S
2
. But v
λ
depends only on θ =
d(x, x
o
), so
v
λ
(gx) = v
λ
(x), ∀g ∈ G.
It follows that v
λ
∈ X
∗
for λ suﬃciently closed to 1.
By a direct computation, one can verify that
1
2
S
[u
λ
[
2
dA+ 8π¯ u
λ
= 0.
Consequently,
J(v
λ
) = −8π ln
S
R(x)e
u
λ
dA.
Notice that c
∗
≤ J(v
λ
) by the deﬁnition of c
∗
. Thus, in order to show (5.33),
we only need to verify that, for λ suﬃciently close to 1
S
R(x)e
u
λ
dA > 4πm. (5.35)
Let δ > 0 be small, we have
S
Re
u
λ
dA = (1 −λ
2
)
2π
0
π
2
π
2
−δ
[R(θ, φ) −m]
cos θ
(1 −λsin θ)
2
dθdφ
+
2π
0
π
2
−δ
−
π
2
[R(θ, φ) −m]
cos θ
(1 −λsin θ)
2
dθdφ
¸
+ 4πm
= (1 −λ
2
)¦I +II¦ + 4πm.
142 5 Prescribing Gaussian Curvature on Compact 2Manifolds
Here we have used the fact
S
e
u
λ
dA = 4π
which can be derived directly from equation (5.34).
For each ﬁxed δ, it is easy to check that the integral II remains bounded
as λ tends to 1. Thus (5.35) will hold if we can show that the integral I →+∞
as λ→1.
Notice that x
o
is a maximum point of the restriction of R on F
G
. Hence
by the principle of symmetric critically, x
o
is a critical point of R, that is
R(x
o
) = 0. Using this fact and the second order Taylor expansion of R(x)
near x
o
, we can derive
I = π
π
2
π
2
−δ
´R(x
o
) cos
2
θ +o
θ −
π
2
2
(1 −λsin θ)
2
cos θ dθ
≥ c
o
π
2
π
2
−δ
cos
3
θ
(1 −λsin θ)
2
dθ.
Here c
o
is a positive constant. We have chosen δ to be suﬃciently small and
have used the fact that ´R(x
o
) is positive. From the above inequality, one
can easily verify that as λ→1, I→+∞, since the integral
π
2
π
2
−δ
cos
3
θ
(1 −sin θ)
2
d θ
diverges to +∞.
This completes the proof of the Theorem. ¯ .
5.2 Prescribing Gaussian Curvature on Negatively
Curved Manifolds
Let M be a compact two dimensional Riemannian manifold. Let g
o
(x) be
a metrics on M with the corresponding LaplaceBeltrami operator ´ and
Gaussian curvature K
o
(x). Given a function K(x) on M, can it be realized as
the Gaussian curvature associated to the pointwise conformal metrics g(x) =
e
2u(x)
g
o
(x)? As we mentioned before, to answer this question, it is equivalent
to solve the following semilinear elliptic equation
−´u +K
o
(x) = K(x)e
2u
x ∈ M. (5.36)
To simplify this equation, we introduce the following simple existence re
sult.
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 143
Lemma 5.2.1 Let f(x) ∈ L
2
(M). Then the equation
−´u = f(x) x ∈ M (5.37)
possesses a solution if and only if
M
f(x)d A = 0. (5.38)
Proof. The ‘only if’ part can be seen by integrating both side of the equation
(5.37) on M.
We now prove the ‘if’ part. Assume that (5.38) holds, we will obtain the
existence of a solution to (5.37). To this end, we consider the functional
J(u) =
1
2
M
[u[
2
d A−
M
f(x)ud A.
under the constraint
G = ¦u ∈ H
1
(M) [
M
u(x)d A = 0¦.
As we have seen before, we can use, in G, the equivalent norm
u =
M
[u[
2
d A
1
2
.
By H¨ oder and Poincar´e inequalities, we derive
J(u) ≥
1
2
u
2
−f
L
2u
L
2
≥
1
2
u
2
−c
o
f
L
2u
=
1
2
(u −c
o
f
L
2)
2
−
c
2
o
f
2
L
2
2
. (5.39)
The above inequality implies immediately that the functional J(u) is
bounded from below in the set G.
Let ¦u
k
¦ be a minimizing sequence of J in G, i.e.
J(u
k
)→inf
G
J, as k→∞.
Then again by (5.39), ¦u
k
¦ is bounded in H
1
, and hence possesses a subse
quence (still denoted by ¦u
k
¦) that converges weakly to some u
o
in H
1
(M).
From compact Sobolev imbedding
H
1
→→ L
2
,
144 5 Prescribing Gaussian Curvature on Compact 2Manifolds
we see that
u
k
→u
o
strongly in L
2
, and hence so in L
1
.
Therefore
M
u
o
(x)d A = lim
M
u
k
(x)d A = 0, (5.40)
and
M
f(x)u
o
(x)d A = lim
M
f(x)u
k
(x)d A. (5.41)
(5.40) implies that u
o
∈ G, and hence
J(u
o
) ≥ inf
G
J(u). (5.42)
While (5.41) and the weak lower semicontinuity of the Dirichlet integral:
liminf
M
[u
k
[
2
d A ≥
M
[u
o
[
2
d A
leads to
J(u
o
) ≤ inf
G
J(u).
From this and (5.42) we conclude that u
o
is the minimum of J in the set G,
and therefore there exists constant λ, such that u
o
is a weak solution of
−´u
o
−f(x) = λ.
Integrating both sides and by (5.38), we ﬁnd that λ = 0. Therefore u
o
is a
solution of equation (5.37). This completes the proof of the Lemma. ¯ .
We now simplify equation (5.36). First let ˜ u = 2u, then
−´˜ u + 2K
o
(x) = 2K(x)e
˜ u
. (5.43)
Let v(x) be a solution of
−´v = 2K
o
(x) −2
¯
K
o
(5.44)
where
¯
K
o
=
1
[M[
M
K
o
(x)d A
is the average of K
o
(x) on M. Because the integral of the right hand side of
(5.44) on M is 0, the solution of (5.44) exists due to Lemma 5.2.1.
Let w = ˜ u +v. Then it is easy to verify that
−´w + 2
¯
K
o
= 2K(x)e
−v
e
w
. (5.45)
Finally, letting
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 145
α = 2
¯
K
o
and R(x) = 2K(x)e
−v(x)
and rewriting w as u, we reduce equation (5.36) to
−´u +α = R(x)e
u(x)
, x ∈ M (5.46)
By the wellknown GaussBonnet Theorem, if g is a Riemannian metrics
on M with corresponding Gaussian curvature K
g
(x), then
M
K
g
(x)d A
g
= 2πχ(M)
where χ(M) is the Euler Characteristic of M, and d A
g
the area element
associated to metric g.
We say M is negatively curved, if χ(M) < 0, and hence α < 0.
5.2.1 Kazdan and Warner’s Results. Method of Lower and Upper
Solutions
Let M be a compact two dimensional manifold. Let α < 0. In [KW], Kazdan
and Warner considered the equation
−´u +α = R(x)e
u(x)
, x ∈ M (5.47)
and proved
Theorem 5.2.1 (KazdanWarner) Let
¯
R be the average of R(x) on M. Then
i) A necessary condition for (5.46) to have a solution is
¯
R < 0.
ii) If
¯
R < 0, then there is a constant α
o
, with −∞ ≤ α
o
< 0, such that
one can solve (5.46) if α > α
o
, but cannot solve (5.46) if α < α
o
.
iii) α
o
= −∞ if and only if R(x) ≤ 0.
The existence of a solution was established by using the method of upper
and lower solutions. We say that u
+
is an upper solution of equation (5.47) if
−´u
+
+α ≥ R(x)e
u+
, x ∈ M.
Similarly, we say that u
−
is a lower solution of (5.47) if
−´u
−
+α ≤ R(x)e
u−
, x ∈ M.
The following approach is adapted from [KW] with minor modiﬁcations.
Lemma 5.2.2 If a solution of (5.47) exists, then
¯
R < 0. Even more strongly,
the unique solution φ of
´φ +αφ = R(x) , x ∈ M (5.48)
must be positive.
146 5 Prescribing Gaussian Curvature on Compact 2Manifolds
Proof. We ﬁrst prove the second part. Using the substitution v = e
−u
, one
sees that (5.47) has a solution u if and only if there is a positive solution v of
´v +αv −R(x) −
[v[
2
v
= 0. (5.49)
Assume that v is such a positive solution. Let φ be the unique solution of
(5.48) and let w = φ −v. Then
´w +αw = −
[v[
2
v
≤ 0. (5.50)
It follows that w ≥ 0. Otherwise, there is a point x
o
∈ M, such that w(x
o
) < 0
and x
o
is a minimum of w, hence ´w(x
o
) ≥ 0. Consequently, we must have
´w(x
o
) +αw(x
o
) > 0.
A contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. This shows
that a necessary condition for (5.47) to have a solution is that the unique
solution φ of (5.48) must be positive. Integrating both sides of (5.48), we see
immediately that
¯
R < 0. This completes the proof of the Lemma. ¯ .
Lemma 5.2.3 (The Existence of Upper Solutions).
i) If R(x) ≤ 0 but not identically 0, then there exist an upper solution u
+
of (5.47).
ii) If
¯
R < 0, then there exists a small δ > 0, such that for −δ ≤ α < 0,
there exists an upper solution u
+
of (5.47).
Proof. Let v be a solution of
−´v = R(x) −
¯
R.
We try to choose constant a and b, such that u
+
= av+b is an upper solution.
We have
−´u
+
+α −R(x)e
u+
= aR(x) −a
¯
R +α −R(x)e
av+b
. (5.51)
We want to choose constant a and b, so that the right hand side of (5.51) is
nonnegative.
In Case i), when R(x) ≤ 0, we regroup the right hand side of (5.51) as
f(a, b, x) ≡ (−a
¯
R +α) −R(x)(e
av+b
−a). (5.52)
For any given α < 0, we ﬁrst choose a suﬃciently large, so that −a
¯
R+α > 0.
Then we choose b large, so that
e
av(x)+b
−a ≥ 0 , ∀ x ∈ M.
This is possible because v(x) is continuous and M is compact. It follows from
(5.51) and (5.52) that
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 147
−´u
+
+α −R(x)e
u+
≥ 0.
In Case ii), we choose α ≥
a
¯
R
2
and e
b
= a, where a is a positive number
to be determine later. Then (5.52) becomes
f(a, b, x) ≥ −
a
¯
R
2
−aR(x)(e
av
−1)
≥ a
−
¯
R
2
−R
L
∞[e
av
−1[
= aR
L
∞
−
¯
R
2R
L
∞
−[e
av
−1[
.
Again by the continuity of v and the compactness of M, we see that
e
av(x)
→1 , uniformly on M as a→0.
Therefore, we can make
[e
av(x)
−1[ ≤
−
¯
R
2R
L
∞
, for all x ∈ M,
by choosing a suﬃciently small. Thus u
+
so constructed is an upper solution
of (5.47) for α ≥
a
¯
R
2
. This completes the proof of the Lemma. ¯ .
Lemma 5.2.4 (Existence of Lower Solutions) Given any R ∈ L
p
(M) and a
function u
+
∈ W
2,p
(M), there is a lower solution u
−
∈ W
2,p
(M) of (5.47)
such that u
−
< u
+
.
Proof. If R(x) is bounded from below, then one can clearly use any suf
ﬁciently large negative constant as u
−
. For general R(x) ∈ L
p
(M), let
K(x) = max¦1, −R(x)¦. Choose a positive constant a so that a
¯
K = −α.
Then aK(x) +α = 0 and (aK(x) + α) ∈ L
p
(M). Thus there is a solution w
of
´w = aK(x) +α.
By the L
p
regularity theory, w ∈ W
2,p
(M) and hence is continuous. Let
u
−
= w −λ. Then
−´u
−
+α −R(x)e
u−
= −´w +α −R(x)e
w−λ
= −aK(x) −R(x)e
w−λ
≤ −K(x)(a −e
w−λ
). (5.53)
Choosing λ suﬃciently large so that
a −e
w−λ
≥ 0,
148 5 Prescribing Gaussian Curvature on Compact 2Manifolds
and noticing that K(x) ≥ 0, by (5.53), we arrive at
−´u
−
+α −R(x)e
u−
≤ 0.
Therefore u
−
is a lower solution. Moreover, one can make λ large enough such
that u
−
= w −λ < u
+
. This completes the proof of the Lemma. ¯ .
Lemma 5.2.5 Let p > 2. If there exist upper and lower solutions, u
+
, u
−
∈
W
2,p
(M) of (5.47), and if u
−
≤ u
+
, then there is a solution u ∈ W
2,p
(M).
Moreover, u
−
≤ u ≤ u
+
, and u is C
∞
in any open set where R(x) is C
∞
.
Proof. We will rewrite the equation (5.47) as
Lu = f(x, u),
then start from the upper solution u
+
and apply iterations:
Lu
1
= f(x, u
+
) , Lu
i+1
= f(x, u
i
) , i = 1, 2, .
In order that for each i, u
−
≤ u
i
≤ u
+
, we need the operator L obeys some
maximum principle and f(x, ) possess some monotonicity. The Laplace oper
ator in the original equation does not obey maximum principles, however, if
we let
Lu = −´u +k(x)u,
where k(x) is any positive function on M, then the new operator L obeys the
maximum principle:
If Lu ≥ 0, then u(x) ≥ 0 , for all x ∈ M.
To see this, we suppose in the contrary, there is some point on M where u < 0.
Let x
o
be a negative minimum of u, then −´u(x
o
) ≤ 0, and it follows that
−´u(x
o
) +k(x
o
)u(x
o
) ≤ k(x
o
)u(x
o
) < 0.
A contradiction. Hence the maximum principle holds for L. We now write
equation (5.47) as
Lu ≡ −´u +k(x)u = R(x)e
u
−α +k(x)u := f(x, u).
Based on the maximum principle on L, to keep u
−
≤ u
i
≤ u
+
, it requires that
f(x, ) be monotone increasing, that is
∂f
∂u
= R(x)e
u
+k(x) ≥ 0.
To this end, we choose
k(x) = k
1
(x)e
u+(x)
, where k
1
(x) = max¦1, −R(x)¦.
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 149
Base on this choice of L and f, we show by math induction that the
iteration sequence so constructed is monotone decreasing:
u
i+1
≤ u
i
≤ ≤ u
1
≤ u
+
, i = 1, 2, . (5.54)
In fact, from
Lu
+
≥ f(x, u
+
) and Lu
1
= f(x, u
+
)
we derive immediately
L(u
+
−u
1
) ≥ 0.
Then the maximum principle for L implies u
+
≥ u
1
. This completes the ﬁrst
step of our induction. Assume that for any integer m, u
m+1
≤ u
m
≤ u
+
, we
will show that u
m+2
≤ u
m+1
. Since f(x, ) is monotone increasing in u for
u ≤ u
+
, we have
f(x, u
m+1
) ≤ f(x, u
m
)
and hence L(u
m+1
−u
m+2
) ≥ 0. Therefore u
m+1
≥ u
m+2
. This veriﬁes (5.54)
through math induction. Similarly, one can show the other side of the inequal
ity and arrive at
u
−
≤ u
i+1
≤ u
i
≤ ≤ u
+
, i = 1, 2, . (5.55)
Since u
+
, u
−
, and u
i
are continuous, inequality (5.55) shows that ¦[u
i
[¦ is
uniformly bounded. Consequently, in the L
p
norm,
Lu
i+1

L
p = R(x)e
ui
−α +k(x)u
i

L
p ≤ C.
Hence ¦u
i
¦ is bounded in W
2,p
(M). By the compact imbedding of W
2,p
into
C
1
, there is a subsequence that converges uniformly to some function u in
C
1
(M). In view of the monotonicity (5.55), we conclude that the entire se
quence ¦u
i
¦ itself converges uniformly to u. Moreover
u
i+1
−u
j+1

W
2,p ≤ CL(u
i+1
−u
j+1
)
L
p
≤ C(R
L
pe
ui
−e
uj

L
∞ +k
L
pu
i
−u
j

L
∞).
Therefore, ¦u
i
¦ converges strongly in W
2,p
, so u ∈ W
2,p
. Since L : W
2,p
→L
p
is continuous, it follows that u is a solution of (5.47) and satisﬁes u
−
≤ u ≤ u
+
.
By the Schauder theory, one proves inductively that u ∈ C
∞
in any open
set where R(x) is C
∞
. More precisely, if R ∈ C
m+γ
, then u ∈ C
m+2+γ
. This
completes the proof of the Lemma. ¯ .
Based on these Lemmas, we are now able to complete the proof of the
Theorem 5.2.1.
Proof of Theorem 5.2.1.
Part i) can obviously derived from Lemma 5.2.2.
150 5 Prescribing Gaussian Curvature on Compact 2Manifolds
From Lemma 5.2.4 and 5.2.5, if there is a upper solution, then there is
a solution. Hence by Lemma 5.2.3, there exist a δ > 0, such that (5.47) is
solvable for −δ ≤ α < 0. Let
α
o
= inf¦α [ (5.47) is sovable ¦.
Then for any 0 > α > α
o
, one can solve (5.47). In fact, if for α
1
, (5.47)
possesses a solution u
1
, then for any α > α
1
, we have
−´u
1
+α > R(x)e
u1
,
that is, u
1
is an upper solution for equation (5.47) at α. Hence equation
−´u +α = R(x)e
u
also has a solution.
Lemma 5.2.3 infers that α
o
= −∞ if R(x) ≤ 0 but R(x) is not identically
0. Now to complete the proof of the Theorem, it suﬃce to prove that if R(x)
is positive somewhere, then α
o
> −∞. By virtue of Lemma 5.2.2, we only
need to show
Claim: There exist α < 0, such that the unique solution of
´φ +αφ = R(x) (5.56)
is negative somewhere.
Actually, we can prove a stronger result:
−αφ(x)→R(x) almost everywhere as α→−∞. (5.57)
We argue in the following 2 steps.
i) For a given α < 0, let u = u(x, α) be the solution of (5.56). Multiplying
both sides of equation (5.56) by u and integrate over M to obtain
−
M
[u[
2
dA+α
M
u
2
dA =
M
R(x)u(x)dA.
Consequently,
u
L
2 ≤
1
α
M
RudA.
Then by H¨ oder inequality
u
L
2 ≤
1
α
R
L
2.
This implies that
as α→−∞, u(x, α)→0, almost everywhere . (5.58)
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 151
Since for any α < 0, the operator ´+α is invertible, we can express
u = (´+α)
−1
R(x).
From the above reasoning, one can see that, as α→−∞,
if R ∈ L
2
, then (´+α)
−1
R(x)→0 almost everywhere . (5.59)
ii) To see (5.57), we write
−αφ +R(x) = (´+α)
−1
´R. (5.60)
One can see the validity of (5.60) by simply apply the operator ´+α to both
sides.
Now by the assumption, we have ´R ∈ L
2
, and hence (5.59) implies that
−αφ +R(x)→0, a.e..
Or
φ(x)→
1
α
R(x). a.e.
Since R(x) is continuous and positive somewhere, for suﬃciently negative α,
φ(x) is negative somewhere.
This completes the proof of the Theorem.
5.2.2 Chen and Li’s Results
From the previous section, one can see from the Kazdan and Warner’s result
that in the case when R(x) is nonpositive, equation (5.46) has been solved
completely. i.e. a necessary and suﬃcient condition for equation (5.46) to have
a solution for any 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x)
changes signs, we have α
o
> −∞, and hence there are still some questions
remained. In particular, one may naturally ask:
Does equation (5.46) has a solution for the critical value α = α
o
?
In [CL1], Chen and Li answered this question aﬃrmatively.
Theorem 5.2.2 (ChenLi)
Assume that R(x) is a continuous function on M. Let α
o
be deﬁned in the
previous section. Then equation (5.46) has at least one solution when α = α
o
.
Proof. The Outline. Let ¦α
k
¦ be a sequence of numbers such that
α
k
> α
o
, and α
k
→α
o
, as k→∞.
We will prove the Theorem in 3 steps.
152 5 Prescribing Gaussian Curvature on Compact 2Manifolds
In step 1, we minimize the functional, for each ﬁxed α
k
,
J
k
(u) =
1
2
M
[u[
2
d A+α
k
M
ud A−
M
R(x)e
u
d A
in a class of functions that are between a lower and a Upper solution. Let the
minimizer be u
k
. Then let α
k
→α
o
.
In step 2, we show that the sequence of minimizers ¦u
k
¦ is bounded in the
region where R(x) is positively bounded away from 0.
In step 3, we prove that ¦u
k
¦ is bounded in H
1
(M) and hence converges
to a desired solution.
Step 1. For each α
k
> α
o
, by Kazdan and Warner, there exists a solution
φ
k
of
−´φ
k
+α
k
= R(x)e
φ
k
. (5.61)
We ﬁrst show that there exists a constant c
o
> 0, such that
φ
k
(x) ≥ −c
o
, ∀ x ∈ M and ∀ k = 1, 2, . (5.62)
Otherwise, for each k, let x
k
be a minimum of φ
k
(x) on M. Then obviously,
φ
k
(x
k
)→−∞, as k→∞.
On the other hand, since x
k
is a minimum, we have
−´φ
k
(x
k
) ≤ 0 , ∀ k = 1, 2, .
These contradict equation (5.61) because
α
k
→α
o
< 0 , as k→∞.
This veriﬁes (5.62).
Choose a suﬃciently negative constant A < −c
o
to be the lower solution
of equation (5.46) for all α
k
, that is,
−´A+α
k
≤ R(x)e
A
.
This is possible because R(x) is bounded below on M, and hence we can make
R(x)e
A
greater than any ﬁxed negative number.
For each ﬁxed α
k
, choose α
o
< ˜ α
k
< α
k
. By the result of Kazdan and
Warner, there exists a solution of equation (5.46) for α = ˜ α
k
. Call it ψ
k
. Then
−´ψ
k
+α
k
> −´ψ
k
+ ˜ α
k
= R(x)e
ψ
k
(x)
,
i.e. ψ
k
is a super solution of the equation
−´u +α
k
= R(x)e
u(x)
. (5.63)
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 153
Illuminated by an idea of Ding and Liu [DL], we minimize the functional
J
k
(u) in a class of functions
H = ¦u ∈ C
1
(M) [ A ≤ u ≤ ψ
k
¦.
This is possible, because the functional J
k
(u) is bounded from below. In
fact, since M is compact and R(x) is continuous, there exists a constant C
1
,
such that
[R(x)[ ≤ C
1
, ∀ x ∈ M.
Consequently, for any u ∈ H,
J
k
(u) ≥
1
2
M
[u[
2
d A+α
k
M
ud A−C
1
M
e
ψ
k
d A
≥
1
2
(u −C
2
)
2
−C
3
. (5.64)
Here we have used the H¨ older and Poincar´e inequality. From (5.64), one
can easily see that J
k
is bounded from below. Also by (5.65) and a similar
argument as in the previous section, we can show that there is a subsequence
of a minimizing sequence which converges weakly to a minimum u
k
of J
k
(u)
in the set H. In order to show that u
k
is a solution of equation (5.63), we
need it be in the interior of H, i.e.
A < u
k
(x) < ψ
k
(x), ∀ x ∈ M. (5.65)
To verify this, we use a maximum principle.
First we show that
u
k
(x) < ψ
k
(x) , ∀ x ∈ M. (5.66)
Suppose in the contrary, there exists x
o
∈ M, such that u
k
(x
o
) = ψ
k
(x
o
), we
will derive a contradiction. Since u
k
(x
o
) > A, there is a δ > 0, such that
A < u
k
(x) , ∀x ∈ B
δ
(x
o
).
It follows that, for any positive function v(x) with its support in B
δ
(x
o
),
there exists an > 0, such that as − ≥ t ≤ 0, we have u
k
+ tv ∈ H.
Since u
k
is a minimum of J
k
in H, we have in particular that the function
g(t) = J
k
(u
k
+tv) achieves its minimum at t = 0 on the interval [−, 0]. Hence
g
(0) ≤ 0. This implies that
M
[−´u
k
+α
k
−R(x)e
u
k
]v(x)d A ≤ 0. (5.67)
Consequently
−´u
k
(x
o
) +α
k
−R(x
o
)e
u
k
(xo)
≤ 0. (5.68)
Let w(x) = ψ
k
(x) −u
k
(x). Then w(x) ≥ 0, and x
o
is a minimum of w. It
follows that −´w(x
o
) ≤ 0. While on the other hand, we have, by (5.68),
154 5 Prescribing Gaussian Curvature on Compact 2Manifolds
−´w(x
o
) = −´ψ
k
(x
o
) −(−´u
k
(x
o
)) ≥ −˜ α
k
+α
k
> 0.
Again a contradiction. Therefore we must have
u
k
(x) < ψ
k
(x) , ∀ x ∈ M.
Similarly, one can show that
A < u
k
(x) , ∀ x ∈ M.
This veriﬁes (5.65). Now for any v ∈ C
1
(M), u
k
+tv is in H for all suﬃciently
small t, and hence
d
dt
J
k
(u+tv)[
t=0
= 0. Then a direct computation as we did
in the past will lead to
−´u
k
+α
k
= R(x)e
u
k
, ∀ x ∈ M.
Step 2. It is obvious that the sequence ¦u
k
¦ so obtained in the previous
step is uniformly bounded from below by the negative constant A. To show
that it is also bounded from above, we ﬁrst consider the region where R(x) is
positively bounded away from 0. We will use a sup + inf inequality of Brezis,
Li, and Shafrir [BLS]:
Lemma 5.2.6 (Brezis, Li, and Shafrir). Assume that V is a Lipschitz func
tion satisfying
0 < a ≤ V (x) ≤ b < ∞
and let K be a compact subset of a domain Ω in R
2
. Then any solution u of
the equation
−´u = V (x)e
u
, x ∈ Ω,
satisﬁes
sup
K
u + inf
Ω
u ≤ C(a, b, V 
L
∞, K, Ω). (5.69)
Let x
o
be a point on M at which R(x
o
) > 0. Choose so small such that
R(x) ≥ a > 0 , ∀ x ∈ Ω ≡ B
2
(x
o
),
for some constant a.
Let v
k
be a solution of
−´v
k
−α
k
= 0 x ∈ Ω
v
k
(x) = 1 x ∈ ∂Ω.
Let w
k
= u
k
+v
k
. Then w
k
satisﬁes
−´w
k
= R(x)e
−v
k
e
w
k
.
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 155
By the maximum principle, the sequence ¦v
k
¦ is bounded from above and from
below in the small ball Ω. Since ¦u
k
¦ is uniformly bounded from below, ¦w
k
¦
is also bounded from below. Locally, the metric is pointwise conformal to the
Euclidean metric, hence one can apply Lemma 5.2.6 to ¦w
k
¦ to conclude that
the sequence ¦w
k
¦ is also uniformly bounded from above in the smaller ball
B
(x
o
). Since x
o
is an arbitrary point where R is positive, we conclude that
¦w
k
¦ is uniformly bounded in the region where R(x) is positively bounded
away from 0. Now the uniform bound for ¦u
k
¦ in the same region follows suit.
Step 3. We show that the sequence ¦u
k
¦ is bounded in H
1
(M).
Choose a small δ > 0, such that the set
D = ¦x ∈ M [ R(x) > δ¦
is nonempty. It is open since R(x) is continuous. From the previous step, we
know that ¦u
k
¦ is uniformly bounded in D.
Let K(x) be a smooth function such that
K(x) < 0 , for x ∈ D; and K(x) = 0 , else where.
For each α
k
, let v
k
be the unique solution of
−´v
k
+α
k
= K(x)e
v
k
, x ∈ M.
Then since
−´v
k
+α
k
≤ 0 , and α
k
→α
o
The sequence ¦v
k
¦ is uniformly bounded.
Let w
k
= u
k
−v
k
, then
−´w
k
= R(x)e
u
k
−K(x)e
v
k
. (5.70)
We show that ¦w
k
¦ is bounded in H
1
(M).
On one hand, multiplying both sides of (5.70) by e
w
k
and integrating on
M, we have
M
[w
k
[
2
e
w
k
dA =
M
R(x)e
u
k
+w
k
dA−
D
K(x)e
u
k
dA. (5.71)
On the other hand, since each u
k
is a minimizer of the functional J
k
, the
second derivative J
k
(u
k
) is positively deﬁnite. It follows that for any function
φ ∈ H
1
(M),
M
[[φ[
2
−R(x)e
u
k
φ
2
]d A ≥ 0.
Choosing φ = e
w
k
2
, we derive
156 5 Prescribing Gaussian Curvature on Compact 2Manifolds
1
4
M
[w
k
[
2
e
w
k
dA ≥
M
R(x)e
u
k
+w
k
dA. (5.72)
Combining (5.71) and (5.72), we arrive at
−
D
K(x)e
u
k
dA ≥
3
4
M
[w
k
[
2
e
w
k
dA. (5.73)
Notice that ¦u
k
¦ is uniformly bounded in region D and ¦w
k
¦ is bounded
from below due to the fact that ¦u
k
¦ is bounded from below and ¦v
k
¦ is
bounded, (5.73) implies that
M
[w
k
[
2
dAis bounded, and so does
M
[u
k
[
2
dA.
Consider ˜ u
k
= u
k
−¯ u
k
, where ¯ u
k
is the average of u
k
. Obviously
M
˜ u
k
dA =
0, hence we can apply Poincar´e inequality to ˜ u
k
and conclude that ¦˜ u
k
¦ is
bounded in H
1
(M). Apparently, we have
M
u
2
k
dA ≤ 2
M
˜ u
2
k
dA+ ¯ u
2
k
[M[
.
If
M
u
2
k
dA is unbounded, then ¦¯ u
k
¦ is unbounded. Taking into account that
¦u
k
¦ is bounded in the region D where R(x) is positively bounded away from
0, we would have
D
˜ u
2
k
dA =
D
[u
k
− ¯ u
k
[
2
dA→∞,
for some subsequence, a contradiction with the fact that ¦˜ u
k
¦ is bounded
in H
1
(M). Therefore, ¦u
k
¦ must also be bounded in H
1
(M). Consequently,
there exists a subsequence of ¦u
k
¦ that converges to a function u
o
in H
1
(M),
and u
o
is the desired weak solution for
−´u
o
+α
o
= R(x)e
uo
.
This completes the proof of the Theorem. ¯ .
6
Prescribing Scalar Curvature on S
n
, for n ≥ 3
6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where R < 0
6.3.2 In the Region Where R is small
6.3.3 In the Region Where R > 0
6.1 Introduction
On higher dimensional sphere S
n
with n ≥ 3, Kazdan and Warner raise the
following question:
Which functions R(x) can be realized as the scalar curvature of some
conformally related metrics? It is equivalent to consider the existence of a
solution to the following semilinear elliptic equation
−´u +
n(n −2)
4
u =
n −2
4(n −1)
R(x)u
n+2
n−2
, u > 0, x ∈ S
n
. (6.1)
The equation is “critical” in the sense that lack of compactness occurs.
Besides the obvious necessary condition that R(x) be positive somewhere,
there are wellknown obstructions found by Kazdan and Warner [KW2] and
later generalized by Bourguignon and Ezin [BE]. The conditions are:
S
n
X(R)dV
g
= 0 (6.2)
158 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
where dV
g
is the volume element of the conformal metric g and X is any
conformal vector ﬁeld associated with the standard metric g
0
. We call these
KazdanWarner type conditions. These conditions give rise to many examples
of R(x) for which equation (6.1) has no solution. In particular, a monotone
rotationally symmetric function R admits no solution.
In the last two decades, numerous studies were dedicated to these problems
and various suﬃcient conditions were found (please see the articles [Ba] [BC]
[C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li]
[Li2] [Li3] [Mo] [SZ] [XY] and the references therein ). However, among others,
one problem of common concern left open, namely, were those KazdanWarner
type necessary conditions also suﬃcient ? In the case where R is rotationally
symmetric, the conditions become:
R > 0 somewhere and R
changes signs. (6.3)
Then
(a) is (6.3) a suﬃcient condition? and
(b) if not, what are the necessary and suﬃcient conditions?
Kazdan listed these as open problems in his CBMS Lecture Notes [Ka].
Recently, we answered question (a) negatively [CL3] [CL4]. We found some
stronger obstructions. Our results imply that for a rotationally symmetric
function R, if it is monotone in the region where it is positive, then problem
(6.1) admits no solution unless R is a constant.
In other words, a necessary condition to solve (6.1) is that
R
(r) changes signs in the region where R is positive. (6.4)
Now, is this a suﬃcient condition?
For prescribing Gaussian curvature equation (5.3) on S
2
, Xu and Yang
[XY] showed that if R is ‘nondegenerate,’ then (6.4) is a suﬃcient condition.
For equation (6.1) on higher dimensional spheres, a major diﬃculty is
that multiple blowups may occur when approaching a solution by a mini
max sequence of the functional or by solutions of subcritical equations.
In dimensions higher than 3, the ‘nondegeneracy’ condition is no longer
suﬃcient to guarantee the existence of a solution. It was illustrated by a
counter example in [Bi] constructed by Bianchi. He found some positive ro
tationally symmetric function R on S
4
, which is nondegenerate and non
monotone, and for which the problem admits no solution. In this situation,
a more proper condition is called the ‘ﬂatness condition’. Roughly speaking,
it requires that at every critical point of R, the derivatives of R up to order
(n − 2) vanish, while some higher but less than n
th
order derivative is dis
tinct from 0. For n = 3, the ‘nondegeneracy’ condition is a special case of
the ‘ﬂatness condition’. Although people are wondering if the ‘ﬂatness condi
tion’ is necessary, it is still used widely today (see [CY3], [Li2], and [SZ]) as a
standard assumption to guarantee the existence of a solution. The above men
tioned Bianchi’s counter example seems to suggest that the ‘ﬂatness condition’
is somewhat sharp.
6.1 Introduction 159
Now, a natural question is:
Under the ‘ﬂatness condition,’ is (6.4) a suﬃcient condition?
In this section, we answer the question aﬃrmatively. We prove that, under
the ‘ﬂatness condition,’ (6.4) is a necessary and suﬃcient condition for (6.1) to
have a solution. This is true in all dimensions n ≥ 3, and it applies to functions
R with changing signs. Thus, we essentially answer the open question (b)
posed by Kazdan.
There are many versions of ‘ﬂatness conditions,’ a general one was pre
sented in [Li2] by Y. Li. Here, to better illustrate the idea, in the statement
of the following theorem, we only list a typical and easytoverify one.
Theorem 6.1.1 Let n ≥ 3. Let R = R(r) be rotationally symmetric and
satisfy the following ﬂatness condition near every positive critical point r
o
:
R(r) = R(r
o
) +a[r −r
o
[
α
+h([r −r
o
[), with a = 0 and n−2 < α < n. (6.5)
where h
(s) = o(s
α−1
).
Then a necessary and suﬃcient condition for equation (6.1) to have a
solution is that
R
(r) changes signs in the regions where R > 0.
The theorem is proved by a variational approach. We blend in our new
ideas with other ideas in [XY], [Ba], [Li2], and [SZ]. We use the ‘center of
mass’ to deﬁne neighborhoods of ‘critical points at inﬁnity,’ obtain some quite
sharp and clean estimates in those neighborhoods, and construct a maxmini
variational scheme at subcritical levels and then approach the critical level.
Outline of the proof of Theorem 6.1.1.
Let γ
n
=
n(n−2)
4
and τ =
n+2
n−2
. We ﬁrst ﬁnd a positive solution u
p
of the
subcritical equation
−´u +γ
n
u = R(r)u
p
. (6.6)
for each p < τ and close to τ. Then let p→τ, take the limit.
To ﬁnd the solution of equation (6.6), we construct a maxmini variational
scheme. Let
J
p
(u) :=
S
n
Ru
p+1
dV
and
E(u) :=
S
n
([u[
2
+γ
n
u
2
)dV.
We seek critical points of J
p
(u) under the constraint
H = ¦u ∈ H
1
(S
n
) [ E(u) = γ
n
[S
n
[, u ≥ 0¦.
where [S
n
[ is the volume of S
n
.
160 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
If R has only one positive local maximum, then by condition (6.4), on each
of the poles, R is either nonpositive or has a local minimum. Then similar
to the approach in [C], we seek a solution by minimizing the functional in a
family of rotationally symmetric functions in H.
In the following, we assume that R has at least two positive local maxima.
In this case, the solutions we seek are not necessarily symmetric.
Our scheme is based on the following key estimates on the values of the
functional J
p
(u) in a neighborhood of the ‘critical points at inﬁnity’ associated
with each local maximum of R. Let r
1
be a positive local maximum of R. We
prove that there is an open set G
1
⊂ H (independent of p), such that on the
boundary ∂G
1
of G
1
, we have
J
p
(u) ≤ R(r
1
)[S
n
[ −δ, (6.7)
while there is a function ψ
1
∈ G
1
, such that
J
p
(ψ
1
) > R(r
1
)[S
n
[ −
δ
2
. (6.8)
Here δ > 0 is independent of p. Roughly speaking, we have some kinds of
‘mountain pass’ associated to each local maximum of R. The set G
1
is deﬁned
by using the ‘center of mass’.
Let r
1
and r
2
be two smallest positive local maxima of R. Let ψ
1
and ψ
2
be two functions deﬁned by (6.8) associated to r
1
and r
2
. Let γ be a path in
H connecting ψ
1
and ψ
2
, and let Γ be the family of all such paths.
Deﬁne
c
p
= sup
γ∈Γ
min
γ
J
p
(u). (6.9)
For each p < τ, by compactness, there is a critical point u
p
of J
p
(), such
that
J
p
(u
p
) = c
p
.
Obviously, a constant multiple of u
p
is a solution of (6.6). Moreover, by (6.7),
J
p
(u
p
) ≤ R(r
i
)[S
n
[ −δ, (6.10)
for any positive local maximum r
i
of R.
To ﬁnd a solution of (6.1), we let p→τ, take the limit. To show the conver
gence of a subsequence of ¦u
p
¦, we established apriori bounds for the solutions
in the following order.
(i) In the region where R < 0: This is done by the ‘Kelvin Transform’ and
a maximum principle.
(ii) In the region where R is small: This is mainly due to the boundedness
of the energy E(u
p
).
(iii) In the region where R is positively bounded away from 0: First due to
the energy bound, ¦u
p
¦ can only blow up at most ﬁnitely many points. Using
a Pohozaev type identity, we show that the sequence can only blow up at one
6.2 The Variational Approach 161
point and the point must be a local maximum of R. Finally, we use (6.10) to
argue that even one point blow up is impossible and thus establish an apriori
bound for a subsequence of ¦u
p
¦.
In subsection 2, we carry on the maxmini variational scheme and obtain
a solution u
p
of (6.6) for each p.
In subsection 3, we establish a priori estimates on the solution sequence
¦u
p
¦.
6.2 The Variational Approach
In this section, we construct a maxmini variational scheme to ﬁnd a solution
of
−´u +γ
n
u = R(r)u
p
(6.11)
for each p < τ :=
n+2
n−2
.
Let
J
p
(u) :=
S
n
Ru
p+1
dV
and
E(u) :=
S
n
([u[
2
+γ
n
u
2
)dV.
Let
(u, v) = (
S
uv +γ
n
uv)dV
be the inner product in the Hilbert space H
1
(S
n
), and
u :=
E(u)
be the corresponding norm.
We seek critical points of J
p
(u) under the constraint
H := ¦u ∈ H
1
(S
n
) [ E(u) = E(1) = γ
n
[S
n
[, u ≥ 0¦.
where [S
n
[ is the volume of S
n
.
One can easily see that a critical point of J
p
in S multiplied by a constant
is a solution of (6.11).
We divide the rest of the section into two parts.
First, we establish the key estimates (6.7) and (6.8).
Then, we carry on the maxmini variational scheme.
162 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
6.2.1 Estimate the Values of the Functional
To construct a maxmini variational scheme, we ﬁrst show that there is some
kinds of ‘mountain pass’ associated to each positive local maximum of R.
Unlike the classical ones, these ‘mountain passes’ are in some neighborhoods
of the ‘critical points at inﬁnity’ (See Proposition 6.2.2 and 6.2.1 below).
Choose a coordinate system in R
n+1
, so that the south pole of S
n
is at
the origin O and the center of the ball B
n+1
is at (0, , 0, 1). As usual, we
use [ [ to denote the distance in R
n+1
.
Deﬁne the center of mass of u as
q(u) =:
S
n
xu
τ+1
(x)dV
S
n
u
τ+1
(x)dV
(6.12)
It is actually the center of mass of the sphere S
n
with density u
τ+1
(x) at the
point x.
We recall some wellknown facts in conformal geometry.
Let φ
q
be the standard solution with its ‘center of mass’ at q ∈ B
n+1
, that
is, φ
q
∈ H and satisﬁes
−´u +γ
n
u = γ
n
u
τ
, (6.13)
We may also regard φ
q
as depend on two parameters λ and ˜ q, where ˜ q
is the intersection of S
n
with the ray passing the center and the point q.
When ˜ q = O (the south pole of S
n
), we can express, in the spherical polar
coordinates x = (r, θ) of S
n
(0 ≤ r ≤ π, θ ∈ S
n−1
) centered at the south pole
where r = 0,
φ
q
= φ
λ,˜ q
= (
λ
λ
2
cos
2
r
2
+ sin
2 r
2
)
n−2
2
. (6.14)
with 0 < λ ≤ 1. As λ→0, this family of functions blows up at south pole,
while tends to zero elsewhere.
Correspondingly, there is a family of conformal transforms
T
q
: H −→ H; T
q
u := u(h
λ
(x))[det(dh
λ
)]
n−2
2n
(6.15)
with
h
λ
(r, θ) = (2 tan
−1
(λtan
r
2
), θ).
Here det(dh
λ
) is the determinant of dh
λ
, and can be expressed explicitly as
det(dh
λ
) =
λ
cos
2
r
2
+λ
2
sin
2 r
2
n
.
It is wellknown that this family of conformal transforms leave the equation
(6.13), the energy E(), and the integral
S
n
u
τ+1
dV invariant. More generally
and more precisely, we have
6.2 The Variational Approach 163
Lemma 6.2.1 For any u, v ∈ H
1
(S
n
), and for any nonnegative integer k,
it holds the following
(i) (T
q
u, T
q
v) = (u, v) (6.16)
(ii)
S
n
(T
q
u)
k
(T
q
v)
τ+1−k
dV =
S
n
u
k
v
τ+1−k
dV. (6.17)
In particular,
E(T
q
u) = E(u) and
S
n
(T
q
u)
τ+1
dV =
S
n
u
τ+1
dV.
Proof. (i) We will use a property of the conformal Laplacian to prove the
invariance of the energy
E(T
q
u) = E(u) .
The proof for (i) is similar.
On a ndimensional Riemannian manifold (M, g),
L
g
:= ´
g
−c(n)R
g
is call a conformal Laplacian, where c(n) =
n−2
4(n−1)
, ´
g
and R
g
are the Laplace
Beltrami operator and the scalar curvature associated with the metric g re
spectively.
The conformal Laplacian has the following wellknown invariance property
under the conformal change of metrics. For ˆ g = w
4/(n−2)
g, w > 0, we have
L
ˆ g
u = w
−
n+2
n−2
L
g
(uw) , for all u ∈ C
∞
(M). (6.18)
Interested readers may ﬁnd its proof in [Bes].
If M is a compact manifold without boundary, then multiplying both sides
of (6.18) by u and integrating by parts, one would arrive at
M
¦[
ˆ g
u[
2
+c(n)R
ˆ g
[u[
2
¦dV
ˆ g
=
M
¦[
g
(uw)[
2
+c(n)R
g
[uw[
2
¦dV
g
. (6.19)
In our situation, we choose g to be the Euclidean metric ¯ g in R
n
with
¯ g
ij
= δ
ij
and ˆ g = g
o
, the standard metric on S
n
. Then it is wellknown that
g
o
= w
4
n−2
(x)¯ g
with
w(x) =
4
4 +[x[
2
n−2
2
satisfying the equation
−
¯
´w = γ
n
w
n+2
n−2
.
We also have
164 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
c(n)R
go
= γ
n
:=
n(n −2)
4
and
c(n)R
¯ g
= c(n)0 = 0.
Now formula (6.19) becomes
S
n
¦[
o
u[
2
+γ
n
[u[
2
¦dV
o
=
R
n
[
¯
(uw)[
2
dx, (6.20)
where
o
and
¯
are gradient operators associated with the metrics g
o
and ¯ g
respectively.
Again, let ˜ q be the intersection of S
n
with the ray passing through the
center of the sphere and the point q. Make a stereographic projection from
S
n
to R
n
as shown in the ﬁgure below, where ˜ q is placed at the origin of R
n
.
Under this projection, the point (r, θ) ∈ S
n
becomes x = (x
1
, , x
n
) ∈
R
n
, with
[x[
2
= tan
r
2
and
(T
q
u)(x) = u(λx)
λ(4 +[x[
2
)
4 +λ[x[
2
n−2
2
.
It follows from this and (6.20) that
E(T
q
u) =
R
n
[
o
¸
u(λx)
λ(4 +[x[
2
)
4 +λ[x[
2
n−2
2
¸
[
2
+γ
n
[u(λx)
λ(4 +[x[
2
)
4 +λ[x[
2
n−2
2
[
2
¸
dV
go
=
R
n
[
¯
¸
u(λx)
λ(4 +[x[
2
)
4 +λ[x[
2
n−2
2
w(x)
¸
[
2
dx
=
R
n
[
¯
¸
u(λx)
4λ
4 +λ[x[
2
n−2
2
¸
[
2
dx
=
R
n
[
¯
¸
u(y)
4
4 +[y[
2
n−2
2
¸
[
2
dy
=
R
n
[
¯
(u(y)w(y))[
2
dy
= E(u).
(ii) This can be derived immediately by making change of variable y =
h
λ
(x).
This completes the proof of the lemma. ¯ .
One can also verify that
T
q
φ
q
= 1.
6.2 The Variational Approach 165
The relations between q and λ, ˜ q and T
q
for ˜ q = O can be expressed in a
similar way.
We now carry on the estimates near the south pole (0, θ) which we as
sume to be a positive local maximum. The estimates near other positive local
maxima are similar. Our conditions on R implies that
R(r) = R(0) −ar
α
, for some a > 0, n −2 < α < n. (6.21)
in a small neighborhood of O.
Deﬁne
Σ = ¦u ∈ H [ [q(u)[ ≤ ρ
o
, v := min
t,q
u −tφ
q
 ≤ ρ
o
¦. (6.22)
Notice that the ‘centers of mass’ of functions in Σ are near the south pole O.
This is a neighborhood of the ‘critical points at inﬁnity’ corresponding to O.
We will estimate the functional J
p
in this neighborhood.
We ﬁrst notice that the supremum of J
p
in Σ approaches R(0)[S
n
[ as p→τ.
More precisely, we have
Proposition 6.2.1 For any δ
1
> 0, there is a p
1
≤ τ, such that for all
τ ≥ p ≥ p
1
,
sup
Σ
J
p
(u) > R(0)[S
n
[ −δ
1
. (6.23)
Then we show that on the boundary of Σ, J
p
is strictly less and bounded
away from R(0)[S
n
[.
Proposition 6.2.2 There exist positive constants ρ
o
, p
o
, and δ
o
, such that
for all p ≥ p
o
and for all u ∈ ∂Σ, holds
J
p
(u) ≤ R(0)[S
n
[ −δ
o
. (6.24)
Proof of Proposition 6.2.1
Through a straight forward calculation, one can show that
J
τ
(φ
λ,O
)→R(0)[S
n
[, as λ→0. (6.25)
For a given δ
1
> 0, by (6.25), one can choose λ
o
, such that φ
λo,O
∈ Σ, and
J
τ
(φ
λo,O
) > R(0)[S
n
[ −
δ
1
2
. (6.26)
It is easy to see that for a ﬁxed function φ
λo,O
, J
p
is continuous with
respect to p. Hence, by (6.26), there exists a p
1
, such that for all p ≥ p
1
,
166 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
J
p
(φ
λo,O
) > R(0)[S
n
[ −δ
1
.
This completes the proof of Proposition 6.2.1.
The proof of Proposition 6.2.2 is rather complex. We ﬁrst estimate J
p
for
a family of standard functions in Σ.
Lemma 6.2.2 For p suﬃciently close to τ, and for λ and [˜ q[ suﬃciently
small, we have
J
p
(φ
λ,˜ q
) ≤ (R(0) −C
1
[˜ q[
α
)[S
n
[(1 +o
p
(1)) −C
1
λ
α+δp
, (6.27)
where δ
p
:= τ −p and o
p
(1)→0 as p→τ.
Proof of Lemma 6.2.2. Let > 0 be small such that (6.21) holds in
B
(O) ⊂ S
n
. We express
J
p
(φ
λ,˜ q
) =
S
n
R(x)φ
p+1
λ,˜ q
dV =
B(O)
+
S
n
\B(O)
. (6.28)
From the deﬁnition (6.14) of φ
λ,˜ q
and the boundedness of R, we obtain
the smallness of the second integral:
S
n
\B(O)
R(x)φ
p+1
λ,˜ q
dV ≤ C
2
λ
n−δp
n−2
2
for ˜ q ∈ B
2
(O). (6.29)
To estimate the ﬁrst integral, we work in a local coordinate centered at O.
B(O)
R(x)φ
p+1
λ,˜ q
dV =
B(O)
R(x + ˜ q)φ
p+1
λ,O
dV
≤
B(O)
[R(0) −a[x + ˜ q[
α
]φ
p+1
λ,O
dV
≤
B(O)
[R(0) −C
3
[x[
α
−C
3
[˜ q[
α
]φ
p+1
λ,O
dV
≤ [R(0) −C
3
[˜ q[
α
][S
n
[[1 +o
p
(1)] −C
4
λ
α+δp(n−2)/2
. (6.30)
Here we have used the fact that [x + ˜ q[
α
≥ c([x[
α
+[˜ q[
α
) for some c > 0 in
one half of the ball B
(O) and the symmetry of φ
λ,O
.
Noticing that α < n and δ
p
→0, we conclude that (6.28), (6.29), and (6.30)
imply (6.27). This completes the proof of the Lemma.
To estimate J
p
for all u ∈ ∂Σ, we also need the following two lemmas that
describe some useful properties of the set Σ.
Lemma 6.2.3 (On the ‘center of mass’)
(i) Let q, λ, and ˜ q be deﬁned by (6.14). Then for suﬃciently small q,
[q[
2
≤ C([˜ q[
2
+λ
4
). (6.31)
(ii) Let ρ
o
, q, and v be deﬁned by (6.22). Then for ρ
o
suﬃciently small,
ρ
o
≤ [q[ +Cv. (6.32)
6.2 The Variational Approach 167
Lemma 6.2.4 (Orthogonality)
Let u ∈ Σ and v = u−t
o
φ
qo
be deﬁned by (6.22). Let T
qo
be the conformal
transform such that
T
qo
φ
qo
= 1. (6.33)
Then
S
n
T
qo
vdV = 0 (6.34)
and
S
n
T
qo
v ψ
i
(x)dV = 0, i = 1, 2, n. (6.35)
where ψ
i
are ﬁrst spherical harmonic functions on S
n
:
−´ψ
i
= nψ
i
(x) , i = 1, 2, , n.
If we place the center of the sphere S
n
at the origin of R
n+1
(not in our case),
then for x = (x
1
, , x
n
),
ψ
i
(x) = x
i
, i = 1, 2, , n.
Proof of Lemma 6.2.3.
(i) From the deﬁnition that φ
q
= φ
λ,˜ q
, and (6.14), and by an elementary
calculation, one arrives at
[q − ˜ q[ ∼ λ
2
, for λ suﬃciently small . (6.36)
Draw a perpendicular line segment from O to q˜ q, then one can see that
(6.31) is a direct consequence of the triangle inequality and (6.36).
(ii) For any u = tφ
q
+ v ∈ ∂Σ, the boundary of Σ, by the deﬁnition, we
have either v = ρ
o
or [q(u)[ = ρ
o
. If v = ρ
o
, then we are done. Hence we
assume that [q(u)[ = ρ
o
. It follows from the deﬁnition of the ‘center of mass’
that
ρ
o
= [
S
n
x(tφ
q
+v)
τ+1
S
n
(tφ
q
+v)
τ+1
[. (6.37)
Noticing that v ≤ ρ
o
is very small, we can expand the integrals in (6.37) in
terms of v:
ρ
o
= [
t
τ+1
xφ
τ+1
q
+ (τ + 1)t
τ
xφ
τ
q
v +o(v)
t
τ+1
φ
τ+1
q
+ (τ + 1)t
τ
φ
τ
q
v +o(v)
[
≤ [
xφ
τ+1
q
φ
τ+1
q
[ +Cv ≤ [q[ +Cv.
Here we have used the fact that v is small and that t is close to 1. This
completes the proof of Lemma 6.2.3.
168 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
Proof of Lemma 6.2.4.
Write L = ´+γ
n
. We use the fact that v = u −t
o
φ
qo
is the minimum of
E(u −tφ
q
) among all possible values of t and q.
(i) By a variation with respect to t, we have
0 = (v, φ
qo
) =
S
n
vLφ
qo
dV. (6.38)
It follows from (6.13) that
S
n
vφ
τ
qo
dV = 0.
Now, apply the conformal transform T
qo
to both v and φ
qo
in the above
integral. By the invariant property of the transform, we arrive at (6.34).
(ii) To prove (6.35), we make a variation with respect to q.
0 =
q
vLφ
q
[
q=qo
= γ
n
q
vφ
τ
q
[
q=qo
q
T
qo
v(T
qo
φ
q
)
τ
[
q=qo
=
q
T
qo
v(φ
q−qo
)
τ
[
q=qo
τ
T
qo
vφ
τ−1
q−qo
q
φ
q−qo
[
q=qo
= τ
T
qo
v(ψ
1
, , ψ
n
).
This completes the proof of the Lemma.
Proof of Proposition 6.2.2
Make a perturbation of R(x):
¯
R(x) =
R(x) x ∈ B
2ρo
(O)
m x ∈ S
n
` B
2ρo
(O),
(6.39)
where m = R [
∂B
2ρo(O)
.
Deﬁne
¯
J
p
(u) =
S
n
¯
R(x)u
p+1
dV.
The estimate is divided into two steps.
In step 1, we show that the diﬀerence between J
p
(u) and
¯
J
τ
(u) is very
small. In step 2, we estimate
¯
J
τ
(u).
Step 1.
First we show
¯
J
p
(u) ≤
¯
J
τ
(u)(1 +o
p
(1)). (6.40)
where o
p
(1)→0 as p→τ.
6.2 The Variational Approach 169
In fact, by the H¨older inequality
¯
Ru
p+1
≤
¯
R
τ+1
p+1
u
τ+1
dV
p+1
τ+1
[S
n
[
τ−p
τ+1
≤
¯
Ru
τ+1
dV
p+1
τ+1
[ R(0)[S
n
[ [
τ−p
τ+1
.
This implies (6.40).
Now estimate the diﬀerence between J
p
(u) and
¯
J
p
(u).
[ J
p
(u) −
¯
J
p
(u) [≤ C
1
S
n
\B2ρo
(O)
u
p+1
≤ C
2
S
n
\B2ρo
(O)
(tφ
q
)
p+1
+C
2
S
n
\B2ρo
(O)
v
p+1
≤ C
3
λ
n−δp
n−2
2
+C
3
v
p+1
.
(6.41)
Step 2.
By virtue of (6.40) and (6.41), we now only need to estimate
¯
J
τ
(u). For
any u ∈ ∂Σ, write u = v + tφ
q
. (6.38) implies that v and φ
q
are orthogonal
with respect to the inner product associated to E(), that is
S
n
(vφ
q
+γ
n
vφ
q
)dV = 0.
Notice that u
2
:= E(u), we have
u
2
= v
2
+t
2
φ
q

2
.
Using the fact that both u and φ
q
belong to H with
u = γ
n
[S
n
[ = φ
q
,
We derive
t
2
= 1 −
v
2
γ
n
[S
n
[
. (6.42)
It follows that
S
n
¯
R(x)u
τ+1
≤
t
τ+1
S
n
¯
R(x)φ
τ+1
q
+ (τ + 1)
S
n
¯
R(x)φ
τ
q
v +
τ(τ + 1)
2
S
n
φ
τ−1
q
v
2
+o(v
2
)
= I
1
+ (τ + 1)I
2
+
τ(τ + 1)
2
I
3
+o(v
2
). (6.43)
To estimate I
1
, we use (6.42) and (6.27),
170 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
I
1
≤ (1−
τ + 1
2
v
2
γ
n
[S
n
[
)R(0)[S
n
[(1−c
1
[˜ q[
α
−c
1
λ
α
) +O(λ
n
) +o(v
2
). (6.44)
for some positive constant c
1
.
To estimate I
2
, we use the H
1
orthogonality between v and φ
q
(see (6.38)),
I
2
=
S
n
(
¯
R(x) −m)φ
τ
q
vdV =
B2ρo
(O)
(
¯
R(x) −m)φ
τ
q
vdV
=
1
γ
n
B2ρo
(O)
(
¯
R(x) −m)Lφ
q
vdV
≤ Cρ
α
o
[
B2ρo
(O)
Lφ
q
vdV [= Cρ
α
o
[(φ
q
, v)[
≤ Cρ
α
o
φ
q
v ≤ C
4
ρ
α
o
v. (6.45)
To estimate I
3
, we use (6.34) and (6.35). It is wellknown that the ﬁrst
and second eigenspaces of −´ on S
n
are constants and ψ
i
corresponding to
eigenvalues 0 and n respectively. Now (6.34) and (6.35) imply that T
q
v is
orthogonal to these two eigenspaces and hence
S
n
(T
q
v)[
2
dV ≥ λ
2
S
n
[T
q
v[
2
dV ,
where λ
2
= n+c
2
, for some positive number c
2
, is the second nonzero eigen
value of −´. Consequently
v
2
= T
q
v
2
≥ (γ
n
+n +c
2
)
S
n
(T
q
v)
2
.
It follows that
I
3
≤ R(0)
S
n
φ
τ−1
q
v
2
dV = R(0)
S
n
(T
q
φ
q
)
τ−1
(T
q
v)
2
dV
= R(0)
S
n
(T
q
v)
2
≤
R(0)
γ
n
+n +c
2
v
2
. (6.46)
Now (6.44), (6.45), and (6.46) imply that there is a β > 0 such that
¯
J
τ
(u) ≤ R(0)[S
n
[[1 −β([˜ q[
α
+λ
α
+v
2
)]. (6.47)
Here we have used the fact that v is very small and ρ
α
o
v can be controlled
by [˜ q[
α
+λ
α
(see Lemma 6.2.2). Notice that the positive constant c
2
in (6.46)
has played a key role, and without it, the coeﬃcient of v
2
in (6.47) would
be 0 due to the fact that γ
n
=
n(n−1)
4
.
For any u ∈ ∂Σ, we have
either v = ρ
o
or [q(u)[ = ρ
o
.
By (6.31) and (6.47), in either case there exist a δ
o
> 0, such that
6.2 The Variational Approach 171
¯
J
τ
(u) ≤ R(0)[S
n
[ −2δ
o
. (6.48)
Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48) . This
completes the proof of the Proposition.
6.2.2 The Variational Scheme
In this part, we construct variational schemes to show the existence of a
solution to equation (6.11) for each p < τ.
Case (i): R has only one positive local maximum.
In this case, condition (6.4) implies that R must have local minima at
both poles. Similar to the ideas in [C], we seek a solution by maximizing the
functional J
p
in a class of rotationally symmetric functions:
H
r
= ¦u ∈ H [ u = u(r)¦. (6.49)
Obviously, J
p
is bounded from above in H
r
and it is wellknown that the
variational scheme is compact for each p < τ (See the argument in Chapter
2). Therefore any maximizing sequence possesses a converging subsequence in
H
r
and the limiting function multiplied by a suitable constant is a solution of
(6.11).
Case (ii): R has at least two positive local maxima.
Let r
1
and r
2
be the two smallest positive local maxima of R. By Propo
sition 6.2.1 and 6.2.2, there exist two disjoint open sets Σ
1
, Σ
2
⊂ H, ψ
i
∈
Σ
i
, p
o
< τ and δ > 0, such that for all p ≥ p
o
,
J
p
(ψ
i
) > R(r
i
)[S
n
[ −
δ
2
, i = 1, 2; (6.50)
and
J
p
(u) ≤ R(r
i
)[S
n
[ −δ, ∀u ∈ ∂Σ
i
, i = 1, 2. (6.51)
Let γ be a path in H joining ψ
1
and ψ
2
. Let Γ be the family of all such
paths. Deﬁne
c
p
= sup
γ∈Γ
min
u∈γ
J
p
(u). (6.52)
For each ﬁxed p < τ, by the wellknown compactness, there exists a critical
(saddle) point u
p
of J
p
with
J
p
(u
p
) = c
p
.
Moreover, due to (6.51) and the deﬁnition of c
p
, we have
J
p
(u
p
) ≤ min
i=1,2
R(r
i
)[S
n
[ −δ. (6.53)
One can easily see that a critical point of J
p
in H multiplied by a constant
is a solution of (6.11) and for all p close to τ, the constant multiples are
bounded from above and bounded away from 0.
172 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
6.3 The A Priori Estimates
In the previous section, we showed the existence of a positive solution u
p
to
the subcritical equation (6.11) for each p < τ. In this section, we prove that
as p→τ, there is a subsequence of ¦u
p
¦, which converges to a solution u
o
of
(6.1). The convergence is based on the following a priori estimate.
Theorem 6.3.1 Assume that R satisﬁes condition (6.5) in Theorem 6.1.1,
then there exists p
o
< τ, such that for all p
o
< p < τ, the solution of (6.11)
obtained by the variational scheme are uniformly bounded.
To prove the Theorem, we estimate the solutions in three regions where
R < 0, R close to 0, and R > 0 respectively.
6.3.1 In the Region Where R < 0
The apriori bound of the solutions is stated in the following proposition, which
is a direct consequence of Proposition 6.2.2 in our paper [CL6].
Proposition 6.3.1 For all 1 < p ≤ τ, the solutions of (6.11) are uniformly
bounded in the regions where R ≤ −δ, for every δ > 0. The bound depends
only on δ, dist(¦x [ R(x) ≤ −δ¦, ¦x [ R(x) = 0¦), and the lower bound of R.
6.3.2 In the Region Where R is Small
In [CL6], we also obtained estimates in this region by the method of moving
planes. However, in that paper, we assume that R be bounded away from
zero. In [CL7], for rotationally symmetric functions R on S
3
, we removed this
condition by using a blowing up analysis near R(x) = 0. That method may
be applied to higher dimensions with a few modiﬁcations. Now within our
variational frame in this paper, the a priori estimate is simpler. It is mainly
due to the energy bound.
Proposition 6.3.2 Let ¦u
p
¦ be solutions of equation (6.11) obtained by the
variational approach in section 2. Then there exists a p
o
< τ and a δ > 0,
such that for all p
o
< p ≤ τ, ¦u
p
¦ are uniformly bounded in the regions where
[R(x)[ ≤ δ.
Proof. It is easy to see that the energy E(u
p
) are bounded for all p. Now
suppose that the conclusion of the Proposition is violated. Then there exists
a subsequence ¦u
i
¦ with u
i
= u
pi
, p
i
→τ, and a sequence of points ¦x
i
¦, with
x
i
→x
o
and R(x
o
) = 0, such that
u
i
(x
i
)→∞.
We will use a rescaling argument to derive a contradiction. Since x
i
may not
be a local maximum of u
i
, we need to choose a point near x
i
, which is almost
a local maximum. To this end, let K be any large number and let
6.3 The A Priori Estimates 173
r
i
= 2K[u
i
(x
i
)]
−
p
i
−1
2
.
In a small neighborhood of x
o
, choose a local coordinate and let
s
i
(x
i
) = u
i
(x)(r
i
−[x −x
i
[)
2
p
i
−1
.
Let a
i
be the maximum of s
i
(x) in B
ri
(x
i
). Let λ
i
= [u
i
(a
i
)]
−
p
i
−1
2
.
Then from the deﬁnition of a
i
, one can easily verify that the ball B
λiK
(a
i
)
is in the interior of B
ri
(x
i
) and the value of u
i
(x) in B
λiK
(a
i
) is bounded
above by a constant multiple of u
i
(a
i
).
To see the ﬁrst part, we use the fact s
i
(a
i
) ≥ s
i
(x
i
). From this, we derive
u
i
(a
i
)(r
i
−[a
i
−x
i
[)
2
p
i
−1
≥ u
i
(x
i
)r
2
p
i
−1
i
= (2K)
2
p
i
−1
.
It follows that
r
i
−[a
i
−x
i
[
2
≥ λ
i
K, (6.54)
hence
B
λiK
(a
i
) ⊂ B
ri
(x
i
).
To see the second part, we use s
i
(a
i
) ≥ s
i
(x) and derive from (6.54),
u
i
(x) ≤ u
i
(a
i
)
r
i
−[a
i
−x
i
[
r
i
−[x −x
i
[
2
p
i
−1
≤ u
i
(a
i
) 2
2
p
i
−1
, ∀ x ∈ B
λiK
(a
i
).
Now we can make a rescaling.
v
i
(x) =
1
u
i
(a
i
)
u
i
(λ
i
x +a
i
).
Obviously v
i
is bounded in B
K
(0), and it follows by a standard argument
that ¦v
i
¦ converges to a harmonic function v
o
in B
K
(0) ⊂ R
n
, with v
o
(0) = 1.
Consequently for i suﬃciently large,
B
K
(0)
v
i
(x)
τ+1
dx ≥ cK
n
, (6.55)
for some c > 0.
On the other hand, the boundedness of the energy E(u
i
) implies that
S
n
u
τ+1
i
dV ≤ C, (6.56)
for some constant C. By a straight forward calculation, one can verify that,
for any K > 0,
S
n
u
τ+1
i
dV ≥
B
K
(0)
v
τ+1
i
dx. (6.57)
Obviously, (6.56) and (6.57) contradict with (6.55). This completes the
proof of the Proposition. ¯ .
174 6 Prescribing Scalar Curvature on S
n
, for n ≥ 3
6.3.3 In the Regions Where R > 0.
Proposition 6.3.3 Let ¦u
p
¦ be solutions of equation (6.11) obtained by the
variational approach in section 2. Then there exists a p
o
< τ, such that for
all p
o
< p ≤ τ and for any > 0, ¦u
p
¦ are uniformly bounded in the regions
where R(x) ≥ .
Proof. The proof is divided into 3 steps. In Step 1, we argue that the solutions
can only blow up at ﬁnitely many points because of the energy bound. In Step
2, we show that there is no more than one point blow up and the point must
be a local maximum of R. This is done mainly by using a Pohozaeve type
identity. In Step 3, we use (6.53) to conclude that even one point blow up is
impossible.
Step 1. The argument is standard. Let ¦x
i
¦ be a sequence of points such
that u
i
(x
i
)→∞and x
i
→x
o
with R(x
o
) > 0. Let r
i
, s
i
(x), and v
i
(x) be deﬁned
as in Part II. Then similar to the argument in Part II, we see that ¦v
i
(x)¦
converges to a standard function v
o
(x) in R
n
with
−´v
o
= R(x
o
)v
τ
o
.
It follows that
Br
i
(xo)
u
τ+1
i
dV ≥ c
o
> 0.
Because the total energy of u
i
is bounded, we can have only ﬁnitely many
such x
o
. Therefore, ¦u
i
¦ can only blow up at ﬁnitely many points.
Step 2. We have shown that ¦u
i
¦ has only isolated blow up points. As a
consequence of a result in [Li2] (also see [CL6] or [SZ]), we have
Lemma 6.3.1 Let R satisfy the ‘ﬂatness condition’ in Theorem 1, Then ¦u
i
¦
can have at most one simple blow up point and this point must be a local
maximum of R. Moreover, ¦u
i
¦ behaves almost like a family of the standard
functions φ
λi,˜ q
, with λ
i
= (max u
i
)
−
2
n−2
and with ˜ q at the maximum of R.
Step 3. Finally, we show that even one point blow up is impossible. For
convenience, let ¦u
i
¦ be the sequence of critical points of J
pi
we obtained in
Section 2. From the proof of Proposition 2.2, one can obtain
J
τ
(u
i
) ≤ min
k
R(r
k
)[S
n
[ −δ, (6.58)
for all positive local maxima r
k
of R, because each u
i
is the minimum of J
pi
on a path. Now if ¦u
i
¦ blow up at x
o
, then by Lemma 6.3.1, we have
J
τ
(u
i
)→R(x
o
)[S
n
[.
This is a contradiction with (6.58).
Therefore, the sequence ¦u
i
¦ is uniformly bounded and possesses a subse
quence converging to a solution of (6.1). This completes the proof of Theorem
6.1.1. ¯ .
7
Maximum Principles
7.1 Introduction
7.2 Weak Maximum Principle
7.3 The Hopf Lemma and Strong Maximum Principle
7.4 Maximum Principle Based on Comparison
7.5 A Maximum Principle for Integral Equations
7.1 Introduction
As a simple example of maximum principle, let’s consider a C
2
function u(x)
of one independent variable x. It is wellknown in calculus that at a local
maximum point x
o
of u, we must have
u
(x
o
) ≤ 0.
Based on this observation, then we have the following simplest version of
maximum principle:
Assume that u
(x) > 0 in an open interval (a, b), then u can not have
any interior maximum in the interval.
One can also see this geometrically. Since under the condition u
(x) > 0,
the graph is concave up, it can not have any local maximum.
More generally, for a C
2
function u(x) of nindependent variables x =
(x
1
, , x
n
), at a local maximum x
o
, we have
D
2
u(x
o
) := (u
xixj
(x
o
)) ≤ 0 ,
that is, the symmetric matrix is nonpositive deﬁnite at point x
o
. Correspond
ingly, the simplest version maximum principle reads:
176 7 Maximum Principles
If
(a
ij
(x)) ≥ 0 and
¸
ij
a
ij
(x)u
xixj
(x) > 0 (7.1)
in an open bounded domain Ω, then u can not achieve its maximum in the
interior of the domain.
An interesting special case is when (a
ij
(x)) is an identity matrix, in which
condition (7.1) becomes
´u > 0.
Unlike its onedimensional counterpart, condition (7.1) no longer implies
that the graph of u(x) is concave up. A simple counter example is
u(x
1
, x
2
) = x
2
1
−
1
2
x
2
2
.
One can easily see from the graph of this function that (0, 0) is a saddle point.
In this case, the validity of the maximum principle comes from the simple
algebraic fact:
For any two n n matrices A and B, if A ≥ 0 and B ≤ 0, then AB ≤ 0.
In this chapter, we will introduce various maximum principles, and most
of them will be used in the method of moving planes in the next chapter.
Besides this, there are numerous other applications. We will list some below.
i) Providing Estimates
Consider the boundary value problem
−´u = f(x) , x ∈ B
1
(0) ⊂ R
n
u(x) = 0 , x ∈ ∂B
1
(0).
(7.2)
If a ≤ f(x) ≤ b in B
1
(0), then we can compare the solution u with the
two functions
a
2n
(1 −[x[
2
) and
b
2n
(1 −[x[
2
)
which satisfy the equation with f(x) replaced by a and b, respectively, and
which vanish on the boundary as u does. Now, applying the maximum prin
ciple for ´ operator (see Theorem 7.1.1 in the following), we obtain
a
2n
(1 −[x[
2
) ≤ u(x) ≤
b
2n
(1 −[x[
2
).
ii) Proving Uniqueness of Solutions
In the above example, if f(x) ≡ 0, then we can choose a = b = 0, and
this implies that u ≡ 0. In other words, the solution of the boundary value
problem (7.2) is unique.
iii) Establishing the Existence of Solutions
(a) For a linear equation such as (7.2) in any bounded open domain Ω, let
u(x) = sup φ(x)
7.1 Introduction 177
where the sup is taken among all the functions that satisfy the corresponding
diﬀerential inequality
−´φ ≤ f(x) , x ∈ Ω
φ(x) = 0 , x ∈ ∂Ω.
Then u is a solution of (7.2).
(b) Now consider the nonlinear problem
−´u = f(u) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
(7.3)
Assume that f() is a smooth function with f
() ≥ 0. Suppose that there
exist two functions u(x) ≤ ¯ u(x), such that
−´u ≤ f(u) ≤ f(¯ u) ≤ −´¯ u.
These two functions are called sub (or lower) and super (or upper) solutions
respectively.
To seek a solution of problem (7.3), we use successive approximations. Let
−´u
1
= f(u) and −´u
i+1
= f(u
i
).
Then by maximum principle, we have
u ≤ u
1
≤ u
2
≤ ≤ u
i
≤ ≤ ¯ u.
Let u be the limit of the sequence ¦u
i
¦:
u(x) = limu
i
(x),
then u is a solution of the problem (7.3).
In Section 7.2 , we introduce and prove the weak maximum principles.
Theorem 7.1.1 (Weak Maximum Principle for −´.)
i) If
−´u(x) ≥ 0, x ∈ Ω,
then
min
¯
Ω
u ≥ min
∂Ω
u.
ii) If
−´u(x) ≤ 0, x ∈ Ω,
then
max
¯
Ω
u ≤ max
∂Ω
u.
178 7 Maximum Principles
This result can be extended to general uniformly elliptic operators. Let
D
i
=
∂
∂x
i
, D
ij
=
∂
2
∂x
i
∂x
j
.
Deﬁne
L = −
¸
ij
a
ij
(x)D
ij
+
¸
i
b
i
(x)D
i
+c(x).
Here we always assume that a
ij
(x), b
i
(x), and c(x) are bounded continuous
functions in
¯
Ω. We say that L is uniformly elliptic if
a
ij
(x)ξ
i
ξ
j
≥ δ[ξ[
2
for any x ∈ Ω, any ξ ∈ R
n
and for some δ > 0.
Theorem 7.1.2 (Weak Maximum Principle for L) Let L be the uniformly
elliptic operator deﬁned above. Assume that c(x) ≡ 0.
i) If Lu ≥ 0 in Ω, then
min
¯
Ω
u ≥ min
∂Ω
u.
ii) If Lu ≤ 0 in Ω, then
max
¯
Ω
u ≤ max
∂Ω
.
These Weak Maximum Principles infer that the minima or maxima of u
attain at some points on the boundary ∂Ω. However, they do not exclude
the possibility that the minima or maxima may also occur in the interior of
Ω. Actually this can not happen unless u is constant, as we will see in the
following.
Theorem 7.1.3 (Strong Maximum Principle for L with c(x) ≡ 0) Assume
that Ω is an open, bounded, and connected domain in R
n
with smooth bound
ary ∂Ω. Let u be a function in C
2
(Ω) ∩ C(
¯
Ω). Assume that c(x) ≡ 0 in
Ω.
i) If
Lu(x) ≥ 0, x ∈ Ω,
then u attains its minimum value only on ∂Ω unless u is constant.
ii) If
Lu(x) ≤ 0, x ∈ Ω,
then u attains its maximum value only on ∂Ω unless u is constant.
This maximum principle (as well as the weak one) can also be applied to
the case when c(x) ≥ 0 with slight modiﬁcations.
7.1 Introduction 179
Theorem 7.1.4 (Strong Maximum Principle for L with c(x) ≥ 0) Assume
that Ω is an open, bounded, and connected domain in R
n
with smooth bound
ary ∂Ω. Let u be a function in C
2
(Ω) ∩ C(
¯
Ω). Assume that c(x) ≥ 0 in
Ω.
i) If
Lu(x) ≥ 0, x ∈ Ω,
then u can not attain its nonpositive minimum in the interior of Ω unless u
is constant.
ii) If
Lu(x) ≤ 0, x ∈ Ω,
then u can not attain its nonnegative maximum in the interior of Ω unless
u is constant.
We will prove these Theorems in Section 7.3 by using the Hopf Lemma.
Notice that in the previous Theorems, we all require that c(x) ≥ 0.
Roughly speaking, maximum principles hold for ‘positive’ operators. −´ is
‘positive’, and obviously so does −´+ c(x) if c(x) ≥ 0. However, as we will
see in the next chapter, in practical problems it occurs frequently that the
condition c(x) ≥ 0 can not be met. Do we really need c(x) ≥ 0? The answer
is ‘no’. Actually, if c(x) is not ‘too negative’, then the operator ‘´+ c(x)’
can still remain ‘positive’ to ensure the maximum principle. These will be
studied in Section 7.4, where we prove the ‘Maximum Principles Based on
Comparisons’.
Let φ be a positive function on
¯
Ω satisfying
−´φ +λ(x)φ ≥ 0. (7.4)
Let u be a function such that
−´u +c(x)u ≥ 0 x ∈ Ω
u ≥ 0 on ∂Ω.
(7.5)
Theorem 7.1.5 (Maximum Principle Based on Comparison)
Assume that Ω is a bounded domain. If
c(x) > λ(x) , ∀x ∈ Ω,
then u ≥ 0 in Ω.
Also in Section 7.4, as consequences of Theorem 7.1.5, we derive the ‘Nar
row Region Principle’ and the ‘Decay at Inﬁnity Principle’. These principles
can be applied very conveniently in the ‘Method of Moving Planes’ to estab
lish the symmetry of solutions for semilinear elliptic equations, as we will see
in later sections.
In Section 7.5, we establish a maximum principle for integral inequalities.
180 7 Maximum Principles
7.2 Weak Maximum Principles
In this section, we prove the weak maximum principles.
Theorem 7.2.1 (Weak Maximum Principle for −´.)
i) If
−´u(x) ≥ 0, x ∈ Ω, (7.6)
then
min
¯
Ω
u ≥ min
∂Ω
u. (7.7)
ii) If
−´u(x) ≤ 0, x ∈ Ω, (7.8)
then
max
¯
Ω
u ≤ max
∂Ω
u. (7.9)
Proof. Here we only present the proof of part i). The entirely similar proof
also works for part ii).
To better illustrate the idea, we will deal with one dimensional case and
higher dimensional case separately.
First, let Ω be the interval (a, b). Then condition (7.6) becomes u
(x) ≤
0. This implies that the graph of u(x) on (a, b) is concave downward, and
therefore one can roughly see that the values of u(x) in (a, b) are large or
equal to the minimum value of u at the end points (See Figure 2).
a
b
Figure 2
1
To prove the above observation rigorously, we ﬁrst carry it out under the
stronger assumption that
−u
(x) > 0. (7.10)
Let m = min
∂Ω
u. Suppose in contrary to (7.7), there is a minimum x
o
∈ (a, b)
of u, such that u(x
o
) < m. Then by the second order Taylor expansion of u
around x
o
, we must have −u
(x
o
) ≤ 0. This contradicts with the assumption
(7.10).
Now for u only satisfying the weaker condition (7.6), we consider a per
turbation of u:
u
(x) = u(x) −x
2
.
Obviously, for each > 0, u
(x) satisﬁes the stronger condition (7.10), and
hence
7.2 Weak Maximum Principles 181
min
¯
Ω
u
≥ min
∂Ω
u
.
Now letting →0, we arrive at (7.7).
To prove the theorem in dimensions higher than one, we need the following
Lemma 7.2.1 (Mean Value Inequality) Let x
o
be a point in Ω. Let B
r
(x
o
) ⊂
Ω be the ball of radius r center at x
o
, and ∂B
r
(x
o
) be its boundary.
i) If −´u(x) > (=) 0 for x ∈ B
ro
(x
o
) with some r
o
> 0, then for any
r
o
> r > 0,
u(x
o
) > (=)
1
[∂B
r
(x
o
)[
∂Br(x
o
)
u(x)dS. (7.11)
It follows that, if x
o
is a minimum of u in Ω, then
−´u(x
o
) ≤ 0. (7.12)
ii) If −´u(x) < 0 for x ∈ B
ro
(x
o
) with some r
o
> 0, then for any r
o
>
r > 0,
u(x
o
) <
1
[∂B
r
(x
o
)[
∂Br(x
o
)
u(x)dS. (7.13)
It follows that, if x
o
is a maximum of u in Ω, then
−´u(x
o
) ≥ 0. (7.14)
We postpone the proof of the Lemma for a moment. This Lemma tell
us that, if −´u(x) > 0, then the value of u at the center of the small ball
B
r
(x
o
) is larger than its average value on the boundary ∂B
r
(x
o
). Roughly
speaking, the graph of u is locally somewhat concave downward. Now based
on this Lemma, to prove the theorem, we ﬁrst consider u
(x) = u(x) −[x[
2
.
Obviously,
−´u
= −´u + 2n > 0. (7.15)
Hence we must have
min
Ω
u
≥ min
∂Ω
u
(7.16)
Otherwise, if there exists a minimum x
o
of u in Ω, then by Lemma 7.2.1, we
have −´u
(x
o
) ≤ 0. This contradicts with (7.15). Now in (7.16), letting →0,
we arrive at the desired conclusion (7.7).
This completes the proof of the Theorem.
The Proof of Lemma 7.2.1. By the Divergence Theorem,
Br(x
o
)
´u(x)dx =
∂Br(x
o
)
∂u
∂ν
dS = r
n−1
S
n−1
∂u
∂r
(x
o
+rω)dS
ω
, (7.17)
where dS
ω
is the area element of the n − 1 dimensional unit sphere S
n−1
=
¦ω [ [ω[ = 1¦.
182 7 Maximum Principles
If ´u < 0, then by (7.17),
∂
∂r
¦
S
n−1
u(x
o
+rω)dS
ω
¦ < 0. (7.18)
Integrating both sides of (7.18) from 0 to r yields
S
n−1
u(x
o
+rω)dS
ω
−u(x
o
)[S
n−1
[ < 0,
where [S
n−1
[ is the area of S
n−1
. It follows that
u(x
o
) >
1
r
n−1
[S
n−1
[
∂Br(x
o
)
u(x)dS.
This veriﬁes (7.11).
To see (7.12), we suppose in contrary that −´u(x
o
) > 0. Then by the
continuity of ´u, there exists a δ > 0, such that
−´u(x) > 0, ∀x ∈ B
δ
(x
o
).
Consequently, (7.11) holds for any 0 < r < δ. This contradicts with the
assumption that x
o
is a minimum of u.
This completes the proof of the Lemma.
From the proof of Theorem 7.2.1, one can see that if we replace −´ op
erator by −´+ c(x) with c(x) ≥ 0, then the conclusion of Theorem 7.2.1 is
still true (with slight modiﬁcations). Furthermore, we can replace the Laplace
operator −´ with general uniformly elliptic operators. Let
D
i
=
∂
∂x
i
, D
ij
=
∂
2
∂x
i
∂x
j
.
Deﬁne
L = −
¸
ij
a
ij
(x)D
ij
+
¸
i
b
i
(x)D
i
+c(x). (7.19)
Here we always assume that a
ij
(x), b
i
(x), and c(x) are bounded continuous
functions in
¯
Ω. We say that L is uniformly elliptic if
a
ij
(x)ξ
i
ξ
j
≥ δ[ξ[
2
for any x ∈ Ω, any ξ ∈ R
n
and for some δ > 0.
Theorem 7.2.2 (Weak Maximum Principle for L with c(x) ≡ 0). Let L be
the uniformly elliptic operator deﬁned above. Assume that c(x) ≡ 0.
i) If Lu ≥ 0 in Ω, then
min
¯
Ω
u ≥ min
∂Ω
u. (7.20)
ii) If Lu ≤ 0 in Ω, then
max
¯
Ω
u ≤ max
∂Ω
u.
7.3 The Hopf Lemma and Strong Maximum Principles 183
For c(x) ≥ 0, the principle still applies with slight modiﬁcations.
Theorem 7.2.3 (Weak Maximum Principle for L with c(x) ≥ 0). Let L be
the uniformly elliptic operator deﬁned above. Assume that c(x) ≥ 0. Let
u
−
(x) = min¦u(x), 0¦ and u
+
(x) = max¦u(x), 0¦.
i) If Lu ≥ 0 in Ω, then
min
¯
Ω
u ≥ min
∂Ω
u
−
.
ii) If Lu ≤ 0 in Ω, then
max
¯
Ω
u ≤ max
∂Ω
u
+
.
Interested readers may ﬁnd its proof in many standard books, say in [Ev],
page 327.
7.3 The Hopf Lemma and Strong Maximum Principles
In the previous section, we prove a weak form of maximum principle. In the
case Lu ≥ 0, it concludes that the minimum of u attains at some point on the
boundary ∂Ω. However it does not exclude the possibility that the minimum
may also attain at some point in the interior of Ω. In this section, we will show
that this can not actually happen, that is, the minimum value of u can only
be achieved on the boundary unless u is constant. This is called the “Strong
Maximum Principle”. We will prove it by using the following
Lemma 7.3.1 (Hopf Lemma). Assume that Ω is an open, bounded, and
connected domain in R
n
with smooth boundary ∂Ω. Let u be a function in
C
2
(Ω) ∩ C(
¯
Ω). Let
L = −
¸
ij
a
ij
(x)D
ij
+
¸
i
b
i
(x)D
i
+c(x)
be uniformly elliptic in Ω with c(x) ≡ 0. Assume that
Lu ≥ 0 in Ω. (7.21)
Suppose there is a ball B contained in Ω with a point x
o
∈ ∂Ω ∩ ∂B and
suppose
u(x) > u(x
o
), ∀x ∈ B. (7.22)
Then for any outward directional derivative at x
o
,
∂u(x
o
)
∂ν
< 0. (7.23)
184 7 Maximum Principles
In the case c(x) ≥ 0, if we require additionally that u(x
o
) ≤ 0, then the
same conclusion of the Hopf Lemma holds.
Proof. Without loss of generality, we may assume that B is centered at
the origin with radius r. Deﬁne
w(x) = e
−αr
2
−e
−αx
2
.
Consider v(x) = u(x) +w(x) on the set D = B
r
2
(x
o
) ∩ B (See Figure 3).
x
o
0
D
r
Ω
Figure 3
1
We will choose α and appropriately so that we can apply the Weak
Maximum Principle to v(x) and arrive at
v(x) ≥ v(x
o
) ∀x ∈ D. (7.24)
We postpone the proof of (7.24) for a moment. Now from (7.24), we have
∂v
∂ν
(x
o
) ≤ 0, (7.25)
Noticing that
∂w
∂ν
(x
o
) > 0
We arrive at the desired inequality
∂u
∂ν
(x
o
) < 0.
Now to complete the proof of the Lemma, what left to verify is (7.24). We
will carry this out in two steps. First we show that
Lv ≥ 0. (7.26)
Hence we can apply the Weak Maximum Principle to conclude that
min
¯
D
v ≥ min
∂D
. (7.27)
7.3 The Hopf Lemma and Strong Maximum Principles 185
Then we show that the minimum of v on the boundary ∂D is actually
attained at x
o
:
v(x) ≥ v(x
o
) ∀x ∈ ∂D. (7.28)
Obviously, (7.27) and (7.28) imply (7.24).
To see (7.26), we directly calculate
Lw = e
−αx
2
¦4α
2
n
¸
i,j=1
a
ij
(x)x
i
x
j
−2α
n
¸
i=1
[a
ii
(x) −b
i
(x)x
i
] −c(x)¦ +c(x)e
−αr
2
≥ e
−αx
2
¦4α
2
n
¸
i,j=1
a
ij
(x)x
i
x
j
−2α
n
¸
i=1
[a
ii
(x) −b
i
(x)x
i
] −c(x)¦ (7.29)
By the ellipticity assumption, we have
n
¸
i,j=1
a
ij
(x)x
i
x
j
≥ δ[x[
2
≥ δ(
r
2
)
2
> 0 in D. (7.30)
Hence we can choose α suﬃciently large, such that Lw ≥ 0. This, together
with the assumption Lu ≥ 0 implies Lv ≥ 0, and (7.27) follows from the Weak
Maximum Principle.
To verify (7.28), we consider two parts of the boundary ∂D separately.
(i) On ∂D ∩ B, since u(x) > u(x
o
), there exists a c
o
> 0, such that
u(x) ≥ u(x
o
) +c
o
. Take small enough such that [w[ ≤ δ on ∂D∩B. Hence
v(x) ≥ u(x
o
) = v(x
o
) ∀x ∈ ∂D ∩ B.
(ii) On ∂D∩∂B, w(x) = 0, and by the assumption u(x) ≥ u(x
o
), we have
v(x) ≥ v(x
o
).
This completes the proof of the Lemma.
Now we are ready to prove
Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0.) Assume
that Ω is an open, bounded, and connected domain in R
n
with smooth bound
ary ∂Ω. Let u be a function in C
2
(Ω) ∩ C(
¯
Ω). Assume that c(x) ≡ 0 in
Ω.
i) If
Lu(x) ≥ 0, x ∈ Ω,
then u attains its minimum only on ∂Ω unless u is constant.
ii) If
Lu(x) ≤ 0, x ∈ Ω,
then u attains its maximum only on ∂Ω unless u is constant.
Proof. We prove part i) here. The proof of part ii) is similar. Let m be
the minimum value of u in Ω. Set Σ = ¦x ∈ Ω [ u(x) = m¦. It is relatively
closed in Ω. We show that either Σ is empty or Σ = Ω.
186 7 Maximum Principles
We argue by contradiction. Suppose Σ is a nonempty proper subset of
Ω. Then we can ﬁnd an open ball B ⊂ Ω ` Σ with a point on its boundary
belonging to Σ. Actually, we can ﬁrst ﬁnd a point p ∈ Ω ` Σ such that
d(p, Σ) < d(p, ∂Ω), then increase the radius of a small ball center at p until
it hits Σ (before hitting ∂Ω). Let x
o
be the point at ∂B ∩ Σ. Obviously we
have in B
Lu ≥ 0 and u(x) > u(x
o
).
Now we can apply the Hopf Lemma to conclude that the normal outward
derivative
∂u
∂ν
(x
o
) < 0. (7.31)
On the other hand, x
o
is an interior minimum of u in Ω, and we must have
Du(x
o
) = 0. This contradicts with (7.31) and hence completes the proof of
the Theorem.
In the case when c(x) ≥ 0, the strong principle still applies with slight
modiﬁcations.
Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume
that Ω is an open, bounded, and connected domain in R
n
with smooth bound
ary ∂Ω. Let u be a function in C
2
(Ω) ∩ C(
¯
Ω). Assume that c(x) ≥ 0 in
Ω.
i) If
Lu(x) ≥ 0, x ∈ Ω,
then u can not attain its nonpositive minimum in the interior of Ω unless u
is constant.
ii) If
Lu(x) ≤ 0, x ∈ Ω,
then u can not attain its nonnegative maximum in the interior of Ω unless
u is constant.
Remark 7.3.1 In order that the maximum principle to hold, we assume that
the domain Ω be bounded. This is essential, since it guarantees the existence
of maximum and minimum of u in
¯
Ω. A simple counter example is when Ω
is the half space ¦x ∈ R
n
[ x
n
> 0¦, and u(x, y) = x
n
. Obviously, ´u = 0, but
u does not obey the maximum principle:
max
Ω
u ≤ max
∂Ω
u.
Equally important is the nonnegativeness of the coeﬃcient c(x). For exam
ple, set Ω = ¦(x, y) ∈ R
2
[ −
π
2
< x <
π
2
, −
π
2
< y <
π
2
¦. Then u = cos xcos y
satisﬁes
−´u −2u = 0, in Ω
u = 0, on ∂Ω.
7.3 The Hopf Lemma and Strong Maximum Principles 187
But, obviously, there are some points in Ω at which u < 0.
However, if we impose some sign restriction on u, say u ≥ 0, then both
conditions can be relaxed. A simple version of such result will be present in
the next theorem.
Also, as one will see in the next section, c(x) is actually allowed to be
negative, but not ‘too negative’.
Theorem 7.3.3 (Maximum Principle and Hopf Lemma for not necessarily
bounded domain and not necessarily nonnegative c(x). )
Let Ω be a domain in R
n
with smooth boundary. Assume that u ∈ C
2
(Ω)∩
C(
¯
Ω) and satisﬁes
−´u +
¸
n
i=1
b
i
(x)D
i
u +c(x)u ≥ 0, u(x) ≥ 0, x ∈ Ω
u(x) = 0 x ∈ ∂Ω
(7.32)
with bounded functions b
i
(x) and c(x). Then
i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and
ii) if u ≡0 in Ω, then on ∂Ω, the exterior normal derivative
∂u
∂ν
< 0.
To prove the Theorem, we need the following Lemma concerning eigenval
ues.
Lemma 7.3.2 Let λ
1
be the ﬁrst positive eigenvalue of
−´φ = λ
1
φ(x) x ∈ B
1
(0)
φ(x) = 0 x ∈ ∂B
1
(0)
(7.33)
with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the ﬁrst
positive eigenvalue of the problem on B
ρ
(0) is
λ
1
ρ
2
. More precisely, if we let
ψ(x) = φ(
x
ρ
), then
−´ψ =
λ
1
ρ
2
ψ(x) x ∈ B
ρ
(0)
ψ(x) = 0 x ∈ ∂B
ρ
(0)
(7.34)
The proof is straight forward, and will be left for the readers.
The Proof of Theorem 7.3.3.
i) Suppose that u = 0 at some point in Ω, but u ≡0 on Ω. Let
Ω
+
= ¦x ∈ Ω [ u(x) > 0¦.
Then by the regularity assumption on u, Ω
+
is an open set with C
2
boundary.
Obviously,
u(x) = 0 , ∀ x ∈ ∂Ω
+
.
Let x
o
be a point on ∂Ω
+
, but not on ∂Ω. Then for ρ > 0 suﬃciently small,
one can choose a ball B
ρ/2
(¯ x) ⊂ Ω
+
with x
o
as its boundary point. Let ψ be
188 7 Maximum Principles
the positive eigenfunction of the eigenvalue problem (7.34) on B
ρ
(x
o
) corre
sponding to the eigenvalue
λ
1
ρ
2
. Obviously, B
ρ
(x
o
) completely covers B
ρ/2
(¯ x).
Let v =
u
ψ
. Then from (7.32), it is easy to deduce that
0 ≤ −´v −2v
ψ
ψ
+
n
¸
i=1
b
i
(x)D
i
v +
−´ψ
ψ
+
n
¸
i=1
D
i
ψ
ψ
+c(x)
v
≡ −´v +
n
¸
i=1
˜
b
i
(x)D
i
v + ˜ c(x)v.
Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on B
1
,
then
˜ c(x) =
λ
1
ρ
2
+
1
ρ
n
¸
i=1
D
i
φ
φ
+c(x).
This allows us to choose ρ suﬃciently small so that ˜ c(x) ≥ 0. Now we can
apply Hopf Lemma to conclude that, the outward normal derivative at the
boundary point x
o
of B
ρ/2
(¯ x),
∂v
∂ν
(x
o
) < 0, (7.35)
because
v(x) > 0 ∀x ∈ B
ρ/2
(¯ x) and v(x
o
) =
u(x
o
)
ψ(x
o
)
= 0.
On the other hand, since x
o
is also a minimum of v in the interior of Ω,
we must have
v(x
o
) = 0.
This contradicts with (7.35) and hence proves part i) of the Theorem.
ii) The proof goes almost the same as in part i) except we consider the
point x
o
on ∂Ω and the ball B
ρ/2
(¯ x) is in Ω with x
o
∈ ∂B
ρ/2
(¯ x). Then for
the outward normal derivative of u, we have
∂u
∂ν
(x
o
) =
∂v
∂ν
(x
o
)ψ(x
o
) +v(x
o
)
∂ψ
∂ν
(x
o
) =
∂v
∂ν
(x
o
)ψ(x
o
) < 0.
Here we have used a wellknown fact that the eigenfunction ψ on B
ρ
(x
o
) is ra
dially symmetric about the center x
o
, and hence ψ(x
o
) = 0. This completes
the proof of the Theorem.
7.4 Maximum Principles Based on Comparisons
In the previous section, we show that if (−´+c(x))u ≥ 0, then the maximum
principle, i.e. (7.20), applies. There, we required c(x) ≥ 0. We can think −´
7.4 Maximum Principles Based on Comparisons 189
as a ‘positive’ operator, and the maximum principle holds for any ‘positive’
operators. For c(x) ≥ 0, −´+c(x) is also ‘positive’. Do we really need c(x) ≥ 0
here? To answer the question, let us consider the Dirichlet eigenvalue problem
of −´:
−´φ −λφ(x) = 0 x ∈ Ω
φ(x) = 0 x ∈ ∂Ω.
(7.36)
We notice that the eigenfunction φ corresponding to the ﬁrst positive eigen
value λ
1
is either positive or negative in Ω. That is, the solutions of (7.36)
with λ = λ
1
obey Maximum Principle, that is, the maxima or minima of φ
are attained only on the boundary ∂Ω. This suggests that, to ensure the Max
imum Principle, c(x) need not be nonnegative, it is allowed to be as negative
as −λ
1
. More precisely, we can establish the following more general maximum
principle based on comparison.
Theorem 7.4.1 Assume that Ω is a bounded domain. Let φ be a positive
function on
¯
Ω satisfying
−´φ +λ(x)φ ≥ 0. (7.37)
Assume that u is a solution of
−´u +c(x)u ≥ 0 x ∈ Ω
u ≥ 0 on ∂Ω.
(7.38)
If
c(x) > λ(x) , ∀x ∈ Ω, (7.39)
then u ≥ 0 in Ω.
Proof. We argue by contradiction. Suppose that u(x) < 0 somewhere in
Ω. Let v(x) =
u(x)
φ(x)
. Then since φ(x) > 0, we must have v(x) < 0 somewhere
in Ω. Let x
o
∈ Ω be a minimum of v(x). By a direct calculation, it is easy to
verify that
−´v = 2v
φ
φ
+
1
φ
(−´u +
´φ
φ
u). (7.40)
On one hand, since x
o
is a minimum, we have
−´v(x
o
) ≤ 0 and v(x
o
) = 0. (7.41)
While on the other hand, by (7.37), (7.38), and (7.39), and taking into
account that u(x
o
) < 0, we have, at point x
o
,
−´u +
´φ
φ
u(x
o
) ≥ −´u +λ(x
o
)u(x
o
)
> −´u +c(x
o
)u(x
o
) ≥ 0.
This is an obvious contradiction with (7.40) and (7.41), and thus completes
the proof of the Theorem.
190 7 Maximum Principles
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39)
are required only at the points where v attains its minimum, or at points where
u is negative.
The Theorem is also valid on an unbounded domains if u is “nonnegative”
at inﬁnity:
Theorem 7.4.2 If Ω is an unbounded domain, besides condition (7.38), we
assume further that
liminf
x→∞
u(x) ≥ 0. (7.42)
Then u ≥ 0 in Ω.
Proof. Still consider the same v(x) as in the proof of Theorem 7.4.1. Now
condition (7.42) guarantees that the minima of v(x) do not “leak” away to
inﬁnity. Then the rest of the arguments are exactly the same as in the proof
of Theorem 7.4.1.
For convenience in applications, we provide two typical situations where
there exist such functions φ and c(x) satisfying condition (7.37) and (7.39),
so that the Maximum Principle Based on Comparison applies:
i) Narrow regions, and
ii) c(x) decays fast enough at ∞.
i) Narrow Regions. When
Ω = ¦x [ 0 < x
1
< l¦
is a narrow region with width l as shown:
Ω

0 l x
1
We can choose φ(x) = sin(
x1+
l
). Then it is easy
to see that −´φ = (
1
l
)
2
φ, where λ(x) =
−1
l
2
can be very negative when l is suﬃciently small.
Corollary 7.4.1 (Narrow Region Principle.) If u satisﬁes (7.38) with bounded
function c(x). Then when the width l of the region Ω is suﬃciently small, c(x)
satisﬁes (7.39), i.e. c(x) > λ(x) =
−1
l
2
. Hence we can directly apply Theorem
7.4.1 to conclude that u ≥ 0 in Ω, provided liminf
x→∞
u(x) ≥ 0.
ii) Decay at Inﬁnity. In dimension n ≥ 3, one can choose some positive
number q < n −2, and let φ(x) =
1
x
q
Then it is easy to verify that
7.5 A Maximum Principle for Integral Equations 191
−´φ =
q(n −2 −q)
[x[
2
φ.
In the case c(x) decays fast enough near inﬁnity, we can adapt the proof of
Theorem 7.4.1 to derive
Corollary 7.4.2 (Decay at Inﬁnity) Assume there exist R > 0, such that
c(x) > −
q(n −2 −q)
[x[
2
, ∀[x[ > R. (7.43)
Suppose
lim
x→∞
u(x)
[x[
q
= 0.
Let Ω be a region containing in B
C
R
(0) ≡ R
n
` B
R
(0). If u satisﬁes (7.38) on
¯
Ω, then
u(x) ≥ 0 for all x ∈ Ω.
Remark 7.4.2 From Remark 7.4.1, one can see that actually condition
(7.43) is only required at points where u is negative.
Remark 7.4.3 Although Theorem 7.4.1 as well as its Corollaries are stated
in linear forms, they can be easily applied to a nonlinear equation, for example,
−´u −[u[
p−1
u = 0 x ∈ R
n
. (7.44)
Assume that the solution u decays near inﬁnity at the rate of
1
[x[
s
with
s(p − 1) > 2. Let c(x) = −[u(x)[
p−1
. Then for R suﬃciently large, and for
the region Ω as stated in Corollary 7.4.2, c(x) satisﬁes (7.43) in Ω. If further
assume that
u [
∂Ω
≥ 0,
then we can derive from Corollary 7.4.2 that u ≥ 0 in the entire region Ω.
7.5 A Maximum Principle for Integral Equations
In this section, we introduce a maximum principle for integral equations.
Let Ω be a region in R
n
, may or may not be bounded. Assume
K(x, y) ≥ 0, ∀(x, y) ∈ Ω Ω.
Deﬁne the integral operator T by
(Tf)(x) =
Ω
K(x, y)f(y)dy.
192 7 Maximum Principles
Let   be a norm in a Banach space satisfying
f ≤ g, whenever 0 ≤ f(x) ≤ g(x) in Ω. (7.45)
There are many norms that satisfy (7.45), such as L
p
(Ω) norms, L
∞
(Ω) norm,
and C(Ω) norm.
Theorem 7.5.1 Suppose that
Tg ≤ θg with some 0 < θ < 1 ∀g. (7.46)
If
f(x) ≤ (Tf)(x), (7.47)
then
f(x) ≤ 0 for all x ∈ Ω. (7.48)
Proof. Deﬁne
Ω
+
= ¦x ∈ Ω [ f(x) > 0¦,
and
f
+
(x) = max¦0, f(x)¦, f
−
(x) = min¦0, f(x)¦.
Then it is easy to see that for any x ∈ Ω
+
,
0 ≤ f
+
(x) ≤ (Tf)(x) = (Tf)
+
(x) + (Tf)
−
(x) ≤ (Tf)
+
(x) (7.49)
While in the rest of the region Ω, we have f
+
(x) = 0 and (Tf)
+
(x) ≥ 0 by
the deﬁnitions. Therefore, (7.49) holds for any x ∈ Ω. It follows from (7.49)
that
f
+
 ≤ (Tf)
+
 ≤ θf
+
.
Since θ < 1, we must have
f
+
 = 0,
which implies that f
+
(x) ≡ 0, and hence f(x) ≤ 0. This completes the proof
of the Theorem.
Recall that in Section 5, after introducing the “Maximum Principle Based
on Comparison” for Partial Diﬀerential Equations, we explained how it can
be applied to the situations of “Narrow Regions” and “Decay at Inﬁnity”.
Parallel to this, we have similar applications for integral equations. Let α be
a number between 0 and n. Deﬁne
(Tf)(x) =
Ω
1
[x −y[
n−α
c(y)f(y)dy.
Assume that f satisﬁes the integral equation
f = Tf in Ω,
7.5 A Maximum Principle for Integral Equations 193
or more generally, the integral inequality
f ≤ Tf in Ω. (7.50)
Further assume that
Tf
L
p
(Ω)
≤ Cc(y)
L
τ
(Ω)
f
L
p
(Ω)
, (7.51)
for some p, τ > 1.
If we have some right integrability condition on c(y), then we can derived,
from Theorem 7.5.1, a maximum principle that will be applied to “Narrow
Regions” and “Near Inﬁnity”. More precisely, we have
Corollary 7.5.1 Assume that c(y) ≥ 0 and c(y) ∈ L
τ
(R
n
). Let f ∈ L
p
(R
n
)
be a nonnegative function satisfying (7.50) and (7.51). Then there exist posi
tive numbers R
o
and
o
depending on c(y) only, such that
if µ(Ω ∩ B
Ro
(0)) ≤
o
, then f
+
≡ 0 in Ω.
where µ(D) is the measure of the set D.
Proof. Since c(y) ∈ L
τ
(R
n
), by Lebesgue integral theory, when the measure
of the intersection of Ω with B
Ro
(0) is suﬃciently small, we can make the
integral
Ω
[c(y)[
τ
dy as small as we wish, and thus to obtain
Cc(y)
L
τ
(Ω)
< 1.
Now it follows from Theorem 7.5.1 that
f
+
(x) ≡ 0 , ∀ x ∈ Ω.
This completes the proof of the Corollary.
Remark 7.5.1 One can see that the condition µ(Ω ∩ B
Ro
(0)) ≤
o
in the
Corollary is satisﬁed in the following two situations.
i) Narrow Regions: The width of Ω is very small.
ii) Near Inﬁnity: Say, Ω = B
C
R
(0), the complement of the ball B
R
(0), with
suﬃciently large R.
As an immediate application of this “Maximum Principle”, we study an
integral equation in the next section. We will use the method of moving planes
to obtain the radial symmetry and monotonicity of the positive solutions.
8
Methods of Moving Planes and of Moving
Spheres
8.1 Outline of the Method of Moving Planes
8.2 Applications of Maximum Principles Based on Comparison
8.2.1 Symmetry of Solutions on a Unit Ball
8.2.2 Symmetry of Solutions of −∆u = u
p
in R
n
.
8.2.3 Symmetry of Solutions of −∆u = e
u
in R
2
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
8.4.1 The Background
8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and
Symmetry of Solutions for Integral Equations
The Method of Moving Planes (MMP) was invented by the Soviet math
ematician Alexanderoﬀ in the early 1950s. Decades later, it was further de
veloped by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caﬀarelli, Gidas,
and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY],
and many others. This method has been applied to free boundary problems,
semilinear partial diﬀerential equations, and other problems. Particularly for
semilinear partial diﬀerential equations, there have seen many signiﬁcant con
tributions. We refer to the paper of Frenkel [F] for more descriptions on the
method.
The Method of Moving Planes and its variant–the Method of Moving
Spheres–have become powerful tools in establishing symmetries and mono
tonicity for solutions of partial diﬀerential equations. They can also be used
to obtain a priori estimates, to derive useful inequalities, and to prove non
existence of solutions.
196 8 Methods of Moving Planes and of Moving Spheres
From the previous chapter, we have seen the beauty and power of maxi
mum principles. The MMP greatly enhances the power of maximum principles.
Roughly speaking, the MMP is a continuous way of repeated applications of
maximum principles. During this process, the maximum principle has been
used inﬁnitely many times; and the advantage is that each time we only need
to use the maximum principle in a very narrow region. From the previous
chapter, one can see that, in such a narrow region, even if the coeﬃcients of
the equation are not ‘good,’ the maximum principle can still be applied. In the
authors’ research practice, we also introduced a form of maximum principle
at inﬁnity to the MMP and therefore simpliﬁed many proofs and extended
the results in more natural ways. We recommend the readers study this part
carefully, so that they will be able to apply it to their own research.
It is wellknown that by using a Green’s function, one can change a dif
ferential equation into an integral equation, and under certain conditions,
they are equivalent. To investigate the symmetry and monotonicity of inte
gral equations, the authors, together with Ou, created an integral form of
MMP. Instead of using local properties (say diﬀerentiability) of a diﬀeren
tial equation, they employed the global properties of the solutions of integral
equations.
In this chapter, we will apply the Method of Moving Planes and their
variant–the Method of Moving Spheres– to study semilinear elliptic equa
tions and integral equations. We will establish symmetry, monotonicity, a
priori estimates, and nonexistence of the solutions. During the process of
Moving Planes, the Maximum Principles introduced in the previous chapter
are applied in innovative ways.
In Section 8.2, we will establish radial symmetry and monotonicity for the
solutions of the following three semilinear elliptic problems
−´u = f(u) x ∈ B
1
(0)
u = 0 on ∂B
1
(0);
−´u = u
n+2
n−2
(x) x ∈ R
n
n ≥ 3;
and
−´u = e
u(x)
x ∈ R
2
.
During the moving of planes, the Maximum Principles Base on Comparison
will play a major role. In particular, the Narrow Region Principle and the
Decay at Inﬁnity Principle will be used repeatedly in dealing with the three
examples.
In Section 8.3, we will apply the Method of Moving Planes in a ‘local way’
to obtain a priori estimates on the solutions of the prescribing scalar curvature
equation on a compact Riemannian manifold M
−
4(n −1)
n −2
´
o
u +R
o
(x)u = R(x)u
n+2
n−2
, in M.
8.1 Outline of the Method of Moving Planes 197
We alow the function R(x) to change signs. In this situation, the traditional
blowingup analysis fails near the set where R(x) = 0. We will use the Method
of Moving Planes in an innovative way to obtain a priori estimates. Since the
Method of Moving Planes can not be applied to the solution u directly, we
introduce an auxiliary function to circumvent this diﬃculty.
In Section 8.4, we use the Method of Moving Spheres to prove a non
existence of solutions for the prescribing Gaussian and scalar curvature equa
tions
−´u + 2 = R(x)e
u
,
and
−´u +
n(n −2)
4
u =
n −2
4(n −1)
R(x)u
n+2
n−2
on S
2
and on S
n
(n ≥ 3), respectively. We prove that if the function R(x)
is rotationally symmetric and monotone in the region where it is positive,
then both equations admit no solution. This provides a stronger necessary
condition than the well known KazdanWarner condition, and it also becomes
a suﬃcient condition for the existence of solutions in most cases.
In Section 8.5, as an application of the maximum principle for integral
equations introduced in Section 7.5, we study the integral equation in R
n
u(x) =
R
n
1
[x −y[
n−α
u
n+α
n−α
(y)dy,
for any real number α between 0 and n. It arises as an EulerLagrange equation
for a functional in the context of the HardyLittlewoodSobolev inequalities.
Due to the diﬀerent nature of the integral equation, the traditional Method of
Moving Planes does not work. Hence we exploit its global property and develop
a new idea–the Integral Form of the Method of Moving Planes to obtain the
symmetry and monotonicity of the solutions. The Maximum Principle for
Integral Equations established in Chapter 7 is combined with the estimates
of various integral norms to carry on the moving of planes.
8.1 Outline of the Method of Moving Planes
To outline how the Method of Moving Planes works, we take the Euclidian
space R
n
for an example. Let u be a positive solution of a certain partial
diﬀerential equation. If we want to prove that it is symmetric and monotone
in a given direction, we may assign that direction as x
1
axis. For any real
number λ, let
T
λ
= ¦x = (x
1
, x
2
, , x
n
) ∈ R
n
[ x
1
= λ¦.
This is a plane perpendicular to x
1
axis and the plane that we will move with.
Let Σ
λ
denote the region to the left of the plane, i.e.
198 8 Methods of Moving Planes and of Moving Spheres
Σ
λ
= ¦x ∈ R
n
[ x
1
< λ¦.
Let
x
λ
= (2λ −x
1
, x
2
, , x
n
),
the reﬂection of the point x = (x
1
, , x
n
) about the plane T
λ
(See Figure 1).
x
x
λ
Σ
λ
T
λ
Figure 1
1
We compare the values of the solution u at point x and x
λ
, and we want
to show that u is symmetric about some plane T
λo
. To this end, let
w
λ
(x) = u(x
λ
) −u(x).
In order to show that, there exists some λ
o
, such that
w
λo
(x) ≡ 0 , ∀x ∈ Σ
λo
,
we generally go through the following two steps.
Step 1. We ﬁrst show that for λ suﬃciently negative, we have
w
λ
(x) ≥ 0 , ∀x ∈ Σ
λ
. (8.1)
Then we are able to start oﬀ from this neighborhood of x
1
= −∞, and move
the plane T
λ
along the x
1
direction to the right as long as the inequality (8.1)
holds.
Step 2. We continuously move the plane this way up to its limiting position.
More precisely, we deﬁne
λ
o
= sup¦λ [ w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
¦.
We prove that u is symmetric about the plane T
λo
, that is w
λo
(x) ≡ 0 for all
x ∈ Σ
λo
. This is usually carried out by a contradiction argument. We show
that if w
λo
(x) ≡ 0, then there would exist λ > λ
o
, such that (8.1) holds, and
this contradicts with the deﬁnition of λ
o
.
From the above illustration, one can see that the key to the Method of
Moving Planes is to establish inequality (8.1), and for partial diﬀerential equa
tions, maximum principles are powerful tools for this task. While for integral
equations, we use a diﬀerent idea. We estimate a certain norm of w
λ
on the
set
Σ
−
λ
= ¦x ∈ Σ
λ
[ w
λ
(x) < 0¦
where the inequality (8.1) is violated. We show that this norm must be zero,
and hence Σ
−
λ
is empty.
8.2 Applications of the Maximum Principles Based on Comparisons 199
8.2 Applications of the Maximum Principles Based on
Comparisons
In this section, we study some semilinear elliptic equations. We will apply
the method of moving planes to establish the symmetry of the solutions. The
essence of the method of moving planes is the application of various maximum
principles. In the proof of each theorem, the readers will see vividly how the
Maximum Principles Based on Comparisons are applied to narrow regions and
to solutions with decay at inﬁnity.
8.2.1 Symmetry of Solutions in a Unit Ball
We ﬁrst begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:
Theorem 8.2.1 Assume that f() is a Lipschitz continuous function such
that
[f(p) −f(q)[ ≤ C
o
[p −q[ (8.2)
for some constant C
o
. Then every positive solution u of
−´u = f(u) x ∈ B
1
(0)
u = 0 on ∂B
1
(0).
(8.3)
is radially symmetric and monotone decreasing about the origin.
Proof.
As shown on Figure 4 below, let T
λ
= ¦x [ x
1
= λ¦ be the plane perpen
dicular to the x
1
axis. Let Σ
λ
be the part of B
1
(0) which is on the left of the
plane T
λ
. For each x ∈ Σ
λ
, let x
λ
be the reﬂection of the point x about the
plane T
λ
, more precisely, x
λ
= (2λ −x
1
, x
2
, , x
n
).
0
x x
λ
Σ
λ
T
λ
Figure 4
x
1
1
We compare the values of the solution u on Σ
λ
with those on its reﬂection.
Let
u
λ
(x) = u(x
λ
), and w
λ
(x) = u
λ
(x) −u(x).
200 8 Methods of Moving Planes and of Moving Spheres
Then it is easy to see that u
λ
satisﬁes the same equation as u does. Applying
the Mean Value Theorem to f(u), one can verify that w
λ
satisﬁes
´w
λ
+C(x, λ)w
λ
(x) = 0, x ∈ Σ
λ
,
where
C(x, λ) =
f(u
λ
(x)) −f(u(x))
u
λ
(x) −u(x)
,
and by condition (8.2),
[C(x, λ)[ ≤ C
o
. (8.4)
Step 1: Start Moving the Plane.
We start from the near left end of the region. Obviously, for λ suﬃciently
close to −1, Σ
λ
is a narrow (in x
1
direction) region, and on ∂Σ
λ
, w
λ
(x) ≥ 0.
( On T
λ
, w
λ
(x) = 0; while on the curve part of ∂Σ
λ
, w
λ
(x) > 0 since u > 0
in B
1
(0).)
Now we can apply the “Narrow Region Principle” ( Corollary 7.4.1 ) to
conclude that, for λ close to −1,
w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
. (8.5)
This provides a starting point for us to move the plane T
λ
Step 2: Move the Plane to Its Right Limit.
We now increase the value of λ continuously, that is, we move the plane
T
λ
to the right as long as the inequality (8.5) holds. We show that, by moving
this way, the plane will not stop before hitting the origin. More precisely, let
¯
λ = sup¦λ [ w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
¦,
we ﬁrst claim that
¯
λ ≥ 0. (8.6)
Otherwise, we will show that the plane can be further moved to the right
by a small distance, and this would contradicts with the deﬁnition of
¯
λ. In
fact, if
¯
λ < 0, then the image of the curved surface part of ∂Σ¯
λ
under the
reﬂection about T¯
λ
lies inside B
1
(0), where u(x) > 0 by assumption. It follows
that, on this part of ∂Σ¯
λ
, w¯
λ
(x) > 0. By the Strong Maximum Principle, we
deduce that
w¯
λ
(x) > 0
in the interior of Σ¯
λ
.
Let d
o
be the maximum width of narrow regions that we can apply the
“Narrow Region Principle”. Choose a small positive number δ, such that δ ≤
do
2
, −
¯
λ. We consider the function w¯
λ+δ
(x) on the narrow region (See Figure
5):
Ω
δ
= Σ¯
λ+δ
∩ ¦x [ x
1
>
¯
λ −
d
o
2
¦.
8.2 Applications of the Maximum Principles Based on Comparisons 201
It satisﬁes
´w¯
λ+δ
+C(x,
¯
λ +δ)w¯
λ+δ
= 0 x ∈ Ω
δ
w¯
λ+δ
(x) ≥ 0 x ∈ ∂Ω
δ
.
(8.7)
0
Ω
δ
T
¯
λ−
do
2
Σ
¯
λ−
do
2
T¯
λ+δ
Figure 5
x
1
1
The equation is obvious. To see the boundary condition, we ﬁrst notice
that it is satisﬁed on the two curved parts and one ﬂat part where x
1
=
¯
λ+δ
of the boundary ∂Ω
δ
due to the deﬁnition of w¯
λ+δ
. To see that it is also true
on the rest of the boundary where x
1
=
¯
λ −
do
2
, we use continuity argument.
Notice that on this part, w¯
λ
is positive and bounded away from 0. More
precisely and more generally, there exists a constant c
o
> 0, such that
w¯
λ
(x) ≥ c
o
, ∀x ∈ Σ
¯
λ−
do
2
.
Since w
λ
is continuous in λ, for δ suﬃciently small, we still have
w¯
λ+δ
(x) ≥ 0 , ∀x ∈ Σ
¯
λ−
do
2
.
Hence in particular, the boundary condition in (8.7) holds for such small δ.
Now we can apply the “Narrow Region Principle” to conclude that
w¯
λ+δ
(x) ≥ 0, ∀x ∈ Ω
δ
.
And therefore,
w¯
λ+δ
(x) ≥ 0, ∀x ∈ Σ¯
λ+δ
.
This contradicts with the deﬁnition of
¯
λ and thus establishes (8.6).
(8.6) implies that
u(−x
1
, x
) ≤ u(x
1
, x
) , ∀x
1
≥ 0, (8.8)
202 8 Methods of Moving Planes and of Moving Spheres
where x
= (x
2
, , x
n
).
We then start from λ close to 1 and move the plane T
λ
toward the left.
Similarly, we obtain
u(−x
1
, x
) ≥ u(x
1
, x
) , ∀x
1
≥ 0, (8.9)
Combining two opposite inequalities (8.8) and (8.9), we see that u(x) is
symmetric about the plane T
0
. Since we can place x
1
axis in any direction,
we conclude that u(x) must be radially symmetric about the origin. Also the
monotonicity easily follows from the argument. This completes the proof.
8.2.2 Symmetry of Solutions of −u = u
p
in R
n
In an elegant paper of Gidas, Ni, and Nirenberg [2], an interesting results is
the symmetry of the positive solutions of the semilinear elliptic equation:
´u +u
p
= 0 , x ∈ R
n
, n ≥ 3. (8.10)
They proved
Theorem 8.2.2 For p =
n+2
n−2
, all the positive solutions of (8.10) with rea
sonable behavior at inﬁnity, namely
u = O(
1
[x[
n−2
),
are radially symmetric and monotone decreasing about some point, and hence
assume the form
u(x) =
[n(n −2)λ
2
]
n−2
4
(λ
2
+[x −x
o
[
2
)
n−2
2
for λ > 0 and for some x
o
∈ R
n
.
This uniqueness result, as was pointed out by R. Schoen, is in fact equiv
alent to the geometric result due to Obata [O]: A Riemannian metric on S
n
which is conformal to the standard one and having the same constant scalar
curvature is the pull back of the standard one under a conformal map of
S
n
to itself. Recently, Caﬀarelli, Gidas and Spruck [CGS] removed the de
cay assumption u = O([x[
2−n
) and proved the same result. In the case that
1 ≤ p <
n+2
n−2
, Gidas and Spruck [GS] showed that the only nonnegative solu
tion of (8.10) is identically zero. Then, in the authors paper [CL1], a simpler
and more elementary proof was given for almost the same result:
Theorem 8.2.3 i) For p =
n+2
n−2
, every positive C
2
solution of (8.10) must
be radially symmetric and monotone decreasing about some point, and
hence assumes the form
u(x) =
[n(n −2)λ
2
]
n−2
4
(λ
2
+[x −x
o
[
2
)
n−2
2
for some λ > 0 and x
o
∈ R
n
.
8.2 Applications of the Maximum Principles Based on Comparisons 203
ii) For p <
n+2
n−2
, the only nonnegative solution of (8.10) is identically zero.
The proof of Theorem 8.2.2 is actually included in the more general proof
of the ﬁrst part of Theorem 8.2.3. However, to better illustrate the idea, we
will ﬁrst present the proof of Theorem 8.2.2 (mostly in our own idea). And
the readers will see vividly, how the “Decay at Inﬁnity” principle is applied
here.
Proof of Theorem 8.2.2.
Deﬁne
Σ
λ
= ¦x = (x
1
, , x
n
) ∈ R
n
[ x
1
< λ¦, T
λ
= ∂Σ
λ
and let x
λ
be the reﬂection point of x about the plane T
λ
, i.e.
x
λ
= (2λ −x
1
, x
2
, , x
n
).
(See the previous Figure 1.)
Let
u
λ
(x) = u(x
λ
), and w
λ
(x) = u
λ
(x) −u(x).
The proof consists of three steps. In the ﬁrst step, we start from the very
left end of our region R
n
, that is near x
1
= −∞. We will show that, for λ
suﬃciently negative,
w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
. (8.11)
Here, the “Decay at Inﬁnity” is applied.
Then in the second step, we will move our plane T
λ
in the the x
1
direction
toward the right as long as inequality (8.11) holds. The plane will stop at
some limiting position, say at λ = λ
o
. We will show that
w
λo
(x) ≡ 0, ∀x ∈ Σ
λo
.
This implies that the solution u is symmetric and monotone decreasing about
the plane T
λo
. Since x
1
axis can be chosen in any direction, we conclude that
u must be radially symmetric and monotone about some point.
Finally, in the third step, using the uniqueness theory in Ordinary Diﬀer
ential Equations, we will show that the solutions can only assume the given
form.
Step 1. Prepare to Move the Plane from Near −∞.
To verify (8.11) for λ suﬃciently negative, we apply the maximum principle
to w
λ
(x). Write τ =
n+2
n−2
. By the deﬁnition of u
λ
, it is easy to see that, u
λ
satisﬁes the same equation as u does. Then by the Mean Value Theorem, it is
easy to verify that
−´w
λ
= u
τ
λ
(x) −u
τ
(x) = τψ
τ−1
λ
(x)w
λ
(x). (8.12)
where ψ
λ
(x) is some number between u
λ
(x) and u(x). Recalling the “Maxi
mum Principle Based on Comparison” (Theorem 7.4.1), we see here c(x) =
204 8 Methods of Moving Planes and of Moving Spheres
−τψ
τ−1
λ
(x). By the “Decay at Inﬁnity” argument (Corollary 7.4.2), it suﬃce
to check the decay rate of ψ
τ−1
λ
(x), and more precisely, only at the points ˜ x
where w
λ
is negative (see Remark 7.4.2 ). Apparently at these points,
u
λ
(˜ x) < u(˜ x),
and hence
0 ≤ u
λ
(˜ x) ≤ ψ
λ
(˜ x) ≤ u(˜ x).
By the decay assumption of the solution
u(x) = O
1
[x[
n−2
,
we derive immediately that
ψ
τ−1
λ
(˜ x) = O
(
1
[˜ x[
)
4
n−2
= O
1
[˜ x[
4
.
Here the power of
1
[˜ x[
is greater than two, which is what (actually more
than) we desire for. Therefore, we can apply the “Maximum Principle Based
on Comparison” to conclude that for λ suﬃciently negative ( [˜ x[ suﬃciently
large ), we must have (8.11). This completes the preparation for the moving
of planes.
Step 2. Move the Plane to the Limiting Position to Derive Symmetry.
Now we can move the plane T
λ
toward right, i.e., increase the value of λ,
as long as the inequality (8.11) holds. Deﬁne
λ
o
= sup¦λ [ w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
¦.
Obviously, λ
o
< +∞, due to the asymptotic behavior of u near x
1
= +∞. We
claim that
w
λo
(x) ≡ 0, ∀x ∈ Σ
λo
. (8.13)
Otherwise, by the “Strong Maximum Principle” on unbounded domains ( see
Theorem 7.3.3 ), we have
w
λo
(x) > 0 in the interior of Σ
λo
. (8.14)
We show that the plane T
λo
can still be moved a small distance to the right.
More precisely, there exists a δ
o
> 0 such that, for all 0 < δ < δ
o
, we have
w
λo+δ
(x) ≥ 0 , ∀x ∈ Σ
λo+δ
. (8.15)
This would contradict with the deﬁnition of λ
o
, and hence (8.13) must hold.
Recall that in the last section, we use the “Narrow Region Principle ” to
derive (8.15). Unfortunately, it can not be applied in this situation, because
8.2 Applications of the Maximum Principles Based on Comparisons 205
the “narrow region” here is unbounded, and we are not able to guarantee that
w
λo
is bounded away from 0 on the left boundary of the “narrow region”.
To overcome this diﬃculty, we introduce a new function
¯ w
λ
(x) =
w
λ
(x)
φ(x)
,
where
φ(x) =
1
[x[
q
with 0 < q < n −2.
Then it is a straight forward calculation to verify that
−´¯ w
λ
= 2¯ w
λ
φ
φ
+
−´w
λ
+
´φ
φ
w
λ
1
φ
(8.16)
We have
Lemma 8.2.1 There exists a R
o
> 0 ( independent of λ), such that if x
o
is
a minimum point of ¯ w
λ
and ¯ w
λ
(x
o
) < 0, then [x
o
[ < R
o
.
We postpone the proof of the Lemma for a moment. Now suppose that (8.15) is
violated for any δ > 0. Then there exists a sequence of numbers ¦δ
i
¦ tending
to 0 and for each i, the corresponding negative minimum x
i
of w
λo+δi
. By
Lemma 8.2.1, we have
[x
i
[ ≤ R
o
, ∀ i = 1, 2, .
Then, there is a subsequence of ¦x
i
¦ (still denoted by ¦x
i
¦) which converges
to some point x
o
∈ R
n
. Consequently,
¯ w
λo
(x
o
) = lim
i→∞
¯ w
λo+δi
(x
i
) = 0 (8.17)
and
¯ w
λo
(x
o
) = lim
i→∞
¯ w
λo+δi
(x
i
) ≤ 0.
However, we already know ¯ w
λo
≥ 0, therefore, we must have ¯ w
λo
(x
o
) = 0. It
follows that
w
λo
(x
o
) = ¯ w
λo
(x
o
)φ(x
o
) + ¯ w
λo
(x
o
)φ = 0 + 0 = 0. (8.18)
On the other hand, by (8.14), since w
λo
(x
o
) = 0, x
o
must be on the
boundary of Σ
λo
. Then by the Hopf Lemma (see Theorem 7.3.3), we have,
the outward normal derivative
∂w
λo
∂ν
(x
o
) < 0.
This contradicts with (8.18). Now, to verify (8.15), what left is to prove the
Lemma.
206 8 Methods of Moving Planes and of Moving Spheres
Proof of Lemma 8.2.1. Assume that x
o
is a negative minimum of ¯ w
λ
.
Then
−´¯ w
λ
(x
o
) ≤ 0 and ¯ w
λ
(x
o
) = 0. (8.19)
On the other hand, as we argued in Step 1, by the asymptotic behavior of
u at inﬁnity, if [x
o
[ is suﬃciently large,
c(x
o
) := −τψ
τ−1
(x
o
) > −
q(n −2 −q)
[x
o
[
≡
´φ(x
o
)
φ(x
o
)
.
It follows from (8.12) that
−´w
λ
+
´φ
φ
w
λ
(x
o
) > 0.
This, together with (8.19) contradicts with (8.16), and hence completes the
proof of the Lemma.
Step 3. In the previous two steps, we show that the positive solutions
of (8.10) must be radially symmetric and monotone decreasing about some
point in R
n
. Since the equation is invariant under translation, without loss of
generality, we may assume that the solutions are symmetric about the origin.
Then they satisﬁes the following ordinary diﬀerential equation
−u
(r) −
n−1
r
u
(r) = u
τ
u
(0) = 0
u(0) =
[n(n−2)]
(n−2)/4
λ
n−2
2
for some λ > 0. One can verify that u(r) =
[n(n −2)λ
2
]
n−2
4
(λ
2
+r
2
)
n−2
2
is a solution, and
by the uniqueness of the ODE problem, this is the only solution. Therefore,
we conclude that every positive solution of (8.10) must assume the form
u(x) =
[n(n −2)λ
2
]
n−2
4
(λ
2
+[x −x
o
[
2
)
n−2
2
for λ > 0 and some x
o
∈ R
n
. This completes the proof of the Theorem.
Proof of Theorem 8.2.3.
i) The general idea in proving this part of the Theorem is almost the
same as that for Theorem 8.2.2. The main diﬀerence is that we have no decay
assumption on the solution u at inﬁnity, hence the method of moving planes
can not be applied directly to u. So we ﬁrst make a Kelvin transform to deﬁne
a new function
v(x) =
1
[x[
n−2
u(
x
[x[
2
).
8.2 Applications of the Maximum Principles Based on Comparisons 207
Then obviously, v(x) has the desired decay rate
1
[x[
n−2
at inﬁnity, but has a
possible singularity at the origin. It is easy to verify that v satisﬁes the same
equation as u does, except at the origin:
´v +v
τ
(x) = 0 x ∈ R
n
` ¦0¦, n ≥ 3. (8.20)
We will apply the method of moving planes to the function v, and show
that v is radially symmetric and monotone decreasing about some point. If
the center point is the origin, then by the deﬁnition of v, u is also symmetric
and monotone decreasing about the origin. If the center point of v is not the
origin, then v has no singularity at the origin, and hence u has the desired
decay at inﬁnity. Then the same argument in the proof of Theorem 8.2.2 would
imply that u is symmetric and monotone decreasing about some point.
Deﬁne
v
λ
(x) = v(x
λ
) , w
λ
(x) = v
λ
(x) −v(x).
Because v(x) may be singular at the origin, correspondingly w
λ
may be sin
gular at the point x
λ
= (2λ, 0, , 0). Hence instead of on Σ
λ
, we consider w
λ
on
˜
Σ
λ
= Σ
λ
` ¦x
λ
¦. And in our proof, we treat the singular point carefully.
Each time we show that the points of interest are away from the singularities,
so that we can carry on the method of moving planes to the end to show the
existence of a λ
o
such that w
λo
(x) ≡ 0 for x ∈
˜
Σ
λo
and v is strictly increasing
in the x
1
direction in
˜
Σ
λo
.
As in the proof of Theorem 8.2.2, we see that v
λ
satisﬁes the same equation
as v does, and
−´w
λ
= τψ
τ−1
λ
(x)w
λ
(x).
where ψ
λ
(x) is some number between v
λ
(x) and v(x).
Step 1. We show that, for λ suﬃciently negative, we have
w
λ
(x) ≥ 0, ∀x ∈
˜
Σ
λ
. (8.21)
By the asymptotic behavior
v(x) ∼
1
[x[
n−2
,
we derive immediately that, at a negative minimum point x
o
of w
λ
,
ψ
τ−1
λ
(x
o
) ∼ (
1
[x
o
[
n−2
)
τ−1
=
1
[x
o
[
4
,
the power of
1
[x
o
[
is greater than two, and we have the desired decay rate for
c(x) := −τψ
τ−1
λ
(x), as mentioned in Corollary 7.4.2. Hence we can apply the
“Decay at Inﬁnity” to w
λ
(x). The diﬀerence here is that, w
λ
has a singularity
208 8 Methods of Moving Planes and of Moving Spheres
at x
λ
, hence we need to show that, the minimum of w
λ
is away from x
λ
.
Actually, we will show that,
If inf
˜
Σ
λ
w
λ
(x) < 0, then the inﬁmum is achieved in Σ
λ
` B
1
(x
λ
) (8.22)
To see this, we ﬁrst note that for x ∈ B
1
(0),
v(x) ≥ min
∂B1(0)
v(x) =
0
> 0
due to the fact that v(x) > 0 and ´v ≤ 0.
Then let λ be so negative, that v(x) ≤
0
for x ∈ B
1
(x
λ
). This is possible
because v(x) → 0 as [x[ → ∞.
For such λ, obviously w
λ
(x) ≥ 0 on B
1
(x
λ
) ` ¦x
λ
¦. This implies (8.22).
Now similar to the Step 1 in the proof of Theorem 8.2.2, we can deduce that
w
λ
(x) ≥ 0 for λ suﬃciently negative.
Step 2. Again, deﬁne
λ
o
= sup¦λ [ w
λ
(x) ≥ 0, ∀x ∈
˜
Σ
λ
¦.
We will show that
If λ
o
< 0, then w
λo
(x) ≡ 0, ∀x ∈
˜
Σ
λo
.
Deﬁne ¯ w
λ
=
w
λ
φ
the same way as in the proof of Theorem 8.2.2. Suppose
that ¯ w
λo
(x) ≡0, then by Maximum Principle we have
¯ w
λo
(x) > 0 , for x ∈
˜
Σ
λo
.
The rest of the proof is similar to the step 2 in the proof of Theorem 8.2.2
except that now we need to take care of the singularities. Again let λ
k
` λ
o
be a sequence such that ¯ w
λ
k
(x) < 0 for some x ∈
˜
Σ
λ
k
. We need to show that
for each k, inf
˜
Σ
λ
k
¯ w
λ
k
(x) can be achieved at some point x
k
∈
˜
Σ
λ
k
and that
the sequence ¦x
k
¦ is bounded away from the singularities x
λ
k
of w
λ
k
. This
can be seen from the following facts
a) There exists > 0 and δ > 0 such that
¯ w
λo
(x) ≥ for x ∈ B
δ
(x
λo
) ` ¦x
λo
¦.
b) lim
λ→λo
inf
x∈B
δ
(x
λ
)
¯ w
λ
(x) ≥ inf
x∈B
δ
(x
λo
)
¯ w
λo
(x) ≥ .
Fact (a) can be shown by noting that ¯ w
λo
(x) > 0 on
˜
Σ
λo
and ´w
λo
≤ 0,
while fact (b) is obvious.
Now through a similar argument as in the proof of Theorem 8.2.2 one can
easily arrive at a contradiction. Therefore ¯ w
λo
(x) ≡ 0.
8.2 Applications of the Maximum Principles Based on Comparisons 209
If λ
o
= 0, then we can carry out the above procedure in the opposite di
rection, namely, we move the plane in the negative x
1
direction from positive
inﬁnity toward the origin. If our planes T
λ
stop somewhere before the origin,
we derive the symmetry and monotonicity in x
1
direction by the above ar
gument. If they stop at the origin again, we also obtain the symmetry and
monotonicity in x
1
direction by combining the two inequalities obtained in
the two opposite directions. Since the x
1
direction can be chosen arbitrarily,
we conclude that the solution u must be radially symmetric about some point.
ii) To show the nonexistence of solutions in the case p <
n+2
n−2
, we notice
that after the Kelvin transform, v satisﬁes
´v +
1
[x[
n+2−p(n−2)
v
p
(x) = 0 , x ∈ R
n
` ¦0¦.
Due to the singularity of the coeﬃcient of v
p
at the origin, one can easily
see that v can only be symmetric about the origin if it is not identically
zero. Hence u must also be symmetric about the origin. Now given any two
points x
1
and x
2
in R
n
, since equation (8.10) is invariant under translations
and rotations, we may assume that the origin is at the mid point of the line
segment x
1
x
2
. Then from the above argument, we must have u(x
1
) = u(x
2
).
It follows that u is constant. Finally, from the equation (8.10), we conclude
that u ≡ 0. This completes the proof of the Theorem.
8.2.3 Symmetry of Solutions for −u = e
u
in R
2
When considering prescribing Gaussian curvature on two dimensional com
pact manifolds, if the sequence of approximate solutions “blows up”, then by
rescaling and taking limit, one would arrive at the following equation in the
entire space R
2
:
∆u + exp u = 0 , x ∈ R
2
R
2
exp u(x)dx < +∞
(8.23)
The classiﬁcation of the solutions for this limiting equation would provide es
sential information on the original problems on manifolds, also it is interesting
in its own right.
It is known that
φ
λ,x
o(x) = ln
32λ
2
(4 +λ
2
[x −x
o
[
2
)
2
for any λ > 0 and any point x
o
∈ R
2
is a family of explicit solutions.
We will use the method of moving planes to prove:
Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect
to some point in R
2
and hence assumes the form of φ
λ,x
o(x).
To this end, we ﬁrst need to obtain some decay rate of the solutions near
inﬁnity.
210 8 Methods of Moving Planes and of Moving Spheres
Some Global and Asymptotic Behavior of the Solutions
The following Theorem gives the asymptotic behavior of the solutions near
inﬁnity, which is essential to the application of the method of moving planes.
Theorem 8.2.5 If u(x) is a solution of (8.23), then as [x[ → +∞,
u(x)
ln [x[
→ −
1
2π
R
2
exp u(x)dx ≤ −4
uniformly.
This Theorem is a direct consequence of the following two Lemmas.
Lemma 8.2.2 (W. Ding) If u is a solution of
−´u = e
u
, x ∈ R
2
and
R
2
exp u(x)dx < +∞,
then
R
2
exp u(x)dx ≥ 8π.
Proof. For −∞ < t < ∞, let Ω
t
= ¦x [ u(x) > t¦, one can obtain
Ωt
exp u(x)dx = −
Ωt
´u =
∂Ωt
[u[ds
−
d
dt
[Ω
t
[ =
∂Ωt
ds
[u[
By the Schwartz inequality and the isoperimetric inequality,
∂Ωt
ds
[u[
∂Ωt
[u[ ≥ [∂Ω
t
[
2
≥ 4π[Ω
t
[.
Hence
−(
d
dt
[Ω
t
[)
Ωt
exp u(x)dx ≥ 4π[Ω
t
[
and so
d
dt
(
Ωt
exp u(x)dx)
2
= 2 exp t (
d
dt
[Ω
t
[)
Ωt
exp u(x)dx ≤ −8π[Ω
t
[e
t
.
Integrating from −∞ to ∞ gives
−(
R
2
exp u(x)dx)
2
≤ −8π
R
2
exp u(x)dx
which implies
R
2
exp u(x)dx ≥ 8π as desired.
Lemma 8.2.2 enables us to obtain the asymptotic behavior of the solutions
at inﬁnity.
8.2 Applications of the Maximum Principles Based on Comparisons 211
Lemma 8.2.3 . If u(x) is a solution of (8.23), then as [x[ → +∞,
u(x)
ln [x[
→ −
1
2π
R
2
exp u(x)dx uniformly.
Proof.
By a result of Brezis and Merle [BM], we see that the condition
R
2
exp u(x)dx <
∞ implies that the solution u is bounded from above.
Let
w(x) =
1
2π
R
2
(ln [x −y[ −ln([y[ + 1)) exp u(y)dy.
Then it is easy to see that
´w(x) = exp u(x) , x ∈ R
2
and we will show
w(x)
ln [x[
→
1
2π
R
2
exp u(x)dx uniformly as [x[ → +∞. (8.24)
To see this, we need only to verify that
I :=
R
2
ln [x −y[ −ln([y[ + 1) −ln [x[
ln [x[
e
u(y)
dy→0
as [x[→∞. Write I = I
1
+I
2
+I
3
, where I
1
, I
2
and I
3
are the integrals on the
three regions
D
1
= ¦y [ [x −y[ ≤ 1¦,
D
2
= ¦y [ [x −y[ > 1 and [y[ ≤ K¦
and
D
3
= ¦y [ [x −y[ > 1 and [y[ > K¦
respectively. We may assume that [x[ ≥ 3.
a) To estimate I
1
, we simply notice that
I
1
≤ C
x−y≤1
e
u(y)
dy −
1
ln [x[
x−y≤1
ln [x −y[e
u(y)
dy
Then by the boundedness of e
u(y)
and
R
2
e
u(y)
dy, we see that I
1
→0 as
[x[→∞.
b) For each ﬁxed K, in region D
2
, we have, as [x[→∞,
ln [x −y[ −ln([y[ + 1) −ln [x[
ln [x[
→0
hence I
2
→0.
212 8 Methods of Moving Planes and of Moving Spheres
c)To see I
3
→0, we use the fact that for [x −y[ > 1
[
ln [x −y[ −ln([y[ + 1) −ln [x[
ln [x[
[ ≤ C
Then let K→∞. This veriﬁes (8.24).
Consider the function v(x) = u(x) +w(x). Then ´v ≡ 0 and
v(x) ≤ C +C
1
ln([x[ + 1)
for some constant C and C
1
. Therefore v must be a constant. This completes
the proof of our Lemma.
Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain our Theorem 8.2.5.
The Method of Moving Planes and Symmetry
In this subsection, we apply the method of moving planes to establish the
symmetry of the solutions. For a given solution, we move the family of lines
which are orthogonal to a given direction from negative inﬁnity to a critical
position and then show that the solution is symmetric in that direction about
the critical position. We also show that the solution is strictly increasing before
the critical position. Since the direction can be chosen arbitrarily, we conclude
that the solution must be radially symmetric about some point. Finally by
the uniqueness of the solution of the following O.D.E. problem
u
(r) +
1
r
u
(r) = f(u)
u
(0) = 0
u(0) = 1
we see that the solutions of (8.23) must assume the form φ
λ,x
0(x).
Assume that u(x) is a solution of (8.23). Without loss of generality, we
show the monotonicity and symmetry of the solution in the x
1
direction.
For λ ∈ R
1
, let
Σ
λ
= ¦(x
1
, x
2
) [ x
1
< λ¦
and
T
λ
= ∂Σ
λ
= ¦(x
1
, x
2
) [ x
1
= λ¦.
Let
x
λ
= (2λ −x
1
, x
2
)
be the reﬂection point of x = (x
1
, x
2
) about the line T
λ
. (See the previous
Figure 1.)
Deﬁne
w
λ
(x) = u(x
λ
) −u(x).
A straight forward calculation shows
´w
λ
(x) + (exp ψ
λ
(x))w
λ
(x) = 0 (8.25)
8.2 Applications of the Maximum Principles Based on Comparisons 213
where ψ
λ
(x) is some number between u(x) and u
λ
(x).
Step 1. As in the previous examples, we show that, for λ suﬃciently neg
ative, we have
w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
.
By the “Decay at Inﬁnity” argument, the key is to check the decay rate of
c(x) := −e
ψ
λ
(x)
at points x
o
where w
λ
(x) is negative. At such point
u(x
oλ
) < u(x
o
), and hence ψ
λ
(x
o
) ≤ u(x
o
).
By virtue of Theorem 8.2.5, we have
e
ψ
λ
(x
o
)
= O(
1
[x
o
[
4
) (8.26)
Notice that we are in dimension two, while the key function φ =
1
x
q
given
in Corollary 7.4.2 requires 0 < q < n − 2, hence it does not work here. As a
modiﬁcation, we choose
φ(x) = ln([x[ −1).
Then it is easy to verify that
´φ
φ
(x) =
−1
[x[([x[ −1)
2
ln([x[ −1)
.
It follows from this and (8.26) that
e
ψ(x
o
)
+
´φ
φ
(x
o
) < 0 for suﬃciently large [x
o
[. (8.27)
This is what we desire for.
Then similar to the argument in Subsection 5.2, we introduce the function
¯ w
λ
(x) =
w
λ
(x)
φ(x)
.
It satisﬁes
´¯ w
λ
+ 2¯ w
λ
φ
φ
+
e
ψ
λ
(x)
+
´φ
φ
¯ w
λ
= 0. (8.28)
Moreover, by the asymptotic behavior of the solution u near inﬁnity (see
Lemma 8.2.3), we have, for each ﬁxed λ,
¯ w
λ
(x) =
u(x
λ
) −u(x)
ln([x[ −1)
→0 , as [x[→∞. (8.29)
Now, if w
λ
(x) < 0 somewhere, then by (8.29), there exists a point x
o
,
which is a negative minimum of ¯ w
λ
(x). At this point, one can easily derive a
contradiction from (8.27) and (8.28). This completes Step 1.
214 8 Methods of Moving Planes and of Moving Spheres
Step 2 . Deﬁne
λ
o
= sup¦λ [ w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
¦.
We show that
If λ
o
< 0, then w
λo
(x) ≡ 0, ∀x ∈ Σ
λo
.
The argument is entirely similar to that in the Step 2 of the proof of
Theorem 8.2.2 except that we use φ(x) = ln(−x
1
+ 2). This completes the
proof of the Theorem.
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
Let M be a Riemannian manifold of dimension n ≥ 3 with metric g
o
. Given
a function R(x) on M, one interesting problem in diﬀerential geometry is to
ﬁnd a metric g that is pointwise conformal to the original metric g
o
and has
scalar curvature R(x). This is equivalent to ﬁnding a positive solution of the
semilinear elliptic equation
−
4(n −1)
n −2
´
o
u +R
o
(x)u = R(x)u
n+2
n−2
, x ∈ M, (8.30)
where ´
o
is the BeltramiLaplace operator of (M, g
o
) and R
o
(x) is the scalar
curvature of g
o
.
In recent years, there have seen a lot of progress in understanding equation
(8.30). When (M, g
o
) is the standard sphere S
n
, the equation becomes
−´u +
n(n −2)
4
u =
n −2
4(n −1)
R(x)u
n+2
n−2
, u > 0, x ∈ S
n
. (8.31)
It is the socalled critical case where the lack of compactness occurs. In this
case, the wellknown KazdanWarner condition gives rise to many examples of
R in which there is no solution. In the last few years, a tremendous amount of
eﬀort has been put to ﬁnd the existence of the solutions and many interesting
results have been obtained ( see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6]
[CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).
One main ingredient in the proof of existence is to obtain a priori estimates
on the solutions. For equation (8.31) on S
n
, to establish a priori estimates,
a useful technique employed by most authors was a ‘blowingup’ analysis.
However, it does not work near the points where R(x) = 0. Due to this
limitation, people had to assume that R was positive and bounded away from
0. This technical assumption became somewhat standard and has been used
by many authors for quite a few years. For example, see articles by Bahri
8.3 Method of Moving Planes in a Local Way 215
[Ba], Bahri and Coron [BC], Chang and Yang [CY1], Chang, Gursky, and
Yang [CY2], Li [Li2] [Li3], and Schoen and Zhang [SZ].
Then for functions with changing signs, are there any a priori estimates?
In [CL6], we answered this question aﬃrmatively. We removed the above
wellknown technical assumption and obtained a priori estimates on the solu
tions of (8.31) for functions R which are allowed to change signs.
The main diﬃculty encountered by the traditional ‘blowingup’ analysis
is near the point where R(x) = 0. To overcome this, we used the ‘method of
moving planes’ in a local way to control the growth of the solutions in this
region and obtained an a priori estimate.
In fact, we derived a priori bounds for the solutions of more general equa
tions
−´u +
n(n −2)
4
u = R(x)u
p
, u > 0, x ∈ S
n
. (8.32)
for any exponent p between 1 +
1
A
and A, where A is an arbitrary positive
number. For a ﬁxed A, the bound is independent of p.
Proposition 8.3.1 Assume R ∈ C
2,α
(S
n
) and [R(x)[ ≥ β
0
for any x with
[R(x)[ ≤ δ
0
. Then there are positive constants and C depending only on β
0
,
δ
0
, M, and R
C
2,α
(S
n
)
, such that for any solution u of (8.32), we have u ≤ C
in the region where [R(x)[ ≤ .
In this proposition, we essentially required R(x) to grow linearly near its
zeros, which seems a little restrictive. Recently, in the case p =
n+2
n−2
, Lin [Lin1]
weaken the condition by assuming only polynomial growth of R near its zeros.
To prove this result, he ﬁrst established a Liouville type theorem in R
n
, then
use a blowing up argument.
Later, by introducing a new auxiliary function, we sharpen our method
of moving planes to further weaken the condition and to obtain more general
result:
Theorem 8.3.1 Let M be a locally conformally ﬂat manifold. Let
Γ = ¦x ∈ M [ R(x) = 0¦.
Assume that Γ ∈ C
2,α
and R ∈ C
2,α
(M) satisfying
∂R
∂ν
≤ 0 where ν is the
outward normal (pointing to the region where R is negative) of Γ, and
R(x)
[R(x)[
is
continuous near Γ,
= 0 at Γ.
(8.33)
Let D be any compact subset of M and let p be any number greater than 1.
Then the solutions of the equation
−´
o
u +R
o
(x)u = R(x)u
p
, in M (8.34)
are uniformly bounded near Γ ∩ D.
216 8 Methods of Moving Planes and of Moving Spheres
One can see that our condition (8.33) is weaker than Lin’s polynomial
growth restriction. To illustrate, one may simply look at functions R(x) which
grow like exp¦−
1
d(x)
¦, where d(x) is the distance from the point x to Γ.
Obviously, this kinds of functions satisfy our condition, but are not polynomial
growth.
Moreover our restriction on the exponent is much weaker. Besides being
n + 2
n −2
, p can be any number greater than 1.
8.3.2 The A Priori Estimates
Now, we estimate the solutions of equation (8.34) and prove Theorem 8.3.1.
Since M is a locally conformally ﬂat manifold, a local ﬂat metric can be
chosen so that equation (8.34) in a compact set D ⊂ M can be reduced to
−´u = R(x)u
p
, p > 1, (8.35)
in a compact set K in R
n
. This is the equation we will consider throughout
the section.
The proof of the Theorem is divided into two parts. We ﬁrst derive the
bound in the region(s) where R is negatively bounded away from zero. Then
based on this bound, we use the ‘method of moving planes’ to estimate the
solutions in the region(s) surrounding the set Γ.
Part I. In the region(s) where R is negative
In this part, we derive estimates in the region(s) where R is negatively
bounded away from zero. It comes from a standard elliptic estimate for sub
harmonic functions and an integral bound on the solutions. We prove
Theorem 8.3.2 The solutions of (8.35) are uniformly bounded in the region
¦x ∈ K : R(x) ≤ 0 and dist(x, Γ) ≥ δ¦. The bound depends only on δ, the
lower bound of R.
To prove Theorem 8.3.2, we introduce the following lemma of Gilbarg and
Trudinger [GT].
Lemma 8.3.1 (See Theorem 9.20 in [GT]). Let u ∈ W
2,n
(D) and suppose
´u ≥ 0. Then for any ball B
2r
(x) ⊂ D and any q > 0, we have
sup
Br(x)
u ≤ C(
1
r
n
B2r(x)
(u
+
)
q
dV )
1
q
, (8.36)
where C = C(n, D, q).
In virtue of this Lemma, to establish an a priori bound of the solutions,
one only need to obtain an integral bound. In fact, we prove
8.3 Method of Moving Planes in a Local Way 217
Lemma 8.3.2 Let B be any ball in K that is bounded away from Γ, then
there is a constant C such that
B
u
p
dx ≤ C. (8.37)
Proof. Let Ω be an open neighborhood of K such that dist (∂Ω, K ) ≥ 1.
Let ϕ be the ﬁrst eigenfunction of −´ in Ω, i.e.
−´ϕ = λ
1
ϕ(x) x ∈ Ω
ϕ(x) = 0 x ∈ ∂Ω.
Let
sgnR =
1 if R(x) > 0
0 if R(x) = 0
−1 if R(x) < 0.
Let α =
1+2p
p−1
. Multiply both sides of equation (8.35) by ϕ
α
[R[
α
sgnR and
then integrate. Taking into account that α > 2 and ϕ and R are bounded in
Ω, through a straight forward calculation, we have
Ω
ϕ
α
[R[
1+α
u
p
dx ≤ C
1
Ω
ϕ
α−2
[R[
α−2
udx ≤ C
2
Ω
ϕ
α
p
[R[
α−2
udx.
Applying H¨older inequality on the righthandside, we arrive at
Ω
ϕ
α
[R[
1+α
u
p
dx ≤ C
3
. (8.38)
Now (8.37) follows suit. This completes the proof of the Lemma.
The Proof of Theorem 8.3.2. It is a direct consequence of Lemma 8.3.1
and Lemma 8.3.2
Part II. In the region where R is small
In this part, we estimate the solutions in a neighborhood of Γ, where R is
small.
Let u be a given solution of (8.35), and x
0
∈ Γ. It is suﬃcient to show that
u is bounded in a neighborhood of x
0
. The key ingredient is to use the method
of moving planes to show that, near x
0
, along any direction that is close to
the direction of R, the values of the solution u is comparable. This result,
together with the boundedness of the integral
u
p
in a small neighborhood
of x
0
, leads to an a priori bound of the solutions. Since the method of moving
planes can not be applied directly to u, we construct some auxiliary functions.
The proof consists of the following four steps.
Step 1. Transforming the region
Make a translation, a rotation, or if necessary, a Kelvin transform. Denote
the new system by x = (x
1
, y) with y = (x
2
, , x
n
). Let x
1
axis pointing
218 8 Methods of Moving Planes and of Moving Spheres
to the right. In the new system, x
0
becomes the origin, the image of the set
Ω
+
:= ¦x [ R(x) > 0¦ is contained in the left of the yplane, and the image of
Γ is tangent to the yplane at origin. The image of Γ is also uniformly convex
near the origin.
Let x
1
= φ(y) be the equation of the image of Γ near the origin. Let
∂
1
D := ¦x [ x
1
= φ(y) +, x
1
≥ −2¦
The intersection of ∂
1
D and the plane ¦x ∈ R
n
[ x
1
= −2¦ encloses a part
of the plane, which is denote by ∂
2
D. Let D by the region enclosed by two
surfaces ∂
1
D and ∂
2
D. (See Figure 6.)
x
1 0
y
Γ
∂
1
D
∂
2
D
R(x) > 0
Figure 6
T
λ
R(x) < 0
1
The small positive number is chosen to ensure that
(a) ∂
1
D is uniformly convex,
(b)
∂R
∂x1
≤ 0 in D, and
(c)
R(x)
[R[
is continuous in D.
Step 2. Introducing an auxiliary function
Let m = max
∂1D
u. Let ˜ u be a C
2
extension of u from ∂
1
D to the entire
∂D, such that
0 ≤ ˜ u ≤ 2m, and [˜ u[ ≤ Cm.
8.3 Method of Moving Planes in a Local Way 219
Let w be the harmonic function
´w = 0 x ∈ D
w = ˜ u x ∈ ∂D
Then by the maximum principle and a standard elliptic estimate, we have
0 ≤ w(x) ≤ 2m and w
C
1
(D)
≤ C
1
m. (8.39)
Introduce a new function
v(x) = u(x) −w(x) +C
0
m[ +φ(y) −x
1
] +m[ +φ(y) −x
1
]
2
which is well deﬁned in D. Here C
0
is a large constant to be determined later.
One can easily verify that v(x) satisﬁes the following
´v +ψ(y) +f(x, v) = 0 x ∈ D
v(x) = 0 x ∈ ∂
1
D
where
ψ(y) = −C
0
m´φ(y) −m´[ +φ(y)]
2
−2m,
and
f(x, v) = 2mx
1
´φ(y)+R(x)¦v+w(x)−C
0
m[+φ(y)−x
1
]−m[+φ(y)−x
1
]
2
¦
p
At this step, we show that
v(x) > 0 ∀x ∈ D.
We consider the following two cases.
Case 1) φ(y) +
2
≤ x
1
≤ φ(y) + . In this region, R(x) is negative and
bounded away from 0. By Theorem 8.3.2 and the standard elliptic estimate,
we have
[
∂u
∂x
1
[ ≤ C
2
m,
for some positive constant C
2
. It follows from this and (8.39),
∂v
∂x
1
≤ (C
1
+C
2
−C
0
)m−2m( +φ(y) −x
1
).
Noticing that ( + φ(y) − x
1
) ≥ 0, one can choose C
0
suﬃciently large, such
that
∂v
∂x
1
< 0
Then since
v(x) = 0 ∀x ∈ ∂
1
D
we have v(x) > 0.
220 8 Methods of Moving Planes and of Moving Spheres
Case 2) −2 ≤ x
1
≤ φ(y) +
2
. In this region, again by (8.39), we have
v(x) ≥ −w(x) +C
0
m
2
≥ −2m+C
0
m
2
.
Again, choosing C
0
suﬃciently large (depending only on ), we arrive at v(x) >
0.
Step 3. Applying the Method of Moving Planes to v in x
1
direc
tion
In the previous step, we showed that v(x) ≥ 0 in D and v(x) = 0 on ∂
1
D.
This lies a foundation for the moving of planes. Now we can start from the
rightmost tip of region D, and move the plane perpendicular to x
1
axis toward
the left. More precisely, let
Σ
λ
= ¦x ∈ D [ x
1
≥ λ¦,
T
λ
= ¦x ∈ R
n
[ x
1
= λ¦.
Let x
λ
= (2λ − x
1
, y) be the reﬂection point of x with respect to the plane
T
λ
. Let
v
λ
(x) = v(x
λ
) and w
λ
(x) = v
λ
(x) −v(x).
We are going to show that
w
λ
(x) ≥ 0 ∀ x ∈ Σ
λ
. (8.40)
We want to show that the above inequality holds for −
1
≤ λ ≤ for some
0 <
1
< . This choice of
1
is to guarantee that when the plane T
λ
moves to
λ = −
1
, the reﬂection of Σ
λ
.about the plane T
λ
still lies in region D.
(8.40) is true when λ is less but close to , because Σ
λ
is a narrow region
and f(x, v) is Lipschitz continuous in v (Actually,
∂f
∂v
= R(x)). For detailed
argument, the readers may see the ﬁrst example in Section 5.
Now we decrease λ, that is, move the plane T
λ
toward the left as long as
the inequality (8.40) remains true. We show that the moving of planes can be
carried on provided
f(x, v(x)) ≤ f(x
λ
, v(x)) for x = (x
1
, y) ∈ D with x
1
> λ > −
1
. (8.41)
In fact, it is easy to see that v
λ
satisﬁes the equation
´v
λ
+ψ(y) +f(x
λ
, v
λ
) = 0,
and hence w
λ
satisﬁes
−´w
λ
= f(x
λ
, v
λ
) −f(x
λ
, v) +f(x
λ
, v) −f(x, v)
= R(x
λ
)(v
λ
−v) +f(x
λ
, v) −f(x, v)
= R(x
λ
)w
λ
+f(x
λ
, v) −f(x, v).
If (8.41) holds, then we have
8.3 Method of Moving Planes in a Local Way 221
−´w
λ
−R(x
λ
)w
λ
(x) ≥ 0. (8.42)
This enables us to apply the maximum principle to w
λ
.
Let
λ
o
= inf¦λ [ w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
¦.
If λ
o
> −
1
, then since v(x) > 0 in D, by virtue of (8.42) and the maximum
principle, we have
w
λo
(x) > 0, ∀x ∈ Σ
λo
and
∂w
λo
∂x
1
< 0, ∀x ∈ T
λo
∩ Σ
λo
.
Then, similar to the argument in the examples in Section 5, one can derive a
contradiction.
It is elementary to see that (8.41) holds if
∂f(x, v)
∂x
1
≤ 0 in the set
¦x ∈ D [ R(x) < 0 or x
1
> −2
1
¦.
To estimate
∂f
∂x
1
, we use
∂f
∂x
1
= 2m´φ(y) +u
p−1
¦
∂R
∂x
1
u +R(x)p[
∂w
∂x
1
+C
0
m+ 2m( +φ(y) −x
1
)]¦.
First, we notice that the uniform convexity of the image of Γ near the
origin implies
´φ(y) ≤ −a
0
< 0. (8.43)
for some constant a
0
.
We consider the following two possibilities.
a) R(x) ≤ 0. Choose C
0
suﬃciently large, so that
∂w
∂x
1
+C
0
m ≥ 0
Noticing that
∂R
∂x1
≤ 0 and ( +φ(y) −x
1
) ≥ 0 in D, we have
∂f
∂x
1
≤ 0.
b) R(x) > 0, x = (x
1
, y) with x
1
> −2
1
.
In the part where u ≥ 1, we use u to control
R(x)/
∂R
∂x
1
p[
∂w
∂x
1
+C
0
m+ 2m( +φ(y) −x
1
)].
More precisely, we write
∂f
∂x
1
= 2m´φ(y)+u
p−1
∂R
∂x
1
u +
R(x)/
∂R
∂x
1
p[
∂w
∂x
1
+C
0
m+ 2m( +φ(y) −x
1
)]
.
222 8 Methods of Moving Planes and of Moving Spheres
Choose
1
suﬃciently small, such that
[
∂R
∂x
1
[ ∼ [R[.
Then by condition (8.33) on R, we can make R(x)/
∂R
∂x1
arbitrarily small by a
small choice of
1
, and therefore arrive at
∂f
∂x1
≤ 0.
In the part where u ≤ 1, in virtue of (8.43), we can use 2m´φ(y) to control
Rp[
∂w
∂x
1
+C
0
m+ 2m( +φ(y) −x
1
)].
Here again we use the smallness of R.
So far, our conclusion is: The method of moving planes can be carried on
up to λ = −
1
. More precisely, for any λ between −
1
and , the inequality
(8.40) is true.
Step 4. Deriving the a priori bound
Inequality (8.40) implies that, in a small neighborhood of the origin, the
function v(x) is monotone decreasing in x
1
direction. A similar argument
will show that this remains true if we rotate the x
1
axis by a small angle.
Therefore, for any point x
0
∈ Γ, one can ﬁnd a cone ∆
x0
with x
0
as its vertex
and staying to the left of x
0
, such that
v(x) ≥ v(x
0
) ∀x ∈ ∆
x0
(8.44)
Noticing that w(x) is bounded in D, (8.44) leads immediately to
u(x) +C
6
≥ u(x
0
) ∀x ∈ ∆
x0
(8.45)
More generally, by a similar argument, one can show that (8.45) is true
for any point x
0
in a small neighborhood of Γ. Furthermore, the intersection
of the cone ∆
x0
with the set ¦x[R(x) ≥
δ0
2
¦ has a positive measure and the
lower bound of the measure depends only on δ
0
and the C
1
norm of R. Now
the a priori bound of the solutions is a consequence of (8.45) and an integral
bound on u, which can be derived from Lemma 8.3.2.
This completes the proof of Theorem 8.3.1.
8.4 Method of Moving Spheres
8.4.1 The Background
Given a function K(x) on the two dimensional standard sphere S
2
, the well
known Nirenberg problem is to ﬁnd conditions on K(x), so that it can be
realized as the Gaussian curvature of some conformally related metric. This
is equivalent to solving the following nonlinear elliptic equation
8.4 Method of Moving Spheres 223
−´u + 1 = K(x)e
2u
(8.46)
on S
2
. Where ´ is the Laplacian operator associated to the standard metric.
On the higher dimensional sphere S
n
, a similar problem was raised by
Kazdan and Warner:
Which functions R(x) can be realized as the scalar curvature of some
conformally related metrics ?
The problem is equivalent to the existence of positive solutions of another
elliptic equation
−´u +
n(n −2)
4
u =
n −2
4(n −1)
R(x)u
n+2
n−2
(8.47)
For a unifying description, on S
2
, we let R(x) = 2K(x) be the scalar curvature,
then (8.46) is equivalent to
−´u + 2 = R(x)e
u
. (8.48)
Both equations are socalled ‘critical’ in the sense that lack of compactness
occurs. Besides the obvious necessary condition that R(x) be positive some
where, there are other wellknown obstructions found by Kazdan and Warner
[KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions
are:
S
n
X(R)dA
g
= 0 (8.49)
where dA
g
is the volume element of the metric e
u
g
o
and u
4
n−2
g
o
for n = 2 and
for n ≥ 3 respectively. The vector ﬁeld X in (8.49) is a conformal Killing ﬁeld
associated to the standard metric g
o
on S
n
. In the following, we call these
necessary conditions KazdanWarner type conditions.
These conditions give rise to many examples of R(x) for which (8.48) and
(8.47) have no solution. In particular, a monotone rotationally symmetric
function R admits no solution.
Then for which R, can one solve the equations? What are the necessary
and suﬃcient conditions for the equations to have a solution? These are very
interesting problems in geometry.
In recent years, various suﬃcient conditions were obtained by many au
thors (For example, see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2]
[CC] [CS] [ES] [Ha] [Ho] [KW1] [Kw2] [Li1] [Li2] [Mo] and the references
therein.) However, there are still gaps between those suﬃcient conditions and
the necessary ones. Then one may naturally ask: ‘ Are the necessary condi
tions of KazdanWarner type also suﬃcient?’ This question has been open for
many years. In [CL3], we gave a negative answer to this question.
To ﬁnd necessary and suﬃcient conditions, it is natural to begin with
functions having certain symmetries. A pioneer work in this direction is [Mo]
by Moser. He showed that, for even functions R on S
2
, the necessary and
suﬃcient condition for (8.48) to have a solution is R being positive somewhere.
224 8 Methods of Moving Planes and of Moving Spheres
Note that, in the class of even functions, (1.2) is satisﬁed automatically. Thus
one can interpret Moser’s result as:
In the class of even functions, the KazdanWarner type conditions are the
necessary and suﬃcient conditions for (8.48) to have a solution.
A simple question was asked by Kazdan [Ka]:
What are the necessary and suﬃcient conditions for rotationally symmetric
functions?
To be more precise, for rotationally symmetric functions R = R(θ) where
θ is the distance from the north pole, the KazdanWarner type conditions take
the form:
R > 0 somewhere and R
changes signs (8.50)
In [XY], Xu and Yang found a family of rotationally symmetric functions
R
satisfying conditions (8.50), for which problem (8.48) has no rotationally
symmetric solution. Even though one didn’t know whether there were any
other nonsymmetric solutions for these functions R
, this result is still very
interesting, because it suggests that (8.50) may not be the suﬃcient condition.
In our earlier paper [CL3], we present a class of rotationally symmetric
functions satisfying (8.50) for which problem (8.48) has no solution at all. And
thus, for the ﬁrst time, pointed out that the Kazdan Warner type conditions
are not suﬃcient for (8.48) to have a solution. First, we proved the non
existence of rotationally symmetric solutions:
Proposition 8.4.1 Let R be rotationally symmetric. If
R is monotone in the region where R > 0, and R ≡ C, (8.51)
Then problem (8.48) and (8.47) has no rotationally symmetric solution.
This generalizes Xu and Yang’s result, since their family of functions R
satisfy (8.51). We also cover the higher dimensional cases.
Although we believed that for all such functions R, there is no solution at
all, we were not able to prove it at that time. However, we could show this
for a family of such functions.
Proposition 8.4.2 There exists a family of functions R satisfying the Kazdan
Warner type conditions (8.50), for which the problem (8.48) has no solution
at all.
At that time, we were not able to obtain the counter part of this result in
higher dimensions.
In [XY], Xu and Yang also proved:
Proposition 8.4.3 Let R be rotationally symmetric. Assume that
i) R is nondegenerate
ii)
R
changes signs in the region where R > 0 (8.52)
Then problem (8.48) has a solution.
8.4 Method of Moving Spheres 225
The above results and many other existence results tend to lead people
to believe that what really counts is whether R
changes signs in the region
where R > 0. And we conjectured in [CL3] that for rotationally symmetric
R, condition (8.52), instead of (8.50), would be the necessary and suﬃcient
condition for (8.48) or (8.47) to have a solution.
The main objective of this section is to present the proof on the necessary
part of the conjecture, which appeared in our later paper [CL4]. In [CL3], to
prove Proposition 8.4.2, we used the method of ‘moving planes’ to show that
the solutions of (8.48) are rotationally symmetric. This method only work
for a special family of functions satisfying (8.51). It does not work in higher
dimensions. In [CL4], we introduce a new idea. We call it the method of ‘mov
ing spheres’. It works in all dimensions, and for all functions satisfying (8.51).
Instead of showing the symmetry of the solutions, we obtain a comparison for
mula which leads to a direct contradiction. In an earlier work [P] , a similar
method was used by P. Padilla to show the radial symmetry of the solutions
for some nonlinear Dirichlet problems on annuluses.
The following are our main results.
Theorem 8.4.1 Let R be continuous and rotationally symmetric. If
R is monotone in the region where R > 0, and R ≡ C, (8.53)
Then problems (8.48) and (8.47) have no solution at all.
We also generalize this result to a family of nonsymmetric functions.
Theorem 8.4.2 Let R be a continuous function. Let B be a geodesic ball
centered at the North Pole x
n+1
= 1. Assume that
i) R(x) ≥ 0, R is nondecreasing in x
n+1
direction, for x ∈ B ;
ii) R(x) ≤ 0, for x ∈ S
n
` B;
Then problems (8.48) and (8.47) have no solution at all.
Combining our Theorem 1 with Xu and Yang’s Proposition 3, one can
obtain a necessary and suﬃcient condition in the nondegenerate case:
Corollary 8.4.1 Let R be rotationally symmetric and nondegenerate in the
sense that R
= 0 whenever R
= 0. Then the necessary and suﬃcient condi
tion for (8.48) to have a solution is
R > 0 somewhere and R
changes signs in the region where R > 0
8.4.2 Necessary Conditions
In this subsection, we prove Theorem 8.4.1 and Theorem 8.4.2.
For convenience, we make a stereographic projection from S
n
to R
n
. Then
it is equivalent to consider the following equation in Euclidean space R
n
:
226 8 Methods of Moving Planes and of Moving Spheres
−´u = R(r)e
u
, x ∈ R
2
. (8.54)
−´u = R(r)u
τ
u > 0, x ∈ R
n
, n ≥ 3, (8.55)
with appropriate asymptotic growth of the solutions at inﬁnity
u ∼ −4 ln [x[ for n=2 ; u ∼ C[x[
2−n
for n ≥ 3 (8.56)
for some C > 0. Here ´ is the Euclidean Laplacian operator, r = [x[ and
τ =
n+2
n−2
is the socalled critical Sobolev exponent. The function R(r) is the
projection of the original R in equations (8.48) and (8.47); and this projection
does not change the monotonicity of the function. The function R is also
bounded and continuous, and we assume these throughout the subsection.
For a radial function R satisfying (8.53), there are three possibilities:
i) R is nonpositive everywhere,
ii) R ≡ C is nonnegative everywhere and monotone,
iii) R > 0 and nonincreasing for r < r
o
, R ≤ 0 for r > r
o
; or vice versa.
The ﬁrst two cases violate the KazdanWarner type conditions, hence there
is no solution. Thus, we only need to show that in the remaining case, there
is also no solution. Without loss of generality, we may assume that:
R(r) > 0, R
(r) ≤ 0 for r < 1; R(r) ≤ 0 for r ≥ 1. (8.57)
The proof of Theorem 8.4.1 is based on the following comparison formulas.
Lemma 8.4.1 Let R be a continuous function satisfying (8.57). Let u be a
solution of (8.54) and (8.56) for n = 2, then
u(λx) > u(
λx
[x[
2
) −4 ln [x[ ∀x ∈ B
1
(0), 0 < λ ≤ 1 (8.58)
Lemma 8.4.2 Let R be a continuous function satisfying (8.57). Let u be a
solution of (8.55) and (8.56) for n > 2, then
u(λx) > [x[
2−n
u(
λx
[x[
2
) ∀x ∈ B
1
(0), 0 < λ ≤ 1 (8.59)
We ﬁrst prove Lemma 8.4.2. A similar idea will work for Lemma 8.4.1.
Proof of Lemma 8.4.2. We use a new idea called the method of
‘moving spheres ’. Let x ∈ B
1
(0), then λx ∈ B
λ
(0). The reﬂection point of λx
about the sphere ∂B
λ
(0) is
λx
x
2
. We compare the values of u at those pairs
of points λx and
λx
x
2
. In step 1, we start with the unit sphere and show that
(8.59) is true for λ = 1. This is done by Kelvin transform and the strong
maximum principle. Then in step 2, we move (shrink) the sphere ∂B
λ
(0)
8.4 Method of Moving Spheres 227
towards the origin. We show that one can always shrink ∂B
λ
(0) a little bit
before reaching the origin. By doing so, we establish the inequality for all
x ∈ B
1
(0) and λ ∈ (0, 1].
Step 1. In this step, we establish the inequality for x ∈ B
1
(0), λ =
1. We show that
u(x) > [x[
2−n
u(
x
[x[
2
) ∀x ∈ B
1
(0). (8.60)
Let v(x) = [x[
2−n
u(
x
x
2
) be the Kelvin transform. Then it is easy to verify
that v satisﬁes the equation
−´v = R(
1
r
)v
τ
(8.61)
and v is regular.
It follows from (8.57) that ´u < 0, and ´v ≥ 0 in B
1
(0). Thus −´(u −
v) > 0. Applying the maximum principle, we obtain
u > v in B
1
(0) (8.62)
which is equivalent to (8.60).
Step 2. In this step, we move the sphere ∂B
λ
(0) towards λ = 0.
We show that the moving can not be stopped until λ reaches the origin. This
is done again by the Kelvin transform and the strong maximum principle.
Let u
λ
(x) = λ
n
2
−1
u(λx). Then
−´u
λ
= R(λx)u
τ
λ
(x). (8.63)
Let v
λ
(x) = [x[
2−n
u
λ
(
x
x
2
) be the Kelvin transform of u
λ
. Then it is easy
to verify that
−´v
λ
= R(
λ
r
)v
τ
λ
(x). (8.64)
Let w
λ
= u
λ
−v
λ
. Then by (8.63) and (8.64),
´w
λ
+R(
λ
r
)ψ
λ
w
λ
= [R(
λ
r
) −R(λr)]u
τ
λ
(8.65)
where ψ
λ
is some function with values between τu
τ−1
λ
and τv
τ−1
λ
. Taking into
account of assumption (2.4), we have
R(
λ
r
) −R(λr) ≤ 0 for r ≤ 1, λ ≤ 1.
It follows from (8.65) that
´w
λ
+C
λ
(x)w
λ
≤ 0 (8.66)
where C
λ
(x) is a bounded function if λ is bounded away from 0.
228 8 Methods of Moving Planes and of Moving Spheres
It is easy to see that for any λ, strict inequality holds somewhere for (8.66).
Thus applying the strong maximum principle, we know that the inequality
(8.59) is equivalent to
w
λ
≥ 0. (8.67)
From step 1, we know (8.67) is true for (x, λ) ∈ B
1
(0) ¦1¦. Now we decrease
λ. Suppose (8.67) does not hold for all λ ∈ (0, 1] and let λ
o
> 0 be the smallest
number such that the inequality is true for (x, λ) ∈ B
1
(0) [λ
o
, 1]. We will
derive a contradiction by showing that for λ close to and less than λ
o
, the
inequality is still true. In fact, we can apply the strong maximum principle
and then the Hopf lemma to (8.66) for λ = λ
o
to get:
w
λo
> 0 in B
1
and
∂w
λo
∂r
< 0 on ∂B
1
. (8.68)
These combined with the fact that w
λ
≡ 0 on ∂B
1
imply that (8.67) holds for
λ close to and less than λ
o
.
Proof of Lemma 8.4.1. In step 1, we let
v(x) = u(
x
[x[
2
) −4 ln [x[.
In step 2, let
u
λ
(x) = u(λx) + 2 ln λ, v
λ
(x) = u
λ
(
x
[x[
2
) −4 ln [x[.
Then arguing as in the proof of Lemma 8.4.2 we derive the conclusion of
Lemma 8.4.1.
Now we are ready to prove Theorem 8.4.1.
Proof of Theorem 8.4.1. Taking λ to 0 in (8.58), we get ln [x[ > 0
for [x[ < 1 which is impossible. Letting λ→0 in (8.59) and using the fact
that u(0) > 0 we obtain again a contradiction. These complete the proof of
Theorem 8.4.1.
Proof of Theorem 8.4.2. Noting that in the proof of Theorem
8.4.1, we only compare the values of the solutions along the same radial di
rection, one can easily adapt the argument in the proof of Theorem 8.4.1 to
derive the conclusion of Theorem 8.4.2.
8.5 Method of Moving Planes in Integral Forms and
Symmetry of Solutions for an Integral Equation
Let R
n
be the n−dimensional Euclidean space, and let α be a real number
satisfying 0 < α < n. Consider the integral equation
u(x) =
R
n
1
[x −y[
n−α
u(y)
(n+α)/(n−α)
dy. (8.69)
8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation 229
It arises as an EulerLagrange equation for a functional under a constraint
in the context of the HardyLittlewoodSobolev inequalities. In [L], Lieb clas
siﬁed the maximizers of the functional, and thus obtained the best constant
in the HLS inequalities. He then posed the classiﬁcation of all the critical
points of the functional – the solutions of the integral equation (8.69) as an
open problem.
This integral equation is also closely related to the following family of
semilinear partial diﬀerential equations
(−∆)
α/2
u = u
(n+α)/(n−α)
. (8.70)
In the special case n ≥ 3 and α = 2, it becomes
−∆u = u
(n+2)/(n−2)
. (8.71)
As we mentioned in Section 6, solutions to this equation were studied by
Gidas, Ni, and Nirenberg [GNN]; Caﬀarelli, Gidas, and Spruck [CGS]; Chen
and Li [CL]; and Li [Li]. Recently, Wei and Xu [WX] generalized this result
to the solutions of (8.70) with α being any even numbers between 0 and n.
Apparently, for other real values of α between 0 and n, equation (8.70) is
also of practical interest and importance. For instance, it arises as the Euler
Lagrange equation of the functional
I(u) =
R
n
[(−∆)
α
4
u[
2
dx/(
R
n
[u[
2n
n−α
dx)
n−α
n
.
The classiﬁcation of the solutions would provide the best constant in the
inequality of the critical Sobolev imbedding from H
α
2
(R
n
) to L
2n
n−α
(R
n
):
(
R
n
[u[
2n
n−α
dx)
n−α
n
≤ C
R
n
[(−∆)
α
4
u[
2
dx.
As usual, here (−∆)
α/2
is deﬁned by Fourier transform. We can show that
(see [CLO]), all the solutions of partial diﬀerential equation (8.70) satisfy our
integral equation (8.69), and vise versa. Therefore, to classify the solutions
of this family of partial diﬀerential equations, we only need to work on the
integral equations (8.69).
In order either the HardyLittlewoodSobolev inequality or the above men
tioned critical Sobolev imbedding to make sense, u must be in L
2n
n−α
(R
n
). Un
der this assumption, we can show that (see [CLO]) a positive solution u is in
fact bounded, and therefore possesses higher regularity. Furthermore, by using
the method of moving planes, we can show that if a solution is locally L
2n
n−α
,
then it is in L
2n
n−α
(R
n
). Hence, in the following, we call a solution u regular if
it is locally L
2n
n−α
. It is interesting to notice that if this condition is violated,
then a solution may not be bounded. A simple example is u =
1
x
(n−α)/2
,
which is a singular solution. We studied such solutions in [CLO1].
We will use the method of moving planes to prove
230 8 Methods of Moving Planes and of Moving Spheres
Theorem 8.5.1 Every positive regular solution u(x) of (8.69) is radially
symmetric and decreasing about some point x
o
and therefore assumes the form
c(
t
t
2
+[x −x
o
[
2
)
(n−α)/2
, (8.72)
with some positive constants c and t.
To better illustrate the idea, we will ﬁrst prove the Theorem under stronger
assumption that u ∈ L
2n
n−2
(R
n
). Then we will show how the idea of this proof
can be extended under weaker condition that u is only locally in L
2n
n−2
.
For a given real number λ, deﬁne
Σ
λ
= ¦x = (x
1
, . . . , x
n
) [ x
1
≤ λ¦,
and let x
λ
= (2λ−x
1
, x
2
, . . . , x
n
) and u
λ
(x) = u(x
λ
). (Again see the previous
Figure 1.)
Lemma 8.5.1 For any solution u(x) of (1.1), we have
u(x) −u
λ
(x) =
Σ
λ
(
1
[x −y[
n−α
−
1
[x
λ
−y[
n−α
)(u(y)
n+α
n−α
−u
λ
(y)
n+α
n−α
)dy,
(8.73)
It is also true for v(x) =
1
[x[
n−α
u(
x
[x[
2
), the Kelvin type transform of u(x),
for any x = 0.
Proof. Since [x −y
λ
[ = [x
λ
−y[, we have
u(x) =
Σ
λ
1
[x −y[
n−α
u(y)
n+α
n−α
dy +
Σ
λ
1
[x
λ
−y[
n−α
u
λ
(y)
n+α
n−α
dy, and
u(x
λ
) =
Σ
λ
1
[x
λ
−y[
n−α
u(y)
n+α
n−α
dy +
Σ
λ
1
[x −y[
n−α
u
λ
(y)
n+α
n−α
.
This implies (8.73). The conclusion for v(x) follows similarly since it satisﬁes
the same integral equation (8.69) for x = 0.
Proof of the Theorem 8.5.1 (Under Stronger Assumption u ∈ L
2n
n−2
(R
n
)).
Let w
λ
(x) = u
λ
(x) − u(x). As in Section 6, we will ﬁrst show that, for λ
suﬃciently negative,
w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
. (8.74)
Then we can start moving the plane T
λ
= ∂Σ
λ
from near −∞ to the right,
as long as inequality (8.74) holds. Let
λ
o
= sup¦λ ≤ 0 [ w
λ
(x) ≥ 0, ∀x ∈ Σ
λ
¦.
8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation 231
We will show that
λ
o
< ∞, and w
λo
(x) ≡ 0.
Unlike partial diﬀerential equations, here we do not have any diﬀerential
equations for w
λ
. Instead, we will introduce a new idea – the method of moving
planes in integral forms. We will exploit some global properties of the integral
equations.
Step 1. Start Moving the Plane from near −∞.
Deﬁne
Σ
−
λ
= ¦x ∈ Σ
λ
[ w
λ
(x) < 0¦. (8.75)
We show that for suﬃciently negative values of λ, Σ
−
λ
must be empty. By
Lemma 8.5.1, it is easy to verify that
u(x) −u
λ
(x) ≤
Σ
−
λ
1
[x −y[
n−α
−
1
[x
λ
−y[
n−α
[u
τ
(y) −u
τ
λ
(y)]dy
≤
Σ
−
λ
1
[x −y[
n−α
[u
τ
(y) −u
τ
λ
(y)]dy
= τ
Σ
−
λ
1
[x −y[
n−α
ψ
τ−1
λ
(y)[u(y) −u
λ
(y)]dy
≤ τ
Σ
−
λ
1
[x −y[
n−α
u
τ−1
(y)[u(y) −u
λ
(y)]dy,
where ψ
λ
(x) is valued between u
λ
(x) and u(x) by the Mean Value Theorem,
and since on Σ
−
λ
, u
λ
(x) < u(x), we have ψ
λ
(x) ≤ u(x).
We ﬁrst apply the classical HardyLittlewoodSobolev inequality (its equiv
alent form) to the above to obtain, for any q >
n
n−α
:
w
λ

L
q
(Σ
−
λ
)
≤ Cu
τ−1
w
λ

L
nq/(n+αq)
(Σ
−
λ
)
=
Σ
λ
−
[u
τ−1
(x)[
nq
n+αq
[w
λ
(x)[
nq
n+αq
dx
n+αq
nq
.
Then use the H¨older inequality to the last integral in the above. First choose
s =
n +αq
n
so that
nq
n +αq
s = q,
then choose
r =
n +αq
αq
232 8 Methods of Moving Planes and of Moving Spheres
to ensure
1
r
+
1
s
= 1.
We thus arrive at
w
λ

L
q
(Σ
−
λ
)
≤ C¦
Σ
−
λ
u
τ+1
(y) dy¦
α
n
w
λ

L
q
(Σ
−
λ
)
≤ C¦
Σ
λ
u
τ+1
(y) dy¦
α
n
w
λ

L
q
(Σ
−
λ
)
By condition that u ∈ L
τ+1
(R
n
), we can choose N suﬃciently large, such
that for λ ≤ −N, we have
C¦
Σ
λ
u
τ+1
(y) dy¦
α
n
≤
1
2
.
Now (8.76) implies that w
λ

L
q
(Σ
−
λ
)
= 0, and therefore Σ
−
λ
must be measure
zero, and hence empty by the continuity of w
λ
(x). This veriﬁes (8.74) and
hence completes Step 1. The readers can see that here we have actually used
Corollary 7.5.1, with Ω = Σ
λ
, a region near inﬁnity.
Step 2. We now move the plane T
λ
= ¦x [ x
1
= λ¦ to the right as long as
(8.74) holds. Let λ
o
as deﬁned before, then we must have λ
o
< ∞. This can
be seen by applying a similar argument as in Step 1 from λ near +∞.
Now we show that
w
λo
(x) ≡ 0 , ∀ x ∈ Σ
λo
. (8.76)
Otherwise, we have w
λo
(x) ≥ 0, but w
λo
(x) ≡ 0 on Σ
λo
; we show that the
plane can be moved further to the right. More precisely, there exists an
depending on n, α, and the solution u(x) itself such that w
λ
(x) ≥ 0 on Σ
λ
for
all λ in [λ
o
, λ
o
+).
By Lemma 8.5.1, we have in fact w
λo
(x) > 0 in the interior of Σ
λo
. Let
Σ
−
λo
= ¦x ∈ Σ
λo
[ w
λo
(x) ≤ 0¦. Then obviously, Σ
−
λo
has measure zero, and
lim
λ→λo
Σ
−
λ
⊂ Σ
−
λo
. From the ﬁrst inequality of (8.76), we deduce,
w
λ

L
q
(Σ
−
λ
)
≤ C¦
Σ
−
λ
u
τ+1
(y) dy¦
α
n
w
λ

L
q
(Σ
−
λ
)
(8.77)
Condition u ∈ L
τ+1
(R
n
) ensures that one can choose suﬃciently small,
so that for all λ in [λ
o
, λ
o
+),
C¦
Σ
−
λ
u
τ+1
(y) dy¦
α
n
≤
1
2
.
Now by (8.77), we have w
λ

L
q
(Σ
−
λ
)
= 0, and therefore Σ
−
λ
must be empty.
Here we have actually applied Corollary 7.5.1 with Ω = Σ
−
λ
. For λ suﬃciently
close to λ
o
, Ω is contained in a very narrow region.
8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation 233
Proof of the Theorem 8.5.1 ( under Weaker Assumption that u is only
locally in L
τ+1
).
Outline. Since we do not assume any integrability condition of u(x) near
inﬁnity, we are not able to carry on the method of moving planes directly on
u(x). To overcome this diﬃculty, we consider v(x), the Kelvin type transform
of u(x). It is easy to verify that v(x) satisﬁes the same equation (8.69), but
has a possible singularity at origin, where we need to pay some particular
attention. Since u is locally L
τ+1
, it is easy to see that v(x) has no singularity
at inﬁnity, i.e. for any domain Ω that is a positive distance away from the
origin,
Ω
v
τ+1
(y) dy < ∞. (8.78)
Let λ be a real number and let the moving plane be x
1
= λ. We compare
v(x) and v
λ
(x) on Σ
λ
` ¦0¦. The proof consists of three steps. In step 1, we
show that there exists an N > 0 such that for λ ≤ −N, we have
v(x) ≤ v
λ
(x), ∀x ∈ Σ
λ
` ¦0¦. (8.79)
Thus we can start moving the plane continuously from λ ≤ −N to the right as
long as (8.79) holds. If the plane stops at x
1
= λ
o
for some λ
o
< 0, then v(x)
must be symmetric and monotone about the plane x
1
= λ
o
. This implies that
v(x) has no singularity at the origin and u(x) has no singularity at inﬁnity.
In this case, we can carry on the moving planes on u(x) directly to obtain
the radial symmetry and monotonicity. Otherwise, we can move the plane all
the way to x
1
= 0, which is shown in step 2. Since the direction of x
1
can
be chosen arbitrarily, we deduce that v(x) must be radially symmetric and
decreasing about the origin. We will show in step 3 that, in any case, u(x) can
not have a singularity at inﬁnity, and hence both u and v are in L
τ+1
(R
n
).
Step 1 and Step 2 are entirely similar to the proof under stronger assump
tion. What we need to do is to replace u there by v and Σ
λ
there by Σ
λ
`¦0¦.
Hence we only show Step 3 here.
Step 3. Finally, we showed that u has the desired asymptotic behavior at
inﬁnity, i.e., it satisﬁes
u(x) = O(
1
[x[
n−2
).
Suppose in the contrary, let x
1
and x
2
be any two points in R
n
and let x
o
be
the midpoint of the line segment x
1
x
2
. Consider the Kelvin type transform
centered at x
o
:
v(x) =
1
[x −x
o
[
n−α
u(
x −x
o
[x −x
o
[
2
).
Then v(x) has a singularity at x
o
. Carry on the arguments as in Steps 1 and 2,
we conclude that v(x) must be radially symmetric about x
o
, and in particular,
u(x
1
) = u(x
2
). Since x
1
and x
2
are any two points in R
n
, u must be constant.
This is impossible. Similarly, The continuity and higher regularity of u follows
234 8 Methods of Moving Planes and of Moving Spheres
from standard theory on singular integral operators. This completes the proof
of the Theorem.
A
Appendices
A.1 Notations
A.1.1 Algebraic and Geometric Notations
1. A = (a
ij
) is a an mn matrix with ij
th
entry a
ij
.
2. det(a
ij
) is the determinant of the matrix (a
ij
).
3. A
T
= transpose of the matrix A.
4. R
n
= ndimensional real Euclidean space with a typical point x =
(x
1
, , x
n
).
5. Ω usually denotes and open set in R
n
.
6.
¯
Ω is the closure of Ω.
7. ∂Ω is the boundary of Ω.
8. For x = (x
1
, , x
n
) and y = (y
1
, , y
n
) in R
n
,
x y =
n
¸
i=1
x
i
y
i
, [x[ =
n
¸
i=1
x
2
i
1
2
.
9. [x −y[ is the distance of the two points x and y in R
n
.
10. B
r
(x
o
) = ¦x ∈ R
n
[ [x − x
o
[ < r¦ is the ball of radius r centered at the
point x
o
in R
n
.
11. S
n
is the sphere of radius one centered at the origin in R
n+1
, i.e. the
boundary of the ball of radius one centered at the origin:
B
1
(0) = ¦x ∈ R
n+1
[ [x[ < 1¦.
A.1.2 Notations for Functions and Derivatives
1. For a function u : Ω ⊂ R
n
→R
1
, we write
u(x) = u(x
1
, , x
n
) , x ∈ Ω.
We say u is smooth if it is inﬁnitely diﬀerentiable.
236 A Appendices
2. For two functions u and v, u ≡ v means u is identically equal to v.
3. We set
u := v
to deﬁne u as equaling v.
4. We usually write u
xi
for
∂u
∂xi
, the ﬁrst partial derivative of u with respect
to x
i
. Similarly
u
xixj
=
∂
2
u
∂x
i
∂x
j
u
xixjx
k
=
∂
3
u
∂x
i
∂x
j
∂x
k
.
5.
D
α
u(x) :=
∂
α
u(x)
∂x
α1
1
∂x
αn
n
where α = (α
1
, , α
n
) is a multiindex of order
[α[ = α
1
+α
2
+ α
n
.
6. For a nonnegative integer k,
D
k
u(x) := ¦D
α
u(x) [ [α[ = k¦
the set of all partial derivatives of order k.
If k = 1, we regard the elements of Du as being arranged in a vector
Du := (u
x1
, , u
xn
) = u,
the gradient vector.
If k = 2, we regard the elements of D
2
u being arranged in a matrix
D
2
u :=
¸
¸
∂
2
u
∂x1∂x1
∂
2
u
∂x1∂xn
∂
2
u
∂xn∂x1
∂
2
u
∂xn∂xn
¸
,
the Hessian matrix.
7. The Laplacian
´u :=
n
¸
i=1
u
xixi
is the trace of D
2
u.
8.
[D
k
u[ :=
¸
¸
α=k
[D
α
u[
2
¸
1/2
.
A.1 Notations 237
A.1.3 Function Spaces
1.
C(Ω) := ¦u : Ω→R
1
[ u is continuous ¦
2.
C(
¯
Ω) := ¦u ∈ C(Ω) [ u is uniformly continuous¦
3.
u
C(
¯
Ω)
:= sup
x∈Ω
[u(x)[
4.
C
k
(Ω) := ¦u : Ω→R
1
[ u is ktimes continuously diﬀerentiable ¦
5.
C
k
(
¯
Ω) := ¦u ∈ C
k
(Ω) [ D
α
u is uniformly continuous for all [α[ ≤ k¦
6.
u
C
k
(
¯
Ω)
:=
¸
α≤k
D
α
u
C(
¯
Ω)
7. Assume Ω is an open subset in R
n
and 0 < α ≤ 1. If there exists a
constant C such that
[u(x) −u(y)[ ≤ C[x −y[
α
, ∀ x, y ∈ Ω,
then we say that u is H¨ oder continuous in Ω.
8. The α
th
H¨ oder seminorm of u is
[u]
C
0,α
(
¯
Ω)
:= sup
x=y
[u(x) −u(y)[
[x −y[
α
9. The α
th
H¨ oder norm is
u
C
0,α
(
¯
Ω)
:= u
C(
¯
Ω)
+ [u]
C
0,α
(
¯
Ω)
10. The H¨ oder space C
k,α
(
¯
Ω) consists of all functions u ∈ C
k
(
¯
Ω) for which
the norm
u
C
k,α
(
¯
Ω)
:=
¸
β≤k
D
β
u
C(
¯
Ω)
+
¸
β=k
[D
β
u]
C
0,α
(
¯
Ω)
is ﬁnite.
11. C
∞
(Ω) is the set of all inﬁnitely diﬀerentiable functions
12. C
k
0
(Ω) denotes the subset of functions in C
k
(Ω) with compact support in
Ω.
238 A Appendices
13. L
p
(Ω) is the set of Lebesgue measurable functions u on Ω with
u
L
p
(Ω)
:=
Ω
[u(x)[
p
dx
1/p
< ∞.
14. L
∞
(Ω) is the set of Lebesgue measurable functions u with
u
L
∞
(Ω)
:= ess sup
Ω
[u[ < ∞.
15. L
p
loc
(Ω) denotes the set of Lebesgue measurable functions u on Ω whose
p
th
power is locally integrable, i.e. for any compact subset K ⊂ Ω,
K
[u(x)[
p
dx < ∞.
16. W
k,p
(Ω) denotes the Sobolev space consisting functions u with
u
W
k,p
(Ω)
:=
¸
0≤α≤k
Ω
[D
α
u[
p
dx
1/p
< ∞.
17. H
k,p
(Ω) is the completion of C
∞
(Ω) under the norm u
W
k,p
(Ω)
.
18. For any open subset Ω ⊂ R
n
, any nonnegative integer k and any p ≥ 1,
it is shown (See Adams [Ad]) that
H
k,p
(Ω) = W
k,p
(Ω).
19. For p = 2, we usually write
H
k
(Ω) = H
k,2
(Ω).
A.1.4 Notations for Estimates
1. In the process of estimates, we use C to denote various constants that can
be explicitly computed in terms of known quantities. The value of C may
change from line to line in a given computation.
2. We write
u = O(v) as x→x
o
,
provided there exists a constant C such that
[u(x)[ ≤ C[v(x)[
for x suﬃciently close to x
o
.
3. We write
u = o(v) as x→x
o
,
provided
[u(x)[
[v(x)[
→0 as x→x
o
.
A.1 Notations 239
A.1.5 Notations from Riemannian Geometry
1. M denotes a diﬀerentiable manifold.
2. T
p
M is the tangent space of M at point p.
3. T
∗
p
M is the cotangent space of M at point p.
4. A Riemannian metric g can be expressed in a matrix
g =
¸
g
11
g
1n
g
n1
g
nn
¸
where in a local coordinates
g
ij
(p) =<
∂
∂x
i
p
,
∂
∂x
j
p
>
p
.
5. (M, g) denotes an ndimensional manifold M with Riemannain metric g.
In a local coordinates, the length element
ds :=
¸
n
¸
i,j=1
g
ij
dx
i
dx
j
¸
1/2
.
The volume element
dV :=
det(g
ij
)dx
1
dx
n
.
6. K(x) denotes the Gaussian curvature, and R(x) scalar curvature of a
manifold M at point x.
7. For two diﬀerentiable vector ﬁelds X and Y on M, the Lie Bracket is
[X, Y ] := XY −Y X.
8. D denotes an aﬃne connection on a diﬀerentiable manifold M.
9. A vector ﬁeld V (t) along a curve c(t) ∈ M is parallel if
DV
dt
= 0.
10. Given a Riemannian manifold M with metric <, >, there exists a unique
connection D on M that is compatible with the metric <, >, i.e.
< X, Y >
c(t)
= constant ,
for any pair of parallel vector ﬁelds X and Y along any smooth curve c(t).
11. In a local coordinates (x
1
, , x
n
), this unique Riemannian connection
can be expressed as
D ∂
∂x
i
∂
∂x
j
=
¸
k
Γ
k
ij
∂
∂x
k
,
where Γ
k
ij
is the Christoﬀel symbols.
240 A Appendices
12. We denote
i
= D ∂
∂x
i
.
13. The summation convention. We write, for instance,
X
i
∂
∂x
i
:=
¸
i
X
i
∂
∂x
i
,
Γ
j
ik
dx
k
:=
¸
k
Γ
j
ik
dx
k
,
and so on.
14. A smooth function f(x) on M is a (0, 0)tensor. f is a (1, 0)tensor
deﬁned by
f =
i
fdx
i
=
∂f
∂x
i
dx
i
.
2
f is a (2, 0)tensor:
2
f =
j
∂f
∂x
i
dx
i
⊗dx
j
=
∂
2
f
∂x
i
∂x
j
−Γ
k
ij
∂f
∂x
k
dx
i
⊗dx
j
.
In the Riemannian context,
2
f is called the Hessian of f and denoted
by Hess(f). Its ij
th
component is
(
2
f)
ij
=
∂
2
f
∂x
i
∂x
j
−Γ
k
ij
∂f
∂x
k
.
15. The trace of the Hessian matrix ((
2
f)
ij
) is deﬁned to be the Laplace
Beltrami operator ´.
´ =
1
[g[
n
¸
i,j=1
∂
∂x
i
(
[g[ g
ij
∂
∂x
j
)
where [g[ is the determinant of (g
ij
).
16. We abbreviate:
M
uvdV
g
:=
M
< u, v > dV
g
where
< u, v >= g
ij
i
u
j
v = g
ij
∂u
∂x
i
∂v
∂x
j
is the scalar product associated with g for 1forms.
A.2 Inequalities 241
17. The norm of the k
th
covariant derivative of u, [
k
u[ is deﬁned in a local
coordinates chart by
[
k
u[
2
= g
i1j1
g
i
k
j
k
(
k
u)
i1···i
k
(
k
u)
j1···j
k
.
In particular, we have
[
1
u[
2
= [u[
2
= g
ij
(u)
i
(u)
j
= g
ij
i
u
j
u = g
ij
∂u
∂x
i
∂u
∂x
j
.
18. (
p
k
(M) is the space of C
∞
functions on M such that
M
[
j
u[
p
dV
g
< ∞, ∀ j = 0, 1, , k.
19. The Sobolev space H
k,p
(M) is the completion of (
p
k
(M) with respect to
the norm
u
H
k,p
(M)
=
k
¸
j=0
M
[
j
u[
p
dV
g
1/p
.
A.2 Inequalities
Young’s Inequality. Assume 1 < p, q < ∞,
1
p
+
1
q
= 1. Then for any a, b > 0,
it holds
ab ≤
a
p
p
+
b
q
q
(A.1)
Proof. We will use the convexity of the exponential function e
x
. Let x
1
< x
2
be any two point in R
1
. Then since
1
p
+
1
q
= 1,
1
p
x
1
+
1
q
x
2
is a point between
x
1
and x
2
. By the convexity of e
x
, we have
e
1
p
x1+
1
q
x2
≤
1
p
e
x1
+
1
q
e
x2
. (A.2)
Taking x
1
= ln a
p
and x
2
= ln b
q
in (A.2), we arrive at inequality (A.1). ¯ .
CauchySchwarz Inequality.
[x y[ ≤ [x[[y[ , ∀ x, y ∈ R
n
.
Proof. For any real number λ > 0, we have
0 ≤ [x ±λy[
2
= [x[
2
±2λx y +λ
2
[y[
2
.
It follows that
±x y ≤
1
2λ
[x[
2
+
λ
2
[y[
2
.
The minimum value of the right hand side is attained at λ =
x
y
. At this value
of λ, we arrive at desired inequality. ¯ .
242 A Appendices
H¨ oder’s Inequality. Let Ω be a domain in R
n
. Assume that u ∈ L
p
(Ω)
and v ∈ L
q
(Ω) with 1 ≤ p, q ≤ ∞ and
1
p
+
1
q
= 1. Then
Ω
[uv[dx ≤ u
L
p
(Ω)
v
L
q
(Ω)
. (A.3)
Proof. Let f =
u
u
L
p
(Ω)
and g =
v
v
L
q
(Ω)
. Then obviously, f
L
p
(Ω)
= 1 =
g
L
q
(Ω)
. It follows from the Young’s inequality (A.1),
Ω
[f g[dx ≤
1
p
Ω
[f[
p
dx +
1
q
Ω
[g[
q
dx =
1
p
+
1
q
= 1.
This implies (A.3). ¯ .
Minkowski’s Inequality. Assume 1 ≤ p ≤ ∞. Then for any u, v ∈
L
p
(Ω),
u +v
L
p
(Ω)
≤ u
L
p
(Ω)
+v
L
p
(Ω)
.
Proof. By the above H¨ oder inequality, we have
u +v
p
L
p
(Ω)
=
Ω
[u +v[
p
dx ≤
Ω
[u +v[
p−1
([u[ +[v[)dx
≤
Ω
[u +v[
p
dx
p−1
p
¸
Ω
[u[
p
dx
1
p
+
Ω
[v[
p
dx
1
p
¸
= u +v
p−1
L
p
(Ω)
u
L
p
(Ω)
+v
L
p
(Ω)
.
Then divide both sides by u +v
p−1
L
p
(Ω)
. ¯ .
Interpolation Inequality for L
p
Norms. Assume that u ∈ L
r
(Ω) ∩
L
s
(Ω) with 1 ≤ r ≤ s ≤ ∞. Then for any r ≤ t ≤ s and 0 < θ < 1 such that
1
t
=
θ
r
+
1 −θ
s
(A.4)
we have u ∈ L
t
(Ω), and
u
L
t
(Ω)
≤ u
θ
L
r
(Ω)
u
1−θ
L
s
(Ω)
. (A.5)
Proof. Write
Ω
[u[
t
dx =
Ω
[u[
a
[u[
t−a
dx.
Choose p and q so that
ap = r , (t −a)q = s , and
1
p
+
1
q
= 1. (A.6)
A.3 CalderonZygmund Decomposition 243
Then by H¨ oder inequality, we have
Ω
[u[
t
dx ≤
Ω
[u[
r
dx
a/r
Ω
[u[
s
dx
(t−a)/s
.
Divide through the powers by t, we obtain
u
t
≤ u
a/t
r
u
1−a/t
s
.
Now let θ = a/t, we arrive at desired inequality (A.5). Here the restriction
(A.4) comes from (A.6). ¯ .
A.3 CalderonZygmund Decomposition
Lemma A.3.1 For f ∈ L
1
(R
n
), f ≥ 0, ﬁxed α > 0, ∃ E, G such that
(i) R
n
= E ∪ G, E ∩ G = ∅
(ii) f(x) ≤ α, a.e. x ∈ E
(iii) G =
∞
¸
k=1
Q
k
, ¦Q
k
¦: disjoint cubes s.t.
α <
1
[Q
k
[
Q
k
f(x)dx ≤ 2
n
α
Proof. Since
R
n
f(x)dx is ﬁnite, for a given α > 0, one can pick a cube Q
o
suﬃciently large, such that
Qo
f(x)dx ≤ α[Q
o
[. (A.7)
Divide Q
o
into 2
n
equal subcubes with disjoint interior. Those subcubes
Q satisfying
Q
f(x)dx ≤ α[Q[
are similarly subdivided, and this process is repeated indeﬁnitely. Let O de
note the set of subcubes Q thus obtained that satisfy
Q
f(x)dx > α[Q[.
For each Q ∈ O, let
˜
Q be its predecessor, i.e., Q is one of the 2
n
subcubes of
˜
Q. Then obviously, we have [
˜
Q[/[Q[ = 2
n
, and consequently,
α <
1
[Q[
Q
f(x)dx ≤
1
[Q[
˜
Q
f(x)dx ≤
1
[Q[
α[
˜
Q[ = 2
n
α. (A.8)
Let
244 A Appendices
G =
¸
Q∈Q
Q , and E = Q
o
` G.
Then iii) follows immediately. To see ii), noticing that each point of E lies in
a nested sequence of cubes Q with diameters tending to zero and satisfying
Q
f(x)dx ≤ α[Q[,
now by Lebesgue’s Diﬀerentiation Theorem, we have
f(x) ≤ α , a.e in E. (A.9)
This completes the proof of the Lemma. ¯ .
Lemma A.3.2 Let T be a linear operator from L
p
(Ω)∩L
q
(Ω) into itself with
1 ≤ p < q < ∞. If T is of weak type (p, p) and weak type (q, q), then for any
p < r < q, T is of strong type (r, r). More precisely, if there exist constants
B
p
and B
q
, such that, for any t > 0,
µ
Tf
(t) ≤
B
p
f
p
t
p
and µ
Tf
(t) ≤
B
q
f
q
t
q
, ∀f ∈ L
p
(Ω) ∩ L
q
(Ω),
then
Tf
r
≤ CB
θ
p
B
1−θ
q
f
r
, ∀f ∈ L
p
(Ω) ∩ L
q
(Ω),
where
1
r
=
θ
p
+
1 −θ
q
and C depends only on p, q, and r.
Proof. For any number s > 0, let
g(x) =
f(x) if [f(x)[ ≤ s
0 if [f(x)[ > s.
We split f into the good part g and the bad part b: f(x) = g(x) +b(x). Then
[Tf(x)[ ≤ [Tg(x)[ +[Tb(x)[,
and hence
µ(t) ≡ µ
Tf
(t) ≤ µ
Tg
(
t
2
) +µ
Tb
(
t
2
)
≤
2B
q
t
q
Ω
[g(x)[
q
dx +
2B
p
t
p
Ω
[b(x)[
p
dx.
It follows that
A.3 CalderonZygmund Decomposition 245
Ω
[Tf[
r
dx =
∞
0
µ(t)d(t
r
) = r
∞
0
t
r−1
µ(t)dt
≤ r(2B
q
)
q
∞
0
t
r−1−q
f≤s
[f(x)[
q
dx
dt
+ r(2B
p
)
p
∞
0
t
r−1−p
f>s
[f(x)[
p
dx
dt
≡ r(2B
q
)
q
I
q
+r(2B
p
)
p
I
p
. (A.10)
Let s = t/A for some positive number A to be ﬁxed later. Then
I
q
= A
r−q
∞
0
s
r−1−q
f≤s
[f(x)[
q
dx
ds
= A
r−q
Ω
[f(x)[
q
∞
f
s
r−1−q
ds
dx
=
A
r−q
q −r
Ω
[f(x)[
r
dx. (A.11)
Similarly,
I
p
= A
r−p
∞
0
s
r−1−p
f>s
[f(x)[
p
dx
ds =
Ω
[f(x)[
p
f
0
s
r−1−p
ds
=
A
r−p
r −p
Ω
[f(x)[
r
dx. (A.12)
Combining (A.10), (A.11), and (A.12), we derive
Ω
[Tf(x)[
r
dx ≤ rF(A)
Ω
[f(x)[
r
dx, (A.13)
where
F(A) =
(2B
q
)
q
A
r−q
q −r
+
(2B
p
)
p
A
r−p
r −p
.
By elementary calculus, one can easily verify that the minimum of F(A) is
A = 2B
q/(q−p)
q
B
p/(p−q)
p
.
For this value of A, (A.13) becomes
Ω
[Tf(x)[
r
dx ≤ r2
r
1
q −r
+
1
r −p
B
q(r−p)/(q−p)
q
B
p(q−r)/(q−p)
p
Ω
[f(x)[
r
dx.
Letting
C = 2
r
q −r
+
r
r −p
1/r
, and
1
r
=
θ
p
+
1 −θ
q
,
246 A Appendices
we arrive immediately at
Tf
L
r
(Ω)
≤ CB
θ
p
B
1−θ
q
f
L
r
(Ω)
.
This completes the proof of the Lemma. ¯ .
A.4 The Contraction Mapping Principle
Let X be a linear space with norm  , and let T be a mapping from X into
itself. If there exists a number θ < 1, such that
Tx −Ty ≤ θx −y for all x, y ∈ X,
Then T is called a contraction mapping .
Theorem A.4.1 Let T be a contraction mapping in a Banach space B. Then
it has a unique ﬁxed point in B, that is, there is a unique x ∈ B, such that
Tx = x.
Proof. Let x
o
be any point in the Banach space B. Deﬁne
x
1
= Tx
o
, and x
k+1
= Tx
k
k = 1, 2, .
We show that ¦x
k
¦ is a Cauchy sequence in B. In fact, for any integers n > m,
x
n
−x
m
 ≤
n−1
¸
k=m
x
k+1
−x
k

=
n−1
¸
k=m
T
k
x
1
−T
k
x
o

≤
n−1
¸
k=m
θ
k
x
1
−x
o

≤
θ
m
1 −θ
x
1
−x
o
→0 as m→∞.
Since a Banach is complete, the sequence ¦x
k
¦ converges to an element x
in B. Taking limit on both side of
x
k+1
= Tx
k
,
we arrive at
Tx = x. (A.14)
A.5 The ArzelaAscoli Theorem 247
To see the uniqueness, assume that y is a solution of (A.14). Then
x −y = Tx −Ty ≤ θx −y.
Therefore, we must have
x −y = 0,
That is x = y. ¯ .
A.5 The ArzelaAscoli Theorem
We say that the sequence of functions ¦u
k
(x)¦ are uniformly equicontinuous
if for each > 0, there exists δ > 0, such that
[u
k
(x) −u
k
(y)[ < whenever [x −y[ < δ
for all x, y ∈ R
n
and k = 1, 2, .
Theorem A.5.1 (ArzelaAscoli Compactness Criterion for Uniform Conver
gence).
Assume that ¦u
k
(x)¦ is a sequence of realvalued functions deﬁned on R
n
satisfying
[u
k
(x)[ ≤ M k = 1, 2, , x ∈ R
n
for some positive number M, and ¦u
k
(x)¦ are uniformly equicontinuous. Then
there exists a subsequence ¦u
ki
(x)¦ of ¦u
k
(x)¦ and a continuous function u(x),
such that
u
ki
→u
uniformly on any compact subset of R
n
.
References
[Ad] R. Adams, Sobolev Spaces, Academic Press, San Diego, 1978.
[AR] A. Ambrosetti and P. Rabinowitz, Dual variational method in critical
point theory and applications, J. Funct. Anal. 14(1973), 349381.
[Au] T. Aubin, Nonlinear Analysis on Manifolds. MongeAmpere Equations,
SpringerVerlag, 1982.
[Ba] A. Bahri, Critical points at inﬁnity in some variational problems, Pit
man, 1989, Research Notes in Mathematics Series, Vol. 182, 1988.
[BC] A. Bahri and J. Coron, The scalar curvature problem on three
dimensional sphere, J. Funct. Anal., 95(1991), 106172.
[BE] J. Bourguignon and J. Ezin, Scalar curvature functions in a class of
metrics and conformal transformations, Trans. AMS, 301(1987), 723
736.
[Bes] A. Besse, Einstein Manifolds, SpringerVerlag, Berling, Heidelberg,
1987.
[Bi] G. Bianchi, Nonexistence and symmetry of solutions to the scalar curva
ture equation, Comm. Partial Diﬀerential Equations, 21(1996), 229234.
[BLS] H. Brezis, Y. Li, and I. Shafrir, A sup + inf inequality for some nonlinear
elliptic equations involving exponential nonlinearities, J. Funct. Anal.
115 (1993), 344358.
[BM] H. Brezis, and F. Merle, Uniform estimates and blowup behavior for
solutions of u = V (x)e
u
in two dimensions, Comm. PDE. 16 (1991),
12231253.
[C] W. Chen, Scalar curvatures on S
n
, Mathematische Annalen, 283(1989),
353365.
[C1] W. Chen, A Tr¨ udinger inequality on surfaces with conical singularities,
Proc. AMS 3(1990), 821832.
[Ca] M. do Carmo, Riemannian Geometry, Translated by F. Flaherty,
Birkhauser Boston, 1992.
[CC] K. Cheng and J. Chern, Conformal metrics with prescribed curvature
on S
n
, J. Diﬀ. Equs., 106(1993), 155185.
[CD1] W. Chen and W. Ding, A problem concerning the scalar curvature on
S
2
, Kexue Tongbao, 33(1988), 533537.
[CD2] W. Chen and W. Ding, Scalar curvatures on S
2
, Trans. AMS,
303(1987), 365382.
250 References
[CGS] L.Caﬀarelli, B.Gidas, and J.Spruck, Asymptotic symmetry and local
behavior of semilinear elliptic equations with critical Sobolev growth,
Comm. Pure Appl. Math. XLII(1989), 271297.
[ChL] K. Chang and Q. Liu, On Nirenberg’s problem, International J. Math.,
4(1993), 5988.
[CL1] W. Chen and C. Li, Classiﬁcation of solutions of some nonlinear elliptic
equations , Duke Math. J., 63(1991), 615622.
[CL2] W. Chen and C. Li, Qualitative properties of solutions to some nonlinear
elliptic equations in R
2
, Duke Math J., 71(1993), 427439.
[CL3] W. Chen and C. Li, A note on KazdanWarner type conditions, J. Diﬀ.
Geom. 41(1995), 259268.
[CL4] W. Chen and C. Li, A necessary and suﬃcient condition for the Niren
berg problem, Comm. Pure Appl. Math., 48(1995), 657667.
[CL5] W. Chen and C. Li A priori estimates for solutions to nonlinear elliptic
equations, Arch. Rat. Mech. Anal., 122(1993), 145157.
[CL6] W. Chen and C. Li, A priori estimates for prescribing scalar curvature
equations, Ann. Math., 145(1997), 547564.
[CL7] W. Chen and C. Li, Prescribing Gaussian curvature on surfaces with
conical singularities, J. Geom. Analysis, 1(1991), 359372.
[CL8] W. Chen and C. Li, Gaussian curvature on singular surfaces, J. Geom.
Anal., 3(1993), 315334.
[CL9] W. Chen and C. Li, What kinds of singular surface can admit constant
curvature, Duke Math J., 2(1995), 437451.
[CL10] W. Chen and C. Li, Prescribing scalar curvature on S
n
, Paciﬁc J.
Math, 1(2001), 6178.
[CL11] W. Chen and C. Li, Indeﬁnite elliptic problems in a domain, Disc.
Cont. Dyn. Sys., 3(1997), 333340.
[CL12] W. Chen and C. Li, Gaussian curvature in the negative case, Proc.
AMS, 131(2003), 741744.
[CL13] W. Chen and C. Li, Regularity of solutions for a system of integral
equations, C.P.A.A., 4(2005), 18.
[CL14] W. Chen and C. Li, Moving planes, moving spheres, and a priori esti
mates, J. Diﬀ. Equa. 195(2003), 113.
[CLO] W. Chen, C. Li, and B. Ou, Classiﬁcation of solutions for an integral
equation, , CPAM, 59(2006), 330343.
[CLO1] W. Chen, C. Li, and B. Ou, Qualitative properties of solutions for an
integral equation, Disc. Cont. Dyn. Sys. 12(2005), 347354.
[CLO2] W. Chen, C. Li, and B. Ou, Classiﬁcation of solutions for a system of
integral equations, Comm. in PDE, 30(2005), 5965.
[CS] K. Cheng and J. Smoller, Conformal metrics with prescribed Gaussian
curvature on S
2
, Trans. AMS, 336(1993), 219255.
[CY1] A. Chang, and P. Yang, A perturbation result in prescribing scalar
curvature on S
n
, Duke Math. J., 64(1991), 2769.
[CY2] A. Chang, M. Gursky, and P. Yang, The scalar curvature equation on
2 and 3spheres, Calc. Var., 1(1993), 205229.
[CY3] A. Chang and P. Yang, Prescribing Gaussian curvature on S
2
, Acta.
Math., 159(1987), 214259.
[CY4] A. Chang, P. Yang, Conformal deformation of metric on S
2
, J. Diﬀ.
Geom., 23(1988) 259296.
References 251
bibitem[DL]DL W. Ding and J. Liu, A note on the problem of prescribing
Gaussian curvature on surfaces, Trans. AMS, 347(1995), 10591065.
[ES] J. Escobar and R. Schoen, Conformal metric with prescribed scalar
curvature, Invent. Math. 86(1986), 243254.
[Ev] L. Evans, Partial Diﬀerential Equations, Graduate Studies in Math,
Vol 19, AMS, 1998.
[F] L. Frenkel, An Introduction to Maximum Principles and Symmetry in
Elliptic Problems, Cambridge Unversity Press, New York, 2000.
[He] E. Hebey, Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequal
ities, AMS, Courant Institute of Mathematical Sciences Lecture Notes
5, 1999.
[GNN] B. Gidas, W. Ni, and L. Nirenberg, Symmetry of positive solutions of
nonlinear elliptic equations in R
n
, In the book Mathematical Analysis
and Applications, Academic Press, New York, 1981.
[GNN1] B. Gidas, W. Ni, and L. Nirenberg, Symmetry and related properties
via the maximum principle, Comm. Math. Phys., 68(1979), 209243.
[GS] B. Gidas and J. Spruck, Global and local behavior of positive solutions
of nonlinear elliptic equations, Comm. Pure Appl. Math., 34(1981), 525
598.
[GT] D. Gilbarg and N. Trudinger, Elliptic partial diﬀerential equations of
second order, SpringerVerlag, 1983.
[Ha] Z. Han, Prescribing Gaussian curvature on S
2
, Duke Math. J., 61(1990),
679703.
[Ho] C. Hong, A best constant and the Gaussian curvature, Proc. of AMS,
97(1986), 737747.
[Ka] J. Kazdan, Prescribing the curvature of a Riemannian manifold, AMS
CBMS Lecture, No.57, 1984.
[KW1] J. Kazdan and F. Warner, Curvature functions for compact two mani
folds, Ann. of Math. , 99(1974), 1447.
[Kw2] J. Kazdan and F. Warner, Existence and conformal deformation of met
rics with prescribing Gaussian and scalar curvature, Annals of Math.,
101(1975), 317331.
[L] E.Lieb, Sharp constants in the HardyLittlewoodSobolev and related
inequalities, Ann. of Math. 118(1983), 349374.
[Li] C. Li, Local asymptotic symmetry of singular solutions to nonlinear
elliptic equations, Invent. Math. 123(1996), 221231.
[Li0] C. Li, Monotonicity and symmetry of solutions of fully nonlinear elliptic
equations on bounded domain, Comm. in PDE. 16(2&3)(1991), 491526.
[Li1] Y. Li, On prescribing scalar curvature problem on S
3
and S
4
, C. R.
Acad. Sci. Paris, t. 314, Serie I (1992), 5559.
[Li2] Y. Li, Prescribing scalar curvature on S
n
and related problems, J.
Diﬀerential Equations 120 (1995), 319410.
[Li3] Y. Li, Prescribing scalar curvature on S
n
and related problems, II: Ex
istence and compactness, Comm. Pure Appl. Math. 49 (1996), 541597.
[Lin1] C. Lin, On Liouville theorem and apriori estimates for the scalar curva
ture equations, Ann. Scuola. Norm. Sup. Pisa, Cl. Sci. 27(1998), 107130.
[Mo] J. Moser, On a nonlinear problem in diﬀerential geometry, Dynamical
Systems, Academic Press, N.Y., (1973).
[P] P. Padilla, Symmetry properties of positive solutions of elliptic equations
on symmetric domains, Appl. Anal. 64(1997), 153169.
252 References
[O] M. Obata, The conjecture on conformal transformations of Riemannian
manifolds, J. Diﬀ. Geom., 6(1971), 247258.
[On] E. Onofri, On the positivity of the eﬀective action in a theory of random
surfaces, Comm. Math. Phys., 86 (1982), 321326.
[Rab] P. Rabinowitz, Minimax Methods in Critical Point Theory with Appli
cations to Diﬀerential Equations, CBMS series, No. 65, 1986.
[Sch] M. Schechter, Modern Methods in Partial Diﬀerential Equations,
McGrawHill Inc., 1977.
[Sch1] M. Schester, Principles of Functional Analysis, Graduate Studies in
Math, Vol. 6, AMS, 2002.
[SZ] R. Schoen and D. Zhang, Prescribed scalar curvature on the nspheres,
Calc. Var. Partial Diﬀerential Equations 4 (1996), 125.
[WX] J. Wei and X. Xu, Classiﬁcation of solutions of higher order conformally
invariant equations, Math. Ann. 313(1999), 207228.
[XY] X. Xu and P. Yang, Remarks on prescribing Gauss curvature, Tran.
AMS, Vol. 336, No. 2, (1993), 831840.
Preface
In this book we intend to present basic materials as well as real research examples to young researchers in the ﬁeld of nonlinear analysis for partial diﬀerential equations (PDEs), in particular, for semilinear elliptic PDEs. We hope it will become a good reading material for graduate students and a handy textbook for professors to use in a topic course on nonlinear analysis. We will introduce a series of wellknown typical methods in nonlinear analysis. We will ﬁrst illustrate them by using simple examples, then we will lead readers to the research front and explain how these methods can be applied to solve practical problems through careful analysis of a series of recent research articles, mostly the authors’ recent papers. From our experience, roughly speaking, in applying these commonly used methods, there are usually two aspects: i) general scheme (more or less universal) and, ii) key points for each individual problem. We will use our own research articles to show readers what the general schemes are and what the key points are; and with the understandings of these examples, we hope the readers will be able to apply these general schemes to solve their own research problems with the discovery of their own key points. In Chapter 1, we introduce basic knowledge on Sobolev spaces and some commonly used inequalities. These are the major spaces in which we will seek weak solutions of PDEs. Chapter 2 shows how to ﬁnd weak solutions for some typical linear and semilinear PDEs by using functional analysis methods, mainly, the calculus of variations and critical point theories including the wellknown Mountain Pass Lemma. In Chapter 3, we establish W 2,p a priori estimates and regularity. We prove that, in most cases, weak solutions are actually diﬀerentiable, and hence are classical ones. We will also present a Regularity Lifting Theorem. It is a simple method to boost the regularity of solutions. It has been used extensively in various forms in the authors’ previous works. The essence of the approach is wellknown in the analysis community. However, the version we present here
VI
Preface
contains some new developments. It is much more general and is very easy to use. We believe that our method will provide convenient ways, for both experts and nonexperts in the ﬁeld, in obtaining regularities. We will use examples to show how this theorem can be applied to PDEs and to integral equations. Chapter 4 is a preparation for Chapter 5 and 6. We introduce Riemannian manifolds, curvatures, covariant derivatives, and Sobolev embedding on manifolds. Chapter 5 deals with semilinear elliptic equations arising from prescribing Gaussian curvature, on both positively and negatively curved manifolds. We show the existence of weak solutions in critical cases via variational approaches. We also introduce the method of lower and upper solutions. Chapter 6 focus on solving a problem from prescribing scalar curvature on S n for n ≥ 3. It is in the critical case where the corresponding variational functional is not compact at any level sets. To recover the compactness, we construct a maxmini variational scheme. The outline is clearly presented, however, the detailed proofs are rather complex. The beginners may skip these proofs. Chapter 7 is devoted to the study of various maximum principles, in particular, the ones based on comparisons. Besides classical ones, we also introduce a version of maximum principle at inﬁnity and a maximum principle for integral equations which basically depends on the absolute continuity of a Lebesgue integral. It is a preparation for the method of moving planes. In Chapter 8, we introduce the method of moving planes and its variant– the method of moving spheres–and apply them to obtain the symmetry, monotonicity, a priori estimates, and even nonexistence of solutions. We also introduce an integral form of the method of moving planes which is quite diﬀerent than the one for PDEs. Instead of using local properties of a PDE, we exploit global properties of solutions for integral equations. Many research examples are illustrated, including the wellknown one by Gidas, Ni, and Nirenberg, as well as a few from the authors’ recent papers. Chapters 7 and 8 function as a selfcontained group. The readers who are only interested in the maximum principles and the method of moving planes, can skip the ﬁrst six chapters, and start directly from Chapter 7.
Yeshiva University University of Colorado
Wenxiong Chen Congming Li
Contents
1
Introduction to Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Approximation by Smooth Functions . . . . . . . . . . . . . . . . . . . . . . . 1.4 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Compact Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6 Other Basic Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.1 Poincar´’s Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . e 1.6.2 The Classical HardyLittlewoodSobolev Inequality . . . . Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Second Order Elliptic Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Methods of Linear Functional Analysis . . . . . . . . . . . . . . . . . . . . . 2.3.1 Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.2 Some Basic Principles in Functional Analysis . . . . . . . . . 2.3.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Variational Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.1 Semilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.2 Calculus of Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.3 Existence of Minimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.4 Existence of Minimizers Under Constraints . . . . . . . . . . . 2.4.5 Minimax Critical Points . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.6 Existence of a Minimax via the Mountain Pass Theorem Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 W 2,p a Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1 Newtonian Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.2 Uniform Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 W 2,p Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1 The Case p ≥ 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1 3 8 9 21 32 34 34 36 41 42 43 44 44 44 48 50 50 50 52 55 59 65 73 75 75 81 86 88
2
3
. . . . . . . . . . . . . . . . . . . . . . . 172 6. . . . . . . . . 95 3.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . 120 4. . . . . .2. . . . . . . . . . . . . . .2 Curvature of Surfaces in R3 . . . .3. . . . . . . . . . . . . . . .3 In the Regions Where R > 0. . . . .4. . . . . . 109 4. . . . . . . .3. . . . . . 129 5. . . . . . . . . 115 4.1. . . . . . . . . . . . . 94 3. . . . . . . . . . . . . . . . . . . . . . . . . . 161 6. . . . . . . . . . . . .1 Obstructions . . . . . . . . . . . .5 Calculus on Manifolds .3. . . . . . . . . . . for n ≥ 3 . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . 127 5. . . . . . . . . . . . . . . . . . . . . . . . . 162 6. . .1 Prescribing Gaussian Curvature on S 2 . . .3 Other Useful Results Concerning the Existence. . . . . . . .3 Curvature on Riemannian Manifolds . . . . . .6 Sobolev Embeddings . . . . . . .2 Prescribing Gaussian Curvature on Negatively Curved Manifolds . . . 171 6. 97 3. . . . . . . . . . . . . . . . . . . 151 Prescribing Scalar Curvature on S n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Equations on Prescribing Gaussian and Scalar Curvature123 4. . 96 3. . . . . . . . . . . . . . . . . . . . .VIII Contents 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 4. . . . . . . . . . 107 4. . . . .2 Tangent Spaces . . . . . . . . . . .3. . . . . 172 6. . . . . . . . . . . . . . Uniqueness. . . . . . . . . . . . . . .2 The Variational Approach . . . . . . . .5. . . . . . 145 5. . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . 95 3.5. . . . . . . . . . . .2 Regularity Lifting Theorem . . . . 172 6. . . . . . . . . . . 157 6. . . 124 Prescribing Gaussian Curvature on Compact 2Manifolds . . . . . . . . . . . . . . . . . . . .2 Integrals . . . . . . . . . . . . . . . . . .3 Riemannian Metrics . . . . . .3 Existence of Weak Solutions . 142 5. . . . . . . . . . . 174 5 6 . . . . . . . . . .4. . . . . .2 In the Region Where R is Small . . . . . .1. .3. . . . . . .2. . . . . . . . . .2 The Variational Scheme . . . . . . . . 129 5. . .1 Kazdan and Warner’s Results. . . . . . . . . .2 The Case 1 < p < 2. . . . . . . . . Method of Lower and Upper Solutions . . .2. . . . . 92 3. . . . . . .3 Regularity Lifting . . . . . . . . . . . . . . . . . . . . . .1 In the Region Where R < 0 . . . . . . . . . . . . . . . . . . 103 4 Preliminary Analysis on Riemannian Manifolds . . . . . . .4 Curvature . 107 4. . . .1 Estimate the Values of the Functional . . . . . . . .1 Curvature of Plane Curves . . .3 Applications to PDEs . . . 117 4. . . . . . . .2. . . . . 112 4. .2 Variational Approach and Key Inequalities . . . . . . . . . . . . . 157 6. . . . . . . . . . . . . . . . . . .1 Higher Order Covariant Derivatives and the LaplaceBertrami Operator . . . . . . . . . . . . . 115 4. . . and Regularity . . . . . . . . . . . . . . . . 120 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 4. . . . . . .3. . . . . . .1 Introduction . . . . 135 5. . . . . . . . . . . . . . .2 Chen and Li’s Results . . . . . . . . . . . . . . . . . . . . . . . . .4 Applications to Integral Equations . . . . . . . . . . .1 Diﬀerentiable Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 5.2. . . . . . . . . . . .3 The A Priori Estimates . . . . . . . . . . . . .1 Bootstrap . . . .
. . . . . . . . . . . . . . . . . . . . . . .1 Symmetry of Solutions in a Unit Ball . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 The Background . . . . . 239 A.Contents IX 7 Maximum Principles . . . . . . . . 241 A. . . . . . . 180 7. . . . . . . . . . . . . . . . . . . . 199 8. . . . . . . . . . . . . . . . . . . . . 222 8. . .2 The A Priori Estimates . . . . . . . .3 Symmetry of Solutions for − u = eu in R2 . .1. . . . .5 A Maximum Principle for Integral Equations . . . . . . . . . . . . . . . . . . .3 CalderonZygmund Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 7. . . . . .2 Necessary Conditions . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . .1 The Background . . . . . .3 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Applications of the Maximum Principles Based on Comparisons . . . . . . . . . . . . . . . . . . . . . . . .4 Method of Moving Spheres .3 The Hopf Lemma and Strong Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Notations from Riemannian Geometry . . . . . . . . . . .2 Inequalities . . . . . 246 A.1. . . . . . . . . . . . . . . . . . . . . . 237 A. . . . . 175 7.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 A. . . . . . . .3. . . . . . . 235 A. . . . .1.2. . . 214 8. . . . . . 188 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 7. . 225 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 8 A References .2 Notations for Functions and Derivatives . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . .1 Algebraic and Geometric Notations . . . . . . . . . . . 249 . . . . 214 8. . . . . . . . . . . . .1 Notations . .3 Method of Moving Planes in a Local Way .1 Introduction . . . . . .4 The Contraction Mapping Principle . . . .5 The ArzelaAscoli Theorem . . . . 235 A. . . . . . . . . 222 8. . . . . . . . . . . . . . . . 235 A. . . . . . . .4 Notations for Estimates . . . . . . . . 199 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 Appendices . .5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation . . . . . .3. .1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 Methods of Moving Planes and of Moving Spheres .2 Weak Maximum Principles . . . . . . . . . . . 202 8. . . . .1 Outline of the Method of Moving Planes . . . .4. . . 235 A. . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . .4 Maximum Principles Based on Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Symmetry of Solutions of − u = up in Rn . . . . . . . . . . . . . . 243 A. . . . . 209 8. . . . . 216 8. . . . . . . . . . . 197 8. . . . 195 8. . . . . .
.
it is imperative that we need not only develop suitable metrics to measure diﬀerent states (functions).1 1. many well known mathematicians. The role of Sobolev Spaces in the analysis of PDEs is somewhat similar to the role of Euclidean spaces in the study of geometry. Hardy and J. Nirenberg. For example. A. In mathematics. and .2 Poincar´’s Inequality e The Classical HardyLittlewoodSobolev Inequality In real life. More recently. Functions are used to describe physical states. E. in the theory of partial diﬀerential equations. Very often. in particular. such as H.5 1.6. the absolute value a − b is used to measure the diﬀerence between two numbers a and b. we use a sequence of approximate solutions to approach a real one. The fundamental research on the relations among various Sobolev Spaces (Sobolev norms) was ﬁrst carried out by G.6 Distributions Sobolev Spaces Approximation by Smooth Functions Sobolev Embeddings Compact Embedding Other Basic Inequalities 1. They have many applications in various branches of mathematics. For these purposes. and how close these solutions are to the real one depends on how we measure them. the Sobolev Spaces were introduced. Caﬀarelli. Chang. people use numbers to quantify the surrounding objects. Hence. which metric we are choosing. L.2 1.6. L. but we must also study relationships among diﬀerent metrics. Lieb.1 1. Littlewood in the 1910s and then by S. Brezis. J. that is. temperature is a function of time and place. Serrin.1 Introduction to Sobolev Spaces 1.4 1.3 1. Sobolev in the 1930s.
s) d x is an antiderivative of f (x. what the sharp estimates are. From the deﬁnition of the functional in either (1. Correspondingly. especially for nonlinear partial diﬀerential equations. in particular. This kind of variational approach is particularly powerful in dealing with nonlinear equations. Stein. let’s start oﬀ with a simple example. x ∈ ∂Ω (1. given a PDE problem.2 1 Introduction to Sobolev Spaces E. x ∈ Ω u = 0. by an appropriate regularity argument. Hence the critical points of the functional are solutions of the problem only in the ‘weak’ sense. one may recover the diﬀerentiability of the solutions. and seek critical points of the functional in that space.’ to derive the existence.3) f (x.1) in the classical sense.4) .1).3). To roughly illustrate this kind of application. instead of f (x). so that they can still satisfy equation (1. we have the functional J(u) = where F (x. Then it becomes a semilinear equation.1) To prove the existence of solutions. However. u) d x. u) = 0 1 2  u2 d x − Ω u Ω F (x. For example. in equation (1. and which functions are ‘critically’ related to these sharp estimates. one can see that the function u in the space need not be second order diﬀerentiable as required by the classical solutions of (1. (1. the calculus of variations. One may also consider the corresponding variational functional J(u) = 1 2  u2 d x − Ω Ω f (x) u d x (1. the ‘ﬁxed point theory’ or ‘the degree theory. which functions achieve these sharp estimates.2) or (1. has seen more and more applications. In general. To ﬁnd the existence of weak solutions for partial diﬀerential equations. u). one may view − as an operator acting on a proper linear space and then apply some known principles of functional analysis. ·). for instance. The main objectives are to determine if and how the norms dominate each other.2) in a proper linear space. we consider f (x.1). the method of functional analysis. our intention is to view it as an operator A acting on some proper linear spaces X and Y of functions and to symbolically write the equation as Au = f (1. Let Ω be a bounded domain in Rn and consider the Dirichlet problem associated with the Laplace equation: − u = f (x). have worked in this area.
the key is to ﬁnd an appropriate operator ‘A’ and appropriate spaces ‘X’ and ‘Y’. we will introduce the distributions. In Section 1. . the weak convergent sequence in the “stronger” space becomes a strong convergent sequence in the “weaker” space. via functional analysis approach. In solving a partial diﬀerential equation. one can substantially weaken the notion of partial derivatives. We may also associate this operator with a functional J(·). It is very common that people use “energy” minimization or conservation. and these ﬁrst derivatives need not be continuous or even be deﬁned everywhere. what really involved were only the ﬁrst derivatives of u instead of the second derivatives as required classically for a second order equation. we can just work on smooth functions to establish a series of important inequalities. The advantage is that it allows one to divide the task of ﬁnding “suitable” smooth solutions for a PDE into two major steps: Step 1. in many cases. we show that.1 Distributions 3 Then we can apply the general and elegant principles of linear or nonlinear functional analysis to study the solvability of various equations involving A. The limit function of a convergent sequence of approximate solutions would be the desired exact solution of the equation. it is not so convenient to work directly on weak derivatives.1 Distributions As we have seen in the introduction. Before going onto the details of this chapter.1. it is natural to ﬁrst ﬁnd a sequence of approximate solutions. Then in the next three sections. 1. As we will see later. mainly the notion of the weak derivatives. to show the existence of such weak solutions. whose critical points are the solutions of the equation (1. Existence of Weak Solutions. for the functional J(u) in (1. As we will see in the next few chapters. and then go on to investigate the convergence of the sequence.3). which are the elements of the Sobolev spaces. sometimes use ﬁnite dimensional approximation. The result can then be applied to a broad class of partial diﬀerential equations.4).3. the readers may have a glance at the introduction of the next chapter to gain more motivations for studying the Sobolev spaces. Therefore. every bounded sequence has a weakly convergent subsequence. there are two basic stages in showing convergence: i) in a reﬂexive Banach space.2. To derive many useful properties in Sobolev spaces. and then ii) by the compact embedding from a “stronger” Sobolev space into a “weaker” one. these weak derivatives can be approximated by smooth functions.1. the Sobolev spaces are designed precisely for this purpose and will work out properly. In this process.2) or (1. Hence in Section 1. One seeks solutions which are less diﬀerentiable but are easier to obtain. We then deﬁne Sobolev spaces in Section 1.
Such functions are Lebesque measurable functions f deﬁned on Ω and with the property that 1/p f Lp (K) := K f (x)p dx <∞ for every compact subset K in Ω. Various functions spaces and the related embedding theories are basic tools in both analysis.1 Assume BR (xo ) := {x ∈ Rn  x − xo  < R} ⊂ Ω. ∂xi (1. for i = 1. · · · . Through integration by parts. among which Sobolev spaces are the most frequently used ones.5) ∂u does not exist.1.’ which will be the elements of the Sobolev spaces. Let Lp (Ω) be the space of pth power locally summable functions for loc 1 ≤ p ≤ ∞. Regularity Lifting. the integral on ∂xi the right hand side of (1. Let Rn be the ndimensional Euclidean space and Ω be an open connected ∞ subset in Rn . Let D(Ω) = C0 (Ω) be the linear space of inﬁnitely diﬀerentiable functions with compact support in Ω. In this section. ∂u φ(x)dx = − ∂xi u(x) Ω Ω ∂φ dx. u (x) := ρ ∗ u := n Rn 1 exp{ x−xo 2 −r2 } for x − xo  < r 0 elsewhere Then u ∈ ∞ C0 (Ω) for suﬃciently small. Both the existence of weak solutions and the regularity lifting have become two major branches of today’s PDE analysis.1. we have. Assume that u is a C 1 function in Ω and φ ∈ D(Ω).5) still makes sense if u is a locally L1 summable Now if u is not in C 1 (Ω). n. Example 1. This is called the space of test functions on Ω. 2. and supp u ⊂ K ⊂⊂ Ω. Let 1 x−y ρ( )u(y)dy. One uses various analysis tools to boost the diﬀerentiability of the known weak solutions and try to show that they are actually classical ones. then . However. we introduce the notion of ‘weak derivatives. the following function f (x) = ∞ is in C0 (Ω).4 1 Introduction to Sobolev Spaces Step 2. ∞ Example 1. then for any r < R.2 Assume ρ ∈ C0 (Rn ). u ∈ Lp (Ω).
. Example 1. Let α = (α1 . the regular αth partial derivative of u is Dα u = ∂ α1 ∂ α2 · · · ∂ αn u .6) as the weak representatives of Dα u.e. The same idea works for higher partial derivatives.6) makes no sense. For this reason. However the right hand side is valid for functions u with much weaker diﬀerentiability. let u(x) = cos x if − π < x ≤ 0 1 − x if 0 < x < 1. (1. i.1. we deﬁne the ﬁrst derivative function v(x) that satisﬁes v(x)φ(x)dx = − Ω Ω ∂u weakly as the ∂xi u ∂φ dx ∂xi for all functions φ ∈ D(Ω). v ∈ L1 (Ω). Now if u is not k times diﬀerentiable. 1). u only need to be locally L1 summable. · · · .3 For n = 1 and Ω = (−π. Deﬁnition 1.1 Distributions 5 function. α2 . written v = Dα u provided v(x) φ(x) d x = (−1)α Ω Ω u Dα φ d x for all test functions φ ∈ D(Ω). αn ) be a multiindex of order k := α := α1 + α2 + · · · + αn . we arrive at Dα u φ(x) d x = (−1)α Ω Ω u Dα φ d x.6) There is no boundary term because φ vanishes near the boundary.1 For u. the left hand side of (1.1. ∂xα1 ∂xα2 · · · ∂xαn n 1 2 Given any test function φ ∈ D(Ω). Thus it is natural to choose those functions v that satisfy (1. through a straight forward integration by parts k times.1. Then its weak derivative u (x) can be represented by v(x) = − sin x if − π < x ≤ 0 −1 if 0 < x < 1. For u ∈ C k (Ω). we say that v is the αth weak derivative loc of u.
2 A distribution is a continuous linear functional on D(Ω). is the collection of all continuous linear functionals on D(Ω). and we call it a distribution. for any φ ∈ D(Ω). In this example. we verify.7) still holds.1. Lemma 1. and (1. . we must have v(x) = w(x) almost everywhere . we have 1 0 1 u(x)φ (x)dx = −π −π 0 cos x φ (x)dx + 0 (1 − x)φ (x)dx 1 = −π sin x φ(x)dx + φ(0) − φ(0) + 0 1 φ(x)dx =− −π v(x)φ(x)dx. one can see that. Since the weak derivative is deﬁned by integrals. Ω Therefore.6 1 Introduction to Sobolev Spaces To see this. one may alter the values of the weak derivative v(x) in a set of measure zero. it should be unique up to a set of measure zero. the function u is not diﬀerentiable at x = 0. More generally. we can view a weak derivative as a linear functional acting on the space of test functions D(Ω). Proof. through integration by parts. denoted by D (Ω). Dα u. in the classical sense. It follows that (v(x) − w(x))φ(x)dx = 0 ∀φ ∈ D(Ω). we have v(x)φ(x)dx = (−1)α Ω Ω u(x)Dα φdx = Ω w(x)φ(x)dx for any φ ∈ D(Ω).1 (Uniqueness of Weak Derivatives).7) In fact. By the deﬁnition of the weak derivatives. From the deﬁnition. If v and w are the weak αth partial derivatives of u. However. (1. we have Deﬁnition 1. that 1 1 u(x)φ (x)dx = − −π −π v(x)φ(x)dx. then v(x) = w(x) almost everywhere.1. The linear space of distributions or the generalized functions on Ω.
For any φ ∈ D(Ω). The most important and most commonly used distributions are locally summable functions. we have 1 1 0. 1). It is easy to verify that Tf (·) is a continuous linear functional on D(Ω). In fact. we may regard δ0 (x) as f (x) in the sense of distributions. and let f (x) = Then. for x = xo . for x = xo δxo (x) = ∞. the delta function at xo can be deﬁned as δxo (φ) = φ(xo ). let Ω = (−1. we have T (φk )→T (φ). for x ≥ 0. who often simply view δxo as 0. one can show that such a delta function is not locally summable. Let xo be a point in Ω. consider loc Tf (φ) = Ω f (x)φ(x)dx. Compare this with the deﬁnition of weak derivatives. for x < 0 1. as k→∞. To explain this. This kind of “function” has been used widely and so successfully by physicists and engineers. and we say that φk →φ in D(Ω) if a) there exists K ⊂⊂ Ω such that suppφk . then we say that µ is (or can be realized as ) a locally summable function and identify it as f . ∀φ ∈ D(Ω). . Surprisingly. Dα φk →Dα φ uniformly as k→∞. For any distribution µ. if there is an f ∈ L1 (Ω) such that loc µ(φ) = Tf (φ). for any sequence {φk } ⊂ D(Ω) with φk →φ in D(Ω). the continuity of a functional T on D(Ω) means that. Hence it is a distribution. such a delta function is the derivative of some function in the following distributional sense. and hence it is a distribution. and b) for any α.1. However. It is not a function at all. for any f ∈ Lp (Ω) with 1 ≤ p ≤ ∞.1 Distributions 7 Here. suppφ ⊂ K. An interesting example of a distribution that is not a locally summable function is the wellknown Dirac delta function. − −1 f (x)φ (x)dx = − 0 φ (x)dx = φ(0) = δ0 (φ).
8
1 Introduction to Sobolev Spaces
1.2 Sobolev Spaces
Now suppose given a function f ∈ Lp (Ω), we want to solve the partial diﬀerential equation ∆u = f (x) in the sense of weak derivatives. Naturally, we would seek solutions u, such that ∆u are in Lp (Ω). More generally, we would start from the collections of all distributions whose second weak derivatives are in Lp (Ω). In a variational approach, as we have seen in the introduction, to seek critical points of the functional 1 2  u2 dx −
Ω Ω
F (x, u)dx,
a natural set of functions we start with is the collection of distributions whose ﬁrst weak derivatives are in L2 (Ω). More generally, we have Deﬁnition 1.2.1 The Sobolev space W k,p (Ω) (k ≥ 0 and p ≥ 1) is the collection of all distributions u on Ω such that for all multiindex α with α ≤ k, Dα u can be realized as a Lp function on Ω. Furthermore, W k,p (Ω) is a Banach space with the norm u :=
α≤k Ω
1/p Dα u(x)p dx .
In the special case when p = 2, it is also a Hilbert space and we usually denote it by H k (Ω).
k,p ∞ Deﬁnition 1.2.2 W0 (Ω) is the closure of C0 (Ω) in W k,p (Ω). k,p Intuitively, W0 (Ω) is the space of functions whose up to (k − 1)th order derivatives vanish on the boundary.
Example 1.2.1 Let Ω = (−1, 1). Consider the function f (x) = xβ . For 0 < β < 1, it is obviously not diﬀerentiable at the origin. However for any 1 , it is in the Sobolev space W 1,p (Ω). More generally, let Ω be 1≤p< 1−β an open unit ball centered at the origin in Rn , then the function xβ is in W 1,p (Ω) if and only if n β >1− . (1.8) p To see this, we ﬁrst calculate fxi (x) = and hence βxi , x2−β for x = 0,
1.3 Approximation by Smooth Functions
9
 f (x) = Fix a small have
β . x1−β
(1.9)
> 0. Then for any φ ∈ D(Ω), by integration by parts, we f (x)φxi (x)dx +
Ω\B (0) ∂B (0)
fxi (x)φ(x)dx = −
Ω\B (0)
f φνi dS.
where ν = (ν1 , ν2 , · · · , νn ) is an inwardnormal vector on ∂B (0). Now, under the condition that β > 1 − n/p, fxi is in Lp (Ω) ⊂ L1 (Ω), and 
∂B (0)
f φνi dS ≤ C
n−1+β
→0,
as →0.
It follows that xβ φxi (x)dx = −
Ω Ω
βxi φ(x)dx. x2−β
Therefore, the weak ﬁrst partial derivatives of xβ are βxi , i = 1, 2, · · · , n. x2−β Moreover, from (1.9), one can see that  f  is in Lp (Ω) if and only if β >1− n . p
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance, proving inequalities, it may feel inconvenient and cumbersome to manage the weak derivatives directly based on its deﬁnition. To get around, in this section, we will show that any function in a Sobolev space can be approached by a sequence of smooth functions. In other words, the smooth functions are dense in Sobolev spaces. Based on this, when deriving many useful properties of Sobolev spaces, we can just work on smooth functions and then take limits. At the end of the section, for more convenient application of the approximation results, we introduce an Operator Completion Theorem. We also prove an Extension Theorem. Both theorems will be used frequently in the next few sections. The idea in approximation is based on molliﬁers. Let j(x) = One can verify that c e x2 −1 if x < 1 0 if x ≥ 1.
1
10
1 Introduction to Sobolev Spaces
∞ j(x) ∈ C0 (B1 (0)).
Choose the constant c, such that j(x)dx = 1.
Rn
For each
> 0, deﬁne j (x) =
1
n
x j( ).
Then obviously j (x)dx = 1, ∀ > 0.
Rn ∞ One can also verify that j ∈ C0 (B (0)), and
lim j (x) = →0
0 for x = 0 ∞ for x = 0.
The above observations suggest that the limit of j (x) may be viewed as a delta function, and from the wellknown property of the delta function, we would naturally expect, for any continuous function f (x) and for any point x ∈ Ω, (J f )(x) :=
Ω
j (x − y)f (y)dy→f (x),
as →0.
(1.10)
More generally, if f (x) is only in Lp (Ω), we would expect (1.10) to hold almost everywhere. We call j (x) a molliﬁer and (J f )(x) the molliﬁcation of f (x). We will show that, for each > 0, J f is a C ∞ function, and as →0, J f →f in W k,p . Notice that, actually (J f )(x) =
B (x)∩Ω
j (x − y)f (y)dy,
hence, in order that J f (x) to approximate f (x) well, we need B (x) to be completely contained in Ω to ensure that j (x − y)dy = 1
B (x)∩Ω
(one of the important property of delta function). Equivalently, we need x to be in the interior of Ω. For this reason, we ﬁrst prove a local approximation theorem. Theorem 1.3.1 (Local approximation by smooth functions). k,p For any f ∈ W k,p (Ω), J f ∈ C ∞ (Rn ) and J f → f in Wloc (Ω) as →0.
1.3 Approximation by Smooth Functions
11
Then, to extend this result to the entire Ω, we will choose inﬁnitely many open sets Oi , i = 1, 2, · · · , each of which has a positive distance to the boundary of Ω, and whose union is the whole Ω. Based on the above theorem, we are able to approximate a W k,p (Ω) function on each Oi by a sequence of smooth functions. Combining this with a partition of unity, and a cut oﬀ function if Ω is unbounded, we will then prove Theorem 1.3.2 (Global approximation by smooth functions). For any f ∈ W k,p (Ω), there exists a sequence of functions {fm } ⊂ ∞ C (Ω) ∩ W k,p (Ω) such that fm → f in W k,p (Ω) as m→∞. Theorem 1.3.3 (Global approximation by smooth functions up to the boundary). Assume that Ω is bounded with C 1 boundary ∂Ω, then for any f ∈ k,p W (Ω), there exists a sequence of functions {fm } ⊂ C ∞ (Ω) = C ∞ (Ω) ∩ W k,p (Ω) such that fm → f in W k,p (Ω) as m→∞.
∞ When Ω is the entire space Rn , the approximation by C ∞ or by C0 functions are essentially the same. We have k,p Theorem 1.3.4 W k,p (Rn ) = W0 (Rn ). In other words, for any f ∈ W k,p (Rn ), ∞ there exists a sequence of functions {fm } ⊂ C0 (Rn ), such that
fm →f,
as m→∞;
in W k,p (Rn ).
Proof of Theorem 1.3.1 We prove the theorem in three steps. In step 1, we show that J f ∈ C ∞ (Rn ) and Jf
Lp (Ω)
≤ f
Lp (Ω) .
From the deﬁnition of J f (x), we can see that it is well deﬁned for all x ∈ Rn , and it vanishes if x is of distance away from Ω. Here and in the following, for simplicity of argument, we extend f to be zero outside of Ω. In step 2, we prove that, if f is in Lp (Ω), (J f )→f in Lp (Ω). loc We ﬁrst verify this for continuous functions and then approximate Lp functions by continuous functions. In step 3, we reach the conclusion of the Theorem. For each f ∈ W k,p (Ω) and α ≤ k, Dα f is in Lp (Ω). Then from the result in Step 2, we have J (Dα f )→Dα f Hence what we need to verify is in Lloc (Ω).
12
1 Introduction to Sobolev Spaces
Dα (J f )(x) = J (Dα f )(x). As the readers will notice, the arguments in the last two steps only work in any compact subset of Ω. Step 1. Let ei = (0, · · · , 0, 1, 0, · · · , 0) be the unit vector in the xi direction. Fix > 0 and x ∈ Rn . By the deﬁnition of J f , we have, for h < , (J f )(x + hei ) − (J f )(x) = h Since as h→0,
B2 (x)∩Ω
j (x + hei − y) − j (x − y) f (y)dy. h (1.11)
j (x + hei − y) − j (x − y) ∂j (x − y) → h ∂xi uniformly for all y ∈ B2 (x) ∩ Ω, we can pass the limit through the integral sign in (1.11) to obtain ∂(J f )(x) = ∂xi Similarly, we have Dα (J f )(x) =
Ω α Dx j (x − y)f (y)dy. Ω
∂j (x − y) f (y)dy. ∂xi
Noticing that j (·) is inﬁnitely diﬀerentiable, we conclude that J f is also inﬁnitely diﬀerentiable. Then we derive Jf
p−1 p
Lp (Ω)
≤ f
1
Lp (Ω) .
(1.12)
By the H¨lder inequality, we have o (J f )(x) = 
Ω
j
(x − y)j p (x − y)f (y)dy
p−1 p
≤
Ω
j (x − y)dy
j (x − y)f (y) dy
p Ω
1 p
1 p
≤
B (x)
j (x − y)f (y)p dy
.
It follows that (J f )(x)p dx ≤
Ω Ω B (x)
j (x − y)f (y)p dy dx f (y)p
Ω B (y)
≤ ≤
Ω
j (x − y)dx dy
f (y)p dy.
for any compact subset K of Ω.13) infers that for suﬃciently small . For the continuous function g. where χj is the characteristic function of some measurable set Aj . the last term in the above inequality tends to zero uniformly as →0. choose a a continuous function g. Lp (K) Step 3. We prove that. By writing (J f )(x) − f (x) = B (0) j (y)[f (x − y) − f (x)]dy.13) for continuous functions. as →0. we have δ J g − g Lp (K) < . (1. 3 It follows from this and (1.1.3 Approximation by Smooth Functions 13 This veriﬁes (1. and each simple function can be approximated by a continuous function.y< x∈K.13).p (Ω). We show that . Now assume that f ∈ W k. This proves (1. For any f in Lp (Ω). such that δ (1. we have α D f ∈ Lp (Ω). Step 2.13) We ﬁrst show this for a continuous function f . 3 This can be derived from the wellknown fact that any Lp function can be k approximated by a simple function of the form j=1 aj χj (x). and given any δ > 0.12). (1. Then for any α with α ≤ k. we have (J f )(x) − f (x) ≤ ≤ x∈K.14) f − g Lp (Ω) < .14) that J f −f Lp (K) ≤ J f −J g Lp (K) + J g−g Lp (K) + g−f Lp (K) ≤ 2 f − g Lp (K) + J g − g δ δ ≤ 2· + 3 3 = δ.y< max f (x − y) − f (x) B (0) j (y)dy max f (x − y) − f (x). This veriﬁes (1. Due to the continuity of f and the compactness of K. J f −f Lp (K) →0.
2. Consequently.1.p (Ω) ≤ 2i+1 i = 0. 1. for suﬃciently small . Let 1 Ωi = {x ∈ Ω  dist(x. 3. obvious ηi f ∈ W k. x ∈ Ω. ∀x ∈ K. ∂Ω) > }. ηi ∈ C0 (Oi ) ∞ i=0 ηi (x) = 1.3. we ﬁx a point x in K.p (Ω). i = 1. Given any function f ∈ W k. ∂Ω). 2. The Proof of Theorem 1. Fix a δ > 0. This completes the proof of the Theorem 1.3. 1 To see this. Choose < 2 dist(K. Ω = Here we have used the fact that j (x − y) and all its derivatives are supported in B (x) ⊂ Ω. we have J (Dα f )→Dα f in Lp (K).2 Step 1.16) . · · · ¯i ) suppfi ⊂ (Ωi+4 \ Ω i = 1. i=0 ∞ 0 ≤ ηi (x) ≤ 1.14 1 Introduction to Sobolev Spaces Dα (J f )→Dα f By the result in Step 2. Dα (J f )(x) = J (Dα f )(x). · · · i ¯ Write Oi = Ωi+3 \ Ωi+1 . Now what left to verify is. then any point y in the ball B (x) is in the interior of Ω. For a Bounded Region Ω. Dα (J f )(x) = Ω α Dx j (x − y)f (y)dy α Dx j (x − y)f (y)dy B (x) α Dy j (x − y)f (y)dy B (x) = = (−1)α = B (x) j (x − y)Dα f (y)dy j (x − y)Dα f (y)dy. (1. Choose i > 0 so small that fi := J i (ηi f ) satisﬁes δ fi − ηi f W k. · · · (1. i=0 Choose a smooth partition of unity {ηi }∞ associated with the open sets i=0 {Oi }∞ . Choose some open set O0 ⊂⊂ Ω so that Ω = ∪∞ Oi .15) in Lp (K).p (Ω) and supp(ηi f ) ⊂ Oi .
1 In the above argument. such that .3. Show that the series ∞ in X if i vi < ∞.p (Ω). but ∞ their diﬀerence converges.p (Ω) ≤ Cδ. Then how did we prove that g ∈ W k. Remark 1.1. we write ∞ g−f = i=0 (fi − ηi f ).p (Ω)? We showed that the diﬀerence (g − f ) is in W k.16) that ∞ ∞ fi − η i f i=0 W k. Hint: Show that the partial sum is a Cauchy sequence.3.3 Approximation by Smooth Functions ∞ 15 Set g = i=0 fi . x ∈ Rn \ BR (0). from the above inequality. because each fi is in C ∞ (Ω). Moreover.17) Choose a cut oﬀ function φ ∈ C ∞ (Rn ) satisfying φ(x) = 1.p (Ω) (see Exercise 1. and for each open set O ⊂⊂ Ω. by the argument in Step 1.17). 2i+1 From here we can see that the series converges in W k. such that f W k. neither fi nor ηi f converge.3. we complete step 1.p the inﬁnite series fi does not converge to g in W (Ω). x ∈ BR−2 (0).p (Ω). and Dα φ(x) ≤ 1 ∀x ∈ Rn .p ≤ δ.18) Now in the bounded domain Ω ∩ BR (0). Exercise 1.p (Ω) ≤δ i=0 1 = δ. Since δ > 0 is any number. ∞ i vi converges Step 2. hence (g − f ) ∈ W (Ω). To see that g ∈ W k.p (Ω). Let X be a Banach space and vi ∈ X. For Unbounded Region Ω. there exists a constant C.p (Ω\BR−2 (0)) ≤ δ. Then by (1.p (Ω) ∞ i=0 (fi −ηi f ) k. there are at most ﬁnitely many nonzero terms in the sum. it is easy to verify that. Given any δ > 0.1 . Then g ∈ C ∞ (Ω). (1. 0. (1. k. such that φf − f W k.p (Ω). there exist R > 0. and therefore g ∈ W k. ∀α ≤ k.1 below). since f ∈ W k. we have g−f W k. Although each partial sum of fi is in C0 (Ω). there is a function g ∈ C ∞ (Ω ∩ BR (0)). It follows from (1.
p (Ω) ≤ φg − φf ≤ C1 g − f W k.p (Ω∩B ≤ (C1 + C)δ. More precisely. we cover Ω with ﬁnitely many open sets. Step 1. Approximating in a Small Set Covering ∂Ω. This completes the proof of the Theorem 1. Set D = Ω ∩ Br/2 (xo ). for a suﬃciently small r > 0. so that the ball B (x) lies in Ω ∩ Br (xo ) for all x ∈ D and for all small > 0. we have Dα (J f ) − Dα f Lp (D) ≤ Dα (J f ) − Dα f Lp (D) + Dα f − Dα f Lp (D) . we can make a C 1 change of coordinates locally.p (Ω) W k. and by (1. · · · xn−1 )} with some C 1 function Φ.3.3. To circumvent this. · · · . . the main diﬃculty here is that for a point on the boundary.3 We will still use the molliﬁers to approximate a function. we will translate the function a little bit inward. As compared to the proofs of the previous theorem. we ﬁrst translate f a distance in the xn direction to become f (x) = f (x ). Let xo be any point on ∂Ω. there is room to mollify a given W k. This D is obtained by shifting D toward the inside of Ω by a units. for any multiindex α ≤ k. and on each set that covers the boundary layer. so that in the new coordinates system (x1 .p (Ω) + φf − f + Cδ R (0)) W k.18) and (1.p (D). xn ).p function f on D within Ω.p (Ω∩BR (0)) ≤ δ. Br (xo ) ∩ Ω = {x ∈ Br (xo )  xn > φ(x1 . deﬁne x = x + a en . and D = {x  x ∈ D}. Since ∂Ω is C 1 . so that there is room to mollify within Ω. φg − f W k. then mollify it. Now.2.19). Choose a suﬃciently large.16 1 Introduction to Sobolev Spaces g−f W k. there is no room to mollify.19) Obviously. Shift every point x ∈ D in xn direction a units. we can express. We claim that J f →f in W k. Actually. The Proof of Theorem 1. Then we will again use the partition of unity to complete the proof. (1. the function φg is in C ∞ (Ω).
(1. Deﬁne i=0 N g= i=0 ηi gi . Then by a direct computation.3. there exists a sequence of numbers {Rm } with Rm →∞.3. such that gi − f W k. while the second term also vanishes in the process due to the continuity of the translation in the Lp norm.20) and (1.1 implies that the ﬁrst term on the right hand side goes to zero as →0.p (Di ) = C(N + 1)δ.p (D0 ) ≤ δ. · · · .21) Let {ηi } be a smooth partition of unity subordinated to the open sets {Di }N . This completes the proof of the Theorem 1. for each Di . (1.p (Rn ) →0. and f = i=0 ηi f. and select a function i=0 ¯ g0 ∈ C ∞ (D0 ) such that g0 − f W k. for each α ≤ k N Dα g − Dα f Lp (Ω) ≤ i=0 N Dα (ηi gi ) − Dα (ηi f ) gi − f i=0 Lp (Di ) ≤C W k. Step 2.20) Choose an open set D0 ⊂⊂ Ω such that Ω ⊂ ∪N Di . (1. 1 . we can ﬁnd ﬁnitely many such sets D. The Proof of Theorem 1. the union of which covers ∂Ω. Similar to the proof of Theorem 1. call them Di . between 0 and 1 . Applying the Partition of Unity. there exists gi ∈ C ∞ (Di ). Given δ > 0. N ¯ Then obviously g ∈ C ∞ (Ω). such that for . as R→∞.3.4 ∞ Let φ(r) be a C0 cut oﬀ function such that for 0 ≤ r ≤ 1. from the argument ¯ in Step 1. 2. φ(r) = 0. it follows from (1.3. N . Since ∂Ω is compact.22) Thus.21) that. i = 1.p (Di ) ≤ δ.1. one can show that φ( x )f (x) − f (x) R W k.3. for 1 < r < 2.1. for r ≥ 2.3 Approximation by Smooth Functions 17 A similar argument as in the proof of Theorem 1.
we can just ﬁrst work on smooth functions and then extend them to the whole Sobolev spaces.3. Rm ≤ 1 . Theorem 1. from the Approximation Theorem 1.23) and (1. when we derive various inequalities in Sobolev spaces. Assume T :D→Y ¯ is a bounded linear map. In order to make such extensions more conveniently. Let Y be a Banach space.5 Let D be a dense linear subspace of a normed space X.24). such that for fm := J m (gm ). whole space X.p (Rn ) ≤ 2 →0.3. we have Obviously. we prove the following Operator Completion Theorem in general Banach spaces. (1.23) m On the other hand. Hence there exist m. Then there exists an extension T of T from D to the ¯ is a bounded linear operator from X to Y .p (Rn ). there exists a sequence {xi } ∈ D. for each ﬁxed gm − f W k.p (Rn ) fm − gm 1 .p . such that xi →xo . Proof. without going through approximation process in each particular space. W k. W k. which show that smooth functions are dense in Sobolev spaces W k. in W k.18 1 Introduction to Sobolev Spaces gm (x) := φ( we have x )f (x). Given any element xo ∈ X.p (Rn ) + gm −f W k.4. particularly in the next three sections. Now we have proved all the four Approximation Theorems. (1.24) m ∞ is in C0 (Rn ).p (Rn ) ≤ ≤ fm −gm W k.p is the completion of C k under the norm · W k. as i→∞.3. such that T ¯ T = T ¯ and T x = T x ∀ x ∈ D. J (gm )→gm . as m→∞. . m This completes the proof of Theorem 1.3. In other words. as →0.p (Rn ) m. and by (1. each function fm fm −f W k. Later. since D is dense in X.p .
This completes the proof.3 Approximation by Smooth Functions 19 It follows that T xi − T xj ≤ T xi − xj →0. As a direct application of this Operator Completion Theorem. it is easy to show that. we prove the following Extension Theorem.p set O ⊃ Ω.e.1. ¯ Hence T is also bounded. there exists a bounded linear operator E : W k. Proof. (1. xn ) ∈ Rn  x < 1. {T xi } converges to some element yo in Y . ¯ Now we prove the theorem for u ∈ C k (Ω). xn < 0. We deﬁne E(u) in the following two steps. since C k (Ω) is dense in W k. i=0 . xn > 0}.25) To see that (1.p (Ω) → W0 (O) such that Eu = u a. Since Y is a Banach space.25) is well deﬁned. E(u)(x) = u(x) almost everywhere on Ω based on ¯ ¯ E(u)(x) = u(x) on Ω ∀ u ∈ C k (Ω). The special case when Ω is a half ball + Br (0) := {x = (x . Deﬁne u(x .3.25). in Ω. Then y1 − yo = lim T xi − T xi ≤ Climi→∞ xi − xi = 0. for x ∈ D. −λi xn ) .6 Assume that Ω is bounded and ∂Ω is C k . We extend u to be a C k function on the whole ball Br (0).p (Ω) (see Theorem 1. This implies that {T xi } is a Cauchy sequence in Y . suppose there is another sequence {xi } that converges to xo in X and T xi →y1 in Y . as i. xn ) = ¯ u(x . xn ≥ 0 k ai u(x . j→∞. Step 1.p (Ω). T ¯ T xo = lim T xi ≤ lim T i→∞ i→ ∞ xi = T xo . Theorem 1. Then we can apply the Operator Completion Theorem to extend E ¯ to W k. Let ¯ T xo = yo . and for other x ∈ X.3).3.p (Ω). Moreover Obviously. xn ) . To deﬁne the extension operator E. for u ∈ W k. we ﬁrst work on functions u ∈ ¯ C k (Ω). From this density. i→∞ ¯ ¯ Now. deﬁne T x by (1. ¯ is a linear operator. + Assume u ∈ C k (Br (0)). deﬁne T x = T x. then for any open k.
From Step 1. i i=0 to determine a0 . Dm and a corresponding partition of unity η0 . there exists a o k neighborhood U of x and a C transformation φ : U →Rn which satisﬁes that φ(xo ) = 0. ak . such that Bro (0) ⊂⊂ φ(U ). · · · m. i = 1. xn ) ¯ Then solve the algebraic system k ai λj = 1 . we ﬁrst pick λi . ¯ Let O be an open set containing Ω. · · · . ηm . such that 0 < λ0 < λ1 < · · · < λk < 1. a1 . D1 . And let ui (y) = u(φ−1 (y) be the function deﬁned on the half ball ˜ i + ¯ Bri (0) = φi (Di ∩ Ω) . and hence the system has a unique solution. · · · . 1. All such Dxo and Ω forms an open ¯ covering of Ω. Given any xo ∈ ∂Ω. and φ(U ∩ Ω) is n contained in R+ . Then there exists an ro > 0. each ui (y) can be extended as a C k function ui (y) onto the ˜ ¯ whole ball Bri (0). One can easily verify that. such that and ∞ ηi ∈ C0 (Di ) i = 0. j = 0. 1. the extended function u is in C k (Br (0)). Choose ro suﬃciently small so that Dxo ⊂ O. Now. −1 Dxo := φ (Bro (0)) is an open set containing xo . η1 . · · · . ¯ Step 2. for such λi and ai . and φ ∈ C k (Dxo ). hence there exist a ﬁnite subcovering D0 := Ω. u . ∂ j u(x . i=0 Let φi be the mapping associated with Di as described in Step 1. Reduce the general domains to the case of half ball covered in Step 1 via domain transformations and a partition of unity. we can deﬁne the extension of u from Ω to its neighborhood O as m E(u) = i=1 ηi (x)¯i (φi (x)) + η0 u(x). · · · . 2. m m ηi (x) = 1 ∀ x ∈ Ω. · · · .20 1 Introduction to Sobolev Spaces To guarantee that all the partial derivatives up to the order k are ∂xj n continuous across the hyper plane xn = 0. Since the coeﬃcient determinant det(λj ) is not i zero. φ(U ∩ ∂Ω) lies on the hyper plane xn = 0. k.
¯ Moreover.p . we have m m E(u)(x) = i=1 m ηi (x)¯i (φi (x)) + η0 (x)u(x) = u i=1 ηi (x)˜i (φi (x)) + η0 (x)u(x) u m = = ηi (x)u(φ−1 (φi (x))) i i=1 m + η0 (x)u(x) = i=1 ηi (x)u(x) + η0 (x)u(x) ηi (x) u(x) = u(x). as compare to the previous approximation theorems.p (Ω) . Remark 1. and 21 E(u) W k.1.3.p (Ω) →0 . Let O be an open ¯ set containing Ω.p (Ω). From the Extension Theorem.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem under stronger assumption on ∂Ω (be C k instead of C 1 ). one can derive immediately Corollary 1. .p (O) ≤C u W k.1 Assume that Ω is bounded and ∂Ω is C k . we start with functions in a Sobolev space W k. Then it is natural that we would like to know whether the functions in this space also automatically belong to some other spaces. and for each u ∈ W k. Then the linear operator ∞ J (E(u)) : W k. Here.3.p (Ω)→C0 (O ) is bounded. the improvement is that one can write out the explicit form of the approximation. Let O := {x ∈ Rn  dist(x. E(u) ∈ C0 (O). as →0.4 Sobolev Embeddings ∞ Obviously. we have J (E(u)) − u W k. for any x ∈ Ω. O) ≤ }.4 Sobolev Embeddings When we seek weak solutions of partial diﬀerential equations. The following theorem answers the question and at meantime provides inequalities among the relevant norms. 1. i=0 This completes the proof of the Extension Theorem.
27) Rn Du(y)p dy. Assume Ω is bounded and has a C 1 boundary. And to control the Lq norm for larger q by W 1.p (Ω).γ p (Ω) ≤C u W k.26) holds. we start with the smooth functions with compact supports in Rn . Here. we have 1 λn λp Duλ p dx = n λ Rn uλ (x)q dx = u(y)q dy. if n p n p is not an integer is an integer . and what is more meaningful is to ﬁnd out how large this q can be.4. We would like to know for what value of q.26) ∞ with constant C independent of u ∈ C0 (Rn ). (1.p (Ω). one would expect more. Now suppose (1. Then it must also be true for the rescaled function of u: uλ (x) = u(λx). We ﬁrst consider the functions in W 1. [b] is the integer part of the number b. such n u where γ= C k−[ n ]−1.γ (Ω). Rn . For simplicity. if p p any positive number < 1.p norm–the norm of the derivatives Du Lp (Ω) = Ω Du dx p 1 p –would suppose to be more useful in practice. From the deﬁnition. The proof of the Theorem is based upon several simpler theorems. Let u ∈ W k. (1. Naturally. that is uλ By substitution. then u ∈ Lq (Ω) with 1 = p − n and there exists a constant p q C such that u Lq (Ω) ≤ C u W k. can we establish the inequality u Lq (Rn ) ≤ C Du Lp (Rn ) . 1 k (i) If k < n .1 (General Sobolev inequalities).p (Ω) (ii) If k > that n p. and Rn Lq (Rn ) ≤ C Duλ Lp (Rn ) .p (Ω) 1 + n − n. apparently these functions belong to Lq (Ω) for 1 ≤ q ≤ p. then u ∈ C k−[ p ]−1.22 1 Introduction to Sobolev Spaces Theorem 1. and there exists a constant C.
1 However. n−p It turns out that this condition is also suﬃcient. p∗ := n−p →∞. . we must require p < n.1. n. Ω). it is necessarily that the power of λ here be zero. n). For n = 1. Unfortunately. Theorem 1. from x u(x) = −∞ u (x)dx.2 (GagliardoNirenbergSobolev inequality). such that u Lp∗ (Ω) ≤C u W 1.4. Suppose that 1 ≤ p < n and u ∈ W 1. Assume that 1 ≤ p < n. Since any function in W 1. as will be stated in the following theorem. we derive immediately that ∞ u(x) ≤ −∞ u (x)dx.27) that u Lq (Rn ) ≤ Cλ1− p + q Du n n Lp (Rn ) . Here for q to be positive.29) np In the limiting case as p→n.p (Ω). open subset of Rn with C 1 boundary. such that 1 u Lp∗ (Rn ) ≤ C Du Lp (Rn ) .p (Ω) .p (Rn ) can be approached by a sequence of func1 tions in C0 (Rn ). it is false. for n > 1. Then one may suspect that u ∈ L∞ when p = n.4.28) where p∗ = np n−p . u ∈ C0 (Rn ) (1.n (Ω).3 Assume that Ω is a bounded. Then there exists a constant C = C(p. Therefore. in order that C to be independent of u. this is true only in dimension one. but not to L∞ (Ω).p (Rn ). For functions in W 1. we can extended them to be W 1. This is a more delicate situation. It belongs to W 1. that is q= np . and we will deal with it later.28) holds for all functions u in W 1. Then u is in Lp∗ (Ω) and there exists a constant C = C(p.1 Inequality (1.4 Sobolev Embeddings 23 It follows from (1. we derive immediately Corollary 1. (1. One counter example is u = log log(1 + x ) on Ω = B1 (0).p (Rn ) functions by Extension Theorem and arrive at Theorem 1.4.p (Ω).
3.4.28) are called ‘soft analyseses’. Applying Theorem 1.1 and Theorem 1.28). Take the supremum over all pairs x. and the steps leading from (1.2). j → ∞.4. First. Secondly.2 is the ‘key’ and inequality (1. we reduce the proof to functions with enough smoothness (which is just Theorem 1. It follows that u(y) − u(x) y − x1− p 1 ∞ ≤ −∞ u (t) dt p 1 p .p (Rn ). This is o indeed true in general. Given any function u ∈ W 1. for p > n.p (Rn ) .2.2. First. by the Approximation Theorem. y ∈ R1 with x < y.27) to (1.4. as i. y in R1 . for any x. the proof of (1. To get some rough idea what these spaces might be. one sees that Theorem 1. we consider the special case where Ω = Rn . by H¨lder inequality. we will prove Theorem 1.γ (Rn ) ≤C u W 1. Instead of dealing with functions in W k. Assume n < p ≤ ∞. we will show the ‘soft’ parts ﬁrst and then the ‘hard’ ones. Here.27) there provides the foundation for the proof of the Sobolev embedding.p . Obviously.4 (Morrey’s inequality).4.p (Ω) to W 1.γ (R1 ) with γ = 1 − p . Then. and we have Theorem 1.4.p (Rn ) → 0 as k → ∞. one would expect W 1. we deal with general domains by extending functions u ∈ W 1.24 1 Introduction to Sobolev Spaces Naturally.2 is true and derive Corollary 1. u ∈ C 1 (Rn ) ≤ C ui − uj W 1.4. p To establish an inequality like (1.p (Rn ) via the Extension Theorem. Often. let us ﬁrst consider the simplest case when n = 1 and p > 1. n) such that u where γ = 1 − n . there ∞ exists a sequence {uk } ⊂ C0 (Rn ) such that u − uk W 1.4. . we follow two basic principles.4. and consequently. we have y u(y) − u(x) = x u (t)dt. In the following. The Proof of Corollary 1. we obtain ui − uj Lp∗ (Rn ) C 0. Then there exists a constant C = C(p.p (Rn ) → 0.27) is called a ‘hard analysis’. we will assume that Theorem 1. o y y u(y) − u(x) ≤ x u (t)dt ≤ x u (t) dt p 1 p y 1 1− p · x dt .p to embed into better spaces.1. the left hand side of the above 1 inequality is the norm in the H¨lder space C 0.4.
moreover.3.1 (General H¨lder Inequality) Assume that o ui ∈ Lpi (Ω) for i = 1. to apply inequality (1. Now for functions in W 1. More precisely. such that ˜ u = u.31) to uγ for a properly chosen γ > 1 to extend the inequality to the case when p > 1.30) Now we can apply the GagliardoNirenbergSobolev inequality to u to derive ˜ u Lp∗ (Ω) ≤ u ˜ Lp∗ (O) ≤C u ˜ W 1. for 1. n.2. The Proof of Theorem 1.3. we prove u Rn n n−1 n−1 n dx ≤ Rn Dudx. ˜ almost everywhere in Ω. there exists a function u in W0 (O).p (O) ≤ CC1 u W 1.p (Rn ) =C u W 1. We ﬁrst establish the inequality for p = 1.p (Ω) . (1.4.27). by the Extension Theorem (Theorem 1.32) .31) Then we will apply (1. We need Lemma 1. The Proof of Theorem 1. O). there exists a constant C1 = C1 (p. we ﬁrst extend them to be functions with compact supports in Rn . This completes the proof of the Theorem.p every u in W 1. let O be an open set that covers Ω. Consequently.p (Ω). m and 1 1 1 + + ··· + = 1. · · · . 2.6). Ω.p (Ω).1.p (Rn ) .p (Ω) .4. (1. p1 p2 pm m Then u1 u2 · · · um dx ≤ Ω ui pi dx i Ω 1 pi . This completes the proof of the Corollary.p (O) ≤ C1 u W 1.4. we arrive at u Lp∗ (Rn ) ∗ = lim uk Lp∗ (Rn ) ≤ lim C uk W 1.4 Sobolev Embeddings 25 Thus {uk } also converges to u in Lp (Rn ). (1. i. such that u ˜ W 1.e.
· · · . we have u(x) = It follows that u(x) ≤ −∞ x1 −∞ ∞ ∂u (y1 . x2 )dy1 . 2. Now. we ﬁrst derive inequality (1. −∞ The above two inequalities together imply that ∞ ∞ u(x)2 ≤ −∞ Du(y1 . We write u(x) = Consequently.33). ∂y1 Du(y1 . we arrive at (1. yi . we have u(x) ≤ ∞ Du(x1 . Step 1. y2 )dy2 . · · · n. xn )dyi . xi+1 . we prove 2 u(x)2 dx ≤ R2 R2 Dudx .31) for n = 2. i = 1. y2 )dy2 . · · · . And it follows that n u(x) n n−1 ∞ ≤ i=1 −∞ Du(x1 . that is.32). Integrating both sides with respect to x1 and applying the general H¨lder o inequality (1. · · · n. x2 )dy1 · −∞ Du(x1 .33) Since u has a compact support. i = 1. · · · . xi−1 . · · · . Then we deal with the general situation when n > 2. ∞ xi −∞ ∂u (x1 . xn )dyi . yi . we obtain ∞ u −∞ n n−1 ∞ 1 n−1 ∞ n ∞ 1 n−1 dx1 ≤ −∞ ∞ Dudy1 −∞ i=2 1 n−1 Dudyi −∞ ∞ n ∞ −∞ dx1 1 n−1 ≤ −∞ Dudy1 i=2 −∞ Dudx1 dyi . yi . ∂yi u(x) ≤ −∞ Du(x1 . To better illustrate the idea. 2.26 1 Introduction to Sobolev Spaces The proof can be obtained by applying induction to the usual H¨lder inequalo ity for two functions. The case p = 1. · · · . Now we are ready to prove the Theorem. Similarly. · · · . xn )dyi 1 n−1 . integrate both sides of the above inequality with respect to x1 and x2 from −∞ to ∞. x2 )dy1 . . (1.
we obtain o u n−1 dx ≤ Rn Rn n n n−1 Dudx . . Exercise 1. The Case p > 1. we deduce ∞ ∞ ··· −∞ ∞ −∞ u n−1 dx1 · · · dxn−1 ≤ ∞ 1 n−1 n ∞ ∞ 1 n−1 ··· −∞ ∞ −∞ ∞ Dudy1 dx2 · · · dxn−1 −∞ 1 n−1 ··· −∞ Dudx1 dy2 dx3 · · · dxn−1 1 n−1 ··· × ··· −∞ −∞ Dudx1 · · · dxn−2 dyn−1 Rn Dudx .4 Sobolev Embeddings 27 Then integrate the above inequality with respect to x2 . xn−1 . · · · . We have ∞ −∞ ∞ −∞ ∞ −∞ ∞ 1 n−1 u n−1 dx1 dx2 ≤ ∞ −∞ ∞ 1 n−1 n n ∞ −∞ ∞ 1 n−1 Dudx1 dy2 −∞ −∞ Dudy1 · i=3 −∞ Dudx1 dyi dx2 .31). Finally. and by the H¨lder inequality. we have o u Rn γn n−1 n−1 n dx ≤ Rn D(uγ )dx = γ Rn uγ−1 Dudx Du Rn γn γ+n−1 γ+n−1 γn ≤γ Rn u γn n−1 (γ−1)(n−1) γn dx dx .1. Again. we arrive at o ∞ −∞ ∞ −∞ ∞ −∞ n ∞ 1 n−1 u n−1 dx1 dx2 ≤ ∞ −∞ ∞ −∞ ∞ ∞ 1 n−1 n Dudy1 dx2 −∞ ∞ −∞ −∞ 1 n−1 Dudx1 dy2 . Applying (1.31) to the function uγ with γ > 1 to be chosen later.1 Write your own proof with all details for the cases n = 3 and n = 4. × Dudx1 dx2 dyi i=3 −∞ Continuing this way by integrating with respect to x3 . This veriﬁes (1. Step 2. applying the General H¨lder Inequality.4. integrating both sides with respect to xn and applying the general H¨lder inequality.
we verify (1. We will carry the proof out in three steps. and hence y = x + sω. s = x − y. so that γn = p.28 1 Introduction to Sobolev Spaces It follows that u Rn γn n−1 n−1 γn dx ≤γ Rn Du γn γ+n−1 γ+n−1 γn dx .34) and (1. (1.36). n−p 1 p we obtain u n−p dx Rn np n−p np ≤γ Rn Dup dx . x − y s u(x + sω) − u(x) ≤ 0 Du(x + τ ω)dτ. Step 1. that is γ+n−1 γ= p(n − 1) . The Proof of Theorem 1. (1. Now choose γ. We start from 1 u(y) − u(x) = 0 d u(x + t(y − x))dt = dt s Du(x + τ ω) · ωdτ. 0 where ω= It follows that y−x . y − xn−1 (1. and in Step 2 and Step 3.4 (Morrey’s inequality). Integrating both sides with respect to ω on the unit sphere ∂B1 (0). This completes the proof of the Theorem. respectively. We will establish two inequalities sup u ≤ C u Rn W 1. we prove (1. then converting the integral on the right hand side from polar to rectangular coordinates.35) Both of them can be derived from the following 1 Br (x) u(y) − u(x)dy ≤ C Br (x) Br (x) Du(y) dy.p (Rn ) . In Step 1. we obtain .36) where Br (x) is the volume of Br (x).34) and sup x=y u(x) − u(y) ≤ C Du x − y1−n/p Lp (Rn ) .4.35).
37). integrating with respect to s from 0 to r.36) and the H¨lder inequality. we arrive at u(y) − u(x)dy ≤ Br (x) rn n Br (x) Du(y) dy.38) ≤ C1 Rn Dup dy .4 Sobolev Embeddings s 29 u(x + sω) − u(x)dσ ≤ ∂B1 (0) 0 s ∂B1 (0) Du(x + τ ω)dσdτ Du(x + τ ω) 0 ∂B1 (0) = = Bs (x) τ n−1 dσdτ τ n−1 Du(z) dz. For each ﬁxed x ∈ Rn . . and taking into account that the integrand on the right hand side is nonnegative.39). Also it is obvious that I2 ≤ C u Lp (Rn ) . Step 2. (1. and hence x − y (n−1)p p−1 is ﬁnite. x − yn−1 This veriﬁes (1. By (1.39) Now (1. we have u(x) = 1 B1 (x) 1 ≤ B1 (x) = I1 + I2 .36). (n−1)p p−1 Here we have used the condition that p > n. x − zn−1 Multiplying both sides by sn−1 . we deduce o I1 ≤ C B1 (x) u(x)dy B1 (x) u(x) − u(y)dy + B1 (x) 1 B1 (x) u(y)dy B1 (x) (1.34) is an immediate consequence of (1.1. (1.38). so that the integral dy B1 (x) < n.37) Du(y) dy x − yn−1 1/p ≤C Rn Dup dy B1 (x) 1/p dy x − y (n−1)p p−1 p−1 p dy (1. and (1.
The Proof of Theorem 1. we have 1 D u(x) − u(z)dz ≤ D C Br (x) u(x) − u(z)dz Br (x) 1/p p ≤C Rn Du dz Br (x) Lp (Rn ) .40).p (Ω) .p (Ω) and u W k−1.4. dz x − z (n−1)p p−1 p−1 p = Cr1−n/p Du Similarly. This can be done by applying Theorem 1. For any pair of ﬁxed points x and y in Rn . (1.4.p (Ω). Again denote p∗ = n−p .43).3. (1.p (Ω) and u where p∗∗ = W k−2.3 again to W k−1.35) and thus completes the proof of the Theorem.4. (1.30 1 Introduction to Sobolev Spaces Step 3.3 successively on q np 1 1 the integer k. we have Dα u ∈ Lp (Ω). Dα u ∈ ∗ ∗ ∗ W 1.p∗ (Ω) ≤ C1 u W k. and (1. n (i) Assume that u ∈ W k.p∗ (Ω) ≤ C2 C1 u W k.43) 1 k with 1 = p − n . we have u ∈ W k−2.p∗∗ (Ω) ∗ ∗∗ ≤ C2 u W k−1. By Theorem 1. or 1 1 1 1 1 1 1 2 = ∗ − =( − )− = − . Applying Theorem 1.p (Ω) . We want to show that u ∈ p Lq (Ω) and u Lq (Ω) ≤ C u W k.41).40) Again by (1. thus u ∈ W k−1. This implies (1. Then u(x) − u(y) ≤ 1 D u(x) − u(z)dz + D 1 D u(z) − u(y)dz. np∗ n−p∗ . then p1 = p − n . For α ≤ k−1.p (Ω) .1 (the General Sobolev Inequality).42) Now (1.p (Ω) with k < . D (1. we arrive at (1.42) yield u(x) − u(y) ≤ Cx − y1−n/p Du Lp (Rn ) . ∗∗ p p n p n n p n Continuing this way k times. 1 D (1. let r = x − y and D = Br (x) ∩ Br (y).p (Ω).36).4.41) u(z) − u(y)dz ≤ Cr1−n/p Du D Lp (Rn ) . .
p For this choice of m.44) we require that p > n. C 0. u C k− n −1. p Obviously. the smallest such integer m is m= n .γ p (Ω) C 0. This completes the proof of the Theorem. when n is not an integer. and in this case. we can apply Morrey’s inequality (1. W k. q can be any number p p > n.4 Sobolev Embeddings 31 (ii) Now assume that k > u n . we will try to ﬁnd a smallest integer m. More precisely.p (Ω) . we have m = . That is.p (Ω) → W k−m. p γ =1− n n n =1+ − . with any α ≤ k − m − 1 to obtain Dα u Or equivalently. Recall that in the Morrey’s inequality p ≤C u W 1. in our situation.p (Ω) . we want q= and equivalently.γ (Ω) (1. However.q (Ω). [ ] ≤C u Here.q (Ω) ≤ C1 C2 u W k.p (Ω) . q p p n n is an integer. with q > n.1. n − mp n − 1. While .44) to Dα u. we can use the result in the previous step.γ (Ω) ≤ C1 u W k−m. which implies that γ can be any positive number < 1. m> np > n. To remedy this. We can ﬁrst decrease the order of diﬀerentiation to increase the power of summability. this condition is not necessarily met. such that W k.
as stated below. We will show the strong convergence of this sequence in three steps with the help of a family of molliﬁers uk (x) = Ω j (y)uk (x − y)dy.1 (Ω) possesses a convergent subsequence in L1 (Ω).32 1 Introduction to Sobolev Spaces 1. ∀ u ∈ W 1. This proves the Theorem. The ﬁrst part–inequality (1. it is also bounded in W 1. In fact. there is a subsequence (still denoted by {uk }) that converges strongly to uo in L1 (Ω). Proof. Now. due to the strict inequality on the exponent.p (Ω) is embedded into Lp (Ω) np with p∗ = n−p .1. assume that {uk } is a bounded sequence in W 1. we conclude immediately that {uk } converges strongly to uo in Lq (Ω). in the sense that i) there is a constant C. we proved that W 1.p (Ω) into Lq (Ω) is still true. Let {uk } be a bounded sequence in W 1. On the other hand.5.p (Ω). since p ≥ 1 and Ω is bounded. if the region Ω is bounded.1 (RellichKondrachov Compact Embedding).p (Ω) is compactly imbedded into Lq (Ω): W 1.5 Compact Embedding In the previous section. such that u Lq (Ω) ∗ ≤C u W 1. Assume that Ω is a bounded open subset in Rn with C 1 boundary ∂Ω. by Lemma 1. By the Sobolev embedding. Suppose 1 ≤ p < n.1 (Ω).p (Ω). the embedding of W 1. We now come back to prove the Lemma.5. for q < p∗ .p (Ω) possesses a convergent subsequence in Lq (Ω).p (Ω). (1. . What we need to show is that if {uk } is a bounded sequence in W 1. This can be derived immediately from the following Lemma 1.p (Ω). {uk } is bounded in Lp∗ (Ω).1 (Ω).p (Ω) → → Lq (Ω).5.1 Every bounded sequence in W 1.45)– is included in the general Embedding Theorem. We postpone the proof of the Lemma for a moment and see how it implies the Theorem. W 1. Then there exists a subsequence (still denoted by {uk }) which converges weakly to an element uo in W 1. Theorem 1. Actually. Applying the H¨lder inequality o uk − uo Lq (Ω) ≤ uk − uo θ L1 (Ω) uk − uo 1−θ Lp∗ (Ω) .45) and ii) every bounded sequence in W 1. Obviously. one can expect more. Then for each 1 ≤ q < p∗ .p (Ω) . then it possesses a convergent subsequence in Lq (Ω).
we prove that (1. we show that uk →uk in L1 (Ω) as →0 . 2.48) It follows that uk − uk L1 (G) →0 . · · · . we have uk (x) − uk (x) = B1 (0) 1 j (y)[uk (x − y) − uk (x)]dy j (y) B1 (0) 0 = =− d uk (x − ty)dt dy dt 1 j (y) B1 (0) 0 Duk (x − ty)dt y dy. corresponding to the above convergent sequence {uk }. then for all x ∈ Rn and for all k = 1. that Ω = Rn . Based on the Extension Theorem. Since every W 1. ≤ G Duk (z)dz ≤ (1. Finally.1 (G) . for all k = 1. (1. Integrating with respect to x and changing the order of integration.49) Step 2. for each ﬁxed > 0. the sequence of functions {uk } all have compact support in a bounded open set G ⊂ Rn .5 Compact Embedding 33 First.50) . as →0 . and uk W 1. Step 1. Now ﬁx an have > 0. we obtain 1 uk − uk L1 (G) ≤ B1 (0) j(y) 0 G Duk (x − ty)dx dt dy uk W 1. we extract diagonally a subsequence of {uk } which converges strongly in L1 (Ω). · · ·. we may assume.1 function can be approached by a sequence of smooth functions. uniformly in k. Then.1.47) (1.46) there is a subsequence of {uk } which converges uniformly . 2. we j (x − y)uk (y)dy B (x) L∞ (Rn ) uk (x) ≤ ≤ j uk L1 (G) ≤ C n < ∞. uniformly in k. we may also assume that each uk is smooth. From the property of molliﬁers. without loss of generality. (1.1 (G) ≤ C < ∞ .
1. Therefore. u33 . as i. 3. Pick the diagonal subsequence from the above: {uii } ⊂ {uk }.51) imply that.52) Step 3. Then select the corresponding subsequence from {uk }: u11 . (1. · · · .p Assume Ω is bounded. 1/3.51) (1. and denote the corresponding subsequence that satisﬁes (1. because uii −ujj L1 (G) 1/i 1/k 1/k 1/k ≤ uii −uii 1/i L1 (G) + uii −ujj 1/i 1/j L1 (G) + ujj −ujj 1/j L1 (G) →0 . for each ﬁxed > 0. the sequence {uk } is uniformly bounded and equicontinuous. by the ArzelaAscoli Theorem (see the Appendix). due to the fact that each of the norm on the right hand side →0. · · · . Now choose to be 1. (1. 2. Then u Lp (Ω) ≤ C Du Lp (Ω) .6. Suppose u ∈ W0 (Ω) for some 1 ≤ p ≤ ∞.52) by uk1 . (1.1 Poincar´’s Inequality e For functions that vanish on the boundary.53) . uk2 . · · · .6 Other Basic Inequalities 1. it possesses a subsequence (still denoted by {uk } ) which converges uniformly on G. 1/k.50) and (1. e 1. for k = 1. j→∞. This is our desired subsequence. u22 .6. This completes the proof of the Lemma and hence the Theorem. Duk (x) ≤ C n+1 < ∞.i→∞ lim uj − ui L1 (G) = 0. · · · successively. uk3 · · · . 1/2. we have Theorem 1.34 1 Introduction to Sobolev Spaces Similarly. in particular j.1 (Poincar´’s Inequality I).
we must have vo ≡ 0. o the one we present in the following is a uniﬁed proof that works for both this and the next theorem.p (Ω). {vk } is bounded in W 1. for each φ ∈ C0 (Ω). Ω vφxi dx = lim k →∞ Ω vk φxi dx = − lim k→∞ vk.xi φdx = 0. a.55) .e. ∞ Thus vo is a constant.2 (Poincar´ Inequality II). For functions in W 1. Ω).6.54) and therefore completes the proof of the Theorem. However. we may assume that {uk } ⊂ C0 (Ω). ∀ u ∈ W 1.p (Ω). Consequently. From the compact embedding results in the previous section. For convenience. {vk } converges strongly in Lp (Ω) to vo . and Dvk p →0 .6. Then uk p vk p = 1 . and therefore vo p = 1. one can easily prove it by using the Sobolev innq equality u Lp ≤ u Lq with p = n−q and the H¨lder inequality. p. Assume 1 ≤ p ≤ ∞. Proof. we have another version of Poincar´’s inequality.6 Other Basic Inequalities 35 Remark 1. as k→∞. (1. Let u be the average of u on Ω. while uk p →∞ .p (Ω) and hence possesses a subsequence (still denoted by {vk }) that converges weakly to some vo ∈ W 1. one can take an equivalent 1. 1.p ∞ ∞ Since C0 (Ω) is dense in W0 (Ω). e Let Ω be a bounded. Suppose inequality 1.53) does not hold. Ω It follows that Dvo (x) = 0 . Taking into account that vk ∈ C0 (Ω). 0 ii) Just for this theorem.1 i) Now based on this inequality.p (Ω).54) ∞ On the other hand.p (1. and open subset in Rn with C 1 boundary. connected. uk Let vk = . we abbreviate u Lp (Ω) as u p . such that u−u ¯ p ≤ C Du p . then there exists a sequence {uk } ⊂ W0 (Ω). (1. Then there exists a ¯ constant C = C(n.1. such that Duk p = 1 . This contradicts with (1.p (Ω) = Du Lp (Ω) .p norm of W0 (Ω) as u W 1. e Theorem 1. which may not be zero on the the boundary.
and where f r := f Lr (Rn ) . 1] 1 for x ∈ [2.3 (HardyLittlewoodSobolev Inequality).56) where C(n. Then one can see obviously that ∞ f (x) = 0 χ{f >a} (x) da ∞ (1. Instead of letting uk vk = . Proof. Then f (x)x − y−λ g(y)dxdy ≤ C(n. s.2 The connectness of Ω is essential in this version of the inequality. 1] ∪ [2. Without loss of generality.2 The Classical HardyLittlewoodSobolev Inequality Theorem 1. and u(x) = −1 for x ∈ [0. we choose uk p uk − uk ¯ .58) (1. Let 0 < λ < n and s. 3] .57) (1.6.6. r s n Assume that f ∈ Lr (Rn ) and g ∈ Ls (Rn ). Ω = [0.36 1 Introduction to Sobolev Spaces The proof of this Theorem is similar to the previous one. Let 1 if x ∈ G χG (x) = 0 if x ∈G be the characteristic function of the set G. λ)f r gs Rn Rn (1. s. A simple counter example is when n = 1. λ) = nB n λ/n (n − λ)rs λ/n 1 − 1/r λ/n + λ/n 1 − 1/s λ/n with B n  being the volume of the unit ball in Rn . vk = uk − uk p ¯ Remark 1. 3].59) g(x) = 0 ∞ χ{g>b} (x) db x−λ = λ 0 c−λ−1 χ{x<c} (x) dc . 1. r > 1 such that 1 1 λ + + = 2.6. we may assume that both f and g are nonnegative and f r = 1 = g s .
56). respectively. b. (1.60) u(c) = B n cn . and let v(a) = Rn χ{f >a} (x) dx . u(c)v(a).61) It is easy to see that I(a. c) dc 0 ≤ u(c)≤v(a) c−λ−1 w(b)u(c) dc + u(c)>v(a) c−λ−1 w(b)v(a) dc . c) ≤ Rn Rn χ{x<c} (x − y)χ{g>b} (y) dx dy u(c)χ{g>b} (y) dy = u(c)w(b) Rn ≤ Similarly. b.62) We integrate with respect to c ﬁrst.6 Other Basic Inequalities 37 To see the last identity. b.59) into the left hand side of (1. w(b) = Rn χ{g>b} (y) dy. one may ﬁrst write ∞ x−λ = 0 χ{x−λ >˜} (x) d˜. b.1. (1. v(a)w(b)}. c) dc db da. we have ∞ c−λ−1 I(a. ˜ Substituting (1. the volume of the ball of radius c. we have I := Rn f (x)x − y−λ g(y) dx dy Rn ∞ ∞ 0 ∞ ∞ 0 0 0 ∞ ∞ = λ 0 Rn Rn c−λ−1 χ{f >a} (x)χ{x<c} (x − y)χ{g>b} (y) dx dy dc db da c−λ−1 I(a. Then we can express the norms as ∞ ∞ f r r =r 0 ar−1 v(a) da = 1 and g s s =s 0 bs−1 w(b) db = 1. b. the measure of the sets {x  f (x) > a} and {y  g(y) > b}. one can show that I(a. c c and then let c = c−λ . c) is bounded above by other pairs and arrive at I(a. 0 = λ Let (1.58). By (1. c) ≤ min{u(c)w(b).57). and (1.62). (1.
64) ≤ B n λ/n v(a)w(b)1−λ/n λ(n − λ) In view of (1.63). we split the bintegral into two parts.65) To estimate the ﬁrst integral in (1.60).65) is bounded above by λ n − s(n − λ) λ/n 0 ∞ ∞ 1−λ/n v(a)ar−1 da 0 w(b)bs−1 db = 1 rs λ/n 1 − 1/r λ/n . By virtue of (1.66) since mn/λ < 1. b. (1.64). we also obtain ∞ c−λ−1 I(a. (1. and (1. we derive I ≤ × 0 n B n λ/n n−λ ∞ ar/s ∞ ∞ v(a) 0 w(b)1−λ/n db da + 0 v(a)1−λ/n ar/s w(b) db da.61). c) dc 0 ≤ u(c)≤w(b) c−λ−1 v(a)u(c) dc + u(c)>w(b)) c−λ−1 w(b)v(a) dc (1. (1.65). (1.63) Exchanging v(a) with w(b) in the second line of (1.67) . we use H¨lder inequality with o m = (s − 1)(1 − λ/n) ar/s w(b)1−λ/n bm b−m db 0 ar/s 1−λ/n ar/s λ/n ≤ 0 ar/s w(b)b s−1 db 0 1−λ/n b −mn/λ db ≤ 0 w(b)b s−1 db · ar−1 . one from 0 to ar/s and the other from ar/s to ∞. It follows that the ﬁrst integral in (1.38 1 Introduction to Sobolev Spaces (v(a)/B n )1/n = w(b)B  n 0 c−λ−1+n dc + w(b)v(a) (v(a)/B n )1/n c−λ−1 dc = B n λ/n B n λ/n w(b)v(a)1−λ/n + w(b)v(a)1−λ/n n−λ λ B n λ/n = w(b)v(a)1−λ/n λ(n − λ) (1.63).
1.6 Other Basic Inequalities
39
To estimate the second integral in (1.65), we ﬁrst rewrite it as
∞ bs/r
w(b)
0 0
v(a)1−λ/n da db,
then an analogous computation shows that it is bounded above by 1 rs λ/n 1 − 1/s
λ/n
.
(1.68)
Now the desired HardyLittlewoodSobolev inequality follows directly from (1.65), (1.67), and (1.68). Theorem 1.6.4 (An equivalent form of the HardyLittlewoodSobolev inequality) n Let g ∈ Lp (Rn ) for n−α < p < ∞. Deﬁne T g(x) =
Rn
x − yα−n g(y)dy.
Then Tg
p
≤ C(n, p, α) g
np n+αp
.
(1.69)
Proof. By the classical HardyLittlewoodSobolev inequality, we have < f, T g >=< T f, g >≤ C(n, s, α) f where < ·, · > is the L2 inner product. Consequently, Tg where
p r
g
s
,
= sup < f, T g > ≤ C(n, s, α) g s ,
f
r =1
1 p 1 r
+ +
1 r 1 s
=1 = n+α . n
Solving for s, we arrive at s=
np . n + αp
This completes the proof of the Theorem. Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev inequality, let’s rewrite it as Tg with
nq n−αq
≤C g
q
(1.70)
40
1 Introduction to Sobolev Spaces
1 < q :=
np n < . n + αp α
Let u = T g. Then one can show that (see [CLO]), (− ) 2 u = g. Now inequality (1.70) becomes the Sobolev one u
nq n−αq α
≤ C (− ) 2 u q .
α
2 Existence of Weak Solutions
2.1 2.2 2.3
Second Order Elliptic Operators Weak Solutions Method of Linear Functional Analysis 2.3.1 2.3.2 2.3.3 Linear Equations Some Basic Principles in Functional Analysis Existence of Weak Solutions Semilinear Equations Calculus of Variations Existence of Minimizers Existence of Minimizers under Constrains Minimax Critical Points Existence of a Minimax via the Mountain Pass Theorem
2.4
Variational Methods 2.4.1 2.4.2 2.4.3 2.4.4 2.4.5 2.4.6
Many physical models come with natural variational structures where one can minimize a functional, usually an energy E(u). Then the minimizers are weak solutions of the related partial diﬀerential equation E (u) = 0. For example, for an open bounded domain Ω ⊂ Rn , and for f ∈ L2 (Ω), the functional 1  u2 dx − f u dx E(u) = 2 Ω Ω
1 possesses a minimizer among all u ∈ H0 (Ω) (try to show this by using the knowledge from the previous chapter). As we will see in this chapter, the minimizer is a weak solution of the Dirichlet boundary value problem
42
2 Existence of Weak Solutions
− u = f (x) , x ∈ Ω u(x) = 0 , x ∈ ∂Ω. In practice, it is often much easier to obtain a weak solution than a classical one. One of the seven millennium problem posed by the Clay Institute of Mathematical Sciences is to show that some suitable weak solutions to the NavieStoke equation with good initial values are in fact classical solutions for all time. In this chapter, we consider the existence of weak solutions to second order elliptic equations, both linear and semilinear. For linear equations, we will use the LaxMilgram Theorem to show the existence of weak solutions. For semilinear equations, we will introduce variational methods. We will consider the corresponding functionals in proper Hilbert spaces and seek weak solutions of the equations associated with the critical points of the functionals. We will use examples to show how to ﬁnd a minimizer, a minimizer under constraint, and a minimax critical point. The wellknown Mountain Pass Theorem is introduced and applied to show the existence of such a minimax. To show that a weak solution possesses the desired diﬀerentiability so that it is actually a classical one is called the regularity method, which will be studied in Chapter 3.
2.1 Second Order Elliptic Operators
Let Ω be an open bounded set in Rn . Let aij (x), bi (x), and c(x) be bounded functions on Ω with aij (x) = aji (x). Consider the second order partial diﬀerential operator L either in the divergence form
n n
Lu = −
(aij (x)uxi )xj +
i,j=1 i=1
bi (x)uxi + c(x)u,
(2.1)
or in the nondivergence form
n n
Lu = −
i,j=1
aij (x)uxi xj +
i=1
bi (x)uxi + c(x)u.
(2.2)
We say that L is uniformly elliptic if there exists a constant δ > 0, such that
n
aij (x)ξi ξj ≥ δξ2
i,j=1
for almost every x ∈ Ω and for all ξ ∈ Rn . In the simplest case when aij (x) = δij , bi (x) ≡ 0, and c(x) ≡ 0, L reduces to − . And, as we will see later, that the solutions of the general second order elliptic equation Lu = 0 share many similar properties with the harmonic functions.
2.2 Weak Solutions
43
2.2 Weak Solutions
Let the operator L be in the divergence form as deﬁned in (2.1). Consider the second order elliptic equation with Dirichlet boundary condition Lu = f , x ∈ Ω u = 0 , x ∈ ∂Ω (2.3)
When f = f (x), the equation is linear; and when f = f (x, u), semilinear. Assume that, f (x) is in L2 (Ω), or for a given solution u, f (x, u) is in 2 L (Ω). ∞ Multiplying both sides of the equation by a test function v ∈ C0 (Ω), and integrating by parts on Ω, we have
n n
Ω i,j=1
aij (x)uxi vxj +
i=1
bi (x)uxi v + c(x)uv dx =
Ω
f vdx.
(2.4)
There are no boundary terms because both u and v vanish on ∂Ω. One can see that the integrals in (2.4) are well deﬁned if u, v, and their ﬁrst derivatives are 1 square integrable. Write H 1 (Ω) = W 1,2 (Ω), and let H0 (Ω) the completion of ∞ 1 C0 (Ω) in the norm of H (Ω):
1/2
u =
Ω
(Du2 + u2 )dx
.
1 1 Then (2.4) remains true for any v ∈ H0 (Ω). One can easily see that H0 (Ω) 1 is also the most appropriate space for u. On the other hand, if u ∈ H0 (Ω) satisﬁes (2.4) and is second order diﬀerentiable, then through integration by parts, we have 1 (Lu − f )vdx = 0 ∀v ∈ H0 (Ω). Ω
This implies that
Lu = f , ∀x ∈ Ω.
And therefore u is a classical solution of (2.3). The above observation naturally leads to Deﬁnition 2.2.1 The weak solution of problem (2.3) is a function u ∈ 1 1 H0 (Ω) that satisﬁes (2.4) for all v ∈ H0 (Ω). Remark 2.2.1 Actually, here the condition on f can be relaxed a little bit. By the Sobolev embedding H 1 (Ω) → L n−2 (Ω),
2n
j=1 aij (x)uxi vxj + i=1 bi (x)uxi v + c(x)uv dx 1 deﬁned on H0 (Ω). While the right hand side of (2. please see [Sch1].3 Methods of Linear Functional Analysis 2.3. (2. u) = up for any p ≤ n−2 generally n+2 f (x. which will be used to obtain the existence of weak solutions. v] =< f. which is the left hand side of (2.2 Some Basic Principles in Functional Analysis Here we list brieﬂy some basic deﬁnitions and principles of functional analysis. v] = Ω i. ∀u. Deﬁnition 2. ∀u. then for any u ∈ H 1 (Ω). ∀ v ∈ H0 (Ω). f only need to be in 2n n+2 or more L n+2 (Ω). 2n 2. if f (x. v >:= Ω f vdx. x ∈ Ω u=0. 2. In the semilinear case. to ﬁnd a weak solution of (2. we introduce the bilinear form n n B[u.3. < f. Now. u) is in L n+2 (Ω). normed linear space with norm · that satisﬁes (i) u + v ≤ u + v .5). f (x. x ∈ ∂Ω. it is equivalent to show that there 1 exists a function u ∈ H0 (Ω).6) This can be realized by representation type theorems in Functional Analysis. we can apply representation type theorems in Linear Functional Analysis. such that 1 B[u. (iii) u = 0 if and only if u = 0. λ ∈ R. v >.3. (ii) λu = λ u . such as Riesz Representation Theorem or LaxMilgram Theorem.44 2 Existence of Weak Solutions 2n we see that v is in L n−2 (Ω).5) to ﬁnd its weak solutions. To this end. For more details. v ∈ X. (2.1 Linear Equations For the Dirichelet problem with linear equation Lu = f (x) . and hence by duality.4) may be regarded as a linear 1 functional on H0 (Ω). u) ≤ C1 + C2 u n−2 .4) in the deﬁnition of the weak solutions.1 A Banach space X is a complete. . v ∈ X.
and (iii) H¨lder spaces C 0. ∀w ∈ M.γ (Ω) . . there is a v ∈ M . u) = 0 if and only if u = 0. o Deﬁnition 2. (ii) A linear operator A : X→Y is bounded provided A := sup u X ≤1 Au Y < ∞. Deﬁnition 2. a.3. (ii) Sobolev spaces W k. (ii) the mapping u → (u. w) = 0. u >:= f [u] to denote the pairing of X ∗ and X. (2. u) ≥ 0. v) = Ω [u(x)v(x) + Du(x) · Dv(x)]dx. Lemma 2. denoted by X ∗ . H = M ⊕ M ⊥ := {u ∈ H  u⊥M }.3. ∀u. Then for every u ∈ H.3. v) = Ω u(x)v(x)dx. u).7) In other words.p (Ω) with norm u W k. (iii) (u. Let M be a closed subspace of H.3 (i) A mapping A : X→Y is a linear operator if A[au + bv] = aA[u] + bA[u] ∀ u. ·) which generates the norm u := (u. such that (u − v.3 Methods of Linear Functional Analysis 45 The examples of Banach spaces are: (i) Lp (Ω) with norm u Lp (Ω) . and (iv) (u. Examples of Hilbert spaces are (i) L2 (Ω) with inner product (u. The collection of all bounded linear functionals on X. ∀u ∈ H. is called the dual space of X. v ∈ X. u)1/2 with the following properties (i) (u. We use < f. and 1 (ii) Sobolev spaces H 1 (Ω) or H0 (Ω) with inner product (u. v ∈ H.2.γ (Ω) with norm u C 0.1 (Projection). (iii) A bounded linear operator f : X→R1 is called a bounded linear functional on X.2 A Hilbert space H is a Banach space endowed with an inner product (·.p (Ω) . v) = (v. v) is linear for each v ∈ H. b ∈ R1 .
iv) There exists v. v ∈ H and (ii) m u 2 ≤ B[u. and H ∗ be its dual space. The readers may ﬁnd the proof of the Lemma in the book [Sch] ( Theorem 13) or as given in the following exercise. more precisely. Based on this Projection Lemma. Let H be a real Hilbert space with inner product (·. Let T be a linear operator from H to R1 and T ≡0. ∀v ∈ H. ∀u ∈ H. for each u∗ ∈ H ∗ .3.2 Prove the Riesz Representation Theorem in 4 steps. there exists a unique element u ∈ H. Then for each bounded linear functional f on H.2 (LaxMilgram Theorem). Then T v = (u. Show that i) K(T ) := {u ∈ H  T u = 0} is closed. v) . one can prove Theorem 2. u]. Assume that there exist constants M. Exercise 2. there exists a unique element u ∈ H.3. such that B[u. v∈M Hint: Show that a minimizing sequence is a Cauchy sequence. there exist a vo ∈ M such that vo − u = inf v − u . From Riesz Representation Theorem.3. ii) H = K(T ) ⊕ K(T )⊥ . ∀w ∈ M. m > 0. ii) Show that (u − vo . v) ∀v ∈ H. such that < u∗ . v] =< f.46 2 Existence of Weak Solutions Here v is the projection of u on M . ·). Let B : H × H→R1 be a bilinear mapping. ∀u ∈ H. The readers may see the book [Sch1] for its proof or do it as the following exercise. The mapping u∗ → u is a linear isomorphism from H ∗ onto H. This is a wellknown theorem in functional analysis. Then H ∗ can be canonically identiﬁed with H. iii) The dimension of K(T ) is 1.3. such that T v = 1. one can derive the following Theorem 2. .1 (Riesz Representation Theorem). v >= (u. w) = 0 . v] ≤ M u v . Let H be a real Hilbert space with norm · . such that (i) B[u.1 i) Given any u ∈ H and u ∈M . v >. ∀u. Exercise 2.
any a1 . 2. for any u ∈ H. v >= (f . a2 ∈ R1 . If B[u. v] = a1 (Au1 . i. v] = a1 B[u1 .2. B[u. such that ˜ < f. there exists a unique element u ∈ H. for a given bounded ˜ linear functional f on H. u2 ∈ H. Au] ≤ M u Au .9). In part (a).10) ˜ From (2.11) ≤ B[u. v] can be made as an inner product in H. Au and consequently Au ≤ M u ∀u ∈ H. u] = (Au. v] may not be symmetric. (2. (2. for each ﬁxed element u ∈ H. and each v ∈ H. then the conditions (i) and (ii) ensure that B[u. we prove that it is onetoone and onto. v] is symmetric. we have (A(a1 u1 + a2 u2 ).e. On one hand. by condition (i). such that ˜ Au = f . we apply condition (ii): m u 2 2 = (Au. i. and hence there exist a u ∈ H. ∀v ∈ H. there is an f ∈ H. v) Hence A(a1 u1 + a2 u2 ) = a1 Au1 + a2 Au2 . such ˜ that B[u. we proceed as follows. Moreover. and the conclusion of the theorem follows directly from the Reisz Representation Theorem. v) = B[(a1 u1 + a2 u2 ). This veriﬁes that A is bounded. v] + a2 B[u2 . and (2. v) = (a1 Au1 + a2 Au2 .8). u) ≤ Au u . v] = B[v. (2. (a) For any u1 .3 Methods of Linear Functional Analysis 47 Proof. (b) To see that A is onetoone. u (2. We carry this out in two parts. ∀v ∈ H. by the Riesz Representation Theorem. v) + a2 (Au2 . ·] is also a bounded linear functional on H. It suﬃce to show that for each f ∈ H. Au) = B[u. v] = (˜.8) On the other hand.9) Denote this mapping from u to u by A. and in part (b). we show that A is a bounded linear operator. B[u.e. v). u]. ˜ (2. . In the case B[u. 1. ˜ A : H→H. by condition (i). A is linear. Au = u.10). v).
48 2 Existence of Weak Solutions and it follows that m u ≤ Au . one can also see that R(A).3. such that 0 = (u − v.3 Existence of Weak Solutions Now we explore that in what situations. Therefore R(A) = H. i. our second order elliptic operator L would satisfy the conditions (i) and (ii) requested by the LaxMilgram Theorem. Now if Au = Av. v)H := Ω DuDvdx and the corresponding norm u := Ω H Du2 dx. by (2. R(A) is closed. This completes the proof of the Theorem. By Poincar´ inequality. o . such that Auk = vk . the range of A in H is closed. by the Projection Lemma.11). To show that R(A) = H.1. 1 Let H = H0 (Ω) be our Hilbert space with inner product (u. m u − v ≤ Au − Av . assume that vk ∈ R(A) and vk →v in H.3. v) := Ω (DuDv + uv)dx. there exists a v ∈ R(A). 2n For any v ∈ H. Hence A is onetoone. Consequently. For any u ∈ H. there is a uk . Then for each vk . In fact. then from (2.12). u = v ∈ R(A) and hence H ⊂ R(A). From (2. by the Sobolev Embedding. Auk →Au and v = Au ∈ H. Now.12). v is in L n−2 (Ω). uk is a Cauchy sequence in H.12) implies that ui − uj ≤ C vi − vj . Inequality (2. Therefore. A(u − v)) = B[(u − v). 2. and hence uk →u for some u ∈ H. we need the Projection Lemma 2. (2.e.12) This implies u = v. And by virtue of the H¨lder inequality. we can use the equivalent inner product e (u. (u − v)] ≥ m u − v 2 .
∀v ∈ H. u] = u 2 H. we have . v] ≤ u Condition (ii) is also satisﬁed: B[u. and the L∞ norm of bi (x) and c(x). condition (i) is met: B[u. c L∞ (Ω) ≤ C. v >:= Ω f (x)v(x)dx ≤ Ω f (x) 2n n+2 n+2 2n dx Ω v(x) 2n n−2 dx .j=1 aij (x)uxi vxj + i=1 Under the assumption that aij L∞ (Ω) . the Dirichlet problem − u = f (x). x ∈ Ω. one can easily verify that o B[u. First from the uniform ellipticity condition. we have B[u. 2n In general.3 Methods of Linear Functional Analysis 49 n−2 2n < f. v] = Ω 2n Du · Dvdx. v >. Obviously. However. and by H¨lder inequality. We have actually proved Theorem 2. v] =< f. H v H. Now applying the LaxMilgram Theorem. In the simplest case when L = − . hence we only need to assume that f ∈ L n+2 (Ω). i. v] ≤ 3C u H v H. bi L∞ (Ω) . u(x) = 0. we conclude that there exists a unique u ∈ H such that B[u. It depends on the sign of c(x). x ∈ ∂Ω 1 has a unique weak solution u in H0 (Ω).3. B[u.3 For every f (x) in L n+2 (Ω). Hence condition (i) is always satisﬁed. v] = Ω n n bi (x)uxi v + c(x)uv dx.2. condition (ii) may not automatically hold.
Others are saddle points. To seek weak solutions of . Hence we have Theorem 2.1 Semilinear Equations To ﬁnd the weak solutions of semilinear equations Lu = f (x. then condition (ii) is met.2 Calculus of Variations For a continuously diﬀerentiable function g deﬁned on Rn .4 Assume that bi (x) ≡ 0 and c(x) satisﬁes (2. Roughly speaking.13) for some δ > 0. u) = 0 1 2 (Lu · u)dx − Ω u Ω F (x. s)ds.3. From here one can see that if bi (x) ≡ 0 and c(x) ≥ 0. we will associate the equation with the functional J(u) = where F (x. u) the LaxMilgram Theorem can no longer be applied. Actually. the condition on c(x) can be relaxed a little bit.14) 2. u)dx f (x. We show that the critical point of J(u) is a weak solution of our problem.4. the problem Lu = f (x) x ∈ Ω u(x) = 0 x ∈ ∂Ω 1 has a unique weak solution in H0 (Ω).4.4 Variational Methods 2. 2. Then for 2n every f ∈ L n+2 (Ω). a critical point is a point where Dg vanishes. for instance. Then we will focus on how to seek critical points in various situations. then condition (ii) holds.14). if c(x) ≥ −δ/2 with δ as in (2.50 2 Existence of Weak Solutions n aij (x)uxi uxj dx ≥ δ Ω i. (2.13). We will use variational methods to obtain the existence. (2.j=1 Ω Du2 dx = δ u 2 H. The simplest sort of critical points are global or local maxima or minima.
J(u + tv) = and 1 2 D(u + tv)2 dx − Ω Ω i. Ω 1 Since this is true for any v ∈ H0 (Ω). then we will demonstrate that u is actually a weak solution of our problem. let us start from a simple example: − u = f (x) . To illustrate the idea. Now suppose that there is some function u which happen to be a minimizer 1 of J(·) in H0 (Ω).16) This implies that u is a weak solution of problem (2. d J(u + tv) t=0 = 0. (2. Consider the realvalue function g(t) = J(u + tv). x ∈ Ω. we conclude − u − f (x) = 0. Here. if u is also second order diﬀerentiable. . Consequently.15) f (x)udx 1 in H0 (Ω). we obtain [− u − f (x)] vdx = 0. then through integration by parts. u(x) = 0 . ∀v ∈ H0 (Ω). explicitly.15). g (0) = 0 yields Du · Dvdx − Ω Ω 1 f (x)vdx = 0. we must have g (0) = 0. Furthermore. we study the functional J(u) = 1 2 Du2 dx − Ω Ω (2. dt f (x)(u + tv)dx. and therefore.e. the function g(t) has a minimum at t = 0. 1 Let v be any function in H0 (Ω). t ∈ R. Since u is a minimizer of J(·). To ﬁnd weak solutions of the problem. d J(u + tv) = dt D(u + tv) · Dvdx − Ω Ω f (x)vdx. we generalize this concept to inﬁnite dimensional spaces. x ∈ ∂Ω.2.4 Variational Methods 51 partial diﬀerential equations.
any point that satisﬁes d J(u + tv) t=0 . To this end. or a saddle point of the functional.52 2 Existence of Weak Solutions Therefore.3 Existence of Minimizers In the previous subsection. Then our next question is: Does the functional J actually possess a critical point? We will show that there exists a minimizer of J. i) We say that J is Frechet diﬀerentiable at u ∈ X if there exists a continuous linear map L = L(u) from X to X ∗ satisfying: For any > 0.1 One can verify that. 2. u is a classical solution of the problem. here u need not to be a minima. dt More precisely. and iii) weak lower semicontinuity.4. v >  ≤ where < L(u). there is a δ = δ( . or more generally. Remark 2. v >= lim t→0 d J(u + tv) − J(u) = J(u + tv) t=0 . t dt v X whenever v X < δ. u). i) Boundedness from Below. v >= 0 ∀v ∈ X. we have seen that the critical points of the functional 1 J(u) = Du2 dx − f (x)udx 2 Ω Ω are weak solutions of the Dirichlet problem (2. . if J is Frechet diﬀerentiable at u. From the above argument.15). ii) A critical point of J is a point at which J (u) = 0. then < J (u). the readers can see that. such that J(u + v) − J(u)− < L(u). in order to be a weak solution of the problem. iii) We call J (u) = 0 the EulerLagrange equation of the functional J. that is < J (u).4. we verify that J possesses the following three properties: i) boundedness from below. it can be a maxima.4.1 Let J be a linear functional on a Banach space X. we have Deﬁnition 2. ii) coercivity. v >:= L(u)(v) is the duality between X and X ∗ . The mapping L(u) is usually denoted by J (u).
uo > as k→∞ for every bounded linear functional f on X. ii) Coercivity.2. then xk goes away to inﬁnity.2 Let X be a real Banach space. 1 + x2 Obviously. from the second part of (2. then we say that {uk } converges weakly to uo in X. uk > → < f. Actually. A functional that is bounded from below does not guarantee that it has a minimum. Again. and write uk uo . i. This implies that any minimizing sequence must be bounded in H. then J(uk ) must also become unbounded. then a bounded minimizing sequence {uk } would possesses a convergent subsequence.4 Variational Methods 53 1 First we prove that J is bounded from below in H := H0 (Ω) if f ∈ L2 (Ω). If we take a minimizing sequence {xk } such that g(xk )→1. x ∈ R1 .e. the inﬁmum of g(x) is 1. This suggests that in order to prevent a minimizing sequence from ‘leaking’ to inﬁnity. iii) Weak Lower Semicontinuity. for simplicity. then J(uk )→∞.17) Therefore. And this is indeed true for our functional J.17). L 2 2 L2 (Ω) (2. A simple counter example is g(x) = 1 . the functional J is bounded from below. we use the equivalent norm in H: u H = Ω Du2 dx. if a sequence {uk } goes to inﬁnity. but it can never be attained. and the limit would be a minimizer of J. That is. This is no longer true in inﬁnite dimensional spaces. If H is a ﬁnite dimensional space. if u H →∞. If a sequence {uk } ⊂ X satisﬁes < f. we have e o J(u) ≥ 1 u 2 − C u H f L2 (Ω) H 2 1 C2 2 = u H − C f L2 (Ω) − f 2 2 C2 ≥− f 2 2 (Ω) . one can see that if uk H →∞. We therefore turn our attention to weak topology. By Poincar´ and H¨lder inequalities. Deﬁnition 2.4. However. Therefore a minimizing sequence would be retained in a bounded set. . we need to have some sort of ‘coercive’ condition on our functional J.
and therefore H is reﬂexive. we only need a weaker one: J(uo ) ≤ lim inf J(uki ).54 2 Existence of Weak Solutions Deﬁnition 2.) Let X be a reﬂexive Banach space. u). there is a unique v ∈ H.4. However. uo is a minimizer.4. we wish that the functional J we consider is continuous with respect to this weak convergence. 1 Now let’s come back to our Hilbert space H := H0 (Ω). It follows that H ∗ = H. Theorem 2. we do not really need such a continuity. and hence possesses a subsequence {uki } that converges weakly to an element uo in H: (uki .4. instead. if for every weakly convergent sequence uk we have J(uo ) ≤ lim inf J(uk ). k →∞ uo . v). by the deﬁnition of a minimizing sequence. this is not the case with our functional. v)→(uo . that is i→ ∞ lim J(uki ) = J(uo ). where X ∗ is the dual of X. such that < f. i→∞ and hence J(uo ) = lim inf J(uki ). now a minimizing sequence {uk } is bounded. then uo would be our desired minimizer. Naturally. . i→∞ Since on the other hand. which converges weakly in X. u >= (v. in X. Deﬁnition 2. By the Reisz Representation Theorem. we obviously have J(uo ) ≥ lim inf J(uki ). ∀u ∈ H. By the Coercivity and Weak Compactness Theorem. Fortunately.1 (Weak Compactness.3 A Banach space is called reﬂexive if (X ∗ )∗ = X. ∀v ∈ H. Assume that the sequence {uk } is bounded in X. Then there exists a subsequence of {uk }. i→∞ Therefore.4 We say that a functional J(·) is weakly lower semicontinuous on a Banach space X. nor is it true for most other functionals of interest. for every linear functional f on H.
(2. x ∈ ∂Ω. Here the second term on the right goes to zero due to the weak convergence uk uo . x ∈ ∂Ω. So far.2 Assume that Ω is a bounded domain with smooth boundary. Therefore lim inf k→∞ Duk 2 dx ≥ Ω Ω Duo 2 dx.18) and (2. it f uo dx.19) imply the weakly lower semicontinuity.20) .4. with 1 < p < n+2 n−2 . and uk Since for each ﬁxed f ∈ L n+2 (Ω). x ∈ Ω. (2. x ∈ Ω u(x) = 0 . we have proved Theorem 2. which is a weak solution of the boundary value problem − u = f (x) . as k→∞.4 Variational Methods 55 Now we show that our functional J is weakly lower semicontinuous in H. 2. Ω f udx is a linear functional on H.2. 2n Then for every f ∈ L n+2 (Ω). u(x) = 0.19) Now (2.4 Existence of Minimizers Under Constraints Now we consider the semilinear Dirichlet problem − u = up−1 u. we have Duk 2 dx ≥ Ω Ω Duo 2 dx + 2 Ω Duo · (Duk − Duo )dx. follows immediately f uk dx→ Ω Ω 2n uo in H.4. Assume that {uk } ⊂ H. the functional J(u) = 1 2 Du2 dx − Ω Ω f udx 1 possesses a minimum uo in H0 (Ω). (2.18) From the algebraic inequality a2 + b2 ≥ 2ab.
this functional is not bounded from below. Ω up+1 dx = 1}. a critical point of the functional J in H := H0 (Ω) is a weak solution of (2. u∈M It follows that Ω Duk 2 dx is bounded. hence {uk } is bounded in H.e. By the Weak Compactness Theorem. by the Compact Sobolev Embedding n−2 H 1 (Ω) → → Lp+1 (Ω). u ≡ 0 is a solution. We call this a trivial solution. The weak lower semicontinuity of the functional I yields I(uo ) ≤ lim inf I(uk ) = m. Of course. since p+1 < Theorem (2. Let {uk } ⊂ M be a minimizing sequence. i.56 2 Existence of Weak Solutions Obviously. there exist a subsequence (for simplicity. Ω 1 2 Du2 dx − Ω 1 p+1 up+1 dx. . as t→∞.21) 2n . which converges weakly to uo in H. For this reason. Therefore. Ω Noticing p + 1 > 2. Let J(u) = It is easy to verify that d J(u + tv) t=0 = dt Du · Dv − up−1 uv dx. k →∞ lim I(uk ) = inf I(u) := m. ﬁxed an element u ∈ H and consider J(tu) = t2 2 Du2 dx − Ω tp+1 p+1 up+1 dx. what we are more interested is whether there exists nontrivial solutions. We seek minimizers of I in M. instead of J. we will consider another functional I(u) := in the constrain set M := {u ∈ H  G(u) := Ω 1 2 Du2 dx. To see this. we still denote it by {uk }). Ω 1 Therefore. there is no minimizer of J(u). we ﬁnd that J(tu)→ − ∞.20) However. k →∞ On the other hand.
let’s ﬁrst recall the Lagrange Multiplier for functions deﬁned on Rn . It can be decomposed as the sum of two perpendicular vectors: Df (xo ) = Df (xo ) N +Df (xo ) T . I (u) is a linear functional on H. v∈M Then there exists a real number λ such that I (u) = λG (u). To link uo to a weak solution of (2. Together with (2. By deﬁnition. xo being a critical point of f (x) on the level set g(x) = c means that the tangential projection Df (xo ) T = 0.22) for some constant λ. Let f (x) and g(x) be smooth functions in Rn . Before proving the theorem. At a ﬁxed point u ∈ H. where the two terms on the right hand side are the projection of Df (xo ) onto the normal and tangent spaces of the level set g(x) = c at point xo . Theorem 2. Geometrically. let u be a minimizer of I in M. and therefore I(uo ) ≥ m. While the normal projection Df (xo ) N is parallel to Dg(xo ).e. I(u) = min I(v). ∀v ∈ H. Ω That is uo ∈ M .4 Variational Methods 57 we ﬁnd that uk converges strongly to uo in Lp+1 (Ω). Df (xo ) is a vector in Rn . v >= λ < G (u). Now we have shown the existence of a minimizer uo of I in M . or < I (u).2. we may generalize this perception into inﬁnite dimensional space H. we obtain I(uo ) = m. i. v >.22). Heuristically. and we therefore have (2.20).21). It is well known that the critical points ( in particular. By the Reisz Representation . the minima ) xo of f (x) under the constraint g(x) = c satisfy Df (xo ) = λDg(xo ) (2.3 (Lagrange Multiplier). we derive the corresponding EulerLagrange equation for this minimizer under the constraint. and hence uo p+1 dx = 1.4.
and g(t. such that < I (u). for any v ∈ H. and hence at the minimum u of J on M . we can no longer use (2. ∂g (0. we consider g(t. according to the Implicit Function Theorem. s(t)) = 1.24) . we have d I(u + tv) t=0 . I (u) must be perpendicular to the tangent space at point u. we have G (u) ∈ H. We now prove this rigorously. there exists w (may just take w as u). we must have d I(u + tv + s(t)w) t=0 =< I (u). ∀v ∈ H. Similarly for G (u).3. To remedy this. such that the right hand side of the above is not zero. we can not guarantee that G(u + tv) := Ω u + tvp+1 dx = 1 for all small t. and it is a normal vector of M at point u. for suﬃciently small t. such that s(0) = 0. it can be identiﬁed as a point (or a vector) in H. s) := G(u + tv + sw). w >= (p + 1) ∂s up−1 uwdx. 0) =< G (u). Then we can calculate the variation of J(u + tv + s(t)w) as t→0. w > s (0) = 0. For each small t.23) dt Now u is only the minimum on M . v >= (I (u). Ω Since u ∈ M . dt (2.4. and hence I (u) = λG (u). there exists a C 1 function s : R1 →R1 . denoted by I (u). In fact. v > + < I (u). Now u + tv + s(t)w is on M . v). (2.58 2 Existence of Weak Solutions Theorem. Recall that if u is the minimum of J in the whole space H. At a minimum u of I on the hyper surface {w ∈ H  G(w) = 1}. The Proof of Theorem 2.23) because u + tv can not be kept on M . Ω up+1 dx = 1. Hence. we try to show that there is an s = s(t). such that G(u + tv + s(t)w) = 1. then.
For this reason. while along the direction of yaxis. y) = x2 − y 2 from R2 to R1 . ∀v ∈ H. (2. Df (0.20). w > s (0).20) in H. we also call (0. v >= λ < G (uo ). Our minimizer uo of I under the constraint G(u) = 1 satisﬁes < I (uo ).4 Variational Methods 59 On the other hand.4. 2. w > < I (u). 0)s (0) dt ∂t ∂s = < G (u). Then it is easy to verify that D˜ · Dvdx = u Ω Ω ˜p−1 uvdx. For a simple example. v >. < G (u).5 Minimax Critical Points Besides local or global minima and maxima. The way to locate or to show . w > Consequently s (0) = − Substitute this into (2. y). This is possible ˜ since p > 1. we have 0= dg ∂g ∂g (0. v > + < G (u). y) = (0.2.24). ∀v ∈ H.e.0) a minimax critical point of f (x. 0) + (0. that is Duo · Dvdx = λ Ω Ω uo p−1 uo vdx. 0) is a critical point of f . Let u = auo with λ/ap−1 = 1.25) One can see that λ > 0. (0. v >. (x. 0) = (0. u ˜ Now u is the desired weak solution of (2. If we go on the graph along the direction of xaxis. However. This completes the proof of the Theorem. it is neither a local maximum nor a local minimum. let’s consider the function f (x. and denote λ= we arrive at < I (u). w > .25). y) in R3 looks like a horse saddle. ∀v ∈ H. ˜ Exercise 2. 0) = 0.4. < G (u). < G (u). there are other types of critical points: saddle points or minimax points. v >= λ < G (u). The graph of z = f (x.0) is the minimum. Now let’s come back to the weak solution of problem (2.1 Show that λ > 0 in (2.0) is the maximum. v > . i. (0.
γ(1) = e}. R1 )  γ(0) = 0. Suppose we don’t know whether f (x. To illustrate the main idea. we will apply a similar idea to seek its minimax critical points. we pick up two points on xy plane.1] where Γ = {γ  γ = γ(s) is continuous on [0. Let X be a real Banach space and J ∈ C 1 (X. for instance. and more importantly.4. 1) and Q = (−1. (PS)). Theorem 2. let’s still take this simple example. P = (1. Suppose J satisﬁes (PS). Unlike ﬁnite dimensional space. there is a highest point. but we do detect a ‘mountain range’ on the graph near xz plane. R1 ). in an inﬁnite dimensional space. γ∈Γ s∈[0. f (P )) to (Q. γ(1) = Q} We then try to show that there exists a point Po at which f attains this minimax value c. and (J2 ) there is an e ∈ X \ Bρ (0). α > 0 such that J ∂Bρ (0) ≥ α.1]. To go from (P.5 We say that the functional J satisﬁes the PalaisSmale condition (in short. 1]. Deﬁnition 2.1] where Γ = {γ ∈ C([0.4. Ambrosetti and Rabinowitz [AR] establish the following wellknown theorem on the existence of a minimax critical point. −1) on two sides of the ‘mountain range’. (J1 ) there exist constants ρ. f (Q)). Hence we require the functional J to satisfy some compactness condition. which is the PalaisSmale condition. J(0) = 0. For a functional J deﬁned on an inﬁnite dimensional space X. f (Q)) on the graph.4 (Mountain Pass Theorem). y) possesses a minimax point. such that J(e) ≤ 0. More precisely. if any sequence {uk } ∈ X for which J(uk ) is bounded and J (uk )→0 as k→∞ possesses a convergent subsequence. Let γ(s) be a continuous curve linking the two points P and Q with γ(0) = P and γ(1) = Q. we deﬁne c = inf max f (γ(s)). Then we take the inﬁmum among the highest points on all the possible paths linking P and Q. Then J possesses a critical value c ≥ α which can be characterized as c = inf max J(u) γ∈Γ u∈γ[0. a bounded sequence may not converge. To show the existence of a minimax point. one needs ﬁrst to climb over the ’mountain range’ near xz plane. and γ(0) = P. Under this compactness condition. .60 2 Existence of Weak Solutions the existence of such minimax point is called a minimax variational method. Df (Po ) = 0. Hence on this path. then to decent to the point (Q.
say fa and fb with 0 < a < b. the path connecting the two low points (0. Let X be a real Banach space. This S is a saddle point. To illustrate this. fa is connected while f−a is not. In the ﬁgure below. the minimax critical point of J. Denote the level set of f by fc := {(x. y) ∈ R2  f (x. More precisely and generally.e. y) = x2 − y 2 . Assume that J ∈ C 1 (X. J(0)) and (e. let’s come back to the simple example f (x. One can see that 0 is a critical value and for any a > 0. If there is no critical value between the two level sets. there is another low point e. One convenient way is let the points in the set fb ‘ﬂow’ along the direction of −Df into fa . Suppose that c is not a critical value of J. a critical value c means a value of the functional which is attained by a critical point u. . which is the lowest as compared to the highest points on the nearby paths.4.2. R1 ) and satisﬁes the (PS). J(e)) contains a highest point S. we have Theorem 2. then one can continuously deform the set fb into fa . and if at the other side of the range. y) ≤ c}. i. on the graph of J. J (u) = 0 and J(u) = c. the level sets fa and f−a has diﬀerent topology. Heuristically. The proof of the Theorem is mainly based on a deformation theorem which exploits the change of topology of the level sets of J when the value of J goes through a critical value. the Theorem says if the point 0 is located in a valley surrounded by a mountain range. the points on the graph ‘ﬂow downhills’ into a lower level.5 (Deformation Theorem). then there must be a mountain pass from 0 to e which contains a minimax critical point of J.4 Variational Methods 61 Here. correspondingly.
dη dt (t. To satisfy condition (ii). u) (2. so that the value of J(η(t. ∀u ∈ X. Therefore (2. 1] × X to X. t ∈ [0. there is a subsequence {uki } that converges to an element uo ∈ X. However. to make the ﬂow stay still in the set M := {u ∈ X  J(u) ≤ c − or J(u) ≥ c + }.28) 2 and let . u) ≤ J(u).26) That is. u) from [0. there would exist sequences ak →0. there exists a constant 0 < δ < and a continuous mapping η(t. k →0 and elements uk ∈ Jc+ k \ Jc− k with J (uk ) ≤ ak . This is a contradiction with the assumption that c is not a critical value of J. such that (i) η(0. such that a2 δ< . Proof of the Deformation Theorem. u)).62 2 Existence of Weak Solutions Then for each suﬃciently small > 0. This is actually guaranteed by the (PS). To modify further. Roughly speaking. The main idea is to solve an ODE initial value problem (with respect to t) for each u ∈ X. the Theorem says if c is not a critical value of J. ∀u ∈ Jc+ \ Jc− . roughly look like the following: = −J (η(t. then one can continuously deform a higher level set Jc+δ into a lower level Jc−δ by ‘ﬂowing downhills’. η(0. we choose 0 < δ < . u)). (iv) η(1. Step 2. (2. Since J ∈ C 1 (X. the two points 0 and e as in the Mountain Pass theorem still remain ﬁxed. u) = u. Step 1 We ﬁrst choose a small > 0. 1]. so that the ﬂow will satisfy other desired conditions. Jc+δ ) ⊂ Jc−δ .27) Otherwise. ∀u ∈ X. i. we could multiply the right hand side of equation (2. we need to modify the right hand side of the equation a little bit. but this would make the ﬂow go too slow near M . we must have J(uo ) = c and J (uo ) = 0. Then by virtue of the (PS) condition. (iii) J(η(t.e.26) by dist(η. u) = u. c + ]. ∀u ∈J −1 [c − . < 1. We claim that there exist constants 0 < a. Condition (ii) is to ensure that after the deformation. M ). (ii) η(1. to let the ‘ﬂow’ go along the direction of −J (η(t.27) holds. u) would decrease as t increases. u) = u. R1 ). such that J (u) ≥ a. so that the ﬂow would not go too slow in Jc+ \ Jc− . (2.
Step 3. . M ) . F (η(t. (t. dt By the deﬁnition of h and (2. Notice that for η ∈ N . One can verify that F is bounded and Lipschitz continuous on bounded sets. g(η) ≡ 1.29) becomes d J(η(t. u) = u. Condition (i) is satisﬁed automatically. N ) 1 if 0 ≤ s ≤ 1 1 s if s > 1. hence condition (ii) is also satisﬁed. M ) + dist(u. u) ∈ Jc−δ . To verify (iii) and (iv).2.4 Variational Methods 63 N := {u ∈ X  c − δ ≤ J(u) ≤ c + δ}. u)). we have η(1.27). the right hand side is nonpositive. there exists a unique solution η(t. we have. Now for each ﬁxed u ∈ X. u) > dt dt = < J (η(t. deﬁne g(u) := and let h(s) := We ﬁnally modify the −J (η) as F (η) := −g(η)h( J (η) )J (η). u) dist(u. u)) η(0. From the deﬁnition of g g(u) = 0 ∀u ∈ M . u)) = < J (η(t. u)) ) J (η(t. (2. we consider the modiﬁed ODE problem = F (η(t. u))h( J (η(t. To see (iv).29) In the above equality. u)) is nonincreasing. dist(u. u) 2 . This veriﬁes (iii). we compute d dη J(η(t. u)). obviously. u)) 2 . and hence J(η(t. and (2. u) for all time t ≥ 0. u)) > = −g(η(t. dη dt (t. it suﬃce to show that for each u in N = Jc+δ \ Jc−δ . The introduction of the factor h(·) is to ensure the boundedness of F (·). and therefore by the wellknown ODE theory. u)) = −h( J (η(t. Now for u ∈ X. u)) ) J (η(t.
To see this. This establishes (iv) and therefore completes the proof of the Theorem.1]. . we take a particular path γ(t) = te and consider the function g(t) = J(γ(t)). We will use the Deformation Theorem to continuously deform a path γ ∈ Γ down to the level set Jc−δ to derive a contradiction. 1]) ⊂ Jc−δ . u)) = dt In any case. J(η(1. γ([0. u∈γ([0.1]) t∈[0. It is continuous on the closed interval [0. 1]) ∩ ∂Bρ (0) = ∅. Hence by condition (J1 ). due to the deﬁnition. u)) ≤ −a if J (η(t. choose the in the Deformation Theorem to be α . γ([0.64 2 Existence of Weak Solutions d J(η(t. Now by the Deformation Theorem. u)) > 1. this path can be continuously deformed down to the level set Jc−δ .1] On the other hand. And it follows that c ≥ α. and hence bounded from above. we derive − J (η(t. Suppose the number c so deﬁned is not a critical value. and 0 < δ < . or in order words. that is η(1. More precisely. i. such that u∈γ([0. u∈γ([0.1]) max J(u) ≥ v∈∂Bρ (0) inf J(v) ≥ α. u)) ≤ 1 − J (η(t. With this Deformation Theorem. for any γ ∈ Γ γ([0.e. u)) ≤ J(η(0. 1]) ⊂ Jc+δ . The Proof of the Mountain Pass Theorem. u)) − a2 ≤ c + δ − a2 ≤ c − δ. u)) 2 ≤ −a2 if J (η(t. we 2 can pick a path γ ∈ Γ . we have c < ∞. From the deﬁnition of c. the proof of the Mountain Pass Theorem becomes very simple.1]) max J(u) ≤ c + δ. Therefore c ≤ max J(u) = max g(t) < ∞. First.
(f2 ) can be dropped.31) possesses a nontrivial weak solution. This means that η(1.2 A simple example of such function f (x. η(1. . γ([0.6 Existence of a Minimax via the Mountain Pass Theorem Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem − u = f (x. (2. such that f (x. u) is up−1 u.4.30) On the other hand. s) ≤ c1 + c2 sp . it suﬃce f (x. u) satisﬁes ¯ (f1 )f (x. c2 ≥ 0.2. u(x) = 0.4 Variational Methods u∈η(1. If n = 1. u). (f3 )f (x. s) ∈ C(Ω × R1 . 1])) is also a path in Γ . Remark 2. with 0 ≤ p < that n+2 n−2 if n > 2. We assume that f (x. where F (x. Theorem 2.30) and hence completes the proof of the Theorem. e) = e. 2. since 0 and e are in Jc− .1])) max J(u) ≥ c. s) = 0 s f (x. 0 < µF (x. and (f4 ) there are constants µ > 2 and r ≥ 0 such that for s ≥ r.4. we prefer this simple model that better illustrates the main ideas. by (ii) of the Deformation Theorem. s) = o(s) as s→0.1])) 65 max J(u) ≤ c − δ. t)dt.31) Although the method we illustrate here could be applied to a more general second order uniformly elliptic equations. s). if n = 2. then problem (2. (f2 ) there exists constants c1 . (2.γ([0. and therefore we must have u∈η(1.4. s) ≤ c1 eφ(s) . with φ(s)/s2 →0 as s→∞. s) ≤ sf (x. 0) = 0 and η(1. This contradicts with (2. x ∈ Ω.γ([0.6 (Rabinowitz [Ra]) If f satisﬁes condition (f1 ) − (f4 ). R1 ). x ∈ ∂Ω.
now we only need to check the second term I(u) := Ω F (x. a2 ≥ 0 such that g(x. Since we have veriﬁed that the ﬁrst term Du2 dx Ω is continuously diﬀerentiable. u + φ) − g(·. We ﬁrst need to show that J ∈ C 1 (H. Given any > 0. ξ ∈ R1 . whenever Let and Let φ Lr (Ω) < δ. u(x)) belongs to C(Lr (Ω). then g(x. u(x))s dx ≤ Ω Ω (a1 + a2 ur/s )s dx (1 + ur )dx. g(x. R1 ). Ls (Ω)). Proposition 2. u) ∈ Ls (Ω). and (g2 ) there are constants r. That is g(x. R1 ).66 2 Existence of Weak Solutions To ﬁnd weak solutions of the problem. We verify that J satisﬁes the conditions in the Mountain Pass Theorem. u) ¯ Ω1 := {x ∈ Ω  φ(x) ≤ m} ¯ Ω2 := Ω \ Ω1 . By (g2 ). ξ) ≤ a1 + a2 ξr/s ¯ for all x ∈ Ω. such that. Now we verify the continuity of the map. (2. Fix u ∈ Lr (Ω). u)dx.4. Ω ≤ a3 It follows that if u ∈ Lr (Ω). we want to show that.32) . s ≥ 1 and a1 . Ls (Ω) < . ·) : Lr (Ω)→Ls (Ω).1 (Rabinowitz [Ra]) Let Ω ⊂ Rn be a bounded domain and let g satisfy ¯ (g1 )g ∈ C(Ω × R1 . we seek minimax critical points of the functional 1 Du2 dx − F (x. Then the map u(x)→g(x. Proof. u)dx J(u) = 2 Ω Ω 1 on the Hilbert space H := H0 (Ω). there is a δ > 0. we have g(·.
2.34) φr dx ≥ mr Ω2 .4 Variational Methods 67 Ii = Ωi g(x. where Ω is the volume of Ω. It follows that I1 ≤ Ω1 η s ≤ Ωη s . Then I ∈ C 1 (H.33).33) 2 Now we ﬁx this m. ∀v ∈ H. Ωη s < 2 and therefore s I1 ≤ . there exists m > 0. ∀x ∈ Ω1 .34) is less than 2 . for some constant c1 and c2 . u(x)) ≤ η. and estimate I2 . ·). . i = 1. u(x) + φ(x)) − g(x. By the continuity of g(x.4.e. Let n ≥ 3. u(x))s dx. such that Ω2  is suﬃciently s small.2 (Rabinowitz [Ra]) Assume that f (x. u)vdx. Proposition 2. This veriﬁes (2. by (2. whenever uk u in H. δr > Ω (2. For the given > 0. (2. we have I2 ≤ Ω2 (c1 + c2 (ur + φr )) dx ≤ c1 Ω2  + c2 Ω2 ur dx + c2 φ r Lr (Ω) . Moreover. I(·) is weakly continuous and I (·) is compact. s) satisﬁes (f1 ) and (f2 ). for any η > 0. so that s . we can make Ω2 ur dx as small as we wish by letting Ω2  small. 2. v >= Ω f (x. we choose η and then m. Consequently. (2.35) Since u ∈ Lr (Ω). for such a small δ. as φ Lr (Ω) < δ. such that g(x. Now by (2. and therefore completes the proof of the Proposition. and hence the right hand side of (2. i. Moreover.35). I(uk )→I(u) and I (uk )→I (u). I1 + I2 ≤ s 2 + s 2 . By (g2 ). u(x) + φ(x)) − g(x.32). R) and < I (u). we can choose δ so small.
(2. u). u) It follows from (2.37) Here we have applied the Mean Value Theorem (with ξ(x) ≤ v(x)). We will ﬁrst show that I is Frechet diﬀerentiable on H and then prove that I (u) is continuous. we deduce that the map u(x)→f (x. v ∈ H. the H¨lder inequality.36) and hence I(·) is diﬀerentiable. To verify the continuity of I (·). and < I (u). u + ξ) − f (·.37) that Q(u. u(x) + v(x)) − F (x.4. Hence for any given > 0. u(x))v(x)dx.1. u(x) + ξ(x)) − f (x. we can choose suﬃciently small δ > 0. v >= Ω f (x. Q(u. and the Sobolev inequality o v 2n L n−2 (Ω) ≤K v . u(x))v(x)dx f (x. 2n L n+2 (Ω) < K .38) 2. u(x)) − f (x. (2. (2. We want to show that given any > 0. This veriﬁes (2. v) ≤ Ω F (x. Here v denotes the H 1 (Ω) norm of v. as φ →0. we have 2n 2n v n−2 < Kδ. u) L n+2 (Ω) 2n v 2n L n−2 (Ω) L n+2 (Ω) K v . v) < . u) ≤ f (·.36) whenever v ≤ δ. Then by (f2 ). and therefore L (Ω) L (Ω) 2n 2n f (·. such that whenever v < δ. and hence ξ n−2 < Kδ. we show that for each ﬁxed u. I (u + φ) − I (u) →0.39) .68 2 Existence of Weak Solutions Proof. such that Q(u. u(x)) · v(x)dx Ω 2n = ≤ f (·. let r = n−2 and s = n+2 . whenever v < δ. u + ξ) − f (·. 1. 2n 2n In Proposition 2. Let u. there exists a δ = δ( . u + ξ) − f (·. u)vdx ≤ v (2. In fact. v) := I(u + v) − I(u) − Ω f (x. u(x)) is continuous from L n−2 (Ω) to L n+2 (Ω).
2. it is bounded in L n−2 (Ω).4. Therefore. 3. {uk } is bounded in H. u + φ) − f (·.39) follows again from Proposition 2. u(x))]v(x)dx f (·.42) . we obtain I (uk ) − I (u) ≤ K f (·. I (·) is continuous.1. I (u + φ) − I (u) = sup < I (u + φ) − I (u).38). u + φ) − f (·. ξk (x)) · uk (x) − u(x)dx Ω Lr (Ω) 2n ≤ ≤ f (·. (2. o Now (2. the last term of (2. u) 2n L n+2 (Ω) Here we have applied (2. so does {ξk }. v > v ≤1 = sup v ≤1 Ω [f (x. Finally. and r= It is easy to see that q< r 2n 2n . Furthermore.4 Variational Methods 69 By deﬁnition. and uk u in H. and the Sobolev inequality. This veriﬁes the weak continuity of I(·). we verify the weak continuity of I(u) and the compactness of I (u). u) 2n ≤ sup v ≤1 L n+2 (Ω) K v (2. 2n and hence by Sobolev Embedding. ξk ) ≤ C ξk uk − u uk − u Lq (Ω) Lq (Ω) L n−2 (Ω) (2. I(uk ) − I(u) ≤ Ω I(uk )→I(u) as k→∞. uk ) − f (·. u(x))dx f (x. p(n − 2) r−1 2n − p(n − 2) 2n . by Compact Embedding of H into Lq (Ω). Using a similar argument as in deriving (2. F (x. uk (x)) − F (x. u) 2n L n+2 (Ω) .41) approaches zero as k→∞. q= = .41) Here we have applied the Mean Value Theorem and the H¨lder inequality with o ξk (x) between uk (x) and u(x). Assume that {uk } ⊂ H. the H¨lder inequality. n−2 Since a weak convergent sequence is bounded.40). u(x) + φ(x)) − f (x.40) ≤ K f (·. We show that In fact.
(2. Now. Consequently. This completes the proof of the proposition. we arrive at I (uk )→I (u) as k→∞.4. 2np H → → L n+2 (Ω). together with (2. and thus possesses a nontrivial minimax critical point. u) strongly in L n+2 (Ω).1.43) for some positive constant co and Co . uk )uk − µF (x.42). Since J (uk ) is bounded.31) and prove Theorem 2. uk )] dx 2 Ω Ω µ ≥ ( − 1) Duk 2 dx + [f (x. 2np Consequently. Then for any u. We will verify that J(·) satisﬁes the conditions in the Mountain Pass Theorem. we derive uk →u strongly in L n+2 (Ω). Then by the Compact Embedding. Assume that {uk } is a sequence in H satisfying J(uk ) ≤ M and J (uk )→0.43) implies that {uk } must be bounded in H. v >= Ω 2n Du · Dvdx. Let A : H→H ∗ be the duality map between H and its dual space H ∗ . First we verify the (PS) condition. By (f4 ). we have µM + J (uk ) uk ≥ µJ(uk )− < J (uk ).70 2 Existence of Weak Solutions Since p < n+2 n−2 . Now let’s come back to Problem (2. which is a weak solution of (2. f (·. v ∈ H. (2. uk )→f (·.4. we have 2np n+2 < 2n n−2 . .6. form Proposition 2. uk > µ = ( − 1) Duk 2 dx + [f (x. uk )] dx 2 Ω uk (x)≤r µ ≥ ( − 1) Duk 2 dx − (a1 r + a2 rp+1 )Ω 2 Ω = co uk 2 − Co . which converges weakly to some element uo in H. uk )uk − µF (x. We show that {uk } possesses a convergent subsequence. < Au. Hence there exists a subsequence (still denoted by {uk }).31).
2. u). we ﬁx M . (2. for any given > 0. s) ≤ s2 + M sp+1 . there is a constant M = M (δ). uk )→f (·. f (·. uo ). It follows that uk = A−1 J (uk ) + A−1 f (·. 4 To see the existence of a point e ∈ H.44) This veriﬁes the (PS). we deduce J∂Bρ (0) ≥ 1 2 ρ . From Proposition 2. so that CM ρp−1 ≤ 1 . By (f3 ). (2. such that. s) ≤ s2 . (2.46) It follows.4. Combining (2. u)dx. via the Poincar´ and the Sobolev inequality. ¯ for all x ∈ Ω and for all s ∈ R.48). u)dx ≤ Ω u2 dx + M Ω up+1 dx ≤ C( + M u p−1 ) u 2 . tu)dx.2. whenever s < δ.4 Variational Methods 71 A−1 J (u) = u − A−1 f (·. 8 Then form (2. we ﬁx any u = 0 in H.48) For this . (2.47) And consequently. u) strongly in H ∗ . then choose u = ρ small. F (x.45) ¯ While by (f2 ). uk )→A−1 f (·. (2. as k→∞. (2. we estimate F (x. F (x. and consider J(tu) = t2 2 Du2 dx − Ω Ω F (x.49) . F (x. J(u) ≥ Choose = 1 8C . there is a δ > 0.46).45) and (2. that e  Ω F (x. for all Ω ¯ x ∈ Ω. To see there is a ‘mountain range’ surrounding the origin. such that J(e) ≤ 0. s) ≤ M sp+1 . 1 − C( + M u 2 p−1 ) u 2. we have. such that for all x ∈ Ω.
and it follows that F (x. (2. s) ≥ a3 sµ − a4 . hence it possesses a minimax critical point uo . we conclude that J(tu)→ − ∞. s) = s−µ−2 s (−µF (x. as t→∞. Now J satisﬁes all the conditions in the Mountain Pass Theorem. s) ≥ a3 sµ . we deduce. s)) ds In any case.72 2 Existence of Weak Solutions By (f4 ). which is a weak solution of (2. . and taking into account that µ > 2.31). Moreover. for s ≥ r. s) + sf (x. we have for s ≥ r. d s−µ F (x. J(uo ) > 0. Combining (2.50) ≥ 0 if s ≥ r ≤ 0 if s ≤ −r.49) and (2. therefore it is a nontrivial solution.4. F (x. This completes the proof of the Theorem 2.6.50).
1 The Case p ≥ 2. This is called the regularity argument. the weak solutions are easier to obtain than the classical ones. we used functional analysis. we will introduce methods to show that. in practice. 3. so that they can become classical ones.3 3. we are often required to ﬁnd classical solutions.1 3. These questions will be answered here.1 3. Usually. and Regularity 3. we take the equation . one would naturally want to know whether these weak solutions are actually diﬀerentiable. mainly calculus of variations to seek the existence of weak solutions for second order linear or semilinear elliptic equations. by Sobolev imbedding.p a Priori Estimates 3. Uniqueness.3.1 W 2.3 Regularity of Solutions 3. In this chapter. 3. Uniform Elliptic Equations 3.2.2. Therefore.2 3. and this is particularly true for nonlinear equations.3 Other Useful Results Concerning the Existence.2 The Case 1 < p < 2.3. in most cases.2 and hence.1.3.3. To see the diﬀerence between the regularity and the a priori estimate.p Regularity of Solutions 3. in Lp space for p ≤ n−2 . However. These weak solutions we obtained were in the n+2 Sobolev space W 1.4 Bootstraps the Regularity Lifting Theorem Applications to PDEs Applications to Integral Equations In the previous chapter.2. the regularity is equivalent to the a priori estimate of solutions. Very often.1.2 W 2.2 Newtonian Potentials. a weak solution is in fact smooth.3 Regularity Lifting 3.
then use the method of “frozen coeﬃcients” to generalize it to uniformly elliptic equations.2 how the a priori estimates and uniqueness of solutions lead to the regularity.3.1) is in W 2. In Section 3.1.p regularity. In Section 3. e.p is a weak solution and if f ∈ Lp . We will ﬁrst establish this for a Newtonian potential. such that n aij (x)ξi ξj ≥ δξ2 i. then any weak solution u of (3. We show that if the function f (x) is in Lp (Ω).p is a weak solution and if f ∈ Lp . we establish a W 2. We show that if f (x) is in p L (Ω). It is much more general and very easy to use.p If u ∈ W0 ∩ W 2.j=1 for a. Roughly speaking.p .p a priori estimate for the solutions of (3. we preassumed that u is in W 2. In Section 3. It provides a simple method for the study of regularity.j=1 aij (x)uxi xj + i=1 bi (x)uxi + c(x)u = f (x).p a priori estimate says 1. x ∈ Ω and for all ξ ∈ Rn .p (Ω). Consider the second order partial diﬀerential equation n n Lu := − i. such that u W 2.p ≤ C( u Lp + f Lp ). that is. The version we present here contains some new developments.2. the W 2. Let Ω be an open bounded set in Rn . and c(x) be bounded functions on Ω with aij (x) = aji (x).p (Ω) is a strong solution of (3. . then there exists a constant C. we introduce and prove a regularity lifting theorem. and u ∈ W 2. then u W 2.74 3 Regularity of Solutions − u = f (x) for example. We will also use examples to show how this method can be applied to boost the regularity of the solutions for PDEs as well as for integral equations. there exists a constant δ > 0. then u ∈ W 2. bi (x).1).1) We assume L is uniformly elliptic.p . (3.p regularity theory infers that If u ∈ W 1. In the a priori estimate. We believe that the method will be helpful to both experts and nonexperts in the ﬁeld. It might seem a little bit surprising at this moment that the a priori estimates turn out to be powerful tools in deriving regularities.p ≤C f Lp . Let aij (x). we establish W 2. The readers will see in section 3. While the W 2.1).
p a Priori Estimates We establish W 2.1.3) (3. To this end. ∀t > 0} < ∞. Deﬁne µf (t) = {x ∈ Ω  f (x) > t}. (3.3. Write w = N f.1. Then w ∈ W 2. deﬁne the linear operator T as T f = Dij N f = Dij Rn Γ (x − y)f (y)dy To prove Theorem 3.1. 3. For p ≥ 1. it is equivalent to show that T : Lp (Ω) −→ Lp (Ω) is a bounded linear operator .1 W 2. where N is obviously a linear operator.5) Our proof can actually be applied to more general operators. and let w be the Newtonian potential of f . Let Γ (x) = 1 1 n(n − 2)ωn xn−2 n ≥ 3.1 Let f ∈ Lp (Ω) for 1 < p < ∞.1 W 2.p estimate for the solutions of Lu = f (x) . x ∈ Ω. we introduce the concept of weak type and strong type operators. For ﬁxed i. Ω We prove Theorem 3. j. x ∈ Ω.1 Newtonian Potentials Let Ω be a bounded domain of Rn .p a Priori Estimates 75 3.2) First we start with the Newtonian potential. The Newtonian potential is known as w(x) = Γ (x − y)f (y)dy. . be the fundamental solution of the Laplace equation. and D2 w Lp (Ω) (3.e. (3. a. a weak Lp space Lp (Ω) is the collection of functions f that w satisfy f p Lp (Ω) w = sup{µf (t)tp .4) ≤C f Lp (Ω) .1.p (Ω) and w = f (x).
Let ˆ f (ξ) = 1 (2π)n/2 ei<x. q) if Tf Lq (Ω) ≤C f Lp (Ω) .1.6) and (3. Outline of Proof of (3.76 3 Regularity of Solutions An operator T : Lp (Ω) → Lq (Ω) is of strong type (p.4 T is of strong type (p. we use CalderonZygmund’s Decomposition Lemma to prove Lemma 3. Finally.7) It follows from (3. ∀f ∈ Lp (Ω). and i = −1.1.6) and the wellknown Plancherel’s Theorem: ˆ f L2 (Rn ) = f L2 (Rn ) .7). p). ∀f ∈ Lp (Ω). ˆ and Djk f (ξ) = −ξj ξk f (ξ).1. ∞ ∞ First we consider f ∈ C0 (Ω) ⊂ C0 (Rn ).ξ> f (x)dx.1.1).5).3 T is of strong type (r. where n < x. 2). we use Fourier transform to show Lemma 3. Rn be the Fourier transform of f . We decompose the proof into the proofs of the following four lemmas. Thirdly. We need the following simple property of the transform ˆ Dk f (ξ) = −iξk f (ξ) . q) if Tf Lq (Ω) w ≤C f Lp (Ω) . r) for any 1 < r ≤ 2.1. (3.2 T is of weak type (1. for 1 < p < ∞.1 T : L2 (Ω) → L2 (Ω) is a bounded linear operator. The Proof of Lemma 3. ∀x ∈ Rn . we conclude Lemma 3.1. Then obviously w ∈ C ∞ (Rn ) and satisﬁes w = f (x) . we employ Marcinkiewicz Interpolation Theorem to derive Lemma 3.e. Secondly. by duality. . ξ >= k xk ξk √ is the inner product of Rn . (3. i. T is of strong type (2. First. T is of weak type (p.
G such that Qk . Lemma 3.p a Priori Estimates 77 f (x)2 dx = Ω Rn f (x)2 dx  w(x)2 dx Rn = = Rn  w(ξ)2 dx ξ4 w(ξ)2 dξ ˆ Rn n 2 2 ξk ξj w(ξ)2 dξ ˆ = = k.5 For f ∈ L1 (Rn ).8) Now to verify (3. we ﬁrst extend f to vanish outside Ω.1 W 2.t. This proves the Lemma. to apply the Calder´nZygmund’s Decomposition o Lemma. The Proof of Lemma 3.e.8) for any f ∈ L2 (Ω).1. ∃ (i) Rn = E ∪ G. x ∈ E (iii) G = ∞ E. We need the following wellknown Calder´nZygmund’s Decomposition o Lemma (see its proof in Appendix C).1. {Qk }: disjoint cubes s. For any given α > 0. k=1 α< 1 Qk  f (x)dx ≤ 2n α Qk For any f ∈ L1 (Ω). and take the limit.2.j=1 n Rn Dkj w(ξ)2 dξ Dkj w(x)2 dx k. a.j=1 n Rn = k. we simply pick a sequence {fk } ⊂ that converges to f in L2 (Ω). E ∩ G = ∅ (ii) f (x) ≤ α.3. (3. Qo . ﬁxed α > 0. ∀f ∈ C0 (Ω). Consequently. Tf ∞ C0 (Ω) L2 (Ω) ≤ f L2 (Ω) ∞ . ﬁx a large cube Qo in Rn .j=1 Rn = = Rn D2 w2 dx. such that f (x)dx ≤ αQo .
T f = T g + T b.10) b(x)dx = 0 for k = 1. (3.12) We then estimate µT b . k = 1. bkm (x)dx = Qk Qk bk (x)dx = 0. 2.) We will show that C (a) G∗  ≤ f L1 (Ω) . where f (x) for x ∈ E g(x) = 1 Qk  Qk f (x)dx for x ∈ Qk . ∞ T bk . · · · . we have g(x) ≤ 2n α . and Qk (3. · · · . from the deﬁnition of g. The estimate of the ﬁrst one 2 2 is easy. and therefore α α µT f (α) ≤ µT g ( ) + µT b ( ). By Lemma 3. 2.13) . To estimate µT b ( α ). we divide Rn into two parts: G∗ 2 and E ∗ := Rn \ G∗ (see below for the precise deﬁnition of G∗ .78 3 Regularity of Solutions We show that µT f (α) := {x ∈ Rn  T f (x) ≥ α} ≤ C f L1 (Ω) α . (3. almost everywhere and b(x) = 0 for x ∈ E . 2 2 We will estimate µT g ( α ) and µT b ( α ) separately. we derive 4 α µT g ( ) ≤ 2 2 α g 2 (x)dx ≤ 2n+2 α g(x)dx ≤ 2n+2 α f (x)dx.1 and (3.9) Split the function f into the “good” part g and “bad” part b: f = g + b. (3. and α α C C (b) {x ∈ E ∗  T b(x) ≥ } ≤ T b(x)dx ≤ f L1 (Ω) .1.11) We ﬁrst estimate µT g . Since the operator T is linear. let {bkm } ⊂ C0 (Qk ) be a sequence converging to bk in 2 L (Ω) satisfying b(x) for x ∈ Qk 0 elsewhere . Let bk (x) = Then Tb = k=1 ∞ For each ﬁxed k. because g ∈ L2 . Obviously. (3. E∗ 2 α α α These will imply the desired estimate for µT b ( 2 ).10).
one can see that due to the singularity of Dij Γ (x − y) in Qk and the fact that bkm may not be bounded in Qk . One small trick here is to add a term ¯ (which is 0 by (3. ¯ ¯ Now letting m→∞ in (3.1 W 2.3. For this reason. (3.13)): Dij Γ (x − y )bkm (y)dy ¯ Qk to produce a helpful factor δk by applying the mean value theorem to the diﬀerence: Dij Γ (x − y) − Dij Γ (x − y ) = (y − y ) · D(Dij Γ )(x − ξ) ≤ δk D(Dij )(x − ξ).p a Priori Estimates 79 From the expression T bkm (x) = Qk Dij Γ (x − y)bkm (y)dy. we obtain T bk (x)dx ≤ C Rn \Bk Qk ∞ bk (y)dy.15) ≤C k=1 Qk bk (y)dy ≤ C Rn . We now estimate the integral in the complement of Bk : T bkm (x)dx = Rn \Bk Qo \Bk  Qk Dij Γ (x − y)bkm (y)dydx [Dij Γ (x − y) − Dij Γ (x − y )]bkm (y)dydx ¯ Qk = Qo \Bk  ≤ C δk Qo \Bk ∞ 1 dx ·  xn+1 Qk bkm (y)dy Qk ≤ C1 δk δk 1 dr · r2 bkm (y)dy (3. one can only estimate T bkm (x) when x is of a positive distance away from Qk . we cover Qk by a bigger ball Bk which has the same center as Qk . It follows that ∞ ∞ T b(x)dx ≤ C E∗ k=1 ∞ Rn \G∗ T bk dx ≤ C k=1 Rn \Bk T bk dx f (x)dx. and the radius of the ball δk is the same as the diameter of Qk .14).14) ≤ C2 Qk bkm (y)dy where y is the center of the cube Qk . Let G∗ = Bk k=1 and E ∗ = Rn \ G∗ .
In the previous lemmas. 2 ∗ Eα = {x ∈ E ∗  T b(x) ≥ Then by (3.18) Now the desired inequality (3. we have o ∞ ∞ (3. 1 θ 1−θ = + r p q and C depends only on p.1. ∀f ∈ Lp (Ω) ∩ Lq (Ω).1.3 is a direct consequence of the Marcinkiewicz interpolation theorem in the following restricted form: Lemma 3.6 Let T be a linear operator from Lp (Ω) ∩ Lq (Ω) into itself with 1 ≤ p < q < ∞. we have shown that the operator T is of weak type (1.18).3. This completes the proof of the Lemma.9) is a direct consequence of (3. The proof of Lemma 3. (3. 2)). r). Now Lemma 3.4. p) and weak type (q. .19) . 2 2 By (iii) in the Calder´nZygmund’s Decomposition Lemma. (3. then for any p < r < q. The proof of this lemma can be found in Appendix C.17. such that µT f (t) ≤ then Tf where r θ 1−θ ≤ CBp Bq f r Bp f t p p and µT f (t) ≤ Bq f t q q . for any 1 < r ≤ 2. we have Tg Lr (Ω) ≤ Cr g Lr (Ω) . and r. q. Rn Write (3. The proof of Lemma 3.17) α }. we know that.1. 2) (of course also weak type (2. (3.12).80 3 Regularity of Solutions Obviously α α µT b ( ) ≤ G∗  + {x ∈ E ∗  T b(x) ≥ }. If T is of weak type (p. and (3. ∀f ∈ Lp (Ω) ∩ Lq (Ω). From the previous lemma. if there exist constants Bp and Bq . 1) and strong type (2. T is of strong type (r. we derive ∗ Eα  α ≤ 2 T b(x)dx ≤ ∗ Eα T b(x)dx ≤ C E∗ Rn f (x)dx.1. (3. More precisely.16) G∗  = k=1 Bk  = C k=1 Qk  ≤ C α ∞ f (x)dx = k=1 Qk C α f (x)dx.16). q).15).
let r = p−1 . and the domain Ω. follows from (3. f > Lp . Lr =1 ≤ g sup Lr =1 f Lp Tg Lr ≤ Cr f This completes the proof of the Lemma. It sup Tf Lp = g sup Lr =1 < g. Deﬁnition 3.j=1 aij (x)uxi xj + n i=1 bi (x)uxi uW 2.2 Let u ∈ W 2. and λξ2 ≤ aij ξi ξj ≤ Λξ2 for some positive constants λ and Λ. Lu := − u(x) = 0. x ∈ Ω x ∈ ∂Ω.2 W0 (Ω) be a strong solution of (3.1.21). λ. T f >=< T g. (3. bi (x) ∈ L∞ (Ω). f > .21). Λ. p. n i.2 Uniform Elliptic Equations In this section.e. Based on the result in the previous section–the estimates on the Newtonian potentials–we will establish a priori estimates on the strong solutions of (3. we consider general second order uniform elliptic equations with Dirichlet boundary condition + c(x)u = f (x).1.3. g >= Ω f (x)g(x)dx be the duality between f and g.p (Ω) Assume that f ∈ Lp (Ω). 3.α boundary. c(x) ∈ L∞ (Ω).1 We say that u is a strong solution of Lu = f (x). Obviously.19) and (3. 1 < r < 2. x ∈ Ω if u is twice weakly diﬀerentiable in Ω and satisﬁes the equation almost everywhere in Ω.20) that 1 r (3. Then 1. .1.1 W 2. Then it is easy to verify that < g.22) where C is a constant depending on bi (x)L∞ (Ω) . Theorem 3. p Given any 2 < p < ∞. i.p (Ω) ≤ C(uLp (Ω) + f Lp (Ω) ) (3.20) + 1 p = 1. T f >= g < T g.p a Priori Estimates 81 Let < f. n. aij (x) ∈ C0 (Ω).21) ¯ We assume that Ω is bounded with C 2. c(x)L∞ (Ω) .
near a point xo .p (Ω) ≤ uW 2.p (Ω\Ωδ ) ≤ C( uLp (Ω) + uLp (Ω) + f Lp (Ω) ) where Ωδ = {x ∈ Ω  d(x. c(x) ∈ Lp (Ω) .25) Theorem 3. . Locally.p (Ωδ ) ≤ C( uLp (Ω) + uLp (Ω) + f Lp (Ω) ).p (Ω\Ω2δ ) + uW 2. we may regard the leading coeﬃcients aij (x) of the operator approximately as the constant aij (xo ) (as if the functions aij are frozen at point xo ).23) where K is any compact subset of Ω. Boundary Estimate uW 2. c(x) ∈ L∞ (Ω) can be replaced by the weaker ones bi (x).1.p (Ω) ≤ C(uLp (Ω) + f Lp (Ω) ) (3. We divide the proof into two major parts. the main idea is the well known method of “frozen” coeﬃcients.25) and the following well known GargaliadoJohnNirenberg type interpolation estimate  uLp (Ω) ≤ CuLp (Ω)  1/2 2 (3. Interior Estimate  2 uLp (K) ≤ C( uLp (Ω) + uLp (Ω) + f Lp (Ω) ) (3.82 3 Regularity of Solutions Remark 3. for any p > n. we obtain uW 2.25). Combining the interior and the boundary estimates.1 Conditions bi (x). 4 1 Choosing < 2C .p (Ω) ≤ C  2 uLp (Ω) + C( C uLp (Ω) + f Lp (Ω) ). Proof of Theorem 3.2 is then a simple consequence of (3. (3. we can then absorb the term  2 uLp (Ω) on the right hand side by the left hand side and arrive at the conclusion of our theorem uW 2.26) For both the interior and the boundary estimates. and thus we are able to treat the operator as a constant coeﬃcient one and apply the potential estimates derived in the previous section.2. we get: uW 2.1. Part 1. 4 Substituting this estimate of  uLp (Ω) into our inequality (3. ∂Ω) > δ}.1.24) uLp (Ω) ≤  1/2 2 uLp (Ω) + C uLp (Ω) . Part 2.
First we deﬁne the cutoﬀ function φ(s) to be a compact supported C ∞ function: φ(s) = 1 if s ≤ 1 φ(s) = 0 if s ≥ 2.1≤i.y∈Ω. and it measures the ‘module’ continuity x − xo  ) and w(x) = η(x)u(x). We abbreviated i. With a simple linear transformation. we may assume aij (xo ) = δij . In the above.3).x. w = Γ ∗ F (see the exercise after the proof of the theorem). Apparently both w and Γ ∗ F := are solutions of 1 n(n − 2)ωn 1 F (y)dy x − yn−2 n Rn u = F. we obtain: .j≤n aij (x) − aij (y) 0.1 W 2. For any xo ∈ Ω2δ . following the Newtonian potential estimate (see Theorem 3. Now. δ ∂2η ∂u ∂η + 2aij (x) ∂xi ∂xj ∂xi ∂xj ∂u ∂2w + η(x)(bi (x) + c(x)u − f (x)) + ∂xi ∂xj ∂xi = (aij (xo ) − aij (x)) + aij (x)u(x) ∂2η ∂u ∂η + 2aij (x) ∂xi ∂xj ∂xi ∂xj := F (x) for x ∈ Rn . let η(x) = φ( then aij (xo ) ∂2w ∂2w ∂2w = (aij (xo ) − aij (x)) + aij (x) ∂xi ∂xj ∂xi ∂xj ∂xi ∂xj = (aij (xo ) − aij (x)) + aij (x)u(x) ∂2w ∂2u + η(x)aij (x) + ∂xi ∂xj ∂xi ∂xj sup x−y≤δ.p a Priori Estimates 83 Part 1: Interior Estimates.j=1 aij as aij and so on.3. Here we notice that all terms are supported in B2δ := B2δ (xo ) ⊂ Ω. Thus by the uniqueness. we skipped the summation sign . Then we quantify the ‘module’ continuity of aij with (δ) = The function (δ) 0 as δ of the functions aij .
and 2 we derive  2 uLp (K) ≤ C( uLp (Ω) + uLp (Ω) + f Lp (Ω) ).27) Calculating term by term. (3.. we get  2 wLp (B2δ ) ≤ C (2δ) 2 wLp (B2δ ) + C(f Lp (B2δ ) +  uLp (B2δ ) + uLp (B2δ ) ) ≤ 1  2 wLp (B2δ ) + C(f Lp (B2δ ) +  uLp (B2δ ) + uLp (B2δ ) ).. Hence there are ﬁnitely many balls {Bδ (xi )  i = 1.  2 2 uLp (Bδ ) =  2 wLp (Bδ ) . we have F Lp (B2δ ) ≤ (2δ) 2 wLp (B2δ ) +f Lp (B2δ ) +C( uLp (B2δ ) +uLp (B2δ ) ).30) This establishes the interior estimates. let δ < 1 dist(K.29) ≤ C(f Lp (Ω) +  uLp (Ω) + uLp (Ω) ) For any compact subset K of Ω. (3. then K ⊂ Ω2δ . Let’s just describe how we can apply almost the same scheme here.28) on each of the balls and sum up to obtain m  2 uLp (Ω2δ ) ≤ i=1 m  2 uLp (Bδ )(xi ) ≤ i=1 C(f Lp (B2δ )(xi ) +  uLp (B2δ )(xi ) + uLp (B2δ )(xi ) ) (3. (3... we notice that the collection of balls {Bδ (x)  x ∈ Ω2δ } forms an open covering of Ω2δ . uLp (Bδ ) ≤ C(f Lp (B2δ ) +  uLp (B2δ ) + uLp (B2δ ) ). Part 2.. Combining this with the previous inequality (3.. We now apply the above estimate (3. Bδ (xo ) ∂Ω .27) and choosing δ so small such that C (2δ) < 1/2.28) To extend this estimate from a δball to a compact set. Boundary Estimates The boundary estimates are very similar.m} that have already covered Ω2δ .84 3 Regularity of Solutions  2 wLp (B2δ ) =  2 wLp (Rn ) ≤ CF Lp (Rn ) = CF Lp (B2δ ) . 2 This is equivalent to  2 wLp (B2δ ) ≤ C(f Lp (B2δ ) +  uLp (B2δ ) + uLp (B2δ ) ) Since u = w on Bδ (xo ). . For any point xo ∈ ∂Ω. we have  Consequently. ∂Ω).
for y ∈ Br (0) ¯ b ¯ + u(y) = 0 . we may assume that equation (3. The last inequality in the above was due to the symmetric extension of w to w from the half ball to the whole ¯ ball. Applying the method of frozen coeﬃcients (with w(y) = φ( 2y )u(y)) we get: r + w(y) = F (y) on Br (0). + then ψ is a diﬀeomorphism that maps a neighborhood of xo in Ω onto Br (0) = {y ∈ Br (0)  yn > 0}. i. · · · . ¯ Through the same calculations as in Part 1. −yn ). for y ∈ ∂Br (0) with yn = 0. yn−1 .j=1 aij (y)uyi yj + i=1 ¯i (y)uyi + c(y)u = f (y) . Let y = ψ(x) = (x − xo . we derive the basic interior estimate  2 uLp (Br (xo )) ≤ C(f Lp (B2r (xo )) +  uLp (B2r (xo )) + uLp (B2r (xo )) ) ≤ C1 (f Lp (B2r (xo )∩Ω) +  uLp (B2r (xo )∩Ω) + uLp (B2r (xo )∩Ω) ) for any xo on ∂Ω and for some small radius r. ∂xl ∂x If necessary. Since ¯ a plane is still mapped to a plane. xn − h(x )). we may assume that Bδ (xo ) ∂Ω is given by the graph of xn = h(x1 .p a Priori Estimates 85 is a C 2. The equation becomes n n + ¯ − i. xn−1 ) = h(x ).. we make a linear transformation so that aij (0) = δij . yn ). · · · . and Ω is on top of this graph locally. . + ¯ Now let w(y) and F (y) be the odd extension of w(y) and F (y) from Br (0) ¯ to Br (0). (3. −w(y1 .. With a suitable rotation. yn−1 . .31) is valid for some smaller r. if yn < 0. for y ∈ Br (0). w(y) = w(y1 . x2 . yn−1 .. For example. yn ) = ¯ ¯ ¯ Similarly for F (y).α graph for δ small. ¯ w(y) = F (y) .e.31) where the new coeﬃcients are computed from the original ones via the chain rule of diﬀerentiation. Then one can check that: w(y1 .1 W 2. if yn ≥ 0. aij (y) = ¯ ∂ψ j ∂ψ i −1 (ψ (y))alk (ψ −1 (y)) k (ψ −1 (y)). · · · .3.
we will use the a priori estimates established in the previous section to derive the W 2.p Regularity of Solutions In this section. x ∈ Ω u(x) = 0 . we get: uW 2.32) This establishes the boundary estimates and hence completes the proof of the theorem. 3..p Deﬁnition 3.1. . 2.j=1 (aij (x)uxi )xj + i=1 bi (x)uxi + c(x)u.33) 1.86 3 Regularity of Solutions These balls form a covering of the compact set ∂Ω..2. then w = Γ ∗ F . and following the same steps as we did for the interior estimates.p regularity for the weak solutions. Ω Ω i. and thus has a ﬁnite covering Bri (xi ) (i = 1. ¯ Hint: (a) Show that Γ ∗ F (x) = 0 for x ∈Ω.j=1 1 p aij (x)uxi vxj + i (3.2 W 2. (c) Show that (J w) = J ( w) and thus J w = Γ ∗ (J F ). k). Summing the estimates up. x ∈ ∂Ω 1.q if for any v ∈ W0 (Ω). . (3. 2. Let →0 to derive w = Γ ∗ F . These balls also cover a neighborhood of ∂Ω including Ω\Ωδ for some δ small.1 We say that u ∈ W0 (Ω) is a weak solution of Lu = f (x) .1 Assume that Ω is bounded.. 2 n (b) Show that it is true for w ∈ C0 (R ).34) n bi (x)uxi v + c(x)uv dx = f (x)vdx.p (Ω\Ωδ ) ≤ C( uLp (Ω) + uLp (Ω) + f Lp (Ω) ). and w ∈ W0 (Ω). (3.35) where + 1 q = 1.. Let L be a second order uniformly elliptic operator in divergence form n n Lu = − i. n (3.p Exercise 3. Show that if − w = F .
1 Assume f ∈ Lp (B1 (0)) with p ≥ 2.2.35) be true for all v ∈ C0 (Ω). If f (x) ∈ Lp (Ω).3.1 Since C0 (Ω) is dense in W0 (Ω). x ∈ ∂B1 (0) exists a unique solution u ∈ W 2. To show the uniqueness. which will be used to deduce the existence of solutions for equation − ij (aij (x)wxi )xj = F (x). Outline of the Proof. x ∈ B1 (0) (3. uniqueness.34).p ∞ Remark 3. As one will see. .2).p version of Fredholm Alternative for p ≥ 2 based on the regularity result in stage 1.p (Ω) .2 W 2. then u ∈ W 2. and regularity for the solution of the Laplace equation on a unit ball. and then use the “frozen” coeﬃcients method to derive the regularity for general operator L. we consider the case 1 < p < 2.2. The proof of the theorem is quite complex and will be accomplished in two stages.1. this uniqueness plays a key role in deriving the regularity.2.37) We will prove this proposition in subsection 3. The proof of the regularity is based on the following fundamental proposition on the existence. 1.p Assume that u ∈ W0 (Ω) is a weak solution of (3. we ﬁrst prove a W 2. and this is more convenient in many applications.p Regularity of Solutions 87 1.2.1 Assume Ω is a bounded domain in Rn . Then the Dirichlet problem u = f (x) .36) u(x) = 0 . The main result of the section is Theorem 3. Proposition 3.p (B1 (0)) satisfying u W 2. because one can rather easily show the uniqueness of the weak solutions by multiplying both sides of the equation by the solution itself and integrating by parts.p (B1 ) ≤C f Lp (B1 ) . Let L be a uniformly elliptic operator deﬁned in (3. In stage 1. Then we use this equation with a proper choice of F (x) to derive a contradiction if there exists a nontrivial solution of equation − ij (aij (x)vxi )xj = 0. (3.2. In stage 2 (subsection 3.33) with aij (x) Lipschitz continuous and bi (x) and c(x) bounded. we ﬁrst consider the case when p ≥ 2. The main diﬃculty lies in the uniqueness of the weak solution. we only need to require ∞ (3.
x ∈ Ω . (3. the conditions on the coeﬃcients of L can be weakened. Suppose the inequality (3. This will use the Regularity Lifting Theorem in Section 3. To this end. and hence will be deliberated there. It follows from the a priori estimate (3. and gk Lp →0.p (Ω) ≤ C( u Lp (Ω) + f Lp (Ω) ). to derive the regularity. p uk L uk Lp = 1 .1 The Case p ≥ 2 We will prove Proposition 3. we need Lemma 3.2.p →∞ as k→∞. 3.3. x ∈ Ω (3.38) it holds the a priori estimate u W 2. Remark 3.88 3 Regularity of Solutions After proving the uniqueness.39) and ii) (uniqueness) if Lu = 0.2.2. x ∈ Ω such that uk W 2. fk uk and gk = . then there exists a sequence of functions {fk } with fk Lp = 1 and the corresponding sequence of solutions {uk } for Luk = fk (x) . (3.42) Lvk = gk (x) . (3.40) Proof.39) that uk Let vk = Then vk From the equation Lp Lp →∞ as k→∞.) Assume that i) for solutions of Lu = f (x) . everything else is the same as in stage 1.2.1 and use it to derive the regularity of weak solutions for general operator L.40) is false. Then we have a better a priori estimate for the solution of (3.1 (Better A Priori Estimates at Presence of Uniqueness.p (Ω) ≤C f Lp (Ω) .41) (3. then u = 0.38) u W 2.2 Actually.
From (3. for n ≥ 3 x n(n − 2)ωn (x x2 − y)n−2 . Let w = u − v.2 W 2.1. we abbreviate Br (0) as Br . we arrive at Lv = 0 . together with the zero boundary condition. This contradicts with the fact v Lp = 1 and therefore completes the proof. implies w = 0 almost everywhere. Then w weakly satisﬁes w = 0 . it is wellknown that.n≥3 n=2 is the fundamental solution of Laplace equation as introduce in the deﬁnition of Newtonian Potentials. y)f (y)dy is the solution. we must have v = 0. Therefore by the uniqueness assumption.p .42). By the compact Sobolev embedding. x ∈ B1 w(x) = 0 . x ∈ ∂B1 Multiplying both sides of the equation by w and then integrating on B1 .36). For convenience.41) and (3. y) is the Green’s function. y) = 1 1 .43) (3. (3.3. if f is continuous.p . this.p Regularity of Solutions 89 it holds the a priori estimate vk W 2. y) = Γ (x − y) − h(x.p ≤ C( vk Lp + gk Lp ). we derive  w2 dx = 0. then u(x) = B1 G(x. . the same sequence converges strongly to v in Lp . x ∈ Ω.43) imply that {vk } is bounded in W 2. and hence v Lp = 1. B1 Hence w = 0 almost everywhere. Here G(x.2. To see the uniqueness. and hence there exists a subsequence (still denoted by {vk }) that converges weakly to v in W 2. in which Γ (x) = 1 1 n(n−2)ωn xn−2 1 2π ln x . For the existence. assume that both u and v are solutions of (3. The Proof of Proposition 3. and h(x.
Applying the Poincar` inequality. then the corresponding solutions {uδi } is a Cauchy sequence in W 2. in W 2. x2 Proof of Regularity Theorem 3. D2 uδ is in Lp (B1 ). The key diﬀerence here is that in the a priori estimate.2. as i. where fδ (x) = Now. such that x x − y ≥ co . we start from a smaller ball B1−δ for each δ > 0.44. (3. y) is to ensure that G(x. by our result for the Newtonian Potentials. uo = lim uδi . Let uδ (x) = B1 G(x. Moreover.1.p (B1 ). The addition of h(x. and here we only assume that u is a weak solution in W 1. and hence it is in e W 2. noticing that h(x. then there exist a constant co > 0.36). After reading the proof of the a priori estimates. ∀ x ∈ B1 (0).37) holds for uo .90 3 Regularity of Solutions is a harmonic function.1 for p ≥ 2.p a priori estimate in the previous section.1 Show that if y ≤ 1 − δ for some δ > 0.p (B1 ). the reader may notice that if one can establish a W 2.p .p regularity in a small . This completes the proof of Proposition 3. y) vanishes on the boundary. The general frame of the proof is similar to the W 2. y) is smooth when y is away from the boundary (see the exercise below). For the second part. j→∞. y)fδ (y)dy. due to uniqueness and Lemma 3. from 3. f (x) . Moreover. we see that the better a priori estimate (3.2. we preassume that u is a strong solution in W 2.p .p (B1 ) ≤ C f Lp (B1 ) . we have the better a priori estimate uδ W 2.2. x ∈ B1−δ 0. elsewhere . i→∞ Then uo is a solution of (3.p (B1 ) ≤ C fδi − fδj Lp (B1 ) →0 . Exercise 3.1. we derive that uδ is also in Lp (B1 ). We have obtained the needed estimate on the ﬁrst part Γ (x − y)f (y)dy B1 when we worked on the Newtonian Potentials.p (B1 ).44) Pick a sequence {δi } tending to zero. because uδi − uδj Let W 2.2.
j=1 aij as aij and so on.p v = Kv + where (Kv)(x) = −1 −1 (3. For any xo ∈ Ω2δ := {x ∈ Ω  dist(x. 1. Also. for any v ∈ C0 (B2δ (xo )). one can derive the regularity on the whole of Ω. we omitted the summation signs .2. By (3. x ∈ B2δ (xo ) w(x) = 0 .45) In the above. we deﬁne the cutoﬀ function φ(s) = 1 . x − xo  ) and w(x) = η(x)u(x). In order words.p (B2δ (xo )). We abbreviated i. we may assume that aij (xo ) = δij and write equation (3. if s ≥ 2. one can easily verify that F (x) is in Lp (B2δ (xo )).34). By virtue of Proposition 3.47) ([aij (xo ) − aij (x)]vxi )xj .p Let u be a W0 (Ω) weak solution of (3. η(x) = φ( aij (xo )wxi vxj dx = B2δ (xo ) B2δ (xo ) let [aij (xo )−aij (x)]wxi vxj dx+ B2δ (xo ) F (x)vdx where F (x) = f (x) − (aij (x)ηxi u)xj − bi (x)uxi − c(x)u(x). With a change of coordinates. ∞ one can verify that. ∂Ω) ≥ 2δ}.35) and a straight forward calculation. δ Then w is supported in B2δ (xo ). w is a weak solution of aij (xo )wxi xj = ([aij (xo ) − aij (x)]wxi )xj − F (x) . then by an entirely similar argument as in obtaining the a priori estimates. the operator is invertible.2 W 2. ([aij (xo ) − aij (x)]vxi )xj ∈ Lp (B2δ (xo )).46) F (3. n (3. x ∈ ∂B2δ (xo ). As in the proof of the a priori estimates.p Regularity of Solutions 91 neighborhood of each point in Ω. Consider the equation in W 2. if s ≤ 1 0 .1.3. . obviously.45) as w = ([aij (xo ) − aij (x)]wxi )xj − F (x) For any v ∈ W 2.
Exercise 3. we must have w = v and thus conclude that w is also in W 2.2. there exists a unique solution v of equation (3.p (B2δ (xo )) to itself. From the Regularity Theorem in this chapter. consider equation (L + α)w − αw = fk .p Lemma 3. one can verify that (see the exercise below).p (B2δ (xo )). for δ suﬃciently small. i. Therefore.p version of the Fredholm Alternative. From the previous chapter. then for any f ∈ L (Ω). x ∈ Ω.2 LaxMilgram Theorem. we can write v = (L + α)−1 g where (L + α)−1 is a bounded linear operator from L2 (Ω) to W 2.p (Ω). for any given g ∈ L2 (Ω). (3. 1.2.47). x ∈ B2δ (xo ). such that Kφ − Kψ W 2. one can show (as an exercise) the uniqueness of the weak solution of (3. 3.2 Let p ≥ 2. Therefore. one can see that for α suﬃciently large. there exist a unique W0 (Ω) weak solution v of the equation (L + α)v = g . we have v ∈ W 2. such that Lu = f (x) and u W 2.46) when p ≥ 2. there exists a constant 0 < γ < 1.46).2.1.p (B2δ (xo )) ≤γ φ−ψ W 2.2 The Case 1 < p < 2. For each k.2 Assume that each aij (x) is Lipschitz continuous.p p weak solution). Let (Kv)(x) = −1 ([aij (xo ) − aij (x)]vxi )xj .e. Therefore. Similar to the proof of Proposition 3.49) . ∞ Let {fk } ⊂ C0 (Ω) be a sequence of smooth approximations of f (x).92 3 Regularity of Solutions Under the assumption that aij (x) are Lipschitz continuous.48) Proof.2.1. This completes the stage 1 of proof for Theorem 3.2. (3.p (B2δ (xo )). there exist a unique u ∈ W0 (Ω) ∩ W 2.2 (Ω).p (Ω) ≤C f Lp .2. If u = 0 whenever Lu = 0 (in the sense of W0 (Ω) 1.46). Show that. K is a contracting map from W 2.p (B2δ (xo )) .3 Prove the uniqueness of the W 1. Exercise 3. the operator K is a contracting map in W 2. This v is also a weak solution of equation (3. This is actually a W 2.2 (Ω). for suﬃciently small δ. by 1.p (B2δ (xo )) weak solution for equation (3.
2 W 2. Now assume 1 < p < 2. consider the equation .p tinuous. 2.2. x ∈ Ω. (3. Assume that aij (x) are Lipschitz con1.p Regularity of Solutions 93 or equivalently.1.52) Proof. as i.p (Ω).p ≤ C fi − fj Lp →0 .2 (Ω). 1. we derive immediately that uk is also in W 2.50) where I is the identity operator.p ≤ C fk Lp . · · · Moreover. as k→∞. . Suppose there exist a nontrivial weak solution u. or we can deduce this from the Regularity Theorem 3. One 1.2 into W 1. and F = (L + α)−1 fk . ) n Let Ω be a bounded domain in R . and furthermore.p Proposition 3.2. If u is a W0 (Ω) weak solution of (aij (x)uxi )xj = 0. The second half of the above is just the assumption of the theorem.3.p uk →u in W0 (Ω) . j→∞. Now this function u is the desired solution. (I − K)w = F.2 can see that K is a compact operator from W0 (Ω) into itself. we have proved the uniqueness.p (Ω). for any integers i and j. K = α(L + α)−1 . we conclude from Theorem 3.51) Furthermore.p and hence it converges to an element u in W 2.e.50) exists a unique solution if and only if the corresponding homogeneous equation (I − K)w = 0 has only trivial solution. i. Let {uk } be a sequence ∞ of C0 (Ω) functions such that 1. k = 1.2 . Now we can apply the Fredholm Alternative: Equation (3. (3. due to uniqueness.50) possesses a unique solution uk in W0 (Ω).1 that uk is in W 2. 1. This implies that {uk } is a Cauchy sequence in W 2. When p ≥ 2. we have the improved a priori estimate (see Lemma 3. It follows that ui − uj W 2. For each k.2. because of the compact embedding of W 2.2 (Ω). Since fk is in Lp (Ω) for any p. since both α(L + α)−1 uk and (L + α)−1 fk are in W 2. then u = 0 almost every where.2. uk − α(L + α)−1 uk = (L + α)−1 fk . hence.2 equation (3. L(ui − uj ) = fi (x) − fj (x) .1) uk W 2. (3.2 (Uniqueness of W0 Weak Solution for 1 < p < ∞.
55) then u ∈ W 2. Repeating this process. (3. Case ii) If q > p. and Fk (x) is in Lq (Ω). n−p If p1 ≥ q. and by Regularity. then we use the Sobolev embedding W 2.53). q is greater than 2.q (Ω) and it holds the a priori estimate u W 2.53) 1 where 1 + p = 1.q (Ω) ≤ C( u Lq + f Lq ). then obviously.q Proof.p Taking into account that u ∈ W0 (Ω). q Obviously. we will arrive at a pk ≥ q.2. and therefore. with p1 = np . u ∈ W 2.94 3 Regularity of Solutions (aij (x)vxi )xj = Fk (x) := ·  uk p/q uk 1 +  uk 2 .1 and the W 2. If p1 < q. Since we 1.q already have uniqueness for W0 weak solution.56) 1.2. it is easy to see that  uk p/q uk 1 +  uk 2 →  up/q u 1 +  u2 in Lq (Ω). Case i) If q ≤ p.p1 . if u ∈ W0 (Ω) is a weak solution of Lu = f (x) .2 guarantees the existence of a solution vk for equation (3. q > 1 and f ∈ Lq (Ω).54) Since uk →u in W 1.2.54). This completes stage 2 of the proof for Theorem 3. we use the fact that u is a W 1.p → W 1. (3.p1 . we must have u = 0 almost everywhere. by (3. This completes the proof of the proposition.3 Other Useful Results Concerning the Existence. 3. the rest of the arguments are entirely the same as in stage 1. 1.2. and the results follow from the Regularity Theorem 3.2. After proving the uniqueness. Uniqueness.1. we are done. and Regularity 1.p1 weak solution. (3. for p < 2.2 Given any pair p. Through integration by parts several times. one can verify that 0= Ω vk (aij (x)uxi )xj dx = Ω  uk p/q uk 1 +  uk 2 · u dx. . Lemma 3. x ∈ Ω.p . we ﬁnally deduce that u = 0 almost everywhere. u is also a W0 (Ω) weak solution. (3. after a few steps.q a priori estimates.p Theorem 3.
p regularity in this case. then u = 0. Now.2.p Version of the Fredholm Alternative). the proof of this theorem is entirely the same as for Lemma 3. The equation (3.p (Ω).p any f ∈ Lp (Ω).2. 1.p (Ω) ≤C f Lp . 2n Since u ∈ L n−2 (Ω).q 1.1 Bootstrap Assume that u ∈ H 1 (Ω) is a weak solution of − u = up (x) . Let 1 < p < ∞.1 For any pair p.57) boosts the solution u to W 2. then u = 0. write p= for some δ > 0. there exist a unique u ∈ W0 (Ω) ∩ W 2.q1 (Ω) with q1 = By the Sobolev Embedding 2n . Theorem 3. x ∈ Ω 2n (3.3 Regularity Lifting 3. u ∈ L n−2 (Ω).p If u = 0 whenever Lu = 0 (in the sense of W0 (Ω) weak solution). up ∈ L p(n−2) (Ω) = L (n+2)−δ(n−2) (Ω). Finally the “Schauder Estimate” will lift u to C 2+α (Ω) and hence to be a classical solution. we can derive immediately the equivalence between 1.2.2. n+2 For each ﬁxed p < n−2 . (n + 2) − δ(n − 2) 2n 2n n+2 −δ n−2 . then for 1. we have stated this theorem in Lemma 3. q > 1.3. We call this the “Bootstrap Method” as will be illustrated below.3 (W 2. the following are equivalent 1. If the power p is less than the n+2 critical number n−2 . 1. then the regularity of u can be enhanced repeatedly through the equation until we reach that u ∈ C α (Ω).3. 3.q ii) If u ∈ W0 (Ω) is a weak solution of Lu = 0. For 2 ≤ p < ∞.2. after we obtained the W 2. such that Lu = f (x) and u W 2. for 1 < p < 2.p the uniqueness of weak solutions in any two spaces W0 and W0 : Corollary 3.57) Then by Sobolev Embedding.p i) If u ∈ W0 (Ω) is a weak solution of Lu = 0.3 Regularity Lifting 95 From this Theorem.2.
We are done.3. then u ∈ Ls2 (Ω). for both experts and nonexperts in the ﬁeld. one can see that. we have either u ∈ Lq (Ω) for any q.q2 (Ω).2 Regularity Lifting Theorem Here. u ∈ C 2+α (Ω). the so called “critical power”. To deal with this situation. it is based on the following Regularity Lifting Theorem. (1 − δ)[n + 2 − δ(n − 2)] − 4 1 The integrable power of u has again been ampliﬁed by at least 1−δ times. after a ﬁnite many steps. i. n − 2q1 n−21−δ 1 1−δ nq1 The integrable power of u has been ampliﬁed by Now. we will boost u to Lq (Ω) for any q. We believe that our method will provide convenient ways. Continuing this process. with s2 = It is easy to verify that s2 ≥ 2n 1 1 = s1 . in obtaining regularities. and the integrable power of the solution u can not be boosted this way. Finally. we present a simple method to boost regularity of solutions. then δ = 0.e. (1 − δ)(n − 2) 1 − δ 1−δ 2n . if p = n−2 . However. if (1 − δ)[n + 2 − δ(n − 2)] − 4 ≤ 0 . if 2q2 ≥ n. by the Schauder estimate. we introduce a “Regularity Lifting Method” in the next subsection. Essentially. up ∈ Lq2 (Ω) with q2 = (> 1) times. The essence of the approach is wellknown in the analysis community. and hence in C α (Ω) for some 0 < α < 1. then. n+2 From the above argument. it is a classical solution.96 3 Regularity of Solutions W 2. s1 4n = .q1 (Ω) → L n−2q1 (Ω) we have u ∈ Ls1 (Ω) with s1 = nq1 2n 1 = . It has been used extensively in various forms in the authors previous works. or u ∈ C α (Ω). By Sobolev embedding. p (1 − δ)[n + 2 − δ(n − 2)] And hence through the equation. It is much more general and is very easy to use. 3. . the version we present here contains some new developments. and therefore. we derive u ∈ W 2. If 2q2 < n.
Let Deﬁne a new norm · Z by · Z · X and · Y be two norms on Z. 0 < θ2 < 1 such that T h1 − T h2 Y ≤ θ2 h1 − h2 Y . 0 < θ1 < 1 such that T h1 − T h2 X ≤ θ1 h1 − h2 X. Similarly. according to what one needs. Proof. we assume that Z is complete with respect to the norm · Z .3 Regularity Lifting 97 Let Z be a given vector space. Step 1. Since T : Z −→ Z is a contraction. one can choose p. 3. Theorem 3. ≤ θ h1 − h2 Step 2. T h1 − T h2 Z = p T h1 − T h2 p p X + T h1 − T h2 p X p + θ2 h1 − h2 p Y p Y ≤ p θ1 h1 − h2 Z. It’s easy to see that Z = X ∩ Y . Let X and Y be the completion of Z under · X and · Y . for any h1 . given g ∈ Z. Here. Then f also belongs to Z. = p · p X + · p Y For simplicity. h2 ∈ Z. (3. First show that T : Z −→ Z is a contraction. Then by Sobolev embedding. f = h ∈ Z since both h and f are solutions of x = T x + g in X. We see that T : X −→ X is also a contraction and g ∈ Z ⊂ X. 2n .3 Applications to PDEs Now.3. Since T is a contraction on X. θ2 }.3.58) Still assume that Ω is a smooth bounded domain in Rn with n ≥ 3. Let 1 u ∈ H0 (Ω) be a weak solution of equation (3. Then.1 Let T be a contraction map from X into itself and from Y into itself. there exists a constant θ1 . The equation x = T x + g has a unique solution in X. 1 ≤ p ≤ ∞. respectively. we explain how the “Regularity Lifting Theorem” proved in the previous subsection can be used to boost the regularity of week solutions involving critical exponent: n+2 − u = u n−2 . we can ﬁnd a constant θ2 .58). Thus. Let θ = max{θ1 . Assume that f ∈ X. and that there exits a function g ∈ Z such that f = T f + g. u ∈ L n−2 (Ω). we can ﬁnd a solution h ∈ Z such that h = T h + g.3.
α C (Ω).3.3.58).58) in two parts: u n−2 = u n−2 u := a(x)u. Then by our Regularity Theorem 3.3. For a positive number A. Then u is in Lp (Ω) for any 1 ≤ p < ∞.1. we consider the regularity of the weak solution of the following equation − u = a(x)u + b(x). Let G(x. y) be the Green’s function of − (T f )(x) = (− )−1 f (x) = Ω in Ω.2 Assume that a(x). and hence it is a classical solution. y)f (y)dy. n ).q (Ω). Now let’s come back to nonlinear equation (3. Then obviously 0 < G(x. we arrive at u ∈ C 2. n n n+2 4 (3. more generally.α regularity.59) in H0 (Ω).58). 1 Corollary 3.α (Ω).α (Ω) via Sobolev embedding. a(x) ∈ L 2 (Ω). This implies that u ∈ C 1. 2 (Ω) via the W 2. Finally.1 If u is a H0 (Ω) weak solution of equation (3. deﬁne aA (x) = and a(x) if a(x) ≥ A 0 otherwise . we can only have n u ∈ W 2. Remark 3. we ﬁrst conclude that u is in Lq (Ω) for any 1 < q < ∞. and hence derive that u ∈ Lp (Ω) for any p < ∞ by Sobolev embedding. Assume that u is a 1 H0 (Ω) weak solution.59) Theorem 3. Proof of Theorem 3. x − yn−2 L n−2q (Ω) ≤ Γ ∗f nq L n−2q (Rn ) ≤C f Lq (Ω) .1 Even in the best situation when a(x) ≡ 0. 2 Tf nq Cn . we may extend f to be zero outside Ω when necessary. for any q ∈ (1.98 3 Regularity of Solutions We can split the right hand side of (3.p estimate. .2. Hence. From the above theorem. aB (x) = a(x) − aA (x). Let u be any weak solution 1 of equation (3.2. b(x) ∈ L 2 (Ω). then u ∈ 2. y) < Γ (x − y) := and it follows that. Here. u is in W 2. Deﬁne G(x. from the classical C 2.3. Then obviously.
we derive immediately that u ∈ Lp (Ω). n For any n−2 < p < ∞. one can choose a large number A. y)b(y)dy. choose 1 < q < to be zero outside Ω. y)aA (y)u(y)dy. n 2. i) The Estimate of the Operator TA . we derive o TA u Lp (Ω) ≤ Γ ∗ aA u n Lp (Rn ) ≤ C aA u Lq (Ω) ≤ C aA L 2 (Ω) n u Lp (Ω) .3 Regularity Lifting 99 Let (TA u)(x) = Ω G(x. We will show that. (a) First consider TA u Lp (Ω) ≤ 1 FA (x) := Ω G(x. Then equation (3. for any 1 ≤ p < ∞. Since a(x) ∈ L 2 (Ω). such that 2 p= nq n − 2q and then by H¨lder inequality. there is a q. ≤ Γ ∗b Lp (Rn ) ≤C b Lq (Ω) L 2 (Ω) n Hence 1 FA (x) ∈ Lp (Ω). such that aA and hence arrive at L 2 (Ω) n ≤ 1 2 1 u Lp (Ω) . . y)[aB (y)u(y) + b(y)]dy. 1 < q < n .59) can be written as u(x) = (TA u)(x) + FA (x). and ii) FA (x) is in Lp (Ω). Then by the “Regularity Lifting Theorem”. Extending b(x) < ∞. 2 That is TA : Lp (Ω)→Lp (Ω) is a contracting operator. n For any n−2 < p < ∞. we have 1 FA Lp (Ω) such that p = ≤C b nq n−2q . i) TA is a contracting operator from Lp (Ω) to Lp (Ω) for A large. where FA (x) = Ω G(x.3. ii) The Integrability of FA (x).
However. for any 1 < p < ∞ if 3 ≤ n ≤ 6 2n u ∈ Lp (Ω) . we arrive at u ∈ Lp (Ω) . for the following values of p and dimension n. Let L be a second order uniformly elliptic operator in divergence form n n Lu = − i. Applying the Regularity Lifting Theorem. Continuing this way. The only thing that prevent us from reaching the full range of p is the 2n 2 estimate of FA (x). for any 1 < p < . for any 1 < p < n−6 if n > 6. y)aB (y)u(y)dy.j=1 (aij (x)uxi )xj + i=1 bi (x)uxi + c(x)u. n−6 Now from this point on. we have 2 FA Lp (Ω) ≤ aB u Lq (Ω) ≤C u Lq (Ω) . n−6 n−2 we conclude that. Each step. 1 < p < ∞ when 3 ≤ n ≤ 6 2n 1 < p < n−6 when n > 6. for any 1 < p < n−10 if n > 10. we prove our theorem for all dimensions.60) we will prove . from the starting point where u ∈ L n−2 (Ω). By the boundedness of aB (x). Now we will use the similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem. we will reach u ∈ Lp (Ω) . FA (x) ∈ Lp (Ω). through an entire similar argument as above. (3.100 3 Regularity of Solutions (b) Then estimate 2 FA (x) = Ω G(x. This completes the proof of the Theorem. for any 1 < p < ∞ if 3 ≤ n ≤ 10 2n u ∈ Lp (Ω) . and n−2 2n 2n when q = . and hence TA is a contracting operator from Lp (Ω) to Lp (Ω). Noticing that u ∈ Lq (Ω) for any 1 < q ≤ p= 2n . we have arrived at 2n u ∈ Lp (Ω) . we add 4 to the dimension n for which holds u ∈ Lp (Ω) . 1 ≤ p < ∞.
Let Lo u := − n (aij (x)uxi )xj . deﬁne biA (x) = Let bi (x) .3 Regularity Lifting 101 Theorem 3. x) where TA u = −L−1 o i (3. biB (x) := bi (x) − biA (x). 1.2.61) as u = TA u + FA (u.2.3. if n/2 < p < ∞.3. if bi (x) ≥ A 0.p Regularity under Weaker Conditions) Let Ω be a bounded domain in Rn and 1 < p < ∞. ii) n if 1 < p < n L (Ω). Then u is in W 2. Proof.n+δ (Ω) for some δ > 0. if n < p < ∞. i. Then L−1 is a bounded linear operator from Lp (Ω) to o o W 2.3.61) as n n (3.p (Ω).p (Ω). and it follows from Theorem 3. Denote it by L−1 . bi (x) ∈ Ln+δ (Ω).62) biA (x)uxi + cA (x)u . iii) n/2 if 1 < p < n/2 L (Ω). if p = n/2 p L (Ω).2. elsewhere .j=1 (aij (x)uxi )xj = − i=1 bi (x)uxi + c(x)u + f (x).3 (W 2. if p = n p L (Ω). Assume that i) aij (x) ∈ W 1.61) − i. c(x) ∈ Ln/2+δ (Ω). we know that equation Lo u = 0 has only trivial solution. Similarly for CA (x) and CB (x). For each i and each positive number A.j=1 By Proposition 3.p Suppose f ∈ Lp (Ω) and u is a W0 (Ω) weak solution of Lu = f (x) x ∈ Ω. Write equation (3. the operator Lo is invertible. Now we can rewrite equation (3.
e. cA Ln/2 can be made arbitrarily small if A is suﬃciently large. if n < p < ∞. f ∈ Lp (Ω).65). o one can see that.64) and (3.65) Under the conditions of the theorem. x).p (Ω) to W 2.p v W 2.p . ∀ v ∈ W 2. ·) ∈ W 2.p (Ω).p (Ω). This implies (3. Since biB (x) and ciB (x) are bounded. It follows from the Contract Mapping Theorem.p (Ω).p (Ω). i.63).p (Ω) . we deduce that TA is a contract mapping from W 2.p (Ω). by H¨lder inequality and Sobolev embedding from W 2. such that v = TA v + FA (u. x) = −L−1 o i biB (x)uxi + cB (x)u + L−1 f (x). In fact. (3.p . o We ﬁrst show that FA (u. Then we prove that TA is a contracting operator from W 2. and cA v cA cA ≤C· cA Ln/2 Lp v W 2.p .63) ∈ Lp (Ω) for any u ∈ W 1. if 1 < p < n biA Ln+δ v W 2. biA Ln v W 2. 2. Hence for any > 0. there exists an A > 0. By uniqueness.p .p (Ω). there exist a v ∈ W 2. biA Ln . ∀u ∈ W 1. . one can easily verify that biB (x)uxi + cB (x)u i (3.p to W 1. the ﬁrst norms on the right hand side of (3.q . if p = n/2 if n/2 < p < ∞. for any v ∈ W 2. such that biA (x)vxi + cA (x)v i Lp ≤ v W 2.p (Ω).p (Ω).64) biA Lp v W 2.p . Therefore we derive that u ∈ W 2.p . Lp Ln/2+δ v W 2. Taking into account that L−1 is a bounded operator from Lp (Ω) to o W (Ω).102 3 Regularity of Solutions and FA (u. This completes the proof of the theorem.p Now u and v satisfy the same equation. if p = n biA Dv Lp ≤ C · (3.p (Ω) to W 2. if 1 < p < n/2 . we must have u = v.
which is the well known Yamabe equation.69) and (3. We will use the Regularity Lifting Theorem to boost u to Lq (Rn ) for any 1 < q < ∞. In the context of the HardyLittlewoodSobolev inequality. . n−α (3. n−α (3. and u ∈ L α (Rn ).70) n(p−1) n .71) then conditions (3. and K(y) ≤ C(1 + Rn n Then u is in Lq (Rn ) for any 1 < q < ∞.4 Applications to Integral Equations As another application of the Regularity Lifting Theorem. one can prove (See [CLO]) that the integral equation (3.66) and the PDE (3. it is natural 2n to start with u ∈ L n−α (Rn ). and hence in L∞ (Rn ). we consider the following equation u(x) = Rn K(y)u(y)p−1 u(y) dy x − yn−α (3. yγ (3. Remark 3. x ∈ Rn . we consider the following integral equation u(x) = Rn n+α 1 u(y) n−α dy .67) Actually. In the special case α = 2.69) K(y)u(y)p−1  α dy < ∞ . Assume that u ∈ Lqo (Rn ) for some qo > n . K(x) ≤ M.68).3. (3. (3. It is also closely related to the following family of semilinear partial differential equations (−∆)α/2 u = u(n+α)/(n−α) .68) for some 1 < p < ∞.2 i) If p> 1 ) for some γ < α .3.70) are satisﬁed. n−α x − y (3.3. We prove Theorem 3.3.67) becomes −∆u = u(n+2)/(n−2) .67) are equivalent. x ∈ Rn .3 Regularity Lifting 103 3. x ∈ Rn n ≥ 3.66) where α is any real number satisfying 0 < α < n.4 Let u be a solution of (3. More generally. It arises as an EulerLagrange equation for a functional under a constraint in the context of the HardyLittlewoodSobolev inequalities.
we apply the H¨lder inequality to the o right hand side of the above inequality H(y) n+αq f (y) n+αq dy Rn nq n+αq ·r 1 r nq nq n+αq nq ≤ Rn H(y) n α dy Rn f (y) nq n+αq ·s 1 s n+αq nq dy (3. one can verify that ua satisﬁes the equation ua = Tua ua + g(x) (3. deﬁne ua (x) = u(x) .68) when K(x) ≡ 1 possesses singular solutions such as c u(x) = α . we ﬁrst apply the HardyLittlewoodSobolev inequality Lq Tua w ≤ C(α. it is obvious that g(x) ∈ L∞ ∩ Lqo . n. then equation (3.104 3 Regularity of Solutions ii) Last condition in (3. Then since u satisﬁes equation (3. x p−1 Proof of Theorem 3.73) Then write H(x) = K(x)ua (x)p−1 .74) = Rn α n H(y) dy w Lq .70). if u(x) > a .71) is somewhat sharp in the sense that if it is violated. or if x > a ua (x) = 0 . Here we have chosen s= n + αq n + αq and r = .4: Deﬁne the linear operator Tv w = Rn K(y)v(y)p−1 w dy. otherwise . x − yn−α For any real number a > 0. x − yn−α Under the second part of the condition (3.72) with the function g(x) = Rn K(y)ub (y)p−1 ub (y) dy − ub (x).3. n αq . Let ub (x) = u(x) − ua (x).68). For any q > to obtain n n−α . q) Kua p−1 w nq L n+αq . (3.
77) in Lqo . From (3.75). we must have ua = w ∈ Lpo ∩ Lqo for any n po > n−α . ua is a solution of (3. Let w be the solution of (3. Tua w Lq ≤ 1 w 2 Lq . n.72). q)( K(y)ua (y)p−1  α dy) n w n α Lq .76) to both the case q = qo and the case q = po > qo . This completes the proof of the Theorem.75) By (3. By the uniqueness.74) that Tua w Lq ≤ C(α. . then w is also a solution in Lqo .76) Applying (3. (3. for suﬃciently large a. we deduce. we see that the equation w = Tua w + g(x) (3.77) has a unique solution in both Lqo and Lpo ∩ Lqo .3 Regularity Lifting 105 It follows from (3.3.77) in Lpo ∩ Lqo . So does u. (3.70) and (3. and by the Contracting Mapping Theorem.73) and (3.
.
for example. which is a generalization of the ﬂat Euclidean space.5.4.2 4. and curvatures.3 Curvature on Riemannian Manifolds 4. it is a union of open sets of R2 .5 Calculus on Manifolds 4. We call this curved space a manifold. Intuitively. For more rigorous arguments and detailed discussions. and the manifold as a whole is obtained by pasting together pieces of the Euclidean spaces. the Universe we live in is a curved space.4.1 4.1 Diﬀerentiable Manifolds A simple example of a two dimensional diﬀerentiable manifold is a regular surface in R3 .5. tangent space. each neighborhood of a point on a manifold is homeomorphic to an open set in the Euclidean space.4 Preliminary Analysis on Riemannian Manifolds 4. we will introduce the most basic concepts of diﬀerentiable manifolds.1 Curvature of Plane Curves 4.4.3 Equations on Prescribing Gaussian and Scalar Curvature 4.4 Diﬀerentiable Manifolds Tangent Spaces Riemannian Metrics Curvature 4.2 Integrals 4.1 Higher Order Covariant Derivatives and the LaplaceBertrami Operator 4. the book Riemannian Geometry by do Carmo [Ca]. please see.3 4.6 Sobolev Embeddings According to Einstein. In this section. Roughly speaking.2 Curvature of Surfaces in R3 4. organized in such a . Riemannian metrics. We will try to be as intuitive and brief as possible. 4.5.
V3 = {x ∈ S 1  x1 > 0}. we can use the following four coordinates charts: V1 = {x ∈ S 1  x2 > 0}.1 (Diﬀerentiable Manifolds) A diﬀerentiable ndimensional manifold M is (i) a union of open sets. Therefore. φ4 (x) = x2 . for each p ∈ U . x2 > 0. S 1 is a 1dimensional diﬀerentiable manifold. V4 = {x ∈ S 1  x1 < 0}. (iii) The family {Vα . A manifold M is a union of open sets. x1 > 0. . 2 They are both diﬀerentiable functions. Obviously. and (U. the images of the intersection φα (U ) and φβ (U ) are open sets in Rn and the mappings φβ ◦ φ−1 α are diﬀerentiable. M = α Vα and a family of homeomorphisms φα from Vα to an open set φα (Vα ) in Rn . one can easily complete it into a maximal one. · · · . In fact. such that (ii) For any pair α and β with Vα Vβ = U = ∅. then we can deﬁne the coordinates of p to be (x1 . φ1 (x) = x1 . if φU (p) = (x1 .1. On the intersection of any two open sets. The ndimensional unit sphere S n = {x ∈ Rn+1  x2 + · · · + x2 = 1}. we have x2 = x1 = 1 − x2 . In each open set U ⊂ M . x2 . Given a diﬀerentiable structure on M . φU ) is called a coordinates chart. φ3 (x) = x2 . Same is true on other intersections. V2 = {x ∈ S 1  x2 < 0}. This is called a local coordinates of p. More precisely. V2 . then it is called a diﬀerentiable structure of M . S 1 is the union of the four open sets V1 . we have Deﬁnition 4. φ2 (x) = x1 . with this diﬀerentiable structure. 1 n+1 Take the unit circle S 1 for instance. · · · xn ). V3 . and V4 . A trivial example of ndimensional manifolds is the Euclidean space Rn . say on V1 V3 . by taking the union of all the coordinates charts of M that are compatible with (in the sense of condition (ii)) the ones in the given structure. φα } are maximal relative to the conditions (i) and (ii). xn ) in Rn . The following are two nontrivial examples. 1 1 − x2 . let φU be the homeomorphism from U to φU (U ) in Rn . If a family of coordinates charts that covers M are well made so that the transition from one to the other is in a diﬀerentiable way as in condition (ii). one can deﬁne a coordinates chart. with the diﬀerentiable structure given by the identity.108 4 Preliminary Analysis on Riemannian Manifolds way that at the intersection of any two open sets the change from one to the other can be made in a diﬀerentiable manner. x2 . Example 1.
xi+1 /xi . · · · . ξn+1 ). n + 1. j k ξi j ξi = j 1 i . · · · xn+1 ) ∼ (λx1 . . let Vi = {[x]  xi = 0}. we can deﬁne a tangent space at each point. Deﬁne i i i i φi ([x]) = (ξ1 . at each given point p. For i = 1. there exists a tangent line or a tangent plane. · · · . or the quotient space of Rn+1 \ {0} by the equivalence relation (x1 . ξj are obviously smooth. λxn+1 ). we have to ﬁnd a characteristic property of the tangent vector which will not involve the concepts of velocity. 0) ∈ Rn+1 . φi } is a diﬀerentiable structure of P n (R) and hence P n (R) is a diﬀerentiable manifold. Apparently. · · · xi−1 /xi . ). then [x1 . t ∈ (− . and each tangent vector is deﬁned as the “velocity in the ambient space R3 .2 Tangent Spaces 109 Example 2. n+1 P n (R) = i=1 Vi . · · · . Observe that if xi = 0. i where ξk = xk /xi (i = k) and 1 ≤ i ≤ n + 1. xn+1 ] = [x1 /xi . ξi−1 .4. For a regular surfaces S in R3 . xn (t)). thus {Vi . We denote the points in P n (R) by [x]. 4. there is no such an ambient space. Vi is the set of straight lines of Rn+1 which passes through the origin and which do not belong to the hyperplane xi = 0. with γ(0) = p. The ndimensional real projective space P n (R). let’s consider a smooth curve in Rn : γ(t) = (x1 (t). · · · . · · · . the changes of coordinates i ξ j = ξk k = i. the tangent plane is the space of all tangent vectors. 2. Since on a diﬀerential manifold. To this end. Similarly.2 Tangent Spaces We know that at every point of a regular curve or at a regular surface. Geometrically. 1. · · · . It is the set of straight lines of Rn+1 which passes through the origin (0. given a diﬀerentiable structure on a manifold. · · · . On the intersection Vi Vj . λ ∈ R. ξi+1 . · · · xn+1 /xi ]. λ = 0.
and . φ) that covers p with φ(p) = 0 ∈ φ(U ).110 4 Preliminary Analysis on Riemannian Manifolds The tangent vector of γ at point p is v = (x1 (0). ∂xi Hence the directional derivative is an operator on diﬀerentiable functions that depends uniquely on the vector v.e. · · · . Then the directional derivative of f along vector v ∈ Rn can be expressed as d (f ◦ γ) t=0 = dt n i=1 ∂xi ∂f (p) (0) = ( ∂xi ∂t xi (0) i ∂ )f. 0. xn (0)). Deﬁnition 4. γ (0) = i xi (0) ∂ ∂xi . Let γ(t) = (x1 (t). i. ∂xi ∂xi p i i This shows that the vector γ (0) can be expressed as the linear combination of ∂ ∂xi p .1 A diﬀerentiable function γ : (− . Let (x1 . Then γ (0)f = d d f (γ(t)) t=0 = f (x1 (t). p Here ∂ ∂xi p is the tangent vector at p of the “coordinate curve”: xi →φ−1 (0. xn ) be the coordinates in a local chart (U. 0). xn (t)) be a curve in M with γ(0) = p. The tangent vector to the curve γ at t = 0 is an operator (or function) γ (0) : D→R given by γ (0)f = d (f ◦ γ) t=0 . We can identify v with this operator and thus generalize the concept of tangent vectors to diﬀerentiable manifolds M . · · · . · · · . xn (t)) t=0 dt dt ∂ ∂f = xi (0) = xi (0) f. 0. )→M is called a (diﬀerentiable) curve in M . · · · . The set of all tangent vectors at p is called the tangent space at p and denoted by Tp M . · · · . Let f (x) be a smooth function near p. xi . dt A tangent vector at p is the tangent vector of some curve γ at t = 0 with γ(0) = p.2. · · · . Let D be the set of all functions that are diﬀerentiable at point p = γ(0) ∈ M .
dfp ( ∂xi p ∂xi p In particular. (dx2 )p . We call dfp the diﬀerential of f .2 Tangent Spaces 111 ∂ ∂xi f= p ∂ f (φ−1 (0. 0. And the cotangent space of M is T ∗M = p∈M ∗ Tp M. )→M . From the deﬁnition. We ﬁrst introduce the diﬀerential of a diﬀerentiable mapping. Obviously. it is a linear mapping from one tangent space Tp M to the other Tf (p) N . dfp = i ∂ ∂xj ∂f ∂xi ) = δij . Deﬁnition 4. Deﬁne the mapping dfp : Tp M →Tf (p) N by dfp (v) = β (0). And one can show that it does not depend on the choice of γ (See [Ca]). we have (dxi )p ( And consequently. · · · . Let β = f ◦ γ. p From here we can see that (dx1 )p .2 Let M and N be two diﬀerential manifolds and let f : M →N be a diﬀerentiable mapping. p (dxi )p . with γ(0) = p and γ (0) = v. ∂x1 p ∂xn p The dual space of the tangent space Tp M is called the cotangent space. choose a diﬀerentiable curve γ : (− . xi . When N = R1 . (dxn )p ∗ form a basis of the cotangent space Tp M at point p. for the diﬀerential (dxi )p of the coordinates function xi .···. . 0. the collection of all such diﬀerentials form the cotangent space. one can derive that ∂ ∂ )= f. ∗ and denoted by Tp M .3 The tangent space of M is TM = p∈M Tp M. · · · . For every p ∈ M and for each v ∈ Tp M . . We can realize it in the following.2.2. · · · . ∂xi One can see that the tangent space Tp M is an ndimensional linear space with the basis ∂ ∂ .4. 0)) xi =0 . Deﬁnition 4.
which varies diﬀerentiably. The restriction dt of the curve γ to a closed interval [a.1 A diﬀerentiable manifold M has a Riemannian metric. A diﬀerentiable manifold with a given Riemannian metric will be called a Riemannian manifold. as well as other quantities in geometry. For each t ∈ I. and γ (t) ≡ dγ ≡ dγ( dt ) is a tangent vector of Tγ(t) M . area. a . there is a natural way of measuring the length of vectors tangent to S.3 Riemannian Metrics To measure the arc length.1 A Riemannian metric on a diﬀerentiable manifold M is a correspondence which associates to each point p of M an inner product < ·. > enable us to measure not only the lengths of curves on S. q ∂ ∂xj >q q is a diﬀerentiable function of q for all q near p.3. γ (t) >. We deﬁne the length of the segment by b < γ (t). For a surface S in the three dimensional Euclidean space R3 . γ (t) >1/2 dt. and let γ : I→M be a diﬀerentiable mapping ( curve ) on M . which are simply the length of the vectors in the ambient space R3 . Let I be an open interval in R1 . its length is the integral b γ (t)dt.3. The deﬁnition of the inner product <. More precisely. Given a curve γ(t) on S for t ∈ [a. dγ is a linear mapping from Tt I to d Tγ(t) M . or ‘metrics’. It is clear this deﬁnition does not depend on the choice of coordinate system. but also the area of domains on S. we have Deﬁnition 4. Also one can prove that ( see [Ca]) Proposition 4. that is. These can be expressed in terms of the inner product <. > in R3 . a where the length of the velocity vector γ (t) is given by the inner product in R3 : γ (t) = < γ (t). or volume on a manifold. we need to introduce some kind of measurements. gij (q) ≡< ∂ ∂xi . Now we show how a Riemannian metric can be used to measure the length of a curve on a manifold M . Now we generalize this concept to diﬀerentiable manifolds without using the ambient space.112 4 Preliminary Analysis on Riemannian Manifolds 4. · >p on the tangent space Tp M . b] ⊂ I is called a segment. b].
it is the same as det(gij ). and by virtue of (4. · · · .3 Riemannian Metrics 113 Here the arc length element is ds =< γ (t).4. φ) with coordinates (x1 . i = 1. it is natural to deﬁne the volume of D as vol(D) = det(gij )dx1 · · · dxn . Let {e1 . (4. let’s begin with the volume of the parallelepiped form by the tangent vectors. · · · . · · · . xn (t)). el >= jl j aij akj .j=1 n < ∂ ∂ . 0. · · · . · · · n. φ(U ) A trivial example is M = Rn . . This expresses the length element ds in terms of the metric gij . p Consequently. · · · xn ). (4. >p xi (t)dt xj (t)dt ∂xi ∂xj = i. 1. · · · ∂xn is the determinant det(aij ).1) As we mentioned in the previous section. γ (t) >1/2 dt. To derive the formula for the volume of a region (an open connected subset) D in M in terms of metric gij . · · · .j=1 gij (p)dxi dxj . let (x1 . Let ∂ (p) = aij ej . the ndimensional Euclidean space with ∂ = ei = (0. φ) that covers p = γ(t) = (x1 (t).2). ej >= δij . ∂xi j Then gik (p) =< ∂ ∂ . >p = ∂xi ∂xk aij akl < ej . 0). From the above arguments. xn ) be the coordinates in a local chart (U.2) ∂ ∂ We know the volume of the parallelepiped form by ∂x1 . Then γ (t) = i xi (t) ∂ ∂xi . ds2 = < γ (t). en } be an orthonormal basis of Tp M . Assume that the region D is cover by a coordinate chart (U. γ (t) > dt2 n = i. ∂xi The metric is given by gij =< ei .
ψ). 0). ∂θ ∂θ g12 (p) = g21 (p) =< and g22 (p) =< ∂ ∂ . These conform with our knowledge on the length and area in the spherical coordinates. and the area element is dA = det(gij )dθdψ = sin θdθdψ. ψ) with x = sin θ cos ψ y = sin θ sin ψ z = cos θ to illustrate how we use the measurement (inner product) in R3 to derive the expression of the standard metric gij in term of the local coordinate (θ. Let p be a point on S 2 covered by a local coordinates chart (U. .. the 2dimensional unit sphere with standard metric. ψ) of the coordinate curves ψ = constant and θ =constant respectively. we have. >p = sin2 θ. ∂θ ∂ψ ∂ ∂ . .114 4 Preliminary Analysis on Riemannian Manifolds A less trivial example is S 2 . sin θ cos ψ. ψ) on S2. We will use the usual spherical coordinates (θ. φ) with coordinates (θ. ) = (cos θ cos ψ. ∂θ ∂θ ∂θ ∂x ∂y ∂z .e. the length element is ds = g11 dθ2 + 2g12 dθdψ + g22 dψ 2 = dθ2 + sin2 dψ 2 . ∂ψ ∂ψ Consequently. − sin θ). ∂ ∂θ and ∂ ∂ψ =( p p the ∂x ∂y ∂z . y. In the coordinates of R3 . >p = 0. with the metric inherited from the ambient space R3 = {(x. . ∂ψ ∂ψ ∂ψ =( p It follows that g11 (p) =< ∂ ∂ . ) = (− sin θ sin ψ. i. >p = cos2 θ cos2 ψ + cos2 θ sin2 ψ + sin2 θ = 1. z)}. We ﬁrst derive the expression for −1 ∂ ∂θ p and ∂ ∂ψ tangent vectors at point p = φ (θ. cos θ sin ψ.
Let ∆α be the amount of change of the angle from T(p) to T(q). This intersection is a plane curve. To measure the extent of bending of a curve. then the unit tangent vector at c(t) is T(t) = and (x (t). we introduce curvature. We deﬁne the curvature at p as dα ∆α = (p). one can verify that k(c(t)) = dT x (t)y (t) − y (t)x (t) = . (4.4.4 Curvature 115 4. and ∆s the arc length between p and q on the curve. and a circle of radius one bends more than a circle of radius 2. we ﬁnd that the curvature at point (x. The maximum and minimum values of the normal curvature at point p are called principal curvatures. y (t)) [x (t)]2 + [y (t)]2 . Let q be a nearby point. replacing t by x and y(t) by f (x). p be a point on S. ds = [x (t)]2 + [y (t)]2 dt. By a straight forward calculation. we feel that a straight line does not bend at all. and negative otherwise.2 Curvature of Surfaces in R3 1.4 Curvature 4.1 Curvature of Plane Curves Consider curves on a plane. Let p be a point on a curve. y(t)). and its curvature is taken as the absolute value of the normal curvature. ds {[x (t)]2 + [y (t)]2 }3/2 If a plane curve is given explicitly as y = f (x). or the amount it deviates from being straight.4. f(x)) is f (x) k(x. and usually denoted by k1 (p) and k2 (p). which is positive if the curve turns in the same direction as the surface’s chosen normal. consider the intersection of the surface with the plane containing T(p) and N(p). Given a tangent vector T(p) at p.4. and T(p) be the unit tangent vector at p.3) {1 + [f (x)]2 }3/2 4. . f (x)) = . Principle Curvature Let S be a surface in R3 . and N(p) a normal vector of S at p. then in the above formula. The normal curvature varies as the tangent vector T(p) changes. Intuitively. k(p) = lim q →p ∆s ds If a plane curve is given parametrically as c(t) = (x(t).
we write fi = ∂xi and we will use fij to denote ∂xi ∂xj . i = 1. and the xy plane is tangent to S. A sphere of radius R has Gaussian curvature R2 . one may chose outer normals or inner normals). we arrive at kn (u) = (f11 u2 + 2f12 u1 u2 + f22 u2 )(0) = uT F u. the Gaussian curvature of a sphere is positive. 0). for a closed surface. and uT denotes the transpose of u. It follows that .3). ∂ f ∂f where. It is deﬁned as the product of the two principal curvatures K(p) = k1 (p) · k2 (p). (4. 0) = 0 and f1 (0. x2 ). y) be a diﬀerentiable function.4) Since the matrix F is symmetric. Assume that the origin is on S. From this. 2. and 1 of a torus or a plane is zero.e.116 4 Preliminary Analysis on Riemannian Manifolds 2. F ei = λi ei . Then by (4. 2. We calculate the Gaussian curvature of S at the origin 0 = (0. Consider a surface S in R deﬁned as the graph of x3 = f (x1 . 0) = 0 = f2 (0. Gaussian Curvature. 2 Using the assumption that f1 (0) = 0 = f2 (0). Then we can express u = cos θ e1 + sin θ e2 . One way to measure the extent of bending of a surface is by Gaussian curvature. 0. for simplicity. of a one sheet hyperboloids is negative. the two eigenvectors e1 and e2 are orthogonal. one can see that no matter which side normal is chosen (say. and it is obvious that eT F ei = λi . i. i = 1. i Let θ be the angle between u and e1 . 0). Let λ1 and λ2 be two eigenvalues of the matrix F with corresponding unit eigenvectors e1 and e2 . that is f (0. u2 ) be a unit vector on the tangent plane. Let u = (u1 . 3 Let f (x. the normal curvature kn (u) in u direction is ∂2f ∂u2 ∂f [( ∂u )2 + 2 2 1]3/2 (0) ∂f ∂ where ∂u and ∂uf are ﬁrst and second directional derivative of f in u direction. 1 2 where F= f11 f12 f21 f22 (0).
n then ∂f . ∂xi Deﬁnition 4. From (4.1 For two diﬀerentiable vector ﬁelds X and Y. by deﬁnition. Y ]f = XY f − Y Xf = i. we see that λ1 and λ2 are solutions of the quadratic equation f11 − λ f12 = λ2 − (f11 + f22 )λ + (f11 f22 − f12 f21 ) = 0. the Gaussian curvature at 0 is K(0) = k1 k2 = λ1 λ2 = (f11 f22 − f12 f21 )(0).j=1 ai ∂bj ∂aj − bi ∂xi ∂xi . Assume that λ1 ≤ λ2 . If X(p) = n i ∂ ai (p) ∂xi and Y (p) = n ∂ j bj (p) ∂xj . In a local coordinates. we need a few preparations. and therefore they are principal curvatures k1 and k2 . f21 f22 − λ Therefore. Then λ1 ≤ λ1 + sin2 θ (λ2 − λ1 ) = kn (u) = cos2 θ (λ1 − λ2 ) + λ2 ≤ λ2 . Vector Fields and Brackets A vector ﬁeld X on M is a correspondence that assigns each point p ∈ M a vector X(p) in the tangent space Tp M . deﬁne the Lie Bracket as [X.4. 4. Hence λ1 and λ2 are minimum and maximum values of normal curvature.4.4). we can express n X(p) = i ai (p) ∂ . it is a directional derivative operator in the direction X(p): n (Xf )(p) = i ai (p) ∂f . It is a mapping from M to T M . there are several kinds of curvatures. 1. then we say the vector ﬁeld X is diﬀerentiable. ∂xj [X. Y ] = XY − Y X. If this mapping is diﬀerentiable.4.4 Curvature 117 kn (u) = uT F u = cos2 θ λ1 + sin2 θ λ2 . Before introducing them. ∂xi When applying to a smooth function f on M .3 Curvature on Riemannian Manifolds On a general n dimensional Riemannian manifold M .
the covariant derivative is deﬁned by aﬃne connections. Given an aﬃne connection D on M .4. let n c(t) = (x1 (t). ∂xi Then by the properties of the aﬃne connection. dt dt In a local coordinates. One can see that dV is a vector in the ambient space R3 . and let f and g be smooth functions on M . An aﬃne connection D on a diﬀerentiable manifold M is a mapping from D(M ) × D(M ) to D(M ) which maps (X. t ∈ I. a vector ﬁeld dV V (t) along a curve c(t) is parallel if only if = 0.118 4 Preliminary Analysis on Riemannian Manifolds 2. A vector ﬁeld V (t) along a curve c(t) ∈ M is parallel if = 0. Let c : I→S be a curve on S. we have < X. On an ndimensional diﬀerentiable manifold M . Y >= constant. for a vector ﬁeld V (t) = Y (c(t)) along a diﬀerentiable curve c : I→M .3 Let M be a diﬀerentiable manifold with an aﬃne connecDV tion D. Y ) to DX Y and satisﬁes 1)Df X+gY Z = f DX Z + gDY Z 2)DX (Y + Z) = DX Y + DX Z 3)DX (f Y ) = f DX Y + X(f )Y. Let D(M ) be the set of all smooth vector ﬁelds. and denote dt DV it by . ∂xi ∂x dt j (4. >. We call this the covariant derivative of V (t) on S – the derivative dt viewed by the twodimensional creatures on S. · · · xn (t)) and V (t) = i vi (t) ∂ . Deﬁnition 4. Y. dt .2 Let X. and let V (t) be a vector ﬁeld along c(t). and for a pair of vector dt ﬁelds X and Y along c(t).5) i i. one can express DV = dt dvi ∂ + dt ∂xi ∂ dxi vj D ∂ . we have Deﬁnition 4.j Recall that in Euclidean spaces with inner product <. To consider that rate of change of V (t) restricted to the surface S. Z ∈ D(M ). deﬁne the covariant derivative of V along c(t) as DV = D dc Y. but it may not belong to the tangent dt plane of S. we begin with a surface S in R3 . Covariant Derivatives and Riemannian Connections To be more intuitive. Similarly. dV we make an orthogonal projection of onto the tangent plane.4.
4. Y )X. Moreover this connection is symmetric. and (g ij ) is the inverse Deﬁnition 4. Y >c(t) = constant . then we say that D is compatible with the metric <.4. Y > .4. DX Y − DY X = [X. we have < X.5 Let M be a Riemannian manifold with the Riemannian connection D. 3.Y ] Z. We skip the proof of the theorem.4 Curvature 119 Deﬁnition 4. Y ) : D(M )→D(M ) deﬁned by R(X. there exists a unique connection D on M that is compatible with <. Y >2 is the area of the parallelogram spun by X and Y . Theorem 4. and X2 Y 2 − < X. The following fundamental theorem of Levi and Civita guarantees the existence of such an aﬃne connection on a Riemannian manifold. Curvature >. Y )Z = DY DX Z − DX DY Z + D[X. xn ). X2 Y 2 − < X. gij =< matrix of (gij ). D ∂ ∂xi ∂x ∂xk j k where k Γij = 1 2 m ∂gjm ∂gmi ∂gij + − ∂xi ∂xj ∂xm ∂ ∂ ∂xi . · · · . Deﬁnition 4.4. Y >2 where X and Y are any two linearly independent vectors in E. The curvature R is a correspondence that assigns every pair of smooth vector ﬁeld X.4 Let M be a diﬀerentiable manifold with an aﬃne connection D and a Riemannian metric <. Interested readers may see the book of do Carmo [ca] (page 55). >. If for any pair of parallel vector ﬁelds X and Y along any smooth curve c(t).e.1 Given a Riemannian manifold M with metric <. i. In a local coordinates (x1 . ∀Z ∈ D(M ). >. >.4. Y ]. ∂xj g mk is called the Christoﬀel symbols. this unique Riemannian connection can be expressed as ∂ k ∂ = Γij .6 The sectional curvature of a given two dimensional subspace E ⊂ Tp M at point p is K(E) = < R(X. . >. Y ∈ D(M ) a mapping R(X.
120 4 Preliminary Analysis on Riemannian Manifolds One can show that K(E) so deﬁned is independent of the choice of X and Y . for instance. then one can verify that its sectional curvature at every point is K(E) = 1. Example.1 Higher Order Covariant Derivatives and the LaplaceBertrami Operator Let p and q be two nonnegative integers.4. It is a mapping from Γ (M ) × Γ (M ) to Γ (M ). · · ·. and they both depends only on the point p. At a point x on M . Deﬁne Tp (Tx M ) as the space of (p. · · · . It is actually the Gaussian curvature of the section. that is. i=1 One can show that the above deﬁnition does not depend on the choice of the orthonormal basis. X and Y2 . Ricci curvature is the same as sectional curvature. i=1 The scalar curvature at p is R(p) = 1 n n Ricp (Yi ). and X and Yn−1 : Ricp (X) = 1 n−1 n−1 < R(X. For convenience. q)tensors on Tx M . and so on. and thus introduce higher order covariant derivatives. then the Ricci curvature in the direction X is the average of the sectional curvature of the n − 1 sections spun by X and Y1 . Here we will extend this deﬁnition to the space of diﬀerentiable tensor ﬁelds. we adapt the summation convention and write. the space of (p + q)linear forms . 4. as usual. ∂xi j Γik dxk := k j Γik dxk .7 Let X = Yn be a unit vector in Tp M . Yn−1 } be an orthonormal basis of the hyper plane in Tp M orthogonal to X. Let M be the unit sphere with standard metric.5. and let {Y1 .5 Calculus on Manifolds In the previous chapter. Yi )X. we q denote the tangent space by Tx M . Xi ∂ := ∂xi Xi i ∂ . Yi > . On two dimensional manifolds. Deﬁnition 4. 4. where Γ (M ) is the set of all smooth vector ﬁelds on M . we introduced the concept of aﬃne connection D on a diﬀerentiable manifold M.
we can express T = Ti11···ipq dxi1 ⊗ · · · ⊗ dxip ⊗ j ···j ∂ ∂ ⊗ ··· ⊗ . if we set i =D ∂ ∂xi . ∂xj1 ∂xjq Recall that.4.7) Notice that (4. If we apply it to a (1.6) Now we naturally extend this aﬃne connection D to diﬀerentiable (p. For a point x on M . DX T is deﬁned to be a (p. j ···j ∂Ti11···ipq ∂xi q x j ···j p − k=1 j ···j 1 q α Γiik (x)T (x)i1 ···ik−1 αik+1 ···ip j ···j + k=1 jk 1 k−1 Γiα (x)T (x)i1 ···ip αjk+1 jq .6) is a particular case when (4. 0)tensor.5 Calculus on Manifolds ∗ ∗ η : Tx M × · · · × Tx M × Tx M × · · · × Tx M →R1 . For smooth vector ﬁelds X. X n ) and Y = (Y 1 .7) is applied to the (0. ∂ ∂xk then i ∂ ∂xj k (x) = Γij . then i (dx j j ) = −Γik dxk . by the bilinear properties of the connection.8) ˜ Given two diﬀerentiable tensor ﬁelds T and T . · · · . q) tensor ﬁelds T on M. we have . (4. in a local coordinates. q)tensor on Tx M by DX T (x) = X i ( where ( 1 q i T )(x)i1 ···ip = i T )(x). 1)tensor Y . p q 121 In a local coordinates. we have DX Y = DX i = Xi = Xi ∂ ∂xi Y = Xi iY ∂Y j ∂ k ∂ + Y j Γij ∂xi ∂xj ∂xk ∂Y j j + Γik Y k ∂xi ∂ ∂xj (4. for the aﬃne connection D. and a vector X ∈ Tx M . say dxj . Y ∈ Γ (M ) with X = (X 1 . Y n ). x k where Γij are the Christoﬀel symbols. · · · . (4.
∂xi ∂xj ∂xk 2 The trace of the Hessian matrix (( Bertrami operator acting on f .j=1 n where g is the determinant of (gij ). Similarly. ∂xi ∂xj ∂xk j 2 Here we have used (4. . and the theory of Lebesgue integral applies. one can deﬁne 2 T .2 Integrals On a smooth ndimensional Riemannian manifold (M. g).5. 0)tensor deﬁned by f= And 2 i i f dx f is = ∂f i dx .122 4 Preliminary Analysis on Riemannian Manifolds ˜ ˜ ˜ DX (T ⊗ T ) = DX T ⊗ T + T ⊗ DX T .8). q)tensor ﬁeld T . Through a straight forward calculation. 0)tensor: 2 f = = = ∂f i dx ⊗ dxj ∂xi ∂f ∂f (dxi ) dxj dxi + j ∂xi ∂xi j ∂2f k ∂f − Γij dxi ⊗ dxj . we deﬁne tensor ﬁeld whose components are given by 1 q ( T )i1 ···ip+1 = ( T to be the (p + 1. the Hessian of f and denoted by Hess(f ). Its ij th component is ( 2 f is called f )ij = ∂2f k ∂f − Γij .7) and (4. Hence a (1. one can deﬁne a natural positive Radon measure on M . 4. a smooth function f (x) on M is a (0. one can verify that = 1 ∂ ∂ ( g g ij ) ∂xi ∂xj g i. · · ·. For example. 3 T . ∂xi f is a (2. q) j ···j j1 ···jq i1 T )i2 ···ip+1 . In the Riemannian context. For a diﬀerentiable (p. 0)tensor. f = g ij f )ij ) is deﬁned to be the Laplace ∂2f k ∂f − Γij ∂xi ∂xj ∂xk where (g ij ) is the inverse matrix of (gij ).
one can verify that − Here o ou + Ko (x) = K(x)e2u(x) . there exists a corresponding family of partition of unity (Ui . . Let g(x) = e2u(x) go (x) for some smooth function u(x) on M . φi )i of M . Suppose M is a two dimensional Riemannian manifold with a metric go and the corresponding Gaussian curvature Ko (x). ηi )i is a family of partition of unity associated to (Ui . φi )i and the partition of unity (Ui .3 Equations on Prescribing Gaussian and Scalar Curvature A Riemannian metric g on M is said to be pointwise conformal to another metric go if there exist a positive function ρ. one can verify the following integration by parts formula − M g u vdVg = M < u. φi . ηi )i . v >= g ij iu jv = g ij ∂u ∂v ∂xi ∂xj is the scalar product associated with g for 1forms. φi )i if (i) (ηi )i is a smooth partition of unity associated to the covering (Ui )i .5. such that g(x) = ρ(x)go (x). φi . ηi )i is a partition of unity associate to (Ui . φi . ∀ x ∈ M.4. v > dVg . 4.5 Calculus on Manifolds 123 Let (Ui . We say that a family (Ui . φi )i . (4. and dx stands for the volume element of Rn . (4. we deﬁne its integral on M as the sum of the integrals on open sets in Rn : f dVg = M i φi (Ui ) (ηi gf ) ◦ φ−1 dx i where (Ui . and (ii) for any i. Then by a straight forward calculation. φi . g is the determinant of (gij ) in the charts (Ui . ∀ x ∈ M. ηi )i .10) is the LaplaceBertrami operator associated to go . One can show that. for each family of coordinates chart (Ui . supp ηi ⊂ Ui . φi )i . and < u. For smooth functions u and v on a compact manifold M . One can prove that such a deﬁnition does not depend on the choice of the charts (Ui . φi )i be a family of coordinates charts that covers M . φi )i of M . and given a family of coordinates charts (Ui . Let f be a continuous function on M with compact support.9) where g is the LaplaceBertrami operator associated to the metric g. Let K(x) be the Gaussian curvature associated to the metric g.
∀ x ∈ S n . · · · . H k. In Chapter 5 and 6. we list some of them in the following. for conformal relation between scalar curvatures on ndimensional manifolds ( n ≥ 3). Let u be a smooth function on M and k be a nonnegative integer.2 (M ) is a Hilbert space when equipped with the equivalent norm . 1. We denote k u the k th covariant derivative of u as deﬁned in the previous section. In particular.p (M ) = j=0 M  j up dVg .6 Sobolev Embeddings Let (M.1 For any integer k. and its norm  k u is deﬁned in a local coordinates chart by  k u2 = g i1 j1 · · · g ik jk ( 0 k u)i1 ···ik ( k u)j1 ···jk . let Ck (M ) be the space of C ∞ functions on M such that  M j up dVg < ∞ . ∂xi ∂xj p For an integer k and a real number p ≥ 1. say on standard sphere S n . we will study the existence and qualitative properties of the solutions u for the above two equations respectively.11) where go is the standard metric on S n with the corresponding LaplaceBertrami operator o .6.124 4 Preliminary Analysis on Riemannian Manifolds Similarly.1 The Sobolev space H k. R(x) is the scalar curvature associated to the point4 wise conformal metric g = u n−2 go . For application purpose. 4 4(n − 1) (4. we have − ou + n+2 n(n − 2) (n − 2) u= R(x)u n−2 . Interested readers may ﬁnd the proofs in [He] or [Au]. k. g) be a smooth Riemannian manifold. ∀ j = 0. Many results concerning the Sobolev spaces on Rn introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. p Deﬁnition 4. we have   1 u = u and iu ju u2 =  u2 = g ij ( u)i ( u)j = g ij = g ij ∂u ∂u . Theorem 4.p (M ) is the completion of Ck (M ) with respect to the norm k 1/p u H k. 4.6.
q (M ) into C 0 (M ) is compact. v >= i=1 M < i u. In particular.q (M ) ⊂ Lp (M ) .q (M ) ⊂ H m. the embedding of H k.p (M ) is compact.6. Asnq sume 1 ≤ q < n and p = n−q .p (M ) . compact ndimensional Riemannian manifold. then H k.5 (Compact Embeddings) Let (M.q (M ) ⊂ H m. g) be a smooth.2 If M is compact. Theorem 4. Asn sume that q ≥ 1 and m. g) be a smooth. More generally. the embedding of H 1. (ii) For q > n. Then nq for any real number p such that 1 ≤ p < n−q(k−m) .6. k are two integers with 0 ≤ m < k. compact ndimensional Riemannian manifold. If q > k−m . i v > dVg where < ·.p (M ) does not depend on the metric. compact ndimensional Riemannian manifold. and if p = nq n−q(k−m) . The corresponding scalar product is deﬁned by k < u. Theorem 4. if (k − λ) > n . Theorem 4.4. Then H 1. then H k.q (M ) ⊂ C m (M ) . then H k. (i) Assume that q ≥ 1 and m. In particular. q .6 Sobolev Embeddings k 1/2 125 u = i=1 M  i u dVg 2 . · > is the scalar product on covariant tensor ﬁelds associated to the metric g.6.q (M ) into Lp (M ) is compact nq if 1 ≤ p < n−q . the embedding H k. k are two integers with 0 ≤ m < k. k are two integers with 0 ≤ m < k.4 (Sobolev Embeddings II) Let (M. g) be a smooth.3 (Sobolev Embeddings I) Let (M. Theorem 4.6. the embedding of H 1. if m.q (M ) into C λ (M ) is compact.
6 (Poincar´ Inequality) e Let (M. 1 V ol(M.126 4 Preliminary Analysis on Riemannian Manifolds Theorem 4. n) be real.6. g) be a smooth.p (M ). where u= ¯ is the average of u on M .g) M udVg . 1/p 1/p u − up dVg ¯ M ≤C M  up dVg . Then there exists a positive constant C such that for any u ∈ H 1. compact ndimensional Riemannian manifold and let p ∈ [1.
and can be approximated by subcritical cases.2. One can easily see that there exists a sequence {xk }.5 Prescribing Gaussian Curvature on Compact 2Manifolds 5. {xk } possesses no convergent subsequence because this sequence is not bounded.2 Gaussian Curvature on Negatively Curved Manifolds 5.1 5. we consider the corresponding functional J(·) in an appropriate Hilbert space.1 tions 5. b) critical case which is the limiting situation. or c) super critical case–the remaining cases.3 Obstructions Variational Approaches and Key Inequalities Existence of Weak Solutions Kazdan and Warner’s Results. how the calculus of variations can be applied to seek weak solutions of these equations. among other methods.2 5. To show the existence of weak solutions for a certain equation. people usually divide it into three cases: a) subcritical case where the level set of J is compact. however. say. To roughly see when a functional may lose compactness.1. J satisﬁes the (P S) condition (see also Chapter 2): Every sequence {uk } that satisﬁes J(uk )→c and J (uk )→0 possesses a convergent subsequence. The functional has no compactness at level . Method of Lower and Upper SoluChen and Li’s Results 5. We will illustrate.2.1.2 In this and the next chapters. According to the compactness of the functional. such that J(xk )→0 and J (xk )→0 as k→∞. xk = −k. for instance. we will present the readers with real research examples–semilinear equations arising from prescribing Gaussian and scalar curvatures.1 Prescribing Gaussian Curvature on S 2 5. let’s consider J(x) = (x2 −1)ex deﬁned on one dimensional space R1 .1.
then one can verify that it possesses a subsequence which converges to a critical point x0 of J. For instance {sin kx} is a bounded sequence in L2 ([0. the functional satisﬁes the (P S) condition. u∈H then a minimizing sequence {uk }. That is. We know that in a ﬁnite dimensional space. one would try to recover the compactness through various means. when Ω = B1 (0) is a ball. For example. Take again the one dimensional example J(x) = (x2 − 1)ex . this x0 is a minimum of J. 1]) but does not have a convergent subsequence. . One can ﬁnd a minimizing sequence that does not converge. it is in the subcritical case. Let Ω be an open bounded subset in Rn . 1 and let H0 (Ω) be the Hilbert space introduced in previous chapters. even a bounded sequence may not have a convergent subsequence. Jp (uk )→mp possesses a convergent subse1 quence in H0 (Ω). If we deﬁne  u2 dx = 1}. we study the corresponding functional Jp (u) = Ω (5. In the situation where the lack of compactness occurs. and the limiting function u0 would be the minimum of Jp in H. x ∈ ∂Ω. hence it is the desired weak solution. it is in the critical case. x ∈ Ω u(x) = 0 . While in inﬁnite dimensional space. we can recover the compactness.128 5 Prescribing Gaussian Curvature on Compact 2Manifolds J = 0. which has no compactness at level J = 0.1) up+1 dx on the constraint 1 H = {u ∈ H0 (Ω)  Ω n+2 When p < n−2 . n+2 When p = τ := n−2 . a bounded sequence has a convergent subsequence. consider n−2 [n(n − 2)k] 4 uk (x) = u(x) = n−2 η(x) (1 + kx2 ) 2 ∞ where η ∈ C0 (Ω) is a cut oﬀ function satisfying η(x) = 1. Actually. elsewhere . to ﬁnd a critical point of the functional. To seek weak solutions. To further illustrate the concept of compactness. x ∈ B 1 (0) 2 between 0 and 1 . However. we consider some examples in inﬁnite dimensional space. if we pick a sequence {xk } satisfying J(xk ) < 0 and J (xk )→0. mp = inf Jp (u). Consider the Dirichlet boundary value problem − u = up (x) . As we have seen in Chapter 2. if we restrict it to levels less than 0.
1 Prescribing Gaussian Curvature on S 2 Given a function K(x) on two dimensional unit sphere S := S 2 with standard metric go . the readers will see examples from prescribing Gaussian and scalar curvature. For example. the function R(x) must satisfy the obvious GaussBonnet sign condition R(x)eu dx = 8π.  uk 2 dx Ω Then one can verify that {vk } is a minimizing sequence of Jτ in H with J (vk )→0. respectively. 5. can it be realized as the Gaussian curvature of some pointwise conformal metric g? This has been an interesting problem in diﬀerential geometry and is known as the Nirenberg problem. However it does not have a convergent subsequence. as k→∞. then it is equivalent to solving the following semilinear elliptic equation on S 2 : − u + 1 = K(x)e2u . In this and the next chapter.3) to have a solution. In essence.2) where is the LaplaceBertrami operator associated with the standard metric go . where the corresponding equations are in critical case and where various means have been exploited to recover the compactness. Actually.2) becomes − u + 2 = R(x)eu(x) .1. then equation (5. x ∈ S 2 . We say that the sequence “blows up ” at point 0. then the above Jτ has a minimizer in H. 5.3) Here 2 and R(x) are actually the scalar curvature of S 2 with standard metric go and with the pointwise conformal metric eu go . (5. the compactness may be recovered by considering other level sets or by changing the topology of the domain.5. (5.4) . n+2 As we learned in Chapter 1.1 Obstructions For equation (5. S (5. the compactness of the functional Jτ here relies on the Sobolev embedding: H 1 (Ω) → Lp+1 (Ω). If we replace 2u by u and let R(x) = 2K(x). this embedding is compact when p < n−2 and is n+2 not when p = n−2 . In the critical case.1 Prescribing Gaussian Curvature on S 2 129 Let vk (x) = uk (x) . the energy of {vk } are concentrating at one point 0. If we let g = e2u go . if Ω is an annulus.
2 Variational Approach and Key Inequalities To obtain the existence of a solution for equation (5.130 5 Prescribing Gaussian Curvature on Compact 2Manifolds This can be seen by integrating both sides of the equation (5.3)? This has been an interesting problem in geometry. To obtain a weak solution.6) where X is any conformal Killing vector ﬁeld on standard S 2 . Then for which kinds of functions R(x). S (5. φi (x) = xi in the coordinates x = (x1 . there are other necessary conditions found by Kazdan and Warner. one can show that the weak solution is smooth and hence is the classical solution.3) has no solution. It reads φi R(x)eu dx = 0. First prove the existence of a weak solution. Actually. can one solve (5. Besides this obvious necessary condition. 3. then by a regularity argument. Later. 2. we let H 1 (S) := H 1. In particular. S R(x)eu dx > 0}. a monotone rotationally symmetric function admits no solution. we can use e . i. 3. Condition (5. people usually use a variational approach. Then by the Poincar´ inequality.3). S (5. S S Condition (5. x2 . x3 ) of R3 . 5.5) gives rise to many examples of R(x) for which the equation (5. i = 1. and the fact that − u dx = u · 1 dx = 0. 2.2 (S) be the Hilbert space with norm 1/2 u Let 1 = S ( u2 + u2 )dx . Bouguignon and Ezin generalized this condition to X(R)eu dx = 0.e − φi = 2φi (x). H(S) = {u ∈ H 1 (S)  S udx = 0.1. i = 1.5) Here φi are the ﬁrst spherical harmonic functions.4) implies that R(x) must be positive somewhere.3).
2 There exists constant C. such that the inequality e4πu dA ≤ C S 2 (5. satisfying  u2 dA ≤ 1 and S S udA = 0. (5. such that for all u ∈ H 1 (S) holds eu dA ≤ C exp S 1 16π  u2 dA + S 1 4π udA . Lemma 5.1. S (5. To estimate the value of the functional J.9) Proof. Consider the functional J(u) = 1 2  u2 dx − 8π ln S S R(x)eu dx in H(S).7) holds for all u ∈ H 1 (S).1.1 (MoserTrudinger Inequality) Let S be a compact two dimensional Riemannian manifold. we have e S 4πv 2 v 2 dA ≤ C.1. Then w = 1 and w = 0.1 to such w. we introduce some useful inequalities. Interested readers many see the article of Moser [Mo1] or the article of Chen [C1].8) We skip the proof of this inequality. let w = ¯ Applying Lemma 5.5.10) .1 Prescribing Gaussian Curvature on S 2 1/2 131 u = S  u2 dx as an equivalent norm in H(S). (5. From this inequality. Still denote 1/2 u = S  u2 dx v v . . Let u= ¯ 1 4π u(x)dA S be the average of u on S. ¯ i) For any v ∈ H 1 (S) with v = 0. one can derive immediately that Lemma 5. Then there exists constant C.
In fact. u∈H(S) However. To prevent this from happening. From Lemma 5. we have the following ”Distribution of Mass Principle” from Chen and Li [CL7] [CL8]. ∀v ∈ H 1 (S). In particular.1.11) ii) For any u ∈ H 1 (S). and we can apply (5. It follows that J(u) ≥ 1 1 u 2 − 8π ln(C max R) + u 2 16π ≥ −8π ln(C max R) 2 (5. with v = 0. R(x)eu dA ≤ max R S S eu dA ≤ max R(x) · C exp 1 u 16π 2 . in the set of even functions–antipodal symmetric functions.10). let v = u − u. Although it is not true in general. + 16π v 2 1 v 16π 2 . we can conclude that the functional J(·) is bounded from below in H(S). we need some coerciveness of the functional.11) ¯ ¯ to v and obtain 1 u u 2 . as we mentioned before.132 5 Prescribing Gaussian Curvature on Compact 2Manifolds From the obvious inequality √ v 2 πv √ − v 4 π we deduce immediately v≤ Together with (5. Now we can take a minimizing sequence {uk } ⊂ H(S): J(uk )→ inf J(u). ¯ (5. by the Lemma. the functional J is bounded from below in H(S). that is. we do have coerciveness for J in some special subset of H(S).12) Therefore. v 2 4πv 2 .2. we have ev dA ≤ C exp S 2 ≥ 0. it may ‘leak’ to inﬁnity. eu−¯ dA ≤ C exp 16π S Consequently eu dA ≤ C exp S 1 u 16π 2 +u . ¯ This completes the proof of the Lemma. Then v = 0. More precisely. . a minimizing sequence may not be bounded.
e. Ω2 ) ≥ δo > 0. a weak solution of equation (5. and therefore. [CY]. In many articles.5.13) holds for all u ∈ H 1 (S) satisfying Ω1 S eu dA eu dA ≥ αo and Ω2 S eu dA eu dA ≥ αo . 1 Let 0 < αo ≤ 2 . uo so obtained is the minimum of J(u). As we will show in the next section. ).1. Then for any such that the inequality > 0. this stronger inequality (5. If we apply this stronger inequality to the family of even functions in H(S). there exists a constant C = C(αo . we have J(u) ≥ 1 u 2 2 − 8π ln(C max R) + 1 − 8π 4 1 + 32π u 2. i.3 Let Ω1 and Ω2 be two subsets of S such that dist(Ω1 . then we can have a stronger inequality than (5. and hence it converges to a weak limit uo in H 1 (S).2).9). u 2 ≥ −8π ln(C max R) + Choosing small enough. eu dA S and use this to study the behavior of a minimizing sequence (See [CD]. for even functions. as we will see in the next next section. [CD1].e. In particular. we will be able to conclude that uo is a classical solution. for all even functions u in H(S).15) This stronger inequality guarantees the boundedness of a minimizing sequence. Both analysis are equivalent on the sphere S 2 .1 Prescribing Gaussian Curvature on S 2 133 Lemma 5. more precisely. (5. i. consider the center of mass of the sphere with density eu(x) at point x: P (u) = S x eu dA .. δo . Then by a standard regularity argument. the authors use the “center of mass ” analysis. The Lemma says that if the mass of the sphere is kind of distributed in two parts of the sphere. then Ωi eu dA is the mass of the region Ωi . However. . (5. we arrive at 1 u 8 2 J(u) ≥ − C1 . and S eu dA the total mass of the sphere.14) Intuitively. and [CY1]). the functional J is weakly lower semicontinuous.13) holds.. if we think eu(x) as the density of a solid sphere S at point x. if the total mass does not concentrate at one point. 1 + 32π eu dA ≤ C exp S u 2 +u ¯ (5.
17) for some small 1 > 0. (5.14) implies eu dA ≤ C exp ( S udA = 1 + ) u 32π 2 . g2 be two smooth functions. While the “Distribution of Mass” analysis can be applied to any compact surfaces. we employ the condition S udA = 0. Here we have used the fact that 2 g1 u + g2 u = S  (g1 u + g2 u)2 dA ( g1 + S 2 = ≤ u g2 )u + (g1 + g2 ) u2 dA L2 + C1 u u + C2 u 2 L2 .16) Case i) If g1 u ≤ g2 u .13). 2 S and supp g1 ∩ supp g2 = ∅.18) . gi (x) ≡ 1. even surfaces with conical singularities. It suﬃce to show that for u ∈ H 1 (S) with 0. (5. (u − a)}. this kinds of analysis can not be applied.134 5 Prescribing Gaussian Curvature on Compact 2Manifolds since “the center of mass” P (u) lies in the ambient space R3 . Proof of Lemma 5. i = 1. we have eu dA ≤ ea S S eu−a dA ≤ ea · S 1) e(u−a) dA u 2 + 1 (1 + ≤ C exp{ 32π + c1 ( 1 ) (u − a)+ 2 2 + a} (5. Given any η > 0. Let g1 . such that meas{x ∈ Su(x) ≥ a} = η.9) eu dA ≤ S 1 1 eu dA ≤ eg1 u dA αo Ω1 αo S C 1 g1 u 2 + g1 u} ≤ exp{ αo 16π C 1 ≤ exp{ g1 u + g2 u 2 + g1 u} αo 32π C 1 ≤ exp{ (1 + 1 ) u 2 + c1 ( 1 ) u αo 32π 2 L2 }. In order to get rid of the term u 2 2 on the right hand side of the above L inequality. Interested reader may see the author’s paper [CL7] [CL8]for more general version of inequality (5. f or x ∈ Ωi . Apply (5. (5. then by (5.17) to the function (u − a)+ ≡ max{0.1. choose a. on a general two dimensional compact Riemannian surfaces.3. such that 1 ≥ gi ≥ 0.
Case ii) If g2 u ≤ g1 u .20) Now by Poincar´ inequality (See Theorem 4. we obtain (5.20) and (5. Let {uk } be a minimizing sequence of J in He (S).15).19) Choose η so small that Cη 1/2 ≤ 1 . Then by (5.16).1 (Moser) If the function R is even. Then the necessary and suﬃcient condition for equation (5. then the above inequality implies 2 (u − a)+ 2 L2 ≤ 2Cη 2 u 2 . (5. and S uo (x)dA = 0. 4δη 2 (5.5. we consider the functional J(u) = 1 2  u2 dx − 8π ln S S R(x)eu dx in He (S) = {u ∈ H(S)  u is even almost everywhere }. where x and −x are two antipodal points. This completes the proof.22) Consequently uo (−x) = uo (x) almost everywhere .18).1 Prescribing Gaussian Curvature on S 2 135 Where C = C(δ.21) Now (5.17) and then (5.1. (5.6. To prove the existence of a weak solution.3 Existence of Weak Solutions Theorem 5. By the H older and Sobolev inequality (See Theorem 4.1. and hence possesses a subsequence (still denoted by {uk }) that converges weakly to some element uo in H 1 (S).21). αo ).16) follows from (5. a≤δ u 2 + c2 .3) to have a solution is R(x) be positive somewhere.6. Then by the Compact Sobolev Embedding of H 1 into L2 uk →uo in L2 (S). 1 (5. . by a similar argument as in case i). as in the previous section. 5.6) e a·η ≤ u≥a udA ≤ S udA ≤ c u hence for any δ > 0. that is if R(x) = R(−x) ∀ x ∈ S 2 .3) ¨ (u − a)+ 2 L2 ≤ η 2 (u − a)+ 1 2 L4 ≤ cη 2 ( u 1 2 + (u − a)+ 2 L2 ) (5. {uk } is bounded in H 1 (S).
To show that the functional J is weakly lower semicontinuous: J(uo ) ≤ limJ(uk ) = We need the following Lemma 5. w > . We postpone the proof of this lemma for a moment. u∈He (S) Therefore. for any v ∈ He (S). i.4 If {uk } converges weakly to u in H 1 (S). This implies that uo is a critical point of J in H 1 (S) and hence a constant multiple of uo is a weak solution of equation (5.136 5 Prescribing Gaussian Curvature on Compact 2Manifolds that is uo ∈ He (S). Furthermore.e. The Proof of Lemma 5. 2 Here we have used the fact that uo (−x) = uo (x). On the other hand. such that euk (x) dA→ S S u∈He (S) inf J(u). as we showed in the previous chapter. v >= S < uo . S For any w ∈ H(S). let v(x) = 1 1 (w(x) + w(−x)) := (w(x) + w− (x)).4.4.3). and it follows that 0 =< J (uo ). 2 2 Then obviously.1. limk→∞  uk 2 dA ≥ S S  uo 2 dA. eu(x) dA . as k→∞. v >= 1 (< J (uo ). then there exists a subsequence (still denoted by {uk }). we have 0 =< J (uo ). w− >) =< J (uo ). w > + < J (uo ). Now from this lemma. the integral is weakly lower semicontinuous. Consequently. and it follows that J(·) is weakly lower semicontinuous.1. . we have J(uo ) ≥ inf J(u). v > dA − 8π R(x)euo dA S R(x)euo vdA. we obviously have R(x)euk dA→ S S R(x)euo dA. Now what left is to prove Lemma 5. v ∈ He (S). uo is a minimum of J in He (S).1.
24) Inequalities (5.23) And by H oder inequality.24) imply that {euk } is a bounded sequence in H 1. for any real number p > 0.e. 1 2  u2 dx − 8π ln S S R(x)eu dx .e.5. This completes the proof of the Lemma.25). we have epu dA ≤ C exp{ S p2 16π  u2 dA + S p 4π udA}.1 Prescribing Gaussian Curvature on S 2 137 From Lemma 5. S (5. (5. ∀g ∈ G.p into L1 . Let X∗ = X H(S). for any 0 < p < 2. Recall that u dA = 0. ∀g ∈ G} be the set of ﬁxed points under the action of G. ∀g ∈ G.1.2. ∀g ∈ G}. S S (5. By the compact Sobolev embedding of H 1. We seek a minimum of J in X∗ . as we will describe in the following.25) H(S) = {u ∈ H 1 (S)  R(x)eu dA > 0}. Chen and Ding [CD] generalized Moser’s result to a broader class of symmetric functions. Assume that R(x) is Gsymmetric. ∀u ∈ X∗ . J(g[u]) = J(u).23) and (5. ¨  (e ) dA = u p S S e  u dA ≤ S pu p e 2p 2−p u 2−p 2 dA S  u dA 2 p 2 . i. Let G be a closed ﬁnite subgroup of O(3). R(gx) = R(x). the functional J(u) = is Ginvariant in X∗ .p for any 1 < p < 2. The following lemma guarantees that such a minimum is the critical point we desired. Let O(3) be the group of orthogonal transformations of R3 . or Ginvariant. Under the assumption (5. there exists a subsequence of {euk } which converges strongly to euo in L1 (S). i. The action of G on S 2 induces an action of G on the Hilbert space H 1 (S): g[u](x) → u(gx) under which the ﬁxed point subspace of H 1 (S) is given by X = {u ∈ H 1 (S)  u(gx) = u(x). Let FG = {x ∈ S 2  gx = x.
then it is also a critical point of J in H(S).2) has a solution provided one of the following conditions holds: (i) FG = ∅. w >= i < J (uo ). gi [w] > i 1 m m < J (uo ).1.25) with maxS R > 0. If FG = ∅. Where g −1 is the inverse of g. It follows that 0 = < J (uo ). or (iii) m > 0 and c∗ = inf X∗ J < −8π ln 4πm. let m = maxFG R. Remark 5. where id is the identity transform. we have < J (u).5 Assume that uo is a critical point of J in X∗ .1.26) u vdA − 8π Reu dA S Reu vdA.e. w > i = < J (uo ). v >= = 1 m m −1 < J (gi uo ). · · · gm }.138 5 Prescribing Gaussian Curvature on Compact 2Manifolds Lemma 5. w > . Hence the above Theorem contains Moser’s existence result as a special case. Theorem 5.2 (Chen and Ding [CD]). let v= Then for any g ∈ G g[v] = G = {g1 .1. S Assume For any w ∈ H(S). m 1 m m Hence v ∈ X∗ . the group G = {id. v >= S (5. in this case FG is empty. v ∈ H(S). Then equation (5. v > . Therefore. Obviously. (ii) m ≤ 0. 1 (g1 [w] + · · · + gm [w]). i. −id}. for any g ∈ G and any u. Proof. This can be easily see from < J (u). g[v] >=< J (g −1 [u]). m 1 (gg1 [w] + · · · + ggm [w]) = v.1 In the case when the function R(x) is even. suppose that R ∈ C α (S 2 ) satisﬁes (5. uo is a critical point of J in H(S). First. id(x) = x and − id(x) = −x ∀ x ∈ S 2 . .
Lemma 5. such that for any > 0.28) and i→ ∞ lim J(ui ) ≥ −8π ln 4πR(ξ).29) Proof. We ﬁrst show that. Ω2 ) ≥ o > 0 such that {ui } satisﬁes all the conditions in Lemma 5. (5. Now (5. with dist(Ω1 . This contradicts with our assumption.1. the above argument shows that if a minimizing sequence of J is unbounded.29).1 Prescribing Gaussian Curvature on S 2 139 To prove this theorem. we use (5.28) and Onofri’s Lemma: J(ui ) = 1  u2 dA − 8π ln(R(ξ) + o(1)) eui dA 2 S S 1 2 =  u dA − 8π ln(R(ξ) + o(1)) − 8π ln eui dA 2 S S 1 1 ≥  u2 dA − 8π ln(R(ξ) + o(1)) − 8π(ln 4π + 2 S 16π = −8π ln(4πR(ξ) + o(1)). then there exists a subsequence (still denoted by {ui }) and a point ξ ∈ S 2 with R(ξ) > 0. To verify (5. the mass of the surface S with density eui (x) at point x will be concentrating at one point ξ on S. and hence veriﬁes (5. we have eui dA→0. then passing to a subsequence. we conclude that ui is bounded. If ui →∞. (5. holds (5. Hence the inequality 1 − δ) ui 2 eui dA ≤ C exp( 16π S holds for some δ > 0. there exists a subsequence of {ui } and a point ξ ∈ S 2 . Roughly speaking. there exists two subsets of S 2 . Then by the previous argument. Ω1 and Ω2 .7 Suppose that {ui } is a sequence in H(S) with J(ui ) ≤ β.28).  u2 dA) S . we need another two lemmas.6 (Onofri [On]) For any u ∈ H 1 (S) with eu dA ≤ 4π exp S S udA = 0. such that R(x)eui dA = (R(ξ) + o(1)) S S eui dA.30).1.30) Otherwise.3.1. S\B (ξ) (5. S Lemma 5.30) and the continuity assumption on R(x) implies (5.5.27) 1 16π  u2 dA .
R(ξ) > 0 and c∗ ≥ −8π ln 4πR(ξ). the minimizing sequence {ui } must be bounded. S\B (ξ) (5. one may wonder. Since the right hand side of (5.140 5 Prescribing Gaussian Curvature on Compact 2Manifolds Let i→∞.3 (Chen and Ding [CD]). and hence possesses a subsequence converging weakly to some uo ∈ X∗ . i. FG is empty. there is a point ξ ∈ S 2 . X∗ As we argue in the previous section (see (5.1. (5. we have c∗ > −∞. Let {ui } ⊂ X∗ be a minimizing sequence. then c∗ < −8π ln 4πm.1. Assume on the contrary that ui →∞.7. Now under condition (i).29) is bounded above by β. If R(xo ) > 0 for some xo ∈ D.1.33) .29). This completes the proof of the Lemma. Therefore. we must also have eui dA→0. Proof of the Theorem. as i→∞. The following theorem provides a suﬃcient condition. This a constant multiple of uo is the desired weak solution of equation (5. Therefore. J(ui )→c∗ . However.7.12)). Moreover. by Lemma 5. that is ξ ∈ FG . and m = maxFG R > 0. one can see that we only need to show that {ui } is bounded in H 1 (S).1. under what conditions on R. {ui } must be bounded.e. such that for any > 0. this again contradicts with conditions (ii) and (iii). Then from the proof of Lemma 5. S\B (gξ) (5. is Ginvariant. Theorem 5.32) Because is arbitrary. Noticing that m ≥ R(ξ) by deﬁnition. we have eui dA→0. Similar to the reasoning at the beginning of this section. we must have R(ξ) > 0.3) possesses a weak solution. Suppose that R ∈ C 2 (S 2 ). (ii). Deﬁne c∗ = inf J. we deduce that gξ = ξ. under either one of the conditions (i). Let D = {x ∈ S 2  R(x) = m}. Here conditions (i) and (ii) in Theorem 5.3). we arrive at (5.31) Since ui is invariant under the action of G. or (iii).2 can be easily veriﬁed through the group G or the given function R. do we have condition (iii). and therefore (5.
5.1 Prescribing Gaussian Curvature on S 2
141
Proof. Choose a spherical polar coordinate system on S := S 2 : π π (θ, φ), − ≤ θ ≤ , 0 ≤ φ ≤ 2π 2 2 with xo = ( π , φ) as the north pole. 2 Consider the following family of functions: uλ (θ, φ) = ln 1 − λ2 1 , vλ = uλ − (1 − λ sin θ)2 4π − uλ + 2 = 2euλ . uλ dA, 0 < λ < 1.
S
One can verify that this family of functions satisfy the equation (5.34) As λ→1, the family {uλ } ‘blows up’ (tend to ∞) at the one point xo , while approaching −∞ everywhere else on S 2 . Therefore, the mass of the sphere with density euλ (x) is concentrating at point xo as λ→1. It is easy to see that vλ ∈ H(S) for λ closed to 1. Moreover, we claim that vλ ∈ X, i.e. vλ (gx) = vλ (x), ∀g ∈ G. In fact, since g is an isometry of S 2 , and gxo = xo , we have d(gx, xo ) = d(gx, gxo ) = d(x, xo ), where d(·, ·) is the geodesic distance on S 2 . But vλ depends only on θ = d(x, xo ), so vλ (gx) = vλ (x), ∀g ∈ G. It follows that vλ ∈ X∗ for λ suﬃciently closed to 1. By a direct computation, one can verify that 1 2 Consequently, J(vλ ) = −8π ln
S
 uλ 2 dA + 8π¯λ = 0. u
S
R(x)euλ dA.
Notice that c∗ ≤ J(vλ ) by the deﬁnition of c∗ . Thus, in order to show (5.33), we only need to verify that, for λ suﬃciently close to 1 R(x)euλ dA > 4πm.
S
(5.35)
Let δ > 0 be small, we have
2π
Reuλ dA = (1 − λ2 )
S 2π 0
π 2 π 2 −δ
[R(θ, φ) − m]
cos θ dθdφ (1 − λ sin θ)2 + 4πm
+
0
π 2 −δ
[R(θ, φ) − m]
−π 2
cos θ dθdφ (1 − λ sin θ)2
= (1 − λ2 ){I + II} + 4πm.
142
5 Prescribing Gaussian Curvature on Compact 2Manifolds
Here we have used the fact euλ dA = 4π
S
which can be derived directly from equation (5.34). For each ﬁxed δ, it is easy to check that the integral II remains bounded as λ tends to 1. Thus (5.35) will hold if we can show that the integral I → + ∞ as λ→1. Notice that xo is a maximum point of the restriction of R on FG . Hence by the principle of symmetric critically, xo is a critical point of R, that is R(xo ) = 0. Using this fact and the second order Taylor expansion of R(x) near xo , we can derive I=π ≥ co
π 2 −δ π 2 π 2 −δ π 2
R(xo ) cos2 θ + o cos3 θ dθ. (1 − λ sin θ)2
θ−
π 2 2
(1 − λ sin θ)2
cos θ dθ
Here co is a positive constant. We have chosen δ to be suﬃciently small and have used the fact that R(xo ) is positive. From the above inequality, one can easily verify that as λ→1, I→ + ∞, since the integral
π 2 π 2 −δ
cos3 θ dθ (1 − sin θ)2
diverges to +∞. This completes the proof of the Theorem.
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
Let M be a compact two dimensional Riemannian manifold. Let go (x) be a metrics on M with the corresponding LaplaceBeltrami operator and Gaussian curvature Ko (x). Given a function K(x) on M , can it be realized as the Gaussian curvature associated to the pointwise conformal metrics g(x) = e2u(x) go (x)? As we mentioned before, to answer this question, it is equivalent to solve the following semilinear elliptic equation − u + Ko (x) = K(x)e2u x ∈ M. (5.36)
To simplify this equation, we introduce the following simple existence result.
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
143
Lemma 5.2.1 Let f (x) ∈ L2 (M ). Then the equation − u = f (x) x ∈ M possesses a solution if and only if f (x)d A = 0.
M
(5.37)
(5.38)
Proof. The ‘only if’ part can be seen by integrating both side of the equation (5.37) on M . We now prove the ‘if’ part. Assume that (5.38) holds, we will obtain the existence of a solution to (5.37). To this end, we consider the functional J(u) = under the constraint G = {u ∈ H 1 (M ) 
M
1 2
 u2 d A −
M M
f (x)ud A.
u(x)d A = 0}.
As we have seen before, we can use, in G, the equivalent norm u =
M
 u d A
2
1 2
.
By H oder and Poincar´ inequalities, we derive ¨ e J(u) ≥ 1 u 2 − f L2 u L2 2 1 u 2 − co f L2 u ≥ 2 c2 f 1 = ( u − co f L2 )2 − o 2 2
2 L2
.
(5.39)
The above inequality implies immediately that the functional J(u) is bounded from below in the set G. Let {uk } be a minimizing sequence of J in G, i.e. J(uk )→ inf J,
G
as k→∞.
Then again by (5.39), {uk } is bounded in H 1 , and hence possesses a subsequence (still denoted by {uk }) that converges weakly to some uo in H 1 (M ). From compact Sobolev imbedding H 1 → → L2 ,
144
5 Prescribing Gaussian Curvature on Compact 2Manifolds
we see that uk →uo strongly in L2 , Therefore uo (x)d A = lim
M M
and hence so in L1 . uk (x)d A = 0, f (x)uk (x)d A.
M
(5.40)
and f (x)uo (x)d A = lim
M
(5.41)
(5.40) implies that uo ∈ G, and hence J(uo ) ≥ inf J(u).
G
(5.42)
While (5.41) and the weak lower semicontinuity of the Dirichlet integral: lim inf
M
 uk 2 d A ≥
M
 uo 2 d A
leads to
J(uo ) ≤ inf J(u).
G
From this and (5.42) we conclude that uo is the minimum of J in the set G, and therefore there exists constant λ, such that uo is a weak solution of − uo − f (x) = λ. Integrating both sides and by (5.38), we ﬁnd that λ = 0. Therefore uo is a solution of equation (5.37). This completes the proof of the Lemma. We now simplify equation (5.36). First let u = 2u, then ˜
˜ − u + 2Ko (x) = 2K(x)eu . ˜
(5.43)
Let v(x) be a solution of ¯ − v = 2Ko (x) − 2Ko where 1 ¯ Ko = M  Ko (x)d A
M
(5.44)
is the average of Ko (x) on M . Because the integral of the right hand side of (5.44) on M is 0, the solution of (5.44) exists due to Lemma 5.2.1. Let w = u + v. Then it is easy to verify that ˜ ¯ − w + 2Ko = 2K(x)e−v ew . Finally, letting (5.45)
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
145
¯ α = 2Ko and R(x) = 2K(x)e−v(x) and rewriting w as u, we reduce equation (5.36) to − u + α = R(x)eu(x) , x∈M (5.46)
By the wellknown GaussBonnet Theorem, if g is a Riemannian metrics on M with corresponding Gaussian curvature Kg (x), then Kg (x)d Ag = 2πχ(M )
M
where χ(M ) is the Euler Characteristic of M , and d Ag the area element associated to metric g. We say M is negatively curved, if χ(M ) < 0, and hence α < 0. 5.2.1 Kazdan and Warner’s Results. Method of Lower and Upper Solutions Let M be a compact two dimensional manifold. Let α < 0. In [KW], Kazdan and Warner considered the equation − u + α = R(x)eu(x) , and proved ¯ Theorem 5.2.1 (KazdanWarner) Let R be the average of R(x) on M . Then ¯ i) A necessary condition for (5.46) to have a solution is R < 0. ¯ < 0, then there is a constant αo , with −∞ ≤ αo < 0, such that ii) If R one can solve (5.46) if α > αo , but cannot solve (5.46) if α < αo . iii) αo = −∞ if and only if R(x) ≤ 0. The existence of a solution was established by using the method of upper and lower solutions. We say that u+ is an upper solution of equation (5.47) if − u+ + α ≥ R(x)eu+ , x ∈ M. Similarly, we say that u− is a lower solution of (5.47) if − u− + α ≤ R(x)eu− , x ∈ M. The following approach is adapted from [KW] with minor modiﬁcations. ¯ Lemma 5.2.2 If a solution of (5.47) exists, then R < 0. Even more strongly, the unique solution φ of φ + αφ = R(x) , x ∈ M must be positive. (5.48) x∈M (5.47)
51) is nonnegative. we must have w(xo ) + αw(xo ) > 0.48) must be positive. there exists an upper solution u+ of (5. We try to choose constant a and b. Using the substitution v = e−u . hence w(xo ) ≥ 0.51) as ¯ f (a. This shows that a necessary condition for (5. Lemma 5.51) We want to choose constant a and b. Let v be a solution of ¯ − v = R(x) − R. In Case i). This completes the proof of the Lemma. then there exists a small δ > 0. (5. ∀ x ∈ M. We ﬁrst prove the second part.146 5 Prescribing Gaussian Curvature on Compact 2Manifolds Proof. one sees that (5.2. so that the right hand side of (5. x) ≡ (−aR + α) − R(x)(eav+b − a). Let φ be the unique solution of (5. Consequently.50).47).52) that . A contradiction with (5. It follows from (5. such that u+ = av + b is an upper solution.49) Assume that v is such a positive solution. then there exist an upper solution u+ of (5. v (5. Then we choose b large. (5. i) If R(x) ≤ 0 but not identically 0. This is possible because v(x) is continuous and M is compact. such that w(xo ) < 0 and xo is a minimum of w. we regroup the right hand side of (5.48) and let w = φ − v. ¯ ii) If R < 0. there is a point xo ∈ M . Proof. Otherwise.47) has a solution u if and only if there is a positive solution v of v + αv − R(x) −  v2 = 0.47) to have a solution is that the unique solution φ of (5. such that for −δ ≤ α < 0. v (5. We have ¯ − u+ + α − R(x)eu+ = aR(x) − aR + α − R(x)eav+b . Therefore w ≥ 0 and φ ≥ v > 0. so that −aR + α > 0. Integrating both sides of (5. we see ¯ immediately that R < 0.48). Then w + αw = −  v2 ≤ 0. when R(x) ≤ 0.3 (The Existence of Upper Solutions).52) ¯ For any given α < 0.50) It follows that w ≥ 0. we ﬁrst choose a suﬃciently large. b.47). so that eav(x)+b − a ≥ 0 .51) and (5.
2 R L∞ Again by the continuity of v and the compactness of M . Thus u+ so constructed is an upper solution ¯ R of (5. R In Case ii). Choosing λ suﬃciently large so that a − ew−λ ≥ 0. then one can clearly use any sufﬁciently large negative constant as u− .p (M ). This completes the proof of the Lemma. Proof.53) .p (M ) and hence is continuous. Therefore. Choose a positive constant a so that aK = −α. Lemma 5. For general R(x) ∈ Lp (M ). by choosing a suﬃciently small. By the Lp regularity theory.p (M ) of (5. If R(x) is bounded from below. Thus there is a solution w of w = aK(x) + α. (5.5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 147 − u+ + α − R(x)eu+ ≥ 0. x) ≥ − ¯ aR − aR(x)(eav − 1) 2 ¯ R ≥ a − − R L∞ eav − 1 2 ¯ −R = a R L∞ − eav − 1 .52) becomes ¯ f (a. we can make eav(x) − 1 ≤ ¯ −R . let ¯ K(x) = max{1. where a is a positive number to be determine later. Then (5. Let u− = w − λ. Then − u− + α − R(x)eu− = − w + α − R(x)ew−λ = −aK(x) − R(x)ew−λ ≤ −K(x)(a − ew−λ ).47) such that u− < u+ . w ∈ W 2.4 (Existence of Lower Solutions) Given any R ∈ Lp (M ) and a function u+ ∈ W 2.47) for α ≥ a2 . we see that eav(x) →1 . b. there is a lower solution u− ∈ W 2.2. 2 R L∞ for all x ∈ M. p Then aK(x) + α = 0 and (aK(x) + α) ∈ L (M ). we choose α ≥ a2 and eb = a. uniformly on M as a→0. −R(x)}.
In order that for each i. u+ . ·) possess some monotonicity. u− ∈ W 2. . we choose k(x) = k1 (x)eu+ (x) . for all x ∈ M. we need the operator L obeys some maximum principle and f (x. The Laplace operator in the original equation does not obey maximum principles. where k1 (x) = max{1. Hence the maximum principle holds for L. Moreover.53).5 Let p > 2. there is some point on M where u < 0. then − u(xo ) ≤ 0. and u is C ∞ in any open set where R(x) is C ∞ . then u(x) ≥ 0 . where k(x) is any positive function on M . Lui+1 = f (x. then there is a solution u ∈ W 2. to keep u− ≤ ui ≤ u+ . This completes the proof of the Lemma. Based on the maximum principle on L. A contradiction. 2. it requires that f (x. Let xo be a negative minimum of u.47). u). we arrive at − u− + α − R(x)eu− ≤ 0. −R(x)}. To see this. however. by (5. that is ∂f = R(x)eu + k(x) ≥ 0. Lemma 5. we suppose in the contrary. one can make λ large enough such that u− = w − λ < u+ . Proof.p (M ) of (5. u− ≤ u ≤ u+ . and it follows that − u(xo ) + k(xo )u(xo ) ≤ k(xo )u(xo ) < 0.148 5 Prescribing Gaussian Curvature on Compact 2Manifolds and noticing that K(x) ≥ 0. We now write equation (5. then the new operator L obeys the maximum principle: If Lu ≥ 0. then start from the upper solution u+ and apply iterations: Lu1 = f (x. and if u− ≤ u+ . Therefore u− is a lower solution. u). u− ≤ ui ≤ u+ . We will rewrite the equation (5. If there exist upper and lower solutions. ·) be monotone increasing.p (M ).47) as Lu ≡ − u + k(x)u = R(x)eu − α + k(x)u := f (x. u+ ) . Moreover.2. ∂u To this end.47) as Lu = f (x. ui ) . · · · . i = 1. if we let Lu = − u + k(x)u.
from Lu+ ≥ f (x.2.p . Part i) can obviously derived from Lemma 5. This veriﬁes (5. we will show that um+2 ≤ um+1 . Since f (x. Similarly.47) and satisﬁes u− ≤ u ≤ u+ . . inequality (5. In fact. Lui+1 Lp = R(x)eui − α + k(x)ui Lp ≤ C. Therefore.1. there is a subsequence that converges uniformly to some function u in C 1 (M ). we have f (x. one proves inductively that u ∈ C ∞ in any open set where R(x) is C ∞ . Hence {ui } is bounded in W 2.54) L(u+ − u1 ) ≥ 0.2. Then the maximum principle for L implies u+ ≥ u1 . it follows that u is a solution of (5.55) shows that {ui } is uniformly bounded. In view of the monotonicity (5. u+ ) (5. u− . Assume that for any integer m.5.p →Lp is continuous. um ) and hence L(um+1 − um+2 ) ≥ 0. 2.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 149 Base on this choice of L and f . 2.2.55). one can show the other side of the inequality and arrive at u− ≤ ui+1 ≤ ui ≤ · · · ≤ u+ .55) Since u+ . um+1 ) ≤ f (x. · · · .p (M ).p into C 1 . we are now able to complete the proof of the Theorem 5. i = 1. in the Lp norm. By the compact imbedding of W 2. we show by math induction that the iteration sequence so constructed is monotone decreasing: ui+1 ≤ ui ≤ · · · ≤ u1 ≤ u+ . Based on these Lemmas. so u ∈ W 2. · · · . and ui are continuous. we conclude that the entire sequence {ui } itself converges uniformly to u. Therefore um+1 ≥ um+2 . then u ∈ C m+2+γ . More precisely. if R ∈ C m+γ .1. This completes the ﬁrst step of our induction. This completes the proof of the Lemma. u+ ) we derive immediately and Lu1 = f (x. Since L : W 2. um+1 ≤ um ≤ u+ . ·) is monotone increasing in u for u ≤ u+ .54) through math induction.2. By the Schauder theory. Moreover ui+1 − uj+1 W 2. i = 1.p . (5.p ≤ C L(ui+1 − uj+1 ) ≤ C( R Lp Lp L∞ e ui −e uj + k Lp ui − uj L∞ ). Consequently. Proof of Theorem 5. {ui } converges strongly in W 2.
u(x.3.56) We argue in the following 2 steps. In fact.47). if there is a upper solution. Now to complete the proof of the Theorem. By virtue of Lemma 5. u Then by H oder inequality ¨ u This implies that L2 L2 ≤ 1 α R udA. there exist a δ > 0. as α→ − ∞. (5. Hence by Lemma 5. such that (5.56) by u and integrate over M to obtain − M  u2 dA + α M u2 dA = M R(x)u(x)dA. Multiplying both sides of equation (5. α)→0. Consequently.56). i) For a given α < 0.2. almost everywhere . then for any α > α1 .47) is sovable }.2. if for α1 .47) at α.2. we have − u1 + α > R(x)eu1 . Hence equation − u + α = R(x)eu also has a solution.58) . that is. let u = u(x. M ≤ 1 R α L2 . u1 is an upper solution for equation (5.2. we can prove a stronger result: −αφ(x)→R(x) almost everywhere as α→ − ∞. (5. Let αo = inf{α  (5.150 5 Prescribing Gaussian Curvature on Compact 2Manifolds From Lemma 5. (5. such that the unique solution of φ + αφ = R(x) is negative somewhere. then there is a solution. Lemma 5.2. α) be the solution of (5. Actually.47) possesses a solution u1 . it suﬃce to prove that if R(x) is positive somewhere.5.3 infers that αo = −∞ if R(x) ≤ 0 but R(x) is not identically 0.47) is solvable for −δ ≤ α < 0. Then for any 0 > α > αo . one can solve (5. we only need to show Claim: There exist α < 0.2.57) (5.4 and 5. then αo > −∞.
one can see that.57).. and hence (5. Let {αk } be a sequence of numbers such that αk > αo .2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 151 Since for any α < 0. From the above reasoning. we can express + α)−1 R(x). 1 φ(x)→ R(x).e.59) ii) To see (5. i. equation (5.5.2 Chen and Li’s Results From the previous section.e. one can see from the Kazdan and Warner’s result that in the case when R(x) is nonpositive. as k→∞.60) by simply apply the operator + α to both sides. (5. . then ( + α)−1 R(x)→0 almost everywhere . In particular. we have R ∈ L2 . The Outline. if R ∈ L2 . as α→ − ∞. for suﬃciently negative α. Or Proof. and αk →αo . This completes the proof of the Theorem. Then equation (5. we write −αφ + R(x) = ( + α)−1 R.46) to have a solution for any 0 > α > −∞ is R(x) ≤ 0. in the case when R(x) changes signs.59) implies that −αφ + R(x)→0.46) has a solution for the critical value α = αo ? In [CL1]. φ(x) is negative somewhere. (5. We will prove the Theorem in 3 steps.60) One can see the validity of (5. we have αo > −∞.2.2. Let αo be deﬁned in the previous section. Now by the assumption. a necessary and suﬃcient condition for equation (5. a. Chen and Li answered this question aﬃrmatively. α Since R(x) is continuous and positive somewhere.46) has been solved completely. a. and hence there are still some questions remained. the operator u=( + α is invertible. However. Theorem 5.46) has at least one solution when α = αo .2 (ChenLi) Assume that R(x) is a continuous function on M .e. 5. one may naturally ask: Does equation (5.
let xk be a minimum of φk (x) on M . ∀ k = 1. for each k.63) . (5. For each ﬁxed αk .46) for all αk .61) because αk →αo < 0 . By the result of Kazdan and ˜ Warner. there exists a solution φk of − φk + αk = R(x)eφk . (5. Then let αk →αo . we prove that {uk } is bounded in H 1 (M ) and hence converges to a desired solution.62) Otherwise. 2. · · · .152 5 Prescribing Gaussian Curvature on Compact 2Manifolds In step 1. by Kazdan and Warner. Choose a suﬃciently negative constant A < −co to be the lower solution of equation (5.62). On the other hand. we show that the sequence of minimizers {uk } is bounded in the region where R(x) is positively bounded away from 0. ψk is a super solution of the equation − u + αk = R(x)eu(x) . In step 2.46) for α = αk . we have − φk (xk ) ≤ 0 . since xk is a minimum. Then ˜ − ψk + αk > − ψk + αk = R(x)eψk (x) .e. − A + αk ≤ R(x)eA . Call it ψk . ˜ i. Step 1. such that φk (x) ≥ −co . For each αk > αo . (5. as k→∞. Let the minimizer be uk . choose αo < αk < αk .61) We ﬁrst show that there exists a constant co > 0. and hence we can make R(x)eA greater than any ﬁxed negative number. there exists a solution of equation (5. φk (xk )→ − ∞ . for each ﬁxed αk . This is possible because R(x) is bounded below on M . that is. This veriﬁes (5. we minimize the functional. · · · . as k→∞. ∀ x ∈ M and ∀ k = 1. Then obviously. In step 3. These contradict equation (5. 2. Jk (u) = 1 2  u2 d A + αk M M ud A − M R(x)eu d A in a class of functions that are between a lower and a Upper solution.
(5. ∀ x ∈ M. such that as − ≥ t ≤ 0. i. there is a δ > 0. such that R(x) ≤ C1 . and xo is a minimum of w.64) Here we have used the H older and Poincar´ inequality.68). In fact.67) Consequently − uk (xo ) + αk − R(xo )euk (xo ) ≤ 0. To verify this. such that uk (xo ) = ψk (xo ). A < uk (x) < ψk (x).65) Suppose in the contrary. From (5. . we minimize the functional Jk (u) in a class of functions H = {u ∈ C 1 (M )  A ≤ u ≤ ψk }. since M is compact and R(x) is continuous. 0]. First we show that uk (x) < ψk (x) . M (5. (5. Since uk (xo ) > A. 2 ud A − C1 M M eψk d A (5. In order to show that uk is a solution of equation (5.65) and a similar argument as in the previous section. It follows that − w(xo ) ≤ 0. one ¨ e can easily see that Jk is bounded from below. This is possible. Also by (5. Consequently. It follows that. for any positive function v(x) with its support in Bδ (xo ). ∀x ∈ Bδ (xo ). ∀ x ∈ M.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 153 Illuminated by an idea of Ding and Liu [DL].63). we use a maximum principle. Then w(x) ≥ 0. we have. such that A < uk (x) . we will derive a contradiction.66) (5.5. we need it be in the interior of H. Since uk is a minimum of Jk in H. This implies that [− uk + αk − R(x)euk ]v(x)d A ≤ 0. there exists a constant C1 .68) Let w(x) = ψk (x) − uk (x). Jk (u) ≥ 1  u2 d A + αk 2 M 1 ≥ ( u − C2 )2 − C3 . there exists xo ∈ M . because the functional Jk (u) is bounded from below.e. for any u ∈ H. by (5. While on the other hand. Hence g (0) ≤ 0. we have in particular that the function g(t) = Jk (uk + tv) achieves its minimum at t = 0 on the interval [− . ∀ x ∈ M.64). we have uk + tv ∈ H. there exists an > 0. we can show that there is a subsequence of a minimizing sequence which converges weakly to a minimum uk of Jk (u) in the set H.
6 (Brezis. b. Similarly. and Shafrir). (5. for some constant a. and Shafrir [BLS]: Lemma 5. To show that it is also bounded from above. uk + tv is in H for all suﬃciently d small t. we ﬁrst consider the region where R(x) is positively bounded away from 0. Li. .69) so small such that Let xo be a point on M at which R(xo ) > 0. This veriﬁes (5. Li. ∀ x ∈ M. Therefore we must have uk (x) < ψk (x) . K. Then a direct computation as we did in the past will lead to − uk + αk = R(x)euk .2. Then any solution u of the equation − u = V (x)eu . Assume that V is a Lipschitz function satisfying 0 < a ≤ V (x) ≤ b < ∞ and let K be a compact subset of a domain Ω in R2 . Let vk be a solution of − vk − αk = 0 x ∈ Ω vk (x) = 1 x ∈ ∂Ω. x ∈ Ω. one can show that A < uk (x) . Ω). and hence dt Jk (u + tv)t=0 = 0. K Ω V L∞ . Step 2. Then wk satisﬁes − wk = R(x)e−vk ewk . Now for any v ∈ C 1 (M ). Choose R(x) ≥ a > 0 . ∀ x ∈ M. Let wk = uk + vk . satisﬁes sup u + inf u ≤ C(a.65). α Again a contradiction. ∀ x ∈ Ω ≡ B2 (xo ). It is obvious that the sequence {uk } so obtained in the previous step is uniformly bounded from below by the negative constant A. We will use a sup + inf inequality of Brezis.154 5 Prescribing Gaussian Curvature on Compact 2Manifolds − w(xo ) = − ψk (xo ) − (− uk (xo )) ≥ −˜ k + αk > 0. ∀ x ∈ M.
From the previous step. the metric is pointwise conformal to the Euclidean metric. Locally. Let wk = uk − vk . then − wk = R(x)euk − K(x)evk .6 to {wk } to conclude that the sequence {wk } is also uniformly bounded from above in the smaller ball B (xo ). Then since − vk + αk ≤ 0 . It is open since R(x) is continuous. M Choosing φ = e wk 2 . For each αk . and αk →αo The sequence {vk } is uniformly bounded. It follows that for any function φ ∈ H 1 (M ). the sequence {vk } is bounded from above and from below in the small ball Ω. for x ∈ D. (5.70) We show that {wk } is bounded in H 1 (M ). we have  wk 2 ewk dA = M M R(x)euk +wk dA − D K(x)euk dA. we derive .2. Let K(x) be a smooth function such that K(x) < 0 .70) by ewk and integrating on M . Now the uniform bound for {uk } in the same region follows suit. such that the set D = {x ∈ M  R(x) > δ} is nonempty. {wk } is also bounded from below.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 155 By the maximum principle. (5. We show that the sequence {uk } is bounded in H 1 (M ). Since xo is an arbitrary point where R is positive. and K(x) = 0 . Choose a small δ > 0.71) On the other hand.5. since each uk is a minimizer of the functional Jk . Step 3. multiplying both sides of (5. On one hand. else where. hence one can apply Lemma 5. [ φ2 − R(x)euk φ2 ]d A ≥ 0. let vk be the unique solution of − vk + αk = K(x)evk . the second derivative Jk (uk ) is positively deﬁnite. x ∈ M. we conclude that {wk } is uniformly bounded in the region where R(x) is positively bounded away from 0. we know that {uk } is uniformly bounded in D. Since {uk } is uniformly bounded from below.
¯ for some subsequence. This completes the proof of the Theorem.73) Notice that {uk } is uniformly bounded in region D and {wk } is bounded from below due to the fact that {uk } is bounded from below and {vk } is bounded. . M (5. hence we can apply Poincar´ inequality to uk and conclude that {˜k } is e ˜ u bounded in H 1 (M ).71) and (5.73) implies that M  wk 2 dA is bounded.72). a contradiction with the fact that {˜k } is bounded u in H 1 (M ). we would have u2 dA = ˜k D D uk − uk 2 dA→∞. Consequently. Taking into account that u k {uk } is bounded in the region D where R(x) is positively bounded away from 0. Apparently. we have u2 dA ≤ 2 k M M u2 dA + u2 M  . {uk } must also be bounded in H 1 (M ). where uk is the average of uk .156 5 Prescribing Gaussian Curvature on Compact 2Manifolds 1 4  wk 2 ewk dA ≥ M M R(x)euk +wk dA. then {¯k } is unbounded. Obviously M uk dA = ˜ u ¯ ˜ 0. (5. and uo is the desired weak solution for − uo + αo = R(x)euo . and so does M  uk 2 dA.72) Combining (5. we arrive at − D K(x)euk dA ≥ 3 4  wk 2 ewk dA. (5. Therefore. there exists a subsequence of {uk } that converges to a function uo in H 1 (M ). ˜k ¯k If M u2 dA is unbounded. Consider uk = uk −¯k .
1 Introduction On higher dimensional sphere S n with n ≥ 3.2 Estimate the Values of the Functional The Variational Scheme In the Region Where R < 0 In the Region Where R is small In the Region Where R > 0 6. Kazdan and Warner raise the following question: Which functions R(x) can be realized as the scalar curvature of some conformally related metrics? It is equivalent to consider the existence of a solution to the following semilinear elliptic equation − u+ n+2 n(n − 2) n−2 u= R(x)u n−2 .2. 4 4(n − 1) (6.1) The equation is “critical” in the sense that lack of compactness occurs.3 The A Priori Estimates 6. x ∈ S n .1 6.2 Introduction The Variational Approach 6. Besides the obvious necessary condition that R(x) be positive somewhere.3 6.2 6. for n ≥ 3 6. The conditions are: X(R)dVg = 0 Sn (6.2.3.3. there are wellknown obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE].6 Prescribing Scalar Curvature on S n . u > 0.3.2) .1 6.1 6.
1) on higher dimensional spheres. We call these KazdanWarner type conditions. Xu and Yang [XY] showed that if R is ‘nondegenerate. then problem (6. However. In other words. the derivatives of R up to order (n − 2) vanish. while some higher but less than nth order derivative is distinct from 0. among others. we answered question (a) negatively [CL3] [CL4].’ then (6. It was illustrated by a counter example in [Bi] constructed by Bianchi.4) Now.3) a suﬃcient condition? and (b) if not. These conditions give rise to many examples of R(x) for which equation (6. The above mentioned Bianchi’s counter example seems to suggest that the ‘ﬂatness condition’ is somewhat sharp. what are the necessary and suﬃcient conditions? Kazdan listed these as open problems in his CBMS Lecture Notes [Ka]. (6. Recently. the ‘nondegeneracy’ condition is a special case of the ‘ﬂatness condition’. a monotone rotationally symmetric function R admits no solution. the ‘nondegeneracy’ condition is no longer suﬃcient to guarantee the existence of a solution.1) has no solution. [Li2]. a more proper condition is called the ‘ﬂatness condition’. Although people are wondering if the ‘ﬂatness condition’ is necessary. Roughly speaking. namely. if it is monotone in the region where it is positive. We found some stronger obstructions. For equation (6. In dimensions higher than 3. for n ≥ 3 where dVg is the volume element of the conformal metric g and X is any conformal vector ﬁeld associated with the standard metric g0 . it requires that at every critical point of R. In this situation. which is nondegenerate and nonmonotone.3) Then (a) is (6.158 6 Prescribing Scalar Curvature on S n . In particular. it is still used widely today (see [CY3]. numerous studies were dedicated to these problems and various suﬃcient conditions were found (please see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein ). For n = 3.3) on S 2 . Our results imply that for a rotationally symmetric function R.1) admits no solution unless R is a constant. the conditions become: R > 0 somewhere and R changes signs. were those KazdanWarner type necessary conditions also suﬃcient ? In the case where R is rotationally symmetric.4) is a suﬃcient condition. He found some positive rotationally symmetric function R on S 4 . and for which the problem admits no solution. In the last two decades. . and [SZ]) as a standard assumption to guarantee the existence of a solution. is this a suﬃcient condition? For prescribing Gaussian curvature equation (5.1) is that R (r) changes signs in the region where R is positive. a major diﬃculty is that multiple blowups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations. a necessary condition to solve (6. one problem of common concern left open. (6.
Let Jp (u) := Rup+1 dV Sn and E(u) := Sn ( u2 + γn u2 )dV.1 Introduction 159 Now. and [SZ]. [Ba]. Thus. Then a necessary and suﬃcient condition for equation (6.’ (6. we answer the question aﬃrmatively. we only list a typical and easytoverify one.4) a suﬃcient condition? In this section.6) for each p < τ and close to τ .1) to have a solution is that R (r) changes signs in the regions where R > 0.’ is (6.6). under the ‘ﬂatness condition. Here. we essentially answer the open question (b) posed by Kazdan.1. Outline of the proof of Theorem 6. and it applies to functions R with changing signs.6. We ﬁrst ﬁnd a positive solution up of the 4 subcritical equation − u + γn u = R(r)up .1. This is true in all dimensions n ≥ 3.1. (6.1 Let n ≥ 3. Let R = R(r) be rotationally symmetric and satisfy the following ﬂatness condition near every positive critical point ro : R(r) = R(ro ) + ar − ro α + h(r − ro ). The theorem is proved by a variational approach. We prove that. take the limit. u ≥ 0}.’ obtain some quite sharp and clean estimates in those neighborhoods. to better illustrate the idea. .1) to have a solution. We use the ‘center of mass’ to deﬁne neighborhoods of ‘critical points at inﬁnity. Li. where S n  is the volume of S n . n+2 Let γn = n(n−2) and τ = n−2 . Theorem 6.’ a general one was presented in [Li2] by Y. To ﬁnd the solution of equation (6. we construct a maxmini variational scheme. a natural question is: Under the ‘ﬂatness condition.4) is a necessary and suﬃcient condition for (6. Then let p→τ . (6.5) where h (s) = o(sα−1 ). [Li2]. and construct a maxmini variational scheme at subcritical levels and then approach the critical level. in the statement of the following theorem. with a = 0 and n − 2 < α < n. We blend in our new ideas with other ideas in [XY]. There are many versions of ‘ﬂatness conditions. We seek critical points of Jp (u) under the constraint H = {u ∈ H 1 (S n )  E(u) = γn S n .
we have some kinds of ‘mountain pass’ associated to each local maximum of R. To show the convergence of a subsequence of {up }. Jp (up ) ≤ R(ri )S n  − δ.1). Let r1 be a positive local maximum of R. Our scheme is based on the following key estimates on the values of the functional Jp (u) in a neighborhood of the ‘critical points at inﬁnity’ associated with each local maximum of R. for n ≥ 3 If R has only one positive local maximum. and let Γ be the family of all such paths.7) Here δ > 0 is independent of p.10) for any positive local maximum ri of R.8) (6. a constant multiple of up is a solution of (6. Let ψ1 and ψ2 be two functions deﬁned by (6. while there is a function ψ1 ∈ G1 . Deﬁne cp = sup min Jp (u). by (6. such that on the boundary ∂G1 of G1 . such that δ Jp (ψ1 ) > R(r1 )S n  − . (6. by compactness. there is a critical point up of Jp (·).160 6 Prescribing Scalar Curvature on S n .4). we seek a solution by minimizing the functional in a family of rotationally symmetric functions in H. In this case.7). The set G1 is deﬁned by using the ‘center of mass’. (ii) In the region where R is small: This is mainly due to the boundedness of the energy E(up ). we let p→τ .8) associated to r1 and r2 . take the limit. (6. we show that the sequence can only blow up at one . on each of the poles. Then similar to the approach in [C]. Let γ be a path in H connecting ψ1 and ψ2 . such that Jp (up ) = cp . the solutions we seek are not necessarily symmetric. Roughly speaking. then by condition (6. Using a Pohozaev type identity. R is either nonpositive or has a local minimum.9) γ∈Γ γ For each p < τ . Let r1 and r2 be two smallest positive local maxima of R. we have Jp (u) ≤ R(r1 )S n  − δ.6). (iii) In the region where R is positively bounded away from 0: First due to the energy bound. Moreover. To ﬁnd a solution of (6. In the following. {up } can only blow up at most ﬁnitely many points. Obviously. We prove that there is an open set G1 ⊂ H (independent of p). 2 (6. we assume that R has at least two positive local maxima. we established apriori bounds for the solutions in the following order. (i) In the region where R < 0: This is done by the ‘Kelvin Transform’ and a maximum principle.
We divide the rest of the section into two parts. where S n  is the volume of S n .11) for each p < τ := Let n+2 n−2 . we establish a priori estimates on the solution sequence {up }. In subsection 2. First. we construct a maxmini variational scheme to ﬁnd a solution of − u + γn u = R(r)up (6. Finally. and u := E(u) be the corresponding norm.7) and (6.2 The Variational Approach 161 point and the point must be a local maximum of R. 6.8). Then. we establish the key estimates (6.6) for each p.10) to argue that even one point blow up is impossible and thus establish an apriori bound for a subsequence of {up }. we carry on the maxmini variational scheme and obtain a solution up of (6. Let (u. we use (6. . u ≥ 0}. v) = ( S u v + γn uv)dV be the inner product in the Hilbert space H 1 (S n ). We seek critical points of Jp (u) under the constraint H := {u ∈ H 1 (S n )  E(u) = E(1) = γn S n .2 The Variational Approach In this section. In subsection 3. we carry on the maxmini variational scheme. One can easily see that a critical point of Jp in S multiplied by a constant is a solution of (6.6.11). Jp (u) := Sn Rup+1 dV and E(u) := Sn ( u2 + γn u2 )dV.
Deﬁne the center of mass of u as q(u) =: Sn xuτ +1 (x)dV uτ +1 (x)dV Sn (6. 2 Here det(dhλ ) is the determinant of dhλ . in the spherical polar ˜ coordinates x = (r. the energy E(·). It is wellknown that this family of conformal transforms leave the equation (6. Correspondingly. while tends to zero elsewhere.13) We may also regard φq as depend on two parameters λ and q .2. n−2 λ φq = φλ.2 and 6. Unlike the classical ones. for n ≥ 3 6. there is a family of conformal transforms Tq : H −→ H.2. As λ→0.˜ = ( 2 ) 2 .1 Estimate the Values of the Functional To construct a maxmini variational scheme.162 6 Prescribing Scalar Curvature on S n . with Tq u := u(hλ (x))[det(dhλ )] n−2 2n (6. and can be expressed explicitly as det(dhλ ) = λ + λ2 sin2 n r 2 cos2 r 2 . we ﬁrst show that there is some kinds of ‘mountain pass’ associated to each positive local maximum of R.13). 1). θ ∈ S n−1 ) centered at the south pole where r = 0. these ‘mountain passes’ are in some neighborhoods of the ‘critical points at inﬁnity’ (See Proposition 6. so that the south pole of S n is at the origin O and the center of the ball B n+1 is at (0. Let φq be the standard solution with its ‘center of mass’ at q ∈ B n+1 . θ). θ) of S n (0 ≤ r ≤ π.15) r hλ (r. 0. When q = O (the south pole of S n ).2. · · · . Choose a coordinate system in Rn+1 . (6. this family of functions blows up at south pole. θ) = (2 tan−1 (λ tan ). φq ∈ H and satisﬁes − u + γn u = γn uτ . As usual. and the integral S n uτ +1 dV invariant. We recall some wellknown facts in conformal geometry. we can express. we use  ·  to denote the distance in Rn+1 .12) It is actually the center of mass of the sphere S n with density uτ +1 (x) at the point x.14) q 2 r + sin2 r λ cos 2 2 with 0 < λ ≤ 1. where q ˜ ˜ is the intersection of S n with the ray passing the center and the point q. that is. we have .1 below). More generally and more precisely. (6.
Tq v) = (u. ˆ n+2 for all u ∈ C ∞ (M ). Proof.1 For any u. . g and Rg are the LaplaceBeltrami operator and the scalar curvature associated with the metric g respectively. E(Tq u) = E(u) and Sn (Tq u)τ +1 dV = Sn uτ +1 dV. Sn (Tq u)k (Tq v)τ +1−k dV = (6. we have ˆ Lg u = w− n−2 Lg (uw) . If M is a compact manifold without boundary.2 The Variational Approach 163 Lemma 6.18) Interested readers may ﬁnd its proof in [Bes]. The proof for (i) is similar. one would arrive at { M 2 g u ˆ + c(n)Rg u2 }dVg = ˆ ˆ M { 2 g (uw) + c(n)Rg uw2 }dVg . v) (ii) Sn (6. the standard metric on S n .6. For g = w4/(n−2) g.18) by u and integrating by parts. v ∈ H 1 (S n ). and for any nonnegative integer k. we choose g to be the Euclidean metric g in Rn with ¯ gij = δij and g = go . The conformal Laplacian has the following wellknown invariance property under the conformal change of metrics. (i) We will use a property of the conformal Laplacian to prove the invariance of the energy E(Tq u) = E(u) .2.16) uk v τ +1−k dV.17) In particular. g). it holds the following (i) (Tq u. w > 0. (6. (6.19) In our situation. then multiplying both sides of (6. where c(n) = 4(n−1) . Lg := g − c(n)Rg n−2 is call a conformal Laplacian. Then it is wellknown that ¯ ˆ go = w n−2 (x)¯ g with w(x) = satisfying the equation We also have 4 4 + x2 n+2 n−2 2 4 − ¯ w = γn w n−2 . On a ndimensional Riemannian manifold (M.
One can also verify that Tq φq = 1. ˜ Under this projection. with r x = tan 2 2 and n−2 λ(4 + x2 ) 2 (Tq u)(x) = u(λx) . xn ) ∈ Rn . Make a stereographic projection from S n to Rn as shown in the ﬁgure below. (ii) This can be derived immediately by making change of variable y = hλ (x). Again.20) that E(Tq u) = Rn  o u(λx) λ(4 + x2 ) 4 + λx2 n−2 2  + γn u(λx) 2 λ(4 + x2 ) 4 + λx2 n−2 2 2 dVgo = Rn  ¯ u(λx)  ¯ u(λx) Rn λ(4 + x2 ) 4 + λx2 4λ 4 + λx2 4 4 + y2 n−2 2 w(x) 2 dx n−2 2 = = Rn 2 dx  ¯ u(y) n−2 2 2 dy = Rn  ¯ (u(y)w(y))2 dy = E(u). 4 + λx2 It follows from this and (6. ¯ Now formula (6. where q is placed at the origin of Rn . θ) ∈ S n becomes x = (x1 . for n ≥ 3 c(n)Rgo = γn := and n(n − 2) 4 c(n)Rg = c(n)0 = 0. · · · . (6. This completes the proof of the lemma.19) becomes { Sn o u 2 + γn u2 }dVo = Rn  ¯ (uw)2 dx.164 6 Prescribing Scalar Curvature on S n . . let q be the intersection of S n with the ray passing through the ˜ center of the sphere and the point q. the point (r.20) where o and ¯ are gradient operators associated with the metrics go and g ¯ respectively.
there is a p1 ≤ τ . and δo .24) Proof of Proposition 6. v := min u − tφq ≤ ρo }.25) For a given δ1 > 0. q and Tq for q = O can be expressed in a ˜ ˜ similar way. Proposition 6. such that for all τ ≥ p ≥ p1 . θ) which we assume to be a positive local maximum.1 Through a straight forward calculation. The estimates near other positive local maxima are similar. such that φλo . (6. such that for all p ≥ po and for all u ∈ ∂Σ.O .2. po . in a small neighborhood of O. one can show that Jτ (φλ. by (6.2. 2 (6. and Jτ (φλo . t. holds Jp (u) ≤ R(0)S n  − δo .O ) > R(0)S n  − δ1 . as λ→0. Deﬁne Σ = {u ∈ H  q(u) ≤ ρo .23) Σ Then we show that on the boundary of Σ.25). there exists a p1 .2. Hence. . by (6. (6.26) It is easy to see that for a ﬁxed function φλo . one can choose λo . n − 2 < α < n. We ﬁrst notice that the supremum of Jp in Σ approaches R(0)S n  as p→τ .1 For any δ1 > 0. for some a > 0.2 There exist positive constants ρo . (6.6. More precisely. We will estimate the functional Jp in this neighborhood. sup Jp (u) > R(0)S n  − δ1 .22) Notice that the ‘centers of mass’ of functions in Σ are near the south pole O. Jp is strictly less and bounded away from R(0)S n .O )→R(0)S n . Jp is continuous with respect to p.q (6.2 The Variational Approach 165 The relations between q and λ. This is a neighborhood of the ‘critical points at inﬁnity’ corresponding to O.26). such that for all p ≥ p1 .O ∈ Σ.21) (6. Our conditions on R implies that R(r) = R(0) − arα . we have Proposition 6. We now carry on the estimates near the south pole (0.
q ρo ≤ q + C v . Then for ρo suﬃciently small.2. .˜ and the boundedness of R.O [R(0) − ax + q α ]φp+1 dV ˜ λ.29).˜ q n−2 2 for q ∈ B 2 (O).˜ q ···. We express Jp (φλ. Noticing that α < n and δp →0.2 is rather complex. ˜ q2 ≤ C(˜2 + λ4 ).28). We ﬁrst estimate Jp for a family of standard functions in Σ. q q where δp := τ − p and op (1)→0 as p→τ . for n ≥ 3 Jp (φλo .22).27) > 0 be small such that (6. we have Jp (φλ. Let B (O) ⊂ S n .3 (On the ‘center of mass’) (i) Let q. and v be deﬁned by (6. Lemma 6.31) (6.27). and for λ and ˜ suﬃciently q small.O (6.˜ q ≤ B (O) B (O) R(x + q )φp+1 dV ˜ λ.28) From the deﬁnition (6.˜) = q Sn (6.˜) ≤ (R(0) − C1 ˜α )S n (1 + op (1)) − C1 λα+δp . (6.14). This completes the proof of Proposition 6. Lemma 6.14) of φλ. we conclude that (6.O ≤ B (O) [R(0) − C3 xα − C3 ˜α ]φp+1 dV q λ. λ. and (6. Then for suﬃciently small q.2.2 For p suﬃciently close to τ .30) imply (6. This completes the proof of the Lemma. we also need the following two lemmas that describe some useful properties of the set Σ. ˜ (6. To estimate Jp for all u ∈ ∂Σ. (6.2. we obtain q the smallness of the second integral: S n \B (O) R(x)φp+1 dV ≤ C2 λn−δp λ. we work in a local coordinate centered at O. The proof of Proposition 6.2.2.21) holds in ··· + B (O) S n \B (O) R(x)φp+1 dV = λ. B (O) R(x)φp+1 dV = λ.29) To estimate the ﬁrst integral.O ) > R(0)S n  − δ1 .O .2. (6.1.166 6 Prescribing Scalar Curvature on S n . Proof of Lemma 6.32) (ii) Let ρo . and q be deﬁned by (6.30) ≤ [R(0) − C3 ˜α ]S n [1 + op (1)] − C4 λα+δp (n−2)/2 . q α α α Here we have used the fact that x + q  ≥ c(x + ˜ ) for some c > 0 in ˜ q one half of the ball B (O) and the symmetry of φλ. q.
n. by the deﬁnition.6.31) is a direct consequence of the triangle inequality and (6. Sn (6. Proof of Lemma 6.2. ψi (x) = xi . n. i = 1. xn ). then we are done.2.37) (tφq + v)τ +1 Sn Noticing that v ≤ ρo is very small. one arrives at q − q  ∼ λ2 . 2. i = 1. (i) From the deﬁnition that φq = φλ.36).22). we have either v = ρo or q(u) = ρo .14).3. · · · .3. 2. Let Tqo be the conformal transform such that Tqo φqo = 1. · · · . (6. and by an elementary q calculation.36) ˜ Draw a perpendicular line segment from O to q q .4 (Orthogonality) Let u ∈ Σ and v = u − to φqo be deﬁned by (6. the boundary of Σ.37) in terms of v : ρo =  tτ +1 tτ +1 xφτ +1 + (τ + 1)tτ q φτ +1 + (τ + 1)tτ q ≤ xφτ +1 q φτ +1 q xφτ v + o( v ) q φτ v + o( v ) q   + C v ≤ q + C v .34) and Tqo v · ψi (x)dV = 0. (ii) For any u = tφq + v ∈ ∂Σ. 2. i = 1. · · · . If v = ρo . (6. It follows from the deﬁnition of the ‘center of mass’ that τ +1 n x(tφq + v) ρo =  S . then one can see that (6. (6.35) where ψi are ﬁrst spherical harmonic functions on S n : − ψi = n ψi (x) . · · · n. This completes the proof of Lemma 6.2 The Variational Approach 167 Lemma 6. .˜. and (6. If we place the center of the sphere S n at the origin of Rn+1 (not in our case). ˜ for λ suﬃciently small .33) Then Tqo vdV = 0 Sn (6. we can expand the integrals in (6. Here we have used the fact that v is small and that t is close to 1. then for x = (x1 .2. Hence we assume that q(u) = ρo .
The estimate is divided into two steps. for n ≥ 3 Proof of Lemma 6. ψn ). Deﬁne R(x) x ∈ B2ρo (O) m x ∈ S n \ B2ρo (O). we estimate Jτ (u). (6.40) where op (1)→0 as p→τ . (i) By a variation with respect to t.2 Make a perturbation of R(x): ¯ R(x) = where m = R ∂B2ρo (O) . By the invariant property of the transform.35). we arrive at (6. This completes the proof of the Lemma. q Sn Now. First we show ¯ ¯ Jp (u) ≤ Jτ (u)(1 + op (1)). ¯ In step 1. We use the fact that v = u − to φqo is the minimum of E(u − tφq ) among all possible values of t and q. Proof of Proposition 6.2. Write L = + γn . (6.168 6 Prescribing Scalar Curvature on S n . (6.13) that vφτo dV = 0.39) ¯ Jp (u) = Sn ¯ R(x)up+1 dV. In step 2. . φqo ) = Sn vLφqo dV.2. Step 1. we make a variation with respect to q.38) It follows from (6. 0= q q vLφq q=qo = γn q q vφτ q=qo q Tqo v(Tqo φq )τ q=qo = τ Tqo v(φq−qo )τ q=qo q φq−qo q=qo Tqo vφτ −1o q−q =τ Tqo v(ψ1 .4. apply the conformal transform Tqo to both v and φqo in the above integral. (ii) To prove (6. we have 0 = (v. we show that the diﬀerence between Jp (u) and Jτ (u) is very ¯ small. · · · .34).
6. Using the fact that both u and φq belong to H with u = γn S n  = φq .42) and (6. We derive t2 = 1 − It follows that ¯ R(x)uτ +1 ≤ Sn v 2 .2 The Variational Approach 169 In fact. Sn Notice that u 2 := E(u). that is ( v φq + γn vφq )dV = 0. write u = v + tφq . we have u 2 = v 2 + t2 φq 2 . we now only need to estimate Jτ (u). For any u ∈ ∂Σ.41) Step 2. This implies (6. by the H¨lder inequality o ¯ Rup+1 ≤ ≤ ¯ R p+1 uτ +1 dV τ +1 p+1 τ +1 p+1 τ +1 S n  τ +1 τ −p τ −p ¯ Ruτ +1 dV  R(0)S n   τ +1 .42) tτ +1 Sn ¯ R(x)φτ +1 + (τ + 1) q τ (τ + 1) ¯ R(x)φτ v + q 2 Sn Sn φτ −1 v 2 + o( v 2 ) q (6.40).41).40) and (6.27). .43) = I1 + (τ + 1)I2 + τ (τ + 1) I3 + o( v 2 ). (6. we use (6. ¯ Now estimate the diﬀerence between Jp (u) and Jp (u). (6. γn S n  (6.38) implies that v and φq are orthogonal with respect to the inner product associated to E(·). 2 To estimate I1 . ¯ By virtue of (6. ¯  Jp (u) − Jp (u) ≤ C1 S n \B2ρo (O) up+1 v p+1 ≤ C3 λn−δp n−2 2 ≤ C2 S n \B2ρo (O) (tφq )p+1 + C2 S n \B 2ρo (O) + C3 v p+1 .
for some positive number c2 . and (6.45).47) would be 0 due to the fact that γn = n(n−1) . in either case there exist a δo > 0.34) and (6. Consequently v It follows that I3 ≤ R(0) Sn 2 = Tq v 2 ≥ (γn + n + c2 ) Sn (Tq v)2 .35).47) Here we have used the fact that v is very small and ρα v can be controlled o by ˜α + λα (see Lemma 6. and without it. To estimate I2 . we use (6.44) q 2 γn S n  for some positive constant c1 .2. we have either v = ρo or q(u) = ρo .46) Now (6. for n ≥ 3 I1 ≤ (1 − τ +1 v 2 )R(0)S n (1 − c1 ˜α − c1 λα ) + O(λn ) + o( v 2 ).34) and (6. v) o v ≤ C 4 ρα v . o (6.35) imply that Tq v is orthogonal to these two eigenspaces and hence (Tq v)2 dV ≥ λ2 Sn Sn Tq v2 dV .45) ≤ Cρα φq o To estimate I3 . 4 For any u ∈ ∂Σ. Now (6. φτ −1 v 2 dV = R(0) q (Tq v)2 ≤ (Tq φq )τ −1 (Tq v)2 dV Sn = R(0) Sn R(0) v 2. the coeﬃcient of v 2 in (6. γ n + n + c2 (6.46) q has played a key role. I2 = Sn ¯ (R(x) − m)φτ vdV = q B2ρo (O) ¯ (R(x) − m)φτ vdV q = 1 γn ¯ (R(x) − m)Lφq · vdV B2ρo (O) ≤ Cρα  o B2ρo (O) Lφq · vdV = Cρα (φq .31) and (6.2).38)). (6.44). Notice that the positive constant c2 in (6. we use the H 1 orthogonality between v and φq (see (6. By (6. (6. is the second nonzero eigenvalue of − . It is wellknown that the ﬁrst and second eigenspaces of − on S n are constants and ψi corresponding to eigenvalues 0 and n respectively. q (6.170 6 Prescribing Scalar Curvature on S n .47). where λ2 = n + c2 . such that .46) imply that there is a β > 0 such that ¯ Jτ (u) ≤ R(0)S n [1 − β(˜α + λα + v 2 )].
11).41). In this case. Case (i): R has only one positive local maximum.2 The Variational Scheme In this part. (6. and (6.49) Obviously.48) Now (6. condition (6. (6. Therefore any maximizing sequence possesses a converging subsequence in Hr and the limiting function multiplied by a suitable constant is a solution of (6. i = 1.1 and 6.50) (6. we have Jp (up ) ≤ min R(ri )S n  − δ. i = 1. This completes the proof of the Proposition. Deﬁne cp = sup min Jp (u).2 The Variational Approach 171 ¯ Jτ (u) ≤ R(0)S n  − 2δo . 2. ψi ∈ Σi . 2 and Jp (u) ≤ R(ri )S n  − δ.11) and for all p close to τ .48) .2.4) implies that R must have local minima at both poles. Case (ii): R has at least two positive local maxima. δ Jp (ψi ) > R(ri )S n  − . the constant multiples are bounded from above and bounded away from 0. we construct variational schemes to show the existence of a solution to equation (6.51) and the deﬁnition of cp . there exists a critical (saddle) point up of Jp with Jp (up ) = cp .40).51) Let γ be a path in H joining ψ1 and ψ2 .11) for each p < τ .2. due to (6.53) One can easily see that a critical point of Jp in H multiplied by a constant is a solution of (6. Moreover. there exist two disjoint open sets Σ1 . By Proposition 6. Σ2 ⊂ H.24) is an immediate consequence of (6. 2. po < τ and δ > 0. (6.2. ∀u ∈ ∂Σi . by the wellknown compactness. Let r1 and r2 be the two smallest positive local maxima of R. such that for all p ≥ po . (6. .52) γ∈Γ u∈γ For each ﬁxed p < τ .6. Similar to the ideas in [C]. (6. 6. Jp is bounded from above in Hr and it is wellknown that the variational scheme is compact for each p < τ (See the argument in Chapter 2).2. Let Γ be the family of all such paths. we seek a solution by maximizing the functional Jp in a class of rotationally symmetric functions: Hr = {u ∈ H  u = u(r)}.2 (6. i=1.
1 In the Region Where R < 0 The apriori bound of the solutions is stated in the following proposition.3. That method may be applied to higher dimensions with a few modiﬁcations. and a sequence of points {xi }. for every δ > 0.2.2 in our paper [CL6]. It is mainly due to the energy bound. we also obtained estimates in this region by the method of moving planes.11) are uniformly bounded in the regions where R ≤ −δ. we assume that R be bounded away from zero. We will use a rescaling argument to derive a contradiction. The bound depends only on δ. which is almost a local maximum.2 Let {up } be solutions of equation (6.1 Assume that R satisﬁes condition (6.3. Now within our variational frame in this paper.172 6 Prescribing Scalar Curvature on S n . R close to 0. we prove that as p→τ . such that ui (xi )→∞. let K be any large number and let . 6. then there exists po < τ . in that paper.3. we estimate the solutions in three regions where R < 0. which converges to a solution uo of (6. {up } are uniformly bounded in the regions where R(x) ≤ δ. The convergence is based on the following a priori estimate.5) in Theorem 6. Theorem 6.1 For all 1 < p ≤ τ . such that for all po < p < τ .3 The A Priori Estimates In the previous section. To this end. we need to choose a point near xi . the solution of (6. for n ≥ 3 6. Then there exists a subsequence {ui } with ui = upi . with xi →xo and R(xo ) = 0. such that for all po < p ≤ τ . Proof. we removed this condition by using a blowing up analysis near R(x) = 0.11) for each p < τ .1. for rotationally symmetric functions R on S 3 . Now suppose that the conclusion of the Proposition is violated.1). there is a subsequence of {up }. dist({x  R(x) ≤ −δ}. pi →τ . the solutions of (6. It is easy to see that the energy E(up ) are bounded for all p.2 In the Region Where R is Small In [CL6]. the a priori estimate is simpler.11) obtained by the variational approach in section 2.1. In [CL7]. To prove the Theorem. and the lower bound of R. we showed the existence of a positive solution up to the subcritical equation (6. 6. and R > 0 respectively. However.11) obtained by the variational scheme are uniformly bounded. Proposition 6.3. {x  R(x) = 0}).3. In this section. which is a direct consequence of Proposition 6. Since xi may not be a local maximum of ui . Then there exists a po < τ and a δ > 0. Proposition 6.
and it follows by a standard argument that {vi } converges to a harmonic function vo in BK (0) ⊂ Rn . (6. Sn uτ +1 dV ≥ i BK (0) τ vi +1 dx. 1 ui (λi x + ai ). (6. Then from the deﬁnition of ai . BK (0) (6. This completes the proof of the Proposition.55).54). for any K > 0. the boundedness of the energy E(ui ) implies that Sn uτ +1 dV ≤ C. . From this. we use the fact si (ai ) ≥ si (xi ). (6. On the other hand. one can easily verify that the ball Bλi K (ai ) is in the interior of Bri (xi ) and the value of ui (x) in Bλi K (ai ) is bounded above by a constant multiple of ui (ai ).57) contradict with (6. 2 In a small neighborhood of xo . with vo (0) = 1. we use si (ai ) ≥ si (x) and derive from (6. one can verify that. By a straight forward calculation. i (6.56) for some constant C. Now we can make a rescaling. Let ai be the maximum of si (x) in Bri (xi ). choose a local coordinate and let si (xi ) = ui (x)(ri − x − xi ) pi −1 . ui (x) ≤ ui (ai ) ≤ ui (ai ) 2 pi −1 . To see the ﬁrst part.6. 2 Bλi K (ai ) ⊂ Bri (xi ).54) 2 ri − ai − xi  ≥ λi K. vi (x) = ∀ x ∈ Bλi K (ai ).3 The A Priori Estimates 173 ri = 2K[ui (xi )]− pi −1 2 . ui (ai ) Obviously vi is bounded in BK (0).56) and (6. ri − ai − xi  ri − x − xi  2 2 pi −1 To see the second part. we derive ui (ai )(ri − ai − xi ) pi −1 ≥ ui (xi )ri i It follows that hence 2 2 p −1 pi −1 = (2K) pi −1 .57) Obviously. Let λi = [ui (ai )]− 2 . vi (x)τ +1 dx ≥ cK n .55) for some c > 0. Consequently for i suﬃciently large.
{ui } can only blow up at ﬁnitely many points.3.˜.3 Let {up } be solutions of equation (6.58) for all positive local maxima rk of R. Proposition 6. Then {ui } can have at most one simple blow up point and this point must be a local maximum of R. For convenience. Finally. we use (6. {up } are uniformly bounded in the regions where R(x) ≥ . Then similar to the argument in Part II. Now if {ui } blow up at xo . si (x). Let ri . we show that there is no more than one point blow up and the point must be a local maximum of R. we see that {vi (x)} converges to a standard function vo (x) in Rn with τ − vo = R(xo )vo .3 In the Regions Where R > 0. q Step 3. Step 1. because each ui is the minimum of Jpi on a path. In Step 1. then by Lemma 6. It follows that uτ +1 dV ≥ co > 0. In Step 2. we argue that the solutions can only blow up at ﬁnitely many points because of the energy bound.1. we show that even one point blow up is impossible.3. This is done mainly by using a Pohozaeve type identity. for n ≥ 3 6. k (6.1. Proof. Therefore. one can obtain Jτ (ui ) ≤ min R(rk )S n  − δ.174 6 Prescribing Scalar Curvature on S n . with λi = (max ui )− n−2 and with q at the maximum of R. Moreover. From the proof of Proposition 2. The argument is standard. let {ui } be the sequence of critical points of Jpi we obtained in Section 2. . the sequence {ui } is uniformly bounded and possesses a subsequence converging to a solution of (6. {ui } behaves almost like a family of the standard 2 ˜ functions φλi .1 Let R satisfy the ‘ﬂatness condition’ in Theorem 1.53) to conclude that even one point blow up is impossible. we can have only ﬁnitely many such xo . This completes the proof of Theorem 6. The proof is divided into 3 steps.3.1.11) obtained by the variational approach in section 2. Therefore. Let {xi } be a sequence of points such that ui (xi )→∞ and xi →xo with R(xo ) > 0. i Bri (xo ) Because the total energy of ui is bounded.3.58). In Step 3. As a consequence of a result in [Li2] (also see [CL6] or [SZ]). Step 2. we have Jτ (ui )→R(xo )S n . This is a contradiction with (6. we have Lemma 6. Then there exists a po < τ . We have shown that {ui } has only isolated blow up points. and vi (x) be deﬁned as in Part II.1).2. such that for all po < p ≤ τ and for any > 0.
Correspondingly. More generally. then u can not have any interior maximum in the interval. the graph is concave up. the simplest version maximum principle reads: .7 Maximum Principles 7. the symmetric matrix is nonpositive deﬁnite at point xo .3 7. for a C 2 function u(x) of nindependent variables x = (x1 . b). xn ). we must have u (xo ) ≤ 0.4 7. Based on this observation. that is. Since under the condition u (x) > 0. at a local maximum xo . It is wellknown in calculus that at a local maximum point xo of u. One can also see this geometrically.2 7. · · · . it can not have any local maximum. we have D2 u(xo ) := (uxi xj (xo )) ≤ 0 .5 Introduction Weak Maximum Principle The Hopf Lemma and Strong Maximum Principle Maximum Principle Based on Comparison A Maximum Principle for Integral Equations 7.1 7.1 Introduction As a simple example of maximum principle. then we have the following simplest version of maximum principle: Assume that u (x) > 0 in an open interval (a. let’s consider a C 2 function u(x) of one independent variable x.
condition (7. we obtain a b (1 − x2 ) ≤ u(x) ≤ (1 − x2 ). and which vanish on the boundary as u does. the solution of the boundary value problem (7. x ∈ B1 (0) ⊂ Rn u(x) = 0 .2) in any bounded open domain Ω. Besides this. (7. if f (x) ≡ 0. then AB ≤ 0. In this chapter. iii) Establishing the Existence of Solutions (a) For a linear equation such as (7.176 7 Maximum Principles If (aij (x)) ≥ 0 and ij aij (x)uxi xj (x) > 0 (7. we will introduce various maximum principles.1) no longer implies that the graph of u(x) is concave up. 0) is a saddle point. Unlike its onedimensional counterpart. x2 ) = x2 − x2 . applying the maximum principle for operator (see Theorem 7. and most of them will be used in the method of moving planes in the next chapter. Now.1. then we can choose a = b = 0.1) in an open bounded domain Ω. respectively.2) is unique. then we can compare the solution u with the two functions b a (1 − x2 ) and (1 − x2 ) 2n 2n which satisfy the equation with f (x) replaced by a and b. there are numerous other applications. 1 2 2 One can easily see from the graph of this function that (0. A simple counter example is 1 u(x1 .1 in the following). in which condition (7. An interesting special case is when (aij (x)) is an identity matrix. x ∈ ∂B1 (0). if A ≥ 0 and B ≤ 0. In this case. i) Providing Estimates Consider the boundary value problem − u = f (x) . In other words.1) becomes u > 0. 2n 2n ii) Proving Uniqueness of Solutions In the above example. We will list some below. then u can not achieve its maximum in the interior of the domain.2) If a ≤ f (x) ≤ b in B1 (0). let u(x) = sup φ(x) . and this implies that u ≡ 0. the validity of the maximum principle comes from the simple algebraic fact: For any two n × n matrices A and B.
3). then max u ≤ max u. x ∈ Ω φ(x) = 0 . ¯ Let u be the limit of the sequence {ui }: u(x) = lim ui (x).3) Assume that f (·) is a smooth function with f (·) ≥ 0. Theorem 7. x ∈ Ω.2). In Section 7. we introduce and prove the weak maximum principles. we have u ≤ u1 ≤ u2 ≤ · · · ≤ ui ≤ · · · ≤ u. such that − u ≤ f (u) ≤ f (¯) ≤ − u. To seek a solution of problem (7. then u is a solution of the problem (7. (7. x ∈ ∂Ω.7. ¯ Ω ∂Ω − ui+1 = f (ui ).2 . ¯ Ω ∂Ω x ∈ Ω.1 Introduction 177 where the sup is taken among all the functions that satisfy the corresponding diﬀerential inequality − φ ≤ f (x) . x ∈ Ω u(x) = 0 .3). Let − u1 = f (u) and Then by maximum principle. Suppose that there ¯ exist two functions u(x) ≤ u(x). (b) Now consider the nonlinear problem − u = f (u) . then min u ≥ min u. . Then u is a solution of (7.1.1 (Weak Maximum Principle for − . x ∈ ∂Ω. we use successive approximations. u ¯ These two functions are called sub (or lower) and super (or upper) solutions respectively.) i) If − u(x) ≥ 0. ii) If − u(x) ≤ 0.
Actually this can not happen unless u is constant. x ∈ Ω. Theorem 7. any ξ ∈ Rn and for some δ > 0. x ∈ Ω. i) If Lu ≥ 0 in Ω. Let u be a function in C 2 (Ω) ∩ C(Ω). as we will see in the following. i L=− ij aij (x)Dij + Here we always assume that aij (x). However.1. bounded. . Dij = . ¯ Ω ∂Ω ii) If Lu ≤ 0 in Ω.178 7 Maximum Principles This result can be extended to general uniformly elliptic operators.2 (Weak Maximum Principle for L) Let L be the uniformly elliptic operator deﬁned above. and c(x) are bounded continuous ¯ functions in Ω. i) If Lu(x) ≥ 0. then u attains its maximum value only on ∂Ω unless u is constant. ii) If Lu(x) ≤ 0.1. This maximum principle (as well as the weak one) can also be applied to the case when c(x) ≥ 0 with slight modiﬁcations. Theorem 7.3 (Strong Maximum Principle for L with c(x) ≡ 0) Assume that Ω is an open. then max u ≤ max . then u attains its minimum value only on ∂Ω unless u is constant. then min u ≥ min u. ¯ Ω ∂Ω These Weak Maximum Principles infer that the minima or maxima of u attain at some points on the boundary ∂Ω. they do not exclude the possibility that the minima or maxima may also occur in the interior of Ω. and connected domain in Rn with smooth bound¯ ary ∂Ω. Assume that c(x) ≡ 0. bi (x). Let Di = Deﬁne ∂ ∂2 . Assume that c(x) ≡ 0 in Ω. We say that L is uniformly elliptic if aij (x)ξi ξj ≥ δξ2 for any x ∈ Ω. ∂xi ∂xi ∂xj bi (x)Di + c(x).
and obviously so does − + c(x) if c(x) ≥ 0. where we prove the ‘Maximum Principles Based on Comparisons’.+ c(x)’ can still remain ‘positive’ to ensure the maximum principle. ¯ Let φ be a positive function on Ω satisfying − φ + λ(x)φ ≥ 0.4.7. bounded.4. Actually. then the operator ‘. Let u be a function such that − u + c(x)u ≥ 0 x ∈ Ω u≥0 on ∂Ω. Assume that c(x) ≥ 0 in Ω. We will prove these Theorems in Section 7. in practical problems it occurs frequently that the condition c(x) ≥ 0 can not be met. Let u be a function in C 2 (Ω) ∩ C(Ω). as we will see in the next chapter. In Section 7. x ∈ Ω.4) . we derive the ‘Narrow Region Principle’ and the ‘Decay at Inﬁnity Principle’. (7.1 Introduction 179 Theorem 7.1. Roughly speaking. Theorem 7. as we will see in later sections. − is ‘positive’.5. we all require that c(x) ≥ 0. These principles can be applied very conveniently in the ‘Method of Moving Planes’ to establish the symmetry of solutions for semilinear elliptic equations. if c(x) is not ‘too negative’.1. Also in Section 7. These will be studied in Section 7.5 (Maximum Principle Based on Comparison) Assume that Ω is a bounded domain. as consequences of Theorem 7. ∀x ∈ Ω. Notice that in the previous Theorems.4 (Strong Maximum Principle for L with c(x) ≥ 0) Assume that Ω is an open. maximum principles hold for ‘positive’ operators. ii) If Lu(x) ≤ 0. x ∈ Ω. Do we really need c(x) ≥ 0? The answer is ‘no’. then u can not attain its nonnegative maximum in the interior of Ω unless u is constant. i) If Lu(x) ≥ 0. However.5) (7. then u can not attain its nonpositive minimum in the interior of Ω unless u is constant. and connected domain in Rn with smooth bound¯ ary ∂Ω.5. we establish a maximum principle for integral inequalities. If c(x) > λ(x) .3 by using the Hopf Lemma.1. then u ≥ 0 in Ω.
a Figure 2 b To prove the above observation rigorously. First. Then by the second order Taylor expansion of u around xo . Theorem 7. (7.9) ii) If − u(x) ≤ 0. we consider a perturbation of u: u (x) = u(x) − x2 . ¯ Ω ∂Ω x ∈ Ω. b) is concave downward.2.10). The entirely similar proof also works for part ii).6). then min u ≥ min u.10).7) (7.6) (7. let Ω be the interval (a. To better illustrate the idea. then max u ≤ max u. we ﬁrst carry it out under the stronger assumption that −u (x) > 0.1 (Weak Maximum Principle for − . Now for u only satisfying the weaker condition (7. we must have −u (xo ) ≤ 0. Suppose in contrary to (7. b).7).180 7 Maximum Principles 7. and therefore one can roughly see that the values of u(x) in (a. there is a minimum xo ∈ (a.10) Let m = min∂Ω u. we will deal with one dimensional case and higher dimensional case separately. This implies that the graph of u(x) on (a. Proof. u (x) satisﬁes the stronger condition (7. ¯ Ω ∂Ω (7.) i) If − u(x) ≥ 0.6) becomes u (x) ≤ 0. Here we only present the proof of part i).8) (7. Then condition (7. Obviously. b) of u. and . b) are large or equal to the minimum value of u at the end points (See Figure 2).2 Weak Maximum Principles In this section. we prove the weak maximum principles. x ∈ Ω. This contradicts with the assumption (7. such that u(xo ) < m. for each hence > 0.
we arrive at the desired conclusion (7. To prove the theorem in dimensions higher than one.16) Otherwise. This Lemma tell us that. u(x)dx = Br (xo ) ∂Br (xo ) ∂u dS = rn−1 ∂ν S n−1 ∂u o (x + rω)dSω .7). then for any ro > r > 0. ¯ Ω ∂Ω Now letting →0. if − u(x) > 0. (7. This completes the proof of the Theorem. ∂r (7.12) ii) If − u(x) < 0 for x ∈ Bro (xo ) with some ro > 0. then − u(xo ) ≤ 0. (7. then − u(xo ) ≥ 0. 1 u(x)dS. Now based on this Lemma. letting →0.15).14) We postpone the proof of the Lemma for a moment.2. then the value of u at the center of the small ball Br (xo ) is larger than its average value on the boundary ∂Br (xo ).2. .13) u(xo ) < ∂Br (xo ) ∂Br (xo ) It follows that. the graph of u is locally somewhat concave downward.7). if xo is a minimum of u in Ω. This contradicts with (7. (7.1 (Mean Value Inequality) Let xo be a point in Ω. The Proof of Lemma 7. Now in (7.1.11) u(xo ) > (=) ∂Br (xo ) ∂Br (xo ) It follows that. and ∂Br (xo ) be its boundary. Obviously.2 Weak Maximum Principles 181 min u ≥ min u . we need the following Lemma 7. i) If − u(x) > (=) 0 for x ∈ Bro (xo ) with some ro > 0.16). we arrive at (7. if xo is a maximum of u in Ω. Let Br (xo ) ⊂ Ω be the ball of radius r center at xo .2.17) where dSω is the area element of the n − 1 dimensional unit sphere S n−1 = {ω  ω = 1}.7. (7. we have − u (xo ) ≤ 0. then for any ro > r > 0. 1 u(x)dS. then by Lemma 7. By the Divergence Theorem.1.15) Hence we must have min u ≥ min u Ω ∂Ω (7. if there exists a minimum xo of u in Ω. to prove the theorem. we ﬁrst consider u (x) = u(x) − x2 . Roughly speaking. (7. − u = − u + 2 n > 0.
2. S n−1 (7.20) ii) If Lu ≤ 0 in Ω. ¯ Ω ∂Ω . Furthermore.2. Let L be the uniformly elliptic operator deﬁned above. such that − u(x) > 0.12). This completes the proof of the Lemma.11). Let Di = Deﬁne L=− ij ∂2 ∂ . Assume that c(x) ≡ 0. Theorem 7. there exists a δ > 0. Dij = . then the conclusion of Theorem 7.19) Here we always assume that aij (x). It follows that u(xo ) > 1 rn−1 S n−1  ∂Br (xo ) u(x)dS. This contradicts with the assumption that xo is a minimum of u. any ξ ∈ Rn and for some δ > 0. bi (x).182 7 Maximum Principles If u < 0. then by (7. We say that L is uniformly elliptic if aij (x)ξi ξj ≥ δξ2 for any x ∈ Ω.1. then min u ≥ min u. i) If Lu ≥ 0 in Ω. we suppose in contrary that − u(xo ) > 0. ∂xi ∂xi ∂xj bi (x)Di + c(x). one can see that if we replace − operator by − + c(x) with c(x) ≥ 0.1 is still true (with slight modiﬁcations). (7. Then by the continuity of u. ¯ Ω ∂Ω (7. This veriﬁes (7. then max u ≤ max u.2 (Weak Maximum Principle for L with c(x) ≡ 0). ∀x ∈ Bδ (xo ). Consequently.18) Integrating both sides of (7. From the proof of Theorem 7. and c(x) are bounded continuous ¯ functions in Ω.18) from 0 to r yields u(xo + rω)dSω − u(xo )S n−1  < 0. S n−1 where S n−1  is the area of S n−1 .17).2. ∂ { ∂r u(xo + rω)dSω } < 0.11) holds for any 0 < r < δ. we can replace the Laplace operator − with general uniformly elliptic operators. i aij (x)Dij + (7. To see (7.
However it does not exclude the possibility that the minimum may also attain at some point in the interior of Ω. Let u− (x) = min{u(x). the principle still applies with slight modiﬁcations. In this section.1 (Hopf Lemma). page 327. Assume that Ω is an open. ¯ Ω ∂Ω Interested readers may ﬁnd its proof in many standard books. Theorem 7. 7. then min u ≥ min u− . we prove a weak form of maximum principle. and connected domain in Rn with smooth boundary ∂Ω.3.21) Suppose there is a ball B contained in Ω with a point xo ∈ ∂Ω ∩ ∂B and suppose u(x) > u(xo ). 0} and u+ (x) = max{u(x).3 The Hopf Lemma and Strong Maximum Principles In the previous section. This is called the “Strong Maximum Principle”. ∂ν (7. that is. ¯ Ω ∂Ω ii) If Lu ≤ 0 in Ω. We will prove it by using the following Lemma 7.7. Assume that Lu ≥ 0 in Ω. Let L=− ij aij (x)Dij + i bi (x)Di + c(x) be uniformly elliptic in Ω with c(x) ≡ 0. Let u be a function in ¯ C 2 (Ω) ∩ C(Ω).23) . ∂u(xo ) < 0. the minimum value of u can only be achieved on the boundary unless u is constant. (7. bounded. Assume that c(x) ≥ 0.2. it concludes that the minimum of u attains at some point on the boundary ∂Ω. then max u ≤ max u+ .3 (Weak Maximum Principle for L with c(x) ≥ 0).3 The Hopf Lemma and Strong Maximum Principles 183 For c(x) ≥ 0. ∀x ∈ B. Let L be the uniformly elliptic operator deﬁned above. (7. i) If Lu ≥ 0 in Ω. 0}. say in [Ev].22) Then for any outward directional derivative at xo . In the case Lu ≥ 0. we will show that this can not actually happen.
184
7 Maximum Principles
In the case c(x) ≥ 0, if we require additionally that u(xo ) ≤ 0, then the same conclusion of the Hopf Lemma holds. Proof. Without loss of generality, we may assume that B is centered at the origin with radius r. Deﬁne w(x) = e−αr − e−αx . Consider v(x) = u(x) + w(x) on the set D = B r (xo ) ∩ B (See Figure 3). 2
2 2
r 0 D xo Figure 3
We will choose α and appropriately so that we can apply the Weak Maximum Principle to v(x) and arrive at v(x) ≥ v(xo ) ∀x ∈ D. (7.24)
Ω
We postpone the proof of (7.24) for a moment. Now from (7.24), we have ∂v o (x ) ≤ 0, ∂ν Noticing that ∂w o (x ) > 0 ∂ν We arrive at the desired inequality ∂u o (x ) < 0. ∂ν Now to complete the proof of the Lemma, what left to verify is (7.24). We will carry this out in two steps. First we show that Lv ≥ 0. Hence we can apply the Weak Maximum Principle to conclude that min v ≥ min .
¯ D ∂D
(7.25)
(7.26)
(7.27)
7.3 The Hopf Lemma and Strong Maximum Principles
185
Then we show that the minimum of v on the boundary ∂D is actually attained at xo : v(x) ≥ v(xo ) ∀x ∈ ∂D. (7.28) Obviously, (7.27) and (7.28) imply (7.24). To see (7.26), we directly calculate
n n
Lw = e−αx {4α2
i,j=1 n
2
aij (x)xi xj − 2α
i=1 n
[aii (x) − bi (x)xi ] − c(x)} + c(x)e−αr [aii (x) − bi (x)xi ] − c(x)}
i=1
2
≥ e−αx {4α2
i,j=1
2
aij (x)xi xj − 2α
(7.29)
By the ellipticity assumption, we have r aij (x)xi xj ≥ δx2 ≥ δ( )2 > 0 in D. 2 i,j=1
n
(7.30)
Hence we can choose α suﬃciently large, such that Lw ≥ 0. This, together with the assumption Lu ≥ 0 implies Lv ≥ 0, and (7.27) follows from the Weak Maximum Principle. To verify (7.28), we consider two parts of the boundary ∂D separately. (i) On ∂D ∩ B, since u(x) > u(xo ), there exists a co > 0, such that u(x) ≥ u(xo ) + co . Take small enough such that w ≤ δ on ∂D ∩ B. Hence v(x) ≥ u(xo ) = v(xo ) ∀x ∈ ∂D ∩ B. (ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption u(x) ≥ u(xo ), we have v(x) ≥ v(xo ). This completes the proof of the Lemma. Now we are ready to prove Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0.) Assume that Ω is an open, bounded, and connected domain in Rn with smooth bound¯ ary ∂Ω. Let u be a function in C 2 (Ω) ∩ C(Ω). Assume that c(x) ≡ 0 in Ω. i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum only on ∂Ω unless u is constant. ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum only on ∂Ω unless u is constant. Proof. We prove part i) here. The proof of part ii) is similar. Let m be the minimum value of u in Ω. Set Σ = {x ∈ Ω  u(x) = m}. It is relatively closed in Ω. We show that either Σ is empty or Σ = Ω.
186
7 Maximum Principles
We argue by contradiction. Suppose Σ is a nonempty proper subset of Ω. Then we can ﬁnd an open ball B ⊂ Ω \ Σ with a point on its boundary belonging to Σ. Actually, we can ﬁrst ﬁnd a point p ∈ Ω \ Σ such that d(p, Σ) < d(p, ∂Ω), then increase the radius of a small ball center at p until it hits Σ (before hitting ∂Ω). Let xo be the point at ∂B ∩ Σ. Obviously we have in B Lu ≥ 0 and u(x) > u(xo ). Now we can apply the Hopf Lemma to conclude that the normal outward derivative ∂u o (x ) < 0. (7.31) ∂ν On the other hand, xo is an interior minimum of u in Ω, and we must have Du(xo ) = 0. This contradicts with (7.31) and hence completes the proof of the Theorem. In the case when c(x) ≥ 0, the strong principle still applies with slight modiﬁcations. Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume that Ω is an open, bounded, and connected domain in Rn with smooth bound¯ ary ∂Ω. Let u be a function in C 2 (Ω) ∩ C(Ω). Assume that c(x) ≥ 0 in Ω. i) If Lu(x) ≥ 0, x ∈ Ω, then u can not attain its nonpositive minimum in the interior of Ω unless u is constant. ii) If Lu(x) ≤ 0, x ∈ Ω, then u can not attain its nonnegative maximum in the interior of Ω unless u is constant. Remark 7.3.1 In order that the maximum principle to hold, we assume that the domain Ω be bounded. This is essential, since it guarantees the existence ¯ of maximum and minimum of u in Ω. A simple counter example is when Ω n is the half space {x ∈ R  xn > 0}, and u(x, y) = xn . Obviously, u = 0, but u does not obey the maximum principle: max u ≤ max u.
Ω ∂Ω
Equally important is the nonnegativeness of the coeﬃcient c(x). For example, set Ω = {(x, y) ∈ R2  − π < x < π , − π < y < π }. Then u = cos x cos y 2 2 2 2 satisﬁes − u − 2u = 0, in Ω u = 0, on ∂Ω.
7.3 The Hopf Lemma and Strong Maximum Principles
187
But, obviously, there are some points in Ω at which u < 0. However, if we impose some sign restriction on u, say u ≥ 0, then both conditions can be relaxed. A simple version of such result will be present in the next theorem. Also, as one will see in the next section, c(x) is actually allowed to be negative, but not ‘too negative’. Theorem 7.3.3 (Maximum Principle and Hopf Lemma for not necessarily bounded domain and not necessarily nonnegative c(x). ) Let Ω be a domain in Rn with smooth boundary. Assume that u ∈ C 2 (Ω)∩ ¯ and satisﬁes C(Ω) − u+ u(x) = 0
n i=1 bi (x)Di u
+ c(x)u ≥ 0, u(x) ≥ 0, x ∈ Ω x ∈ ∂Ω
(7.32)
with bounded functions bi (x) and c(x). Then i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and ii) if u ≡0 in Ω, then on ∂Ω, the exterior normal derivative ∂u < 0. ∂ν
To prove the Theorem, we need the following Lemma concerning eigenvalues. Lemma 7.3.2 Let λ1 be the ﬁrst positive eigenvalue of − φ = λ1 φ(x) x ∈ B1 (0) φ(x) = 0 x ∈ ∂B1 (0) (7.33)
with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the ﬁrst λ1 positive eigenvalue of the problem on Bρ (0) is 2 . More precisely, if we let ρ ψ(x) = φ( x ), then ρ λ1 − ψ = 2 ψ(x) x ∈ Bρ (0) (7.34) ρ ψ(x) = 0 x ∈ ∂Bρ (0) The proof is straight forward, and will be left for the readers. The Proof of Theorem 7.3.3. i) Suppose that u = 0 at some point in Ω, but u ≡ 0 on Ω. Let Ω+ = {x ∈ Ω  u(x) > 0}. Then by the regularity assumption on u, Ω+ is an open set with C 2 boundary. Obviously, u(x) = 0 , ∀ x ∈ ∂Ω+ . Let xo be a point on ∂Ω+ , but not on ∂Ω. Then for ρ > 0 suﬃciently small, one can choose a ball Bρ/2 (¯) ⊂ Ω+ with xo as its boundary point. Let ψ be x
188
7 Maximum Principles
the positive eigenfunction of the eigenvalue problem (7.34) on Bρ (xo ) correλ1 x sponding to the eigenvalue 2 . Obviously, Bρ (xo ) completely covers Bρ/2 (¯). ρ u Let v = . Then from (7.32), it is easy to deduce that ψ 0 ≤ − v−2 v·
n
ψ + bi (x)Di v + ψ i=1
n
Di ψ − ψ + + c(x) v ψ ψ i=1
n
≡− v+
i=1
˜i (x)Di v + c(x)v. b ˜
Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on B1 , then n 1 Di φ λ1 + c(x). c(x) = 2 + ˜ ρ ρ i=1 φ This allows us to choose ρ suﬃciently small so that c(x) ≥ 0. Now we can ˜ apply Hopf Lemma to conclude that, the outward normal derivative at the boundary point xo of Bρ/2 (¯), x ∂v o (x ) < 0, ∂ν because v(x) > 0 ∀x ∈ Bρ/2 (¯) and v(xo ) = x u(xo ) = 0. ψ(xo ) (7.35)
On the other hand, since xo is also a minimum of v in the interior of Ω, we must have v(xo ) = 0. This contradicts with (7.35) and hence proves part i) of the Theorem. ii) The proof goes almost the same as in part i) except we consider the point xo on ∂Ω and the ball Bρ/2 (¯) is in Ω with xo ∈ ∂Bρ/2 (¯). Then for x x the outward normal derivative of u, we have ∂u o ∂v o ∂ψ o ∂v o (x ) = (x )ψ(xo ) + v(xo ) (x ) = (x )ψ(xo ) < 0. ∂ν ∂ν ∂ν ∂ν Here we have used a wellknown fact that the eigenfunction ψ on Bρ (xo ) is radially symmetric about the center xo , and hence ψ(xo ) = 0. This completes the proof of the Theorem.
7.4 Maximum Principles Based on Comparisons
In the previous section, we show that if (− + c(x))u ≥ 0, then the maximum principle, i.e. (7.20), applies. There, we required c(x) ≥ 0. We can think −
4. we have − v(xo ) ≤ 0 and v(xo ) = 0.37). That is. and taking into account that u(xo ) < 0. we can establish the following more general maximum principle based on comparison. − u+ φ u(xo ) ≥ − u + λ(xo )u(xo ) φ > − u + c(xo )u(xo ) ≥ 0. c(x) need not be nonnegative.39) (7. at point xo . and (7.36) with λ = λ1 obey Maximum Principle. it is easy to verify that φ 1 φ − v=2 v· + (− u + u). Then since φ(x) > 0.39). Let xo ∈ Ω be a minimum of v(x). For c(x) ≥ 0. We argue by contradiction.41). We notice that the eigenfunction φ corresponding to the ﬁrst positive eigenvalue λ1 is either positive or negative in Ω. Let φ be a positive ¯ function on Ω satisfying − φ + λ(x)φ ≥ 0. This suggests that. ∀x ∈ Ω. by (7. .7. let us consider the Dirichlet eigenvalue problem of − : − φ − λφ(x) = 0 x ∈ Ω (7. (7. that is. Let v(x) = φ(x) in Ω. (7. (7. we must have v(x) < 0 somewhere Ω. and the maximum principle holds for any ‘positive’ operators.4 Maximum Principles Based on Comparisons 189 as a ‘positive’ operator. then u ≥ 0 in Ω. it is allowed to be as negative as −λ1 . Suppose that u(x) < 0 somewhere in u(x) .37) Assume that u is a solution of − u + c(x)u ≥ 0 x ∈ Ω u≥0 on ∂Ω. we have.38). to ensure the Maximum Principle. Theorem 7. (7. If c(x) > λ(x) . Do we really need c(x) ≥ 0 here? To answer the question.40) and (7.40) φ φ φ On one hand. By a direct calculation.1 Assume that Ω is a bounded domain. the solutions of (7. and thus completes the proof of the Theorem. Proof. − +c(x) is also ‘positive’. the maxima or minima of φ are attained only on the boundary ∂Ω.41) (7. This is an obvious contradiction with (7. More precisely.38) While on the other hand.36) φ(x) = 0 x ∈ ∂Ω. since xo is a minimum.
so that the Maximum Principle Based on Comparison applies: i) Narrow regions.1.1. we assume further that lim inf u(x) ≥ 0. one can choose some positive 1 number q < n − 2.4. When Ω = {x  0 < x1 < l} is a narrow region with width l as shown: We can choose φ(x) = sin( x1l+ ). and ii) c(x) decays fast enough at ∞. (7. or at points where u is negative.) If u satisﬁes (7.e. Now condition (7.4.4.37) and (7. besides condition (7.38) with bounded function c(x).39).2 If Ω is an unbounded domain.42) guarantees that the minima of v(x) do not “leak” away to inﬁnity.39). In dimension n ≥ 3. where λ(x) = l −1 l2 can be very negative when l is suﬃciently small. i) Narrow Regions. i. For convenience in applications. Then the rest of the arguments are exactly the same as in the proof of Theorem 7. Corollary 7. Then it is easy Ω 0 l x1 to see that − φ = ( 1 )2 φ.4. Still consider the same v(x) as in the proof of Theorem 7. provided lim inf x→∞ u(x) ≥ 0. Proof. we provide two typical situations where there exist such functions φ and c(x) satisfying condition (7.190 7 Maximum Principles Remark 7.42) x→∞ Then u ≥ 0 in Ω.1 From the proof.4. c(x) > λ(x) = 2 . and let φ(x) = xq Then it is easy to verify that .4. Hence we can directly apply Theorem l 7. one can see that conditions (7.1 (Narrow Region Principle. ii) Decay at Inﬁnity.39) are required only at the points where v attains its minimum.38). c(x) −1 satisﬁes (7.37) and (7.1 to conclude that u ≥ 0 in Ω. The Theorem is also valid on an unbounded domains if u is “nonnegative” at inﬁnity: Theorem 7. Then when the width l of the region Ω is suﬃciently small.
Assume that the solution u decays near inﬁnity at the rate of (7.2 From Remark 7. Deﬁne the integral operator T by (T f )(x) = Ω K(x.1 to derive Corollary 7. Assume K(x. ∀(x.5 A Maximum Principle for Integral Equations 191 − φ= q(n − 2 − q) φ.4. one can see that actually condition (7. x2 (7. such that c(x) > − Suppose u(x) lim = 0. and for the region Ω as stated in Corollary 7.4. for example.38) on ¯ Ω.4.2 (Decay at Inﬁnity) Assume there exist R > 0. x→∞ xq C Let Ω be a region containing in BR (0) ≡ Rn \ BR (0). we can adapt the proof of Theorem 7.1. 7. y) ∈ Ω × Ω.2. ∀x > R. If further assume that u ∂Ω ≥ 0. If u satisﬁes (7. Remark 7. x2 In the case c(x) decays fast enough near inﬁnity.2 that u ≥ 0 in the entire region Ω. c(x) satisﬁes (7. then u(x) ≥ 0 for all x ∈ Ω. then we can derive from Corollary 7. Let c(x) = −u(x)p−1 . − u − up−1 u = 0 x ∈ Rn .3 Although Theorem 7.7.4. we introduce a maximum principle for integral equations. Let Ω be a region in Rn .4.4. q(n − 2 − q) .43) is only required at points where u is negative. they can be easily applied to a nonlinear equation.43) in Ω. y) ≥ 0.1 as well as its Corollaries are stated in linear forms.4.43) Remark 7.5 A Maximum Principle for Integral Equations In this section.4. . y)f (y)dy. Then for R suﬃciently large. may or may not be bounded.44) 1 with xs s(p − 1) > 2.
and hence f (x) ≤ 0. f − (x) = min{0. (7. Recall that in Section 5. which implies that f + (x) ≡ 0. .192 7 Maximum Principles Let · be a norm in a Banach space satisfying f ≤ g .1 Suppose that Tg ≤ θ g If f (x) ≤ (T f )(x).49) holds for any x ∈ Ω. f + (x) = max{0. Parallel to this. then f (x) ≤ 0 for all x ∈ Ω.45) There are many norms that satisfy (7. 0 ≤ f + (x) ≤ (T f )(x) = (T f )+ (x) + (T f )− (x) ≤ (T f )+ (x) (7. we must have f + = 0. and C(Ω) norm. Therefore. (7. Since θ < 1. f (x)}. L∞ (Ω) norm. Proof. Theorem 7. Then it is easy to see that for any x ∈ Ω+ . It follows from (7.47) with some 0 < θ < 1 ∀g.46) Ω+ = {x ∈ Ω  f (x) > 0}.48) (7. we have similar applications for integral equations. such as Lp (Ω) norms. we have f + (x) = 0 and (T f )+ (x) ≥ 0 by the deﬁnitions. Deﬁne and (7. Let α be a number between 0 and n. f (x)}.5.49) that f + ≤ (T f )+ ≤ θ f + . x − yn−α Assume that f satisﬁes the integral equation f = Tf in Ω. Deﬁne (T f )(x) = Ω 1 c(y)f (y)dy. whenever 0 ≤ f (x) ≤ g(x) in Ω. (7. after introducing the “Maximum Principle Based on Comparison” for Partial Diﬀerential Equations. This completes the proof of the Theorem.49) While in the rest of the region Ω. we explained how it can be applied to the situations of “Narrow Regions” and “Decay at Inﬁnity”.45).
we can make the integral Ω c(y)τ dy as small as we wish. o in the As an immediate application of this “Maximum Principle”.1 that f + (x) ≡ 0 . such that if µ(Ω ∩ BRo (0)) ≤ o. . (7.1 Assume that c(y) ≥ 0 and c(y) ∈ Lτ (Rn ). by Lebesgue integral theory. If we have some right integrability condition on c(y).7. (7.1 One can see that the condition µ(Ω ∩ BRo (0)) ≤ Corollary is satisﬁed in the following two situations. We will use the method of moving planes to obtain the radial symmetry and monotonicity of the positive solutions.51) for some p. the complement of the ball BR (0).50) and (7.5. with suﬃciently large R.5 A Maximum Principle for Integral Equations 193 or more generally. then we can derived. Let f ∈ Lp (Rn ) be a nonnegative function satisfying (7. ∀ x ∈ Ω. Remark 7. we study an integral equation in the next section.50) ≤ C c(y) Lτ (Ω) f Lp (Ω) . a maximum principle that will be applied to “Narrow Regions” and “Near Inﬁnity”. from Theorem 7.1. Since c(y) ∈ Lτ (Rn ). This completes the proof of the Corollary.51). when the measure of the intersection of Ω with BRo (0) is suﬃciently small. C ii) Near Inﬁnity: Say.5. and thus to obtain C c(y) Lτ (Ω) < 1. Now it follows from Theorem 7. Proof.5. we have Corollary 7. Ω = BR (0). Then there exist positive numbers Ro and o depending on c(y) only. i) Narrow Regions: The width of Ω is very small. τ > 1. More precisely. where µ(D) is the measure of the set D. the integral inequality f ≤ Tf Further assume that Tf Lp (Ω) in Ω.5. then f + ≡ 0 in Ω.
.
and to prove nonexistence of solutions.2 Applications of Maximum Principles Based on Comparison 8. Gidas. there have seen many signiﬁcant contributions.3.1 The Background 8. and many others. Particularly for semilinear partial diﬀerential equations.5 Method of Moving Planes in an Integral Form and Symmetry of Solutions for Integral Equations The Method of Moving Planes (MMP) was invented by the Soviet mathematician Alexanderoﬀ in the early 1950s.8 Methods of Moving Planes and of Moving Spheres 8.2 Symmetry of Solutions of −∆u = up in Rn . They can also be used to obtain a priori estimates. it was further developed by Serrin [Se]. Ni.1 Symmetry of Solutions on a Unit Ball 8. . 8.3.4 8. This method has been applied to free boundary problems.4. The Method of Moving Planes and its variant–the Method of Moving Spheres–have become powerful tools in establishing symmetries and monotonicity for solutions of partial diﬀerential equations. We refer to the paper of Frenkel [F] for more descriptions on the method. and other problems.1 The Background 8. Decades later. to derive useful inequalities.2. Gidas. and Spruck [CGS]. semilinear partial diﬀerential equations.1 Outline of the Method of Moving Planes 8. Caﬀarelli.4.2 The A Priori Estimates Method of Moving Spheres 8. Li [Li].2.2.3 Method of Moving Planes in a Local Way 8. Chen and Li [CL] [CL1]. Chang and Yang [CY].3 Symmetry of Solutions of −∆u = eu in R2 8.2 Necessary Conditions 8. and Nirenberg [GNN].
even if the coeﬃcients of the equation are not ‘good. − u = u n−2 (x) x ∈ Rn n ≥ 3.2. the Maximum Principles Base on Comparison will play a major role. and the advantage is that each time we only need to use the maximum principle in a very narrow region. the Maximum Principles introduced in the previous chapter are applied in innovative ways. and under certain conditions. We recommend the readers study this part carefully. We will establish symmetry. During the process of Moving Planes.196 8 Methods of Moving Planes and of Moving Spheres From the previous chapter. . we will apply the Method of Moving Planes and their variant–the Method of Moving Spheres– to study semilinear elliptic equations and integral equations. one can change a differential equation into an integral equation. It is wellknown that by using a Green’s function. In Section 8. and − u = eu(x) x ∈ R2 .3. a priori estimates. Instead of using local properties (say diﬀerentiability) of a diﬀerential equation. From the previous chapter. The MMP greatly enhances the power of maximum principles. n+2 During the moving of planes. they employed the global properties of the solutions of integral equations. they are equivalent. we also introduced a form of maximum principle at inﬁnity to the MMP and therefore simpliﬁed many proofs and extended the results in more natural ways. In the authors’ research practice. we will establish radial symmetry and monotonicity for the solutions of the following three semilinear elliptic problems − u = f (u) x ∈ B1 (0) u=0 on ∂B1 (0).’ the maximum principle can still be applied. To investigate the symmetry and monotonicity of integral equations. one can see that. the Narrow Region Principle and the Decay at Inﬁnity Principle will be used repeatedly in dealing with the three examples. Roughly speaking. n+2 in M. so that they will be able to apply it to their own research. the MMP is a continuous way of repeated applications of maximum principles. created an integral form of MMP. the maximum principle has been used inﬁnitely many times. During this process. in such a narrow region. monotonicity. together with Ou. and nonexistence of the solutions. In particular. we have seen the beauty and power of maximum principles. In Section 8. the authors. we will apply the Method of Moving Planes in a ‘local way’ to obtain a priori estimates on the solutions of the prescribing scalar curvature equation on a compact Riemannian manifold M − 4(n − 1) n−2 ou + Ro (x)u = R(x)u n−2 . In this chapter.
5. In this situation. Due to the diﬀerent nature of the integral equation.4. Let u be a positive solution of a certain partial diﬀerential equation. This provides a stronger necessary condition than the well known KazdanWarner condition. n−α x − y for any real number α between 0 and n.e. Since the Method of Moving Planes can not be applied to the solution u directly. and it also becomes a suﬃcient condition for the existence of solutions in most cases. the traditional Method of Moving Planes does not work. the traditional blowingup analysis fails near the set where R(x) = 0. We prove that if the function R(x) is rotationally symmetric and monotone in the region where it is positive.5. let Tλ = {x = (x1 . we use the Method of Moving Spheres to prove a nonexistence of solutions for the prescribing Gaussian and scalar curvature equations − u + 2 = R(x)eu . We will use the Method of Moving Planes in an innovative way to obtain a priori estimates. Hence we exploit its global property and develop a new idea–the Integral Form of the Method of Moving Planes to obtain the symmetry and monotonicity of the solutions. · · · . respectively. xn ) ∈ Rn  x1 = λ}.8. This is a plane perpendicular to x1 axis and the plane that we will move with. x2 .1 Outline of the Method of Moving Planes To outline how the Method of Moving Planes works. If we want to prove that it is symmetric and monotone in a given direction. we take the Euclidian space Rn for an example. For any real number λ. i. and − u+ n+2 n−2 n(n − 2) u= R(x)u n−2 4 4(n − 1) on S 2 and on S n (n ≥ 3). In Section 8. we study the integral equation in Rn u(x) = Rn n+α 1 u n−α (y)dy. The Maximum Principle for Integral Equations established in Chapter 7 is combined with the estimates of various integral norms to carry on the moving of planes. 8. In Section 8. we may assign that direction as x1 axis. Let Σλ denote the region to the left of the plane. we introduce an auxiliary function to circumvent this diﬃculty. as an application of the maximum principle for integral equations introduced in Section 7. It arises as an EulerLagrange equation for a functional in the context of the HardyLittlewoodSobolev inequalities. . then both equations admit no solution.1 Outline of the Method of Moving Planes 197 We alow the function R(x) to change signs.
such that wλo (x) ≡ 0 . we have wλ (x) ≥ 0 . we generally go through the following two steps. − and hence Σλ is empty. While for integral equations. we deﬁne λo = sup{λ  wλ (x) ≥ 0. We prove that u is symmetric about the plane Tλo . xn ). · · · . Step 1. ∀x ∈ Σλ . that is wλo (x) ≡ 0 for all x ∈ Σλo . x Σλ Tλ xλ Figure 1 We compare the values of the solution u at point x and xλ . Let xλ = (2λ − x1 . let wλ (x) = u(xλ ) − u(x).198 8 Methods of Moving Planes and of Moving Spheres Σλ = {x ∈ Rn  x1 < λ}. and this contradicts with the deﬁnition of λo . To this end. x2 . xn ) about the plane Tλ (See Figure 1). there exists some λo . and for partial diﬀerential equations. Step 2. This is usually carried out by a contradiction argument. (8. such that (8.1) is violated. In order to show that. we use a diﬀerent idea.1) holds.1) holds. ∀x ∈ Σλ }. We continuously move the plane this way up to its limiting position. From the above illustration. 1 . one can see that the key to the Method of Moving Planes is to establish inequality (8. and we want to show that u is symmetric about some plane Tλo . More precisely. ∀x ∈ Σλo . the reﬂection of the point x = (x1 . and move the plane Tλ along the x1 direction to the right as long as the inequality (8. maximum principles are powerful tools for this task. then there would exist λ > λo .1). We ﬁrst show that for λ suﬃciently negative. We show that this norm must be zero. We estimate a certain norm of wλ on the set − Σλ = {x ∈ Σλ  wλ (x) < 0} where the inequality (8. · · · .1) Then we are able to start oﬀ from this neighborhood of x1 = −∞. We show that if wλo (x) ≡ 0.
The essence of the method of moving planes is the application of various maximum principles.2. let xλ be the reﬂection of the point x about the plane Tλ . let Tλ = {x  x1 = λ} be the plane perpendicular to the x1 axis. we study some semilinear elliptic equations. 8.2) for some constant Co . x2 . and wλ (x) = uλ (x) − u(x). Proof.2. In the proof of each theorem. and Nirenberg [GNN1]: Theorem 8.2 Applications of the Maximum Principles Based on Comparisons 199 8. (8. Then every positive solution u of − u = f (u) x ∈ B1 (0) u=0 on ∂B1 (0). · · · .1 Assume that f (·) is a Lipschitz continuous function such that f (p) − f (q) ≤ Co p − q (8. is radially symmetric and monotone decreasing about the origin. uλ (x) = u(xλ ).3) x xλ Σλ Tλ Figure 4 Let We compare the values of the solution u on Σλ with those on its reﬂection. xλ = (2λ − x1 . As shown on Figure 4 below. We will apply the method of moving planes to establish the symmetry of the solutions. Let Σλ be the part of B1 (0) which is on the left of the plane Tλ .8.1 Symmetry of Solutions in a Unit Ball We ﬁrst begin with an elegant result of Gidas. the readers will see vividly how the Maximum Principles Based on Comparisons are applied to narrow regions and to solutions with decay at inﬁnity. Ni. 0 x1 . more precisely. For each x ∈ Σλ .2 Applications of the Maximum Principles Based on Comparisons In this section. xn ).
More precisely. while on the curve part of ∂Σλ . ∀x ∈ Σλ . −λ. for λ close to −1.5) Otherwise. by moving this way. wλ (x) ≥ 0.6) (8. Σλ is a narrow (in x1 direction) region.5) holds. (8. we will show that the plane can be further moved to the right ¯ by a small distance. We consider the function wλ+δ (x) on the narrow region (See Figure 5): ¯ do Ωδ = Σλ+δ ∩ {x  x1 > λ − }. wλ (x) = 0. for λ suﬃciently close to −1. x ∈ Σλ . We show that. on this part of ∂Σλ . we ﬁrst claim that ¯ λ ≥ 0. wλ (x) ≥ 0. one can verify that wλ satisﬁes wλ + C(x.) Now we can apply the “Narrow Region Principle” ( Corollary 7. we ¯ ¯ deduce that wλ (x) > 0 ¯ in the interior of Σλ . ¯ 2 . ∀x ∈ Σλ }. C(x.4. and on ∂Σλ . where u(x) > 0 by assumption. f (uλ (x)) − f (u(x)) . uλ (x) − u(x) (8. It follows ¯ that. By the Strong Maximum Principle. that is. λ) = and by condition (8.200 8 Methods of Moving Planes and of Moving Spheres Then it is easy to see that uλ satisﬁes the same equation as u does. This provides a starting point for us to move the plane Tλ Step 2: Move the Plane to Its Right Limit.2). ( On Tλ . such that δ ≤ do ¯ ¯ 2 .1 ) to conclude that. where C(x. wλ (x) > 0 since u > 0 in B1 (0). Applying the Mean Value Theorem to f (u).4) Step 1: Start Moving the Plane. then the image of the curved surface part of ∂Σ ¯ under the fact. We start from the near left end of the region. ¯ Let do be the maximum width of narrow regions that we can apply the “Narrow Region Principle”. We now increase the value of λ continuously. λ) ≤ Co . wλ (x) > 0. λ)wλ (x) = 0. let ¯ λ = sup{λ  wλ (x) ≥ 0. In ¯ < 0. the plane will not stop before hitting the origin. we move the plane Tλ to the right as long as the inequality (8. Choose a small positive number δ. and this would contradicts with the deﬁnition of λ. Obviously. if λ λ reﬂection about Tλ lies inside B1 (0).
¯ ¯ 2 Since wλ is continuous in λ. ¯ And therefore. λ + δ)wλ+δ = 0 x ∈ Ωδ ¯ ¯ wλ+δ (x) ≥ 0 x ∈ ∂Ωδ . ¯ ¯ 2 Hence in particular. there exists a constant co > 0. x ) . ∀x ∈ Σλ+δ . (8. we use continuity argument. for δ suﬃciently small.8.8) 1 . ¯ (8. 2 Notice that on this part. To see that it is also true ¯ ¯ on the rest of the boundary where x1 = λ − do . the boundary condition in (8. wλ+δ (x) ≥ 0.7) holds for such small δ. More ¯ precisely and more generally.6). (8. such that wλ (x) ≥ co . we still have wλ+δ (x) ≥ 0 . ¯ ¯ ¯ This contradicts with the deﬁnition of λ and thus establishes (8.2 Applications of the Maximum Principles Based on Comparisons 201 It satisﬁes ¯ wλ+δ + C(x. To see the boundary condition.7) Σλ− do Ωδ 0 ¯ 2 x1 Tλ− do Tλ+δ ¯ ¯ 2 Figure 5 The equation is obvious. ∀x1 ≥ 0. we ﬁrst notice ¯ that it is satisﬁed on the two curved parts and one ﬂat part where x1 = λ + δ of the boundary ∂Ωδ due to the deﬁnition of wλ+δ . ∀x ∈ Σλ− do . ∀x ∈ Σλ− do . ∀x ∈ Ωδ .6) implies that u(−x1 . Now we can apply the “Narrow Region Principle” to conclude that wλ+δ (x) ≥ 0. x ) ≤ u(x1 . wλ is positive and bounded away from 0.
and hence assumes the form u(x) = [n(n − 2)λ2 ] n−2 4 n−2 2 (λ2 + x − xo 2 ) for some λ > 0 and xo ∈ Rn . Schoen. namely (8. an interesting results is the symmetry of the positive solutions of the semilinear elliptic equation: u + up = 0 .202 8 Methods of Moving Planes and of Moving Spheres where x = (x2 .10) u = O( 1 ). we conclude that u(x) must be radially symmetric about the origin. x ∈ Rn . xn ). Gidas and Spruck [GS] showed that the only nonnegative solution of (8. They proved n+2 Theorem 8. we obtain u(−x1 .2. as was pointed out by R. We then start from λ close to 1 and move the plane Tλ toward the left. · · · .10) is identically zero.2 Symmetry of Solutions of − u = up in Rn In an elegant paper of Gidas. ∀x1 ≥ 0. This completes the proof. This uniqueness result.9). xn−2 are radially symmetric and monotone decreasing about some point.2. 8. a simpler and more elementary proof was given for almost the same result: n+2 Theorem 8. Also the monotonicity easily follows from the argument. n ≥ 3. is in fact equivalent to the geometric result due to Obata [O]: A Riemannian metric on S n which is conformal to the standard one and having the same constant scalar curvature is the pull back of the standard one under a conformal map of S n to itself. Gidas and Spruck [CGS] removed the decay assumption u = O(x2−n ) and proved the same result. all the positive solutions of (8. Ni. In the case that n+2 1 ≤ p < n−2 . x ) .3 i) For p = n−2 . (8. Recently.8) and (8.9) Combining two opposite inequalities (8. and hence assume the form u(x) = [n(n − 2)λ2 ] (λ2 + x − n−2 4 n−2 2 xo 2 ) for λ > 0 and for some xo ∈ Rn .2. Since we can place x1 axis in any direction. in the authors paper [CL1].10) with reasonable behavior at inﬁnity. Caﬀarelli. . and Nirenberg [2].10) must be radially symmetric and monotone decreasing about some point. we see that u(x) is symmetric about the plane T0 . x ) ≥ u(x1 . Then. Similarly. every positive C 2 solution of (8.2 For p = n−2 .
for λ suﬃciently negative. wλ (x) ≥ 0.11) for λ suﬃciently negative. By the deﬁnition of uλ . xn ). ∀x ∈ Σλo .1). Since x1 axis can be chosen in any direction. the only nonnegative solution of (8. λ (8. (See the previous Figure 1. we will ﬁrst present the proof of Theorem 8. Then by the Mean Value Theorem.2 is actually included in the more general proof of the ﬁrst part of Theorem 8. And the readers will see vividly. To verify (8.2. we apply the maximum principle n+2 to wλ (x).) Let uλ (x) = u(xλ ). x2 . Deﬁne Σλ = {x = (x1 .2. Recalling the “Maximum Principle Based on Comparison” (Theorem 7. uλ satisﬁes the same equation as u does. we see here c(x) = . (8. Step 1.2. in the third step. · · · .2. Prepare to Move the Plane from Near −∞.4. we start from the very left end of our region Rn . This implies that the solution u is symmetric and monotone decreasing about the plane Tλo . · · · .2 (mostly in our own idea). xn ) ∈ Rn  x1 < λ}. and wλ (x) = uλ (x) − u(x). The proof of Theorem 8. In the ﬁrst step. to better illustrate the idea. Proof of Theorem 8. We will show that.12) where ψλ (x) is some number between uλ (x) and u(x). using the uniqueness theory in Ordinary Diﬀerential Equations. we will move our plane Tλ in the the x1 direction toward the right as long as inequality (8.8. i. However.11) holds. We will show that wλo (x) ≡ 0. that is near x1 = −∞.11) Here. how the “Decay at Inﬁnity” principle is applied here. xλ = (2λ − x1 . it is easy to see that. we will show that the solutions can only assume the given form. it is easy to verify that τ − wλ = uτ (x) − uτ (x) = τ ψλ−1 (x)wλ (x).e. The plane will stop at some limiting position. Finally. we conclude that u must be radially symmetric and monotone about some point.2. The proof consists of three steps. the “Decay at Inﬁnity” is applied. Then in the second step.2 Applications of the Maximum Principles Based on Comparisons 203 ii) For p < n+2 n−2 .10) is identically zero.3. say at λ = λo . Tλ = ∂Σλ and let xλ be the reﬂection point of x about the plane Tλ . ∀x ∈ Σλ . Write τ = n−2 .
=O 1 ˜4 x .4. as long as the inequality (8. ∀x ∈ Σλ }. Apparently at these points.13) Otherwise.13) must hold. Here the power of Step 2. Now we can move the plane Tλ toward right. and hence (8.14) We show that the plane Tλo can still be moved a small distance to the right. it can not be applied in this situation. i. 1 is greater than two. Therefore. More precisely. due to the asymptotic behavior of u near x1 = +∞. we have wλo +δ (x) ≥ 0 .3. By the “Decay at Inﬁnity” argument (Corollary 7. by the “Strong Maximum Principle” on unbounded domains ( see Theorem 7.15). x x and hence 0 ≤ uλ (˜) ≤ ψλ (˜) ≤ u(˜). ∀x ∈ Σλo +δ . increase the value of λ. for all 0 < δ < δo . x x x By the decay assumption of the solution u(x) = O we derive immediately that τ ψλ−1 (˜) = O ( x 4 1 n−2 ) ˜ x 1 xn−2 . it suﬃce τ to check the decay rate of ψλ−1 (x). λo < +∞.11) holds.e. Obviously. we use the “Narrow Region Principle ” to derive (8. (8.. uλ (˜) < u(˜). (8. ∀x ∈ Σλo . We claim that wλo (x) ≡ 0. which is what (actually more ˜ x than) we desire for.3 ). we must have (8. we have wλo (x) > 0 in the interior of Σλo . (8.204 8 Methods of Moving Planes and of Moving Spheres τ −τ ψλ−1 (x). only at the points x ˜ where wλ is negative (see Remark 7. there exists a δo > 0 such that. Deﬁne λo = sup{λ  wλ (x) ≥ 0.15) This would contradict with the deﬁnition of λo .4. Recall that in the last section. Move the Plane to the Limiting Position to Derive Symmetry. we can apply the “Maximum Principle Based on Comparison” to conclude that for λ suﬃciently negative ( ˜ suﬃciently x large ). because . Unfortunately.11). and more precisely. This completes the preparation for the moving of planes.2).2 ).
we have.1 There exists a Ro > 0 ( independent of λ). we already know wλo ≥ 0.18). Then.2 Applications of the Maximum Principles Based on Comparisons 205 the “narrow region” here is unbounded. To overcome this diﬃculty. By Lemma 8. such that if xo is a minimum point of wλ and wλ (xo ) < 0. to verify (8. therefore. xo must be on the boundary of Σλo . then xo  < Ro .2.1. what left is to prove the Lemma. Consequently.3). It ¯ ¯ follows that wλo (xo ) = wλo (xo )φ(xo ) + wλo (xo ) φ = 0 + 0 = 0. ¯ ¯ We postpone the proof of the Lemma for a moment. Then by the Hopf Lemma (see Theorem 7. the outward normal derivative ∂wλo o (x ) < 0. ∀ i = 1. ¯ ¯ i→∞ However.15) is violated for any δ > 0. ¯ ¯ (8. Now.16) wλo (xo ) = lim wλo +δi (xi ) ≤ 0. wλo (xo ) = lim ¯ i→ ∞ and wλo +δi (xi ) = 0 ¯ (8. we have xi  ≤ Ro . we must have wλo (xo ) = 0.8. Now suppose that (8. by (8. since wλo (xo ) = 0. . and we are not able to guarantee that wλo is bounded away from 0 on the left boundary of the “narrow region”.15).17) φ φ + − wλ + wλ φ φ 1 φ (8. Then there exists a sequence of numbers {δi } tending to 0 and for each i. ∂ν This contradicts with (8. · · · .18) On the other hand. 2. the corresponding negative minimum xi of wλo +δi . there is a subsequence of {xi } (still denoted by {xi }) which converges to some point xo ∈ Rn . φ(x) 1 with 0 < q < n − 2.2. we introduce a new function wλ (x) = ¯ where φ(x) = wλ (x) .14). xq Then it is a straight forward calculation to verify that − wλ = 2 wλ · ¯ ¯ We have Lemma 8.3.
Since the equation is invariant under translation.16).19) contradicts with (8.3. if xo  is suﬃciently large. Proof of Theorem 8. by the asymptotic behavior of u at inﬁnity. as we argued in Step 1. this is the only solution. we conclude that every positive solution of (8. Then they satisﬁes the following ordinary diﬀerential equation n−1 −u (r) − r u (r) = uτ u (0) = 0 (n−2)/4 u(0) = [n(n−2)] n−2 λ 2 is a solution. we may assume that the solutions are symmetric about the origin. we show that the positive solutions of (8. and n−2 (λ2 + r2 ) 2 by the uniqueness of the ODE problem.1. Assume that xo is a negative minimum of wλ .10) must be radially symmetric and monotone decreasing about some point in Rn . c(xo ) := −τ ψ τ −1 (xo ) > − It follows from (8. without loss of generality. Step 3.10) must assume the form for some λ > 0. In the previous two steps. i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8. and hence completes the proof of the Lemma.19) On the other hand. φ φ(xo ) q(n − 2 − q) ≡ . v(x) = xn−2 x2 .2.2. ¯ ¯ (8. ¯ Then − wλ (xo ) ≤ 0 and wλ (xo ) = 0. The main diﬀerence is that we have no decay assumption on the solution u at inﬁnity. This completes the proof of the Theorem. So we ﬁrst make a Kelvin transform to deﬁne a new function 1 x u( ). together with (8. hence the method of moving planes can not be applied directly to u.206 8 Methods of Moving Planes and of Moving Spheres Proof of Lemma 8.2. Therefore.2. o x φ(xo ) This.12) that − wλ + φ wλ (xo ) > 0. One can verify that u(r) = u(x) = [n(n − 2)λ2 ] n−2 4 n−2 2 [n(n − 2)λ2 ] n−2 4 (λ2 + x − xo 2 ) for λ > 0 and some xo ∈ Rn .
We show that. Step 1. we see that vλ satisﬁes the same equation as v does. for λ suﬃciently negative. Hence we can apply the “Decay at Inﬁnity” to wλ (x).2. It is easy to verify that v satisﬁes the same equation as u does. ∀x ∈ Σλ . u is also symmetric and monotone decreasing about the origin. correspondingly wλ may be singular at the point xλ = (2λ. and show that v is radially symmetric and monotone decreasing about some point.20) We will apply the method of moving planes to the function v. τ ψλ−1 (xo ) ∼ ( 1 1 )τ −1 = o 4 . except at the origin: Then obviously. And in our proof. (8. at a negative minimum point xo of wλ . Hence instead of on Σλ . we treat the singular point carefully. as mentioned in Corollary 7. so that we can carry on the method of moving planes to the end to show the ˜ existence of a λo such that wλo (x) ≡ 0 for x ∈ Σλo and v is strictly increasing ˜λ . and we have the desired decay rate for xo  τ c(x) := −τ ψλ−1 (x).4. we consider wλ ˜ on Σλ = Σλ \ {xλ }. If the center point is the origin.21) we derive immediately that. where ψλ (x) is some number between vλ (x) and v(x). v(x) has the desired decay rate v + v τ (x) = 0 x ∈ Rn \ {0}. The diﬀerence here is that. wλ has a singularity the power of .8.2 Applications of the Maximum Principles Based on Comparisons 207 1 at inﬁnity.2. Each time we show that the points of interest are away from the singularities. Then the same argument in the proof of Theorem 8. By the asymptotic behavior v(x) ∼ 1 . but has a xn−2 possible singularity at the origin. n ≥ 3.2 would imply that u is symmetric and monotone decreasing about some point. then v has no singularity at the origin. 0. then by the deﬁnition of v. we have ˜ wλ (x) ≥ 0.2. wλ (x) = vλ (x) − v(x).2. Because v(x) may be singular at the origin. 0). xo n−2 x  1 is greater than two. and τ − wλ = τ ψλ−1 (x)wλ (x). · · · . and hence u has the desired decay at inﬁnity. in the x1 direction in Σ o As in the proof of Theorem 8. If the center point of v is not the origin. Deﬁne vλ (x) = v(xλ ) . xn−2 (8.
2 one can easily arrive at a contradiction.208 8 Methods of Moving Planes and of Moving Spheres at xλ .2. ∀x ∈ Σλo . Step 2.2. ¯ ¯ ˜ Fact (a) can be shown by noting that wλo (x) > 0 on Σλo and wλo ≤ 0. obviously wλ (x) ≥ 0 on B1 (xλ ) \ {xλ }. Now through a similar argument as in the proof of Theorem 8. we ﬁrst note that for x ∈ B1 (0). This implies (8. If inf Σλ wλ (x) < 0. ¯ while fact (b) is obvious. inf ˜ wλk (x) can be achieved at some point xk ∈ Σλk and that ¯ Σλk the sequence {xk } is bounded away from the singularities xλk of wλk . We need to show that ¯ ˜ for each k. deﬁne ˜ λo = sup{λ  wλ (x) ≥ 0. This can be seen from the following facts a) There exists > 0 and δ > 0 such that wλo (x) ≥ ¯ b) for x ∈ Bδ (xλo ) \ {xλo }.2 except that now we need to take care of the singularities. ¯ . For such λ. we will show that.2. ¯ ˜ for x ∈ Σλo . Therefore wλo (x) ≡ 0. Again. Now similar to the Step 1 in the proof of Theorem 8. then the inﬁmum is achieved in Σλ \ B1 (xλ ) ˜ To see this. v(x) ≥ min v(x) = ∂B1 (0) 0 (8. wλ the same way as in the proof of Theorem 8.2. Again let λk λo ˜ be a sequence such that wλk (x) < 0 for some x ∈ Σλk . Actually. then by Maximum Principle we have ¯ Deﬁne wλ = ¯ wλo (x) > 0 . that v(x) ≤ 0 for x ∈ B1 (xλ ). the minimum of wλ is away from xλ . This is possible because v(x) → 0 as x → ∞. The rest of the proof is similar to the step 2 in the proof of Theorem 8.22). then wλo (x) ≡ 0. We will show that ˜ If λo < 0. Suppose φ that wλo (x) ≡0. hence we need to show that. ∀x ∈ Σλ }.2.2. we can deduce that wλ (x) ≥ 0 for λ suﬃciently negative.22) >0 due to the fact that v(x) > 0 and v ≤ 0. Then let λ be so negative. limλ→λo inf x∈Bδ (xλ ) wλ (x) ≥ inf x∈Bδ (xλo ) wλo (x) ≥ .
Finally. we conclude that u ≡ 0. If our planes Tλ stop somewhere before the origin. Now given any two points x1 and x2 in Rn .23) exp u(x)dx < +∞ R2 The classiﬁcation of the solutions for this limiting equation would provide essential information on the original problems on manifolds.4 Every solution of (8. We will use the method of moving planes to prove: Theorem 8.xo (x) = ln 32λ2 (4 + λ2 x − xo 2 )2 for any λ > 0 and any point xo ∈ R2 is a family of explicit solutions. we ﬁrst need to obtain some decay rate of the solutions near inﬁnity. we derive the symmetry and monotonicity in x1 direction by the above argument. xn+2−p(n−2) n+2 n−2 . we conclude that the solution u must be radially symmetric about some point. To this end.3 Symmetry of Solutions for − u = eu in R2 When considering prescribing Gaussian curvature on two dimensional compact manifolds. If they stop at the origin again. ii) To show the nonexistence of solutions in the case p < that after the Kelvin transform. x ∈ Rn \ {0}. Then from the above argument. we move the plane in the negative x1 direction from positive inﬁnity toward the origin. if the sequence of approximate solutions “blows up”.xo (x). we must have u(x1 ) = u(x2 ).2. we notice Due to the singularity of the coeﬃcient of v p at the origin. one would arrive at the following equation in the entire space R2 : ∆u + exp u = 0 . v satisﬁes v+ 1 v p (x) = 0 . 8. x ∈ R2 (8.8. also it is interesting in its own right. then we can carry out the above procedure in the opposite direction. since equation (8.10). then by rescaling and taking limit. This completes the proof of the Theorem. Since the x1 direction can be chosen arbitrarily. one can easily see that v can only be symmetric about the origin if it is not identically zero. It follows that u is constant.2. It is known that φλ.2 Applications of the Maximum Principles Based on Comparisons 209 If λo = 0. we may assume that the origin is at the mid point of the line segment x1 x2 . Hence u must also be symmetric about the origin. . from the equation (8.23) is radially symmetric with respect to some point in R2 and hence assumes the form of φλ.10) is invariant under translations and rotations. we also obtain the symmetry and monotonicity in x1 direction by combining the two inequalities obtained in the two opposite directions. namely.
Lemma 8. 1 u(x) →− ln x 2π uniformly.210 8 Methods of Moving Planes and of Moving Spheres Some Global and Asymptotic Behavior of the Solutions The following Theorem gives the asymptotic behavior of the solutions near inﬁnity.2.23).2. For −∞ < t < ∞. Lemma 8. which is essential to the application of the method of moving planes. ds ·  u  u ≥ ∂Ωt 2 ≥ 4πΩt . then as x → +∞. Integrating from −∞ to ∞ gives −( R2 exp u(x)dx)2 ≤ −8π R2 exp u(x)dx which implies R2 exp u(x)dx ≥ 8π as desired. This Theorem is a direct consequence of the following two Lemmas. . x ∈ R 2 and R2 R2 exp u(x)dx ≤ −4 exp u(x)dx < +∞. let Ωt = {x  u(x) > t}. ∂Ωt ∂Ωt Hence −( and so d ( dt Ωt d Ωt ) · dt Ωt exp u(x)dx ≥ 4πΩt  d Ωt ) · dt exp u(x)dx)2 = 2 exp t · ( Ωt exp u(x)dx ≤ −8πΩt et .2. Ding) If u is a solution of − u = eu .2 enables us to obtain the asymptotic behavior of the solutions at inﬁnity. then R2 Proof.5 If u(x) is a solution of (8. one can obtain Ωt exp u(x)dx = − Ωt u= ∂Ωt  uds − d Ωt  = dt ∂Ωt ds  u By the Schwartz inequality and the isoperimetric inequality. exp u(x)dx ≥ 8π.2 (W. Theorem 8.
3 . Proof. we see that the condition ∞ implies that the solution u is bounded from above. we need only to verify that I := R2 ln x − y − ln(y + 1) − ln x u(y) e dy→0 ln x as x→∞. By a result of Brezis and Merle [BM]. x ∈ R2 and we will show w(x) 1 → ln x 2π R2 R2 exp u(x)dx < exp u(x)dx uniformly as x → +∞. where I1 . u(x) 1 →− ln x 2π R2 exp u(x)dx uniformly. We may assume that x ≥ 3.23). Write I = I1 + I2 + I3 . 2π R Then it is easy to see that w(x) = exp u(x) . If u(x) is a solution of (8.2. . a) To estimate I1 . b) For each ﬁxed K. as x→∞. ln x − y − ln(y + 1) − ln x →0 ln x hence I2 →0. D2 = {y  x − y > 1 and y ≤ K} and D3 = {y  x − y > 1 and y > K} respectively.24) To see this. then as x → +∞.2 Applications of the Maximum Principles Based on Comparisons 211 Lemma 8.8. we simply notice that I1 ≤ C x−y≤1 eu(y) dy − 1 ln x ln x − yeu(y) dy x−y≤1 Then by the boundedness of eu(y) and R2 eu(y) dy. I2 and I3 are the integrals on the three regions D1 = {y  x − y ≤ 1}. we have. (8. in region D2 . Let 1 w(x) = 2 (ln x − y − ln(y + 1)) exp u(y)dy. we see that I1 →0 as x→∞.
5.3. we show the monotonicity and symmetry of the solution in the x1 direction.212 8 Methods of Moving Planes and of Moving Spheres c)To see I3 →0. The Method of Moving Planes and Symmetry In this subsection.24). x2 ) about the line Tλ . Then v(x) ≤ C + C1 ln(x + 1) for some constant C and C1 . (See the previous Figure 1.2. x2 ) be the reﬂection point of x = (x1 . x2 )  x1 < λ} and Let Tλ = ∂Σλ = {(x1 . Finally by the uniqueness of the solution of the following O. Assume that u(x) is a solution of (8.x0 (x). let Σλ = {(x1 .2.2. problem u (r) + 1 u (r) = f (u) r u (0) = 0 u(0) = 1 we see that the solutions of (8. A straight forward calculation shows wλ (x) + (exp ψλ (x))wλ (x) = 0 (8.E. Therefore v must be a constant.2 and Lemma 8. xλ = (2λ − x1 . We also show that the solution is strictly increasing before the critical position.25) . we obtain our Theorem 8. For λ ∈ R1 . we use the fact that for x − y > 1  ln x − y − ln(y + 1) − ln x ≤C ln x v ≡ 0 and Then let K→∞. we move the family of lines which are orthogonal to a given direction from negative inﬁnity to a critical position and then show that the solution is symmetric in that direction about the critical position. Combining Lemma 8. Since the direction can be chosen arbitrarily. x2 )  x1 = λ}. Without loss of generality.) Deﬁne wλ (x) = u(xλ ) − u(x). Consider the function v(x) = u(x) + w(x). For a given solution.D. This veriﬁes (8.23) must assume the form φλ. we conclude that the solution must be radially symmetric about some point. we apply the method of moving planes to establish the symmetry of the solutions. This completes the proof of our Lemma.23).
φ (8.29). ¯ (8. one can easily derive a ¯ contradiction from (8.5. By the “Decay at Inﬁnity” argument. which is a negative minimum of wλ (x). As in the previous examples. Then similar to the argument in Subsection 5.4. we choose φ(x) = ln(x − 1). At this point.3). Step 1. hence it does not work here.2.27) and (8.2. ∀x ∈ Σλ .2 requires 0 < q < n − 2.28) φ φ + eψλ (x) + φ φ Moreover. and hence ψλ (xo ) ≤ u(xo ). As a modiﬁcation. if wλ (x) < 0 somewhere. while the key function φ = xq given in Corollary 7. . then by (8.29) Now. there exists a point xo . wλ (x) = ¯ u(xλ ) − u(x) →0 . ln(x − 1) as x→∞.8.26) that eψ(x ) + o φ o (x ) < 0 for suﬃciently large xo .27) This is what we desire for. we have.2. for λ suﬃciently negative. by the asymptotic behavior of the solution u near inﬁnity (see Lemma 8. the key is to check the decay rate of c(x) := −eψλ (x) at points xo where wλ (x) is negative.26) 1 Notice that we are in dimension two. we have wλ (x) ≥ 0. Then it is easy to verify that −1 φ (x) = . φ x(x − 1)2 ln(x − 1) It follows from this and (8. This completes Step 1. we show that. for each ﬁxed λ.2 Applications of the Maximum Principles Based on Comparisons 213 where ψλ (x) is some number between u(x) and uλ (x). (8. we have eψλ (x o ) = O( 1 ) xo 4 (8. φ(x) wλ = 0. we introduce the function wλ (x) = ¯ It satisﬁes wλ + 2 wλ ¯ ¯ wλ (x) . By virtue of Theorem 8.28). At such point u(xo λ ) < u(xo ).
8. Deﬁne λo = sup{λ  wλ (x) ≥ 0. When (M. x ∈ S n . ∀x ∈ Σλ }. However. the equation becomes − u+ n+2 n(n − 2) n−2 u= R(x)u n−2 . u > 0.1 The Background Let M be a Riemannian manifold of dimension n ≥ 3 with metric go . Due to this limitation. see articles by Bahri . ∀x ∈ Σλo . it does not work near the points where R(x) = 0.30). to establish a priori estimates. go ) and Ro (x) is the scalar curvature of go .214 8 Methods of Moving Planes and of Moving Spheres Step 2 . In the last few years. For equation (8. Given a function R(x) on M. We show that If λo < 0. 4 4(n − 1) (8. the wellknown KazdanWarner condition gives rise to many examples of R in which there is no solution. go ) is the standard sphere S n . This is equivalent to ﬁnding a positive solution of the semilinear elliptic equation − 4(n − 1) n−2 ou + Ro (x)u = R(x)u n−2 .3 Method of Moving Planes in a Local Way 8. one interesting problem in diﬀerential geometry is to ﬁnd a metric g that is pointwise conformal to the original metric go and has scalar curvature R(x). This technical assumption became somewhat standard and has been used by many authors for quite a few years. there have seen a lot of progress in understanding equation (8. One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. n+2 (8. This completes the proof of the Theorem. then wλo (x) ≡ 0. a useful technique employed by most authors was a ‘blowingup’ analysis. people had to assume that R was positive and bounded away from 0.3.30) where o is the BeltramiLaplace operator of (M. In recent years. The argument is entirely similar to that in the Step 2 of the proof of Theorem 8. x ∈ M. a tremendous amount of eﬀort has been put to ﬁnd the existence of the solutions and many interesting results have been obtained ( see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).2. In this case.31) on S n .2 except that we use φ(x) = ln(−x1 + 2). For example.31) It is the socalled critical case where the lack of compactness occurs.
α (S n ) . In this proposition. Lin [Lin1] weaken the condition by assuming only polynomial growth of R near its zeros. = 0 at Γ. such that for any solution u of (8. and Yang [CY2]. Gursky. Let Γ = {x ∈ M  R(x) = 0}. δ0 .34) are uniformly bounded near Γ ∩ D. Li [Li2] [Li3]. we sharpen our method of moving planes to further weaken the condition and to obtain more general result: Theorem 8. and R C 2. ∂R ≤ 0 where ν is the ∂ν outward normal (pointing to the region where R is negative) of Γ . we derived a priori bounds for the solutions of more general equations n(n − 2) u = R(x)up . Recently. Then the solutions of the equation − ou + Ro (x)u = R(x)up . he ﬁrst established a Liouville type theorem in Rn . To prove this result. are there any a priori estimates? In [CL6]. where A is an arbitrary positive number.33) Let D be any compact subset of M and let p be any number greater than 1. we used the ‘method of moving planes’ in a local way to control the growth of the solutions in this region and obtained an a priori estimate.1 Let M be a locally conformally ﬂat manifold.32). Chang.3. M. by introducing a new auxiliary function. In fact.1 Assume R ∈ C 2. . Chang and Yang [CY1]. we have u ≤ C in the region where R(x) ≤ . We removed the above wellknown technical assumption and obtained a priori estimates on the solutions of (8.α (M ) satisfying R(x)  R(x) is continuous near Γ. To overcome this. For a ﬁxed A. and Assume that Γ ∈ C 2. u > 0. Bahri and Coron [BC].α and R ∈ C 2. Later. we answered this question aﬃrmatively. (8. Then there are positive constants and C depending only on β0 . − u+ Proposition 8. Then for functions with changing signs.32) 4 1 for any exponent p between 1 + A and A. the bound is independent of p. we essentially required R(x) to grow linearly near its n+2 zeros. x ∈ S n .31) for functions R which are allowed to change signs. which seems a little restrictive. The main diﬃculty encountered by the traditional ‘blowingup’ analysis is near the point where R(x) = 0. and Schoen and Zhang [SZ]. in the case p = n−2 .8. then use a blowing up argument. (8.α (S n ) and  R(x) ≥ β0 for any x with R(x) ≤ δ0 .3 Method of Moving Planes in a Local Way 215 [Ba].3. in M (8.
In virtue of this Lemma. D.1.3.3. one only need to obtain an integral bound. Let u ∈ W 2. The proof of the Theorem is divided into two parts. Besides being n+2 . Moreover our restriction on the exponent is much weaker. We ﬁrst derive the bound in the region(s) where R is negatively bounded away from zero.2 The solutions of (8. To prove Theorem 8. grow like exp{− d(x) Obviously. we have sup u ≤ C( Br (x) 1 rn (u+ )q dV ) q .35) in a compact set K in Rn .3. This is the equation we will consider throughout the section. Part I.34) in a compact set D ⊂ M can be reduced to − u = R(x)up . n−2 8. Since M is a locally conformally ﬂat manifold.35) are uniformly bounded in the region {x ∈ K : R(x) ≤ 0 and dist(x. It comes from a standard elliptic estimate for subharmonic functions and an integral bound on the solutions. p can be any number greater than 1. the lower bound of R.34) and prove Theorem 8. we derive estimates in the region(s) where R is negatively bounded away from zero. Then for any ball B2r (x) ⊂ D and any q > 0. to establish an a priori bound of the solutions. The bound depends only on δ. one may simply look at functions R(x) which 1 }. where d(x) is the distance from the point x to Γ . we estimate the solutions of equation (8. p > 1. We prove Theorem 8. this kinds of functions satisfy our condition. we use the ‘method of moving planes’ to estimate the solutions in the region(s) surrounding the set Γ .n (D) and suppose u ≥ 0. To illustrate. but are not polynomial growth. B2r (x) 1 (8. Then based on this bound. Lemma 8.20 in [GT]).2 The A Priori Estimates Now.3. In the region(s) where R is negative In this part. we prove .33) is weaker than Lin’s polynomial growth restriction.36) where C = C(n.3.1 (See Theorem 9. a local ﬂat metric can be chosen so that equation (8.2.216 8 Methods of Moving Planes and of Moving Spheres One can see that our condition (8. Γ ) ≥ δ}. we introduce the following lemma of Gilbarg and Trudinger [GT]. q). (8. In fact.
through a straight forward calculation. we estimate the solutions in a neighborhood of Γ .3.35). leads to an a priori bound of the solutions. along any direction that is close to the direction of R.3. a Kelvin transform. K ) ≥ 1. we construct some auxiliary functions. y) with y = (x2 .38) Now (8. This completes the proof of the Lemma. or if necessary. Since the method of moving planes can not be applied directly to u. − ϕ = λ1 ϕ(x) x ∈ Ω ϕ(x) = 0 x ∈ ∂Ω.8.37) Proof. we arrive at o ϕα R1+α up dx ≤ C3 .2 Part II. i.35) by ϕα Rα sgnR and p−1 then integrate.3. Let x1 axis pointing . xn ). Multiply both sides of equation (8. In the region where R is small In this part. The Proof of Theorem 8. Denote the new system by x = (x1 . The proof consists of the following four steps. Let u be a given solution of (8.e. The key ingredient is to use the method of moving planes to show that. · · · . This result.2 Let B be any ball in K that is bounded away from Γ . Taking into account that α > 2 and ϕ and R are bounded in Ω.3. α Applying H¨lder inequality on the righthandside. and x0 ∈ Γ . Let α = 1+2p . It is a direct consequence of Lemma 8. the values of the solution u is comparable. Transforming the region Make a translation. Let Ω be an open neighborhood of K such that dist (∂Ω.37) follows suit. It is suﬃcient to show that u is bounded in a neighborhood of x0 . near x0 . Ω (8. Let ϕ be the ﬁrst eigenfunction of − in Ω.2.1 and Lemma 8. Step 1. B (8. a rotation. we have ϕα R1+α up dx ≤ C1 Ω Ω ϕα−2 Rα−2 udx ≤ C2 Ω ϕ p Rα−2 udx. Let 1 if R(x) > 0 sgnR = 0 if R(x) = 0 −1 if R(x) < 0. together with the boundedness of the integral up in a small neighborhood of x0 . where R is small. then there is a constant C such that up dx ≤ C.3 Method of Moving Planes in a Local Way 217 Lemma 8.
∂R (b) ∂x1 ≤ 0 in D. Let ∂1 D := {x  x1 = φ(y) + . (See Figure 6. Let D by the region enclosed by two surfaces ∂1 D and ∂2 D. (c)  R Step 2. x1 ≥ −2 } The intersection of ∂1 D and the plane {x ∈ Rn  x1 = −2 } encloses a part of the plane. and  u ≤ Cm. ˜ ˜ is chosen to ensure that . The image of Γ is also uniformly convex near the origin. the image of the set Ω + := {x  R(x) > 0} is contained in the left of the yplane. x0 becomes the origin. and the image of Γ is tangent to the yplane at origin. Let u be a C 2 extension of u from ∂1 D to the entire ˜ ∂D. such that 0 ≤ u ≤ 2m. Introducing an auxiliary function Let m = max∂1 D u. In the new system.218 8 Methods of Moving Planes and of Moving Spheres to the right. Let x1 = φ(y) be the equation of the image of Γ near the origin. and R(x) is continuous in D.) Γ y ∂1 D ∂2 D R(x) > 0 0 R(x) < 0 x1 Tλ Figure 6 The small positive number (a) ∂1 D is uniformly convex. which is denote by ∂2 D.
(8. By Theorem 8.3.3 Method of Moving Planes in a Local Way 219 Let w be the harmonic function w=0x∈D w = u x ∈ ∂D ˜ Then by the maximum principle and a standard elliptic estimate. one can choose C0 suﬃciently large.39) We consider the following two cases. Case 1) φ(y) + 2 ≤ x1 ≤ φ(y) + . we have 0 ≤ w(x) ≤ 2m Introduce a new function v(x) = u(x) − w(x) + C0 m[ + φ(y) − x1 ] + m[ + φ(y) − x1 ]2 which is well deﬁned in D. ∂v ≤ (C1 + C2 − C0 )m − 2m( + φ(y) − x1 ). One can easily verify that v(x) satisﬁes the following v + ψ(y) + f (x. v) = 2mx1 φ(y)+R(x){v+w(x)−C0 m[ +φ(y)−x1 ]−m[ +φ(y)−x1 ]2 }p At this step. and w C 1 (D) ≤ C1 m. R(x) is negative and bounded away from 0.39). It follows from this and (8. such that ∂v <0 ∂x1 Then since we have v(x) > 0.2 and the standard elliptic estimate.  ∂x1 for some positive constant C2 . Here C0 is a large constant to be determined later. In this region. ∂x1 Noticing that ( + φ(y) − x1 ) ≥ 0. v) = 0 x ∈ D v(x) = 0 x ∈ ∂1 D where and f (x.8. we have ∂u  ≤ C2 m. ψ(y) = −C0 m φ(y) − m [ + φ(y)]2 − 2m. we show that v(x) > 0 ∀x ∈ D. v(x) = 0 ∀x ∈ ∂1 D .
v) − f (x. and move the plane perpendicular to x1 axis toward the left.40) remains true. Applying the Method of Moving Planes to v in x1 direction In the previous step. We show that the moving of planes can be carried on provided f (x. we arrive at v(x) > 0.39). vλ ) − f (xλ . Let xλ = (2λ − x1 . This lies a foundation for the moving of planes. v) − f (x. the readers may see the ﬁrst example in Section 5. In fact. v). v) + f (xλ . again by (8. that is. 2 Again. If (8. Tλ = {x ∈ Rn  x1 = λ}. This choice of 1 is to guarantee that when the plane Tλ moves to λ = − 1 .220 8 Methods of Moving Planes and of Moving Spheres Case 2) −2 ≤ x1 ≤ φ(y) + 2 . it is easy to see that vλ satisﬁes the equation vλ + ψ(y) + f (xλ . then we have (8. Now we decrease λ.41) . For detailed ∂v argument. y) be the reﬂection point of x with respect to the plane Tλ . v(x)) for x = (x1 . We are going to show that wλ (x) ≥ 0 ∀ x ∈ Σλ . In this region. Let vλ (x) = v(xλ ) and wλ (x) = vλ (x) − v(x).40) We want to show that the above inequality holds for − 1 ≤ λ ≤ for some 0 < 1 < . ∂f = R(x)). and hence wλ satisﬁes − wλ = f (xλ . vλ ) = 0. v) is Lipschitz continuous in v (Actually.about the plane Tλ still lies in region D.40) is true when λ is less but close to . y) ∈ D with x1 > λ > − 1 . we have v(x) ≥ −w(x) + C0 m 2 ≥ −2m + C0 m . Now we can start from the rightmost tip of region D. (8.41) holds. let Σλ = {x ∈ D  x1 ≥ λ}. choosing C0 suﬃciently large (depending only on ). Step 3. v(x)) ≤ f (xλ . move the plane Tλ toward the left as long as the inequality (8. v) = R(xλ )(vλ − v) + f (xλ . (8. v) = R(xλ )wλ + f (xλ . v) − f (x. because Σλ is a narrow region and f (x. the reﬂection of Σλ . we showed that v(x) ≥ 0 in D and v(x) = 0 on ∂1 D. More precisely.
∂x1 More precisely. so that ∂w + C0 m ≥ 0 ∂x1 Noticing that ∂R ∂x1 ≤ 0 and ( + φ(y) − x1 ) ≥ 0 in D. we notice that the uniform convexity of the image of Γ near the origin implies φ(y) ≤ −a0 < 0.3 Method of Moving Planes in a Local Way 221 − wλ − R(xλ )wλ (x) ≥ 0. We consider the following two possibilities. we use ∂x1 ∂f ∂R ∂w = 2m φ(y) + up−1 { u + R(x)p[ + C0 m + 2m( + φ(y) − x1 )]}. ∂f (x. we have ∂f ≤ 0. In the part where u ≥ 1.8. y) with x1 > −2 1 . v) It is elementary to see that (8. we write ∂f ∂R = 2m φ(y)+up−1 ∂x1 ∂x1 u + R(x)/ ∂R ∂x1 p[ ∂w + C0 m + 2m( + φ(y) − x1 )] . ∀x ∈ Σλ }.42) If λo > − 1 . Let λo = inf{λ  wλ (x) ≥ 0. (8. (8. then since v(x) > 0 in D. one can derive a contradiction. we use u to control R(x)/ ∂R ∂x1 p[ ∂w + C0 m + 2m( + φ(y) − x1 )]. This enables us to apply the maximum principle to wλ . x = (x1 . ∂x1 ∂x1 ∂x1 First. Choose C0 suﬃciently large.42) and the maximum principle.41) holds if ≤ 0 in the set ∂x1 {x ∈ D  R(x) < 0 or x1 > −2 1 }. ∀x ∈ Σλo and ∂wλo < 0.43) for some constant a0 . we have wλo (x) > 0. by virtue of (8. ∂x1 . To estimate ∂f . ∂x1 b) R(x) > 0. ∂x1 Then. ∀x ∈ Tλo ∩ Σλo . a) R(x) ≤ 0. similar to the argument in the examples in Section 5.
45) and an integral bound on u. such that  ∂R  ∼  R.1.40) is true. (8. in a small neighborhood of the origin. by a similar argument.44) Noticing that w(x) is bounded in D. A similar argument will show that this remains true if we rotate the x1 axis by a small angle.1 The Background Given a function K(x) on the two dimensional standard sphere S 2 .222 8 Methods of Moving Planes and of Moving Spheres Choose 1 suﬃciently small. More precisely. the inequality (8. one can ﬁnd a cone ∆x0 with x0 as its vertex and staying to the left of x0 . This is equivalent to solving the following nonlinear elliptic equation . we can make R(x)/ ∂x1 arbitrarily small by a ∂f small choice of 1 . for any point x0 ∈ Γ .33) on R.3. one can show that (8.4 Method of Moving Spheres 8. ∂x1 ∂R Then by condition (8. ∂x1 Here again we use the smallness of R.44) leads immediately to u(x) + C6 ≥ u(x0 ) ∀x ∈ ∆x0 (8.40) implies that. the intersection 0 of the cone ∆x0 with the set {xR(x) ≥ δ2 } has a positive measure and the lower bound of the measure depends only on δ0 and the C 1 norm of R. 8. the wellknown Nirenberg problem is to ﬁnd conditions on K(x). Therefore. Furthermore. our conclusion is: The method of moving planes can be carried on up to λ = − 1 . Deriving the a priori bound Inequality (8. so that it can be realized as the Gaussian curvature of some conformally related metric. in virtue of (8. the function v(x) is monotone decreasing in x1 direction. and therefore arrive at ∂x1 ≤ 0.43).45) is true for any point x0 in a small neighborhood of Γ .45) More generally.3. for any λ between − 1 and .2. In the part where u ≤ 1. So far. This completes the proof of Theorem 8. Now the a priori bound of the solutions is a consequence of (8. Step 4. such that v(x) ≥ v(x0 ) ∀x ∈ ∆x0 (8. we can use 2m φ(y) to control Rp[ ∂w + C0 m + 2m( + φ(y) − x1 )].4. which can be derived from Lemma 8.
then (8. In recent years.49) Sn where dAg is the volume element of the metric eu go and u n−2 go for n = 2 and for n ≥ 3 respectively. on S 2 . see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS] [ES] [Ha] [Ho] [KW1] [Kw2] [Li1] [Li2] [Mo] and the references therein. we let R(x) = 2K(x) be the scalar curvature. various suﬃcient conditions were obtained by many authors (For example.47) For a unifying description. The vector ﬁeld X in (8. In [CL3]. for even functions R on S 2 . 4 .) However.46) on S 2 . To ﬁnd necessary and suﬃcient conditions. (8.49) is a conformal Killing ﬁeld associated to the standard metric go on S n . He showed that.46) is equivalent to − u + 2 = R(x)eu .48) to have a solution is R being positive somewhere. Where is the Laplacian operator associated to the standard metric.4 Method of Moving Spheres 223 − u + 1 = K(x)e2u (8. On the higher dimensional sphere S n . we gave a negative answer to this question. the necessary and suﬃcient condition for (8. Besides the obvious necessary condition that R(x) be positive somewhere.8. it is natural to begin with functions having certain symmetries. a monotone rotationally symmetric function R admits no solution. These conditions give rise to many examples of R(x) for which (8. a similar problem was raised by Kazdan and Warner: Which functions R(x) can be realized as the scalar curvature of some conformally related metrics ? The problem is equivalent to the existence of positive solutions of another elliptic equation − u+ n+2 n−2 n(n − 2) u= R(x)u n−2 4 4(n − 1) (8. In the following. we call these necessary conditions KazdanWarner type conditions. The conditions are: X(R)dAg = 0 (8. Then one may naturally ask: ‘ Are the necessary conditions of KazdanWarner type also suﬃcient?’ This question has been open for many years. there are still gaps between those suﬃcient conditions and the necessary ones. there are other wellknown obstructions found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. can one solve the equations? What are the necessary and suﬃcient conditions for the equations to have a solution? These are very interesting problems in geometry.47) have no solution.48) and (8. A pioneer work in this direction is [Mo] by Moser.48) Both equations are socalled ‘critical’ in the sense that lack of compactness occurs. In particular. Then for which R.
this result is still very interesting.50). because it suggests that (8.2 There exists a family of functions R satisfying the KazdanWarner type conditions (8. This generalizes Xu and Yang’s result. (8. And thus. since their family of functions R satisfy (8. for rotationally symmetric functions R = R(θ) where θ is the distance from the north pole.224 8 Methods of Moving Planes and of Moving Spheres Note that. Proposition 8. First. we could show this for a family of such functions. in the class of even functions.4. In [XY]. Xu and Yang found a family of rotationally symmetric functions R satisfying conditions (8.48) and (8.3 Let R be rotationally symmetric.2) is satisﬁed automatically.48) to have a solution. At that time. for the ﬁrst time.Warner type conditions are not suﬃcient for (8. we were not able to prove it at that time. However.50) for which problem (8.1 Let R be rotationally symmetric. In our earlier paper [CL3].48) has no solution at all.48) has no solution at all. A simple question was asked by Kazdan [Ka]: What are the necessary and suﬃcient conditions for rotationally symmetric functions? To be more precise. Although we believed that for all such functions R. we present a class of rotationally symmetric functions satisfying (8. the KazdanWarner type conditions take the form: R > 0 somewhere and R changes signs (8.48) to have a solution. (8. We also cover the higher dimensional cases.48) has a solution.50) may not be the suﬃcient condition. Even though one didn’t know whether there were any other nonsymmetric solutions for these functions R . the KazdanWarner type conditions are the necessary and suﬃcient conditions for (8. and R ≡ C. for which problem (8.4.4.50) In [XY]. we were not able to obtain the counter part of this result in higher dimensions.52) .48) has no rotationally symmetric solution. there is no solution at all. pointed out that the Kazdan.50). for which the problem (8. Xu and Yang also proved: Proposition 8.51). we proved the nonexistence of rotationally symmetric solutions: Proposition 8. (1.51) Then problem (8. Assume that i) R is nondegenerate ii) R changes signs in the region where R > 0 Then problem (8. If R is monotone in the region where R > 0. Thus one can interpret Moser’s result as: In the class of even functions.47) has no rotationally symmetric solution.
51).48) or (8. R is nondecreasing in xn+1 direction.4. It does not work in higher dimensions. Then problems (8.2 Necessary Conditions In this subsection.4 Method of Moving Spheres 225 The above results and many other existence results tend to lead people to believe that what really counts is whether R changes signs in the region where R > 0.53) 8.48) and (8.4.1 and Theorem 8. And we conjectured in [CL3] that for rotationally symmetric R. we make a stereographic projection from S n to Rn . We also generalize this result to a family of nonsymmetric functions. condition (8. we prove Theorem 8. to prove Proposition 8. and R ≡ C. The following are our main results. Then the necessary and suﬃcient condition for (8. Theorem 8. which appeared in our later paper [CL4]. We call it the method of ‘moving spheres’.48) and (8.48) are rotationally symmetric. Assume that i) R(x) ≥ 0.50). Combining our Theorem 1 with Xu and Yang’s Proposition 3. we introduce a new idea.48) to have a solution is R > 0 somewhere and R changes signs in the region where R > 0 (8. a similar method was used by P. The main objective of this section is to present the proof on the necessary part of the conjecture.4. Then problems (8. Padilla to show the radial symmetry of the solutions for some nonlinear Dirichlet problems on annuluses. would be the necessary and suﬃcient condition for (8. we used the method of ‘moving planes’ to show that the solutions of (8. instead of (8. for x ∈ B .4. This method only work for a special family of functions satisfying (8.47) have no solution at all. Instead of showing the symmetry of the solutions. we obtain a comparison formula which leads to a direct contradiction. Then it is equivalent to consider the following equation in Euclidean space Rn : .1 Let R be continuous and rotationally symmetric. one can obtain a necessary and suﬃcient condition in the nondegenerate case: Corollary 8.2.47) have no solution at all. and for all functions satisfying (8.52). Theorem 8. ii) R(x) ≤ 0.47) to have a solution. For convenience.2 Let R be a continuous function. Let B be a geodesic ball centered at the North Pole xn+1 = 1. If R is monotone in the region where R > 0.2.51). for x ∈ S n \ B.4. In an earlier work [P] .1 Let R be rotationally symmetric and nondegenerate in the sense that R = 0 whenever R = 0.4.8. In [CL4]. It works in all dimensions.4. In [CL3].
The function R is also bounded and continuous. Lemma 8.57) The proof of Theorem 8. and this projection does not change the monotonicity of the function. We compare the values of u at those pairs λx of points λx and x2 .1. Then in step 2. 0 < λ ≤ 1 (8.47). then u(λx) > u( λx ) − 4 ln x x2 ∀x ∈ B1 (0). and we assume these throughout the subsection.226 8 Methods of Moving Planes and of Moving Spheres − u = R(r)eu .59) is true for λ = 1. we only need to show that in the remaining case. Let u be a solution of (8. u ∼ Cx2−n for n ≥ 3 (8.59) We ﬁrst prove Lemma 8. (8. there is also no solution. Without loss of generality. Thus.57).4. n ≥ 3. R ≤ 0 for r > ro .56) for n > 2. there are three possibilities: i) R is nonpositive everywhere. Let x ∈ B1 (0). The ﬁrst two cases violate the KazdanWarner type conditions.55) with appropriate asymptotic growth of the solutions at inﬁnity u ∼ −4 ln x for n=2 . A similar idea will work for Lemma 8. The function R(r) is the projection of the original R in equations (8.58) Lemma 8. Proof of Lemma 8. Here is the Euclidean Laplacian operator. then λx ∈ Bλ (0).1 is based on the following comparison formulas.53).4.56) for some C > 0. In step 1.55) and (8. r = x and n+2 τ = n−2 is the socalled critical Sobolev exponent.2.4. hence there is no solution. (8. we may assume that: R(r) > 0.4. For a radial function R satisfying (8. The reﬂection point of λx λx about the sphere ∂Bλ (0) is x2 . iii) R > 0 and nonincreasing for r < ro . then u(λx) > x2−n u( λx ) x2 ∀x ∈ B1 (0).2 Let R be a continuous function satisfying (8.4. R (r) ≤ 0 for r < 1. ii) R ≡ C is nonnegative everywhere and monotone. x ∈ Rn . Let u be a solution of (8.54) (8.4. − u = R(r)uτ u > 0. or vice versa.2.54) and (8. we start with the unit sphere and show that (8.1 Let R be a continuous function satisfying (8. we move (shrink) the sphere ∂Bλ (0) . This is done by Kelvin transform and the strong maximum principle. x ∈ R2 .48) and (8. We use a new idea called the method of ‘moving spheres ’.56) for n = 2.57). 0 < λ ≤ 1 (8. R(r) ≤ 0 for r ≥ 1.
65) that for r ≤ 1.62) x Let vλ (x) = x2−n uλ ( x2 ) be the Kelvin transform of uλ . Then it is easy to verify that λ τ − vλ = R( )vλ (x). λ ≤ 1. 1].57) that u < 0. Then − uλ = R(λx)uτ (x).4 Method of Moving Spheres 227 towards the origin. and v ≥ 0 in B1 (0).61) and v is regular. Then by (8. we move the sphere ∂Bλ (0) towards λ = 0. We show that one can always shrink ∂Bλ (0) a little bit before reaching the origin. Step 2.63) (8.4). (8.60) u(x) > x2−n u( 2 ) ∀x ∈ B1 (0). Step 1. λ = 1. n Let uλ (x) = λ 2 −1 u(λx). It follows from (8. In this step. Thus − (u − v) > 0.63) and (8. λ (8.64) r Let wλ = uλ − vλ . we have λ R( ) − R(λr) ≤ 0 r It follows from (8. λ λ wλ + R( )ψλ wλ = [R( ) − R(λr)]uτ λ r r (8. we obtain u > v in B1 (0) which is equivalent to (8. we establish