Wenxiong Chen and Congming Li
Methods on Nonlinear Elliptic
Equations
– Monograph –
September 22, 2008
AIMS
Preface
In this book we intend to present basic materials as well as real research
examples to young researchers in the ﬁeld of nonlinear analysis for partial
diﬀerential equations (PDEs), in particular, for semilinear elliptic PDEs. We
hope it will become a good reading material for graduate students and a handy
textbook for professors to use in a topic course on nonlinear analysis.
We will introduce a series of well-known typical methods in nonlinear
analysis. We will ﬁrst illustrate them by using simple examples, then we will
lead readers to the research front and explain how these methods can be
applied to solve practical problems through careful analysis of a series of
recent research articles, mostly the authors’ recent papers.
From our experience, roughly speaking, in applying these commonly used
methods, there are usually two aspects:
i) general scheme (more or less universal) and,
ii) key points for each individual problem.
We will use our own research articles to show readers what the general
schemes are and what the key points are; and with the understandings of these
examples, we hope the readers will be able to apply these general schemes to
solve their own research problems with the discovery of their own key points.
In Chapter 1, we introduce basic knowledge on Sobolev spaces and some
commonly used inequalities. These are the major spaces in which we will seek
weak solutions of PDEs.
Chapter 2 shows how to ﬁnd weak solutions for some typical linear and
semilinear PDEs by using functional analysis methods, mainly, the calculus
of variations and critical point theories including the wellknown Mountain
Pass Lemma.
In Chapter 3, we establish W^{2,p} a priori estimates and regularity. We prove
that, in most cases, weak solutions are actually diﬀerentiable, and hence are
classical ones. We will also present a Regularity Lifting Theorem. It is a simple
method to boost the regularity of solutions. It has been used extensively in
various forms in the authors’ previous works. The essence of the approach is
well-known in the analysis community. However, the version we present here
contains some new developments. It is much more general and very easy
to use. We believe that our method will provide convenient ways, for both
experts and nonexperts in the field, to obtain regularity. We will use
examples to show how this theorem can be applied to PDEs and to integral
equations.
Chapter 4 is a preparation for Chapters 5 and 6. We introduce Riemannian
manifolds, curvatures, covariant derivatives, and Sobolev embeddings on
manifolds.
Chapter 5 deals with semilinear elliptic equations arising from prescribing
Gaussian curvature, on both positively and negatively curved manifolds.
We show the existence of weak solutions in critical cases via variational
approaches. We also introduce the method of lower and upper solutions.
Chapter 6 focuses on solving a problem of prescribing scalar curvature
on S^n for n ≥ 3. It is in the critical case, where the corresponding variational
functional is not compact at any level set. To recover the compactness, we
construct a max-mini variational scheme. The outline is presented clearly;
however, the detailed proofs are rather complex. Beginners may skip
these proofs.
Chapter 7 is devoted to the study of various maximum principles, in particular,
the ones based on comparisons. Besides classical ones, we also introduce a
version of the maximum principle at infinity and a maximum principle for integral
equations, which basically depends on the absolute continuity of the Lebesgue
integral. It is a preparation for the method of moving planes.
In Chapter 8, we introduce the method of moving planes and its variant,
the method of moving spheres, and apply them to obtain symmetry, monotonicity,
a priori estimates, and even nonexistence of solutions. We also introduce
an integral form of the method of moving planes, which is quite different
from the one for PDEs. Instead of using local properties of a PDE, we exploit
global properties of solutions of integral equations. Many research examples
are illustrated, including the well-known one by Gidas, Ni, and Nirenberg, as
well as a few from the authors' recent papers.
Chapters 7 and 8 function as a self-contained group. Readers who are
only interested in the maximum principles and the method of moving planes
can skip the first six chapters and start directly from Chapter 7.
Yeshiva University Wenxiong Chen
University of Colorado Congming Li
Contents
1 Introduction to Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Approximation by Smooth Functions. . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.5 Compact Embedding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.6 Other Basic Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.6.1 Poincaré's Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality . . . . 36
2 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.1 Second Order Elliptic Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.3 Methods of Linear Functional Analysis . . . . . . . . . . . . . . . . . . . . . 44
2.3.1 Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.3.2 Some Basic Principles in Functional Analysis . . . . . . . . . 44
2.3.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 48
2.4 Variational Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.1 Semilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.2 Calculus of Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.3 Existence of Minimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.4.4 Existence of Minimizers Under Constraints . . . . . . . . . . . 55
2.4.5 Minimax Critical Points . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.4.6 Existence of a Minimax via the Mountain Pass Theorem 65
3 Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.1 W^{2,p} a Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.1.1 Newtonian Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.1.2 Uniform Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2 W^{2,p} Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . 86
3.2.1 The Case p ≥ 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.2.2 The Case 1 < p < 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2.3 Other Useful Results Concerning the Existence,
Uniqueness, and Regularity . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3 Regularity Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.1 Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.2 Regularity Lifting Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.3.3 Applications to PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.3.4 Applications to Integral Equations . . . . . . . . . . . . . . . . . . . 103
4 Preliminary Analysis on Riemannian Manifolds . . . . . . . . . . . . 107
4.1 Diﬀerentiable Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2 Tangent Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3 Riemannian Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.1 Curvature of Plane Curves. . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.2 Curvature of Surfaces in R^3 . . . . . . . . . . . . . . . . . . . . . 115
4.4.3 Curvature on Riemannian Manifolds . . . . . . . . . . . . . . . . . 117
4.5 Calculus on Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.5.1 Higher Order Covariant Derivatives and the
Laplace-Beltrami Operator . . . . . . . . . . . . . . . . . . . . . . . 120
4.5.2 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature . . 123
4.6 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5 Prescribing Gaussian Curvature on Compact 2-Manifolds . . 127
5.1 Prescribing Gaussian Curvature on S^2 . . . . . . . . . . . . . . . . . 129
5.1.1 Obstructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.1.2 Variational Approach and Key Inequalities . . . . . . . . . . . 130
5.1.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2 Prescribing Gaussian Curvature on Negatively Curved
Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.2.1 Kazdan and Warner’s Results. Method of Lower and
Upper Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2.2 Chen and Li’s Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6 Prescribing Scalar Curvature on S^n, for n ≥ 3 . . . . . . . . . . . . . 157
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.2 The Variational Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2.1 Estimate the Values of the Functional . . . . . . . . . . . . . . . . 162
6.2.2 The Variational Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.3 The A Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.1 In the Region Where R < 0 . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.2 In the Region Where R is Small . . . . . . . . . . . . . . . . . . . . . 172
6.3.3 In the Regions Where R > 0. . . . . . . . . . . . . . . . . . . . . . . . 174
7 Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.2 Weak Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.3 The Hopf Lemma and Strong Maximum Principles . . . . . . . . . . 183
7.4 Maximum Principles Based on Comparisons . . . . . . . . . . . . . . . . 188
7.5 A Maximum Principle for Integral Equations . . . . . . . . . . . . . . . . 191
8 Methods of Moving Planes and of Moving Spheres . . . . . . . . 195
8.1 Outline of the Method of Moving Planes . . . . . . . . . . . . . . . . . . . 197
8.2 Applications of the Maximum Principles Based on
Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
8.2.1 Symmetry of Solutions in a Unit Ball . . . . . . . . . . . . . . . 199
8.2.2 Symmetry of Solutions of −∆u = u^p in R^n . . . . . . . . . . . 202
8.2.3 Symmetry of Solutions for −∆u = e^u in R^2 . . . . . . . . . . . 209
8.3 Method of Moving Planes in a Local Way . . . . . . . . . . . . . . . . . . 214
8.3.1 The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
8.3.2 The A Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8.4 Method of Moving Spheres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.4.1 The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.4.2 Necessary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.5 Method of Moving Planes in Integral Forms and Symmetry
of Solutions for an Integral Equation . . . . . . . . . . . . . . . . . . . . . . . 228
A Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.1 Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.1.1 Algebraic and Geometric Notations . . . . . . . . . . . . . . . . . . 235
A.1.2 Notations for Functions and Derivatives . . . . . . . . . . . . . . 235
A.1.3 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
A.1.4 Notations for Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
A.1.5 Notations from Riemannian Geometry . . . . . . . . . . . . . . . 239
A.2 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
A.3 Calderon-Zygmund Decomposition . . . . . . . . . . . . . . . . . . . . . 243
A.4 The Contraction Mapping Principle . . . . . . . . . . . . . . . . . . . . . . . . 246
A.5 The Arzela-Ascoli Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 247
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
1
Introduction to Sobolev Spaces
1.1 Distributions
1.2 Sobolev Spaces
1.3 Approximation by Smooth Functions
1.4 Sobolev Embeddings
1.5 Compact Embedding
1.6 Other Basic Inequalities
1.6.1 Poincaré's Inequality
1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality
In real life, people use numbers to quantify the surrounding objects. In
mathematics, the absolute value |a − b| is used to measure the difference
between two numbers a and b. Functions are used to describe physical states;
for example, temperature is a function of time and place. Very often, we use
a sequence of approximate solutions to approach a real one, and how close
these solutions are to the real one depends on how we measure them, that
is, on which metric we choose. Hence, we must not only develop suitable
metrics to measure different states (functions), but also study the
relationships among different metrics. For these purposes, the Sobolev spaces
were introduced. They have many applications in various branches of
mathematics, in particular, in the theory of partial differential equations.
The role of Sobolev Spaces in the analysis of PDEs is somewhat similar
to the role of Euclidean spaces in the study of geometry. The fundamental
research on the relations among various Sobolev Spaces (Sobolev norms) was
ﬁrst carried out by G. Hardy and J. Littlewood in the 1910s and then by S.
Sobolev in the 1930s. More recently, many well known mathematicians, such
as H. Brezis, L. Caﬀarelli, A. Chang, E. Lieb, L. Nirenberg, J. Serrin, and
2 1 Introduction to Sobolev Spaces
E. Stein, have worked in this area. The main objectives are to determine if
and how the norms dominate each other, what the sharp estimates are, which
functions achieve these sharp estimates, and which functions are ‘critically’
related to these sharp estimates.
To ﬁnd the existence of weak solutions for partial diﬀerential equations,
especially for nonlinear partial diﬀerential equations, the method of functional
analysis, in particular, the calculus of variations, has seen more and more
applications.
To roughly illustrate this kind of application, let us start off with a simple
example. Let Ω be a bounded domain in R^n and consider the Dirichlet problem
associated with the Laplace equation:

    −∆u = f(x),  x ∈ Ω,
     u = 0,      x ∈ ∂Ω.        (1.1)
To prove the existence of solutions, one may view −∆ as an operator acting
on a proper linear space and then apply some known principles of functional
analysis, for instance, the 'fixed point theory' or 'the degree theory,' to derive
the existence. One may also consider the corresponding variational functional

    J(u) = (1/2) ∫_Ω |∇u|² dx − ∫_Ω f(x) u dx        (1.2)
in a proper linear space, and seek critical points of the functional in that
space. This kind of variational approach is particularly powerful in dealing
with nonlinear equations. For example, in equation (1.1), instead of f(x), we
consider f(x, u). Then it becomes a semilinear equation. Correspondingly, we
have the functional

    J(u) = (1/2) ∫_Ω |∇u|² dx − ∫_Ω F(x, u) dx,        (1.3)

where

    F(x, u) = ∫_0^u f(x, s) ds

is an antiderivative of f(x, ·). From the definition of the functional in either
(1.2) or (1.3), one can see that the function u in the space need not be second
order diﬀerentiable as required by the classical solutions of (1.1). Hence the
critical points of the functional are solutions of the problem only in the ‘weak’
sense. However, by an appropriate regularity argument, one may recover the
diﬀerentiability of the solutions, so that they can still satisfy equation (1.1)
in the classical sense.
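As a concrete (and purely numerical) illustration, not taken from the text, the following sketch discretizes the functional (1.2) on Ω = (0, 1) with zero boundary values. The critical point of the discretized J satisfies the finite-difference analogue of −u″ = f, here solved by the Thomas algorithm; for f ≡ 1 the exact solution is u(x) = x(1 − x)/2, which the centered scheme reproduces exactly.

```python
def solve_dirichlet(f, n):
    """Solve (2u_i - u_{i-1} - u_{i+1})/h^2 = f(x_i) on the interior grid
    x_i = i/n, i = 1..n-1, with zero boundary values (Thomas algorithm)."""
    h = 1.0 / n
    b = [2.0] * (n - 1)                        # main diagonal; off-diagonals are -1
    d = [h * h * f(i * h) for i in range(1, n)]
    for i in range(1, n - 1):                  # forward elimination
        w = -1.0 / b[i - 1]
        b[i] += w                              # b[i] -= (-1) * w, since c = -1
        d[i] -= w * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):             # back substitution
        u[i] = (d[i] + u[i + 1]) / b[i]
    return u

n = 200
u = solve_dirichlet(lambda x: 1.0, n)          # -u'' = 1, u(0) = u(1) = 0
exact = [x * (1 - x) / 2 for x in ((i + 1) / n for i in range(n - 1))]
err = max(abs(a - b) for a, b in zip(u, exact))
```

The point of the sketch is only that the Euler-Lagrange equation of a discretized J(u) is a linear system one can actually solve; all names here are illustrative.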
In general, given a PDE problem, our intention is to view it as an operator
A acting on some proper linear spaces X and Y of functions, and to write the
equation symbolically as

    Au = f.        (1.4)
Then we can apply the general and elegant principles of linear or nonlinear
functional analysis to study the solvability of various equations involving A.
The result can then be applied to a broad class of partial diﬀerential equations.
We may also associate this operator with a functional J(·), whose critical
points are the solutions of equation (1.4). In this process, the key is to
find an appropriate operator A and appropriate spaces X and Y. As we
will see later, the Sobolev spaces are designed precisely for this purpose and
will work out properly.
In solving a partial diﬀerential equation, in many cases, it is natural to
ﬁrst ﬁnd a sequence of approximate solutions, and then go on to investigate
the convergence of the sequence. The limit function of a convergent sequence
of approximate solutions would be the desired exact solution of the equation.
As we will see in the next few chapters, there are two basic stages in showing
convergence:
i) in a reflexive Banach space, every bounded sequence has a weakly convergent
subsequence; and
ii) by the compact embedding from a "stronger" Sobolev space into a
"weaker" one, a weakly convergent sequence in the "stronger" space becomes
a strongly convergent sequence in the "weaker" space.
Before going into the details of this chapter, the readers may glance at
the introduction of the next chapter to gain more motivation for studying
the Sobolev spaces.
In Section 1.1, we will introduce distributions, mainly the notion of
weak derivatives, which are the elements of the Sobolev spaces.
We then deﬁne Sobolev spaces in Section 1.2.
To derive many useful properties of Sobolev spaces, it is not so convenient
to work directly with weak derivatives. Hence, in Section 1.3, we show that
weak derivatives can be approximated by smooth functions. Then, in the next
three sections, we can work with smooth functions to establish a series of
important inequalities.
1.1 Distributions
As we have seen in the introduction, for the functional J(u) in (1.2) or (1.3),
what is really involved is only the first derivatives of u, instead of the second
derivatives required classically for a second order equation; and these first
derivatives need not be continuous or even defined everywhere. Therefore,
via a functional analysis approach, one can substantially weaken the notion of
partial derivatives. The advantage is that it allows one to divide the task of
finding "suitable" smooth solutions for a PDE into two major steps:
Step 1. Existence of Weak Solutions. One seeks solutions which are less
differentiable but easier to obtain. It is very common to use "energy"
minimization or conservation, and sometimes finite dimensional approximation,
to show the existence of such weak solutions.
Step 2. Regularity Lifting. One uses various analysis tools to boost the
differentiability of the known weak solutions and tries to show that they are
actually classical ones.
Both the existence of weak solutions and regularity lifting have become
two major branches of today's PDE analysis. Various function spaces and
the related embedding theories are basic tools in both branches, among which
Sobolev spaces are the most frequently used.
In this section, we introduce the notion of ‘weak derivatives,’ which will
be the elements of the Sobolev spaces.
Let R^n be the n-dimensional Euclidean space and Ω be an open connected
subset of R^n. Let D(Ω) = C_0^∞(Ω) be the linear space of infinitely
differentiable functions with compact support in Ω. This is called the space
of test functions on Ω.
Example 1.1.1 Assume

    B_R(x^o) := {x ∈ R^n | |x − x^o| < R} ⊂ Ω;

then for any r < R, the following function

    f(x) = { exp{ 1 / (|x − x^o|² − r²) }   for |x − x^o| < r,
           { 0                              elsewhere,

is in C_0^∞(Ω).
Example 1.1.2 Assume ρ ∈ C_0^∞(R^n), u ∈ L^p(Ω), and supp u ⊂ K ⊂⊂ Ω.
Let

    u_ε(x) := ρ_ε ∗ u := ∫_{R^n} (1/ε^n) ρ((x − y)/ε) u(y) dy.

Then u_ε ∈ C_0^∞(Ω) for ε sufficiently small.
Let L^p_loc(Ω) be the space of p-th power locally summable functions for
1 ≤ p ≤ ∞. Such functions are Lebesgue measurable functions f defined on
Ω with the property that

    ‖f‖_{L^p(K)} := ( ∫_K |f(x)|^p dx )^{1/p} < ∞

for every compact subset K of Ω.
Assume that u is a C¹ function in Ω and φ ∈ D(Ω). Through integration
by parts, we have, for i = 1, 2, . . . , n,

    ∫_Ω (∂u/∂x_i) φ(x) dx = − ∫_Ω u(x) (∂φ/∂x_i) dx.        (1.5)
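A small numerical sanity check of (1.5) in one dimension (an illustration, not from the text): take Ω = (−1, 1), u(x) = x³, and the standard bump function φ(x) = e^{1/(x²−1)} as the test function, which is smooth with compact support in Ω. Both sides of (1.5) are computed by the midpoint rule.

```python
import math

def phi(x):                                # test function, compact support in (-1, 1)
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

def phi_prime(x):                          # chain rule: phi' = phi * (-2x)/(x^2-1)^2
    if abs(x) >= 1:
        return 0.0
    return phi(x) * (-2.0 * x) / (x * x - 1.0) ** 2

n = 20000
h = 2.0 / n
xs = [-1.0 + (i + 0.5) * h for i in range(n)]      # midpoint-rule nodes
lhs = h * sum(3 * x * x * phi(x) for x in xs)      # left side:  integral of u' phi
rhs = -h * sum(x ** 3 * phi_prime(x) for x in xs)  # right side: -integral of u phi'
```

Since φ and all its derivatives vanish at the endpoints, the midpoint rule is extremely accurate here, and the two sums agree to many digits.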
Now if u is not in C¹(Ω), then ∂u/∂x_i does not exist. However, the integral
on the right-hand side of (1.5) still makes sense if u is a locally L¹ summable
function. For this reason, we define the first derivative ∂u/∂x_i weakly as the
function v(x) that satisfies

    ∫_Ω v(x) φ(x) dx = − ∫_Ω u (∂φ/∂x_i) dx

for all functions φ ∈ D(Ω).
The same idea works for higher partial derivatives. Let α = (α_1, α_2, . . . , α_n)
be a multi-index of order

    k := |α| := α_1 + α_2 + · · · + α_n.

For u ∈ C^k(Ω), the regular α-th partial derivative of u is

    D^α u = ∂^{|α|} u / (∂x_1^{α_1} ∂x_2^{α_2} · · · ∂x_n^{α_n}).

Given any test function φ ∈ D(Ω), through a straightforward integration by
parts k times, we arrive at

    ∫_Ω D^α u φ(x) dx = (−1)^{|α|} ∫_Ω u D^α φ dx.        (1.6)

There is no boundary term because φ vanishes near the boundary.
Now if u is not k times differentiable, the left-hand side of (1.6) makes no
sense. However, the right-hand side is valid for functions u with much weaker
differentiability; u only needs to be locally L¹ summable. Thus it is natural
to choose those functions v that satisfy (1.6) as the weak representatives of
D^α u.
Definition 1.1.1 For u, v ∈ L¹_loc(Ω), we say that v is the α-th weak
derivative of u, written

    v = D^α u,

provided

    ∫_Ω v(x) φ(x) dx = (−1)^{|α|} ∫_Ω u D^α φ dx

for all test functions φ ∈ D(Ω).
Example 1.1.3 For n = 1 and Ω = (−π, 1), let

    u(x) = { cos x    if −π < x ≤ 0,
           { 1 − x    if 0 < x < 1.

Then its weak derivative u′(x) can be represented by

    v(x) = { −sin x   if −π < x ≤ 0,
           { −1       if 0 < x < 1.
To see this, we verify, for any φ ∈ D(Ω), that

    ∫_{−π}^{1} u(x) φ′(x) dx = − ∫_{−π}^{1} v(x) φ(x) dx.        (1.7)

In fact, through integration by parts, we have

    ∫_{−π}^{1} u(x) φ′(x) dx = ∫_{−π}^{0} cos x φ′(x) dx + ∫_{0}^{1} (1 − x) φ′(x) dx
        = ∫_{−π}^{0} sin x φ(x) dx + φ(0) − φ(0) + ∫_{0}^{1} φ(x) dx
        = − ∫_{−π}^{1} v(x) φ(x) dx.
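One can also confirm (1.7) numerically; the following sketch (an illustration only, not from the text) pairs u and v with a bump-type test function transplanted onto (−π, 1) and compares the two integrals by the midpoint rule.

```python
import math

def u(x):                                   # the function of Example 1.1.3
    return math.cos(x) if x <= 0 else 1.0 - x

def v(x):                                   # its claimed weak derivative
    return -math.sin(x) if x <= 0 else -1.0

def bump(t):                                # smooth bump with support in (-1, 1)
    return math.exp(1.0 / (t * t - 1.0)) if abs(t) < 1 else 0.0

def bump_prime(t):
    if abs(t) >= 1:
        return 0.0
    return bump(t) * (-2.0 * t) / (t * t - 1.0) ** 2

a, b = -math.pi, 1.0
scale = 2.0 / (b - a)                       # affine map (a, b) -> (-1, 1)
def phi(x):   return bump(scale * (x - a) - 1.0)
def phi_p(x): return scale * bump_prime(scale * (x - a) - 1.0)

n = 100000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]
lhs = h * sum(u(x) * phi_p(x) for x in xs)  # integral of u phi'
rhs = -h * sum(v(x) * phi(x) for x in xs)   # -(integral of v phi)
```

The integrands have a corner (for u) and a jump (for v) at x = 0, so the agreement is only to quadrature accuracy, but it is far better than any plausible coincidence.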
In this example, one can see that, in the classical sense, the function u is
not differentiable at x = 0. Since the weak derivative is defined by integrals,
one may alter the values of the weak derivative v(x) on a set of measure zero,
and (1.7) still holds. However, the weak derivative is unique up to a set of
measure zero.
Lemma 1.1.1 (Uniqueness of Weak Derivatives). If v and w are both α-th
weak partial derivatives of u, D^α u, then v(x) = w(x) almost everywhere.

Proof. By the definition of weak derivatives, we have

    ∫_Ω v(x) φ(x) dx = (−1)^{|α|} ∫_Ω u(x) D^α φ dx = ∫_Ω w(x) φ(x) dx

for any φ ∈ D(Ω). It follows that

    ∫_Ω (v(x) − w(x)) φ(x) dx = 0  ∀ φ ∈ D(Ω).

Therefore, we must have

    v(x) = w(x) almost everywhere.  ⊓⊔
From the definition, we can view a weak derivative as a linear functional
acting on the space of test functions D(Ω), and we call it a distribution. More
generally, we have

Definition 1.1.2 A distribution is a continuous linear functional on D(Ω).
The linear space of distributions, or generalized functions, on Ω, denoted
by D′(Ω), is the collection of all continuous linear functionals on D(Ω).
Here, the continuity of a functional T on D(Ω) means that, for any
sequence {φ_k} ⊂ D(Ω) with φ_k → φ in D(Ω), we have

    T(φ_k) → T(φ), as k → ∞;

and we say that φ_k → φ in D(Ω) if
a) there exists K ⊂⊂ Ω such that supp φ_k, supp φ ⊂ K, and
b) for any α, D^α φ_k → D^α φ uniformly as k → ∞.
The most important and most commonly used distributions are locally
summable functions. In fact, for any f ∈ L^p_loc(Ω) with 1 ≤ p ≤ ∞, consider

    T_f(φ) = ∫_Ω f(x) φ(x) dx.

It is easy to verify that T_f(·) is a continuous linear functional on D(Ω), and
hence it is a distribution.
For any distribution µ, if there is an f ∈ L¹_loc(Ω) such that

    µ(φ) = T_f(φ), ∀ φ ∈ D(Ω),

then we say that µ is (or can be realized as) a locally summable function and
identify it with f.
An interesting example of a distribution that is not a locally summable
function is the well-known Dirac delta function. Let x^o be a point in Ω. For
any φ ∈ D(Ω), the delta function at x^o can be defined as

    δ_{x^o}(φ) = φ(x^o).

Hence it is a distribution. However, one can show that such a delta function is
not locally summable. It is not a function at all. This kind of "function" has
been used widely and so successfully by physicists and engineers, who often
simply view δ_{x^o} as

    δ_{x^o}(x) = { 0,  for x ≠ x^o,
                 { ∞,  for x = x^o.

Surprisingly, such a delta function is the derivative of some function in the
following distributional sense. To explain this, let Ω = (−1, 1), and let

    f(x) = { 0,  for x < 0,
           { 1,  for x ≥ 0.
Then, we have

    − ∫_{−1}^{1} f(x) φ′(x) dx = − ∫_{0}^{1} φ′(x) dx = φ(0) = δ_0(φ).

Comparing this with the definition of weak derivatives, we may regard δ_0(x)
as f′(x) in the sense of distributions.
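The identity above is easy to watch numerically (a sketch for illustration only, not from the text): pairing the step function against −φ′ by quadrature reproduces φ(0).

```python
import math

def phi(x):                                # a test function with support in (-1, 1)
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

def phi_prime(x):                          # phi' = phi * (-2x)/(x^2 - 1)^2
    if abs(x) >= 1:
        return 0.0
    return phi(x) * (-2.0 * x) / (x * x - 1.0) ** 2

def f(x):                                  # the step (Heaviside) function above
    return 1.0 if x >= 0 else 0.0

n = 100000
h = 2.0 / n
xs = [-1.0 + (i + 0.5) * h for i in range(n)]
pairing = -h * sum(f(x) * phi_prime(x) for x in xs)   # -(integral of f phi')
# pairing should approximate phi(0) = e^{-1}, i.e. delta_0 acting on phi
```

Replacing φ by other test functions changes the value to the corresponding φ(0), which is exactly the distributional statement f′ = δ₀.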
1.2 Sobolev Spaces
Suppose now we are given a function f ∈ L^p(Ω) and want to solve the partial
differential equation

    ∆u = f(x)

in the sense of weak derivatives. Naturally, we would seek solutions u such
that ∆u is in L^p(Ω). More generally, we would start from the collection of
all distributions whose second weak derivatives are in L^p(Ω).
In a variational approach, as we have seen in the introduction, to seek
critical points of the functional

    (1/2) ∫_Ω |∇u|² dx − ∫_Ω F(x, u) dx,

a natural set of functions to start with is the collection of distributions whose
first weak derivatives are in L²(Ω). More generally, we have
Definition 1.2.1 The Sobolev space W^{k,p}(Ω) (k ≥ 0 and p ≥ 1) is the
collection of all distributions u on Ω such that, for every multi-index α with
|α| ≤ k, D^α u can be realized as an L^p function on Ω. Furthermore,
W^{k,p}(Ω) is a Banach space with the norm

    ‖u‖ := ( Σ_{|α| ≤ k} ∫_Ω |D^α u(x)|^p dx )^{1/p}.

In the special case p = 2, it is also a Hilbert space, and we usually denote it
by H^k(Ω).
Definition 1.2.2 W_0^{k,p}(Ω) is the closure of C_0^∞(Ω) in W^{k,p}(Ω).

Intuitively, W_0^{k,p}(Ω) is the space of functions whose derivatives up to
order (k − 1) vanish on the boundary.
Example 1.2.1 Let Ω = (−1, 1). Consider the function f(x) = |x|^β. For
0 < β < 1, it is obviously not differentiable at the origin. However, for any
1 ≤ p < 1/(1 − β), it is in the Sobolev space W^{1,p}(Ω). More generally, let Ω
be the open unit ball centered at the origin in R^n; then the function |x|^β is
in W^{1,p}(Ω) if and only if

    β > 1 − n/p.        (1.8)

To see this, we first calculate

    f_{x_i}(x) = β x_i / |x|^{2−β},  for x ≠ 0,

and hence

    |∇f(x)| = β / |x|^{1−β}.        (1.9)
Fix a small ε > 0. Then, for any φ ∈ D(Ω), by integration by parts, we have

    ∫_{Ω\B_ε(0)} f_{x_i}(x) φ(x) dx
        = − ∫_{Ω\B_ε(0)} f(x) φ_{x_i}(x) dx + ∫_{∂B_ε(0)} f φ ν_i dS,

where ν = (ν_1, ν_2, . . . , ν_n) is the inward normal vector on ∂B_ε(0).
Now, under the condition that β > 1 − n/p, f_{x_i} is in L^p(Ω) ⊂ L¹(Ω), and

    | ∫_{∂B_ε(0)} f φ ν_i dS | ≤ C ε^{n−1+β} → 0, as ε → 0.
It follows that

    ∫_Ω |x|^β φ_{x_i}(x) dx = − ∫_Ω (β x_i / |x|^{2−β}) φ(x) dx.

Therefore, the weak first partial derivatives of |x|^β are

    β x_i / |x|^{2−β},  i = 1, 2, . . . , n.

Moreover, from (1.9), one can see that |∇f| is in L^p(Ω) if and only if

    β > 1 − n/p.
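Condition (1.8) can be made vivid numerically (an illustrative sketch, not from the text). Up to the constant ω_{n−1} β^p, the L^p norm of ∇(|x|^β) over the unit ball reduces, in polar coordinates, to the radial integral ∫₀¹ r^{n−1−p(1−β)} dr. Truncating at ε and letting ε shrink, the integral stabilizes exactly when β > 1 − n/p and blows up otherwise.

```python
def grad_norm_trunc(n, p, beta, eps, m=200000):
    """Midpoint-rule value of the radial integral over eps < r < 1,
    i.e. the L^p norm of the gradient up to a multiplicative constant."""
    a = n - 1 - p * (1.0 - beta)           # exponent of r in the integrand
    h = (1.0 - eps) / m
    return h * sum((eps + (i + 0.5) * h) ** a for i in range(m))

# n = 3, p = 2: condition (1.8) reads beta > 1 - 3/2 = -1/2.
ok_small, ok_tiny = (grad_norm_trunc(3, 2, 0.2, e) for e in (1e-2, 1e-4))    # beta ok
bad_small, bad_tiny = (grad_norm_trunc(3, 2, -0.6, e) for e in (1e-2, 1e-4)) # beta too small
```

For β = 0.2 the two truncations nearly coincide; for β = −0.6 shrinking ε from 10⁻² to 10⁻⁴ multiplies the integral by roughly ε^{−0.2}-type growth, signalling divergence.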
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance when proving inequalities, it
may be inconvenient and cumbersome to handle weak derivatives directly from
the definition. To get around this, in this section we will show that any
function in a Sobolev space can be approached by a sequence of smooth
functions. In other words, the smooth functions are dense in Sobolev spaces.
Based on this, when deriving many useful properties of Sobolev spaces, we can
just work on smooth functions and then take limits.
At the end of the section, for more convenient application of the
approximation results, we introduce an Operator Completion Theorem. We also
prove an Extension Theorem. Both theorems will be used frequently in the
next few sections.
The idea of the approximation is based on mollifiers. Let

    j(x) = { c e^{1/(|x|² − 1)}   if |x| < 1,
           { 0                    if |x| ≥ 1.

One can verify that

    j(x) ∈ C_0^∞(B_1(0)).

Choose the constant c such that

    ∫_{R^n} j(x) dx = 1.

For each ε > 0, define

    j_ε(x) = (1/ε^n) j(x/ε).

Then obviously

    ∫_{R^n} j_ε(x) dx = 1, ∀ ε > 0.

One can also verify that j_ε ∈ C_0^∞(B_ε(0)), and

    lim_{ε→0} j_ε(x) = { 0   for x ≠ 0,
                       { ∞   for x = 0.
The above observations suggest that the limit of j_ε(x) may be viewed as a
delta function, and from the well-known property of the delta function, we
would naturally expect, for any continuous function f(x) and for any point
x ∈ Ω,

    (J_ε f)(x) := ∫_Ω j_ε(x − y) f(y) dy → f(x), as ε → 0.        (1.10)

More generally, if f(x) is only in L^p(Ω), we would expect (1.10) to hold
almost everywhere. We call j_ε(x) a mollifier and (J_ε f)(x) the mollification
of f(x). We will show that, for each ε > 0, J_ε f is a C^∞ function, and as
ε → 0,

    J_ε f → f in W^{k,p}.

Notice that, actually,

    (J_ε f)(x) = ∫_{B_ε(x) ∩ Ω} j_ε(x − y) f(y) dy;

hence, in order for J_ε f(x) to approximate f(x) well, we need B_ε(x) to be
completely contained in Ω to ensure that

    ∫_{B_ε(x) ∩ Ω} j_ε(x − y) dy = 1

(one of the important properties of the delta function). Equivalently, we need
x to be in the interior of Ω. For this reason, we first prove a local
approximation theorem.
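Before the theorem, a one-dimensional numerical sketch of (1.10) may help (an illustration only, not from the text). After the substitution t = (x − y)/ε, the mollification becomes (J_ε f)(x) = ∫_{|t|<1} j(t) f(x − εt) dt; applied to f(x) = |x| at its corner x = 0, this gives (J_ε f)(0) = ε ∫ |t| j(t) dt, which tends to 0 = f(0) linearly in ε.

```python
import math

def j_unnorm(t):                           # the mollifier kernel, before normalization
    return math.exp(1.0 / (t * t - 1.0)) if abs(t) < 1 else 0.0

m = 20000
h = 2.0 / m
ts = [-1.0 + (i + 0.5) * h for i in range(m)]
Z = h * sum(j_unnorm(t) for t in ts)       # 1/c: normalize so the kernel integrates to 1

def mollify_at(f, x, eps):
    """(J_eps f)(x) via the substitution t = (x - y)/eps."""
    return h * sum(j_unnorm(t) / Z * f(x - eps * t) for t in ts)

a = mollify_at(abs, 0.0, 0.1)              # about eps * E|t|, shrinks with eps
b = mollify_at(abs, 0.0, 0.01)
smooth_point = mollify_at(abs, 0.5, 0.1)   # away from the corner: exactly 0.5,
                                           # since the kernel is even
```

The ratio a/b is exactly 10 here because both values equal ε times the same quadrature sum, which is the advertised linear rate at a corner of a Lipschitz function.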
Theorem 1.3.1 (Local approximation by smooth functions).
For any f ∈ W^{k,p}(Ω), J_ε f ∈ C^∞(R^n) and J_ε f → f in W^{k,p}_loc(Ω)
as ε → 0.
Then, to extend this result to the entire Ω, we will choose infinitely many
open sets O_i, i = 1, 2, . . ., each of which has a positive distance to the
boundary of Ω, and whose union is the whole of Ω. Based on the above theorem,
we are able to approximate a W^{k,p}(Ω) function on each O_i by a sequence of
smooth functions. Combining this with a partition of unity, and a cut-off
function if Ω is unbounded, we will then prove
Theorem 1.3.2 (Global approximation by smooth functions).
For any f ∈ W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂
C^∞(Ω) ∩ W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.
Theorem 1.3.3 (Global approximation by smooth functions up to the
boundary).
Assume that Ω is bounded with C¹ boundary ∂Ω. Then, for any f ∈
W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂ C^∞(Ω̄) = C^∞(Ω̄) ∩
W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.
When Ω is the entire space R^n, approximation by C^∞ functions and
approximation by C_0^∞ functions are essentially the same. We have

Theorem 1.3.4 W^{k,p}(R^n) = W_0^{k,p}(R^n). In other words, for any
f ∈ W^{k,p}(R^n), there exists a sequence of functions {f_m} ⊂ C_0^∞(R^n)
such that

    f_m → f in W^{k,p}(R^n), as m → ∞.
Proof of Theorem 1.3.1.
We prove the theorem in three steps.
In Step 1, we show that $J_\epsilon f \in C^\infty(R^n)$ and
$$\|J_\epsilon f\|_{L^p(\Omega)} \leq \|f\|_{L^p(\Omega)}.$$
From the definition of $J_\epsilon f(x)$, we can see that it is well defined for all $x \in R^n$, and it vanishes if $x$ is of distance $\epsilon$ away from $\Omega$. Here and in the following, for simplicity of argument, we extend $f$ to be zero outside of $\Omega$.
In Step 2, we prove that, if $f$ is in $L^p(\Omega)$, then
$$J_\epsilon f \rightarrow f \mbox{ in } L^p_{loc}(\Omega).$$
We first verify this for continuous functions and then approximate $L^p$ functions by continuous functions.
In Step 3, we reach the conclusion of the theorem. For each $f \in W^{k,p}(\Omega)$ and $|\alpha| \leq k$, $D^\alpha f$ is in $L^p(\Omega)$. Then from the result in Step 2, we have
$$J_\epsilon(D^\alpha f) \rightarrow D^\alpha f \mbox{ in } L^p_{loc}(\Omega).$$
Hence what we need to verify is
$$D^\alpha(J_\epsilon f)(x) = J_\epsilon(D^\alpha f)(x).$$
As the readers will notice, the arguments in the last two steps only work in compact subsets of $\Omega$.
Step 1.
Let $e_i = (0, \cdots, 0, 1, 0, \cdots, 0)$ be the unit vector in the $x_i$ direction. Fix $\epsilon > 0$ and $x \in R^n$. By the definition of $J_\epsilon f$, we have, for $|h| < \epsilon$,
$$\frac{(J_\epsilon f)(x + h e_i) - (J_\epsilon f)(x)}{h} = \int_{B_{2\epsilon}(x) \cap \Omega} \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h} f(y) \, dy. \eqno(1.11)$$
Since, as $h \rightarrow 0$,
$$\frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h} \rightarrow \frac{\partial j_\epsilon(x-y)}{\partial x_i}$$
uniformly for all $y \in B_{2\epsilon}(x) \cap \Omega$, we can pass the limit through the integral sign in (1.11) to obtain
$$\frac{\partial (J_\epsilon f)(x)}{\partial x_i} = \int_\Omega \frac{\partial j_\epsilon(x-y)}{\partial x_i} f(y) \, dy.$$
Similarly, we have
$$D^\alpha (J_\epsilon f)(x) = \int_\Omega D^\alpha_x j_\epsilon(x-y) f(y) \, dy.$$
Noticing that $j_\epsilon(\cdot)$ is infinitely differentiable, we conclude that $J_\epsilon f$ is also infinitely differentiable.
Then we derive
$$\|J_\epsilon f\|_{L^p(\Omega)} \leq \|f\|_{L^p(\Omega)}. \eqno(1.12)$$
By the H\"older inequality, we have
$$|(J_\epsilon f)(x)| = \left| \int_\Omega j_\epsilon^{\frac{p-1}{p}}(x-y) \, j_\epsilon^{\frac{1}{p}}(x-y) f(y) \, dy \right| \leq \left( \int_\Omega j_\epsilon(x-y) \, dy \right)^{\frac{p-1}{p}} \left( \int_\Omega j_\epsilon(x-y) |f(y)|^p \, dy \right)^{\frac{1}{p}}$$
$$\leq \left( \int_{B_\epsilon(x)} j_\epsilon(x-y) |f(y)|^p \, dy \right)^{\frac{1}{p}}.$$
It follows that
$$\int_\Omega |(J_\epsilon f)(x)|^p \, dx \leq \int_\Omega \left( \int_{B_\epsilon(x)} j_\epsilon(x-y) |f(y)|^p \, dy \right) dx \leq \int_\Omega |f(y)|^p \left( \int_{B_\epsilon(y)} j_\epsilon(x-y) \, dx \right) dy \leq \int_\Omega |f(y)|^p \, dy.$$
This verifies (1.12).
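The contraction property (1.12) is an instance of Young's inequality for convolutions, and it is easy to observe discretely. The following sketch (our own, with an arbitrary rough grid function standing in for $f$) convolves with a nonnegative kernel of unit mass and checks that the $L^p$ norm does not increase.

```python
import numpy as np

# Discrete analogue of (1.12): convolution with a nonnegative kernel of unit
# mass does not increase the L^p norm (Young's inequality with ||j||_1 = 1).
rng = np.random.default_rng(0)
dx = 0.01
x = np.arange(-3, 3, dx)
f = rng.standard_normal(x.size)                 # rough stand-in for an L^p function

eps = 0.25
y = np.arange(-eps, eps + dx, dx)
j = np.where(np.abs(y) < eps,
             np.exp(-1.0 / np.maximum(1 - (y / eps) ** 2, 1e-12)), 0.0)
j /= j.sum() * dx                               # unit mass: sum(j) * dx == 1

Jf = np.convolve(f, j, mode="same") * dx        # (J_eps f)(x_n) ~ sum_k j(y_k) f(x_n - y_k) dx

p = 3.0
def lp_norm(g):
    return (np.sum(np.abs(g) ** p) * dx) ** (1 / p)

print(lp_norm(Jf), lp_norm(f))
```

The discrete inequality holds exactly here (up to edge truncation, which only lowers the left side), mirroring the integral computation above.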
Step 2.
We prove that, for any compact subset $K$ of $\Omega$,
$$\|J_\epsilon f - f\|_{L^p(K)} \rightarrow 0, \mbox{ as } \epsilon \rightarrow 0. \eqno(1.13)$$
We first show this for a continuous function $f$. By writing
$$(J_\epsilon f)(x) - f(x) = \int_{B_\epsilon(0)} j_\epsilon(y) [f(x-y) - f(x)] \, dy,$$
we have
$$|(J_\epsilon f)(x) - f(x)| \leq \max_{x \in K, |y| < \epsilon} |f(x-y) - f(x)| \int_{B_\epsilon(0)} j_\epsilon(y) \, dy \leq \max_{x \in K, |y| < \epsilon} |f(x-y) - f(x)|.$$
Due to the continuity of $f$ and the compactness of $K$, the last term in the above inequality tends to zero uniformly as $\epsilon \rightarrow 0$. This verifies (1.13) for continuous functions.
For any $f$ in $L^p(\Omega)$ and any given $\delta > 0$, choose a continuous function $g$ such that
$$\|f - g\|_{L^p(\Omega)} < \frac{\delta}{3}. \eqno(1.14)$$
This can be derived from the well-known fact that any $L^p$ function can be approximated by a simple function of the form $\sum_{j=1}^k a_j \chi_j(x)$, where $\chi_j$ is the characteristic function of some measurable set $A_j$; and each simple function can be approximated by a continuous function.
For the continuous function $g$, (1.13) implies that, for sufficiently small $\epsilon$, we have
$$\|J_\epsilon g - g\|_{L^p(K)} < \frac{\delta}{3}.$$
It follows from this and (1.14) that
$$\|J_\epsilon f - f\|_{L^p(K)} \leq \|J_\epsilon f - J_\epsilon g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} + \|g - f\|_{L^p(K)}$$
$$\leq 2\|f - g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} \leq 2 \cdot \frac{\delta}{3} + \frac{\delta}{3} = \delta.$$
This proves (1.13).
Step 3.
Now assume that $f \in W^{k,p}(\Omega)$. Then for any $\alpha$ with $|\alpha| \leq k$, we have $D^\alpha f \in L^p(\Omega)$. We show that
$$D^\alpha(J_\epsilon f) \rightarrow D^\alpha f \mbox{ in } L^p(K). \eqno(1.15)$$
By the result in Step 2, we have
$$J_\epsilon(D^\alpha f) \rightarrow D^\alpha f \mbox{ in } L^p(K).$$
Now what is left to verify is that, for sufficiently small $\epsilon$,
$$D^\alpha(J_\epsilon f)(x) = J_\epsilon(D^\alpha f)(x), \quad \forall x \in K.$$
To see this, we fix a point $x$ in $K$. Choose $\epsilon < \frac{1}{2} dist(K, \partial\Omega)$; then any point $y$ in the ball $B_\epsilon(x)$ is in the interior of $\Omega$. Consequently,
$$D^\alpha(J_\epsilon f)(x) = \int_\Omega D^\alpha_x j_\epsilon(x-y) f(y) \, dy = \int_{B_\epsilon(x)} D^\alpha_x j_\epsilon(x-y) f(y) \, dy$$
$$= (-1)^{|\alpha|} \int_{B_\epsilon(x)} D^\alpha_y j_\epsilon(x-y) f(y) \, dy = \int_{B_\epsilon(x)} j_\epsilon(x-y) D^\alpha f(y) \, dy = \int_\Omega j_\epsilon(x-y) D^\alpha f(y) \, dy.$$
Here we have used the fact that $j_\epsilon(x-y)$ and all its derivatives are supported in $B_\epsilon(x) \subset \Omega$.
This completes the proof of Theorem 1.3.1. $\Box$
The Proof of Theorem 1.3.2.
Step 1. For a Bounded Region $\Omega$.
Let
$$\Omega_i = \{x \in \Omega \mid dist(x, \partial\Omega) > \frac{1}{i}\}, \quad i = 1, 2, 3, \cdots.$$
Write $O_i = \Omega_{i+3} \setminus \bar{\Omega}_{i+1}$. Choose some open set $O_0 \subset\subset \Omega$ so that $\Omega = \cup_{i=0}^\infty O_i$. Choose a smooth partition of unity $\{\eta_i\}_{i=0}^\infty$ associated with the open sets $\{O_i\}_{i=0}^\infty$:
$$0 \leq \eta_i(x) \leq 1, \quad \eta_i \in C^\infty_0(O_i), \qquad \sum_{i=0}^\infty \eta_i(x) = 1, \quad x \in \Omega.$$
Given any function $f \in W^{k,p}(\Omega)$, obviously $\eta_i f \in W^{k,p}(\Omega)$ and $supp(\eta_i f) \subset O_i$. Fix $\delta > 0$. Choose $\epsilon_i > 0$ so small that $f_i := J_{\epsilon_i}(\eta_i f)$ satisfies
$$\begin{cases} \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \leq \frac{\delta}{2^{i+1}}, & i = 0, 1, \cdots \\ supp \, f_i \subset (\Omega_{i+4} \setminus \bar{\Omega}_i), & i = 1, 2, \cdots. \end{cases} \eqno(1.16)$$
Set $g = \sum_{i=0}^\infty f_i$. Then $g \in C^\infty(\Omega)$, because each $f_i$ is in $C^\infty(\Omega)$, and for each open set $O \subset\subset \Omega$, there are at most finitely many nonzero terms in the sum. To see that $g \in W^{k,p}(\Omega)$, we write
$$g - f = \sum_{i=0}^\infty (f_i - \eta_i f).$$
It follows from (1.16) that
$$\sum_{i=0}^\infty \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \leq \delta \sum_{i=0}^\infty \frac{1}{2^{i+1}} = \delta.$$
From here we can see that the series $\sum_{i=0}^\infty (f_i - \eta_i f)$ converges in $W^{k,p}(\Omega)$ (see Exercise 1.3.1 below); hence $(g - f) \in W^{k,p}(\Omega)$, and therefore $g \in W^{k,p}(\Omega)$. Moreover, from the above inequality, we have
$$\|g - f\|_{W^{k,p}(\Omega)} \leq \delta.$$
Since $\delta > 0$ is arbitrary, this completes Step 1.
Remark 1.3.1 In the above argument, neither $\sum f_i$ nor $\sum \eta_i f$ converges, but their difference converges. Although each partial sum of $\sum f_i$ is in $C^\infty_0(\Omega)$, the infinite series $\sum f_i$ does not converge to $g$ in $W^{k,p}(\Omega)$. Then how did we prove that $g \in W^{k,p}(\Omega)$? We showed that the difference $(g - f)$ is in $W^{k,p}(\Omega)$.

Exercise 1.3.1 Let $X$ be a Banach space and $v_i \in X$. Show that the series $\sum_i^\infty v_i$ converges in $X$ if $\sum_i^\infty \|v_i\| < \infty$.
Hint: Show that the sequence of partial sums is a Cauchy sequence.
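A quick numerical check of the hint (our own sketch, in $X = R^3$ with the Euclidean norm): when $\sum_i \|v_i\| < \infty$, the triangle inequality bounds the gap between two partial sums by a tail of a convergent series.

```python
import numpy as np

# If sum ||v_i|| < infinity, the partial sums S_m are Cauchy:
#   ||S_m - S_l|| <= sum_{i=l+1}^m ||v_i||   (a tail of a convergent series).
rng = np.random.default_rng(1)
v = [rng.standard_normal(3) for _ in range(60)]
v = [w / np.linalg.norm(w) / 2.0 ** i for i, w in enumerate(v)]   # ||v_i|| = 2^{-i}

S = np.cumsum(v, axis=0)                        # S[m] = v_0 + ... + v_m
gap = np.linalg.norm(S[50] - S[30])             # = ||v_31 + ... + v_50||
tail = sum(2.0 ** (-i) for i in range(31, 51))  # summable-norm tail bound
print(gap, tail)
```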
Step 2. For an Unbounded Region $\Omega$.
Given any $\delta > 0$, since $f \in W^{k,p}(\Omega)$, there exists $R > 0$ such that
$$\|f\|_{W^{k,p}(\Omega \setminus B_{R-2}(0))} \leq \delta. \eqno(1.17)$$
Choose a cut-off function $\phi \in C^\infty(R^n)$ satisfying
$$\phi(x) = \begin{cases} 1, & x \in B_{R-2}(0), \\ 0, & x \in R^n \setminus B_R(0); \end{cases} \qquad \mbox{and} \quad |D^\alpha \phi(x)| \leq 1 \quad \forall x \in R^n, \ \forall |\alpha| \leq k.$$
Then by (1.17), it is easy to verify that there exists a constant $C$ such that
$$\|\phi f - f\|_{W^{k,p}(\Omega)} \leq C\delta. \eqno(1.18)$$
Now in the bounded domain $\Omega \cap B_R(0)$, by the argument in Step 1, there is a function $g \in C^\infty(\Omega \cap B_R(0))$ such that
$$\|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} \leq \delta. \eqno(1.19)$$
Obviously, the function $\phi g$ is in $C^\infty(\Omega)$, and by (1.18) and (1.19),
$$\|\phi g - f\|_{W^{k,p}(\Omega)} \leq \|\phi g - \phi f\|_{W^{k,p}(\Omega)} + \|\phi f - f\|_{W^{k,p}(\Omega)} \leq C_1 \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} + C\delta \leq (C_1 + C)\delta.$$
This completes the proof of Theorem 1.3.2. $\Box$
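A cut-off function of the kind used in Step 2 can be built explicitly from the standard bump $h(t) = e^{-1/t}$ for $t > 0$, $h(t) = 0$ for $t \leq 0$. The sketch below is our own (with radii 1 and 2 in place of $R-2$ and $R$) and verifies the defining properties numerically.

```python
import numpy as np

def cutoff(r):
    """Smooth radial cut-off: 1 for r <= 1, 0 for r >= 2, smooth in between."""
    def h(t):
        # h(t) = exp(-1/t) for t > 0, and 0 otherwise
        return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)
    # denominator is strictly positive for every r, so phi is well defined
    return h(2.0 - r) / (h(2.0 - r) + h(r - 1.0))

r = np.linspace(0.0, 3.0, 3001)
phi = cutoff(r)
print(phi[r <= 1].min(), phi[r >= 2].max(), phi.min(), phi.max())
```

The same construction, applied to $r = |x|/1$ versus $|x|/R$, gives the radial cut-offs used here and in the proof of Theorem 1.3.4 below.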
The Proof of Theorem 1.3.3.
We will still use mollifiers to approximate the function. Compared to the proofs of the previous theorems, the main difficulty here is that for a point on the boundary, there is no room to mollify. To circumvent this, we cover $\Omega$ with finitely many open sets, and on each set that covers the boundary layer, we translate the function a little bit inward, so that there is room to mollify within $\Omega$. Then we again use a partition of unity to complete the proof.

Step 1. Approximating in a Small Set Covering $\partial\Omega$.
Let $x^o$ be any point on $\partial\Omega$. Since $\partial\Omega$ is $C^1$, we can make a $C^1$ change of coordinates locally, so that in the new coordinate system $(x_1, \cdots, x_n)$, we can express, for a sufficiently small $r > 0$,
$$B_r(x^o) \cap \Omega = \{x \in B_r(x^o) \mid x_n > \Phi(x_1, \cdots, x_{n-1})\}$$
with some $C^1$ function $\Phi$.
Set
$$D = \Omega \cap B_{r/2}(x^o).$$
Shift every point $x \in D$ in the $x_n$ direction by $a\epsilon$ units: define
$$x^\epsilon = x + a\epsilon e_n$$
and
$$D^\epsilon = \{x^\epsilon \mid x \in D\}.$$
This $D^\epsilon$ is obtained by shifting $D$ toward the inside of $\Omega$ by $a\epsilon$ units. Choose $a$ sufficiently large, so that the ball $B_\epsilon(x)$ lies in $\Omega \cap B_r(x^o)$ for all $x \in D^\epsilon$ and for all small $\epsilon > 0$. Now there is room to mollify a given $W^{k,p}$ function $f$ on $D^\epsilon$ within $\Omega$. More precisely, we first translate $f$ a distance $a\epsilon$ in the $x_n$ direction to become $f^\epsilon(x) = f(x^\epsilon)$, and then mollify it. We claim that
$$J_\epsilon f^\epsilon \rightarrow f \mbox{ in } W^{k,p}(D).$$
Actually, for any multi-index $|\alpha| \leq k$, we have
$$\|D^\alpha(J_\epsilon f^\epsilon) - D^\alpha f\|_{L^p(D)} \leq \|D^\alpha(J_\epsilon f^\epsilon) - D^\alpha f^\epsilon\|_{L^p(D)} + \|D^\alpha f^\epsilon - D^\alpha f\|_{L^p(D)}.$$
A similar argument as in the proof of Theorem 1.3.1 implies that the first term on the right hand side goes to zero as $\epsilon \rightarrow 0$, while the second term also vanishes in the process due to the continuity of translation in the $L^p$ norm.
Step 2. Applying the Partition of Unity.
Since $\partial\Omega$ is compact, we can find finitely many such sets $D$, call them $D_i$, $i = 1, 2, \cdots, N$, the union of which covers $\partial\Omega$. Given $\delta > 0$, from the argument in Step 1, for each $D_i$, there exists $g_i \in C^\infty(\bar{D}_i)$ such that
$$\|g_i - f\|_{W^{k,p}(D_i)} \leq \delta. \eqno(1.20)$$
Choose an open set $D_0 \subset\subset \Omega$ such that $\Omega \subset \cup_{i=0}^N D_i$, and select a function $g_0 \in C^\infty(\bar{D}_0)$ such that
$$\|g_0 - f\|_{W^{k,p}(D_0)} \leq \delta. \eqno(1.21)$$
Let $\{\eta_i\}$ be a smooth partition of unity subordinate to the open sets $\{D_i\}_{i=0}^N$. Define
$$g = \sum_{i=0}^N \eta_i g_i.$$
Then obviously $g \in C^\infty(\bar{\Omega})$, and $f = \sum_{i=0}^N \eta_i f$. Similar to the proof of Theorem 1.3.1, it follows from (1.20) and (1.21) that, for each $|\alpha| \leq k$,
$$\|D^\alpha g - D^\alpha f\|_{L^p(\Omega)} \leq \sum_{i=0}^N \|D^\alpha(\eta_i g_i) - D^\alpha(\eta_i f)\|_{L^p(D_i)} \leq C \sum_{i=0}^N \|g_i - f\|_{W^{k,p}(D_i)} = C(N+1)\delta.$$
This completes the proof of Theorem 1.3.3. $\Box$
The Proof of Theorem 1.3.4.
Let $\phi(r)$ be a $C^\infty_0$ cut-off function such that
$$\phi(r) = \begin{cases} 1, & \mbox{for } 0 \leq r \leq 1; \\ \mbox{between } 0 \mbox{ and } 1, & \mbox{for } 1 < r < 2; \\ 0, & \mbox{for } r \geq 2. \end{cases}$$
Then by a direct computation, one can show that
$$\left\| \phi\!\left(\frac{|x|}{R}\right) f(x) - f(x) \right\|_{W^{k,p}(R^n)} \rightarrow 0, \mbox{ as } R \rightarrow \infty. \eqno(1.22)$$
Thus, there exists a sequence of numbers $\{R_m\}$ with $R_m \rightarrow \infty$, such that for
$$g_m(x) := \phi\!\left(\frac{|x|}{R_m}\right) f(x),$$
we have
$$\|g_m - f\|_{W^{k,p}(R^n)} \leq \frac{1}{m}. \eqno(1.23)$$
On the other hand, from the Approximation Theorem 1.3.3, for each fixed $m$,
$$J_\epsilon(g_m) \rightarrow g_m, \mbox{ as } \epsilon \rightarrow 0, \mbox{ in } W^{k,p}(R^n).$$
Hence there exists $\epsilon_m$ such that, for
$$f_m := J_{\epsilon_m}(g_m),$$
we have
$$\|f_m - g_m\|_{W^{k,p}(R^n)} \leq \frac{1}{m}. \eqno(1.24)$$
Obviously, each function $f_m$ is in $C^\infty_0(R^n)$, and by (1.23) and (1.24),
$$\|f_m - f\|_{W^{k,p}(R^n)} \leq \|f_m - g_m\|_{W^{k,p}(R^n)} + \|g_m - f\|_{W^{k,p}(R^n)} \leq \frac{2}{m} \rightarrow 0, \mbox{ as } m \rightarrow \infty.$$
This completes the proof of Theorem 1.3.4. $\Box$
Now we have proved all four approximation theorems, which show that smooth functions are dense in the Sobolev spaces $W^{k,p}$. In other words, $W^{k,p}$ is the completion of $C^k$ under the norm $\|\cdot\|_{W^{k,p}}$. Later, particularly in the next three sections, when we derive various inequalities in Sobolev spaces, we can first work on smooth functions and then extend the results to the whole Sobolev space. In order to make such extensions more convenient, without going through an approximation process in each particular space, we prove the following Operator Completion Theorem in general Banach spaces.

Theorem 1.3.5 Let $D$ be a dense linear subspace of a normed space $X$, and let $Y$ be a Banach space. Assume that
$$T : D \rightarrow Y$$
is a bounded linear map. Then there exists an extension $\bar{T}$ of $T$ from $D$ to the whole space $X$, such that $\bar{T}$ is a bounded linear operator from $X$ to $Y$,
$$\|\bar{T}\| = \|T\|, \quad \mbox{and} \quad \bar{T}x = Tx \quad \forall x \in D.$$
Proof. Given any element $x^o \in X$, since $D$ is dense in $X$, there exists a sequence $\{x_i\} \subset D$ such that
$$x_i \rightarrow x^o, \mbox{ as } i \rightarrow \infty.$$
It follows that
$$\|Tx_i - Tx_j\| \leq \|T\| \, \|x_i - x_j\| \rightarrow 0, \mbox{ as } i, j \rightarrow \infty.$$
This implies that $\{Tx_i\}$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, $\{Tx_i\}$ converges to some element $y^o$ in $Y$. Let
$$\bar{T}x^o = y^o. \eqno(1.25)$$
To see that (1.25) is well defined, suppose there is another sequence $\{\tilde{x}_i\}$ that converges to $x^o$ in $X$ with $T\tilde{x}_i \rightarrow y^1$ in $Y$. Then
$$\|y^1 - y^o\| = \lim_{i \rightarrow \infty} \|T\tilde{x}_i - Tx_i\| \leq C \lim_{i \rightarrow \infty} \|\tilde{x}_i - x_i\| = 0.$$
Now, for $x \in D$, define $\bar{T}x = Tx$; and for other $x \in X$, define $\bar{T}x$ by (1.25). Obviously, $\bar{T}$ is a linear operator. Moreover,
$$\|\bar{T}x^o\| = \lim_{i \rightarrow \infty} \|Tx_i\| \leq \lim_{i \rightarrow \infty} \|T\| \, \|x_i\| = \|T\| \, \|x^o\|.$$
Hence $\bar{T}$ is also bounded. This completes the proof. $\Box$
As a direct application of this Operator Completion Theorem, we prove the following Extension Theorem.

Theorem 1.3.6 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Then for any open set $O \supset \Omega$, there exists a bounded linear operator $E : W^{k,p}(\Omega) \rightarrow W^{k,p}_0(O)$ such that $Eu = u$ a.e. in $\Omega$.

Proof. To define the extension operator $E$, we first work on functions $u \in C^k(\bar{\Omega})$. Then we can apply the Operator Completion Theorem to extend $E$ to $W^{k,p}(\Omega)$, since $C^k(\bar{\Omega})$ is dense in $W^{k,p}(\Omega)$ (see Theorem 1.3.3). From this density, it is easy to show that, for $u \in W^{k,p}(\Omega)$,
$$E(u)(x) = u(x) \mbox{ almost everywhere on } \Omega,$$
based on
$$E(u)(x) = u(x) \mbox{ on } \bar{\Omega} \quad \forall u \in C^k(\bar{\Omega}).$$
Now we prove the theorem for $u \in C^k(\bar{\Omega})$. We define $E(u)$ in the following two steps.

Step 1. The special case when $\Omega$ is the half ball
$$B^+_r(0) := \{x = (x', x_n) \in R^n \mid |x| < r, \ x_n > 0\}.$$
Assume $u \in C^k(\overline{B^+_r(0)})$. We extend $u$ to be a $C^k$ function on the whole ball $B_r(0)$. Define
$$\bar{u}(x', x_n) = \begin{cases} u(x', x_n), & x_n \geq 0 \\ \sum_{i=0}^k a_i \, u(x', -\lambda_i x_n), & x_n < 0. \end{cases}$$
To guarantee that all the partial derivatives $\frac{\partial^j \bar{u}(x', x_n)}{\partial x_n^j}$ up to order $k$ are continuous across the hyperplane $x_n = 0$, we first pick $\lambda_i$ such that
$$0 < \lambda_0 < \lambda_1 < \cdots < \lambda_k < 1.$$
Then we solve the algebraic system
$$\sum_{i=0}^k a_i (-\lambda_i)^j = 1, \quad j = 0, 1, \cdots, k,$$
to determine $a_0, a_1, \cdots, a_k$. Since the coefficient determinant $det\left((-\lambda_i)^j\right)$ is not zero, the system has a unique solution. One can easily verify that, for such $\lambda_i$ and $a_i$, the extended function $\bar{u}$ is in $C^k(B_r(0))$.
Step 2. Reduce general domains to the case of the half ball covered in Step 1 via domain transformations and a partition of unity.
Let $O$ be an open set containing $\bar{\Omega}$. Given any $x^o \in \partial\Omega$, there exists a neighborhood $U$ of $x^o$ and a $C^k$ transformation $\phi : U \rightarrow R^n$ which satisfies: $\phi(x^o) = 0$, $\phi(U \cap \partial\Omega)$ lies on the hyperplane $x_n = 0$, and $\phi(U \cap \Omega)$ is contained in $R^n_+$. Then there exists an $r_o > 0$ such that $B_{r_o}(0) \subset\subset \phi(U)$, $D_{x^o} := \phi^{-1}(B_{r_o}(0))$ is an open set containing $x^o$, and $\phi \in C^k(D_{x^o})$. Choose $r_o$ sufficiently small so that $D_{x^o} \subset O$. All such $D_{x^o}$, together with $\Omega$, form an open covering of $\bar{\Omega}$; hence there exists a finite subcovering
$$D_0 := \Omega, \ D_1, \cdots, D_m$$
and a corresponding partition of unity
$$\eta_0, \eta_1, \cdots, \eta_m,$$
such that
$$\eta_i \in C^\infty_0(D_i), \quad i = 0, 1, \cdots, m, \qquad \mbox{and} \qquad \sum_{i=0}^m \eta_i(x) = 1 \quad \forall x \in \Omega.$$
Let $\phi_i$ be the mapping associated with $D_i$ as described above, and let $\tilde{u}_i(y) = u(\phi_i^{-1}(y))$ be the function defined on the half ball
$$B^+_{r_i}(0) = \phi_i(D_i \cap \bar{\Omega}), \quad i = 1, 2, \cdots, m.$$
From Step 1, each $\tilde{u}_i(y)$ can be extended as a $C^k$ function $\bar{u}_i(y)$ onto the whole ball $B_{r_i}(0)$.
Now we can define the extension of $u$ from $\Omega$ to its neighborhood $O$ as
$$E(u) = \sum_{i=1}^m \eta_i(x) \bar{u}_i(\phi_i(x)) + \eta_0(x) u(x).$$
Obviously, $E(u) \in C^k_0(O)$, and
$$\|E(u)\|_{W^{k,p}(O)} \leq C \|u\|_{W^{k,p}(\Omega)}.$$
Moreover, for any $x \in \bar{\Omega}$, we have
$$E(u)(x) = \sum_{i=1}^m \eta_i(x) \bar{u}_i(\phi_i(x)) + \eta_0(x) u(x) = \sum_{i=1}^m \eta_i(x) \tilde{u}_i(\phi_i(x)) + \eta_0(x) u(x)$$
$$= \sum_{i=1}^m \eta_i(x) u(\phi_i^{-1}(\phi_i(x))) + \eta_0(x) u(x) = \sum_{i=1}^m \eta_i(x) u(x) + \eta_0(x) u(x) = \left( \sum_{i=0}^m \eta_i(x) \right) u(x) = u(x).$$
This completes the proof of the Extension Theorem. $\Box$
Remark 1.3.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem under a stronger assumption on $\partial\Omega$ (being $C^k$ instead of $C^1$). From the Extension Theorem, one can derive immediately

Corollary 1.3.1 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Let $O$ be an open set containing $\bar{\Omega}$, and let
$$O_\epsilon := \{x \in R^n \mid dist(x, O) \leq \epsilon\}.$$
Then the linear operator
$$J_\epsilon(E(u)) : W^{k,p}(\Omega) \rightarrow C^\infty_0(O_\epsilon)$$
is bounded, and for each $u \in W^{k,p}(\Omega)$, we have
$$\|J_\epsilon(E(u)) - u\|_{W^{k,p}(\Omega)} \rightarrow 0, \mbox{ as } \epsilon \rightarrow 0.$$
Here, as compared to the previous approximation theorems, the improvement is that one can write out the explicit form of the approximation.
1.4 Sobolev Embeddings
When we seek weak solutions of partial differential equations, we start with functions in a Sobolev space $W^{k,p}$. Then it is natural to ask whether the functions in this space also automatically belong to some other spaces. The following theorem answers this question and at the same time provides inequalities among the relevant norms.
Theorem 1.4.1 (General Sobolev inequalities).
Assume $\Omega$ is bounded and has a $C^1$ boundary. Let $u \in W^{k,p}(\Omega)$.
(i) If $k < \frac{n}{p}$, then $u \in L^q(\Omega)$ with $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$, and there exists a constant $C$ such that
$$\|u\|_{L^q(\Omega)} \leq C \|u\|_{W^{k,p}(\Omega)}.$$
(ii) If $k > \frac{n}{p}$, then $u \in C^{k - [\frac{n}{p}] - 1, \gamma}(\Omega)$, and there exists a constant $C$ such that
$$\|u\|_{C^{k - [\frac{n}{p}] - 1, \gamma}(\Omega)} \leq C \|u\|_{W^{k,p}(\Omega)},$$
where
$$\gamma = \begin{cases} 1 + \left[\frac{n}{p}\right] - \frac{n}{p}, & \mbox{if } \frac{n}{p} \mbox{ is not an integer}, \\ \mbox{any positive number} < 1, & \mbox{if } \frac{n}{p} \mbox{ is an integer}. \end{cases}$$
Here, $[b]$ is the integer part of the number $b$.
The proof of the theorem is based upon several simpler theorems. We first consider functions in $W^{1,p}(\Omega)$. From the definition, apparently these functions belong to $L^q(\Omega)$ for $1 \leq q \leq p$. Naturally, one would expect more, and what is more meaningful is to find out how large this $q$ can be. Moreover, to control the $L^q$ norm for larger $q$ by the $W^{1,p}$ norm, that is, by the norm of the derivatives
$$\|Du\|_{L^p(\Omega)} = \left( \int_\Omega |Du|^p \, dx \right)^{\frac{1}{p}},$$
would be more useful in practice.
For simplicity, we start with smooth functions with compact support in $R^n$. We would like to know for what values of $q$ we can establish the inequality
$$\|u\|_{L^q(R^n)} \leq C \|Du\|_{L^p(R^n)}, \eqno(1.26)$$
with constant $C$ independent of $u \in C^\infty_0(R^n)$. Now suppose (1.26) holds. Then it must also be true for the rescaled functions of $u$:
$$u_\lambda(x) = u(\lambda x),$$
that is,
$$\|u_\lambda\|_{L^q(R^n)} \leq C \|Du_\lambda\|_{L^p(R^n)}. \eqno(1.27)$$
By substitution, we have
$$\int_{R^n} |u_\lambda(x)|^q \, dx = \frac{1}{\lambda^n} \int_{R^n} |u(y)|^q \, dy, \quad \mbox{and} \quad \int_{R^n} |Du_\lambda|^p \, dx = \frac{\lambda^p}{\lambda^n} \int_{R^n} |Du(y)|^p \, dy.$$
It follows from (1.27) that
$$\|u\|_{L^q(R^n)} \leq C \lambda^{1 - \frac{n}{p} + \frac{n}{q}} \|Du\|_{L^p(R^n)}.$$
Therefore, in order that $C$ be independent of $u$, it is necessary that the power of $\lambda$ here be zero, that is,
$$q = \frac{np}{n-p}.$$
It turns out that this condition is also sufficient, as stated in the following theorem. Here, for $q$ to be positive, we must require $p < n$.
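The dimensional-analysis step above can also be observed numerically. In the sketch below (our own; the Gaussian and the grid are arbitrary choices), we take $n = 2$, $p = 3/2$, so that $q = np/(n-p) = 6$, and check that the ratio $\|u_\lambda\|_{L^q} / \|Du_\lambda\|_{L^p}$ is nearly independent of $\lambda$, as the vanishing exponent $1 - n/p + n/q = 0$ predicts.

```python
import numpy as np

# For u_lam(x) = u(lam x) in R^2, the ratio ||u_lam||_q / ||Du_lam||_p is
# lambda-independent exactly when 1 - n/p + n/q = 0, i.e. q = n p / (n - p).
n, p, q = 2, 1.5, 6.0                    # q = n*p/(n - p) = 6
h = 0.02
ax = np.arange(-6.0, 6.0, h)
X, Y = np.meshgrid(ax, ax)

def norms(lam):
    U = np.exp(-lam ** 2 * (X ** 2 + Y ** 2))    # u_lam for u(x) = exp(-|x|^2)
    gx, gy = np.gradient(U, h)                   # finite-difference gradient
    Lq = (np.sum(U ** q) * h * h) ** (1 / q)
    Dp = (np.sum(np.hypot(gx, gy) ** p) * h * h) ** (1 / p)
    return Lq, Dp

q1, d1 = norms(1.0)
q2, d2 = norms(2.0)
r1, r2 = q1 / d1, q2 / d2
print(r1, r2)                                    # nearly equal ratios
```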
Theorem 1.4.2 (Gagliardo-Nirenberg-Sobolev inequality).
Assume that $1 \leq p < n$. Then there exists a constant $C = C(p, n)$ such that
$$\|u\|_{L^{p^*}(R^n)} \leq C \|Du\|_{L^p(R^n)}, \quad u \in C^1_0(R^n), \eqno(1.28)$$
where $p^* = \frac{np}{n-p}$.
Since any function in $W^{1,p}(R^n)$ can be approximated by a sequence of functions in $C^1_0(R^n)$, we derive immediately

Corollary 1.4.1 Inequality (1.28) holds for all functions $u$ in $W^{1,p}(R^n)$.

For functions in $W^{1,p}(\Omega)$, we can extend them to be $W^{1,p}(R^n)$ functions by the Extension Theorem and arrive at

Theorem 1.4.3 Assume that $\Omega$ is a bounded, open subset of $R^n$ with $C^1$ boundary. Suppose that $1 \leq p < n$ and $u \in W^{1,p}(\Omega)$. Then $u$ is in $L^{p^*}(\Omega)$ and there exists a constant $C = C(p, n, \Omega)$ such that
$$\|u\|_{L^{p^*}(\Omega)} \leq C \|u\|_{W^{1,p}(\Omega)}. \eqno(1.29)$$
In the limiting case as $p \rightarrow n$, $p^* := \frac{np}{n-p} \rightarrow \infty$. Then one may suspect that $u \in L^\infty$ when $p = n$. Unfortunately, this is true only in dimension one. For $n = 1$, from
$$u(x) = \int_{-\infty}^x u'(t) \, dt,$$
we derive immediately that
$$|u(x)| \leq \int_{-\infty}^\infty |u'(t)| \, dt.$$
However, for $n > 1$, it is false. One counterexample is $u = \log\log(1 + \frac{1}{|x|})$ on $\Omega = B_1(0)$. It belongs to $W^{1,n}(\Omega)$, but not to $L^\infty(\Omega)$. This is a more delicate situation, and we will deal with it later.
Naturally, for $p > n$, one would expect $W^{1,p}$ to embed into better spaces. To get some rough idea of what these spaces might be, let us first consider the simplest case when $n = 1$ and $p > 1$. Obviously, for any $x, y \in R^1$ with $x < y$, we have
$$u(y) - u(x) = \int_x^y u'(t) \, dt,$$
and consequently, by the H\"older inequality,
$$|u(y) - u(x)| \leq \int_x^y |u'(t)| \, dt \leq \left( \int_x^y |u'(t)|^p \, dt \right)^{\frac{1}{p}} \left( \int_x^y dt \right)^{1 - \frac{1}{p}}.$$
It follows that
$$\frac{|u(y) - u(x)|}{|y - x|^{1 - \frac{1}{p}}} \leq \left( \int_{-\infty}^\infty |u'(t)|^p \, dt \right)^{\frac{1}{p}}.$$
Taking the supremum over all pairs $x, y$ in $R^1$, the left hand side of the above inequality is the norm in the H\"older space $C^{0,\gamma}(R^1)$ with $\gamma = 1 - \frac{1}{p}$. This is indeed true in general, and we have

Theorem 1.4.4 (Morrey's inequality).
Assume $n < p \leq \infty$. Then there exists a constant $C = C(p, n)$ such that
$$\|u\|_{C^{0,\gamma}(R^n)} \leq C \|u\|_{W^{1,p}(R^n)}, \quad u \in C^1(R^n),$$
where $\gamma = 1 - \frac{n}{p}$.
To establish an inequality like (1.28), we follow two basic principles.
First, we consider the special case where $\Omega = R^n$. Instead of dealing with functions in $W^{k,p}$, we reduce the proof to functions with enough smoothness (this is just Theorem 1.4.2).
Secondly, we deal with general domains by extending functions $u \in W^{1,p}(\Omega)$ to $W^{1,p}(R^n)$ via the Extension Theorem.
Here, one sees that Theorem 1.4.2 is the "key", and inequality (1.28) there provides the foundation for the proof of the Sobolev embedding. Often, the proof of (1.28) is called "hard analysis", and the steps leading from (1.28) to (1.29) are called "soft analysis". In the following, we will show the "soft" parts first and then the "hard" ones. First, we will assume that Theorem 1.4.2 is true and derive Corollary 1.4.1 and Theorem 1.4.3. Then, we will prove Theorem 1.4.2.
The Proof of Corollary 1.4.1.
Given any function $u \in W^{1,p}(R^n)$, by the Approximation Theorem, there exists a sequence $\{u_k\} \subset C^\infty_0(R^n)$ such that $\|u - u_k\|_{W^{1,p}(R^n)} \rightarrow 0$ as $k \rightarrow \infty$. Applying Theorem 1.4.2, we obtain
$$\|u_i - u_j\|_{L^{p^*}(R^n)} \leq C \|u_i - u_j\|_{W^{1,p}(R^n)} \rightarrow 0, \mbox{ as } i, j \rightarrow \infty.$$
Thus $\{u_k\}$ also converges to $u$ in $L^{p^*}(R^n)$. Consequently, we arrive at
$$\|u\|_{L^{p^*}(R^n)} = \lim \|u_k\|_{L^{p^*}(R^n)} \leq \lim C \|u_k\|_{W^{1,p}(R^n)} = C \|u\|_{W^{1,p}(R^n)}.$$
This completes the proof of the Corollary. $\Box$
The Proof of Theorem 1.4.3.
Now, for functions in $W^{1,p}(\Omega)$, to apply inequality (1.28), we first extend them to be functions with compact support in $R^n$. More precisely, let $O$ be an open set that covers $\bar{\Omega}$. By the Extension Theorem (Theorem 1.3.6), for every $u$ in $W^{1,p}(\Omega)$, there exists a function $\tilde{u}$ in $W^{1,p}_0(O)$, such that
$$\tilde{u} = u, \mbox{ almost everywhere in } \Omega;$$
moreover, there exists a constant $C_1 = C_1(p, n, \Omega, O)$, such that
$$\|\tilde{u}\|_{W^{1,p}(O)} \leq C_1 \|u\|_{W^{1,p}(\Omega)}. \eqno(1.30)$$
Now we can apply the Gagliardo-Nirenberg-Sobolev inequality to $\tilde{u}$ to derive
$$\|u\|_{L^{p^*}(\Omega)} \leq \|\tilde{u}\|_{L^{p^*}(O)} \leq C \|\tilde{u}\|_{W^{1,p}(O)} \leq C C_1 \|u\|_{W^{1,p}(\Omega)}.$$
This completes the proof of the Theorem. $\Box$
The Proof of Theorem 1.4.2.
We first establish the inequality for $p = 1$; i.e., we prove
$$\left( \int_{R^n} |u|^{\frac{n}{n-1}} \, dx \right)^{\frac{n-1}{n}} \leq \int_{R^n} |Du| \, dx. \eqno(1.31)$$
Then we will apply (1.31) to $|u|^\gamma$ for a properly chosen $\gamma > 1$ to extend the inequality to the case $p > 1$.
We need

Lemma 1.4.1 (General H\"older Inequality) Assume that
$$u_i \in L^{p_i}(\Omega) \ \mbox{ for } i = 1, 2, \cdots, m, \quad \mbox{and} \quad \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_m} = 1.$$
Then
$$\int_\Omega |u_1 u_2 \cdots u_m| \, dx \leq \prod_{i=1}^m \left( \int_\Omega |u_i|^{p_i} \, dx \right)^{\frac{1}{p_i}}. \eqno(1.32)$$
The proof can be obtained by applying induction to the usual H\"older inequality for two functions.
Now we are ready to prove the theorem.
Step 1. The case $p = 1$.
To better illustrate the idea, we first derive inequality (1.31) for $n = 2$; that is, we prove
$$\int_{R^2} |u(x)|^2 \, dx \leq \left( \int_{R^2} |Du| \, dx \right)^2. \eqno(1.33)$$
Since $u$ has compact support, we have
$$u(x) = \int_{-\infty}^{x_1} \frac{\partial u}{\partial y_1}(y_1, x_2) \, dy_1.$$
It follows that
$$|u(x)| \leq \int_{-\infty}^\infty |Du(y_1, x_2)| \, dy_1.$$
Similarly, we have
$$|u(x)| \leq \int_{-\infty}^\infty |Du(x_1, y_2)| \, dy_2.$$
The above two inequalities together imply that
$$|u(x)|^2 \leq \int_{-\infty}^\infty |Du(y_1, x_2)| \, dy_1 \int_{-\infty}^\infty |Du(x_1, y_2)| \, dy_2.$$
Now, integrating both sides of the above inequality with respect to $x_1$ and $x_2$ from $-\infty$ to $\infty$, we arrive at (1.33).
Then we deal with the general situation when $n > 2$. We write
$$u(x) = \int_{-\infty}^{x_i} \frac{\partial u}{\partial y_i}(x_1, \cdots, x_{i-1}, y_i, x_{i+1}, \cdots, x_n) \, dy_i, \quad i = 1, 2, \cdots, n.$$
Consequently,
$$|u(x)| \leq \int_{-\infty}^\infty |Du(x_1, \cdots, y_i, \cdots, x_n)| \, dy_i, \quad i = 1, 2, \cdots, n.$$
And it follows that
$$|u(x)|^{\frac{n}{n-1}} \leq \prod_{i=1}^n \left( \int_{-\infty}^\infty |Du(x_1, \cdots, y_i, \cdots, x_n)| \, dy_i \right)^{\frac{1}{n-1}}.$$
Integrating both sides with respect to $x_1$ and applying the general H\"older inequality (1.32), we obtain
$$\int_{-\infty}^\infty |u|^{\frac{n}{n-1}} \, dx_1 \leq \left( \int_{-\infty}^\infty |Du| \, dy_1 \right)^{\frac{1}{n-1}} \int_{-\infty}^\infty \prod_{i=2}^n \left( \int_{-\infty}^\infty |Du| \, dy_i \right)^{\frac{1}{n-1}} dx_1$$
$$\leq \left( \int_{-\infty}^\infty |Du| \, dy_1 \right)^{\frac{1}{n-1}} \prod_{i=2}^n \left( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du| \, dx_1 dy_i \right)^{\frac{1}{n-1}}.$$
Then we integrate the above inequality with respect to $x_2$. We have
$$\int_{-\infty}^\infty \int_{-\infty}^\infty |u|^{\frac{n}{n-1}} \, dx_1 dx_2 \leq \left( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du| \, dx_1 dy_2 \right)^{\frac{1}{n-1}} \int_{-\infty}^\infty \left[ \left( \int_{-\infty}^\infty |Du| \, dy_1 \right)^{\frac{1}{n-1}} \prod_{i=3}^n \left( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du| \, dx_1 dy_i \right)^{\frac{1}{n-1}} \right] dx_2.$$
Again, applying the general H\"older inequality, we arrive at
$$\int_{-\infty}^\infty \int_{-\infty}^\infty |u|^{\frac{n}{n-1}} \, dx_1 dx_2 \leq \left( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du| \, dy_1 dx_2 \right)^{\frac{1}{n-1}} \left( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du| \, dx_1 dy_2 \right)^{\frac{1}{n-1}} \prod_{i=3}^n \left( \int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty |Du| \, dx_1 dx_2 dy_i \right)^{\frac{1}{n-1}}.$$
Continuing in this way by integrating with respect to $x_3, \cdots, x_{n-1}$, we deduce
$$\int_{-\infty}^\infty \cdots \int_{-\infty}^\infty |u|^{\frac{n}{n-1}} \, dx_1 \cdots dx_{n-1} \leq \left( \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty |Du| \, dy_1 dx_2 \cdots dx_{n-1} \right)^{\frac{1}{n-1}} \left( \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty |Du| \, dx_1 dy_2 dx_3 \cdots dx_{n-1} \right)^{\frac{1}{n-1}}$$
$$\cdots \left( \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty |Du| \, dx_1 \cdots dx_{n-2} dy_{n-1} \right)^{\frac{1}{n-1}} \left( \int_{R^n} |Du| \, dx \right)^{\frac{1}{n-1}}.$$
Finally, integrating both sides with respect to $x_n$ and applying the general H\"older inequality, we obtain
$$\int_{R^n} |u|^{\frac{n}{n-1}} \, dx \leq \left( \int_{R^n} |Du| \, dx \right)^{\frac{n}{n-1}}.$$
This verifies (1.31).

Exercise 1.4.1 Write out your own proof with all details for the cases $n = 3$ and $n = 4$.
Step 2. The case $p > 1$.
Applying (1.31) to the function $|u|^\gamma$ with $\gamma > 1$ to be chosen later, and using the H\"older inequality, we have
$$\left( \int_{R^n} |u|^{\frac{\gamma n}{n-1}} \, dx \right)^{\frac{n-1}{n}} \leq \int_{R^n} |D(|u|^\gamma)| \, dx = \gamma \int_{R^n} |u|^{\gamma - 1} |Du| \, dx \leq \gamma \left( \int_{R^n} |u|^{\frac{\gamma n}{n-1}} \, dx \right)^{\frac{(\gamma-1)(n-1)}{\gamma n}} \left( \int_{R^n} |Du|^{\frac{\gamma n}{\gamma + n - 1}} \, dx \right)^{\frac{\gamma + n - 1}{\gamma n}}.$$
It follows that
$$\left( \int_{R^n} |u|^{\frac{\gamma n}{n-1}} \, dx \right)^{\frac{n-1}{\gamma n}} \leq \gamma \left( \int_{R^n} |Du|^{\frac{\gamma n}{\gamma + n - 1}} \, dx \right)^{\frac{\gamma + n - 1}{\gamma n}}.$$
Now choose $\gamma$ so that
$$\frac{\gamma n}{\gamma + n - 1} = p, \quad \mbox{that is,} \quad \gamma = \frac{p(n-1)}{n-p}.$$
We then obtain
$$\left( \int_{R^n} |u|^{\frac{np}{n-p}} \, dx \right)^{\frac{n-p}{np}} \leq \gamma \left( \int_{R^n} |Du|^p \, dx \right)^{\frac{1}{p}}.$$
This completes the proof of the Theorem. $\Box$
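The case $p = 1$, $n = 2$ of (1.31), namely $(\int_{R^2} |u|^2 dx)^{1/2} \leq \int_{R^2} |Du| \, dx$, can be sanity-checked on a single compactly supported bump. The discretization below is our own and of course verifies only this one instance, not the inequality itself.

```python
import numpy as np

# One instance of (1.31) with n = 2:  (int |u|^2 dx)^{1/2} <= int |Du| dx,
# tested on the standard compactly supported bump on the unit disk.
h = 0.01
ax = np.arange(-1.5, 1.5, h)
X, Y = np.meshgrid(ax, ax)
R2 = X ** 2 + Y ** 2
U = np.where(R2 < 1.0, np.exp(-1.0 / np.maximum(1.0 - R2, 1e-12)), 0.0)

gx, gy = np.gradient(U, h)                 # finite-difference gradient
lhs = np.sqrt(np.sum(U ** 2) * h * h)      # L^2 norm of u
rhs = np.sum(np.hypot(gx, gy)) * h * h     # L^1 norm of |Du|
print(lhs, rhs)
```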
The Proof of Theorem 1.4.4 (Morrey's inequality).
We will establish two inequalities:
$$\sup_{R^n} |u| \leq C \|u\|_{W^{1,p}(R^n)}, \eqno(1.34)$$
and
$$\sup_{x \neq y} \frac{|u(x) - u(y)|}{|x - y|^{1 - n/p}} \leq C \|Du\|_{L^p(R^n)}. \eqno(1.35)$$
Both of them can be derived from the following:
$$\frac{1}{|B_r(x)|} \int_{B_r(x)} |u(y) - u(x)| \, dy \leq C \int_{B_r(x)} \frac{|Du(y)|}{|y - x|^{n-1}} \, dy, \eqno(1.36)$$
where $|B_r(x)|$ is the volume of $B_r(x)$. We will carry out the proof in three steps. In Step 1, we prove (1.36); in Steps 2 and 3, we verify (1.34) and (1.35), respectively.
Step 1. We start from
$$u(y) - u(x) = \int_0^1 \frac{d}{dt} u(x + t(y-x)) \, dt = \int_0^s Du(x + \tau\omega) \cdot \omega \, d\tau,$$
where
$$\omega = \frac{y - x}{|x - y|}, \quad s = |x - y|, \quad \mbox{and hence} \quad y = x + s\omega.$$
It follows that
$$|u(x + s\omega) - u(x)| \leq \int_0^s |Du(x + \tau\omega)| \, d\tau.$$
Integrating both sides with respect to $\omega$ on the unit sphere $\partial B_1(0)$, and then converting the integral on the right hand side from polar to rectangular coordinates, we obtain
$$\int_{\partial B_1(0)} |u(x + s\omega) - u(x)| \, d\sigma \leq \int_0^s \int_{\partial B_1(0)} |Du(x + \tau\omega)| \, d\sigma d\tau = \int_0^s \int_{\partial B_1(0)} \frac{|Du(x + \tau\omega)|}{\tau^{n-1}} \tau^{n-1} \, d\sigma d\tau = \int_{B_s(x)} \frac{|Du(z)|}{|x - z|^{n-1}} \, dz.$$
Multiplying both sides by $s^{n-1}$, integrating with respect to $s$ from $0$ to $r$, and taking into account that the integrand on the right hand side is nonnegative, we arrive at
$$\int_{B_r(x)} |u(y) - u(x)| \, dy \leq \frac{r^n}{n} \int_{B_r(x)} \frac{|Du(y)|}{|x - y|^{n-1}} \, dy.$$
This verifies (1.36).
Step 2. For each fixed $x \in R^n$, we have
$$|u(x)| = \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x)| \, dy \leq \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x) - u(y)| \, dy + \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(y)| \, dy = I_1 + I_2. \eqno(1.37)$$
By (1.36) and the H\"older inequality, we deduce
$$I_1 \leq C \int_{B_1(x)} \frac{|Du(y)|}{|x - y|^{n-1}} \, dy \leq C \left( \int_{R^n} |Du|^p \, dy \right)^{1/p} \left( \int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}} \right)^{\frac{p-1}{p}} \leq C_1 \left( \int_{R^n} |Du|^p \, dy \right)^{1/p}. \eqno(1.38)$$
Here we have used the condition $p > n$, so that $\frac{(n-1)p}{p-1} < n$, and hence the integral
$$\int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}}$$
is finite.
It is also obvious that
$$I_2 \leq C \|u\|_{L^p(R^n)}. \eqno(1.39)$$
Now (1.34) is an immediate consequence of (1.37), (1.38), and (1.39).
Step 3. For any pair of fixed points $x$ and $y$ in $R^n$, let $r = |x - y|$ and $D = B_r(x) \cap B_r(y)$. Then
$$|u(x) - u(y)| \leq \frac{1}{|D|} \int_D |u(x) - u(z)| \, dz + \frac{1}{|D|} \int_D |u(z) - u(y)| \, dz. \eqno(1.40)$$
Again by (1.36), we have
$$\frac{1}{|D|} \int_D |u(x) - u(z)| \, dz \leq \frac{C}{|B_r(x)|} \int_{B_r(x)} |u(x) - u(z)| \, dz \leq C \left( \int_{R^n} |Du|^p \, dz \right)^{1/p} \left( \int_{B_r(x)} \frac{dz}{|x - z|^{\frac{(n-1)p}{p-1}}} \right)^{\frac{p-1}{p}} = C r^{1 - n/p} \|Du\|_{L^p(R^n)}. \eqno(1.41)$$
Similarly,
$$\frac{1}{|D|} \int_D |u(z) - u(y)| \, dz \leq C r^{1 - n/p} \|Du\|_{L^p(R^n)}. \eqno(1.42)$$
Now (1.40), (1.41), and (1.42) yield
$$|u(x) - u(y)| \leq C |x - y|^{1 - n/p} \|Du\|_{L^p(R^n)}.$$
This implies (1.35) and thus completes the proof of the Theorem. $\Box$
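For $n = 1$ the estimate (1.35) is exactly the Hölder-quotient bound derived before Theorem 1.4.4, and it is easy to test on a smooth function. The sample function and the two grid points below are arbitrary choices of ours; the check illustrates one instance, not the full supremum.

```python
import numpy as np

# 1-D instance of (1.35): |u(x) - u(y)| <= |x - y|^{1 - 1/p} * ||u'||_{L^p},
# here on (0, 1) with p = 4 > n = 1.
p = 4.0
h = 1e-4
x = np.arange(0.0, 1.0 + h, h)
u = np.sin(3 * np.pi * x) * x ** 2
du = np.gradient(u, h)                              # approximate u'
dup = (np.sum(np.abs(du) ** p) * h) ** (1 / p)      # ||u'||_{L^p(0,1)}

i, j = 1500, 9000                                   # x = 0.15 and x = 0.9
quot = abs(u[i] - u[j]) / abs(x[i] - x[j]) ** (1 - 1 / p)
print(quot, dup)
```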
The Proof of Theorem 1.4.1 (the General Sobolev Inequality).
(i) Assume that $u \in W^{k,p}(\Omega)$ with $k < \frac{n}{p}$. We want to show that $u \in L^q(\Omega)$ and
$$\|u\|_{L^q(\Omega)} \leq C \|u\|_{W^{k,p}(\Omega)}, \eqno(1.43)$$
with $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$. This can be done by applying Theorem 1.4.3 successively $k$ times. Again denote $p^* = \frac{np}{n-p}$; then $\frac{1}{p^*} = \frac{1}{p} - \frac{1}{n}$. For $|\alpha| \leq k - 1$, $D^\alpha u \in W^{1,p}(\Omega)$. By Theorem 1.4.3, we have $D^\alpha u \in L^{p^*}(\Omega)$; thus $u \in W^{k-1,p^*}(\Omega)$ and
$$\|u\|_{W^{k-1,p^*}(\Omega)} \leq C_1 \|u\|_{W^{k,p}(\Omega)}.$$
Applying Theorem 1.4.3 again to $W^{k-1,p^*}(\Omega)$, we have $u \in W^{k-2,p^{**}}(\Omega)$ and
$$\|u\|_{W^{k-2,p^{**}}(\Omega)} \leq C_2 \|u\|_{W^{k-1,p^*}(\Omega)} \leq C_2 C_1 \|u\|_{W^{k,p}(\Omega)},$$
where $p^{**} = \frac{np^*}{n - p^*}$, or
$$\frac{1}{p^{**}} = \frac{1}{p^*} - \frac{1}{n} = \left( \frac{1}{p} - \frac{1}{n} \right) - \frac{1}{n} = \frac{1}{p} - \frac{2}{n}.$$
Continuing this way $k$ times, we arrive at (1.43).
(ii) Now assume that $k > \frac{n}{p}$. Recall that in Morrey's inequality
$$\|u\|_{C^{0,\gamma}(\Omega)} \leq C \|u\|_{W^{1,p}(\Omega)}, \eqno(1.44)$$
we require that $p > n$. However, in our situation, this condition is not necessarily met. To remedy this, we can use the result in the previous step: we first decrease the order of differentiation to increase the power of summability. More precisely, we will try to find the smallest integer $m$ such that
$$W^{k,p}(\Omega) \hookrightarrow W^{k-m,q}(\Omega)$$
with $q > n$. That is, we want
$$q = \frac{np}{n - mp} > n,$$
or equivalently,
$$m > \frac{n}{p} - 1.$$
Obviously, the smallest such integer $m$ is
$$m = \left[ \frac{n}{p} \right].$$
For this choice of $m$, we can apply Morrey's inequality (1.44) to $D^\alpha u$, with any $|\alpha| \leq k - m - 1$, to obtain
$$\|D^\alpha u\|_{C^{0,\gamma}(\Omega)} \leq C_1 \|u\|_{W^{k-m,q}(\Omega)} \leq C_1 C_2 \|u\|_{W^{k,p}(\Omega)},$$
or equivalently,
$$\|u\|_{C^{k - [\frac{n}{p}] - 1, \gamma}(\Omega)} \leq C \|u\|_{W^{k,p}(\Omega)}.$$
Here, when $\frac{n}{p}$ is not an integer,
$$\gamma = 1 - \frac{n}{q} = 1 + \left[ \frac{n}{p} \right] - \frac{n}{p}.$$
When $\frac{n}{p}$ is an integer, we have $m = \frac{n}{p}$, and in this case $q$ can be any number $> n$, which implies that $\gamma$ can be any positive number $< 1$.
This completes the proof of the Theorem. $\Box$
1.5 Compact Embedding
In the previous section, we proved that $W^{1,p}(\Omega)$ is embedded into $L^{p^*}(\Omega)$ with $p^* = \frac{np}{n-p}$. Obviously, for $q < p^*$, the embedding of $W^{1,p}(\Omega)$ into $L^q(\Omega)$ is still true if the region $\Omega$ is bounded. Actually, due to the strict inequality on the exponent, one can expect more, as stated below.

Theorem 1.5.1 (Rellich-Kondrachov Compact Embedding).
Assume that $\Omega$ is a bounded open subset in $R^n$ with $C^1$ boundary $\partial\Omega$, and suppose $1 \leq p < n$. Then for each $1 \leq q < p^*$, $W^{1,p}(\Omega)$ is compactly embedded into $L^q(\Omega)$:
$$W^{1,p}(\Omega) \hookrightarrow\hookrightarrow L^q(\Omega),$$
in the sense that
i) there is a constant $C$ such that
$$\|u\|_{L^q(\Omega)} \leq C \|u\|_{W^{1,p}(\Omega)}, \quad \forall u \in W^{1,p}(\Omega); \eqno(1.45)$$
and
ii) every bounded sequence in $W^{1,p}(\Omega)$ possesses a convergent subsequence in $L^q(\Omega)$.
Proof. The first part, inequality (1.45), is included in the general Embedding Theorem. What we need to show is that if $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$, then it possesses a convergent subsequence in $L^q(\Omega)$. This can be derived immediately from the following

Lemma 1.5.1 Every bounded sequence in $W^{1,1}(\Omega)$ possesses a convergent subsequence in $L^1(\Omega)$.

We postpone the proof of the Lemma for a moment and first see how it implies the Theorem.
In fact, assume that $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$. Then there exists a subsequence (still denoted by $\{u_k\}$) which converges weakly to an element $u_o$ in $W^{1,p}(\Omega)$. By the Sobolev embedding, $\{u_k\}$ is bounded in $L^{p^*}(\Omega)$. On the other hand, it is also bounded in $W^{1,1}(\Omega)$, since $p \geq 1$ and $\Omega$ is bounded. Now, by Lemma 1.5.1, there is a subsequence (still denoted by $\{u_k\}$) that converges strongly to $u_o$ in $L^1(\Omega)$. Applying the interpolating H\"older inequality
$$\|u_k - u_o\|_{L^q(\Omega)} \leq \|u_k - u_o\|^\theta_{L^1(\Omega)} \, \|u_k - u_o\|^{1-\theta}_{L^{p^*}(\Omega)},$$
we conclude immediately that $\{u_k\}$ converges strongly to $u_o$ in $L^q(\Omega)$. This proves the Theorem.
We now come back to prove the Lemma. Let $\{u_k\}$ be a bounded sequence in $W^{1,1}(\Omega)$. We will show the strong convergence of this sequence in three steps, with the help of a family of mollifications
$$u_k^\epsilon(x) = \int_\Omega j_\epsilon(y) u_k(x - y) \, dy.$$
First, we show that
$$u_k^\epsilon \rightarrow u_k \mbox{ in } L^1(\Omega) \mbox{ as } \epsilon \rightarrow 0, \mbox{ uniformly in } k. \eqno(1.46)$$
Then, for each fixed $\epsilon > 0$, we prove that
$$\mbox{there is a subsequence of } \{u_k^\epsilon\} \mbox{ which converges uniformly.} \eqno(1.47)$$
Finally, corresponding to the above convergent sequence $\{u_k^\epsilon\}$, we extract diagonally a subsequence of $\{u_k\}$ which converges strongly in $L^1(\Omega)$.
Based on the Extension Theorem, we may assume, without loss of generality, that $\Omega = R^n$, the functions $\{u_k\}$ all have compact support in a bounded open set $G \subset R^n$, and
$$\|u_k\|_{W^{1,1}(G)} \leq C < \infty, \mbox{ for all } k = 1, 2, \cdots.$$
Since every $W^{1,1}$ function can be approximated by a sequence of smooth functions, we may also assume that each $u_k$ is smooth.
Step 1. From the properties of mollifiers, we have
\[
u_k^\epsilon(x) - u_k(x) = \int_{B_1(0)} j(y) \left[ u_k(x - \epsilon y) - u_k(x) \right] dy
= \int_{B_1(0)} j(y) \int_0^1 \frac{d}{dt} u_k(x - \epsilon t y) \, dt \, dy
= -\epsilon \int_{B_1(0)} j(y) \int_0^1 Du_k(x - \epsilon t y) \cdot y \, dt \, dy.
\]
Integrating with respect to $x$ and changing the order of integration, we obtain
\[
\|u_k^\epsilon - u_k\|_{L^1(G)} \leq \epsilon \int_{B_1(0)} j(y) \int_0^1 \int_G |Du_k(x - \epsilon t y)| \, dx \, dt \, dy
\leq \epsilon \int_G |Du_k(z)| \, dz \leq \epsilon \|u_k\|_{W^{1,1}(G)}. \quad (1.48)
\]
It follows that
\[
\|u_k^\epsilon - u_k\|_{L^1(G)} \to 0, \ \mbox{ as } \epsilon \to 0, \ \mbox{ uniformly in } k. \quad (1.49)
\]
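Estimate (1.48) can be watched in action numerically. The following sketch (our own illustration, not from the book) mollifies a smooth compactly supported bump with the standard bump-function mollifier at a fixed $\epsilon$ and checks that the discrete $L^1$ error stays below $\epsilon \|u\|_{W^{1,1}}$.

```python
import numpy as np

def bump(t):
    """Smooth bump exp(-1/(1-t^2)) supported on (-1, 1)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

dx = 1e-3
x = np.arange(-3.0, 3.0, dx)
u = bump(x)                          # smooth, compact support in [-1, 1]

eps = 0.1
half = int(round(eps / dx))
y = np.arange(-half, half + 1) * dx  # kernel grid, support [-eps, eps]
j_eps = bump(y / eps)
j_eps /= j_eps.sum() * dx            # normalize so the kernel integrates to 1

# Mollification u^eps = j_eps * u (discrete convolution)
u_eps = np.convolve(u, j_eps, mode="same") * dx

err_l1 = np.sum(np.abs(u_eps - u)) * dx
w11_norm = (np.abs(u).sum() + np.abs(np.gradient(u, dx)).sum()) * dx
```

In practice the error is far below the bound (mollification of a smooth function is second order in $\epsilon$); (1.48) only uses first order, which is what makes it uniform in $k$.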
Step 2. Now fix an $\epsilon > 0$. For all $x \in R^n$ and for all $k = 1, 2, \cdots$, we have
\[
|u_k^\epsilon(x)| \leq \int_{B_\epsilon(x)} j_\epsilon(x-y) |u_k(y)| \, dy
\leq \|j_\epsilon\|_{L^\infty(R^n)} \|u_k\|_{L^1(G)} \leq \frac{C}{\epsilon^n} < \infty. \quad (1.50)
\]
Similarly,
\[
|Du_k^\epsilon(x)| \leq \frac{C}{\epsilon^{n+1}} < \infty. \quad (1.51)
\]
(1.50) and (1.51) imply that, for each fixed $\epsilon > 0$, the sequence $\{u_k^\epsilon\}$ is uniformly bounded and equicontinuous. Therefore, by the Arzel\`a--Ascoli Theorem (see the Appendix), it possesses a subsequence (still denoted by $\{u_k^\epsilon\}$) which converges uniformly on $G$; in particular,
\[
\lim_{j,i \to \infty} \|u_j^\epsilon - u_i^\epsilon\|_{L^1(G)} = 0. \quad (1.52)
\]
Step 3. Now choose $\epsilon$ to be $1, 1/2, 1/3, \cdots, 1/k, \cdots$ successively, and denote the corresponding subsequences that satisfy (1.52) by
\[
u^{1/k}_{k1}, \; u^{1/k}_{k2}, \; u^{1/k}_{k3}, \; \cdots
\]
for $k = 1, 2, 3, \cdots$. Pick the diagonal subsequence from the above:
\[
\{u^{1/i}_{ii}\} \subset \{u_k^\epsilon\}.
\]
Then select the corresponding subsequence from $\{u_k\}$:
\[
u_{11}, \; u_{22}, \; u_{33}, \; \cdots.
\]
This is our desired subsequence, because
\[
\|u_{ii} - u_{jj}\|_{L^1(G)} \leq \|u_{ii} - u^{1/i}_{ii}\|_{L^1(G)} + \|u^{1/i}_{ii} - u^{1/j}_{jj}\|_{L^1(G)} + \|u^{1/j}_{jj} - u_{jj}\|_{L^1(G)} \to 0, \ \mbox{ as } i, j \to \infty,
\]
due to the fact that each of the norms on the right hand side $\to 0$.

This completes the proof of the Lemma and hence the Theorem. $\square$
1.6 Other Basic Inequalities

1.6.1 Poincaré's Inequality

For functions that vanish on the boundary, we have

Theorem 1.6.1 (Poincaré's Inequality I).
Assume $\Omega$ is bounded. Suppose $u \in W_0^{1,p}(\Omega)$ for some $1 \leq p \leq \infty$. Then
\[
\|u\|_{L^p(\Omega)} \leq C \|Du\|_{L^p(\Omega)}. \quad (1.53)
\]
Remark 1.6.1 i) Based on this inequality, one can take as an equivalent norm of $W_0^{1,p}(\Omega)$
\[
\|u\|_{W_0^{1,p}(\Omega)} = \|Du\|_{L^p(\Omega)}.
\]
ii) For this theorem alone, one can easily give a proof by using the Sobolev inequality $\|u\|_{L^p} \leq C \|Du\|_{L^q}$ with $p = \frac{nq}{n-q}$ together with the H\"older inequality. However, the proof we present in the following is a unified one that works for both this theorem and the next.
Proof. For convenience, we abbreviate $\|u\|_{L^p(\Omega)}$ as $\|u\|_p$. Suppose inequality (1.53) does not hold. Then there exists a sequence $\{u_k\} \subset W_0^{1,p}(\Omega)$ such that
\[
\|Du_k\|_p = 1, \ \mbox{ while } \|u_k\|_p \to \infty, \ \mbox{ as } k \to \infty.
\]
Since $C_0^\infty(\Omega)$ is dense in $W_0^{1,p}(\Omega)$, we may assume that $\{u_k\} \subset C_0^\infty(\Omega)$. Let $v_k = \frac{u_k}{\|u_k\|_p}$. Then
\[
\|v_k\|_p = 1, \ \mbox{ and } \ \|Dv_k\|_p \to 0.
\]
Consequently, $\{v_k\}$ is bounded in $W^{1,p}(\Omega)$ and hence possesses a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v_o \in W^{1,p}(\Omega)$. From the compact embedding results in the previous section, $\{v_k\}$ converges strongly to $v_o$ in $L^p(\Omega)$, and therefore
\[
\|v_o\|_p = 1. \quad (1.54)
\]
On the other hand, for each $\phi \in C_0^\infty(\Omega)$,
\[
\int_\Omega v_o \phi_{x_i} \, dx = \lim_{k \to \infty} \int_\Omega v_k \phi_{x_i} \, dx = -\lim_{k \to \infty} \int_\Omega v_{k, x_i} \phi \, dx = 0.
\]
It follows that
\[
Dv_o(x) = 0, \quad \mbox{a.e.}
\]
Thus $v_o$ is a constant. Taking into account that $v_k \in C_0^\infty(\Omega)$, we must have $v_o \equiv 0$. This contradicts (1.54) and therefore completes the proof of the Theorem. $\square$
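On the model domain $\Omega = (0,1)$, the best constant in (1.53) for $p = 2$ is $1/\pi$ (the first Dirichlet eigenvalue of $-\frac{d^2}{dx^2}$ is $\pi^2$, with extremal $\sin \pi x$). The following numerical sketch (our own illustration; the sample function and grid are arbitrary) verifies the inequality for one test function and confirms that $\sin \pi x$ saturates it.

```python
import numpy as np

dx = 1e-4
x = np.arange(0.0, 1.0, dx)

def l2(v):
    """L^2(0,1) norm approximated by a Riemann sum."""
    return np.sqrt(np.sum(v * v) * dx)

# u = x(1-x) vanishes at both endpoints of (0,1)
u, du = x * (1.0 - x), 1.0 - 2.0 * x
ratio = l2(u) / l2(du)                     # should be below 1/pi

# v = sin(pi x) is the extremal: its ratio equals the best constant 1/pi
v, dv = np.sin(np.pi * x), np.pi * np.cos(np.pi * x)
ratio_sin = l2(v) / l2(dv)
```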
For functions in $W^{1,p}(\Omega)$, which may not vanish on the boundary, we have another version of Poincaré's inequality.

Theorem 1.6.2 (Poincaré's Inequality II).
Let $\Omega$ be a bounded, connected, open subset of $R^n$ with $C^1$ boundary. Let $\bar{u}$ be the average of $u$ on $\Omega$. Assume $1 \leq p \leq \infty$. Then there exists a constant $C = C(n, p, \Omega)$ such that
\[
\|u - \bar{u}\|_p \leq C \|Du\|_p, \quad \forall u \in W^{1,p}(\Omega). \quad (1.55)
\]
The proof of this Theorem is similar to the previous one. Instead of letting $v_k = \frac{u_k}{\|u_k\|_p}$, we choose
\[
v_k = \frac{u_k - \bar{u}_k}{\|u_k - \bar{u}_k\|_p}.
\]
Remark 1.6.2 The connectedness of $\Omega$ is essential in this version of the inequality. A simple counterexample is given by $n = 1$, $\Omega = [0,1] \cup [2,3]$, and
\[
u(x) = \begin{cases} -1 & \mbox{for } x \in [0,1], \\ \phantom{-}1 & \mbox{for } x \in [2,3]. \end{cases}
\]
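The counterexample can be made concrete numerically. In the sketch below (our own illustration), $u$ is locally constant, so $\|Du\|_{L^2} = 0$, while $u - \bar{u}$ has a fixed positive $L^2$ norm; no constant $C$ can make (1.55) hold on this disconnected $\Omega$.

```python
import numpy as np

dx = 1e-3
x1 = np.arange(0.0, 1.0, dx)            # component [0, 1]
x2 = np.arange(2.0, 3.0, dx)            # component [2, 3]
u = np.concatenate([-np.ones_like(x1), np.ones_like(x2)])

measure = (len(x1) + len(x2)) * dx       # |Omega| = 2
u_bar = np.sum(u) * dx / measure         # average of u over Omega: 0
deviation = np.sqrt(np.sum((u - u_bar) ** 2) * dx)   # ||u - u_bar||_{L^2} = sqrt(2)
# Du = 0 on each component, so ||Du||_{L^2} = 0 while deviation > 0.
```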
1.6.2 The Classical Hardy--Littlewood--Sobolev Inequality

Theorem 1.6.3 (Hardy--Littlewood--Sobolev Inequality).
Let $0 < \lambda < n$ and $s, r > 1$ such that
\[
\frac{1}{r} + \frac{1}{s} + \frac{\lambda}{n} = 2.
\]
Assume that $f \in L^r(R^n)$ and $g \in L^s(R^n)$. Then
\[
\int_{R^n} \int_{R^n} f(x) |x-y|^{-\lambda} g(y) \, dx \, dy \leq C(n, s, \lambda) \|f\|_r \|g\|_s, \quad (1.56)
\]
where
\[
C(n, s, \lambda) = \frac{n}{n-\lambda} |B^n|^{\lambda/n} \, \frac{1}{rs} \left\{ \left( \frac{\lambda/n}{1 - 1/r} \right)^{\lambda/n} + \left( \frac{\lambda/n}{1 - 1/s} \right)^{\lambda/n} \right\},
\]
with $|B^n|$ being the volume of the unit ball in $R^n$, and where
\[
\|f\|_r := \|f\|_{L^r(R^n)}.
\]
Proof. Without loss of generality, we may assume that both $f$ and $g$ are nonnegative and that $\|f\|_r = 1 = \|g\|_s$. Let
\[
\chi_G(x) = \begin{cases} 1 & \mbox{if } x \in G, \\ 0 & \mbox{if } x \notin G, \end{cases}
\]
be the characteristic function of the set $G$. Then one can see easily that
\[
f(x) = \int_0^\infty \chi_{\{f > a\}}(x) \, da, \quad (1.57)
\]
\[
g(x) = \int_0^\infty \chi_{\{g > b\}}(x) \, db, \quad (1.58)
\]
\[
|x|^{-\lambda} = \lambda \int_0^\infty c^{-\lambda - 1} \chi_{\{|x| < c\}}(x) \, dc. \quad (1.59)
\]
To see the last identity, one may first write
\[
|x|^{-\lambda} = \int_0^\infty \chi_{\{|x|^{-\lambda} > \tilde{c}\}}(x) \, d\tilde{c},
\]
and then let $\tilde{c} = c^{-\lambda}$.
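Both layer-cake identities are easy to confirm numerically. The sketch below (our own illustration; the sample values $f_0 = 2.7$, $\lambda = 1.2$, $x_0 = 0.5$ and the quadrature grids are arbitrary) checks (1.57) at one value and (1.59) at one point.

```python
import numpy as np

# (1.57): f0 = integral over a of the indicator {f0 > a}
f0 = 2.7
da = 1e-4
a = np.arange(0.0, 10.0, da)
recon_f = np.sum(f0 > a) * da            # measure of {a : a < f0}

# (1.59): |x0|^{-lam} = lam * int_0^inf c^{-lam-1} 1_{|x0| < c} dc
lam, x0 = 1.2, 0.5
c = np.logspace(np.log10(x0), 6.0, 200001)   # indicator vanishes for c < |x0|
integrand = lam * c ** (-lam - 1.0)
# trapezoid rule on the (unevenly spaced) logarithmic grid
recon_pow = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(c))
```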
Substituting (1.57), (1.58), and (1.59) into the left hand side of (1.56), we have
\[
I := \int_{R^n} \int_{R^n} f(x) |x-y|^{-\lambda} g(y) \, dx \, dy
= \lambda \int_0^\infty \int_0^\infty \int_0^\infty \int_{R^n} \int_{R^n} c^{-\lambda - 1} \chi_{\{f > a\}}(x) \, \chi_{\{|x| < c\}}(x-y) \, \chi_{\{g > b\}}(y) \, dx \, dy \, dc \, db \, da
= \lambda \int_0^\infty \int_0^\infty \int_0^\infty c^{-\lambda - 1} I(a, b, c) \, dc \, db \, da. \quad (1.60)
\]
Let
\[
u(c) = |B^n| c^n,
\]
the volume of the ball of radius $c$, and let
\[
v(a) = \int_{R^n} \chi_{\{f > a\}}(x) \, dx, \qquad w(b) = \int_{R^n} \chi_{\{g > b\}}(y) \, dy,
\]
the measures of the sets $\{x \mid f(x) > a\}$ and $\{y \mid g(y) > b\}$, respectively. Then we can express the norms as
\[
\|f\|_r^r = r \int_0^\infty a^{r-1} v(a) \, da = 1 \quad \mbox{and} \quad \|g\|_s^s = s \int_0^\infty b^{s-1} w(b) \, db = 1. \quad (1.61)
\]
It is easy to see that
\[
I(a, b, c) \leq \int_{R^n} \int_{R^n} \chi_{\{|x| < c\}}(x-y) \, \chi_{\{g > b\}}(y) \, dx \, dy \leq \int_{R^n} u(c) \, \chi_{\{g > b\}}(y) \, dy = u(c) w(b).
\]
Similarly, one can show that $I(a, b, c)$ is bounded above by the other pairs and arrive at
\[
I(a, b, c) \leq \min \{ u(c) w(b), \; u(c) v(a), \; v(a) w(b) \}. \quad (1.62)
\]
We integrate with respect to $c$ first. By (1.62), we have
\[
\int_0^\infty c^{-\lambda - 1} I(a, b, c) \, dc
\leq \int_{u(c) \leq v(a)} c^{-\lambda - 1} w(b) u(c) \, dc + \int_{u(c) > v(a)} c^{-\lambda - 1} w(b) v(a) \, dc
\]
\[
= w(b) |B^n| \int_0^{(v(a)/|B^n|)^{1/n}} c^{n - \lambda - 1} \, dc + w(b) v(a) \int_{(v(a)/|B^n|)^{1/n}}^\infty c^{-\lambda - 1} \, dc
= \frac{|B^n|^{\lambda/n}}{n - \lambda} w(b) v(a)^{1 - \lambda/n} + \frac{|B^n|^{\lambda/n}}{\lambda} w(b) v(a)^{1 - \lambda/n}
= \frac{n |B^n|^{\lambda/n}}{\lambda (n - \lambda)} w(b) v(a)^{1 - \lambda/n}. \quad (1.63)
\]
Exchanging $v(a)$ with $w(b)$ in the second line of (1.63), we also obtain
\[
\int_0^\infty c^{-\lambda - 1} I(a, b, c) \, dc
\leq \int_{u(c) \leq w(b)} c^{-\lambda - 1} v(a) u(c) \, dc + \int_{u(c) > w(b)} c^{-\lambda - 1} w(b) v(a) \, dc
\leq \frac{n |B^n|^{\lambda/n}}{\lambda (n - \lambda)} v(a) w(b)^{1 - \lambda/n}. \quad (1.64)
\]
In view of (1.61), we split the $b$-integral into two parts, one from $0$ to $a^{r/s}$ and the other from $a^{r/s}$ to $\infty$. By virtue of (1.60), (1.63), and (1.64), we derive
\[
I \leq \frac{n}{n - \lambda} |B^n|^{\lambda/n} \left\{ \int_0^\infty v(a) \int_0^{a^{r/s}} w(b)^{1 - \lambda/n} \, db \, da + \int_0^\infty v(a)^{1 - \lambda/n} \int_{a^{r/s}}^\infty w(b) \, db \, da \right\}. \quad (1.65)
\]
To estimate the first integral in (1.65), we use the H\"older inequality with $m = (s-1)(1 - \lambda/n)$:
\[
\int_0^{a^{r/s}} w(b)^{1 - \lambda/n} \, b^m \, b^{-m} \, db
\leq \left( \int_0^{a^{r/s}} w(b) b^{s-1} \, db \right)^{1 - \lambda/n} \left( \int_0^{a^{r/s}} b^{-mn/\lambda} \, db \right)^{\lambda/n}
\leq \left( \frac{\lambda}{n - s(n - \lambda)} \right)^{\lambda/n} \left( \int_0^{a^{r/s}} w(b) b^{s-1} \, db \right)^{1 - \lambda/n} a^{r-1}, \quad (1.66)
\]
since $mn/\lambda < 1$. It follows that the first integral in (1.65) is bounded above by
\[
\left( \frac{\lambda}{n - s(n - \lambda)} \right)^{\lambda/n} \int_0^\infty v(a) a^{r-1} \, da \left( \int_0^\infty w(b) b^{s-1} \, db \right)^{1 - \lambda/n}
= \frac{1}{rs} \left( \frac{\lambda/n}{1 - 1/r} \right)^{\lambda/n}. \quad (1.67)
\]
To estimate the second integral in (1.65), we first rewrite it as
\[
\int_0^\infty w(b) \int_0^{b^{s/r}} v(a)^{1 - \lambda/n} \, da \, db;
\]
then an analogous computation shows that it is bounded above by
\[
\frac{1}{rs} \left( \frac{\lambda/n}{1 - 1/s} \right)^{\lambda/n}. \quad (1.68)
\]
Now the desired Hardy--Littlewood--Sobolev inequality follows directly from (1.65), (1.67), and (1.68). $\square$
Theorem 1.6.4 (An equivalent form of the Hardy--Littlewood--Sobolev inequality).
Let $g \in L^{\frac{np}{n + \alpha p}}(R^n)$ for $\frac{n}{n - \alpha} < p < \infty$. Define
\[
Tg(x) = \int_{R^n} |x - y|^{\alpha - n} g(y) \, dy.
\]
Then
\[
\|Tg\|_p \leq C(n, p, \alpha) \|g\|_{\frac{np}{n + \alpha p}}. \quad (1.69)
\]
Proof. By the classical Hardy--Littlewood--Sobolev inequality (with $\lambda = n - \alpha$), we have
\[
\langle f, Tg \rangle = \langle Tf, g \rangle \leq C(n, s, \alpha) \|f\|_r \|g\|_s,
\]
where $\langle \,,\, \rangle$ is the $L^2$ inner product. Consequently,
\[
\|Tg\|_p = \sup_{\|f\|_r = 1} \langle f, Tg \rangle \leq C(n, s, \alpha) \|g\|_s,
\]
where
\[
\frac{1}{p} + \frac{1}{r} = 1, \qquad \frac{1}{r} + \frac{1}{s} = \frac{n + \alpha}{n}.
\]
Solving for $s$, we arrive at
\[
s = \frac{np}{n + \alpha p}.
\]
This completes the proof of the Theorem. $\square$
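The exponent bookkeeping in this proof is pure arithmetic and can be verified with exact rational numbers. The sketch below (our own check; the sample values $n = 5$, $\alpha = 2$, $p = 3$ satisfy $p > n/(n-\alpha)$) solves the two relations for $s$ and confirms both the formula $s = np/(n + \alpha p)$ and the Hardy--Littlewood--Sobolev condition with $\lambda = n - \alpha$.

```python
from fractions import Fraction as F

n, alpha, p = F(5), F(2), F(3)          # sample values with p > n/(n - alpha)
assert p > n / (n - alpha)

r = 1 / (1 - 1 / p)                      # from 1/p + 1/r = 1
s = 1 / ((n + alpha) / n - 1 / r)        # from 1/r + 1/s = (n + alpha)/n
lam = n - alpha

s_formula = n * p / (n + alpha * p)      # the claimed value of s
hls_sum = 1 / r + 1 / s + lam / n        # should equal 2 exactly
```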
Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev inequality, let's rewrite it as
\[
\|Tg\|_{\frac{nq}{n - \alpha q}} \leq C \|g\|_q \quad (1.70)
\]
with
\[
1 < q := \frac{np}{n + \alpha p} < \frac{n}{\alpha}.
\]
Let $u = Tg$. Then one can show that (see [CLO])
\[
(-\triangle)^{\alpha/2} u = g.
\]
Now inequality (1.70) becomes the Sobolev one:
\[
\|u\|_{\frac{nq}{n - \alpha q}} \leq C \|(-\triangle)^{\alpha/2} u\|_q.
\]
2
Existence of Weak Solutions

2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Methods of Linear Functional Analysis
2.3.1 Linear Equations
2.3.2 Some Basic Principles in Functional Analysis
2.3.3 Existence of Weak Solutions
2.4 Variational Methods
2.4.1 Semilinear Equations
2.4.2 Calculus of Variations
2.4.3 Existence of Minimizers
2.4.4 Existence of Minimizers under Constraints
2.4.5 Minimax Critical Points
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Many physical models come with natural variational structures, where one can minimize a functional, usually an energy $E(u)$. The minimizers are then weak solutions of the related partial differential equation
\[
E'(u) = 0.
\]
For example, for an open bounded domain $\Omega \subset R^n$ and for $f \in L^2(\Omega)$, the functional
\[
E(u) = \frac{1}{2} \int_\Omega |Du|^2 \, dx - \int_\Omega f u \, dx
\]
possesses a minimizer among all $u \in H_0^1(\Omega)$ (try to show this by using the knowledge from the previous chapter). As we will see in this chapter, the minimizer is a weak solution of the Dirichlet boundary value problem
\[
\begin{cases} -\triangle u = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega. \end{cases}
\]
In practice, it is often much easier to obtain a weak solution than a classical one. One of the seven Millennium Prize Problems posed by the Clay Mathematics Institute is to show that suitable weak solutions of the Navier--Stokes equations with good initial values are in fact classical solutions for all time.
In this chapter, we consider the existence of weak solutions to second order elliptic equations, both linear and semilinear.

For linear equations, we will use the Lax--Milgram Theorem to show the existence of weak solutions.

For semilinear equations, we will introduce variational methods. We will consider the corresponding functionals in proper Hilbert spaces and seek weak solutions of the equations as the critical points of these functionals. We will use examples to show how to find a minimizer, a minimizer under constraint, and a minimax critical point. The well-known Mountain Pass Theorem is introduced and applied to show the existence of such a minimax.

Showing that a weak solution possesses the desired differentiability, so that it is actually a classical one, is called the regularity method, which will be studied in Chapter 3.
2.1 Second Order Elliptic Operators

Let $\Omega$ be an open bounded set in $R^n$. Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. Consider the second order partial differential operator $L$, either in the divergence form
\[
Lu = -\sum_{i,j=1}^n \left( a_{ij}(x) u_{x_i} \right)_{x_j} + \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u, \quad (2.1)
\]
or in the nondivergence form
\[
Lu = -\sum_{i,j=1}^n a_{ij}(x) u_{x_i x_j} + \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u. \quad (2.2)
\]
We say that $L$ is uniformly elliptic if there exists a constant $\delta > 0$ such that
\[
\sum_{i,j=1}^n a_{ij}(x) \xi_i \xi_j \geq \delta |\xi|^2
\]
for almost every $x \in \Omega$ and for all $\xi \in R^n$.

In the simplest case, when $a_{ij}(x) = \delta_{ij}$, $b_i(x) \equiv 0$, and $c(x) \equiv 0$, $L$ reduces to $-\triangle$. And, as we will see later, the solutions of the general second order elliptic equation $Lu = 0$ share many similar properties with harmonic functions.
2.2 Weak Solutions

Let the operator $L$ be in the divergence form as defined in (2.1). Consider the second order elliptic equation with Dirichlet boundary condition
\[
\begin{cases} Lu = f, & x \in \Omega, \\ u = 0, & x \in \partial\Omega. \end{cases} \quad (2.3)
\]
When $f = f(x)$, the equation is linear; when $f = f(x, u)$, it is semilinear. Assume that $f(x)$ is in $L^2(\Omega)$, or, for a given solution $u$, that $f(x, u)$ is in $L^2(\Omega)$.

Multiplying both sides of the equation by a test function $v \in C_0^\infty(\Omega)$ and integrating by parts on $\Omega$, we have
\[
\int_\Omega \left[ \sum_{i,j=1}^n a_{ij}(x) u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x) u_{x_i} v + c(x) u v \right] dx = \int_\Omega f v \, dx. \quad (2.4)
\]
There are no boundary terms because both $u$ and $v$ vanish on $\partial\Omega$. One can see that the integrals in (2.4) are well defined if $u$, $v$, and their first derivatives are square integrable. Write $H^1(\Omega) = W^{1,2}(\Omega)$, and let $H_0^1(\Omega)$ be the completion of $C_0^\infty(\Omega)$ in the norm of $H^1(\Omega)$:
\[
\|u\| = \left( \int_\Omega \left( |Du|^2 + |u|^2 \right) dx \right)^{1/2}.
\]
Then (2.4) remains true for any $v \in H_0^1(\Omega)$. One can easily see that $H_0^1(\Omega)$ is also the most appropriate space for $u$. On the other hand, if $u \in H_0^1(\Omega)$ satisfies (2.4) and is second order differentiable, then through integration by parts, we have
\[
\int_\Omega (Lu - f) v \, dx = 0, \quad \forall v \in H_0^1(\Omega).
\]
This implies that
\[
Lu = f, \quad \forall x \in \Omega,
\]
and therefore $u$ is a classical solution of (2.3).

The above observation naturally leads to

Definition 2.2.1 A weak solution of problem (2.3) is a function $u \in H_0^1(\Omega)$ that satisfies (2.4) for all $v \in H_0^1(\Omega)$.
Remark 2.2.1 Actually, the condition on $f$ here can be relaxed a little. By the Sobolev embedding
\[
H^1(\Omega) \hookrightarrow L^{\frac{2n}{n-2}}(\Omega),
\]
we see that $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$, and hence, by duality, $f$ only needs to be in $L^{\frac{2n}{n+2}}(\Omega)$. In the semilinear case, if $f(x, u) = u^p$ for some $p \leq \frac{n+2}{n-2}$, or, more generally, if
\[
|f(x, u)| \leq C_1 + C_2 |u|^{\frac{n+2}{n-2}},
\]
then for any $u \in H^1(\Omega)$, $f(x, u)$ is in $L^{\frac{2n}{n+2}}(\Omega)$.
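The exponent accounting behind this remark can be verified exactly: if $u \in L^{2n/(n-2)}$ and $|f| \lesssim |u|^{(n+2)/(n-2)}$, then $f$ lies in $L^{(2n/(n-2)) / ((n+2)/(n-2))} = L^{2n/(n+2)}$, and $\frac{2n}{n-2}$, $\frac{2n}{n+2}$ are H\"older conjugates. A quick rational-arithmetic check (our own, over a few sample dimensions):

```python
from fractions import Fraction as F

checks = []
for n in (3, 4, 5, 10):
    two_star = F(2 * n, n - 2)       # Sobolev exponent 2n/(n-2)
    p_crit = F(n + 2, n - 2)         # critical growth (n+2)/(n-2)
    dual = F(2 * n, n + 2)           # target integrability 2n/(n+2)
    # |u|^{p_crit} lands in L^{two_star / p_crit}, which should equal L^{dual}
    checks.append(two_star / p_crit == dual)
    # the two exponents are Holder conjugates: 1/two_star + 1/dual = 1
    checks.append(1 / two_star + 1 / dual == 1)
```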
2.3 Methods of Linear Functional Analysis

2.3.1 Linear Equations

For the Dirichlet problem with a linear equation,
\[
\begin{cases} Lu = f(x), & x \in \Omega, \\ u = 0, & x \in \partial\Omega, \end{cases} \quad (2.5)
\]
to find its weak solutions we can apply representation-type theorems in linear functional analysis, such as the Riesz Representation Theorem or the Lax--Milgram Theorem. To this end, we introduce the bilinear form
\[
B[u, v] = \int_\Omega \left[ \sum_{i,j=1}^n a_{ij}(x) u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x) u_{x_i} v + c(x) u v \right] dx,
\]
defined on $H_0^1(\Omega)$, which is the left hand side of (2.4) in the definition of weak solutions; while the right hand side of (2.4) may be regarded as a linear functional on $H_0^1(\Omega)$:
\[
\langle f, v \rangle := \int_\Omega f v \, dx.
\]
Now, to find a weak solution of (2.5), it is equivalent to show that there exists a function $u \in H_0^1(\Omega)$ such that
\[
B[u, v] = \langle f, v \rangle, \quad \forall v \in H_0^1(\Omega). \quad (2.6)
\]
This can be realized by representation-type theorems in functional analysis.
2.3.2 Some Basic Principles in Functional Analysis

Here we list briefly some basic definitions and principles of functional analysis which will be used to obtain the existence of weak solutions. For more details, please see [Sch1].

Definition 2.3.1 A Banach space $X$ is a complete, normed linear space with norm $\|\cdot\|$ that satisfies
(i) $\|u + v\| \leq \|u\| + \|v\|, \ \forall u, v \in X$,
(ii) $\|\lambda u\| = |\lambda| \|u\|, \ \forall u \in X, \ \lambda \in R$,
(iii) $\|u\| = 0$ if and only if $u = 0$.

Examples of Banach spaces are:
(i) $L^p(\Omega)$ with norm $\|u\|_{L^p(\Omega)}$,
(ii) Sobolev spaces $W^{k,p}(\Omega)$ with norm $\|u\|_{W^{k,p}(\Omega)}$, and
(iii) H\"older spaces $C^{0,\gamma}(\Omega)$ with norm $\|u\|_{C^{0,\gamma}(\Omega)}$.

Definition 2.3.2 A Hilbert space $H$ is a Banach space endowed with an inner product $(\cdot,\cdot)$ which generates the norm
\[
\|u\| := (u, u)^{1/2},
\]
with the following properties:
(i) $(u, v) = (v, u), \ \forall u, v \in H$,
(ii) the mapping $u \mapsto (u, v)$ is linear for each $v \in H$,
(iii) $(u, u) \geq 0, \ \forall u \in H$, and
(iv) $(u, u) = 0$ if and only if $u = 0$.

Examples of Hilbert spaces are
(i) $L^2(\Omega)$ with inner product
\[
(u, v) = \int_\Omega u(x) v(x) \, dx,
\]
and
(ii) the Sobolev spaces $H^1(\Omega)$ or $H_0^1(\Omega)$ with inner product
\[
(u, v) = \int_\Omega \left[ u(x) v(x) + Du(x) \cdot Dv(x) \right] dx.
\]

Definition 2.3.3 (i) A mapping $A : X \to Y$ is a linear operator if
\[
A[au + bv] = aA[u] + bA[v], \quad \forall u, v \in X, \ a, b \in R^1.
\]
(ii) A linear operator $A : X \to Y$ is bounded provided
\[
\|A\| := \sup_{\|u\|_X \leq 1} \|Au\|_Y < \infty.
\]
(iii) A bounded linear operator $f : X \to R^1$ is called a bounded linear functional on $X$. The collection of all bounded linear functionals on $X$, denoted by $X^*$, is called the dual space of $X$. We use
\[
\langle f, u \rangle := f[u]
\]
to denote the pairing of $X^*$ and $X$.

Lemma 2.3.1 (Projection).
Let $M$ be a closed subspace of $H$. Then for every $u \in H$, there is a $v \in M$ such that
\[
(u - v, w) = 0, \quad \forall w \in M. \quad (2.7)
\]
In other words,
\[
H = M \oplus M^\perp, \quad \mbox{where } M^\perp := \{ u \in H \mid u \perp M \}.
\]
Here $v$ is the projection of $u$ on $M$. The readers may find the proof of the Lemma in the book [Sch] (Theorem 13), or obtain it from the following exercise.

Exercise 2.3.1 i) Given any $u \in H$ with $u \notin M$, show that there exists a $v_o \in M$ such that
\[
\|v_o - u\| = \inf_{v \in M} \|v - u\|.
\]
Hint: Show that a minimizing sequence is a Cauchy sequence.
ii) Show that
\[
(u - v_o, w) = 0, \quad \forall w \in M.
\]
Based on this Projection Lemma, one can prove

Theorem 2.3.1 (Riesz Representation Theorem). Let $H$ be a real Hilbert space with inner product $(\cdot,\cdot)$, and let $H^*$ be its dual space. Then $H^*$ can be canonically identified with $H$; more precisely, for each $u^* \in H^*$, there exists a unique element $u \in H$ such that
\[
\langle u^*, v \rangle = (u, v), \quad \forall v \in H.
\]
The mapping $u^* \mapsto u$ is a linear isomorphism from $H^*$ onto $H$.

This is a well-known theorem in functional analysis. The readers may see the book [Sch1] for its proof, or do it as the following exercise.

Exercise 2.3.2 Prove the Riesz Representation Theorem in 4 steps. Let $T$ be a bounded linear operator from $H$ to $R^1$ with $T \not\equiv 0$. Show that
i) $K(T) := \{ u \in H \mid Tu = 0 \}$ is closed;
ii) $H = K(T) \oplus K(T)^\perp$;
iii) the dimension of $K(T)^\perp$ is 1;
iv) there exists $v \in K(T)^\perp$ such that $Tv = 1$; then
\[
Tu = \frac{(u, v)}{\|v\|^2}, \quad \forall u \in H.
\]
From the Riesz Representation Theorem, one can derive the following

Theorem 2.3.2 (Lax--Milgram Theorem). Let $H$ be a real Hilbert space with norm $\|\cdot\|$. Let
\[
B : H \times H \to R^1
\]
be a bilinear mapping. Assume that there exist constants $M, m > 0$ such that
(i) $|B[u, v]| \leq M \|u\| \|v\|, \ \forall u, v \in H$, and
(ii) $m \|u\|^2 \leq B[u, u], \ \forall u \in H$.
Then for each bounded linear functional $f$ on $H$, there exists a unique element $u \in H$ such that
\[
B[u, v] = \langle f, v \rangle, \quad \forall v \in H.
\]
Proof. 1. If $B[u, v]$ is symmetric, i.e. $B[u, v] = B[v, u]$, then conditions (i) and (ii) ensure that $B[u, v]$ can serve as an inner product on $H$, and the conclusion of the theorem follows directly from the Riesz Representation Theorem.

2. In the case where $B[u, v]$ may not be symmetric, we proceed as follows.

On one hand, by the Riesz Representation Theorem, for a given bounded linear functional $f$ on $H$, there is an $\tilde{f} \in H$ such that
\[
\langle f, v \rangle = (\tilde{f}, v), \quad \forall v \in H. \quad (2.8)
\]
On the other hand, for each fixed element $u \in H$, by condition (i), $B[u, \cdot]$ is also a bounded linear functional on $H$, and hence there exists a $\tilde{u} \in H$ such that
\[
B[u, v] = (\tilde{u}, v), \quad \forall v \in H. \quad (2.9)
\]
Denote this mapping from $u$ to $\tilde{u}$ by $A$, i.e.
\[
A : H \to H, \quad Au = \tilde{u}. \quad (2.10)
\]
From (2.8), (2.9), and (2.10), it suffices to show that for each $\tilde{f} \in H$, there exists a unique element $u \in H$ such that
\[
Au = \tilde{f}.
\]
We carry this out in two parts. In part (a), we show that $A$ is a bounded linear operator, and in part (b), we prove that it is one-to-one and onto.

(a) For any $u_1, u_2 \in H$, any $a_1, a_2 \in R^1$, and each $v \in H$, we have
\[
(A(a_1 u_1 + a_2 u_2), v) = B[a_1 u_1 + a_2 u_2, v] = a_1 B[u_1, v] + a_2 B[u_2, v] = a_1 (Au_1, v) + a_2 (Au_2, v) = (a_1 Au_1 + a_2 Au_2, v).
\]
Hence
\[
A(a_1 u_1 + a_2 u_2) = a_1 Au_1 + a_2 Au_2,
\]
and $A$ is linear. Moreover, by condition (i), for any $u \in H$,
\[
\|Au\|^2 = (Au, Au) = B[u, Au] \leq M \|u\| \|Au\|,
\]
and consequently
\[
\|Au\| \leq M \|u\|, \quad \forall u \in H. \quad (2.11)
\]
This verifies that $A$ is bounded.

(b) To see that $A$ is one-to-one, we apply condition (ii):
\[
m \|u\|^2 \leq B[u, u] = (Au, u) \leq \|Au\| \|u\|,
\]
and it follows that
\[
m \|u\| \leq \|Au\|. \quad (2.12)
\]
Now if $Au = Av$, then from (2.12) and the linearity of $A$,
\[
m \|u - v\| \leq \|Au - Av\| = 0.
\]
This implies $u = v$; hence $A$ is one-to-one.

From (2.12), one can also see that $R(A)$, the range of $A$ in $H$, is closed. In fact, assume that $v_k \in R(A)$ and $v_k \to v$ in $H$. Then for each $v_k$, there is a $u_k$ such that $Au_k = v_k$. Inequality (2.12) implies that
\[
\|u_i - u_j\| \leq C \|v_i - v_j\|,
\]
i.e. $\{u_k\}$ is a Cauchy sequence in $H$, and hence $u_k \to u$ for some $u \in H$. Now, by (2.11), $Au_k \to Au$, and $v = Au \in R(A)$. Therefore, $R(A)$ is closed.

To show that $R(A) = H$, we use the Projection Lemma 2.3.1. For any $u \in H$, the Projection Lemma yields a $v \in R(A)$ such that $(u - v, w) = 0$ for all $w \in R(A)$. Taking $w = A(u - v)$, we get
\[
0 = (u - v, A(u - v)) = B[u - v, u - v] \geq m \|u - v\|^2.
\]
Consequently, $u = v \in R(A)$, and hence $H \subset R(A)$. Therefore $R(A) = H$.

This completes the proof of the Theorem. $\square$
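In a finite dimensional Hilbert space the theorem reduces to linear algebra: a bilinear form $B[u,v] = v^{\mathsf T} B u$ is coercive exactly when the symmetric part of the matrix $B$ is positive definite, and then $Bu = \tilde{f}$ is uniquely solvable even if $B$ is not symmetric. A small sketch (our own illustration; the matrix and data are arbitrary):

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [-1.0, 2.0]])            # nonsymmetric; symmetric part is 2*I
f_tilde = np.array([1.0, 3.0])

# Coercivity constant m: smallest eigenvalue of the symmetric part,
# since u^T B u = u^T ((B + B^T)/2) u
m = np.linalg.eigvalsh((B + B.T) / 2.0).min()

u = np.linalg.solve(B, f_tilde)        # the unique solution of B u = f_tilde
residual = np.linalg.norm(B @ u - f_tilde)

# The key estimate (2.12): m ||u|| <= ||B u||, which gives injectivity
lower_bound_ok = m * np.linalg.norm(u) <= np.linalg.norm(B @ u) + 1e-12
```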
2.3.3 Existence of Weak Solutions

Now we explore in what situations our second order elliptic operator $L$ satisfies the conditions (i) and (ii) required by the Lax--Milgram Theorem.

Let $H = H_0^1(\Omega)$ be our Hilbert space with inner product
\[
(u, v) := \int_\Omega \left( Du \cdot Dv + uv \right) dx.
\]
By the Poincaré inequality, we can use the equivalent inner product
\[
(u, v)_H := \int_\Omega Du \cdot Dv \, dx
\]
and the corresponding norm
\[
\|u\|_H := \left( \int_\Omega |Du|^2 \, dx \right)^{1/2}.
\]
For any $v \in H$, by the Sobolev embedding, $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$. And by virtue of the H\"older inequality,
\[
\langle f, v \rangle := \int_\Omega f(x) v(x) \, dx \leq \left( \int_\Omega |f(x)|^{\frac{2n}{n+2}} \, dx \right)^{\frac{n+2}{2n}} \left( \int_\Omega |v(x)|^{\frac{2n}{n-2}} \, dx \right)^{\frac{n-2}{2n}};
\]
hence we only need to assume that $f \in L^{\frac{2n}{n+2}}(\Omega)$.

In the simplest case, when $L = -\triangle$, we have
\[
B[u, v] = \int_\Omega Du \cdot Dv \, dx.
\]
Obviously, condition (i) is met:
\[
|B[u, v]| \leq \|u\|_H \|v\|_H.
\]
Condition (ii) is also satisfied:
\[
B[u, u] = \|u\|_H^2.
\]
Now, applying the Lax--Milgram Theorem, we conclude that there exists a unique $u \in H$ such that
\[
B[u, v] = \langle f, v \rangle, \quad \forall v \in H.
\]
We have actually proved

Theorem 2.3.3 For every $f(x)$ in $L^{\frac{2n}{n+2}}(\Omega)$, the Dirichlet problem
\[
\begin{cases} -\triangle u = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{cases}
\]
has a unique weak solution $u$ in $H_0^1(\Omega)$.
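The weak formulation is also what one discretizes in practice. The sketch below (our own illustration, not from the book) performs a Galerkin approximation of $-u'' = 1$ on $(0,1)$ with zero boundary values, using piecewise linear hat functions: $B[u, \phi_i] = \langle f, \phi_i \rangle$ becomes a tridiagonal system, and the computed nodal values are compared with the exact solution $u(x) = x(1-x)/2$ (in 1-D, piecewise linear Galerkin is exact at the nodes for this problem).

```python
import numpy as np

N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Stiffness matrix of B[u,v] = \int u'v' for hat functions at interior nodes:
# (1/h) * tridiag(-1, 2, -1)
main = (2.0 / h) * np.ones(N - 1)
off = (-1.0 / h) * np.ones(N - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Load vector <f, phi_i> for f = 1: each hat function integrates to h
b = h * np.ones(N - 1)

u_h = np.linalg.solve(A, b)              # Galerkin (weak) solution coefficients
u = np.zeros(N + 1)
u[1:-1] = u_h                            # enforce u = 0 at the boundary nodes

exact = x * (1.0 - x) / 2.0              # solves -u'' = 1, u(0) = u(1) = 0
err = np.max(np.abs(u - exact))
```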
In general,
\[
B[u, v] = \int_\Omega \left[ \sum_{i,j=1}^n a_{ij}(x) u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x) u_{x_i} v + c(x) u v \right] dx.
\]
Under the assumption that
\[
\|a_{ij}\|_{L^\infty(\Omega)}, \ \|b_i\|_{L^\infty(\Omega)}, \ \|c\|_{L^\infty(\Omega)} \leq C,
\]
and by the H\"older and Poincaré inequalities, one can easily verify that
\[
|B[u, v]| \leq 3C \|u\|_H \|v\|_H.
\]
Hence condition (i) is always satisfied. However, condition (ii) may not automatically hold; it depends on the sign of $c(x)$ and on the $L^\infty$ norms of $b_i(x)$ and $c(x)$.

First, from the uniform ellipticity condition, we have
\[
\int_\Omega \sum_{i,j=1}^n a_{ij}(x) u_{x_i} u_{x_j} \, dx \geq \delta \int_\Omega |Du|^2 \, dx = \delta \|u\|_H^2, \quad (2.13)
\]
for some $\delta > 0$. From here one can see that if $b_i(x) \equiv 0$ and $c(x) \geq 0$, then condition (ii) is met. Actually, the condition on $c(x)$ can be relaxed a little; for instance, if
\[
c(x) \geq -\delta/2 \quad (2.14)
\]
with $\delta$ as in (2.13), then condition (ii) still holds. Hence we have

Theorem 2.3.4 Assume that $b_i(x) \equiv 0$ and that $c(x)$ satisfies (2.14). Then for every $f \in L^{\frac{2n}{n+2}}(\Omega)$, the problem
\[
\begin{cases} Lu = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{cases}
\]
has a unique weak solution in $H_0^1(\Omega)$.
2.4 Variational Methods

2.4.1 Semilinear Equations

To find weak solutions of the semilinear equation
\[
Lu = f(x, u),
\]
the Lax--Milgram Theorem can no longer be applied. We will use variational methods to obtain existence. Roughly speaking, we will associate the equation with the functional
\[
J(u) = \frac{1}{2} \int_\Omega (Lu \cdot u) \, dx - \int_\Omega F(x, u) \, dx,
\]
where
\[
F(x, u) = \int_0^u f(x, s) \, ds.
\]
We show that a critical point of $J(u)$ is a weak solution of our problem. Then we will focus on how to seek critical points in various situations.
2.4.2 Calculus of Variations

For a continuously differentiable function $g$ defined on $R^n$, a critical point is a point where $Dg$ vanishes. The simplest sorts of critical points are global or local maxima or minima; others are saddle points. To seek weak solutions of partial differential equations, we generalize this concept to infinite dimensional spaces. To illustrate the idea, let us start from a simple example:
\[
\begin{cases} -\triangle u = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega. \end{cases} \quad (2.15)
\]
To find weak solutions of the problem, we study the functional
\[
J(u) = \frac{1}{2} \int_\Omega |Du|^2 \, dx - \int_\Omega f(x) u \, dx
\]
in $H_0^1(\Omega)$.

Now suppose that there is some function $u$ which happens to be a minimizer of $J(\cdot)$ in $H_0^1(\Omega)$; we will demonstrate that $u$ is then actually a weak solution of our problem.

Let $v$ be any function in $H_0^1(\Omega)$. Consider the real-valued function
\[
g(t) = J(u + tv), \quad t \in R.
\]
Since $u$ is a minimizer of $J(\cdot)$, the function $g(t)$ has a minimum at $t = 0$, and therefore we must have
\[
g'(0) = 0, \quad \mbox{i.e.} \quad \frac{d}{dt} J(u + tv) \Big|_{t=0} = 0.
\]
Here, explicitly,
\[
J(u + tv) = \frac{1}{2} \int_\Omega |D(u + tv)|^2 \, dx - \int_\Omega f(x) (u + tv) \, dx,
\]
and
\[
\frac{d}{dt} J(u + tv) = \int_\Omega D(u + tv) \cdot Dv \, dx - \int_\Omega f(x) v \, dx.
\]
Consequently, $g'(0) = 0$ yields
\[
\int_\Omega Du \cdot Dv \, dx - \int_\Omega f(x) v \, dx = 0, \quad \forall v \in H_0^1(\Omega). \quad (2.16)
\]
This implies that $u$ is a weak solution of problem (2.15).

Furthermore, if $u$ is also second order differentiable, then through integration by parts we obtain
\[
\int_\Omega \left[ -\triangle u - f(x) \right] v \, dx = 0.
\]
Since this is true for any $v \in H_0^1(\Omega)$, we conclude that
\[
-\triangle u - f(x) = 0.
\]
Therefore, $u$ is a classical solution of the problem.

From the above argument, the readers can see that, in order to be a weak solution of the problem, $u$ need not be a minimizer; it can be a maximizer, or a saddle point of the functional, or, more generally, any point that satisfies
\[
\frac{d}{dt} J(u + tv) \Big|_{t=0} = 0, \quad \forall v \in H_0^1(\Omega).
\]
More precisely, we have

Definition 2.4.1 Let $J$ be a functional on a Banach space $X$.
i) We say that $J$ is Fréchet differentiable at $u \in X$ if there exists a bounded linear functional $L = L(u) \in X^*$ satisfying: for any $\epsilon > 0$, there is a $\delta = \delta(\epsilon, u)$ such that
\[
|J(u + v) - J(u) - \langle L(u), v \rangle| \leq \epsilon \|v\|_X \quad \mbox{whenever } \|v\|_X < \delta,
\]
where
\[
\langle L(u), v \rangle := L(u)(v)
\]
is the duality pairing between $X^*$ and $X$. The mapping $L(u)$ is usually denoted by $J'(u)$.
ii) A critical point of $J$ is a point at which $J'(u) = 0$, that is,
\[
\langle J'(u), v \rangle = 0, \quad \forall v \in X.
\]
iii) We call $J'(u) = 0$ the Euler--Lagrange equation of the functional $J$.

Remark 2.4.1 One can verify that if $J$ is Fréchet differentiable at $u$, then
\[
\langle J'(u), v \rangle = \lim_{t \to 0} \frac{J(u + tv) - J(u)}{t} = \frac{d}{dt} J(u + tv) \Big|_{t=0}.
\]
2.4.3 Existence of Minimizers

In the previous subsection, we have seen that the critical points of the functional
\[
J(u) = \frac{1}{2} \int_\Omega |Du|^2 \, dx - \int_\Omega f(x) u \, dx
\]
are weak solutions of the Dirichlet problem (2.15). The next question is then: does the functional $J$ actually possess a critical point?

We will show that there exists a minimizer of $J$. To this end, we verify that $J$ possesses the following three properties:
i) boundedness from below,
ii) coercivity, and
iii) weak lower semi-continuity.

i) Boundedness from Below.

First we prove that $J$ is bounded from below in $H := H_0^1(\Omega)$ if $f \in L^2(\Omega)$. Again, for simplicity, we use the equivalent norm in $H$:
\[
\|u\|_H = \left( \int_\Omega |Du|^2 \, dx \right)^{1/2}.
\]
By the Poincaré and H\"older inequalities, we have
\[
J(u) \geq \frac{1}{2} \|u\|_H^2 - C \|u\|_H \|f\|_{L^2(\Omega)}
= \frac{1}{2} \left( \|u\|_H - C \|f\|_{L^2(\Omega)} \right)^2 - \frac{C^2}{2} \|f\|_{L^2(\Omega)}^2
\geq -\frac{C^2}{2} \|f\|_{L^2(\Omega)}^2. \quad (2.17)
\]
Therefore, the functional $J$ is bounded from below.
Therefore, the functional J is bounded from below.
ii) Coercivity. However, A functional that is bounded from below does not
guarantee that it has a minimum. A simple counter example is
g(x) =
1
1 +x
2
, x ∈ R
1
.
Obviously, the inﬁmum of g(x) is 1, but it can never be attained. If we take a
minimizing sequence ¦x
k
¦ such that g(x
k
)→1, then x
k
goes away to inﬁnity.
This suggests that in order to prevent a minimizing sequence from ‘leaking’ to
inﬁnity, we need to have some sort of ‘coercive’ condition on our functional J.
That is, if a sequence ¦u
k
¦ goes to inﬁnity, i.e. if u
H
→∞, then J(u
k
) must
also become unbounded. Therefore a minimizing sequence would be retained
in a bounded set. And this is indeed true for our functional J. Actually, from
the second part of (2.17), one can see that
if u
k

H
→∞, then J(u
k
)→∞.
This implies that any minimizing sequence must be bounded in H.
iii) Weak Lower Semi-continuity. If $H$ were a finite dimensional space, a bounded minimizing sequence $\{u_k\}$ would possess a convergent subsequence, and the limit would be a minimizer of $J$. This is no longer true in infinite dimensional spaces. We therefore turn our attention to the weak topology.

Definition 2.4.2 Let $X$ be a real Banach space. If a sequence $\{u_k\} \subset X$ satisfies
\[
\langle f, u_k \rangle \to \langle f, u_o \rangle \quad \mbox{as } k \to \infty
\]
for every bounded linear functional $f$ on $X$, then we say that $\{u_k\}$ converges weakly to $u_o$ in $X$, and write
\[
u_k \rightharpoonup u_o.
\]

Definition 2.4.3 A Banach space $X$ is called reflexive if $(X^*)^* = X$, where $X^*$ is the dual of $X$.

Theorem 2.4.1 (Weak Compactness). Let $X$ be a reflexive Banach space. Assume that the sequence $\{u_k\}$ is bounded in $X$. Then there exists a subsequence of $\{u_k\}$ which converges weakly in $X$.

Now let's come back to our Hilbert space $H := H_0^1(\Omega)$. By the Riesz Representation Theorem, for every bounded linear functional $f$ on $H$, there is a unique $v \in H$ such that
\[
\langle f, u \rangle = (v, u), \quad \forall u \in H.
\]
It follows that $H^* = H$, and therefore $H$ is reflexive. By coercivity and the Weak Compactness Theorem, a minimizing sequence $\{u_k\}$ is bounded, and hence possesses a subsequence $\{u_{k_i}\}$ that converges weakly to an element $u_o$ in $H$:
\[
(u_{k_i}, v) \to (u_o, v), \quad \forall v \in H.
\]
Naturally, we would wish the functional $J$ to be continuous with respect to this weak convergence, that is,
\[
\lim_{i \to \infty} J(u_{k_i}) = J(u_o);
\]
then $u_o$ would be our desired minimizer. However, this is not the case for our functional, nor is it true for most other functionals of interest. Fortunately, we do not really need such continuity; instead, we only need the weaker property
\[
J(u_o) \leq \liminf_{i \to \infty} J(u_{k_i}).
\]
Since, on the other hand, by the definition of a minimizing sequence, we obviously have
\[
J(u_o) \geq \inf_H J = \liminf_{i \to \infty} J(u_{k_i}),
\]
it follows that
\[
J(u_o) = \liminf_{i \to \infty} J(u_{k_i}).
\]
Therefore, $u_o$ is a minimizer.

Definition 2.4.4 We say that a functional $J(\cdot)$ is weakly lower semi-continuous on a Banach space $X$ if, for every weakly convergent sequence
\[
u_k \rightharpoonup u_o \quad \mbox{in } X,
\]
we have
\[
J(u_o) \leq \liminf_{k \to \infty} J(u_k).
\]
2.4 Variational Methods 55
Now we show that our functional J is weakly lower semicontinuous in H.
Assume that ¦u
k
¦ ⊂ H, and
u
k
u
o
in H.
Since for each ﬁxed f ∈ L
2n
n+2
(Ω),
Ω
fudx is a linear functional on H, it
follows immediately
Ω
fu
k
dx→
Ω
fu
o
dx, as k→∞. (2.18)
From the algebraic inequality
a
2
+b
2
≥ 2ab,
we have
Ω
[Du
k
[
2
dx ≥
Ω
[Du
o
[
2
dx + 2
Ω
Du
o
(Du
k
−Du
o
)dx.
Here the second term on the right goes to zero due to the weak convergence
u
k
u
o
. Therefore
liminf
k→∞
Ω
[Du
k
[
2
dx ≥
Ω
[Du
o
[
2
dx. (2.19)
Now (2.18) and (2.19) imply the weakly lower semicontinuity. So far, we
have proved
Theorem 2.4.2 Assume that Ω is a bounded domain with smooth boundary.
Then for every f ∈ L
2n
n+2
(Ω), the functional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
fudx
possesses a minimum u
o
in H
1
0
(Ω), which is a weak solution of the boundary
value problem
−´u = f(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
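A classical example of weak but not strong convergence makes the strict inequality in (2.19) tangible: on $(0,1)$, $u_k(x) = \sin(k\pi x)$ converges weakly to $0$ in $L^2$ (the pairings with any fixed test function decay, by the Riemann--Lebesgue lemma), yet $\|u_k\|_{L^2}^2 = 1/2$ does not converge to $\|0\|^2 = 0$. A numerical sketch (our own illustration; the test function $e^{-x}$ is arbitrary):

```python
import numpy as np

dx = 5e-6
x = np.arange(0.0, 1.0, dx)
phi = np.exp(-x)                         # a fixed test function

ks = (1, 10, 100)
pairings = [np.sum(np.sin(k * np.pi * x) * phi) * dx for k in ks]
norms_sq = [np.sum(np.sin(k * np.pi * x) ** 2) * dx for k in ks]
# pairings shrink as k grows (weak convergence to 0),
# while the squared norms stay pinned at 1/2 (no strong convergence).
```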
2.4.4 Existence of Minimizers Under Constraints

Now we consider the semilinear Dirichlet problem
\[
\begin{cases} -\triangle u = |u|^{p-1} u, & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{cases} \quad (2.20)
\]
with $1 < p < \frac{n+2}{n-2}$.

Obviously, $u \equiv 0$ is a solution; we call it the trivial solution. What we are more interested in, of course, is whether there exist nontrivial solutions. Let
\[
J(u) = \frac{1}{2} \int_\Omega |Du|^2 \, dx - \frac{1}{p+1} \int_\Omega |u|^{p+1} \, dx.
\]
It is easy to verify that
\[
\frac{d}{dt} J(u + tv) \Big|_{t=0} = \int_\Omega \left( Du \cdot Dv - |u|^{p-1} u v \right) dx.
\]
Therefore, a critical point of the functional $J$ in $H := H_0^1(\Omega)$ is a weak solution of (2.20).

However, this functional is not bounded from below. To see this, fix an element $u \in H$ and consider
\[
J(tu) = \frac{t^2}{2} \int_\Omega |Du|^2 \, dx - \frac{t^{p+1}}{p+1} \int_\Omega |u|^{p+1} \, dx.
\]
Noticing that $p + 1 > 2$, we find that
\[
J(tu) \to -\infty, \quad \mbox{as } t \to \infty.
\]
Therefore, there is no minimizer of $J(u)$. For this reason, instead of $J$, we will consider the functional
\[
I(u) := \frac{1}{2} \int_\Omega |Du|^2 \, dx
\]
on the constraint set
\[
M := \left\{ u \in H \;\Big|\; G(u) := \int_\Omega |u|^{p+1} \, dx = 1 \right\}.
\]
We seek minimizers of $I$ in $M$. Let $\{u_k\} \subset M$ be a minimizing sequence, i.e.
\[
\lim_{k \to \infty} I(u_k) = \inf_{u \in M} I(u) := m.
\]
It follows that $\int_\Omega |Du_k|^2 \, dx$ is bounded, hence $\{u_k\}$ is bounded in $H$. By the Weak Compactness Theorem, there exists a subsequence (for simplicity, still denoted by $\{u_k\}$) which converges weakly to some $u_o$ in $H$. The weak lower semi-continuity of the functional $I$ yields
\[
I(u_o) \leq \liminf_{k \to \infty} I(u_k) = m. \quad (2.21)
\]
On the other hand, since $p + 1 < \frac{2n}{n-2}$, by the Compact Sobolev Embedding Theorem
\[
H^1(\Omega) \hookrightarrow\hookrightarrow L^{p+1}(\Omega),
\]
we find that $u_k$ converges strongly to $u_o$ in $L^{p+1}(\Omega)$, and hence
\[
\int_\Omega |u_o|^{p+1} \, dx = 1.
\]
That is, $u_o \in M$, and therefore
\[
I(u_o) \geq m.
\]
Together with (2.21), we obtain
\[
I(u_o) = m.
\]
We have now shown the existence of a minimizer $u_o$ of $I$ in $M$. To link $u_o$ to a weak solution of (2.20), we derive the corresponding Euler--Lagrange equation for this minimizer under the constraint.

Theorem 2.4.3 (Lagrange Multiplier). Let $u$ be a minimizer of $I$ in $M$, i.e.
\[
I(u) = \min_{v \in M} I(v).
\]
Then there exists a real number $\lambda$ such that
\[
I'(u) = \lambda G'(u),
\]
or
\[
\langle I'(u), v \rangle = \lambda \langle G'(u), v \rangle, \quad \forall v \in H.
\]
Before proving the theorem, let's first recall the Lagrange multiplier rule for functions defined on $R^n$. Let $f(x)$ and $g(x)$ be smooth functions in $R^n$. It is well known that the critical points (in particular, the minima) $x_o$ of $f(x)$ under the constraint $g(x) = c$ satisfy
\[
Df(x_o) = \lambda Dg(x_o) \quad (2.22)
\]
for some constant $\lambda$. Geometrically, $Df(x_o)$ is a vector in $R^n$. It can be decomposed as the sum of two perpendicular vectors:
\[
Df(x_o) = Df(x_o) \big|_N + Df(x_o) \big|_T,
\]
where the two terms on the right hand side are the projections of $Df(x_o)$ onto the normal and tangent spaces of the level set $g(x) = c$ at the point $x_o$. By definition, $x_o$ being a critical point of $f(x)$ on the level set $g(x) = c$ means that the tangential projection $Df(x_o)|_T = 0$; while the normal projection $Df(x_o)|_N$ is parallel to $Dg(x_o)$, and we therefore have (2.22).

Heuristically, we may generalize this perception to the infinite dimensional space $H$. At a fixed point $u \in H$, $I'(u)$ is a bounded linear functional on $H$. By the Riesz Representation Theorem, it can be identified with a point (or a vector) in $H$, denoted by $\bar{I}'(u)$, such that
\[
\langle I'(u), v \rangle = (\bar{I}'(u), v), \quad \forall v \in H.
\]
Similarly, for $G'(u)$ we have $\bar{G}'(u) \in H$, and it is a normal vector of $M$ at the point $u$. At a minimum $u$ of $I$ on the hypersurface $\{ w \in H \mid G(w) = 1 \}$, $\bar{I}'(u)$ must be perpendicular to the tangent space at the point $u$, and hence
\[
\bar{I}'(u) = \lambda \bar{G}'(u).
\]
We now prove this rigorously.
The Proof of Theorem 2.4.3.
Recall that if $u$ is a minimum of $I$ in the whole space $H$, then, for any $v \in H$, we have
$$\frac{d}{dt} I(u+tv)\Big|_{t=0} = 0. \quad (2.23)$$
Now $u$ is only a minimum on $M$, so we can no longer use (2.23), because $u+tv$ cannot be kept on $M$: we cannot guarantee that
$$G(u+tv) := \int_\Omega |u+tv|^{p+1}\,dx = 1$$
for all small $t$. To remedy this, we consider
$$g(t,s) := G(u+tv+sw).$$
For each small $t$, we try to show that there is an $s = s(t)$ such that
$$G(u+tv+s(t)w) = 1.$$
Then we can calculate the variation of $I(u+tv+s(t)w)$ as $t\to 0$.
In fact,
$$\frac{\partial g}{\partial s}(0,0) = \langle G'(u), w\rangle = (p+1)\int_\Omega |u|^{p-1} u\, w\,dx.$$
Since $u \in M$, $\int_\Omega |u|^{p+1}\,dx = 1$, there exists $w$ (one may simply take $w = u$) such that the right hand side of the above is not zero. Hence, by the Implicit Function Theorem, there exists a $C^1$ function $s: R^1\to R^1$ such that
$$s(0) = 0, \quad \text{and} \quad g(t, s(t)) = 1,$$
for sufficiently small $t$. Now $u+tv+s(t)w$ stays on $M$, and hence at the minimum $u$ of $I$ on $M$, we must have
$$\frac{d}{dt} I(u+tv+s(t)w)\Big|_{t=0} = \langle I'(u), v\rangle + \langle I'(u), w\rangle\, s'(0) = 0. \quad (2.24)$$
On the other hand, we have
$$0 = \frac{dg}{dt}(0,0) = \frac{\partial g}{\partial t}(0,0) + \frac{\partial g}{\partial s}(0,0)\,s'(0) = \langle G'(u), v\rangle + \langle G'(u), w\rangle\, s'(0).$$
Consequently,
$$s'(0) = -\frac{\langle G'(u), v\rangle}{\langle G'(u), w\rangle}.$$
Substituting this into (2.24), and denoting
$$\lambda = \frac{\langle I'(u), w\rangle}{\langle G'(u), w\rangle},$$
we arrive at
$$\langle I'(u), v\rangle = \lambda \langle G'(u), v\rangle, \quad \forall v \in H.$$
This completes the proof of the Theorem. □
Now let’s come back to the weak solution of problem (2.20). Our minimizer
u
o
of I under the constraint G(u) = 1 satisﬁes
< I
(u
o
), v >= λ < G
(u
o
), v >, ∀v ∈ H,
that is
Ω
Du
o
Dvdx = λ
Ω
[u
o
[
p−1
u
o
vdx, ∀v ∈ H. (2.25)
One can see that λ > 0. Let ˜ u = au
o
with λ/a
p−1
= 1. This is possible
since p > 1. Then it is easy to verify that
Ω
D˜ u Dvdx =
Ω
[˜ u[
p−1
˜ uvdx.
Now ˜ u is the desired weak solution of (2.20) in H. ¯ .
Exercise 2.4.1 Show that λ > 0 in (2.25).
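The rescaling step above can be sanity-checked numerically: with $\tilde u = a u_o$ and $a = \lambda^{1/(p-1)}$, the left side of the weak form (2.25) picks up a factor $a\lambda$ and the right side a factor $a^p$, and the choice of $a$ makes them match. A minimal sketch (the values of $\lambda$, $p$, and the stand-in integral are arbitrary illustrations):

```python
lam, p = 2.7, 3.0            # hypothetical multiplier and exponent: lam > 0, p > 1
a = lam ** (1.0 / (p - 1))   # rescaling factor, chosen so lam / a**(p-1) == 1
I = 0.813                    # stands for the common integral  int |u_o|^{p-1} u_o v dx

lhs = a * lam * I            # int D(u~).Dv dx  = a * (lam * I)  by (2.25)
rhs = a ** p * I             # int |u~|^{p-1} u~ v dx = a**p * I
assert abs(lhs - rhs) < 1e-12
assert abs(lam / a ** (p - 1) - 1.0) < 1e-12
```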
2.4.5 Minimax Critical Points

Besides local or global minima and maxima, there are other types of critical points: saddle points, or minimax points. For a simple example, consider the function $f(x,y) = x^2 - y^2$ from $R^2$ to $R^1$. The point $(x,y) = (0,0)$ is a critical point of $f$, i.e. $Df(0,0) = 0$. However, it is neither a local maximum nor a local minimum. The graph of $z = f(x,y)$ in $R^3$ looks like a horse saddle. If we travel on the graph along the direction of the $x$-axis, $(0,0)$ is the minimum, while along the direction of the $y$-axis, $(0,0)$ is the maximum. For this reason, we also call $(0,0)$ a minimax critical point of $f(x,y)$. The way to locate, or to show the existence of, such a minimax point is called a minimax variational method. To illustrate the main idea, let's stay with this simple example. Suppose we don't know whether $f(x,y)$ possesses a minimax point, but we do detect a 'mountain range' on the graph near the $xz$-plane. To show the existence of a minimax point, we pick two points on the $xy$-plane, for instance $P = (1,1)$ and $Q = (-1,-1)$, on the two sides of the 'mountain range'. Let $\gamma(s)$ be a continuous curve linking the two points $P$ and $Q$, with $\gamma(0) = P$ and $\gamma(1) = Q$. To go from $(P, f(P))$ to $(Q, f(Q))$ on the graph, one needs first to climb over the 'mountain range' near the $xz$-plane, and then to descend to the point $(Q, f(Q))$. Hence on this path there is a highest point. We then take the infimum of the highest points over all possible paths linking $P$ and $Q$. More precisely, we define
$$c = \inf_{\gamma\in\Gamma}\,\max_{s\in[0,1]} f(\gamma(s)),$$
where
$$\Gamma = \{\gamma \mid \gamma = \gamma(s) \text{ is continuous on } [0,1], \text{ and } \gamma(0) = P,\ \gamma(1) = Q\}.$$
We then try to show that there exists a point $P_o$ at which $f$ attains this minimax value $c$, and more importantly, $Df(P_o) = 0$.
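For this toy example the minimax construction can be carried out numerically: discretize a family of paths from $P$ to $Q$, take the maximum of $f$ along each, and minimize over the family. Every path must cross the line $y = 0$, where $f = x^2 \ge 0$, so no path can do better than $0$, and the diagonal path attains it. A minimal sketch (the one-parameter 'bump' family of paths is our own illustrative choice):

```python
import numpy as np

def f(x, y):
    return x**2 - y**2

P, Q = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
s = np.linspace(0.0, 1.0, 2001)

def highest_point(shift):
    # path: straight segment from P to Q, bent sideways by `shift` at mid-parameter
    pts = np.outer(1 - s, P) + np.outer(s, Q)
    pts[:, 0] += shift * np.sin(np.pi * s)   # perturb the x-coordinate only
    return f(pts[:, 0], pts[:, 1]).max()

# minimize the highest point over the family of paths
shifts = np.linspace(-2.0, 2.0, 401)
c = min(highest_point(h) for h in shifts)
# the minimax value of x^2 - y^2 between P and Q is 0, attained at the saddle (0,0)
assert abs(c) < 1e-12
```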
For a functional $J$ defined on an infinite dimensional space $X$, we will apply a similar idea to seek its minimax critical points. Unlike in finite dimensional spaces, in an infinite dimensional space a bounded sequence may not converge. Hence we require the functional $J$ to satisfy a compactness condition, the Palais-Smale condition.

Definition 2.4.5 We say that the functional $J$ satisfies the Palais-Smale condition (in short, (PS)) if any sequence $\{u_k\} \subset X$ for which $J(u_k)$ is bounded and $J'(u_k)\to 0$ as $k\to\infty$ possesses a convergent subsequence.

Under this compactness condition, Ambrosetti and Rabinowitz [AR] established the following well-known theorem on the existence of a minimax critical point.
Theorem 2.4.4 (Mountain Pass Theorem). Let $X$ be a real Banach space and $J \in C^1(X, R^1)$. Suppose $J$ satisfies (PS), $J(0) = 0$,
$(J_1)$ there exist constants $\rho, \alpha > 0$ such that $J|_{\partial B_\rho(0)} \ge \alpha$, and
$(J_2)$ there is an $e \in X \setminus B_\rho(0)$ such that $J(e) \le 0$.
Then $J$ possesses a critical value $c \ge \alpha$ which can be characterized as
$$c = \inf_{\gamma\in\Gamma}\,\max_{u\in\gamma([0,1])} J(u),$$
where
$$\Gamma = \{\gamma \in C([0,1], X) \mid \gamma(0) = 0,\ \gamma(1) = e\}.$$
Here, a critical value $c$ means a value of the functional which is attained by a critical point $u$, i.e. $J'(u) = 0$ and $J(u) = c$.
Heuristically, the Theorem says that if the point $0$ is located in a valley surrounded by a mountain range, and if on the other side of the range there is another low point $e$, then there must be a mountain pass from $0$ to $e$ which contains a minimax critical point of $J$. In the figure below, on the graph of $J$, the path connecting the two low points $(0, J(0))$ and $(e, J(e))$ contains a highest point $S$, which is the lowest as compared to the highest points on the nearby paths. This $S$ is a saddle point, the minimax critical point of $J$.

The proof of the Theorem is mainly based on a deformation theorem, which exploits the change of topology of the level sets of $J$ when the value of $J$ goes through a critical value. To illustrate this, let's come back to the simple example $f(x,y) = x^2 - y^2$. Denote the level set of $f$ by
$$f_c := \{(x,y) \in R^2 \mid f(x,y) \le c\}.$$
One can see that $0$ is a critical value, and for any $a > 0$ the level sets $f_a$ and $f_{-a}$ have different topology: $f_a$ is connected while $f_{-a}$ is not. If there is no critical value between two level sets, say $f_a$ and $f_b$ with $0 < a < b$, then one can continuously deform the set $f_b$ into $f_a$. One convenient way is to let the points in the set $f_b$'flow' along the direction of $-Df$ into $f_a$; correspondingly, the points on the graph 'flow downhill' into a lower level. More precisely and generally, we have
Theorem 2.4.5 (Deformation Theorem). Let $X$ be a real Banach space. Assume that $J \in C^1(X, R^1)$ and satisfies (PS). Suppose that $c$ is not a critical value of $J$.
Then for each sufficiently small $\epsilon > 0$, there exist a constant $0 < \delta < \epsilon$ and a continuous mapping $\eta(t,u)$ from $[0,1]\times X$ to $X$, such that
(i) $\eta(0,u) = u$, $\forall u \in X$;
(ii) $\eta(1,u) = u$, $\forall u \notin J^{-1}[c-\epsilon, c+\epsilon]$;
(iii) $J(\eta(t,u)) \le J(u)$, $\forall u \in X$, $t \in [0,1]$;
(iv) $\eta(1, J_{c+\delta}) \subset J_{c-\delta}$.
Roughly speaking, the Theorem says that if $c$ is not a critical value of $J$, then one can continuously deform a higher level set $J_{c+\delta}$ into a lower level set $J_{c-\delta}$ by 'flowing downhill'. Condition (ii) ensures that, after the deformation, the two points $0$ and $e$ in the Mountain Pass Theorem remain fixed.
Proof of the Deformation Theorem.
The main idea is to solve an ODE initial value problem (with respect to $t$) for each $u \in X$, roughly of the following form:
$$\frac{d\eta}{dt}(t,u) = -J'(\eta(t,u)), \qquad \eta(0,u) = u. \quad (2.26)$$
That is, we let the 'flow' go along the direction of $-J'(\eta(t,u))$, so that the value of $J(\eta(t,u))$ decreases as $t$ increases. However, we need to modify the right hand side of the equation a little bit, so that the flow satisfies the other desired conditions.

Step 1. We first choose a small $\epsilon > 0$, so that the flow does not go too slowly in $J_{c+\epsilon} \setminus J_{c-\epsilon}$. This is actually guaranteed by (PS). We claim that there exist constants $0 < a, \epsilon < 1$ such that
$$\|J'(u)\| \ge a, \quad \forall u \in J_{c+\epsilon} \setminus J_{c-\epsilon}. \quad (2.27)$$
Otherwise, there would exist sequences $a_k\to 0$, $\epsilon_k\to 0$ and elements $u_k \in J_{c+\epsilon_k} \setminus J_{c-\epsilon_k}$ with $\|J'(u_k)\| \le a_k$. Then by virtue of the (PS) condition, there is a subsequence $\{u_{k_i}\}$ that converges to an element $u_o \in X$. Since $J \in C^1(X, R^1)$, we must have
$$J(u_o) = c \quad \text{and} \quad J'(u_o) = 0.$$
This contradicts the assumption that $c$ is not a critical value of $J$. Therefore (2.27) holds.

Step 2. To satisfy condition (ii), i.e. to make the flow stay still in the set
$$M := \{u \in X \mid J(u) \le c-\epsilon \text{ or } J(u) \ge c+\epsilon\},$$
we could multiply the right hand side of equation (2.26) by $\mathrm{dist}(\eta, M)$, but this would make the flow go too slowly near $M$. To modify further, we choose $0 < \delta < \epsilon$ such that
$$\delta < \frac{a^2}{2}, \quad (2.28)$$
and let
$$N := \{u \in X \mid c-\delta \le J(u) \le c+\delta\}.$$
Now for $u \in X$, define
$$g(u) := \frac{\mathrm{dist}(u, M)}{\mathrm{dist}(u, M) + \mathrm{dist}(u, N)},$$
and let
$$h(s) := \begin{cases} 1 & \text{if } 0 \le s \le 1,\\ \frac{1}{s} & \text{if } s > 1.\end{cases}$$
We finally modify $-J'(\eta)$ to
$$F(\eta) := -g(\eta)\, h(\|J'(\eta)\|)\, J'(\eta).$$
The factor $h(\cdot)$ is introduced to ensure the boundedness of $F(\cdot)$.

Step 3. Now for each fixed $u \in X$, we consider the modified ODE problem
$$\frac{d\eta}{dt}(t,u) = F(\eta(t,u)), \qquad \eta(0,u) = u.$$
One can verify that $F$ is bounded and Lipschitz continuous on bounded sets, and therefore, by well-known ODE theory, there exists a unique solution $\eta(t,u)$ for all time $t \ge 0$.
Condition (i) is satisfied automatically. From the definition of $g$,
$$g(u) = 0, \quad \forall u \in M;$$
hence condition (ii) is also satisfied.
To verify (iii) and (iv), we compute
$$\frac{d}{dt} J(\eta(t,u)) = \Big\langle J'(\eta(t,u)), \frac{d\eta}{dt}(t,u)\Big\rangle = \langle J'(\eta(t,u)), F(\eta(t,u))\rangle = -g(\eta(t,u))\, h(\|J'(\eta(t,u))\|)\, \|J'(\eta(t,u))\|^2. \quad (2.29)$$
In the above equality, the right hand side is obviously non-positive, and hence $J(\eta(t,u))$ is non-increasing. This verifies (iii).
To see (iv), it suffices to show that for each $u$ in $N = J_{c+\delta} \setminus J_{c-\delta}$, we have $\eta(1,u) \in J_{c-\delta}$.
Notice that for $\eta \in N$, $g(\eta) \equiv 1$, and (2.29) becomes
$$\frac{d}{dt} J(\eta(t,u)) = -h(\|J'(\eta(t,u))\|)\, \|J'(\eta(t,u))\|^2.$$
By the definition of $h$ and (2.27), we have
$$\frac{d}{dt} J(\eta(t,u)) = \begin{cases} -\|J'(\eta(t,u))\|^2 \le -a^2 & \text{if } \|J'(\eta(t,u))\| \le 1,\\ -\|J'(\eta(t,u))\| \le -a \le -a^2 & \text{if } \|J'(\eta(t,u))\| > 1,\end{cases}$$
where we used $a < 1$ in the second case. In either case (as long as the flow stays above the level $c-\delta$), we derive
$$J(\eta(1,u)) \le J(\eta(0,u)) - a^2 \le c+\delta - a^2 \le c-\delta.$$
This establishes (iv) and therefore completes the proof of the Theorem. □
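The downhill flow in the proof can be visualized in the toy example $f(x,y) = x^2 - y^2$: away from critical points, following $-Df$ strictly decreases $f$. A minimal numerical sketch (forward Euler with our own step sizes; the cutoff factors $g$ and $h$ are omitted since the sample point stays in the region of interest):

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 - y**2

def grad_f(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def flow(p, t_final=1.0, steps=1000):
    # forward-Euler integration of dp/dt = -Df(p), the unmodified flow (2.26)
    dt = t_final / steps
    for _ in range(steps):
        p = p - dt * grad_f(p)
    return p

start = np.array([0.3, 1.0])      # a non-critical point, f(start) = -0.91
end = flow(start)
assert f(end) < f(start)          # the flow strictly decreases f
```

Along this flow the $x$-coordinate contracts and the $y$-coordinate expands, which is exactly the change of topology of the level sets near the saddle value $c = 0$.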
With this Deformation Theorem, the proof of the Mountain Pass Theorem becomes very simple.
The Proof of the Mountain Pass Theorem.
First, by the definition, we have $c < \infty$. To see this, we take the particular path $\gamma(t) = te$ and consider the function
$$g(t) = J(\gamma(t)).$$
It is continuous on the closed interval $[0,1]$, and hence bounded from above. Therefore
$$c \le \max_{u\in\gamma([0,1])} J(u) = \max_{t\in[0,1]} g(t) < \infty.$$
On the other hand, for any $\gamma \in \Gamma$,
$$\gamma([0,1]) \cap \partial B_\rho(0) \ne \emptyset.$$
Hence by condition $(J_1)$,
$$\max_{u\in\gamma([0,1])} J(u) \ge \inf_{v\in\partial B_\rho(0)} J(v) \ge \alpha,$$
and it follows that $c \ge \alpha$.
Suppose the number $c$ so defined is not a critical value. We will use the Deformation Theorem to continuously deform a path $\gamma \in \Gamma$ down to the level set $J_{c-\delta}$ and derive a contradiction. More precisely, choose the $\epsilon$ in the Deformation Theorem to be $\frac{\alpha}{2}$, and $0 < \delta < \epsilon$. From the definition of $c$, we can pick a path $\gamma \in \Gamma$ such that
$$\max_{u\in\gamma([0,1])} J(u) \le c+\delta,$$
i.e.
$$\gamma([0,1]) \subset J_{c+\delta}.$$
Now by the Deformation Theorem, this path can be continuously deformed down to the level set $J_{c-\delta}$, that is,
$$\eta(1, \gamma([0,1])) \subset J_{c-\delta},$$
or in other words,
$$\max_{u\in\eta(1,\gamma([0,1]))} J(u) \le c-\delta. \quad (2.30)$$
On the other hand, since $0$ and $e$ are in $J_{c-\epsilon}$, by (ii) of the Deformation Theorem,
$$\eta(1, 0) = 0 \quad \text{and} \quad \eta(1, e) = e.$$
This means that $\eta(1, \gamma([0,1]))$ is also a path in $\Gamma$, and therefore we must have
$$\max_{u\in\eta(1,\gamma([0,1]))} J(u) \ge c.$$
This contradicts (2.30) and hence completes the proof of the Theorem. □
2.4.6 Existence of a Minimax via the Mountain Pass Theorem

Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem
$$\begin{cases} -\Delta u = f(x,u), & x \in \Omega,\\ u(x) = 0, & x \in \partial\Omega.\end{cases} \quad (2.31)$$
Although the method we illustrate here can be applied to more general second order uniformly elliptic equations, we prefer this simple model, which better illustrates the main ideas.
We assume that $f(x,u)$ satisfies:
$(f_1)$ $f(x,s) \in C(\bar\Omega\times R^1, R^1)$;
$(f_2)$ there exist constants $c_1, c_2 \ge 0$ such that
$$|f(x,s)| \le c_1 + c_2|s|^p,$$
with $0 \le p < \frac{n+2}{n-2}$ if $n > 2$. If $n = 1$, $(f_2)$ can be dropped; if $n = 2$, it suffices that
$$|f(x,s)| \le c_1 e^{\phi(s)},$$
with $\phi(s)/s^2\to 0$ as $|s|\to\infty$;
$(f_3)$ $f(x,s) = o(|s|)$ as $s\to 0$; and
$(f_4)$ there are constants $\mu > 2$ and $r \ge 0$ such that for $|s| \ge r$,
$$0 < \mu F(x,s) \le s f(x,s),$$
where
$$F(x,s) = \int_0^s f(x,t)\,dt.$$
Remark 2.4.2 A simple example of such a function $f(x,u)$ is $|u|^{p-1}u$.
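For the model nonlinearity in Remark 2.4.2, conditions $(f_3)$ and $(f_4)$ can be checked by hand: $F(x,s) = |s|^{p+1}/(p+1)$, and $\mu = p+1 > 2$ gives $\mu F(x,s) = s f(x,s)$, so $(f_4)$ holds with equality. A quick numerical confirmation (the sample values of $p$ and $s$ are arbitrary):

```python
p = 2.0                     # any exponent with 1 < p < (n+2)/(n-2)
mu = p + 1                  # the natural choice of mu in (f_4)

def f(s):
    return abs(s) ** (p - 1) * s

def F(s):
    return abs(s) ** (p + 1) / (p + 1)   # primitive of f with F(0) = 0

# (f_3): f(s) = o(|s|) as s -> 0, since f(s)/s = |s|^{p-1} -> 0
for s in [1e-2, 1e-4, 1e-6]:
    assert abs(f(s)) / s < 10 * s ** (p - 1)

# (f_4): 0 < mu*F(s) <= s*f(s) for |s| >= r (here with equality, any r > 0)
for s in [0.5, -0.5, 3.0, -3.0]:
    assert 0 < mu * F(s) <= s * f(s) + 1e-12
```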
Theorem 2.4.6 (Rabinowitz [Ra]) If $f$ satisfies conditions $(f_1)$-$(f_4)$, then problem (2.31) possesses a nontrivial weak solution.

To find weak solutions of the problem, we seek minimax critical points of the functional
$$J(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x,u)\,dx$$
on the Hilbert space $H := H^1_0(\Omega)$. We verify that $J$ satisfies the conditions of the Mountain Pass Theorem.
We first need to show that $J \in C^1(H, R^1)$. Since we have already verified that the first term
$$\int_\Omega |Du|^2\,dx$$
is continuously differentiable, we only need to check the second term
$$I(u) := \int_\Omega F(x,u)\,dx.$$
Proposition 2.4.1 (Rabinowitz [Ra]) Let $\Omega \subset R^n$ be a bounded domain and let $g$ satisfy:
$(g_1)$ $g \in C(\bar\Omega\times R^1, R^1)$; and
$(g_2)$ there are constants $r, s \ge 1$ and $a_1, a_2 \ge 0$ such that
$$|g(x,\xi)| \le a_1 + a_2|\xi|^{r/s} \quad \text{for all } x \in \bar\Omega,\ \xi \in R^1.$$
Then the map $u(x)\to g(x,u(x))$ belongs to $C(L^r(\Omega), L^s(\Omega))$.
Proof. By $(g_2)$,
$$\int_\Omega |g(x,u(x))|^s\,dx \le \int_\Omega (a_1 + a_2|u|^{r/s})^s\,dx \le a_3\int_\Omega (1 + |u|^r)\,dx.$$
It follows that if $u \in L^r(\Omega)$, then $g(x,u) \in L^s(\Omega)$. That is,
$$g(x,\cdot): L^r(\Omega)\to L^s(\Omega).$$
Now we verify the continuity of the map. Fix $u \in L^r(\Omega)$. Given any $\epsilon > 0$, we want to show that there is a $\delta > 0$ such that
$$\text{whenever } \|\phi\|_{L^r(\Omega)} < \delta, \text{ we have } \|g(\cdot, u+\phi) - g(\cdot, u)\|_{L^s(\Omega)} < \epsilon. \quad (2.32)$$
Let
$$\Omega_1 := \{x \in \bar\Omega \mid |\phi(x)| \le m\} \quad \text{and} \quad \Omega_2 := \bar\Omega \setminus \Omega_1.$$
Let
$$I_i = \int_{\Omega_i} |g(x, u(x)+\phi(x)) - g(x, u(x))|^s\,dx, \quad i = 1, 2.$$
By the continuity of $g(x,\cdot)$, for any $\eta > 0$ there exists $m > 0$ such that
$$|g(x, u(x)+\phi(x)) - g(x, u(x))| \le \eta, \quad \forall x \in \Omega_1.$$
It follows that
$$I_1 \le |\Omega_1|\eta^s \le |\Omega|\eta^s,$$
where $|\Omega|$ is the volume of $\Omega$. For the given $\epsilon > 0$, we choose $\eta$ and then $m$, so that
$$|\Omega|\eta^s < \Big(\frac{\epsilon}{2}\Big)^s,$$
and therefore
$$I_1 \le \Big(\frac{\epsilon}{2}\Big)^s. \quad (2.33)$$
Now we fix this $m$ and estimate $I_2$.
By $(g_2)$, we have
$$I_2 \le \int_{\Omega_2} \big(c_1 + c_2(|u|^r + |\phi|^r)\big)\,dx \le c_1|\Omega_2| + c_2\int_{\Omega_2} |u|^r\,dx + c_2\|\phi\|^r_{L^r(\Omega)}, \quad (2.34)$$
for some constants $c_1$ and $c_2$.
Moreover, as $\|\phi\|_{L^r(\Omega)} < \delta$,
$$\delta^r > \int_\Omega |\phi|^r\,dx \ge m^r|\Omega_2|. \quad (2.35)$$
Since $u \in L^r(\Omega)$, we can make $\int_{\Omega_2} |u|^r\,dx$ as small as we wish by making $|\Omega_2|$ small. Now by (2.35), we can choose $\delta$ so small that $|\Omega_2|$ is sufficiently small, and hence the right hand side of (2.34) is less than $(\frac{\epsilon}{2})^s$. Consequently, by (2.33), for such a small $\delta$,
$$I_1 + I_2 \le \Big(\frac{\epsilon}{2}\Big)^s + \Big(\frac{\epsilon}{2}\Big)^s \le \epsilon^s,$$
since $s \ge 1$. This verifies (2.32), and therefore completes the proof of the Proposition. □
Proposition 2.4.2 (Rabinowitz [Ra]) Assume that $f(x,s)$ satisfies $(f_1)$ and $(f_2)$. Let $n \ge 3$. Then $I \in C^1(H, R)$ and
$$\langle I'(u), v\rangle = \int_\Omega f(x,u)v\,dx, \quad \forall v \in H.$$
Moreover, $I(\cdot)$ is weakly continuous and $I'(\cdot)$ is compact, i.e.
$$I(u_k)\to I(u) \text{ and } I'(u_k)\to I'(u), \text{ whenever } u_k \rightharpoonup u \text{ in } H.$$
Proof. We will first show that $I$ is Fréchet differentiable on $H$ and then prove that $I'(u)$ is continuous.
1. Let $u, v \in H$. We want to show that given any $\epsilon > 0$, there exists a $\delta = \delta(\epsilon, u)$ such that
$$Q(u,v) := \Big| I(u+v) - I(u) - \int_\Omega f(x,u)v\,dx \Big| \le \epsilon\|v\| \quad (2.36)$$
whenever $\|v\| \le \delta$. Here $\|v\|$ denotes the $H^1_0(\Omega)$ norm of $v$.
In fact,
$$Q(u,v) \le \int_\Omega |F(x, u(x)+v(x)) - F(x, u(x)) - f(x, u(x))v(x)|\,dx = \int_\Omega |f(x, u(x)+\xi(x)) - f(x, u(x))|\,|v(x)|\,dx \le \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \le \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,K\|v\|. \quad (2.37)$$
Here we have applied the Mean Value Theorem (with $|\xi(x)| \le |v(x)|$), the Hölder inequality, and the Sobolev inequality
$$\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \le K\|v\|.$$
In Proposition 2.4.1, let $r = \frac{2n}{n-2}$ and $s = \frac{2n}{n+2}$. Then by $(f_2)$, we deduce that the map
$$u(x)\to f(x, u(x))$$
is continuous from $L^{\frac{2n}{n-2}}(\Omega)$ to $L^{\frac{2n}{n+2}}(\Omega)$. Hence for any given $\epsilon > 0$, we can choose a sufficiently small $\delta > 0$ such that whenever $\|v\| < \delta$, we have $\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} < K\delta$, hence $\|\xi\|_{L^{\frac{2n}{n-2}}(\Omega)} < K\delta$, and therefore
$$\|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)} < \frac{\epsilon}{K}.$$
It follows from (2.37) that
$$Q(u,v) < \epsilon\|v\|, \quad \text{whenever } \|v\| < \delta.$$
This verifies (2.36); hence $I(\cdot)$ is differentiable, and
$$\langle I'(u), v\rangle = \int_\Omega f(x, u(x))v(x)\,dx. \quad (2.38)$$
2. To verify the continuity of $I'(\cdot)$, we show that for each fixed $u$,
$$\|I'(u+\phi) - I'(u)\|\to 0, \quad \text{as } \|\phi\|\to 0. \quad (2.39)$$
By definition,
$$\|I'(u+\phi) - I'(u)\| = \sup_{\|v\|\le 1}\langle I'(u+\phi) - I'(u), v\rangle = \sup_{\|v\|\le 1}\int_\Omega [f(x, u(x)+\phi(x)) - f(x, u(x))]v(x)\,dx \le \sup_{\|v\|\le 1}\|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,K\|v\| \le K\|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \quad (2.40)$$
Here we have applied (2.38), the Hölder inequality, and the Sobolev inequality. Now (2.39) follows again from Proposition 2.4.1. Therefore, $I'(\cdot)$ is continuous.
3. Finally, we verify the weak continuity of $I(u)$ and the compactness of $I'(u)$. Assume that
$$\{u_k\} \subset H, \quad \text{and} \quad u_k \rightharpoonup u \text{ in } H.$$
We show that
$$I(u_k)\to I(u) \text{ as } k\to\infty.$$
In fact,
$$|I(u_k) - I(u)| \le \int_\Omega |F(x, u_k(x)) - F(x, u(x))|\,dx \le \int_\Omega |f(x, \xi_k(x))|\,|u_k(x) - u(x)|\,dx \le \|f(\cdot, \xi_k)\|_{L^r(\Omega)}\,\|u_k - u\|_{L^q(\Omega)} \le C\|\xi_k\|_{L^{\frac{2n}{n-2}}(\Omega)}\,\|u_k - u\|_{L^q(\Omega)}. \quad (2.41)$$
Here we have applied the Mean Value Theorem and the Hölder inequality, with $\xi_k(x)$ between $u_k(x)$ and $u(x)$, and
$$r = \frac{2n}{p(n-2)}, \quad q = \frac{r}{r-1} = \frac{2n}{2n - p(n-2)}.$$
It is easy to see that
$$q < \frac{2n}{n-2}.$$
Since a weakly convergent sequence is bounded, $\{u_k\}$ is bounded in $H$, and hence, by the Sobolev Embedding, it is bounded in $L^{\frac{2n}{n-2}}(\Omega)$; so is $\{\xi_k\}$. Furthermore, by the Compact Embedding of $H$ into $L^q(\Omega)$, the last term of (2.41) approaches zero as $k\to\infty$. This verifies the weak continuity of $I(\cdot)$.
Using a similar argument as in deriving (2.40), we obtain
$$\|I'(u_k) - I'(u)\| \le K\|f(\cdot, u_k) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \quad (2.42)$$
Since $p < \frac{n+2}{n-2}$, we have $\frac{2np}{n+2} < \frac{2n}{n-2}$. Then by the Compact Embedding
$$H \hookrightarrow\hookrightarrow L^{\frac{2np}{n+2}}(\Omega),$$
we derive
$$u_k\to u \text{ strongly in } L^{\frac{2np}{n+2}}(\Omega).$$
Consequently, from Proposition 2.4.1,
$$f(\cdot, u_k)\to f(\cdot, u) \text{ strongly in } L^{\frac{2n}{n+2}}(\Omega).$$
Together with (2.42), we arrive at
$$I'(u_k)\to I'(u) \text{ as } k\to\infty.$$
This completes the proof of the Proposition. □
Now let’s come back to Problem (2.31) and prove Theorem 2.4.6. We will
verify that J() satisﬁes the conditions in the Mountain Pass Theorem, and
thus possesses a nontrivial minimax critical point, which is a weak solution
of (2.31).
First we verify the (PS) condition. Assume that ¦u
k
¦ is a sequence in H
satisfying
[J(u
k
)[ ≤ M and J
(u
k
)→0.
We show that ¦u
k
¦ possesses a convergent subsequence.
By (f
4
), we have
µM +J
(u
k
)u
k
 ≥ µJ(u
k
)− < J
(u
k
), u
k
>
= (
µ
2
−1)
Ω
[Du
k
[
2
dx +
Ω
[f(x, u
k
)u
k
−µF(x, u
k
)] dx
≥ (
µ
2
−1)
Ω
[Du
k
[
2
dx +
u
k
(x)≤r
[f(x, u
k
)u
k
−µF(x, u
k
)] dx
≥ (
µ
2
−1)
Ω
[Du
k
[
2
dx −(a
1
r +a
2
r
p+1
)[Ω[
= c
o
u
k

2
−C
o
, (2.43)
for some positive constant c
o
and C
o
. Since J
(u
k
) is bounded, (2.43) implies
that ¦u
k
¦ must be bounded in H. Hence there exists a subsequence (still
denoted by ¦u
k
¦), which converges weakly to some element u
o
in H.
Let A : H→H
∗
be the duality map between H and its dual space H
∗
.
Then for any u, v ∈ H,
< Au, v >=
Ω
Du Dvdx.
Consequently,
2.4 Variational Methods 71
A
−1
J
(u) = u −A
−1
f(, u). (2.44)
From Proposition 2.4.2,
f(, u
k
)→f(, u) strongly in H
∗
.
It follows that
u
k
= A
−1
J
(u
k
) +A
−1
f(, u
k
)→A
−1
f(, u
o
), as k→∞.
This veriﬁes the (PS).
To see that there is a 'mountain range' surrounding the origin, we estimate $\int_\Omega F(x,u)\,dx$. By $(f_3)$, for any given $\epsilon > 0$ there is a $\delta > 0$ such that, for all $x \in \bar\Omega$,
$$|F(x,s)| \le \epsilon|s|^2, \quad \text{whenever } |s| < \delta. \quad (2.45)$$
While by $(f_2)$, there is a constant $M = M(\delta)$ such that for all $x \in \bar\Omega$ and $|s| \ge \delta$,
$$|F(x,s)| \le M|s|^{p+1}. \quad (2.46)$$
Combining (2.45) and (2.46), we have
$$|F(x,s)| \le \epsilon|s|^2 + M|s|^{p+1}, \quad \text{for all } x \in \bar\Omega \text{ and all } s \in R.$$
It follows, via the Poincaré and the Sobolev inequalities, that
$$\Big|\int_\Omega F(x,u)\,dx\Big| \le \epsilon\int_\Omega u^2\,dx + M\int_\Omega |u|^{p+1}\,dx \le C(\epsilon + M\|u\|^{p-1})\|u\|^2. \quad (2.47)$$
Consequently,
$$J(u) \ge \Big[\frac{1}{2} - C(\epsilon + M\|u\|^{p-1})\Big]\|u\|^2. \quad (2.48)$$
Choose $\epsilon = \frac{1}{8C}$. For this $\epsilon$, we fix $M$; then choose $\|u\| = \rho$ small, so that
$$CM\rho^{p-1} \le \frac{1}{8}.$$
Then from (2.48), we deduce
$$J|_{\partial B_\rho(0)} \ge \frac{1}{4}\rho^2.$$
To see the existence of a point $e \in H$ with $J(e) \le 0$, we fix any $u \ne 0$ in $H$, and consider
$$J(tu) = \frac{t^2}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x,tu)\,dx. \quad (2.49)$$
By $(f_4)$, we have, for $|s| \ge r$,
$$\frac{d}{ds}\big(|s|^{-\mu}F(x,s)\big) = |s|^{-\mu-2}\,s\,\big(-\mu F(x,s) + s f(x,s)\big) \begin{cases} \ge 0 & \text{if } s \ge r,\\ \le 0 & \text{if } s \le -r.\end{cases}$$
In either case, we deduce, for $|s| \ge r$,
$$F(x,s) \ge a_3|s|^\mu;$$
and it follows that, for all $s$,
$$F(x,s) \ge a_3|s|^\mu - a_4. \quad (2.50)$$
Combining (2.49) and (2.50), and taking into account that $\mu > 2$, we conclude that
$$J(tu)\to -\infty, \quad \text{as } t\to\infty.$$
Now $J$ satisfies all the conditions of the Mountain Pass Theorem; hence it possesses a minimax critical point $u_o$, which is a weak solution of (2.31). Moreover, $J(u_o) = c \ge \alpha > 0$, so it is a nontrivial solution. This completes the proof of Theorem 2.4.6. □
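The mountain pass geometry just verified can be mimicked in finite dimensions. For instance, $j(u) = \frac{1}{2}|u|^2 - \frac{1}{4}|u|^4$ on $R^2$ vanishes at the origin, is positive on a small sphere, and is negative for large $|u|$; every path from $0$ to a far point $e$ must cross the sphere $|u| = 1$, where $j = \frac{1}{4}$, so the minimax value is $\frac{1}{4}$. A minimal numerical sketch (the function and the radial path are our own illustration, not the functional $J$ of the proof):

```python
import numpy as np

def j(u):
    r2 = np.sum(u * u, axis=-1)
    return 0.5 * r2 - 0.25 * r2 ** 2

e = np.array([2.0, 0.0])          # a point beyond the 'range': j(e) = -2 <= 0
s = np.linspace(0.0, 1.0, 2001)

# straight radial path from 0 to e; its highest point sits on |u| = 1
path = np.outer(s, e)
c_radial = j(path).max()

# every path from 0 to e crosses |u| = 1, where j = 1/4, so the
# radial path already realizes the minimax value c = 1/4:
assert abs(c_radial - 0.25) < 1e-6
assert j(e) <= 0 and j(np.zeros(2)) == 0
```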
3 Regularity of Solutions

3.1 $W^{2,p}$ a Priori Estimates
3.1.1 Newtonian Potentials
3.1.2 Uniform Elliptic Equations
3.2 $W^{2,p}$ Regularity of Solutions
3.2.1 The Case $p \ge 2$
3.2.2 The Case $1 < p < 2$
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity
3.3 Regularity Lifting
3.3.1 Bootstraps
3.3.2 The Regularity Lifting Theorem
3.3.3 Applications to PDEs
3.3.4 Applications to Integral Equations
In the previous chapter, we used functional analysis, mainly the calculus of variations, to establish the existence of weak solutions for second order linear or semilinear elliptic equations. The weak solutions we obtained were in the Sobolev space $W^{1,2}$, and hence, by the Sobolev embedding, in the $L^p$ space for $p \le \frac{n+2}{n-2}$.
Usually, weak solutions are easier to obtain than classical ones, and this is particularly true for nonlinear equations. However, in practice, we are often required to find classical solutions. Therefore, one naturally wants to know whether these weak solutions are actually differentiable, so that they become classical ones. These questions will be answered here.
In this chapter, we will introduce methods to show that, in most cases, a weak solution is in fact smooth. This is called the regularity argument. Very often, regularity is equivalent to an a priori estimate for solutions. To see the difference between regularity and the a priori estimate, take the equation
$$-\Delta u = f(x)$$
for example. Roughly speaking, the $W^{2,p}$ regularity theory infers that:
if $u \in W^{1,p}$ is a weak solution and $f \in L^p$, then $u \in W^{2,p}$;
while the $W^{2,p}$ a priori estimate says:
if $u \in W^{1,p}_0 \cap W^{2,p}$ is a weak solution and $f \in L^p$, then
$$\|u\|_{W^{2,p}} \le C\|f\|_{L^p}.$$
In the a priori estimate, we pre-assumed that $u$ is in $W^{2,p}$. It might seem a little surprising at this moment that a priori estimates turn out to be powerful tools in deriving regularity. The readers will see in Section 3.2 how a priori estimates and the uniqueness of solutions lead to regularity.
Let $\Omega$ be an open bounded set in $R^n$. Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. Consider the second order partial differential equation
$$Lu := -\sum_{i,j=1}^n a_{ij}(x)u_{x_ix_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u = f(x). \quad (3.1)$$
We assume $L$ is uniformly elliptic, that is, there exists a constant $\delta > 0$ such that
$$\sum_{i,j=1}^n a_{ij}(x)\xi_i\xi_j \ge \delta|\xi|^2 \quad \text{for a.e. } x \in \Omega \text{ and for all } \xi \in R^n.$$
In Section 3.1, we establish the $W^{2,p}$ a priori estimate for solutions of (3.1). We show that if the function $f(x)$ is in $L^p(\Omega)$ and $u \in W^{2,p}(\Omega)$ is a strong solution of (3.1), then there exists a constant $C$ such that
$$\|u\|_{W^{2,p}} \le C(\|u\|_{L^p} + \|f\|_{L^p}).$$
We will first establish this for the Newtonian potential, then use the method of 'frozen coefficients' to generalize it to uniformly elliptic equations.
In Section 3.2, we establish $W^{2,p}$ regularity. We show that if $f(x)$ is in $L^p(\Omega)$, then any weak solution $u$ of (3.1) is in $W^{2,p}(\Omega)$.
In Section 3.3, we introduce and prove a Regularity Lifting Theorem. It provides a simple method for the study of regularity. The version we present here contains some new developments; it is much more general and very easy to use. We believe that the method will be helpful to both experts and non-experts in the field.
We will also use examples to show how this method can be applied to boost the regularity of solutions of PDEs as well as of integral equations.
3.1 $W^{2,p}$ a Priori Estimates

We establish the $W^{2,p}$ estimate for solutions of
$$Lu = f(x), \quad x \in \Omega. \quad (3.2)$$
First we start with the Newtonian potential.

3.1.1 Newtonian Potentials

Let $\Omega$ be a bounded domain of $R^n$, and let
$$\Gamma(x) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{|x|^{n-2}}, \quad n \ge 3,$$
be the fundamental solution of the Laplace equation. The Newtonian potential of $f$ is
$$w(x) = \int_\Omega \Gamma(x-y)f(y)\,dy.$$
We prove
Theorem 3.1.1 Let $f \in L^p(\Omega)$ for $1 < p < \infty$, and let $w$ be the Newtonian potential of $f$. Then $w \in W^{2,p}(\Omega)$,
$$\Delta w = f(x), \quad \text{a.e. } x \in \Omega, \quad (3.3)$$
and
$$\|D^2 w\|_{L^p(\Omega)} \le C\|f\|_{L^p(\Omega)}. \quad (3.4)$$
Write $w = Nf$, where $N$ is obviously a linear operator. For fixed $i, j$, define the linear operator $T$ by
$$Tf = D_{ij}Nf = D_{ij}\int_{R^n}\Gamma(x-y)f(y)\,dy.$$
To prove Theorem 3.1.1, it is equivalent to show that
$$T: L^p(\Omega)\to L^p(\Omega) \text{ is a bounded linear operator.} \quad (3.5)$$
Our proof can actually be applied to more general operators. To this end, we introduce the concept of weak type and strong type operators.
Define
$$\mu_f(t) = |\{x \in \Omega \mid |f(x)| > t\}|.$$
For $p \ge 1$, the weak $L^p$ space $L^p_w(\Omega)$ is the collection of functions $f$ that satisfy
$$\|f\|^p_{L^p_w(\Omega)} = \sup\{\mu_f(t)\,t^p \mid t > 0\} < \infty.$$
An operator $T: L^p(\Omega)\to L^q(\Omega)$ is of strong type $(p,q)$ if
$$\|Tf\|_{L^q(\Omega)} \le C\|f\|_{L^p(\Omega)}, \quad \forall f \in L^p(\Omega);$$
$T$ is of weak type $(p,q)$ if
$$\|Tf\|_{L^q_w(\Omega)} \le C\|f\|_{L^p(\Omega)}, \quad \forall f \in L^p(\Omega).$$
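To get a feel for these definitions, note that $f(x) = 1/x$ on $\Omega = (0,1)$ lies in weak $L^1$ but not in $L^1$: its distribution function is $\mu_f(t) = \min(1, 1/t)$, so $t\,\mu_f(t) \le 1$ for all $t > 0$, while $\int_0^1 dx/x$ diverges. A minimal numerical check (the grid resolution is our own choice):

```python
import numpy as np

x = np.linspace(1e-6, 1.0, 10**6)     # grid on (0, 1]
f = 1.0 / x

def mu(t):
    # distribution function: measure of {x in (0,1) : |f(x)| > t}
    return np.mean(f > t)             # uniform grid, so the mean approximates measure

# weak-L1 quasi-norm: sup_t t * mu(t) stays bounded (here by about 1)
ts = [2.0, 10.0, 100.0, 1000.0]
weak_norm = max(t * mu(t) for t in ts)
assert 0.9 < weak_norm <= 1.01

# by contrast, f is not in L1: the Riemann sum of 1/x grows without
# bound as the left endpoint of the grid tends to 0
```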
Outline of the Proof of (3.5).
We decompose the proof into the proofs of the following four lemmas.
First, we use the Fourier transform to show:
Lemma 3.1.1 $T: L^2(\Omega)\to L^2(\Omega)$ is a bounded linear operator, i.e. $T$ is of strong type $(2,2)$.
Secondly, we use the Calderón-Zygmund Decomposition Lemma to prove:
Lemma 3.1.2 $T$ is of weak type $(1,1)$.
Thirdly, we employ the Marcinkiewicz Interpolation Theorem to derive:
Lemma 3.1.3 $T$ is of strong type $(r,r)$ for any $1 < r \le 2$.
Finally, by duality, we conclude:
Lemma 3.1.4 $T$ is of strong type $(p,p)$ for $1 < p < \infty$.
The Proof of Lemma 3.1.1.
First we consider $f \in C_0^\infty(\Omega) \subset C_0^\infty(R^n)$. Then obviously $w \in C^\infty(R^n)$ and satisfies
$$\Delta w = f(x), \quad \forall x \in R^n.$$
Let
$$\hat f(\xi) = \frac{1}{(2\pi)^{n/2}}\int_{R^n} e^{i\langle x,\xi\rangle} f(x)\,dx$$
be the Fourier transform of $f$, where
$$\langle x,\xi\rangle = \sum_{k=1}^n x_k\xi_k$$
is the inner product of $R^n$, and $i = \sqrt{-1}$.
We need the following simple property of the transform:
$$\widehat{D_k f}(\xi) = -i\xi_k\hat f(\xi), \quad \text{and} \quad \widehat{D_{jk}f}(\xi) = -\xi_j\xi_k\hat f(\xi), \quad (3.6)$$
and the well-known Plancherel Theorem:
$$\|\hat f\|_{L^2(R^n)} = \|f\|_{L^2(R^n)}. \quad (3.7)$$
It follows from (3.6) and (3.7) that
$$\int_\Omega |f(x)|^2\,dx = \int_{R^n} |f(x)|^2\,dx = \int_{R^n} |\Delta w(x)|^2\,dx = \int_{R^n} |\widehat{\Delta w}(\xi)|^2\,d\xi = \int_{R^n} |\xi|^4|\hat w(\xi)|^2\,d\xi = \sum_{k,j=1}^n\int_{R^n}\xi_k^2\xi_j^2|\hat w(\xi)|^2\,d\xi = \sum_{k,j=1}^n\int_{R^n}|\widehat{D_{kj}w}(\xi)|^2\,d\xi = \sum_{k,j=1}^n\int_{R^n}|D_{kj}w(x)|^2\,dx = \int_{R^n}|D^2w|^2\,dx.$$
Consequently,
$$\|Tf\|_{L^2(\Omega)} \le \|f\|_{L^2(\Omega)}, \quad \forall f \in C_0^\infty(\Omega). \quad (3.8)$$
Now, to verify (3.8) for any $f \in L^2(\Omega)$, we simply pick a sequence $\{f_k\} \subset C_0^\infty(\Omega)$ that converges to $f$ in $L^2(\Omega)$, and take the limit. This proves the Lemma. □
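The identity $\int|\Delta w|^2 = \int|D^2w|^2$ behind Lemma 3.1.1 can be checked numerically on a periodic grid, where the discrete Fourier transform plays the role of $\hat f$ and multiplication by $-\xi_j\xi_k$ gives the second derivatives. A minimal sketch with numpy (the grid size and test function are our own choices; periodicity replaces the compact support assumption):

```python
import numpy as np

n = 256
# integer wavenumbers for the domain [0, 2*pi)^2
xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

w = np.exp(np.cos(X) + np.sin(Y))           # a smooth periodic 'potential'
w_hat = np.fft.fft2(w)

KX, KY = np.meshgrid(xi, xi, indexing="ij")
lap = np.fft.ifft2(-(KX**2 + KY**2) * w_hat).real          # Delta w
d2 = [np.fft.ifft2(-KA * KB * w_hat).real                  # D_{jk} w
      for KA in (KX, KY) for KB in (KX, KY)]

lhs = np.sum(lap**2)                        # ||Delta w||^2 (up to the grid measure)
rhs = sum(np.sum(d**2) for d in d2)         # sum over j,k of ||D_{jk} w||^2
assert abs(lhs - rhs) / lhs < 1e-10
```

The equality holds because, on the Fourier side, $\sum_{j,k}\xi_j^2\xi_k^2 = |\xi|^4$ and the discrete Parseval identity converts the spectral sums back to grid sums, exactly as in the chain of equalities above.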
The Proof of Lemma 3.1.2.
We need the following well-known Calderón-Zygmund Decomposition Lemma (see its proof in Appendix C).
Lemma 3.1.5 For $f \in L^1(R^n)$ and fixed $\alpha > 0$, there exist $E$, $G$ such that:
(i) $R^n = E \cup G$, $E \cap G = \emptyset$;
(ii) $|f(x)| \le \alpha$, a.e. $x \in E$;
(iii) $G = \bigcup_{k=1}^\infty Q_k$, where the $Q_k$ are disjoint cubes such that
$$\alpha < \frac{1}{|Q_k|}\int_{Q_k}|f(x)|\,dx \le 2^n\alpha.$$
For any $f \in L^1(\Omega)$, to apply the Calderón-Zygmund Decomposition Lemma, we first extend $f$ to vanish outside $\Omega$. For any given $\alpha > 0$, fix a cube $Q_o$ in $R^n$ large enough that
$$\int_{Q_o}|f(x)|\,dx \le \alpha|Q_o|.$$
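The dyadic stopping-time argument behind Lemma 3.1.5 is short enough to implement. Starting from a cube on which the average of $|f|$ is at most $\alpha$, keep bisecting; a piece is selected the first time its average exceeds $\alpha$, and its parent then guarantees the upper bound $2\alpha$ (the factor $2^n$ with $n = 1$). A minimal one-dimensional sketch (the sample $f$ and the resolution cutoff are our own choices):

```python
import numpy as np

def cz_decompose(f_vals, a, lo, hi, min_len=1e-3):
    """Dyadic Calderon-Zygmund selection on [lo, hi] for the step function
    represented by f_vals on a uniform grid; returns the selected 'bad' intervals."""
    avg = float(np.mean(np.abs(f_vals)))
    if avg > a:                       # selected: the parent had average <= a,
        return [(lo, hi, avg)]        # so a < avg <= 2*a in one dimension
    if len(f_vals) == 1 or hi - lo <= min_len:
        return []
    mid = len(f_vals) // 2
    return (cz_decompose(f_vals[:mid], a, lo, (lo + hi) / 2, min_len)
            + cz_decompose(f_vals[mid:], a, (lo + hi) / 2, hi, min_len))

# sample f with an integrable spike; alpha exceeds the global average
x = np.linspace(0.0, 1.0, 2**12, endpoint=False)
f = 1.0 / np.sqrt(np.abs(x - 0.3) + 1e-4)
alpha = 2.0 * float(np.mean(np.abs(f)))

cubes = cz_decompose(f, alpha, 0.0, 1.0)
assert len(cubes) > 0                 # the spike forces some selections
for lo, hi, avg in cubes:
    assert alpha < avg <= 2 * alpha   # property (iii) of the decomposition
```

The key point, visible in the recursion, is that a child's average can at most double the parent's, which is exactly how the two-sided bound in (iii) arises.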
We show that
$$\mu_{Tf}(\alpha) := |\{x \in R^n \mid |Tf(x)| \ge \alpha\}| \le C\,\frac{\|f\|_{L^1(\Omega)}}{\alpha}. \quad (3.9)$$
Split the function $f$ into a 'good' part $g$ and a 'bad' part $b$: $f = g + b$, where
$$g(x) = \begin{cases} f(x) & \text{for } x \in E,\\ \dfrac{1}{|Q_k|}\displaystyle\int_{Q_k}f(x)\,dx & \text{for } x \in Q_k,\ k = 1, 2, \ldots\end{cases}$$
Since the operator $T$ is linear, $Tf = Tg + Tb$, and therefore
$$\mu_{Tf}(\alpha) \le \mu_{Tg}\Big(\frac{\alpha}{2}\Big) + \mu_{Tb}\Big(\frac{\alpha}{2}\Big).$$
We will estimate $\mu_{Tg}(\frac{\alpha}{2})$ and $\mu_{Tb}(\frac{\alpha}{2})$ separately. The estimate of the first one is easy, because $g \in L^2$. To estimate $\mu_{Tb}(\frac{\alpha}{2})$, we divide $R^n$ into two parts: $G^*$ and $E^* := R^n \setminus G^*$ (see below for the precise definition of $G^*$). We will show that:
(a) $|G^*| \le \frac{C}{\alpha}\|f\|_{L^1(\Omega)}$, and
(b) $\big|\{x \in E^* \mid |Tb(x)| \ge \frac{\alpha}{2}\}\big| \le \frac{C}{\alpha}\int_{E^*}|Tb(x)|\,dx \le \frac{C}{\alpha}\|f\|_{L^1(\Omega)}$.
These will imply the desired estimate for $\mu_{Tb}(\frac{\alpha}{2})$.
Obviously, from the definition of $g$, we have
$$|g(x)| \le 2^n\alpha, \quad \text{almost everywhere,} \quad (3.10)$$
and
$$b(x) = 0 \text{ for } x \in E, \quad \text{and} \quad \int_{Q_k}b(x)\,dx = 0 \text{ for } k = 1, 2, \ldots \quad (3.11)$$
We first estimate $\mu_{Tg}$. By Lemma 3.1.1 and (3.10), we derive
$$\mu_{Tg}\Big(\frac{\alpha}{2}\Big) \le \frac{4}{\alpha^2}\int g^2(x)\,dx \le \frac{2^{n+2}}{\alpha}\int |g(x)|\,dx \le \frac{2^{n+2}}{\alpha}\int |f(x)|\,dx. \quad (3.12)$$
We then estimate $\mu_{Tb}$. Let
$$b_k(x) = \begin{cases} b(x) & \text{for } x \in Q_k,\\ 0 & \text{elsewhere.}\end{cases}$$
Then
$$Tb = \sum_{k=1}^\infty Tb_k.$$
For each fixed $k$, let $\{b_{km}\} \subset C_0^\infty(Q_k)$ be a sequence converging to $b_k$ in $L^2(\Omega)$ and satisfying
$$\int_{Q_k}b_{km}(x)\,dx = \int_{Q_k}b_k(x)\,dx = 0. \quad (3.13)$$
From the expression
$$Tb_{km}(x) = \int_{Q_k}D_{ij}\Gamma(x-y)\,b_{km}(y)\,dy,$$
one can see that, due to the singularity of $D_{ij}\Gamma(x-y)$ in $Q_k$ and the fact that $b_{km}$ may not be bounded in $Q_k$, one can only estimate $Tb_{km}(x)$ when $x$ is a positive distance away from $Q_k$. For this reason, we cover $Q_k$ by a bigger ball $B_k$ which has the same center as $Q_k$ and whose radius $\delta_k$ equals the diameter of $Q_k$. We now estimate the integral in the complement of $B_k$:
$$\int_{R^n\setminus B_k}|Tb_{km}(x)|\,dx = \int_{Q_o\setminus B_k}\Big|\int_{Q_k}D_{ij}\Gamma(x-y)\,b_{km}(y)\,dy\Big|\,dx = \int_{Q_o\setminus B_k}\Big|\int_{Q_k}[D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)]\,b_{km}(y)\,dy\Big|\,dx \le C\,\delta_k\int_{Q_o\setminus B_k}\frac{1}{|x|^{n+1}}\,dx\,\int_{Q_k}|b_{km}(y)|\,dy \le C_1\,\delta_k\int_{\delta_k}^\infty\frac{1}{r^2}\,dr\,\int_{Q_k}|b_{km}(y)|\,dy \le C_2\int_{Q_k}|b_{km}(y)|\,dy, \quad (3.14)$$
where $\bar y$ is the center of the cube $Q_k$. One small trick here is to subtract the term (which is $0$ by (3.13))
$$\int_{Q_k}D_{ij}\Gamma(x-\bar y)\,b_{km}(y)\,dy,$$
in order to produce the helpful factor $\delta_k$ by applying the mean value theorem to the difference:
$$|D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)| = |(y-\bar y)\cdot D(D_{ij}\Gamma)(x-\xi)| \le \delta_k\,|D(D_{ij}\Gamma)(x-\xi)|.$$
Now, letting $m\to\infty$ in (3.14), we obtain
$$\int_{R^n\setminus B_k}|Tb_k(x)|\,dx \le C\int_{Q_k}|b_k(y)|\,dy.$$
Let
$$G^* = \bigcup_{k=1}^\infty B_k \quad \text{and} \quad E^* = R^n\setminus G^*.$$
It follows that
$$\int_{E^*}|Tb(x)|\,dx \le \sum_{k=1}^\infty\int_{R^n\setminus G^*}|Tb_k|\,dx \le \sum_{k=1}^\infty\int_{R^n\setminus B_k}|Tb_k|\,dx \le C\sum_{k=1}^\infty\int_{Q_k}|b_k(y)|\,dy \le C\int_{R^n}|f(x)|\,dx. \quad (3.15)$$
Obviously,
$$\mu_{Tb}\Big(\frac{\alpha}{2}\Big) \le |G^*| + \Big|\Big\{x \in E^* \;\Big|\; |Tb(x)| \ge \frac{\alpha}{2}\Big\}\Big|. \quad (3.16)$$
By (iii) in the Calderón-Zygmund Decomposition Lemma, we have
$$|G^*| = \sum_{k=1}^\infty|B_k| = C\sum_{k=1}^\infty|Q_k| \le \frac{C}{\alpha}\sum_{k=1}^\infty\int_{Q_k}|f(x)|\,dx = \frac{C}{\alpha}\int_{R^n}|f(x)|\,dx. \quad (3.17)$$
Write
$$E^*_\alpha = \Big\{x \in E^* \;\Big|\; |Tb(x)| \ge \frac{\alpha}{2}\Big\}.$$
Then by (3.15), we derive
$$|E^*_\alpha|\,\frac{\alpha}{2} \le \int_{E^*_\alpha}|Tb(x)|\,dx \le \int_{E^*}|Tb(x)|\,dx \le C\int_{R^n}|f(x)|\,dx. \quad (3.18)$$
Now the desired inequality (3.9) is a direct consequence of (3.12), (3.16), (3.17), and (3.18). This completes the proof of the Lemma. □
The proof of Lemma 3.1.3. In the previous lemmas, we have shown that
the operator T is of weak type (1, 1) and strong type (2, 2) (of course also weak
type (2, 2)). Now Lemma 3.1.3 is a direct consequence of the Marcinkiewicz
interpolation theorem in the following restricted form:
Lemma 3.1.6 Let T be a linear operator from L^p(Ω) ∩ L^q(Ω) into itself with
1 ≤ p < q < ∞. If T is of weak type (p, p) and weak type (q, q), then for any
p < r < q, T is of strong type (r, r). More precisely, if there exist constants
B_p and B_q such that
\[
\mu_{Tf}(t) \le \Big(\frac{B_p \|f\|_p}{t}\Big)^p \quad \mbox{and} \quad
\mu_{Tf}(t) \le \Big(\frac{B_q \|f\|_q}{t}\Big)^q, \quad \forall f \in L^p(\Omega)\cap L^q(\Omega),
\]
then
\[
\|Tf\|_r \le C B_p^{\theta} B_q^{1-\theta} \|f\|_r, \quad \forall f \in L^p(\Omega)\cap L^q(\Omega),
\]
where
\[
\frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q}
\]
and C depends only on p, q, and r.
The proof of this lemma can be found in Appendix C.
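The interpolation parameter θ in Lemma 3.1.6 is determined by p, q, and r alone. A small numerical helper (our own illustration, not part of the text) makes this explicit; for the weak (1, 1) and weak (2, 2) bounds established for T, strong type (r, r) with r = 4/3 corresponds to θ = 1/2:

```python
# Solve 1/r = theta/p + (1 - theta)/q for theta (requires p < r < q).
def interpolation_theta(p, q, r):
    assert p < r < q
    return (1/r - 1/q) / (1/p - 1/q)

# Weak (1,1) and weak (2,2) as in the proof of Lemma 3.1.3:
theta = interpolation_theta(1, 2, 4/3)
assert abs(theta - 0.5) < 1e-12   # 3/4 = theta + (1 - theta)/2
assert 0 < theta < 1
```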
The proof of Lemma 3.1.4. From the previous lemma, we know that, for any
1 < r ≤ 2, we have
\[
\|Tg\|_{L^r(\Omega)} \le C_r \|g\|_{L^r(\Omega)}. \tag{3.19}
\]
Let
\[
\langle f, g\rangle = \int_{\Omega} f(x) g(x)\,dx
\]
be the duality pairing between f and g. Then it is easy to verify that
\[
\langle g, Tf\rangle = \langle Tg, f\rangle. \tag{3.20}
\]
Given any 2 < p < ∞, let r = \frac{p}{p-1}, i.e. \frac{1}{r} + \frac{1}{p} = 1. Obviously, 1 < r < 2. It
follows from (3.19) and (3.20) that
\[
\|Tf\|_{L^p} = \sup_{\|g\|_{L^r}=1} \langle g, Tf\rangle
= \sup_{\|g\|_{L^r}=1} \langle Tg, f\rangle
\le \sup_{\|g\|_{L^r}=1} \|f\|_{L^p}\, \|Tg\|_{L^r}
\le C_r \|f\|_{L^p}.
\]
This completes the proof of the Lemma.
3.1.2 Uniformly Elliptic Equations

In this section, we consider general second order uniformly elliptic equations
with Dirichlet boundary condition
\[
\begin{cases}
Lu := -\sum_{i,j=1}^n a_{ij}(x) u_{x_i x_j} + \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u = f(x), & x \in \Omega \\
u(x) = 0, & x \in \partial\Omega.
\end{cases} \tag{3.21}
\]
We assume that Ω is bounded with C^{2,α} boundary, a_{ij}(x) ∈ C^0(\bar\Omega), b_i(x) ∈
L^∞(Ω), c(x) ∈ L^∞(Ω), and
\[
\lambda |\xi|^2 \le a_{ij}\, \xi_i \xi_j \le \Lambda |\xi|^2
\]
for some positive constants λ and Λ.
Definition 3.1.1 We say that u is a strong solution of
\[
Lu = f(x), \quad x \in \Omega
\]
if u is twice weakly differentiable in Ω and satisfies the equation almost
everywhere in Ω.

Based on the results of the previous section, the estimates on Newtonian
potentials, we will establish a priori estimates on the strong solutions of (3.21).
Theorem 3.1.2 Let u ∈ W^{2,p}(Ω) ∩ W^{1,2}_0(Ω) be a strong solution of (3.21).
Assume that f ∈ L^p(Ω). Then
\[
\|u\|_{W^{2,p}(\Omega)} \le C(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}), \tag{3.22}
\]
where C is a constant depending on \|b_i\|_{L^\infty(\Omega)}, \|c\|_{L^\infty(\Omega)}, λ, Λ, n, p,
and the domain Ω.
Remark 3.1.1 The conditions
\[
b_i(x),\, c(x) \in L^\infty(\Omega)
\]
can be replaced by the weaker ones
\[
b_i(x),\, c(x) \in L^p(\Omega), \quad \mbox{for any } p > n.
\]
Proof of Theorem 3.1.2.

We divide the proof into two major parts.

Part 1. Interior Estimate
\[
\|\nabla^2 u\|_{L^p(K)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}), \tag{3.23}
\]
where K is any compact subset of Ω.

Part 2. Boundary Estimate
\[
\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}), \tag{3.24}
\]
where Ω_δ = \{x \in \Omega \mid d(x, \partial\Omega) > \delta\}.
Combining the interior and the boundary estimates, we obtain
\begin{align*}
\|u\|_{W^{2,p}(\Omega)} &\le \|u\|_{W^{2,p}(\Omega\setminus\Omega_{2\delta})} + \|u\|_{W^{2,p}(\Omega_\delta)} \\
&\le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \tag{3.25}
\end{align*}
Theorem 3.1.2 is then a simple consequence of (3.25) and the following
well-known Gagliardo–Nirenberg type interpolation estimate: for any ε > 0,
\[
\|\nabla u\|_{L^p(\Omega)} \le C \|u\|_{L^p(\Omega)}^{1/2} \|\nabla^2 u\|_{L^p(\Omega)}^{1/2}
\le \epsilon \|\nabla^2 u\|_{L^p(\Omega)} + \frac{C^2}{4\epsilon} \|u\|_{L^p(\Omega)}.
\]
Substituting this estimate of \|\nabla u\|_{L^p(\Omega)} into our inequality (3.25), we get
\[
\|u\|_{W^{2,p}(\Omega)} \le C\epsilon \|\nabla^2 u\|_{L^p(\Omega)} + C\Big(\frac{C^2}{4\epsilon}\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\Big).
\]
Choosing \epsilon < \frac{1}{2C}, we can then absorb the term \|\nabla^2 u\|_{L^p(\Omega)} on the right hand
side into the left hand side and arrive at the conclusion of our theorem:
\[
\|u\|_{W^{2,p}(\Omega)} \le C(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \tag{3.26}
\]
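The absorption step above rests on Young's inequality: for any ε > 0 and a, b ≥ 0, one has C a^{1/2} b^{1/2} ≤ ε b + (C²/4ε) a. A brute-force numerical check of this elementary inequality (our own sketch, not part of the text):

```python
# Check C*sqrt(a*b) <= eps*b + (C**2/(4*eps))*a on a grid of sample values.
import itertools

def young_bound(C, a, b, eps):
    """Right-hand side of Young's inequality with parameter eps."""
    return eps * b + (C**2 / (4 * eps)) * a

vals = [0.0, 0.1, 1.0, 7.5, 100.0]
for C, eps in [(1.0, 0.3), (5.0, 0.01), (2.0, 1.5)]:
    for a, b in itertools.product(vals, vals):
        assert C * (a**0.5) * (b**0.5) <= young_bound(C, a, b, eps) + 1e-9
```

The inequality holds for every ε because the difference equals (√(εb) − C√a/(2√ε))² ≥ 0; choosing ε small trades a large coefficient on ‖u‖ for a small one on ‖∇²u‖, which is exactly what absorption requires.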
For both the interior and the boundary estimates, the main idea is the well-known
method of "frozen" coefficients. Locally, near a point x^o, we may regard
the leading coefficients a_{ij}(x) of the operator approximately as the constants
a_{ij}(x^o) (as if the functions a_{ij} were frozen at the point x^o); thus we are able
to treat the operator as one with constant coefficients and apply the potential
estimates derived in the previous section.
Part 1: Interior Estimates.

First we define the cutoff function φ(s) to be a compactly supported C^∞
function with
\[
\varphi(s) = 1 \ \mbox{if}\ s \le 1, \qquad \varphi(s) = 0 \ \mbox{if}\ s \ge 2.
\]
Then we quantify the modulus of continuity of the a_{ij} by
\[
\omega(\delta) = \sup_{|x-y|\le\delta;\; x,y\in\Omega;\; 1\le i,j\le n} |a_{ij}(x) - a_{ij}(y)|.
\]
The function ω(δ) ↘ 0 as δ ↘ 0, and it measures the modulus of continuity
of the functions a_{ij}.
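Such cutoff functions can be built explicitly from the classical C^∞ function e^{−1/t}. The following is our own illustrative sketch (the construction is standard but not spelled out in the text):

```python
# Smooth cutoff phi with phi(s) = 1 for s <= 1 and phi(s) = 0 for s >= 2.
import math

def g(t):
    """C-infinity function vanishing to infinite order at t = 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def phi(s):
    """Smooth transition: identically 1 on (-inf, 1], identically 0 on [2, inf)."""
    num = g(2.0 - s)
    den = g(2.0 - s) + g(s - 1.0)   # den > 0: 2-s and s-1 are never both <= 0
    return num / den

assert phi(0.5) == 1.0      # g(s - 1) = 0 there
assert phi(2.7) == 0.0      # g(2 - s) = 0 there
assert 0.0 < phi(1.5) < 1.0
```

The cutoff used in the proof is then η(x) = φ(|x − x^o|/δ), which equals 1 on B_δ(x^o) and vanishes outside B_{2δ}(x^o).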
For any x^o ∈ Ω_{2δ}, let
\[
\eta(x) = \varphi\Big(\frac{|x-x^o|}{\delta}\Big) \quad \mbox{and} \quad w(x) = \eta(x)\, u(x),
\]
then
\begin{align*}
a_{ij}(x^o)\frac{\partial^2 w}{\partial x_i \partial x_j}
&= (a_{ij}(x^o) - a_{ij}(x))\frac{\partial^2 w}{\partial x_i \partial x_j} + a_{ij}(x)\frac{\partial^2 w}{\partial x_i \partial x_j} \\
&= (a_{ij}(x^o) - a_{ij}(x))\frac{\partial^2 w}{\partial x_i \partial x_j} + \eta(x)\, a_{ij}(x)\frac{\partial^2 u}{\partial x_i \partial x_j}
 + a_{ij}(x)\, u(x)\frac{\partial^2 \eta}{\partial x_i \partial x_j} + 2 a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_j} \\
&= (a_{ij}(x^o) - a_{ij}(x))\frac{\partial^2 w}{\partial x_i \partial x_j} + \eta(x)\Big(b_i(x)\frac{\partial u}{\partial x_i} + c(x)u - f(x)\Big)
 + a_{ij}(x)\, u(x)\frac{\partial^2 \eta}{\partial x_i \partial x_j} + 2 a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_j} \\
&:= F(x) \quad \mbox{for } x \in R^n.
\end{align*}
In the above, we omitted the summation signs Σ; we abbreviated \sum_{i,j=1}^n a_{ij}
as a_{ij}, and so on. Here we notice that all terms are supported in B_{2δ} :=
B_{2δ}(x^o) ⊂ Ω. With a simple linear transformation, we may assume a_{ij}(x^o) =
δ_{ij}. Apparently both w and
\[
\Gamma * F := \frac{1}{n(n-2)\omega_n} \int_{R^n} \frac{1}{|x-y|^{n-2}}\, F(y)\,dy
\]
are solutions of
\[
\triangle u = F.
\]
Thus by uniqueness, w = Γ ∗ F (see the exercise after the proof of
the theorem). Now, following the Newtonian potential estimate (see Theorem
3.3), we obtain
\[
\|\nabla^2 w\|_{L^p(B_{2\delta})} = \|\nabla^2 w\|_{L^p(R^n)} \le C\|F\|_{L^p(R^n)} = C\|F\|_{L^p(B_{2\delta})}. \tag{3.27}
\]
Calculating term by term, we have
\[
\|F\|_{L^p(B_{2\delta})} \le \omega(2\delta)\|\nabla^2 w\|_{L^p(B_{2\delta})} + \|f\|_{L^p(B_{2\delta})}
+ C(\|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}).
\]
Combining this with the previous inequality (3.27) and choosing δ so small
that C\omega(2\delta) < 1/2, we get
\begin{align*}
\|\nabla^2 w\|_{L^p(B_{2\delta})}
&\le C\omega(2\delta)\|\nabla^2 w\|_{L^p(B_{2\delta})} + C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}) \\
&\le \frac{1}{2}\|\nabla^2 w\|_{L^p(B_{2\delta})} + C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}).
\end{align*}
This is equivalent to
\[
\|\nabla^2 w\|_{L^p(B_{2\delta})} \le C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}).
\]
Since u = w on B_δ(x^o), we have
\[
\|\nabla^2 u\|_{L^p(B_\delta)} = \|\nabla^2 w\|_{L^p(B_\delta)}.
\]
Consequently,
\[
\|\nabla^2 u\|_{L^p(B_\delta)} \le C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}). \tag{3.28}
\]
To extend this estimate from a δ-ball to a compact set, we notice that the
collection of balls
\[
\{B_\delta(x) \mid x \in \Omega_{2\delta}\}
\]
forms an open covering of Ω_{2δ}. Hence there are finitely many balls \{B_\delta(x_i) \mid
i = 1, \dots, m\} that cover Ω_{2δ}. We now apply the above estimate (3.28) on
each of the balls and sum up to obtain
\begin{align*}
\|\nabla^2 u\|_{L^p(\Omega_{2\delta})}
&\le \sum_{i=1}^m \|\nabla^2 u\|_{L^p(B_\delta(x_i))} \\
&\le \sum_{i=1}^m C(\|f\|_{L^p(B_{2\delta}(x_i))} + \|\nabla u\|_{L^p(B_{2\delta}(x_i))} + \|u\|_{L^p(B_{2\delta}(x_i))}) \\
&\le C(\|f\|_{L^p(\Omega)} + \|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)}). \tag{3.29}
\end{align*}
For any compact subset K of Ω, let δ < \frac{1}{2}\, dist(K, ∂Ω); then K ⊂ Ω_{2δ}, and
we derive
\[
\|\nabla^2 u\|_{L^p(K)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \tag{3.30}
\]
This establishes the interior estimates.
Part 2. Boundary Estimates

The boundary estimates are very similar. Let us just describe how to
apply almost the same scheme here. For any point x^o ∈ ∂Ω, B_δ(x^o) ∩ ∂Ω
is a C^{2,α} graph for δ small. With a suitable rotation, we may assume that
B_δ(x^o) ∩ ∂Ω is given by the graph of
\[
x_n = h(x_1, x_2, \dots, x_{n-1}) = h(x'),
\]
and Ω lies above this graph locally. Let
\[
y = \psi(x) = (x' - x^{o\prime},\; x_n - h(x')),
\]
then ψ is a diffeomorphism that maps a neighborhood of x^o in Ω onto B_r^+(0) =
\{y \in B_r(0) \mid y_n > 0\}. The equation becomes
\[
\begin{cases}
-\sum_{i,j=1}^n \bar a_{ij}(y)\, u_{y_i y_j} + \sum_{i=1}^n \bar b_i(y)\, u_{y_i} + \bar c(y)\, u = \bar f(y), & \mbox{for } y \in B_r^+(0) \\
u(y) = 0, & \mbox{for } y \in \partial B_r^+(0) \mbox{ with } y_n = 0,
\end{cases} \tag{3.31}
\]
where the new coefficients are computed from the original ones via the chain
rule of differentiation. For example,
\[
\bar a_{ij}(y) = \frac{\partial \psi_i}{\partial x_l}(\psi^{-1}(y))\; a_{lk}(\psi^{-1}(y))\; \frac{\partial \psi_j}{\partial x_k}(\psi^{-1}(y)).
\]
If necessary, we make a linear transformation so that \bar a_{ij}(0) = δ_{ij}. Since
a plane is still mapped to a plane, we may assume that equation (3.31) is
valid for some smaller r. Applying the method of frozen coefficients (with
w(y) = \varphi(\frac{2|y|}{r})\, u(y)), we get
\[
\triangle w(y) = F(y) \quad \mbox{on } B_r^+(0).
\]
Now let \bar w(y) and \bar F(y) be the odd extensions of w(y) and F(y) from B_r^+(0)
to B_r(0), i.e.
\[
\bar w(y) = \bar w(y_1, \dots, y_{n-1}, y_n) =
\begin{cases}
w(y_1, \dots, y_{n-1}, y_n), & \mbox{if } y_n \ge 0 \\
-w(y_1, \dots, y_{n-1}, -y_n), & \mbox{if } y_n < 0,
\end{cases}
\]
and similarly for \bar F(y). Then one can check that
\[
\triangle \bar w(y) = \bar F(y), \quad \mbox{for } y \in B_r(0).
\]
Through the same calculations as in Part 1, we derive the basic interior
estimate
\begin{align*}
\|\nabla^2 u\|_{L^p(B_r(x^o))}
&\le C(\|f\|_{L^p(B_{2r}(x^o))} + \|\nabla u\|_{L^p(B_{2r}(x^o))} + \|u\|_{L^p(B_{2r}(x^o))}) \\
&\le C_1(\|f\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|\nabla u\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|u\|_{L^p(B_{2r}(x^o)\cap\Omega)})
\end{align*}
for any x^o on ∂Ω and for some small radius r. The last inequality above is
due to the symmetric extension of w to \bar w from the half ball to the whole
ball.
These balls form a covering of the compact set ∂Ω, and thus there is a finite
subcover B_{r_i}(x_i) (i = 1, 2, \dots, k). These balls also cover a neighborhood of ∂Ω
including Ω \setminus Ω_δ for some δ small. Summing up the estimates and following
the same steps as we did for the interior estimates, we get
\[
\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \tag{3.32}
\]
This establishes the boundary estimates and hence completes the proof of
the theorem. □
Exercise 3.1.1 Assume that Ω is bounded and w ∈ W^{2,p}_0(Ω). Show that if
−\triangle w = F, then w = Γ ∗ F.

Hint: (a) Show that Γ ∗ F(x) = 0 for x ∉ \bar\Omega.
(b) Show that it is true for w ∈ C^2_0(R^n).
(c) Show that, with J_\epsilon denoting the mollifier,
\[
\triangle(J_\epsilon w) = J_\epsilon(\triangle w)
\]
and thus
\[
J_\epsilon w = \Gamma * (J_\epsilon F).
\]
Let ε → 0 to derive w = Γ ∗ F.
3.2 W^{2,p} Regularity of Solutions

In this section, we will use the a priori estimates established in the previous
section to derive the W^{2,p} regularity of weak solutions.
Let L be a second order uniformly elliptic operator in divergence form
\[
Lu = -\sum_{i,j=1}^n (a_{ij}(x)\, u_{x_i})_{x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} + c(x)\, u. \tag{3.33}
\]
Definition 3.2.1 We say that u ∈ W^{1,p}_0(Ω) is a weak solution of
\[
\begin{cases} Lu = f(x), & x \in \Omega \\ u(x) = 0, & x \in \partial\Omega \end{cases} \tag{3.34}
\]
if for any v ∈ W^{1,q}_0(Ω),
\[
\int_\Omega \Big[\sum_{i,j=1}^n a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} v + c(x)\, u v\Big]\,dx = \int_\Omega f(x)\, v\,dx, \tag{3.35}
\]
where \frac{1}{p} + \frac{1}{q} = 1.
Remark 3.2.1 Since C^∞_0(Ω) is dense in W^{1,p}_0(Ω), we only need to require
that (3.35) hold for all v ∈ C^∞_0(Ω); this is more convenient in many
applications.
The main result of this section is

Theorem 3.2.1 Assume Ω is a bounded domain in R^n. Let L be a uniformly
elliptic operator defined in (3.33) with a_{ij}(x) Lipschitz continuous and b_i(x)
and c(x) bounded.

Assume that u ∈ W^{1,p}_0(Ω) is a weak solution of (3.34). If f(x) ∈ L^p(Ω),
then u ∈ W^{2,p}(Ω).
Outline of the Proof. The proof of the theorem is quite complex and
will be accomplished in two stages.
In stage 1, we ﬁrst consider the case when p ≥ 2, because one can rather
easily show the uniqueness of the weak solutions by multiplying both sides of
the equation by the solution itself and integrating by parts. As one will see,
this uniqueness plays a key role in deriving the regularity.
The proof of the regularity is based on the following fundamental proposi
tion on the existence, uniqueness, and regularity for the solution of the Laplace
equation on a unit ball.
Proposition 3.2.1 Assume f ∈ L^p(B_1(0)) with p ≥ 2. Then the Dirichlet
problem
\[
\begin{cases} \triangle u = f(x), & x \in B_1(0) \\ u(x) = 0, & x \in \partial B_1(0) \end{cases} \tag{3.36}
\]
has a unique solution u ∈ W^{2,p}(B_1(0)) satisfying
\[
\|u\|_{W^{2,p}(B_1)} \le C \|f\|_{L^p(B_1)}. \tag{3.37}
\]
We will prove this proposition in subsection 3.2.1 and then use the
"frozen" coefficients method to derive the regularity for the general operator L.

In stage 2 (subsection 3.2.2), we consider the case 1 < p < 2. The main
difficulty lies in the uniqueness of the weak solution. To show the uniqueness,
we first prove a W^{2,p} version of the Fredholm Alternative for p ≥ 2, based on
the regularity result in stage 1, which will be used to deduce the existence of
solutions for the equation
\[
-\sum_{ij} (a_{ij}(x)\, w_{x_i})_{x_j} = F(x).
\]
Then we use this equation with a proper choice of F(x) to derive a contradiction
if there exists a nontrivial solution of the equation
\[
-\sum_{ij} (a_{ij}(x)\, v_{x_i})_{x_j} = 0.
\]
After proving the uniqueness, the derivation of the regularity is the same
as in stage 1.

Remark 3.2.2 Actually, the conditions on the coefficients of L can be weakened.
This will use the Regularity Lifting Theorem in Section 3.3, and hence
will be elaborated there.
3.2.1 The Case p ≥ 2

We will prove Proposition 3.2.1 and use it to derive the regularity of weak
solutions for the general operator L. To this end, we need

Lemma 3.2.1 (Better A Priori Estimate in the Presence of Uniqueness.)
Assume that

i) for solutions of
\[
Lu = f(x), \quad x \in \Omega \tag{3.38}
\]
the a priori estimate
\[
\|u\|_{W^{2,p}(\Omega)} \le C(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}) \tag{3.39}
\]
holds, and

ii) (uniqueness) if Lu = 0, then u = 0.

Then we have a better a priori estimate for the solutions of (3.38):
\[
\|u\|_{W^{2,p}(\Omega)} \le C \|f\|_{L^p(\Omega)}. \tag{3.40}
\]
Proof. Suppose the inequality (3.40) is false. Then there exists a sequence of
functions \{f_k\} with \|f_k\|_{L^p} = 1 and a corresponding sequence of solutions
\{u_k\} of
\[
Lu_k = f_k(x), \quad x \in \Omega
\]
such that
\[
\|u_k\|_{W^{2,p}} \to \infty \quad \mbox{as } k \to \infty.
\]
It follows from the a priori estimate (3.39) that
\[
\|u_k\|_{L^p} \to \infty \quad \mbox{as } k \to \infty.
\]
Let
\[
v_k = \frac{u_k}{\|u_k\|_{L^p}} \quad \mbox{and} \quad g_k = \frac{f_k}{\|u_k\|_{L^p}}.
\]
Then
\[
\|v_k\|_{L^p} = 1, \quad \mbox{and} \quad \|g_k\|_{L^p} \to 0. \tag{3.41}
\]
From the equation
\[
Lv_k = g_k(x), \quad x \in \Omega \tag{3.42}
\]
we have the a priori estimate
\[
\|v_k\|_{W^{2,p}} \le C(\|v_k\|_{L^p} + \|g_k\|_{L^p}). \tag{3.43}
\]
(3.41) and (3.43) imply that \{v_k\} is bounded in W^{2,p}, and hence there exists
a subsequence (still denoted by \{v_k\}) that converges weakly to some v in W^{2,p}.
By the compact Sobolev embedding, the same sequence converges strongly to v
in L^p, and hence \|v\|_{L^p} = 1. From (3.42), we arrive at
\[
Lv = 0, \quad x \in \Omega.
\]
Therefore, by the uniqueness assumption, we must have v = 0. This contradicts
the fact that \|v\|_{L^p} = 1 and therefore completes the proof. □
The Proof of Proposition 3.2.1.

For convenience, we abbreviate B_r(0) as B_r.

To see the uniqueness, assume that both u and v are solutions of (3.36).
Let w = u − v. Then w weakly satisfies
\[
\begin{cases} \triangle w = 0, & x \in B_1 \\ w(x) = 0, & x \in \partial B_1. \end{cases}
\]
Multiplying both sides of the equation by w and integrating by parts on B_1, we
derive
\[
\int_{B_1} |\nabla w|^2\,dx = 0.
\]
Hence \nabla w = 0 almost everywhere; this, together with the zero boundary
condition, implies w = 0 almost everywhere.
For the existence, it is well known that, if f is continuous, then
\[
u(x) = \int_{B_1} G(x, y)\, f(y)\,dy
\]
is the solution. Here
\[
G(x, y) = \Gamma(x - y) - h(x, y)
\]
is the Green's function, in which
\[
\Gamma(x) = \begin{cases} \frac{1}{n(n-2)\omega_n}\, \frac{1}{|x|^{n-2}}, & n \ge 3 \\ \frac{1}{2\pi} \ln|x|, & n = 2 \end{cases}
\]
is the fundamental solution of the Laplace equation, as introduced in the
definition of Newtonian potentials, and
\[
h(x, y) = \frac{1}{n(n-2)\omega_n}\, \frac{1}{\big(|x|\,\big|\frac{x}{|x|^2} - y\big|\big)^{n-2}}, \quad \mbox{for } n \ge 3,
\]
is a harmonic function. The addition of h(x, y) ensures that G(x, y) vanishes
on the boundary.

We have obtained the needed estimate on the first part
\[
\int_{B_1} \Gamma(x - y)\, f(y)\,dy
\]
when we worked on the Newtonian potentials. For the second part, noticing
that h(x, y) is smooth when y is away from the boundary (see the exercise
below), we start from a smaller ball B_{1-δ} for each δ > 0. Let
\[
u_\delta(x) = \int_{B_1} G(x, y)\, f_\delta(y)\,dy,
\]
where
\[
f_\delta(x) = \begin{cases} f(x), & x \in B_{1-\delta} \\ 0, & \mbox{elsewhere}. \end{cases}
\]
Now, by our result for the Newtonian potentials, D^2 u_δ is in L^p(B_1). Applying
the Poincaré inequality, we derive that u_δ is also in L^p(B_1), and hence it is in
W^{2,p}(B_1). Moreover, due to uniqueness and Lemma 3.2.1, we have the better
a priori estimate
\[
\|u_\delta\|_{W^{2,p}(B_1)} \le C \|f\|_{L^p(B_1)}. \tag{3.44}
\]
Pick a sequence \{δ_i\} tending to zero. Then the corresponding solutions
\{u_{δ_i}\} form a Cauchy sequence in W^{2,p}(B_1), because
\[
\|u_{\delta_i} - u_{\delta_j}\|_{W^{2,p}(B_1)} \le C \|f_{\delta_i} - f_{\delta_j}\|_{L^p(B_1)} \to 0, \quad \mbox{as } i, j \to \infty.
\]
Let
\[
u^o = \lim_{i\to\infty} u_{\delta_i} \quad \mbox{in } W^{2,p}(B_1).
\]
Then u^o is a solution of (3.36). Moreover, from (3.44), we see that the better a
priori estimate (3.37) holds for u^o.

This completes the proof of Proposition 3.2.1. □
Exercise 3.2.1 Show that if |y| ≤ 1 − δ for some δ > 0, then there exists a
constant c_o > 0 such that
\[
|x|\,\Big|\frac{x}{|x|^2} - y\Big| \ge c_o, \quad \forall\, x \in B_1(0).
\]
Proof of Regularity Theorem 3.2.1 for p ≥ 2.

The general framework of the proof is similar to that of the W^{2,p} a priori
estimate in the previous section. The key difference is that for the a priori
estimate we assumed in advance that u is a strong solution in W^{2,p}, while here
we only assume that u is a weak solution in W^{1,p}. After reading the proof of
the a priori estimates, the reader may notice that if one can establish W^{2,p}
regularity in a small neighborhood of each point in Ω, then by an entirely
similar argument as in obtaining the a priori estimates, one can derive the
regularity on the whole of Ω.
As in the proof of the a priori estimates, we define the cutoff function
\[
\varphi(s) = \begin{cases} 1, & \mbox{if } s \le 1 \\ 0, & \mbox{if } s \ge 2. \end{cases}
\]
Let u be a W^{1,p}_0(Ω) weak solution of (3.34). For any
\[
x^o \in \Omega_{2\delta} := \{x \in \Omega \mid dist(x, \partial\Omega) \ge 2\delta\},
\]
let
\[
\eta(x) = \varphi\Big(\frac{|x-x^o|}{\delta}\Big) \quad \mbox{and} \quad w(x) = \eta(x)\, u(x).
\]
Then w is supported in B_{2δ}(x^o). By (3.35) and a straightforward calculation,
one can verify that, for any v ∈ C^∞_0(B_{2δ}(x^o)),
\[
\int_{B_{2\delta}(x^o)} a_{ij}(x^o)\, w_{x_i} v_{x_j}\,dx
= \int_{B_{2\delta}(x^o)} [a_{ij}(x^o) - a_{ij}(x)]\, w_{x_i} v_{x_j}\,dx
+ \int_{B_{2\delta}(x^o)} F(x)\, v\,dx,
\]
where
\[
F(x) = f(x) - (a_{ij}(x)\,\eta_{x_i} u)_{x_j} - b_i(x)\, u_{x_i} - c(x)\, u(x).
\]
In other words, w is a weak solution of
\[
\begin{cases}
a_{ij}(x^o)\, w_{x_i x_j} = ([a_{ij}(x^o) - a_{ij}(x)]\, w_{x_i})_{x_j} - F(x), & x \in B_{2\delta}(x^o) \\
w(x) = 0, & x \in \partial B_{2\delta}(x^o).
\end{cases} \tag{3.45}
\]
In the above, we omitted the summation signs Σ; we abbreviated \sum_{i,j=1}^n a_{ij}
as a_{ij}, and so on.
With a change of coordinates, we may assume that a_{ij}(x^o) = δ_{ij} and write
equation (3.45) as
\[
\triangle w = ([a_{ij}(x^o) - a_{ij}(x)]\, w_{x_i})_{x_j} - F(x). \tag{3.46}
\]
For any v ∈ W^{2,p}(B_{2δ}(x^o)), obviously
\[
([a_{ij}(x^o) - a_{ij}(x)]\, v_{x_i})_{x_j} \in L^p(B_{2\delta}(x^o)).
\]
Also, one can easily verify that F(x) is in L^p(B_{2δ}(x^o)). By virtue of
Proposition 3.2.1, the operator \triangle is invertible. Consider the equation in W^{2,p}:
\[
v = Kv + \triangle^{-1} F, \tag{3.47}
\]
where
\[
(Kv)(x) = \triangle^{-1}\big(([a_{ij}(x^o) - a_{ij}(x)]\, v_{x_i})_{x_j}\big).
\]
Under the assumption that the a_{ij}(x) are Lipschitz continuous, one can verify
(see the exercise below) that, for sufficiently small δ, K is a contracting map
from W^{2,p}(B_{2δ}(x^o)) to itself. Therefore, there exists a unique solution v of
equation (3.47). This v is also a weak solution of equation (3.46). Similar to the
proof of Proposition 3.2.1, one can show (as an exercise) the uniqueness of the
weak solution of (3.46). Therefore, we must have w = v, and thus conclude that
w is also in W^{2,p}(B_{2δ}(x^o)). This completes stage 1 of the proof of Theorem
3.2.1. □
Exercise 3.2.2 Assume that each a_{ij}(x) is Lipschitz continuous. Let
\[
(Kv)(x) = \triangle^{-1}\big(([a_{ij}(x^o) - a_{ij}(x)]\, v_{x_i})_{x_j}\big), \quad x \in B_{2\delta}(x^o).
\]
Show that, for δ sufficiently small, the operator K is a contracting map in
W^{2,p}(B_{2δ}(x^o)), i.e. there exists a constant 0 < γ < 1 such that
\[
\|K\phi - K\psi\|_{W^{2,p}(B_{2\delta}(x^o))} \le \gamma\, \|\phi - \psi\|_{W^{2,p}(B_{2\delta}(x^o))}.
\]
Exercise 3.2.3 Prove the uniqueness of the W^{1,p}(B_{2δ}(x^o)) weak solution of
equation (3.46) when p ≥ 2.
3.2.2 The Case 1 < p < 2

Lemma 3.2.2 Let p ≥ 2. If u = 0 whenever Lu = 0 (in the sense of W^{1,p}_0(Ω)
weak solutions), then for any f ∈ L^p(Ω) there exists a unique u ∈ W^{1,p}_0(Ω) ∩
W^{2,p}(Ω) such that
\[
Lu = f(x) \quad \mbox{and} \quad \|u\|_{W^{2,p}(\Omega)} \le C \|f\|_{L^p}. \tag{3.48}
\]
Proof. This is actually a W^{2,p} version of the Fredholm Alternative.

From the previous chapter, one can see that, for α sufficiently large, by the
Lax–Milgram Theorem, for any given g ∈ L^2(Ω) there exists a unique W^{1,2}_0(Ω)
weak solution v of the equation
\[
(L + \alpha) v = g, \quad x \in \Omega.
\]
From the Regularity Theorem in this chapter, we have v ∈ W^{2,2}(Ω). Therefore,
we can write
\[
v = (L + \alpha)^{-1} g,
\]
where (L + α)^{-1} is a bounded linear operator from L^2(Ω) to W^{2,2}(Ω).

Let \{f_k\} ⊂ C^∞_0(Ω) be a sequence of smooth approximations of f(x). For
each k, consider the equation
\[
(L + \alpha) w - \alpha w = f_k, \tag{3.49}
\]
or equivalently,
\[
(I - K) w = F, \tag{3.50}
\]
where I is the identity operator, K = α(L+α)^{-1}, and F = (L+α)^{-1} f_k. One
can see that K is a compact operator from W^{1,2}_0(Ω) into itself, because of
the compact embedding of W^{2,2} into W^{1,2}. Now we can apply the Fredholm
Alternative:

Equation (3.50) has a unique solution if and only if the corresponding
homogeneous equation (I − K)w = 0 has only the trivial solution.

The latter condition is just the assumption of the theorem; hence equation
(3.50) possesses a unique solution u_k in W^{1,2}_0(Ω), i.e.
\[
u_k - \alpha(L+\alpha)^{-1} u_k = (L+\alpha)^{-1} f_k. \tag{3.51}
\]
Furthermore, since both α(L + α)^{-1}u_k and (L + α)^{-1}f_k are in W^{2,2}(Ω), we
derive immediately that u_k is also in W^{2,2}(Ω); alternatively, we can deduce this
from the Regularity Theorem 3.2.1. Since f_k is in L^p(Ω) for any p, we conclude
from Theorem 3.2.1 that u_k is in W^{2,p}(Ω), and furthermore, due to uniqueness,
we have the improved a priori estimate (see Lemma 3.2.1)
\[
\|u_k\|_{W^{2,p}} \le C \|f_k\|_{L^p}, \quad k = 1, 2, \dots
\]
Moreover, for any integers i and j,
\[
L(u_i - u_j) = f_i(x) - f_j(x), \quad x \in \Omega.
\]
It follows that
\[
\|u_i - u_j\|_{W^{2,p}} \le C \|f_i - f_j\|_{L^p} \to 0, \quad \mbox{as } i, j \to \infty.
\]
This implies that \{u_k\} is a Cauchy sequence in W^{2,p}, and hence it converges
to an element u in W^{2,p}(Ω). This function u is the desired solution. □
Proposition 3.2.2 (Uniqueness of W^{1,p}_0 Weak Solutions for 1 < p < ∞.)

Let Ω be a bounded domain in R^n. Assume that the a_{ij}(x) are Lipschitz
continuous. If u is a W^{1,p}_0(Ω) weak solution of
\[
(a_{ij}(x)\, u_{x_i})_{x_j} = 0, \tag{3.52}
\]
then u = 0 almost everywhere.
Proof. When p ≥ 2, we have proved the uniqueness. Now assume 1 < p < 2.

Suppose there exists a nontrivial weak solution u. Let \{u_k\} be a sequence
of C^∞_0(Ω) functions such that
\[
u_k \to u \quad \mbox{in } W^{1,p}_0(\Omega), \quad \mbox{as } k \to \infty.
\]
For each k, consider the equation
\[
(a_{ij}(x)\, v_{x_i})_{x_j} = F_k(x) := \frac{|u_k|^{p/q}\, u_k}{1 + |u_k|^2}, \tag{3.53}
\]
where \frac{1}{q} + \frac{1}{p} = 1.
Obviously, for p < 2, q is greater than 2, and F_k(x) is in L^q(Ω). Since we
already have uniqueness for W^{1,q}_0 weak solutions, Lemma 3.2.2 guarantees the
existence of a solution v_k of equation (3.53).

Integrating by parts several times, one can verify that
\[
0 = \int_\Omega v_k\, (a_{ij}(x)\, u_{x_i})_{x_j}\,dx = \int_\Omega \frac{|u_k|^{p/q}\, u_k}{1 + |u_k|^2}\; u\,dx. \tag{3.54}
\]
Since u_k → u in W^{1,p}, it is easy to see that
\[
\frac{|u_k|^{p/q}\, u_k}{1 + |u_k|^2} \to \frac{|u|^{p/q}\, u}{1 + |u|^2}
\]
in L^q(Ω); therefore, passing to the limit in (3.54), we must have u = 0 almost
everywhere. This completes the proof of the proposition. □
After proving the uniqueness, the rest of the arguments are entirely the
same as in stage 1. This completes stage 2 of the proof for Theorem 3.2.1.
3.2.3 Other Useful Results Concerning the Existence, Uniqueness,
and Regularity

Theorem 3.2.2 Given any pair p, q > 1 and f ∈ L^q(Ω), if u ∈ W^{1,p}_0(Ω) is
a weak solution of
\[
Lu = f(x), \quad x \in \Omega, \tag{3.55}
\]
then u ∈ W^{2,q}(Ω) and the a priori estimate
\[
\|u\|_{W^{2,q}(\Omega)} \le C(\|u\|_{L^q} + \|f\|_{L^q}) \tag{3.56}
\]
holds.
Proof. Case i) If q ≤ p, then obviously u is also a W^{1,q}_0(Ω) weak solution, and
the results follow from the Regularity Theorem 3.2.1 and the W^{2,q} a priori
estimates.

Case ii) If q > p, then we use the Sobolev embedding
\[
W^{2,p} \hookrightarrow W^{1,p_1}, \quad \mbox{with} \quad p_1 = \frac{np}{n-p}.
\]
If p_1 ≥ q, we are done. If p_1 < q, we use the fact that u is a W^{1,p_1} weak
solution, and by regularity, u ∈ W^{2,p_1}. Repeating this process, after finitely
many steps we will arrive at some p_k ≥ q. □
From this theorem, we can immediately derive the equivalence of the
uniqueness of weak solutions in any two spaces W^{1,p}_0 and W^{1,q}_0:

Corollary 3.2.1 For any pair p, q > 1, the following are equivalent:

i) If u ∈ W^{1,p}_0(Ω) is a weak solution of Lu = 0, then u = 0.
ii) If u ∈ W^{1,q}_0(Ω) is a weak solution of Lu = 0, then u = 0.
Theorem 3.2.3 (W^{2,p} Version of the Fredholm Alternative). Let 1 < p < ∞.
If u = 0 whenever Lu = 0 (in the sense of W^{1,p}_0(Ω) weak solutions), then for
any f ∈ L^p(Ω) there exists a unique u ∈ W^{1,p}_0(Ω) ∩ W^{2,p}(Ω) such that
\[
Lu = f(x) \quad \mbox{and} \quad \|u\|_{W^{2,p}(\Omega)} \le C \|f\|_{L^p}.
\]
For 2 ≤ p < ∞, we have stated this theorem in Lemma 3.2.2. For
1 < p < 2, now that we have obtained the W^{2,p} regularity in this case, the
proof is entirely the same as for Lemma 3.2.2.
3.3 Regularity Lifting

3.3.1 Bootstrap

Assume that u ∈ H^1(Ω) is a weak solution of
\[
-\triangle u = u^p(x), \quad x \in \Omega. \tag{3.57}
\]
Then by Sobolev embedding, u ∈ L^{\frac{2n}{n-2}}(Ω). If the power p is less than the
critical number \frac{n+2}{n-2}, then the regularity of u can be enhanced repeatedly
through the equation until we reach u ∈ C^α(Ω). Finally, the Schauder
estimate will lift u to C^{2+α}(Ω), and hence u is a classical solution. We call
this the "Bootstrap Method", as illustrated below.

For each fixed p < \frac{n+2}{n-2}, write
\[
p = \frac{n+2}{n-2} - \delta
\]
for some δ > 0.
for some δ > 0.
Since u ∈ L
2n
n−2
(Ω),
u
p
∈ L
2n
p(n−2)
(Ω) = L
2n
(n+2)−δ(n−2)
(Ω).
The equation (3.57) boosts the solution u to W
2,q1
(Ω) with
q
1
=
2n
(n + 2) −δ(n −2)
.
By the Sobolev embedding
\[
W^{2,q_1}(\Omega) \hookrightarrow L^{\frac{nq_1}{n-2q_1}}(\Omega),
\]
we have u ∈ L^{s_1}(Ω) with
\[
s_1 = \frac{nq_1}{n-2q_1} = \frac{2n}{n-2}\cdot\frac{1}{1-\delta}.
\]
The integrability power of u has been amplified by a factor of \frac{1}{1-\delta}\ (>1).
Now, u^p ∈ L^{q_2}(Ω) with
\[
q_2 = \frac{s_1}{p} = \frac{2n}{(1-\delta)[n+2-\delta(n-2)]}.
\]
Hence, through the equation, we derive u ∈ W^{2,q_2}(Ω). By Sobolev embedding,
if 2q_2 ≥ n, i.e. if
\[
(1-\delta)[n+2-\delta(n-2)] - 4 \le 0,
\]
then we have either u ∈ L^q(Ω) for any q, or u ∈ C^α(Ω), and we are done. If
2q_2 < n, then u ∈ L^{s_2}(Ω), with
\[
s_2 = \frac{2n}{(1-\delta)[n+2-\delta(n-2)] - 4}.
\]
It is easy to verify that
\[
s_2 \ge \frac{2n}{(1-\delta)(n-2)}\cdot\frac{1}{1-\delta} = s_1\cdot\frac{1}{1-\delta}.
\]
The integrability power of u has again been amplified by at least a factor of
\frac{1}{1-\delta}. Continuing this process, after finitely many steps we will boost u to
L^q(Ω) for any q, and hence to C^α(Ω) for some 0 < α < 1. Finally, by the
Schauder estimate, u ∈ C^{2+α}(Ω), and therefore it is a classical solution.
From the above argument, one can see that, if p = \frac{n+2}{n-2}, the so-called
"critical power", then δ = 0, and the integrability power of the solution u cannot
be boosted this way. To deal with this situation, we introduce a "Regularity
Lifting Method" in the next subsection.
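The amplification of the integrability exponent described above can be traced numerically. The following sketch is our own illustration (the function name and the sample choice n = 5, δ = 0.5 are not from the text); it iterates q_k = s_k/p and s_{k+1} = n q_k/(n − 2 q_k) until 2q_k ≥ n:

```python
# Bootstrap exponents for a subcritical power p = (n+2)/(n-2) - delta,
# starting from the Sobolev exponent s_0 = 2n/(n-2).
def bootstrap_exponents(n, delta, max_steps=50):
    p = (n + 2) / (n - 2) - delta
    s = 2 * n / (n - 2)                  # u in L^s initially
    history = [s]
    for _ in range(max_steps):
        q = s / p                        # u^p in L^q, so u in W^{2,q}
        if 2 * q >= n:                   # W^{2,q} gives u in every L^r (or C^alpha)
            return history, True
        s = n * q / (n - 2 * q)          # Sobolev: W^{2,q} -> L^{nq/(n-2q)}
        history.append(s)
    return history, False

hist, done = bootstrap_exponents(n=5, delta=0.5)
# Each step amplifies the exponent by at least 1/(1 - delta):
for a, b in zip(hist, hist[1:]):
    assert b >= a / (1 - 0.5) - 1e-9
assert done   # finitely many steps suffice when delta > 0
```

For the critical power (δ = 0) the amplification factor collapses to 1, which is exactly why the bootstrap stalls and a different method is needed.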
3.3.2 Regularity Lifting Theorem

Here we present a simple method to boost the regularity of solutions. It has been
used extensively, in various forms, in the authors' previous works. The essence of
the approach is well known in the analysis community; however, the version
we present here contains some new developments. It is much more general
and very easy to use. We believe that our method will provide convenient
ways, for both experts and non-experts in the field, to obtain regularity.
Essentially, it is based on the following Regularity Lifting Theorem.
Let Z be a given vector space. Let \|\cdot\|_X and \|\cdot\|_Y be two norms on Z.
Define a new norm \|\cdot\|_Z by
\[
\|\cdot\|_Z = \sqrt[p]{\|\cdot\|_X^p + \|\cdot\|_Y^p}.
\]
For simplicity, we assume that Z is complete with respect to the norm \|\cdot\|_Z.
Let X and Y be the completions of Z under \|\cdot\|_X and \|\cdot\|_Y, respectively.
Here, one can choose p, 1 ≤ p ≤ ∞, according to what one needs. It is easy to
see that Z = X ∩ Y.
Theorem 3.3.1 Let T be a contraction map from X into itself and from Y
into itself. Assume that f ∈ X, and that there exists a function g ∈ Z such
that f = Tf + g. Then f also belongs to Z.
< 1 such that
Th
1
−Th
2

X
≤ θ
1
h
1
−h
2

X
.
Similarly, we can ﬁnd a constant θ
2
, 0 < θ
2
< 1 such that
Th
1
−Th
2

Y
≤ θ
2
h
1
−h
2

Y
.
Let θ = max¦θ
1
, θ
2
¦. Then, for any h
1
, h
2
∈ Z,
Th
1
−Th
2

Z
=
p
Th
1
−Th
2

p
X
+Th
1
−Th
2

p
Y
≤
p
θ
p
1
h
1
−h
2

p
X
+θ
p
2
h
1
−h
2

p
Y
≤ θh
1
−h
2

Z
.
Step 2. Since T : Z −→ Z is a contraction, given g ∈ Z, we can ﬁnd a
solution h ∈ Z such that h = Th + g. We see that T : X −→ X is also a
contraction and g ∈ Z ⊂ X. The equation x = Tx + g has a unique solution
in X. Thus, f = h ∈ Z since both h and f are solutions of x = Tx +g in X.
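Step 2 is the standard Banach fixed-point argument. As a toy finite-dimensional illustration (our own example, not from the text), one can solve x = Tx + g by Picard iteration when T is a contraction:

```python
# Picard iteration for x = Tx + g on R^2 with T = multiplication by 1/2,
# a contraction (with factor 1/2) in any norm; g is the known term.
def solve_fixed_point(T, g, x0, iters=200):
    """Iterate x_{k+1} = T(x_k) + g; converges when T is a contraction."""
    x = x0
    for _ in range(iters):
        x = [T(xi) + gi for xi, gi in zip(x, g)]
    return x

T = lambda t: 0.5 * t
g = [1.0, 3.0]
x = solve_fixed_point(T, g, [0.0, 0.0])
# The unique fixed point of x = x/2 + g is x = 2g:
assert all(abs(xi - 2 * gi) < 1e-9 for xi, gi in zip(x, g))
```

The uniqueness of the fixed point in the larger space X is what lets the theorem identify f with the better solution h constructed in the smaller space Z.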
3.3.3 Applications to PDEs

Now we explain how the "Regularity Lifting Theorem" proved in the previous
subsection can be used to boost the regularity of weak solutions involving the
critical exponent:
\[
-\triangle u = u^{\frac{n+2}{n-2}}. \tag{3.58}
\]
Still assume that Ω is a smooth bounded domain in R^n with n ≥ 3. Let
u ∈ H^1_0(Ω) be a weak solution of equation (3.58). Then by Sobolev embedding,
u ∈ L^{\frac{2n}{n-2}}(Ω).
We can split the right hand side of (3.58) into two parts:
\[
u^{\frac{n+2}{n-2}} = u^{\frac{4}{n-2}}\, u := a(x)\, u.
\]
Then obviously a(x) ∈ L^{\frac{n}{2}}(Ω). Hence, more generally, we consider the
regularity of weak solutions of the following equation:
\[
-\triangle u = a(x)\, u + b(x). \tag{3.59}
\]
Theorem 3.3.2 Assume that a(x), b(x) ∈ L^{\frac{n}{2}}(Ω). Let u be any weak solution
of equation (3.59) in H^1_0(Ω). Then u is in L^p(Ω) for any 1 ≤ p < ∞.
Remark 3.3.1 Even in the best situation, when a(x) ≡ 0, we can only obtain
u ∈ W^{2,\frac{n}{2}}(Ω) via the W^{2,p} estimate, and hence derive that u ∈ L^p(Ω) for any
p < ∞ by Sobolev embedding.
Now let us come back to the nonlinear equation (3.58). Assume that u is an
H^1_0(Ω) weak solution. From the above theorem, we first conclude that u is
in L^q(Ω) for any 1 < q < ∞. Then by our Regularity Theorem 3.2.1, u is
in W^{2,q}(Ω). This implies that u ∈ C^{1,α}(Ω) via Sobolev embedding. Finally,
from the classical C^{2,α} regularity, we arrive at u ∈ C^{2,α}(Ω).

Corollary 3.3.1 If u is an H^1_0(Ω) weak solution of equation (3.58), then u ∈
C^{2,α}(Ω), and hence it is a classical solution.
Proof of Theorem 3.3.2.

Let G(x, y) be the Green's function of −\triangle in Ω. Define
\[
(Tf)(x) = (-\triangle)^{-1} f(x) = \int_\Omega G(x, y)\, f(y)\,dy.
\]
Then obviously
\[
0 < G(x, y) < \Gamma(|x-y|) := \frac{C_n}{|x-y|^{n-2}},
\]
and it follows that, for any q ∈ (1, \frac{n}{2}),
\[
\|Tf\|_{L^{\frac{nq}{n-2q}}(\Omega)} \le \|\Gamma * f\|_{L^{\frac{nq}{n-2q}}(R^n)} \le C \|f\|_{L^q(\Omega)}.
\]
Here, we may extend f to be zero outside Ω when necessary.

For a positive number A, define
\[
a_A(x) = \begin{cases} a(x), & \mbox{if } |a(x)| \ge A \\ 0, & \mbox{otherwise}, \end{cases}
\]
and
\[
a_B(x) = a(x) - a_A(x).
\]
Let
\[
(T_A u)(x) = \int_\Omega G(x, y)\, a_A(y)\, u(y)\,dy.
\]
Then equation (3.59) can be written as
\[
u(x) = (T_A u)(x) + F_A(x),
\]
where
\[
F_A(x) = \int_\Omega G(x, y)\, [a_B(y)\, u(y) + b(y)]\,dy.
\]
We will show that, for any 1 ≤ p < ∞,

i) T_A is a contracting operator from L^p(Ω) to L^p(Ω) for A large, and
ii) F_A(x) is in L^p(Ω).

Then, by the Regularity Lifting Theorem, we derive immediately that
u ∈ L^p(Ω).
i) The Estimate of the Operator T_A.

For any \frac{n}{n-2} < p < ∞, there is a q, 1 < q < \frac{n}{2}, such that
\[
p = \frac{nq}{n-2q},
\]
and then by the Hölder inequality, we derive
\[
\|T_A u\|_{L^p(\Omega)} \le \|\Gamma * (a_A u)\|_{L^p(R^n)} \le C \|a_A u\|_{L^q(\Omega)}
\le C \|a_A\|_{L^{\frac{n}{2}}(\Omega)}\, \|u\|_{L^p(\Omega)}.
\]
Since a(x) ∈ L^{\frac{n}{2}}(Ω), one can choose a number A so large that
\[
C \|a_A\|_{L^{\frac{n}{2}}(\Omega)} \le \frac{1}{2},
\]
and hence arrive at
\[
\|T_A u\|_{L^p(\Omega)} \le \frac{1}{2}\, \|u\|_{L^p(\Omega)}.
\]
That is, T_A : L^p(Ω) → L^p(Ω) is a contracting operator.
ii) The Integrability of F_A(x).

(a) First consider
\[
F^1_A(x) := \int_\Omega G(x, y)\, b(y)\,dy.
\]
For any \frac{n}{n-2} < p < ∞, choose 1 < q < \frac{n}{2} such that p = \frac{nq}{n-2q}. Extending b(x)
to be zero outside Ω, we have
\[
\|F^1_A\|_{L^p(\Omega)} \le \|\Gamma * b\|_{L^p(R^n)} \le C \|b\|_{L^q(\Omega)} \le C \|b\|_{L^{\frac{n}{2}}(\Omega)} < \infty.
\]
Hence
\[
F^1_A(x) \in L^p(\Omega).
\]
(b) Then estimate
\[
F^2_A(x) = \int_\Omega G(x, y)\, a_B(y)\, u(y)\,dy.
\]
By the boundedness of a_B(x), we have
\[
\|F^2_A\|_{L^p(\Omega)} \le C \|a_B u\|_{L^q(\Omega)} \le C \|u\|_{L^q(\Omega)}.
\]
Noticing that u ∈ L^q(Ω) for any 1 < q ≤ \frac{2n}{n-2}, and
\[
p = \frac{2n}{n-6} \quad \mbox{when } q = \frac{2n}{n-2},
\]
we conclude that, for the following values of p and dimension n:
\[
1 < p < \infty \ \mbox{when } 3 \le n \le 6, \qquad 1 < p < \frac{2n}{n-6} \ \mbox{when } n > 6,
\]
F_A(x) ∈ L^p(Ω) and T_A is a contracting operator from L^p(Ω) to L^p(Ω).
Applying the Regularity Lifting Theorem, we arrive at
\[
u \in L^p(\Omega), \ \mbox{for any } 1 < p < \infty, \ \mbox{if } 3 \le n \le 6;
\qquad
u \in L^p(\Omega), \ \mbox{for any } 1 < p < \frac{2n}{n-6}, \ \mbox{if } n > 6.
\]
The only thing that prevents us from reaching the full range of p is the
estimate of F^2_A(x). However, from the starting point u ∈ L^{\frac{2n}{n-2}}(Ω), we
have arrived at
\[
u \in L^p(\Omega), \quad \mbox{for any } 1 < p < \frac{2n}{n-6}.
\]
Now, from this point on, through an entirely similar argument as above, we
reach
\[
u \in L^p(\Omega), \ \mbox{for any } 1 < p < \infty, \ \mbox{if } 3 \le n \le 10;
\qquad
u \in L^p(\Omega), \ \mbox{for any } 1 < p < \frac{2n}{n-10}, \ \mbox{if } n > 10.
\]
At each step, we add 4 to the range of dimensions n for which
\[
u \in L^p(\Omega), \quad 1 \le p < \infty,
\]
holds. Continuing in this way, we prove the theorem for all dimensions. This
completes the proof of the Theorem. □
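The dimension count at the end of the proof can be checked with a short script (our own illustrative sketch; the helper name is not from the text): after the first lifting round all dimensions n ≤ 6 are covered, and each further round raises the covered dimension by 4.

```python
# Count lifting rounds until u lies in L^p(Omega) for every p < infinity.
def lifting_rounds(n):
    """Rounds of the regularity lifting iteration needed for dimension n >= 3."""
    rounds = 1
    covered = 6          # round 1 covers all dimensions n <= 6
    while n > covered:
        covered += 4     # each further round adds 4 to the covered dimension
        rounds += 1
    return rounds

assert lifting_rounds(5) == 1    # 3 <= n <= 6: one round suffices
assert lifting_rounds(10) == 2   # covered after the second round
assert lifting_rounds(11) == 3
```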
Now we will use a similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem.

Let $L$ be a second order uniformly elliptic operator in divergence form
$$Lu = -\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} + \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u. \eqno(3.60)$$
We will prove

Theorem 3.3.3 ($W^{2,p}$ Regularity under Weaker Conditions) Let $\Omega$ be a bounded domain in $R^n$ and $1 < p < \infty$. Assume that

i) $a_{ij}(x) \in W^{1,n+\delta}(\Omega)$ for some $\delta > 0$;

ii) $b_i(x) \in L^n(\Omega)$ if $1 < p < n$; $b_i(x) \in L^{n+\delta}(\Omega)$ if $p = n$; $b_i(x) \in L^p(\Omega)$ if $n < p < \infty$;

iii) $c(x) \in L^{n/2}(\Omega)$ if $1 < p < n/2$; $c(x) \in L^{n/2+\delta}(\Omega)$ if $p = n/2$; $c(x) \in L^p(\Omega)$ if $n/2 < p < \infty$.

Suppose $f \in L^p(\Omega)$ and $u$ is a $W_0^{1,p}(\Omega)$ weak solution of
$$Lu = f(x), \quad x \in \Omega. \eqno(3.61)$$
Then $u$ is in $W^{2,p}(\Omega)$.
Proof. Write equation (3.61) as
$$-\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} = -\left[\sum_{i=1}^n b_i(x) u_{x_i} + c(x) u\right] + f(x).$$
Let
$$L_o u := -\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j}.$$
By Proposition 3.2.2, we know that the equation $L_o u = 0$ has only the trivial solution, and it follows from Theorem 3.2.3 that the operator $L_o$ is invertible. Denote its inverse by $L_o^{-1}$. Then $L_o^{-1}$ is a bounded linear operator from $L^p(\Omega)$ to $W^{2,p}(\Omega)$.

For each $i$ and each positive number $A$, define
$$b_{iA}(x) = \begin{cases} b_i(x), & \text{if } |b_i(x)| \ge A \\ 0, & \text{elsewhere}. \end{cases}$$
Let
$$b_{iB}(x) := b_i(x) - b_{iA}(x).$$
Define $c_A(x)$ and $c_B(x)$ similarly.
Now we can rewrite equation (3.61) as
$$u = T_A u + F_A(u, x) \eqno(3.62)$$
where
$$T_A u = -L_o^{-1}\left[\sum_i b_{iA}(x) u_{x_i} + c_A(x) u\right]$$
and
$$F_A(u, x) = -L_o^{-1}\left[\sum_i b_{iB}(x) u_{x_i} + c_B(x) u\right] + L_o^{-1} f(x).$$
We first show that
$$F_A(u, \cdot) \in W^{2,p}(\Omega), \quad \forall \, u \in W^{1,p}(\Omega), \ f \in L^p(\Omega). \eqno(3.63)$$
Since $b_{iB}(x)$ and $c_B(x)$ are bounded, one can easily verify that
$$\sum_i b_{iB}(x) u_{x_i} + c_B(x) u \in L^p(\Omega)$$
for any $u \in W^{1,p}(\Omega)$. This implies (3.63).
Then we prove that $T_A$ is a contracting operator from $W^{2,p}(\Omega)$ to $W^{2,p}(\Omega)$. In fact, by the Hölder inequality and the Sobolev embedding from $W^{2,p}$ to $W^{1,q}$, one can see that, for any $v \in W^{2,p}(\Omega)$,
$$\|b_{iA} Dv\|_{L^p} \le C \begin{cases} \|b_{iA}\|_{L^n} \|v\|_{W^{2,p}}, & \text{if } 1 < p < n \\ \|b_{iA}\|_{L^{n+\delta}} \|v\|_{W^{2,p}}, & \text{if } p = n \\ \|b_{iA}\|_{L^p} \|v\|_{W^{2,p}}, & \text{if } n < p < \infty, \end{cases} \eqno(3.64)$$
and
$$\|c_A v\|_{L^p} \le C \begin{cases} \|c_A\|_{L^{n/2}} \|v\|_{W^{2,p}}, & \text{if } 1 < p < n/2 \\ \|c_A\|_{L^{n/2+\delta}} \|v\|_{W^{2,p}}, & \text{if } p = n/2 \\ \|c_A\|_{L^p} \|v\|_{W^{2,p}}, & \text{if } n/2 < p < \infty. \end{cases} \eqno(3.65)$$
Under the conditions of the theorem, the first norms on the right hand side of (3.64) and (3.65), i.e. $\|b_{iA}\|_{L^n}$ and $\|c_A\|_{L^{n/2}}$, can be made arbitrarily small if $A$ is sufficiently large. Hence for any $\epsilon > 0$, there exists an $A > 0$, such that
$$\left\|\sum_i b_{iA}(x) v_{x_i} + c_A(x) v\right\|_{L^p} \le \epsilon \|v\|_{W^{2,p}}, \quad \forall \, v \in W^{2,p}(\Omega).$$
Taking into account that $L_o^{-1}$ is a bounded operator from $L^p(\Omega)$ to $W^{2,p}(\Omega)$, we deduce that $T_A$ is a contraction mapping from $W^{2,p}(\Omega)$ to $W^{2,p}(\Omega)$. It follows from the Contraction Mapping Theorem that there exists a $v \in W^{2,p}(\Omega)$, such that
$$v = T_A v + F_A(u, x).$$
Now $u$ and $v$ satisfy the same equation. By uniqueness, we must have $u = v$. Therefore we derive that $u \in W^{2,p}(\Omega)$. This completes the proof of the theorem. $\square$
3.3.4 Applications to Integral Equations

As another application of the Regularity Lifting Theorem, we consider the following integral equation
$$u(x) = \int_{R^n} \frac{1}{|x-y|^{n-\alpha}} \, u(y)^{\frac{n+\alpha}{n-\alpha}} \, dy, \quad x \in R^n, \eqno(3.66)$$
where $\alpha$ is any real number satisfying $0 < \alpha < n$.

It arises as an Euler-Lagrange equation for a functional under a constraint in the context of the Hardy-Littlewood-Sobolev inequalities.

It is also closely related to the following family of semilinear partial differential equations
$$(-\Delta)^{\alpha/2} u = u^{(n+\alpha)/(n-\alpha)}, \quad x \in R^n, \ n \ge 3. \eqno(3.67)$$
Actually, one can prove (see [CLO]) that the integral equation (3.66) and the PDE (3.67) are equivalent.

In the special case $\alpha = 2$, (3.67) becomes
$$-\Delta u = u^{(n+2)/(n-2)}, \quad x \in R^n,$$
which is the well-known Yamabe equation.
In the context of the Hardy-Littlewood-Sobolev inequality, it is natural to start with $u \in L^{\frac{2n}{n-\alpha}}(R^n)$. We will use the Regularity Lifting Theorem to boost $u$ to $L^q(R^n)$ for any $1 < q < \infty$, and hence to $L^\infty(R^n)$. More generally, we consider the following equation
$$u(x) = \int_{R^n} \frac{K(y) |u(y)|^{p-1} u(y)}{|x-y|^{n-\alpha}} \, dy \eqno(3.68)$$
for some $1 < p < \infty$. We prove
Theorem 3.3.4 Let $u$ be a solution of (3.68). Assume that
$$u \in L^{q_o}(R^n) \ \text{ for some } q_o > \frac{n}{n-\alpha}, \eqno(3.69)$$
$$\int_{R^n} \left| K(y) |u(y)|^{p-1} \right|^{\frac{n}{\alpha}} dy < \infty, \ \text{ and } \ |K(y)| \le C\left(1 + \frac{1}{|y|^\gamma}\right) \ \text{ for some } \gamma < \alpha. \eqno(3.70)$$
Then $u$ is in $L^q(R^n)$ for any $1 < q < \infty$.
Remark 3.3.2 i) If
$$p > \frac{n}{n-\alpha}, \quad |K(x)| \le M, \ \text{ and } \ u \in L^{\frac{n(p-1)}{\alpha}}(R^n), \eqno(3.71)$$
then conditions (3.69) and (3.70) are satisfied.

ii) The last condition in (3.71) is somewhat sharp, in the sense that if it is violated, then equation (3.68) with $K(x) \equiv 1$ possesses singular solutions such as
$$u(x) = \frac{c}{|x|^{\frac{\alpha}{p-1}}}.$$
Proof of Theorem 3.3.4: Define the linear operator
$$T_v w = \int_{R^n} \frac{K(y) |v(y)|^{p-1} w(y)}{|x-y|^{n-\alpha}} \, dy.$$
For any real number $a > 0$, define
$$u_a(x) = \begin{cases} u(x), & \text{if } |u(x)| > a \text{ or } |x| > a \\ 0, & \text{otherwise}. \end{cases}$$
Let $u_b(x) = u(x) - u_a(x)$.

Then since $u$ satisfies equation (3.68), one can verify that $u_a$ satisfies the equation
$$u_a = T_{u_a} u_a + g(x) \eqno(3.72)$$
with the function
$$g(x) = \int_{R^n} \frac{K(y) |u_b(y)|^{p-1} u_b(y)}{|x-y|^{n-\alpha}} \, dy - u_b(x).$$
Under the second part of the condition (3.70), it is obvious that
$$g(x) \in L^\infty \cap L^{q_o}.$$
For any $q > \frac{n}{n-\alpha}$, we first apply the Hardy-Littlewood-Sobolev inequality to obtain
$$\|T_{u_a} w\|_{L^q} \le C(\alpha, n, q) \left\| K |u_a|^{p-1} w \right\|_{L^{\frac{nq}{n+\alpha q}}}. \eqno(3.73)$$
Then, writing $H(x) = K(x) |u_a(x)|^{p-1}$, we apply the Hölder inequality to the right hand side of the above inequality:
$$\left( \int_{R^n} |H(y)|^{\frac{nq}{n+\alpha q}} |w(y)|^{\frac{nq}{n+\alpha q}} \, dy \right)^{\frac{n+\alpha q}{nq}} \le \left[ \left( \int_{R^n} |H(y)|^{\frac{nq}{n+\alpha q} \cdot r} \, dy \right)^{\frac{1}{r}} \left( \int_{R^n} |w(y)|^{\frac{nq}{n+\alpha q} \cdot s} \, dy \right)^{\frac{1}{s}} \right]^{\frac{n+\alpha q}{nq}} = \left( \int_{R^n} |H(y)|^{\frac{n}{\alpha}} \, dy \right)^{\frac{\alpha}{n}} \|w\|_{L^q}. \eqno(3.74)$$
Here we have chosen
$$s = \frac{n+\alpha q}{n} \quad \text{and} \quad r = \frac{n+\alpha q}{\alpha q}.$$
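The choice of $r$ and $s$ above can be checked with exact rational arithmetic: one needs $\frac{1}{r} + \frac{1}{s} = 1$, $\frac{nq}{n+\alpha q} \cdot s = q$ (so the second factor is the $L^q$ norm of $w$), and $\frac{nq}{n+\alpha q} \cdot r = \frac{n}{\alpha}$ (so the first factor involves the $L^{n/\alpha}$ norm of $H$). The sample triples $(n, \alpha, q)$ below are arbitrary test values of ours:

```python
from fractions import Fraction as Fr

# Exponent bookkeeping of (3.74): conjugate exponents r, s and the two
# resulting Lebesgue exponents, checked exactly with rationals.
def check(n, a, q):
    s = Fr(n + a*q, n)
    r = Fr(n + a*q, a*q)
    base = Fr(n*q, n + a*q)
    assert Fr(1, 1)/r + Fr(1, 1)/s == 1   # Hoelder conjugacy
    assert base * s == q                  # second factor: ||w||_{L^q}
    assert base * r == Fr(n, a)           # first factor: L^{n/alpha} norm of H

for (n, a, q) in [(5, 2, 3), (7, 3, 2), (9, 4, 5)]:
    check(n, a, q)
```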
It follows from (3.73) and (3.74) that
$$\|T_{u_a} w\|_{L^q} \le C(\alpha, n, q) \left( \int \left| K(y) |u_a(y)|^{p-1} \right|^{\frac{n}{\alpha}} dy \right)^{\frac{\alpha}{n}} \|w\|_{L^q}. \eqno(3.75)$$
By (3.70) and (3.75), we deduce, for sufficiently large $a$,
$$\|T_{u_a} w\|_{L^q} \le \frac{1}{2} \|w\|_{L^q}. \eqno(3.76)$$
Applying (3.76) to both the case $q = q_o$ and the case $q = p_o > q_o$, and by the Contraction Mapping Theorem, we see that the equation
$$w = T_{u_a} w + g(x) \eqno(3.77)$$
has a unique solution in both $L^{q_o}$ and $L^{p_o} \cap L^{q_o}$. From (3.72), $u_a$ is a solution of (3.77) in $L^{q_o}$. Let $w$ be the solution of (3.77) in $L^{p_o} \cap L^{q_o}$; then $w$ is also a solution in $L^{q_o}$. By the uniqueness, we must have $u_a = w \in L^{p_o} \cap L^{q_o}$ for any $p_o > \frac{n}{n-\alpha}$, and hence so is $u$. This completes the proof of the Theorem. $\square$
4
Preliminary Analysis on Riemannian Manifolds

4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
4.4.1 Curvature of Plane Curves
4.4.2 Curvature of Surfaces in $R^3$
4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator
4.5.2 Integrals
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
4.6 Sobolev Embeddings
According to Einstein, the Universe we live in is a curved space. We call this curved space a manifold, which is a generalization of the flat Euclidean space. Roughly speaking, each neighborhood of a point on a manifold is homeomorphic to an open set in the Euclidean space, and the manifold as a whole is obtained by pasting together pieces of Euclidean spaces. In this section, we will introduce the most basic concepts of differentiable manifolds, tangent spaces, Riemannian metrics, and curvatures. We will try to be as intuitive and brief as possible. For more rigorous arguments and detailed discussions, please see, for example, the book Riemannian Geometry by do Carmo [Ca].
4.1 Differentiable Manifolds

A simple example of a two dimensional differentiable manifold is a regular surface in $R^3$. Intuitively, it is a union of open sets of $R^2$, organized in such a way that at the intersection of any two open sets the change from one to the other can be made in a differentiable manner. More precisely, we have

Definition 4.1.1 (Differentiable Manifolds) A differentiable $n$-dimensional manifold $M$ is

(i) a union of open sets, $M = \bigcup_\alpha V_\alpha$, together with a family of homeomorphisms $\phi_\alpha$ from $V_\alpha$ to an open set $\phi_\alpha(V_\alpha)$ in $R^n$, such that

(ii) for any pair $\alpha$ and $\beta$ with $V_\alpha \cap V_\beta = U \neq \emptyset$, the images of the intersection $\phi_\alpha(U)$ and $\phi_\beta(U)$ are open sets in $R^n$ and the mappings $\phi_\beta \circ \phi_\alpha^{-1}$ are differentiable;

(iii) the family $\{V_\alpha, \phi_\alpha\}$ is maximal relative to the conditions (i) and (ii).
A manifold $M$ is a union of open sets. In each open set $U \subset M$, one can define a coordinate chart. In fact, let $\phi_U$ be the homeomorphism from $U$ to $\phi_U(U)$ in $R^n$; for each $p \in U$, if $\phi_U(p) = (x_1, x_2, \cdots, x_n)$ in $R^n$, then we can define the coordinates of $p$ to be $(x_1, x_2, \cdots, x_n)$. These are called the local coordinates of $p$, and $(U, \phi_U)$ is called a coordinate chart. If a family of coordinate charts covering $M$ is chosen so that the transition from one chart to another is differentiable, as in condition (ii), then it is called a differentiable structure of $M$. Given a differentiable structure on $M$, one can easily complete it into a maximal one, by taking the union of all the coordinate charts of $M$ that are compatible with (in the sense of condition (ii)) the ones in the given structure.

A trivial example of an $n$-dimensional manifold is the Euclidean space $R^n$, with the differentiable structure given by the identity. The following are two nontrivial examples.
Example 1. The $n$-dimensional unit sphere
$$S^n = \{x \in R^{n+1} \mid x_1^2 + \cdots + x_{n+1}^2 = 1\}.$$
Take the unit circle $S^1$ for instance; we can use the following four coordinate charts:
$$V_1 = \{x \in S^1 \mid x_2 > 0\}, \ \phi_1(x) = x_1; \qquad V_2 = \{x \in S^1 \mid x_2 < 0\}, \ \phi_2(x) = x_1;$$
$$V_3 = \{x \in S^1 \mid x_1 > 0\}, \ \phi_3(x) = x_2; \qquad V_4 = \{x \in S^1 \mid x_1 < 0\}, \ \phi_4(x) = x_2.$$
Obviously, $S^1$ is the union of the four open sets $V_1$, $V_2$, $V_3$, and $V_4$. On the intersection of any two open sets, say on $V_1 \cap V_3$, we have
$$x_2 = \sqrt{1 - x_1^2}, \ x_1 > 0; \qquad x_1 = \sqrt{1 - x_2^2}, \ x_2 > 0.$$
These are both differentiable functions. The same is true on the other intersections. Therefore, with this differentiable structure, $S^1$ is a 1-dimensional differentiable manifold.
Example 2. The $n$-dimensional real projective space $P^n(R)$. It is the set of straight lines of $R^{n+1}$ which pass through the origin $(0, \cdots, 0) \in R^{n+1}$, or the quotient space of $R^{n+1} \setminus \{0\}$ by the equivalence relation
$$(x_1, \cdots, x_{n+1}) \sim (\lambda x_1, \cdots, \lambda x_{n+1}), \quad \lambda \in R, \ \lambda \neq 0.$$
We denote the points in $P^n(R)$ by $[x]$. Observe that, if $x_i \neq 0$, then
$$[x_1, \cdots, x_{n+1}] = [x_1/x_i, \cdots, x_{i-1}/x_i, 1, x_{i+1}/x_i, \cdots, x_{n+1}/x_i].$$
For $i = 1, 2, \cdots, n+1$, let
$$V_i = \{[x] \mid x_i \neq 0\}.$$
Apparently,
$$P^n(R) = \bigcup_{i=1}^{n+1} V_i.$$
Geometrically, $V_i$ is the set of straight lines of $R^{n+1}$ which pass through the origin and which do not belong to the hyperplane $x_i = 0$.

Define
$$\phi_i([x]) = (\xi^i_1, \cdots, \xi^i_{i-1}, \xi^i_{i+1}, \cdots, \xi^i_{n+1}),$$
where $\xi^i_k = x_k/x_i$ ($i \neq k$) and $1 \le i \le n+1$.

On the intersection $V_i \cap V_j$, the changes of coordinates
$$\xi^j_k = \frac{\xi^i_k}{\xi^i_j} \ (k \neq i, j), \qquad \xi^j_i = \frac{1}{\xi^i_j}$$
are obviously smooth; thus $\{V_i, \phi_i\}$ is a differentiable structure of $P^n(R)$, and hence $P^n(R)$ is a differentiable manifold.
4.2 Tangent Spaces

We know that at every point of a regular curve or a regular surface, there exists a tangent line or a tangent plane. Similarly, given a differentiable structure on a manifold, we can define a tangent space at each point. For a regular surface $S$ in $R^3$, at each given point $p$, the tangent plane is the space of all tangent vectors, and each tangent vector is defined as the "velocity" in the ambient space $R^3$. Since on a differentiable manifold there is no such ambient space, we have to find a characteristic property of the tangent vector which does not involve the concept of velocity. To this end, let's consider a smooth curve in $R^n$:
$$\gamma(t) = (x_1(t), \cdots, x_n(t)), \quad t \in (-\epsilon, \epsilon), \ \text{ with } \gamma(0) = p.$$
The tangent vector of $\gamma$ at the point $p$ is
$$v = (x_1'(0), \cdots, x_n'(0)).$$
Let $f(x)$ be a smooth function near $p$. Then the directional derivative of $f$ along the vector $v \in R^n$ can be expressed as
$$\frac{d}{dt}(f \circ \gamma)\Big|_{t=0} = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p) \frac{d x_i}{dt}(0) = \left( \sum_i x_i'(0) \frac{\partial}{\partial x_i} \right) f.$$
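The identity above is easy to confirm numerically with finite differences; the function $f$ and the curve $\gamma$ below are hypothetical choices of ours, used only for illustration:

```python
import math

# Check d/dt (f o gamma)|_{t=0} = sum_i x_i'(0) * df/dx_i(p) numerically.
def f(x, y):
    return x*x*y + math.sin(x)

def gamma(t):
    return (1.0 + t, 2.0 + 3.0*t + t*t)   # gamma(0) = (1,2), gamma'(0) = (1,3)

h = 1e-6
# left side: derivative of f along the curve, by a central difference
lhs = (f(*gamma(h)) - f(*gamma(-h))) / (2*h)
# right side: x'(0)*f_x(p) + y'(0)*f_y(p), partials by central differences
p = gamma(0.0)
fx = (f(p[0]+h, p[1]) - f(p[0]-h, p[1])) / (2*h)
fy = (f(p[0], p[1]+h) - f(p[0], p[1]-h)) / (2*h)
rhs = 1.0*fx + 3.0*fy
assert abs(lhs - rhs) < 1e-6
```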
Hence the directional derivative is an operator on differentiable functions that depends uniquely on the vector $v$. We can identify $v$ with this operator, and thus generalize the concept of tangent vectors to differentiable manifolds $M$.

Definition 4.2.1 A differentiable function $\gamma : (-\epsilon, \epsilon) \rightarrow M$ is called a (differentiable) curve in $M$.

Let $\mathcal{D}$ be the set of all functions that are differentiable at the point $p = \gamma(0) \in M$. The tangent vector to the curve $\gamma$ at $t = 0$ is an operator (or function) $\gamma'(0) : \mathcal{D} \rightarrow R$ given by
$$\gamma'(0) f = \frac{d}{dt}(f \circ \gamma)\Big|_{t=0}.$$
A tangent vector at $p$ is the tangent vector of some curve $\gamma$ at $t = 0$ with $\gamma(0) = p$. The set of all tangent vectors at $p$ is called the tangent space at $p$ and denoted by $T_p M$.
Let $(x_1, \cdots, x_n)$ be the coordinates in a local chart $(U, \phi)$ that covers $p$, with $\phi(p) = 0 \in \phi(U)$. Let $\gamma(t) = (x_1(t), \cdots, x_n(t))$ be a curve in $M$ with $\gamma(0) = p$. Then
$$\gamma'(0) f = \frac{d}{dt} f(\gamma(t))\Big|_{t=0} = \frac{d}{dt} f(x_1(t), \cdots, x_n(t))\Big|_{t=0} = \sum_i x_i'(0) \frac{\partial f}{\partial x_i} = \left( \sum_i x_i'(0) \left( \frac{\partial}{\partial x_i} \right)_p \right) f.$$
This shows that the vector $\gamma'(0)$ can be expressed as a linear combination of the $\left( \frac{\partial}{\partial x_i} \right)_p$, i.e.
$$\gamma'(0) = \sum_i x_i'(0) \left( \frac{\partial}{\partial x_i} \right)_p.$$
Here $\left( \frac{\partial}{\partial x_i} \right)_p$ is the tangent vector at $p$ of the "coordinate curve"
$$x_i \rightarrow \phi^{-1}(0, \cdots, 0, x_i, 0, \cdots, 0),$$
and
$$\left( \frac{\partial}{\partial x_i} \right)_p f = \frac{\partial}{\partial x_i} f(\phi^{-1}(0, \cdots, 0, x_i, 0, \cdots, 0))\Big|_{x_i = 0}.$$
One can see that the tangent space $T_p M$ is an $n$-dimensional linear space with the basis
$$\left( \frac{\partial}{\partial x_1} \right)_p, \cdots, \left( \frac{\partial}{\partial x_n} \right)_p.$$
The dual space of the tangent space $T_p M$ is called the cotangent space, and denoted by $T^*_p M$. We can realize it in the following way.

We first introduce the differential of a differentiable mapping.

Definition 4.2.2 Let $M$ and $N$ be two differentiable manifolds and let $f : M \rightarrow N$ be a differentiable mapping. For every $p \in M$ and for each $v \in T_p M$, choose a differentiable curve $\gamma : (-\epsilon, \epsilon) \rightarrow M$ with $\gamma(0) = p$ and $\gamma'(0) = v$. Let $\beta = f \circ \gamma$. Define the mapping
$$df_p : T_p M \rightarrow T_{f(p)} N$$
by
$$df_p(v) = \beta'(0).$$
We call $df_p$ the differential of $f$. Obviously, it is a linear mapping from one tangent space $T_p M$ to the other $T_{f(p)} N$; and one can show that it does not depend on the choice of $\gamma$ (see [Ca]). When $N = R^1$, the collection of all such differentials forms the cotangent space. From the definition, one can derive that
$$df_p \left( \left( \frac{\partial}{\partial x_i} \right)_p \right) = \left( \frac{\partial}{\partial x_i} \right)_p f.$$
In particular, for the differential $(dx_i)_p$ of the coordinate function $x_i$, we have
$$(dx_i)_p \left( \left( \frac{\partial}{\partial x_j} \right)_p \right) = \delta_{ij},$$
and consequently,
$$df_p = \sum_i \left( \frac{\partial f}{\partial x_i} \right)_p (dx_i)_p.$$
From here we can see that
$$(dx_1)_p, (dx_2)_p, \cdots, (dx_n)_p$$
form a basis of the cotangent space $T^*_p M$ at the point $p$.

Definition 4.2.3 The tangent space of $M$ is
$$TM = \bigcup_{p \in M} T_p M,$$
and the cotangent space of $M$ is
$$T^* M = \bigcup_{p \in M} T^*_p M.$$
4.3 Riemannian Metrics

To measure arc length, area, or volume on a manifold, we need to introduce some kind of measurement, or 'metric'. For a surface $S$ in the three dimensional Euclidean space $R^3$, there is a natural way of measuring the lengths of vectors tangent to $S$: they are simply the lengths of the vectors in the ambient space $R^3$. These can be expressed in terms of the inner product $\langle \cdot, \cdot \rangle$ in $R^3$. Given a curve $\gamma(t)$ on $S$ for $t \in [a, b]$, its length is the integral
$$\int_a^b |\gamma'(t)| \, dt,$$
where the length of the velocity vector $\gamma'(t)$ is given by the inner product in $R^3$:
$$|\gamma'(t)| = \sqrt{\langle \gamma'(t), \gamma'(t) \rangle}.$$
The definition of the inner product $\langle \cdot, \cdot \rangle$ enables us to measure not only the lengths of curves on $S$, but also the areas of domains on $S$, as well as other quantities in geometry.
Now we generalize this concept to differentiable manifolds without using the ambient space. More precisely, we have

Definition 4.3.1 A Riemannian metric on a differentiable manifold $M$ is a correspondence which associates to each point $p$ of $M$ an inner product $\langle \cdot, \cdot \rangle_p$ on the tangent space $T_p M$, which varies differentiably, that is,
$$g_{ij}(q) \equiv \left\langle \left( \frac{\partial}{\partial x_i} \right)_q, \left( \frac{\partial}{\partial x_j} \right)_q \right\rangle_q$$
is a differentiable function of $q$ for all $q$ near $p$.

A differentiable manifold with a given Riemannian metric will be called a Riemannian manifold.

It is clear that this definition does not depend on the choice of the coordinate system. Also one can prove that (see [Ca])

Proposition 4.3.1 A differentiable manifold $M$ has a Riemannian metric.
Proposition 4.3.1 A diﬀerentiable manifold M has a Riemannian metric.
Now we show how a Riemannian metric can be used to measure the length
of a curve on a manifold M.
Let I be an open interval in R
1
, and let γ : I→M be a diﬀerentiable
mapping ( curve ) on M. For each t ∈ I, dγ is a linear mapping from T
t
I to
T
γ(t)
M, and γ
(t) ≡
dγ
dt
≡ dγ(
d
dt
) is a tangent vector of T
γ(t)
M. The restriction
of the curve γ to a closed interval [a, b] ⊂ I is called a segment. We deﬁne the
length of the segment by
b
a
< γ
(t), γ
(t) >
1/2
dt.
4.3 Riemannian Metrics 113
Here the arc length element is
ds =< γ
(t), γ
(t) >
1/2
dt. (4.1)
As we mentioned in the previous section, let (x
1
, , x
n
) be the coordinates
in a local chart (U, φ) that covers p = γ(t) = (x
1
(t), , x
n
(t)). Then
γ
(t) =
¸
i
x
i
(t)
∂
∂x
i
p
.
Consequently,
$$ds^2 = \langle \gamma'(t), \gamma'(t) \rangle \, dt^2 = \sum_{i,j=1}^n \left\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \right\rangle_p x_i'(t) \, dt \; x_j'(t) \, dt = \sum_{i,j=1}^n g_{ij}(p) \, dx_i \, dx_j.$$
This expresses the length element $ds$ in terms of the metric $g_{ij}$.
To derive the formula for the volume of a region (an open connected subset) $D$ in $M$ in terms of the metric $g_{ij}$, let's begin with the volume of the parallelepiped formed by tangent vectors. Let $\{e_1, \cdots, e_n\}$ be an orthonormal basis of $T_p M$. Let
$$\frac{\partial}{\partial x_i}(p) = \sum_j a_{ij} e_j, \quad i = 1, \cdots, n.$$
Then
$$g_{ik}(p) = \left\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_k} \right\rangle_p = \sum_{j,l} a_{ij} a_{kl} \langle e_j, e_l \rangle = \sum_j a_{ij} a_{kj}. \eqno(4.2)$$
We know the volume of the parallelepiped formed by $\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}$ is the determinant $\det(a_{ij})$, and by virtue of (4.2), it is the same as $\sqrt{\det(g_{ij})}$.

Assume that the region $D$ is covered by a coordinate chart $(U, \phi)$ with coordinates $(x_1, \cdots, x_n)$. From the above arguments, it is natural to define the volume of $D$ as
$$vol(D) = \int_{\phi(U)} \sqrt{\det(g_{ij})} \, dx_1 \cdots dx_n.$$
A trivial example is $M = R^n$, the $n$-dimensional Euclidean space, with
$$\frac{\partial}{\partial x_i} = e_i = (0, \cdots, 1, \cdots, 0).$$
The metric is given by
$$g_{ij} = \langle e_i, e_j \rangle = \delta_{ij}.$$
A less trivial example is $S^2$, the 2-dimensional unit sphere with the standard metric, i.e., with the metric inherited from the ambient space $R^3 = \{(x, y, z)\}$. We will use the usual spherical coordinates $(\theta, \psi)$ with
$$x = \sin\theta \cos\psi, \quad y = \sin\theta \sin\psi, \quad z = \cos\theta$$
to illustrate how we use the measurement (inner product) in $R^3$ to derive the expression of the standard metric $g_{ij}$ in terms of the local coordinates $(\theta, \psi)$ on $S^2$.

Let $p$ be a point on $S^2$ covered by a local coordinate chart $(U, \phi)$ with coordinates $(\theta, \psi)$. We first derive the expressions for $\left( \frac{\partial}{\partial \theta} \right)_p$ and $\left( \frac{\partial}{\partial \psi} \right)_p$, the tangent vectors at the point $p = \phi^{-1}(\theta, \psi)$ of the coordinate curves $\psi = constant$ and $\theta = constant$ respectively.

In the coordinates of $R^3$, we have
$$\left( \frac{\partial}{\partial \theta} \right)_p = \left( \frac{\partial x}{\partial \theta}, \frac{\partial y}{\partial \theta}, \frac{\partial z}{\partial \theta} \right) = (\cos\theta \cos\psi, \cos\theta \sin\psi, -\sin\theta),$$
and
$$\left( \frac{\partial}{\partial \psi} \right)_p = \left( \frac{\partial x}{\partial \psi}, \frac{\partial y}{\partial \psi}, \frac{\partial z}{\partial \psi} \right) = (-\sin\theta \sin\psi, \sin\theta \cos\psi, 0).$$
It follows that
$$g_{11}(p) = \left\langle \frac{\partial}{\partial \theta}, \frac{\partial}{\partial \theta} \right\rangle_p = \cos^2\theta \cos^2\psi + \cos^2\theta \sin^2\psi + \sin^2\theta = 1,$$
$$g_{12}(p) = g_{21}(p) = \left\langle \frac{\partial}{\partial \theta}, \frac{\partial}{\partial \psi} \right\rangle_p = 0,$$
and
$$g_{22}(p) = \left\langle \frac{\partial}{\partial \psi}, \frac{\partial}{\partial \psi} \right\rangle_p = \sin^2\theta.$$
Consequently, the length element is
$$ds = \sqrt{g_{11} \, d\theta^2 + 2 g_{12} \, d\theta \, d\psi + g_{22} \, d\psi^2} = \sqrt{d\theta^2 + \sin^2\theta \, d\psi^2},$$
and the area element is
$$dA = \sqrt{\det(g_{ij})} \, d\theta \, d\psi = \sin\theta \, d\theta \, d\psi.$$
These conform with our knowledge of length and area in spherical coordinates.
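One can recover these metric components numerically from the embedding, and check that the area element integrates to the total area $4\pi$ of the unit sphere; the step sizes and sample points below are our own illustrative choices:

```python
import math

# Recover g_ij on S^2 from the embedding (theta, psi) -> R^3 by
# differentiating numerically, then integrate sqrt(det g) = sin(theta).
def emb(th, ps):
    return (math.sin(th)*math.cos(ps), math.sin(th)*math.sin(ps), math.cos(th))

def metric(th, ps, h=1e-6):
    d_th = [(a - b)/(2*h) for a, b in zip(emb(th+h, ps), emb(th-h, ps))]
    d_ps = [(a - b)/(2*h) for a, b in zip(emb(th, ps+h), emb(th, ps-h))]
    dot = lambda u, v: sum(x*y for x, y in zip(u, v))
    return dot(d_th, d_th), dot(d_th, d_ps), dot(d_ps, d_ps)

g11, g12, g22 = metric(1.0, 0.7)
assert abs(g11 - 1.0) < 1e-8 and abs(g12) < 1e-8
assert abs(g22 - math.sin(1.0)**2) < 1e-8

# total area: integrate sin(theta) over [0, pi] x [0, 2 pi] (midpoint rule)
N = 2000
area = sum(math.sin((k + 0.5)*math.pi/N) for k in range(N)) * (math.pi/N) * 2*math.pi
assert abs(area - 4*math.pi) < 1e-3
```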
4.4 Curvature

4.4.1 Curvature of Plane Curves

Consider curves on a plane. Intuitively, we feel that a straight line does not bend at all, and that a circle of radius one bends more than a circle of radius 2. To measure the extent of bending of a curve, or the amount it deviates from being straight, we introduce curvature.

Let $p$ be a point on a curve, and let $T(p)$ be the unit tangent vector at $p$. Let $q$ be a nearby point. Let $\Delta\alpha$ be the amount of change of the angle from $T(p)$ to $T(q)$, and $\Delta s$ the arc length between $p$ and $q$ on the curve. We define the curvature at $p$ as
$$k(p) = \lim_{q \rightarrow p} \frac{\Delta\alpha}{\Delta s} = \frac{d\alpha}{ds}(p).$$
If a plane curve is given parametrically as $c(t) = (x(t), y(t))$, then the unit tangent vector at $c(t)$ is
$$T(t) = \frac{(x'(t), y'(t))}{\sqrt{[x'(t)]^2 + [y'(t)]^2}},$$
and
$$ds = \sqrt{[x'(t)]^2 + [y'(t)]^2} \, dt.$$
By a straightforward calculation, one can verify that
$$k(c(t)) = \frac{|dT|}{ds} = \frac{x'(t) y''(t) - y'(t) x''(t)}{\{[x'(t)]^2 + [y'(t)]^2\}^{3/2}}.$$
If a plane curve is given explicitly as $y = f(x)$, then, replacing $t$ by $x$ and $y(t)$ by $f(x)$ in the above formula, we find that the curvature at the point $(x, f(x))$ is
$$k(x, f(x)) = \frac{f''(x)}{\{1 + [f'(x)]^2\}^{3/2}}. \eqno(4.3)$$
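The parametric curvature formula and its specialization (4.3) can be sanity-checked numerically; the circle and parabola test cases below are our own:

```python
import math

# k = (x'y'' - y'x'') / (x'^2 + y'^2)^{3/2}, derivatives by finite differences.
def curvature(x, y, t, h=1e-4):
    x1 = (x(t+h) - x(t-h)) / (2*h);  y1 = (y(t+h) - y(t-h)) / (2*h)
    x2 = (x(t+h) - 2*x(t) + x(t-h)) / h**2
    y2 = (y(t+h) - 2*y(t) + y(t-h)) / h**2
    return (x1*y2 - y1*x2) / (x1*x1 + y1*y1)**1.5

# a circle of radius R has constant curvature 1/R
R = 2.0
k_circle = curvature(lambda t: R*math.cos(t), lambda t: R*math.sin(t), 0.3)
assert abs(k_circle - 1.0/R) < 1e-5

# the graph y = f(x) = x^2 at x = 0: formula (4.3) gives f''/(1+f'^2)^{3/2} = 2
k_parab = curvature(lambda t: t, lambda t: t*t, 0.0)
assert abs(k_parab - 2.0) < 1e-5
```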
4.4.2 Curvature of Surfaces in $R^3$

1. Principal Curvature

Let $S$ be a surface in $R^3$, $p$ a point on $S$, and $N(p)$ a normal vector of $S$ at $p$. Given a tangent vector $T(p)$ at $p$, consider the intersection of the surface with the plane containing $T(p)$ and $N(p)$. This intersection is a plane curve, and its curvature is taken as the absolute value of the normal curvature; the normal curvature is positive if the curve turns in the same direction as the surface's chosen normal, and negative otherwise. The normal curvature varies as the tangent vector $T(p)$ changes. The maximum and minimum values of the normal curvature at the point $p$ are called the principal curvatures, and are usually denoted by $k_1(p)$ and $k_2(p)$.
2. Gaussian Curvature. One way to measure the extent of bending of a surface is by the Gaussian curvature. It is defined as the product of the two principal curvatures:
$$K(p) = k_1(p) \, k_2(p).$$
From this, one can see that no matter which side the normal is chosen on (say, for a closed surface, one may choose outer normals or inner normals), the Gaussian curvature of a sphere is positive, that of a one-sheeted hyperboloid is negative, and that of a cylinder or a plane is zero. A sphere of radius $R$ has Gaussian curvature $\frac{1}{R^2}$.

Let $f(x, y)$ be a differentiable function. Consider a surface $S$ in $R^3$ defined as the graph of $x_3 = f(x_1, x_2)$. Assume that the origin is on $S$, and the $xy$ plane is tangent to $S$, that is,
$$f(0,0) = 0 \ \text{ and } \ f_1(0,0) = 0 = f_2(0,0),$$
where, for simplicity, we write $f_i = \frac{\partial f}{\partial x_i}$, and we will use $f_{ij}$ to denote $\frac{\partial^2 f}{\partial x_i \partial x_j}$.
We calculate the Gaussian curvature of $S$ at the origin $0 = (0,0,0)$. Let $u = (u_1, u_2)$ be a unit vector on the tangent plane. Then by (4.3), the normal curvature $k_n(u)$ in the direction $u$ is
$$\frac{\dfrac{\partial^2 f}{\partial u^2}}{\left[ \left( \dfrac{\partial f}{\partial u} \right)^2 + 1 \right]^{3/2}}(0),$$
where $\frac{\partial f}{\partial u}$ and $\frac{\partial^2 f}{\partial u^2}$ are the first and second directional derivatives of $f$ in the direction $u$. Using the assumption that $f_1(0) = 0 = f_2(0)$, we arrive at
$$k_n(u) = (f_{11} u_1^2 + 2 f_{12} u_1 u_2 + f_{22} u_2^2)(0) = u^T F u,$$
where
$$F = \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix}(0),$$
and $u^T$ denotes the transpose of $u$.
Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of the matrix $F$, with corresponding unit eigenvectors $e_1$ and $e_2$, i.e.
$$F e_i = \lambda_i e_i, \quad i = 1, 2. \eqno(4.4)$$
Since the matrix $F$ is symmetric, the two eigenvectors $e_1$ and $e_2$ are orthogonal, and it is obvious that
$$e_i^T F e_i = \lambda_i, \quad i = 1, 2.$$
Let $\theta$ be the angle between $u$ and $e_1$. Then we can express
$$u = \cos\theta \, e_1 + \sin\theta \, e_2.$$
It follows that
$$k_n(u) = u^T F u = \cos^2\theta \, \lambda_1 + \sin^2\theta \, \lambda_2.$$
Assume that $\lambda_1 \le \lambda_2$. Then
$$\lambda_1 \le \lambda_1 + \sin^2\theta \, (\lambda_2 - \lambda_1) = k_n(u) = \cos^2\theta \, (\lambda_1 - \lambda_2) + \lambda_2 \le \lambda_2.$$
Hence $\lambda_1$ and $\lambda_2$ are the minimum and maximum values of the normal curvature, and therefore they are the principal curvatures $k_1$ and $k_2$.

From (4.4), we see that $\lambda_1$ and $\lambda_2$ are the solutions of the quadratic equation
$$\begin{vmatrix} f_{11} - \lambda & f_{12} \\ f_{21} & f_{22} - \lambda \end{vmatrix} = \lambda^2 - (f_{11} + f_{22})\lambda + (f_{11} f_{22} - f_{12} f_{21}) = 0.$$
Therefore, by definition, the Gaussian curvature at $0$ is
$$K(0) = k_1 k_2 = \lambda_1 \lambda_2 = (f_{11} f_{22} - f_{12} f_{21})(0).$$
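The formula $K(0) = (f_{11} f_{22} - f_{12} f_{21})(0)$ can be verified on a concrete graph; the spherical cap below is our own illustrative choice (a sphere of radius $R$ touching the plane at the origin, so we expect $K = 1/R^2$):

```python
import math

# At a point with horizontal tangent plane, K = f11*f22 - f12*f21,
# computed here with central second differences.
R = 3.0
def f(x, y):
    return R - math.sqrt(R*R - x*x - y*y)   # spherical cap, radius R

h = 1e-4
f11 = (f(h, 0) - 2*f(0, 0) + f(-h, 0)) / h**2
f22 = (f(0, h) - 2*f(0, 0) + f(0, -h)) / h**2
f12 = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4*h**2)
K = f11*f22 - f12*f12
assert abs(K - 1.0/R**2) < 1e-5
```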
4.4.3 Curvature on Riemannian Manifolds

On a general $n$ dimensional Riemannian manifold $M$, there are several kinds of curvatures. Before introducing them, we need a few preparations.

1. Vector Fields and Brackets

A vector field $X$ on $M$ is a correspondence that assigns to each point $p \in M$ a vector $X(p)$ in the tangent space $T_p M$. It is a mapping from $M$ to $TM$. If this mapping is differentiable, then we say the vector field $X$ is differentiable. In local coordinates, we can express
$$X(p) = \sum_{i=1}^n a_i(p) \frac{\partial}{\partial x_i}.$$
When applied to a smooth function $f$ on $M$, it is a directional derivative operator in the direction $X(p)$:
$$(Xf)(p) = \sum_{i=1}^n a_i(p) \frac{\partial f}{\partial x_i}.$$
Definition 4.4.1 For two differentiable vector fields $X$ and $Y$, define the Lie bracket as
$$[X, Y] = XY - YX.$$
If $X(p) = \sum_{i=1}^n a_i(p) \frac{\partial}{\partial x_i}$ and $Y(p) = \sum_{j=1}^n b_j(p) \frac{\partial}{\partial x_j}$, then
$$[X, Y] f = XYf - YXf = \sum_{i,j=1}^n \left( a_i \frac{\partial b_j}{\partial x_i} - b_i \frac{\partial a_j}{\partial x_i} \right) \frac{\partial f}{\partial x_j}.$$
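The coordinate formula for the bracket can be checked against the operator definition $XYf - YXf$ with nested finite differences; the fields $X$, $Y$ and the function $f$ below are hypothetical examples of ours. For $X = x\,\partial/\partial y$ and $Y = \partial/\partial x$, the formula predicts $[X, Y] = -\partial/\partial y$:

```python
import math

# Compare XYf - YXf with the coordinate formula's prediction [X,Y] = -d/dy
# for X = x d/dy (coefficients a = (0, x)) and Y = d/dx (b = (1, 0)).
def f(x, y):
    return x*x*y + math.cos(y)

h = 1e-4
def Xf(x, y):   # (Xf)(p) = x * df/dy
    return x * (f(x, y+h) - f(x, y-h)) / (2*h)

def Yf(x, y):   # (Yf)(p) = df/dx
    return (f(x+h, y) - f(x-h, y)) / (2*h)

p = (0.7, 1.2)
XYf = 0.7 * (Yf(p[0], p[1]+h) - Yf(p[0], p[1]-h)) / (2*h)   # X applied to Yf
YXf = (Xf(p[0]+h, p[1]) - Xf(p[0]-h, p[1])) / (2*h)         # Y applied to Xf
bracket = XYf - YXf
minus_fy = -(f(p[0], p[1]+h) - f(p[0], p[1]-h)) / (2*h)     # (-d/dy) f at p
assert abs(bracket - minus_fy) < 1e-5
```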
2. Covariant Derivatives and Riemannian Connections

To be more intuitive, we begin with a surface $S$ in $R^3$. Let $c : I \rightarrow S$ be a curve on $S$, and let $V(t)$ be a vector field along $c(t)$, $t \in I$. One can see that $\frac{dV}{dt}$ is a vector in the ambient space $R^3$, but it may not belong to the tangent plane of $S$. To consider the rate of change of $V(t)$ restricted to the surface $S$, we make an orthogonal projection of $\frac{dV}{dt}$ onto the tangent plane, and denote it by $\frac{DV}{dt}$. We call this the covariant derivative of $V(t)$ on $S$ – the derivative viewed by the two-dimensional creatures on $S$.

On an $n$-dimensional differentiable manifold $M$, the covariant derivative is defined by affine connections. Let $\mathcal{D}(M)$ be the set of all smooth vector fields.

Definition 4.4.2 Let $X, Y, Z \in \mathcal{D}(M)$, and let $f$ and $g$ be smooth functions on $M$. An affine connection $D$ on a differentiable manifold $M$ is a mapping from $\mathcal{D}(M) \times \mathcal{D}(M)$ to $\mathcal{D}(M)$ which maps $(X, Y)$ to $D_X Y$ and satisfies

1) $D_{fX + gY} Z = f D_X Z + g D_Y Z$;

2) $D_X(Y + Z) = D_X Y + D_X Z$;

3) $D_X(fY) = f D_X Y + X(f) Y$.
Given an affine connection $D$ on $M$, for a vector field $V(t) = Y(c(t))$ along a differentiable curve $c : I \rightarrow M$, define the covariant derivative of $V$ along $c(t)$ as
$$\frac{DV}{dt} = D_{\frac{dc}{dt}} Y.$$
In local coordinates, let
$$c(t) = (x_1(t), \cdots, x_n(t)) \ \text{ and } \ V(t) = \sum_{i=1}^n v_i(t) \frac{\partial}{\partial x_i}.$$
Then by the properties of the affine connection, one can express
$$\frac{DV}{dt} = \sum_i \frac{d v_i}{dt} \frac{\partial}{\partial x_i} + \sum_{i,j} \frac{d x_i}{dt} v_j D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j}. \eqno(4.5)$$
Recall that, in Euclidean spaces with the inner product $\langle \cdot, \cdot \rangle$, a vector field $V(t)$ along a curve $c(t)$ is parallel if and only if $\frac{dV}{dt} = 0$; and for a pair of parallel vector fields $X$ and $Y$ along $c(t)$, we have $\langle X, Y \rangle = constant$. Similarly, we have

Definition 4.4.3 Let $M$ be a differentiable manifold with an affine connection $D$. A vector field $V(t)$ along a curve $c(t) \in M$ is parallel if
$$\frac{DV}{dt} = 0.$$
Definition 4.4.4 Let $M$ be a differentiable manifold with an affine connection $D$ and a Riemannian metric $\langle \cdot, \cdot \rangle$. If for any pair of parallel vector fields $X$ and $Y$ along any smooth curve $c(t)$, we have
$$\langle X, Y \rangle_{c(t)} = constant,$$
then we say that $D$ is compatible with the metric $\langle \cdot, \cdot \rangle$.

The following fundamental theorem of Levi-Civita guarantees the existence of such an affine connection on a Riemannian manifold.

Theorem 4.4.1 Given a Riemannian manifold $M$ with metric $\langle \cdot, \cdot \rangle$, there exists a unique connection $D$ on $M$ that is compatible with $\langle \cdot, \cdot \rangle$. Moreover, this connection is symmetric, i.e.
$$D_X Y - D_Y X = [X, Y].$$
We skip the proof of the theorem. Interested readers may see the book of do Carmo [Ca] (page 55).
In local coordinates $(x_1, \cdots, x_n)$, this unique Riemannian connection can be expressed as
$$D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j} = \sum_k \Gamma^k_{ij} \frac{\partial}{\partial x_k},$$
where the
$$\Gamma^k_{ij} = \frac{1}{2} \sum_m \left( \frac{\partial g_{jm}}{\partial x_i} + \frac{\partial g_{mi}}{\partial x_j} - \frac{\partial g_{ij}}{\partial x_m} \right) g^{mk}$$
are called the Christoffel symbols, $g_{ij} = \left\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \right\rangle$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$.
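As a check, one can evaluate the Christoffel symbols of the $S^2$ metric $g = \mathrm{diag}(1, \sin^2\theta)$ from this formula with numerical derivatives; the expected values $\Gamma^\theta_{\psi\psi} = -\sin\theta\cos\theta$ and $\Gamma^\psi_{\theta\psi} = \cot\theta$ are classical. The discretization below is a rough sketch of ours:

```python
import math

# Gamma^k_ij = (1/2) sum_m (dg_jm/dx_i + dg_mi/dx_j - dg_ij/dx_m) g^{mk},
# evaluated for the S^2 metric in coordinates x = (theta, psi).
def g(x):
    return [[1.0, 0.0], [0.0, math.sin(x[0])**2]]

def christoffel(x, h=1e-5):
    def dg(i):                  # partial derivative of the metric in x_i
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        gp, gm = g(xp), g(xm)
        return [[(gp[a][b] - gm[a][b])/(2*h) for b in range(2)] for a in range(2)]
    d = [dg(0), dg(1)]
    ginv = [[1.0, 0.0], [0.0, 1.0/math.sin(x[0])**2]]
    G = [[[0.0]*2 for _ in range(2)] for _ in range(2)]   # G[k][i][j]
    for k in range(2):
        for i in range(2):
            for j in range(2):
                G[k][i][j] = 0.5*sum((d[i][j][m] + d[j][m][i] - d[m][i][j])*ginv[m][k]
                                     for m in range(2))
    return G

th = 1.0
G = christoffel((th, 0.3))
assert abs(G[0][1][1] + math.sin(th)*math.cos(th)) < 1e-6   # Gamma^theta_psipsi
assert abs(G[1][0][1] - math.cos(th)/math.sin(th)) < 1e-6   # Gamma^psi_thetapsi
```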
3. Curvature

Definition 4.4.5 Let $M$ be a Riemannian manifold with the Riemannian connection $D$. The curvature $R$ is a correspondence that assigns to every pair of smooth vector fields $X, Y \in \mathcal{D}(M)$ a mapping $R(X, Y) : \mathcal{D}(M) \rightarrow \mathcal{D}(M)$ defined by
$$R(X, Y) Z = D_Y D_X Z - D_X D_Y Z + D_{[X,Y]} Z, \quad \forall \, Z \in \mathcal{D}(M).$$
Definition 4.4.6 The sectional curvature of a given two dimensional subspace $E \subset T_p M$ at the point $p$ is
$$K(E) = \frac{\langle R(X, Y) X, Y \rangle}{|X|^2 |Y|^2 - \langle X, Y \rangle^2},$$
where $X$ and $Y$ are any two linearly independent vectors in $E$, and $\sqrt{|X|^2 |Y|^2 - \langle X, Y \rangle^2}$ is the area of the parallelogram spanned by $X$ and $Y$.
One can show that $K(E)$ so defined is independent of the choice of $X$ and $Y$. It is actually the Gaussian curvature of the section.

Example. Let $M$ be the unit sphere with the standard metric; then one can verify that its sectional curvature at every point is $K(E) = 1$.

Definition 4.4.7 Let $X = Y_n$ be a unit vector in $T_p M$, and let $\{Y_1, \cdots, Y_{n-1}\}$ be an orthonormal basis of the hyperplane in $T_p M$ orthogonal to $X$. Then the Ricci curvature in the direction $X$ is the average of the sectional curvatures of the $n-1$ sections spanned by $X$ and $Y_1$, $X$ and $Y_2$, $\cdots$, and $X$ and $Y_{n-1}$:
$$Ric_p(X) = \frac{1}{n-1} \sum_{i=1}^{n-1} \langle R(X, Y_i) X, Y_i \rangle.$$
The scalar curvature at $p$ is
$$R(p) = \frac{1}{n} \sum_{i=1}^n Ric_p(Y_i).$$
One can show that the above definitions do not depend on the choice of the orthonormal basis. On two dimensional manifolds, the Ricci curvature is the same as the sectional curvature, and they both depend only on the point $p$.
4.5 Calculus on Manifolds

In the previous section, we introduced the concept of an affine connection $D$ on a differentiable manifold $M$. It is a mapping from $\Gamma(M) \times \Gamma(M)$ to $\Gamma(M)$, where $\Gamma(M)$ is the set of all smooth vector fields on $M$. Here we will extend this definition to the space of differentiable tensor fields, and thus introduce higher order covariant derivatives. For convenience, we adopt the summation convention and write, for instance,
$$X^i \frac{\partial}{\partial x^i} := \sum_i X^i \frac{\partial}{\partial x^i}, \qquad \Gamma^j_{ik} \, dx^k := \sum_k \Gamma^j_{ik} \, dx^k,$$
and so on.
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator

Let $p$ and $q$ be two nonnegative integers. At a point $x$ on $M$, as usual, we denote the tangent space by $T_x M$. Define $T^q_p(T_x M)$ as the space of $(p, q)$-tensors on $T_x M$, that is, the space of $(p+q)$-linear forms
$$\eta : \underbrace{T_x M \times \cdots \times T_x M}_{p} \times \underbrace{T^*_x M \times \cdots \times T^*_x M}_{q} \rightarrow R^1.$$
In local coordinates, we can express
$$T = T^{j_1 \cdots j_q}_{i_1 \cdots i_p} \, dx^{i_1} \otimes \cdots \otimes dx^{i_p} \otimes \frac{\partial}{\partial x^{j_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{j_q}}.$$
Recall that, for the affine connection $D$ in local coordinates, if we set
$$\nabla_i = D_{\frac{\partial}{\partial x^i}},$$
then
$$\nabla_i \frac{\partial}{\partial x^j}(x) = \Gamma^k_{ij} \frac{\partial}{\partial x^k}\Big|_x,$$
where the $\Gamma^k_{ij}$ are the Christoffel symbols. For smooth vector fields $X, Y \in \Gamma(M)$ with
$$X = (X^1, \cdots, X^n) \ \text{ and } \ Y = (Y^1, \cdots, Y^n),$$
by the bilinear properties of the connection, we have
$$D_X Y = D_{X^i \frac{\partial}{\partial x^i}} Y = X^i \nabla_i Y = X^i \left( \frac{\partial Y^j}{\partial x^i} \frac{\partial}{\partial x^j} + Y^j \Gamma^k_{ij} \frac{\partial}{\partial x^k} \right) = X^i \left( \frac{\partial Y^j}{\partial x^i} + \Gamma^j_{ik} Y^k \right) \frac{\partial}{\partial x^j}. \eqno(4.6)$$
Now we naturally extend this affine connection $D$ to differentiable $(p, q)$-tensor fields $T$ on $M$. For a point $x$ on $M$ and a vector $X \in T_x M$, $D_X T$ is defined to be a $(p, q)$-tensor on $T_x M$ by
$$D_X T(x) = X^i (\nabla_i T)(x),$$
where
$$(\nabla_i T)^{j_1 \cdots j_q}_{i_1 \cdots i_p}(x) = \frac{\partial T^{j_1 \cdots j_q}_{i_1 \cdots i_p}}{\partial x^i}\Big|_x - \sum_{k=1}^p \Gamma^\alpha_{i i_k}(x) \, T^{j_1 \cdots j_q}_{i_1 \cdots i_{k-1} \alpha i_{k+1} \cdots i_p}(x) + \sum_{k=1}^q \Gamma^{j_k}_{i \alpha}(x) \, T^{j_1 \cdots j_{k-1} \alpha j_{k+1} \cdots j_q}_{i_1 \cdots i_p}(x). \eqno(4.7)$$
Notice that (4.6) is the particular case obtained when (4.7) is applied to the $(0, 1)$-tensor $Y$. If we apply it to a $(1, 0)$-tensor, say $dx^j$, then
$$\nabla_i (dx^j) = -\Gamma^j_{ik} \, dx^k. \eqno(4.8)$$
Given two differentiable tensor fields $T$ and $\tilde{T}$, we have
$$D_X(T \otimes \tilde{T}) = D_X T \otimes \tilde{T} + T \otimes D_X \tilde{T}.$$
For a differentiable $(p, q)$-tensor field $T$, we define $\nabla T$ to be the $(p+1, q)$-tensor field whose components are given by
$$(\nabla T)^{j_1 \cdots j_q}_{i_1 \cdots i_{p+1}} = (\nabla_{i_1} T)^{j_1 \cdots j_q}_{i_2 \cdots i_{p+1}}.$$
Similarly, one can define $\nabla^2 T$, $\nabla^3 T$, $\cdots$.
For example, a smooth function $f(x)$ on $M$ is a $(0, 0)$-tensor. Hence $\nabla f$ is a $(1, 0)$-tensor defined by
$$\nabla f = \nabla_i f \, dx^i = \frac{\partial f}{\partial x^i} \, dx^i.$$
And $\nabla^2 f$ is a $(2, 0)$-tensor:
$$\nabla^2 f = \nabla_j \left( \frac{\partial f}{\partial x^i} \, dx^i \right) \otimes dx^j = \left( \nabla_j \frac{\partial f}{\partial x^i} \, dx^i + \frac{\partial f}{\partial x^i} \nabla_j (dx^i) \right) \otimes dx^j = \left( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k} \right) dx^i \otimes dx^j.$$
Here we have used (4.7) and (4.8). In the Riemannian context, $\nabla^2 f$ is called the Hessian of $f$ and denoted by $Hess(f)$. Its $ij^{th}$ component is
$$(\nabla^2 f)_{ij} = \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k}.$$
The trace of the Hessian matrix $((\nabla^2 f)_{ij})$ is defined to be the Laplace-Beltrami operator $\triangle$ acting on $f$:
$$\triangle f = g^{ij} \left( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k} \right),$$
where $(g^{ij})$ is the inverse matrix of $(g_{ij})$. Through a straightforward calculation, one can verify that
$$\triangle = \frac{1}{\sqrt{|g|}} \sum_{i,j=1}^n \frac{\partial}{\partial x^i} \left( \sqrt{|g|} \, g^{ij} \frac{\partial}{\partial x^j} \right),$$
where $|g|$ is the determinant of $(g_{ij})$.
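The divergence form can be tested on $S^2$ (metric $\mathrm{diag}(1, \sin^2\theta)$, $\sqrt{|g|} = \sin\theta$) with the function $f = \cos\theta$, which is an eigenfunction satisfying $\triangle f = -2\cos\theta$; the discretization below is a rough numerical sketch of ours, not part of the text:

```python
import math

# Laplace-Beltrami on S^2 in divergence form:
#   Delta f = (1/sin t) [ d/dt( sin t * df/dt ) + d/dp( (1/sin t) * df/dp ) ]
def f(th, ps):
    return math.cos(th)

def laplace_beltrami(th, ps, h=1e-4):
    def flux_th(t):   # sqrt|g| g^{11} df/dtheta, with g^{11} = 1
        return math.sin(t) * (f(t+h, ps) - f(t-h, ps)) / (2*h)
    def flux_ps(p):   # sqrt|g| g^{22} df/dpsi, with g^{22} = 1/sin^2(theta)
        return (1.0/math.sin(th)) * (f(th, p+h) - f(th, p-h)) / (2*h)
    div = (flux_th(th+h) - flux_th(th-h)) / (2*h) \
        + (flux_ps(ps+h) - flux_ps(ps-h)) / (2*h)
    return div / math.sin(th)

th = 1.1
val = laplace_beltrami(th, 0.5)
assert abs(val - (-2.0*math.cos(th))) < 1e-4   # eigenvalue -2 on f = cos(theta)
```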
4.5.2 Integrals
On a smooth ndimensional Riemannian manifold (M, g), one can deﬁne a
natural positive Radon measure on M, and the theory of Lebesgue integral
applies.
4.5 Calculus on Manifolds 123
Let $(U_i, \phi_i)_i$ be a family of coordinate charts that covers $M$. We say that a family $(U_i, \phi_i, \eta_i)_i$ is a partition of unity associated to $(U_i, \phi_i)_i$ if
(i) $(\eta_i)_i$ is a smooth partition of unity associated to the covering $(U_i)_i$, and
(ii) for any $i$, $\operatorname{supp} \eta_i \subset U_i$.
One can show that, for each family of coordinate charts $(U_i, \phi_i)_i$ of $M$, there exists a corresponding family of partitions of unity $(U_i, \phi_i, \eta_i)_i$.
Let $f$ be a continuous function on $M$ with compact support. Given a family of coordinate charts $(U_i, \phi_i)_i$ of $M$, we define its integral on $M$ as the sum of integrals on open sets in $\mathbb{R}^n$:
\[
\int_M f\, dV_g = \sum_i \int_{\phi_i(U_i)} \bigl(\eta_i \sqrt{|g|}\, f\bigr)\circ \phi_i^{-1}\, dx,
\]
where $(U_i, \phi_i, \eta_i)_i$ is a family of partition of unity associated to $(U_i, \phi_i)_i$, $|g|$ is the determinant of $(g_{ij})$ in the charts $(U_i, \phi_i)_i$, and $dx$ stands for the volume element of $\mathbb{R}^n$. One can prove that such a definition does not depend on the choice of the charts $(U_i, \phi_i)_i$ or the partition of unity $(U_i, \phi_i, \eta_i)_i$.
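A concrete instance of this definition (a toy example of my own, not from the text): the total volume of the round $S^2$ computed in a single spherical chart $(\theta, \phi) \in (0, \pi) \times (0, 2\pi)$, where $g = \mathrm{diag}(1, \sin^2\theta)$ and $\sqrt{|g|} = \sin\theta$; the chart omits only a measure-zero set, so the integral is unaffected.

```python
import math

# Integral of f = 1 over S^2 in one chart: dV_g = sin(theta) dtheta dphi.
def sphere_area(n=2000):
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth                      # midpoint rule in theta
        total += math.sin(th) * dth * (2 * math.pi)  # phi-integral is exact
    return total

print(sphere_area())   # ≈ 4*pi ≈ 12.566
```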
For smooth functions $u$ and $v$ on a compact manifold $M$, one can verify the following integration by parts formula:
\[
-\int_M \Delta_g u\; v\, dV_g = \int_M \langle \nabla u, \nabla v\rangle\, dV_g, \tag{4.9}
\]
where $\Delta_g$ is the Laplace-Beltrami operator associated to the metric $g$, and
\[
\langle \nabla u, \nabla v\rangle = g^{ij}\, \nabla_i u\, \nabla_j v = g^{ij}\, \frac{\partial u}{\partial x^i}\frac{\partial v}{\partial x^j}
\]
is the scalar product associated with $g$ for 1-forms.
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature

A Riemannian metric $g$ on $M$ is said to be pointwise conformal to another metric $g_o$ if there exists a positive function $\rho$ such that
\[
g(x) = \rho(x)\, g_o(x), \quad \forall x \in M.
\]
Suppose $M$ is a two dimensional Riemannian manifold with a metric $g_o$ and the corresponding Gaussian curvature $K_o(x)$. Let $g(x) = e^{2u(x)} g_o(x)$ for some smooth function $u(x)$ on $M$, and let $K(x)$ be the Gaussian curvature associated to the metric $g$. Then by a straightforward calculation, one can verify that
\[
-\Delta_o u + K_o(x) = K(x)\, e^{2u(x)}, \quad \forall x \in M. \tag{4.10}
\]
Here $\Delta_o$ is the Laplace-Beltrami operator associated to $g_o$.
Similarly, for the conformal relation between scalar curvatures on $n$-dimensional manifolds ($n \geq 3$), say on the standard sphere $S^n$, we have
\[
-\Delta_o u + \frac{n(n-2)}{4}\, u = \frac{n-2}{4(n-1)}\, R(x)\, u^{\frac{n+2}{n-2}}, \quad \forall x \in S^n, \tag{4.11}
\]
where $g_o$ is the standard metric on $S^n$ with the corresponding Laplace-Beltrami operator $\Delta_o$, and $R(x)$ is the scalar curvature associated to the pointwise conformal metric $g = u^{\frac{4}{n-2}} g_o$.
In Chapters 5 and 6, we will study the existence and qualitative properties of the solutions $u$ of the above two equations, respectively.
4.6 Sobolev Embeddings

Let $(M, g)$ be a smooth Riemannian manifold, let $u$ be a smooth function on $M$, and let $k$ be a nonnegative integer. We denote by $\nabla^k u$ the $k$-th covariant derivative of $u$ as defined in the previous section; its norm $|\nabla^k u|$ is defined in a local coordinates chart by
\[
|\nabla^k u|^2 = g^{i_1 j_1} \cdots g^{i_k j_k}\, (\nabla^k u)_{i_1\cdots i_k} (\nabla^k u)_{j_1\cdots j_k}.
\]
In particular, we have $|\nabla^0 u| = |u|$ and
\[
|\nabla^1 u|^2 = |\nabla u|^2 = g^{ij} (\nabla u)_i (\nabla u)_j = g^{ij}\, \nabla_i u\, \nabla_j u = g^{ij}\, \frac{\partial u}{\partial x^i}\frac{\partial u}{\partial x^j}.
\]
For an integer $k$ and a real number $p \geq 1$, let $\mathcal{C}^p_k(M)$ be the space of $C^\infty$ functions on $M$ such that
\[
\int_M |\nabla^j u|^p\, dV_g < \infty, \quad \forall\, j = 0, 1, \ldots, k.
\]
Definition 4.6.1 The Sobolev space $H^{k,p}(M)$ is the completion of $\mathcal{C}^p_k(M)$ with respect to the norm
\[
\|u\|_{H^{k,p}(M)} = \sum_{j=0}^{k} \left( \int_M |\nabla^j u|^p\, dV_g \right)^{1/p}.
\]
Many results concerning the Sobolev spaces on $\mathbb{R}^n$ introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. For application purposes, we list some of them in the following. Interested readers may find the proofs in [He] or [Au].
Theorem 4.6.1 For any integer $k$, $H^{k,2}(M)$ is a Hilbert space when equipped with the equivalent norm
\[
\|u\| = \left( \sum_{i=1}^{k} \int_M |\nabla^i u|^2\, dV_g \right)^{1/2}.
\]
The corresponding scalar product is defined by
\[
\langle u, v\rangle = \sum_{i=1}^{k} \int_M \langle \nabla^i u, \nabla^i v\rangle\, dV_g,
\]
where $\langle\,,\,\rangle$ is the scalar product on covariant tensor fields associated to the metric $g$.
Theorem 4.6.2 If $M$ is compact, then $H^{k,p}(M)$ does not depend on the metric.
Theorem 4.6.3 (Sobolev Embeddings I)
Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold. Assume $1 \leq q < n$ and $p = \frac{nq}{n-q}$. Then
\[
H^{1,q}(M) \subset L^p(M).
\]
More generally, if $m, k$ are two integers with $0 \leq m < k$, and if $p = \frac{nq}{n - q(k-m)}$, then
\[
H^{k,q}(M) \subset H^{m,p}(M).
\]
Theorem 4.6.4 (Sobolev Embeddings II)
Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold. Assume that $q \geq 1$ and $m, k$ are two integers with $0 \leq m < k$. If $q > \frac{n}{k-m}$, then
\[
H^{k,q}(M) \subset C^m(M).
\]
Theorem 4.6.5 (Compact Embeddings)
Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold.
(i) Assume that $q \geq 1$ and $m, k$ are two integers with $0 \leq m < k$. Then for any real number $p$ with $1 \leq p < \frac{nq}{n - q(k-m)}$, the embedding
\[
H^{k,q}(M) \subset H^{m,p}(M)
\]
is compact. In particular, the embedding of $H^{1,q}(M)$ into $L^p(M)$ is compact if $1 \leq p < \frac{nq}{n-q}$.
(ii) For $q > n$, the embedding of $H^{k,q}(M)$ into $C^\lambda(M)$ is compact if $(k - \lambda) > \frac{n}{q}$. In particular, the embedding of $H^{1,q}(M)$ into $C^0(M)$ is compact.
Theorem 4.6.6 (Poincaré Inequality)
Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold and let $p \in [1, n)$ be real. Then there exists a positive constant $C$ such that for any $u \in H^{1,p}(M)$,
\[
\left( \int_M |u - \bar u|^p\, dV_g \right)^{1/p} \leq C \left( \int_M |\nabla u|^p\, dV_g \right)^{1/p},
\]
where
\[
\bar u = \frac{1}{\mathrm{Vol}(M,g)} \int_M u\, dV_g
\]
is the average of $u$ on $M$.
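The Poincaré inequality can be seen concretely on the simplest compact manifold, the circle of circumference $2\pi$; for $p = 2$ this is Wirtinger's inequality, $\int |u - \bar u|^2 \leq \int |u'|^2$. The sketch below (my own illustration; the sample function is an arbitrary choice) evaluates both sides by midpoint-rule sums.

```python
import math

# Wirtinger / Poincaré check on the circle: compute
#   lhs = int |u - ubar|^2  and  rhs = int |u'|^2  over [0, 2*pi).
def l2_terms(u, du, n=20000):
    h = 2 * math.pi / n
    xs = [(i + 0.5) * h for i in range(n)]
    ubar = sum(u(x) for x in xs) * h / (2 * math.pi)
    lhs = sum((u(x) - ubar) ** 2 for x in xs) * h
    rhs = sum(du(x) ** 2 for x in xs) * h
    return lhs, rhs

u  = lambda x: math.sin(x) + 0.3 * math.sin(2 * x)   # sample u
du = lambda x: math.cos(x) + 0.6 * math.cos(2 * x)   # its derivative
lhs, rhs = l2_terms(u, du)
print(lhs, rhs)   # lhs ≤ rhs, with a gap coming from the k = 2 mode
```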
5 Prescribing Gaussian Curvature on Compact 2-Manifolds

5.1 Prescribing Gaussian Curvature on $S^2$
5.1.1 Obstructions
5.1.2 Variational Approaches and Key Inequalities
5.1.3 Existence of Weak Solutions
5.2 Gaussian Curvature on Negatively Curved Manifolds
5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
5.2.2 Chen and Li's Results
In this chapter and the next, we will present the readers with real research examples: semilinear equations arising from prescribing Gaussian and scalar curvatures. We will illustrate, among other methods, how the calculus of variations can be applied to seek weak solutions of these equations.
To show the existence of weak solutions for a certain equation, we consider the corresponding functional $J(\cdot)$ in an appropriate Hilbert space. According to the compactness of the functional, one usually distinguishes three cases:
a) the subcritical case, where the level sets of $J$ are compact; say, $J$ satisfies the (PS) condition (see also Chapter 2): every sequence $\{u_k\}$ that satisfies $J(u_k) \to c$ and $J'(u_k) \to 0$ possesses a convergent subsequence;
b) the critical case, which is the limiting situation and can be approximated by subcritical cases; and
c) the supercritical case: the remaining cases.
To see roughly when a functional may lose compactness, let us consider $J(x) = (x^2 - 1)e^x$ defined on the one dimensional space $\mathbb{R}^1$. One can easily see that there exists a sequence $\{x_k\}$, for instance $x_k = -k$, such that $J(x_k) \to 0$ and $J'(x_k) \to 0$ as $k \to \infty$; however, $\{x_k\}$ possesses no convergent subsequence, because the sequence is unbounded. The functional thus has no compactness at the level $J = 0$. We know that in a finite dimensional space a bounded sequence has a convergent subsequence, while in an infinite dimensional space even a bounded sequence may fail to have one. For instance, $\{\sin kx\}$ is a bounded sequence in $L^2([0, 1])$ but does not have a convergent subsequence.
In situations where lack of compactness occurs, to find a critical point of the functional one tries to recover compactness through various means. Take again the one dimensional example $J(x) = (x^2 - 1)e^x$, which has no compactness at the level $J = 0$. However, if we restrict it to levels less than 0, we can recover compactness: if we pick a sequence $\{x_k\}$ satisfying $J(x_k) < 0$ and $J'(x_k) \to 0$, then one can verify that it possesses a subsequence which converges to a critical point $x_0$ of $J$. Actually, this $x_0$ is a minimum of $J$.
To further illustrate the concept of compactness, we consider some examples in infinite dimensional spaces. Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$, and let $H^1_0(\Omega)$ be the Hilbert space introduced in previous chapters. Consider the Dirichlet boundary value problem
\[
\begin{cases}
-\Delta u = u^p(x), & x \in \Omega,\\
u(x) = 0, & x \in \partial\Omega.
\end{cases} \tag{5.1}
\]
To seek weak solutions, we study the corresponding functional
\[
J_p(u) = \int_\Omega |u|^{p+1}\, dx
\]
on the constraint set
\[
H = \Bigl\{ u \in H^1_0(\Omega) \;\Bigm|\; \int_\Omega |\nabla u|^2\, dx = 1 \Bigr\}.
\]
When $p < \frac{n+2}{n-2}$, we are in the subcritical case. As we have seen in Chapter 2, the functional satisfies the (PS) condition. If we define
\[
m_p = \inf_{u \in H} J_p(u),
\]
then a minimizing sequence $\{u_k\}$ with $J_p(u_k) \to m_p$ possesses a convergent subsequence in $H^1_0(\Omega)$, and the limiting function $u_0$ is the minimum of $J_p$ in $H$; hence it is the desired weak solution.
When $p = \tau := \frac{n+2}{n-2}$, we are in the critical case. One can find a minimizing sequence that does not converge. For example, when $\Omega = B_1(0)$ is a ball, consider
\[
u_k(x) = \frac{[n(n-2)k]^{\frac{n-2}{4}}}{(1 + k|x|^2)^{\frac{n-2}{2}}}\, \eta(x),
\]
where $\eta \in C^\infty_0(\Omega)$ is a cut-off function satisfying
\[
\eta(x) = \begin{cases} 1, & x \in B_{1/2}(0),\\ \text{between 0 and 1}, & \text{elsewhere}. \end{cases}
\]
Let
\[
v_k(x) = \frac{u_k(x)}{\left( \int_\Omega |\nabla u_k|^2\, dx \right)^{1/2}}.
\]
Then one can verify that $\{v_k\}$ is a minimizing sequence of $J_\tau$ in $H$ with $J'(v_k) \to 0$. However, it does not have a convergent subsequence. Actually, as $k \to \infty$, the energy of $\{v_k\}$ concentrates at the single point 0. We say that the sequence "blows up" at the point 0.
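The concentration can be observed numerically (my own illustration, with $n = 3$, ignoring the cut-off, which only affects the region away from the origin): the fraction of the $L^6$ mass of $u_k(x) = [3k]^{1/4}/(1 + k|x|^2)^{1/2}$ that lies inside a fixed small ball $|x| < \varepsilon$ tends to 1 as $k \to \infty$.

```python
# L^6 mass of the n = 3 bubble inside radius R, computed as a radial
# Riemann sum; the common factor 4*pi from the angular integral cancels
# in the ratio below.
def l6_mass(k, R, n=100000):
    h = R / n
    s = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        u6 = (3 * k) ** 1.5 / (1 + k * r * r) ** 3   # u_k(r)^6
        s += u6 * r * r * h
    return s

for k in (1.0, 100.0, 10000.0):
    print(k, l6_mass(k, 0.1) / l6_mass(k, 1.0))   # fraction inside |x| < 0.1
```

The printed ratios increase toward 1, which is exactly the "blow up at one point" described above.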
In essence, the compactness of the functional $J_\tau$ here relies on the Sobolev embedding
\[
H^1(\Omega) \hookrightarrow L^{p+1}(\Omega).
\]
As we learned in Chapter 1, this embedding is compact when $p < \frac{n+2}{n-2}$ and is not when $p = \frac{n+2}{n-2}$.
In the critical case, compactness may be recovered by considering other level sets or by changing the topology of the domain. For example, if $\Omega$ is an annulus, then the above $J_\tau$ has a minimizer in $H$. In this chapter and the next, the readers will see examples from prescribing Gaussian and scalar curvature, where the corresponding equations are critical and where various means have been exploited to recover compactness.
5.1 Prescribing Gaussian Curvature on $S^2$

Given a function $K(x)$ on the two dimensional unit sphere $S := S^2$ with standard metric $g_o$, can it be realized as the Gaussian curvature of some pointwise conformal metric $g$? This has been an interesting problem in differential geometry and is known as the Nirenberg problem. If we let $g = e^{2u} g_o$, then it is equivalent to solving the following semilinear elliptic equation on $S^2$:
\[
-\Delta u + 1 = K(x)\, e^{2u}, \quad x \in S^2, \tag{5.2}
\]
where $\Delta$ is the Laplace-Beltrami operator associated with the standard metric $g_o$.
If we replace $2u$ by $u$ and let $R(x) = 2K(x)$, then equation (5.2) becomes
\[
-\Delta u + 2 = R(x)\, e^{u(x)}. \tag{5.3}
\]
Here 2 and $R(x)$ are actually the scalar curvatures of $S^2$ with the standard metric $g_o$ and with the pointwise conformal metric $e^u g_o$, respectively.
5.1.1 Obstructions

For equation (5.3) to have a solution, the function $R(x)$ must satisfy the obvious Gauss-Bonnet sign condition
\[
\int_S R(x)\, e^u\, dA = 8\pi. \tag{5.4}
\]
This can be seen by integrating both sides of equation (5.3) and using the fact that
\[
-\int_S \Delta u\, dA = \int_S \nabla u \cdot \nabla 1\, dA = 0.
\]
Condition (5.4) implies that $R(x)$ must be positive somewhere. Besides this obvious necessary condition, there are other necessary conditions, found by Kazdan and Warner:
\[
\int_S \nabla \phi_i \cdot \nabla R(x)\, e^u\, dA = 0, \quad i = 1, 2, 3. \tag{5.5}
\]
Here the $\phi_i$ are the first spherical harmonic functions, i.e.,
\[
-\Delta \phi_i = 2\phi_i(x), \quad i = 1, 2, 3.
\]
Actually, $\phi_i(x) = x_i$ in the coordinates $x = (x_1, x_2, x_3)$ of $\mathbb{R}^3$.
Later, Bourguignon and Ezin generalized this condition to
\[
\int_S X(R)\, e^u\, dA = 0, \tag{5.6}
\]
where $X$ is any conformal Killing vector field on the standard $S^2$.
Condition (5.5) gives rise to many examples of $R(x)$ for which equation (5.3) has no solution. In particular, for a monotone rotationally symmetric function $R$, the equation admits no solution. For which kinds of functions $R(x)$, then, can one solve (5.3)? This has been an interesting problem in geometry.
5.1.2 Variational Approach and Key Inequalities

To obtain the existence of a solution for equation (5.3), one usually uses a variational approach: first prove the existence of a weak solution, then show by a regularity argument that the weak solution is smooth and hence is a classical solution.
To obtain a weak solution, we let
\[
H^1(S) := H^{1,2}(S)
\]
be the Hilbert space with norm
\[
\|u\|_1 = \left( \int_S \bigl(|\nabla u|^2 + u^2\bigr)\, dA \right)^{1/2}.
\]
Let
\[
H(S) = \Bigl\{ u \in H^1(S) \;\Bigm|\; \int_S u\, dA = 0, \ \int_S R(x)\, e^u\, dA > 0 \Bigr\}.
\]
Then by the Poincaré inequality, we can use
\[
\|u\| = \left( \int_S |\nabla u|^2\, dA \right)^{1/2}
\]
as an equivalent norm in $H(S)$.
Consider the functional
\[
J(u) = \frac{1}{2}\int_S |\nabla u|^2\, dA - 8\pi \ln \int_S R(x)\, e^u\, dA
\]
in $H(S)$.
To estimate the value of the functional $J$, we introduce some useful inequalities.

Lemma 5.1.1 (Moser-Trudinger Inequality)
Let $S$ be a compact two dimensional Riemannian manifold. Then there exists a constant $C$ such that the inequality
\[
\int_S e^{4\pi u^2}\, dA \leq C \tag{5.7}
\]
holds for all $u \in H^1(S)$ satisfying
\[
\int_S |\nabla u|^2\, dA \leq 1 \quad \text{and} \quad \int_S u\, dA = 0. \tag{5.8}
\]
We skip the proof of this inequality; interested readers may see the article of Moser [Mo1] or the article of Chen [C1].
From this inequality, one can derive immediately:

Lemma 5.1.2 There exists a constant $C$ such that for all $u \in H^1(S)$,
\[
\int_S e^u\, dA \leq C \exp\left\{ \frac{1}{16\pi} \int_S |\nabla u|^2\, dA + \frac{1}{4\pi} \int_S u\, dA \right\}. \tag{5.9}
\]
Proof. Let
\[
\bar u = \frac{1}{4\pi} \int_S u(x)\, dA
\]
be the average of $u$ on $S$. Still denote
\[
\|u\| = \left( \int_S |\nabla u|^2\, dA \right)^{1/2}.
\]
i) For any $v \in H^1(S)$ with $\bar v = 0$, let $w = \frac{v}{\|v\|}$. Then $\|w\| = 1$ and $\bar w = 0$. Applying Lemma 5.1.1 to such a $w$, we have
\[
\int_S e^{\frac{4\pi v^2}{\|v\|^2}}\, dA \leq C. \tag{5.10}
\]
From the obvious inequality
\[
\left( \frac{\|v\|}{4\sqrt{\pi}} - \frac{2\sqrt{\pi}\, v}{\|v\|} \right)^2 \geq 0,
\]
we deduce immediately
\[
v \leq \frac{\|v\|^2}{16\pi} + \frac{4\pi v^2}{\|v\|^2}.
\]
Together with (5.10), we have
\[
\int_S e^v\, dA \leq C \exp\left\{ \frac{1}{16\pi} \|v\|^2 \right\}, \quad \forall v \in H^1(S) \text{ with } \bar v = 0. \tag{5.11}
\]
ii) For any $u \in H^1(S)$, let $v = u - \bar u$. Then $\bar v = 0$, and we can apply (5.11) to $v$ to obtain
\[
\int_S e^{u - \bar u}\, dA \leq C \exp\left\{ \frac{1}{16\pi} \|u\|^2 \right\}.
\]
Consequently,
\[
\int_S e^u\, dA \leq C \exp\left\{ \frac{1}{16\pi} \|u\|^2 + \bar u \right\}.
\]
This completes the proof of the Lemma. □
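The pointwise inequality at the heart of this proof, $v \leq \frac{a^2}{16\pi} + \frac{4\pi v^2}{a^2}$ for every real $v$ and $a > 0$ (with $a = \|v\|$), is just the expanded square above. A quick numerical sweep (my own sanity check, with arbitrary grid values) confirms it:

```python
import math

# Check v <= a^2/(16*pi) + 4*pi*v^2/a^2 on a grid of (v, a) pairs;
# the difference v - rhs should never be positive, since it equals
# -(a/(4*sqrt(pi)) - 2*sqrt(pi)*v/a)^2.
def rhs(v, a):
    return a ** 2 / (16 * math.pi) + 4 * math.pi * v ** 2 / a ** 2

worst = max(v - rhs(v, a)
            for v in [x * 0.37 for x in range(-100, 101)]
            for a in [0.1 + 0.29 * y for y in range(1, 50)])
print(worst)   # negative: the inequality holds with room to spare on the grid
```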
From Lemma 5.1.2, we can conclude that the functional $J(\cdot)$ is bounded from below in $H(S)$. In fact, by the Lemma,
\[
\int_S R(x)\, e^u\, dA \leq \max R \int_S e^u\, dA \leq \max R(x)\; C \exp\left\{ \frac{1}{16\pi} \|u\|^2 \right\}.
\]
It follows that
\[
J(u) \geq \frac{1}{2}\|u\|^2 - 8\pi \left[ \ln (C \max R) + \frac{1}{16\pi} \|u\|^2 \right] \geq -8\pi \ln (C \max R). \tag{5.12}
\]
Therefore, the functional $J$ is bounded from below in $H(S)$. Now we can take a minimizing sequence $\{u_k\} \subset H(S)$:
\[
J(u_k) \to \inf_{u \in H(S)} J(u).
\]
However, as we mentioned before, a minimizing sequence may not be bounded; that is, it may 'leak' to infinity. To prevent this from happening, we need some coerciveness of the functional. Although this is not true in general, we do have coerciveness for $J$ on some special subsets of $H(S)$, in particular on the set of even (antipodally symmetric) functions. More precisely, we have the following "Distribution of Mass Principle" from Chen and Li [CL7] [CL8].
Lemma 5.1.3 Let $\Omega_1$ and $\Omega_2$ be two subsets of $S$ such that
\[
\mathrm{dist}(\Omega_1, \Omega_2) \geq \delta_o > 0.
\]
Let $0 < \alpha_o \leq \frac{1}{2}$. Then for any $\epsilon > 0$, there exists a constant $C = C(\alpha_o, \delta_o, \epsilon)$ such that the inequality
\[
\int_S e^u\, dA \leq C \exp\left\{ \left( \frac{1}{32\pi} + \epsilon \right) \|u\|^2 + \bar u \right\} \tag{5.13}
\]
holds for all $u \in H^1(S)$ satisfying
\[
\frac{\int_{\Omega_1} e^u\, dA}{\int_S e^u\, dA} \geq \alpha_o \quad \text{and} \quad \frac{\int_{\Omega_2} e^u\, dA}{\int_S e^u\, dA} \geq \alpha_o. \tag{5.14}
\]
Intuitively, if we think of $e^{u(x)}$ as the density of a solid sphere $S$ at the point $x$, then $\int_{\Omega_i} e^u\, dA$ is the mass of the region $\Omega_i$, and $\int_S e^u\, dA$ the total mass of the sphere. The Lemma says that if the mass of the sphere is distributed over two separated parts of the sphere (more precisely, if the total mass does not concentrate at one point), then we have a stronger inequality than (5.9). In particular, this stronger inequality (5.13) holds for even functions.
J(u) ≥
1
2
u
2
−8π
¸
ln(C max R) +
1
32π
+
u
2
≥ −8π ln(C max R) +
1
4
−8π
u
2
.
Choosing small enough, we arrive at
J(u) ≥
1
8
u
2
−C
1
, for all even functions u in H(S). (5.15)
This stronger inequality guarantees the boundedness of a minimizing sequence, and hence it converges to a weak limit $u_o$ in $H^1(S)$. As we will show in the next section, the functional $J$ is weakly lower semi-continuous, and therefore the $u_o$ so obtained is the minimum of $J(u)$, i.e., a weak solution of equation (5.2). Then by a standard regularity argument, as we will see in a later section, we will be able to conclude that $u_o$ is a classical solution.
In many articles, the authors use a "center of mass" analysis; i.e., they consider the center of mass of the sphere with density $e^{u(x)}$ at the point $x$:
\[
P(u) = \frac{\int_S x\, e^u\, dA}{\int_S e^u\, dA},
\]
and use it to study the behavior of a minimizing sequence (see [CD], [CD1], [CY], and [CY1]). Both analyses are equivalent on the sphere $S^2$. However, since the "center of mass" $P(u)$ lies in the ambient space $\mathbb{R}^3$, this kind of analysis cannot be applied on a general two dimensional compact Riemannian surface, while the "Distribution of Mass" analysis can be applied to any compact surface, even to surfaces with conical singularities. Interested readers may see the authors' papers [CL7] [CL8] for more general versions of inequality (5.13).
Proof of Lemma 5.1.3.
Let $g_1, g_2$ be two smooth functions such that
\[
1 \geq g_i \geq 0, \quad g_i(x) \equiv 1 \ \text{for } x \in \Omega_i, \ i = 1, 2,
\]
and $\operatorname{supp} g_1 \cap \operatorname{supp} g_2 = \emptyset$. It suffices to show that for $u \in H^1(S)$ with $\int_S u\, dA = 0$, (5.14) implies
\[
\int_S e^u\, dA \leq C \exp\left\{ \left( \frac{1}{32\pi} + \epsilon \right) \|u\|^2 \right\}. \tag{5.16}
\]
Case i) If $\|g_1 u\| \leq \|g_2 u\|$, then by (5.9),
\[
\begin{aligned}
\int_S e^u\, dA &\leq \frac{1}{\alpha_o} \int_{\Omega_1} e^u\, dA \leq \frac{1}{\alpha_o} \int_S e^{g_1 u}\, dA\\
&\leq \frac{C}{\alpha_o} \exp\left\{ \frac{1}{16\pi} \|g_1 u\|^2 + \overline{g_1 u} \right\}\\
&\leq \frac{C}{\alpha_o} \exp\left\{ \frac{1}{32\pi} \|g_1 u + g_2 u\|^2 + \overline{g_1 u} \right\}\\
&\leq \frac{C}{\alpha_o} \exp\left\{ \frac{1}{32\pi} (1 + \epsilon_1) \|u\|^2 + c_1(\epsilon_1) \|u\|_{L^2}^2 \right\} \tag{5.17}
\end{aligned}
\]
for some small $\epsilon_1 > 0$. Here we have used the fact that
\[
\begin{aligned}
\|g_1 u + g_2 u\|^2 &= \int_S |\nabla(g_1 u + g_2 u)|^2\, dA\\
&= \int_S |(g_1 + g_2)\nabla u + (\nabla g_1 + \nabla g_2) u|^2\, dA\\
&\leq \|u\|^2 + C_1 \|u\| \|u\|_{L^2} + C_2 \|u\|_{L^2}^2.
\end{aligned}
\]
In order to get rid of the term $\|u\|_{L^2}^2$ on the right hand side of the above inequality, we employ the condition $\int_S u\, dA = 0$.
Given any $\eta > 0$, choose $a$ such that $\mathrm{meas}\{x \in S \mid u(x) \geq a\} = \eta$. Applying (5.17) to the function $(u - a)^+ \equiv \max\{0, u - a\}$, we have
\[
\begin{aligned}
\int_S e^u\, dA &\leq e^a \int_S e^{u-a}\, dA \leq e^a \int_S e^{(u-a)^+}\, dA\\
&\leq C \exp\left\{ \frac{1}{32\pi}(1 + \epsilon_1)\|u\|^2 + c_1(\epsilon_1) \|(u-a)^+\|_{L^2}^2 + a \right\}, \tag{5.18}
\end{aligned}
\]
where $C = C(\delta, \alpha_o)$.
By the Hölder and Sobolev inequalities (see Theorem 4.6.3),
\[
\|(u-a)^+\|_{L^2}^2 \leq \eta^{1/2} \|(u-a)^+\|_{L^4}^2 \leq c\eta^{1/2} \left( \|u\|^2 + \|(u-a)^+\|_{L^2}^2 \right). \tag{5.19}
\]
Choose $\eta$ so small that $c\eta^{1/2} \leq \frac{1}{2}$; then the above inequality implies
\[
\|(u-a)^+\|_{L^2}^2 \leq 2c\eta^{1/2} \|u\|^2. \tag{5.20}
\]
Now by the Poincaré inequality (see Theorem 4.6.6),
\[
a\,\eta \leq \int_{u \geq a} u\, dA \leq \int_S |u|\, dA \leq c\|u\|;
\]
hence for any $\delta > 0$,
\[
a \leq \delta \|u\|^2 + \frac{c^2}{4\delta\eta^2}. \tag{5.21}
\]
Now (5.16) follows from (5.18), (5.20), and (5.21).
Case ii) If $\|g_2 u\| \leq \|g_1 u\|$, then by a similar argument as in case i), we obtain (5.17) and then (5.16). This completes the proof. □
5.1.3 Existence of Weak Solutions

Theorem 5.1.1 (Moser) Suppose the function $R$ is even, that is,
\[
R(x) = R(-x), \quad \forall x \in S^2,
\]
where $x$ and $-x$ are two antipodal points. Then a necessary and sufficient condition for equation (5.3) to have a solution is that $R(x)$ be positive somewhere.
To prove the existence of a weak solution, as in the previous section, we consider the functional
\[
J(u) = \frac{1}{2}\int_S |\nabla u|^2\, dA - 8\pi \ln \int_S R(x)\, e^u\, dA
\]
in $H_e(S) = \{ u \in H(S) \mid u \text{ is even almost everywhere} \}$. Let $\{u_k\}$ be a minimizing sequence of $J$ in $H_e(S)$. Then by (5.15), $\{u_k\}$ is bounded in $H^1(S)$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some element $u_o$ in $H^1(S)$. Then by the compact Sobolev embedding of $H^1$ into $L^2$,
\[
u_k \to u_o \ \text{in } L^2(S). \tag{5.22}
\]
Consequently,
\[
u_o(-x) = u_o(x) \ \text{almost everywhere}, \qquad \int_S u_o(x)\, dA = 0,
\]
that is, $u_o \in H_e(S)$.
Furthermore, as we showed in the previous chapter, the Dirichlet integral is weakly lower semi-continuous, i.e.,
\[
\lim_{k\to\infty} \int_S |\nabla u_k|^2\, dA \geq \int_S |\nabla u_o|^2\, dA.
\]
To show that the functional $J$ is weakly lower semi-continuous, i.e., that
\[
J(u_o) \leq \lim J(u_k) = \inf_{u \in H_e(S)} J(u),
\]
we need the following lemma.
Lemma 5.1.4 If $\{u_k\}$ converges weakly to $u$ in $H^1(S)$, then there exists a subsequence (still denoted by $\{u_k\}$) such that
\[
\int_S e^{u_k(x)}\, dA \to \int_S e^{u(x)}\, dA, \quad \text{as } k \to \infty.
\]
We postpone the proof of this lemma for a moment. Now from this lemma, we obviously have
\[
\int_S R(x)\, e^{u_k}\, dA \to \int_S R(x)\, e^{u_o}\, dA,
\]
and it follows that $J(\cdot)$ is weakly lower semi-continuous. On the other hand, we have
\[
J(u_o) \geq \inf_{u \in H_e(S)} J(u).
\]
Therefore, $u_o$ is a minimum of $J$ in $H_e(S)$.
Consequently, for any $v \in H_e(S)$, we have
\[
0 = \langle J'(u_o), v\rangle = \int_S \langle \nabla u_o, \nabla v\rangle\, dA - \frac{8\pi}{\int_S R(x)\, e^{u_o}\, dA} \int_S R(x)\, e^{u_o} v\, dA.
\]
For any $w \in H(S)$, let
\[
v(x) = \frac{1}{2}\bigl( w(x) + w(-x) \bigr) := \frac{1}{2}\bigl( w(x) + w^-(x) \bigr).
\]
Then obviously $v \in H_e(S)$, and it follows that
\[
0 = \langle J'(u_o), v\rangle = \frac{1}{2}\bigl( \langle J'(u_o), w\rangle + \langle J'(u_o), w^-\rangle \bigr) = \langle J'(u_o), w\rangle.
\]
Here we have used the fact that $u_o(-x) = u_o(x)$. This implies that $u_o$ is a critical point of $J$ in $H^1(S)$, and hence a constant multiple of $u_o$ is a weak solution of equation (5.3). Now what is left is to prove Lemma 5.1.4.
The Proof of Lemma 5.1.4.
From Lemma 5.1.2, for any real number $p > 0$, we have
\[
\int_S e^{pu}\, dA \leq C \exp\left\{ \frac{p^2}{16\pi} \int_S |\nabla u|^2\, dA + \frac{p}{4\pi} \int_S u\, dA \right\}. \tag{5.23}
\]
And by the Hölder inequality, for any $0 < p < 2$,
\[
\int_S |\nabla(e^u)|^p\, dA = \int_S e^{pu} |\nabla u|^p\, dA \leq \left( \int_S e^{\frac{2p}{2-p}u}\, dA \right)^{\frac{2-p}{2}} \left( \int_S |\nabla u|^2\, dA \right)^{\frac{p}{2}}. \tag{5.24}
\]
Inequalities (5.23) and (5.24) imply that $\{e^{u_k}\}$ is a bounded sequence in $H^{1,p}$ for any $1 < p < 2$. By the compact Sobolev embedding of $H^{1,p}$ into $L^1$, there exists a subsequence of $\{e^{u_k}\}$ which converges strongly to $e^{u_o}$ in $L^1(S)$. This completes the proof of the Lemma.
Chen and Ding [CD] generalized Moser's result to a broader class of symmetric functions, as we now describe.
Let $O(3)$ be the group of orthogonal transformations of $\mathbb{R}^3$, and let $G$ be a closed finite subgroup of $O(3)$. Let
\[
F_G = \{ x \in S^2 \mid gx = x, \ \forall g \in G \}
\]
be the set of fixed points under the action of $G$.
The action of $G$ on $S^2$ induces an action of $G$ on the Hilbert space $H^1(S)$:
\[
g[u](x) = u(gx),
\]
under which the fixed point subspace of $H^1(S)$ is given by
\[
X = \{ u \in H^1(S) \mid u(gx) = u(x), \ \forall g \in G \}.
\]
Assume that $R(x)$ is $G$-symmetric, or $G$-invariant, i.e.,
\[
R(gx) = R(x), \quad \forall g \in G. \tag{5.25}
\]
Let $X^* = X \cap H(S)$. Recall that
\[
H(S) = \Bigl\{ u \in H^1(S) \;\Bigm|\; \int_S u\, dA = 0, \ \int_S R(x)\, e^u\, dA > 0 \Bigr\}.
\]
Under the assumption (5.25), the functional
\[
J(u) = \frac{1}{2}\int_S |\nabla u|^2\, dA - 8\pi \ln \int_S R(x)\, e^u\, dA
\]
is $G$-invariant in $X^*$, i.e.,
\[
J(g[u]) = J(u), \quad \forall g \in G, \ \forall u \in X^*.
\]
We seek a minimum of $J$ in $X^*$. The following lemma guarantees that such a minimum is the desired critical point.
Lemma 5.1.5 Assume that $u_o$ is a critical point of $J$ in $X^*$; then it is also a critical point of $J$ in $H(S)$.

Proof. First, for any $g \in G$ and any $u, v \in H(S)$, we have
\[
\langle J'(u), g[v]\rangle = \langle J'(g^{-1}[u]), v\rangle, \tag{5.26}
\]
where $g^{-1}$ is the inverse of $g$. This can be easily seen from
\[
\langle J'(u), v\rangle = \int_S \nabla u \cdot \nabla v\, dA - \frac{8\pi}{\int_S R e^u\, dA} \int_S R e^u v\, dA.
\]
Assume
\[
G = \{ g_1, \ldots, g_m \}.
\]
For any $w \in H(S)$, let
\[
v = \frac{1}{m}\bigl( g_1[w] + \cdots + g_m[w] \bigr).
\]
Then for any $g \in G$,
\[
g[v] = \frac{1}{m}\bigl( gg_1[w] + \cdots + gg_m[w] \bigr) = v.
\]
Hence $v \in X^*$. It follows that
\[
\begin{aligned}
0 = \langle J'(u_o), v\rangle &= \frac{1}{m}\sum_{i=1}^{m} \langle J'(u_o), g_i[w]\rangle\\
&= \frac{1}{m}\sum_{i=1}^{m} \langle J'(g_i^{-1}[u_o]), w\rangle = \frac{1}{m}\sum_{i=1}^{m} \langle J'(u_o), w\rangle\\
&= \langle J'(u_o), w\rangle.
\end{aligned}
\]
Therefore, $u_o$ is a critical point of $J$ in $H(S)$. □
Theorem 5.1.2 (Chen and Ding [CD]) Suppose that $R \in C^\alpha(S^2)$ satisfies (5.25) with $\max_S R > 0$. If $F_G \neq \emptyset$, let $m = \max_{F_G} R$. Then equation (5.2) has a solution provided one of the following conditions holds:
(i) $F_G = \emptyset$;
(ii) $m \leq 0$; or
(iii) $m > 0$ and $c^* = \inf_{X^*} J < -8\pi \ln 4\pi m$.

Remark 5.1.1 In the case when the function $R(x)$ is even, the group is $G = \{\mathrm{id}, -\mathrm{id}\}$, where id is the identity transformation, i.e.,
\[
\mathrm{id}(x) = x \quad \text{and} \quad -\mathrm{id}(x) = -x, \quad \forall x \in S^2.
\]
Obviously, in this case $F_G$ is empty. Hence the above Theorem contains Moser's existence result as a special case.
To prove this theorem, we need two more lemmas.

Lemma 5.1.6 (Onofri [On]) For any $u \in H^1(S)$ with $\int_S u\, dA = 0$, there holds
\[
\int_S e^u\, dA \leq 4\pi \exp\left\{ \frac{1}{16\pi} \int_S |\nabla u|^2\, dA \right\}. \tag{5.27}
\]
Lemma 5.1.7 Suppose that $\{u_i\}$ is a sequence in $H(S)$ with $J(u_i) \leq \beta$. If $\|u_i\| \to \infty$, then there exist a subsequence (still denoted by $\{u_i\}$) and a point $\xi \in S^2$ with $R(\xi) > 0$, such that
\[
\int_S R(x)\, e^{u_i}\, dA = \bigl( R(\xi) + o(1) \bigr) \int_S e^{u_i}\, dA \tag{5.28}
\]
and
\[
\lim_{i\to\infty} J(u_i) \geq -8\pi \ln 4\pi R(\xi). \tag{5.29}
\]
Proof. We first show that there exist a subsequence of $\{u_i\}$ and a point $\xi \in S^2$ such that for any $\epsilon > 0$, we have
\[
\int_{S \setminus B_\epsilon(\xi)} e^{u_i}\, dA \to 0. \tag{5.30}
\]
Otherwise, there exist two subsets $\Omega_1$ and $\Omega_2$ of $S^2$ with $\mathrm{dist}(\Omega_1, \Omega_2) \geq \delta_o > 0$ such that $\{u_i\}$ satisfies all the conditions in Lemma 5.1.3. Hence the inequality
\[
\int_S e^{u_i}\, dA \leq C \exp\left\{ \left( \frac{1}{16\pi} - \delta \right) \|u_i\|^2 \right\}
\]
holds for some $\delta > 0$. Then by the previous argument, we conclude that $\|u_i\|$ is bounded. This contradicts our assumption and hence verifies (5.30). Now (5.30) and the continuity assumption on $R(x)$ imply (5.28).
Roughly speaking, the above argument shows that if a minimizing sequence of $J$ is unbounded, then, passing to a subsequence, the mass of the surface $S$ with density $e^{u_i(x)}$ at the point $x$ concentrates at one point $\xi$ on $S$.
J(u
i
) =
1
2
S
[u[
2
dA−8π ln(R(ξ) +o(1))
S
e
ui
dA
=
1
2
S
[u[
2
dA−8π ln(R(ξ) +o(1)) −8π ln
S
e
ui
dA
≥
1
2
S
[u[
2
dA−8π ln(R(ξ) +o(1)) −8π(ln 4π +
1
16π
S
[u[
2
dA)
= −8π ln(4πR(ξ) +o(1)).
140 5 Prescribing Gaussian Curvature on Compact 2Manifolds
Let i→∞, we arrive at (5.29). Since the right hand side of (5.29) is bounded
above by β, we must have R(ξ) > 0. This completes the proof of the Lemma.
Proof of the Theorem. Define
\[
c^* = \inf_{X^*} J.
\]
As we argued in the previous section (see (5.12)), we have $c^* > -\infty$. Let $\{u_i\} \subset X^*$ be a minimizing sequence, i.e., $J(u_i) \to c^*$ as $i \to \infty$.
Similar to the reasoning at the beginning of this section, one sees that we only need to show that $\{u_i\}$ is bounded in $H^1(S)$. Assume on the contrary that $\|u_i\| \to \infty$. Then from the proof of Lemma 5.1.7, there is a point $\xi \in S^2$ such that for any $\epsilon > 0$, we have
\[
\int_{S \setminus B_\epsilon(\xi)} e^{u_i}\, dA \to 0. \tag{5.31}
\]
Since $u_i$ is invariant under the action of $G$, we must also have
\[
\int_{S \setminus B_\epsilon(g\xi)} e^{u_i}\, dA \to 0. \tag{5.32}
\]
Because $\epsilon$ is arbitrary, we deduce that $g\xi = \xi$, that is, $\xi \in F_G$.
Now under condition (i), $F_G$ is empty; therefore $\{u_i\}$ must be bounded. Moreover, by Lemma 5.1.7,
\[
R(\xi) > 0 \quad \text{and} \quad c^* \geq -8\pi \ln 4\pi R(\xi).
\]
Noticing that $m \geq R(\xi)$ by definition, this again contradicts conditions (ii) and (iii).
Therefore, under any one of the conditions (i), (ii), or (iii), the minimizing sequence $\{u_i\}$ must be bounded, and hence possesses a subsequence converging weakly to some $u_o \in X^*$. Thus a constant multiple of $u_o$ is the desired weak solution of equation (5.3).
Here conditions (i) and (ii) in Theorem 5.1.2 can easily be verified through the group $G$ or the given function $R$. However, one may wonder under what conditions on $R$ we have condition (iii). The following theorem provides a sufficient condition.

Theorem 5.1.3 (Chen and Ding [CD]) Suppose that $R \in C^2(S^2)$ is $G$-invariant, and $m = \max_{F_G} R > 0$. Let
\[
D = \{ x \in S^2 \mid R(x) = m \}.
\]
If $\Delta R(x_o) > 0$ for some $x_o \in D$, then
\[
c^* < -8\pi \ln 4\pi m, \tag{5.33}
\]
and therefore (5.3) possesses a weak solution.

Proof. Choose a spherical polar coordinate system on $S := S^2$:
\[
(\theta, \phi), \quad -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}, \ 0 \leq \phi \leq 2\pi,
\]
with $x_o = (\frac{\pi}{2}, \phi)$ as the north pole.
u
λ
(θ, φ) = ln
1 −λ
2
(1 −λsin θ)
2
, v
λ
= u
λ
−
1
4π
S
u
λ
dA, 0 < λ < 1.
One can verify that this family of functions satisfy the equation
−´u
λ
+ 2 = 2e
u
λ
. (5.34)
As $\lambda \to 1$, the family $\{u_\lambda\}$ 'blows up' (tends to $\infty$) at the single point $x_o$, while approaching $-\infty$ everywhere else on $S^2$. Therefore, the mass of the sphere with density $e^{u_\lambda(x)}$ concentrates at the point $x_o$ as $\lambda \to 1$.
It is easy to see that $v_\lambda \in H(S)$ for $\lambda$ close to 1. Moreover, we claim that
\[
v_\lambda \in X, \ \text{i.e.,} \ v_\lambda(gx) = v_\lambda(x), \ \forall g \in G.
\]
In fact, since $g$ is an isometry of $S^2$ and $gx_o = x_o$, we have
\[
d(gx, x_o) = d(gx, gx_o) = d(x, x_o),
\]
where $d(\cdot, \cdot)$ is the geodesic distance on $S^2$. But $v_\lambda$ depends only on $\theta = d(x, x_o)$, so
\[
v_\lambda(gx) = v_\lambda(x), \quad \forall g \in G.
\]
It follows that $v_\lambda \in X^*$ for $\lambda$ sufficiently close to 1.
By a direct computation, one can verify that
\[
\frac{1}{2}\int_S |\nabla u_\lambda|^2\, dA + 8\pi \bar u_\lambda = 0.
\]
Consequently,
\[
J(v_\lambda) = -8\pi \ln \int_S R(x)\, e^{u_\lambda}\, dA.
\]
Notice that $c^* \leq J(v_\lambda)$ by the definition of $c^*$. Thus, in order to show (5.33), we only need to verify that, for $\lambda$ sufficiently close to 1,
\[
\int_S R(x)\, e^{u_\lambda}\, dA > 4\pi m. \tag{5.35}
\]
Let $\delta > 0$ be small. We have
\[
\begin{aligned}
\int_S R\, e^{u_\lambda}\, dA = (1 - \lambda^2) \Biggl\{ &\int_0^{2\pi}\!\!\int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}} [R(\theta, \phi) - m]\, \frac{\cos\theta}{(1 - \lambda\sin\theta)^2}\, d\theta\, d\phi\\
&+ \int_0^{2\pi}\!\!\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}-\delta} [R(\theta, \phi) - m]\, \frac{\cos\theta}{(1 - \lambda\sin\theta)^2}\, d\theta\, d\phi \Biggr\} + 4\pi m\\
= (1 - &\lambda^2)\{ I + II \} + 4\pi m.
\end{aligned}
\]
Here we have used the fact that
\[
\int_S e^{u_\lambda}\, dA = 4\pi,
\]
which can be derived directly from equation (5.34).
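This normalization is also easy to confirm by direct numerical integration (my own check, not from the text): with latitude $\theta$ and $dA = \cos\theta\, d\theta\, d\phi$, the integral of $e^{u_\lambda} = (1-\lambda^2)/(1-\lambda\sin\theta)^2$ over $S^2$ equals $4\pi$ for every $\lambda \in (0, 1)$.

```python
import math

# int_{S^2} e^{u_lambda} dA with u_lambda = ln[(1 - lam^2)/(1 - lam sin(theta))^2],
# computed by a midpoint-rule sum over the latitude theta in (-pi/2, pi/2).
def mass(lam, n=100000):
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        th = -math.pi / 2 + (i + 0.5) * dth
        e_u = (1 - lam ** 2) / (1 - lam * math.sin(th)) ** 2
        total += e_u * math.cos(th) * dth
    return 2 * math.pi * total

for lam in (0.3, 0.9, 0.99):
    print(lam, mass(lam))   # each value ≈ 4*pi ≈ 12.566
```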
For each fixed $\delta$, it is easy to check that the integral $II$ remains bounded as $\lambda$ tends to 1. Thus (5.35) will hold if we can show that the integral $I \to +\infty$ as $\lambda \to 1$.
Notice that $x_o$ is a maximum point of the restriction of $R$ to $F_G$. Hence, by the principle of symmetric criticality, $x_o$ is a critical point of $R$, that is, $\nabla R(x_o) = 0$. Using this fact and the second order Taylor expansion of $R(x)$ near $x_o$, we can derive
\[
I = \pi \int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}} \frac{\Delta R(x_o) \cos^2\theta + o\Bigl( \bigl(\theta - \frac{\pi}{2}\bigr)^2 \Bigr)}{(1 - \lambda\sin\theta)^2}\, \cos\theta\, d\theta
\geq c_o \int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}} \frac{\cos^3\theta}{(1 - \lambda\sin\theta)^2}\, d\theta.
\]
Here $c_o$ is a positive constant. We have chosen $\delta$ sufficiently small and used the fact that $\Delta R(x_o)$ is positive. From the above inequality one can easily verify that $I \to +\infty$ as $\lambda \to 1$, since the integral
\[
\int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}} \frac{\cos^3\theta}{(1 - \sin\theta)^2}\, d\theta
\]
diverges to $+\infty$.
This completes the proof of the Theorem. □
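The divergence driving this last step is easy to see numerically (my own illustration): near $\theta = \pi/2$ the integrand behaves like $4/(\frac{\pi}{2}-\theta)$ when $\lambda = 1$, so the integrals $I(\lambda)$ below grow without bound (logarithmically) as $\lambda \to 1$.

```python
import math

# I(lam) = int over (pi/2 - delta, pi/2) of cos^3(theta)/(1 - lam*sin(theta))^2,
# computed with a midpoint-rule sum; the values blow up as lam -> 1.
def I(lam, delta=0.5, n=100000):
    a, b = math.pi / 2 - delta, math.pi / 2
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        th = a + (i + 0.5) * h
        s += math.cos(th) ** 3 / (1 - lam * math.sin(th)) ** 2 * h
    return s

for lam in (0.9, 0.99, 0.999):
    print(lam, I(lam))   # strictly increasing, without bound
```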
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds

Let $M$ be a compact two dimensional Riemannian manifold. Let $g_o(x)$ be a metric on $M$ with the corresponding Laplace-Beltrami operator $\Delta$ and Gaussian curvature $K_o(x)$. Given a function $K(x)$ on $M$, can it be realized as the Gaussian curvature associated to the pointwise conformal metric $g(x) = e^{2u(x)} g_o(x)$? As we mentioned before, answering this question is equivalent to solving the following semilinear elliptic equation:
\[
-\Delta u + K_o(x) = K(x)\, e^{2u}, \quad x \in M. \tag{5.36}
\]
To simplify this equation, we introduce the following simple existence result.
Lemma 5.2.1 Let $f \in L^2(M)$. Then the equation
\[
-\Delta u = f(x), \quad x \in M, \tag{5.37}
\]
possesses a solution if and only if
\[
\int_M f(x)\, dA = 0. \tag{5.38}
\]

Proof. The 'only if' part can be seen by integrating both sides of equation (5.37) on $M$.
We now prove the 'if' part: assuming that (5.38) holds, we will obtain the existence of a solution to (5.37). To this end, we consider the functional
\[
J(u) = \frac{1}{2}\int_M |\nabla u|^2\, dA - \int_M f(x)\, u\, dA
\]
under the constraint
\[
G = \Bigl\{ u \in H^1(M) \;\Bigm|\; \int_M u(x)\, dA = 0 \Bigr\}.
\]
As we have seen before, we can use in $G$ the equivalent norm
\[
\|u\| = \left( \int_M |\nabla u|^2\, dA \right)^{1/2}.
\]
By the Hölder and Poincaré inequalities, we derive
\[
\begin{aligned}
J(u) &\geq \frac{1}{2}\|u\|^2 - \|f\|_{L^2} \|u\|_{L^2} \geq \frac{1}{2}\|u\|^2 - c_o \|f\|_{L^2} \|u\|\\
&= \frac{1}{2}\bigl( \|u\| - c_o \|f\|_{L^2} \bigr)^2 - \frac{c_o^2 \|f\|_{L^2}^2}{2}. \tag{5.39}
\end{aligned}
\]
The above inequality implies immediately that the functional $J(u)$ is bounded from below on the set $G$.
Let $\{u_k\}$ be a minimizing sequence of $J$ in $G$, i.e.,
\[
J(u_k) \to \inf_G J, \quad \text{as } k \to \infty.
\]
Then again by (5.39), $\{u_k\}$ is bounded in $H^1$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some $u_o$ in $H^1(M)$. From the compact Sobolev embedding
\[
H^1 \hookrightarrow\hookrightarrow L^2,
\]
we see that
\[
u_k \to u_o \ \text{strongly in } L^2, \ \text{and hence also in } L^1.
\]
Therefore
\[
\int_M u_o(x)\, dA = \lim \int_M u_k(x)\, dA = 0 \tag{5.40}
\]
and
\[
\int_M f(x)\, u_o(x)\, dA = \lim \int_M f(x)\, u_k(x)\, dA. \tag{5.41}
\]
(5.40) implies that $u_o \in G$, and hence
\[
J(u_o) \geq \inf_G J(u). \tag{5.42}
\]
Meanwhile, (5.41) and the weak lower semi-continuity of the Dirichlet integral,
\[
\liminf \int_M |\nabla u_k|^2\, dA \geq \int_M |\nabla u_o|^2\, dA,
\]
lead to
\[
J(u_o) \leq \inf_G J(u).
\]
From this and (5.42) we conclude that $u_o$ is the minimum of $J$ in the set $G$, and therefore there exists a constant $\lambda$ such that $u_o$ is a weak solution of
\[
-\Delta u_o - f(x) = \lambda.
\]
Integrating both sides and using (5.38), we find that $\lambda = 0$. Therefore $u_o$ is a solution of equation (5.37). This completes the proof of the Lemma. □
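Lemma 5.2.1 has a transparent instance on the simplest compact manifold, the circle $S^1$ (my own toy illustration, not from the text): $-u'' = f$ is solvable exactly when $f$ has zero mean, and for $f = \sum_k (a_k \cos kx + b_k \sin kx)$, $k \geq 1$, the solution is obtained by dividing each Fourier coefficient by $k^2$.

```python
import math

# Solve -u'' = f on the circle for zero-mean f given by Fourier coefficients.
# coeffs: dict k -> (a_k, b_k) for k >= 1. Returns u as a callable.
def solve_poisson_circle(coeffs):
    def u(x):
        return sum((a * math.cos(k * x) + b * math.sin(k * x)) / k ** 2
                   for k, (a, b) in coeffs.items())
    return u

# f(x) = cos(3x) + 2 sin(x), which has zero mean.
u = solve_poisson_circle({3: (1.0, 0.0), 1: (0.0, 2.0)})

# Verify -u'' = f at a sample point via a second-difference quotient.
h, x = 1e-4, 0.8
lap = -(u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
print(lap, math.cos(3 * x) + 2 * math.sin(x))   # the two values agree
```

A constant (nonzero-mean) component of $f$ would have no preimage under $-d^2/dx^2$ on the circle, which is exactly the 'only if' direction of the lemma.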
We now simplify equation (5.36). First let ũ = 2u; then

−Δũ + 2K_o(x) = 2K(x)e^{ũ}.   (5.43)

Let v(x) be a solution of

−Δv = 2K_o(x) − 2K̄_o,   (5.44)

where

K̄_o = (1/|M|)∫_M K_o(x) dA

is the average of K_o(x) on M. Because the integral of the right-hand side of (5.44) over M is 0, the solution of (5.44) exists by Lemma 5.2.1.
Let w = ũ + v. Then it is easy to verify that

−Δw + 2K̄_o = 2K(x)e^{−v}e^{w}.   (5.45)

Finally, letting

α = 2K̄_o and R(x) = 2K(x)e^{−v(x)},

and rewriting w as u, we reduce equation (5.36) to

−Δu + α = R(x)e^{u(x)},   x ∈ M.   (5.46)
By the well-known Gauss–Bonnet Theorem, if g is a Riemannian metric on M with corresponding Gaussian curvature K_g(x), then

∫_M K_g(x) dA_g = 2πχ(M),

where χ(M) is the Euler characteristic of M, and dA_g is the area element associated with the metric g.
We say M is negatively curved if χ(M) < 0; in this case α < 0.
5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions

Let M be a compact two-dimensional manifold and let α < 0. In [KW], Kazdan and Warner considered the equation

−Δu + α = R(x)e^{u(x)},   x ∈ M,   (5.47)

and proved

Theorem 5.2.1 (Kazdan–Warner) Let R̄ be the average of R(x) on M. Then
i) A necessary condition for (5.46) to have a solution is R̄ < 0.
ii) If R̄ < 0, then there is a constant α_o, with −∞ ≤ α_o < 0, such that one can solve (5.46) if α > α_o, but cannot solve (5.46) if α < α_o.
iii) α_o = −∞ if and only if R(x) ≤ 0.
The existence of a solution was established by the method of upper and lower solutions. We say that u₊ is an upper solution of equation (5.47) if

−Δu₊ + α ≥ R(x)e^{u₊},   x ∈ M.

Similarly, we say that u₋ is a lower solution of (5.47) if

−Δu₋ + α ≤ R(x)e^{u₋},   x ∈ M.

The following approach is adapted from [KW] with minor modifications.

Lemma 5.2.2 If a solution of (5.47) exists, then R̄ < 0. Even more strongly, the unique solution φ of

Δφ + αφ = R(x),   x ∈ M,   (5.48)

must be positive.
Proof. We first prove the second part. Using the substitution v = e^{−u}, one sees that (5.47) has a solution u if and only if there is a positive solution v of

Δv + αv − R(x) − |∇v|²/v = 0.   (5.49)

Assume that v is such a positive solution. Let φ be the unique solution of (5.48) and let w = φ − v. Then

Δw + αw = −|∇v|²/v ≤ 0.   (5.50)

It follows that w ≥ 0. Otherwise, there is a point x_o ∈ M such that w(x_o) < 0 and x_o is a minimum of w, hence Δw(x_o) ≥ 0. Consequently, since α < 0 and w(x_o) < 0, we must have

Δw(x_o) + αw(x_o) > 0,

a contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. This shows that a necessary condition for (5.47) to have a solution is that the unique solution φ of (5.48) be positive. Integrating both sides of (5.48), we see immediately that R̄ < 0. This completes the proof of the Lemma. □
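The equivalence between (5.47) and (5.49) under the substitution v = e^{−u} is a pointwise identity, so it can be sanity-checked numerically: manufacture a smooth u on the circle, read off the R(x) that makes it an exact solution of (5.47), and evaluate the left-hand side of (5.49). All choices below (u, α, the spectral grid) are our own illustrations, not the text's.

```python
import numpy as np

n = 256
x = np.arange(n)*2*np.pi/n
k = np.fft.fftfreq(n, d=2*np.pi/n)*2*np.pi

def d(f):
    """Spectral derivative of a smooth periodic function on the circle."""
    return np.fft.ifft(1j*k*np.fft.fft(f)).real

alpha = -1.0
u = 0.3*np.sin(x)                          # pick any smooth u ...
R = (-d(d(u)) + alpha)*np.exp(-u)          # ... and let R be the datum it solves

v = np.exp(-u)                             # the substitution v = e^{-u}
lhs = d(d(v)) + alpha*v - R - d(v)**2/v    # left-hand side of (5.49)
print(np.max(np.abs(lhs)))                 # near machine precision
```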
Lemma 5.2.3 (The Existence of Upper Solutions)
i) If R(x) ≤ 0 but R is not identically 0, then there exists an upper solution u₊ of (5.47).
ii) If R̄ < 0, then there exists a small δ > 0 such that for −δ ≤ α < 0 there exists an upper solution u₊ of (5.47).

Proof. Let v be a solution of

−Δv = R(x) − R̄.

We try to choose constants a and b such that u₊ = av + b is an upper solution. We have

−Δu₊ + α − R(x)e^{u₊} = aR(x) − aR̄ + α − R(x)e^{av+b}.   (5.51)

We want to choose the constants a and b so that the right-hand side of (5.51) is nonnegative.
In Case i), when R(x) ≤ 0, we regroup the right-hand side of (5.51) as

f(a, b, x) ≡ (−aR̄ + α) − R(x)(e^{av+b} − a).   (5.52)

For any given α < 0, we first choose a sufficiently large so that −aR̄ + α > 0. Then we choose b large so that

e^{av(x)+b} − a ≥ 0,   ∀ x ∈ M.

This is possible because v(x) is continuous and M is compact. It follows from (5.51) and (5.52) that

−Δu₊ + α − R(x)e^{u₊} ≥ 0.

In Case ii), we choose α ≥ aR̄/2 and e^{b} = a, where a is a positive number to be determined later. Then (5.52) becomes

f(a, b, x) ≥ −aR̄/2 − aR(x)(e^{av} − 1) ≥ a(−R̄/2 − ‖R‖_{L∞}|e^{av} − 1|) = a‖R‖_{L∞}(−R̄/(2‖R‖_{L∞}) − |e^{av} − 1|).

Again by the continuity of v and the compactness of M, we see that

e^{av(x)} → 1 uniformly on M as a → 0.

Therefore, we can make

|e^{av(x)} − 1| ≤ −R̄/(2‖R‖_{L∞}),   for all x ∈ M,

by choosing a sufficiently small. Thus the u₊ so constructed is an upper solution of (5.47) for α ≥ aR̄/2. This completes the proof of the Lemma. □
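The construction in Case ii) is entirely effective: solve −Δv = R − R̄, then shrink a until the smallness condition on |e^{av} − 1| holds, and set u₊ = av + b with e^{b} = a. The sketch below carries this out on the circle; the particular R, the spectral solver, and the halving strategy for a are our own illustrative choices.

```python
import numpy as np

n = 256
x = np.arange(n)*2*np.pi/n
R = np.sin(x) - 0.4                       # changes signs, average Rbar < 0
Rbar = R.mean()

# solve -v'' = R - Rbar spectrally (solvable: the right side has mean zero)
k = np.fft.fftfreq(n, d=2*np.pi/n)*2*np.pi
fhat = np.fft.fft(R - Rbar)
vhat = np.zeros_like(fhat)
vhat[1:] = fhat[1:]/(k[1:]**2)
v = np.fft.ifft(vhat).real

# shrink a until |e^{av} - 1| <= -Rbar/(2 ||R||_inf) everywhere
a = 1.0
while np.max(np.abs(np.exp(a*v) - 1.0)) > -Rbar/(2*np.max(np.abs(R))):
    a /= 2

alpha = a*Rbar/2                          # the proof covers every alpha in [a*Rbar/2, 0)
u_plus = a*v + np.log(a)                  # u_+ = a v + b with e^b = a

# verify the upper-solution inequality  -Δu_+ + α - R e^{u_+} >= 0
expr = a*(R - Rbar) + alpha - R*np.exp(u_plus)
print(expr.min())                         # positive
```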
Lemma 5.2.4 (Existence of Lower Solutions) Given any R ∈ L^p(M) and a function u₊ ∈ W^{2,p}(M), there is a lower solution u₋ ∈ W^{2,p}(M) of (5.47) such that u₋ < u₊.

Proof. If R(x) is bounded from below, then one can clearly use any sufficiently large negative constant as u₋. For general R(x) ∈ L^p(M), let K(x) = max{1, −R(x)}. Choose a positive constant a so that aK̄ = −α. Then aK(x) + α has average 0 on M and (aK(x) + α) ∈ L^p(M). Thus there is a solution w of

Δw = aK(x) + α.

By the L^p regularity theory, w ∈ W^{2,p}(M) and hence is continuous. Let u₋ = w − λ. Then, since −R(x) ≤ K(x),

−Δu₋ + α − R(x)e^{u₋} = −Δw + α − R(x)e^{w−λ} = −aK(x) − R(x)e^{w−λ} ≤ −K(x)(a − e^{w−λ}).   (5.53)

Choosing λ sufficiently large so that

a − e^{w−λ} ≥ 0,

and noticing that K(x) ≥ 0, by (5.53) we arrive at

−Δu₋ + α − R(x)e^{u₋} ≤ 0.

Therefore u₋ is a lower solution. Moreover, one can make λ large enough that u₋ = w − λ < u₊. This completes the proof of the Lemma. □
Lemma 5.2.5 Let p > 2. If there exist upper and lower solutions u₊, u₋ ∈ W^{2,p}(M) of (5.47) with u₋ ≤ u₊, then there is a solution u ∈ W^{2,p}(M). Moreover, u₋ ≤ u ≤ u₊, and u is C^∞ in any open set where R(x) is C^∞.
Proof. We will rewrite equation (5.47) as

Lu = f(x, u),

then start from the upper solution u₊ and apply the iterations

Lu₁ = f(x, u₊),   Lu_{i+1} = f(x, u_i),   i = 1, 2, ....

In order that u₋ ≤ u_i ≤ u₊ for each i, we need the operator L to obey some maximum principle and f(x, ·) to possess some monotonicity. The Laplace operator in the original equation does not obey a maximum principle; however, if we let

Lu = −Δu + k(x)u,

where k(x) is any positive function on M, then the new operator L obeys the maximum principle:

If Lu ≥ 0, then u(x) ≥ 0 for all x ∈ M.

To see this, suppose on the contrary that there is some point on M where u < 0. Let x_o be a negative minimum of u; then −Δu(x_o) ≤ 0, and it follows that

−Δu(x_o) + k(x_o)u(x_o) ≤ k(x_o)u(x_o) < 0,

a contradiction. Hence the maximum principle holds for L. We now write equation (5.47) as

Lu ≡ −Δu + k(x)u = R(x)e^{u} − α + k(x)u := f(x, u).

Based on the maximum principle for L, keeping u₋ ≤ u_i ≤ u₊ requires that f(x, ·) be monotone increasing, that is,

∂f/∂u = R(x)e^{u} + k(x) ≥ 0.

To this end, we choose

k(x) = k₁(x)e^{u₊(x)},   where k₁(x) = max{1, −R(x)}.
Based on this choice of L and f, we show by mathematical induction that the iteration sequence so constructed is monotone decreasing:

u_{i+1} ≤ u_i ≤ ··· ≤ u₁ ≤ u₊,   i = 1, 2, ....   (5.54)

In fact, from

Lu₊ ≥ f(x, u₊) and Lu₁ = f(x, u₊)

we derive immediately

L(u₊ − u₁) ≥ 0.

Then the maximum principle for L implies u₊ ≥ u₁. This completes the first step of our induction. Assume that for an integer m we have u_{m+1} ≤ u_m ≤ u₊; we will show that u_{m+2} ≤ u_{m+1}. Since f(x, ·) is monotone increasing in u for u ≤ u₊, we have

f(x, u_{m+1}) ≤ f(x, u_m),

and hence L(u_{m+1} − u_{m+2}) ≥ 0. Therefore u_{m+1} ≥ u_{m+2}. This verifies (5.54) through mathematical induction. Similarly, one can show the other side of the inequality and arrive at

u₋ ≤ u_{i+1} ≤ u_i ≤ ··· ≤ u₊,   i = 1, 2, ....   (5.55)
Since u₊, u₋, and the u_i are continuous, inequality (5.55) shows that {|u_i|} is uniformly bounded. Consequently, in the L^p norm,

‖Lu_{i+1}‖_{L^p} = ‖R(x)e^{u_i} − α + k(x)u_i‖_{L^p} ≤ C.

Hence {u_i} is bounded in W^{2,p}(M). By the compact embedding of W^{2,p} into C¹, there is a subsequence that converges uniformly to some function u in C¹(M). In view of the monotonicity (5.55), we conclude that the entire sequence {u_i} itself converges uniformly to u. Moreover,

‖u_{i+1} − u_{j+1}‖_{W^{2,p}} ≤ C‖L(u_{i+1} − u_{j+1})‖_{L^p} ≤ C(‖R‖_{L^p}‖e^{u_i} − e^{u_j}‖_{L∞} + ‖k‖_{L^p}‖u_i − u_j‖_{L∞}).

Therefore {u_i} converges strongly in W^{2,p}, so u ∈ W^{2,p}. Since L : W^{2,p} → L^p is continuous, it follows that u is a solution of (5.47) and satisfies u₋ ≤ u ≤ u₊.
By the Schauder theory, one proves inductively that u ∈ C^∞ in any open set where R(x) is C^∞. More precisely, if R ∈ C^{m+γ}, then u ∈ C^{m+2+γ}. This completes the proof of the Lemma. □
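The monotone iteration behind Lemma 5.2.5 is directly implementable. The sketch below runs it on the unit circle (a compact one-dimensional manifold standing in for M), with the linear solves Lu_{i+1} = f(x, u_i) done by a periodic finite-difference discretization; the particular R, α, the constant upper and lower solutions, and the grid are all assumptions of ours, chosen so that the barrier inequalities hold.

```python
import numpy as np

n = 200
h = 2*np.pi/n
x = np.arange(n)*h
R = -1.0 + 0.5*np.cos(x)          # R(x) <= -0.5 < 0 everywhere
alpha = -1.0

u_plus = np.ones(n)               # upper solution: -Δ1 + α = -1 >= R e^1 (max ≈ -1.36)
u_minus = -5.0*np.ones(n)         # lower solution: -1 <= R e^{-5}
k1 = np.maximum(1.0, -R)
k = k1*np.exp(u_plus)             # makes f(x,u) = R e^u - α + k u increasing for u <= u_+

# periodic finite-difference Laplacian and the operator L = -Δ + k
lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1))
lap[0, -1] = lap[-1, 0] = 1.0
lap /= h**2
L = -lap + np.diag(k)

u = u_plus.copy()
for _ in range(200):              # iterate  L u_{i+1} = f(x, u_i)
    u = np.linalg.solve(L, R*np.exp(u) - alpha + k*u)

residual = -lap @ u + alpha - R*np.exp(u)
print(np.max(np.abs(residual)))   # tiny; and u stays between the two barriers
```

The discrete L is an M-matrix, so the discrete analogue of the maximum principle holds and the iterates decrease monotonically from u₊, exactly as in the proof.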
Based on these lemmas, we are now able to complete the proof of Theorem 5.2.1.

Proof of Theorem 5.2.1.
Part i) follows directly from Lemma 5.2.2.
By Lemmas 5.2.4 and 5.2.5, if there is an upper solution, then there is a solution. Hence by Lemma 5.2.3, there exists a δ > 0 such that (5.47) is solvable for −δ ≤ α < 0. Let

α_o = inf{α | (5.47) is solvable}.

Then for any 0 > α > α_o, one can solve (5.47). In fact, if (5.47) possesses a solution u₁ for some α₁, then for any α > α₁ we have

−Δu₁ + α > R(x)e^{u₁},

that is, u₁ is an upper solution for equation (5.47) at α. Hence the equation

−Δu + α = R(x)e^{u}

also has a solution.
Lemma 5.2.3 implies that α_o = −∞ if R(x) ≤ 0 but R(x) is not identically 0. Now, to complete the proof of the Theorem, it suffices to prove that if R(x) is positive somewhere, then α_o > −∞. By virtue of Lemma 5.2.2, we only need to show:

Claim: There exists α < 0 such that the unique solution of

Δφ + αφ = R(x)   (5.56)

is negative somewhere.

Actually, we can prove a stronger result:

αφ(x) → R(x) almost everywhere as α → −∞.   (5.57)

We argue in the following two steps.
i) For a given α < 0, let u = u(x, α) be the solution of (5.56). Multiplying both sides of equation (5.56) by u and integrating over M, we obtain

−∫_M |∇u|² dA + α∫_M u² dA = ∫_M R(x)u(x) dA.

Consequently,

‖u‖²_{L²} ≤ (1/α)∫_M Ru dA.

Then by the Hölder inequality,

‖u‖_{L²} ≤ (1/|α|)‖R‖_{L²}.

This implies that

as α → −∞, u(x, α) → 0 almost everywhere.   (5.58)

Since for any α < 0 the operator Δ + α is invertible, we can express

u = (Δ + α)^{−1}R(x).

From the above reasoning, one can see that, as α → −∞,

if R ∈ L², then (Δ + α)^{−1}R(x) → 0 almost everywhere.   (5.59)

ii) To see (5.57), we write

−αφ + R(x) = (Δ + α)^{−1}ΔR.   (5.60)

One can see the validity of (5.60) by simply applying the operator Δ + α to both sides.
Now, by assumption, ΔR ∈ L², and hence (5.59) implies that

−αφ + R(x) → 0, a.e.,

or

φ(x) → (1/α)R(x) a.e.

Since R(x) is continuous and positive somewhere, for sufficiently negative α, φ(x) is negative somewhere.
This completes the proof of the Theorem. □
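The asymptotics in (5.57) are easy to observe numerically: on the circle, Δφ + αφ = R is a single linear solve per α, and αφ tracks R increasingly well as α → −∞, with φ turning negative wherever R is positive. The datum R, the grid, and the sample values of α below are our own illustrative choices.

```python
import numpy as np

n = 256
h = 2*np.pi/n
x = np.arange(n)*h
R = np.sin(x) + 0.3                        # continuous and positive somewhere

# periodic finite-difference Laplacian on the circle
lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1))
lap[0, -1] = lap[-1, 0] = 1.0
lap /= h**2

errs = []
for alpha in [-10.0, -100.0, -1000.0]:
    phi = np.linalg.solve(lap + alpha*np.eye(n), R)   # solves Δφ + αφ = R
    errs.append(np.max(np.abs(alpha*phi - R)))
print(errs)            # decreasing toward 0, as in (5.57)
print(phi.min() < 0)   # φ eventually negative somewhere, as in the Claim
```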
5.2.2 Chen and Li's Results

From the previous section, one can see from Kazdan and Warner's result that in the case when R(x) is nonpositive, equation (5.46) has been solved completely; i.e., a necessary and sufficient condition for equation (5.46) to have a solution for every 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x) changes signs, we have α_o > −∞, and hence some questions remain. In particular, one may naturally ask:

Does equation (5.46) have a solution for the critical value α = α_o?

In [CL1], Chen and Li answered this question affirmatively.

Theorem 5.2.2 (Chen–Li)
Assume that R(x) is a continuous function on M. Let α_o be defined as in the previous section. Then equation (5.46) has at least one solution when α = α_o.

Proof. The Outline. Let {α_k} be a sequence of numbers such that

α_k > α_o, and α_k → α_o, as k → ∞.

We will prove the Theorem in three steps.
In Step 1, for each fixed α_k, we minimize the functional

J_k(u) = (1/2)∫_M |∇u|² dA + α_k∫_M u dA − ∫_M R(x)e^{u} dA

in a class of functions that lie between a lower and an upper solution. Let the minimizer be u_k. Then let α_k → α_o.
In Step 2, we show that the sequence of minimizers {u_k} is bounded in the region where R(x) is positively bounded away from 0.
In Step 3, we prove that {u_k} is bounded in H¹(M) and hence converges to a desired solution.
Step 1. For each α_k > α_o, by the result of Kazdan and Warner, there exists a solution φ_k of

−Δφ_k + α_k = R(x)e^{φ_k}.   (5.61)

We first show that there exists a constant c_o > 0 such that

φ_k(x) ≥ −c_o,   ∀ x ∈ M and ∀ k = 1, 2, ....   (5.62)

Otherwise, for each k, let x_k be a minimum of φ_k(x) on M. Then, along a subsequence,

φ_k(x_k) → −∞, as k → ∞.

On the other hand, since x_k is a minimum, we have

−Δφ_k(x_k) ≤ 0,   ∀ k = 1, 2, ....

These contradict equation (5.61), because

α_k → α_o < 0, as k → ∞.

This verifies (5.62).
Choose a sufficiently negative constant A < −c_o to be a lower solution of equation (5.46) for all α_k, that is,

−ΔA + α_k ≤ R(x)e^{A}.

This is possible because R(x) is bounded below on M, and hence we can make R(x)e^{A} greater than any fixed negative number.
For each fixed α_k, choose α_o < α̃_k < α_k. By the result of Kazdan and Warner, there exists a solution of equation (5.46) for α = α̃_k; call it ψ_k. Then

−Δψ_k + α_k > −Δψ_k + α̃_k = R(x)e^{ψ_k(x)},

i.e., ψ_k is an upper solution of the equation

−Δu + α_k = R(x)e^{u(x)}.   (5.63)
Illuminated by an idea of Ding and Liu [DL], we minimize the functional J_k(u) in the class of functions

H = {u ∈ C¹(M) | A ≤ u ≤ ψ_k}.

This is possible because the functional J_k(u) is bounded from below on H. In fact, since M is compact and R(x) is continuous, there exists a constant C₁ such that

|R(x)| ≤ C₁,   ∀ x ∈ M.

Consequently, for any u ∈ H,

J_k(u) ≥ (1/2)∫_M |∇u|² dA + α_k∫_M u dA − C₁∫_M e^{ψ_k} dA ≥ (1/2)(‖u‖ − C₂)² − C₃.   (5.64)

Here we have used the Hölder and Poincaré inequalities. From (5.64), one can easily see that J_k is bounded from below. Also by (5.64) and an argument similar to that in the previous section, we can show that there is a subsequence of a minimizing sequence which converges weakly to a minimizer u_k of J_k(u) in the set H. In order to show that u_k is a solution of equation (5.63), we need it to be in the interior of H, i.e.,

A < u_k(x) < ψ_k(x),   ∀ x ∈ M.   (5.65)

To verify this, we use a maximum principle.
First we show that

u_k(x) < ψ_k(x),   ∀ x ∈ M.   (5.66)

Suppose, on the contrary, that there exists x_o ∈ M such that u_k(x_o) = ψ_k(x_o); we will derive a contradiction. Since u_k(x_o) > A, there is a δ > 0 such that

A < u_k(x),   ∀ x ∈ B_δ(x_o).

It follows that, for any positive function v(x) with support in B_δ(x_o), there exists an ε > 0 such that for −ε ≤ t ≤ 0 we have u_k + tv ∈ H. Since u_k is a minimizer of J_k in H, in particular the function g(t) = J_k(u_k + tv) achieves its minimum on the interval [−ε, 0] at t = 0. Hence g′(0) ≤ 0. This implies that

∫_M [−Δu_k + α_k − R(x)e^{u_k}] v(x) dA ≤ 0.   (5.67)

Consequently,

−Δu_k(x_o) + α_k − R(x_o)e^{u_k(x_o)} ≤ 0.   (5.68)

Let w(x) = ψ_k(x) − u_k(x). Then w(x) ≥ 0, and x_o is a minimum of w. It follows that −Δw(x_o) ≤ 0. On the other hand, we have, by (5.68),

−Δw(x_o) = −Δψ_k(x_o) − (−Δu_k(x_o)) ≥ −α̃_k + α_k > 0.

Again a contradiction. Therefore we must have

u_k(x) < ψ_k(x),   ∀ x ∈ M.

Similarly, one can show that

A < u_k(x),   ∀ x ∈ M.

This verifies (5.65). Now for any v ∈ C¹(M), u_k + tv is in H for all sufficiently small t, and hence (d/dt)J_k(u_k + tv)|_{t=0} = 0. Then a direct computation, as we did before, leads to

−Δu_k + α_k = R(x)e^{u_k},   ∀ x ∈ M.
Step 2. The sequence {u_k} obtained in the previous step is obviously uniformly bounded from below by the negative constant A. To show that it is also bounded from above, we first consider the region where R(x) is positively bounded away from 0. We will use a sup + inf inequality of Brezis, Li, and Shafrir [BLS]:

Lemma 5.2.6 (Brezis, Li, and Shafrir) Assume that V is a Lipschitz function satisfying

0 < a ≤ V(x) ≤ b < ∞,

and let K be a compact subset of a domain Ω in R². Then any solution u of the equation

−Δu = V(x)e^{u},   x ∈ Ω,

satisfies

sup_K u + inf_Ω u ≤ C(a, b, ‖∇V‖_{L∞}, K, Ω).   (5.69)

Let x_o be a point on M at which R(x_o) > 0. Choose ε so small that

R(x) ≥ a > 0,   ∀ x ∈ Ω ≡ B_{2ε}(x_o),

for some constant a.
Let v_k be a solution of

−Δv_k − α_k = 0 in Ω,   v_k(x) = 1 on ∂Ω.

Let w_k = u_k + v_k. Then w_k satisfies

−Δw_k = R(x)e^{−v_k}e^{w_k}.

By the maximum principle, the sequence {v_k} is bounded from above and from below in the small ball Ω. Since {u_k} is uniformly bounded from below, {w_k} is also bounded from below. Locally, the metric is pointwise conformal to the Euclidean metric, hence one can apply Lemma 5.2.6 to {w_k} to conclude that the sequence {w_k} is also uniformly bounded from above in the smaller ball B_ε(x_o). Since x_o is an arbitrary point where R is positive, we conclude that {w_k} is uniformly bounded in the region where R(x) is positively bounded away from 0. The uniform bound for {u_k} in the same region follows suit.
Step 3. We show that the sequence {u_k} is bounded in H¹(M).
Choose a small δ > 0 such that the set

D = {x ∈ M | R(x) > δ}

is nonempty. It is open since R(x) is continuous. From the previous step, we know that {u_k} is uniformly bounded on D.
Let K(x) be a smooth function such that

K(x) < 0 for x ∈ D, and K(x) = 0 elsewhere.

For each α_k, let v_k be the unique solution of

−Δv_k + α_k = K(x)e^{v_k},   x ∈ M.

Then, since

−Δv_k + α_k ≤ 0 and α_k → α_o,

the sequence {v_k} is uniformly bounded.
Let w_k = u_k − v_k; then

−Δw_k = R(x)e^{u_k} − K(x)e^{v_k}.   (5.70)

We show that {w_k} is bounded in H¹(M).
On one hand, multiplying both sides of (5.70) by e^{w_k} and integrating over M, we have

∫_M |∇w_k|²e^{w_k} dA = ∫_M R(x)e^{u_k+w_k} dA − ∫_D K(x)e^{u_k} dA.   (5.71)
On the other hand, since each u_k is a minimizer of the functional J_k, the second variation J_k″(u_k) is positive semi-definite. It follows that for any function φ ∈ H¹(M),

∫_M [|∇φ|² − R(x)e^{u_k}φ²] dA ≥ 0.

Choosing φ = e^{w_k/2}, we derive

(1/4)∫_M |∇w_k|²e^{w_k} dA ≥ ∫_M R(x)e^{u_k+w_k} dA.   (5.72)

Combining (5.71) and (5.72), we arrive at

−∫_D K(x)e^{u_k} dA ≥ (3/4)∫_M |∇w_k|²e^{w_k} dA.   (5.73)

Notice that {u_k} is uniformly bounded in the region D, and that {w_k} is bounded from below, due to the fact that {u_k} is bounded from below and {v_k} is bounded. Hence (5.73) implies that ∫_M |∇w_k|² dA is bounded, and so is ∫_M |∇u_k|² dA.
Consider ũ_k = u_k − ū_k, where ū_k is the average of u_k. Obviously ∫_M ũ_k dA = 0, hence we can apply the Poincaré inequality to ũ_k and conclude that {ũ_k} is bounded in H¹(M). Apparently, we have

∫_M u_k² dA ≤ 2(∫_M ũ_k² dA + ū_k²|M|).

If ∫_M u_k² dA were unbounded, then {ū_k} would be unbounded. Taking into account that {u_k} is bounded in the region D where R(x) is positively bounded away from 0, we would have

∫_D ũ_k² dA = ∫_D |u_k − ū_k|² dA → ∞

for some subsequence, a contradiction with the fact that {ũ_k} is bounded in H¹(M). Therefore, {u_k} must also be bounded in H¹(M). Consequently, there exists a subsequence of {u_k} that converges weakly to a function u_o in H¹(M), and u_o is the desired weak solution of

−Δu_o + α_o = R(x)e^{u_o}.

This completes the proof of the Theorem. □
6
Prescribing Scalar Curvature on S^n, for n ≥ 3

6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where R < 0
6.3.2 In the Region Where R is Small
6.3.3 In the Region Where R > 0

6.1 Introduction
On the higher dimensional sphere S^n with n ≥ 3, Kazdan and Warner raised the following question:
Which functions R(x) can be realized as the scalar curvature of some conformally related metric? It is equivalent to consider the existence of a solution to the following semilinear elliptic equation:

−Δu + (n(n−2)/4)u = ((n−2)/(4(n−1)))R(x)u^{(n+2)/(n−2)},   u > 0,   x ∈ S^n.   (6.1)

The equation is "critical" in the sense that a lack of compactness occurs. Besides the obvious necessary condition that R(x) be positive somewhere, there are well-known obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE]. The conditions are:

∫_{S^n} X(R) dV_g = 0,   (6.2)

where dV_g is the volume element of the conformal metric g and X is any conformal vector field associated with the standard metric g₀. We call these Kazdan–Warner type conditions. These conditions give rise to many examples of R(x) for which equation (6.1) has no solution. In particular, for a monotone rotationally symmetric function R the equation admits no solution.
In the last two decades, numerous studies were dedicated to these problems, and various sufficient conditions were found (please see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein). However, among others, one problem of common concern was left open, namely: are those Kazdan–Warner type necessary conditions also sufficient? In the case where R is rotationally symmetric, the conditions become:

R > 0 somewhere and R′ changes signs.   (6.3)

Then
(a) is (6.3) a sufficient condition? and
(b) if not, what are the necessary and sufficient conditions?
Kazdan listed these as open problems in his CBMS Lecture Notes [Ka].
Recently, we answered question (a) negatively [CL3] [CL4]. We found some stronger obstructions. Our results imply that for a rotationally symmetric function R, if it is monotone in the region where it is positive, then problem (6.1) admits no solution unless R is a constant.
In other words, a necessary condition to solve (6.1) is that

R′(r) changes signs in the region where R is positive.   (6.4)

Now, is this a sufficient condition?
For the prescribing Gaussian curvature equation (5.3) on S², Xu and Yang [XY] showed that if R is 'nondegenerate,' then (6.4) is a sufficient condition.
For equation (6.1) on higher dimensional spheres, a major difficulty is that multiple blow-ups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations.
In dimensions higher than 3, the 'nondegeneracy' condition is no longer sufficient to guarantee the existence of a solution. This was illustrated by a counterexample constructed by Bianchi in [Bi]. He found a positive rotationally symmetric function R on S⁴ which is nondegenerate and nonmonotone, and for which the problem admits no solution. In this situation, a more proper condition is the so-called 'flatness condition.' Roughly speaking, it requires that at every critical point of R, the derivatives of R up to order (n−2) vanish, while some higher derivative of order less than n is distinct from 0. For n = 3, the 'nondegeneracy' condition is a special case of the 'flatness condition.' Although people wonder whether the 'flatness condition' is necessary, it is still widely used today (see [CY3], [Li2], and [SZ]) as a standard assumption to guarantee the existence of a solution. The above-mentioned counterexample of Bianchi seems to suggest that the 'flatness condition' is somewhat sharp.
Now, a natural question is:
Under the 'flatness condition,' is (6.4) a sufficient condition?
In this section, we answer the question affirmatively. We prove that, under the 'flatness condition,' (6.4) is a necessary and sufficient condition for (6.1) to have a solution. This is true in all dimensions n ≥ 3, and it applies to functions R with changing signs. Thus, we essentially answer the open question (b) posed by Kazdan.
There are many versions of 'flatness conditions'; a general one was presented in [Li2] by Y. Li. Here, to better illustrate the idea, we only state a typical and easy-to-verify one in the following theorem.

Theorem 6.1.1 Let n ≥ 3. Let R = R(r) be rotationally symmetric and satisfy the following flatness condition near every positive critical point r_o:

R(r) = R(r_o) + a|r − r_o|^{α} + h(|r − r_o|), with a ≠ 0 and n−2 < α < n,   (6.5)

where h′(s) = o(s^{α−1}).
Then a necessary and sufficient condition for equation (6.1) to have a solution is that

R′(r) changes signs in the regions where R > 0.

The theorem is proved by a variational approach. We blend our new ideas with ideas in [XY], [Ba], [Li2], and [SZ]. We use the 'center of mass' to define neighborhoods of 'critical points at infinity,' obtain some quite sharp and clean estimates in those neighborhoods, and construct a max-mini variational scheme at subcritical levels, then approach the critical level.
Outline of the proof of Theorem 6.1.1.
Let γ_n = n(n−2)/4 and τ = (n+2)/(n−2). We first find a positive solution u_p of the subcritical equation

−Δu + γ_n u = R(r)u^{p}   (6.6)

for each p < τ and close to τ. Then we let p → τ and take the limit.
To find a solution of equation (6.6), we construct a max-mini variational scheme. Let

J_p(u) := ∫_{S^n} Ru^{p+1} dV

and

E(u) := ∫_{S^n} (|∇u|² + γ_n u²) dV.

We seek critical points of J_p(u) under the constraint

H = {u ∈ H¹(S^n) | E(u) = γ_n|S^n|, u ≥ 0},

where |S^n| is the volume of S^n.
If R has only one positive local maximum, then by condition (6.4), at each of the poles R is either nonpositive or has a local minimum. Then, similar to the approach in [C], we seek a solution by minimizing the functional within a family of rotationally symmetric functions in H.
In the following, we assume that R has at least two positive local maxima. In this case, the solutions we seek are not necessarily symmetric.
Our scheme is based on the following key estimates on the values of the functional J_p(u) in a neighborhood of the 'critical points at infinity' associated with each local maximum of R. Let r₁ be a positive local maximum of R. We prove that there is an open set G₁ ⊂ H (independent of p) such that on the boundary ∂G₁ of G₁ we have

J_p(u) ≤ R(r₁)|S^n| − δ,   (6.7)

while there is a function ψ₁ ∈ G₁ such that

J_p(ψ₁) > R(r₁)|S^n| − δ/2.   (6.8)

Here δ > 0 is independent of p. Roughly speaking, we have a kind of 'mountain pass' associated to each local maximum of R. The set G₁ is defined by using the 'center of mass.'
Let r₁ and r₂ be the two smallest positive local maxima of R. Let ψ₁ and ψ₂ be the two functions defined by (6.8), associated to r₁ and r₂. Let γ be a path in H connecting ψ₁ and ψ₂, and let Γ be the family of all such paths.
Define

c_p = sup_{γ∈Γ} min_{u∈γ} J_p(u).   (6.9)
For each p < τ, by compactness, there is a critical point u_p of J_p(·) such that

J_p(u_p) = c_p.

Obviously, a constant multiple of u_p is a solution of (6.6). Moreover, by (6.7),

J_p(u_p) ≤ R(r_i)|S^n| − δ   (6.10)

for any positive local maximum r_i of R.
To find a solution of (6.1), we let p → τ and take the limit. To show the convergence of a subsequence of {u_p}, we establish a priori bounds for the solutions in the following order.
(i) In the region where R < 0: this is done by the 'Kelvin transform' and a maximum principle.
(ii) In the region where R is small: this is mainly due to the boundedness of the energy E(u_p).
(iii) In the region where R is positively bounded away from 0: first, due to the energy bound, {u_p} can blow up at at most finitely many points. Using a Pohozaev-type identity, we show that the sequence can only blow up at one point, and that the point must be a local maximum of R. Finally, we use (6.10) to argue that even a one-point blow-up is impossible, and thus establish an a priori bound for a subsequence of {u_p}.
In subsection 2, we carry out the max-mini variational scheme and obtain a solution u_p of (6.6) for each p.
In subsection 3, we establish a priori estimates on the solution sequence {u_p}.
6.2 The Variational Approach

In this section, we construct a max-mini variational scheme to find a solution of

−Δu + γ_n u = R(r)u^{p}   (6.11)

for each p < τ := (n+2)/(n−2).
Let

J_p(u) := ∫_{S^n} Ru^{p+1} dV

and

E(u) := ∫_{S^n} (|∇u|² + γ_n u²) dV.

Let

(u, v) = ∫_{S^n} (∇u·∇v + γ_n uv) dV

be the inner product in the Hilbert space H¹(S^n), and

‖u‖ := √(E(u))

the corresponding norm.
We seek critical points of J_p(u) under the constraint

H := {u ∈ H¹(S^n) | E(u) = E(1) = γ_n|S^n|, u ≥ 0},

where |S^n| is the volume of S^n.
One can easily see that a critical point of J_p in H, multiplied by a suitable constant, is a solution of (6.11).
We divide the rest of the section into two parts.
First, we establish the key estimates (6.7) and (6.8).
Then, we carry out the max-mini variational scheme.
6.2.1 Estimate the Values of the Functional

To construct a max-mini variational scheme, we first show that there is a kind of 'mountain pass' associated to each positive local maximum of R. Unlike the classical ones, these 'mountain passes' lie in neighborhoods of the 'critical points at infinity' (see Propositions 6.2.2 and 6.2.1 below).
Choose a coordinate system in R^{n+1} so that the south pole of S^n is at the origin O and the center of the ball B^{n+1} is at (0, ..., 0, 1). As usual, we use |·| to denote the distance in R^{n+1}.
Define the center of mass of u as

q(u) := (∫_{S^n} x u^{τ+1}(x) dV) / (∫_{S^n} u^{τ+1}(x) dV).   (6.12)

It is actually the center of mass of the sphere S^n with density u^{τ+1}(x) at the point x.
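For a rotationally symmetric u, the center of mass (6.12) lies on the symmetry axis, so it is determined by a single one-dimensional integral in the polar angle r. The sketch below computes this height for the standard bubbles of (6.14) in dimension n = 3 and watches q(u) slide from the center of the ball toward the south pole as λ → 0; the grid and the sample values of λ are our own choices.

```python
import numpy as np

n = 3                                        # dimension of the sphere S^n
tau = (n + 2)/(n - 2)
r = np.linspace(1e-6, np.pi - 1e-6, 200001)  # polar angle from the south pole

def com_height(lam):
    """Last coordinate of q(phi_{lam,O}) for the standard bubble (6.14).
    The south pole sits at the origin, so height 0 means concentration at O."""
    phi = (lam/(lam**2*np.cos(r/2)**2 + np.sin(r/2)**2))**((n - 2)/2)
    dens = phi**(tau + 1)*np.sin(r)**(n - 1)  # u^{tau+1} dV, up to |S^{n-1}|
    height = 1.0 - np.cos(r)                  # x_{n+1}-coordinate of the point (r, theta)
    return np.sum(height*dens)/np.sum(dens)   # uniform grid, so dr cancels

heights = [com_height(lam) for lam in [1.0, 0.3, 0.1, 0.03]]
print(heights)    # drifts from 1 (the center of the ball) toward 0 (the south pole)
```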
We recall some well-known facts in conformal geometry.
Let φ_q be the standard solution with its 'center of mass' at q ∈ B^{n+1}, that is, φ_q ∈ H satisfies

−Δu + γ_n u = γ_n u^{τ}.   (6.13)

We may also regard φ_q as depending on two parameters λ and q̃, where q̃ is the intersection of S^n with the ray passing through the center and the point q. When q̃ = O (the south pole of S^n), we can express, in the spherical polar coordinates x = (r, θ) of S^n (0 ≤ r ≤ π, θ ∈ S^{n−1}) centered at the south pole where r = 0,

φ_q = φ_{λ,q̃} = (λ/(λ²cos²(r/2) + sin²(r/2)))^{(n−2)/2},   (6.14)

with 0 < λ ≤ 1. As λ → 0, this family of functions blows up at the south pole while tending to zero elsewhere.
Correspondingly, there is a family of conformal transforms

T_q : H → H,   T_q u := u(h_λ(x))[det(dh_λ)]^{(n−2)/(2n)},   (6.15)

with

h_λ(r, θ) = (2 tan^{−1}(λ tan(r/2)), θ).

Here det(dh_λ) is the determinant of dh_λ, which can be expressed explicitly as

det(dh_λ) = (λ/(cos²(r/2) + λ²sin²(r/2)))^{n}.

It is well known that this family of conformal transforms leaves the equation (6.13), the energy E(·), and the integral ∫_{S^n} u^{τ+1} dV invariant. More generally and more precisely, we have
Lemma 6.2.1 For any u, v ∈ H¹(S^n) and for any nonnegative integer k, the following hold:

(i) (T_q u, T_q v) = (u, v).   (6.16)

(ii) ∫_{S^n} (T_q u)^{k}(T_q v)^{τ+1−k} dV = ∫_{S^n} u^{k}v^{τ+1−k} dV.   (6.17)

In particular,

E(T_q u) = E(u) and ∫_{S^n} (T_q u)^{τ+1} dV = ∫_{S^n} u^{τ+1} dV.

Proof. (i) We will use a property of the conformal Laplacian to prove the invariance of the energy,

E(T_q u) = E(u);

the proof of the full identity (6.16) is similar.
On an n-dimensional Riemannian manifold (M, g),

L_g := Δ_g − c(n)R_g

is called a conformal Laplacian, where c(n) = (n−2)/(4(n−1)), and Δ_g and R_g are the Laplace–Beltrami operator and the scalar curvature associated with the metric g, respectively.
The conformal Laplacian has the following well-known invariance property under a conformal change of metrics. For ĝ = w^{4/(n−2)}g, w > 0, we have

L_ĝ u = w^{−(n+2)/(n−2)} L_g(uw),   for all u ∈ C^∞(M).   (6.18)

Interested readers may find its proof in [Bes].
If M is a compact manifold without boundary, then multiplying both sides of (6.18) by u and integrating by parts, one arrives at

∫_M {|∇_ĝ u|² + c(n)R_ĝ|u|²} dV_ĝ = ∫_M {|∇_g(uw)|² + c(n)R_g|uw|²} dV_g.   (6.19)
In our situation, we choose g to be the Euclidean metric ḡ on R^n with ḡ_ij = δ_ij, and ĝ = g_o, the standard metric on S^n. Then it is well known that

g_o = w^{4/(n−2)}(x) ḡ

with

w(x) = (4/(4 + |x|²))^{(n−2)/2}

satisfying the equation

−Δ̄w = γ_n w^{(n+2)/(n−2)}.

We also have

c(n)R_{g_o} = γ_n := n(n−2)/4

and

c(n)R_{ḡ} = c(n)·0 = 0.

Now formula (6.19) becomes

∫_{S^n} {|∇_o u|² + γ_n|u|²} dV_o = ∫_{R^n} |∇̄(uw)|² dx,   (6.20)

where ∇_o and ∇̄ are the gradient operators associated with the metrics g_o and ḡ, respectively.
Again, let q̃ be the intersection of S^n with the ray passing through the center of the sphere and the point q. Make a stereographic projection from S^n to R^n as shown in the figure below, where q̃ is placed at the origin of R^n.

Under this projection, the point (r, θ) ∈ S^n becomes x = (x_1, …, x_n) ∈ R^n, with

|x|/2 = tan(r/2)

and

(T_q u)(x) = u(λx) ( λ(4 + |x|²) / (4 + λ²|x|²) )^{(n−2)/2}.
It follows from this and (6.20) that

E(T_q u) = ∫_{R^n} { |∇_o [ u(λx) ( λ(4+|x|²)/(4+λ²|x|²) )^{(n−2)/2} ]|² + γ_n | u(λx) ( λ(4+|x|²)/(4+λ²|x|²) )^{(n−2)/2} |² } dV_{g_o}

= ∫_{R^n} |∇̄[ u(λx) ( λ(4+|x|²)/(4+λ²|x|²) )^{(n−2)/2} w(x) ]|² dx

= ∫_{R^n} |∇̄[ u(λx) ( 4λ/(4+λ²|x|²) )^{(n−2)/2} ]|² dx

= ∫_{R^n} |∇̄[ u(y) ( 4/(4+|y|²) )^{(n−2)/2} ]|² dy

= ∫_{R^n} |∇̄( u(y) w(y) )|² dy

= E(u).

(ii) This can be derived immediately by making the change of variable y = h_λ(x).

This completes the proof of the lemma. □

One can also verify that

T_q φ_q = 1.
The relations between q and λ, q̃, and T_q for q̃ ≠ O can be expressed in a similar way.

We now carry out the estimates near the south pole O = (0, θ), which we assume to be a positive local maximum. The estimates near other positive local maxima are similar. Our conditions on R imply that

R(r) = R(0) − a r^α, for some a > 0, n−2 < α < n,        (6.21)

in a small neighborhood of O.

Define

Σ = { u ∈ H : |q(u)| ≤ ρ_o, ‖v‖ := min_{t,q} ‖u − tφ_q‖ ≤ ρ_o }.        (6.22)

Notice that the 'centers of mass' of functions in Σ are near the south pole O. This is a neighborhood of the 'critical points at infinity' corresponding to O. We will estimate the functional J_p in this neighborhood.

We first notice that the supremum of J_p in Σ approaches R(0)|S^n| as p→τ. More precisely, we have

Proposition 6.2.1 For any δ_1 > 0, there is a p_1 ≤ τ, such that for all τ ≥ p ≥ p_1,

sup_Σ J_p(u) > R(0)|S^n| − δ_1.        (6.23)

Then we show that on the boundary of Σ, J_p is strictly less than and bounded away from R(0)|S^n|.

Proposition 6.2.2 There exist positive constants ρ_o, p_o, and δ_o, such that for all p ≥ p_o and for all u ∈ ∂Σ, it holds that

J_p(u) ≤ R(0)|S^n| − δ_o.        (6.24)
Proof of Proposition 6.2.1.
Through a straightforward calculation, one can show that

J_τ(φ_{λ,O}) → R(0)|S^n|, as λ→0.        (6.25)

For a given δ_1 > 0, by (6.25), one can choose λ_o such that φ_{λ_o,O} ∈ Σ and

J_τ(φ_{λ_o,O}) > R(0)|S^n| − δ_1/2.        (6.26)

It is easy to see that for the fixed function φ_{λ_o,O}, J_p is continuous with respect to p. Hence, by (6.26), there exists a p_1 such that for all p ≥ p_1,
J_p(φ_{λ_o,O}) > R(0)|S^n| − δ_1.

This completes the proof of Proposition 6.2.1. □

The proof of Proposition 6.2.2 is rather complex. We first estimate J_p for a family of standard functions in Σ.
Lemma 6.2.2 For p sufficiently close to τ, and for λ and |q̃| sufficiently small, we have

J_p(φ_{λ,q̃}) ≤ ( R(0) − C_1|q̃|^α ) |S^n| (1 + o_p(1)) − C_1 λ^{α+δ_p},        (6.27)

where δ_p := τ − p and o_p(1)→0 as p→τ.

Proof of Lemma 6.2.2. Let ε > 0 be small, such that (6.21) holds in B_ε(O) ⊂ S^n. We express

J_p(φ_{λ,q̃}) = ∫_{S^n} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} + ∫_{S^n \ B_ε(O)}.        (6.28)

From the definition (6.14) of φ_{λ,q̃} and the boundedness of R, we obtain the smallness of the second integral:

∫_{S^n \ B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV ≤ C_2 λ^{n − δ_p(n−2)/2}, for q̃ ∈ B_{ε/2}(O).        (6.29)
To estimate the first integral, we work in local coordinates centered at O:

∫_{B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} R(x + q̃) φ_{λ,O}^{p+1} dV

≤ ∫_{B_ε(O)} [ R(0) − a|x + q̃|^α ] φ_{λ,O}^{p+1} dV

≤ ∫_{B_ε(O)} [ R(0) − C_3|x|^α − C_3|q̃|^α ] φ_{λ,O}^{p+1} dV

≤ [ R(0) − C_3|q̃|^α ] |S^n| [1 + o_p(1)] − C_4 λ^{α + δ_p(n−2)/2}.        (6.30)

Here we have used the fact that |x + q̃|^α ≥ c(|x|^α + |q̃|^α) for some c > 0 in one half of the ball B_ε(O), together with the symmetry of φ_{λ,O}.

Noticing that α < n and δ_p → 0, we conclude that (6.28), (6.29), and (6.30) imply (6.27). This completes the proof of the Lemma. □
To estimate J_p for all u ∈ ∂Σ, we also need the following two lemmas, which describe some useful properties of the set Σ.

Lemma 6.2.3 (On the 'center of mass')
(i) Let q, λ, and q̃ be defined by (6.14). Then for sufficiently small q,

|q|² ≤ C( |q̃|² + λ⁴ ).        (6.31)

(ii) Let ρ_o, q, and v be defined by (6.22). Then for ρ_o sufficiently small,

ρ_o ≤ |q| + C‖v‖.        (6.32)
Lemma 6.2.4 (Orthogonality)
Let u ∈ Σ and let v = u − t_o φ_{q_o} be defined by (6.22). Let T_{q_o} be the conformal transform such that

T_{q_o} φ_{q_o} = 1.        (6.33)

Then

∫_{S^n} T_{q_o} v dV = 0        (6.34)

and

∫_{S^n} T_{q_o} v ψ_i(x) dV = 0, i = 1, 2, …, n,        (6.35)

where the ψ_i are the first spherical harmonic functions on S^n:

−Δψ_i = n ψ_i(x), i = 1, 2, …, n.

If we place the center of the sphere S^n at the origin of R^{n+1} (not in our case), then for x = (x_1, …, x_n),

ψ_i(x) = x_i, i = 1, 2, …, n.
Proof of Lemma 6.2.3.
(i) From the definition φ_q = φ_{λ,q̃} and (6.14), and by an elementary calculation, one arrives at

|q − q̃| ∼ λ², for λ sufficiently small.        (6.36)

Drawing a perpendicular line segment from O to the segment through q and q̃, one sees that (6.31) is a direct consequence of the triangle inequality and (6.36).

(ii) For any u = tφ_q + v ∈ ∂Σ, the boundary of Σ, by the definition we have either ‖v‖ = ρ_o or |q(u)| = ρ_o. If ‖v‖ = ρ_o, then we are done. Hence we assume that |q(u)| = ρ_o. It follows from the definition of the 'center of mass' that

ρ_o = | ∫_{S^n} x (tφ_q + v)^{τ+1} / ∫_{S^n} (tφ_q + v)^{τ+1} |.        (6.37)

Noticing that ‖v‖ ≤ ρ_o is very small, we can expand the integrals in (6.37) in terms of v:

ρ_o = | ( t^{τ+1} ∫ x φ_q^{τ+1} + (τ+1) t^τ ∫ x φ_q^τ v + o(‖v‖) ) / ( t^{τ+1} ∫ φ_q^{τ+1} + (τ+1) t^τ ∫ φ_q^τ v + o(‖v‖) ) |

≤ | ∫ x φ_q^{τ+1} / ∫ φ_q^{τ+1} | + C‖v‖ ≤ |q| + C‖v‖.

Here we have used the fact that ‖v‖ is small and that t is close to 1. This completes the proof of Lemma 6.2.3. □
Proof of Lemma 6.2.4.
Write L = −Δ + γ_n. We use the fact that v = u − t_o φ_{q_o} minimizes ‖u − tφ_q‖ among all possible values of t and q.

(i) By a variation with respect to t, we have

0 = (v, φ_{q_o}) = ∫_{S^n} v L φ_{q_o} dV.        (6.38)

It follows from (6.13) that

∫_{S^n} v φ_{q_o}^τ dV = 0.

Now, apply the conformal transform T_{q_o} to both v and φ_{q_o} in the above integral. By the invariance property of the transform, we arrive at (6.34).

(ii) To prove (6.35), we make a variation with respect to q:

0 = ∇_q ∫ v L φ_q dV |_{q=q_o} = γ_n ∇_q ∫ v φ_q^τ dV |_{q=q_o}

= γ_n ∇_q ∫ T_{q_o} v (T_{q_o} φ_q)^τ dV |_{q=q_o} = γ_n ∇_q ∫ T_{q_o} v (φ_{q−q_o})^τ dV |_{q=q_o}

= γ_n τ ∫ T_{q_o} v φ_{q−q_o}^{τ−1} ∇_q φ_{q−q_o} dV |_{q=q_o} = γ_n τ ∫ T_{q_o} v (ψ_1, …, ψ_n) dV.

This completes the proof of the Lemma. □
Proof of Proposition 6.2.2.
Make a perturbation of R(x):

R̄(x) = { R(x), x ∈ B_{2ρ_o}(O);  m, x ∈ S^n \ B_{2ρ_o}(O), }        (6.39)

where m = R|_{∂B_{2ρ_o}(O)}.

Define

J̄_p(u) = ∫_{S^n} R̄(x) u^{p+1} dV.

The estimate is divided into two steps. In Step 1, we show that the difference between J_p(u) and J̄_τ(u) is very small. In Step 2, we estimate J̄_τ(u).

Step 1.
First we show that

J̄_p(u) ≤ J̄_τ(u)(1 + o_p(1)),        (6.40)

where o_p(1)→0 as p→τ.
6.2 The Variational Approach 169
In fact, by the Hölder inequality,

∫ R̄ u^{p+1} ≤ ( ∫ R̄^{(τ+1)/(p+1)} u^{τ+1} dV )^{(p+1)/(τ+1)} |S^n|^{(τ−p)/(τ+1)}

≤ ( ∫ R̄ u^{τ+1} dV )^{(p+1)/(τ+1)} [ R(0)|S^n| ]^{(τ−p)/(τ+1)}.

This implies (6.40).

Now we estimate the difference between J_p(u) and J̄_p(u):

| J_p(u) − J̄_p(u) | ≤ C_1 ∫_{S^n \ B_{2ρ_o}(O)} u^{p+1}

≤ C_2 ∫_{S^n \ B_{2ρ_o}(O)} (tφ_q)^{p+1} + C_2 ∫_{S^n \ B_{2ρ_o}(O)} v^{p+1}

≤ C_3 λ^{n − δ_p(n−2)/2} + C_3 ‖v‖^{p+1}.        (6.41)
Step 2.
By virtue of (6.40) and (6.41), we now only need to estimate J̄_τ(u). For any u ∈ ∂Σ, write u = v + tφ_q. (6.38) implies that v and φ_q are orthogonal with respect to the inner product associated with E(·), that is,

∫_{S^n} ( ∇v·∇φ_q + γ_n v φ_q ) dV = 0.

Noticing that ‖u‖² := E(u), we have

‖u‖² = ‖v‖² + t² ‖φ_q‖².

Using the fact that both u and φ_q belong to H, with

‖u‖² = γ_n |S^n| = ‖φ_q‖²,

we derive

t² = 1 − ‖v‖² / ( γ_n |S^n| ).        (6.42)

It follows that

∫_{S^n} R̄(x) u^{τ+1} ≤ t^{τ+1} ∫_{S^n} R̄(x) φ_q^{τ+1} + (τ+1) ∫_{S^n} R̄(x) φ_q^τ v + ( τ(τ+1)/2 ) ∫_{S^n} R̄(x) φ_q^{τ−1} v² + o(‖v‖²)

= I_1 + (τ+1) I_2 + ( τ(τ+1)/2 ) I_3 + o(‖v‖²).        (6.43)

To estimate I_1, we use (6.42) and (6.27):
I_1 ≤ ( 1 − ((τ+1)/2) ‖v‖²/(γ_n|S^n|) ) R(0)|S^n| ( 1 − c_1|q̃|^α − c_1 λ^α ) + O(λ^n) + o(‖v‖²),        (6.44)

for some positive constant c_1.
To estimate I_2, we use the H¹ orthogonality between v and φ_q (see (6.38)):

I_2 = ∫_{S^n} ( R̄(x) − m ) φ_q^τ v dV = ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) φ_q^τ v dV

= (1/γ_n) ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) Lφ_q v dV

≤ C ρ_o^α | ∫_{B_{2ρ_o}(O)} Lφ_q v dV | = C ρ_o^α |(φ_q, v)|

≤ C ρ_o^α ‖φ_q‖ ‖v‖ ≤ C_4 ρ_o^α ‖v‖.        (6.45)
To estimate I_3, we use (6.34) and (6.35). It is well-known that the first and second eigenspaces of −Δ on S^n are, respectively, the constants and the span of the ψ_i, corresponding to the eigenvalues 0 and n. Now (6.34) and (6.35) imply that T_q v is orthogonal to these two eigenspaces, and hence

∫_{S^n} |∇(T_q v)|² dV ≥ λ_2 ∫_{S^n} |T_q v|² dV,

where λ_2 = n + c_2, for some positive number c_2, is the second nonzero eigenvalue of −Δ. Consequently,

‖v‖² = ‖T_q v‖² ≥ ( γ_n + n + c_2 ) ∫_{S^n} (T_q v)².
It follows that

I_3 ≤ R(0) ∫_{S^n} φ_q^{τ−1} v² dV = R(0) ∫_{S^n} (T_q φ_q)^{τ−1} (T_q v)² dV

= R(0) ∫_{S^n} (T_q v)² ≤ ( R(0) / ( γ_n + n + c_2 ) ) ‖v‖².        (6.46)
Now (6.44), (6.45), and (6.46) imply that there is a β > 0 such that

J̄_τ(u) ≤ R(0)|S^n| [ 1 − β( |q̃|^α + λ^α + ‖v‖² ) ].        (6.47)

Here we have used the facts that ‖v‖ is very small and that ρ_o^α ‖v‖ can be controlled by |q̃|^α + λ^α (see Lemma 6.2.2). Notice that the positive constant c_2 in (6.46) has played a key role: without it, the coefficient of ‖v‖² in (6.47) would be 0, due to the fact that γ_n = n(n−2)/4, so that τγ_n = γ_n + n exactly.
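The exact cancellation behind the role of c_2 is elementary to verify. The sketch below (our own check, not part of the text) confirms, for symbolic n, that τγ_n = γ_n + n when τ = (n+2)/(n−2) and γ_n = n(n−2)/4:

```python
import sympy as sp

# With tau = (n+2)/(n-2) and gamma_n = n(n-2)/4,
# tau*gamma_n - (gamma_n + n) vanishes identically in n.
# This is why, without the gap constant c_2, the ||v||^2
# contributions coming from I_1 and I_3 would cancel exactly.
n = sp.symbols('n', positive=True)
tau = (n + 2)/(n - 2)
gamma_n = n*(n - 2)/4

identity = sp.simplify(tau*gamma_n - (gamma_n + n))
```

Indeed, τγ_n = n(n+2)/4 and γ_n + n = n(n+2)/4, term by term.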
For any u ∈ ∂Σ, we have either ‖v‖ = ρ_o or |q(u)| = ρ_o. By (6.31) and (6.47), in either case there exists a δ_o > 0 such that
J̄_τ(u) ≤ R(0)|S^n| − 2δ_o.        (6.48)

Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48). This completes the proof of the Proposition. □
6.2.2 The Variational Scheme

In this part, we construct variational schemes to show the existence of a solution to equation (6.11) for each p < τ.

Case (i): R has only one positive local maximum.

In this case, condition (6.4) implies that R must have local minima at both poles. Similar to the ideas in [C], we seek a solution by maximizing the functional J_p in a class of rotationally symmetric functions:

H_r = { u ∈ H : u = u(r) }.        (6.49)

Obviously, J_p is bounded from above in H_r, and it is well-known that the variational scheme is compact for each p < τ (see the argument in Chapter 2). Therefore any maximizing sequence possesses a convergent subsequence in H_r, and the limiting function, multiplied by a suitable constant, is a solution of (6.11).

Case (ii): R has at least two positive local maxima.

Let r_1 and r_2 be the two smallest positive local maxima of R. By Propositions 6.2.1 and 6.2.2, there exist two disjoint open sets Σ_1, Σ_2 ⊂ H, ψ_i ∈ Σ_i, p_o < τ, and δ > 0, such that for all p ≥ p_o,

J_p(ψ_i) > R(r_i)|S^n| − δ/2, i = 1, 2;        (6.50)

and

J_p(u) ≤ R(r_i)|S^n| − δ, ∀ u ∈ ∂Σ_i, i = 1, 2.        (6.51)

Let γ be a path in H joining ψ_1 and ψ_2, and let Γ be the family of all such paths. Define

c_p = sup_{γ∈Γ} min_{u∈γ} J_p(u).        (6.52)

For each fixed p < τ, by the well-known compactness, there exists a critical (saddle) point u_p of J_p with

J_p(u_p) = c_p.

Moreover, due to (6.51) and the definition of c_p, we have

J_p(u_p) ≤ min_{i=1,2} R(r_i)|S^n| − δ.        (6.53)

One can easily see that a critical point of J_p in H, multiplied by a constant, is a solution of (6.11), and that for all p close to τ these constant multiples are bounded from above and bounded away from 0.
6.3 The A Priori Estimates

In the previous section, we showed the existence of a positive solution u_p to the subcritical equation (6.11) for each p < τ. In this section, we prove that as p→τ, there is a subsequence of {u_p} which converges to a solution u_o of (6.1). The convergence is based on the following a priori estimate.

Theorem 6.3.1 Assume that R satisfies condition (6.5) in Theorem 6.1.1. Then there exists p_o < τ such that for all p_o < p < τ, the solutions of (6.11) obtained by the variational scheme are uniformly bounded.

To prove the Theorem, we estimate the solutions in three regions: where R < 0, where R is close to 0, and where R > 0.

6.3.1 In the Region Where R < 0

The a priori bound of the solutions is stated in the following proposition, which is a direct consequence of Proposition 6.2.2 in our paper [CL6].

Proposition 6.3.1 For all 1 < p ≤ τ, the solutions of (6.11) are uniformly bounded in the regions where R ≤ −δ, for every δ > 0. The bound depends only on δ, dist({x : R(x) ≤ −δ}, {x : R(x) = 0}), and the lower bound of R.
6.3.2 In the Region Where R is Small

In [CL6], we also obtained estimates in this region by the method of moving planes. However, in that paper we assumed that R is bounded away from zero. In [CL7], for rotationally symmetric functions R on S³, we removed this condition by using a blow-up analysis near R(x) = 0. That method may be applied to higher dimensions with a few modifications. Within the variational framework of the present paper, however, the a priori estimate is simpler; it rests mainly on the energy bound.

Proposition 6.3.2 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exist a p_o < τ and a δ > 0 such that for all p_o < p ≤ τ, the {u_p} are uniformly bounded in the regions where |R(x)| ≤ δ.

Proof. It is easy to see that the energies E(u_p) are bounded for all p. Now suppose that the conclusion of the Proposition is violated. Then there exist a subsequence {u_i} with u_i = u_{p_i}, p_i→τ, and a sequence of points {x_i}, with x_i→x_o and R(x_o) = 0, such that

u_i(x_i)→∞.
We will use a rescaling argument to derive a contradiction. Since x_i may not be a local maximum of u_i, we need to choose a point near x_i which is almost a local maximum. To this end, let K be any large number and let

r_i = 2K [u_i(x_i)]^{−(p_i−1)/2}.

In a small neighborhood of x_o, choose local coordinates and let

s_i(x) = u_i(x) ( r_i − |x − x_i| )^{2/(p_i−1)}.

Let a_i be the maximum point of s_i(x) in B_{r_i}(x_i), and let λ_i = [u_i(a_i)]^{−(p_i−1)/2}. Then from the definition of a_i, one can easily verify that the ball B_{λ_iK}(a_i) is in the interior of B_{r_i}(x_i), and that the values of u_i(x) in B_{λ_iK}(a_i) are bounded above by a constant multiple of u_i(a_i).

To see the first part, we use the fact that s_i(a_i) ≥ s_i(x_i). From this, we derive

u_i(a_i) ( r_i − |a_i − x_i| )^{2/(p_i−1)} ≥ u_i(x_i) r_i^{2/(p_i−1)} = (2K)^{2/(p_i−1)}.

It follows that

( r_i − |a_i − x_i| )/2 ≥ λ_i K,        (6.54)

hence

B_{λ_iK}(a_i) ⊂ B_{r_i}(x_i).

To see the second part, we use s_i(a_i) ≥ s_i(x) and derive from (6.54) that

u_i(x) ≤ u_i(a_i) ( (r_i − |a_i − x_i|)/(r_i − |x − x_i|) )^{2/(p_i−1)} ≤ u_i(a_i) · 2^{2/(p_i−1)}, ∀ x ∈ B_{λ_iK}(a_i).
Now we can make a rescaling:

v_i(x) = (1/u_i(a_i)) u_i(λ_i x + a_i).

Obviously v_i is bounded in B_K(0), and it follows by a standard argument that {v_i} converges to a harmonic function v_o in B_K(0) ⊂ R^n, with v_o(0) = 1. Consequently, for i sufficiently large,

∫_{B_K(0)} v_i(x)^{τ+1} dx ≥ cK^n,        (6.55)

for some c > 0.

On the other hand, the boundedness of the energy E(u_i) implies that

∫_{S^n} u_i^{τ+1} dV ≤ C,        (6.56)

for some constant C. By a straightforward calculation, one can verify that, for any K > 0,

∫_{S^n} u_i^{τ+1} dV ≥ ∫_{B_K(0)} v_i^{τ+1} dx.        (6.57)

Obviously, (6.56) and (6.57) contradict (6.55) for K large. This completes the proof of the Proposition. □
6.3.3 In the Regions Where R > 0

Proposition 6.3.3 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exists a p_o < τ such that for all p_o < p ≤ τ and any ε > 0, the {u_p} are uniformly bounded in the regions where R(x) ≥ ε.

Proof. The proof is divided into three steps. In Step 1, we argue that the solutions can only blow up at finitely many points, because of the energy bound. In Step 2, we show that there can be no more than one blow-up point, and that this point must be a local maximum of R. This is done mainly by using a Pohozaev type identity. In Step 3, we use (6.53) to conclude that even one-point blow-up is impossible.

Step 1. The argument is standard. Let {x_i} be a sequence of points such that u_i(x_i)→∞ and x_i→x_o with R(x_o) > 0. Let r_i, s_i(x), and v_i(x) be defined as in the proof of Proposition 6.3.2. Then, by an argument similar to the one there, {v_i(x)} converges to a standard function v_o(x) in R^n with

−Δv_o = R(x_o) v_o^τ.

It follows that

∫_{B_{r_i}(x_o)} u_i^{τ+1} dV ≥ c_o > 0.

Because the total energy of the u_i is bounded, there can be only finitely many such x_o. Therefore, {u_i} can blow up at only finitely many points.
Step 2. We have shown that {u_i} has only isolated blow-up points. As a consequence of a result in [Li2] (see also [CL6] or [SZ]), we have

Lemma 6.3.1 Let R satisfy the 'flatness condition' in Theorem 6.1.1. Then {u_i} can have at most one simple blow-up point, and this point must be a local maximum of R. Moreover, {u_i} behaves almost like a family of the standard functions φ_{λ_i,q̃}, with λ_i = (max u_i)^{−2/(n−2)} and with q̃ at the maximum of R.

Step 3. Finally, we show that even one-point blow-up is impossible. For convenience, let {u_i} be the sequence of critical points of J_{p_i} obtained in Section 2. From the proof of Proposition 6.2.2, one obtains

J_τ(u_i) ≤ min_k R(r_k)|S^n| − δ,        (6.58)

for all positive local maxima r_k of R, because each u_i is the minimum of J_{p_i} on a path. Now if {u_i} were to blow up at x_o, then by Lemma 6.3.1 we would have

J_τ(u_i)→R(x_o)|S^n|.

This contradicts (6.58).

Therefore, the sequence {u_i} is uniformly bounded and possesses a subsequence converging to a solution of (6.1). This completes the proof of Theorem 6.1.1. □
7
Maximum Principles
7.1 Introduction
7.2 Weak Maximum Principle
7.3 The Hopf Lemma and Strong Maximum Principle
7.4 Maximum Principle Based on Comparison
7.5 A Maximum Principle for Integral Equations
7.1 Introduction
As a simple example of a maximum principle, let's consider a C² function u(x) of one independent variable x. It is well-known from calculus that at a local maximum point x_o of u, we must have

u″(x_o) ≤ 0.

Based on this observation, we have the following simplest version of the maximum principle:

Assume that u″(x) > 0 in an open interval (a, b); then u cannot have any interior maximum in the interval.

One can also see this geometrically: since under the condition u″(x) > 0 the graph is concave up, it cannot have any local maximum.

More generally, for a C² function u(x) of n independent variables x = (x_1, …, x_n), at a local maximum x_o we have

D²u(x_o) := ( u_{x_i x_j}(x_o) ) ≤ 0,

that is, the symmetric matrix of second derivatives is nonpositive definite at the point x_o. Correspondingly, the simplest version of the maximum principle reads:
If

(a_{ij}(x)) ≥ 0 and Σ_{ij} a_{ij}(x) u_{x_i x_j}(x) > 0        (7.1)

in an open bounded domain Ω, then u cannot achieve its maximum in the interior of the domain.

An interesting special case is when (a_{ij}(x)) is the identity matrix, in which case condition (7.1) becomes

Δu > 0.

Unlike its one-dimensional counterpart, condition (7.1) no longer implies that the graph of u(x) is concave up. A simple counterexample is

u(x_1, x_2) = x_1² − (1/2) x_2².

One can easily see from the graph of this function that (0, 0) is a saddle point, even though Δu = 2 − 1 = 1 > 0.

In this case, the validity of the maximum principle comes from the simple algebraic fact:

For any two n × n matrices A and B, if A ≥ 0 and B ≤ 0, then AB ≤ 0.
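A quick check of the counterexample (our own sketch, not part of the text): the Laplacian is positive while the Hessian is indefinite, so (0, 0) is a saddle rather than a point of upward concavity.

```python
import sympy as sp

# u = x1^2 - x2^2/2: Delta u = 2 - 1 = 1 > 0, yet the Hessian
# diag(2, -1) is indefinite, so the graph is not concave up.
x1, x2 = sp.symbols('x1 x2')
u = x1**2 - sp.Rational(1, 2)*x2**2

laplacian = sp.diff(u, x1, 2) + sp.diff(u, x2, 2)
hessian = sp.Matrix([[sp.diff(u, a, b) for b in (x1, x2)] for a in (x1, x2)])
eigs = sorted(hessian.eigenvals().keys())   # eigenvalues of D^2 u
```

The mixed signs of the eigenvalues are exactly what distinguishes (7.1) from one-dimensional concavity.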
In this chapter, we will introduce various maximum principles; most of them will be used in the method of moving planes in the next chapter. Besides this, there are numerous other applications, and we list some below.
i) Providing Estimates

Consider the boundary value problem

−Δu = f(x), x ∈ B_1(0) ⊂ R^n;  u(x) = 0, x ∈ ∂B_1(0).        (7.2)

If a ≤ f(x) ≤ b in B_1(0), then we can compare the solution u with the two functions

(a/2n)(1 − |x|²) and (b/2n)(1 − |x|²),

which satisfy the equation with f(x) replaced by a and b respectively, and which vanish on the boundary as u does. Now, applying the maximum principle for the −Δ operator (see Theorem 7.1.1 below), we obtain

(a/2n)(1 − |x|²) ≤ u(x) ≤ (b/2n)(1 − |x|²).
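The two barrier functions are easy to verify symbolically. The sketch below (our own check, with the dimension fixed at n = 3 for concreteness) confirms that v = (a/2n)(1 − |x|²) satisfies −Δv = a and vanishes on ∂B_1(0); replacing a by b gives the upper barrier in the same way.

```python
import sympy as sp

# v = a/(2n) * (1 - |x|^2) in dimension n = 3:
# -Delta v = a, and v = 0 wherever |x| = 1.
n = 3
a = sp.symbols('a')
x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)

v = a/(2*n)*(1 - sum(x**2 for x in xs))

neg_laplacian = sp.simplify(-sum(sp.diff(v, x, 2) for x in xs))
boundary_value = v.subs({x1: 1, x2: 0, x3: 0})   # sample point on |x| = 1
```

Each coordinate contributes −a/n to Δv, so the n terms sum to exactly −a.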
ii) Proving Uniqueness of Solutions

In the above example, if f(x) ≡ 0, then we can choose a = b = 0, and this implies that u ≡ 0. In other words, the solution of the boundary value problem (7.2) is unique.
iii) Establishing the Existence of Solutions
(a) For a linear equation such as (7.2) in any bounded open domain Ω, let
u(x) = sup φ(x)
where the sup is taken over all functions φ that satisfy the corresponding differential inequality

−Δφ ≤ f(x), x ∈ Ω;  φ(x) = 0, x ∈ ∂Ω.

Then u is a solution of (7.2).

(b) Now consider the nonlinear problem

−Δu = f(u), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.        (7.3)

Assume that f(·) is a smooth function with f′(·) ≥ 0. Suppose that there exist two functions u̲(x) ≤ ū(x) such that

−Δu̲ ≤ f(u̲) ≤ f(ū) ≤ −Δū.

These two functions are called sub- (or lower) and super- (or upper) solutions, respectively.

To seek a solution of problem (7.3), we use successive approximations. Let

−Δu_1 = f(u̲) and −Δu_{i+1} = f(u_i).

Then by the maximum principle, we have

u̲ ≤ u_1 ≤ u_2 ≤ ⋯ ≤ u_i ≤ ⋯ ≤ ū.

Let u be the limit of the sequence {u_i},

u(x) = lim u_i(x);

then u is a solution of the problem (7.3).
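The successive approximation scheme above is easy to realize numerically. The following sketch solves −u″ = f(u) on (0, 1) with zero boundary values by finite differences; the nonlinearity f(u) = 1 + tanh(u) (nondecreasing, so f′ ≥ 0 as required) and the grid are our own illustrative assumptions, not from the text.

```python
import numpy as np

# Monotone iteration for -u'' = f(u) on (0, 1), u(0) = u(1) = 0,
# with the illustrative nondecreasing nonlinearity f(u) = 1 + tanh(u).
def f(u):
    return 1.0 + np.tanh(u)

N = 200                      # number of interior grid points
h = 1.0/(N + 1)
# finite-difference matrix of -d^2/dx^2 with zero Dirichlet data
A = (2.0*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))/h**2

u = np.zeros(N)              # start from the subsolution u_0 = 0
iterates = [u]
for _ in range(30):
    u = np.linalg.solve(A, f(u))   # solve -u_{k+1}'' = f(u_k)
    iterates.append(u)

# the iterates increase, and the limit nearly solves -u'' = f(u)
monotone = all(np.all(iterates[k+1] >= iterates[k] - 1e-12)
               for k in range(len(iterates) - 1))
residual = np.max(np.abs(A @ u - f(u)))
```

Each step is a linear solve with the previous iterate in the right-hand side; the discrete maximum principle (A has an entrywise nonnegative inverse) together with f′ ≥ 0 is what forces the iterates to increase toward the solution, mirroring the chain of inequalities above.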
In Section 7.2, we introduce and prove the weak maximum principles.

Theorem 7.1.1 (Weak Maximum Principle for −Δ)
i) If

−Δu(x) ≥ 0, x ∈ Ω,

then

min_{Ω̄} u ≥ min_{∂Ω} u.

ii) If

−Δu(x) ≤ 0, x ∈ Ω,

then

max_{Ω̄} u ≤ max_{∂Ω} u.
This result can be extended to general uniformly elliptic operators. Let

D_i = ∂/∂x_i, D_{ij} = ∂²/(∂x_i ∂x_j).

Define

L = −Σ_{ij} a_{ij}(x) D_{ij} + Σ_i b_i(x) D_i + c(x).

Here we always assume that a_{ij}(x), b_i(x), and c(x) are bounded continuous functions on Ω̄. We say that L is uniformly elliptic if

a_{ij}(x) ξ_i ξ_j ≥ δ|ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and some δ > 0.

Theorem 7.1.2 (Weak Maximum Principle for L) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.
i) If Lu ≥ 0 in Ω, then

min_{Ω̄} u ≥ min_{∂Ω} u.

ii) If Lu ≤ 0 in Ω, then

max_{Ω̄} u ≤ max_{∂Ω} u.
These weak maximum principles tell us that the minima or maxima of u are attained at some points on the boundary ∂Ω. However, they do not exclude the possibility that the minima or maxima may also occur in the interior of Ω. Actually, this cannot happen unless u is constant, as we will see in the following.

Theorem 7.1.3 (Strong Maximum Principle for L with c(x) ≡ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.
i) If

Lu(x) ≥ 0, x ∈ Ω,

then u attains its minimum value only on ∂Ω unless u is constant.
ii) If

Lu(x) ≤ 0, x ∈ Ω,

then u attains its maximum value only on ∂Ω unless u is constant.

This maximum principle (as well as the weak one) can also be applied to the case c(x) ≥ 0, with slight modifications.
Theorem 7.1.4 (Strong Maximum Principle for L with c(x) ≥ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.
i) If

Lu(x) ≥ 0, x ∈ Ω,

then u cannot attain its nonpositive minimum in the interior of Ω unless u is constant.
ii) If

Lu(x) ≤ 0, x ∈ Ω,

then u cannot attain its nonnegative maximum in the interior of Ω unless u is constant.

We will prove these theorems in Section 7.3 by using the Hopf Lemma.

Notice that in the previous theorems we always require c(x) ≥ 0. Roughly speaking, maximum principles hold for 'positive' operators: −Δ is 'positive', and obviously so is −Δ + c(x) if c(x) ≥ 0. However, as we will see in the next chapter, in practical problems it occurs frequently that the condition c(x) ≥ 0 cannot be met. Do we really need c(x) ≥ 0? The answer is 'no'. Actually, if c(x) is not 'too negative', then the operator −Δ + c(x) can still remain 'positive' enough to ensure the maximum principle. This will be studied in Section 7.4, where we prove the 'Maximum Principles Based on Comparisons'.
Let φ be a positive function on Ω̄ satisfying

−Δφ + λ(x)φ ≥ 0.        (7.4)

Let u be a function such that

−Δu + c(x)u ≥ 0, x ∈ Ω;  u ≥ 0 on ∂Ω.        (7.5)

Theorem 7.1.5 (Maximum Principle Based on Comparison)
Assume that Ω is a bounded domain. If

c(x) > λ(x), ∀ x ∈ Ω,

then u ≥ 0 in Ω.

Also in Section 7.4, as consequences of Theorem 7.1.5, we derive the 'Narrow Region Principle' and the 'Decay at Infinity Principle'. These principles can be applied very conveniently in the 'Method of Moving Planes' to establish the symmetry of solutions of semilinear elliptic equations, as we will see in later sections.

In Section 7.5, we establish a maximum principle for integral inequalities.
7.2 Weak Maximum Principles

In this section, we prove the weak maximum principles.

Theorem 7.2.1 (Weak Maximum Principle for −Δ)
i) If

−Δu(x) ≥ 0, x ∈ Ω,        (7.6)

then

min_{Ω̄} u ≥ min_{∂Ω} u.        (7.7)

ii) If

−Δu(x) ≤ 0, x ∈ Ω,        (7.8)

then

max_{Ω̄} u ≤ max_{∂Ω} u.        (7.9)

Proof. Here we only present the proof of part i); an entirely similar argument works for part ii).

To better illustrate the idea, we will deal with the one-dimensional case and the higher-dimensional case separately.

First, let Ω be the interval (a, b). Then condition (7.6) becomes u″(x) ≤ 0. This implies that the graph of u(x) on (a, b) is concave downward, and therefore one can roughly see that the values of u(x) in (a, b) are larger than or equal to the minimum value of u at the endpoints (see Figure 2).
[Figure 2]
To prove the above observation rigorously, we first carry it out under the stronger assumption that

−u″(x) > 0.        (7.10)

Let m = min_{∂Ω} u. Suppose, contrary to (7.7), that there is a minimum x_o ∈ (a, b) of u such that u(x_o) < m. Then by the second-order Taylor expansion of u around x_o, we must have −u″(x_o) ≤ 0. This contradicts assumption (7.10).

Now, for u only satisfying the weaker condition (7.6), we consider a perturbation of u:

u_ε(x) = u(x) − εx².

Obviously, for each ε > 0, u_ε(x) satisfies the stronger condition (7.10), and hence
min_{Ω̄} u_ε ≥ min_{∂Ω} u_ε.

Now, letting ε→0, we arrive at (7.7).
To prove the theorem in dimensions higher than one, we need the following

Lemma 7.2.1 (Mean Value Inequality) Let x_o be a point in Ω. Let B_r(x_o) ⊂ Ω be the ball of radius r centered at x_o, and ∂B_r(x_o) its boundary.

i) If −Δu(x) > (=) 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

u(x_o) > (=) (1/|∂B_r(x_o)|) ∫_{∂B_r(x_o)} u(x) dS.        (7.11)

It follows that, if x_o is a minimum of u in Ω, then

−Δu(x_o) ≤ 0.        (7.12)

ii) If −Δu(x) < 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

u(x_o) < (1/|∂B_r(x_o)|) ∫_{∂B_r(x_o)} u(x) dS.        (7.13)

It follows that, if x_o is a maximum of u in Ω, then

−Δu(x_o) ≥ 0.        (7.14)
We postpone the proof of the Lemma for a moment. This Lemma tells us that, if −Δu(x) > 0, then the value of u at the center of the small ball B_r(x_o) is larger than its average value on the boundary ∂B_r(x_o). Roughly speaking, the graph of u is locally somewhat concave downward. Now, based on this Lemma, to prove the theorem, we first consider u_ε(x) = u(x) − ε|x|². Obviously,

−Δu_ε = −Δu + 2nε > 0.        (7.15)

Hence we must have

min_{Ω̄} u_ε ≥ min_{∂Ω} u_ε.        (7.16)

Otherwise, if there were a minimum x_o of u_ε in Ω, then by Lemma 7.2.1 we would have −Δu_ε(x_o) ≤ 0, which contradicts (7.15). Now in (7.16), letting ε→0, we arrive at the desired conclusion (7.7).

This completes the proof of the Theorem. □
The Proof of Lemma 7.2.1. By the Divergence Theorem,

∫_{B_r(x_o)} Δu(x) dx = ∫_{∂B_r(x_o)} ∂u/∂ν dS = r^{n−1} ∫_{S^{n−1}} ∂u/∂r (x_o + rω) dS_ω,        (7.17)

where dS_ω is the area element of the (n−1)-dimensional unit sphere S^{n−1} = {ω : |ω| = 1}.
If Δu < 0, then by (7.17),

∂/∂r { ∫_{S^{n−1}} u(x_o + rω) dS_ω } < 0.        (7.18)

Integrating both sides of (7.18) from 0 to r yields

∫_{S^{n−1}} u(x_o + rω) dS_ω − u(x_o)|S^{n−1}| < 0,

where |S^{n−1}| is the area of S^{n−1}. It follows that

u(x_o) > (1/(r^{n−1}|S^{n−1}|)) ∫_{∂B_r(x_o)} u(x) dS.

This verifies (7.11).

To see (7.12), we suppose on the contrary that −Δu(x_o) > 0. Then by the continuity of Δu, there exists a δ > 0 such that

−Δu(x) > 0, ∀ x ∈ B_δ(x_o).

Consequently, (7.11) holds for any 0 < r < δ. This contradicts the assumption that x_o is a minimum of u.

This completes the proof of the Lemma.
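A concrete illustration of the lemma (our own sketch, not part of the text): for u(x) = −|x|² in the plane, −Δu = 4 > 0, and the average of u over any circle of radius r about x_o equals u(x_o) − r², strictly below the center value, exactly as (7.11) predicts.

```python
import numpy as np

# u(x) = -|x|^2 in R^2, so -Delta u = 4 > 0.
# Average of u over the circle of radius r about x_o is u(x_o) - r^2.
x_o = np.array([0.3, -0.1])
r = 0.5

theta = np.linspace(0.0, 2.0*np.pi, 20000, endpoint=False)
px = x_o[0] + r*np.cos(theta)
py = x_o[1] + r*np.sin(theta)

average = np.mean(-(px**2 + py**2))         # circle average of u
center_value = -(x_o[0]**2 + x_o[1]**2)     # u(x_o)
```

The linear terms in the expansion of −|x_o + rω|² average to zero over the circle, leaving exactly u(x_o) − r².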
From the proof of Theorem 7.2.1, one can see that if we replace the operator −Δ by −Δ + c(x) with c(x) ≥ 0, then the conclusion of Theorem 7.2.1 still holds (with slight modifications). Furthermore, we can replace the Laplace operator −Δ by a general uniformly elliptic operator. Let

D_i = ∂/∂x_i, D_{ij} = ∂²/(∂x_i ∂x_j).

Define

L = −Σ_{ij} a_{ij}(x) D_{ij} + Σ_i b_i(x) D_i + c(x).        (7.19)

Here we always assume that a_{ij}(x), b_i(x), and c(x) are bounded continuous functions on Ω̄. We say that L is uniformly elliptic if

a_{ij}(x) ξ_i ξ_j ≥ δ|ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and some δ > 0.

Theorem 7.2.2 (Weak Maximum Principle for L with c(x) ≡ 0) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.
i) If Lu ≥ 0 in Ω, then

min_{Ω̄} u ≥ min_{∂Ω} u.        (7.20)

ii) If Lu ≤ 0 in Ω, then

max_{Ω̄} u ≤ max_{∂Ω} u.
For c(x) ≥ 0, the principle still applies, with slight modifications.

Theorem 7.2.3 (Weak Maximum Principle for L with c(x) ≥ 0) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≥ 0. Let

u⁻(x) = min{u(x), 0} and u⁺(x) = max{u(x), 0}.

i) If Lu ≥ 0 in Ω, then

min_{Ω̄} u ≥ min_{∂Ω} u⁻.

ii) If Lu ≤ 0 in Ω, then

max_{Ω̄} u ≤ max_{∂Ω} u⁺.

Interested readers may find its proof in many standard books, e.g. in [Ev], page 327.
7.3 The Hopf Lemma and Strong Maximum Principles

In the previous section, we proved a weak form of the maximum principle. In the case Lu ≥ 0, it concludes that the minimum of u is attained at some point on the boundary ∂Ω. However, it does not exclude the possibility that the minimum may also be attained at some point in the interior of Ω. In this section, we will show that this actually cannot happen, that is, the minimum value of u can only be achieved on the boundary unless u is constant. This is called the "Strong Maximum Principle". We will prove it by using the following

Lemma 7.3.1 (Hopf Lemma) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Let

L = −Σ_{ij} a_{ij}(x) D_{ij} + Σ_i b_i(x) D_i + c(x)

be uniformly elliptic in Ω with c(x) ≡ 0. Assume that

Lu ≥ 0 in Ω.        (7.21)

Suppose there is a ball B contained in Ω with a point x_o ∈ ∂Ω ∩ ∂B, and suppose that

u(x) > u(x_o), ∀ x ∈ B.        (7.22)

Then for any outward directional derivative at x_o,

∂u(x_o)/∂ν < 0.        (7.23)
In the case c(x) ≥ 0, if we require additionally that u(x_o) ≤ 0, then the same conclusion of the Hopf Lemma holds.

Proof. Without loss of generality, we may assume that B is centered at the origin with radius r. Define

w(x) = e^{−αr²} − e^{−α|x|²}.

Consider v(x) = u(x) + εw(x) on the set D = B_{r/2}(x_o) ∩ B (see Figure 3).

[Figure 3]
We will choose α and ε appropriately so that we can apply the Weak Maximum Principle to v(x) and arrive at

v(x) ≥ v(x_o), ∀ x ∈ D.        (7.24)

We postpone the proof of (7.24) for a moment. Now, from (7.24), we have

∂v/∂ν (x_o) ≤ 0.        (7.25)

Noticing that

∂w/∂ν (x_o) > 0,

we arrive at the desired inequality

∂u/∂ν (x_o) < 0.

Now, to complete the proof of the Lemma, what is left is to verify (7.24). We will carry this out in two steps. First we show that

Lv ≥ 0.        (7.26)

Hence we can apply the Weak Maximum Principle to conclude that

min_{D̄} v ≥ min_{∂D} v.        (7.27)
Then we show that the minimum of v on the boundary ∂D is actually attained at x_o:

v(x) ≥ v(x_o), ∀ x ∈ ∂D.        (7.28)

Obviously, (7.27) and (7.28) imply (7.24).
To see (7.26), we directly calculate
$$Lw = e^{-\alpha|x|^2}\left\{4\alpha^2 \sum_{i,j=1}^n a_{ij}(x)x_i x_j - 2\alpha\sum_{i=1}^n [a_{ii}(x) - b_i(x)x_i] - c(x)\right\} + c(x)e^{-\alpha r^2}$$
$$\geq e^{-\alpha|x|^2}\left\{4\alpha^2 \sum_{i,j=1}^n a_{ij}(x)x_i x_j - 2\alpha\sum_{i=1}^n [a_{ii}(x) - b_i(x)x_i] - c(x)\right\}. \qquad (7.29)$$
By the ellipticity assumption, we have
$$\sum_{i,j=1}^n a_{ij}(x)x_i x_j \geq \delta|x|^2 \geq \delta\left(\frac{r}{2}\right)^2 > 0 \quad \text{in } D. \qquad (7.30)$$
Hence we can choose α sufficiently large, such that Lw ≥ 0. This, together with the assumption Lu ≥ 0, implies Lv ≥ 0, and (7.27) follows from the Weak Maximum Principle.
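In the special case $L = -\Delta$ (that is, $a_{ij} = \delta_{ij}$, $b_i \equiv 0$, $c \equiv 0$), the calculation above reduces to $Lw = e^{-\alpha|x|^2}(4\alpha^2|x|^2 - 2\alpha n)$, which is nonnegative on D as soon as $\alpha \geq 2n/r^2$. A minimal numerical sanity check of this threshold (the dimension, radius, and sampling are arbitrary illustrative choices, not from the book):

```python
import numpy as np

# Hopf barrier for L = -Laplacian: w(x) = exp(-alpha r^2) - exp(-alpha |x|^2).
# A direct computation gives Lw = exp(-alpha |x|^2) * (4 alpha^2 |x|^2 - 2 alpha n),
# which is nonnegative as soon as |x| >= r/2 and alpha >= 2 n / r^2.

def Lw(x, alpha):
    """Value of -Delta w at a point x, for w = e^{-alpha r^2} - e^{-alpha|x|^2}."""
    n = len(x)
    s = np.dot(x, x)                      # |x|^2
    return np.exp(-alpha * s) * (4 * alpha**2 * s - 2 * alpha * n)

n, r = 3, 1.0
alpha = 2 * n / r**2                      # the threshold derived above
rng = np.random.default_rng(0)
# sample points of D = B_{r/2}(x^o) \cap B, all of which satisfy r/2 <= |x| <= r
for _ in range(1000):
    x = rng.normal(size=n)
    x *= rng.uniform(r / 2, r) / np.linalg.norm(x)
    assert Lw(x, alpha) >= 0
```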
To verify (7.28), we consider two parts of the boundary ∂D separately.
(i) On ∂D ∩ B, since $u(x) > u(x^o)$, there exists a $c_o > 0$ such that $u(x) \geq u(x^o) + c_o$. Take ε small enough that $\epsilon|w| \leq c_o$ on ∂D ∩ B. Hence
$$v(x) \geq u(x^o) = v(x^o), \quad \forall x \in \partial D \cap B.$$
(ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption $u(x) \geq u(x^o)$, we have $v(x) \geq v(x^o)$.
This completes the proof of the Lemma.
Now we are ready to prove
Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0.) Assume that Ω is an open, bounded, and connected domain in $R^n$ with smooth boundary ∂Ω. Let u be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that c(x) ≡ 0 in Ω.
i) If
$$Lu(x) \geq 0, \quad x \in \Omega,$$
then u attains its minimum only on ∂Ω unless u is constant.
ii) If
$$Lu(x) \leq 0, \quad x \in \Omega,$$
then u attains its maximum only on ∂Ω unless u is constant.
Proof. We prove part i) here; the proof of part ii) is similar. Let m be the minimum value of u in $\bar\Omega$. Set $\Sigma = \{x \in \Omega \mid u(x) = m\}$. It is relatively closed in Ω. We show that either Σ is empty or Σ = Ω.
We argue by contradiction. Suppose Σ is a nonempty proper subset of Ω. Then we can find an open ball $B \subset \Omega \setminus \Sigma$ with a point on its boundary belonging to Σ. Actually, we can first find a point $p \in \Omega \setminus \Sigma$ such that $d(p, \Sigma) < d(p, \partial\Omega)$, then increase the radius of a small ball centered at p until it hits Σ (before hitting ∂Ω). Let $x^o$ be the point at ∂B ∩ Σ. Obviously we have in B
$$Lu \geq 0 \quad \text{and} \quad u(x) > u(x^o).$$
Now we can apply the Hopf Lemma to conclude that the outward normal derivative
$$\frac{\partial u}{\partial \nu}(x^o) < 0. \qquad (7.31)$$
On the other hand, $x^o$ is an interior minimum of u in Ω, so we must have $Du(x^o) = 0$. This contradicts (7.31) and hence completes the proof of the Theorem.
In the case when c(x) ≥ 0, the strong maximum principle still applies with slight modifications.
Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume that Ω is an open, bounded, and connected domain in $R^n$ with smooth boundary ∂Ω. Let u be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that c(x) ≥ 0 in Ω.
i) If
$$Lu(x) \geq 0, \quad x \in \Omega,$$
then u cannot attain its nonpositive minimum in the interior of Ω unless u is constant.
ii) If
$$Lu(x) \leq 0, \quad x \in \Omega,$$
then u cannot attain its nonnegative maximum in the interior of Ω unless u is constant.
Remark 7.3.1 In order for the maximum principle to hold, we assume that the domain Ω is bounded. This is essential, since it guarantees the existence of the maximum and minimum of u in $\bar\Omega$. A simple counterexample is when Ω is the half space $\{x \in R^n \mid x_n > 0\}$ and $u(x) = x_n$. Obviously, Δu = 0, but u does not obey the maximum principle
$$\max_{\Omega} u \leq \max_{\partial\Omega} u.$$
Equally important is the nonnegativity of the coefficient c(x). For example, set $\Omega = \{(x, y) \in R^2 \mid -\frac{\pi}{2} < x < \frac{\pi}{2}, \; -\frac{\pi}{2} < y < \frac{\pi}{2}\}$. Then $u = \cos x \cos y$ satisfies
$$\begin{cases} -\Delta u - 2u = 0, & \text{in } \Omega \\ u = 0, & \text{on } \partial\Omega. \end{cases}$$
Here c = -2 < 0, and the maximum principle fails: u attains its positive maximum at the interior point (0, 0). (Equivalently, -u also solves the same problem, is nonnegative on ∂Ω, yet is negative everywhere in Ω.)
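This counterexample is easy to check numerically. The sketch below (an illustration, not from the book) verifies by finite differences that $u = \cos x \cos y$ solves $-\Delta u - 2u = 0$ in Ω, and that u attains its maximum value 1 at the interior point (0, 0):

```python
import numpy as np

u = lambda x, y: np.cos(x) * np.cos(y)

def residual(x, y, h=1e-4):
    """Finite-difference value of -Delta u - 2u at (x, y)."""
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4 * u(x, y)) / h**2
    return -lap - 2 * u(x, y)

rng = np.random.default_rng(1)
pts = rng.uniform(-np.pi / 2, np.pi / 2, size=(100, 2))
# u solves the equation with coefficient c = -2 < 0 ...
assert max(abs(residual(x, y)) for x, y in pts) < 1e-6
# ... yet attains its maximum at the interior point (0, 0), so the
# conclusion of the maximum principle fails.
assert u(0.0, 0.0) == 1.0
```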
However, if we impose some sign restriction on u, say u ≥ 0, then both conditions can be relaxed. A simple version of such a result will be presented in the next theorem.
Also, as one will see in the next section, c(x) is actually allowed to be negative, but not 'too negative'.
Theorem 7.3.3 (Maximum Principle and Hopf Lemma for not necessarily bounded domains and not necessarily nonnegative c(x).)
Let Ω be a domain in $R^n$ with smooth boundary. Assume that $u \in C^2(\Omega) \cap C(\bar\Omega)$ and satisfies
$$\begin{cases} -\Delta u + \sum_{i=1}^n b_i(x)D_i u + c(x)u \geq 0, \;\; u(x) \geq 0, & x \in \Omega \\ u(x) = 0, & x \in \partial\Omega \end{cases} \qquad (7.32)$$
with bounded functions $b_i(x)$ and c(x). Then
i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and
ii) if $u \not\equiv 0$ in Ω, then on ∂Ω, the exterior normal derivative
$$\frac{\partial u}{\partial \nu} < 0.$$
To prove the Theorem, we need the following Lemma concerning eigenvalues.
Lemma 7.3.2 Let $\lambda_1$ be the first positive eigenvalue of
$$\begin{cases} -\Delta\phi = \lambda_1 \phi(x), & x \in B_1(0) \\ \phi(x) = 0, & x \in \partial B_1(0) \end{cases} \qquad (7.33)$$
with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the first positive eigenvalue of the problem on $B_\rho(0)$ is $\frac{\lambda_1}{\rho^2}$. More precisely, if we let $\psi(x) = \phi(\frac{x}{\rho})$, then
$$\begin{cases} -\Delta\psi = \frac{\lambda_1}{\rho^2}\psi(x), & x \in B_\rho(0) \\ \psi(x) = 0, & x \in \partial B_\rho(0). \end{cases} \qquad (7.34)$$
The proof is straightforward and is left to the reader.
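In dimension n = 3 the first eigenfunction of (7.33) is explicit, $\phi(r) = \sin(\pi r)/r$ with $\lambda_1 = \pi^2$, so the scaling law of Lemma 7.3.2 can be checked directly with the radial Laplacian $f'' + \frac{2}{r}f'$. A quick numerical sketch (the value of ρ and the sample radii are arbitrary choices):

```python
import numpy as np

# In R^3 the first Dirichlet eigenfunction of -Delta on B_1(0) is radial:
# phi(r) = sin(pi r)/r, with eigenvalue lambda_1 = pi^2.  Setting
# psi(x) = phi(x/rho) should give eigenvalue lambda_1 / rho^2 on B_rho(0).

lam1 = np.pi**2
phi = lambda r: np.sin(np.pi * r) / r

def radial_lap(f, r, h=1e-5):
    """Laplacian of a radial function in R^3: f'' + (2/r) f'."""
    d1 = (f(r + h) - f(r - h)) / (2 * h)
    d2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2
    return d2 + 2 * d1 / r

rho = 2.5
psi = lambda r: phi(r / rho)
for r in [0.3, 1.0, 2.0]:                 # sample radii inside B_rho(0)
    lhs = -radial_lap(psi, r)
    rhs = (lam1 / rho**2) * psi(r)
    assert abs(lhs - rhs) < 1e-4
```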
The Proof of Theorem 7.3.3.
i) Suppose that u = 0 at some point in Ω, but $u \not\equiv 0$ on Ω. Let
$$\Omega^+ = \{x \in \Omega \mid u(x) > 0\}.$$
Then by the regularity assumption on u, $\Omega^+$ is an open set with $C^2$ boundary. Obviously,
$$u(x) = 0, \quad \forall x \in \partial\Omega^+.$$
Let $x^o$ be a point on $\partial\Omega^+$, but not on ∂Ω. Then for ρ > 0 sufficiently small, one can choose a ball $B_{\rho/2}(\bar{x}) \subset \Omega^+$ with $x^o$ as its boundary point. Let ψ be the positive eigenfunction of the eigenvalue problem (7.34) on $B_\rho(x^o)$ corresponding to the eigenvalue $\frac{\lambda_1}{\rho^2}$. Obviously, $B_\rho(x^o)$ completely covers $B_{\rho/2}(\bar{x})$. Let $v = \frac{u}{\psi}$. Then from (7.32), it is easy to deduce that
$$0 \leq -\Delta v - 2\nabla v \cdot \frac{\nabla\psi}{\psi} + \sum_{i=1}^n b_i(x)D_i v + \left(\frac{-\Delta\psi}{\psi} + \sum_{i=1}^n b_i(x)\frac{D_i\psi}{\psi} + c(x)\right)v$$
$$\equiv -\Delta v + \sum_{i=1}^n \tilde{b}_i(x)D_i v + \tilde{c}(x)v.$$
Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on $B_1$; then
$$\tilde{c}(x) = \frac{\lambda_1}{\rho^2} + \frac{1}{\rho}\sum_{i=1}^n b_i(x)\frac{D_i\phi}{\phi} + c(x).$$
This allows us to choose ρ sufficiently small so that $\tilde{c}(x) \geq 0$. Now we can apply the Hopf Lemma to conclude that, at the boundary point $x^o$ of $B_{\rho/2}(\bar{x})$, the outward normal derivative satisfies
$$\frac{\partial v}{\partial \nu}(x^o) < 0, \qquad (7.35)$$
because
$$v(x) > 0, \;\; \forall x \in B_{\rho/2}(\bar{x}) \quad \text{and} \quad v(x^o) = \frac{u(x^o)}{\psi(x^o)} = 0.$$
On the other hand, since $x^o$ is also a minimum of v in the interior of Ω, we must have
$$Dv(x^o) = 0.$$
This contradicts (7.35) and hence proves part i) of the Theorem.
ii) The proof goes almost the same as in part i), except we consider the point $x^o$ on ∂Ω and the ball $B_{\rho/2}(\bar{x})$ in Ω with $x^o \in \partial B_{\rho/2}(\bar{x})$. Then for the outward normal derivative of u, we have
$$\frac{\partial u}{\partial \nu}(x^o) = \frac{\partial v}{\partial \nu}(x^o)\psi(x^o) + v(x^o)\frac{\partial \psi}{\partial \nu}(x^o) = \frac{\partial v}{\partial \nu}(x^o)\psi(x^o) < 0.$$
Here we have used the well-known fact that the eigenfunction ψ on $B_\rho(x^o)$ is radially symmetric about the center $x^o$, and hence $D\psi(x^o) = 0$. This completes the proof of the Theorem.
7.4 Maximum Principles Based on Comparisons
In the previous section, we showed that if $(-\Delta + c(x))u \geq 0$, then the maximum principle, i.e. (7.20), holds. There, we required c(x) ≥ 0. We can think of $-\Delta$ as a 'positive' operator, and the maximum principle holds for any 'positive' operator. For c(x) ≥ 0, $-\Delta + c(x)$ is also 'positive'. Do we really need c(x) ≥ 0 here? To answer this question, let us consider the Dirichlet eigenvalue problem of $-\Delta$:
$$\begin{cases} -\Delta\phi - \lambda\phi(x) = 0, & x \in \Omega \\ \phi(x) = 0, & x \in \partial\Omega. \end{cases} \qquad (7.36)$$
We notice that the eigenfunction φ corresponding to the first positive eigenvalue $\lambda_1$ is either positive or negative in Ω. That is, the solutions of (7.36) with $\lambda = \lambda_1$ obey the Maximum Principle: the maxima or minima of φ are attained only on the boundary ∂Ω. This suggests that, to ensure the Maximum Principle, c(x) need not be nonnegative; it is allowed to be as negative as $-\lambda_1$. More precisely, we can establish the following more general maximum principle based on comparison.
Theorem 7.4.1 Assume that Ω is a bounded domain. Let φ be a positive function on $\bar\Omega$ satisfying
$$-\Delta\phi + \lambda(x)\phi \geq 0. \qquad (7.37)$$
Assume that u is a solution of
$$\begin{cases} -\Delta u + c(x)u \geq 0, & x \in \Omega \\ u \geq 0, & \text{on } \partial\Omega. \end{cases} \qquad (7.38)$$
If
$$c(x) > \lambda(x), \quad \forall x \in \Omega, \qquad (7.39)$$
then u ≥ 0 in Ω.
Proof. We argue by contradiction. Suppose that u(x) < 0 somewhere in Ω. Let $v(x) = \frac{u(x)}{\phi(x)}$. Then since φ(x) > 0, we must have v(x) < 0 somewhere in Ω. Let $x^o \in \Omega$ be a minimum of v(x). By a direct calculation, it is easy to verify that
$$-\Delta v = 2\nabla v \cdot \frac{\nabla\phi}{\phi} + \frac{1}{\phi}\left(-\Delta u + \frac{\Delta\phi}{\phi}u\right). \qquad (7.40)$$
On one hand, since $x^o$ is a minimum, we have
$$-\Delta v(x^o) \leq 0 \quad \text{and} \quad \nabla v(x^o) = 0. \qquad (7.41)$$
While on the other hand, by (7.37), (7.38), and (7.39), and taking into account that $u(x^o) < 0$, we have, at the point $x^o$,
$$-\Delta u + \frac{\Delta\phi}{\phi}u(x^o) \geq -\Delta u + \lambda(x^o)u(x^o) > -\Delta u + c(x^o)u(x^o) \geq 0.$$
This is an obvious contradiction with (7.40) and (7.41), and thus completes the proof of the Theorem.
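A discrete analogue makes the threshold $-\lambda_1$ concrete. On (0, π) the first Dirichlet eigenvalue of $-\frac{d^2}{dx^2}$ is $\lambda_1 = 1$, and for the standard finite-difference matrix A of this operator, A + cI is inverse-positive whenever $c > -\lambda_1$ (it is then a Stieltjes matrix: positive definite with nonpositive off-diagonal entries), which is the discrete form of the maximum principle; positivity is lost once $c < -\lambda_1$. A sketch under these assumptions (not from the book):

```python
import numpy as np

N = 200
h = np.pi / (N + 1)
# finite-difference matrix of -d^2/dx^2 on (0, pi) with Dirichlet conditions
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
lam1 = np.linalg.eigvalsh(A).min()        # discrete first eigenvalue, close to 1

# c slightly above -lambda_1: A + cI is positive definite with nonpositive
# off-diagonal entries (a Stieltjes matrix), so its inverse is entrywise
# nonnegative -- the discrete form of "(-Delta + c)u >= 0 implies u >= 0".
M_good = A + (-lam1 + 0.05) * np.eye(N)
assert (np.linalg.inv(M_good) >= -1e-10).all()

# c slightly below -lambda_1: inverse positivity (and with it the maximum
# principle) is lost.
M_bad = A + (-lam1 - 0.05) * np.eye(N)
assert (np.linalg.inv(M_bad) < -1e-10).any()
```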
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39) are required only at the points where v attains its minimum, or at points where u is negative.
The Theorem is also valid on an unbounded domain if u is "nonnegative" at infinity:
Theorem 7.4.2 If Ω is an unbounded domain, besides condition (7.38), we assume further that
$$\liminf_{|x| \to \infty} u(x) \geq 0. \qquad (7.42)$$
Then u ≥ 0 in Ω.
Proof. Consider the same v(x) as in the proof of Theorem 7.4.1. Now condition (7.42) guarantees that the minima of v(x) do not "leak" away to infinity. The rest of the argument is exactly the same as in the proof of Theorem 7.4.1.
For convenience in applications, we provide two typical situations where there exist functions φ and c(x) satisfying conditions (7.37) and (7.39), so that the Maximum Principle Based on Comparison applies:
i) narrow regions, and
ii) c(x) decays fast enough at ∞.
i) Narrow Regions. When
$$\Omega = \{x \mid 0 < x_1 < l\}$$
is a narrow region with width l, we can choose $\phi(x) = \sin\left(\frac{x_1 + \epsilon}{l}\right)$. Then it is easy to see that $-\Delta\phi = \frac{1}{l^2}\phi$, so that (7.37) holds with $\lambda(x) = \frac{-1}{l^2}$, which can be very negative when l is sufficiently small.
Corollary 7.4.1 (Narrow Region Principle.) Suppose u satisfies (7.38) with a bounded function c(x). When the width l of the region Ω is sufficiently small, c(x) satisfies (7.39), i.e. $c(x) > \lambda(x) = \frac{-1}{l^2}$. Hence we can directly apply Theorem 7.4.1 to conclude that u ≥ 0 in Ω, provided $\liminf_{|x|\to\infty} u(x) \geq 0$.
ii) Decay at Infinity. In dimension n ≥ 3, one can choose some positive number q < n - 2 and let $\phi(x) = \frac{1}{|x|^q}$. Then it is easy to verify that
$$-\Delta\phi = \frac{q(n-2-q)}{|x|^2}\phi.$$
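The identity can be verified with the radial form of the Laplacian in $R^n$, $\Delta f = f'' + \frac{n-1}{r}f'$; a quick numerical sanity check (the values of n and q are arbitrary choices with 0 < q < n - 2):

```python
import numpy as np

# Radial check of  -Delta(|x|^{-q}) = q(n-2-q) |x|^{-q-2}  in dimension n,
# using the radial Laplacian  f'' + (n-1)/r * f'.

def minus_lap_radial(f, r, n, h=1e-5):
    d1 = (f(r + h) - f(r - h)) / (2 * h)
    d2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2
    return -(d2 + (n - 1) * d1 / r)

n, q = 5, 1.7                      # any n >= 3 and 0 < q < n - 2
phi = lambda r: r**(-q)
for r in [0.5, 1.0, 3.0]:
    expected = q * (n - 2 - q) / r**2 * phi(r)
    assert abs(minus_lap_radial(phi, r, n) - expected) < 1e-3
```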
In the case c(x) decays fast enough near infinity, we can adapt the proof of Theorem 7.4.1 to derive
Corollary 7.4.2 (Decay at Infinity) Assume there exists R > 0 such that
$$c(x) > -\frac{q(n-2-q)}{|x|^2}, \quad \forall |x| > R. \qquad (7.43)$$
Suppose
$$\lim_{|x| \to \infty} u(x)\,|x|^q = 0.$$
Let Ω be a region contained in $B_R^C(0) \equiv R^n \setminus B_R(0)$. If u satisfies (7.38) on $\bar\Omega$, then
$$u(x) \geq 0 \quad \text{for all } x \in \Omega.$$
Remark 7.4.2 From Remark 7.4.1, one can see that condition (7.43) is actually only required at points where u is negative.
Remark 7.4.3 Although Theorem 7.4.1 and its Corollaries are stated in linear form, they can easily be applied to nonlinear equations, for example,
$$-\Delta u - |u|^{p-1}u = 0, \quad x \in R^n. \qquad (7.44)$$
Assume that the solution u decays near infinity at the rate of $\frac{1}{|x|^s}$ with s(p-1) > 2. Let $c(x) = -|u(x)|^{p-1}$. Then for R sufficiently large, and for the region Ω as stated in Corollary 7.4.2, c(x) satisfies (7.43) in Ω. If we further assume that
$$u\,|_{\partial\Omega} \geq 0,$$
then we can derive from Corollary 7.4.2 that u ≥ 0 in the entire region Ω.
7.5 A Maximum Principle for Integral Equations
In this section, we introduce a maximum principle for integral equations.
Let Ω be a region in $R^n$, which may or may not be bounded. Assume
$$K(x, y) \geq 0, \quad \forall (x, y) \in \Omega \times \Omega.$$
Define the integral operator T by
$$(Tf)(x) = \int_\Omega K(x, y)f(y)\,dy.$$
Let $\|\cdot\|$ be a norm in a Banach space satisfying
$$\|f\| \leq \|g\|, \quad \text{whenever } 0 \leq f(x) \leq g(x) \text{ in } \Omega. \qquad (7.45)$$
There are many norms that satisfy (7.45), such as the $L^p(\Omega)$ norms, the $L^\infty(\Omega)$ norm, and the C(Ω) norm.
Theorem 7.5.1 Suppose that
$$\|Tg\| \leq \theta\|g\| \quad \text{with some } 0 < \theta < 1, \;\; \forall g. \qquad (7.46)$$
If
$$f(x) \leq (Tf)(x), \qquad (7.47)$$
then
$$f(x) \leq 0 \quad \text{for all } x \in \Omega. \qquad (7.48)$$
Proof. Define
$$\Omega^+ = \{x \in \Omega \mid f(x) > 0\},$$
and
$$f^+(x) = \max\{0, f(x)\}, \quad f^-(x) = \min\{0, f(x)\}.$$
Then it is easy to see that for any $x \in \Omega^+$,
$$0 \leq f^+(x) \leq (Tf)(x) = (Tf^+)(x) + (Tf^-)(x) \leq (Tf^+)(x). \qquad (7.49)$$
While in the rest of the region Ω, we have $f^+(x) = 0$ and $(Tf^+)(x) \geq 0$ by the definitions. Therefore, (7.49) holds for any x ∈ Ω. It follows from (7.49) that
$$\|f^+\| \leq \|Tf^+\| \leq \theta\|f^+\|.$$
Since θ < 1, we must have
$$\|f^+\| = 0,$$
which implies that $f^+(x) \equiv 0$, and hence f(x) ≤ 0. This completes the proof of the Theorem.
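A finite-dimensional analogue of Theorem 7.5.1 replaces T by a nonnegative matrix K with norm θ < 1. Every vector f with f ≤ Kf componentwise can be written as $f = (I-K)^{-1}g$ with $g = f - Kf \leq 0$, and since $(I-K)^{-1} = I + K + K^2 + \cdots$ has nonnegative entries, f ≤ 0 follows. A sketch of this discrete version (an illustration, not from the book):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 50
K = rng.uniform(0.0, 1.0, size=(m, m))
K *= 0.9 / np.linalg.norm(K, 1)          # nonnegative kernel with norm 0.9 < 1

# Any f with f <= K f can be realized as f = (I - K)^{-1} g with g <= 0.
g = -rng.uniform(0.0, 1.0, size=m)
f = np.linalg.solve(np.eye(m) - K, g)

assert (f <= K @ f + 1e-12).all()         # f satisfies the hypothesis (7.47)
assert (f <= 1e-12).all()                 # ... and the conclusion (7.48): f <= 0
```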
Recall that in the previous section, after introducing the "Maximum Principle Based on Comparison" for partial differential equations, we explained how it can be applied to the situations of "Narrow Regions" and "Decay at Infinity". Parallel to this, we have similar applications for integral equations. Let α be a number between 0 and n. Define
$$(Tf)(x) = \int_\Omega \frac{1}{|x-y|^{n-\alpha}}\, c(y)f(y)\,dy.$$
Assume that f satisfies the integral equation
$$f = Tf \quad \text{in } \Omega,$$
or more generally, the integral inequality
$$f \leq Tf \quad \text{in } \Omega. \qquad (7.50)$$
Further assume that
$$\|Tf\|_{L^p(\Omega)} \leq C\,\|c(y)\|_{L^\tau(\Omega)}\,\|f\|_{L^p(\Omega)}, \qquad (7.51)$$
for some p, τ > 1.
If we have the right integrability condition on c(y), then we can derive, from Theorem 7.5.1, a maximum principle that can be applied to "Narrow Regions" and "Near Infinity". More precisely, we have
Corollary 7.5.1 Assume that c(y) ≥ 0 and $c(y) \in L^\tau(R^n)$. Let $f \in L^p(R^n)$ be a nonnegative function satisfying (7.50) and (7.51). Then there exist positive numbers $R_o$ and $\epsilon_o$, depending on c(y) only, such that
$$\text{if } \mu(\Omega \cap B_{R_o}(0)) \leq \epsilon_o, \text{ then } f^+ \equiv 0 \text{ in } \Omega,$$
where μ(D) is the measure of the set D.
Proof. Since $c(y) \in L^\tau(R^n)$, by Lebesgue integral theory, we may first choose $R_o$ so large that the integral of $|c(y)|^\tau$ outside $B_{R_o}(0)$ is as small as we wish (the tail of an integrable function); then, when the measure of the intersection of Ω with $B_{R_o}(0)$ is sufficiently small, the integral $\int_\Omega |c(y)|^\tau dy$ is also as small as we wish, and thus we obtain
$$C\|c(y)\|_{L^\tau(\Omega)} < 1.$$
Now it follows from Theorem 7.5.1 that
$$f^+(x) \equiv 0, \quad \forall x \in \Omega.$$
This completes the proof of the Corollary.
Remark 7.5.1 One can see that the condition $\mu(\Omega \cap B_{R_o}(0)) \leq \epsilon_o$ in the Corollary is satisfied in the following two situations:
i) Narrow regions: the width of Ω is very small.
ii) Near infinity: say, $\Omega = B_R^C(0)$, the complement of the ball $B_R(0)$, with sufficiently large R.
As an immediate application of this “Maximum Principle”, we study an
integral equation in the next section. We will use the method of moving planes
to obtain the radial symmetry and monotonicity of the positive solutions.
8
Methods of Moving Planes and of Moving Spheres

8.1 Outline of the Method of Moving Planes
8.2 Applications of Maximum Principles Based on Comparison
8.2.1 Symmetry of Solutions on a Unit Ball
8.2.2 Symmetry of Solutions of $-\Delta u = u^p$ in $R^n$
8.2.3 Symmetry of Solutions of $-\Delta u = e^u$ in $R^2$
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
8.4.1 The Background
8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and Symmetry of Solutions for Integral Equations
The Method of Moving Planes (MMP) was invented by the Soviet mathematician Alexanderoff in the early 1950s. Decades later, it was further developed by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas, and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY], and many others. This method has been applied to free boundary problems, semilinear partial differential equations, and other problems. Particularly for semilinear partial differential equations, there have been many significant contributions. We refer to the paper of Frenkel [F] for more descriptions of the method.
The Method of Moving Planes and its variant, the Method of Moving Spheres, have become powerful tools in establishing symmetry and monotonicity for solutions of partial differential equations. They can also be used to obtain a priori estimates, to derive useful inequalities, and to prove nonexistence of solutions.
From the previous chapter, we have seen the beauty and power of maximum principles. The MMP greatly enhances that power. Roughly speaking, the MMP is a continuous way of applying maximum principles repeatedly. During this process, the maximum principle is used infinitely many times, and the advantage is that each time we only need to apply it in a very narrow region. From the previous chapter, one can see that, in such a narrow region, even if the coefficients of the equation are not 'good', the maximum principle can still be applied. In the authors' research practice, we also introduced a form of maximum principle at infinity to the MMP, thereby simplifying many proofs and extending the results in more natural ways. We recommend that readers study this part carefully, so that they will be able to apply it in their own research.
It is well known that by using a Green's function, one can change a differential equation into an integral equation, and under certain conditions they are equivalent. To investigate the symmetry and monotonicity of integral equations, the authors, together with Ou, created an integral form of the MMP. Instead of using local properties (say, differentiability) of a differential equation, they employed the global properties of the solutions of integral equations.
In this chapter, we will apply the Method of Moving Planes and its variant, the Method of Moving Spheres, to study semilinear elliptic equations and integral equations. We will establish symmetry, monotonicity, a priori estimates, and nonexistence of solutions. During the process of moving planes, the maximum principles introduced in the previous chapter are applied in innovative ways.
In Section 8.2, we will establish radial symmetry and monotonicity for the solutions of the following three semilinear elliptic problems:
$$\begin{cases} -\Delta u = f(u), & x \in B_1(0) \\ u = 0, & \text{on } \partial B_1(0); \end{cases}$$
$$-\Delta u = u^{\frac{n+2}{n-2}}(x), \quad x \in R^n, \; n \geq 3;$$
and
$$-\Delta u = e^{u(x)}, \quad x \in R^2.$$
During the moving of planes, the Maximum Principles Based on Comparison will play a major role. In particular, the Narrow Region Principle and the Decay at Infinity Principle will be used repeatedly in dealing with the three examples.
In Section 8.3, we will apply the Method of Moving Planes in a 'local way' to obtain a priori estimates on the solutions of the prescribing scalar curvature equation on a compact Riemannian manifold M:
$$-\frac{4(n-1)}{n-2}\Delta_o u + R_o(x)u = R(x)u^{\frac{n+2}{n-2}}, \quad \text{in } M.$$
We allow the function R(x) to change signs. In this situation, the traditional blowing-up analysis fails near the set where R(x) = 0. We will use the Method of Moving Planes in an innovative way to obtain a priori estimates. Since the Method of Moving Planes cannot be applied to the solution u directly, we introduce an auxiliary function to circumvent this difficulty.
In Section 8.4, we use the Method of Moving Spheres to prove the nonexistence of solutions for the prescribing Gaussian and scalar curvature equations
$$-\Delta u + 2 = R(x)e^u,$$
and
$$-\Delta u + \frac{n(n-2)}{4}u = \frac{n-2}{4(n-1)}R(x)u^{\frac{n+2}{n-2}}$$
on $S^2$ and on $S^n$ (n ≥ 3), respectively. We prove that if the function R(x) is rotationally symmetric and monotone in the region where it is positive, then both equations admit no solution. This provides a stronger necessary condition than the well-known Kazdan-Warner condition, and it also becomes a sufficient condition for the existence of solutions in most cases.
In Section 8.5, as an application of the maximum principle for integral equations introduced in Section 7.5, we study the integral equation in $R^n$
$$u(x) = \int_{R^n} \frac{1}{|x-y|^{n-\alpha}}\, u^{\frac{n+\alpha}{n-\alpha}}(y)\,dy,$$
for any real number α between 0 and n. It arises as an Euler-Lagrange equation for a functional in the context of the Hardy-Littlewood-Sobolev inequalities. Due to the different nature of the integral equation, the traditional Method of Moving Planes does not work. Hence we exploit its global property and develop a new idea, the Integral Form of the Method of Moving Planes, to obtain the symmetry and monotonicity of the solutions. The Maximum Principle for Integral Equations established in Chapter 7 is combined with estimates of various integral norms to carry on the moving of planes.
8.1 Outline of the Method of Moving Planes
To outline how the Method of Moving Planes works, we take the Euclidean space $R^n$ as an example. Let u be a positive solution of a certain partial differential equation. If we want to prove that it is symmetric and monotone in a given direction, we may assign that direction as the $x_1$ axis. For any real number λ, let
$$T_\lambda = \{x = (x_1, x_2, \cdots, x_n) \in R^n \mid x_1 = \lambda\}.$$
This is a plane perpendicular to the $x_1$ axis and the plane that we will move with. Let $\Sigma_\lambda$ denote the region to the left of the plane, i.e.
$$\Sigma_\lambda = \{x \in R^n \mid x_1 < \lambda\}.$$
Let
$$x^\lambda = (2\lambda - x_1, x_2, \cdots, x_n),$$
the reflection of the point $x = (x_1, \cdots, x_n)$ about the plane $T_\lambda$ (see Figure 1).

[Figure 1: the plane $T_\lambda$, the region $\Sigma_\lambda$ to its left, and a point x with its reflection $x^\lambda$.]
We compare the values of the solution u at the points x and $x^\lambda$, and we want to show that u is symmetric about some plane $T_{\lambda_o}$. To this end, let
$$w_\lambda(x) = u(x^\lambda) - u(x).$$
In order to show that there exists some $\lambda_o$ such that
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o},$$
we generally go through the following two steps.
Step 1. We first show that for λ sufficiently negative, we have
$$w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda. \qquad (8.1)$$
Then we are able to start from this neighborhood of $x_1 = -\infty$ and move the plane $T_\lambda$ along the $x_1$ direction to the right as long as inequality (8.1) holds.
Step 2. We continuously move the plane this way up to its limiting position. More precisely, we define
$$\lambda_o = \sup\{\lambda \mid w_\lambda(x) \geq 0, \; \forall x \in \Sigma_\lambda\}.$$
We prove that u is symmetric about the plane $T_{\lambda_o}$, that is, $w_{\lambda_o}(x) \equiv 0$ for all $x \in \Sigma_{\lambda_o}$. This is usually carried out by a contradiction argument. We show that if $w_{\lambda_o}(x) \not\equiv 0$, then there would exist $\lambda > \lambda_o$ such that (8.1) holds, and this contradicts the definition of $\lambda_o$.
From the above illustration, one can see that the key to the Method of Moving Planes is to establish inequality (8.1), and for partial differential equations, maximum principles are powerful tools for this task. For integral equations, we use a different idea: we estimate a certain norm of $w_\lambda$ on the set
$$\Sigma_\lambda^- = \{x \in \Sigma_\lambda \mid w_\lambda(x) < 0\}$$
where inequality (8.1) is violated. We show that this norm must be zero, and hence $\Sigma_\lambda^-$ is empty.
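The two steps can be mimicked in one dimension with an explicit function. For $u(x) = 1/(1+x^2)$, which is symmetric and decreasing about 0, $w_\lambda(x) = u(2\lambda - x) - u(x)$ is nonnegative on $x < \lambda$ exactly when $\lambda \leq 0$, so the moving plane stops at $\lambda_o = 0$, the plane of symmetry. A toy numerical illustration of the procedure (not from the book):

```python
import numpy as np

u = lambda x: 1.0 / (1.0 + x**2)          # symmetric, decreasing in |x|

def min_w(lam, grid=np.linspace(-50, 50, 20001)):
    """min of w_lambda(x) = u(2 lam - x) - u(x) over grid points x < lam."""
    x = grid[grid < lam]
    return np.min(u(2 * lam - x) - u(x))

lams = np.arange(-1.0, 1.0001, 0.01)
ok = [lam for lam in lams if min_w(lam) >= -1e-12]
lam_o = max(ok)                           # sup of lambdas with w_lambda >= 0
assert abs(lam_o) < 0.005                 # the plane stops at lambda_o = 0
```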
8.2 Applications of the Maximum Principles Based on Comparisons
In this section, we study some semilinear elliptic equations. We will apply the method of moving planes to establish the symmetry of the solutions. The essence of the method of moving planes is the application of various maximum principles. In the proof of each theorem, the readers will see vividly how the Maximum Principles Based on Comparisons are applied to narrow regions and to solutions with decay at infinity.
8.2.1 Symmetry of Solutions in a Unit Ball
We begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:
Theorem 8.2.1 Assume that f(·) is a Lipschitz continuous function such that
$$|f(p) - f(q)| \leq C_o|p - q| \qquad (8.2)$$
for some constant $C_o$. Then every positive solution u of
$$\begin{cases} -\Delta u = f(u), & x \in B_1(0) \\ u = 0, & \text{on } \partial B_1(0) \end{cases} \qquad (8.3)$$
is radially symmetric and monotone decreasing about the origin.
Proof.
As shown in Figure 4, let $T_\lambda = \{x \mid x_1 = \lambda\}$ be the plane perpendicular to the $x_1$ axis, and let $\Sigma_\lambda$ be the part of $B_1(0)$ to the left of the plane $T_\lambda$. For each $x \in \Sigma_\lambda$, let $x^\lambda$ be the reflection of the point x about the plane $T_\lambda$; more precisely, $x^\lambda = (2\lambda - x_1, x_2, \cdots, x_n)$.

[Figure 4: the unit ball $B_1(0)$, the plane $T_\lambda$, the cap $\Sigma_\lambda$, and a point x with its reflection $x^\lambda$.]
We compare the values of the solution u on $\Sigma_\lambda$ with those on its reflection. Let
$$u_\lambda(x) = u(x^\lambda), \quad \text{and} \quad w_\lambda(x) = u_\lambda(x) - u(x).$$
Then it is easy to see that $u_\lambda$ satisfies the same equation as u does. Applying the Mean Value Theorem to f(u), one can verify that $w_\lambda$ satisfies
$$\Delta w_\lambda + C(x, \lambda)w_\lambda(x) = 0, \quad x \in \Sigma_\lambda,$$
where
$$C(x, \lambda) = \frac{f(u_\lambda(x)) - f(u(x))}{u_\lambda(x) - u(x)},$$
and by condition (8.2),
$$|C(x, \lambda)| \leq C_o. \qquad (8.4)$$
Step 1: Start Moving the Plane.
We start from near the left end of the region. Obviously, for λ sufficiently close to -1, $\Sigma_\lambda$ is a narrow (in the $x_1$ direction) region, and on $\partial\Sigma_\lambda$, $w_\lambda(x) \geq 0$. (On $T_\lambda$, $w_\lambda(x) = 0$, while on the curved part of $\partial\Sigma_\lambda$, $w_\lambda(x) > 0$ since u > 0 in $B_1(0)$.)
Now we can apply the "Narrow Region Principle" (Corollary 7.4.1) to conclude that, for λ close to -1,
$$w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda. \qquad (8.5)$$
This provides a starting point for us to move the plane $T_\lambda$.
Step 2: Move the Plane to Its Right Limit.
We now increase the value of λ continuously; that is, we move the plane $T_\lambda$ to the right as long as inequality (8.5) holds. We show that, by moving in this way, the plane will not stop before hitting the origin. More precisely, let
$$\bar\lambda = \sup\{\lambda \mid w_\lambda(x) \geq 0, \; \forall x \in \Sigma_\lambda\};$$
we first claim that
$$\bar\lambda \geq 0. \qquad (8.6)$$
Otherwise, we will show that the plane can be moved further to the right by a small distance, and this would contradict the definition of $\bar\lambda$. In fact, if $\bar\lambda < 0$, then the image of the curved surface part of $\partial\Sigma_{\bar\lambda}$ under the reflection about $T_{\bar\lambda}$ lies inside $B_1(0)$, where u(x) > 0 by assumption. It follows that, on this part of $\partial\Sigma_{\bar\lambda}$, $w_{\bar\lambda}(x) > 0$. By the Strong Maximum Principle, we deduce that
$$w_{\bar\lambda}(x) > 0$$
in the interior of $\Sigma_{\bar\lambda}$.
Let $d_o$ be the maximum width of narrow regions to which we can apply the "Narrow Region Principle". Choose a small positive number δ such that $\delta \leq \min\{\frac{d_o}{2}, -\bar\lambda\}$. We consider the function $w_{\bar\lambda+\delta}(x)$ on the narrow region (see Figure 5):
$$\Omega_\delta = \Sigma_{\bar\lambda+\delta} \cap \left\{x \mid x_1 > \bar\lambda - \frac{d_o}{2}\right\}.$$
It satisfies
$$\begin{cases} \Delta w_{\bar\lambda+\delta} + C(x, \bar\lambda+\delta)w_{\bar\lambda+\delta} = 0, & x \in \Omega_\delta \\ w_{\bar\lambda+\delta}(x) \geq 0, & x \in \partial\Omega_\delta. \end{cases} \qquad (8.7)$$

[Figure 5: the narrow region $\Omega_\delta$ between the planes $T_{\bar\lambda - \frac{d_o}{2}}$ and $T_{\bar\lambda+\delta}$ inside the unit ball.]
The equation is obvious. To see the boundary condition, we first notice that it is satisfied on the two curved parts and on the flat part where $x_1 = \bar\lambda + \delta$ of the boundary $\partial\Omega_\delta$, due to the definition of $w_{\bar\lambda+\delta}$. To see that it is also true on the rest of the boundary, where $x_1 = \bar\lambda - \frac{d_o}{2}$, we use a continuity argument. Notice that on this part, $w_{\bar\lambda}$ is positive and bounded away from 0. More precisely and more generally, there exists a constant $c_o > 0$ such that
$$w_{\bar\lambda}(x) \geq c_o, \quad \forall x \in \Sigma_{\bar\lambda - \frac{d_o}{2}}.$$
Since $w_\lambda$ is continuous in λ, for δ sufficiently small, we still have
$$w_{\bar\lambda+\delta}(x) \geq 0, \quad \forall x \in \Sigma_{\bar\lambda - \frac{d_o}{2}}.$$
Hence, in particular, the boundary condition in (8.7) holds for such small δ. Now we can apply the "Narrow Region Principle" to conclude that
$$w_{\bar\lambda+\delta}(x) \geq 0, \quad \forall x \in \Omega_\delta,$$
and therefore
$$w_{\bar\lambda+\delta}(x) \geq 0, \quad \forall x \in \Sigma_{\bar\lambda+\delta}.$$
This contradicts the definition of $\bar\lambda$ and thus establishes (8.6).
(8.6) implies that
$$u(-x_1, x') \leq u(x_1, x'), \quad \forall x_1 \geq 0, \qquad (8.8)$$
where $x' = (x_2, \cdots, x_n)$.
We then start from λ close to 1 and move the plane $T_\lambda$ toward the left. Similarly, we obtain
$$u(-x_1, x') \geq u(x_1, x'), \quad \forall x_1 \geq 0. \qquad (8.9)$$
Combining the two opposite inequalities (8.8) and (8.9), we see that u(x) is symmetric about the plane $T_0$. Since we can place the $x_1$ axis in any direction, we conclude that u(x) must be radially symmetric about the origin. The monotonicity also follows easily from the argument. This completes the proof.
8.2.2 Symmetry of Solutions of $-\Delta u = u^p$ in $R^n$
In an elegant paper of Gidas, Ni, and Nirenberg [2], one interesting result is the symmetry of the positive solutions of the semilinear elliptic equation:
$$\Delta u + u^p = 0, \quad x \in R^n, \; n \geq 3. \qquad (8.10)$$
They proved
Theorem 8.2.2 For $p = \frac{n+2}{n-2}$, all the positive solutions of (8.10) with reasonable behavior at infinity, namely
$$u = O\left(\frac{1}{|x|^{n-2}}\right),$$
are radially symmetric and monotone decreasing about some point, and hence assume the form
$$u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x - x^o|^2)^{\frac{n-2}{2}}}$$
for λ > 0 and for some $x^o \in R^n$.
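For n = 3 (so that $\frac{n+2}{n-2} = 5$), the stated form of the solution can be checked against (8.10) using the radial Laplacian $u'' + \frac{2}{r}u'$; a quick finite-difference verification (the value of λ and the sample radii are arbitrary, and $x^o = 0$ is taken for simplicity):

```python
import numpy as np

n, lam = 3, 1.3
# u(r) = [n(n-2) lam^2]^{(n-2)/4} / (lam^2 + r^2)^{(n-2)/2}, here with n = 3
u = lambda r: (n * (n - 2) * lam**2) ** ((n - 2) / 4) / (lam**2 + r**2) ** ((n - 2) / 2)
p = (n + 2) / (n - 2)                     # the critical exponent, = 5 for n = 3

def residual(r, h=1e-4):
    """Finite-difference value of Delta u + u^p at radius r (radial Laplacian)."""
    d1 = (u(r + h) - u(r - h)) / (2 * h)
    d2 = (u(r + h) - 2 * u(r) + u(r - h)) / h**2
    return d2 + (n - 1) * d1 / r + u(r) ** p

for r in [0.5, 1.0, 2.0, 5.0]:
    assert abs(residual(r)) < 1e-5
```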
This uniqueness result, as was pointed out by R. Schoen, is in fact equivalent to a geometric result due to Obata [O]: a Riemannian metric on $S^n$ which is conformal to the standard one and has the same constant scalar curvature is the pull-back of the standard one under a conformal map of $S^n$ to itself. Recently, Caffarelli, Gidas, and Spruck [CGS] removed the decay assumption $u = O(|x|^{2-n})$ and proved the same result. In the case $1 \leq p < \frac{n+2}{n-2}$, Gidas and Spruck [GS] showed that the only nonnegative solution of (8.10) is identically zero. Then, in the authors' paper [CL1], a simpler and more elementary proof was given for almost the same result:
Theorem 8.2.3 i) For $p = \frac{n+2}{n-2}$, every positive $C^2$ solution of (8.10) must be radially symmetric and monotone decreasing about some point, and hence assumes the form
$$u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x - x^o|^2)^{\frac{n-2}{2}}}$$
for some λ > 0 and $x^o \in R^n$.
ii) For $p < \frac{n+2}{n-2}$, the only nonnegative solution of (8.10) is identically zero.
The proof of Theorem 8.2.2 is actually included in the more general proof of the first part of Theorem 8.2.3. However, to better illustrate the idea, we will first present the proof of Theorem 8.2.2 (mostly in our own way), and the readers will see vividly how the "Decay at Infinity" principle is applied here.
Proof of Theorem 8.2.2.
Define
$$\Sigma_\lambda = \{x = (x_1, \cdots, x_n) \in R^n \mid x_1 < \lambda\}, \quad T_\lambda = \partial\Sigma_\lambda,$$
and let $x^\lambda$ be the reflection point of x about the plane $T_\lambda$, i.e.
$$x^\lambda = (2\lambda - x_1, x_2, \cdots, x_n).$$
(See the previous Figure 1.)
Let
$$u_\lambda(x) = u(x^\lambda), \quad \text{and} \quad w_\lambda(x) = u_\lambda(x) - u(x).$$
The proof consists of three steps. In the first step, we start from the very left end of our region $R^n$, that is, near $x_1 = -\infty$. We will show that, for λ sufficiently negative,
$$w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda. \qquad (8.11)$$
Here, the "Decay at Infinity" principle is applied.
Then, in the second step, we will move our plane $T_\lambda$ in the $x_1$ direction toward the right as long as inequality (8.11) holds. The plane will stop at some limiting position, say at $\lambda = \lambda_o$. We will show that
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}.$$
This implies that the solution u is symmetric and monotone decreasing about the plane $T_{\lambda_o}$. Since the $x_1$ axis can be chosen in any direction, we conclude that u must be radially symmetric and monotone about some point.
Finally, in the third step, using the uniqueness theory for ordinary differential equations, we will show that the solutions can only assume the given form.
Step 1. Prepare to Move the Plane from Near $-\infty$.
To verify (8.11) for λ sufficiently negative, we apply the maximum principle to $w_\lambda(x)$. Write $\tau = \frac{n+2}{n-2}$. By the definition of $u_\lambda$, it is easy to see that $u_\lambda$ satisfies the same equation as u does. Then by the Mean Value Theorem, it is easy to verify that
$$-\Delta w_\lambda = u_\lambda^\tau(x) - u^\tau(x) = \tau\psi_\lambda^{\tau-1}(x)w_\lambda(x), \qquad (8.12)$$
where $\psi_\lambda(x)$ is some number between $u_\lambda(x)$ and u(x). Recalling the "Maximum Principle Based on Comparison" (Theorem 7.4.1), we see that here $c(x) = -\tau\psi_\lambda^{\tau-1}(x)$. By the "Decay at Infinity" argument (Corollary 7.4.2), it suffices to check the decay rate of $\psi_\lambda^{\tau-1}(x)$, and more precisely, only at the points $\tilde{x}$ where $w_\lambda$ is negative (see Remark 7.4.2). Apparently, at these points,
$$u_\lambda(\tilde{x}) < u(\tilde{x}),$$
and hence
$$0 \leq u_\lambda(\tilde{x}) \leq \psi_\lambda(\tilde{x}) \leq u(\tilde{x}).$$
By the decay assumption on the solution,
$$u(x) = O\left(\frac{1}{|x|^{n-2}}\right),$$
we derive immediately that
$$\psi_\lambda^{\tau-1}(\tilde{x}) = O\left(\left(\frac{1}{|\tilde{x}|^{n-2}}\right)^{\frac{4}{n-2}}\right) = O\left(\frac{1}{|\tilde{x}|^4}\right).$$
Here the power of $\frac{1}{|\tilde{x}|}$ is greater than two, which is what we desire (actually more than we need). Therefore, we can apply the "Maximum Principle Based on Comparison" to conclude that for λ sufficiently negative ($|\tilde{x}|$ sufficiently large), we must have (8.11). This completes the preparation for the moving of planes.
Step 2. Move the Plane to the Limiting Position to Derive Symmetry.
Now we can move the plane $T_\lambda$ toward the right, i.e., increase the value of λ, as long as inequality (8.11) holds. Define
$$\lambda_o = \sup\{\lambda \mid w_\lambda(x) \geq 0, \; \forall x \in \Sigma_\lambda\}.$$
Obviously, $\lambda_o < +\infty$, due to the asymptotic behavior of u near $x_1 = +\infty$. We claim that
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}. \qquad (8.13)$$
Otherwise, by the "Strong Maximum Principle" on unbounded domains (see Theorem 7.3.3), we have
$$w_{\lambda_o}(x) > 0 \quad \text{in the interior of } \Sigma_{\lambda_o}. \qquad (8.14)$$
We show that the plane $T_{\lambda_o}$ can still be moved a small distance to the right. More precisely, there exists a $\delta_o > 0$ such that, for all $0 < \delta < \delta_o$, we have
$$w_{\lambda_o+\delta}(x) \geq 0, \quad \forall x \in \Sigma_{\lambda_o+\delta}. \qquad (8.15)$$
This would contradict the definition of $\lambda_o$, and hence (8.13) must hold.
Recall that in the last section we used the “Narrow Region Principle” to derive (8.15). Unfortunately, it cannot be applied in this situation, because the “narrow region” here is unbounded, and we are not able to guarantee that $w_{\lambda_o}$ is bounded away from 0 on the left boundary of the “narrow region”. To overcome this difficulty, we introduce a new function
\[
\bar{w}_\lambda(x) = \frac{w_\lambda(x)}{\phi(x)},
\]
where
\[
\phi(x) = \frac{1}{|x|^q} \quad \text{with } 0 < q < n-2.
\]
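As a quick sanity check on this choice of auxiliary function, one can verify symbolically that the radial Laplacian of $\phi(r) = r^{-q}$ gives $\Delta\phi/\phi = -q(n-2-q)/r^2$: negative precisely when $0 < q < n-2$, and decaying like $|x|^{-2}$, slower than the $|x|^{-4}$ decay of $c(x)$. A minimal sympy sketch (our own verification, not part of the proof):

```python
import sympy as sp

n, q, r = sp.symbols('n q r', positive=True)

phi = r**(-q)
# radial Laplacian in R^n: phi'' + (n-1)/r * phi'
lap_phi = sp.diff(phi, r, 2) + (n - 1)/r*sp.diff(phi, r)

ratio = sp.simplify(lap_phi/phi)
# ratio should equal -q*(n-2-q)/r**2
print(sp.simplify(ratio + q*(n - 2 - q)/r**2))   # 0
```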
Then it is a straightforward calculation to verify that
\[
-\Delta \bar{w}_\lambda = 2 \nabla \bar{w}_\lambda \cdot \frac{\nabla \phi}{\phi} + \left( -\Delta w_\lambda + \frac{\Delta \phi}{\phi}\, w_\lambda \right) \frac{1}{\phi}. \tag{8.16}
\]
We have

Lemma 8.2.1 There exists an $R_o > 0$ (independent of $\lambda$) such that if $x^o$ is a minimum point of $\bar{w}_\lambda$ and $\bar{w}_\lambda(x^o) < 0$, then $|x^o| < R_o$.
We postpone the proof of the Lemma for a moment. Now suppose that (8.15) is violated for any $\delta > 0$. Then there exists a sequence of numbers $\{\delta_i\}$ tending to 0 and, for each $i$, a corresponding negative minimum $x^i$ of $\bar{w}_{\lambda_o+\delta_i}$. By Lemma 8.2.1, we have
\[
|x^i| \leq R_o, \quad \forall\, i = 1, 2, \ldots.
\]
Then there is a subsequence of $\{x^i\}$ (still denoted by $\{x^i\}$) which converges to some point $x^o \in \mathbb{R}^n$. Consequently,
\[
\nabla \bar{w}_{\lambda_o}(x^o) = \lim_{i \to \infty} \nabla \bar{w}_{\lambda_o+\delta_i}(x^i) = 0 \tag{8.17}
\]
and
\[
\bar{w}_{\lambda_o}(x^o) = \lim_{i \to \infty} \bar{w}_{\lambda_o+\delta_i}(x^i) \leq 0.
\]
However, we already know $\bar{w}_{\lambda_o} \geq 0$; therefore, we must have $\bar{w}_{\lambda_o}(x^o) = 0$. It follows that
\[
\nabla w_{\lambda_o}(x^o) = \nabla \bar{w}_{\lambda_o}(x^o)\,\phi(x^o) + \bar{w}_{\lambda_o}(x^o)\,\nabla\phi(x^o) = 0 + 0 = 0. \tag{8.18}
\]
On the other hand, since $w_{\lambda_o}(x^o) = 0$, by (8.14), $x^o$ must be on the boundary of $\Sigma_{\lambda_o}$. Then by the Hopf Lemma (see Theorem 7.3.3), the outward normal derivative satisfies
\[
\frac{\partial w_{\lambda_o}}{\partial \nu}(x^o) < 0.
\]
This contradicts (8.18). Now, to verify (8.15), what is left is to prove the Lemma.
Proof of Lemma 8.2.1. Assume that $x^o$ is a negative minimum of $\bar{w}_\lambda$. Then
\[
-\Delta \bar{w}_\lambda(x^o) \leq 0 \quad \text{and} \quad \nabla \bar{w}_\lambda(x^o) = 0. \tag{8.19}
\]
On the other hand, as we argued in Step 1, by the asymptotic behavior of $u$ at infinity, if $|x^o|$ is sufficiently large,
\[
c(x^o) := -\tau \psi_\lambda^{\tau-1}(x^o) > -\frac{q(n-2-q)}{|x^o|^2} \equiv \frac{\Delta\phi(x^o)}{\phi(x^o)}.
\]
It follows from (8.12) that
\[
\left( -\Delta w_\lambda + \frac{\Delta\phi}{\phi}\, w_\lambda \right)(x^o) > 0.
\]
This, together with (8.19), contradicts (8.16), and hence completes the proof of the Lemma.
Step 3. In the previous two steps, we showed that the positive solutions of (8.10) must be radially symmetric and monotone decreasing about some point in $\mathbb{R}^n$. Since the equation is invariant under translation, without loss of generality, we may assume that the solutions are symmetric about the origin. Then they satisfy the following ordinary differential equation:
\[
\begin{cases}
-u''(r) - \dfrac{n-1}{r}\, u'(r) = u^\tau, \\[4pt]
u'(0) = 0, \\[2pt]
u(0) = [n(n-2)]^{(n-2)/4} \big/ \lambda^{(n-2)/2},
\end{cases}
\]
for some $\lambda > 0$. One can verify that
\[
u(r) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + r^2)^{\frac{n-2}{2}}}
\]
is a solution, and by the uniqueness of the ODE problem, this is the only solution. Therefore, we conclude that every positive solution of (8.10) must assume the form
\[
u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x - x^o|^2)^{\frac{n-2}{2}}}
\]
for some $\lambda > 0$ and some $x^o \in \mathbb{R}^n$. This completes the proof of the Theorem.
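The closed form above can be checked against the ODE symbolically. The following sympy sketch is our own verification (not part of the proof), for the sample values $n = 3$, $\lambda = 1$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n, lam = 3, 1                      # sample dimension and scaling parameter (our choice)
tau = sp.Rational(n + 2, n - 2)

u = (n*(n - 2)*lam**2)**sp.Rational(n - 2, 4) / (lam**2 + r**2)**sp.Rational(n - 2, 2)

# residual of -u'' - (n-1)/r u' = u^tau; should simplify to zero
residual = sp.simplify(-sp.diff(u, r, 2) - (n - 1)/r*sp.diff(u, r) - u**tau)
print(residual)                              # 0
print(sp.limit(sp.diff(u, r), r, 0, '+'))    # 0, i.e. u'(0) = 0
```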
Proof of Theorem 8.2.3.

i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8.2.2. The main difference is that we have no decay assumption on the solution $u$ at infinity; hence the method of moving planes cannot be applied directly to $u$. So we first make a Kelvin transform to define a new function
\[
v(x) = \frac{1}{|x|^{n-2}}\, u\!\left(\frac{x}{|x|^2}\right).
\]
Then obviously, $v(x)$ has the desired decay rate $\frac{1}{|x|^{n-2}}$ at infinity, but it has a possible singularity at the origin. It is easy to verify that $v$ satisfies the same equation as $u$ does, except at the origin:
\[
\Delta v + v^\tau(x) = 0, \quad x \in \mathbb{R}^n \setminus \{0\}, \ n \geq 3. \tag{8.20}
\]
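The fact that the Kelvin transform preserves the critical equation rests on the identity $\Delta\big(|x|^{2-n} u(x/|x|^2)\big) = |x|^{-(n+2)}(\Delta u)(x/|x|^2)$, combined with $v^\tau = |x|^{-(n+2)} u^\tau(x/|x|^2)$. A hedged sympy sketch of ours checks this identity on one concrete radial profile in dimension $n = 3$ (the profile $f$ is an arbitrary test choice, not from the text):

```python
import sympy as sp

n = 3
r, s = sp.symbols('r s', positive=True)

def radial_lap(h, var):
    # radial Laplacian in R^n
    return sp.diff(h, var, 2) + (n - 1)/var*sp.diff(h, var)

f = 1/(1 + s**2)                  # arbitrary radial test profile (our choice)
v = r**(2 - n) * f.subs(s, 1/r)   # Kelvin transform of f

lhs = radial_lap(v, r)
rhs = radial_lap(f, s).subs(s, 1/r) / r**(n + 2)
print(sp.simplify(lhs - rhs))     # 0
```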
We will apply the method of moving planes to the function v, and show
that v is radially symmetric and monotone decreasing about some point. If
the center point is the origin, then by the deﬁnition of v, u is also symmetric
and monotone decreasing about the origin. If the center point of v is not the
origin, then v has no singularity at the origin, and hence u has the desired
decay at infinity. Then the same argument as in the proof of Theorem 8.2.2 would imply that $u$ is symmetric and monotone decreasing about some point.
Define
\[
v_\lambda(x) = v(x^\lambda), \qquad w_\lambda(x) = v_\lambda(x) - v(x).
\]
Because $v(x)$ may be singular at the origin, correspondingly $w_\lambda$ may be singular at the point $x^\lambda = (2\lambda, 0, \ldots, 0)$. Hence, instead of on $\Sigma_\lambda$, we consider $w_\lambda$ on
\[
\tilde{\Sigma}_\lambda = \Sigma_\lambda \setminus \{x^\lambda\}.
\]
And in our proof, we treat the singular point carefully. Each time, we show that the points of interest are away from the singularities, so that we can carry the method of moving planes through to the end, to show the existence of a $\lambda_o$ such that $w_{\lambda_o}(x) \equiv 0$ for $x \in \tilde{\Sigma}_{\lambda_o}$, and that $v$ is strictly increasing in the $x_1$ direction in $\tilde{\Sigma}_{\lambda_o}$.
As in the proof of Theorem 8.2.2, we see that $v_\lambda$ satisfies the same equation as $v$ does, and
\[
-\Delta w_\lambda = \tau \psi_\lambda^{\tau-1}(x)\, w_\lambda(x),
\]
where $\psi_\lambda(x)$ is some number between $v_\lambda(x)$ and $v(x)$.
Step 1. We show that, for $\lambda$ sufficiently negative, we have
\[
w_\lambda(x) \geq 0, \quad \forall x \in \tilde{\Sigma}_\lambda. \tag{8.21}
\]
By the asymptotic behavior
\[
v(x) \sim \frac{1}{|x|^{n-2}},
\]
we derive immediately that, at a negative minimum point $x^o$ of $w_\lambda$,
\[
\psi_\lambda^{\tau-1}(x^o) \sim \left( \frac{1}{|x^o|^{n-2}} \right)^{\tau-1} = \frac{1}{|x^o|^4};
\]
the power of $\frac{1}{|x^o|}$ is greater than two, and we have the desired decay rate for $c(x) := -\tau \psi_\lambda^{\tau-1}(x)$, as mentioned in Corollary 7.4.2. Hence we can apply the “Decay at Infinity” argument to $w_\lambda(x)$. The difference here is that $w_\lambda$ has a singularity at $x^\lambda$; hence we need to show that the minimum of $w_\lambda$ is away from $x^\lambda$.
Actually, we will show that
\[
\text{if } \inf_{\tilde{\Sigma}_\lambda} w_\lambda(x) < 0, \text{ then the infimum is achieved in } \Sigma_\lambda \setminus B_1(x^\lambda). \tag{8.22}
\]
To see this, we first note that for $x \in B_1(0)$,
\[
v(x) \geq \min_{\partial B_1(0)} v(x) =: \epsilon_0 > 0,
\]
due to the fact that $v(x) > 0$ and $\Delta v \leq 0$. Then let $\lambda$ be so negative that $v(x) \leq \epsilon_0$ for $x \in B_1(x^\lambda)$. This is possible because $v(x) \to 0$ as $|x| \to \infty$. For such $\lambda$, obviously $w_\lambda(x) \geq 0$ on $B_1(x^\lambda) \setminus \{x^\lambda\}$. This implies (8.22).
Now, similar to Step 1 in the proof of Theorem 8.2.2, we can deduce that $w_\lambda(x) \geq 0$ for $\lambda$ sufficiently negative.
Step 2. Again, define
\[
\lambda_o = \sup\{\lambda \mid w_\lambda(x) \geq 0, \ \forall x \in \tilde{\Sigma}_\lambda\}.
\]
We will show that
\[
\text{if } \lambda_o < 0, \text{ then } w_{\lambda_o}(x) \equiv 0, \ \forall x \in \tilde{\Sigma}_{\lambda_o}.
\]
Define $\bar{w}_\lambda = \frac{w_\lambda}{\phi}$ the same way as in the proof of Theorem 8.2.2. Suppose that $\bar{w}_{\lambda_o}(x) \not\equiv 0$; then by the Maximum Principle we have
\[
\bar{w}_{\lambda_o}(x) > 0, \quad \text{for } x \in \tilde{\Sigma}_{\lambda_o}.
\]
The rest of the proof is similar to Step 2 in the proof of Theorem 8.2.2, except that now we need to take care of the singularities. Again, let $\lambda_k \searrow \lambda_o$ be a sequence such that $\bar{w}_{\lambda_k}(x) < 0$ for some $x \in \tilde{\Sigma}_{\lambda_k}$. We need to show that, for each $k$, $\inf_{\tilde{\Sigma}_{\lambda_k}} \bar{w}_{\lambda_k}(x)$ can be achieved at some point $x^k \in \tilde{\Sigma}_{\lambda_k}$, and that the sequence $\{x^k\}$ is bounded away from the singularities $x^{\lambda_k}$ of $w_{\lambda_k}$. This
can be seen from the following facts:

a) There exist $\epsilon > 0$ and $\delta > 0$ such that
\[
\bar{w}_{\lambda_o}(x) \geq \epsilon \quad \text{for } x \in B_\delta(x^{\lambda_o}) \setminus \{x^{\lambda_o}\}.
\]
b)
\[
\lim_{\lambda \to \lambda_o}\, \inf_{x \in B_\delta(x^\lambda)} \bar{w}_\lambda(x) \geq \inf_{x \in B_\delta(x^{\lambda_o})} \bar{w}_{\lambda_o}(x) \geq \epsilon.
\]
Fact (a) can be shown by noting that $\bar{w}_{\lambda_o}(x) > 0$ on $\tilde{\Sigma}_{\lambda_o}$ and $\Delta w_{\lambda_o} \leq 0$, while fact (b) is obvious.
Now, through an argument similar to that in the proof of Theorem 8.2.2, one can easily arrive at a contradiction. Therefore $\bar{w}_{\lambda_o}(x) \equiv 0$.
If $\lambda_o = 0$, then we can carry out the above procedure in the opposite direction; namely, we move the plane in the negative $x_1$ direction from positive infinity toward the origin. If our planes $T_\lambda$ stop somewhere before the origin, we derive the symmetry and monotonicity in the $x_1$ direction by the above argument. If they stop at the origin again, we also obtain the symmetry and monotonicity in the $x_1$ direction by combining the two inequalities obtained in the two opposite directions. Since the $x_1$ direction can be chosen arbitrarily, we conclude that the solution $u$ must be radially symmetric about some point.
ii) To show the nonexistence of solutions in the case $p < \frac{n+2}{n-2}$, we notice that after the Kelvin transform, $v$ satisfies
\[
\Delta v + \frac{1}{|x|^{n+2-p(n-2)}}\, v^p(x) = 0, \quad x \in \mathbb{R}^n \setminus \{0\}.
\]
Due to the singularity of the coefficient of $v^p$ at the origin, one can easily see that $v$ can only be symmetric about the origin if it is not identically zero. Hence $u$ must also be symmetric about the origin. Now, given any two points $x^1$ and $x^2$ in $\mathbb{R}^n$, since equation (8.10) is invariant under translations and rotations, we may assume that the origin is the midpoint of the line segment $\overline{x^1 x^2}$. Then, from the above argument, we must have $u(x^1) = u(x^2)$. It follows that $u$ is constant. Finally, from equation (8.10), we conclude that $u \equiv 0$. This completes the proof of the Theorem.
8.2.3 Symmetry of Solutions for $-\Delta u = e^u$ in $\mathbb{R}^2$

When considering prescribing Gaussian curvature on two dimensional compact manifolds, if the sequence of approximate solutions “blows up”, then by rescaling and taking the limit, one would arrive at the following equation in the entire space $\mathbb{R}^2$:
\[
\begin{cases}
\Delta u + \exp u = 0, & x \in \mathbb{R}^2, \\[2pt]
\displaystyle\int_{\mathbb{R}^2} \exp u(x)\, dx < +\infty.
\end{cases} \tag{8.23}
\]
The classification of the solutions of this limiting equation would provide essential information on the original problems on manifolds; it is also interesting in its own right. It is known that
\[
\phi_{\lambda, x^o}(x) = \ln \frac{32\lambda^2}{(4 + \lambda^2 |x - x^o|^2)^2},
\]
for any $\lambda > 0$ and any point $x^o \in \mathbb{R}^2$, is a family of explicit solutions.
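One can confirm symbolically that these are indeed solutions. A small sympy check of ours (for $x^o = 0$ and symbolic $\lambda$) that $\phi_{\lambda,0}$ solves $\Delta u + e^u = 0$, using the radial Laplacian in $\mathbb{R}^2$:

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)

phi = sp.log(32*lam**2 / (4 + lam**2*r**2)**2)
# radial Laplacian in R^2: phi'' + (1/r) phi', plus the nonlinearity
residual = sp.simplify(sp.diff(phi, r, 2) + sp.diff(phi, r)/r + sp.exp(phi))
print(residual)                   # 0
```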
We will use the method of moving planes to prove:

Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect to some point in $\mathbb{R}^2$ and hence assumes the form of $\phi_{\lambda, x^o}(x)$.

To this end, we first need to obtain some decay rate of the solutions near infinity.
Some Global and Asymptotic Behavior of the Solutions
The following Theorem gives the asymptotic behavior of the solutions near
inﬁnity, which is essential to the application of the method of moving planes.
Theorem 8.2.5 If $u(x)$ is a solution of (8.23), then as $|x| \to +\infty$,
\[
\frac{u(x)}{\ln |x|} \to -\frac{1}{2\pi} \int_{\mathbb{R}^2} \exp u(x)\, dx \leq -4
\]
uniformly.
This Theorem is a direct consequence of the following two Lemmas.
Lemma 8.2.2 (W. Ding) If $u$ is a solution of
\[
-\Delta u = e^u, \quad x \in \mathbb{R}^2,
\]
and $\int_{\mathbb{R}^2} \exp u(x)\, dx < +\infty$, then
\[
\int_{\mathbb{R}^2} \exp u(x)\, dx \geq 8\pi.
\]
Proof. For $-\infty < t < \infty$, let $\Omega_t = \{x \mid u(x) > t\}$. One can obtain
\[
\int_{\Omega_t} \exp u(x)\, dx = -\int_{\Omega_t} \Delta u = \int_{\partial\Omega_t} |\nabla u|\, ds,
\qquad
-\frac{d}{dt}\,|\Omega_t| = \int_{\partial\Omega_t} \frac{ds}{|\nabla u|}.
\]
By the Schwarz inequality and the isoperimetric inequality,
\[
\int_{\partial\Omega_t} \frac{ds}{|\nabla u|} \int_{\partial\Omega_t} |\nabla u|\, ds \geq |\partial\Omega_t|^2 \geq 4\pi |\Omega_t|.
\]
Hence
\[
-\left( \frac{d}{dt}\,|\Omega_t| \right) \int_{\Omega_t} \exp u(x)\, dx \geq 4\pi |\Omega_t|,
\]
and so
\[
\frac{d}{dt} \left( \int_{\Omega_t} \exp u(x)\, dx \right)^2 = 2 \exp t \left( \frac{d}{dt}\,|\Omega_t| \right) \int_{\Omega_t} \exp u(x)\, dx \leq -8\pi |\Omega_t|\, e^t.
\]
Integrating from $-\infty$ to $\infty$ gives
\[
-\left( \int_{\mathbb{R}^2} \exp u(x)\, dx \right)^2 \leq -8\pi \int_{\mathbb{R}^2} \exp u(x)\, dx,
\]
which implies $\int_{\mathbb{R}^2} \exp u(x)\, dx \geq 8\pi$, as desired.
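The lower bound $8\pi$ is sharp: it is attained exactly by the explicit solutions $\phi_{\lambda, x^o}$. A sympy computation (our own check, for $x^o = 0$) of the total integral in polar coordinates:

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)

e_phi = 32*lam**2 / (4 + lam**2*r**2)**2                 # exp of the explicit solution
total = sp.integrate(e_phi * 2*sp.pi*r, (r, 0, sp.oo))   # polar coordinates in R^2
print(sp.simplify(total))         # 8*pi, independent of lam
```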
Lemma 8.2.2 enables us to obtain the asymptotic behavior of the solutions
at inﬁnity.
Lemma 8.2.3 If $u(x)$ is a solution of (8.23), then as $|x| \to +\infty$,
\[
\frac{u(x)}{\ln |x|} \to -\frac{1}{2\pi} \int_{\mathbb{R}^2} \exp u(x)\, dx \quad \text{uniformly.}
\]
Proof. By a result of Brezis and Merle [BM], we see that the condition $\int_{\mathbb{R}^2} \exp u(x)\, dx < \infty$ implies that the solution $u$ is bounded from above. Let
\[
w(x) = \frac{1}{2\pi} \int_{\mathbb{R}^2} \big( \ln |x-y| - \ln(|y|+1) \big) \exp u(y)\, dy.
\]
Then it is easy to see that
\[
\Delta w(x) = \exp u(x), \quad x \in \mathbb{R}^2,
\]
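This identity is just the statement that $\frac{1}{2\pi}\ln|x|$ is the fundamental solution of the Laplacian in $\mathbb{R}^2$: differentiating under the integral sign, each kernel $\ln|x-y|$ contributes $2\pi\delta_y$, while the $\ln(|y|+1)$ term is constant in $x$. A quick symbolic check of ours that $\ln|x|$ is harmonic away from the origin:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

g = sp.log(sp.sqrt(x**2 + y**2))  # ln|x| in R^2, away from the origin
lap = sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2))
print(lap)                        # 0
```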
and we will show that
\[
\frac{w(x)}{\ln |x|} \to \frac{1}{2\pi} \int_{\mathbb{R}^2} \exp u(x)\, dx \quad \text{uniformly as } |x| \to +\infty. \tag{8.24}
\]
To see this, we need only verify that
\[
I := \int_{\mathbb{R}^2} \frac{\ln |x-y| - \ln(|y|+1) - \ln |x|}{\ln |x|}\, e^{u(y)}\, dy \to 0
\]
as $|x| \to \infty$. Write $I = I_1 + I_2 + I_3$, where $I_1$, $I_2$, and $I_3$ are the integrals over the three regions
\[
D_1 = \{y \mid |x-y| \leq 1\}, \qquad D_2 = \{y \mid |x-y| > 1 \text{ and } |y| \leq K\},
\]
and
\[
D_3 = \{y \mid |x-y| > 1 \text{ and } |y| > K\},
\]
respectively. We may assume that $|x| \geq 3$.
a) To estimate $I_1$, we simply notice that
\[
I_1 \leq C \int_{|x-y| \leq 1} e^{u(y)}\, dy - \frac{1}{\ln |x|} \int_{|x-y| \leq 1} \ln |x-y|\, e^{u(y)}\, dy.
\]
Then, by the boundedness of $e^{u(y)}$ and of $\int_{\mathbb{R}^2} e^{u(y)}\, dy$, we see that $I_1 \to 0$ as $|x| \to \infty$.
b) For each fixed $K$, in the region $D_2$ we have, as $|x| \to \infty$,
\[
\frac{\ln |x-y| - \ln(|y|+1) - \ln |x|}{\ln |x|} \to 0,
\]
hence $I_2 \to 0$.
c) To see that $I_3 \to 0$, we use the fact that for $|x-y| > 1$,
\[
\left| \frac{\ln |x-y| - \ln(|y|+1) - \ln |x|}{\ln |x|} \right| \leq C.
\]
Then let $K \to \infty$. This verifies (8.24).
Consider the function $v(x) = u(x) + w(x)$. Then $\Delta v \equiv 0$ and
\[
v(x) \leq C + C_1 \ln(|x| + 1)
\]
for some constants $C$ and $C_1$. Therefore $v$ must be constant. This completes the proof of our Lemma.
Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain our Theorem 8.2.5.
The Method of Moving Planes and Symmetry
In this subsection, we apply the method of moving planes to establish the
symmetry of the solutions. For a given solution, we move the family of lines
which are orthogonal to a given direction from negative inﬁnity to a critical
position and then show that the solution is symmetric in that direction about
the critical position. We also show that the solution is strictly increasing before
the critical position. Since the direction can be chosen arbitrarily, we conclude
that the solution must be radially symmetric about some point. Finally, by the uniqueness of the solution of the following O.D.E. problem,
\[
\begin{cases}
u''(r) + \dfrac{1}{r}\, u'(r) = f(u), \\[4pt]
u'(0) = 0, \quad u(0) = 1,
\end{cases}
\]
we see that the solutions of (8.23) must assume the form $\phi_{\lambda, x^0}(x)$.
Assume that $u(x)$ is a solution of (8.23). Without loss of generality, we show the monotonicity and symmetry of the solution in the $x_1$ direction.
For $\lambda \in \mathbb{R}^1$, let
\[
\Sigma_\lambda = \{(x_1, x_2) \mid x_1 < \lambda\}
\]
and
\[
T_\lambda = \partial\Sigma_\lambda = \{(x_1, x_2) \mid x_1 = \lambda\}.
\]
Let
\[
x^\lambda = (2\lambda - x_1, x_2)
\]
be the reflection point of $x = (x_1, x_2)$ about the line $T_\lambda$ (see the previous Figure 1). Define
\[
w_\lambda(x) = u(x^\lambda) - u(x).
\]
A straightforward calculation shows that
\[
\Delta w_\lambda(x) + (\exp \psi_\lambda(x))\, w_\lambda(x) = 0, \tag{8.25}
\]
where $\psi_\lambda(x)$ is some number between $u(x)$ and $u_\lambda(x)$.
Step 1. As in the previous examples, we show that, for $\lambda$ sufficiently negative, we have
\[
w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda.
\]
By the “Decay at Infinity” argument, the key is to check the decay rate of $c(x) := -e^{\psi_\lambda(x)}$ at points $x^o$ where $w_\lambda(x)$ is negative. At such a point,
\[
u\big((x^o)^\lambda\big) < u(x^o), \quad \text{and hence} \quad \psi_\lambda(x^o) \leq u(x^o).
\]
By virtue of Theorem 8.2.5, we have
\[
e^{\psi_\lambda(x^o)} = O\!\left( \frac{1}{|x^o|^4} \right). \tag{8.26}
\]
Notice that we are in dimension two, while the key function $\phi = \frac{1}{|x|^q}$ given in Corollary 7.4.2 requires $0 < q < n-2$; hence it does not work here. As a modification, we choose
\[
\phi(x) = \ln(|x| - 1).
\]
Then it is easy to verify that
\[
\frac{\Delta\phi}{\phi}(x) = \frac{-1}{|x|\,(|x|-1)^2 \ln(|x|-1)}.
\]
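This formula can be checked with the radial Laplacian in $\mathbb{R}^2$ (a sympy sketch of ours). Note that $\Delta\phi/\phi$ is negative for $|x| > 2$ and decays like $1/(|x|^3 \ln|x|)$, slower than the $|x|^{-4}$ rate in (8.26), which is exactly what makes the comparison below work:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

phi = sp.log(r - 1)
# radial Laplacian in R^2: phi'' + (1/r) phi'
lap_phi = sp.diff(phi, r, 2) + sp.diff(phi, r)/r

ratio = sp.simplify(lap_phi/phi)
target = -1/(r*(r - 1)**2*sp.log(r - 1))
print(sp.simplify(ratio - target))   # 0
```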
It follows from this and (8.26) that
\[
e^{\psi_\lambda(x^o)} + \frac{\Delta\phi}{\phi}(x^o) < 0 \quad \text{for sufficiently large } |x^o|. \tag{8.27}
\]
This is what we desire.
Then, similar to the argument in Subsection 5.2, we introduce the function
\[
\bar{w}_\lambda(x) = \frac{w_\lambda(x)}{\phi(x)}.
\]
It satisfies
\[
\Delta \bar{w}_\lambda + 2 \nabla\bar{w}_\lambda \cdot \frac{\nabla\phi}{\phi} + \left( e^{\psi_\lambda(x)} + \frac{\Delta\phi}{\phi} \right) \bar{w}_\lambda = 0. \tag{8.28}
\]
Moreover, by the asymptotic behavior of the solution $u$ near infinity (see Lemma 8.2.3), we have, for each fixed $\lambda$,
\[
\bar{w}_\lambda(x) = \frac{u(x^\lambda) - u(x)}{\ln(|x|-1)} \to 0, \quad \text{as } |x| \to \infty. \tag{8.29}
\]
Now, if $w_\lambda(x) < 0$ somewhere, then by (8.29) there exists a point $x^o$ which is a negative minimum of $\bar{w}_\lambda(x)$. At this point, one can easily derive a contradiction from (8.27) and (8.28). This completes Step 1.
Step 2. Define
\[
\lambda_o = \sup\{\lambda \mid w_\lambda(x) \geq 0, \ \forall x \in \Sigma_\lambda\}.
\]
We show that
\[
\text{if } \lambda_o < 0, \text{ then } w_{\lambda_o}(x) \equiv 0, \ \forall x \in \Sigma_{\lambda_o}.
\]
The argument is entirely similar to that in Step 2 of the proof of Theorem 8.2.2, except that we use $\phi(x) = \ln(-x_1 + 2)$. This completes the proof of the Theorem.
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
Let $M$ be a Riemannian manifold of dimension $n \geq 3$ with metric $g_o$. Given a function $R(x)$ on $M$, one interesting problem in differential geometry is to find a metric $g$ that is pointwise conformal to the original metric $g_o$ and has scalar curvature $R(x)$. This is equivalent to finding a positive solution of the semilinear elliptic equation
\[
-\frac{4(n-1)}{n-2}\, \Delta_o u + R_o(x)\, u = R(x)\, u^{\frac{n+2}{n-2}}, \quad x \in M, \tag{8.30}
\]
where $\Delta_o$ is the Laplace-Beltrami operator of $(M, g_o)$ and $R_o(x)$ is the scalar curvature of $g_o$.
In recent years, there has been a lot of progress in understanding equation (8.30). When $(M, g_o)$ is the standard sphere $S^n$, the equation becomes
\[
-\Delta u + \frac{n(n-2)}{4}\, u = \frac{n-2}{4(n-1)}\, R(x)\, u^{\frac{n+2}{n-2}}, \quad u > 0, \ x \in S^n. \tag{8.31}
\]
It is the so-called critical case, where a lack of compactness occurs. In this case, the well-known Kazdan-Warner condition gives rise to many examples of $R$ for which there is no solution. In the last few years, a tremendous amount of effort has been put into finding the existence of solutions, and many interesting results have been obtained (see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).

One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. For equation (8.31) on $S^n$, to establish a priori estimates, a useful technique employed by most authors was a ‘blowing-up’ analysis. However, it does not work near the points where $R(x) = 0$. Due to this limitation, people had to assume that $R$ was positive and bounded away from 0. This technical assumption became somewhat standard and has been used by many authors for quite a few years. For example, see the articles by Bahri [Ba], Bahri and Coron [BC], Chang and Yang [CY1], Chang, Gursky, and Yang [CY2], Li [Li2] [Li3], and Schoen and Zhang [SZ].
Then, for functions with changing signs, are there any a priori estimates? In [CL6], we answered this question affirmatively. We removed the above well-known technical assumption and obtained a priori estimates on the solutions of (8.31) for functions $R$ which are allowed to change signs.

The main difficulty encountered by the traditional ‘blowing-up’ analysis is near the points where $R(x) = 0$. To overcome this, we used the ‘method of moving planes’ in a local way to control the growth of the solutions in this region and obtained an a priori estimate.
In fact, we derived a priori bounds for the solutions of the more general equation
\[
-\Delta u + \frac{n(n-2)}{4}\, u = R(x)\, u^p, \quad u > 0, \ x \in S^n, \tag{8.32}
\]
for any exponent $p$ between $1 + \frac{1}{A}$ and $A$, where $A$ is an arbitrary positive number. For a fixed $A$, the bound is independent of $p$.
Proposition 8.3.1 Assume $R \in C^{2,\alpha}(S^n)$ and $|\nabla R(x)| \geq \beta_0$ for any $x$ with $|R(x)| \leq \delta_0$. Then there are positive constants $\epsilon$ and $C$, depending only on $\beta_0$, $\delta_0$, $M$, and $\|R\|_{C^{2,\alpha}(S^n)}$, such that for any solution $u$ of (8.32), we have $u \leq C$ in the region where $|R(x)| \leq \epsilon$.
In this proposition, we essentially required $R(x)$ to grow linearly near its zeros, which seems a little restrictive. Recently, in the case $p = \frac{n+2}{n-2}$, Lin [Lin1] weakened the condition by assuming only polynomial growth of $R$ near its zeros. To prove this result, he first established a Liouville-type theorem in $\mathbb{R}^n$ and then used a blowing-up argument. Later, by introducing a new auxiliary function, we sharpened our method of moving planes to further weaken the condition and to obtain a more general result:
Theorem 8.3.1 Let $M$ be a locally conformally flat manifold. Let
\[
\Gamma = \{x \in M \mid R(x) = 0\}.
\]
Assume that $\Gamma \in C^{2,\alpha}$ and $R \in C^{2,\alpha}(M)$ satisfies $\frac{\partial R}{\partial \nu} \leq 0$, where $\nu$ is the outward normal (pointing to the region where $R$ is negative) of $\Gamma$, and
\[
\frac{R(x)}{|\nabla R(x)|} \ \text{ is continuous near } \Gamma, \quad = 0 \text{ on } \Gamma. \tag{8.33}
\]
Let $D$ be any compact subset of $M$ and let $p$ be any number greater than 1. Then the solutions of the equation
\[
-\Delta_o u + R_o(x)\, u = R(x)\, u^p \quad \text{in } M \tag{8.34}
\]
are uniformly bounded near $\Gamma \cap D$.
One can see that our condition (8.33) is weaker than Lin’s polynomial growth restriction. To illustrate, one may simply look at functions $R(x)$ which behave like $\exp\{-\frac{1}{d(x)}\}$ near $\Gamma$, where $d(x)$ is the distance from the point $x$ to $\Gamma$. Obviously, this kind of function satisfies our condition, but is not of polynomial growth.
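To make the comparison concrete, take the one-dimensional model $R(d) = \exp(-1/d)$ with $d$ the distance to $\Gamma$ (our illustration, not the book's example verbatim). Then $R/|R'| = d^2 \to 0$ as $d \to 0^+$, so a condition of type (8.33) holds, while $R$ vanishes faster than every power of $d$, so no polynomial lower bound applies. A sympy check:

```python
import sympy as sp

d = sp.symbols('d', positive=True)

R = sp.exp(-1/d)                  # model profile near Gamma (our illustration)
ratio = sp.simplify(R/sp.diff(R, d))
print(ratio)                               # d**2
print(sp.limit(ratio, d, 0, '+'))          # 0 -> condition of type (8.33) holds
print(sp.limit(R/d**5, d, 0, '+'))         # 0 -> decays faster than any power (k = 5 shown)
```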
Moreover, our restriction on the exponent is much weaker: besides being $\frac{n+2}{n-2}$, $p$ can be any number greater than 1.
8.3.2 The A Priori Estimates
Now we estimate the solutions of equation (8.34) and prove Theorem 8.3.1. Since $M$ is a locally conformally flat manifold, a local flat metric can be chosen so that equation (8.34) on a compact set $D \subset M$ can be reduced to
\[
-\Delta u = R(x)\, u^p, \quad p > 1, \tag{8.35}
\]
on a compact set $K$ in $\mathbb{R}^n$. This is the equation we will consider throughout the section.
The proof of the Theorem is divided into two parts. We first derive the bound in the region(s) where $R$ is negatively bounded away from zero. Then, based on this bound, we use the ‘method of moving planes’ to estimate the solutions in the region(s) surrounding the set $\Gamma$.

Part I. In the region(s) where R is negative

In this part, we derive estimates in the region(s) where $R$ is negatively bounded away from zero. They come from a standard elliptic estimate for subharmonic functions and an integral bound on the solutions. We prove

Theorem 8.3.2 The solutions of (8.35) are uniformly bounded in the region $\{x \in K : R(x) \leq 0 \text{ and } \operatorname{dist}(x, \Gamma) \geq \delta\}$. The bound depends only on $\delta$ and the lower bound of $R$.
To prove Theorem 8.3.2, we introduce the following lemma of Gilbarg and Trudinger [GT].

Lemma 8.3.1 (See Theorem 9.20 in [GT].) Let $u \in W^{2,n}(D)$ and suppose $\Delta u \geq 0$. Then for any ball $B_{2r}(x) \subset D$ and any $q > 0$, we have
\[
\sup_{B_r(x)} u \leq C \left( \frac{1}{r^n} \int_{B_{2r}(x)} (u^+)^q\, dV \right)^{\frac{1}{q}}, \tag{8.36}
\]
where $C = C(n, D, q)$.
By virtue of this Lemma, to establish an a priori bound on the solutions, one only needs to obtain an integral bound. In fact, we prove

Lemma 8.3.2 Let $B$ be any ball in $K$ that is bounded away from $\Gamma$. Then there is a constant $C$ such that
\[
\int_B u^p\, dx \leq C. \tag{8.37}
\]
Proof. Let $\Omega$ be an open neighborhood of $K$ such that $\operatorname{dist}(\partial\Omega, K) \geq 1$. Let $\varphi$ be the first eigenfunction of $-\Delta$ in $\Omega$, i.e.,
\[
\begin{cases}
-\Delta\varphi = \lambda_1 \varphi(x), & x \in \Omega, \\
\varphi(x) = 0, & x \in \partial\Omega.
\end{cases}
\]
Let
\[
\operatorname{sgn} R =
\begin{cases}
1 & \text{if } R(x) > 0, \\
0 & \text{if } R(x) = 0, \\
-1 & \text{if } R(x) < 0.
\end{cases}
\]
Let $\alpha = \frac{1+2p}{p-1}$. Multiply both sides of equation (8.35) by $\varphi^\alpha |R|^\alpha \operatorname{sgn} R$ and then integrate. Taking into account that $\alpha > 2$ and that $\varphi$ and $R$ are bounded in $\Omega$, through a straightforward calculation we have
\[
\int_\Omega \varphi^\alpha |R|^{1+\alpha} u^p\, dx \leq C_1 \int_\Omega \varphi^{\alpha-2} |R|^{\alpha-2} u\, dx \leq C_2 \int_\Omega \varphi^{\frac{\alpha}{p}} |R|^{\alpha-2} u\, dx.
\]
Applying the Hölder inequality on the right-hand side, we arrive at
\[
\int_\Omega \varphi^\alpha |R|^{1+\alpha} u^p\, dx \leq C_3. \tag{8.38}
\]
Now (8.37) follows suit. This completes the proof of the Lemma.

The Proof of Theorem 8.3.2. It is a direct consequence of Lemma 8.3.1 and Lemma 8.3.2.
Part II. In the region where R is small
In this part, we estimate the solutions in a neighborhood of $\Gamma$, where $R$ is small.

Let $u$ be a given solution of (8.35), and let $x^0 \in \Gamma$. It is sufficient to show that $u$ is bounded in a neighborhood of $x^0$. The key ingredient is to use the method of moving planes to show that, near $x^0$, along any direction that is close to the direction of $\nabla R$, the values of the solution $u$ are comparable. This result, together with the boundedness of the integral $\int u^p$ in a small neighborhood of $x^0$, leads to an a priori bound on the solutions. Since the method of moving planes cannot be applied directly to $u$, we construct some auxiliary functions. The proof consists of the following four steps.
Step 1. Transforming the region
Make a translation, a rotation, or, if necessary, a Kelvin transform. Denote the new system by $x = (x_1, y)$ with $y = (x_2, \ldots, x_n)$. Let the $x_1$ axis point to the right. In the new system, $x^0$ becomes the origin, the image of the set $\Omega^+ := \{x \mid R(x) > 0\}$ is contained in the left of the $y$-plane, and the image of $\Gamma$ is tangent to the $y$-plane at the origin. The image of $\Gamma$ is also uniformly convex near the origin.
Let $x_1 = \phi(y)$ be the equation of the image of $\Gamma$ near the origin. Let
\[
\partial_1 D := \{x \mid x_1 = \phi(y) + \epsilon, \ x_1 \geq -2\epsilon\}.
\]
The intersection of $\partial_1 D$ and the plane $\{x \in \mathbb{R}^n \mid x_1 = -2\epsilon\}$ encloses a part of the plane, which is denoted by $\partial_2 D$. Let $D$ be the region enclosed by the two surfaces $\partial_1 D$ and $\partial_2 D$. (See Figure 6.)
[Figure 6: the region $D$ near the origin in the $(x_1, y)$ coordinates, bounded by $\partial_1 D$ and $\partial_2 D$; the image of $\Gamma$ separates $R(x) > 0$ (left) from $R(x) < 0$ (right), and $T_\lambda$ is a moving plane.]
The small positive number $\epsilon$ is chosen to ensure that
(a) $\partial_1 D$ is uniformly convex,
(b) $\frac{\partial R}{\partial x_1} \leq 0$ in $D$, and
(c) $\frac{R(x)}{|\nabla R(x)|}$ is continuous in $D$.
Step 2. Introducing an auxiliary function
Let $m = \max_{\partial_1 D} u$. Let $\tilde{u}$ be a $C^2$ extension of $u$ from $\partial_1 D$ to the entire $\partial D$, such that
\[
0 \leq \tilde{u} \leq 2m \quad \text{and} \quad |\nabla\tilde{u}| \leq Cm.
\]
Let $w$ be the harmonic function
\[
\begin{cases}
\Delta w = 0, & x \in D, \\
w = \tilde{u}, & x \in \partial D.
\end{cases}
\]
Then, by the maximum principle and a standard elliptic estimate, we have
\[
0 \leq w(x) \leq 2m \quad \text{and} \quad \|w\|_{C^1(D)} \leq C_1 m. \tag{8.39}
\]
Introduce a new function
\[
v(x) = u(x) - w(x) + C_0 m\, [\epsilon + \phi(y) - x_1] + m\, [\epsilon + \phi(y) - x_1]^2,
\]
which is well defined in $D$. Here $C_0$ is a large constant to be determined later. One can easily verify that $v(x)$ satisfies the following:
\[
\begin{cases}
\Delta v + \psi(y) + f(x, v) = 0, & x \in D, \\
v(x) = 0, & x \in \partial_1 D,
\end{cases}
\]
where
\[
\psi(y) = -C_0 m\, \Delta\phi(y) - m\, \Delta[\epsilon + \phi(y)]^2 - 2m,
\]
and
\[
f(x, v) = 2m x_1 \Delta\phi(y) + R(x)\, \big\{ v + w(x) - C_0 m [\epsilon + \phi(y) - x_1] - m [\epsilon + \phi(y) - x_1]^2 \big\}^p.
\]
At this step, we show that
\[
v(x) > 0, \quad \forall x \in D.
\]
We consider the following two cases.

Case 1) $\phi(y) + \frac{\epsilon}{2} \leq x_1 \leq \phi(y) + \epsilon$. In this region, $R(x)$ is negative and bounded away from 0. By Theorem 8.3.2 and the standard elliptic estimate, we have
\[
\left| \frac{\partial u}{\partial x_1} \right| \leq C_2 m,
\]
for some positive constant $C_2$. It follows from this and (8.39) that
\[
\frac{\partial v}{\partial x_1} \leq (C_1 + C_2 - C_0)\, m - 2m\, (\epsilon + \phi(y) - x_1).
\]
Noticing that $(\epsilon + \phi(y) - x_1) \geq 0$, one can choose $C_0$ sufficiently large such that
\[
\frac{\partial v}{\partial x_1} < 0.
\]
Then, since
\[
v(x) = 0, \quad \forall x \in \partial_1 D,
\]
we have $v(x) > 0$.
Case 2) $-2\epsilon \leq x_1 \leq \phi(y) + \frac{\epsilon}{2}$. In this region, again by (8.39), we have
\[
v(x) \geq -w(x) + C_0 m \frac{\epsilon}{2} \geq -2m + C_0 m \frac{\epsilon}{2}.
\]
Again, choosing $C_0$ sufficiently large (depending only on $\epsilon$), we arrive at $v(x) > 0$.
Step 3. Applying the Method of Moving Planes to v in the x₁ direction

In the previous step, we showed that $v(x) \geq 0$ in $D$ and $v(x) = 0$ on $\partial_1 D$. This lays a foundation for the moving of planes. Now we can start from the rightmost tip of the region $D$ and move the plane perpendicular to the $x_1$ axis toward the left. More precisely, let
\[
\Sigma_\lambda = \{x \in D \mid x_1 \geq \lambda\}, \qquad T_\lambda = \{x \in \mathbb{R}^n \mid x_1 = \lambda\}.
\]
Let $x^\lambda = (2\lambda - x_1, y)$ be the reflection point of $x$ with respect to the plane $T_\lambda$. Let
\[
v_\lambda(x) = v(x^\lambda) \quad \text{and} \quad w_\lambda(x) = v_\lambda(x) - v(x).
\]
We are going to show that
\[
w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda. \tag{8.40}
\]
We want to show that the above inequality holds for $-\epsilon_1 \leq \lambda \leq \epsilon$ for some $0 < \epsilon_1 < \epsilon$. This choice of $\epsilon_1$ is to guarantee that, when the plane $T_\lambda$ moves to $\lambda = -\epsilon_1$, the reflection of $\Sigma_\lambda$ about the plane $T_\lambda$ still lies in the region $D$.
(8.40) is true when $\lambda$ is less than but close to $\epsilon$, because $\Sigma_\lambda$ is a narrow region and $f(x, v)$ is Lipschitz continuous in $v$ (actually, $\frac{\partial f}{\partial v} = p\, R(x)\, u^{p-1}$). For a detailed argument, the reader may see the first example in Section 5.
Now we decrease $\lambda$, that is, move the plane $T_\lambda$ toward the left, as long as inequality (8.40) remains true. We show that the moving of planes can be carried on provided
\[
f(x, v(x)) \leq f(x^\lambda, v(x)) \quad \text{for } x = (x_1, y) \in D \text{ with } x_1 > \lambda > -\epsilon_1. \tag{8.41}
\]
In fact, it is easy to see that $v_\lambda$ satisfies the equation
\[
\Delta v_\lambda + \psi(y) + f(x^\lambda, v_\lambda) = 0,
\]
and hence $w_\lambda$ satisfies
\[
\begin{aligned}
-\Delta w_\lambda &= f(x^\lambda, v_\lambda) - f(x^\lambda, v) + f(x^\lambda, v) - f(x, v) \\
&= R(x^\lambda)(v_\lambda - v) + f(x^\lambda, v) - f(x, v) \\
&= R(x^\lambda)\, w_\lambda + f(x^\lambda, v) - f(x, v).
\end{aligned}
\]
If (8.41) holds, then we have
\[
-\Delta w_\lambda - R(x^\lambda)\, w_\lambda(x) \geq 0. \tag{8.42}
\]
This enables us to apply the maximum principle to $w_\lambda$.
Let
\[
\lambda_o = \inf\{\lambda \mid w_\lambda(x) \geq 0, \ \forall x \in \Sigma_\lambda\}.
\]
If $\lambda_o > -\epsilon_1$, then, since $v(x) > 0$ in $D$, by virtue of (8.42) and the maximum principle, we have
\[
w_{\lambda_o}(x) > 0, \quad \forall x \in \Sigma_{\lambda_o},
\]
and
\[
\frac{\partial w_{\lambda_o}}{\partial x_1} < 0, \quad \forall x \in T_{\lambda_o} \cap \Sigma_{\lambda_o}.
\]
Then, similar to the argument in the examples in Section 5, one can derive a contradiction.
It is elementary to see that (8.41) holds if
\[
\frac{\partial f(x, v)}{\partial x_1} \leq 0 \quad \text{in the set } \{x \in D \mid R(x) < 0 \text{ or } x_1 > -2\epsilon_1\}.
\]
To estimate $\frac{\partial f}{\partial x_1}$, we use
\[
\frac{\partial f}{\partial x_1} = 2m\, \Delta\phi(y) + u^{p-1} \left\{ \frac{\partial R}{\partial x_1}\, u + R(x)\, p \left[ \frac{\partial w}{\partial x_1} + C_0 m + 2m\, (\epsilon + \phi(y) - x_1) \right] \right\}.
\]
First, we notice that the uniform convexity of the image of $\Gamma$ near the origin implies
\[
\Delta\phi(y) \leq -a_0 < 0 \tag{8.43}
\]
for some constant $a_0$.
We consider the following two possibilities.

a) $R(x) \leq 0$. Choose $C_0$ sufficiently large so that
\[
\frac{\partial w}{\partial x_1} + C_0 m \geq 0.
\]
Noticing that $\frac{\partial R}{\partial x_1} \leq 0$ and $(\epsilon + \phi(y) - x_1) \geq 0$ in $D$, we have $\frac{\partial f}{\partial x_1} \leq 0$.
b) $R(x) > 0$, $x = (x_1, y)$ with $x_1 > -2\epsilon_1$.

In the part where $u \geq 1$, we use $u$ to control
\[
\left( R(x) \Big/ \frac{\partial R}{\partial x_1} \right) p \left[ \frac{\partial w}{\partial x_1} + C_0 m + 2m\, (\epsilon + \phi(y) - x_1) \right].
\]
More precisely, we write
\[
\frac{\partial f}{\partial x_1} = 2m\, \Delta\phi(y) + u^{p-1}\, \frac{\partial R}{\partial x_1} \left\{ u + \left( R(x) \Big/ \frac{\partial R}{\partial x_1} \right) p \left[ \frac{\partial w}{\partial x_1} + C_0 m + 2m\, (\epsilon + \phi(y) - x_1) \right] \right\}.
\]
Choose $\epsilon_1$ sufficiently small, such that
\[
\left| \frac{\partial R}{\partial x_1} \right| \sim |\nabla R|.
\]
Then, by condition (8.33) on $R$, we can make $R(x) \big/ \frac{\partial R}{\partial x_1}$ arbitrarily small by a small choice of $\epsilon_1$, and therefore arrive at $\frac{\partial f}{\partial x_1} \leq 0$.
In the part where $u \leq 1$, by virtue of (8.43), we can use $2m\, \Delta\phi(y)$ to control
\[
R\, p \left[ \frac{\partial w}{\partial x_1} + C_0 m + 2m\, (\epsilon + \phi(y) - x_1) \right].
\]
Here again we use the smallness of $R$.

So far, our conclusion is: the method of moving planes can be carried on up to $\lambda = -\epsilon_1$. More precisely, for any $\lambda$ between $-\epsilon_1$ and $\epsilon$, inequality (8.40) is true.
Step 4. Deriving the a priori bound
Inequality (8.40) implies that, in a small neighborhood of the origin, the function $v(x)$ is monotone decreasing in the $x_1$ direction. A similar argument shows that this remains true if we rotate the $x_1$ axis by a small angle. Therefore, for any point $x^0 \in \Gamma$, one can find a cone $\Delta_{x^0}$ with $x^0$ as its vertex, staying to the left of $x^0$, such that
\[
v(x) \geq v(x^0), \quad \forall x \in \Delta_{x^0}. \tag{8.44}
\]
Noticing that $w(x)$ is bounded in $D$, (8.44) leads immediately to
\[
u(x) + C_6 \geq u(x^0), \quad \forall x \in \Delta_{x^0}. \tag{8.45}
\]
More generally, by a similar argument, one can show that (8.45) is true for any point $x^0$ in a small neighborhood of $\Gamma$. Furthermore, the intersection of the cone $\Delta_{x^0}$ with the set $\{x \mid R(x) \geq \frac{\delta_0}{2}\}$ has positive measure, and the lower bound of the measure depends only on $\delta_0$ and the $C^1$ norm of $R$. Now the a priori bound on the solutions is a consequence of (8.45) and an integral bound on $u$, which can be derived from Lemma 8.3.2.

This completes the proof of Theorem 8.3.1.
8.4 Method of Moving Spheres
8.4.1 The Background
Given a function $K(x)$ on the two dimensional standard sphere $S^2$, the well-known Nirenberg problem is to find conditions on $K(x)$ so that it can be realized as the Gaussian curvature of some conformally related metric. This is equivalent to solving the following nonlinear elliptic equation on $S^2$:
\[
-\Delta u + 1 = K(x)\, e^{2u}, \tag{8.46}
\]
where $\Delta$ is the Laplacian operator associated to the standard metric.
On the higher dimensional sphere $S^n$, a similar problem was raised by Kazdan and Warner:

Which functions $R(x)$ can be realized as the scalar curvature of some conformally related metric?

The problem is equivalent to the existence of positive solutions of another elliptic equation:
\[
-\Delta u + \frac{n(n-2)}{4}\, u = \frac{n-2}{4(n-1)}\, R(x)\, u^{\frac{n+2}{n-2}}. \tag{8.47}
\]
For a unifying description, on $S^2$ we let $R(x) = 2K(x)$ be the scalar curvature; then (8.46) is equivalent to
\[
-\Delta u + 2 = R(x)\, e^u. \tag{8.48}
\]
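The equivalence of (8.46) and (8.48) is a one-line substitution; writing $\tilde{u} = 2u$ (our notation for this check):

```latex
-\Delta\tilde{u} + 2 \;=\; 2\left(-\Delta u + 1\right) \;=\; 2K(x)\,e^{2u} \;=\; R(x)\,e^{\tilde{u}},
```

so $u$ solves (8.46) if and only if $\tilde{u} = 2u$ solves (8.48).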
Both equations are so-called ‘critical’ in the sense that a lack of compactness occurs. Besides the obvious necessary condition that $R(x)$ be positive somewhere, there are other well-known obstructions, found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions are:
\[
\int_{S^n} X(R)\, dA_g = 0, \tag{8.49}
\]
where $dA_g$ is the volume element of the metric $g = e^u g_o$ for $n = 2$ and $g = u^{\frac{4}{n-2}} g_o$ for $n \geq 3$, respectively. The vector field $X$ in (8.49) is a conformal Killing field associated to the standard metric $g_o$ on $S^n$. In the following, we call these necessary conditions Kazdan-Warner type conditions.
necessary conditions KazdanWarner type conditions.
These conditions give rise to many examples of R(x) for which (8.48) and
(8.47) have no solution. In particular, a monotone rotationally symmetric
function R admits no solution.
Then for which R, can one solve the equations? What are the necessary
and suﬃcient conditions for the equations to have a solution? These are very
interesting problems in geometry.
In recent years, various sufficient conditions were obtained by many authors
(see, for example, [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS]
[ES] [Ha] [Ho] [KW1] [KW2] [Li1] [Li2] [Mo] and the references therein).
However, there are still gaps between those sufficient conditions and the
necessary ones. One may then naturally ask: 'Are the necessary conditions of
Kazdan-Warner type also sufficient?' This question had been open for many
years. In [CL3], we gave a negative answer to it.
To find necessary and sufficient conditions, it is natural to begin with
functions having certain symmetries. A pioneering work in this direction is
[Mo] by Moser. He showed that, for even functions R on S^2, the necessary and
sufficient condition for (8.48) to have a solution is that R be positive
somewhere. Note that, in the class of even functions, the Kazdan-Warner
condition (8.49) is satisfied automatically. Thus one can interpret Moser's
result as:

In the class of even functions, the Kazdan-Warner type conditions are the
necessary and sufficient conditions for (8.48) to have a solution.
A simple question was asked by Kazdan [Ka]:

What are the necessary and sufficient conditions for rotationally symmetric
functions?

To be more precise, for rotationally symmetric functions R = R(θ), where θ is
the distance from the north pole, the Kazdan-Warner type conditions take the
form:

R > 0 somewhere and R′ changes signs.    (8.50)
In [XY], Xu and Yang found a family of rotationally symmetric functions R
satisfying conditions (8.50) for which problem (8.48) has no rotationally
symmetric solution. Even though one did not know whether there were any other
non-symmetric solutions for these functions R, this result is still very
interesting, because it suggests that (8.50) may not be a sufficient
condition.
In our earlier paper [CL3], we presented a class of rotationally symmetric
functions satisfying (8.50) for which problem (8.48) has no solution at all,
and thus, for the first time, pointed out that the Kazdan-Warner type
conditions are not sufficient for (8.48) to have a solution. First, we proved
the non-existence of rotationally symmetric solutions:

Proposition 8.4.1 Let R be rotationally symmetric. If

R is monotone in the region where R > 0, and R ≢ C,    (8.51)

then problems (8.48) and (8.47) have no rotationally symmetric solution.
This generalizes Xu and Yang's result, since their family of functions R
satisfies (8.51). We also cover the higher dimensional cases.
Although we believed that for all such functions R there is no solution at
all, we were not able to prove it at that time. However, we could show this
for a family of such functions.
Proposition 8.4.2 There exists a family of functions R satisfying the
Kazdan-Warner type conditions (8.50), for which problem (8.48) has no
solution at all.

At that time, we were not able to obtain the counterpart of this result in
higher dimensions.
In [XY], Xu and Yang also proved:

Proposition 8.4.3 Let R be rotationally symmetric. Assume that
i) R is nondegenerate;
ii) R′ changes signs in the region where R > 0.    (8.52)
Then problem (8.48) has a solution.
The above results and many other existence results tend to lead people to
believe that what really counts is whether R′ changes signs in the region
where R > 0. We conjectured in [CL3] that for rotationally symmetric R,
condition (8.52), instead of (8.50), would be the necessary and sufficient
condition for (8.48) or (8.47) to have a solution.
The main objective of this section is to present the proof of the necessary
part of the conjecture, which appeared in our later paper [CL4]. In [CL3], to
prove Proposition 8.4.2, we used the method of 'moving planes' to show that
the solutions of (8.48) are rotationally symmetric. This method only works
for a special family of functions satisfying (8.51), and it does not work in
higher dimensions. In [CL4], we introduced a new idea, which we call the
method of 'moving spheres'. It works in all dimensions and for all functions
satisfying (8.51). Instead of showing the symmetry of the solutions, we
obtain a comparison formula which leads to a direct contradiction. In an
earlier work [P], a similar method was used by P. Padilla to show the radial
symmetry of the solutions of some nonlinear Dirichlet problems on annuli.
The following are our main results.

Theorem 8.4.1 Let R be continuous and rotationally symmetric. If

R is monotone in the region where R > 0, and R ≢ C,    (8.53)

then problems (8.48) and (8.47) have no solution at all.
We also generalize this result to a family of non-symmetric functions.

Theorem 8.4.2 Let R be a continuous function. Let B be a geodesic ball
centered at the North Pole x_{n+1} = 1. Assume that
i) R(x) ≥ 0 and R is nondecreasing in the x_{n+1} direction, for x ∈ B;
ii) R(x) ≤ 0, for x ∈ S^n \ B.
Then problems (8.48) and (8.47) have no solution at all.
Combining our Theorem 8.4.1 with Xu and Yang's Proposition 8.4.3, one can
obtain a necessary and sufficient condition in the nondegenerate case:

Corollary 8.4.1 Let R be rotationally symmetric and nondegenerate, in the
sense that R′′ ≠ 0 whenever R′ = 0. Then the necessary and sufficient
condition for (8.48) to have a solution is

R > 0 somewhere and R′ changes signs in the region where R > 0.
8.4.2 Necessary Conditions
In this subsection, we prove Theorem 8.4.1 and Theorem 8.4.2.
For convenience, we make a stereographic projection from S^n to R^n. Then it
is equivalent to consider the following equations in Euclidean space R^n:

−Δu = R(r) e^{u},  x ∈ R^2,    (8.54)

−Δu = R(r) u^{τ},  u > 0,  x ∈ R^n,  n ≥ 3,    (8.55)

with appropriate asymptotic growth of the solutions at infinity:

u ∼ −4 ln |x| for n = 2;  u ∼ C |x|^{2−n} for n ≥ 3,    (8.56)

for some C > 0. Here Δ is the Euclidean Laplacian operator, r = |x|, and
τ = (n+2)/(n−2) is the so-called critical Sobolev exponent. The function R(r)
is the projection of the original R in equations (8.48) and (8.47); this
projection does not change the monotonicity of the function. The function R
is also bounded and continuous, and we assume these properties throughout the
subsection.
For a radial function R satisfying (8.53), there are three possibilities:
i) R is nonpositive everywhere;
ii) R (≢ C) is nonnegative everywhere and monotone;
iii) R > 0 and nonincreasing for r < r_o, and R ≤ 0 for r > r_o; or vice
versa.
The first two cases violate the Kazdan-Warner type conditions, hence there is
no solution. Thus, we only need to show that in the remaining case there is
also no solution. Without loss of generality, we may assume that

R(r) > 0, R′(r) ≤ 0 for r < 1;  R(r) ≤ 0 for r ≥ 1.    (8.57)
The proof of Theorem 8.4.1 is based on the following comparison formulas.
Lemma 8.4.1 Let R be a continuous function satisfying (8.57), and let u be a
solution of (8.54) and (8.56) for n = 2. Then

u(λx) > u(λx/|x|^2) − 4 ln |x|,  ∀ x ∈ B_1(0),  0 < λ ≤ 1.    (8.58)

Lemma 8.4.2 Let R be a continuous function satisfying (8.57), and let u be a
solution of (8.55) and (8.56) for n > 2. Then

u(λx) > |x|^{2−n} u(λx/|x|^2),  ∀ x ∈ B_1(0),  0 < λ ≤ 1.    (8.59)
We first prove Lemma 8.4.2; a similar idea will work for Lemma 8.4.1.

Proof of Lemma 8.4.2. We use a new idea, called the method of 'moving
spheres'. Let x ∈ B_1(0); then λx ∈ B_λ(0). The reflection point of λx about
the sphere ∂B_λ(0) is λx/|x|^2. We compare the values of u at the pairs of
points λx and λx/|x|^2. In step 1, we start with the unit sphere and show
that (8.59) is true for λ = 1. This is done by the Kelvin transform and the
strong maximum principle. Then in step 2, we move (shrink) the sphere
∂B_λ(0) towards the origin. We show that one can always shrink ∂B_λ(0) a
little bit before reaching the origin. By doing so, we establish the
inequality for all x ∈ B_1(0) and λ ∈ (0, 1].
Step 1. In this step, we establish the inequality for x ∈ B_1(0) and λ = 1.
We show that

u(x) > |x|^{2−n} u(x/|x|^2),  ∀ x ∈ B_1(0).    (8.60)

Let v(x) = |x|^{2−n} u(x/|x|^2) be the Kelvin transform of u. Then it is easy
to verify that v satisfies the equation

−Δv = R(1/r) v^{τ},    (8.61)

and v is regular.
It follows from (8.57) that Δu < 0 and Δv ≥ 0 in B_1(0). Thus −Δ(u − v) > 0.
Noticing that u − v = 0 on ∂B_1(0) and applying the maximum principle, we
obtain

u > v in B_1(0),    (8.62)

which is equivalent to (8.60).
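The Kelvin transform computation behind (8.61) rests on the classical
identity Δ(|x|^{2−n} u(x/|x|^2)) = |x|^{−n−2} (Δu)(x/|x|^2). As a sanity
check, not part of the original text, the following sympy sketch verifies
this identity numerically at a sample point for n = 3 and an arbitrary smooth
test function:

```python
import sympy as sp

# Spot check (n = 3) of the Kelvin transform identity used in Step 1:
#   Delta( |x|^{2-n} u(x/|x|^2) ) = |x|^{-n-2} (Delta u)(x/|x|^2),
# which is how v inherits -Delta v = R(1/r) v^tau from -Delta u = R(r) u^tau.
x1, x2, x3 = sp.symbols('x1 x2 x3')
n = 3
r2 = x1**2 + x2**2 + x3**2

def u(y1, y2, y3):
    # an arbitrary smooth test function
    return sp.exp(-(y1**2 + y2**2 + y3**2)) + y1 * y2

# left-hand side: Laplacian of the Kelvin transform of u
v = r2**sp.Rational(2 - n, 2) * u(x1 / r2, x2 / r2, x3 / r2)
lap_v = sum(sp.diff(v, s, 2) for s in (x1, x2, x3))

# right-hand side: |x|^{-n-2} (Delta u)(x/|x|^2)
w1, w2, w3 = sp.symbols('w1 w2 w3')
lap_u = sum(sp.diff(u(w1, w2, w3), s, 2) for s in (w1, w2, w3))
rhs = r2**sp.Rational(-n - 2, 2) * lap_u.subs({w1: x1/r2, w2: x2/r2, w3: x3/r2})

pt = {x1: 0.4, x2: -0.2, x3: 0.9}
assert abs(float((lap_v - rhs).subs(pt))) < 1e-8
```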
Step 2. In this step, we move the sphere ∂B_λ(0) towards λ = 0. We show that
the moving cannot be stopped until λ reaches the origin. This is done again
by the Kelvin transform and the strong maximum principle.
Let u_λ(x) = λ^{(n−2)/2} u(λx). Then

−Δu_λ = R(λr) u_λ^{τ}(x).    (8.63)

Let v_λ(x) = |x|^{2−n} u_λ(x/|x|^2) be the Kelvin transform of u_λ. Then it
is easy to verify that

−Δv_λ = R(λ/r) v_λ^{τ}(x).    (8.64)

Let w_λ = u_λ − v_λ. Then by (8.63) and (8.64),

Δw_λ + R(λ/r) ψ_λ w_λ = [R(λ/r) − R(λr)] u_λ^{τ},    (8.65)

where ψ_λ is some function with values between τ u_λ^{τ−1} and τ v_λ^{τ−1}.
Taking into account assumption (8.57), we have

R(λ/r) − R(λr) ≤ 0 for r ≤ 1, λ ≤ 1.

It follows from (8.65) that

Δw_λ + C_λ(x) w_λ ≤ 0,    (8.66)

where C_λ(x) is a bounded function if λ is bounded away from 0.
It is easy to see that for any λ, strict inequality holds somewhere in
(8.66). Thus, applying the strong maximum principle, we see that the
inequality (8.59) is equivalent to

w_λ ≥ 0.    (8.67)

From step 1, we know (8.67) is true for (x, λ) ∈ B_1(0) × {1}. Now we
decrease λ. Suppose (8.67) does not hold for all λ ∈ (0, 1], and let λ_o > 0
be the smallest number such that the inequality is true for
(x, λ) ∈ B_1(0) × [λ_o, 1]. We will derive a contradiction by showing that
for λ close to and less than λ_o, the inequality is still true. In fact, we
can apply the strong maximum principle and then the Hopf lemma to (8.66) for
λ = λ_o to get:

w_{λ_o} > 0 in B_1  and  ∂w_{λ_o}/∂r < 0 on ∂B_1.    (8.68)

These, combined with the fact that w_λ ≡ 0 on ∂B_1, imply that (8.67) holds
for λ close to and less than λ_o.
Proof of Lemma 8.4.1. In step 1, we let

v(x) = u(x/|x|^2) − 4 ln |x|.

In step 2, let

u_λ(x) = u(λx) + 2 ln λ,  v_λ(x) = u_λ(x/|x|^2) − 4 ln |x|.

Then, arguing as in the proof of Lemma 8.4.2, we derive the conclusion of
Lemma 8.4.1.
Now we are ready to prove Theorem 8.4.1.

Proof of Theorem 8.4.1. Taking λ to 0 in (8.58), we get ln |x| ≥ 0 for
|x| < 1, which is impossible. Letting λ → 0 in (8.59) and using the fact that
u(0) > 0, we again obtain a contradiction. This completes the proof of
Theorem 8.4.1.

Proof of Theorem 8.4.2. Noting that in the proof of Theorem 8.4.1 we only
compare the values of the solutions along the same radial direction, one can
easily adapt the argument there to derive the conclusion of Theorem 8.4.2.
8.5 Method of Moving Planes in Integral Forms and
Symmetry of Solutions for an Integral Equation
Let R^n be the n-dimensional Euclidean space, and let α be a real number
satisfying 0 < α < n. Consider the integral equation

u(x) = ∫_{R^n} (1/|x−y|^{n−α}) u(y)^{(n+α)/(n−α)} dy.    (8.69)
It arises as an Euler-Lagrange equation for a functional under a constraint
in the context of the Hardy-Littlewood-Sobolev inequalities. In [L], Lieb
classified the maximizers of the functional, and thus obtained the best
constant in the HLS inequalities. He then posed the classification of all the
critical points of the functional, that is, the solutions of the integral
equation (8.69), as an open problem.
This integral equation is also closely related to the following family of
semilinear partial differential equations:

(−Δ)^{α/2} u = u^{(n+α)/(n−α)}.    (8.70)
In the special case n ≥ 3 and α = 2, it becomes

−Δu = u^{(n+2)/(n−2)}.    (8.71)

As we mentioned in Section 6, solutions to this equation were studied by
Gidas, Ni, and Nirenberg [GNN]; Caffarelli, Gidas, and Spruck [CGS]; Chen and
Li [CL]; and Li [Li]. Recently, Wei and Xu [WX] generalized this result to
the solutions of (8.70) with α being any even number between 0 and n.
Apparently, for other real values of α between 0 and n, equation (8.70) is
also of practical interest and importance. For instance, it arises as the
Euler-Lagrange equation of the functional

I(u) = ∫_{R^n} |(−Δ)^{α/4} u|^2 dx / ( ∫_{R^n} |u|^{2n/(n−α)} dx )^{(n−α)/n}.
The classification of the solutions would provide the best constant in the
inequality of the critical Sobolev imbedding from H^{α/2}(R^n) to
L^{2n/(n−α)}(R^n):

( ∫_{R^n} |u|^{2n/(n−α)} dx )^{(n−α)/n} ≤ C ∫_{R^n} |(−Δ)^{α/4} u|^2 dx.
As usual, here (−Δ)^{α/2} is defined by the Fourier transform. We can show
(see [CLO]) that all the solutions of the partial differential equation
(8.70) satisfy our integral equation (8.69), and vice versa. Therefore, to
classify the solutions of this family of partial differential equations, we
only need to work on the integral equation (8.69).
In order for either the Hardy-Littlewood-Sobolev inequality or the above
mentioned critical Sobolev imbedding to make sense, u must be in
L^{2n/(n−α)}(R^n). Under this assumption, we can show (see [CLO]) that a
positive solution u is in fact bounded, and therefore possesses higher
regularity. Furthermore, by using the method of moving planes, we can show
that if a solution is locally L^{2n/(n−α)}, then it is in L^{2n/(n−α)}(R^n).
Hence, in the following, we call a solution u regular if it is locally
L^{2n/(n−α)}. It is interesting to notice that if this condition is violated,
then a solution may not be bounded. A simple example is

u(x) = 1/|x|^{(n−α)/2},

which is a singular solution. We studied such solutions in [CLO1].
We will use the method of moving planes to prove:

Theorem 8.5.1 Every positive regular solution u(x) of (8.69) is radially
symmetric and decreasing about some point x_o, and therefore assumes the form

c ( t/(t^2 + |x − x_o|^2) )^{(n−α)/2},    (8.72)

with some positive constants c and t.
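For α = 2, the family (8.72) corresponds, up to the choice of the constant c,
to the well-known solutions of the PDE (8.71). The following sympy sketch (an
illustration added here, not part of the original text; the normalization
c = (n(n−2))^{(n−2)/4} is one standard choice) spot-checks for n = 3 that
such a function solves (8.71):

```python
import sympy as sp

# Spot check (n = 3, alpha = 2, x_o = 0): with c = (n(n-2))^{(n-2)/4},
# u(x) = c * ( t / (t^2 + |x|^2) )^{(n-2)/2} solves  -Delta u = u^{(n+2)/(n-2)},
# i.e. equation (8.71), for every t > 0.
x1, x2, x3, t = sp.symbols('x1 x2 x3 t', positive=True)
n = 3
r2 = x1**2 + x2**2 + x3**2
c = (n * (n - 2))**sp.Rational(n - 2, 4)
u = c * (t / (t**2 + r2))**sp.Rational(n - 2, 2)

lap = sum(sp.diff(u, s, 2) for s in (x1, x2, x3))
residual = -lap - u**sp.Rational(n + 2, n - 2)

pt = {x1: 0.3, x2: 0.7, x3: 1.1, t: 2.0}
assert abs(float(residual.subs(pt))) < 1e-10
```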
To better illustrate the idea, we will first prove the theorem under the
stronger assumption that u ∈ L^{2n/(n−α)}(R^n). Then we will show how the
idea of this proof can be extended under the weaker condition that u is only
locally in L^{2n/(n−α)}.
For a given real number λ, define

Σ_λ = { x = (x_1, ..., x_n) | x_1 ≤ λ },

and let x^λ = (2λ − x_1, x_2, ..., x_n) and u_λ(x) = u(x^λ). (Again, see the
previous Figure 1.)
Lemma 8.5.1 For any solution u(x) of (8.69), we have

u(x) − u_λ(x) = ∫_{Σ_λ} ( 1/|x−y|^{n−α} − 1/|x^λ−y|^{n−α} ) ( u(y)^{(n+α)/(n−α)} − u_λ(y)^{(n+α)/(n−α)} ) dy.    (8.73)

It is also true for v(x) = (1/|x|^{n−α}) u(x/|x|^2), the Kelvin type
transform of u(x), for any x ≠ 0.
Proof. Since |x − y^λ| = |x^λ − y|, we have

u(x) = ∫_{Σ_λ} (1/|x−y|^{n−α}) u(y)^{(n+α)/(n−α)} dy + ∫_{Σ_λ} (1/|x^λ−y|^{n−α}) u_λ(y)^{(n+α)/(n−α)} dy,  and

u(x^λ) = ∫_{Σ_λ} (1/|x^λ−y|^{n−α}) u(y)^{(n+α)/(n−α)} dy + ∫_{Σ_λ} (1/|x−y|^{n−α}) u_λ(y)^{(n+α)/(n−α)} dy.

This implies (8.73). The conclusion for v(x) follows similarly, since it
satisfies the same integral equation (8.69) for x ≠ 0.
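The only geometric fact used in this proof is the kernel symmetry
|x − y^λ| = |x^λ − y| under reflection about the plane x_1 = λ. A quick
numerical sanity check (an added illustration):

```python
import numpy as np

# Numerical check of the identity |x - y^lambda| = |x^lambda - y|, where
# z^lambda = (2*lambda - z_1, z_2, ..., z_n) reflects z about x_1 = lambda.
rng = np.random.default_rng(0)
lam = 0.7
x, y = rng.standard_normal(5), rng.standard_normal(5)

def reflect(z, lam):
    z = z.copy()
    z[0] = 2 * lam - z[0]
    return z

assert np.isclose(np.linalg.norm(x - reflect(y, lam)),
                  np.linalg.norm(reflect(x, lam) - y))
```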
Proof of Theorem 8.5.1 (under the stronger assumption u ∈ L^{2n/(n−α)}(R^n)).

Let w_λ(x) = u_λ(x) − u(x). As in Section 6, we will first show that, for λ
sufficiently negative,

w_λ(x) ≥ 0,  ∀ x ∈ Σ_λ.    (8.74)

Then we can start moving the plane T_λ = ∂Σ_λ from near −∞ to the right, as
long as inequality (8.74) holds. Let

λ_o = sup{ λ ≤ 0 | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ }.

We will show that

λ_o < ∞, and w_{λ_o}(x) ≡ 0.

Unlike for partial differential equations, here we do not have any
differential equation for w_λ. Instead, we will introduce a new idea, the
method of moving planes in integral forms: we will exploit some global
properties of the integral equations.
Step 1. Start moving the plane from near −∞.
Define

Σ_λ^− = { x ∈ Σ_λ | w_λ(x) < 0 }.    (8.75)

We show that for sufficiently negative values of λ, Σ_λ^− must be empty. By
Lemma 8.5.1, it is easy to verify that

u(x) − u_λ(x) ≤ ∫_{Σ_λ^−} ( 1/|x−y|^{n−α} − 1/|x^λ−y|^{n−α} ) [u^τ(y) − u_λ^τ(y)] dy
             ≤ ∫_{Σ_λ^−} (1/|x−y|^{n−α}) [u^τ(y) − u_λ^τ(y)] dy
             = τ ∫_{Σ_λ^−} (1/|x−y|^{n−α}) ψ_λ^{τ−1}(y) [u(y) − u_λ(y)] dy
             ≤ τ ∫_{Σ_λ^−} (1/|x−y|^{n−α}) u^{τ−1}(y) [u(y) − u_λ(y)] dy,

where ψ_λ(x) is valued between u_λ(x) and u(x) by the Mean Value Theorem;
since u_λ(x) < u(x) on Σ_λ^−, we have ψ_λ(x) ≤ u(x).
We first apply the classical Hardy-Littlewood-Sobolev inequality (in its
equivalent form) to the above to obtain, for any q > n/(n−α):

‖w_λ‖_{L^q(Σ_λ^−)} ≤ C ‖u^{τ−1} w_λ‖_{L^{nq/(n+αq)}(Σ_λ^−)}
                   = C ( ∫_{Σ_λ^−} |u^{τ−1}(x)|^{nq/(n+αq)} |w_λ(x)|^{nq/(n+αq)} dx )^{(n+αq)/(nq)}.
Then we apply the Hölder inequality to the last integral above. First choose

s = (n + αq)/n,  so that  (nq/(n + αq)) s = q;

then choose

r = (n + αq)/(αq)

to ensure 1/r + 1/s = 1.
We thus arrive at

‖w_λ‖_{L^q(Σ_λ^−)} ≤ C { ∫_{Σ_λ^−} u^{τ+1}(y) dy }^{α/n} ‖w_λ‖_{L^q(Σ_λ^−)}
                   ≤ C { ∫_{Σ_λ} u^{τ+1}(y) dy }^{α/n} ‖w_λ‖_{L^q(Σ_λ^−)}.    (8.76)

By the condition that u ∈ L^{τ+1}(R^n), we can choose N sufficiently large,
such that for λ ≤ −N we have

C { ∫_{Σ_λ} u^{τ+1}(y) dy }^{α/n} ≤ 1/2.
Now (8.76) implies that ‖w_λ‖_{L^q(Σ_λ^−)} = 0, and therefore Σ_λ^− must
have measure zero, and hence be empty by the continuity of w_λ(x). This
verifies (8.74) and hence completes Step 1. The reader can see that here we
have actually used Corollary 7.5.1, with Ω = Σ_λ, a region near infinity.
Step 2. We now move the plane T_λ = { x | x_1 = λ } to the right as long as
(8.74) holds. Let λ_o be as defined before; then we must have λ_o < ∞. This
can be seen by applying a similar argument as in Step 1 from λ near +∞.
Now we show that

w_{λ_o}(x) ≡ 0,  ∀ x ∈ Σ_{λ_o}.

Otherwise, we have w_{λ_o}(x) ≥ 0 but w_{λ_o}(x) ≢ 0 on Σ_{λ_o}; we show
that the plane can then be moved further to the right. More precisely, there
exists an ε, depending on n, α, and the solution u(x) itself, such that
w_λ(x) ≥ 0 on Σ_λ for all λ in [λ_o, λ_o + ε).
By Lemma 8.5.1, we have in fact w_{λ_o}(x) > 0 in the interior of Σ_{λ_o}.
Let

Σ_{λ_o}^− = { x ∈ Σ_{λ_o} | w_{λ_o}(x) ≤ 0 }.

Then obviously, Σ_{λ_o}^− has measure zero, and
lim_{λ→λ_o} Σ_λ^− ⊂ Σ_{λ_o}^−. From the first inequality of (8.76), we
deduce

‖w_λ‖_{L^q(Σ_λ^−)} ≤ C { ∫_{Σ_λ^−} u^{τ+1}(y) dy }^{α/n} ‖w_λ‖_{L^q(Σ_λ^−)}.    (8.77)

The condition u ∈ L^{τ+1}(R^n) ensures that one can choose ε sufficiently
small, so that for all λ in [λ_o, λ_o + ε),

C { ∫_{Σ_λ^−} u^{τ+1}(y) dy }^{α/n} ≤ 1/2.

Now by (8.77), we have ‖w_λ‖_{L^q(Σ_λ^−)} = 0, and therefore Σ_λ^− must be
empty. Here we have actually applied Corollary 7.5.1 with Ω = Σ_λ^−. For λ
sufficiently close to λ_o, Ω is contained in a very narrow region.
Proof of Theorem 8.5.1 (under the weaker assumption that u is only locally
in L^{τ+1}).

Outline. Since we do not assume any integrability condition on u(x) near
infinity, we are not able to carry out the method of moving planes directly
on u(x). To overcome this difficulty, we consider v(x), the Kelvin type
transform of u(x). It is easy to verify that v(x) satisfies the same equation
(8.69), but has a possible singularity at the origin, where we need to pay
particular attention. Since u is locally L^{τ+1}, it is easy to see that v(x)
has no singularity at infinity; that is, for any domain Ω that is a positive
distance away from the origin,

∫_Ω v^{τ+1}(y) dy < ∞.    (8.78)
Let λ be a real number and let the moving plane be x_1 = λ. We compare v(x)
and v_λ(x) on Σ_λ \ {0}. The proof consists of three steps. In step 1, we
show that there exists an N > 0 such that for λ ≤ −N, we have

v(x) ≤ v_λ(x),  ∀ x ∈ Σ_λ \ {0}.    (8.79)

Thus we can start moving the plane continuously from λ ≤ −N to the right as
long as (8.79) holds. If the plane stops at x_1 = λ_o for some λ_o < 0, then
v(x) must be symmetric and monotone about the plane x_1 = λ_o. This implies
that v(x) has no singularity at the origin and u(x) has no singularity at
infinity. In this case, we can carry out the moving of planes on u(x)
directly to obtain the radial symmetry and monotonicity. Otherwise, we can
move the plane all the way to x_1 = 0, which is shown in step 2. Since the
direction of x_1 can be chosen arbitrarily, we deduce that v(x) must be
radially symmetric and decreasing about the origin. We will show in step 3
that, in any case, u(x) cannot have a singularity at infinity, and hence both
u and v are in L^{τ+1}(R^n).
Steps 1 and 2 are entirely similar to those in the proof under the stronger
assumption; what we need to do is to replace u there by v, and Σ_λ there by
Σ_λ \ {0}. Hence we only show Step 3 here.

Step 3. Finally, we show that u has the desired asymptotic behavior at
infinity, i.e., it satisfies

u(x) = O(1/|x|^{n−α}).
Suppose, on the contrary, that this is false. Let x^1 and x^2 be any two
points in R^n, and let x_o be the midpoint of the line segment x^1 x^2.
Consider the Kelvin type transform centered at x_o:

v(x) = (1/|x − x_o|^{n−α}) u( (x − x_o)/|x − x_o|^2 ).

Then v(x) has a singularity at x_o. Carrying out the arguments as in Steps 1
and 2, we conclude that v(x) must be radially symmetric about x_o, and in
particular, u(x^1) = u(x^2). Since x^1 and x^2 are any two points in R^n, u
must be constant. This is impossible. Similarly, the continuity and higher
regularity of u follow from the standard theory on singular integral
operators. This completes the proof of the Theorem.
A
Appendices
A.1 Notations
A.1.1 Algebraic and Geometric Notations
1. A = (a_{ij}) is an m × n matrix with ij-th entry a_{ij}.
2. det(a_{ij}) is the determinant of the matrix (a_{ij}).
3. A^T is the transpose of the matrix A.
4. R^n is the n-dimensional real Euclidean space with a typical point
x = (x_1, ..., x_n).
5. Ω usually denotes an open set in R^n.
6. Ω̄ is the closure of Ω.
7. ∂Ω is the boundary of Ω.
8. For x = (x_1, ..., x_n) and y = (y_1, ..., y_n) in R^n,

x · y = Σ_{i=1}^n x_i y_i,  |x| = ( Σ_{i=1}^n x_i^2 )^{1/2}.

9. |x − y| is the distance between the two points x and y in R^n.
10. B_r(x_o) = { x ∈ R^n | |x − x_o| < r } is the ball of radius r centered
at the point x_o in R^n.
11. S^n is the sphere of radius one centered at the origin in R^{n+1}, i.e.,
the boundary of the ball of radius one centered at the origin:

B_1(0) = { x ∈ R^{n+1} | |x| < 1 }.
A.1.2 Notations for Functions and Derivatives
1. For a function u : Ω ⊂ R^n → R^1, we write

u(x) = u(x_1, ..., x_n),  x ∈ Ω.

We say u is smooth if it is infinitely differentiable.
2. For two functions u and v, u ≡ v means u is identically equal to v.
3. We set u := v to define u as equaling v.
4. We usually write u_{x_i} for ∂u/∂x_i, the first partial derivative of u
with respect to x_i. Similarly,

u_{x_i x_j} = ∂^2 u/∂x_i ∂x_j,  u_{x_i x_j x_k} = ∂^3 u/∂x_i ∂x_j ∂x_k.
5. D^α u(x) := ∂^{|α|} u(x) / ∂x_1^{α_1} ··· ∂x_n^{α_n}, where
α = (α_1, ..., α_n) is a multi-index of order |α| = α_1 + α_2 + ··· + α_n.
6. For a nonnegative integer k,

D^k u(x) := { D^α u(x) | |α| = k },

the set of all partial derivatives of order k.
If k = 1, we regard the elements of Du as being arranged in a vector

Du := (u_{x_1}, ..., u_{x_n}) = ∇u,

the gradient vector.
If k = 2, we regard the elements of D^2 u as being arranged in a matrix

D^2 u := ( ∂^2 u/∂x_i ∂x_j )_{i,j=1}^{n},

the Hessian matrix.
7. The Laplacian

Δu := Σ_{i=1}^n u_{x_i x_i}

is the trace of D^2 u.
8. |D^k u| := ( Σ_{|α|=k} |D^α u|^2 )^{1/2}.
A.1.3 Function Spaces

1. C(Ω) := { u : Ω → R^1 | u is continuous }.
2. C(Ω̄) := { u ∈ C(Ω) | u is uniformly continuous }.
3. ‖u‖_{C(Ω̄)} := sup_{x∈Ω} |u(x)|.
4. C^k(Ω) := { u : Ω → R^1 | u is k-times continuously differentiable }.
5. C^k(Ω̄) := { u ∈ C^k(Ω) | D^α u is uniformly continuous for all |α| ≤ k }.
6. ‖u‖_{C^k(Ω̄)} := Σ_{|α|≤k} ‖D^α u‖_{C(Ω̄)}.
7. Assume Ω is an open subset of R^n and 0 < α ≤ 1. If there exists a
constant C such that

|u(x) − u(y)| ≤ C|x − y|^α,  ∀ x, y ∈ Ω,

then we say that u is Hölder continuous in Ω.
8. The α-th Hölder seminorm of u is

[u]_{C^{0,α}(Ω̄)} := sup_{x≠y} |u(x) − u(y)| / |x − y|^α.

9. The α-th Hölder norm is

‖u‖_{C^{0,α}(Ω̄)} := ‖u‖_{C(Ω̄)} + [u]_{C^{0,α}(Ω̄)}.

10. The Hölder space C^{k,α}(Ω̄) consists of all functions u ∈ C^k(Ω̄) for
which the norm

‖u‖_{C^{k,α}(Ω̄)} := Σ_{|β|≤k} ‖D^β u‖_{C(Ω̄)} + Σ_{|β|=k} [D^β u]_{C^{0,α}(Ω̄)}

is finite.
11. C^∞(Ω) is the set of all infinitely differentiable functions on Ω.
12. C_0^k(Ω) denotes the subset of functions in C^k(Ω) with compact support
in Ω.
13. L^p(Ω) is the set of Lebesgue measurable functions u on Ω with

‖u‖_{L^p(Ω)} := ( ∫_Ω |u(x)|^p dx )^{1/p} < ∞.

14. L^∞(Ω) is the set of Lebesgue measurable functions u with

‖u‖_{L^∞(Ω)} := ess sup_Ω |u| < ∞.

15. L^p_{loc}(Ω) denotes the set of Lebesgue measurable functions u on Ω
whose p-th power is locally integrable, i.e., for any compact subset K ⊂ Ω,

∫_K |u(x)|^p dx < ∞.

16. W^{k,p}(Ω) denotes the Sobolev space consisting of functions u with

‖u‖_{W^{k,p}(Ω)} := ( Σ_{0≤|α|≤k} ∫_Ω |D^α u|^p dx )^{1/p} < ∞.

17. H^{k,p}(Ω) is the completion of C^∞(Ω) under the norm ‖·‖_{W^{k,p}(Ω)}.
18. For any open subset Ω ⊂ R^n, any nonnegative integer k, and any p ≥ 1,
it is shown (see Adams [Ad]) that

H^{k,p}(Ω) = W^{k,p}(Ω).

19. For p = 2, we usually write H^k(Ω) = H^{k,2}(Ω).
A.1.4 Notations for Estimates
1. In the process of estimates, we use C to denote various constants that can
be explicitly computed in terms of known quantities. The value of C may
change from line to line in a given computation.
2. We write

u = O(v) as x → x_o,

provided there exists a constant C such that

|u(x)| ≤ C|v(x)|

for x sufficiently close to x_o.
3. We write

u = o(v) as x → x_o,

provided

|u(x)|/|v(x)| → 0 as x → x_o.
A.1.5 Notations from Riemannian Geometry
1. M denotes a differentiable manifold.
2. T_p M is the tangent space of M at the point p.
3. T_p^* M is the cotangent space of M at the point p.
4. A Riemannian metric g can be expressed as a matrix

g = ( g_{ij} )_{i,j=1}^{n},

where in local coordinates

g_{ij}(p) = ⟨ (∂/∂x_i)|_p , (∂/∂x_j)|_p ⟩_p.

5. (M, g) denotes an n-dimensional manifold M with Riemannian metric g. In
local coordinates, the length element is

ds := ( Σ_{i,j=1}^n g_{ij} dx_i dx_j )^{1/2},

and the volume element is

dV := √(det(g_{ij})) dx_1 ··· dx_n.

6. K(x) denotes the Gaussian curvature, and R(x) the scalar curvature, of a
manifold M at the point x.
7. For two differentiable vector fields X and Y on M, the Lie bracket is

[X, Y] := XY − YX.

8. D denotes an affine connection on a differentiable manifold M.
9. A vector field V(t) along a curve c(t) ∈ M is parallel if

DV/dt = 0.

10. Given a Riemannian manifold M with metric ⟨·,·⟩, there exists a unique
connection D on M that is compatible with the metric ⟨·,·⟩, i.e.,

⟨X, Y⟩_{c(t)} = constant,

for any pair of parallel vector fields X and Y along any smooth curve c(t).
11. In local coordinates (x_1, ..., x_n), this unique Riemannian connection
can be expressed as

D_{∂/∂x_i} (∂/∂x_j) = Σ_k Γ_{ij}^k ∂/∂x_k,

where the Γ_{ij}^k are the Christoffel symbols.
12. We denote ∇_i = D_{∂/∂x_i}.
13. The summation convention. We write, for instance,

X^i ∂/∂x_i := Σ_i X^i ∂/∂x_i,  Γ_{ik}^j dx^k := Σ_k Γ_{ik}^j dx^k,

and so on.
14. A smooth function f(x) on M is a (0,0)-tensor. ∇f is a (1,0)-tensor,
defined by

∇f = ∇_i f dx^i = (∂f/∂x_i) dx^i.

∇^2 f is a (2,0)-tensor:

∇^2 f = ∇_j(∇_i f) dx^i ⊗ dx^j = ( ∂^2 f/∂x_i ∂x_j − Γ_{ij}^k ∂f/∂x_k ) dx^i ⊗ dx^j.

In the Riemannian context, ∇^2 f is called the Hessian of f and is denoted by
Hess(f). Its ij-th component is

(∇^2 f)_{ij} = ∂^2 f/∂x_i ∂x_j − Γ_{ij}^k ∂f/∂x_k.

15. The trace of the Hessian ((∇^2 f)_{ij}) with respect to g defines the
Laplace-Beltrami operator Δ:

Δ = (1/√|g|) Σ_{i,j=1}^n ∂/∂x_i ( √|g| g^{ij} ∂/∂x_j ),

where |g| is the determinant of (g_{ij}).
16. We abbreviate

∫_M ∇u∇v dV_g := ∫_M ⟨∇u, ∇v⟩ dV_g,

where

⟨∇u, ∇v⟩ = g^{ij} ∇_i u ∇_j v = g^{ij} (∂u/∂x_i)(∂v/∂x_j)

is the scalar product associated with g for 1-forms.
17. The norm of the k-th covariant derivative of u, |∇^k u|, is defined in a
local coordinate chart by

|∇^k u|^2 = g^{i_1 j_1} ··· g^{i_k j_k} (∇^k u)_{i_1···i_k} (∇^k u)_{j_1···j_k}.

In particular, we have

|∇^1 u|^2 = |∇u|^2 = g^{ij} (∇u)_i (∇u)_j = g^{ij} ∇_i u ∇_j u = g^{ij} (∂u/∂x_i)(∂u/∂x_j).

18. C_k^p(M) is the space of C^∞ functions on M such that

∫_M |∇^j u|^p dV_g < ∞,  ∀ j = 0, 1, ..., k.

19. The Sobolev space H^{k,p}(M) is the completion of C_k^p(M) with respect
to the norm

‖u‖_{H^{k,p}(M)} = Σ_{j=0}^k ( ∫_M |∇^j u|^p dV_g )^{1/p}.
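The coordinate formula for the Laplace-Beltrami operator in item 15 can be
spot-checked on a concrete manifold. The following sympy sketch (an added
illustration, assuming the round metric g = diag(1, sin^2 θ) on S^2 in
coordinates (θ, φ)) verifies that Δ applied to the degree-one spherical
harmonic cos θ returns −2 cos θ:

```python
import sympy as sp

# Laplace-Beltrami operator via the coordinate formula of item 15, on the
# round 2-sphere: g = diag(1, sin^2(theta)); sqrt|g| = sin(theta) on
# 0 < theta < pi.
th, ph = sp.symbols('theta phi')
coords = (th, ph)
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
sqrtg = sp.sin(th)

def laplace_beltrami(f):
    return sp.simplify(
        sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
            for i in range(2) for j in range(2)) / sqrtg)

# cos(theta) is a degree-1 spherical harmonic: Delta f = -l(l+1) f = -2 f.
f = sp.cos(th)
assert sp.simplify(laplace_beltrami(f) + 2 * f) == 0
```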
A.2 Inequalities
Young's Inequality. Assume 1 < p, q < ∞ and 1/p + 1/q = 1. Then for any
a, b > 0, it holds that

ab ≤ a^p/p + b^q/q.    (A.1)

Proof. We will use the convexity of the exponential function e^x. Let
x_1 < x_2 be any two points in R^1. Since 1/p + 1/q = 1, the point
(1/p)x_1 + (1/q)x_2 lies between x_1 and x_2. By the convexity of e^x, we
have

e^{(1/p)x_1 + (1/q)x_2} ≤ (1/p)e^{x_1} + (1/q)e^{x_2}.    (A.2)

Taking x_1 = ln a^p and x_2 = ln b^q in (A.2), we arrive at inequality (A.1).
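A quick numerical check of (A.1) (an added illustration; p = 1.7 is an
arbitrary choice, with q = p/(p−1) its conjugate exponent):

```python
import random

# Numerical check of Young's inequality (A.1) for random positive a, b.
random.seed(0)
p = 1.7
q = p / (p - 1)                 # conjugate exponent: 1/p + 1/q = 1
assert abs(1/p + 1/q - 1) < 1e-12

for _ in range(1000):
    a = random.uniform(0.01, 10.0)
    b = random.uniform(0.01, 10.0)
    assert a * b <= a**p / p + b**q / q + 1e-12
```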
Cauchy-Schwarz Inequality.

|x · y| ≤ |x||y|,  ∀ x, y ∈ R^n.

Proof. For any real number λ > 0, we have

0 ≤ |x ± λy|^2 = |x|^2 ± 2λ x·y + λ^2 |y|^2.

It follows that

±x·y ≤ (1/2λ)|x|^2 + (λ/2)|y|^2.

The minimum value of the right hand side is attained at λ = |x|/|y|. At this
value of λ, we arrive at the desired inequality.
Hölder's Inequality. Let Ω be a domain in R^n. Assume that u ∈ L^p(Ω) and
v ∈ L^q(Ω), with 1 ≤ p, q ≤ ∞ and 1/p + 1/q = 1. Then

∫_Ω |uv| dx ≤ ‖u‖_{L^p(Ω)} ‖v‖_{L^q(Ω)}.    (A.3)

Proof. Let f = u/‖u‖_{L^p(Ω)} and g = v/‖v‖_{L^q(Ω)}. Then obviously
‖f‖_{L^p(Ω)} = 1 = ‖g‖_{L^q(Ω)}. It follows from Young's inequality (A.1)
that

∫_Ω |fg| dx ≤ (1/p) ∫_Ω |f|^p dx + (1/q) ∫_Ω |g|^q dx = 1/p + 1/q = 1.

This implies (A.3).
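Inequality (A.3) also holds for the counting measure, with sums in place of
integrals; a quick numerical check (an added illustration):

```python
import numpy as np

# Numerical check of Holder's inequality (A.3) for the counting measure on
# {1, ..., N} (sums in place of integrals), with p = 3 and q = 3/2.
rng = np.random.default_rng(1)
u = rng.standard_normal(1000)
v = rng.standard_normal(1000)
p, q = 3.0, 1.5                 # 1/3 + 2/3 = 1

lhs = np.abs(u * v).sum()
rhs = (np.abs(u)**p).sum()**(1/p) * (np.abs(v)**q).sum()**(1/q)
assert lhs <= rhs + 1e-9
```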
Minkowski's Inequality. Assume 1 ≤ p ≤ ∞. Then for any u, v ∈ L^p(Ω),

‖u + v‖_{L^p(Ω)} ≤ ‖u‖_{L^p(Ω)} + ‖v‖_{L^p(Ω)}.

Proof. By the Hölder inequality above, we have

‖u + v‖_{L^p(Ω)}^p = ∫_Ω |u + v|^p dx ≤ ∫_Ω |u + v|^{p−1} (|u| + |v|) dx
≤ ( ∫_Ω |u + v|^p dx )^{(p−1)/p} [ ( ∫_Ω |u|^p dx )^{1/p} + ( ∫_Ω |v|^p dx )^{1/p} ]
= ‖u + v‖_{L^p(Ω)}^{p−1} ( ‖u‖_{L^p(Ω)} + ‖v‖_{L^p(Ω)} ).

Then divide both sides by ‖u + v‖_{L^p(Ω)}^{p−1}.
Interpolation Inequality for L^p Norms. Assume that u ∈ L^r(Ω) ∩ L^s(Ω) with
1 ≤ r ≤ s ≤ ∞. Then for any r ≤ t ≤ s and 0 < θ < 1 such that

1/t = θ/r + (1 − θ)/s,    (A.4)

we have u ∈ L^t(Ω), and

‖u‖_{L^t(Ω)} ≤ ‖u‖_{L^r(Ω)}^θ ‖u‖_{L^s(Ω)}^{1−θ}.    (A.5)

Proof. Write

∫_Ω |u|^t dx = ∫_Ω |u|^a |u|^{t−a} dx.

Choose p and q so that

ap = r,  (t − a)q = s,  and  1/p + 1/q = 1.    (A.6)

Then by the Hölder inequality, we have

∫_Ω |u|^t dx ≤ ( ∫_Ω |u|^r dx )^{a/r} ( ∫_Ω |u|^s dx )^{(t−a)/s}.

Dividing the powers through by t, we obtain

‖u‖_t ≤ ‖u‖_r^{a/t} ‖u‖_s^{1−a/t}.

Now letting θ = a/t, we arrive at the desired inequality (A.5). Here the
restriction (A.4) comes from (A.6).
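A quick numerical check of (A.5) for the counting measure (an added
illustration; r = 2, s = 6, t = 3 gives θ = 1/2 in (A.4)):

```python
import numpy as np

# Numerical check of the interpolation inequality (A.5) for the counting
# measure: 1/t = theta/r + (1 - theta)/s with r = 2, s = 6, t = 3.
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
r, s, t, theta = 2.0, 6.0, 3.0, 0.5
assert abs(1/t - (theta/r + (1 - theta)/s)) < 1e-12

def norm(w, p):
    return (np.abs(w)**p).sum()**(1/p)

assert norm(u, t) <= norm(u, r)**theta * norm(u, s)**(1 - theta) + 1e-9
```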
A.3 CalderonZygmund Decomposition
Lemma A.3.1 For f ∈ L^1(R^n) with f ≥ 0 and fixed α > 0, there exist sets E
and G such that
(i) R^n = E ∪ G, E ∩ G = ∅;
(ii) f(x) ≤ α for a.e. x ∈ E;
(iii) G = ∪_{k=1}^∞ Q_k, where {Q_k} are disjoint cubes such that

α < (1/|Q_k|) ∫_{Q_k} f(x) dx ≤ 2^n α.

Proof. Since ∫_{R^n} f(x) dx is finite, for a given α > 0 one can pick a cube
Q_o sufficiently large such that

∫_{Q_o} f(x) dx ≤ α|Q_o|.    (A.7)

Divide Q_o into 2^n equal subcubes with disjoint interiors. Those subcubes Q
satisfying

∫_Q f(x) dx ≤ α|Q|

are similarly subdivided, and this process is repeated indefinitely. Let O
denote the set of subcubes Q thus obtained that satisfy

∫_Q f(x) dx > α|Q|.

For each Q ∈ O, let Q̃ be its predecessor, i.e., Q is one of the 2^n subcubes
of Q̃. Then obviously we have |Q̃|/|Q| = 2^n, and consequently,

α < (1/|Q|) ∫_Q f(x) dx ≤ (1/|Q|) ∫_{Q̃} f(x) dx ≤ (1/|Q|) α|Q̃| = 2^n α.    (A.8)

Let

G = ∪_{Q∈O} Q, and E = Q_o \ G.

Then (iii) follows immediately. To see (ii), notice that each point of E lies
in a nested sequence of cubes Q with diameters tending to zero and satisfying

∫_Q f(x) dx ≤ α|Q|;

now by Lebesgue's Differentiation Theorem, we have

f(x) ≤ α, a.e. in E.    (A.9)

This completes the proof of the Lemma.
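The stopping-time construction in the proof can be sketched in one dimension,
with dyadic subintervals of [0, 1) in the role of the cubes; there the
predecessor has twice the length, so the selected averages lie in (α, 2α]. A
minimal discrete sketch (an added illustration, not the full decomposition of
R^n):

```python
import numpy as np

def cz_decompose(f, alpha):
    """1-D dyadic Calderon-Zygmund stopping-time decomposition.
    f: nonnegative samples on a dyadic grid of [0, 1) (length a power of 2),
    assumed to satisfy mean(f) <= alpha on the whole (root) interval.
    Returns the 'bad' index ranges (lo, hi) with mean(f[lo:hi]) > alpha."""
    bad = []

    def split(lo, hi):
        if hi - lo == 1:
            return
        mid = (lo + hi) // 2
        for a, b in ((lo, mid), (mid, hi)):
            if f[a:b].mean() > alpha:
                bad.append((a, b))      # stop: this plays the role of a Q_k
            else:
                split(a, b)             # small average: subdivide further

    split(0, len(f))
    return bad

f = np.ones(256)
f[10] = 50.0                            # a spike; overall mean is ~1.19
cubes = cz_decompose(f, 2.0)
assert cubes == [(0, 32)]               # first dyadic range with mean > 2
assert 2.0 < f[0:32].mean() <= 4.0      # alpha < average <= 2 * alpha
```

Off the selected ranges the recursion descends to single samples with value
at most α, mirroring conclusion (ii) of the lemma.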
Lemma A.3.2 Let T be a linear operator from L^p(Ω) ∩ L^q(Ω) into itself,
with 1 ≤ p < q < ∞. If T is of weak type (p, p) and of weak type (q, q),
then for any p < r < q, T is of strong type (r, r). More precisely, if there
exist constants B_p and B_q such that, for any t > 0,

μ_{Tf}(t) ≤ ( B_p ‖f‖_p / t )^p  and  μ_{Tf}(t) ≤ ( B_q ‖f‖_q / t )^q,  ∀ f ∈ L^p(Ω) ∩ L^q(Ω),

then

‖Tf‖_r ≤ C B_p^θ B_q^{1−θ} ‖f‖_r,  ∀ f ∈ L^p(Ω) ∩ L^q(Ω),

where

1/r = θ/p + (1 − θ)/q

and C depends only on p, q, and r.
Proof. For any number s > 0, let

g(x) = f(x) if |f(x)| ≤ s,  and  g(x) = 0 if |f(x)| > s.

We split f into the good part g and the bad part b: f(x) = g(x) + b(x). Then

|Tf(x)| ≤ |Tg(x)| + |Tb(x)|,

and hence

μ(t) ≡ μ_{Tf}(t) ≤ μ_{Tg}(t/2) + μ_{Tb}(t/2)
≤ (2B_q/t)^q ∫_Ω |g(x)|^q dx + (2B_p/t)^p ∫_Ω |b(x)|^p dx.
It follows that

∫_Ω |Tf|^r dx = ∫_0^∞ μ(t) d(t^r) = r ∫_0^∞ t^{r−1} μ(t) dt
≤ r(2B_q)^q ∫_0^∞ t^{r−1−q} ( ∫_{|f|≤s} |f(x)|^q dx ) dt
+ r(2B_p)^p ∫_0^∞ t^{r−1−p} ( ∫_{|f|>s} |f(x)|^p dx ) dt
≡ r(2B_q)^q I_q + r(2B_p)^p I_p.    (A.10)
Let s = t/A for some positive number A to be fixed later. Then

    I_q = A^{r-q} \int_0^{\infty} s^{r-1-q} \left( \int_{|f| \le s} |f(x)|^q dx \right) ds = A^{r-q} \int_{\Omega} |f(x)|^q \left( \int_{|f(x)|}^{\infty} s^{r-1-q} ds \right) dx = \frac{A^{r-q}}{q-r} \int_{\Omega} |f(x)|^r dx.    (A.11)
Similarly,

    I_p = A^{r-p} \int_0^{\infty} s^{r-1-p} \left( \int_{|f| > s} |f(x)|^p dx \right) ds = A^{r-p} \int_{\Omega} |f(x)|^p \left( \int_0^{|f(x)|} s^{r-1-p} ds \right) dx = \frac{A^{r-p}}{r-p} \int_{\Omega} |f(x)|^r dx.    (A.12)
Combining (A.10), (A.11), and (A.12), we derive

    \int_{\Omega} |Tf(x)|^r dx \le r F(A) \int_{\Omega} |f(x)|^r dx,    (A.13)

where

    F(A) = \frac{(2B_q)^q A^{r-q}}{q-r} + \frac{(2B_p)^p A^{r-p}}{r-p}.

By elementary calculus, one can easily verify that the minimum of F(A) is attained at

    A = 2 B_q^{q/(q-p)} B_p^{p/(p-q)}.
For this value of A, (A.13) becomes

    \int_{\Omega} |Tf(x)|^r dx \le r\, 2^r \left( \frac{1}{q-r} + \frac{1}{r-p} \right) B_q^{q(r-p)/(q-p)} B_p^{p(q-r)/(q-p)} \int_{\Omega} |f(x)|^r dx.

Letting

    C = 2 \left( \frac{r}{q-r} + \frac{r}{r-p} \right)^{1/r} and \frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q},

we arrive immediately at

    \|Tf\|_{L^r(\Omega)} \le C B_p^{\theta} B_q^{1-\theta} \|f\|_{L^r(\Omega)}.

This completes the proof of the Lemma. \square
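The first identity in (A.10), \int_\Omega |g|^r dx = r \int_0^\infty t^{r-1} \mu_g(t)\,dt, is the "layer cake" representation on which the whole interpolation argument rests. The following is a small numerical sanity check of that identity (our own sketch; the grid sizes and the sample function are arbitrary choices, not from the text).

```python
def distribution_function(g, xs, dx, t):
    """mu_g(t): the measure of the set where |g| > t, computed on a grid."""
    return sum(dx for x in xs if abs(g(x)) > t)

def layer_cake(g, r, a=0.0, b=1.0, n=800, m=800):
    """Return (integral of |g|^r over [a,b],
               r * integral of t^(r-1) * mu_g(t) dt over t > 0),
    both approximated by midpoint rules."""
    dx = (b - a) / n
    xs = [a + (i + 0.5) * dx for i in range(n)]
    lhs = sum(abs(g(x)) ** r for x in xs) * dx
    tmax = max(abs(g(x)) for x in xs)      # mu_g(t) = 0 beyond sup|g|
    dt = tmax / m
    rhs = r * sum(((j + 0.5) * dt) ** (r - 1) *
                  distribution_function(g, xs, dx, (j + 0.5) * dt)
                  for j in range(m)) * dt
    return lhs, rhs

# g(x) = x on (0, 1) with r = 2: both sides should be close to 1/3,
# since mu_g(t) = 1 - t and 2 * int_0^1 t (1 - t) dt = 1/3
lhs, rhs = layer_cake(lambda x: x, 2.0)
```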
A.4 The Contraction Mapping Principle

Let X be a linear space with norm \|\cdot\|, and let T be a mapping from X into itself. If there exists a number \theta < 1, such that

    \|Tx - Ty\| \le \theta \|x - y\| for all x, y \in X,

then T is called a contraction mapping.

Theorem A.4.1 Let T be a contraction mapping in a Banach space B. Then it has a unique fixed point in B; that is, there is a unique x \in B, such that Tx = x.
Proof. Let x_o be any point in the Banach space B. Define

    x_1 = T x_o, and x_{k+1} = T x_k, k = 1, 2, \dots

We show that \{x_k\} is a Cauchy sequence in B. In fact, for any integers n > m,

    \|x_n - x_m\| \le \sum_{k=m}^{n-1} \|x_{k+1} - x_k\| = \sum_{k=m}^{n-1} \|T^k x_1 - T^k x_o\| \le \sum_{k=m}^{n-1} \theta^k \|x_1 - x_o\| \le \frac{\theta^m}{1-\theta} \|x_1 - x_o\| \to 0 as m \to \infty.

Since a Banach space is complete, the sequence \{x_k\} converges to an element x in B. Taking the limit on both sides of

    x_{k+1} = T x_k,

we arrive at

    Tx = x.    (A.14)

To see the uniqueness, assume that y is also a solution of (A.14). Then

    \|x - y\| = \|Tx - Ty\| \le \theta \|x - y\|.

Therefore, we must have

    \|x - y\| = 0,

that is, x = y. \square
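The proof is constructive: starting from any x_o, the Picard iterates x_{k+1} = T x_k converge geometrically, with the error controlled by \theta^m / (1 - \theta). A minimal sketch follows (our own example, not from the text; T x = \cos x is a contraction near its fixed point, since |T'(x)| = |\sin x| < 1 on the relevant interval).

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{k+1} = T(x_k); the stopping rule uses the
    Cauchy property of successive differences established in the proof."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# the unique fixed point of cos (about 0.739085), reached in a few dozen steps
x_star = fixed_point(math.cos, 0.5)
```

The same driver applies to any contraction; only the convergence rate \theta changes.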
A.5 The Arzela-Ascoli Theorem

We say that a sequence of functions \{u_k(x)\} is uniformly equicontinuous if for each \varepsilon > 0, there exists \delta > 0, such that

    |u_k(x) - u_k(y)| < \varepsilon whenever |x - y| < \delta

for all x, y \in R^n and k = 1, 2, \dots

Theorem A.5.1 (Arzela-Ascoli Compactness Criterion for Uniform Convergence).
Assume that \{u_k(x)\} is a sequence of real-valued functions defined on R^n satisfying

    |u_k(x)| \le M, k = 1, 2, \dots, x \in R^n

for some positive number M, and that \{u_k(x)\} is uniformly equicontinuous. Then there exist a subsequence \{u_{k_i}(x)\} of \{u_k(x)\} and a continuous function u(x), such that

    u_{k_i} \to u

uniformly on any compact subset of R^n.
References
[Ad] R. Adams, Sobolev Spaces, Academic Press, San Diego, 1978.
[AR] A. Ambrosetti and P. Rabinowitz, Dual variational methods in critical point theory and applications, J. Funct. Anal., 14(1973), 349-381.
[Au] T. Aubin, Nonlinear Analysis on Manifolds. Monge-Ampere Equations, Springer-Verlag, 1982.
[Ba] A. Bahri, Critical points at infinity in some variational problems, Pitman Research Notes in Mathematics Series, Vol. 182, Pitman, 1989.
[BC] A. Bahri and J. Coron, The scalar curvature problem on the three-dimensional sphere, J. Funct. Anal., 95(1991), 106-172.
[BE] J. Bourguignon and J. Ezin, Scalar curvature functions in a class of metrics and conformal transformations, Trans. AMS, 301(1987), 723-736.
[Bes] A. Besse, Einstein Manifolds, Springer-Verlag, Berlin, Heidelberg, 1987.
[Bi] G. Bianchi, Nonexistence and symmetry of solutions to the scalar curvature equation, Comm. Partial Differential Equations, 21(1996), 229-234.
[BLS] H. Brezis, Y. Li, and I. Shafrir, A sup + inf inequality for some nonlinear elliptic equations involving exponential nonlinearities, J. Funct. Anal., 115(1993), 344-358.
[BM] H. Brezis and F. Merle, Uniform estimates and blow-up behavior for solutions of -\Delta u = V(x) e^u in two dimensions, Comm. PDE, 16(1991), 1223-1253.
[C] W. Chen, Scalar curvatures on S^n, Mathematische Annalen, 283(1989), 353-365.
[C1] W. Chen, A Trudinger inequality on surfaces with conical singularities, Proc. AMS, 3(1990), 821-832.
[Ca] M. do Carmo, Riemannian Geometry, translated by F. Flaherty, Birkhauser, Boston, 1992.
[CC] K. Cheng and J. Chern, Conformal metrics with prescribed curvature on S^n, J. Diff. Equs., 106(1993), 155-185.
[CD1] W. Chen and W. Ding, A problem concerning the scalar curvature on S^2, Kexue Tongbao, 33(1988), 533-537.
[CD2] W. Chen and W. Ding, Scalar curvatures on S^2, Trans. AMS, 303(1987), 365-382.
[CGS] L. Caffarelli, B. Gidas, and J. Spruck, Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth, Comm. Pure Appl. Math., XLII(1989), 271-297.
[ChL] K. Chang and Q. Liu, On Nirenberg's problem, International J. Math., 4(1993), 59-88.
[CL1] W. Chen and C. Li, Classification of solutions of some nonlinear elliptic equations, Duke Math. J., 63(1991), 615-622.
[CL2] W. Chen and C. Li, Qualitative properties of solutions to some nonlinear elliptic equations in R^2, Duke Math. J., 71(1993), 427-439.
[CL3] W. Chen and C. Li, A note on Kazdan-Warner type conditions, J. Diff. Geom., 41(1995), 259-268.
[CL4] W. Chen and C. Li, A necessary and sufficient condition for the Nirenberg problem, Comm. Pure Appl. Math., 48(1995), 657-667.
[CL5] W. Chen and C. Li, A priori estimates for solutions to nonlinear elliptic equations, Arch. Rat. Mech. Anal., 122(1993), 145-157.
[CL6] W. Chen and C. Li, A priori estimates for prescribing scalar curvature equations, Ann. Math., 145(1997), 547-564.
[CL7] W. Chen and C. Li, Prescribing Gaussian curvature on surfaces with conical singularities, J. Geom. Anal., 1(1991), 359-372.
[CL8] W. Chen and C. Li, Gaussian curvature on singular surfaces, J. Geom. Anal., 3(1993), 315-334.
[CL9] W. Chen and C. Li, What kinds of singular surfaces can admit constant curvature, Duke Math. J., 2(1995), 437-451.
[CL10] W. Chen and C. Li, Prescribing scalar curvature on S^n, Pacific J. Math., 1(2001), 61-78.
[CL11] W. Chen and C. Li, Indefinite elliptic problems in a domain, Disc. Cont. Dyn. Sys., 3(1997), 333-340.
[CL12] W. Chen and C. Li, Gaussian curvature in the negative case, Proc. AMS, 131(2003), 741-744.
[CL13] W. Chen and C. Li, Regularity of solutions for a system of integral equations, C.P.A.A., 4(2005), 1-8.
[CL14] W. Chen and C. Li, Moving planes, moving spheres, and a priori estimates, J. Diff. Equa., 195(2003), 1-13.
[CLO] W. Chen, C. Li, and B. Ou, Classification of solutions for an integral equation, CPAM, 59(2006), 330-343.
[CLO1] W. Chen, C. Li, and B. Ou, Qualitative properties of solutions for an integral equation, Disc. Cont. Dyn. Sys., 12(2005), 347-354.
[CLO2] W. Chen, C. Li, and B. Ou, Classification of solutions for a system of integral equations, Comm. in PDE, 30(2005), 59-65.
[CS] K. Cheng and J. Smoller, Conformal metrics with prescribed Gaussian curvature on S^2, Trans. AMS, 336(1993), 219-255.
[CY1] A. Chang and P. Yang, A perturbation result in prescribing scalar curvature on S^n, Duke Math. J., 64(1991), 27-69.
[CY2] A. Chang, M. Gursky, and P. Yang, The scalar curvature equation on 2- and 3-spheres, Calc. Var., 1(1993), 205-229.
[CY3] A. Chang and P. Yang, Prescribing Gaussian curvature on S^2, Acta Math., 159(1987), 214-259.
[CY4] A. Chang and P. Yang, Conformal deformation of metric on S^2, J. Diff. Geom., 23(1988), 259-296.
[DL] W. Ding and J. Liu, A note on the problem of prescribing Gaussian curvature on surfaces, Trans. AMS, 347(1995), 1059-1065.
[ES] J. Escobar and R. Schoen, Conformal metrics with prescribed scalar curvature, Invent. Math., 86(1986), 243-254.
[Ev] L. Evans, Partial Differential Equations, Graduate Studies in Math, Vol. 19, AMS, 1998.
[F] L. E. Fraenkel, An Introduction to Maximum Principles and Symmetry in Elliptic Problems, Cambridge University Press, New York, 2000.
[He] E. Hebey, Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequalities, AMS, Courant Institute of Mathematical Sciences Lecture Notes 5, 1999.
[GNN] B. Gidas, W. Ni, and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in R^n, in Mathematical Analysis and Applications, Academic Press, New York, 1981.
[GNN1] B. Gidas, W. Ni, and L. Nirenberg, Symmetry and related properties via the maximum principle, Comm. Math. Phys., 68(1979), 209-243.
[GS] B. Gidas and J. Spruck, Global and local behavior of positive solutions of nonlinear elliptic equations, Comm. Pure Appl. Math., 34(1981), 525-598.
[GT] D. Gilbarg and N. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, 1983.
[Ha] Z. Han, Prescribing Gaussian curvature on S^2, Duke Math. J., 61(1990), 679-703.
[Ho] C. Hong, A best constant and the Gaussian curvature, Proc. AMS, 97(1986), 737-747.
[Ka] J. Kazdan, Prescribing the curvature of a Riemannian manifold, AMS CBMS Lecture, No. 57, 1984.
[KW1] J. Kazdan and F. Warner, Curvature functions for compact two-manifolds, Ann. of Math., 99(1974), 14-47.
[KW2] J. Kazdan and F. Warner, Existence and conformal deformation of metrics with prescribed Gaussian and scalar curvature, Annals of Math., 101(1975), 317-331.
[L] E. Lieb, Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities, Ann. of Math., 118(1983), 349-374.
[Li] C. Li, Local asymptotic symmetry of singular solutions to nonlinear elliptic equations, Invent. Math., 123(1996), 221-231.
[Li0] C. Li, Monotonicity and symmetry of solutions of fully nonlinear elliptic equations on bounded domains, Comm. in PDE, 16(2&3)(1991), 491-526.
[Li1] Y. Li, On prescribing scalar curvature problem on S^3 and S^4, C. R. Acad. Sci. Paris, t. 314, Serie I (1992), 55-59.
[Li2] Y. Li, Prescribing scalar curvature on S^n and related problems, J. Differential Equations, 120(1995), 319-410.
[Li3] Y. Li, Prescribing scalar curvature on S^n and related problems, II: Existence and compactness, Comm. Pure Appl. Math., 49(1996), 541-597.
[Lin1] C. Lin, On Liouville theorem and a priori estimates for the scalar curvature equations, Ann. Scuola Norm. Sup. Pisa, Cl. Sci., 27(1998), 107-130.
[Mo] J. Moser, On a nonlinear problem in differential geometry, Dynamical Systems, Academic Press, N.Y., 1973.
[P] P. Padilla, Symmetry properties of positive solutions of elliptic equations on symmetric domains, Appl. Anal., 64(1997), 153-169.
[O] M. Obata, The conjecture on conformal transformations of Riemannian manifolds, J. Diff. Geom., 6(1971), 247-258.
[On] E. Onofri, On the positivity of the effective action in a theory of random surfaces, Comm. Math. Phys., 86(1982), 321-326.
[Rab] P. Rabinowitz, Minimax Methods in Critical Point Theory with Applications to Differential Equations, CBMS series, No. 65, 1986.
[Sch] M. Schechter, Modern Methods in Partial Differential Equations, McGraw-Hill Inc., 1977.
[Sch1] M. Schechter, Principles of Functional Analysis, Graduate Studies in Math, Vol. 6, AMS, 2002.
[SZ] R. Schoen and D. Zhang, Prescribed scalar curvature on the n-spheres, Calc. Var. Partial Differential Equations, 4(1996), 1-25.
[WX] J. Wei and X. Xu, Classification of solutions of higher order conformally invariant equations, Math. Ann., 313(1999), 207-228.
[XY] X. Xu and P. Yang, Remarks on prescribing Gauss curvature, Trans. AMS, Vol. 336, No. 2 (1993), 831-840.
It is a simple method to boost the regularity of solutions.
It has been used extensively in various forms in the authors' previous works. The essence of the approach is well known in the analysis community; however, the version we present here contains some new developments. It is much more general and is very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, to obtain regularities. We will use examples to show how this theorem can be applied to PDEs and to integral equations.

Chapter 4 is a preparation for Chapters 5 and 6. We introduce Riemannian manifolds, curvatures, covariant derivatives, and Sobolev embedding on manifolds.

Chapter 5 deals with semilinear elliptic equations arising from prescribing Gaussian curvature, on both positively and negatively curved manifolds. We show the existence of weak solutions in critical cases via variational approaches. We also introduce the method of lower and upper solutions.

Chapter 6 focuses on solving a problem of prescribing scalar curvature on S^n for n \ge 3. It is in the critical case where the corresponding variational functional is not compact at any level set. To recover the compactness, we construct a max-mini variational scheme. The outline is clearly presented; however, the detailed proofs are rather complex. Beginners may skip these proofs.

Chapter 7 is devoted to the study of various maximum principles, in particular, the ones based on comparisons. Besides classical ones, we also introduce a version of the maximum principle at infinity and a maximum principle for integral equations, which basically depends on the absolute continuity of a Lebesgue integral. It is a preparation for the method of moving planes.

In Chapter 8, we introduce the method of moving planes and its variant, the method of moving spheres, and apply them to obtain the symmetry, monotonicity, a priori estimates, and even nonexistence of solutions. We also introduce an integral form of the method of moving planes, which is quite different from the one for PDEs. Instead of using local properties of a PDE, we exploit global properties of solutions to integral equations.
Many research examples are illustrated, including the well-known one by Gidas, Ni, and Nirenberg, as well as a few from the authors' recent papers. Chapters 7 and 8 function as a self-contained group: the readers who are only interested in the maximum principles and the method of moving planes can skip the first six chapters and start directly from Chapter 7.
Wenxiong Chen, Yeshiva University
Congming Li, University of Colorado
Contents

1 Introduction to Sobolev Spaces
  1.1 Distributions
  1.2 Sobolev Spaces
  1.3 Approximation by Smooth Functions
  1.4 Sobolev Embeddings
  1.5 Compact Embedding
  1.6 Other Basic Inequalities

2 Existence of Weak Solutions
  2.1 Second Order Elliptic Operators
  2.2 Weak Solutions
  2.3 Methods of Linear Functional Analysis
  2.4 Variational Methods

3 Regularity of Solutions
  3.1 W^{2,p} a Priori Estimates
  3.2 W^{2,p} Regularity of Solutions
  3.3 Regularity Lifting

4 Preliminary Analysis on Riemannian Manifolds
  4.1 Differentiable Manifolds
  4.2 Tangent Spaces
  4.3 Riemannian Metrics
  4.4 Curvature
  4.5 Calculus on Manifolds
  4.6 Sobolev Embeddings

5 Prescribing Gaussian Curvature on Compact 2-Manifolds
  5.1 Prescribing Gaussian Curvature on S^2
  5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds

6 Prescribing Scalar Curvature on S^n, for n \ge 3
  6.1 Introduction
  6.2 The Variational Approach
  6.3 The A Priori Estimates

7 Maximum Principles
  7.1 Introduction
  7.2 Weak Maximum Principles
  7.3 The Hopf Lemma and Strong Maximum Principles
  7.4 Maximum Principles Based on Comparisons
  7.5 A Maximum Principle for Integral Equations

8 Methods of Moving Planes and of Moving Spheres
  8.1 Outline of the Method of Moving Planes
  8.2 Applications of the Maximum Principles Based on Comparisons
  8.3 Method of Moving Planes in a Local Way
  8.4 Method of Moving Spheres
  8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation

A Appendices
  A.1 Notations
  A.2 Inequalities
  A.3 Calderon-Zygmund Decomposition
  A.4 The Contraction Mapping Principle
  A.5 The Arzela-Ascoli Theorem

References
1 Introduction to Sobolev Spaces

1.1 Distributions
1.2 Sobolev Spaces
1.3 Approximation by Smooth Functions
1.4 Sobolev Embeddings
1.5 Compact Embedding
1.6 Other Basic Inequalities
    1.6.1 Poincare's Inequality
    1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality

In real life, people use numbers to quantify the surrounding objects. For example, the absolute value |a - b| is used to measure the difference between two numbers a and b. In mathematics, functions are used to describe physical states; for example, temperature is a function of time and place. Very often, we use a sequence of approximate solutions to approach a real one, and how close these solutions are to the real one depends on how we measure them, that is, which metric we are choosing. Hence, in the theory of partial differential equations, it is imperative that we not only develop suitable metrics to measure different states (functions), but must also study relationships among different metrics. For these purposes, the Sobolev spaces were introduced. They have many applications in various branches of mathematics, in particular, in the theory of partial differential equations.

The role of Sobolev spaces in the analysis of PDEs is somewhat similar to the role of Euclidean spaces in the study of geometry. The fundamental research on the relations among various Sobolev spaces (Sobolev norms) was first carried out by G. Hardy and J. Littlewood in the 1910s and then by S. Sobolev in the 1930s. More recently, many well-known mathematicians, such as H. Brezis, L. Caffarelli, A. Chang, E. Lieb, L. Nirenberg, J. Serrin, and E. Stein, have worked in this area. The main objectives are to determine if and how the norms dominate each other, what the sharp estimates are, which functions achieve these sharp estimates, and which functions are 'critically' related to these sharp estimates.

In the theory of partial differential equations, the method of functional analysis, in particular the calculus of variations, has seen more and more applications, especially for nonlinear partial differential equations. To roughly illustrate this kind of application, let's start off with a simple example. Let \Omega be a bounded domain in R^n and consider the Dirichlet problem associated with the Laplace equation:

    -\Delta u = f(x), x \in \Omega;  u = 0, x \in \partial\Omega.    (1.1)

To prove the existence of solutions, one may view -\Delta as an operator acting on a proper linear space and then apply some known principles of functional analysis, for instance, the 'fixed point theory' or 'the degree theory,' to derive the existence. One may also consider the corresponding variational functional

    J(u) = \frac{1}{2} \int_{\Omega} |\nabla u|^2 dx - \int_{\Omega} f(x)\, u \, dx    (1.2)

in a proper linear space, and seek critical points of the functional in that space. This kind of variational approach is particularly powerful in dealing with nonlinear equations. For example, in equation (1.1), instead of f(x), we consider f(x, u). Then it becomes a semilinear equation. Correspondingly, we have the functional

    J(u) = \frac{1}{2} \int_{\Omega} |\nabla u|^2 dx - \int_{\Omega} F(x, u)\, dx,    (1.3)

where F(x, u) = \int_0^u f(x, s)\, ds is an antiderivative of f(x, \cdot). From the definition of the functional in either (1.2) or (1.3), one can see that the function u in the space need not be second order differentiable as required by the classical solutions of (1.1). Hence the critical points of the functional are solutions of the problem only in the 'weak' sense. However, by an appropriate regularity argument, one may recover the differentiability of the solutions, so that they can still satisfy equation (1.1) in the classical sense.

In general, given a PDE problem, our intention is to view it as an operator A acting on some proper linear spaces X and Y of functions and to symbolically write the equation as

    Au = f.    (1.4)

Then we can apply the general and elegant principles of linear or nonlinear functional analysis to study the solvability of various equations involving A. We may also associate this operator with a functional J(\cdot), whose critical points are the solutions of the equation. In this process, the key is to find an appropriate operator 'A' and appropriate spaces 'X' and 'Y'. As we will see in the next few chapters, the Sobolev spaces are designed precisely for this purpose and will work out properly.

In solving a partial differential equation, in many cases, it is natural to first find a sequence of approximate solutions, and then go on to investigate the convergence of the sequence. The limit function of a convergent sequence of approximate solutions would be the desired exact solution of the equation. It is very common that people use 'energy' minimization or conservation, or sometimes use finite dimensional approximation. In this process, there are usually two basic stages in showing convergence: i) in a reflexive Banach space, every bounded sequence has a weakly convergent subsequence; and then ii) by the compact embedding from a 'stronger' Sobolev space into a 'weaker' one, the weakly convergent sequence in the 'stronger' space becomes a strongly convergent sequence in the 'weaker' space.

The advantage is that it allows one to divide the task of finding 'suitable' smooth solutions for a PDE into two major steps:

Step 1. Existence of Weak Solutions. One seeks solutions which are less differentiable but are easier to obtain.

Step 2. Regularity Lifting. One uses various analysis tools to boost the differentiability of the known weak solutions and tries to show that they are actually classical ones.

Both the existence of weak solutions and the regularity lifting have become two major branches of today's PDE analysis. Various function spaces and the related embedding theories are basic tools in both analyses, among which Sobolev spaces are the most frequently used ones.

Hence in Section 1.1, we will introduce the distributions, mainly the notion of the weak derivatives, which are the elements of the Sobolev spaces. We then define Sobolev spaces in Section 1.2. To derive many useful properties in Sobolev spaces, it is not so convenient to work directly on weak derivatives; in Section 1.3, we show that, in many cases, these weak derivatives can be approximated by smooth functions, so we can just work on smooth functions to establish a series of important inequalities, and the results can then be applied to a broad class of partial differential equations. Then in the next three sections, we establish the Sobolev embeddings, the compact embedding, and other basic inequalities. Before going on to the details of this chapter, the readers may have a glance at the introduction of the next chapter, Existence of Weak Solutions, to gain more motivation for studying the Sobolev spaces.

1.1 Distributions

As we have seen in the introduction, via the functional analysis approach, for the functional J(u) in (1.2) or (1.3), what was really involved were only the first derivatives of u instead of the second derivatives as required classically for a second order equation; and these first derivatives need not be continuous or even be defined everywhere. Therefore, one can substantially weaken the notion of partial derivatives. In this section, we introduce the notion of 'weak derivatives,' which will be the elements of the Sobolev spaces.

Let R^n be the n-dimensional Euclidean space and \Omega be an open connected subset in R^n. Let D(\Omega) = C_0^{\infty}(\Omega) be the linear space of infinitely differentiable functions with compact support in \Omega. This is called the space of test functions on \Omega.

Example 1.1.1 Assume B_R(x_o) := \{x \in R^n : |x - x_o| < R\} \subset \Omega. Then for any r < R, the function

    f(x) = \begin{cases} \exp\left\{ \frac{1}{|x - x_o|^2 - r^2} \right\} & for |x - x_o| < r \\ 0 & elsewhere \end{cases}

is in C_0^{\infty}(\Omega).

Example 1.1.2 Assume \rho \in C_0^{\infty}(R^n), u \in L^p(\Omega), and supp u \subset K \subset\subset \Omega. Let

    u_{\epsilon}(x) := \rho_{\epsilon} * u := \frac{1}{\epsilon^n} \int_{R^n} \rho\left( \frac{x - y}{\epsilon} \right) u(y)\, dy.

Then u_{\epsilon} \in C_0^{\infty}(\Omega) for \epsilon sufficiently small.

Let L^p_{loc}(\Omega) be the space of pth power locally summable functions for 1 \le p \le \infty. Such functions are Lebesgue measurable functions f defined on \Omega with the property that

    \|f\|_{L^p(K)} := \left( \int_K |f(x)|^p dx \right)^{1/p} < \infty

for every compact subset K in \Omega.

Assume that u is a C^1 function in \Omega and \phi \in D(\Omega). Through integration by parts, we have, for i = 1, 2, \dots, n,

    \int_{\Omega} \frac{\partial u}{\partial x_i} \phi(x)\, dx = - \int_{\Omega} u(x) \frac{\partial \phi}{\partial x_i} dx.    (1.5)

There is no boundary term because \phi vanishes near the boundary.

Now if u is not in C^1(\Omega), then \frac{\partial u}{\partial x_i} does not exist. However, the integral on the right hand side of (1.5) still makes sense if u is a locally L^1 summable function. For this reason, we define the first derivative \frac{\partial u}{\partial x_i} weakly as the function v(x) that satisfies

    \int_{\Omega} v(x) \phi(x)\, dx = - \int_{\Omega} u \frac{\partial \phi}{\partial x_i} dx

for all functions \phi \in D(\Omega); here u only needs to be locally L^1 summable.

The same idea works for higher partial derivatives. Let \alpha = (\alpha_1, \alpha_2, \dots, \alpha_n) be a multi-index of order k := |\alpha| := \alpha_1 + \alpha_2 + \dots + \alpha_n. For u \in C^k(\Omega), the regular \alpha th partial derivative of u is

    D^{\alpha} u = \frac{\partial^{\alpha_1} \partial^{\alpha_2} \cdots \partial^{\alpha_n} u}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.

Given any test function \phi \in D(\Omega), through a straightforward integration by parts k times, we arrive at

    \int_{\Omega} D^{\alpha} u \, \phi(x)\, dx = (-1)^{|\alpha|} \int_{\Omega} u \, D^{\alpha} \phi \, dx.    (1.6)

Now if u is not k times differentiable, the left hand side of (1.6) makes no sense. However, the right hand side is valid for functions u with much weaker differentiability. Thus it is natural to choose those functions v that satisfy (1.6) as the weak representatives of D^{\alpha} u.

Definition 1.1.1 For u, v \in L^1_{loc}(\Omega), we say that v is the \alpha th weak derivative of u, written v = D^{\alpha} u, provided

    \int_{\Omega} v(x) \phi(x)\, dx = (-1)^{|\alpha|} \int_{\Omega} u \, D^{\alpha} \phi \, dx

for all test functions \phi \in D(\Omega).

Example 1.1.3 For n = 1 and \Omega = (-\pi, 1), let

    u(x) = \begin{cases} \cos x & if -\pi < x \le 0 \\ 1 - x & if 0 < x < 1. \end{cases}

Then its weak derivative u'(x) can be represented by

    v(x) = \begin{cases} -\sin x & if -\pi < x \le 0 \\ -1 & if 0 < x < 1. \end{cases}

To see this, we verify, through integration by parts, that

    \int_{-\pi}^{1} u(x) \phi'(x)\, dx = - \int_{-\pi}^{1} v(x) \phi(x)\, dx    (1.7)

for any \phi \in D(\Omega). In fact,

    \int_{-\pi}^{1} u(x) \phi'(x)\, dx = \int_{-\pi}^{0} \cos x \, \phi'(x)\, dx + \int_0^1 (1 - x) \phi'(x)\, dx
        = \int_{-\pi}^{0} \sin x \, \phi(x)\, dx + \phi(0) - \phi(0) + \int_0^1 \phi(x)\, dx = - \int_{-\pi}^{1} v(x) \phi(x)\, dx.

In this example, the function u is not differentiable at x = 0 in the classical sense. Moreover, one may alter the values of the weak derivative v(x) on a set of measure zero, and (1.7) still holds.

Since the weak derivative is defined by integrals, it should be unique up to a set of measure zero.

Lemma 1.1.1 (Uniqueness of Weak Derivatives). If v and w are the weak \alpha th partial derivatives of u, then v(x) = w(x) almost everywhere.

Proof. By the definition of the weak derivatives, we have

    \int_{\Omega} v(x) \phi(x)\, dx = (-1)^{|\alpha|} \int_{\Omega} u(x) D^{\alpha} \phi \, dx = \int_{\Omega} w(x) \phi(x)\, dx

for any \phi \in D(\Omega). It follows that

    \int_{\Omega} (v(x) - w(x)) \phi(x)\, dx = 0, \forall \phi \in D(\Omega).

Therefore, we must have v(x) = w(x) almost everywhere. \square

More generally, we can view a weak derivative D^{\alpha} u as a linear functional acting on the space of test functions D(\Omega), and we call it a distribution.

Definition 1.1.2 A distribution is a continuous linear functional on D(\Omega). The linear space of distributions or the generalized functions on \Omega, denoted by D'(\Omega), is the collection of all continuous linear functionals on D(\Omega).
if there is an f ∈ L1 (Ω) such that loc µ(φ) = Tf (φ). such a delta function is the derivative of some function in the following distributional sense. The most important and most commonly used distributions are locally summable functions. To explain this. for any sequence {φk } ⊂ D(Ω) with φk →φ in D(Ω). and we say that φk →φ in D(Ω) if a) there exists K ⊂⊂ Ω such that suppφk . It is not a function at all. who often simply view δxo as 0. For any distribution µ. consider loc Tf (φ) = Ω f (x)φ(x)dx. 1). for x = xo δxo (x) = ∞. suppφ ⊂ K. one can show that such a delta function is not locally summable. as k→∞. then we say that µ is (or can be realized as ) a locally summable function and identify it as f . we may regard δ0 (x) as f (x) in the sense of distributions. the delta function at xo can be deﬁned as δxo (φ) = φ(xo ). An interesting example of a distribution that is not a locally summable function is the wellknown Dirac delta function.1. we have 1 1 0.1 Distributions 7 Here. Surprisingly. for x ≥ 0. Let xo be a point in Ω. − −1 f (x)φ (x)dx = − 0 φ (x)dx = φ(0) = δ0 (φ). This kind of “function” has been used widely and so successfully by physicists and engineers. we have T (φk )→T (φ). . For any φ ∈ D(Ω). ∀φ ∈ D(Ω). let Ω = (−1. for x = xo . In fact. Hence it is a distribution. for x < 0 1. and let f (x) = Then. It is easy to verify that Tf (·) is a continuous linear functional on D(Ω). the continuity of a functional T on D(Ω) means that. Dα φk →Dα φ uniformly as k→∞. and b) for any α. and hence it is a distribution. for any f ∈ Lp (Ω) with 1 ≤ p ≤ ∞. Compare this with the deﬁnition of weak derivatives. However.
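The last identity can be checked numerically. The sketch below is our own illustration, not from the book; the particular test function (the standard bump) and the grid size are arbitrary choices. It evaluates the distributional pairing −⟨f, φ′⟩ for the step function f and compares it with φ(0) = e^{−1}:

```python
import numpy as np

def dphi(x):
    # phi(x) = exp(1/(x^2 - 1)) on (-1, 1), 0 outside;
    # phi'(x) = phi(x) * (-2x / (x^2 - 1)^2) for |x| < 1
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    xm = x[m]
    out[m] = np.exp(1.0 / (xm**2 - 1.0)) * (-2.0 * xm / (xm**2 - 1.0)**2)
    return out

# f is the Heaviside step, so -<f, phi'> = -∫_0^1 phi'(x) dx, which should equal phi(0)
x = np.linspace(0.0, 1.0 - 1e-9, 400001)
d = dphi(x)
pairing = -np.sum((d[1:] + d[:-1]) / 2 * np.diff(x))   # trapezoid rule
print(pairing, np.exp(-1.0))                           # both ≈ e^{-1}
```

The agreement reflects exactly the computation above: the integral telescopes to φ(0) − φ(1) and φ vanishes at the endpoint.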
1.2 Sobolev Spaces
Now suppose that, given a function f ∈ Lᵖ(Ω), we want to solve the partial differential equation

Δu = f(x)

in the sense of weak derivatives. Naturally, we would seek solutions u such that Δu is in Lᵖ(Ω); more generally, we would start from the collection of all distributions whose second weak derivatives are in Lᵖ(Ω). In a variational approach, as we have seen in the introduction, to seek critical points of the functional

(1/2) ∫_Ω |∇u|² dx − ∫_Ω F(x, u) dx,

a natural set of functions to start with is the collection of distributions whose first weak derivatives are in L²(Ω). More generally, we have

Definition 1.2.1 The Sobolev space W^{k,p}(Ω) (k ≥ 0 and p ≥ 1) is the collection of all distributions u on Ω such that, for every multi-index α with |α| ≤ k, Dᵅu can be realized as an Lᵖ function on Ω. Furthermore, W^{k,p}(Ω) is a Banach space with the norm

‖u‖ := ( Σ_{|α|≤k} ∫_Ω |Dᵅu(x)|ᵖ dx )^{1/p}.

In the special case p = 2, it is also a Hilbert space, and we usually denote it by Hᵏ(Ω).

Definition 1.2.2 W₀^{k,p}(Ω) is the closure of C₀∞(Ω) in W^{k,p}(Ω). Intuitively, W₀^{k,p}(Ω) is the space of functions whose derivatives of order up to k − 1 vanish on the boundary.
Example 1.2.1 Let Ω = (−1, 1) and consider the function f(x) = |x|^β. For 0 < β < 1, it is obviously not differentiable at the origin; however, for any 1 ≤ p < 1/(1 − β), it is in the Sobolev space W^{1,p}(Ω). More generally, let Ω be the open unit ball centered at the origin in Rⁿ. Then the function |x|^β is in W^{1,p}(Ω) if and only if

β > 1 − n/p.  (1.8)

To see this, we first calculate, for x ≠ 0,

f_{x_i}(x) = β x_i / |x|^{2−β}, and hence |∇f(x)| = β / |x|^{1−β}.  (1.9)

Fix a small ε > 0. Then for any φ ∈ D(Ω), by integration by parts we have

∫_{Ω\B_ε(0)} f(x) φ_{x_i}(x) dx + ∫_{Ω\B_ε(0)} f_{x_i}(x) φ(x) dx = −∫_{∂B_ε(0)} f φ ν_i dS,

where ν = (ν₁, ν₂, ⋯, νₙ) is an inward normal vector on ∂B_ε(0). Now, under the condition that β > 1 − n/p, f_{x_i} is in Lᵖ(Ω) ⊂ L¹(Ω), and

|∫_{∂B_ε(0)} f φ ν_i dS| ≤ C ε^{n−1+β} → 0, as ε → 0.

It follows that

∫_Ω |x|^β φ_{x_i}(x) dx = −∫_Ω (β x_i / |x|^{2−β}) φ(x) dx.

Therefore, the weak first partial derivatives of |x|^β are

β x_i / |x|^{2−β}, i = 1, 2, ⋯, n.

Moreover, from (1.9), one can see that |∇f| is in Lᵖ(Ω) if and only if β > 1 − n/p.
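The borderline in (1.8) can be verified symbolically: in polar coordinates, the Lᵖ integral of |∇f| near the origin reduces to a one-dimensional power integral, which is finite exactly when (β − 1)p + n − 1 > −1, i.e. β > 1 − n/p. A small sympy sketch follows (the parameter values n = 3, p = 2 are our own choices for illustration):

```python
from sympy import symbols, integrate, oo, Rational

r = symbols('r', positive=True)

def radial_grad_integral(n, p, beta):
    # ∫_Ω |∇f|^p dx  ~  beta^p |S^{n-1}| ∫_0^1 r^((beta-1)p) r^(n-1) dr;
    # only this radial power integral decides finiteness
    return integrate(r**((beta - 1)*p + n - 1), (r, 0, 1))

# n = 3, p = 2: the condition (1.8) reads beta > 1 - 3/2 = -1/2
print(radial_grad_integral(3, 2, Rational(1, 2)))    # finite
print(radial_grad_integral(3, 2, Rational(-3, 4)))   # divergent
```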
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance when proving inequalities, it may feel inconvenient and cumbersome to manage the weak derivatives directly from their definition. To get around this, in this section we will show that any function in a Sobolev space can be approached by a sequence of smooth functions; in other words, the smooth functions are dense in Sobolev spaces. Based on this, when deriving many useful properties of Sobolev spaces, we can just work on smooth functions and then take limits. At the end of the section, for more convenient application of the approximation results, we introduce an Operator Completion Theorem. We also prove an Extension Theorem. Both theorems will be used frequently in the next few sections.

The idea in approximation is based on mollifiers. Let

j(x) = c e^{1/(|x|² − 1)} if |x| < 1, and j(x) = 0 if |x| ≥ 1.

One can verify that j(x) ∈ C₀∞(B₁(0)). Choose the constant c such that

∫_{Rⁿ} j(x) dx = 1.

For each ε > 0, define

j_ε(x) = (1/εⁿ) j(x/ε).

Then obviously

∫_{Rⁿ} j_ε(x) dx = 1, ∀ ε > 0.

One can also verify that j_ε ∈ C₀∞(B_ε(0)), and

lim_{ε→0} j_ε(x) = 0 for x ≠ 0, while lim_{ε→0} j_ε(x) = ∞ for x = 0.

The above observations suggest that the limit of j_ε(x) may be viewed as a delta function, and from the well-known property of the delta function we would naturally expect, for any continuous function f(x) and any point x ∈ Ω,

(J_ε f)(x) := ∫_Ω j_ε(x − y) f(y) dy → f(x), as ε → 0.  (1.10)

More generally, if f(x) is only in Lᵖ(Ω), we would expect (1.10) to hold almost everywhere. We call j_ε(x) a mollifier and (J_ε f)(x) the mollification of f(x). We will show that, for each ε > 0, J_ε f is a C∞ function, and that J_ε f → f in W^{k,p} as ε → 0.

Notice that actually

(J_ε f)(x) = ∫_{B_ε(x)∩Ω} j_ε(x − y) f(y) dy;

hence, in order for J_ε f(x) to approximate f(x) well, we need B_ε(x) to be completely contained in Ω, to ensure that

∫_{B_ε(x)∩Ω} j_ε(x − y) dy = 1

(one of the important properties of the delta function). Equivalently, we need x to be in the interior of Ω. For this reason, we first prove a local approximation theorem.

Theorem 1.3.1 (Local approximation by smooth functions). For any f ∈ W^{k,p}(Ω), J_ε f ∈ C∞(Rⁿ) and J_ε f → f in W^{k,p}_loc(Ω) as ε → 0.
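The mollification J_ε f and its convergence can be illustrated numerically. In this sketch (our own illustration, not from the book; the target f(x) = |x|, the interval, and all grid sizes are arbitrary choices) we build the one-dimensional j_ε, normalize it by quadrature, and watch the sup-error on [−1/2, 1/2] shrink as ε → 0:

```python
import numpy as np

def j(x):
    # the standard bump, unnormalized: exp(1/(x^2 - 1)) inside (-1, 1), 0 outside
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = np.exp(1.0 / (x[m]**2 - 1.0))
    return out

# normalize so that ∫ j = 1 (trapezoid rule)
xs = np.linspace(-1.0, 1.0, 20001)
jx = j(xs)
c = 1.0 / np.sum((jx[1:] + jx[:-1]) / 2 * np.diff(xs))

def mollify(f, x, eps, m=2001):
    # (J_eps f)(x) = ∫ j_eps(x - y) f(y) dy, with j_eps(z) = (c/eps) j(z/eps)
    y = np.linspace(x - eps, x + eps, m)
    w = (c / eps) * j((x - y) / eps) * f(y)
    return np.sum((w[1:] + w[:-1]) / 2 * np.diff(y))

f = np.abs                       # continuous, but not differentiable at 0
errs = [max(abs(mollify(f, x, eps) - f(x)) for x in np.linspace(-0.5, 0.5, 11))
        for eps in (0.4, 0.2, 0.1)]
print(errs)                      # sup-error decreases as eps shrinks
```

The error is largest at the kink x = 0 and is of order ε there, matching the uniform-convergence argument for continuous f in the proof below.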
Then, to extend this result to the entire Ω, we will choose infinitely many open sets O_i, i = 1, 2, ⋯, each of which has a positive distance to the boundary of Ω, and whose union is the whole Ω. Based on the above theorem, we are able to approximate a W^{k,p}(Ω) function on each O_i by a sequence of smooth functions. Combining this with a partition of unity, and a cut-off function if Ω is unbounded, we will then prove

Theorem 1.3.2 (Global approximation by smooth functions). For any f ∈ W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂ C∞(Ω) ∩ W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.

Theorem 1.3.3 (Global approximation by smooth functions up to the boundary). Assume that Ω is bounded with C¹ boundary ∂Ω. Then for any f ∈ W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂ C∞(Ω̄) = C∞(Ω̄) ∩ W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.

When Ω is the entire space Rⁿ, approximation by C∞ functions and approximation by C₀∞ functions are essentially the same. We have

Theorem 1.3.4 W^{k,p}(Rⁿ) = W₀^{k,p}(Rⁿ). In other words, for any f ∈ W^{k,p}(Rⁿ), there exists a sequence of functions {f_m} ⊂ C₀∞(Rⁿ) such that

f_m → f, as m → ∞, in W^{k,p}(Rⁿ).

Proof of Theorem 1.3.1. We prove the theorem in three steps.

In Step 1, we show that J_ε f ∈ C∞(Rⁿ) and

‖J_ε f‖_{Lᵖ(Ω)} ≤ ‖f‖_{Lᵖ(Ω)}.

From the definition of J_ε f(x), we can see that it is well defined for all x ∈ Rⁿ, and that it vanishes if x is of distance ε away from Ω. Here and in the following, for simplicity of argument, we extend f to be zero outside of Ω.

In Step 2, we prove that, if f is in Lᵖ(Ω), then J_ε f → f in Lᵖ_loc(Ω). We first verify this for continuous functions and then approximate Lᵖ functions by continuous functions.

In Step 3, we reach the conclusion of the theorem. For each f ∈ W^{k,p}(Ω) and |α| ≤ k, Dᵅf is in Lᵖ(Ω). Then, from the result in Step 2, we have

J_ε(Dᵅf) → Dᵅf in Lᵖ_loc(Ω).

Hence what we need to verify is
Dᵅ(J_ε f)(x) = J_ε(Dᵅf)(x).

As the reader will notice, the arguments in the last two steps only work on compact subsets of Ω.

Step 1. Let e_i = (0, ⋯, 0, 1, 0, ⋯, 0) be the unit vector in the x_i direction. Fix ε > 0 and x ∈ Rⁿ. By the definition of J_ε f, we have, for |h| < ε,

[(J_ε f)(x + h e_i) − (J_ε f)(x)]/h = ∫_{B_{2ε}(x)∩Ω} [j_ε(x + h e_i − y) − j_ε(x − y)]/h · f(y) dy.  (1.11)

Since, as h → 0,

[j_ε(x + h e_i − y) − j_ε(x − y)]/h → ∂j_ε(x − y)/∂x_i

uniformly for all y ∈ B_{2ε}(x) ∩ Ω, we can pass the limit through the integral sign in (1.11) to obtain

∂(J_ε f)(x)/∂x_i = ∫_Ω (∂j_ε(x − y)/∂x_i) f(y) dy.

Similarly, we have

Dᵅ(J_ε f)(x) = ∫_Ω Dᵅ_x j_ε(x − y) f(y) dy.

Noticing that j_ε(·) is infinitely differentiable, we conclude that J_ε f is also infinitely differentiable.

Next we derive

‖J_ε f‖_{Lᵖ(Ω)} ≤ ‖f‖_{Lᵖ(Ω)}.  (1.12)

By the Hölder inequality, we have

|(J_ε f)(x)| = |∫_Ω j_ε^{(p−1)/p}(x − y) j_ε^{1/p}(x − y) f(y) dy|
≤ (∫_Ω j_ε(x − y) dy)^{(p−1)/p} (∫_Ω j_ε(x − y) |f(y)|ᵖ dy)^{1/p}
≤ (∫_{B_ε(x)} j_ε(x − y) |f(y)|ᵖ dy)^{1/p}.

It follows that

∫_Ω |(J_ε f)(x)|ᵖ dx ≤ ∫_Ω ∫_{B_ε(x)} j_ε(x − y) |f(y)|ᵖ dy dx
≤ ∫_Ω |f(y)|ᵖ (∫_{B_ε(y)} j_ε(x − y) dx) dy
≤ ∫_Ω |f(y)|ᵖ dy.

This verifies (1.12).
Step 2. We prove that, for any compact subset K of Ω,

‖J_ε f − f‖_{Lᵖ(K)} → 0, as ε → 0.  (1.13)

We first show this for a continuous function f. By writing

(J_ε f)(x) − f(x) = ∫_{B_ε(0)} j_ε(y) [f(x − y) − f(x)] dy,

we have

|(J_ε f)(x) − f(x)| ≤ max_{x∈K, |y|<ε} |f(x − y) − f(x)| · ∫_{B_ε(0)} j_ε(y) dy = max_{x∈K, |y|<ε} |f(x − y) − f(x)|.

Due to the continuity of f and the compactness of K, the last term in the above inequality tends to zero uniformly as ε → 0. This proves (1.13) for continuous functions.

For any f in Lᵖ(Ω), given any δ > 0, choose a continuous function g such that

‖f − g‖_{Lᵖ(Ω)} < δ/3.  (1.14)

This can be derived from the well-known fact that any Lᵖ function can be approximated by a simple function of the form Σ_{j=1}^{k} a_j χ_j(x), where χ_j is the characteristic function of some measurable set A_j, and each simple function can in turn be approximated by a continuous function. For the continuous function g, (1.13) implies that, for sufficiently small ε, ‖J_ε g − g‖_{Lᵖ(K)} < δ/3. It follows from this and (1.14) that

‖J_ε f − f‖_{Lᵖ(K)} ≤ ‖J_ε f − J_ε g‖_{Lᵖ(K)} + ‖J_ε g − g‖_{Lᵖ(K)} + ‖g − f‖_{Lᵖ(K)}
≤ 2 ‖f − g‖_{Lᵖ(K)} + ‖J_ε g − g‖_{Lᵖ(K)} ≤ 2δ/3 + δ/3 = δ.

This verifies (1.13).

Step 3. Now assume that f ∈ W^{k,p}(Ω). Then for any α with |α| ≤ k, we have Dᵅf ∈ Lᵖ(Ω), and by the result in Step 2,

J_ε(Dᵅf) → Dᵅf in Lᵖ(K).  (1.15)

Now what is left to verify is

Dᵅ(J_ε f)(x) = J_ε(Dᵅf)(x), ∀ x ∈ K.

To see this, we fix a point x in K and choose ε < (1/2) dist(K, ∂Ω); then any point y in the ball B_ε(x) is in the interior of Ω, and

Dᵅ(J_ε f)(x) = ∫_Ω Dᵅ_x j_ε(x − y) f(y) dy = ∫_{B_ε(x)} Dᵅ_x j_ε(x − y) f(y) dy
= (−1)^{|α|} ∫_{B_ε(x)} Dᵅ_y j_ε(x − y) f(y) dy = ∫_{B_ε(x)} j_ε(x − y) Dᵅf(y) dy = J_ε(Dᵅf)(x).

Here we have used the fact that j_ε(x − y) and all its derivatives are supported in B_ε(x) ⊂ Ω. Consequently, Dᵅ(J_ε f) → Dᵅf in Lᵖ(K) for all |α| ≤ k. This completes the proof of Theorem 1.3.1.

The Proof of Theorem 1.3.2.

Step 1. For a Bounded Region Ω. Fix δ > 0. Let

Ω_i = {x ∈ Ω : dist(x, ∂Ω) > 1/i}, i = 1, 2, 3, ⋯,

and write O_i = Ω_{i+3} \ Ω̄_{i+1}. Choose some open set O₀ ⊂⊂ Ω so that Ω = ∪_{i=0}^{∞} O_i. Choose a smooth partition of unity {η_i}_{i=0}^{∞} associated with the open sets {O_i}_{i=0}^{∞}:

0 ≤ η_i(x) ≤ 1, η_i ∈ C₀∞(O_i), Σ_{i=0}^{∞} η_i(x) = 1, x ∈ Ω.

Given any function f ∈ W^{k,p}(Ω), obviously η_i f ∈ W^{k,p}(Ω) and supp(η_i f) ⊂ O_i. Choose ε_i > 0 so small that f_i := J_{ε_i}(η_i f) satisfies

‖f_i − η_i f‖_{W^{k,p}(Ω)} ≤ δ/2^{i+1}, i = 0, 1, 2, ⋯,
supp f_i ⊂ (Ω_{i+4} \ Ω̄_i), i = 1, 2, ⋯.  (1.16)

Set g = Σ_{i=0}^{∞} f_i. Then g ∈ C∞(Ω), because in a neighborhood of each point x ∈ Ω there are at most finitely many nonzero terms in the sum. To see that g − f ∈ W^{k,p}(Ω), we write

g − f = Σ_{i=0}^{∞} (f_i − η_i f).

It follows from (1.16) that

Σ_{i=0}^{∞} ‖f_i − η_i f‖_{W^{k,p}(Ω)} ≤ δ Σ_{i=0}^{∞} 1/2^{i+1} = δ.

From here we can see that the series converges in W^{k,p}(Ω) (see Exercise 1.3.1 below); hence (g − f) ∈ W^{k,p}(Ω), and

‖g − f‖_{W^{k,p}(Ω)} ≤ δ.  (1.17)

Since δ > 0 is an arbitrary number, this completes Step 1.

Remark 1.3.1 In the above argument, the infinite series Σ f_i does not converge to g in W^{k,p}(Ω): although each partial sum of the f_i is in C₀∞(Ω), neither the f_i nor the η_i f converge; only their difference converges.

Exercise 1.3.1 Let X be a Banach space and v_i ∈ X. Show that the series Σ_i v_i converges in X if Σ_i ‖v_i‖ < ∞. Hint: Show that the sequence of partial sums is a Cauchy sequence.

Step 2. For an Unbounded Region Ω. Since f ∈ W^{k,p}(Ω), given any δ > 0, there exists R > 0 such that

‖f‖_{W^{k,p}(Ω \ B_{R−2}(0))} ≤ δ.  (1.18)

Choose a cut-off function φ ∈ C∞(Rⁿ) satisfying

φ(x) = 1 for x ∈ B_{R−2}(0), φ(x) = 0 for x ∈ Rⁿ \ B_R(0), and |Dᵅφ(x)| ≤ 1 ∀ x ∈ Rⁿ, |α| ≤ k.

Then it is easy to verify that there exists a constant C such that

‖φf − f‖_{W^{k,p}(Ω)} ≤ Cδ.

Now, in the bounded domain Ω ∩ B_R(0), by the argument in Step 1, there is a function g ∈ C∞(Ω ∩ B_R(0)) such that

‖g − f‖_{W^{k,p}(Ω ∩ B_R(0))} ≤ δ.
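Exercise 1.3.1 is the abstract fact used in Step 1: absolute convergence implies convergence in a Banach space, because the partial sums form a Cauchy sequence. A numerical illustration follows (our own sketch; the dimension 50, the seed, and the norms 2^{−i} are arbitrary choices), in (R⁵⁰, ‖·‖₂):

```python
import numpy as np

rng = np.random.default_rng(0)

# v_i: vectors in R^50 with ||v_i|| = 2^{-i}, so that sum_i ||v_i|| = 2 < ∞
vs = []
for i in range(60):
    w = rng.standard_normal(50)
    vs.append(0.5**i * w / np.linalg.norm(w))

S = np.cumsum(vs, axis=0)          # partial sums S_m = v_0 + ... + v_m

# Cauchy estimate: ||S_m - S_n|| <= sum_{i=n+1}^m ||v_i|| < 2^{-n}
gap = np.linalg.norm(S[40] - S[20])
print(gap, 0.5**20)
```

The tail bound 2^{−n} is exactly the geometric-series estimate used for ‖g − f‖ in Step 1.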
Consequently, the function φg is in C∞(Ω), and by the above estimates,

‖φg − f‖_{W^{k,p}(Ω)} ≤ ‖φg − φf‖_{W^{k,p}(Ω)} + ‖φf − f‖_{W^{k,p}(Ω)} ≤ C₁ ‖g − f‖_{W^{k,p}(Ω ∩ B_R(0))} + Cδ ≤ (C₁ + C)δ.

This completes the proof of Theorem 1.3.2.

The Proof of Theorem 1.3.3. We will still use mollifiers to approximate a function. As compared to the proofs of the previous theorems, the main difficulty here is that, for a point on the boundary, there is no room to mollify. To circumvent this, we will translate the function a little bit inward, so that there is room to mollify within Ω; then we cover Ω with finitely many open sets, mollify on each set that covers the boundary layer, and again use a partition of unity to complete the proof.

Step 1. Approximating in a Small Set Covering ∂Ω. Let xᵒ be any point on ∂Ω. Since ∂Ω is C¹, we can make a C¹ change of coordinates locally, so that in the new coordinate system (x₁, ⋯, xₙ), for a sufficiently small r > 0, we can express

B_r(xᵒ) ∩ Ω = {x ∈ B_r(xᵒ) : xₙ > Φ(x₁, ⋯, x_{n−1})}

with some C¹ function Φ. Set D = Ω ∩ B_{r/2}(xᵒ). Shift every point x ∈ D in the xₙ direction by λε units: define

x^ε = x + λε eₙ, and D^ε = {x^ε : x ∈ D}.

This D^ε is obtained by shifting D toward the inside of Ω by λε units. Choose λ sufficiently large, so that the ball B_ε(x^ε) lies in Ω ∩ B_r(xᵒ) for all x ∈ D and all small ε > 0. Hence there is room to mollify a given W^{k,p} function near D^ε within Ω. More precisely, we first translate f a distance λε in the xₙ direction to become f^ε(x) = f(x^ε), and then mollify it. We claim that J_ε f^ε → f in W^{k,p}(D). Indeed, for any multi-index |α| ≤ k,

‖Dᵅ(J_ε f^ε) − Dᵅf‖_{Lᵖ(D)} ≤ ‖Dᵅ(J_ε f^ε) − Dᵅf^ε‖_{Lᵖ(D)} + ‖Dᵅf^ε − Dᵅf‖_{Lᵖ(D)}.  (1.19)

An argument similar to the proof of Theorem 1.3.1 implies that the first term on the right hand side goes to zero as ε → 0, while the second term also vanishes in the process, due to the continuity of translation in the Lᵖ norm.

Step 2. Applying the Partition of Unity. Since ∂Ω is compact, we can find finitely many such sets D, call them D_i, i = 1, 2, ⋯, N, whose union covers ∂Ω. Given δ > 0, from the argument in Step 1, for each D_i there exists g_i ∈ C∞(D̄_i) such that

‖g_i − f‖_{W^{k,p}(D_i)} ≤ δ.  (1.20)

Choose an open set D₀ ⊂⊂ Ω such that Ω ⊂ ∪_{i=0}^{N} D_i, and select a function g₀ ∈ C∞(D̄₀) such that

‖g₀ − f‖_{W^{k,p}(D₀)} ≤ δ.  (1.21)

Let {η_i} be a smooth partition of unity subordinate to the open sets {D_i}_{i=0}^{N}. Define

g = Σ_{i=0}^{N} η_i g_i.

Then obviously g ∈ C∞(Ω̄), and f = Σ_{i=0}^{N} η_i f. It follows from (1.20) and (1.21) that, for each |α| ≤ k,

‖Dᵅg − Dᵅf‖_{Lᵖ(Ω)} ≤ Σ_{i=0}^{N} ‖Dᵅ(η_i g_i) − Dᵅ(η_i f)‖_{Lᵖ(D_i)} ≤ C Σ_{i=0}^{N} ‖g_i − f‖_{W^{k,p}(D_i)} ≤ C(N + 1)δ.  (1.22)

This completes the proof of Theorem 1.3.3.

The Proof of Theorem 1.3.4. Let φ(r) be a C₀∞ cut-off function such that φ(r) = 1 for 0 ≤ r ≤ 1, φ(r) is between 0 and 1 for 1 < r < 2, and φ(r) = 0 for r ≥ 2. Then, by a direct computation, one can show that

‖φ(|x|/R) f(x) − f(x)‖_{W^{k,p}(Rⁿ)} → 0, as R → ∞.

Hence there exists a sequence of numbers {R_m} with R_m → ∞, such that, setting
g_m(x) := φ(|x|/R_m) f(x),

we have

‖g_m − f‖_{W^{k,p}(Rⁿ)} ≤ 1/m.  (1.23)

Obviously, each g_m has compact support. On the other hand, for each fixed m, from the Approximation Theorem 1.3.1, J_ε(g_m) → g_m in W^{k,p}(Rⁿ) as ε → 0. Hence there exist ε_m such that, for f_m := J_{ε_m}(g_m),

‖f_m − g_m‖_{W^{k,p}(Rⁿ)} ≤ 1/m.  (1.24)

Each f_m is in C₀∞(Rⁿ), and by (1.23) and (1.24),

‖f_m − f‖_{W^{k,p}(Rⁿ)} ≤ ‖f_m − g_m‖_{W^{k,p}(Rⁿ)} + ‖g_m − f‖_{W^{k,p}(Rⁿ)} ≤ 2/m → 0, as m → ∞.

This completes the proof of Theorem 1.3.4.

Now we have proved all four Approximation Theorems, which show that smooth functions are dense in Sobolev spaces W^{k,p}; in other words, W^{k,p} is the completion of Cᵏ under the norm ‖·‖_{W^{k,p}}. Later, when we derive various inequalities in Sobolev spaces, particularly in the next three sections, we can just first work on smooth functions and then extend the results to the whole Sobolev spaces. In order to make such extensions more convenient, without going through an approximation process in each particular space, we prove the following Operator Completion Theorem in general Banach spaces.

Theorem 1.3.5 Let D be a dense linear subspace of a normed space X, and let Y be a Banach space. Assume that T : D → Y is a bounded linear map. Then there exists an extension T̄ of T from D to the whole space X, such that T̄ is a bounded linear operator from X to Y with

‖T̄‖ = ‖T‖ and T̄x = Tx ∀ x ∈ D.

Proof. Given any element xᵒ ∈ X, since D is dense in X, there exists a sequence {x_i} ⊂ D such that x_i → xᵒ, as i → ∞.
It follows that

‖T x_i − T x_j‖ ≤ ‖T‖ ‖x_i − x_j‖ → 0, as i, j → ∞.

This implies that {T x_i} is a Cauchy sequence in Y. Since Y is a Banach space, {T x_i} converges to some element yᵒ in Y. Let

T̄ xᵒ = yᵒ.  (1.25)

To see that (1.25) is well defined, suppose there is another sequence {x_i′} that converges to xᵒ in X with T x_i′ → y₁ in Y. Then

‖y₁ − yᵒ‖ = lim ‖T x_i′ − T x_i‖ ≤ C lim_{i→∞} ‖x_i′ − x_i‖ = 0.

Obviously, T̄ is a linear operator; for x ∈ D, T̄x = Tx, and for other x ∈ X, T̄x is defined by (1.25). Moreover,

‖T̄ xᵒ‖ = lim_{i→∞} ‖T x_i‖ ≤ lim_{i→∞} ‖T‖ ‖x_i‖ = ‖T‖ ‖xᵒ‖;

hence T̄ is also bounded. This completes the proof.

As a direct application of this Operator Completion Theorem, we prove the following Extension Theorem.

Theorem 1.3.6 Assume that Ω is bounded and ∂Ω is Cᵏ. Then for any open set O ⊃ Ω̄, there exists a bounded linear operator

E : W^{k,p}(Ω) → W₀^{k,p}(O)

such that Eu = u a.e. in Ω.

Proof. To define the extension operator E, we first work on functions u ∈ Cᵏ(Ω̄). Since Cᵏ(Ω̄) is dense in W^{k,p}(Ω) (see Theorem 1.3.3), we can then apply the Operator Completion Theorem to extend E to W^{k,p}(Ω); for u ∈ W^{k,p}(Ω), it is easy to show that E(u)(x) = u(x) almost everywhere on Ω, based on

E(u)(x) = u(x) on Ω̄ ∀ u ∈ Cᵏ(Ω̄).

Now we prove the theorem for u ∈ Cᵏ(Ω̄). We define E(u) in the following two steps.

Step 1. The special case when Ω is a half ball,

B_r⁺(0) := {x = (x′, xₙ) ∈ Rⁿ : |x| < r, xₙ > 0}.

Assume u ∈ Cᵏ(B̄_r⁺(0)). We extend u to be a Cᵏ function on the whole ball B_r(0). Define

ū(x′, xₙ) = u(x′, xₙ) for xₙ ≥ 0, and ū(x′, xₙ) = Σ_{i=0}^{k} a_i u(x′, −λ_i xₙ) for xₙ < 0.

To guarantee that all the partial derivatives ∂ʲū/∂xₙʲ up to order k are continuous across the hyperplane xₙ = 0, we first pick λ_i, i = 0, 1, ⋯, k, such that

0 < λ₀ < λ₁ < ⋯ < λ_k < 1,

and then solve the algebraic system

Σ_{i=0}^{k} a_i (−λ_i)ʲ = 1, j = 0, 1, ⋯, k,

to determine a₀, a₁, ⋯, a_k. Since the coefficient determinant det((−λ_i)ʲ) is not zero, the system has a unique solution. One can easily verify that, for such λ_i and a_i, the extended function ū is in Cᵏ(B_r(0)).

Step 2. Reduce the general domain to the case of the half ball covered in Step 1, via domain transformations and a partition of unity.

Given any xᵒ ∈ ∂Ω, since ∂Ω is Cᵏ, there exists a neighborhood U of xᵒ and a Cᵏ transformation φ : U → Rⁿ which satisfies: φ(xᵒ) = 0, φ(U ∩ Ω) is contained in Rⁿ₊, and φ(U ∩ ∂Ω) lies on the hyperplane xₙ = 0. Then there exists r_o > 0 such that B_{r_o}(0) ⊂⊂ φ(U). Choose r_o sufficiently small so that D_{xᵒ} := φ⁻¹(B_{r_o}(0)) ⊂ O; D_{xᵒ} is an open set containing xᵒ, and φ ∈ Cᵏ(D_{xᵒ}).

All such D_{xᵒ}, together with Ω, form an open covering of Ω̄; hence there exists a finite subcovering D₀ := Ω, D₁, ⋯, D_m and a corresponding partition of unity η₀, η₁, ⋯, η_m such that

η_i ∈ C₀∞(D_i), i = 0, 1, ⋯, m, and Σ_{i=0}^{m} η_i(x) = 1 ∀ x ∈ Ω̄.

Let φ_i be the mapping associated with D_i as described above, and let ũ_i(y) = u(φ_i⁻¹(y)) be the function defined on the half ball B_{r_i}⁺(0) = φ_i(D_i ∩ Ω̄). From Step 1, each ũ_i(y) can be extended as a Cᵏ function ū_i(y) onto the whole ball B_{r_i}(0). Now we can define the extension of u from Ω to its neighborhood O as

E(u) = Σ_{i=1}^{m} η_i(x) ū_i(φ_i(x)) + η₀(x) u(x).
Obviously, E(u) has compact support in O, E is a linear operator, and

‖E(u)‖_{W^{k,p}(O)} ≤ C ‖u‖_{W^{k,p}(Ω)}.

Moreover, for any x ∈ Ω, we have

E(u)(x) = Σ_{i=1}^{m} η_i(x) ū_i(φ_i(x)) + η₀(x) u(x) = Σ_{i=1}^{m} η_i(x) ũ_i(φ_i(x)) + η₀(x) u(x)
= Σ_{i=1}^{m} η_i(x) u(φ_i⁻¹(φ_i(x))) + η₀(x) u(x) = (Σ_{i=0}^{m} η_i(x)) u(x) = u(x).

This completes the proof of the Extension Theorem.

From the Extension Theorem, one can derive immediately

Corollary 1.3.1 Assume that Ω is bounded and ∂Ω is Cᵏ. Let O be an open set containing Ω̄, and let O_ε := {x ∈ Rⁿ : dist(x, O) ≤ ε}. Then the linear operator

J_ε(E(·)) : W^{k,p}(Ω) → C₀∞(O_ε)

is bounded, and for each u ∈ W^{k,p}(Ω),

‖J_ε(E(u)) − u‖_{W^{k,p}(Ω)} → 0, as ε → 0.

Remark 1.3.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem, under a stronger assumption on ∂Ω (Cᵏ instead of C¹). Here, as compared to the previous approximation theorems, the improvement is that one can write out the explicit form of the approximation.

1.4 Sobolev Embeddings

When we seek weak solutions of partial differential equations, we start with functions in a Sobolev space W^{k,p}(Ω). It is then natural to ask whether the functions in this space also automatically belong to some other spaces. The following theorem answers this question and, in the meantime, provides inequalities among the relevant norms.
Theorem 1.4.1 (General Sobolev inequalities). Assume that Ω is bounded and has a C¹ boundary, and let u ∈ W^{k,p}(Ω).

(i) If k < n/p, then u ∈ L^q(Ω) with 1/q = 1/p − k/n, and there exists a constant C such that

‖u‖_{L^q(Ω)} ≤ C ‖u‖_{W^{k,p}(Ω)}.

(ii) If k > n/p, then u ∈ C^{k−[n/p]−1,γ}(Ω̄), and there exists a constant C such that

‖u‖_{C^{k−[n/p]−1,γ}(Ω̄)} ≤ C ‖u‖_{W^{k,p}(Ω)},

where γ = [n/p] + 1 − n/p if n/p is not an integer, and γ is any positive number < 1 if n/p is an integer. Here, [b] is the integer part of the number b.

The proof of the theorem is based upon several simpler theorems. We first consider the functions in W^{1,p}(Ω). From the definition, such functions apparently belong to L^q(Ω) for 1 ≤ q ≤ p. Naturally, one would expect more, and what is more meaningful is to find out how large this q can be. To control the L^q norm for larger q, the W^{1,p} norm, that is, the norm of the derivatives,

‖Du‖_{Lᵖ(Ω)} = (∫_Ω |Du|ᵖ dx)^{1/p},

would be more useful in practice. For simplicity, we start with smooth functions with compact support in Rⁿ. We would like to know for which values of q we can establish the inequality

‖u‖_{L^q(Rⁿ)} ≤ C ‖Du‖_{Lᵖ(Rⁿ)}, u ∈ C₀∞(Rⁿ),  (1.26)

with a constant C independent of u. Now suppose (1.26) holds. Then it must also be true for the rescaled functions u_λ(x) = u(λx), that is,

‖u_λ‖_{L^q(Rⁿ)} ≤ C ‖Du_λ‖_{Lᵖ(Rⁿ)}.  (1.27)

By substitution, we have

∫_{Rⁿ} |u_λ(x)|^q dx = (1/λⁿ) ∫_{Rⁿ} |u(y)|^q dy and ∫_{Rⁿ} |Du_λ|ᵖ dx = (λᵖ/λⁿ) ∫_{Rⁿ} |Du(y)|ᵖ dy.

It follows from (1.27) that

‖u‖_{L^q(Rⁿ)} ≤ C λ^{1 − n/p + n/q} ‖Du‖_{Lᵖ(Rⁿ)}.

In order for C to be independent of u, it is necessary that the power of λ here be zero, that is,

q = np/(n − p).

Here, for q to be positive, we must require p < n.
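The dilation computation above can be replayed symbolically. A minimal sympy sketch follows (the symbol names are our own):

```python
from sympy import symbols, solve, simplify

n, p, q = symbols('n p q', positive=True)

# Under u_lambda(x) = u(lambda x):
#   ||u_lambda||_{L^q}  = lambda^(-n/q)     * ||u||_{L^q},
#   ||Du_lambda||_{L^p} = lambda^(1 - n/p)  * ||Du||_{L^p},
# so a lambda-independent constant forces the total power 1 - n/p + n/q to vanish.
q_star = solve(1 - n/p + n/q, q)[0]
print(q_star)          # equivalent to n*p/(n - p)
```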
It turns out that this condition is also sufficient, as stated in the following theorem.

Theorem 1.4.2 (Gagliardo-Nirenberg-Sobolev inequality). Assume that 1 ≤ p < n. Then there exists a constant C = C(p, n) such that

‖u‖_{L^{p*}(Rⁿ)} ≤ C ‖Du‖_{Lᵖ(Rⁿ)}, u ∈ C₀∞(Rⁿ),  (1.28)

where p* := np/(n − p).

Since any function in W^{1,p}(Rⁿ) can be approached by a sequence of functions in C₀∞(Rⁿ), we derive immediately

Corollary 1.4.1 Inequality (1.28) holds for all functions u in W^{1,p}(Rⁿ).

For functions in W^{1,p}(Ω), we can extend them to be W^{1,p}(Rⁿ) functions by the Extension Theorem and arrive at

Theorem 1.4.3 Assume that Ω is a bounded, open subset of Rⁿ with C¹ boundary. Suppose that 1 ≤ p < n and u ∈ W^{1,p}(Ω). Then u is in L^{p*}(Ω), and there exists a constant C = C(p, n, Ω) such that

‖u‖_{L^{p*}(Ω)} ≤ C ‖u‖_{W^{1,p}(Ω)}.  (1.29)

In the limiting case, as p → n, p* = np/(n − p) → ∞. One may then suspect that u ∈ L∞ when p = n. For n = 1 this is true: from

u(x) = ∫_{−∞}^{x} u′(t) dt,

we derive immediately that

|u(x)| ≤ ∫_{−∞}^{∞} |u′(t)| dt.

Unfortunately, for n > 1 it is false. One counterexample is u = log log(1 + 1/|x|) on Ω = B₁(0): it belongs to W^{1,n}(Ω), but not to L∞(Ω). This is a more delicate situation, and we will deal with it later.
For functions in W^{1,p} with p > n, one would expect W^{1,p} to embed into better spaces. To get some rough idea of what these spaces might be, let us first consider the simplest case, n = 1 and p > 1. For any x, y ∈ R¹ with x < y, we have

u(y) − u(x) = ∫_{x}^{y} u′(t) dt.

Then, by the Hölder inequality,

|u(y) − u(x)| ≤ ∫_{x}^{y} |u′(t)| dt ≤ (∫_{x}^{y} |u′(t)|ᵖ dt)^{1/p} · (∫_{x}^{y} dt)^{1−1/p}.

It follows that

|u(y) − u(x)| / |y − x|^{1−1/p} ≤ (∫_{−∞}^{∞} |u′(t)|ᵖ dt)^{1/p}.

Taking the supremum over all pairs x, y ∈ R¹, one sees that the left hand side of the above inequality is the seminorm of the Hölder space C^{0,γ}(R¹) with γ = 1 − 1/p. This is indeed true in general, and we have

Theorem 1.4.4 (Morrey's inequality). Assume n < p ≤ ∞. Then there exists a constant C = C(p, n) such that

‖u‖_{C^{0,γ}(Rⁿ)} ≤ C ‖u‖_{W^{1,p}(Rⁿ)}, u ∈ C¹(Rⁿ),

where γ = 1 − n/p.

To establish an inequality like (1.28), we follow two basic principles. First, by the Approximation Theorem, we reduce the proof to functions with enough smoothness; secondly, we deal with general domains by extending functions u ∈ W^{1,p}(Ω) to W^{1,p}(Rⁿ) via the Extension Theorem. Often, the proof of an inequality like (1.27) is called a 'hard analysis', and the steps leading from (1.27) to (1.28) are called 'soft analyses'. Theorem 1.4.2 is the key: inequality (1.27) there provides the foundation for the proof of the Sobolev embedding. In the following, we will show the 'soft' parts first and then the 'hard' ones: we will assume that Theorem 1.4.2 is true and derive Corollary 1.4.1, Theorem 1.4.3, and Theorem 1.4.1.

The Proof of Corollary 1.4.1. Given any function u ∈ W^{1,p}(Rⁿ), by the Approximation Theorem, there exists a sequence {u_k} ⊂ C₀∞(Rⁿ) such that ‖u − u_k‖_{W^{1,p}(Rⁿ)} → 0 as k → ∞. Applying Theorem 1.4.2, we obtain

‖u_i − u_j‖_{L^{p*}(Rⁿ)} ≤ C ‖u_i − u_j‖_{W^{1,p}(Rⁿ)} → 0, as i, j → ∞.
i.31) to uγ for a properly chosen γ > 1 to extend the inequality to the case when p > 1. (1. ˜ almost everywhere in Ω. (1.p (Ω) .p every u in W 1.27).p (O) ≤ CC1 u W 1. we prove u Rn n n−1 n−1 n dx ≤ Rn Dudx.p (Rn ) .e.4. We ﬁrst establish the inequality for p = 1.4.3. there exists a constant C1 = C1 (p. by the Extension Theorem (Theorem 1.p (Ω).p (Rn ) =C u W 1. let O be an open set that covers Ω.4. 2. Now for functions in W 1. The Proof of Theorem 1. We need Lemma 1. This completes the proof of the Corollary.6). there exists a function u in W0 (O). m and 1 1 1 + + ··· + = 1.31) Then we will apply (1. · · · . for 1. moreover.1. such that ˜ u = u.2. (1.3. p1 p2 pm m Then u1 u2 · · · um dx ≤ Ω ui pi dx i Ω 1 pi . This completes the proof of the Theorem. Ω.p (Ω) .32) . we ﬁrst extend them to be functions with compact supports in Rn .30) Now we can apply the GagliardoNirenbergSobolev inequality to u to derive ˜ u Lp∗ (Ω) ≤ u ˜ Lp∗ (O) ≤C u ˜ W 1.1 (General H¨lder Inequality) Assume that o ui ∈ Lpi (Ω) for i = 1.4 Sobolev Embeddings 25 Thus {uk } also converges to u in Lp (Rn ). to apply inequality (1. The Proof of Theorem 1. n.p (O) ≤ C1 u W 1. Consequently. such that u ˜ W 1. More precisely. we arrive at u Lp∗ (Rn ) ∗ = lim uk Lp∗ (Rn ) ≤ lim C uk W 1. O).p (Ω).
xn )dyi .31) for n = 2. y2 )dy2 . And it follows that n u(x) n n−1 ∞ ≤ i=1 −∞ Du(x1 . we arrive at (1. −∞ The above two inequalities together imply that ∞ ∞ u(x)2 ≤ −∞ Du(y1 . The case p = 1. (1. we prove 2 u(x)2 dx ≤ R2 R2 Dudx . ∂yi u(x) ≤ −∞ Du(x1 .26 1 Introduction to Sobolev Spaces The proof can be obtained by applying induction to the usual H¨lder inequalo ity for two functions. xn )dyi 1 n−1 . To better illustrate the idea. we have u(x) ≤ ∞ Du(x1 . · · · n. . x2 )dy1 · −∞ Du(x1 . Similarly. · · · . Now. x2 )dy1 . Step 1. xn )dyi . 2. · · · . 2. xi−1 . ∂y1 Du(y1 .33) Since u has a compact support.32). that is. yi . · · · . ∞ xi −∞ ∂u (x1 . yi . we have u(x) = It follows that u(x) ≤ −∞ x1 −∞ ∞ ∂u (y1 . yi .33). Then we deal with the general situation when n > 2. y2 )dy2 . x2 )dy1 . · · · . · · · . We write u(x) = Consequently. i = 1. i = 1. xi+1 . we obtain ∞ u −∞ n n−1 ∞ 1 n−1 ∞ n ∞ 1 n−1 dx1 ≤ −∞ ∞ Dudy1 −∞ i=2 1 n−1 Dudyi −∞ ∞ n ∞ −∞ dx1 1 n−1 ≤ −∞ Dudy1 i=2 −∞ Dudx1 dyi . · · · n. Integrating both sides with respect to x1 and applying the general H¨lder o inequality (1. we ﬁrst derive inequality (1. integrate both sides of the above inequality with respect to x1 and x2 from −∞ to ∞. · · · . Now we are ready to prove the Theorem.
Then integrate the above inequality with respect to x_2. Again, applying the General Hölder Inequality, we have
∫∫ |u|^{n/(n−1)} dx_1 dx_2 ≤ ( ∫∫ |Du| dx_1 dy_2 )^{1/(n−1)} ∫ ( ∫ |Du| dy_1 )^{1/(n−1)} ∏_{i=3}^{n} ( ∫∫ |Du| dx_1 dy_i )^{1/(n−1)} dx_2
≤ ( ∫∫ |Du| dy_1 dx_2 )^{1/(n−1)} ( ∫∫ |Du| dx_1 dy_2 )^{1/(n−1)} ∏_{i=3}^{n} ( ∫∫∫ |Du| dx_1 dx_2 dy_i )^{1/(n−1)}.
Continuing this way by integrating with respect to x_3, ···, x_n, we deduce
∫ ··· ∫ |u|^{n/(n−1)} dx_1 ··· dx_n ≤ ( ∫···∫ |Du| dy_1 dx_2 ··· dx_n )^{1/(n−1)} ( ∫···∫ |Du| dx_1 dy_2 dx_3 ··· dx_n )^{1/(n−1)} × ··· × ( ∫···∫ |Du| dx_1 ··· dx_{n−1} dy_n )^{1/(n−1)}
= ( ∫_{R^n} |Du| dx )^{n/(n−1)}.
This verifies (1.31).

Exercise 1.4.1 Write your own proof with all details for the cases n = 3 and n = 4.

Step 2. The Case p > 1.

Applying (1.31) to the function |u|^γ, with γ > 1 to be chosen later, and by the Hölder inequality, we have
( ∫_{R^n} |u|^{γn/(n−1)} dx )^{(n−1)/n} ≤ ∫_{R^n} |D(|u|^γ)| dx = γ ∫_{R^n} |u|^{γ−1} |Du| dx
≤ γ ( ∫_{R^n} |u|^{γn/(n−1)} dx )^{(γ−1)(n−1)/(γn)} ( ∫_{R^n} |Du|^{γn/(γ+n−1)} dx )^{(γ+n−1)/(γn)}.
It follows that
( ∫_{R^n} |u|^{γn/(n−1)} dx )^{(n−1)/(γn)} ≤ γ ( ∫_{R^n} |Du|^{γn/(γ+n−1)} dx )^{(γ+n−1)/(γn)}.
Now choose γ so that γn/(γ+n−1) = p, that is,
γ = p(n−1)/(n−p).
Then γn/(n−1) = np/(n−p), and we obtain
( ∫_{R^n} |u|^{np/(n−p)} dx )^{(n−p)/(np)} ≤ γ ( ∫_{R^n} |Du|^p dx )^{1/p}.
This completes the proof of the Theorem.

The Proof of Theorem 1.4.4 (Morrey's inequality). We will establish two inequalities:
sup_{R^n} |u| ≤ C ‖u‖_{W^{1,p}(R^n)}  (1.34)
and
sup_{x≠y} |u(x) − u(y)| / |x − y|^{1−n/p} ≤ C ‖Du‖_{L^p(R^n)}.  (1.35)
Both of them can be derived from the following:
(1/|B_r(x)|) ∫_{B_r(x)} |u(y) − u(x)| dy ≤ C ∫_{B_r(x)} |Du(y)| / |x − y|^{n−1} dy,  (1.36)
where |B_r(x)| is the volume of B_r(x). We will carry the proof out in three steps. In Step 1, we prove (1.36), and in Step 2 and Step 3, we verify (1.34) and (1.35), respectively.

Step 1. We start from
u(y) − u(x) = ∫_0^1 (d/dt) u(x + t(y − x)) dt = ∫_0^s Du(x + τω) · ω dτ,
where
ω = (y − x)/|x − y|,  s = |x − y|,
and hence y = x + sω. It follows that
|u(x + sω) − u(x)| ≤ ∫_0^s |Du(x + τω)| dτ.
Integrating both sides with respect to ω on the unit sphere ∂B_1(0), we obtain
∫_{∂B_1(0)} |u(x + sω) − u(x)| dσ ≤ ∫_0^s ∫_{∂B_1(0)} |Du(x + τω)| dσ dτ
= ∫_0^s ∫_{∂B_1(0)} ( |Du(x + τω)| / τ^{n−1} ) τ^{n−1} dσ dτ
= ∫_{B_s(x)} |Du(z)| / |x − z|^{n−1} dz,
where in the last step we converted the integral on the right hand side from polar to rectangular coordinates. Multiplying both sides by s^{n−1}, integrating with respect to s from 0 to r, and taking into account that the integrand on the right hand side is nonnegative, we arrive at
∫_{B_r(x)} |u(y) − u(x)| dy ≤ (r^n/n) ∫_{B_r(x)} |Du(y)| / |x − y|^{n−1} dy.
This verifies (1.36).

Step 2. For each fixed x ∈ R^n, we have
|u(x)| = (1/|B_1(x)|) ∫_{B_1(x)} |u(x)| dy
≤ (1/|B_1(x)|) ∫_{B_1(x)} |u(x) − u(y)| dy + (1/|B_1(x)|) ∫_{B_1(x)} |u(y)| dy
= I_1 + I_2.  (1.37)
By (1.36) and the Hölder inequality, we deduce
I_1 ≤ C ∫_{B_1(x)} |Du(y)| / |x − y|^{n−1} dy
≤ C ( ∫_{R^n} |Du|^p dy )^{1/p} ( ∫_{B_1(x)} dy / |x − y|^{(n−1)p/(p−1)} )^{(p−1)/p}
≤ C_1 ( ∫_{R^n} |Du|^p dy )^{1/p}.  (1.38)
Here we have used the condition that p > n, so that (n−1)p/(p−1) < n, and hence the integral
∫_{B_1(x)} dy / |x − y|^{(n−1)p/(p−1)}
is finite. Also it is obvious that
I_2 ≤ C ‖u‖_{L^p(R^n)}.  (1.39)
Now (1.34) is an immediate consequence of (1.37), (1.38), and (1.39).
Step 3. For any pair of fixed points x and y in R^n, let r = |x − y| and D = B_r(x) ∩ B_r(y). Then
|u(x) − u(y)| ≤ (1/|D|) ∫_D |u(x) − u(z)| dz + (1/|D|) ∫_D |u(z) − u(y)| dz.  (1.40)
Again by (1.36), we have
(1/|D|) ∫_D |u(x) − u(z)| dz ≤ C (1/|B_r(x)|) ∫_{B_r(x)} |u(x) − u(z)| dz
≤ C ( ∫_{R^n} |Du|^p dz )^{1/p} ( ∫_{B_r(x)} dz / |x − z|^{(n−1)p/(p−1)} )^{(p−1)/p}
= C r^{1−n/p} ‖Du‖_{L^p(R^n)}.  (1.41)
Similarly,
(1/|D|) ∫_D |u(z) − u(y)| dz ≤ C r^{1−n/p} ‖Du‖_{L^p(R^n)}.  (1.42)
Now (1.40), (1.41), and (1.42) yield
|u(x) − u(y)| ≤ C |x − y|^{1−n/p} ‖Du‖_{L^p(R^n)}.
This implies (1.35) and thus completes the proof of the Theorem.

The Proof of Theorem 1.4.1 (the General Sobolev Inequality).
(i) Assume that u ∈ W^{k,p}(Ω) with k < n/p. We want to show that u ∈ L^q(Ω) and
‖u‖_{L^q(Ω)} ≤ C ‖u‖_{W^{k,p}(Ω)},  (1.43)
with 1/q = 1/p − k/n. This can be done by applying Theorem 1.4.3 successively on the integer k. Again denote p* = np/(n−p). For |α| ≤ k−1, we have D^α u ∈ W^{1,p}(Ω). By Theorem 1.4.3, D^α u ∈ L^{p*}(Ω), thus u ∈ W^{k−1,p*}(Ω) and
‖u‖_{W^{k−1,p*}(Ω)} ≤ C_1 ‖u‖_{W^{k,p}(Ω)}.
Applying Theorem 1.4.3 again to W^{k−1,p*}(Ω), we have u ∈ W^{k−2,p**}(Ω) and
‖u‖_{W^{k−2,p**}(Ω)} ≤ C_2 ‖u‖_{W^{k−1,p*}(Ω)} ≤ C_2 C_1 ‖u‖_{W^{k,p}(Ω)},
where p** = np*/(n−p*), or
1/p** = 1/p* − 1/n = (1/p − 1/n) − 1/n = 1/p − 2/n.
Continuing this way k times, we arrive at (1.43).
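The one-dimensional instance of Morrey's estimate (1.35) can be checked numerically: for n = 1, p = 2 one has |u(x) − u(y)| ≤ |x − y|^{1/2} ‖u′‖_{L^2}, which follows from Cauchy-Schwarz applied to u(x) − u(y) = ∫_y^x u′(t) dt. The sample function and grid below are our own illustrative choices.

```python
import numpy as np

# Check |u(x) - u(y)| <= |x - y|^{1/2} * ||u'||_{L^2} for random point pairs.
t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
u = np.sin(2 * np.pi * t) * np.exp(-t)        # a smooth sample function
du = np.gradient(u, dt)
du_l2 = np.sqrt(np.trapz(du**2, dx=dt))       # ||u'||_{L^2}

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    i, j = rng.integers(0, t.size, 2)
    if i != j:
        holder = abs(u[i] - u[j]) / np.sqrt(abs(t[i] - t[j]))
        ok = ok and (holder <= du_l2 + 1e-8)
print(ok)
```

Since the inequality is a theorem (with constant 1 in this case), the check should pass for any smooth sample function.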
(ii) Now assume that k > n/p. Recall that in Morrey's inequality
‖u‖_{C^{0,γ}(Ω)} ≤ C ‖u‖_{W^{1,p}(Ω)},  (1.44)
we require that p > n. However, in our situation, this condition is not necessarily met. To remedy this, we can first decrease the order of differentiation to increase the power of summability; that is, we can use the result in the previous step:
W^{k,p}(Ω) → W^{k−m,q}(Ω), with 1/q = 1/p − m/n.
More precisely, we will try to find a smallest integer m, such that
q = np/(n − mp) > n,
and equivalently, m > n/p − 1. Obviously, when n/p is not an integer, the smallest such integer is m = [n/p]. For this choice of m, we can apply Morrey's inequality (1.44) to D^α u, with any |α| ≤ k − m − 1, to obtain
‖D^α u‖_{C^{0,γ}(Ω)} ≤ C_1 ‖D^α u‖_{W^{1,q}(Ω)},
or equivalently,
‖u‖_{C^{k−[n/p]−1,γ}(Ω)} ≤ C_1 C_2 ‖u‖_{W^{k,p}(Ω)}.
Here,
γ = 1 − n/q = 1 + [n/p] − n/p,
when n/p is not an integer. While when n/p is an integer, by the previous step one reaches a space W^{k−m,q}(Ω) in which q can be any number > n, which implies that γ can be any positive number < 1. This completes the proof of the Theorem.
1.5 Compact Embedding

In the previous section, we proved that W^{1,p}(Ω) is embedded into L^{p*}(Ω) with p* = np/(n−p). Actually, if the region Ω is bounded, one can expect more: for q < p*, due to the strict inequality on the exponent, the embedding of W^{1,p}(Ω) into L^q(Ω) is compact, as stated below.

Theorem 1.5.1 (Rellich-Kondrachov Compact Embedding). Assume that Ω is a bounded open subset in R^n with C^1 boundary ∂Ω. Suppose 1 ≤ p < n. Then for each 1 ≤ q < p*, W^{1,p}(Ω) is compactly imbedded into L^q(Ω):
W^{1,p}(Ω) →→ L^q(Ω),
in the sense that
i) there is a constant C, such that
‖u‖_{L^q(Ω)} ≤ C ‖u‖_{W^{1,p}(Ω)}, ∀ u ∈ W^{1,p}(Ω);  (1.45)
and
ii) every bounded sequence in W^{1,p}(Ω) possesses a convergent subsequence in L^q(Ω).

Proof. The first part, inequality (1.45), is included in the general Embedding Theorem. What we need to show is that if {u_k} is a bounded sequence in W^{1,p}(Ω), then it possesses a convergent subsequence in L^q(Ω).
This can be derived immediately from the following

Lemma 1.5.1 Every bounded sequence in W^{1,1}(Ω) possesses a convergent subsequence in L^1(Ω).

We postpone the proof of the Lemma for a moment and see how it implies the Theorem. Assume that {u_k} is a bounded sequence in W^{1,p}(Ω). Then there exists a subsequence (still denoted by {u_k}) which converges weakly to an element u_o in W^{1,p}(Ω). Since p ≥ 1 and Ω is bounded, {u_k} is also bounded in W^{1,1}(Ω). By Lemma 1.5.1, there is a subsequence (still denoted by {u_k}) that converges strongly to u_o in L^1(Ω). On the other hand, by the Sobolev embedding, {u_k} is bounded in L^{p*}(Ω). Applying the Hölder inequality
‖u_k − u_o‖_{L^q(Ω)} ≤ ‖u_k − u_o‖^θ_{L^1(Ω)} ‖u_k − u_o‖^{1−θ}_{L^{p*}(Ω)},
we conclude immediately that {u_k} converges strongly to u_o in L^q(Ω). This proves the Theorem.

We now come back to prove the Lemma. We will show the strong convergence of the sequence in three steps, with the help of a family of mollifiers
u_k^ε(x) = ∫_Ω j_ε(y) u_k(x − y) dy.
First, based on the Extension Theorem, we may assume, without loss of generality, that Ω = R^n and that the functions {u_k} all have compact support in a bounded open set G ⊂ R^n, with
‖u_k‖_{W^{1,1}(G)} ≤ C < ∞.  (1.46)
Since every W^{1,1} function can be approached by a sequence of smooth functions, we may also assume that each u_k is smooth. In Steps 1 and 2 we study the mollified sequence, and finally, in Step 3, we extract diagonally a subsequence of {u_k} which converges strongly in L^1(Ω).

Step 1. We show that
u_k^ε → u_k in L^1(Ω), as ε → 0, uniformly in k.  (1.47)
From the property of mollifiers, we have
u_k^ε(x) − u_k(x) = ∫_{B_1(0)} j(y) [u_k(x − εy) − u_k(x)] dy
= ∫_{B_1(0)} j(y) ∫_0^1 (d/dt) u_k(x − εty) dt dy
= −ε ∫_{B_1(0)} j(y) ∫_0^1 Du_k(x − εty) · y dt dy.  (1.48)
Integrating with respect to x and changing the order of integration, we obtain
‖u_k^ε − u_k‖_{L^1(G)} ≤ ε ∫_{B_1(0)} j(y) ∫_0^1 ( ∫_G |Du_k(x − εty)| dx ) dt dy
≤ ε ∫_G |Du_k(z)| dz ≤ ε ‖u_k‖_{W^{1,1}(G)}.
It follows that
‖u_k^ε − u_k‖_{L^1(G)} → 0, as ε → 0, uniformly in k.  (1.49)

Step 2. We show that, for each fixed ε > 0, there is a subsequence of {u_k^ε} which converges uniformly. From the property of mollifiers, for all x ∈ R^n and for all k = 1, 2, ···, we have
|u_k^ε(x)| ≤ ∫_{B_ε(x)} j_ε(x − y) |u_k(y)| dy ≤ ‖j_ε‖_{L^∞(R^n)} ‖u_k‖_{L^1(G)} ≤ C/ε^n < ∞.  (1.50)
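Step 1 above can be illustrated numerically: the L^1 distance between a W^{1,1} function and its mollification shrinks as ε → 0. The bump kernel, the hat function, and the grid below are our own illustrative choices.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
u = np.clip(1.0 - np.abs(x), 0.0, None)           # a W^{1,1} hat function

def mollify(u, eps):
    """Convolve u with the standard bump j_eps supported in |t| < eps."""
    m = int(round(eps / dx))
    t = (np.arange(2 * m + 1) - m) * dx           # symmetric grid on [-eps, eps]
    j = np.where(np.abs(t) < eps,
                 np.exp(-1.0 / np.clip(1.0 - (t / eps)**2, 1e-12, None)), 0.0)
    j /= j.sum() * dx                             # normalize so that ∫ j = 1
    return np.convolve(u, j, mode="same") * dx

errs = [np.trapz(np.abs(mollify(u, e) - u), dx=dx) for e in (0.4, 0.2, 0.1, 0.05)]
print(all(a > b for a, b in zip(errs, errs[1:])))  # errors shrink as eps -> 0
```

The decrease reflects the bound ‖u^ε − u‖_{L^1} ≤ ε ‖u‖_{W^{1,1}} derived in (1.48)-(1.49).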
Similarly, for each fixed ε > 0,
|Du_k^ε(x)| ≤ C/ε^{n+1} < ∞.  (1.51)
Therefore, (1.50) and (1.51) imply that, for each fixed ε > 0, the sequence {u_k^ε} is uniformly bounded and equicontinuous. By the Arzela-Ascoli Theorem (see the Appendix), it possesses a subsequence (still denoted by {u_k^ε}) which converges uniformly on G.  (1.52)

Step 3. Now choose ε to be 1, 1/2, 1/3, ···, 1/k, ··· successively, and denote the corresponding subsequences that satisfy (1.52) by
u_{k1}, u_{k2}, u_{k3}, ···.
Pick the diagonal subsequence {u_{kk}} ⊂ {u_k}. This is our desired subsequence, because
‖u_{ii} − u_{jj}‖_{L^1(G)} ≤ ‖u_{ii} − u_{ii}^{1/i}‖_{L^1(G)} + ‖u_{ii}^{1/i} − u_{jj}^{1/j}‖_{L^1(G)} + ‖u_{jj}^{1/j} − u_{jj}‖_{L^1(G)} → 0,
as i, j → ∞, due to the fact that each of the norms on the right hand side → 0; in particular,
lim_{j→∞, i→∞} ‖u_{jj} − u_{ii}‖_{L^1(G)} = 0.
This completes the proof of the Lemma and hence the Theorem.

1.6 Other Basic Inequalities

1.6.1 Poincaré's Inequality

For functions that vanish on the boundary, we have

Theorem 1.6.1 (Poincaré's Inequality I). Assume Ω is bounded. Suppose u ∈ W_0^{1,p}(Ω) for some 1 ≤ p ≤ ∞. Then
‖u‖_{L^p(Ω)} ≤ C ‖Du‖_{L^p(Ω)}.  (1.53)
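Poincaré's first inequality can be checked numerically in one dimension: on Ω = (0, 1) with p = 2, the sharp constant is 1/π, attained by sin(πx). The sample functions below are our own choices.

```python
import numpy as np

# Check ||u||_2 <= C ||u'||_2 on (0,1) for functions vanishing at the boundary.
x = np.linspace(0.0, 1.0, 5001)
dx = x[1] - x[0]

def ratio(u):
    """||u||_{L^2} / ||u'||_{L^2}; Poincaré bounds this by the constant C."""
    du = np.gradient(u, dx)
    return np.sqrt(np.trapz(u**2, dx=dx) / np.trapz(du**2, dx=dx))

samples = [np.sin(np.pi * x), x * (1.0 - x), np.sin(3 * np.pi * x)]
ratios = [ratio(u) for u in samples]
print(max(ratios) <= 1.0 / np.pi + 1e-3)
```

The maximal ratio is attained (up to discretization error) by the first eigenfunction sin(πx), consistent with the sharp constant 1/π on this interval.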
Remark 1.6.1 i) Based on this inequality, one can take an equivalent norm of W_0^{1,p}(Ω) as
‖u‖_{W_0^{1,p}(Ω)} = ‖Du‖_{L^p(Ω)}.
ii) One can easily prove this theorem by using the Sobolev inequality ‖u‖_{L^p} ≤ C ‖Du‖_{L^q}, with p = nq/(n−q), and the Hölder inequality. The one we present in the following is a unified proof that works for both this and the next theorem. Just for this proof, for convenience, we abbreviate ‖u‖_{L^p(Ω)} as ‖u‖_p.

Proof. Suppose inequality (1.53) does not hold. Then there exists a sequence {u_k} ⊂ W_0^{1,p}(Ω), such that
‖Du_k‖_p = 1, while ‖u_k‖_p → ∞.
Since C_0^∞(Ω) is dense in W_0^{1,p}(Ω), we may assume that {u_k} ⊂ C_0^∞(Ω). Let v_k = u_k / ‖u_k‖_p. Then
‖v_k‖_p = 1 and ‖Dv_k‖_p → 0, as k → ∞.
Consequently, {v_k} is bounded in W^{1,p}(Ω) and hence, from the compact embedding results in the previous section, possesses a subsequence (still denoted by {v_k}) that converges weakly in W^{1,p}(Ω), and strongly in L^p(Ω), to some v_o ∈ W^{1,p}(Ω); therefore
‖v_o‖_p = 1.  (1.54)
On the other hand, for each φ ∈ C_0^∞(Ω),
∫_Ω v_o φ_{x_i} dx = lim_{k→∞} ∫_Ω v_k φ_{x_i} dx = − lim_{k→∞} ∫_Ω v_{k,x_i} φ dx = 0.
It follows that
Dv_o(x) = 0, a.e. in Ω.
Thus v_o is a constant. Taking into account that v_k ∈ C_0^∞(Ω), we must have v_o ≡ 0. This contradicts (1.54) and therefore completes the proof of the Theorem.

For functions in W^{1,p}(Ω), which may not be zero on the boundary, we have another version of Poincaré's inequality.

Theorem 1.6.2 (Poincaré's Inequality II). Let Ω be a bounded, connected, and open subset in R^n with C^1 boundary. Assume 1 ≤ p ≤ ∞. Let ū be the average of u on Ω. Then there exists a constant C = C(n, p, Ω), such that
‖u − ū‖_p ≤ C ‖Du‖_p, ∀ u ∈ W^{1,p}(Ω).  (1.55)
The proof of this Theorem is similar to the previous one. Instead of letting v_k = u_k / ‖u_k‖_p, we choose
v_k = (u_k − ū_k) / ‖u_k − ū_k‖_p.

Remark 1.6.2 The connectedness of Ω is essential in this version of the inequality. A simple counterexample is when n = 1, Ω = [0,1] ∪ [2,3], and
u(x) = −1 for x ∈ [0,1], and u(x) = 1 for x ∈ [2,3].

1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality

Theorem 1.6.3 (Hardy-Littlewood-Sobolev Inequality). Let 0 < λ < n and let s, r > 1 be such that
1/r + 1/s + λ/n = 2.
Assume that f ∈ L^r(R^n) and g ∈ L^s(R^n). Then
∫_{R^n} ∫_{R^n} f(x) |x − y|^{−λ} g(y) dx dy ≤ C(n, s, λ) ‖f‖_r ‖g‖_s,  (1.56)
where ‖f‖_r := ‖f‖_{L^r(R^n)}, and
C(n, s, λ) = ( n |B^n|^{λ/n} / ((n−λ) r s) ) { ( (λ/n) / (1 − 1/r) )^{λ/n} + ( (λ/n) / (1 − 1/s) )^{λ/n} },
with |B^n| being the volume of the unit ball in R^n.

Proof. Without loss of generality, we may assume that both f and g are nonnegative and that ‖f‖_r = 1 = ‖g‖_s. Let
χ_G(x) = 1 if x ∈ G, and 0 if x ∉ G,
be the characteristic function of the set G. Then one can see obviously that
f(x) = ∫_0^∞ χ_{{f>a}}(x) da,  (1.57)
g(x) = ∫_0^∞ χ_{{g>b}}(x) db,  (1.58)
|x|^{−λ} = λ ∫_0^∞ c^{−λ−1} χ_{{|x|<c}}(x) dc.  (1.59)
To see the last identity, one may first write
|x|^{−λ} = ∫_0^∞ χ_{{|x|^{−λ} > c̃}}(x) dc̃,
and then let c̃ = c^{−λ}. Let
v(a) = ∫_{R^n} χ_{{f>a}}(x) dx,  w(b) = ∫_{R^n} χ_{{g>b}}(y) dy,  (1.60)
the measures of the sets {x | f(x) > a} and {y | g(y) > b}, respectively, and let
u(c) = |B^n| c^n,
the volume of the ball of radius c. Then we can express the norms as
‖f‖_r^r = r ∫_0^∞ a^{r−1} v(a) da = 1 and ‖g‖_s^s = s ∫_0^∞ b^{s−1} w(b) db = 1.  (1.61)
Substituting (1.57), (1.58), and (1.59) into the left hand side of (1.56), we have
I := ∫_{R^n} ∫_{R^n} f(x) |x − y|^{−λ} g(y) dx dy
= λ ∫_0^∞ ∫_0^∞ ∫_0^∞ c^{−λ−1} ( ∫_{R^n} ∫_{R^n} χ_{{f>a}}(x) χ_{{|·|<c}}(x − y) χ_{{g>b}}(y) dx dy ) dc db da
= λ ∫_0^∞ ∫_0^∞ ∫_0^∞ c^{−λ−1} I(a, b, c) dc db da.  (1.62)
It is easy to see that
I(a, b, c) ≤ ∫_{R^n} ∫_{R^n} χ_{{|·|<c}}(x − y) χ_{{g>b}}(y) dx dy ≤ ∫_{R^n} u(c) χ_{{g>b}}(y) dy = u(c) w(b).
Similarly, one can show that I(a, b, c) is bounded above by the other pairs, and arrive at
I(a, b, c) ≤ min{ u(c) w(b), u(c) v(a), v(a) w(b) }.
We integrate with respect to c first. Using the above bounds, we have
∫_0^∞ c^{−λ−1} I(a, b, c) dc ≤ ∫_{u(c)≤v(a)} c^{−λ−1} w(b) u(c) dc + ∫_{u(c)>v(a)} c^{−λ−1} w(b) v(a) dc
= w(b) |B^n| ∫_0^{(v(a)/|B^n|)^{1/n}} c^{−λ−1+n} dc + w(b) v(a) ∫_{(v(a)/|B^n|)^{1/n}}^∞ c^{−λ−1} dc
= ( |B^n|^{λ/n} / (n−λ) ) w(b) v(a)^{1−λ/n} + ( |B^n|^{λ/n} / λ ) w(b) v(a)^{1−λ/n}
= ( n |B^n|^{λ/n} / (λ(n−λ)) ) w(b) v(a)^{1−λ/n}.  (1.63)
Exchanging v(a) with w(b) in the second line of (1.63), we also obtain
∫_0^∞ c^{−λ−1} I(a, b, c) dc ≤ ∫_{u(c)≤w(b)} c^{−λ−1} v(a) u(c) dc + ∫_{u(c)>w(b)} c^{−λ−1} v(a) w(b) dc
≤ ( n |B^n|^{λ/n} / (λ(n−λ)) ) v(a) w(b)^{1−λ/n}.  (1.64)
In view of (1.62), (1.63), and (1.64), and splitting the b-integral into two parts, one from 0 to a^{r/s} and the other from a^{r/s} to ∞, we derive
I ≤ ( n |B^n|^{λ/n} / (n−λ) ) { ∫_0^∞ v(a) ( ∫_0^{a^{r/s}} w(b)^{1−λ/n} db ) da + ∫_0^∞ ∫_{a^{r/s}}^∞ v(a)^{1−λ/n} w(b) db da }.  (1.65)
To estimate the first integral in (1.65), we use the Hölder inequality with m = (s−1)(1−λ/n):
∫_0^{a^{r/s}} w(b)^{1−λ/n} b^m b^{−m} db ≤ ( ∫_0^{a^{r/s}} w(b) b^{s−1} db )^{1−λ/n} ( ∫_0^{a^{r/s}} b^{−mn/λ} db )^{λ/n}
≤ ( λ / (n − s(n−λ)) )^{λ/n} ( ∫_0^∞ w(b) b^{s−1} db )^{1−λ/n} a^{r−1},  (1.66)
since mn/λ < 1. It follows from (1.61) that the first integral in (1.65) is bounded above by
( λ / (n − s(n−λ)) )^{λ/n} ∫_0^∞ v(a) a^{r−1} da ( ∫_0^∞ w(b) b^{s−1} db )^{1−λ/n} = (1/(rs)) ( (λ/n) / (1 − 1/r) )^{λ/n}.  (1.67)
To estimate the second integral in (1.65), we first rewrite it as
∫_0^∞ w(b) ∫_0^{b^{s/r}} v(a)^{1−λ/n} da db;
then an analogous computation shows that it is bounded above by
(1/(rs)) ( (λ/n) / (1 − 1/s) )^{λ/n}.  (1.68)
Now the desired Hardy-Littlewood-Sobolev inequality follows directly from (1.65), (1.67), and (1.68).

Theorem 1.6.4 (An equivalent form of the Hardy-Littlewood-Sobolev inequality). Let g ∈ L^{np/(n+αp)}(R^n) for n/(n−α) < p < ∞. Define
Tg(x) = ∫_{R^n} |x − y|^{α−n} g(y) dy.
Then
‖Tg‖_p ≤ C(n, p, α) ‖g‖_{np/(n+αp)}.  (1.69)
Proof. By the classical Hardy-Littlewood-Sobolev inequality, we have
< f, Tg > = < Tf, g > ≤ C(n, s, α) ‖f‖_r ‖g‖_s,
where < ·, · > is the L^2 inner product. Consequently,
‖Tg‖_p = sup_{‖f‖_r = 1} < f, Tg > ≤ C(n, s, α) ‖g‖_s,
where
1/p + 1/r = 1 and 1/r + 1/s = (n+α)/n.
Solving for s, we arrive at
s = np/(n + αp).
This completes the proof of the Theorem.

Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev inequality, let's rewrite it as
‖Tg‖_{nq/(n−αq)} ≤ C ‖g‖_q  (1.70)
with
1 < q := np/(n + αp) < n/α.
Let u = Tg. Then one can show that (see [CLO])
(−Δ)^{α/2} u = g.
Now inequality (1.70) becomes the Sobolev one:
‖u‖_{nq/(n−αq)} ≤ C ‖(−Δ)^{α/2} u‖_q.
2 Existence of Weak Solutions

2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Method of Linear Functional Analysis
2.3.1 Linear Equations
2.3.2 Some Basic Principles in Functional Analysis
2.3.3 Existence of Weak Solutions
2.4 Variational Methods
2.4.1 Semilinear Equations
2.4.2 Calculus of Variations
2.4.3 Existence of Minimizers
2.4.4 Existence of Minimizers under Constraints
2.4.5 Minimax Critical Points
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Many physical models come with natural variational structures where one can minimize a functional, usually an energy E(u). Then the minimizers are weak solutions of the related partial differential equation E′(u) = 0. For example, for an open bounded domain Ω ⊂ R^n, and for f ∈ L^2(Ω), the functional
E(u) = (1/2) ∫_Ω |∇u|^2 dx − ∫_Ω f u dx
possesses a minimizer among all u ∈ H^1_0(Ω) (try to show this by using the knowledge from the previous chapter). As we will see in this chapter, the minimizer is a weak solution of the Dirichlet boundary value problem
−Δu = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.

In practice, it is often much easier to obtain a weak solution than a classical one. One of the seven millennium problems posed by the Clay Mathematics Institute is to show that suitable weak solutions of the Navier-Stokes equations with good initial values are in fact classical solutions for all time. In this chapter, we consider the existence of weak solutions to second order elliptic equations, both linear and semilinear. For linear equations, we will use the Lax-Milgram Theorem to show the existence of weak solutions. For semilinear equations, we will introduce variational methods. We will consider the corresponding functionals in proper Hilbert spaces and seek weak solutions of the equations as the critical points of the functionals. We will use examples to show how to find a minimizer, a minimizer under constraint, and a minimax critical point. The well-known Mountain Pass Theorem is introduced and applied to show the existence of such a minimax. To show that a weak solution possesses the desired differentiability, so that it is actually a classical one, is the task of regularity theory, which will be studied in Chapter 3.
2.1 Second Order Elliptic Operators

Let Ω be an open bounded set in R^n. Let a^{ij}(x), b^i(x), and c(x) be bounded functions on Ω with a^{ij}(x) = a^{ji}(x). Consider the second order partial differential operator L, either in the divergence form
Lu = − Σ_{i,j=1}^n (a^{ij}(x) u_{x_i})_{x_j} + Σ_{i=1}^n b^i(x) u_{x_i} + c(x) u,  (2.1)
or in the nondivergence form
Lu = − Σ_{i,j=1}^n a^{ij}(x) u_{x_i x_j} + Σ_{i=1}^n b^i(x) u_{x_i} + c(x) u.  (2.2)
We say that L is uniformly elliptic if there exists a constant δ > 0, such that
Σ_{i,j=1}^n a^{ij}(x) ξ_i ξ_j ≥ δ |ξ|^2
for almost every x ∈ Ω and for all ξ ∈ R^n. In the simplest case, when a^{ij}(x) = δ_{ij}, b^i(x) ≡ 0, and c(x) ≡ 0, L reduces to −Δ; and, as we will see later, the solutions of the general second order elliptic equation Lu = 0 share many similar properties with the harmonic functions.
2.2 Weak Solutions
Let the operator L be in the divergence form as defined in (2.1). Consider the second order elliptic equation with Dirichlet boundary condition
Lu = f, x ∈ Ω;  u = 0, x ∈ ∂Ω.  (2.3)
When f = f(x), the equation is linear; and when f = f(x, u), it is semilinear. Assume that f(x) is in L^2(Ω), or, for a given solution u, that f(x, u) is in L^2(Ω).

Multiplying both sides of the equation by a test function v ∈ C_0^∞(Ω), and integrating by parts on Ω, we have
∫_Ω [ Σ_{i,j=1}^n a^{ij}(x) u_{x_i} v_{x_j} + Σ_{i=1}^n b^i(x) u_{x_i} v + c(x) u v ] dx = ∫_Ω f v dx.  (2.4)
There are no boundary terms because both u and v vanish on ∂Ω. One can see that the integrals in (2.4) are well defined if u, v, and their first derivatives are square integrable. Write H^1(Ω) = W^{1,2}(Ω), and let H^1_0(Ω) be the completion of C_0^∞(Ω) in the norm of H^1(Ω):
‖u‖ = ( ∫_Ω (|Du|^2 + u^2) dx )^{1/2}.
Then (2.4) remains true for any v ∈ H^1_0(Ω). One can easily see that H^1_0(Ω) is also the most appropriate space for u. On the other hand, if u ∈ H^1_0(Ω) satisfies (2.4) and is second order differentiable, then through integration by parts, we have
∫_Ω (Lu − f) v dx = 0, ∀ v ∈ H^1_0(Ω).
This implies that
Lu = f, ∀ x ∈ Ω,
and therefore u is a classical solution of (2.3). The above observation naturally leads to

Definition 2.2.1 A weak solution of problem (2.3) is a function u ∈ H^1_0(Ω) that satisfies (2.4) for all v ∈ H^1_0(Ω).

Remark 2.2.1 Actually, here the condition on f can be relaxed a little bit. By the Sobolev embedding
H^1(Ω) → L^{2n/(n−2)}(Ω),
we see that v is in L^{2n/(n−2)}(Ω), and hence, by duality, f only needs to be in L^{2n/(n+2)}(Ω). In the semilinear case, if |f(x, u)| ≤ C_1 + C_2 |u|^{(n+2)/(n−2)}, or more generally f(x, u) = u^p for any p ≤ (n+2)/(n−2), then for any u ∈ H^1(Ω), f(x, u) is in L^{2n/(n+2)}(Ω).

2.3 Methods of Linear Functional Analysis

2.3.1 Linear Equations

For the Dirichlet problem with linear equation
Lu = f(x), x ∈ Ω;  u = 0, x ∈ ∂Ω,  (2.5)
to find its weak solutions, we introduce the bilinear form
B[u, v] = ∫_Ω [ Σ_{i,j=1}^n a^{ij}(x) u_{x_i} v_{x_j} + Σ_{i=1}^n b^i(x) u_{x_i} v + c(x) u v ] dx,
defined on H^1_0(Ω), which is the left hand side of (2.4) in the definition of the weak solutions. The right hand side of (2.4) may be regarded as a linear functional on H^1_0(Ω):
< f, v > := ∫_Ω f v dx.
Now, to find a weak solution of (2.5), it is equivalent to show that there exists a function u ∈ H^1_0(Ω), such that
B[u, v] = < f, v >, ∀ v ∈ H^1_0(Ω).  (2.6)
This can be realized by representation type theorems in Linear Functional Analysis, such as the Riesz Representation Theorem or the Lax-Milgram Theorem. For more details, please see [Sch1].

2.3.2 Some Basic Principles in Functional Analysis

Here we list briefly some basic definitions and principles of functional analysis, which will be used to obtain the existence of weak solutions.

Definition 2.3.1 A Banach space X is a complete, normed linear space with norm ‖·‖ that satisfies
(i) ‖u + v‖ ≤ ‖u‖ + ‖v‖, ∀ u, v ∈ X,
(ii) ‖λu‖ = |λ| ‖u‖, ∀ u ∈ X, λ ∈ R,
(iii) ‖u‖ = 0 if and only if u = 0.
.3.p (Ω) . and (iii) H¨lder spaces C 0. and 1 (ii) Sobolev spaces H 1 (Ω) or H0 (Ω) with inner product (u.3 Methods of Linear Functional Analysis 45 The examples of Banach spaces are: (i) Lp (Ω) with norm u Lp (Ω) . Lemma 2. u). ∀u ∈ H. v ∈ X. We use < f. Examples of Hilbert spaces are (i) L2 (Ω) with inner product (u. ∀w ∈ M. ∀u.3.γ (Ω) .γ (Ω) with norm u C 0. (ii) Sobolev spaces W k. o Deﬁnition 2. b ∈ R1 . (ii) the mapping u → (u.3.2 A Hilbert space H is a Banach space endowed with an inner product (·. v) = Ω [u(x)v(x) + Du(x) · Dv(x)]dx.3 (i) A mapping A : X→Y is a linear operator if A[au + bv] = aA[u] + bA[u] ∀ u. u)1/2 with the following properties (i) (u. u) = 0 if and only if u = 0. Then for every u ∈ H. ·) which generates the norm u := (u.1 (Projection). u >:= f [u] to denote the pairing of X ∗ and X. Let M be a closed subspace of H. v) = Ω u(x)v(x)dx.p (Ω) with norm u W k. H = M ⊕ M ⊥ := {u ∈ H  u⊥M }.2. (ii) A linear operator A : X→Y is bounded provided A := sup u X ≤1 Au Y < ∞. a. is called the dual space of X. v ∈ H. u) ≥ 0. such that (u − v. w) = 0. denoted by X ∗ . there is a v ∈ M . Deﬁnition 2. v) is linear for each v ∈ H. (iii) A bounded linear operator f : X→R1 is called a bounded linear functional on X. and (iv) (u. The collection of all bounded linear functionals on X. (2. (iii) (u. v) = (v.7) In other words.
Here v is the projection of u on M. The readers may see the book [Sch1] for its proof, or do it as the following exercise.

Exercise 2.3.1 i) Given any u ∈ H with u ∉ M, show that there exists a v_o ∈ M such that
‖v_o − u‖ = inf_{v ∈ M} ‖v − u‖.
Hint: Show that a minimizing sequence is a Cauchy sequence.
ii) Show that (u − v_o, w) = 0, ∀ w ∈ M.

Based on this Projection Lemma, one can prove

Theorem 2.3.1 (Riesz Representation Theorem). Let H be a real Hilbert space with inner product (·,·), and let H* be its dual space. Then for each u* ∈ H*, there exists a unique element u ∈ H, such that
< u*, v > = (u, v), ∀ v ∈ H.
The mapping u* → u is a linear isomorphism from H* onto H; that is, H* can be canonically identified with H.

This is a well-known theorem in functional analysis. The readers may find the proof in the book [Sch] (Theorem 13), or do it as in the following exercise.

Exercise 2.3.2 Prove the Riesz Representation Theorem in 4 steps. Let T be a linear operator from H to R^1 with T ≢ 0. Show that
i) K(T) := {u ∈ H | Tu = 0} is closed;
ii) H = K(T) ⊕ K(T)^⊥;
iii) the dimension of K(T)^⊥ is 1;
iv) there exists u ∈ H, such that Tv = (u, v), ∀ v ∈ H.

From the Riesz Representation Theorem, one can derive the following

Theorem 2.3.2 (Lax-Milgram Theorem). Let H be a real Hilbert space with norm ‖·‖. Let B : H × H → R^1 be a bilinear mapping. Assume that there exist constants M, m > 0, such that
(i) |B[u, v]| ≤ M ‖u‖ ‖v‖, ∀ u, v ∈ H, and
(ii) m ‖u‖^2 ≤ B[u, u], ∀ u ∈ H.
Then for each bounded linear functional f on H, there exists a unique element u ∈ H, such that
B[u, v] = < f, v >, ∀ v ∈ H.
Proof. If B[u, v] is symmetric, i.e. B[u, v] = B[v, u], then the conditions (i) and (ii) ensure that B[u, v] can be made an inner product on H, and the conclusion of the theorem follows directly from the Riesz Representation Theorem. In the case where B[u, v] may not be symmetric, we proceed as follows.

For each fixed element u ∈ H, by condition (i), B[u, ·] is a bounded linear functional on H; hence, by the Riesz Representation Theorem, there exists a unique element ũ ∈ H, such that
B[u, v] = (ũ, v), ∀ v ∈ H.  (2.8)
Denote this mapping from u to ũ by A:
A : H → H, Au = ũ.  (2.9)
On the other hand, for a given bounded linear functional f on H, again by the Riesz Representation Theorem, there is an f̃ ∈ H, such that
< f, v > = (f̃, v), ∀ v ∈ H.  (2.10)
From (2.8), (2.9), and (2.10), it suffices to show that for each f̃ ∈ H, there exists a unique u ∈ H, such that
Au = f̃.
We carry this out in two parts. In part (a), we show that A is a bounded linear operator, and in part (b), we prove that it is one-to-one and onto.

(a) For any u_1, u_2 ∈ H, any a_1, a_2 ∈ R^1, and each v ∈ H, we have
(A(a_1 u_1 + a_2 u_2), v) = B[(a_1 u_1 + a_2 u_2), v] = a_1 B[u_1, v] + a_2 B[u_2, v]
= a_1 (Au_1, v) + a_2 (Au_2, v) = (a_1 Au_1 + a_2 Au_2, v).
Hence A(a_1 u_1 + a_2 u_2) = a_1 Au_1 + a_2 Au_2, i.e. A is linear. Moreover, by condition (i),
‖Au‖^2 = (Au, Au) = B[u, Au] ≤ M ‖u‖ ‖Au‖,
and consequently
‖Au‖ ≤ M ‖u‖, ∀ u ∈ H.  (2.11)
This verifies that A is bounded.

(b) To see that A is one-to-one, we apply condition (ii):
m ‖u‖^2 ≤ B[u, u] = (Au, u) ≤ ‖Au‖ ‖u‖,
o . such that Auk = vk . And by virtue of the H¨lder inequality.11). uk is a Cauchy sequence in H. v) := Ω (DuDv + uv)dx. then from (2. such that 0 = (u − v. there is a uk . v)H := Ω DuDvdx and the corresponding norm u := Ω H Du2 dx. This completes the proof of the Theorem. by the Sobolev Embedding.12) This implies u = v. Hence A is onetoone. Therefore R(A) = H. A(u − v)) = B[(u − v). and hence uk →u for some u ∈ H. i.12) implies that ui − uj ≤ C vi − vj . our second order elliptic operator L would satisfy the conditions (i) and (ii) requested by the LaxMilgram Theorem. by (2. by the Projection Lemma. Then for each vk .e. assume that vk ∈ R(A) and vk →v in H. the range of A in H is closed. R(A) is closed.12). By Poincar´ inequality. one can also see that R(A). 2n For any v ∈ H. v is in L n−2 (Ω). Inequality (2.12). 2.1. u = v ∈ R(A) and hence H ⊂ R(A). From (2.3.48 2 Existence of Weak Solutions and it follows that m u ≤ Au . For any u ∈ H. there exists a v ∈ R(A).3 Existence of Weak Solutions Now we explore that in what situations. Consequently. we can use the equivalent inner product e (u. we need the Projection Lemma 2. To show that R(A) = H.3. Auk →Au and v = Au ∈ H. Now. (u − v)] ≥ m u − v 2 . Therefore. 1 Let H = H0 (Ω) be our Hilbert space with inner product (u. In fact. (2. Now if Au = Av. m u − v ≤ Au − Av .
< f, v > := ∫_Ω f(x) v(x) dx ≤ ( ∫_Ω |f(x)|^{2n/(n+2)} dx )^{(n+2)/(2n)} ( ∫_Ω |v(x)|^{2n/(n−2)} dx )^{(n−2)/(2n)};
hence we only need to assume that f ∈ L^{2n/(n+2)}(Ω).

In the simplest case, when L = −Δ,
B[u, v] = ∫_Ω Du · Dv dx.
Obviously, condition (i) is met:
|B[u, v]| ≤ ‖u‖_H ‖v‖_H.
Condition (ii) is also satisfied:
B[u, u] = ‖u‖^2_H.
Now, applying the Lax-Milgram Theorem, we conclude that there exists a unique u ∈ H such that
B[u, v] = < f, v >, ∀ v ∈ H.
We have actually proved

Theorem 2.3.3 For every f(x) in L^{2n/(n+2)}(Ω), the Dirichlet problem
−Δu = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω
has a unique weak solution u in H^1_0(Ω).

For the general operator L, under the assumption that
‖a^{ij}‖_{L^∞(Ω)}, ‖b^i‖_{L^∞(Ω)}, ‖c‖_{L^∞(Ω)} ≤ C,
one can easily verify, by the Hölder inequality, that
|B[u, v]| ≤ 3C ‖u‖_H ‖v‖_H.
Hence condition (i) is always satisfied. In general, condition (ii) may not automatically hold; it depends on the sign of c(x) and on the L^∞ norms of b^i(x) and c(x).
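The weak solution guaranteed by Theorem 2.3.3 can be computed explicitly in one dimension with a standard finite-difference discretization. The right hand side, grid, and tolerance below are our own illustrative choices: we take f = π² sin(πx), whose exact solution on (0,1) with zero boundary values is u = sin(πx).

```python
import numpy as np

# Solve -u'' = f on (0,1), u(0) = u(1) = 0, with central differences.
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi**2 * np.sin(np.pi * x)

# Tridiagonal stiffness matrix for -u'' at the interior nodes.
A = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, f[1:-1])

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err < 1e-3)   # second-order scheme: err = O(h^2)
```

The solver produces the unique solution of the discrete analogue of (2.6); the O(h²) agreement with sin(πx) is consistent with the uniqueness asserted by the theorem.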
First, from the uniform ellipticity condition, we have
∫_Ω Σ_{i,j=1}^n a^{ij}(x) u_{x_i} u_{x_j} dx ≥ δ ∫_Ω |Du|^2 dx = δ ‖u‖^2_H,  (2.14)
for some δ > 0. From here one can see that if b^i(x) ≡ 0 and c(x) ≥ 0, then condition (ii) is met. Actually, the condition on c(x) can be relaxed a little bit: if, for instance,
c(x) ≥ −δ/2,  (2.13)
with δ as in (2.14), then condition (ii) still holds. Hence we have

Theorem 2.3.4 Assume that b^i(x) ≡ 0 and c(x) satisfies (2.13). Then for every f ∈ L^{2n/(n+2)}(Ω), the problem
Lu = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω
has a unique weak solution in H^1_0(Ω).

2.4 Variational Methods

2.4.1 Semilinear Equations

To find the weak solutions of semilinear equations
Lu = f(x, u),
the Lax-Milgram Theorem can no longer be applied. We will use variational methods to obtain the existence. Roughly speaking, we will associate the equation with the functional
J(u) = (1/2) ∫_Ω (Lu · u) dx − ∫_Ω F(x, u) dx,
where
F(x, u) = ∫_0^u f(x, s) ds.
We show that a critical point of J(u) is a weak solution of our problem. Then we will focus on how to seek critical points in various situations. The simplest sort of critical points are global or local maxima or minima; others are saddle points.

2.4.2 Calculus of Variations

For a continuously differentiable function g defined on R^n, a critical point is a point where Dg vanishes. To seek weak solutions of partial differential equations, we generalize this concept to infinite dimensional spaces.
To illustrate the idea, let us start from a simple example:
−Δu = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.  (2.15)
To find weak solutions of the problem, we study the functional
J(u) = (1/2) ∫_Ω |Du|^2 dx − ∫_Ω f(x) u dx
in H^1_0(Ω). Now suppose that there is some function u which happens to be a minimizer of J(·) in H^1_0(Ω); then we will demonstrate that u is actually a weak solution of our problem.

Let v be any function in H^1_0(Ω). Consider the real-valued function
g(t) = J(u + tv), t ∈ R.
Explicitly,
J(u + tv) = (1/2) ∫_Ω |D(u + tv)|^2 dx − ∫_Ω f(x)(u + tv) dx,
and
(d/dt) J(u + tv) = ∫_Ω D(u + tv) · Dv dx − ∫_Ω f(x) v dx.
Since u is a minimizer of J(·), the function g(t) has a minimum at t = 0, and therefore we must have g′(0) = 0, that is,
(d/dt) J(u + tv) |_{t=0} = 0.
Consequently, g′(0) = 0 yields
∫_Ω Du · Dv dx − ∫_Ω f(x) v dx = 0, ∀ v ∈ H^1_0(Ω).  (2.16)
This implies that u is a weak solution of problem (2.15). Furthermore, if u is also second order differentiable, then through integration by parts, we obtain
∫_Ω [−Δu − f(x)] v dx = 0.
Since this is true for any v ∈ H^1_0(Ω), we conclude
−Δu − f(x) = 0, x ∈ Ω.
Therefore, u is a classical solution of the problem. From the above argument, the readers can see that, in order to be a weak solution of the problem, u need not be a minimizer; it can be a maximum, or, more generally, any point that satisfies
(d/dt) J(u + tv) |_{t=0} = 0,
i.e. a critical point of the functional. More precisely, we have

Definition 2.4.1 Let J be a functional on a Banach space X.
i) We say that J is Frechet differentiable at u ∈ X if there exists a continuous linear map L = L(u) from X to X*, satisfying: for any ε > 0, there is a δ = δ(ε, u), such that
|J(u + v) − J(u) − < L(u), v >| ≤ ε ‖v‖_X whenever ‖v‖_X < δ,
where < L(u), v > := L(u)(v) is the duality between X and X*. The mapping L(u) is usually denoted by J′(u).
ii) A critical point of J is a point at which J′(u) = 0, that is,
< J′(u), v > = 0, ∀ v ∈ X.
iii) We call J′(u) = 0 the Euler-Lagrange equation of the functional J.

Remark 2.4.1 One can verify that, if J is Frechet differentiable at u, then
< J′(u), v > = lim_{t→0} (J(u + tv) − J(u))/t = (d/dt) J(u + tv) |_{t=0}.

2.4.3 Existence of Minimizers

In the previous subsection, we have seen that the critical points of the functional
J(u) = (1/2) ∫_Ω |Du|^2 dx − ∫_Ω f(x) u dx
are weak solutions of the Dirichlet problem (2.15). Then our next question is: does the functional J actually possess a critical point? We will show that there exists a minimizer of J. To this end, we verify that J possesses the following three properties:
i) boundedness from below,
ii) coercivity, and
iii) weak lower semicontinuity.
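Before verifying these properties, here is a numerical sketch of the minimization strategy: discretize the functional J on a grid and run gradient descent, whose fixed points are exactly the discrete Euler-Lagrange equation −u″ = f. All numerical parameters below (grid, step size, iteration count) are our own illustrative choices.

```python
import numpy as np

# Minimize the discretized J(u) = (1/2)∫|Du|² - ∫ f u by gradient descent.
N = 50
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi**2 * np.sin(np.pi * x)      # the minimizer should then be sin(pi x)
u = np.zeros(N + 1)                   # initial guess; u = 0 on the boundary

def gradJ(u):
    """Discrete Euler-Lagrange residual -u'' - f at the interior nodes."""
    g = np.zeros_like(u)
    g[1:-1] = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - f[1:-1]
    return g

step = 0.4 * h**2                     # stable step size for the discrete Laplacian
for _ in range(20000):
    u -= step * gradJ(u)

print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2)
```

The descent converges to the discrete weak solution, illustrating that driving J′(u) to zero recovers the solution of (2.15).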
First we prove that $J$ is bounded from below in $H := H_0^1(\Omega)$ if $f \in L^2(\Omega)$. Here, for simplicity, we use the equivalent norm in $H$:
$$\|u\|_H = \Big(\int_\Omega |Du|^2\,dx\Big)^{1/2}.$$
By the Poincaré and Hölder inequalities, we have
$$J(u) \ge \frac12 \|u\|_H^2 - C\,\|u\|_H\,\|f\|_{L^2(\Omega)} = \frac12\big(\|u\|_H - C\|f\|_{L^2(\Omega)}\big)^2 - \frac{C^2}{2}\|f\|^2_{L^2(\Omega)} \ge -\frac{C^2}{2}\|f\|^2_{L^2(\Omega)}. \tag{2.17}$$
Therefore, the functional $J$ is bounded from below.

ii) Coercivity. A functional that is bounded from below does not necessarily attain its minimum. A simple counterexample is
$$g(x) = \frac{1}{1+x^2}, \quad x \in \mathbb{R}^1.$$
Obviously, the infimum of $g(x)$ is $0$, but it can never be attained. If we take a minimizing sequence $\{x_k\}$ such that $g(x_k) \to 0$, then $x_k$ goes away to infinity. This suggests that, in order to prevent a minimizing sequence from 'leaking' to infinity, we need some sort of 'coercive' condition on our functional $J$: if a sequence $\{u_k\}$ goes to infinity, then $J(u_k)$ must also become unbounded; that is, if $\|u_k\|_H \to \infty$, then $J(u_k) \to \infty$. And this is indeed true for our functional $J$: from the second part of (2.17), one can see that if $\|u_k\|_H \to \infty$, then $J(u_k) \to \infty$. This implies that any minimizing sequence must be bounded in $H$; therefore a minimizing sequence would be retained in a bounded set.

iii) Weak Lower Semicontinuity. If $H$ were a finite dimensional space, then a bounded minimizing sequence $\{u_k\}$ would possess a convergent subsequence, and the limit would be a minimizer of $J$. This is no longer true in infinite dimensional spaces. We therefore turn our attention to the weak topology.

Definition 2.4.2 Let $X$ be a real Banach space. If a sequence $\{u_k\} \subset X$ satisfies
$$\langle f, u_k\rangle \to \langle f, u_o\rangle \quad \text{as } k \to \infty$$
for every bounded linear functional $f$ on $X$, then we say that $\{u_k\}$ converges weakly to $u_o$ in $X$, and write $u_k \rightharpoonup u_o$.
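The failed-coercivity example can be seen numerically at a glance (our own sketch, not part of the text): a minimizing sequence for $g(x) = 1/(1+x^2)$ necessarily runs off to infinity.

```python
import numpy as np

# g(x) = 1/(1+x²) is bounded from below (inf g = 0) but has no minimizer:
# every minimizing sequence 'leaks' to infinity.
g = lambda x: 1.0 / (1.0 + x**2)

xk = 2.0 ** np.arange(20)           # a minimizing sequence x_k = 2^k
assert np.all(np.diff(g(xk)) < 0)   # g(x_k) strictly decreases ...
assert g(xk[-1]) < 1e-10            # ... toward the infimum 0,
assert xk[-1] > 1e5                 # while x_k runs away to infinity
```
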
Definition 2.4.3 A Banach space $X$ is called reflexive if $(X^*)^* = X$, where $X^*$ is the dual of $X$.

Theorem 2.4.1 (Weak Compactness) Let $X$ be a reflexive Banach space, and assume that the sequence $\{u_k\}$ is bounded in $X$. Then there exists a subsequence of $\{u_k\}$ which converges weakly in $X$.

Now let's come back to our Hilbert space $H := H_0^1(\Omega)$. By the Riesz Representation Theorem, for every linear functional $f$ on $H$ there is a unique $v \in H$, such that
$$\langle f, u\rangle = (v, u), \quad \forall\, u \in H.$$
It follows that $H^* = H$, and therefore $H$ is reflexive. By the coercivity and the Weak Compactness Theorem, a minimizing sequence $\{u_k\}$ is bounded, and hence possesses a subsequence $\{u_{k_i}\}$ that converges weakly to an element $u_o$ in $H$:
$$(u_{k_i}, v) \to (u_o, v), \quad \forall\, v \in H.$$
If
$$\lim_{i\to\infty} J(u_{k_i}) = J(u_o),$$
then $u_o$ would be our desired minimizer. Naturally, we wish that the functional $J$ we consider were continuous with respect to this weak convergence. However, this is not the case with our functional, nor is it true for most other functionals of interest. Fortunately, we do not really need such continuity; instead, we only need a weaker property:
$$J(u_o) \le \liminf_{i\to\infty} J(u_{k_i}).$$
Since, on the other hand, by the definition of a minimizing sequence we obviously have
$$J(u_o) \ge \liminf_{i\to\infty} J(u_{k_i}),$$
it follows that $J(u_o) = \liminf_{i\to\infty} J(u_{k_i})$, and therefore $u_o$ is a minimizer.

Definition 2.4.4 We say that a functional $J(\cdot)$ is weakly lower semicontinuous on a Banach space $X$ if, for every weakly convergent sequence $u_k \rightharpoonup u_o$, we have
$$J(u_o) \le \liminf_{k\to\infty} J(u_k).$$
Now we show that our functional $J$ is weakly lower semicontinuous in $H$. Assume that $\{u_k\} \subset H$ and $u_k \rightharpoonup u_o$ in $H$, as $k \to \infty$. Since, for each fixed $f \in L^{\frac{2n}{n+2}}(\Omega)$, $\int_\Omega f u\,dx$ is a bounded linear functional on $H$, it follows immediately that
$$\int_\Omega f u_k\,dx \to \int_\Omega f u_o\,dx. \tag{2.18}$$
From the algebraic inequality $a^2 + b^2 \ge 2ab$, we have
$$\int_\Omega |Du_k|^2\,dx \ge \int_\Omega |Du_o|^2\,dx + 2\int_\Omega Du_o\cdot(Du_k - Du_o)\,dx.$$
Here the second term on the right goes to zero due to the weak convergence $u_k \rightharpoonup u_o$. Therefore
$$\liminf_{k\to\infty} \int_\Omega |Du_k|^2\,dx \ge \int_\Omega |Du_o|^2\,dx. \tag{2.19}$$
Now (2.18) and (2.19) imply the weak lower semicontinuity of $J$. So far, we have proved

Theorem 2.4.2 Assume that $\Omega$ is a bounded domain with smooth boundary. Then for every $f \in L^{\frac{2n}{n+2}}(\Omega)$, the functional
$$J(u) = \frac12 \int_\Omega |Du|^2\,dx - \int_\Omega f u\,dx$$
possesses a minimum $u_o$ in $H_0^1(\Omega)$, which is a weak solution of the boundary value problem
$$-\Delta u = f(x), \;\; x \in \Omega; \qquad u(x) = 0, \;\; x \in \partial\Omega.$$

2.4.4 Existence of Minimizers Under Constraints

Now we consider the semilinear Dirichlet problem
$$-\Delta u = |u|^{p-1}u, \;\; x \in \Omega; \qquad u(x) = 0, \;\; x \in \partial\Omega, \tag{2.20}$$
with $1 < p < \frac{n+2}{n-2}$.
Obviously, $u \equiv 0$ is a solution; we call this a trivial solution. What we are more interested in is whether there exist nontrivial solutions. Let
$$J(u) = \frac12 \int_\Omega |Du|^2\,dx - \frac{1}{p+1}\int_\Omega |u|^{p+1}\,dx.$$
It is easy to verify that
$$\frac{d}{dt} J(u+tv)\Big|_{t=0} = \int_\Omega \big(Du\cdot Dv - |u|^{p-1}u\,v\big)\,dx.$$
Therefore, a critical point of the functional $J$ in $H := H_0^1(\Omega)$ is a weak solution of (2.20). However, this functional is not bounded from below, and hence there is no minimizer of $J(u)$. To see this, fix an element $u \in H$ and consider
$$J(tu) = \frac{t^2}{2}\int_\Omega |Du|^2\,dx - \frac{t^{p+1}}{p+1}\int_\Omega |u|^{p+1}\,dx.$$
Noticing $p+1 > 2$, we find that $J(tu) \to -\infty$, as $t \to \infty$. For this reason, instead of $J$, we will consider another functional
$$I(u) := \frac12 \int_\Omega |Du|^2\,dx$$
on the constraint set
$$M := \Big\{u \in H \;\Big|\; G(u) := \int_\Omega |u|^{p+1}\,dx = 1\Big\}. \tag{2.21}$$
We seek minimizers of $I$ in $M$. Let $\{u_k\} \subset M$ be a minimizing sequence, i.e.
$$\lim_{k\to\infty} I(u_k) = \inf_{u\in M} I(u) := m.$$
It follows that $\int_\Omega |Du_k|^2\,dx$ is bounded, and hence $\{u_k\}$ is bounded in $H$. By the Weak Compactness Theorem, there exists a subsequence (for simplicity, we still denote it by $\{u_k\}$) which converges weakly to $u_o$ in $H$. The weak lower semicontinuity of the functional $I$ yields
$$I(u_o) \le \liminf_{k\to\infty} I(u_k) = m.$$
On the other hand, since $p+1 < \frac{2n}{n-2}$, by the Compact Sobolev Embedding
$$H^1(\Omega) \hookrightarrow\hookrightarrow L^{p+1}(\Omega),$$
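The unboundedness from below along rays can be caricatured in one variable (a hypothetical sketch with stand-in values for the two integrals):

```python
# Caricature of J(tu) = a t² - (b/(p+1)) t^{p+1}, with a, b > 0 standing in
# for (1/2)∫|Du|² and ∫|u|^{p+1} (hypothetical values, for illustration only).
p = 3
a, b = 1.0, 0.1

def J_ray(t):
    return a * t**2 - b / (p + 1) * t**(p + 1)

assert J_ray(0.1) > 0          # positive near the origin, since p + 1 > 2 ...
assert J_ray(100.0) < -1e5     # ... but J(tu) -> -infinity as t -> infinity
```
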
we find that $u_k$ converges strongly to $u_o$ in $L^{p+1}(\Omega)$, and hence
$$\int_\Omega |u_o|^{p+1}\,dx = 1,$$
that is, $u_o \in M$, and therefore $I(u_o) \ge m$. Together with the weak lower semicontinuity inequality above, we obtain $I(u_o) = m$, i.e. $u_o$ is a minimizer of $I$ in $M$.

To link $u_o$ to a weak solution of (2.20), we derive the corresponding Euler–Lagrange equation for this minimizer under the constraint.

Theorem 2.4.3 (Lagrange Multiplier) Let $u$ be a minimizer of $I$ in $M$, i.e.
$$I(u) = \min_{v\in M} I(v).$$
Then there exists a real number $\lambda$, such that
$$I'(u) = \lambda\, G'(u), \quad \text{or} \quad \langle I'(u), v\rangle = \lambda\, \langle G'(u), v\rangle, \quad \forall\, v \in H.$$

Before proving the theorem, let's first recall the Lagrange Multiplier rule for functions defined on $\mathbb{R}^n$. Let $f(x)$ and $g(x)$ be smooth functions in $\mathbb{R}^n$. It is well known that the critical points (in particular, the minima) $x_o$ of $f(x)$ under the constraint $g(x) = c$ satisfy
$$Df(x_o) = \lambda\, Dg(x_o) \tag{2.22}$$
for some constant $\lambda$. Geometrically, $Df(x_o)$ is a vector in $\mathbb{R}^n$; it can be decomposed as the sum of two perpendicular vectors,
$$Df(x_o) = Df(x_o)_N + Df(x_o)_T,$$
where the two terms on the right hand side are the projections of $Df(x_o)$ onto the normal and tangent spaces of the level set $g(x) = c$ at the point $x_o$. That $x_o$ is a critical point of $f(x)$ on the level set $g(x) = c$ means that the tangential projection $Df(x_o)_T = 0$, while the normal projection $Df(x_o)_N$ is parallel to $Dg(x_o)$; we therefore have (2.22).

Heuristically, we may generalize this picture to the infinite dimensional space $H$. We have now shown the existence of a minimizer $u_o$ of $I$ in $M$. At a fixed point $u \in H$, $I'(u)$ is a linear functional on $H$; by the Riesz Representation
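A minimal finite-dimensional instance of (2.22), added here for illustration (not part of the original text): minimize $f(x,y) = x^2 + y^2$ on the level set $g(x,y) = x + y = 1$.

```latex
\[
Df = (2x,\,2y), \qquad Dg = (1,\,1).
\]
% The multiplier rule Df(x_o) = \lambda\, Dg(x_o) gives 2x_o = 2y_o = \lambda,
% and the constraint x_o + y_o = 1 then yields
\[
x_o = y_o = \tfrac12, \qquad \lambda = 1.
\]
% At (1/2, 1/2) the tangential projection of Df vanishes, and Df = (1,1)
% is parallel to Dg, exactly as in the geometric picture above.
```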
Theorem, it can be identified with a point (or a vector) in $H$, still denoted by $I'(u)$, such that
$$\langle I'(u), v\rangle = (I'(u), v), \quad \forall\, v \in H.$$
Similarly for $G'(u)$: we have $G'(u) \in H$, and it is a normal vector of $M$ at the point $u$. At a minimum $u$ of $I$ on the hypersurface $\{w \in H \mid G(w) = 1\}$, $I'(u)$ must be perpendicular to the tangent space at the point $u$, and hence
$$I'(u) = \lambda\, G'(u).$$
We now prove this rigorously.

The Proof of Theorem 2.4.3.
Recall that if $u$ is the minimum of $J$ in the whole space $H$, then for any $v \in H$ we have
$$\frac{d}{dt} J(u+tv)\Big|_{t=0} = 0. \tag{2.23}$$
Now $u$ is only the minimum on $M$, so we can no longer use (2.23), because $u + tv$ cannot be kept on $M$: we cannot guarantee that
$$G(u+tv) := \int_\Omega |u+tv|^{p+1}\,dx = 1$$
for all small $t$. To remedy this, for each small $t$ we try to show that there is an $s = s(t)$, such that $G(u + tv + s(t)w) = 1$ for a suitably chosen $w$; then $u + tv + s(t)w$ is on $M$, and we can calculate the variation of $I(u + tv + s(t)w)$ as $t \to 0$.

Consider
$$g(t, s) := G(u + tv + sw).$$
Then
$$\frac{\partial g}{\partial s}(0, 0) = \langle G'(u), w\rangle = (p+1)\int_\Omega |u|^{p-1} u\, w\,dx.$$
Since $u \in M$, i.e. $\int_\Omega |u|^{p+1}\,dx = 1$, there exists $w$ (one may just take $w = u$) such that the right hand side above is not zero. Hence, according to the Implicit Function Theorem, there exists a $C^1$ function $s: \mathbb{R}^1 \to \mathbb{R}^1$ with $s(0) = 0$, such that, for sufficiently small $t$,
$$g(t, s(t)) = 1.$$
Now $u + tv + s(t)w$ is on $M$, and hence, at the minimum $u$ of $I$ on $M$, we must have
$$\frac{d}{dt} I(u + tv + s(t)w)\Big|_{t=0} = \langle I'(u), v\rangle + \langle I'(u), w\rangle\, s'(0) = 0. \tag{2.24}$$
On the other hand,
$$0 = \frac{dg}{dt}(t, s(t))\Big|_{t=0} = \frac{\partial g}{\partial t}(0,0) + \frac{\partial g}{\partial s}(0,0)\, s'(0) = \langle G'(u), v\rangle + \langle G'(u), w\rangle\, s'(0).$$
Consequently,
$$s'(0) = -\,\frac{\langle G'(u), v\rangle}{\langle G'(u), w\rangle}.$$
Substituting this into (2.24) and denoting
$$\lambda = \frac{\langle I'(u), w\rangle}{\langle G'(u), w\rangle},$$
we arrive at
$$\langle I'(u), v\rangle = \lambda\, \langle G'(u), v\rangle, \quad \forall\, v \in H.$$
This completes the proof of the Theorem.

Now let's come back to the weak solutions of problem (2.20). Our minimizer $u_o$ of $I$ under the constraint $G(u) = 1$ satisfies
$$\langle I'(u_o), v\rangle = \lambda\, \langle G'(u_o), v\rangle, \quad \forall\, v \in H,$$
that is,
$$\int_\Omega Du_o\cdot Dv\,dx = \lambda \int_\Omega |u_o|^{p-1}u_o\, v\,dx, \quad \forall\, v \in H. \tag{2.25}$$
One can see that $\lambda > 0$.

Exercise 2.4.1 Show that $\lambda > 0$ in (2.25).

Let $\tilde{u} = a u_o$ with $\lambda/a^{p-1} = 1$; this is possible since $p > 1$. Then it is easy to verify that
$$\int_\Omega D\tilde{u}\cdot Dv\,dx = \int_\Omega |\tilde{u}|^{p-1}\tilde{u}\, v\,dx, \quad \forall\, v \in H.$$
Now $\tilde{u}$ is the desired weak solution of (2.20).

2.4.5 Minimax Critical Points

Besides local or global minima and maxima, there are other types of critical points: saddle points, or minimax points. For a simple example, let's consider the function $f(x,y) = x^2 - y^2$ from $\mathbb{R}^2$ to $\mathbb{R}^1$. Then $Df(0,0) = 0$, i.e. $(0,0)$ is a critical point of $f$; however, it is neither a local maximum nor a local minimum. The graph of $z = f(x,y)$ in $\mathbb{R}^3$ looks like a horse saddle. If we go on the graph along the direction of the $x$-axis, $(0,0)$ is the minimum, while along the direction of the $y$-axis, $(0,0)$ is the maximum. For this reason, we also call $(0,0)$ a minimax critical point of $f(x,y)$. The way to locate or to show
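The saddle description can be checked mechanically (our own sketch):

```python
import numpy as np

# f(x,y) = x² - y²: the origin is a critical point that is neither a local
# maximum nor a local minimum -- a minimax (saddle) point.
f = lambda x, y: x**2 - y**2
grad = lambda x, y: np.array([2.0 * x, -2.0 * y])

assert np.allclose(grad(0.0, 0.0), 0.0)     # Df(0,0) = 0
t = np.linspace(-1.0, 1.0, 101)
assert np.all(f(t, 0.0) >= 0.0)             # along the x-axis: (0,0) is the minimum
assert np.all(f(0.0, t) <= 0.0)             # along the y-axis: (0,0) is the maximum
```
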
the existence of such minimax points is called the minimax variational method.

To illustrate the main idea, let's still take this simple example. Suppose we don't know whether $f(x,y)$ possesses a minimax point, but we do detect a 'mountain range' on the graph near the $xz$-plane. We pick two points on the $xy$-plane, for instance, $P = (1,1)$ and $Q = (-1,-1)$, on the two sides of the 'mountain range'. To go from $(P, f(P))$ to $(Q, f(Q))$ on the graph, one needs first to climb over the 'mountain range' near the $xz$-plane, and then to descend to the point $(Q, f(Q))$. Let $\gamma(s)$ be a continuous curve linking the two points $P$ and $Q$, with $\gamma(0) = P$ and $\gamma(1) = Q$. On this path there is a highest point. We then take the infimum among the highest points over all the possible paths linking $P$ and $Q$; more precisely, we define
$$c = \inf_{\gamma\in\Gamma} \max_{s\in[0,1]} f(\gamma(s)),$$
where
$$\Gamma = \{\gamma \mid \gamma = \gamma(s) \text{ is continuous on } [0,1], \text{ and } \gamma(0) = P, \; \gamma(1) = Q\}.$$
We then try to show that there exists a point $P_o$ at which $f$ attains this minimax value $c$, and, more importantly, $Df(P_o) = 0$.

For a functional $J$ defined on an infinite dimensional space $X$, we will apply a similar idea to seek its minimax critical points. Unlike in a finite dimensional space, in an infinite dimensional space a bounded sequence may not converge. Hence we require the functional $J$ to satisfy some compactness condition, which is the Palais–Smale condition.

Definition 2.4.5 We say that the functional $J$ satisfies the Palais–Smale condition (in short, (PS)) if any sequence $\{u_k\} \subset X$ for which $J(u_k)$ is bounded and $J'(u_k) \to 0$ as $k \to \infty$ possesses a convergent subsequence.

Under this compactness condition, Ambrosetti and Rabinowitz [AR] established the following well-known theorem on the existence of a minimax critical point.

Theorem 2.4.4 (Mountain Pass Theorem) Let $X$ be a real Banach space and $J \in C^1(X, \mathbb{R}^1)$. Suppose $J$ satisfies (PS), $J(0) = 0$,
(J1) there exist constants $\rho, \alpha > 0$ such that $J|_{\partial B_\rho(0)} \ge \alpha$, and
(J2) there is an $e \in X \setminus \bar{B}_\rho(0)$, such that $J(e) \le 0$.
Then $J$ possesses a critical value $c \ge \alpha$ which can be characterized as
$$c = \inf_{\gamma\in\Gamma} \max_{u\in\gamma([0,1])} J(u),$$
where
$$\Gamma = \{\gamma \in C([0,1], X) \mid \gamma(0) = 0, \; \gamma(1) = e\}.$$
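The finite-dimensional minimax value can be computed directly for the model saddle (our illustrative sketch; the two-segment path family and grid sizes are our own choices):

```python
import numpy as np

f = lambda x, y: x**2 - y**2
P, Q = np.array([1.0, 1.0]), np.array([-1.0, -1.0])

def path_max(waypoint, m=200):
    """Highest value of f along the two-segment path P -> waypoint -> Q."""
    t = np.linspace(0.0, 1.0, m)[:, None]
    pts = np.vstack([(1 - t) * P + t * waypoint, (1 - t) * waypoint + t * Q])
    return f(pts[:, 0], pts[:, 1]).max()

# Every path from P to Q crosses the 'mountain range' {y = 0}, where
# f = x² >= 0, so the highest point of any such path is >= 0 ...
highs = [path_max(np.array([x0, 0.0])) for x0 in np.linspace(-1, 1, 21)]
assert min(highs) >= 0.0
# ... and the path bent through the origin attains the minimax value c = 0,
# achieved exactly at the saddle point (0, 0), where Df = 0.
assert abs(path_max(np.array([0.0, 0.0]))) < 1e-12
```
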
Here, a critical value $c$ means a value of the functional which is attained by a critical point $u$, i.e. $J'(u) = 0$ and $J(u) = c$.

Heuristically, the Theorem says: if the point $0$ is located in a valley surrounded by a mountain range, and if at the other side of the range there is another low point $e$, then there must be a mountain pass from $0$ to $e$ which contains a minimax critical point of $J$. More precisely, on the graph of $J$, the path connecting the two low points $(0, J(0))$ and $(e, J(e))$ contains a highest point $S$, which is the lowest as compared to the highest points on the nearby paths. This $S$ is a saddle point, the minimax critical point of $J$.

The proof of the Theorem is mainly based on a deformation theorem, which exploits the change of topology of the level sets of $J$ when the value of $J$ goes through a critical value. To illustrate this, let's come back to the simple example $f(x,y) = x^2 - y^2$. Denote the level sets of $f$ by
$$f_c := \{(x,y) \in \mathbb{R}^2 \mid f(x,y) \le c\}.$$
One can see that $0$ is a critical value, and, for any $a > 0$, the level sets $f_a$ and $f_{-a}$ have different topology: in the figure below, $f_a$ is connected while $f_{-a}$ is not. If there is no critical value between two level sets, say $f_a$ and $f_b$ with $0 < a < b$, then one can continuously deform $f_b$ into $f_a$. One convenient way is to let the points in the set $f_b$ 'flow' along the direction of $-Df$ into $f_a$; correspondingly, the points on the graph 'flow downhill' onto a lower level. More precisely and generally, we have

Theorem 2.4.5 (Deformation Theorem) Let $X$ be a real Banach space. Assume that $J \in C^1(X, \mathbb{R}^1)$ and satisfies the (PS) condition. Suppose that $c$ is not a critical value of $J$.
Then, for each sufficiently small $\epsilon > 0$, there exist a constant $0 < \delta < \epsilon$ and a continuous mapping $\eta(t,u)$ from $[0,1]\times X$ to $X$, such that
(i) $\eta(0,u) = u$, $\forall\, u \in X$;
(ii) $\eta(1,u) = u$, $\forall\, u \in M := \{u \in X \mid J(u) \le c-\epsilon \text{ or } J(u) \ge c+\epsilon\}$;
(iii) $J(\eta(t,u)) \le J(u)$, $\forall\, u \in X$, $t \in [0,1]$;
(iv) $\eta(1, J_{c+\delta}) \subset J_{c-\delta}$.

Roughly speaking, the Theorem says that if $c$ is not a critical value of $J$, then one can continuously deform a higher level set $J_{c+\delta}$ into a lower level set $J_{c-\delta}$ by 'flowing downhill'. Condition (ii) is to ensure that, after the deformation, the two points $0$ and $e$ in the Mountain Pass Theorem still remain fixed.

Proof of the Deformation Theorem.
The main idea is to solve an ODE initial value problem (with respect to $t$) for each $u \in X$, which roughly looks like the following:
$$\frac{d\eta}{dt}(t,u) = -J'(\eta(t,u)), \qquad \eta(0,u) = u, \qquad t \in [0,1]. \tag{2.26}$$
That is, we let the 'flow' go along the direction of $-J'(\eta(t,u))$, so that the value of $J(\eta(t,u))$ decreases as $t$ increases. However, we need to modify the right hand side of the equation a little bit, so that the flow will satisfy the other desired conditions.

Step 1. We first choose a small $\epsilon > 0$. We claim that there exist constants $0 < a, \epsilon < 1$, such that
$$\|J'(u)\| \ge a, \quad \forall\, u \in J_{c+\epsilon} \setminus J_{c-\epsilon}. \tag{2.27}$$
This is actually guaranteed by the (PS) condition. Otherwise, there would exist sequences $a_k \to 0$, $\epsilon_k \to 0$, and elements $u_k \in J_{c+\epsilon_k} \setminus J_{c-\epsilon_k}$ with $\|J'(u_k)\| \le a_k$. Then, by virtue of the (PS) condition, there is a subsequence $\{u_{k_i}\}$ that converges to an element $u_o \in X$. Since $J \in C^1(X, \mathbb{R}^1)$, we must have $J(u_o) = c$ and $J'(u_o) = 0$. This is a contradiction with the assumption that $c$ is not a critical value of $J$. Therefore (2.27) holds.

Step 2. To satisfy condition (ii), we need the flow to stay still on the set
$$M := \{u \in X \mid J(u) \le c-\epsilon \text{ or } J(u) \ge c+\epsilon\}.$$
We could multiply the right hand side of equation (2.26) by $\mathrm{dist}(\eta, M)$, but this would make the flow go too slowly near $M$. To modify further, we choose $0 < \delta < \epsilon$, such that
$$\delta < \frac{a^2}{2}, \tag{2.28}$$
and let
$$N := \{u \in X \mid c-\delta \le J(u) \le c+\delta\}.$$
Obviously, $M \cap N = \emptyset$. For $u \in X$, define
$$g(u) := \frac{\mathrm{dist}(u, M)}{\mathrm{dist}(u, M) + \mathrm{dist}(u, N)},$$
and let
$$h(s) := \begin{cases} 1 & \text{if } 0 \le s \le 1,\\[2pt] \dfrac1s & \text{if } s > 1.\end{cases}$$
We finally modify $-J'(\eta)$ as
$$F(\eta) := -g(\eta)\, h(\|J'(\eta)\|)\, J'(\eta).$$
From the definition of $g$,
$$g(u) = 0, \;\; \forall\, u \in M, \qquad g(\eta) \equiv 1, \;\; \forall\, \eta \in N.$$
The introduction of the factor $h(\cdot)$ is to ensure the boundedness of $F(\cdot)$; one can verify that $F$ is bounded and Lipschitz continuous on bounded sets. Now, for each fixed $u \in X$, we consider the modified ODE problem
$$\frac{d\eta}{dt}(t,u) = F(\eta(t,u)), \qquad \eta(0,u) = u,$$
and therefore, by the well-known ODE theory, there exists a unique solution $\eta(t,u)$ for all time $t \ge 0$.

Step 3. We verify that $\eta$ satisfies conditions (i)-(iv). Condition (i) is satisfied automatically. For $u \in M$ we have $g(u) = 0$, hence $\eta(t,u) = u$ for all $t$; in particular $\eta(1,u) = u$, so condition (ii) is also satisfied.

To verify (iii) and (iv), we compute
$$\frac{d}{dt} J(\eta(t,u)) = \Big\langle J'(\eta(t,u)),\, \frac{d\eta}{dt}(t,u)\Big\rangle = -\,g(\eta(t,u))\, h(\|J'(\eta(t,u))\|)\, \|J'(\eta(t,u))\|^2. \tag{2.29}$$
In the above equality, the right hand side is nonpositive, and hence $J(\eta(t,u))$ is nonincreasing. This verifies (iii).

To see (iv), it suffices to show that, for each $u$ in $N' := J_{c+\delta} \setminus J_{c-\delta}$, we have $\eta(1,u) \in J_{c-\delta}$. Notice that for $\eta \in N$, $g(\eta) \equiv 1$; hence, as long as $\eta(t,u)$ stays in $N$, (2.29) becomes
$$\frac{d}{dt} J(\eta(t,u)) = -\,h(\|J'(\eta(t,u))\|)\, \|J'(\eta(t,u))\|^2.$$
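The truncated descent flow is easy to visualize numerically. Here is a toy sketch in $\mathbb{R}^2$ (our own, not from the text): it omits the cutoff factor $g(\cdot)$ (no set $M$ is needed in this example), uses a crude Euler discretization, and our own choices of function, starting point, and step counts.

```python
import numpy as np

# Euler discretization of the truncated descent flow from the proof:
#   dη/dt = -h(‖∇f‖) ∇f,   h(s) = min(1, 1/s),
# for f(x,y) = x² + y² and the non-critical value c = 1.
f = lambda p: p[0]**2 + p[1]**2
grad = lambda p: 2.0 * p

def flow(p, T=1.0, steps=2000):
    dt = T / steps
    for _ in range(steps):
        gr = grad(p)
        s = np.linalg.norm(gr)
        p = p - dt * min(1.0, 1.0 / s) * gr if s > 0 else p
    return p

c, delta = 1.0, 0.05
p0 = np.array([0.9, 0.48])        # f(p0) ≈ 1.04, i.e. p0 lies in J_{c+δ}
p1 = flow(p0)
assert f(p1) < f(p0)              # the flow decreases f along trajectories
assert f(p1) <= c - delta         # J_{c+δ} has flowed down into J_{c-δ}
```
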
Therefore, by the definition of $h$ and (2.27),
$$\frac{d}{dt} J(\eta(t,u)) \le -\|J'(\eta(t,u))\|^2 \le -a^2 \quad \text{if } \|J'(\eta(t,u))\| \le 1,$$
$$\frac{d}{dt} J(\eta(t,u)) \le -\|J'(\eta(t,u))\| \le -a \quad \text{if } \|J'(\eta(t,u))\| > 1.$$
In any case, $\frac{d}{dt} J(\eta(t,u)) \le -a^2$, and hence, by (2.28), we derive
$$J(\eta(1,u)) \le J(\eta(0,u)) - a^2 \le c + \delta - a^2 \le c - \delta,$$
that is, $\eta(1,u) \in J_{c-\delta}$. This establishes (iv) and therefore completes the proof of the Theorem.

With this Deformation Theorem, the proof of the Mountain Pass Theorem becomes very simple.

The Proof of the Mountain Pass Theorem.
First, $c < \infty$. To see this, we take the particular path $\bar\gamma(t) = te$ and consider the function $g(t) = J(\bar\gamma(t))$. It is continuous on the closed interval $[0,1]$, and hence bounded from above. Therefore
$$c \le \max_{u\in\bar\gamma([0,1])} J(u) = \max_{t\in[0,1]} g(t) < \infty.$$
On the other hand, for any $\gamma \in \Gamma$,
$$\gamma([0,1]) \cap \partial B_\rho(0) \ne \emptyset.$$
Hence, by condition (J1),
$$\max_{u\in\gamma([0,1])} J(u) \ge \inf_{v\in\partial B_\rho(0)} J(v) \ge \alpha,$$
and it follows that $c \ge \alpha$.

Suppose the number $c$ so defined is not a critical value. We will use the Deformation Theorem to continuously deform a path $\gamma \in \Gamma$ down to the level set $J_{c-\delta}$, to derive a contradiction. Choose the $\epsilon$ in the Deformation Theorem to be $\frac{\alpha}{2}$, and $0 < \delta < \epsilon$. From the definition of $c$, we can pick a path $\gamma \in \Gamma$, such that
$$\max_{u\in\gamma([0,1])} J(u) \le c + \delta,$$
i.e. $\gamma([0,1]) \subset J_{c+\delta}$. Now, by the Deformation Theorem, this path can be continuously deformed down to the level set $J_{c-\delta}$:
$$\eta(1, \gamma([0,1])) \subset J_{c-\delta},$$
or, in other words,
$$\max_{u\in\eta(1,\gamma([0,1]))} J(u) \le c - \delta. \tag{2.30}$$
On the other hand, by (ii) of the Deformation Theorem, since $0$ and $e$ are in $J_{c-\epsilon}$, we have $\eta(1,0) = 0$ and $\eta(1,e) = e$. This means that $\eta(1,\gamma([0,1]))$ is also a path in $\Gamma$, and therefore, due to the definition of $c$, we must have
$$\max_{u\in\eta(1,\gamma([0,1]))} J(u) \ge c.$$
This contradicts (2.30) and hence completes the proof of the Theorem.

2.4.6 Existence of a Minimax via the Mountain Pass Theorem

Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem
$$-\Delta u = f(x, u), \;\; x \in \Omega; \qquad u(x) = 0, \;\; x \in \partial\Omega. \tag{2.31}$$
Although the method we illustrate here can be applied to more general second order uniformly elliptic equations, we prefer this simple model, which better illustrates the main ideas.

We assume that $f(x,u)$ satisfies:
(f1) $f(x,s) \in C(\bar\Omega\times\mathbb{R}^1, \mathbb{R}^1)$;
(f2) there exist constants $c_1, c_2 \ge 0$, such that
$$|f(x,s)| \le c_1 + c_2 |s|^p, \quad \text{with } 0 \le p < \frac{n+2}{n-2}, \text{ if } n > 2;$$
if $n = 2$, it suffices that
$$|f(x,s)| \le c_1 e^{\phi(s)}, \quad \text{with } \phi(s)/s^2 \to 0 \text{ as } |s| \to \infty;$$
if $n = 1$, (f2) can be dropped;
(f3) $f(x,s) = o(|s|)$ as $s \to 0$; and
(f4) there are constants $\mu > 2$ and $r \ge 0$, such that, for $|s| \ge r$,
$$0 < \mu F(x,s) \le s\, f(x,s), \quad \text{where } F(x,s) = \int_0^s f(x,t)\,dt.$$

Theorem 2.4.6 (Rabinowitz [Ra]) If $f$ satisfies conditions (f1)-(f4), then problem (2.31) possesses a nontrivial weak solution.

Remark 2.4.2 A simple example of such a function $f(x,u)$ is $|u|^{p-1}u$.
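For Remark 2.4.2, conditions (f2)-(f4) can be checked for the model nonlinearity directly; we include the routine verification for the reader's convenience.

```latex
% f(x,s) = |s|^{p-1}s with 1 < p < (n+2)/(n-2):
\[
F(x,s) = \int_0^s |t|^{p-1}t\,dt = \frac{|s|^{p+1}}{p+1}, \qquad
s\,f(x,s) = |s|^{p+1}.
\]
% Hence (f4) holds, with equality, for \mu = p+1 > 2 and any r > 0:
\[
0 < \mu\, F(x,s) = |s|^{p+1} = s\,f(x,s), \qquad |s| \ge r;
\]
% (f2) holds with c_1 = 0, c_2 = 1, and (f3) holds since
% |f(x,s)| = |s|^p = o(|s|) as s \to 0, because p > 1.
```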
To find weak solutions of the problem, we seek minimax critical points of the functional
$$J(u) = \frac12\int_\Omega |Du|^2\,dx - \int_\Omega F(x,u)\,dx$$
on the Hilbert space $H := H_0^1(\Omega)$. We will verify that $J$ satisfies the conditions in the Mountain Pass Theorem. We first need to show that $J \in C^1(H, \mathbb{R}^1)$. Since we have already verified that the first term, $\int_\Omega |Du|^2\,dx$, is continuously differentiable, now we only need to check the second term,
$$I(u) := \int_\Omega F(x,u)\,dx.$$

Proposition 2.4.1 (Rabinowitz [Ra]) Let $\Omega \subset \mathbb{R}^n$ be a bounded domain, and let $g$ satisfy
(g1) $g \in C(\bar\Omega\times\mathbb{R}^1, \mathbb{R}^1)$, and
(g2) there are constants $r, s \ge 1$ and $a_1, a_2 \ge 0$, such that
$$|g(x,\xi)| \le a_1 + a_2 |\xi|^{r/s}$$
for all $x \in \bar\Omega$, $\xi \in \mathbb{R}^1$.
Then the map $u(x) \mapsto g(x, u(x))$ belongs to $C(L^r(\Omega), L^s(\Omega))$.

Proof. By (g2),
$$\int_\Omega |g(x, u(x))|^s\,dx \le \int_\Omega \big(a_1 + a_2 |u|^{r/s}\big)^s\,dx \le a_3 \int_\Omega (1 + |u|^r)\,dx.$$
It follows that if $u \in L^r(\Omega)$, then $g(x,u) \in L^s(\Omega)$, i.e. $g(x,\cdot)$ maps $L^r(\Omega)$ into $L^s(\Omega)$.

Now we verify the continuity of the map. Fix $u \in L^r(\Omega)$. Given any $\epsilon > 0$, we want to show that there is a $\delta > 0$, such that
$$\|g(\cdot, u+\phi) - g(\cdot, u)\|_{L^s(\Omega)} < \epsilon, \quad \text{whenever } \|\phi\|_{L^r(\Omega)} < \delta. \tag{2.32}$$
For $m > 0$ to be chosen, let
$$\Omega_1 := \{x \in \Omega \mid |\phi(x)| \le m\} \quad \text{and} \quad \Omega_2 := \Omega \setminus \Omega_1,$$
and write
$$\|g(\cdot, u+\phi) - g(\cdot, u)\|^s_{L^s(\Omega)} = I_1 + I_2, \quad \text{where } I_i = \int_{\Omega_i} |g(x, u(x)+\phi(x)) - g(x, u(x))|^s\,dx, \;\; i = 1, 2.$$
By the continuity of $g(x,\cdot)$, for any $\eta > 0$ there exists $m > 0$, such that
$$|g(x, u(x)+\phi(x)) - g(x, u(x))| \le \eta, \quad \forall\, x \in \Omega_1. \tag{2.33}$$
It follows that
$$I_1 \le |\Omega_1|\,\eta^s \le |\Omega|\,\eta^s,$$
where $|\Omega|$ is the volume of $\Omega$. For the given $\epsilon > 0$, we choose $\eta$, and then $m$, so that
$$|\Omega|\,\eta^s < \frac{\epsilon^s}{2}, \quad \text{and therefore} \quad I_1 \le \frac{\epsilon^s}{2}.$$
Now we fix this $m$ and estimate $I_2$. By (g2), for some constants $c_1$ and $c_2$,
$$I_2 \le \int_{\Omega_2} \big(c_1 + c_2(|u|^r + |\phi|^r)\big)\,dx \le c_1 |\Omega_2| + c_2 \int_{\Omega_2} |u|^r\,dx + c_2\, \|\phi\|^r_{L^r(\Omega)}. \tag{2.34}$$
Moreover, as $\|\phi\|_{L^r(\Omega)} < \delta$,
$$\delta^r > \int_\Omega |\phi|^r\,dx \ge \int_{\Omega_2} |\phi|^r\,dx \ge m^r\,|\Omega_2|, \tag{2.35}$$
so we can choose $\delta$ so small that $|\Omega_2|$ is sufficiently small. Since $u \in L^r(\Omega)$, we can make $\int_{\Omega_2} |u|^r\,dx$ as small as we wish by letting $|\Omega_2|$ be small. Consequently, for such a small $\delta$, the right hand side of (2.34) is less than $\frac{\epsilon^s}{2}$, i.e.
$$I_1 + I_2 \le \frac{\epsilon^s}{2} + \frac{\epsilon^s}{2} = \epsilon^s, \quad \text{as } \|\phi\|_{L^r(\Omega)} < \delta.$$
This verifies (2.32), and therefore completes the proof of the Proposition.

Proposition 2.4.2 (Rabinowitz [Ra]) Assume that $f(x,s)$ satisfies (f1) and (f2). Let $n \ge 3$. Then $I \in C^1(H, \mathbb{R})$, and
$$\langle I'(u), v\rangle = \int_\Omega f(x,u)\,v\,dx, \quad \forall\, v \in H.$$
Moreover, $I(\cdot)$ is weakly continuous and $I'(\cdot)$ is compact, i.e.
$$I(u_k) \to I(u) \;\text{ and }\; I'(u_k) \to I'(u), \quad \text{whenever } u_k \rightharpoonup u \text{ in } H.$$
Proof. We will first show that $I$ is Frechet differentiable on $H$, and then prove that $I'(u)$ is continuous.

1. Let $u, v \in H$, and define
$$Q(u,v) := \Big| I(u+v) - I(u) - \int_\Omega f(x, u(x))\,v(x)\,dx \Big|.$$
We want to show that, for any given $\epsilon > 0$, there exists a $\delta = \delta(\epsilon, u)$, such that
$$Q(u,v) \le \epsilon\, \|v\|, \quad \text{whenever } \|v\| \le \delta, \tag{2.36}$$
and hence $I(\cdot)$ is differentiable. Here $\|v\|$ denotes the $H_0^1(\Omega)$ norm of $v$. In fact,
$$Q(u,v) \le \int_\Omega |F(x, u(x)+v(x)) - F(x, u(x)) - f(x, u(x))\,v(x)|\,dx = \int_\Omega |f(x, u(x)+\xi(x)) - f(x, u(x))|\,|v(x)|\,dx$$
$$\le \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\, \|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \le K\,\|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\, \|v\|. \tag{2.37}$$
Here we have applied the Mean Value Theorem (with $|\xi(x)| \le |v(x)|$), the Hölder inequality, and the Sobolev inequality
$$\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \le K\,\|v\|. \tag{2.38}$$
In Proposition 2.4.1, let $r = \frac{2n}{n-2}$ and $s = \frac{2n}{n+2}$. Then by (f2), we deduce that the map $u(x) \mapsto f(x, u(x))$ is continuous from $L^{\frac{2n}{n-2}}(\Omega)$ to $L^{\frac{2n}{n+2}}(\Omega)$. Hence, for any given $\epsilon > 0$, we can choose a sufficiently small $\delta > 0$, such that, whenever $\|v\| < \delta$, we have $\|v\|_{L^{\frac{2n}{n-2}}} < K\delta$, and hence $\|\xi\|_{L^{\frac{2n}{n-2}}} < K\delta$, and therefore
$$\|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)} < \frac{\epsilon}{K}.$$
It then follows from (2.37) that $Q(u,v) < \epsilon\,\|v\|$, whenever $\|v\| < \delta$. This verifies (2.36), and hence $I(\cdot)$ is differentiable.

2. To verify the continuity of $I'(\cdot)$, we show that
$$\|I'(u+\phi) - I'(u)\| \to 0, \quad \text{as } \|\phi\| \to 0. \tag{2.39}$$
By definition,
$$\|I'(u+\phi) - I'(u)\| = \sup_{\|v\|\le 1} |\langle I'(u+\phi) - I'(u), v\rangle| = \sup_{\|v\|\le 1} \Big|\int_\Omega [f(x, u+\phi) - f(x, u)]\,v\,dx\Big|$$
$$\le \sup_{\|v\|\le 1} \|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\, K\,\|v\| \le K\, \|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \tag{2.40}$$
Here we have applied the Hölder inequality and (2.38). Now (2.39) follows again from Proposition 2.4.1. Therefore, $I'(\cdot)$ is continuous.

3. Finally, we verify the weak continuity of $I(\cdot)$ and the compactness of $I'(\cdot)$. Assume that $\{u_k\} \subset H$ and $u_k \rightharpoonup u$ in $H$. We show that
$$I(u_k) \to I(u), \quad \text{as } k \to \infty.$$
In fact, by the Mean Value Theorem (with $\xi_k(x)$ between $u_k(x)$ and $u(x)$) and the Hölder inequality,
$$|I(u_k) - I(u)| \le \int_\Omega |F(x, u_k(x)) - F(x, u(x))|\,dx = \int_\Omega |f(x, \xi_k(x))|\,|u_k(x) - u(x)|\,dx$$
$$\le \|f(\cdot, \xi_k)\|_{L^r(\Omega)}\, \|u_k - u\|_{L^q(\Omega)} \le C\big(1 + \|\xi_k\|^p_{L^{\frac{2n}{n-2}}(\Omega)}\big)\, \|u_k - u\|_{L^q(\Omega)}, \tag{2.41}$$
where
$$r = \frac{2n}{p(n-2)}, \qquad q = \frac{r}{r-1} = \frac{2n}{2n - p(n-2)}.$$
It is easy to see that $q < \frac{2n}{n-2}$, since $p < \frac{n+2}{n-2}$. Since a weakly convergent sequence is bounded, $\{u_k\}$ is bounded in $H$, and hence, by Sobolev Embedding, it is bounded in $L^{\frac{2n}{n-2}}(\Omega)$; so is $\{\xi_k\}$. Therefore, by the Compact Embedding of $H$ into $L^q(\Omega)$, the last factor in (2.41) approaches zero as $k \to \infty$. This verifies the weak continuity of $I(\cdot)$.

Finally, using a similar argument as in deriving (2.40), we obtain
$$\|I'(u_k) - I'(u)\| \le K\, \|f(\cdot, u_k) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \tag{2.42}$$
31).43) implies that {uk } must be bounded in H. v ∈ H. we have µM + J (uk ) uk ≥ µJ(uk )− < J (uk ). (2.42).1. Consequently. uk )uk − µF (x. Then for any u. Then by the Compact Embedding. uk )] dx 2 Ω uk (x)≤r µ ≥ ( − 1) Duk 2 dx − (a1 r + a2 rp+1 )Ω 2 Ω = co uk 2 − Co . uk )→f (·. We will verify that J(·) satisﬁes the conditions in the Mountain Pass Theorem. we have 2np n+2 < 2n n−2 .4.43) for some positive constant co and Co . together with (2. and thus possesses a nontrivial minimax critical point. we arrive at I (uk )→I (u) as k→∞. we derive uk →u strongly in L n+2 (Ω). 2np Consequently. form Proposition 2.31) and prove Theorem 2. uk )uk − µF (x.4.70 2 Existence of Weak Solutions Since p < n+2 n−2 . uk )] dx 2 Ω Ω µ ≥ ( − 1) Duk 2 dx + [f (x. 2np H → → L n+2 (Ω). This completes the proof of the proposition. By (f4 ). Assume that {uk } is a sequence in H satisfying J(uk ) ≤ M and J (uk )→0. f (·. First we verify the (PS) condition. Since J (uk ) is bounded. which is a weak solution of (2. (2. We show that {uk } possesses a convergent subsequence. Let A : H→H ∗ be the duality map between H and its dual space H ∗ . which converges weakly to some element uo in H. Now let’s come back to Problem (2. u) strongly in L n+2 (Ω). v >= Ω 2n Du · Dvdx. Now.6. . uk > µ = ( − 1) Duk 2 dx + [f (x. < Au. Hence there exists a subsequence (still denoted by {uk }).
$$A^{-1}J'(u) = u - A^{-1}f(\cdot, u). \tag{2.44}$$
It follows that
$$u_k = A^{-1}J'(u_k) + A^{-1}f(\cdot, u_k).$$
As above, $f(\cdot, u_k) \to f(\cdot, u_o)$ strongly in $H^*$, and hence $A^{-1}f(\cdot, u_k) \to A^{-1}f(\cdot, u_o)$ strongly in $H$; while $A^{-1}J'(u_k) \to 0$, since $J'(u_k) \to 0$. Consequently, $u_k$ converges strongly in $H$. This verifies the (PS) condition.

To see that there is a 'mountain range' surrounding the origin, we estimate $\int_\Omega F(x,u)\,dx$. By (f3), for any given $\epsilon > 0$, there is a $\delta > 0$, such that, for all $x \in \bar\Omega$,
$$|F(x,s)| \le \epsilon\, s^2, \quad \text{whenever } |s| < \delta. \tag{2.45}$$
While by (f2), there is a constant $M = M(\delta)$, such that, for all $x \in \bar\Omega$ and $|s| \ge \delta$,
$$|F(x,s)| \le M\,|s|^{p+1}. \tag{2.46}$$
Combining (2.45) and (2.46), we have, for all $x \in \bar\Omega$ and for all $s \in \mathbb{R}$,
$$|F(x,s)| \le \epsilon\, s^2 + M\,|s|^{p+1}. \tag{2.47}$$
Consequently, via the Poincaré and the Sobolev inequalities,
$$\Big|\int_\Omega F(x,u)\,dx\Big| \le \epsilon\int_\Omega u^2\,dx + M\int_\Omega |u|^{p+1}\,dx \le C\big(\epsilon + M\|u\|^{p-1}\big)\,\|u\|^2, \tag{2.48}$$
and consequently
$$J(u) \ge \Big[\frac12 - C\big(\epsilon + M\|u\|^{p-1}\big)\Big]\,\|u\|^2.$$
Choose $\epsilon = \frac{1}{8C}$; for this $\epsilon$, we fix $M$, and then choose $\|u\| = \rho$ small, so that $CM\rho^{p-1} \le \frac18$. Then from the above we deduce
$$J|_{\partial B_\rho(0)} \ge \frac14\,\rho^2. \tag{2.49}$$

To see the existence of a point $e \in H$ such that $J(e) \le 0$, we fix any $u \not\equiv 0$ in $H$, and consider
$$J(tu) = \frac{t^2}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x, tu)\,dx.$$
By (f4),
$$\frac{d}{ds}\big[s^{-\mu} F(x,s)\big] = s^{-\mu-1}\big(-\mu F(x,s) + s\,f(x,s)\big) \;\begin{cases} \ge 0 & \text{if } s \ge r,\\ \le 0 & \text{if } s \le -r.\end{cases} \tag{2.50}$$
In any case, integrating, we deduce that, for $|s| \ge r$,
$$F(x,s) \ge a_3\,|s|^\mu,$$
and it follows that, for all $s$,
$$F(x,s) \ge a_3\,|s|^\mu - a_4.$$
Therefore, taking into account that $\mu > 2$, we conclude that
$$J(tu) \le \frac{t^2}{2}\int_\Omega |Du|^2\,dx - a_3\, t^\mu \int_\Omega |u|^\mu\,dx + a_4\,|\Omega| \to -\infty, \quad \text{as } t \to \infty.$$
Hence we can choose $t$ sufficiently large, and $e = tu$, such that $J(e) \le 0$.

Now $J$ satisfies all the conditions in the Mountain Pass Theorem; hence it possesses a minimax critical point $u_o$, which is a weak solution of (2.31). Moreover, $J(u_o) > 0$, and therefore it is a nontrivial solution. This completes the proof of Theorem 2.4.6.
3 Regularity of Solutions

3.1 W^{2,p} a Priori Estimates
  3.1.1 Newtonian Potentials
  3.1.2 Uniform Elliptic Equations
3.2 W^{2,p} Regularity of Solutions
  3.2.1 The Case p ≥ 2
  3.2.2 The Case 1 < p < 2
  3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity
3.3 Regularity Lifting
  3.3.1 Bootstraps
  3.3.2 The Regularity Lifting Theorem
  3.3.3 Applications to PDEs
  3.3.4 Applications to Integral Equations

In the previous chapter, we used functional analysis, mainly calculus of variations, to seek the existence of weak solutions for second order linear or semilinear elliptic equations. The weak solutions we obtained were in the Sobolev space $W^{1,2}$, and hence, by Sobolev imbedding, in $L^p$ spaces for $p \le \frac{2n}{n-2}$. Usually, the weak solutions are easier to obtain than the classical ones. However, in practice, we are often required to find classical solutions. Therefore, one would naturally want to know whether these weak solutions are actually differentiable, so that they can become classical ones. These questions will be answered in this chapter: we will introduce methods to show that, in most cases, a weak solution is in fact smooth. This is called the regularity argument. Very often, the regularity is equivalent to the a priori estimates of solutions, and this is particularly true for nonlinear equations. To see the difference between the regularity and the a priori estimate, we take the equation
$$-\Delta u = f(x)$$
for example. Roughly speaking, the $W^{2,p}$ regularity theory infers that:

If $u \in W^{1,p}$ is a weak solution and $f \in L^p$, then $u \in W^{2,p}$.

While the $W^{2,p}$ a priori estimate says:

If $u \in W_0^{1,p} \cap W^{2,p}(\Omega)$ is a strong solution and $f \in L^p$, then there exists a constant $C$, such that
$$\|u\|_{W^{2,p}} \le C\big(\|u\|_{L^p} + \|f\|_{L^p}\big).$$

That is, in the a priori estimate, we preassumed that $u$ is in $W^{2,p}$. It might seem a little bit surprising at this moment that the a priori estimates turn out to be powerful tools in deriving regularities. The readers will see in Section 3.2 how the a priori estimates and the uniqueness of solutions lead to the regularity.

Let $\Omega$ be an open bounded set in $\mathbb{R}^n$. Consider the second order partial differential equation
$$Lu := -\sum_{i,j=1}^n a_{ij}(x)\,u_{x_i x_j} + \sum_{i=1}^n b_i(x)\,u_{x_i} + c(x)\,u = f(x). \tag{3.1}$$
Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. We assume $L$ is uniformly elliptic, that is, there exists a constant $\delta > 0$, such that
$$\sum_{i,j=1}^n a_{ij}(x)\,\xi_i \xi_j \ge \delta\,|\xi|^2$$
for a.e. $x \in \Omega$ and for all $\xi \in \mathbb{R}^n$.

In Section 3.1, we establish a $W^{2,p}$ a priori estimate for the solutions of (3.1). We will first establish this for a Newtonian potential, showing that if the function $f(x)$ is in $L^p(\Omega)$, then $\|u\|_{W^{2,p}} \le C\,\|f\|_{L^p}$; then we use the method of "frozen coefficients" to generalize it to uniformly elliptic equations.

In Section 3.2, we establish the $W^{2,p}$ regularity. We show that if $f(x)$ is in $L^p(\Omega)$, then any weak solution $u$ of (3.1) is in $W^{2,p}$, and $u \in W^{2,p}(\Omega)$ is a strong solution of (3.1).

In Section 3.3, we introduce and prove a Regularity Lifting Theorem. It provides a simple method for the study of regularity. The version we present here contains some new developments; it is much more general and very easy to use. We will also use examples to show how this method can be applied to boost the regularity of the solutions for PDEs as well as for integral equations. We believe that the method will be helpful to both experts and non-experts in the field.
3.1 W^{2,p} a Priori Estimates

We establish the $W^{2,p}$ estimate for the solutions of
$$Lu = f(x), \quad x \in \Omega. \tag{3.2}$$
First we start with the Newtonian potential.

3.1.1 Newtonian Potentials

Let $\Omega$ be a bounded domain of $\mathbb{R}^n$. Let
$$\Gamma(x) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{|x|^{n-2}}, \quad n \ge 3, \tag{3.3}$$
be the fundamental solution of the Laplace equation. The Newtonian potential of $f$ is known as
$$w(x) = \int_\Omega \Gamma(x-y)\,f(y)\,dy.$$
We prove

Theorem 3.1.1 Let $f \in L^p(\Omega)$ for $1 < p < \infty$, and let $w$ be the Newtonian potential of $f$. Then $w \in W^{2,p}(\Omega)$, and
$$\Delta w = f(x), \quad \text{a.e. } x \in \Omega, \tag{3.4}$$
and
$$\|D^2 w\|_{L^p(\Omega)} \le C\,\|f\|_{L^p(\Omega)}. \tag{3.5}$$

Write $w = Nf$, where $N$ is obviously a linear operator. For fixed $i, j$, define the linear operator $T$ as
$$Tf = D_{ij}Nf = D_{ij}\int_{\mathbb{R}^n} \Gamma(x-y)\,f(y)\,dy.$$
To prove Theorem 3.1.1, it is equivalent to show that
$$T: L^p(\Omega) \longrightarrow L^p(\Omega) \;\text{ is a bounded linear operator.}$$
Our proof can actually be applied to more general operators. To this end, we introduce the concepts of weak type and strong type operators.

For $p \ge 1$, a weak $L^p$ space $L^p_w(\Omega)$ is the collection of functions $f$ that satisfy
$$\|f\|^p_{L^p_w(\Omega)} = \sup\{\mu_f(t)\,t^p, \; \forall\, t > 0\} < \infty,$$
where
$$\mu_f(t) = |\{x \in \Omega \mid |f(x)| > t\}|$$
is the distribution function of $f$.
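A standard concrete example separating $L^p_w$ from $L^p$ (our illustration, not from the text): $f(x) = 1/x$ on $(0,1)$ is in weak $L^1$ but not in $L^1$. Its distribution function is $\mu_f(t) = \min(1, 1/t)$, so $\sup_t t\,\mu_f(t) = 1 < \infty$, while $\int_0^1 dx/x$ diverges. The grids below are our own choices.

```python
import numpy as np

# f(x) = 1/x on Ω = (0,1): weak-L¹ 'norm' is finite, L¹ norm is not.
mu = lambda t: np.minimum(1.0, 1.0 / t)     # distribution function μ_f(t)

t = np.logspace(-3, 6, 1000)
assert np.max(t * mu(t)) <= 1.0 + 1e-12     # sup_t t·μ_f(t) = 1 < ∞

x = np.linspace(1e-8, 1.0, 10**6)
assert np.sum(1.0 / x) * (x[1] - x[0]) > 15  # Riemann sum of ∫ dx/x blows up
```
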
An operator $T : L^p(\Omega) \to L^q(\Omega)$ is of strong type $(p,q)$ if

$$\|Tf\|_{L^q(\Omega)} \le C\,\|f\|_{L^p(\Omega)}, \quad \forall f \in L^p(\Omega);$$

it is of weak type $(p,q)$ if

$$\|Tf\|_{L^q_w(\Omega)} \le C\,\|f\|_{L^p(\Omega)}, \quad \forall f \in L^p(\Omega).$$

Outline of Proof of (3.5). We decompose the proof into the proofs of the following four lemmas.

First, we use the Fourier transform to show

Lemma 3.1.1 $T : L^2(\Omega) \to L^2(\Omega)$ is a bounded linear operator, i.e. $T$ is of strong type $(2,2)$.

Secondly, we use Calderón–Zygmund's Decomposition Lemma to prove

Lemma 3.1.2 $T$ is of weak type $(1,1)$.

Thirdly, we employ the Marcinkiewicz Interpolation Theorem to derive

Lemma 3.1.3 $T$ is of strong type $(r,r)$ for any $1 < r \le 2$.

Finally, by duality, we conclude

Lemma 3.1.4 $T$ is of strong type $(p,p)$ for $1 < p < \infty$.

The Proof of Lemma 3.1.1. First we consider $f \in C_0^\infty(\Omega) \subset C_0^\infty(\mathbb{R}^n)$. Then obviously $w \in C^\infty(\mathbb{R}^n)$ and satisfies

$$\Delta w = f(x), \quad \forall x \in \mathbb{R}^n.$$

Let

$$\hat f(\xi) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n} e^{i\langle x,\xi\rangle}f(x)\,dx$$

be the Fourier transform of $f$, where $\langle x,\xi\rangle = \sum_k x_k\xi_k$ is the inner product of $\mathbb{R}^n$ and $i = \sqrt{-1}$. We need the following simple property of the transform:

$$\widehat{D_k f}(\xi) = -i\xi_k\hat f(\xi), \quad \text{and hence} \quad \widehat{D_{jk}f}(\xi) = -\xi_j\xi_k\hat f(\xi), \qquad (3.6)$$

and the well-known Plancherel Theorem:

$$\|\hat f\|_{L^2(\mathbb{R}^n)} = \|f\|_{L^2(\mathbb{R}^n)}. \qquad (3.7)$$
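Both (3.6) and (3.7) have exact discrete analogues that can be checked numerically. The sketch below (ours, not part of the text) does so with NumPy; note that NumPy's FFT uses the opposite sign convention $e^{-i\langle x,\xi\rangle}$, for which the Plancherel identity and the differentiation rule hold just as well:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(1024)

# Discrete analogue of Plancherel's theorem (3.7): with the unitary
# ("ortho") normalization the FFT preserves the l^2 norm.
f_hat = np.fft.fft(f, norm="ortho")
lhs, rhs = np.linalg.norm(f), np.linalg.norm(f_hat)

# Differentiation rule (3.6): the transform turns d/dx into
# multiplication by (+/- i xi); check via a spectral derivative.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
g = np.sin(3.0 * x)
xi = 2.0 * np.pi * np.fft.fftfreq(256, d=x[1] - x[0])
dg = np.fft.ifft(1j * xi * np.fft.fft(g)).real   # should equal g' = 3 cos(3x)

print(abs(lhs - rhs))                            # ~ machine precision
print(np.max(np.abs(dg - 3.0 * np.cos(3.0 * x))))
```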
It follows from (3.6) and (3.7) that

$$\int_\Omega |f(x)|^2dx = \int_{\mathbb{R}^n}|f(x)|^2dx = \int_{\mathbb{R}^n}|\Delta w(x)|^2dx = \int_{\mathbb{R}^n}|\xi|^4|\hat w(\xi)|^2d\xi$$
$$= \sum_{k,j=1}^n\int_{\mathbb{R}^n}\xi_k^2\xi_j^2|\hat w(\xi)|^2d\xi = \sum_{k,j=1}^n\int_{\mathbb{R}^n}|\widehat{D_{kj}w}(\xi)|^2d\xi = \sum_{k,j=1}^n\int_{\mathbb{R}^n}|D_{kj}w(x)|^2dx = \int_{\mathbb{R}^n}|D^2w|^2dx.$$

Consequently,

$$\|Tf\|_{L^2(\Omega)} \le \|f\|_{L^2(\Omega)}, \quad \forall f \in C_0^\infty(\Omega). \qquad (3.8)$$

Now, to verify (3.8) for any $f \in L^2(\Omega)$, we simply pick a sequence $\{f_k\} \subset C_0^\infty(\Omega)$ that converges to $f$ in $L^2(\Omega)$, and take the limit. This proves the Lemma.

The Proof of Lemma 3.1.2. We need the following well-known Calderón–Zygmund Decomposition Lemma (see its proof in Appendix C).

Lemma 3.1.5 For $f \in L^1(\mathbb{R}^n)$ and fixed $\alpha > 0$, there exist sets $E$ and $G$ such that

(i) $\mathbb{R}^n = E\cup G$, $E\cap G = \emptyset$;

(ii) $|f(x)| \le \alpha$ for a.e. $x \in E$;

(iii) $G = \bigcup_{k=1}^\infty Q_k$, where $\{Q_k\}$ are disjoint cubes such that

$$\alpha < \frac{1}{|Q_k|}\int_{Q_k}|f(x)|\,dx \le 2^n\alpha.$$

For any $f \in L^1(\Omega)$, to apply the Calderón–Zygmund Decomposition Lemma, we first extend $f$ to vanish outside $\Omega$, then fix a cube $Q_o$ in $\mathbb{R}^n$ large enough that

$$\int_{Q_o}|f(x)|\,dx \le \alpha\,|Q_o|.$$
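A one-dimensional numerical sketch (ours, purely illustrative) of the dyadic stopping-time construction behind Lemma 3.1.5: starting from a root interval on which the average of $|f|$ is at most $\alpha$ (the role of the large cube $Q_o$), bisect repeatedly and select an interval the first time its average exceeds $\alpha$. Each selected interval then has average in $(\alpha, 2\alpha]$ (the factor $2^n$ with $n = 1$), and the selected set has measure at most $\|f\|_{L^1}/\alpha$:

```python
import numpy as np

def cz_bad_cubes(f, a, b, alpha, depth=0, max_depth=40):
    """Dyadic stopping-time selection of 'bad' intervals Q_k within [a, b].

    Precondition (the role of Q_o): the average of |f| over [a, b] is <= alpha.
    Averages are approximated by a midpoint rule on 4096 samples.
    """
    mid = np.linspace(a, b, 4096, endpoint=False) + (b - a) / 8192.0
    vals = np.abs(f(mid))
    if vals.mean() > alpha:          # first crossing: alpha < avg <= 2*alpha
        return [(a, b)]
    if depth >= max_depth or vals.max() <= alpha:
        return []                    # interval lies inside the good set E
    c = 0.5 * (a + b)                # bisect (2^n children with n = 1)
    return (cz_bad_cubes(f, a, c, alpha, depth + 1, max_depth) +
            cz_bad_cubes(f, c, b, alpha, depth + 1, max_depth))

f = lambda x: x ** -0.5              # in L^1(0,1) but unbounded near 0
alpha = 4.0
bad = cz_bad_cubes(f, 0.0, 1.0, alpha)
print(bad)                           # the bad intervals cluster at the singularity
```

Here $\|f\|_{L^1(0,1)} = 2$, so the selected intervals must have total length at most $2/\alpha = 0.5$.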
We show that

$$\mu_{Tf}(\alpha) := |\{x \in \mathbb{R}^n : |Tf(x)| \ge \alpha\}| \le \frac{C\,\|f\|_{L^1(\Omega)}}{\alpha}. \qquad (3.9)$$

Split the function $f$ into the "good" part $g$ and the "bad" part $b$: $f = g + b$, where

$$g(x) = \begin{cases} f(x) & \text{for } x \in E \\ \dfrac{1}{|Q_k|}\displaystyle\int_{Q_k}f(x)\,dx & \text{for } x \in Q_k,\ k = 1, 2, \cdots. \end{cases} \qquad (3.10)$$

Obviously, from the definition of $g$, we have $|g(x)| \le 2^n\alpha$ almost everywhere, while

$$\int_{Q_k}b(x)\,dx = 0 \ \text{ for } k = 1, 2, \cdots, \quad \text{and} \quad b(x) = 0 \ \text{ for } x \in E.$$

Since the operator $T$ is linear, $Tf = Tg + Tb$, and therefore

$$\mu_{Tf}(\alpha) \le \mu_{Tg}(\tfrac{\alpha}{2}) + \mu_{Tb}(\tfrac{\alpha}{2}). \qquad (3.11)$$

We will estimate $\mu_{Tg}(\frac{\alpha}{2})$ and $\mu_{Tb}(\frac{\alpha}{2})$ separately.

We first estimate $\mu_{Tg}$. The estimate of the first one is easy, because $g \in L^2$. By Lemma 3.1.1 and (3.10), we derive

$$\mu_{Tg}(\tfrac{\alpha}{2}) \le \frac{4}{\alpha^2}\int |g(x)|^2dx \le \frac{2^{n+2}}{\alpha}\int |g(x)|\,dx \le \frac{2^{n+2}}{\alpha}\int |f(x)|\,dx. \qquad (3.12)$$

We then estimate $\mu_{Tb}$. Let

$$b_k(x) = \begin{cases} b(x) & \text{for } x \in Q_k \\ 0 & \text{elsewhere.} \end{cases}$$

Then $Tb = \sum_{k=1}^\infty Tb_k$. For each fixed $k$, let $\{b_{km}\} \subset C_0^\infty(Q_k)$ be a sequence converging to $b_k$ in $L^2(\Omega)$ and satisfying

$$\int_{Q_k}b_{km}(x)\,dx = \int_{Q_k}b_k(x)\,dx = 0. \qquad (3.13)$$

To estimate $\mu_{Tb}(\frac{\alpha}{2})$, we divide $\mathbb{R}^n$ into two parts, $G^*$ and $E^* := \mathbb{R}^n\setminus G^*$ (see below for the precise definition of $G^*$). We will show that

(a) $|G^*| \le \dfrac{C}{\alpha}\,\|f\|_{L^1(\Omega)}$, and

(b) $|\{x \in E^* : |Tb(x)| \ge \frac{\alpha}{2}\}| \le \dfrac{C}{\alpha}\displaystyle\int_{E^*}|Tb(x)|\,dx \le \dfrac{C}{\alpha}\,\|f\|_{L^1(\Omega)}$.

These will imply the desired estimate for $\mu_{Tb}(\frac{\alpha}{2})$.
From the expression

$$Tb_{km}(x) = \int_{Q_k}D_{ij}\Gamma(x-y)\,b_{km}(y)\,dy,$$

one can see that, due to the singularity of $D_{ij}\Gamma(x-y)$ in $Q_k$ and the fact that $b_{km}$ may not be bounded in $Q_k$, one can only estimate $Tb_{km}(x)$ when $x$ is a positive distance away from $Q_k$. For this reason, we cover $Q_k$ by a bigger ball $B_k$ which has the same center $\bar y$ as $Q_k$ and whose radius $\delta_k$ is the same as the diameter of $Q_k$.

One small trick here is to add the term

$$\int_{Q_k}D_{ij}\Gamma(x-\bar y)\,b_{km}(y)\,dy$$

(which is $0$ by (3.13)) so as to produce a helpful factor $\delta_k$ by applying the mean value theorem to the difference:

$$|D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)| = |(y-\bar y)\cdot D(D_{ij}\Gamma)(x-\xi)| \le \delta_k\,|D(D_{ij}\Gamma)(x-\xi)|.$$

We now estimate the integral in the complement of $B_k$:

$$\int_{\mathbb{R}^n\setminus B_k}|Tb_{km}(x)|\,dx = \int_{Q_o\setminus B_k}\Big|\int_{Q_k}D_{ij}\Gamma(x-y)\,b_{km}(y)\,dy\Big|\,dx$$
$$= \int_{Q_o\setminus B_k}\Big|\int_{Q_k}[D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)]\,b_{km}(y)\,dy\Big|\,dx$$
$$\le C\,\delta_k\int_{Q_o\setminus B_k}\frac{1}{|x-\bar y|^{n+1}}\,dx\cdot\int_{Q_k}|b_{km}(y)|\,dy \le C_1\,\delta_k\int_{\delta_k}^\infty\frac{1}{r^2}\,dr\cdot\int_{Q_k}|b_{km}(y)|\,dy$$
$$\le C_2\int_{Q_k}|b_{km}(y)|\,dy, \qquad (3.14)$$

where $\bar y$ is the center of the cube $Q_k$. Now letting $m\to\infty$ in (3.14), we obtain

$$\int_{\mathbb{R}^n\setminus B_k}|Tb_k(x)|\,dx \le C\int_{Q_k}|b_k(y)|\,dy.$$

Let

$$G^* = \bigcup_{k=1}^\infty B_k \quad \text{and} \quad E^* = \mathbb{R}^n\setminus G^*.$$

It follows that

$$\int_{E^*}|Tb(x)|\,dx \le \sum_{k=1}^\infty\int_{\mathbb{R}^n\setminus G^*}|Tb_k|\,dx \le \sum_{k=1}^\infty\int_{\mathbb{R}^n\setminus B_k}|Tb_k|\,dx \le C\sum_{k=1}^\infty\int_{Q_k}|b_k(y)|\,dy \le C\int_{\mathbb{R}^n}|f(x)|\,dx. \qquad (3.15)$$
Obviously,

$$\mu_{Tb}(\tfrac{\alpha}{2}) \le |G^*| + |\{x \in E^* : |Tb(x)| \ge \tfrac{\alpha}{2}\}|. \qquad (3.16)$$

By (iii) in the Calderón–Zygmund Decomposition Lemma, we have

$$|G^*| = \sum_{k=1}^\infty|B_k| = C\sum_{k=1}^\infty|Q_k| \le \frac{C}{\alpha}\sum_{k=1}^\infty\int_{Q_k}|f(x)|\,dx = \frac{C}{\alpha}\int_{\mathbb{R}^n}|f(x)|\,dx. \qquad (3.17)$$

Write

$$E^*_\alpha = \{x \in E^* : |Tb(x)| \ge \tfrac{\alpha}{2}\}.$$

Then by (3.15), we derive

$$|E^*_\alpha|\,\frac{\alpha}{2} \le \int_{E^*_\alpha}|Tb(x)|\,dx \le \int_{E^*}|Tb(x)|\,dx \le C\int_{\mathbb{R}^n}|f(x)|\,dx. \qquad (3.18)$$

Now the desired inequality (3.9) is a direct consequence of (3.11), (3.12), (3.16), (3.17), and (3.18). This completes the proof of the Lemma.

The Proof of Lemma 3.1.3. In the previous lemmas, we have shown that the operator $T$ is of weak type $(1,1)$ and of strong type $(2,2)$ (of course also of weak type $(2,2)$). Now Lemma 3.1.3 is a direct consequence of the Marcinkiewicz interpolation theorem in the following restricted form.

Lemma 3.1.6 Let $T$ be a linear operator from $L^p(\Omega)\cap L^q(\Omega)$ into itself with $1 \le p < q < \infty$. If $T$ is of weak type $(p,p)$ and of weak type $(q,q)$, i.e. if there exist constants $B_p$ and $B_q$ such that

$$\mu_{Tf}(t) \le \Big(\frac{B_p\|f\|_p}{t}\Big)^p \quad \text{and} \quad \mu_{Tf}(t) \le \Big(\frac{B_q\|f\|_q}{t}\Big)^q,$$

then for any $p < r < q$, $T$ is of strong type $(r,r)$; more precisely,

$$\|Tf\|_r \le C\,B_p^\theta B_q^{1-\theta}\,\|f\|_r, \quad \forall f \in L^p(\Omega)\cap L^q(\Omega),$$

where

$$\frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q},$$

and $C$ depends only on $p$, $q$, and $r$. The proof of this lemma can be found in Appendix C. This completes the proof of the Lemma.
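The exponent relation in Lemma 3.1.6 determines $\theta$ uniquely from $p$, $q$, $r$. A tiny sketch (ours) in exact rational arithmetic, specialized to the case $p = 1$, $q = 2$ used for Lemma 3.1.3:

```python
from fractions import Fraction

def marcinkiewicz_theta(p, q, r):
    """Solve 1/r = theta/p + (1 - theta)/q for theta, exactly."""
    p, q, r = Fraction(p), Fraction(q), Fraction(r)
    return (1 / r - 1 / q) / (1 / p - 1 / q)

# Interpolating between weak (1,1) and (2,2), as in the proof of Lemma 3.1.3:
for r in (Fraction(6, 5), Fraction(4, 3), Fraction(3, 2), Fraction(2)):
    theta = marcinkiewicz_theta(1, 2, r)
    assert 1 / r == theta / 1 + (1 - theta) / 2   # the defining relation
    print(r, theta)
# e.g. r = 4/3 gives theta = 1/2
```

At the endpoints $r = p$ and $r = q$ the formula returns $\theta = 1$ and $\theta = 0$, consistent with the requirement $p < r < q$ for the strictly interior case.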
The Proof of Lemma 3.1.4. From the previous lemma, we know that, for any $1 < r \le 2$,

$$\|Tg\|_{L^r} \le C_r\,\|g\|_{L^r}, \quad \forall g \in L^r(\Omega). \qquad (3.19)$$

Let $\langle f, g\rangle = \int_\Omega f(x)g(x)\,dx$ be the duality between $f$ and $g$. Then it is easy to verify that

$$\langle g, Tf\rangle = \langle Tg, f\rangle. \qquad (3.20)$$

Given any $2 < p < \infty$, let $r = \frac{p}{p-1}$. Obviously, $\frac{1}{r} + \frac{1}{p} = 1$ and $1 < r < 2$. It follows from (3.19) and (3.20) that

$$\|Tf\|_{L^p} = \sup_{\|g\|_{L^r}=1}\langle g, Tf\rangle = \sup_{\|g\|_{L^r}=1}\langle Tg, f\rangle \le \sup_{\|g\|_{L^r}=1}\|Tg\|_{L^r}\,\|f\|_{L^p} \le C_r\,\|f\|_{L^p}.$$

This completes the proof of the Lemma.

3.1.2 Uniform Elliptic Equations

In this section, we consider general second order uniformly elliptic equations with the Dirichlet boundary condition

$$\begin{cases} Lu := -\sum_{i,j=1}^n a_{ij}(x)u_{x_ix_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u = f(x), & x \in \Omega \\ u(x) = 0, & x \in \partial\Omega. \end{cases} \qquad (3.21)$$

We assume that $\Omega$ is bounded with $C^{2,\alpha}$ boundary, $a_{ij}(x) \in C(\bar\Omega)$, $b_i(x), c(x) \in L^\infty(\Omega)$, and

$$\lambda|\xi|^2 \le \sum a_{ij}\xi_i\xi_j \le \Lambda|\xi|^2$$

for some positive constants $\lambda$ and $\Lambda$.

Based on the result in the previous section, the estimates on the Newtonian potentials, we will establish a priori estimates for the strong solutions of (3.21).

Definition 3.1.1 We say that $u$ is a strong solution of $Lu = f(x)$, $x \in \Omega$, if $u$ is twice weakly differentiable in $\Omega$ and satisfies the equation almost everywhere in $\Omega$.

Theorem 3.1.2 Let $u \in W^{2,p}(\Omega)\cap W_0^{1,p}(\Omega)$ be a strong solution of (3.21), and assume that $f \in L^p(\Omega)$. Then

$$\|u\|_{W^{2,p}(\Omega)} \le C(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}), \qquad (3.22)$$

where $C$ is a constant depending on $n$, $p$, $\lambda$, $\Lambda$, $\|b_i\|_{L^\infty(\Omega)}$, $\|c\|_{L^\infty(\Omega)}$, and the domain $\Omega$.
Remark 3.1.1 The conditions $b_i(x), c(x) \in L^\infty(\Omega)$ can be replaced by the weaker ones $b_i(x), c(x) \in L^p(\Omega)$ for any $p > n$.

Proof of Theorem 3.1.2. We divide the proof into two major parts.

Part 1. Interior Estimate:

$$\|D^2u\|_{L^p(K)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}), \qquad (3.23)$$

where $K$ is any compact subset of $\Omega$.

Part 2. Boundary Estimate:

$$\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}), \qquad (3.24)$$

where $\Omega_\delta = \{x \in \Omega : d(x,\partial\Omega) > \delta\}$.

Combining the interior and the boundary estimates, we obtain

$$\|u\|_{W^{2,p}(\Omega)} \le \|u\|_{W^{2,p}(\Omega\setminus\Omega_{2\delta})} + \|u\|_{W^{2,p}(\Omega_\delta)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \qquad (3.25)$$

Theorem 3.1.2 is then a simple consequence of (3.25) and the following well-known Gagliardo–Nirenberg type interpolation estimate:

$$\|\nabla u\|_{L^p(\Omega)} \le \epsilon\,\|D^2u\|_{L^p(\Omega)} + C_\epsilon\,\|u\|_{L^p(\Omega)}. \qquad (3.26)$$

Indeed, substituting this estimate of $\|\nabla u\|_{L^p(\Omega)}$ into our inequality (3.25), we obtain

$$\|u\|_{W^{2,p}(\Omega)} \le \epsilon\,C\,\|D^2u\|_{L^p(\Omega)} + C(C_\epsilon\,\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}).$$

Choosing $\epsilon < \frac{1}{2C}$, we can then absorb the term $\|D^2u\|_{L^p(\Omega)}$ on the right hand side into the left hand side, and arrive at the conclusion of our theorem,

$$\|u\|_{W^{2,p}(\Omega)} \le C(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}).$$

For both the interior and the boundary estimates, the main idea is the well-known method of "frozen" coefficients. Locally, near a point $x^o$, we may regard the leading coefficients $a_{ij}(x)$ of the operator approximately as the constants $a_{ij}(x^o)$ (as if the functions $a_{ij}$ were frozen at the point $x^o$), and thus we are able to treat the operator as a constant coefficient one and apply the potential estimates derived in the previous section.
Part 1: Interior Estimates.

First we define the cutoff function $\phi(s)$ to be a compactly supported $C^\infty$ function with

$$\phi(s) = \begin{cases} 1 & \text{if } |s| \le 1 \\ 0 & \text{if } |s| \ge 2. \end{cases}$$

For any $x^o \in \Omega_{2\delta}$, let

$$\eta(x) = \phi\Big(\frac{|x-x^o|}{\delta}\Big) \quad \text{and} \quad w(x) = \eta(x)u(x).$$

Here we notice that all terms are supported in $B_{2\delta} := B_{2\delta}(x^o) \subset \Omega$. With a simple linear transformation, we may assume that $a_{ij}(x^o) = \delta_{ij}$. Then, abbreviating $\sum_{i,j=1}^n a_{ij}$ as $a_{ij}$ and so on (i.e. skipping the summation signs), and using the equation $a_{ij}u_{x_ix_j} = b_iu_{x_i} + cu - f$, we compute

$$\Delta w = a_{ij}(x^o)\frac{\partial^2w}{\partial x_i\partial x_j} = (a_{ij}(x^o) - a_{ij}(x))\frac{\partial^2w}{\partial x_i\partial x_j} + a_{ij}(x)\frac{\partial^2w}{\partial x_i\partial x_j}$$
$$= (a_{ij}(x^o) - a_{ij}(x))\frac{\partial^2w}{\partial x_i\partial x_j} + a_{ij}(x)u(x)\frac{\partial^2\eta}{\partial x_i\partial x_j} + 2a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial\eta}{\partial x_j} + \eta(x)\Big(b_i(x)\frac{\partial u}{\partial x_i} + c(x)u - f(x)\Big)$$
$$:= F(x) \quad \text{for } x \in \mathbb{R}^n.$$

We quantify the modulus of continuity of the $a_{ij}$ by

$$\tau(\delta) = \sup_{|x-y|\le\delta,\ x,y\in\Omega,\ 1\le i,j\le n}|a_{ij}(x) - a_{ij}(y)|.$$

The function $\tau(\delta) \to 0$ as $\delta \to 0$, and it measures the modulus of continuity of the functions $a_{ij}$.

Apparently both $w$ and

$$\Gamma * F := \frac{1}{n(n-2)\omega_n}\int_{\mathbb{R}^n}\frac{1}{|x-y|^{n-2}}\,F(y)\,dy$$

are solutions of $\Delta u = F$. Thus, by the uniqueness, $w = \Gamma * F$ (see the exercise after the proof of the theorem).
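For concreteness, one standard way to realize such a cutoff $\phi$ numerically (our sketch; the text only needs its existence) is as a ratio of shifted exponential bumps, which is $C^\infty$, identically $1$ on $[-1,1]$, and identically $0$ outside $(-2,2)$:

```python
import numpy as np

def phi(s):
    """C-infinity cutoff: phi(s) = 1 for |s| <= 1, phi(s) = 0 for |s| >= 2,
    built from the smooth bump psi(t) = exp(-1/t) for t > 0, else 0."""
    s = np.abs(np.asarray(s, dtype=float))

    def psi(t):
        t = np.asarray(t, dtype=float)
        # clamp avoids division by zero; the np.where mask keeps psi(t<=0) = 0
        return np.where(t > 0.0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)

    # smooth partition-of-unity step, decreasing from 1 to 0 on 1 < |s| < 2
    return psi(2.0 - s) / (psi(2.0 - s) + psi(s - 1.0))

# eta(x) = phi(|x - xo| / delta) then localizes to the ball B_{2 delta}(xo).
xs = np.linspace(1.0, 2.0, 200)
assert np.all(np.diff(phi(xs)) <= 1e-12)   # monotone through the transition
print(phi(0.5), phi(1.5), phi(2.5))
```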
Now, following the Newtonian potential estimate (see Theorem 3.1.1), we obtain

$$\|D^2w\|_{L^p(B_{2\delta})} = \|D^2w\|_{L^p(\mathbb{R}^n)} \le C\,\|F\|_{L^p(\mathbb{R}^n)} = C\,\|F\|_{L^p(B_{2\delta})}. \qquad (3.27)$$

Calculating term by term, we have

$$\|F\|_{L^p(B_{2\delta})} \le \tau(2\delta)\,\|D^2w\|_{L^p(B_{2\delta})} + \|f\|_{L^p(B_{2\delta})} + C(\|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}).$$

Combining this with the previous inequality (3.27) and choosing $\delta$ so small that $C\,\tau(2\delta) < 1/2$, we get

$$\|D^2w\|_{L^p(B_{2\delta})} \le C\,\tau(2\delta)\,\|D^2w\|_{L^p(B_{2\delta})} + C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})})$$
$$\le \frac{1}{2}\,\|D^2w\|_{L^p(B_{2\delta})} + C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}).$$

This is equivalent to

$$\|D^2w\|_{L^p(B_{2\delta})} \le C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}).$$

Since $u = w$ on $B_\delta(x^o)$, we have $\|D^2u\|_{L^p(B_\delta)} = \|D^2w\|_{L^p(B_\delta)}$. Consequently,

$$\|D^2u\|_{L^p(B_\delta)} \le C(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}). \qquad (3.28)$$

To extend this estimate from a $\delta$-ball to a compact set, we notice that the collection of balls $\{B_\delta(x) : x \in \Omega_{2\delta}\}$ forms an open covering of $\Omega_{2\delta}$. Hence there are finitely many balls $\{B_\delta(x^i) : i = 1, \cdots, m\}$ that already cover $\Omega_{2\delta}$. We now apply the above estimate (3.28) on each of the balls and sum up to obtain

$$\|D^2u\|_{L^p(\Omega_{2\delta})} \le \sum_{i=1}^m\|D^2u\|_{L^p(B_\delta(x^i))} \le \sum_{i=1}^m C(\|f\|_{L^p(B_{2\delta}(x^i))} + \|\nabla u\|_{L^p(B_{2\delta}(x^i))} + \|u\|_{L^p(B_{2\delta}(x^i))})$$
$$\le C(\|f\|_{L^p(\Omega)} + \|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)}). \qquad (3.29)$$

For any compact subset $K$ of $\Omega$, let $\delta < \frac{1}{2}\,\mathrm{dist}(K,\partial\Omega)$; then $K \subset \Omega_{2\delta}$, and we derive

$$\|D^2u\|_{L^p(K)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \qquad (3.30)$$

This establishes the interior estimates.

Part 2. Boundary Estimates.

The boundary estimates are very similar; let's just describe how we can apply almost the same scheme here. For any point $x^o \in \partial\Omega$, $B_\delta(x^o)\cap\partial\Omega$
is a $C^{2,\alpha}$ graph for $\delta$ small. With a suitable rotation, we may assume that $B_\delta(x^o)\cap\partial\Omega$ is given by the graph of

$$x_n = h(x_1, \cdots, x_{n-1}) = h(x'),$$

and that $\Omega$ is on top of this graph locally. Let

$$y = \psi(x) = (x' - (x^o)',\ x_n - h(x'));$$

then $\psi$ is a diffeomorphism that maps a neighborhood of $x^o$ in $\bar\Omega$ onto $B_r^+(0) = \{y \in B_r(0) : y_n > 0\}$. The equation becomes

$$\begin{cases} -\sum_{i,j=1}^n\bar a_{ij}(y)u_{y_iy_j} + \sum_{i=1}^n\bar b_i(y)u_{y_i} + \bar c(y)u = \bar f(y), & \text{for } y \in B_r^+(0) \\ u(y) = 0, & \text{for } y \in \partial B_r^+(0) \text{ with } y_n = 0, \end{cases} \qquad (3.31)$$

where the new coefficients are computed from the original ones via the chain rule of differentiation:

$$\bar a_{ij}(y) = \frac{\partial\psi^i}{\partial x^l}(\psi^{-1}(y))\,a_{lk}(\psi^{-1}(y))\,\frac{\partial\psi^j}{\partial x^k}(\psi^{-1}(y)).$$

We then make a linear transformation so that $\bar a_{ij}(0) = \delta_{ij}$; since a plane is still mapped to a plane, the flat boundary portion $\{y_n = 0\}$ is preserved. If necessary, we may assume that equation (3.31) is valid for some smaller $r$.

Applying the method of frozen coefficients (with $w(y) = \phi(\frac{2|y|}{r})u(y)$), we get

$$\begin{cases} \Delta w(y) = F(y), & \text{for } y \in B_r^+(0) \\ w(y) = 0, & \text{for } y \in \partial B_r^+(0) \text{ with } y_n = 0. \end{cases}$$

Now let $\bar w(y)$ and $\bar F(y)$ be the odd extensions of $w(y)$ and $F(y)$ from $B_r^+(0)$ to $B_r(0)$:

$$\bar w(y) = \begin{cases} w(y_1, \cdots, y_{n-1}, y_n), & \text{if } y_n \ge 0 \\ -w(y_1, \cdots, y_{n-1}, -y_n), & \text{if } y_n < 0, \end{cases}$$

and similarly for $\bar F(y)$. Then one can check that

$$\Delta\bar w(y) = \bar F(y), \quad \text{for } y \in B_r(0).$$

Through the same calculations as in Part 1, we derive the basic boundary estimate

$$\|D^2u\|_{L^p(B_r(x^o))} \le C(\|f\|_{L^p(B_{2r}(x^o))} + \|\nabla u\|_{L^p(B_{2r}(x^o))} + \|u\|_{L^p(B_{2r}(x^o))})$$
$$\le C_1(\|f\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|\nabla u\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|u\|_{L^p(B_{2r}(x^o)\cap\Omega)})$$

for any $x^o$ on $\partial\Omega$ and for some small radius $r$. The last inequality above is due to the symmetric (odd) extension of $w$ to $\bar w$ from the half ball to the whole ball.
These balls form a covering of the compact set $\partial\Omega$, which thus has a finite subcovering $B_{r_i}(x^i)$, $i = 1, 2, \cdots, k$. These balls also cover a neighborhood of $\partial\Omega$ including $\Omega\setminus\Omega_\delta$ for some $\delta$ small. Summing up the estimates, and following the same steps as we did for the interior estimates, we get

$$\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}). \qquad (3.32)$$

This establishes the boundary estimates, and hence completes the proof of the theorem.

Exercise 3.1.1 Assume that $\Omega$ is bounded. Show that if $-\Delta w = F$ and $w \in W_0^{1,p}(\Omega)$, then $w = \Gamma * F$.

Hint:
(a) Show that $\Delta(\Gamma * F)(x) = 0$ for $x \notin \bar\Omega$.
(b) Show that it is true for $w \in C_0^2(\mathbb{R}^n)$.
(c) Show that $\Delta(J_\epsilon w) = J_\epsilon(\Delta w)$, and thus $J_\epsilon w = \Gamma * (J_\epsilon F)$, where $J_\epsilon$ is the mollifier. Let $\epsilon\to 0$ to derive $w = \Gamma * F$.

3.2 W^{2,p} Regularity of Solutions

In this section, we will use the a priori estimates established in the previous section to derive the $W^{2,p}$ regularity for the weak solutions. Let $L$ be a second order uniformly elliptic operator in divergence form:

$$Lu = -\sum_{i,j=1}^n(a_{ij}(x)u_{x_i})_{x_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u. \qquad (3.33)$$

Definition 3.2.1 We say that $u \in W_0^{1,p}(\Omega)$ is a weak solution of

$$\begin{cases} Lu = f(x), & x \in \Omega \\ u(x) = 0, & x \in \partial\Omega, \end{cases} \qquad (3.34)$$

if for any $v \in W_0^{1,q}(\Omega)$,

$$\int_\Omega\Big(\sum_{i,j=1}^n a_{ij}(x)u_{x_i}v_{x_j} + \sum_i b_i(x)u_{x_i}v + c(x)uv\Big)dx = \int_\Omega f(x)v\,dx, \qquad (3.35)$$

where $\frac{1}{p} + \frac{1}{q} = 1$.
Remark 3.2.1 Since $C_0^\infty(\Omega)$ is dense in $W_0^{1,q}(\Omega)$, we only need to require that (3.35) be true for all $v \in C_0^\infty(\Omega)$, and this is more convenient in many applications.

The main result of the section is

Theorem 3.2.1 Assume that $\Omega$ is a bounded domain in $\mathbb{R}^n$. Let $L$ be the uniformly elliptic operator defined in (3.33), with $a_{ij}(x)$ Lipschitz continuous and $b_i(x)$ and $c(x)$ bounded. Assume that $u \in W_0^{1,p}(\Omega)$ is a weak solution of (3.34). If $f(x) \in L^p(\Omega)$, then $u \in W^{2,p}(\Omega)$.

Outline of the Proof. The proof of the theorem is quite complex and will be accomplished in two stages. The main difficulty lies in the uniqueness of the weak solution; as one will see, this uniqueness plays a key role in deriving the regularity.

In stage 1, we consider the case when $p \ge 2$, because one can rather easily show the uniqueness of the weak solutions by multiplying both sides of the equation by the solution itself and integrating by parts.

In stage 2 (subsection 3.2.2), we consider the case $1 < p < 2$. To show the uniqueness there, we first prove a $W^{2,p}$ version of the Fredholm Alternative for $p \ge 2$, based on the regularity result in stage 1, which will be used to deduce the existence of solutions of the equation

$$-\sum_{ij}(a_{ij}(x)w_{x_i})_{x_j} = F(x).$$

Then we use this equation, with a proper choice of $F(x)$, to derive a contradiction if there exists a nontrivial solution of the equation

$$-\sum_{ij}(a_{ij}(x)v_{x_i})_{x_j} = 0.$$

The proof of the regularity is based on the following fundamental proposition on the existence, uniqueness, and regularity of solutions of the Laplace equation on a unit ball; we then use the "frozen coefficients" method to derive the regularity for the general operator $L$.

Proposition 3.2.1 Assume $f \in L^p(B_1(0))$ with $p \ge 2$. Then the Dirichlet problem

$$\begin{cases} \Delta u = f(x), & x \in B_1(0) \\ u(x) = 0, & x \in \partial B_1(0), \end{cases} \qquad (3.36)$$

possesses a unique solution $u \in W^{2,p}(B_1(0))$ satisfying

$$\|u\|_{W^{2,p}(B_1)} \le C\,\|f\|_{L^p(B_1)}. \qquad (3.37)$$

We will prove this proposition in subsection 3.2.1.
After proving the uniqueness, everything else is the same as in stage 1.

Remark 3.2.2 Actually, to derive the regularity, the conditions on the coefficients of $L$ can be weakened. This will use the Regularity Lifting Theorem in Section 3.3, and hence will be deliberated there.

3.2.1 The Case p ≥ 2

We will prove Proposition 3.2.1 and use it to derive the regularity of weak solutions for the general operator $L$. To this end, we need

Lemma 3.2.1 (Better A Priori Estimate in the Presence of Uniqueness.) Assume that

i) for solutions of

$$Lu = f(x), \quad x \in \Omega, \qquad (3.38)$$

it holds the a priori estimate

$$\|u\|_{W^{2,p}(\Omega)} \le C(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}); \qquad (3.39)$$

and ii) (uniqueness) if $Lu = 0$, then $u = 0$.

Then we have a better a priori estimate for the solutions of (3.38):

$$\|u\|_{W^{2,p}(\Omega)} \le C\,\|f\|_{L^p(\Omega)}. \qquad (3.40)$$

Proof. Suppose the inequality (3.40) is false. Then there exist a sequence of functions $\{f_k\}$ with $\|f_k\|_{L^p} = 1$ and a corresponding sequence of solutions $\{u_k\}$ of

$$Lu_k = f_k(x), \quad x \in \Omega, \qquad (3.41)$$

such that $\|u_k\|_{W^{2,p}} \to \infty$ as $k \to \infty$. It then follows from (3.39) that

$$\|u_k\|_{L^p} \to \infty \quad \text{as } k \to \infty.$$

Let

$$v_k = \frac{u_k}{\|u_k\|_{L^p}} \quad \text{and} \quad g_k = \frac{f_k}{\|u_k\|_{L^p}}.$$

Then $\|v_k\|_{L^p} = 1$ and $\|g_k\|_{L^p} \to 0$. From the equation

$$Lv_k = g_k(x), \quad x \in \Omega, \qquad (3.42)$$
it holds the a priori estimate

$$\|v_k\|_{W^{2,p}} \le C(\|v_k\|_{L^p} + \|g_k\|_{L^p}). \qquad (3.43)$$

Together, (3.42) and (3.43) imply that $\{v_k\}$ is bounded in $W^{2,p}$, and hence there exists a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v$ in $W^{2,p}$. By the compact Sobolev embedding, the same sequence converges strongly to $v$ in $L^p$, and hence $\|v\|_{L^p} = 1$. Passing to the limit in (3.42), we arrive at

$$Lv = 0, \quad x \in \Omega.$$

Therefore, by the uniqueness assumption, we must have $v = 0$. This contradicts the fact $\|v\|_{L^p} = 1$, and therefore completes the proof.

The Proof of Proposition 3.2.1. For convenience, we abbreviate $B_r(0)$ as $B_r$.

For the existence, it is well known that, if $f$ is continuous, then

$$u(x) = \int_{B_1}G(x,y)f(y)\,dy$$

is the solution of (3.36). Here $G(x,y)$ is the Green's function

$$G(x,y) = \Gamma(x-y) - h(x,y),$$

in which

$$\Gamma(x) = \begin{cases} \dfrac{1}{n(n-2)\omega_n}\,\dfrac{1}{|x|^{n-2}}, & n \ge 3 \\[2mm] \dfrac{1}{2\pi}\ln|x|, & n = 2, \end{cases}$$

is the fundamental solution of the Laplace equation, as introduced in the definition of the Newtonian potentials, and

$$h(x,y) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{\big(|x|\,\big|\frac{x}{|x|^2}-y\big|\big)^{n-2}}.$$

To see the uniqueness, assume that both $u$ and $v$ are solutions of (3.36), and let $w = u - v$. Then $w$ weakly satisfies

$$\begin{cases} \Delta w = 0, & x \in B_1 \\ w(x) = 0, & x \in \partial B_1. \end{cases}$$

Multiplying both sides of the equation by $w$ and then integrating on $B_1$, we derive

$$\int_{B_1}|\nabla w|^2dx = 0.$$

This, together with the zero boundary condition, implies $w = 0$ almost everywhere.
For $y$ away from the boundary, $h(x,y)$ is a harmonic function of $x$; the addition of $h(x,y)$ is to ensure that $G(x,y)$ vanishes on the boundary.

Now, for $f \in L^p(B_1)$, we start from a smaller ball $B_{1-\delta}$ for each $\delta > 0$. Let

$$u_\delta(x) = \int_{B_1}G(x,y)f_\delta(y)\,dy, \quad \text{where} \quad f_\delta(x) = \begin{cases} f(x), & x \in B_{1-\delta} \\ 0, & \text{elsewhere.} \end{cases}$$

We have obtained the needed estimate on the first part,

$$\int_{B_1}\Gamma(x-y)f(y)\,dy,$$

when we worked on the Newtonian potentials. For the second part, notice that $h(x,y)$ is smooth when $y$ is away from the boundary (see the exercise below). Hence, by our result for the Newtonian potentials, $D^2u_\delta$ is in $L^p(B_1)$. Applying the Poincaré inequality, we derive that $u_\delta$ is also in $L^p(B_1)$, and hence it is in $W^{2,p}(B_1)$. Moreover, due to the uniqueness and Lemma 3.2.1, we have the better a priori estimate

$$\|u_\delta\|_{W^{2,p}(B_1)} \le C\,\|f_\delta\|_{L^p(B_1)}. \qquad (3.44)$$

Pick a sequence $\{\delta_i\}$ tending to zero. Then the corresponding solutions $\{u_{\delta_i}\}$ form a Cauchy sequence in $W^{2,p}(B_1)$, because

$$\|u_{\delta_i} - u_{\delta_j}\|_{W^{2,p}(B_1)} \le C\,\|f_{\delta_i} - f_{\delta_j}\|_{L^p(B_1)} \to 0, \quad \text{as } i, j \to \infty.$$

Let

$$u_o = \lim_{i\to\infty}u_{\delta_i}.$$

Then $u_o$ is a solution of (3.36). Moreover, from (3.44), we see that the better a priori estimate (3.37) holds for $u_o$. This completes the proof of Proposition 3.2.1.

Exercise 3.2.1 Show that if $|y| \le 1 - \delta$ for some $\delta > 0$, then there exists a constant $c_o > 0$ such that

$$|x|\,\Big|\frac{x}{|x|^2} - y\Big| \ge c_o, \quad \forall x \in B_1(0).$$

Proof of the Regularity (Theorem 3.2.1) for $p \ge 2$.

The general frame of the proof is similar to the $W^{2,p}$ a priori estimate in the previous section. The key difference here is that, in the a priori estimate, we preassume that $u$ is a strong solution in $W^{2,p}$, while here we only assume that $u$ is a weak solution in $W^{1,p}$. After reading the proof of the a priori estimates, the reader may notice that if one can establish the $W^{2,p}$ regularity in a small
neighborhood of each point in $\Omega$, then, by an entirely similar argument as in obtaining the a priori estimates, one can derive the regularity on the whole of $\Omega$. In other words, we only need to establish the local regularity.

Let $u$ be a $W_0^{1,p}(\Omega)$ weak solution of (3.34). For any $x^o \in \Omega_{2\delta} := \{x \in \Omega : \mathrm{dist}(x,\partial\Omega) \ge 2\delta\}$, as in the proof of the a priori estimates, we define the cutoff function

$$\phi(s) = \begin{cases} 1, & \text{if } |s| \le 1 \\ 0, & \text{if } |s| \ge 2, \end{cases}$$

and let

$$\eta(x) = \phi\Big(\frac{|x-x^o|}{\delta}\Big) \quad \text{and} \quad w(x) = \eta(x)u(x).$$

Then $w$ is supported in $B_{2\delta}(x^o)$. With a change of coordinates, we may assume that $a_{ij}(x^o) = \delta_{ij}$. In the following, we again omit the summation signs, abbreviating $\sum_{i,j=1}^n a_{ij}$ as $a_{ij}$ and so on.

By (3.35) and a straightforward calculation, one can verify that, for any $v \in C_0^\infty(B_{2\delta}(x^o))$,

$$\int_{B_{2\delta}(x^o)}a_{ij}(x^o)w_{x_i}v_{x_j}\,dx = \int_{B_{2\delta}(x^o)}[a_{ij}(x^o) - a_{ij}(x)]w_{x_i}v_{x_j}\,dx + \int_{B_{2\delta}(x^o)}F(x)v\,dx,$$

where

$$F(x) = f(x) - (a_{ij}(x)\eta_{x_i}u)_{x_j} - b_i(x)u_{x_i} - c(x)u(x).$$

In other words, $w$ is a weak solution of

$$\begin{cases} a_{ij}(x^o)w_{x_ix_j} = ([a_{ij}(x^o) - a_{ij}(x)]w_{x_i})_{x_j} - F(x), & x \in B_{2\delta}(x^o) \\ w(x) = 0, & x \in \partial B_{2\delta}(x^o). \end{cases} \qquad (3.45)$$

One can easily verify that $F(x)$ is in $L^p(B_{2\delta}(x^o))$. Also, for any $v \in W^{2,p}$, obviously

$$([a_{ij}(x^o) - a_{ij}(x)]v_{x_i})_{x_j} \in L^p(B_{2\delta}(x^o)),$$

and, by virtue of Proposition 3.2.1, the operator $-\Delta$ is invertible from $L^p$ onto $W^{2,p}\cap W_0^{1,p}$. Consider the equation

$$v = Kv + (-\Delta)^{-1}F, \qquad (3.46)$$

where

$$(Kv)(x) = (-\Delta)^{-1}\big(([a_{ij}(x^o) - a_{ij}(x)]v_{x_i})_{x_j}\big). \qquad (3.47)$$
i.2 LaxMilgram Theorem. by 1.p (B2δ (xo )). ∞ Let {fk } ⊂ C0 (Ω) be a sequence of smooth approximations of f (x).2.2.1. Exercise 3. Therefore. for δ suﬃciently small. the operator K is a contracting map in W 2.e.47).48) Proof. This v is also a weak solution of equation (3. Let (Kv)(x) = −1 ([aij (xo ) − aij (x)]vxi )xj . Similar to the proof of Proposition 3.92 3 Regularity of Solutions Under the assumption that aij (x) are Lipschitz continuous. From the Regularity Theorem in this chapter.2 Assume that each aij (x) is Lipschitz continuous. For each k. there exists a constant 0 < γ < 1. K is a contracting map from W 2. If u = 0 whenever Lu = 0 (in the sense of W0 (Ω) 1. x ∈ B2δ (xo ). one can show (as an exercise) the uniqueness of the weak solution of (3. one can see that for α suﬃciently large.2 (Ω).2. Show that.46). Exercise 3.p (B2δ (xo )) to itself. This is actually a W 2. This completes the stage 1 of proof for Theorem 3. such that Kφ − Kψ W 2. we can write v = (L + α)−1 g where (L + α)−1 is a bounded linear operator from L2 (Ω) to W 2. one can verify that (see the exercise below). x ∈ Ω. then for any f ∈ L (Ω).2. we have v ∈ W 2.p (B2δ (xo )) weak solution for equation (3.p (B2δ (xo )).49) . for suﬃciently small δ.2. 3. such that Lu = f (x) and u W 2. 1. Therefore. (3. (3. there exist a unique u ∈ W0 (Ω) ∩ W 2.p p weak solution).2 Let p ≥ 2.2 (Ω). From the previous chapter. consider equation (L + α)w − αw = fk .p (B2δ (xo )) .3 Prove the uniqueness of the W 1.1. we must have w = v and thus conclude that w is also in W 2.p version of the Fredholm Alternative.p (Ω).2.p Lemma 3.2 The Case 1 < p < 2. there exists a unique solution v of equation (3.46).p (Ω) ≤C f Lp . there exist a unique W0 (Ω) weak solution v of the equation (L + α)v = g . for any given g ∈ L2 (Ω). Therefore.p (B2δ (xo )) ≤γ φ−ψ W 2.46) when p ≥ 2.
or equivalently,

$$(I - K)w = F, \qquad (3.50)$$

where $I$ is the identity operator, $K = \alpha(L+\alpha)^{-1}$, and $F = (L+\alpha)^{-1}f_k$. One can see that $K$ is a compact operator from $W_0^{1,2}(\Omega)$ into itself, because of the compact embedding of $W^{2,2}$ into $W^{1,2}$. Now we can apply the Fredholm Alternative: equation (3.50) possesses a unique solution if and only if the corresponding homogeneous equation $(I-K)w = 0$ has only the trivial solution; and the latter is just the assumption of the lemma. Hence, for each $k$, equation (3.50) possesses a unique solution $u_k$ in $W_0^{1,2}(\Omega)$, i.e.

$$u_k - \alpha(L+\alpha)^{-1}u_k = (L+\alpha)^{-1}f_k,$$

and hence, since both $\alpha(L+\alpha)^{-1}u_k$ and $(L+\alpha)^{-1}f_k$ are in $W^{2,2}(\Omega)$, we derive immediately that $u_k$ is also in $W^{2,2}(\Omega)$. Furthermore, since $f_k$ is in $L^p(\Omega)$ for any $p$, we conclude from the Regularity Theorem 3.2.1 that $u_k$ is in $W^{2,p}(\Omega)$. Moreover, due to the uniqueness, we have the improved a priori estimate (see Lemma 3.2.1)

$$\|u_k\|_{W^{2,p}} \le C\,\|f_k\|_{L^p}, \quad k = 1, 2, \cdots. \qquad (3.51)$$

Furthermore, for any integers $i$ and $j$,

$$L(u_i - u_j) = f_i(x) - f_j(x), \quad x \in \Omega,$$

and it follows that

$$\|u_i - u_j\|_{W^{2,p}} \le C\,\|f_i - f_j\|_{L^p} \to 0, \quad \text{as } i, j \to \infty.$$

This implies that $\{u_k\}$ is a Cauchy sequence in $W^{2,p}$, and hence it converges to an element $u$ in $W^{2,p}(\Omega)$. Now this function $u$ is the desired solution. This completes the proof of the lemma.

Proposition 3.2.2 (Uniqueness of the $W_0^{1,p}$ Weak Solution for $1 < p < \infty$.) Let $\Omega$ be a bounded domain in $\mathbb{R}^n$, and assume that the $a_{ij}(x)$ are Lipschitz continuous. If $u$ is a $W_0^{1,p}(\Omega)$ weak solution of

$$(a_{ij}(x)u_{x_i})_{x_j} = 0, \quad x \in \Omega,$$

then $u = 0$ almost everywhere.

Proof. When $p \ge 2$, we have proved the uniqueness. Now assume $1 < p < 2$. Suppose there exists a nontrivial weak solution $u$. Let $\{u_k\}$ be a sequence of $C_0^\infty(\Omega)$ functions such that

$$u_k \to u \quad \text{in } W_0^{1,p}(\Omega). \qquad (3.52)$$

For each $k$, consider the equation
$$(a_{ij}(x)v_{x_i})_{x_j} = F_k(x) := \nabla\cdot\Big(\frac{|\nabla u_k|^{p/q}\,\nabla u_k}{1 + |\nabla u_k|^2}\Big), \quad x \in \Omega, \qquad (3.53)$$

where $\frac{1}{p} + \frac{1}{q} = 1$. Obviously, for $p < 2$, $q$ is greater than $2$, and $F_k(x)$ is in $L^q(\Omega)$; hence Lemma 3.2.2 guarantees the existence of a solution $v_k$ of equation (3.53). Through integration by parts several times, one can verify that

$$0 = \int_\Omega v_k\,(a_{ij}(x)u_{x_i})_{x_j}\,dx = \int_\Omega\frac{|\nabla u_k|^{p/q}\,\nabla u_k}{1 + |\nabla u_k|^2}\cdot\nabla u\,dx. \qquad (3.54)$$

Since $u_k \to u$ in $W^{1,p}$, it is easy to see that

$$\frac{|\nabla u_k|^{p/q}\,\nabla u_k}{1 + |\nabla u_k|^2} \to \frac{|\nabla u|^{p/q}\,\nabla u}{1 + |\nabla u|^2} \quad \text{in } L^q(\Omega).$$

Taking into account that $u \in W_0^{1,p}(\Omega)$ and passing to the limit in (3.54), we finally deduce that

$$\int_\Omega\frac{|\nabla u|^{p/q}\,|\nabla u|^2}{1 + |\nabla u|^2}\,dx = 0,$$

and therefore we must have $u = 0$ almost everywhere. This completes the proof of the proposition.

After proving the uniqueness, the rest of the arguments are entirely the same as in stage 1. This completes stage 2 of the proof of Theorem 3.2.1.

3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity

Theorem 3.2.2 Given any pair $p, q > 1$ and $f \in L^q(\Omega)$, if $u \in W_0^{1,p}(\Omega)$ is a weak solution of

$$Lu = f(x), \quad x \in \Omega, \qquad (3.55)$$

then $u \in W^{2,q}(\Omega)$, and it holds the a priori estimate

$$\|u\|_{W^{2,q}(\Omega)} \le C(\|u\|_{L^q} + \|f\|_{L^q}). \qquad (3.56)$$

Proof. Case i) If $q \le p$, then obviously $u$ is also a $W_0^{1,q}(\Omega)$ weak solution; since we already have the uniqueness for $W_0^{1,q}$ weak solutions, the results follow from the Regularity Theorem 3.2.1.

Case ii) If $q > p$, then we use the fact that $u$ is a $W^{1,p}$ weak solution; by the Regularity Theorem, $u \in W^{2,p}$, and then, by the Sobolev embedding $W^{2,p} \hookrightarrow W^{1,p_1}$, $u$ is a $W^{1,p_1}$ weak solution with

$$p_1 = \frac{np}{n-p}.$$

If $p_1 \ge q$, we are done, by Case i). If $p_1 < q$, then by the Regularity Theorem again, $u \in W^{2,p_1}$. Repeating this process, after a few steps, we will arrive at a $p_k \ge q$. This completes the proof.
From this theorem, we can remove the restriction $p \ge 2$ in the Fredholm Alternative: we have stated it in Lemma 3.2.2 for $2 \le p < \infty$; for $1 < p < 2$, after we have obtained the $W^{2,p}$ regularity in this case, the proof is entirely the same as for Lemma 3.2.2.

Theorem 3.2.3 ($W^{2,p}$ Version of the Fredholm Alternative). Let $1 < p < \infty$. If $u = 0$ whenever $Lu = 0$ (in the sense of a $W_0^{1,p}(\Omega)$ weak solution), then for any $f \in L^p(\Omega)$, there exists a unique $u \in W_0^{1,p}(\Omega)\cap W^{2,p}(\Omega)$ such that $Lu = f(x)$ and

$$\|u\|_{W^{2,p}(\Omega)} \le C\,\|f\|_{L^p}.$$

From this theorem, we can derive immediately the equivalence between the uniqueness of weak solutions in any two spaces $W_0^{1,p}$ and $W_0^{1,q}$:

Corollary 3.2.1 For any pair $p, q > 1$, the following are equivalent:

i) If $u \in W_0^{1,p}(\Omega)$ is a weak solution of $Lu = 0$, then $u = 0$.

ii) If $u \in W_0^{1,q}(\Omega)$ is a weak solution of $Lu = 0$, then $u = 0$.

3.3 Regularity Lifting

3.3.1 Bootstrap

Assume that $u \in H^1(\Omega)$ is a weak solution of

$$-\Delta u = u^p(x), \quad x \in \Omega. \qquad (3.57)$$

If the power $p$ is less than the critical number $\frac{n+2}{n-2}$, then the regularity of $u$ can be enhanced repeatedly through the equation, until we reach $u \in C^\alpha(\Omega)$. Finally, the Schauder estimate will lift $u$ to $C^{2+\alpha}(\Omega)$, and hence to be a classical solution. We call this the "Bootstrap Method", as will be illustrated below.

By the Sobolev embedding, $u \in L^{\frac{2n}{n-2}}(\Omega)$. For each fixed $p < \frac{n+2}{n-2}$, write

$$p = \frac{(n+2) - \delta(n-2)}{n-2}$$

for some $\delta > 0$. Since $u \in L^{\frac{2n}{n-2}}(\Omega)$, we have

$$u^p \in L^{\frac{2n}{p(n-2)}}(\Omega) = L^{\frac{2n}{(n+2)-\delta(n-2)}}(\Omega).$$

The equation (3.57) then boosts the solution $u$ to $W^{2,q_1}(\Omega)$ with

$$q_1 = \frac{2n}{(n+2) - \delta(n-2)}.$$
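The exponent bookkeeping of this bootstrap can be tabulated mechanically. A small sketch of ours, in exact rational arithmetic: with $n = 6$ the critical power is $\frac{n+2}{n-2} = 2$, and taking $p = \frac{3}{2}$ (so $\delta = \frac{1}{2}$) each round multiplies the integrability exponent by $\frac{1}{1-\delta} = 2$, while at the critical power the exponent never moves:

```python
from fractions import Fraction

def bootstrap_exponents(n, p, max_steps=50):
    """Iterate:  u in L^s  =>  u^p in L^{s/p}  =>  u in W^{2, s/p}
       =>  (Sobolev)  u in L^{n q/(n - 2q)} with q = s/p, while 2q < n."""
    n, p = Fraction(n), Fraction(p)
    s = 2 * n / (n - 2)                  # start: H^1 -> L^{2n/(n-2)}
    history = [s]
    for _ in range(max_steps):
        q = s / p
        if 2 * q >= n:                   # W^{2,q} now reaches every L^s / C^alpha
            return history, True
        s = n * q / (n - 2 * q)
        history.append(s)
    return history, False

hist, done = bootstrap_exponents(6, Fraction(3, 2))   # subcritical: delta = 1/2
print(done, hist)          # terminates; each step doubles the exponent: 3 -> 6

hist_c, done_c = bootstrap_exponents(6, 2)            # critical power p = 2
print(done_c, hist_c[:4])  # never terminates; stuck at s = 2n/(n-2) = 3
```

The stalled critical case is exactly the situation that motivates the Regularity Lifting Method of the next subsection.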
By the Sobolev embedding

$$W^{2,q_1}(\Omega) \hookrightarrow L^{\frac{nq_1}{n-2q_1}}(\Omega),$$

we have $u \in L^{s_1}(\Omega)$ with

$$s_1 = \frac{nq_1}{n-2q_1} = \frac{2n}{n-2}\cdot\frac{1}{1-\delta}.$$

The integrable power of $u$ has been amplified by $\frac{1}{1-\delta}$ ($> 1$) times. Now,

$$u^p \in L^{q_2}(\Omega) \quad \text{with} \quad q_2 = \frac{s_1}{p} = \frac{2n}{(1-\delta)[n+2-\delta(n-2)]},$$

and hence, through the equation, $u \in W^{2,q_2}(\Omega)$. If $2q_2 \ge n$, i.e. if

$$(1-\delta)[n+2-\delta(n-2)] - 4 \le 0,$$

then we have either $u \in L^q(\Omega)$ for any $q$, or $u \in C^\alpha(\Omega)$. If $2q_2 < n$, then $u \in L^{s_2}(\Omega)$ with

$$s_2 = \frac{2n}{(1-\delta)[n+2-\delta(n-2)] - 4}.$$

It is easy to verify that

$$s_2 \ge \frac{1}{1-\delta}\,s_1,$$

so the integrable power of $u$ has again been amplified by at least $\frac{1}{1-\delta}$ times. Continuing this process, after finitely many steps, we boost $u$ to $L^q(\Omega)$ for any $q$, and hence $u \in C^\alpha(\Omega)$ for some $0 < \alpha < 1$. Finally, by the Schauder estimate, $u \in C^{2+\alpha}(\Omega)$; that is, it is a classical solution. We are done.

From the above argument, one can see that, if $p = \frac{n+2}{n-2}$, the so-called "critical power", then $\delta = 0$, and the integrable power of the solution $u$ cannot be boosted this way. To deal with this situation, we introduce a "Regularity Lifting Method" in the next subsection.

3.3.2 Regularity Lifting Theorem

Here, we present a simple method to boost the regularity of solutions. It has been used extensively, in various forms, in the authors' previous works. The essence of the approach is well known in the analysis community; however, the version we present here contains some new developments. It is much more general and very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, of obtaining regularity. Essentially, it is based on the following Regularity Lifting Theorem.
Let $Z$ be a given vector space, and let $\|\cdot\|_X$ and $\|\cdot\|_Y$ be two norms on $Z$. Define a new norm $\|\cdot\|_Z$ by

$$\|\cdot\|_Z = \sqrt[p]{\|\cdot\|_X^p + \|\cdot\|_Y^p}.$$

Here, one can choose $p$, $1 \le p \le \infty$, according to what one needs. For simplicity, we assume that $Z$ is complete with respect to the norm $\|\cdot\|_Z$. Let $X$ and $Y$ be the completions of $Z$ under $\|\cdot\|_X$ and $\|\cdot\|_Y$, respectively. It's easy to see that $Z = X\cap Y$.

Theorem 3.3.1 Let $T$ be a contraction map from $X$ into itself and from $Y$ into itself. Assume that $f \in X$, and that there exists a function $g \in Z$ such that $f = Tf + g$. Then $f$ also belongs to $Z$.

Proof.

Step 1. First we show that $T : Z \to Z$ is a contraction. Since $T$ is a contraction on $X$, there exists a constant $\theta_1$, $0 < \theta_1 < 1$, such that, for any $h_1, h_2 \in Z$,

$$\|Th_1 - Th_2\|_X \le \theta_1\,\|h_1 - h_2\|_X.$$

Similarly, we can find a constant $\theta_2$, $0 < \theta_2 < 1$, such that

$$\|Th_1 - Th_2\|_Y \le \theta_2\,\|h_1 - h_2\|_Y.$$

Let $\theta = \max\{\theta_1, \theta_2\}$. Then

$$\|Th_1 - Th_2\|_Z = \sqrt[p]{\|Th_1 - Th_2\|_X^p + \|Th_1 - Th_2\|_Y^p} \le \sqrt[p]{\theta_1^p\,\|h_1 - h_2\|_X^p + \theta_2^p\,\|h_1 - h_2\|_Y^p} \le \theta\,\|h_1 - h_2\|_Z.$$

Step 2. Since $T : Z \to Z$ is a contraction, given $g \in Z$, we can find a solution $h \in Z$ of $h = Th + g$. On the other hand, $T : X \to X$ is also a contraction, and $g \in Z \subset X$, so the equation $x = Tx + g$ has a unique solution in $X$. Thus $f = h \in Z$, since both $h$ and $f$ are solutions of $x = Tx + g$ in $X$. This completes the proof.

3.3.3 Applications to PDEs

Now we explain how the "Regularity Lifting Theorem" proved in the previous subsection can be used to boost the regularity of weak solutions involving the critical exponent:

$$-\Delta u = u^{\frac{n+2}{n-2}}. \qquad (3.58)$$

Still assume that $\Omega$ is a smooth bounded domain in $\mathbb{R}^n$ with $n \ge 3$. Let $u \in H_0^1(\Omega)$ be a weak solution of equation (3.58). Then, by the Sobolev embedding, $u \in L^{\frac{2n}{n-2}}(\Omega)$.
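Before applying the theorem, here is a finite-dimensional illustration of ours of the mechanism in Theorem 3.3.1: a linear map that contracts in two different norms (stand-ins for $\|\cdot\|_X$ and $\|\cdot\|_Y$) has one and the same fixed point of $x = Tx + g$ with respect to either norm, so the solution automatically lives in the intersection:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (5, 5))
# Scale so that T x = A x + g contracts in BOTH the l^1 norm (||A||_1 < 1,
# stand-in for X) and the l^inf norm (||A||_inf < 1, stand-in for Y).
A *= 0.9 / max(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf))
g = rng.standard_normal(5)
assert np.linalg.norm(A, 1) < 1.0 and np.linalg.norm(A, np.inf) < 1.0

x = np.zeros(5)                      # Banach fixed-point iteration
for _ in range(500):
    x = A @ x + g

x_exact = np.linalg.solve(np.eye(5) - A, g)
print(np.max(np.abs(x - x_exact)))   # one and the same fixed point
```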
We can split the right hand side of (3.58) into two parts:

   u^{(n+2)/(n−2)} = u^{4/(n−2)} · u := a(x) u.

Then obviously a(x) ∈ L^{n/2}(Ω), since [a(x)]^{n/2} = u^{2n/(n−2)} ∈ L^1(Ω). Here, more generally, we consider the regularity of weak solutions of the following equation

   −Δu = a(x) u + b(x).   (3.59)

Theorem 3.3.2 Assume that a(x) ∈ L^{n/2}(Ω) and b(x) ∈ L^{n/2}(Ω). Let u be any weak solution of equation (3.59) in H^1_0(Ω). Then u is in L^p(Ω) for any 1 ≤ p < ∞.

Remark 3.3.1 Even in the best situation when a(x) ≡ 0, since we only assume b(x) ∈ L^{n/2}(Ω), we can only have u ∈ W^{2,n/2}(Ω) via the W^{2,p} estimate, and hence derive that u ∈ L^p(Ω) for any p < ∞ by Sobolev embedding.

From the above theorem, we obtain

Corollary 3.3.1 If u is an H^1_0(Ω) weak solution of equation (3.58), then u is in L^p(Ω) for any 1 ≤ p < ∞.

Assume that u is an H^1_0(Ω) weak solution of (3.58). By the corollary, we first conclude that u is in L^q(Ω) for any 1 < q < ∞. Then, by the regularity theorem of the previous section, u is in W^{2,q}(Ω) for any q ∈ (1, ∞). This implies that u ∈ C^{1,α}(Ω) via the Sobolev embedding. Finally, from the classical C^{2,α} regularity (the Schauder estimate), we arrive at u ∈ C^{2,α}(Ω), and hence it is a classical solution.

Let G(x, y) be the Green's function of −Δ in Ω. Then obviously

   0 < G(x, y) < Γ(x − y) := C_n / |x − y|^{n−2}.

Define

   (T f)(x) = (−Δ)^{−1} f(x) = ∫_Ω G(x, y) f(y) dy.

Here, we may extend f to be zero outside Ω when necessary, and it follows that, for any q ∈ (1, n/2),

   ||T f||_{L^{nq/(n−2q)}(Ω)} ≤ ||Γ ∗ f||_{L^{nq/(n−2q)}(R^n)} ≤ C ||f||_{L^q(Ω)}.

Proof of Theorem 3.3.2. For a positive number A, define

   a_A(x) = a(x) if |a(x)| ≥ A, and a_A(x) = 0 otherwise; a_B(x) = a(x) − a_A(x).
Let

   (T_A u)(x) = ∫_Ω G(x, y) a_A(y) u(y) dy.

Then equation (3.59) can be written as

   u(x) = (T_A u)(x) + F_A(x),

where

   F_A(x) = ∫_Ω G(x, y) [a_B(y) u(y) + b(y)] dy =: F_A^2(x) + F_A^1(x),

with F_A^1(x) := ∫_Ω G(x, y) b(y) dy and F_A^2(x) := ∫_Ω G(x, y) a_B(y) u(y) dy. We will show that

i) T_A is a contracting operator from L^p(Ω) to L^p(Ω) for A large, and
ii) F_A(x) is in L^p(Ω).

Then by the "Regularity Lifting Theorem", we derive immediately that u ∈ L^p(Ω).

i) The Estimate of the Operator T_A. For any n/(n−2) < p < ∞, one can choose q, 1 < q < n/2, such that p = nq/(n−2q). Then, by the Hölder inequality, we derive

   ||T_A u||_{L^p(Ω)} ≤ ||Γ ∗ (a_A u)||_{L^p(R^n)} ≤ C ||a_A u||_{L^q(Ω)} ≤ C ||a_A||_{L^{n/2}(Ω)} ||u||_{L^p(Ω)}.

Since a(x) ∈ L^{n/2}(Ω), one can choose a large number A, such that

   C ||a_A||_{L^{n/2}(Ω)} ≤ 1/2,

and hence arrive at

   ||T_A u||_{L^p(Ω)} ≤ (1/2) ||u||_{L^p(Ω)}.

That is, T_A : L^p(Ω) → L^p(Ω) is a contracting operator.

ii) The Integrability of F_A(x).

(a) First consider F_A^1(x) := ∫_Ω G(x, y) b(y) dy. For any n/(n−2) < p < ∞, choose 1 < q < n/2 such that p = nq/(n−2q). Extending b to be zero outside Ω, we have

   ||F_A^1||_{L^p(Ω)} ≤ ||Γ ∗ b||_{L^p(R^n)} ≤ C ||b||_{L^q(Ω)} < ∞,

since Ω is bounded, q < n/2, and b ∈ L^{n/2}(Ω). Hence F_A^1(x) ∈ L^p(Ω), for any 1 ≤ p < ∞.
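The key point in i) — that the "large" part a_A of an integrable coefficient has small norm once the cutoff level A is high — is an instance of the dominated convergence theorem. A toy numerical sketch (ours; it uses the L^1 norm on (0, 1] and the sample function x^{−1/2} purely for illustration):

```python
def lp_norm(values, weights, p):
    """Discrete L^p norm: (Σ w_i |v_i|^p)^(1/p)."""
    return sum(w * abs(v) ** p for v, w in zip(values, weights)) ** (1.0 / p)

# A toy coefficient a(x) = x^(-1/2) on (0,1], sampled on a uniform grid:
# it is integrable (∫ a = 2) yet unbounded near 0.
N = 10_000
xs = [(k + 0.5) / N for k in range(N)]
w = [1.0 / N] * N
a = [x ** -0.5 for x in xs]

def a_cut(A):
    # keep only the part where |a| >= A  (this is a_A in the text)
    return [v if abs(v) >= A else 0.0 for v in a]

norms = [lp_norm(a_cut(A), w, 1) for A in (1, 10, 100)]
# The norm of the cut-off part shrinks as A grows (here ∫ a_A = 2/A exactly),
# which is what lets one force C * ||a_A|| <= 1/2 and make T_A a contraction.
assert norms[0] > norms[1] > norms[2]
assert norms[2] < 0.05
```

The remaining part a_B = a − a_A is then bounded by A, which is exactly what step (b) of the proof exploits.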
(b) Then estimate

   F_A^2(x) = ∫_Ω G(x, y) a_B(y) u(y) dy.

By the boundedness of a_B(x), we have

   ||F_A^2||_{L^p(Ω)} ≤ C ||a_B u||_{L^q(Ω)} ≤ C ||u||_{L^q(Ω)}, with p = nq/(n−2q).

Noticing that u ∈ L^q(Ω) for any 1 < q ≤ 2n/(n−2), and that p = nq/(n−2q) → 2n/(n−6) as q → 2n/(n−2) when n > 6, we have F_A^2(x) ∈ L^p(Ω) for the following values of p and dimension n:

   1 < p < ∞ when 3 ≤ n ≤ 6;
   1 < p < 2n/(n−6) when n > 6.

Applying the Regularity Lifting Theorem, we arrive at

   u ∈ L^p(Ω), for any 1 < p < ∞ if 3 ≤ n ≤ 6;
   u ∈ L^p(Ω), for any 1 < p < 2n/(n−6) if n > 6.

Starting from u ∈ L^{2n/(n−2)}(Ω), the only thing that prevents us from reaching the full range of p is the estimate of F_A^2(x). However, now from this point on — with the new starting point u ∈ L^q(Ω) for any 1 < q < 2n/(n−6) — through an entirely similar argument as above, we arrive at

   u ∈ L^p(Ω), for any 1 < p < ∞ if 3 ≤ n ≤ 10;
   u ∈ L^p(Ω), for any 1 < p < 2n/(n−10) if n > 10.

In each step, we add 4 to the dimension n for which u ∈ L^p(Ω) holds for all 1 ≤ p < ∞. Continuing this way, we prove our theorem for all dimensions. This completes the proof of the Theorem. ∎

Now we will use the similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem. Let L be a second order uniformly elliptic operator in divergence form

   Lu = − Σ_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u.   (3.60)
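The dimension-climbing in this proof can be tabulated exactly. A sketch in exact rational arithmetic (ours; the dimension n = 20 is an arbitrary high-dimensional example):

```python
from fractions import Fraction

def next_cap(n, q_cap):
    """Given u ∈ L^q for all q < q_cap, the estimate of F_A^2 lifts u to
    L^p for all p < n*q_cap/(n - 2*q_cap), or to every p once 2*q_cap >= n."""
    if 2 * q_cap >= n:
        return None
    return n * q_cap / (n - 2 * q_cap)

n = 20
cap = Fraction(2 * n, n - 2)     # starting point: u ∈ L^{2n/(n-2)}
caps = [cap]
while cap is not None:
    cap = next_cap(n, cap)
    if cap is not None:
        caps.append(cap)

# Successive bounds are 2n/(n-2), 2n/(n-6), 2n/(n-10), ... : each round the
# dimension that is handled "in full" grows by 4, and the loop terminates.
assert caps[:3] == [Fraction(2*n, n-2), Fraction(2*n, n-6), Fraction(2*n, n-10)]
```

Since the loop always terminates, finitely many rounds of the lifting argument cover every dimension n ≥ 3, which is the closing step of the proof above.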
Theorem 3.3.3 (W^{2,p} Regularity under Weaker Conditions) Let Ω be a bounded domain in R^n and 1 < p < ∞. Assume that

i) a_{ij}(x) ∈ W^{1,n+δ}(Ω) for some δ > 0;

ii) b_i(x) ∈ L^n(Ω) if 1 < p < n; b_i(x) ∈ L^{n+δ}(Ω) if p = n; b_i(x) ∈ L^p(Ω) if n < p < ∞;

iii) c(x) ∈ L^{n/2}(Ω) if 1 < p < n/2; c(x) ∈ L^{n/2+δ}(Ω) if p = n/2; c(x) ∈ L^p(Ω) if n/2 < p < ∞.

Suppose f ∈ L^p(Ω) and u is a W^{1,p}_0(Ω) weak solution of

   Lu = f(x), x ∈ Ω.   (3.61)

Then u is in W^{2,p}(Ω).

Proof. Let

   L_o u := − Σ_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j}.

By the results of Section 3.2, the equation L_o u = 0 has only the trivial solution, and it follows that the operator L_o is invertible. Denote its inverse by L_o^{−1}; it is a bounded linear operator from L^p(Ω) to W^{2,p}(Ω).

Write equation (3.61) as

   − Σ_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} = − Σ_{i=1}^n b_i(x) u_{x_i} − c(x) u + f(x).

For each i and each positive number A, define

   b_{iA}(x) = b_i(x), if |b_i(x)| ≥ A; b_{iA}(x) = 0, elsewhere; and b_{iB}(x) := b_i(x) − b_{iA}(x).

Similarly for c_A(x) and c_B(x). Now we can rewrite equation (3.61) as

   u = T_A u + F_A(u, x),   (3.62)

where

   T_A u = −L_o^{−1} ( Σ_i b_{iA}(x) u_{x_i} + c_A(x) u )

and

   F_A(u, x) = −L_o^{−1} ( Σ_i b_{iB}(x) u_{x_i} + c_B(x) u ) + L_o^{−1} f(x).

We first show that F_A(u, ·) ∈ W^{2,p}(Ω) for any u ∈ W^{1,p}(Ω). Since b_{iB}(x) and c_B(x) are bounded, one can easily verify that

   Σ_i b_{iB}(x) u_{x_i} + c_B(x) u ∈ L^p(Ω), ∀ u ∈ W^{1,p}(Ω),   (3.63)

and f ∈ L^p(Ω). Taking into account that L_o^{−1} is a bounded operator from L^p(Ω) to W^{2,p}(Ω), one sees that F_A(u, ·) ∈ W^{2,p}(Ω).

Then we prove that T_A is a contracting operator from W^{2,p}(Ω) to W^{2,p}(Ω). In fact, by the Hölder inequality and the Sobolev embedding, for any v ∈ W^{2,p}(Ω),

   ||b_{iA} Dv||_{L^p} ≤ C · ||b_{iA}||_{L^n} ||v||_{W^{2,p}}, if 1 < p < n;
   ||b_{iA} Dv||_{L^p} ≤ C · ||b_{iA}||_{L^{n+δ}} ||v||_{W^{2,p}}, if p = n;
   ||b_{iA} Dv||_{L^p} ≤ C · ||b_{iA}||_{L^p} ||v||_{W^{2,p}}, if n < p < ∞;   (3.64)

and

   ||c_A v||_{L^p} ≤ C · ||c_A||_{L^{n/2}} ||v||_{W^{2,p}}, if 1 < p < n/2;
   ||c_A v||_{L^p} ≤ C · ||c_A||_{L^{n/2+δ}} ||v||_{W^{2,p}}, if p = n/2;
   ||c_A v||_{L^p} ≤ C · ||c_A||_{L^p} ||v||_{W^{2,p}}, if n/2 < p < ∞.   (3.65)

Under the conditions of the theorem, the first norms on the right hand side of (3.64) and (3.65) — for instance ||c_A||_{L^{n/2}} — can be made arbitrarily small if A is sufficiently large. Hence for any ε > 0, there exists an A > 0 such that

   || Σ_i b_{iA}(x) v_{x_i} + c_A(x) v ||_{L^p} ≤ ε ||v||_{W^{2,p}}, ∀ v ∈ W^{2,p}(Ω).

Since L_o^{−1} is a bounded operator from L^p(Ω) to W^{2,p}(Ω), we deduce that T_A is a contraction mapping from W^{2,p}(Ω) to W^{2,p}(Ω). It then follows from the Contraction Mapping Theorem that there exists a v ∈ W^{2,p}(Ω) such that

   v = T_A v + F_A(u, x).

Now u and v satisfy the same equation, and by uniqueness, we must have u = v. Therefore we derive that u ∈ W^{2,p}(Ω). This completes the proof of the theorem. ∎
3.3.4 Applications to Integral Equations

As another application of the Regularity Lifting Theorem, we consider the following integral equation

   u(x) = ∫_{R^n} u(y)^{(n+α)/(n−α)} / |x − y|^{n−α} dy, x ∈ R^n,   (3.66)

where α is any real number satisfying 0 < α < n. It arises as an Euler–Lagrange equation for a functional under a constraint in the context of the Hardy–Littlewood–Sobolev inequalities. It is also closely related to the following family of semilinear partial differential equations

   (−Δ)^{α/2} u = u^{(n+α)/(n−α)}, x ∈ R^n.   (3.67)

In the special case α = 2, (3.67) becomes

   −Δu = u^{(n+2)/(n−2)}, x ∈ R^n, n ≥ 3,

which is the well-known Yamabe equation. Actually, one can prove (see [CLO]) that the integral equation (3.66) and the PDE (3.67) are equivalent.

More generally, we consider the following integral equation

   u(x) = ∫_{R^n} K(y) |u(y)|^{p−1} u(y) / |x − y|^{n−α} dy, x ∈ R^n,   (3.68)

for some 1 < p < ∞. In the context of the Hardy–Littlewood–Sobolev inequality, it is natural to start with u ∈ L^{2n/(n−α)}(R^n). We will use the Regularity Lifting Theorem to boost u to L^q(R^n) for any 1 < q < ∞. We prove

Theorem 3.3.4 Let u be a solution of (3.68). Assume that

   u ∈ L^{q_o}(R^n) for some q_o > n/(n−α),   (3.69)

and that

   |K(x)| ≤ M, and ∫_{R^n} ( |K(y)| |u(y)|^{p−1} )^{n/α} dy < ∞.   (3.70)

Then u is in L^q(R^n) for any 1 < q < ∞, and hence in L^∞(R^n).

Remark 3.3.2

i) If p > n/(n−α), |K(y)| ≤ C/(1 + |y|^γ) for some γ < α, and

   u ∈ L^{n(p−1)/α}(R^n),   (3.71)

then conditions (3.69) and (3.70) are satisfied.

ii) The last condition in (3.71) is somewhat sharp, in the sense that if it is violated, then equation (3.68) with K(x) ≡ 1 possesses singular solutions such as

   u(x) = c / |x|^{α/(p−1)}.

Proof of Theorem 3.3.4. Define the linear operator

   (T_v w)(x) = ∫_{R^n} K(y) |v(y)|^{p−1} w(y) / |x − y|^{n−α} dy.

For any real number a > 0, define

   u_a(x) = u(x), if |u(x)| > a, or if |x| > a; u_a(x) = 0, otherwise,

and let u_b(x) = u(x) − u_a(x). Then, since u satisfies equation (3.68), one can verify that u_a satisfies the equation

   u_a = T_{u_a} u_a + g(x)   (3.72)

with the function

   g(x) = ∫_{R^n} K(y) |u_b(y)|^{p−1} u_b(y) / |x − y|^{n−α} dy − u_b(x).

Under the second part of the condition (3.70), it is obvious that g(x) ∈ L^∞ ∩ L^{q_o}.

For any q > n/(n−α), we first apply the Hardy–Littlewood–Sobolev inequality to obtain

   ||T_{u_a} w||_{L^q} ≤ C(α, n, q) || K |u_a|^{p−1} w ||_{L^{nq/(n+αq)}}.   (3.73)

Then write H(x) = |K(x)| |u_a(x)|^{p−1}, and apply the Hölder inequality to the right hand side of the above inequality:

   ∫_{R^n} |H(y)|^{nq/(n+αq)} |w(y)|^{nq/(n+αq)} dy ≤ ( ∫_{R^n} |H(y)|^{(nq/(n+αq))·r} dy )^{1/r} ( ∫_{R^n} |w(y)|^{(nq/(n+αq))·s} dy )^{1/s}.

Here we have chosen

   s = (n + αq)/n and r = (n + αq)/(αq),

so that (nq/(n+αq))·r = n/α and (nq/(n+αq))·s = q. Raising to the power (n+αq)/(nq), this yields

   ||T_{u_a} w||_{L^q} ≤ C ( ∫_{R^n} |H(y)|^{n/α} dy )^{α/n} ||w||_{L^q}.   (3.74)

It follows from (3.73) and (3.74) that

   ||T_{u_a} w||_{L^q} ≤ C(α, n, q) ( ∫_{R^n} ( |K(y)| |u_a(y)|^{p−1} )^{n/α} dy )^{α/n} ||w||_{L^q}.   (3.75)

By (3.70), we deduce, for sufficiently large a,

   ||T_{u_a} w||_{L^q} ≤ (1/2) ||w||_{L^q},   (3.76)

since the portion of the convergent integral in (3.70) taken over the set {x : |u(x)| > a} ∪ {x : |x| > a} tends to 0 as a → ∞. Applying (3.76) to both the case q = q_o and the case q = p_o > q_o, and using the Contraction Mapping Theorem, we see that the equation

   w = T_{u_a} w + g(x)   (3.77)

has a unique solution in both L^{q_o} and L^{p_o} ∩ L^{q_o}. By (3.72), u_a is a solution of (3.77) in L^{q_o}. Let w be the solution of (3.77) in L^{p_o} ∩ L^{q_o}; then w is also a solution in L^{q_o}. By the uniqueness, we must have u_a = w ∈ L^{p_o} ∩ L^{q_o}, for any p_o > n/(n−α). So does u, since u − u_a = u_b is bounded and compactly supported. This completes the proof of the Theorem. ∎
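The Hölder-exponent bookkeeping behind (3.73)–(3.74) can be verified in exact arithmetic. A small sketch (ours; n = 5, α = 1 and q = 7/3 are arbitrary admissible choices with q > n/(n−α)):

```python
from fractions import Fraction

n, alpha = 5, 1
q = Fraction(7, 3)                          # any q > n/(n - alpha) = 5/4
base = Fraction(n) * q / (n + alpha * q)    # the Lebesgue exponent nq/(n+αq)

s = (n + alpha * q) / Fraction(n)           # the Hölder pair chosen in the proof
r = (n + alpha * q) / (alpha * q)

assert 1 / r + 1 / s == 1                   # (r, s) are conjugate exponents
assert base * r == Fraction(n, alpha)       # H = K|u_a|^{p-1} lands in L^{n/α}
assert base * s == q                        # while w stays in L^q
```

This is exactly why the smallness in (3.76) is governed by the L^{n/α} integral appearing in condition (3.70).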
4 Preliminary Analysis on Riemannian Manifolds

4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
   4.4.1 Curvature of Plane Curves
   4.4.2 Curvature of Surfaces in R^3
   4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
   4.5.1 Higher Order Covariant Derivatives and the Laplace–Beltrami Operator
   4.5.2 Integrals
4.6 Sobolev Embeddings

According to Einstein, the Universe we live in is a curved space. We call this curved space a manifold, which is a generalization of the flat Euclidean space. Roughly speaking, each neighborhood of a point on a manifold is homeomorphic to an open set in a Euclidean space, and the manifold as a whole is obtained by pasting together pieces of Euclidean spaces.

In this chapter, we will introduce the most basic concepts of differentiable manifolds, tangent spaces, Riemannian metrics, and curvatures. These will prepare the reader for the study of the equations on prescribing Gaussian and scalar curvature. We will try to be as intuitive and brief as possible. For more rigorous arguments and detailed discussions, please see, for example, the book Riemannian Geometry by do Carmo [Ca].

4.1 Differentiable Manifolds

A simple example of a two dimensional differentiable manifold is a regular surface in R^3. Intuitively, it is a union of open sets of R^2, organized in such a
way that at the intersection of any two open sets, the change from one to the other can be made in a differentiable manner. More precisely, we have

Definition 4.1.1 (Differentiable Manifolds) A differentiable n-dimensional manifold M is

(i) a union of open sets, M = ∪_α V_α, together with a family of homeomorphisms φ_α from V_α to an open set φ_α(V_α) in R^n, such that

(ii) for any pair α and β with V_α ∩ V_β = U ≠ ∅, the images φ_α(U) and φ_β(U) are open sets in R^n and the mappings φ_β ∘ φ_α^{−1} are differentiable; and

(iii) the family {V_α, φ_α} is maximal relative to the conditions (i) and (ii).

In each open set U ⊂ M, let φ_U be the homeomorphism from U to φ_U(U) in R^n. For each p ∈ U, if φ_U(p) = (x_1, x_2, · · ·, x_n) in R^n, then we can define the coordinates of p to be (x_1, x_2, · · ·, x_n). This is called a local coordinates of p, and (U, φ_U) is called a coordinates chart. If a family of coordinates charts that covers M is well made, so that the transition from one chart to another is differentiable as in condition (ii), then it is called a differentiable structure of M. Given a differentiable structure on M, one can easily complete it into a maximal one, by taking the union of all the coordinates charts of M that are compatible with (in the sense of condition (ii)) the ones in the given structure.

A trivial example of n-dimensional manifolds is the Euclidean space R^n, with the differentiable structure given by the identity. The following are two nontrivial examples.

Example 1. The n-dimensional unit sphere

   S^n = {x ∈ R^{n+1} | x_1^2 + · · · + x_{n+1}^2 = 1}.

Take the unit circle S^1 for instance. One can use the following four coordinates charts:

   V_1 = {x ∈ S^1 | x_2 > 0}, φ_1(x) = x_1;
   V_2 = {x ∈ S^1 | x_2 < 0}, φ_2(x) = x_1;
   V_3 = {x ∈ S^1 | x_1 > 0}, φ_3(x) = x_2;
   V_4 = {x ∈ S^1 | x_1 < 0}, φ_4(x) = x_2.

Obviously, S^1 is the union of the four open sets V_1, V_2, V_3, and V_4. On the intersection of any two open sets, say on V_1 ∩ V_3, we have x_1 > 0, x_2 > 0, and hence

   x_2 = √(1 − x_1^2), x_1 = √(1 − x_2^2).

They are both differentiable functions. The same is true on the other intersections. Therefore, with this differentiable structure, S^1 is a 1-dimensional differentiable manifold; similarly, S^n is an n-dimensional one.
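Condition (ii) for the circle charts can be checked directly. A numerical sketch (ours) writes out φ_1, φ_3 and their inverses on V_1 ∩ V_3 = {x_1 > 0, x_2 > 0}:

```python
import math

# Charts on the unit circle from Example 1:
#   V1 = {x2 > 0}, phi1(x) = x1   and   V3 = {x1 > 0}, phi3(x) = x2.
phi1 = lambda x: x[0]
phi3 = lambda x: x[1]
phi1_inv = lambda t: (t, math.sqrt(1 - t * t))        # lands in V1
phi3_inv = lambda t: (math.sqrt(1 - t * t), t)        # lands in V3

# Transition map phi3 ∘ phi1^{-1} on phi1(V1 ∩ V3) = (0, 1): t ↦ sqrt(1 - t^2).
transition = lambda t: phi3(phi1_inv(t))

t = 0.6
assert abs(transition(t) - math.sqrt(1 - t * t)) < 1e-12
# The two transitions are inverse to each other, as condition (ii) requires:
assert abs(phi1(phi3_inv(transition(t))) - t) < 1e-12
# Differentiability on (0, 1): a central difference matches -t/sqrt(1 - t^2).
h = 1e-6
num = (transition(t + h) - transition(t - h)) / (2 * h)
assert abs(num - (-t / math.sqrt(1 - t * t))) < 1e-6
```

Note that the transition map is smooth only on the open overlap (0, 1); at the chart boundary t = 1 its derivative blows up, which is why four charts (rather than one) are needed to cover S^1.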
Example 2. The n-dimensional real projective space P^n(R). It is the set of straight lines of R^{n+1} which pass through the origin (0, · · ·, 0) ∈ R^{n+1}, or the quotient space of R^{n+1} \ {0} by the equivalence relation

   (x_1, · · ·, x_{n+1}) ∼ (λx_1, · · ·, λx_{n+1}), λ ∈ R, λ ≠ 0.

We denote the points in P^n(R) by [x]. For i = 1, · · ·, n + 1, let

   V_i = {[x] | x_i ≠ 0}.

Geometrically, V_i is the set of straight lines of R^{n+1} which pass through the origin and which do not belong to the hyperplane x_i = 0. Observe that, if x_i ≠ 0, then

   [x_1, · · ·, x_{n+1}] = [x_1/x_i, · · ·, x_{i−1}/x_i, 1, x_{i+1}/x_i, · · ·, x_{n+1}/x_i].

Apparently, P^n(R) = ∪_{i=1}^{n+1} V_i. Define

   φ_i([x]) = (ξ^i_1, · · ·, ξ^i_{i−1}, ξ^i_{i+1}, · · ·, ξ^i_{n+1}),

where ξ^i_k = x_k/x_i (i ≠ k) and 1 ≤ i ≤ n + 1. On the intersection V_i ∩ V_j, the changes of coordinates

   ξ^j_k = ξ^i_k / ξ^i_j (k ≠ i, j), ξ^j_i = 1 / ξ^i_j

are obviously smooth; thus {V_i, φ_i} is a differentiable structure of P^n(R), and hence P^n(R) is a differentiable manifold.

4.2 Tangent Spaces

We know that at every point of a regular curve or of a regular surface, there exists a tangent line or a tangent plane. Geometrically, the tangent plane is the space of all tangent vectors. For a regular surface S in R^3, each tangent vector is defined as the "velocity" in the ambient space R^3. Since on a differential manifold there is no such ambient space, we have to find a characteristic property of the tangent vector which will not involve the concept of velocity in an ambient space. Then, given a differentiable structure on a manifold, we can define a tangent space at each given point p.

To this end, let's consider a smooth curve in R^n:

   γ(t) = (x_1(t), · · ·, x_n(t)), t ∈ (−ε, ε), with γ(0) = p.
The tangent vector of γ at the point p is

   v = (x_1'(0), · · ·, x_n'(0)).

Let f(x) be a smooth function near p. Then the directional derivative of f along the vector v ∈ R^n can be expressed as

   d(f ∘ γ)/dt |_{t=0} = Σ_{i=1}^n ∂f/∂x_i (p) · dx_i/dt (0) = ( Σ_i x_i'(0) ∂/∂x_i ) f.

Hence the directional derivative is an operator on differentiable functions that depends uniquely on the vector v. We can identify v with this operator, and thus generalize the concept of tangent vectors to differentiable manifolds M.

Definition 4.2.1 A differentiable function γ : (−ε, ε) → M is called a (differentiable) curve in M. Let D be the set of all functions that are differentiable at the point p = γ(0) ∈ M. The tangent vector to the curve γ at t = 0 is an operator (or function) γ'(0) : D → R given by

   γ'(0) f = d(f ∘ γ)/dt |_{t=0}.

A tangent vector at p is the tangent vector of some curve γ at t = 0 with γ(0) = p. The set of all tangent vectors at p is called the tangent space at p and denoted by T_p M.

Let (x_1, · · ·, x_n) be the coordinates in a local chart (U, φ) that covers p, with φ(p) = 0 ∈ φ(U). Let γ(t) = (x_1(t), · · ·, x_n(t)) be a curve in M with γ(0) = p. Then

   γ'(0) f = d/dt f(x_1(t), · · ·, x_n(t)) |_{t=0} = Σ_i x_i'(0) ∂f/∂x_i = ( Σ_i x_i'(0) (∂/∂x_i)_p ) f.

This shows that the vector γ'(0) can be expressed as a linear combination of the (∂/∂x_i)_p:

   γ'(0) = Σ_i x_i'(0) (∂/∂x_i)_p.

Here (∂/∂x_i)_p is the tangent vector at p of the i-th "coordinate curve": x_i → φ^{−1}(0, · · ·, 0, x_i, 0, · · ·, 0).
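The identity d(f ∘ γ)/dt |_{t=0} = Σ_i x_i'(0) ∂f/∂x_i can be compared numerically on R^2. A sketch (ours; the test function f and the curve γ are arbitrary smooth choices):

```python
import math

f = lambda x, y: x * x + 3.0 * y                 # a test function on R^2
gamma = lambda t: (math.cos(t), math.sin(t))     # a curve with gamma(0) = (1, 0)

h = 1e-6
# Left side: d/dt f(gamma(t)) at t = 0.
lhs = (f(*gamma(h)) - f(*gamma(-h))) / (2 * h)

# Right side: sum_i x_i'(0) * df/dx_i at p = (1, 0).
x1p = (gamma(h)[0] - gamma(-h)[0]) / (2 * h)     # x_1'(0) = 0
x2p = (gamma(h)[1] - gamma(-h)[1]) / (2 * h)     # x_2'(0) = 1
dfdx = (f(1 + h, 0) - f(1 - h, 0)) / (2 * h)     # = 2
dfdy = (f(1, h) - f(1, -h)) / (2 * h)            # = 3
rhs = x1p * dfdx + x2p * dfdy

assert abs(lhs - rhs) < 1e-6
assert abs(rhs - 3.0) < 1e-6                     # 0*2 + 1*3
```

Two different curves through p with the same velocity produce the same operator, which is why the operator — not the curve — serves as the definition of a tangent vector on a manifold.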
In particular,

   (∂/∂x_i)_p f = ∂/∂x_i f(φ^{−1}(0, · · ·, 0, x_i, 0, · · ·, 0)) |_{x_i = 0}.

One can see that the tangent space T_p M is an n-dimensional linear space with the basis

   (∂/∂x_1)_p, · · ·, (∂/∂x_n)_p.

We first introduce the differential of a differentiable mapping.

Definition 4.2.2 Let M and N be two differential manifolds and let f : M → N be a differentiable mapping. For every p ∈ M and for each v ∈ T_p M, choose a differentiable curve γ : (−ε, ε) → M, with γ(0) = p and γ'(0) = v. Let β = f ∘ γ. Define the mapping df_p : T_p M → T_{f(p)} N by

   df_p(v) = β'(0).

We call df_p the differential of f. Obviously, it is a linear mapping from one tangent space T_p M to the other T_{f(p)} N. And one can show that it does not depend on the choice of γ (see [Ca]).

When N = R^1, at each given point p, the collection of all such differentials forms the cotangent space, the dual space of the tangent space T_p M, denoted by T_p^* M. We can realize it in the following. From the definition, one can derive that

   df_p( (∂/∂x_i)_p ) = (∂f/∂x_i)_p, and hence df_p = Σ_i (∂f/∂x_i)_p (dx_i)_p.

In particular, for the differential (dx_i)_p of the coordinates function x_i, we have

   (dx_i)_p ( (∂/∂x_j)_p ) = ∂x_i/∂x_j = δ_ij.

From here we can see that (dx_1)_p, (dx_2)_p, · · ·, (dx_n)_p form a basis of the cotangent space T_p^* M at the point p.

Definition 4.2.3 The tangent space of M is

   T M = ∪_{p ∈ M} T_p M,

and the cotangent space of M is

   T^* M = ∪_{p ∈ M} T_p^* M.
4.3 Riemannian Metrics

To measure the arc length, area, or volume on a manifold, as well as other quantities in geometry, we need to introduce some kind of measurements, or "metrics". For a surface S in the three dimensional Euclidean space R^3, there is a natural way of measuring the length of vectors tangent to S, which is simply the length of the vectors in the ambient space R^3. These can be expressed in terms of the inner product <, > in R^3. Given a curve γ(t) on S for t ∈ [a, b], its length is the integral

   ∫_a^b |γ'(t)| dt,

where the length of the velocity vector γ'(t) is given by the inner product in R^3:

   |γ'(t)| = < γ'(t), γ'(t) >^{1/2}.

The inner product <, > enables us to measure not only the lengths of curves on S, but also the area of domains on S. Now we generalize this concept to differentiable manifolds, without using the ambient space.

Definition 4.3.1 A Riemannian metric on a differentiable manifold M is a correspondence which associates to each point p of M an inner product < ·, · >_p on the tangent space T_p M, which varies differentiably, that is,

   g_{ij}(q) ≡ < (∂/∂x_i)_q , (∂/∂x_j)_q >_q

is a differentiable function of q for all q near p.

It is clear that this definition does not depend on the choice of the coordinate system. A differentiable manifold with a given Riemannian metric will be called a Riemannian manifold. Also one can prove (see [Ca]):

Proposition 4.3.1 A differentiable manifold M has a Riemannian metric.

Now we show how a Riemannian metric can be used to measure the length of a curve on a manifold M. Let I be an open interval in R^1, and let γ : I → M be a differentiable mapping (curve) on M. For each t ∈ I, dγ is a linear mapping from T_t I to T_{γ(t)} M, and γ'(t) ≡ dγ(d/dt) is a tangent vector of T_{γ(t)} M. The restriction of the curve γ to a closed interval [a, b] ⊂ I is called a segment. We define the length of the segment by

   ∫_a^b < γ'(t), γ'(t) >^{1/2} dt.
Here the arc length element is ds = < γ'(t), γ'(t) >^{1/2} dt. Let (x_1, · · ·, x_n) be the coordinates in a local chart (U, φ) that covers p = γ(t) = (x_1(t), · · ·, x_n(t)). Then

   γ'(t) = Σ_i x_i'(t) ∂/∂x_i,

and consequently,

   ds^2 = < γ'(t), γ'(t) > dt^2 = Σ_{i,j=1}^n < ∂/∂x_i , ∂/∂x_j >_p x_i'(t) dt · x_j'(t) dt = Σ_{i,j=1}^n g_{ij}(p) dx_i dx_j.   (4.1)

This expresses the length element ds in terms of the metric g_{ij}.

To derive the formula for the volume of a region (an open connected subset) D in M in terms of the metric g_{ij}, let's begin with the volume of the parallelepiped formed by the tangent vectors (∂/∂x_i)(p), i = 1, · · ·, n. Let {e_1, · · ·, e_n} be an orthonormal basis of T_p M, and write

   (∂/∂x_i)(p) = Σ_j a_{ij} e_j.

Then

   g_{ik}(p) = < ∂/∂x_i , ∂/∂x_k >_p = Σ_{j,l} a_{ij} a_{kl} < e_j, e_l > = Σ_j a_{ij} a_{kj}.   (4.2)

We know that the volume of the parallelepiped formed by ∂/∂x_1, · · ·, ∂/∂x_n is the determinant det(a_{ij}), and by virtue of (4.2), it is the same as √det(g_{ij}). Assume that the region D is covered by a coordinate chart (U, φ). Then it is natural to define the volume of D as

   vol(D) = ∫_{φ(U)} √det(g_{ij}) dx_1 · · · dx_n.

A trivial example is M = R^n, the n-dimensional Euclidean space, with

   ∂/∂x_i = e_i = (0, · · ·, 0, 1, 0, · · ·, 0), i = 1, · · ·, n.

The metric is given by g_{ij} = < e_i, e_j > = δ_{ij}.
A less trivial example is S^2, the 2-dimensional unit sphere with the standard metric, i.e. with the metric inherited from the ambient space R^3 = {(x, y, z)}. Let p be a point on S^2 covered by a local coordinates chart (U, φ) with coordinates (θ, ψ). We will use the usual spherical coordinates

   x = sin θ cos ψ, y = sin θ sin ψ, z = cos θ

to illustrate how we use the measurement (inner product) in R^3 to derive the expression of the standard metric g_{ij} in terms of the local coordinates (θ, ψ) on S^2.

We first derive the expressions for (∂/∂θ)_p and (∂/∂ψ)_p, the tangent vectors at the point p = φ^{−1}(θ, ψ) of the coordinate curves ψ = constant and θ = constant, respectively. In the coordinates of R^3, we have

   (∂/∂θ)_p = (∂x/∂θ, ∂y/∂θ, ∂z/∂θ) = (cos θ cos ψ, cos θ sin ψ, − sin θ)

and

   (∂/∂ψ)_p = (∂x/∂ψ, ∂y/∂ψ, ∂z/∂ψ) = (− sin θ sin ψ, sin θ cos ψ, 0).

It follows that

   g_{11}(p) = < ∂/∂θ, ∂/∂θ >_p = cos^2 θ cos^2 ψ + cos^2 θ sin^2 ψ + sin^2 θ = 1,
   g_{12}(p) = g_{21}(p) = < ∂/∂θ, ∂/∂ψ >_p = 0,
   g_{22}(p) = < ∂/∂ψ, ∂/∂ψ >_p = sin^2 θ.

Consequently, the length element is

   ds = √( g_{11} dθ^2 + 2 g_{12} dθ dψ + g_{22} dψ^2 ) = √( dθ^2 + sin^2 θ dψ^2 ),

and the area element is

   dA = √det(g_{ij}) dθ dψ = sin θ dθ dψ.

These conform with our knowledge of the length and area in the spherical coordinates.
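These metric components can be double-checked numerically from the embedding. A small sketch (ours) uses central differences; the chart point (θ, ψ) = (0.7, 0.3) is an arbitrary choice:

```python
import math

def embed(theta, psi):
    """Spherical-coordinate embedding of S^2 into R^3."""
    return (math.sin(theta) * math.cos(psi),
            math.sin(theta) * math.sin(psi),
            math.cos(theta))

def metric(theta, psi, h=1e-6):
    """g_ij = <d_i Phi, d_j Phi>, the partials taken by central differences."""
    d_theta = [(a - b) / (2 * h) for a, b in
               zip(embed(theta + h, psi), embed(theta - h, psi))]
    d_psi = [(a - b) / (2 * h) for a, b in
             zip(embed(theta, psi + h), embed(theta, psi - h))]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return dot(d_theta, d_theta), dot(d_theta, d_psi), dot(d_psi, d_psi)

g11, g12, g22 = metric(0.7, 0.3)
assert abs(g11 - 1.0) < 1e-8                      # g_11 = 1
assert abs(g12) < 1e-8                            # g_12 = 0
assert abs(g22 - math.sin(0.7) ** 2) < 1e-8       # g_22 = sin^2(theta)
# Area element sqrt(det g) = sin(theta):
assert abs(math.sqrt(g11 * g22 - g12 ** 2) - math.sin(0.7)) < 1e-8
```

The same recipe works for any parametrized surface: the metric is just the Gram matrix of the coordinate tangent vectors.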
4.4 Curvature

Intuitively, we feel that a straight line does not bend at all, and that a circle of radius one bends more than a circle of radius 2. To measure the extent of bending of a curve, or the amount it deviates from being straight, we introduce curvature.

4.4.1 Curvature of Plane Curves

Consider curves on a plane. Let p be a point on a curve, and let T(p) be the unit tangent vector at p. Let q be a nearby point, let Δα be the amount of change of the angle from T(p) to T(q), and let Δs be the arc length between p and q on the curve. We define the curvature at p as

   k(p) = lim_{q → p} Δα/Δs = dα/ds (p).

If a plane curve is given parametrically as c(t) = (x(t), y(t)), then the unit tangent vector at c(t) is

   T(t) = (x'(t), y'(t)) / √( [x'(t)]^2 + [y'(t)]^2 ),

and ds = √( [x'(t)]^2 + [y'(t)]^2 ) dt. By a straightforward calculation, one can verify that

   k(c(t)) = |dT/ds| = ( x'(t) y''(t) − y'(t) x''(t) ) / { [x'(t)]^2 + [y'(t)]^2 }^{3/2}.

If a plane curve is given explicitly as y = f(x), then, replacing t by x and y(t) by f(x) in the above formula, we find that the curvature at the point (x, f(x)) is

   k(x, f(x)) = f''(x) / { 1 + [f'(x)]^2 }^{3/2}.   (4.3)
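As a sanity check of the parametric formula, a circle of radius R should have curvature 1/R at every point. A numerical sketch (ours; derivatives by central differences):

```python
import math

def curvature(x, y, t, h=1e-5):
    """Curvature of the parametric plane curve (x(t), y(t)) via
    k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    xp = (x(t + h) - x(t - h)) / (2 * h)
    yp = (y(t + h) - y(t - h)) / (2 * h)
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return (xp * ypp - yp * xpp) / (xp * xp + yp * yp) ** 1.5

R = 2.0
x = lambda t: R * math.cos(t)
y = lambda t: R * math.sin(t)

for t in (0.0, 0.9, 2.3):
    assert abs(curvature(x, y, t) - 1.0 / R) < 1e-4   # k = 1/R at every point
```

This also illustrates the remark above: the radius-1 circle (k = 1) bends twice as much as the radius-2 circle (k = 1/2).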
4.4.2 Curvature of Surfaces in R^3

1. Principal Curvature

Let S be a surface in R^3, p a point on S, and N(p) a normal vector of S at p. Given a tangent vector T(p) at p, consider the intersection of the surface with the plane containing T(p) and N(p). This intersection is a plane curve, and the normal curvature at p in the direction T(p) is taken as the absolute value of the curvature of this curve, with a positive sign if the curve turns in the same direction as the surface's chosen normal, and negative otherwise. The normal curvature varies as the tangent vector T(p) changes. The maximum and minimum values of the normal curvature at the point p are called principal curvatures, and are usually denoted by k_1(p) and k_2(p).

2. Gaussian Curvature

One way to measure the extent of bending of a surface is by the Gaussian curvature. It is defined as the product of the two principal curvatures:

   K(p) = k_1(p) · k_2(p).

From this, one can see that, no matter which side normal is chosen (say, for a closed surface, one may choose outer normals or inner normals), the Gaussian curvature of a sphere is positive, that of a one-sheet hyperboloid is negative, and that of a cylinder or a plane is zero. A sphere of radius R has Gaussian curvature 1/R^2.

Consider a surface S in R^3 defined as the graph of x_3 = f(x_1, x_2). Assume that the origin is on S and that the x_1 x_2-plane is tangent to S there, that is,

   f(0, 0) = 0 and f_1(0, 0) = 0 = f_2(0, 0),

where, for simplicity, we write f_i = ∂f/∂x_i, and we will use f_{ij} to denote ∂^2 f/∂x_i ∂x_j. We calculate the Gaussian curvature of S at the origin 0 = (0, 0, 0).

Let u = (u_1, u_2) be a unit vector on the tangent plane. Then by (4.3), the normal curvature k_n(u) in the direction u is

   k_n(u) = ( ∂^2 f/∂u^2 ) / { [∂f/∂u]^2 + 1 }^{3/2} (0),

where ∂f/∂u and ∂^2 f/∂u^2 are the first and second directional derivatives of f in the direction u. Using the assumption that f_1(0) = 0 = f_2(0), we arrive at

   k_n(u) = ( f_{11} u_1^2 + 2 f_{12} u_1 u_2 + f_{22} u_2^2 )(0) = u^T F u,   (4.4)

where

   F = ( f_{11} f_{12} ; f_{21} f_{22} )(0),

and u^T denotes the transpose of the column vector u.
Let λ_1 and λ_2 be the two eigenvalues of the matrix F, with corresponding unit eigenvectors e_1 and e_2, i.e.

   F e_i = λ_i e_i, i = 1, 2,

and it is obvious that e_i^T F e_i = λ_i, i = 1, 2. Since the matrix F is symmetric, the two eigenvectors e_1 and e_2 are orthogonal. Let θ be the angle between u and e_1. Then we can express

   u = cos θ e_1 + sin θ e_2.

It follows that

   k_n(u) = u^T F u = cos^2 θ λ_1 + sin^2 θ λ_2.

Assume that λ_1 ≤ λ_2. Then

   λ_1 ≤ λ_1 + sin^2 θ (λ_2 − λ_1) = k_n(u) = cos^2 θ (λ_1 − λ_2) + λ_2 ≤ λ_2.

Hence λ_1 and λ_2 are the minimum and maximum values of the normal curvature, and therefore they are the principal curvatures k_1 and k_2. From (4.4), by definition, one can see that λ_1 and λ_2 are the solutions of the quadratic equation

   det( f_{11} − λ, f_{12} ; f_{21}, f_{22} − λ ) = λ^2 − (f_{11} + f_{22}) λ + (f_{11} f_{22} − f_{12} f_{21}) = 0.

Therefore, the Gaussian curvature at 0 is

   K(0) = k_1 k_2 = λ_1 λ_2 = (f_{11} f_{22} − f_{12} f_{21})(0).

4.4.3 Curvature on Riemannian Manifolds

On a general n-dimensional Riemannian manifold M, there are several kinds of curvatures. Before introducing them, we need a few preparations.

1. Vector Fields and Brackets

A vector field X on M is a correspondence that assigns to each point p ∈ M a vector X(p) in the tangent space T_p M. It is a mapping from M to T M. In a local coordinates, we can express

   X(p) = Σ_{i=1}^n a_i(p) ∂/∂x_i.

If this mapping is differentiable, then we say that the vector field X is differentiable. When applied to a smooth function f on M, it is a directional derivative operator in the direction X(p):

   (X f)(p) = Σ_{i=1}^n a_i(p) ∂f/∂x_i.

Definition 4.4.1 For two differentiable vector fields X and Y, define the Lie bracket as

   [X, Y] = XY − YX.

If X(p) = Σ_i a_i(p) ∂/∂x_i and Y(p) = Σ_j b_j(p) ∂/∂x_j, then

   [X, Y] f = XY f − YX f = Σ_{i,j=1}^n ( a_i ∂b_j/∂x_i − b_i ∂a_j/∂x_i ) ∂f/∂x_j,

so the bracket [X, Y] is again a vector field.
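The formula K(0) = (f_{11} f_{22} − f_{12} f_{21})(0) just derived is easy to test numerically. A sketch (ours) uses the saddle f(x, y) = sin x sin y, an assumed test surface tangent to the plane at the origin:

```python
import math

f = lambda x, y: math.sin(x) * math.sin(y)   # f(0,0) = 0, f_1(0,0) = f_2(0,0) = 0

h = 1e-4
f11 = (f(h, 0) - 2 * f(0, 0) + f(-h, 0)) / h ** 2
f22 = (f(0, h) - 2 * f(0, 0) + f(0, -h)) / h ** 2
f12 = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h ** 2)

K = f11 * f22 - f12 * f12     # Gaussian curvature at the origin
# The eigenvalues of the Hessian here are -1 and 1 (the principal curvatures),
# so the saddle has negative Gaussian curvature K = -1.
assert abs(f11) < 1e-6 and abs(f22) < 1e-6
assert abs(f12 - 1.0) < 1e-6
assert abs(K + 1.0) < 1e-5
```

The sign of K matches the qualitative statement in Section 4.4.2: saddle-shaped surfaces, like the one-sheet hyperboloid, carry negative Gaussian curvature.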
2. Covariant Derivatives and Riemannian Connections

To be more intuitive, we begin with a surface S in R^3. Let c : I → S be a curve on S, and let V(t) be a vector field along c(t), t ∈ I. One can see that dV/dt is a vector in the ambient space R^3, but it may not belong to the tangent plane of S. To consider the rate of change of V(t) restricted to the surface S, we make an orthogonal projection of dV/dt onto the tangent plane, and denote it by DV/dt. We call this the covariant derivative of V(t) on S — the derivative viewed by the two-dimensional creatures on S.

On an n-dimensional differentiable manifold M, the covariant derivative is defined by affine connections. Let D(M) be the set of all smooth vector fields.

Definition 4.4.2 Let X, Y, Z ∈ D(M), and let f and g be smooth functions on M. An affine connection D on a differentiable manifold M is a mapping from D(M) × D(M) to D(M) which maps (X, Y) to D_X Y and satisfies

   1) D_{fX + gY} Z = f D_X Z + g D_Y Z;
   2) D_X (Y + Z) = D_X Y + D_X Z;
   3) D_X (fY) = f D_X Y + X(f) Y.

Given an affine connection D on M, for a vector field V(t) = Y(c(t)) along a differentiable curve c : I → M, define the covariant derivative of V along c(t) as

   DV/dt = D_{dc/dt} Y.

In a local coordinates, let c(t) = (x_1(t), · · ·, x_n(t)) and V(t) = Σ_i v_i(t) ∂/∂x_i. Then, by the properties of the affine connection, one can express

   DV/dt = Σ_i (dv_i/dt) ∂/∂x_i + Σ_{i,j} (dx_i/dt) v_j D_{∂/∂x_i} (∂/∂x_j).   (4.5)

Recall that, in Euclidean spaces, a vector field V(t) along a curve c(t) is parallel if and only if dV/dt = 0. Similarly, we have

Definition 4.4.3 Let M be a differentiable manifold with an affine connection D. A vector field V(t) along a curve c(t) ∈ M is parallel if DV/dt = 0.
Definition 4.4.4 Let M be a differentiable manifold with an affine connection D and a Riemannian metric <, >. If for any pair of parallel vector fields X and Y along any smooth curve c(t), we have

   < X, Y >_{c(t)} = constant,

then we say that D is compatible with the metric <, >.

The following fundamental theorem of Levi-Civita guarantees the existence of such an affine connection on a Riemannian manifold.

Theorem 4.4.1 Given a Riemannian manifold M with metric <, >, there exists a unique connection D on M that is compatible with <, >. Moreover, this connection is symmetric, i.e.

   D_X Y − D_Y X = [X, Y].

We skip the proof of the theorem. Interested readers may see the book of do Carmo [Ca] (page 55). In a local coordinates (x_1, · · ·, x_n), this unique Riemannian connection can be expressed as

   D_{∂/∂x_i} (∂/∂x_j) = Σ_k Γ^k_{ij} ∂/∂x_k,

where

   Γ^k_{ij} = (1/2) Σ_m ( ∂g_{jm}/∂x_i + ∂g_{mi}/∂x_j − ∂g_{ij}/∂x_m ) g^{mk}.

Here g_{ij} = < ∂/∂x_i, ∂/∂x_j >, (g^{ij}) is the inverse matrix of (g_{ij}), and the Γ^k_{ij} are called the Christoffel symbols.

3. Curvature

Definition 4.4.5 Let M be a Riemannian manifold with the Riemannian connection D. The curvature R is a correspondence that assigns to every pair of smooth vector fields X, Y ∈ D(M) a mapping R(X, Y) : D(M) → D(M) defined by

   R(X, Y) Z = D_Y D_X Z − D_X D_Y Z + D_{[X,Y]} Z, ∀ Z ∈ D(M).

Definition 4.4.6 The sectional curvature of a given two dimensional subspace E ⊂ T_p M at the point p is

   K(E) = < R(X, Y) X, Y > / ( |X|^2 |Y|^2 − < X, Y >^2 ),

where X and Y are any two linearly independent vectors in E, and √( |X|^2 |Y|^2 − < X, Y >^2 ) is the area of the parallelogram spanned by X and Y.
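The Christoffel formula above can be exercised on the sphere metric g = diag(1, sin^2 θ) from Section 4.3. A numerical sketch (ours; metric derivatives by central differences, checked against the known closed forms Γ^θ_{ψψ} = −sin θ cos θ and Γ^ψ_{θψ} = cot θ):

```python
import math

def g(coords):
    """Metric of S^2 in spherical coordinates (theta, psi): diag(1, sin^2 theta)."""
    theta = coords[0]
    return [[1.0, 0.0], [0.0, math.sin(theta) ** 2]]

def christoffel(coords, h=1e-6):
    """Gamma^k_ij = (1/2) g^{km} (d_i g_jm + d_j g_mi - d_m g_ij)."""
    def dg(m):
        plus, minus = list(coords), list(coords)
        plus[m] += h
        minus[m] -= h
        gp, gm = g(plus), g(minus)
        return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)]
                for i in range(2)]
    d = [dg(m) for m in range(2)]                       # d[m][i][j] = d_m g_ij
    ginv = [[1.0, 0.0], [0.0, 1.0 / g(coords)[1][1]]]   # inverse of this diagonal metric
    return [[[0.5 * sum(ginv[k][m] * (d[i][j][m] + d[j][m][i] - d[m][i][j])
                        for m in range(2))
              for j in range(2)]
             for i in range(2)]
            for k in range(2)]

theta = 0.7
G = christoffel([theta, 0.3])
assert abs(G[0][1][1] + math.sin(theta) * math.cos(theta)) < 1e-6   # Γ^θ_ψψ
assert abs(G[1][0][1] - math.cos(theta) / math.sin(theta)) < 1e-6   # Γ^ψ_θψ
assert abs(G[0][0][0]) < 1e-6                                       # Γ^θ_θθ = 0
```

The sketch hard-codes the inverse of this particular diagonal metric for brevity; for a general metric one would invert the 2 × 2 matrix.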
One can show that K(E) so defined is independent of the choice of X and Y. It is actually the Gaussian curvature of the section.

Example. Let M be the unit sphere S^n with the standard metric. Then one can verify that its sectional curvature at every point is K(E) = 1.

Definition 4.4.7 Let X = Y_n be a unit vector in T_p M, and let {Y_1, · · ·, Y_{n−1}} be an orthonormal basis of the hyperplane in T_p M orthogonal to X. The Ricci curvature at p in the direction X is

   Ric_p(X) = (1/(n−1)) Σ_{i=1}^{n−1} < R(X, Y_i) X, Y_i >.

The scalar curvature at p is

   R(p) = (1/n) Σ_{i=1}^n Ric_p(Y_i).

One can show that the above definitions do not depend on the choice of the orthonormal basis, and that they both depend only on the point p. In other words, the Ricci curvature in the direction X is the average of the sectional curvatures of the n − 1 sections spanned by X and Y_1, X and Y_2, · · ·, and X and Y_{n−1}, and so on. On two dimensional manifolds, the Ricci curvature is the same as the sectional curvature.

4.5 Calculus on Manifolds

In the previous section, we introduced the concept of an affine connection D on a differentiable manifold M. It is a mapping from Γ(M) × Γ(M) to Γ(M), where Γ(M) is the set of all smooth vector fields on M. Here we will extend this definition to the space of differentiable tensor fields, and thus introduce higher order covariant derivatives.

4.5.1 Higher Order Covariant Derivatives and the Laplace–Beltrami Operator

For convenience, we adopt the summation convention and write, for instance,

   X^i ∂/∂x_i := Σ_i X^i ∂/∂x_i, Γ^j_{ik} dx^k := Σ_k Γ^j_{ik} dx^k.

Let p and q be two nonnegative integers. At a point x on M, we denote the tangent space by T_x M. Define T^q_p(T_x M) as the space of (p, q)-tensors on T_x M, that is, the space of (p + q)-linear forms
In local coordinates, a (p,q)-tensor is a (p+q)-linear form

η : T*_x M × ··· × T*_x M × T_x M × ··· × T_x M → R¹,

and a (p,q)-tensor field T on M can be expressed as

T = T^{j_1···j_q}_{i_1···i_p} dx^{i_1} ⊗ ··· ⊗ dx^{i_p} ⊗ ∂/∂x_{j_1} ⊗ ··· ⊗ ∂/∂x_{j_q}.

Recall that, for the affine connection D, if we set ∇_i = D_{∂/∂x_i}, then

∇_i ( ∂/∂x_j )(x) = Γ^k_{ij}(x) ∂/∂x_k,

where the Γ^k_{ij} are the Christoffel symbols. If we apply ∇_i to a (1,0)-tensor, say dx^j, then

∇_i ( dx^j ) = −Γ^j_{ik} dx^k.

For smooth vector fields X, Y ∈ Γ(M) with X = (X^1, ···, X^n) and Y = (Y^1, ···, Y^n), by the bilinear properties of the connection, we have

D_X Y = D_{X^i ∂/∂x_i} Y = X^i ∇_i Y = X^i ( ∂Y^j/∂x_i ∂/∂x_j + Y^j Γ^k_{ij} ∂/∂x_k ) = X^i ( ∂Y^j/∂x_i + Γ^j_{ik} Y^k ) ∂/∂x_j.   (4.6)

Now we naturally extend this affine connection D to differentiable (p,q)-tensor fields T on M. For a point x on M and a vector X ∈ T_x M, D_X T is defined to be the (p,q)-tensor on T_x M given by

D_X T(x) = X^i ( ∇_i T )(x),

where

( ∇_i T )^{j_1···j_q}_{i_1···i_p}(x) = ∂T^{j_1···j_q}_{i_1···i_p}/∂x_i (x) − Σ_{k=1}^{p} Γ^α_{i i_k}(x) T(x)^{j_1···j_q}_{i_1···i_{k−1} α i_{k+1}···i_p} + Σ_{k=1}^{q} Γ^{j_k}_{iα}(x) T(x)^{j_1···j_{k−1} α j_{k+1}···j_q}_{i_1···i_p}.   (4.7)

Notice that (4.6) is a particular case of (4.7), when (4.7) is applied to the (0,1)-tensor Y. Given two differentiable tensor fields T and T̃, we have
D_X ( T ⊗ T̃ ) = D_X T ⊗ T̃ + T ⊗ D_X T̃.   (4.8)

For a differentiable (p,q)-tensor field T, we define ∇T to be the (p+1, q)-tensor field whose components are given by

( ∇T )^{j_1···j_q}_{i_1···i_{p+1}} = ( ∇_{i_1} T )^{j_1···j_q}_{i_2···i_{p+1}}.

Similarly, one can define ∇²T, ∇³T, ···, and thus introduce higher order covariant derivatives.

For example, a smooth function f(x) on M is a (0,0)-tensor. Hence ∇f is a (1,0)-tensor defined by

∇f = ∇_i f dx^i = ∂f/∂x_i dx^i,

and ∇²f is a (2,0)-tensor:

∇²f = ∇( ∂f/∂x_i dx^i )
    = ∂²f/∂x_i∂x_j dx^i ⊗ dx^j + ∂f/∂x_i ∇_j( dx^i ) ⊗ dx^j
    = ( ∂²f/∂x_i∂x_j − Γ^k_{ij} ∂f/∂x_k ) dx^i ⊗ dx^j.

Here we have used (4.7) and (4.8). ∇²f is called the Hessian of f and denoted by Hess(f). Its ij-th component is

( ∇²f )_{ij} = ∂²f/∂x_i∂x_j − Γ^k_{ij} ∂f/∂x_k.

The trace of the Hessian matrix (( ∇²f )_{ij}) with respect to the metric is defined to be the Laplace-Beltrami operator acting on f:

Δf = g^{ij} ( ∂²f/∂x_i∂x_j − Γ^k_{ij} ∂f/∂x_k ),

where (g^{ij}) is the inverse matrix of (g_{ij}). Through a straightforward calculation, one can verify that

Δ = (1/√g) Σ_{i,j=1}^{n} ∂/∂x_i ( √g g^{ij} ∂/∂x_j ),

where g is the determinant of (g_{ij}).

4.5.2 Integrals

On a smooth n-dimensional Riemannian manifold (M, g), one can define a natural positive Radon measure on M, and the theory of the Lebesgue integral applies.
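Before moving on, the coordinate formula Δ = (1/√g) ∂_i ( √g g^{ij} ∂_j ) for the Laplace-Beltrami operator can be sanity-checked symbolically. On the round sphere it recovers −Δ(cos θ) = 2 cos θ for the coordinate function x₃, a fact that reappears with the first spherical harmonics in Chapter 5. The sketch below, together with its coordinate conventions, is our own illustration:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2
ginv = g.inv()
sqrtg = sp.sin(theta)                             # sqrt(det g), valid for 0 < theta < pi

def laplace_beltrami(f):
    # Delta f = (1/sqrt(g)) d_i ( sqrt(g) g^{ij} d_j f )
    return sp.simplify(sum(
        sp.diff(sqrtg * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
        for i in range(2) for j in range(2)) / sqrtg)

f = sp.cos(theta)            # restriction of the coordinate function x_3 to S^2
lap_f = laplace_beltrami(f)  # expected: -2*cos(theta), i.e. -Delta f = 2 f
```
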
Let (U_i, φ_i)_i be a family of coordinate charts that covers M. We say that a family (U_i, φ_i, η_i)_i is a partition of unity associated to (U_i, φ_i)_i if

(i) (η_i)_i is a smooth partition of unity associated to the covering (U_i)_i, and
(ii) for any i, supp η_i ⊂ U_i.

One can show that, for each family of coordinate charts (U_i, φ_i)_i of M, there exists a corresponding family of partition of unity (U_i, φ_i, η_i)_i.

Let f be a continuous function on M with compact support. We define its integral on M as the sum of integrals on open sets in R^n:

∫_M f dV_g = Σ_i ∫_{φ_i(U_i)} ( η_i √g f ) ∘ φ_i^{−1} dx,

where (U_i, φ_i, η_i)_i is a family of partition of unity associated to (U_i, φ_i)_i, g is the determinant of (g_{ij}) in the charts (U_i, φ_i)_i, and dx stands for the volume element of R^n. One can prove that such a definition does not depend on the choice of the charts (U_i, φ_i)_i and the partition of unity (U_i, φ_i, η_i)_i.

For smooth functions u and v on a compact manifold M, one can verify the following integration by parts formula:

− ∫_M Δ_g u v dV_g = ∫_M <∇u, ∇v> dV_g,   (4.9)

where Δ_g is the Laplace-Beltrami operator associated to the metric g, and

<∇u, ∇v> = g^{ij} ∇_i u ∇_j v = g^{ij} ∂u/∂x_i ∂v/∂x_j

is the scalar product associated with g for 1-forms.

4.5.3 Equations on Prescribing Gaussian and Scalar Curvature

A Riemannian metric g on M is said to be pointwise conformal to another metric g_o if there exists a positive function ρ, such that

g(x) = ρ(x) g_o(x), ∀ x ∈ M.

Suppose M is a two dimensional Riemannian manifold with a metric g_o and the corresponding Gaussian curvature K_o(x). Let

g(x) = e^{2u(x)} g_o(x), ∀ x ∈ M,

for some smooth function u(x) on M, and let K(x) be the Gaussian curvature associated to the metric g. Then by a straightforward calculation, one can verify that

− Δ_o u + K_o(x) = K(x) e^{2u(x)},   (4.10)

where Δ_o is the Laplace-Beltrami operator associated to g_o.
Similarly, for the conformal relation between scalar curvatures on n-dimensional manifolds (n ≥ 3), say on the standard sphere S^n, we have

− Δ_o u + n(n−2)/4 u = (n−2)/(4(n−1)) R(x) u^{(n+2)/(n−2)}, ∀ x ∈ S^n,   (4.11)

where g_o is the standard metric on S^n with the corresponding Laplace-Beltrami operator Δ_o, and R(x) is the scalar curvature associated to the pointwise conformal metric g = u^{4/(n−2)} g_o.

In Chapters 5 and 6, we will study the existence and qualitative properties of the solutions u of the above two equations, respectively.

4.6 Sobolev Embeddings

Let (M, g) be a smooth Riemannian manifold. Let u be a smooth function on M and k be a nonnegative integer. We denote by ∇^k u the k-th covariant derivative of u, as defined in the previous section; its norm |∇^k u| is defined in a local coordinates chart by

|∇^k u|² = g^{i_1 j_1} ··· g^{i_k j_k} ( ∇^k u )_{i_1···i_k} ( ∇^k u )_{j_1···j_k}.

In particular, we have |∇^0 u| = |u| and

|∇^1 u|² = |∇u|² = g^{ij} ( ∇u )_i ( ∇u )_j = g^{ij} ∇_i u ∇_j u = g^{ij} ∂u/∂x_i ∂u/∂x_j.

For an integer k and a real number p ≥ 1, let C^p_k(M) be the space of C^∞ functions u on M such that

∫_M |∇^j u|^p dV_g < ∞, ∀ j = 0, 1, ···, k.

Definition 4.6.1 The Sobolev space H^{k,p}(M) is the completion of C^p_k(M) with respect to the norm

||u||_{H^{k,p}(M)} = Σ_{j=0}^{k} ( ∫_M |∇^j u|^p dV_g )^{1/p}.

Many results concerning the Sobolev spaces on R^n introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. For application purposes, we list some of them in the following. Interested readers may find the proofs in [He] or [Au].

Theorem 4.6.1 H^{k,2}(M) is a Hilbert space when equipped with the equivalent norm
||u|| = ( Σ_{i=0}^{k} ∫_M |∇^i u|² dV_g )^{1/2}.

The corresponding scalar product is defined by

<u, v> = Σ_{i=0}^{k} ∫_M < ∇^i u, ∇^i v > dV_g,

where <·,·> is the scalar product on covariant tensor fields associated to the metric g.

Theorem 4.6.2 If M is compact, H^{k,p}(M) does not depend on the metric.

Theorem 4.6.3 (Sobolev Embeddings I) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold. Assume that q ≥ 1 and that m, k are two integers with 0 ≤ m < k. Then for any real number p such that 1 ≤ p ≤ nq/(n − q(k−m)), we have

H^{k,q}(M) ⊂ H^{m,p}(M).

In particular, assume 1 ≤ q < n and p = nq/(n−q); then

H^{1,q}(M) ⊂ L^p(M).

Theorem 4.6.4 (Sobolev Embeddings II) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold, and let m, k be two integers with 0 ≤ m < k. If q > n/(k−m), then

H^{k,q}(M) ⊂ C^m(M).

Theorem 4.6.5 (Compact Embeddings) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold.

(i) Assume that q ≥ 1 and that m, k are two integers with 0 ≤ m < k. Then for any real number p such that 1 ≤ p < nq/(n − q(k−m)), the embedding

H^{k,q}(M) ⊂ H^{m,p}(M)

is compact. In particular, the embedding of H^{1,q}(M) into L^p(M) is compact if 1 ≤ p < nq/(n−q).

(ii) For q > n, the embedding of H^{1,q}(M) into C^0(M) is compact. More generally, the embedding of H^{k,q}(M) into C^λ(M) is compact if (k − λ) > n/q.
Theorem 4.6.6 (Poincaré Inequality) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold, and let p ∈ [1, n) be real. Then there exists a positive constant C such that, for any u ∈ H^{1,p}(M),

( ∫_M |u − ū|^p dV_g )^{1/p} ≤ C ( ∫_M |∇u|^p dV_g )^{1/p},

where

ū = (1/Vol(M, g)) ∫_M u dV_g

is the average of u on M.
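For intuition, in the simplest compact case M = S¹ with p = 2 this is Wirtinger's inequality, where the best constant is C = 1. A small numerical illustration (the test function is our choice):

```python
import numpy as np

# On the circle S^1 (p = 2) the Poincare inequality is Wirtinger's inequality:
# ||u - ubar||_{L^2} <= ||u'||_{L^2}, with best constant C = 1.
N = 2048
x = np.linspace(0, 2*np.pi, N, endpoint=False)

u = np.cos(3*x) + 0.5*np.sin(7*x)          # an illustrative test function
du = -3*np.sin(3*x) + 3.5*np.cos(7*x)      # its exact derivative

lhs = np.sqrt(np.mean((u - u.mean())**2))  # ||u - ubar||, discretized
rhs = np.sqrt(np.mean(du**2))              # ||u'||
ratio = lhs / rhs                           # here sqrt(0.625/10.625), well below 1
```
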
5 Prescribing Gaussian Curvature on Compact 2-Manifolds

5.1 Prescribing Gaussian Curvature on S²
 5.1.1 Obstructions
 5.1.2 Variational Approaches and Key Inequalities
 5.1.3 Existence of Weak Solutions
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
 5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
 5.2.2 Chen and Li's Results

In this and the next chapters, we will present the readers with real research examples: semilinear equations arising from prescribing Gaussian and scalar curvatures. We will illustrate, among other methods, how the calculus of variations can be applied to seek weak solutions of these equations.

To show the existence of weak solutions for a certain equation, we consider the corresponding functional J(·) in an appropriate Hilbert space. According to the compactness of the functional, people usually divide the problem into three cases:

a) the subcritical case, where the level sets of J are compact;
b) the critical case, which is the limiting situation and can be approximated by subcritical cases; or
c) the supercritical case, that is, the remaining cases.

Here compactness refers, for instance, to the (PS) condition (see also Chapter 2): every sequence {u_k} that satisfies J(u_k)→c and J'(u_k)→0 possesses a convergent subsequence.

To roughly see when a functional may lose compactness, let's consider J(x) = (x² − 1) e^x, defined on the one dimensional space R¹. One can easily see that there exists a sequence {x_k}, say x_k = −k, such that J(x_k)→0 and J'(x_k)→0 as k→∞. However, {x_k} possesses no convergent subsequence, because this sequence is not bounded. The functional has no compactness at level
J = 0. However, if we restrict it to levels less than 0, we can recover the compactness: if we pick a sequence {x_k} satisfying J(x_k) < 0 and J'(x_k)→0, then one can verify that it possesses a subsequence which converges to a critical point x_0 of J. Actually, this x_0 is a minimum of J.

To further illustrate the concept of compactness, we consider some examples in infinite dimensional spaces. We know that in a finite dimensional space, a bounded sequence has a convergent subsequence, while in an infinite dimensional space, even a bounded sequence may not have a convergent subsequence. For instance, {sin kx} is a bounded sequence in L²([0, 1]), but it does not have a convergent subsequence.

Let Ω be an open bounded subset in R^n, and let H^1_0(Ω) be the Hilbert space introduced in previous chapters. Consider the Dirichlet boundary value problem

− Δu = u^p(x), x ∈ Ω,
u(x) = 0, x ∈ ∂Ω.   (5.1)

To seek weak solutions, we study the corresponding functional

J_p(u) = ∫_Ω |∇u|² dx

on the constraint

H = {u ∈ H^1_0(Ω) | ∫_Ω |u|^{p+1} dx = 1}.

When p < (n+2)/(n−2), it is in the subcritical case; as we have seen in Chapter 2, the functional then satisfies the (PS) condition. If we define

m_p = inf_{u ∈ H} J_p(u),

then a minimizing sequence {u_k}, J_p(u_k)→m_p, possesses a convergent subsequence in H^1_0(Ω), and the limiting function u_0 would be the minimum of J_p in H, hence the desired weak solution.

When p = τ := (n+2)/(n−2), it is in the critical case, and compactness is lost. One can find a minimizing sequence that does not converge. For instance, when Ω = B_1(0) is a ball, consider

u_k(x) = [n(n−2)k]^{(n−2)/4} / (1 + k|x|²)^{(n−2)/2} η(x),

where η ∈ C^∞_0(Ω) is a cut-off function satisfying

η(x) = 1 for x ∈ B_{1/2}(0), and 0 ≤ η(x) ≤ 1 elsewhere.
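Away from the cut-off η, the profile appearing in u_k is the standard "bubble", which solves the critical equation −Δu = u^{(n+2)/(n−2)} in R^n exactly. A sympy sketch of this fact for n = 3, using the radial form of the Laplacian (the check itself is our addition, not from the text):

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)
n = 3
# Radial profile of u_k (ignoring the cut-off eta): the standard "bubble"
U = (n*(n - 2)*k)**sp.Rational(n - 2, 4) / (1 + k*r**2)**sp.Rational(n - 2, 2)

# Radial Laplacian in R^3: Delta U = U'' + (2/r) U'
lap = sp.diff(U, r, 2) + 2*sp.diff(U, r)/r

# U solves the critical equation -Delta U = U^{(n+2)/(n-2)} = U^5 exactly:
residual = sp.simplify(-lap - U**sp.Rational(n + 2, n - 2))
```
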
Let

v_k(x) = u_k(x) / ( ∫_Ω |∇u_k|² dx )^{1/2}.

Then one can verify that {v_k} is a minimizing sequence of J_τ in H with J'(v_k)→0, as k→∞. However, it does not have a convergent subsequence. In essence, the energy of {v_k} is concentrating at the one point 0. We say that the sequence "blows up" at the point 0.

In the critical case, the compactness may be recovered by considering other level sets or by changing the topology of the domain. For example, if Ω is an annulus, then the above J_τ has a minimizer in H. Actually, the compactness of the functional J_τ here relies on the Sobolev embedding

H^1(Ω) ↪ L^{p+1}(Ω).

As we learned in Chapter 1, this embedding is compact when p < (n+2)/(n−2), and is not when p = (n+2)/(n−2).

In this and the next chapter, the readers will see examples from prescribing Gaussian and scalar curvature, respectively, where the corresponding equations are in the critical case and where various means have been exploited to recover the compactness.

5.1 Prescribing Gaussian Curvature on S²

Given a function K(x) on the two dimensional unit sphere S := S² with standard metric g_o, can it be realized as the Gaussian curvature of some pointwise conformal metric g? This has been an interesting problem in differential geometry and is known as the Nirenberg problem. If we let g = e^{2u} g_o, then it is equivalent to solving the following semilinear elliptic equation on S²:

− Δu + 1 = K(x) e^{2u}, x ∈ S²,   (5.2)

where Δ is the Laplace-Beltrami operator associated with the standard metric g_o. If we replace 2u by u and let R(x) = 2K(x), then equation (5.2) becomes

− Δu + 2 = R(x) e^{u(x)}.   (5.3)

Here 2 and R(x) are actually the scalar curvatures of S² with the standard metric g_o and with the pointwise conformal metric e^u g_o, respectively.

5.1.1 Obstructions

For equation (5.3) to have a solution, the function R(x) must satisfy the obvious Gauss-Bonnet sign condition

∫_S R(x) e^u dx = 8π.   (5.4)
This can be seen by integrating both sides of the equation (5.3) on S², using the fact that

∫_S ( − Δu ) dx = 0.

Condition (5.4) implies that R(x) must be positive somewhere. Besides this obvious necessary condition, there are other necessary conditions found by Kazdan and Warner. It reads

∫_S ∇φ_i · ∇R e^u dx = 0, i = 1, 2, 3.   (5.5)

Here the φ_i are the first spherical harmonic functions, i.e.

− Δφ_i = 2 φ_i, i = 1, 2, 3;

in the coordinates x = (x_1, x_2, x_3) of R³, φ_i(x) = x_i.

Condition (5.5) gives rise to many examples of R(x) for which the equation (5.3) has no solution. In particular, a monotone rotationally symmetric function R admits no solution. Later, Bourguignon and Ezin generalized this condition to

∫_S X(R) e^u dx = 0,   (5.6)

where X is any conformal Killing vector field on the standard S².

5.1.2 Variational Approach and Key Inequalities

Then for which kinds of functions R(x) can one solve (5.3)? This has been an interesting problem in geometry.

To obtain the existence of a solution for equation (5.3), people usually use a variational approach: first prove the existence of a weak solution; then, by a regularity argument, one can show that the weak solution is smooth, and hence is a classical solution.

Let H^1(S) := H^{1,2}(S) be the Hilbert space with norm

||u||_1 = ( ∫_S ( |∇u|² + u² ) dx )^{1/2},

and let

H(S) = {u ∈ H^1(S) | ∫_S u dx = 0, ∫_S R(x) e^u dx > 0}.

Then by the Poincaré inequality, we can use
||u|| = ( ∫_S |∇u|² dx )^{1/2}

as an equivalent norm in H(S). Consider the functional

J(u) = (1/2) ∫_S |∇u|² dx − 8π ln ∫_S R(x) e^u dx

in H(S). To estimate the value of the functional J, we introduce some useful inequalities.

Lemma 5.1.1 (Moser-Trudinger Inequality) Let S be a compact two dimensional Riemannian manifold. Then there exists a constant C, such that the inequality

∫_S e^{4πu²} dA ≤ C   (5.7)

holds for all u ∈ H^1(S) satisfying

∫_S |∇u|² dA ≤ 1 and ∫_S u dA = 0.   (5.8)

We skip the proof of this inequality. Interested readers may see the article of Moser [Mo1] or the article of Chen [C1]. From this inequality, one can derive immediately

Lemma 5.1.2 There exists a constant C, such that for all u ∈ H^1(S) there holds

∫_S e^u dA ≤ C exp{ (1/16π) ∫_S |∇u|² dA + (1/4π) ∫_S u dA }.   (5.9)

Proof. Let

ū = (1/4π) ∫_S u(x) dA

be the average of u on S.

i) For any v ∈ H^1(S) with v̄ = 0, let w = v/||∇v||. Then ||∇w|| = 1 and w̄ = 0. Applying Lemma 5.1.1 to such a w, we have

∫_S e^{4πv²/||∇v||²} dA ≤ C.   (5.10)

From the obvious inequality

( 2√π v/||∇v|| − ||∇v||/(4√π) )² ≥ 0,

we deduce immediately

v ≤ 4πv²/||∇v||² + ||∇v||²/16π.

Together with (5.10), we have

∫_S e^v dA ≤ C exp{ (1/16π) ||∇v||² }, ∀ v ∈ H^1(S) with v̄ = 0.   (5.11)

ii) For any u ∈ H^1(S), let v = u − ū. Then v̄ = 0 and ∇v = ∇u, and we can apply (5.11) to v and obtain

∫_S e^{u−ū} dA ≤ C exp{ (1/16π) ||∇u||² }.

Consequently

∫_S e^u dA ≤ C exp{ (1/16π) ||∇u||² + ū }.

This completes the proof of the Lemma.

From Lemma 5.1.2, we can conclude that the functional J(·) is bounded from below in H(S). In fact, for u ∈ H(S) we have ū = 0, and hence

∫_S R(x) e^u dA ≤ max R ∫_S e^u dA ≤ max R(x) · C exp{ (1/16π) ||u||² }.

It follows that

J(u) ≥ (1/2) ||u||² − 8π [ ln(C max R) + (1/16π) ||u||² ] = −8π ln(C max R).   (5.12)

In particular, the functional J is bounded from below in H(S). Now we can take a minimizing sequence {u_k} ⊂ H(S):

J(u_k) → inf_{u ∈ H(S)} J(u).

However, as we mentioned before, a minimizing sequence may not be bounded; it may 'leak' to infinity. To prevent this from happening, we need some coerciveness of the functional. Although it is not true in general, we do have coerciveness for J on some special subsets of H(S), for instance, on the set of even functions, that is, antipodal symmetric functions. More precisely, we have the following "Distribution of Mass Principle" from Chen and Li [CL7] [CL8].
Lemma 5.1.3 Let Ω₁ and Ω₂ be two subsets of S such that

dist(Ω₁, Ω₂) ≥ δ_o > 0,

and let 0 < α_o ≤ 1/2. Then for any ε > 0, there exists a constant C = C(α_o, δ_o, ε), such that the inequality

∫_S e^u dA ≤ C exp{ (1/32π + ε) ||∇u||² + ū }   (5.13)

holds for all u ∈ H^1(S) satisfying

∫_{Ω₁} e^u dA ≥ α_o ∫_S e^u dA and ∫_{Ω₂} e^u dA ≥ α_o ∫_S e^u dA.   (5.14)

Intuitively, if we think of e^{u(x)} as the density of a solid sphere S at the point x, then ∫_{Ω_i} e^u dA is the mass of the region Ω_i, and ∫_S e^u dA is the total mass of the sphere. The Lemma says that if the mass of the sphere is somewhat distributed over two separated parts of the sphere, that is, if the total mass does not concentrate at one point, then we have a stronger inequality than (5.9).

If we apply this stronger inequality to the family of even functions in H(S), we obtain, for all even functions u in H(S),

∫_S e^u dA ≤ C exp{ (1/32π + ε) ||u||² }.   (5.15)

This stronger inequality guarantees the boundedness of a minimizing sequence. In fact, for even functions we now have

J(u) ≥ (1/2) ||u||² − 8π ln(C max R) − 8π (1/32π + ε) ||u||² = ( 1/2 − 1/4 − 8πε ) ||u||² − 8π ln(C max R).

Choosing ε small enough, we arrive at

J(u) ≥ (1/8) ||u||² − C₁.

Therefore, a minimizing sequence of even functions is bounded in H^1(S), and hence it converges weakly (up to a subsequence) to a limit u_o in H^1(S). As we will see in the next section, the functional J is weakly lower semi-continuous, and therefore the u_o so obtained is the minimum of J(u), that is, a weak solution of equation (5.3). Then by a standard regularity argument, one can show that u_o is a classical solution.

In many articles (see [CD], [CD1], [CY], and [CY1]), the authors use the "center of mass" analysis; more precisely, they consider the center of mass of the sphere with density e^{u(x)} at the point x,

P(u) = ∫_S x e^u dA / ∫_S e^u dA,

and use this to study the behavior of a minimizing sequence. Both analyses are equivalent on the sphere S². However, since the "center of mass" P(u) lies in the ambient space R³, this kind of analysis cannot be applied on a general two dimensional compact Riemannian surface, while the "Distribution of Mass" analysis can be applied to any compact surface, even surfaces with conical singularities. Interested readers may see the authors' papers [CL7] [CL8] for more general versions of inequality (5.13).

Proof of Lemma 5.1.3. It suffices to show that, for all u ∈ H^1(S) with ∫_S u dA = 0 satisfying (5.14),

∫_S e^u dA ≤ C exp{ (1/32π + ε) ||∇u||² }.   (5.16)

Let g₁, g₂ be two smooth functions such that 1 ≥ g_i ≥ 0,

g_i(x) ≡ 1, for x ∈ Ω_i, i = 1, 2,

and

supp g₁ ∩ supp g₂ = ∅.

Case i) If ||∇(g₁u)|| ≤ ||∇(g₂u)||, then by (5.9),

∫_S e^u dA ≤ (1/α_o) ∫_{Ω₁} e^u dA ≤ (1/α_o) ∫_S e^{g₁u} dA
  ≤ (C/α_o) exp{ (1/16π) ||∇(g₁u)||² + \overline{g₁u} }
  ≤ (C/α_o) exp{ (1/32π) ( ||∇(g₁u)||² + ||∇(g₂u)||² ) + \overline{g₁u} }
  ≤ (C/α_o) exp{ (1/32π) (1 + ε₁) ||∇u||² + c₁(ε₁) ||u||²_{L²} + \overline{g₁u} },   (5.17)

for some small ε₁ > 0. Here we have used the fact that

||∇(g₁u)||² + ||∇(g₂u)||² = ∫_S ( |∇(g₁u)|² + |∇(g₂u)|² ) dA
  = ∫_S [ (g₁² + g₂²) |∇u|² + ( |∇g₁|² + |∇g₂|² ) u² + 2 ( g₁∇g₁ + g₂∇g₂ ) · u ∇u ] dA
  ≤ ||∇u||²_{L²} + C₁ ||u||_{L²} ||∇u||_{L²} + C₂ ||u||²_{L²}.

In order to get rid of the term ||u||²_{L²} on the right hand side of the above inequality, we employ the condition ∫_S u dA = 0. Given any η > 0, choose a number a such that

meas{x ∈ S | u(x) ≥ a} = η.

Applying (5.17) to the function (u − a)⁺ ≡ max{0, u − a}, we have

∫_S e^u dA ≤ e^a ∫_S e^{u−a} dA ≤ e^a ∫_S e^{(u−a)⁺} dA ≤ C exp{ (1/32π)(1 + ε₁) ||∇u||² + c₁(ε₁) ||(u−a)⁺||²_{L²} + a }.   (5.18)
By the Hölder and Sobolev inequalities (see Theorem 4.6.3),

||(u−a)⁺||²_{L²} ≤ η^{1/2} ||(u−a)⁺||²_{L⁴} ≤ c η^{1/2} ( ||∇u||² + ||(u−a)⁺||²_{L²} ).   (5.19)

Choose η so small that c η^{1/2} ≤ 1/2; then the above inequality implies

||(u−a)⁺||²_{L²} ≤ 2 c η^{1/2} ||∇u||².   (5.20)

Now by the Poincaré inequality (see Theorem 4.6.6),

a η ≤ ∫_{u ≥ a} u dA ≤ ∫_S |u| dA ≤ c ||∇u||,

and hence for any δ > 0,

a ≤ δ ||∇u||² + c₂ / (4 δ η²).   (5.21)

Now (5.16) follows from (5.18), (5.20), and (5.21), with C = C(δ, η, ε₁, α_o).

Case ii) If ||∇(g₂u)|| ≤ ||∇(g₁u)||, then, by a similar argument as in case i), we again obtain (5.16). This completes the proof.

5.1.3 Existence of Weak Solutions

Theorem 5.1.1 (Moser) If the function R is even, that is, if

R(x) = R(−x), ∀ x ∈ S²,

where x and −x are two antipodal points, then the necessary and sufficient condition for equation (5.3) to have a solution is that R be positive somewhere.

To prove the existence of a weak solution, we consider, as in the previous section, the functional

J(u) = (1/2) ∫_S |∇u|² dx − 8π ln ∫_S R(x) e^u dx

in

H_e(S) = {u ∈ H(S) | u is even almost everywhere}.

Let {u_k} be a minimizing sequence of J in H_e(S). Then by (5.15), {u_k} is bounded in H^1(S), and hence possesses a subsequence (still denoted by {u_k}) that converges weakly to some element u_o in H^1(S). Then by the compact Sobolev embedding of H^1 into L²,

u_k → u_o in L²(S).   (5.22)

Consequently,

u_o(−x) = u_o(x) almost everywhere, and ∫_S u_o(x) dA = 0,
that is, u_o ∈ H_e(S).

To show that the functional J is weakly lower semi-continuous, i.e.

J(u_o) ≤ lim J(u_k) = inf_{u ∈ H_e(S)} J(u),

we need the following

Lemma 5.1.4 If {u_k} converges weakly to u in H^1(S), then there exists a subsequence (still denoted by {u_k}), such that

∫_S e^{u_k(x)} dA → ∫_S e^{u(x)} dA, as k→∞.

We postpone the proof of this lemma for a moment. From this lemma, we obviously have

∫_S R(x) e^{u_k} dA → ∫_S R(x) e^{u_o} dA.

On the other hand, as we showed in the previous chapter, the Dirichlet integral is weakly lower semi-continuous:

lim_{k→∞} ∫_S |∇u_k|² dA ≥ ∫_S |∇u_o|² dA.

It follows that J(·) is weakly lower semi-continuous, and consequently

J(u_o) ≤ inf_{u ∈ H_e(S)} J(u).

On the other hand, since u_o ∈ H_e(S), we also have J(u_o) ≥ inf_{u ∈ H_e(S)} J(u). Therefore, u_o is a minimum of J in H_e(S), and for any v ∈ H_e(S),

0 = <J'(u_o), v> = ∫_S < ∇u_o, ∇v > dA − 8π ∫_S R(x) e^{u_o} v dA / ∫_S R(x) e^{u_o} dA.

For any w ∈ H(S), let

v(x) = (1/2) ( w(x) + w(−x) ) := (1/2) ( w(x) + w⁻(x) ).

Then obviously v ∈ H_e(S), and it follows that

0 = <J'(u_o), v> = (1/2) ( <J'(u_o), w> + <J'(u_o), w⁻> ) = <J'(u_o), w>.

Here we have used the fact that u_o(−x) = u_o(x). This implies that u_o is a critical point of J in H^1(S), and hence a constant multiple of u_o is a weak solution of equation (5.3). Now what is left is to prove Lemma 5.1.4.

Proof of Lemma 5.1.4. Recall that ∫_S u dA = 0 for u ∈ H(S). By Lemma 5.1.2, for any real number p > 0, we have

∫_S e^{pu} dA ≤ C exp{ (p²/16π) ∫_S |∇u|² dA + (p/4π) ∫_S u dA }.   (5.23)

And by the Hölder inequality, for any 0 < p < 2,

∫_S |∇(e^u)|^p dA = ∫_S e^{pu} |∇u|^p dA ≤ ( ∫_S e^{2pu/(2−p)} dA )^{(2−p)/2} ( ∫_S |∇u|² dA )^{p/2}.   (5.24)

Inequalities (5.23) and (5.24) imply that {e^{u_k}} is a bounded sequence in H^{1,p} for any 1 < p < 2. By the compact Sobolev embedding of H^{1,p} into L¹, there exists a subsequence of {e^{u_k}} which converges strongly to e^{u_o} in L¹(S). This completes the proof of the Lemma.

Chen and Ding [CD] generalized Moser's result to a broader class of symmetric functions, as we will describe in the following.

Let O(3) be the group of orthogonal transformations of R³, and let G be a closed finite subgroup of O(3). Assume that R(x) is G-symmetric, or G-invariant, i.e.

R(gx) = R(x), ∀ g ∈ G.   (5.25)

The action of G on S² induces an action of G on the Hilbert space H^1(S),

g[u](x) = u(gx),

under which the fixed point subspace of H^1(S) is given by

X = {u ∈ H^1(S) | u(gx) = u(x), ∀ g ∈ G}.

Let

H(S) = {u ∈ H^1(S) | ∫_S R(x) e^u dA > 0},

and let X* = X ∩ H(S). Under the assumption (5.25), the functional

J(u) = (1/2) ∫_S |∇u|² dx − 8π ln ∫_S R(x) e^u dx

is G-invariant in X*, i.e.

J(g[u]) = J(u), ∀ g ∈ G, ∀ u ∈ X*.

We seek a minimum of J in X*. The following lemma guarantees that such a minimum is the critical point we desire.

Lemma 5.1.5 Assume that u_o is a critical point of J in X*; then it is also a critical point of J in H(S).

Proof. First notice that, for any g ∈ G and any u, v ∈ H(S),

<J'(u), g[v]> = <J'(g⁻¹[u]), v>,

where g⁻¹ is the inverse of g. This can be easily seen from

<J'(u), v> = ∫_S ∇u · ∇v dA − 8π ∫_S R(x) e^u v dA / ∫_S R(x) e^u dA.   (5.26)

For any w ∈ H(S), let

v = (1/m) ( g₁[w] + ··· + g_m[w] ), where G = {g₁, ···, g_m}.

Then for any g ∈ G,

g[v] = (1/m) ( g g₁[w] + ··· + g g_m[w] ) = v.

Hence v ∈ X*. It follows that

0 = <J'(u_o), v> = (1/m) Σ_i <J'(u_o), g_i[w]> = (1/m) Σ_i <J'(g_i⁻¹[u_o]), w> = <J'(u_o), w>.

Here we have used the fact that u_o ∈ X, so that g_i⁻¹[u_o] = u_o. Therefore u_o is a critical point of J in H(S).

Let

F_G = {x ∈ S² | gx = x, ∀ g ∈ G}

be the set of fixed points under the action of G, and let m = max_{F_G} R.

Theorem 5.1.2 (Chen and Ding [CD]) Suppose that R ∈ C^α(S²) satisfies (5.25) with max_S R > 0. Then equation (5.2) has a solution provided one of the following conditions holds:

(i) F_G = ∅;
(ii) m ≤ 0; or
(iii) m > 0 and c* := inf_{X*} J < −8π ln 4πm.

Remark 5.1.1 In the case when the function R(x) is even, the group G = {id, −id}, where id is the identity transform:

id(x) = x and −id(x) = −x, ∀ x ∈ S².

Obviously, in this case F_G is empty. Hence the above Theorem contains Moser's existence result as a special case.
To prove this theorem, we need another two lemmas.

Lemma 5.1.6 (Onofri [On]) For any u ∈ H^1(S) with ∫_S u dA = 0,

∫_S e^u dA ≤ 4π exp{ (1/16π) ∫_S |∇u|² dA }.   (5.27)

Lemma 5.1.7 Suppose that {u_i} is a sequence in H(S) with J(u_i) ≤ β. If ||u_i||→∞, then there exists a subsequence (still denoted by {u_i}) and a point ξ ∈ S² with R(ξ) > 0, such that

∫_S R(x) e^{u_i} dA = ( R(ξ) + o(1) ) ∫_S e^{u_i} dA,   (5.28)

and

lim_{i→∞} J(u_i) ≥ −8π ln 4πR(ξ).   (5.29)

Proof. We first show that, passing to a subsequence, there exists a point ξ ∈ S², such that for any ε > 0,

∫_{S\B_ε(ξ)} e^{u_i} dA / ∫_S e^{u_i} dA → 0.   (5.30)

Otherwise, there exist two subsets Ω₁ and Ω₂ of S², with dist(Ω₁, Ω₂) ≥ ε_o > 0, such that {u_i} satisfies all the conditions in Lemma 5.1.3. Hence the inequality

∫_S e^{u_i} dA ≤ C exp{ ( 1/16π − δ ) ||∇u_i||² }

holds for some δ > 0. Then, by an argument similar to the one for even functions, we conclude that {u_i} is bounded. This contradicts our assumption that ||u_i||→∞.

Roughly speaking, (5.30) says that if a minimizing sequence of J is unbounded, then the mass of the surface S with density e^{u_i(x)} at the point x will concentrate at one point ξ of S. Now (5.30) and the continuity assumption on R(x) imply (5.28); and since u_i ∈ H(S) and J(u_i) ≤ β, we must have R(ξ) > 0.

To verify (5.29), we use (5.28) and Onofri's Lemma:

J(u_i) = (1/2) ∫_S |∇u_i|² dA − 8π ln ∫_S R(x) e^{u_i} dA
       = (1/2) ∫_S |∇u_i|² dA − 8π ln( R(ξ) + o(1) ) − 8π ln ∫_S e^{u_i} dA
       ≥ (1/2) ∫_S |∇u_i|² dA − 8π ln( R(ξ) + o(1) ) − 8π ( ln 4π + (1/16π) ∫_S |∇u_i|² dA )
       = −8π ln( 4πR(ξ) + o(1) ).

This completes the proof of the Lemma.

Proof of the Theorem. Define

c* = inf_{X*} J,

and let {u_i} ⊂ X* be a minimizing sequence, i.e. J(u_i)→c* as i→∞. As we argued in the previous section (see (5.12)), we have c* > −∞. Similar to the reasoning at the beginning of this section, one can see that we only need to show that {u_i} is bounded in H^1(S); then {u_i} possesses a subsequence converging weakly to some u_o ∈ X*, and by Lemma 5.1.5, a constant multiple of u_o is the desired weak solution of equation (5.3).

Assume on the contrary that ||u_i||→∞. Then by Lemma 5.1.7, there is a point ξ ∈ S² with R(ξ) > 0, such that for any ε > 0, we have

∫_{S\B_ε(ξ)} e^{u_i} dA / ∫_S e^{u_i} dA → 0.   (5.31)

Since u_i is invariant under the action of G, we must also have, for every g ∈ G,

∫_{S\B_ε(gξ)} e^{u_i} dA / ∫_S e^{u_i} dA → 0.   (5.32)

Because ε is arbitrary, we deduce that gξ = ξ for every g ∈ G, that is, ξ ∈ F_G. Under condition (i), F_G is empty; this is a contradiction. Moreover, by Lemma 5.1.7,

R(ξ) > 0 and c* ≥ −8π ln 4πR(ξ).

Noticing that m ≥ R(ξ) by the definition of m, this again contradicts conditions (ii) and (iii). Therefore, under either one of the conditions (i), (ii), or (iii), the minimizing sequence {u_i} must be bounded, and hence equation (5.3) possesses a weak solution.

Here, conditions (i) and (ii) in Theorem 5.1.2 can be easily verified through the group G or the given function R. One may wonder, however, under what conditions on R we have condition (iii). The following theorem provides a sufficient condition.

Theorem 5.1.3 (Chen and Ding [CD]) Suppose that R ∈ C²(S²) is G-invariant, and m = max_{F_G} R > 0. Let

D = {x ∈ F_G | R(x) = m}.

If ΔR(x_o) > 0 for some x_o ∈ D, then

c* < −8π ln 4πm.   (5.33)
Proof. Choose a spherical polar coordinate system on S := S²,

(θ, φ), −π/2 ≤ θ ≤ π/2, 0 ≤ φ ≤ 2π,

with x_o = (π/2, φ) as the north pole.

Consider the following family of functions:

u_λ(θ, φ) = ln [ (1 − λ²) / (1 − λ sin θ)² ], v_λ = u_λ − (1/4π) ∫_S u_λ dA, 0 < λ < 1.

One can verify that this family of functions satisfies the equation

− Δu_λ + 2 = 2 e^{u_λ}.   (5.34)

As λ→1, the family {u_λ} 'blows up' (tends to ∞) at the one point x_o, while approaching −∞ everywhere else on S². Therefore, the mass of the sphere with density e^{u_λ(x)} is concentrating at the point x_o as λ→1.

It is easy to see that v_λ ∈ H(S) for λ close to 1. Moreover, we claim that v_λ ∈ X, i.e.

v_λ(gx) = v_λ(x), ∀ g ∈ G.

In fact, since g is an isometry of S² and gx_o = x_o, we have

d(gx, x_o) = d(gx, gx_o) = d(x, x_o),

where d(·,·) is the geodesic distance on S². But v_λ depends only on θ, that is, only on d(x, x_o), so v_λ(gx) = v_λ(x), ∀ g ∈ G. It follows that v_λ ∈ X* for λ sufficiently close to 1.

By a direct computation, one can verify that

(1/2) ∫_S |∇u_λ|² dA + 8π ū_λ = 0.

Consequently,

J(v_λ) = −8π ln ∫_S R(x) e^{u_λ} dA.

Notice that c* ≤ J(v_λ) by the definition of c*. Thus, in order to show (5.33), we only need to verify that, for λ sufficiently close to 1,

∫_S R(x) e^{u_λ} dA > 4πm.   (5.35)

Let δ > 0 be small. We have

∫_S R e^{u_λ} dA = (1 − λ²) { ∫_0^{2π} ∫_{π/2−δ}^{π/2} [ R(θ, φ) − m ] cos θ / (1 − λ sin θ)² dθ dφ
   + ∫_0^{2π} ∫_{−π/2}^{π/2−δ} [ R(θ, φ) − m ] cos θ / (1 − λ sin θ)² dθ dφ } + 4πm
 = (1 − λ²) { I + II } + 4πm.

Here we have used the fact that

∫_S e^{u_λ} dA = 4π,

which can be derived directly from equation (5.34).

For each fixed δ, it is easy to check that the integral II remains bounded as λ tends to 1. Thus (5.35) will hold if we can show that the integral I → +∞ as λ→1.

Notice that x_o is a maximum point of the restriction of R on F_G. Hence, by the principle of symmetric criticality, x_o is a critical point of R, that is,

∇R(x_o) = 0.

Using this fact and the second order Taylor expansion of R(x) near x_o, we can derive

I = π ∫_{π/2−δ}^{π/2} [ (1/2) ΔR(x_o) (θ − π/2)² + o( (θ − π/2)² ) ] cos θ / (1 − λ sin θ)² dθ
  ≥ c_o ∫_{π/2−δ}^{π/2} cos³ θ / (1 − λ sin θ)² dθ.

Here c_o is a positive constant; we have chosen δ to be sufficiently small and have used the fact that ΔR(x_o) is positive. From the above inequality, one can easily verify that I → +∞ as λ→1, since the integral

∫_{π/2−δ}^{π/2} cos³ θ / (1 − sin θ)² dθ

diverges to +∞. This completes the proof of the Theorem.
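The two computational claims in the proof, that u_λ satisfies (5.34) and that ∫_S e^{u_λ} dA = 4π, can be verified symbolically. In the sketch below (our own check, not from the text), a function of the latitude θ alone is handled with Δu = (1/cos θ)(cos θ u')', and the mass integral is evaluated at the sample value λ = 1/2:

```python
import sympy as sp

theta, lam = sp.symbols('theta lam')
E = (1 - lam**2) / (1 - lam*sp.sin(theta))**2     # e^{u_lambda}
u = sp.log(E)

# Laplace-Beltrami on S^2 for functions of the latitude theta alone:
#   Delta u = (1/cos(theta)) d/dtheta ( cos(theta) du/dtheta )
lap = sp.diff(sp.cos(theta)*sp.diff(u, theta), theta) / sp.cos(theta)

# Equation (5.34): -Delta u_lambda + 2 = 2 e^{u_lambda}
residual = sp.simplify(-lap + 2 - 2*E)

# Total mass int_S e^{u_lambda} dA with dA = cos(theta) dtheta dphi, at lambda = 1/2:
mass = 2*sp.pi*sp.integrate((E*sp.cos(theta)).subs(lam, sp.Rational(1, 2)),
                            (theta, -sp.pi/2, sp.pi/2))   # expected: 4*pi
```
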
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
Let M be a compact two dimensional Riemannian manifold. Let g_o(x) be a metric on M with the corresponding Laplace-Beltrami operator Δ and Gaussian curvature K_o(x). Given a function K(x) on M, can it be realized as the Gaussian curvature associated to the pointwise conformal metric

g(x) = e^{2u(x)} g_o(x)?

As we mentioned before, to answer this question, it is equivalent to solve the following semilinear elliptic equation:

− Δu + K_o(x) = K(x) e^{2u}, x ∈ M.   (5.36)
To simplify this equation, we introduce the following simple existence result.
Lemma 5.2.1 Let f(x) ∈ L²(M). Then the equation

  −Δu = f(x), x ∈ M  (5.37)

possesses a solution if and only if

  ∫_M f(x) dA = 0.  (5.38)

Proof. The 'only if' part can be seen by integrating both sides of equation (5.37) on M. We now prove the 'if' part: assuming that (5.38) holds, we will obtain the existence of a solution to (5.37). To this end, we consider the functional

  J(u) = (1/2)∫_M |∇u|² dA − ∫_M f(x)u dA

under the constraint

  G = {u ∈ H¹(M) | ∫_M u(x) dA = 0}.

As we have seen before, we can use, in G, the equivalent norm

  ‖u‖ = ( ∫_M |∇u|² dA )^{1/2}.

By the Hölder and Poincaré inequalities, we derive

  J(u) ≥ (1/2)‖u‖² − ‖f‖_{L²}‖u‖_{L²}
      ≥ (1/2)‖u‖² − co‖f‖_{L²}‖u‖
      = (1/2)(‖u‖ − co‖f‖_{L²})² − co²‖f‖²_{L²}/2.  (5.39)
The above inequality implies immediately that the functional J(u) is bounded from below on the set G. Let {uk} be a minimizing sequence of J in G, i.e.

  J(uk) → inf_G J, as k → ∞.

Then, again by (5.39), {uk} is bounded in H¹, and hence it possesses a subsequence (still denoted by {uk}) that converges weakly to some uo in H¹(M). From the compact Sobolev imbedding

  H¹ ↪↪ L²,
we see that uk → uo strongly in L²(M), and hence also in L¹(M). Therefore

  ∫_M uo(x) dA = lim ∫_M uk(x) dA = 0  (5.40)

and

  ∫_M f(x)uo(x) dA = lim ∫_M f(x)uk(x) dA.  (5.41)

(5.40) implies that uo ∈ G, and hence

  J(uo) ≥ inf_G J(u).  (5.42)

On the other hand, (5.41) and the weak lower semi-continuity of the Dirichlet integral,

  lim inf ∫_M |∇uk|² dA ≥ ∫_M |∇uo|² dA,

lead to

  J(uo) ≤ inf_G J(u).

From this and (5.42), we conclude that uo is a minimizer of J in the set G, and therefore there exists a constant λ such that uo is a weak solution of

  −Δuo − f(x) = λ.

Integrating both sides and using (5.38), we find that λ = 0. Therefore uo is a solution of equation (5.37). This completes the proof of the Lemma.

We now simplify equation (5.36). First let ũ = 2u; then

  −Δũ + 2Ko(x) = 2K(x)e^{ũ}.  (5.43)
Let v(x) be a solution of

  −Δv = 2Ko(x) − 2K̄o,  (5.44)

where

  K̄o = (1/|M|) ∫_M Ko(x) dA

is the average of Ko(x) on M. Because the integral of the right hand side of (5.44) over M is 0, a solution of (5.44) exists by Lemma 5.2.1. Let w = ũ + v. Then it is easy to verify that

  −Δw + 2K̄o = 2K(x)e^{−v}e^{w}.  (5.45)

Finally, letting
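The solvability criterion of Lemma 5.2.1 can be seen concretely in Fourier space: on a closed manifold, −Δ annihilates constants, so the zero-frequency mode of f must vanish. A minimal numerical sketch (ours, not from the text), on the circle as a stand-in for a general compact surface M:

```python
import numpy as np

# Solve -u'' = f on the circle by Fourier series.  Since -d^2/dx^2 kills
# constants, the k = 0 mode forces the solvability condition: the mean of f
# must vanish, exactly as in Lemma 5.2.1.
N = 256
x = 2*np.pi*np.arange(N)/N
f = np.cos(3*x) + 0.5*np.sin(x)              # mean-zero data: solvable
k = np.fft.fftfreq(N, d=1.0/N)               # integer wave numbers
fh = np.fft.fft(f)
uh = np.zeros_like(fh)
nz = k != 0
uh[nz] = fh[nz]/(k[nz]**2)                   # -u'' = f  <=>  k^2 * uh = fh
u = np.real(np.fft.ifft(uh))

# residual of -u'' - f, computed spectrally
residual = np.real(np.fft.ifft((k**2)*np.fft.fft(u))) - f
print(np.max(np.abs(residual)))              # small (spectral accuracy)
```

If f had a nonzero mean, the k = 0 equation `0 * uh[0] = fh[0]` would be unsolvable, which is exactly the condition ∫_M f dA = 0 of the lemma.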
  α = 2K̄o and R(x) = 2K(x)e^{−v(x)},

and rewriting w as u, we reduce equation (5.36) to

  −Δu + α = R(x)e^{u(x)}, x ∈ M.  (5.46)

By the well-known Gauss-Bonnet Theorem, if g is a Riemannian metric on M with corresponding Gaussian curvature Kg(x), then

  ∫_M Kg(x) dA_g = 2πχ(M),

where χ(M) is the Euler characteristic of M and dA_g is the area element associated with the metric g. We say that M is negatively curved if χ(M) < 0; in this case α < 0.

5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions

Let M be a compact two dimensional manifold and let α < 0. In [KW], Kazdan and Warner considered the equation

  −Δu + α = R(x)e^{u(x)}, x ∈ M,  (5.47)

and proved

Theorem 5.2.1 (Kazdan-Warner) Let R̄ be the average of R(x) on M. Then
i) a necessary condition for (5.46) to have a solution is R̄ < 0;
ii) if R̄ < 0, then there is a constant αo, with −∞ ≤ αo < 0, such that one can solve (5.46) if α > αo, but cannot solve (5.46) if α < αo;
iii) αo = −∞ if and only if R(x) ≤ 0.

The existence of a solution was established by using the method of upper and lower solutions. We say that u+ is an upper solution of equation (5.47) if

  −Δu+ + α ≥ R(x)e^{u+}, x ∈ M.

Similarly, we say that u− is a lower solution of (5.47) if

  −Δu− + α ≤ R(x)e^{u−}, x ∈ M.

The following approach is adapted from [KW] with minor modifications.

Lemma 5.2.2 If a solution of (5.47) exists, then R̄ < 0. Even more strongly, the unique solution φ of

  Δφ + αφ = R(x), x ∈ M  (5.48)

must be positive.
Proof. We first prove the second part. Using the substitution v = e^{−u}, one sees that (5.47) has a solution u if and only if there is a positive solution v of

  Δv + αv − R(x) − |∇v|²/v = 0.  (5.49)

Integrating both sides of (5.49) over M, we see immediately that R̄ < 0.

Assume that v is such a positive solution. Let φ be the unique solution of (5.48) and let w = φ − v. Then

  Δw + αw = −|∇v|²/v ≤ 0.  (5.50)

It follows that w ≥ 0. Otherwise, there is a point xo ∈ M such that w(xo) < 0 and xo is a minimum of w; hence Δw(xo) ≥ 0, and since α < 0 we must have Δw(xo) + αw(xo) > 0, a contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. This shows that a necessary condition for (5.47) to have a solution is that the unique solution φ of (5.48) must be positive. This completes the proof of the Lemma.

Lemma 5.2.3 (The Existence of Upper Solutions)
i) If R(x) ≤ 0 but not identically 0, then for any α < 0 there exists an upper solution u+ of (5.47).
ii) If R̄ < 0, then there exists a small δ > 0 such that, for −δ ≤ α < 0, there exists an upper solution u+ of (5.47).

Proof. Let v be a solution of

  −Δv = R(x) − R̄.

We try to choose constants a and b so that u+ = av + b is an upper solution. We have

  −Δu+ + α − R(x)e^{u+} = aR(x) − aR̄ + α − R(x)e^{av+b}.  (5.51)

We regroup the right hand side of (5.51) as

  f(a, b, x) ≡ (−aR̄ + α) − R(x)(e^{av+b} − a).  (5.52)

In Case i), we first choose a sufficiently large, so that −aR̄ + α > 0. Then we choose b large, so that

  e^{av(x)+b} − a ≥ 0, ∀ x ∈ M.

This is possible because v(x) is continuous and M is compact. Since R(x) ≤ 0, the right hand side of (5.52) is then nonnegative, and consequently
  −Δu+ + α − R(x)e^{u+} ≥ 0, for all x ∈ M.

Thus the u+ so constructed is an upper solution of (5.47).

In Case ii), we choose e^b = a, so that (5.52) becomes

  f(a, b, x) = (−aR̄ + α) − aR(x)(e^{av} − 1).

For α ≥ aR̄/2, we then have

  f(a, b, x) ≥ −aR̄/2 − aR(x)(e^{av} − 1) ≥ a( −R̄/2 − ‖R‖_{L∞} |e^{av} − 1| ).

As a → 0, we see that e^{av(x)} → 1 uniformly on M. Again by the continuity of v and the compactness of M, by choosing a sufficiently small we can make

  |e^{av(x)} − 1| ≤ −R̄/(2‖R‖_{L∞}), for all x ∈ M.

Therefore f(a, b, x) ≥ 0, and the u+ so constructed is an upper solution of (5.47) for α ≥ aR̄/2; the conclusion of Case ii) holds with δ = −aR̄/2. This completes the proof of the Lemma.

Lemma 5.2.4 (Existence of Lower Solutions) Given any R ∈ L^p(M) and a function u+ ∈ W^{2,p}(M), there is a lower solution u− ∈ W^{2,p}(M) of (5.47) such that u− < u+.

Proof. If R(x) is bounded from below, then one can clearly use any sufficiently large negative constant as u−. For general R(x) ∈ L^p(M), let

  K(x) = max{1, −R(x)}.

Choose a positive constant a so that aK̄ = −α. Then the average of aK(x) + α is 0 and (aK(x) + α) ∈ L^p(M). Thus there is a solution w of

  Δw = aK(x) + α.  (5.53)

By the L^p regularity theory, w ∈ W^{2,p}(M), and hence w is continuous. Let u− = w − λ, where λ is a positive number to be determined later. Then

  −Δu− + α − R(x)e^{u−} = −Δw + α − R(x)e^{w−λ} = −aK(x) − R(x)e^{w−λ} ≤ −K(x)(a − e^{w−λ}).

Choosing λ sufficiently large so that a − e^{w−λ} ≥ 0,
and noticing that K(x) ≥ 0, we arrive at

  −Δu− + α − R(x)e^{u−} ≤ 0, for all x ∈ M.

Therefore u− is a lower solution. Moreover, one can make λ large enough so that u− = w − λ < u+. This completes the proof of the Lemma.

Lemma 5.2.5 Let p > 2. If there exist an upper solution u+ and a lower solution u−, both in W^{2,p}(M), of (5.47), and if u− ≤ u+, then there is a solution u ∈ W^{2,p}(M) of (5.47) with u− ≤ u ≤ u+. Moreover, u is C^∞ in any open set where R(x) is C^∞.

Proof. We will rewrite the equation (5.47) as

  Lu = f(x, u),

then start from the upper solution u+ and apply the iterations

  Lu₁ = f(x, u+), Lu_{i+1} = f(x, u_i), i = 1, 2, ···.

In order that u− ≤ u_i ≤ u+ for each i, we need the operator L to obey some maximum principle and f(x, ·) to possess some monotonicity.

The Laplace operator in the original equation does not obey a maximum principle. However, if we let

  Lu = −Δu + k(x)u,

where k(x) is any positive function on M, then the new operator L obeys the maximum principle: if Lu ≥ 0, then u(x) ≥ 0 for all x ∈ M. To see this, suppose on the contrary that there is some point on M where u < 0, and let xo be a negative minimum of u. Then −Δu(xo) ≤ 0, and it follows that

  Lu(xo) = −Δu(xo) + k(xo)u(xo) ≤ k(xo)u(xo) < 0,

a contradiction. Hence the maximum principle holds for L.

We now write equation (5.47) as

  Lu ≡ −Δu + k(x)u = R(x)e^u − α + k(x)u := f(x, u).

Moreover, to keep u− ≤ u_i ≤ u+, it requires that f(x, ·) be monotone increasing for u ≤ u+, that is,

  ∂f/∂u = R(x)e^u + k(x) ≥ 0.

To this end, we choose

  k(x) = k₁(x)e^{u+(x)}, where k₁(x) = max{1, −R(x)}.

Indeed, for u ≤ u+,

  R(x)e^u + k₁(x)e^{u+} ≥ −k₁(x)e^u + k₁(x)e^{u+} ≥ 0.
Based on this choice of L and f, we show by mathematical induction that the iteration sequence so constructed is monotone decreasing:

  u− ≤ u_{i+1} ≤ u_i ≤ ··· ≤ u₁ ≤ u+, i = 1, 2, ···.  (5.54)

In fact, from

  Lu+ ≥ f(x, u+) and Lu₁ = f(x, u+),

we derive immediately

  L(u+ − u₁) ≥ 0.

Then the maximum principle for L implies u+ ≥ u₁. This completes the first step of our induction. Assume that, for some integer m,

  u_{m+1} ≤ u_m ≤ u+;

we will show that u_{m+2} ≤ u_{m+1}. Since f(x, ·) is monotone increasing in u for u ≤ u+, we have

  f(x, u_{m+1}) ≤ f(x, u_m),

and hence

  L(u_{m+1} − u_{m+2}) = f(x, u_m) − f(x, u_{m+1}) ≥ 0.

Therefore u_{m+1} ≥ u_{m+2}. Similarly, one can show the other side of the inequality and arrive at

  u− ≤ u_{i+1} ≤ u_i ≤ ··· ≤ u+.  (5.55)

This verifies (5.54) through mathematical induction.

Since u+, u−, and all the u_i are continuous, inequality (5.55) shows that {u_i} is uniformly bounded. Consequently, in the L^p norm,

  ‖Lu_{i+1}‖_{L^p} = ‖R(x)e^{u_i} − α + k(x)u_i‖_{L^p} ≤ C.

Hence {u_i} is bounded in W^{2,p}. By the compact imbedding of W^{2,p} into C¹, there is a subsequence that converges uniformly to some function u in C¹(M); in view of the monotonicity (5.54), the entire sequence {u_i} itself converges uniformly to u. Since L : W^{2,p} → L^p is continuous, we have

  ‖u_{i+1} − u_{j+1}‖_{W^{2,p}} ≤ C‖L(u_{i+1} − u_{j+1})‖_{L^p} ≤ C( ‖R‖_{L^p}‖e^{u_i} − e^{u_j}‖_{L^∞} + ‖k‖_{L^p}‖u_i − u_j‖_{L^∞} ).

Therefore {u_i} converges strongly in W^{2,p}, so u ∈ W^{2,p}(M), and it follows that u is a solution of (5.47) and satisfies u− ≤ u ≤ u+. Finally, by the Schauder theory, if R ∈ C^{m+γ}, then u ∈ C^{m+2+γ}; one thus proves inductively that u ∈ C^∞ in any open set where R(x) is C^∞. This completes the proof of the Lemma.

Based on these Lemmas, we are now able to complete the proof of Theorem 5.2.1.

Proof of Theorem 5.2.1. Part i) can obviously be derived from Lemma 5.2.2.
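The monotone iteration of Lemma 5.2.5 is easy to run numerically. Below is a sketch (ours, not from the text) on the circle, a 1-D periodic stand-in for M; the particular R, α, and the constant upper solution are illustrative choices, while k(x) = max{1, −R(x)}e^{u+(x)} is the choice made in the proof above.

```python
import numpy as np

# Monotone iteration for -u'' + alpha = R(x) e^u on the circle,
# a 1-D analog of the scheme L u_{i+1} = f(x, u_i) above.
N = 128
h = 2*np.pi/N
x = h*np.arange(N)
R = -1.0 - 0.5*np.cos(x)                 # R(x) < 0 everywhere (illustrative)
alpha = -1.0
u_plus = np.ones(N)                      # constant upper sol.: alpha - R e^{u+} >= 0
k = np.maximum(1.0, -R)*np.exp(u_plus)   # k(x) = k1(x) e^{u+(x)}, as in the proof

# periodic finite-difference second derivative, and L = -D2 + k
D2 = (-2*np.eye(N) + np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0))/h**2
L = -D2 + np.diag(k)

u = u_plus.copy()
for _ in range(120):
    u_new = np.linalg.solve(L, R*np.exp(u) - alpha + k*u)
    assert np.all(u_new <= u + 1e-12)    # the sequence decreases monotonically
    u = u_new

res = -D2 @ u + alpha - R*np.exp(u)
print(np.max(np.abs(res)))               # small: u solves the discrete equation
```

The matrix L is an M-matrix (positive diagonal dominance from k > 0), which is the discrete counterpart of the maximum principle used above; this is what makes the inner monotonicity assertion hold at every step.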
From Lemmas 5.2.3, 5.2.4, and 5.2.5, one can solve (5.47) whenever there is an upper solution. Let

  αo = inf{α | (5.47) is solvable}.

ii) If (5.47) possesses a solution u₁ for some α₁, then for any α > α₁ we have

  −Δu₁ + α > R(x)e^{u₁},

that is, u₁ is an upper solution for equation (5.47) at α, and hence the equation

  −Δu + α = R(x)e^u

also has a solution. Consequently, (5.47) is solvable for every α with 0 > α > αo. By virtue of Lemma 5.2.3, there exists a δ > 0 such that (5.47) is solvable for −δ ≤ α < 0; hence αo < 0.

iii) Lemma 5.2.3 infers that αo = −∞ if R(x) ≤ 0 but R(x) is not identically 0. Now, to complete the proof of the Theorem, it suffices to prove that, if R(x) is positive somewhere, then αo > −∞. By Lemma 5.2.2, we only need to show the

Claim: There exists α < 0 such that the unique solution of

  Δφ + αφ = R(x)  (5.56)

is negative somewhere.

We argue in the following two steps.

i) For a given α < 0, let u = u(x, α) be the solution of (5.56). Multiplying both sides of equation (5.56) by u and integrating over M, we obtain

  −∫_M |∇u|² dA + α ∫_M u² dA = ∫_M R(x)u(x) dA.

Then, by the Hölder inequality,

  ‖u‖²_{L²} ≤ (1/|α|) ∫_M |R u| dA ≤ (1/|α|) ‖R‖_{L²} ‖u‖_{L²}.

This implies that

  ‖u‖_{L²} ≤ ‖R‖_{L²}/|α|, and hence u(x, α) → 0 in L²(M), as α → −∞.  (5.57)

Actually, we can prove a stronger result:

  αφ(x) → R(x) almost everywhere, as α → −∞.  (5.58)
ii) To see (5.58), note that for α < 0 the operator Δ + α is invertible, so we can express the solution of (5.56) as

  φ = (Δ + α)^{−1} R(x).

We write

  R(x) − αφ = Δ(Δ + α)^{−1} R(x).  (5.59)

One can see the validity of (5.59) by simply applying the operator (Δ + α) to both sides of φ = (Δ + α)^{−1}R. Since R ∈ L²(M), one can show that

  Δ(Δ + α)^{−1} R(x) → 0 almost everywhere, as α → −∞.  (5.60)

(5.59) and (5.60) imply that R(x) − αφ → 0 almost everywhere as α → −∞, that is,

  φ(x) → (1/α) R(x), almost everywhere.

Since R(x) is continuous and positive somewhere, for sufficiently negative α the solution φ(x) is negative somewhere. From the above reasoning and Lemma 5.2.2, equation (5.47) is not solvable for such α; hence αo > −∞. This completes the proof of the Theorem.

5.2.2 Chen and Li's Results

From the previous section, one can see from Kazdan and Warner's result that, in the case when R(x) is nonpositive, equation (5.46) has been solved completely; in particular, a necessary and sufficient condition for equation (5.46) to have a solution for every 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x) changes signs, some questions still remain. One may naturally ask: Does equation (5.46) have a solution for the critical value α = αo? In [CL1], Chen and Li answered this question affirmatively.

Theorem 5.2.2 (Chen-Li) Assume that R(x) is a continuous function on M, and let αo be defined as in the previous section. Then equation (5.46) has at least one solution when α = αo.

Proof.

The Outline. We will prove the Theorem in 3 steps. Let {αk} be a sequence of numbers with αk > αo and αk → αo as k → ∞.
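The limit αφ → R used in the Claim can also be watched numerically. A Fourier sketch (ours, not from the text) on the circle; the particular R is an arbitrary smooth choice:

```python
import numpy as np

# Solve  phi'' + alpha*phi = R  on the circle and watch alpha*phi -> R
# as alpha -> -infinity, i.e. phi ~ R/alpha, the fact used in the Claim.
N = 256
x = 2*np.pi*np.arange(N)/N
R = np.sin(x) + 0.3*np.cos(2*x) + 0.5        # continuous, positive somewhere
k = np.fft.fftfreq(N, d=1.0/N)               # integer wave numbers
Rh = np.fft.fft(R)
errs = []
for alpha in (-10.0, -100.0, -1000.0):
    phi = np.real(np.fft.ifft(Rh/(alpha - k**2)))   # (Delta + alpha) phi = R
    errs.append(float(np.max(np.abs(alpha*phi - R))))
    print(alpha, errs[-1])
```

In Fourier space the error is R̂_k · k²/(α − k²), mode by mode, which decays like 1/|α|; this is a discrete version of (5.60).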
In step 1, for each fixed αk, we obtain a solution of equation (5.46) with α = αk by minimizing the functional

  Jk(u) = (1/2)∫_M |∇u|² dA + αk ∫_M u dA − ∫_M R(x)e^u dA

in a class of functions lying between a lower and an upper solution. In step 2, we show that the sequence of minimizers {uk} is bounded in the region where R(x) is positively bounded away from 0. In step 3, we prove that {uk} is bounded in H¹(M) and hence converges to a desired solution.

Step 1. For each αk > αo, choose α̃k with αo < α̃k < αk. By the result of Kazdan and Warner, there exists a solution φk of

  −Δφk + α̃k = R(x)e^{φk}.  (5.61)

We first show that there exists a constant co > 0 such that

  φk(x) ≥ −co, ∀ x ∈ M and ∀ k = 1, 2, ···.  (5.62)

Otherwise, for each k, let xk be a minimum of φk(x) on M; then, for a subsequence, φk(xk) → −∞ as k → ∞, and hence R(xk)e^{φk(xk)} → 0. On the other hand, since xk is a minimum, we have −Δφk(xk) ≤ 0, so that −Δφk(xk) + α̃k remains negative and bounded away from 0, because α̃k → αo < 0. These contradict equation (5.61). This verifies (5.62).

Then each ψk := φk is a supersolution of the equation

  −Δu + αk = R(x)e^{u(x)},  (5.63)

since

  −Δψk + αk > −Δψk + α̃k = R(x)e^{ψk(x)}.

Choose a sufficiently negative constant A < −co to be the lower solution of equation (5.63); that is,

  −ΔA + αk ≤ R(x)e^A.

This is possible because R(x) is bounded below on M, and hence, by taking A sufficiently negative, we can make R(x)e^A greater than any fixed negative number, in particular greater than αk.
(5. we have in particular that the function g(t) = Jk (uk + tv) achieves its minimum at t = 0 on the interval [− . It follows that. one ¨ e can easily see that Jk is bounded from below. by (5.64) Here we have used the H older and Poincar´ inequality. . we have uk + tv ∈ H.68). In order to show that uk is a solution of equation (5. such that uk (xo ) = ψk (xo ). because the functional Jk (u) is bounded from below.67) Consequently − uk (xo ) + αk − R(xo )euk (xo ) ≤ 0. First we show that uk (x) < ψk (x) . Since uk is a minimum of Jk in H. we need it be in the interior of H. 2 ud A − C1 M M eψk d A (5. It follows that − w(xo ) ≤ 0. In fact. ∀ x ∈ M. we use a maximum principle.68) Let w(x) = ψk (x) − uk (x).64). such that as − ≥ t ≤ 0. we minimize the functional Jk (u) in a class of functions H = {u ∈ C 1 (M )  A ≤ u ≤ ψk }. ∀ x ∈ M.63). Also by (5. ∀ x ∈ M. we have. Jk (u) ≥ 1  u2 d A + αk 2 M 1 ≥ ( u − C2 )2 − C3 .5. Hence g (0) ≤ 0. we will derive a contradiction. there exists xo ∈ M . for any positive function v(x) with its support in Bδ (xo ). such that A < uk (x) .2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 153 Illuminated by an idea of Ding and Liu [DL]. This implies that [− uk + αk − R(x)euk ]v(x)d A ≤ 0. there is a δ > 0. we can show that there is a subsequence of a minimizing sequence which converges weakly to a minimum uk of Jk (u) in the set H. Consequently. Then w(x) ≥ 0. To verify this. From (5. While on the other hand.65) Suppose in the contrary. since M is compact and R(x) is continuous.66) (5. such that R(x) ≤ C1 . 0].65) and a similar argument as in the previous section. i. ∀x ∈ Bδ (xo ). This is possible. there exists an > 0. Since uk (xo ) > A.e. (5. M (5. there exists a constant C1 . and xo is a minimum of w. for any u ∈ H. A < uk (x) < ψk (x).
  −Δw(xo) = −Δψk(xo) − (−Δuk(xo)) ≥ (R(xo)e^{ψk(xo)} − α̃k) − (R(xo)e^{uk(xo)} − αk) = αk − α̃k > 0,

where we have used (5.61), (5.68), and the fact that uk(xo) = ψk(xo). Again a contradiction. Therefore we must have

  uk(x) < ψk(x), ∀ x ∈ M.

Similarly, one can show that A < uk(x), ∀ x ∈ M. This verifies (5.65). Now, for any v ∈ C¹(M), uk + tv is in H for all sufficiently small |t|, and hence (d/dt) Jk(uk + tv)|_{t=0} = 0. A direct computation, as we did in the past, then leads to

  −Δuk + αk = R(x)e^{uk}, x ∈ M.

Step 2. The sequence {uk} obtained in the previous step is obviously uniformly bounded from below by the negative constant A. To show that it is also bounded from above, we first consider the region where R(x) is positively bounded away from 0. We will use a 'sup + inf' inequality of Brezis, Li, and Shafrir [BLS]:

Lemma 5.2.6 (Brezis, Li, and Shafrir) Assume that V is a Lipschitz function satisfying 0 < a ≤ V(x) ≤ b < ∞, and let K be a compact subset of a domain Ω in R². Then any solution u of the equation

  −Δu = V(x)e^u, x ∈ Ω,

satisfies

  sup_K u + inf_Ω u ≤ C(a, b, ‖∇V‖_{L∞}, K, Ω).

Let xo be a point on M at which R(xo) > 0. Choose ε > 0 so small that

  R(x) ≥ a > 0, ∀ x ∈ Ω ≡ B_{2ε}(xo),  (5.69)

for some constant a. Let vk be the solution of

  −Δvk − αk = 0, x ∈ Ω; vk(x) = 1, x ∈ ∂Ω.

Let wk = uk + vk. Then wk satisfies

  −Δwk = R(x)e^{−vk} e^{wk}, x ∈ Ω.
The sequence {vk} is uniformly bounded, since αk → αo; by the maximum principle, {vk} is bounded from above and from below in the small ball Ω. Locally, the metric is pointwise conformal to the Euclidean metric; hence one can apply Lemma 5.2.6 to {wk}. Since {uk}, and therefore {wk}, is bounded from below, we conclude that the sequence {wk} is uniformly bounded from above in the smaller ball B_ε(xo). Now the uniform bound for {uk} in the same region follows suit. Since xo is an arbitrary point where R is positive, we conclude that {uk} is uniformly bounded in the region where R(x) is positively bounded away from 0.

Step 3. We show that the sequence {uk} is bounded in H¹(M). Choose a small δ > 0 such that the set

  D = {x ∈ M | R(x) > δ}

is nonempty; it is open since R(x) is continuous. From the previous step, we know that {uk} is uniformly bounded in D.

Let K(x) be a smooth function such that

  K(x) < 0, for x ∈ D, and K(x) = 0, elsewhere.

For each αk, let vk be the unique solution of

  −Δvk + αk = K(x)e^{vk}, x ∈ M.

The sequence {vk} is uniformly bounded. Let wk = uk − vk. Then

  −Δwk = R(x)e^{uk} − K(x)e^{vk}, x ∈ M.  (5.70)

On one hand, multiplying both sides of (5.70) by e^{wk} and integrating on M, we obtain

  ∫_M |∇wk|² e^{wk} dA = ∫_M R(x)e^{uk + wk} dA − ∫_D K(x)e^{uk} dA.  (5.71)

On the other hand, since each uk is a minimizer of the functional Jk, the second derivative J''k(uk) is positively definite. It follows that, for any function φ ∈ H¹(M),

  ∫_M [ |∇φ|² − R(x)e^{uk} φ² ] dA ≥ 0.

Choosing φ = e^{wk/2}, we derive
  (1/4)∫_M |∇wk|² e^{wk} dA ≥ ∫_M R(x)e^{uk + wk} dA.  (5.72)

Combining (5.71) and (5.72), we arrive at

  −∫_D K(x)e^{uk} dA ≥ (3/4)∫_M |∇wk|² e^{wk} dA.  (5.73)

Taking into account that {uk} is bounded in the region D, where R(x) is positively bounded away from 0, the left hand side of (5.73) is bounded. Moreover, {wk} is bounded from below, due to the fact that {uk} is bounded from below and {vk} is bounded. Hence (5.73) implies that ∫_M |∇wk|² dA is bounded, and so is ∫_M |∇uk|² dA.

Consider ũk = uk − ūk, where ūk is the average of uk. Obviously, ∫_M ũk dA = 0; hence we can apply the Poincaré inequality to ũk and conclude that {ũk} is bounded in H¹(M). If ∫_M uk² dA were unbounded, then {ūk} would be unbounded, and |ūk| → ∞ for some subsequence. Since {uk} is bounded in D, we would then have

  ∫_D ũk² dA = ∫_D |uk − ūk|² dA → ∞,

a contradiction with the fact that {ũk} is bounded in H¹(M). Apparently,

  ∫_M uk² dA ≤ 2∫_M ũk² dA + 2ūk² |M|.

Consequently, {uk} must also be bounded in H¹(M). Therefore, there exists a subsequence of {uk} that converges weakly to a function uo in H¹(M), and uo is the desired weak solution of

  −Δuo + αo = R(x)e^{uo}.

This completes the proof of the Theorem.
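The integration by parts behind (5.71) — on a closed manifold, ∫(−Δw)e^w dA = ∫|∇w|²e^w dA, with no boundary term — can be verified in a short numerical experiment (ours, not from the text) on the circle; the test function is an arbitrary smooth periodic choice:

```python
import numpy as np

# Check: on a closed manifold (here the circle),
#   int (-w'') e^w dx  =  int (w')^2 e^w dx,
# the identity used to derive (5.71).
N = 4096
x = 2*np.pi*np.arange(N)/N
k = np.fft.fftfreq(N, d=1.0/N)
w = 0.7*np.sin(x) + 0.2*np.cos(3*x)          # arbitrary smooth periodic w
wh = np.fft.fft(w)
w1 = np.real(np.fft.ifft(1j*k*wh))           # w'
w2 = np.real(np.fft.ifft(-(k**2)*wh))        # w''

# for a smooth periodic integrand, the rectangle rule is spectrally accurate
lhs = 2*np.pi*np.mean(-w2*np.exp(w))
rhs = 2*np.pi*np.mean(w1**2*np.exp(w))
print(lhs, rhs)                              # the two sides agree
```

The boundary term of the integration by parts vanishes because the circle, like the compact surface M, is closed.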
6

Prescribing Scalar Curvature on S^n, for n ≥ 3

6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where R < 0
6.3.2 In the Region Where R is small
6.3.3 In the Region Where R > 0

6.1 Introduction

On the higher dimensional sphere S^n with n ≥ 3, Kazdan and Warner raised the following question: Which functions R(x) can be realized as the scalar curvature of some conformally related metric? It is equivalent to considering the existence of a solution of the following semilinear elliptic equation:

  −Δu + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}, u > 0, x ∈ S^n.  (6.1)

The equation is "critical" in the sense that a lack of compactness occurs. Besides the obvious necessary condition that R(x) be positive somewhere, there are well-known obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE]. The conditions are:

  ∫_{S^n} X(R) dVg = 0,  (6.2)
where dVg is the volume element of the conformal metric g and X is any conformal vector field associated with the standard metric g0. We call these Kazdan-Warner type conditions. These conditions give rise to many examples of R(x) for which equation (6.1) has no solution; in particular, a monotone rotationally symmetric function R admits no solution.

Then, among others, one problem of common concern was left open: Were those Kazdan-Warner type necessary conditions also sufficient? In the case where R is rotationally symmetric, the conditions become:

  R > 0 somewhere and R'(r) changes signs.  (6.3)

Then
(a) is (6.3) a sufficient condition? and
(b) if not, what are the necessary and sufficient conditions?
Kazdan listed these as open problems in his CBMS Lecture Notes [Ka]. For the prescribing Gaussian curvature equation on S², in the case where R is rotationally symmetric, Xu and Yang [XY] showed that if R is 'nondegenerate,' then (6.3) is a sufficient condition.

Recently, we answered question (a) negatively [CL3] [CL4]. We found some stronger obstructions: for a rotationally symmetric function R, a necessary condition to solve (6.1) is that

  R'(r) changes signs in the region where R is positive.  (6.4)

In other words, our results imply that, for a rotationally symmetric function R, problem (6.1) admits no solution if R is monotone in the region where it is positive, unless R is a constant.

Now, is (6.4) also a sufficient condition? In the last two decades, numerous studies were dedicated to these problems, and various sufficient conditions were found (please see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein). However, for equation (6.1) on higher dimensional spheres, a major difficulty is that multiple blowups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations. In dimensions higher than 3, the 'nondegeneracy' condition is no longer sufficient to guarantee the existence of a solution. This was illustrated by a counter example constructed by Bianchi in [Bi]: he found a positive rotationally symmetric function R on S⁴, which is nondegenerate and nonmonotone, and for which the problem admits no solution.

In this situation, a more proper condition is the so-called 'flatness condition.' Roughly speaking, it requires that, at every critical point of R, the derivatives of R up to order (n − 2) vanish, while some higher, but less than nth, order derivative is distinct from 0. The 'nondegeneracy' condition (for n = 3) is a special case of the 'flatness condition,' and the latter is still used widely today (see [CY3], [Li2], and [SZ]) as a standard assumption to guarantee the existence of a solution. Although people wonder whether the 'flatness condition' is necessary, the above mentioned counter example of Bianchi seems to suggest that it is somewhat sharp.
where S n  is the volume of S n . under the ‘ﬂatness condition. Here. We use the ‘center of mass’ to deﬁne neighborhoods of ‘critical points at inﬁnity. n+2 Let γn = n(n−2) and τ = n−2 .1 Let n ≥ 3.’ a general one was presented in [Li2] by Y.1. and it applies to functions R with changing signs. u ≥ 0}.1) to have a solution is that R (r) changes signs in the regions where R > 0.’ is (6. Outline of the proof of Theorem 6.1. take the limit.1 Introduction 159 Now.6. We blend in our new ideas with other ideas in [XY]. with a = 0 and n − 2 < α < n. Li. (6.4) is a necessary and suﬃcient condition for (6. To ﬁnd the solution of equation (6. Let Jp (u) := Rup+1 dV Sn and E(u) := Sn ( u2 + γn u2 )dV. Let R = R(r) be rotationally symmetric and satisfy the following ﬂatness condition near every positive critical point ro : R(r) = R(ro ) + ar − ro α + h(r − ro ). and [SZ]. Then let p→τ . we essentially answer the open question (b) posed by Kazdan. This is true in all dimensions n ≥ 3. (6.6).’ (6. [Li2]. Theorem 6. we answer the question aﬃrmatively. The theorem is proved by a variational approach.5) where h (s) = o(sα−1 ). . Thus. We ﬁrst ﬁnd a positive solution up of the 4 subcritical equation − u + γn u = R(r)up .’ obtain some quite sharp and clean estimates in those neighborhoods. we construct a maxmini variational scheme. to better illustrate the idea. we only list a typical and easytoverify one.6) for each p < τ and close to τ . and construct a maxmini variational scheme at subcritical levels and then approach the critical level. We seek critical points of Jp (u) under the constraint H = {u ∈ H 1 (S n )  E(u) = γn S n . We prove that.1) to have a solution. a natural question is: Under the ‘ﬂatness condition.4) a suﬃcient condition? In this section. Then a necessary and suﬃcient condition for equation (6.1. in the statement of the following theorem. There are many versions of ‘ﬂatness conditions. [Ba].
we established apriori bounds for the solutions in the following order.8) (6. we show that the sequence can only blow up at one . take the limit. such that Jp (up ) = cp . We prove that there is an open set G1 ⊂ H (independent of p). we assume that R has at least two positive local maxima. In the following.4). there is a critical point up of Jp (·). Then similar to the approach in [C]. by (6. {up } can only blow up at most ﬁnitely many points. (6. then by condition (6. 2 (6. on each of the poles. (i) In the region where R < 0: This is done by the ‘Kelvin Transform’ and a maximum principle. we have some kinds of ‘mountain pass’ associated to each local maximum of R. such that on the boundary ∂G1 of G1 . for n ≥ 3 If R has only one positive local maximum. Roughly speaking. (iii) In the region where R is positively bounded away from 0: First due to the energy bound. Let ψ1 and ψ2 be two functions deﬁned by (6. Using a Pohozaev type identity. (6. while there is a function ψ1 ∈ G1 . and let Γ be the family of all such paths. the solutions we seek are not necessarily symmetric.6). Let γ be a path in H connecting ψ1 and ψ2 . such that δ Jp (ψ1 ) > R(r1 )S n  − .7) Here δ > 0 is independent of p. Let r1 be a positive local maximum of R. Jp (up ) ≤ R(ri )S n  − δ. we seek a solution by minimizing the functional in a family of rotationally symmetric functions in H. R is either nonpositive or has a local minimum. we have Jp (u) ≤ R(r1 )S n  − δ. Deﬁne cp = sup min Jp (u). To ﬁnd a solution of (6.10) for any positive local maximum ri of R. Obviously. a constant multiple of up is a solution of (6.1).8) associated to r1 and r2 . To show the convergence of a subsequence of {up }.7).9) γ∈Γ γ For each p < τ . we let p→τ . by compactness. The set G1 is deﬁned by using the ‘center of mass’. Let r1 and r2 be two smallest positive local maxima of R. (ii) In the region where R is small: This is mainly due to the boundedness of the energy E(up ). 
Moreover.160 6 Prescribing Scalar Curvature on S n . In this case. Our scheme is based on the following key estimates on the values of the functional Jp (u) in a neighborhood of the ‘critical points at inﬁnity’ associated with each local maximum of R.
8).6) for each p. u ≥ 0}. We seek critical points of Jp (u) under the constraint H := {u ∈ H 1 (S n )  E(u) = E(1) = γn S n .10) to argue that even one point blow up is impossible and thus establish an apriori bound for a subsequence of {up }. In subsection 3. . v) = ( S u v + γn uv)dV be the inner product in the Hilbert space H 1 (S n ). We divide the rest of the section into two parts.7) and (6. we establish a priori estimates on the solution sequence {up }. First. Jp (u) := Sn Rup+1 dV and E(u) := Sn ( u2 + γn u2 )dV. Let (u. Then. In subsection 2.6. 6.2 The Variational Approach In this section. Finally. we use (6.11) for each p < τ := Let n+2 n−2 . we construct a maxmini variational scheme to ﬁnd a solution of − u + γn u = R(r)up (6. and u := E(u) be the corresponding norm. we carry on the maxmini variational scheme.11).2 The Variational Approach 161 point and the point must be a local maximum of R. One can easily see that a critical point of Jp in S multiplied by a constant is a solution of (6. where S n  is the volume of S n . we carry on the maxmini variational scheme and obtain a solution up of (6. we establish the key estimates (6.
2 and 6. for n ≥ 3 6. We recall some wellknown facts in conformal geometry.13). As usual.13) We may also regard φq as depend on two parameters λ and q .1 Estimate the Values of the Functional To construct a maxmini variational scheme. θ ∈ S n−1 ) centered at the south pole where r = 0. θ) of S n (0 ≤ r ≤ π. the energy E(·).15) r hλ (r.2. we have . in the spherical polar ˜ coordinates x = (r. More generally and more precisely. we ﬁrst show that there is some kinds of ‘mountain pass’ associated to each positive local maximum of R. so that the south pole of S n is at the origin O and the center of the ball B n+1 is at (0. 2 Here det(dhλ ) is the determinant of dhλ . these ‘mountain passes’ are in some neighborhoods of the ‘critical points at inﬁnity’ (See Proposition 6.162 6 Prescribing Scalar Curvature on S n . Let φq be the standard solution with its ‘center of mass’ at q ∈ B n+1 . φq ∈ H and satisﬁes − u + γn u = γn uτ . (6.2. Choose a coordinate system in Rn+1 . When q = O (the south pole of S n ). θ). Unlike the classical ones. that is. As λ→0. and can be expressed explicitly as det(dhλ ) = λ + λ2 sin2 n r 2 cos2 r 2 . Correspondingly. there is a family of conformal transforms Tq : H −→ H. (6.2. It is wellknown that this family of conformal transforms leave the equation (6. 1). we use  ·  to denote the distance in Rn+1 . with Tq u := u(hλ (x))[det(dhλ )] n−2 2n (6. where q ˜ ˜ is the intersection of S n with the ray passing the center and the point q.˜ = ( 2 ) 2 . θ) = (2 tan−1 (λ tan ).1 below). this family of functions blows up at south pole. while tends to zero elsewhere. n−2 λ φq = φλ. 0. Deﬁne the center of mass of u as q(u) =: Sn xuτ +1 (x)dV uτ +1 (x)dV Sn (6.12) It is actually the center of mass of the sphere S n with density uτ +1 (x) at the point x. and the integral S n uτ +1 dV invariant. · · · . we can express.14) q 2 r + sin2 r λ cos 2 2 with 0 < λ ≤ 1.
(i) We will use a property of the conformal Laplacian to prove the invariance of the energy E(Tq u) = E(u) . E(Tq u) = E(u) and Sn (Tq u)τ +1 dV = Sn uτ +1 dV. Proof. where c(n) = 4(n−1) .16) uk v τ +1−k dV.2. v ∈ H 1 (S n ). On a ndimensional Riemannian manifold (M. we have ˆ Lg u = w− n−2 Lg (uw) .18) by u and integrating by parts. Then it is wellknown that ¯ ˆ go = w n−2 (x)¯ g with w(x) = satisfying the equation We also have 4 4 + x2 n+2 n−2 2 4 − ¯ w = γn w n−2 . Tq v) = (u. Sn (Tq u)k (Tq v)τ +1−k dV = (6. If M is a compact manifold without boundary. The conformal Laplacian has the following wellknown invariance property under the conformal change of metrics. .19) In our situation. The proof for (i) is similar. then multiplying both sides of (6. (6. v) (ii) Sn (6. w > 0.17) In particular.18) Interested readers may ﬁnd its proof in [Bes]. For g = w4/(n−2) g. Lg := g − c(n)Rg n−2 is call a conformal Laplacian. one would arrive at { M 2 g u ˆ + c(n)Rg u2 }dVg = ˆ ˆ M { 2 g (uw) + c(n)Rg uw2 }dVg . g and Rg are the LaplaceBeltrami operator and the scalar curvature associated with the metric g respectively. and for any nonnegative integer k. ˆ n+2 for all u ∈ C ∞ (M ). (6.1 For any u.2 The Variational Approach 163 Lemma 6. it holds the following (i) (Tq u. g). we choose g to be the Euclidean metric g in Rn with ¯ gij = δij and g = go .6. the standard metric on S n .
for n ≥ 3,

c(n) R_{g_o} = γ_n := n(n−2)/4 and c(n) R_ḡ = c(n) · 0 = 0.

Now formula (6.19) becomes

∫_{S^n} { |∇_o u|² + γ_n u² } dV_o = ∫_{R^n} |∇̄(uw)|² dx,   (6.20)

where ∇_o and ∇̄ are the gradient operators associated with the metrics g_o and ḡ, respectively.

Make a stereographic projection from S^n to R^n. Again, let q̃ be the intersection of S^n with the ray passing through the center of the sphere and the point q. Under this projection, the point (r, θ) ∈ S^n becomes x = (x_1, ···, x_n) ∈ R^n, where q̃ is placed at the origin of R^n, with

|x| = 2 tan(r/2),

and

(T_q u)(x) = u(λx) ( λ(4 + |x|²) / (4 + λ²|x|²) )^{(n−2)/2}.

It follows from this and (6.20) that

E(T_q u) = ∫_{R^n} { |∇_o [ u(λx) ( λ(4+|x|²)/(4+λ²|x|²) )^{(n−2)/2} ]|² + γ_n [ u(λx) ( λ(4+|x|²)/(4+λ²|x|²) )^{(n−2)/2} ]² } dV_{g_o}

= ∫_{R^n} |∇̄( u(λx) ( λ(4+|x|²)/(4+λ²|x|²) )^{(n−2)/2} w(x) )|² dx

= ∫_{R^n} |∇̄( u(λx) ( 4λ/(4+λ²|x|²) )^{(n−2)/2} )|² dx

= ∫_{R^n} |∇̄( u(y) ( 4/(4+|y|²) )^{(n−2)/2} )|² dy   (y = λx)

= ∫_{R^n} |∇̄( u(y) w(y) )|² dy = E(u).

(ii) This can be derived immediately by making the change of variable y = h_λ(x). This completes the proof of the lemma.

One can also verify that T_q φ_q = 1.
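The third equality in the chain above rests on a pointwise identity between the conformal factors, which can be checked numerically; a sketch of our own (sample n, λ arbitrary):

```python
import numpy as np

# Key identity behind E(T_q u) = E(u): with w(s) = (4/(4+s^2))^((n-2)/2),
#   (lam*(4+|x|^2)/(4+lam^2*|x|^2))^((n-2)/2) * w(|x|)
#       = lam^((n-2)/2) * w(lam*|x|),
# so after the substitution y = lam*x the integrand becomes |grad(u w)|^2(y).
n, lam = 4, 0.25
s = np.linspace(0.0, 10.0, 101)          # radial variable |x|

def w(t):                                 # conformal factor of g_o over R^n
    return (4.0 / (4.0 + t ** 2)) ** ((n - 2) / 2)

lhs = (lam * (4 + s ** 2) / (4 + lam ** 2 * s ** 2)) ** ((n - 2) / 2) * w(s)
rhs = lam ** ((n - 2) / 2) * w(lam * s)
assert np.allclose(lhs, rhs)
```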
The relations between q and (λ, q̃), and the transforms T_q for q ≠ O, can be expressed in a similar way.

We now carry on the estimates near the south pole O, which we assume to be a positive local maximum of R; the estimates near the other positive local maxima are similar. Our conditions on R imply that, in a small neighborhood of O,

R(r) = R(0) − a r^α, for some a > 0, n − 2 < α < n.   (6.21)

Define

Σ = { u ∈ H : |q(u)| ≤ ρ_o, ‖v‖ := min_{t, q} ‖u − tφ_q‖ ≤ ρ_o }.   (6.22)

Notice that the 'centers of mass' of functions in Σ are near the south pole O; this is a neighborhood of the 'critical points at infinity' corresponding to O. We will estimate the functional J_p in this neighborhood.

We first notice that the supremum of J_p in Σ approaches R(0)|S^n| as p→τ.

Proposition 6.2.1 For any δ_1 > 0, there exists a p_1 ≤ τ, such that for all τ ≥ p ≥ p_1,

sup_Σ J_p(u) > R(0)|S^n| − δ_1.   (6.23)

Then we show that, on the boundary of Σ, J_p is strictly less than and bounded away from R(0)|S^n|.

Proposition 6.2.2 There exist positive constants ρ_o, p_o, and δ_o, such that for all p ≥ p_o and for all u ∈ ∂Σ,

J_p(u) ≤ R(0)|S^n| − δ_o.   (6.24)

Proof of Proposition 6.2.1. Through a straightforward calculation, one can show that, as λ→0,

J_τ(φ_{λ,O}) → R(0)|S^n|.   (6.25)

Hence, for a given δ_1 > 0, one can choose λ_o such that φ_{λ_o,O} ∈ Σ and, by (6.25),

J_τ(φ_{λ_o,O}) > R(0)|S^n| − δ_1.   (6.26)

It is easy to see that, for the fixed function φ_{λ_o,O}, J_p is continuous with respect to p.
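Behind (6.25) lies the fact that ∫_{S^n} φ_λ^{τ+1} dV is independent of λ (by the invariance in Lemma 6.2.1 applied to u ≡ 1), while the mass of φ_λ^{τ+1} concentrates at a single pole as λ→0. The λ-independence can be checked numerically in radial form, where the angular factor |S^{n−1}| cancels; a sketch of our own (sample n and grid size are arbitrary):

```python
import numpy as np

# Using (6.14), the integral of phi_lam^(tau+1) over S^n reduces to
#   I(lam) = integral_0^pi (lam/(lam^2 sin^2(r/2)+cos^2(r/2)))^n sin^(n-1)(r) dr,
# and I(lam) is the same for every 0 < lam <= 1 (it equals I(1)).
def radial_integral(lam, n, m=200000):
    r = np.linspace(1e-8, np.pi - 1e-8, m)
    f = (lam / (lam**2 * np.sin(r/2)**2 + np.cos(r/2)**2))**n * np.sin(r)**(n-1)
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(r))   # trapezoid rule

n = 3
ref = radial_integral(1.0, n)          # lam = 1: integral of sin^2 r = pi/2
assert abs(ref - np.pi / 2) < 1e-4
for lam in (0.5, 0.2, 0.1):
    assert abs(radial_integral(lam, n) - ref) < 1e-3
```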
Hence, by (6.26), there is a p_1 ≤ τ, such that for all p ≥ p_1,

J_p(φ_{λ_o,O}) > R(0)|S^n| − δ_1.

This completes the proof of Proposition 6.2.1.

The proof of Proposition 6.2.2 is rather complex. We first estimate J_p for a family of standard functions in Σ.

Lemma 6.2.2 For p sufficiently close to τ, and for λ and |q̃| sufficiently small, we have

J_p(φ_{λ,q̃}) ≤ ( R(0) − C_1 |q̃|^α ) |S^n| (1 + o_p(1)) − C_1 λ^{α + δ_p (n−2)/2},   (6.27)

where δ_p := τ − p and o_p(1)→0 as p→τ.

Proof of Lemma 6.2.2. Let B_ε(O) ⊂ S^n, with ε > 0 small such that (6.21) holds in B_ε(O). We express

J_p(φ_{λ,q̃}) = ∫_{S^n} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV + ∫_{S^n \ B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV.   (6.28)

From the definition (6.14) of φ_{λ,q̃} and the boundedness of R, we obtain the smallness of the second integral:

∫_{S^n \ B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV ≤ C_2 λ^{n − δ_p (n−2)/2}.   (6.29)

To estimate the first integral, we work in a local coordinate centered at O. Then, for ε, |q̃|, and λ sufficiently small,

∫_{B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} [ R(0) − a|x + q̃|^α ] φ_{λ,O}^{p+1} dV

≤ ∫_{B_ε(O)} [ R(0) − C_3 |x|^α − C_3 |q̃|^α ] φ_{λ,O}^{p+1} dV

≤ [ R(0) − C_3 |q̃|^α ] |S^n| [1 + o_p(1)] − C_4 λ^{α + δ_p (n−2)/2}.   (6.30)

Here we have used the fact that |x + q̃| ≥ c(|x| + |q̃|) for some c > 0 in one half of the ball B_ε(O), and the symmetry of φ_{λ,O}. Noticing that α < n and δ_p→0, (6.29) and (6.30) imply (6.27). This completes the proof of the Lemma.

To estimate J_p for all u ∈ ∂Σ, we also need the following two lemmas that describe some useful properties of the set Σ.

Lemma 6.2.3 (On the 'center of mass')
(i) Let q, q̃, and λ be related as in (6.14). Then, for sufficiently small |q|,

|q|² ≤ C( |q̃|² + λ⁴ ).   (6.31)

(ii) Let ρ_o, v, and q be defined by (6.22). Then, for ρ_o sufficiently small,

ρ_o ≤ |q̃| + C ‖v‖.   (6.32)
Lemma 6.2.4 (Orthogonality) Let u ∈ Σ and let v = u − t_o φ_{q_o} be defined by (6.22). Let T_{q_o} be the conformal transform such that T_{q_o} φ_{q_o} = 1. Then

∫_{S^n} T_{q_o} v dV = 0   (6.34)

and

∫_{S^n} T_{q_o} v · ψ_i(x) dV = 0, i = 1, 2, ···, n,   (6.35)

where ψ_i are the first spherical harmonic functions on S^n:

−Δψ_i = n ψ_i(x), with ψ_i(x) = x_i, i = 1, 2, ···, n,   (6.36)

if we place the center of the sphere S^n at the origin of R^{n+1} (not as in our case).

Proof of Lemma 6.2.3. (i) From the definition (6.14) of φ_q = φ_{λ,q̃}, and by an elementary calculation, one arrives at

|q − q̃| ∼ λ², for λ sufficiently small.   (6.33)

Draw a perpendicular line segment from O to the segment joining q and q̃; then one can see that (6.31) is a direct consequence of the triangle inequality and (6.33).

(ii) For any u = tφ_q + v ∈ ∂Σ, the boundary of Σ, we have, by the definition, either ‖v‖ = ρ_o or |q(u)| = ρ_o. If ‖v‖ = ρ_o, then we are done. Hence we assume that |q(u)| = ρ_o. It follows from the definition of the 'center of mass' that

ρ_o = | ∫_{S^n} x (tφ_q + v)^{τ+1} dV / ∫_{S^n} (tφ_q + v)^{τ+1} dV |.   (6.37)

Noticing that ‖v‖ ≤ ρ_o is very small, we can expand the integrals in (6.37) in terms of v:

ρ_o = | ( t^{τ+1} ∫ x φ_q^{τ+1} + (τ+1) t^τ ∫ x φ_q^τ v + o(‖v‖) ) / ( t^{τ+1} ∫ φ_q^{τ+1} + (τ+1) t^τ ∫ φ_q^τ v + o(‖v‖) ) | ≤ |q| + C ‖v‖.

Here we have used the fact that ‖v‖ is small and that t is close to 1. Together with (i), this yields (6.32). This completes the proof of Lemma 6.2.3.
Proof of Lemma 6.2.4. We use the fact that v = u − t_o φ_{q_o} is the minimum of E(u − tφ_q) among all possible values of t and q. Write L = −Δ + γ_n.

(i) By a variation with respect to t, we have

0 = (v, φ_{q_o}) = ∫_{S^n} v Lφ_{q_o} dV.   (6.38)

It follows from (6.13) that

∫_{S^n} v φ_{q_o}^τ dV = 0.

Now apply the conformal transform T_{q_o} to both v and φ_{q_o} in the above integral. By the invariance property of the transform and T_{q_o} φ_{q_o} = 1, we arrive at (6.34).

(ii) To prove (6.35), we make a variation with respect to q:

0 = ∇_q (v, φ_q) |_{q=q_o} = γ_n ∇_q ∫_{S^n} v φ_q^τ dV |_{q=q_o}
= τ γ_n ∫_{S^n} T_{q_o} v (T_{q_o} φ_q)^{τ−1} ∇_q (T_{q_o} φ_q) dV |_{q=q_o}
= τ γ_n ∫_{S^n} T_{q_o} v ( ψ_1, ···, ψ_n ) dV,

which yields (6.35). This completes the proof of the Lemma.

Proof of Proposition 6.2.2. The estimate is divided into two steps. In Step 1, we show that the difference between J_p(u) and J̄_τ(u) is very small; in Step 2, we estimate J̄_τ(u).

Step 1. Make a perturbation of R(x):

R̄(x) = R(x), x ∈ B_{2ρ_o}(O); R̄(x) = m, x ∈ S^n \ B_{2ρ_o}(O),

where m = R|_{∂B_{2ρ_o}(O)}. Define

J̄_p(u) = ∫_{S^n} R̄(x) u^{p+1} dV.   (6.39)

First we show that

J̄_p(u) ≤ J̄_τ(u) (1 + o_p(1)),   (6.40)

where o_p(1)→0 as p→τ.
In fact, by the Hölder inequality,

∫_{S^n} R̄ u^{p+1} dV ≤ ( ∫_{S^n} R̄ u^{τ+1} dV )^{(p+1)/(τ+1)} ( ∫_{S^n} R̄ dV )^{(τ−p)/(τ+1)}

≤ ( ∫_{S^n} R̄ u^{τ+1} dV )^{(p+1)/(τ+1)} ( R(0)|S^n| )^{(τ−p)/(τ+1)}.

This implies (6.40).

Now estimate the difference between J_p(u) and J̄_p(u):

| J_p(u) − J̄_p(u) | ≤ C_1 ∫_{S^n \ B_{2ρ_o}(O)} u^{p+1} ≤ C_2 ∫_{S^n \ B_{2ρ_o}(O)} (tφ_q)^{p+1} + C_2 ∫_{S^n \ B_{2ρ_o}(O)} |v|^{p+1}

≤ C_3 λ^{n − δ_p (n−2)/2} + C_3 ‖v‖^{p+1}.   (6.41)

Step 2. By virtue of (6.40) and (6.41), we now only need to estimate J̄_τ(u). For any u ∈ ∂Σ, write u = v + tφ_q. Notice that ‖u‖² := E(u). (6.38) implies that v and φ_q are orthogonal with respect to the inner product associated to E(·), that is,

∫_{S^n} ( ∇v · ∇φ_q + γ_n v φ_q ) dV = 0.

Hence ‖u‖² = ‖v‖² + t² ‖φ_q‖². Using the fact that both u and φ_q belong to H with ‖u‖² = γ_n |S^n| = ‖φ_q‖², we derive

t² = 1 − ‖v‖² / ( γ_n |S^n| ).   (6.42)

It follows that

∫_{S^n} R̄(x) u^{τ+1} ≤ t^{τ+1} ∫_{S^n} R̄(x) φ_q^{τ+1} + (τ+1) t^τ ∫_{S^n} R̄(x) φ_q^τ v + ( τ(τ+1)/2 ) ∫_{S^n} φ_q^{τ−1} v² + o(‖v‖²)

= I_1 + (τ+1) I_2 + ( τ(τ+1)/2 ) I_3 + o(‖v‖²).   (6.43)
To estimate I_1, we use (6.42) and Lemma 6.2.2:

I_1 ≤ ( 1 − (τ+1)‖v‖² / (2γ_n|S^n|) ) R(0)|S^n| ( 1 − c_1|q̃|^α − c_1λ^α ) + O(λ^n) + o(‖v‖²),   (6.44)

for some positive constant c_1.

To estimate I_2, we use (6.34) and (6.35):

I_2 = ∫_{S^n} ( R̄(x) − m ) φ_q^τ v dV = ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) φ_q^τ v dV = (1/γ_n) ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) Lφ_q · v dV

≤ C ρ_o^α | ∫_{B_{2ρ_o}(O)} Lφ_q · v dV | ≤ C ρ_o^α ‖φ_q‖ ‖v‖ ≤ C_4 ρ_o^α ‖v‖.   (6.45)

To estimate I_3, we use the H¹ orthogonality between v and φ_q (see (6.38)), together with (6.34) and (6.35). It is well-known that the first and second eigenspaces of −Δ on S^n are the constants and the ψ_i, corresponding to the eigenvalues 0 and n respectively. Now (6.34) and (6.35) imply that T_q v is orthogonal to these two eigenspaces, and hence

∫_{S^n} |∇(T_q v)|² dV ≥ λ_2 ∫_{S^n} |T_q v|² dV,   (6.46)

where λ_2 = n + c_2, for some positive number c_2, is the second nonzero eigenvalue of −Δ. Consequently,

‖v‖² = ‖T_q v‖² ≥ ( γ_n + n + c_2 ) ∫_{S^n} |T_q v|².

It follows that

I_3 ≤ R(0) ∫_{S^n} φ_q^{τ−1} v² dV = R(0) ∫_{S^n} (T_q φ_q)^{τ−1} (T_q v)² dV = R(0) ∫_{S^n} (T_q v)² ≤ ( R(0) / (γ_n + n + c_2) ) ‖v‖².   (6.47)

Notice that the positive constant c_2 in (6.46) has played a key role; without it, the net coefficient of ‖v‖² coming from (6.44) and (6.47) would be 0, due to the fact that γ_n = n(n−2)/4.

Now (6.44), (6.45), and (6.47) imply that there is a β > 0 such that

J̄_τ(u) ≤ R(0)|S^n| [ 1 − β( |q̃|^α + λ^α + ‖v‖² ) ].

Here we have used the fact that ‖v‖ is very small and that ρ_o^α ‖v‖ can be controlled by |q̃|^α + λ^α (see Lemma 6.2.3). For any u ∈ ∂Σ, we have either ‖v‖ = ρ_o or |q(u)| = ρ_o.
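The cancellation behind the remark on c_2 is an exact algebraic identity, which can be verified symbolically:

```python
from sympy import symbols, simplify

# Why c_2 is essential: with tau = (n+2)/(n-2) and gamma_n = n(n-2)/4 one has
# tau*gamma_n = gamma_n + n exactly, so without the spectral gap c_2 the
# ||v||^2 contributions of (6.44) and (6.47) would cancel to zero.
n = symbols('n', positive=True)
tau = (n + 2) / (n - 2)
gamma_n = n * (n - 2) / 4

assert simplify(tau * gamma_n - (gamma_n + n)) == 0
```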
In either case, there exists a δ_o > 0, such that

J̄_τ(u) ≤ R(0)|S^n| − 2δ_o.   (6.48)

Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48). This completes the proof of the Proposition.

6.2.2 The Variational Scheme

In this part, we construct variational schemes to show the existence of a solution to equation (6.11) for each p < τ. Similar to the ideas in [C], we distinguish two cases.

Case (i): R has only one positive local maximum. In this case, we seek a solution by maximizing the functional J_p in a class of rotationally symmetric functions:

H_r = { u ∈ H : u = u(r) }.   (6.49)

J_p is bounded from above in H_r, and it is well-known that the variational scheme is compact for each p < τ (see the argument in Chapter 2). Therefore any maximizing sequence possesses a converging subsequence in H_r, and the limiting function multiplied by a suitable constant is a solution of (6.11).

Case (ii): R has at least two positive local maxima. Let r_1 and r_2 be the two smallest positive local maxima of R. By Propositions 6.2.1 and 6.2.2, there exist two disjoint open sets Σ_1, Σ_2 ⊂ H, ψ_i ∈ Σ_i, i = 1, 2, p_o < τ, and δ > 0, such that for all p ≥ p_o,

J_p(ψ_i) > R(r_i)|S^n| − δ/2, i = 1, 2,   (6.50)

and

J_p(u) ≤ R(r_i)|S^n| − δ, ∀u ∈ ∂Σ_i, i = 1, 2.   (6.51)

Let γ be a path in H joining ψ_1 and ψ_2, and let Γ be the family of all such paths. Define

c_p = sup_{γ∈Γ} min_{u∈γ} J_p(u).   (6.52)

For each fixed p < τ, by the well-known compactness, there exists a critical (saddle) point u_p of J_p with

J_p(u_p) = c_p.   (6.53)

One can easily see that a critical point of J_p in H multiplied by a constant is a solution of (6.11). Obviously, due to (6.51) and the definition of c_p, we have, for all p close to τ,

J_p(u_p) ≤ min_{i=1,2} R(r_i)|S^n| − δ.
6.3 The A Priori Estimates

In the previous section, we showed the existence of a positive solution u_p to the subcritical equation (6.11) for each p < τ. In this section, we prove that, as p→τ, the solutions of (6.11) obtained by the variational scheme are uniformly bounded.

Theorem 6.3.1 Assume that R satisfies condition (6.5) in Theorem 6.1. Then there is a subsequence of {u_p} which converges to a solution u_o of (6.1).

The convergence is based on the following a priori estimates. To prove the Theorem, we estimate the solutions in three regions, where R < 0, R close to 0, and R > 0, respectively.

6.3.1 In the Region Where R < 0

The a priori bound of the solutions is stated in the following proposition, which is a direct consequence of Proposition 6.2 in our paper [CL6].

Proposition 6.3.1 For all 1 < p ≤ τ, the solutions of (6.11) are uniformly bounded in the regions where R ≤ −δ, for every δ > 0. The bound depends only on δ, dist({x : R(x) ≤ −δ}, {x : R(x) = 0}), and the lower bound of R.

6.3.2 In the Region Where R is Small

In [CL6], for rotationally symmetric functions R on S³, we also obtained estimates in this region by the method of moving planes. However, in that paper, we assumed that R be bounded away from zero. In [CL7], we removed this condition by using a blowing up analysis near R(x) = 0. That method may be applied to higher dimensions with a few modifications. Now, within our variational frame in this paper, the a priori estimate is simpler. It is mainly due to the energy bound: it is easy to see that the energies E(u_p) are bounded for all p.

Proposition 6.3.2 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 6.2. Then there exist a p_o < τ and a δ > 0, such that for all p_o < p < τ, {u_p} are uniformly bounded in the regions where |R(x)| ≤ δ.

Proof. We will use a rescaling argument to derive a contradiction. Suppose that the conclusion of the Proposition is violated. Then there exist a subsequence {u_i}, with u_i = u_{p_i}, p_i→τ, and a sequence of points {x_i}, with x_i→x_o and R(x_o) = 0, such that u_i(x_i)→∞. Since x_i may not be a local maximum of u_i, we need to choose a point near x_i which is almost a local maximum. To this end, let K be any large number and let
r_i = 2K [u_i(x_i)]^{−(p_i−1)/2}.

In a small neighborhood of x_o, choose a local coordinate and let

s_i(x) = u_i(x) ( r_i − |x − x_i| )^{2/(p_i−1)}.

Let a_i be the maximum point of s_i(x) in B_{r_i}(x_i), and let λ_i = [u_i(a_i)]^{−(p_i−1)/2}. Then, from the definition of a_i, one can verify that the ball B_{λ_i K}(a_i) is in the interior of B_{r_i}(x_i), and that the values of u_i(x) in B_{λ_i K}(a_i) are bounded above by a constant multiple of u_i(a_i).

To see the first part, we use the fact s_i(a_i) ≥ s_i(x_i) and derive

u_i(a_i) ( r_i − |a_i − x_i| )^{2/(p_i−1)} ≥ u_i(x_i) r_i^{2/(p_i−1)} = (2K)^{2/(p_i−1)}.

It follows that

r_i − |a_i − x_i| ≥ 2λ_i K,   (6.54)

hence B_{λ_i K}(a_i) ⊂ B_{r_i}(x_i).

To see the second part, we use s_i(a_i) ≥ s_i(x) and derive from (6.54) that

u_i(x) ≤ u_i(a_i) ( ( r_i − |a_i − x_i| ) / ( r_i − |x − x_i| ) )^{2/(p_i−1)} ≤ 2^{2/(p_i−1)} u_i(a_i), ∀x ∈ B_{λ_i K}(a_i).

Now we can make a rescaling. Let

v_i(x) = (1/u_i(a_i)) u_i( λ_i x + a_i ).

Obviously, v_i is bounded in B_K(0), and it follows by a standard argument that {v_i} converges to a harmonic function v_o in B_K(0) ⊂ R^n, with v_o(0) = 1. By a straightforward calculation, one can verify that, for any K > 0,

∫_{B_K(0)} v_i(x)^{τ+1} dx ≥ c K^n,   (6.55)

for some c > 0, and

∫_{S^n} u_i^{τ+1} dV ≥ ∫_{B_K(0)} v_i^{τ+1} dx.   (6.56)

On the other hand, the boundedness of the energies E(u_i) implies that

∫_{S^n} u_i^{τ+1} dV ≤ C,   (6.57)

for some constant C. Choosing K sufficiently large, (6.55) and (6.56) contradict (6.57). This completes the proof of the Proposition.
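The rescaling above is tailored to the critical exponent: at p = τ, the factor u_i(a_i)^{τ+1} λ_i^n multiplying ∫ v_i^{τ+1} reduces to u_i(a_i)^{(τ+1) − n(τ−1)/2}, and the exponent vanishes identically. A symbolic check:

```python
from sympy import symbols, simplify

# With lam_i = u_i(a_i)^(-(p-1)/2), the change of variables gives the prefactor
# u_i(a_i)^((tau+1) - n*(tau-1)/2) in front of the integral of v_i^(tau+1);
# at the critical exponent tau = (n+2)/(n-2) this exponent is exactly zero.
n = symbols('n', positive=True)
tau = (n + 2) / (n - 2)

assert simplify((tau + 1) - n * (tau - 1) / 2) == 0
```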
6.3.3 In the Regions Where R > 0

Proposition 6.3.3 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 6.2. Then there exists a p_o < τ, such that for all p_o < p ≤ τ and for any ε > 0, {u_p} are uniformly bounded in the regions where R(x) ≥ ε.

Proof. The proof is divided into 3 steps. In Step 1, we argue that the solutions can only blow up at finitely many points, because of the energy bound. In Step 2, we show that there is no more than one point blow up, and the point must be a local maximum of R; this is done mainly by using a Pohozaev type identity. In Step 3, we use (6.53) to conclude that even one point blow up is impossible.

Step 1. Let {x_i} be a sequence of points such that u_i(x_i)→∞ and x_i→x_o with R(x_o) > 0. Let r_i, s_i(x), and v_i(x) be defined as in the proof of Proposition 6.3.2. Then, similar to the argument there, we see that {v_i(x)} converges to a standard function v_o(x) in R^n with

−Δv_o = R(x_o) v_o^τ.

It follows that

∫_{B_{r_i}(x_o)} u_i^{τ+1} dV ≥ c_o > 0.

Because the total energy of u_i is bounded, {u_i} can only blow up at finitely many points; that is, we can have only finitely many such x_o.

Step 2. As a consequence of a result in [Li2] (also see [CL6] or [SZ]), we have

Lemma 6.3.1 Let R satisfy the 'flatness condition' in Theorem 1. Then {u_i} can have at most one simple blow up point, and this point must be a local maximum of R.

Step 3. For convenience, let {u_i} be the sequence of critical points of J_{p_i} we obtained in Section 6.2. Because each u_i is the minimum of J_{p_i} on a path, from the proof of Proposition 6.2.2 one can obtain

J_τ(u_i) ≤ min_k R(r_k)|S^n| − δ   (6.58)

for all positive local maxima r_k of R. Now, if {u_i} blow up at x_o, then by Lemma 6.3.1, x_o is the only (simple) blow up point and it must be a local maximum of R; moreover, {u_i} behaves almost like a family of the standard functions φ_{λ_i,q̃}, with λ_i = (max u_i)^{−2/(n−2)} and with q̃ at the maximum point of R. Therefore,

J_τ(u_i) → R(x_o)|S^n|.

This is a contradiction with (6.58). Hence even one point blow up is impossible.

We have shown that {u_i} has only isolated blow up points, and that in fact no blow up occurs. Therefore the sequence {u_i} is uniformly bounded and possesses a subsequence converging to a solution of (6.1); the argument is standard. This completes the proof of the Proposition, and Theorem 6.3.1 follows.
7

Maximum Principles

7.1 Introduction
7.2 Weak Maximum Principle
7.3 The Hopf Lemma and Strong Maximum Principle
7.4 Maximum Principle Based on Comparison
7.5 A Maximum Principle for Integral Equations

7.1 Introduction

As a simple example of a maximum principle, let's consider a C² function u(x) of one independent variable x. It is well-known in calculus that at a local maximum point x_o of u, we must have

u″(x_o) ≤ 0.

Based on this observation, we have the following simplest version of maximum principle:

Assume that u″(x) > 0 in an open interval (a, b); then u can not have any interior maximum in the interval.

One can also see this geometrically: since under the condition u″(x) > 0 the graph is concave up, it can not have any local maximum.

More generally, for a C² function u(x) of n independent variables x = (x_1, ···, x_n), at a local maximum x_o we have

D²u(x_o) := ( u_{x_i x_j}(x_o) ) ≤ 0,

that is, the symmetric matrix is non-positive definite at the point x_o.
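The one-dimensional statement can be checked on a grid; a small numerical sketch of our own (the sample function and interval are arbitrary):

```python
import numpy as np

# Simplest maximum principle: if u'' > 0 on (a, b), the maximum of u over
# [a, b] sits at an endpoint.  Sample: u(x) = e^x + x^2, u'' = e^x + 2 > 0.
x = np.linspace(-1.0, 2.0, 1001)
u = np.exp(x) + x ** 2

i = int(np.argmax(u))
assert i == 0 or i == len(x) - 1     # the maximum is attained on the boundary
```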
Correspondingly, the simplest version of the maximum principle reads:

If (a_ij(x)) ≥ 0 and

− Σ_ij a_ij(x) u_{x_i x_j}(x) > 0,   (7.1)

then u can not achieve its maximum in the interior of the domain. Here, the validity of the maximum principle comes from the simple algebraic fact: for any two n × n matrices A and B, if A ≥ 0 and B ≤ 0, then AB ≤ 0.

An interesting special case is when (a_ij(x)) is an identity matrix, in which condition (7.1) becomes −Δu > 0. Unlike its one-dimensional counterpart, condition (7.1) no longer implies that the graph of u(x) is concave up. A simple counter example is

u(x_1, x_2) = (1/2)( x_1² − x_2² ).

One can easily see from the graph of this function that (0, 0) is a saddle point.

In this chapter, we will introduce various maximum principles, and most of them will be used in the method of moving planes in the next chapter. Besides this, there are numerous other applications. We will list some below.

i) Providing Estimates

Consider the boundary value problem

−Δu = f(x), x ∈ B_1(0) ⊂ R^n; u(x) = 0, x ∈ ∂B_1(0).   (7.2)

If a ≤ f(x) ≤ b in B_1(0), then we can compare the solution u with the two functions

(a/2n)(1 − |x|²) and (b/2n)(1 − |x|²),

which satisfy the equation with f(x) replaced by a and b respectively, and which vanish on the boundary as u does. Now, applying the maximum principle for the operator −Δ (see Theorem 7.1.1 in the following), we obtain

(a/2n)(1 − |x|²) ≤ u(x) ≤ (b/2n)(1 − |x|²).

ii) Proving Uniqueness of Solutions

In the above example, if f(x) ≡ 0, then we can choose a = b = 0, and this implies that u ≡ 0. In other words, the solution of the boundary value problem (7.2) is unique.

iii) Establishing the Existence of Solutions

(a) For a linear equation such as (7.2) in any bounded open domain Ω, let

u(x) = sup φ(x),

where the sup is taken among all the functions φ that satisfy the corresponding differential inequality

−Δφ ≤ f(x), x ∈ Ω; φ(x) = 0, x ∈ ∂Ω.

Then u is a solution of the corresponding boundary value problem.

(b) Now consider the nonlinear problem

−Δu = f(u), x ∈ Ω; u(x) = 0, x ∈ ∂Ω.   (7.3)

Assume that f(·) is a smooth function with f′(·) ≥ 0. Suppose that there exist two functions u(x) ≤ ū(x), such that

−Δu ≤ f(u) ≤ f(ū) ≤ −Δū, x ∈ Ω.

These two functions are called sub (or lower) and super (or upper) solutions, respectively. To seek a solution of problem (7.3), we use successive approximations. Let

−Δu_1 = f(u), −Δu_{i+1} = f(u_i), x ∈ Ω,

with vanishing boundary data. Then by the maximum principle, we have

u ≤ u_1 ≤ u_2 ≤ ··· ≤ u_i ≤ ··· ≤ ū.

Let u be the limit of the sequence {u_i}:

u(x) = lim_{i→∞} u_i(x).

Then u is a solution of (7.3).

In Section 7.2, we introduce and prove the weak maximum principles.

Theorem 7.1.1 (Weak Maximum Principle for −Δ.)

i) If −Δu(x) ≥ 0, x ∈ Ω, then

min_Ω̄ u ≥ min_∂Ω u.

ii) If −Δu(x) ≤ 0, x ∈ Ω, then

max_Ω̄ u ≤ max_∂Ω u.
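The sub/supersolution iteration in iii)(b) can be sketched numerically in one dimension; the following is our own finite-difference illustration (the choices of f, grid size, and supersolution are ours, not from the text):

```python
import numpy as np

# Monotone iteration for -u'' = f(u) on (0,1), u(0) = u(1) = 0, with f
# nondecreasing.  Sample f(u) = 1 + u/2, so f' = 1/2 >= 0.
N = 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)

def f(u):
    return 1.0 + 0.5 * u

# Tridiagonal matrix for -u'' with zero Dirichlet boundary conditions.
A = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

sub = np.zeros(N)      # u = 0:    -u'' = 0 <= f(0) = 1, a subsolution
sup = x * (1 - x)      # u = x(1-x): -u'' = 2 >= 1 + u/2, a supersolution

u = sub.copy()
for _ in range(50):    # u_{i+1} solves -u_{i+1}'' = f(u_i)
    u = np.linalg.solve(A, f(u))

# The iterates rise from the subsolution, stay below the supersolution,
# and converge to a fixed point, i.e. a discrete solution of -u'' = f(u).
assert np.all(u >= sub - 1e-9) and np.all(u <= sup + 1e-9)
assert np.max(np.abs(A @ u - f(u))) < 1e-6
```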
This result can be extended to general uniformly elliptic operators. Let

D_i = ∂/∂x_i, D_ij = ∂²/(∂x_i ∂x_j).

Define

L = − Σ_ij a_ij(x) D_ij + Σ_i b_i(x) D_i + c(x).

Here we always assume that a_ij(x), b_i(x), and c(x) are bounded continuous functions in Ω̄. We say that L is uniformly elliptic if

Σ a_ij(x) ξ_i ξ_j ≥ δ|ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and for some δ > 0.

Theorem 7.1.2 (Weak Maximum Principle for L) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.

i) If Lu(x) ≥ 0, x ∈ Ω, then

min_Ω̄ u ≥ min_∂Ω u.

ii) If Lu(x) ≤ 0, x ∈ Ω, then

max_Ω̄ u ≤ max_∂Ω u.

These Weak Maximum Principles infer that the minima or maxima of u are attained at some points on the boundary ∂Ω. However, they do not exclude the possibility that the minima or maxima may also occur in the interior of Ω. Actually, this can not happen unless u is constant, as we will see in the following.

Theorem 7.1.3 (Strong Maximum Principle for L with c(x) ≡ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum value only on ∂Ω unless u is constant.

ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum value only on ∂Ω unless u is constant.

This maximum principle (as well as the weak one) can also be applied to the case when c(x) ≥ 0, with slight modifications.
Theorem 7.1.4 (Strong Maximum Principle for L with c(x) ≥ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u can not attain its non-positive minimum in the interior of Ω unless u is constant.

ii) If Lu(x) ≤ 0, x ∈ Ω, then u can not attain its non-negative maximum in the interior of Ω unless u is constant.

We will prove these Theorems in Section 7.3 by using the Hopf Lemma.

Notice that in the previous Theorems we all require that c(x) ≥ 0. Roughly speaking, maximum principles hold for 'positive' operators: −Δ is 'positive', and obviously so is −Δ + c(x) if c(x) ≥ 0. However, as we will see in later sections, in practical problems it occurs frequently that the condition c(x) ≥ 0 can not be met. Do we really need c(x) ≥ 0? The answer is 'no'. Actually, if c(x) is not 'too negative', then the operator '−Δ + c(x)' can still remain 'positive' to ensure the maximum principle. These will be studied in Section 7.4, where we prove the 'Maximum Principles Based on Comparisons'.

Theorem 7.1.5 (Maximum Principle Based on Comparison) Assume that Ω is a bounded domain. Let φ be a positive function on Ω̄ satisfying

−Δφ + λ(x)φ ≥ 0.   (7.4)

Let u be a function such that

−Δu + c(x)u ≥ 0, x ∈ Ω; u ≥ 0 on ∂Ω.   (7.5)

If

c(x) > λ(x), ∀x ∈ Ω,

then u ≥ 0 in Ω.

As consequences of Theorem 7.1.5, we derive the 'Narrow Region Principle' and the 'Decay at Infinity Principle'. These principles can be applied very conveniently in the 'Method of Moving Planes' to establish the symmetry of solutions for semilinear elliptic equations, as we will see in the next chapter.

Also, in Section 7.5, we establish a maximum principle for integral inequalities.
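A quantitative feel for "not too negative": on an interval of length L one may take φ in (7.4) to be the first Dirichlet eigenfunction, with λ(x) ≡ −λ_1 = −(π/L)², so c(x) may dip below zero as long as it stays above −λ_1; λ_1 is large precisely when the region is narrow. A finite-difference illustration of our own (the values of L and N are arbitrary):

```python
import numpy as np

# First Dirichlet eigenvalue of -d^2/dx^2 on (0, L) is (pi/L)^2, which grows
# as the interval narrows -- the mechanism behind the Narrow Region Principle.
L, N = 0.5, 400
h = L / (N + 1)
A = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

lam1 = np.min(np.linalg.eigvalsh(A))
assert abs(lam1 - (np.pi / L) ** 2) < 1.0    # (pi/0.5)^2 is about 39.5
```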
7.2 Weak Maximum Principles

In this section, we prove the weak maximum principles.

Theorem 7.2.1 (Weak Maximum Principle for −Δ.)

i) If

−Δu(x) ≥ 0, x ∈ Ω,   (7.6)

then

min_Ω̄ u ≥ min_∂Ω u.   (7.7)

ii) If

−Δu(x) ≤ 0, x ∈ Ω,   (7.8)

then

max_Ω̄ u ≤ max_∂Ω u.   (7.9)

Proof. Here we only present the proof of part i); the entirely similar proof also works for part ii). To better illustrate the idea, we will deal with the one-dimensional case and the higher-dimensional case separately.

First, let Ω be the interval (a, b). Then condition (7.6) becomes u″(x) ≤ 0. This implies that the graph of u(x) on (a, b) is concave downward, and therefore one can roughly see that the values of u(x) in (a, b) are larger than or equal to the minimum value of u at the end points (see Figure 2).

To prove the above observation rigorously, we first carry it out under the stronger assumption that

−u″(x) > 0.   (7.10)

Let m = min_∂Ω u. Suppose, in contrary to (7.7), that there is a minimum x_o ∈ (a, b) of u such that u(x_o) < m. Then, by the second order Taylor expansion of u around x_o, we must have −u″(x_o) ≤ 0. This contradicts with the assumption (7.10).

Now, for u only satisfying the weaker condition (7.6), we consider a perturbation of u:

u_ε(x) = u(x) − εx².

Obviously, for each ε > 0, u_ε(x) satisfies the stronger condition (7.10).
Hence

min_Ω̄ u_ε ≥ min_∂Ω u_ε.

Now, letting ε→0, we arrive at (7.7).

To prove the theorem in dimensions higher than one, we need the following

Lemma 7.2.1 (Mean Value Inequality) Let x_o be a point in Ω. Let B_r(x_o) ⊂ Ω be the ball of radius r centered at x_o, and ∂B_r(x_o) be its boundary.

i) If −Δu(x) > (=) 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

u(x_o) > (=) (1/|∂B_r(x_o)|) ∫_{∂B_r(x_o)} u(x) dS.   (7.11)

It follows that, if x_o is a minimum of u in Ω, then

−Δu(x_o) ≤ 0.   (7.12)

ii) If −Δu(x) < 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

u(x_o) < (1/|∂B_r(x_o)|) ∫_{∂B_r(x_o)} u(x) dS.   (7.13)

It follows that, if x_o is a maximum of u in Ω, then

−Δu(x_o) ≥ 0.   (7.14)

This Lemma tells us that, roughly speaking, if −Δu(x) > 0, then the graph of u is locally somewhat concave downward, and the value of u at the center of the small ball B_r(x_o) is larger than its average value on the boundary ∂B_r(x_o).

We postpone the proof of the Lemma for a moment. Now, based on this Lemma, to prove the theorem we first consider u_ε(x) = u(x) − ε|x|². Obviously,

−Δu_ε = −Δu + 2nε > 0.   (7.15)

Then, by Lemma 7.2.1, u_ε can not attain its minimum in the interior of Ω; hence we must have

min_Ω u_ε ≥ min_∂Ω u_ε.   (7.16)

Now, in (7.16), letting ε→0, we arrive at the desired conclusion (7.7). This completes the proof of the Theorem.

The Proof of Lemma 7.2.1. By the Divergence Theorem,

∫_{B_r(x_o)} Δu(x) dx = ∫_{∂B_r(x_o)} ∂u/∂ν dS = r^{n−1} ∫_{S^{n−1}} ∂u/∂r (x_o + rω) dS_ω,   (7.17)

where dS_ω is the area element of the n − 1 dimensional unit sphere S^{n−1} = {ω : |ω| = 1}.
If −Δu > 0, then by (7.17),

∂/∂r { ∫_{S^{n−1}} u(x_o + rω) dS_ω } < 0.   (7.18)

Integrating both sides of (7.18) from 0 to r yields

∫_{S^{n−1}} u(x_o + rω) dS_ω − u(x_o) |S^{n−1}| < 0,

where |S^{n−1}| is the area of S^{n−1}. It follows that

u(x_o) > (1 / (r^{n−1}|S^{n−1}|)) ∫_{∂B_r(x_o)} u(x) dS.

This verifies (7.11). To see (7.12), we suppose in the contrary that −Δu(x_o) > 0. Then, by continuity, there exists a δ > 0 such that

−Δu(x) > 0, ∀x ∈ B_δ(x_o).

Consequently, (7.11) holds for any 0 < r < δ. This contradicts with the assumption that x_o is a minimum of u. This completes the proof of the Lemma.

From the proof of Theorem 7.2.1, one can see that if we replace the operator −Δ by −Δ + c(x) with c(x) ≥ 0, then the conclusion of Theorem 7.2.1 is still true (with slight modifications). Furthermore, we can replace the Laplace operator −Δ with general uniformly elliptic operators. Let

D_i = ∂/∂x_i, D_ij = ∂²/(∂x_i ∂x_j).

Define

L = − Σ_ij a_ij(x) D_ij + Σ_i b_i(x) D_i + c(x).   (7.19)

Here we always assume that a_ij(x), b_i(x), and c(x) are bounded continuous functions in Ω̄. We say that L is uniformly elliptic if

Σ a_ij(x) ξ_i ξ_j ≥ δ|ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and for some δ > 0.

Theorem 7.2.2 (Weak Maximum Principle for L with c(x) ≡ 0). Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.

i) If Lu ≥ 0 in Ω, then

min_Ω̄ u ≥ min_∂Ω u.   (7.20)

ii) If Lu ≤ 0 in Ω, then

max_Ω̄ u ≤ max_∂Ω u.
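The mean value comparison of Lemma 7.2.1 can be checked numerically on a circle in R²; the sample functions below are our own choices:

```python
import numpy as np

# For u = -(x^2+y^2) we have -Laplacian(u) = 4 > 0, so the value at the
# center exceeds the average over a small circle; for the harmonic function
# x^2 - y^2 the average equals the center value.
theta = np.linspace(0.0, 2 * np.pi, 20001)[:-1]
xo, yo, r = 0.3, -0.2, 0.1

def u(x, y):
    return -(x ** 2 + y ** 2)

avg = np.mean(u(xo + r * np.cos(theta), yo + r * np.sin(theta)))
assert u(xo, yo) > avg                      # strict inequality (7.11)

avg_h = np.mean((xo + r * np.cos(theta)) ** 2 - (yo + r * np.sin(theta)) ** 2)
assert abs(avg_h - (xo ** 2 - yo ** 2)) < 1e-6   # equality for harmonic u
```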
For c(x) ≥ 0, the principle still applies with slight modifications.

Theorem 7.2.3 (Weak Maximum Principle for L with c(x) ≥ 0). Let L be the uniformly elliptic operator defined above. Assume that c(x) ≥ 0. Let u⁻(x) = min{u(x), 0} and u⁺(x) = max{u(x), 0}.

i) If Lu ≥ 0 in Ω, then

min_Ω̄ u ≥ min_∂Ω u⁻.

ii) If Lu ≤ 0 in Ω, then

max_Ω̄ u ≤ max_∂Ω u⁺.

Interested readers may find its proof in many standard books, say in [Ev], page 327.

7.3 The Hopf Lemma and Strong Maximum Principles

In the previous section, we prove a weak form of maximum principle. In the case Lu ≥ 0, it concludes that the minimum of u is attained at some point on the boundary ∂Ω. However, it does not exclude the possibility that the minimum may also be attained at some point in the interior of Ω. In this section, we will show that this can not actually happen; that is, the minimum value of u can only be achieved on the boundary unless u is constant. This is called the "Strong Maximum Principle". We will prove it by using the following

Lemma 7.3.1 (Hopf Lemma). Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let

L = − Σ_ij a_ij(x) D_ij + Σ_i b_i(x) D_i + c(x)

be uniformly elliptic in Ω with c(x) ≡ 0. Assume that

Lu ≥ 0 in Ω.   (7.21)

Suppose there is a ball B contained in Ω with a point x_o ∈ ∂Ω ∩ ∂B, and suppose

u(x) > u(x_o), ∀x ∈ B.   (7.22)

Then, for any outward directional derivative at x_o,

∂u(x_o)/∂ν < 0.   (7.23)
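The proof below is built on the auxiliary barrier w(x) = e^{−αr²} − e^{−α|x|²}. Its two key properties can be checked directly in the special case L = −Δ; a numerical sketch of our own (sample n, r):

```python
import numpy as np

# For L = -Laplacian, the Hopf barrier satisfies
#   Lw = -Laplacian(w) = e^{-alpha|x|^2} (4 alpha^2 |x|^2 - 2 n alpha),
# which is >= 0 on the annulus r/2 <= |x| <= r once alpha >= 2n/r^2,
# and its outward radial derivative at |x| = r is positive.
n, r = 3, 1.0
alpha = 2 * n / r ** 2 + 1.0
s = np.linspace(r / 2, r, 200)               # radial variable |x|

Lw = np.exp(-alpha * s ** 2) * (4 * alpha ** 2 * s ** 2 - 2 * n * alpha)
dw_dnu = 2 * alpha * r * np.exp(-alpha * r ** 2)   # d w / d|x| at |x| = r

assert np.all(Lw >= 0)
assert dw_dnu > 0
```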
In the case c(x) ≥ 0, if we require additionally that u(x_o) ≤ 0, then the same conclusion of the Hopf Lemma holds.

Proof. Without loss of generality, we may assume that B is centered at the origin with radius r. Define

w(x) = e^{−αr²} − e^{−α|x|²}.

Consider v(x) = u(x) + εw(x) on the set D = B_{r/2}(x_o) ∩ B (see Figure 3).

We will choose α and ε appropriately so that we can apply the Weak Maximum Principle to v(x) and arrive at

v(x) ≥ v(x_o), ∀x ∈ D.   (7.24)

We postpone the proof of (7.24) for a moment. Now, from (7.24), we have

∂v(x_o)/∂ν ≤ 0.   (7.25)

Noticing that

∂w(x_o)/∂ν > 0,

we arrive at the desired inequality

∂u(x_o)/∂ν < 0.

Now, to complete the proof of the Lemma, what is left is to verify (7.24). We will carry this out in two steps. First, we show that

Lv ≥ 0.   (7.26)

Hence we can apply the Weak Maximum Principle to conclude that

min_D̄ v ≥ min_∂D v.   (7.27)
Then we show that the minimum of v on the boundary ∂D is actually attained at x_o:

v(x) ≥ v(x_o), ∀x ∈ ∂D.   (7.28)

Obviously, (7.27) and (7.28) imply (7.24).

To see (7.26), we directly calculate

Lw = e^{−α|x|²} { 4α² Σ_{i,j=1}^n a_ij(x) x_i x_j − 2α Σ_{i=1}^n [ a_ii(x) − b_i(x) x_i ] − c(x) } + c(x) e^{−αr²}

≥ e^{−α|x|²} { 4α² Σ_{i,j=1}^n a_ij(x) x_i x_j − 2α Σ_{i=1}^n [ a_ii(x) − b_i(x) x_i ] − c(x) }.   (7.29)

By the ellipticity assumption, we have

Σ_{i,j=1}^n a_ij(x) x_i x_j ≥ δ|x|² ≥ δ(r/2)² > 0 in D.   (7.30)

Hence we can choose α sufficiently large, such that Lw ≥ 0. This, together with the assumption Lu ≥ 0, implies Lv ≥ 0, and (7.27) follows from the Weak Maximum Principle.

To verify (7.28), we consider the two parts of the boundary ∂D separately.

(i) On ∂D ∩ B, since u(x) > u(x_o), there exists a c_o > 0, such that u(x) ≥ u(x_o) + c_o. Take ε small enough such that ε|w| ≤ c_o on ∂D ∩ B. Hence

v(x) ≥ u(x_o) = v(x_o), ∀x ∈ ∂D ∩ B.

(ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption u(x) ≥ u(x_o), we have v(x) ≥ v(x_o).

This completes the proof of the Lemma.

Now we are ready to prove

Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0.) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum only on ∂Ω unless u is constant.

ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum only on ∂Ω unless u is constant.

Proof. We prove part i) here; the proof of part ii) is similar.

Let m be the minimum value of u in Ω̄. Set Σ = {x ∈ Ω : u(x) = m}. It is relatively closed in Ω. We show that either Σ is empty or Σ = Ω.
7 Maximum Principles
We argue by contradiction. Suppose Σ is a nonempty proper subset of Ω. Then we can find an open ball B ⊂ Ω \ Σ with a point on its boundary belonging to Σ. Actually, we can first find a point p ∈ Ω \ Σ such that d(p, Σ) < d(p, ∂Ω), and then increase the radius of a small ball centered at p until it hits Σ (before hitting ∂Ω). Let x^o be a point at ∂B ∩ Σ. Obviously we have, in B,

 Lu ≥ 0 and u(x) > u(x^o).

Now we can apply the Hopf Lemma to conclude that the outward normal derivative

 ∂u/∂ν (x^o) < 0.   (7.31)

On the other hand, x^o is an interior minimum of u in Ω, so we must have Du(x^o) = 0. This contradicts (7.31) and hence completes the proof of the Theorem.

In the case when c(x) ≥ 0, the strong maximum principle still applies, with slight modifications.

Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.
i) If Lu(x) ≥ 0, x ∈ Ω, then u can not attain its nonpositive minimum in the interior of Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u can not attain its nonnegative maximum in the interior of Ω unless u is constant.

Remark 7.3.1 In order for the maximum principle to hold, we assume that the domain Ω is bounded. This is essential, since it guarantees the existence of the maximum and minimum of u in Ω̄. A simple counterexample is when Ω is the half space {x ∈ R^n | x_n > 0} and u(x) = x_n. Obviously, Δu = 0, but u does not obey the maximum principle

 max_Ω u ≤ max_{∂Ω} u.

Equally important is the nonnegativeness of the coefficient c(x). For example, set Ω = {(x, y) ∈ R² | −π/2 < x < π/2, −π/2 < y < π/2}. Then u = cos x cos y satisfies

 −Δu − 2u = 0, in Ω
 u = 0, on ∂Ω.
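The identity claimed for this example is exact (Δ(cos x cos y) = −2 cos x cos y), and a quick finite-difference check confirms it; the grid and step size below are assumptions for the check only:

```python
import math

# Check that u(x,y) = cos x cos y satisfies -Delta u - 2u = 0 on
# (-pi/2, pi/2)^2, with u = 0 on the boundary.
u = lambda x, y: math.cos(x) * math.cos(y)
h = 1e-4

def residual(x, y):
    # five-point finite-difference Laplacian
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4 * u(x, y)) / h**2
    return abs(-lap - 2 * u(x, y))

pts = [-1.5 + 3.0 * k / 30 for k in range(31)]
max_res = max(residual(x, y) for x in pts for y in pts)
boundary_val = abs(u(math.pi / 2, 0.3))   # u vanishes on the boundary x = pi/2
```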
But, obviously, −u = −cos x cos y satisfies the same problem, and there are some points in Ω at which −u < 0; hence the maximum principle fails here. However, if we impose some sign restriction on u, say u ≥ 0, then both conditions can be relaxed. A simple version of such a result will be presented in the next theorem. Also, as one will see in the next section, c(x) is actually allowed to be negative, but not 'too negative'.

Theorem 7.3.3 (Maximum Principle and Hopf Lemma for a not necessarily bounded domain and a not necessarily nonnegative c(x).) Let Ω be a domain in R^n with smooth boundary. Assume that u ∈ C²(Ω) ∩ C(Ω̄) and satisfies

 −Δu + Σ_{i=1}^n b_i(x) D_i u + c(x) u ≥ 0, u(x) ≥ 0, x ∈ Ω
 u(x) = 0, x ∈ ∂Ω   (7.32)

with bounded functions b_i(x) and c(x). Then
i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and
ii) if u ≢ 0 in Ω, then on ∂Ω, the exterior normal derivative ∂u/∂ν < 0.
To prove the Theorem, we need the following Lemma concerning eigenvalues.

Lemma 7.3.2 Let λ_1 be the first positive eigenvalue of

 −Δφ = λ_1 φ(x), x ∈ B_1(0)
 φ(x) = 0, x ∈ ∂B_1(0)   (7.33)

with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the first positive eigenvalue of the problem on B_ρ(0) is λ_1/ρ². More precisely, if we let ψ(x) = φ(x/ρ), then

 −Δψ = (λ_1/ρ²) ψ(x), x ∈ B_ρ(0)
 ψ(x) = 0, x ∈ ∂B_ρ(0).   (7.34)

The proof is straightforward, and will be left to the reader.

The Proof of Theorem 7.3.3.
i) Suppose that u = 0 at some point in Ω, but u ≢ 0 on Ω. Let Ω⁺ = {x ∈ Ω | u(x) > 0}. Then, by the regularity assumption on u, Ω⁺ is an open set with C² boundary. Obviously, u(x) = 0, ∀ x ∈ ∂Ω⁺.

Let x^o be a point on ∂Ω⁺, but not on ∂Ω. Then for ρ > 0 sufficiently small, one can choose a ball B_{ρ/2}(x̄) ⊂ Ω⁺ with x^o as its boundary point. Let ψ be
the positive eigenfunction of the eigenvalue problem (7.34) on B_ρ(x̄) corresponding to the eigenvalue λ_1/ρ². Obviously, B_ρ(x̄) completely covers B_{ρ/2}(x̄). Let v = u/ψ. Then from (7.32), it is easy to deduce that

 0 ≤ −Δv − 2 ∇v · (∇ψ/ψ) + Σ_{i=1}^n b_i(x) D_i v + ( −Δψ/ψ + Σ_{i=1}^n b_i(x) (D_i ψ)/ψ + c(x) ) v

  ≡ −Δv + Σ_{i=1}^n b̃_i(x) D_i v + c̃(x) v.

Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on B_1; then

 c̃(x) = λ_1/ρ² + (1/ρ) Σ_{i=1}^n b_i(x) (D_i φ)/φ + c(x).

This allows us to choose ρ sufficiently small so that c̃(x) ≥ 0. Now we can apply the Hopf Lemma to conclude that the outward normal derivative at the boundary point x^o of B_{ρ/2}(x̄) satisfies

 ∂v/∂ν (x^o) < 0,   (7.35)

because

 v(x) > 0, ∀ x ∈ B_{ρ/2}(x̄), and v(x^o) = u(x^o)/ψ(x^o) = 0.

On the other hand, since x^o is also a minimum of v in the interior of Ω, we must have ∇v(x^o) = 0. This contradicts (7.35) and hence proves part i) of the Theorem.

ii) The proof goes almost the same as in part i), except that we consider the point x^o on ∂Ω, and the ball B_{ρ/2}(x̄) is in Ω with x^o ∈ ∂B_{ρ/2}(x̄). Then, for the outward normal derivative of u, we have

 ∂u/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) + v(x^o) ∂ψ/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) < 0.

Here we have used the well-known fact that the eigenfunction ψ on B_ρ(x^o) is radially symmetric about the center x^o, and hence ∇ψ(x^o) = 0. This completes the proof of the Theorem.
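The scaling in Lemma 7.3.2, which was used throughout the proof above, can be illustrated numerically in one dimension (an assumed toy case, not from the text: on B_1 = (−1, 1) the first Dirichlet eigenfunction of −d²/dx² is φ(x) = cos(πx/2) with λ_1 = π²/4):

```python
import math

# psi(x) = phi(x / rho) should satisfy -psi'' = (lambda_1 / rho^2) psi on
# B_rho = (-rho, rho), with psi = 0 on the boundary.  (1-D toy case.)
lam1 = math.pi ** 2 / 4
rho = 3.0
psi = lambda x: math.cos(math.pi * (x / rho) / 2)
h = 1e-4

def residual(x):
    neg_psi_xx = -(psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    return abs(neg_psi_xx - (lam1 / rho**2) * psi(x))

pts = [-0.9 * rho + 1.8 * rho * k / 100 for k in range(101)]
max_res = max(residual(x) for x in pts)
boundary_val = abs(psi(rho))    # psi vanishes at the boundary of B_rho
```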
7.4 Maximum Principles Based on Comparisons
In the previous section, we showed that if (−Δ + c(x)) u ≥ 0, then the maximum principle, i.e. (7.20), applies. There, we required c(x) ≥ 0. We can think of −Δ as a 'positive' operator, and the maximum principle holds for any 'positive' operators. For c(x) ≥ 0, −Δ + c(x) is also 'positive'. Do we really need c(x) ≥ 0 here? To answer this question, let us consider the Dirichlet eigenvalue problem of −Δ:

 −Δφ − λ φ(x) = 0, x ∈ Ω
 φ(x) = 0, x ∈ ∂Ω.   (7.36)

We notice that the eigenfunction φ corresponding to the first positive eigenvalue λ_1 is either positive or negative in Ω; that is, the maxima or minima of φ are attained only on the boundary ∂Ω, and hence the solutions of (7.36) with λ = λ_1 obey the Maximum Principle. This suggests that, to ensure the Maximum Principle, c(x) need not be nonnegative; it is allowed to be as negative as −λ_1. More precisely, we can establish the following more general maximum principle based on comparison.

Theorem 7.4.1 Assume that Ω is a bounded domain. Let φ be a positive function on Ω̄ satisfying

 −Δφ + λ(x) φ ≥ 0.   (7.37)

Assume that u is a solution of

 −Δu + c(x) u ≥ 0, x ∈ Ω
 u ≥ 0, on ∂Ω.   (7.38)

If

 c(x) > λ(x), ∀ x ∈ Ω,   (7.39)

then u ≥ 0 in Ω.

Proof. We argue by contradiction. Suppose that u(x) < 0 somewhere in Ω. Let v(x) = u(x)/φ(x). Then, since φ(x) > 0, we must have v(x) < 0 somewhere in Ω. Let x^o ∈ Ω be a minimum of v(x). By a direct calculation, it is easy to verify that

 −Δv = 2 ∇v · (∇φ/φ) + (1/φ) ( −Δu + (Δφ/φ) u ).   (7.40)

On one hand, since x^o is a minimum, we have

 −Δv(x^o) ≤ 0 and ∇v(x^o) = 0.   (7.41)

While on the other hand, by (7.37), (7.39), and taking into account that u(x^o) < 0, we have, at the point x^o,

 −Δu + (Δφ/φ) u(x^o) ≥ −Δu + λ(x^o) u(x^o) > −Δu + c(x^o) u(x^o) ≥ 0.

This is an obvious contradiction with (7.40) and (7.41), and thus completes the proof of the Theorem.
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39) are required only at the points where v attains its minimum, or at points where u is negative.

The Theorem is also valid on an unbounded domain if u is 'nonnegative' at infinity:

Theorem 7.4.2 If Ω is an unbounded domain, besides condition (7.38), we assume further that

 lim inf_{|x|→∞} u(x) ≥ 0.   (7.42)

Then u ≥ 0 in Ω.

Proof. Still consider the same v(x) as in the proof of Theorem 7.4.1. Now condition (7.42) guarantees that the minima of v(x) do not 'leak' away to infinity. Then the rest of the arguments are exactly the same as in the proof of Theorem 7.4.1.

For convenience in applications, we provide two typical situations where there exist such functions φ and c(x) satisfying condition (7.39), so that the Maximum Principle Based on Comparison applies:
i) narrow regions, and
ii) c(x) decays fast enough at ∞.

i) Narrow Regions. When

 Ω = {x | 0 < x_1 < l}

is a narrow region with width l, we can choose φ(x) = sin((x_1 + ε)/l). Then −Δφ = (1/l²) φ; that is, φ satisfies (7.37) with λ(x) = −1/l², which can be very negative when l is sufficiently small. Hence we can directly apply Theorem 7.4.1 to conclude:

Corollary 7.4.1 (Narrow Region Principle.) If u satisfies (7.38) with bounded function c(x), then, when the width l of the region Ω is sufficiently small so that c(x) > λ(x) = −1/l², we have u ≥ 0 in Ω.

ii) Decay at Infinity. In dimension n ≥ 3, one can choose some positive number q < n − 2, and let φ(x) = 1/|x|^q. Then it is easy to verify that
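Both comparison functions above satisfy exact identities, which can be verified by finite differences; the concrete values of l, ε, n, q below are assumptions for the check only:

```python
import math

h = 1e-5

# i) Narrow region: phi(x1) = sin((x1 + eps)/l) satisfies -phi'' = (1/l^2) phi
#    and stays positive on [0, l] for small eps > 0.
l, eps = 0.1, 1e-3
phi = lambda t: math.sin((t + eps) / l)
pts = [l * k / 100 for k in range(101)]
res_narrow = max(abs(-(phi(t + h) - 2 * phi(t) + phi(t - h)) / h**2 - phi(t) / l**2)
                 for t in pts)
min_phi = min(phi(t) for t in pts)

# ii) Decay at infinity (n = 3, 0 < q < n - 2): phi(r) = r^{-q} satisfies
#     -(phi'' + (n-1)/r phi') = q(n-2-q) r^{-q-2}.
n, q = 3, 0.5
f = lambda r: r ** (-q)
rs = [1.0 + 4.0 * k / 100 for k in range(101)]

def neg_lap(r):
    # radial Laplacian by finite differences
    return -((f(r + h) - 2 * f(r) + f(r - h)) / h**2
             + (n - 1) / r * (f(r + h) - f(r - h)) / (2 * h))

res_decay = max(abs(neg_lap(r) - q * (n - 2 - q) * f(r) / r**2) for r in rs)
```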
 −Δφ = ( q(n − 2 − q)/|x|² ) φ.

In the case c(x) decays fast enough near infinity, we can adapt the proof of Theorem 7.4.1 to derive

Corollary 7.4.2 (Decay at Infinity) Assume there exists R > 0, such that

 c(x) > − q(n − 2 − q)/|x|², ∀ |x| > R.   (7.43)

Suppose

 lim_{|x|→∞} u(x) |x|^q = 0.

Let Ω be a region contained in B_R^C(0) ≡ R^n \ B_R(0). If u satisfies (7.38) on Ω̄, then

 u(x) ≥ 0 for all x ∈ Ω.

Remark 7.4.2 From Remark 7.4.1, one can see that condition (7.43) is actually only required at the points where u is negative.

Remark 7.4.3 Although Theorem 7.4.1 as well as its Corollaries are stated in linear forms, they can easily be applied to nonlinear equations, for example, to

 −Δu − |u|^{p−1} u = 0, x ∈ R^n.

Assume that the solution u decays near infinity at the rate of

 1/|x|^s, with s(p − 1) > 2.   (7.44)

Let c(x) = −|u(x)|^{p−1}. Then, for R sufficiently large, c(x) satisfies (7.43), and for the region Ω as stated in Corollary 7.4.2, if we further assume that u|_{∂Ω} ≥ 0, then we can derive from Corollary 7.4.2 that u ≥ 0 in the entire region Ω.

7.5 A Maximum Principle for Integral Equations

In this section, we introduce a maximum principle for integral equations, a maximum principle that will be applied to 'Narrow Regions' and 'Near Infinity'. Let Ω be a region in R^n, which may or may not be bounded. Assume

 K(x, y) ≥ 0, ∀ (x, y) ∈ Ω × Ω,

and define the integral operator T by

 (Tf)(x) = ∫_Ω K(x, y) f(y) dy.
Let ‖·‖ be a norm in a Banach space satisfying

 ‖f‖ ≤ ‖g‖, whenever 0 ≤ f(x) ≤ g(x) in Ω.   (7.45)

There are many norms that satisfy (7.45), such as the L^p(Ω) norms, the L^∞(Ω) norm, and the C(Ω) norm. Define

 f⁺(x) = max{0, f(x)}, f⁻(x) = min{0, f(x)},   (7.46)

and

 Ω⁺ = {x ∈ Ω | f(x) > 0}.

Theorem 7.5.1 Suppose that

 ‖Tg‖ ≤ θ ‖g‖, with some 0 < θ < 1, ∀ g.   (7.47)

If

 f(x) ≤ (Tf)(x),   (7.48)

then f(x) ≤ 0 for all x ∈ Ω.

Proof. It is easy to see that for any x ∈ Ω⁺,

 0 ≤ f⁺(x) ≤ f(x) ≤ (Tf)(x) = (Tf⁺)(x) + (Tf⁻)(x) ≤ (Tf⁺)(x).   (7.49)

While in the rest of the region Ω, we have f⁺(x) = 0 and (Tf⁺)(x) ≥ 0 by the definitions. Therefore, (7.49) holds for any x ∈ Ω. It follows from (7.45) and (7.49) that

 ‖f⁺‖ ≤ ‖Tf⁺‖ ≤ θ ‖f⁺‖.

Since θ < 1, we must have ‖f⁺‖ = 0, which implies that f⁺(x) ≡ 0, and hence f(x) ≤ 0. This completes the proof of the Theorem.

Recall that in the previous section, after introducing the 'Maximum Principle Based on Comparison' for partial differential equations, we explained how it can be applied to the situations of 'Narrow Regions' and 'Decay at Infinity'. Parallel to this, we have similar applications for integral equations. Let α be a number between 0 and n. Define

 (Tf)(x) = ∫_Ω ( 1/|x − y|^{n−α} ) c(y) f(y) dy.
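The contraction argument of Theorem 7.5.1 can be watched in a discrete stand-in for T (an assumed toy setup, not from the text: a nonnegative matrix whose row sums equal θ = 0.9, so that |Tg|_∞ ≤ θ |g|_∞); any vector with f ≤ Tf componentwise is then forced to be nonpositive, exactly as in the proof:

```python
import random

random.seed(0)
m = 20
theta = 0.9
Kmat = [[theta / m] * m for _ in range(m)]   # nonnegative kernel, row sums = theta < 1

def T(f):
    return [sum(Kij * fj for Kij, fj in zip(row, f)) for row in Kmat]

# sup-norm contraction: |T g|_inf <= theta |g|_inf
g = [random.uniform(-1, 1) for _ in range(m)]
contraction_ok = max(abs(v) for v in T(g)) <= theta * max(abs(v) for v in g) + 1e-12

# a generic f with f <= T f: the fixed point of f -> s + T f for a slack s <= 0
s = [random.uniform(-1.0, -0.5) for _ in range(m)]
f = s[:]
for _ in range(500):
    f = [si + ti for si, ti in zip(s, T(f))]

f_le_Tf = all(fi <= ti + 1e-9 for fi, ti in zip(f, T(f)))
f_nonpositive = all(fi <= 1e-9 for fi in f)
```

Since (I − T)⁻¹ = Σ_k T^k is entrywise nonnegative, f = (I − T)⁻¹ s inherits the sign of the slack s, which is the discrete analogue of the conclusion f ≤ 0.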
Assume that f satisfies the integral equation

 f = Tf in Ω,

or, more generally, the integral inequality

 f ≤ Tf in Ω.   (7.50)

Further assume that

 ‖Tf‖_{L^p(Ω)} ≤ C ‖c(y)‖_{L^τ(Ω)} ‖f‖_{L^p(Ω)}   (7.51)

for some p, τ > 1. If we have the right integrability condition on c(y), then, from Theorem 7.5.1, we can derive

Corollary 7.5.1 Assume that c(y) ≥ 0 and c(y) ∈ L^τ(R^n). Let f ∈ L^p(R^n) be a function satisfying (7.50) and (7.51). Then there exist positive numbers R_o and ε_o depending on c(y) only, such that if

 µ(Ω ∩ B_{R_o}(0)) ≤ ε_o,

then f⁺ ≡ 0 in Ω. Here µ(D) is the measure of the set D.

Proof. Since c(y) ∈ L^τ(R^n), by the Lebesgue integral theory, when the measure of the intersection of Ω with B_{R_o}(0) is sufficiently small, we can make the integral ∫_Ω |c(y)|^τ dy as small as we wish, and thus obtain

 C ‖c(y)‖_{L^τ(Ω)} < 1.

Now it follows from Theorem 7.5.1 that f⁺(x) ≡ 0, ∀ x ∈ Ω. This completes the proof of the Corollary.

Remark 7.5.1 One can see that the condition µ(Ω ∩ B_{R_o}(0)) ≤ ε_o in the Corollary is satisfied in the following two situations:
i) Narrow Regions: the width of Ω is very small.
ii) Near Infinity: say, Ω = B_R^C(0), the complement of the ball B_R(0), with sufficiently large R.

As an immediate application of this 'Maximum Principle', we study an integral equation in the next section. We will use the method of moving planes to obtain the radial symmetry and monotonicity of the positive solutions.
8 Methods of Moving Planes and of Moving Spheres

8.1 Outline of the Method of Moving Planes
8.2 Applications of Maximum Principles Based on Comparison
 8.2.1 Symmetry of Solutions on a Unit Ball
 8.2.2 Symmetry of Solutions of −Δu = u^p in R^n
 8.2.3 Symmetry of Solutions of −Δu = e^u in R²
8.3 Method of Moving Planes in a Local Way
 8.3.1 The Background
 8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
 8.4.1 The Background
 8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and Symmetry of Solutions for Integral Equations

The Method of Moving Planes (MMP) was invented by the Soviet mathematician Alexandroff in the early 1950s. Decades later, it was further developed by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas, and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY], and many others. The Method of Moving Planes and its variant, the Method of Moving Spheres, have become powerful tools in establishing symmetries and monotonicity for solutions of partial differential equations. They can also be used to obtain a priori estimates, to derive useful inequalities, and to prove nonexistence of solutions. This method has been applied to free boundary problems, semilinear partial differential equations, and other problems; particularly for semilinear partial differential equations, there have been many significant contributions. We refer to the paper of Fraenkel [F] for more descriptions on the method.
In this chapter, we will apply the Method of Moving Planes and its variant, the Method of Moving Spheres, to study semilinear elliptic equations and integral equations. We will establish symmetry, monotonicity, a priori estimates, and nonexistence of the solutions.

From the previous chapter, we have seen the beauty and power of maximum principles. The MMP greatly enhances the power of maximum principles: roughly speaking, the MMP is a continuous way of repeated applications of maximum principles. During the process of Moving Planes, the maximum principle has been used infinitely many times, and the advantage is that each time we only need to use the maximum principle in a very narrow region. One can see that, in such a narrow region, even if the coefficients of the equation are not 'good,' the maximum principle can still be applied. In this chapter, the Maximum Principles introduced in the previous chapter are applied in innovative ways; in particular, the Maximum Principles Based on Comparison will play a major role, and the Narrow Region Principle and the Decay at Infinity Principle will be used repeatedly in dealing with the three examples below. We recommend that the readers study this part carefully, so that they will be able to apply it to their own research.

It is well known that, by using a Green's function, one can change a differential equation into an integral equation, and under certain conditions they are equivalent. To investigate the symmetry and monotonicity of integral equations, the authors, together with Ou, created an integral form of MMP. Instead of using local properties (say, differentiability) of a differential equation, they employed the global properties of the solutions of integral equations. In the authors' research practice, we also introduced a form of maximum principle at infinity to the MMP, and therefore simplified many proofs and extended the results in more natural ways.

In Section 8.2, we will establish radial symmetry and monotonicity for the solutions of the following three semilinear elliptic problems:

 −Δu = f(u), x ∈ B_1(0); u = 0 on ∂B_1(0),

 −Δu = u^{(n+2)/(n−2)}(x), x ∈ R^n, n ≥ 3,

and

 −Δu = e^{u(x)}, x ∈ R².
In Section 8.3, we will apply the Method of Moving Planes in a 'local way' to obtain a priori estimates on the solutions of the prescribing scalar curvature equation on a compact Riemannian manifold M,

 − (4(n−1)/(n−2)) Δ_o u + R_o(x) u = R(x) u^{(n+2)/(n−2)}, in M.
We allow the function R(x) to change signs. In this situation, the traditional blowing-up analysis fails near the set where R(x) = 0. Since the Method of Moving Planes can not be applied to the solution u directly, we introduce an auxiliary function to circumvent this difficulty, and we will use the Method of Moving Planes in an innovative way to obtain a priori estimates.

In Section 8.4, we use the Method of Moving Spheres to prove the nonexistence of solutions for the prescribing Gaussian and scalar curvature equations

 −Δu + 2 = R(x) e^u

and

 −Δu + ( n(n−2)/4 ) u = ( (n−2)/(4(n−1)) ) R(x) u^{(n+2)/(n−2)}

on S² and on S^n (n ≥ 3), respectively. We prove that if the function R(x) is rotationally symmetric and monotone in the region where it is positive, then both equations admit no solution. This provides a stronger necessary condition than the well-known Kazdan-Warner condition, and it also becomes a sufficient condition for the existence of solutions in most cases.

In Section 8.5, as an application of the maximum principle for integral equations introduced in Section 7.5, we study the integral equation in R^n

 u(x) = ∫_{R^n} ( 1/|x − y|^{n−α} ) u^{(n+α)/(n−α)}(y) dy,

for any real number α between 0 and n. It arises as an Euler-Lagrange equation for a functional in the context of the Hardy-Littlewood-Sobolev inequalities. Due to the different nature of the integral equation, the traditional Method of Moving Planes does not work. Hence we exploit its global property and develop a new idea, the Integral Form of the Method of Moving Planes, to obtain the symmetry and monotonicity of the solutions. The Maximum Principle for Integral Equations established in Chapter 7 is combined with the estimates of various integral norms to carry on the moving of planes.

8.1 Outline of the Method of Moving Planes

To outline how the Method of Moving Planes works, we take the Euclidean space R^n for an example. Let u be a positive solution of a certain partial differential equation. If we want to prove that it is symmetric and monotone in a given direction, we may assign that direction as the x_1 axis. For any real number λ,
let

 T_λ = {x = (x_1, ..., x_n) ∈ R^n | x_1 = λ}.

This is a plane perpendicular to the x_1 axis, and it is the plane that we will move with. Let Σ_λ denote the region to the left of the plane, i.e.
 Σ_λ = {x ∈ R^n | x_1 < λ}.

Let

 x^λ = (2λ − x_1, x_2, ..., x_n),

the reflection of the point x = (x_1, ..., x_n) about the plane T_λ (see Figure 1).

[Figure 1: the plane T_λ, the region Σ_λ, and the reflection x ↦ x^λ.]

We compare the values of the solution u at the points x and x^λ. To this end, let

 w_λ(x) = u(x^λ) − u(x).

Suppose we want to show that u is symmetric about some plane T_{λ_o}. We generally go through the following two steps.

Step 1. We first show that, for λ sufficiently negative, we have

 w_λ(x) ≥ 0, ∀ x ∈ Σ_λ.   (8.1)

Then we are able to start off from this neighborhood of x_1 = −∞ and move the plane T_λ along the x_1 direction to the right, as long as the inequality (8.1) holds. We continuously move the plane this way up to its limiting position. More precisely, we define

 λ_o = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ}.

Step 2. We prove that u is symmetric about the plane T_{λ_o}, that is, w_{λ_o}(x) ≡ 0 for all x ∈ Σ_{λ_o}. This is usually carried out by a contradiction argument: we show that if w_{λ_o}(x) ≢ 0, then there would exist λ > λ_o such that (8.1) holds, and this contradicts with the definition of λ_o.

From the above illustration, one can see that the key to the Method of Moving Planes is to establish inequality (8.1). For partial differential equations, maximum principles are powerful tools for this task. While for integral equations, we use a different idea: we estimate a certain norm of w_λ on the set

 Σ_λ⁻ = {x ∈ Σ_λ | w_λ(x) < 0},

where the inequality (8.1) is violated, and we show that this norm must be zero, and hence Σ_λ⁻ is empty.
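The two steps above can be watched in miniature on a toy one-dimensional profile (an assumed example, not from the text: u(x) = (1 + x²)^{−1/2}, symmetric and decreasing about 0): w_λ ≥ 0 on Σ_λ for every λ ≤ 0, the inequality fails for λ > 0, so the moving plane stops exactly at the symmetry point λ_o = 0:

```python
# u is symmetric about 0 and decreasing in |x|; w_lam(x) = u(2*lam - x) - u(x)
# on Sigma_lam = {x < lam}.
u = lambda x: (1.0 + x * x) ** -0.5

def min_w(lam):
    # sample Sigma_lam on [-50, lam]
    xs = [-50.0 + (lam + 50.0) * k / 20000 for k in range(20001)]
    return min(u(2 * lam - x) - u(x) for x in xs)

holds = [min_w(lam) >= -1e-12 for lam in (-3.0, -1.0, -0.1, 0.0)]   # (8.1) holds
fails = min_w(0.5) < 0                                              # (8.1) fails past 0
```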
8.2 Applications of the Maximum Principles Based on Comparisons

In this section, we study some semilinear elliptic equations. We will apply the method of moving planes to establish the symmetry of the solutions. The essence of the method of moving planes is the application of various maximum principles. In the proof of each theorem, the readers will see vividly how the Maximum Principles Based on Comparisons are applied to narrow regions and to solutions with decay at infinity.

8.2.1 Symmetry of Solutions in a Unit Ball

We first begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:

Theorem 8.2.1 Assume that f(·) is a Lipschitz continuous function such that

 |f(p) − f(q)| ≤ C_o |p − q|   (8.2)

for some constant C_o. Then every positive solution u of

 −Δu = f(u), x ∈ B_1(0); u = 0 on ∂B_1(0)   (8.3)

is radially symmetric and monotone decreasing about the origin.

Proof. As shown on Figure 4, let T_λ = {x | x_1 = λ} be the plane perpendicular to the x_1 axis, and let Σ_λ be the part of B_1(0) which is on the left of the plane T_λ. For each x ∈ Σ_λ, let x^λ be the reflection of the point x about the plane T_λ; more precisely,

 x^λ = (2λ − x_1, x_2, ..., x_n).

[Figure 4: the ball B_1(0), the plane T_λ, the cap Σ_λ, and the reflection x ↦ x^λ.]

We compare the values of the solution u on Σ_λ with those on its reflection. Let

 u_λ(x) = u(x^λ), and w_λ(x) = u_λ(x) − u(x).
Then it is easy to see that u_λ satisfies the same equation as u does. Applying the Mean Value Theorem to f(u), one can verify that w_λ satisfies

 Δw_λ + C(x, λ) w_λ(x) = 0, x ∈ Σ_λ,   (8.4)

where

 C(x, λ) = ( f(u_λ(x)) − f(u(x)) ) / ( u_λ(x) − u(x) ),

and, by condition (8.2), |C(x, λ)| ≤ C_o.

Step 1: Start Moving the Plane. We start from near the left end of the region. Obviously, for λ sufficiently close to −1, Σ_λ is a narrow (in the x_1 direction) region, and on ∂Σ_λ, w_λ(x) ≥ 0. (On T_λ, w_λ(x) = 0; while on the curved part of ∂Σ_λ, w_λ(x) > 0 since u > 0 in B_1(0), where u(x) > 0 by assumption.) Now we can apply the 'Narrow Region Principle' (Corollary 7.4.1) to conclude that, for λ close to −1,

 w_λ(x) ≥ 0, ∀ x ∈ Σ_λ.   (8.5)

This provides a starting point for us to move the plane T_λ.

Step 2: Move the Plane to Its Right Limit. We now increase the value of λ continuously; that is, we move the plane T_λ to the right, as long as the inequality (8.5) holds. We show that, by moving this way, the plane will not stop before hitting the origin. More precisely, let

 λ̄ = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ};

we first claim that

 λ̄ ≥ 0.   (8.6)

Otherwise, we will show that the plane can be further moved to the right by a small distance, and this would contradict with the definition of λ̄. In fact, if λ̄ < 0, then the image of the curved surface part of ∂Σ_λ̄ under the reflection about T_λ̄ lies inside B_1(0), where u(x) > 0 by assumption. It follows that, on this part of ∂Σ_λ̄, w_λ̄(x) > 0. By the Strong Maximum Principle, we deduce that

 w_λ̄(x) > 0

in the interior of Σ_λ̄.

Let d_o be the maximum width of narrow regions in which we can apply the 'Narrow Region Principle'. Choose a small positive number δ, such that δ ≤ d_o/2. We consider the function w_{λ̄+δ}(x) on the narrow region (see Figure 5):

 Ω_δ = Σ_{λ̄+δ} ∩ {x | x_1 > λ̄ − d_o/2}.
It satisfies

 Δw_{λ̄+δ} + C(x, λ̄+δ) w_{λ̄+δ} = 0, x ∈ Ω_δ
 w_{λ̄+δ}(x) ≥ 0, x ∈ ∂Ω_δ.   (8.7)

[Figure 5: the narrow region Ω_δ between the planes T_{λ̄−d_o/2} and T_{λ̄+δ}.]

The equation is obvious. To see the boundary condition, we first notice that it is satisfied on the two curved parts and on the flat part where x_1 = λ̄ + δ of the boundary ∂Ω_δ, due to the definition of w_{λ̄+δ}. To see that it is also true on the rest of the boundary, where x_1 = λ̄ − d_o/2, we use a continuity argument. Notice that on this part, w_λ̄ is positive and bounded away from 0. More precisely, there exists a constant c_o > 0, such that

 w_λ̄(x) ≥ c_o, ∀ x ∈ Σ_{λ̄−d_o/2}.

Since w_λ is continuous in λ, for δ sufficiently small, we still have

 w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ̄−d_o/2}.

Hence, in particular, the boundary condition in (8.7) holds for such small δ. Now we can apply the 'Narrow Region Principle' to conclude that

 w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Ω_δ,

and therefore

 w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ̄+δ}.

This contradicts with the definition of λ̄, and thus establishes (8.6). More precisely and more generally, (8.6) implies that

 u(−x_1, x') ≤ u(x_1, x'), ∀ x_1 ≥ 0,   (8.8)
where x' = (x_2, ..., x_n). We then start from λ close to 1 and move the plane T_λ toward the left. Similarly, we obtain

 u(−x_1, x') ≥ u(x_1, x'), ∀ x_1 ≥ 0.   (8.9)

Combining the two opposite inequalities (8.8) and (8.9), we see that u(x) is symmetric about the plane T_0. Also, the monotonicity easily follows from the argument. Since we can place the x_1 axis in any direction, we conclude that u(x) must be radially symmetric and monotone decreasing about the origin. This completes the proof.

8.2.2 Symmetry of Solutions of −Δu = u^p in R^n

In an elegant paper of Gidas, Ni, and Nirenberg [2], an interesting result is the symmetry of the positive solutions of the semilinear elliptic equation

 Δu + u^p = 0, x ∈ R^n, n ≥ 3.   (8.10)

They proved

Theorem 8.2.2 For p = (n+2)/(n−2), all the positive solutions of (8.10) with reasonable behavior at infinity, namely

 u = O(1/|x|^{n−2}),

are radially symmetric and monotone decreasing about some point, and hence assume the form

 u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for some λ > 0 and x^o ∈ R^n.

This uniqueness result, as was pointed out by R. Schoen, is in fact equivalent to the geometric result due to Obata [O]: a Riemannian metric on S^n which is conformal to the standard one and has the same constant scalar curvature is the pull-back of the standard one under a conformal map of S^n to itself. Recently, Caffarelli, Gidas, and Spruck [CGS] removed the decay assumption u = O(|x|^{2−n}) and proved the same result. In the case that 1 ≤ p < (n+2)/(n−2), Gidas and Spruck [GS] showed that the only nonnegative solution of (8.10) is identically zero. Then, in the authors' paper [CL1], a simpler and more elementary proof was given for almost the same result:

Theorem 8.2.3 i) For p = (n+2)/(n−2), every positive C² solution of (8.10) must be radially symmetric and monotone decreasing about some point, and hence assumes the form

 u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for λ > 0 and for some x^o ∈ R^n.
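A quick numerical check (independent of the proof; n = 3, λ = 1, x^o = 0 are assumed values for the check) that the stated profile indeed solves (8.10) for p = (n+2)/(n−2):

```python
import math

# For n = 3: u(r) = [n(n-2) lam^2]^{(n-2)/4} (lam^2 + r^2)^{-(n-2)/2}
#          = 3^{1/4} (1 + r^2)^{-1/2} should satisfy
# -u'' - (n-1)/r u' = u^{(n+2)/(n-2)} = u^5   (radial form of (8.10)).
u = lambda r: 3 ** 0.25 * (1.0 + r * r) ** -0.5
h = 1e-4

def residual(r):
    lap = (u(r + h) - 2 * u(r) + u(r - h)) / h**2 \
        + (2.0 / r) * (u(r + h) - u(r - h)) / (2 * h)
    return abs(-lap - u(r) ** 5)

max_res = max(residual(0.1 + 9.9 * k / 200) for k in range(201))
```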
However.3. to better illustrate the idea. using the uniqueness theory in Ordinary Diﬀerential Equations.1). Finally. we will move our plane Tλ in the the x1 direction toward the right as long as inequality (8.2. Prepare to Move the Plane from Near −∞. Then in the second step. Proof of Theorem 8. xλ = (2λ − x1 .e. that is near x1 = −∞. we see here c(x) = .11) Here. the “Decay at Inﬁnity” is applied. In the ﬁrst step. ∀x ∈ Σλo . By the deﬁnition of uλ .) Let uλ (x) = u(xλ ).8.4. · · · . Recalling the “Maximum Principle Based on Comparison” (Theorem 7. it is easy to verify that τ − wλ = uτ (x) − uτ (x) = τ ψλ−1 (x)wλ (x). Deﬁne Σλ = {x = (x1 . The proof consists of three steps. we apply the maximum principle n+2 to wλ (x).2 is actually included in the more general proof of the ﬁrst part of Theorem 8. (8. it is easy to see that. λ (8. · · · . The proof of Theorem 8.2 (mostly in our own idea). for λ suﬃciently negative. we will show that the solutions can only assume the given form. xn ) ∈ Rn  x1 < λ}. we start from the very left end of our region Rn . say at λ = λo . in the third step. Then by the Mean Value Theorem. Tλ = ∂Σλ and let xλ be the reﬂection point of x about the plane Tλ .12) where ψλ (x) is some number between uλ (x) and u(x). To verify (8.11) holds. ∀x ∈ Σλ .2.2. wλ (x) ≥ 0. x2 . xn ). the only nonnegative solution of (8. We will show that. (See the previous Figure 1. We will show that wλo (x) ≡ 0. how the “Decay at Inﬁnity” principle is applied here.2. Since x1 axis can be chosen in any direction. and wλ (x) = uλ (x) − u(x).10) is identically zero. Write τ = n−2 . i. we will ﬁrst present the proof of Theorem 8. This implies that the solution u is symmetric and monotone decreasing about the plane Tλo . uλ satisﬁes the same equation as u does.2 Applications of the Maximum Principles Based on Comparisons 203 ii) For p < n+2 n−2 . Step 1. And the readers will see vividly.11) for λ suﬃciently negative.2. 
we conclude that u must be radially symmetric and monotone about some point. The plane will stop at some limiting position.
We check the decay rate of τ ψ_λ^{τ−1}(x). By Remark 7.4.2, condition (7.43) is required only at the points x̃ where w_λ is negative. Apparently, at these points, u_λ(x̃) < u(x̃), and hence

 0 ≤ u_λ(x̃) ≤ ψ_λ(x̃) ≤ u(x̃).

By the decay assumption on the solution,

 u(x) = O(1/|x|^{n−2}),

we derive immediately that

 τ ψ_λ^{τ−1}(x̃) = O( (1/|x̃|^{n−2})^{τ−1} ) = O(1/|x̃|⁴).

Here the power 4 is greater than two, which is what (actually more than) we desire for. Therefore, by the 'Decay at Infinity' argument (Corollary 7.4.2), we can apply the 'Maximum Principle Based on Comparison' to conclude that, for λ sufficiently negative (|x̃| sufficiently large), (8.11) holds. This completes the preparation for the moving of planes.

Step 2. Move the Plane to the Limiting Position to Derive Symmetry.

Now we can move the plane T_λ toward the right, i.e., increase the value of λ, as long as the inequality (8.11) holds. Define

 λ_o = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ}.

Obviously, λ_o < +∞, due to the asymptotic behavior of u near x_1 = +∞. We claim that

 w_{λ_o}(x) ≡ 0, ∀ x ∈ Σ_{λ_o}.   (8.13)

Otherwise, by the 'Strong Maximum Principle' on unbounded domains (see Theorem 7.3.3), we have

 w_{λ_o}(x) > 0   (8.14)

in the interior of Σ_{λ_o}. We show that the plane T_{λ_o} can still be moved a small distance to the right; more precisely, there exists a δ_o > 0 such that, for all 0 < δ < δ_o,

 w_{λ_o+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ_o+δ}.   (8.15)

This would contradict with the definition of λ_o, and hence (8.13) must hold.

Recall that in the last section, we used the 'Narrow Region Principle' to derive (8.15). Unfortunately, it can not be applied in this situation, because
the 'narrow region' here is unbounded, and we are not able to guarantee that w_{λ_o} is bounded away from 0 on the left boundary of the 'narrow region'. To overcome this difficulty, we introduce a new function

 w̄_λ(x) = w_λ(x)/φ(x), where φ(x) = 1/|x|^q

with 0 < q < n − 2. Then it is a straightforward calculation to verify that

 −Δw̄_λ = 2 ∇w̄_λ · (∇φ/φ) + ( −Δw_λ + (Δφ/φ) w_λ ) (1/φ).   (8.17)

We have

Lemma 8.2.1 There exists an R_o > 0 (independent of λ), such that if x^o is a minimum point of w̄_λ and w̄_λ(x^o) < 0, then |x^o| < R_o.

We postpone the proof of the Lemma for a moment. Now suppose that (8.15) is violated for any δ > 0. Then there exists a sequence of numbers {δ_i} tending to 0 and, for each i, a corresponding negative minimum x^i of w̄_{λ_o+δ_i}. By Lemma 8.2.1, we have

 |x^i| ≤ R_o, ∀ i = 1, 2, ....

Consequently, there is a subsequence of {x^i} (still denoted by {x^i}) which converges to some point x^o ∈ R^n. Then we have

 ∇w̄_{λ_o}(x^o) = lim_{i→∞} ∇w̄_{λ_o+δ_i}(x^i) = 0   (8.16)

and

 w̄_{λ_o}(x^o) = lim_{i→∞} w̄_{λ_o+δ_i}(x^i) ≤ 0.

However, we already know w̄_{λ_o} ≥ 0; therefore, we must have w̄_{λ_o}(x^o) = 0, and, since w_{λ_o} > 0 in the interior of Σ_{λ_o}, the point x^o must be on the boundary of Σ_{λ_o}. It follows that

 ∇w_{λ_o}(x^o) = w̄_{λ_o}(x^o) ∇φ(x^o) + ∇w̄_{λ_o}(x^o) φ(x^o) = 0 + 0 = 0.   (8.18)

On the other hand, by the Hopf Lemma (see Theorem 7.3.3), the outward normal derivative

 ∂w_{λ_o}/∂ν (x^o) < 0.

This contradicts with (8.18). Now, to complete Step 2, what is left is to prove the Lemma.
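The 'straightforward calculation' for the quotient w̄_λ = w_λ/φ can be spot-checked numerically in one dimension (toy choices w(x) = sin x and φ(x) = x^{−1/2} on [1, 2] are assumptions for the check, not from the text):

```python
import math

# Check the quotient identity: with wb = w / phi,
#   -wb'' = 2 wb' (phi'/phi) + ( -w'' + (phi''/phi) w ) (1/phi).
w = lambda x: math.sin(x)
phi = lambda x: x ** -0.5
wb = lambda x: w(x) / phi(x)
h = 1e-4
d1 = lambda g, x: (g(x + h) - g(x - h)) / (2 * h)          # first derivative
d2 = lambda g, x: (g(x + h) - 2 * g(x) + g(x - h)) / h**2  # second derivative

def residual(x):
    lhs = -d2(wb, x)
    rhs = 2 * d1(wb, x) * d1(phi, x) / phi(x) \
        + (-d2(w, x) + d2(phi, x) / phi(x) * w(x)) / phi(x)
    return abs(lhs - rhs)

max_res = max(residual(1.0 + k / 100) for k in range(101))
```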
Proof of Lemma 8.2.1. Assume that $x^o$ is a negative minimum of $\bar{w}_\lambda$. Then
$$-\Delta \bar{w}_\lambda(x^o) \leq 0 \quad \mbox{and} \quad \nabla \bar{w}_\lambda(x^o) = 0. \qquad (8.19)$$
On the other hand, since $\nabla \bar{w}_\lambda(x^o) = 0$, the equation for $\bar{w}_\lambda$ yields
$$-\Delta \bar{w}_\lambda(x^o) = \left(\tau\,\psi_\lambda^{\tau-1}(x^o) + \frac{\Delta \phi}{\phi}(x^o)\right)\bar{w}_\lambda(x^o).$$
A direct computation gives
$$-\frac{\Delta \phi}{\phi}(x) = \frac{q(n-2-q)}{|x|^2} > 0,$$
while, as we argued in Step 1,
$$c(x^o) := -\tau\,\psi_\lambda^{\tau-1}(x^o) = O\left(\frac{1}{|x^o|^4}\right).$$
Hence, if $|x^o|$ is sufficiently large, then
$$\tau\,\psi_\lambda^{\tau-1}(x^o) + \frac{\Delta \phi}{\phi}(x^o) < 0,$$
and since $\bar{w}_\lambda(x^o) < 0$, we obtain $-\Delta \bar{w}_\lambda(x^o) > 0$. This contradicts (8.19), and hence completes the proof of the Lemma.

Step 3. In the previous two steps, we showed that the positive solutions of (8.10) must be radially symmetric and monotone decreasing about some point in $R^n$. Since the equation is invariant under translation, without loss of generality, we may assume that the solutions are symmetric about the origin. Then they satisfy the following ordinary differential equation:
$$\left\{ \begin{array}{l} -u''(r) - \dfrac{n-1}{r}\,u'(r) = u^\tau, \\ u'(0) = 0, \quad u(0) = [n(n-2)]^{\frac{n-2}{4}}\,\lambda^{-\frac{n-2}{2}}. \end{array} \right.$$
One can verify that
$$u(r) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + r^2)^{\frac{n-2}{2}}}$$
is a solution, and by the uniqueness of solutions of the ODE problem, this is the only one. Therefore, we conclude that every positive solution of (8.10) must assume the form
$$u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x - x^o|^2)^{\frac{n-2}{2}}}$$
for some $\lambda > 0$ and some $x^o \in R^n$. This completes the proof of the Theorem.

Proof of Theorem 8.2.2.

i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8.2.1. The main difference is that we have no decay assumption on the solution $u$ at infinity, hence the method of moving planes cannot be applied directly to $u$. So we first make a Kelvin transform to define a new function
$$v(x) = \frac{1}{|x|^{n-2}}\,u\!\left(\frac{x}{|x|^2}\right).$$
Obviously, $v(x)$ has the desired decay rate $\frac{1}{|x|^{n-2}}$ at infinity, but it may have a singularity at the origin. It is easy to verify that $v$ satisfies the same equation as $u$ does, except at the origin:
$$\Delta v + v^\tau(x) = 0, \quad x \in R^n \setminus \{0\}, \quad n \geq 3. \qquad (8.20)$$
We will apply the method of moving planes to the function $v$, and show that $v$ is radially symmetric and monotone decreasing about some point. If the center point is the origin, then by the definition of $v$, $u$ is also symmetric and monotone decreasing about the origin. If the center point of $v$ is not the origin, then $v$ has no singularity at the origin, and hence, again by the definition of $v$, $u$ has the desired decay at infinity. Then the same argument as in the proof of Theorem 8.2.1 would imply that $u$ is symmetric and monotone decreasing about some point. In either case we arrive at the conclusion.

Define
$$v_\lambda(x) = v(x^\lambda), \quad w_\lambda(x) = v_\lambda(x) - v(x).$$
Because $v(x)$ may be singular at the origin, correspondingly $w_\lambda$ may be singular at the point $x^\lambda = (2\lambda, 0, \cdots, 0)$. Hence, instead of on $\Sigma_\lambda$, we consider $w_\lambda$ on $\tilde{\Sigma}_\lambda = \Sigma_\lambda \setminus \{x^\lambda\}$. In our proof, we treat the singular point carefully: each time we show that the points of interest are away from the singularities, so that we can carry the method of moving planes through to the end, showing the existence of a $\lambda_o$ such that $w_{\lambda_o}(x) \equiv 0$ for $x \in \tilde{\Sigma}_{\lambda_o}$, and that $v$ is strictly increasing in the $x_1$ direction in $\tilde{\Sigma}_{\lambda_o}$.

Step 1. We show that, for $\lambda$ sufficiently negative,
$$w_\lambda(x) \geq 0, \quad \forall\, x \in \tilde{\Sigma}_\lambda. \qquad (8.21)$$
As before, $v_\lambda$ satisfies the same equation as $v$ does, and
$$-\Delta w_\lambda = \tau\,\psi_\lambda^{\tau-1}(x)\,w_\lambda(x),$$
where $\psi_\lambda(x)$ is some number between $v_\lambda(x)$ and $v(x)$. At a negative minimum point $x^o$ of $w_\lambda$ we have $v_\lambda(x^o) < v(x^o)$, hence $0 \leq \psi_\lambda(x^o) \leq v(x^o)$, and by the asymptotic behavior
$$v(x) \sim \frac{1}{|x|^{n-2}},$$
we derive immediately that
$$\tau\,\psi_\lambda^{\tau-1}(x^o) \leq \tau\, v^{\tau-1}(x^o) \sim \left(\frac{1}{|x^o|^{n-2}}\right)^{\tau-1} = \frac{1}{|x^o|^4},$$
i.e., the power of $\frac{1}{|x^o|}$ is greater than two, and we have the desired decay rate for $c(x) := -\tau\,\psi_\lambda^{\tau-1}(x)$. Hence we can apply the "Decay at Infinity" argument to $w_\lambda(x)$. The difference here is that $w_\lambda$ has a singularity at $x^\lambda$; hence we need to show that the minimum of $w_\lambda$ stays away from $x^\lambda$.
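The explicit form claimed at the end of Step 3 can be checked symbolically. Below is a small sketch of our own (not part of the original text), worked out for the dimension $n = 3$:

```python
# Our verification sketch (not from the book): check that
#   u(r) = [n(n-2)λ²]^((n-2)/4) / (λ² + r²)^((n-2)/2)
# solves the radial ODE  u'' + (n-1)/r · u' + u^((n+2)/(n-2)) = 0
# for n = 3, where the critical exponent (n+2)/(n-2) equals 5.
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)
n = 3

u = (n*(n - 2)*lam**2)**sp.Rational(n - 2, 4) / (lam**2 + r**2)**sp.Rational(n - 2, 2)
tau = sp.Rational(n + 2, n - 2)  # critical exponent

residual = sp.diff(u, r, 2) + (n - 1)/r * sp.diff(u, r) + u**tau
print(sp.simplify(residual))
```

The residual simplifies to zero for every $\lambda > 0$, consistent with the one-parameter family of solutions stated in the text.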
We show that, for $\lambda$ sufficiently negative, the minimum of $w_\lambda$ is away from $x^\lambda$. Actually, we will show that: if $\inf_{\tilde{\Sigma}_\lambda} w_\lambda(x) < 0$, then the infimum is achieved in $\Sigma_\lambda \setminus B_1(x^\lambda)$.

To see this, we first note that, for $x \in B_1(0) \setminus \{0\}$,
$$v(x) \geq \min_{\partial B_1(0)} v(x) =: \epsilon_0 > 0, \qquad (8.22)$$
due to the fact that $v(x) > 0$ and $\Delta v \leq 0$. Then let $\lambda$ be so negative that $v(x) \leq \epsilon_0$ for $x \in B_1(x^\lambda)$. This is possible because $v(x) \to 0$ as $|x| \to \infty$. For such $\lambda$, since the reflection of $B_1(x^\lambda)$ about the plane $T_\lambda$ is $B_1(0)$, we obviously have $w_\lambda(x) \geq 0$ on $B_1(x^\lambda) \setminus \{x^\lambda\}$, which implies our claim. Now, similar to Step 1 in the proof of Theorem 8.2.1, we deduce that $w_\lambda(x) \geq 0$ for $\lambda$ sufficiently negative. This completes Step 1.

Step 2. Again, define
$$\lambda_o = \sup\{\lambda \mid w_\lambda(x) \geq 0,\ \forall\, x \in \tilde{\Sigma}_\lambda\}.$$
We will show that: if $\lambda_o < 0$, then $w_{\lambda_o}(x) \equiv 0$, $\forall\, x \in \tilde{\Sigma}_{\lambda_o}$.

The rest of the proof is similar to Step 2 in the proof of Theorem 8.2.1, except that now we need to take care of the singularities. Define $\bar{w}_\lambda = \frac{w_\lambda}{\phi}$ the same way as in the proof of Theorem 8.2.1. Suppose that $w_{\lambda_o}(x) \not\equiv 0$; then by the Maximum Principle we have
$$\bar{w}_{\lambda_o}(x) > 0, \quad \mbox{for } x \in \tilde{\Sigma}_{\lambda_o}.$$
Again let $\{\lambda_k\}$ be a sequence tending to $\lambda_o$ such that $w_{\lambda_k}(x) < 0$ for some $x \in \tilde{\Sigma}_{\lambda_k}$. We need to show that, for each $k$, $\inf_{\tilde{\Sigma}_{\lambda_k}} \bar{w}_{\lambda_k}(x)$ can be achieved at some point $x^k \in \tilde{\Sigma}_{\lambda_k}$, and that the sequence $\{x^k\}$ is bounded away from the singularities $x^{\lambda_k}$ of $w_{\lambda_k}$. Then, through an argument similar to that in the proof of Theorem 8.2.1, one can easily arrive at a contradiction. The needed properties follow from the facts:

a) there exist $\epsilon > 0$ and $\delta > 0$ such that
$$\bar{w}_{\lambda_o}(x) \geq \epsilon, \quad \mbox{for } x \in B_\delta(x^{\lambda_o}) \setminus \{x^{\lambda_o}\};$$

b) $\displaystyle \lim_{\lambda \to \lambda_o}\, \inf_{x \in B_\delta(x^\lambda)} \bar{w}_\lambda(x) \;\geq\; \inf_{x \in B_\delta(x^{\lambda_o})} \bar{w}_{\lambda_o}(x) \;\geq\; \epsilon.$

Fact (a) can be shown by noting that $\bar{w}_{\lambda_o}(x) > 0$ on $\tilde{\Sigma}_{\lambda_o}$ and $\Delta \bar{w}_{\lambda_o} \leq 0$ near $x^{\lambda_o}$, while fact (b) is obvious.

If $\lambda_o < 0$, then $w_{\lambda_o} \equiv 0$ means that $v$ is symmetric about the plane $T_{\lambda_o}$; in particular, $v$ is regular at the origin, hence $u$ has the desired decay at infinity, and the proof of Theorem 8.2.1 applies. If $\lambda_o = 0$, we carry out the above procedure in the opposite direction, namely, we move the plane in the negative $x_1$ direction from positive infinity toward the origin. If our planes $T_\lambda$ stop somewhere before the origin, we derive the symmetry and monotonicity in the $x_1$ direction by the above argument. If they stop at the origin again, we also obtain the symmetry and monotonicity in the $x_1$ direction by combining the two inequalities obtained in the two opposite directions. Since the $x_1$ direction can be chosen arbitrarily, we conclude that the solution $u$ must be radially symmetric about some point.

ii) To show the nonexistence of solutions in the case $p < \frac{n+2}{n-2}$, we notice that after the Kelvin transform, $v$ satisfies
$$\Delta v + \frac{1}{|x|^{n+2-p(n-2)}}\,v^p(x) = 0, \quad x \in R^n \setminus \{0\}.$$
Due to the singularity of the coefficient of $v^p$ at the origin, one can easily see that, if $v$ is not identically zero, then $v$ can only be symmetric about the origin. Hence $u$ must also be symmetric about the origin. Now given any two points $x^1$ and $x^2$ in $R^n$, since equation (8.10) is invariant under translations and rotations, we may assume that the origin is at the midpoint of the line segment joining $x^1$ and $x^2$. Then, from the above argument, we must have $u(x^1) = u(x^2)$. It follows that $u$ is constant. Finally, from equation (8.10), we conclude that $u \equiv 0$. This completes the proof of the Theorem.

8.2.3 Symmetry of Solutions for $-\Delta u = e^u$ in $R^2$

When considering the problem of prescribing Gaussian curvature on two dimensional compact manifolds, if the sequence of approximate solutions "blows up", then by rescaling and taking a limit, one would arrive at the following equation in the entire space $R^2$:
$$\left\{ \begin{array}{l} \Delta u + \exp u = 0, \quad x \in R^2, \\ \displaystyle\int_{R^2} \exp u(x)\, dx < +\infty. \end{array} \right. \qquad (8.23)$$
The classification of the solutions of this limiting equation would provide essential information on the original problems on manifolds; it is also interesting in its own right.

It is known that
$$\phi_{\lambda, x^o}(x) = \ln \frac{32\lambda^2}{(4 + \lambda^2 |x - x^o|^2)^2},$$
for any $\lambda > 0$ and any point $x^o \in R^2$, is a family of explicit solutions. We will use the method of moving planes to prove:

Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect to some point in $R^2$, and hence assumes the form of $\phi_{\lambda, x^o}(x)$.

To apply the method of moving planes, we first need to obtain some decay rate of the solutions near infinity.
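As a quick consistency check of our own (not part of the original text), one can verify symbolically that the family $\phi_{\lambda, x^o}$ quoted above indeed solves $\Delta u + e^u = 0$; by translation invariance it suffices to take $x^o = 0$ and work radially:

```python
# Our verification sketch (not from the book): check that
#   φ_λ(r) = ln( 32λ² / (4 + λ²r²)² )
# satisfies Δφ + e^φ = 0 in R², using the radial Laplacian Δφ = φ'' + φ'/r
# in two dimensions.
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)

phi = sp.log(32*lam**2 / (4 + lam**2*r**2)**2)
residual = sp.diff(phi, r, 2) + sp.diff(phi, r)/r + sp.exp(phi)
print(sp.simplify(residual))
```

The residual vanishes identically in $r$ and $\lambda$, confirming the two-parameter (plus translations) family of explicit solutions.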
Some Global and Asymptotic Behavior of the Solutions

The following Theorem gives the asymptotic behavior of the solutions near infinity, which is essential to the application of the method of moving planes.

Theorem 8.2.5 If $u(x)$ is a solution of (8.23), then as $|x| \to +\infty$,
$$\frac{u(x)}{\ln |x|} \to -\frac{1}{2\pi} \int_{R^2} \exp u(x)\, dx \quad \mbox{uniformly}.$$

This Theorem is a direct consequence of the following two Lemmas.

Lemma 8.2.2 (W. Ding) If $u$ is a solution of
$$-\Delta u = e^u, \quad x \in R^2, \quad \mbox{with} \quad \int_{R^2} \exp u(x)\, dx < +\infty,$$
then
$$\int_{R^2} \exp u(x)\, dx \geq 8\pi.$$

Proof. For $-\infty < t < \infty$, let $\Omega_t = \{x \mid u(x) > t\}$. One can obtain
$$\int_{\Omega_t} \exp u(x)\, dx = -\int_{\Omega_t} \Delta u = \int_{\partial \Omega_t} |\nabla u|\, ds \quad \mbox{and} \quad -\frac{d\,|\Omega_t|}{dt} = \int_{\partial \Omega_t} \frac{ds}{|\nabla u|}.$$
By the Schwarz inequality and the isoperimetric inequality,
$$\int_{\partial \Omega_t} \frac{ds}{|\nabla u|} \cdot \int_{\partial \Omega_t} |\nabla u|\, ds \;\geq\; |\partial \Omega_t|^2 \;\geq\; 4\pi\, |\Omega_t|.$$
Hence
$$-\frac{d\,|\Omega_t|}{dt} \cdot \int_{\Omega_t} \exp u(x)\, dx \geq 4\pi\, |\Omega_t|,$$
and so
$$\frac{d}{dt}\left(\int_{\Omega_t} \exp u(x)\, dx\right)^2 = 2 \exp t \cdot \frac{d\,|\Omega_t|}{dt} \cdot \int_{\Omega_t} \exp u(x)\, dx \leq -8\pi\, |\Omega_t|\, e^t.$$
Integrating from $-\infty$ to $\infty$ gives
$$-\left(\int_{R^2} \exp u(x)\, dx\right)^2 \leq -8\pi \int_{R^2} \exp u(x)\, dx,$$
which implies
$$\int_{R^2} \exp u(x)\, dx \geq 8\pi,$$
as desired.

Lemma 8.2.3 If $u(x)$ is a solution of (8.23), then as $|x| \to +\infty$,
$$\frac{u(x)}{\ln |x|} \to -\frac{1}{2\pi} \int_{R^2} \exp u(x)\, dx \quad \mbox{uniformly}.$$

Proof. By a result of Brezis and Merle [BM], the condition $\int_{R^2} \exp u(x)\, dx < \infty$ implies that the solution $u$ is bounded from above. Let
$$w(x) = \frac{1}{2\pi} \int_{R^2} \left(\ln |x - y| - \ln(|y| + 1)\right) \exp u(y)\, dy.$$
Then it is easy to see that
$$\Delta w(x) = \exp u(x), \quad x \in R^2,$$
and we will show that
$$\frac{w(x)}{\ln |x|} \to \frac{1}{2\pi} \int_{R^2} \exp u(x)\, dx \quad \mbox{uniformly as } |x| \to +\infty. \qquad (8.24)$$
To see this, we need only verify that
$$I := \int_{R^2} \frac{\ln |x - y| - \ln(|y| + 1) - \ln |x|}{\ln |x|}\, e^{u(y)}\, dy \to 0, \quad \mbox{as } |x| \to \infty.$$
We may assume that $|x| \geq 3$. Write $I = I_1 + I_2 + I_3$, where $I_1$, $I_2$, and $I_3$ are the integrals on the three regions
$$D_1 = \{y \mid |x - y| \leq 1\}, \quad D_2 = \{y \mid |x - y| > 1 \ \mbox{and} \ |y| \leq K\}, \quad D_3 = \{y \mid |x - y| > 1 \ \mbox{and} \ |y| > K\},$$
respectively.

a) To estimate $I_1$, we simply notice that
$$|I_1| \leq C \int_{|x - y| \leq 1} e^{u(y)}\, dy - \frac{1}{\ln |x|} \int_{|x - y| \leq 1} \ln |x - y|\; e^{u(y)}\, dy.$$
Then, by the boundedness of $e^{u(y)}$ and of $\int_{R^2} e^{u(y)}\, dy$, we see that $I_1 \to 0$ as $|x| \to \infty$.

b) For each fixed $K$, in the region $D_2$,
$$\frac{\ln |x - y| - \ln(|y| + 1) - \ln |x|}{\ln |x|} \to 0, \quad \mbox{as } |x| \to \infty,$$
hence $I_2 \to 0$.
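It may be worth noting (our remark, not the book's) that the lower bound $8\pi$ in W. Ding's lemma is sharp: for the explicit solutions $\phi_\lambda = \phi_{\lambda, 0}$, the total integral equals exactly $8\pi$. The following symbolic computation sketches this:

```python
# Our sketch (not from the book): for e^{φ_λ(r)} = 32λ²/(4 + λ²r²)², compute
#   ∫_{R²} e^{φ_λ} dx = ∫_0^∞ e^{φ_λ(r)} · 2πr dr
# in polar coordinates; the result is 8π for every λ > 0, so equality holds
# in Ding's lower bound.
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)

e_phi = 32*lam**2 / (4 + lam**2*r**2)**2
total = sp.integrate(e_phi * 2*sp.pi*r, (r, 0, sp.oo))
print(sp.simplify(total))
```

In particular, combined with Theorem 8.2.5, every solution of (8.23) decays at least like $-4 \ln |x|$ at infinity, a fact used repeatedly below.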
c) To see that $I_3 \to 0$, we use the fact that, for $|x - y| > 1$,
$$\left|\frac{\ln |x - y| - \ln(|y| + 1) - \ln |x|}{\ln |x|}\right| \leq C,$$
so that $|I_3| \leq C \int_{|y| > K} e^{u(y)}\, dy$; then let $K \to \infty$. This verifies (8.24).

Now consider the function $v(x) = u(x) + w(x)$. Then
$$\Delta v = -\exp u + \exp u = 0,$$
and
$$v(x) \leq C + C_1 \ln(|x| + 1)$$
for some constants $C$ and $C_1$. Therefore $v$ must be a constant. It follows from this and (8.24) that
$$\frac{u(x)}{\ln |x|} \to -\frac{1}{2\pi} \int_{R^2} \exp u(x)\, dx \quad \mbox{uniformly as } |x| \to +\infty.$$
This completes the proof of our Lemma.

Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain our Theorem 8.2.5.

The Method of Moving Planes and Symmetry

In this subsection, we apply the method of moving planes to establish the symmetry of the solutions. For a given solution, we move the family of lines which are orthogonal to a given direction from negative infinity to a critical position, and then show that the solution is symmetric in that direction about the critical position. We also show that the solution is strictly increasing before the critical position. Since the direction can be chosen arbitrarily, we conclude that the solution must be radially symmetric about some point. Finally, by the uniqueness of solutions of the following O.D.E. problem,
$$\left\{ \begin{array}{l} u''(r) + \dfrac{1}{r}\,u'(r) = f(u), \\ u'(0) = 0, \quad u(0) = 1, \end{array} \right.$$
we see that the solutions of (8.23) must assume the form $\phi_{\lambda, x^o}(x)$.

Assume that $u(x)$ is a solution of (8.23). Without loss of generality, we show the monotonicity and symmetry of the solution in the $x_1$ direction. For $\lambda \in R^1$, let
$$\Sigma_\lambda = \{(x_1, x_2) \mid x_1 < \lambda\} \quad \mbox{and} \quad T_\lambda = \partial \Sigma_\lambda = \{(x_1, x_2) \mid x_1 = \lambda\}.$$
Let
$$x^\lambda = (2\lambda - x_1, x_2)$$
be the reflection point of $x = (x_1, x_2)$ about the line $T_\lambda$ (see the previous Figure 1). Define
$$w_\lambda(x) = u(x^\lambda) - u(x).$$
A straightforward calculation shows that
$$\Delta w_\lambda + (\exp \psi_\lambda(x))\, w_\lambda(x) = 0, \qquad (8.25)$$
where $\psi_\lambda(x)$ is some number between $u(x)$ and $u_\lambda(x) := u(x^\lambda)$.

Step 1. We show that, for $\lambda$ sufficiently negative,
$$w_\lambda(x) \geq 0, \quad \forall\, x \in \Sigma_\lambda.$$
As in the previous examples, the key is to check the decay rate of $c(x) := -e^{\psi_\lambda(x)}$ at the points $x^o$ where $w_\lambda$ is negative. At such a point, $u(x^{o\lambda}) < u(x^o)$, and hence $\psi_\lambda(x^o) \leq u(x^o)$. By the asymptotic behavior of the solution $u$ near infinity (see Lemma 8.2.3), together with Lemma 8.2.2, we have
$$e^{\psi_\lambda(x^o)} = O\left(\frac{1}{|x^o|^4}\right). \qquad (8.26)$$
This is what we desire.

Notice that we are now in dimension two, while the key auxiliary function $\phi(x) = \frac{1}{|x|^q}$ used earlier requires $0 < q < n - 2$; hence it does not work here. As a modification, we choose
$$\phi(x) = \ln(|x| - 1).$$
As before, for each fixed $\lambda$, we introduce the function
$$\bar{w}_\lambda(x) = \frac{w_\lambda(x)}{\phi(x)}.$$
It satisfies
$$\Delta \bar{w}_\lambda + 2\,\nabla \bar{w}_\lambda \cdot \frac{\nabla \phi}{\phi} + \left(e^{\psi_\lambda(x)} + \frac{\Delta \phi}{\phi}\right)\bar{w}_\lambda = 0. \qquad (8.28)$$
It is easy to verify that
$$\frac{\Delta \phi}{\phi}(x) = \frac{-1}{|x|\,(|x| - 1)^2\, \ln(|x| - 1)}.$$
It follows from this and (8.26) that
$$e^{\psi_\lambda(x^o)} + \frac{\Delta \phi}{\phi}(x^o) < 0 \quad \mbox{for sufficiently large } |x^o|. \qquad (8.29)$$
Moreover,
$$\bar{w}_\lambda(x) = \frac{u(x^\lambda) - u(x)}{\ln(|x| - 1)} \to 0, \quad \mbox{as } |x| \to \infty.$$
Now, if $w_\lambda(x) < 0$ somewhere, then there exists a point $x^o$ which is a negative minimum of $\bar{w}_\lambda(x)$. By the "Decay at Infinity" argument, one can easily derive a contradiction from (8.28) and (8.29) in virtue of Theorem 8.2.5. Therefore $w_\lambda(x) \geq 0$ in $\Sigma_\lambda$ for $\lambda$ sufficiently negative. This completes Step 1.
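The two computed facts above — the formula for $\Delta\phi/\phi$ and its decay rate — can be confirmed symbolically. A small sketch of our own (not part of the original text):

```python
# Our verification sketch (not from the book): for φ(x) = ln(|x| - 1) in R²
# (with r = |x| > 1), the radial Laplacian gives
#   Δφ = φ'' + φ'/r = -1/( r (r-1)² ),
# hence Δφ/φ = -1/( r (r-1)² ln(r-1) ), which decays like 1/(r³ ln r) —
# strictly slower than the O(1/r⁴) decay of e^{ψ_λ}, exactly as Step 1 needs.
import sympy as sp

r = sp.symbols('r', positive=True)

phi = sp.log(r - 1)
lap_phi = sp.simplify(sp.diff(phi, r, 2) + sp.diff(phi, r)/r)
print(lap_phi)
```

Since $e^{\psi_\lambda(x^o)} = O(1/|x^o|^4) = o\big(1/(|x^o|^3 \ln |x^o|)\big)$, the negative term $\Delta\phi/\phi$ dominates at infinity, which is the content of (8.29).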
Step 2. Define
$$\lambda_o = \sup\{\lambda \mid w_\lambda(x) \geq 0,\ \forall\, x \in \Sigma_\lambda\}.$$
We show that: if $\lambda_o < 0$, then $w_{\lambda_o}(x) \equiv 0$, $\forall\, x \in \Sigma_{\lambda_o}$. The argument is entirely similar to that in Step 2 of the proof of Theorem 8.2.2, except that here we use $\phi(x) = \ln(-x_1 + 2)$. This completes the proof of the Theorem.

8.3 Method of Moving Planes in a Local Way

8.3.1 The Background

Let $M$ be a Riemannian manifold of dimension $n \geq 3$ with metric $g_o$. Given a function $R(x)$ on $M$, one interesting problem in differential geometry is to find a metric $g$ that is pointwise conformal to the original metric $g_o$ and has scalar curvature $R(x)$. This is equivalent to finding a positive solution of the semilinear elliptic equation
$$-\frac{4(n-1)}{n-2}\,\Delta_o u + R_o(x)\,u = R(x)\,u^{\frac{n+2}{n-2}}, \quad u > 0, \quad x \in M, \qquad (8.30)$$
where $\Delta_o$ is the Beltrami-Laplace operator of $(M, g_o)$ and $R_o(x)$ is the scalar curvature of $g_o$.

In recent years, there has been a lot of progress in understanding equation (8.30). When $(M, g_o)$ is the standard sphere $S^n$, the equation becomes
$$-\Delta u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}, \quad u > 0, \quad x \in S^n. \qquad (8.31)$$
It is the so-called critical case, where the lack of compactness occurs. In this case, the well-known Kazdan-Warner condition gives rise to many examples of $R$ for which there is no solution. In the last few years, a tremendous amount of effort has been put into finding the existence of solutions, and many interesting results have been obtained (see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).

One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. For equation (8.31) on $S^n$, to establish a priori estimates, a useful technique employed by most authors was a 'blowing-up' analysis. However, it does not work near the points where $R(x) = 0$. Due to this limitation, people had to assume that $R$ was positive and bounded away from $0$. This technical assumption became somewhat standard and has been used by many authors for quite a few years; for example, see articles by Bahri
[Ba], Bahri and Coron [BC], Chang and Yang [CY1], Chang, Gursky, and Yang [CY2], Li [Li2] [Li3], and Schoen and Zhang [SZ].

Then, for functions $R$ with changing signs, are there any a priori estimates? In [CL6], we answered this question affirmatively. We removed the above well-known technical assumption and obtained a priori estimates on the solutions of (8.31) for functions $R$ which are allowed to change signs. In fact, we derived a priori bounds for the solutions of the more general equations
$$-\Delta u + \frac{n(n-2)}{4}\,u = R(x)\,u^p, \quad x \in S^n, \qquad (8.32)$$
for any exponent $p$ between $1 + \frac{1}{A}$ and $A$, where $A$ is an arbitrary positive number:

Proposition 8.3.1 Assume $R \in C^{2,\alpha}(S^n)$ and $|\nabla R(x)| \geq \beta_0$ for any $x$ with $|R(x)| \leq \delta_0$. Then there are positive constants $\epsilon$ and $C$, depending only on $\beta_0$, $\delta_0$, and $\|R\|_{C^{2,\alpha}(S^n)}$, such that for any solution $u$ of (8.32), we have
$$u \leq C \quad \mbox{in the region where } |R(x)| \leq \epsilon.$$

In this proposition, in the case $p = \frac{n+2}{n-2}$, we essentially required $R(x)$ to grow linearly near its zeros, which seems a little restrictive. Later, Lin [Lin1] weakened the condition by assuming only polynomial growth of $R$ near its zeros. To prove his result, he first established a Liouville type theorem in $R^n$, and then used a blowing up argument.

Recently, we sharpened our method of moving planes to further weaken the condition and to obtain a more general result:

Theorem 8.3.1 Let $M$ be a locally conformally flat manifold. Let
$$\Gamma = \{x \in M \mid R(x) = 0\}.$$
Assume that $\Gamma \in C^{2,\alpha}$ and $R \in C^{2,\alpha}(M)$, satisfying
$$\frac{R(x)}{|\nabla R(x)|} \ \mbox{ is continuous near } \Gamma, \qquad (8.33)$$
and
$$\frac{\partial R}{\partial \nu} \leq 0,$$
where $\nu$ is the outward normal (pointing to the region where $R$ is negative) of $\Gamma$. Let $D$ be any compact subset of $M$, and let $p$ be any number greater than $1$. Then the solutions of the equation
$$-\Delta_o u + R_o(x)\,u = R(x)\,u^p, \quad u > 0, \quad \mbox{in } M \qquad (8.34)$$
are uniformly bounded near $\Gamma \cap D$. Here the bound is independent of $p$.

To prove this theorem, by introducing a new auxiliary function, we used the 'method of moving planes' in a local way to control the growth of the solutions in the region where $R$ is small, and thus obtained an a priori estimate.
One can see that our condition (8.33) is weaker than Lin's polynomial growth restriction. To illustrate, one may simply look at functions $R(x)$ which grow like $\exp\{-\frac{1}{d(x)}\}$ near $\Gamma$, where $d(x)$ is the distance from the point $x$ to $\Gamma$. Obviously, these kinds of functions satisfy our condition (8.33), but are not of polynomial growth. Moreover, our restriction on the exponent is much weaker: $p$ can be any number greater than $1$, and the bound is independent of $p$.

Since $M$ is a locally conformally flat manifold, a local flat metric can be chosen so that equation (8.34) in a compact set $D \subset M$ can be reduced to
$$-\Delta u = R(x)\,u^p, \quad p > 1. \qquad (8.35)$$
This is the equation we will consider throughout the section.

The proof of the Theorem is divided into two parts. We first derive the bound in the region(s) where $R$ is negatively bounded away from zero. Then, based on this bound, we use the method of moving planes to estimate the solutions in the region(s) surrounding the set $\Gamma$.

8.3.2 The A Priori Estimates

Now we estimate the solutions of equation (8.35) and prove Theorem 8.3.1.

Part I. In the region(s) where $R$ is negative.

In this part, we derive estimates in the region(s) where $R$ is negatively bounded away from zero. We prove:

Theorem 8.3.2 The solutions of (8.35) in a compact set $K$ in $R^n$ are uniformly bounded in the region
$$\{x \in K : R(x) \leq 0 \ \mbox{and} \ \mbox{dist}(x, \Gamma) \geq \delta\}.$$
The bound depends only on $\delta$, $D$, and the lower bound of $R$.

The bound comes from a standard elliptic estimate for subharmonic functions and an integral bound on the solutions. We introduce the following lemma of Gilbarg and Trudinger [GT].

Lemma 8.3.1 (See Theorem 9.20 in [GT].) Let $u \in W^{2,n}(D)$ and suppose $\Delta u \geq 0$. Then, for any ball $B_{2r}(x) \subset D$ and any $q > 0$, we have
$$\sup_{B_r(x)} u \leq C \left(\frac{1}{r^n} \int_{B_{2r}(x)} (u^+)^q\, dV\right)^{\frac{1}{q}}, \qquad (8.36)$$
where $C = C(n, q)$.

In virtue of this Lemma, to establish an a priori bound of the solutions, one need only obtain an integral bound. We prove:

Lemma 8.3.2 Let $B$ be any ball in $K$ that is bounded away from $\Gamma$. Then there is a constant $C$ such that
$$\int_B u^p\, dx \leq C. \qquad (8.37)$$

Proof. Let $\Omega$ be an open neighborhood of $K$ such that dist$(\partial \Omega, K) \geq 1$. Let $\varphi$ be the first eigenfunction of $-\Delta$ in $\Omega$:
$$\left\{ \begin{array}{ll} -\Delta \varphi = \lambda_1 \varphi(x), & x \in \Omega, \\ \varphi(x) = 0, & x \in \partial \Omega. \end{array} \right.$$
Let
$$\mbox{sgn} R = \left\{ \begin{array}{rl} 1 & \mbox{if } R(x) > 0, \\ 0 & \mbox{if } R(x) = 0, \\ -1 & \mbox{if } R(x) < 0. \end{array} \right.$$
Let $\alpha = 1 + 2p$. Multiply both sides of equation (8.35) by $\varphi^\alpha R^\alpha\, \mbox{sgn} R$, and then integrate. Through a straightforward calculation, we have
$$\int_\Omega \varphi^\alpha |R|^{1+\alpha}\, u^p\, dx \leq C_1 \int_\Omega \varphi^{\alpha-2} |R|^{\alpha-2}\, u\, dx \leq C_2 \int_\Omega \varphi^{\frac{\alpha}{p}} |R|^{\alpha-2}\, u\, dx.$$
Applying the H\"older inequality on the right-hand side, and taking into account that $\alpha > 2$ and that $\varphi$ and $R$ are bounded in $\Omega$, we arrive at
$$\int_\Omega \varphi^\alpha |R|^{1+\alpha}\, u^p\, dx \leq C_3. \qquad (8.38)$$
Now (8.37) follows suit, since in $B$ the function $\varphi$ is bounded from below and $|R|$ is bounded away from zero. This completes the proof of the Lemma.

The Proof of Theorem 8.3.2. It is a direct consequence of Lemma 8.3.1 and Lemma 8.3.2.

Part II. In the region where $R$ is small.

In this part, we estimate the solutions in a neighborhood of $\Gamma$, where $R$ is small. The key ingredient is to use the method of moving planes to show that, near $\Gamma$, along any direction that is close to the direction of $\nabla R$, the values of the solution $u$ are comparable. This result, together with the boundedness of the integral of $u^p$ in a small neighborhood of $x^0$, leads to an a priori bound of the solutions.

The Proof of Theorem 8.3.1. Let $u$ be a given solution of (8.35), and let $x^0 \in \Gamma$. It suffices to show that $u$ is bounded in a neighborhood of $x^0$. Since the method of moving planes cannot be applied directly to $u$, we construct some auxiliary functions. The proof consists of the following four steps.

Step 1. Transforming the region.

Make a translation, a rotation, or, if necessary, a Kelvin transform. Denote the new system by $x = (x_1, y)$ with $y = (x_2, \cdots, x_n)$, with the $x_1$ axis pointing to the right.
In the new system, $x^0$ becomes the origin, the image of the set $\Omega^+ := \{x \mid R(x) > 0\}$ is contained in the left of the $y$-plane, and the image of $\Gamma$ is tangent to the $y$-plane at the origin. The image of $\Gamma$ is also uniformly convex near the origin. Let $x_1 = \phi(y)$ be the equation of the image of $\Gamma$ near the origin. (See Figure 6.)

Figure 6. The region $D$ enclosed by $\partial_1 D$ and $\partial_2 D$; $R(x) > 0$ to the left of the image of $\Gamma$, $R(x) < 0$ to the right, with the plane $T_\lambda$ moving in from the right.

Let
$$\partial_1 D := \{x \mid x_1 = \phi(y) + \epsilon,\ x_1 \geq -2\epsilon\}.$$
The intersection of $\partial_1 D$ with the plane $\{x \in R^n \mid x_1 = -2\epsilon\}$ encloses a part of that plane, which we denote by $\partial_2 D$. Let $D$ be the region enclosed by the two surfaces $\partial_1 D$ and $\partial_2 D$. The small positive number $\epsilon$ is chosen to ensure that
(a) $\partial_1 D$ is uniformly convex;
(b) $\frac{\partial R}{\partial x_1} \leq 0$ in $D$;
(c) $\frac{R(x)}{|\nabla R(x)|}$ is small in $D$ (cf. condition (8.33)).

Step 2. Introducing an auxiliary function.

Let $m = \max_{\partial_1 D} u$. Let $\tilde{u}$ be a $C^2$ extension of $u$ from $\partial_1 D$ to the entire $\partial D$, such that
$$0 \leq \tilde{u} \leq 2m \quad \mbox{and} \quad |\nabla \tilde{u}| \leq Cm.$$
Let $w$ be the harmonic function
$$\left\{ \begin{array}{ll} \Delta w = 0, & x \in D, \\ w = \tilde{u}, & x \in \partial D. \end{array} \right.$$
Then, by the maximum principle and a standard elliptic estimate,
$$0 \leq w(x) \leq 2m \quad \mbox{and} \quad \|w\|_{C^1(D)} \leq C_1 m.$$
Introduce a new function
$$v(x) = u(x) - w(x) + C_0 m\,[\epsilon + \phi(y) - x_1] + m\,[\epsilon + \phi(y) - x_1]^2,$$
which is well defined in $D$. Here $C_0$ is a large constant to be determined later. One can easily verify that $v(x)$ satisfies the following:
$$\left\{ \begin{array}{ll} \Delta v + \psi(y) + f(x, v) = 0, & x \in D, \\ v(x) = 0, & x \in \partial_1 D, \end{array} \right. \qquad (8.39)$$
where
$$\psi(y) = -C_0 m\,\Delta \phi(y) - m\,\Delta\big([\epsilon + \phi(y)]^2\big) - 2m$$
and
$$f(x, v) = 2m\,x_1\,\Delta \phi(y) + R(x)\left\{v + w(x) - C_0 m\,[\epsilon + \phi(y) - x_1] - m\,[\epsilon + \phi(y) - x_1]^2\right\}^p.$$

At this step, we show that
$$v(x) > 0, \quad \forall\, x \in D.$$
We consider the following two cases.

Case 1) $\phi(y) + \frac{\epsilon}{2} \leq x_1 \leq \phi(y) + \epsilon$. In this region, $R(x)$ is negative and bounded away from $0$. By Theorem 8.3.2 and the standard elliptic estimate, we have
$$\left|\frac{\partial u}{\partial x_1}\right| \leq C_2 m$$
for some positive constant $C_2$. Hence one can choose $C_0$ sufficiently large, so that
$$\frac{\partial v}{\partial x_1} \leq (C_1 + C_2 - C_0)\,m - 2m\,(\epsilon + \phi(y) - x_1) < 0.$$
Noticing that $(\epsilon + \phi(y) - x_1) \geq 0$, it follows from this and (8.39) that $v(x) > 0$ in this region.

Case 2) $-2\epsilon \leq x_1 \leq \phi(y) + \frac{\epsilon}{2}$. In this region, we have
$$v(x) \geq -w(x) + C_0 m\,\frac{\epsilon}{2} \geq -2m + C_0 m\,\frac{\epsilon}{2}.$$
Again, choosing $C_0$ sufficiently large (depending only on $\epsilon$), we arrive at $v(x) > 0$.
Step 3. Applying the method of moving planes to $v$ in the $x_1$ direction.

In the previous step, we showed that $v(x) \geq 0$ in $D$ and $v(x) = 0$ on $\partial_1 D$. This lays a foundation for the moving of planes. Now we can start from the rightmost tip of the region $D$ and move a plane perpendicular to the $x_1$ axis toward the left. For $\lambda \leq \epsilon$, let
$$T_\lambda = \{x \in R^n \mid x_1 = \lambda\}, \quad \Sigma_\lambda = \{x \in D \mid x_1 \geq \lambda\},$$
and let $x^\lambda = (2\lambda - x_1, y)$ be the reflection of the point $x = (x_1, y)$ with respect to the plane $T_\lambda$. Let $v_\lambda(x) = v(x^\lambda)$ and
$$w_\lambda(x) = v_\lambda(x) - v(x).$$
We are going to show that
$$w_\lambda(x) \geq 0, \quad \forall\, x \in \Sigma_\lambda. \qquad (8.40)$$
We want to show that the above inequality holds for $-\epsilon_1 \leq \lambda \leq \epsilon$, for some $0 < \epsilon_1 < \epsilon$. This choice of $\epsilon_1$ is to guarantee that, when the plane $T_\lambda$ moves to $\lambda = -\epsilon_1$, the reflection of $\Sigma_\lambda$ about the plane $T_\lambda$ still lies in the region $D$.

(8.40) is true when $\lambda$ is less than but close to $\epsilon$, because $\Sigma_\lambda$ is then a narrow region and $f(x, v)$ is Lipschitz continuous in $v$ (actually, $\frac{\partial f}{\partial v} = p\,R(x)\{v + w - C_0 m[\epsilon + \phi(y) - x_1] - m[\epsilon + \phi(y) - x_1]^2\}^{p-1}$); for a detailed argument, the reader may review the first example treated earlier.

Now we decrease $\lambda$, that is, we move the plane $T_\lambda$ toward the left, as long as inequality (8.40) remains true. It is easy to see that $v_\lambda$ satisfies the same equation as $v$ with $f(x, v)$ replaced by $f(x^\lambda, v_\lambda)$, and hence $w_\lambda$ satisfies
$$-\Delta w_\lambda = f(x^\lambda, v_\lambda) - f(x, v) = \big[f(x^\lambda, v_\lambda) - f(x^\lambda, v)\big] + \big[f(x^\lambda, v) - f(x, v)\big],$$
for $x = (x_1, y) \in D$ with $x_1 > \lambda > -\epsilon_1$. We will show that the moving of planes can be carried on provided
$$f(x^\lambda, v) - f(x, v) \geq 0. \qquad (8.41)$$
Indeed, if (8.41) holds, then by the Lipschitz continuity of $f$ in $v$ and the mean value theorem,
$$-\Delta w_\lambda - \frac{\partial f}{\partial v}(x^\lambda, \xi_\lambda)\,w_\lambda(x) \geq 0, \qquad (8.42)$$
for some $\xi_\lambda$ between $v_\lambda$ and $v$. This enables us to apply the maximum principle to $w_\lambda$. Let
$$\lambda_o = \inf\{\lambda \mid w_\lambda(x) \geq 0,\ \forall\, x \in \Sigma_\lambda\}.$$
If $\lambda_o > -\epsilon_1$, then, since $v(x) > 0$ in $D$, similar to the argument in the examples treated earlier, by virtue of (8.42) and the maximum principle we have
$$w_{\lambda_o}(x) > 0, \quad \forall\, x \in \Sigma_{\lambda_o} \setminus T_{\lambda_o}, \quad \mbox{and} \quad \frac{\partial w_{\lambda_o}}{\partial x_1}(x) < 0, \quad \forall\, x \in T_{\lambda_o} \cap \Sigma_{\lambda_o},$$
and one can derive a contradiction with the definition of $\lambda_o$.

It remains to verify (8.41). It is elementary to see that (8.41) holds if
$$\frac{\partial f}{\partial x_1} \leq 0 \quad \mbox{in the set} \quad \{x \in D \mid R(x) < 0 \ \mbox{or} \ x_1 > -2\epsilon_1\}.$$
To estimate $\frac{\partial f}{\partial x_1}$, we use
$$\frac{\partial f}{\partial x_1} = 2m\,\Delta \phi(y) + u^{p-1}\left\{\frac{\partial R}{\partial x_1}\,u + R(x)\,p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m\,(\epsilon + \phi(y) - x_1)\right]\right\}.$$
First, we notice that the uniform convexity of the image of $\Gamma$ near the origin implies
$$\Delta \phi(y) \leq -a_0 < 0 \qquad (8.43)$$
for some constant $a_0$. We consider the following two possibilities.

a) $R(x) \leq 0$. Choose $C_0$ sufficiently large, so that
$$\frac{\partial w}{\partial x_1} + C_0 m \geq 0.$$
Noticing that $\frac{\partial R}{\partial x_1} \leq 0$ and $(\epsilon + \phi(y) - x_1) \geq 0$ in $D$, we have $\frac{\partial f}{\partial x_1} \leq 0$.

b) $R(x) > 0$. We write
$$\frac{\partial f}{\partial x_1} = 2m\,\Delta \phi(y) + u^{p-1}\,\frac{\partial R}{\partial x_1}\left\{u + \left(R(x)\Big/\frac{\partial R}{\partial x_1}\right) p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m\,(\epsilon + \phi(y) - x_1)\right]\right\}.$$
In the part where $u \geq 1$, we use $u$ to control the term
$$\left(R(x)\Big/\frac{\partial R}{\partial x_1}\right) p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m\,(\epsilon + \phi(y) - x_1)\right]:$$
choose $\epsilon_1$ sufficiently small, such that $\left|\frac{\partial R}{\partial x_1}\right| \sim |\nabla R|$.
Then, by condition (8.33), we can make $R(x)\big/\frac{\partial R}{\partial x_1}$ arbitrarily small by a small choice of $\epsilon_1$, so that the brace remains nonnegative and hence $\frac{\partial f}{\partial x_1} \leq 0$. In the part where $u \leq 1$, we can use $2m\,\Delta \phi(y)$ to control
$$R\,p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m\,(\epsilon + \phi(y) - x_1)\right],$$
in virtue of (8.43); here again we use the smallness of $R$ in a small neighborhood of the origin. Therefore, we arrive at $\frac{\partial f}{\partial x_1} \leq 0$.

So far, our conclusion is: the method of moving planes can be carried on up to $\lambda = -\epsilon_1$. More precisely, by a similar argument, one can show that (8.40) is true for any $\lambda$ between $-\epsilon_1$ and $\epsilon$. Furthermore, a similar argument shows that this remains true if we rotate the $x_1$ axis by a small angle.

Step 4. Deriving the a priori bound.

Inequality (8.40) implies that, for any $\lambda$ between $-\epsilon_1$ and $\epsilon$, the function $v(x)$ is monotone decreasing in the $x_1$ direction. Since the same holds for directions deviating from the $x_1$ axis by a small angle, for any point $x^0 \in \Gamma$ near the origin, one can find a cone $\Delta_{x^0}$, with $x^0$ as its vertex and staying to the left of $x^0$, such that
$$v(x) \geq v(x^0), \quad \forall\, x \in \Delta_{x^0}. \qquad (8.44)$$
Noticing that $w(x)$ is bounded in $D$, (8.44) leads immediately to
$$u(x) + C_6 \geq u(x^0), \quad \forall\, x \in \Delta_{x^0}. \qquad (8.45)$$
More generally, by a similar argument, one can show that (8.45) is true for any point $x^0$ in a small neighborhood of $\Gamma$. Furthermore, in virtue of condition (8.33), the intersection of the cone $\Delta_{x^0}$ with the set $\{x \mid R(x) \geq \frac{\delta_0}{2}\}$ has positive measure, and the lower bound of the measure depends only on $\delta_0$ and the $C^1$ norm of $R$. Now the a priori bound of the solutions is a consequence of (8.45) and an integral bound on $u$. This completes the proof of Theorem 8.3.1.

8.4 Method of Moving Spheres

8.4.1 The Background

Given a function $K(x)$ on the two dimensional standard sphere $S^2$, the well-known Nirenberg problem is to find conditions on $K(x)$ so that it can be realized as the Gaussian curvature of some conformally related metric. This is equivalent to solving the following nonlinear elliptic equation on $S^2$:
$$-\Delta u + 1 = K(x)\,e^{2u}, \qquad (8.46)$$
where $\Delta$ is the Laplacian operator associated to the standard metric. For a unifying description, we let $R(x) = 2K(x)$ be the scalar curvature; then (8.46) is equivalent to
$$-\Delta u + 2 = R(x)\,e^u. \qquad (8.47)$$

On the higher dimensional sphere $S^n$, a similar problem was raised by Kazdan and Warner: which functions $R(x)$ can be realized as the scalar curvature of some conformally related metric? The problem is equivalent to the existence of positive solutions of another elliptic equation:
$$-\Delta u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}. \qquad (8.48)$$
Both equations are so-called 'critical', in the sense that a lack of compactness occurs. Then, for which $R$ can one solve the equations? What are the necessary and sufficient conditions for the equations to have a solution? These are very interesting problems in geometry.

Besides the obvious necessary condition that $R(x)$ be positive somewhere, there are other well-known obstructions, found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions are:
$$\int_{S^n} X(R)\, dA_g = 0, \qquad (8.49)$$
where $dA_g$ is the volume element of the metric $g = e^u g_o$ for $n = 2$ and $g = u^{\frac{4}{n-2}} g_o$ for $n \geq 3$, respectively, and the vector field $X$ in (8.49) is a conformal Killing field associated to the standard metric $g_o$ on $S^n$. We call these necessary conditions Kazdan-Warner type conditions. They give rise to many examples of $R(x)$ for which (8.47) and (8.48) have no solution. In particular, a monotone rotationally symmetric function $R$ admits no solution.

Then one may naturally ask: 'Are the necessary conditions of Kazdan-Warner type also sufficient?' This question had been open for many years. In [CL3], we gave a negative answer to it.

To find necessary and sufficient conditions, it is natural to begin with functions having certain symmetries. A pioneering work in this direction is [Mo] by Moser. He showed that, for even functions $R$ on $S^2$, the necessary and sufficient condition for (8.47) to have a solution is that $R$ be positive somewhere.

In recent years, various sufficient conditions were obtained by many authors (for example, see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS] [ES] [Ha] [Ho] [KW1] [KW2] [Li1] [Li2] [Mo] and the references therein). However, there are still gaps between those sufficient conditions and the necessary ones.
3 Let R be rotationally symmetric. we could show this for a family of such functions. pointed out that the Kazdan. we were not able to obtain the counter part of this result in higher dimensions. However.4. A simple question was asked by Kazdan [Ka]: What are the necessary and suﬃcient conditions for rotationally symmetric functions? To be more precise.50). Assume that i) R is nondegenerate ii) R changes signs in the region where R > 0 Then problem (8.1 Let R be rotationally symmetric. for which problem (8.224 8 Methods of Moving Planes and of Moving Spheres Note that. First.48) has no solution at all. Proposition 8.50) for which problem (8. Xu and Yang found a family of rotationally symmetric functions R satisfying conditions (8. And thus. (1. In our earlier paper [CL3]. for rotationally symmetric functions R = R(θ) where θ is the distance from the north pole.50) In [XY].48) has no rotationally symmetric solution.51) Then problem (8. Although we believed that for all such functions R. Even though one didn’t know whether there were any other nonsymmetric solutions for these functions R . and R ≡ C. This generalizes Xu and Yang’s result.48) to have a solution.48) has no solution at all. there is no solution at all. since their family of functions R satisfy (8.2) is satisﬁed automatically. because it suggests that (8. In [XY]. we present a class of rotationally symmetric functions satisfying (8.50). this result is still very interesting.2 There exists a family of functions R satisfying the KazdanWarner type conditions (8.48) to have a solution. Xu and Yang also proved: Proposition 8. Thus one can interpret Moser’s result as: In the class of even functions. we proved the nonexistence of rotationally symmetric solutions: Proposition 8. for which the problem (8. (8.52) .48) and (8. we were not able to prove it at that time.4.51). At that time.50) may not be the suﬃcient condition. in the class of even functions. We also cover the higher dimensional cases. 
The above results, and many other existence results, tend to lead people to believe that what really counts is whether R′ changes signs in the region where R > 0. And thus we conjectured in [CL3] that, for rotationally symmetric R,

R > 0 somewhere and R′ changes signs in the region where R > 0   (8.51)

would be the necessary and sufficient condition for (8.47) and (8.48) to have a solution.

The main objective of this section is to present the proof of the necessary part of the conjecture, which appeared in our later paper [CL4]. In [CL4], we prove:

Theorem 8.4.1 Let R be continuous and rotationally symmetric. If R is monotone in the region where R > 0 and R ≢ C, then problems (8.47) and (8.48) have no solution at all.

This generalizes Xu and Yang's result, since their family of functions R satisfies the assumptions of Theorem 8.4.1; their method, on the other hand, only works for a special family of functions satisfying (8.50). Our result also covers the higher dimensional cases.
Combining our Theorem 8.4.1 with Xu and Yang's Proposition 8.4.3, one can obtain a necessary and sufficient condition in the nondegenerate case:

Corollary 8.4.1 Let R be rotationally symmetric and nondegenerate, in the sense that R″ ≠ 0 whenever R′ = 0. Then the necessary and sufficient condition for (8.48) to have a solution is

R > 0 somewhere and R′ changes signs in the region where R > 0.   (8.52)

We also generalize this result to a family of nonsymmetric functions:

Theorem 8.4.2 Let R be a continuous function. Let B be a geodesic ball centered at the North Pole x_{n+1} = 1. Assume that

i) R(x) ≥ 0 and R is nondecreasing in the x_{n+1} direction, for x ∈ B;
ii) R(x) ≤ 0, for x ∈ S^n \ B.   (8.53)

Then problems (8.47) and (8.48) have no solution at all.
8.4.2 Necessary Conditions

In this subsection, we prove Theorems 8.4.1 and 8.4.2. Instead of showing the symmetry of the solutions, we introduce a new idea: we obtain a comparison formula which leads to a direct contradiction. We call it the method of 'moving spheres', and it works in all dimensions. In an earlier work [P], a similar method was used by P. Padilla to show the radial symmetry of the solutions for some nonlinear Dirichlet problems on annuluses. In [CL3], we used the method of 'moving planes' to show that the solutions of (8.47) and (8.48) are rotationally symmetric; that method, however, does not work in higher dimensions.

For convenience, we make a stereographic projection from S^n to R^n. Then it is equivalent to consider the following equations in Euclidean space R^n:
−Δu = R(r) e^u ,  x ∈ R² ,   (8.54)

and

−Δu = R(r) u^τ ,  u > 0 ,  x ∈ R^n , n ≥ 3 ,   (8.55)

with appropriate asymptotic growth of the solutions at infinity:

u ∼ −4 ln |x| for n = 2 ,  u ∼ C |x|^{2−n} for n ≥ 3 ,   (8.56)

for some C > 0. Here Δ is the Euclidean Laplacian, r = |x|, and τ = (n+2)/(n−2) is the so-called critical Sobolev exponent. The function R(r) is the projection of the original R in equations (8.47) and (8.48); this projection does not change the monotonicity of the function. The function R is also bounded and continuous.

For a radial function R satisfying the assumptions of Theorem 8.4.1, there are three possibilities:

i) R is nonpositive everywhere;
ii) R ≢ C is nonnegative everywhere and monotone;
iii) R > 0 and nonincreasing for r < r_o, R ≤ 0 for r > r_o, or vice versa.

The first two cases violate the Kazdan-Warner type conditions, hence there is no solution. Thus we only need to show that in the remaining case there is also no solution. Without loss of generality, we may assume that

R(r) > 0 , R′(r) ≤ 0 for r < 1 ;  R(r) ≤ 0 for r ≥ 1.   (8.57)

We assume these conditions throughout the subsection.

The proof of Theorem 8.4.1 is based on the following comparison formulas.

Lemma 8.4.1 Let R be a continuous function satisfying (8.57). Let u be a solution of (8.54) and (8.56) for n = 2. Then

u(λx) > u( λx/|x|² ) − 4 ln |x| ,  ∀ x ∈ B₁(0) , 0 < λ ≤ 1.   (8.58)

Lemma 8.4.2 Let R be a continuous function satisfying (8.57). Let u be a solution of (8.55) and (8.56) for n > 2. Then

u(λx) > |x|^{2−n} u( λx/|x|² ) ,  ∀ x ∈ B₁(0) , 0 < λ ≤ 1.   (8.59)

We first prove Lemma 8.4.2; a similar idea will work for Lemma 8.4.1. We use a new idea called the method of 'moving spheres'. Let x ∈ B₁(0); then λx ∈ B_λ(0). The reflection point of λx about the sphere ∂B_λ(0) is λx/|x|². We compare the values of u at those pairs of points λx and λx/|x|². In step 1, we start with the unit sphere and show that (8.59) is true for λ = 1. This is done by the Kelvin transform and the strong maximum principle. Then in step 2, we move (shrink) the sphere ∂B_λ(0) towards the origin.
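As an aside on Lemma 8.4.1 (our own sketch, not part of the original text): for n = 2 the Kelvin factor |x|^{2−n} degenerates, and the logarithmic term in (8.58) plays its role:

```latex
% Our sketch: the two-dimensional analogue of the Kelvin transform behind
% Lemma 8.4.1.  For n = 2, let u solve -\Delta u = R(r) e^{u} in R^2 and set
w(x) = u\!\left(\frac{x}{|x|^2}\right) - 4\ln|x| .
% Inversion is conformal on R^2, so
% \Delta_x\big[f(x/|x|^2)\big] = |x|^{-4}(\Delta f)(x/|x|^2),
% and \ln|x| is harmonic for x \neq 0.  Hence
-\Delta w(x) = |x|^{-4}\,R\!\left(\tfrac{1}{r}\right)e^{\,u(x/|x|^2)}
             = R\!\left(\tfrac{1}{r}\right)e^{\,w(x)+4\ln|x|}\,|x|^{-4}
             = R\!\left(\tfrac{1}{r}\right)e^{\,w(x)} .
% Thus w satisfies the same type of equation with R(r) replaced by R(1/r), which
% is why the comparison in (8.58) involves u(\lambda x/|x|^2) - 4\ln|x| rather
% than a power factor.
```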
Proof of Lemma 8.4.2.
Step 1. In this step, we establish the inequality (8.59) for λ = 1, that is,

u(x) > |x|^{2−n} u( x/|x|² ) ,  ∀ x ∈ B₁(0).   (8.60)

Let v(x) = |x|^{2−n} u( x/|x|² ) be the Kelvin transform of u. Then it is easy to verify that v satisfies the equation

−Δv = R(1/r) v^τ .   (8.61)

Since R(1/r) ≤ 0 for r ≤ 1 by (8.57), and v ≥ 0 in B₁(0), it follows that −Δv ≤ 0. On the other hand, −Δu = R(r) u^τ > 0 in B₁(0). Thus

−Δ(u − v) > 0 ,

while u − v = 0 on ∂B₁(0). Applying the maximum principle, we obtain u > v in B₁(0), which is equivalent to (8.60).

Step 2. In this step, we move the sphere ∂B_λ(0) towards λ = 0, and establish the inequality (8.59) for all x ∈ B₁(0) and λ ∈ (0, 1]. We show that one can always shrink ∂B_λ(0) a little bit before reaching the origin.

Let u_λ(x) = λ^{(n−2)/2} u(λx). Then

−Δu_λ = R(λ|x|) u_λ^τ (x) .   (8.62)

Let v_λ(x) = |x|^{2−n} u_λ( x/|x|² ) be the Kelvin transform of u_λ. Then it is easy to verify that

−Δv_λ = R(λ/r) v_λ^τ (x) .   (8.63)

Taking into account of assumption (8.57), for r ≤ 1 and λ ≤ 1, we have

R(λ/r) − R(λr) ≤ 0 .   (8.64)

Let w_λ = u_λ − v_λ. Then by (8.62) and (8.63),

Δw_λ + R(λ/r) ψ_λ w_λ = [ R(λ/r) − R(λr) ] u_λ^τ ,   (8.65)

where ψ_λ is some function with values between τ u_λ^{τ−1} and τ v_λ^{τ−1}. It follows from (8.64) and (8.65) that

Δw_λ + C_λ(x) w_λ ≤ 0 ,   (8.66)

where C_λ(x) is a bounded function if λ is bounded away from 0.
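The computations above rest on the Kelvin-transform identity Δ( |x|^{2−n} u(x/|x|²) ) = |x|^{−n−2} (Δu)(x/|x|²). As an illustrative numerical sanity check (our addition, not part of the original text; the function u below is an arbitrary smooth radial sample, not a solution of (8.55)), one can verify the identity for radial functions by finite differences:

```python
import math

def radial_laplacian(f, r, n, h=1e-4):
    # Central-difference approximation of the radial Laplacian
    # f''(r) + (n - 1)/r * f'(r) of a radial function in R^n.
    f2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    f1 = (f(r + h) - f(r - h)) / (2.0 * h)
    return f2 + (n - 1) / r * f1

n = 5                                    # any dimension n >= 3 works here
u = lambda r: math.exp(-r * r)           # arbitrary smooth radial sample
v = lambda r: r**(2 - n) * u(1.0 / r)    # Kelvin transform of u (radial form)

r0 = 0.7                                 # test radius inside B_1(0), away from 0
lhs = radial_laplacian(v, r0, n)
rhs = r0**(-n - 2) * radial_laplacian(u, 1.0 / r0, n)
rel_err = abs(lhs - rhs) / abs(rhs)
print(rel_err)   # small: only the finite-difference error remains
```

The same check at other radii 0 < r0 < 1 and other dimensions n ≥ 3 behaves identically, which is exactly what Step 1 and Step 2 exploit when passing from u (respectively u_λ) to its Kelvin transform.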