Wenxiong Chen and Congming Li
Methods on Nonlinear Elliptic
Equations
– Monograph –
September 22, 2008
AIMS
Preface
In this book we intend to present basic materials as well as real research examples to young researchers in the field of nonlinear analysis for partial differential equations (PDEs), in particular, for semilinear elliptic PDEs. We hope it will become good reading material for graduate students and a handy textbook for professors to use in a topics course on nonlinear analysis.
We will introduce a series of typical, well-known methods in nonlinear analysis. We will first illustrate them using simple examples; then we will lead readers to the research front and explain how these methods can be applied to solve practical problems through careful analysis of a series of recent research articles, mostly the authors' recent papers.
From our experience, roughly speaking, applying these commonly used methods usually involves two aspects:
i) a general scheme (more or less universal), and
ii) key points for each individual problem.
We will use our own research articles to show readers what the general schemes are and what the key points are; with an understanding of these examples, we hope readers will be able to apply these general schemes to solve their own research problems by discovering their own key points.
In Chapter 1, we introduce basic knowledge of Sobolev spaces and some commonly used inequalities. These are the major spaces in which we will seek weak solutions of PDEs.
Chapter 2 shows how to find weak solutions of some typical linear and semilinear PDEs by using methods of functional analysis, mainly the calculus of variations and critical point theories, including the well-known Mountain Pass Lemma.
In Chapter 3, we establish $W^{2,p}$ a priori estimates and regularity. We prove that, in most cases, weak solutions are actually differentiable, and hence are classical ones. We will also present a Regularity Lifting Theorem. It is a simple method to boost the regularity of solutions, and it has been used extensively in various forms in the authors' previous works. The essence of the approach is well-known in the analysis community. However, the version we present here
contains some new developments. It is much more general and is very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, to obtain regularity. We will use examples to show how this theorem can be applied to PDEs and to integral equations.
Chapter 4 is a preparation for Chapters 5 and 6. We introduce Riemannian manifolds, curvatures, covariant derivatives, and Sobolev embeddings on manifolds.
Chapter 5 deals with semilinear elliptic equations arising from prescribing Gaussian curvature, on both positively and negatively curved manifolds. We show the existence of weak solutions in critical cases via variational approaches. We also introduce the method of lower and upper solutions.
Chapter 6 focuses on solving a problem of prescribing scalar curvature on $S^n$ for $n \geq 3$. It is in the critical case, where the corresponding variational functional is not compact at any level set. To recover the compactness, we construct a max-mini variational scheme. The outline is clearly presented; however, the detailed proofs are rather complex. Beginners may skip these proofs.
Chapter 7 is devoted to the study of various maximum principles, in particular the ones based on comparisons. Besides classical ones, we also introduce a version of the maximum principle at infinity and a maximum principle for integral equations, which basically depends on the absolute continuity of the Lebesgue integral. It is a preparation for the method of moving planes.
In Chapter 8, we introduce the method of moving planes and its variant, the method of moving spheres, and apply them to obtain symmetry, monotonicity, a priori estimates, and even nonexistence of solutions. We also introduce an integral form of the method of moving planes, which is quite different from the one for PDEs. Instead of using local properties of a PDE, we exploit global properties of solutions of integral equations. Many research examples are illustrated, including the well-known one by Gidas, Ni, and Nirenberg, as well as a few from the authors' recent papers.
Chapters 7 and 8 function as a self-contained group. Readers who are interested only in the maximum principles and the method of moving planes can skip the first six chapters and start directly from Chapter 7.
Yeshiva University Wenxiong Chen
University of Colorado Congming Li
Contents
1 Introduction to Sobolev Spaces
1.1 Distributions
1.2 Sobolev Spaces
1.3 Approximation by Smooth Functions
1.4 Sobolev Embeddings
1.5 Compact Embedding
1.6 Other Basic Inequalities
1.6.1 Poincaré's Inequality
1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality
2 Existence of Weak Solutions
2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Methods of Linear Functional Analysis
2.3.1 Linear Equations
2.3.2 Some Basic Principles in Functional Analysis
2.3.3 Existence of Weak Solutions
2.4 Variational Methods
2.4.1 Semilinear Equations
2.4.2 Calculus of Variations
2.4.3 Existence of Minimizers
2.4.4 Existence of Minimizers Under Constraints
2.4.5 Minimax Critical Points
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
3 Regularity of Solutions
3.1 $W^{2,p}$ a Priori Estimates
3.1.1 Newtonian Potentials
3.1.2 Uniform Elliptic Equations
3.2 $W^{2,p}$ Regularity of Solutions
3.2.1 The Case $p \geq 2$
3.2.2 The Case $1 < p < 2$
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity
3.3 Regularity Lifting
3.3.1 Bootstrap
3.3.2 Regularity Lifting Theorem
3.3.3 Applications to PDEs
3.3.4 Applications to Integral Equations
4 Preliminary Analysis on Riemannian Manifolds
4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
4.4.1 Curvature of Plane Curves
4.4.2 Curvature of Surfaces in $\mathbb{R}^3$
4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator
4.5.2 Integrals
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
4.6 Sobolev Embeddings
5 Prescribing Gaussian Curvature on Compact 2-Manifolds
5.1 Prescribing Gaussian Curvature on $S^2$
5.1.1 Obstructions
5.1.2 Variational Approach and Key Inequalities
5.1.3 Existence of Weak Solutions
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
5.2.2 Chen and Li's Results
6 Prescribing Scalar Curvature on $S^n$, for $n \geq 3$
6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where $R < 0$
6.3.2 In the Region Where $R$ is Small
6.3.3 In the Regions Where $R > 0$
7 Maximum Principles
7.1 Introduction
7.2 Weak Maximum Principles
7.3 The Hopf Lemma and Strong Maximum Principles
7.4 Maximum Principles Based on Comparisons
7.5 A Maximum Principle for Integral Equations
8 Methods of Moving Planes and of Moving Spheres
8.1 Outline of the Method of Moving Planes
8.2 Applications of the Maximum Principles Based on Comparisons
8.2.1 Symmetry of Solutions in a Unit Ball
8.2.2 Symmetry of Solutions of $-\Delta u = u^p$ in $\mathbb{R}^n$
8.2.3 Symmetry of Solutions for $-\Delta u = e^u$ in $\mathbb{R}^2$
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
8.4.1 The Background
8.4.2 Necessary Conditions
8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation
A Appendices
A.1 Notations
A.1.1 Algebraic and Geometric Notations
A.1.2 Notations for Functions and Derivatives
A.1.3 Function Spaces
A.1.4 Notations for Estimates
A.1.5 Notations from Riemannian Geometry
A.2 Inequalities
A.3 Calderón-Zygmund Decomposition
A.4 The Contraction Mapping Principle
A.5 The Arzelà-Ascoli Theorem
References
1 Introduction to Sobolev Spaces
1.1 Distributions
1.2 Sobolev Spaces
1.3 Approximation by Smooth Functions
1.4 Sobolev Embeddings
1.5 Compact Embedding
1.6 Other Basic Inequalities
1.6.1 Poincaré's Inequality
1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality
In real life, people use numbers to quantify the surrounding objects. In mathematics, the absolute value $|a - b|$ is used to measure the difference between two numbers $a$ and $b$. Functions are used to describe physical states. For example, temperature is a function of time and place. Very often, we use a sequence of approximate solutions to approach a real one, and how close these solutions are to the real one depends on how we measure them, that is, which metric we choose. Hence, it is imperative not only to develop suitable metrics to measure different states (functions), but also to study the relationships among different metrics. For these purposes, the Sobolev spaces were introduced. They have many applications in various branches of mathematics, in particular, in the theory of partial differential equations.
The role of Sobolev spaces in the analysis of PDEs is somewhat similar to the role of Euclidean spaces in the study of geometry. The fundamental research on the relations among various Sobolev spaces (Sobolev norms) was first carried out by G. Hardy and J. Littlewood in the 1910s and then by S. Sobolev in the 1930s. More recently, many well-known mathematicians, such as H. Brezis, L. Caffarelli, A. Chang, E. Lieb, L. Nirenberg, J. Serrin, and E. Stein, have worked in this area. The main objectives are to determine if and how the norms dominate each other, what the sharp estimates are, which functions achieve these sharp estimates, and which functions are 'critically' related to these sharp estimates.
To find the existence of weak solutions of partial differential equations, especially of nonlinear partial differential equations, the methods of functional analysis, in particular the calculus of variations, have seen more and more applications.
To roughly illustrate this kind of application, let us start off with a simple example. Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and consider the Dirichlet problem associated with the Laplace equation:
$$\begin{cases} -\Delta u = f(x), & x \in \Omega \\ u = 0, & x \in \partial\Omega. \end{cases} \tag{1.1}$$
To prove the existence of solutions, one may view $-\Delta$ as an operator acting on a proper linear space and then apply some known principles of functional analysis, for instance, the 'fixed point theory' or 'the degree theory,' to derive the existence. One may also consider the corresponding variational functional
$$J(u) = \frac{1}{2}\int_\Omega |\nabla u|^2\,dx - \int_\Omega f(x)\,u\,dx \tag{1.2}$$
in a proper linear space, and seek critical points of the functional in that space. This kind of variational approach is particularly powerful in dealing with nonlinear equations. For example, in equation (1.1), instead of $f(x)$, we consider $f(x, u)$. Then it becomes a semilinear equation. Correspondingly, we have the functional
$$J(u) = \frac{1}{2}\int_\Omega |\nabla u|^2\,dx - \int_\Omega F(x, u)\,dx, \tag{1.3}$$
where
$$F(x, u) = \int_0^u f(x, s)\,ds$$
is an antiderivative of $f(x, \cdot)$. From the definition of the functional in either
(1.2) or (1.3), one can see that the function $u$ in the space need not be second-order differentiable, as required of classical solutions of (1.1). Hence the critical points of the functional are solutions of the problem only in the 'weak' sense. However, by an appropriate regularity argument, one may recover the differentiability of the solutions, so that they still satisfy equation (1.1) in the classical sense.
In general, given a PDE problem, our intention is to view it as an operator $A$ acting on some proper linear spaces $X$ and $Y$ of functions and to write the equation symbolically as
$$Au = f. \tag{1.4}$$
Then we can apply the general and elegant principles of linear or nonlinear functional analysis to study the solvability of various equations involving $A$. The results can then be applied to a broad class of partial differential equations. We may also associate this operator with a functional $J(\cdot)$, whose critical points are the solutions of equation (1.4). In this process, the key is to find an appropriate operator $A$ and appropriate spaces $X$ and $Y$. As we will see later, the Sobolev spaces are designed precisely for this purpose and work out properly.
In solving a partial differential equation, in many cases it is natural to first find a sequence of approximate solutions, and then go on to investigate the convergence of the sequence. The limit function of a convergent sequence of approximate solutions would be the desired exact solution of the equation. As we will see in the next few chapters, there are two basic stages in showing convergence:
i) in a reflexive Banach space, every bounded sequence has a weakly convergent subsequence; and then
ii) by the compact embedding from a “stronger” Sobolev space into a “weaker” one, the weakly convergent sequence in the “stronger” space becomes a strongly convergent sequence in the “weaker” space.
Before going into the details of this chapter, the readers may glance at the introduction of the next chapter to gain more motivation for studying the Sobolev spaces.
In Section 1.1, we will introduce distributions, mainly the notion of weak derivatives, which are the elements of the Sobolev spaces.
We then define Sobolev spaces in Section 1.2.
To derive many useful properties of Sobolev spaces, it is not so convenient to work directly with weak derivatives. Hence in Section 1.3, we show that these weak derivatives can be approximated by smooth functions. Then, in the next three sections, we can simply work with smooth functions to establish a series of important inequalities.
1.1 Distributions
As we have seen in the introduction, the functional $J(u)$ in (1.2) or (1.3) involves only the first derivatives of $u$, instead of the second derivatives required classically for a second order equation; and these first derivatives need not be continuous or even be defined everywhere. Therefore, via a functional analysis approach, one can substantially weaken the notion of partial derivatives. The advantage is that it allows one to divide the task of finding “suitable” smooth solutions for a PDE into two major steps:
Step 1. Existence of Weak Solutions. One seeks solutions which are less differentiable but easier to obtain. It is very common to use “energy” minimization or conservation, or sometimes finite-dimensional approximation, to show the existence of such weak solutions.
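The finite-dimensional approximation mentioned in Step 1 can be made concrete with a small numerical experiment. The sketch below is our illustration, not part of the original text: it applies the Ritz-Galerkin idea to $-u'' = f$ on $(0,1)$ with zero boundary values, minimizing the energy $J(u) = \frac{1}{2}\int_0^1 |u'|^2\,dx - \int_0^1 f u\,dx$ over the span of the first $N$ sine modes. In this orthogonal basis the minimization decouples, and the $k$-th coefficient of the minimizer is $f_k/(k\pi)^2$.

```python
import math

# Ritz-Galerkin sketch (our illustration, not the book's code):
# weak solution of -u'' = f on (0,1), u(0) = u(1) = 0, found by
# minimizing J(u) = 1/2 * int |u'|^2 dx - int f u dx over
# span{sin(k*pi*x) : k = 1..N}.  In this basis the minimization
# decouples: the k-th coefficient is f_k / (k*pi)^2, where
# f_k = 2 * int_0^1 f(x) sin(k*pi*x) dx.

def sine_coeff(f, k, n=2000):
    # midpoint rule for the Fourier sine coefficient f_k
    h = 1.0 / n
    return 2.0 * h * sum(
        f((i + 0.5) * h) * math.sin(k * math.pi * (i + 0.5) * h) for i in range(n)
    )

def galerkin_solution(f, N=20):
    coeffs = [sine_coeff(f, k) / (k * math.pi) ** 2 for k in range(1, N + 1)]
    return lambda x: sum(
        c * math.sin(k * math.pi * x) for k, c in enumerate(coeffs, start=1)
    )

# test problem with known solution: f = sin(pi x)  =>  u = sin(pi x) / pi^2
f = lambda x: math.sin(math.pi * x)
u = galerkin_solution(f)
err = max(
    abs(u(x) - math.sin(math.pi * x) / math.pi ** 2)
    for x in (0.1 * i for i in range(11))
)
print(err)
```

For this test problem the computed error is at the level of the quadrature accuracy, which is exactly the pattern of Step 1: an approximate (here, finite-dimensional) solution obtained by energy minimization.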
Step 2. Regularity Lifting. One uses various tools of analysis to boost the differentiability of the known weak solutions and tries to show that they are actually classical ones.
Both the existence of weak solutions and regularity lifting have become major branches of today's PDE analysis. Various function spaces and the related embedding theories are basic tools in both, among which the Sobolev spaces are the most frequently used.
In this section, we introduce the notion of ‘weak derivatives,’ which will
be the elements of the Sobolev spaces.
Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space and $\Omega$ an open connected subset of $\mathbb{R}^n$. Let $\mathcal{D}(\Omega) = C^\infty_0(\Omega)$ be the linear space of infinitely differentiable functions with compact support in $\Omega$. This is called the space of test functions on $\Omega$.
Example 1.1.1 Assume
$$B_R(x^o) := \{x \in \mathbb{R}^n \mid |x - x^o| < R\} \subset \Omega;$$
then for any $r < R$, the following function
$$f(x) = \begin{cases} \exp\left\{\dfrac{1}{|x - x^o|^2 - r^2}\right\} & \text{for } |x - x^o| < r \\ 0 & \text{elsewhere} \end{cases}$$
is in $C^\infty_0(\Omega)$.
Example 1.1.2 Assume $\rho \in C^\infty_0(\mathbb{R}^n)$, $u \in L^p(\Omega)$, and $\mathrm{supp}\,u \subset K \subset\subset \Omega$. Let
$$u_\epsilon(x) := \rho_\epsilon * u := \int_{\mathbb{R}^n} \frac{1}{\epsilon^n}\,\rho\!\left(\frac{x - y}{\epsilon}\right) u(y)\,dy.$$
Then $u_\epsilon \in C^\infty_0(\Omega)$ for $\epsilon$ sufficiently small.
Let $L^p_{loc}(\Omega)$ be the space of $p$th-power locally summable functions for $1 \leq p \leq \infty$. Such functions are Lebesgue measurable functions $f$ defined on $\Omega$ with the property that
$$\|f\|_{L^p(K)} := \left( \int_K |f(x)|^p\,dx \right)^{1/p} < \infty$$
for every compact subset $K$ of $\Omega$.
Assume that $u$ is a $C^1$ function in $\Omega$ and $\phi \in \mathcal{D}(\Omega)$. Through integration by parts, we have, for $i = 1, 2, \ldots, n$,
$$\int_\Omega \frac{\partial u}{\partial x_i}\,\phi(x)\,dx = -\int_\Omega u(x)\,\frac{\partial \phi}{\partial x_i}\,dx. \tag{1.5}$$
Now if $u$ is not in $C^1(\Omega)$, then $\frac{\partial u}{\partial x_i}$ does not exist. However, the integral on the right hand side of (1.5) still makes sense if $u$ is a locally $L^1$ summable function. For this reason, we define the first derivative $\frac{\partial u}{\partial x_i}$ weakly as the function $v(x)$ that satisfies
$$\int_\Omega v(x)\,\phi(x)\,dx = -\int_\Omega u\,\frac{\partial \phi}{\partial x_i}\,dx$$
for all functions $\phi \in \mathcal{D}(\Omega)$.
The same idea works for higher partial derivatives. Let $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ be a multi-index of order
$$k := |\alpha| := \alpha_1 + \alpha_2 + \cdots + \alpha_n.$$
For $u \in C^k(\Omega)$, the regular $\alpha$th partial derivative of $u$ is
$$D^\alpha u = \frac{\partial^{\alpha_1} \partial^{\alpha_2} \cdots \partial^{\alpha_n} u}{\partial x_1^{\alpha_1}\,\partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$
Given any test function $\phi \in \mathcal{D}(\Omega)$, through a straightforward integration by parts $k$ times, we arrive at
$$\int_\Omega D^\alpha u\,\phi(x)\,dx = (-1)^{|\alpha|} \int_\Omega u\,D^\alpha \phi\,dx. \tag{1.6}$$
There is no boundary term because $\phi$ vanishes near the boundary.
Now if $u$ is not $k$ times differentiable, the left hand side of (1.6) makes no sense. However, the right hand side is valid for functions $u$ with much weaker differentiability; $u$ need only be locally $L^1$ summable. Thus it is natural to choose those functions $v$ that satisfy (1.6) as the weak representatives of $D^\alpha u$.
Definition 1.1.1 For $u, v \in L^1_{loc}(\Omega)$, we say that $v$ is the $\alpha$th weak derivative of $u$, written
$$v = D^\alpha u,$$
provided
$$\int_\Omega v(x)\,\phi(x)\,dx = (-1)^{|\alpha|} \int_\Omega u\,D^\alpha \phi\,dx$$
for all test functions $\phi \in \mathcal{D}(\Omega)$.
Example 1.1.3 For $n = 1$ and $\Omega = (-\pi, 1)$, let
$$u(x) = \begin{cases} \cos x & \text{if } -\pi < x \leq 0 \\ 1 - x & \text{if } 0 < x < 1. \end{cases}$$
Then its weak derivative $u'(x)$ can be represented by
$$v(x) = \begin{cases} -\sin x & \text{if } -\pi < x \leq 0 \\ -1 & \text{if } 0 < x < 1. \end{cases}$$
To see this, we verify, for any $\phi \in \mathcal{D}(\Omega)$, that
$$\int_{-\pi}^1 u(x)\phi'(x)\,dx = -\int_{-\pi}^1 v(x)\phi(x)\,dx. \tag{1.7}$$
In fact, through integration by parts, we have
$$\begin{aligned}
\int_{-\pi}^1 u(x)\phi'(x)\,dx &= \int_{-\pi}^0 \cos x\,\phi'(x)\,dx + \int_0^1 (1 - x)\,\phi'(x)\,dx \\
&= \int_{-\pi}^0 \sin x\,\phi(x)\,dx + \phi(0) - \phi(0) + \int_0^1 \phi(x)\,dx \\
&= -\int_{-\pi}^1 v(x)\phi(x)\,dx.
\end{aligned}$$
In this example, one can see that, in the classical sense, the function $u$ is not differentiable at $x = 0$. Since the weak derivative is defined via integrals, one may alter the values of the weak derivative $v(x)$ on a set of measure zero, and (1.7) still holds. Hence the weak derivative is unique only up to a set of measure zero.
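The identity (1.7) can also be checked numerically. The following Python sketch is our illustration, not part of the original text; the particular bump test function is our own choice. It compares the two sides of (1.7) with a midpoint rule, splitting the integrals at $x = 0$, where $u$ has a corner.

```python
import math

# Numerical check of (1.7) (our illustration).  As a test function we use
# the smooth bump phi(x) = exp(-1/g(x)), g(x) = (x + pi)(1 - x), which
# vanishes to all orders at both endpoints of Omega = (-pi, 1).

def g(x):
    return (x + math.pi) * (1.0 - x)

def phi(x):
    return math.exp(-1.0 / g(x)) if g(x) > 0 else 0.0

def dphi(x):
    # phi'(x) = phi(x) * g'(x) / g(x)^2,  with g'(x) = 1 - 2x - pi
    if g(x) <= 0:
        return 0.0
    return phi(x) * (1.0 - 2.0 * x - math.pi) / g(x) ** 2

def midpoint(fn, a, b, n=20000):
    h = (b - a) / n
    return h * sum(fn(a + (i + 0.5) * h) for i in range(n))

u = lambda x: math.cos(x) if x <= 0 else 1.0 - x
v = lambda x: -math.sin(x) if x <= 0 else -1.0

# split the integrals at 0, where u has a corner
lhs = midpoint(lambda x: u(x) * dphi(x), -math.pi, 0.0) + \
      midpoint(lambda x: u(x) * dphi(x), 0.0, 1.0)
rhs = -(midpoint(lambda x: v(x) * phi(x), -math.pi, 0.0) +
        midpoint(lambda x: v(x) * phi(x), 0.0, 1.0))
print(lhs, rhs)  # the two sides agree
```

The agreement holds for any admissible test function; the corner of $u$ at the origin is invisible to the integral identity, which is precisely the point of the weak derivative.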
Lemma 1.1.1 (Uniqueness of Weak Derivatives) If $v$ and $w$ are both weak $\alpha$th partial derivatives of $u$, then $v(x) = w(x)$ almost everywhere.
Proof. By the definition of weak derivatives, we have
$$\int_\Omega v(x)\phi(x)\,dx = (-1)^{|\alpha|} \int_\Omega u(x)\,D^\alpha \phi\,dx = \int_\Omega w(x)\phi(x)\,dx$$
for any $\phi \in \mathcal{D}(\Omega)$. It follows that
$$\int_\Omega (v(x) - w(x))\,\phi(x)\,dx = 0 \quad \forall\, \phi \in \mathcal{D}(\Omega).$$
Therefore, we must have
$$v(x) = w(x) \ \text{almost everywhere.} \qquad \square$$
From the definition, we can view a weak derivative as a linear functional acting on the space of test functions $\mathcal{D}(\Omega)$, and we call it a distribution. More generally, we have
Definition 1.1.2 A distribution is a continuous linear functional on $\mathcal{D}(\Omega)$. The linear space of distributions, or generalized functions, on $\Omega$, denoted by $\mathcal{D}'(\Omega)$, is the collection of all continuous linear functionals on $\mathcal{D}(\Omega)$.
Here, the continuity of a functional $T$ on $\mathcal{D}(\Omega)$ means that, for any sequence $\{\phi_k\} \subset \mathcal{D}(\Omega)$ with $\phi_k \to \phi$ in $\mathcal{D}(\Omega)$, we have
$$T(\phi_k) \to T(\phi), \ \text{as } k \to \infty;$$
and we say that $\phi_k \to \phi$ in $\mathcal{D}(\Omega)$ if
a) there exists $K \subset\subset \Omega$ such that $\mathrm{supp}\,\phi_k,\ \mathrm{supp}\,\phi \subset K$, and
b) for any $\alpha$, $D^\alpha \phi_k \to D^\alpha \phi$ uniformly as $k \to \infty$.
The most important and most commonly used distributions are locally summable functions. In fact, for any $f \in L^p_{loc}(\Omega)$ with $1 \leq p \leq \infty$, consider
$$T_f(\phi) = \int_\Omega f(x)\phi(x)\,dx.$$
It is easy to verify that $T_f(\cdot)$ is a continuous linear functional on $\mathcal{D}(\Omega)$, and hence it is a distribution.
For any distribution $\mu$, if there is an $f \in L^1_{loc}(\Omega)$ such that
$$\mu(\phi) = T_f(\phi), \quad \forall\, \phi \in \mathcal{D}(\Omega),$$
then we say that $\mu$ is (or can be realized as) a locally summable function and identify it with $f$.
An interesting example of a distribution that is not a locally summable function is the well-known Dirac delta function. Let $x^o$ be a point in $\Omega$. For any $\phi \in \mathcal{D}(\Omega)$, the delta function at $x^o$ can be defined as
$$\delta_{x^o}(\phi) = \phi(x^o).$$
Hence it is a distribution. However, one can show that such a delta function is not locally summable; it is not a function at all. This kind of “function” has been used widely and so successfully by physicists and engineers, who often simply view $\delta_{x^o}$ as
$$\delta_{x^o}(x) = \begin{cases} 0, & \text{for } x \neq x^o \\ \infty, & \text{for } x = x^o. \end{cases}$$
Surprisingly, such a delta function is the derivative of some function in the following distributional sense. To explain this, let $\Omega = (-1, 1)$, and let
$$f(x) = \begin{cases} 0, & \text{for } x < 0 \\ 1, & \text{for } x \geq 0. \end{cases}$$
Then, we have
$$-\int_{-1}^1 f(x)\phi'(x)\,dx = -\int_0^1 \phi'(x)\,dx = \phi(0) = \delta_0(\phi).$$
Comparing this with the definition of weak derivatives, we may regard $\delta_0(x)$ as $f'(x)$ in the sense of distributions.
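This distributional identity lends itself to a quick numerical check. In the sketch below (ours, not the book's), we take the standard bump $\phi(x) = e^{-1/(1 - x^2)}$ as the test function and confirm that the pairing $-\int_{-1}^1 f\phi'\,dx$ reproduces $\phi(0)$.

```python
import math

# Our numerical illustration: for the step function f above (0 on (-1,0),
# 1 on [0,1)), the pairing -int_{-1}^{1} f(x) phi'(x) dx returns phi(0),
# i.e. f' = delta_0 in the sense of distributions.  Test function: the
# standard bump phi(x) = exp(-1/(1 - x^2)) on (-1, 1).

def phi(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def dphi(x):
    # phi'(x) = phi(x) * (-2x) / (1 - x^2)^2
    if abs(x) >= 1:
        return 0.0
    g = 1.0 - x * x
    return phi(x) * (-2.0 * x) / (g * g)

# -int_{-1}^{1} f phi' dx = -int_0^1 phi' dx  (midpoint rule)
n = 20000
h = 1.0 / n
pairing = -h * sum(dphi((i + 0.5) * h) for i in range(n))
print(pairing, phi(0.0))  # both approximately e^{-1}
```

The computation is nothing but the fundamental theorem of calculus on $[0, 1]$, since $\phi$ vanishes at $x = 1$; the delta function appears only because $f$ jumps at the origin.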
1.2 Sobolev Spaces
Now suppose, given a function $f \in L^p(\Omega)$, we want to solve the partial differential equation
$$\Delta u = f(x)$$
in the sense of weak derivatives. Naturally, we would seek solutions $u$ such that $\Delta u$ is in $L^p(\Omega)$. More generally, we would start from the collection of all distributions whose second weak derivatives are in $L^p(\Omega)$.
In a variational approach, as we have seen in the introduction, to seek critical points of the functional
$$\frac{1}{2}\int_\Omega |\nabla u|^2\,dx - \int_\Omega F(x, u)\,dx,$$
a natural set of functions to start with is the collection of distributions whose first weak derivatives are in $L^2(\Omega)$. More generally, we have
Definition 1.2.1 The Sobolev space $W^{k,p}(\Omega)$ ($k \geq 0$ and $p \geq 1$) is the collection of all distributions $u$ on $\Omega$ such that, for every multi-index $\alpha$ with $|\alpha| \leq k$, $D^\alpha u$ can be realized as an $L^p$ function on $\Omega$. Furthermore, $W^{k,p}(\Omega)$ is a Banach space with the norm
$$\|u\| := \left( \sum_{|\alpha| \leq k} \int_\Omega |D^\alpha u(x)|^p\,dx \right)^{1/p}.$$
In the special case when $p = 2$, it is also a Hilbert space, and we usually denote it by $H^k(\Omega)$.
Definition 1.2.2 $W^{k,p}_0(\Omega)$ is the closure of $C^\infty_0(\Omega)$ in $W^{k,p}(\Omega)$.
Intuitively, $W^{k,p}_0(\Omega)$ is the space of functions whose derivatives up to $(k-1)$th order vanish on the boundary.
Example 1.2.1 Let $\Omega = (-1, 1)$. Consider the function $f(x) = |x|^\beta$. For $0 < \beta < 1$, it is obviously not differentiable at the origin. However, for any $1 \leq p < \frac{1}{1-\beta}$, it is in the Sobolev space $W^{1,p}(\Omega)$. More generally, let $\Omega$ be the open unit ball centered at the origin in $\mathbb{R}^n$; then the function $|x|^\beta$ is in $W^{1,p}(\Omega)$ if and only if
$$\beta > 1 - \frac{n}{p}. \tag{1.8}$$
To see this, we first calculate
$$f_{x_i}(x) = \frac{\beta x_i}{|x|^{2-\beta}}, \quad \text{for } x \neq 0,$$
and hence
$$|\nabla f(x)| = \frac{\beta}{|x|^{1-\beta}}. \tag{1.9}$$
Fix a small $\epsilon > 0$. Then for any $\phi \in \mathcal{D}(\Omega)$, by integration by parts, we have
$$\int_{\Omega \setminus B_\epsilon(0)} f_{x_i}(x)\phi(x)\,dx = -\int_{\Omega \setminus B_\epsilon(0)} f(x)\phi_{x_i}(x)\,dx + \int_{\partial B_\epsilon(0)} f\phi\,\nu_i\,dS,$$
where $\nu = (\nu_1, \nu_2, \ldots, \nu_n)$ is an inward normal vector on $\partial B_\epsilon(0)$.
Now, under the condition that $\beta > 1 - n/p$, $f_{x_i}$ is in $L^p(\Omega) \subset L^1(\Omega)$, and
$$\left| \int_{\partial B_\epsilon(0)} f\phi\,\nu_i\,dS \right| \leq C\,\epsilon^{\,n-1+\beta} \to 0, \ \text{as } \epsilon \to 0.$$
It follows that
$$\int_\Omega |x|^\beta \phi_{x_i}(x)\,dx = -\int_\Omega \frac{\beta x_i}{|x|^{2-\beta}}\,\phi(x)\,dx.$$
Therefore, the weak first partial derivatives of $|x|^\beta$ are
$$\frac{\beta x_i}{|x|^{2-\beta}}, \quad i = 1, 2, \ldots, n.$$
Moreover, from (1.9), one can see that $|\nabla f|$ is in $L^p(\Omega)$ if and only if
$$\beta > 1 - \frac{n}{p}.$$
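Criterion (1.8) can be seen numerically from the radial form of the integral: with $|\nabla f| = \beta/|x|^{1-\beta}$, the $L^p$ norm over the unit ball reduces, up to a constant, to $\int_0^1 r^a\,dr$ with $a = p(\beta - 1) + n - 1$, which is finite exactly when $a > -1$, i.e. $\beta > 1 - n/p$. The small sketch below (our illustration, not part of the text) compares the truncated integrals over $[\epsilon, 1]$ for one $\beta$ above and one below the threshold.

```python
import math

# Our illustration of criterion (1.8).  In radial coordinates,
#   int_{B_1} |grad f|^p dx = const * int_0^1 r^{p(beta-1)} r^{n-1} dr,
# so everything hinges on the exponent a = p*(beta - 1) + n - 1:
# int_eps^1 r^a dr stays bounded as eps -> 0 iff a > -1, i.e. beta > 1 - n/p.

def tail(beta, n, p, eps):
    a = p * (beta - 1.0) + n - 1.0
    if abs(a + 1.0) < 1e-12:          # borderline case: int r^{-1} = -log(eps)
        return -math.log(eps)
    return (1.0 - eps ** (a + 1.0)) / (a + 1.0)   # exact antiderivative

n, p = 3, 2                            # threshold: beta > 1 - n/p = -1/2
good = [tail(0.9, n, p, 10.0 ** (-k)) for k in range(1, 7)]   # above threshold
bad = [tail(-0.9, n, p, 10.0 ** (-k)) for k in range(1, 7)]   # below threshold
print(good[-1])   # converges to a finite limit
print(bad[-1])    # grows without bound as eps -> 0
```

The values of `good` stabilize while those of `bad` blow up, mirroring the dichotomy in (1.8); here $n$, $p$, and the two values of $\beta$ are our own choices.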
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance when proving inequalities, it may be inconvenient and cumbersome to handle weak derivatives directly from their definition. To get around this, in this section we will show that any function in a Sobolev space can be approximated by a sequence of smooth functions. In other words, the smooth functions are dense in Sobolev spaces. Based on this, when deriving many useful properties of Sobolev spaces, we can simply work with smooth functions and then take limits.
At the end of the section, for more convenient application of the approximation results, we introduce an Operator Completion Theorem. We also prove an Extension Theorem. Both theorems will be used frequently in the next few sections.
The idea in approximation is based on mollifiers. Let
$$j(x) = \begin{cases} c\,e^{\frac{1}{|x|^2 - 1}} & \text{if } |x| < 1 \\ 0 & \text{if } |x| \geq 1. \end{cases}$$
One can verify that
$$j(x) \in C^\infty_0(B_1(0)).$$
Choose the constant $c$ such that
$$\int_{\mathbb{R}^n} j(x)\,dx = 1.$$
For each $\epsilon > 0$, define
$$j_\epsilon(x) = \frac{1}{\epsilon^n}\,j\!\left(\frac{x}{\epsilon}\right).$$
Then obviously
$$\int_{\mathbb{R}^n} j_\epsilon(x)\,dx = 1, \quad \forall\, \epsilon > 0.$$
One can also verify that $j_\epsilon \in C^\infty_0(B_\epsilon(0))$, and
$$\lim_{\epsilon \to 0} j_\epsilon(x) = \begin{cases} 0 & \text{for } x \neq 0 \\ \infty & \text{for } x = 0. \end{cases}$$
The above observations suggest that the limit of $j_\epsilon(x)$ may be viewed as a delta function, and from the well-known property of the delta function, we would naturally expect, for any continuous function $f(x)$ and for any point $x \in \Omega$,
$$(J_\epsilon f)(x) := \int_\Omega j_\epsilon(x - y)f(y)\,dy \to f(x), \ \text{as } \epsilon \to 0. \tag{1.10}$$
More generally, if $f(x)$ is only in $L^p(\Omega)$, we would expect (1.10) to hold almost everywhere. We call $j_\epsilon(x)$ a mollifier and $(J_\epsilon f)(x)$ the mollification of $f(x)$.
We will show that, for each $\epsilon > 0$, $J_\epsilon f$ is a $C^\infty$ function, and as $\epsilon \to 0$,
$$J_\epsilon f \to f \ \text{in } W^{k,p}.$$
Notice that, actually,
$$(J_\epsilon f)(x) = \int_{B_\epsilon(x) \cap \Omega} j_\epsilon(x - y)f(y)\,dy;$$
hence, in order for $(J_\epsilon f)(x)$ to approximate $f(x)$ well, we need $B_\epsilon(x)$ to be completely contained in $\Omega$ to ensure that
$$\int_{B_\epsilon(x) \cap \Omega} j_\epsilon(x - y)\,dy = 1$$
(one of the important properties of the delta function). Equivalently, we need $x$ to be in the interior of $\Omega$. For this reason, we first prove a local approximation theorem.
Theorem 1.3.1 (Local approximation by smooth functions)
For any $f \in W^{k,p}(\Omega)$, $J_\epsilon f \in C^\infty(\mathbb{R}^n)$ and $J_\epsilon f \to f$ in $W^{k,p}_{loc}(\Omega)$ as $\epsilon \to 0$.
Then, to extend this result to the entire $\Omega$, we will choose infinitely many open sets $O_i$, $i = 1, 2, \ldots$, each of which has a positive distance to the boundary of $\Omega$, and whose union is the whole of $\Omega$. Based on the above theorem, we are able to approximate a $W^{k,p}(\Omega)$ function on each $O_i$ by a sequence of smooth functions. Combining this with a partition of unity, and a cut-off function if $\Omega$ is unbounded, we will then prove
Theorem 1.3.2 (Global approximation by smooth functions)
For any $f \in W^{k,p}(\Omega)$, there exists a sequence of functions $\{f_m\} \subset C^\infty(\Omega) \cap W^{k,p}(\Omega)$ such that $f_m \to f$ in $W^{k,p}(\Omega)$ as $m \to \infty$.
Theorem 1.3.3 (Global approximation by smooth functions up to the boundary)
Assume that $\Omega$ is bounded with $C^1$ boundary $\partial\Omega$. Then for any $f \in W^{k,p}(\Omega)$, there exists a sequence of functions $\{f_m\} \subset C^\infty(\bar{\Omega}) = C^\infty(\bar{\Omega}) \cap W^{k,p}(\Omega)$ such that $f_m \to f$ in $W^{k,p}(\Omega)$ as $m \to \infty$.
When $\Omega$ is the entire space $\mathbb{R}^n$, approximation by $C^\infty$ functions and approximation by $C^\infty_0$ functions are essentially the same. We have
Theorem 1.3.4 W
k,p
(R
n
) = W
k,p
0
(R
n
). In other words, for any f ∈ W
k,p
(R
n
),
there exists a sequence of functions ¦f
m
¦ ⊂ C
∞
0
(R
n
), such that
f
m
→f, as m→∞; in W
k,p
(R
n
).
Proof of Theorem 1.3.1.
We prove the theorem in three steps.

In Step 1, we show that $J_\epsilon f \in C^\infty(R^n)$ and
\[ \|J_\epsilon f\|_{L^p(\Omega)} \le \|f\|_{L^p(\Omega)}. \]
From the definition of $J_\epsilon f(x)$, we can see that it is well defined for all $x \in R^n$, and it vanishes if $x$ is of distance $\epsilon$ away from $\Omega$. Here and in the following, for simplicity of argument, we extend $f$ to be zero outside of $\Omega$.

In Step 2, we prove that, if $f$ is in $L^p(\Omega)$, then
\[ (J_\epsilon f) \to f \ \text{ in } L^p_{loc}(\Omega). \]
We first verify this for continuous functions and then approximate $L^p$ functions by continuous functions.

In Step 3, we reach the conclusion of the theorem. For each $f \in W^{k,p}(\Omega)$ and $|\alpha| \le k$, $D^\alpha f$ is in $L^p(\Omega)$. Then from the result in Step 2, we have
\[ J_\epsilon(D^\alpha f) \to D^\alpha f \ \text{ in } L^p_{loc}(\Omega). \]
Hence what we need to verify is
12 1 Introduction to Sobolev Spaces
\[ D^\alpha (J_\epsilon f)(x) = J_\epsilon(D^\alpha f)(x). \]
As the reader will notice, the arguments in the last two steps only work on compact subsets of $\Omega$.
Step 1.
Let $e_i = (0, \dots, 0, 1, 0, \dots, 0)$ be the unit vector in the $x_i$ direction. Fix $\epsilon > 0$ and $x \in R^n$. By the definition of $J_\epsilon f$, we have, for $|h| < \epsilon$,
\[ \frac{(J_\epsilon f)(x + h e_i) - (J_\epsilon f)(x)}{h} = \int_{B_{2\epsilon}(x) \cap \Omega} \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h}\, f(y)\,dy. \tag{1.11} \]
Since, as $h \to 0$,
\[ \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h} \to \frac{\partial j_\epsilon(x-y)}{\partial x_i} \]
uniformly for all $y \in B_{2\epsilon}(x) \cap \Omega$, we can pass the limit through the integral sign in (1.11) to obtain
\[ \frac{\partial (J_\epsilon f)(x)}{\partial x_i} = \int_\Omega \frac{\partial j_\epsilon(x-y)}{\partial x_i}\, f(y)\,dy. \]
Similarly, we have
\[ D^\alpha (J_\epsilon f)(x) = \int_\Omega D^\alpha_x j_\epsilon(x-y)\, f(y)\,dy. \]
Noticing that $j_\epsilon(\cdot)$ is infinitely differentiable, we conclude that $J_\epsilon f$ is also infinitely differentiable.
Then we derive
\[ \|J_\epsilon f\|_{L^p(\Omega)} \le \|f\|_{L^p(\Omega)}. \tag{1.12} \]
By the Hölder inequality, we have
\[ |(J_\epsilon f)(x)| = \Big| \int_\Omega j_\epsilon^{\frac{p-1}{p}}(x-y)\, j_\epsilon^{\frac{1}{p}}(x-y)\, f(y)\,dy \Big| \le \Big( \int_\Omega j_\epsilon(x-y)\,dy \Big)^{\frac{p-1}{p}} \Big( \int_\Omega j_\epsilon(x-y)\,|f(y)|^p\,dy \Big)^{\frac{1}{p}} \le \Big( \int_{B_\epsilon(x)} j_\epsilon(x-y)\,|f(y)|^p\,dy \Big)^{\frac{1}{p}}. \]
It follows that
\[ \int_\Omega |(J_\epsilon f)(x)|^p\,dx \le \int_\Omega \Big( \int_{B_\epsilon(x)} j_\epsilon(x-y)\,|f(y)|^p\,dy \Big) dx \le \int_\Omega |f(y)|^p \Big( \int_{B_\epsilon(y)} j_\epsilon(x-y)\,dx \Big) dy \le \int_\Omega |f(y)|^p\,dy. \]
This verifies (1.12).
Step 2.
We prove that, for any compact subset $K$ of $\Omega$,
\[ \|J_\epsilon f - f\|_{L^p(K)} \to 0, \ \text{as } \epsilon \to 0. \tag{1.13} \]
We first show this for a continuous function $f$. By writing
\[ (J_\epsilon f)(x) - f(x) = \int_{B_\epsilon(0)} j_\epsilon(y)\,[f(x-y) - f(x)]\,dy, \]
we have
\[ |(J_\epsilon f)(x) - f(x)| \le \max_{x \in K,\, |y| < \epsilon} |f(x-y) - f(x)| \int_{B_\epsilon(0)} j_\epsilon(y)\,dy = \max_{x \in K,\, |y| < \epsilon} |f(x-y) - f(x)|. \]
Due to the continuity of $f$ and the compactness of $K$, the last term in the above inequality tends to zero uniformly as $\epsilon \to 0$. This verifies (1.13) for continuous functions.
For any $f$ in $L^p(\Omega)$ and given any $\delta > 0$, choose a continuous function $g$ such that
\[ \|f - g\|_{L^p(\Omega)} < \frac{\delta}{3}. \tag{1.14} \]
This can be derived from the well-known fact that any $L^p$ function can be approximated by a simple function of the form $\sum_{j=1}^k a_j \chi_j(x)$, where $\chi_j$ is the characteristic function of some measurable set $A_j$; and each simple function can be approximated by a continuous function.
For the continuous function $g$, (1.13) implies that for sufficiently small $\epsilon$, we have
\[ \|J_\epsilon g - g\|_{L^p(K)} < \frac{\delta}{3}. \]
It follows from this and (1.14) that
\[ \|J_\epsilon f - f\|_{L^p(K)} \le \|J_\epsilon f - J_\epsilon g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} + \|g - f\|_{L^p(K)} \le 2\|f - g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} \le 2\,\frac{\delta}{3} + \frac{\delta}{3} = \delta. \]
This proves (1.13).
Step 3.
Now assume that $f \in W^{k,p}(\Omega)$. Then for any $\alpha$ with $|\alpha| \le k$, we have $D^\alpha f \in L^p(\Omega)$. We show that
\[ D^\alpha (J_\epsilon f) \to D^\alpha f \ \text{ in } L^p(K). \tag{1.15} \]
By the result in Step 2, we have
\[ J_\epsilon(D^\alpha f) \to D^\alpha f \ \text{ in } L^p(K). \]
Now what is left to verify is that, for sufficiently small $\epsilon$,
\[ D^\alpha (J_\epsilon f)(x) = J_\epsilon(D^\alpha f)(x), \quad \forall x \in K. \]
To see this, we fix a point $x$ in $K$. Choose $\epsilon < \frac{1}{2}\,\mathrm{dist}(K, \partial\Omega)$; then any point $y$ in the ball $B_\epsilon(x)$ is in the interior of $\Omega$. Consequently,
\[ \begin{aligned} D^\alpha (J_\epsilon f)(x) &= \int_\Omega D^\alpha_x j_\epsilon(x-y)\, f(y)\,dy = \int_{B_\epsilon(x)} D^\alpha_x j_\epsilon(x-y)\, f(y)\,dy \\ &= (-1)^{|\alpha|} \int_{B_\epsilon(x)} D^\alpha_y j_\epsilon(x-y)\, f(y)\,dy = \int_{B_\epsilon(x)} j_\epsilon(x-y)\, D^\alpha f(y)\,dy \\ &= \int_\Omega j_\epsilon(x-y)\, D^\alpha f(y)\,dy. \end{aligned} \]
Here we have used the fact that $j_\epsilon(x-y)$ and all its derivatives are supported in $B_\epsilon(x) \subset \Omega$, together with the definition of the weak derivative $D^\alpha f$.
This completes the proof of Theorem 1.3.1. $\Box$
The Proof of Theorem 1.3.2.

Step 1. For a bounded region $\Omega$.
Let
\[ \Omega_i = \{ x \in \Omega \mid \mathrm{dist}(x, \partial\Omega) > \tfrac{1}{i} \}, \quad i = 1, 2, 3, \dots \]
Write $O_i = \Omega_{i+3} \setminus \bar\Omega_{i+1}$. Choose some open set $O_0 \subset\subset \Omega$ so that $\Omega = \cup_{i=0}^\infty O_i$. Choose a smooth partition of unity $\{\eta_i\}_{i=0}^\infty$ associated with the open sets $\{O_i\}_{i=0}^\infty$:
\[ 0 \le \eta_i(x) \le 1, \quad \eta_i \in C_0^\infty(O_i), \quad \sum_{i=0}^\infty \eta_i(x) = 1, \ x \in \Omega. \]
Given any function $f \in W^{k,p}(\Omega)$, obviously $\eta_i f \in W^{k,p}(\Omega)$ and $\mathrm{supp}(\eta_i f) \subset O_i$. Fix a $\delta > 0$. Choose $\epsilon_i > 0$ so small that $f_i := J_{\epsilon_i}(\eta_i f)$ satisfies
\[ \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \le \frac{\delta}{2^{i+1}}, \quad i = 0, 1, \dots, \qquad \mathrm{supp}\, f_i \subset (\Omega_{i+4} \setminus \bar\Omega_i), \quad i = 1, 2, \dots \tag{1.16} \]
Set $g = \sum_{i=0}^\infty f_i$. Then $g \in C^\infty(\Omega)$, because each $f_i$ is in $C^\infty(\Omega)$, and for each open set $O \subset\subset \Omega$, there are at most finitely many nonzero terms in the sum. To see that $g \in W^{k,p}(\Omega)$, we write
\[ g - f = \sum_{i=0}^\infty (f_i - \eta_i f). \]
It follows from (1.16) that
\[ \sum_{i=0}^\infty \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \le \delta \sum_{i=0}^\infty \frac{1}{2^{i+1}} = \delta. \]
From here we can see that the series $\sum_{i=0}^\infty (f_i - \eta_i f)$ converges in $W^{k,p}(\Omega)$ (see Exercise 1.3.1 below); hence $(g - f) \in W^{k,p}(\Omega)$, and therefore $g \in W^{k,p}(\Omega)$. Moreover, from the above inequality, we have
\[ \|g - f\|_{W^{k,p}(\Omega)} \le \delta. \]
Since $\delta > 0$ is arbitrary, this completes Step 1.
Remark 1.3.1 In the above argument, neither $\sum f_i$ nor $\sum \eta_i f$ converges, but their difference converges. Although each partial sum of $\sum f_i$ is in $C_0^\infty(\Omega)$, the infinite series $\sum f_i$ does not converge to $g$ in $W^{k,p}(\Omega)$. Then how did we prove that $g \in W^{k,p}(\Omega)$? We showed that the difference $(g - f)$ is in $W^{k,p}(\Omega)$.
Exercise 1.3.1
Let $X$ be a Banach space and $v_i \in X$. Show that the series $\sum_{i=1}^\infty v_i$ converges in $X$ if $\sum_{i=1}^\infty \|v_i\| < \infty$.
Hint: Show that the sequence of partial sums is a Cauchy sequence.
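The mechanism behind the exercise can be seen concretely in the simplest Banach space $X = R$ with the absolute value as norm. The series below is our own illustration: absolute summability bounds the gap between any two partial sums by a tail of the norm series, which forces the Cauchy property.

```python
# v_i = (-1)^i / 2^i: the sum of norms is a convergent geometric series
v = [(-1) ** i / 2 ** i for i in range(60)]

norm_sum = sum(abs(x) for x in v)   # sum of ||v_i||, equals 2 (up to rounding)
partial = []
s = 0.0
for x in v:
    s += x
    partial.append(s)

# |S_m - S_l| is dominated by the tail of the norm series: Cauchy property
tail = sum(abs(x) for x in v[40:])
print(norm_sum, partial[-1], tail)
```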
Step 2. For an unbounded region $\Omega$.
Given any $\delta > 0$, since $f \in W^{k,p}(\Omega)$, there exists $R > 0$ such that
\[ \|f\|_{W^{k,p}(\Omega \setminus B_{R-2}(0))} \le \delta. \tag{1.17} \]
Choose a cut-off function $\varphi \in C^\infty(R^n)$ satisfying
\[ \varphi(x) = \begin{cases} 1, & x \in B_{R-2}(0), \\ 0, & x \in R^n \setminus B_R(0); \end{cases} \qquad\text{and}\qquad |D^\alpha \varphi(x)| \le 1 \ \ \forall x \in R^n, \ \forall |\alpha| \le k. \]
Then by (1.17), it is easy to verify that there exists a constant $C$ such that
\[ \|\varphi f - f\|_{W^{k,p}(\Omega)} \le C\delta. \tag{1.18} \]
Now in the bounded domain $\Omega \cap B_R(0)$, by the argument in Step 1, there is a function $g \in C^\infty(\Omega \cap B_R(0))$ such that
\[ \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} \le \delta. \tag{1.19} \]
Obviously, the function $\varphi g$ is in $C^\infty(\Omega)$, and by (1.18) and (1.19),
\[ \|\varphi g - f\|_{W^{k,p}(\Omega)} \le \|\varphi g - \varphi f\|_{W^{k,p}(\Omega)} + \|\varphi f - f\|_{W^{k,p}(\Omega)} \le C_1 \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} + C\delta \le (C_1 + C)\,\delta. \]
This completes the proof of Theorem 1.3.2. $\Box$
The Proof of Theorem 1.3.3
We will still use mollifiers to approximate a function. Compared with the proofs of the previous theorems, the main difficulty here is that for a point on the boundary, there is no room to mollify. To circumvent this, we cover $\Omega$ with finitely many open sets, and on each set that covers the boundary layer, we will translate the function a little bit inward, so that there is room to mollify within $\Omega$. Then we will again use a partition of unity to complete the proof.
Step 1. Approximating in a small set covering $\partial\Omega$.
Let $x^o$ be any point on $\partial\Omega$. Since $\partial\Omega$ is $C^1$, we can make a $C^1$ change of coordinates locally, so that in the new coordinate system $(x_1, \dots, x_n)$ we can express, for a sufficiently small $r > 0$,
\[ B_r(x^o) \cap \Omega = \{ x \in B_r(x^o) \mid x_n > \Phi(x_1, \dots, x_{n-1}) \} \]
with some $C^1$ function $\Phi$.
Set
\[ D = \Omega \cap B_{r/2}(x^o). \]
Shift every point $x \in D$ in the $x_n$ direction by $a\epsilon$ units: define
\[ x^\epsilon = x + a\epsilon\, e_n \qquad\text{and}\qquad D^\epsilon = \{ x^\epsilon \mid x \in D \}. \]
This $D^\epsilon$ is obtained by shifting $D$ toward the inside of $\Omega$ by $a\epsilon$ units. Choose $a$ sufficiently large so that the ball $B_\epsilon(x)$ lies in $\Omega \cap B_r(x^o)$ for all $x \in D^\epsilon$ and for all small $\epsilon > 0$. Now there is room to mollify a given $W^{k,p}$ function $f$ on $D^\epsilon$ within $\Omega$. More precisely, we first translate $f$ a distance $a\epsilon$ in the $x_n$ direction to obtain $f^\epsilon(x) = f(x^\epsilon)$, then mollify it. We claim that
\[ J_\epsilon f^\epsilon \to f \ \text{ in } W^{k,p}(D). \]
Actually, for any multi-index $|\alpha| \le k$, we have
\[ \|D^\alpha (J_\epsilon f^\epsilon) - D^\alpha f\|_{L^p(D)} \le \|D^\alpha (J_\epsilon f^\epsilon) - D^\alpha f^\epsilon\|_{L^p(D)} + \|D^\alpha f^\epsilon - D^\alpha f\|_{L^p(D)}. \]
An argument similar to that in the proof of Theorem 1.3.1 implies that the first term on the right-hand side goes to zero as $\epsilon \to 0$, while the second term also vanishes in the process due to the continuity of translation in the $L^p$ norm.
Step 2. Applying the partition of unity.
Since $\partial\Omega$ is compact, we can find finitely many such sets $D$, call them $D_i$, $i = 1, 2, \dots, N$, whose union covers $\partial\Omega$. Given $\delta > 0$, from the argument in Step 1, for each $D_i$ there exists $g_i \in C^\infty(\bar D_i)$ such that
\[ \|g_i - f\|_{W^{k,p}(D_i)} \le \delta. \tag{1.20} \]
Choose an open set $D_0 \subset\subset \Omega$ such that $\Omega \subset \cup_{i=0}^N D_i$, and select a function $g_0 \in C^\infty(\bar D_0)$ such that
\[ \|g_0 - f\|_{W^{k,p}(D_0)} \le \delta. \tag{1.21} \]
Let $\{\eta_i\}$ be a smooth partition of unity subordinate to the open sets $\{D_i\}_{i=0}^N$. Define
\[ g = \sum_{i=0}^N \eta_i g_i. \]
Then obviously $g \in C^\infty(\bar\Omega)$, and $f = \sum_{i=0}^N \eta_i f$. Similar to the proof of Theorem 1.3.1, it follows from (1.20) and (1.21) that, for each $|\alpha| \le k$,
\[ \|D^\alpha g - D^\alpha f\|_{L^p(\Omega)} \le \sum_{i=0}^N \|D^\alpha(\eta_i g_i) - D^\alpha(\eta_i f)\|_{L^p(D_i)} \le C \sum_{i=0}^N \|g_i - f\|_{W^{k,p}(D_i)} \le C(N+1)\,\delta. \]
This completes the proof of Theorem 1.3.3. $\Box$
The Proof of Theorem 1.3.4.
Let $\varphi(r)$ be a $C_0^\infty$ cut-off function such that
\[ \varphi(r) = \begin{cases} 1, & 0 \le r \le 1; \\ \text{between 0 and 1}, & 1 < r < 2; \\ 0, & r \ge 2. \end{cases} \]
Then by a direct computation, one can show that
\[ \Big\| \varphi\Big(\frac{|x|}{R}\Big) f(x) - f(x) \Big\|_{W^{k,p}(R^n)} \to 0, \ \text{as } R \to \infty. \tag{1.22} \]
Thus there exists a sequence of numbers $\{R_m\}$ with $R_m \to \infty$ such that, for
\[ g_m(x) := \varphi\Big(\frac{|x|}{R_m}\Big) f(x), \]
we have
\[ \|g_m - f\|_{W^{k,p}(R^n)} \le \frac{1}{m}. \tag{1.23} \]
On the other hand, from the Approximation Theorem 1.3.3, for each fixed $m$,
\[ J_\epsilon(g_m) \to g_m, \ \text{as } \epsilon \to 0, \ \text{in } W^{k,p}(R^n). \]
Hence there exists $\epsilon_m$ such that, for
\[ f_m := J_{\epsilon_m}(g_m), \]
we have
\[ \|f_m - g_m\|_{W^{k,p}(R^n)} \le \frac{1}{m}. \tag{1.24} \]
Obviously, each function $f_m$ is in $C_0^\infty(R^n)$, and by (1.23) and (1.24),
\[ \|f_m - f\|_{W^{k,p}(R^n)} \le \|f_m - g_m\|_{W^{k,p}(R^n)} + \|g_m - f\|_{W^{k,p}(R^n)} \le \frac{2}{m} \to 0, \ \text{as } m \to \infty. \]
This completes the proof of Theorem 1.3.4. $\Box$
Now we have proved all four approximation theorems, which show that smooth functions are dense in the Sobolev spaces $W^{k,p}$. In other words, $W^{k,p}$ is the completion of $C^k$ under the norm $\|\cdot\|_{W^{k,p}}$. Later, particularly in the next three sections, when we derive various inequalities in Sobolev spaces, we can first work on smooth functions and then extend the results to the whole Sobolev space. In order to make such extensions more convenient, without going through an approximation process in each particular space, we prove the following Operator Completion Theorem in general Banach spaces.
Theorem 1.3.5 Let $D$ be a dense linear subspace of a normed space $X$, and let $Y$ be a Banach space. Assume that
\[ T : D \to Y \]
is a bounded linear map. Then there exists an extension $\bar T$ of $T$ from $D$ to the whole space $X$, such that $\bar T$ is a bounded linear operator from $X$ to $Y$,
\[ \|\bar T\| = \|T\|, \qquad\text{and}\qquad \bar T x = T x \ \ \forall x \in D. \]
Proof. Given any element $x^o \in X$, since $D$ is dense in $X$, there exists a sequence $\{x_i\} \subset D$ such that
\[ x_i \to x^o, \ \text{as } i \to \infty. \]
It follows that
\[ \|T x_i - T x_j\| \le \|T\|\, \|x_i - x_j\| \to 0, \ \text{as } i, j \to \infty. \]
This implies that $\{T x_i\}$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, $\{T x_i\}$ converges to some element $y^o$ in $Y$. Let
\[ \bar T x^o = y^o. \tag{1.25} \]
To see that (1.25) is well defined, suppose there is another sequence $\{x_i'\}$ that converges to $x^o$ in $X$ and $T x_i' \to y_1$ in $Y$. Then
\[ \|y_1 - y^o\| = \lim_{i\to\infty} \|T x_i' - T x_i\| \le C \lim_{i\to\infty} \|x_i' - x_i\| = 0. \]
Now, for $x \in D$, define $\bar T x = T x$; and for other $x \in X$, define $\bar T x$ by (1.25). Obviously, $\bar T$ is a linear operator. Moreover,
\[ \|\bar T x^o\| = \lim_{i\to\infty} \|T x_i\| \le \lim_{i\to\infty} \|T\|\,\|x_i\| = \|T\|\,\|x^o\|, \]
hence $\bar T$ is also bounded, with $\|\bar T\| = \|T\|$. This completes the proof. $\Box$
As a direct application of this Operator Completion Theorem, we prove the following Extension Theorem.

Theorem 1.3.6 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Then for any open set $O \supset \bar\Omega$, there exists a bounded linear operator $E : W^{k,p}(\Omega) \to W^{k,p}_0(O)$ such that $Eu = u$ a.e. in $\Omega$.
Proof. To define the extension operator $E$, we first work on functions $u \in C^k(\bar\Omega)$. Then we can apply the Operator Completion Theorem to extend $E$ to $W^{k,p}(\Omega)$, since $C^k(\bar\Omega)$ is dense in $W^{k,p}(\Omega)$ (see Theorem 1.3.3). From this density, it is easy to show that, for $u \in W^{k,p}(\Omega)$,
\[ E(u)(x) = u(x) \ \text{almost everywhere on } \Omega, \]
based on
\[ E(u)(x) = u(x) \ \text{on } \bar\Omega, \ \forall u \in C^k(\bar\Omega). \]
Now we prove the theorem for $u \in C^k(\bar\Omega)$. We define $E(u)$ in the following two steps.
Step 1. The special case when $\Omega$ is a half ball,
\[ B^+_r(0) := \{ x = (x', x_n) \in R^n \mid |x| < r, \ x_n > 0 \}. \]
Assume $u \in C^k(\overline{B^+_r(0)})$. We extend $u$ to be a $C^k$ function on the whole ball $B_r(0)$. Define
\[ \bar u(x', x_n) = \begin{cases} u(x', x_n), & x_n \ge 0, \\ \displaystyle\sum_{i=0}^k a_i\, u(x', -\lambda_i x_n), & x_n < 0. \end{cases} \]
To guarantee that all the partial derivatives $\frac{\partial^j \bar u(x', x_n)}{\partial x_n^j}$ up to order $k$ are continuous across the hyperplane $x_n = 0$, we first pick $\lambda_i$ such that
\[ 0 < \lambda_0 < \lambda_1 < \cdots < \lambda_k < 1. \]
Then we solve the algebraic system
\[ \sum_{i=0}^k a_i\, (-\lambda_i)^j = 1, \quad j = 0, 1, \dots, k, \]
to determine $a_0, a_1, \dots, a_k$. Since the $\lambda_i$ are distinct, the coefficient determinant $\det\big((-\lambda_i)^j\big)$ is a nonzero Vandermonde determinant, and hence the system has a unique solution. One can easily verify that, for such $\lambda_i$ and $a_i$, the extended function $\bar u$ is in $C^k(B_r(0))$.
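The reflection construction can be tested numerically in one variable. The sketch below is our own illustration (the order $k = 1$, the values $\lambda_0 = 1/4$, $\lambda_1 = 1/2$, and the sample function $u$ are arbitrary choices): it solves $\sum_i a_i(-\lambda_i)^j = 1$ for $j = 0, 1$ and checks by finite differences that the extension matches $u$ and $u'$ across $x_n = 0$.

```python
import math

# k = 1: reflection with two terms; lambda_i in (0,1), distinct (our choice)
lam = [0.25, 0.5]
# solve a0 + a1 = 1 and a0*(-lam0) + a1*(-lam1) = 1 (a 2x2 Vandermonde system)
a1 = (1.0 + lam[0]) / (lam[0] - lam[1])
a0 = 1.0 - a1

def u(x):
    # a sample smooth function on the half line x >= 0
    return math.sin(x) + x * x

def ubar(x):
    if x >= 0:
        return u(x)
    return a0 * u(-lam[0] * x) + a1 * u(-lam[1] * x)

# matching conditions sum a_i (-lam_i)^j = 1 for j = 0, 1
m0 = a0 + a1
m1 = a0 * (-lam[0]) + a1 * (-lam[1])

# C^1 matching across 0: central difference should approximate u'(0) = 1
h = 1e-6
d = (ubar(h) - ubar(-h)) / (2 * h)
print(a0, a1, m0, m1, d)
```

For $k = 1$ the closed-form solution is $a_1 = (1+\lambda_0)/(\lambda_0-\lambda_1)$, $a_0 = 1 - a_1$; for larger $k$ one would solve the $(k+1)\times(k+1)$ Vandermonde system.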
Step 2. Reduce general domains to the case of the half ball covered in Step 1 via domain transformations and a partition of unity.
Let $O$ be an open set containing $\bar\Omega$. Given any $x^o \in \partial\Omega$, there exists a neighborhood $U$ of $x^o$ and a $C^k$ transformation $\varphi : U \to R^n$ which satisfies: $\varphi(x^o) = 0$, $\varphi(U \cap \partial\Omega)$ lies on the hyperplane $x_n = 0$, and $\varphi(U \cap \Omega)$ is contained in $R^n_+$. Then there exists an $r_o > 0$ such that $B_{r_o}(0) \subset\subset \varphi(U)$, $D_{x^o} := \varphi^{-1}(B_{r_o}(0))$ is an open set containing $x^o$, and $\varphi \in C^k(D_{x^o})$. Choose $r_o$ sufficiently small so that $D_{x^o} \subset O$. All such $D_{x^o}$ together with $\Omega$ form an open covering of $\bar\Omega$; hence there exists a finite subcovering
\[ D_0 := \Omega, \ D_1, \dots, D_m \]
and a corresponding partition of unity
\[ \eta_0, \eta_1, \dots, \eta_m, \]
such that
\[ \eta_i \in C_0^\infty(D_i), \ i = 0, 1, \dots, m, \qquad\text{and}\qquad \sum_{i=0}^m \eta_i(x) = 1 \ \ \forall x \in \Omega. \]
Let $\varphi_i$ be the mapping associated with $D_i$ as described above, and let $\tilde u_i(y) = u(\varphi_i^{-1}(y))$ be the function defined on the half ball
\[ B^+_{r_i}(0) = \varphi_i(D_i \cap \bar\Omega), \quad i = 1, 2, \dots, m. \]
From Step 1, each $\tilde u_i(y)$ can be extended as a $C^k$ function $\bar u_i(y)$ onto the whole ball $B_{r_i}(0)$.
Now we can define the extension of $u$ from $\Omega$ to its neighborhood $O$ as
\[ E(u) = \sum_{i=1}^m \eta_i(x)\, \bar u_i(\varphi_i(x)) + \eta_0(x)\, u(x). \]
Obviously, $E(u) \in C^k_0(O)$, and
\[ \|E(u)\|_{W^{k,p}(O)} \le C\, \|u\|_{W^{k,p}(\Omega)}. \]
Moreover, for any $x \in \bar\Omega$, we have
\[ \begin{aligned} E(u)(x) &= \sum_{i=1}^m \eta_i(x)\, \bar u_i(\varphi_i(x)) + \eta_0(x)u(x) = \sum_{i=1}^m \eta_i(x)\, \tilde u_i(\varphi_i(x)) + \eta_0(x)u(x) \\ &= \sum_{i=1}^m \eta_i(x)\, u(\varphi_i^{-1}(\varphi_i(x))) + \eta_0(x)u(x) = \sum_{i=1}^m \eta_i(x)u(x) + \eta_0(x)u(x) \\ &= \Big( \sum_{i=0}^m \eta_i(x) \Big) u(x) = u(x). \end{aligned} \]
This completes the proof of the Extension Theorem. $\Box$
Remark 1.3.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem under a stronger assumption on $\partial\Omega$ ($C^k$ instead of $C^1$). From the Extension Theorem, one can derive immediately

Corollary 1.3.1 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Let $O$ be an open set containing $\bar\Omega$, and let
\[ O_\epsilon := \{ x \in R^n \mid \mathrm{dist}(x, O) \le \epsilon \}. \]
Then the linear operator
\[ J_\epsilon(E(u)) : W^{k,p}(\Omega) \to C_0^\infty(O_\epsilon) \]
is bounded, and for each $u \in W^{k,p}(\Omega)$, we have
\[ \|J_\epsilon(E(u)) - u\|_{W^{k,p}(\Omega)} \to 0, \ \text{as } \epsilon \to 0. \]
Here, as compared with the previous approximation theorems, the improvement is that one can write out the explicit form of the approximation.
1.4 Sobolev Embeddings

When we seek weak solutions of partial differential equations, we start with functions in a Sobolev space $W^{k,p}$. Then it is natural to ask whether the functions in this space also automatically belong to some other spaces. The following theorem answers the question and, at the same time, provides inequalities among the relevant norms.
Theorem 1.4.1 (General Sobolev inequalities).
Assume $\Omega$ is bounded and has a $C^1$ boundary. Let $u \in W^{k,p}(\Omega)$.
(i) If $k < \frac{n}{p}$, then $u \in L^q(\Omega)$ with $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$, and there exists a constant $C$ such that
\[ \|u\|_{L^q(\Omega)} \le C\, \|u\|_{W^{k,p}(\Omega)}. \]
(ii) If $k > \frac{n}{p}$, then $u \in C^{k - [\frac{n}{p}] - 1, \gamma}(\Omega)$, and there exists a constant $C$ such that
\[ \|u\|_{C^{k - [\frac{n}{p}] - 1, \gamma}(\Omega)} \le C\, \|u\|_{W^{k,p}(\Omega)}, \]
where
\[ \gamma = \begin{cases} 1 + \big[\frac{n}{p}\big] - \frac{n}{p}, & \text{if } \frac{n}{p} \text{ is not an integer}, \\ \text{any positive number} < 1, & \text{if } \frac{n}{p} \text{ is an integer}. \end{cases} \]
Here, $[b]$ is the integer part of the number $b$.
The proof of the Theorem is based upon several simpler theorems. We first consider functions in $W^{1,p}(\Omega)$. From the definition, these functions apparently belong to $L^q(\Omega)$ for $1 \le q \le p$. Naturally, one would expect more, and what is more meaningful is to find out how large this $q$ can be. Moreover, controlling the $L^q$ norm for larger $q$ by the $W^{1,p}$ norm, that is, by the norm of the derivatives
\[ \|Du\|_{L^p(\Omega)} = \Big( \int_\Omega |Du|^p\,dx \Big)^{\frac{1}{p}}, \]
would be more useful in practice.
For simplicity, we start with smooth functions with compact support in $R^n$. We would like to know for what values of $q$ we can establish the inequality
\[ \|u\|_{L^q(R^n)} \le C\, \|Du\|_{L^p(R^n)}, \tag{1.26} \]
with a constant $C$ independent of $u \in C_0^\infty(R^n)$. Now suppose (1.26) holds. Then it must also be true for the rescaled functions
\[ u_\lambda(x) = u(\lambda x), \]
that is,
\[ \|u_\lambda\|_{L^q(R^n)} \le C\, \|Du_\lambda\|_{L^p(R^n)}. \tag{1.27} \]
By substitution, we have
\[ \int_{R^n} |u_\lambda(x)|^q\,dx = \frac{1}{\lambda^n} \int_{R^n} |u(y)|^q\,dy, \qquad\text{and}\qquad \int_{R^n} |Du_\lambda|^p\,dx = \frac{\lambda^p}{\lambda^n} \int_{R^n} |Du(y)|^p\,dy. \]
It follows from (1.27) that
\[ \|u\|_{L^q(R^n)} \le C\, \lambda^{1 - \frac{n}{p} + \frac{n}{q}}\, \|Du\|_{L^p(R^n)}. \]
Therefore, in order for $C$ to be independent of $u$, it is necessary that the power of $\lambda$ here be zero, that is,
\[ q = \frac{np}{n-p}. \]
It turns out that this condition is also sufficient, as stated in the following theorem. Here, for $q$ to be positive, we must require $p < n$.
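The dilation bookkeeping can be double-checked with exact rational arithmetic. The snippet below is our own verification (the sample values $n = 3$, $p = 2$ are arbitrary): the exponent $1 - \frac{n}{p} + \frac{n}{q}$ vanishes precisely when $q = \frac{np}{n-p}$.

```python
from fractions import Fraction

def scaling_exponent(n, p, q):
    # power of lambda in ||u||_q <= C lambda^(1 - n/p + n/q) ||Du||_p
    return 1 - Fraction(n, 1) / p + Fraction(n, 1) / q

def p_star(n, p):
    # the Sobolev conjugate exponent np/(n - p), requires p < n
    return Fraction(n * p, n - p)

n, p = 3, 2
q = p_star(n, p)                 # equals 6 for n = 3, p = 2
e = scaling_exponent(n, p, q)
print(q, e)
```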
Theorem 1.4.2 (Gagliardo-Nirenberg-Sobolev inequality).
Assume that $1 \le p < n$. Then there exists a constant $C = C(p, n)$ such that
\[ \|u\|_{L^{p^*}(R^n)} \le C\, \|Du\|_{L^p(R^n)}, \quad u \in C^1_0(R^n), \tag{1.28} \]
where $p^* = \frac{np}{n-p}$.
Since any function in $W^{1,p}(R^n)$ can be approximated by a sequence of functions in $C^1_0(R^n)$, we derive immediately

Corollary 1.4.1 Inequality (1.28) holds for all functions $u$ in $W^{1,p}(R^n)$.
For functions in $W^{1,p}(\Omega)$, we can extend them to $W^{1,p}(R^n)$ functions by the Extension Theorem and arrive at

Theorem 1.4.3 Assume that $\Omega$ is a bounded, open subset of $R^n$ with $C^1$ boundary. Suppose that $1 \le p < n$ and $u \in W^{1,p}(\Omega)$. Then $u$ is in $L^{p^*}(\Omega)$ and there exists a constant $C = C(p, n, \Omega)$ such that
\[ \|u\|_{L^{p^*}(\Omega)} \le C\, \|u\|_{W^{1,p}(\Omega)}. \tag{1.29} \]
In the limiting case as $p \to n$, $p^* := \frac{np}{n-p} \to \infty$. One may then suspect that $u \in L^\infty$ when $p = n$. Unfortunately, this is true only in dimension one. For $n = 1$, from
\[ u(x) = \int_{-\infty}^x u'(t)\,dt, \]
we derive immediately that
\[ |u(x)| \le \int_{-\infty}^\infty |u'(t)|\,dt. \]
However, for $n > 1$, it is false. One counterexample is $u = \log\log\big(1 + \frac{1}{|x|}\big)$ on $\Omega = B_1(0)$. It belongs to $W^{1,n}(\Omega)$, but not to $L^\infty(\Omega)$. This is a more delicate situation, and we will deal with it later.
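For $n = 2$ the counterexample can be probed numerically. The sketch below is our own illustration (the cutoffs and the log-spaced grid are arbitrary choices): $u(r) = \log\log(1 + 1/r)$ grows without bound as $r \to 0$, while the radial Dirichlet integral $\int_0^1 |u'(r)|^2\, r\,dr$ stays bounded as the lower cutoff shrinks.

```python
import math

def u(r):
    return math.log(math.log(1.0 + 1.0 / r))

def energy_density(r):
    # |u'(r)|^2 * r  with  u'(r) = -1 / ( r (r+1) log(1 + 1/r) )
    L = math.log(1.0 + 1.0 / r)
    return 1.0 / (r * (1.0 + r) ** 2 * L ** 2)

def radial_energy(cutoff, m=4000):
    # trapezoid rule on a log-spaced grid from cutoff to 1 (dr = r d(log r))
    a = math.log(cutoff)
    h = -a / m
    total = 0.0
    for i in range(m + 1):
        r = math.exp(a + i * h)
        w = 0.5 if i in (0, m) else 1.0
        total += w * energy_density(r) * r * h
    return total

vals = [radial_energy(c) for c in (1e-4, 1e-8, 1e-12)]
print(u(1e-4), u(1e-12), vals)
```

Pushing the cutoff from $10^{-4}$ down to $10^{-12}$ adds less and less to the energy (the tail behaves like $1/\log(1/r)$), while $u$ keeps growing.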
Naturally, for $p > n$, one would expect $W^{1,p}$ to embed into better spaces. To get a rough idea of what these spaces might be, let us first consider the simplest case, $n = 1$ and $p > 1$. Obviously, for any $x, y \in R^1$ with $x < y$, we have
\[ u(y) - u(x) = \int_x^y u'(t)\,dt, \]
and consequently, by the Hölder inequality,
\[ |u(y) - u(x)| \le \int_x^y |u'(t)|\,dt \le \Big( \int_x^y |u'(t)|^p\,dt \Big)^{\frac{1}{p}} \Big( \int_x^y dt \Big)^{1 - \frac{1}{p}}. \]
It follows that
\[ \frac{|u(y) - u(x)|}{|y - x|^{1 - \frac{1}{p}}} \le \Big( \int_{-\infty}^\infty |u'(t)|^p\,dt \Big)^{\frac{1}{p}}. \]
Taking the supremum over all pairs $x, y$ in $R^1$, the left-hand side of the above inequality is the Hölder seminorm in $C^{0,\gamma}(R^1)$ with $\gamma = 1 - \frac{1}{p}$. This is indeed true in general, and we have
Theorem 1.4.4 (Morrey's inequality).
Assume $n < p \le \infty$. Then there exists a constant $C = C(p, n)$ such that
\[ \|u\|_{C^{0,\gamma}(R^n)} \le C\, \|u\|_{W^{1,p}(R^n)}, \quad u \in C^1(R^n), \]
where $\gamma = 1 - \frac{n}{p}$.
To establish an inequality like (1.28), we follow two basic principles. First, we consider the special case $\Omega = R^n$; instead of dealing with functions in $W^{k,p}$, we reduce the proof to functions with enough smoothness (this is just Theorem 1.4.2). Secondly, we deal with general domains by extending functions $u \in W^{1,p}(\Omega)$ to $W^{1,p}(R^n)$ via the Extension Theorem.

Here one sees that Theorem 1.4.2 is the 'key', and inequality (1.28) there provides the foundation for the proof of the Sobolev embedding. Often, the proof of (1.28) is called 'hard analysis', and the steps leading from (1.28) to (1.29) are called 'soft analysis'. In the following, we will show the 'soft' parts first and then the 'hard' ones: we first assume that Theorem 1.4.2 is true and derive Corollary 1.4.1 and Theorem 1.4.3; then we prove Theorem 1.4.2.
The Proof of Corollary 1.4.1.
Given any function $u \in W^{1,p}(R^n)$, by the Approximation Theorem, there exists a sequence $\{u_k\} \subset C_0^\infty(R^n)$ such that $\|u - u_k\|_{W^{1,p}(R^n)} \to 0$ as $k \to \infty$. Applying Theorem 1.4.2, we obtain
\[ \|u_i - u_j\|_{L^{p^*}(R^n)} \le C\, \|u_i - u_j\|_{W^{1,p}(R^n)} \to 0, \ \text{as } i, j \to \infty. \]
Thus $\{u_k\}$ also converges to $u$ in $L^{p^*}(R^n)$. Consequently, we arrive at
\[ \|u\|_{L^{p^*}(R^n)} = \lim \|u_k\|_{L^{p^*}(R^n)} \le \lim C\, \|u_k\|_{W^{1,p}(R^n)} = C\, \|u\|_{W^{1,p}(R^n)}. \]
This completes the proof of the Corollary. $\Box$
The Proof of Theorem 1.4.3.
Now for functions in $W^{1,p}(\Omega)$, to apply inequality (1.28), we first extend them to functions with compact support in $R^n$. More precisely, let $O$ be an open set that contains $\bar\Omega$. By the Extension Theorem (Theorem 1.3.6), for every $u$ in $W^{1,p}(\Omega)$ there exists a function $\tilde u$ in $W^{1,p}_0(O)$ such that
\[ \tilde u = u, \ \text{almost everywhere in } \Omega; \]
moreover, there exists a constant $C_1 = C_1(p, n, \Omega, O)$ such that
\[ \|\tilde u\|_{W^{1,p}(O)} \le C_1\, \|u\|_{W^{1,p}(\Omega)}. \tag{1.30} \]
Now we can apply the Gagliardo-Nirenberg-Sobolev inequality to $\tilde u$ to derive
\[ \|u\|_{L^{p^*}(\Omega)} \le \|\tilde u\|_{L^{p^*}(O)} \le C\, \|\tilde u\|_{W^{1,p}(O)} \le C C_1\, \|u\|_{W^{1,p}(\Omega)}. \]
This completes the proof of the Theorem. $\Box$
The Proof of Theorem 1.4.2.
We first establish the inequality for $p = 1$, i.e. we prove
\[ \Big( \int_{R^n} |u|^{\frac{n}{n-1}}\,dx \Big)^{\frac{n-1}{n}} \le \int_{R^n} |Du|\,dx. \tag{1.31} \]
Then we will apply (1.31) to $|u|^\gamma$ for a properly chosen $\gamma > 1$ to extend the inequality to the case $p > 1$.
We need

Lemma 1.4.1 (General Hölder inequality) Assume that
\[ u_i \in L^{p_i}(\Omega) \ \text{ for } i = 1, 2, \dots, m, \qquad\text{and}\qquad \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_m} = 1. \]
Then
\[ \int_\Omega |u_1 u_2 \cdots u_m|\,dx \le \prod_{i=1}^m \Big( \int_\Omega |u_i|^{p_i}\,dx \Big)^{\frac{1}{p_i}}. \tag{1.32} \]
The proof can be obtained by applying induction to the usual Hölder inequality for two functions.
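Lemma 1.4.1 is easy to sanity-check on a discrete measure (a sum over finitely many points). The snippet below is our own illustration; the vectors and the conjugate exponents $p_1 = p_2 = p_3 = 3$ are arbitrary choices.

```python
# discrete generalized Hoelder: sum |u1 u2 u3| <= prod_i (sum |ui|^pi)^(1/pi)
u1 = [0.5, -1.2, 2.0, 0.3]
u2 = [1.0, 0.4, -0.7, 2.5]
u3 = [-2.0, 0.9, 0.1, 1.1]
p = [3, 3, 3]   # 1/3 + 1/3 + 1/3 = 1

lhs = sum(abs(a * b * c) for a, b, c in zip(u1, u2, u3))
rhs = 1.0
for ui, pi in zip((u1, u2, u3), p):
    rhs *= sum(abs(x) ** pi for x in ui) ** (1.0 / pi)
print(lhs, rhs)
```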
Now we are ready to prove the Theorem.

Step 1. The case $p = 1$.
To better illustrate the idea, we first derive inequality (1.31) for $n = 2$; that is, we prove
\[ \int_{R^2} |u(x)|^2\,dx \le \Big( \int_{R^2} |Du|\,dx \Big)^2. \tag{1.33} \]
Since $u$ has compact support, we have
\[ u(x) = \int_{-\infty}^{x_1} \frac{\partial u}{\partial y_1}(y_1, x_2)\,dy_1. \]
It follows that
\[ |u(x)| \le \int_{-\infty}^\infty |Du(y_1, x_2)|\,dy_1. \]
Similarly, we have
\[ |u(x)| \le \int_{-\infty}^\infty |Du(x_1, y_2)|\,dy_2. \]
The above two inequalities together imply that
\[ |u(x)|^2 \le \int_{-\infty}^\infty |Du(y_1, x_2)|\,dy_1 \int_{-\infty}^\infty |Du(x_1, y_2)|\,dy_2. \]
Now, integrating both sides of the above inequality with respect to $x_1$ and $x_2$ from $-\infty$ to $\infty$, we arrive at (1.33).
Then we deal with the general situation $n > 2$. We write
\[ u(x) = \int_{-\infty}^{x_i} \frac{\partial u}{\partial y_i}(x_1, \dots, x_{i-1}, y_i, x_{i+1}, \dots, x_n)\,dy_i, \quad i = 1, 2, \dots, n. \]
Consequently,
\[ |u(x)| \le \int_{-\infty}^\infty |Du(x_1, \dots, y_i, \dots, x_n)|\,dy_i, \quad i = 1, 2, \dots, n. \]
It follows that
\[ |u(x)|^{\frac{n}{n-1}} \le \prod_{i=1}^n \Big( \int_{-\infty}^\infty |Du(x_1, \dots, y_i, \dots, x_n)|\,dy_i \Big)^{\frac{1}{n-1}}. \]
Integrating both sides with respect to $x_1$ and applying the general Hölder inequality (1.32), we obtain
\[ \begin{aligned} \int_{-\infty}^\infty |u|^{\frac{n}{n-1}}\,dx_1 &\le \Big( \int_{-\infty}^\infty |Du|\,dy_1 \Big)^{\frac{1}{n-1}} \int_{-\infty}^\infty \prod_{i=2}^n \Big( \int_{-\infty}^\infty |Du|\,dy_i \Big)^{\frac{1}{n-1}} dx_1 \\ &\le \Big( \int_{-\infty}^\infty |Du|\,dy_1 \Big)^{\frac{1}{n-1}} \prod_{i=2}^n \Big( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du|\,dx_1\,dy_i \Big)^{\frac{1}{n-1}}. \end{aligned} \]
Then we integrate the above inequality with respect to $x_2$. We have
\[ \int_{-\infty}^\infty \int_{-\infty}^\infty |u|^{\frac{n}{n-1}}\,dx_1 dx_2 \le \Big( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du|\,dx_1\,dy_2 \Big)^{\frac{1}{n-1}} \int_{-\infty}^\infty \Big[ \Big( \int_{-\infty}^\infty |Du|\,dy_1 \Big)^{\frac{1}{n-1}} \prod_{i=3}^n \Big( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du|\,dx_1\,dy_i \Big)^{\frac{1}{n-1}} \Big] dx_2. \]
Again, applying the general Hölder inequality, we arrive at
\[ \int_{-\infty}^\infty \int_{-\infty}^\infty |u|^{\frac{n}{n-1}}\,dx_1 dx_2 \le \Big( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du|\,dy_1\,dx_2 \Big)^{\frac{1}{n-1}} \Big( \int_{-\infty}^\infty \int_{-\infty}^\infty |Du|\,dx_1\,dy_2 \Big)^{\frac{1}{n-1}} \prod_{i=3}^n \Big( \int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty |Du|\,dx_1\,dx_2\,dy_i \Big)^{\frac{1}{n-1}}. \]
Continuing this way by integrating with respect to $x_3, \dots, x_{n-1}$, we deduce
\[ \begin{aligned} \int_{-\infty}^\infty \!\!\cdots\! \int_{-\infty}^\infty |u|^{\frac{n}{n-1}}\,dx_1 \cdots dx_{n-1} \le\ & \Big( \int \!\cdots\! \int |Du|\,dy_1\,dx_2 \cdots dx_{n-1} \Big)^{\frac{1}{n-1}} \Big( \int \!\cdots\! \int |Du|\,dx_1\,dy_2\,dx_3 \cdots dx_{n-1} \Big)^{\frac{1}{n-1}} \cdots \\ & \Big( \int \!\cdots\! \int |Du|\,dx_1 \cdots dx_{n-2}\,dy_{n-1} \Big)^{\frac{1}{n-1}} \Big( \int_{R^n} |Du|\,dx \Big)^{\frac{1}{n-1}}. \end{aligned} \]
Finally, integrating both sides with respect to $x_n$ and applying the general Hölder inequality, we obtain
\[ \int_{R^n} |u|^{\frac{n}{n-1}}\,dx \le \Big( \int_{R^n} |Du|\,dx \Big)^{\frac{n}{n-1}}. \]
This verifies (1.31).
Exercise 1.4.1 Write your own proof with all details for the cases n = 3 and
n = 4.
Step 2. The case $p > 1$.
Applying (1.31) to the function $|u|^\gamma$ with $\gamma > 1$ to be chosen later, and using the Hölder inequality, we have
\[ \Big( \int_{R^n} |u|^{\frac{\gamma n}{n-1}}\,dx \Big)^{\frac{n-1}{n}} \le \int_{R^n} \big| D(|u|^\gamma) \big|\,dx = \gamma \int_{R^n} |u|^{\gamma-1} |Du|\,dx \le \gamma \Big( \int_{R^n} |u|^{\frac{\gamma n}{n-1}}\,dx \Big)^{\frac{(\gamma-1)(n-1)}{\gamma n}} \Big( \int_{R^n} |Du|^{\frac{\gamma n}{\gamma+n-1}}\,dx \Big)^{\frac{\gamma+n-1}{\gamma n}}. \]
It follows that
\[ \Big( \int_{R^n} |u|^{\frac{\gamma n}{n-1}}\,dx \Big)^{\frac{n-1}{\gamma n}} \le \gamma \Big( \int_{R^n} |Du|^{\frac{\gamma n}{\gamma+n-1}}\,dx \Big)^{\frac{\gamma+n-1}{\gamma n}}. \]
Now choose $\gamma$ so that
\[ \frac{\gamma n}{\gamma + n - 1} = p, \quad\text{that is,}\quad \gamma = \frac{p(n-1)}{n-p}; \]
we obtain
\[ \Big( \int_{R^n} |u|^{\frac{np}{n-p}}\,dx \Big)^{\frac{n-p}{np}} \le \gamma \Big( \int_{R^n} |Du|^p\,dx \Big)^{\frac{1}{p}}. \]
This completes the proof of the Theorem. $\Box$
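The choice of $\gamma$ can be verified with exact arithmetic. The snippet below is our own check (the sample pairs $(n, p)$ are arbitrary): $\gamma = \frac{p(n-1)}{n-p}$ indeed gives $\frac{\gamma n}{\gamma + n - 1} = p$, and at the same time $\frac{\gamma n}{n-1}$ equals $p^* = \frac{np}{n-p}$, so the left-hand side carries exactly the $L^{p^*}$ norm.

```python
from fractions import Fraction

def check(n, p):
    g = Fraction(p * (n - 1), n - p)   # gamma = p(n-1)/(n-p)
    du_exp = g * n / (g + n - 1)       # exponent on |Du| after Hoelder
    u_exp = g * n / (n - 1)            # exponent on |u| on the left
    return du_exp == p and u_exp == Fraction(n * p, n - p)

print([check(n, p) for (n, p) in [(3, 2), (4, 3), (5, 2), (10, 7)]])
```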
The Proof of Theorem 1.4.4 (Morrey's inequality).
We will establish two inequalities:
\[ \sup_{R^n} |u| \le C\, \|u\|_{W^{1,p}(R^n)}, \tag{1.34} \]
and
\[ \sup_{x \ne y} \frac{|u(x) - u(y)|}{|x - y|^{1 - n/p}} \le C\, \|Du\|_{L^p(R^n)}. \tag{1.35} \]
Both of them can be derived from the following estimate:
\[ \frac{1}{|B_r(x)|} \int_{B_r(x)} |u(y) - u(x)|\,dy \le C \int_{B_r(x)} \frac{|Du(y)|}{|y - x|^{n-1}}\,dy, \tag{1.36} \]
where $|B_r(x)|$ is the volume of $B_r(x)$. We will carry out the proof in three steps. In Step 1, we prove (1.36); in Steps 2 and 3, we verify (1.34) and (1.35), respectively.
Step 1. We start from
\[ u(y) - u(x) = \int_0^1 \frac{d}{dt} u(x + t(y - x))\,dt = \int_0^s Du(x + \tau\omega) \cdot \omega\,d\tau, \]
where
\[ \omega = \frac{y - x}{|x - y|}, \quad s = |x - y|, \ \text{and hence } y = x + s\omega. \]
It follows that
\[ |u(x + s\omega) - u(x)| \le \int_0^s |Du(x + \tau\omega)|\,d\tau. \]
Integrating both sides with respect to $\omega$ over the unit sphere $\partial B_1(0)$, then converting the integral on the right-hand side from polar to rectangular coordinates, we obtain
\[ \begin{aligned} \int_{\partial B_1(0)} |u(x + s\omega) - u(x)|\,d\sigma &\le \int_0^s \int_{\partial B_1(0)} |Du(x + \tau\omega)|\,d\sigma\,d\tau \\ &= \int_0^s \int_{\partial B_1(0)} \frac{|Du(x + \tau\omega)|}{\tau^{n-1}}\, \tau^{n-1}\,d\sigma\,d\tau = \int_{B_s(x)} \frac{|Du(z)|}{|x - z|^{n-1}}\,dz. \end{aligned} \]
Multiplying both sides by $s^{n-1}$, integrating with respect to $s$ from $0$ to $r$, and taking into account that the integrand on the right-hand side is nonnegative, we arrive at
\[ \int_{B_r(x)} |u(y) - u(x)|\,dy \le \frac{r^n}{n} \int_{B_r(x)} \frac{|Du(y)|}{|x - y|^{n-1}}\,dy. \]
This verifies (1.36).
Step 2. For each fixed $x \in R^n$, we have
\[ |u(x)| = \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x)|\,dy \le \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x) - u(y)|\,dy + \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(y)|\,dy =: I_1 + I_2. \tag{1.37} \]
By (1.36) and the Hölder inequality, we deduce
\[ I_1 \le C \int_{B_1(x)} \frac{|Du(y)|}{|x - y|^{n-1}}\,dy \le C \Big( \int_{R^n} |Du|^p\,dy \Big)^{1/p} \Big( \int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}} \Big)^{\frac{p-1}{p}} \le C_1 \Big( \int_{R^n} |Du|^p\,dy \Big)^{1/p}. \tag{1.38} \]
Here we have used the condition $p > n$, so that $\frac{(n-1)p}{p-1} < n$, and hence the integral
\[ \int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}} \]
is finite.
Also, it is obvious that
\[ I_2 \le C\, \|u\|_{L^p(R^n)}. \tag{1.39} \]
Now (1.34) is an immediate consequence of (1.37), (1.38), and (1.39).
Step 3. For any pair of fixed points $x$ and $y$ in $R^n$, let $r = |x - y|$ and $D = B_r(x) \cap B_r(y)$. Then
\[ |u(x) - u(y)| \le \frac{1}{|D|} \int_D |u(x) - u(z)|\,dz + \frac{1}{|D|} \int_D |u(z) - u(y)|\,dz. \tag{1.40} \]
Again by (1.36), we have
\[ \frac{1}{|D|} \int_D |u(x) - u(z)|\,dz \le \frac{C}{|B_r(x)|} \int_{B_r(x)} |u(x) - u(z)|\,dz \le C \Big( \int_{R^n} |Du|^p\,dz \Big)^{1/p} \Big( \int_{B_r(x)} \frac{dz}{|x - z|^{\frac{(n-1)p}{p-1}}} \Big)^{\frac{p-1}{p}} = C r^{1 - n/p}\, \|Du\|_{L^p(R^n)}. \tag{1.41} \]
Similarly,
\[ \frac{1}{|D|} \int_D |u(z) - u(y)|\,dz \le C r^{1 - n/p}\, \|Du\|_{L^p(R^n)}. \tag{1.42} \]
Now (1.40), (1.41), and (1.42) yield
\[ |u(x) - u(y)| \le C |x - y|^{1 - n/p}\, \|Du\|_{L^p(R^n)}. \]
This implies (1.35) and thus completes the proof of the Theorem. $\Box$
The Proof of Theorem 1.4.1 (the General Sobolev Inequality).
(i) Assume that $u \in W^{k,p}(\Omega)$ with $k < \frac{n}{p}$. We want to show that $u \in L^q(\Omega)$ and
\[ \|u\|_{L^q(\Omega)} \le C\, \|u\|_{W^{k,p}(\Omega)}, \tag{1.43} \]
with $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$. This can be done by applying Theorem 1.4.3 successively on the integer $k$. Again denote $p^* = \frac{np}{n-p}$; then $\frac{1}{p^*} = \frac{1}{p} - \frac{1}{n}$. For $|\alpha| \le k - 1$, $D^\alpha u \in W^{1,p}(\Omega)$. By Theorem 1.4.3, we have $D^\alpha u \in L^{p^*}(\Omega)$; thus $u \in W^{k-1,p^*}(\Omega)$ and
\[ \|u\|_{W^{k-1,p^*}(\Omega)} \le C_1\, \|u\|_{W^{k,p}(\Omega)}. \]
Applying Theorem 1.4.3 again, now to $W^{k-1,p^*}(\Omega)$, we have $u \in W^{k-2,p^{**}}(\Omega)$ and
\[ \|u\|_{W^{k-2,p^{**}}(\Omega)} \le C_2\, \|u\|_{W^{k-1,p^*}(\Omega)} \le C_2 C_1\, \|u\|_{W^{k,p}(\Omega)}, \]
where
\[ p^{**} = \frac{np^*}{n - p^*}, \quad\text{or}\quad \frac{1}{p^{**}} = \frac{1}{p^*} - \frac{1}{n} = \Big( \frac{1}{p} - \frac{1}{n} \Big) - \frac{1}{n} = \frac{1}{p} - \frac{2}{n}. \]
Continuing this way $k$ times, we arrive at (1.43).
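The $k$-fold iteration amounts to the exponent recursion $\frac{1}{p_{j+1}} = \frac{1}{p_j} - \frac{1}{n}$. The loop below is our own check (the sample values $n = 8$, $p = 2$, $k = 3$ satisfy $k < n/p$): after $k$ steps one lands exactly at $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$.

```python
from fractions import Fraction

def iterate_exponent(n, p, k):
    inv = Fraction(1, p)
    for _ in range(k):              # one application of Theorem 1.4.3 per step
        inv = inv - Fraction(1, n)
        assert inv > 0              # valid while the current exponent stays below n
    return inv                      # equals 1/q

n, p, k = 8, 2, 3
inv_q = iterate_exponent(n, p, k)
print(inv_q, Fraction(1, p) - Fraction(k, n))
```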
(ii) Now assume that $k > \frac{n}{p}$. Recall that in Morrey's inequality
\[ \|u\|_{C^{0,\gamma}(\Omega)} \le C\, \|u\|_{W^{1,p}(\Omega)}, \tag{1.44} \]
we require $p > n$. However, in our situation this condition is not necessarily met. To remedy this, we can use the result of the previous step: we first decrease the order of differentiation to increase the power of summability. More precisely, we will find the smallest integer $m$ such that
\[ W^{k,p}(\Omega) \hookrightarrow W^{k-m,q}(\Omega) \]
with $q > n$. That is, we want
\[ q = \frac{np}{n - mp} > n, \]
or equivalently,
\[ m > \frac{n}{p} - 1. \]
Obviously, the smallest such integer is $m = \big[ \frac{n}{p} \big]$.
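That $m = [n/p]$ is indeed the smallest admissible integer can be confirmed by brute force. The snippet below is our own check (the sample pairs $(n, p)$ are arbitrary): it searches for the smallest $m$ with $\frac{np}{n - mp} > n$, treating $n = mp$ as the case $q = \infty$, and compares the result with the integer part of $n/p$.

```python
from fractions import Fraction

def smallest_m(n, p):
    # smallest integer m with q = np/(n - mp) > n  (q = infinity when n = mp)
    m = 0
    while True:
        d = n - m * p
        if d <= 0 or Fraction(n * p, d) > n:
            return m
        m += 1

samples = [(5, 2), (7, 2), (10, 3), (9, 3), (11, 4)]
print([(n, p, smallest_m(n, p), n // p) for n, p in samples])
```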
For this choice of $m$, we can apply Morrey's inequality (1.44) to $D^\alpha u$, with any $|\alpha| \le k - m - 1$, to obtain
\[ \|D^\alpha u\|_{C^{0,\gamma}(\Omega)} \le C_1\, \|u\|_{W^{k-m,q}(\Omega)} \le C_1 C_2\, \|u\|_{W^{k,p}(\Omega)}, \]
or equivalently,
\[ \|u\|_{C^{k - [\frac{n}{p}] - 1, \gamma}(\Omega)} \le C\, \|u\|_{W^{k,p}(\Omega)}. \]
Here, when $\frac{n}{p}$ is not an integer,
\[ \gamma = 1 - \frac{n}{q} = 1 + \Big[ \frac{n}{p} \Big] - \frac{n}{p}. \]
When $\frac{n}{p}$ is an integer, we have $m = \frac{n}{p}$, and in this case $q$ can be any number $> n$, which implies that $\gamma$ can be any positive number $< 1$.
This completes the proof of the Theorem. $\Box$
1.5 Compact Embedding

In the previous section, we proved that $W^{1,p}(\Omega)$ is embedded into $L^{p^*}(\Omega)$ with $p^* = \frac{np}{n-p}$. Obviously, for $q < p^*$, the embedding of $W^{1,p}(\Omega)$ into $L^q(\Omega)$ is still true if the region $\Omega$ is bounded. Actually, due to the strict inequality on the exponent, one can expect more, as stated below.
Theorem 1.5.1 (Rellich-Kondrachov Compact Embedding).
Assume that $\Omega$ is a bounded open subset in $R^n$ with $C^1$ boundary $\partial\Omega$. Suppose $1 \le p < n$. Then for each $1 \le q < p^*$, $W^{1,p}(\Omega)$ is compactly embedded into $L^q(\Omega)$,
\[ W^{1,p}(\Omega) \hookrightarrow\hookrightarrow L^q(\Omega), \]
in the sense that
i) there is a constant $C$ such that
\[ \|u\|_{L^q(\Omega)} \le C\, \|u\|_{W^{1,p}(\Omega)}, \quad \forall u \in W^{1,p}(\Omega); \tag{1.45} \]
and
ii) every bounded sequence in $W^{1,p}(\Omega)$ possesses a convergent subsequence in $L^q(\Omega)$.
Proof. The first part, inequality (1.45), is included in the general Embedding Theorem. What we need to show is that if $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$, then it possesses a convergent subsequence in $L^q(\Omega)$. This can be derived immediately from the following

Lemma 1.5.1 Every bounded sequence in $W^{1,1}(\Omega)$ possesses a convergent subsequence in $L^1(\Omega)$.
We postpone the proof of the Lemma for a moment and first see how it implies the Theorem.
In fact, assume that $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$. Then there exists a subsequence (still denoted by $\{u_k\}$) which converges weakly to an element $u_o$ in $W^{1,p}(\Omega)$. By the Sobolev embedding, $\{u_k\}$ is bounded in $L^{p^*}(\Omega)$. On the other hand, it is also bounded in $W^{1,1}(\Omega)$, since $p \ge 1$ and $\Omega$ is bounded. Now, by Lemma 1.5.1, there is a subsequence (still denoted by $\{u_k\}$) that converges strongly to $u_o$ in $L^1(\Omega)$. Applying the interpolation (Hölder) inequality
\[ \|u_k - u_o\|_{L^q(\Omega)} \le \|u_k - u_o\|_{L^1(\Omega)}^{\theta}\, \|u_k - u_o\|_{L^{p^*}(\Omega)}^{1-\theta}, \quad\text{where } \frac{1}{q} = \theta + \frac{1-\theta}{p^*}, \]
we conclude immediately that $\{u_k\}$ converges strongly to $u_o$ in $L^q(\Omega)$. This proves the Theorem.
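The interpolation step uses an exponent $\theta$ determined by $\frac{1}{q} = \frac{\theta}{1} + \frac{1-\theta}{p^*}$. The snippet below is our own check (the sample values $n = 3$, $p = 2$, $q = 4$ are arbitrary, with $1 \le q < p^* = 6$): it solves for $\theta$, confirms $\theta \in (0, 1]$, and verifies the interpolation inequality on a concrete discrete measure.

```python
from fractions import Fraction

def theta(q, pstar):
    # solve 1/q = theta/1 + (1 - theta)/pstar
    return (Fraction(1, q) - 1 / pstar) / (1 - 1 / pstar)

n, p = 3, 2
pstar = Fraction(n * p, n - p)      # = 6
q = 4
t = theta(q, pstar)

# check the interpolation inequality on a discrete measure (counting measure)
v = [0.3, -1.5, 2.2, 0.8, -0.4]
def norm(vec, r):
    return sum(abs(x) ** r for x in vec) ** (1.0 / r)

lhs = norm(v, q)
rhs = norm(v, 1) ** float(t) * norm(v, float(pstar)) ** (1.0 - float(t))
print(t, lhs, rhs)
```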
We now come back to prove the Lemma. Let $\{u_k\}$ be a bounded sequence in $W^{1,1}(\Omega)$. We will show the strong convergence of a subsequence in three steps, with the help of the family of mollifications
\[ u_k^\epsilon(x) = \int_\Omega j_\epsilon(y)\, u_k(x - y)\,dy. \]
First, we show that
\[ u_k^\epsilon \to u_k \ \text{in } L^1(\Omega) \ \text{as } \epsilon \to 0, \ \text{uniformly in } k. \tag{1.46} \]
Then, for each fixed $\epsilon > 0$, we prove that
\[ \text{there is a subsequence of } \{u_k^\epsilon\} \text{ which converges uniformly.} \tag{1.47} \]
Finally, corresponding to the above convergent sequence $\{u_k^\epsilon\}$, we extract diagonally a subsequence of $\{u_k\}$ which converges strongly in $L^1(\Omega)$.

Based on the Extension Theorem, we may assume, without loss of generality, that $\Omega = R^n$, that the functions $\{u_k\}$ all have compact support in a bounded open set $G \subset R^n$, and that
\[ \|u_k\|_{W^{1,1}(G)} \le C < \infty, \ \text{for all } k = 1, 2, \dots \]
Since every $W^{1,1}$ function can be approximated by a sequence of smooth functions, we may also assume that each $u_k$ is smooth.
Step 1. From the properties of mollifiers, we have
\[ \begin{aligned} u_k^\epsilon(x) - u_k(x) &= \int_{B_1(0)} j(y)\, [u_k(x - \epsilon y) - u_k(x)]\,dy \\ &= \int_{B_1(0)} j(y) \int_0^1 \frac{d}{dt} u_k(x - \epsilon t y)\,dt\,dy = -\epsilon \int_{B_1(0)} j(y) \int_0^1 Du_k(x - \epsilon t y) \cdot y\,dt\,dy. \end{aligned} \]
Integrating with respect to $x$ and changing the order of integration, we obtain
$$\begin{aligned}
\|u_k^\epsilon - u_k\|_{L^1(G)} &\leq \epsilon \int_{B_1(0)} j(y) \int_0^1 \int_G |Du_k(x - \epsilon t y)|\, dx\, dt\, dy\\
&\leq \epsilon \int_G |Du_k(z)|\, dz \ \leq\ \epsilon\, \|u_k\|_{W^{1,1}(G)}. \quad (1.48)
\end{aligned}$$
It follows that
$$\|u_k^\epsilon - u_k\|_{L^1(G)} \to 0, \ \text{ as } \epsilon \to 0, \ \text{uniformly in } k. \quad (1.49)$$
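The estimate (1.48) is easy to observe numerically. The sketch below is our own illustration (a periodic function on a grid, and a triangular kernel of our choosing rather than the standard smooth mollifier): the $L^1$ mollification error is bounded by $\epsilon\,\|Du\|_{L^1}$ and tends to $0$ with $\epsilon$.

```python
import math

# Discrete illustration of Step 1: for a smooth periodic u and a triangular
# "mollifier" supported in [-eps, eps], ||u_eps - u||_{L^1} <= eps * ||Du||_{L^1}
# and the error decreases as eps -> 0.  (Periodicity avoids boundary effects.)

N = 800
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]
u = [math.sin(2 * math.pi * x) for x in xs]
du_l1 = sum(abs(u[(i + 1) % N] - u[i]) for i in range(N))  # ~ ||Du||_{L^1} = 4

def mollify(u, eps):
    m = max(1, int(round(eps / h)))
    w = [max(0.0, 1.0 - abs(k) * h / eps) for k in range(-m, m + 1)]
    total = sum(w)
    w = [wk / total for wk in w]              # discrete kernel sums to 1
    return [sum(w[k + m] * u[(i - k) % N] for k in range(-m, m + 1))
            for i in range(N)]

pairs = []
for eps in (0.2, 0.1, 0.05):
    err = sum(abs(a - b) for a, b in zip(mollify(u, eps), u)) * h
    pairs.append((eps, err))
print(pairs)
```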
Step 2. Now fix an $\epsilon > 0$. Then for all $x \in R^n$ and for all $k = 1, 2, \cdots$, we have
$$|u_k^\epsilon(x)| \leq \int_{B_\epsilon(x)} j_\epsilon(x-y)\, |u_k(y)|\, dy \leq \|j_\epsilon\|_{L^\infty(R^n)}\, \|u_k\|_{L^1(G)} \leq \frac{C}{\epsilon^n} < \infty. \quad (1.50)$$
Similarly,
$$|Du_k^\epsilon(x)| \leq \frac{C}{\epsilon^{n+1}} < \infty. \quad (1.51)$$
(1.50) and (1.51) imply that, for each fixed $\epsilon > 0$, the sequence $\{u_k^\epsilon\}$ is uniformly bounded and equicontinuous. Therefore, by the Arzelà–Ascoli Theorem (see the Appendix), it possesses a subsequence (still denoted by $\{u_k^\epsilon\}$) which converges uniformly on $G$; in particular,
$$\lim_{j,i \to \infty} \|u_j^\epsilon - u_i^\epsilon\|_{L^1(G)} = 0. \quad (1.52)$$
Step 3. Now choose $\epsilon$ to be $1, 1/2, 1/3, \cdots, 1/k, \cdots$ successively, and denote the corresponding subsequences that satisfy (1.52) by
$$u^{1/k}_{k1},\ u^{1/k}_{k2},\ u^{1/k}_{k3},\ \cdots$$
for $k = 1, 2, 3, \cdots$. Pick the diagonal subsequence from the above:
$$\{u^{1/i}_{ii}\} \subset \{u_k^\epsilon\}.$$
Then select the corresponding subsequence from $\{u_k\}$:
$$u_{11},\ u_{22},\ u_{33},\ \cdots.$$
This is our desired subsequence, because
$$\|u_{ii} - u_{jj}\|_{L^1(G)} \leq \|u_{ii} - u^{1/i}_{ii}\|_{L^1(G)} + \|u^{1/i}_{ii} - u^{1/j}_{jj}\|_{L^1(G)} + \|u^{1/j}_{jj} - u_{jj}\|_{L^1(G)} \to 0, \ \text{ as } i, j \to \infty,$$
due to the fact that each of the norms on the right hand side tends to $0$.

This completes the proof of the Lemma and hence the Theorem. $\Box$
1.6 Other Basic Inequalities

1.6.1 Poincaré's Inequality

For functions that vanish on the boundary, we have

Theorem 1.6.1 (Poincaré's Inequality I).
Assume $\Omega$ is bounded. Suppose $u \in W^{1,p}_0(\Omega)$ for some $1 \leq p \leq \infty$. Then
$$\|u\|_{L^p(\Omega)} \leq C\, \|Du\|_{L^p(\Omega)}. \quad (1.53)$$
Remark 1.6.1 i) Based on this inequality, one can take as an equivalent norm of $W^{1,p}_0(\Omega)$
$$\|u\|_{W^{1,p}_0(\Omega)} = \|Du\|_{L^p(\Omega)}.$$
ii) Just for this theorem, one can easily prove it by using the Sobolev inequality $\|u\|_{L^p} \leq C\, \|Du\|_{L^q}$ with $p = \frac{nq}{n-q}$ and the Hölder inequality. However, the one we present in the following is a unified proof that works for both this and the next theorem.
Proof. For convenience, we abbreviate $\|u\|_{L^p(\Omega)}$ as $\|u\|_p$. Suppose inequality (1.53) does not hold. Then there exists a sequence $\{u_k\} \subset W^{1,p}_0(\Omega)$ such that
$$\|Du_k\|_p = 1, \ \text{ while } \|u_k\|_p \to \infty, \ \text{ as } k \to \infty.$$
Since $C^\infty_0(\Omega)$ is dense in $W^{1,p}_0(\Omega)$, we may assume that $\{u_k\} \subset C^\infty_0(\Omega)$. Let $v_k = \frac{u_k}{\|u_k\|_p}$. Then
$$\|v_k\|_p = 1, \ \text{ and } \ \|Dv_k\|_p \to 0.$$
Consequently, $\{v_k\}$ is bounded in $W^{1,p}(\Omega)$ and hence possesses a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v_o \in W^{1,p}(\Omega)$. From the compact embedding results in the previous section, $\{v_k\}$ converges strongly to $v_o$ in $L^p(\Omega)$, and therefore
$$\|v_o\|_p = 1. \quad (1.54)$$
On the other hand, for each $\phi \in C^\infty_0(\Omega)$,
$$\int_\Omega v_o\, \phi_{x_i}\, dx = \lim_{k \to \infty} \int_\Omega v_k\, \phi_{x_i}\, dx = -\lim_{k \to \infty} \int_\Omega v_{k,x_i}\, \phi\, dx = 0.$$
It follows that
$$Dv_o(x) = 0, \quad \text{a.e.}$$
Thus $v_o$ is a constant. Taking into account that $v_k \in C^\infty_0(\Omega)$, we must have $v_o \equiv 0$. This contradicts (1.54) and therefore completes the proof of the Theorem. $\Box$
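For $p = 2$ on the interval $\Omega = (0,1)$, the best constant in (1.53) is $C = 1/\pi$, attained by $\sin(\pi x)$. A quick numerical confirmation (our own sketch, midpoint quadrature, not part of the book's proof):

```python
import math

# Check of Poincare I on Omega = (0,1), p = 2:  ||u||_2 <= (1/pi) ||u'||_2
# for u vanishing at the boundary; sin(pi x) attains equality.

def ratio(u, du, N=4000):
    h = 1.0 / N
    xs = [(i + 0.5) * h for i in range(N)]
    num = math.sqrt(sum(u(x) ** 2 for x in xs) * h)
    den = math.sqrt(sum(du(x) ** 2 for x in xs) * h)
    return num / den

r1 = ratio(lambda x: x * (1 - x), lambda x: 1 - 2 * x)               # ~ 0.316
r2 = ratio(lambda x: math.sin(math.pi * x),
           lambda x: math.pi * math.cos(math.pi * x))                # ~ 1/pi
print(r1, r2)
```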
For functions in $W^{1,p}(\Omega)$, which may not vanish on the boundary, we have another version of Poincaré's inequality.

Theorem 1.6.2 (Poincaré's Inequality II).
Let $\Omega$ be a bounded, connected, open subset of $R^n$ with $C^1$ boundary, and let $\bar{u}$ be the average of $u$ on $\Omega$. Assume $1 \leq p \leq \infty$. Then there exists a constant $C = C(n, p, \Omega)$ such that
$$\|u - \bar{u}\|_p \leq C\, \|Du\|_p, \quad \forall\, u \in W^{1,p}(\Omega). \quad (1.55)$$
The proof of this Theorem is similar to the previous one. Instead of letting $v_k = \frac{u_k}{\|u_k\|_p}$, we choose
$$v_k = \frac{u_k - \bar{u}_k}{\|u_k - \bar{u}_k\|_p}.$$

Remark 1.6.2 The connectedness of $\Omega$ is essential in this version of the inequality. A simple counterexample is given by $n = 1$, $\Omega = [0,1] \cup [2,3]$, and
$$u(x) = \begin{cases} -1 & \text{for } x \in [0,1],\\ \ \ 1 & \text{for } x \in [2,3]. \end{cases}$$
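The counterexample made concrete (a sketch of our own, with $p = 2$): on the disconnected set $[0,1] \cup [2,3]$ this $u$ has $Du = 0$ yet $u - \bar{u} \not\equiv 0$, so no constant $C$ can work in (1.55).

```python
# On Omega = [0,1] U [2,3], take u = -1 on the first piece, +1 on the second.
# Then u_bar = 0, ||u - u_bar||_2 = sqrt(2), but ||Du||_2 = 0 (u is locally
# constant on each component), so (1.55) fails for every C.

N = 1000
h = 1.0 / N
u = [-1.0] * N + [1.0] * N        # midpoint samples on [0,1], then on [2,3]

u_bar = sum(u) * h / 2.0          # |Omega| = 2
l2_dev = (sum((v - u_bar) ** 2 for v in u) * h) ** 0.5
du_l2 = 0.0                       # derivative vanishes on each component

print(u_bar, l2_dev, du_l2)
```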
1.6.2 The Classical Hardy–Littlewood–Sobolev Inequality

Theorem 1.6.3 (Hardy–Littlewood–Sobolev Inequality).
Let $0 < \lambda < n$ and let $s, r > 1$ be such that
$$\frac{1}{r} + \frac{1}{s} + \frac{\lambda}{n} = 2.$$
Assume that $f \in L^r(R^n)$ and $g \in L^s(R^n)$. Then
$$\int_{R^n} \int_{R^n} f(x)\, |x-y|^{-\lambda}\, g(y)\, dx\, dy \leq C(n, s, \lambda)\, \|f\|_r\, \|g\|_s, \quad (1.56)$$
where
$$C(n, s, \lambda) = \frac{n\, |B^n|^{\lambda/n}}{(n-\lambda)\, r s} \left[ \left( \frac{\lambda/n}{1 - 1/r} \right)^{\lambda/n} + \left( \frac{\lambda/n}{1 - 1/s} \right)^{\lambda/n} \right],$$
with $|B^n|$ being the volume of the unit ball in $R^n$, and where
$$\|f\|_r := \|f\|_{L^r(R^n)}.$$
Proof. Without loss of generality, we may assume that both $f$ and $g$ are nonnegative and that $\|f\|_r = 1 = \|g\|_s$. Let
$$\chi_G(x) = \begin{cases} 1 & \text{if } x \in G,\\ 0 & \text{if } x \notin G \end{cases}$$
be the characteristic function of the set $G$. Then one can see that
$$f(x) = \int_0^\infty \chi_{\{f>a\}}(x)\, da, \quad (1.57)$$
$$g(x) = \int_0^\infty \chi_{\{g>b\}}(x)\, db, \quad (1.58)$$
$$|x|^{-\lambda} = \lambda \int_0^\infty c^{-\lambda-1}\, \chi_{\{|x|<c\}}(x)\, dc. \quad (1.59)$$
To see the last identity, one may first write
$$|x|^{-\lambda} = \int_0^\infty \chi_{\{|x|^{-\lambda} > \tilde{c}\}}(x)\, d\tilde{c},$$
and then let $\tilde{c} = c^{-\lambda}$.
Substituting (1.57), (1.58), and (1.59) into the left hand side of (1.56), we have
$$\begin{aligned}
I &:= \int_{R^n} \int_{R^n} f(x)\, |x-y|^{-\lambda}\, g(y)\, dx\, dy\\
&= \lambda \int_0^\infty \int_0^\infty \int_0^\infty \int_{R^n} \int_{R^n} c^{-\lambda-1}\, \chi_{\{f>a\}}(x)\, \chi_{\{|x|<c\}}(x-y)\, \chi_{\{g>b\}}(y)\, dx\, dy\, dc\, db\, da\\
&= \lambda \int_0^\infty \int_0^\infty \int_0^\infty c^{-\lambda-1}\, I(a, b, c)\, dc\, db\, da, \quad (1.60)
\end{aligned}$$
where $I(a, b, c)$ denotes the inner double integral over $R^n \times R^n$.
Let
$$u(c) = |B^n|\, c^n,$$
the volume of the ball of radius $c$, and let
$$v(a) = \int_{R^n} \chi_{\{f>a\}}(x)\, dx, \qquad w(b) = \int_{R^n} \chi_{\{g>b\}}(y)\, dy,$$
the measures of the sets $\{x \mid f(x) > a\}$ and $\{y \mid g(y) > b\}$, respectively. Then we can express the norms as
$$\|f\|_r^r = r \int_0^\infty a^{r-1}\, v(a)\, da = 1 \quad \text{and} \quad \|g\|_s^s = s \int_0^\infty b^{s-1}\, w(b)\, db = 1. \quad (1.61)$$
It is easy to see that
$$I(a, b, c) \leq \int_{R^n} \int_{R^n} \chi_{\{|x|<c\}}(x-y)\, \chi_{\{g>b\}}(y)\, dx\, dy \leq \int_{R^n} u(c)\, \chi_{\{g>b\}}(y)\, dy = u(c)\, w(b).$$
Similarly, one can show that $I(a, b, c)$ is bounded above by the other pairs and arrive at
$$I(a, b, c) \leq \min\{u(c)\, w(b),\ u(c)\, v(a),\ v(a)\, w(b)\}. \quad (1.62)$$
We integrate with respect to $c$ first. By (1.62), we have
$$\begin{aligned}
\int_0^\infty c^{-\lambda-1}\, I(a, b, c)\, dc
&\leq \int_{u(c) \leq v(a)} c^{-\lambda-1}\, w(b)\, u(c)\, dc + \int_{u(c) > v(a)} c^{-\lambda-1}\, w(b)\, v(a)\, dc\\
&= w(b)\, |B^n| \int_0^{(v(a)/|B^n|)^{1/n}} c^{n-\lambda-1}\, dc + w(b)\, v(a) \int_{(v(a)/|B^n|)^{1/n}}^\infty c^{-\lambda-1}\, dc\\
&= \frac{|B^n|^{\lambda/n}}{n-\lambda}\, w(b)\, v(a)^{1-\lambda/n} + \frac{|B^n|^{\lambda/n}}{\lambda}\, w(b)\, v(a)^{1-\lambda/n}\\
&= \frac{n\, |B^n|^{\lambda/n}}{\lambda\,(n-\lambda)}\, w(b)\, v(a)^{1-\lambda/n}. \quad (1.63)
\end{aligned}$$
Exchanging the roles of $v(a)$ and $w(b)$ in the second line of (1.63), we also obtain
$$\begin{aligned}
\int_0^\infty c^{-\lambda-1}\, I(a, b, c)\, dc
&\leq \int_{u(c) \leq w(b)} c^{-\lambda-1}\, v(a)\, u(c)\, dc + \int_{u(c) > w(b)} c^{-\lambda-1}\, w(b)\, v(a)\, dc\\
&\leq \frac{n\, |B^n|^{\lambda/n}}{\lambda\,(n-\lambda)}\, v(a)\, w(b)^{1-\lambda/n}. \quad (1.64)
\end{aligned}$$
In view of (1.61), we split the $b$-integral into two parts, one from $0$ to $a^{r/s}$ and the other from $a^{r/s}$ to $\infty$. By virtue of (1.60), (1.63), and (1.64), we derive
$$I \leq \frac{n\, |B^n|^{\lambda/n}}{n-\lambda} \left[ \int_0^\infty v(a) \int_0^{a^{r/s}} w(b)^{1-\lambda/n}\, db\, da + \int_0^\infty v(a)^{1-\lambda/n} \int_{a^{r/s}}^\infty w(b)\, db\, da \right]. \quad (1.65)$$
To estimate the first integral in (1.65), we use the Hölder inequality with $m = (s-1)(1-\lambda/n)$:
$$\begin{aligned}
\int_0^{a^{r/s}} w(b)^{1-\lambda/n}\, b^m\, b^{-m}\, db
&\leq \left( \int_0^{a^{r/s}} w(b)\, b^{s-1}\, db \right)^{1-\lambda/n} \left( \int_0^{a^{r/s}} b^{-mn/\lambda}\, db \right)^{\lambda/n}\\
&\leq \left( \frac{\lambda}{n - s(n-\lambda)} \right)^{\lambda/n} \left( \int_0^{a^{r/s}} w(b)\, b^{s-1}\, db \right)^{1-\lambda/n} a^{r-1}, \quad (1.66)
\end{aligned}$$
since $mn/\lambda < 1$. It follows that the first integral in (1.65) is bounded above by
$$\left( \frac{\lambda}{n - s(n-\lambda)} \right)^{\lambda/n} \int_0^\infty v(a)\, a^{r-1}\, da \left( \int_0^\infty w(b)\, b^{s-1}\, db \right)^{1-\lambda/n} = \frac{1}{rs} \left( \frac{\lambda/n}{1 - 1/r} \right)^{\lambda/n}. \quad (1.67)$$
To estimate the second integral in (1.65), we first rewrite it as
$$\int_0^\infty w(b) \int_0^{b^{s/r}} v(a)^{1-\lambda/n}\, da\, db;$$
then an analogous computation shows that it is bounded above by
$$\frac{1}{rs} \left( \frac{\lambda/n}{1 - 1/s} \right)^{\lambda/n}. \quad (1.68)$$
Now the desired Hardy–Littlewood–Sobolev inequality follows directly from (1.65), (1.67), and (1.68). $\Box$
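The inequality and the size of the constant can be sampled numerically. The sketch below (our own illustration) takes $n = 1$, $\lambda = 1/2$, $r = s = 4/3$ (so that $1/r + 1/s + \lambda/n = 2$) and $f = g = \chi_{[0,1]}$; then $\|f\|_r = \|g\|_s = 1$, the double integral equals $8/3$ exactly, and the constant above evaluates to $4.5$.

```python
# HLS in dimension n = 1 with lambda = 1/2, r = s = 4/3, f = g = chi_[0,1]:
# exactly I = \int\int |x - y|^{-1/2} dx dy over [0,1]^2 = 8/3, while the
# constant C(n, s, lambda) from the theorem equals 4.5, so I <= C * 1 * 1.
# I is evaluated by offset midpoint grids (the offsets keep x != y).

n, lam, r, s = 1, 0.5, 4.0 / 3.0, 4.0 / 3.0
B1 = 2.0  # "volume" of the unit ball in R^1

C = (n * B1 ** (lam / n) / ((n - lam) * r * s)) * (
    ((lam / n) / (1 - 1 / r)) ** (lam / n) + ((lam / n) / (1 - 1 / s)) ** (lam / n)
)

N = 500
h = 1.0 / N
I_num = sum(
    abs((i + 0.5) * h - (j + 0.25) * h) ** (-lam) * h * h
    for i in range(N) for j in range(N)
)
print(C, I_num)
```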
Theorem 1.6.4 (An equivalent form of the Hardy–Littlewood–Sobolev inequality).
Let $g \in L^p(R^n)$ for $\frac{n}{n-\alpha} < p < \infty$. Define
$$Tg(x) = \int_{R^n} |x-y|^{\alpha-n}\, g(y)\, dy.$$
Then
$$\|Tg\|_p \leq C(n, p, \alpha)\, \|g\|_{\frac{np}{n+\alpha p}}. \quad (1.69)$$
Proof. By the classical Hardy–Littlewood–Sobolev inequality, we have
$$\langle f, Tg \rangle = \langle Tf, g \rangle \leq C(n, s, \alpha)\, \|f\|_r\, \|g\|_s,$$
where $\langle\, \cdot\,, \cdot\, \rangle$ is the $L^2$ inner product. Consequently,
$$\|Tg\|_p = \sup_{\|f\|_r = 1} \langle f, Tg \rangle \leq C(n, s, \alpha)\, \|g\|_s,$$
where
$$\frac{1}{p} + \frac{1}{r} = 1, \qquad \frac{1}{r} + \frac{1}{s} = \frac{n+\alpha}{n}.$$
Solving for $s$, we arrive at
$$s = \frac{np}{n + \alpha p}.$$
This completes the proof of the Theorem. $\Box$
Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev inequality, let's rewrite it as
$$\|Tg\|_{\frac{nq}{n-\alpha q}} \leq C\, \|g\|_q \quad (1.70)$$
with
$$1 < q := \frac{np}{n + \alpha p} < \frac{n}{\alpha}.$$
Let $u = Tg$. Then one can show that (see [CLO])
$$(-\Delta)^{\alpha/2} u = g.$$
Now inequality (1.70) becomes the Sobolev one:
$$\|u\|_{\frac{nq}{n-\alpha q}} \leq C\, \|(-\Delta)^{\alpha/2} u\|_q.$$
2 Existence of Weak Solutions

2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Methods of Linear Functional Analysis
2.3.1 Linear Equations
2.3.2 Some Basic Principles in Functional Analysis
2.3.3 Existence of Weak Solutions
2.4 Variational Methods
2.4.1 Semilinear Equations
2.4.2 Calculus of Variations
2.4.3 Existence of Minimizers
2.4.4 Existence of Minimizers under Constraints
2.4.5 Minimax Critical Points
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Many physical models come with natural variational structures, where one can minimize a functional, usually an energy $E(u)$. The minimizers are then weak solutions of the related partial differential equation
$$E'(u) = 0.$$
For example, for an open bounded domain $\Omega \subset R^n$ and for $f \in L^2(\Omega)$, the functional
$$E(u) = \frac{1}{2} \int_\Omega |\nabla u|^2\, dx - \int_\Omega f u\, dx$$
possesses a minimizer among all $u \in H^1_0(\Omega)$ (try to show this by using the knowledge from the previous chapter). As we will see in this chapter, the minimizer is a weak solution of the Dirichlet boundary value problem
$$\begin{cases} -\Delta u = f(x), & x \in \Omega,\\ \ \ u(x) = 0, & x \in \partial\Omega. \end{cases}$$
In practice, it is often much easier to obtain a weak solution than a classical one. One of the seven Millennium Prize Problems posed by the Clay Mathematics Institute is to show that certain suitable weak solutions of the Navier–Stokes equations with good initial values are in fact classical solutions for all time.

In this chapter, we consider the existence of weak solutions to second order elliptic equations, both linear and semilinear.

For linear equations, we will use the Lax–Milgram Theorem to show the existence of weak solutions.

For semilinear equations, we will introduce variational methods. We will consider the corresponding functionals in proper Hilbert spaces and seek weak solutions of the equations associated with the critical points of the functionals. We will use examples to show how to find a minimizer, a minimizer under constraint, and a minimax critical point. The well-known Mountain Pass Theorem is introduced and applied to show the existence of such a minimax.

Showing that a weak solution possesses the desired differentiability, so that it is actually a classical one, is the subject of regularity theory, which will be studied in Chapter 3.
2.1 Second Order Elliptic Operators

Let $\Omega$ be an open bounded set in $R^n$. Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. Consider the second order partial differential operator $L$, either in the divergence form
$$Lu = -\sum_{i,j=1}^n \left( a_{ij}(x)\, u_{x_i} \right)_{x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} + c(x)\, u, \quad (2.1)$$
or in the non-divergence form
$$Lu = -\sum_{i,j=1}^n a_{ij}(x)\, u_{x_i x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} + c(x)\, u. \quad (2.2)$$
We say that $L$ is uniformly elliptic if there exists a constant $\delta > 0$ such that
$$\sum_{i,j=1}^n a_{ij}(x)\, \xi_i \xi_j \geq \delta\, |\xi|^2$$
for almost every $x \in \Omega$ and for all $\xi \in R^n$.

In the simplest case, when $a_{ij}(x) = \delta_{ij}$, $b_i(x) \equiv 0$, and $c(x) \equiv 0$, $L$ reduces to $-\Delta$. And, as we will see later, the solutions of the general second order elliptic equation $Lu = 0$ share many similar properties with harmonic functions.
2.2 Weak Solutions

Let the operator $L$ be in the divergence form defined in (2.1). Consider the second order elliptic equation with Dirichlet boundary condition
$$\begin{cases} Lu = f, & x \in \Omega,\\ \ u = 0, & x \in \partial\Omega. \end{cases} \quad (2.3)$$
When $f = f(x)$, the equation is linear; when $f = f(x, u)$, it is semilinear. Assume that $f(x)$ is in $L^2(\Omega)$, or, for a given solution $u$, that $f(x, u)$ is in $L^2(\Omega)$.

Multiplying both sides of the equation by a test function $v \in C^\infty_0(\Omega)$ and integrating by parts on $\Omega$, we have
$$\int_\Omega \left[ \sum_{i,j=1}^n a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} v + c(x)\, u v \right] dx = \int_\Omega f v\, dx. \quad (2.4)$$
There are no boundary terms because both $u$ and $v$ vanish on $\partial\Omega$. One can see that the integrals in (2.4) are well defined if $u$, $v$, and their first derivatives are square integrable. Write $H^1(\Omega) = W^{1,2}(\Omega)$, and let $H^1_0(\Omega)$ be the completion of $C^\infty_0(\Omega)$ in the norm of $H^1(\Omega)$:
$$\|u\| = \left( \int_\Omega \left( |Du|^2 + |u|^2 \right) dx \right)^{1/2}.$$
Then (2.4) remains true for any $v \in H^1_0(\Omega)$. One can easily see that $H^1_0(\Omega)$ is also the most appropriate space for $u$. On the other hand, if $u \in H^1_0(\Omega)$ satisfies (2.4) and is twice differentiable, then through integration by parts, we have
$$\int_\Omega (Lu - f)\, v\, dx = 0, \quad \forall\, v \in H^1_0(\Omega).$$
This implies that
$$Lu = f, \quad \forall\, x \in \Omega,$$
and therefore $u$ is a classical solution of (2.3).
The above observation naturally leads to

Definition 2.2.1 A weak solution of problem (2.3) is a function $u \in H^1_0(\Omega)$ that satisfies (2.4) for all $v \in H^1_0(\Omega)$.
Remark 2.2.1 Actually, the condition on $f$ here can be relaxed a little. By the Sobolev embedding
$$H^1(\Omega) \hookrightarrow L^{\frac{2n}{n-2}}(\Omega),$$
we see that $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$, and hence, by duality, $f$ only needs to be in $L^{\frac{2n}{n+2}}(\Omega)$. In the semilinear case, if $f(x, u) = u^p$ for some $p \leq \frac{n+2}{n-2}$, or, more generally, if
$$|f(x, u)| \leq C_1 + C_2\, |u|^{\frac{n+2}{n-2}},$$
then for any $u \in H^1(\Omega)$, $f(x, u)$ is in $L^{\frac{2n}{n+2}}(\Omega)$.
2.3 Methods of Linear Functional Analysis

2.3.1 Linear Equations

For the Dirichlet problem with a linear equation,
$$\begin{cases} Lu = f(x), & x \in \Omega,\\ \ u = 0, & x \in \partial\Omega, \end{cases} \quad (2.5)$$
to find its weak solutions we can apply representation-type theorems in linear functional analysis, such as the Riesz Representation Theorem or the Lax–Milgram Theorem. To this end, we introduce the bilinear form
$$B[u, v] = \int_\Omega \left[ \sum_{i,j=1}^n a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} v + c(x)\, u v \right] dx$$
defined on $H^1_0(\Omega)$, which is the left hand side of (2.4) in the definition of weak solutions; while the right hand side of (2.4) may be regarded as a linear functional on $H^1_0(\Omega)$:
$$\langle f, v \rangle := \int_\Omega f v\, dx.$$
Now, finding a weak solution of (2.5) is equivalent to showing that there exists a function $u \in H^1_0(\Omega)$ such that
$$B[u, v] = \langle f, v \rangle, \quad \forall\, v \in H^1_0(\Omega). \quad (2.6)$$
This can be realized by representation-type theorems in functional analysis.
2.3.2 Some Basic Principles in Functional Analysis

Here we list briefly some basic definitions and principles of functional analysis which will be used to obtain the existence of weak solutions. For more details, please see [Sch1].

Definition 2.3.1 A Banach space $X$ is a complete normed linear space whose norm $\|\cdot\|$ satisfies
(i) $\|u + v\| \leq \|u\| + \|v\|, \quad \forall\, u, v \in X$,
(ii) $\|\lambda u\| = |\lambda|\, \|u\|, \quad \forall\, u \in X, \ \lambda \in R$,
(iii) $\|u\| = 0$ if and only if $u = 0$.

Examples of Banach spaces are:
(i) $L^p(\Omega)$ with norm $\|u\|_{L^p(\Omega)}$,
(ii) the Sobolev spaces $W^{k,p}(\Omega)$ with norm $\|u\|_{W^{k,p}(\Omega)}$, and
(iii) the Hölder spaces $C^{0,\gamma}(\Omega)$ with norm $\|u\|_{C^{0,\gamma}(\Omega)}$.
Definition 2.3.2 A Hilbert space $H$ is a Banach space endowed with an inner product $(\cdot\,,\cdot)$ which generates the norm
$$\|u\| := (u, u)^{1/2},$$
with the following properties:
(i) $(u, v) = (v, u), \quad \forall\, u, v \in H$,
(ii) the mapping $u \mapsto (u, v)$ is linear for each $v \in H$,
(iii) $(u, u) \geq 0, \quad \forall\, u \in H$, and
(iv) $(u, u) = 0$ if and only if $u = 0$.

Examples of Hilbert spaces are:
(i) $L^2(\Omega)$ with inner product
$$(u, v) = \int_\Omega u(x)\, v(x)\, dx,$$
and
(ii) the Sobolev spaces $H^1(\Omega)$ and $H^1_0(\Omega)$ with inner product
$$(u, v) = \int_\Omega \left[ u(x)\, v(x) + Du(x) \cdot Dv(x) \right] dx.$$
Definition 2.3.3 (i) A mapping $A : X \to Y$ is a linear operator if
$$A[a u + b v] = a A[u] + b A[v], \quad \forall\, u, v \in X, \ a, b \in R^1.$$
(ii) A linear operator $A : X \to Y$ is bounded provided
$$\|A\| := \sup_{\|u\|_X \leq 1} \|A u\|_Y < \infty.$$
(iii) A bounded linear operator $f : X \to R^1$ is called a bounded linear functional on $X$. The collection of all bounded linear functionals on $X$, denoted by $X^*$, is called the dual space of $X$. We use
$$\langle f, u \rangle := f[u]$$
to denote the pairing of $X^*$ and $X$.
Lemma 2.3.1 (Projection).
Let $M$ be a closed subspace of $H$. Then for every $u \in H$, there is a $v \in M$ such that
$$(u - v, w) = 0, \quad \forall\, w \in M. \quad (2.7)$$
In other words,
$$H = M \oplus M^\perp, \quad \text{where } M^\perp := \{u \in H \mid u \perp M\}.$$
Here $v$ is the projection of $u$ on $M$. The reader may find the proof of the Lemma in the book [Sch] (Theorem 13) or work it out in the following exercise.

Exercise 2.3.1 i) Given any $u \in H$ with $u \notin M$, show that there exists a $v_o \in M$ such that
$$\|v_o - u\| = \inf_{v \in M} \|v - u\|.$$
Hint: Show that a minimizing sequence is a Cauchy sequence.
ii) Show that
$$(u - v_o, w) = 0, \quad \forall\, w \in M.$$
Based on this Projection Lemma, one can prove

Theorem 2.3.1 (Riesz Representation Theorem). Let $H$ be a real Hilbert space with inner product $(\cdot\,,\cdot)$, and let $H^*$ be its dual space. Then $H^*$ can be canonically identified with $H$; more precisely, for each $u^* \in H^*$, there exists a unique element $u \in H$ such that
$$\langle u^*, v \rangle = (u, v), \quad \forall\, v \in H.$$
The mapping $u^* \mapsto u$ is a linear isomorphism from $H^*$ onto $H$.

This is a well-known theorem in functional analysis. The reader may see the book [Sch1] for its proof or do it as the following exercise.
Exercise 2.3.2 Prove the Riesz Representation Theorem in four steps. Let $T$ be a bounded linear functional from $H$ to $R^1$ with $T \not\equiv 0$. Show that
i) $K(T) := \{u \in H \mid Tu = 0\}$ is closed;
ii) $H = K(T) \oplus K(T)^\perp$;
iii) the dimension of $K(T)^\perp$ is 1;
iv) there exists $w \in K(T)^\perp$ with $Tw = 1$; setting $v = w/\|w\|^2$, then
$$Tu = (u, v), \quad \forall\, u \in H.$$
From the Riesz Representation Theorem, one can derive the following

Theorem 2.3.2 (Lax–Milgram Theorem). Let $H$ be a real Hilbert space with norm $\|\cdot\|$, and let
$$B : H \times H \to R^1$$
be a bilinear mapping. Assume that there exist constants $M, m > 0$ such that
(i) $|B[u, v]| \leq M\, \|u\|\, \|v\|, \quad \forall\, u, v \in H$, and
(ii) $m\, \|u\|^2 \leq B[u, u], \quad \forall\, u \in H$.
Then for each bounded linear functional $f$ on $H$, there exists a unique element $u \in H$ such that
$$B[u, v] = \langle f, v \rangle, \quad \forall\, v \in H.$$
Proof. 1. If $B[u, v]$ is symmetric, i.e. $B[u, v] = B[v, u]$, then conditions (i) and (ii) ensure that $B[u, v]$ can serve as an inner product on $H$, and the conclusion of the theorem follows directly from the Riesz Representation Theorem.

2. In the case where $B[u, v]$ may not be symmetric, we proceed as follows.

On one hand, by the Riesz Representation Theorem, for a given bounded linear functional $f$ on $H$, there is an $\tilde{f} \in H$ such that
$$\langle f, v \rangle = (\tilde{f}, v), \quad \forall\, v \in H. \quad (2.8)$$
On the other hand, for each fixed element $u \in H$, by condition (i), $B[u, \cdot]$ is also a bounded linear functional on $H$, and hence there exists a $\tilde{u} \in H$ such that
$$B[u, v] = (\tilde{u}, v), \quad \forall\, v \in H. \quad (2.9)$$
Denote the mapping from $u$ to $\tilde{u}$ by $A$, i.e.
$$A : H \to H, \quad A u = \tilde{u}. \quad (2.10)$$
From (2.8), (2.9), and (2.10), it suffices to show that for each $\tilde{f} \in H$, there exists a unique element $u \in H$ such that
$$A u = \tilde{f}.$$
We carry this out in two parts. In part (a), we show that $A$ is a bounded linear operator, and in part (b), we prove that it is one-to-one and onto.

(a) For any $u_1, u_2 \in H$, any $a_1, a_2 \in R^1$, and each $v \in H$, we have
$$\begin{aligned}
(A(a_1 u_1 + a_2 u_2), v) &= B[a_1 u_1 + a_2 u_2,\, v]\\
&= a_1 B[u_1, v] + a_2 B[u_2, v]\\
&= a_1 (A u_1, v) + a_2 (A u_2, v)\\
&= (a_1 A u_1 + a_2 A u_2,\, v).
\end{aligned}$$
Hence
$$A(a_1 u_1 + a_2 u_2) = a_1 A u_1 + a_2 A u_2,$$
and $A$ is linear. Moreover, by condition (i), for any $u \in H$,
$$\|A u\|^2 = (A u, A u) = B[u, A u] \leq M\, \|u\|\, \|A u\|,$$
and consequently
$$\|A u\| \leq M\, \|u\|, \quad \forall\, u \in H. \quad (2.11)$$
This verifies that $A$ is bounded.
(b) To see that $A$ is one-to-one, we apply condition (ii):
$$m\, \|u\|^2 \leq B[u, u] = (A u, u) \leq \|A u\|\, \|u\|,$$
and it follows that
$$m\, \|u\| \leq \|A u\|. \quad (2.12)$$
Now if $A u = A v$, then from (2.12) and the linearity of $A$,
$$m\, \|u - v\| \leq \|A u - A v\| = 0.$$
This implies $u = v$; hence $A$ is one-to-one.

From (2.12), one can also see that $R(A)$, the range of $A$ in $H$, is closed. In fact, assume that $v_k \in R(A)$ and $v_k \to v$ in $H$. Then for each $v_k$ there is a $u_k$ such that $A u_k = v_k$. Inequality (2.12) implies that
$$\|u_i - u_j\| \leq C\, \|v_i - v_j\|,$$
i.e. $\{u_k\}$ is a Cauchy sequence in $H$, and hence $u_k \to u$ for some $u \in H$. Now, by (2.11), $A u_k \to A u$, and $v = A u \in R(A)$. Therefore, $R(A)$ is closed.

To show that $R(A) = H$, we use the Projection Lemma 2.3.1. For any $u \in H$, the Projection Lemma gives a $v \in R(A)$ such that $(u - v, w) = 0$ for all $w \in R(A)$; in particular, taking $w = A(u - v)$,
$$0 = (u - v,\, A(u - v)) = B[u - v,\, u - v] \geq m\, \|u - v\|^2.$$
Consequently, $u = v \in R(A)$, and hence $H \subset R(A)$. Therefore $R(A) = H$.

This completes the proof of the Theorem. $\Box$
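A finite-dimensional analogue may make the nonsymmetric case concrete (our own sketch, not from the book): $B[u, v] = v^{\mathsf T} A u$ with a nonsymmetric matrix $A$ whose symmetric part is positive definite satisfies (i) and (ii), so $A u = \tilde f$ has a unique solution.

```python
# Lax-Milgram in R^2: B[u,v] = v^T A u with a NONsymmetric A whose symmetric
# part (here 2*I) is positive definite.  Conditions (i), (ii) hold with m = 2,
# and A u = f is uniquely solvable (Cramer's rule below).

A = [[2.0, 1.0],
     [-1.0, 2.0]]          # nonsymmetric; (A + A^T)/2 = 2*I
f = [1.0, 3.0]

def B(u, v):
    return sum(v[i] * sum(A[i][j] * u[j] for j in range(2)) for i in range(2))

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # = 5, nonzero
u = [(f[0] * A[1][1] - A[0][1] * f[1]) / det,
     (A[0][0] * f[1] - f[0] * A[1][0]) / det]

residual = [sum(A[i][j] * u[j] for j in range(2)) - f[i] for i in range(2)]
coercive = B([0.6, -0.8], [0.6, -0.8])   # B[u,u] = 2|u|^2; here |u| = 1
print(u, residual, coercive)
```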
2.3.3 Existence of Weak Solutions

Now we explore in what situations our second order elliptic operator $L$ satisfies the conditions (i) and (ii) required by the Lax–Milgram Theorem.

Let $H = H^1_0(\Omega)$ be our Hilbert space with inner product
$$(u, v) := \int_\Omega (Du \cdot Dv + u v)\, dx.$$
By the Poincaré inequality, we can use the equivalent inner product
$$(u, v)_H := \int_\Omega Du \cdot Dv\, dx$$
and the corresponding norm
$$\|u\|_H := \left( \int_\Omega |Du|^2\, dx \right)^{1/2}.$$
For any $v \in H$, by the Sobolev embedding, $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$, and by virtue of the Hölder inequality,
$$\langle f, v \rangle := \int_\Omega f(x)\, v(x)\, dx \leq \left( \int_\Omega |f(x)|^{\frac{2n}{n+2}}\, dx \right)^{\frac{n+2}{2n}} \left( \int_\Omega |v(x)|^{\frac{2n}{n-2}}\, dx \right)^{\frac{n-2}{2n}};$$
hence we only need to assume that $f \in L^{\frac{2n}{n+2}}(\Omega)$.
In the simplest case, when $L = -\Delta$, we have
$$B[u, v] = \int_\Omega Du \cdot Dv\, dx.$$
Obviously, condition (i) is met:
$$|B[u, v]| \leq \|u\|_H\, \|v\|_H.$$
Condition (ii) is also satisfied:
$$B[u, u] = \|u\|_H^2.$$
Now, applying the Lax–Milgram Theorem, we conclude that there exists a unique $u \in H$ such that
$$B[u, v] = \langle f, v \rangle, \quad \forall\, v \in H.$$
We have actually proved

Theorem 2.3.3 For every $f(x)$ in $L^{\frac{2n}{n+2}}(\Omega)$, the Dirichlet problem
$$\begin{cases} -\Delta u = f(x), & x \in \Omega,\\ \ \ u(x) = 0, & x \in \partial\Omega \end{cases}$$
has a unique weak solution $u$ in $H^1_0(\Omega)$.
In general,
$$B[u, v] = \int_\Omega \left[ \sum_{i,j=1}^n a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} v + c(x)\, u v \right] dx.$$
Under the assumption that
$$\|a_{ij}\|_{L^\infty(\Omega)},\ \|b_i\|_{L^\infty(\Omega)},\ \|c\|_{L^\infty(\Omega)} \leq C,$$
and by the Hölder inequality, one can easily verify that
$$|B[u, v]| \leq 3C\, \|u\|_H\, \|v\|_H.$$
Hence condition (i) is always satisfied. However, condition (ii) may not automatically hold; it depends on the sign of $c(x)$ and on the $L^\infty$ norms of $b_i(x)$ and $c(x)$.
First, from the uniform ellipticity condition, we have
$$\int_\Omega \sum_{i,j=1}^n a_{ij}(x)\, u_{x_i} u_{x_j}\, dx \geq \delta \int_\Omega |Du|^2\, dx = \delta\, \|u\|_H^2 \quad (2.13)$$
for some $\delta > 0$. From here one can see that if $b_i(x) \equiv 0$ and $c(x) \geq 0$, then condition (ii) is met. Actually, the condition on $c(x)$ can be relaxed a little; for instance, if
$$c(x) \geq -\frac{\delta}{2} \quad (2.14)$$
with $\delta$ as in (2.13), then condition (ii) holds. Hence we have

Theorem 2.3.4 Assume that $b_i(x) \equiv 0$ and that $c(x)$ satisfies (2.14). Then for every $f \in L^{\frac{2n}{n+2}}(\Omega)$, the problem
$$\begin{cases} Lu = f(x), & x \in \Omega,\\ \ u(x) = 0, & x \in \partial\Omega \end{cases}$$
has a unique weak solution in $H^1_0(\Omega)$.
2.4 Variational Methods

2.4.1 Semilinear Equations

To find weak solutions of the semilinear equation
$$Lu = f(x, u),$$
the Lax–Milgram Theorem can no longer be applied. We will use variational methods to obtain existence. Roughly speaking, we will associate the equation with the functional
$$J(u) = \frac{1}{2} \int_\Omega (Lu \cdot u)\, dx - \int_\Omega F(x, u)\, dx,$$
where
$$F(x, u) = \int_0^u f(x, s)\, ds.$$
We will show that a critical point of $J(u)$ is a weak solution of our problem, and then focus on how to seek critical points in various situations.
2.4.2 Calculus of Variations

For a continuously differentiable function $g$ defined on $R^n$, a critical point is a point where $Dg$ vanishes. The simplest sorts of critical points are global or local maxima or minima; others are saddle points. To seek weak solutions of partial differential equations, we generalize this concept to infinite dimensional spaces. To illustrate the idea, let us start with a simple example:
$$\begin{cases} -\Delta u = f(x), & x \in \Omega,\\ \ \ u(x) = 0, & x \in \partial\Omega. \end{cases} \quad (2.15)$$
To find weak solutions of the problem, we study the functional
$$J(u) = \frac{1}{2} \int_\Omega |Du|^2\, dx - \int_\Omega f(x)\, u\, dx$$
in $H^1_0(\Omega)$.
Now suppose that there is some function $u$ which happens to be a minimizer of $J(\cdot)$ in $H^1_0(\Omega)$; we will demonstrate that $u$ is actually a weak solution of our problem.

Let $v$ be any function in $H^1_0(\Omega)$, and consider the real-valued function
$$g(t) = J(u + t v), \quad t \in R.$$
Since $u$ is a minimizer of $J(\cdot)$, the function $g(t)$ has a minimum at $t = 0$, and therefore we must have
$$g'(0) = 0, \quad \text{i.e.} \quad \frac{d}{dt} J(u + t v)\, \Big|_{t=0} = 0.$$
Here, explicitly,
$$J(u + t v) = \frac{1}{2} \int_\Omega |D(u + t v)|^2\, dx - \int_\Omega f(x)\, (u + t v)\, dx,$$
and
$$\frac{d}{dt} J(u + t v) = \int_\Omega D(u + t v) \cdot Dv\, dx - \int_\Omega f(x)\, v\, dx.$$
Consequently, $g'(0) = 0$ yields
$$\int_\Omega Du \cdot Dv\, dx - \int_\Omega f(x)\, v\, dx = 0, \quad \forall\, v \in H^1_0(\Omega). \quad (2.16)$$
This implies that $u$ is a weak solution of problem (2.15).

Furthermore, if $u$ is also twice differentiable, then through integration by parts we obtain
$$\int_\Omega \left[ -\Delta u - f(x) \right] v\, dx = 0.$$
Since this holds for any $v \in H^1_0(\Omega)$, we conclude that
$$-\Delta u - f(x) = 0.$$
Therefore, $u$ is a classical solution of the problem.

From the above argument, the reader can see that, in order to be a weak solution of the problem, $u$ need not be a minimizer; it can be a maximizer, or a saddle point of the functional, or, more generally, any point that satisfies
$$\frac{d}{dt} J(u + t v)\, \Big|_{t=0} = 0, \quad \forall\, v \in H^1_0(\Omega).$$
More precisely, we have

Definition 2.4.1 Let $J$ be a functional on a Banach space $X$.
i) We say that $J$ is Fréchet differentiable at $u \in X$ if there exists a continuous linear map $L = L(u)$ from $X$ to $X^*$ satisfying: for any $\epsilon > 0$, there is a $\delta = \delta(\epsilon, u)$ such that
$$|J(u + v) - J(u) - \langle L(u), v \rangle| \leq \epsilon\, \|v\|_X \quad \text{whenever } \|v\|_X < \delta,$$
where
$$\langle L(u), v \rangle := L(u)(v)$$
is the duality pairing between $X^*$ and $X$. The mapping $L(u)$ is usually denoted by $J'(u)$.
ii) A critical point of $J$ is a point at which $J'(u) = 0$, that is,
$$\langle J'(u), v \rangle = 0, \quad \forall\, v \in X.$$
iii) We call $J'(u) = 0$ the Euler–Lagrange equation of the functional $J$.

Remark 2.4.1 One can verify that, if $J$ is Fréchet differentiable at $u$, then
$$\langle J'(u), v \rangle = \lim_{t \to 0} \frac{J(u + t v) - J(u)}{t} = \frac{d}{dt} J(u + t v)\, \Big|_{t=0}.$$
2.4.3 Existence of Minimizers

In the previous subsection, we have seen that the critical points of the functional
$$J(u) = \frac{1}{2} \int_\Omega |Du|^2\, dx - \int_\Omega f(x)\, u\, dx$$
are weak solutions of the Dirichlet problem (2.15). The next question is then: does the functional $J$ actually possess a critical point?

We will show that there exists a minimizer of $J$. To this end, we verify that $J$ possesses the following three properties:
i) boundedness from below,
ii) coercivity, and
iii) weak lower semi-continuity.

i) Boundedness from Below. First we prove that $J$ is bounded from below on $H := H^1_0(\Omega)$ if $f \in L^2(\Omega)$. Again, for simplicity, we use the equivalent norm on $H$:
$$\|u\|_H = \left( \int_\Omega |Du|^2\, dx \right)^{1/2}.$$
By the Poincaré and Hölder inequalities, we have
$$\begin{aligned}
J(u) &\geq \frac{1}{2}\, \|u\|_H^2 - C\, \|u\|_H\, \|f\|_{L^2(\Omega)}\\
&= \frac{1}{2} \left( \|u\|_H - C\, \|f\|_{L^2(\Omega)} \right)^2 - \frac{C^2}{2}\, \|f\|_{L^2(\Omega)}^2\\
&\geq -\frac{C^2}{2}\, \|f\|_{L^2(\Omega)}^2. \quad (2.17)
\end{aligned}$$
Therefore, the functional $J$ is bounded from below.
Therefore, the functional J is bounded from below.
ii) Coercivity. However, A functional that is bounded from below does not
guarantee that it has a minimum. A simple counter example is
g(x) =
1
1 +x
2
, x ∈ R
1
.
Obviously, the inﬁmum of g(x) is 1, but it can never be attained. If we take a
minimizing sequence ¦x
k
¦ such that g(x
k
)→1, then x
k
goes away to inﬁnity.
This suggests that in order to prevent a minimizing sequence from ‘leaking’ to
inﬁnity, we need to have some sort of ‘coercive’ condition on our functional J.
That is, if a sequence ¦u
k
¦ goes to inﬁnity, i.e. if u
H
→∞, then J(u
k
) must
also become unbounded. Therefore a minimizing sequence would be retained
in a bounded set. And this is indeed true for our functional J. Actually, from
the second part of (2.17), one can see that
if u
k

H
→∞, then J(u
k
)→∞.
This implies that any minimizing sequence must be bounded in H.
iii) Weak Lower Semi-continuity. If $H$ were finite dimensional, then a bounded minimizing sequence $\{u_k\}$ would possess a convergent subsequence, and the limit would be a minimizer of $J$. This is no longer true in infinite dimensional spaces. We therefore turn our attention to the weak topology.

Definition 2.4.2 Let $X$ be a real Banach space. If a sequence $\{u_k\} \subset X$ satisfies
$$\langle f, u_k \rangle \to \langle f, u_o \rangle \quad \text{as } k \to \infty$$
for every bounded linear functional $f$ on $X$, then we say that $\{u_k\}$ converges weakly to $u_o$ in $X$, and write
$$u_k \rightharpoonup u_o.$$
Definition 2.4.3 A Banach space $X$ is called reflexive if $(X^*)^* = X$, where $X^*$ is the dual of $X$.

Theorem 2.4.1 (Weak Compactness). Let $X$ be a reflexive Banach space, and assume that the sequence $\{u_k\}$ is bounded in $X$. Then there exists a subsequence of $\{u_k\}$ which converges weakly in $X$.

Now let's come back to our Hilbert space $H := H^1_0(\Omega)$. By the Riesz Representation Theorem, for every bounded linear functional $f$ on $H$, there is a unique $v \in H$ such that
$$\langle f, u \rangle = (v, u), \quad \forall\, u \in H.$$
It follows that $H^* = H$, and therefore $H$ is reflexive. By coercivity and the Weak Compactness Theorem, a minimizing sequence $\{u_k\}$ is bounded, and hence possesses a subsequence $\{u_{k_i}\}$ that converges weakly to an element $u_o$ in $H$:
$$(u_{k_i}, v) \to (u_o, v), \quad \forall\, v \in H.$$
Naturally, we would wish the functional $J$ to be continuous with respect to this weak convergence, that is,
$$\lim_{i \to \infty} J(u_{k_i}) = J(u_o);$$
then $u_o$ would be our desired minimizer. However, this is not the case for our functional, nor is it true for most other functionals of interest. Fortunately, we do not really need such continuity; instead, we only need the weaker property
$$J(u_o) \leq \liminf_{i \to \infty} J(u_{k_i}).$$
On the other hand, by the definition of a minimizing sequence, we obviously have
$$J(u_o) \geq \liminf_{i \to \infty} J(u_{k_i}),$$
and hence
$$J(u_o) = \liminf_{i \to \infty} J(u_{k_i}).$$
Therefore, $u_o$ is a minimizer.

Definition 2.4.4 We say that a functional $J(\cdot)$ is weakly lower semi-continuous on a Banach space $X$ if, for every weakly convergent sequence
$$u_k \rightharpoonup u_o \quad \text{in } X,$$
we have
$$J(u_o) \leq \liminf_{k \to \infty} J(u_k).$$
Now we show that our functional $J$ is weakly lower semi-continuous on $H$. Assume that $\{u_k\} \subset H$ and
$$u_k \rightharpoonup u_o \quad \text{in } H.$$
Since for each fixed $f \in L^{\frac{2n}{n+2}}(\Omega)$ the map $u \mapsto \int_\Omega f u\, dx$ is a bounded linear functional on $H$, it follows immediately that
$$\int_\Omega f u_k\, dx \to \int_\Omega f u_o\, dx, \quad \text{as } k \to \infty. \quad (2.18)$$
From the algebraic inequality
$$a^2 + b^2 \geq 2ab,$$
we have
$$\int_\Omega |Du_k|^2\, dx \geq \int_\Omega |Du_o|^2\, dx + 2 \int_\Omega Du_o \cdot (Du_k - Du_o)\, dx.$$
Here the second term on the right goes to zero, due to the weak convergence $u_k \rightharpoonup u_o$. Therefore
$$\liminf_{k \to \infty} \int_\Omega |Du_k|^2\, dx \geq \int_\Omega |Du_o|^2\, dx. \quad (2.19)$$
Now (2.18) and (2.19) imply the weak lower semi-continuity of $J$. So far, we have proved

Theorem 2.4.2 Assume that $\Omega$ is a bounded domain with smooth boundary. Then for every $f \in L^{\frac{2n}{n+2}}(\Omega)$, the functional
$$J(u) = \frac{1}{2} \int_\Omega |Du|^2\, dx - \int_\Omega f u\, dx$$
possesses a minimizer $u_o$ in $H^1_0(\Omega)$, which is a weak solution of the boundary value problem
$$\begin{cases} -\Delta u = f(x), & x \in \Omega,\\ \ \ u(x) = 0, & x \in \partial\Omega. \end{cases}$$
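The route "minimize $J$, recover $-\Delta u = f$" can be watched in action after discretization. The sketch below (our own illustration, one-dimensional) minimizes the discrete Dirichlet functional by plain gradient descent; the minimizer satisfies the difference equation $(2u_i - u_{i-1} - u_{i+1})/h^2 = f_i$, a discrete $-u'' = f$, and approaches $\sin(\pi x)$ when $f = \pi^2 \sin(\pi x)$.

```python
import math

# Gradient descent on the discrete functional
#   J(u) = sum (u_{i+1}-u_i)^2 / (2h)  -  h * sum f_i u_i,   u_0 = u_N = 0.
# Its critical point solves (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i.

N = 32
h = 1.0 / N
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(N + 1)]
u = [0.0] * (N + 1)                      # boundary entries stay 0

def J(u):
    dirichlet = sum((u[i + 1] - u[i]) ** 2 for i in range(N)) / (2 * h)
    load = h * sum(f[i] * u[i] for i in range(1, N))
    return dirichlet - load

J0 = J(u)                                # = 0 at u = 0
tau = 0.4 * h                            # step below the stability threshold
for _ in range(20000):
    grad = [0.0] * (N + 1)
    for i in range(1, N):
        grad[i] = (2 * u[i] - u[i - 1] - u[i + 1]) / h - h * f[i]
    for i in range(1, N):
        u[i] -= tau * grad[i]

err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(N + 1))
print(J0, J(u), err)
```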
2.4.4 Existence of Minimizers under Constraints

Now we consider the semilinear Dirichlet problem
$$\begin{cases} -\Delta u = |u|^{p-1} u, & x \in \Omega,\\ \ \ u(x) = 0, & x \in \partial\Omega, \end{cases} \quad (2.20)$$
with $1 < p < \frac{n+2}{n-2}$.

Obviously, $u \equiv 0$ is a solution; we call it the trivial solution. What we are more interested in is whether there exist nontrivial solutions. Let
$$J(u) = \frac{1}{2} \int_\Omega |Du|^2\, dx - \frac{1}{p+1} \int_\Omega |u|^{p+1}\, dx.$$
It is easy to verify that
$$\frac{d}{dt} J(u + t v)\, \Big|_{t=0} = \int_\Omega \left( Du \cdot Dv - |u|^{p-1} u\, v \right) dx.$$
Therefore, a critical point of the functional $J$ in $H := H^1_0(\Omega)$ is a weak solution of (2.20).

However, this functional is not bounded from below. To see this, fix an element $u \in H$ and consider
$$J(t u) = \frac{t^2}{2} \int_\Omega |Du|^2\, dx - \frac{t^{p+1}}{p+1} \int_\Omega |u|^{p+1}\, dx.$$
Noticing that $p + 1 > 2$, we find that
$$J(t u) \to -\infty, \quad \text{as } t \to \infty.$$
Therefore, there is no minimizer of $J(u)$. For this reason, instead of $J$, we will consider another functional,
$$I(u) := \frac{1}{2} \int_\Omega |Du|^2\, dx,$$
on the constraint set
$$M := \left\{ u \in H \ \Big|\ G(u) := \int_\Omega |u|^{p+1}\, dx = 1 \right\}.$$
We seek minimizers of $I$ in $M$. Let $\{u_k\} \subset M$ be a minimizing sequence, i.e.
$$\lim_{k \to \infty} I(u_k) = \inf_{u \in M} I(u) := m.$$
It follows that $\int_\Omega |Du_k|^2\, dx$ is bounded; hence $\{u_k\}$ is bounded in $H$. By the Weak Compactness Theorem, there exists a subsequence (for simplicity, still denoted by $\{u_k\}$) which converges weakly to some $u_o$ in $H$. The weak lower semi-continuity of the functional $I$ yields
$$I(u_o) \leq \liminf_{k \to \infty} I(u_k) = m. \quad (2.21)$$
On the other hand, since $p + 1 < \frac{2n}{n-2}$, by the Compact Sobolev Embedding Theorem
$$H^1(\Omega) \hookrightarrow\hookrightarrow L^{p+1}(\Omega),$$
we find that $u_k$ converges strongly to $u_o$ in $L^{p+1}(\Omega)$, and hence
$$\int_\Omega |u_o|^{p+1}\, dx = 1.$$
That is, $u_o \in M$, and therefore
$$I(u_o) \geq m.$$
Together with (2.21), we obtain
$$I(u_o) = m.$$
We have now shown the existence of a minimizer $u_o$ of $I$ in $M$. To link $u_o$ to a weak solution of (2.20), we derive the corresponding Euler–Lagrange equation for this minimizer under the constraint.

Theorem 2.4.3 (Lagrange Multiplier). Let $u$ be a minimizer of $I$ in $M$, i.e.
$$I(u) = \min_{v \in M} I(v).$$
Then there exists a real number $\lambda$ such that
$$I'(u) = \lambda\, G'(u),$$
or, equivalently,
$$\langle I'(u), v \rangle = \lambda\, \langle G'(u), v \rangle, \quad \forall\, v \in H.$$
Before proving the theorem, let's first recall the Lagrange Multiplier for functions defined on Rⁿ. Let f(x) and g(x) be smooth functions in Rⁿ. It is well known that the critical points (in particular, the minima) x_o of f(x) under the constraint g(x) = c satisfy

Df(x_o) = λ Dg(x_o)  (2.22)

for some constant λ. Geometrically, Df(x_o) is a vector in Rⁿ. It can be decomposed as the sum of two perpendicular vectors:

Df(x_o) = Df(x_o)|_N + Df(x_o)|_T,

where the two terms on the right hand side are the projections of Df(x_o) onto the normal and tangent spaces of the level set g(x) = c at the point x_o. By definition, x_o being a critical point of f(x) on the level set g(x) = c means that the tangential projection Df(x_o)|_T = 0, while the normal projection Df(x_o)|_N is parallel to Dg(x_o); we therefore have (2.22). Heuristically, we may generalize this picture to the infinite dimensional space H. At a fixed point u ∈ H, I′(u) is a linear functional on H. By the Riesz Representation
58 2 Existence of Weak Solutions
Theorem, it can be identified with a point (or a vector) in H, denoted by Ī′(u), such that

⟨I′(u), v⟩ = (Ī′(u), v), ∀v ∈ H.

Similarly for G′(u), we have Ḡ′(u) ∈ H, and it is a normal vector of M at the point u. At a minimum u of I on the hypersurface {w ∈ H | G(w) = 1}, Ī′(u) must be perpendicular to the tangent space at the point u, and hence

Ī′(u) = λ Ḡ′(u).
We now prove this rigorously.

The Proof of Theorem 2.4.3.

Recall that if u is the minimum of I in the whole space H, then, for any v ∈ H, we have

(d/dt) I(u + tv) |_{t=0} = 0.  (2.23)
Now u is only the minimum on M, so we can no longer use (2.23): u + tv need not stay on M, and we cannot guarantee that

G(u + tv) := ∫_Ω |u + tv|^{p+1} dx = 1

for all small t. To remedy this, we consider

g(t, s) := G(u + tv + sw).

For each small t, we try to show that there is an s = s(t) such that

G(u + tv + s(t)w) = 1.

Then we can calculate the variation of I(u + tv + s(t)w) as t → 0.

In fact,

(∂g/∂s)(0, 0) = ⟨G′(u), w⟩ = (p + 1) ∫_Ω |u|^{p−1} u w dx.
Since u ∈ M, ∫_Ω |u|^{p+1} dx = 1, so there exists w (one may just take w = u) such that the right hand side above is not zero. Hence, according to the Implicit Function Theorem, there exists a C¹ function s : R¹ → R¹ such that

s(0) = 0 and g(t, s(t)) = 1

for sufficiently small t. Now u + tv + s(t)w is on M, and hence at the minimum u of I on M we must have

(d/dt) I(u + tv + s(t)w) |_{t=0} = ⟨I′(u), v⟩ + ⟨I′(u), w⟩ s′(0) = 0.  (2.24)
On the other hand, we have

0 = (dg/dt)(0, 0) = (∂g/∂t)(0, 0) + (∂g/∂s)(0, 0) s′(0) = ⟨G′(u), v⟩ + ⟨G′(u), w⟩ s′(0).

Consequently,

s′(0) = − ⟨G′(u), v⟩ / ⟨G′(u), w⟩.

Substituting this into (2.24) and denoting

λ = ⟨I′(u), w⟩ / ⟨G′(u), w⟩,

we arrive at

⟨I′(u), v⟩ = λ ⟨G′(u), v⟩, ∀v ∈ H.
This completes the proof of the Theorem. □
Now let's come back to the weak solution of problem (2.20). Our minimizer u_o of I under the constraint G(u) = 1 satisfies

⟨I′(u_o), v⟩ = λ ⟨G′(u_o), v⟩, ∀v ∈ H,

that is,

∫_Ω Du_o · Dv dx = λ ∫_Ω |u_o|^{p−1} u_o v dx, ∀v ∈ H.  (2.25)

One can see that λ > 0. Let ũ = a u_o with λ/a^{p−1} = 1. This is possible since p > 1. Then it is easy to verify that

∫_Ω Dũ · Dv dx = ∫_Ω |ũ|^{p−1} ũ v dx.

Now ũ is the desired weak solution of (2.20) in H. □
Exercise 2.4.1 Show that λ > 0 in (2.25).
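The constrained-minimization scheme of this subsection can be tried numerically. The sketch below is our own illustration, with hypothetical choices of grid size, step size, and exponent p = 3: it minimizes the discretized I(u) = ½∫|u′|² dx over M on Ω = (0, 1) by gradient steps projected onto the tangent space of M, pulls the iterate back onto M by rescaling (G is homogeneous), and then checks that the limit satisfies the Euler-Lagrange equation −u″ = µ|u|^{p−1}u.

```python
import numpy as np

p = 3
n = 50                                # interior grid points (illustrative)
h = 1.0 / (n + 1)

def neg_lap(v):
    """Discrete -v'' by central differences, zero Dirichlet boundary values."""
    w = np.pad(v, 1)
    return (2.0 * w[1:-1] - w[2:] - w[:-2]) / h**2

def G(v):                             # constraint functional ∫|v|^{p+1} dx
    return h * np.sum(np.abs(v) ** (p + 1))

u = np.sin(np.pi * np.arange(1, n + 1) * h)
u = u / G(u) ** (1.0 / (p + 1))       # start on M

tau = 0.25 * h**2                     # small explicit step (assumption)
for _ in range(20000):
    grad = neg_lap(u)                 # gradient of I (up to the factor h)
    nrm = np.abs(u) ** (p - 1) * u    # direction of G'(u), normal to M
    grad = grad - (grad @ nrm) / (nrm @ nrm) * nrm   # tangential component
    u = u - tau * grad
    u = u / G(u) ** (1.0 / (p + 1))   # pull back onto M

# Lagrange multiplier obtained by testing -u'' = mu*|u|^{p-1}u with u itself
mu = (u @ neg_lap(u)) / (u @ (np.abs(u) ** (p - 1) * u))
residual = neg_lap(u) - mu * np.abs(u) ** (p - 1) * u
rel = np.linalg.norm(residual) / np.linalg.norm(neg_lap(u))
v = mu ** (1.0 / (p - 1)) * u         # the rescaling ũ = a*u_o of the text
print(rel)                            # small: the Euler-Lagrange equation holds
```

At convergence the tangential gradient vanishes, so −u″ is parallel to |u|^{p−1}u, which is exactly the constrained Euler-Lagrange equation; the rescaled v then solves −v″ = |v|^{p−1}v as in the passage above.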
2.4.5 Minimax Critical Points

Besides local or global minima and maxima, there are other types of critical points: saddle points, or minimax points. For a simple example, let's consider the function f(x, y) = x² − y² from R² to R¹. The point (x, y) = (0, 0) is a critical point of f, i.e. Df(0, 0) = 0. However, it is neither a local maximum nor a local minimum. The graph of z = f(x, y) in R³ looks like a horse saddle. If we go on the graph along the direction of the x-axis, (0,0) is the minimum, while along the direction of the y-axis, (0,0) is the maximum. For this reason, we also call (0,0) a minimax critical point of f(x, y). The way to locate or to show
the existence of such a minimax point is called a minimax variational method. To illustrate the main idea, let's still take this simple example. Suppose we don't know whether f(x, y) possesses a minimax point, but we do detect a 'mountain range' on the graph near the xz-plane. To show the existence of a minimax point, we pick two points on the xy-plane, for instance P = (1, 1) and Q = (−1, −1), on the two sides of the 'mountain range'. Let γ(s) be a continuous curve linking the two points P and Q, with γ(0) = P and γ(1) = Q. To go from (P, f(P)) to (Q, f(Q)) on the graph, one needs first to climb over the 'mountain range' near the xz-plane, then to descend to the point (Q, f(Q)). Hence on this path there is a highest point. We then take the infimum of the highest points over all possible paths linking P and Q. More precisely, we define

c = inf_{γ∈Γ} max_{s∈[0,1]} f(γ(s)),

where

Γ = {γ | γ = γ(s) is continuous on [0,1], and γ(0) = P, γ(1) = Q}.

We then try to show that there exists a point P_o at which f attains this minimax value c, and more importantly, Df(P_o) = 0.
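For this toy example the minimax value can be estimated numerically. The sketch below is our own (the waypoint parametrization and sample counts are arbitrary choices): it compares the highest point of f along many random piecewise-linear paths from P to Q. Since f(P) = f(Q) = 0, every path has max f ≥ 0, while the straight segment through the saddle stays on {|x| = |y|}, where f = 0, so the infimum c = 0 = f(0, 0).

```python
import numpy as np

def f(x, y):
    return x**2 - y**2

P, Q = np.array([1.0, 1.0]), np.array([-1.0, -1.0])

def path_max(waypoints, samples=200):
    """Highest value of f along the piecewise-linear path P -> waypoints -> Q."""
    pts = [P] + list(waypoints) + [Q]
    best = -np.inf
    for a, b in zip(pts[:-1], pts[1:]):
        t = np.linspace(0.0, 1.0, samples)[:, None]
        seg = (1 - t) * a + t * b
        best = max(best, f(seg[:, 0], seg[:, 1]).max())
    return best

rng = np.random.default_rng(0)
# smallest "highest point" over many random one-waypoint paths
c_est = min(path_max([rng.uniform(-2, 2, size=2)]) for _ in range(2000))
c_est = min(c_est, path_max([]))   # straight segment: f = 0 along it
print(c_est)                       # 0.0, the saddle value f(0, 0)
```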
For a functional J defined on an infinite dimensional space X, we will apply a similar idea to seek its minimax critical points. Unlike in a finite dimensional space, in an infinite dimensional space a bounded sequence may not converge. Hence we require the functional J to satisfy a compactness condition, the Palais-Smale condition.

Definition 2.4.5 We say that the functional J satisfies the Palais-Smale condition (in short, (PS)) if any sequence {u_k} ⊂ X for which J(u_k) is bounded and J′(u_k) → 0 as k → ∞ possesses a convergent subsequence.

Under this compactness condition, Ambrosetti and Rabinowitz [AR] established the following well-known theorem on the existence of a minimax critical point.
Theorem 2.4.4 (Mountain Pass Theorem). Let X be a real Banach space and J ∈ C¹(X, R¹). Suppose J satisfies (PS), J(0) = 0,

(J1) there exist constants ρ, α > 0 such that J|_{∂B_ρ(0)} ≥ α, and
(J2) there is an e ∈ X \ B_ρ(0) such that J(e) ≤ 0.

Then J possesses a critical value c ≥ α which can be characterized as

c = inf_{γ∈Γ} max_{u∈γ([0,1])} J(u),

where

Γ = {γ ∈ C([0, 1], X) | γ(0) = 0, γ(1) = e}.
Here, a critical value c means a value of the functional which is attained by a critical point u, i.e. J′(u) = 0 and J(u) = c.

Heuristically, the Theorem says that if the point 0 is located in a valley surrounded by a mountain range, and if on the other side of the range there is another low point e, then there must be a mountain pass from 0 to e which contains a minimax critical point of J. In the figure below, on the graph of J, the path connecting the two low points (0, J(0)) and (e, J(e)) contains a highest point S, which is the lowest as compared to the highest points on the nearby paths. This S is a saddle point, the minimax critical point of J.
The proof of the Theorem is mainly based on a deformation theorem, which exploits the change of topology of the level sets of J when the value of J goes through a critical value. To illustrate this, let's come back to the simple example f(x, y) = x² − y². Denote the level set of f by

f^c := {(x, y) ∈ R² | f(x, y) ≤ c}.

One can see that 0 is a critical value, and for any a > 0 the level sets f^a and f^{−a} have different topology: f^a is connected, while f^{−a} is not. If there is no critical value between two level sets, say f^a and f^b with 0 < a < b, then one can continuously deform the set f^b into f^a. One convenient way is to let the points in the set f^b 'flow' along the direction of −Df into f^a; correspondingly, the points on the graph 'flow downhill' into a lower level. More precisely and generally, we have
Theorem 2.4.5 (Deformation Theorem). Let X be a real Banach space. Assume that J ∈ C¹(X, R¹) and satisfies (PS). Suppose that c is not a critical value of J.
Then for each sufficiently small ε > 0, there exist a constant 0 < δ < ε and a continuous mapping η(t, u) from [0, 1] × X to X such that

(i) η(0, u) = u, ∀u ∈ X,
(ii) η(1, u) = u, ∀u ∉ J^{−1}[c − ε, c + ε],
(iii) J(η(t, u)) ≤ J(u), ∀u ∈ X, t ∈ [0, 1],
(iv) η(1, J^{c+δ}) ⊂ J^{c−δ}.
Roughly speaking, the Theorem says that if c is not a critical value of J, then one can continuously deform a higher level set J^{c+δ} into a lower level set J^{c−δ} by 'flowing downhill'. Condition (ii) ensures that after the deformation, the two points 0 and e in the Mountain Pass Theorem remain fixed.
Proof of the Deformation Theorem.

The main idea is to solve an ODE initial value problem (with respect to t) for each u ∈ X, roughly of the following form:

dη/dt (t, u) = −J′(η(t, u)), η(0, u) = u.  (2.26)

That is, we let the 'flow' go along the direction of −J′(η(t, u)), so that the value of J(η(t, u)) decreases as t increases. However, we need to modify the right hand side of the equation a little, so that the flow satisfies the other desired conditions.
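In finite dimensions the unmodified flow (2.26) is easy to simulate. The sketch below is our own illustration (step size and time horizon are arbitrary choices): it integrates dη/dt = −Df(η) by Euler steps for the saddle example f(x, y) = x² − y² and confirms that f decreases along the flow, which is the mechanism behind conditions (iii) and (iv).

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 - y**2

def grad_f(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def flow(u, t_end=1.0, dt=1e-3):
    """Euler integration of the negative gradient flow dη/dt = -Df(η), η(0) = u."""
    eta = np.array(u, dtype=float)
    for _ in range(int(t_end / dt)):
        eta = eta - dt * grad_f(eta)
    return eta

u0 = np.array([1.0, 0.5])
u1 = flow(u0)
print(f(u0), f(u1))   # the value of f strictly decreases along the flow
```

Since (d/dt) f(η) = −|Df(η)|² ≤ 0, the value can only go down; the modifications g and h introduced below are needed so the flow also stops outside the band [c − ε, c + ε] and stays bounded.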
Step 1. We first choose ε > 0 small, so that the flow does not move too slowly in J^{c+ε} \ J^{c−ε}. This is actually guaranteed by (PS). We claim that there exist constants 0 < a, ε < 1 such that

‖J′(u)‖ ≥ a, ∀u ∈ J^{c+ε} \ J^{c−ε}.  (2.27)

Otherwise, there would exist sequences a_k → 0, ε_k → 0 and elements u_k ∈ J^{c+ε_k} \ J^{c−ε_k} with ‖J′(u_k)‖ ≤ a_k. Then by virtue of the (PS) condition, there is a subsequence {u_{k_i}} that converges to an element u_o ∈ X. Since J ∈ C¹(X, R¹), we must have

J(u_o) = c and J′(u_o) = 0.

This contradicts the assumption that c is not a critical value of J. Therefore (2.27) holds.

Step 2. To satisfy condition (ii), i.e. to make the flow stay still on the set

M := {u ∈ X | J(u) ≤ c − ε or J(u) ≥ c + ε},

we could multiply the right hand side of equation (2.26) by dist(η, M), but this would make the flow move too slowly near M. To modify further, we choose 0 < δ < ε such that

δ < a²/2,  (2.28)
and let
N := {u ∈ X | c − δ ≤ J(u) ≤ c + δ}.

Now for u ∈ X, define

g(u) := dist(u, M) / (dist(u, M) + dist(u, N)),

and let

h(s) := 1 if 0 ≤ s ≤ 1, and h(s) := 1/s if s > 1.

We finally modify −J′(η) into

F(η) := −g(η) h(‖J′(η)‖) J′(η).

The factor h(·) is introduced to ensure the boundedness of F(·).

Step 3. Now for each fixed u ∈ X, we consider the modified ODE problem

dη/dt (t, u) = F(η(t, u)), η(0, u) = u.

One can verify that F is bounded and Lipschitz continuous on bounded sets, and therefore, by the well-known ODE theory, there exists a unique solution η(t, u) for all t ≥ 0.

Condition (i) is satisfied automatically. From the definition of g,

g(u) = 0, ∀u ∈ M;

hence condition (ii) is also satisfied.
To verify (iii) and (iv), we compute

(d/dt) J(η(t, u)) = ⟨J′(η(t, u)), dη/dt (t, u)⟩ = ⟨J′(η(t, u)), F(η(t, u))⟩ = −g(η(t, u)) h(‖J′(η(t, u))‖) ‖J′(η(t, u))‖².  (2.29)

In the above equality, the right hand side is obviously non-positive, and hence J(η(t, u)) is non-increasing. This verifies (iii).

To see (iv), it suffices to show that for each u in N = J^{c+δ} \ J^{c−δ}, we have η(1, u) ∈ J^{c−δ}.

Notice that for η ∈ N, g(η) ≡ 1, and (2.29) becomes

(d/dt) J(η(t, u)) = −h(‖J′(η(t, u))‖) ‖J′(η(t, u))‖².
By the deﬁnition of h and (2.27), we have,
(d/dt) J(η(t, u)) = −‖J′(η(t, u))‖² ≤ −a², if ‖J′(η(t, u))‖ ≤ 1;
(d/dt) J(η(t, u)) = −‖J′(η(t, u))‖ ≤ −a, if ‖J′(η(t, u))‖ > 1.

In either case, we derive

J(η(1, u)) ≤ J(η(0, u)) − a² ≤ c + δ − a² ≤ c − δ.

This establishes (iv) and therefore completes the proof of the Theorem. □
With this Deformation Theorem, the proof of the Mountain Pass Theorem becomes very simple.

The Proof of the Mountain Pass Theorem.

First, by the definition, we have c < ∞. To see this, take the particular path γ(t) = te and consider the function

g(t) = J(γ(t)).

It is continuous on the closed interval [0,1], and hence bounded from above. Therefore

c ≤ max_{u∈γ([0,1])} J(u) = max_{t∈[0,1]} g(t) < ∞.

On the other hand, for any γ ∈ Γ,

γ([0, 1]) ∩ ∂B_ρ(0) ≠ ∅.

Hence by condition (J1),

max_{u∈γ([0,1])} J(u) ≥ inf_{v∈∂B_ρ(0)} J(v) ≥ α,

and it follows that c ≥ α.
Suppose the number c so defined is not a critical value. We will use the Deformation Theorem to continuously deform a path γ ∈ Γ down into the level set J^{c−δ} and derive a contradiction. More precisely, choose the ε in the Deformation Theorem to be α/2, and let 0 < δ < ε be the corresponding constant. From the definition of c, we can pick a path γ ∈ Γ such that

max_{u∈γ([0,1])} J(u) ≤ c + δ,

i.e.

γ([0, 1]) ⊂ J^{c+δ}.

Now by the Deformation Theorem, this path can be continuously deformed down into the level set J^{c−δ}, that is,

η(1, γ([0, 1])) ⊂ J^{c−δ},

or in other words,
max_{u∈η(1,γ([0,1]))} J(u) ≤ c − δ.  (2.30)

On the other hand, since 0 and e are in J^{c−ε}, by (ii) of the Deformation Theorem,

η(1, 0) = 0 and η(1, e) = e.

This means that η(1, γ([0, 1])) is also a path in Γ, and therefore we must have

max_{u∈η(1,γ([0,1]))} J(u) ≥ c.

This contradicts (2.30) and hence completes the proof of the Theorem. □
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem

−Δu = f(x, u), x ∈ Ω,
u(x) = 0, x ∈ ∂Ω.  (2.31)

Although the method we illustrate here can be applied to more general second order uniformly elliptic equations, we prefer this simple model, which better illustrates the main ideas.
We assume that f(x, u) satisfies:

(f1) f(x, s) ∈ C(Ω̄ × R¹, R¹);
(f2) there exist constants c₁, c₂ ≥ 0 such that

|f(x, s)| ≤ c₁ + c₂|s|^p,

with 0 ≤ p < (n+2)/(n−2) if n > 2. If n = 1, (f2) can be dropped; if n = 2, it suffices that

|f(x, s)| ≤ c₁ e^{φ(s)},

with φ(s)/s² → 0 as |s| → ∞;
(f3) f(x, s) = o(|s|) as s → 0; and
(f4) there are constants µ > 2 and r ≥ 0 such that for |s| ≥ r,

0 < µF(x, s) ≤ s f(x, s),

where

F(x, s) = ∫₀^s f(x, t) dt.

Remark 2.4.2 A simple example of such a function f(x, u) is |u|^{p−1}u.
Theorem 2.4.6 (Rabinowitz [Ra]) If f satisfies conditions (f1)-(f4), then problem (2.31) possesses a nontrivial weak solution.
To find weak solutions of the problem, we seek minimax critical points of the functional

J(u) = (1/2) ∫_Ω |Du|² dx − ∫_Ω F(x, u) dx

on the Hilbert space H := H¹₀(Ω). We verify that J satisfies the conditions of the Mountain Pass Theorem.

We first need to show that J ∈ C¹(H, R¹). Since we have verified that the first term

∫_Ω |Du|² dx

is continuously differentiable, we only need to check the second term

I(u) := ∫_Ω F(x, u) dx.
Proposition 2.4.1 (Rabinowitz [Ra]) Let Ω ⊂ Rⁿ be a bounded domain and let g satisfy

(g1) g ∈ C(Ω̄ × R¹, R¹), and
(g2) there are constants r, s ≥ 1 and a₁, a₂ ≥ 0 such that

|g(x, ξ)| ≤ a₁ + a₂|ξ|^{r/s} for all x ∈ Ω̄, ξ ∈ R¹.

Then the map u(x) ↦ g(x, u(x)) belongs to C(L^r(Ω), L^s(Ω)).
Proof. By (g2),

∫_Ω |g(x, u(x))|^s dx ≤ ∫_Ω (a₁ + a₂|u|^{r/s})^s dx ≤ a₃ ∫_Ω (1 + |u|^r) dx.

It follows that if u ∈ L^r(Ω), then g(x, u) ∈ L^s(Ω). That is,

g(x, ·) : L^r(Ω) → L^s(Ω).

Now we verify the continuity of the map. Fix u ∈ L^r(Ω). Given any ε > 0, we want to show that there is a δ > 0 such that

whenever ‖φ‖_{L^r(Ω)} < δ, we have ‖g(·, u + φ) − g(·, u)‖_{L^s(Ω)} < ε.  (2.32)
Let

Ω₁ := {x ∈ Ω̄ | |φ(x)| ≤ m} and Ω₂ := Ω̄ \ Ω₁,

and let
I_i = ∫_{Ω_i} |g(x, u(x) + φ(x)) − g(x, u(x))|^s dx, i = 1, 2.

By the continuity of g(x, ·), for any η > 0 there exists m > 0 such that

|g(x, u(x) + φ(x)) − g(x, u(x))| ≤ η, ∀x ∈ Ω₁.

It follows that

I₁ ≤ |Ω₁| η^s ≤ |Ω| η^s,

where |Ω| is the volume of Ω. For the given ε > 0, we choose η (and then m) so that

|Ω| η^s < (ε/2)^s,

and therefore

I₁ ≤ (ε/2)^s.  (2.33)
Now we fix this m and estimate I₂.
By (g2), we have

I₂ ≤ ∫_{Ω₂} (c₁ + c₂(|u|^r + |φ|^r)) dx ≤ c₁|Ω₂| + c₂ ∫_{Ω₂} |u|^r dx + c₂ ‖φ‖^r_{L^r(Ω)},  (2.34)

for some constants c₁ and c₂.
Moreover, as ‖φ‖_{L^r(Ω)} < δ,

δ^r > ∫_Ω |φ|^r dx ≥ m^r |Ω₂|.  (2.35)
Since u ∈ L^r(Ω), we can make ∫_{Ω₂} |u|^r dx as small as we wish by making |Ω₂| small. Now by (2.35), we can choose δ so small that |Ω₂| is sufficiently small, and hence the right hand side of (2.34) is less than (ε/2)^s. Consequently, by (2.33), for such a small δ,

I₁ + I₂ ≤ (ε/2)^s + (ε/2)^s.

This verifies (2.32), and therefore completes the proof of the Proposition. □
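The continuity asserted in Proposition 2.4.1 can be observed numerically. The sketch below is our own illustration, with our own choices of g(x, ξ) = |ξ|^{r/s}, r = 4, s = 2, the grid, and the perturbations: an unbounded u ∈ L⁴(0, 1) is perturbed by φ with ‖φ‖_{L⁴} → 0, and ‖g(·, u + φ) − g(·, u)‖_{L²} shrinks with it.

```python
import numpy as np

r, s = 4.0, 2.0
x = np.linspace(0.0, 1.0, 10001)[1:]        # grid on Ω = (0,1), avoiding x = 0
dx = x[1] - x[0]

def Lp_norm(v, p):
    return (np.sum(np.abs(v) ** p) * dx) ** (1.0 / p)

def g(u):                                   # g(x, ξ) = |ξ|^{r/s}, satisfying (g2)
    return np.abs(u) ** (r / s)

u = x ** (-0.2)                             # u ∈ L^4(0,1), though unbounded
diffs = []
for eps in (1e-1, 1e-2, 1e-3):
    phi = eps * np.sin(50.0 * np.pi * x)    # perturbation, shrinking in L^r
    diffs.append(Lp_norm(g(u + phi) - g(u), s))
print(diffs)   # decreasing: the map u -> g(u) is continuous from L^4 to L^2
```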
Proposition 2.4.2 (Rabinowitz [Ra]) Assume that f(x, s) satisfies (f1) and (f2). Let n ≥ 3. Then I ∈ C¹(H, R) and

⟨I′(u), v⟩ = ∫_Ω f(x, u)v dx, ∀v ∈ H.

Moreover, I(·) is weakly continuous and I′(·) is compact, i.e.

I(u_k) → I(u) and I′(u_k) → I′(u), whenever u_k ⇀ u in H.
Proof. We will first show that I is Fréchet differentiable on H and then prove that I′(u) is continuous.

1. Let u, v ∈ H. We want to show that, given any ε > 0, there exists a δ = δ(ε, u) such that

Q(u, v) := |I(u + v) − I(u) − ∫_Ω f(x, u)v dx| ≤ ε‖v‖  (2.36)

whenever ‖v‖ ≤ δ. Here ‖v‖ denotes the H¹(Ω) norm of v.

In fact,

Q(u, v) ≤ ∫_Ω |F(x, u(x) + v(x)) − F(x, u(x)) − f(x, u(x))v(x)| dx
 = ∫_Ω |f(x, u(x) + ξ(x)) − f(x, u(x))| |v(x)| dx
 ≤ ‖f(·, u + ξ) − f(·, u)‖_{L^{2n/(n+2)}(Ω)} ‖v‖_{L^{2n/(n−2)}(Ω)}
 ≤ ‖f(·, u + ξ) − f(·, u)‖_{L^{2n/(n+2)}(Ω)} K‖v‖.  (2.37)

Here we have applied the Mean Value Theorem (with |ξ(x)| ≤ |v(x)|), the Hölder inequality, and the Sobolev inequality

‖v‖_{L^{2n/(n−2)}(Ω)} ≤ K‖v‖.
In Proposition 2.4.1, let r = 2n/(n−2) and s = 2n/(n+2). Then by (f2), we deduce that the map

u(x) ↦ f(x, u(x))

is continuous from L^{2n/(n−2)}(Ω) to L^{2n/(n+2)}(Ω). Hence for any given ε > 0, we can choose δ > 0 sufficiently small such that whenever ‖v‖ < δ, we have ‖v‖_{L^{2n/(n−2)}(Ω)} < Kδ, hence ‖ξ‖_{L^{2n/(n−2)}(Ω)} < Kδ, and therefore

‖f(·, u + ξ) − f(·, u)‖_{L^{2n/(n+2)}(Ω)} < ε/K.

It follows from (2.37) that

Q(u, v) ≤ ε‖v‖, whenever ‖v‖ < δ.

This verifies (2.36); hence I(·) is differentiable, and

⟨I′(u), v⟩ = ∫_Ω f(x, u(x))v(x) dx.  (2.38)
2. To verify the continuity of I′(·), we show that for each fixed u,

‖I′(u + φ) − I′(u)‖ → 0, as ‖φ‖ → 0.  (2.39)
By definition,

‖I′(u + φ) − I′(u)‖ = sup_{‖v‖≤1} ⟨I′(u + φ) − I′(u), v⟩
 = sup_{‖v‖≤1} ∫_Ω [f(x, u(x) + φ(x)) − f(x, u(x))] v(x) dx
 ≤ sup_{‖v‖≤1} ‖f(·, u + φ) − f(·, u)‖_{L^{2n/(n+2)}(Ω)} K‖v‖
 ≤ K ‖f(·, u + φ) − f(·, u)‖_{L^{2n/(n+2)}(Ω)}.  (2.40)

Here we have applied (2.38), the Hölder inequality, and the Sobolev inequality. Now (2.39) follows again from Proposition 2.4.1. Therefore, I′(·) is continuous.
3. Finally, we verify the weak continuity of I(·) and the compactness of I′(·). Assume that

{u_k} ⊂ H, and u_k ⇀ u in H.

We show that

I(u_k) → I(u) as k → ∞.

In fact,

|I(u_k) − I(u)| ≤ ∫_Ω |F(x, u_k(x)) − F(x, u(x))| dx
 ≤ ∫_Ω |f(x, ξ_k(x))| |u_k(x) − u(x)| dx
 ≤ ‖f(·, ξ_k)‖_{L^r(Ω)} ‖u_k − u‖_{L^q(Ω)}
 ≤ C(1 + ‖ξ_k‖^p_{L^{2n/(n−2)}(Ω)}) ‖u_k − u‖_{L^q(Ω)}.  (2.41)
Here we have applied the Mean Value Theorem and the Hölder inequality, with ξ_k(x) between u_k(x) and u(x), and

r = 2n/(p(n−2)), q = r/(r−1) = 2n/(2n − p(n−2)).

It is easy to see that

q < 2n/(n−2).
Since a weakly convergent sequence is bounded, {u_k} is bounded in H, and hence, by the Sobolev Embedding, it is bounded in L^{2n/(n−2)}(Ω); so is {ξ_k}. Furthermore, by the Compact Embedding of H into L^q(Ω), the last term of (2.41) approaches zero as k → ∞. This verifies the weak continuity of I(·).
Using an argument similar to the one deriving (2.40), we obtain

‖I′(u_k) − I′(u)‖ ≤ K ‖f(·, u_k) − f(·, u)‖_{L^{2n/(n+2)}(Ω)}.  (2.42)
Since p < (n+2)/(n−2), we have 2np/(n+2) < 2n/(n−2). Then by the Compact Embedding

H ↪↪ L^{2np/(n+2)}(Ω),

we derive

u_k → u strongly in L^{2np/(n+2)}(Ω).

Consequently, from Proposition 2.4.1,

f(·, u_k) → f(·, u) strongly in L^{2n/(n+2)}(Ω).

Now, together with (2.42), we arrive at

I′(u_k) → I′(u) as k → ∞.

This completes the proof of the Proposition. □
Now let's come back to Problem (2.31) and prove Theorem 2.4.6. We will verify that J(·) satisfies the conditions of the Mountain Pass Theorem, and thus possesses a nontrivial minimax critical point, which is a weak solution of (2.31).

First we verify the (PS) condition. Assume that {u_k} is a sequence in H satisfying

|J(u_k)| ≤ M and J′(u_k) → 0.

We show that {u_k} possesses a convergent subsequence.
By (f4), we have

µM + ‖J′(u_k)‖ ‖u_k‖ ≥ µJ(u_k) − ⟨J′(u_k), u_k⟩
 = (µ/2 − 1) ∫_Ω |Du_k|² dx + ∫_Ω [f(x, u_k)u_k − µF(x, u_k)] dx
 ≥ (µ/2 − 1) ∫_Ω |Du_k|² dx + ∫_{|u_k(x)|≤r} [f(x, u_k)u_k − µF(x, u_k)] dx
 ≥ (µ/2 − 1) ∫_Ω |Du_k|² dx − (a₁r + a₂r^{p+1})|Ω|
 = c_o‖u_k‖² − C_o,  (2.43)
for some positive constants c_o and C_o. Since ‖J′(u_k)‖ is bounded, (2.43) implies that {u_k} must be bounded in H. Hence there exists a subsequence (still denoted by {u_k}) which converges weakly to some element u_o in H.
Let A : H → H* be the duality map between H and its dual space H*. Then for any u, v ∈ H,

⟨Au, v⟩ = ∫_Ω Du · Dv dx.

Consequently,
A^{−1}J′(u) = u − A^{−1}f(·, u).  (2.44)

From Proposition 2.4.2,

f(·, u_k) → f(·, u_o) strongly in H*.

It follows that

u_k = A^{−1}J′(u_k) + A^{−1}f(·, u_k) → A^{−1}f(·, u_o), as k → ∞.

This verifies (PS).
To see that there is a 'mountain range' surrounding the origin, we estimate ∫_Ω F(x, u) dx. By (f3), for any given ε > 0, there is a δ > 0 such that, for all x ∈ Ω̄,

|F(x, s)| ≤ ε|s|², whenever |s| < δ.  (2.45)

Meanwhile, by (f2), there is a constant M = M(δ) such that, for all x ∈ Ω̄ and |s| ≥ δ,

|F(x, s)| ≤ M|s|^{p+1}.  (2.46)

Combining (2.45) and (2.46), we have

|F(x, s)| ≤ ε|s|² + M|s|^{p+1}, for all x ∈ Ω̄ and for all s ∈ R.

It follows, via the Poincaré and the Sobolev inequalities, that

|∫_Ω F(x, u) dx| ≤ ε ∫_Ω u² dx + M ∫_Ω |u|^{p+1} dx ≤ C(ε + M‖u‖^{p−1})‖u‖².  (2.47)
Consequently,

J(u) ≥ [1/2 − C(ε + M‖u‖^{p−1})] ‖u‖².  (2.48)

Choose ε = 1/(8C). For this ε, we fix M; then choose ‖u‖ = ρ small, so that

CMρ^{p−1} ≤ 1/8.

Then from (2.48), we deduce

J|_{∂B_ρ(0)} ≥ (1/4)ρ².
To see the existence of a point e ∈ H with J(e) ≤ 0, we fix any u ≠ 0 in H and consider

J(tu) = (t²/2) ∫_Ω |Du|² dx − ∫_Ω F(x, tu) dx.  (2.49)
By (f4), we have, for |s| ≥ r,

(d/ds) [|s|^{−µ} F(x, s)] = |s|^{−µ−2} s (−µF(x, s) + s f(x, s)),

which is ≥ 0 if s ≥ r and ≤ 0 if s ≤ −r. In either case, we deduce, for |s| ≥ r,

F(x, s) ≥ a₃|s|^µ;

and it follows that

F(x, s) ≥ a₃|s|^µ − a₄.  (2.50)

Combining (2.49) and (2.50), and taking into account that µ > 2, we conclude that

J(tu) → −∞, as t → ∞.

Now J satisfies all the conditions of the Mountain Pass Theorem, hence it possesses a minimax critical point u_o, which is a weak solution of (2.31). Moreover, J(u_o) > 0, so it is a nontrivial solution. This completes the proof of Theorem 2.4.6. □
3 Regularity of Solutions

3.1 W^{2,p} a Priori Estimates
3.1.1 Newtonian Potentials
3.1.2 Uniform Elliptic Equations
3.2 W^{2,p} Regularity of Solutions
3.2.1 The Case p ≥ 2
3.2.2 The Case 1 < p < 2
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity
3.3 Regularity Lifting
3.3.1 Bootstraps
3.3.2 The Regularity Lifting Theorem
3.3.3 Applications to PDEs
3.3.4 Applications to Integral Equations
In the previous chapter, we used functional analysis, mainly the calculus of variations, to seek the existence of weak solutions for second order linear or semilinear elliptic equations. The weak solutions we obtained were in the Sobolev space W^{1,2} and hence, by the Sobolev embedding, in the L^p spaces for p ≤ 2n/(n−2). Usually, weak solutions are easier to obtain than classical ones, and this is particularly true for nonlinear equations. However, in practice, we are often required to find classical solutions. Therefore, one would naturally want to know whether these weak solutions are actually differentiable, so that they can become classical ones. These questions will be answered here.
In this chapter, we will introduce methods to show that, in most cases, a
weak solution is in fact smooth. This is called the regularity argument. Very
often, the regularity is equivalent to the a priori estimate of solutions. To see
the diﬀerence between the regularity and the a priori estimate, we take the
equation
−Δu = f(x)

for example. Roughly speaking, the W^{2,p} regularity theory infers that

if u ∈ W^{1,p} is a weak solution and f ∈ L^p, then u ∈ W^{2,p};

while the W^{2,p} a priori estimate says that

if u ∈ W₀^{1,p} ∩ W^{2,p} is a weak solution and f ∈ L^p, then ‖u‖_{W^{2,p}} ≤ C‖f‖_{L^p}.

In the a priori estimate, we pre-assumed that u is in W^{2,p}. It might seem a little surprising at this moment that the a priori estimates turn out to be powerful tools in deriving regularity. The reader will see in Section 3.2 how the a priori estimates and the uniqueness of solutions lead to the regularity.
Let Ω be an open bounded set in Rⁿ. Let a_{ij}(x), b_i(x), and c(x) be bounded functions on Ω with a_{ij}(x) = a_{ji}(x). Consider the second order partial differential equation

Lu := −Σ_{i,j=1}^n a_{ij}(x) u_{x_i x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x)u = f(x).  (3.1)

We assume L is uniformly elliptic, that is, there exists a constant δ > 0 such that

Σ_{i,j=1}^n a_{ij}(x) ξ_i ξ_j ≥ δ|ξ|² for a.e. x ∈ Ω and for all ξ ∈ Rⁿ.
In Section 3.1, we establish the W^{2,p} a priori estimate for the solutions of (3.1). We show that if the function f(x) is in L^p(Ω) and u ∈ W^{2,p}(Ω) is a strong solution of (3.1), then there exists a constant C such that

‖u‖_{W^{2,p}} ≤ C(‖u‖_{L^p} + ‖f‖_{L^p}).

We will first establish this for the Newtonian potential, then use the method of 'frozen coefficients' to generalize it to uniformly elliptic equations.

In Section 3.2, we establish the W^{2,p} regularity. We show that if f(x) is in L^p(Ω), then any weak solution u of (3.1) is in W^{2,p}(Ω).

In Section 3.3, we introduce and prove a Regularity Lifting Theorem. It provides a simple method for the study of regularity. The version we present here contains some new developments; it is much more general and very easy to use. We believe that the method will be helpful to both experts and non-experts in the field.

We will also use examples to show how this method can be applied to boost the regularity of solutions of PDEs as well as of integral equations.
3.1 W^{2,p} a Priori Estimates

We establish the W^{2,p} estimate for the solutions of

Lu = f(x), x ∈ Ω.  (3.2)

First we start with the Newtonian potential.
3.1.1 Newtonian Potentials

Let Ω be a bounded domain in Rⁿ. Let

Γ(x) = 1/(n(n−2)ω_n) · 1/|x|^{n−2}, n ≥ 3,

be the fundamental solution of the Laplace equation. The Newtonian potential is given by

w(x) = ∫_Ω Γ(x − y) f(y) dy.
We prove

Theorem 3.1.1 Let f ∈ L^p(Ω) for 1 < p < ∞, and let w be the Newtonian potential of f. Then w ∈ W^{2,p}(Ω),

Δw = f(x), a.e. x ∈ Ω,  (3.3)

and

‖D²w‖_{L^p(Ω)} ≤ C‖f‖_{L^p(Ω)}.  (3.4)

Write w = Nf, where N is obviously a linear operator. For fixed i, j, define the linear operator T by

Tf = D_{ij} Nf = D_{ij} ∫_{Rⁿ} Γ(x − y)f(y) dy.

To prove Theorem 3.1.1, it is equivalent to show that

T : L^p(Ω) → L^p(Ω) is a bounded linear operator.  (3.5)
Our proof can actually be applied to more general operators. To this end, we introduce the concepts of weak type and strong type operators.

Define

µ_f(t) = |{x ∈ Ω : |f(x)| > t}|.

For p ≥ 1, the weak L^p space L^p_w(Ω) is the collection of functions f that satisfy

‖f‖^p_{L^p_w(Ω)} = sup{µ_f(t) t^p : t > 0} < ∞.
An operator T : L^p(Ω) → L^q(Ω) is of strong type (p, q) if

‖Tf‖_{L^q(Ω)} ≤ C‖f‖_{L^p(Ω)}, ∀f ∈ L^p(Ω);

T is of weak type (p, q) if

‖Tf‖_{L^q_w(Ω)} ≤ C‖f‖_{L^p(Ω)}, ∀f ∈ L^p(Ω).
Outline of the Proof of (3.5).

We decompose the proof into the proofs of the following four lemmas. First, we use the Fourier transform to show:

Lemma 3.1.1 T : L²(Ω) → L²(Ω) is a bounded linear operator, i.e. T is of strong type (2, 2).

Secondly, we use the Calderón-Zygmund Decomposition Lemma to prove:

Lemma 3.1.2 T is of weak type (1, 1).

Thirdly, we employ the Marcinkiewicz Interpolation Theorem to derive:

Lemma 3.1.3 T is of strong type (r, r) for any 1 < r ≤ 2.

Finally, by duality, we conclude:

Lemma 3.1.4 T is of strong type (p, p) for 1 < p < ∞.
The Proof of Lemma 3.1.1.

First we consider f ∈ C₀^∞(Ω) ⊂ C₀^∞(Rⁿ). Then obviously w ∈ C^∞(Rⁿ) and satisfies

Δw = f(x), ∀x ∈ Rⁿ.

Let

f̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} e^{i⟨x,ξ⟩} f(x) dx

be the Fourier transform of f, where

⟨x, ξ⟩ = Σ_{k=1}^n x_k ξ_k

is the inner product in Rⁿ, and i = √(−1).

We need the following simple properties of the transform,

(D_k f)^(ξ) = −iξ_k f̂(ξ), and (D_{jk} f)^(ξ) = −ξ_j ξ_k f̂(ξ),  (3.6)

and the well-known Plancherel Theorem:

‖f̂‖_{L²(Rⁿ)} = ‖f‖_{L²(Rⁿ)}.  (3.7)
It follows from (3.6) and (3.7) that
∫_Ω |f(x)|² dx = ∫_{Rⁿ} |f(x)|² dx = ∫_{Rⁿ} |Δw(x)|² dx = ∫_{Rⁿ} |(Δw)^(ξ)|² dξ
 = ∫_{Rⁿ} |ξ|⁴ |ŵ(ξ)|² dξ = Σ_{k,j=1}^n ∫_{Rⁿ} ξ_k² ξ_j² |ŵ(ξ)|² dξ
 = Σ_{k,j=1}^n ∫_{Rⁿ} |(D_{kj} w)^(ξ)|² dξ = Σ_{k,j=1}^n ∫_{Rⁿ} |D_{kj} w(x)|² dx = ∫_{Rⁿ} |D²w|² dx.

Consequently,

‖Tf‖_{L²(Ω)} ≤ ‖f‖_{L²(Ω)}, ∀f ∈ C₀^∞(Ω).  (3.8)

Now to verify (3.8) for any f ∈ L²(Ω), we simply pick a sequence {f_k} ⊂ C₀^∞(Ω) that converges to f in L²(Ω), and take the limit. This proves the Lemma. □
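The chain of identities above, Σ_{k,j} ‖D_{kj}w‖²_{L²} = ‖Δw‖²_{L²}, can be checked numerically. The sketch below is our own illustration in a periodic setting on the 2-torus with an arbitrary smooth test function: all derivatives are computed through the discrete Fourier transform, for which the discrete analogue of Plancherel's identity holds exactly.

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.exp(np.sin(X) + np.cos(2.0 * Y))     # a smooth periodic test function

k = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
wh = np.fft.fft2(w)

def deriv(symbol):
    """Inverse transform of symbol * ŵ; e.g. symbol = -KX*KY gives w_xy."""
    return np.fft.ifft2(symbol * wh).real

lap = deriv(-(KX**2 + KY**2))               # Δw has Fourier symbol -|ξ|²
hess_sq = sum(np.sum(deriv(-ki * kj) ** 2)  # Σ_{k,j} ‖D_kj w‖² over the grid
              for ki in (KX, KY) for kj in (KX, KY))
print(np.isclose(hess_sq, np.sum(lap**2)))  # True: ‖D²w‖² = ‖Δw‖²
```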
The Proof of Lemma 3.1.2.

We need the following well-known Calderón-Zygmund Decomposition Lemma (see its proof in Appendix C).

Lemma 3.1.5 For f ∈ L¹(Rⁿ) and fixed α > 0, there exist E and G such that

(i) Rⁿ = E ∪ G, E ∩ G = ∅;
(ii) |f(x)| ≤ α, a.e. x ∈ E;
(iii) G = ∪_{k=1}^∞ Q_k, where the Q_k are disjoint cubes such that

α < (1/|Q_k|) ∫_{Q_k} |f(x)| dx ≤ 2ⁿα.
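The cubes Q_k come from repeated dyadic subdivision: a cube whose average of |f| stays ≤ α is subdivided further, and a child whose average first exceeds α is selected; since its parent's average was ≤ α, the child's average is at most 2ⁿα. The sketch below is our own one-dimensional illustration, with f(x) = x^{−1/2} on Q_o = [0, 1) and α = 4 as arbitrary choices, and the averages computed in closed form.

```python
import math

alpha = 4.0

def avg(a, b):
    """Average of f(x) = x^{-1/2} over [a, b]: equals 2/(sqrt(a) + sqrt(b))."""
    return 2.0 / (math.sqrt(a) + math.sqrt(b))

def decompose(a, b, depth=12):
    """Dyadic Calderon-Zygmund stopping: return the selected 'bad' intervals."""
    if avg(a, b) > alpha:        # selected; the parent's average was <= alpha
        return [(a, b)]
    if depth == 0:               # points never selected belong to the good set E
        return []
    m = 0.5 * (a + b)
    return decompose(a, m, depth - 1) + decompose(m, b, depth - 1)

cubes = decompose(0.0, 1.0)
print(cubes)                     # [(0.0, 0.125)]
for a, b in cubes:               # property (iii) with n = 1
    assert alpha < avg(a, b) <= 2.0 * alpha
```

Off the selected interval, f(x) = x^{−1/2} ≤ f(1/8) = √8 < α, matching property (ii).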
For any f ∈ L¹(Ω), to apply the Calderón-Zygmund Decomposition Lemma, we first extend f to vanish outside Ω. For any given α > 0, fix a cube Q_o in Rⁿ large enough that

∫_{Q_o} |f(x)| dx ≤ α|Q_o|.
We show that

µ_{Tf}(α) := |{x ∈ Rⁿ : |Tf(x)| ≥ α}| ≤ C ‖f‖_{L¹(Ω)} / α.  (3.9)

Split the function f into a 'good' part g and a 'bad' part b, f = g + b, where

g(x) = f(x) for x ∈ E, and g(x) = (1/|Q_k|) ∫_{Q_k} f(x) dx for x ∈ Q_k, k = 1, 2, ….

Since the operator T is linear, Tf = Tg + Tb, and therefore

µ_{Tf}(α) ≤ µ_{Tg}(α/2) + µ_{Tb}(α/2).
We will estimate µ_{Tg}(α/2) and µ_{Tb}(α/2) separately. The estimate of the first one is easy, because g ∈ L². To estimate µ_{Tb}(α/2), we divide Rⁿ into two parts, G* and E* := Rⁿ \ G* (see below for the precise definition of G*). We will show that

(a) |G*| ≤ (C/α) ‖f‖_{L¹(Ω)}, and
(b) |{x ∈ E* : |Tb(x)| ≥ α/2}| ≤ (C/α) ∫_{E*} |Tb(x)| dx ≤ (C/α) ‖f‖_{L¹(Ω)}.

These will imply the desired estimate for µ_{Tb}(α/2).
Obviously, from the definition of g, we have

|g(x)| ≤ 2ⁿα almost everywhere,  (3.10)

and

b(x) = 0 for x ∈ E, and ∫_{Q_k} b(x) dx = 0 for k = 1, 2, ….  (3.11)

We first estimate µ_{Tg}. By Lemma 3.1.1 and (3.10), we derive

µ_{Tg}(α/2) ≤ (4/α²) ∫ g²(x) dx ≤ (2^{n+2}/α) ∫ |g(x)| dx ≤ (2^{n+2}/α) ∫ |f(x)| dx.  (3.12)
We then estimate µ_{Tb}. Let

b_k(x) = b(x) for x ∈ Q_k, and b_k(x) = 0 elsewhere.

Then

Tb = Σ_{k=1}^∞ Tb_k.

For each fixed k, let {b_{km}} ⊂ C₀^∞(Q_k) be a sequence converging to b_k in L²(Ω) and satisfying

∫_{Q_k} b_{km}(x) dx = ∫_{Q_k} b_k(x) dx = 0.  (3.13)
From the expression

Tb_{km}(x) = ∫_{Q_k} D_{ij}Γ(x − y) b_{km}(y) dy,

one can see that, due to the singularity of D_{ij}Γ(x − y) in Q_k and the fact that b_{km} may not be bounded in Q_k, one can only estimate Tb_{km}(x) when x is a positive distance away from Q_k. For this reason, we cover Q_k by a bigger ball B_k which has the same center as Q_k and whose radius δ_k equals the diameter of Q_k. We now estimate the integral in the complement of B_k:

∫_{Rⁿ\B_k} |Tb_{km}|(x) dx = ∫_{Q_o\B_k} |∫_{Q_k} D_{ij}Γ(x − y) b_{km}(y) dy| dx
 = ∫_{Q_o\B_k} |∫_{Q_k} [D_{ij}Γ(x − y) − D_{ij}Γ(x − ȳ)] b_{km}(y) dy| dx
 ≤ C δ_k ∫_{Q_o\B_k} 1/|x − ȳ|^{n+1} dx ∫_{Q_k} |b_{km}(y)| dy
 ≤ C₁ δ_k ∫_{δ_k}^∞ (1/r²) dr ∫_{Q_k} |b_{km}(y)| dy
 ≤ C₂ ∫_{Q_k} |b_{km}(y)| dy,  (3.14)

where ȳ is the center of the cube Q_k. One small trick here is to add the term (which is 0 by (3.13))

∫_{Q_k} D_{ij}Γ(x − ȳ) b_{km}(y) dy

to produce a helpful factor δ_k, by applying the Mean Value Theorem to the difference:

|D_{ij}Γ(x − y) − D_{ij}Γ(x − ȳ)| = |(y − ȳ) · D(D_{ij}Γ)(x − ξ)| ≤ δ_k |D(D_{ij}Γ)(x − ξ)|.
Now letting m → ∞ in (3.14), we obtain

∫_{Rⁿ\B_k} |Tb_k(x)| dx ≤ C ∫_{Q_k} |b_k(y)| dy.

Let

G* = ∪_{k=1}^∞ B_k and E* = Rⁿ \ G*.

It follows that

∫_{E*} |Tb(x)| dx ≤ Σ_{k=1}^∞ ∫_{Rⁿ\G*} |Tb_k| dx ≤ Σ_{k=1}^∞ ∫_{Rⁿ\B_k} |Tb_k| dx ≤ C Σ_{k=1}^∞ ∫_{Q_k} |b_k(y)| dy ≤ C ∫_{Rⁿ} |f(x)| dx.  (3.15)
Obviously,

µ_{Tb}(α/2) ≤ |G*| + |{x ∈ E* : |Tb(x)| ≥ α/2}|.  (3.16)

By (iii) in the Calderón-Zygmund Decomposition Lemma, we have

|G*| ≤ Σ_{k=1}^∞ |B_k| = C Σ_{k=1}^∞ |Q_k| ≤ (C/α) Σ_{k=1}^∞ ∫_{Q_k} |f(x)| dx = (C/α) ∫_{Rⁿ} |f(x)| dx.  (3.17)

Write

E*_α = {x ∈ E* : |Tb(x)| ≥ α/2}.

Then by (3.15), we derive

|E*_α| (α/2) ≤ ∫_{E*_α} |Tb(x)| dx ≤ ∫_{E*} |Tb(x)| dx ≤ C ∫_{Rⁿ} |f(x)| dx.  (3.18)

Now the desired inequality (3.9) is a direct consequence of (3.12), (3.16), (3.17), and (3.18). This completes the proof of the Lemma. □
The proof of Lemma 3.1.3. In the previous lemmas, we have shown that
the operator T is of weak type (1, 1) and strong type (2, 2) (of course also weak
type (2, 2)). Now Lemma 3.1.3 is a direct consequence of the Marcinkiewicz
interpolation theorem in the following restricted form:
Lemma 3.1.6 Let $T$ be a linear operator from $L^p(\Omega) \cap L^q(\Omega)$ into itself, with $1 \le p < q < \infty$. If $T$ is of weak type $(p,p)$ and weak type $(q,q)$, then for any $p < r < q$, $T$ is of strong type $(r,r)$. More precisely, if there exist constants $B_p$ and $B_q$ such that
\[
\mu_{Tf}(t) \le \left(\frac{B_p\,\|f\|_p}{t}\right)^p \quad\text{and}\quad \mu_{Tf}(t) \le \left(\frac{B_q\,\|f\|_q}{t}\right)^q, \quad \forall\, f \in L^p(\Omega) \cap L^q(\Omega),
\]
then
\[
\|Tf\|_r \le C\,B_p^{\theta}\,B_q^{1-\theta}\,\|f\|_r, \quad \forall\, f \in L^p(\Omega) \cap L^q(\Omega),
\]
where
\[
\frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q},
\]
and $C$ depends only on $p$, $q$, and $r$.
The proof of this lemma can be found in Appendix C.
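The interpolation exponent $\theta$ in Lemma 3.1.6 is determined by simple arithmetic on reciprocals; a minimal sketch (the function name `interpolation_theta` is ours, not from the text):

```python
from fractions import Fraction

def interpolation_theta(p, q, r):
    # Solve 1/r = theta/p + (1 - theta)/q for theta, as in Lemma 3.1.6.
    p, q, r = Fraction(p), Fraction(q), Fraction(r)
    assert p < r < q, "the strong type is obtained strictly between p and q"
    return (1 / r - 1 / q) / (1 / p - 1 / q)

# T of weak type (1,1) and weak type (2,2), strong type at r = 4/3:
print(interpolation_theta(1, 2, Fraction(4, 3)))  # 1/2
```

For the operator of this section (weak $(1,1)$ and weak $(2,2)$), every $1 < r < 2$ yields some $0 < \theta < 1$, which is exactly how Lemma 3.1.3 is obtained below.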
The proof of Lemma 3.1.4. From the previous lemma, we know that, for any $1 < r \le 2$, we have
\[
\|Tg\|_{L^r(\Omega)} \le C_r\,\|g\|_{L^r(\Omega)}. \tag{3.19}
\]
3.1 $W^{2,p}$ a Priori Estimates 81
Let
\[
\langle f, g \rangle = \int_\Omega f(x)\,g(x)\,dx
\]
be the duality pairing between $f$ and $g$. Then it is easy to verify that
\[
\langle g, Tf \rangle = \langle Tg, f \rangle. \tag{3.20}
\]
Given any $2 < p < \infty$, let $r = \frac{p}{p-1}$, i.e. $\frac{1}{r} + \frac{1}{p} = 1$. Obviously, $1 < r < 2$. It follows from (3.19) and (3.20) that
\[
\begin{aligned}
\|Tf\|_{L^p} &= \sup_{\|g\|_{L^r}=1} \langle g, Tf \rangle = \sup_{\|g\|_{L^r}=1} \langle Tg, f \rangle \\
&\le \sup_{\|g\|_{L^r}=1} \|f\|_{L^p}\,\|Tg\|_{L^r} \le C_r\,\|f\|_{L^p}.
\end{aligned}
\]
This completes the proof of the Lemma.
3.1.2 Uniform Elliptic Equations
In this section, we consider general second order uniformly elliptic equations with the Dirichlet boundary condition:
\[
\begin{cases}
Lu := -\sum_{i,j=1}^{n} a_{ij}(x)\,u_{x_ix_j} + \sum_{i=1}^{n} b_i(x)\,u_{x_i} + c(x)\,u = f(x), & x \in \Omega \\
u(x) = 0, & x \in \partial\Omega.
\end{cases} \tag{3.21}
\]
We assume that $\Omega$ is bounded with $C^{2,\alpha}$ boundary, $a_{ij}(x) \in C^0(\bar\Omega)$, $b_i(x) \in L^\infty(\Omega)$, $c(x) \in L^\infty(\Omega)$, and
\[
\lambda\,|\xi|^2 \le a_{ij}\,\xi_i\,\xi_j \le \Lambda\,|\xi|^2
\]
for some positive constants $\lambda$ and $\Lambda$.
Definition 3.1.1 We say that $u$ is a strong solution of
\[
Lu = f(x), \quad x \in \Omega
\]
if $u$ is twice weakly differentiable in $\Omega$ and satisfies the equation almost everywhere in $\Omega$.
Based on the result of the previous section (the estimates on Newtonian potentials), we will establish a priori estimates on the strong solutions of (3.21).
Theorem 3.1.2 Let $u \in W^{2,p}(\Omega) \cap W^{1,2}_0(\Omega)$ be a strong solution of (3.21). Assume that $f \in L^p(\Omega)$. Then
\[
\|u\|_{W^{2,p}(\Omega)} \le C\,\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \tag{3.22}
\]
where $C$ is a constant depending on $\|b_i\|_{L^\infty(\Omega)}$, $\|c\|_{L^\infty(\Omega)}$, $\lambda$, $\Lambda$, $n$, $p$, and the domain $\Omega$.
Remark 3.1.1 The conditions
\[
b_i(x),\; c(x) \in L^\infty(\Omega)
\]
can be replaced by the weaker ones
\[
b_i(x),\; c(x) \in L^p(\Omega), \quad \text{for any } p > n.
\]
Proof of Theorem 3.1.2.
We divide the proof into two major parts.
Part 1. Interior Estimate
\[
\|\nabla^2 u\|_{L^p(K)} \le C\,\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \tag{3.23}
\]
where $K$ is any compact subset of $\Omega$.
Part 2. Boundary Estimate
\[
\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C\,\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \tag{3.24}
\]
where $\Omega_\delta = \{x \in \Omega \mid d(x, \partial\Omega) > \delta\}$.
Combining the interior and the boundary estimates, we obtain
\[
\|u\|_{W^{2,p}(\Omega)} \le \|u\|_{W^{2,p}(\Omega\setminus\Omega_{2\delta})} + \|u\|_{W^{2,p}(\Omega_\delta)} \le C\,\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \tag{3.25}
\]
Theorem 3.1.2 is then a simple consequence of (3.25) and the following well-known Gagliardo-Nirenberg type interpolation estimate:
\[
\|\nabla u\|_{L^p(\Omega)} \le C\,\|u\|^{1/2}_{L^p(\Omega)}\,\|\nabla^2 u\|^{1/2}_{L^p(\Omega)} \le \epsilon\,\|\nabla^2 u\|_{L^p(\Omega)} + \frac{C}{4\epsilon}\,\|u\|_{L^p(\Omega)}.
\]
Substituting this estimate of $\|\nabla u\|_{L^p(\Omega)}$ into our inequality (3.25), we get
\[
\|u\|_{W^{2,p}(\Omega)} \le C\epsilon\,\|\nabla^2 u\|_{L^p(\Omega)} + C\Big(\frac{C}{4\epsilon}\,\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\Big).
\]
Choosing $\epsilon < \frac{1}{2C}$, we can then absorb the term $C\epsilon\,\|\nabla^2 u\|_{L^p(\Omega)}$ on the right hand side into the left hand side and arrive at the conclusion of our theorem:
\[
\|u\|_{W^{2,p}(\Omega)} \le C\,\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \tag{3.26}
\]
For both the interior and the boundary estimates, the main idea is the well-known method of "frozen" coefficients. Locally, near a point $x^o$, we may regard the leading coefficients $a_{ij}(x)$ of the operator approximately as the constants $a_{ij}(x^o)$ (as if the functions $a_{ij}$ were frozen at the point $x^o$), and thus we are able to treat the operator as one with constant coefficients and apply the potential estimates derived in the previous section.
Part 1: Interior Estimates.
First we define the cutoff function $\phi(s)$ to be a compactly supported $C^\infty$ function with
\[
\phi(s) = \begin{cases} 1, & \text{if } s \le 1 \\ 0, & \text{if } s \ge 2. \end{cases}
\]
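A concrete cutoff of this type can be written down explicitly from the standard $e^{-1/t}$ bump; a minimal numerical sketch (the helper names `f` and `phi` are ours, one possible choice among many):

```python
import math

def f(t):
    # smooth on R, vanishing to all orders as t -> 0+ and zero for t <= 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def phi(s):
    # C-infinity cutoff: equals 1 for s <= 1, equals 0 for s >= 2,
    # and lies strictly between 0 and 1 for 1 < s < 2
    a, b = f(2.0 - s), f(s - 1.0)
    return a / (a + b)

print(phi(0.5), phi(1.5), phi(2.5))  # 1.0 0.5 0.0
```

Rescaling the argument, as in $\eta(x) = \phi(|x - x^o|/\delta)$ below, then yields a cutoff adapted to a ball of any radius.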
Then we quantify the modulus of continuity of the $a_{ij}$ by
\[
\omega(\delta) = \sup_{|x-y|\le\delta;\; x,y\in\Omega;\; 1\le i,j\le n} |a_{ij}(x) - a_{ij}(y)|.
\]
The function $\omega(\delta) \searrow 0$ as $\delta \searrow 0$, and it measures the modulus of continuity of the functions $a_{ij}$.
For any $x^o \in \Omega_{2\delta}$, let
\[
\eta(x) = \phi\Big(\frac{|x - x^o|}{\delta}\Big) \quad\text{and}\quad w(x) = \eta(x)\,u(x),
\]
then
\[
\begin{aligned}
a_{ij}(x^o)\frac{\partial^2 w}{\partial x_i\,\partial x_j}
&= \big(a_{ij}(x^o) - a_{ij}(x)\big)\frac{\partial^2 w}{\partial x_i\,\partial x_j} + a_{ij}(x)\frac{\partial^2 w}{\partial x_i\,\partial x_j} \\
&= \big(a_{ij}(x^o) - a_{ij}(x)\big)\frac{\partial^2 w}{\partial x_i\,\partial x_j} + \eta(x)\,a_{ij}(x)\frac{\partial^2 u}{\partial x_i\,\partial x_j} \\
&\qquad + a_{ij}(x)\,u(x)\frac{\partial^2 \eta}{\partial x_i\,\partial x_j} + 2\,a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_j} \\
&= \big(a_{ij}(x^o) - a_{ij}(x)\big)\frac{\partial^2 w}{\partial x_i\,\partial x_j} + \eta(x)\Big(b_i(x)\frac{\partial u}{\partial x_i} + c(x)u - f(x)\Big) \\
&\qquad + a_{ij}(x)\,u(x)\frac{\partial^2 \eta}{\partial x_i\,\partial x_j} + 2\,a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_j} \\
&:= F(x) \qquad \text{for } x \in \mathbb{R}^n.
\end{aligned}
\]
In the above, we skipped the summation sign $\sum$; we abbreviated $\sum_{i,j=1}^{n} a_{ij}$ as $a_{ij}$, and so on. Here we notice that all terms are supported in $B_{2\delta} := B_{2\delta}(x^o) \subset \Omega$. With a simple linear transformation, we may assume $a_{ij}(x^o) = \delta_{ij}$. Apparently both $w$ and
\[
\Gamma * F := \frac{1}{n(n-2)\omega_n}\int_{\mathbb{R}^n} \frac{1}{|x-y|^{n-2}}\,F(y)\,dy
\]
are solutions of
\[
\Delta u = F.
\]
Thus by uniqueness, $w = \Gamma * F$ (see the exercise after the proof of the theorem). Now, following the Newtonian potential estimate (see Theorem 3.3), we obtain:
\[
\|\nabla^2 w\|_{L^p(B_{2\delta})} = \|\nabla^2 w\|_{L^p(\mathbb{R}^n)} \le C\,\|F\|_{L^p(\mathbb{R}^n)} = C\,\|F\|_{L^p(B_{2\delta})}. \tag{3.27}
\]
Calculating term by term, we have
\[
\|F\|_{L^p(B_{2\delta})} \le \omega(2\delta)\,\|\nabla^2 w\|_{L^p(B_{2\delta})} + \|f\|_{L^p(B_{2\delta})} + C\,\big(\|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).
\]
Combining this with the previous inequality (3.27) and choosing $\delta$ so small that $C\,\omega(2\delta) < 1/2$, we get
\[
\begin{aligned}
\|\nabla^2 w\|_{L^p(B_{2\delta})} &\le C\,\omega(2\delta)\,\|\nabla^2 w\|_{L^p(B_{2\delta})} + C\,\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big) \\
&\le \frac{1}{2}\,\|\nabla^2 w\|_{L^p(B_{2\delta})} + C\,\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).
\end{aligned}
\]
This is equivalent to
\[
\|\nabla^2 w\|_{L^p(B_{2\delta})} \le C\,\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).
\]
Since $u = w$ on $B_\delta(x^o)$, we have
\[
\|\nabla^2 u\|_{L^p(B_\delta)} = \|\nabla^2 w\|_{L^p(B_\delta)}.
\]
Consequently,
\[
\|\nabla^2 u\|_{L^p(B_\delta)} \le C\,\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big). \tag{3.28}
\]
To extend this estimate from a $\delta$-ball to a compact set, we notice that the collection of balls
\[
\{\,B_\delta(x) \mid x \in \Omega_{2\delta}\,\}
\]
forms an open covering of $\Omega_{2\delta}$. Hence there are finitely many balls $\{B_\delta(x^i) \mid i = 1, \dots, m\}$ that already cover $\Omega_{2\delta}$. We now apply the above estimate (3.28) on each of the balls and sum up to obtain
\[
\begin{aligned}
\|\nabla^2 u\|_{L^p(\Omega_{2\delta})} &\le \sum_{i=1}^{m} \|\nabla^2 u\|_{L^p(B_\delta(x^i))} \\
&\le \sum_{i=1}^{m} C\,\big(\|f\|_{L^p(B_{2\delta}(x^i))} + \|\nabla u\|_{L^p(B_{2\delta}(x^i))} + \|u\|_{L^p(B_{2\delta}(x^i))}\big) \\
&\le C\,\big(\|f\|_{L^p(\Omega)} + \|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)}\big). \tag{3.29}
\end{aligned}
\]
For any compact subset $K$ of $\Omega$, let $\delta < \frac{1}{2}\,\mathrm{dist}(K, \partial\Omega)$; then $K \subset \Omega_{2\delta}$, and we derive
\[
\|\nabla^2 u\|_{L^p(K)} \le C\,\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \tag{3.30}
\]
This establishes the interior estimates.
Part 2. Boundary Estimates
The boundary estimates are very similar. Let us just describe how we can apply almost the same scheme here. For any point $x^o \in \partial\Omega$, $B_\delta(x^o) \cap \partial\Omega$ is a $C^{2,\alpha}$ graph for $\delta$ small. With a suitable rotation, we may assume that $B_\delta(x^o) \cap \partial\Omega$ is given by the graph of
\[
x_n = h(x_1, x_2, \dots, x_{n-1}) = h(x'),
\]
and $\Omega$ lies on top of this graph locally. Let
\[
y = \psi(x) = \big(x' - x^{o\prime},\; x_n - h(x')\big),
\]
then $\psi$ is a diffeomorphism that maps a neighborhood of $x^o$ in $\Omega$ onto $B^+_r(0) = \{y \in B_r(0) \mid y_n > 0\}$. The equation becomes
\[
\begin{cases}
-\sum_{i,j=1}^{n} \bar a_{ij}(y)\,u_{y_iy_j} + \sum_{i=1}^{n} \bar b_i(y)\,u_{y_i} + \bar c(y)\,u = \bar f(y), & \text{for } y \in B^+_r(0) \\
u(y) = 0, & \text{for } y \in \partial B^+_r(0) \text{ with } y_n = 0,
\end{cases} \tag{3.31}
\]
where the new coefficients are computed from the original ones via the chain rule of differentiation. For example,
\[
\bar a_{ij}(y) = \frac{\partial\psi_i}{\partial x_l}(\psi^{-1}(y))\; a_{lk}(\psi^{-1}(y))\; \frac{\partial\psi_j}{\partial x_k}(\psi^{-1}(y)).
\]
If necessary, we make a linear transformation so that $\bar a_{ij}(0) = \delta_{ij}$. Since a plane is still mapped to a plane, we may assume that equation (3.31) is valid for some smaller $r$. Applying the method of frozen coefficients (with $w(y) = \phi(\frac{2|y|}{r})\,u(y)$), we get
\[
\Delta w(y) = F(y) \quad \text{on } B^+_r(0).
\]
Now let $\bar w(y)$ and $\bar F(y)$ be the odd extensions of $w(y)$ and $F(y)$ from $B^+_r(0)$ to $B_r(0)$, i.e.
\[
\bar w(y) = \bar w(y_1, \dots, y_{n-1}, y_n) = \begin{cases} w(y_1, \dots, y_{n-1}, y_n), & \text{if } y_n \ge 0; \\ -w(y_1, \dots, y_{n-1}, -y_n), & \text{if } y_n < 0, \end{cases}
\]
and similarly for $\bar F(y)$.
Then one can check that
\[
\Delta\bar w(y) = \bar F(y), \quad \text{for } y \in B_r(0).
\]
Through the same calculations as in Part 1, we derive the basic interior estimate
\[
\begin{aligned}
\|\nabla^2 u\|_{L^p(B_r(x^o))} &\le C\,\big(\|f\|_{L^p(B_{2r}(x^o))} + \|\nabla u\|_{L^p(B_{2r}(x^o))} + \|u\|_{L^p(B_{2r}(x^o))}\big) \\
&\le C_1\,\big(\|f\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|\nabla u\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|u\|_{L^p(B_{2r}(x^o)\cap\Omega)}\big)
\end{aligned}
\]
for any $x^o$ on $\partial\Omega$ and for some small radius $r$. The last inequality above is due to the symmetric extension of $w$ to $\bar w$ from the half ball to the whole ball.
These balls form a covering of the compact set $\partial\Omega$, and thus there is a finite subcovering $B_{r_i}(x^i)$ ($i = 1, 2, \dots, k$). These balls also cover a neighborhood of $\partial\Omega$ containing $\Omega\setminus\Omega_\delta$ for some small $\delta$. Summing up the estimates, and following the same steps as we did for the interior estimates, we get
\[
\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C\,\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \tag{3.32}
\]
This establishes the boundary estimates and hence completes the proof of the theorem. $\square$
Exercise 3.1.1 Assume that $\Omega$ is bounded, and $w \in W^{2,p}_0(\Omega)$. Show that if $-\Delta w = F$, then $w = \Gamma * F$.
Hint: (a) Show that $\Gamma * F(x) \to 0$ as $|x| \to \infty$.
(b) Show that the identity holds for $w \in C^2_0(\mathbb{R}^n)$.
(c) Show that
\[
\Delta(J_\varepsilon w) = J_\varepsilon(\Delta w)
\]
and thus
\[
J_\varepsilon w = \Gamma * (J_\varepsilon F).
\]
Let $\varepsilon \to 0$ to derive $w = \Gamma * F$.
3.2 $W^{2,p}$ Regularity of Solutions

In this section, we will use the a priori estimates established in the previous section to derive the $W^{2,p}$ regularity of weak solutions.
Let $L$ be a second order uniformly elliptic operator in divergence form
\[
Lu = -\sum_{i,j=1}^{n} (a_{ij}(x)u_{x_i})_{x_j} + \sum_{i=1}^{n} b_i(x)u_{x_i} + c(x)u. \tag{3.33}
\]
Definition 3.2.1 We say that $u \in W^{1,p}_0(\Omega)$ is a weak solution of
\[
\begin{cases} Lu = f(x), & x \in \Omega \\ u(x) = 0, & x \in \partial\Omega \end{cases} \tag{3.34}
\]
if for any $v \in W^{1,q}_0(\Omega)$,
\[
\int_\Omega \Big[\sum_{i,j=1}^{n} a_{ij}(x)u_{x_i}v_{x_j} + \sum_{i=1}^{n} b_i(x)u_{x_i}v + c(x)uv\Big]\,dx = \int_\Omega f(x)\,v\,dx, \tag{3.35}
\]
where $\frac{1}{p} + \frac{1}{q} = 1$.
Remark 3.2.1 Since $C^\infty_0(\Omega)$ is dense in $W^{1,p}_0(\Omega)$, we only need to require (3.35) to hold for all $v \in C^\infty_0(\Omega)$; this is more convenient in many applications.
The main result of this section is

Theorem 3.2.1 Assume $\Omega$ is a bounded domain in $\mathbb{R}^n$. Let $L$ be the uniformly elliptic operator defined in (3.33), with $a_{ij}(x)$ Lipschitz continuous and $b_i(x)$ and $c(x)$ bounded.
Assume that $u \in W^{1,p}_0(\Omega)$ is a weak solution of (3.34). If $f(x) \in L^p(\Omega)$, then $u \in W^{2,p}(\Omega)$.
Outline of the Proof. The proof of the theorem is quite complex and will be accomplished in two stages.
In stage 1, we first consider the case $p \ge 2$, because there one can rather easily show the uniqueness of weak solutions, by multiplying both sides of the equation by the solution itself and integrating by parts. As one will see, this uniqueness plays a key role in deriving the regularity.
The proof of the regularity is based on the following fundamental proposition on the existence, uniqueness, and regularity of solutions of the Laplace equation on the unit ball.
Proposition 3.2.1 Assume $f \in L^p(B_1(0))$ with $p \ge 2$. Then the Dirichlet problem
\[
\begin{cases} \Delta u = f(x), & x \in B_1(0) \\ u(x) = 0, & x \in \partial B_1(0) \end{cases} \tag{3.36}
\]
admits a unique solution $u \in W^{2,p}(B_1(0))$ satisfying
\[
\|u\|_{W^{2,p}(B_1)} \le C\,\|f\|_{L^p(B_1)}. \tag{3.37}
\]
We will prove this proposition in subsection 3.2.1, and then use the "frozen" coefficients method to derive the regularity for the general operator $L$.
In stage 2 (subsection 3.2.2), we consider the case $1 < p < 2$. The main difficulty lies in the uniqueness of the weak solution. To show the uniqueness, we first prove a $W^{2,p}$ version of the Fredholm Alternative for $p \ge 2$, based on the regularity result of stage 1, which will be used to deduce the existence of solutions of the equation
\[
-\sum_{ij}(a_{ij}(x)w_{x_i})_{x_j} = F(x).
\]
Then we use this equation, with a proper choice of $F(x)$, to derive a contradiction if there exists a nontrivial solution of the equation
\[
-\sum_{ij}(a_{ij}(x)v_{x_i})_{x_j} = 0.
\]
After proving the uniqueness, to derive the regularity, everything else is the same as in stage 1.

Remark 3.2.2 Actually, the conditions on the coefficients of $L$ can be weakened. This will use the Regularity Lifting Theorem in Section 3.3, and hence will be elaborated there.
3.2.1 The Case p ≥ 2

We will prove Proposition 3.2.1 and use it to derive the regularity of weak solutions for the general operator $L$. To this end, we need

Lemma 3.2.1 (Better A Priori Estimate in the Presence of Uniqueness) Assume that
i) solutions of
\[
Lu = f(x), \quad x \in \Omega \tag{3.38}
\]
satisfy the a priori estimate
\[
\|u\|_{W^{2,p}(\Omega)} \le C\,\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \tag{3.39}
\]
and
ii) (uniqueness) if $Lu = 0$, then $u = 0$.
Then we have the better a priori estimate for solutions of (3.38):
\[
\|u\|_{W^{2,p}(\Omega)} \le C\,\|f\|_{L^p(\Omega)}. \tag{3.40}
\]
Proof. Suppose the inequality (3.40) is false. Then there exists a sequence of functions $\{f_k\}$ with $\|f_k\|_{L^p} = 1$, and a corresponding sequence of solutions $\{u_k\}$ of
\[
Lu_k = f_k(x), \quad x \in \Omega,
\]
such that
\[
\|u_k\|_{W^{2,p}} \to \infty \text{ as } k \to \infty.
\]
It follows from the a priori estimate (3.39) that
\[
\|u_k\|_{L^p} \to \infty \text{ as } k \to \infty.
\]
Let
\[
v_k = \frac{u_k}{\|u_k\|_{L^p}} \quad\text{and}\quad g_k = \frac{f_k}{\|u_k\|_{L^p}}.
\]
Then
\[
\|v_k\|_{L^p} = 1, \quad\text{and}\quad \|g_k\|_{L^p} \to 0. \tag{3.41}
\]
From the equation
\[
Lv_k = g_k(x), \quad x \in \Omega, \tag{3.42}
\]
we have the a priori estimate
\[
\|v_k\|_{W^{2,p}} \le C\,\big(\|v_k\|_{L^p} + \|g_k\|_{L^p}\big). \tag{3.43}
\]
(3.41) and (3.43) imply that $\{v_k\}$ is bounded in $W^{2,p}$, and hence there exists a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v$ in $W^{2,p}$. By the compact Sobolev embedding, the same sequence converges strongly to $v$ in $L^p$, and hence $\|v\|_{L^p} = 1$. From (3.42), we arrive at
\[
Lv = 0, \quad x \in \Omega.
\]
Therefore, by the uniqueness assumption, we must have $v = 0$. This contradicts the fact that $\|v\|_{L^p} = 1$ and therefore completes the proof. $\square$
The Proof of Proposition 3.2.1.
For convenience, we abbreviate $B_r(0)$ as $B_r$.
To see the uniqueness, assume that both $u$ and $v$ are solutions of (3.36). Let $w = u - v$. Then $w$ weakly satisfies
\[
\begin{cases} \Delta w = 0, & x \in B_1 \\ w(x) = 0, & x \in \partial B_1. \end{cases}
\]
Multiplying both sides of the equation by $w$ and then integrating on $B_1$, we derive
\[
\int_{B_1} |\nabla w|^2\,dx = 0.
\]
Hence $\nabla w = 0$ almost everywhere; this, together with the zero boundary condition, implies $w = 0$ almost everywhere.
For the existence, it is well known that, if $f$ is continuous, then
\[
u(x) = \int_{B_1} G(x,y)\,f(y)\,dy
\]
is the solution. Here
\[
G(x,y) = \Gamma(x-y) - h(x,y)
\]
is the Green's function, in which
\[
\Gamma(x) = \begin{cases} \dfrac{1}{n(n-2)\omega_n}\,\dfrac{1}{|x|^{n-2}}, & n \ge 3 \\[2mm] \dfrac{1}{2\pi}\,\ln |x|, & n = 2 \end{cases}
\]
is the fundamental solution of the Laplace equation, as introduced in the definition of Newtonian potentials, and
\[
h(x,y) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{\big(|x|\,\big|\frac{x}{|x|^2} - y\big|\big)^{n-2}}, \quad \text{for } n \ge 3,
\]
is a harmonic function. The addition of $h(x,y)$ ensures that $G(x,y)$ vanishes on the boundary.
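The vanishing of $G$ on $\partial B_1$ can be checked directly: for $|x| = 1$ one has $|x|\,|x/|x|^2 - y| = |x - y|$, so the two terms cancel. A quick numerical sketch for $n = 3$ (the helper names are ours):

```python
import math

N, WN = 3, 4.0 * math.pi / 3.0            # dimension and volume of the unit ball

def gamma(r):
    # fundamental solution 1/(n(n-2)*omega_n) * r^(2-n) with n = 3, i.e. 1/(4*pi*r)
    return 1.0 / (N * (N - 2) * WN * r)

def green(x, y):
    # G(x,y) = Gamma(|x-y|) - Gamma(|x| * |x/|x|^2 - y|) on the unit ball
    ax = math.sqrt(sum(t * t for t in x))
    reflected = ax * math.dist([t / ax**2 for t in x], y)
    return gamma(math.dist(x, y)) - gamma(reflected)

x_bdry = (0.6, 0.8, 0.0)                  # |x| = 1: G should vanish
x_int = (0.2, -0.1, 0.3)                  # interior point: G should be positive
y = (0.1, -0.2, 0.3)
print(green(x_bdry, y), green(x_int, y) > 0)
```

The same cancellation at $|x| = 1$ is what the Kelvin reflection in $h(x,y)$ is designed to produce.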
We have obtained the needed estimate for the first part,
\[
\int_{B_1} \Gamma(x-y)\,f(y)\,dy,
\]
when we worked on the Newtonian potentials. For the second part, noticing that $h(x,y)$ is smooth when $y$ is away from the boundary (see the exercise below), we start from a smaller ball $B_{1-\delta}$ for each $\delta > 0$. Let
\[
u_\delta(x) = \int_{B_1} G(x,y)\,f_\delta(y)\,dy,
\]
where
\[
f_\delta(x) = \begin{cases} f(x), & x \in B_{1-\delta} \\ 0, & \text{elsewhere}. \end{cases}
\]
Now, by our result for the Newtonian potentials, $D^2 u_\delta$ is in $L^p(B_1)$. Applying the Poincaré inequality, we derive that $u_\delta$ is also in $L^p(B_1)$, and hence it is in $W^{2,p}(B_1)$. Moreover, due to uniqueness and Lemma 3.2.1, we have the better a priori estimate
\[
\|u_\delta\|_{W^{2,p}(B_1)} \le C\,\|f\|_{L^p(B_1)}. \tag{3.44}
\]
Pick a sequence $\{\delta_i\}$ tending to zero; then the corresponding solutions $\{u_{\delta_i}\}$ form a Cauchy sequence in $W^{2,p}(B_1)$, because
\[
\|u_{\delta_i} - u_{\delta_j}\|_{W^{2,p}(B_1)} \le C\,\|f_{\delta_i} - f_{\delta_j}\|_{L^p(B_1)} \to 0, \quad \text{as } i, j \to \infty.
\]
Let
\[
u_o = \lim_{i\to\infty} u_{\delta_i} \quad \text{in } W^{2,p}(B_1).
\]
Then $u_o$ is a solution of (3.36). Moreover, from (3.44), we see that the better a priori estimate (3.37) holds for $u_o$.
This completes the proof of Proposition 3.2.1. $\square$
Exercise 3.2.1 Show that if $|y| \le 1 - \delta$ for some $\delta > 0$, then there exists a constant $c_o > 0$ such that
\[
|x|\,\Big|\frac{x}{|x|^2} - y\Big| \ge c_o, \quad \forall\, x \in B_1(0).
\]
Proof of Regularity Theorem 3.2.1 for $p \ge 2$.
The general frame of the proof is similar to that of the $W^{2,p}$ a priori estimate in the previous section. The key difference is that there we presumed that $u$ is a strong solution in $W^{2,p}$, while here we only assume that $u$ is a weak solution in $W^{1,p}$. After reading the proof of the a priori estimates, the reader may notice that if one can establish $W^{2,p}$ regularity in a small neighborhood of each point in $\Omega$, then, by an entirely similar argument as in obtaining the a priori estimates, one can derive the regularity on the whole of $\Omega$.
As in the proof of the a priori estimates, we define the cutoff function
\[
\phi(s) = \begin{cases} 1, & \text{if } s \le 1 \\ 0, & \text{if } s \ge 2. \end{cases}
\]
Let $u$ be a $W^{1,p}_0(\Omega)$ weak solution of (3.34). For any
\[
x^o \in \Omega_{2\delta} := \{x \in \Omega \mid \mathrm{dist}(x, \partial\Omega) \ge 2\delta\},
\]
let
\[
\eta(x) = \phi\Big(\frac{|x - x^o|}{\delta}\Big) \quad\text{and}\quad w(x) = \eta(x)\,u(x).
\]
Then $w$ is supported in $B_{2\delta}(x^o)$. By (3.35) and a straightforward calculation, one can verify that, for any $v \in C^\infty_0(B_{2\delta}(x^o))$,
\[
\int_{B_{2\delta}(x^o)} a_{ij}(x^o)\,w_{x_i}v_{x_j}\,dx = \int_{B_{2\delta}(x^o)} [a_{ij}(x^o) - a_{ij}(x)]\,w_{x_i}v_{x_j}\,dx + \int_{B_{2\delta}(x^o)} F(x)\,v\,dx,
\]
where
\[
F(x) = f(x) - (a_{ij}(x)\eta_{x_i}u)_{x_j} - b_i(x)u_{x_i} - c(x)u(x).
\]
In other words, $w$ is a weak solution of
\[
\begin{cases}
a_{ij}(x^o)\,w_{x_ix_j} = \big([a_{ij}(x^o) - a_{ij}(x)]\,w_{x_i}\big)_{x_j} - F(x), & x \in B_{2\delta}(x^o) \\
w(x) = 0, & x \in \partial B_{2\delta}(x^o).
\end{cases} \tag{3.45}
\]
In the above, we omitted the summation signs $\sum$; we abbreviated $\sum_{i,j=1}^{n} a_{ij}$ as $a_{ij}$, and so on.
With a change of coordinates, we may assume that $a_{ij}(x^o) = \delta_{ij}$ and write equation (3.45) as
\[
\Delta w = \big([a_{ij}(x^o) - a_{ij}(x)]\,w_{x_i}\big)_{x_j} - F(x). \tag{3.46}
\]
For any $v \in W^{2,p}(B_{2\delta}(x^o))$, obviously,
\[
\big([a_{ij}(x^o) - a_{ij}(x)]\,v_{x_i}\big)_{x_j} \in L^p(B_{2\delta}(x^o)).
\]
Also, one can easily verify that $F(x)$ is in $L^p(B_{2\delta}(x^o))$. By virtue of Proposition 3.2.1, the operator $\Delta$ is invertible. Consider the equation in $W^{2,p}$:
\[
v = Kv + \Delta^{-1}F, \tag{3.47}
\]
where
\[
(Kv)(x) = \Delta^{-1}\Big(\big([a_{ij}(x^o) - a_{ij}(x)]\,v_{x_i}\big)_{x_j}\Big).
\]
Under the assumption that the $a_{ij}(x)$ are Lipschitz continuous, one can verify (see the exercise below) that, for sufficiently small $\delta$, $K$ is a contracting map from $W^{2,p}(B_{2\delta}(x^o))$ to itself. Therefore, there exists a unique solution $v$ of equation (3.47). This $v$ is also a weak solution of equation (3.46). Similar to the proof of Proposition 3.2.1, one can show (as an exercise) the uniqueness of the weak solution of (3.46). Therefore, we must have $w = v$, and we thus conclude that $w$ is also in $W^{2,p}(B_{2\delta}(x^o))$. This completes stage 1 of the proof of Theorem 3.2.1. $\square$
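Since $K$ is a contraction, the solution of (3.47) can be produced by Picard iteration $v_{m+1} = K v_m + \Delta^{-1}F$. A finite-dimensional sketch of this fixed point argument (the matrix `K` below is an arbitrary stand-in with norm $< 1$, not the actual operator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
K = rng.standard_normal((n, n))
K = 0.5 * K / np.linalg.norm(K, 2)         # force spectral norm 1/2: a contraction
g = rng.standard_normal(n)                 # stands in for the term Delta^{-1} F

v = np.zeros(n)
for _ in range(100):                       # Picard iteration v_{m+1} = K v_m + g
    v = K @ v + g

exact = np.linalg.solve(np.eye(n) - K, g)  # the unique fixed point (I - K)^{-1} g
print(np.allclose(v, exact))               # True
```

The geometric error decay $\|v_m - v\| \le \gamma^m \|v_0 - v\|$ is exactly what the contraction constant $\gamma < 1$ of Exercise 3.2.2 buys.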
Exercise 3.2.2 Assume that each $a_{ij}(x)$ is Lipschitz continuous. Let
\[
(Kv)(x) = \Delta^{-1}\Big(\big([a_{ij}(x^o) - a_{ij}(x)]\,v_{x_i}\big)_{x_j}\Big), \quad x \in B_{2\delta}(x^o).
\]
Show that, for $\delta$ sufficiently small, the operator $K$ is a contracting map in $W^{2,p}(B_{2\delta}(x^o))$, i.e. there exists a constant $0 < \gamma < 1$ such that
\[
\|K\phi - K\psi\|_{W^{2,p}(B_{2\delta}(x^o))} \le \gamma\,\|\phi - \psi\|_{W^{2,p}(B_{2\delta}(x^o))}.
\]

Exercise 3.2.3 Prove the uniqueness of the $W^{1,p}(B_{2\delta}(x^o))$ weak solution of equation (3.46) when $p \ge 2$.
3.2.2 The Case 1 < p < 2

Lemma 3.2.2 Let $p \ge 2$. If $u = 0$ whenever $Lu = 0$ (in the sense of $W^{1,p}_0(\Omega)$ weak solutions), then for any $f \in L^p(\Omega)$, there exists a unique $u \in W^{1,p}_0(\Omega) \cap W^{2,p}(\Omega)$ such that
\[
Lu = f(x) \quad\text{and}\quad \|u\|_{W^{2,p}(\Omega)} \le C\,\|f\|_{L^p}. \tag{3.48}
\]
Proof. This is actually a $W^{2,p}$ version of the Fredholm Alternative.
From the previous chapter, one can see that, for $\alpha$ sufficiently large, by the Lax-Milgram Theorem, for any given $g \in L^2(\Omega)$ there exists a unique $W^{1,2}_0(\Omega)$ weak solution $v$ of the equation
\[
(L + \alpha)v = g, \quad x \in \Omega.
\]
From the Regularity Theorem in this chapter, we have $v \in W^{2,2}(\Omega)$. Therefore, we can write
\[
v = (L + \alpha)^{-1} g,
\]
where $(L + \alpha)^{-1}$ is a bounded linear operator from $L^2(\Omega)$ to $W^{2,2}(\Omega)$.
Let $\{f_k\} \subset C^\infty_0(\Omega)$ be a sequence of smooth approximations of $f(x)$. For each $k$, consider the equation
\[
(L + \alpha)w - \alpha w = f_k, \tag{3.49}
\]
or equivalently,
\[
(I - K)w = F, \tag{3.50}
\]
where $I$ is the identity operator, $K = \alpha(L + \alpha)^{-1}$, and $F = (L + \alpha)^{-1}f_k$. One can see that $K$ is a compact operator from $W^{1,2}_0(\Omega)$ into itself, because of the compact embedding of $W^{2,2}$ into $W^{1,2}$. Now we can apply the Fredholm Alternative:
Alternative:
Equation (3.50) exists a unique solution if and only if the corresponding
homogeneous equation (I −K)w = 0 has only trivial solution.
The second half of the above is just the assumption of the theorem, hence,
equation (3.50) possesses a unique solution u
k
in W
1,2
0
(Ω), i.e.
u
k
−α(L +α)
−1
u
k
= (L +α)
−1
f
k
. (3.51)
Furthermore, since both $\alpha(L + \alpha)^{-1}u_k$ and $(L + \alpha)^{-1}f_k$ are in $W^{2,2}(\Omega)$, we derive immediately that $u_k$ is also in $W^{2,2}(\Omega)$ (or we can deduce this from the Regularity Theorem 3.2.1). Since $f_k$ is in $L^p(\Omega)$ for any $p$, we conclude from Theorem 3.2.1 that $u_k$ is in $W^{2,p}(\Omega)$, and furthermore, due to uniqueness, we have the improved a priori estimate (see Lemma 3.2.1)
\[
\|u_k\|_{W^{2,p}} \le C\,\|f_k\|_{L^p}, \quad k = 1, 2, \dots
\]
Moreover, for any integers $i$ and $j$,
\[
L(u_i - u_j) = f_i(x) - f_j(x), \quad x \in \Omega.
\]
It follows that
\[
\|u_i - u_j\|_{W^{2,p}} \le C\,\|f_i - f_j\|_{L^p} \to 0, \quad \text{as } i, j \to \infty.
\]
This implies that $\{u_k\}$ is a Cauchy sequence in $W^{2,p}$, and hence it converges to an element $u$ in $W^{2,p}(\Omega)$. This function $u$ is the desired solution. $\square$
Proposition 3.2.2 (Uniqueness of $W^{1,p}_0$ Weak Solutions for $1 < p < \infty$)
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$. Assume that the $a_{ij}(x)$ are Lipschitz continuous. If $u$ is a $W^{1,p}_0(\Omega)$ weak solution of
\[
(a_{ij}(x)u_{x_i})_{x_j} = 0, \tag{3.52}
\]
then $u = 0$ almost everywhere.
Proof. When $p \ge 2$, we have already proved the uniqueness. Now assume $1 < p < 2$. Suppose there exists a nontrivial weak solution $u$. Let $\{u_k\}$ be a sequence of $C^\infty_0(\Omega)$ functions such that
\[
u_k \to u \text{ in } W^{1,p}_0(\Omega), \quad \text{as } k \to \infty.
\]
For each $k$, consider the equation
\[
(a_{ij}(x)v_{x_i})_{x_j} = F_k(x) := \frac{|u_k|^{p/q}\,u_k}{1 + |u_k|^2}, \tag{3.53}
\]
where $\frac{1}{q} + \frac{1}{p} = 1$.
Obviously, for $p < 2$, $q$ is greater than $2$, and $F_k(x)$ is in $L^q(\Omega)$. Since we already have uniqueness for $W^{1,q}_0$ weak solutions, Lemma 3.2.2 guarantees the existence of a solution $v_k$ of equation (3.53).
Through integration by parts several times, one can verify that
\[
0 = \int_\Omega v_k\,(a_{ij}(x)u_{x_i})_{x_j}\,dx = \int_\Omega \frac{|u_k|^{p/q}\,u_k}{1 + |u_k|^2}\,u\,dx. \tag{3.54}
\]
Since $u_k \to u$ in $W^{1,p}$, it is easy to see that
\[
\frac{|u_k|^{p/q}\,u_k}{1 + |u_k|^2} \to \frac{|u|^{p/q}\,u}{1 + |u|^2} \quad \text{in } L^q(\Omega),
\]
and therefore, passing to the limit in (3.54), we obtain
\[
\int_\Omega \frac{|u|^{p/q}\,u^2}{1 + |u|^2}\,dx = 0.
\]
Since the integrand is nonnegative, we must have $u = 0$ almost everywhere. This completes the proof of the proposition. $\square$
After proving the uniqueness, the rest of the arguments are entirely the
same as in stage 1. This completes stage 2 of the proof for Theorem 3.2.1.
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity

Theorem 3.2.2 Given any pair $p, q > 1$ and $f \in L^q(\Omega)$, if $u \in W^{1,p}_0(\Omega)$ is a weak solution of
\[
Lu = f(x), \quad x \in \Omega, \tag{3.55}
\]
then $u \in W^{2,q}(\Omega)$ and the a priori estimate
\[
\|u\|_{W^{2,q}(\Omega)} \le C\,\big(\|u\|_{L^q} + \|f\|_{L^q}\big) \tag{3.56}
\]
holds.
Proof. Case i) If $q \le p$, then obviously $u$ is also a $W^{1,q}_0(\Omega)$ weak solution, and the result follows from the Regularity Theorem 3.2.1 and the $W^{2,q}$ a priori estimates.
Case ii) If $q > p$, then we use the Sobolev embedding
\[
W^{2,p} \hookrightarrow W^{1,p_1}, \quad\text{with}\quad p_1 = \frac{np}{n-p}.
\]
If $p_1 \ge q$, we are done. If $p_1 < q$, we use the fact that $u$ is a $W^{1,p_1}$ weak solution, and, by regularity, $u \in W^{2,p_1}$. Repeating this process, after finitely many steps, we arrive at some $p_k \ge q$.
From this Theorem, we can immediately derive the equivalence between the uniqueness of weak solutions in any two spaces $W^{1,p}_0$ and $W^{1,q}_0$:

Corollary 3.2.1 For any pair $p, q > 1$, the following are equivalent:
i) If $u \in W^{1,p}_0(\Omega)$ is a weak solution of $Lu = 0$, then $u = 0$.
ii) If $u \in W^{1,q}_0(\Omega)$ is a weak solution of $Lu = 0$, then $u = 0$.
Theorem 3.2.3 ($W^{2,p}$ Version of the Fredholm Alternative) Let $1 < p < \infty$. If $u = 0$ whenever $Lu = 0$ (in the sense of $W^{1,p}_0(\Omega)$ weak solutions), then for any $f \in L^p(\Omega)$, there exists a unique $u \in W^{1,p}_0(\Omega) \cap W^{2,p}(\Omega)$ such that
\[
Lu = f(x) \quad\text{and}\quad \|u\|_{W^{2,p}(\Omega)} \le C\,\|f\|_{L^p}.
\]
For $2 \le p < \infty$, we have stated this theorem in Lemma 3.2.2. Now, for $1 < p < 2$, having obtained the $W^{2,p}$ regularity in that case, the proof of the theorem is entirely the same as for Lemma 3.2.2.
3.3 Regularity Lifting

3.3.1 Bootstrap

Assume that $u \in H^1(\Omega)$ is a weak solution of
\[
-\Delta u = u^p(x), \quad x \in \Omega. \tag{3.57}
\]
Then by Sobolev embedding, $u \in L^{\frac{2n}{n-2}}(\Omega)$. If the power $p$ is less than the critical number $\frac{n+2}{n-2}$, then the regularity of $u$ can be enhanced repeatedly through the equation, until we reach $u \in C^\alpha(\Omega)$. Finally, the Schauder estimate lifts $u$ to $C^{2+\alpha}(\Omega)$, and hence $u$ is a classical solution. We call this the "Bootstrap Method", as illustrated below.

For each fixed $p < \frac{n+2}{n-2}$, write
\[
p = \frac{n+2}{n-2} - \delta
\]
for some $\delta > 0$.
Since $u \in L^{\frac{2n}{n-2}}(\Omega)$,
\[
u^p \in L^{\frac{2n}{p(n-2)}}(\Omega) = L^{\frac{2n}{(n+2)-\delta(n-2)}}(\Omega).
\]
The equation (3.57) boosts the solution $u$ to $W^{2,q_1}(\Omega)$ with
\[
q_1 = \frac{2n}{(n+2) - \delta(n-2)}.
\]
By the Sobolev embedding
\[
W^{2,q_1}(\Omega) \hookrightarrow L^{\frac{nq_1}{n-2q_1}}(\Omega),
\]
we have $u \in L^{s_1}(\Omega)$ with
\[
s_1 = \frac{nq_1}{n - 2q_1} = \frac{2n}{n-2}\cdot\frac{1}{1-\delta}.
\]
The integrable power of $u$ has been amplified by $\frac{1}{1-\delta}$ ($> 1$) times.
Now $u^p \in L^{q_2}(\Omega)$ with
\[
q_2 = \frac{s_1}{p} = \frac{2n}{(1-\delta)\,[\,n+2-\delta(n-2)\,]}.
\]
And hence, through the equation, we derive $u \in W^{2,q_2}(\Omega)$. By Sobolev embedding, if $2q_2 \ge n$, i.e. if
\[
(1-\delta)\,[\,n+2-\delta(n-2)\,] - 4 \le 0,
\]
then we have either $u \in L^q(\Omega)$ for any $q$, or $u \in C^\alpha(\Omega)$, and we are done. If $2q_2 < n$, then $u \in L^{s_2}(\Omega)$, with
\[
s_2 = \frac{2n}{(1-\delta)\,[\,n+2-\delta(n-2)\,] - 4}.
\]
It is easy to verify that
\[
s_2 \ge \frac{2n}{(1-\delta)(n-2)}\cdot\frac{1}{1-\delta} = s_1\cdot\frac{1}{1-\delta}.
\]
The integrable power of $u$ has again been amplified by at least $\frac{1}{1-\delta}$ times. Continuing this process, after finitely many steps, we boost $u$ to $L^q(\Omega)$ for any $q$, and hence to $C^\alpha(\Omega)$ for some $0 < \alpha < 1$. Finally, by the Schauder estimate, $u \in C^{2+\alpha}(\Omega)$, and therefore it is a classical solution.
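The bookkeeping of this bootstrap is easy to mechanize; a sketch iterating $s \mapsto \frac{n\,(s/p)}{n - 2(s/p)}$ in the subcritical case (the function name is ours):

```python
from fractions import Fraction

def bootstrap(n, delta, max_iter=100):
    # subcritical power p = (n+2)/(n-2) - delta, starting from u in L^{2n/(n-2)}
    p = Fraction(n + 2, n - 2) - Fraction(delta)
    s = Fraction(2 * n, n - 2)
    history = [s]
    for _ in range(max_iter):
        q = s / p                  # u^p lies in L^q, so the equation gives u in W^{2,q}
        if 2 * q >= n:             # borderline or better: u lands in every L^s (or C^alpha)
            return history, True
        s = n * q / (n - 2 * q)    # Sobolev embedding W^{2,q} -> L^{nq/(n-2q)}
        history.append(s)
    return history, False

hist, done = bootstrap(n=5, delta=Fraction(1, 2))
print([str(s) for s in hist], done)  # ['10/3', '20/3'] True
```

For $\delta = 0$ (the critical power) the factor $\frac{1}{1-\delta}$ degenerates to $1$ and the loop would never terminate, which is precisely the failure discussed next.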
From the above argument, one can see that, if $p = \frac{n+2}{n-2}$, the so-called "critical power", then $\delta = 0$, and the integrable power of the solution $u$ cannot be boosted this way. To deal with this situation, we introduce a "Regularity Lifting Method" in the next subsection.
3.3.2 Regularity Lifting Theorem

Here, we present a simple method to boost the regularity of solutions. It has been used extensively, in various forms, in the authors' previous works. The essence of the approach is well known in the analysis community. However, the version we present here contains some new developments; it is much more general and very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, of obtaining regularity. Essentially, it is based on the following Regularity Lifting Theorem.
Let $Z$ be a given vector space, and let $\|\cdot\|_X$ and $\|\cdot\|_Y$ be two norms on $Z$. Define a new norm $\|\cdot\|_Z$ by
\[
\|\cdot\|_Z = \sqrt[p]{\|\cdot\|_X^p + \|\cdot\|_Y^p}.
\]
For simplicity, we assume that $Z$ is complete with respect to the norm $\|\cdot\|_Z$. Let $X$ and $Y$ be the completions of $Z$ under $\|\cdot\|_X$ and $\|\cdot\|_Y$, respectively. Here one can choose $p$, $1 \le p \le \infty$, according to what one needs. It is easy to see that $Z = X \cap Y$.
Theorem 3.3.1 Let $T$ be a contraction map from $X$ into itself and from $Y$ into itself. Assume that $f \in X$, and that there exists a function $g \in Z$ such that $f = Tf + g$. Then $f$ also belongs to $Z$.
Proof. Step 1. First we show that $T : Z \to Z$ is a contraction. Since $T$ is a contraction on $X$, there exists a constant $\theta_1$, $0 < \theta_1 < 1$, such that
\[
\|Th_1 - Th_2\|_X \le \theta_1\,\|h_1 - h_2\|_X.
\]
Similarly, we can find a constant $\theta_2$, $0 < \theta_2 < 1$, such that
\[
\|Th_1 - Th_2\|_Y \le \theta_2\,\|h_1 - h_2\|_Y.
\]
Let $\theta = \max\{\theta_1, \theta_2\}$. Then, for any $h_1, h_2 \in Z$,
\[
\begin{aligned}
\|Th_1 - Th_2\|_Z &= \sqrt[p]{\|Th_1 - Th_2\|_X^p + \|Th_1 - Th_2\|_Y^p} \\
&\le \sqrt[p]{\theta_1^p\,\|h_1 - h_2\|_X^p + \theta_2^p\,\|h_1 - h_2\|_Y^p} \\
&\le \theta\,\|h_1 - h_2\|_Z.
\end{aligned}
\]
Step 2. Since $T : Z \to Z$ is a contraction, given $g \in Z$, we can find a solution $h \in Z$ of $h = Th + g$. Now $T : X \to X$ is also a contraction and $g \in Z \subset X$, so the equation $x = Tx + g$ has a unique solution in $X$. Thus $f = h \in Z$, since both $h$ and $f$ are solutions of $x = Tx + g$ in $X$. $\square$
3.3.3 Applications to PDEs

Now we explain how the Regularity Lifting Theorem proved in the previous subsection can be used to boost the regularity of weak solutions of the equation with critical exponent:
\[
-\Delta u = u^{\frac{n+2}{n-2}}. \tag{3.58}
\]
Still assume that $\Omega$ is a smooth bounded domain in $\mathbb{R}^n$ with $n \ge 3$. Let $u \in H^1_0(\Omega)$ be a weak solution of equation (3.58). Then by Sobolev embedding, $u \in L^{\frac{2n}{n-2}}(\Omega)$.
We can split the right hand side of (3.58) into two parts:
\[
u^{\frac{n+2}{n-2}} = u^{\frac{4}{n-2}}\,u := a(x)\,u.
\]
Then obviously $a(x) \in L^{\frac{n}{2}}(\Omega)$. Hence, more generally, we consider the regularity of weak solutions of the following equation:
\[
-\Delta u = a(x)u + b(x). \tag{3.59}
\]
Theorem 3.3.2 Assume that $a(x), b(x) \in L^{\frac{n}{2}}(\Omega)$. Let $u$ be any weak solution of equation (3.59) in $H^1_0(\Omega)$. Then $u$ is in $L^p(\Omega)$ for any $1 \le p < \infty$.
Remark 3.3.1 Even in the best situation, when $a(x) \equiv 0$, the $W^{2,p}$ estimate only yields $u \in W^{2,\frac{n}{2}}(\Omega)$, and hence $u \in L^p(\Omega)$ for any $p < \infty$ by Sobolev embedding.
Now let's come back to the nonlinear equation (3.58). Assume that $u$ is an $H^1_0(\Omega)$ weak solution. From the above theorem, we first conclude that $u$ is in $L^q(\Omega)$ for any $1 < q < \infty$. Then by our Regularity Theorem 3.2.1, $u$ is in $W^{2,q}(\Omega)$. This implies that $u \in C^{1,\alpha}(\Omega)$ via Sobolev embedding. Finally, from the classical $C^{2,\alpha}$ regularity theory, we arrive at $u \in C^{2,\alpha}(\Omega)$.
Corollary 3.3.1 If $u$ is an $H^1_0(\Omega)$ weak solution of equation (3.58), then $u \in C^{2,\alpha}(\Omega)$, and hence it is a classical solution.
Proof of Theorem 3.3.2.
Let $G(x,y)$ be the Green's function of $-\Delta$ in $\Omega$. Define
\[
(Tf)(x) = (-\Delta)^{-1}f(x) = \int_\Omega G(x,y)\,f(y)\,dy.
\]
Then obviously
\[
0 < G(x,y) < \Gamma(|x-y|) := \frac{C_n}{|x-y|^{n-2}},
\]
and it follows that, for any $q \in (1, \frac{n}{2})$,
\[
\|Tf\|_{L^{\frac{nq}{n-2q}}(\Omega)} \le \|\Gamma * f\|_{L^{\frac{nq}{n-2q}}(\mathbb{R}^n)} \le C\,\|f\|_{L^q(\Omega)}.
\]
Here we may extend $f$ to be zero outside $\Omega$ when necessary.
For a positive number $A$, define
\[
a_A(x) = \begin{cases} a(x), & \text{if } |a(x)| \ge A \\ 0, & \text{otherwise}, \end{cases}
\]
and
\[
a_B(x) = a(x) - a_A(x).
\]
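This splitting is the standard truncation trick: $a_B$ is bounded by $A$, while, by dominated convergence, the $L^{n/2}$ norm of $a_A$ becomes arbitrarily small as $A$ grows. A discrete sketch (the array `a` is made-up sample data):

```python
import numpy as np

a = np.array([0.5, -2.0, 30.0, 1.5, -45.0, 3.0, 100.0, -0.1])  # sample values of a coefficient

def split(a, A):
    big = np.abs(a) >= A
    a_A = np.where(big, a, 0.0)   # the large (possibly unbounded) piece
    a_B = a - a_A                 # the bounded piece: |a_B| < A everywhere
    return a_A, a_B

for A in (10.0, 50.0, 200.0):
    a_A, a_B = split(a, A)
    # the size of a_A shrinks as A grows, while a_B stays bounded by A
    print(A, np.abs(a_A).sum(), np.abs(a_B).max())
```

It is the smallness of the $a_A$ piece that makes the operator $T_A$ below a contraction, while the bounded $a_B$ piece is absorbed into the "good" term $F_A$.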
Let
\[
(T_A u)(x) = \int_\Omega G(x,y)\,a_A(y)\,u(y)\,dy.
\]
Then equation (3.59) can be written as
\[
u(x) = (T_A u)(x) + F_A(x),
\]
where
\[
F_A(x) = \int_\Omega G(x,y)\,[\,a_B(y)u(y) + b(y)\,]\,dy.
\]
We will show that, for any $1 \le p < \infty$,
i) $T_A$ is a contracting operator from $L^p(\Omega)$ to $L^p(\Omega)$ for $A$ large, and
ii) $F_A(x)$ is in $L^p(\Omega)$.
Then, by the Regularity Lifting Theorem, we derive immediately that $u \in L^p(\Omega)$.

i) The Estimate of the Operator $T_A$.
For any $\frac{n}{n-2} < p < \infty$, there is a $q$, $1 < q < \frac{n}{2}$, such that
\[
p = \frac{nq}{n-2q},
\]
and then by the Hölder inequality, we derive
\[
\|T_A u\|_{L^p(\Omega)} \le \|\Gamma * (a_A u)\|_{L^p(\mathbb{R}^n)} \le C\,\|a_A u\|_{L^q(\Omega)} \le C\,\|a_A\|_{L^{\frac{n}{2}}(\Omega)}\,\|u\|_{L^p(\Omega)}.
\]
Since $a(x) \in L^{\frac{n}{2}}(\Omega)$, one can choose a number $A$ large enough that
\[
\|a_A\|_{L^{\frac{n}{2}}(\Omega)} \le \frac{1}{2C},
\]
and hence arrive at
\[
\|T_A u\|_{L^p(\Omega)} \le \frac{1}{2}\,\|u\|_{L^p(\Omega)}.
\]
That is, $T_A : L^p(\Omega) \to L^p(\Omega)$ is a contracting operator.
ii) The Integrability of $F_A(x)$.
(a) First consider
\[
F^1_A(x) := \int_\Omega G(x,y)\,b(y)\,dy.
\]
For any $\frac{n}{n-2} < p < \infty$, choose $1 < q < \frac{n}{2}$ such that $p = \frac{nq}{n-2q}$. Extending $b(x)$ to be zero outside $\Omega$, we have
to be zero outside Ω, we have
F
1
A

L
p
(Ω)
≤ Γ ∗ b
L
p
(R
n
)
≤ Cb
L
q
(Ω)
≤ Cb
L
n
2 (Ω)
< ∞.
Hence
F
1
A
(x) ∈ L
p
(Ω).
(b) Then we estimate
\[
F^2_A(x) = \int_\Omega G(x,y)\,a_B(y)\,u(y)\,dy.
\]
By the boundedness of $a_B(x)$, we have
\[
\|F^2_A\|_{L^p(\Omega)} \le C\,\|a_B u\|_{L^q(\Omega)} \le C\,\|u\|_{L^q(\Omega)}.
\]
Noticing that $u \in L^q(\Omega)$ for any $1 < q \le \frac{2n}{n-2}$, and that
\[
p = \frac{2n}{n-6} \quad \text{when } q = \frac{2n}{n-2},
\]
we conclude that, for the following values of $p$ and dimension $n$:
\[
\begin{cases}
1 < p < \infty, & \text{when } 3 \le n \le 6 \\
1 < p < \frac{2n}{n-6}, & \text{when } n > 6,
\end{cases}
\]
$F_A(x) \in L^p(\Omega)$, and hence $T_A$ is a contracting operator from $L^p(\Omega)$ to $L^p(\Omega)$.
Applying the Regularity Lifting Theorem, we arrive at
\[
\begin{cases}
u \in L^p(\Omega) \text{ for any } 1 < p < \infty, & \text{if } 3 \le n \le 6 \\
u \in L^p(\Omega) \text{ for any } 1 < p < \frac{2n}{n-6}, & \text{if } n > 6.
\end{cases}
\]
The only thing that prevents us from reaching the full range of $p$ is the estimate of $F^2_A(x)$. However, from the starting point, where $u \in L^{\frac{2n}{n-2}}(\Omega)$, we have now arrived at
\[
u \in L^p(\Omega), \quad \text{for any } 1 < p < \frac{2n}{n-6}.
\]
Now, from this point on, through an entirely similar argument as above, we will reach
\[
\begin{cases}
u \in L^p(\Omega) \text{ for any } 1 < p < \infty, & \text{if } 3 \le n \le 10 \\
u \in L^p(\Omega) \text{ for any } 1 < p < \frac{2n}{n-10}, & \text{if } n > 10.
\end{cases}
\]
At each step, we add $4$ to the range of dimensions $n$ for which
\[
u \in L^p(\Omega), \quad 1 \le p < \infty
\]
holds. Continuing this way, we prove our theorem for all dimensions. This completes the proof of the Theorem. $\square$
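The dimension count in the last paragraph can be tabulated: after $k$ rounds the admissible range is $1 < p < \frac{2n}{n-2-4k}$, and the full range appears once $n - 2 - 4k \le 0$. A tiny sketch (the function name is ours):

```python
def rounds_to_full_range(n):
    # number of lifting rounds until 1 < p < infinity holds in dimension n,
    # reading off the denominators n-6, n-10, ... i.e. n - 2 - 4k
    k = 0
    while n - 2 - 4 * k > 0:
        k += 1
    return k

for n in (3, 6, 10, 14, 20):
    print(n, rounds_to_full_range(n))
```

This matches the text: one round covers $3 \le n \le 6$, a second round covers $3 \le n \le 10$, and so on.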
Now we will use a similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem.
Let $L$ be a second order uniformly elliptic operator in divergence form
\[
Lu = -\sum_{i,j=1}^{n} (a_{ij}(x)u_{x_i})_{x_j} + \sum_{i=1}^{n} b_i(x)u_{x_i} + c(x)u. \tag{3.60}
\]
We will prove
Theorem 3.3.3 ($W^{2,p}$ Regularity under Weaker Conditions)
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and $1 < p < \infty$. Assume that

i) $a_{ij}(x) \in W^{1,n+\delta}(\Omega)$ for some $\delta > 0$;

ii)
\[ b_i(x) \in \begin{cases} L^n(\Omega), & \text{if } 1 < p < n \\ L^{n+\delta}(\Omega), & \text{if } p = n \\ L^p(\Omega), & \text{if } n < p < \infty; \end{cases} \]

iii)
\[ c(x) \in \begin{cases} L^{n/2}(\Omega), & \text{if } 1 < p < n/2 \\ L^{n/2+\delta}(\Omega), & \text{if } p = n/2 \\ L^p(\Omega), & \text{if } n/2 < p < \infty. \end{cases} \]

Suppose $f \in L^p(\Omega)$ and $u$ is a $W^{1,p}_0(\Omega)$ weak solution of
\[ Lu = f(x), \quad x \in \Omega. \quad (3.61) \]
Then $u$ is in $W^{2,p}(\Omega)$.
Proof. Write equation (3.61) as
\[ -\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} = -\left[ \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u \right] + f(x). \]
Let
\[ L_o u := -\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j}. \]
By Proposition 3.2.2, we know that the equation $L_o u = 0$ has only the trivial solution, and it follows from Theorem 3.2.3 that the operator $L_o$ is invertible. Denote its inverse by $L_o^{-1}$. Then $L_o^{-1}$ is a bounded linear operator from $L^p(\Omega)$ to $W^{2,p}(\Omega)$.

For each $i$ and each positive number $A$, define
\[ b_{iA}(x) = \begin{cases} b_i(x), & \text{if } |b_i(x)| \geq A \\ 0, & \text{elsewhere}. \end{cases} \]
Let
\[ b_{iB}(x) := b_i(x) - b_{iA}(x). \]
Similarly define $c_A(x)$ and $c_B(x)$.
Now we can rewrite equation (3.61) as
\[ u = T_A u + F_A(u, x), \quad (3.62) \]
where
\[ T_A u = -L_o^{-1} \left[ \sum_i b_{iA}(x) u_{x_i} + c_A(x) u \right] \]
and
\[ F_A(u, x) = -L_o^{-1} \left[ \sum_i b_{iB}(x) u_{x_i} + c_B(x) u \right] + L_o^{-1} f(x). \]
We first show that
\[ F_A(u, \cdot) \in W^{2,p}(\Omega), \quad \forall\, u \in W^{1,p}(\Omega),\ f \in L^p(\Omega). \quad (3.63) \]
Since $b_{iB}(x)$ and $c_B(x)$ are bounded, one can easily verify that
\[ \sum_i b_{iB}(x) u_{x_i} + c_B(x) u \in L^p(\Omega) \]
for any $u \in W^{1,p}(\Omega)$. This implies (3.63).
Then we prove that $T_A$ is a contracting operator from $W^{2,p}(\Omega)$ to $W^{2,p}(\Omega)$.

In fact, by the Hölder inequality and the Sobolev embedding from $W^{2,p}$ into $W^{1,q}$, one can see that, for any $v \in W^{2,p}(\Omega)$,
\[ \|b_{iA}\, Dv\|_{L^p} \leq C \begin{cases} \|b_{iA}\|_{L^n} \|v\|_{W^{2,p}}, & \text{if } 1 < p < n \\ \|b_{iA}\|_{L^{n+\delta}} \|v\|_{W^{2,p}}, & \text{if } p = n \\ \|b_{iA}\|_{L^p} \|v\|_{W^{2,p}}, & \text{if } n < p < \infty, \end{cases} \quad (3.64) \]
and
\[ \|c_A v\|_{L^p} \leq C \begin{cases} \|c_A\|_{L^{n/2}} \|v\|_{W^{2,p}}, & \text{if } 1 < p < n/2 \\ \|c_A\|_{L^{n/2+\delta}} \|v\|_{W^{2,p}}, & \text{if } p = n/2 \\ \|c_A\|_{L^p} \|v\|_{W^{2,p}}, & \text{if } n/2 < p < \infty. \end{cases} \quad (3.65) \]
Under the conditions of the theorem, the first norms on the right hand side of (3.64) and (3.65), i.e. $\|b_{iA}\|_{L^n}$ and $\|c_A\|_{L^{n/2}}$, can be made arbitrarily small if $A$ is sufficiently large. Hence for any $\epsilon > 0$, there exists an $A > 0$ such that
\[ \left\| \sum_i b_{iA}(x) v_{x_i} + c_A(x) v \right\|_{L^p} \leq \epsilon\, \|v\|_{W^{2,p}}, \quad \forall\, v \in W^{2,p}(\Omega). \]
Taking into account that $L_o^{-1}$ is a bounded operator from $L^p(\Omega)$ to $W^{2,p}(\Omega)$, we deduce that $T_A$ is a contracting mapping from $W^{2,p}(\Omega)$ to $W^{2,p}(\Omega)$. It follows from the Contracting Mapping Theorem that there exists a $v \in W^{2,p}(\Omega)$ such that
\[ v = T_A v + F_A(u, x). \]
Now $u$ and $v$ satisfy the same equation. By uniqueness, we must have $u = v$. Therefore $u \in W^{2,p}(\Omega)$. This completes the proof of the theorem. $\square$
3.3.4 Applications to Integral Equations
As another application of the Regularity Lifting Theorem, we consider the following integral equation
\[ u(x) = \int_{\mathbb{R}^n} \frac{1}{|x-y|^{n-\alpha}}\, u(y)^{\frac{n+\alpha}{n-\alpha}}\, dy, \quad x \in \mathbb{R}^n, \quad (3.66) \]
where $\alpha$ is any real number satisfying $0 < \alpha < n$.

It arises as an Euler-Lagrange equation for a functional under a constraint in the context of the Hardy-Littlewood-Sobolev inequalities.

It is also closely related to the following family of semilinear partial differential equations
\[ (-\Delta)^{\alpha/2} u = u^{(n+\alpha)/(n-\alpha)}, \quad x \in \mathbb{R}^n, \ n \geq 3. \quad (3.67) \]
Actually, one can prove (see [CLO]) that the integral equation (3.66) and the PDE (3.67) are equivalent.
In the special case $\alpha = 2$, (3.67) becomes
\[ -\Delta u = u^{(n+2)/(n-2)}, \quad x \in \mathbb{R}^n, \]
which is the well-known Yamabe equation.

In the context of the Hardy-Littlewood-Sobolev inequality, it is natural to start with $u \in L^{\frac{2n}{n-\alpha}}(\mathbb{R}^n)$. We will use the Regularity Lifting Theorem to boost $u$ to $L^q(\mathbb{R}^n)$ for any $1 < q < \infty$, and hence to $L^\infty(\mathbb{R}^n)$. More generally, we consider the following equation
\[ u(x) = \int_{\mathbb{R}^n} \frac{K(y)\, |u(y)|^{p-1} u(y)}{|x-y|^{n-\alpha}}\, dy \quad (3.68) \]
for some $1 < p < \infty$. We prove
Theorem 3.3.4 Let $u$ be a solution of (3.68). Assume that
\[ u \in L^{q_o}(\mathbb{R}^n) \ \text{ for some } q_o > \frac{n}{n-\alpha}, \quad (3.69) \]
\[ \int_{\mathbb{R}^n} \left| K(y)\, |u(y)|^{p-1} \right|^{\frac{n}{\alpha}} dy < \infty, \quad \text{and} \quad |K(y)| \leq C \left( 1 + \frac{1}{|y|^{\gamma}} \right) \ \text{ for some } \gamma < \alpha. \quad (3.70) \]
Then $u$ is in $L^q(\mathbb{R}^n)$ for any $1 < q < \infty$.
Remark 3.3.2 i) If
\[ p > \frac{n}{n-\alpha}, \quad |K(x)| \leq M, \quad \text{and} \quad u \in L^{\frac{n(p-1)}{\alpha}}(\mathbb{R}^n), \quad (3.71) \]
then conditions (3.69) and (3.70) are satisfied.

ii) The last condition in (3.71) is somewhat sharp, in the sense that if it is violated, then equation (3.68) with $K(x) \equiv 1$ possesses singular solutions such as
\[ u(x) = \frac{c}{|x|^{\frac{\alpha}{p-1}}}. \]
Proof of Theorem 3.3.4: Define the linear operator
\[ T_v w = \int_{\mathbb{R}^n} \frac{K(y)\, |v(y)|^{p-1}\, w(y)}{|x-y|^{n-\alpha}}\, dy. \]
For any real number $a > 0$, define
\[ u_a(x) = \begin{cases} u(x), & \text{if } |u(x)| > a \text{ or } |x| > a \\ 0, & \text{otherwise}. \end{cases} \]
Let $u_b(x) = u(x) - u_a(x)$.
Then, since $u$ satisfies equation (3.68), one can verify that $u_a$ satisfies the equation
\[ u_a = T_{u_a} u_a + g(x) \quad (3.72) \]
with the function
\[ g(x) = \int_{\mathbb{R}^n} \frac{K(y)\, |u_b(y)|^{p-1}\, u_b(y)}{|x-y|^{n-\alpha}}\, dy - u_b(x). \]
Under the second part of condition (3.70), it is obvious that
\[ g(x) \in L^\infty \cap L^{q_o}. \]
For any $q > \frac{n}{n-\alpha}$, we first apply the Hardy-Littlewood-Sobolev inequality to obtain
\[ \|T_{u_a} w\|_{L^q} \leq C(\alpha, n, q)\, \left\| K\, |u_a|^{p-1}\, w \right\|_{L^{\frac{nq}{n+\alpha q}}}. \quad (3.73) \]
Then, writing $H(x) = K(x)\, |u_a(x)|^{p-1}$, we apply the Hölder inequality to the right hand side of the above inequality:
\[ \left( \int_{\mathbb{R}^n} |H(y)|^{\frac{nq}{n+\alpha q}} |w(y)|^{\frac{nq}{n+\alpha q}}\, dy \right)^{\frac{n+\alpha q}{nq}} \leq \left[ \left( \int_{\mathbb{R}^n} |H(y)|^{\frac{nq}{n+\alpha q} \cdot r}\, dy \right)^{\frac{1}{r}} \left( \int_{\mathbb{R}^n} |w(y)|^{\frac{nq}{n+\alpha q} \cdot s}\, dy \right)^{\frac{1}{s}} \right]^{\frac{n+\alpha q}{nq}} = \left( \int_{\mathbb{R}^n} |H(y)|^{\frac{n}{\alpha}}\, dy \right)^{\frac{\alpha}{n}} \|w\|_{L^q}. \quad (3.74) \]
Here we have chosen
\[ s = \frac{n+\alpha q}{n} \quad \text{and} \quad r = \frac{n+\alpha q}{\alpha q}. \]
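That this choice of $r$ and $s$ is admissible can be verified symbolically: $1/r + 1/s = 1$, and the two products $\frac{nq}{n+\alpha q} \cdot r$ and $\frac{nq}{n+\alpha q} \cdot s$ recover exactly the exponents $n/\alpha$ and $q$. A quick sympy check (a sketch; the variable names are ours):

```python
import sympy as sp

n, q, alpha = sp.symbols('n q alpha', positive=True)

base = n * q / (n + alpha * q)        # the exponent nq/(n + alpha*q) in (3.74)
s = (n + alpha * q) / n
r = (n + alpha * q) / (alpha * q)

assert sp.simplify(1 / r + 1 / s - 1) == 0        # Hölder-conjugate pair
assert sp.simplify(base * s - q) == 0             # recovers the L^q norm of w
assert sp.simplify(base * r - n / alpha) == 0     # recovers the n/alpha integral of H
print("exponent choices are consistent")
```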
It follows from (3.73) and (3.74) that
\[ \|T_{u_a} w\|_{L^q} \leq C(\alpha, n, q) \left( \int \left| K(y)\, |u_a(y)|^{p-1} \right|^{\frac{n}{\alpha}}\, dy \right)^{\frac{\alpha}{n}} \|w\|_{L^q}. \quad (3.75) \]
By (3.70) and (3.75), we deduce, for sufficiently large $a$,
\[ \|T_{u_a} w\|_{L^q} \leq \frac{1}{2} \|w\|_{L^q}. \quad (3.76) \]
Applying (3.76) to both the case $q = q_o$ and the case $q = p_o > q_o$, and by the Contracting Mapping Theorem, we see that the equation
\[ w = T_{u_a} w + g(x) \quad (3.77) \]
has a unique solution in both $L^{q_o}$ and $L^{p_o} \cap L^{q_o}$. From (3.72), $u_a$ is a solution of (3.77) in $L^{q_o}$. Let $w$ be the solution of (3.77) in $L^{p_o} \cap L^{q_o}$; then $w$ is also a solution in $L^{q_o}$. By uniqueness, we must have $u_a = w \in L^{p_o} \cap L^{q_o}$ for any $p_o > \frac{n}{n-\alpha}$. The same then holds for $u$. This completes the proof of the Theorem. $\square$
4

Preliminary Analysis on Riemannian Manifolds

4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
4.4.1 Curvature of Plane Curves
4.4.2 Curvature of Surfaces in $\mathbb{R}^3$
4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator
4.5.2 Integrals
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
4.6 Sobolev Embeddings
According to Einstein, the Universe we live in is a curved space. We call this curved space a manifold, which is a generalization of the flat Euclidean space. Roughly speaking, each neighborhood of a point on a manifold is homeomorphic to an open set in Euclidean space, and the manifold as a whole is obtained by pasting together pieces of Euclidean spaces. In this section, we will introduce the most basic concepts of differentiable manifolds, tangent spaces, Riemannian metrics, and curvatures. We will try to be as intuitive and brief as possible. For more rigorous arguments and detailed discussions, please see, for example, the book Riemannian Geometry by do Carmo [Ca].
4.1 Differentiable Manifolds

A simple example of a two dimensional differentiable manifold is a regular surface in $\mathbb{R}^3$. Intuitively, it is a union of open sets of $\mathbb{R}^2$, organized in such a way that at the intersection of any two open sets the change from one to the other can be made in a differentiable manner. More precisely, we have
Definition 4.1.1 (Differentiable Manifolds)
A differentiable $n$-dimensional manifold $M$ is

(i) a union of open sets, $M = \bigcup_\alpha V_\alpha$, together with a family of homeomorphisms $\phi_\alpha$ from $V_\alpha$ to an open set $\phi_\alpha(V_\alpha)$ in $\mathbb{R}^n$, such that

(ii) for any pair $\alpha$ and $\beta$ with $V_\alpha \cap V_\beta = U \neq \emptyset$, the images of the intersection, $\phi_\alpha(U)$ and $\phi_\beta(U)$, are open sets in $\mathbb{R}^n$ and the mappings $\phi_\beta \circ \phi_\alpha^{-1}$ are differentiable;

(iii) the family $\{V_\alpha, \phi_\alpha\}$ is maximal relative to the conditions (i) and (ii).
A manifold $M$ is a union of open sets. In each open set $U \subset M$, one can define a coordinate chart. In fact, let $\phi_U$ be the homeomorphism from $U$ to $\phi_U(U)$ in $\mathbb{R}^n$. For each $p \in U$, if $\phi_U(p) = (x_1, x_2, \cdots, x_n)$ in $\mathbb{R}^n$, then we define the coordinates of $p$ to be $(x_1, x_2, \cdots, x_n)$. These are called the local coordinates of $p$, and $(U, \phi_U)$ is called a coordinate chart. If a family of coordinate charts that covers $M$ is well made, so that the transition from one chart to another is differentiable as in condition (ii), then it is called a differentiable structure of $M$. Given a differentiable structure on $M$, one can easily complete it into a maximal one by taking the union of all the coordinate charts of $M$ that are compatible (in the sense of condition (ii)) with the ones in the given structure.

A trivial example of an $n$-dimensional manifold is the Euclidean space $\mathbb{R}^n$, with the differentiable structure given by the identity. The following are two nontrivial examples.
Example 1. The $n$-dimensional unit sphere
\[ S^n = \{ x \in \mathbb{R}^{n+1} \mid x_1^2 + \cdots + x_{n+1}^2 = 1 \}. \]
Take the unit circle $S^1$ for instance; we can use the following four coordinate charts:
\[ V_1 = \{ x \in S^1 \mid x_2 > 0 \}, \quad \phi_1(x) = x_1, \]
\[ V_2 = \{ x \in S^1 \mid x_2 < 0 \}, \quad \phi_2(x) = x_1, \]
\[ V_3 = \{ x \in S^1 \mid x_1 > 0 \}, \quad \phi_3(x) = x_2, \]
\[ V_4 = \{ x \in S^1 \mid x_1 < 0 \}, \quad \phi_4(x) = x_2. \]
Obviously, $S^1$ is the union of the four open sets $V_1$, $V_2$, $V_3$, and $V_4$. On the intersection of any two open sets, say on $V_1 \cap V_3$, we have
\[ x_2 = \sqrt{1 - x_1^2}, \ x_1 > 0; \qquad x_1 = \sqrt{1 - x_2^2}, \ x_2 > 0. \]
They are both differentiable functions. The same is true on the other intersections. Therefore, with this differentiable structure, $S^1$ is a 1-dimensional differentiable manifold.
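On the overlap $V_1 \cap V_3$, the transition map $\phi_3 \circ \phi_1^{-1}$ sends $x_1$ to $\sqrt{1 - x_1^2}$, and one can check numerically that the two charts agree there. A minimal Python sketch (the function names are ours):

```python
import math

def phi1_inv(x1):
    """Inverse of the chart phi_1 on V_1 (where x_2 > 0): recover the point on S^1."""
    return (x1, math.sqrt(1.0 - x1 * x1))

def phi3(p):
    """The chart phi_3 on V_3 (where x_1 > 0): keep the x_2 coordinate."""
    return p[1]

# transition phi_3 o phi_1^{-1} on the overlap 0 < x_1 < 1
x1 = 0.6
print(round(phi3(phi1_inv(x1)), 12))   # sqrt(1 - 0.36) = 0.8
```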
Example 2. The $n$-dimensional real projective space $P^n(\mathbb{R})$. It is the set of straight lines of $\mathbb{R}^{n+1}$ which pass through the origin $(0, \cdots, 0) \in \mathbb{R}^{n+1}$, or the quotient space of $\mathbb{R}^{n+1} \setminus \{0\}$ by the equivalence relation
\[ (x_1, \cdots, x_{n+1}) \sim (\lambda x_1, \cdots, \lambda x_{n+1}), \quad \lambda \in \mathbb{R}, \ \lambda \neq 0. \]
We denote the points in $P^n(\mathbb{R})$ by $[x]$. Observe that if $x_i \neq 0$, then
\[ [x_1, \cdots, x_{n+1}] = [x_1/x_i, \cdots, x_{i-1}/x_i, 1, x_{i+1}/x_i, \cdots, x_{n+1}/x_i]. \]
For $i = 1, 2, \cdots, n+1$, let
\[ V_i = \{ [x] \mid x_i \neq 0 \}. \]
Apparently,
\[ P^n(\mathbb{R}) = \bigcup_{i=1}^{n+1} V_i. \]
Geometrically, $V_i$ is the set of straight lines of $\mathbb{R}^{n+1}$ which pass through the origin and do not belong to the hyperplane $x_i = 0$.

Define
\[ \phi_i([x]) = (\xi^i_1, \cdots, \xi^i_{i-1}, \xi^i_{i+1}, \cdots, \xi^i_{n+1}), \]
where $\xi^i_k = x_k / x_i$ ($i \neq k$) and $1 \leq i \leq n+1$.

On the intersection $V_i \cap V_j$, the changes of coordinates
\[ \xi^j_k = \frac{\xi^i_k}{\xi^i_j} \ \ (k \neq i, j), \qquad \xi^j_i = \frac{1}{\xi^i_j} \]
are obviously smooth; thus $\{V_i, \phi_i\}$ is a differentiable structure of $P^n(\mathbb{R})$, and hence $P^n(\mathbb{R})$ is a differentiable manifold.
4.2 Tangent Spaces

We know that at every point of a regular curve or a regular surface, there exists a tangent line or a tangent plane. Similarly, given a differentiable structure on a manifold, we can define a tangent space at each point. For a regular surface $S$ in $\mathbb{R}^3$, at each given point $p$, the tangent plane is the space of all tangent vectors, and each tangent vector is defined as a "velocity" in the ambient space $\mathbb{R}^3$. Since on a differentiable manifold there is no such ambient space, we have to find a characteristic property of tangent vectors which does not involve the concept of velocity. To this end, let's consider a smooth curve in $\mathbb{R}^n$:
\[ \gamma(t) = (x_1(t), \cdots, x_n(t)), \quad t \in (-\epsilon, \epsilon), \quad \text{with } \gamma(0) = p. \]
The tangent vector of $\gamma$ at the point $p$ is
\[ v = (x_1'(0), \cdots, x_n'(0)). \]
Let $f(x)$ be a smooth function near $p$. Then the directional derivative of $f$ along the vector $v \in \mathbb{R}^n$ can be expressed as
\[ \frac{d}{dt} (f \circ \gamma) \Big|_{t=0} = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p)\, \frac{d x_i}{dt}(0) = \left( \sum_i x_i'(0) \frac{\partial}{\partial x_i} \right) f. \]
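This identity is easy to confirm on a concrete example: pick a curve and a smooth function, and compare the two sides symbolically. A sympy sketch (the sample curve and function are ours):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

gamma = (1 + 3 * t, 2 - t**2)      # gamma(0) = (1, 2), velocity v = (3, 0)
f = x**2 * y + sp.sin(y)           # a sample smooth function

# left side: d/dt (f o gamma) at t = 0
lhs = sp.diff(f.subs({x: gamma[0], y: gamma[1]}), t).subs(t, 0)

# right side: sum_i x_i'(0) * (df/dx_i)(p) at p = (1, 2)
vel = [sp.diff(comp, t).subs(t, 0) for comp in gamma]
grad = [sp.diff(f, var).subs({x: 1, y: 2}) for var in (x, y)]
rhs = sum(a * b for a, b in zip(vel, grad))

assert sp.simplify(lhs - rhs) == 0
print(lhs)   # 12
```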
Hence the directional derivative is an operator on differentiable functions that depends uniquely on the vector $v$. We can identify $v$ with this operator and thus generalize the concept of tangent vectors to differentiable manifolds $M$.

Definition 4.2.1 A differentiable function $\gamma : (-\epsilon, \epsilon) \to M$ is called a (differentiable) curve in $M$.

Let $D$ be the set of all functions that are differentiable at the point $p = \gamma(0) \in M$. The tangent vector to the curve $\gamma$ at $t = 0$ is the operator (or function) $\gamma'(0) : D \to \mathbb{R}$ given by
\[ \gamma'(0) f = \frac{d}{dt} (f \circ \gamma) \Big|_{t=0}. \]
A tangent vector at $p$ is the tangent vector of some curve $\gamma$ at $t = 0$ with $\gamma(0) = p$. The set of all tangent vectors at $p$ is called the tangent space at $p$ and denoted by $T_p M$.
Let $(x_1, \cdots, x_n)$ be the coordinates in a local chart $(U, \phi)$ that covers $p$, with $\phi(p) = 0 \in \phi(U)$. Let $\gamma(t) = (x_1(t), \cdots, x_n(t))$ be a curve in $M$ with $\gamma(0) = p$. Then
\[ \gamma'(0) f = \frac{d}{dt} f(\gamma(t)) \Big|_{t=0} = \frac{d}{dt} f(x_1(t), \cdots, x_n(t)) \Big|_{t=0} = \sum_i x_i'(0) \frac{\partial f}{\partial x_i} = \left( \sum_i x_i'(0) \frac{\partial}{\partial x_i} \Big|_p \right) f. \]
This shows that the vector $\gamma'(0)$ can be expressed as a linear combination of the $\frac{\partial}{\partial x_i}\big|_p$, i.e.
\[ \gamma'(0) = \sum_i x_i'(0) \frac{\partial}{\partial x_i} \Big|_p. \]
Here $\frac{\partial}{\partial x_i}\big|_p$ is the tangent vector at $p$ of the "coordinate curve":
\[ x_i \mapsto \phi^{-1}(0, \cdots, 0, x_i, 0, \cdots, 0); \]
and
\[ \frac{\partial}{\partial x_i} \Big|_p f = \frac{\partial}{\partial x_i} f(\phi^{-1}(0, \cdots, 0, x_i, 0, \cdots, 0)) \Big|_{x_i = 0}. \]
One can see that the tangent space $T_p M$ is an $n$-dimensional linear space with the basis
\[ \frac{\partial}{\partial x_1} \Big|_p, \cdots, \frac{\partial}{\partial x_n} \Big|_p. \]
The dual space of the tangent space $T_p M$ is called the cotangent space, and denoted by $T^*_p M$. We can realize it in the following way.

We first introduce the differential of a differentiable mapping.
Definition 4.2.2 Let $M$ and $N$ be two differentiable manifolds and let $f : M \to N$ be a differentiable mapping. For every $p \in M$ and for each $v \in T_p M$, choose a differentiable curve $\gamma : (-\epsilon, \epsilon) \to M$ with $\gamma(0) = p$ and $\gamma'(0) = v$. Let $\beta = f \circ \gamma$. Define the mapping
\[ df_p : T_p M \to T_{f(p)} N \]
by
\[ df_p(v) = \beta'(0). \]
We call $df_p$ the differential of $f$. Obviously, it is a linear mapping from one tangent space, $T_p M$, to the other, $T_{f(p)} N$; and one can show that it does not depend on the choice of $\gamma$ (see [Ca]). When $N = \mathbb{R}^1$, the collection of all such differentials forms the cotangent space. From the definition, one can derive that
\[ df_p \left( \frac{\partial}{\partial x_i} \Big|_p \right) = \frac{\partial}{\partial x_i} \Big|_p f. \]
In particular, for the differential $(dx_i)_p$ of the coordinate function $x_i$, we have
\[ (dx_i)_p \left( \frac{\partial}{\partial x_j} \Big|_p \right) = \delta_{ij}. \]
Consequently,
\[ df_p = \sum_i \frac{\partial f}{\partial x_i} \Big|_p (dx_i)_p. \]
From here we can see that
\[ (dx_1)_p, (dx_2)_p, \cdots, (dx_n)_p \]
form a basis of the cotangent space $T^*_p M$ at the point $p$.

Definition 4.2.3 The tangent space of $M$ is
\[ TM = \bigcup_{p \in M} T_p M, \]
and the cotangent space of $M$ is
\[ T^* M = \bigcup_{p \in M} T^*_p M. \]
4.3 Riemannian Metrics

To measure arc length, area, or volume on a manifold, we need to introduce some kind of measurement, or 'metric'. For a surface $S$ in the three dimensional Euclidean space $\mathbb{R}^3$, there is a natural way of measuring the length of vectors tangent to $S$, which is simply the length of the vectors in the ambient space $\mathbb{R}^3$. This can be expressed in terms of the inner product $\langle \cdot, \cdot \rangle$ in $\mathbb{R}^3$. Given a curve $\gamma(t)$ on $S$ for $t \in [a, b]$, its length is the integral
\[ \int_a^b |\gamma'(t)|\, dt, \]
where the length of the velocity vector $\gamma'(t)$ is given by the inner product in $\mathbb{R}^3$:
\[ |\gamma'(t)| = \sqrt{\langle \gamma'(t), \gamma'(t) \rangle}. \]
The definition of the inner product $\langle \cdot, \cdot \rangle$ enables us to measure not only the lengths of curves on $S$, but also the areas of domains on $S$, as well as other quantities in geometry.
Now we generalize this concept to differentiable manifolds without using the ambient space. More precisely, we have

Definition 4.3.1 A Riemannian metric on a differentiable manifold $M$ is a correspondence which associates to each point $p$ of $M$ an inner product $\langle \cdot, \cdot \rangle_p$ on the tangent space $T_p M$, which varies differentiably; that is,
\[ g_{ij}(q) \equiv \left\langle \frac{\partial}{\partial x_i} \Big|_q, \frac{\partial}{\partial x_j} \Big|_q \right\rangle_q \]
is a differentiable function of $q$ for all $q$ near $p$.

A differentiable manifold with a given Riemannian metric will be called a Riemannian manifold.

It is clear that this definition does not depend on the choice of coordinate system. Also, one can prove that (see [Ca])

Proposition 4.3.1 A differentiable manifold $M$ has a Riemannian metric.
Now we show how a Riemannian metric can be used to measure the length of a curve on a manifold $M$.

Let $I$ be an open interval in $\mathbb{R}^1$, and let $\gamma : I \to M$ be a differentiable mapping (curve) on $M$. For each $t \in I$, $d\gamma$ is a linear mapping from $T_t I$ to $T_{\gamma(t)} M$, and $\gamma'(t) \equiv \frac{d\gamma}{dt} \equiv d\gamma\left(\frac{d}{dt}\right)$ is a tangent vector in $T_{\gamma(t)} M$. The restriction of the curve $\gamma$ to a closed interval $[a, b] \subset I$ is called a segment. We define the length of the segment by
\[ \int_a^b \langle \gamma'(t), \gamma'(t) \rangle^{1/2}\, dt. \]
Here the arc length element is
\[ ds = \langle \gamma'(t), \gamma'(t) \rangle^{1/2}\, dt. \quad (4.1) \]
As we mentioned in the previous section, let $(x_1, \cdots, x_n)$ be the coordinates in a local chart $(U, \phi)$ that covers $p = \gamma(t) = (x_1(t), \cdots, x_n(t))$. Then
\[ \gamma'(t) = \sum_i x_i'(t) \frac{\partial}{\partial x_i} \Big|_p. \]
Consequently,
\[ ds^2 = \langle \gamma'(t), \gamma'(t) \rangle\, dt^2 = \sum_{i,j=1}^n \left\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \right\rangle_p x_i'(t)\,dt\ x_j'(t)\,dt = \sum_{i,j=1}^n g_{ij}(p)\, dx_i\, dx_j. \]
This expresses the length element $ds$ in terms of the metric $g_{ij}$.
To derive the formula for the volume of a region (an open connected subset) $D$ in $M$ in terms of the metric $g_{ij}$, let's begin with the volume of the parallelepiped formed by the tangent vectors. Let $\{e_1, \cdots, e_n\}$ be an orthonormal basis of $T_p M$. Let
\[ \frac{\partial}{\partial x_i}(p) = \sum_j a_{ij} e_j, \quad i = 1, \cdots, n. \]
Then
\[ g_{ik}(p) = \left\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_k} \right\rangle_p = \sum_{j,l} a_{ij} a_{kl} \langle e_j, e_l \rangle = \sum_j a_{ij} a_{kj}. \quad (4.2) \]
We know that the volume of the parallelepiped formed by $\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}$ is the determinant $\det(a_{ij})$, and by virtue of (4.2), it is the same as
\[ \sqrt{\det(g_{ij})}. \]
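Behind (4.2) is the matrix identity $g = A A^T$ with $A = (a_{ij})$, so $\det(g_{ij}) = \det(a_{ij})^2$, and the parallelepiped volume $|\det(a_{ij})|$ equals $\sqrt{\det(g_{ij})}$. A quick numerical check (the matrix $A$ here is an arbitrary sample):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # a_{ij}: coordinates of d/dx_i in the basis {e_j}
G = A @ A.T                       # g_{ik} = sum_j a_{ij} a_{kj}, as in (4.2)

vol = abs(np.linalg.det(A))       # volume of the parallelepiped
assert np.isclose(vol, np.sqrt(np.linalg.det(G)))
print("det(a_ij) matches sqrt(det g_ij)")
```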
Assume that the region $D$ is covered by a coordinate chart $(U, \phi)$ with coordinates $(x_1, \cdots, x_n)$. From the above arguments, it is natural to define the volume of $D$ as
\[ vol(D) = \int_{\phi(U)} \sqrt{\det(g_{ij})}\, dx_1 \cdots dx_n. \]
A trivial example is $M = \mathbb{R}^n$, the $n$-dimensional Euclidean space, with
\[ \frac{\partial}{\partial x_i} = e_i = (0, \cdots, 1, \cdots, 0). \]
The metric is given by
\[ g_{ij} = \langle e_i, e_j \rangle = \delta_{ij}. \]
A less trivial example is $S^2$, the 2-dimensional unit sphere with the standard metric, i.e., with the metric inherited from the ambient space $\mathbb{R}^3 = \{(x, y, z)\}$. We will use the usual spherical coordinates $(\theta, \psi)$ with
\[ x = \sin\theta \cos\psi, \quad y = \sin\theta \sin\psi, \quad z = \cos\theta \]
to illustrate how we use the measurement (inner product) in $\mathbb{R}^3$ to derive the expression of the standard metric $g_{ij}$ in terms of the local coordinates $(\theta, \psi)$ on $S^2$.

Let $p$ be a point on $S^2$ covered by a local coordinate chart $(U, \phi)$ with coordinates $(\theta, \psi)$. We first derive the expressions for $\frac{\partial}{\partial\theta}\big|_p$ and $\frac{\partial}{\partial\psi}\big|_p$, the tangent vectors at the point $p = \phi^{-1}(\theta, \psi)$ of the coordinate curves $\psi = \text{constant}$ and $\theta = \text{constant}$, respectively.
In the coordinates of $\mathbb{R}^3$, we have
\[ \frac{\partial}{\partial\theta} \Big|_p = \left( \frac{\partial x}{\partial\theta}, \frac{\partial y}{\partial\theta}, \frac{\partial z}{\partial\theta} \right) = (\cos\theta\cos\psi, \cos\theta\sin\psi, -\sin\theta), \]
and
\[ \frac{\partial}{\partial\psi} \Big|_p = \left( \frac{\partial x}{\partial\psi}, \frac{\partial y}{\partial\psi}, \frac{\partial z}{\partial\psi} \right) = (-\sin\theta\sin\psi, \sin\theta\cos\psi, 0). \]
It follows that
\[ g_{11}(p) = \left\langle \frac{\partial}{\partial\theta}, \frac{\partial}{\partial\theta} \right\rangle_p = \cos^2\theta\cos^2\psi + \cos^2\theta\sin^2\psi + \sin^2\theta = 1, \]
\[ g_{12}(p) = g_{21}(p) = \left\langle \frac{\partial}{\partial\theta}, \frac{\partial}{\partial\psi} \right\rangle_p = 0, \]
and
\[ g_{22}(p) = \left\langle \frac{\partial}{\partial\psi}, \frac{\partial}{\partial\psi} \right\rangle_p = \sin^2\theta. \]
Consequently, the length element is
\[ ds = \sqrt{g_{11}\, d\theta^2 + 2 g_{12}\, d\theta\, d\psi + g_{22}\, d\psi^2} = \sqrt{d\theta^2 + \sin^2\theta\, d\psi^2}, \]
and the area element is
\[ dA = \sqrt{\det(g_{ij})}\, d\theta\, d\psi = \sin\theta\, d\theta\, d\psi. \]
These conform with our knowledge of length and area in spherical coordinates.
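The whole computation can be reproduced symbolically from the embedding of $S^2$ in $\mathbb{R}^3$. A sympy sketch (the symbol names are ours):

```python
import sympy as sp

theta, psi = sp.symbols('theta psi')
X = sp.Matrix([sp.sin(theta) * sp.cos(psi),
               sp.sin(theta) * sp.sin(psi),
               sp.cos(theta)])          # the embedding of S^2 into R^3

Xt = X.diff(theta)                      # tangent vector d/dtheta
Xp = X.diff(psi)                        # tangent vector d/dpsi

g11 = sp.simplify(Xt.dot(Xt))           # expect 1
g12 = sp.simplify(Xt.dot(Xp))           # expect 0
g22 = sp.simplify(Xp.dot(Xp))           # expect sin(theta)**2
print(g11, g12, g22)
```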
4.4 Curvature
4.4.1 Curvature of Plane Curves
Consider curves on a plane. Intuitively, we feel that a straight line does not bend at all, and that a circle of radius one bends more than a circle of radius 2. To measure the extent of bending of a curve, or the amount it deviates from being straight, we introduce curvature.

Let $p$ be a point on a curve, and $T(p)$ the unit tangent vector at $p$. Let $q$ be a nearby point. Let $\Delta\alpha$ be the amount of change of the angle from $T(p)$ to $T(q)$, and $\Delta s$ the arc length between $p$ and $q$ on the curve. We define the curvature at $p$ as
\[ k(p) = \lim_{q \to p} \frac{\Delta\alpha}{\Delta s} = \frac{d\alpha}{ds}(p). \]
If a plane curve is given parametrically as $c(t) = (x(t), y(t))$, then the unit tangent vector at $c(t)$ is
\[ T(t) = \frac{(x'(t), y'(t))}{\sqrt{[x'(t)]^2 + [y'(t)]^2}}, \]
and
\[ ds = \sqrt{[x'(t)]^2 + [y'(t)]^2}\, dt. \]
By a straightforward calculation, one can verify that
\[ k(c(t)) = \frac{|dT|}{ds} = \frac{x'(t)\, y''(t) - y'(t)\, x''(t)}{\{[x'(t)]^2 + [y'(t)]^2\}^{3/2}}. \]
If a plane curve is given explicitly as $y = f(x)$, then in the above formula, replacing $t$ by $x$ and $y(t)$ by $f(x)$, we find that the curvature at the point $(x, f(x))$ is
\[ k(x, f(x)) = \frac{f''(x)}{\{1 + [f'(x)]^2\}^{3/2}}. \quad (4.3) \]
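Applying the parametric formula to a circle of radius $R$, $c(t) = (R\cos t, R\sin t)$, gives $k = 1/R$, confirming that a circle of radius one bends twice as much as a circle of radius 2. A sympy sketch:

```python
import sympy as sp

t, R = sp.symbols('t R', positive=True)
x = R * sp.cos(t)    # circle of radius R
y = R * sp.sin(t)

num = sp.diff(x, t) * sp.diff(y, t, 2) - sp.diff(y, t) * sp.diff(x, t, 2)
den = (sp.diff(x, t)**2 + sp.diff(y, t)**2)**sp.Rational(3, 2)
k = sp.simplify(num / den)
print(k)   # 1/R
```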
4.4.2 Curvature of Surfaces in $\mathbb{R}^3$

1. Principal Curvature

Let $S$ be a surface in $\mathbb{R}^3$, $p$ a point on $S$, and $N(p)$ a normal vector of $S$ at $p$. Given a tangent vector $T(p)$ at $p$, consider the intersection of the surface with the plane containing $T(p)$ and $N(p)$. This intersection is a plane curve, and its curvature is taken as the absolute value of the normal curvature, which is positive if the curve turns in the same direction as the surface's chosen normal, and negative otherwise. The normal curvature varies as the tangent vector $T(p)$ changes. The maximum and minimum values of the normal curvature at the point $p$ are called the principal curvatures, and are usually denoted by $k_1(p)$ and $k_2(p)$.
2. Gaussian Curvature. One way to measure the extent of bending of a surface is by the Gaussian curvature. It is defined as the product of the two principal curvatures:
\[ K(p) = k_1(p)\, k_2(p). \]
From this, one can see that no matter which side the normal is chosen from (say, for a closed surface, one may choose outer normals or inner normals), the Gaussian curvature of a sphere is positive, that of a one-sheeted hyperboloid is negative, and that of a cylinder or a plane is zero. A sphere of radius $R$ has Gaussian curvature $\frac{1}{R^2}$.
Let $f(x, y)$ be a differentiable function. Consider a surface $S$ in $\mathbb{R}^3$ defined as the graph of $x_3 = f(x_1, x_2)$. Assume that the origin is on $S$, and that the $x_1 x_2$-plane is tangent to $S$; that is,
\[ f(0,0) = 0 \quad \text{and} \quad f_1(0,0) = 0 = f_2(0,0), \]
where, for simplicity, we write $f_i = \frac{\partial f}{\partial x_i}$, and we will use $f_{ij}$ to denote $\frac{\partial^2 f}{\partial x_i \partial x_j}$.
We calculate the Gaussian curvature of $S$ at the origin $0 = (0, 0, 0)$.

Let $u = (u_1, u_2)$ be a unit vector in the tangent plane. Then by (4.3), the normal curvature $k_n(u)$ in the direction $u$ is
\[ \frac{\frac{\partial^2 f}{\partial u^2}}{\left[ \left( \frac{\partial f}{\partial u} \right)^2 + 1 \right]^{3/2}}(0), \]
where $\frac{\partial f}{\partial u}$ and $\frac{\partial^2 f}{\partial u^2}$ are the first and second directional derivatives of $f$ in the direction $u$. Using the assumption that $f_1(0) = 0 = f_2(0)$, we arrive at
\[ k_n(u) = (f_{11} u_1^2 + 2 f_{12} u_1 u_2 + f_{22} u_2^2)(0) = u^T F u, \]
where
\[ F = \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix}(0), \]
and $u^T$ denotes the transpose of $u$.
Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of the matrix $F$ with corresponding unit eigenvectors $e_1$ and $e_2$, i.e.
\[ F e_i = \lambda_i e_i, \quad i = 1, 2. \quad (4.4) \]
Since the matrix $F$ is symmetric, the two eigenvectors $e_1$ and $e_2$ are orthogonal, and it is obvious that
\[ e_i^T F e_i = \lambda_i, \quad i = 1, 2. \]
Let $\theta$ be the angle between $u$ and $e_1$. Then we can express
\[ u = \cos\theta\, e_1 + \sin\theta\, e_2. \]
It follows that
\[ k_n(u) = u^T F u = \cos^2\theta\, \lambda_1 + \sin^2\theta\, \lambda_2. \]
Assume that $\lambda_1 \leq \lambda_2$. Then
\[ \lambda_1 \leq \lambda_1 + \sin^2\theta\, (\lambda_2 - \lambda_1) = k_n(u) = \cos^2\theta\, (\lambda_1 - \lambda_2) + \lambda_2 \leq \lambda_2. \]
Hence $\lambda_1$ and $\lambda_2$ are the minimum and maximum values of the normal curvature, and therefore they are the principal curvatures $k_1$ and $k_2$.
From (4.4), we see that $\lambda_1$ and $\lambda_2$ are the solutions of the quadratic equation
\[ \det \begin{pmatrix} f_{11} - \lambda & f_{12} \\ f_{21} & f_{22} - \lambda \end{pmatrix} = \lambda^2 - (f_{11} + f_{22})\lambda + (f_{11} f_{22} - f_{12} f_{21}) = 0. \]
Therefore, by definition, the Gaussian curvature at $0$ is
\[ K(0) = k_1 k_2 = \lambda_1 \lambda_2 = (f_{11} f_{22} - f_{12} f_{21})(0). \]
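For a graph tangent to the plane at the origin, $K(0)$ is therefore just the determinant of the Hessian there; e.g. the saddle $f = x_1^2 - x_2^2$ has $K(0) = -4$ with principal curvatures $-2$ and $2$. A sympy sketch (the sample $f$ is ours):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 - x2**2                 # a saddle, tangent to the plane at the origin

F = sp.Matrix([[f.diff(x1, x1), f.diff(x1, x2)],
               [f.diff(x2, x1), f.diff(x2, x2)]]).subs({x1: 0, x2: 0})

K0 = F.det()                      # Gaussian curvature at the origin
k1, k2 = sorted(F.eigenvals())    # principal curvatures
print(K0, k1, k2)
```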
4.4.3 Curvature on Riemannian Manifolds

On a general $n$-dimensional Riemannian manifold $M$, there are several kinds of curvatures. Before introducing them, we need a few preparations.

1. Vector Fields and Brackets

A vector field $X$ on $M$ is a correspondence that assigns to each point $p \in M$ a vector $X(p)$ in the tangent space $T_p M$. It is a mapping from $M$ to $TM$. If this mapping is differentiable, then we say the vector field $X$ is differentiable. In local coordinates, we can express
\[ X(p) = \sum_{i=1}^n a_i(p) \frac{\partial}{\partial x_i}. \]
When applied to a smooth function $f$ on $M$, it is a directional derivative operator in the direction $X(p)$:
\[ (Xf)(p) = \sum_{i=1}^n a_i(p) \frac{\partial f}{\partial x_i}. \]
Definition 4.4.1 For two differentiable vector fields $X$ and $Y$, define the Lie bracket as
\[ [X, Y] = XY - YX. \]
If $X(p) = \sum_{i=1}^n a_i(p) \frac{\partial}{\partial x_i}$ and $Y(p) = \sum_{j=1}^n b_j(p) \frac{\partial}{\partial x_j}$, then
\[ [X, Y] f = XYf - YXf = \sum_{i,j=1}^n \left( a_i \frac{\partial b_j}{\partial x_i} - b_i \frac{\partial a_j}{\partial x_i} \right) \frac{\partial f}{\partial x_j}. \]
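The coordinate formula can be checked against the operator definition $XYf - YXf$ on an example: with $X = \partial/\partial x$ and $Y = x\, \partial/\partial y$ one gets $[X, Y] = \partial/\partial y$. A sympy sketch (the sample fields are ours):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

def X(h):
    return h.diff(x)              # X = d/dx, i.e. a = (1, 0)

def Y(h):
    return x * h.diff(y)          # Y = x d/dy, i.e. b = (0, x)

bracket = sp.simplify(X(Y(f)) - Y(X(f)))
assert sp.simplify(bracket - f.diff(y)) == 0   # [X, Y]f = df/dy
print(bracket)
```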
2. Covariant Derivatives and Riemannian Connections

To be more intuitive, we begin with a surface $S$ in $\mathbb{R}^3$. Let $c : I \to S$ be a curve on $S$, and let $V(t)$ be a vector field along $c(t)$, $t \in I$. One can see that $\frac{dV}{dt}$ is a vector in the ambient space $\mathbb{R}^3$, but it may not belong to the tangent plane of $S$. To consider the rate of change of $V(t)$ restricted to the surface $S$, we make an orthogonal projection of $\frac{dV}{dt}$ onto the tangent plane, and denote it by $\frac{DV}{dt}$. We call this the covariant derivative of $V(t)$ on $S$: the derivative viewed by the two-dimensional creatures on $S$.

On an $n$-dimensional differentiable manifold $M$, the covariant derivative is defined by affine connections.

Let $D(M)$ be the set of all smooth vector fields.
Definition 4.4.2 Let $X, Y, Z \in D(M)$, and let $f$ and $g$ be smooth functions on $M$. An affine connection $D$ on a differentiable manifold $M$ is a mapping from $D(M) \times D(M)$ to $D(M)$ which maps $(X, Y)$ to $D_X Y$ and satisfies

1) $D_{fX + gY} Z = f D_X Z + g D_Y Z$;

2) $D_X (Y + Z) = D_X Y + D_X Z$;

3) $D_X (fY) = f D_X Y + X(f)\, Y$.
Given an affine connection $D$ on $M$, for a vector field $V(t) = Y(c(t))$ along a differentiable curve $c : I \to M$, define the covariant derivative of $V$ along $c(t)$ as
\[ \frac{DV}{dt} = D_{\frac{dc}{dt}} Y. \]
In local coordinates, let
\[ c(t) = (x_1(t), \cdots, x_n(t)) \quad \text{and} \quad V(t) = \sum_{i=1}^n v_i(t) \frac{\partial}{\partial x_i}. \]
Then, by the properties of the affine connection, one can express
\[ \frac{DV}{dt} = \sum_i \frac{d v_i}{dt} \frac{\partial}{\partial x_i} + \sum_{i,j} \frac{d x_i}{dt}\, v_j\, D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j}. \quad (4.5) \]
Recall that in Euclidean spaces with inner product $\langle \cdot, \cdot \rangle$, a vector field $V(t)$ along a curve $c(t)$ is parallel if and only if $\frac{dV}{dt} = 0$; and for a pair of parallel vector fields $X$ and $Y$ along $c(t)$, we have $\langle X, Y \rangle = \text{constant}$. Similarly, we have

Definition 4.4.3 Let $M$ be a differentiable manifold with an affine connection $D$. A vector field $V(t)$ along a curve $c(t) \in M$ is parallel if
\[ \frac{DV}{dt} = 0. \]
Definition 4.4.4 Let $M$ be a differentiable manifold with an affine connection $D$ and a Riemannian metric $\langle \cdot, \cdot \rangle$. If for any pair of parallel vector fields $X$ and $Y$ along any smooth curve $c(t)$ we have
\[ \langle X, Y \rangle_{c(t)} = \text{constant}, \]
then we say that $D$ is compatible with the metric $\langle \cdot, \cdot \rangle$.

The following fundamental theorem of Levi-Civita guarantees the existence of such an affine connection on a Riemannian manifold.

Theorem 4.4.1 Given a Riemannian manifold $M$ with metric $\langle \cdot, \cdot \rangle$, there exists a unique connection $D$ on $M$ that is compatible with $\langle \cdot, \cdot \rangle$. Moreover, this connection is symmetric, i.e.
\[ D_X Y - D_Y X = [X, Y]. \]
We skip the proof of the theorem. Interested readers may see the book of do Carmo [Ca] (page 55).
In local coordinates $(x_1, \cdots, x_n)$, this unique Riemannian connection can be expressed as
\[ D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j} = \sum_k \Gamma^k_{ij} \frac{\partial}{\partial x_k}, \]
where
\[ \Gamma^k_{ij} = \frac{1}{2} \sum_m \left( \frac{\partial g_{jm}}{\partial x_i} + \frac{\partial g_{mi}}{\partial x_j} - \frac{\partial g_{ij}}{\partial x_m} \right) g^{mk} \]
are called the Christoffel symbols, $g_{ij} = \left\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \right\rangle$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$.
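For the round metric $ds^2 = d\theta^2 + \sin^2\theta\, d\psi^2$ on $S^2$ derived in Section 4.3, this formula yields the familiar nonzero symbols $\Gamma^\theta_{\psi\psi} = -\sin\theta\cos\theta$ and $\Gamma^\psi_{\theta\psi} = \Gamma^\psi_{\psi\theta} = \cot\theta$. A sympy sketch of the general formula (the helper name is ours):

```python
import sympy as sp

theta, psi = sp.symbols('theta psi')
coords = [theta, psi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2
ginv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_{ij} = (1/2) sum_m (g_{jm,i} + g_{mi,j} - g_{ij,m}) g^{mk}."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * (g[j, m].diff(coords[i])
                             + g[m, i].diff(coords[j])
                             - g[i, j].diff(coords[m])) * ginv[m, k]
        for m in range(2)))

print(christoffel(0, 1, 1))   # equals -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))   # equals cot(theta)
```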
3. Curvature

Definition 4.4.5 Let $M$ be a Riemannian manifold with the Riemannian connection $D$. The curvature $R$ is a correspondence that assigns to every pair of smooth vector fields $X, Y \in D(M)$ a mapping $R(X, Y) : D(M) \to D(M)$ defined by
\[ R(X, Y) Z = D_Y D_X Z - D_X D_Y Z + D_{[X,Y]} Z, \quad \forall\, Z \in D(M). \]

Definition 4.4.6 The sectional curvature of a given two dimensional subspace $E \subset T_p M$ at the point $p$ is
\[ K(E) = \frac{\langle R(X, Y) X, Y \rangle}{|X|^2 |Y|^2 - \langle X, Y \rangle^2}, \]
where $X$ and $Y$ are any two linearly independent vectors in $E$, and $\sqrt{|X|^2 |Y|^2 - \langle X, Y \rangle^2}$ is the area of the parallelogram spanned by $X$ and $Y$.
One can show that $K(E)$ so defined is independent of the choice of $X$ and $Y$. It is actually the Gaussian curvature of the section.

Example. Let $M$ be the unit sphere with the standard metric; then one can verify that its sectional curvature at every point is $K(E) = 1$.
Definition 4.4.7 Let $X = Y_n$ be a unit vector in $T_p M$, and let $\{Y_1, \cdots, Y_{n-1}\}$ be an orthonormal basis of the hyperplane in $T_p M$ orthogonal to $X$. Then the Ricci curvature in the direction $X$ is the average of the sectional curvatures of the $n-1$ sections spanned by $X$ and $Y_1$, $X$ and $Y_2$, $\cdots$, and $X$ and $Y_{n-1}$:
\[ Ric_p(X) = \frac{1}{n-1} \sum_{i=1}^{n-1} \langle R(X, Y_i) X, Y_i \rangle. \]
The scalar curvature at $p$ is
\[ R(p) = \frac{1}{n} \sum_{i=1}^n Ric_p(Y_i). \]
One can show that the above definitions do not depend on the choice of the orthonormal basis. On two dimensional manifolds, the Ricci curvature is the same as the sectional curvature, and they both depend only on the point $p$.
4.5 Calculus on Manifolds

In the previous section, we introduced the concept of an affine connection $D$ on a differentiable manifold $M$. It is a mapping from $\Gamma(M) \times \Gamma(M)$ to $\Gamma(M)$, where $\Gamma(M)$ is the set of all smooth vector fields on $M$. Here we will extend this definition to the space of differentiable tensor fields, and thus introduce higher order covariant derivatives. For convenience, we adopt the summation convention and write, for instance,
\[ X^i \frac{\partial}{\partial x_i} := \sum_i X^i \frac{\partial}{\partial x_i}, \qquad \Gamma^j_{ik}\, dx^k := \sum_k \Gamma^j_{ik}\, dx^k, \]
and so on.
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator

Let $p$ and $q$ be two nonnegative integers. At a point $x$ on $M$, as usual, we denote the tangent space by $T_x M$. Define $T^q_p(T_x M)$ as the space of $(p, q)$-tensors on $T_x M$, that is, the space of $(p+q)$-linear forms
\[ \eta : \underbrace{T_x M \times \cdots \times T_x M}_{p} \times \underbrace{T^*_x M \times \cdots \times T^*_x M}_{q} \to \mathbb{R}^1. \]
In local coordinates, we can express
\[ T = T^{j_1 \cdots j_q}_{i_1 \cdots i_p}\, dx^{i_1} \otimes \cdots \otimes dx^{i_p} \otimes \frac{\partial}{\partial x^{j_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{j_q}}. \]
Recall that, for the affine connection $D$, in local coordinates, if we set
\[ \nabla_i = D_{\frac{\partial}{\partial x_i}}, \]
then
\[ \nabla_i \frac{\partial}{\partial x_j}(x) = \Gamma^k_{ij} \frac{\partial}{\partial x_k} \Big|_x, \]
where the $\Gamma^k_{ij}$ are the Christoffel symbols. For smooth vector fields $X, Y \in \Gamma(M)$ with
\[ X = (X^1, \cdots, X^n) \quad \text{and} \quad Y = (Y^1, \cdots, Y^n), \]
by the bilinear properties of the connection, we have
\[ D_X Y = D_{X^i \frac{\partial}{\partial x_i}} Y = X^i\, \nabla_i Y = X^i \left( \frac{\partial Y^j}{\partial x_i} \frac{\partial}{\partial x_j} + Y^j\, \Gamma^k_{ij} \frac{\partial}{\partial x_k} \right) = X^i \left( \frac{\partial Y^j}{\partial x_i} + \Gamma^j_{ik} Y^k \right) \frac{\partial}{\partial x_j}. \quad (4.6) \]
Now we naturally extend this affine connection $D$ to differentiable $(p, q)$-tensor fields $T$ on $M$. For a point $x$ on $M$ and a vector $X \in T_x M$, $D_X T$ is defined to be a $(p, q)$-tensor on $T_x M$ by
\[ D_X T(x) = X^i (\nabla_i T)(x), \]
where
\[ (\nabla_i T)(x)^{j_1 \cdots j_q}_{i_1 \cdots i_p} = \frac{\partial T^{j_1 \cdots j_q}_{i_1 \cdots i_p}}{\partial x_i} \Big|_x - \sum_{k=1}^p \Gamma^\alpha_{i i_k}(x)\, T(x)^{j_1 \cdots j_q}_{i_1 \cdots i_{k-1}\, \alpha\, i_{k+1} \cdots i_p} + \sum_{k=1}^q \Gamma^{j_k}_{i \alpha}(x)\, T(x)^{j_1 \cdots j_{k-1}\, \alpha\, j_{k+1} \cdots j_q}_{i_1 \cdots i_p}. \quad (4.7) \]
Notice that (4.6) is the particular case obtained when (4.7) is applied to the $(0, 1)$-tensor $Y$. If we apply it to a $(1, 0)$-tensor, say $dx^j$, then
\[ \nabla_i (dx^j) = -\Gamma^j_{ik}\, dx^k. \quad (4.8) \]
Given two differentiable tensor fields $T$ and $\tilde{T}$, we have
\[ D_X (T \otimes \tilde{T}) = D_X T \otimes \tilde{T} + T \otimes D_X \tilde{T}. \]
For a differentiable $(p, q)$-tensor field $T$, we define $\nabla T$ to be the $(p+1, q)$-tensor field whose components are given by
\[ (\nabla T)^{j_1 \cdots j_q}_{i_1 \cdots i_{p+1}} = (\nabla_{i_1} T)^{j_1 \cdots j_q}_{i_2 \cdots i_{p+1}}. \]
Similarly, one can define $\nabla^2 T$, $\nabla^3 T$, $\cdots$.
For example, a smooth function $f(x)$ on $M$ is a $(0, 0)$-tensor. Hence $\nabla f$ is a $(1, 0)$-tensor defined by
\[ \nabla f = \nabla_i f\, dx^i = \frac{\partial f}{\partial x_i}\, dx^i. \]
And $\nabla^2 f$ is a $(2, 0)$-tensor:
\[ \nabla^2 f = \nabla_j \left( \frac{\partial f}{\partial x_i}\, dx^i \right) \otimes dx^j = \left[ \nabla_j \left( \frac{\partial f}{\partial x_i} \right) dx^i + \frac{\partial f}{\partial x_i}\, \nabla_j (dx^i) \right] \otimes dx^j = \left( \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma^k_{ij} \frac{\partial f}{\partial x_k} \right) dx^i \otimes dx^j. \]
Here we have used (4.7) and (4.8). In the Riemannian context, $\nabla^2 f$ is called the Hessian of $f$ and denoted by $Hess(f)$. Its $ij$-th component is
\[ (\nabla^2 f)_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma^k_{ij} \frac{\partial f}{\partial x_k}. \]
The trace of the Hessian matrix $((\nabla^2 f)_{ij})$ is defined to be the Laplace-Beltrami operator $\Delta$ acting on $f$:
\[ \Delta f = g^{ij} \left( \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma^k_{ij} \frac{\partial f}{\partial x_k} \right), \]
where $(g^{ij})$ is the inverse matrix of $(g_{ij})$. Through a straightforward calculation, one can verify that
\[ \Delta = \frac{1}{\sqrt{|g|}} \sum_{i,j=1}^n \frac{\partial}{\partial x_i} \left( \sqrt{|g|}\, g^{ij} \frac{\partial}{\partial x_j} \right), \]
where $|g|$ is the determinant of $(g_{ij})$.
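One can check the divergence form of $\Delta$ on the round sphere, where $\sqrt{|g|} = \sin\theta$ for $0 < \theta < \pi$ and $\cos\theta$ is a known eigenfunction: $\Delta \cos\theta = -2\cos\theta$. A sympy sketch (restricted to $0 < \theta < \pi$):

```python
import sympy as sp

theta, psi = sp.symbols('theta psi')
coords = [theta, psi]
ginv = sp.Matrix([[1, 0], [0, 1 / sp.sin(theta)**2]])   # inverse round metric on S^2
sqrtg = sp.sin(theta)                                   # sqrt(det g) for 0 < theta < pi

f = sp.cos(theta)                                       # test function on S^2

# Delta f = (1/sqrt|g|) * d_i ( sqrt|g| g^{ij} d_j f )
lap = sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
          for i in range(2) for j in range(2)) / sqrtg
lap = sp.simplify(lap)

print(lap)   # -2*cos(theta)
```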
4.5.2 Integrals
On a smooth ndimensional Riemannian manifold (M, g), one can deﬁne a
natural positive Radon measure on M, and the theory of Lebesgue integral
applies.
4.5 Calculus on Manifolds 123
Let $(U_i,\phi_i)_i$ be a family of coordinate charts that covers $M$. We say that a family $(U_i,\phi_i,\eta_i)_i$ is a partition of unity associated to $(U_i,\phi_i)_i$ if

(i) $(\eta_i)_i$ is a smooth partition of unity associated to the covering $(U_i)_i$, and

(ii) for any $i$, $\mathrm{supp}\,\eta_i \subset U_i$.
One can show that, for each family of coordinate charts $(U_i,\phi_i)_i$ of $M$, there exists a corresponding partition of unity $(U_i,\phi_i,\eta_i)_i$.
Let $f$ be a continuous function on $M$ with compact support. Given a family of coordinate charts $(U_i,\phi_i)_i$ of $M$, we define its integral over $M$ as the sum of integrals over open sets in $R^n$:
\[
\int_M f\,dV_g = \sum_i \int_{\phi_i(U_i)} \big(\eta_i\,\sqrt{|g|}\,f\big)\circ\phi_i^{-1}\,dx,
\]
where $(U_i,\phi_i,\eta_i)_i$ is a partition of unity associated to $(U_i,\phi_i)_i$, $|g|$ is the determinant of $(g_{ij})$ in the chart $(U_i,\phi_i)$, and $dx$ stands for the volume element of $R^n$. One can prove that this definition depends neither on the choice of the charts $(U_i,\phi_i)_i$ nor on the partition of unity $(U_i,\phi_i,\eta_i)_i$.
For smooth functions $u$ and $v$ on a compact manifold $M$, one can verify the following integration by parts formula:
\[
-\int_M (\Delta_g u)\,v\,dV_g = \int_M \langle\nabla u,\nabla v\rangle\,dV_g, \tag{4.9}
\]
where $\Delta_g$ is the Laplace-Beltrami operator associated to the metric $g$, and
\[
\langle\nabla u,\nabla v\rangle = g^{ij}\,\nabla_i u\,\nabla_j v = g^{ij}\,\frac{\partial u}{\partial x^i}\frac{\partial v}{\partial x^j}
\]
is the scalar product associated with $g$ for 1-forms.
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature

A Riemannian metric $g$ on $M$ is said to be pointwise conformal to another metric $g_o$ if there exists a positive function $\rho$ such that
\[
g(x) = \rho(x)\,g_o(x), \quad \forall\, x\in M.
\]
Suppose $M$ is a two-dimensional Riemannian manifold with a metric $g_o$ and corresponding Gaussian curvature $K_o(x)$. Let $g(x) = e^{2u(x)}g_o(x)$ for some smooth function $u(x)$ on $M$, and let $K(x)$ be the Gaussian curvature associated to the metric $g$. Then by a straightforward calculation, one can verify that
\[
-\Delta_o u + K_o(x) = K(x)\,e^{2u(x)}, \quad \forall\, x\in M. \tag{4.10}
\]
Here $\Delta_o$ is the Laplace-Beltrami operator associated to $g_o$.
Similarly, for the conformal relation between scalar curvatures on $n$-dimensional manifolds ($n\ge 3$), say on the standard sphere $S^n$, we have
\[
-\Delta_o u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}, \quad \forall\, x\in S^n, \tag{4.11}
\]
where $g_o$ is the standard metric on $S^n$ with corresponding Laplace-Beltrami operator $\Delta_o$, and $R(x)$ is the scalar curvature associated to the pointwise conformal metric $g = u^{\frac{4}{n-2}}g_o$.
In Chapters 5 and 6, we will study the existence and qualitative properties of the solutions $u$ of the above two equations, respectively.
4.6 Sobolev Embeddings

Let $(M,g)$ be a smooth Riemannian manifold, let $u$ be a smooth function on $M$, and let $k$ be a nonnegative integer. We denote by $\nabla^k u$ the $k$-th covariant derivative of $u$ as defined in the previous section; its norm $|\nabla^k u|$ is defined in a local coordinate chart by
\[
|\nabla^k u|^2 = g^{i_1j_1}\cdots g^{i_kj_k}\,(\nabla^k u)_{i_1\cdots i_k}(\nabla^k u)_{j_1\cdots j_k}.
\]
In particular, we have $|\nabla^0 u| = |u|$ and
\[
|\nabla^1 u|^2 = |\nabla u|^2 = g^{ij}\,(\nabla u)_i(\nabla u)_j = g^{ij}\,\nabla_i u\,\nabla_j u = g^{ij}\,\frac{\partial u}{\partial x^i}\frac{\partial u}{\partial x^j}.
\]
For an integer $k$ and a real number $p\ge 1$, let $\mathcal{C}^p_k(M)$ be the space of $C^\infty$ functions $u$ on $M$ such that
\[
\int_M |\nabla^j u|^p\,dV_g < \infty, \quad \forall\, j = 0,1,\cdots,k.
\]
Definition 4.6.1 The Sobolev space $H^{k,p}(M)$ is the completion of $\mathcal{C}^p_k(M)$ with respect to the norm
\[
\|u\|_{H^{k,p}(M)} = \sum_{j=0}^{k}\Big(\int_M |\nabla^j u|^p\,dV_g\Big)^{1/p}.
\]
Many results concerning the Sobolev spaces on $R^n$ introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. For later applications, we list some of them below; interested readers may find the proofs in [He] or [Au].
Theorem 4.6.1 For any integer $k$, $H^{k,2}(M)$ is a Hilbert space when equipped with the equivalent norm
\[
\|u\| = \Big(\sum_{i=0}^{k}\int_M |\nabla^i u|^2\,dV_g\Big)^{1/2}.
\]
The corresponding scalar product is defined by
\[
\langle u,v\rangle = \sum_{i=0}^{k}\int_M \langle\nabla^i u,\nabla^i v\rangle\,dV_g,
\]
where $\langle\,\cdot\,,\,\cdot\,\rangle$ is the scalar product on covariant tensor fields associated to the metric $g$.
Theorem 4.6.2 If $M$ is compact, then $H^{k,p}(M)$ does not depend on the metric.
Theorem 4.6.3 (Sobolev Embeddings I) Let $(M,g)$ be a smooth, compact $n$-dimensional Riemannian manifold. Assume $1\le q<n$ and $p = \frac{nq}{n-q}$. Then
\[
H^{1,q}(M)\subset L^p(M).
\]
More generally, if $m,k$ are two integers with $0\le m<k$, and if $p = \frac{nq}{n-q(k-m)}$, then
\[
H^{k,q}(M)\subset H^{m,p}(M).
\]
Theorem 4.6.4 (Sobolev Embeddings II) Let $(M,g)$ be a smooth, compact $n$-dimensional Riemannian manifold. Assume that $q\ge 1$ and $m,k$ are two integers with $0\le m<k$. If $q > \frac{n}{k-m}$, then
\[
H^{k,q}(M)\subset C^m(M).
\]
Theorem 4.6.5 (Compact Embeddings) Let $(M,g)$ be a smooth, compact $n$-dimensional Riemannian manifold.

(i) Assume that $q\ge 1$ and $m,k$ are two integers with $0\le m<k$. Then for any real number $p$ such that $1\le p<\frac{nq}{n-q(k-m)}$, the embedding
\[
H^{k,q}(M)\subset H^{m,p}(M)
\]
is compact. In particular, the embedding of $H^{1,q}(M)$ into $L^p(M)$ is compact if $1\le p<\frac{nq}{n-q}$.

(ii) For $q>n$, the embedding of $H^{k,q}(M)$ into $C^\lambda(M)$ is compact if $k-\lambda>\frac{n}{q}$. In particular, the embedding of $H^{1,q}(M)$ into $C^0(M)$ is compact.
Theorem 4.6.6 (Poincaré Inequality) Let $(M,g)$ be a smooth, compact $n$-dimensional Riemannian manifold and let $p\in[1,n)$ be real. Then there exists a positive constant $C$ such that for any $u\in H^{1,p}(M)$,
\[
\Big(\int_M |u-\bar u|^p\,dV_g\Big)^{1/p} \le C\Big(\int_M |\nabla u|^p\,dV_g\Big)^{1/p},
\]
where
\[
\bar u = \frac{1}{\mathrm{Vol}_{(M,g)}}\int_M u\,dV_g
\]
is the average of $u$ on $M$.
5 Prescribing Gaussian Curvature on Compact 2-Manifolds

5.1 Prescribing Gaussian Curvature on $S^2$
5.1.1 Obstructions
5.1.2 Variational Approaches and Key Inequalities
5.1.3 Existence of Weak Solutions
5.2 Gaussian Curvature on Negatively Curved Manifolds
5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
5.2.2 Chen and Li's Results
In this and the next chapter, we will present the reader with real research examples: semilinear equations arising from prescribing Gaussian and scalar curvature. We will illustrate, among other methods, how the calculus of variations can be applied to seek weak solutions of these equations.

To show the existence of weak solutions for a certain equation, we consider the corresponding functional $J(\cdot)$ in an appropriate Hilbert space. According to the compactness of the functional, one usually distinguishes three cases:

a) the subcritical case, where the level sets of $J$ are compact; say, $J$ satisfies the (PS) condition (see also Chapter 2): every sequence $\{u_k\}$ that satisfies $J(u_k)\to c$ and $J'(u_k)\to 0$ possesses a convergent subsequence;

b) the critical case, which is the limiting situation and can be approximated by subcritical cases; and

c) the supercritical case: the remaining cases.
To see roughly when a functional may lose compactness, let us consider $J(x) = (x^2-1)e^x$ defined on the one-dimensional space $R^1$. One can easily see that there exists a sequence $\{x_k\}$, for instance $x_k = -k$, such that $J(x_k)\to 0$ and $J'(x_k)\to 0$ as $k\to\infty$; however, $\{x_k\}$ possesses no convergent subsequence, because the sequence is not bounded. The functional thus has no compactness at level $J=0$. We know that in a finite-dimensional space a bounded sequence has a convergent subsequence, while in an infinite-dimensional space even a bounded sequence may fail to have one. For instance, $\{\sin kx\}$ is a bounded sequence in $L^2([0,1])$ that has no convergent subsequence.
In situations where such a lack of compactness occurs, to find a critical point of the functional one tries to recover compactness through various means. Take again the one-dimensional example $J(x) = (x^2-1)e^x$, which has no compactness at level $J=0$. However, if we restrict it to levels less than $0$, we can recover compactness: if we pick a sequence $\{x_k\}$ satisfying $J(x_k)<0$ and $J'(x_k)\to 0$, then one can verify that it possesses a subsequence which converges to a critical point $x_0$ of $J$. Actually, this $x_0$ is a minimum of $J$.
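These one-dimensional claims can be verified directly; here is a quick numerical sketch (the sample points and tolerances are our choices). Since $J'(x) = (x^2+2x-1)e^x$, the only critical points are $x = -1\pm\sqrt2$, and the one at a negative level, $x_0 = \sqrt2-1$, is the minimum:

```python
import math

def J(x):
    return (x**2 - 1) * math.exp(x)

def Jp(x):
    # J'(x) = (x^2 + 2x - 1) e^x
    return (x**2 + 2 * x - 1) * math.exp(x)

# Along x_k = -k, both J and J' tend to 0, yet {x_k} has no convergent subsequence:
assert abs(J(-50)) < 1e-15 and abs(Jp(-50)) < 1e-15

# Restricted to levels J < 0, critical sequences converge to the minimum x_0:
x0 = math.sqrt(2) - 1
assert abs(Jp(x0)) < 1e-12 and J(x0) < 0
```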
To further illustrate the concept of compactness, we consider some examples in infinite-dimensional spaces. Let $\Omega$ be an open bounded subset of $R^n$, and let $H^1_0(\Omega)$ be the Hilbert space introduced in previous chapters. Consider the Dirichlet boundary value problem
\[
\begin{cases} -\Delta u = u^p(x), & x\in\Omega,\\ u(x) = 0, & x\in\partial\Omega. \end{cases} \tag{5.1}
\]
To seek weak solutions, we study the corresponding functional
\[
J_p(u) = \int_\Omega |u|^{p+1}\,dx
\]
on the constraint set
\[
H = \Big\{u\in H^1_0(\Omega)\ \Big|\ \int_\Omega |\nabla u|^2\,dx = 1\Big\}.
\]
When $p<\frac{n+2}{n-2}$, we are in the subcritical case. As we have seen in Chapter 2, the functional satisfies the (PS) condition. If we define
\[
m_p = \inf_{u\in H} J_p(u),
\]
then a minimizing sequence $\{u_k\}$, $J_p(u_k)\to m_p$, possesses a convergent subsequence in $H^1_0(\Omega)$, and the limit function $u_0$ would be the minimum of $J_p$ in $H$; hence it is the desired weak solution.
When $p = \tau := \frac{n+2}{n-2}$, we are in the critical case. One can find a minimizing sequence that does not converge. For example, when $\Omega = B_1(0)$ is a ball, consider
\[
u_k(x) = \frac{[n(n-2)k]^{\frac{n-2}{4}}}{(1+k|x|^2)^{\frac{n-2}{2}}}\,\eta(x),
\]
where $\eta\in C^\infty_0(\Omega)$ is a cutoff function satisfying
\[
\eta(x) = \begin{cases} 1, & x\in B_{1/2}(0),\\ \text{between 0 and 1}, & \text{elsewhere}. \end{cases}
\]
Let
\[
v_k(x) = \frac{u_k(x)}{\big(\int_\Omega |\nabla u_k|^2\,dx\big)^{1/2}}.
\]
Then one can verify that $\{v_k\}$ is a minimizing sequence of $J_\tau$ in $H$ with $J'(v_k)\to 0$. However, it does not have a convergent subsequence. Actually, as $k\to\infty$, the energy of $\{v_k\}$ concentrates at the point $0$; we say that the sequence "blows up" at the point $0$.
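One can watch this blow-up numerically. A rough sketch for $n=3$ (our sample dimension), using the bubble without the cutoff $\eta$ to keep the formulas short: as $k$ grows, essentially all of the mass $\int u_k^{\tau+1}$ moves into an arbitrarily small ball around $0$.

```python
import numpy as np

def bubble(r, k, n=3):
    # u_k(r) = [n(n-2)k]^{(n-2)/4} / (1 + k r^2)^{(n-2)/2}, cutoff eta omitted
    return (n * (n - 2) * k) ** ((n - 2) / 4) / (1.0 + k * r**2) ** ((n - 2) / 2)

def mass(k, r_max, num=400000):
    # integral of u_k^6 over the ball |x| < r_max in R^3 (radial Riemann sum);
    # 6 is the critical exponent tau + 1 = 2n/(n-2) for n = 3
    r = np.linspace(1e-9, r_max, num)
    dr = r[1] - r[0]
    return np.sum(4 * np.pi * r**2 * bubble(r, k) ** 6) * dr

# With k = 10^4, over 99% of the total mass sits inside the small ball of radius 0.1:
fraction = mass(1e4, 0.1) / mass(1e4, 10.0)
assert fraction > 0.99
```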
In essence, the compactness of the functional $J_\tau$ here relies on the Sobolev embedding
\[
H^1(\Omega)\hookrightarrow L^{p+1}(\Omega).
\]
As we learned in Chapter 1, this embedding is compact when $p<\frac{n+2}{n-2}$ and is not when $p=\frac{n+2}{n-2}$.
In the critical case, compactness may be recovered by considering other level sets or by changing the topology of the domain; for example, if $\Omega$ is an annulus, then the above $J_\tau$ has a minimizer in $H$. In this and the next chapter, the reader will see examples from prescribing Gaussian and scalar curvature, where the corresponding equations are critical and where various means have been exploited to recover compactness.
5.1 Prescribing Gaussian Curvature on $S^2$

Given a function $K(x)$ on the two-dimensional unit sphere $S := S^2$ with standard metric $g_o$, can it be realized as the Gaussian curvature of some pointwise conformal metric $g$? This has been an interesting problem in differential geometry and is known as the Nirenberg problem. If we let $g = e^{2u}g_o$, then the problem is equivalent to solving the following semilinear elliptic equation on $S^2$:
\[
-\Delta u + 1 = K(x)\,e^{2u}, \quad x\in S^2, \tag{5.2}
\]
where $\Delta$ is the Laplace-Beltrami operator associated with the standard metric $g_o$.
If we replace $2u$ by $u$ and let $R(x) = 2K(x)$, then equation (5.2) becomes
\[
-\Delta u + 2 = R(x)\,e^{u(x)}. \tag{5.3}
\]
Here $2$ and $R(x)$ are actually the scalar curvatures of $S^2$ with the standard metric $g_o$ and with the pointwise conformal metric $e^u g_o$, respectively.
5.1.1 Obstructions

For equation (5.3) to have a solution, the function $R(x)$ must satisfy the obvious Gauss-Bonnet sign condition
\[
\int_S R(x)\,e^u\,dx = 8\pi. \tag{5.4}
\]
This can be seen by integrating both sides of equation (5.3) and using the fact that
\[
-\int_S \Delta u\,dx = \int_S \langle\nabla u,\nabla 1\rangle\,dx = 0.
\]
Condition (5.4) implies that $R(x)$ must be positive somewhere. Besides this obvious necessary condition, there are other necessary conditions, found by Kazdan and Warner:
\[
\int_S \nabla\phi_i\cdot\nabla R(x)\,e^u\,dx = 0, \quad i = 1,2,3. \tag{5.5}
\]
Here the $\phi_i$ are the first spherical harmonic functions, i.e.,
\[
-\Delta\phi_i = 2\phi_i, \quad i = 1,2,3.
\]
Actually, $\phi_i(x) = x_i$ in the coordinates $x = (x_1,x_2,x_3)$ of $R^3$.
Later, Bourguignon and Ezin generalized this condition to
\[
\int_S X(R)\,e^u\,dx = 0, \tag{5.6}
\]
where $X$ is any conformal Killing vector field on the standard $S^2$.

Condition (5.5) gives rise to many examples of $R(x)$ for which equation (5.3) has no solution. In particular, for a monotone rotationally symmetric function $R(x)$, the equation admits no solution. For which kinds of functions $R(x)$, then, can one solve (5.3)? This has been an interesting problem in geometry.
5.1.2 Variational Approach and Key Inequalities

To obtain the existence of a solution for equation (5.3), one usually uses a variational approach: first prove the existence of a weak solution; then, by a regularity argument, show that the weak solution is smooth and hence is a classical solution.

To obtain a weak solution, we let $H^1(S) := H^{1,2}(S)$ be the Hilbert space with norm
\[
\|u\|_1 = \Big(\int_S \big(|\nabla u|^2 + u^2\big)\,dx\Big)^{1/2}.
\]
Let
\[
H(S) = \Big\{u\in H^1(S)\ \Big|\ \int_S u\,dx = 0,\ \int_S R(x)e^u\,dx > 0\Big\}.
\]
Then, by the Poincaré inequality, we can use
\[
\|u\| = \Big(\int_S |\nabla u|^2\,dx\Big)^{1/2}
\]
as an equivalent norm on $H(S)$.

Consider the functional
\[
J(u) = \frac12\int_S |\nabla u|^2\,dx - 8\pi\ln\int_S R(x)e^u\,dx
\]
in $H(S)$.
To estimate the value of the functional $J$, we introduce some useful inequalities.

Lemma 5.1.1 (Moser-Trudinger Inequality) Let $S$ be a compact two-dimensional Riemannian manifold. Then there exists a constant $C$ such that the inequality
\[
\int_S e^{4\pi u^2}\,dA \le C \tag{5.7}
\]
holds for all $u\in H^1(S)$ satisfying
\[
\int_S |\nabla u|^2\,dA \le 1 \quad\text{and}\quad \int_S u\,dA = 0. \tag{5.8}
\]
We skip the proof of this inequality; interested readers may see the article of Moser [Mo1] or the article of Chen [C1].

From this inequality, one can immediately derive

Lemma 5.1.2 There exists a constant $C$ such that for all $u\in H^1(S)$,
\[
\int_S e^u\,dA \le C\exp\Big\{\frac{1}{16\pi}\int_S |\nabla u|^2\,dA + \frac{1}{4\pi}\int_S u\,dA\Big\}. \tag{5.9}
\]
Proof. Let
\[
\bar u = \frac{1}{4\pi}\int_S u(x)\,dA
\]
be the average of $u$ on $S$. Still denote
\[
\|\nabla u\| = \Big(\int_S |\nabla u|^2\,dx\Big)^{1/2}.
\]
i) For any $v\in H^1(S)$ with $\bar v = 0$, let $w = \frac{v}{\|\nabla v\|}$. Then $\|\nabla w\| = 1$ and $\bar w = 0$. Applying Lemma 5.1.1 to such a $w$, we have
\[
\int_S e^{\frac{4\pi v^2}{\|\nabla v\|^2}}\,dA \le C. \tag{5.10}
\]
From the obvious inequality
\[
\Big(\frac{\|\nabla v\|}{4\sqrt{\pi}} - \frac{2\sqrt{\pi}\,v}{\|\nabla v\|}\Big)^2 \ge 0,
\]
we deduce immediately that
\[
v \le \frac{\|\nabla v\|^2}{16\pi} + \frac{4\pi v^2}{\|\nabla v\|^2}.
\]
Together with (5.10), we have
\[
\int_S e^v\,dA \le C\exp\Big\{\frac{1}{16\pi}\|\nabla v\|^2\Big\}, \quad \forall\, v\in H^1(S) \text{ with } \bar v = 0. \tag{5.11}
\]
ii) For any $u\in H^1(S)$, let $v = u-\bar u$. Then $\bar v = 0$, and we can apply (5.11) to $v$ and obtain
\[
\int_S e^{u-\bar u}\,dA \le C\exp\Big\{\frac{1}{16\pi}\|\nabla u\|^2\Big\}.
\]
Consequently,
\[
\int_S e^u\,dA \le C\exp\Big\{\frac{1}{16\pi}\|\nabla u\|^2 + \bar u\Big\}.
\]
This completes the proof of the Lemma. $\square$
From Lemma 5.1.2, we can conclude that the functional $J(\cdot)$ is bounded from below in $H(S)$. In fact, by the Lemma,
\[
\int_S R(x)e^u\,dA \le \max R\int_S e^u\,dA \le C\max R\,\exp\Big\{\frac{1}{16\pi}\|\nabla u\|^2\Big\}.
\]
It follows that
\[
J(u) \ge \frac12\|\nabla u\|^2 - 8\pi\Big[\ln(C\max R) + \frac{1}{16\pi}\|\nabla u\|^2\Big] \ge -8\pi\ln(C\max R). \tag{5.12}
\]
Therefore, the functional $J$ is bounded from below in $H(S)$. Now we can take a minimizing sequence $\{u_k\}\subset H(S)$:
\[
J(u_k)\to \inf_{u\in H(S)} J(u).
\]
However, as we mentioned before, a minimizing sequence may not be bounded; that is, it may 'leak' to infinity. To prevent this from happening, we need some coerciveness of the functional. Although coerciveness fails in general, we do have it for $J$ on some special subsets of $H(S)$, in particular on the set of even (antipodally symmetric) functions. More precisely, we have the following "Distribution of Mass Principle" from Chen and Li [CL7], [CL8].
Lemma 5.1.3 Let $\Omega_1$ and $\Omega_2$ be two subsets of $S$ such that
\[
\mathrm{dist}(\Omega_1,\Omega_2) \ge \delta_o > 0.
\]
Let $0<\alpha_o\le\frac12$. Then for any $\epsilon>0$, there exists a constant $C = C(\alpha_o,\delta_o,\epsilon)$ such that the inequality
\[
\int_S e^u\,dA \le C\exp\Big\{\Big(\frac{1}{32\pi}+\epsilon\Big)\|\nabla u\|^2 + \bar u\Big\} \tag{5.13}
\]
holds for all $u\in H^1(S)$ satisfying
\[
\frac{\int_{\Omega_1} e^u\,dA}{\int_S e^u\,dA} \ge \alpha_o \quad\text{and}\quad \frac{\int_{\Omega_2} e^u\,dA}{\int_S e^u\,dA} \ge \alpha_o. \tag{5.14}
\]
Intuitively, if we think of $e^{u(x)}$ as the density of a solid sphere $S$ at the point $x$, then $\int_{\Omega_i} e^u\,dA$ is the mass of the region $\Omega_i$, and $\int_S e^u\,dA$ is the total mass of the sphere. The Lemma says that if the mass of the sphere is distributed over two separated parts, more precisely, if the total mass does not concentrate at one point, then we have a stronger inequality than (5.9). In particular, this stronger inequality (5.13) holds for even functions.
If we apply this stronger inequality to the family of even functions in $H(S)$, we have
\[
J(u) \ge \frac12\|\nabla u\|^2 - 8\pi\Big[\ln(C\max R) + \Big(\frac{1}{32\pi}+\epsilon\Big)\|\nabla u\|^2\Big] \ge -8\pi\ln(C\max R) + \Big(\frac14 - 8\pi\epsilon\Big)\|\nabla u\|^2.
\]
Choosing $\epsilon$ small enough, we arrive at
\[
J(u) \ge \frac18\|\nabla u\|^2 - C_1, \quad\text{for all even functions } u \text{ in } H(S). \tag{5.15}
\]
This stronger inequality guarantees the boundedness of a minimizing sequence, and hence it converges to a weak limit $u_o$ in $H^1(S)$. As we will show in the next section, the functional $J$ is weakly lower semi-continuous, and therefore the $u_o$ so obtained is the minimum of $J(u)$, i.e., a weak solution of equation (5.2). Then, by a standard regularity argument, as we will see later, we will be able to conclude that $u_o$ is a classical solution.
In many articles, the authors use the "center of mass" analysis, i.e., they consider the center of mass of the sphere with density $e^{u(x)}$ at the point $x$:
\[
P(u) = \frac{\int_S x\,e^u\,dA}{\int_S e^u\,dA},
\]
and use it to study the behavior of a minimizing sequence (see [CD], [CD1], [CY], and [CY1]). Both analyses are equivalent on the sphere $S^2$. However, since the "center of mass" $P(u)$ lies in the ambient space $R^3$, this kind of analysis cannot be applied on a general two-dimensional compact Riemannian surface, while the "Distribution of Mass" analysis can be applied to any compact surface, even surfaces with conical singularities. Interested readers may see the authors' papers [CL7], [CL8] for more general versions of inequality (5.13).
Proof of Lemma 5.1.3. Let $g_1, g_2$ be two smooth functions such that
\[
1\ge g_i\ge 0, \quad g_i(x)\equiv 1 \text{ for } x\in\Omega_i,\ i = 1,2,
\]
and $\mathrm{supp}\,g_1\cap\mathrm{supp}\,g_2 = \emptyset$. It suffices to show that for $u\in H^1(S)$ with $\int_S u\,dA = 0$, (5.14) implies
\[
\int_S e^u\,dA \le C\exp\Big\{\Big(\frac{1}{32\pi}+\epsilon\Big)\|\nabla u\|^2\Big\}. \tag{5.16}
\]
Case i) If $\|\nabla(g_1u)\| \le \|\nabla(g_2u)\|$, then by (5.9),
\begin{align*}
\int_S e^u\,dA &\le \frac{1}{\alpha_o}\int_{\Omega_1} e^u\,dA \le \frac{1}{\alpha_o}\int_S e^{g_1u}\,dA\\
&\le \frac{C}{\alpha_o}\exp\Big\{\frac{1}{16\pi}\|\nabla(g_1u)\|^2 + \overline{g_1u}\Big\}\\
&\le \frac{C}{\alpha_o}\exp\Big\{\frac{1}{32\pi}\big(\|\nabla(g_1u)\|^2+\|\nabla(g_2u)\|^2\big) + \overline{g_1u}\Big\}\\
&\le \frac{C}{\alpha_o}\exp\Big\{\frac{1}{32\pi}(1+\epsilon_1)\|\nabla u\|^2 + c_1(\epsilon_1)\|u\|^2_{L^2}\Big\} \tag{5.17}
\end{align*}
for some small $\epsilon_1>0$. Here we have used the fact that
\begin{align*}
\|\nabla(g_1u)\|^2+\|\nabla(g_2u)\|^2 &= \int_S \big|\nabla(g_1u)+\nabla(g_2u)\big|^2\,dA\\
&= \int_S \big|(g_1+g_2)\nabla u + \nabla(g_1+g_2)\,u\big|^2\,dA\\
&\le \|\nabla u\|^2 + C_1\|\nabla u\|\,\|u\|_{L^2} + C_2\|u\|^2_{L^2}.
\end{align*}
In order to get rid of the term $\|u\|^2_{L^2}$ on the right-hand side of the above inequality, we employ the condition $\int_S u\,dA = 0$.

Given any $\eta>0$, choose $a$ such that $\mathrm{meas}\{x\in S \mid u(x)\ge a\} = \eta$. Applying (5.17) to the function $(u-a)^+ \equiv \max\{0,\,u-a\}$, we have
\[
\int_S e^u\,dA \le e^a\int_S e^{u-a}\,dA \le e^a\int_S e^{(u-a)^+}\,dA \le C\exp\Big\{\frac{1}{32\pi}(1+\epsilon_1)\|\nabla u\|^2 + c_1(\epsilon_1)\|(u-a)^+\|^2_{L^2} + a\Big\}, \tag{5.18}
\]
where $C = C(\delta,\alpha_o)$.
By the Hölder and Sobolev inequalities (see Theorem 4.6.3),
\[
\|(u-a)^+\|^2_{L^2} \le \eta^{1/2}\,\|(u-a)^+\|^2_{L^4} \le c\,\eta^{1/2}\big(\|\nabla u\|^2 + \|(u-a)^+\|^2_{L^2}\big). \tag{5.19}
\]
Choose $\eta$ so small that $c\,\eta^{1/2}\le\frac12$; then the above inequality implies
\[
\|(u-a)^+\|^2_{L^2} \le 2c\,\eta^{1/2}\,\|\nabla u\|^2. \tag{5.20}
\]
Now, by the Poincaré inequality (see Theorem 4.6.6),
\[
a\,\eta \le \int_{\{u\ge a\}} u\,dA \le \int_S |u|\,dA \le c\,\|\nabla u\|;
\]
hence for any $\delta>0$,
\[
a \le \delta\|\nabla u\|^2 + \frac{c^2}{4\delta\eta^2}. \tag{5.21}
\]
Now (5.16) follows from (5.18), (5.20), and (5.21).

Case ii) If $\|\nabla(g_2u)\| \le \|\nabla(g_1u)\|$, then by a similar argument as in case i), we obtain (5.17) and then (5.16). This completes the proof. $\square$
5.1.3 Existence of Weak Solutions

Theorem 5.1.1 (Moser) Suppose the function $R$ is even, that is,
\[
R(x) = R(-x), \quad \forall\, x\in S^2,
\]
where $x$ and $-x$ are two antipodal points. Then a necessary and sufficient condition for equation (5.3) to have a solution is that $R(x)$ be positive somewhere.
To prove the existence of a weak solution, as in the previous section, we consider the functional
\[
J(u) = \frac12\int_S |\nabla u|^2\,dx - 8\pi\ln\int_S R(x)e^u\,dx
\]
in $H_e(S) = \{u\in H(S) \mid u \text{ is even almost everywhere}\}$. Let $\{u_k\}$ be a minimizing sequence of $J$ in $H_e(S)$. Then by (5.15), $\{u_k\}$ is bounded in $H^1(S)$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some element $u_o$ in $H^1(S)$. Then, by the compact Sobolev embedding of $H^1$ into $L^2$,
\[
u_k\to u_o \quad\text{in } L^2(S). \tag{5.22}
\]
Consequently,
\[
u_o(-x) = u_o(x) \ \text{almost everywhere}, \quad\text{and}\quad \int_S u_o(x)\,dA = 0,
\]
that is, $u_o\in H_e(S)$.
Furthermore, as we showed in the previous chapter, the Dirichlet integral is weakly lower semi-continuous, i.e.,
\[
\lim_{k\to\infty}\int_S |\nabla u_k|^2\,dA \ge \int_S |\nabla u_o|^2\,dA.
\]
To show that the functional $J$ is weakly lower semi-continuous, i.e., that
\[
J(u_o) \le \lim J(u_k) = \inf_{u\in H_e(S)} J(u),
\]
we need the following lemma.
Lemma 5.1.4 If $\{u_k\}$ converges weakly to $u$ in $H^1(S)$, then there exists a subsequence (still denoted by $\{u_k\}$) such that
\[
\int_S e^{u_k(x)}\,dA \to \int_S e^{u(x)}\,dA, \quad\text{as } k\to\infty.
\]
We postpone the proof of this lemma for a moment. Now, from this lemma we obviously have
\[
\int_S R(x)e^{u_k}\,dA \to \int_S R(x)e^{u_o}\,dA,
\]
and it follows that $J(\cdot)$ is weakly lower semi-continuous. On the other hand, we have
\[
J(u_o) \ge \inf_{u\in H_e(S)} J(u).
\]
Therefore, $u_o$ is a minimum of $J$ in $H_e(S)$.
Consequently, for any $v\in H_e(S)$, we have
\[
0 = \langle J'(u_o), v\rangle = \int_S \langle\nabla u_o,\nabla v\rangle\,dA - \frac{8\pi}{\int_S R(x)e^{u_o}\,dA}\int_S R(x)e^{u_o}v\,dA.
\]
For any $w\in H(S)$, let
\[
v(x) = \frac12\big(w(x)+w(-x)\big) := \frac12\big(w(x)+w^-(x)\big).
\]
Then obviously $v\in H_e(S)$, and it follows that
\[
0 = \langle J'(u_o), v\rangle = \frac12\big(\langle J'(u_o), w\rangle + \langle J'(u_o), w^-\rangle\big) = \langle J'(u_o), w\rangle.
\]
Here we have used the fact that $u_o(-x) = u_o(x)$. This implies that $u_o$ is a critical point of $J$ in $H^1(S)$, and hence a constant multiple of $u_o$ is a weak solution of equation (5.3). What is left is to prove Lemma 5.1.4.
The Proof of Lemma 5.1.4. From Lemma 5.1.2, for any real number $p>0$, we have
\[
\int_S e^{pu}\,dA \le C\exp\Big\{\frac{p^2}{16\pi}\int_S |\nabla u|^2\,dA + \frac{p}{4\pi}\int_S u\,dA\Big\}. \tag{5.23}
\]
And by the Hölder inequality, for any $0<p<2$,
\[
\int_S |\nabla(e^u)|^p\,dA = \int_S e^{pu}|\nabla u|^p\,dA \le \Big(\int_S e^{\frac{2p}{2-p}u}\,dA\Big)^{\frac{2-p}{2}}\Big(\int_S |\nabla u|^2\,dA\Big)^{\frac{p}{2}}. \tag{5.24}
\]
Inequalities (5.23) and (5.24) imply that $\{e^{u_k}\}$ is a bounded sequence in $H^{1,p}$ for any $1<p<2$. By the compact Sobolev embedding of $H^{1,p}$ into $L^1$, there exists a subsequence of $\{e^{u_k}\}$ which converges strongly to $e^{u_o}$ in $L^1(S)$. This completes the proof of the Lemma. $\square$
Chen and Ding [CD] generalized Moser's result to a broader class of symmetric functions, as we now describe.

Let $O(3)$ be the group of orthogonal transformations of $R^3$, and let $G$ be a closed finite subgroup of $O(3)$. Let
\[
F_G = \{x\in S^2 \mid gx = x,\ \forall\, g\in G\}
\]
be the set of fixed points under the action of $G$.

The action of $G$ on $S^2$ induces an action of $G$ on the Hilbert space $H^1(S)$:
\[
g[u](x) = u(gx),
\]
under which the fixed-point subspace of $H^1(S)$ is given by
\[
X = \{u\in H^1(S) \mid u(gx) = u(x),\ \forall\, g\in G\}.
\]
Assume that $R(x)$ is $G$-symmetric, or $G$-invariant, i.e.,
\[
R(gx) = R(x), \quad \forall\, g\in G. \tag{5.25}
\]
Let $X^* = X\cap H(S)$. Recall that
\[
H(S) = \Big\{u\in H^1(S)\ \Big|\ \int_S u\,dA = 0,\ \int_S R(x)e^u\,dA > 0\Big\}.
\]
Under the assumption (5.25), the functional
\[
J(u) = \frac12\int_S |\nabla u|^2\,dx - 8\pi\ln\int_S R(x)e^u\,dx
\]
is $G$-invariant on $X^*$, i.e.,
\[
J(g[u]) = J(u), \quad \forall\, g\in G,\ \forall\, u\in X^*.
\]
We seek a minimum of $J$ in $X^*$. The following lemma guarantees that such a minimum is the desired critical point.
Lemma 5.1.5 Assume that $u_o$ is a critical point of $J$ in $X^*$; then it is also a critical point of $J$ in $H(S)$.

Proof. First, for any $g\in G$ and any $u,v\in H(S)$, we have
\[
\langle J'(u), g[v]\rangle = \langle J'(g^{-1}[u]), v\rangle, \tag{5.26}
\]
where $g^{-1}$ is the inverse of $g$. This can easily be seen from
\[
\langle J'(u), v\rangle = \int_S \langle\nabla u,\nabla v\rangle\,dA - \frac{8\pi}{\int_S Re^u\,dA}\int_S Re^u v\,dA.
\]
Assume
\[
G = \{g_1,\cdots,g_m\}.
\]
For any $w\in H(S)$, let
\[
v = \frac1m\big(g_1[w]+\cdots+g_m[w]\big).
\]
Then for any $g\in G$,
\[
g[v] = \frac1m\big(gg_1[w]+\cdots+gg_m[w]\big) = v.
\]
Hence $v\in X^*$. It follows that
\begin{align*}
0 = \langle J'(u_o), v\rangle &= \frac1m\sum_{i=1}^{m} \langle J'(u_o), g_i[w]\rangle\\
&= \frac1m\sum_{i=1}^{m} \langle J'(g_i^{-1}[u_o]), w\rangle = \frac1m\sum_{i=1}^{m} \langle J'(u_o), w\rangle\\
&= \langle J'(u_o), w\rangle.
\end{align*}
Therefore, $u_o$ is a critical point of $J$ in $H(S)$. $\square$
Theorem 5.1.2 (Chen and Ding [CD]) Suppose that $R\in C^\alpha(S^2)$ satisfies (5.25) with $\max_S R > 0$. If $F_G\ne\emptyset$, let $m = \max_{F_G} R$. Then equation (5.2) has a solution provided one of the following conditions holds:

(i) $F_G = \emptyset$;

(ii) $m\le 0$; or

(iii) $m>0$ and $c^* = \inf_{X^*} J < -8\pi\ln(4\pi m)$.
Remark 5.1.1 In the case when the function $R(x)$ is even, the group $G = \{id, -id\}$, where $id$ is the identity transformation, i.e.,
\[
id(x) = x \quad\text{and}\quad -id(x) = -x, \quad \forall\, x\in S^2.
\]
Obviously, in this case $F_G$ is empty. Hence the above theorem contains Moser's existence result as a special case.
To prove this theorem, we need another two lemmas.

Lemma 5.1.6 (Onofri [On]) For any $u\in H^1(S)$ with $\int_S u\,dA = 0$, there holds
\[
\int_S e^u\,dA \le 4\pi\exp\Big\{\frac{1}{16\pi}\int_S |\nabla u|^2\,dA\Big\}. \tag{5.27}
\]
Lemma 5.1.7 Suppose that $\{u_i\}$ is a sequence in $H(S)$ with $J(u_i)\le\beta$. If $\|\nabla u_i\|\to\infty$, then there exist a subsequence (still denoted by $\{u_i\}$) and a point $\xi\in S^2$ with $R(\xi)>0$, such that
\[
\int_S R(x)e^{u_i}\,dA = \big(R(\xi)+o(1)\big)\int_S e^{u_i}\,dA, \tag{5.28}
\]
and
\[
\lim_{i\to\infty} J(u_i) \ge -8\pi\ln\big(4\pi R(\xi)\big). \tag{5.29}
\]
Proof. We first show that there exist a subsequence of $\{u_i\}$ and a point $\xi\in S^2$ such that for any $\epsilon>0$, we have
\[
\int_{S\setminus B_\epsilon(\xi)} e^{u_i}\,dA \to 0. \tag{5.30}
\]
Otherwise, there exist two subsets $\Omega_1$ and $\Omega_2$ of $S^2$, with $\mathrm{dist}(\Omega_1,\Omega_2)\ge\delta_o>0$, such that $\{u_i\}$ satisfies all the conditions in Lemma 5.1.3. Hence the inequality
\[
\int_S e^{u_i}\,dA \le C\exp\Big\{\Big(\frac{1}{16\pi}-\delta\Big)\|\nabla u_i\|^2\Big\}
\]
holds for some $\delta>0$. Then, by the previous argument, we conclude that $\|\nabla u_i\|$ is bounded. This contradicts our assumption, and hence verifies (5.30). Now (5.30) and the continuity assumption on $R(x)$ imply (5.28).

Roughly speaking, the above argument shows that if a minimizing sequence of $J$ is unbounded, then, passing to a subsequence, the mass of the surface $S$ with density $e^{u_i(x)}$ at the point $x$ will concentrate at one point $\xi$ on $S$.
To verify (5.29), we use (5.28) and Onofri's Lemma:
\begin{align*}
J(u_i) &= \frac12\int_S |\nabla u_i|^2\,dA - 8\pi\ln\Big[\big(R(\xi)+o(1)\big)\int_S e^{u_i}\,dA\Big]\\
&= \frac12\int_S |\nabla u_i|^2\,dA - 8\pi\ln\big(R(\xi)+o(1)\big) - 8\pi\ln\int_S e^{u_i}\,dA\\
&\ge \frac12\int_S |\nabla u_i|^2\,dA - 8\pi\ln\big(R(\xi)+o(1)\big) - 8\pi\Big(\ln 4\pi + \frac{1}{16\pi}\int_S |\nabla u_i|^2\,dA\Big)\\
&= -8\pi\ln\big(4\pi R(\xi)+o(1)\big).
\end{align*}
Letting $i\to\infty$, we arrive at (5.29). Since the right-hand side of (5.29) is bounded above by $\beta$, we must have $R(\xi)>0$. This completes the proof of the Lemma.
Proof of the Theorem. Define
\[
c^* = \inf_{X^*} J.
\]
As we argued in the previous section (see (5.12)), we have $c^* > -\infty$. Let $\{u_i\}\subset X^*$ be a minimizing sequence, i.e., $J(u_i)\to c^*$ as $i\to\infty$.
As in the reasoning at the beginning of this section, one can see that we need only show that $\{u_i\}$ is bounded in $H^1(S)$. Assume on the contrary that $\|\nabla u_i\|\to\infty$. Then, from the proof of Lemma 5.1.7, there is a point $\xi\in S^2$ such that for any $\epsilon>0$, we have
\[
\int_{S\setminus B_\epsilon(\xi)} e^{u_i}\,dA \to 0. \tag{5.31}
\]
Since $u_i$ is invariant under the action of $G$, we must also have
\[
\int_{S\setminus B_\epsilon(g\xi)} e^{u_i}\,dA \to 0. \tag{5.32}
\]
Because $\epsilon$ is arbitrary, we deduce that $g\xi = \xi$, that is, $\xi\in F_G$.
Now, under condition (i), $F_G$ is empty; therefore $\{u_i\}$ must be bounded. Moreover, by Lemma 5.1.7,
\[
R(\xi)>0 \quad\text{and}\quad c^* \ge -8\pi\ln\big(4\pi R(\xi)\big).
\]
Noticing that $m\ge R(\xi)$ by definition, this contradicts conditions (ii) and (iii).

Therefore, under any one of the conditions (i), (ii), or (iii), the minimizing sequence $\{u_i\}$ must be bounded, and hence possesses a subsequence converging weakly to some $u_o\in X^*$. Thus a constant multiple of $u_o$ is the desired weak solution of equation (5.3).
Here, conditions (i) and (ii) of Theorem 5.1.2 can easily be verified through the group $G$ or the given function $R$. One may wonder, however, under what conditions on $R$ condition (iii) holds. The following theorem provides a sufficient condition.
Theorem 5.1.3 (Chen and Ding [CD]) Suppose that $R\in C^2(S^2)$ is $G$-invariant, and $m = \max_{F_G} R > 0$. Let
\[
D = \{x\in S^2 \mid R(x) = m\}.
\]
If $\Delta R(x_o) > 0$ for some $x_o\in D$, then
\[
c^* < -8\pi\ln(4\pi m), \tag{5.33}
\]
and therefore (5.3) possesses a weak solution.
Proof. Choose a spherical polar coordinate system on $S := S^2$:
\[
(\theta,\phi), \quad -\frac\pi2\le\theta\le\frac\pi2, \quad 0\le\phi\le 2\pi,
\]
with $x_o = (\frac\pi2,\phi)$ as the north pole.

Consider the following family of functions:
\[
u_\lambda(\theta,\phi) = \ln\frac{1-\lambda^2}{(1-\lambda\sin\theta)^2}, \quad v_\lambda = u_\lambda - \frac{1}{4\pi}\int_S u_\lambda\,dA, \quad 0<\lambda<1.
\]
One can verify that this family of functions satisfies the equation
\[
-\Delta u_\lambda + 2 = 2e^{u_\lambda}. \tag{5.34}
\]
As $\lambda\to 1$, the family $\{u_\lambda\}$ 'blows up' (tends to $\infty$) at the single point $x_o$, while approaching $-\infty$ everywhere else on $S^2$. Therefore, the mass of the sphere with density $e^{u_\lambda(x)}$ concentrates at the point $x_o$ as $\lambda\to 1$.
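Both equation (5.34) and the normalization of this family can be checked symbolically. Here is a sympy sketch in the latitude coordinate $\theta$; for a function of $\theta$ alone, $\Delta f = \frac{1}{\cos\theta}(\cos\theta\, f')'$, and the sample value $\lambda = \frac12$ in the mass computation is our choice:

```python
import sympy as sp

theta = sp.symbols('theta')
lam = sp.symbols('lambda', positive=True)

u = sp.log((1 - lam**2) / (1 - lam * sp.sin(theta))**2)

# Laplace-Beltrami on S^2 for a function of the latitude theta alone
lap_u = sp.diff(sp.cos(theta) * sp.diff(u, theta), theta) / sp.cos(theta)

# equation (5.34): -Delta u_lambda + 2 = 2 e^{u_lambda}, for every lambda
assert sp.simplify(-lap_u + 2 - 2 * sp.exp(u)) == 0

# total mass int_S e^{u_lambda} dA = 4 pi, checked at the sample value lambda = 1/2
integrand = (sp.exp(u) * sp.cos(theta)).subs(lam, sp.Rational(1, 2))
total = 2 * sp.pi * sp.integrate(integrand, (theta, -sp.pi / 2, sp.pi / 2))
assert sp.simplify(total - 4 * sp.pi) == 0
```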
It is easy to see that $v_\lambda\in H(S)$ for $\lambda$ close to 1. Moreover, we claim that
\[
v_\lambda\in X, \quad\text{i.e.}\quad v_\lambda(gx) = v_\lambda(x),\ \forall\, g\in G.
\]
In fact, since $g$ is an isometry of $S^2$ and $gx_o = x_o$, we have
\[
d(gx, x_o) = d(gx, gx_o) = d(x, x_o),
\]
where $d(\cdot,\cdot)$ is the geodesic distance on $S^2$. But $v_\lambda$ depends only on $\theta$, i.e., on $d(x,x_o)$, so
\[
v_\lambda(gx) = v_\lambda(x), \quad \forall\, g\in G.
\]
It follows that $v_\lambda\in X^*$ for $\lambda$ sufficiently close to 1.
By a direct computation, one can verify that
\[
\frac12\int_S |\nabla u_\lambda|^2\,dA + 8\pi\bar u_\lambda = 0.
\]
Consequently,
\[
J(v_\lambda) = -8\pi\ln\int_S R(x)e^{u_\lambda}\,dA.
\]
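This direct computation can also be delegated to sympy. A sketch at the sample value $\lambda = \frac12$ (our choice), working in the variable $t = \sin\theta$, where $dA = 2\pi\,dt$ on the unit sphere and $|\nabla u_\lambda|^2 = (du_\lambda/d\theta)^2 = 4\lambda^2(1-t^2)/(1-\lambda t)^2$:

```python
import sympy as sp

t = sp.symbols('t')          # t = sin(theta)
lam = sp.Rational(1, 2)      # sample value of lambda in (0, 1)

u = sp.log((1 - lam**2) / (1 - lam * t)**2)
grad2 = 4 * lam**2 * (1 - t**2) / (1 - lam * t)**2   # |grad u_lambda|^2 in t

dirichlet = 2 * sp.pi * sp.integrate(grad2, (t, -1, 1))       # int_S |grad u|^2 dA
u_bar = 2 * sp.pi * sp.integrate(u, (t, -1, 1)) / (4 * sp.pi)  # average of u_lambda

# the identity used in the proof: (1/2) int |grad u_lambda|^2 dA + 8 pi bar{u}_lambda = 0
assert sp.simplify(dirichlet / 2 + 8 * sp.pi * u_bar) == 0
```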
Notice that $c^*\le J(v_\lambda)$ by the definition of $c^*$. Thus, in order to show (5.33), we only need to verify that, for $\lambda$ sufficiently close to 1,
\[
\int_S R(x)e^{u_\lambda}\,dA > 4\pi m. \tag{5.35}
\]
Let $\delta>0$ be small. We have
\begin{align*}
\int_S Re^{u_\lambda}\,dA &= (1-\lambda^2)\Big\{\int_0^{2\pi}\!\!\int_{\frac\pi2-\delta}^{\frac\pi2} \big[R(\theta,\phi)-m\big]\frac{\cos\theta}{(1-\lambda\sin\theta)^2}\,d\theta\,d\phi\\
&\qquad + \int_0^{2\pi}\!\!\int_{-\frac\pi2}^{\frac\pi2-\delta} \big[R(\theta,\phi)-m\big]\frac{\cos\theta}{(1-\lambda\sin\theta)^2}\,d\theta\,d\phi\Big\} + 4\pi m\\
&= (1-\lambda^2)\{I + II\} + 4\pi m.
\end{align*}
Here we have used the fact that
\[
\int_S e^{u_\lambda}\,dA = 4\pi,
\]
which can be derived directly from equation (5.34).

For each fixed $\delta$, it is easy to check that the integral $II$ remains bounded as $\lambda$ tends to 1. Thus (5.35) will hold if we can show that the integral $I\to+\infty$ as $\lambda\to 1$.
Notice that $x_o$ is a maximum point of the restriction of $R$ to $F_G$. Hence, by the principle of symmetric criticality, $x_o$ is a critical point of $R$, that is, $\nabla R(x_o) = 0$. Using this fact and the second-order Taylor expansion of $R(x)$ near $x_o$, we can derive
\begin{align*}
I &= \pi\int_{\frac\pi2-\delta}^{\frac\pi2} \Big[\Delta R(x_o)\cos^2\theta + o\Big(\big(\theta-\tfrac\pi2\big)^2\Big)\Big]\frac{\cos\theta}{(1-\lambda\sin\theta)^2}\,d\theta\\
&\ge c_o\int_{\frac\pi2-\delta}^{\frac\pi2}\frac{\cos^3\theta}{(1-\lambda\sin\theta)^2}\,d\theta.
\end{align*}
Here $c_o$ is a positive constant; we have chosen $\delta$ to be sufficiently small and have used the fact that $\Delta R(x_o)$ is positive. From the above inequality, one can easily verify that $I\to+\infty$ as $\lambda\to 1$, since the integral
\[
\int_{\frac\pi2-\delta}^{\frac\pi2}\frac{\cos^3\theta}{(1-\sin\theta)^2}\,d\theta
\]
diverges to $+\infty$.

This completes the proof of the Theorem. $\square$
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds

Let $M$ be a compact two-dimensional Riemannian manifold, and let $g_o(x)$ be a metric on $M$ with corresponding Laplace-Beltrami operator $\Delta$ and Gaussian curvature $K_o(x)$. Given a function $K(x)$ on $M$, can it be realized as the Gaussian curvature associated to the pointwise conformal metric $g(x) = e^{2u(x)}g_o(x)$? As we mentioned before, answering this question is equivalent to solving the following semilinear elliptic equation:
\[
-\Delta u + K_o(x) = K(x)\,e^{2u}, \quad x\in M. \tag{5.36}
\]
To simplify this equation, we introduce the following simple existence result.
Lemma 5.2.1 Let $f(x)\in L^2(M)$. Then the equation
\[
-\Delta u = f(x), \quad x\in M, \tag{5.37}
\]
possesses a solution if and only if
\[
\int_M f(x)\,dA = 0. \tag{5.38}
\]
Proof. The 'only if' part can be seen by integrating both sides of equation (5.37) over $M$.

We now prove the 'if' part. Assuming that (5.38) holds, we will obtain the existence of a solution to (5.37). To this end, we consider the functional
\[
J(u) = \frac12\int_M |\nabla u|^2\,dA - \int_M f(x)\,u\,dA
\]
under the constraint
\[
G = \Big\{u\in H^1(M)\ \Big|\ \int_M u(x)\,dA = 0\Big\}.
\]
As we have seen before, we can use, in $G$, the equivalent norm
\[
\|u\| = \Big(\int_M |\nabla u|^2\,dA\Big)^{\frac12}.
\]
By the Hölder and Poincaré inequalities, we derive
\[
J(u) \ge \frac12\|u\|^2 - \|f\|_{L^2}\|u\|_{L^2} \ge \frac12\|u\|^2 - c_o\|f\|_{L^2}\|u\| = \frac12\big(\|u\| - c_o\|f\|_{L^2}\big)^2 - \frac{c_o^2\|f\|^2_{L^2}}{2}. \tag{5.39}
\]
This inequality implies immediately that the functional $J(u)$ is bounded from below on the set $G$.
Let $\{u_k\}$ be a minimizing sequence of $J$ in $G$, i.e.,
\[
J(u_k)\to\inf_G J, \quad\text{as } k\to\infty.
\]
Then, again by (5.39), $\{u_k\}$ is bounded in $H^1$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some $u_o$ in $H^1(M)$. From the compact Sobolev embedding
\[
H^1\hookrightarrow\hookrightarrow L^2,
\]
we see that $u_k\to u_o$ strongly in $L^2$, and hence also in $L^1$.
Therefore,
\[
\int_M u_o(x)\,dA = \lim\int_M u_k(x)\,dA = 0, \tag{5.40}
\]
and
\[
\int_M f(x)u_o(x)\,dA = \lim\int_M f(x)u_k(x)\,dA. \tag{5.41}
\]
(5.40) implies that $u_o\in G$, and hence
\[
J(u_o) \ge \inf_G J(u). \tag{5.42}
\]
Meanwhile, (5.41) and the weak lower semi-continuity of the Dirichlet integral,
\[
\liminf\int_M |\nabla u_k|^2\,dA \ge \int_M |\nabla u_o|^2\,dA,
\]
lead to
\[
J(u_o) \le \inf_G J(u).
\]
From this and (5.42) we conclude that $u_o$ is the minimum of $J$ in the set $G$, and therefore there exists a constant $\lambda$ such that $u_o$ is a weak solution of
\[
-\Delta u_o - f(x) = \lambda.
\]
Integrating both sides and using (5.38), we find that $\lambda = 0$. Therefore $u_o$ is a solution of equation (5.37). This completes the proof of the Lemma. $\square$
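The solvability condition (5.38) has a transparent computational analogue. On the flat torus $T^2$ (our stand-in compact surface here, not one from the book), $-\Delta u = f$ reads $|k|^2\hat u(k) = \hat f(k)$ in Fourier space: the $k=0$ mode forces $\hat f(0) = 0$, i.e. zero mean, and every other mode is solved by dividing by $|k|^2$. A numpy sketch:

```python
import numpy as np

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

f = np.cos(X) * np.sin(2 * Y)     # a smooth right-hand side with zero mean

k = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies on the torus
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2

fhat = np.fft.fft2(f)
assert abs(fhat[0, 0]) < 1e-8     # the solvability condition: zero mean

uhat = np.zeros_like(fhat)
nz = K2 > 0
uhat[nz] = fhat[nz] / K2[nz]      # -Delta u = f  <=>  |k|^2 uhat = fhat
u = np.real(np.fft.ifft2(uhat))

# for this f we have -Delta f = 5 f, so the mean-zero solution is exactly u = f/5
assert np.max(np.abs(u - f / 5)) < 1e-10
```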
We now simplify equation (5.36). First let ũ = 2u; then
−Δũ + 2K_o(x) = 2K(x)e^{ũ}. (5.43)
Let v(x) be a solution of
−Δv = 2K_o(x) − 2K̄_o, (5.44)
where
K̄_o = (1/|M|) ∫_M K_o(x) dA
is the average of K_o(x) on M. Because the integral of the right hand side of (5.44) over M is 0, a solution of (5.44) exists by Lemma 5.2.1.
Let w = ũ + v. Then it is easy to verify that
−Δw + 2K̄_o = 2K(x)e^{−v}e^{w}. (5.45)
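For the reader's convenience, the verification of (5.45) is the following routine computation, using (5.43), (5.44), and ũ = w − v:

```latex
\begin{aligned}
-\Delta w + 2\bar K_o
  &= -\Delta \tilde u - \Delta v + 2\bar K_o \\
  &= \bigl(2K(x)e^{\tilde u} - 2K_o(x)\bigr)
   + \bigl(2K_o(x) - 2\bar K_o\bigr) + 2\bar K_o \\
  &= 2K(x)e^{\tilde u}
   = 2K(x)e^{-v}e^{w}.
\end{aligned}
```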
Finally, letting
α = 2K̄_o and R(x) = 2K(x)e^{−v(x)},
and rewriting w as u, we reduce equation (5.36) to
−Δu + α = R(x)e^{u(x)},  x ∈ M. (5.46)
By the well-known Gauss–Bonnet Theorem, if g is a Riemannian metric on M with corresponding Gaussian curvature K_g(x), then
∫_M K_g(x) dA_g = 2πχ(M),
where χ(M) is the Euler characteristic of M, and dA_g is the area element associated to the metric g.
We say that M is negatively curved if χ(M) < 0; in this case α < 0.
5.2.1 Kazdan and Warner’s Results. Method of Lower and Upper
Solutions
Let M be a compact two dimensional manifold. Let α < 0. In [KW], Kazdan and Warner considered the equation
−Δu + α = R(x)e^{u(x)},  x ∈ M  (5.47)
and proved

Theorem 5.2.1 (Kazdan–Warner) Let R̄ be the average of R(x) on M. Then
i) A necessary condition for (5.46) to have a solution is R̄ < 0.
ii) If R̄ < 0, then there is a constant α_o, with −∞ ≤ α_o < 0, such that one can solve (5.46) if α > α_o, but cannot solve (5.46) if α < α_o.
iii) α_o = −∞ if and only if R(x) ≤ 0.
The existence of a solution was established by using the method of upper and lower solutions. We say that u_+ is an upper solution of equation (5.47) if
−Δu_+ + α ≥ R(x)e^{u_+},  x ∈ M.
Similarly, we say that u_− is a lower solution of (5.47) if
−Δu_− + α ≤ R(x)e^{u_−},  x ∈ M.
The following approach is adapted from [KW] with minor modifications.
Lemma 5.2.2 If a solution of (5.47) exists, then R̄ < 0. Even more strongly, the unique solution φ of
Δφ + αφ = R(x),  x ∈ M  (5.48)
must be positive.
Proof. We first prove the second part. Using the substitution v = e^{−u}, one sees that (5.47) has a solution u if and only if there is a positive solution v of
Δv + αv − R(x) − |∇v|²/v = 0. (5.49)
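The equivalence can be checked directly; for v = e^{−u} one computes (a routine verification included here for clarity):

```latex
\begin{aligned}
\Delta v &= \Delta e^{-u} = e^{-u}\bigl(|\nabla u|^2 - \Delta u\bigr),
\qquad
\frac{|\nabla v|^2}{v} = e^{-u}|\nabla u|^2,\\
\Delta v + \alpha v - R(x) - \frac{|\nabla v|^2}{v}
 &= e^{-u}\bigl(-\Delta u + \alpha\bigr) - R(x)
  = e^{-u}R(x)e^{u} - R(x) = 0,
\end{aligned}
```

whenever u solves (5.47); the steps reverse under the substitution u = −log v for positive v.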
Assume that v is such a positive solution. Let φ be the unique solution of (5.48) and let w = φ − v. Then
Δw + αw = −|∇v|²/v ≤ 0. (5.50)
It follows that w ≥ 0. Otherwise, there is a point x_o ∈ M such that w(x_o) < 0 and x_o is a minimum of w, hence Δw(x_o) ≥ 0. Consequently, we must have
Δw(x_o) + αw(x_o) > 0,
a contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. This shows that a necessary condition for (5.47) to have a solution is that the unique solution φ of (5.48) be positive. Integrating both sides of (5.48), we see immediately that R̄ < 0. This completes the proof of the Lemma. ⊓⊔
Lemma 5.2.3 (The Existence of Upper Solutions)
i) If R(x) ≤ 0 but is not identically 0, then there exists an upper solution u_+ of (5.47).
ii) If R̄ < 0, then there exists a small δ > 0 such that for −δ ≤ α < 0, there exists an upper solution u_+ of (5.47).
Proof. Let v be a solution of
−Δv = R(x) − R̄.
We try to choose constants a and b such that u_+ = av + b is an upper solution. We have
−Δu_+ + α − R(x)e^{u_+} = aR(x) − aR̄ + α − R(x)e^{av+b}. (5.51)
We want to choose the constants a and b so that the right hand side of (5.51) is nonnegative.
In Case i), when R(x) ≤ 0, we regroup the right hand side of (5.51) as
f(a, b, x) ≡ (−aR̄ + α) − R(x)(e^{av+b} − a). (5.52)
For any given α < 0, we first choose a sufficiently large, so that −aR̄ + α > 0. Then we choose b large, so that
e^{av(x)+b} − a ≥ 0,  ∀ x ∈ M.
This is possible because v(x) is continuous and M is compact. It follows from (5.51) and (5.52) that
−Δu_+ + α − R(x)e^{u_+} ≥ 0.
In Case ii), we choose α ≥ aR̄/2 and e^b = a, where a is a positive number to be determined later. Then (5.52) becomes
f(a, b, x) ≥ −aR̄/2 − aR(x)(e^{av} − 1)
          ≥ a( −R̄/2 − ‖R‖_{L∞}|e^{av} − 1| )
          = a‖R‖_{L∞}( −R̄/(2‖R‖_{L∞}) − |e^{av} − 1| ).
Again by the continuity of v and the compactness of M, we see that
e^{av(x)} → 1, uniformly on M as a → 0.
Therefore, we can make
|e^{av(x)} − 1| ≤ −R̄/(2‖R‖_{L∞}), for all x ∈ M,
by choosing a sufficiently small. Thus the u_+ so constructed is an upper solution of (5.47) for α ≥ aR̄/2. This completes the proof of the Lemma. ⊓⊔
Lemma 5.2.4 (Existence of Lower Solutions) Given any R ∈ L^p(M) and a function u_+ ∈ W^{2,p}(M), there is a lower solution u_− ∈ W^{2,p}(M) of (5.47) such that u_− < u_+.
Proof. If R(x) is bounded from below, then one can clearly use any sufficiently large negative constant as u_−. For a general R(x) ∈ L^p(M), let K(x) = max{1, −R(x)}. Choose a positive constant a so that aK̄ = −α. Then the average of aK(x) + α over M is 0, and (aK(x) + α) ∈ L^p(M). Thus there is a solution w of
Δw = aK(x) + α.
By the L^p regularity theory, w ∈ W^{2,p}(M) and hence is continuous. Let u_− = w − λ. Then
−Δu_− + α − R(x)e^{u_−} = −Δw + α − R(x)e^{w−λ}
   = −aK(x) − R(x)e^{w−λ}
   ≤ −K(x)(a − e^{w−λ}). (5.53)
Choosing λ sufficiently large so that
a − e^{w−λ} ≥ 0,
and noticing that K(x) ≥ 0, by (5.53) we arrive at
−Δu_− + α − R(x)e^{u_−} ≤ 0.
Therefore u_− is a lower solution. Moreover, one can make λ large enough so that u_− = w − λ < u_+. This completes the proof of the Lemma. ⊓⊔
Lemma 5.2.5 Let p > 2. If there exist upper and lower solutions u_+, u_− ∈ W^{2,p}(M) of (5.47), and if u_− ≤ u_+, then there is a solution u ∈ W^{2,p}(M). Moreover, u_− ≤ u ≤ u_+, and u is C^∞ in any open set where R(x) is C^∞.
Proof. We will rewrite equation (5.47) as
Lu = f(x, u),
then start from the upper solution u_+ and apply the iterations
Lu_1 = f(x, u_+),  Lu_{i+1} = f(x, u_i),  i = 1, 2, ⋯.
In order that u_− ≤ u_i ≤ u_+ for each i, we need the operator L to obey some maximum principle and f(x, ·) to possess some monotonicity. The Laplace operator in the original equation does not obey maximum principles; however, if we let
Lu = −Δu + k(x)u,
where k(x) is any positive function on M, then the new operator L obeys the maximum principle:
If Lu ≥ 0, then u(x) ≥ 0, for all x ∈ M.
To see this, suppose to the contrary that there is some point on M where u < 0. Let x_o be a negative minimum of u; then −Δu(x_o) ≤ 0, and it follows that
−Δu(x_o) + k(x_o)u(x_o) ≤ k(x_o)u(x_o) < 0,
a contradiction. Hence the maximum principle holds for L. We now write equation (5.47) as
Lu ≡ −Δu + k(x)u = R(x)e^u − α + k(x)u := f(x, u).
Based on the maximum principle for L, to keep u_− ≤ u_i ≤ u_+, it is required that f(x, ·) be monotone increasing, that is,
∂f/∂u = R(x)e^u + k(x) ≥ 0.
To this end, we choose
k(x) = k_1(x)e^{u_+(x)}, where k_1(x) = max{1, −R(x)}.
Based on this choice of L and f, we show by mathematical induction that the iteration sequence so constructed is monotone decreasing:
u_{i+1} ≤ u_i ≤ ⋯ ≤ u_1 ≤ u_+,  i = 1, 2, ⋯. (5.54)
In fact, from
Lu_+ ≥ f(x, u_+) and Lu_1 = f(x, u_+)
we derive immediately
L(u_+ − u_1) ≥ 0.
Then the maximum principle for L implies u_+ ≥ u_1. This completes the first step of our induction. Assume that for some integer m, u_{m+1} ≤ u_m ≤ u_+; we will show that u_{m+2} ≤ u_{m+1}. Since f(x, ·) is monotone increasing in u for u ≤ u_+, we have
f(x, u_{m+1}) ≤ f(x, u_m),
and hence L(u_{m+1} − u_{m+2}) ≥ 0. Therefore u_{m+1} ≥ u_{m+2}. This verifies (5.54) by mathematical induction. Similarly, one can show the other side of the inequality and arrive at
u_− ≤ u_{i+1} ≤ u_i ≤ ⋯ ≤ u_+,  i = 1, 2, ⋯. (5.55)
Since u_+, u_−, and the u_i are continuous, inequality (5.55) shows that {|u_i|} is uniformly bounded. Consequently, in the L^p norm,
‖Lu_{i+1}‖_{L^p} = ‖R(x)e^{u_i} − α + k(x)u_i‖_{L^p} ≤ C.
Hence {u_i} is bounded in W^{2,p}(M). By the compact imbedding of W^{2,p} into C¹, there is a subsequence that converges uniformly to some function u in C¹(M). In view of the monotonicity (5.55), we conclude that the entire sequence {u_i} itself converges uniformly to u. Moreover,
‖u_{i+1} − u_{j+1}‖_{W^{2,p}} ≤ C‖L(u_{i+1} − u_{j+1})‖_{L^p}
   ≤ C( ‖R‖_{L^p}‖e^{u_i} − e^{u_j}‖_{L^∞} + ‖k‖_{L^p}‖u_i − u_j‖_{L^∞} ).
Therefore {u_i} converges strongly in W^{2,p}, so u ∈ W^{2,p}. Since L : W^{2,p} → L^p is continuous, it follows that u is a solution of (5.47) and satisfies u_− ≤ u ≤ u_+.
By the Schauder theory, one proves inductively that u is C^∞ in any open set where R(x) is C^∞. More precisely, if R ∈ C^{m+γ}, then u ∈ C^{m+2+γ}. This completes the proof of the Lemma. ⊓⊔
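The monotone iteration in the proof of Lemma 5.2.5 can be carried out numerically. Below is a minimal sketch on a one-dimensional periodic toy model of (5.47); the particular R, α, upper solution u_+ ≡ 1, lower solution u_− ≡ −1, and grid are our own illustrative assumptions, not part of the text.

```python
import numpy as np

# Toy periodic analogue of (5.47): -u'' + alpha = R(x) e^u on [0,1).
n = 200
h = 1.0 / n
x = np.arange(n) * h
R = -1.0 - 0.5 * np.cos(2.0 * np.pi * x)   # assumption: R(x) <= -0.5 < 0
alpha = -1.0                                # assumption: alpha < 0

u_plus = np.ones(n)                         # upper solution: alpha >= R e^1 pointwise
k = np.maximum(1.0, -R) * np.exp(u_plus)    # k = k_1 e^{u_+}, k_1 = max{1, -R}

# L = -d^2/dx^2 + k(x), second-order periodic finite differences
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
L[0, -1] -= 1.0 / h**2
L[-1, 0] -= 1.0 / h**2
L += np.diag(k)

def f(u):                                   # f(x,u) = R e^u - alpha + k u
    return R * np.exp(u) - alpha + k * u

u = u_plus.copy()                           # start the iteration from u_+
for _ in range(500):
    u_new = np.linalg.solve(L, f(u))        # L u_{i+1} = f(x, u_i)
    done = np.max(np.abs(u_new - u)) < 1e-10
    u = u_new
    if done:
        break

res = (L @ u - k * u) + alpha - R * np.exp(u)   # residual of -u'' + alpha - R e^u
assert np.max(np.abs(res)) < 1e-6
assert np.all(u <= u_plus + 1e-8)           # iterates stay below the upper solution
assert np.all(u >= -1.0 - 1e-8)             # and above the lower solution u_- = -1
```

The iterates decrease monotonically from u_+, exactly as in (5.54), because the discrete L is inverse-positive and f(x, ·) is increasing for u ≤ u_+.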
Based on these Lemmas, we are now able to complete the proof of Theorem 5.2.1.
Proof of Theorem 5.2.1.
Part i) follows directly from Lemma 5.2.2.
From Lemmas 5.2.4 and 5.2.5, if there is an upper solution, then there is a solution. Hence by Lemma 5.2.3, there exists a δ > 0 such that (5.47) is solvable for −δ ≤ α < 0. Let
α_o = inf{α | (5.47) is solvable}.
Then for any 0 > α > α_o, one can solve (5.47). In fact, if (5.47) possesses a solution u_1 for α_1, then for any α > α_1, we have
−Δu_1 + α > R(x)e^{u_1},
that is, u_1 is an upper solution for equation (5.47) at α. Hence the equation
−Δu + α = R(x)e^u
also has a solution.
Lemma 5.2.3 implies that α_o = −∞ if R(x) ≤ 0 but R(x) is not identically 0. Now, to complete the proof of the Theorem, it suffices to prove that if R(x) is positive somewhere, then α_o > −∞. By virtue of Lemma 5.2.2, we only need to show
Claim: There exists α < 0 such that the unique solution of
Δφ + αφ = R(x) (5.56)
is negative somewhere.
Actually, we can prove a stronger result:
αφ(x) → R(x) almost everywhere as α → −∞. (5.57)
We argue in the following two steps.
i) For a given α < 0, let u = u(x, α) be the solution of (5.56). Multiplying both sides of equation (5.56) by u and integrating over M, we obtain
−∫_M |∇u|² dA + α∫_M u² dA = ∫_M R(x)u(x) dA.
Consequently,
‖u‖²_{L²} ≤ (1/|α|) |∫_M Ru dA|.
Then by the Hölder inequality,
‖u‖_{L²} ≤ (1/|α|)‖R‖_{L²}.
This implies that
as α → −∞, u(x, α) → 0 almost everywhere. (5.58)
Since for any α < 0 the operator Δ + α is invertible, we can express
u = (Δ + α)^{−1}R(x).
From the above reasoning, one can see that, as α → −∞,
if R ∈ L², then (Δ + α)^{−1}R(x) → 0 almost everywhere. (5.59)
ii) To see (5.57), we write
−αφ + R(x) = (Δ + α)^{−1}ΔR. (5.60)
One can see the validity of (5.60) by simply applying the operator Δ + α to both sides.
Now, by assumption we have ΔR ∈ L², and hence (5.59) implies that
−αφ + R(x) → 0 a.e.,
or, equivalently,
φ(x) − (1/α)R(x) → 0 a.e.
Since R(x) is continuous and positive somewhere, for sufficiently negative α, φ(x) is negative somewhere.
This completes the proof of the Theorem.
5.2.2 Chen and Li’s Results
From the previous section, one can see from Kazdan and Warner’s results that in the case when R(x) is nonpositive, equation (5.46) has been solved completely; i.e., a necessary and sufficient condition for equation (5.46) to have a solution for every 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x) changes signs, we have α_o > −∞, and hence some questions still remain. In particular, one may naturally ask:
Does equation (5.46) have a solution for the critical value α = α_o?
In [CL1], Chen and Li answered this question affirmatively.
Theorem 5.2.2 (Chen–Li) Assume that R(x) is a continuous function on M. Let α_o be defined as in the previous section. Then equation (5.46) has at least one solution when α = α_o.
Proof. The Outline. Let {α_k} be a sequence of numbers such that
α_k > α_o, and α_k → α_o, as k → ∞.
We will prove the Theorem in three steps.
In Step 1, for each fixed α_k, we minimize the functional
J_k(u) = ½∫_M |∇u|² dA + α_k∫_M u dA − ∫_M R(x)e^u dA
in a class of functions that lie between a lower and an upper solution. Let the minimizer be u_k. Then we let α_k → α_o.
In Step 2, we show that the sequence of minimizers {u_k} is bounded in the region where R(x) is positively bounded away from 0.
In Step 3, we prove that {u_k} is bounded in H¹(M) and hence converges to a desired solution.
Step 1. For each α_k > α_o, by Kazdan and Warner, there exists a solution φ_k of
−Δφ_k + α_k = R(x)e^{φ_k}. (5.61)
We first show that there exists a constant c_o > 0 such that
φ_k(x) ≥ −c_o,  ∀ x ∈ M and ∀ k = 1, 2, ⋯. (5.62)
Otherwise, for each k, let x_k be a minimum of φ_k(x) on M. Then obviously
φ_k(x_k) → −∞, as k → ∞.
On the other hand, since x_k is a minimum, we have
−Δφ_k(x_k) ≤ 0,  ∀ k = 1, 2, ⋯.
These contradict equation (5.61), because
α_k → α_o < 0, as k → ∞.
This verifies (5.62).
Choose a sufficiently negative constant A < −c_o to be the lower solution of equation (5.46) for all α_k, that is,
−ΔA + α_k ≤ R(x)e^A.
This is possible because R(x) is bounded below on M, and hence we can make R(x)e^A greater than any fixed negative number.
For each fixed α_k, choose α̃_k with α_o < α̃_k < α_k. By the result of Kazdan and Warner, there exists a solution of equation (5.46) for α = α̃_k; call it ψ_k. Then
−Δψ_k + α_k > −Δψ_k + α̃_k = R(x)e^{ψ_k(x)},
i.e., ψ_k is a super solution of the equation
−Δu + α_k = R(x)e^{u(x)}. (5.63)
Inspired by an idea of Ding and Liu [DL], we minimize the functional J_k(u) in the class of functions
H = {u ∈ C¹(M) | A ≤ u ≤ ψ_k}.
This is possible because the functional J_k(u) is bounded from below. In fact, since M is compact and R(x) is continuous, there exists a constant C_1 such that
|R(x)| ≤ C_1,  ∀ x ∈ M.
Consequently, for any u ∈ H,
J_k(u) ≥ ½∫_M |∇u|² dA + α_k∫_M u dA − C_1∫_M e^{ψ_k} dA
       ≥ ½(‖u‖ − C_2)² − C_3. (5.64)
Here we have used the Hölder and Poincaré inequalities. From (5.64), one can easily see that J_k is bounded from below. Also by (5.64) and an argument similar to that in the previous section, we can show that a minimizing sequence has a subsequence which converges weakly to a minimizer u_k of J_k(u) in the set H. In order to show that u_k is a solution of equation (5.63), we need it to be in the interior of H, i.e.
A < u_k(x) < ψ_k(x),  ∀ x ∈ M. (5.65)
To verify this, we use a maximum principle.
First we show that
u_k(x) < ψ_k(x),  ∀ x ∈ M. (5.66)
Suppose to the contrary that there exists x_o ∈ M such that u_k(x_o) = ψ_k(x_o); we will derive a contradiction. Since u_k(x_o) > A, there is a δ > 0 such that
A < u_k(x),  ∀ x ∈ B_δ(x_o).
It follows that, for any positive function v(x) with support in B_δ(x_o), there exists an ε > 0 such that for −ε ≤ t ≤ 0 we have u_k + tv ∈ H. Since u_k is a minimizer of J_k in H, the function g(t) = J_k(u_k + tv) in particular achieves its minimum at t = 0 on the interval [−ε, 0]. Hence g′(0) ≤ 0. This implies that
∫_M [−Δu_k + α_k − R(x)e^{u_k}]v(x) dA ≤ 0. (5.67)
Consequently,
−Δu_k(x_o) + α_k − R(x_o)e^{u_k(x_o)} ≤ 0. (5.68)
Let w(x) = ψ_k(x) − u_k(x). Then w(x) ≥ 0, and x_o is a minimum of w. It follows that −Δw(x_o) ≤ 0. On the other hand, we have, by (5.68),
−Δw(x_o) = −Δψ_k(x_o) − (−Δu_k(x_o)) ≥ −α̃_k + α_k > 0.
Again a contradiction. Therefore we must have
u_k(x) < ψ_k(x),  ∀ x ∈ M.
Similarly, one can show that
A < u_k(x),  ∀ x ∈ M.
This verifies (5.65). Now for any v ∈ C¹(M), u_k + tv is in H for all sufficiently small t, and hence (d/dt)J_k(u_k + tv)|_{t=0} = 0. Then a direct computation, as we have done before, leads to
−Δu_k + α_k = R(x)e^{u_k},  ∀ x ∈ M.
Step 2. It is obvious that the sequence {u_k} obtained in the previous step is uniformly bounded from below by the negative constant A. To show that it is also bounded from above, we first consider the region where R(x) is positively bounded away from 0. We will use a sup + inf inequality of Brezis, Li, and Shafrir [BLS]:
Lemma 5.2.6 (Brezis, Li, and Shafrir) Assume that V is a Lipschitz function satisfying
0 < a ≤ V(x) ≤ b < ∞,
and let K be a compact subset of a domain Ω in R². Then any solution u of the equation
−Δu = V(x)e^u,  x ∈ Ω,
satisfies
sup_K u + inf_Ω u ≤ C(a, b, ‖∇V‖_{L∞}, K, Ω). (5.69)
Let x_o be a point on M at which R(x_o) > 0. Choose ε so small that
R(x) ≥ a > 0,  ∀ x ∈ Ω ≡ B_{2ε}(x_o),
for some constant a.
Let v_k be a solution of
−Δv_k − α_k = 0, x ∈ Ω;  v_k(x) = 1, x ∈ ∂Ω.
Let w_k = u_k + v_k. Then w_k satisfies
−Δw_k = R(x)e^{−v_k}e^{w_k}.
By the maximum principle, the sequence {v_k} is bounded from above and from below in the small ball Ω. Since {u_k} is uniformly bounded from below, {w_k} is also bounded from below. Locally, the metric is pointwise conformal to the Euclidean metric; hence one can apply Lemma 5.2.6 to {w_k} to conclude that the sequence {w_k} is also uniformly bounded from above in the smaller ball B_ε(x_o). Since x_o is an arbitrary point where R is positive, we conclude that {w_k} is uniformly bounded in the region where R(x) is positively bounded away from 0. The uniform bound for {u_k} in the same region follows suit.
Step 3. We show that the sequence {u_k} is bounded in H¹(M).
Choose a small δ > 0 such that the set
D = {x ∈ M | R(x) > δ}
is nonempty. It is open since R(x) is continuous. From the previous step, we know that {u_k} is uniformly bounded in D.
Let K(x) be a smooth function such that
K(x) < 0 for x ∈ D, and K(x) = 0 elsewhere.
For each α_k, let v_k be the unique solution of
−Δv_k + α_k = K(x)e^{v_k},  x ∈ M.
Then, since
−Δv_k + α_k ≤ 0 and α_k → α_o,
the sequence {v_k} is uniformly bounded.
Let w_k = u_k − v_k; then
−Δw_k = R(x)e^{u_k} − K(x)e^{v_k}. (5.70)
We show that {w_k} is bounded in H¹(M).
On one hand, multiplying both sides of (5.70) by e^{w_k} and integrating over M, we have
∫_M |∇w_k|²e^{w_k} dA = ∫_M R(x)e^{u_k+w_k} dA − ∫_D K(x)e^{u_k} dA. (5.71)
On the other hand, since each u_k is a minimizer of the functional J_k, the second variation J″_k(u_k) is positive semi-definite. It follows that for any function φ ∈ H¹(M),
∫_M [|∇φ|² − R(x)e^{u_k}φ²] dA ≥ 0.
Choosing φ = e^{w_k/2}, we derive
¼∫_M |∇w_k|²e^{w_k} dA ≥ ∫_M R(x)e^{u_k+w_k} dA. (5.72)
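The passage from the stability inequality to (5.72) is the following short computation, spelled out here for clarity: with φ = e^{w_k/2},

```latex
|\nabla \varphi|^2 = \frac14\, e^{w_k}|\nabla w_k|^2,
\qquad
\varphi^2 = e^{w_k},
\qquad\text{so}\qquad
0 \le \int_M\Bigl[|\nabla\varphi|^2 - R(x)e^{u_k}\varphi^2\Bigr]dA
  = \frac14\int_M |\nabla w_k|^2 e^{w_k}\,dA
    - \int_M R(x)e^{u_k+w_k}\,dA .
```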
Combining (5.71) and (5.72), we arrive at
−∫_D K(x)e^{u_k} dA ≥ ¾∫_M |∇w_k|²e^{w_k} dA. (5.73)
Noticing that {u_k} is uniformly bounded in the region D and that {w_k} is bounded from below (due to the fact that {u_k} is bounded from below and {v_k} is bounded), (5.73) implies that ∫_M |∇w_k|² dA is bounded, and so is ∫_M |∇u_k|² dA.
Consider ũ_k = u_k − ū_k, where ū_k is the average of u_k. Obviously ∫_M ũ_k dA = 0; hence we can apply the Poincaré inequality to ũ_k and conclude that {ũ_k} is bounded in H¹(M). Clearly, we have
∫_M u_k² dA ≤ 2( ∫_M ũ_k² dA + ū_k²|M| ).
If ∫_M u_k² dA were unbounded, then {ū_k} would be unbounded. Taking into account that {u_k} is bounded in the region D, where R(x) is positively bounded away from 0, we would have
∫_D ũ_k² dA = ∫_D |u_k − ū_k|² dA → ∞
for some subsequence, a contradiction with the fact that {ũ_k} is bounded in H¹(M). Therefore {u_k} must also be bounded in H¹(M). Consequently, there exists a subsequence of {u_k} that converges weakly to a function u_o in H¹(M), and u_o is the desired weak solution of
−Δu_o + α_o = R(x)e^{u_o}.
This completes the proof of the Theorem. ⊓⊔
6
Prescribing Scalar Curvature on S^n, for n ≥ 3
6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where R < 0
6.3.2 In the Region Where R is small
6.3.3 In the Region Where R > 0
6.1 Introduction
On the higher dimensional sphere S^n with n ≥ 3, Kazdan and Warner raised the following question:
Which functions R(x) can be realized as the scalar curvature of some conformally related metric? It is equivalent to consider the existence of a solution to the following semilinear elliptic equation
−Δu + (n(n−2)/4)u = ((n−2)/(4(n−1)))R(x)u^{(n+2)/(n−2)},  u > 0,  x ∈ S^n. (6.1)
The equation is “critical” in the sense that a lack of compactness occurs. Besides the obvious necessary condition that R(x) be positive somewhere, there are well-known obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE]. The conditions are:
∫_{S^n} X(R) dV_g = 0, (6.2)
where dV_g is the volume element of the conformal metric g and X is any conformal vector field associated with the standard metric g_0. We call these Kazdan–Warner type conditions. These conditions give rise to many examples of R(x) for which equation (6.1) has no solution. In particular, for a monotone rotationally symmetric function R, the problem admits no solution.
In the last two decades, numerous studies were dedicated to these problems, and various sufficient conditions were found (please see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein). However, among others, one problem of common concern was left open, namely: were those Kazdan–Warner type necessary conditions also sufficient? In the case where R is rotationally symmetric, the conditions become:
R > 0 somewhere and R′ changes signs. (6.3)
Then
(a) is (6.3) a sufficient condition? and
(b) if not, what are the necessary and sufficient conditions?
Kazdan listed these as open problems in his CBMS Lecture Notes [Ka].
Recently, we answered question (a) negatively [CL3] [CL4]. We found some stronger obstructions. Our results imply that for a rotationally symmetric function R, if it is monotone in the region where it is positive, then problem (6.1) admits no solution unless R is a constant.
In other words, a necessary condition to solve (6.1) is that
R′(r) changes signs in the region where R is positive. (6.4)
Now, is this a sufficient condition?
For the prescribing Gaussian curvature equation (5.3) on S², Xu and Yang [XY] showed that if R is ‘nondegenerate,’ then (6.4) is a sufficient condition.
For equation (6.1) on higher dimensional spheres, a major difficulty is that multiple blow-ups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations.
In dimensions higher than 3, the ‘nondegeneracy’ condition is no longer sufficient to guarantee the existence of a solution. This was illustrated by a counterexample constructed by Bianchi in [Bi]. He found a positive rotationally symmetric function R on S⁴ which is nondegenerate and nonmonotone, and for which the problem admits no solution. In this situation, a more appropriate condition is the so-called ‘flatness condition.’ Roughly speaking, it requires that at every critical point of R, the derivatives of R up to order (n − 2) vanish, while some higher derivative of order less than n is distinct from 0. For n = 3, the ‘nondegeneracy’ condition is a special case of the ‘flatness condition.’ Although people wonder whether the ‘flatness condition’ is necessary, it is still widely used today (see [CY3], [Li2], and [SZ]) as a standard assumption to guarantee the existence of a solution. The above mentioned counterexample of Bianchi seems to suggest that the ‘flatness condition’ is somewhat sharp.
Now, a natural question is:
Under the ‘flatness condition,’ is (6.4) a sufficient condition?
In this section, we answer the question affirmatively. We prove that, under the ‘flatness condition,’ (6.4) is a necessary and sufficient condition for (6.1) to have a solution. This is true in all dimensions n ≥ 3, and it applies to functions R with changing signs. Thus, we essentially answer the open question (b) posed by Kazdan.
There are many versions of ‘flatness conditions’; a general one was presented by Y. Li in [Li2]. Here, to better illustrate the idea, in the statement of the following theorem we only list a typical and easy-to-verify one.
Theorem 6.1.1 Let n ≥ 3. Let R = R(r) be rotationally symmetric and satisfy the following flatness condition near every positive critical point r_o:
R(r) = R(r_o) + a|r − r_o|^α + h(|r − r_o|), with a ≠ 0 and n−2 < α < n, (6.5)
where h′(s) = o(s^{α−1}).
Then a necessary and sufficient condition for equation (6.1) to have a solution is that
R′(r) changes signs in the regions where R > 0.
The theorem is proved by a variational approach. We blend our new ideas with ideas from [XY], [Ba], [Li2], and [SZ]. We use the ‘center of mass’ to define neighborhoods of ‘critical points at infinity,’ obtain some quite sharp and clean estimates in those neighborhoods, and construct a max-mini variational scheme at subcritical levels and then approach the critical level.
Outline of the proof of Theorem 6.1.1.
Let γ_n = n(n−2)/4 and τ = (n+2)/(n−2). We first find a positive solution u_p of the subcritical equation
−Δu + γ_n u = R(r)u^p (6.6)
for each p < τ close to τ. Then we let p → τ and take the limit.
To find the solution of equation (6.6), we construct a max-mini variational scheme. Let
J_p(u) := ∫_{S^n} Ru^{p+1} dV
and
E(u) := ∫_{S^n} (|∇u|² + γ_n u²) dV.
We seek critical points of J_p(u) under the constraint
H = {u ∈ H¹(S^n) | E(u) = γ_n|S^n|, u ≥ 0},
where |S^n| is the volume of S^n.
If R has only one positive local maximum, then by condition (6.4), at each of the poles R is either nonpositive or has a local minimum. Then, similarly to the approach in [C], we seek a solution by minimizing the functional in a family of rotationally symmetric functions in H.
In the following, we assume that R has at least two positive local maxima. In this case, the solutions we seek are not necessarily symmetric.
Our scheme is based on the following key estimates on the values of the functional J_p(u) in a neighborhood of the ‘critical points at infinity’ associated with each local maximum of R. Let r_1 be a positive local maximum of R. We prove that there is an open set G_1 ⊂ H (independent of p), such that on the boundary ∂G_1 of G_1, we have
J_p(u) ≤ R(r_1)|S^n| − δ, (6.7)
while there is a function ψ_1 ∈ G_1 such that
J_p(ψ_1) > R(r_1)|S^n| − δ/2. (6.8)
Here δ > 0 is independent of p. Roughly speaking, we have some kind of ‘mountain pass’ associated to each local maximum of R. The set G_1 is defined by using the ‘center of mass’.
Let r_1 and r_2 be the two smallest positive local maxima of R. Let ψ_1 and ψ_2 be the two functions defined by (6.8) associated to r_1 and r_2. Let γ be a path in H connecting ψ_1 and ψ_2, and let Γ be the family of all such paths.
Define
c_p = sup_{γ∈Γ} min_{u∈γ} J_p(u). (6.9)
For each p < τ, by compactness, there is a critical point u_p of J_p(·) such that
J_p(u_p) = c_p.
Obviously, a constant multiple of u_p is a solution of (6.6). Moreover, by (6.7),
J_p(u_p) ≤ R(r_i)|S^n| − δ (6.10)
for any positive local maximum r_i of R.
To find a solution of (6.1), we let p → τ and take the limit. To show the convergence of a subsequence of {u_p}, we establish a priori bounds for the solutions in the following order.
(i) In the region where R < 0: This is done by the ‘Kelvin transform’ and a maximum principle.
(ii) In the region where R is small: This is mainly due to the boundedness of the energy E(u_p).
(iii) In the region where R is positively bounded away from 0: First, due to the energy bound, {u_p} can blow up at at most finitely many points. Using a Pohozaev type identity, we show that the sequence can only blow up at one point, and that the point must be a local maximum of R. Finally, we use (6.10) to argue that even one-point blow-up is impossible, and thus we establish an a priori bound for a subsequence of {u_p}.
In subsection 2, we carry out the max-mini variational scheme and obtain a solution u_p of (6.6) for each p.
In subsection 3, we establish a priori estimates on the solution sequence {u_p}.
6.2 The Variational Approach
In this section, we construct a max-mini variational scheme to find a solution of
−Δu + γ_n u = R(r)u^p (6.11)
for each p < τ := (n+2)/(n−2).
Let
J_p(u) := ∫_{S^n} Ru^{p+1} dV
and
E(u) := ∫_{S^n} (|∇u|² + γ_n u²) dV.
Let
(u, v) = ∫_{S^n} (∇u·∇v + γ_n uv) dV
be the inner product in the Hilbert space H¹(S^n), and let
‖u‖ := √(E(u))
be the corresponding norm.
We seek critical points of J_p(u) under the constraint
H := {u ∈ H¹(S^n) | E(u) = E(1) = γ_n|S^n|, u ≥ 0},
where |S^n| is the volume of S^n.
One can easily see that a critical point of J_p in H, multiplied by a suitable constant, is a solution of (6.11).
We divide the rest of the section into two parts. First, we establish the key estimates (6.7) and (6.8). Then, we carry out the max-mini variational scheme.
6.2.1 Estimate the Values of the Functional
To construct a max-mini variational scheme, we first show that there is some kind of ‘mountain pass’ associated to each positive local maximum of R. Unlike the classical ones, these ‘mountain passes’ lie in some neighborhoods of the ‘critical points at infinity’ (see Propositions 6.2.1 and 6.2.2 below).
Choose a coordinate system in R^{n+1} so that the south pole of S^n is at the origin O and the center of the ball B^{n+1} is at (0, ⋯, 0, 1). As usual, we use | · | to denote the distance in R^{n+1}.
Define the center of mass of u as
q(u) := ( ∫_{S^n} x u^{τ+1}(x) dV ) / ( ∫_{S^n} u^{τ+1}(x) dV ). (6.12)
It is actually the center of mass of the sphere S^n with density u^{τ+1}(x) at the point x.
We recall some well-known facts in conformal geometry.
Let φ_q be the standard solution with its ‘center of mass’ at q ∈ B^{n+1}; that is, φ_q ∈ H and satisfies
−Δu + γ_n u = γ_n u^τ. (6.13)
We may also regard φ_q as depending on two parameters λ and q̃, where q̃ is the intersection of S^n with the ray passing through the center and the point q. When q̃ = O (the south pole of S^n), we can express, in the spherical polar coordinates x = (r, θ) of S^n (0 ≤ r ≤ π, θ ∈ S^{n−1}) centered at the south pole where r = 0,
φ_q = φ_{λ,q̃} = ( λ / (λ²cos²(r/2) + sin²(r/2)) )^{(n−2)/2}, (6.14)
with 0 < λ ≤ 1. As λ → 0, this family of functions blows up at the south pole, while it tends to zero elsewhere.
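The blow-up behavior can be read off from (6.14) directly (a two-line computation included here for illustration): as λ → 0,

```latex
\varphi_{\lambda,\tilde q}(0)=\lambda^{-\frac{n-2}{2}}\longrightarrow\infty,
\qquad
\varphi_{\lambda,\tilde q}(r)
  \le \Bigl(\frac{\lambda}{\sin^2\frac r2}\Bigr)^{\frac{n-2}{2}}
  \longrightarrow 0
  \quad\text{for each fixed } r \in (0,\pi].
```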
Correspondingly, there is a family of conformal transforms
T_q : H → H;  T_q u := u(h_λ(x))[det(dh_λ)]^{(n−2)/(2n)}, (6.15)
with
h_λ(r, θ) = (2 tan^{−1}(λ tan(r/2)), θ).
Here det(dh_λ) is the determinant of dh_λ, which can be expressed explicitly as
det(dh_λ) = ( λ / (cos²(r/2) + λ²sin²(r/2)) )^n.
It is well-known that this family of conformal transforms leaves the equation (6.13), the energy E(·), and the integral ∫_{S^n} u^{τ+1} dV invariant. More generally and more precisely, we have
Lemma 6.2.1 For any u, v ∈ H¹(S^n) and for any nonnegative integer k, the following hold:
(i) (T_q u, T_q v) = (u, v); (6.16)
(ii) ∫_{S^n} (T_q u)^k (T_q v)^{τ+1−k} dV = ∫_{S^n} u^k v^{τ+1−k} dV. (6.17)
In particular,
E(T_q u) = E(u) and ∫_{S^n} (T_q u)^{τ+1} dV = ∫_{S^n} u^{τ+1} dV.
Proof. (i) We will use a property of the conformal Laplacian to prove the invariance of the energy,
E(T_q u) = E(u);
the proof of the rest of (i) is similar.
On an n-dimensional Riemannian manifold (M, g),
L_g := Δ_g − c(n)R_g
is called the conformal Laplacian, where c(n) = (n−2)/(4(n−1)), and Δ_g and R_g are the Laplace–Beltrami operator and the scalar curvature associated with the metric g, respectively.
The conformal Laplacian has the following well-known invariance property under a conformal change of metrics. For ĝ = w^{4/(n−2)}g, w > 0, we have
L_ĝ u = w^{−(n+2)/(n−2)} L_g(uw), for all u ∈ C^∞(M). (6.18)
Interested readers may find its proof in [Bes].
If M is a compact manifold without boundary, then multiplying both sides of (6.18) by u and integrating by parts, one arrives at
∫_M {|∇_ĝ u|² + c(n)R_ĝ|u|²} dV_ĝ = ∫_M {|∇_g(uw)|² + c(n)R_g|uw|²} dV_g. (6.19)
In our situation, we choose g to be the Euclidean metric ḡ on R^n with ḡ_ij = δ_ij, and ĝ = g_o, the standard metric on S^n. Then it is well-known that
g_o = w^{4/(n−2)}(x) ḡ
with
w(x) = ( 4 / (4 + |x|²) )^{(n−2)/2}
satisfying the equation
−Δ̄w = γ_n w^{(n+2)/(n−2)}.
We also have
c(n)R_{g_o} = γ_n := n(n−2)/4
and
c(n)R_ḡ = c(n)·0 = 0.
Now formula (6.19) becomes
∫_{S^n} {|∇_o u|² + γ_n|u|²} dV_o = ∫_{R^n} |∇̄(uw)|² dx, (6.20)
where ∇_o and ∇̄ are the gradient operators associated with the metrics g_o and ḡ, respectively.
Again, let $\tilde q$ be the intersection of $S^n$ with the ray passing through the center of the sphere and the point $q$. Make a stereographic projection from $S^n$ to $R^n$ as shown in the figure below, where $\tilde q$ is placed at the origin of $R^n$. Under this projection, the point $(r, \theta) \in S^n$ becomes $x = (x_1, \cdots, x_n) \in R^n$, with
\[ \frac{|x|}{2} = \tan\frac{r}{2} \]
and
\[ (T_q u)(x) = u(\lambda x)\left(\frac{\lambda(4+|x|^2)}{4+\lambda^2|x|^2}\right)^{\frac{n-2}{2}}. \]
It follows from this and (6.20) that
\begin{eqnarray*}
E(T_q u) &=& \int_{R^n}\left\{\left|\nabla_o\!\left[u(\lambda x)\left(\frac{\lambda(4+|x|^2)}{4+\lambda^2|x|^2}\right)^{\frac{n-2}{2}}\right]\right|^2 + \gamma_n\left|u(\lambda x)\left(\frac{\lambda(4+|x|^2)}{4+\lambda^2|x|^2}\right)^{\frac{n-2}{2}}\right|^2\right\} dV_{g_o}\\
&=& \int_{R^n}\left|\bar\nabla\!\left[u(\lambda x)\left(\frac{\lambda(4+|x|^2)}{4+\lambda^2|x|^2}\right)^{\frac{n-2}{2}} w(x)\right]\right|^2 dx\\
&=& \int_{R^n}\left|\bar\nabla\!\left[u(\lambda x)\left(\frac{4\lambda}{4+\lambda^2|x|^2}\right)^{\frac{n-2}{2}}\right]\right|^2 dx\\
&=& \int_{R^n}\left|\bar\nabla\!\left[u(y)\left(\frac{4}{4+|y|^2}\right)^{\frac{n-2}{2}}\right]\right|^2 dy\\
&=& \int_{R^n}|\bar\nabla(u(y)w(y))|^2\,dy\\
&=& E(u).
\end{eqnarray*}
(ii) This can be derived immediately by making the change of variable $y = h_\lambda(x)$.
This completes the proof of the lemma. $\square$

One can also verify that
\[ T_q \phi_q = 1. \]
The relations between $q$ and $\lambda$, $\tilde q$, and $T_q$ for $\tilde q \neq O$ can be expressed in a similar way.
We now carry out the estimates near the south pole $(0, \theta)$, which we assume to be a positive local maximum. The estimates near other positive local maxima are similar. Our conditions on $R$ imply that
\[ R(r) = R(0) - ar^\alpha, \quad\text{for some } a > 0,\ n-2 < \alpha < n, \tag{6.21} \]
in a small neighborhood of $O$.
Define
\[ \Sigma = \Big\{u \in H \ \Big|\ |q(u)| \le \rho_o,\ \ \|v\| := \min_{t,q}\|u - t\phi_q\| \le \rho_o\Big\}. \tag{6.22} \]
Notice that the `centers of mass' of the functions in $\Sigma$ are near the south pole $O$. This is a neighborhood of the `critical points at infinity' corresponding to $O$. We will estimate the functional $J_p$ in this neighborhood.
We first notice that the supremum of $J_p$ in $\Sigma$ approaches $R(0)|S^n|$ as $p \to \tau$. More precisely, we have

Proposition 6.2.1 For any $\delta_1 > 0$, there is a $p_1 \le \tau$ such that for all $\tau \ge p \ge p_1$,
\[ \sup_\Sigma J_p(u) > R(0)|S^n| - \delta_1. \tag{6.23} \]
Then we show that on the boundary of $\Sigma$, $J_p$ stays strictly below, and bounded away from, $R(0)|S^n|$.

Proposition 6.2.2 There exist positive constants $\rho_o$, $p_o$, and $\delta_o$ such that for all $p \ge p_o$ and all $u \in \partial\Sigma$, there holds
\[ J_p(u) \le R(0)|S^n| - \delta_o. \tag{6.24} \]
Proof of Proposition 6.2.1.
Through a straightforward calculation, one can show that
\[ J_\tau(\phi_{\lambda,O}) \to R(0)|S^n|, \quad\text{as } \lambda \to 0. \tag{6.25} \]
For a given $\delta_1 > 0$, by (6.25) one can choose $\lambda_o$ such that $\phi_{\lambda_o,O} \in \Sigma$ and
\[ J_\tau(\phi_{\lambda_o,O}) > R(0)|S^n| - \frac{\delta_1}{2}. \tag{6.26} \]
It is easy to see that for the fixed function $\phi_{\lambda_o,O}$, $J_p$ is continuous with respect to $p$. Hence, by (6.26), there exists a $p_1$ such that for all $p \ge p_1$,
\[ J_p(\phi_{\lambda_o,O}) > R(0)|S^n| - \delta_1. \]
This completes the proof of Proposition 6.2.1. $\square$
The proof of Proposition 6.2.2 is rather complex. We first estimate $J_p$ for a family of standard functions in $\Sigma$.

Lemma 6.2.2 For $p$ sufficiently close to $\tau$, and for $\lambda$ and $|\tilde q|$ sufficiently small, we have
\[ J_p(\phi_{\lambda,\tilde q}) \le (R(0) - C_1|\tilde q|^\alpha)\,|S^n|\,(1 + o_p(1)) - C_1\lambda^{\alpha+\delta_p}, \tag{6.27} \]
where $\delta_p := \tau - p$ and $o_p(1) \to 0$ as $p \to \tau$.
Proof of Lemma 6.2.2. Let $\epsilon > 0$ be small such that (6.21) holds in $B_\epsilon(O) \subset S^n$. We write
\[ J_p(\phi_{\lambda,\tilde q}) = \int_{S^n} R(x)\,\phi^{p+1}_{\lambda,\tilde q}\,dV = \int_{B_\epsilon(O)} + \int_{S^n\setminus B_\epsilon(O)}. \tag{6.28} \]
From the definition (6.14) of $\phi_{\lambda,\tilde q}$ and the boundedness of $R$, we obtain the smallness of the second integral:
\[ \int_{S^n\setminus B_\epsilon(O)} R(x)\,\phi^{p+1}_{\lambda,\tilde q}\,dV \le C_2\,\lambda^{\,n-\delta_p\frac{n-2}{2}} \quad\text{for } \tilde q \in B_{\epsilon/2}(O). \tag{6.29} \]
To estimate the first integral, we work in a local coordinate system centered at $O$:
\begin{eqnarray*}
\int_{B_\epsilon(O)} R(x)\,\phi^{p+1}_{\lambda,\tilde q}\,dV &=& \int_{B_\epsilon(O)} R(x+\tilde q)\,\phi^{p+1}_{\lambda,O}\,dV\\
&\le& \int_{B_\epsilon(O)} \big[R(0) - a|x+\tilde q|^\alpha\big]\,\phi^{p+1}_{\lambda,O}\,dV\\
&\le& \int_{B_\epsilon(O)} \big[R(0) - C_3|x|^\alpha - C_3|\tilde q|^\alpha\big]\,\phi^{p+1}_{\lambda,O}\,dV\\
&\le& \big[R(0) - C_3|\tilde q|^\alpha\big]\,|S^n|\,[1 + o_p(1)] - C_4\,\lambda^{\alpha+\delta_p(n-2)/2}. \hspace{2em}(6.30)
\end{eqnarray*}
Here we have used the fact that $|x+\tilde q|^\alpha \ge c(|x|^\alpha + |\tilde q|^\alpha)$ for some $c > 0$ on one half of the ball $B_\epsilon(O)$, together with the symmetry of $\phi_{\lambda,O}$.
Noticing that $\alpha < n$ and $\delta_p \to 0$, we conclude that (6.28), (6.29), and (6.30) imply (6.27). This completes the proof of the Lemma. $\square$
To estimate $J_p$ for all $u \in \partial\Sigma$, we also need the following two lemmas, which describe some useful properties of the set $\Sigma$.

Lemma 6.2.3 (On the `center of mass')
(i) Let $q$, $\lambda$, and $\tilde q$ be defined by (6.14). Then for $q$ sufficiently small,
\[ |q|^2 \le C(|\tilde q|^2 + \lambda^4). \tag{6.31} \]
(ii) Let $\rho_o$, $q$, and $v$ be defined by (6.22). Then for $\rho_o$ sufficiently small,
\[ \rho_o \le |q| + C\|v\|. \tag{6.32} \]
Lemma 6.2.4 (Orthogonality)
Let $u \in \Sigma$ and let $v = u - t_o\phi_{q_o}$ be defined by (6.22). Let $T_{q_o}$ be the conformal transform such that
\[ T_{q_o}\phi_{q_o} = 1. \tag{6.33} \]
Then
\[ \int_{S^n} T_{q_o}v\,dV = 0 \tag{6.34} \]
and
\[ \int_{S^n} T_{q_o}v\ \psi_i(x)\,dV = 0, \quad i = 1, 2, \cdots, n, \tag{6.35} \]
where the $\psi_i$ are first spherical harmonic functions on $S^n$:
\[ -\Delta\psi_i = n\,\psi_i(x), \quad i = 1, 2, \cdots, n. \]
If we place the center of the sphere $S^n$ at the origin of $R^{n+1}$ (not the case here), then for $x = (x_1, \cdots, x_n)$,
\[ \psi_i(x) = x_i, \quad i = 1, 2, \cdots, n. \]
Proof of Lemma 6.2.3.
(i) From the definition $\phi_q = \phi_{\lambda,\tilde q}$ and (6.14), an elementary calculation yields
\[ |q - \tilde q| \sim \lambda^2, \quad\text{for } \lambda \text{ sufficiently small}. \tag{6.36} \]
Drawing a perpendicular line segment from $O$ to the segment $q\tilde q$, one sees that (6.31) is a direct consequence of the triangle inequality and (6.36).
(ii) For any $u = t\phi_q + v \in \partial\Sigma$, the boundary of $\Sigma$, by definition we have either $\|v\| = \rho_o$ or $|q(u)| = \rho_o$. If $\|v\| = \rho_o$, we are done. Hence we assume $|q(u)| = \rho_o$. It follows from the definition of the `center of mass' that
\[ \rho_o = \left|\frac{\int_{S^n} x\,(t\phi_q + v)^{\tau+1}}{\int_{S^n} (t\phi_q + v)^{\tau+1}}\right|. \tag{6.37} \]
Noticing that $\|v\| \le \rho_o$ is very small, we can expand the integrals in (6.37) in terms of $\|v\|$:
\begin{eqnarray*}
\rho_o &=& \left|\frac{t^{\tau+1}\int x\,\phi_q^{\tau+1} + (\tau+1)t^\tau\int x\,\phi_q^\tau v + o(\|v\|)}{t^{\tau+1}\int \phi_q^{\tau+1} + (\tau+1)t^\tau\int \phi_q^\tau v + o(\|v\|)}\right|\\
&\le& \left|\frac{\int x\,\phi_q^{\tau+1}}{\int \phi_q^{\tau+1}}\right| + C\|v\| \ \le\ |q| + C\|v\|.
\end{eqnarray*}
Here we have used the fact that $\|v\|$ is small and that $t$ is close to 1. This completes the proof of Lemma 6.2.3. $\square$
Proof of Lemma 6.2.4.
Write $L = -\Delta + \gamma_n$. We use the fact that $v = u - t_o\phi_{q_o}$ realizes the minimum of $E(u - t\phi_q)$ among all possible values of $t$ and $q$.
(i) By a variation with respect to $t$, we have
\[ 0 = (v, \phi_{q_o}) = \int_{S^n} v\,L\phi_{q_o}\,dV. \tag{6.38} \]
It follows from (6.13) that
\[ \int_{S^n} v\,\phi^\tau_{q_o}\,dV = 0. \]
Now apply the conformal transform $T_{q_o}$ to both $v$ and $\phi_{q_o}$ in the above integral. By the invariance property of the transform, we arrive at (6.34).
(ii) To prove (6.35), we make a variation with respect to $q$:
\begin{eqnarray*}
0 &=& \nabla_q \int v\,L\phi_q\,dV\Big|_{q=q_o} \ =\ \gamma_n\,\nabla_q \int v\,\phi_q^\tau\,dV\Big|_{q=q_o}\\
&=& \gamma_n\,\nabla_q \int T_{q_o}v\,(T_{q_o}\phi_q)^\tau\,dV\Big|_{q=q_o} \ =\ \gamma_n\,\nabla_q \int T_{q_o}v\,(\phi_{q-q_o})^\tau\,dV\Big|_{q=q_o}\\
&=& \gamma_n\,\tau \int T_{q_o}v\,\phi^{\tau-1}_{q-q_o}\,\nabla_q\phi_{q-q_o}\Big|_{q=q_o} \ =\ \gamma_n\,\tau \int T_{q_o}v\,(\psi_1, \cdots, \psi_n).
\end{eqnarray*}
This completes the proof of the Lemma. $\square$
Proof of Proposition 6.2.2.
Make a perturbation of $R(x)$:
\[ \bar R(x) = \begin{cases} R(x), & x \in B_{2\rho_o}(O),\\ m, & x \in S^n \setminus B_{2\rho_o}(O), \end{cases} \tag{6.39} \]
where $m = R\big|_{\partial B_{2\rho_o}(O)}$.
Define
\[ \bar J_p(u) = \int_{S^n} \bar R(x)\,u^{p+1}\,dV. \]
The estimate is divided into two steps. In Step 1, we show that the difference between $J_p(u)$ and $\bar J_\tau(u)$ is very small. In Step 2, we estimate $\bar J_\tau(u)$.

Step 1. First we show that
\[ \bar J_p(u) \le \bar J_\tau(u)\,(1 + o_p(1)), \tag{6.40} \]
where $o_p(1) \to 0$ as $p \to \tau$.
In fact, by the Hölder inequality,
\begin{eqnarray*}
\int \bar R\,u^{p+1} &\le& \left(\int \bar R^{\frac{\tau+1}{p+1}}\,u^{\tau+1}\,dV\right)^{\frac{p+1}{\tau+1}} |S^n|^{\frac{\tau-p}{\tau+1}}\\
&\le& \left(\int \bar R\,u^{\tau+1}\,dV\right)^{\frac{p+1}{\tau+1}} \big[\,R(0)|S^n|\,\big]^{\frac{\tau-p}{\tau+1}}.
\end{eqnarray*}
This implies (6.40).
Next we estimate the difference between $J_p(u)$ and $\bar J_p(u)$:
\begin{eqnarray*}
|J_p(u) - \bar J_p(u)| &\le& C_1\int_{S^n\setminus B_{2\rho_o}(O)} u^{p+1}\\
&\le& C_2\int_{S^n\setminus B_{2\rho_o}(O)} (t\phi_q)^{p+1} + C_2\int_{S^n\setminus B_{2\rho_o}(O)} v^{p+1}\\
&\le& C_3\,\lambda^{\,n-\delta_p\frac{n-2}{2}} + C_3\|v\|^{p+1}. \hspace{2em}(6.41)
\end{eqnarray*}
Step 2. By virtue of (6.40) and (6.41), we now only need to estimate $\bar J_\tau(u)$. For any $u \in \partial\Sigma$, write $u = v + t\phi_q$. (6.38) implies that $v$ and $\phi_q$ are orthogonal with respect to the inner product associated with $E(\cdot)$, that is,
\[ \int_{S^n} (\nabla v\cdot\nabla\phi_q + \gamma_n\,v\phi_q)\,dV = 0. \]
Noticing that $\|u\|^2 := E(u)$, we have
\[ \|u\|^2 = \|v\|^2 + t^2\|\phi_q\|^2. \]
Using the fact that both $u$ and $\phi_q$ belong to $H$, with
\[ \|u\|^2 = \gamma_n|S^n| = \|\phi_q\|^2, \]
we derive
\[ t^2 = 1 - \frac{\|v\|^2}{\gamma_n|S^n|}. \tag{6.42} \]
It follows that
\begin{eqnarray*}
\int_{S^n} \bar R(x)u^{\tau+1} &\le& t^{\tau+1}\int_{S^n} \bar R(x)\phi_q^{\tau+1} + (\tau+1)\int_{S^n} \bar R(x)\phi_q^\tau v + \frac{\tau(\tau+1)}{2}\int_{S^n} \bar R(x)\phi_q^{\tau-1}v^2 + o(\|v\|^2)\\
&=& I_1 + (\tau+1)I_2 + \frac{\tau(\tau+1)}{2}\,I_3 + o(\|v\|^2). \hspace{2em}(6.43)
\end{eqnarray*}
To estimate $I_1$, we use (6.42) and (6.27):
\[ I_1 \le \left(1 - \frac{\tau+1}{2}\,\frac{\|v\|^2}{\gamma_n|S^n|}\right) R(0)|S^n|\,(1 - c_1|\tilde q|^\alpha - c_1\lambda^\alpha) + O(\lambda^n) + o(\|v\|^2), \tag{6.44} \]
for some positive constant $c_1$.
To estimate $I_2$, we use the $H^1$ orthogonality between $v$ and $\phi_q$ (see (6.38)):
\begin{eqnarray*}
I_2 &=& \int_{S^n} (\bar R(x) - m)\,\phi_q^\tau v\,dV \ =\ \int_{B_{2\rho_o}(O)} (\bar R(x) - m)\,\phi_q^\tau v\,dV\\
&=& \frac{1}{\gamma_n}\int_{B_{2\rho_o}(O)} (\bar R(x) - m)\,L\phi_q\,v\,dV\\
&\le& C\rho_o^\alpha\left|\int_{B_{2\rho_o}(O)} L\phi_q\,v\,dV\right| \ =\ C\rho_o^\alpha\,|(\phi_q, v)|\\
&\le& C\rho_o^\alpha\,\|\phi_q\|\,\|v\| \ \le\ C_4\,\rho_o^\alpha\,\|v\|. \hspace{2em}(6.45)
\end{eqnarray*}
To estimate $I_3$, we use (6.34) and (6.35). It is well known that the first and second eigenspaces of $-\Delta$ on $S^n$ are spanned by the constants and by the $\psi_i$, corresponding to the eigenvalues $0$ and $n$ respectively. Now (6.34) and (6.35) imply that $T_q v$ is orthogonal to these two eigenspaces, and hence
\[ \int_{S^n} |\nabla(T_q v)|^2\,dV \ge \lambda_2\int_{S^n} |T_q v|^2\,dV, \]
where $\lambda_2 = n + c_2$, for some positive number $c_2$, is the second nonzero eigenvalue of $-\Delta$. Consequently,
\[ \|v\|^2 = \|T_q v\|^2 \ge (\gamma_n + n + c_2)\int_{S^n} (T_q v)^2. \]
It follows that
\begin{eqnarray*}
I_3 &\le& R(0)\int_{S^n} \phi_q^{\tau-1}v^2\,dV \ =\ R(0)\int_{S^n} (T_q\phi_q)^{\tau-1}(T_q v)^2\,dV\\
&=& R(0)\int_{S^n} (T_q v)^2 \ \le\ \frac{R(0)}{\gamma_n + n + c_2}\,\|v\|^2. \hspace{2em}(6.46)
\end{eqnarray*}
Now (6.44), (6.45), and (6.46) imply that there is a $\beta > 0$ such that
\[ \bar J_\tau(u) \le R(0)|S^n|\,\big[1 - \beta(|\tilde q|^\alpha + \lambda^\alpha + \|v\|^2)\big]. \tag{6.47} \]
Here we have used the fact that $\|v\|$ is very small and that $\rho_o^\alpha\|v\|$ can be controlled by $|\tilde q|^\alpha + \lambda^\alpha$ (see Lemma 6.2.2). Notice that the positive constant $c_2$ in (6.46) has played a key role: without it, the coefficient of $\|v\|^2$ in (6.47) would be $0$, due to the fact that $\gamma_n = \frac{n(n-2)}{4}$, which gives exactly $\tau\gamma_n = \gamma_n + n$.
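The exact cancellation just mentioned is a one-line algebraic identity; the following Python/sympy sketch confirms it symbolically (our notation matches the text: $\gamma_n = \frac{n(n-2)}{4}$, $\tau = \frac{n+2}{n-2}$):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
gamma_n = n*(n - 2)/4        # gamma_n = n(n-2)/4
tau = (n + 2)/(n - 2)        # critical exponent tau = (n+2)/(n-2)

# tau*gamma_n = gamma_n + n, hence tau/(gamma_n + n) = 1/gamma_n:
# without the extra c_2 from (6.46), the ||v||^2 contributions of
# I_1 and I_3 in (6.43) cancel exactly.
identity = sp.simplify(tau*gamma_n - (gamma_n + n))
assert identity == 0
```

This is why the strict gap $c_2$ between the second and third eigenvalues of $-\Delta$ is essential for obtaining a negative $\|v\|^2$ coefficient in (6.47).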
For any $u \in \partial\Sigma$, we have either $\|v\| = \rho_o$ or $|q(u)| = \rho_o$. By (6.31) and (6.47), in either case there exists a $\delta_o > 0$ such that
\[ \bar J_\tau(u) \le R(0)|S^n| - 2\delta_o. \tag{6.48} \]
Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48). This completes the proof of the Proposition. $\square$
6.2.2 The Variational Scheme

In this part, we construct variational schemes to show the existence of a solution to equation (6.11) for each $p < \tau$.

Case (i): $R$ has only one positive local maximum.
In this case, condition (6.4) implies that $R$ must have local minima at both poles. Similarly to the ideas in [C], we seek a solution by maximizing the functional $J_p$ over a class of rotationally symmetric functions:
\[ H_r = \{u \in H \mid u = u(r)\}. \tag{6.49} \]
Obviously, $J_p$ is bounded from above in $H_r$, and it is well known that the variational scheme is compact for each $p < \tau$ (see the argument in Chapter 2). Therefore any maximizing sequence possesses a convergent subsequence in $H_r$, and the limiting function, multiplied by a suitable constant, is a solution of (6.11).
Case (ii): $R$ has at least two positive local maxima.
Let $r_1$ and $r_2$ be the two smallest positive local maxima of $R$. By Propositions 6.2.1 and 6.2.2, there exist two disjoint open sets $\Sigma_1, \Sigma_2 \subset H$, functions $\psi_i \in \Sigma_i$, $p_o < \tau$, and $\delta > 0$ such that for all $p \ge p_o$,
\[ J_p(\psi_i) > R(r_i)|S^n| - \frac{\delta}{2}, \quad i = 1, 2; \tag{6.50} \]
and
\[ J_p(u) \le R(r_i)|S^n| - \delta, \quad \forall u \in \partial\Sigma_i,\ i = 1, 2. \tag{6.51} \]
Let $\gamma$ be a path in $H$ joining $\psi_1$ and $\psi_2$, and let $\Gamma$ be the family of all such paths. Define
\[ c_p = \sup_{\gamma\in\Gamma}\,\min_{u\in\gamma} J_p(u). \tag{6.52} \]
For each fixed $p < \tau$, by the well-known compactness, there exists a critical (saddle) point $u_p$ of $J_p$ with
\[ J_p(u_p) = c_p. \]
Moreover, due to (6.51) and the definition of $c_p$, we have
\[ J_p(u_p) \le \min_{i=1,2} R(r_i)|S^n| - \delta. \tag{6.53} \]
One can easily see that a critical point of $J_p$ in $H$, multiplied by a constant, is a solution of (6.11), and that for all $p$ close to $\tau$ these constant multiples are bounded from above and bounded away from $0$.
6.3 The A Priori Estimates

In the previous section, we showed the existence of a positive solution $u_p$ to the subcritical equation (6.11) for each $p < \tau$. In this section, we prove that as $p \to \tau$, there is a subsequence of $\{u_p\}$ which converges to a solution $u_o$ of (6.1). The convergence is based on the following a priori estimate.
Theorem 6.3.1 Assume that $R$ satisfies condition (6.5) in Theorem 6.1.1. Then there exists $p_o < \tau$ such that for all $p_o < p < \tau$, the solutions of (6.11) obtained by the variational scheme are uniformly bounded.

To prove the Theorem, we estimate the solutions in three regions: where $R < 0$, where $R$ is close to $0$, and where $R > 0$.
6.3.1 In the Region Where R < 0

The a priori bound for the solutions is stated in the following proposition, which is a direct consequence of Proposition 6.2.2 in our paper [CL6].

Proposition 6.3.1 For all $1 < p \le \tau$, the solutions of (6.11) are uniformly bounded in the regions where $R \le -\delta$, for every $\delta > 0$. The bound depends only on $\delta$, $\mathrm{dist}(\{x \mid R(x) \le -\delta\}, \{x \mid R(x) = 0\})$, and the lower bound of $R$.
6.3.2 In the Region Where R is Small

In [CL6], we also obtained estimates in this region by the method of moving planes. However, in that paper we assumed that $R$ is bounded away from zero. In [CL7], for rotationally symmetric functions $R$ on $S^3$, we removed this condition by using a blow-up analysis near $R(x) = 0$. That method may be applied to higher dimensions with a few modifications. Within the variational framework of the present paper, the a priori estimate is simpler; it rests mainly on the energy bound.
Proposition 6.3.2 Let $\{u_p\}$ be the solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exist $p_o < \tau$ and $\delta > 0$ such that for all $p_o < p \le \tau$, the $\{u_p\}$ are uniformly bounded in the regions where $|R(x)| \le \delta$.
Proof. It is easy to see that the energies $E(u_p)$ are bounded for all $p$. Suppose now that the conclusion of the Proposition is violated. Then there exist a subsequence $\{u_i\}$ with $u_i = u_{p_i}$, $p_i \to \tau$, and a sequence of points $\{x_i\}$ with $x_i \to x_o$ and $R(x_o) = 0$, such that
\[ u_i(x_i) \to \infty. \]
We will use a rescaling argument to derive a contradiction. Since $x_i$ may not be a local maximum of $u_i$, we need to choose a point near $x_i$ which is almost a local maximum. To this end, let $K$ be any large number and let
\[ r_i = 2K\,[u_i(x_i)]^{-\frac{p_i-1}{2}}. \]
In a small neighborhood of $x_o$, choose a local coordinate system and let
\[ s_i(x) = u_i(x)\,(r_i - |x - x_i|)^{\frac{2}{p_i-1}}. \]
Let $a_i$ be a maximum point of $s_i(x)$ in $B_{r_i}(x_i)$, and let $\lambda_i = [u_i(a_i)]^{-\frac{p_i-1}{2}}$. Then from the definition of $a_i$, one can easily verify that the ball $B_{\lambda_i K}(a_i)$ lies in the interior of $B_{r_i}(x_i)$, and that the values of $u_i(x)$ in $B_{\lambda_i K}(a_i)$ are bounded above by a constant multiple of $u_i(a_i)$.
To see the first part, we use the fact that $s_i(a_i) \ge s_i(x_i)$. From this, we derive
\[ u_i(a_i)\,(r_i - |a_i - x_i|)^{\frac{2}{p_i-1}} \ge u_i(x_i)\,r_i^{\frac{2}{p_i-1}} = (2K)^{\frac{2}{p_i-1}}. \]
It follows that
\[ \frac{r_i - |a_i - x_i|}{2} \ge \lambda_i K, \tag{6.54} \]
hence
\[ B_{\lambda_i K}(a_i) \subset B_{r_i}(x_i). \]
To see the second part, we use $s_i(a_i) \ge s_i(x)$ and derive from (6.54) that
\[ u_i(x) \le u_i(a_i)\left(\frac{r_i - |a_i - x_i|}{r_i - |x - x_i|}\right)^{\frac{2}{p_i-1}} \le u_i(a_i)\,2^{\frac{2}{p_i-1}}, \quad \forall x \in B_{\lambda_i K}(a_i). \]
Now we can rescale:
\[ v_i(x) = \frac{1}{u_i(a_i)}\,u_i(\lambda_i x + a_i). \]
Obviously $v_i$ is bounded in $B_K(0)$, and it follows by a standard argument that $\{v_i\}$ converges to a harmonic function $v_o$ in $B_K(0) \subset R^n$ with $v_o(0) = 1$. Consequently, for $i$ sufficiently large,
\[ \int_{B_K(0)} v_i(x)^{\tau+1}\,dx \ge cK^n, \tag{6.55} \]
for some $c > 0$.
for some c > 0.
On the other hand, the boundedness of the energy E(u
i
) implies that
S
n
u
τ+1
i
dV ≤ C, (6.56)
for some constant C. By a straight forward calculation, one can verify that,
for any K > 0,
S
n
u
τ+1
i
dV ≥
B
K
(0)
v
τ+1
i
dx. (6.57)
Obviously, (6.56) and (6.57) contradict with (6.55). This completes the
proof of the Proposition. ¯ .
6.3.3 In the Region Where R > 0

Proposition 6.3.3 Let $\{u_p\}$ be the solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exists $p_o < \tau$ such that for all $p_o < p \le \tau$ and for any $\epsilon > 0$, the $\{u_p\}$ are uniformly bounded in the regions where $R(x) \ge \epsilon$.

Proof. The proof is divided into three steps. In Step 1, we argue that the solutions can only blow up at finitely many points, because of the energy bound. In Step 2, we show that there can be at most one blow-up point, and that this point must be a local maximum of $R$; this is done mainly by means of a Pohozaev type identity. In Step 3, we use (6.53) to conclude that even one-point blow-up is impossible.
Step 1. The argument is standard. Let $\{x_i\}$ be a sequence of points such that $u_i(x_i) \to \infty$ and $x_i \to x_o$ with $R(x_o) > 0$. Let $r_i$, $s_i(x)$, and $v_i(x)$ be defined as in Part II. Then, similarly to the argument in Part II, we see that $\{v_i(x)\}$ converges to a standard function $v_o(x)$ in $R^n$ with
\[ -\Delta v_o = R(x_o)\,v_o^\tau. \]
It follows that
\[ \int_{B_{r_i}(x_o)} u_i^{\tau+1}\,dV \ge c_o > 0. \]
Because the total energy of the $u_i$ is bounded, there can be only finitely many such $x_o$. Therefore, $\{u_i\}$ can only blow up at finitely many points.

Step 2. We have shown that $\{u_i\}$ has only isolated blow-up points. As a consequence of a result in [Li2] (see also [CL6] or [SZ]), we have

Lemma 6.3.1 Let $R$ satisfy the `flatness condition' in Theorem 1. Then $\{u_i\}$ can have at most one simple blow-up point, and this point must be a local maximum of $R$. Moreover, $\{u_i\}$ behaves almost like a family of the standard functions $\phi_{\lambda_i,\tilde q}$, with $\lambda_i = (\max u_i)^{-\frac{2}{n-2}}$ and with $\tilde q$ at the maximum of $R$.
Step 3. Finally, we show that even one-point blow-up is impossible. For convenience, let $\{u_i\}$ be the sequence of critical points of $J_{p_i}$ obtained in Section 2. From the proof of Proposition 6.2.2, one obtains
\[ J_\tau(u_i) \le \min_k R(r_k)|S^n| - \delta, \tag{6.58} \]
for all positive local maxima $r_k$ of $R$, because each $u_i$ is the minimum of $J_{p_i}$ along a path. Now if $\{u_i\}$ were to blow up at $x_o$, then by Lemma 6.3.1 we would have
\[ J_\tau(u_i) \to R(x_o)|S^n|, \]
contradicting (6.58).
Therefore, the sequence $\{u_i\}$ is uniformly bounded and possesses a subsequence converging to a solution of (6.1). This completes the proof of Theorem 6.1.1. $\square$
7
Maximum Principles
7.1 Introduction
7.2 Weak Maximum Principle
7.3 The Hopf Lemma and Strong Maximum Principle
7.4 Maximum Principle Based on Comparison
7.5 A Maximum Principle for Integral Equations
7.1 Introduction

As a simple example of a maximum principle, let us consider a $C^2$ function $u(x)$ of one independent variable $x$. It is well known from calculus that at a local maximum point $x_o$ of $u$, we must have
\[ u''(x_o) \le 0. \]
Based on this observation, we have the following simplest version of the maximum principle:

Assume that $u''(x) > 0$ in an open interval $(a, b)$; then $u$ cannot have any interior maximum in that interval.

One can also see this geometrically: since under the condition $u''(x) > 0$ the graph is concave up, it cannot have any local maximum.
More generally, for a $C^2$ function $u(x)$ of $n$ independent variables $x = (x_1, \cdots, x_n)$, at a local maximum $x_o$ we have
\[ D^2u(x_o) := (u_{x_ix_j}(x_o)) \le 0, \]
that is, the symmetric Hessian matrix is nonpositive definite at the point $x_o$. Correspondingly, the simplest version of the maximum principle reads:

If
\[ (a_{ij}(x)) \ge 0 \quad\text{and}\quad \sum_{ij} a_{ij}(x)\,u_{x_ix_j}(x) > 0 \tag{7.1} \]
in an open bounded domain $\Omega$, then $u$ cannot achieve its maximum in the interior of the domain.

An interesting special case is when $(a_{ij}(x))$ is the identity matrix, in which case condition (7.1) becomes
\[ \Delta u > 0. \]
Unlike its one-dimensional counterpart, condition (7.1) no longer implies that the graph of $u(x)$ is concave up. A simple counterexample is
\[ u(x_1, x_2) = x_1^2 - \frac{1}{2}x_2^2. \]
One can easily see from the graph of this function that $(0, 0)$ is a saddle point.
In this case, the validity of the maximum principle comes from a simple algebraic fact: for any two symmetric $n \times n$ matrices $A$ and $B$, if $A \ge 0$ and $B \le 0$, then $\mathrm{tr}(AB) \le 0$; applied with $A = (a_{ij}(x))$ and $B = D^2u(x_o)$, this contradicts (7.1) at an interior maximum.
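The counterexample above is easy to verify concretely. The following Python sketch (using numpy; our own illustration, not part of the original argument) checks that $u(x_1,x_2) = x_1^2 - \frac{1}{2}x_2^2$ has positive Laplacian everywhere while its Hessian is indefinite:

```python
import numpy as np

# u(x1,x2) = x1^2 - x2^2/2:  Laplacian(u) = 2 - 1 = 1 > 0 everywhere,
# yet the (constant) Hessian has eigenvalues of both signs, so the
# origin is a saddle point, not a maximum.
hessian = np.array([[2.0, 0.0],
                    [0.0, -1.0]])
laplacian = np.trace(hessian)
eigenvalues = np.linalg.eigvalsh(hessian)   # returned in ascending order

assert laplacian > 0
assert eigenvalues[0] < 0 < eigenvalues[1]  # indefinite: saddle point
```

So positivity of $\sum a_{ij}u_{x_ix_j}$ rules out interior maxima without forcing any convexity of the graph.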
In this chapter, we will introduce various maximum principles; most of them will be used in the method of moving planes in the next chapter. Besides this, there are numerous other applications, some of which we list below.
i) Providing Estimates

Consider the boundary value problem
\[ \begin{cases} -\Delta u = f(x), & x \in B_1(0) \subset R^n,\\ u(x) = 0, & x \in \partial B_1(0). \end{cases} \tag{7.2} \]
If $a \le f(x) \le b$ in $B_1(0)$, then we can compare the solution $u$ with the two functions
\[ \frac{a}{2n}(1 - |x|^2) \quad\text{and}\quad \frac{b}{2n}(1 - |x|^2), \]
which satisfy the equation with $f(x)$ replaced by $a$ and $b$ respectively, and which vanish on the boundary as $u$ does. Now, applying the maximum principle for the operator $-\Delta$ (see Theorem 7.1.1 below), we obtain
\[ \frac{a}{2n}(1 - |x|^2) \le u(x) \le \frac{b}{2n}(1 - |x|^2). \]
ii) Proving Uniqueness of Solutions

In the above example, if $f(x) \equiv 0$, then we can choose $a = b = 0$, and this implies that $u \equiv 0$. In other words, the solution of the boundary value problem (7.2) is unique.

iii) Establishing the Existence of Solutions

(a) For a linear equation such as (7.2) in any bounded open domain $\Omega$, let
\[ u(x) = \sup \phi(x), \]
where the supremum is taken over all functions $\phi$ satisfying the corresponding differential inequality
\[ \begin{cases} -\Delta\phi \le f(x), & x \in \Omega,\\ \phi(x) = 0, & x \in \partial\Omega. \end{cases} \]
Then $u$ is a solution of (7.2).
(b) Now consider the nonlinear problem
\[ \begin{cases} -\Delta u = f(u), & x \in \Omega,\\ u(x) = 0, & x \in \partial\Omega. \end{cases} \tag{7.3} \]
Assume that $f(\cdot)$ is a smooth function with $f'(\cdot) \ge 0$. Suppose that there exist two functions $\underline{u}(x) \le \bar u(x)$ such that
\[ -\Delta\underline{u} \le f(\underline{u}) \le f(\bar u) \le -\Delta\bar u. \]
These two functions are called sub (or lower) and super (or upper) solutions, respectively.
To find a solution of problem (7.3), we use successive approximations. Let
\[ -\Delta u_1 = f(\underline{u}) \quad\text{and}\quad -\Delta u_{i+1} = f(u_i). \]
Then, by the maximum principle, we have
\[ \underline{u} \le u_1 \le u_2 \le \cdots \le u_i \le \cdots \le \bar u. \]
Let $u$ be the limit of the sequence $\{u_i\}$:
\[ u(x) = \lim_{i\to\infty} u_i(x); \]
then $u$ is a solution of problem (7.3).
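The monotone iteration just described is easy to run numerically in one dimension. The sketch below (Python/numpy; the nonlinearity $f(u) = 1 + \tanh u$ and the grid are our own illustrative choices, not from the text) discretizes $-u'' = f(u)$ on $(0,1)$ with zero boundary values, starts from the subsolution $\underline{u} \equiv 0$, and checks that the iterates increase monotonically:

```python
import numpy as np

# Monotone iteration for -u'' = f(u) on (0,1), u(0) = u(1) = 0.
# f(u) = 1 + tanh(u) is nondecreasing; u = 0 is a subsolution since
# -0'' = 0 <= f(0) = 1, and u_bar = x(1-x) is a supersolution since
# -u_bar'' = 2 >= f(u_bar) (f < 2 everywhere).
f = lambda u: 1.0 + np.tanh(u)

N = 100
h = 1.0/N
x = np.linspace(0.0, 1.0, N + 1)

# Tridiagonal matrix for -u'' with homogeneous Dirichlet conditions.
A = (np.diag(2.0*np.ones(N - 1)) - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1))/h**2

u = np.zeros(N + 1)                     # start from the subsolution
for _ in range(50):
    u_new = np.zeros(N + 1)
    u_new[1:-1] = np.linalg.solve(A, f(u[1:-1]))
    assert np.all(u_new >= u - 1e-12)   # iterates increase monotonically
    u = u_new
```

After the loop, `u` is (up to discretization error) a fixed point of the iteration, squeezed between the sub- and supersolutions exactly as the maximum principle predicts.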
In Section 7.2, we introduce and prove the weak maximum principles.

Theorem 7.1.1 (Weak Maximum Principle for $-\Delta$)
i) If
\[ -\Delta u(x) \ge 0, \quad x \in \Omega, \]
then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \]
ii) If
\[ -\Delta u(x) \le 0, \quad x \in \Omega, \]
then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \]
This result can be extended to general uniformly elliptic operators. Let
\[ D_i = \frac{\partial}{\partial x_i}, \quad D_{ij} = \frac{\partial^2}{\partial x_i\partial x_j}. \]
Define
\[ L = -\sum_{ij} a_{ij}(x)D_{ij} + \sum_i b_i(x)D_i + c(x). \]
Here we always assume that $a_{ij}(x)$, $b_i(x)$, and $c(x)$ are bounded continuous functions in $\bar\Omega$. We say that $L$ is uniformly elliptic if
\[ a_{ij}(x)\xi_i\xi_j \ge \delta|\xi|^2 \]
for any $x \in \Omega$, any $\xi \in R^n$, and some $\delta > 0$.
Theorem 7.1.2 (Weak Maximum Principle for $L$) Let $L$ be the uniformly elliptic operator defined above. Assume that $c(x) \equiv 0$.
i) If $Lu \ge 0$ in $\Omega$, then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \]
ii) If $Lu \le 0$ in $\Omega$, then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \]
These weak maximum principles assert that the minimum or maximum of $u$ is attained at some point on the boundary $\partial\Omega$. However, they do not exclude the possibility that the minimum or maximum may also occur in the interior of $\Omega$. Actually, this cannot happen unless $u$ is constant, as we will see in the following.
Theorem 7.1.3 (Strong Maximum Principle for $L$ with $c(x) \equiv 0$) Assume that $\Omega$ is an open, bounded, and connected domain in $R^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that $c(x) \equiv 0$ in $\Omega$.
i) If
\[ Lu(x) \ge 0, \quad x \in \Omega, \]
then $u$ attains its minimum value only on $\partial\Omega$, unless $u$ is constant.
ii) If
\[ Lu(x) \le 0, \quad x \in \Omega, \]
then $u$ attains its maximum value only on $\partial\Omega$, unless $u$ is constant.
This maximum principle (as well as the weak one) can also be applied to the case $c(x) \ge 0$, with slight modifications.
Theorem 7.1.4 (Strong Maximum Principle for $L$ with $c(x) \ge 0$) Assume that $\Omega$ is an open, bounded, and connected domain in $R^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that $c(x) \ge 0$ in $\Omega$.
i) If
\[ Lu(x) \ge 0, \quad x \in \Omega, \]
then $u$ cannot attain its nonpositive minimum in the interior of $\Omega$ unless $u$ is constant.
ii) If
\[ Lu(x) \le 0, \quad x \in \Omega, \]
then $u$ cannot attain its nonnegative maximum in the interior of $\Omega$ unless $u$ is constant.
We will prove these theorems in Section 7.3 by using the Hopf Lemma.
Notice that in the previous theorems we always require $c(x) \ge 0$. Roughly speaking, maximum principles hold for `positive' operators. $-\Delta$ is `positive', and so is $-\Delta + c(x)$ if $c(x) \ge 0$. However, as we will see in the next chapter, in practical problems it frequently happens that the condition $c(x) \ge 0$ cannot be met. Do we really need $c(x) \ge 0$? The answer is `no'. Actually, if $c(x)$ is not `too negative', then the operator $-\Delta + c(x)$ can still remain `positive', so as to ensure the maximum principle. This will be studied in Section 7.4, where we prove the `Maximum Principles Based on Comparisons'.
Let $\phi$ be a positive function on $\bar\Omega$ satisfying
\[ -\Delta\phi + \lambda(x)\phi \ge 0. \tag{7.4} \]
Let $u$ be a function such that
\[ \begin{cases} -\Delta u + c(x)u \ge 0, & x \in \Omega,\\ u \ge 0, & \text{on } \partial\Omega. \end{cases} \tag{7.5} \]

Theorem 7.1.5 (Maximum Principle Based on Comparison)
Assume that $\Omega$ is a bounded domain. If
\[ c(x) > \lambda(x), \quad \forall x \in \Omega, \]
then $u \ge 0$ in $\Omega$.
Also in Section 7.4, as consequences of Theorem 7.1.5, we derive the `Narrow Region Principle' and the `Decay at Infinity Principle'. These principles can be applied very conveniently in the `Method of Moving Planes' to establish the symmetry of solutions for semilinear elliptic equations, as we will see in later sections.
In Section 7.5, we establish a maximum principle for integral inequalities.
7.2 Weak Maximum Principles

In this section, we prove the weak maximum principles.

Theorem 7.2.1 (Weak Maximum Principle for $-\Delta$)
i) If
\[ -\Delta u(x) \ge 0, \quad x \in \Omega, \tag{7.6} \]
then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \tag{7.7} \]
ii) If
\[ -\Delta u(x) \le 0, \quad x \in \Omega, \tag{7.8} \]
then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \tag{7.9} \]
Proof. Here we present only the proof of part i); an entirely similar proof works for part ii).
To better illustrate the idea, we deal with the one-dimensional case and the higher-dimensional case separately.
First, let $\Omega$ be the interval $(a, b)$. Then condition (7.6) becomes $u''(x) \le 0$. This implies that the graph of $u(x)$ on $(a, b)$ is concave downward, and therefore one can roughly see that the values of $u(x)$ in $(a, b)$ are greater than or equal to the minimum value of $u$ at the endpoints (see Figure 2).

Figure 2
To prove the above observation rigorously, we first carry it out under the stronger assumption that
\[ -u''(x) > 0. \tag{7.10} \]
Let $m = \min_{\partial\Omega} u$. Suppose, contrary to (7.7), that there is an interior minimum point $x_o \in (a, b)$ of $u$ such that $u(x_o) < m$. Then, by the second-order Taylor expansion of $u$ around $x_o$, we must have $-u''(x_o) \le 0$. This contradicts assumption (7.10).
Now, for $u$ satisfying only the weaker condition (7.6), we consider a perturbation of $u$:
\[ u_\epsilon(x) = u(x) - \epsilon x^2. \]
Obviously, for each $\epsilon > 0$, $u_\epsilon(x)$ satisfies the stronger condition (7.10), and hence
\[ \min_{\bar\Omega} u_\epsilon \ge \min_{\partial\Omega} u_\epsilon. \]
Now, letting $\epsilon \to 0$, we arrive at (7.7).
To prove the theorem in dimensions higher than one, we need the following lemma.
Lemma 7.2.1 (Mean Value Inequality) Let $x_o$ be a point in $\Omega$, let $B_r(x_o) \subset \Omega$ be the ball of radius $r$ centered at $x_o$, and let $\partial B_r(x_o)$ be its boundary.
i) If $-\Delta u(x) > (=)\ 0$ for $x \in B_{r_o}(x_o)$ with some $r_o > 0$, then for any $r_o > r > 0$,
\[ u(x_o) > (=)\ \frac{1}{|\partial B_r(x_o)|}\int_{\partial B_r(x_o)} u(x)\,dS. \tag{7.11} \]
It follows that, if $x_o$ is a minimum of $u$ in $\Omega$, then
\[ -\Delta u(x_o) \le 0. \tag{7.12} \]
ii) If $-\Delta u(x) < 0$ for $x \in B_{r_o}(x_o)$ with some $r_o > 0$, then for any $r_o > r > 0$,
\[ u(x_o) < \frac{1}{|\partial B_r(x_o)|}\int_{\partial B_r(x_o)} u(x)\,dS. \tag{7.13} \]
It follows that, if $x_o$ is a maximum of $u$ in $\Omega$, then
\[ -\Delta u(x_o) \ge 0. \tag{7.14} \]
We postpone the proof of the Lemma for a moment. The Lemma tells us that, if $-\Delta u(x) > 0$, then the value of $u$ at the center of the small ball $B_r(x_o)$ is larger than its average value on the boundary $\partial B_r(x_o)$; roughly speaking, the graph of $u$ is locally somewhat concave downward. Now, based on this Lemma, to prove the theorem we first consider $u_\epsilon(x) = u(x) - \epsilon|x|^2$. Obviously,
\[ -\Delta u_\epsilon = -\Delta u + 2n\epsilon > 0. \tag{7.15} \]
Hence we must have
\[ \min_{\bar\Omega} u_\epsilon \ge \min_{\partial\Omega} u_\epsilon. \tag{7.16} \]
Otherwise, if there existed an interior minimum point $x_o$ of $u_\epsilon$ in $\Omega$, then by Lemma 7.2.1 we would have $-\Delta u_\epsilon(x_o) \le 0$, contradicting (7.15). Now, letting $\epsilon \to 0$ in (7.16), we arrive at the desired conclusion (7.7).
This completes the proof of the Theorem. $\square$
Proof of Lemma 7.2.1. By the Divergence Theorem,
\[ \int_{B_r(x_o)} \Delta u(x)\,dx = \int_{\partial B_r(x_o)} \frac{\partial u}{\partial\nu}\,dS = r^{n-1}\int_{S^{n-1}} \frac{\partial u}{\partial r}(x_o + r\omega)\,dS_\omega, \tag{7.17} \]
where $dS_\omega$ is the area element of the $(n-1)$-dimensional unit sphere $S^{n-1} = \{\omega \mid |\omega| = 1\}$.
If $\Delta u < 0$, then by (7.17),
\[ \frac{\partial}{\partial r}\left\{\int_{S^{n-1}} u(x_o + r\omega)\,dS_\omega\right\} < 0. \tag{7.18} \]
Integrating both sides of (7.18) from $0$ to $r$ yields
\[ \int_{S^{n-1}} u(x_o + r\omega)\,dS_\omega - u(x_o)|S^{n-1}| < 0, \]
where $|S^{n-1}|$ is the area of $S^{n-1}$. It follows that
\[ u(x_o) > \frac{1}{r^{n-1}|S^{n-1}|}\int_{\partial B_r(x_o)} u(x)\,dS. \]
This verifies (7.11).
To see (7.12), suppose on the contrary that $-\Delta u(x_o) > 0$. Then, by the continuity of $\Delta u$, there exists a $\delta > 0$ such that
\[ -\Delta u(x) > 0, \quad \forall x \in B_\delta(x_o). \]
Consequently, (7.11) holds for any $0 < r < \delta$. This contradicts the assumption that $x_o$ is a minimum of $u$.
This completes the proof of the Lemma. $\square$
From the proof of Theorem 7.2.1, one can see that if we replace the operator $-\Delta$ by $-\Delta + c(x)$ with $c(x) \ge 0$, then the conclusion of Theorem 7.2.1 still holds (with slight modifications). Furthermore, we can replace the Laplace operator $-\Delta$ by a general uniformly elliptic operator. Let
\[ D_i = \frac{\partial}{\partial x_i}, \quad D_{ij} = \frac{\partial^2}{\partial x_i\partial x_j}. \]
Define
\[ L = -\sum_{ij} a_{ij}(x)D_{ij} + \sum_i b_i(x)D_i + c(x). \tag{7.19} \]
Here we always assume that $a_{ij}(x)$, $b_i(x)$, and $c(x)$ are bounded continuous functions in $\bar\Omega$. We say that $L$ is uniformly elliptic if
\[ a_{ij}(x)\xi_i\xi_j \ge \delta|\xi|^2 \]
for any $x \in \Omega$, any $\xi \in R^n$, and some $\delta > 0$.
Theorem 7.2.2 (Weak Maximum Principle for $L$ with $c(x) \equiv 0$) Let $L$ be the uniformly elliptic operator defined above. Assume that $c(x) \equiv 0$.
i) If $Lu \ge 0$ in $\Omega$, then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \tag{7.20} \]
ii) If $Lu \le 0$ in $\Omega$, then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \]
For $c(x) \ge 0$, the principle still applies, with slight modifications.

Theorem 7.2.3 (Weak Maximum Principle for $L$ with $c(x) \ge 0$) Let $L$ be the uniformly elliptic operator defined above. Assume that $c(x) \ge 0$. Let
\[ u^-(x) = \min\{u(x), 0\} \quad\text{and}\quad u^+(x) = \max\{u(x), 0\}. \]
i) If $Lu \ge 0$ in $\Omega$, then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u^-. \]
ii) If $Lu \le 0$ in $\Omega$, then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u^+. \]
Interested readers may find its proof in many standard books, e.g. in [Ev], page 327.
7.3 The Hopf Lemma and Strong Maximum Principles

In the previous section, we proved a weak form of the maximum principle. In the case $Lu \ge 0$, it concludes that the minimum of $u$ is attained at some point on the boundary $\partial\Omega$; however, it does not exclude the possibility that the minimum is also attained at some point in the interior of $\Omega$. In this section, we show that this actually cannot happen, that is, the minimum value of $u$ can only be achieved on the boundary unless $u$ is constant. This is called the "Strong Maximum Principle". We will prove it by using the following

Lemma 7.3.1 (Hopf Lemma) Assume that $\Omega$ is an open, bounded, and connected domain in $R^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Let
\[ L = -\sum_{ij} a_{ij}(x)D_{ij} + \sum_i b_i(x)D_i + c(x) \]
be uniformly elliptic in $\Omega$ with $c(x) \equiv 0$. Assume that
\[ Lu \ge 0 \quad\text{in } \Omega. \tag{7.21} \]
Suppose there is a ball $B$ contained in $\Omega$ with a point $x_o \in \partial\Omega \cap \partial B$, and suppose that
\[ u(x) > u(x_o), \quad \forall x \in B. \tag{7.22} \]
Then for any outward directional derivative at $x_o$,
\[ \frac{\partial u(x_o)}{\partial\nu} < 0. \tag{7.23} \]
In the case $c(x) \ge 0$, if we require additionally that $u(x_o) \le 0$, then the same conclusion of the Hopf Lemma holds.

Proof. Without loss of generality, we may assume that $B$ is centered at the origin and has radius $r$. Define
\[ w(x) = e^{-\alpha r^2} - e^{-\alpha|x|^2}. \]
Consider $v(x) = u(x) + \epsilon w(x)$ on the set $D = B_{r/2}(x_o) \cap B$ (see Figure 3).

Figure 3
We will choose $\alpha$ and $\epsilon$ appropriately so that we can apply the Weak Maximum Principle to $v(x)$ and arrive at
\[ v(x) \ge v(x_o), \quad \forall x \in D. \tag{7.24} \]
We postpone the proof of (7.24) for a moment. From (7.24) we have
\[ \frac{\partial v}{\partial\nu}(x_o) \le 0. \tag{7.25} \]
Noticing that
\[ \frac{\partial w}{\partial\nu}(x_o) > 0, \]
we arrive at the desired inequality
\[ \frac{\partial u}{\partial\nu}(x_o) < 0. \]
Now, to complete the proof of the Lemma, what is left is to verify (7.24). We will carry this out in two steps. First we show that
\[ Lv \ge 0. \tag{7.26} \]
Hence we can apply the Weak Maximum Principle to conclude that
\[ \min_{\bar D} v \ge \min_{\partial D} v. \tag{7.27} \]
Then we show that the minimum of $v$ on the boundary $\partial D$ is actually attained at $x_o$:
\[ v(x) \ge v(x_o), \quad \forall x \in \partial D. \tag{7.28} \]
Obviously, (7.27) and (7.28) imply (7.24).
To see (7.26), we compute directly:
\begin{eqnarray*}
Lw &=& e^{-\alpha|x|^2}\Big\{4\alpha^2\sum_{i,j=1}^n a_{ij}(x)x_ix_j - 2\alpha\sum_{i=1}^n [a_{ii}(x) - b_i(x)x_i] - c(x)\Big\} + c(x)e^{-\alpha r^2}\\
&\ge& e^{-\alpha|x|^2}\Big\{4\alpha^2\sum_{i,j=1}^n a_{ij}(x)x_ix_j - 2\alpha\sum_{i=1}^n [a_{ii}(x) - b_i(x)x_i] - c(x)\Big\}. \hspace{2em}(7.29)
\end{eqnarray*}
By the ellipticity assumption, we have
n
¸
i,j=1
a
ij
(x)x
i
x
j
≥ δ[x[
2
≥ δ(
r
2
)
2
> 0 in D. (7.30)
Hence we can choose α suﬃciently large, such that Lw ≥ 0. This, together
with the assumption Lu ≥ 0 implies Lv ≥ 0, and (7.27) follows from the Weak
Maximum Principle.
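As a sanity check on this computation, one can verify the formula for Lw symbolically in the special case L = −Δ (that is, a_{ij} = δ_{ij}, b_i = 0, c = 0), where (7.29) reduces to Lw = e^{−α|x|²}(4α²|x|² − 2αn). The following sympy sketch (our own notation, not part of the text) uses the radial form of the Laplacian:

```python
import sympy as sp

s, r, alpha, n = sp.symbols('s r alpha n', positive=True)

# Hopf barrier w(x) = exp(-alpha r^2) - exp(-alpha |x|^2), written radially in s = |x|
w = sp.exp(-alpha*r**2) - sp.exp(-alpha*s**2)

# radial Laplacian in n dimensions: Δw = w'' + ((n-1)/s) w'
lap = sp.diff(w, s, 2) + (n - 1)/s*sp.diff(w, s)
Lw = sp.simplify(-lap)   # the case L = -Δ

# (7.29) specialized: sum a_ij x_i x_j = s^2, sum a_ii = n, b_i = 0, c = 0
expected = sp.exp(-alpha*s**2)*(4*alpha**2*s**2 - 2*alpha*n)
assert sp.simplify(Lw - expected) == 0
```

For s ≥ r/2 the bracket 4α²s² − 2αn is positive once α is large, which is exactly how α is chosen in the proof.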
To verify (7.28), we consider two parts of the boundary ∂D separately.
(i) On ∂D ∩ B, since u(x) > u(x^o), there exists a c_o > 0 such that
u(x) ≥ u(x^o) + c_o. Take ε small enough that |εw| ≤ c_o on ∂D ∩ B. Hence

v(x) ≥ u(x^o) = v(x^o), ∀ x ∈ ∂D ∩ B.

(ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption u(x) ≥ u(x^o), we have

v(x) ≥ v(x^o).
This completes the proof of the Lemma.
Now we are ready to prove
Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0.) Assume
that Ω is an open, bounded, and connected domain in R^n with smooth boundary
∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.
i) If

Lu(x) ≥ 0, x ∈ Ω,

then u attains its minimum only on ∂Ω unless u is constant.
ii) If

Lu(x) ≤ 0, x ∈ Ω,

then u attains its maximum only on ∂Ω unless u is constant.
Proof. We prove part i) here. The proof of part ii) is similar. Let m be
the minimum value of u in Ω. Set Σ = {x ∈ Ω | u(x) = m}. It is relatively
closed in Ω. We show that either Σ is empty or Σ = Ω.
We argue by contradiction. Suppose Σ is a nonempty proper subset of
Ω. Then we can find an open ball B ⊂ Ω \ Σ with a point on its boundary
belonging to Σ. Indeed, we can first find a point p ∈ Ω \ Σ such that
d(p, Σ) < d(p, ∂Ω), then increase the radius of a small ball centered at p until
it hits Σ (before hitting ∂Ω). Let x^o be the point at ∂B ∩ Σ. Obviously we
have, in B,

Lu ≥ 0 and u(x) > u(x^o).
Now we can apply the Hopf Lemma to conclude that the outward normal
derivative satisfies

∂u/∂ν (x^o) < 0. (7.31)

On the other hand, x^o is an interior minimum of u in Ω, and we must have
Du(x^o) = 0. This contradicts (7.31) and hence completes the proof of the
Theorem.
In the case when c(x) ≥ 0, the strong principle still applies with slight
modiﬁcations.
Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume
that Ω is an open, bounded, and connected domain in R^n with smooth boundary
∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.
i) If

Lu(x) ≥ 0, x ∈ Ω,

then u cannot attain its nonpositive minimum in the interior of Ω unless u
is constant.
ii) If

Lu(x) ≤ 0, x ∈ Ω,

then u cannot attain its nonnegative maximum in the interior of Ω unless
u is constant.
Remark 7.3.1 In order for the maximum principle to hold, we assume that
the domain Ω is bounded. This is essential, since it guarantees the existence
of the maximum and minimum of u in Ω̄. A simple counterexample is when Ω
is the half space {x ∈ R^n | x_n > 0} and u(x) = x_n. Obviously, Δu = 0, but
u does not obey the maximum principle:

max_Ω u ≤ max_{∂Ω} u.
Equally important is the nonnegativeness of the coefficient c(x). For example,
set Ω = {(x, y) ∈ R² | −π/2 < x < π/2, −π/2 < y < π/2}. Then u = cos x cos y
satisfies

−Δu − 2u = 0 in Ω,
 u = 0 on ∂Ω.
But −u = −cos x cos y solves the same problem, and obviously there are some points in Ω at which it is negative; hence the maximum principle fails when c(x) = −2.
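This counterexample is easy to check symbolically; a minimal sympy sketch (the code and symbol names are ours, not the book's):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.cos(x)*sp.cos(y)

# u = cos x cos y satisfies -Δu - 2u = 0 on the square (-π/2, π/2)²
assert sp.simplify(-sp.diff(u, x, 2) - sp.diff(u, y, 2) - 2*u) == 0

# u vanishes on the boundary pieces x = ±π/2 (and by symmetry y = ±π/2)
assert u.subs(x, sp.pi/2) == 0 and u.subs(x, -sp.pi/2) == 0

# -u solves the same problem yet is negative at the center, so the
# conclusion "u ≥ 0 on ∂Ω implies u ≥ 0 in Ω" fails for c(x) = -2 < 0
assert (-u).subs({x: 0, y: 0}) == -1
```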
However, if we impose some sign restriction on u, say u ≥ 0, then both
conditions can be relaxed. A simple version of such a result will be presented in
the next theorem.
Also, as one will see in the next section, c(x) is actually allowed to be
negative, but not ‘too negative’.
Theorem 7.3.3 (Maximum Principle and Hopf Lemma for not necessarily
bounded domains and not necessarily nonnegative c(x).)
Let Ω be a domain in R^n with smooth boundary. Assume that u ∈ C²(Ω) ∩
C(Ω̄) and satisfies

−Δu + ∑_{i=1}^n b_i(x)D_i u + c(x)u ≥ 0, u(x) ≥ 0, x ∈ Ω,
 u(x) = 0, x ∈ ∂Ω,   (7.32)

with bounded functions b_i(x) and c(x). Then
i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and
ii) if u ≢ 0 in Ω, then on ∂Ω, the exterior normal derivative

∂u/∂ν < 0.
To prove the Theorem, we need the following Lemma concerning eigenvalues.
Lemma 7.3.2 Let λ_1 be the first positive eigenvalue of

−Δφ = λ_1 φ(x), x ∈ B_1(0),
 φ(x) = 0, x ∈ ∂B_1(0),   (7.33)

with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the first
positive eigenvalue of the problem on B_ρ(0) is λ_1/ρ². More precisely, if we let
ψ(x) = φ(x/ρ), then

−Δψ = (λ_1/ρ²) ψ(x), x ∈ B_ρ(0),
 ψ(x) = 0, x ∈ ∂B_ρ(0).   (7.34)
The proof is straightforward and is left to the reader.
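Readers who wish to check the scaling can do so in a one-dimensional analog of the Lemma, where everything is explicit: on the "ball" (−1, 1) the first Dirichlet eigenfunction is cos(πx/2) with eigenvalue π²/4. This explicit choice is ours, for illustration only:

```python
import sympy as sp

x, rho = sp.symbols('x rho', positive=True)

# 1-D analog of (7.33): first Dirichlet eigenfunction on (-1, 1)
phi = sp.cos(sp.pi*x/2)
lam1 = sp.pi**2/4
assert sp.simplify(-sp.diff(phi, x, 2) - lam1*phi) == 0

# rescaling psi(x) = phi(x/rho) gives eigenvalue lam1/rho^2 on (-rho, rho),
# as in (7.34)
psi = phi.subs(x, x/rho)
assert sp.simplify(-sp.diff(psi, x, 2) - (lam1/rho**2)*psi) == 0
```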
The Proof of Theorem 7.3.3.
i) Suppose that u = 0 at some point in Ω, but u ≢ 0 in Ω. Let

Ω⁺ = {x ∈ Ω | u(x) > 0}.

Then by the regularity assumption on u, Ω⁺ is an open set with C² boundary.
Obviously,

u(x) = 0, ∀ x ∈ ∂Ω⁺.

Let x^o be a point on ∂Ω⁺, but not on ∂Ω. Then for ρ > 0 sufficiently small,
one can choose a ball B_{ρ/2}(x̄) ⊂ Ω⁺ with x^o as its boundary point. Let ψ be
the positive eigenfunction of the eigenvalue problem (7.34) on B_ρ(x^o) corresponding
to the eigenvalue λ_1/ρ². Obviously, B_ρ(x^o) completely covers B_{ρ/2}(x̄).
Let v = u/ψ. Then from (7.32), it is easy to deduce that

0 ≤ −Δv − 2∇v·∇ψ/ψ + ∑_{i=1}^n b_i(x)D_i v + [ (−Δψ)/ψ + ∑_{i=1}^n b_i(x)(D_i ψ)/ψ + c(x) ] v
  ≡ −Δv + ∑_{i=1}^n b̃_i(x)D_i v + c̃(x)v.
Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on B_1;
then

c̃(x) = λ_1/ρ² + (1/ρ) ∑_{i=1}^n b_i(x)(D_i φ)/φ + c(x).
This allows us to choose ρ sufficiently small so that c̃(x) ≥ 0. Now we can
apply the Hopf Lemma to conclude that, for the outward normal derivative at the
boundary point x^o of B_{ρ/2}(x̄),

∂v/∂ν (x^o) < 0, (7.35)

because

v(x) > 0, ∀ x ∈ B_{ρ/2}(x̄) and v(x^o) = u(x^o)/ψ(x^o) = 0.
On the other hand, since x^o is also a minimum of v in the interior of Ω,
we must have

Dv(x^o) = 0.

This contradicts (7.35) and hence proves part i) of the Theorem.
ii) The proof goes almost the same as in part i), except that we consider the
point x^o on ∂Ω and the ball B_{ρ/2}(x̄) is in Ω with x^o ∈ ∂B_{ρ/2}(x̄). Then for
the outward normal derivative of u, we have

∂u/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) + v(x^o) ∂ψ/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) < 0.
Here we have used the well-known fact that the eigenfunction ψ on B_ρ(x^o) is
radially symmetric about the center x^o, and hence Dψ(x^o) = 0. This completes
the proof of the Theorem.
7.4 Maximum Principles Based on Comparisons
In the previous section, we showed that if (−Δ + c(x))u ≥ 0, then the maximum
principle, i.e. (7.20), applies. There, we required c(x) ≥ 0. We can think of −Δ
as a "positive" operator, and the maximum principle holds for any "positive"
operator. For c(x) ≥ 0, −Δ + c(x) is also "positive". Do we really need c(x) ≥ 0
here? To answer this question, let us consider the Dirichlet eigenvalue problem
of −Δ:

−Δφ − λφ(x) = 0, x ∈ Ω,
 φ(x) = 0, x ∈ ∂Ω.   (7.36)
We notice that the eigenfunction φ corresponding to the first positive eigenvalue
λ_1 is either positive or negative in Ω. That is, the solutions of (7.36)
with λ = λ_1 obey the Maximum Principle, that is, the maxima or minima of φ
are attained only on the boundary ∂Ω. This suggests that, to ensure the Maximum
Principle, c(x) need not be nonnegative; it is allowed to be as negative
as −λ_1. More precisely, we can establish the following more general maximum
principle based on comparison.
Theorem 7.4.1 Assume that Ω is a bounded domain. Let φ be a positive
function on Ω̄ satisfying

−Δφ + λ(x)φ ≥ 0. (7.37)

Assume that u is a solution of

−Δu + c(x)u ≥ 0, x ∈ Ω,
 u ≥ 0 on ∂Ω.   (7.38)

If

c(x) > λ(x), ∀ x ∈ Ω, (7.39)

then u ≥ 0 in Ω.
Proof. We argue by contradiction. Suppose that u(x) < 0 somewhere in
Ω. Let v(x) = u(x)/φ(x). Then since φ(x) > 0, we must have v(x) < 0 somewhere
in Ω. Let x^o ∈ Ω be a minimum point of v(x). By a direct calculation, it is easy to
verify that

−Δv = 2∇v·∇φ/φ + (1/φ)( −Δu + (Δφ/φ)u ). (7.40)
On one hand, since x^o is a minimum, we have

−Δv(x^o) ≤ 0 and ∇v(x^o) = 0. (7.41)

While on the other hand, by (7.37), (7.38), and (7.39), and taking into
account that u(x^o) < 0, we have, at the point x^o,

−Δu + (Δφ/φ)u (x^o) ≥ −Δu + λ(x^o)u(x^o)
                     > −Δu + c(x^o)u(x^o) ≥ 0.

This is an obvious contradiction with (7.40) and (7.41), and thus completes
the proof of the Theorem.
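The identity (7.40) on which the proof rests can be confirmed symbolically for generic smooth u and φ in two variables; a small sympy sketch (our own check, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
phi = sp.Function('phi')(x, y)

lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)          # Laplacian Δ
grad_dot = lambda f, g: (sp.diff(f, x)*sp.diff(g, x)
                         + sp.diff(f, y)*sp.diff(g, y))      # ∇f · ∇g

v = u/phi
# (7.40):  -Δv = 2 ∇v·∇φ/φ + (1/φ)( -Δu + (Δφ/φ) u )
lhs = -lap(v)
rhs = 2*grad_dot(v, phi)/phi + (-lap(u) + lap(phi)/phi*u)/phi
assert sp.simplify(lhs - rhs) == 0
```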
190 7 Maximum Principles
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39)
are required only at the points where v attains its minimum, or at points where
u is negative.
The Theorem is also valid on unbounded domains if u is "nonnegative"
at infinity:
Theorem 7.4.2 If Ω is an unbounded domain, besides condition (7.38), we
assume further that

liminf_{|x|→∞} u(x) ≥ 0. (7.42)

Then u ≥ 0 in Ω.
Proof. Still consider the same v(x) as in the proof of Theorem 7.4.1. Now
condition (7.42) guarantees that the minima of v(x) do not “leak” away to
inﬁnity. Then the rest of the arguments are exactly the same as in the proof
of Theorem 7.4.1.
For convenience in applications, we provide two typical situations where
there exist such functions φ and c(x) satisfying conditions (7.37) and (7.39),
so that the Maximum Principle Based on Comparison applies:
i) Narrow regions, and
ii) c(x) decays fast enough at ∞.
i) Narrow Regions. When

Ω = {x | 0 < x_1 < l}

is a narrow region with width l, we can choose φ(x) = sin((x_1 + ε)/l). Then it is easy
to see that −Δφ = (1/l²)φ, where λ(x) = −1/l²
can be very negative when l is sufficiently small.
Corollary 7.4.1 (Narrow Region Principle.) Assume that u satisfies (7.38) with bounded
function c(x). Then when the width l of the region Ω is sufficiently small, c(x)
satisfies (7.39), i.e. c(x) > λ(x) = −1/l². Hence we can directly apply Theorem
7.4.1 to conclude that u ≥ 0 in Ω, provided liminf_{|x|→∞} u(x) ≥ 0.
ii) Decay at Infinity. In dimension n ≥ 3, one can choose some positive
number q < n − 2 and let φ(x) = 1/|x|^q. Then it is easy to verify that

−Δφ = ( q(n − 2 − q)/|x|² ) φ.
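This computation is easy to confirm with the radial form of the Laplacian, Δφ = φ'' + ((n−1)/s)φ' with s = |x|; for instance in sympy (an editorial check, with our own symbol names):

```python
import sympy as sp

s, q, n = sp.symbols('s q n', positive=True)

# phi(x) = |x|^{-q}, written radially in s = |x|
phi = s**(-q)

# radial Laplacian in n dimensions
lap = sp.diff(phi, s, 2) + (n - 1)/s*sp.diff(phi, s)

# -Δphi = q(n - 2 - q)/s^2 * phi
assert sp.simplify(-lap - q*(n - 2 - q)/s**2*phi) == 0
```

Note that the coefficient q(n − 2 − q) is positive exactly when 0 < q < n − 2, which is why q is restricted to that range.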
In the case that c(x) decays fast enough near infinity, we can adapt the proof of
Theorem 7.4.1 to derive

Corollary 7.4.2 (Decay at Infinity) Assume there exists R > 0 such that

c(x) > −q(n − 2 − q)/|x|², ∀ |x| > R. (7.43)
Suppose

lim_{|x|→∞} u(x)|x|^q = 0.

Let Ω be a region contained in B_R^C(0) ≡ R^n \ B_R(0). If u satisfies (7.38) on
Ω̄, then

u(x) ≥ 0 for all x ∈ Ω.
Remark 7.4.2 From Remark 7.4.1, one can see that actually condition
(7.43) is only required at points where u is negative.
Remark 7.4.3 Although Theorem 7.4.1 as well as its Corollaries are stated
in linear forms, they can easily be applied to nonlinear equations, for example,

−Δu − |u|^{p−1}u = 0, x ∈ R^n. (7.44)

Assume that the solution u decays near infinity at the rate of 1/|x|^s with
s(p − 1) > 2. Let c(x) = −|u(x)|^{p−1}. Then for R sufficiently large, and for
the region Ω as stated in Corollary 7.4.2, c(x) satisfies (7.43) in Ω. If we further
assume that

u|_{∂Ω} ≥ 0,

then we can derive from Corollary 7.4.2 that u ≥ 0 in the entire region Ω.
7.5 A Maximum Principle for Integral Equations
In this section, we introduce a maximum principle for integral equations.
Let Ω be a region in R^n, which may or may not be bounded. Assume

K(x, y) ≥ 0, ∀ (x, y) ∈ Ω × Ω.

Define the integral operator T by

(Tf)(x) = ∫_Ω K(x, y)f(y) dy.
Let ‖·‖ be a norm in a Banach space satisfying

‖f‖ ≤ ‖g‖ whenever 0 ≤ f(x) ≤ g(x) in Ω. (7.45)

There are many norms that satisfy (7.45), such as the L^p(Ω) norms, the L^∞(Ω) norm,
and the C(Ω) norm.
Theorem 7.5.1 Suppose that

‖Tg‖ ≤ θ‖g‖ for some 0 < θ < 1 and for all g. (7.46)

If

f(x) ≤ (Tf)(x), (7.47)

then

f(x) ≤ 0 for all x ∈ Ω. (7.48)
Proof. Define

Ω⁺ = {x ∈ Ω | f(x) > 0},

and

f⁺(x) = max{0, f(x)}, f⁻(x) = min{0, f(x)}.

Then it is easy to see that for any x ∈ Ω⁺,

0 ≤ f⁺(x) ≤ (Tf)(x) = (Tf⁺)(x) + (Tf⁻)(x) ≤ (Tf⁺)(x). (7.49)

While in the rest of the region Ω, we have f⁺(x) = 0 and (Tf⁺)(x) ≥ 0 by
the definitions. Therefore, (7.49) holds for any x ∈ Ω. It follows from (7.49)
and (7.45) that

‖f⁺‖ ≤ ‖Tf⁺‖ ≤ θ‖f⁺‖.

Since θ < 1, we must have

‖f⁺‖ = 0,

which implies that f⁺(x) ≡ 0, and hence f(x) ≤ 0. This completes the proof
of the Theorem.
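A discrete analog may help fix ideas: replace T by a nonnegative matrix acting on vectors, with the sup norm playing the role of ‖·‖. The toy construction below is our own; the scaling enforces (7.46) with θ = 0.9, and the two assertions mirror the inequalities ‖(Tf)⁺‖ ≤ ‖Tf⁺‖ ≤ θ‖f⁺‖ used at the end of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
theta = 0.9

# discrete analog of T: a nonnegative kernel matrix, scaled so that
# ||T f||_inf <= theta * ||f||_inf, i.e. condition (7.46)
K = rng.random((n, n))
T = theta*K/np.linalg.norm(K, ord=np.inf)   # matrix inf-norm = max row sum

f = rng.random(n) - 0.5
fp = np.maximum(f, 0.0)                     # f^+
Tf_plus = np.maximum(T@f, 0.0)              # (T f)^+

# since T f = T f^+ + T f^- <= T f^+ entrywise, (T f)^+ <= T f^+
assert np.linalg.norm(Tf_plus, np.inf) <= np.linalg.norm(T@fp, np.inf) + 1e-12
# and the contraction (7.46) applied to f^+
assert np.linalg.norm(T@fp, np.inf) <= theta*np.linalg.norm(fp, np.inf) + 1e-12
```

Iterating the contraction, ‖f⁺‖ ≤ θ^k‖f⁺‖ → 0, which is the discrete shadow of the conclusion f ≤ 0.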
Recall that in the previous section, after introducing the "Maximum Principle Based
on Comparison" for partial differential equations, we explained how it can
be applied to the situations of "Narrow Regions" and "Decay at Infinity".
Parallel to this, we have similar applications for integral equations. Let α be
a number between 0 and n. Define

(Tf)(x) = ∫_Ω ( 1/|x − y|^{n−α} ) c(y)f(y) dy.
Assume that f satisﬁes the integral equation
f = Tf in Ω,
or more generally, the integral inequality

f ≤ Tf in Ω. (7.50)

Further assume that

‖Tf‖_{L^p(Ω)} ≤ C ‖c(y)‖_{L^τ(Ω)} ‖f‖_{L^p(Ω)}, (7.51)

for some p, τ > 1.
If we have the right integrability condition on c(y), then we can derive,
from Theorem 7.5.1, a maximum principle that can be applied to "Narrow
Regions" and "Near Infinity". More precisely, we have
Corollary 7.5.1 Assume that c(y) ≥ 0 and c(y) ∈ L^τ(R^n). Let f ∈ L^p(R^n)
be a nonnegative function satisfying (7.50) and (7.51). Then there exist positive
numbers R_o and ε_o depending on c(y) only, such that

if µ(Ω ∩ B_{R_o}(0)) ≤ ε_o, then f⁺ ≡ 0 in Ω,

where µ(D) is the measure of the set D.
Proof. Since c(y) ∈ L^τ(R^n), by the Lebesgue integral theory, when the measure
of the intersection of Ω with B_{R_o}(0) is sufficiently small, we can make the
integral ∫_Ω |c(y)|^τ dy as small as we wish, and thus obtain

C ‖c(y)‖_{L^τ(Ω)} < 1.

Now it follows from Theorem 7.5.1 that

f⁺(x) ≡ 0, ∀ x ∈ Ω.

This completes the proof of the Corollary.
Remark 7.5.1 One can see that the condition µ(Ω ∩ B_{R_o}(0)) ≤ ε_o in the
Corollary is satisfied in the following two situations:
i) Narrow Regions: the width of Ω is very small.
ii) Near Infinity: say, Ω = B_R^C(0), the complement of the ball B_R(0), with
sufficiently large R.
As an immediate application of this “Maximum Principle”, we study an
integral equation in the next section. We will use the method of moving planes
to obtain the radial symmetry and monotonicity of the positive solutions.
8
Methods of Moving Planes and of Moving
Spheres
8.1 Outline of the Method of Moving Planes
8.2 Applications of Maximum Principles Based on Comparison
8.2.1 Symmetry of Solutions on a Unit Ball
8.2.2 Symmetry of Solutions of −Δu = u^p in R^n
8.2.3 Symmetry of Solutions of −Δu = e^u in R²
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
8.4.1 The Background
8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and
Symmetry of Solutions for Integral Equations
The Method of Moving Planes (MMP) was invented by the Soviet mathematician
Alexandroff in the early 1950s. Decades later, it was further developed
by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas,
and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY],
and many others. This method has been applied to free boundary problems,
semilinear partial differential equations, and other problems. Particularly for
semilinear partial differential equations, there have been many significant contributions.
We refer to the paper of Frenkel [F] for more descriptions of the
method.
The Method of Moving Planes and its variant, the Method of Moving
Spheres, have become powerful tools in establishing symmetries and monotonicity
for solutions of partial differential equations. They can also be used
to obtain a priori estimates, to derive useful inequalities, and to prove nonexistence
of solutions.
From the previous chapter, we have seen the beauty and power of maximum
principles. The MMP greatly enhances this power. Roughly speaking,
the MMP is a continuous way of repeatedly applying maximum principles.
During this process, the maximum principle is used infinitely many times,
and the advantage is that each time we only need to use it in a very narrow
region. From the previous chapter, one can see that, in such a narrow region,
even if the coefficients of the equation are not "good," the maximum principle
can still be applied. In the authors' research practice, we also introduced a
form of maximum principle at infinity to the MMP, and thereby simplified
many proofs and extended the results in more natural ways. We recommend
that readers study this part carefully, so that they will be able to apply it to
their own research.
It is well known that by using a Green's function, one can change a differential
equation into an integral equation, and under certain conditions,
they are equivalent. To investigate the symmetry and monotonicity of integral
equations, the authors, together with Ou, created an integral form of the
MMP. Instead of using local properties (say, differentiability) of a differential
equation, they employed the global properties of the solutions of integral
equations.
In this chapter, we will apply the Method of Moving Planes and its
variant, the Method of Moving Spheres, to study semilinear elliptic equations
and integral equations. We will establish symmetry, monotonicity, a
priori estimates, and nonexistence of the solutions. During the process of
moving planes, the Maximum Principles introduced in the previous chapter
are applied in innovative ways.
In Section 8.2, we will establish radial symmetry and monotonicity for the
solutions of the following three semilinear elliptic problems:

−Δu = f(u), x ∈ B_1(0), with u = 0 on ∂B_1(0);

−Δu = u^{(n+2)/(n−2)}(x), x ∈ R^n, n ≥ 3;

and

−Δu = e^{u(x)}, x ∈ R².
During the moving of planes, the Maximum Principles Based on Comparison
will play a major role. In particular, the Narrow Region Principle and the
Decay at Infinity Principle will be used repeatedly in dealing with the three
examples.
In Section 8.3, we will apply the Method of Moving Planes in a "local way"
to obtain a priori estimates on the solutions of the prescribing scalar curvature
equation on a compact Riemannian manifold M:

−(4(n − 1)/(n − 2)) Δ_o u + R_o(x)u = R(x)u^{(n+2)/(n−2)}, in M.
We allow the function R(x) to change signs. In this situation, the traditional
blowing-up analysis fails near the set where R(x) = 0. We will use the Method
of Moving Planes in an innovative way to obtain a priori estimates. Since the
Method of Moving Planes cannot be applied to the solution u directly, we
introduce an auxiliary function to circumvent this difficulty.
In Section 8.4, we use the Method of Moving Spheres to prove the nonexistence
of solutions for the prescribing Gaussian and scalar curvature equations

−Δu + 2 = R(x)e^u,

and

−Δu + (n(n − 2)/4)u = ((n − 2)/(4(n − 1))) R(x)u^{(n+2)/(n−2)},

on S² and on S^n (n ≥ 3), respectively. We prove that if the function R(x)
is rotationally symmetric and monotone in the region where it is positive,
then both equations admit no solution. This provides a stronger necessary
condition than the well-known Kazdan-Warner condition, and it also becomes
a sufficient condition for the existence of solutions in most cases.
In Section 8.5, as an application of the maximum principle for integral
equations introduced in Section 7.5, we study the integral equation in R^n

u(x) = ∫_{R^n} ( 1/|x − y|^{n−α} ) u^{(n+α)/(n−α)}(y) dy,

for any real number α between 0 and n. It arises as an Euler-Lagrange equation
for a functional in the context of the Hardy-Littlewood-Sobolev inequalities.
Due to the different nature of the integral equation, the traditional Method of
Moving Planes does not work. Hence we exploit its global properties and develop
a new idea, the Integral Form of the Method of Moving Planes, to obtain the
symmetry and monotonicity of the solutions. The Maximum Principle for
Integral Equations established in Chapter 7 is combined with estimates
of various integral norms to carry on the moving of planes.
8.1 Outline of the Method of Moving Planes
To outline how the Method of Moving Planes works, we take the Euclidean
space R^n as an example. Let u be a positive solution of a certain partial
differential equation. If we want to prove that it is symmetric and monotone
in a given direction, we may assign that direction as the x_1 axis. For any real
number λ, let

T_λ = {x = (x_1, x_2, ..., x_n) ∈ R^n | x_1 = λ}.
This is a plane perpendicular to the x_1 axis, and it is the plane that we will move with.
Let Σ_λ denote the region to the left of the plane, i.e.

Σ_λ = {x ∈ R^n | x_1 < λ}.

Let

x^λ = (2λ − x_1, x_2, ..., x_n),

the reflection of the point x = (x_1, ..., x_n) about the plane T_λ (see Figure 1).
[Figure 1: the moving plane T_λ, the region Σ_λ to its left, and the reflection x^λ of a point x.]
We compare the values of the solution u at the points x and x^λ, and we want
to show that u is symmetric about some plane T_{λ_o}. To this end, let

w_λ(x) = u(x^λ) − u(x).

In order to show that there exists some λ_o such that

w_{λ_o}(x) ≡ 0, ∀ x ∈ Σ_{λ_o},
we generally go through the following two steps.
Step 1. We first show that, for λ sufficiently negative, we have

w_λ(x) ≥ 0, ∀ x ∈ Σ_λ. (8.1)

Then we are able to start off from this neighborhood of x_1 = −∞ and move
the plane T_λ along the x_1 direction to the right, as long as the inequality (8.1)
holds.
Step 2. We continuously move the plane this way up to its limiting position.
More precisely, we define

λ_o = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ}.

We prove that u is symmetric about the plane T_{λ_o}, that is, w_{λ_o}(x) ≡ 0 for all
x ∈ Σ_{λ_o}. This is usually carried out by a contradiction argument. We show
that if w_{λ_o}(x) ≢ 0, then there would exist λ > λ_o such that (8.1) holds, and
this contradicts the definition of λ_o.
From the above illustration, one can see that the key to the Method of
Moving Planes is to establish inequality (8.1), and for partial differential equations,
maximum principles are powerful tools for this task. While for integral
equations, we use a different idea. We estimate a certain norm of w_λ on the
set

Σ_λ⁻ = {x ∈ Σ_λ | w_λ(x) < 0}

where the inequality (8.1) is violated. We show that this norm must be zero,
and hence Σ_λ⁻ is empty.
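The two steps can be watched numerically on an explicitly symmetric profile. The sketch below (the profile u(x) = 1/(1 + x²) is our own choice, standing in for a one-dimensional solution) checks (8.1) for several λ ≤ 0 and checks that w_λ vanishes identically at the limiting position λ_o = 0:

```python
import numpy as np

# a radially symmetric, monotone decreasing profile
u = lambda x: 1.0/(1.0 + x**2)

def w(lam, x):
    # w_lambda(x) = u(x^lambda) - u(x), with x^lambda = 2*lam - x the reflection
    return u(2*lam - x) - u(x)

x = np.linspace(-50.0, 0.0, 10001)

# Step 1/2: for every lam <= 0, w_lambda >= 0 on Sigma_lambda = {x < lam} ...
for lam in (-10.0, -1.0, 0.0):
    mask = x < lam
    assert np.all(w(lam, x[mask]) >= -1e-15)

# ... and at the limiting position lam_o = 0, w vanishes identically,
# which expresses the symmetry of u about the plane T_0
assert np.max(np.abs(w(0.0, x))) < 1e-15
```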
8.2 Applications of the Maximum Principles Based on
Comparisons
In this section, we study some semilinear elliptic equations. We will apply
the method of moving planes to establish the symmetry of the solutions. The
essence of the method of moving planes is the application of various maximum
principles. In the proof of each theorem, the readers will see vividly how the
Maximum Principles Based on Comparisons are applied to narrow regions and
to solutions with decay at inﬁnity.
8.2.1 Symmetry of Solutions in a Unit Ball
We first begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:

Theorem 8.2.1 Assume that f(·) is a Lipschitz continuous function such
that

|f(p) − f(q)| ≤ C_o |p − q| (8.2)

for some constant C_o. Then every positive solution u of

−Δu = f(u), x ∈ B_1(0),
 u = 0 on ∂B_1(0),   (8.3)

is radially symmetric and monotone decreasing about the origin.
Proof.
As shown in Figure 4 below, let T_λ = {x | x_1 = λ} be the plane perpendicular
to the x_1 axis. Let Σ_λ be the part of B_1(0) which is on the left of the
plane T_λ. For each x ∈ Σ_λ, let x^λ be the reflection of the point x about the
plane T_λ; more precisely, x^λ = (2λ − x_1, x_2, ..., x_n).

[Figure 4: the unit ball B_1(0), the plane T_λ, the region Σ_λ, and the reflected point x^λ.]
We compare the values of the solution u on Σ_λ with those on its reflection.
Let

u_λ(x) = u(x^λ), and w_λ(x) = u_λ(x) − u(x).
Then it is easy to see that u_λ satisfies the same equation as u does. Applying
the Mean Value Theorem to f(u), one can verify that w_λ satisfies

Δw_λ + C(x, λ)w_λ(x) = 0, x ∈ Σ_λ,

where

C(x, λ) = ( f(u_λ(x)) − f(u(x)) ) / ( u_λ(x) − u(x) ),

and by condition (8.2),

|C(x, λ)| ≤ C_o. (8.4)
Step 1: Start Moving the Plane.
We start from near the left end of the region. Obviously, for λ sufficiently
close to −1, Σ_λ is a narrow (in the x_1 direction) region, and on ∂Σ_λ, w_λ(x) ≥ 0.
(On T_λ, w_λ(x) = 0, while on the curved part of ∂Σ_λ, w_λ(x) > 0 since u > 0
in B_1(0).)
Now we can apply the "Narrow Region Principle" (Corollary 7.4.1) to
conclude that, for λ close to −1,

w_λ(x) ≥ 0, ∀ x ∈ Σ_λ. (8.5)

This provides a starting point for us to move the plane T_λ.
Step 2: Move the Plane to Its Right Limit.
We now increase the value of λ continuously; that is, we move the plane
T_λ to the right as long as the inequality (8.5) holds. We show that, by moving
in this way, the plane will not stop before hitting the origin. More precisely, let

λ̄ = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ};

we first claim that

λ̄ ≥ 0. (8.6)

Otherwise, we will show that the plane can be moved further to the right
by a small distance, and this would contradict the definition of λ̄. In
fact, if λ̄ < 0, then the image of the curved surface part of ∂Σ_λ̄ under the
reflection about T_λ̄ lies inside B_1(0), where u(x) > 0 by assumption. It follows
that, on this part of ∂Σ_λ̄, w_λ̄(x) > 0. By the Strong Maximum Principle, we
deduce that

w_λ̄(x) > 0

in the interior of Σ_λ̄.
Let d_o be the maximum width of narrow regions to which we can apply the
"Narrow Region Principle". Choose a small positive number δ such that δ ≤
d_o/2 and δ ≤ −λ̄. We consider the function w_{λ̄+δ}(x) on the narrow region (see Figure
5):

Ω_δ = Σ_{λ̄+δ} ∩ {x | x_1 > λ̄ − d_o/2}.
It satisfies

Δw_{λ̄+δ} + C(x, λ̄ + δ)w_{λ̄+δ} = 0, x ∈ Ω_δ,
 w_{λ̄+δ}(x) ≥ 0, x ∈ ∂Ω_δ.   (8.7)

[Figure 5: the narrow region Ω_δ between the planes T_{λ̄ − d_o/2} and T_{λ̄+δ}.]
The equation is obvious. To see the boundary condition, we first notice
that it is satisfied on the two curved parts and on the flat part where x_1 = λ̄ + δ
of the boundary ∂Ω_δ, due to the definition of w_{λ̄+δ}. To see that it is also true
on the rest of the boundary, where x_1 = λ̄ − d_o/2, we use a continuity argument.
Notice that on this part, w_λ̄ is positive and bounded away from 0. More
precisely and more generally, there exists a constant c_o > 0 such that

w_λ̄(x) ≥ c_o, ∀ x ∈ Σ_{λ̄ − d_o/2}.

Since w_λ is continuous in λ, for δ sufficiently small, we still have

w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ̄ − d_o/2}.

Hence, in particular, the boundary condition in (8.7) holds for such small δ.
Now we can apply the "Narrow Region Principle" to conclude that

w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Ω_δ,

and therefore

w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ̄+δ}.

This contradicts the definition of λ̄ and thus establishes (8.6).
(8.6) implies that

u(−x_1, x′) ≤ u(x_1, x′), ∀ x_1 ≥ 0, (8.8)
where x′ = (x_2, ..., x_n).
We then start from λ close to 1 and move the plane T_λ toward the left.
Similarly, we obtain

u(−x_1, x′) ≥ u(x_1, x′), ∀ x_1 ≥ 0. (8.9)

Combining the two opposite inequalities (8.8) and (8.9), we see that u(x) is
symmetric about the plane T_0. Since we can place the x_1 axis in any direction,
we conclude that u(x) must be radially symmetric about the origin. The
monotonicity also easily follows from the argument. This completes the proof.
8.2.2 Symmetry of Solutions of −Δu = u^p in R^n
In an elegant paper of Gidas, Ni, and Nirenberg [2], an interesting result is
the symmetry of the positive solutions of the semilinear elliptic equation

Δu + u^p = 0, x ∈ R^n, n ≥ 3. (8.10)
They proved
Theorem 8.2.2 For p = (n + 2)/(n − 2), all the positive solutions of (8.10) with reasonable
behavior at infinity, namely

u = O( 1/|x|^{n−2} ),

are radially symmetric and monotone decreasing about some point, and hence
assume the form

u(x) = [n(n − 2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for some λ > 0 and for some x^o ∈ R^n.
This uniqueness result, as was pointed out by R. Schoen, is in fact equivalent
to the geometric result due to Obata [O]: a Riemannian metric on S^n
which is conformal to the standard one and has the same constant scalar
curvature is the pull-back of the standard one under a conformal map of
S^n to itself. Recently, Caffarelli, Gidas, and Spruck [CGS] removed the decay
assumption u = O(|x|^{2−n}) and proved the same result. In the case that
1 ≤ p < (n + 2)/(n − 2), Gidas and Spruck [GS] showed that the only nonnegative solution
of (8.10) is identically zero. Then, in the authors' paper [CL1], a simpler
and more elementary proof was given for almost the same result:
Theorem 8.2.3 i) For p = (n + 2)/(n − 2), every positive C² solution of (8.10) must
be radially symmetric and monotone decreasing about some point, and
hence assumes the form

u(x) = [n(n − 2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for some λ > 0 and x^o ∈ R^n.
ii) For p < (n + 2)/(n − 2), the only nonnegative solution of (8.10) is identically zero.
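One can verify directly that the stated profile solves (8.10) in the critical case. The sympy sketch below (our own check) takes n = 3, so p = (n + 2)/(n − 2) = 5, centers the solution at the origin, and uses the radial Laplacian Δu = u'' + (2/r)u':

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)

# the claimed form for n = 3, x^o = 0:
# u = [n(n-2) lam^2]^{(n-2)/4} / (lam^2 + r^2)^{(n-2)/2}
u = (3*lam**2)**sp.Rational(1, 4)/sp.sqrt(lam**2 + r**2)

# radial Laplacian in 3 dimensions
lap = sp.diff(u, r, 2) + 2/r*sp.diff(u, r)

# equation (8.10): Δu + u^5 = 0
assert sp.simplify(lap + u**5) == 0
```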
The proof of Theorem 8.2.2 is actually included in the more general proof
of the first part of Theorem 8.2.3. However, to better illustrate the idea, we
will first present the proof of Theorem 8.2.2 (mostly in our own way). The
readers will see vividly how the "Decay at Infinity" principle is applied
here.
Proof of Theorem 8.2.2.
Define

Σ_λ = {x = (x_1, ..., x_n) ∈ R^n | x_1 < λ}, T_λ = ∂Σ_λ,

and let x^λ be the reflection point of x about the plane T_λ, i.e.

x^λ = (2λ − x_1, x_2, ..., x_n).
(See the previous Figure 1.)
Let

u_λ(x) = u(x^λ), and w_λ(x) = u_λ(x) − u(x).
The proof consists of three steps. In the first step, we start from the very
left end of our region R^n, that is, near x_1 = −∞. We will show that, for λ
sufficiently negative,

w_λ(x) ≥ 0, ∀ x ∈ Σ_λ. (8.11)
Here, the "Decay at Infinity" principle is applied.
Then in the second step, we will move our plane T_λ in the x_1 direction
toward the right, as long as inequality (8.11) holds. The plane will stop at
some limiting position, say at λ = λ_o. We will show that

w_{λ_o}(x) ≡ 0, ∀ x ∈ Σ_{λ_o}.

This implies that the solution u is symmetric and monotone decreasing about
the plane T_{λ_o}. Since the x_1 axis can be chosen in any direction, we conclude that
u must be radially symmetric and monotone about some point.
Finally, in the third step, using the uniqueness theory of ordinary differential
equations, we will show that the solutions can only assume the given
form.
Step 1. Prepare to Move the Plane from Near −∞.
To verify (8.11) for λ sufficiently negative, we apply the maximum principle
to w_λ(x). Write τ = (n + 2)/(n − 2). By the definition of u_λ, it is easy to see that u_λ
satisfies the same equation as u does. Then by the Mean Value Theorem, it is
easy to verify that

−Δw_λ = u_λ^τ(x) − u^τ(x) = τψ_λ^{τ−1}(x)w_λ(x), (8.12)

where ψ_λ(x) is some number between u_λ(x) and u(x). Recalling the "Maximum
Principle Based on Comparison" (Theorem 7.4.1), we see that here c(x) =
−τψ_λ^{τ−1}(x). By the "Decay at Infinity" argument (Corollary 7.4.2), it suffices
to check the decay rate of ψ_λ^{τ−1}(x), and more precisely, only at the points x̃
where w_λ is negative (see Remark 7.4.2). Apparently, at these points,

u_λ(x̃) < u(x̃),

and hence

0 ≤ u_λ(x̃) ≤ ψ_λ(x̃) ≤ u(x̃).

By the decay assumption on the solution,

u(x) = O( 1/|x|^{n−2} ),

we derive immediately that

ψ_λ^{τ−1}(x̃) = O( (1/|x̃|^{n−2})^{4/(n−2)} ) = O( 1/|x̃|⁴ ).

Here the power of 1/|x̃| is greater than two, which is what (actually more
than) we desire. Therefore, we can apply the "Maximum Principle Based
on Comparison" to conclude that for λ sufficiently negative (|x̃| sufficiently
large), we must have (8.11). This completes the preparation for the moving
of planes.
Step 2. Move the Plane to the Limiting Position to Derive Symmetry.
Now we can move the plane T_λ toward the right, i.e., increase the value of λ,
as long as the inequality (8.11) holds. Define

λ_o = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ}.

Obviously, λ_o < +∞, due to the asymptotic behavior of u near x_1 = +∞. We
claim that

w_{λ_o}(x) ≡ 0, ∀ x ∈ Σ_{λ_o}. (8.13)
Otherwise, by the "Strong Maximum Principle" on unbounded domains (see
Theorem 7.3.3), we have

w_{λ_o}(x) > 0 in the interior of Σ_{λ_o}. (8.14)

We show that the plane T_{λ_o} can still be moved a small distance to the right.
More precisely, there exists a δ_o > 0 such that, for all 0 < δ < δ_o, we have

w_{λ_o+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ_o+δ}. (8.15)

This would contradict the definition of λ_o, and hence (8.13) must hold.
Recall that in the last section, we use the “Narrow Region Principle ” to
derive (8.15). Unfortunately, it can not be applied in this situation, because
8.2 Applications of the Maximum Principles Based on Comparisons 205
the "narrow region" here is unbounded, and we are not able to guarantee that $w_{\lambda_o}$ is bounded away from 0 on the left boundary of the "narrow region".
To overcome this difficulty, we introduce a new function
\[
\bar{w}_\lambda(x) = \frac{w_\lambda(x)}{\phi(x)},
\]
where
\[
\phi(x) = \frac{1}{|x|^q} \quad \text{with } 0 < q < n-2.
\]
Then it is a straightforward calculation to verify that
\[
-\Delta \bar{w}_\lambda = 2\,\nabla\bar{w}_\lambda \cdot \frac{\nabla\phi}{\phi} + \left( -\Delta w_\lambda + \frac{\Delta\phi}{\phi}\, w_\lambda \right)\frac{1}{\phi}. \tag{8.16}
\]
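Identity (8.16) is a routine product-rule computation. As a sanity check, the following sympy snippet (our own illustrative sketch, not part of the original text) verifies the one-dimensional reduction of the identity, where ∇ becomes d/dx:

```python
import sympy as sp

x = sp.symbols('x')
w = sp.Function('w')(x)
phi = sp.Function('phi')(x)

wbar = w / phi  # the auxiliary quotient w_lambda / phi

# identity (8.16) in one dimension:
# -wbar'' = 2 wbar' (phi'/phi) + (-w'' + (phi''/phi) w) / phi
lhs = -sp.diff(wbar, x, 2)
rhs = (2*sp.diff(wbar, x)*sp.diff(phi, x)/phi
       + (-sp.diff(w, x, 2) + sp.diff(phi, x, 2)/phi*w)/phi)

residual = sp.simplify(lhs - rhs)
print(residual)  # 0
```

The same cancellation goes through verbatim in $\mathbb{R}^n$, with products of derivatives replaced by dot products of gradients.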
We have

Lemma 8.2.1 There exists an $R_o > 0$ (independent of λ) such that, if $x_o$ is a minimum point of $\bar{w}_\lambda$ and $\bar{w}_\lambda(x_o) < 0$, then $|x_o| < R_o$.
We postpone the proof of the Lemma for a moment. Now suppose that (8.15) is violated for any $\delta > 0$. Then there exists a sequence of numbers $\{\delta_i\}$ tending to 0 and, for each i, a corresponding negative minimum $x_i$ of $\bar{w}_{\lambda_o+\delta_i}$. By Lemma 8.2.1, we have
\[
|x_i| \le R_o, \quad \forall\, i = 1, 2, \dots.
\]
Then there is a subsequence of $\{x_i\}$ (still denoted by $\{x_i\}$) which converges to some point $x_o \in \mathbb{R}^n$. Consequently,
\[
\nabla\bar{w}_{\lambda_o}(x_o) = \lim_{i\to\infty} \nabla\bar{w}_{\lambda_o+\delta_i}(x_i) = 0 \tag{8.17}
\]
and
\[
\bar{w}_{\lambda_o}(x_o) = \lim_{i\to\infty} \bar{w}_{\lambda_o+\delta_i}(x_i) \le 0.
\]
However, we already know $\bar{w}_{\lambda_o} \ge 0$; therefore we must have $\bar{w}_{\lambda_o}(x_o) = 0$. It follows that
\[
\nabla w_{\lambda_o}(x_o) = \nabla\bar{w}_{\lambda_o}(x_o)\,\phi(x_o) + \bar{w}_{\lambda_o}(x_o)\,\nabla\phi(x_o) = 0 + 0 = 0. \tag{8.18}
\]
On the other hand, by (8.14), since $w_{\lambda_o}(x_o) = 0$, $x_o$ must be on the boundary of $\Sigma_{\lambda_o}$. Then by the Hopf Lemma (see Theorem 7.3.3), the outward normal derivative satisfies
\[
\frac{\partial w_{\lambda_o}}{\partial\nu}(x_o) < 0.
\]
This contradicts (8.18). Now, to verify (8.15), what is left is to prove the Lemma.
Proof of Lemma 8.2.1. Assume that $x_o$ is a negative minimum of $\bar{w}_\lambda$. Then
\[
-\Delta\bar{w}_\lambda(x_o) \le 0 \quad \text{and} \quad \nabla\bar{w}_\lambda(x_o) = 0. \tag{8.19}
\]
On the other hand, as we argued in Step 1, by the asymptotic behavior of u at infinity, if $|x_o|$ is sufficiently large,
\[
c(x_o) := -\tau\psi_\lambda^{\tau-1}(x_o) > -\frac{q(n-2-q)}{|x_o|^2} \equiv \frac{\Delta\phi(x_o)}{\phi(x_o)}.
\]
It follows from (8.12) that
\[
\left( -\Delta w_\lambda + \frac{\Delta\phi}{\phi}\, w_\lambda \right)(x_o) > 0.
\]
This, together with (8.19), contradicts (8.16), and hence completes the proof of the Lemma.
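The formula $\frac{\Delta\phi}{\phi} = -\frac{q(n-2-q)}{|x|^2}$ for $\phi = |x|^{-q}$ used above can be checked with the radial form of the Laplacian. The following sympy computation (an illustrative check of ours, not from the original text) confirms it for symbolic q and n:

```python
import sympy as sp

r, q, n = sp.symbols('r q n', positive=True)

phi = r**(-q)
# radial Laplacian in R^n: f'' + (n-1)/r * f'
lap_phi = sp.diff(phi, r, 2) + (n - 1)/r*sp.diff(phi, r)

residual = sp.simplify(lap_phi/phi + q*(n - 2 - q)/r**2)
print(residual)  # 0
```

In particular $\Delta\phi/\phi$ decays like $|x|^{-2}$, slower than the $|x|^{-4}$ decay of $c(x)$, which is exactly what the comparison above requires.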
Step 3. In the previous two steps, we showed that the positive solutions of (8.10) must be radially symmetric and monotone decreasing about some point in $\mathbb{R}^n$. Since the equation is invariant under translation, without loss of generality we may assume that the solutions are symmetric about the origin. Then they satisfy the following ordinary differential equation:
\[
\begin{cases}
-u''(r) - \frac{n-1}{r}\, u'(r) = u^\tau, \\
u'(0) = 0, \\
u(0) = \dfrac{[n(n-2)]^{(n-2)/4}}{\lambda^{\frac{n-2}{2}}},
\end{cases}
\]
for some λ > 0. One can verify that
\[
u(r) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + r^2)^{\frac{n-2}{2}}}
\]
is a solution, and by the uniqueness of the ODE problem, this is the only solution. Therefore, we conclude that every positive solution of (8.10) must assume the form
\[
u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x-x_o|^2)^{\frac{n-2}{2}}}
\]
for some λ > 0 and some $x_o \in \mathbb{R}^n$. This completes the proof of the Theorem.
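The closed form above can be checked by direct substitution into the radial equation $-u'' - \frac{n-1}{r}u' = u^{\frac{n+2}{n-2}}$. The sympy sketch below (illustrative; it spot-checks the sample dimensions n = 3, 4, 5 rather than symbolic n) does exactly that:

```python
import sympy as sp

r, lam = sp.symbols('r lambda', positive=True)

residuals = []
for n in [3, 4, 5]:
    tau = sp.Rational(n + 2, n - 2)
    u = (n*(n - 2)*lam**2)**sp.Rational(n - 2, 4) / (lam**2 + r**2)**sp.Rational(n - 2, 2)
    # radial form of -Delta u = u^tau
    lhs = -sp.diff(u, r, 2) - (n - 1)/r*sp.diff(u, r)
    residuals.append(sp.simplify(lhs/u**tau - 1))  # 0 iff u solves the equation
print(residuals)  # [0, 0, 0]
```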
Proof of Theorem 8.2.3.

i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8.2.2. The main difference is that we have no decay assumption on the solution u at infinity; hence the method of moving planes cannot be applied directly to u. So we first make a Kelvin transform to define a new function
\[
v(x) = \frac{1}{|x|^{n-2}}\, u\!\left(\frac{x}{|x|^2}\right).
\]
Then obviously v(x) has the desired decay rate $\frac{1}{|x|^{n-2}}$ at infinity, but it has a possible singularity at the origin. It is easy to verify that v satisfies the same equation as u does, except at the origin:
\[
\Delta v + v^\tau(x) = 0, \quad x \in \mathbb{R}^n \setminus \{0\}, \ n \ge 3. \tag{8.20}
\]
We will apply the method of moving planes to the function v and show that v is radially symmetric and monotone decreasing about some point. If the center point is the origin, then by the definition of v, u is also symmetric and monotone decreasing about the origin. If the center point of v is not the origin, then v has no singularity at the origin, and hence u has the desired decay at infinity. Then the same argument as in the proof of Theorem 8.2.2 implies that u is symmetric and monotone decreasing about some point.
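The fact that the Kelvin transform preserves the equation rests on the identity $\Delta\big(|x|^{2-n}u(x/|x|^2)\big) = |x|^{-n-2}(\Delta u)(x/|x|^2)$. The sympy sketch below (illustrative; it uses the radial test functions $u(s) = s^k$ as stand-ins for a general radial function, which suffices by linearity) verifies the radial form of this identity:

```python
import sympy as sp

r, n, k = sp.symbols('r n k', positive=True)
s = sp.symbols('s', positive=True)

# radial Laplacian in dimension n
lap = lambda f, var: sp.diff(f, var, 2) + (n - 1)/var*sp.diff(f, var)

U = s**k                          # radial test function
v = r**(2 - n) * U.subs(s, 1/r)   # Kelvin transform of U

residual = sp.simplify(lap(v, r) - r**(-n - 2)*lap(U, s).subs(s, 1/r))
print(residual)  # 0
```

Combined with $-\Delta u = u^\tau$ and $\tau = \frac{n+2}{n-2}$, this yields (8.20), since $|x|^{-n-2} = \big(|x|^{-(n-2)}\big)^\tau$.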
Define
\[
v_\lambda(x) = v(x^\lambda), \qquad w_\lambda(x) = v_\lambda(x) - v(x).
\]
Because v(x) may be singular at the origin, correspondingly $w_\lambda$ may be singular at the point $x^\lambda = (2\lambda, 0, \dots, 0)$. Hence, instead of on $\Sigma_\lambda$, we consider $w_\lambda$ on $\tilde{\Sigma}_\lambda = \Sigma_\lambda \setminus \{x^\lambda\}$. In our proof, we treat the singular point carefully: each time we show that the points of interest stay away from the singularity, so that we can carry the method of moving planes through to the end, showing the existence of a $\lambda_o$ such that $w_{\lambda_o}(x) \equiv 0$ for $x \in \tilde{\Sigma}_{\lambda_o}$, and that v is strictly increasing in the $x_1$ direction in $\tilde{\Sigma}_{\lambda_o}$.
As in the proof of Theorem 8.2.2, we see that $v_\lambda$ satisfies the same equation as v does, and
\[
-\Delta w_\lambda = \tau\psi_\lambda^{\tau-1}(x)\, w_\lambda(x),
\]
where $\psi_\lambda(x)$ is some number between $v_\lambda(x)$ and $v(x)$.
Step 1. We show that, for λ sufficiently negative, we have
\[
w_\lambda(x) \ge 0, \quad \forall x \in \tilde{\Sigma}_\lambda. \tag{8.21}
\]
By the asymptotic behavior
\[
v(x) \sim \frac{1}{|x|^{n-2}},
\]
we derive immediately that, at a negative minimum point $x_o$ of $w_\lambda$,
\[
\psi_\lambda^{\tau-1}(x_o) \sim \left(\frac{1}{|x_o|^{n-2}}\right)^{\tau-1} = \frac{1}{|x_o|^4};
\]
the power of $\frac{1}{|x_o|}$ is greater than two, and we have the desired decay rate for $c(x) := -\tau\psi_\lambda^{\tau-1}(x)$, as mentioned in Corollary 7.4.2. Hence we can apply the "Decay at Infinity" argument to $w_\lambda(x)$. The difference here is that $w_\lambda$ has a singularity
at $x^\lambda$; hence we need to show that the minimum of $w_\lambda$ stays away from $x^\lambda$. Actually, we will show that

If $\inf_{\tilde{\Sigma}_\lambda} w_\lambda(x) < 0$, then the infimum is achieved in $\Sigma_\lambda \setminus B_1(x^\lambda)$. (8.22)
To see this, we first note that, for $x \in B_1(0)$,
\[
v(x) \ge \min_{\partial B_1(0)} v(x) =: \epsilon_0 > 0,
\]
due to the fact that v(x) > 0 and $\Delta v \le 0$.

Then let λ be so negative that $v(x) \le \epsilon_0$ for $x \in B_1(x^\lambda)$. This is possible because $v(x) \to 0$ as $|x| \to \infty$.

For such λ, obviously $w_\lambda(x) \ge 0$ on $B_1(x^\lambda) \setminus \{x^\lambda\}$. This implies (8.22).
Now, similar to Step 1 in the proof of Theorem 8.2.2, we can deduce that $w_\lambda(x) \ge 0$ for λ sufficiently negative.
Step 2. Again, define
\[
\lambda_o = \sup\{\lambda \mid w_\lambda(x) \ge 0, \ \forall x \in \tilde{\Sigma}_\lambda\}.
\]
We will show that

If $\lambda_o < 0$, then $w_{\lambda_o}(x) \equiv 0$, $\forall x \in \tilde{\Sigma}_{\lambda_o}$.
Define $\bar{w}_\lambda = \frac{w_\lambda}{\phi}$ in the same way as in the proof of Theorem 8.2.2. Suppose that $\bar{w}_{\lambda_o}(x) \not\equiv 0$; then by the maximum principle we have
\[
\bar{w}_{\lambda_o}(x) > 0, \quad \text{for } x \in \tilde{\Sigma}_{\lambda_o}.
\]
The rest of the proof is similar to Step 2 in the proof of Theorem 8.2.2, except that now we need to take care of the singularities. Again, let $\lambda_k \searrow \lambda_o$ be a sequence such that $\bar{w}_{\lambda_k}(x) < 0$ for some $x \in \tilde{\Sigma}_{\lambda_k}$. We need to show that, for each k, $\inf_{\tilde{\Sigma}_{\lambda_k}} \bar{w}_{\lambda_k}(x)$ can be achieved at some point $x^k \in \tilde{\Sigma}_{\lambda_k}$, and that the sequence $\{x^k\}$ is bounded away from the singularities $x^{\lambda_k}$ of $w_{\lambda_k}$. This can be seen from the following facts:
can be seen from the following facts
a) There exists > 0 and δ > 0 such that
¯ w
λo
(x) ≥ for x ∈ B
δ
(x
λo
) ` ¦x
λo
¦.
b) lim
λ→λo
inf
x∈B
δ
(x
λ
)
¯ w
λ
(x) ≥ inf
x∈B
δ
(x
λo
)
¯ w
λo
(x) ≥ .
Fact (a) can be shown by noting that $\bar{w}_{\lambda_o}(x) > 0$ on $\tilde{\Sigma}_{\lambda_o}$ and $\Delta w_{\lambda_o} \le 0$, while fact (b) is obvious.

Now, through an argument similar to that in the proof of Theorem 8.2.2, one can easily arrive at a contradiction. Therefore $\bar{w}_{\lambda_o}(x) \equiv 0$.
If $\lambda_o = 0$, then we can carry out the above procedure in the opposite direction; namely, we move the plane in the negative $x_1$ direction, from positive infinity toward the origin. If our planes $T_\lambda$ stop somewhere before the origin, we derive the symmetry and monotonicity in the $x_1$ direction by the above argument. If they stop at the origin again, we also obtain the symmetry and monotonicity in the $x_1$ direction, by combining the two inequalities obtained in the two opposite directions. Since the $x_1$ direction can be chosen arbitrarily, we conclude that the solution u must be radially symmetric about some point.
ii) To show the nonexistence of solutions in the case $p < \frac{n+2}{n-2}$, we notice that after the Kelvin transform, v satisfies
\[
\Delta v + \frac{1}{|x|^{n+2-p(n-2)}}\, v^p(x) = 0, \quad x \in \mathbb{R}^n \setminus \{0\}.
\]
Due to the singularity of the coefficient of $v^p$ at the origin, one can easily see that v can only be symmetric about the origin if it is not identically zero. Hence u must also be symmetric about the origin. Now, given any two points $x^1$ and $x^2$ in $\mathbb{R}^n$, since equation (8.10) is invariant under translations and rotations, we may assume that the origin is the midpoint of the line segment $\overline{x^1 x^2}$. Then, from the above argument, we must have $u(x^1) = u(x^2)$. It follows that u is constant. Finally, from the equation (8.10), we conclude that $u \equiv 0$. This completes the proof of the Theorem.
8.2.3 Symmetry of Solutions for $-\Delta u = e^u$ in $\mathbb{R}^2$
When considering the problem of prescribing Gaussian curvature on two dimensional compact manifolds, if the sequence of approximate solutions "blows up", then by rescaling and taking a limit, one would arrive at the following equation in the entire space $\mathbb{R}^2$:
\[
\begin{cases}
\Delta u + \exp u = 0, & x \in \mathbb{R}^2, \\
\int_{\mathbb{R}^2} \exp u(x)\, dx < +\infty.
\end{cases} \tag{8.23}
\]
The classification of the solutions of this limiting equation would provide essential information on the original problems on manifolds; it is also interesting in its own right.
It is known that
\[
\phi_{\lambda,x^o}(x) = \ln \frac{32\lambda^2}{(4 + \lambda^2 |x - x^o|^2)^2},
\]
for any λ > 0 and any point $x^o \in \mathbb{R}^2$, is a family of explicit solutions.
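That these are indeed solutions can be verified by direct substitution; the following sympy computation (our own illustrative check, written in the radial variable $r = |x - x^o|$) confirms it:

```python
import sympy as sp

r, lam = sp.symbols('r lambda', positive=True)

u = sp.ln(32*lam**2 / (4 + lam**2*r**2)**2)
# two-dimensional radial Laplacian: u'' + u'/r
lap_u = sp.diff(u, r, 2) + sp.diff(u, r)/r

residual = sp.simplify(lap_u + sp.exp(u))
print(residual)  # 0
```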
We will use the method of moving planes to prove:

Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect to some point in $\mathbb{R}^2$, and hence assumes the form of $\phi_{\lambda,x^o}(x)$.

To this end, we first need to obtain some decay rate of the solutions near infinity.
Some Global and Asymptotic Behavior of the Solutions

The following Theorem gives the asymptotic behavior of the solutions near infinity, which is essential to the application of the method of moving planes.

Theorem 8.2.5 If u(x) is a solution of (8.23), then, as $|x| \to +\infty$,
\[
\frac{u(x)}{\ln|x|} \to -\frac{1}{2\pi}\int_{\mathbb{R}^2} \exp u(x)\,dx \le -4
\]
uniformly.
This Theorem is a direct consequence of the following two Lemmas.

Lemma 8.2.2 (W. Ding) If u is a solution of
\[
-\Delta u = e^u, \ x \in \mathbb{R}^2, \quad \text{and} \quad \int_{\mathbb{R}^2} \exp u(x)\,dx < +\infty,
\]
then
\[
\int_{\mathbb{R}^2} \exp u(x)\,dx \ge 8\pi.
\]
Proof. For $-\infty < t < \infty$, let $\Omega_t = \{x \mid u(x) > t\}$. One can obtain
\[
\int_{\Omega_t} \exp u(x)\,dx = -\int_{\Omega_t} \Delta u = \int_{\partial\Omega_t} |\nabla u|\,ds
\]
and
\[
-\frac{d}{dt}|\Omega_t| = \int_{\partial\Omega_t} \frac{ds}{|\nabla u|}.
\]
By the Schwarz inequality and the isoperimetric inequality,
\[
\int_{\partial\Omega_t} \frac{ds}{|\nabla u|} \cdot \int_{\partial\Omega_t} |\nabla u|\,ds \ \ge\ |\partial\Omega_t|^2 \ \ge\ 4\pi|\Omega_t|.
\]
Hence
\[
-\left(\frac{d}{dt}|\Omega_t|\right) \int_{\Omega_t} \exp u(x)\,dx \ \ge\ 4\pi|\Omega_t|,
\]
and so
\[
\frac{d}{dt}\left(\int_{\Omega_t} \exp u(x)\,dx\right)^2 = 2\exp t \left(\frac{d}{dt}|\Omega_t|\right)\int_{\Omega_t} \exp u(x)\,dx \ \le\ -8\pi|\Omega_t|\,e^t.
\]
Integrating from −∞ to ∞ gives
\[
-\left(\int_{\mathbb{R}^2} \exp u(x)\,dx\right)^2 \le -8\pi \int_{\mathbb{R}^2} \exp u(x)\,dx,
\]
which implies $\int_{\mathbb{R}^2} \exp u(x)\,dx \ge 8\pi$, as desired.
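For the explicit solutions $\phi_{\lambda,x^o}$, the lower bound 8π is exactly attained. The sympy integral below (an illustrative check of ours, computed in polar coordinates for $x^o = 0$) evaluates $\int_{\mathbb{R}^2} e^{\phi_{\lambda,0}}\,dx$:

```python
import sympy as sp

r, lam = sp.symbols('r lambda', positive=True)

# e^{phi_{lambda,0}} integrated over R^2 in polar coordinates
integrand = 32*lam**2/(4 + lam**2*r**2)**2 * 2*sp.pi*r
total = sp.integrate(integrand, (r, 0, sp.oo))
print(total)  # 8*pi
```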
Lemma 8.2.2 enables us to obtain the asymptotic behavior of the solutions
at inﬁnity.
Lemma 8.2.3 If u(x) is a solution of (8.23), then, as $|x| \to +\infty$,
\[
\frac{u(x)}{\ln|x|} \to -\frac{1}{2\pi}\int_{\mathbb{R}^2} \exp u(x)\,dx \quad \text{uniformly.}
\]
Proof. By a result of Brezis and Merle [BM], the condition $\int_{\mathbb{R}^2} \exp u(x)\,dx < \infty$ implies that the solution u is bounded from above.

Let
\[
w(x) = \frac{1}{2\pi}\int_{\mathbb{R}^2} \big(\ln|x-y| - \ln(|y|+1)\big)\exp u(y)\,dy.
\]
Then it is easy to see that
\[
\Delta w(x) = \exp u(x), \quad x \in \mathbb{R}^2,
\]
and we will show that
\[
\frac{w(x)}{\ln|x|} \to \frac{1}{2\pi}\int_{\mathbb{R}^2} \exp u(x)\,dx \quad \text{uniformly as } |x| \to +\infty. \tag{8.24}
\]
To see this, we need only verify that
\[
I := \int_{\mathbb{R}^2} \frac{\ln|x-y| - \ln(|y|+1) - \ln|x|}{\ln|x|}\, e^{u(y)}\,dy \to 0
\]
as $|x| \to \infty$. Write $I = I_1 + I_2 + I_3$, where $I_1$, $I_2$, and $I_3$ are the integrals over the three regions
\[
D_1 = \{y \mid |x-y| \le 1\}, \qquad D_2 = \{y \mid |x-y| > 1 \text{ and } |y| \le K\},
\]
and
\[
D_3 = \{y \mid |x-y| > 1 \text{ and } |y| > K\},
\]
respectively. We may assume that $|x| \ge 3$.
a) To estimate $I_1$, we simply notice that
\[
I_1 \le C\int_{|x-y|\le 1} e^{u(y)}\,dy - \frac{1}{\ln|x|}\int_{|x-y|\le 1} \ln|x-y|\, e^{u(y)}\,dy.
\]
Then, by the boundedness of $e^{u(y)}$ and of $\int_{\mathbb{R}^2} e^{u(y)}\,dy$, we see that $I_1 \to 0$ as $|x| \to \infty$.
b) For each fixed K, in the region $D_2$ we have, as $|x| \to \infty$,
\[
\frac{\ln|x-y| - \ln(|y|+1) - \ln|x|}{\ln|x|} \to 0,
\]
hence $I_2 \to 0$.
c) To see that $I_3 \to 0$, we use the fact that, for $|x-y| > 1$,
\[
\left|\frac{\ln|x-y| - \ln(|y|+1) - \ln|x|}{\ln|x|}\right| \le C.
\]
Then let $K \to \infty$. This verifies (8.24).
Consider the function v(x) = u(x) + w(x). Then $\Delta v \equiv 0$ and
\[
v(x) \le C + C_1 \ln(|x|+1)
\]
for some constants C and $C_1$. Therefore v must be a constant. This completes the proof of our Lemma.
Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain our Theorem 8.2.5.
The Method of Moving Planes and Symmetry

In this subsection, we apply the method of moving planes to establish the symmetry of the solutions. For a given solution, we move the family of lines which are orthogonal to a given direction from negative infinity to a critical position, and then show that the solution is symmetric in that direction about the critical position. We also show that the solution is strictly increasing before the critical position. Since the direction can be chosen arbitrarily, we conclude that the solution must be radially symmetric about some point. Finally, by the uniqueness of the solution of the following O.D.E. problem
\[
\begin{cases}
u''(r) + \frac{1}{r}\, u'(r) = f(u), \\
u'(0) = 0, \quad u(0) = 1,
\end{cases}
\]
we see that the solutions of (8.23) must assume the form $\phi_{\lambda,x^0}(x)$.
Assume that u(x) is a solution of (8.23). Without loss of generality, we show the monotonicity and symmetry of the solution in the $x_1$ direction.

For $\lambda \in \mathbb{R}^1$, let
\[
\Sigma_\lambda = \{(x_1, x_2) \mid x_1 < \lambda\}
\]
and
\[
T_\lambda = \partial\Sigma_\lambda = \{(x_1, x_2) \mid x_1 = \lambda\}.
\]
Let
\[
x^\lambda = (2\lambda - x_1, x_2)
\]
be the reflection point of $x = (x_1, x_2)$ about the line $T_\lambda$. (See the previous Figure 1.)

Define
\[
w_\lambda(x) = u(x^\lambda) - u(x).
\]
A straightforward calculation shows
\[
\Delta w_\lambda(x) + (\exp \psi_\lambda(x))\, w_\lambda(x) = 0, \tag{8.25}
\]
where $\psi_\lambda(x)$ is some number between u(x) and $u_\lambda(x)$.
Step 1. As in the previous examples, we show that, for λ sufficiently negative, we have
\[
w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda.
\]
By the "Decay at Infinity" argument, the key is to check the decay rate of $c(x) := -e^{\psi_\lambda(x)}$ at points $x^o$ where $w_\lambda(x)$ is negative. At such a point,
\[
u((x^o)^\lambda) < u(x^o), \quad \text{and hence} \quad \psi_\lambda(x^o) \le u(x^o).
\]
By virtue of Theorem 8.2.5, we have
\[
e^{\psi_\lambda(x^o)} = O\!\left(\frac{1}{|x^o|^4}\right). \tag{8.26}
\]
Notice that we are in dimension two, while the key function $\phi = \frac{1}{|x|^q}$ given in Corollary 7.4.2 requires $0 < q < n-2$; hence it does not work here. As a modification, we choose
\[
\phi(x) = \ln(|x| - 1).
\]
Then it is easy to verify that
\[
\frac{\Delta\phi}{\phi}(x) = \frac{-1}{|x|(|x|-1)^2 \ln(|x|-1)}.
\]
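This computation can be confirmed with the two-dimensional radial Laplacian. The sympy snippet below (our own illustrative check, valid for $|x| > 1$, with $r = |x|$) does so:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

phi = sp.ln(r - 1)
# radial Laplacian in R^2: phi'' + phi'/r
lap_phi = sp.diff(phi, r, 2) + sp.diff(phi, r)/r

residual = sp.simplify(lap_phi/phi + 1/(r*(r - 1)**2*sp.ln(r - 1)))
print(residual)  # 0
```

Note that $\Delta\phi/\phi$ decays only like $|x|^{-3}/\ln|x|$, so it dominates the $|x|^{-4}$ bound in (8.26) for large $|x|$.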
It follows from this and (8.26) that
\[
e^{\psi_\lambda(x^o)} + \frac{\Delta\phi}{\phi}(x^o) < 0 \quad \text{for sufficiently large } |x^o|. \tag{8.27}
\]
This is what we desire.
Then, similar to the argument in Subsection 5.2, we introduce the function
\[
\bar{w}_\lambda(x) = \frac{w_\lambda(x)}{\phi(x)}.
\]
It satisfies
\[
\Delta\bar{w}_\lambda + 2\,\nabla\bar{w}_\lambda \cdot \frac{\nabla\phi}{\phi} + \left( e^{\psi_\lambda(x)} + \frac{\Delta\phi}{\phi} \right)\bar{w}_\lambda = 0. \tag{8.28}
\]
Moreover, by the asymptotic behavior of the solution u near infinity (see Lemma 8.2.3), we have, for each fixed λ,
\[
\bar{w}_\lambda(x) = \frac{u(x^\lambda) - u(x)}{\ln(|x|-1)} \to 0, \quad \text{as } |x| \to \infty. \tag{8.29}
\]
Now, if $w_\lambda(x) < 0$ somewhere, then by (8.29) there exists a point $x^o$ which is a negative minimum of $\bar{w}_\lambda(x)$. At this point, one can easily derive a contradiction from (8.27) and (8.28). This completes Step 1.
Step 2. Define
\[
\lambda_o = \sup\{\lambda \mid w_\lambda(x) \ge 0, \ \forall x \in \Sigma_\lambda\}.
\]
We show that

If $\lambda_o < 0$, then $w_{\lambda_o}(x) \equiv 0$, $\forall x \in \Sigma_{\lambda_o}$.

The argument is entirely similar to that in Step 2 of the proof of Theorem 8.2.2, except that we use $\phi(x) = \ln(-x_1 + 2)$. This completes the proof of the Theorem.
8.3 Method of Moving Planes in a Local Way

8.3.1 The Background

Let M be a Riemannian manifold of dimension n ≥ 3 with metric $g_o$. Given a function R(x) on M, one interesting problem in differential geometry is to find a metric g that is pointwise conformal to the original metric $g_o$ and has scalar curvature R(x). This is equivalent to finding a positive solution of the semilinear elliptic equation
\[
-\frac{4(n-1)}{n-2}\,\Delta_o u + R_o(x)\,u = R(x)\,u^{\frac{n+2}{n-2}}, \quad x \in M, \tag{8.30}
\]
where $\Delta_o$ is the Beltrami-Laplace operator of $(M, g_o)$ and $R_o(x)$ is the scalar curvature of $g_o$.
In recent years, there has been a lot of progress in understanding equation (8.30). When $(M, g_o)$ is the standard sphere $S^n$, the equation becomes
\[
-\Delta u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}, \quad u > 0, \ x \in S^n. \tag{8.31}
\]
It is the so-called critical case, where the lack of compactness occurs. In this case, the well-known Kazdan-Warner condition gives rise to many examples of R for which there is no solution. In the last few years, a tremendous amount of effort has been put into finding the existence of solutions, and many interesting results have been obtained (see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).
One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. For equation (8.31) on $S^n$, to establish a priori estimates, a useful technique employed by most authors was a 'blowing-up' analysis. However, it does not work near the points where R(x) = 0. Due to this limitation, people had to assume that R was positive and bounded away from 0. This technical assumption became somewhat standard and has been used by many authors for quite a few years. For example, see the articles by Bahri
[Ba], Bahri and Coron [BC], Chang and Yang [CY1], Chang, Gursky, and
Yang [CY2], Li [Li2] [Li3], and Schoen and Zhang [SZ].
Then, for functions R with changing signs, are there any a priori estimates? In [CL6], we answered this question affirmatively. We removed the above well-known technical assumption and obtained a priori estimates on the solutions of (8.31) for functions R which are allowed to change signs.

The main difficulty encountered by the traditional 'blowing-up' analysis is near the points where R(x) = 0. To overcome this, we used the 'method of moving planes' in a local way to control the growth of the solutions in this region and obtained an a priori estimate.
In fact, we derived a priori bounds for the solutions of the more general equation
\[
-\Delta u + \frac{n(n-2)}{4}\,u = R(x)\,u^p, \quad u > 0, \ x \in S^n, \tag{8.32}
\]
for any exponent p between $1 + \frac{1}{A}$ and A, where A is an arbitrary positive number. For a fixed A, the bound is independent of p.
Proposition 8.3.1 Assume $R \in C^{2,\alpha}(S^n)$ and $|\nabla R(x)| \ge \beta_0$ for any x with $|R(x)| \le \delta_0$. Then there are positive constants ε and C, depending only on $\beta_0$, $\delta_0$, M, and $\|R\|_{C^{2,\alpha}(S^n)}$, such that for any solution u of (8.32) we have $u \le C$ in the region where $|R(x)| \le \epsilon$.
In this proposition, we essentially required R(x) to grow linearly near its zeros, which seems a little restrictive. Recently, in the case $p = \frac{n+2}{n-2}$, Lin [Lin1] weakened the condition by assuming only polynomial growth of R near its zeros. To prove this result, he first established a Liouville-type theorem in $\mathbb{R}^n$, then used a blowing-up argument.

Later, by introducing a new auxiliary function, we sharpened our method of moving planes to further weaken the condition and to obtain a more general result:
Theorem 8.3.1 Let M be a locally conformally flat manifold. Let
\[
\Gamma = \{x \in M \mid R(x) = 0\}.
\]
Assume that $\Gamma \in C^{2,\alpha}$ and $R \in C^{2,\alpha}(M)$ satisfies $\frac{\partial R}{\partial\nu} \le 0$, where ν is the outward normal (pointing to the region where R is negative) of Γ, and
\[
\frac{R(x)}{|\nabla R(x)|} \ \text{is continuous near } \Gamma, \ \text{and} \ = 0 \ \text{at } \Gamma. \tag{8.33}
\]
Let D be any compact subset of M and let p be any number greater than 1. Then the solutions of the equation
\[
-\Delta_o u + R_o(x)\,u = R(x)\,u^p \quad \text{in } M \tag{8.34}
\]
are uniformly bounded near Γ ∩ D.
One can see that our condition (8.33) is weaker than Lin's polynomial growth restriction. To illustrate, one may simply look at functions R(x) which grow like $\exp\{-\frac{1}{d(x)}\}$, where d(x) is the distance from the point x to Γ. Obviously, such functions satisfy our condition, but they do not have polynomial growth.
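To make the comparison concrete, here is a small sympy computation (our own illustration; d denotes the distance to Γ, so near Γ we have $|\nabla R| = |R'(d)|$) showing that $R(d) = e^{-1/d}$ satisfies condition (8.33):

```python
import sympy as sp

d = sp.symbols('d', positive=True)

R = sp.exp(-1/d)                     # vanishes faster than any polynomial as d -> 0+
ratio = sp.simplify(R / R.diff(d))   # R / |nabla R|, since R'(d) > 0 here

lim = sp.limit(ratio, d, 0, '+')
print(ratio)  # d**2
print(lim)    # 0
```

The ratio $R/|\nabla R| = d^2 \to 0$ at Γ, exactly as (8.33) demands, even though R itself has no polynomial lower bound near its zero set.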
Moreover, our restriction on the exponent is much weaker: besides $\frac{n+2}{n-2}$, p can be any number greater than 1.
8.3.2 The A Priori Estimates

Now we estimate the solutions of equation (8.34) and prove Theorem 8.3.1. Since M is a locally conformally flat manifold, a local flat metric can be chosen so that equation (8.34) in a compact set D ⊂ M can be reduced to
\[
-\Delta u = R(x)\,u^p, \quad p > 1, \tag{8.35}
\]
in a compact set K in $\mathbb{R}^n$. This is the equation we will consider throughout the section.
The proof of the Theorem is divided into two parts. We first derive the bound in the region(s) where R is negatively bounded away from zero. Then, based on this bound, we use the 'method of moving planes' to estimate the solutions in the region(s) surrounding the set Γ.

Part I. In the region(s) where R is negative

In this part, we derive estimates in the region(s) where R is negatively bounded away from zero. The result comes from a standard elliptic estimate for subharmonic functions and an integral bound on the solutions. We prove

Theorem 8.3.2 The solutions of (8.35) are uniformly bounded in the region $\{x \in K : R(x) \le 0 \text{ and } \operatorname{dist}(x, \Gamma) \ge \delta\}$. The bound depends only on δ and the lower bound of R.

To prove Theorem 8.3.2, we introduce the following lemma of Gilbarg and Trudinger [GT].
Lemma 8.3.1 (See Theorem 9.20 in [GT].) Let $u \in W^{2,n}(D)$ and suppose $\Delta u \ge 0$. Then for any ball $B_{2r}(x) \subset D$ and any q > 0, we have
\[
\sup_{B_r(x)} u \ \le\ C\left(\frac{1}{r^n}\int_{B_{2r}(x)} (u^+)^q\,dV\right)^{\frac{1}{q}}, \tag{8.36}
\]
where $C = C(n, D, q)$.
In virtue of this Lemma, to establish an a priori bound on the solutions, one only needs to obtain an integral bound. In fact, we prove
Lemma 8.3.2 Let B be any ball in K that is bounded away from Γ. Then there is a constant C such that
\[
\int_B u^p\,dx \le C. \tag{8.37}
\]
Proof. Let Ω be an open neighborhood of K such that $\operatorname{dist}(\partial\Omega, K) \ge 1$. Let ϕ be the first eigenfunction of −Δ in Ω, i.e.
\[
\begin{cases}
-\Delta\varphi = \lambda_1 \varphi(x), & x \in \Omega, \\
\varphi(x) = 0, & x \in \partial\Omega.
\end{cases}
\]
Let
\[
\operatorname{sgn} R =
\begin{cases}
1 & \text{if } R(x) > 0, \\
0 & \text{if } R(x) = 0, \\
-1 & \text{if } R(x) < 0.
\end{cases}
\]
Let $\alpha = \frac{1+2p}{p-1}$. Multiply both sides of equation (8.35) by $\varphi^\alpha |R|^\alpha \operatorname{sgn} R$ and then integrate. Taking into account that α > 2 and that ϕ and R are bounded in Ω, a straightforward calculation yields
\[
\int_\Omega \varphi^\alpha |R|^{1+\alpha} u^p\,dx \ \le\ C_1 \int_\Omega \varphi^{\alpha-2} |R|^{\alpha-2} u\,dx \ \le\ C_2 \int_\Omega \varphi^{\frac{\alpha}{p}} |R|^{\alpha-2} u\,dx.
\]
Applying the Hölder inequality to the right-hand side, we arrive at
\[
\int_\Omega \varphi^\alpha |R|^{1+\alpha} u^p\,dx \le C_3. \tag{8.38}
\]
Now (8.37) follows suit. This completes the proof of the Lemma.
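The choice $\alpha = \frac{1+2p}{p-1}$ is exactly what makes the Hölder step close: it solves $(\alpha-2)p = 1+\alpha$, so the factor $\varphi^{\alpha/p}|R|^{\alpha-2}u$ pairs with $\big(\varphi^\alpha|R|^{1+\alpha}u^p\big)^{1/p}$, and it satisfies α > 2 for every p > 1. A sympy check of this bookkeeping (our own illustration):

```python
import sympy as sp

p = sp.symbols('p', positive=True)
alpha = (1 + 2*p)/(p - 1)

pairing = sp.simplify((alpha - 2)*p - (1 + alpha))  # Hölder exponent pairing
excess = sp.simplify(alpha - 2)                     # margin above 2

print(pairing)  # 0
print(excess)   # 3/(p - 1)
```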
The Proof of Theorem 8.3.2. It is a direct consequence of Lemma 8.3.1 and Lemma 8.3.2.
Part II. In the region where R is small

In this part, we estimate the solutions in a neighborhood of Γ, where R is small.

Let u be a given solution of (8.35), and let $x_0 \in \Gamma$. It is sufficient to show that u is bounded in a neighborhood of $x_0$. The key ingredient is to use the method of moving planes to show that, near $x_0$, along any direction that is close to the direction of $\nabla R$, the values of the solution u are comparable. This result, together with the boundedness of the integral $\int u^p$ in a small neighborhood of $x_0$, leads to an a priori bound on the solutions. Since the method of moving planes cannot be applied directly to u, we construct some auxiliary functions. The proof consists of the following four steps.
Step 1. Transforming the region

Make a translation, a rotation, or, if necessary, a Kelvin transform, and denote the new coordinate system by $x = (x_1, y)$ with $y = (x_2, \dots, x_n)$, letting the $x_1$ axis point to the right. In the new system, $x_0$ becomes the origin, the image of the set $\Omega^+ := \{x \mid R(x) > 0\}$ is contained in the left of the y-plane, and the image of Γ is tangent to the y-plane at the origin. The image of Γ is also uniformly convex near the origin.

Let $x_1 = \phi(y)$ be the equation of the image of Γ near the origin. Let
\[
\partial_1 D := \{x \mid x_1 = \phi(y) + \epsilon, \ x_1 \ge -2\epsilon\}.
\]
The intersection of $\partial_1 D$ and the plane $\{x \in \mathbb{R}^n \mid x_1 = -2\epsilon\}$ encloses a part of that plane, which we denote by $\partial_2 D$. Let D be the region enclosed by the two surfaces $\partial_1 D$ and $\partial_2 D$. (See Figure 6.)
[Figure 6: the region D bounded by $\partial_1 D$ and $\partial_2 D$ near Γ, with R(x) > 0 to the left of Γ, R(x) < 0 to the right, and the moving plane $T_\lambda$.]
The small positive number ε is chosen to ensure that

(a) $\partial_1 D$ is uniformly convex,
(b) $\frac{\partial R}{\partial x_1} \le 0$ in D, and
(c) $\frac{R(x)}{|\nabla R|}$ is continuous in D.
Step 2. Introducing an auxiliary function

Let $m = \max_{\partial_1 D} u$, and let $\tilde{u}$ be a $C^2$ extension of u from $\partial_1 D$ to the entire boundary ∂D, such that
\[
0 \le \tilde{u} \le 2m \quad \text{and} \quad |\nabla\tilde{u}| \le Cm.
\]
Let w be the harmonic function defined by
\[
\begin{cases}
\Delta w = 0, & x \in D, \\
w = \tilde{u}, & x \in \partial D.
\end{cases}
\]
Then, by the maximum principle and a standard elliptic estimate, we have
\[
0 \le w(x) \le 2m \quad \text{and} \quad \|w\|_{C^1(D)} \le C_1 m. \tag{8.39}
\]
Introduce a new function
\[
v(x) = u(x) - w(x) + C_0 m\,[\epsilon + \phi(y) - x_1] + m\,[\epsilon + \phi(y) - x_1]^2,
\]
which is well defined in D. Here $C_0$ is a large constant to be determined later. One can easily verify that v(x) satisfies
\[
\begin{cases}
\Delta v + \psi(y) + f(x, v) = 0, & x \in D, \\
v(x) = 0, & x \in \partial_1 D,
\end{cases}
\]
where
\[
\psi(y) = -C_0 m\,\Delta\phi(y) - m\,\Delta[\epsilon + \phi(y)]^2 - 2m
\]
and
\[
f(x, v) = 2m x_1 \Delta\phi(y) + R(x)\big\{v + w(x) - C_0 m[\epsilon + \phi(y) - x_1] - m[\epsilon + \phi(y) - x_1]^2\big\}^p.
\]
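The equation for v can be verified symbolically: substituting the definitions, $\Delta v + \psi + f$ reduces exactly to $\Delta u - \Delta w + R u^p$, which vanishes since $-\Delta u = Ru^p$ and $\Delta w = 0$. Here is a sympy sketch of that reduction in the two variables $(x_1, y)$ (our own illustration; the same cancellation holds with $y \in \mathbb{R}^{n-1}$, and ε is written as eps):

```python
import sympy as sp

x1, y, eps, m, C0, p = sp.symbols('x1 y eps m C0 p', positive=True)
u = sp.Function('u')(x1, y)
w = sp.Function('w')(x1, y)
R = sp.Function('R')(x1, y)
phi = sp.Function('phi')(y)

lap = lambda g: sp.diff(g, x1, 2) + sp.diff(g, y, 2)

s = eps + phi - x1
v = u - w + C0*m*s + m*s**2
psi = -C0*m*lap(phi) - m*lap((eps + phi)**2) - 2*m
f = 2*m*x1*lap(phi) + R*u**p   # the braced expression in f equals u

# Delta v + psi + f minus (Delta u - Delta w + R u^p) should vanish identically
residual = sp.simplify(sp.expand(lap(v) + psi + f - (lap(u) - lap(w) + R*u**p)))
print(residual)  # 0
```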
At this step, we show that
\[
v(x) > 0, \quad \forall x \in D.
\]
We consider the following two cases.
Case 1) $\phi(y) + \frac{\epsilon}{2} \le x_1 \le \phi(y) + \epsilon$. In this region, R(x) is negative and bounded away from 0. By Theorem 8.3.2 and the standard elliptic estimate, we have
\[
\left|\frac{\partial u}{\partial x_1}\right| \le C_2 m
\]
for some positive constant $C_2$. It follows from this and (8.39) that
\[
\frac{\partial v}{\partial x_1} \le (C_1 + C_2 - C_0)m - 2m(\epsilon + \phi(y) - x_1).
\]
Noticing that $(\epsilon + \phi(y) - x_1) \ge 0$, one can choose $C_0$ sufficiently large such that
\[
\frac{\partial v}{\partial x_1} < 0.
\]
Then, since
\[
v(x) = 0, \quad \forall x \in \partial_1 D,
\]
we have v(x) > 0.
Case 2) $-2\epsilon \le x_1 \le \phi(y) + \frac{\epsilon}{2}$. In this region, again by (8.39), we have
\[
v(x) \ge -w(x) + C_0 m\,\frac{\epsilon}{2} \ge -2m + C_0 m\,\frac{\epsilon}{2}.
\]
Again, choosing $C_0$ sufficiently large (depending only on ε), we arrive at v(x) > 0.
Step 3. Applying the method of moving planes to v in the $x_1$ direction

In the previous step, we showed that v(x) ≥ 0 in D and v(x) = 0 on $\partial_1 D$. This lays a foundation for the moving of planes. Now we can start from the rightmost tip of the region D and move planes perpendicular to the $x_1$ axis toward the left. More precisely, let
\[
\Sigma_\lambda = \{x \in D \mid x_1 \ge \lambda\}, \qquad T_\lambda = \{x \in \mathbb{R}^n \mid x_1 = \lambda\}.
\]
Let $x^\lambda = (2\lambda - x_1, y)$ be the reflection of x with respect to the plane $T_\lambda$, and let
\[
v_\lambda(x) = v(x^\lambda) \quad \text{and} \quad w_\lambda(x) = v_\lambda(x) - v(x).
\]
We are going to show that
\[
w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda. \tag{8.40}
\]
We want to show that the above inequality holds for $-\epsilon_1 \le \lambda \le \epsilon$ for some $0 < \epsilon_1 < \epsilon$. This choice of $\epsilon_1$ guarantees that, when the plane $T_\lambda$ moves to $\lambda = -\epsilon_1$, the reflection of $\Sigma_\lambda$ about the plane $T_\lambda$ still lies in the region D.
Inequality (8.40) is true when λ is less than but close to ε, because $\Sigma_\lambda$ is then a narrow region and f(x, v) is Lipschitz continuous in v (actually, $\frac{\partial f}{\partial v} = p\,R(x)\,u^{p-1}$). For a detailed argument, the reader may see the first example in Section 5.
Now we decrease λ, that is, we move the plane $T_\lambda$ toward the left, as long as the inequality (8.40) remains true. We show that the moving of planes can be carried on provided
\[
f(x, v(x)) \le f(x^\lambda, v(x)) \quad \text{for } x = (x_1, y) \in D \text{ with } x_1 > \lambda > -\epsilon_1. \tag{8.41}
\]
In fact, it is easy to see that $v_\lambda$ satisfies the equation
\[
\Delta v_\lambda + \psi(y) + f(x^\lambda, v_\lambda) = 0,
\]
and hence $w_\lambda$ satisfies
\[
\begin{aligned}
-\Delta w_\lambda &= f(x^\lambda, v_\lambda) - f(x^\lambda, v) + f(x^\lambda, v) - f(x, v) \\
&= R(x^\lambda)(v_\lambda - v) + f(x^\lambda, v) - f(x, v) \\
&= R(x^\lambda)\, w_\lambda + f(x^\lambda, v) - f(x, v).
\end{aligned}
\]
If (8.41) holds, then we have
\[
-\Delta w_\lambda - R(x^\lambda)\, w_\lambda(x) \ge 0. \tag{8.42}
\]
This enables us to apply the maximum principle to $w_\lambda$.
Let
\[
\lambda_o = \inf\{\lambda \mid w_\lambda(x) \ge 0, \ \forall x \in \Sigma_\lambda\}.
\]
If $\lambda_o > -\epsilon_1$, then, since v(x) > 0 in D, by virtue of (8.42) and the maximum principle, we have
\[
w_{\lambda_o}(x) > 0, \ \forall x \in \Sigma_{\lambda_o}, \quad \text{and} \quad \frac{\partial w_{\lambda_o}}{\partial x_1} < 0, \ \forall x \in T_{\lambda_o} \cap \Sigma_{\lambda_o}.
\]
Then, similar to the argument in the examples in Section 5, one can derive a contradiction.
It is elementary to see that (8.41) holds if
\[
\frac{\partial f(x, v)}{\partial x_1} \le 0 \quad \text{in the set } \{x \in D \mid R(x) < 0 \ \text{or} \ x_1 > -2\epsilon_1\}.
\]
To estimate $\frac{\partial f}{\partial x_1}$, we use
\[
\frac{\partial f}{\partial x_1} = 2m\,\Delta\phi(y) + u^{p-1}\left\{\frac{\partial R}{\partial x_1}\,u + R(x)\,p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\right]\right\}.
\]
First, we notice that the uniform convexity of the image of Γ near the origin implies
\[
\Delta\phi(y) \le -a_0 < 0 \tag{8.43}
\]
for some constant $a_0$.
We consider the following two possibilities.

a) $R(x) \le 0$. Choose $C_0$ sufficiently large, so that
\[
\frac{\partial w}{\partial x_1} + C_0 m \ge 0.
\]
Noticing that $\frac{\partial R}{\partial x_1} \le 0$ and $(\epsilon + \phi(y) - x_1) \ge 0$ in D, we have
\[
\frac{\partial f}{\partial x_1} \le 0.
\]
b) $R(x) > 0$, $x = (x_1, y)$ with $x_1 > -2\epsilon_1$.

In the part where u ≥ 1, we use u to control
\[
\left(R(x)\Big/\frac{\partial R}{\partial x_1}\right) p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\right].
\]
More precisely, we write
\[
\frac{\partial f}{\partial x_1} = 2m\,\Delta\phi(y) + u^{p-1}\,\frac{\partial R}{\partial x_1}\left\{u + \left(R(x)\Big/\frac{\partial R}{\partial x_1}\right) p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\right]\right\}.
\]
Choose $\epsilon_1$ sufficiently small, such that
\[
\left|\frac{\partial R}{\partial x_1}\right| \sim |\nabla R|.
\]
Then, by condition (8.33) on R, we can make $R(x)\big/\frac{\partial R}{\partial x_1}$ arbitrarily small by a small choice of $\epsilon_1$, and therefore arrive at $\frac{\partial f}{\partial x_1} \le 0$.
In the part where u ≤ 1, in virtue of (8.43), we can use $2m\,\Delta\phi(y)$ to control
\[
R\,p\left[\frac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\right].
\]
Here again we use the smallness of R.
So far, our conclusion is: the method of moving planes can be carried on up to $\lambda = -\epsilon_1$. More precisely, for any λ between $-\epsilon_1$ and ε, the inequality (8.40) is true.
Step 4. Deriving the a priori bound

Inequality (8.40) implies that, in a small neighborhood of the origin, the function v(x) is monotone decreasing in the $x_1$ direction. A similar argument shows that this remains true if we rotate the $x_1$ axis by a small angle. Therefore, for any point $x_0 \in \Gamma$, one can find a cone $\Delta_{x_0}$ with $x_0$ as its vertex, staying to the left of $x_0$, such that
\[
v(x) \ge v(x_0), \quad \forall x \in \Delta_{x_0}. \tag{8.44}
\]
Noticing that w(x) is bounded in D, (8.44) leads immediately to
\[
u(x) + C_6 \ge u(x_0), \quad \forall x \in \Delta_{x_0}. \tag{8.45}
\]
More generally, by a similar argument, one can show that (8.45) is true for any point $x_0$ in a small neighborhood of Γ. Furthermore, the intersection of the cone $\Delta_{x_0}$ with the set $\{x \mid R(x) \ge \frac{\delta_0}{2}\}$ has positive measure, and the lower bound of the measure depends only on $\delta_0$ and the $C^1$ norm of R. Now the a priori bound on the solutions is a consequence of (8.45) and an integral bound on u, which can be derived from Lemma 8.3.2.

This completes the proof of Theorem 8.3.1.
8.4 Method of Moving Spheres

8.4.1 The Background

Given a function K(x) on the two dimensional standard sphere $S^2$, the well-known Nirenberg problem is to find conditions on K(x) so that it can be realized as the Gaussian curvature of some conformally related metric. This is equivalent to solving the following nonlinear elliptic equation on $S^2$:
\[
-\Delta u + 1 = K(x)\, e^{2u}, \tag{8.46}
\]
where Δ is the Laplacian operator associated to the standard metric.
On the higher dimensional sphere $S^n$, a similar problem was raised by Kazdan and Warner: which functions R(x) can be realized as the scalar curvature of some conformally related metric? The problem is equivalent to the existence of positive solutions of another elliptic equation:
\[
-\Delta u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}. \tag{8.47}
\]
For a unifying description, on $S^2$ we let R(x) = 2K(x) be the scalar curvature; then (8.46) is equivalent to
\[
-\Delta u + 2 = R(x)\, e^u. \tag{8.48}
\]
Both equations are so-called 'critical' in the sense that a lack of compactness occurs. Besides the obvious necessary condition that R(x) be positive somewhere, there are other well-known obstructions, found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions are:
\[
\int_{S^n} X(R)\, dA_g = 0, \tag{8.49}
\]
where $dA_g$ is the volume element of the metric $g = e^u g_o$ for n = 2 and $g = u^{\frac{4}{n-2}} g_o$ for n ≥ 3, respectively. The vector field X in (8.49) is a conformal Killing field associated to the standard metric $g_o$ on $S^n$. In the following, we call these necessary conditions Kazdan-Warner type conditions.
These conditions give rise to many examples of R(x) for which (8.48) and (8.47) have no solution. In particular, a monotone rotationally symmetric function R admits no solution.

Then, for which R can one solve the equations? What are the necessary and sufficient conditions for the equations to have a solution? These are very interesting problems in geometry.

In recent years, various sufficient conditions were obtained by many authors (for example, see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS] [ES] [Ha] [Ho] [KW1] [Kw2] [Li1] [Li2] [Mo] and the references therein). However, there are still gaps between those sufficient conditions and the necessary ones. Then one may naturally ask: 'Are the necessary conditions of Kazdan-Warner type also sufficient?' This question had been open for many years. In [CL3], we gave a negative answer to this question.
To find necessary and sufficient conditions, it is natural to begin with
functions having certain symmetries. A pioneering work in this direction is [Mo]
by Moser. He showed that, for even functions R on S^2, the necessary and
sufficient condition for (8.48) to have a solution is that R be positive somewhere.

Note that, in the class of even functions, (8.49) is satisfied automatically. Thus
one can interpret Moser's result as:

In the class of even functions, the Kazdan-Warner type conditions are the
necessary and sufficient conditions for (8.48) to have a solution.

A simple question was asked by Kazdan [Ka]:

What are the necessary and sufficient conditions for rotationally symmetric
functions?

To be more precise, for rotationally symmetric functions R = R(\theta), where
\theta is the distance from the north pole, the Kazdan-Warner type conditions take
the form:

R > 0 somewhere and R' changes signs.  (8.50)
In [XY], Xu and Yang found a family of rotationally symmetric functions
R satisfying conditions (8.50) for which problem (8.48) has no rotationally
symmetric solution. Even though one did not know whether there were any
other, nonsymmetric, solutions for these functions R, this result is still very
interesting, because it suggests that (8.50) may not be a sufficient condition.

In our earlier paper [CL3], we presented a class of rotationally symmetric
functions satisfying (8.50) for which problem (8.48) has no solution at all, and
thus, for the first time, pointed out that the Kazdan-Warner type conditions
are not sufficient for (8.48) to have a solution. First, we proved the nonexistence
of rotationally symmetric solutions:

Proposition 8.4.1 Let R be rotationally symmetric. If

R is monotone in the region where R > 0, and R \not\equiv C,  (8.51)

then problems (8.48) and (8.47) have no rotationally symmetric solution.

This generalizes Xu and Yang's result, since their family of functions R
satisfies (8.51). We also cover the higher dimensional cases.

Although we believed that for all such functions R there is no solution at
all, we were not able to prove it at that time. However, we could show this
for a family of such functions.

Proposition 8.4.2 There exists a family of functions R satisfying the
Kazdan-Warner type conditions (8.50) for which problem (8.48) has no solution
at all.

At that time, we were not able to obtain the counterpart of this result in
higher dimensions.

In [XY], Xu and Yang also proved:

Proposition 8.4.3 Let R be rotationally symmetric. Assume that

i) R is nondegenerate,
ii) R' changes signs in the region where R > 0.  (8.52)

Then problem (8.48) has a solution.
The above results and many other existence results tend to lead people
to believe that what really counts is whether R' changes signs in the region
where R > 0. We conjectured in [CL3] that for rotationally symmetric
R, condition (8.52), instead of (8.50), would be the necessary and sufficient
condition for (8.48) or (8.47) to have a solution.

The main objective of this section is to present the proof of the necessary
part of the conjecture, which appeared in our later paper [CL4]. In [CL3], to
prove Proposition 8.4.2, we used the method of 'moving planes' to show that
the solutions of (8.48) are rotationally symmetric. This method only works
for a special family of functions satisfying (8.51), and it does not work in higher
dimensions. In [CL4], we introduced a new idea, which we call the method of
'moving spheres'. It works in all dimensions, and for all functions satisfying (8.51).
Instead of showing the symmetry of the solutions, we obtain a comparison
formula which leads to a direct contradiction. In an earlier work [P], a similar
method was used by P. Padilla to show the radial symmetry of the solutions
for some nonlinear Dirichlet problems on annuli.
The following are our main results.

Theorem 8.4.1 Let R be continuous and rotationally symmetric. If

R is monotone in the region where R > 0, and R \not\equiv C,  (8.53)

then problems (8.48) and (8.47) have no solution at all.

We also generalize this result to a family of nonsymmetric functions.

Theorem 8.4.2 Let R be a continuous function. Let B be a geodesic ball
centered at the North Pole x_{n+1} = 1. Assume that

i) R(x) \geq 0, and R is nondecreasing in the x_{n+1} direction, for x \in B;
ii) R(x) \leq 0, for x \in S^n \setminus B.

Then problems (8.48) and (8.47) have no solution at all.

Combining our Theorem 8.4.1 with Xu and Yang's Proposition 8.4.3, one can
obtain a necessary and sufficient condition in the nondegenerate case:

Corollary 8.4.1 Let R be rotationally symmetric and nondegenerate in the
sense that R'' \neq 0 whenever R' = 0. Then the necessary and sufficient condition
for (8.48) to have a solution is

R > 0 somewhere and R' changes signs in the region where R > 0.
8.4.2 Necessary Conditions

In this subsection, we prove Theorem 8.4.1 and Theorem 8.4.2.

For convenience, we make a stereographic projection from S^n to R^n. Then
it is equivalent to consider the following equations in Euclidean space R^n:

-\Delta u = R(r) e^{u}, \quad x \in R^2,  (8.54)

-\Delta u = R(r) u^{\tau}, \quad u > 0, \ x \in R^n, \ n \geq 3,  (8.55)

with appropriate asymptotic growth of the solutions at infinity:

u \sim -4 \ln |x| for n = 2; \quad u \sim C |x|^{2-n} for n \geq 3,  (8.56)

for some C > 0. Here \Delta is the Euclidean Laplacian operator, r = |x|, and
\tau = \frac{n+2}{n-2} is the so-called critical Sobolev exponent. The function R(r) is the
projection of the original R in equations (8.48) and (8.47); this projection
does not change the monotonicity of the function. The function R is also
bounded and continuous, and we assume this throughout the subsection.
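Before restricting R, it may help to see the critical exponent in (8.55) at work
numerically. The sketch below is an illustration only: it takes the constant
curvature R \equiv 1 (which of course violates (8.53)) and checks by finite
differences that the standard 'bubble' u(x) = 3^{1/4}(1+|x|^2)^{-1/2} solves
-\Delta u = u^{\tau} in R^3, where \tau = 5.

```python
import numpy as np

def u(x):
    # Standard "bubble" in R^3: u(x) = 3^{1/4} (1 + |x|^2)^{-1/2},
    # a positive solution of -Delta u = u^5 (equation (8.55) with R == 1, tau = 5).
    return 3**0.25 * (1.0 + np.dot(x, x))**-0.5

def neg_laplacian(f, x, h=1e-3):
    # -Delta f at x via central second differences in each coordinate.
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        total += (f(x + e) + f(x - e) - 2.0 * f(x)) / h**2
    return -total

x = np.array([0.3, 0.2, 0.1])
lhs = neg_laplacian(u, x)
rhs = u(x)**5          # tau = (n+2)/(n-2) = 5 for n = 3
# lhs and rhs agree to finite-difference accuracy
```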
For a radial function R satisfying (8.53), there are three possibilities:

i) R is nonpositive everywhere;
ii) R is nonnegative everywhere and monotone, with R \not\equiv C;
iii) R > 0 and nonincreasing for r < r_o, and R \leq 0 for r > r_o; or vice versa.

The first two cases violate the Kazdan-Warner type conditions, hence there
is no solution. Thus, we only need to show that in the remaining case there
is also no solution. Without loss of generality, we may assume that:

R(r) > 0, \ R'(r) \leq 0 for r < 1; \quad R(r) \leq 0 for r \geq 1.  (8.57)
The proof of Theorem 8.4.1 is based on the following comparison formulas.

Lemma 8.4.1 Let R be a continuous function satisfying (8.57). Let u be a
solution of (8.54) and (8.56) for n = 2. Then

u(\lambda x) > u\left(\frac{\lambda x}{|x|^2}\right) - 4 \ln |x|, \quad \forall x \in B_1(0), \ 0 < \lambda \leq 1.  (8.58)

Lemma 8.4.2 Let R be a continuous function satisfying (8.57). Let u be a
solution of (8.55) and (8.56) for n > 2. Then

u(\lambda x) > |x|^{2-n} \, u\left(\frac{\lambda x}{|x|^2}\right), \quad \forall x \in B_1(0), \ 0 < \lambda \leq 1.  (8.59)
We first prove Lemma 8.4.2. A similar idea will work for Lemma 8.4.1.

Proof of Lemma 8.4.2. We use a new idea called the method of
'moving spheres'. Let x \in B_1(0); then \lambda x \in B_\lambda(0). The reflection point of \lambda x
about the sphere \partial B_\lambda(0) is \frac{\lambda x}{|x|^2}. We compare the values of u at the pairs
of points \lambda x and \frac{\lambda x}{|x|^2}. In Step 1, we start with the unit sphere and show that
(8.59) is true for \lambda = 1. This is done by the Kelvin transform and the strong
maximum principle. Then in Step 2, we move (shrink) the sphere \partial B_\lambda(0)
towards the origin. We show that one can always shrink \partial B_\lambda(0) a little bit
before reaching the origin. By doing so, we establish the inequality for all
x \in B_1(0) and \lambda \in (0, 1].
Step 1. In this step, we establish the inequality for x \in B_1(0), \lambda = 1.
We show that

u(x) > |x|^{2-n} \, u\left(\frac{x}{|x|^2}\right), \quad \forall x \in B_1(0).  (8.60)

Let v(x) = |x|^{2-n} u\left(\frac{x}{|x|^2}\right) be the Kelvin transform of u. Then it is easy to
verify that v satisfies the equation

-\Delta v = R\left(\frac{1}{r}\right) v^{\tau}  (8.61)

and v is regular.

It follows from (8.57) that \Delta u < 0 and \Delta v \geq 0 in B_1(0). Thus
-\Delta(u - v) > 0. Applying the maximum principle, we obtain

u > v in B_1(0),  (8.62)

which is equivalent to (8.60).
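Step 1 revolves around the Kelvin transform v(x) = |x|^{2-n} u(x/|x|^2). As a
quick sanity check (an illustrative computation, not part of the proof), the
standard bubble (1+|x|^2)^{-(n-2)/2} with n = 3 is a fixed point of this transform:

```python
import numpy as np

def u(x):
    # t = 1 bubble centered at the origin in R^3 (so 2 - n = -1).
    return (1.0 + np.dot(x, x))**-0.5

def kelvin(f, x, n=3):
    # v(x) = |x|^{2-n} f(x / |x|^2), the Kelvin transform used in Step 1.
    x = np.asarray(x, dtype=float)
    r2 = np.dot(x, x)
    return r2**((2 - n) / 2.0) * f(x / r2)

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=3)
    # the bubble is invariant under the Kelvin transform
    assert abs(kelvin(u, x) - u(x)) < 1e-12
```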
Step 2. In this step, we move the sphere \partial B_\lambda(0) towards \lambda = 0.
We show that the moving cannot be stopped until \lambda reaches the origin. This
is done again by the Kelvin transform and the strong maximum principle.

Let u_\lambda(x) = \lambda^{\frac{n}{2}-1} u(\lambda x). Then

-\Delta u_\lambda = R(\lambda r) u_\lambda^{\tau}(x).  (8.63)

Let v_\lambda(x) = |x|^{2-n} u_\lambda\left(\frac{x}{|x|^2}\right) be the Kelvin transform of u_\lambda. Then it is easy
to verify that

-\Delta v_\lambda = R\left(\frac{\lambda}{r}\right) v_\lambda^{\tau}(x).  (8.64)
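The normalization \lambda^{n/2-1} in u_\lambda is exactly the one that preserves the critical
equation. A numerical sketch (again with the illustrative constant curvature
R \equiv 1 and n = 3, so that an explicit solution is available) checks that the
rescaled bubble still solves the same equation:

```python
import numpy as np

n = 3
def u(x):
    # -Delta u = u^5 in R^3 (R normalized to 1; a sanity check only,
    # since the theorem itself concerns nonconstant R).
    return 3**0.25 * (1.0 + np.dot(x, x))**-0.5

def u_lam(x, lam):
    # u_lambda(x) = lambda^{n/2 - 1} u(lambda x), the scaling used in Step 2.
    return lam**(n / 2.0 - 1.0) * u(lam * np.asarray(x))

def neg_laplacian(f, x, h=1e-3):
    x = np.asarray(x, dtype=float)
    s = 0.0
    for i in range(n):
        e = np.zeros(n); e[i] = h
        s += (f(x + e) + f(x - e) - 2.0 * f(x)) / h**2
    return -s

lam = 0.7
x = np.array([0.4, -0.2, 0.5])
lhs = neg_laplacian(lambda y: u_lam(y, lam), x)
rhs = u_lam(x, lam)**5
# the scaling leaves the critical equation invariant
```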
Let w_\lambda = u_\lambda - v_\lambda. Then by (8.63) and (8.64),

\Delta w_\lambda + R\left(\frac{\lambda}{r}\right) \psi_\lambda w_\lambda = \left[ R\left(\frac{\lambda}{r}\right) - R(\lambda r) \right] u_\lambda^{\tau},  (8.65)

where \psi_\lambda is some function with values between \tau u_\lambda^{\tau-1} and \tau v_\lambda^{\tau-1}. Taking into
account assumption (8.57), we have

R\left(\frac{\lambda}{r}\right) - R(\lambda r) \leq 0 for r \leq 1, \ \lambda \leq 1.

It follows from (8.65) that

\Delta w_\lambda + C_\lambda(x) w_\lambda \leq 0,  (8.66)

where C_\lambda(x) is a bounded function if \lambda is bounded away from 0.
It is easy to see that for any \lambda, strict inequality holds somewhere in (8.66).
Thus, applying the strong maximum principle, we see that the inequality
(8.59) is equivalent to

w_\lambda \geq 0.  (8.67)

From Step 1, we know (8.67) is true for (x, \lambda) \in B_1(0) \times \{1\}. Now we decrease
\lambda. Suppose (8.67) does not hold for all \lambda \in (0, 1], and let \lambda_o > 0 be the smallest
number such that the inequality is true for (x, \lambda) \in B_1(0) \times [\lambda_o, 1]. We will
derive a contradiction by showing that for \lambda close to and less than \lambda_o, the
inequality is still true. In fact, we can apply the strong maximum principle
and then the Hopf lemma to (8.66) for \lambda = \lambda_o to get:

w_{\lambda_o} > 0 in B_1 and \frac{\partial w_{\lambda_o}}{\partial r} < 0 on \partial B_1.  (8.68)

These, combined with the fact that w_\lambda \equiv 0 on \partial B_1, imply that (8.67) holds
for \lambda close to and less than \lambda_o.
Proof of Lemma 8.4.1. In Step 1, we let

v(x) = u\left(\frac{x}{|x|^2}\right) - 4 \ln |x|.

In Step 2, let

u_\lambda(x) = u(\lambda x) + 2 \ln \lambda, \quad v_\lambda(x) = u_\lambda\left(\frac{x}{|x|^2}\right) - 4 \ln |x|.

Then, arguing as in the proof of Lemma 8.4.2, we derive the conclusion of
Lemma 8.4.1.

Now we are ready to prove Theorem 8.4.1.

Proof of Theorem 8.4.1. Taking \lambda to 0 in (8.58), we get \ln |x| > 0
for |x| < 1, which is impossible. Letting \lambda \rightarrow 0 in (8.59) and using the fact
that u(0) > 0, we again obtain a contradiction. These complete the proof of
Theorem 8.4.1.

Proof of Theorem 8.4.2. Noting that in the proof of Theorem
8.4.1 we only compare the values of the solutions along the same radial
direction, one can easily adapt the argument in the proof of Theorem 8.4.1 to
derive the conclusion of Theorem 8.4.2.
8.5 Method of Moving Planes in Integral Forms and
Symmetry of Solutions for an Integral Equation

Let R^n be the n-dimensional Euclidean space, and let \alpha be a real number
satisfying 0 < \alpha < n. Consider the integral equation

u(x) = \int_{R^n} \frac{1}{|x-y|^{n-\alpha}} \, u(y)^{(n+\alpha)/(n-\alpha)} \, dy.  (8.69)

It arises as an Euler-Lagrange equation for a functional under a constraint
in the context of the Hardy-Littlewood-Sobolev inequalities. In [L], Lieb classified
the maximizers of the functional, and thus obtained the best constant
in the HLS inequalities. He then posed the classification of all the critical
points of the functional, that is, the solutions of the integral equation (8.69),
as an open problem.

This integral equation is also closely related to the following family of
semilinear partial differential equations

(-\Delta)^{\alpha/2} u = u^{(n+\alpha)/(n-\alpha)}.  (8.70)

In the special case n \geq 3 and \alpha = 2, it becomes

-\Delta u = u^{(n+2)/(n-2)}.  (8.71)

As we mentioned in Section 6, solutions to this equation were studied by
Gidas, Ni, and Nirenberg [GNN]; Caffarelli, Gidas, and Spruck [CGS]; Chen
and Li [CL]; and Li [Li]. Recently, Wei and Xu [WX] generalized this result
to the solutions of (8.70) with \alpha being any even number between 0 and n.
Apparently, for other real values of \alpha between 0 and n, equation (8.70) is
also of practical interest and importance. For instance, it arises as the
Euler-Lagrange equation of the functional

I(u) = \int_{R^n} |(-\Delta)^{\frac{\alpha}{4}} u|^2 \, dx \Big/ \left( \int_{R^n} |u|^{\frac{2n}{n-\alpha}} \, dx \right)^{\frac{n-\alpha}{n}}.

The classification of the solutions would provide the best constant in the
inequality of the critical Sobolev imbedding from H^{\frac{\alpha}{2}}(R^n) to L^{\frac{2n}{n-\alpha}}(R^n):

\left( \int_{R^n} |u|^{\frac{2n}{n-\alpha}} \, dx \right)^{\frac{n-\alpha}{n}} \leq C \int_{R^n} |(-\Delta)^{\frac{\alpha}{4}} u|^2 \, dx.

As usual, here (-\Delta)^{\alpha/2} is defined by the Fourier transform. We can show
(see [CLO]) that all the solutions of the partial differential equation (8.70) satisfy
our integral equation (8.69), and vice versa. Therefore, to classify the solutions
of this family of partial differential equations, we only need to work on the
integral equation (8.69).
In order for either the Hardy-Littlewood-Sobolev inequality or the above-mentioned
critical Sobolev imbedding to make sense, u must be in L^{\frac{2n}{n-\alpha}}(R^n). Under
this assumption, we can show (see [CLO]) that a positive solution u is in
fact bounded, and therefore possesses higher regularity. Furthermore, by using
the method of moving planes, we can show that if a solution is locally L^{\frac{2n}{n-\alpha}},
then it is in L^{\frac{2n}{n-\alpha}}(R^n). Hence, in the following, we call a solution u regular if
it is locally L^{\frac{2n}{n-\alpha}}. It is interesting to notice that if this condition is violated,
then a solution may not be bounded. A simple example is

u = \frac{1}{|x|^{(n-\alpha)/2}},

which is a singular solution. We studied such solutions in [CLO1].
We will use the method of moving planes to prove

Theorem 8.5.1 Every positive regular solution u(x) of (8.69) is radially
symmetric and decreasing about some point x_o, and therefore assumes the form

c \left( \frac{t}{t^2 + |x - x_o|^2} \right)^{(n-\alpha)/2},  (8.72)

for some positive constants c and t.

To better illustrate the idea, we will first prove the Theorem under the stronger
assumption that u \in L^{\frac{2n}{n-\alpha}}(R^n). Then we will show how the idea of this
proof can be extended under the weaker condition that u is only locally in L^{\frac{2n}{n-\alpha}}.
For a given real number \lambda, define

\Sigma_\lambda = \{ x = (x_1, \ldots, x_n) \mid x_1 \leq \lambda \},

and let x^\lambda = (2\lambda - x_1, x_2, \ldots, x_n) and u_\lambda(x) = u(x^\lambda). (Again, see the
previous Figure 1.)
Lemma 8.5.1 For any solution u(x) of (8.69), we have

u(x) - u_\lambda(x) = \int_{\Sigma_\lambda} \left( \frac{1}{|x-y|^{n-\alpha}} - \frac{1}{|x^\lambda - y|^{n-\alpha}} \right) \left( u(y)^{\frac{n+\alpha}{n-\alpha}} - u_\lambda(y)^{\frac{n+\alpha}{n-\alpha}} \right) dy.  (8.73)

The same identity is also true for v(x) = \frac{1}{|x|^{n-\alpha}} u\left(\frac{x}{|x|^2}\right), the Kelvin type
transform of u(x), for any x \neq 0.

Proof. Since |x - y^\lambda| = |x^\lambda - y|, we have

u(x) = \int_{\Sigma_\lambda} \frac{1}{|x-y|^{n-\alpha}} u(y)^{\frac{n+\alpha}{n-\alpha}} \, dy + \int_{\Sigma_\lambda} \frac{1}{|x^\lambda - y|^{n-\alpha}} u_\lambda(y)^{\frac{n+\alpha}{n-\alpha}} \, dy, and

u(x^\lambda) = \int_{\Sigma_\lambda} \frac{1}{|x^\lambda - y|^{n-\alpha}} u(y)^{\frac{n+\alpha}{n-\alpha}} \, dy + \int_{\Sigma_\lambda} \frac{1}{|x-y|^{n-\alpha}} u_\lambda(y)^{\frac{n+\alpha}{n-\alpha}} \, dy.

This implies (8.73). The conclusion for v(x) follows similarly, since it satisfies
the same integral equation (8.69) for x \neq 0.
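The only geometric fact used in this proof is the reflection identity
|x - y^\lambda| = |x^\lambda - y|. A quick numerical confirmation (illustrative, in R^4):

```python
import numpy as np

def reflect(x, lam):
    # Reflection about the plane x_1 = lambda:
    # x^lambda = (2*lam - x_1, x_2, ..., x_n).
    y = np.array(x, dtype=float)
    y[0] = 2.0 * lam - y[0]
    return y

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lam = float(rng.normal())
    # reflecting either point gives the same distance
    assert np.isclose(np.linalg.norm(x - reflect(y, lam)),
                      np.linalg.norm(reflect(x, lam) - y))
```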
Proof of Theorem 8.5.1 (under the stronger assumption u \in L^{\frac{2n}{n-\alpha}}(R^n)).

Let w_\lambda(x) = u_\lambda(x) - u(x), and write \tau = \frac{n+\alpha}{n-\alpha} throughout this section,
so that \tau + 1 = \frac{2n}{n-\alpha}. As in Section 6, we will first show that, for \lambda
sufficiently negative,

w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda.  (8.74)

Then we can start moving the plane T_\lambda = \partial \Sigma_\lambda from near -\infty to the right,
as long as inequality (8.74) holds. Let

\lambda_o = \sup \{ \lambda \mid w_\lambda(x) \geq 0, \ \forall x \in \Sigma_\lambda \}.

We will show that

\lambda_o < \infty, and w_{\lambda_o}(x) \equiv 0.

Unlike for partial differential equations, here we do not have any differential
equation for w_\lambda. Instead, we will introduce a new idea: the method of moving
planes in integral forms. We will exploit some global properties of the integral
equations.
Step 1. Start moving the plane from near -\infty.

Define

\Sigma_\lambda^- = \{ x \in \Sigma_\lambda \mid w_\lambda(x) < 0 \}.  (8.75)

We show that for sufficiently negative values of \lambda, \Sigma_\lambda^- must be empty. By
Lemma 8.5.1, it is easy to verify that

u(x) - u_\lambda(x) \leq \int_{\Sigma_\lambda^-} \left( \frac{1}{|x-y|^{n-\alpha}} - \frac{1}{|x^\lambda - y|^{n-\alpha}} \right) [u^\tau(y) - u_\lambda^\tau(y)] \, dy
\leq \int_{\Sigma_\lambda^-} \frac{1}{|x-y|^{n-\alpha}} [u^\tau(y) - u_\lambda^\tau(y)] \, dy
= \tau \int_{\Sigma_\lambda^-} \frac{1}{|x-y|^{n-\alpha}} \psi_\lambda^{\tau-1}(y) [u(y) - u_\lambda(y)] \, dy
\leq \tau \int_{\Sigma_\lambda^-} \frac{1}{|x-y|^{n-\alpha}} u^{\tau-1}(y) [u(y) - u_\lambda(y)] \, dy,
where \psi_\lambda(y) is valued between u_\lambda(y) and u(y) by the Mean Value Theorem;
since u_\lambda(y) < u(y) on \Sigma_\lambda^-, we have \psi_\lambda(y) \leq u(y).

We first apply the classical Hardy-Littlewood-Sobolev inequality (in its
equivalent form) to the above to obtain, for any q > \frac{n}{n-\alpha}:

\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C \, \| u^{\tau-1} w_\lambda \|_{L^{nq/(n+\alpha q)}(\Sigma_\lambda^-)}
= C \left( \int_{\Sigma_\lambda^-} |u^{\tau-1}(x)|^{\frac{nq}{n+\alpha q}} |w_\lambda(x)|^{\frac{nq}{n+\alpha q}} \, dx \right)^{\frac{n+\alpha q}{nq}}.
Then we apply the Hölder inequality to the last integral above. First choose

s = \frac{n + \alpha q}{n}, \quad so that \quad \frac{nq}{n + \alpha q} \cdot s = q,

then choose

r = \frac{n + \alpha q}{\alpha q}

to ensure

\frac{1}{r} + \frac{1}{s} = 1.
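This exponent bookkeeping can be verified in exact rational arithmetic; the
values n = 3, \alpha = 1, q = 4 below are sample admissible choices, not ones
singled out by the proof:

```python
from fractions import Fraction

# Sample admissible values for the check: n = 3, alpha = 1, q = 4 > n/(n - alpha).
n, alpha, q = Fraction(3), Fraction(1), Fraction(4)
assert q > n / (n - alpha)

s = (n + alpha * q) / n            # first Hoelder exponent
r = (n + alpha * q) / (alpha * q)  # its conjugate exponent

assert (n * q / (n + alpha * q)) * s == q  # raises the L^{nq/(n+alpha q)} norm back to L^q
assert 1 / r + 1 / s == 1                  # Hoelder conjugacy holds exactly
```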
We thus arrive at

\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C \left\{ \int_{\Sigma_\lambda^-} u^{\tau+1}(y) \, dy \right\}^{\frac{\alpha}{n}} \| w_\lambda \|_{L^q(\Sigma_\lambda^-)}
\leq C \left\{ \int_{\Sigma_\lambda} u^{\tau+1}(y) \, dy \right\}^{\frac{\alpha}{n}} \| w_\lambda \|_{L^q(\Sigma_\lambda^-)}.  (8.76)

By the condition that u \in L^{\tau+1}(R^n), we can choose N sufficiently large such
that for \lambda \leq -N, we have

C \left\{ \int_{\Sigma_\lambda} u^{\tau+1}(y) \, dy \right\}^{\frac{\alpha}{n}} \leq \frac{1}{2}.

Now (8.76) implies that \| w_\lambda \|_{L^q(\Sigma_\lambda^-)} = 0, and therefore \Sigma_\lambda^- must have
measure zero, and hence be empty by the continuity of w_\lambda(x). This verifies
(8.74) and hence completes Step 1. The reader can see that here we have
actually used Corollary 7.5.1, with \Omega = \Sigma_\lambda, a region near infinity.
Step 2. We now move the plane T_\lambda = \{ x \mid x_1 = \lambda \} to the right as long
as (8.74) holds. Let \lambda_o be as defined before; then we must have \lambda_o < \infty.
This can be seen by applying a similar argument as in Step 1 from \lambda near +\infty.

Now we show that

w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}.

Otherwise, we have w_{\lambda_o}(x) \geq 0 but w_{\lambda_o}(x) \not\equiv 0 on \Sigma_{\lambda_o}; we show
that then the plane can be moved further to the right. More precisely, there exists
an \epsilon depending on n, \alpha, and the solution u(x) itself, such that w_\lambda(x) \geq 0
on \Sigma_\lambda for all \lambda \in [\lambda_o, \lambda_o + \epsilon).

By Lemma 8.5.1, we have in fact w_{\lambda_o}(x) > 0 in the interior of \Sigma_{\lambda_o}. Let
\Sigma_{\lambda_o}^- = \{ x \in \Sigma_{\lambda_o} \mid w_{\lambda_o}(x) \leq 0 \}. Then obviously, \Sigma_{\lambda_o}^- has measure
zero, and \lim_{\lambda \rightarrow \lambda_o} \Sigma_\lambda^- \subset \Sigma_{\lambda_o}^-. From the first inequality of (8.76), we deduce
\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C \left\{ \int_{\Sigma_\lambda^-} u^{\tau+1}(y) \, dy \right\}^{\frac{\alpha}{n}} \| w_\lambda \|_{L^q(\Sigma_\lambda^-)}.  (8.77)

The condition u \in L^{\tau+1}(R^n) ensures that one can choose \epsilon sufficiently small,
so that for all \lambda \in [\lambda_o, \lambda_o + \epsilon),

C \left\{ \int_{\Sigma_\lambda^-} u^{\tau+1}(y) \, dy \right\}^{\frac{\alpha}{n}} \leq \frac{1}{2}.

Now by (8.77), we have \| w_\lambda \|_{L^q(\Sigma_\lambda^-)} = 0, and therefore \Sigma_\lambda^- must be empty.
Here we have actually applied Corollary 7.5.1 with \Omega = \Sigma_\lambda^-. For \lambda sufficiently
close to \lambda_o, \Omega is contained in a very narrow region.
Proof of Theorem 8.5.1 (under the weaker assumption that u is only
locally in L^{\tau+1}).

Outline. Since we do not assume any integrability condition on u(x) near
infinity, we are not able to carry out the method of moving planes directly on
u(x). To overcome this difficulty, we consider v(x), the Kelvin type transform
of u(x). It is easy to verify that v(x) satisfies the same equation (8.69), but
has a possible singularity at the origin, where we need to pay some particular
attention. Since u is locally L^{\tau+1}, it is easy to see that v(x) has no singularity
at infinity, i.e. for any domain \Omega that is a positive distance away from the
origin,

\int_\Omega v^{\tau+1}(y) \, dy < \infty.  (8.78)

Let \lambda be a real number and let the moving plane be x_1 = \lambda. We compare
v(x) and v_\lambda(x) on \Sigma_\lambda \setminus \{0\}. The proof consists of three steps. In Step 1, we
show that there exists an N > 0 such that for \lambda \leq -N, we have

v(x) \leq v_\lambda(x), \quad \forall x \in \Sigma_\lambda \setminus \{0\}.  (8.79)

Thus we can start moving the plane continuously from \lambda \leq -N to the right, as
long as (8.79) holds. If the plane stops at x_1 = \lambda_o for some \lambda_o < 0, then v(x)
must be symmetric and monotone about the plane x_1 = \lambda_o. This implies that
v(x) has no singularity at the origin and u(x) has no singularity at infinity.
In this case, we can carry out the moving of planes on u(x) directly to obtain
the radial symmetry and monotonicity. Otherwise, we can move the plane all
the way to x_1 = 0, which is shown in Step 2. Since the direction of x_1 can
be chosen arbitrarily, we deduce that v(x) must be radially symmetric and
decreasing about the origin. We will show in Step 3 that, in any case, u(x)
cannot have a singularity at infinity, and hence both u and v are in L^{\tau+1}(R^n).
Steps 1 and 2 are entirely similar to the proof under the stronger assumption.
What we need to do is to replace u there by v, and \Sigma_\lambda there by \Sigma_\lambda \setminus \{0\}.
Hence we only show Step 3 here.

Step 3. Finally, we show that u has the desired asymptotic behavior at
infinity, i.e., it satisfies

u(x) = O\left( \frac{1}{|x|^{n-\alpha}} \right).

Suppose, on the contrary, that this fails. Let x^1 and x^2 be any two points in
R^n, and let x_o be the midpoint of the line segment joining x^1 and x^2. Consider
the Kelvin type transform centered at x_o:

v(x) = \frac{1}{|x - x_o|^{n-\alpha}} \, u\left( x_o + \frac{x - x_o}{|x - x_o|^2} \right).

Then v(x) has a singularity at x_o. Carrying out the arguments as in Steps 1
and 2, we conclude that v(x) must be radially symmetric about x_o, and in
particular, u(x^1) = u(x^2). Since x^1 and x^2 are arbitrary, u must be constant.
This is impossible. The continuity and higher regularity of u follow from the
standard theory on singular integral operators. This completes the proof of
the Theorem.
A Appendices

A.1 Notations

A.1.1 Algebraic and Geometric Notations
1. A = (a_{ij}) is an m \times n matrix with ij-th entry a_{ij}.
2. det(a_{ij}) is the determinant of the matrix (a_{ij}).
3. A^T is the transpose of the matrix A.
4. R^n is the n-dimensional real Euclidean space with a typical point
   x = (x_1, \ldots, x_n).
5. \Omega usually denotes an open set in R^n.
6. \bar{\Omega} is the closure of \Omega.
7. \partial\Omega is the boundary of \Omega.
8. For x = (x_1, \ldots, x_n) and y = (y_1, \ldots, y_n) in R^n,

   x \cdot y = \sum_{i=1}^{n} x_i y_i, \quad |x| = \left( \sum_{i=1}^{n} x_i^2 \right)^{\frac{1}{2}}.

9. |x - y| is the distance between the two points x and y in R^n.
10. B_r(x_o) = \{ x \in R^n \mid |x - x_o| < r \} is the ball of radius r centered
    at the point x_o in R^n.
11. S^n is the sphere of radius one centered at the origin in R^{n+1}, i.e. the
    boundary of the ball of radius one centered at the origin:

    B_1(0) = \{ x \in R^{n+1} \mid |x| < 1 \}.
A.1.2 Notations for Functions and Derivatives

1. For a function u : \Omega \subset R^n \rightarrow R^1, we write

   u(x) = u(x_1, \ldots, x_n), \quad x \in \Omega.

   We say u is smooth if it is infinitely differentiable.
2. For two functions u and v, u \equiv v means u is identically equal to v.
3. We set u := v to define u as equaling v.
4. We usually write u_{x_i} for \frac{\partial u}{\partial x_i}, the first partial derivative of u
   with respect to x_i. Similarly,

   u_{x_i x_j} = \frac{\partial^2 u}{\partial x_i \partial x_j}, \quad u_{x_i x_j x_k} = \frac{\partial^3 u}{\partial x_i \partial x_j \partial x_k}.
5. D^\alpha u(x) := \frac{\partial^{|\alpha|} u(x)}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}, where \alpha = (\alpha_1, \ldots, \alpha_n) is a multi-index of order

   |\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n.
6. For a nonnegative integer k,

   D^k u(x) := \{ D^\alpha u(x) \mid |\alpha| = k \},

   the set of all partial derivatives of order k.

   If k = 1, we regard the elements of Du as being arranged in a vector,

   Du := (u_{x_1}, \ldots, u_{x_n}) = \nabla u,

   the gradient vector.

   If k = 2, we regard the elements of D^2 u as being arranged in a matrix,

   D^2 u := \begin{pmatrix} \frac{\partial^2 u}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 u}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 u}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 u}{\partial x_n \partial x_n} \end{pmatrix},

   the Hessian matrix.

7. The Laplacian

   \Delta u := \sum_{i=1}^{n} u_{x_i x_i}

   is the trace of D^2 u.
8. |D^k u| := \left( \sum_{|\alpha| = k} |D^\alpha u|^2 \right)^{1/2}.
A.1.3 Function Spaces

1. C(\Omega) := \{ u : \Omega \rightarrow R^1 \mid u is continuous \}.
2. C(\bar{\Omega}) := \{ u \in C(\Omega) \mid u is uniformly continuous \}.
3. \| u \|_{C(\bar{\Omega})} := \sup_{x \in \Omega} |u(x)|.
4. C^k(\Omega) := \{ u : \Omega \rightarrow R^1 \mid u is k-times continuously differentiable \}.
5. C^k(\bar{\Omega}) := \{ u \in C^k(\Omega) \mid D^\alpha u is uniformly continuous for all |\alpha| \leq k \}.
6. \| u \|_{C^k(\bar{\Omega})} := \sum_{|\alpha| \leq k} \| D^\alpha u \|_{C(\bar{\Omega})}.
7. Assume \Omega is an open subset of R^n and 0 < \alpha \leq 1. If there exists a
   constant C such that

   |u(x) - u(y)| \leq C |x - y|^\alpha, \quad \forall x, y \in \Omega,

   then we say that u is Hölder continuous in \Omega.
8. The \alpha-th Hölder seminorm of u is

   [u]_{C^{0,\alpha}(\bar{\Omega})} := \sup_{x \neq y} \frac{|u(x) - u(y)|}{|x - y|^\alpha}.

9. The \alpha-th Hölder norm is

   \| u \|_{C^{0,\alpha}(\bar{\Omega})} := \| u \|_{C(\bar{\Omega})} + [u]_{C^{0,\alpha}(\bar{\Omega})}.

10. The Hölder space C^{k,\alpha}(\bar{\Omega}) consists of all functions u \in C^k(\bar{\Omega}) for
    which the norm

    \| u \|_{C^{k,\alpha}(\bar{\Omega})} := \sum_{|\beta| \leq k} \| D^\beta u \|_{C(\bar{\Omega})} + \sum_{|\beta| = k} [D^\beta u]_{C^{0,\alpha}(\bar{\Omega})}

    is finite.
11. C^\infty(\Omega) is the set of all infinitely differentiable functions on \Omega.
12. C_0^k(\Omega) denotes the subset of functions in C^k(\Omega) with compact support
    in \Omega.
13. L^p(\Omega) is the set of Lebesgue measurable functions u on \Omega with

    \| u \|_{L^p(\Omega)} := \left( \int_\Omega |u(x)|^p \, dx \right)^{1/p} < \infty.

14. L^\infty(\Omega) is the set of Lebesgue measurable functions u with

    \| u \|_{L^\infty(\Omega)} := \operatorname{ess\,sup}_\Omega |u| < \infty.

15. L^p_{loc}(\Omega) denotes the set of Lebesgue measurable functions u on \Omega whose
    p-th power is locally integrable, i.e. for any compact subset K \subset \Omega,

    \int_K |u(x)|^p \, dx < \infty.

16. W^{k,p}(\Omega) denotes the Sobolev space consisting of functions u with

    \| u \|_{W^{k,p}(\Omega)} := \left( \sum_{0 \leq |\alpha| \leq k} \int_\Omega |D^\alpha u|^p \, dx \right)^{1/p} < \infty.

17. H^{k,p}(\Omega) is the completion of C^\infty(\Omega) under the norm \| \cdot \|_{W^{k,p}(\Omega)}.

18. For any open subset \Omega \subset R^n, any nonnegative integer k, and any p \geq 1,
    it is shown (see Adams [Ad]) that

    H^{k,p}(\Omega) = W^{k,p}(\Omega).

19. For p = 2, we usually write

    H^k(\Omega) = H^{k,2}(\Omega).
A.1.4 Notations for Estimates

1. In the process of estimates, we use C to denote various constants that can
   be explicitly computed in terms of known quantities. The value of C may
   change from line to line in a given computation.
2. We write

   u = O(v) as x \rightarrow x_o,

   provided there exists a constant C such that

   |u(x)| \leq C |v(x)|

   for x sufficiently close to x_o.
3. We write

   u = o(v) as x \rightarrow x_o,

   provided

   \frac{|u(x)|}{|v(x)|} \rightarrow 0 as x \rightarrow x_o.
A.1.5 Notations from Riemannian Geometry

1. M denotes a differentiable manifold.
2. T_p M is the tangent space of M at point p.
3. T_p^* M is the cotangent space of M at point p.
4. A Riemannian metric g can be expressed as a matrix

   g = \begin{pmatrix} g_{11} & \cdots & g_{1n} \\ \vdots & \ddots & \vdots \\ g_{n1} & \cdots & g_{nn} \end{pmatrix},

   where in local coordinates

   g_{ij}(p) = \left\langle \frac{\partial}{\partial x_i}\Big|_p, \frac{\partial}{\partial x_j}\Big|_p \right\rangle_p.

5. (M, g) denotes an n-dimensional manifold M with Riemannian metric g.
   In local coordinates, the length element is

   ds := \left( \sum_{i,j=1}^{n} g_{ij} \, dx_i \, dx_j \right)^{1/2}.

   The volume element is

   dV := \sqrt{\det(g_{ij})} \, dx_1 \cdots dx_n.

6. K(x) denotes the Gaussian curvature, and R(x) the scalar curvature, of a
   manifold M at point x.
7. For two differentiable vector fields X and Y on M, the Lie bracket is

   [X, Y] := XY - YX.

8. D denotes an affine connection on a differentiable manifold M.
9. A vector field V(t) along a curve c(t) \in M is parallel if

   \frac{DV}{dt} = 0.

10. Given a Riemannian manifold M with metric \langle \cdot, \cdot \rangle, there exists a
    unique connection D on M that is compatible with the metric, i.e.

    \langle X, Y \rangle_{c(t)} = constant,

    for any pair of parallel vector fields X and Y along any smooth curve c(t).
11. In local coordinates (x_1, \ldots, x_n), this unique Riemannian connection
    can be expressed as

    D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j} = \sum_k \Gamma_{ij}^k \frac{\partial}{\partial x_k},

    where the \Gamma_{ij}^k are the Christoffel symbols.
12. We denote \nabla_i = D_{\frac{\partial}{\partial x_i}}.

13. The summation convention. We write, for instance,

    X_i \frac{\partial}{\partial x_i} := \sum_i X_i \frac{\partial}{\partial x_i}, \quad \Gamma_{ik}^j \, dx_k := \sum_k \Gamma_{ik}^j \, dx_k,

    and so on.
14. A smooth function f(x) on M is a (0,0)-tensor. \nabla f is a (1,0)-tensor
    defined by

    \nabla f = \nabla_i f \, dx_i = \frac{\partial f}{\partial x_i} dx_i.

    \nabla^2 f is a (2,0)-tensor:

    \nabla^2 f = \nabla_j \left( \frac{\partial f}{\partial x_i} \right) dx_i \otimes dx_j = \left( \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma_{ij}^k \frac{\partial f}{\partial x_k} \right) dx_i \otimes dx_j.

    In the Riemannian context, \nabla^2 f is called the Hessian of f and denoted
    by Hess(f). Its ij-th component is

    (\nabla^2 f)_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma_{ij}^k \frac{\partial f}{\partial x_k}.
15. The trace of the Hessian matrix ((\nabla^2 f)_{ij}) is defined to be the
    Laplace-Beltrami operator \Delta:

    \Delta = \frac{1}{\sqrt{|g|}} \sum_{i,j=1}^{n} \frac{\partial}{\partial x_i} \left( \sqrt{|g|} \, g^{ij} \frac{\partial}{\partial x_j} \right),

    where |g| is the determinant of (g_{ij}).

16. We abbreviate:

    \int_M \nabla u \nabla v \, dV_g := \int_M \langle \nabla u, \nabla v \rangle \, dV_g,

    where

    \langle \nabla u, \nabla v \rangle = g^{ij} \nabla_i u \nabla_j v = g^{ij} \frac{\partial u}{\partial x_i} \frac{\partial v}{\partial x_j}

    is the scalar product associated with g for 1-forms.
17. The norm of the k-th covariant derivative of u, |\nabla^k u|, is defined in a
    local coordinate chart by

    |\nabla^k u|^2 = g^{i_1 j_1} \cdots g^{i_k j_k} (\nabla^k u)_{i_1 \cdots i_k} (\nabla^k u)_{j_1 \cdots j_k}.

    In particular, we have

    |\nabla^1 u|^2 = |\nabla u|^2 = g^{ij} (\nabla u)_i (\nabla u)_j = g^{ij} \nabla_i u \nabla_j u = g^{ij} \frac{\partial u}{\partial x_i} \frac{\partial u}{\partial x_j}.

18. \mathcal{C}_k^p(M) is the space of C^\infty functions on M such that

    \int_M |\nabla^j u|^p \, dV_g < \infty, \quad \forall j = 0, 1, \ldots, k.

19. The Sobolev space H^{k,p}(M) is the completion of \mathcal{C}_k^p(M) with respect
    to the norm

    \| u \|_{H^{k,p}(M)} = \sum_{j=0}^{k} \left( \int_M |\nabla^j u|^p \, dV_g \right)^{1/p}.
A.2 Inequalities

Young's Inequality. Assume 1 < p, q < \infty and \frac{1}{p} + \frac{1}{q} = 1. Then for any
a, b > 0, it holds that

ab \leq \frac{a^p}{p} + \frac{b^q}{q}.  (A.1)

Proof. We will use the convexity of the exponential function e^x. Let x_1 < x_2
be any two points in R^1. Then, since \frac{1}{p} + \frac{1}{q} = 1, the number \frac{1}{p} x_1 + \frac{1}{q} x_2
is a point between x_1 and x_2. By the convexity of e^x, we have

e^{\frac{1}{p} x_1 + \frac{1}{q} x_2} \leq \frac{1}{p} e^{x_1} + \frac{1}{q} e^{x_2}.  (A.2)

Taking x_1 = \ln a^p and x_2 = \ln b^q in (A.2), we arrive at inequality (A.1). □
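A brute-force numerical check of (A.1) over random a, b, p (illustrative only):

```python
import random

def young_gap(a, b, p):
    # Young's inequality (A.1): a*b <= a^p/p + b^q/q with 1/p + 1/q = 1.
    q = p / (p - 1.0)       # conjugate exponent
    return a**p / p + b**q / q - a * b   # should be a nonnegative gap

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0.01, 10), random.uniform(0.01, 10)
    p = random.uniform(1.01, 10)
    assert young_gap(a, b, p) >= -1e-9

# Equality holds exactly when a^p == b^q, e.g. a = b = 3 with p = q = 2:
assert abs(young_gap(3.0, 3.0, 2.0)) < 1e-12
```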
Cauchy-Schwarz Inequality.

|x \cdot y| \leq |x| \, |y|, \quad \forall x, y \in R^n.

Proof. For any real number \lambda > 0, we have

0 \leq |x \pm \lambda y|^2 = |x|^2 \pm 2\lambda \, x \cdot y + \lambda^2 |y|^2.

It follows that

\pm x \cdot y \leq \frac{1}{2\lambda} |x|^2 + \frac{\lambda}{2} |y|^2.

The minimum value of the right-hand side is attained at \lambda = \frac{|x|}{|y|}. At this
value of \lambda, we arrive at the desired inequality. □
Hölder's Inequality. Let \Omega be a domain in R^n. Assume that u \in L^p(\Omega)
and v \in L^q(\Omega) with 1 \leq p, q \leq \infty and \frac{1}{p} + \frac{1}{q} = 1. Then

\int_\Omega |uv| \, dx \leq \| u \|_{L^p(\Omega)} \| v \|_{L^q(\Omega)}.  (A.3)

Proof. Let f = \frac{u}{\| u \|_{L^p(\Omega)}} and g = \frac{v}{\| v \|_{L^q(\Omega)}}. Then obviously
\| f \|_{L^p(\Omega)} = 1 = \| g \|_{L^q(\Omega)}. It follows from Young's inequality (A.1) that

\int_\Omega |f g| \, dx \leq \frac{1}{p} \int_\Omega |f|^p \, dx + \frac{1}{q} \int_\Omega |g|^q \, dx = \frac{1}{p} + \frac{1}{q} = 1.

This implies (A.3). □
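Inequality (A.3) can be sanity-checked in its discrete form, i.e. for the
counting measure on a finite index set:

```python
import numpy as np

def holder_holds(u, v, p):
    # Discrete Hoelder: sum |u v| <= (sum |u|^p)^{1/p} (sum |v|^q)^{1/q},
    # which is (A.3) for the counting measure on a finite set.
    q = p / (p - 1.0)
    lhs = np.sum(np.abs(u * v))
    rhs = np.sum(np.abs(u)**p)**(1 / p) * np.sum(np.abs(v)**q)**(1 / q)
    return lhs <= rhs + 1e-12

rng = np.random.default_rng(2)
for _ in range(200):
    u, v = rng.normal(size=50), rng.normal(size=50)
    assert holder_holds(u, v, p=3.0)
assert holder_holds(u, v, p=2.0)   # p = q = 2 is Cauchy-Schwarz
```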
Minkowski's Inequality. Assume 1 \leq p \leq \infty. Then for any u, v \in L^p(\Omega),

\| u + v \|_{L^p(\Omega)} \leq \| u \|_{L^p(\Omega)} + \| v \|_{L^p(\Omega)}.

Proof. By the above Hölder inequality, we have

\| u + v \|_{L^p(\Omega)}^p = \int_\Omega |u + v|^p \, dx \leq \int_\Omega |u + v|^{p-1} (|u| + |v|) \, dx
\leq \left( \int_\Omega |u + v|^p \, dx \right)^{\frac{p-1}{p}} \left[ \left( \int_\Omega |u|^p \, dx \right)^{\frac{1}{p}} + \left( \int_\Omega |v|^p \, dx \right)^{\frac{1}{p}} \right]
= \| u + v \|_{L^p(\Omega)}^{p-1} \left( \| u \|_{L^p(\Omega)} + \| v \|_{L^p(\Omega)} \right).

Then divide both sides by \| u + v \|_{L^p(\Omega)}^{p-1}. □
Interpolation Inequality for L^p Norms. Assume that u \in L^r(\Omega) \cap L^s(\Omega)
with 1 \leq r \leq s \leq \infty. Then for any r \leq t \leq s and 0 < \theta < 1 such that

\frac{1}{t} = \frac{\theta}{r} + \frac{1 - \theta}{s},  (A.4)

we have u \in L^t(\Omega), and

\| u \|_{L^t(\Omega)} \leq \| u \|_{L^r(\Omega)}^{\theta} \| u \|_{L^s(\Omega)}^{1-\theta}.  (A.5)

Proof. Write

\int_\Omega |u|^t \, dx = \int_\Omega |u|^a |u|^{t-a} \, dx.

Choose p and q so that

ap = r, \quad (t - a)q = s, \quad and \quad \frac{1}{p} + \frac{1}{q} = 1.  (A.6)

Then by the Hölder inequality, we have

\int_\Omega |u|^t \, dx \leq \left( \int_\Omega |u|^r \, dx \right)^{a/r} \left( \int_\Omega |u|^s \, dx \right)^{(t-a)/s}.

Dividing the powers through by t, we obtain

\| u \|_t \leq \| u \|_r^{a/t} \| u \|_s^{1 - a/t}.

Now letting \theta = a/t, we arrive at the desired inequality (A.5). Here the
restriction (A.4) comes from (A.6). □
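A numerical sketch of (A.4) and (A.5) on a finite weighted set, with the
weights standing in for dx (an illustration, not part of the proof):

```python
import numpy as np

def lp_norm(u, p, w):
    # ||u||_{L^p} on a finite measure space; w plays the role of dx.
    return (np.sum(w * np.abs(u)**p))**(1.0 / p)

rng = np.random.default_rng(3)
u = rng.uniform(0.1, 5.0, size=100)
w = rng.uniform(0.1, 1.0, size=100)

r, s, theta = 2.0, 6.0, 0.4
t = 1.0 / (theta / r + (1 - theta) / s)   # relation (A.4)
assert r <= t <= s

# (A.5): the L^t norm is controlled by the interpolated product of norms.
assert lp_norm(u, t, w) <= lp_norm(u, r, w)**theta * lp_norm(u, s, w)**(1 - theta) + 1e-12
```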
A.3 Calderon-Zygmund Decomposition

Lemma A.3.1 For f \in L^1(R^n), f \geq 0, and fixed \alpha > 0, there exist sets
E and G such that

(i) R^n = E \cup G, \ E \cap G = \emptyset;
(ii) f(x) \leq \alpha for a.e. x \in E;
(iii) G = \bigcup_{k=1}^{\infty} Q_k, where the \{Q_k\} are disjoint cubes such that

\alpha < \frac{1}{|Q_k|} \int_{Q_k} f(x) \, dx \leq 2^n \alpha.

Proof. Since \int_{R^n} f(x) \, dx is finite, for a given \alpha > 0 one can pick a cube
Q_o sufficiently large such that

\int_{Q_o} f(x) \, dx \leq \alpha |Q_o|.  (A.7)

Divide Q_o into 2^n equal subcubes with disjoint interiors. Those subcubes
Q satisfying

\int_Q f(x) \, dx \leq \alpha |Q|

are similarly subdivided, and this process is repeated indefinitely. Let \mathcal{O}
denote the set of subcubes Q thus obtained that satisfy

\int_Q f(x) \, dx > \alpha |Q|.

For each Q \in \mathcal{O}, let \tilde{Q} be its predecessor, i.e., Q is one of the 2^n subcubes
of \tilde{Q}. Then obviously we have |\tilde{Q}| / |Q| = 2^n, and consequently,

\alpha < \frac{1}{|Q|} \int_Q f(x) \, dx \leq \frac{1}{|Q|} \int_{\tilde{Q}} f(x) \, dx \leq \frac{1}{|Q|} \alpha |\tilde{Q}| = 2^n \alpha.  (A.8)

Let

G = \bigcup_{Q \in \mathcal{O}} Q, \quad and \quad E = Q_o \setminus G.

Then (iii) follows immediately. To see (ii), notice that each point of E lies in
a nested sequence of cubes Q with diameters tending to zero and satisfying

\int_Q f(x) \, dx \leq \alpha |Q|;

now by Lebesgue's Differentiation Theorem, we have

f(x) \leq \alpha, \quad a.e. in E.  (A.9)

This completes the proof of the Lemma. □
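The stopping-time construction in the proof can be sketched in one dimension
(where 2^n = 2). The bump function below is an arbitrary illustrative choice,
and averages are computed by a midpoint rule, so this is a numerical sketch
of the argument, not part of it:

```python
import numpy as np

def cz_intervals(f, a, b, alpha, depth=12):
    """1-D dyadic Calderon-Zygmund stopping-time sketch on [a, b).
    Assumes the starting interval satisfies the analogue of (A.7):
    the average of f over [a, b) is at most alpha."""
    def avg(lo, hi):
        # midpoint-rule average of f over [lo, hi)
        xs = lo + (np.arange(256) + 0.5) * (hi - lo) / 256
        return float(np.mean(f(xs)))
    out = []
    def subdivide(lo, hi, d):
        if d == 0:
            return
        mid = 0.5 * (lo + hi)
        for l, h in ((lo, mid), (mid, hi)):
            if avg(l, h) > alpha:
                out.append((l, h))   # stopped cube; its predecessor had avg <= alpha
            else:
                subdivide(l, h, d - 1)
    subdivide(a, b, depth)
    return out

f = lambda x: 10.0 * np.exp(-50.0 * (x - 0.5)**2)   # smooth bump, illustrative
alpha = 4.0
cubes = cz_intervals(f, 0.0, 1.0, alpha)
# On each selected cube, alpha < average(f) <= 2 * alpha, as in (iii).
```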
Lemma A.3.2 Let T be a linear operator from L^p(\Omega) \cap L^q(\Omega) into itself
with 1 \leq p < q < \infty. If T is of weak type (p, p) and of weak type (q, q), then
for any p < r < q, T is of strong type (r, r). More precisely, if there exist
constants B_p and B_q such that, for any t > 0,

\mu_{Tf}(t) \leq \left( \frac{B_p \| f \|_p}{t} \right)^p and \mu_{Tf}(t) \leq \left( \frac{B_q \| f \|_q}{t} \right)^q, \quad \forall f \in L^p(\Omega) \cap L^q(\Omega),

then

\| Tf \|_r \leq C B_p^{\theta} B_q^{1-\theta} \| f \|_r, \quad \forall f \in L^p(\Omega) \cap L^q(\Omega),

where

\frac{1}{r} = \frac{\theta}{p} + \frac{1 - \theta}{q},

and C depends only on p, q, and r.
Proof. For any number s > 0, let

g(x) = \begin{cases} f(x) & if |f(x)| \leq s, \\ 0 & if |f(x)| > s. \end{cases}

We split f into the good part g and the bad part b: f(x) = g(x) + b(x). Then

|Tf(x)| \leq |Tg(x)| + |Tb(x)|,

and hence

\mu(t) \equiv \mu_{Tf}(t) \leq \mu_{Tg}\left(\frac{t}{2}\right) + \mu_{Tb}\left(\frac{t}{2}\right)
\leq \left( \frac{2B_q}{t} \right)^q \int_\Omega |g(x)|^q \, dx + \left( \frac{2B_p}{t} \right)^p \int_\Omega |b(x)|^p \, dx.

It follows that

\int_\Omega |Tf|^r \, dx = \int_0^\infty \mu(t) \, d(t^r) = r \int_0^\infty t^{r-1} \mu(t) \, dt
\leq r (2B_q)^q \int_0^\infty t^{r-1-q} \left( \int_{|f| \leq s} |f(x)|^q \, dx \right) dt
  + r (2B_p)^p \int_0^\infty t^{r-1-p} \left( \int_{|f| > s} |f(x)|^p \, dx \right) dt
\equiv r (2B_q)^q I_q + r (2B_p)^p I_p.  (A.10)
Let $s = t/A$ for some positive number $A$ to be fixed later. Then
$$I_q = A^{r-q}\int_0^{\infty}s^{r-1-q}\left(\int_{|f|\le s}|f(x)|^q\,dx\right)ds = A^{r-q}\int_{\Omega}|f(x)|^q\left(\int_{|f|}^{\infty}s^{r-1-q}\,ds\right)dx = \frac{A^{r-q}}{q-r}\int_{\Omega}|f(x)|^r\,dx. \quad \text{(A.11)}$$
Similarly,
$$I_p = A^{r-p}\int_0^{\infty}s^{r-1-p}\left(\int_{|f|>s}|f(x)|^p\,dx\right)ds = A^{r-p}\int_{\Omega}|f(x)|^p\left(\int_0^{|f|}s^{r-1-p}\,ds\right)dx = \frac{A^{r-p}}{r-p}\int_{\Omega}|f(x)|^r\,dx. \quad \text{(A.12)}$$
Combining (A.10), (A.11), and (A.12), we derive
$$\int_{\Omega}|Tf(x)|^r\,dx \le r\,F(A)\int_{\Omega}|f(x)|^r\,dx, \quad \text{(A.13)}$$
where
$$F(A) = \frac{(2B_q)^q A^{r-q}}{q-r} + \frac{(2B_p)^p A^{r-p}}{r-p}.$$
By elementary calculus, one can easily verify that the minimum of $F(A)$ is attained at
$$A = 2\,B_q^{q/(q-p)} B_p^{p/(p-q)}.$$
For this value of $A$, (A.13) becomes
$$\int_{\Omega}|Tf(x)|^r\,dx \le r\,2^r\left(\frac{1}{q-r} + \frac{1}{r-p}\right)B_q^{q(r-p)/(q-p)} B_p^{p(q-r)/(q-p)}\int_{\Omega}|f(x)|^r\,dx.$$
Letting
$$C = 2\left(\frac{r}{q-r} + \frac{r}{r-p}\right)^{1/r} \quad \text{and} \quad \frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q},$$
we arrive immediately at
$$\|Tf\|_{L^r(\Omega)} \le C\,B_p^{\theta} B_q^{1-\theta}\,\|f\|_{L^r(\Omega)}.$$
This completes the proof of the lemma. $\Box$
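The "elementary calculus" step above, finding the minimizer of $F(A)$, can be double-checked numerically. The following brief sketch uses arbitrary sample values of $p$, $q$, $r$, $B_p$, $B_q$ (chosen purely for illustration):

```python
# Numerical check that A = 2 * Bq**(q/(q-p)) * Bp**(p/(p-q)) minimizes
#   F(A) = (2*Bq)**q * A**(r-q)/(q-r) + (2*Bp)**p * A**(r-p)/(r-p).
# The sample values below are arbitrary; the check only needs p < r < q.
p, q, r = 1.0, 3.0, 2.0
Bp, Bq = 0.7, 1.9

def F(A):
    return ((2 * Bq)**q * A**(r - q) / (q - r)
            + (2 * Bp)**p * A**(r - p) / (r - p))

A_star = 2 * Bq**(q / (q - p)) * Bp**(p / (p - q))

# setting F'(A) = 0 gives A**(q-p) = (2*Bq)**q / (2*Bp)**p
assert abs(A_star**(q - p) - (2 * Bq)**q / (2 * Bp)**p) < 1e-9

# F is strictly larger at nearby points than at the claimed minimizer
for A in (0.5 * A_star, 0.9 * A_star, 1.1 * A_star, 2 * A_star):
    assert F(A) > F(A_star)
```

Differentiating $F$ and setting $F'(A) = 0$ yields $A^{q-p} = (2B_q)^q/(2B_p)^p$, which is exactly the value of $A$ given in the proof.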
A.4 The Contraction Mapping Principle
Let $X$ be a linear space with norm $\|\cdot\|$, and let $T$ be a mapping from $X$ into itself. If there exists a number $\theta < 1$ such that
$$\|Tx - Ty\| \le \theta\,\|x - y\| \quad \text{for all } x, y \in X,$$
then $T$ is called a contraction mapping.

Theorem A.4.1 Let $T$ be a contraction mapping in a Banach space $B$. Then it has a unique fixed point in $B$; that is, there is a unique $x \in B$ such that $Tx = x$.
Proof. Let $x_o$ be any point in the Banach space $B$. Define
$$x_1 = Tx_o, \quad \text{and} \quad x_{k+1} = Tx_k, \quad k = 1, 2, \cdots.$$
We show that $\{x_k\}$ is a Cauchy sequence in $B$. In fact, for any integers $n > m$,
$$\|x_n - x_m\| \le \sum_{k=m}^{n-1}\|x_{k+1} - x_k\| = \sum_{k=m}^{n-1}\|T^k x_1 - T^k x_o\| \le \sum_{k=m}^{n-1}\theta^k\,\|x_1 - x_o\| \le \frac{\theta^m}{1-\theta}\,\|x_1 - x_o\| \to 0 \ \text{ as } m \to \infty.$$
Since a Banach space is complete, the sequence $\{x_k\}$ converges to an element $x$ in $B$. Taking the limit on both sides of
$$x_{k+1} = Tx_k,$$
we arrive at
$$Tx = x. \quad \text{(A.14)}$$
To see the uniqueness, assume that $y$ is also a solution of (A.14). Then
$$\|x - y\| = \|Tx - Ty\| \le \theta\,\|x - y\|.$$
Since $\theta < 1$, we must have
$$\|x - y\| = 0;$$
that is, $x = y$. $\Box$
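The successive approximation in this proof is exactly what one runs in practice. A minimal sketch follows; the map $T = \cos$ restricted to $[0,1]$ is a standard example of a contraction (it maps $[0,1]$ into itself and $|\cos'| = |\sin| \le \sin 1 < 1$ there), and the starting point and tolerance are arbitrary choices of ours.

```python
import math

# Picard iteration x_{k+1} = T(x_k) for the contraction T = cos on [0, 1].
# The unique fixed point solves cos(x) = x (approximately 0.7390851...).
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:   # Cauchy-type stopping criterion
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x = fixed_point(math.cos, 0.5)
assert abs(x - math.cos(x)) < 1e-11          # Tx = x up to tolerance
assert abs(x - 0.7390851332151607) < 1e-9    # the known fixed point
```

The geometric factor $\theta^m/(1-\theta)$ in the proof also tells how fast this converges: each step shrinks the error by at least $\theta$.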
A.5 The Arzela-Ascoli Theorem

We say that a sequence of functions $\{u_k(x)\}$ is uniformly equicontinuous if for each $\varepsilon > 0$, there exists $\delta > 0$ such that
$$|u_k(x) - u_k(y)| < \varepsilon \quad \text{whenever } |x - y| < \delta,$$
for all $x, y \in \mathbb{R}^n$ and all $k = 1, 2, \cdots$.

Theorem A.5.1 (Arzela-Ascoli Compactness Criterion for Uniform Convergence). Assume that $\{u_k(x)\}$ is a sequence of real-valued functions defined on $\mathbb{R}^n$ satisfying
$$|u_k(x)| \le M, \quad k = 1, 2, \cdots, \; x \in \mathbb{R}^n,$$
for some positive number $M$, and that $\{u_k(x)\}$ is uniformly equicontinuous. Then there exist a subsequence $\{u_{k_i}(x)\}$ of $\{u_k(x)\}$ and a continuous function $u(x)$ such that
$$u_{k_i} \to u$$
uniformly on any compact subset of $\mathbb{R}^n$.
References
[Ad] R. Adams, Sobolev Spaces, Academic Press, San Diego, 1978.
[AR] A. Ambrosetti and P. Rabinowitz, Dual variational methods in critical point theory and applications, J. Funct. Anal., 14(1973), 349-381.
[Au] T. Aubin, Nonlinear Analysis on Manifolds. MongeAmpere Equations,
SpringerVerlag, 1982.
[Ba] A. Bahri, Critical points at inﬁnity in some variational problems, Pit
man, 1989, Research Notes in Mathematics Series, Vol. 182, 1988.
[BC] A. Bahri and J. Coron, The scalar curvature problem on three
dimensional sphere, J. Funct. Anal., 95(1991), 106172.
[BE] J. Bourguignon and J. Ezin, Scalar curvature functions in a class of
metrics and conformal transformations, Trans. AMS, 301(1987), 723
736.
[Bes] A. Besse, Einstein Manifolds, Springer-Verlag, Berlin, Heidelberg, 1987.
[Bi] G. Bianchi, Nonexistence and symmetry of solutions to the scalar curva
ture equation, Comm. Partial Diﬀerential Equations, 21(1996), 229234.
[BLS] H. Brezis, Y. Li, and I. Shafrir, A sup + inf inequality for some nonlinear
elliptic equations involving exponential nonlinearities, J. Funct. Anal.
115 (1993), 344358.
[BM] H. Brezis and F. Merle, Uniform estimates and blow-up behavior for solutions of $-\Delta u = V(x)e^u$ in two dimensions, Comm. PDE, 16(1991), 1223-1253.
[C] W. Chen, Scalar curvatures on S^n, Mathematische Annalen, 283(1989), 353-365.
[C1] W. Chen, A Trudinger inequality on surfaces with conical singularities, Proc. AMS, 3(1990), 821-832.
[Ca] M. do Carmo, Riemannian Geometry, Translated by F. Flaherty,
Birkhauser Boston, 1992.
[CC] K. Cheng and J. Chern, Conformal metrics with prescribed curvature on S^n, J. Diff. Equs., 106(1993), 155-185.
[CD1] W. Chen and W. Ding, A problem concerning the scalar curvature on S^2, Kexue Tongbao, 33(1988), 533-537.
[CD2] W. Chen and W. Ding, Scalar curvatures on S^2, Trans. AMS, 303(1987), 365-382.
[CGS] L.Caﬀarelli, B.Gidas, and J.Spruck, Asymptotic symmetry and local
behavior of semilinear elliptic equations with critical Sobolev growth,
Comm. Pure Appl. Math. XLII(1989), 271297.
[ChL] K. Chang and Q. Liu, On Nirenberg’s problem, International J. Math.,
4(1993), 5988.
[CL1] W. Chen and C. Li, Classiﬁcation of solutions of some nonlinear elliptic
equations , Duke Math. J., 63(1991), 615622.
[CL2] W. Chen and C. Li, Qualitative properties of solutions to some nonlinear elliptic equations in R^2, Duke Math. J., 71(1993), 427-439.
[CL3] W. Chen and C. Li, A note on KazdanWarner type conditions, J. Diﬀ.
Geom. 41(1995), 259268.
[CL4] W. Chen and C. Li, A necessary and suﬃcient condition for the Niren
berg problem, Comm. Pure Appl. Math., 48(1995), 657667.
[CL5] W. Chen and C. Li, A priori estimates for solutions to nonlinear elliptic equations, Arch. Rat. Mech. Anal., 122(1993), 145-157.
[CL6] W. Chen and C. Li, A priori estimates for prescribing scalar curvature
equations, Ann. Math., 145(1997), 547564.
[CL7] W. Chen and C. Li, Prescribing Gaussian curvature on surfaces with
conical singularities, J. Geom. Analysis, 1(1991), 359372.
[CL8] W. Chen and C. Li, Gaussian curvature on singular surfaces, J. Geom.
Anal., 3(1993), 315334.
[CL9] W. Chen and C. Li, What kinds of singular surface can admit constant
curvature, Duke Math J., 2(1995), 437451.
[CL10] W. Chen and C. Li, Prescribing scalar curvature on S^n, Pacific J. Math., 1(2001), 61-78.
[CL11] W. Chen and C. Li, Indeﬁnite elliptic problems in a domain, Disc.
Cont. Dyn. Sys., 3(1997), 333340.
[CL12] W. Chen and C. Li, Gaussian curvature in the negative case, Proc.
AMS, 131(2003), 741744.
[CL13] W. Chen and C. Li, Regularity of solutions for a system of integral
equations, C.P.A.A., 4(2005), 18.
[CL14] W. Chen and C. Li, Moving planes, moving spheres, and a priori esti
mates, J. Diﬀ. Equa. 195(2003), 113.
[CLO] W. Chen, C. Li, and B. Ou, Classiﬁcation of solutions for an integral
equation, , CPAM, 59(2006), 330343.
[CLO1] W. Chen, C. Li, and B. Ou, Qualitative properties of solutions for an
integral equation, Disc. Cont. Dyn. Sys. 12(2005), 347354.
[CLO2] W. Chen, C. Li, and B. Ou, Classiﬁcation of solutions for a system of
integral equations, Comm. in PDE, 30(2005), 5965.
[CS] K. Cheng and J. Smoller, Conformal metrics with prescribed Gaussian curvature on S^2, Trans. AMS, 336(1993), 219-255.
[CY1] A. Chang and P. Yang, A perturbation result in prescribing scalar curvature on S^n, Duke Math. J., 64(1991), 27-69.
[CY2] A. Chang, M. Gursky, and P. Yang, The scalar curvature equation on 2- and 3-spheres, Calc. Var., 1(1993), 205-229.
[CY3] A. Chang and P. Yang, Prescribing Gaussian curvature on S^2, Acta Math., 159(1987), 214-259.
[CY4] A. Chang and P. Yang, Conformal deformation of metric on S^2, J. Diff. Geom., 23(1988), 259-296.
[DL] W. Ding and J. Liu, A note on the problem of prescribing Gaussian curvature on surfaces, Trans. AMS, 347(1995), 1059-1065.
[ES] J. Escobar and R. Schoen, Conformal metric with prescribed scalar
curvature, Invent. Math. 86(1986), 243254.
[Ev] L. Evans, Partial Diﬀerential Equations, Graduate Studies in Math,
Vol 19, AMS, 1998.
[F] L. E. Fraenkel, An Introduction to Maximum Principles and Symmetry in Elliptic Problems, Cambridge University Press, New York, 2000.
[He] E. Hebey, Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequal
ities, AMS, Courant Institute of Mathematical Sciences Lecture Notes
5, 1999.
[GNN] B. Gidas, W. Ni, and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in R^n, in Mathematical Analysis and Applications, Academic Press, New York, 1981.
[GNN1] B. Gidas, W. Ni, and L. Nirenberg, Symmetry and related properties
via the maximum principle, Comm. Math. Phys., 68(1979), 209243.
[GS] B. Gidas and J. Spruck, Global and local behavior of positive solutions
of nonlinear elliptic equations, Comm. Pure Appl. Math., 34(1981), 525
598.
[GT] D. Gilbarg and N. Trudinger, Elliptic partial diﬀerential equations of
second order, SpringerVerlag, 1983.
[Ha] Z. Han, Prescribing Gaussian curvature on S^2, Duke Math. J., 61(1990), 679-703.
[Ho] C. Hong, A best constant and the Gaussian curvature, Proc. of AMS,
97(1986), 737747.
[Ka] J. Kazdan, Prescribing the curvature of a Riemannian manifold, AMS
CBMS Lecture, No.57, 1984.
[KW1] J. Kazdan and F. Warner, Curvature functions for compact two mani
folds, Ann. of Math. , 99(1974), 1447.
[KW2] J. Kazdan and F. Warner, Existence and conformal deformation of metrics with prescribed Gaussian and scalar curvatures, Annals of Math., 101(1975), 317-331.
[L] E.Lieb, Sharp constants in the HardyLittlewoodSobolev and related
inequalities, Ann. of Math. 118(1983), 349374.
[Li] C. Li, Local asymptotic symmetry of singular solutions to nonlinear
elliptic equations, Invent. Math. 123(1996), 221231.
[Li0] C. Li, Monotonicity and symmetry of solutions of fully nonlinear elliptic
equations on bounded domain, Comm. in PDE. 16(2&3)(1991), 491526.
[Li1] Y. Li, On prescribing scalar curvature problem on S^3 and S^4, C. R. Acad. Sci. Paris, t. 314, Serie I (1992), 55-59.
[Li2] Y. Li, Prescribing scalar curvature on S^n and related problems, J. Differential Equations, 120(1995), 319-410.
[Li3] Y. Li, Prescribing scalar curvature on S^n and related problems, II: Existence and compactness, Comm. Pure Appl. Math., 49(1996), 541-597.
[Lin1] C. Lin, On Liouville theorem and a priori estimates for the scalar curvature equations, Ann. Scuola Norm. Sup. Pisa, Cl. Sci., 27(1998), 107-130.
[Mo] J. Moser, On a nonlinear problem in diﬀerential geometry, Dynamical
Systems, Academic Press, N.Y., (1973).
[P] P. Padilla, Symmetry properties of positive solutions of elliptic equations
on symmetric domains, Appl. Anal. 64(1997), 153169.
[O] M. Obata, The conjecture on conformal transformations of Riemannian
manifolds, J. Diﬀ. Geom., 6(1971), 247258.
[On] E. Onofri, On the positivity of the eﬀective action in a theory of random
surfaces, Comm. Math. Phys., 86 (1982), 321326.
[Rab] P. Rabinowitz, Minimax Methods in Critical Point Theory with Appli
cations to Diﬀerential Equations, CBMS series, No. 65, 1986.
[Sch] M. Schechter, Modern Methods in Partial Diﬀerential Equations,
McGrawHill Inc., 1977.
[Sch1] M. Schechter, Principles of Functional Analysis, Graduate Studies in Math, Vol. 6, AMS, 2002.
[SZ] R. Schoen and D. Zhang, Prescribed scalar curvature on the nspheres,
Calc. Var. Partial Diﬀerential Equations 4 (1996), 125.
[WX] J. Wei and X. Xu, Classiﬁcation of solutions of higher order conformally
invariant equations, Math. Ann. 313(1999), 207228.
[XY] X. Xu and P. Yang, Remarks on prescribing Gauss curvature, Tran.
AMS, Vol. 336, No. 2, (1993), 831840.
Preface

In this book we intend to present basic materials as well as real research examples to young researchers in the field of nonlinear analysis for partial differential equations (PDEs), in particular, for semilinear elliptic PDEs. We hope it will become a good reading material for graduate students and a handy textbook for professors to use in a topic course on nonlinear analysis.

We will introduce a series of well-known typical methods in nonlinear analysis. We will first illustrate them by using simple examples; then we will lead readers to the research front and explain how these methods can be applied to solve practical problems through careful analysis of a series of recent research articles, mostly the authors' recent papers.

From our experience, roughly speaking, in applying these commonly used methods there are usually two aspects: i) a general scheme (more or less universal), and ii) key points for each individual problem. We will use our own research articles to show readers what the general schemes and the key points are; with an understanding of these examples, we hope the readers will be able to apply these general schemes to solve their own research problems by discovering their own key points.

In Chapter 1, we introduce basic knowledge on Sobolev spaces and some commonly used inequalities. These are the major spaces in which we will seek weak solutions of PDEs.

Chapter 2 shows how to find weak solutions for some typical linear and semilinear PDEs by using functional analysis methods, mainly the calculus of variations and critical point theories, including the well-known Mountain Pass Lemma.

In Chapter 3, we establish W^{2,p} a priori estimates and regularity. We prove that, in most cases, weak solutions are actually differentiable and hence are classical ones. We also present a Regularity Lifting Theorem. It is a simple method to boost the regularity of solutions. It has been used extensively in various forms in the authors' previous works. The essence of the approach is well known in the analysis community; however, the version we present here contains some new developments. It is much more general and very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, to obtain regularity. We will use examples to show how this theorem can be applied to PDEs and to integral equations.

Chapter 4 is a preparation for Chapters 5 and 6. We introduce Riemannian manifolds, curvatures, covariant derivatives, and Sobolev embeddings on manifolds.

Chapter 5 deals with semilinear elliptic equations arising from prescribing Gaussian curvature, on both positively and negatively curved manifolds. We show the existence of weak solutions in critical cases via variational approaches. We also introduce the method of lower and upper solutions.

Chapter 6 focuses on solving a problem of prescribing scalar curvature on S^n for n >= 3. It is in the critical case, where the corresponding variational functional is not compact at any level set. To recover the compactness, we construct a max-mini variational scheme. The outline is clearly presented; however, the detailed proofs are rather complex. Beginners may skip these proofs.

Chapter 7 is devoted to the study of various maximum principles, in particular the ones based on comparisons. Besides classical ones, we also introduce a version of the maximum principle at infinity and a maximum principle for integral equations, which basically depends on the absolute continuity of the Lebesgue integral. It is a preparation for the method of moving planes.

In Chapter 8, we introduce the method of moving planes and its variant, the method of moving spheres, and apply them to obtain symmetry, monotonicity, a priori estimates, and even nonexistence of solutions. We also introduce an integral form of the method of moving planes, which is quite different from the one for PDEs. Instead of using local properties of a PDE, we exploit global properties of solutions for integral equations. Many research examples are illustrated, including the well-known one by Gidas, Ni, and Nirenberg, as well as a few from the authors' recent papers.

Chapters 7 and 8 function as a self-contained group. Readers who are only interested in the maximum principles and the method of moving planes can skip the first six chapters and start directly from Chapter 7.
Wenxiong Chen, Yeshiva University
Congming Li, University of Colorado
Contents

1 Introduction to Sobolev Spaces
  1.1 Distributions
  1.2 Sobolev Spaces
  1.3 Approximation by Smooth Functions
  1.4 Sobolev Embeddings
  1.5 Compact Embedding
  1.6 Other Basic Inequalities

2 Existence of Weak Solutions
  2.1 Second Order Elliptic Operators
  2.2 Weak Solutions
  2.3 Methods of Linear Functional Analysis
  2.4 Variational Methods

3 Regularity of Solutions
  3.1 W^{2,p} a Priori Estimates
  3.2 W^{2,p} Regularity of Solutions
  3.3 Regularity Lifting
  3.4 Applications to Integral Equations

4 Preliminary Analysis on Riemannian Manifolds
  4.1 Differentiable Manifolds
  4.2 Tangent Spaces
  4.3 Riemannian Metrics
  4.4 Curvature
  4.5 Calculus on Manifolds
  4.6 Sobolev Embeddings

5 Prescribing Gaussian Curvature on Compact 2-Manifolds
  5.1 Prescribing Gaussian Curvature on S^2
  5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
  5.3 The Method of Lower and Upper Solutions

6 Prescribing Scalar Curvature on S^n, for n >= 3
  6.1 Introduction
  6.2 The Variational Approach
  6.3 The A Priori Estimates

7 Maximum Principles
  7.1 Introduction
  7.2 Weak Maximum Principles
  7.3 The Hopf Lemma and Strong Maximum Principles
  7.4 Maximum Principles Based on Comparisons
  7.5 A Maximum Principle for Integral Equations

8 Methods of Moving Planes and of Moving Spheres
  8.1 Outline of the Method of Moving Planes
  8.2 Applications of the Maximum Principles Based on Comparisons
  8.3 Method of Moving Planes in a Local Way
  8.4 Method of Moving Spheres
  8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation

A Appendices
  A.1 Notations
  A.2 Inequalities
  A.3 Calderon-Zygmund Decomposition
  A.4 The Contraction Mapping Principle
  A.5 The Arzela-Ascoli Theorem

References
The fundamental research on the relations among various Sobolev Spaces (Sobolev norms) was ﬁrst carried out by G.6 Distributions Sobolev Spaces Approximation by Smooth Functions Sobolev Embeddings Compact Embedding Other Basic Inequalities 1. E.5 1. A. that is. Caﬀarelli.1 1. For these purposes. Hardy and J. many well known mathematicians. L.2 Poincar´’s Inequality e The Classical HardyLittlewoodSobolev Inequality In real life. For example. Lieb. Very often. More recently. J. Brezis. it is imperative that we need not only develop suitable metrics to measure diﬀerent states (functions). in the theory of partial diﬀerential equations.1 Introduction to Sobolev Spaces 1.3 1. the absolute value a − b is used to measure the diﬀerence between two numbers a and b. such as H. the Sobolev Spaces were introduced. and how close these solutions are to the real one depends on how we measure them. They have many applications in various branches of mathematics. people use numbers to quantify the surrounding objects. Littlewood in the 1910s and then by S. Serrin. In mathematics. in particular.6. and . L.6. Nirenberg. Sobolev in the 1930s. we use a sequence of approximate solutions to approach a real one.1 1. Hence. Chang. temperature is a function of time and place. which metric we are choosing.2 1. Functions are used to describe physical states.4 1. The role of Sobolev Spaces in the analysis of PDEs is somewhat similar to the role of Euclidean spaces in the study of geometry. but we must also study relationships among diﬀerent metrics.
From the deﬁnition of the functional in either (1. the method of functional analysis. (1. and seek critical points of the functional in that space. and which functions are ‘critically’ related to these sharp estimates. one can see that the function u in the space need not be second order diﬀerentiable as required by the classical solutions of (1. our intention is to view it as an operator A acting on some proper linear spaces X and Y of functions and to symbolically write the equation as Au = f (1.1) To prove the existence of solutions. instead of f (x). have worked in this area. given a PDE problem. which functions achieve these sharp estimates.’ to derive the existence. for instance. has seen more and more applications. Let Ω be a bounded domain in Rn and consider the Dirichlet problem associated with the Laplace equation: − u = f (x). what the sharp estimates are. the calculus of variations. Stein. Correspondingly.1). one may recover the diﬀerentiability of the solutions. by an appropriate regularity argument.1) in the classical sense. One may also consider the corresponding variational functional J(u) = 1 2  u2 d x − Ω Ω f (x) u d x (1. let’s start oﬀ with a simple example. u) d x. in particular. s) d x is an antiderivative of f (x. especially for nonlinear partial diﬀerential equations. The main objectives are to determine if and how the norms dominate each other.3) f (x. To ﬁnd the existence of weak solutions for partial diﬀerential equations. one may view − as an operator acting on a proper linear space and then apply some known principles of functional analysis. ·). in equation (1. To roughly illustrate this kind of application. u). x ∈ ∂Ω (1. we consider f (x. x ∈ Ω u = 0. For example. u) = 0 1 2  u2 d x − Ω u Ω F (x.2) in a proper linear space. This kind of variational approach is particularly powerful in dealing with nonlinear equations. Then it becomes a semilinear equation. In general.2 1 Introduction to Sobolev Spaces E.1).3). However. 
Hence the critical points of the functional are solutions of the problem only in the ‘weak’ sense.4) .2) or (1. so that they can still satisfy equation (1. the ‘ﬁxed point theory’ or ‘the degree theory. we have the functional J(u) = where F (x.
it is not so convenient to work directly on weak derivatives. Hence in Section 1. it is natural to ﬁrst ﬁnd a sequence of approximate solutions. The limit function of a convergent sequence of approximate solutions would be the desired exact solution of the equation. the Sobolev spaces are designed precisely for this purpose and will work out properly.3. and these ﬁrst derivatives need not be continuous or even be deﬁned everywhere. every bounded sequence has a weakly convergent subsequence.2. we show that.1 Distributions 3 Then we can apply the general and elegant principles of linear or nonlinear functional analysis to study the solvability of various equations involving A. we will introduce the distributions. Then in the next three sections. we can just work on smooth functions to establish a series of important inequalities. It is very common that people use “energy” minimization or conservation. In solving a partial diﬀerential equation. there are two basic stages in showing convergence: i) in a reﬂexive Banach space. Existence of Weak Solutions. In Section 1. to show the existence of such weak solutions. in many cases. and then go on to investigate the convergence of the sequence. the readers may have a glance at the introduction of the next chapter to gain more motivations for studying the Sobolev spaces. which are the elements of the Sobolev spaces. mainly the notion of the weak derivatives. the weak convergent sequence in the “stronger” space becomes a strong convergent sequence in the “weaker” space. We may also associate this operator with a functional J(·).4). what really involved were only the ﬁrst derivatives of u instead of the second derivatives as required classically for a second order equation.1 Distributions As we have seen in the introduction. One seeks solutions which are less diﬀerentiable but are easier to obtain. To derive many useful properties in Sobolev spaces. whose critical points are the solutions of the equation (1. . 
via a functional analysis approach. As we will see in the next few chapters, the key in this approach is to find an appropriate operator A and appropriate spaces X and Y; sometimes one must also use finite dimensional approximation. The advantage is that it allows one to divide the task of finding "suitable" smooth solutions for a PDE into two major steps:

Step 1. Existence of Weak Solutions. One seeks critical points, for instance of the functional J(u) in (1.2) or (1.3), in a space of functions possessing only weak derivatives. Here one can substantially weaken the notion of partial derivatives; in this process, i) the weak derivatives can be approximated by smooth functions, and ii) by the compact embedding from a "stronger" Sobolev space into a "weaker" one, one can pass to the limit along approximate solutions.

Step 2. Regularity Lifting. One uses various analysis tools to boost the differentiability of the known weak solutions and tries to show that they are actually classical ones.

Both the existence of weak solutions and the regularity lifting have become two major branches of today's PDE analysis, and the results can then be applied to a broad class of partial differential equations.

Various function spaces and the related embedding theories are basic tools in both analysis and PDEs, among which Sobolev spaces are the most frequently used ones. Before going into the details of this chapter, we first introduce the notion of "weak derivatives," which will be the elements of the Sobolev spaces; we then define Sobolev spaces in Section 1.2 and show, in Section 1.3, that these weak derivatives can be approximated by smooth functions.

1.1 Distributions

Let R^n be the n-dimensional Euclidean space and Ω an open connected subset of R^n. Let D(Ω) = C_0^∞(Ω) be the linear space of infinitely differentiable functions with compact support in Ω. This is called the space of test functions on Ω.

Example 1.1.1 Assume B_R(x°) := {x ∈ R^n | |x − x°| < R} ⊂ Ω. Then, for any r < R, the function

f(x) = exp{1/(|x − x°|² − r²)} for |x − x°| < r;  f(x) = 0 elsewhere

is in C_0^∞(Ω).

Example 1.1.2 Assume ρ ∈ C_0^∞(R^n), u ∈ L^p(Ω), and supp u ⊂ K ⊂⊂ Ω. Let

u_ε(x) := ρ_ε * u := ∫_{R^n} (1/ε^n) ρ((x − y)/ε) u(y) dy.

Then u_ε ∈ C_0^∞(Ω) for ε sufficiently small.

Let L^p_loc(Ω) be the space of p-th power locally summable functions, for 1 ≤ p ≤ ∞. Such functions are Lebesgue measurable functions f defined on Ω with the property that

‖f‖_{L^p(K)} := ( ∫_K |f(x)|^p dx )^{1/p} < ∞

for every compact subset K of Ω.

Assume that u is a C¹ function in Ω and φ ∈ D(Ω). Through integration by parts, we have, for i = 1, 2, ···, n,

∫_Ω (∂u/∂x_i) φ(x) dx = −∫_Ω u(x) (∂φ/∂x_i) dx.  (1.5)

There is no boundary term, because φ vanishes near the boundary. Now, if u is not in C¹(Ω), then ∂u/∂x_i does not exist. However, the integral on the right hand side of (1.5) still makes sense if u is a locally L¹ summable function. For this reason, we define ∂u/∂x_i weakly as the function v(x) that satisfies

∫_Ω v(x) φ(x) dx = −∫_Ω u (∂φ/∂x_i) dx

for all functions φ ∈ D(Ω).

The same idea works for higher partial derivatives. Let α = (α₁, α₂, ···, α_n) be a multiindex of order k := |α| := α₁ + α₂ + ··· + α_n. For u ∈ C^k(Ω), the regular α-th partial derivative of u is

D^α u = ∂^{|α|} u / (∂x₁^{α₁} ∂x₂^{α₂} ··· ∂x_n^{α_n}).

Given any test function φ ∈ D(Ω), through a straightforward integration by parts k times, we arrive at

∫_Ω D^α u · φ(x) dx = (−1)^{|α|} ∫_Ω u D^α φ dx.  (1.6)

Now, if u is not k times differentiable, the left hand side of (1.6) makes no sense; however, the right hand side is valid for functions u with much weaker differentiability: u only needs to be locally L¹ summable. Thus it is natural to choose those functions v that satisfy (1.6) as the weak representatives of D^α u. More precisely, we have

Definition 1.1.1 For u, v ∈ L¹_loc(Ω), we say that v is the α-th weak derivative of u, written v = D^α u, provided

∫_Ω v(x) φ(x) dx = (−1)^{|α|} ∫_Ω u D^α φ dx

for all test functions φ ∈ D(Ω).

Example 1.1.3 For n = 1 and Ω = (−π, 1), let

u(x) = cos x if −π < x ≤ 0;  u(x) = 1 − x if 0 < x < 1.

Then its weak derivative u′(x) can be represented by

v(x) = −sin x if −π < x ≤ 0;  v(x) = −1 if 0 < x < 1.

To see this, we verify, through integration by parts, that for any φ ∈ D(Ω),

∫_{−π}^1 u(x) φ′(x) dx = −∫_{−π}^1 v(x) φ(x) dx.  (1.7)

In fact,

∫_{−π}^1 u(x) φ′(x) dx = ∫_{−π}^0 cos x · φ′(x) dx + ∫_0^1 (1 − x) φ′(x) dx
 = ∫_{−π}^0 sin x · φ(x) dx + φ(0) − φ(0) + ∫_0^1 φ(x) dx
 = −∫_{−π}^1 v(x) φ(x) dx.

In this example, the function u is not differentiable at x = 0 in the classical sense. Also, one may alter the values of the weak derivative v(x) on a set of measure zero, and (1.7) still holds: since the weak derivative is defined by integrals, it is unique only up to a set of measure zero. More precisely, we have

Lemma 1.1.1 (Uniqueness of Weak Derivatives). If v and w are both weak α-th partial derivatives of u, then v(x) = w(x) almost everywhere.

Proof. By the definition of the weak derivatives, we have, for any φ ∈ D(Ω),

∫_Ω v(x) φ(x) dx = (−1)^{|α|} ∫_Ω u(x) D^α φ dx = ∫_Ω w(x) φ(x) dx.

It follows that

∫_Ω (v(x) − w(x)) φ(x) dx = 0 ∀φ ∈ D(Ω).

Therefore, we must have v(x) = w(x) almost everywhere.

From the definition, one can see that a weak derivative D^α u can be viewed as a linear functional acting on the space of test functions D(Ω); we call it a distribution.

Definition 1.1.2 A distribution is a continuous linear functional on D(Ω). The linear space of distributions, or generalized functions, on Ω, denoted by D′(Ω), is the collection of all continuous linear functionals on D(Ω).

Here, the continuity of a functional T on D(Ω) means that, for any sequence {φ_k} ⊂ D(Ω) with φ_k → φ in D(Ω), we have T(φ_k) → T(φ) as k → ∞; and we say that φ_k → φ in D(Ω) if

a) there exists K ⊂⊂ Ω such that supp φ_k, supp φ ⊂ K, and
b) for any α, D^α φ_k → D^α φ uniformly as k → ∞.

The most important and most commonly used distributions are locally summable functions. In fact, for any f ∈ L^p_loc(Ω) with 1 ≤ p ≤ ∞, consider

T_f(φ) = ∫_Ω f(x) φ(x) dx, ∀φ ∈ D(Ω).

It is easy to verify that T_f(·) is a continuous linear functional on D(Ω), and hence it is a distribution. Conversely, given a distribution μ, if there is an f ∈ L¹_loc(Ω) such that

μ(φ) = T_f(φ), ∀φ ∈ D(Ω),

then we say that μ is (or can be realized as) a locally summable function, and we identify it with f.

An interesting example of a distribution that is not a locally summable function is the well-known Dirac delta function. Let x° be a point in Ω. Then the delta function at x° can be defined as

δ_{x°}(φ) = φ(x°), ∀φ ∈ D(Ω).

(Compare this with the definition of weak derivatives.) This kind of "function" has been used widely and so successfully by physicists and engineers, who often simply view δ_{x°} as

δ_{x°}(x) = 0 for x ≠ x°;  δ_{x°}(x) = ∞ for x = x°.

However, one can show that such a delta function is not locally summable; it is not a function at all. Surprisingly, such a delta function is the derivative of some function in the following distributional sense. To explain this, let Ω = (−1, 1), and let

f(x) = 0 for x < 0;  f(x) = 1 for x ≥ 0.

Then, for any φ ∈ D(Ω),

−∫_{−1}^1 f(x) φ′(x) dx = −∫_0^1 φ′(x) dx = φ(0) = δ_0(φ).

Hence we may regard δ_0(x) as f′(x) in the sense of distributions.
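The identity (1.7) lends itself to a quick numerical sanity check. The sketch below is our own illustration, not part of the text: it evaluates both sides of (1.7) for the u and v of Example 1.1.3 against one concrete C² bump φ(x) = (0.81 − x²)³ on |x| < 0.9; the particular φ, the grid size, and the helper name trap are all arbitrary choices of ours.

```python
import numpy as np

def trap(y, x):
    # simple trapezoid quadrature, avoiding version-specific numpy helpers
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def u(x):   # the function of Example 1.1.3
    return np.where(x <= 0, np.cos(x), 1.0 - x)

def v(x):   # its claimed weak derivative
    return np.where(x <= 0, -np.sin(x), -1.0)

def phi(x):        # a C^2 test function supported in (-0.9, 0.9) ⊂ (-π, 1)
    return np.where(np.abs(x) < 0.9, (0.81 - x ** 2) ** 3, 0.0)

def phi_prime(x):
    return np.where(np.abs(x) < 0.9, -6.0 * x * (0.81 - x ** 2) ** 2, 0.0)

x = np.linspace(-np.pi, 1.0, 400001)
lhs = trap(u(x) * phi_prime(x), x)     # ∫ u φ' dx
rhs = -trap(v(x) * phi(x), x)          # -∫ v φ dx
print(lhs, rhs)                        # agree up to quadrature error
```

The jump of v at 0 is harmless here: it is exactly what integration by parts over the two subintervals produces, since u itself is continuous at 0.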
1.2 Sobolev Spaces
Now suppose, given a function f ∈ L^p(Ω), we want to solve the partial differential equation

Δu = f(x)

in the sense of weak derivatives. Naturally, we would seek solutions u such that Δu is in L^p(Ω); more generally, we would start from the collection of all distributions whose second weak derivatives are in L^p(Ω). In a variational approach, as we have seen in the introduction, to seek critical points of the functional

(1/2) ∫_Ω |∇u|² dx − ∫_Ω F(x, u) dx,

a natural set of functions to start with is the collection of distributions whose first weak derivatives are in L²(Ω). More generally, we have

Definition 1.2.1 The Sobolev space W^{k,p}(Ω) (k ≥ 0 and p ≥ 1) is the collection of all distributions u on Ω such that, for every multiindex α with |α| ≤ k, D^α u can be realized as an L^p function on Ω. Furthermore, W^{k,p}(Ω) is a Banach space with the norm

‖u‖ := ( Σ_{|α| ≤ k} ∫_Ω |D^α u(x)|^p dx )^{1/p}.

In the special case p = 2, it is also a Hilbert space, and we usually denote it by H^k(Ω).

Definition 1.2.2 W_0^{k,p}(Ω) is the closure of C_0^∞(Ω) in W^{k,p}(Ω). Intuitively, W_0^{k,p}(Ω) is the space of functions whose derivatives up to order (k − 1) vanish on the boundary.
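For intuition, the norm of Definition 1.2.1 (with k = 1 and n = 1) can be approximated on a grid. The sketch below is our own illustration; the helper name w1p_norm and the discretization are arbitrary choices, with difference quotients standing in for the derivative of a smooth function.

```python
import numpy as np

def w1p_norm(f, a, b, p=2, m=200000):
    """Approximate the W^{1,p}(a,b) norm of a smooth f:
    (∫|f|^p + ∫|f'|^p)^{1/p}, with f' replaced by forward differences."""
    x = np.linspace(a, b, m + 1)
    y = f(x)
    h = (b - a) / m
    dy = np.diff(y) / h                       # ≈ f' at cell midpoints
    int_f = np.sum(np.abs(y[:-1]) ** p) * h   # ≈ ∫ |f|^p dx
    int_df = np.sum(np.abs(dy) ** p) * h      # ≈ ∫ |f'|^p dx
    return (int_f + int_df) ** (1.0 / p)

# For u = sin on (0, π): ∫ sin² + ∫ cos² = π/2 + π/2 = π, so ‖u‖_{W^{1,2}} = √π
print(w1p_norm(np.sin, 0.0, np.pi))
```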
Example 1.2.1 Let Ω = (−1, 1). Consider the function f(x) = |x|^β. For 0 < β < 1, it is obviously not differentiable at the origin; however, for any 1 ≤ p < 1/(1 − β), it is in the Sobolev space W^{1,p}(Ω). More generally, let Ω be the open unit ball centered at the origin in R^n; then the function |x|^β is in W^{1,p}(Ω) if and only if

β > 1 − n/p.  (1.8)

To see this, we first calculate, for x ≠ 0,

f_{x_i}(x) = β x_i / |x|^{2−β},

and hence
|∇f(x)| = β / |x|^{1−β}.  (1.9)

Fix a small ε > 0. Then for any φ ∈ D(Ω), by integration by parts over Ω \ B_ε(0), we have

∫_{Ω\B_ε(0)} f_{x_i}(x) φ(x) dx + ∫_{Ω\B_ε(0)} f(x) φ_{x_i}(x) dx = −∫_{∂B_ε(0)} f φ ν_i dS,

where ν = (ν₁, ν₂, ···, ν_n) is the inward normal vector of the domain Ω \ B_ε(0) on ∂B_ε(0). Now, under the condition β > 1 − n/p, f_{x_i} is in L^p(Ω) ⊂ L¹(Ω), and

| ∫_{∂B_ε(0)} f φ ν_i dS | ≤ C ε^{n−1+β} → 0, as ε → 0.

It follows that

∫_Ω |x|^β φ_{x_i}(x) dx = −∫_Ω (β x_i / |x|^{2−β}) φ(x) dx.

Therefore, the weak first partial derivatives of |x|^β are

β x_i / |x|^{2−β}, i = 1, 2, ···, n.

Moreover, from (1.9), one can see that |∇f| is in L^p(Ω) if and only if β > 1 − n/p.
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance when proving inequalities, it may feel inconvenient and cumbersome to manipulate the weak derivatives directly from the definition. To get around this, in this section we will show that any function in a Sobolev space can be approached by a sequence of smooth functions; in other words, the smooth functions are dense in Sobolev spaces. Based on this, when deriving many useful properties of Sobolev spaces, we can just work on smooth functions and then take limits. At the end of the section, for more convenient application of the approximation results, we introduce an Operator Completion Theorem; we also prove an Extension Theorem. Both theorems will be used frequently in the next few sections.

The idea of the approximation is based on mollifiers. Let

j(x) = c e^{1/(|x|²−1)} if |x| < 1;  j(x) = 0 if |x| ≥ 1.

One can verify that j(x) ∈ C_0^∞(B₁(0)). Choose the constant c such that

∫_{R^n} j(x) dx = 1.

For each ε > 0, define

j_ε(x) = (1/ε^n) j(x/ε).

Then obviously

∫_{R^n} j_ε(x) dx = 1, ∀ε > 0.

One can also verify that j_ε ∈ C_0^∞(B_ε(0)), and

lim_{ε→0} j_ε(x) = 0 for x ≠ 0;  = ∞ for x = 0.

The above observations suggest that the limit of j_ε(x) may be viewed as a delta function, and from the well-known property of the delta function, we would naturally expect, for any continuous function f(x) and for any point x ∈ Ω,

(J_ε f)(x) := ∫_Ω j_ε(x − y) f(y) dy → f(x), as ε → 0.  (1.10)

More generally, if f(x) is only in L^p(Ω), we would expect (1.10) to hold almost everywhere. We call j_ε(x) a mollifier and (J_ε f)(x) the mollification of f(x). We will show that, for each ε > 0, J_ε f is a C^∞ function, and that, as ε → 0, J_ε f → f in W^{k,p}.

Notice that, actually,

(J_ε f)(x) = ∫_{B_ε(x) ∩ Ω} j_ε(x − y) f(y) dy;

hence, in order that (J_ε f)(x) approximate f(x) well, we need B_ε(x) to be completely contained in Ω, to ensure that

∫_{B_ε(x) ∩ Ω} j_ε(x − y) dy = 1

(one of the important properties of the delta function). Equivalently, we need x to be in the interior of Ω. For this reason, we first prove a local approximation theorem.

Theorem 1.3.1 (Local approximation by smooth functions). For any f ∈ W^{k,p}(Ω), J_ε f ∈ C^∞(R^n), and J_ε f → f in W_{loc}^{k,p}(Ω) as ε → 0.
Then, to extend this result to the entire Ω, we will choose infinitely many open sets O_i, i = 1, 2, ···, each of which has a positive distance to the boundary of Ω, and whose union is the whole Ω. Based on the above theorem, we are able to approximate a W^{k,p}(Ω) function on each O_i by a sequence of smooth functions. Combining this with a partition of unity, and with a cut off function if Ω is unbounded, we will then prove

Theorem 1.3.2 (Global approximation by smooth functions). For any f ∈ W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂ C^∞(Ω) ∩ W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.

Theorem 1.3.3 (Global approximation by smooth functions up to the boundary). Assume that Ω is bounded with C¹ boundary ∂Ω. Then for any f ∈ W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂ C^∞(Ω̄) = C^∞(Ω̄) ∩ W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.

When Ω is the entire space R^n, approximation by C^∞ functions and approximation by C_0^∞ functions are essentially the same. We have

Theorem 1.3.4 W^{k,p}(R^n) = W_0^{k,p}(R^n). In other words, for any f ∈ W^{k,p}(R^n), there exists a sequence of functions {f_m} ⊂ C_0^∞(R^n) such that

f_m → f, as m → ∞, in W^{k,p}(R^n).
Proof of Theorem 1.3.1. We prove the theorem in three steps.

In Step 1, we show that J_ε f ∈ C^∞(R^n) and

‖J_ε f‖_{L^p(Ω)} ≤ ‖f‖_{L^p(Ω)}.  (1.12)

From the definition of (J_ε f)(x), we can see that it is well defined for all x ∈ R^n, and that it vanishes if x is of distance ε away from Ω. Here and in the following, for simplicity of argument, we extend f to be zero outside of Ω.

In Step 2, we prove that, if f is in L^p(Ω), then J_ε f → f in L^p_loc(Ω). We first verify this for continuous functions and then approximate L^p functions by continuous functions.

In Step 3, we reach the conclusion of the Theorem. For each f ∈ W^{k,p}(Ω) and |α| ≤ k, D^α f is in L^p(Ω). Then, from the result in Step 2, we have

J_ε(D^α f) → D^α f in L^p_loc(Ω).

Hence what we need to verify is

D^α(J_ε f)(x) = J_ε(D^α f)(x).

As the readers will notice, the arguments in the last two steps only work in compact subsets of Ω.

Step 1. Let e_i = (0, ···, 0, 1, 0, ···, 0) be the unit vector in the x_i direction. Fix ε > 0 and x ∈ R^n. By the definition of J_ε f, we have, for |h| < ε,

[(J_ε f)(x + h e_i) − (J_ε f)(x)] / h = ∫_{B_{2ε}(x)∩Ω} [j_ε(x + h e_i − y) − j_ε(x − y)] / h · f(y) dy.  (1.11)

Since, as h → 0,

[j_ε(x + h e_i − y) − j_ε(x − y)] / h → ∂j_ε(x − y)/∂x_i

uniformly for all y ∈ B_{2ε}(x) ∩ Ω, we can pass the limit through the integral sign in (1.11) to obtain

∂(J_ε f)(x)/∂x_i = ∫_Ω ∂j_ε(x − y)/∂x_i · f(y) dy.

Similarly, we have

D^α(J_ε f)(x) = ∫_Ω D_x^α j_ε(x − y) f(y) dy.

Noticing that j_ε(·) is infinitely differentiable, we conclude that J_ε f is also infinitely differentiable.

Then we derive (1.12). By the Hölder inequality, we have

|(J_ε f)(x)| = | ∫_Ω j_ε^{(p−1)/p}(x − y) · j_ε^{1/p}(x − y) f(y) dy |
 ≤ ( ∫_Ω j_ε(x − y) dy )^{(p−1)/p} ( ∫_Ω j_ε(x − y) |f(y)|^p dy )^{1/p}
 ≤ ( ∫_{B_ε(x)} j_ε(x − y) |f(y)|^p dy )^{1/p}.

It follows that

∫_Ω |(J_ε f)(x)|^p dx ≤ ∫_Ω ∫_{B_ε(x)} j_ε(x − y) |f(y)|^p dy dx
 ≤ ∫_Ω |f(y)|^p ( ∫_{B_ε(y)} j_ε(x − y) dx ) dy
 ≤ ∫_Ω |f(y)|^p dy.

This verifies (1.12).

Step 2. We prove that, for any compact subset K of Ω,

‖J_ε f − f‖_{L^p(K)} → 0, as ε → 0.  (1.13)

We first show this for a continuous function f. By writing

(J_ε f)(x) − f(x) = ∫_{B_ε(0)} j_ε(y) [f(x − y) − f(x)] dy,

we have

|(J_ε f)(x) − f(x)| ≤ max_{x∈K, |y|<ε} |f(x − y) − f(x)| ∫_{B_ε(0)} j_ε(y) dy = max_{x∈K, |y|<ε} |f(x − y) − f(x)|.

Due to the continuity of f and the compactness of K, the last term in the above inequality tends to zero uniformly as ε → 0. This verifies (1.13) for continuous functions.

For any f in L^p(Ω) and any given δ > 0, choose a continuous function g such that

‖f − g‖_{L^p(Ω)} < δ/3.  (1.14)

This can be derived from the well-known fact that any L^p function can be approximated by a simple function of the form Σ_{j=1}^k a_j χ_j(x), where χ_j is the characteristic function of some measurable set A_j, and each simple function can in turn be approximated by a continuous function. For the continuous function g, (1.13) infers that, for sufficiently small ε,

‖J_ε g − g‖_{L^p(K)} < δ/3.

It follows from this and (1.14) that

‖J_ε f − f‖_{L^p(K)} ≤ ‖J_ε f − J_ε g‖_{L^p(K)} + ‖J_ε g − g‖_{L^p(K)} + ‖g − f‖_{L^p(K)}
 ≤ 2 ‖f − g‖_{L^p(K)} + ‖J_ε g − g‖_{L^p(K)} ≤ 2δ/3 + δ/3 = δ,

where we have used (1.12) applied to the difference f − g. This proves (1.13).

Step 3. Now assume that f ∈ W^{k,p}(Ω). Then, for any α with |α| ≤ k, we have D^α f ∈ L^p(Ω), and by the result in Step 2,

J_ε(D^α f) → D^α f in L^p(K).  (1.15)

We show that, for sufficiently small ε,

D^α(J_ε f)(x) = J_ε(D^α f)(x), ∀x ∈ K.

To see this, we fix a point x in K and choose ε < (1/2) dist(K, ∂Ω); then any point y in the ball B_ε(x) is in the interior of Ω. Consequently, through a straightforward integration by parts |α| times,

D^α(J_ε f)(x) = ∫_Ω D_x^α j_ε(x − y) f(y) dy = ∫_{B_ε(x)} D_x^α j_ε(x − y) f(y) dy
 = (−1)^{|α|} ∫_{B_ε(x)} D_y^α j_ε(x − y) f(y) dy
 = ∫_{B_ε(x)} j_ε(x − y) D^α f(y) dy = ∫_Ω j_ε(x − y) D^α f(y) dy.

Here we have used the fact that j_ε(x − ·) and all its derivatives are supported in B_ε(x) ⊂ Ω. Therefore, by (1.15),

D^α(J_ε f) → D^α f in L^p(K).

This completes the proof of Theorem 1.3.1.
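The commutation identity D^α(J_ε f) = J_ε(D^α f) at the heart of Step 3 can also be checked numerically in one dimension: differentiate the mollification of f(x) = |x| and compare with the mollification of its weak derivative sgn x. The sketch below is our own illustration; step sizes, grids, and the quadrature are arbitrary choices.

```python
import numpy as np

def j(t):
    # normalized 1-D mollifier supported in (-1, 1); ∫ e^{1/(t²-1)} dt ≈ 0.443994
    out = np.zeros_like(t)
    m = np.abs(t) < 1.0
    out[m] = np.exp(1.0 / (t[m] ** 2 - 1.0))
    return out / 0.443994

def mollify(f, x, eps, m=4001):
    t = np.linspace(-1.0, 1.0, m)
    w = j(t) * (2.0 / (m - 1))        # quadrature weights; sum(w) ≈ 1
    return np.array([np.sum(w * f(xi - eps * t)) for xi in x])

f = np.abs
df = np.sign                           # the weak derivative of |x|
eps, h = 0.1, 1e-5
x = np.linspace(-0.5, 0.5, 201)

# centered difference of J_eps f  vs  mollification of the weak derivative
lhs = (mollify(f, x + h, eps) - mollify(f, x - h, eps)) / (2.0 * h)
rhs = mollify(df, x, eps)
print(float(np.max(np.abs(lhs - rhs))))   # small: the two sides agree
```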
The Proof of Theorem 1.3.2.

Step 1. For a Bounded Region Ω. Let

Ω_i = {x ∈ Ω | dist(x, ∂Ω) > 1/i}, i = 1, 2, 3, ···.

Write O_i = Ω_{i+3} \ Ω̄_{i+1}, i = 1, 2, ···, and choose some open set O_0 ⊂⊂ Ω so that Ω = ∪_{i=0}^∞ O_i. Choose a smooth partition of unity {η_i}_{i=0}^∞ associated with the open sets {O_i}_{i=0}^∞:

0 ≤ η_i(x) ≤ 1, η_i ∈ C_0^∞(O_i), Σ_{i=0}^∞ η_i(x) = 1, x ∈ Ω.

Given any function f ∈ W^{k,p}(Ω), obviously η_i f ∈ W^{k,p}(Ω) and supp(η_i f) ⊂ O_i. Fix a δ > 0. Choose ε_i > 0 so small that f_i := J_{ε_i}(η_i f) satisfies

‖f_i − η_i f‖_{W^{k,p}(Ω)} ≤ δ/2^{i+1}, i = 0, 1, 2, ···,  (1.16)

supp f_i ⊂ (Ω_{i+4} \ Ω̄_i), i = 1, 2, ···.
0. Exercise 1. neither fi nor ηi f converge. Then g ∈ C ∞ (Ω). and Dα φ(x) ≤ 1 ∀x ∈ Rn . Hint: Show that the partial sum is a Cauchy sequence.1 In the above argument. hence (g − f ) ∈ W (Ω).p (Ω).17). Remark 1. such that f W k. it is easy to verify that.16) that ∞ ∞ fi − η i f i=0 W k. Although each partial sum of fi is in C0 (Ω).1 below).p (Ω) ∞ i=0 (fi −ηi f ) k.p (Ω\BR−2 (0)) ≤ δ.18) Now in the bounded domain Ω ∩ BR (0).1 . Given any δ > 0. there exist R > 0. 2i+1 From here we can see that the series converges in W k. (1. ∀α ≤ k. Moreover. Then by (1. but ∞ their diﬀerence converges.3. ∞ i vi converges Step 2. k.p (Ω)? We showed that the diﬀerence (g − f ) is in W k.17) Choose a cut oﬀ function φ ∈ C ∞ (Rn ) satisfying φ(x) = 1. since f ∈ W k.3. there are at most ﬁnitely many nonzero terms in the sum.p (Ω).3 Approximation by Smooth Functions ∞ 15 Set g = i=0 fi . such that φf − f W k. Then how did we prove that g ∈ W k. and therefore g ∈ W k.p the inﬁnite series fi does not converge to g in W (Ω). we complete step 1. from the above inequality. we write ∞ g−f = i=0 (fi − ηi f ).p (Ω) ≤δ i=0 1 = δ.p (Ω). by the argument in Step 1.p (Ω) ≤ Cδ.3. because each fi is in C ∞ (Ω). there exists a constant C. Show that the series ∞ in X if i vi < ∞. x ∈ BR−2 (0).p (Ω). It follows from (1. For Unbounded Region Ω.p (Ω) (see Exercise 1. Let X be a Banach space and vi ∈ X.1. we have g−f W k. Since δ > 0 is any number. (1.p ≤ δ. there is a function g ∈ C ∞ (Ω ∩ BR (0)). To see that g ∈ W k. such that . and for each open set O ⊂⊂ Ω. x ∈ Rn \ BR (0).
‖g − f‖_{W^{k,p}(Ω ∩ B_R(0))} ≤ δ.  (1.19)

Obviously, the function φg is in C^∞(Ω), and by (1.18) and (1.19),

‖φg − f‖_{W^{k,p}(Ω)} ≤ ‖φg − φf‖_{W^{k,p}(Ω)} + ‖φf − f‖_{W^{k,p}(Ω)} ≤ C₁ ‖g − f‖_{W^{k,p}(Ω ∩ B_R(0))} + Cδ ≤ (C₁ + C)δ.

This completes the proof of Theorem 1.3.2.

The Proof of Theorem 1.3.3. We will still use mollifiers to approximate a given function. As compared to the proofs of the previous theorems, the main difficulty here is that for a point on the boundary there is no room to mollify. To circumvent this, we will first translate the function a little bit inward, so that there is room to mollify within Ω, and then mollify it. After that, we cover Ω̄ with finitely many open sets, approximate in this manner on each set that covers the boundary layer, and again use a partition of unity to complete the proof.

Step 1. Approximating in a Small Set Covering ∂Ω. Let x° be any point on ∂Ω. Since ∂Ω is C¹, we can make a C¹ change of coordinates locally, so that in the new coordinate system (x′, x_n), for a sufficiently small r > 0, we can express

B_r(x°) ∩ Ω = {x ∈ B_r(x°) | x_n > Φ(x₁, ···, x_{n−1})}

with some C¹ function Φ. Set D = Ω ∩ B_{r/2}(x°). Shift every point x ∈ D in the x_n direction by aε units: define x^ε = x + aε e_n, and let D^ε = {x^ε | x ∈ D}. This D^ε is obtained by shifting D toward the inside of Ω by aε units. Choose a sufficiently large, so that the ball B_ε(x) lies in Ω ∩ B_r(x°) for all x ∈ D^ε and for all small ε > 0; this gives room to mollify a given W^{k,p} function f on D^ε within Ω. More precisely, we first translate f a distance aε in the x_n direction, to become f^ε(x) = f(x^ε), and then mollify it. We claim that J_ε f^ε → f in W^{k,p}(D). Indeed, for any multiindex α with |α| ≤ k,

‖D^α(J_ε f^ε) − D^α f‖_{L^p(D)} ≤ ‖D^α(J_ε f^ε) − D^α f^ε‖_{L^p(D)} + ‖D^α f^ε − D^α f‖_{L^p(D)}.

A similar argument as in the proof of Theorem 1.3.1 implies that the first term on the right hand side goes to zero as ε → 0, while the second term also vanishes in the process, due to the continuity of translation in the L^p norm.

Step 2. Applying the Partition of Unity. Since ∂Ω is compact, we can find finitely many such sets D, call them D_i, i = 1, 2, ···, N, the union of which covers ∂Ω. Given δ > 0, from the argument in Step 1, for each D_i there exists g_i ∈ C^∞(D̄_i) such that

‖g_i − f‖_{W^{k,p}(D_i)} ≤ δ.  (1.20)

Choose an open set D_0 ⊂⊂ Ω such that Ω ⊂ ∪_{i=0}^N D_i, and select a function g_0 ∈ C^∞(D̄_0) such that

‖g_0 − f‖_{W^{k,p}(D_0)} ≤ δ.  (1.21)

Let {η_i} be a smooth partition of unity subordinate to the open sets {D_i}_{i=0}^N. Define

g = Σ_{i=0}^N η_i g_i.

Then obviously g ∈ C^∞(Ω̄), and f = Σ_{i=0}^N η_i f. Thus, for each |α| ≤ k,

‖D^α g − D^α f‖_{L^p(Ω)} ≤ Σ_{i=0}^N ‖D^α(η_i g_i) − D^α(η_i f)‖_{L^p(D_i)} ≤ C Σ_{i=0}^N ‖g_i − f‖_{W^{k,p}(D_i)} = C(N + 1)δ.  (1.22)

This completes the proof of Theorem 1.3.3.

The Proof of Theorem 1.3.4. Let φ(r) be a C_0^∞ cut off function such that

φ(r) = 1 for 0 ≤ r ≤ 1;  φ(r) is between 0 and 1 for 1 < r < 2;  φ(r) = 0 for r ≥ 2.

Then, by a direct computation, one can show that

‖φ(|x|/R) f(x) − f(x)‖_{W^{k,p}(R^n)} → 0, as R → ∞.

Thus there exists a sequence of numbers {R_m} with R_m → ∞, such that for
g_m(x) := φ(|x|/R_m) f(x),

we have

‖g_m − f‖_{W^{k,p}(R^n)} ≤ 1/m.  (1.23)

Obviously, each g_m has compact support, and from the local Approximation Theorem 1.3.1,

J_ε(g_m) → g_m, as ε → 0, in W^{k,p}(R^n).

Hence, for each fixed m, there exists ε_m such that f_m := J_{ε_m}(g_m) satisfies

‖f_m − g_m‖_{W^{k,p}(R^n)} ≤ 1/m.  (1.24)

Each function f_m is in C_0^∞(R^n), and by (1.23) and (1.24),

‖f_m − f‖_{W^{k,p}(R^n)} ≤ ‖f_m − g_m‖_{W^{k,p}(R^n)} + ‖g_m − f‖_{W^{k,p}(R^n)} ≤ 2/m → 0, as m → ∞.

This completes the proof of Theorem 1.3.4.

Now we have proved all four Approximation Theorems, which show that smooth functions are dense in Sobolev spaces W^{k,p}; in other words, W^{k,p} is the completion of C^k under the norm ‖·‖_{W^{k,p}}. Later, when we derive various inequalities in Sobolev spaces, particularly in the next three sections, we can first work on smooth functions and then extend the results to the whole Sobolev space, without going through the approximation process in each particular case. In order to make such extensions more convenient, we prove the following Operator Completion Theorem in general Banach spaces.

Theorem 1.3.5 Let D be a dense linear subspace of a normed space X, and let Y be a Banach space. Assume that T : D → Y is a bounded linear map. Then there exists an extension T̄ of T from D to the whole space X, such that T̄ is a bounded linear operator from X to Y with

‖T̄‖ = ‖T‖ and T̄x = Tx ∀x ∈ D.

Proof. Given any element x_o ∈ X, since D is dense in X, there exists a sequence {x_i} ⊂ D such that x_i → x_o. It follows that

‖Tx_i − Tx_j‖ ≤ ‖T‖ ‖x_i − x_j‖ → 0, as i, j → ∞.

This implies that {Tx_i} is a Cauchy sequence in Y. Since Y is a Banach space, {Tx_i} converges to some element y_o in Y. Let

T̄x_o = y_o.  (1.25)

To see that (1.25) is well defined, suppose there is another sequence {x_i′} that converges to x_o in X, with Tx_i′ → y₁ in Y. Then

‖y₁ − y_o‖ = lim_{i→∞} ‖Tx_i′ − Tx_i‖ ≤ ‖T‖ lim_{i→∞} ‖x_i′ − x_i‖ = 0.

Now, for x ∈ D, define T̄x = Tx; for other x ∈ X, define T̄x by (1.25). Obviously, T̄ is a linear operator, and

‖T̄x_o‖ = lim_{i→∞} ‖Tx_i‖ ≤ ‖T‖ lim_{i→∞} ‖x_i‖ = ‖T‖ ‖x_o‖.

Hence T̄ is also bounded. This completes the proof.

As a direct application of this Operator Completion Theorem, we prove the following Extension Theorem.

Theorem 1.3.6 Assume that Ω is bounded and ∂Ω is C^k. Then for any open set O ⊃ Ω̄, there exists a bounded linear operator

E : W^{k,p}(Ω) → W_0^{k,p}(O)

such that Eu = u a.e. in Ω.

Proof. To define the extension operator E, we first work on functions u ∈ C^k(Ω̄). Since C^k(Ω̄) is dense in W^{k,p}(Ω) (see Theorem 1.3.3), we can then apply the Operator Completion Theorem to extend E to the whole of W^{k,p}(Ω), based on

E(u)(x) = u(x) on Ω, ∀u ∈ C^k(Ω̄);

from this density, it is easy to show that, for u ∈ W^{k,p}(Ω), E(u)(x) = u(x) almost everywhere in Ω.

Now we prove the theorem for u ∈ C^k(Ω̄). We define E(u) in the following two steps.

Step 1. The special case when Ω is a half ball,

B_r^+(0) := {x = (x′, x_n) ∈ R^n | |x| < r, x_n > 0}.

Assume u ∈ C^k(B̄_r^+(0)). We extend u to be a C^k function on the whole ball B_r(0). Define

ū(x′, x_n) = u(x′, x_n) for x_n ≥ 0;  ū(x′, x_n) = Σ_{i=0}^k a_i u(x′, −λ_i x_n) for x_n < 0.

To guarantee that all the partial derivatives ∂^j ū/∂x_n^j up to order k are continuous across the hyperplane x_n = 0, we first pick λ_i, i = 0, 1, ···, k, such that

0 < λ_0 < λ_1 < ··· < λ_k < 1,

and then solve the algebraic system

Σ_{i=0}^k a_i (−λ_i)^j = 1, j = 0, 1, ···, k,

to determine a_0, a_1, ···, a_k. Since the coefficient determinant det((−λ_i)^j) is a Vandermonde determinant with distinct nodes, it is not zero, and hence the system has a unique solution. One can easily verify that, for such λ_i and a_i, the extended function ū is in C^k(B_r(0)).

Step 2. Reduce the general domain to the case of the half ball covered in Step 1, via domain transformations and a partition of unity. Given any x° ∈ ∂Ω, there exist a neighborhood U of x° and a C^k transformation φ : U → R^n which satisfies: φ(x°) = 0, φ(U ∩ ∂Ω) lies on the hyperplane x_n = 0, and φ(U ∩ Ω) is contained in R_+^n. Then there exists r_o > 0 such that B_{r_o}(0) ⊂⊂ φ(U), and

D_{x°} := φ^{−1}(B_{r_o}(0))

is an open set containing x°, with φ ∈ C^k(D_{x°}). Let O be an open set containing Ω̄, and choose r_o sufficiently small so that D_{x°} ⊂ O. All such D_{x°}, together with Ω, form an open covering of Ω̄; hence there exist a finite subcovering D_0 := Ω, D_1, ···, D_m and a corresponding partition of unity η_0, η_1, ···, η_m, such that

η_i ∈ C_0^∞(D_i), i = 0, 1, ···, m, and Σ_{i=0}^m η_i(x) = 1 ∀x ∈ Ω̄.

Let φ_i be the mapping associated with D_i as described above, and let ũ_i(y) = u(φ_i^{−1}(y)) be the function defined on the half ball

B_{r_i}^+(0) = φ_i(D_i ∩ Ω).

From Step 1, each ũ_i(y) can be extended as a C^k function ū_i(y) onto the whole ball B_{r_i}(0). Now we can define the extension of u from Ω to its neighborhood O as

E(u) = Σ_{i=1}^m η_i(x) ū_i(φ_i(x)) + η_0(x) u(x).
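The system for the a_i in Step 1 is of Vandermonde type and can be solved directly. The sketch below is our own illustration with k = 2 and arbitrarily chosen λ_i: it solves Σ_i a_i(−λ_i)^j = 1 for j = 0, 1, 2 and then checks that the resulting one-dimensional reflection ū of the sample function u(x_n) = e^{x_n} is continuous across x_n = 0 with matching first derivative.

```python
import numpy as np

k = 2
lam = np.array([0.25, 0.5, 0.75])     # 0 < λ_0 < λ_1 < λ_2 < 1, our choice
# rows j = 0..k of  Σ_i a_i (-λ_i)^j = 1   (a Vandermonde-type system)
A = np.vander(-lam, k + 1, increasing=True).T
a = np.linalg.solve(A, np.ones(k + 1))

u = np.exp                             # a sample C^k function of x_n alone
def ubar(xn):
    # higher-order reflection across x_n = 0
    if xn >= 0:
        return float(u(xn))
    return float(sum(a[i] * u(-lam[i] * xn) for i in range(k + 1)))

h = 1e-4
d_left = (ubar(-h) - ubar(-2 * h)) / h      # ≈ ubar'(0-)
d_right = (u(2 * h) - u(h)) / h             # ≈ u'(0+)
print(ubar(-1e-9), d_left, d_right)         # value ≈ u(0) = 1; slopes agree
```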
Obviously, E(u) ∈ C_0^k(O), and

‖E(u)‖_{W^{k,p}(O)} ≤ C ‖u‖_{W^{k,p}(Ω)}.

Moreover, for any x ∈ Ω, we have

E(u)(x) = Σ_{i=1}^m η_i(x) ū_i(φ_i(x)) + η_0(x) u(x) = Σ_{i=1}^m η_i(x) ũ_i(φ_i(x)) + η_0(x) u(x)
 = Σ_{i=1}^m η_i(x) u(φ_i^{−1}(φ_i(x))) + η_0(x) u(x) = ( Σ_{i=0}^m η_i(x) ) u(x) = u(x).

This completes the proof of the Extension Theorem.

From the Extension Theorem, one can derive immediately

Corollary 1.3.1 Assume that Ω is bounded and ∂Ω is C^k. Let O be an open set containing Ω̄, and let O_ε := {x ∈ R^n | dist(x, O) ≤ ε}. Then the linear operator J_ε(E(u)) : W^{k,p}(Ω) → C_0^∞(O_ε) is bounded, and for each u ∈ W^{k,p}(Ω),

‖J_ε(E(u)) − u‖_{W^{k,p}(Ω)} → 0, as ε → 0.

Remark 1.3.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem, under the stronger assumption that ∂Ω be C^k instead of C¹. Here, as compared to the previous approximation theorems, the improvement is that one can write out the explicit form of the approximation.

1.4 Sobolev Embeddings

When we seek weak solutions of partial differential equations, we start with functions in a Sobolev space W^{k,p}. Then it is natural to ask whether the functions in this space also automatically belong to some other spaces. The following theorem answers this question and, at the same time, provides inequalities among the relevant norms.
Theorem 1.4.1 (General Sobolev inequalities). Assume that Ω is bounded and has a C¹ boundary. Let u ∈ W^{k,p}(Ω).

(i) If k < n/p, then u ∈ L^q(Ω) with 1/q = 1/p − k/n, and there exists a constant C such that

‖u‖_{L^q(Ω)} ≤ C ‖u‖_{W^{k,p}(Ω)}.

(ii) If k > n/p, then u ∈ C^{k−[n/p]−1, γ}(Ω̄), and there exists a constant C such that

‖u‖_{C^{k−[n/p]−1, γ}(Ω̄)} ≤ C ‖u‖_{W^{k,p}(Ω)},

where

γ = [n/p] + 1 − n/p, if n/p is not an integer;  γ = any positive number < 1, if n/p is an integer.

Here [b] is the integer part of the number b.

The proof of the Theorem is based upon several simpler theorems. We first consider the functions in W^{1,p}(Ω). From the definition, apparently these functions belong to L^q(Ω) for 1 ≤ q ≤ p. Naturally, one would expect more. And to control the L^q norm for larger q, the norm of the derivatives,

‖Du‖_{L^p(Ω)} = ( ∫_Ω |Du|^p dx )^{1/p},

would be more useful in practice. We would like to know for which values of q we can establish the inequality

‖u‖_{L^q(R^n)} ≤ C ‖Du‖_{L^p(R^n)},  (1.26)

with constant C independent of u ∈ C_0^∞(R^n). For simplicity, we start with the smooth functions with compact support in R^n. Now suppose (1.26) holds; then it must also be true for the rescaled function of u,

u_λ(x) = u(λx),

that is, ‖u_λ‖_{L^q(R^n)} ≤ C ‖Du_λ‖_{L^p(R^n)}. By substitution, we have

∫_{R^n} |u_λ(x)|^q dx = (1/λ^n) ∫_{R^n} |u(y)|^q dy and ∫_{R^n} |Du_λ|^p dx = (λ^p/λ^n) ∫_{R^n} |Du(y)|^p dy.  (1.27)

It follows from (1.27) that

‖u‖_{L^q(R^n)} ≤ C λ^{1 − n/p + n/q} ‖Du‖_{L^p(R^n)}.

Therefore, in order that C be independent of u, it is necessary that the power of λ here be zero:

1 − n/p + n/q = 0, that is, q = np/(n − p).

Here, for q to be positive, we must require p < n. It turns out that this condition is also sufficient, as will be stated in the following theorem.

Theorem 1.4.2 (Gagliardo–Nirenberg–Sobolev inequality). Assume that 1 ≤ p < n. Then there exists a constant C = C(p, n) such that

‖u‖_{L^{p*}(R^n)} ≤ C ‖Du‖_{L^p(R^n)}, u ∈ C_0^1(R^n),  (1.28)

where p* = np/(n − p).

Since any function in W^{1,p}(R^n) can be approached by a sequence of functions in C_0^1(R^n), inequality (1.28) holds for all functions u in W^{1,p}(R^n). For functions in W^{1,p}(Ω), we can extend them to be W^{1,p}(R^n) functions by the Extension Theorem and arrive at

Theorem 1.4.3 Assume that Ω is a bounded, open subset of R^n with C¹ boundary. Suppose that 1 ≤ p < n and u ∈ W^{1,p}(Ω). Then u is in L^{p*}(Ω), and there exists a constant C = C(p, n, Ω) such that

‖u‖_{L^{p*}(Ω)} ≤ C ‖u‖_{W^{1,p}(Ω)}.  (1.29)

In the limiting case as p → n, p* := np/(n − p) → ∞. One may then suspect that u ∈ L^∞ when p = n. Unfortunately, this is true only in dimension one; for n > 1, it is false. One counterexample is u = log log(1 + 1/|x|) on Ω = B₁(0): it belongs to W^{1,n}(Ω), but not to L^∞(Ω). This is a more delicate situation, and we will deal with it later.
Naturally, for p > n, one would expect W^{1,p} to embed into better spaces. To get some rough idea what these spaces might be, let us first consider the simplest case, n = 1 and p > 1. For any x, y ∈ R¹ with x < y, we have

u(y) − u(x) = ∫_x^y u′(t) dt.

Then, by the Hölder inequality,

|u(y) − u(x)| ≤ ∫_x^y |u′(t)| dt ≤ ( ∫_x^y |u′(t)|^p dt )^{1/p} ( ∫_x^y dt )^{1 − 1/p}.

It follows that

|u(y) − u(x)| / |y − x|^{1 − 1/p} ≤ ( ∫_{−∞}^∞ |u′(t)|^p dt )^{1/p}.

Take the supremum over all pairs x, y; one sees that the left hand side of the above inequality is the seminorm in the Hölder space C^{0,γ}(R¹) with γ = 1 − 1/p. This is indeed true in general, and we have

Theorem 1.4.4 (Morrey's inequality). Assume n < p ≤ ∞. Then there exists a constant C = C(p, n) such that

‖u‖_{C^{0,γ}(R^n)} ≤ C ‖u‖_{W^{1,p}(R^n)}, u ∈ C¹(R^n),

where γ = 1 − n/p.

In the following, we will prove Theorem 1.4.2, Theorem 1.4.3, and Theorem 1.4.4. To establish an inequality such as (1.28), we follow two basic principles. First, we reduce the proof to functions with enough smoothness, by the Approximation Theorems; here Theorem 1.4.2 is the 'key', the proof of (1.28) for smooth functions is a 'hard analysis', and the steps leading from (1.28) to the general conclusions are 'soft analyses'. Secondly, we deal with general domains by extending functions u ∈ W^{1,p}(Ω) to W^{1,p}(R^n) via the Extension Theorem. In the following, we will show the 'soft' parts first and then the 'hard' ones.
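The one-dimensional estimate above is concrete enough to test numerically. The sketch below is our own illustration, with an arbitrary sample function u(x) = e^{−x²} and p = 4: it verifies that the Hölder quotient never exceeds ‖u′‖_{L^p(R)}, computed by quadrature on a window outside of which u′ is negligible.

```python
import numpy as np

p = 4.0
u = lambda x: np.exp(-x ** 2)
du = lambda x: -2.0 * x * np.exp(-x ** 2)

t = np.linspace(-10.0, 10.0, 400001)
h = t[1] - t[0]
norm_du = (np.sum(np.abs(du(t)) ** p) * h) ** (1.0 / p)   # ≈ ||u'||_{L^p(R)}

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, 1000)
y = rng.uniform(-3.0, 3.0, 1000)
m = np.abs(y - x) > 1e-9
ratio = np.abs(u(y) - u(x))[m] / np.abs(y - x)[m] ** (1.0 - 1.0 / p)
print(float(ratio.max()), float(norm_du))   # ratio.max() ≤ norm_du
```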
We will assume that Theorem 1.4.2 is true and derive

Corollary 1.4.1 For every u ∈ W^{1,p}(R^n) with 1 ≤ p < n,

‖u‖_{L^{p*}(R^n)} ≤ C ‖u‖_{W^{1,p}(R^n)}.

The Proof of Corollary 1.4.1. Given any function u ∈ W^{1,p}(R^n), by the Approximation Theorem, there exists a sequence {u_k} ⊂ C_0^∞(R^n) such that

‖u − u_k‖_{W^{1,p}(R^n)} → 0, as k → ∞.

Applying Theorem 1.4.2 to the differences, we obtain

‖u_i − u_j‖_{L^{p*}(R^n)} ≤ C ‖u_i − u_j‖_{W^{1,p}(R^n)} → 0, as i, j → ∞.

Thus {u_k} is a Cauchy sequence in L^{p*}(R^n), and it also converges to u in L^{p*}(R^n). Consequently, we arrive at

‖u‖_{L^{p*}(R^n)} = lim_{k→∞} ‖u_k‖_{L^{p*}(R^n)} ≤ lim_{k→∞} C ‖u_k‖_{W^{1,p}(R^n)} = C ‖u‖_{W^{1,p}(R^n)}.

This completes the proof of the Corollary.

The Proof of Theorem 1.4.3. For every u in W^{1,p}(Ω), let O be an open set that covers Ω̄. By the Extension Theorem (Theorem 1.3.6), there exists a function ũ in W_0^{1,p}(O) such that

ũ = u almost everywhere in Ω, and ‖ũ‖_{W^{1,p}(O)} ≤ C₁ ‖u‖_{W^{1,p}(Ω)}, with C₁ = C₁(p, O, Ω).  (1.30)

Now we can apply the Gagliardo–Nirenberg–Sobolev inequality to ũ to derive

‖u‖_{L^{p*}(Ω)} ≤ ‖ũ‖_{L^{p*}(O)} ≤ C ‖ũ‖_{W^{1,p}(O)} ≤ C C₁ ‖u‖_{W^{1,p}(Ω)}.

This completes the proof of the Theorem.

The Proof of Theorem 1.4.2. We first establish the inequality for p = 1; that is, we prove

( ∫_{R^n} |u|^{n/(n−1)} dx )^{(n−1)/n} ≤ ∫_{R^n} |Du| dx.  (1.31)

Then we will apply (1.31) to |u|^γ, for a properly chosen γ > 1, to extend the inequality to the case p > 1. We need

Lemma 1.4.1 (General Hölder Inequality). Assume that u_i ∈ L^{p_i}(Ω), for i = 1, 2, ···, m, and

1/p₁ + 1/p₂ + ··· + 1/p_m = 1.

Then

∫_Ω |u₁ u₂ ··· u_m| dx ≤ Π_{i=1}^m ( ∫_Ω |u_i|^{p_i} dx )^{1/p_i}.  (1.32)

The proof can be obtained by applying induction to the usual Hölder inequality for two functions.
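Lemma 1.4.1 also holds for finite sums, which makes a direct numerical check possible. The sketch below is our own illustration: it takes m = 3 factors with p₁ = p₂ = p₃ = 3, so that the exponents sum to 1 as required, on Ω = (0, 1); the particular functions are arbitrary.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
h = x[1] - x[0]
u1 = np.sin(3.0 * x) + 1.1
u2 = np.cos(x)
u3 = x ** 2 + 0.5

lhs = np.sum(np.abs(u1 * u2 * u3)) * h                      # ≈ ∫ |u1 u2 u3| dx
rhs = 1.0
for ui in (u1, u2, u3):
    rhs *= (np.sum(np.abs(ui) ** 3) * h) ** (1.0 / 3.0)     # Π ||ui||_{L^3}
print(lhs, rhs)                                             # lhs ≤ rhs
```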
i = 1. To better illustrate the idea. i = 1. Now we are ready to prove the Theorem. xi+1 . The case p = 1. 2. . · · · n. yi . · · · . xn )dyi . that is. · · · . Now. we arrive at (1. we obtain ∞ u −∞ n n−1 ∞ 1 n−1 ∞ n ∞ 1 n−1 dx1 ≤ −∞ ∞ Dudy1 −∞ i=2 1 n−1 Dudyi −∞ ∞ n ∞ −∞ dx1 1 n−1 ≤ −∞ Dudy1 i=2 −∞ Dudx1 dyi . (1. ∞ xi −∞ ∂u (x1 . · · · . we ﬁrst derive inequality (1. −∞ The above two inequalities together imply that ∞ ∞ u(x)2 ≤ −∞ Du(y1 . xn )dyi 1 n−1 . ∂y1 Du(y1 . x2 )dy1 . · · · . · · · . 2. Step 1. xn )dyi .33). Integrating both sides with respect to x1 and applying the general H¨lder o inequality (1. · · · . And it follows that n u(x) n n−1 ∞ ≤ i=1 −∞ Du(x1 . yi . · · · n. yi . y2 )dy2 .33) Since u has a compact support.31) for n = 2. we prove 2 u(x)2 dx ≤ R2 R2 Dudx . ∂yi u(x) ≤ −∞ Du(x1 . We write u(x) = Consequently. y2 )dy2 . xi−1 . x2 )dy1 · −∞ Du(x1 . we have u(x) ≤ ∞ Du(x1 . we have u(x) = It follows that u(x) ≤ −∞ x1 −∞ ∞ ∂u (y1 . integrate both sides of the above inequality with respect to x1 and x2 from −∞ to ∞.26 1 Introduction to Sobolev Spaces The proof can be obtained by applying induction to the usual H¨lder inequalo ity for two functions.32). Similarly. x2 )dy1 . Then we deal with the general situation when n > 2.
Then integrate the above inequality with respect to $x_2$. Again applying the General Hölder Inequality, we arrive at
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} |u|^{\frac{n}{n-1}}\,dx_1\,dx_2 \le \left(\iint |Du|\,dy_1\,dx_2\right)^{\frac{1}{n-1}}\left(\iint |Du|\,dx_1\,dy_2\right)^{\frac{1}{n-1}}\prod_{i=3}^n\left(\iiint |Du|\,dx_1\,dx_2\,dy_i\right)^{\frac{1}{n-1}}.$$
Continuing this way—integrating successively with respect to $x_3,\cdots,x_{n-1}$ and finally with respect to $x_n$, applying the general Hölder inequality at each step—we deduce
$$\int_{\mathbb{R}^n} |u|^{\frac{n}{n-1}}\,dx \le \prod_{i=1}^n\left(\int_{\mathbb{R}^n} |Du|\,dx\right)^{\frac{1}{n-1}} = \left(\int_{\mathbb{R}^n} |Du|\,dx\right)^{\frac{n}{n-1}}.$$
This verifies (1.31).

Exercise 1.4.1 Write your own proof with all details for the cases $n=3$ and $n=4$.

Step 2. The case $p>1$. Applying (1.31) to the function $|u|^\gamma$ with $\gamma>1$ to be chosen later, and then using the Hölder inequality, we have
$$\left(\int_{\mathbb{R}^n} |u|^{\frac{\gamma n}{n-1}}\,dx\right)^{\frac{n-1}{n}} \le \int_{\mathbb{R}^n} |D(|u|^\gamma)|\,dx = \gamma\int_{\mathbb{R}^n} |u|^{\gamma-1}|Du|\,dx \le \gamma\left(\int_{\mathbb{R}^n} |u|^{\frac{\gamma n}{n-1}}\,dx\right)^{\frac{(\gamma-1)(n-1)}{\gamma n}}\left(\int_{\mathbb{R}^n} |Du|^{\frac{\gamma n}{\gamma+n-1}}\,dx\right)^{\frac{\gamma+n-1}{\gamma n}}.$$
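The exponent bookkeeping in Step 2 can be checked mechanically. The following sketch (a sanity check added here, not part of the book's argument) verifies with exact rational arithmetic that the choice $\gamma = p(n-1)/(n-p)$ makes the $|Du|$-exponent equal to $p$ and the $|u|$-exponent equal to $p^* = np/(n-p)$:

```python
from fractions import Fraction

def gns_exponents(n, p):
    """For Step 2 of the Gagliardo-Nirenberg-Sobolev proof, choose
    gamma = p(n-1)/(n-p) and return the resulting exponents:
    - left: the power gamma*n/(n-1) appearing on |u|,
    - du_power: the power gamma*n/(gamma+n-1) appearing on |Du|,
    - p_star: the Sobolev conjugate np/(n-p)."""
    n, p = Fraction(n), Fraction(p)
    gamma = p * (n - 1) / (n - p)
    left = gamma * n / (n - 1)
    du_power = gamma * n / (gamma + n - 1)
    p_star = n * p / (n - p)
    return gamma, left, du_power, p_star

gamma, left, du_power, p_star = gns_exponents(3, 2)
# n = 3, p = 2: gamma = 4, left = p_star = 6, du_power = p = 2
```

The same identities hold for every admissible pair $1 < p < n$, which is exactly why the two Hölder factors recombine into the stated inequality.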
It follows that
$$\left(\int_{\mathbb{R}^n} |u|^{\frac{\gamma n}{n-1}}\,dx\right)^{\frac{n-1}{\gamma n}} \le \gamma\left(\int_{\mathbb{R}^n} |Du|^{\frac{\gamma n}{\gamma+n-1}}\,dx\right)^{\frac{\gamma+n-1}{\gamma n}}.$$
Now choose $\gamma$ so that $\frac{\gamma n}{\gamma+n-1} = p$, that is,
$$\gamma = \frac{p(n-1)}{n-p};$$
then $\frac{\gamma n}{n-1} = \frac{np}{n-p}$, and we obtain
$$\left(\int_{\mathbb{R}^n} |u|^{\frac{np}{n-p}}\,dx\right)^{\frac{n-p}{np}} \le \gamma\left(\int_{\mathbb{R}^n} |Du|^p\,dx\right)^{\frac1p}.$$
This completes the proof of the Theorem.

The Proof of Theorem 1.4.4 (Morrey's inequality). We will establish two inequalities:
$$\sup_{\mathbb{R}^n}|u| \le C\|u\|_{W^{1,p}(\mathbb{R}^n)} \qquad (1.34)$$
and
$$\sup_{x\ne y}\frac{|u(x)-u(y)|}{|x-y|^{1-n/p}} \le C\|Du\|_{L^p(\mathbb{R}^n)}. \qquad (1.35)$$
Both of them can be derived from the following:
$$\frac{1}{|B_r(x)|}\int_{B_r(x)} |u(y)-u(x)|\,dy \le C\int_{B_r(x)}\frac{|Du(y)|}{|y-x|^{n-1}}\,dy, \qquad (1.36)$$
where $|B_r(x)|$ is the volume of $B_r(x)$. We will carry the proof out in three steps. In Step 1 we verify (1.36), and in Step 2 and Step 3 we prove (1.34) and (1.35), respectively.

Step 1. We start from
$$u(y)-u(x) = \int_0^1\frac{d}{dt}u(x+t(y-x))\,dt = \int_0^s Du(x+\tau\omega)\cdot\omega\,d\tau,$$
where
$$\omega = \frac{y-x}{|y-x|}, \quad s = |x-y|, \quad\text{and hence } y = x+s\omega.$$
It follows that
$$|u(x+s\omega)-u(x)| \le \int_0^s |Du(x+\tau\omega)|\,d\tau.$$
Integrating both sides with respect to $\omega$ over the unit sphere $\partial B_1(0)$, then converting the integral on the right-hand side from polar to rectangular coordinates, we obtain
$$\int_{\partial B_1(0)} |u(x+s\omega)-u(x)|\,d\sigma \le \int_0^s\int_{\partial B_1(0)} |Du(x+\tau\omega)|\,d\sigma\,d\tau = \int_0^s\int_{\partial B_1(0)}\frac{|Du(x+\tau\omega)|}{\tau^{n-1}}\,\tau^{n-1}\,d\sigma\,d\tau = \int_{B_s(x)}\frac{|Du(z)|}{|x-z|^{n-1}}\,dz. \qquad (1.37)$$
Multiplying both sides by $s^{n-1}$, integrating with respect to $s$ from $0$ to $r$, and taking into account that the integrand on the right-hand side is nonnegative, we arrive at
$$\int_{B_r(x)} |u(y)-u(x)|\,dy \le \frac{r^n}{n}\int_{B_r(x)}\frac{|Du(y)|}{|x-y|^{n-1}}\,dy.$$
This verifies (1.36).

Step 2. For each fixed $x\in\mathbb{R}^n$, we have
$$|u(x)| \le \frac{1}{|B_1(x)|}\int_{B_1(x)} |u(x)-u(y)|\,dy + \frac{1}{|B_1(x)|}\int_{B_1(x)} |u(y)|\,dy =: I_1 + I_2.$$
By (1.36) and the Hölder inequality,
$$I_1 \le C\int_{B_1(x)}\frac{|Du(y)|}{|x-y|^{n-1}}\,dy \le C\left(\int_{\mathbb{R}^n} |Du|^p\,dy\right)^{1/p}\left(\int_{B_1(x)}\frac{dy}{|x-y|^{\frac{(n-1)p}{p-1}}}\right)^{\frac{p-1}{p}} \le C_1\left(\int_{\mathbb{R}^n} |Du|^p\,dy\right)^{1/p}. \qquad (1.38)$$
Here we have used the condition $p>n$, so that $\frac{(n-1)p}{p-1}<n$, and hence the integral
$$\int_{B_1(x)}\frac{dy}{|x-y|^{\frac{(n-1)p}{p-1}}}$$
is finite. Also, it is obvious that
$$I_2 \le C\|u\|_{L^p(\mathbb{R}^n)}. \qquad (1.39)$$
Now (1.34) is an immediate consequence of (1.38) and (1.39).
Step 3. For any pair of fixed points $x$ and $y$ in $\mathbb{R}^n$, let $r = |x-y|$ and $D = B_r(x)\cap B_r(y)$. Then
$$|u(x)-u(y)| \le \frac{1}{|D|}\int_D |u(x)-u(z)|\,dz + \frac{1}{|D|}\int_D |u(z)-u(y)|\,dz. \qquad (1.40)$$
Again by (1.36), we have
$$\frac{1}{|D|}\int_D |u(x)-u(z)|\,dz \le \frac{C}{|B_r(x)|}\int_{B_r(x)} |u(x)-u(z)|\,dz \le C\left(\int_{\mathbb{R}^n} |Du|^p\,dz\right)^{1/p}\left(\int_{B_r(x)}\frac{dz}{|x-z|^{\frac{(n-1)p}{p-1}}}\right)^{\frac{p-1}{p}} = Cr^{1-n/p}\|Du\|_{L^p(\mathbb{R}^n)}. \qquad (1.41)$$
Similarly,
$$\frac{1}{|D|}\int_D |u(z)-u(y)|\,dz \le Cr^{1-n/p}\|Du\|_{L^p(\mathbb{R}^n)}. \qquad (1.42)$$
Now (1.40), (1.41), and (1.42) yield
$$|u(x)-u(y)| \le C|x-y|^{1-n/p}\|Du\|_{L^p(\mathbb{R}^n)}. \qquad (1.43)$$
This implies (1.35) and thus completes the proof of the Theorem.

The Proof of Theorem 1.4.1 (the General Sobolev Inequality).
(i) Assume that $u\in W^{k,p}(\Omega)$ with $k<\frac{n}{p}$. We want to show that $u\in L^q(\Omega)$ and
$$\|u\|_{L^q(\Omega)} \le C\|u\|_{W^{k,p}(\Omega)}, \quad\text{where } \frac1q = \frac1p - \frac{k}{n}.$$
This can be done by applying Theorem 1.4.3 successively on the integer $k$. Since $u\in W^{k,p}(\Omega)$, we have $D^\alpha u\in L^p(\Omega)$ for all $|\alpha|\le k$. For $|\alpha|\le k-1$, $D^\alpha u\in W^{1,p}(\Omega)$; denoting $p^* = \frac{np}{n-p}$, Theorem 1.4.3 yields $D^\alpha u\in L^{p^*}(\Omega)$, thus $u\in W^{k-1,p^*}(\Omega)$ and
$$\|u\|_{W^{k-1,p^*}(\Omega)} \le C_1\|u\|_{W^{k,p}(\Omega)}.$$
Applying Theorem 1.4.3 again to $W^{k-1,p^*}(\Omega)$, we have $u\in W^{k-2,p^{**}}(\Omega)$ and
$$\|u\|_{W^{k-2,p^{**}}(\Omega)} \le C_2\|u\|_{W^{k-1,p^*}(\Omega)} \le C_2C_1\|u\|_{W^{k,p}(\Omega)},$$
where
$$p^{**} = \frac{np^*}{n-p^*}, \quad\text{or}\quad \frac{1}{p^{**}} = \frac{1}{p^*}-\frac1n = \left(\frac1p-\frac1n\right)-\frac1n = \frac1p-\frac2n.$$
Continuing this way $k$ times, we arrive at $\frac1q = \frac1p - \frac{k}{n}$.
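The iteration in part (i) only manipulates exponents, so it can be verified directly. The sketch below (an illustration added here, not taken from the book) applies the map $p\mapsto p^* = np/(n-p)$ $k$ times and confirms the closed form $1/q = 1/p - k/n$:

```python
from fractions import Fraction

def iterate_sobolev_conjugate(n, p, k):
    """Apply the Sobolev conjugate p -> np/(n-p) k times, as in part (i)
    of the General Sobolev Inequality; requires the intermediate exponent
    to stay below n at every step (i.e. k < n/p)."""
    q = Fraction(p)
    n = Fraction(n)
    for _ in range(k):
        assert q < n, "need k < n/p so every intermediate exponent is < n"
        q = n * q / (n - q)
    # closed form: 1/q = 1/p - k/n
    assert 1 / q == Fraction(1, p) - Fraction(k) / n
    return q

q = iterate_sobolev_conjugate(6, 2, 2)
# n = 6, p = 2, k = 2: 1/q = 1/2 - 2/6 = 1/6, so q = 6
```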
(ii) Now assume that $k>\frac{n}{p}$. We can first decrease the order of differentiation to increase the power of summability; that is, we will try to find a smallest integer $m$ such that
$$W^{k,p}(\Omega) \hookrightarrow W^{k-m,q}(\Omega), \quad\text{with } q = \frac{np}{n-mp} > n.$$
Obviously, $q>n$ is equivalent to
$$m > \frac{n}{p}-1,$$
and, when $\frac{n}{p}$ is not an integer, the smallest such integer is
$$m = \left[\frac{n}{p}\right].$$
For this choice of $m$ we have $q>n$, and hence we can apply Morrey's inequality (1.44) to $D^\alpha u$ with any $|\alpha|\le k-m-1$ to obtain
$$\|D^\alpha u\|_{C^{0,\gamma}(\Omega)} \le C_1\|u\|_{W^{k-m,q}(\Omega)} \le C_1C_2\|u\|_{W^{k,p}(\Omega)},$$
or, equivalently,
$$\|u\|_{C^{k-[\frac{n}{p}]-1,\gamma}(\Omega)} \le C\|u\|_{W^{k,p}(\Omega)}.$$
Here, from Morrey's inequality (1.44), we require $q>n$, and
$$\gamma = 1-\frac{n}{q} = 1+\left[\frac{n}{p}\right]-\frac{n}{p}.$$
When $\frac{n}{p}$ is an integer, this choice of $m$ is not available ($q$ would be undefined); to remedy this, we use the result of the previous step with $m = \frac{n}{p}$, in which case $q$ can be any number $>n$, which implies that $\gamma$ can be any positive number $<1$. This completes the proof of the Theorem.
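For concrete $(n,p)$ the quantities in part (ii) are simple arithmetic; the following check (added for illustration, not part of the book) computes $m = [n/p]$, the exponent $q$, and the Hölder exponent $\gamma = 1 - n/q = 1 + [n/p] - n/p$ when $n/p$ is not an integer:

```python
import math
from fractions import Fraction

def sobolev_to_holder(n, p):
    """Part (ii) of the General Sobolev Inequality, n/p not an integer:
    trade m = [n/p] derivatives for summability, so 1/q = 1/p - m/n with
    q > n, and read off gamma = 1 - n/q."""
    assert Fraction(n, p).denominator != 1, "n/p must not be an integer here"
    m = math.floor(Fraction(n, p))
    q = Fraction(n * p, n - m * p)
    gamma = 1 - n / q
    return m, q, gamma

m, q, gamma = sobolev_to_holder(3, 2)
# n = 3, p = 2: m = 1, q = 6 > n, gamma = 1/2
```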
1.5 Compact Embedding

In the previous section, we proved that $W^{1,p}(\Omega)$ is embedded into $L^{p^*}(\Omega)$ with $p^* = \frac{np}{n-p}$. If the region $\Omega$ is bounded, one can expect more. Actually, for $q<p^*$, due to the strict inequality on the exponent, the embedding of $W^{1,p}(\Omega)$ into $L^q(\Omega)$ is compact, as stated below.

Theorem 1.5.1 (Rellich–Kondrachov Compact Embedding). Assume that $\Omega$ is a bounded open subset of $\mathbb{R}^n$ with $C^1$ boundary $\partial\Omega$. Suppose $1\le p<n$. Then for each $1\le q<p^*$, $W^{1,p}(\Omega)$ is compactly imbedded into $L^q(\Omega)$:
$$W^{1,p}(\Omega)\hookrightarrow\hookrightarrow L^q(\Omega),$$
in the sense that
i) there is a constant $C$ such that
$$\|u\|_{L^q(\Omega)} \le C\|u\|_{W^{1,p}(\Omega)}, \quad\forall u\in W^{1,p}(\Omega), \qquad (1.45)$$
and
ii) every bounded sequence in $W^{1,p}(\Omega)$ possesses a convergent subsequence in $L^q(\Omega)$.

Proof. The first part—inequality (1.45)—is included in the general Embedding Theorem. What we need to show is that if $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$, then it possesses a convergent subsequence in $L^q(\Omega)$. This can be derived immediately from the following lemma.

Lemma 1.5.1 Every bounded sequence in $W^{1,1}(\Omega)$ possesses a convergent subsequence in $L^1(\Omega)$.

We postpone the proof of the Lemma for a moment and see how it implies the Theorem. Assume that $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$. Since $p\ge1$ and $\Omega$ is bounded, it is also bounded in $W^{1,1}(\Omega)$. By Lemma 1.5.1, there is a subsequence (still denoted by $\{u_k\}$) that converges strongly to some $u_o$ in $L^1(\Omega)$. On the other hand, by the Sobolev embedding, $\{u_k\}$ is bounded in $L^{p^*}(\Omega)$. Applying the Hölder (interpolation) inequality
$$\|u_k-u_o\|_{L^q(\Omega)} \le \|u_k-u_o\|_{L^1(\Omega)}^{\theta}\,\|u_k-u_o\|_{L^{p^*}(\Omega)}^{1-\theta},$$
with $\theta\in(0,1]$ determined by $\frac1q = \theta + \frac{1-\theta}{p^*}$, we conclude immediately that $\{u_k\}$ converges strongly to $u_o$ in $L^q(\Omega)$. This proves the Theorem.
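The interpolation step above uses only exponent arithmetic. As a quick check (an illustration added here, not part of the book's proof), one can solve $\frac1q = \theta + \frac{1-\theta}{p^*}$ for $\theta$ and confirm that $q < p^*$ forces $\theta > 0$, which is exactly why $L^1$ convergence upgrades to $L^q$ convergence:

```python
from fractions import Fraction

def interpolation_theta(n, p, q):
    """Solve 1/q = theta/1 + (1-theta)/p* for theta, with p* = np/(n-p).
    theta > 0 precisely when q < p*, so the L^1 factor controls the
    L^q norm in the Rellich-Kondrachov argument."""
    p_star = Fraction(n * p, n - p)
    theta = (Fraction(1, q) - 1 / p_star) / (1 - 1 / p_star)
    assert 0 < theta <= 1
    # consistency: the exponents really interpolate
    assert Fraction(1, q) == theta + (1 - theta) / p_star
    return theta, p_star

theta, p_star = interpolation_theta(3, 2, 3)
# n = 3, p = 2: p* = 6 and theta = 1/5
```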
We now come back to prove the Lemma. First, based on the Extension Theorem, we may assume without loss of generality that $\Omega = \mathbb{R}^n$ and that the functions $\{u_k\}$ all have compact support in a bounded open set $G\subset\mathbb{R}^n$, with
$$\|u_k\|_{W^{1,1}(G)} \le C < \infty.$$
Since every $W^{1,1}$ function can be approached by a sequence of smooth functions, we may also assume that each $u_k$ is smooth. We will show the strong convergence of this sequence in three steps, with the help of a family of mollifications
$$u_k^\epsilon(x) = \int_\Omega j_\epsilon(y)\,u_k(x-y)\,dy. \qquad (1.46)$$

Step 1. We show that
$$u_k^\epsilon\to u_k \text{ in } L^1(\Omega) \text{ as } \epsilon\to0, \text{ uniformly in } k. \qquad (1.47)$$
From the property of mollifiers, we have
$$u_k^\epsilon(x)-u_k(x) = \int_{B_1(0)} j(y)\left[u_k(x-\epsilon y)-u_k(x)\right]dy = \int_{B_1(0)} j(y)\int_0^1\frac{d}{dt}u_k(x-\epsilon ty)\,dt\,dy = -\epsilon\int_{B_1(0)} j(y)\int_0^1 Du_k(x-\epsilon ty)\cdot y\,dt\,dy.$$
Integrating with respect to $x$ and changing the order of integration, we obtain
$$\|u_k^\epsilon-u_k\|_{L^1(G)} \le \epsilon\int_{B_1(0)} j(y)\int_0^1\int_G |Du_k(x-\epsilon ty)|\,dx\,dt\,dy \le \epsilon\,\|u_k\|_{W^{1,1}(G)}. \qquad (1.48)$$
It follows that
$$\|u_k^\epsilon-u_k\|_{L^1(G)}\to0, \text{ as } \epsilon\to0, \text{ uniformly in } k. \qquad (1.49)$$

Step 2. We prove that, for each fixed $\epsilon>0$, the sequence $\{u_k^\epsilon\}$ is uniformly bounded and equicontinuous. Indeed, for all $x\in\mathbb{R}^n$ and all $k = 1,2,\cdots$,
$$|u_k^\epsilon(x)| \le \int_{B_\epsilon(x)} j_\epsilon(x-y)|u_k(y)|\,dy \le \|j_\epsilon\|_{L^\infty(\mathbb{R}^n)}\|u_k\|_{L^1(G)} \le \frac{C}{\epsilon^n} < \infty. \qquad (1.50)$$
Similarly,
$$|Du_k^\epsilon(x)| \le \frac{C}{\epsilon^{n+1}} < \infty, \quad k = 1,2,\cdots. \qquad (1.51)$$
Therefore, for each fixed $\epsilon>0$, the sequence $\{u_k^\epsilon\}$ is uniformly bounded and equicontinuous; by the Arzelà–Ascoli Theorem (see the Appendix), it possesses a subsequence (still denoted by $\{u_k^\epsilon\}$) which converges uniformly on $G$. (1.52)

Step 3. Finally, corresponding to the above convergent sequences, we extract diagonally a subsequence of $\{u_k\}$ which converges strongly in $L^1(\Omega)$. Choose $\epsilon$ to be $1,\frac12,\frac13,\cdots,\frac1k,\cdots$ successively, and for each such $\epsilon = \frac1k$ select from the previous one the corresponding subsequence that satisfies (1.52):
$$u_{k1}, u_{k2}, u_{k3},\cdots.$$
Pick the diagonal subsequence $\{u_{ii}\}\subset\{u_k\}$. This is our desired subsequence, because, for each fixed $k$ and all $i,j\ge k$,
$$\|u_{ii}-u_{jj}\|_{L^1(G)} \le \|u_{ii}-u_{ii}^{1/k}\|_{L^1(G)} + \|u_{ii}^{1/k}-u_{jj}^{1/k}\|_{L^1(G)} + \|u_{jj}^{1/k}-u_{jj}\|_{L^1(G)} \to 0, \quad\text{as } i,j\to\infty,$$
due to the fact that each norm on the right-hand side tends to $0$: the first and the third uniformly by (1.49), and the second because of the uniform convergence obtained in Step 2. This completes the proof of the Lemma, and hence of the Theorem.

1.6 Other Basic Inequalities

1.6.1 Poincaré's Inequality

For functions that vanish on the boundary, we have

Theorem 1.6.1 (Poincaré's Inequality I). Assume $\Omega$ is bounded. Suppose $u\in W_0^{1,p}(\Omega)$ for some $1\le p\le\infty$. Then
$$\|u\|_{L^p(\Omega)} \le C\|Du\|_{L^p(\Omega)}. \qquad (1.53)$$
Remark 1.6.1 i) Based on this inequality, one can take an equivalent norm of $W_0^{1,p}(\Omega)$ as
$$\|u\|_{W_0^{1,p}(\Omega)} = \|Du\|_{L^p(\Omega)}.$$
ii) Just for this theorem, one can easily prove it by using the Sobolev inequality $\|u\|_{L^p}\le C\|Du\|_{L^q}$ with $p = \frac{nq}{n-q}$ and the Hölder inequality. However, the proof we present in the following is a unified one that works for both this and the next theorem.

Proof. For convenience, we abbreviate $\|u\|_{L^p(\Omega)}$ as $\|u\|_p$. Suppose inequality (1.53) does not hold; then there exists a sequence $\{u_k\}\subset W_0^{1,p}(\Omega)$ such that
$$\|Du_k\|_p = 1, \quad\text{while } \|u_k\|_p\to\infty. \qquad (1.54)$$
Let $v_k = \dfrac{u_k}{\|u_k\|_p}$. Then
$$\|v_k\|_p = 1 \quad\text{and}\quad \|Dv_k\|_p\to0, \text{ as } k\to\infty.$$
Since $C_0^\infty(\Omega)$ is dense in $W_0^{1,p}(\Omega)$, we may assume that $\{u_k\}\subset C_0^\infty(\Omega)$. Obviously, $\{v_k\}$ is bounded in $W^{1,p}(\Omega)$ and hence possesses a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v_o\in W^{1,p}(\Omega)$. From the compact embedding results in the previous section, $\{v_k\}$ converges strongly in $L^p(\Omega)$ to $v_o$, and therefore $\|v_o\|_p = 1$. Consequently, for each $\phi\in C_0^\infty(\Omega)$,
$$\int_\Omega v_o\phi_{x_i}\,dx = \lim_{k\to\infty}\int_\Omega v_k\phi_{x_i}\,dx = -\lim_{k\to\infty}\int_\Omega v_{k,x_i}\phi\,dx = 0.$$
It follows that
$$Dv_o(x) = 0 \quad\text{a.e. in }\Omega.$$
Thus $v_o$ is a constant. Taking into account that $v_k\in C_0^\infty(\Omega)$, we must have $v_o\equiv0$. This contradicts $\|v_o\|_p = 1$, and therefore completes the proof of the Theorem.

For functions in $W^{1,p}(\Omega)$, which may not be zero on the boundary, we have another version of Poincaré's inequality.

Theorem 1.6.2 (Poincaré's Inequality II). Let $\Omega$ be a bounded, connected, open subset of $\mathbb{R}^n$ with $C^1$ boundary. Assume $1\le p\le\infty$, and let $\bar u$ be the average of $u$ on $\Omega$. Then there exists a constant $C = C(n,p,\Omega)$ such that
$$\|u-\bar u\|_p \le C\|Du\|_p, \quad\forall u\in W^{1,p}(\Omega). \qquad (1.55)$$
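A numerical illustration of Poincaré's Inequality I (added here as a sanity check, not part of the book): on $\Omega = (0,1)$ with $p = 2$, the function $u(x) = \sin(\pi x)$ vanishes at both endpoints and realizes the best constant $C = 1/\pi$, which the discretized ratio $\|u\|_2/\|u'\|_2$ reproduces:

```python
import math

# Discretize u(x) = sin(pi x) on a uniform grid of (0, 1); u vanishes at
# the endpoints, so it is an admissible W_0^{1,2} trial function.
N = 2000
h = 1.0 / N
u = [math.sin(math.pi * i * h) for i in range(N + 1)]
norm_u = math.sqrt(h * sum(v * v for v in u))            # ~ ||u||_{L^2}
du = [(u[i + 1] - u[i]) / h for i in range(N)]           # forward differences
norm_du = math.sqrt(h * sum(v * v for v in du))          # ~ ||u'||_{L^2}
ratio = norm_u / norm_du
# ratio is close to 1/pi, the sharp Poincaré constant on (0, 1)
```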
The proof of this Theorem is similar to the previous one: instead of letting $v_k = \dfrac{u_k}{\|u_k\|_p}$, we choose
$$v_k = \frac{u_k-\bar u_k}{\|u_k-\bar u_k\|_p}.$$

Remark 1.6.2 The connectedness of $\Omega$ is essential in this version of the inequality. A simple counterexample is given by $n=1$, $\Omega = [0,1]\cup[2,3]$, and
$$u(x) = \begin{cases} -1 & \text{for } x\in[0,1],\\ \ \ 1 & \text{for } x\in[2,3]. \end{cases}$$

1.6.2 The Classical Hardy–Littlewood–Sobolev Inequality

Theorem 1.6.3 (Hardy–Littlewood–Sobolev Inequality). Let $0<\lambda<n$ and $s,r>1$ be such that
$$\frac1r+\frac1s+\frac{\lambda}{n} = 2.$$
Assume that $f\in L^r(\mathbb{R}^n)$ and $g\in L^s(\mathbb{R}^n)$. Then
$$\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} f(x)|x-y|^{-\lambda}g(y)\,dx\,dy \le C(n,s,\lambda)\,\|f\|_r\|g\|_s, \qquad (1.56)$$
where $\|f\|_r := \|f\|_{L^r(\mathbb{R}^n)}$ and
$$C(n,s,\lambda) = \frac{n}{(n-\lambda)rs}\,|B^n|^{\lambda/n}\left[\left(\frac{\lambda/n}{1-1/r}\right)^{\lambda/n}+\left(\frac{\lambda/n}{1-1/s}\right)^{\lambda/n}\right],$$
with $|B^n|$ being the volume of the unit ball in $\mathbb{R}^n$.

Proof. Without loss of generality, we may assume that both $f$ and $g$ are nonnegative and that $\|f\|_r = 1 = \|g\|_s$. Let
$$\chi_G(x) = \begin{cases} 1 & \text{if } x\in G,\\ 0 & \text{if } x\notin G \end{cases}$$
be the characteristic function of the set $G$. Then one can see obviously that
$$f(x) = \int_0^\infty \chi_{\{f>a\}}(x)\,da, \qquad (1.57)$$
$$g(x) = \int_0^\infty \chi_{\{g>b\}}(x)\,db, \qquad (1.58)$$
$$|x|^{-\lambda} = \lambda\int_0^\infty c^{-\lambda-1}\chi_{\{|x|<c\}}(x)\,dc. \qquad (1.59)$$
To see the last identity, one may first write
$$|x|^{-\lambda} = \int_0^\infty \chi_{\{|x|^{-\lambda}>\tilde c\}}(x)\,d\tilde c,$$
and then let $\tilde c = c^{-\lambda}$. Let
$$v(a) = \int_{\mathbb{R}^n}\chi_{\{f>a\}}(x)\,dx, \quad w(b) = \int_{\mathbb{R}^n}\chi_{\{g>b\}}(y)\,dy, \quad u(c) = |B^n|c^n, \qquad (1.60)$$
so that $v(a)$ and $w(b)$ are the measures of the sets $\{x\mid f(x)>a\}$ and $\{y\mid g(y)>b\}$, respectively, and $u(c)$ is the volume of the ball of radius $c$. Then we can express the norms as
$$\|f\|_r^r = r\int_0^\infty a^{r-1}v(a)\,da = 1 \quad\text{and}\quad \|g\|_s^s = s\int_0^\infty b^{s-1}w(b)\,db = 1.$$
Substituting (1.57), (1.58), and (1.59) into the left-hand side of (1.56), we have
$$I := \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} f(x)|x-y|^{-\lambda}g(y)\,dx\,dy = \lambda\int_0^\infty\int_0^\infty\int_0^\infty c^{-\lambda-1}I(a,b,c)\,dc\,db\,da, \qquad (1.61)$$
where
$$I(a,b,c) = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\chi_{\{f>a\}}(x)\,\chi_{\{|x|<c\}}(x-y)\,\chi_{\{g>b\}}(y)\,dx\,dy.$$
It is easy to see that $I(a,b,c)$ is bounded above by each of three pairs:
$$I(a,b,c) \le \min\{u(c)w(b),\; u(c)v(a),\; v(a)w(b)\}. \qquad (1.62)$$
Indeed,
$$I(a,b,c) \le \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\chi_{\{|x|<c\}}(x-y)\,\chi_{\{g>b\}}(y)\,dx\,dy \le u(c)\int_{\mathbb{R}^n}\chi_{\{g>b\}}(y)\,dy = u(c)w(b),$$
and similarly for the other two bounds. By (1.62), we integrate with respect to $c$ first:
$$\int_0^\infty c^{-\lambda-1}I(a,b,c)\,dc \le \int_{u(c)\le v(a)} c^{-\lambda-1}w(b)u(c)\,dc + \int_{u(c)>v(a)} c^{-\lambda-1}w(b)v(a)\,dc$$
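The starting point of the whole proof is the layer cake representation (1.57): a nonnegative value is rebuilt by integrating the indicators of its super-level sets. A discretized check of this identity at a single point (a numerical illustration added here, not part of the book's argument):

```python
def layer_cake_value(fx, a_max=10.0, steps=200000):
    """Riemann-sum approximation of f(x) = ∫_0^∞ χ_{f>a}(x) da at one
    point, using midpoint levels; a_max must exceed the value fx.
    The sum of the level widths on which {f > a} holds recovers fx."""
    da = a_max / steps
    return sum(da for k in range(steps) if fx > (k + 0.5) * da)

vals = [layer_cake_value(v) for v in (0.3, 1.7, 4.2)]
# each entry recovers the input value up to the mesh size da
```

The representations (1.58) and (1.59) are checked in exactly the same way, with $b$ (respectively the substitution $\tilde c = c^{-\lambda}$) in place of $a$.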
$$= w(b)|B^n|\int_0^{(v(a)/|B^n|)^{1/n}} c^{n-\lambda-1}\,dc + w(b)v(a)\int_{(v(a)/|B^n|)^{1/n}}^\infty c^{-\lambda-1}\,dc$$
$$= \frac{|B^n|^{\lambda/n}}{n-\lambda}\,w(b)v(a)^{1-\lambda/n} + \frac{|B^n|^{\lambda/n}}{\lambda}\,w(b)v(a)^{1-\lambda/n} = \frac{n|B^n|^{\lambda/n}}{\lambda(n-\lambda)}\,w(b)v(a)^{1-\lambda/n}. \qquad (1.63)$$
Exchanging $v(a)$ with $w(b)$ in the second line of (1.63), we also obtain
$$\int_0^\infty c^{-\lambda-1}I(a,b,c)\,dc \le \int_{u(c)\le w(b)} c^{-\lambda-1}v(a)u(c)\,dc + \int_{u(c)>w(b)} c^{-\lambda-1}w(b)v(a)\,dc \le \frac{n|B^n|^{\lambda/n}}{\lambda(n-\lambda)}\,v(a)w(b)^{1-\lambda/n}. \qquad (1.64)$$
In view of (1.61), (1.63), and (1.64)—using (1.64) for $b\le a^{r/s}$ and (1.63) for $b>a^{r/s}$—we derive
$$I \le \frac{n|B^n|^{\lambda/n}}{n-\lambda}\left[\int_0^\infty v(a)\int_0^{a^{r/s}} w(b)^{1-\lambda/n}\,db\,da + \int_0^\infty v(a)^{1-\lambda/n}\int_{a^{r/s}}^\infty w(b)\,db\,da\right]. \qquad (1.65)$$
To estimate the first integral in (1.65), we use the Hölder inequality with $m = (s-1)(1-\lambda/n)$:
$$\int_0^{a^{r/s}} w(b)^{1-\lambda/n}\,b^m\,b^{-m}\,db \le \left(\int_0^{a^{r/s}} w(b)b^{s-1}\,db\right)^{1-\lambda/n}\left(\int_0^{a^{r/s}} b^{-mn/\lambda}\,db\right)^{\lambda/n} \le \left(\int_0^\infty w(b)b^{s-1}\,db\right)^{1-\lambda/n}\left(\frac{\lambda}{n-s(n-\lambda)}\right)^{\lambda/n}a^{r-1}, \qquad (1.66)$$
since $mn/\lambda<1$. It follows that the first integral in (1.65) is bounded above by
$$\left(\frac{\lambda}{n-s(n-\lambda)}\right)^{\lambda/n}\int_0^\infty v(a)a^{r-1}\,da\left(\int_0^\infty w(b)b^{s-1}\,db\right)^{1-\lambda/n} = \frac{1}{rs}\left(\frac{\lambda/n}{1-1/r}\right)^{\lambda/n}. \qquad (1.67)$$
To estimate the second integral in (1.65), we first rewrite it as
$$\int_0^\infty w(b)\int_0^{b^{s/r}} v(a)^{1-\lambda/n}\,da\,db;$$
then an analogous computation shows that it is bounded above by
$$\frac{1}{rs}\left(\frac{\lambda/n}{1-1/s}\right)^{\lambda/n}. \qquad (1.68)$$
Now the desired Hardy–Littlewood–Sobolev inequality follows directly from (1.65), (1.67), and (1.68).

Theorem 1.6.4 (An equivalent form of the Hardy–Littlewood–Sobolev inequality). Let $g\in L^{\frac{np}{n+\alpha p}}(\mathbb{R}^n)$, with $\frac{n}{n-\alpha}<p<\infty$. Define
$$Tg(x) = \int_{\mathbb{R}^n} |x-y|^{\alpha-n}g(y)\,dy.$$
Then
$$\|Tg\|_p \le C(n,p,\alpha)\,\|g\|_{\frac{np}{n+\alpha p}}. \qquad (1.69)$$
Proof. By the classical Hardy–Littlewood–Sobolev inequality (with $\lambda = n-\alpha$), we have
$$\langle f, Tg\rangle = \langle Tf, g\rangle \le C(n,s,\alpha)\,\|f\|_r\|g\|_s,$$
where $\langle\cdot,\cdot\rangle$ is the $L^2$ inner product. Consequently,
$$\|Tg\|_p = \sup_{\|f\|_r=1}\langle f, Tg\rangle \le C(n,s,\alpha)\,\|g\|_s,$$
where
$$\frac1p+\frac1r = 1 \quad\text{and}\quad \frac1r+\frac1s = \frac{n+\alpha}{n}.$$
Solving for $s$, we arrive at
$$s = \frac{np}{n+\alpha p}.$$
This completes the proof of the Theorem.

Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev inequality, let's rewrite (1.69) as
$$\|Tg\|_{\frac{nq}{n-\alpha q}} \le C\|g\|_q, \qquad (1.70)$$
with
$$1 < q := \frac{np}{n+\alpha p} < \frac{n}{\alpha}.$$
Let $u = Tg$. Then one can show (see [CLO]) that
$$(-\Delta)^{\alpha/2}u = g.$$
Now inequality (1.70) becomes the Sobolev one:
$$\|u\|_{\frac{nq}{n-\alpha q}} \le C\,\|(-\Delta)^{\alpha/2}u\|_q.$$
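The duality proof of Theorem 1.6.4 is again pure exponent arithmetic. The following check (an illustration added here, not part of the book) verifies that solving the two relations gives $s = \frac{np}{n+\alpha p}$, and that renaming $q := s$ turns (1.69) into form (1.70):

```python
from fractions import Fraction

def hls_dual_exponents(n, alpha, p):
    """Exponent bookkeeping for Theorem 1.6.4: with 1/p + 1/r = 1 and
    1/r + 1/s = (n + alpha)/n (i.e. lambda = n - alpha in Theorem 1.6.3),
    solving for s yields s = np/(n + alpha*p); and with q := s, the
    target exponent p equals nq/(n - alpha*q), which is (1.70)."""
    n, alpha, p = Fraction(n), Fraction(alpha), Fraction(p)
    r = 1 / (1 - 1 / p)
    s = 1 / ((n + alpha) / n - 1 / r)
    assert s == n * p / (n + alpha * p)
    q = s
    assert n * q / (n - alpha * q) == p
    return s

s = hls_dual_exponents(5, 2, 4)
# n = 5, alpha = 2, p = 4 (admissible since p > n/(n - alpha) = 5/3):
# s = 20/13
```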
2 Existence of Weak Solutions

2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Method of Linear Functional Analysis
  2.3.1 Linear Equations
  2.3.2 Some Basic Principles in Functional Analysis
  2.3.3 Existence of Weak Solutions
2.4 Variational Methods
  2.4.1 Semilinear Equations
  2.4.2 Calculus of Variations
  2.4.3 Existence of Minimizers
  2.4.4 Existence of Minimizers under Constraints
  2.4.5 Minimax Critical Points
  2.4.6 Existence of a Minimax via the Mountain Pass Theorem

Many physical models come with natural variational structures, where one can minimize a functional, usually an energy $E(u)$; the minimizers are then weak solutions of the related partial differential equation $E'(u) = 0$. For example, for an open bounded domain $\Omega\subset\mathbb{R}^n$ and for $f\in L^2(\Omega)$, the functional
$$E(u) = \frac12\int_\Omega |\nabla u|^2\,dx - \int_\Omega fu\,dx$$
possesses a minimizer among all $u\in H_0^1(\Omega)$ (try to show this by using the knowledge from the previous chapter). As we will see in this chapter, the minimizer is a weak solution of the Dirichlet boundary value problem
$$\begin{cases} -\Delta u = f(x), & x\in\Omega,\\ \ \ u(x) = 0, & x\in\partial\Omega. \end{cases}$$
In practice, it is often much easier to obtain a weak solution than a classical one. One of the seven millennium problems posed by the Clay Mathematics Institute is to show that suitable weak solutions of the Navier–Stokes equations with good initial values are in fact classical solutions for all time.

In this chapter, we consider the existence of weak solutions to second order elliptic equations, both linear and semilinear. For linear equations, we will use the Lax–Milgram Theorem to show the existence of weak solutions. For semilinear equations, we will introduce variational methods: we consider the corresponding functionals in proper Hilbert spaces and seek weak solutions of the equations as critical points of the functionals. We will use examples to show how to find a minimizer, a minimizer under constraint, and a minimax critical point. The well-known Mountain Pass Theorem is introduced and applied to show the existence of such a minimax. To show that a weak solution possesses the desired differentiability, so that it is actually a classical one, is called the regularity argument, which will be studied in Chapter 3.
2.1 Second Order Elliptic Operators

Let $\Omega$ be an open bounded set in $\mathbb{R}^n$. Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. Consider the second order partial differential operator $L$, either in the divergence form
$$Lu = -\sum_{i,j=1}^n (a_{ij}(x)u_{x_i})_{x_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u, \qquad (2.1)$$
or in the non-divergence form
$$Lu = -\sum_{i,j=1}^n a_{ij}(x)u_{x_ix_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u. \qquad (2.2)$$
We say that $L$ is uniformly elliptic if there exists a constant $\delta>0$ such that
$$\sum_{i,j=1}^n a_{ij}(x)\xi_i\xi_j \ge \delta|\xi|^2$$
for almost every $x\in\Omega$ and for all $\xi\in\mathbb{R}^n$. In the simplest case, when $a_{ij}(x) = \delta_{ij}$, $b_i(x)\equiv0$, and $c(x)\equiv0$, $L$ reduces to $-\Delta$; and, as we will see later, the solutions of the general second order elliptic equation $Lu = 0$ share many similar properties with harmonic functions.
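Since $(a_{ij})$ is symmetric, $\sum a_{ij}\xi_i\xi_j = \xi^TA\xi \ge \lambda_{\min}(A)|\xi|^2$, so at a fixed point $x$ the best $\delta$ is the smallest eigenvalue of the coefficient matrix (for variable coefficients one takes the infimum over $x$). A minimal check for a constant-coefficient $2\times2$ example (added for illustration, not part of the book):

```python
def min_eigenvalue_2x2(a11, a12, a22):
    """Smallest eigenvalue of the symmetric matrix [[a11, a12], [a12, a22]],
    via the quadratic formula on the characteristic polynomial.  For a
    uniformly elliptic operator this eigenvalue is a valid delta."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a12
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr - disc) / 2

delta = min_eigenvalue_2x2(2.0, 1.0, 2.0)
# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3, so delta = 1 > 0
# and the corresponding operator L is uniformly elliptic
```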
2.2 Weak Solutions

Let the operator $L$ be in the divergence form as defined in (2.1). Consider the second order elliptic equation with Dirichlet boundary condition
$$\begin{cases} Lu = f, & x\in\Omega,\\ \ \ u = 0, & x\in\partial\Omega. \end{cases} \qquad (2.3)$$
When $f = f(x)$, the equation is linear; when $f = f(x,u)$, it is semilinear. Assume that $f(x)$ is in $L^2(\Omega)$, or, for a given solution $u$, that $f(x,u)$ is in $L^2(\Omega)$.

Multiplying both sides of the equation by a test function $v\in C_0^\infty(\Omega)$ and integrating by parts on $\Omega$, we have
$$\int_\Omega\left(\sum_{i,j=1}^n a_{ij}(x)u_{x_i}v_{x_j} + \sum_{i=1}^n b_i(x)u_{x_i}v + c(x)uv\right)dx = \int_\Omega fv\,dx. \qquad (2.4)$$
There are no boundary terms because both $u$ and $v$ vanish on $\partial\Omega$. One can see that the integrals in (2.4) are well defined if $u$, $v$, and their first derivatives are square integrable. Write $H^1(\Omega) = W^{1,2}(\Omega)$, and let $H_0^1(\Omega)$ be the completion of $C_0^\infty(\Omega)$ in the norm of $H^1(\Omega)$:
$$\|u\| = \left(\int_\Omega (|Du|^2+u^2)\,dx\right)^{1/2}.$$
Then (2.4) remains true for any $v\in H_0^1(\Omega)$. One can easily see that $H_0^1(\Omega)$ is also the most appropriate space for $u$. On the other hand, if $u\in H_0^1(\Omega)$ satisfies (2.4) and is second order differentiable, then through integration by parts we have
$$\int_\Omega (Lu-f)v\,dx = 0, \quad\forall v\in H_0^1(\Omega).$$
This implies that
$$Lu = f, \quad\forall x\in\Omega,$$
and therefore $u$ is a classical solution of (2.3). The above observation naturally leads to

Definition 2.2.1 A weak solution of problem (2.3) is a function $u\in H_0^1(\Omega)$ that satisfies (2.4) for all $v\in H_0^1(\Omega)$.

Remark 2.2.1 Actually, here the condition on $f$ can be relaxed a little bit. By the Sobolev embedding
$$H^1(\Omega)\hookrightarrow L^{\frac{2n}{n-2}}(\Omega),$$
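For intuition, the weak formulation (2.4) is exactly what finite element methods discretize. The sketch below (an illustration added here, not a construction from the book) takes the simplest case $L = -\Delta$ in one dimension, $-u'' = f$ on $(0,1)$ with $u(0) = u(1) = 0$ and $f\equiv1$, and solves the Galerkin system $\int u_h'v' \,dx = \int fv\,dx$ over piecewise linear hat functions; for this problem the discrete solution agrees with the classical solution $u(x) = x(1-x)/2$ at the grid nodes:

```python
def solve_poisson_1d(N, f=1.0):
    """P1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0.
    Stiffness matrix: (1/h) * tridiag(-1, 2, -1); load vector: f*h.
    Solved with the Thomas algorithm; returns the interior nodal values."""
    h = 1.0 / N
    a = [2.0 / h] * (N - 1)      # diagonal of the stiffness matrix
    b = [f * h] * (N - 1)        # right-hand side
    c = -1.0 / h                 # constant sub/super-diagonal
    for i in range(1, N - 1):    # forward elimination
        w = c / a[i - 1]
        a[i] -= w * c
        b[i] -= w * b[i - 1]
    u = [0.0] * (N - 1)          # back substitution
    u[-1] = b[-1] / a[-1]
    for i in range(N - 3, -1, -1):
        u[i] = (b[i] - c * u[i + 1]) / a[i]
    return u

u = solve_poisson_1d(8)
# nodal values match x(1-x)/2 at x = i/8, i = 1, ..., 7
```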
we see that $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$, and hence, by duality, $f$ only needs to be in $L^{\frac{2n}{n+2}}(\Omega)$. In the semilinear case, $f(x,u) = u^p$ for any $p\le\frac{n+2}{n-2}$, or, more generally, $|f(x,u)| \le C_1 + C_2|u|^{\frac{n+2}{n-2}}$, will do: for such $f$ and any $u\in H^1(\Omega)$, $f(x,u)$ is in $L^{\frac{2n}{n+2}}(\Omega)$.

2.3 Methods of Linear Functional Analysis

2.3.1 Linear Equations

For the Dirichlet problem with a linear equation,
$$\begin{cases} Lu = f(x), & x\in\Omega,\\ \ \ u = 0, & x\in\partial\Omega, \end{cases} \qquad (2.5)$$
we can apply representation-type theorems in linear functional analysis, such as the Riesz Representation Theorem or the Lax–Milgram Theorem, to find its weak solutions. To this end, we introduce the bilinear form
$$B[u,v] = \int_\Omega\left(\sum_{i,j=1}^n a_{ij}(x)u_{x_i}v_{x_j} + \sum_{i=1}^n b_i(x)u_{x_i}v + c(x)uv\right)dx,$$
defined on $H_0^1(\Omega)$, which is the left-hand side of (2.4) in the definition of weak solutions. The right-hand side of (2.4) may be regarded as a linear functional on $H_0^1(\Omega)$:
$$\langle f,v\rangle := \int_\Omega fv\,dx.$$
Now, to find a weak solution of (2.5), it is equivalent to show that there exists a function $u\in H_0^1(\Omega)$ such that
$$B[u,v] = \langle f,v\rangle, \quad\forall v\in H_0^1(\Omega). \qquad (2.6)$$
This can be realized by representation-type theorems in functional analysis.

2.3.2 Some Basic Principles in Functional Analysis

Here we list briefly some basic definitions and principles of functional analysis which will be used to obtain the existence of weak solutions. For more details, please see [Sch1].

Definition 2.3.1 A Banach space $X$ is a complete, normed linear space with norm $\|\cdot\|$ that satisfies
(i) $\|u+v\| \le \|u\|+\|v\|$, $\forall u,v\in X$;
(ii) $\|\lambda u\| = |\lambda|\,\|u\|$, $\forall u\in X$, $\lambda\in\mathbb{R}$;
(iii) $\|u\| = 0$ if and only if $u = 0$.
Examples of Banach spaces are: (i) $L^p(\Omega)$ with norm $\|u\|_{L^p(\Omega)}$; (ii) the Sobolev spaces $W^{k,p}(\Omega)$ with norm $\|u\|_{W^{k,p}(\Omega)}$; and (iii) the Hölder spaces $C^{0,\gamma}(\Omega)$ with norm $\|u\|_{C^{0,\gamma}(\Omega)}$.

Definition 2.3.2 A Hilbert space $H$ is a Banach space endowed with an inner product $(\cdot,\cdot)$ which generates the norm $\|u\| := (u,u)^{1/2}$, with the following properties:
(i) $(u,v) = (v,u)$, $\forall u,v\in H$;
(ii) the mapping $u\mapsto(u,v)$ is linear for each $v\in H$;
(iii) $(u,u)\ge0$, $\forall u\in H$;
(iv) $(u,u) = 0$ if and only if $u = 0$.

Examples of Hilbert spaces are: (i) $L^2(\Omega)$ with inner product
$$(u,v) = \int_\Omega u(x)v(x)\,dx,$$
and (ii) the Sobolev spaces $H^1(\Omega)$ and $H_0^1(\Omega)$ with inner product
$$(u,v) = \int_\Omega\left[u(x)v(x) + Du(x)\cdot Dv(x)\right]dx.$$

Definition 2.3.3 (i) A mapping $A: X\to Y$ is a linear operator if
$$A[au+bv] = aA[u]+bA[v], \quad\forall u,v\in X,\ a,b\in\mathbb{R}.$$
(ii) A linear operator $A: X\to Y$ is bounded provided
$$\|A\| := \sup_{\|u\|_X\le1}\|Au\|_Y < \infty.$$
(iii) A bounded linear operator $f: X\to\mathbb{R}$ is called a bounded linear functional on $X$. The collection of all bounded linear functionals on $X$, denoted by $X^*$, is called the dual space of $X$. We use $\langle f,u\rangle := f[u]$ to denote the pairing of $X^*$ and $X$.

Lemma 2.3.1 (Projection). Let $M$ be a closed subspace of $H$. Then for every $u\in H$ there is a $v\in M$ such that
$$(u-v, w) = 0, \quad\forall w\in M. \qquad (2.7)$$
In other words, $H = M\oplus M^\perp$, where $M^\perp := \{u\in H\mid u\perp M\}$.
Here $v$ is the projection of $u$ on $M$. The reader may find the proof of the Lemma in the book [Sch] (Theorem 13) or do it as the following exercise.

Exercise 2.3.1 i) Given any $u\in H$ with $u\notin M$, show that there exists a $v_o\in M$ such that
$$\|v_o-u\| = \inf_{v\in M}\|v-u\|.$$
Hint: Show that a minimizing sequence is a Cauchy sequence.
ii) Show that $(u-v_o,w) = 0$, $\forall w\in M$.

Based on this Projection Lemma, one can prove

Theorem 2.3.1 (Riesz Representation Theorem). Let $H$ be a real Hilbert space with inner product $(\cdot,\cdot)$, and let $H^*$ be its dual space. Then for each $u^*\in H^*$ there exists a unique element $u\in H$ such that
$$\langle u^*, v\rangle = (u,v), \quad\forall v\in H.$$
The mapping $u^*\mapsto u$ is a linear isomorphism from $H^*$ onto $H$; thus $H^*$ can be canonically identified with $H$.

This is a well-known theorem in functional analysis. The reader may see the book [Sch1] for its proof or do it as the following exercise.

Exercise 2.3.2 Prove the Riesz Representation Theorem in four steps. Let $T$ be a bounded linear operator from $H$ to $\mathbb{R}$ with $T\not\equiv0$. Show that
i) $K(T) := \{u\in H\mid Tu = 0\}$ is closed;
ii) $H = K(T)\oplus K(T)^\perp$;
iii) the dimension of $K(T)^\perp$ is 1;
iv) there exists $v\in K(T)^\perp$ such that $Tv = 1$; then $Tu = \left(u, \frac{v}{\|v\|^2}\right)$ for every $u\in H$.

From the Riesz Representation Theorem, one can derive the following

Theorem 2.3.2 (Lax–Milgram Theorem). Let $H$ be a real Hilbert space with norm $\|\cdot\|$. Let $B: H\times H\to\mathbb{R}$ be a bilinear mapping. Assume that there exist constants $M, m>0$ such that
(i) $|B[u,v]| \le M\|u\|\,\|v\|$, $\forall u,v\in H$, and
(ii) $m\|u\|^2 \le B[u,u]$, $\forall u\in H$.
Then for each bounded linear functional $f$ on $H$ there exists a unique element $u\in H$ such that
$$B[u,v] = \langle f,v\rangle, \quad\forall v\in H.$$
Proof. In the case where $B[u,v]$ is symmetric, i.e. $B[u,v] = B[v,u]$, conditions (i) and (ii) ensure that $B[u,v]$ can be made an inner product on $H$, and the conclusion of the theorem follows directly from the Riesz Representation Theorem. When $B[u,v]$ may not be symmetric, we proceed as follows.

For each fixed element $u\in H$, $B[u,\cdot]$ is a bounded linear functional on $H$; hence, by the Riesz Representation Theorem, there exists a unique element $\tilde u\in H$ such that
$$B[u,v] = (\tilde u, v), \quad\forall v\in H. \qquad (2.8)$$
Denote the mapping from $u$ to $\tilde u$ by $A$:
$$A: H\to H, \quad Au = \tilde u.$$
On the other hand, for a given bounded linear functional $f$ on $H$, again by the Riesz Representation Theorem there is an $\tilde f\in H$ such that
$$\langle f,v\rangle = (\tilde f, v), \quad\forall v\in H. \qquad (2.9)$$
It suffices to show that, for each $\tilde f\in H$, there exists $u\in H$ such that $Au = \tilde f$. We carry this out in two parts: in part (a) we show that $A$ is a bounded linear operator, and in part (b) we prove that it is one-to-one and onto.

(a) For any $u_1, u_2\in H$, any $a_1, a_2\in\mathbb{R}$, and each $v\in H$, we have
$$(A(a_1u_1+a_2u_2), v) = B[a_1u_1+a_2u_2, v] = a_1B[u_1,v]+a_2B[u_2,v] = a_1(Au_1,v)+a_2(Au_2,v) = (a_1Au_1+a_2Au_2, v).$$
Hence $A(a_1u_1+a_2u_2) = a_1Au_1+a_2Au_2$, i.e. $A$ is linear. Moreover, by condition (i),
$$\|Au\|^2 = (Au,Au) = B[u,Au] \le M\|u\|\,\|Au\|,$$
and consequently
$$\|Au\| \le M\|u\|, \quad\forall u\in H. \qquad (2.10)$$
This verifies that $A$ is bounded.

(b) To see that $A$ is one-to-one, we apply condition (ii):
$$m\|u\|^2 \le B[u,u] = (Au,u) \le \|Au\|\,\|u\|, \qquad (2.11)$$
and it follows that
$$m\|u\| \le \|Au\|, \quad\forall u\in H. \qquad (2.12)$$
Now, if $Au = Av$, then from (2.12),
$$m\|u-v\| \le \|Au-Av\| = 0.$$
This implies $u = v$; hence $A$ is one-to-one.

To show that $R(A)$, the range of $A$ in $H$, equals $H$, we first show that $R(A)$ is closed. In fact, assume that $v_k\in R(A)$ and $v_k\to v$ in $H$. Then for each $v_k$ there is a $u_k$ such that $Au_k = v_k$. Inequality (2.12) implies that
$$\|u_i-u_j\| \le C\|v_i-v_j\|,$$
i.e. $\{u_k\}$ is a Cauchy sequence in $H$, and hence $u_k\to u$ for some $u\in H$. Consequently, $Au_k\to Au$ and $v = Au\in R(A)$. Therefore $R(A)$ is closed.

Now, for any $u\in H$, since $R(A)$ is closed, by the Projection Lemma 2.3.1 there exists a $v\in R(A)$ such that $(u-v,w) = 0$ for all $w\in R(A)$. In particular, taking $w = A(u-v)$,
$$0 = (u-v, A(u-v)) = B[u-v, u-v] \ge m\|u-v\|^2.$$
This implies $u = v\in R(A)$, and hence $H\subset R(A)$. Therefore $R(A) = H$. This completes the proof of the Theorem.

2.3.3 Existence of Weak Solutions

Now we explore in what situations our second order elliptic operator $L$ satisfies conditions (i) and (ii) required by the Lax–Milgram Theorem.

Let $H = H_0^1(\Omega)$ be our Hilbert space. By the Poincaré inequality, instead of
$$(u,v) := \int_\Omega (Du\cdot Dv+uv)\,dx,$$
we can use the equivalent inner product
$$(u,v)_H := \int_\Omega Du\cdot Dv\,dx$$
and the corresponding norm
$$\|u\|_H = \left(\int_\Omega |Du|^2\,dx\right)^{1/2}.$$
For any $v\in H$, by the Sobolev Embedding, $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$; and, by virtue of the Hölder inequality,
$$\langle f,v\rangle := \int_\Omega f(x)v(x)\,dx \le \left(\int_\Omega |f(x)|^{\frac{2n}{n+2}}\,dx\right)^{\frac{n+2}{2n}}\left(\int_\Omega |v(x)|^{\frac{2n}{n-2}}\,dx\right)^{\frac{n-2}{2n}};$$
hence we only need to assume that $f\in L^{\frac{2n}{n+2}}(\Omega)$.

In the simplest case, when $L = -\Delta$, we have
$$B[u,v] = \int_\Omega Du\cdot Dv\,dx.$$
Obviously, condition (i) is met:
$$|B[u,v]| \le \|u\|_H\|v\|_H.$$
Condition (ii) is also satisfied:
$$B[u,u] = \|u\|_H^2.$$
Now, applying the Lax–Milgram Theorem, we conclude that there exists a unique $u\in H$ such that
$$B[u,v] = \langle f,v\rangle, \quad\forall v\in H.$$
We have actually proved

Theorem 2.3.3 For every $f\in L^{\frac{2n}{n+2}}(\Omega)$, the Dirichlet problem
$$\begin{cases} -\Delta u = f(x), & x\in\Omega,\\ \ \ u(x) = 0, & x\in\partial\Omega \end{cases}$$
has a unique weak solution $u$ in $H_0^1(\Omega)$.

In general,
$$B[u,v] = \int_\Omega\left(\sum_{i,j=1}^n a_{ij}(x)u_{x_i}v_{x_j} + \sum_{i=1}^n b_i(x)u_{x_i}v + c(x)uv\right)dx.$$
Under the assumption that
$$\|a_{ij}\|_{L^\infty(\Omega)},\ \|b_i\|_{L^\infty(\Omega)},\ \|c\|_{L^\infty(\Omega)} \le C,$$
and by the Hölder inequality, one can easily verify that
$$|B[u,v]| \le 3C\|u\|_H\|v\|_H.$$
Hence condition (i) is always satisfied. However, condition (ii) may not automatically hold; it depends on the sign of $c(x)$. First, from the uniform ellipticity condition, we have
$$\int_\Omega\sum_{i,j=1}^n a_{ij}(x)u_{x_i}u_{x_j}\,dx \ge \delta\int_\Omega |Du|^2\,dx = \delta\|u\|_H^2 \qquad (2.13)$$
for some $\delta>0$. From here one can see that if $b_i(x)\equiv0$ and $c(x)\ge0$, then condition (ii) holds. Actually, the condition on $c(x)$ can be relaxed a little bit: for instance, if
$$c(x) \ge -\frac{\delta}{2}, \quad\text{with }\delta\text{ as in (2.13)}, \qquad (2.14)$$
then condition (ii) is met. Hence we have

Theorem 2.3.4 Assume that $b_i(x)\equiv0$ and $c(x)$ satisfies (2.14). Then for every $f\in L^{\frac{2n}{n+2}}(\Omega)$, the problem
$$\begin{cases} Lu = f(x), & x\in\Omega,\\ \ \ u(x) = 0, & x\in\partial\Omega \end{cases}$$
has a unique weak solution in $H_0^1(\Omega)$.

2.4 Variational Methods

2.4.1 Semilinear Equations

To find weak solutions of semilinear equations
$$Lu = f(x,u),$$
the Lax–Milgram Theorem can no longer be applied. We will use variational methods to obtain the existence. We will associate the equation with the functional
$$J(u) = \frac12\int_\Omega (Lu\cdot u)\,dx - \int_\Omega F(x,u)\,dx, \quad\text{where } F(x,u) = \int_0^u f(x,s)\,ds.$$
We show that a critical point of $J(u)$ is a weak solution of our problem. Roughly speaking, the simplest sorts of critical points are global or local maxima or minima; others are saddle points. Then we will focus on how to seek critical points in various situations.

2.4.2 Calculus of Variations

For a continuously differentiable function $g$ defined on $\mathbb{R}^n$, a critical point is a point where $Dg$ vanishes. To seek weak solutions of
partial differential equations, we generalize this concept to infinite dimensional spaces. To illustrate the idea, let us start from a simple example:
$$\begin{cases} -\Delta u = f(x), & x\in\Omega,\\ \ \ u(x) = 0, & x\in\partial\Omega. \end{cases} \qquad (2.15)$$
To find weak solutions of the problem, we study the functional
$$J(u) = \frac12\int_\Omega |Du|^2\,dx - \int_\Omega f(x)u\,dx$$
in $H_0^1(\Omega)$. Now suppose that there is some function $u$ which happens to be a minimizer of $J(\cdot)$ in $H_0^1(\Omega)$; we will demonstrate that $u$ is then a weak solution of our problem.

Let $v$ be any function in $H_0^1(\Omega)$, and consider the real-valued function
$$g(t) = J(u+tv), \quad t\in\mathbb{R}.$$
Explicitly,
$$J(u+tv) = \frac12\int_\Omega |D(u+tv)|^2\,dx - \int_\Omega f(x)(u+tv)\,dx,$$
and
$$\frac{d}{dt}J(u+tv) = \int_\Omega D(u+tv)\cdot Dv\,dx - \int_\Omega f(x)v\,dx.$$
Since $u$ is a minimizer of $J(\cdot)$, the function $g(t)$ has a minimum at $t=0$, and therefore we must have $g'(0) = 0$, i.e.
$$\frac{d}{dt}J(u+tv)\Big|_{t=0} = 0.$$
Consequently, $g'(0) = 0$ yields
$$\int_\Omega Du\cdot Dv\,dx - \int_\Omega f(x)v\,dx = 0, \quad\forall v\in H_0^1(\Omega). \qquad (2.16)$$
This implies that $u$ is a weak solution of problem (2.15). Furthermore, if $u$ is also second order differentiable, then through integration by parts we obtain
$$\int_\Omega\left[-\Delta u - f(x)\right]v\,dx = 0, \quad\forall v\in H_0^1(\Omega).$$
Since this is true for any $v\in H_0^1(\Omega)$, we conclude that
$$-\Delta u - f(x) = 0, \quad x\in\Omega,$$
and therefore $u$ is a classical solution of the problem.

From the above argument, the reader can see that, in order to be a weak solution of the problem, $u$ need not be a minimizer: it can be a maximizer, or a saddle point of the functional; more generally, any point satisfying $\frac{d}{dt}J(u+tv)\big|_{t=0} = 0$ will do. More precisely, we have

Definition 2.4.1 Let $J$ be a functional on a Banach space $X$.
i) We say that $J$ is Fréchet differentiable at $u\in X$ if there exists a continuous linear map $L = L(u)$ from $X$ to $X^*$ satisfying: for any $\epsilon>0$, there is a $\delta = \delta(\epsilon, u)$ such that
$$|J(u+v)-J(u)-\langle L(u),v\rangle| \le \epsilon\|v\|_X \quad\text{whenever } \|v\|_X<\delta,$$
where $\langle L(u),v\rangle := L(u)(v)$ is the duality between $X^*$ and $X$. The mapping $L(u)$ is usually denoted by $J'(u)$.
ii) A critical point of $J$ is a point at which $J'(u) = 0$, that is,
$$\langle J'(u), v\rangle = 0, \quad\forall v\in X.$$
iii) We call $J'(u) = 0$ the Euler–Lagrange equation of the functional $J$.

Remark 2.4.1 One can verify that, if $J$ is Fréchet differentiable at $u$, then
$$\langle J'(u), v\rangle = \lim_{t\to0}\frac{J(u+tv)-J(u)}{t} = \frac{d}{dt}J(u+tv)\Big|_{t=0}.$$

2.4.3 Existence of Minimizers

In the previous subsection, we have seen that the critical points of the functional
$$J(u) = \frac12\int_\Omega |Du|^2\,dx - \int_\Omega f(x)u\,dx$$
are weak solutions of the Dirichlet problem (2.15). Then our next question is: does the functional $J$ actually possess a critical point? We will show that there exists a minimizer of $J$. To this end, we verify that $J$ possesses the following three properties:
i) boundedness from below;
ii) coercivity; and
iii) weak lower semicontinuity.

i) Boundedness from Below.
First we prove that J is bounded from below in H := H^1_0(Ω) if f ∈ L²(Ω). For simplicity, we use the equivalent norm in H:

  ‖u‖_H = ( ∫_Ω |Du|² dx )^{1/2}.

By the Poincaré and Hölder inequalities, we have

  J(u) ≥ (1/2) ‖u‖²_H - C ‖u‖_H ‖f‖_{L²(Ω)}
       = (1/2) ( ‖u‖_H - C ‖f‖_{L²(Ω)} )² - (C²/2) ‖f‖²_{L²(Ω)}
       ≥ - (C²/2) ‖f‖²_{L²(Ω)}.  (2.17)

Therefore, the functional J is bounded from below.

ii) Coercivity. That a functional is bounded from below does not guarantee that it has a minimum. A simple counterexample is

  g(x) = 1/(1 + x²), x ∈ R¹.

Obviously, the infimum of g(x) is 0, but it can never be attained: if we take a minimizing sequence {x_k} such that g(x_k)→0, then x_k goes away to infinity. This suggests that, in order to prevent a minimizing sequence from 'leaking' to infinity, we need to have some sort of 'coercivity' condition on our functional J: if a sequence {u_k} goes to infinity, i.e. ‖u_k‖_H →∞, then J(u_k) must also become unbounded, i.e. J(u_k)→∞. And this is indeed true for our functional J: from the second part of (2.17), one can see that if ‖u_k‖_H →∞, then J(u_k)→∞. Therefore, any minimizing sequence must be bounded in H; that is, a minimizing sequence is retained in a bounded set.

iii) Weak Lower Semicontinuity. If H were a finite dimensional space, then a bounded minimizing sequence {u_k} would possess a convergent subsequence, and the limit would be a minimizer of J. This is no longer true in infinite dimensional spaces. We therefore turn our attention to the weak topology.

Definition 2.4.2 Let X be a real Banach space. If a sequence {u_k} ⊂ X satisfies

  <f, u_k> → <f, u_o> as k→∞

for every bounded linear functional f on X, then we say that {u_k} converges weakly to u_o in X, and write u_k ⇀ u_o.
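Returning to the coercivity discussion above, here is a quick numerical illustration of the counterexample (our own sketch; the sample points are arbitrary choices):

```python
import numpy as np

# g(x) = 1/(1+x^2) is bounded from below with infimum 0, but a minimizing
# sequence, e.g. x_k = 10^k, runs off to infinity: the infimum is not attained.
g = lambda x: 1.0 / (1.0 + x**2)
xk = np.array([10.0**k for k in range(7)])
vals = g(xk)          # strictly decreasing toward 0
grid = g(np.linspace(-100.0, 100.0, 2001))  # g stays strictly positive
```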
Definition 2.4.3 A Banach space X is called reflexive if (X*)* = X, where X* is the dual of X.

Theorem 2.4.1 (Weak Compactness.) Let X be a reflexive Banach space, and assume that the sequence {u_k} is bounded in X. Then there exists a subsequence of {u_k} which converges weakly in X.

Now let's come back to our Hilbert space H := H^1_0(Ω). By the Riesz Representation Theorem, for every bounded linear functional f on H, there is a unique v ∈ H such that

  <f, u> = (v, u), ∀ u ∈ H.

It follows that H* = H, and therefore H is reflexive. By the coercivity and the Weak Compactness Theorem, a minimizing sequence {u_k} is bounded, and hence possesses a subsequence {u_{k_i}} that converges weakly to an element u_o in H:

  (u_{k_i}, v) → (u_o, v), ∀ v ∈ H.

Naturally, we would wish that the functional J we consider is continuous with respect to this weak convergence, that is,

  lim_{i→∞} J(u_{k_i}) = J(u_o);

then u_o would be our desired minimizer. However, this is not the case with our functional, nor is it true for most other functionals of interest. Fortunately, we do not really need such a continuity; instead, we only need a weaker one:

  J(u_o) ≤ liminf_{i→∞} J(u_{k_i}).

Since, on the other hand, by the definition of a minimizing sequence we obviously have

  J(u_o) ≥ inf J = liminf_{i→∞} J(u_{k_i}),

it follows that J(u_o) = liminf_{i→∞} J(u_{k_i}); therefore, u_o is a minimizer.

Definition 2.4.4 We say that a functional J(·) is weakly lower semicontinuous on a Banach space X if for every weakly convergent sequence u_k ⇀ u_o we have

  J(u_o) ≤ liminf_{k→∞} J(u_k).
Now we show that our functional J is weakly lower semicontinuous in H. Assume that {u_k} ⊂ H and u_k ⇀ u_o in H. Since for each fixed f ∈ L^{2n/(n+2)}(Ω), the map u ↦ ∫_Ω f u dx is a bounded linear functional on H, it follows immediately that

  ∫_Ω f u_k dx → ∫_Ω f u_o dx, as k→∞.

From the algebraic inequality a² + b² ≥ 2ab, we have

  ∫_Ω |Du_k|² dx ≥ ∫_Ω |Du_o|² dx + 2 ∫_Ω Du_o · (Du_k - Du_o) dx.  (2.18)

Here the second term on the right goes to zero due to the weak convergence u_k ⇀ u_o. Therefore

  liminf_{k→∞} ∫_Ω |Du_k|² dx ≥ ∫_Ω |Du_o|² dx.  (2.19)

Now (2.18) and (2.19) imply the weak lower semicontinuity.

So far, we have proved

Theorem 2.4.2 Assume that Ω is a bounded domain with smooth boundary. Then for every f ∈ L^{2n/(n+2)}(Ω), the functional

  J(u) = (1/2) ∫_Ω |Du|² dx - ∫_Ω f u dx

possesses a minimum u_o in H^1_0(Ω), which is a weak solution of the boundary value problem

  -Δu = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.

2.4.4 Existence of Minimizers Under Constraints

Now we consider the semilinear Dirichlet problem

  -Δu = |u|^{p-1} u, x ∈ Ω;  u(x) = 0, x ∈ ∂Ω,  (2.20)

with 1 < p < (n+2)/(n-2).
Obviously, u ≡ 0 is a solution. We call this a trivial solution. Of course, what we are more interested in is whether there exist nontrivial solutions. Let

  J(u) = (1/2) ∫_Ω |Du|² dx - 1/(p+1) ∫_Ω |u|^{p+1} dx.

It is easy to verify that

  (d/dt) J(u + tv)|_{t=0} = ∫_Ω ( Du · Dv - |u|^{p-1} u v ) dx.

Therefore, a critical point of the functional J in H := H^1_0(Ω) is a weak solution of (2.20). However, this functional is not bounded from below, and hence there is no minimizer of J(u). To see this, fix an element u ∈ H and consider

  J(tu) = (t²/2) ∫_Ω |Du|² dx - t^{p+1}/(p+1) ∫_Ω |u|^{p+1} dx.

Noticing p + 1 > 2, we find that J(tu) → -∞ as t→∞.

For this reason, instead of J, we will consider another functional

  I(u) := (1/2) ∫_Ω |Du|² dx

on the constraint set

  M := { u ∈ H | G(u) := ∫_Ω |u|^{p+1} dx = 1 }.

We seek minimizers of I in M. Let {u_k} ⊂ M be a minimizing sequence, i.e.

  lim_{k→∞} I(u_k) = inf_{u ∈ M} I(u) := m.

It follows that ∫_Ω |Du_k|² dx is bounded, hence {u_k} is bounded in H. By the Weak Compactness Theorem, there exists a subsequence (for simplicity, we still denote it by {u_k}) which converges weakly to u_o in H. The weak lower semicontinuity of the functional I yields

  I(u_o) ≤ liminf_{k→∞} I(u_k) = m.  (2.21)

On the other hand, since p + 1 < 2n/(n-2), by the Compact Sobolev Embedding Theorem

  H^1_0(Ω) ↪↪ L^{p+1}(Ω),
we find that u_k converges strongly to u_o in L^{p+1}(Ω), and hence

  ∫_Ω |u_o|^{p+1} dx = 1.

That is, u_o ∈ M, and therefore I(u_o) ≥ m. Together with (2.21), we obtain I(u_o) = m. Now we have shown the existence of a minimizer u_o of I in M. To link u_o to a weak solution of (2.20), we derive the corresponding Euler-Lagrange equation for this minimizer under the constraint.

Theorem 2.4.3 (Lagrange Multiplier). Let u be a minimizer of I in M, i.e.

  I(u) = min_{v ∈ M} I(v).

Then there exists a real number λ such that I′(u) = λ G′(u), i.e.

  <I′(u), v> = λ <G′(u), v>, ∀ v ∈ H.  (2.22)

Before proving the theorem, let's first recall the Lagrange Multiplier rule for functions defined on Rⁿ. Let f(x) and g(x) be smooth functions in Rⁿ. It is well known that the critical points (in particular, the minima) x_o of f(x) under the constraint g(x) = c satisfy

  Df(x_o) = λ Dg(x_o)

for some constant λ. Geometrically, x_o being a critical point of f(x) on the level set g(x) = c means that the tangential projection Df(x_o)|_T = 0. Indeed, Df(x_o) is a vector in Rⁿ; it can be decomposed as the sum of two perpendicular vectors:

  Df(x_o) = Df(x_o)|_N + Df(x_o)|_T,

where the two terms on the right hand side are the projections of Df(x_o) onto the normal and tangent spaces of the level set g(x) = c at the point x_o, while the normal projection Df(x_o)|_N is parallel to Dg(x_o). Hence Df(x_o)|_T = 0 amounts precisely to Df(x_o) = λ Dg(x_o).

Heuristically, we may generalize this perception to the infinite dimensional space H, and we therefore expect (2.22) to hold. At a fixed point u ∈ H, I′(u) is a bounded linear functional on H; by the Riesz Representation Theorem, it can be identified with a point (or a vector) in H, still denoted by I′(u).
Similarly for G′(u): we have G′(u) ∈ H, with

  <G′(u), w> = (p+1) ∫_Ω |u|^{p-1} u w dx,

and it is a normal vector of M at the point u. At a minimum u of I on the hypersurface { w ∈ H | G(w) = 1 }, I′(u) must be perpendicular to the tangent space of M at the point u, and hence I′(u) = λ G′(u). We now prove this rigorously.

The Proof of Theorem 2.4.3.

Recall that if u is the minimum of I in the whole space H, we have

  (d/dt) I(u + tv)|_{t=0} = 0, ∀ v ∈ H.  (2.23)

Now u is only the minimum on M, so we can no longer use (2.23), because u + tv cannot be kept on M: since u ∈ M, we have ∫_Ω |u|^{p+1} dx = 1, but we cannot guarantee that

  G(u + tv) := ∫_Ω |u + tv|^{p+1} dx = 1

for all small t. To remedy this, for each small t and any v ∈ H, we try to show that there is an s = s(t) such that u + tv + s(t)w stays on M, i.e. G(u + tv + s(t)w) = 1. In fact, there exists w (one may just take w = u) such that

  <G′(u), w> = (p+1) ∫_Ω |u|^{p-1} u w dx ≠ 0.

Consider g(t, s) := G(u + tv + sw). Then g(0, 0) = 1, and

  (∂g/∂s)(0, 0) = <G′(u), w> ≠ 0.

Hence, according to the Implicit Function Theorem, for sufficiently small t there exists a C¹ function s = s(t) with s(0) = 0, such that

  g(t, s(t)) = G(u + tv + s(t)w) = 1.

Now u + tv + s(t)w is on M, and hence at the minimum u of I on M we must have

  0 = (d/dt) I(u + tv + s(t)w)|_{t=0} = <I′(u), v> + <I′(u), w> s′(0).  (2.24)
On the other hand, since g(t, s(t)) ≡ 1, we have

  0 = (d/dt) g(t, s(t))|_{t=0} = (∂g/∂t)(0, 0) + (∂g/∂s)(0, 0) s′(0) = <G′(u), v> + <G′(u), w> s′(0).

Consequently,

  s′(0) = - <G′(u), v> / <G′(u), w>.

Substituting this into (2.24), and denoting

  λ = <I′(u), w> / <G′(u), w>,

we arrive at

  <I′(u), v> = λ <G′(u), v>, ∀ v ∈ H.

This completes the proof of the Theorem.

Now let's come back to the weak solution of problem (2.20). Our minimizer u_o of I under the constraint G(u) = 1 satisfies

  <I′(u_o), v> = λ <G′(u_o), v>, ∀ v ∈ H,

that is,

  ∫_Ω Du_o · Dv dx = λ ∫_Ω |u_o|^{p-1} u_o v dx, ∀ v ∈ H.  (2.25)

One can see that λ > 0.

Exercise 2.4.1 Show that λ > 0 in (2.25).

Let ũ = a u_o with λ/a^{p-1} = 1; this is possible since p > 1. Then it is easy to verify that

  ∫_Ω Dũ · Dv dx = ∫_Ω |ũ|^{p-1} ũ v dx, ∀ v ∈ H.

Now ũ is the desired weak solution of (2.20).

2.4.5 Minimax Critical Points

Besides local or global minima and maxima, there are other types of critical points: saddle points, or minimax points. As a simple example, let's consider the function f(x, y) = x² - y² from R² to R¹. We have Df(0, 0) = 0, i.e. (0, 0) is a critical point of f. The graph of z = f(x, y) in R³ looks like a horse saddle. If we travel on the graph along the direction of the x-axis, (0, 0) is the minimum, while along the direction of the y-axis, (0, 0) is the maximum; thus it is neither a local maximum nor a local minimum. For this reason, we also call (0, 0) a minimax critical point of f(x, y).
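Returning to the constrained minimization above, a finite dimensional analogue can be checked numerically. The sketch below is our own construction (not from the text): it minimizes f(x) = |x|² over { Σ x_i⁴ = 1 }, the Rⁿ counterpart of minimizing the Dirichlet integral on M with p + 1 = 4. By the power mean inequality the minimum value is 1, attained at the coordinate axes, where Df = λ Dg holds with λ = 1/2.

```python
import numpy as np

# Finite-dimensional analogue: minimize f(x)=|x|^2 on M = {g(x)=sum x_i^4 = 1}.
rng = np.random.default_rng(0)
n = 5
best_val = np.inf
for _ in range(20000):
    x = rng.standard_normal(n)
    x = x / np.sum(x**4) ** 0.25       # project onto M (g is 4-homogeneous)
    best_val = min(best_val, np.sum(x**2))

# Since sum x_i^4 <= (sum x_i^2)^2, on M we have f >= 1, with equality at ±e_i.
u = np.zeros(n); u[0] = 1.0            # a true constrained minimizer
lam = 0.5
residual = np.max(np.abs(2.0 * u - lam * 4.0 * u**3))  # Df - lam*Dg at u
```

The random search only approaches the minimum from above; the exact multiplier relation is verified at the analytic minimizer u.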
The way to locate or to show the existence of such a minimax point is called a minimax variational method.

To show the existence of a minimax point, let's still take this simple example. Suppose we do not know whether f(x, y) possesses a minimax point, but we do detect a 'mountain range' on the graph near the xz-plane. We pick two points, for instance P = (1, 1) and Q = (-1, -1), on the two sides of the 'mountain range'. To go from (P, f(P)) to (Q, f(Q)) on the graph, one needs first to climb over the 'mountain range' and then to descend to the point (Q, f(Q)). Let γ(s) be a continuous curve linking the two points P and Q, with γ(0) = P and γ(1) = Q. Then on the corresponding path on the graph there is a highest point. We take the infimum of these highest values among all possible paths linking P and Q:

  c = inf_{γ ∈ Γ} max_{s ∈ [0,1]} f(γ(s)),

where

  Γ = { γ | γ = γ(s) is continuous on [0, 1], γ(0) = P, γ(1) = Q }.

We then try to show that there exists a point P_o at which f attains this minimax value c, and Df(P_o) = 0.

For a functional J defined on an infinite dimensional space X, we will apply a similar idea to seek its minimax critical points. Unlike in a finite dimensional space, in an infinite dimensional space a bounded sequence may not converge. Hence we require the functional J to satisfy some compactness condition, which is the Palais-Smale condition.

Definition 2.4.5 We say that the functional J satisfies the Palais-Smale condition (in short, (PS)) if any sequence {u_k} ⊂ X for which J(u_k) is bounded and J′(u_k)→0 as k→∞ possesses a convergent subsequence.

Under this compactness condition, Ambrosetti and Rabinowitz [AR] established the following well-known theorem on the existence of a minimax critical point.

Theorem 2.4.4 (Mountain Pass Theorem). Let X be a real Banach space and J ∈ C¹(X, R¹). Suppose J satisfies (PS), J(0) = 0,
(J1) there exist constants ρ, α > 0 such that J|_{∂B_ρ(0)} ≥ α, and
(J2) there is an e ∈ X \ B̄_ρ(0), such that J(e) ≤ 0.
Then J possesses a critical value c ≥ α which can be characterized as

  c = inf_{γ ∈ Γ} max_{u ∈ γ([0,1])} J(u),

where

  Γ = { γ ∈ C([0, 1], X) | γ(0) = 0, γ(1) = e }.
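The minimax characterization can be explored numerically for the toy saddle. The sketch below is our own (the endpoints P = (1,1), Q = (-1,-1) are as in the example): the maximum of f along the diagonal path is 0, while no competitor path can do better, since every path starts at P where f(P) = 0 and crosses the line x + y = 0 where f = (x - y)(x + y) vanishes; so c = 0 = f(0,0).

```python
import numpy as np

# Minimax value for f(x,y) = x^2 - y^2 with endpoints P=(1,1), Q=(-1,-1).
f = lambda x, y: x**2 - y**2
s = np.linspace(0.0, 1.0, 201)

# The diagonal path y = x gives f identically 0, so c <= 0.
xd = 1.0 - 2.0 * s
c_diag = np.max(f(xd, xd))

# Random piecewise-linear competitor paths: each maximum is >= f(P) = 0.
rng = np.random.default_rng(1)
maxima = []
for _ in range(200):
    way = np.vstack([[1.0, 1.0], rng.uniform(-2, 2, size=(3, 2)), [-1.0, -1.0]])
    xs = np.interp(s, np.linspace(0, 1, 5), way[:, 0])
    ys = np.interp(s, np.linspace(0, 1, 5), way[:, 1])
    maxima.append(np.max(f(xs, ys)))
```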
Here, a critical value c means a value of the functional which is attained by a critical point u, i.e.

  J′(u) = 0 and J(u) = c.

Heuristically, the Theorem says that if the point 0 is located in a valley surrounded by a mountain range, and if at the other side of the range there is another low point e, then there must be a mountain pass from 0 to e which contains a minimax critical point of J. More precisely, on the graph of J, among the paths connecting the two low points (0, J(0)) and (e, J(e)), there is one whose highest point S is the lowest as compared to the highest points of the nearby paths. This S is a saddle point, i.e. the minimax critical point of J.

The proof of the Theorem is mainly based on a deformation theorem which exploits the change of topology of the level sets of J when the value of J goes through a critical value. To illustrate this, let's come back to the simple example f(x, y) = x² - y². Denote the level set of f by

  f_c := { (x, y) ∈ R² | f(x, y) ≤ c }.

One can see that 0 is a critical value, and that for any a > 0 the level sets f_a and f_{-a} have different topology: as the figure below illustrates, f_a is connected while f_{-a} is not. If there is no critical value between two level sets, say f_a and f_b with 0 < a < b, then one can continuously deform f_b into f_a. One convenient way is to let the points in the set f_b 'flow' along the direction of -Df into f_a; correspondingly, the points on the graph 'flow downhill' into a lower level.

More precisely and generally, we have

Theorem 2.4.5 (Deformation Theorem). Let X be a real Banach space. Assume that J ∈ C¹(X, R¹) and satisfies the (PS) condition. Suppose that c is not a critical value of J.
Then for each sufficiently small ε > 0, there exist a constant 0 < δ < ε and a continuous mapping η(t, u) from [0, 1] × X to X, such that
(i) η(0, u) = u, ∀ u ∈ X;
(ii) η(1, u) = u, ∀ u ∉ J^{-1}[c - ε, c + ε];
(iii) J(η(t, u)) ≤ J(u), ∀ u ∈ X, t ∈ [0, 1];
(iv) η(1, J_{c+δ}) ⊂ J_{c-δ}, where J_b := { u ∈ X | J(u) ≤ b }.

Roughly speaking, the Theorem says that if c is not a critical value of J, then one can continuously deform a higher level set J_{c+δ} into a lower one J_{c-δ}, i.e. let the points on the graph 'flow downhill' into a lower level. Condition (ii) is to ensure that after the deformation, the two points 0 and e as in the Mountain Pass Theorem still remain fixed.

Proof of the Deformation Theorem.

The main idea is to solve an ODE initial value problem (with respect to t) for each u ∈ X, which would roughly look like the following:

  dη/dt (t, u) = -J′(η(t, u)),  η(0, u) = u,  (2.26)

i.e. to let the 'flow' go along the direction of -J′(η(t, u)), so that the value of J(η(t, u)) would decrease as t increases. However, to meet condition (ii), i.e. to make the flow stay still on the set

  M := { u ∈ X | J(u) ≤ c - ε or J(u) ≥ c + ε },

we need to modify the right hand side of the equation a little bit. We could multiply the right hand side of (2.26) by dist(η, M), but this would make the flow go too slow near M. We will modify it further, so that the flow would not go too slow in J_{c+ε} \ J_{c-ε} and would satisfy the other desired conditions.

Step 1. We claim that there exist constants 0 < a, ε < 1 such that

  ‖J′(u)‖ ≥ a, ∀ u ∈ J_{c+ε} \ J_{c-ε}.  (2.27)

Otherwise, there would exist sequences a_k→0, ε_k→0 and elements u_k ∈ J_{c+ε_k} \ J_{c-ε_k} with ‖J′(u_k)‖ ≤ a_k. Then, by virtue of the (PS) condition, there is a subsequence {u_{k_i}} that converges to an element u_o ∈ X. Since J ∈ C¹(X, R¹), we must have J(u_o) = c and J′(u_o) = 0. This is a contradiction with the assumption that c is not a critical value of J. Therefore (2.27) holds.

Step 2. We first fix such a small ε > 0, then choose 0 < δ < ε such that

  δ < a²/2.  (2.28)
Let

  N := { u ∈ X | c - δ ≤ J(u) ≤ c + δ }.

Now define

  g(u) := dist(u, M) / ( dist(u, M) + dist(u, N) ),

and let

  h(s) := 1 if 0 ≤ s ≤ 1;  h(s) := 1/s if s > 1.

We finally modify -J′(η) as

  F(η) := -g(η) h(‖J′(η)‖) J′(η).

From the definition of g, we have g(u) = 0 for all u ∈ M, while g(η) ≡ 1 for η ∈ N. The introduction of the factor h(·) is to ensure the boundedness of F(·). One can verify that F is bounded and Lipschitz continuous on bounded sets; therefore, by the well-known ODE theory, for each fixed u ∈ X the modified ODE problem

  dη/dt (t, u) = F(η(t, u)),  η(0, u) = u,

possesses a unique solution η(t, u) for all time t ≥ 0. Condition (i) is satisfied automatically, and since the flow stays still on M, condition (ii) is also satisfied.

Step 3. To verify (iii) and (iv), we compute

  (d/dt) J(η(t, u)) = < J′(η(t, u)), dη/dt (t, u) >
    = < J′(η(t, u)), F(η(t, u)) >
    = -g(η(t, u)) h(‖J′(η(t, u))‖) ‖J′(η(t, u))‖².  (2.29)

In the above equality, the right hand side is non-positive, and hence J(η(t, u)) is non-increasing. This verifies (iii).

To see (iv), it suffices to show that for each u in N = J_{c+δ} \ J_{c-δ}, we have η(1, u) ∈ J_{c-δ}. Notice that for η ∈ N we have g(η) ≡ 1, and (2.29) becomes

  (d/dt) J(η(t, u)) = -h(‖J′(η(t, u))‖) ‖J′(η(t, u))‖².
By the definition of h and (2.27), we have

  (d/dt) J(η(t, u)) = -‖J′(η(t, u))‖² ≤ -a², if ‖J′(η(t, u))‖ ≤ 1,

and

  (d/dt) J(η(t, u)) = -‖J′(η(t, u))‖ ≤ -a, if ‖J′(η(t, u))‖ > 1.

In either case, since 0 < a < 1,

  J(η(1, u)) ≤ J(η(0, u)) - a² ≤ c + δ - a² ≤ c - δ,

that is, η(1, u) ∈ J_{c-δ}. This establishes (iv) and therefore completes the proof of the Theorem.

With this Deformation Theorem, the proof of the Mountain Pass Theorem becomes very simple.

The Proof of the Mountain Pass Theorem.

First, from the definition of c, for any γ ∈ Γ we have γ([0, 1]) ∩ ∂B_ρ(0) ≠ ∅. Hence, by condition (J1),

  max_{u ∈ γ([0,1])} J(u) ≥ inf_{v ∈ ∂B_ρ(0)} J(v) ≥ α,

and it follows that c ≥ α. On the other hand, take the particular path γ(t) = te and consider the function g(t) = J(γ(t)). It is continuous on the closed interval [0, 1], and hence bounded from above. Therefore, by the definition of c,

  c ≤ max_{u ∈ γ([0,1])} J(u) = max_{t ∈ [0,1]} g(t) < ∞.

Suppose the number c so defined is not a critical value. We will use the Deformation Theorem to continuously deform a path γ ∈ Γ down to the level set J_{c-δ} and thus derive a contradiction. More precisely, choose the ε in the Deformation Theorem to be α/2, and let 0 < δ < ε be the corresponding constant. From the definition of c, we can pick a path γ ∈ Γ such that

  max_{u ∈ γ([0,1])} J(u) ≤ c + δ,

i.e. γ([0, 1]) ⊂ J_{c+δ}. Now, by the Deformation Theorem, this path can be continuously deformed down into the level set J_{c-δ}: η(1, γ([0, 1])) ⊂ J_{c-δ}.
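The downhill deformation used above can be illustrated with the simplest model functional J(x) = x² on R. In the sketch below (our own; the interval [1/4, 1] and the step size are arbitrary choices), there is no critical value in [1/4, 1], and explicit Euler integration of dη/dt = -J′(η) up to t = 1 carries the level set J_1 = [-1, 1] into J_{1/4}, while J decreases along the flow, mirroring properties (iii) and (iv).

```python
import numpy as np

# Gradient-flow deformation for J(x) = x^2: no critical value in [1/4, 1].
J = lambda x: x**2
Jp = lambda x: 2.0 * x

eta = np.linspace(-1.0, 1.0, 41)   # points of the level set J_1 = [-1, 1]
vals0 = J(eta)
dt = 1e-3
for _ in range(1000):              # explicit Euler up to t = 1
    eta = eta - dt * Jp(eta)
vals1 = J(eta)
# vals1 <= vals0 pointwise (property (iii)); max(vals1) < 1/4 (property (iv))
```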
In other words,

  max_{u ∈ η(1, γ([0,1]))} J(u) ≤ c - δ.  (2.30)

On the other hand, since 0 and e are in J_{c-ε}, by (ii) of the Deformation Theorem we have η(1, 0) = 0 and η(1, e) = e. This means that η(1, γ([0, 1])) is also a path in Γ, and therefore, by the definition of c, we must have

  max_{u ∈ η(1, γ([0,1]))} J(u) ≥ c.

This contradicts (2.30) and hence completes the proof of the Theorem.

2.4.6 Existence of a Minimax via the Mountain Pass Theorem

Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem

  -Δu = f(x, u), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.  (2.31)

Although the method we illustrate here could be applied to more general second order uniformly elliptic equations, we prefer this simple model that better illustrates the main ideas.

We assume that f(x, u) satisfies
(f1) f(x, s) ∈ C(Ω̄ × R¹, R¹);
(f2) there exist constants c₁, c₂ ≥ 0 such that

  |f(x, s)| ≤ c₁ + c₂ |s|^p, with 0 ≤ p < (n+2)/(n-2), if n > 2;

if n = 2, it suffices that

  |f(x, s)| ≤ c₁ e^{φ(s)}, with φ(s)/s² → 0 as |s|→∞;

if n = 1, (f2) can be dropped;
(f3) f(x, s) = o(|s|) as s→0; and
(f4) there are constants µ > 2 and r ≥ 0 such that for |s| ≥ r,

  0 < µ F(x, s) ≤ s f(x, s),

where

  F(x, s) = ∫₀^s f(x, t) dt.

Theorem 2.4.6 (Rabinowitz [Ra]) If f satisfies conditions (f1)-(f4), then problem (2.31) possesses a nontrivial weak solution.

Remark 2.4.2 A simple example of such a function f(x, u) is |u|^{p-1} u.
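For the model nonlinearity in Remark 2.4.2 these conditions can be checked directly. The sketch below (our own, with the arbitrary choice p = 3) verifies numerically that f(s) = |s|^{p-1} s satisfies (f3), and satisfies (f4) with µ = p + 1, where the inequality in fact holds with equality.

```python
import numpy as np

# Structure conditions for f(s) = |s|^{p-1} s, F(s) = |s|^{p+1}/(p+1), mu = p+1.
p = 3.0
mu = p + 1.0
f = lambda s: np.abs(s)**(p - 1.0) * s
F = lambda s: np.abs(s)**(p + 1.0) / (p + 1.0)

s = np.array([0.5, 1.0, 2.0, 5.0, -1.5, -3.0])
lhs = mu * F(s)          # equals |s|^{p+1}
rhs = s * f(s)           # also equals |s|^{p+1}: (f4) holds, with equality

ratio = f(1e-4) / 1e-4   # (f3): f(s)/s = |s|^{p-1} -> 0 as s -> 0
```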
To find weak solutions of the problem, we seek minimax critical points of the functional

  J(u) = (1/2) ∫_Ω |Du|² dx - ∫_Ω F(x, u) dx

on the Hilbert space H := H^1_0(Ω). We verify that J satisfies the conditions in the Mountain Pass Theorem. We first need to show that J ∈ C¹(H, R¹). Since we have verified that the first term ∫_Ω |Du|² dx is continuously differentiable, now we only need to check the second term

  I(u) := ∫_Ω F(x, u) dx.

Proposition 2.4.1 (Rabinowitz [Ra]) Let Ω ⊂ Rⁿ be a bounded domain and let g satisfy
(g1) g ∈ C(Ω̄ × R¹, R¹), and
(g2) there are constants r, s ≥ 1 and a₁, a₂ ≥ 0 such that

  |g(x, ξ)| ≤ a₁ + a₂ |ξ|^{r/s}

for all x ∈ Ω̄, ξ ∈ R¹.
Then the map u(x) ↦ g(x, u(x)) belongs to C(L^r(Ω), L^s(Ω)).

Proof. By (g2),

  ∫_Ω |g(x, u(x))|^s dx ≤ ∫_Ω ( a₁ + a₂ |u|^{r/s} )^s dx ≤ a₃ ∫_Ω ( 1 + |u|^r ) dx.

It follows that if u ∈ L^r(Ω), then g(x, u) ∈ L^s(Ω).

Now we verify the continuity of the map. Fix u ∈ L^r(Ω). Given any ε > 0, we want to show that there is a δ > 0, such that

  ‖g(·, u + φ) - g(·, u)‖_{L^s(Ω)} < ε, whenever ‖φ‖_{L^r(Ω)} < δ.  (2.32)

Let

  Ω₁ := { x ∈ Ω | |φ(x)| ≤ m },  Ω₂ := Ω \ Ω₁.
Write

  I_i := ∫_{Ω_i} |g(x, u(x) + φ(x)) - g(x, u(x))|^s dx, i = 1, 2.

For the given ε > 0, we choose η, and then m, as follows. By the continuity of g(x, ·), for any η > 0 there exists m > 0 such that

  |g(x, u(x) + φ(x)) - g(x, u(x))| ≤ η, ∀ x ∈ Ω₁.  (2.33)

It follows that

  I₁ ≤ |Ω₁| η^s ≤ |Ω| η^s,

where |Ω| is the volume of Ω. Choosing η so small that |Ω| η^s < ε^s/2, we get I₁ ≤ ε^s/2.

Now we fix this m and estimate I₂. By (g2), for some constants c₁ and c₂, we have

  I₂ ≤ ∫_{Ω₂} ( c₁ + c₂ (|u|^r + |φ|^r) ) dx ≤ c₁ |Ω₂| + c₂ ∫_{Ω₂} |u|^r dx + c₂ ‖φ‖^r_{L^r(Ω)}.  (2.34)

Since |φ(x)| > m on Ω₂, whenever ‖φ‖_{L^r(Ω)} < δ we have

  δ^r > ∫_Ω |φ|^r dx ≥ m^r |Ω₂|.  (2.35)

By (2.35), we can choose δ so small that |Ω₂| is sufficiently small; and since u ∈ L^r(Ω), we can make ∫_{Ω₂} |u|^r dx as small as we wish by letting |Ω₂| be small. Consequently, for such a small δ, the right hand side of (2.34) is less than ε^s/2. Therefore

  I₁ + I₂ ≤ ε^s/2 + ε^s/2 = ε^s,

whenever ‖φ‖_{L^r(Ω)} < δ. This verifies (2.32), and therefore completes the proof of the Proposition.

Proposition 2.4.2 (Rabinowitz [Ra]) Assume that f(x, s) satisfies (f1) and (f2), and let n ≥ 3. Then I ∈ C¹(H, R),

  <I′(u), v> = ∫_Ω f(x, u) v dx, ∀ v ∈ H,

and, moreover, I(·) is weakly continuous and I′(·) is compact, i.e. whenever u_k ⇀ u in H,

  I(u_k)→I(u) and I′(u_k)→I′(u).
Proof. We will first show that I is Fréchet differentiable on H, and then prove that I′(·) is continuous.

1. Let u, v ∈ H. Here ‖v‖ denotes the H^1_0(Ω) norm of v. We show that, for each fixed u and any given ε > 0, there exists a δ = δ(ε, u) such that

  |Q(u, v)| := | I(u + v) - I(u) - ∫_Ω f(x, u(x)) v(x) dx | ≤ ε ‖v‖  (2.36)

whenever ‖v‖ ≤ δ. In fact,

  |Q(u, v)| ≤ ∫_Ω | F(x, u(x) + v(x)) - F(x, u(x)) - f(x, u(x)) v(x) | dx
    = | ∫_Ω [ f(x, u(x) + ξ(x)) - f(x, u(x)) ] v(x) dx |
    ≤ ‖f(·, u + ξ) - f(·, u)‖_{L^{2n/(n+2)}(Ω)} ‖v‖_{L^{2n/(n-2)}(Ω)}
    ≤ ‖f(·, u + ξ) - f(·, u)‖_{L^{2n/(n+2)}(Ω)} K ‖v‖.  (2.37)

Here we have applied the Mean Value Theorem (with |ξ(x)| ≤ |v(x)|), the Hölder inequality, and the Sobolev inequality

  ‖v‖_{L^{2n/(n-2)}(Ω)} ≤ K ‖v‖.

In Proposition 2.4.1, let r = 2n/(n-2) and s = 2n/(n+2). Then by (f2), we deduce that the map u(x) ↦ f(x, u(x)) is continuous from L^{2n/(n-2)}(Ω) to L^{2n/(n+2)}(Ω). Hence, for any given ε > 0, we can choose δ > 0 sufficiently small such that, whenever ‖v‖ < δ, we have ‖v‖_{L^{2n/(n-2)}(Ω)} < Kδ, hence ‖ξ‖_{L^{2n/(n-2)}(Ω)} < Kδ, and therefore

  ‖f(·, u + ξ) - f(·, u)‖_{L^{2n/(n+2)}(Ω)} < ε/K.

It follows from (2.37) that |Q(u, v)| ≤ ε ‖v‖. This verifies (2.36), and hence I(·) is differentiable, with

  <I′(u), v> = ∫_Ω f(x, u(x)) v(x) dx.

2. To verify the continuity of I′(·), we show that

  ‖I′(u + φ) - I′(u)‖ → 0, as ‖φ‖ → 0.  (2.38)

We will use the fact that

  ‖f(·, u + φ) - f(·, u)‖_{L^{2n/(n+2)}(Ω)} → 0, as ‖φ‖ → 0.  (2.39)
By definition,

  ‖I′(u + φ) - I′(u)‖ = sup_{‖v‖ ≤ 1} | <I′(u + φ) - I′(u), v> |
    = sup_{‖v‖ ≤ 1} | ∫_Ω [ f(x, u(x) + φ(x)) - f(x, u(x)) ] v(x) dx |
    ≤ sup_{‖v‖ ≤ 1} ‖f(·, u + φ) - f(·, u)‖_{L^{2n/(n+2)}(Ω)} K ‖v‖
    ≤ K ‖f(·, u + φ) - f(·, u)‖_{L^{2n/(n+2)}(Ω)}.  (2.40)

Here we have applied the Hölder inequality and the Sobolev inequality. Now (2.39) follows again from Proposition 2.4.1; together with (2.40), it proves (2.38). Therefore, I′(·) is continuous.

3. Next we verify the weak continuity of I(·). Assume that {u_k} ⊂ H and u_k ⇀ u in H. We show that

  I(u_k)→I(u), as k→∞.

In fact, by the Mean Value Theorem and the Hölder inequality, with ξ_k(x) between u_k(x) and u(x),

  |I(u_k) - I(u)| ≤ ∫_Ω |F(x, u_k(x)) - F(x, u(x))| dx
    = | ∫_Ω f(x, ξ_k(x)) ( u_k(x) - u(x) ) dx |
    ≤ ‖f(·, ξ_k)‖_{L^r(Ω)} ‖u_k - u‖_{L^q(Ω)}
    ≤ C ( 1 + ‖ξ_k‖^p_{L^{2n/(n-2)}(Ω)} ) ‖u_k - u‖_{L^q(Ω)},  (2.41)

where

  r = 2n / ( p(n-2) ) and q = r/(r-1) = 2n / ( 2n - p(n-2) ).

It is easy to see that q < 2n/(n-2). Since a weakly convergent sequence is bounded, {u_k} is bounded in H, and hence, by Sobolev Embedding, it is bounded in L^{2n/(n-2)}(Ω); so is {ξ_k}. Then, by the Compact Embedding of H into L^q(Ω), the last term of (2.41) approaches zero as k→∞. This verifies the weak continuity of I(·).

4. Finally, using a similar argument as in deriving (2.40), we obtain

  ‖I′(u_k) - I′(u)‖ ≤ K ‖f(·, u_k) - f(·, u)‖_{L^{2n/(n+2)}(Ω)}.  (2.42)
Since p < (n+2)/(n-2), we have 2np/(n+2) < 2n/(n-2), and hence, by the Compact Embedding,

  H ↪↪ L^{2np/(n+2)}(Ω).

Consequently, u_k→u strongly in L^{2np/(n+2)}(Ω), and, by Proposition 2.4.1,

  f(·, u_k)→f(·, u) strongly in L^{2n/(n+2)}(Ω).

Together with (2.42), we arrive at

  I′(u_k)→I′(u), as k→∞.

This completes the proof of the Proposition.

Now let's come back to Problem (2.31) and prove Theorem 2.4.6. We will verify that J(·) satisfies the conditions in the Mountain Pass Theorem, and thus possesses a nontrivial minimax critical point, which is a weak solution of (2.31).

First we verify the (PS) condition. Assume that {u_k} is a sequence in H satisfying

  J(u_k) ≤ M and J′(u_k)→0.

We show that {u_k} possesses a convergent subsequence. By (f4), we have

  µM + ‖J′(u_k)‖ ‖u_k‖ ≥ µ J(u_k) - <J′(u_k), u_k>
    = (µ/2 - 1) ∫_Ω |Du_k|² dx + ∫_Ω [ f(x, u_k) u_k - µ F(x, u_k) ] dx
    ≥ (µ/2 - 1) ∫_Ω |Du_k|² dx + ∫_{|u_k(x)| ≤ r} [ f(x, u_k) u_k - µ F(x, u_k) ] dx
    ≥ (µ/2 - 1) ∫_Ω |Du_k|² dx - ( a₁ r + a₂ r^{p+1} ) |Ω|
    = c_o ‖u_k‖² - C_o  (2.43)

for some positive constants c_o and C_o. Since ‖J′(u_k)‖ is bounded, (2.43) implies that {u_k} must be bounded in H. Hence there exists a subsequence (still denoted by {u_k}) which converges weakly to some element u_o in H.

Let A : H→H* be the duality map between H and its dual space H*, i.e.

  <Au, v> = ∫_Ω Du · Dv dx, for any u, v ∈ H.
Then

  u = A^{-1} J′(u) + A^{-1} f(·, u), ∀ u ∈ H.

By the Compact Embedding and Proposition 2.4.1, f(·, u_k)→f(·, u_o) strongly in H*, and hence A^{-1} f(·, u_k)→A^{-1} f(·, u_o) strongly in H. Since J′(u_k)→0, it follows that

  u_k = A^{-1} J′(u_k) + A^{-1} f(·, u_k) → A^{-1} f(·, u_o), as k→∞,  (2.44)

strongly in H. This verifies the (PS) condition.

To see that there is a 'mountain range' surrounding the origin, i.e. condition (J1), we estimate F(x, s). By (f3), for any given ε > 0, there is a δ > 0 such that, for all x ∈ Ω̄,

  |F(x, s)| ≤ ε s², whenever |s| < δ.  (2.45)

While by (f2), there is a constant M = M(δ) such that, for all x ∈ Ω̄,

  |F(x, s)| ≤ M |s|^{p+1}, whenever |s| ≥ δ.  (2.46)

Combining (2.45) and (2.46), we have

  |F(x, s)| ≤ ε s² + M |s|^{p+1}, for all x ∈ Ω̄ and for all s ∈ R.  (2.47)

It follows, via the Poincaré and the Sobolev inequalities, that

  | ∫_Ω F(x, u) dx | ≤ ε ∫_Ω u² dx + M ∫_Ω |u|^{p+1} dx ≤ C ( ε + M ‖u‖^{p-1} ) ‖u‖²,

and consequently

  J(u) ≥ [ 1/2 - C ( ε + M ‖u‖^{p-1} ) ] ‖u‖².  (2.48)

Choose ε = 1/(8C); with this ε, fix the corresponding M, and then choose ρ small, so that C M ρ^{p-1} ≤ 1/8. Then from (2.48) we deduce

  J|_{∂B_ρ(0)} ≥ (1/4) ρ².  (2.49)

To see the existence of a point e ∈ H such that J(e) ≤ 0, we fix any u ≢ 0 in H and consider

  J(tu) = (t²/2) ∫_Ω |Du|² dx - ∫_Ω F(x, tu) dx.
By (f4), for |s| ≥ r we have

  (d/ds) ( |s|^{-µ} F(x, s) ) = |s|^{-µ-2} s ( -µ F(x, s) + s f(x, s) )  { ≥ 0 if s ≥ r;  ≤ 0 if s ≤ -r }.  (2.50)

Integrating (2.50), we deduce that F(x, s) ≥ a₃ |s|^µ for |s| ≥ r, and it follows that

  F(x, s) ≥ a₃ |s|^µ - a₄, for all s.

Combining this with the expression of J(tu), and taking into account that µ > 2, we conclude that

  J(tu)→ -∞, as t→∞.

Hence we may take e := tu with t sufficiently large, so that J(e) ≤ 0 and e ∉ B̄_ρ(0).

Now J satisfies all the conditions in the Mountain Pass Theorem, hence it possesses a minimax critical point u_o, which is a weak solution of (2.31). Moreover, J(u_o) > 0, and therefore it is a nontrivial solution. This completes the proof of Theorem 2.4.6.
3 Regularity of Solutions

3.1 W^{2,p} a Priori Estimates
3.1.1 Newtonian Potentials
3.1.2 Uniform Elliptic Equations
3.2 W^{2,p} Regularity of Solutions
3.2.1 The Case p ≥ 2
3.2.2 The Case 1 < p < 2
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity
3.3 Regularity Lifting
3.3.1 Bootstraps
3.3.2 The Regularity Lifting Theorem
3.3.3 Applications to PDEs
3.3.4 Applications to Integral Equations

In the previous chapter, we used functional analysis, mainly the calculus of variations, to seek the existence of weak solutions for second order linear or semilinear elliptic equations. Usually, the weak solutions are easier to obtain than the classical ones. The weak solutions we obtained were in the Sobolev space W^{1,2}, and hence, by Sobolev imbedding, in L^p spaces for p ≤ 2n/(n-2). However, in practice, we are often required to find classical solutions. Therefore, one would naturally want to know whether these weak solutions are actually differentiable, so that they can become classical ones. These questions will be answered here. In this chapter, we will introduce methods to show that, in most cases, a weak solution is in fact smooth. This is called the regularity argument. Very often, the regularity is equivalent to the a priori estimate of solutions, and this is particularly true for nonlinear equations. To see the difference between the regularity and the a priori estimate, we take the equation
  -Δu = f(x)

for example. Roughly speaking, the W^{2,p} regularity theory infers that:

  If u ∈ W^{1,2} is a weak solution and f ∈ L^p, then u ∈ W^{2,p}, and there exists a constant C, such that

  ‖u‖_{W^{2,p}} ≤ C ( ‖u‖_{L^p} + ‖f‖_{L^p} ).

While the W^{2,p} a priori estimate says:

  If u ∈ W^{1,p}_0 ∩ W^{2,p}(Ω) is a strong solution, then there exists a constant C, such that

  ‖u‖_{W^{2,p}} ≤ C ‖f‖_{L^p}.

In the a priori estimate, we presuppose that u is already in W^{2,p}. It might seem a little bit surprising at this moment that the a priori estimates turn out to be powerful tools in deriving regularity. The readers will see in Section 3.2 how the a priori estimates and the uniqueness of solutions lead to the regularity.

In Section 3.1, we establish the W^{2,p} a priori estimate for the solutions of (3.1). We will first establish it for the Newtonian potential, then use the method of 'frozen coefficients' to generalize it to uniformly elliptic equations.

In Section 3.2, we establish the W^{2,p} regularity. We show that if the function f(x) is in L^p(Ω), then any weak solution u of (3.1) is in W^{2,p}.

In Section 3.3, we introduce and prove a Regularity Lifting Theorem. It provides a simple method for the study of regularity. The version we present here contains some new developments; it is much more general and very easy to use. We will also use examples to show how this method can be applied to boost the regularity of the solutions of PDEs as well as of integral equations. We believe that the method will be helpful to both experts and non-experts in the field.

Consider the second order partial differential equation

  Lu := - Σ_{i,j=1}^n a_{ij}(x) u_{x_i x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u = f(x).  (3.1)

Let Ω be an open bounded set in Rⁿ. Let a_{ij}(x), b_i(x), and c(x) be bounded functions on Ω with a_{ij}(x) = a_{ji}(x). We assume that L is uniformly elliptic, that is, there exists a constant δ > 0, such that

  Σ_{i,j=1}^n a_{ij}(x) ξ_i ξ_j ≥ δ |ξ|²

for a.e. x ∈ Ω and for all ξ ∈ Rⁿ.
3.1 W^{2,p} a Priori Estimates

We establish the W^{2,p} estimate for the solutions of

  Lu = f(x), x ∈ Ω.  (3.2)

3.1.1 Newtonian Potentials

First we start with the Newtonian potential. Let Ω be a bounded domain of Rⁿ, and let

  Γ(x) = 1 / ( n(n-2) ω_n |x|^{n-2} ), n ≥ 3,

be the fundamental solution of the Laplace equation. The Newtonian potential of f is

  w(x) = ∫_Ω Γ(x - y) f(y) dy, x ∈ Ω.  (3.3)

We prove

Theorem 3.1.1 Let f ∈ L^p(Ω) for 1 < p < ∞, and let w be the Newtonian potential of f. Then w ∈ W^{2,p}(Ω),

  -Δw = f(x), a.e. x ∈ Ω,  (3.4)

and

  ‖D²w‖_{L^p(Ω)} ≤ C ‖f‖_{L^p(Ω)}.  (3.5)

Write w = N f, where N is obviously a linear operator. For fixed i, j, define the linear operator T as

  T f = D_{ij} N f = D_{ij} ∫_{Rⁿ} Γ(x - y) f(y) dy.

To prove Theorem 3.1.1, it is equivalent to show that T : L^p(Ω)→L^p(Ω) is a bounded linear operator. Our proof can actually be applied to more general operators. To this end, we introduce the concept of weak type and strong type operators. Define

  µ_f(t) = |{ x ∈ Ω : |f(x)| > t }|.

For p ≥ 1, the weak L^p space L^p_w(Ω) is the collection of functions f that satisfy

  ‖f‖^p_{L^p_w(Ω)} = sup{ µ_f(t) t^p : t > 0 } < ∞.
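The difference between L^p and weak L^p can be seen on the standard example f(x) = x^{-1/p} on (0, 1). The sketch below is our own (p = 2 and the grids are arbitrary choices): the distribution function is µ_f(t) = min(1, t^{-p}), so sup_t t^p µ_f(t) = 1 and f lies in weak L^p, while ∫₀¹ |f|^p dx = ∫₀¹ dx/x diverges logarithmically.

```python
import numpy as np

# f(x) = x^{-1/p} on (0,1): in weak L^p but not in L^p.
p = 2.0
t = np.logspace(-2.0, 4.0, 200)
mu = np.minimum(1.0, t**(-p))         # |{x in (0,1): x^{-1/p} > t}|
weak_quasinorm_p = np.max(t**p * mu)  # sup_t t^p mu_f(t) = 1

# int_eps^1 |f|^p dx = int_eps^1 dx/x = log(1/eps) -> infinity as eps -> 0
tails = np.array([np.log(1.0 / e) for e in (1e-2, 1e-4, 1e-8)])
```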
An operator $T: L^p(\Omega)\to L^q(\Omega)$ is of strong type $(p,q)$ if
$$\|Tf\|_{L^q(\Omega)} \le C\|f\|_{L^p(\Omega)}, \quad \forall f\in L^p(\Omega);$$
it is of weak type $(p,q)$ if
$$\|Tf\|_{L^q_w(\Omega)} \le C\|f\|_{L^p(\Omega)}, \quad \forall f\in L^p(\Omega).$$

We decompose the proof into the proofs of the following four lemmas. First, we use the Fourier transform to show

Lemma 3.1.1 $T: L^2(\Omega)\to L^2(\Omega)$ is a bounded linear operator, i.e. $T$ is of strong type $(2,2)$.

Secondly, we use the Calderón-Zygmund Decomposition Lemma to prove

Lemma 3.1.2 $T$ is of weak type $(1,1)$.

Thirdly, we employ the Marcinkiewicz Interpolation Theorem to derive

Lemma 3.1.3 $T$ is of strong type $(r,r)$ for any $1<r\le 2$.

Finally, by duality, we conclude

Lemma 3.1.4 $T$ is of strong type $(p,p)$ for $1<p<\infty$.

The Proof of Lemma 3.1.1. Let
$$\hat f(\xi) = \frac{1}{(2\pi)^{n/2}}\int_{R^n} e^{i\langle x,\xi\rangle}f(x)\,dx$$
be the Fourier transform of $f$, where $\langle x,\xi\rangle = \sum_k x_k\xi_k$ is the inner product of $R^n$ and $i=\sqrt{-1}$. We need the following simple properties of the transform,
$$\widehat{D_kf}(\xi) = -i\xi_k\hat f(\xi), \quad \text{and} \quad \widehat{D_{jk}f}(\xi) = -\xi_j\xi_k\hat f(\xi), \quad (3.6)$$
and the well-known Plancherel Theorem:
$$\|\hat f\|_{L^2(R^n)} = \|f\|_{L^2(R^n)}. \quad (3.7)$$

First we consider $f\in C_0^\infty(\Omega)\subset C_0^\infty(R^n)$. Then obviously $w\in C^\infty(R^n)$ and satisfies
$$\Delta w = f(x), \quad \forall x\in R^n.$$
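The identities (3.6) and (3.7) have exact discrete analogues that can be checked numerically with the fast Fourier transform. The sketch below is illustrative only; NumPy's sign and normalization conventions stand in for the transform above. It verifies Plancherel's identity and the spectral differentiation rule on a periodic grid:

```python
import numpy as np

# Plancherel: the unitary (norm="ortho") DFT preserves the l^2 norm,
# mirroring ||f_hat||_{L^2(R^n)} = ||f||_{L^2(R^n)}.
rng = np.random.default_rng(0)
f = rng.standard_normal(256)
f_hat = np.fft.fft(f, norm="ortho")
plancherel_gap = abs(np.linalg.norm(f) - np.linalg.norm(f_hat))

# Differentiation rule: on a periodic grid, differentiating f amounts to
# multiplying its transform by (i k), the discrete counterpart of (3.6).
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)              # integer frequencies
df = np.fft.ifft(1j * k * np.fft.fft(np.sin(x))).real
deriv_gap = np.max(np.abs(df - np.cos(x)))

print(plancherel_gap, deriv_gap)   # both at machine precision
```

Both gaps vanish up to rounding error, which is the discrete shadow of the chain of equalities used in the proof below.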
It then follows from (3.6) and (3.7) that
$$\int_\Omega |f(x)|^2dx = \int_{R^n}|f(x)|^2dx = \int_{R^n}|\Delta w(x)|^2dx = \int_{R^n}|\widehat{\Delta w}(\xi)|^2d\xi = \int_{R^n}|\xi|^4|\hat w(\xi)|^2d\xi$$
$$= \sum_{k,j=1}^n\int_{R^n}\xi_k^2\xi_j^2|\hat w(\xi)|^2d\xi = \sum_{k,j=1}^n\int_{R^n}|\widehat{D_{kj}w}(\xi)|^2d\xi = \sum_{k,j=1}^n\int_{R^n}|D_{kj}w(x)|^2dx = \int_{R^n}|D^2w|^2dx.$$
Consequently,
$$\|Tf\|_{L^2(\Omega)} \le \|f\|_{L^2(\Omega)}, \quad \forall f\in C_0^\infty(\Omega). \quad (3.8)$$
To verify (3.8) for any $f\in L^2(\Omega)$, we simply pick a sequence $\{f_k\}\subset C_0^\infty(\Omega)$ that converges to $f$ in $L^2(\Omega)$, and take the limit. This proves the Lemma.

The Proof of Lemma 3.1.2. We need the following well-known Calderón-Zygmund Decomposition Lemma (see its proof in Appendix C).

Lemma 3.1.5 For $f\in L^1(R^n)$ and any fixed $\alpha>0$, there exist $E$, $G$, and disjoint cubes $\{Q_k\}$ such that
(i) $R^n = E\cup G$, $E\cap G = \emptyset$;
(ii) $|f(x)|\le\alpha$, a.e. $x\in E$;
(iii) $G = \bigcup_{k=1}^\infty Q_k$, with
$$\alpha < \frac{1}{|Q_k|}\int_{Q_k}|f(x)|\,dx \le 2^n\alpha.$$

For any $f\in L^1(\Omega)$, to apply the Calderón-Zygmund Decomposition Lemma, we first extend $f$ to vanish outside Ω, and fix a large cube $Q_o$ in $R^n$ such that
$$\int_{Q_o}|f(x)|\,dx \le \alpha|Q_o|.$$
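The stopping-time construction behind Lemma 3.1.5 is easy to carry out on a discrete one-dimensional grid, where $2^n = 2$. The sketch below is illustrative only ($f$ is given by its cell values on a dyadic grid): it bisects the starting cube and selects a dyadic interval as soon as the average of $|f|$ over it first exceeds $\alpha$, so that by construction each selected interval $Q_k$ satisfies $\alpha < \frac{1}{|Q_k|}\int_{Q_k}|f| \le 2\alpha$.

```python
import numpy as np

def cz_decompose(vals, alpha, lo=0.0, hi=1.0):
    """Dyadic Calderon-Zygmund decomposition in one dimension.

    vals : cell values of |f| on a uniform grid of (lo, hi); len(vals)
    should be a power of two and the average over (lo, hi) at most alpha.
    Returns [(interval, average)] for the selected 'bad' dyadic intervals,
    on which alpha < average <= 2 * alpha.
    """
    bad = []

    def split(v, a, b):
        if len(v) < 2:
            return                   # finest dyadic scale reached
        half = len(v) // 2
        mid = 0.5 * (a + b)
        for sub, c, d in ((v[:half], a, mid), (v[half:], mid, b)):
            m = sub.mean()
            if m > alpha:            # stop here: the parent average was <= alpha
                bad.append(((c, d), m))
            else:
                split(sub, c, d)

    if vals.mean() <= alpha:
        split(vals, lo, hi)
    return bad

# Sample use: f(x) = x^{-1/2} on (0, 1), whose average over (0, 1) is about 2.
m = 1024
x = (np.arange(m) + 0.5) / m
bad = cz_decompose(x ** -0.5, alpha=4.0)
```

The selected intervals pile up near the singularity at $0$, and the two-sided bound on their averages holds automatically, because a child average is at most twice its parent's.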
We show that
$$\mu_{Tf}(\alpha) := |\{x\in R^n : |Tf(x)|\ge\alpha\}| \le \frac{C\|f\|_{L^1(\Omega)}}{\alpha}. \quad (3.9)$$
Split the function $f$ into the "good" part $g$ and the "bad" part $b$: $f = g + b$, where
$$g(x) = \begin{cases} f(x) & \text{for } x\in E \\ \frac{1}{|Q_k|}\int_{Q_k}f(x)\,dx & \text{for } x\in Q_k,\ k=1,2,\cdots \end{cases}$$
Obviously, $b(x) = 0$ for $x\in E$, and
$$\int_{Q_k}b(x)\,dx = 0 \quad \text{for } k=1,2,\cdots. \quad (3.10)$$
From the definition of $g$, we have $|g(x)|\le 2^n\alpha$ almost everywhere. Since the operator $T$ is linear, $Tf = Tg + Tb$, and therefore
$$\mu_{Tf}(\alpha) \le \mu_{Tg}\Big(\frac{\alpha}{2}\Big) + \mu_{Tb}\Big(\frac{\alpha}{2}\Big). \quad (3.11)$$
We will estimate $\mu_{Tg}(\frac{\alpha}{2})$ and $\mu_{Tb}(\frac{\alpha}{2})$ separately.

We first estimate $\mu_{Tg}$. The estimate of this first part is easy, because $g\in L^2$. By Lemma 3.1.1 and the bound on $g$, we derive
$$\mu_{Tg}\Big(\frac{\alpha}{2}\Big) \le \frac{4}{\alpha^2}\int|g(x)|^2dx \le \frac{2^{n+2}}{\alpha}\int|g(x)|\,dx \le \frac{2^{n+2}}{\alpha}\int|f(x)|\,dx. \quad (3.12)$$

We then estimate $\mu_{Tb}$. To estimate $\mu_{Tb}(\frac{\alpha}{2})$, we divide $R^n$ into two parts, $G^*$ and $E^* := R^n\setminus G^*$ (see below for the precise definition of $G^*$). We will show that
(a) $|G^*| \le \frac{C}{\alpha}\|f\|_{L^1(\Omega)}$, and
(b) $|\{x\in E^* : |Tb(x)|\ge\frac{\alpha}{2}\}| \le \frac{C}{\alpha}\int_{E^*}|Tb(x)|\,dx \le \frac{C}{\alpha}\|f\|_{L^1(\Omega)}$.
These will imply the desired estimate for $\mu_{Tb}(\frac{\alpha}{2})$. Let
$$b_k(x) = \begin{cases} b(x) & \text{for } x\in Q_k \\ 0 & \text{elsewhere.}\end{cases}$$
Then $Tb = \sum_{k=1}^\infty Tb_k$. For each fixed $k$, let $\{b_{km}\}\subset C_0^\infty(Q_k)$ be a sequence converging to $b_k$ in $L^2(\Omega)$ and satisfying
$$\int_{Q_k}b_{km}(x)\,dx = \int_{Q_k}b_k(x)\,dx = 0. \quad (3.13)$$
From the expression
$$Tb_{km}(x) = \int_{Q_k}D_{ij}\Gamma(x-y)b_{km}(y)\,dy,$$
one can see that, due to the singularity of $D_{ij}\Gamma(x-y)$ in $Q_k$ and the fact that $b_{km}$ may not be bounded in $Q_k$, one can only estimate $Tb_{km}(x)$ when $x$ is a positive distance away from $Q_k$. For this reason, we cover $Q_k$ by a bigger ball $B_k$ which has the same center as $Q_k$ and whose radius $\delta_k$ equals the diameter of $Q_k$. Let
$$G^* = \bigcup_{k=1}^\infty B_k \quad \text{and} \quad E^* = R^n\setminus G^*.$$
We now estimate the integral in the complement of $B_k$. One small trick here is to subtract the term
$$\int_{Q_k}D_{ij}\Gamma(x-\bar y)b_{km}(y)\,dy$$
(which is 0 by (3.13)), where $\bar y$ is the center of the cube $Q_k$, so as to produce a helpful factor $\delta_k$ by applying the mean value theorem to the difference:
$$|D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)| = |(y-\bar y)\cdot D(D_{ij}\Gamma)(x-\xi)| \le \delta_k|D(D_{ij}\Gamma)(x-\xi)|.$$
Hence
$$\int_{R^n\setminus B_k}|Tb_{km}(x)|\,dx = \int_{Q_o\setminus B_k}\Big|\int_{Q_k}[D_{ij}\Gamma(x-y)-D_{ij}\Gamma(x-\bar y)]b_{km}(y)\,dy\Big|\,dx$$
$$\le C\,\delta_k\int_{Q_o\setminus B_k}\frac{1}{|x-\bar y|^{n+1}}\,dx\cdot\int_{Q_k}|b_{km}(y)|\,dy \le C_1\delta_k\int_{\delta_k}^\infty\frac{1}{r^2}\,dr\cdot\int_{Q_k}|b_{km}(y)|\,dy \le C_2\int_{Q_k}|b_{km}(y)|\,dy. \quad (3.14)$$
Now letting $m\to\infty$ in (3.14), we obtain
$$\int_{R^n\setminus B_k}|Tb_k(x)|\,dx \le C\int_{Q_k}|b_k(y)|\,dy.$$
It follows that
$$\int_{E^*}|Tb(x)|\,dx \le \sum_{k=1}^\infty\int_{R^n\setminus G^*}|Tb_k|\,dx \le \sum_{k=1}^\infty\int_{R^n\setminus B_k}|Tb_k|\,dx \le C\sum_{k=1}^\infty\int_{Q_k}|b_k(y)|\,dy \le C\int_{R^n}|f(x)|\,dx. \quad (3.15)$$
Obviously,
$$\mu_{Tb}\Big(\frac{\alpha}{2}\Big) \le |G^*| + \Big|\Big\{x\in E^* : |Tb(x)|\ge\frac{\alpha}{2}\Big\}\Big|. \quad (3.16)$$
By (iii) in the Calderón-Zygmund Decomposition Lemma, we have
$$|G^*| = \sum_{k=1}^\infty|B_k| = C\sum_{k=1}^\infty|Q_k| \le \frac{C}{\alpha}\sum_{k=1}^\infty\int_{Q_k}|f(x)|\,dx = \frac{C}{\alpha}\int_{R^n}|f(x)|\,dx. \quad (3.17)$$
Write
$$E^*_\alpha = \Big\{x\in E^* : |Tb(x)|\ge\frac{\alpha}{2}\Big\}.$$
Then by (3.15), we derive
$$|E^*_\alpha|\,\frac{\alpha}{2} \le \int_{E^*_\alpha}|Tb(x)|\,dx \le \int_{E^*}|Tb(x)|\,dx \le C\int_{R^n}|f(x)|\,dx. \quad (3.18)$$
Now the desired inequality (3.9) is a direct consequence of (3.11), (3.12), (3.16), (3.17), and (3.18). This completes the proof of the Lemma.

The proof of Lemma 3.1.3 is a direct consequence of the Marcinkiewicz Interpolation Theorem in the following restricted form. The proof of this lemma can be found in Appendix C.

Lemma 3.1.6 Let $T$ be a linear operator from $L^p(\Omega)\cap L^q(\Omega)$ into itself with $1\le p<q<\infty$. If $T$ is of weak type $(p,p)$ and weak type $(q,q)$, that is, if there exist constants $B_p$ and $B_q$ such that
$$\mu_{Tf}(t) \le \Big(\frac{B_p\|f\|_p}{t}\Big)^p \quad \text{and} \quad \mu_{Tf}(t) \le \Big(\frac{B_q\|f\|_q}{t}\Big)^q, \quad \forall f\in L^p(\Omega)\cap L^q(\Omega),$$
then for any $p<r<q$, $T$ is of strong type $(r,r)$. More precisely,
$$\|Tf\|_r \le C\,B_p^\theta B_q^{1-\theta}\,\|f\|_r, \quad \forall f\in L^p(\Omega)\cap L^q(\Omega), \quad (3.19)$$
where
$$\frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q}$$
and $C$ depends only on $p$, $q$, and $r$.

In the previous lemmas, we have shown that the operator $T$ is of weak type $(1,1)$ and strong type $(2,2)$ (of course also weak type $(2,2)$). From the previous lemma, we thus know that $T$ is of strong type $(r,r)$ for any $1<r\le 2$; that is,
$$\|Tg\|_{L^r(\Omega)} \le C_r\|g\|_{L^r(\Omega)}, \quad 1<r\le 2.$$
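For instance, the interpolation parameter in Lemma 3.1.6 is obtained by solving $\frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q}$ for $\theta$; a small arithmetic sketch (illustrative only):

```python
def interpolation_theta(p, q, r):
    """Solve 1/r = theta/p + (1 - theta)/q for theta, with p < r < q."""
    return (1.0 / r - 1.0 / q) / (1.0 / p - 1.0 / q)

# Our setting: T is weak type (1,1) and weak type (2,2).  For any r in
# (1, 2) the exponent theta lies strictly between 0 and 1; e.g. r = 3/2:
theta = interpolation_theta(1.0, 2.0, 1.5)
print(theta)   # 1/3, so ||Tf||_{3/2} <= C B_1^{1/3} B_2^{2/3} ||f||_{3/2}
```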
Let
$$\langle f,g\rangle = \int_\Omega f(x)g(x)\,dx \quad (3.20)$$
be the duality pairing between $f$ and $g$. Then it is easy to verify that $\langle g,Tf\rangle = \langle Tg,f\rangle$.

The Proof of Lemma 3.1.4. Given any $2<p<\infty$, let $r = \frac{p}{p-1}$, so that $\frac{1}{r}+\frac{1}{p} = 1$ and $1<r<2$. It follows from (3.19) and (3.20) that
$$\|Tf\|_{L^p} = \sup_{\|g\|_{L^r}=1}\langle g,Tf\rangle = \sup_{\|g\|_{L^r}=1}\langle Tg,f\rangle \le \sup_{\|g\|_{L^r}=1}\|Tg\|_{L^r}\,\|f\|_{L^p} \le C_r\|f\|_{L^p}.$$
This completes the proof of the Lemma.

3.1.2 Uniform Elliptic Equations

In this section, we consider general second order uniformly elliptic equations with Dirichlet boundary condition:
$$\begin{cases} Lu := -\sum_{i,j=1}^n a_{ij}(x)u_{x_ix_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u = f(x), & x\in\Omega \\ u(x) = 0, & x\in\partial\Omega. \end{cases} \quad (3.21)$$
We assume that Ω is bounded with $C^{2,\alpha}$ boundary, $a_{ij}(x)\in C^0(\bar\Omega)$, $b_i(x)\in L^\infty(\Omega)$, $c(x)\in L^\infty(\Omega)$, and
$$\lambda|\xi|^2 \le a_{ij}\xi_i\xi_j \le \Lambda|\xi|^2$$
for some positive constants $\lambda$ and $\Lambda$. Based on the result of the previous section, the estimates on the Newtonian potentials, we will establish a priori estimates on the strong solutions of (3.21).

Definition 3.1.1 We say that $u$ is a strong solution of
$$Lu = f(x), \quad x\in\Omega$$
if $u$ is twice weakly differentiable in Ω and satisfies the equation almost everywhere in Ω.

Theorem 3.1.2 Let $u\in W^{2,p}(\Omega)\cap W_0^{1,p}(\Omega)$ be a strong solution of (3.21), and assume that $f\in L^p(\Omega)$. Then
$$\|u\|_{W^{2,p}(\Omega)} \le C\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \quad (3.22)$$
where $C$ is a constant depending on $n$, $p$, $\lambda$, $\Lambda$, $\|b_i\|_{L^\infty(\Omega)}$, $\|c\|_{L^\infty(\Omega)}$, and the domain Ω.
Remark 3.1.1 The conditions $b_i(x), c(x)\in L^\infty(\Omega)$ can be replaced by the weaker ones $b_i(x), c(x)\in L^p(\Omega)$ for any $p>n$.

Proof of Theorem 3.1.2. We divide the proof into two major parts.

Part 1. Interior Estimate:
$$\|D^2u\|_{L^p(K)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \quad (3.23)$$
where $K$ is any compact subset of Ω.

Part 2. Boundary Estimate:
$$\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big), \quad (3.24)$$
where $\Omega_\delta = \{x\in\Omega : d(x,\partial\Omega)>\delta\}$.

Combining the interior and the boundary estimates, we obtain
$$\|u\|_{W^{2,p}(\Omega)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \quad (3.25)$$
Theorem 3.1.2 is then a simple consequence of (3.25) and the following well-known Gagliardo-Nirenberg type interpolation estimate:
$$\|\nabla u\|_{L^p(\Omega)} \le \epsilon\|D^2u\|_{L^p(\Omega)} + C_\epsilon\|u\|_{L^p(\Omega)}. \quad (3.26)$$
Indeed, substituting this estimate of $\|\nabla u\|_{L^p(\Omega)}$ into (3.25), we get
$$\|u\|_{W^{2,p}(\Omega)} \le C\epsilon\|D^2u\|_{L^p(\Omega)} + C\big(C_\epsilon\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big).$$
Choosing $\epsilon < \frac{1}{2C}$, we can absorb the term $\|D^2u\|_{L^p(\Omega)}$ on the right hand side into the left hand side and arrive at the conclusion of the theorem.

For both the interior and the boundary estimates, the main idea is the well-known method of "frozen" coefficients. Locally, near a point $x_o$, we may regard the leading coefficients $a_{ij}(x)$ of the operator approximately as the constants $a_{ij}(x_o)$ (as if the functions $a_{ij}$ were frozen at the point $x_o$); we are thus able to treat the operator as one with constant coefficients and apply the potential estimates derived in the previous section.
Part 1: Interior Estimates.

To simplify the notation, we omit the summation signs, abbreviating $\sum_{i,j=1}^n a_{ij}$ as $a_{ij}$, and so on. We quantify the modulus of continuity of the coefficients $a_{ij}$ by
$$\epsilon(\delta) = \sup_{|x-y|\le\delta,\;x,y\in\Omega,\;1\le i,j\le n}|a_{ij}(x)-a_{ij}(y)|;$$
the function $\epsilon(\delta)\to 0$ as $\delta\to 0$.

First we define the cutoff function $\phi(s)$ to be a compactly supported $C^\infty$ function with
$$\phi(s) = 1 \ \text{if } s\le 1, \qquad \phi(s) = 0 \ \text{if } s\ge 2.$$
For any $x_o\in\Omega_{2\delta}$, let
$$\eta(x) = \phi\Big(\frac{|x-x_o|}{\delta}\Big) \quad \text{and} \quad w(x) = \eta(x)u(x).$$
With a simple linear transformation, we may assume $a_{ij}(x_o) = \delta_{ij}$. Then
$$\Delta w = a_{ij}(x_o)\frac{\partial^2w}{\partial x_i\partial x_j} = \big(a_{ij}(x_o)-a_{ij}(x)\big)\frac{\partial^2w}{\partial x_i\partial x_j} + a_{ij}(x)\frac{\partial^2w}{\partial x_i\partial x_j}$$
$$= \big(a_{ij}(x_o)-a_{ij}(x)\big)\frac{\partial^2w}{\partial x_i\partial x_j} + a_{ij}(x)u(x)\frac{\partial^2\eta}{\partial x_i\partial x_j} + 2a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial\eta}{\partial x_j} + \eta(x)a_{ij}(x)\frac{\partial^2u}{\partial x_i\partial x_j}$$
$$= \big(a_{ij}(x_o)-a_{ij}(x)\big)\frac{\partial^2w}{\partial x_i\partial x_j} + a_{ij}(x)u(x)\frac{\partial^2\eta}{\partial x_i\partial x_j} + 2a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial\eta}{\partial x_j} + \eta(x)\Big(b_i(x)\frac{\partial u}{\partial x_i} + c(x)u - f(x)\Big)$$
$$:= F(x), \quad \text{for } x\in R^n.$$
Here we notice that all terms of $F$ are supported in $B_{2\delta} := B_{2\delta}(x_o)\subset\Omega$. Apparently both $w$ and
$$\Gamma * F := \frac{1}{n(n-2)\omega_n}\int_{R^n}\frac{1}{|x-y|^{n-2}}F(y)\,dy$$
are solutions of $\Delta u = F$. Thus, by uniqueness, $w = \Gamma * F$ (see the exercise after the proof of the theorem). Now, following the Newtonian potential estimate (Theorem 3.1.1), we obtain:
$$\|D^2w\|_{L^p(B_{2\delta})} = \|D^2w\|_{L^p(R^n)} \le C\|F\|_{L^p(R^n)} = C\|F\|_{L^p(B_{2\delta})}. \quad (3.27)$$
Estimating $F$ term by term, we have
$$\|F\|_{L^p(B_{2\delta})} \le \epsilon(2\delta)\|D^2w\|_{L^p(B_{2\delta})} + \|f\|_{L^p(B_{2\delta})} + C\big(\|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).$$
Combining this with the previous inequality (3.27), and choosing $\delta$ so small that $C\epsilon(2\delta)<1/2$, we get
$$\|D^2w\|_{L^p(B_{2\delta})} \le \frac{1}{2}\|D^2w\|_{L^p(B_{2\delta})} + C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big),$$
which is equivalent to
$$\|D^2w\|_{L^p(B_{2\delta})} \le C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).$$
Since $u = w$ on $B_\delta(x_o)$, we have $\|D^2u\|_{L^p(B_\delta)} = \|D^2w\|_{L^p(B_\delta)}$, and consequently
$$\|D^2u\|_{L^p(B_\delta)} \le C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big). \quad (3.28)$$
To extend this estimate from a $\delta$-ball to a compact set, let $K$ be any compact subset of Ω and let $\delta < \frac{1}{4}\mathrm{dist}(K,\partial\Omega)$; then $K\subset\Omega_{2\delta}$. We notice that the collection of balls $\{B_\delta(x) : x\in\Omega_{2\delta}\}$ forms an open covering of $\overline{\Omega_{2\delta}}$. Hence there are finitely many balls $\{B_\delta(x^i) : i=1,\cdots,m\}$ that already cover $\Omega_{2\delta}$. We now apply the estimate (3.28) on each of these balls and sum up to obtain
$$\|D^2u\|_{L^p(\Omega_{2\delta})} \le \sum_{i=1}^m\|D^2u\|_{L^p(B_\delta(x^i))} \le \sum_{i=1}^m C\big(\|f\|_{L^p(B_{2\delta}(x^i))} + \|\nabla u\|_{L^p(B_{2\delta}(x^i))} + \|u\|_{L^p(B_{2\delta}(x^i))}\big), \quad (3.29)$$
and we derive
$$\|D^2u\|_{L^p(K)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \quad (3.30)$$
This establishes the interior estimates.

Part 2. Boundary Estimates.

The boundary estimates are very similar; let us just describe how almost the same scheme can be applied here. For any point $x_o\in\partial\Omega$, the boundary portion $B_\delta(x_o)\cap\partial\Omega$
is a $C^{2,\alpha}$ graph for $\delta$ small. With a suitable rotation, we may assume that $B_\delta(x_o)\cap\partial\Omega$ is given by the graph of
$$x_n = h(x_1,\cdots,x_{n-1}) = h(x'),$$
and that Ω lies on top of this graph locally. Let
$$y = \psi(x) = \big(x'-x_o',\; x_n - h(x')\big).$$
Then $\psi$ is a diffeomorphism that maps a neighborhood of $x_o$ in $\bar\Omega$ onto $B_r^+(0) = \{y\in B_r(0) : y_n>0\}$, since a piece of the plane is still mapped to a piece of a plane. The equation becomes
$$-\sum_{i,j=1}^n\bar a_{ij}(y)u_{y_iy_j} + \sum_{i=1}^n\bar b_i(y)u_{y_i} + \bar c(y)u = \bar f(y), \quad y\in B_r^+(0), \quad (3.31)$$
where the new coefficients are computed from the original ones via the chain rule of differentiation; for instance,
$$\bar a_{ij}(y) = \frac{\partial\psi_i}{\partial x_l}\big(\psi^{-1}(y)\big)\,a_{lk}\big(\psi^{-1}(y)\big)\,\frac{\partial\psi_j}{\partial x_k}\big(\psi^{-1}(y)\big).$$
If necessary, we make a linear transformation so that $\bar a_{ij}(0) = \delta_{ij}$; we may then assume that equation (3.31) is valid for some smaller $r$. Applying the method of frozen coefficients, now with $w(y) = \phi(\frac{2|y|}{r})u(y)$, we get
$$\Delta w(y) = F(y) \ \text{ for } y\in B_r^+(0), \qquad w(y) = 0 \ \text{ for } y\in\partial B_r^+(0) \text{ with } y_n = 0.$$
Now let $\bar w(y)$ and $\bar F(y)$ be the odd extensions of $w(y)$ and $F(y)$ from $B_r^+(0)$ to $B_r(0)$:
$$\bar w(y_1,\cdots,y_{n-1},y_n) = \begin{cases} w(y_1,\cdots,y_{n-1},y_n), & \text{if } y_n\ge 0 \\ -w(y_1,\cdots,y_{n-1},-y_n), & \text{if } y_n<0, \end{cases}$$
and similarly for $\bar F(y)$. Then one can check that
$$\Delta\bar w(y) = \bar F(y), \quad y\in B_r(0).$$
Through the same calculations as in Part 1, we derive the basic boundary estimate
$$\|D^2u\|_{L^p(B_r(x_o)\cap\Omega)} \le C\big(\|f\|_{L^p(B_{2r}(x_o)\cap\Omega)} + \|\nabla u\|_{L^p(B_{2r}(x_o)\cap\Omega)} + \|u\|_{L^p(B_{2r}(x_o)\cap\Omega)}\big)$$
for any $x_o$ on $\partial\Omega$ and for some small radius $r$; the passage from the half ball back to the half ball estimate uses the symmetric extension of $w$ to $\bar w$ from the half ball to the whole ball.
These balls form a covering of the compact set $\partial\Omega$, which thus has a finite subcovering $B_{r_i}(x^i)$, $i = 1,2,\cdots,k$. These balls also cover a neighborhood of $\partial\Omega$ including $\Omega\setminus\Omega_\delta$ for some small $\delta$. Summing the estimates over this covering, and following the same steps as we did for the interior estimates, we get
$$\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \quad (3.32)$$
This establishes the boundary estimates and hence completes the proof of the theorem.

Exercise 3.1.1 Assume that Ω is bounded and $w\in W_0^{1,p}(\Omega)$. Show that if $-\Delta w = F$, then $w = \Gamma * F$.
Hint: (a) Show that $\Delta(\Gamma * F)(x) = 0$ for $x\notin\bar\Omega$.
(b) Show that the identity is true for $w\in C_0^2(R^n)$.
(c) Show that $\Delta(J_\epsilon w) = J_\epsilon(\Delta w)$, and thus $J_\epsilon w = \Gamma * (J_\epsilon F)$. Let $\epsilon\to 0$ to derive $w = \Gamma * F$.

3.2 $W^{2,p}$ Regularity of Solutions

In this section, we will use the a priori estimates established in the previous section to derive the $W^{2,p}$ regularity for weak solutions. Let $L$ be a second order uniformly elliptic operator in divergence form:
$$Lu = -\sum_{i,j=1}^n\big(a_{ij}(x)u_{x_i}\big)_{x_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u. \quad (3.33)$$

Definition 3.2.1 We say that $u\in W_0^{1,p}(\Omega)$ is a weak solution of
$$\begin{cases} Lu = f(x), & x\in\Omega \\ u(x) = 0, & x\in\partial\Omega \end{cases} \quad (3.34)$$
if for any $v\in W_0^{1,q}(\Omega)$,
$$\int_\Omega\Big[\sum_{i,j=1}^n a_{ij}(x)u_{x_i}v_{x_j} + \sum_i b_i(x)u_{x_i}v + c(x)uv\Big]dx = \int_\Omega f(x)v\,dx, \quad (3.35)$$
where $\frac{1}{p}+\frac{1}{q} = 1$.
Remark 3.2.1 Since $C_0^\infty(\Omega)$ is dense in $W_0^{1,q}(\Omega)$, we only need to require that (3.35) be true for all $v\in C_0^\infty(\Omega)$.

The main result of this section is

Theorem 3.2.1 Assume Ω is a bounded domain in $R^n$. Let $L$ be a uniformly elliptic operator defined in (3.33) with $a_{ij}(x)$ Lipschitz continuous and $b_i(x)$ and $c(x)$ bounded. Assume that $u\in W_0^{1,p}(\Omega)$ is a weak solution of (3.34). If $f(x)\in L^p(\Omega)$, then $u\in W^{2,p}(\Omega)$.

Outline of the Proof. The proof of the theorem is quite complex and will be accomplished in two stages.

In stage 1, we first consider the case $p\ge 2$, because one can rather easily show the uniqueness of the weak solution by multiplying both sides of the equation by the solution itself and integrating by parts. As one will see, this uniqueness plays a key role in deriving the regularity. We first prove a $W^{2,p}$ result for the Laplace operator, and then use the "frozen" coefficients method to derive the regularity for the general operator $L$.

In stage 2 (subsection 3.2.2), we consider the case $1<p<2$. The main difficulty lies in the uniqueness of the weak solution. To show the uniqueness, we first prove a $W^{2,p}$ version of the Fredholm Alternative for $p\ge 2$, based on the regularity result of stage 1, which will be used to deduce the existence of solutions of the equation $-\sum_{ij}(a_{ij}(x)w_{x_i})_{x_j} = F(x)$. Then we use this equation, with a proper choice of $F(x)$, to derive a contradiction if there exists a nontrivial solution of the equation $-\sum_{ij}(a_{ij}(x)v_{x_i})_{x_j} = 0$. This proves the uniqueness, and this formulation is more convenient in many applications. After proving the uniqueness, everything else is the same as in stage 1.

The proof of the regularity is based on the following fundamental proposition on the existence, uniqueness, and regularity of solutions of the Laplace equation on a unit ball.

Proposition 3.2.1 Assume $f\in L^p(B_1(0))$ with $p\ge 2$. Then the Dirichlet problem
$$\begin{cases} \Delta u = f(x), & x\in B_1(0) \\ u(x) = 0, & x\in\partial B_1(0) \end{cases} \quad (3.36)$$
possesses a unique solution $u\in W^{2,p}(B_1(0))$ satisfying
$$\|u\|_{W^{2,p}(B_1)} \le C\|f\|_{L^p(B_1)}. \quad (3.37)$$
We will prove this proposition in subsection 3.2.1.
3.2.1 The Case $p\ge 2$

We will prove Proposition 3.2.1 and use it to derive the regularity of weak solutions for the general operator $L$.

Remark 3.2.2 Actually, to derive the regularity, the conditions on the coefficients of $L$ can be weakened. This will use the Regularity Lifting Theorem in Section 3.3, and hence will be deliberated there.

To this end, we need

Lemma 3.2.1 (Better A Priori Estimate in the Presence of Uniqueness.) Assume that
i) for solutions of
$$Lu = f(x), \quad x\in\Omega \quad (3.38)$$
it holds the a priori estimate
$$\|u\|_{W^{2,p}(\Omega)} \le C\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big); \quad (3.39)$$
and
ii) (uniqueness) if $Lu = 0$, then $u = 0$.
Then we have the better a priori estimate for the solutions of (3.38):
$$\|u\|_{W^{2,p}(\Omega)} \le C\|f\|_{L^p(\Omega)}. \quad (3.40)$$

Proof. Suppose the inequality (3.40) is false. Then there exists a sequence of functions $\{f_k\}$ with $\|f_k\|_{L^p} = 1$ and a corresponding sequence of solutions $\{u_k\}$ of
$$Lu_k = f_k(x), \quad x\in\Omega$$
such that $\|u_k\|_{W^{2,p}}\to\infty$ as $k\to\infty$. It follows from the a priori estimate (3.39) that
$$\|u_k\|_{L^p}\to\infty \quad \text{as } k\to\infty.$$
Let
$$v_k = \frac{u_k}{\|u_k\|_{L^p}} \quad \text{and} \quad g_k = \frac{f_k}{\|u_k\|_{L^p}}. \quad (3.41)$$
Then
$$\|v_k\|_{L^p} = 1, \quad \text{and} \quad \|g_k\|_{L^p}\to 0. \quad (3.42)$$
From the equation
$$Lv_k = g_k(x), \quad x\in\Omega,$$
it holds the a priori estimate
$$\|v_k\|_{W^{2,p}} \le C\big(\|v_k\|_{L^p} + \|g_k\|_{L^p}\big). \quad (3.43)$$
(3.42) and (3.43) imply that $\{v_k\}$ is bounded in $W^{2,p}$, and hence there exists a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v$ in $W^{2,p}$. By the compact Sobolev embedding, the same sequence converges strongly to $v$ in $L^p$, and hence $\|v\|_{L^p} = 1$. From (3.41) and (3.42), we arrive at
$$Lv = 0, \quad x\in\Omega.$$
Therefore, by the uniqueness assumption, we must have $v = 0$. This contradicts the fact that $\|v\|_{L^p} = 1$, and therefore completes the proof.

The Proof of Proposition 3.2.1. For convenience, we abbreviate $B_r(0)$ as $B_r$.

For the existence, it is well known that, if $f$ is continuous, then
$$u(x) = \int_{B_1}G(x,y)f(y)\,dy$$
is the solution of (3.36). Here $G(x,y)$ is the Green's function,
$$G(x,y) = \Gamma(x-y) - h(x,y),$$
in which
$$\Gamma(x) = \begin{cases} \frac{1}{n(n-2)\omega_n}\frac{1}{|x|^{n-2}}, & n\ge 3 \\ \frac{1}{2\pi}\ln\frac{1}{|x|}, & n = 2 \end{cases}$$
is the fundamental solution of the Laplace equation as introduced in the definition of the Newtonian potential, and, for $n\ge 3$,
$$h(x,y) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{\big(|x|\,\big|\frac{x}{|x|^2}-y\big|\big)^{n-2}}.$$

To see the uniqueness, assume that both $u$ and $v$ are solutions of (3.36), and let $w = u - v$. Then $w$ weakly satisfies
$$\begin{cases} \Delta w = 0, & x\in B_1 \\ w(x) = 0, & x\in\partial B_1. \end{cases}$$
Multiplying both sides of the equation by $w$ and then integrating on $B_1$, we derive
$$\int_{B_1}|\nabla w|^2\,dx = 0.$$
Hence $\nabla w = 0$ almost everywhere; this, together with the zero boundary condition, implies $w = 0$ almost everywhere.
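The explicit formula for $G$ can be sanity-checked numerically: for $n = 3$ and $|y| = 1$, the identity $|x-y| = |x|\,|\frac{x}{|x|^2}-y|$ makes $G(x,y)$ vanish on the boundary, while $G$ is positive inside. A minimal sketch (illustrative only; the sample points are arbitrary choices):

```python
import numpy as np

def gamma3(z):
    """Fundamental solution of the Laplace equation for n = 3."""
    return 1.0 / (4.0 * np.pi * np.linalg.norm(z))

def green_ball(x, y):
    """Green's function of the unit ball in R^3: Gamma(x-y) - h(x,y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.linalg.norm(x)
    return gamma3(x - y) - gamma3(r * (x / r**2 - y))

x = np.array([0.3, -0.2, 0.1])
rng = np.random.default_rng(1)
y_bdry = rng.standard_normal(3)
y_bdry /= np.linalg.norm(y_bdry)          # a point with |y| = 1
y_int = np.array([-0.1, 0.4, 0.2])        # an interior point distinct from x

print(green_ball(x, y_bdry))   # ~ 0: G vanishes for y on the boundary
print(green_ball(x, y_int))    # > 0: G is positive inside the ball
```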
The function $h(x,\cdot)$ is harmonic in $B_1$; it is added to ensure that $G(x,y)$ vanishes for $y$ on the boundary. Moreover, $h(x,y)$ is smooth when $y$ is away from the boundary (see the exercise below).

Now let $f\in L^p(B_1)$. We have already obtained the needed estimate on the first part,
$$\int_{B_1}\Gamma(x-y)f(y)\,dy,$$
when we worked on the Newtonian potentials. For the second part, we start from a smaller ball $B_{1-\delta}$ for each $\delta>0$. Let
$$u_\delta(x) = \int_{B_1}G(x,y)f_\delta(y)\,dy, \quad \text{where} \quad f_\delta(x) = \begin{cases} f(x), & x\in B_{1-\delta} \\ 0, & \text{elsewhere.} \end{cases} \quad (3.44)$$
Since $h(x,y)$ is smooth when $y$ is away from the boundary, by our result for the Newtonian potentials, $D^2u_\delta$ is in $L^p(B_1)$. Applying the Poincaré inequality, we derive that $u_\delta$ is also in $L^p(B_1)$, and hence it is in $W^{2,p}(B_1)$. Moreover, due to the uniqueness and Lemma 3.2.1, we have the better a priori estimate
$$\|u_\delta\|_{W^{2,p}(B_1)} \le C\|f_\delta\|_{L^p(B_1)}.$$
Pick a sequence $\{\delta_i\}$ tending to zero. Then the corresponding solutions $\{u_{\delta_i}\}$ form a Cauchy sequence in $W^{2,p}(B_1)$, because
$$\|u_{\delta_i} - u_{\delta_j}\|_{W^{2,p}(B_1)} \le C\|f_{\delta_i} - f_{\delta_j}\|_{L^p(B_1)}\to 0, \quad \text{as } i,j\to\infty.$$
Let
$$u_o = \lim_{i\to\infty}u_{\delta_i} \quad \text{in } W^{2,p}(B_1).$$
Then $u_o$ is a solution of (3.36), and (3.37) holds for $u_o$. This completes the proof of Proposition 3.2.1.

Exercise 3.2.1 Show that if $|y|\le 1-\delta$ for some $\delta>0$, then there exists a constant $c_o>0$ such that
$$|x|\,\Big|\frac{x}{|x|^2}-y\Big| \ge c_o, \quad \forall x\in B_1(0).$$

Proof of the Regularity Theorem 3.2.1. The general frame of the proof is similar to that of the $W^{2,p}$ a priori estimates in the previous section. The key difference is that in the a priori estimate, we pre-assumed that $u$ is a strong solution in $W^{2,p}$, while here we only assume that $u$ is a weak solution in $W^{1,p}$. After reading the proof of the a priori estimates, the reader may notice that if one can establish the $W^{2,p}$ regularity in a small
neighborhood of each point in Ω, then by an entirely similar argument as in obtaining the a priori estimates, one can derive the regularity on the whole of Ω.

Let $u$ be a $W_0^{1,p}(\Omega)$ weak solution of (3.34). As in the proof of the a priori estimates, we define the cutoff function
$$\phi(s) = \begin{cases} 1, & \text{if } s\le 1 \\ 0, & \text{if } s\ge 2, \end{cases}$$
and we omit the summation signs, abbreviating $\sum_{i,j=1}^n a_{ij}$ as $a_{ij}$, and so on. For any $x_o\in\Omega_{2\delta} := \{x\in\Omega : \mathrm{dist}(x,\partial\Omega)\ge 2\delta\}$, let
$$\eta(x) = \phi\Big(\frac{|x-x_o|}{\delta}\Big) \quad \text{and} \quad w(x) = \eta(x)u(x).$$
Then $w$ is supported in $B_{2\delta}(x_o)$. With a change of coordinates, we may assume that $a_{ij}(x_o) = \delta_{ij}$. By (3.35) and a straightforward calculation, one can verify that, for any $v\in C_0^\infty(B_{2\delta}(x_o))$,
$$\int_{B_{2\delta}(x_o)}a_{ij}(x_o)w_{x_i}v_{x_j}\,dx = \int_{B_{2\delta}(x_o)}[a_{ij}(x_o)-a_{ij}(x)]w_{x_i}v_{x_j}\,dx + \int_{B_{2\delta}(x_o)}F(x)v\,dx,$$
where
$$F(x) = \eta f(x) - \big(a_{ij}(x)\eta_{x_i}u\big)_{x_j} - a_{ij}(x)\eta_{x_j}u_{x_i} - \eta\big(b_i(x)u_{x_i} + c(x)u(x)\big).$$
Obviously, $F(x)$ is in $L^p(B_{2\delta}(x_o))$. In other words, $w$ is a weak solution of
$$\begin{cases} a_{ij}(x_o)w_{x_ix_j} = \big([a_{ij}(x_o)-a_{ij}(x)]w_{x_i}\big)_{x_j} - F(x), & x\in B_{2\delta}(x_o) \\ w(x) = 0, & x\in\partial B_{2\delta}(x_o), \end{cases} \quad (3.45)$$
that is, of
$$\Delta w = \big([a_{ij}(x_o)-a_{ij}(x)]w_{x_i}\big)_{x_j} - F(x).$$
For any $v\in W^{2,p}(B_{2\delta}(x_o))$, since $a_{ij}$ is Lipschitz continuous, one can verify that
$$\big([a_{ij}(x_o)-a_{ij}(x)]v_{x_i}\big)_{x_j}\in L^p(B_{2\delta}(x_o)).$$
By virtue of Proposition 3.2.1, the operator $\Delta$ is invertible. Consider the equation in $W^{2,p}(B_{2\delta}(x_o))$:
$$v = Kv + \Delta^{-1}F, \quad (3.46)$$
where
$$(Kv)(x) = \Delta^{-1}\big([a_{ij}(x_o)-a_{ij}(x)]v_{x_i}\big)_{x_j}. \quad (3.47)$$
Under the assumption that the $a_{ij}(x)$ are Lipschitz continuous, one can verify (see the exercise below) that, for sufficiently small $\delta$, the operator $K$ is a contracting map from $W^{2,p}(B_{2\delta}(x_o))$ to itself; that is, there exists a constant $0<\gamma<1$ such that
$$\|K\phi - K\psi\|_{W^{2,p}(B_{2\delta}(x_o))} \le \gamma\|\phi-\psi\|_{W^{2,p}(B_{2\delta}(x_o))}.$$
Therefore, there exists a unique solution $v$ of equation (3.46) in $W^{2,p}(B_{2\delta}(x_o))$. This $v$ is also a weak solution of equation (3.45). Since, for $\delta$ sufficiently small, one can show (as an exercise) the uniqueness of the weak solution of (3.45), we must have $w = v$, and thus conclude that $w$ is also in $W^{2,p}(B_{2\delta}(x_o))$. This completes stage 1 of the proof of Theorem 3.2.1.

Exercise 3.2.2 Show that, for $\delta$ sufficiently small, $K$ is a contracting map from $W^{2,p}(B_{2\delta}(x_o))$ to itself.

Exercise 3.2.3 Prove the uniqueness of the $W^{2,p}(B_{2\delta}(x_o))$ weak solution of equation (3.46) when $p\ge 2$.

3.2.2 The Case $1<p<2$

Lemma 3.2.2 Let $p\ge 2$. Assume that each $a_{ij}(x)$ is Lipschitz continuous. If $u = 0$ whenever $Lu = 0$ (in the sense of a $W_0^{1,p}(\Omega)$ weak solution), then for any $f\in L^p(\Omega)$, there exists a unique $u\in W_0^{1,p}(\Omega)\cap W^{2,p}(\Omega)$ such that $Lu = f(x)$ and
$$\|u\|_{W^{2,p}(\Omega)} \le C\|f\|_{L^p}. \quad (3.48)$$
This is actually a $W^{2,p}$ version of the Fredholm Alternative.

Proof. Let $\{f_k\}\subset C_0^\infty(\Omega)$ be a sequence of smooth approximations of $f(x)$. For each $k$, consider the equation
$$(L+\alpha)w - \alpha w = f_k, \quad x\in\Omega. \quad (3.49)$$
From the previous chapter (the Lax-Milgram Theorem), one can see that, for $\alpha$ sufficiently large and any given $g\in L^2(\Omega)$, there exists a unique $W_0^{1,2}(\Omega)$ weak solution $v$ of the equation
$$(L+\alpha)v = g, \quad x\in\Omega.$$
From the Regularity Theorem in this chapter, we have $v\in W^{2,2}(\Omega)$. Therefore, we can write $v = (L+\alpha)^{-1}g$, where $(L+\alpha)^{-1}$ is a bounded linear operator from $L^2(\Omega)$ to $W^{2,2}(\Omega)$.
Equation (3.49) is equivalent to
$$u_k - \alpha(L+\alpha)^{-1}u_k = (L+\alpha)^{-1}f_k,$$
that is,
$$(I-K)w = F, \quad (3.50)$$
where $I$ is the identity operator, $K = \alpha(L+\alpha)^{-1}$, and $F = (L+\alpha)^{-1}f_k$. One can see that $K$ is a compact operator from $W_0^{1,2}(\Omega)$ into itself, because of the compact embedding of $W^{2,2}$ into $W^{1,2}$. Now we can apply the Fredholm Alternative: equation (3.50) possesses a unique solution if and only if the corresponding homogeneous equation $(I-K)w = 0$ has only the trivial solution. The latter is just the assumption of the theorem; hence, for each $k$, equation (3.50) possesses a unique solution $u_k$ in $W_0^{1,2}(\Omega)$.

Since $f_k$ is in $L^p(\Omega)$ for any $p$, we conclude from Theorem 3.2.1 that $u_k$ is in $W^{2,2}(\Omega)$; and furthermore, since both $\alpha(L+\alpha)^{-1}u_k$ and $(L+\alpha)^{-1}f_k$ are in $W^{2,p}(\Omega)$, we derive immediately that $u_k$ is also in $W^{2,p}(\Omega)$. Moreover, due to the uniqueness, we have the improved a priori estimate (see Lemma 3.2.1)
$$\|u_k\|_{W^{2,p}} \le C\|f_k\|_{L^p}. \quad (3.51)$$
For any integers $i$ and $j$,
$$L(u_i-u_j) = f_i(x)-f_j(x), \quad x\in\Omega,$$
and it follows that
$$\|u_i-u_j\|_{W^{2,p}} \le C\|f_i-f_j\|_{L^p}\to 0, \quad \text{as } i,j\to\infty.$$
This implies that $\{u_k\}$ is a Cauchy sequence in $W^{2,p}$, and hence it converges to an element $u$ in $W^{2,p}(\Omega)$:
$$u_k\to u \quad \text{in } W_0^{1,p}(\Omega)\cap W^{2,p}(\Omega), \quad \text{as } k\to\infty.$$
This function $u$ is the desired solution. When $p\ge 2$, we have already proved the uniqueness; this completes the proof of the Lemma.

Proposition 3.2.2 (Uniqueness of $W_0^{1,p}$ Weak Solutions for $1<p<\infty$.) Let Ω be a bounded domain in $R^n$. Assume that the $a_{ij}(x)$ are Lipschitz continuous. If $u$ is a $W_0^{1,p}(\Omega)$ weak solution of
$$\big(a_{ij}(x)u_{x_i}\big)_{x_j} = 0, \quad x\in\Omega, \quad (3.52)$$
then $u = 0$ almost everywhere.

Proof. When $p\ge 2$, we have proved the uniqueness. Now assume $1<p<2$, and suppose there exists a nontrivial weak solution $u$. Let $\{u_k\}$ be a sequence of $C_0^\infty(\Omega)$ functions such that
$$u_k\to u \quad \text{in } W_0^{1,p}(\Omega).$$
For each $k$, consider the equation
$$\big(a_{ij}(x)v_{x_i}\big)_{x_j} = F_k(x) := \nabla\cdot\Big(\frac{|\nabla u_k|^{p/q}\,\nabla u_k}{1+|\nabla u_k|^2}\Big), \quad (3.53)$$
where $\frac{1}{p}+\frac{1}{q} = 1$. Obviously, $F_k(x)$ is in $L^q(\Omega)$; and since $p<2$, $q$ is greater than 2. Lemma 3.2.2 therefore guarantees the existence of a solution $v_k$ of equation (3.53). Through integration by parts several times, one can verify that
$$0 = \int_\Omega v_k\,\big(a_{ij}(x)u_{x_i}\big)_{x_j}\,dx = \int_\Omega\frac{|\nabla u_k|^{p/q}\,\nabla u_k}{1+|\nabla u_k|^2}\cdot\nabla u\,dx. \quad (3.54)$$
Since $u_k\to u$ in $W^{1,p}$, it is easy to see that
$$\frac{|\nabla u_k|^{p/q}\,\nabla u_k}{1+|\nabla u_k|^2} \to \frac{|\nabla u|^{p/q}\,\nabla u}{1+|\nabla u|^2} \quad \text{in } L^q(\Omega).$$
Passing to the limit in (3.54), we obtain
$$0 = \int_\Omega\frac{|\nabla u|^{p/q+2}}{1+|\nabla u|^2}\,dx.$$
Taking into account that $u\in W_0^{1,p}(\Omega)$, we finally deduce that $u = 0$ almost everywhere, a contradiction. This completes the proof of the proposition.

After proving the uniqueness, the rest of the arguments are entirely the same as in stage 1. This completes stage 2 of the proof of Theorem 3.2.1.

3.2.3 Other Useful Results Concerning Existence, Uniqueness, and Regularity

Theorem 3.2.2 Given any pair $p, q>1$ and $f\in L^q(\Omega)$, if $u\in W_0^{1,p}(\Omega)$ is a weak solution of
$$Lu = f(x), \quad x\in\Omega, \quad (3.55)$$
then $u\in W^{2,q}(\Omega)$, and it holds the a priori estimate
$$\|u\|_{W^{2,q}(\Omega)} \le C\big(\|u\|_{L^q} + \|f\|_{L^q}\big). \quad (3.56)$$

Proof. Case i) If $q\le p$, then obviously $u$ is also a $W_0^{1,q}(\Omega)$ weak solution. Since we already have the uniqueness of the $W_0^{1,q}$ weak solution, the results follow from the Regularity Theorem 3.2.1 and the $W^{2,q}$ a priori estimates.

Case ii) If $q>p$, then we use the Sobolev embedding $W^{2,p}\hookrightarrow W^{1,p_1}$, with $p_1 = \frac{np}{n-p}$, together with the fact that, by Theorem 3.2.1, $u$ is a $W^{2,p}$, hence a $W^{1,p_1}$, weak solution. If $p_1\ge q$, we are done by Case i). If $p_1<q$, we repeat this process; after finitely many steps, we will arrive at a $p_k\ge q$, and the results again follow from the Regularity Theorem 3.2.1 and the $W^{2,q}$ a priori estimates.
3. If the power p is less than the n+2 critical number n−2 . Let 1 < p < ∞.3 Regularity Lifting 95 From this Theorem. n+2 For each ﬁxed p < n−2 .57) Then by Sobolev Embedding. write p= for some δ > 0. For 2 ≤ p < ∞. then the regularity of u can be enhanced repeatedly through the equation until we reach that u ∈ C α (Ω). the proof of this theorem is entirely the same as for Lemma 3.57) boosts the solution u to W 2. Now.3 Regularity Lifting 3. then for 1.2. we have stated this theorem in Lemma 3. u ∈ L n−2 (Ω).2. we can derive immediately the equivalence between 1.q ii) If u ∈ W0 (Ω) is a weak solution of Lu = 0. (n + 2) − δ(n − 2) 2n 2n n+2 −δ n−2 .p (Ω).2. Theorem 3. such that Lu = f (x) and u W 2. for 1 < p < 2.2.1 For any pair p. 2n Since u ∈ L n−2 (Ω). up ∈ L p(n−2) (Ω) = L (n+2)−δ(n−2) (Ω).3 (W 2.p (Ω) ≤C f Lp .q1 (Ω) with q1 = By the Sobolev Embedding 2n .q 1.p If u = 0 whenever Lu = 0 (in the sense of W0 (Ω) weak solution). 3.2.p any f ∈ Lp (Ω). Finally the “Schauder Estimate” will lift u to C 2+α (Ω) and hence to be a classical solution. 1. then u = 0.p the uniqueness of weak solutions in any two spaces W0 and W0 : Corollary 3. there exist a unique u ∈ W0 (Ω) ∩ W 2.p i) If u ∈ W0 (Ω) is a weak solution of Lu = 0. 1.2. the following are equivalent 1. The equation (3.3.p regularity in this case.p Version of the Fredholm Alternative). We call this the “Bootstrap Method” as will be illustrated below. after we obtained the W 2.1 Bootstrap Assume that u ∈ H 1 (Ω) is a weak solution of − u = up (x) . x ∈ Ω 2n (3. q > 1. then u = 0.
The equation (3.57) then boosts the solution $u$ to $W^{2,q_1}(\Omega)$, with
$$q_1 = \frac{2n}{(n+2)-\delta(n-2)}.$$
By the Sobolev embedding
$$W^{2,q_1}(\Omega)\hookrightarrow L^{\frac{nq_1}{n-2q_1}}(\Omega),$$
we have $u\in L^{s_1}(\Omega)$ with
$$s_1 = \frac{nq_1}{n-2q_1} = \frac{2n}{(1-\delta)(n-2)} = \frac{2n}{n-2}\cdot\frac{1}{1-\delta}.$$
The integrability power of $u$ has been amplified by $\frac{1}{1-\delta}$ ($>1$) times. Now $u^p\in L^{q_2}(\Omega)$ with
$$q_2 = \frac{s_1}{p} = \frac{2n}{(1-\delta)[n+2-\delta(n-2)]},$$
and hence, through the equation, $u\in W^{2,q_2}(\Omega)$. If $2q_2\ge n$, then $u\in L^q(\Omega)$ for any $q$, and indeed $u\in C^\alpha(\Omega)$, by the Sobolev embedding. If $2q_2<n$, then $u\in L^{s_2}(\Omega)$, with
$$s_2 = \frac{nq_2}{n-2q_2} = \frac{4n}{(1-\delta)[n+2-\delta(n-2)]-4}.$$
It is easy to verify that
$$s_2 \ge \frac{2n}{n-2}\cdot\frac{1}{1-\delta}\cdot\frac{1}{1-\delta} = \frac{s_1}{1-\delta};$$
the integrability power of $u$ has again been amplified by at least $\frac{1}{1-\delta}$ times. Continuing this process, one can see that, after finitely many steps, we will boost $u$ to $L^q(\Omega)$ for any $q$, and hence, by the $W^{2,q}$ estimates and the Sobolev embedding, $u\in C^\alpha(\Omega)$ for some $0<\alpha<1$. Finally, by the Schauder estimate, $u\in C^{2+\alpha}(\Omega)$, and therefore it is a classical solution.

However, if $p = \frac{n+2}{n-2}$, the so-called "critical power," then $\delta = 0$, and the integrability power of the solution $u$ cannot be boosted this way. To deal with this situation, we introduce a "Regularity Lifting Method" in the next subsection.

3.3.2 Regularity Lifting Theorem

Here, we present a simple method to boost the regularity of solutions. It has been used extensively in various forms in the authors' previous works. The essence of the approach is well known in the analysis community. However, the version we present here contains some new developments. It is much more general and very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, to obtain regularities. Essentially, it is based on the following Regularity Lifting Theorem.
Let $Z$ be a given vector space, and let $\|\cdot\|_X$ and $\|\cdot\|_Y$ be two norms on $Z$. Define a new norm $\|\cdot\|_Z$ by
$$\|\cdot\|_Z = \big(\|\cdot\|_X^p + \|\cdot\|_Y^p\big)^{1/p}.$$
Here, one can choose $1\le p\le\infty$ according to what one needs. For simplicity, we assume that $Z$ is complete with respect to the norm $\|\cdot\|_Z$. Let $X$ and $Y$ be the completions of $Z$ under $\|\cdot\|_X$ and $\|\cdot\|_Y$, respectively. It is easy to see that $Z = X\cap Y$.

Theorem 3.3.1 Let $T$ be a contraction map from $X$ into itself and from $Y$ into itself. Assume that $f\in X$, and that there exists a function $g\in Z$ such that $f = Tf + g$ in $X$. Then $f$ also belongs to $Z$.

Proof. Step 1. First show that $T: Z\to Z$ is a contraction. Since $T$ is a contraction on $X$ and on $Y$, there exist constants $\theta_1, \theta_2$ with $0<\theta_1<1$ and $0<\theta_2<1$ such that, for any $h_1, h_2\in Z$,
$$\|Th_1-Th_2\|_X \le \theta_1\|h_1-h_2\|_X \quad \text{and} \quad \|Th_1-Th_2\|_Y \le \theta_2\|h_1-h_2\|_Y.$$
Let $\theta = \max\{\theta_1,\theta_2\}$. Then
$$\|Th_1-Th_2\|_Z^p = \|Th_1-Th_2\|_X^p + \|Th_1-Th_2\|_Y^p \le \theta_1^p\|h_1-h_2\|_X^p + \theta_2^p\|h_1-h_2\|_Y^p \le \theta^p\|h_1-h_2\|_Z^p.$$

Step 2. Since $T: Z\to Z$ is a contraction, given $g\in Z$, we can find a solution $h\in Z$ such that $h = Th + g$. Similarly, $T: X\to X$ is also a contraction and $g\in Z\subset X$, so the equation $x = Tx + g$ has a unique solution in $X$. Since both $h$ and $f$ are solutions of $x = Tx + g$ in $X$, we must have $f = h\in Z$. This completes the proof.

3.3.3 Applications to PDEs

Now, we explain how the Regularity Lifting Theorem proved in the previous subsection can be used to boost the regularity of weak solutions involving the critical exponent:
$$-\Delta u = u^{\frac{n+2}{n-2}}. \quad (3.58)$$
Here, we still assume that Ω is a smooth bounded domain in $R^n$ with $n\ge 3$. Let $u\in H_0^1(\Omega)$ be a weak solution of equation (3.58). Then, by the Sobolev embedding, $u\in L^{\frac{2n}{n-2}}(\Omega)$.
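Before turning to the PDE application, the two-norm contraction mechanism of Theorem 3.3.1 can be seen in a finite-dimensional toy model (purely illustrative; in $R^2$ all norms are equivalent, so only the mechanism is of interest, not the conclusion): a linear map that contracts in both $\|\cdot\|_1$ and $\|\cdot\|_\infty$ also contracts in the combined norm $\|\cdot\|_Z$, and the solution of $x = Tx + g$ is produced by Picard iteration, as in Step 2 of the proof.

```python
import numpy as np

# Hypothetical sample data: T x = A x with ||A||_1 = ||A||_inf = 0.4 < 1,
# so T contracts in both norms, hence also in the combined Z-norm.
A = np.array([[0.3, 0.1],
              [0.1, 0.3]])
g = np.array([1.0, 2.0])

def z_norm(v, p=2):
    return (np.linalg.norm(v, 1) ** p + np.linalg.norm(v, np.inf) ** p) ** (1.0 / p)

x = np.zeros(2)
for _ in range(200):
    x = A @ x + g                        # Picard iteration x_{k+1} = T x_k + g

exact = np.linalg.solve(np.eye(2) - A, g)

# Contraction ratio of T in the Z-norm on a few sample directions
# (a spot check, not the full operator norm):
samples = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
           np.array([1.0, 1.0]), np.array([1.0, -1.0])]
theta = max(z_norm(A @ v) / z_norm(v) for v in samples)

print(np.allclose(x, exact))   # the iteration converges to the fixed point
print(theta < 1)               # T shrinks the Z-norm on the sampled directions
```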
We can split the right hand side of (3.58) into two parts:
u^{(n+2)/(n−2)} = u^{4/(n−2)} · u := a(x) u,
with a(x) = u^{4/(n−2)} ∈ L^{n/2}(Ω). Hence, more generally, we consider the regularity of the weak solution of the following equation
−∆u = a(x) u + b(x).    (3.59)

Theorem 3.3.2 Assume that a(x), b(x) ∈ L^{n/2}(Ω). Let u be any weak solution of equation (3.59) in H^1_0(Ω). Then u is in L^p(Ω) for any 1 ≤ p < ∞.

Remark 3.3.1 Even in the best situation, when a(x) ≡ 0, we can only have u ∈ W^{2,n/2}(Ω) via the W^{2,p} estimate, and hence derive that u ∈ L^p(Ω) for any p < ∞ by the Sobolev embedding.

From the above theorem, we obtain

Corollary 3.3.1 If u is a H^1_0(Ω) weak solution of equation (3.58), then u ∈ C^{2,α}(Ω), and hence it is a classical solution.

In fact, by Theorem 3.3.2, we first conclude that u is in L^q(Ω) for any 1 < q < ∞. Then by our Regularity Theorem in Section 3.2, u is in W^{2,q}(Ω). This implies that u ∈ C^{1,α}(Ω) via the Sobolev embedding. Finally, from the classical C^{2,α} regularity, we arrive at u ∈ C^{2,α}(Ω).

Proof of Theorem 3.3.2. Let G(x, y) be the Green's function of −∆ in Ω. Then obviously
0 < G(x, y) < Γ(x − y) := C_n / |x − y|^{n−2},
and it follows that, for any q ∈ (1, n/2),
‖Γ ∗ f‖_{L^{nq/(n−2q)}(R^n)} ≤ C ‖f‖_{L^q(Ω)}.
Here, we may extend f to be zero outside Ω when necessary. Define
(Tf)(x) = (−∆)^{−1} f(x) = ∫_Ω G(x, y) f(y) dy.
For a positive number A, define
a_A(x) = a(x), if |a(x)| ≥ A; a_A(x) = 0, otherwise,
and
a_B(x) = a(x) − a_A(x).
Let
(T_A u)(x) = ∫_Ω G(x, y) a_A(y) u(y) dy.
Then equation (3.59) can be written as
u(x) = (T_A u)(x) + F_A(x),
where
F_A(x) = ∫_Ω G(x, y) [ a_B(y) u(y) + b(y) ] dy.
We will show that, for any n/(n−2) < p < ∞,
i) T_A is a contracting operator from L^p(Ω) to L^p(Ω) for A large, and
ii) F_A(x) is in L^p(Ω).
Then, by the "Regularity Lifting Theorem", we derive immediately that u ∈ L^p(Ω) for any 1 ≤ p < ∞.

i) The Estimate of the Operator T_A.
For any n/(n−2) < p < ∞, there is a q, 1 < q < n/2, such that p = nq/(n−2q), and then by the Hölder inequality, we derive
‖T_A u‖_{L^p(Ω)} ≤ ‖Γ ∗ (a_A u)‖_{L^p(R^n)} ≤ C ‖a_A u‖_{L^q(Ω)} ≤ C ‖a_A‖_{L^{n/2}(Ω)} ‖u‖_{L^p(Ω)}.
Since a(x) ∈ L^{n/2}(Ω), one can choose a large number A, such that
C ‖a_A‖_{L^{n/2}(Ω)} ≤ 1/2,
and hence arrive at
‖T_A u‖_{L^p(Ω)} ≤ (1/2) ‖u‖_{L^p(Ω)}.
That is, T_A : L^p(Ω) → L^p(Ω) is a contracting operator.

ii) The Integrability of F_A(x).
(a) First consider
F_A^1(x) := ∫_Ω G(x, y) b(y) dy.
For any n/(n−2) < p < ∞, choose 1 < q < n/2 such that p = nq/(n−2q). Extending b(x) to be zero outside Ω, we have
‖F_A^1‖_{L^p(Ω)} ≤ ‖Γ ∗ b‖_{L^p(R^n)} ≤ C ‖b‖_{L^q(Ω)} < ∞.
Hence F_A^1(x) ∈ L^p(Ω), for any 1 ≤ p < ∞.
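The key point that ‖a_A‖_{L^{n/2}} can be made small is just the absolute continuity of the integral: a_A keeps only the set where |a| ≥ A, and that set shrinks as A grows. A hypothetical one-dimensional stand-in (ours, not the book's: a(x) = x^{−1/4} on (0, 1), with the square integral playing the role of the L^{n/2} norm) makes this visible numerically.

```python
def aA_norm_sq(A):
    """Integral of a(x)^2 over the cut set {a >= A}, for a(x) = x**(-0.25)
    on (0, 1).  Here {a >= A} = (0, A**-4], and the exact value is 2/A**2;
    the midpoint rule below just confirms it shrinks to 0 as A grows."""
    cut = min(1.0, A ** -4.0)
    m = 200000
    h = cut / m
    return sum(((i + 0.5) * h) ** -0.5 for i in range(m)) * h

for A in (1, 10, 100):
    print(A, aA_norm_sq(A))     # roughly 2/A^2: about 2.0, 0.02, 0.0002
```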
(b) Then estimate
F_A^2(x) = ∫_Ω G(x, y) a_B(y) u(y) dy.
By the boundedness of a_B(x), we have
‖F_A^2‖_{L^p(Ω)} ≤ C ‖u‖_{L^q(Ω)},   p = nq/(n−2q).
Noticing that u ∈ L^q(Ω) for any 1 < q ≤ 2n/(n−2), and that p = 2n/(n−6) when q = 2n/(n−2), we conclude that F_A(x) ∈ L^p(Ω), for the following values of p and dimension n:
1 < p < ∞ when 3 ≤ n ≤ 6;
1 < p < 2n/(n−6) when n > 6.
Applying the Regularity Lifting Theorem, from the starting point where u ∈ L^{2n/(n−2)}(Ω), we arrive at
u ∈ L^p(Ω), for any 1 < p < ∞ if 3 ≤ n ≤ 6;
u ∈ L^p(Ω), for any 1 < p < 2n/(n−6) if n > 6.
The only thing that prevents us from reaching the full range of p is the estimate of F_A^2(x). Now from this point on, through an entirely similar argument as above, we will reach
u ∈ L^p(Ω), for any 1 < p < ∞ if 3 ≤ n ≤ 10;
u ∈ L^p(Ω), for any 1 < p < 2n/(n−10) if n > 10.
At each step, we add 4 to the dimension n for which
u ∈ L^p(Ω), 1 ≤ p < ∞,
holds. Continuing this way, we prove our theorem for all dimensions. This completes the proof of the Theorem.

Now we will use a similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem. Let L be a second order uniformly elliptic operator in divergence form
Lu = −Σ_{i,j=1}^n ( a_{ij}(x) u_{x_i} )_{x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u.    (3.60)
We will prove
Theorem 3.3.3 (W^{2,p} Regularity under Weaker Conditions) Let Ω be a bounded domain in R^n and 1 < p < ∞. Assume that
i) a_{ij}(x) ∈ W^{1,n+δ}(Ω) for some δ > 0;
ii) b_i(x) ∈ L^n(Ω) if 1 < p < n;  b_i(x) ∈ L^{n+δ}(Ω) if p = n;  b_i(x) ∈ L^p(Ω) if n < p < ∞;
iii) c(x) ∈ L^{n/2}(Ω) if 1 < p < n/2;  c(x) ∈ L^{n/2+δ}(Ω) if p = n/2;  c(x) ∈ L^p(Ω) if n/2 < p < ∞.
Suppose f ∈ L^p(Ω) and u is a W^{1,p}_0(Ω) weak solution of
Lu = f(x),  x ∈ Ω.    (3.61)
Then u is in W^{2,p}(Ω).

Proof. Let
L_o u := −Σ_{i,j=1}^n ( a_{ij}(x) u_{x_i} )_{x_j}.
By the Proposition in Section 3.2, we know that the equation L_o u = 0 has only the trivial solution, and it follows from the Theorem there that the operator L_o is invertible. Denote its inverse by L_o^{−1}; then L_o^{−1} is a bounded linear operator from L^p(Ω) to W^{2,p}(Ω).

For each i and each positive number A, define
b_{iA}(x) = b_i(x), if |b_i(x)| ≥ A; b_{iA}(x) = 0, elsewhere,
and let
b_{iB}(x) := b_i(x) − b_{iA}(x).
Similarly for c_A(x) and c_B(x). Now we can rewrite equation (3.61) as
−Σ_{i,j=1}^n ( a_{ij}(x) u_{x_i} )_{x_j} = −( Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u ) + f(x),
that is, as
u = T_A u + F_A(u, x),    (3.62)
where
T_A u = −L_o^{−1}( Σ_i b_{iA}(x) u_{x_i} + c_A(x) u ),
and
F_A(u, x) = −L_o^{−1}( Σ_i b_{iB}(x) u_{x_i} + c_B(x) u ) + L_o^{−1} f(x).

We first show that F_A(u, ·) ∈ W^{2,p}(Ω) for any u ∈ W^{1,p}(Ω). Since b_{iB}(x) and c_B(x) are bounded, one can easily verify that
Σ_i b_{iB}(x) u_{x_i} + c_B(x) u ∈ L^p(Ω),  ∀ u ∈ W^{1,p}(Ω).    (3.63)
Taking into account that L_o^{−1} is a bounded operator from L^p(Ω) to W^{2,p}(Ω) and that f ∈ L^p(Ω), this implies F_A(u, ·) ∈ W^{2,p}(Ω).

Then we prove that T_A is a contracting operator from W^{2,p}(Ω) to W^{2,p}(Ω). In fact, by the Hölder inequality and the Sobolev embedding from W^{2,p} to W^{1,q}, one can see that, for any v ∈ W^{2,p}(Ω),
‖b_{iA} Dv‖_{L^p} ≤ C · ‖b_{iA}‖_{L^n} ‖v‖_{W^{2,p}}, if 1 < p < n;
‖b_{iA} Dv‖_{L^p} ≤ C · ‖b_{iA}‖_{L^{n+δ}} ‖v‖_{W^{2,p}}, if p = n;
‖b_{iA} Dv‖_{L^p} ≤ C · ‖b_{iA}‖_{L^p} ‖v‖_{W^{2,p}}, if n < p < ∞;    (3.64)
and
‖c_A v‖_{L^p} ≤ C · ‖c_A‖_{L^{n/2}} ‖v‖_{W^{2,p}}, if 1 < p < n/2;
‖c_A v‖_{L^p} ≤ C · ‖c_A‖_{L^{n/2+δ}} ‖v‖_{W^{2,p}}, if p = n/2;
‖c_A v‖_{L^p} ≤ C · ‖c_A‖_{L^p} ‖v‖_{W^{2,p}}, if n/2 < p < ∞.    (3.65)
Under the conditions of the theorem, the first norms on the right hand side of (3.64) and (3.65), e.g. ‖b_{iA}‖_{L^n} and ‖c_A‖_{L^{n/2}}, can be made arbitrarily small if A is sufficiently large. Hence, for any ε > 0, there exists an A > 0, such that
‖Σ_i b_{iA}(x) v_{x_i} + c_A(x) v‖_{L^p} ≤ ε ‖v‖_{W^{2,p}},  ∀ v ∈ W^{2,p}(Ω).
Since L_o^{−1} is a bounded operator from L^p(Ω) to W^{2,p}(Ω), we deduce that T_A is a contraction mapping from W^{2,p}(Ω) to W^{2,p}(Ω). It then follows from the Contraction Mapping Theorem that there exists a v ∈ W^{2,p}(Ω), such that
v = T_A v + F_A(u, x).
Now u and v satisfy the same equation. By uniqueness, we must have u = v. Therefore we derive that u ∈ W^{2,p}(Ω). This completes the proof of the theorem.
3.3.4 Applications to Integral Equations

As another application of the Regularity Lifting Theorem, we consider the following integral equation
u(x) = ∫_{R^n} u(y)^{(n+α)/(n−α)} / |x − y|^{n−α} dy,  x ∈ R^n,    (3.66)
where α is any real number satisfying 0 < α < n. It arises as an Euler–Lagrange equation for a functional under a constraint in the context of the Hardy–Littlewood–Sobolev inequalities. It is also closely related to the following family of semilinear partial differential equations
(−∆)^{α/2} u = u^{(n+α)/(n−α)},  x ∈ R^n.    (3.67)
Actually, one can prove (see [CLO]) that the integral equation (3.66) and the PDE (3.67) are equivalent. In the special case α = 2, (3.67) becomes
−∆u = u^{(n+2)/(n−2)},  x ∈ R^n,  n ≥ 3,
which is the well known Yamabe equation.

More generally, we consider the following equation
u(x) = ∫_{R^n} K(y) |u(y)|^{p−1} u(y) / |x − y|^{n−α} dy,  x ∈ R^n,    (3.68)
for some 1 < p < ∞. In the context of the Hardy–Littlewood–Sobolev inequality, it is natural to start with u ∈ L^{2n/(n−α)}(R^n). We will use the Regularity Lifting Theorem to boost u to L^q(R^n) for any 1 < q < ∞, and hence to L^∞(R^n). We prove

Theorem 3.3.4 Let u be a solution of (3.68). Assume that u ∈ L^{q_o}(R^n) for some q_o > n/(n−α), and that
∫_{R^n} | K(y) u(y)^{p−1} |^{n/α} dy < ∞,    (3.69)
|K(x)| ≤ M  and  u ∈ L^{n(p−1)/α}(R^n).    (3.70)
Then u is in L^q(R^n) for any 1 < q < ∞.

Remark 3.3.2
i) If p > 1 and
|K(y)| ≤ C (1 + |y|)^{γ} for some γ < α,    (3.71)
then conditions (3.69) and (3.70) are satisfied.
ii) The last condition in (3.71) is somewhat sharp, in the sense that if it is violated, then equation (3.68) with K(x) ≡ 1 possesses singular solutions such as
u(x) = c / |x|^{α/(p−1)}.

Proof of Theorem 3.3.4. Define the linear operator
T_v w = ∫_{R^n} K(y) |v(y)|^{p−1} w(y) / |x − y|^{n−α} dy.
For any real number a > 0, define
u_a(x) = u(x), if |u(x)| > a, or if |x| > a;  u_a(x) = 0, otherwise,
and let u_b(x) = u(x) − u_a(x). Then, since u satisfies equation (3.68), one can verify that u_a satisfies the equation
u_a = T_{u_a} u_a + g(x)    (3.72)
with the function
g(x) = ∫_{R^n} K(y) |u_b(y)|^{p−1} u_b(y) / |x − y|^{n−α} dy − u_b(x).
Under the second part of the condition (3.70), it is obvious that g(x) ∈ L^∞ ∩ L^{q_o}.

Then write H(x) = K(x) |u_a(x)|^{p−1}. For any q > n/(n−α), we first apply the Hardy–Littlewood–Sobolev inequality to obtain
‖T_{u_a} w‖_{L^q} ≤ C(α, n, q) ‖K |u_a|^{p−1} w‖_{L^{nq/(n+αq)}}.    (3.73)
Then we apply the Hölder inequality to the right hand side of the above inequality:
∫_{R^n} | H(y) w(y) |^{nq/(n+αq)} dy ≤ ( ∫_{R^n} |H(y)|^{(nq/(n+αq))·r} dy )^{1/r} ( ∫_{R^n} |w(y)|^{(nq/(n+αq))·s} dy )^{1/s}.    (3.74)
Here we have chosen
r = (n + αq)/(αq)  and  s = (n + αq)/n.
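The choice of r and s is pure exponent arithmetic: they must be Hölder conjugates, and the two factors must reproduce the L^{n/α} norm of H and the L^q norm of w. A quick symbolic check of ours (the values of n, α, q are hypothetical):

```python
from fractions import Fraction

n, alpha, q = 7, Fraction(3, 2), Fraction(9, 2)  # hypothetical; 0 < alpha < n, q > n/(n-alpha)

r = (n + alpha * q) / (alpha * q)
s = (n + alpha * q) / n
base = n * q / (n + alpha * q)        # the exponent nq/(n+αq) coming from HLS

assert 1 / r + 1 / s == 1             # Hölder conjugacy
assert base * r == n / alpha          # the |H| factor lands in L^{n/alpha}
assert base * s == q                  # the |w| factor lands in L^{q}
print("exponents consistent")
```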
It follows from (3.73) and (3.74) that
‖T_{u_a} w‖_{L^q} ≤ C(α, n, q) ( ∫_{R^n} | K(y) u_a(y)^{p−1} |^{n/α} dy )^{α/n} ‖w‖_{L^q}.    (3.75)
By (3.69), we deduce that, for sufficiently large a,
‖T_{u_a} w‖_{L^q} ≤ (1/2) ‖w‖_{L^q}.    (3.76)
Applying (3.76) to both the case q = q_o and the case q = p_o > q_o, and by the Contracting Mapping Theorem, we see that the equation
w = T_{u_a} w + g(x)    (3.77)
has a unique solution in both L^{q_o} and L^{p_o} ∩ L^{q_o}. From (3.72), u_a is a solution of (3.77) in L^{q_o}. Let w be the solution of (3.77) in L^{p_o} ∩ L^{q_o}; then w is also a solution in L^{q_o}. By the uniqueness, we must have
u_a = w ∈ L^{p_o} ∩ L^{q_o}, for any p_o > n/(n−α),
and hence so does u. This completes the proof of the Theorem.
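The contracting-operator step can be imitated in one dimension. This toy Fredholm equation w = Tw + g (a sketch of ours with hypothetical kernel K(x, y) = xy/4 and data g ≡ 1; nothing here is from the book) has operator norm below 1, so Picard iteration converges to the unique solution, which is available in closed form for comparison.

```python
m = 200
h = 1.0 / m
xs = [(i + 0.5) * h for i in range(m)]

def K(x, y):                      # small kernel: the induced operator norm is < 1
    return 0.25 * x * y

w = [0.0] * m
for _ in range(50):               # Picard iteration  w <- g + K w
    w = [1.0 + sum(K(xs[i], xs[j]) * w[j] for j in range(m)) * h
         for i in range(m)]

# Exact solution: w(x) = 1 + (x/4) c with c = ∫ y w(y) dy, so c = 1/2 + c/12,
# i.e. c = 6/11 and w(x) = 1 + 3x/22.
err = max(abs(w[i] - (1 + 3 * xs[i] / 22)) for i in range(m))
print(err)        # tiny: only quadrature and iteration error remain
```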
4

Preliminary Analysis on Riemannian Manifolds

4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
    4.4.1 Curvature of Plane Curves
    4.4.2 Curvature of Surfaces in R^3
    4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
    4.5.1 Higher Order Covariant Derivatives and the Laplace–Beltrami Operator
    4.5.2 Integrals
4.6 Sobolev Embeddings
Equations on Prescribing Gaussian and Scalar Curvature

According to Einstein, the Universe we live in is a curved space, which is a generalization of the flat Euclidean space. We call this curved space a manifold. Intuitively, each neighborhood of a point on a manifold is homeomorphic to an open set in the Euclidean space, and the manifold as a whole is obtained by pasting together pieces of the Euclidean spaces.

In this section, we will introduce the most basic concepts of differentiable manifolds, tangent spaces, Riemannian metrics, and curvatures. We will try to be as intuitive and brief as possible. For more rigorous arguments and detailed discussions, please see, for example, the book Riemannian Geometry by do Carmo [Ca].

4.1 Differentiable Manifolds

A simple example of a two dimensional differentiable manifold is a regular surface in R^3. Roughly speaking, it is a union of open sets of R^2, organized in such a
way that at the intersection of any two open sets the change from one to the other can be made in a differentiable manner. More precisely, we have

Definition 4.1.1 (Differentiable Manifolds) A differentiable n-dimensional manifold M is
(i) a union of open sets, M = ∪_α V_α, together with a family of homeomorphisms φ_α from V_α onto an open set φ_α(V_α) in R^n, such that
(ii) for any pair α and β with V_α ∩ V_β = U ≠ ∅, the images of the intersection, φ_α(U) and φ_β(U), are open sets in R^n, and the mappings φ_β ∘ φ_α^{−1} are differentiable; and
(iii) the family {V_α, φ_α} is maximal relative to the conditions (i) and (ii).

A manifold M is a union of open sets. In each open set U ⊂ M, one can define a coordinates chart: let φ_U be the homeomorphism from U to φ_U(U) in R^n; for each p ∈ U, if φ_U(p) = (x_1, x_2, · · ·, x_n) in R^n, then we can define the coordinates of p to be (x_1, x_2, · · ·, x_n). This is called a local coordinates of p, and (U, φ_U) is called a coordinates chart. If a family of coordinates charts that covers M is well made, so that the transition from one to the other is in a differentiable way as in condition (ii), then it is called a differentiable structure of M. Given a differentiable structure on M, one can easily complete it into a maximal one, by taking the union of all the coordinates charts of M that are compatible with (in the sense of condition (ii)) the ones in the given structure.

A trivial example of n-dimensional manifolds is the Euclidean space R^n, with the differentiable structure given by the identity. The following are two nontrivial examples.

Example 1. The n-dimensional unit sphere
S^n = { x ∈ R^{n+1} | x_1^2 + · · · + x_{n+1}^2 = 1 }.
Take the unit circle S^1 for instance. S^1 is the union of the four open sets V_1, V_2, V_3, and V_4, on which we can use the following four coordinates charts:
V_1 = { x ∈ S^1 | x_2 > 0 },  φ_1(x) = x_1;
V_2 = { x ∈ S^1 | x_2 < 0 },  φ_2(x) = x_1;
V_3 = { x ∈ S^1 | x_1 > 0 },  φ_3(x) = x_2;
V_4 = { x ∈ S^1 | x_1 < 0 },  φ_4(x) = x_2.
On the intersection of any two open sets, say on V_1 ∩ V_3, where x_1 > 0 and x_2 > 0, we have
x_2 = √(1 − x_1^2)  and  x_1 = √(1 − x_2^2).
They are both differentiable functions. The same is true on the other intersections. Therefore, with this differentiable structure, S^1 is a 1-dimensional differentiable manifold.
Example 2. The n-dimensional real projective space P^n(R). It is the set of straight lines of R^{n+1} which pass through the origin (0, · · ·, 0) ∈ R^{n+1}, or the quotient space of R^{n+1} \ {0} by the equivalence relation
(x_1, · · ·, x_{n+1}) ∼ (λx_1, · · ·, λx_{n+1}),  λ ∈ R, λ ≠ 0.
We denote the points in P^n(R) by [x]. Observe that, if x_i ≠ 0, then
[x_1, · · ·, x_{n+1}] = [x_1/x_i, · · ·, x_{i−1}/x_i, 1, x_{i+1}/x_i, · · ·, x_{n+1}/x_i].
For i = 1, 2, · · ·, n + 1, let
V_i = { [x] | x_i ≠ 0 }.
Geometrically, V_i is the set of straight lines of R^{n+1} which pass through the origin and which do not belong to the hyperplane x_i = 0. Apparently,
P^n(R) = ∪_{i=1}^{n+1} V_i.
Define
φ_i([x]) = (ξ_1^i, · · ·, ξ_{i−1}^i, ξ_{i+1}^i, · · ·, ξ_{n+1}^i),
where ξ_k^i = x_k/x_i (i ≠ k). On the intersection V_i ∩ V_j, the changes of coordinates
ξ_k^j = ξ_k^i / ξ_j^i  (k ≠ i, j),   ξ_i^j = 1 / ξ_j^i,
are obviously smooth. Thus {V_i, φ_i} is a differentiable structure of P^n(R), and hence P^n(R) is a differentiable manifold.

4.2 Tangent Spaces

We know that at every point of a regular curve or of a regular surface, there exists a tangent line or a tangent plane. For a regular surface S in R^3, at each given point p, the tangent plane is the space of all tangent vectors, and each tangent vector is defined as the "velocity" in the ambient space R^3. On a differentiable manifold, there is no such an ambient space. Therefore, we have to find a characteristic property of the tangent vector which will not involve the concept of velocity. To this end, let's consider a smooth curve in R^n:
γ(t) = (x_1(t), · · ·, x_n(t)),  t ∈ (−ε, ε),
with γ(0) = p.
The tangent vector of γ at the point p is
v = (x_1'(0), · · ·, x_n'(0)).
Let f(x) be a smooth function near p. Then the directional derivative of f along the vector v ∈ R^n can be expressed as
d(f ∘ γ)/dt |_{t=0} = Σ_{i=1}^n ∂f/∂x_i (p) dx_i/dt (0) = ( Σ_i x_i'(0) ∂/∂x_i ) f.
Hence the directional derivative is an operator on differentiable functions that depends uniquely on the vector v. We can identify v with this operator, and thus generalize the concept of tangent vectors to differentiable manifolds M.

Definition 4.2.1 A differentiable function γ : (−ε, ε) → M is called a (differentiable) curve in M. Let D be the set of all functions that are differentiable at the point p = γ(0) ∈ M. The tangent vector to the curve γ at t = 0 is an operator (or function) γ'(0) : D → R given by
γ'(0) f = d(f ∘ γ)/dt |_{t=0}.
A tangent vector at p is the tangent vector of some curve γ at t = 0 with γ(0) = p. The set of all tangent vectors at p is called the tangent space at p, and is denoted by T_p M.

Let (x_1, · · ·, x_n) be the coordinates in a local chart (U, φ) that covers p, with φ(p) = 0 ∈ φ(U), and let γ(t) = (x_1(t), · · ·, x_n(t)) be a curve in M with γ(0) = p. Then
γ'(0) f = d/dt f(γ(t)) |_{t=0} = d/dt f(x_1(t), · · ·, x_n(t)) |_{t=0}
        = Σ_i x_i'(0) ∂f/∂x_i = ( Σ_i x_i'(0) ∂/∂x_i |_p ) f.
This shows that the vector γ'(0) can be expressed as a linear combination of the ∂/∂x_i |_p:
γ'(0) = Σ_i x_i'(0) ∂/∂x_i |_p.
Here ∂/∂x_i |_p is the tangent vector at p of the "coordinate curve"
x_i → φ^{−1}(0, · · ·, 0, x_i, 0, · · ·, 0).
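The identity γ'(0)f = Σ_i x_i'(0) ∂f/∂x_i can be checked numerically with any concrete curve and function (both hypothetical choices of ours below); the two sides are computed independently by central differences.

```python
import math

def f(x, y):
    return x * x * y + math.sin(y)

def gamma(t):                        # a curve with gamma(0) = (1, 2), gamma'(0) = (3, 1)
    return (1.0 + 3.0 * t, 2.0 + t - t * t)

eps = 1e-6

# Left side: d/dt f(γ(t)) at t = 0
lhs = (f(*gamma(eps)) - f(*gamma(-eps))) / (2 * eps)

# Right side: x'(0) ∂f/∂x + y'(0) ∂f/∂y at the point (1, 2)
x0, y0 = gamma(0.0)
fx = (f(x0 + eps, y0) - f(x0 - eps, y0)) / (2 * eps)
fy = (f(x0, y0 + eps) - f(x0, y0 - eps)) / (2 * eps)
rhs = 3.0 * fx + 1.0 * fy

print(lhs, rhs)     # the two sides agree up to discretization error
```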
Also,
∂/∂x_i |_p f = ∂/∂x_i f( φ^{−1}(0, · · ·, 0, x_i, 0, · · ·, 0) ) |_{x_i = 0}.
One can see that the tangent space T_p M is an n-dimensional linear space with the basis
∂/∂x_1 |_p, · · ·, ∂/∂x_n |_p.
The dual space of the tangent space T_p M is called the cotangent space, and is denoted by T_p^* M.

Definition 4.2.2 Let M and N be two differentiable manifolds, and let f : M → N be a differentiable mapping. For every p ∈ M and for each v ∈ T_p M, choose a differentiable curve γ : (−ε, ε) → M, with γ(0) = p and γ'(0) = v. Let β = f ∘ γ. Define the mapping df_p : T_p M → T_{f(p)} N by
df_p(v) = β'(0).

Obviously, it is a linear mapping from one tangent space, T_p M, to the other, T_{f(p)} N, and one can show that it does not depend on the choice of γ (see [Ca]). We call df_p the differential of f.

When N = R^1, the collection of all such differentials forms the cotangent space. We can realize it in the following. From the definition, one can derive that
df_p ( ∂/∂x_i |_p ) = ∂f/∂x_i |_p,
and consequently,
df_p = Σ_i ∂f/∂x_i |_p (dx_i)_p.
In particular, for the differential (dx_i)_p of the coordinates function x_i, we have
(dx_i)_p ( ∂/∂x_j |_p ) = δ_ij.
From here we can see that (dx_1)_p, (dx_2)_p, · · ·, (dx_n)_p form a basis of the cotangent space T_p^* M at the point p.

Definition 4.2.3 The tangent space of M is
TM = ∪_{p∈M} T_p M,
and the cotangent space of M is
T^*M = ∪_{p∈M} T_p^* M.
4.3 Riemannian Metrics

To measure the arc length, area, or volume on a manifold, we need to introduce some kind of measurement, or "metric". For a surface S in the three dimensional Euclidean space R^3, there is a natural way of measuring the lengths of vectors tangent to S, which is simply the length of the vectors in the ambient space R^3. Given a curve γ(t) on S for t ∈ [a, b], its length is the integral
∫_a^b |γ'(t)| dt,
where the length of the velocity vector γ'(t) is given by the inner product in R^3:
|γ'(t)| = √( <γ'(t), γ'(t)> ).
The inner product <, > enables us to measure not only the lengths of curves on S, but also the area of domains on S, as well as other quantities in geometry. These can all be expressed in terms of the inner product <, > in R^3. Now we generalize this concept to differentiable manifolds without using the ambient space. More precisely, we have

Definition 4.3.1 A Riemannian metric on a differentiable manifold M is a correspondence which associates to each point p of M an inner product <·,·>_p on the tangent space T_p M, which varies differentiably; that is,
g_ij(q) ≡ < ∂/∂x_i |_q, ∂/∂x_j |_q >_q
is a differentiable function of q, for all q near p.

It is clear that this definition does not depend on the choice of coordinate system. A differentiable manifold with a given Riemannian metric will be called a Riemannian manifold. Also, one can prove that (see [Ca])

Proposition 4.3.1 A differentiable manifold M has a Riemannian metric.

Now we show how a Riemannian metric can be used to measure the length of a curve on a manifold M. Let I be an open interval in R^1, and let γ : I → M be a differentiable mapping (curve) on M. For each t ∈ I, dγ is a linear mapping from T_t I to T_{γ(t)} M, and γ'(t) ≡ dγ/dt ≡ dγ(d/dt) is a tangent vector of T_{γ(t)} M. The restriction of the curve γ to a closed interval [a, b] ⊂ I is called a segment. We define the length of the segment by
∫_a^b < γ'(t), γ'(t) >^{1/2} dt.
Here the arc length element is
ds = < γ'(t), γ'(t) >^{1/2} dt.
As we mentioned in the previous section, let (x_1, · · ·, x_n) be the coordinates in a local chart (U, φ) that covers p = γ(t) = (x_1(t), · · ·, x_n(t)). Then
γ'(t) = Σ_i x_i'(t) ∂/∂x_i,
and consequently,
ds^2 = < γ'(t), γ'(t) > dt^2
     = Σ_{i,j=1}^n < ∂/∂x_i, ∂/∂x_j >_p x_i'(t) dt x_j'(t) dt
     = Σ_{i,j=1}^n g_ij(p) dx_i dx_j.    (4.1)
This expresses the length element ds in terms of the metric g_ij.

To derive the formula for the volume of a region (an open connected subset) D in M in terms of the metric g_ij, let's begin with the volume of the parallelepiped formed by the tangent vectors ∂/∂x_1, · · ·, ∂/∂x_n. Let {e_1, · · ·, e_n} be an orthonormal basis of T_p M, and let
∂/∂x_i (p) = Σ_j a_ij e_j.
Then
g_ik(p) = < ∂/∂x_i, ∂/∂x_k >_p = Σ_{j,l} a_ij a_kl < e_j, e_l > = Σ_j a_ij a_kj.
We know the volume of the parallelepiped formed by ∂/∂x_1, · · ·, ∂/∂x_n is the determinant det(a_ij); by virtue of the identity above, it is the same as √(det(g_ij)). From these arguments, assuming that the region D is covered by a coordinates chart (U, φ), it is natural to define the volume of D as
vol(D) = ∫_{φ(U)} √(det(g_ij)) dx_1 · · · dx_n.    (4.2)

A trivial example is M = R^n, the n-dimensional Euclidean space, with
∂/∂x_i = e_i = (0, · · ·, 1, · · ·, 0),  i = 1, · · ·, n.
The metric is given by g_ij = < e_i, e_j > = δ_ij.
A less trivial example is S^2, the 2-dimensional unit sphere with the standard metric, i.e. with the metric inherited from the ambient space R^3 = {(x, y, z)}. We will use the usual spherical coordinates (θ, ψ), with
x = sin θ cos ψ,
y = sin θ sin ψ,
z = cos θ,
to illustrate how we use the measurement (inner product) in R^3 to derive the expression of the standard metric g_ij in terms of the local coordinates (θ, ψ).

Let p be a point on S^2 covered by a local coordinates chart (U, φ) with coordinates (θ, ψ). We first derive the expressions for ∂/∂θ |_p and ∂/∂ψ |_p, the tangent vectors at the point p = φ^{−1}(θ, ψ) of the coordinate curves ψ = constant and θ = constant, respectively. In the coordinates of R^3, we have
∂/∂θ |_p = ( ∂x/∂θ, ∂y/∂θ, ∂z/∂θ ) = ( cos θ cos ψ, cos θ sin ψ, −sin θ ),
∂/∂ψ |_p = ( ∂x/∂ψ, ∂y/∂ψ, ∂z/∂ψ ) = ( −sin θ sin ψ, sin θ cos ψ, 0 ).
It follows that
g_11(p) = < ∂/∂θ, ∂/∂θ >_p = cos^2 θ cos^2 ψ + cos^2 θ sin^2 ψ + sin^2 θ = 1,
g_12(p) = g_21(p) = < ∂/∂θ, ∂/∂ψ >_p = 0,
g_22(p) = < ∂/∂ψ, ∂/∂ψ >_p = sin^2 θ.
Consequently, the length element is
ds = √( g_11 dθ^2 + 2 g_12 dθ dψ + g_22 dψ^2 ) = √( dθ^2 + sin^2 θ dψ^2 ),
and the area element is
dA = √(det(g_ij)) dθ dψ = sin θ dθ dψ.
These conform with our knowledge of the length and area in the spherical coordinates.
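Both elements can be sanity-checked by quadrature (a sketch of ours): integrating ds along the circle of latitude θ = π/3 should give 2π sin(π/3), and integrating dA = sin θ dθ dψ over the whole sphere should give 4π.

```python
import math

# Length of the latitude circle θ = π/3:  ∫ sqrt(g22) dψ = 2π sin(π/3)
theta0 = math.pi / 3
m = 1000
hp = 2 * math.pi / m
length = sum(math.sqrt(math.sin(theta0) ** 2) * hp for _ in range(m))
print(length, 2 * math.pi * math.sin(theta0))

# Area of S^2:  ∫∫ sinθ dθ dψ = (∫_0^π sinθ dθ) · 2π = 4π
k = 2000
ht = math.pi / k
area = sum(math.sin((i + 0.5) * ht) for i in range(k)) * ht * 2 * math.pi
print(area, 4 * math.pi)
```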
4.4 Curvature

4.4.1 Curvature of Plane Curves

Consider curves on a plane. Intuitively, we feel that a straight line does not bend at all, and that a circle of radius one bends more than a circle of radius 2. To measure the extent of bending of a curve, or the amount it deviates from being straight, we introduce curvature.

Let p be a point on a curve, and let T(p) be the unit tangent vector at p. Let q be a nearby point, let ∆α be the amount of change of the angle from T(p) to T(q), and let ∆s be the arc length between p and q on the curve. We define the curvature at p as
k(p) = lim_{q→p} ∆α/∆s = dα/ds (p).
If a plane curve is given parametrically as c(t) = (x(t), y(t)), then the unit tangent vector at c(t) is
T(t) = ( x'(t), y'(t) ) / √( [x'(t)]^2 + [y'(t)]^2 ),
and
ds = √( [x'(t)]^2 + [y'(t)]^2 ) dt.
By a straightforward calculation, one can verify that
k(c(t)) = |dT/ds| = ( x'(t) y''(t) − y'(t) x''(t) ) / { [x'(t)]^2 + [y'(t)]^2 }^{3/2}.
If a plane curve is given explicitly as y = f(x), then, replacing t by x and y(t) by f(x) in the above formula, we find that the curvature at the point (x, f(x)) is
k(x, f(x)) = f''(x) / { 1 + [f'(x)]^2 }^{3/2}.    (4.3)

4.4.2 Curvature of Surfaces in R^3

1. Principal Curvature

Let S be a surface in R^3, let p be a point on S, and let N(p) be a normal vector of S at p. Given a tangent vector T(p) at p, consider the intersection of the surface with the plane containing T(p) and N(p). This intersection is a plane curve. The normal curvature is the curvature of this plane curve taken with a sign: positive if the curve turns in the same direction as the surface's chosen normal, and negative otherwise; its absolute value is the curvature of the plane curve. The normal curvature varies as the tangent vector T(p) changes. The maximum and minimum values of the normal curvature at the point p are called principal curvatures, and are usually denoted by k1(p) and k2(p).
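The parametric formula is easy to test: on a circle of radius R the curvature should be the constant 1/R, independent of the parameter value. A sketch of ours with the hypothetical choice R = 2:

```python
import math

def curvature(xp, yp, xpp, ypp):
    """k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) for a parametric plane curve."""
    return (xp * ypp - yp * xpp) / (xp * xp + yp * yp) ** 1.5

R = 2.0
ks = []
for t in (0.0, 1.0, 2.5):                      # c(t) = (R cos t, R sin t)
    xp, yp = -R * math.sin(t), R * math.cos(t)
    xpp, ypp = -R * math.cos(t), -R * math.sin(t)
    ks.append(curvature(xp, yp, xpp, ypp))
print(ks)      # each entry equals 1/R = 0.5
```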
for simplicity. One way to measure the extent of bending of a surface is by Gaussian curvature. y) be a diﬀerentiable function. We calculate the Gaussian curvature of S at the origin 0 = (0. Consider a surface S in R deﬁned as the graph of x3 = f (x1 . F ei = λi ei . the normal curvature kn (u) in u direction is ∂2f ∂u2 ∂f [( ∂u )2 + 2 2 1]3/2 (0) ∂f ∂ where ∂u and ∂uf are ﬁrst and second directional derivative of f in u direction. 1 2 where F= f11 f12 f21 f22 (0). It follows that . the Gaussian curvature of a sphere is positive. Gaussian Curvature. 0) = 0 and f1 (0. and it is obvious that eT F ei = λi . Let λ1 and λ2 be two eigenvalues of the matrix F with corresponding unit eigenvectors e1 and e2 . we arrive at kn (u) = (f11 u2 + 2f12 u1 u2 + f22 u2 )(0) = uT F u. 2 Using the assumption that f1 (0) = 0 = f2 (0). i = 1.116 4 Preliminary Analysis on Riemannian Manifolds 2.4) Since the matrix F is symmetric. x2 ). Assume that the origin is on S. one can see that no matter which side normal is chosen (say. u2 ) be a unit vector on the tangent plane. 2. i Let θ be the angle between u and e1 . and 1 of a torus or a plane is zero. Then we can express u = cos θ e1 + sin θ e2 . (4. 0) = 0 = f2 (0. that is f (0. we write fi = ∂xi and we will use fij to denote ∂xi ∂xj . A sphere of radius R has Gaussian curvature R2 . It is deﬁned as the product of the two principal curvatures K(p) = k1 (p) · k2 (p). for a closed surface. From this. 0. Let u = (u1 . one may chose outer normals or inner normals). i = 1. 2. and uT denotes the transpose of u. the two eigenvectors e1 and e2 are orthogonal. ∂ f ∂f where. i.e. and the xy plane is tangent to S. 3 Let f (x. 0). of a one sheet hyperboloids is negative.3). Then by (4. 0).
k_n(u) = u^T F u = cos^2 θ λ_1 + sin^2 θ λ_2.
Assume that λ_1 ≤ λ_2. Then
λ_1 ≤ λ_1 + sin^2 θ (λ_2 − λ_1) = k_n(u) = cos^2 θ (λ_1 − λ_2) + λ_2 ≤ λ_2.
Hence λ_1 and λ_2 are the minimum and maximum values of the normal curvature, and therefore they are the principal curvatures k_1 and k_2. From (4.4), we see that λ_1 and λ_2 are the solutions of the quadratic equation
det | f_11 − λ   f_12     |
    | f_21       f_22 − λ |  =  λ^2 − (f_11 + f_22) λ + (f_11 f_22 − f_12 f_21) = 0.
Therefore, the Gaussian curvature at 0 is
K(0) = k_1 k_2 = λ_1 λ_2 = ( f_11 f_22 − f_12 f_21 )(0).

4.4.3 Curvature on Riemannian Manifolds

On a general n-dimensional Riemannian manifold M, there are several kinds of curvatures. Before introducing them, we need a few preparations.

1. Vector Fields and Brackets

A vector field X on M is a correspondence that assigns to each point p ∈ M a vector X(p) in the tangent space T_p M. It is a mapping from M to TM. If this mapping is differentiable, then we say the vector field X is differentiable. In a local coordinates, we can express
X(p) = Σ_{i=1}^n a_i(p) ∂/∂x_i.
When applied to a smooth function f on M, it is, by definition, a directional derivative operator in the direction X(p):
(Xf)(p) = Σ_{i=1}^n a_i(p) ∂f/∂x_i.

Definition 4.4.1 For two differentiable vector fields X and Y, define the Lie bracket as
[X, Y] = XY − YX.
If X(p) = Σ_i a_i(p) ∂/∂x_i and Y(p) = Σ_j b_j(p) ∂/∂x_j, then
[X, Y] f = XYf − YXf = Σ_{i,j=1}^n ( a_i ∂b_j/∂x_i − b_i ∂a_j/∂x_i ) ∂f/∂x_j.
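Returning to the graph computation above: at a point where the tangent plane is horizontal, the Gaussian curvature is the determinant of the Hessian of f, and the principal curvatures are its eigenvalues. A numerical check of ours with a hypothetical quadratic graph:

```python
import math

def f(x, y):
    return x * x + 3 * x * y + 2 * y * y       # f(0,0) = 0 and grad f(0,0) = 0

eps = 1e-4      # second differences are exact for a quadratic (up to rounding)
f11 = (f(eps, 0) - 2 * f(0, 0) + f(-eps, 0)) / eps ** 2
f22 = (f(0, eps) - 2 * f(0, 0) + f(0, -eps)) / eps ** 2
f12 = (f(eps, eps) - f(eps, -eps) - f(-eps, eps) + f(-eps, -eps)) / (4 * eps ** 2)

K = f11 * f22 - f12 * f12                      # Gaussian curvature at the origin
tr = f11 + f22
disc = math.sqrt(tr * tr - 4 * K)
k1, k2 = (tr - disc) / 2, (tr + disc) / 2      # principal curvatures (eigenvalues)
print(K, k1, k2)    # K = 2*4 - 3*3 = -1: a saddle-type point
```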
2. Covariant Derivatives and Riemannian Connections

To be more intuitive, we begin with a surface S in R^3. Let c : I → S be a curve on S, and let V(t) be a vector field along c(t). One can see that dV/dt is a vector in the ambient space R^3, but it may not belong to the tangent plane of S. To consider the rate of change of V(t) restricted to the surface S, we make an orthogonal projection of dV/dt onto the tangent plane, and denote it by DV/dt. We call this the covariant derivative of V(t) on S, the derivative viewed by the two-dimensional creatures on S.

On an n-dimensional differentiable manifold M, the covariant derivative is defined by affine connections. Let D(M) be the set of all smooth vector fields, and let f and g be smooth functions on M.

Definition 4.4.2 Let X, Y, Z ∈ D(M). An affine connection D on a differentiable manifold M is a mapping from D(M) × D(M) to D(M) which maps (X, Y) to D_X Y and satisfies
1) D_{fX+gY} Z = f D_X Z + g D_Y Z,
2) D_X (Y + Z) = D_X Y + D_X Z,
3) D_X (fY) = f D_X Y + X(f) Y.

Given an affine connection D on M, for a vector field V(t) = Y(c(t)) along a differentiable curve c : I → M, define the covariant derivative of V along c(t) as
DV/dt = D_{dc/dt} Y,  t ∈ I.
In a local coordinates, let
c(t) = (x_1(t), · · ·, x_n(t)) and V(t) = Σ_i v_i(t) ∂/∂x_i.
Then, by the properties of the affine connection, one can express
DV/dt = Σ_i dv_i/dt ∂/∂x_i + Σ_{i,j} dx_i/dt v_j D_{∂/∂x_i} ∂/∂x_j.    (4.5)

Recall that, in Euclidean spaces with the inner product <, >, a vector field V(t) along a curve c(t) is parallel if and only if dV/dt = 0, and that, for a pair of parallel vector fields X and Y along c(t), we have < X, Y > = constant. Similarly, we have

Definition 4.4.3 Let M be a differentiable manifold with an affine connection D. A vector field V(t) along a curve c(t) ∈ M is parallel if DV/dt = 0.
Definition 4.4.4 Let M be a differentiable manifold with an affine connection D and a Riemannian metric <, >. If, for any pair of parallel vector fields X and Y along any smooth curve c(t), we have
< X, Y >_{c(t)} = constant,
then we say that D is compatible with the metric <, >.

The following fundamental theorem of Levi-Civita guarantees the existence of such an affine connection on a Riemannian manifold.

Theorem 4.4.1 Given a Riemannian manifold M with metric <, >, there exists a unique connection D on M that is compatible with <, >. Moreover, this connection is symmetric, i.e.
D_X Y − D_Y X = [X, Y].

We skip the proof of the theorem. Interested readers may see the book of do Carmo [Ca] (page 55). In a local coordinates (x_1, · · ·, x_n), this unique Riemannian connection can be expressed as
D_{∂/∂x_i} ∂/∂x_j = Σ_k Γ_ij^k ∂/∂x_k,
where
Γ_ij^k = (1/2) Σ_m ( ∂g_jm/∂x_i + ∂g_mi/∂x_j − ∂g_ij/∂x_m ) g^{mk}.
Here g_ij = < ∂/∂x_i, ∂/∂x_j >, and (g^{ij}) is the inverse matrix of (g_ij). The Γ_ij^k are called the Christoffel symbols.

3. Curvature

Definition 4.4.5 Let M be a Riemannian manifold with the Riemannian connection D. The curvature R is a correspondence that assigns to every pair of smooth vector fields X, Y ∈ D(M) a mapping R(X, Y) : D(M) → D(M), defined by
R(X, Y) Z = D_Y D_X Z − D_X D_Y Z + D_{[X,Y]} Z,  ∀ Z ∈ D(M).

Definition 4.4.6 The sectional curvature of a given two dimensional subspace E ⊂ T_p M at the point p is
K(E) = < R(X, Y) X, Y > / ( |X|^2 |Y|^2 − < X, Y >^2 ),
where X and Y are any two linearly independent vectors in E. Notice that
√( |X|^2 |Y|^2 − < X, Y >^2 )
is the area of the parallelogram spanned by X and Y.
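The Christoffel formula above is easy to evaluate numerically. This sketch of ours applies it to the S^2 metric g = diag(1, sin^2 θ) from Section 4.3 (coordinates x_1 = θ, x_2 = ψ), recovering the two classical nonzero symbols Γ^θ_{ψψ} = −sin θ cos θ and Γ^ψ_{θψ} = cot θ.

```python
import math

def g(x):
    """Metric of S^2 in coordinates (θ, ψ)."""
    return [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]

def dg(x, m, eps=1e-6):
    """∂g/∂x_m as a 2x2 matrix, by central differences."""
    xp = list(x); xp[m] += eps
    xm = list(x); xm[m] -= eps
    gp, gm = g(xp), g(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * eps) for j in range(2)] for i in range(2)]

def christoffel(x):
    """Gamma[k][i][j] = (1/2) Σ_m g^{km} (∂_i g_jm + ∂_j g_mi - ∂_m g_ij)."""
    G = g(x)
    Ginv = [[1.0 / G[0][0], 0.0], [0.0, 1.0 / G[1][1]]]   # the metric is diagonal
    d = [dg(x, m) for m in range(2)]
    return [[[0.5 * sum(Ginv[k][m] * (d[i][m][j] + d[j][m][i] - d[m][i][j])
                        for m in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

th = 1.0
Gam = christoffel([th, 0.3])
print(Gam[0][1][1], -math.sin(th) * math.cos(th))   # Γ^θ_{ψψ}
print(Gam[1][0][1], math.cos(th) / math.sin(th))    # Γ^ψ_{θψ}
```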
One can show that K(E) so defined is independent of the choice of X and Y, and depends only on the point p. On two dimensional manifolds, it is actually the Gaussian curvature of the section.

Definition 4.4.7 Let X = Y_n be a unit vector in T_p M, and let {Y_1, · · ·, Y_{n−1}} be an orthonormal basis of the hyperplane in T_p M orthogonal to X. Then the Ricci curvature in the direction X is the average of the sectional curvatures of the n − 1 sections spanned by X and Y_1, X and Y_2, · · ·, and X and Y_{n−1}:
Ric_p(X) = (1/(n−1)) Σ_{i=1}^{n−1} < R(X, Y_i) X, Y_i >.
The scalar curvature at p is
R(p) = (1/n) Σ_{i=1}^n Ric_p(Y_i).
One can show that the above definition does not depend on the choice of the orthonormal basis. On two dimensional manifolds, the Ricci curvature is the same as the sectional curvature.

Example. Let M be the unit sphere with the standard metric; then one can verify that its sectional curvature at every point is K(E) = 1.

4.5 Calculus on Manifolds

In the previous section, we introduced the concept of an affine connection D on a differentiable manifold M. It is a mapping from Γ(M) × Γ(M) to Γ(M), where Γ(M) is the set of all smooth vector fields on M. Here we will extend this definition to the space of differentiable tensor fields, and thus introduce higher order covariant derivatives.

4.5.1 Higher Order Covariant Derivatives and the Laplace–Beltrami Operator

Let p and q be two nonnegative integers. At a point x on M, we denote the tangent space, as usual, by T_x M. For convenience, we adopt the summation convention and write, for instance,
Σ_i X_i ∂/∂x_i := X_i ∂/∂x_i.
Define T_q^p(T_x M) as the space of (p, q)-tensors on T_x M, that is, the space of (p + q)-linear forms
    η : T_x*M × ··· × T_x*M × T_x M × ··· × T_x M → R¹

(with p copies of T_x*M and q copies of T_x M). In local coordinates, we can express

    T = T^{j_1···j_q}_{i_1···i_p} dx^{i_1} ⊗ ··· ⊗ dx^{i_p} ⊗ ∂/∂x_{j_1} ⊗ ··· ⊗ ∂/∂x_{j_q}.

Recall that, for the affine connection D, if we set ∇_i = D_{∂/∂x_i}, then

    ∇_i (∂/∂x_j)(x) = Γ^k_{ij} ∂/∂x_k,

where the Γ^k_{ij} are the Christoffel symbols; and if we apply ∇_i to a (1, 0)-tensor, say dx^j, then

    ∇_i (dx^j) = −Γ^j_{ik} dx^k.

For smooth vector fields X, Y ∈ Γ(M) with X = (X^1, ..., X^n) and Y = (Y^1, ..., Y^n), by the bilinear properties of the connection, we have

    D_X Y = D_{X^i ∂/∂x_i} Y = X^i ∇_i Y
          = X^i ( ∂Y^j/∂x_i ∂/∂x_j + Y^j Γ^k_{ij} ∂/∂x_k )
          = X^i ( ∂Y^j/∂x_i + Γ^j_{ik} Y^k ) ∂/∂x_j.   (4.6)

Now we naturally extend this affine connection D to differentiable (p, q)-tensor fields T on M. For a point x on M and a vector X ∈ T_x M, D_X T is defined to be a (p, q)-tensor on T_x M by

    D_X T(x) = X^i (∇_i T)(x),

where

    (∇_i T)(x)^{j_1···j_q}_{i_1···i_p} = ∂T^{j_1···j_q}_{i_1···i_p}/∂x_i
        − Σ_{k=1}^{p} Γ^α_{i i_k}(x) T(x)^{j_1···j_q}_{i_1···i_{k−1} α i_{k+1}···i_p}
        + Σ_{k=1}^{q} Γ^{j_k}_{i α}(x) T(x)^{j_1···j_{k−1} α j_{k+1}···j_q}_{i_1···i_p}.   (4.7)

Notice that (4.6) is a particular case of (4.7), applied to the (0, 1)-tensor Y.

Given two differentiable tensor fields T and T̃, we have
    D_X(T ⊗ T̃) = D_X T ⊗ T̃ + T ⊗ D_X T̃.   (4.8)

For a differentiable (p, q)-tensor field T, we define ∇T to be the (p + 1, q)-tensor field whose components are given by

    (∇T)^{j_1···j_q}_{i_1···i_{p+1}} = (∇_{i_1} T)^{j_1···j_q}_{i_2···i_{p+1}}.

Similarly, one can define ∇²T, ∇³T, and so on.

For example, a smooth function f(x) on M is a (0, 0)-tensor, and ∇f is a (1, 0)-tensor defined by

    ∇f = ∇_i f dx^i = ∂f/∂x_i dx^i.

And ∇²f is a (2, 0)-tensor:

    ∇²f = ∇( ∂f/∂x_i dx^i )
        = ∂²f/∂x_i∂x_j dx^j ⊗ dx^i + ∂f/∂x_i ∇_j(dx^i) ⊗ dx^j
        = ( ∂²f/∂x_i∂x_j − Γ^k_{ij} ∂f/∂x_k ) dx^i ⊗ dx^j.

Here we have used (4.7) and (4.8). ∇²f is called the Hessian of f and is denoted by Hess(f). Its ij-th component is

    (∇²f)_{ij} = ∂²f/∂x_i∂x_j − Γ^k_{ij} ∂f/∂x_k.

The trace of the Hessian matrix ((∇²f)_{ij}) is defined to be the Laplace-Beltrami operator acting on f:

    Δf = g^{ij} (∇²f)_{ij} = g^{ij} ( ∂²f/∂x_i∂x_j − Γ^k_{ij} ∂f/∂x_k ),

where (g^{ij}) is the inverse matrix of (g_{ij}). Through a straightforward calculation, one can verify that

    Δ = (1/√g) Σ_{i,j=1}^{n} ∂/∂x_i ( √g g^{ij} ∂/∂x_j ),

where g is the determinant of (g_{ij}).

4.5.2 Integrals

On a smooth n-dimensional Riemannian manifold (M, g), one can define a natural positive Radon measure on M, and the theory of the Lebesgue integral applies.
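The equality of the trace-of-Hessian form and the divergence form of the Laplace-Beltrami operator can be made concrete. The sketch below (Python/sympy; the round sphere and the test function f = cos θ, the coordinate x³ restricted to S², are illustrative choices, not from the text) evaluates both formulas and checks that they agree:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2
ginv = g.inv()
sqrtg = sp.sin(theta)                            # sqrt(det g) for 0 < theta < pi
n = 2

# Christoffel symbols of the round metric
Gamma = [[[sp.simplify(sp.Rational(1, 2)*sum(
            (sp.diff(g[j, m], x[i]) + sp.diff(g[m, i], x[j]) - sp.diff(g[i, j], x[m]))*ginv[m, k]
            for m in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

f = sp.cos(theta)  # the coordinate function x^3 restricted to S^2

# divergence form: (1/sqrt(g)) d_i ( sqrt(g) g^{ij} d_j f )
lap_div = sp.simplify(sum(sp.diff(sqrtg*ginv[i, j]*sp.diff(f, x[j]), x[i])
                          for i in range(n) for j in range(n))/sqrtg)

# trace-of-Hessian form: g^{ij} ( d_i d_j f - Gamma^k_{ij} d_k f )
lap_hess = sp.simplify(sum(ginv[i, j]*(sp.diff(f, x[i], x[j])
                           - sum(Gamma[k][i][j]*sp.diff(f, x[k]) for k in range(n)))
                           for i in range(n) for j in range(n)))

print(lap_div, lap_hess)
```

Both expressions return −2 cos θ, consistent with cos θ being a first spherical harmonic, an eigenfunction of Δ on S² with eigenvalue −2.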
Let (U_i, φ_i)_i be a family of coordinate charts that covers M. We say that a family (U_i, φ_i, η_i)_i is a family of partition of unity associated to (U_i, φ_i)_i if (i) (η_i)_i is a smooth partition of unity associated to the covering (U_i)_i, and (ii) for any i, supp η_i ⊂ U_i. One can show that, for each family of coordinate charts (U_i, φ_i)_i of M, there exists a corresponding family of partition of unity (U_i, φ_i, η_i)_i.

Let f be a continuous function on M with compact support. Given a family of coordinate charts (U_i, φ_i)_i of M and a partition of unity (U_i, φ_i, η_i)_i associated to it, we define the integral of f on M as the sum of integrals on open sets in R^n:

    ∫_M f dV_g = Σ_i ∫_{φ_i(U_i)} ( η_i √g f ) ∘ φ_i^{−1} dx,

where g is the determinant of (g_{ij}) in the chart (U_i, φ_i), and dx stands for the volume element of R^n. One can prove that such a definition does not depend on the choice of the charts (U_i, φ_i)_i and the partition of unity (U_i, φ_i, η_i)_i.

For smooth functions u and v on a compact manifold M, one can verify the following integration by parts formula:

    −∫_M (Δ_g u) v dV_g = ∫_M <∇u, ∇v> dV_g,   (4.9)

where Δ_g is the Laplace-Beltrami operator associated to the metric g, and

    <∇u, ∇v> = g^{ij} ∇_i u ∇_j v = g^{ij} ∂u/∂x_i ∂v/∂x_j

is the scalar product associated with g for 1-forms.

4.5.3 Equations on Prescribing Gaussian and Scalar Curvature

A Riemannian metric g on M is said to be pointwise conformal to another metric g_o if there exists a positive function ρ such that

    g(x) = ρ(x) g_o(x), ∀ x ∈ M.

Suppose M is a two dimensional Riemannian manifold with a metric g_o and the corresponding Gaussian curvature K_o(x). Let g(x) = e^{2u(x)} g_o(x) for some smooth function u(x) on M, and let K(x) be the Gaussian curvature associated to the metric g. Then, by a straightforward calculation, one can verify that

    −Δ_o u + K_o(x) = K(x) e^{2u(x)}, ∀ x ∈ M,   (4.10)

where Δ_o is the Laplace-Beltrami operator associated to g_o.
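Equation (4.10) can be checked symbolically in the simplest setting: take g_o to be the flat metric on a coordinate patch, so that K_o ≡ 0, and compute the Gaussian curvature of g = e^{2u} g_o directly from its Christoffel symbols; by (4.10), the result must satisfy K e^{2u} = −Δ_o u. A sketch (Python/sympy; the particular u below is an arbitrary illustrative choice, not from the text):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
u = x1**2 + x1*x2            # an arbitrary illustrative conformal exponent
g = sp.exp(2*u)*sp.eye(2)    # g = e^{2u} g_o, with g_o the flat metric (so K_o = 0)
ginv = g.inv()
n = 2

Gamma = [[[sp.simplify(sp.Rational(1, 2)*sum(
            (sp.diff(g[j, m], xs[i]) + sp.diff(g[m, i], xs[j]) - sp.diff(g[i, j], xs[m]))*ginv[m, k]
            for m in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

def riem(l, k, i, j):
    expr = sp.diff(Gamma[l][j][k], xs[i]) - sp.diff(Gamma[l][i][k], xs[j])
    expr += sum(Gamma[l][i][m]*Gamma[m][j][k] - Gamma[l][j][m]*Gamma[m][i][k] for m in range(n))
    return sp.simplify(expr)

# Gaussian curvature of g, computed from the curvature tensor
R_0101 = sp.simplify(sum(g[0, l]*riem(l, 1, 0, 1) for l in range(n)))
K = sp.simplify(R_0101/g.det())

# (4.10) with K_o = 0:  K e^{2u} = -Delta_o u  (the flat Laplacian on the right)
residual = sp.simplify(K*sp.exp(2*u) + sp.diff(u, x1, 2) + sp.diff(u, x2, 2))
print(residual)  # 0
```

The vanishing residual recovers the familiar two dimensional formula K = −e^{−2u} Δ_o u as the K_o = 0 case of (4.10).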
Similarly, for the conformal relation between scalar curvatures on n-dimensional manifolds (n ≥ 3), say on the standard sphere S^n, we have

    −Δ_o u + ( n(n−2)/4 ) u = ( (n−2)/(4(n−1)) ) R(x) u^{(n+2)/(n−2)}, ∀ x ∈ S^n,   (4.11)

where g_o is the standard metric on S^n with the corresponding Laplace-Beltrami operator Δ_o, and R(x) is the scalar curvature associated to the pointwise conformal metric g = u^{4/(n−2)} g_o.

In Chapters 5 and 6, we will study the existence and qualitative properties of the solutions u of the above two equations respectively.

4.6 Sobolev Embeddings

Let (M, g) be a smooth Riemannian manifold. Let u be a smooth function on M and k a nonnegative integer. We denote by ∇^k u the k-th covariant derivative of u as defined in the previous section; its norm |∇^k u| is defined in a local coordinate chart by

    |∇^k u|² = g^{i_1 j_1} ··· g^{i_k j_k} (∇^k u)_{i_1···i_k} (∇^k u)_{j_1···j_k}.

In particular, we have ∇^0 u = u, |∇^1 u| = |∇u|, and

    |∇u|² = g^{ij} (∇u)_i (∇u)_j = g^{ij} ∇_i u ∇_j u = g^{ij} ∂u/∂x_i ∂u/∂x_j.

For an integer k and a real number p ≥ 1, let C^p_k(M) be the space of C^∞ functions u on M such that

    ∫_M |∇^j u|^p dV_g < ∞, ∀ j = 0, 1, ···, k.

Definition 4.6.1 The Sobolev space H^{k,p}(M) is the completion of C^p_k(M) with respect to the norm

    ||u||_{H^{k,p}(M)} = Σ_{j=0}^{k} ( ∫_M |∇^j u|^p dV_g )^{1/p}.

Many results concerning the Sobolev spaces on R^n introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. For application purposes, we list some of them in the following. Interested readers may find the proofs in [He] or [Au].

Theorem 4.6.1 H^{k,2}(M) is a Hilbert space when equipped with the equivalent norm
    ||u|| = ( Σ_{i=0}^{k} ∫_M |∇^i u|² dV_g )^{1/2}.

The corresponding scalar product is defined by

    <u, v> = Σ_{i=0}^{k} ∫_M <∇^i u, ∇^i v> dV_g,

where <·, ·> is the scalar product on covariant tensor fields associated to the metric g.

Theorem 4.6.2 If M is compact, then H^{k,p}(M) does not depend on the metric.

Theorem 4.6.3 (Sobolev Embeddings I) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold. Assume that q ≥ 1 and that m, k are two integers with 0 ≤ m < k. If p = nq/(n − q(k−m)), then

    H^{k,q}(M) ⊂ H^{m,p}(M).

In particular, assume 1 ≤ q < n and p = nq/(n−q); then H^{1,q}(M) ⊂ L^p(M).

Theorem 4.6.4 (Sobolev Embeddings II) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold. Assume that q ≥ 1 and that m, k are two integers with 0 ≤ m < k. If q > n/(k−m), then

    H^{k,q}(M) ⊂ C^m(M).

Theorem 4.6.5 (Compact Embeddings) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold.
(i) Assume that q ≥ 1 and that m, k are two integers with 0 ≤ m < k. Then for any real number p such that 1 ≤ p < nq/(n − q(k−m)), the embedding H^{k,q}(M) ⊂ H^{m,p}(M) is compact. In particular, the embedding of H^{1,q}(M) into L^p(M) is compact if 1 ≤ p < nq/(n−q).
(ii) For q > n, the embedding of H^{1,q}(M) into C^0(M) is compact. More generally, if k − λ > n/q, the embedding of H^{k,q}(M) into C^λ(M) is compact.
Theorem 4.6.6 (Poincaré Inequality) Let (M, g) be a smooth, compact n-dimensional Riemannian manifold, and let p ∈ [1, n) be real. Then there exists a positive constant C such that for any u ∈ H^{1,p}(M),

    ( ∫_M |u − ū|^p dV_g )^{1/p} ≤ C ( ∫_M |∇u|^p dV_g )^{1/p},

where

    ū = ( 1 / Vol(M, g) ) ∫_M u dV_g

is the average of u on M.
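The p = 2 version of this inequality on a compact manifold is the one used repeatedly in the next chapter. On the unit circle, the simplest compact model, it can be seen numerically; the sketch below (Python/numpy, an illustrative model computation rather than part of the text) checks the inequality for a random trigonometric polynomial, with best constant C = 1 coming from the first nonzero eigenvalue of −Δ on S¹:

```python
import numpy as np

# Poincare inequality on the unit circle (periodic model case, p = 2):
#   ||u - u_bar||_{L^2} <= C ||u'||_{L^2},  best constant C = 1.
N = 4096
x = 2*np.pi*np.arange(N)/N

def l2(f):
    # L^2 norm on [0, 2*pi) via the uniform-grid average (exact for trig polynomials)
    return np.sqrt(np.mean(f*f)*2*np.pi)

rng = np.random.default_rng(0)
K = 8
a, b = rng.standard_normal(K), rng.standard_normal(K)
u  = sum(a[k-1]*np.cos(k*x) + b[k-1]*np.sin(k*x) for k in range(1, K+1))
du = sum(k*(-a[k-1]*np.sin(k*x) + b[k-1]*np.cos(k*x)) for k in range(1, K+1))

ratio = l2(u - u.mean())/l2(du)                     # always <= 1
ratio_first_mode = l2(np.cos(x) - np.mean(np.cos(x)))/l2(-np.sin(x))  # exactly 1
print(round(ratio, 4), round(ratio_first_mode, 4))
```

The first Fourier mode cos x attains the constant, mirroring the fact that the sharp constant is determined by the first nonzero eigenvalue of the Laplace-Beltrami operator.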
5 Prescribing Gaussian Curvature on Compact 2-Manifolds

5.1 Prescribing Gaussian Curvature on S²
    5.1.1 Obstructions
    5.1.2 Variational Approaches and Key Inequalities
    5.1.3 Existence of Weak Solutions
5.2 Gaussian Curvature on Negatively Curved Manifolds
    5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
    5.2.2 Chen and Li's Results

In this and the next chapters, we will present the readers with real research examples: semilinear equations arising from prescribing Gaussian and scalar curvatures. We will illustrate, among other methods, how the calculus of variations can be applied to seek weak solutions of these equations.

To show the existence of weak solutions for a certain equation, we consider the corresponding functional J(·) in an appropriate Hilbert space. According to the compactness of the functional, people usually divide the problem into three cases:
a) the subcritical case, where the level sets of J are compact;
b) the critical case, which is the limiting situation and can be approximated by subcritical cases; or
c) the supercritical case, i.e. the remaining cases.

To roughly see when a functional may lose compactness, let us consider J(x) = (x² − 1)eˣ defined on the one dimensional space R¹. One can easily see that there exists a sequence {x_k}, say x_k = −k, such that J(x_k)→0 and J′(x_k)→0 as k→∞; yet {x_k} possesses no convergent subsequence, because this sequence is not bounded. The functional thus has no compactness at level
J = 0. However, if we restrict J to levels less than 0, then it satisfies the (PS) condition (see also Chapter 2): every sequence {u_k} that satisfies J(u_k)→c and J′(u_k)→0 possesses a convergent subsequence. That is, if we pick a sequence {x_k} satisfying J(x_k) < 0 and J′(x_k)→0, then one can verify that it possesses a subsequence which converges to a critical point x₀ of J; actually, this x₀ is a minimum of J.

We know that in a finite dimensional space a bounded sequence has a convergent subsequence, while in an infinite dimensional space even a bounded sequence may not have a convergent subsequence. For instance, {sin kx} is a bounded sequence in L²([0, 1]), but it does not have a convergent subsequence.

To further illustrate the concept of compactness, we consider some examples in infinite dimensional spaces. Let Ω be an open bounded subset of Rⁿ, and let H¹₀(Ω) be the Hilbert space introduced in previous chapters. Consider the Dirichlet boundary value problem

    −Δu = u^p(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.   (5.1)

To seek weak solutions, we study the corresponding functional

    J_p(u) = ∫_Ω u^{p+1} dx

on the constraint

    H = { u ∈ H¹₀(Ω) | ∫_Ω |∇u|² dx = 1 }.

When p < (n+2)/(n−2), it is in the subcritical case: as we have seen in Chapter 2, the functional satisfies the (PS) condition. If we define

    m_p = inf_{u ∈ H} J_p(u),

then a minimizing sequence {u_k}, J_p(u_k)→m_p, possesses a convergent subsequence in H¹₀(Ω), and the limiting function u₀ is the minimum of J_p in H, hence the desired weak solution.

When p = τ := (n+2)/(n−2), it is in the critical case. One can find a minimizing sequence that does not converge. Actually, when Ω = B₁(0) is a ball, consider

    u_k(x) = [n(n−2)k]^{(n−2)/4} / (1 + k|x|²)^{(n−2)/2} · η(x),

where η ∈ C₀^∞(Ω) is a cut-off function satisfying

    η(x) = 1, x ∈ B_{1/2}(0);  0 ≤ η(x) ≤ 1, elsewhere.
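Both model examples of lost compactness above can be observed numerically. The sketch below (Python/numpy, purely illustrative) follows the unbounded sequence x_k = −k for J(x) = (x² − 1)eˣ, and then computes pairwise L² distances for the bounded sequence {sin(2πkx)} (a convenient exactly-orthogonal variant of {sin kx}); the distances stay bounded away from zero, so no subsequence can be Cauchy:

```python
import numpy as np

# 1) J(x) = (x^2 - 1) e^x on R^1: along x_k = -k both J(x_k) -> 0 and
#    J'(x_k) -> 0, yet {x_k} is unbounded, so it has no convergent subsequence.
def J(t):  return (t**2 - 1)*np.exp(t)
def dJ(t): return (t**2 + 2*t - 1)*np.exp(t)     # derivative of J

xk = -np.arange(1.0, 41.0)
print(abs(J(xk[-1])), abs(dJ(xk[-1])))           # both essentially 0 at x = -40

# 2) {sin(2*pi*k*x)} on [0,1]: bounded in L^2 (every norm is 1/sqrt(2)),
#    but all pairwise L^2 distances equal 1, so no subsequence is Cauchy.
N = 100000
x = np.arange(N)/N                               # uniform grid on [0,1)

def l2(f):
    return np.sqrt(np.mean(f*f))                 # exact for these trig polynomials

fs = [np.sin(2*np.pi*k*x) for k in range(1, 9)]
norms = [l2(f) for f in fs]
gaps = [l2(fs[i] - fs[j]) for i in range(len(fs)) for j in range(i)]
print(round(min(norms), 3), round(min(gaps), 3))  # 0.707 1.0
```

By orthogonality, ||sin(2πkx) − sin(2πmx)||²_{L²} = 1/2 + 1/2 = 1 for k ≠ m, which is exactly what the computation reports.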
Let

    v_k(x) = u_k(x) / ( ∫_Ω |∇u_k|² dx )^{1/2}.

Then one can verify that {v_k} is a minimizing sequence of J_τ in H with J′(v_k)→0, but it does not have a convergent subsequence. Actually, the energy of {v_k} is concentrating at the one point 0 as k→∞; we say that the sequence "blows up" at the point 0.

In essence, the compactness of the functional J_τ here relies on the Sobolev embedding

    H¹(Ω) → L^{p+1}(Ω).

As we learned in Chapter 1, this embedding is compact when p < (n+2)/(n−2) and is not when p = (n+2)/(n−2). In the critical case, the compactness may be recovered by considering other level sets or by changing the topology of the domain: for example, if Ω is an annulus, we can recover the compactness, and the above J_τ has a minimizer in H. In this and the next chapter, the readers will see examples from prescribing Gaussian and scalar curvature, where the corresponding equations are in the critical case and where various means have been exploited to recover the compactness.

5.1 Prescribing Gaussian Curvature on S²

Given a function K(x) on the two dimensional unit sphere S := S² with standard metric g_o, can it be realized as the Gaussian curvature of some pointwise conformal metric g? This has been an interesting problem in differential geometry and is known as the Nirenberg problem. If we let g = e^{2u} g_o, then it is equivalent to solving the following semilinear elliptic equation on S²:

    −Δu + 1 = K(x) e^{2u}, x ∈ S²,   (5.2)

where Δ is the Laplace-Beltrami operator associated with the standard metric g_o. If we replace 2u by u and let R(x) = 2K(x), then equation (5.2) becomes

    −Δu + 2 = R(x) e^{u(x)}.   (5.3)

Here 2 and R(x) are actually the scalar curvature of S² with the standard metric g_o and with the pointwise conformal metric e^u g_o, respectively.

5.1.1 Obstructions

For equation (5.3) to have a solution, the function R(x) must satisfy the obvious Gauss-Bonnet sign condition

    ∫_S R(x) e^u dx = 8π.   (5.4)
This can be seen by integrating both sides of the equation (5.3) and using the fact that

    −∫_S Δu dx = ∫_S ∇u · ∇1 dx = 0.

Condition (5.4) implies that R(x) must be positive somewhere. Besides this obvious necessary condition, there are other necessary conditions, found by Kazdan and Warner. They read

    ∫_S ∇φ_i · ∇R e^u dx = 0, i = 1, 2, 3,   (5.5)

where the φ_i are the first spherical harmonic functions, i.e.

    −Δφ_i = 2φ_i(x), with φ_i(x) = x_i, i = 1, 2, 3,

in the coordinates x = (x₁, x₂, x₃) of R³. Condition (5.5) gives rise to many examples of R(x) for which the equation (5.3) has no solution; in particular, a monotone rotationally symmetric function admits no solution. Later, Bourguignon and Ezin generalized this condition to

    ∫_S X(R) e^u dx = 0,   (5.6)

where X is any conformal Killing vector field on the standard S².

Then for which kinds of functions R(x) can one solve (5.3)? This has been an interesting problem in geometry.

5.1.2 Variational Approach and Key Inequalities

To obtain the existence of a solution for equation (5.3), people usually use a variational approach: first prove the existence of a weak solution; then, by a regularity argument, one can show that the weak solution is smooth and hence is a classical solution.

To obtain a weak solution, we let H¹(S) := H^{1,2}(S) be the Hilbert space with norm

    ||u||₁ = ( ∫_S ( |∇u|² + u² ) dx )^{1/2},

and let

    H(S) = { u ∈ H¹(S) | ∫_S u dx = 0, ∫_S R(x)e^u dx > 0 }.

Then, by the Poincaré inequality, we can use
    ||u|| = ( ∫_S |∇u|² dx )^{1/2}

as an equivalent norm in H(S). Consider the functional

    J(u) = (1/2) ∫_S |∇u|² dx − 8π ln ∫_S R(x) e^u dx

in H(S). To estimate the value of the functional J, we introduce some useful inequalities.

Lemma 5.1.1 (Moser-Trudinger Inequality) Let S be a compact two dimensional Riemannian manifold. Then there exists a constant C such that the inequality

    ∫_S e^{4πu²} dA ≤ C   (5.7)

holds for all u ∈ H¹(S) satisfying

    ∫_S |∇u|² dA ≤ 1 and ∫_S u dA = 0.

We skip the proof of this inequality. Interested readers may see the article of Moser [Mo1] or the article of Chen [C1]. From this inequality, one can derive immediately

Lemma 5.1.2 There exists a constant C such that for all u ∈ H¹(S),

    ∫_S e^u dA ≤ C exp{ (1/16π) ∫_S |∇u|² dA + (1/4π) ∫_S u dA }.   (5.9)

Proof. Let

    ū = (1/4π) ∫_S u(x) dA

be the average of u on S.

i) For any v ∈ H¹(S) with v̄ = 0, let w = v/||∇v||. Then ||∇w|| = 1 and w̄ = 0, and applying Lemma 5.1.1 to such a w, we have

    ∫_S e^{4π v² / ||∇v||²} dA ≤ C.   (5.10)
From the obvious inequality

    ( 2√π v/||∇v|| − ||∇v||/(4√π) )² ≥ 0,

we deduce immediately

    v ≤ 4π v²/||∇v||² + (1/16π) ||∇v||², ∀ v ∈ H¹(S).

Together with (5.10), this yields

    ∫_S e^v dA ≤ C exp{ (1/16π) ||∇v||² }.   (5.11)

ii) For any u ∈ H¹(S), let v = u − ū. Then v̄ = 0, and we can apply (5.11) to v and obtain

    ∫_S e^{u−ū} dA ≤ C exp{ (1/16π) ||∇u||² }.

Consequently,

    ∫_S e^u dA ≤ C exp{ (1/16π) ||∇u||² + ū }.

This completes the proof of the Lemma.

From Lemma 5.1.2, we can conclude that the functional J(·) is bounded from below in H(S). In fact,

    ∫_S R(x)e^u dA ≤ max R ∫_S e^u dA ≤ max R · C exp{ (1/16π) ||∇u||² + ū },

and it follows that, for u ∈ H(S) (so that ū = 0),

    J(u) ≥ (1/2)||∇u||² − 8π ln(C max R) − (1/2)||∇u||² = −8π ln(C max R).   (5.12)

Therefore, the functional J is bounded from below in H(S).

Now we can take a minimizing sequence {u_k} ⊂ H(S):

    J(u_k) → inf_{u ∈ H(S)} J(u).

However, as we mentioned before, a minimizing sequence may not be bounded; it may "leak" to infinity. To prevent this from happening, we need some coerciveness of the functional. Although coerciveness is not true in general, we do have it for J on some special subsets of H(S), for instance on the set of even functions, i.e. antipodal symmetric functions. More precisely, we have the following "Distribution of Mass Principle" from Chen and Li [CL7] [CL8].
Lemma 5.1.3 Let Ω₁ and Ω₂ be two subsets of S such that

    dist(Ω₁, Ω₂) ≥ δ_o > 0,

and let 0 < α_o ≤ 1/2. Then for any ε > 0, there exists a constant C = C(α_o, δ_o, ε) such that the inequality

    ∫_S e^u dA ≤ C exp{ (1/32π + ε) ||∇u||² + ū }   (5.13)

holds for all u ∈ H¹(S) satisfying

    ∫_{Ω₁} e^u dA / ∫_S e^u dA ≥ α_o and ∫_{Ω₂} e^u dA / ∫_S e^u dA ≥ α_o.   (5.14)

Intuitively, if we think of e^{u(x)} as the density of a solid sphere S at the point x, then ∫_{Ω_i} e^u dA is the mass of the region Ω_i, and ∫_S e^u dA is the total mass of the sphere. The Lemma says that if the mass of the sphere is distributed over two separated parts, more precisely, if the total mass does not concentrate at one point, then we have a stronger inequality than (5.9).

In particular, (5.13) holds for even functions u (where u(−x) = u(x), with x and −x two antipodal points). If we apply this stronger inequality to the family of even functions in H(S), we have

    J(u) ≥ (1/2)||∇u||² − 8π ln(C max R) − 8π (1/32π + ε) ||∇u||²
         = (1/4 − 8πε) ||∇u||² − 8π ln(C max R).

Choosing ε small enough, we arrive at

    J(u) ≥ (1/8)||∇u||² − C₁   (5.15)

for all even functions u in H(S). This stronger inequality guarantees the boundedness of a minimizing sequence. Moreover, as we will show in the next section, the functional J is weakly lower semicontinuous, hence a minimizing sequence converges to a weak limit u_o in H¹(S), and the u_o so obtained is the minimum of J and therefore a weak solution of equation (5.2). Then, by a standard regularity argument, we will be able to conclude that u_o is a classical solution.

In many articles, the authors use the "center of mass" analysis: consider the center of mass of the sphere with density e^{u(x)} at the point x,

    P(u) = ∫_S x e^u dA / ∫_S e^u dA,

and use this to study the behavior of a minimizing sequence (see [CD], [CD1], [CY], and [CY1]). Both analyses are equivalent on the sphere S².
However, since the "center of mass" P(u) lies in the ambient space R³, on a general two dimensional compact Riemannian surface this kind of analysis cannot be applied, while the "Distribution of Mass" analysis can be applied to any compact surface, even to surfaces with conical singularities. Interested readers may see the authors' papers [CL7] [CL8] for more general versions of inequality (5.13).

Proof of Lemma 5.1.3. It suffices to show that, for u ∈ H¹(S) with ∫_S u dA = 0, condition (5.14) implies

    ∫_S e^u dA ≤ C exp{ (1/32π + ε) ||∇u||² }.   (5.16)

Let g₁, g₂ be two smooth functions such that

    1 ≥ g_i ≥ 0, g_i(x) ≡ 1 for x ∈ Ω_i, i = 1, 2, and supp g₁ ∩ supp g₂ = ∅.

Case i) If ||∇(g₁u)|| ≤ ||∇(g₂u)||, then by (5.14) and (5.9),

    ∫_S e^u dA ≤ (1/α_o) ∫_{Ω₁} e^u dA ≤ (1/α_o) ∫_S e^{g₁u} dA
               ≤ (C/α_o) exp{ (1/16π) ||∇(g₁u)||² + (1/4π) ∫_S g₁u dA }
               ≤ (C/α_o) exp{ (1/32π) ( ||∇(g₁u)||² + ||∇(g₂u)||² ) + (1/4π) ∫_S g₁u dA }
               ≤ (C/α_o) exp{ ((1+ε₁)/32π) ||∇u||² + c₁(ε₁) ||u||²_{L²} }   (5.17)

for some small ε₁ > 0. Here we have used the fact that, since g₁u and g₂u have disjoint supports,

    ||∇(g₁u)||² + ||∇(g₂u)||² = ∫_S |∇(g₁u) + ∇(g₂u)|² dA
        = ∫_S | (∇g₁ + ∇g₂) u + (g₁ + g₂) ∇u |² dA
        ≤ ||∇u||² + C₁ ||∇u|| ||u||_{L²} + C₂ ||u||²_{L²},

and that the average term (1/4π) ∫_S g₁u dA is also controlled by ||u||_{L²}.

In order to get rid of the term ||u||²_{L²} on the right hand side of the above inequality, we employ the condition ∫_S u dA = 0. Given any η > 0, choose a such that

    meas{ x ∈ S | u(x) ≥ a } = η.

Applying (5.17) to the function (u − a)⁺ ≡ max{0, u − a}, we have

    ∫_S e^u dA ≤ e^a ∫_S e^{u−a} dA ≤ e^a ∫_S e^{(u−a)⁺} dA
               ≤ C exp{ ((1+ε₁)/32π) ||∇u||² + c₁(ε₁) ||(u−a)⁺||²_{L²} + a },   (5.18)
where C = C(δ_o, α_o). By the Hölder and Sobolev inequalities (see Theorem 4.6.3),

    ||(u−a)⁺||²_{L²} ≤ η^{1/2} ||(u−a)⁺||²_{L⁴} ≤ c η^{1/2} ( ||∇u||² + ||(u−a)⁺||²_{L²} ).   (5.19)

Choose η so small that c η^{1/2} ≤ 1/2. Then the above inequality implies

    ||(u−a)⁺||²_{L²} ≤ 2c η^{1/2} ||∇u||².   (5.20)

Now, by the Poincaré inequality (see Theorem 4.6.6),

    a η ≤ ∫_{u ≥ a} u dA ≤ ∫_S |u| dA ≤ c ||∇u||,

hence for any δ > 0,

    a ≤ δ ||∇u||² + c₂ / (4 δ η²).   (5.21)

Now (5.16) follows from (5.18), (5.20), and (5.21).

Case ii) If ||∇(g₂u)|| ≤ ||∇(g₁u)||, then (5.16) follows by a similar argument as in Case i). This completes the proof.

5.1.3 Existence of Weak Solutions

Theorem 5.1.1 (Moser) Suppose the function R is even, that is,

    R(x) = R(−x), ∀ x ∈ S².

Then the necessary and sufficient condition for equation (5.3) to have a solution is that R(x) be positive somewhere.

To prove the existence of a weak solution, we consider, as in the previous section, the functional

    J(u) = (1/2) ∫_S |∇u|² dx − 8π ln ∫_S R(x)e^u dx

in

    H_e(S) = { u ∈ H(S) | u is even almost everywhere }.

Let {u_k} be a minimizing sequence of J in H_e(S). By (5.15), {u_k} is bounded in H¹(S), and hence possesses a subsequence (still denoted by {u_k}) that converges weakly to some element u_o in H¹(S). Then, by the compact Sobolev embedding of H¹ into L²,

    u_k → u_o in L²(S),   (5.22)

and ∫_S u_o(x) dA = 0. Consequently,

    u_o(−x) = u_o(x) almost everywhere,
that is, u_o ∈ H_e(S). Furthermore, as we showed in the previous chapter, the Dirichlet integral is weakly lower semicontinuous:

    lim_{k→∞} ∫_S |∇u_k|² dA ≥ ∫_S |∇u_o|² dA.

To show that the functional J is weakly lower semicontinuous, i.e.

    J(u_o) ≤ lim J(u_k) = inf_{u ∈ H_e(S)} J(u),

we need the following

Lemma 5.1.4 If {u_k} converges weakly to u in H¹(S), then there exists a subsequence (still denoted by {u_k}) such that

    ∫_S e^{u_k(x)} dA → ∫_S e^{u(x)} dA, as k → ∞.

We postpone the proof of this lemma for a moment. From this lemma, we obviously have

    ∫_S R(x)e^{u_k} dA → ∫_S R(x)e^{u_o} dA,

and it follows that J(·) is weakly lower semicontinuous; consequently,

    J(u_o) ≤ inf_{u ∈ H_e(S)} J(u).

On the other hand, since u_o ∈ H_e(S), we have J(u_o) ≥ inf_{u ∈ H_e(S)} J(u). Therefore, u_o is a minimum of J in H_e(S), and for any v ∈ H_e(S),

    0 = <J′(u_o), v> = ∫_S <∇u_o, ∇v> dA − ( 8π / ∫_S R(x)e^{u_o} dA ) ∫_S R(x)e^{u_o} v dA.

For any w ∈ H(S), let

    v(x) = (1/2) ( w(x) + w(−x) ) := (1/2) ( w(x) + w⁻(x) ).

Then obviously v ∈ H_e(S), and it follows that

    0 = <J′(u_o), v> = (1/2) ( <J′(u_o), w> + <J′(u_o), w⁻> ) = <J′(u_o), w>.

Here we have used the fact that u_o(−x) = u_o(x). This implies that u_o is a critical point of J in H¹(S), and hence a constant multiple of u_o is a weak solution of equation (5.3). Now what is left is to prove Lemma 5.1.4.

Proof of Lemma 5.1.4.
From Lemma 5.1.2, for any real number p > 0, we have

    ∫_S e^{pu} dA ≤ C exp{ (p²/16π) ∫_S |∇u|² dA + (p/4π) ∫_S u dA }.   (5.23)

And by the Hölder inequality, for any 0 < p < 2,

    ∫_S |∇(e^u)|^p dA = ∫_S e^{pu} |∇u|^p dA
        ≤ ( ∫_S e^{2pu/(2−p)} dA )^{(2−p)/2} ( ∫_S |∇u|² dA )^{p/2}.   (5.24)

Recall that ∫_S u_k dA = 0. Inequalities (5.23) and (5.24) imply that {e^{u_k}} is a bounded sequence in H^{1,p} for any 1 < p < 2. By the compact Sobolev embedding of H^{1,p} into L¹, there exists a subsequence of {e^{u_k}} which converges strongly to e^{u_o} in L¹(S). This completes the proof of the Lemma.

Chen and Ding [CD] generalized Moser's result to a broader class of symmetric functions, as we will describe in the following.

Let O(3) be the group of orthogonal transformations of R³, and let G be a closed finite subgroup of O(3). Assume that R(x) is G-symmetric, or G-invariant, i.e.

    R(gx) = R(x), ∀ g ∈ G.   (5.25)

The action of G on S² induces an action of G on the Hilbert space H¹(S):

    g[u](x) = u(gx),

under which the fixed point subspace of H¹(S) is given by

    X = { u ∈ H¹(S) | u(gx) = u(x), ∀ g ∈ G }.

Let

    X* = X ∩ H(S), where H(S) = { u ∈ H¹(S) | ∫_S R(x)e^u dA > 0 }.

Under the assumption (5.25), the functional

    J(u) = (1/2) ∫_S |∇u|² dx − 8π ln ∫_S R(x)e^u dx

is G-invariant in X*, i.e.

    J(g[u]) = J(u), ∀ u ∈ X*, ∀ g ∈ G.

We seek a minimum of J in X*. The following lemma guarantees that such a minimum is the critical point we desire.
Lemma 5.1.5 Assume that u_o is a critical point of J in X*. Then it is also a critical point of J in H(S).

Proof. For any u, v ∈ H(S), we have

    <J′(u), v> = ∫_S ∇u · ∇v dA − ( 8π / ∫_S Re^u dA ) ∫_S Re^u v dA.   (5.26)

First, one can easily see that for any g ∈ G and any u, v ∈ H(S),

    <J′(u), g[v]> = <J′(g⁻¹[u]), v>,

where g⁻¹ is the inverse of g. Assume G = {g₁, ···, g_m}. For any w ∈ H(S), let

    v = (1/m) ( g₁[w] + ··· + g_m[w] ).

Then for any g ∈ G,

    g[v] = (1/m) ( g g₁[w] + ··· + g g_m[w] ) = v.

Hence v ∈ X*. It follows that

    0 = <J′(u_o), v> = (1/m) Σ_i <J′(u_o), g_i[w]>
      = (1/m) Σ_i <J′(g_i⁻¹[u_o]), w> = (1/m) Σ_i <J′(u_o), w> = <J′(u_o), w>.

Therefore, u_o is a critical point of J in H(S).

Let

    F_G = { x ∈ S² | gx = x, ∀ g ∈ G }

be the set of fixed points under the action of G. If F_G ≠ ∅, let m = max_{F_G} R.

Theorem 5.1.2 (Chen and Ding [CD]). Suppose that R ∈ C^α(S²) satisfies (5.25) with max_S R > 0. Then equation (5.2) has a solution provided one of the following conditions holds:
(i) F_G = ∅;
(ii) m ≤ 0;
(iii) m > 0 and c* := inf_{X*} J < −8π ln 4πm.

Remark 5.1.1 In the case when the function R(x) is even, the group is G = {id, −id}, where id is the identity transform:

    id(x) = x and −id(x) = −x, ∀ x ∈ S².

Obviously, in this case F_G is empty. Hence the above Theorem contains Moser's existence result as a special case.

Proof. To prove this theorem, we need another two lemmas.
Lemma 5.1.6 (Onofri [On]) For any u ∈ H¹(S) with ∫_S u dA = 0,

    ∫_S e^u dA ≤ 4π exp( (1/16π) ∫_S |∇u|² dA ).   (5.27)

Lemma 5.1.7 Suppose that {u_i} is a sequence in H(S) with J(u_i) ≤ β. If ||∇u_i|| → ∞, then there exists a subsequence (still denoted by {u_i}) and a point ξ ∈ S² with R(ξ) > 0, such that

    ∫_S R(x)e^{u_i} dA = ( R(ξ) + o(1) ) ∫_S e^{u_i} dA   (5.28)

and

    lim_{i→∞} J(u_i) ≥ −8π ln 4πR(ξ).   (5.29)

Proof. We first show that, passing to a subsequence, there exists a point ξ ∈ S² such that for any ε > 0,

    ∫_{S \ B_ε(ξ)} e^{u_i} dA → 0.   (5.30)

Roughly speaking, the mass of the surface S with density e^{u_i(x)} at the point x is concentrating at the one point ξ. Otherwise, there exist two subsets Ω₁ and Ω₂ of S with dist(Ω₁, Ω₂) ≥ ε_o > 0 such that {u_i} satisfies all the conditions in Lemma 5.1.3. Hence the inequality

    ∫_S e^{u_i} dA ≤ C exp( (1/16π − δ) ||∇u_i||² )

holds for some δ > 0. Then, by the previous argument, we conclude that ||∇u_i|| is bounded. This contradicts our assumption.

Now (5.30) and the continuity assumption on R(x) imply (5.28). To verify (5.29), we use (5.28) and Onofri's Lemma:

    J(u_i) = (1/2) ∫_S |∇u_i|² dA − 8π ln ∫_S R(x)e^{u_i} dA
           = (1/2) ∫_S |∇u_i|² dA − 8π ln( R(ξ) + o(1) ) − 8π ln ∫_S e^{u_i} dA
           ≥ (1/2) ∫_S |∇u_i|² dA − 8π ln( R(ξ) + o(1) ) − 8π ( ln 4π + (1/16π) ∫_S |∇u_i|² dA )
           = −8π ln( 4πR(ξ) + o(1) ).
Letting i → ∞, we arrive at (5.29). This completes the proof of the Lemma.

Proof of Theorem 5.1.2. Define

    c* = inf_{X*} J.

As we argued in the previous section (see (5.12)), we have c* > −∞. Let {u_i} ⊂ X* be a minimizing sequence: J(u_i) → c*. Similar to the reasoning at the beginning of this section, one can see that we only need to show that {u_i} is bounded in H¹(S).

Assume, on the contrary, that ||∇u_i|| → ∞. Then, from the proof of Lemma 5.1.7, there is a point ξ ∈ S² such that for any ε > 0,

    ∫_{S \ B_ε(ξ)} e^{u_i} dA → 0, as i → ∞.   (5.31)

Since u_i is invariant under the action of G, we must also have, for every g ∈ G,

    ∫_{S \ B_ε(gξ)} e^{u_i} dA → 0.   (5.32)

Because ε is arbitrary, we deduce that gξ = ξ, that is, ξ ∈ F_G. Under condition (i), F_G is empty, a contradiction. Moreover, by Lemma 5.1.7, we must have R(ξ) > 0 and c* ≥ −8π ln 4πR(ξ); noticing that m ≥ R(ξ) by the definition of m = max_{F_G} R, this again contradicts conditions (ii) and (iii). Therefore, under any one of the conditions (i), (ii), or (iii), the minimizing sequence {u_i} must be bounded, and hence possesses a subsequence converging weakly to some u_o ∈ X*. As we argued in the previous section, a constant multiple of u_o is the desired weak solution of equation (5.3).

Here conditions (i) and (ii) in Theorem 5.1.2 can easily be verified through the group G or the given function R. However, one may wonder: under what conditions on R do we have condition (iii)? The following theorem provides a sufficient condition.

Theorem 5.1.3 (Chen and Ding [CD]). Suppose that R ∈ C²(S²) satisfies (5.25), F_G ≠ ∅, and m = max_{F_G} R > 0. Let

    D = { x ∈ S² | R(x) = m }.

If ΔR(x_o) > 0 for some x_o ∈ D, then

    c* < −8π ln 4πm.   (5.33)
Proof. Choose a spherical polar coordinate system on S := S²,

    (θ, φ), −π/2 ≤ θ ≤ π/2, 0 ≤ φ ≤ 2π,

with x_o = (π/2, φ) as the north pole. Consider the following family of functions:

    u_λ(θ, φ) = ln ( (1 − λ²) / (1 − λ sin θ)² ), v_λ = u_λ − (1/4π) ∫_S u_λ dA, 0 < λ < 1.

One can verify that this family of functions satisfies the equation

    −Δu_λ + 2 = 2 e^{u_λ}.   (5.34)

As λ → 1, the family {u_λ} "blows up" (tends to ∞) at the one point x_o, while approaching −∞ everywhere else on S². Therefore, the mass of the sphere with density e^{u_λ(x)} is concentrating at the point x_o as λ → 1.

It is easy to see that v_λ ∈ H(S) for λ close to 1. Moreover, we claim that v_λ ∈ X, i.e.

    v_λ(gx) = v_λ(x), ∀ g ∈ G.

In fact, since g is an isometry of S² and gx_o = x_o, we have

    d(gx, x_o) = d(gx, gx_o) = d(x, x_o),

where d(·, ·) is the geodesic distance on S². But v_λ depends only on θ, that is, on d(x, x_o), so v_λ(gx) = v_λ(x), ∀ g ∈ G. It follows that v_λ ∈ X* for λ sufficiently close to 1.

By a direct computation, one can verify that

    (1/2) ∫_S |∇u_λ|² dA + 8π ū_λ = 0.

Consequently,

    J(v_λ) = −8π ln ∫_S R(x) e^{u_λ} dA.

Notice that c* ≤ J(v_λ) by the definition of c*. Thus, in order to show (5.33), we only need to verify that, for λ sufficiently close to 1,

    ∫_S R(x) e^{u_λ} dA > 4πm.   (5.35)

Let δ > 0 be small. We have

    ∫_S R e^{u_λ} dA
      = (1 − λ²) { ∫_0^{2π} ∫_{π/2−δ}^{π/2} [R(θ, φ) − m] cos θ / (1 − λ sin θ)² dθ dφ
                   + ∫_0^{2π} ∫_{−π/2}^{π/2−δ} [R(θ, φ) − m] cos θ / (1 − λ sin θ)² dθ dφ } + 4πm
      = (1 − λ²) { I + II } + 4πm.

Here we have used the fact that

    ∫_S e^{u_λ} dA = 4π,

which can be derived directly from equation (5.34). For each fixed δ, it is easy to check that the integral II remains bounded as λ tends to 1. Thus (5.35) will hold if we can show that the integral I → +∞ as λ → 1.

Notice that x_o is a maximum point of the restriction of R on F_G. Hence, by the principle of symmetric criticality, x_o is a critical point of R, that is, ∇R(x_o) = 0. Using this fact, the second order Taylor expansion of R(x) near x_o, and the fact that (θ − π/2)² behaves like cos²θ near θ = π/2, we can derive

    I ≥ c_o ∫_{π/2−δ}^{π/2} cos³θ / (1 − λ sin θ)² dθ.

Here c_o is a positive constant; we have chosen δ sufficiently small and used the fact that ΔR(x_o) is positive. From the above inequality, one can easily verify that I → +∞ as λ → 1, since the integral

    ∫_{π/2−δ}^{π/2} cos³θ / (1 − sin θ)² dθ

diverges to +∞. This completes the proof of the Theorem.
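The key identity (5.34) for the test functions u_λ in the proof above can be verified by computer algebra. On S² with latitude θ, the Laplace-Beltrami operator of a function of θ alone reduces to Δf = (1/cos θ)(cos θ f′)′; the sketch below (Python/sympy, an illustrative check, not part of the text) confirms −Δu_λ + 2 = 2e^{u_λ} for symbolic λ:

```python
import sympy as sp

theta, lam = sp.symbols('theta lambda')

# u_lambda from the proof of Theorem 5.1.3
u = sp.log((1 - lam**2)/(1 - lam*sp.sin(theta))**2)

# Laplace-Beltrami on S^2 (metric d theta^2 + cos^2 theta d phi^2, theta = latitude)
# for a function of theta alone: Delta f = (1/cos theta) d/d theta (cos theta * df/d theta)
lap_u = sp.simplify(sp.diff(sp.cos(theta)*sp.diff(u, theta), theta)/sp.cos(theta))

residual = sp.simplify(-lap_u + 2 - 2*sp.exp(u))
print(residual)  # 0, i.e. u_lambda solves  -Delta u + 2 = 2 e^u,  which is (5.34)
```

The residual vanishes identically in θ and λ, so every member of the family solves (5.34), as claimed.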
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
Let M be a compact two dimensional Riemannian manifold, and let g_o(x) be a metric on M with the corresponding Laplace-Beltrami operator Δ and Gaussian curvature K_o(x). Given a function K(x) on M, can it be realized as the Gaussian curvature associated to the pointwise conformal metric g(x) = e^{2u(x)} g_o(x)? As we mentioned before, to answer this question, it is equivalent to solve the following semilinear elliptic equation:

    −Δu + K_o(x) = K(x) e^{2u}, x ∈ M.   (5.36)
To simplify this equation, we introduce the following simple existence result.
Lemma 5.2.1 Let f(x) ∈ L²(M). Then the equation
\[ -\Delta u = f(x), \quad x \in M \tag{5.37} \]
possesses a solution if and only if
\[ \int_M f(x)\, dA = 0. \tag{5.38} \]
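Before turning to the proof, the solvability criterion of Lemma 5.2.1 is easy to watch numerically on the simplest closed manifold, the circle (a 1-D illustrative stand-in for M, not the book's setting): solving −u'' = f mode by mode in Fourier space works exactly when the zero mode, i.e. the mean of f, vanishes.

```python
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n
f = np.cos(3 * x) + 0.5 * np.sin(x)      # mean zero: the solvable case
k = np.fft.fftfreq(n, d=1.0 / n)         # integer wave numbers
fh = np.fft.fft(f)

uh = np.zeros_like(fh)
nz = k != 0
uh[nz] = fh[nz] / k[nz]**2               # invert -u'' = f on the nonzero modes
u = np.real(np.fft.ifft(uh))

# the residual of -u'' - f is exactly minus the mean of f (the only obstruction)
resid = np.real(np.fft.ifft(k**2 * uh)) - f
print(np.max(np.abs(resid)))             # tiny (machine precision) since mean(f) = 0

g = f + 1.0                              # nonzero mean: no periodic solution exists
print(abs(np.mean(g)))                   # the obstruction is exactly this mean
```

The zero mode of −d²/dx² plays the role of the constants on M: they span the kernel, so the right hand side must be orthogonal to them, which is precisely condition (5.38).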
Proof. The 'only if' part can be seen by integrating both sides of the equation (5.37) on M. We now prove the 'if' part. Assume that (5.38) holds; we will obtain the existence of a solution to (5.37). To this end, we consider the functional
\[ J(u) = \frac{1}{2}\int_M |\nabla u|^2\, dA - \int_M f(x)\,u\, dA \]
under the constraint
\[ G = \Big\{ u \in H^1(M) \ \Big|\ \int_M u(x)\, dA = 0 \Big\}. \]
As we have seen before, we can use, in G, the equivalent norm
\[ \|u\| = \left( \int_M |\nabla u|^2\, dA \right)^{1/2}. \]
By the Hölder and Poincaré inequalities, we derive
\[
J(u) \ge \frac{1}{2}\|u\|^2 - \|f\|_{L^2}\|u\|_{L^2}
\ge \frac{1}{2}\|u\|^2 - c_o\|f\|_{L^2}\|u\|
= \frac{1}{2}\big( \|u\| - c_o\|f\|_{L^2} \big)^2 - \frac{c_o^2\|f\|_{L^2}^2}{2}. \tag{5.39}
\]
The above inequality implies immediately that the functional J(u) is bounded from below in the set G. Let {u_k} be a minimizing sequence of J in G, i.e.
\[ J(u_k) \to \inf_G J, \quad \text{as } k \to \infty. \]
Then again by (5.39), {u_k} is bounded in H¹, and hence possesses a subsequence (still denoted by {u_k}) that converges weakly to some u_o in H¹(M). By the compact Sobolev imbedding H¹(M) ↪↪ L²(M),
we see that u_k → u_o strongly in L²(M), and hence also in L¹(M). Therefore
\[ \int_M u_o(x)\, dA = \lim_{k\to\infty} \int_M u_k(x)\, dA = 0 \tag{5.40} \]
and
\[ \int_M f(x)\,u_o(x)\, dA = \lim_{k\to\infty} \int_M f(x)\,u_k(x)\, dA. \tag{5.41} \]
(5.40) implies that u_o ∈ G, and hence
\[ J(u_o) \ge \inf_G J(u). \tag{5.42} \]
Meanwhile, (5.41) and the weak lower semicontinuity of the Dirichlet integral,
\[ \liminf_{k\to\infty} \int_M |\nabla u_k|^2\, dA \ge \int_M |\nabla u_o|^2\, dA, \]
lead to
\[ J(u_o) \le \inf_G J(u). \]
From this and (5.42) we conclude that u_o is a minimizer of J in the set G, and therefore there exists a constant λ (a Lagrange multiplier), such that u_o is a weak solution of
\[ -\Delta u_o - f(x) = \lambda. \]
Integrating both sides and using (5.38), we find that λ = 0. Therefore u_o is a solution of equation (5.37). This completes the proof of the Lemma.

We now simplify equation (5.36). First let ũ = 2u; then
\[ -\Delta \tilde{u} + 2K_o(x) = 2K(x)\,e^{\tilde{u}}. \tag{5.43} \]
Let v(x) be a solution of
\[ -\Delta v = 2K_o(x) - 2\bar{K}_o, \tag{5.44} \]
where
\[ \bar{K}_o = \frac{1}{|M|}\int_M K_o(x)\, dA \]
is the average of K_o(x) on M. Because the integral of the right hand side of (5.44) on M is 0, the solution of (5.44) exists due to Lemma 5.2.1. Let w = ũ + v. Then it is easy to verify that
\[ -\Delta w + 2\bar{K}_o = 2K(x)\,e^{-v}e^{w}. \tag{5.45} \]
Finally, letting
\[ \alpha = 2\bar{K}_o \quad \text{and} \quad R(x) = 2K(x)\,e^{-v(x)}, \]
and rewriting w as u, we reduce equation (5.36) to
\[ -\Delta u + \alpha = R(x)\,e^{u(x)}, \quad x \in M. \tag{5.46} \]
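The chain of substitutions (5.43)-(5.45) is purely algebraic and can be verified symbolically. The sketch below is a 1-D check, with d²/dx² standing in for the Laplace-Beltrami operator: if ũ = 2u satisfies (5.43) and v satisfies (5.44), then w = ũ + v satisfies (5.45), because the w-equation is exactly the sum of the other two.

```python
import sympy as sp

x = sp.symbols('x')
u, v, K, Ko = [sp.Function(s)(x) for s in ('u', 'v', 'K', 'Ko')]
Kbar = sp.symbols('Kbar')            # stands for the average of Ko on M

ut = 2 * u                           # u-tilde = 2u
w = ut + v
eq_ut = -sp.diff(ut, x, 2) + 2 * Ko - 2 * K * sp.exp(ut)              # (5.43)
eq_v = -sp.diff(v, x, 2) - (2 * Ko - 2 * Kbar)                        # (5.44)
eq_w = -sp.diff(w, x, 2) + 2 * Kbar - 2 * K * sp.exp(-v) * sp.exp(w)  # (5.45)

# (5.45) = (5.43) + (5.44) as an identity, since e^{-v} e^{u~+v} = e^{u~}
print(sp.simplify(eq_w - (eq_ut + eq_v)))   # 0
```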
By the well-known Gauss-Bonnet Theorem, if g is a Riemannian metric on M with corresponding Gaussian curvature K_g(x), then
\[ \int_M K_g(x)\, dA_g = 2\pi\chi(M), \]
where χ(M) is the Euler characteristic of M, and dA_g the area element associated to the metric g. We say M is negatively curved if χ(M) < 0, and hence α < 0.

5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions

Let M be a compact two-dimensional manifold, and let α < 0. In [KW], Kazdan and Warner considered the equation
\[ -\Delta u + \alpha = R(x)\,e^{u(x)}, \quad x \in M, \tag{5.47} \]
and proved

Theorem 5.2.1 (Kazdan-Warner) Let R̄ be the average of R(x) on M. Then
i) A necessary condition for (5.46) to have a solution is R̄ < 0.
ii) If R̄ < 0, then there is a constant α_o, with −∞ ≤ α_o < 0, such that one can solve (5.46) if α > α_o, but cannot solve (5.46) if α < α_o.
iii) α_o = −∞ if and only if R(x) ≤ 0.

The existence of a solution was established by using the method of upper and lower solutions. We say that u₊ is an upper solution of equation (5.47) if
\[ -\Delta u_+ + \alpha \ge R(x)\,e^{u_+}, \quad x \in M. \]
Similarly, we say that u₋ is a lower solution of (5.47) if
\[ -\Delta u_- + \alpha \le R(x)\,e^{u_-}, \quad x \in M. \]
The following approach is adapted from [KW] with minor modifications.

Lemma 5.2.2 If a solution of (5.47) exists, then R̄ < 0. Even more strongly, the unique solution φ of
\[ \Delta \phi + \alpha\phi = R(x), \quad x \in M \tag{5.48} \]
must be positive.
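The proof below rests on the substitution v = e^{−u}, which converts (5.47) into the quasilinear equation (5.49). A symbolic check of that change of variables (a 1-D sketch, with d²/dx² in place of Δ):

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')
u = sp.Function('u')(x)
R = sp.Function('R')(x)

v = sp.exp(-u)
lhs49 = sp.diff(v, x, 2) + alpha * v - R - sp.diff(v, x)**2 / v   # equation (5.49)

# factoring out e^{-u} leaves exactly the residual of (5.47): -u'' + alpha - R e^u
bracket = sp.simplify(lhs49 * sp.exp(u))
target = -sp.diff(u, x, 2) + alpha - R * sp.exp(u)
print(sp.simplify(bracket - target))   # 0
```

So v = e^{−u} solves (5.49) precisely when u solves (5.47), and v > 0 automatically.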
Proof. We first prove the second part. Using the substitution v = e^{−u}, one sees that (5.47) has a solution u if and only if there is a positive solution v of
\[ \Delta v + \alpha v - R(x) - \frac{|\nabla v|^2}{v} = 0. \tag{5.49} \]
Assume that v is such a positive solution. Let φ be the unique solution of (5.48) and let w = φ − v. Then
\[ \Delta w + \alpha w = -\frac{|\nabla v|^2}{v} \le 0. \tag{5.50} \]
It follows that w ≥ 0. Otherwise, there is a point xo ∈ M, such that w(xo) < 0 and xo is a minimum of w, hence Δw(xo) ≥ 0. Since α < 0, we must then have
\[ \Delta w(x_o) + \alpha w(x_o) > 0, \]
a contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. Integrating both sides of (5.48), we see immediately that R̄ < 0. This shows that a necessary condition for (5.47) to have a solution is that the unique solution φ of (5.48) must be positive. This completes the proof of the Lemma.

Lemma 5.2.3 (The Existence of Upper Solutions)
i) If R(x) ≤ 0 but not identically 0, then for any given α < 0 there exists an upper solution u₊ of (5.47).
ii) If R̄ < 0, then there exists a small δ > 0, such that for −δ ≤ α < 0, there exists an upper solution u₊ of (5.47).

Proof. Let v be a solution of
\[ -\Delta v = R(x) - \bar{R}. \]
We try to choose constants a and b, such that u₊ = av + b is an upper solution. We have
\[ -\Delta u_+ + \alpha - R(x)e^{u_+} = aR(x) - a\bar{R} + \alpha - R(x)e^{av+b}. \tag{5.51} \]
We regroup the right hand side of (5.51) as
\[ f(a, b, x) \equiv (-a\bar{R} + \alpha) - R(x)\big(e^{av+b} - a\big). \tag{5.52} \]
In Case i), when R(x) ≤ 0, we first choose a sufficiently large, so that −aR̄ + α > 0. Then we choose b large, so that
\[ e^{av(x)+b} - a \ge 0, \quad \forall x \in M, \]
so that the right hand side of (5.51) is nonnegative. This is possible because v(x) is continuous and M is compact. It follows from (5.51) and (5.52) that
w ∈ W 2. uniformly on M as a→0. Let u− = w − λ. we can make eav(x) − 1 ≤ ¯ −R . we choose α ≥ a2 and eb = a. Lemma 5. Therefore. where a is a positive number to be determine later.p (M ) and hence is continuous. This completes the proof of the Lemma.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds 147 − u+ + α − R(x)eu+ ≥ 0. (5. we see that eav(x) →1 . b. By the Lp regularity theory. by choosing a suﬃciently small. R In Case ii).53) . Then (5. −R(x)}.52) becomes ¯ f (a. x) ≥ − ¯ aR − aR(x)(eav − 1) 2 ¯ R ≥ a − − R L∞ eav − 1 2 ¯ −R = a R L∞ − eav − 1 .47) such that u− < u+ .5. then one can clearly use any sufﬁciently large negative constant as u− .p (M ) of (5.p (M ). If R(x) is bounded from below.4 (Existence of Lower Solutions) Given any R ∈ Lp (M ) and a function u+ ∈ W 2. Choosing λ suﬃciently large so that a − ew−λ ≥ 0. there is a lower solution u− ∈ W 2. 2 R L∞ for all x ∈ M. Choose a positive constant a so that aK = −α. 2 R L∞ Again by the continuity of v and the compactness of M . For general R(x) ∈ Lp (M ).2. p Then aK(x) + α = 0 and (aK(x) + α) ∈ L (M ). Thus u+ so constructed is an upper solution ¯ R of (5.47) for α ≥ a2 . Proof. Then − u− + α − R(x)eu− = − w + α − R(x)ew−λ = −aK(x) − R(x)ew−λ ≤ −K(x)(a − ew−λ ). Thus there is a solution w of w = aK(x) + α. let ¯ K(x) = max{1.
and noticing that K(x) ≥ 1 > 0, we arrive at
\[ -\Delta u_- + \alpha - R(x)e^{u_-} \le 0, \quad \text{for all } x \in M. \]
Therefore u₋ is a lower solution. Moreover, by (5.53), one can make λ large enough such that u₋ = w − λ < u₊. This completes the proof of the Lemma.

Lemma 5.2.5 Let p > 2. If there exist upper and lower solutions u₊, u₋ ∈ W^{2,p}(M) of (5.47) with u₋ ≤ u₊, then there is a solution u ∈ W^{2,p}(M) of (5.47) with
\[ u_- \le u \le u_+. \]
Moreover, u is C^∞ in any open set where R(x) is C^∞.

Proof. The Laplace operator in the original equation does not obey the maximum principle; however, if we let
\[ Lu = -\Delta u + k(x)u, \]
where k(x) is any positive function on M, then the new operator L obeys the maximum principle: If Lu ≥ 0, then u(x) ≥ 0. To see this, we suppose on the contrary that there is some point on M where u < 0. Let xo be a negative minimum of u; then −Δu(xo) ≤ 0, and it follows that
\[ -\Delta u(x_o) + k(x_o)u(x_o) \le k(x_o)u(x_o) < 0, \]
a contradiction. Hence the maximum principle holds for L.

We now write equation (5.47) as
\[ Lu \equiv -\Delta u + k(x)u = R(x)e^{u} - \alpha + k(x)u := f(x, u), \]
then start from the upper solution u₊ and apply the iterations
\[ Lu_1 = f(x, u_+), \qquad Lu_{i+1} = f(x, u_i), \quad i = 1, 2, \cdots. \]
In order that for each i, u₋ ≤ u_i ≤ u₊, we need the operator L to obey the maximum principle and f(x, ·) to possess some monotonicity; more precisely, it requires that f(x, ·) be monotone increasing, that is,
\[ \frac{\partial f}{\partial u} = R(x)e^{u} + k(x) \ge 0. \]
To this end, we choose
\[ k(x) = k_1(x)\,e^{u_+(x)}, \quad \text{where } k_1(x) = \max\{1, -R(x)\}. \]
Based on this choice of L and f, we show by mathematical induction that the iteration sequence so constructed is monotone decreasing:
\[ u_{i+1} \le u_i \le \cdots \le u_1 \le u_+. \tag{5.54} \]
In fact, from Lu₊ ≥ f(x, u₊) and Lu₁ = f(x, u₊), we derive immediately
\[ L(u_+ - u_1) \ge 0. \]
Then the maximum principle for L implies u₊ ≥ u₁. This completes the first step of our induction. Assume that for an integer m,
\[ u_{m+1} \le u_m \le \cdots \le u_+; \]
we will show that u_{m+2} ≤ u_{m+1}. Since f(x, ·) is monotone increasing in u for u ≤ u₊, we have
\[ f(x, u_{m+1}) \le f(x, u_m), \]
and hence L(u_{m+1} − u_{m+2}) ≥ 0. Therefore u_{m+1} ≥ u_{m+2}. This verifies (5.54) through mathematical induction. Similarly, one can show the other side of the inequality and arrive at
\[ u_- \le u_{i+1} \le u_i \le \cdots \le u_+, \quad i = 1, 2, \cdots. \tag{5.55} \]
Since u₊, u₋, and the u_i are continuous, inequality (5.55) shows that {u_i} is uniformly bounded. Consequently,
\[ \|Lu_{i+1}\|_{L^p} = \|R(x)e^{u_i} - \alpha + k(x)u_i\|_{L^p} \le C, \quad i = 1, 2, \cdots. \]
Hence {u_i} is bounded in W^{2,p}(M). By the compact imbedding of W^{2,p} into C¹, there is a subsequence that converges uniformly to some function u in C¹(M). In view of the monotonicity (5.55), we conclude that the entire sequence {u_i} itself converges uniformly to u. Moreover,
\[
\|u_{i+1} - u_{j+1}\|_{W^{2,p}} \le C\,\|L(u_{i+1} - u_{j+1})\|_{L^p}
\le C\big( \|R\|_{L^p}\|e^{u_i} - e^{u_j}\|_{L^\infty} + \|k\|_{L^p}\|u_i - u_j\|_{L^\infty} \big).
\]
Therefore {u_i} converges strongly in W^{2,p}, so u ∈ W^{2,p}(M). Since L : W^{2,p} → L^p is continuous, it follows that u is a solution of (5.47) and satisfies u₋ ≤ u ≤ u₊. By the Schauder theory, one proves inductively that u is C^∞ in any open set where R(x) is C^∞; more precisely, if R ∈ C^{m+γ}, then u ∈ C^{m+2+γ}. This completes the proof of the Lemma.

Based on these Lemmas, we are now able to complete the proof of Theorem 5.2.1.

Proof of Theorem 5.2.1. Part i) can obviously be derived from Lemma 5.2.2.
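The monotone iteration of Lemma 5.2.5 can be run numerically. The sketch below is a 1-D periodic toy; all concrete choices (R = −(2 + cos x), α = −1, the constant k ≡ 3 ≥ max(−R)e^{u₊}, spectral inversion of L) are illustrative assumptions, not the book's setting. Starting from the upper solution u₊ ≡ 0 (which works since −Δ0 + α − Re⁰ = 1 + cos x ≥ 0), the iterates decrease monotonically and converge to a solution.

```python
import numpy as np

n = 128
x = 2 * np.pi * np.arange(n) / n
R = -(2 + np.cos(x))                 # R <= -1 < 0 everywhere
alpha = -1.0
kcon = 3.0                           # k >= max(-R) e^{u_+}: f(x,.) nondecreasing for u <= u_+
m = np.fft.fftfreq(n, d=1.0 / n)     # integer wave numbers

def Linv(g):                         # solve  -w'' + k w = g  (periodic, spectral)
    return np.real(np.fft.ifft(np.fft.fft(g) / (m**2 + kcon)))

u = np.zeros(n)                      # start at the upper solution u_+ = 0
for _ in range(200):
    u_next = Linv(R * np.exp(u) - alpha + kcon * u)   # L u_{i+1} = f(x, u_i)
    assert np.all(u_next <= u + 1e-10)                # monotone decreasing, (5.54)
    u = u_next

res = np.real(np.fft.ifft(m**2 * np.fft.fft(u))) + alpha - R * np.exp(u)
print(np.max(np.abs(res)))           # residual of -u'' + alpha = R e^u: tiny
```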
From Lemmas 5.2.3, 5.2.4, and 5.2.5, there exists a δ > 0, such that (5.47) is solvable for −δ ≤ α < 0. Moreover, if (5.47) possesses a solution u₁ for α₁, then for any α₁ < α < 0 we have
\[ -\Delta u_1 + \alpha > R(x)e^{u_1}, \]
that is, u₁ is an upper solution for equation (5.47) at α. Hence, by Lemmas 5.2.4 and 5.2.5, the equation
\[ -\Delta u + \alpha = R(x)e^{u} \]
also has a solution. Let
\[ \alpha_o = \inf\{\alpha \mid (5.47) \text{ is solvable}\}. \]
Then for any 0 > α > α_o, one can solve (5.47). Lemma 5.2.3 infers that α_o = −∞ if R(x) ≤ 0 but R(x) is not identically 0. Now, to complete the proof of the Theorem, it suffices to prove that if R(x) is positive somewhere, then α_o > −∞. By virtue of Lemma 5.2.2, we only need to show the

Claim: There exists α < 0, such that the unique solution φ of
\[ \Delta \phi + \alpha\phi = R(x) \tag{5.56} \]
is negative somewhere.

Actually, for sufficiently negative α, we can prove a stronger result: αφ(x) → R(x) almost everywhere as α → −∞. We argue in the following two steps.

i) For a given α < 0, let u = u(x, α) be the solution of (5.56). We first show that
\[ u(x, \alpha) \to 0 \text{ in } L^2(M), \quad \text{as } \alpha \to -\infty. \tag{5.57} \]
Multiplying both sides of equation (5.56) by u and integrating over M, we obtain
\[ -\int_M |\nabla u|^2\, dA + \alpha\int_M u^2\, dA = \int_M R(x)u(x)\, dA. \]
Then by the Hölder inequality,
\[ |\alpha|\,\|u\|_{L^2}^2 \le \|R\|_{L^2}\|u\|_{L^2}. \]
This implies that
\[ \|u\|_{L^2} \le \frac{1}{|\alpha|}\|R\|_{L^2}, \tag{5.58} \]
and hence u(x, α) → 0 in L², and, along a subsequence, almost everywhere.
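The claim that αφ → R for very negative α (equivalently, φ ≈ R/α, which is negative wherever R is positive) can be watched on the circle, again a 1-D illustrative stand-in, by solving φ'' + αφ = R spectrally:

```python
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n
R = np.cos(x) - 2 * np.sin(3 * x) + 1.0   # continuous, positive somewhere
m = np.fft.fftfreq(n, d=1.0 / n)

def phi(alpha):                           # solve  phi'' + alpha phi = R  (periodic)
    return np.real(np.fft.ifft(np.fft.fft(R) / (alpha - m**2)))

errs = [np.max(np.abs(a * phi(a) - R)) for a in (-10.0, -100.0, -1000.0)]
print(errs)                               # decreasing toward 0 as alpha -> -infinity
```

Note that the Fourier symbol α − m² never vanishes for α < 0, which is the discrete counterpart of the invertibility of (Δ + α); the bound |φ̂| ≤ |R̂|/|α| gives the L² estimate (5.58) directly.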
ii) To see the stronger statement, write
\[ -\alpha\phi + R(x) = \Delta\phi. \tag{5.59} \]
Since for any α < 0 the operator (Δ + α) is invertible, we can express
\[ \phi = (\Delta + \alpha)^{-1}R(x). \]
If R ∈ L², then
\[ \Delta(\Delta + \alpha)^{-1}R(x) \to 0 \quad \text{almost everywhere, as } \alpha \to -\infty. \tag{5.60} \]
One can see the validity of (5.60) by simply applying the operator (Δ + α) to both sides. Hence
\[ \phi(x) \to \frac{1}{\alpha}R(x), \quad \text{almost everywhere.} \]
Since R(x) is continuous and positive somewhere, φ(x) is negative somewhere for sufficiently negative α. By Lemma 5.2.2, equation (5.47) admits no solution for such α, and hence α_o > −∞. This completes the proof of the Theorem.

5.2.2 Chen and Li's Results

From the previous section, one can see from Kazdan and Warner's result that, in the case when R(x) is nonpositive, equation (5.46) has been solved completely: a necessary and sufficient condition for equation (5.46) to have a solution for every 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x) changes signs, there still remain some open questions. In particular, one may naturally ask: Does equation (5.46) have a solution for the critical value α = α_o? In [CL1], Chen and Li answered this question affirmatively.

Theorem 5.2.2 (Chen-Li) Assume that R(x) is a continuous function on M. Let α_o be defined as in the previous section. Then equation (5.46) has at least one solution when α = α_o.

The Outline of the Proof. We will prove the Theorem in three steps. Let {α_k} be a sequence of numbers such that α_k > α_o and α_k → α_o as k → ∞.
In Step 1, for each fixed α_k, we minimize the functional
\[ J_k(u) = \frac{1}{2}\int_M |\nabla u|^2\, dA + \alpha_k\int_M u\, dA - \int_M R(x)e^{u}\, dA \]
in a class of functions that are between a lower and an upper solution; let the minimizer be u_k. In Step 2, we show that the sequence of minimizers {u_k} is bounded in the region where R(x) is positively bounded away from 0. In Step 3, we prove that {u_k} is bounded in H¹(M), and hence converges to a desired solution.

Step 1. For each α_k > α_o, by the result of Kazdan and Warner, there exists a solution φ_k of
\[ -\Delta\phi_k + \alpha_k = R(x)e^{\phi_k}. \tag{5.61} \]
We first show that there exists a constant c_o > 0, such that
\[ \phi_k(x) \ge -c_o, \quad \forall x \in M \text{ and } \forall k = 1, 2, \cdots. \tag{5.62} \]
Otherwise, let x_k be a minimum of φ_k(x) on M, with
\[ \phi_k(x_k) \to -\infty, \quad \text{as } k \to \infty. \]
Since x_k is a minimum, we have −Δφ_k(x_k) ≤ 0, and hence α_k ≤ R(x_k)e^{\phi_k(x_k)} → 0; this contradicts equation (5.61), because α_k → α_o < 0 as k → ∞. This verifies (5.62).

For each α_k > α_o, choose α_o < α̃_k < α_k. By the result of Kazdan and Warner, there exists a solution ψ_k of equation (5.46) with α = α̃_k. Then
\[ -\Delta\psi_k + \alpha_k > -\Delta\psi_k + \tilde{\alpha}_k = R(x)e^{\psi_k(x)}, \]
i.e., ψ_k is a super solution of the equation
\[ -\Delta u + \alpha_k = R(x)e^{u(x)}. \tag{5.63} \]
Choose a sufficiently negative constant A < −c_o to be the lower solution of equation (5.63), that is,
\[ -\Delta A + \alpha_k \le R(x)e^{A}. \]
This is possible because R(x) is bounded below on M, and hence we can make R(x)e^{A} greater than any fixed negative number.
Illuminated by an idea of Ding and Liu [DL], we minimize the functional J_k(u) in the class of functions
\[ H = \{ u \in C^1(M) \mid A \le u \le \psi_k \}. \]
This is possible because the functional J_k(u) is bounded from below on H. In fact, since M is compact and R(x) is continuous, there exists a constant C₁, such that R(x) ≤ C₁ for all x ∈ M. It follows that, for any u ∈ H,
\[
J_k(u) \ge \frac{1}{2}\int_M |\nabla u|^2\, dA + \alpha_k\int_M u\, dA - C_1\int_M e^{\psi_k}\, dA
\ge \frac{1}{2}\big( \|u\| - C_2 \big)^2 - C_3. \tag{5.64}
\]
Here we have used the Hölder and Poincaré inequalities. By (5.64) and a similar argument as in the previous section, we can show that there is a subsequence of a minimizing sequence which converges weakly to a minimum u_k of J_k(u) in the set H. In order to show that u_k is a solution of equation (5.63), we need it to be in the interior of H, i.e.
\[ A < u_k(x) < \psi_k(x), \quad \forall x \in M. \tag{5.65} \]
First we show that u_k(x) < ψ_k(x) for all x ∈ M. Suppose on the contrary that there exists x_o ∈ M, such that u_k(x_o) = ψ_k(x_o); we will derive a contradiction. Since u_k(x_o) > A, there is a δ > 0, such that
\[ A < u_k(x), \quad \forall x \in B_\delta(x_o). \tag{5.66} \]
It follows that, for any positive function v(x) with its support in B_δ(x_o), there exists an ε > 0, such that for −ε ≤ t ≤ 0 we have u_k + tv ∈ H. Since u_k is a minimum of J_k in H, the function
\[ g(t) = J_k(u_k + tv) \]
achieves its minimum at t = 0 on the interval [−ε, 0]. Hence g'(0) ≤ 0. This implies that
\[ \int_M \big[ -\Delta u_k + \alpha_k - R(x)e^{u_k} \big] v(x)\, dA \le 0. \tag{5.67} \]
Consequently,
\[ -\Delta u_k(x_o) + \alpha_k - R(x_o)e^{u_k(x_o)} \le 0. \tag{5.68} \]
Let w(x) = ψ_k(x) − u_k(x). Then w(x) ≥ 0, w(x_o) = 0, and x_o is a minimum of w. It follows that −Δw(x_o) ≤ 0. While on the other hand, by (5.68),
\[ -\Delta w(x_o) = -\Delta\psi_k(x_o) - \big( -\Delta u_k(x_o) \big) \ge -\tilde{\alpha}_k + \alpha_k > 0, \]
again a contradiction. Therefore u_k(x) < ψ_k(x) for all x ∈ M. Similarly, one can show that A < u_k(x) for all x ∈ M. This verifies (5.65). Now, for any v ∈ C¹(M), u_k + tv is in H for all sufficiently small |t|, and hence
\[ \frac{d}{dt} J_k(u_k + tv)\Big|_{t=0} = 0. \]
Then a direct computation, as we did in the past, will lead to
\[ -\Delta u_k + \alpha_k = R(x)e^{u_k}, \quad \forall x \in M. \tag{5.69} \]

Step 2. It is obvious that the sequence {u_k} so obtained in the previous step is uniformly bounded from below by the negative constant A. To show that it is also bounded from above, we first consider the region where R(x) is positively bounded away from 0. We will use a sup + inf inequality of Brezis, Li, and Shafrir [BLS]:

Lemma 5.2.6 (Brezis, Li, and Shafrir) Assume that V is a Lipschitz function satisfying 0 < a ≤ V(x) ≤ b < ∞, and let K be a compact subset of a domain Ω in R². Then any solution u of the equation
\[ -\Delta u = V(x)e^{u}, \quad x \in \Omega, \]
satisfies
\[ \sup_K u + \inf_\Omega u \le C(a, b, K, \Omega, \|\nabla V\|_{L^\infty}). \]

Let x_o be a point on M at which R(x_o) > 0. Choose ε > 0 so small that
\[ R(x) \ge a > 0, \quad \forall x \in \Omega \equiv B_{2\epsilon}(x_o), \]
for some constant a. Let v_k be a solution of
\[ -\Delta v_k - \alpha_k = 0 \ \text{ in } \Omega, \qquad v_k(x) = 1 \ \text{ on } \partial\Omega. \]
Let w_k = u_k + v_k. Then w_k satisfies
\[ -\Delta w_k = R(x)e^{-v_k}e^{w_k}, \quad x \in \Omega. \]
By the maximum principle, and since α_k → α_o, the sequence {v_k} is bounded from above and from below in the small ball Ω. Since {u_k} is uniformly bounded from below, {w_k} is also bounded from below. Locally, the metric is pointwise conformal to the Euclidean metric; hence one can apply Lemma 5.2.6 to {w_k} to conclude that the sequence {w_k} is also uniformly bounded from above in the smaller ball B_ε(x_o). Now the uniform bound for {u_k} in the same ball follows suit. Since x_o is an arbitrary point where R is positive, we conclude that {u_k} is uniformly bounded in any region where R(x) is positively bounded away from 0.

Step 3. We show that the sequence {u_k} is bounded in H¹(M). Choose a small δ > 0, such that the set
\[ D = \{ x \in M \mid R(x) > \delta \} \]
is nonempty. It is open since R(x) is continuous. From the previous step, we know that {u_k} is uniformly bounded in D. Let K(x) be a smooth function such that
\[ K(x) < 0 \ \text{ for } x \in D, \qquad K(x) = 0 \ \text{ elsewhere}. \]
For each α_k, let v_k be the unique solution of
\[ -\Delta v_k + \alpha_k = K(x)e^{v_k}, \quad x \in M. \]
Since −Δv_k + α_k ≤ 0 and α_k → α_o, the sequence {v_k} is uniformly bounded. Let w_k = u_k − v_k; then
\[ -\Delta w_k = R(x)e^{u_k} - K(x)e^{v_k}. \tag{5.70} \]
We show that {w_k} is bounded in H¹(M). On one hand, multiplying both sides of (5.70) by e^{w_k} and integrating on M, we have
\[ \int_M |\nabla w_k|^2 e^{w_k}\, dA = \int_M R(x)e^{u_k + w_k}\, dA - \int_D K(x)e^{u_k}\, dA. \tag{5.71} \]
On the other hand, since each u_k is a minimizer of the functional J_k, the second derivative J_k''(u_k) is positively semi-definite. It follows that, for any function φ ∈ H¹(M),
\[ \int_M \big[ |\nabla\varphi|^2 - R(x)e^{u_k}\varphi^2 \big]\, dA \ge 0. \]
Choosing φ = e^{w_k/2}, we derive
\[ \frac{1}{4}\int_M |\nabla w_k|^2 e^{w_k}\, dA \ge \int_M R(x)e^{u_k + w_k}\, dA. \tag{5.72} \]
Combining (5.71) and (5.72), we arrive at
\[ -\int_D K(x)e^{u_k}\, dA \ge \frac{3}{4}\int_M |\nabla w_k|^2 e^{w_k}\, dA. \tag{5.73} \]
Notice that {u_k} is uniformly bounded in the region D, and {w_k} is bounded from below due to the fact that {u_k} is bounded from below and {v_k} is bounded. Hence the left hand side of (5.73) is bounded, and (5.73) implies that ∫_M |∇w_k|² dA is bounded, and so is ∫_M |∇u_k|² dA.

Consider ũ_k = u_k − ū_k, where ū_k is the average of u_k. Obviously ∫_M ũ_k dA = 0; hence we can apply the Poincaré inequality to ũ_k and conclude that {ũ_k} is bounded in H¹(M). Taking into account that
\[ \int_M u_k^2\, dA \le 2\int_M \tilde{u}_k^2\, dA + 2\bar{u}_k^2\,|M|, \]
it remains to show that {ū_k} is bounded. If not, then {ū_k} is unbounded along a subsequence, and since {u_k} is uniformly bounded in the region D where R(x) is positively bounded away from 0, we would have
\[ \int_D \tilde{u}_k^2\, dA = \int_D |u_k - \bar{u}_k|^2\, dA \to \infty, \]
a contradiction with the fact that {ũ_k} is bounded in H¹(M). Therefore {u_k} must also be bounded in H¹(M). Consequently, there exists a subsequence of {u_k} that converges weakly to a function u_o in H¹(M), and u_o is the desired weak solution of
\[ -\Delta u_o + \alpha_o = R(x)e^{u_o}. \]
This completes the proof of the Theorem.
6 Prescribing Scalar Curvature on S^n, for n ≥ 3

6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where R < 0
6.3.2 In the Region Where R is Small
6.3.3 In the Region Where R > 0

6.1 Introduction

On the higher dimensional sphere S^n with n ≥ 3, Kazdan and Warner raised the following question: Which functions R(x) can be realized as the scalar curvature of some conformally related metrics? It is equivalent to consider the existence of a solution to the following semilinear elliptic equation
\[ -\Delta u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}, \quad u > 0, \ x \in S^n. \tag{6.1} \]
The equation is 'critical' in the sense that lack of compactness occurs. Besides the obvious necessary condition that R(x) be positive somewhere, there are well-known obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE]. The conditions are:
\[ \int_{S^n} X(R)\, dV_g = 0, \tag{6.2} \]
where dV_g is the volume element of the conformal metric g and X is any conformal vector field associated with the standard metric g_0. We call these Kazdan-Warner type conditions. These conditions give rise to many examples of R(x) for which equation (6.1) has no solution. In particular, a monotone rotationally symmetric function R admits no solution.

In the last two decades, numerous studies were dedicated to these problems and various sufficient conditions were found (please see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein). However, one problem of common concern was left open: were those Kazdan-Warner type necessary conditions also sufficient? In the case where R is rotationally symmetric, the conditions become:
\[ R > 0 \text{ somewhere and } R \text{ changes signs.} \tag{6.3} \]
We found some stronger obstructions. Our results imply that, for a rotationally symmetric function R, a necessary condition for equation (6.1) to have a solution is that
\[ R'(r) \text{ changes signs in the region where } R \text{ is positive.} \tag{6.4} \]
In other words, if R is monotone in the region where it is positive, then problem (6.1) admits no solution. Now two questions arise naturally: (a) is (6.4) a sufficient condition? and (b) if not, what are the necessary and sufficient conditions? Kazdan listed these as open problems in his CBMS Lecture Notes [Ka].

For the prescribing Gaussian curvature equation (5.3) on S², in the case where R is rotationally symmetric, we answered question (a) negatively [CL3] [CL4]. In this situation, a more proper condition is the so-called 'flatness condition.' There are many versions of 'flatness conditions'; a general one was presented in [Li2] by Y. Li. Roughly speaking, it requires that at every critical point of R the derivatives of R up to order (n − 2) vanish, while some higher, but less than nth, order derivative is distinct from 0. For n = 3, Xu and Yang [XY] showed that if R is 'nondegenerate,' then (6.4) is a sufficient condition. Here the 'nondegeneracy' condition is a special case of the 'flatness condition'; it is still used widely today (see [CY3], [Li2], and [SZ]) as a standard assumption to guarantee the existence of a solution. In dimensions higher than 3, however, the 'nondegeneracy' condition is no longer sufficient to guarantee the existence of a solution. This was illustrated by a counter example in [Bi] constructed by Bianchi: he found some positive rotationally symmetric function R on S⁴, which is nondegenerate and nonmonotone, and for which the problem admits no solution. The above mentioned counter example of Bianchi seems to suggest that the 'flatness condition' is somewhat sharp. Now, a natural question is: Under the 'flatness condition,' is (6.4) a sufficient condition? In this section, we answer the question affirmatively, and thus essentially answer the open question (b) posed by Kazdan. For equation (6.1) on higher dimensional spheres, a major difficulty is that multiple blowups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations. Here, to better illustrate the idea, we only list a typical and easy-to-verify version of the 'flatness condition' in the statement of the following theorem.

Theorem 6.1.1 Let n ≥ 3. Let R = R(r) be rotationally symmetric and satisfy the following flatness condition near every positive critical point r_o:
\[ R(r) = R(r_o) + a|r - r_o|^{\alpha} + h(|r - r_o|), \tag{6.5} \]
where h'(s) = o(s^{α−1}), with a ≠ 0 and n − 2 < α < n. Then a necessary and sufficient condition for equation (6.1) to have a solution is that R'(r) changes signs in the region where R > 0. This is true in all dimensions n ≥ 3.

Thus, under the 'flatness condition,' (6.4) is a necessary and sufficient condition for (6.1) to have a solution.

Outline of the proof of Theorem 6.1.1. The theorem is proved by a variational approach, and it applies to functions R with changing signs. Let γ_n = n(n−2)/4 and τ = (n+2)/(n−2). We first find a positive solution u_p of the subcritical equation
\[ -\Delta u + \gamma_n u = R(r)\,u^{p} \tag{6.6} \]
for each p < τ and close to τ; then we let p → τ and take the limit. To find a solution of equation (6.6), we construct a maxmini variational scheme at subcritical levels and then approach the critical level. Let
\[ J_p(u) := \int_{S^n} R\,u^{p+1}\, dV \quad \text{and} \quad E(u) := \int_{S^n} \big( |\nabla u|^2 + \gamma_n u^2 \big)\, dV. \]
We seek critical points of J_p(u) under the constraint
\[ H = \{ u \in H^1(S^n) \mid E(u) = \gamma_n|S^n|,\ u \ge 0 \}, \]
where |S^n| is the volume of S^n. We use the 'center of mass' to define neighborhoods of 'critical points at infinity,' obtain some quite sharp and clean estimates in those neighborhoods, and blend our new ideas with other ideas in [XY], [Ba], [Li2], and [SZ].
If R has only one positive local maximum, then by condition (6.4), on each of the poles, R is either nonpositive or has a local minimum. In this case, we seek a solution by minimizing the functional in a family of rotationally symmetric functions in H. In the following, we assume that R has at least two positive local maxima. Roughly speaking, we then have some kind of 'mountain pass' associated to each local maximum of R; in this case, the solutions we seek are not necessarily symmetric. Our scheme is based on the following key estimates on the values of the functional J_p(u) in a neighborhood of the 'critical points at infinity' associated with each local maximum of R.

Let r₁ be a positive local maximum of R. We prove that there is an open set G₁ ⊂ H (independent of p), defined by using the 'center of mass,' such that on the boundary ∂G₁ of G₁,
\[ J_p(u) \le R(r_1)|S^n| - \delta, \tag{6.7} \]
while there is a function ψ₁ ∈ G₁, such that
\[ J_p(\psi_1) > R(r_1)|S^n| - \frac{\delta}{2}. \tag{6.8} \]
Here δ > 0 is independent of p.

Let r₁ and r₂ be the two smallest positive local maxima of R, and let ψ₁ and ψ₂ be two functions defined by (6.8) associated to r₁ and r₂. Let γ be a path in H connecting ψ₁ and ψ₂, and let Γ be the family of all such paths. Define
\[ c_p = \sup_{\gamma\in\Gamma}\, \min_{u\in\gamma}\, J_p(u). \tag{6.9} \]
For each p < τ, there is a critical point u_p of J_p(·), such that J_p(u_p) = c_p, and a constant multiple of u_p is a solution of (6.6). Obviously, by (6.7),
\[ J_p(u_p) \le R(r_i)|S^n| - \delta \tag{6.10} \]
for any positive local maximum r_i of R.

To show the convergence of a subsequence of {u_p}, we let p → τ and take the limit. We establish a priori bounds for the solutions in the following order.
(i) In the region where R < 0: This is done by the 'Kelvin transform' and a maximum principle.
(ii) In the region where R is small: This is mainly due to the boundedness of the energy E(u_p).
(iii) In the region where R is positively bounded away from 0: First, due to the energy bound, {u_p} can blow up at at most finitely many points. Then, similar to the approach in [C], using a Pohozaev type identity, we show that the sequence can only blow up at one
point, and the point must be a local maximum of R. Then we use (6.10) to argue that even one-point blow up is impossible, and thus establish an a priori bound for a subsequence of {u_p}.

6.2 The Variational Approach

In this section, we construct a maxmini variational scheme to find a solution of
\[ -\Delta u + \gamma_n u = R(r)\,u^{p} \tag{6.11} \]
for each p < τ := (n+2)/(n−2). Let
\[ J_p(u) := \int_{S^n} R\,u^{p+1}\, dV \quad \text{and} \quad E(u) := \int_{S^n} \big( |\nabla u|^2 + \gamma_n u^2 \big)\, dV, \]
and let ‖u‖ := √(E(u)) be the corresponding norm. We seek critical points of J_p(u) under the constraint
\[ H := \{ u \in H^1(S^n) \mid E(u) = E(1) = \gamma_n|S^n|,\ u \ge 0 \}, \]
where |S^n| is the volume of S^n. One can easily see that a critical point of J_p in H, multiplied by a constant, is a solution of (6.11). We divide the rest of the section into two parts. In subsection 6.2.1, we establish the key estimates (6.7) and (6.8); in subsection 6.2.2, we carry on the maxmini variational scheme and obtain a solution u_p of (6.11) for each p < τ. Finally, we establish a priori estimates on the solution sequence {u_p}.
6.2.1 Estimate the Values of the Functional

To construct a maxmini variational scheme, we first show that there is some kind of 'mountain pass' associated to each positive local maximum of R. Unlike the classical ones, these 'mountain passes' are in some neighborhoods of the 'critical points at infinity' (see Propositions 6.2.1 and 6.2.2 below). We recall some well-known facts in conformal geometry. As usual, we use |·| to denote the distance in R^{n+1}. Choose a coordinate system in R^{n+1}, so that the south pole of S^n is at the origin O and the center of the ball B^{n+1} is at (0, ···, 0, 1). Define the center of mass of u as
\[ q(u) := \frac{\int_{S^n} x\, u^{\tau+1}(x)\, dV}{\int_{S^n} u^{\tau+1}(x)\, dV}. \tag{6.12} \]
It is actually the center of mass of the sphere S^n with density u^{τ+1}(x) at the point x. Let φ_q be the standard solution with its 'center of mass' at q ∈ B^{n+1}; that is, φ_q ∈ H and satisfies
\[ -\Delta u + \gamma_n u = \gamma_n u^{\tau}. \]
More generally and more precisely, there is a family of conformal transforms T_q : H → H,
\[ T_q u := u(h_\lambda(x))\,[\det(dh_\lambda)]^{\frac{n-2}{2n}}. \tag{6.13} \]
In the spherical polar coordinates x = (r, θ) of S^n (0 ≤ r ≤ π, θ ∈ S^{n−1}) centered at the south pole, where r = 0, we can express
\[ \phi_q = \phi_{\lambda,\tilde{q}} = \left( \frac{\lambda}{\lambda^2\sin^2\frac{r}{2} + \cos^2\frac{r}{2}} \right)^{\frac{n-2}{2}}, \tag{6.14} \]
with 0 < λ ≤ 1, where q̃ is the intersection of S^n with the ray passing through the center and the point q; we may also regard φ_q as depending on the two parameters λ and q̃. As λ→0, this family of functions blows up at the south pole, while tending to zero elsewhere. When q = O (the south pole of S^n),
\[ h_\lambda(r, \theta) = \Big( 2\tan^{-1}\big(\lambda\tan\frac{r}{2}\big),\ \theta \Big). \tag{6.15} \]
Here det(dh_λ) is the determinant of dh_λ, and it can be expressed explicitly as
\[ \det(dh_\lambda) = \left( \frac{\lambda}{\lambda^2\sin^2\frac{r}{2} + \cos^2\frac{r}{2}} \right)^{n}. \]
It is well known that this family of conformal transforms leaves the equation −Δu + γ_n u = γ_n u^{τ}, the energy E(·), and the integral ∫_{S^n} u^{τ+1} dV invariant.
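The explicit determinant formula can be double-checked numerically: the radial stretch h_λ'(r) and the tangential stretch sin h_λ(r)/sin r (the ratio of circumferences of latitude circles) both equal λ/(λ² sin²(r/2) + cos²(r/2)), so det(dh_λ) is that common factor raised to the n-th power. The sketch below uses plain finite differences for the radial part.

```python
import numpy as np

def h(r, lam):
    return 2 * np.arctan(lam * np.tan(r / 2))

def factor(r, lam):
    return lam / (lam**2 * np.sin(r / 2)**2 + np.cos(r / 2)**2)

r = np.linspace(0.1, 3.0, 2001)
for lam in (0.3, 0.7, 1.0):
    radial = np.gradient(h(r, lam), r)           # h'(r) by finite differences
    tangential = np.sin(h(r, lam)) / np.sin(r)   # tangential stretch
    err_r = np.max(np.abs((radial - factor(r, lam))[5:-5]))
    err_t = np.max(np.abs(tangential - factor(r, lam)))
    print(err_r, err_t)   # err_r small (difference scheme), err_t at machine precision
```

For λ = 1, h_λ is the identity and the factor is identically 1, as expected.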
Lemma 6.2.1 For any u, v ∈ H¹(S^n) and for any nonnegative integer k, the following hold:
(i) (T_q u, T_q v) = (u, v); in particular,
\[ E(T_q u) = E(u) \quad \text{and} \quad \int_{S^n} (T_q u)^{\tau+1}\, dV = \int_{S^n} u^{\tau+1}\, dV. \tag{6.16} \]
(ii)
\[ \int_{S^n} (T_q u)^{k}(T_q v)^{\tau+1-k}\, dV = \int_{S^n} u^{k}v^{\tau+1-k}\, dV. \tag{6.17} \]

Proof. (i) We will use a property of the conformal Laplacian to prove the invariance of the energy, E(T_q u) = E(u). The proof for (ii) is similar. On an n-dimensional Riemannian manifold (M, g),
\[ L_g := \Delta_g - c(n)R_g, \quad \text{where } c(n) = \frac{n-2}{4(n-1)}, \]
is called a conformal Laplacian. Here Δ_g and R_g are the Laplace-Beltrami operator and the scalar curvature associated with the metric g, respectively. The conformal Laplacian has the following well-known invariance property under conformal changes of metrics. For ĝ = w^{4/(n−2)} g with w > 0, we have
\[ L_{\hat g}\,u = w^{-\frac{n+2}{n-2}}\, L_g(uw), \quad \text{for all } u \in C^{\infty}(M). \tag{6.18} \]
Interested readers may find its proof in [Bes]. If M is a compact manifold without boundary, then, multiplying both sides of (6.18) by u and integrating by parts, one arrives at
\[
\int_M \big\{ |\nabla_{\hat g} u|^2 + c(n)R_{\hat g}\,u^2 \big\}\, dV_{\hat g}
= \int_M \big\{ |\nabla_g (uw)|^2 + c(n)R_g\,(uw)^2 \big\}\, dV_g. \tag{6.19}
\]
In our situation, we choose g to be the Euclidean metric ḡ in R^n with ḡ_ij = δ_ij, and ĝ = g_o, the standard metric on S^n. Then it is well known that
\[ g_o = w^{\frac{4}{n-2}}(x)\,\bar{g} \quad \text{with} \quad w(x) = \left( \frac{4}{4 + |x|^2} \right)^{\frac{n-2}{2}} \]
satisfying the equation
\[ -\bar{\Delta} w = \gamma_n\, w^{\frac{n+2}{n-2}}. \]
We also have
164    6 Prescribing Scalar Curvature on S^n, for n ≥ 3

    c(n) R_{g_o} = γ_n := n(n−2)/4   and   c(n) R_ḡ = c(n) · 0 = 0.

Make a stereographic projection from S^n to R^n as shown in the figure below. Under this projection, the point (r, θ) ∈ S^n becomes x = (x_1, ···, x_n) ∈ R^n, with

    |x| = 2 tan(r/2).

Now formula (6.19) becomes

    ∫_{S^n} { |∇_o u|² + γ_n u² } dV_o = ∫_{R^n} |∇̄(uw)|² dx,        (6.20)

where ∇_o and ∇̄ are the gradient operators associated with the metrics g_o and ḡ, respectively.

Again, let q̃ be the intersection of S^n with the ray passing through the center of the sphere and the point q, where q̃ is placed at the origin of R^n. One can verify that

    (T_q u)(x) = u(λx) [ λ(4 + |x|²) / (4 + λ²|x|²) ]^{(n−2)/2}.

It follows from this and (6.20) that

    E(T_q u) = ∫_{S^n} { |∇_o (T_q u)|² + γ_n (T_q u)² } dV_{g_o}
             = ∫_{R^n} | ∇̄ ( u(λx) [ λ(4+|x|²)/(4+λ²|x|²) ]^{(n−2)/2} w(x) ) |² dx
             = ∫_{R^n} | ∇̄ ( u(λx) [ 4λ/(4+λ²|x|²) ]^{(n−2)/2} ) |² dx
             = ∫_{R^n} | ∇̄ ( u(y) [ 4/(4+|y|²) ]^{(n−2)/2} ) |² dy        (y = λx)
             = ∫_{R^n} | ∇̄ ( u(y) w(y) ) |² dy = E(u).

(ii) This can be derived immediately by making the change of variable y = h_λ(x). One can also verify that T_q φ_q = 1. This completes the proof of the lemma.
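The invariance in part (ii) is, at heart, the change of variables formula: since (τ+1)·(n−2)/(2n) = 1, the weight [det(dh_λ)]^{(n−2)/(2n)} raised to the power τ+1 reproduces the full Jacobian. A one-dimensional numerical sketch of this mechanism (our own illustration; an arbitrary power p plays the role of τ+1 and h is any monotone change of variables):

```python
import math

def integrate(f, a, b, m=20000):
    # simple trapezoid rule
    step = (b - a) / m
    return step * (sum(f(a + i * step) for i in range(1, m)) + 0.5 * (f(a) + f(b)))

u = lambda y: 1.0 / (1.0 + y * y)         # any positive profile
h = lambda x: math.tan(x)                 # monotone change of variables on (0, 1)
dh = lambda x: 1.0 / math.cos(x) ** 2     # its Jacobian
p = 2.5                                   # plays the role of tau + 1

# (T u)(x) = u(h(x)) * dh(x)**(1/p); then the integral of (T u)^p is invariant
lhs = integrate(lambda x: (u(h(x)) * dh(x) ** (1.0 / p)) ** p, 0.0, 1.0)
rhs = integrate(lambda y: u(y) ** p, h(0.0), h(1.0))
assert abs(lhs - rhs) < 1e-4
print("integral of (T u)^p equals integral of u^p")
```

The same bookkeeping, with h = h_λ and p = τ+1 = 2n/(n−2), is what makes ∫ (T_q u)^{τ+1} dV = ∫ u^{τ+1} dV on S^n.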
6.2 The Variational Approach    165

The relations between q and the pair (λ, q̃), and the transforms T_q for q ≠ O, can be expressed in a similar way.

We now carry on the estimates near the south pole O, which we assume to be a positive local maximum of R. The estimates near other positive local maxima are similar. Our conditions on R imply that, in a small neighborhood of O,

    R(r) = R(0) − a r^α,   for some a > 0,  n − 2 < α < n.        (6.21)

Define

    Σ = { u ∈ H : |q(u)| ≤ ρ_o,  ‖v‖ := min_{t, φ_q} ‖u − t φ_q‖ ≤ ρ_o }.        (6.22)

Notice that the 'centers of mass' of functions in Σ are near the south pole O. This is a neighborhood of the 'critical points at infinity' corresponding to O. We will estimate the functional J_p in this neighborhood. More precisely, we have

Proposition 6.2.1 For any δ_1 > 0, there exists a p_1 < τ, such that for all p_1 ≤ p ≤ τ,

    sup_Σ J_p(u) > R(0)|S^n| − δ_1.        (6.23)

Proposition 6.2.2 There exist positive constants ρ_o, p_o < τ, and δ_o, such that for all p_o ≤ p ≤ τ and for all u ∈ ∂Σ,

    J_p(u) ≤ R(0)|S^n| − δ_o.        (6.24)

That is, on the boundary of Σ, J_p is strictly less than, and bounded away from, R(0)|S^n|.

Proof of Proposition 6.2.1. We first notice that the supremum of J_p in Σ approaches R(0)|S^n| as p→τ. Through a straightforward calculation, one can show that, as λ→0,

    J_τ(φ_{λ,O}) → R(0)|S^n|.        (6.25)

For a given δ_1 > 0, by (6.25), one can choose λ_o such that φ_{λ_o,O} ∈ Σ and

    J_τ(φ_{λ_o,O}) > R(0)|S^n| − δ_1/2.        (6.26)

It is easy to see that, for the fixed function φ_{λ_o,O}, J_p is continuous with respect to p. Hence, by (6.26), there exists a p_1 < τ, such that for all p_1 ≤ p ≤ τ,
166    6 Prescribing Scalar Curvature on S^n, for n ≥ 3

    J_p(φ_{λ_o,O}) > R(0)|S^n| − δ_1.

This completes the proof of Proposition 6.2.1.

The proof of Proposition 6.2.2 is rather complex. We first estimate J_p for a family of standard functions in Σ.

Lemma 6.2.2 For p sufficiently close to τ,

    J_p(φ_{λ,q̃}) ≤ ( R(0) − C_1 |q̃|^α ) |S^n| ( 1 + o_p(1) ) − C_1 λ^{α+δ_p},        (6.27)

where δ_p := τ − p and o_p(1)→0 as p→τ.

Proof of Lemma 6.2.2. Let B_ε(O) ⊂ S^n, with ε > 0 small enough that (6.21) holds in B_{2ε}(O). We express

    ∫_{S^n} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV + ∫_{S^n \ B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV.        (6.28)

From the definition (6.14) of φ_{λ,q̃} and the boundedness of R, we obtain the smallness of the second integral:

    ∫_{S^n \ B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV ≤ C_2 λ^{n − δ_p (n−2)/2}.        (6.29)

To estimate the first integral, we work in a local coordinate centered at O. For q̃ ∈ B_{ε/2}(O),

    ∫_{B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV ≤ ∫_{B_ε(O)} [ R(0) − a|x + q̃|^α ] φ_{λ,O}^{p+1} dV
        ≤ ∫_{B_ε(O)} [ R(0) − C_3 |x|^α − C_3 |q̃|^α ] φ_{λ,O}^{p+1} dV
        ≤ [ R(0) − C_3 |q̃|^α ] |S^n| [ 1 + o_p(1) ] − C_4 λ^{α+δ_p}.        (6.30)

Here we have used the fact that |x + q̃|^α ≥ c ( |x|^α + |q̃|^α ) for some c > 0 in one half of the ball B_ε(O), and the symmetry of φ_{λ,O}. Noticing that α < n and δ_p→0, we conclude that (6.28), (6.29), and (6.30) imply (6.27). This completes the proof of Lemma 6.2.2.

To estimate J_p for all u ∈ ∂Σ, we also need the following two lemmas, which describe some useful properties of the set Σ.

Lemma 6.2.3 (On the 'center of mass')
(i) Let q, λ, and q̃ be related through (6.14). Then for λ and |q̃| sufficiently small,

    |q|² ≤ C ( |q̃|² + λ⁴ ).        (6.31)

(ii) Let ρ_o, q, and v be defined by (6.22). Then for ρ_o sufficiently small,

    ρ_o ≤ |q| + C ‖v‖.        (6.32)
6.2 The Variational Approach    167

Lemma 6.2.4 (Orthogonality) Let u ∈ Σ, and let v = u − t_o φ_{q_o} be as in (6.22). Let T_{q_o} be the conformal transform such that T_{q_o} φ_{q_o} = 1.        (6.33)
Then

    ∫_{S^n} T_{q_o} v dV = 0        (6.34)

and

    ∫_{S^n} T_{q_o} v · ψ_i(x) dV = 0,   i = 1, 2, ···, n,        (6.35)

where the ψ_i are the first spherical harmonic functions on S^n:

    −Δ ψ_i = n ψ_i(x),   ψ_i(x) = x_i,   i = 1, 2, ···, n.        (6.36)

Proof of Lemma 6.2.3. (i) From the definition of φ_q = φ_{λ,q̃} in (6.14), and by an elementary calculation, one arrives at

    |q − q̃| ∼ λ²,   for λ sufficiently small.

If we place the center of the sphere S^n at the origin of R^{n+1} (not in our case) and draw a perpendicular line segment from O to the segment joining q and q̃, then one can see that (6.31) is a direct consequence of the triangle inequality.

(ii) For any u = t φ_q + v ∈ ∂Σ, the boundary of Σ, we have, by the definition, either ‖v‖ = ρ_o or |q(u)| = ρ_o. If ‖v‖ = ρ_o, then we are done. Hence we assume that |q(u)| = ρ_o. It follows from the definition of the 'center of mass' that

    ρ_o = | ∫_{S^n} x (t φ_q + v)^{τ+1} dV / ∫_{S^n} (t φ_q + v)^{τ+1} dV |.        (6.37)

Noticing that ‖v‖ ≤ ρ_o is very small, we can expand the integrals in (6.37) in terms of ‖v‖:

    ρ_o = | [ t^{τ+1} ∫ x φ_q^{τ+1} + (τ+1) t^τ ∫ x φ_q^τ v + o(‖v‖) ]
            / [ t^{τ+1} ∫ φ_q^{τ+1} + (τ+1) t^τ ∫ φ_q^τ v + o(‖v‖) ] |
        ≤ |q| + C ‖v‖.

Here we have used the fact that ‖v‖ is small and that t is close to 1. This completes the proof of Lemma 6.2.3.
168    6 Prescribing Scalar Curvature on S^n, for n ≥ 3

Proof of Lemma 6.2.4. We use the fact that v = u − t_o φ_{q_o} is the minimum of E(u − t φ_q) among all possible values of t and q. Write L = −Δ + γ_n.

(i) By a variation with respect to t, we have

    0 = (v, φ_{q_o}) = ∫_{S^n} v L φ_{q_o} dV.        (6.38)

Since φ_{q_o} satisfies the equation −Δu + γ_n u = γ_n u^τ, it follows that

    ∫_{S^n} v φ_{q_o}^τ dV = 0.

Now apply the conformal transform T_{q_o} to both v and φ_{q_o} in the above integral. By the invariance property of the transform, we arrive at (6.34).

(ii) To prove (6.35), we make a variation with respect to q:

    0 = ∇_q ∫_{S^n} v L φ_q dV |_{q=q_o} = γ_n ∇_q ∫_{S^n} v φ_q^τ dV |_{q=q_o}
      = γ_n ∇_q ∫_{S^n} T_{q_o} v ( T_{q_o} φ_q )^τ dV |_{q=q_o}
      = τ γ_n ∫_{S^n} T_{q_o} v φ^{τ−1} ∇_q φ_{q−q_o} dV |_{q=q_o}
      = τ γ_n ∫_{S^n} T_{q_o} v · ( ψ_1, ···, ψ_n ) dV.

This completes the proof of the Lemma.

Proof of Proposition 6.2.2. Make a perturbation of R(x):

    R̄(x) = R(x)  for x ∈ B_{2ρ_o}(O);   R̄(x) = m  for x ∈ S^n \ B_{2ρ_o}(O),

where m = R|_{∂B_{2ρ_o}(O)}. Define

    J̄_p(u) = ∫_{S^n} R̄(x) u^{p+1} dV.        (6.39)

The estimate is divided into two steps. In Step 1, we estimate J̄_τ(u). In Step 2, we show that the difference between J_p(u) and J̄_p(u) is very small.

Step 1. First we show

    J̄_p(u) ≤ J̄_τ(u) ( 1 + o_p(1) ),        (6.40)

where o_p(1)→0 as p→τ.
6.2 The Variational Approach    169

In fact, by the Hölder inequality,

    ∫_{S^n} R̄ u^{p+1} dV ≤ ( ∫_{S^n} R̄ u^{τ+1} dV )^{(p+1)/(τ+1)} ( R(0) |S^n| )^{(τ−p)/(τ+1)},

which implies (6.40).

Step 2. Now estimate the difference between J_p(u) and J̄_p(u). We have

    | J_p(u) − J̄_p(u) | ≤ C_1 ∫_{S^n \ B_{2ρ_o}(O)} u^{p+1} dV
        ≤ C_2 ∫_{S^n \ B_{2ρ_o}(O)} ( t φ_q )^{p+1} dV + C_2 ∫_{S^n \ B_{2ρ_o}(O)} |v|^{p+1} dV
        ≤ C_3 λ^{n − δ_p (n−2)/2} + C_3 ‖v‖^{p+1}.        (6.41)

By virtue of (6.40) and (6.41), we now only need to estimate J̄_τ(u). For any u ∈ ∂Σ, write u = v + t φ_q. Notice that ‖u‖² := E(u). Now (6.38) implies that v and φ_q are orthogonal with respect to the inner product associated to E(·), that is,

    ∫_{S^n} ( ∇v · ∇φ_q + γ_n v φ_q ) dV = 0.        (6.42)

Using the fact that both u and φ_q belong to H, with ‖u‖² = γ_n |S^n| = ‖φ_q‖², we have

    ‖u‖² = ‖v‖² + t² ‖φ_q‖².

We derive

    t² = 1 − ‖v‖² / ( γ_n |S^n| ).

It follows that

    ∫_{S^n} R̄(x) u^{τ+1} dV ≤ t^{τ+1} ∫_{S^n} R̄ φ_q^{τ+1} dV + (τ+1) t^τ ∫_{S^n} R̄ φ_q^τ v dV
            + [ τ(τ+1)/2 ] ∫_{S^n} R̄ φ_q^{τ−1} v² dV + o(‖v‖²)        (6.43)
        = I_1 + (τ+1) I_2 + [ τ(τ+1)/2 ] I_3 + o(‖v‖²).

To estimate I_1, we use (6.27):
170    6 Prescribing Scalar Curvature on S^n, for n ≥ 3

    I_1 ≤ ( 1 − (τ+1)‖v‖² / (2 γ_n |S^n|) ) R(0) |S^n| ( 1 − c_1 |q̃|^α − c_1 λ^α ) + O(λ^n) + o(‖v‖²)        (6.44)

for some positive constant c_1.

To estimate I_2, we use (6.34) and (6.35):

    I_2 = ∫_{S^n} ( R̄(x) − m ) φ_q^τ v dV = ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) φ_q^τ v dV
        = (1/γ_n) ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) L φ_q · v dV
        ≤ C ρ_o^α ∫_{B_{2ρ_o}(O)} | L φ_q · v | dV ≤ C_4 ρ_o^α ‖v‖.        (6.45)

To estimate I_3, we use the H¹ orthogonality between v and φ_q (see (6.38)), together with (6.34) and (6.35). It is well-known that the first and second eigenspaces of −Δ on S^n are the constants and the ψ_i, corresponding to the eigenvalues 0 and n respectively. By (6.34) and (6.35), T_q v is orthogonal to these two eigenspaces, and hence

    ∫_{S^n} |∇(T_q v)|² dV ≥ λ_2 ∫_{S^n} |T_q v|² dV,

where λ_2 = n + c_2, for some positive number c_2, is the second nonzero eigenvalue of −Δ. Consequently,

    ‖v‖² = ‖T_q v‖² ≥ ( γ_n + n + c_2 ) ∫_{S^n} (T_q v)² dV.

It follows that

    I_3 ≤ R(0) ∫_{S^n} φ_q^{τ−1} v² dV = R(0) ∫_{S^n} (T_q φ_q)^{τ−1} (T_q v)² dV
        = R(0) ∫_{S^n} (T_q v)² dV ≤ [ R(0) / ( γ_n + n + c_2 ) ] ‖v‖².        (6.46)

Now (6.44), (6.45), and (6.46) imply that there is a β > 0 such that

    J̄_τ(u) ≤ R(0) |S^n| [ 1 − β ( |q̃|^α + λ^α + ‖v‖² ) ].        (6.47)

Here we have used the fact that ‖v‖ is very small and that ρ_o^α ‖v‖ can be controlled by |q̃|^α + λ^α + ‖v‖² (see Lemma 6.2.3). Notice that the positive constant c_2 in (6.46) has played a key role: without it, the coefficient of ‖v‖² in (6.47) would be 0, due to the fact that γ_n = n(n−2)/4.

For any u ∈ ∂Σ, we have either ‖v‖ = ρ_o or |q(u)| = ρ_o. By (6.31) and (6.32), in either case there exists a δ_o > 0, such that
6.2 The Variational Approach    171

    J̄_τ(u) ≤ R(0) |S^n| − 2 δ_o.        (6.48)

Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48). This completes the proof of the Proposition.

6.2.2 The Variational Scheme

In this part, we construct variational schemes to show the existence of a solution to equation (6.11) for each p < τ. One can easily see that a critical point of J_p in H multiplied by a constant is a solution of (6.11). Moreover, due to (6.40), the constant multiples are bounded from above and bounded away from 0.

Case (i): R has only one positive local maximum. In this case, condition (6.4) implies that R must have local minima at both poles. We seek a solution by maximizing the functional J_p in a class of rotationally symmetric functions:

    H_r = { u ∈ H : u = u(r) }.        (6.49)

J_p is bounded from above in H_r, and it is well-known that the variational scheme is compact for each p < τ (see the argument in Chapter 2). Therefore any maximizing sequence possesses a converging subsequence in H_r, and the limiting function multiplied by a suitable constant is a solution of (6.11).

Case (ii): R has at least two positive local maxima. Let r_1 and r_2 be the two smallest positive local maxima of R. By Propositions 6.2.1 and 6.2.2, there exist two disjoint open sets Σ_1, Σ_2 ⊂ H, a p_o < τ, and a δ > 0, such that for all p_o ≤ p < τ, there exist ψ_i ∈ Σ_i, i = 1, 2, with

    J_p(ψ_i) > R(r_i) |S^n| − δ/2,   i = 1, 2,        (6.50)

and

    J_p(u) ≤ R(r_i) |S^n| − δ,   for all u ∈ ∂Σ_i, i = 1, 2.        (6.51)

Let γ be a path in H joining ψ_1 and ψ_2, and let Γ be the family of all such paths. Similar to the ideas in [C], define

    c_p = sup_{γ ∈ Γ} min_{u ∈ γ} J_p(u).        (6.52)

For each fixed p < τ, by the well-known compactness, there exists a critical (saddle) point u_p of J_p with J_p(u_p) = c_p. By (6.51) and the definition of c_p, we have

    J_p(u_p) ≤ min_{i=1,2} R(r_i) |S^n| − δ.        (6.53)
172    6 Prescribing Scalar Curvature on S^n, for n ≥ 3

6.3 The A Priori Estimates

In the previous section, we showed the existence of a positive solution u_p of the subcritical equation (6.11) for each p < τ. In this section, we prove that as p→τ, there is a subsequence of {u_p} which converges to a solution u_o of (6.1). The convergence is based on the following a priori estimate: the solutions of (6.11) obtained by the variational scheme are uniformly bounded. We estimate the solutions in three regions, where R < 0, where R is close to 0, and where R > 0, respectively.

6.3.1 In the Region Where R < 0

The a priori bound of the solutions is stated in the following proposition, which is a direct consequence of a corresponding result in our paper [CL6].

Proposition 6.3.1 Assume that R satisfies condition (6.5) in Theorem 6.1.1. Then, for every δ > 0, there exists a p_o < τ, such that for all p_o < p ≤ τ, the solutions of (6.11) are uniformly bounded in the regions where R ≤ −δ. The bound depends only on δ, dist( {x | R(x) ≤ −δ}, {x | R(x) = 0} ), and the lower bound of R.

In [CL7],
we also obtained estimates in this region by the method of moving planes. However, in that paper, we assumed that R be bounded away from zero. In [CL6], for rotationally symmetric functions R on S³, we removed this condition by using a blowing up analysis near {x | R(x) = 0}. That method may be applied to higher dimensions with a few modifications. Now, within our variational frame in this paper, the a priori estimate is simpler. It is mainly due to the energy bound: it is easy to see that the energies E(u_p) are bounded for all p.

6.3.2 In the Region Where R is Small

Proposition 6.3.2 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 6.2. Then there exist a p_o < τ and a δ > 0, such that for all p_o < p ≤ τ, {u_p} are uniformly bounded in the regions where |R(x)| ≤ δ.

Proof. We will use a rescaling argument to derive a contradiction. Suppose that the conclusion of the Proposition is violated. Then there exists a subsequence {u_i}, with u_i = u_{p_i} and p_i→τ, and a sequence of points {x_i}, with x_i→x_o and R(x_o) = 0, such that u_i(x_i)→∞. Since x_i may not be a local maximum of u_i, we need to choose a point near x_i which is almost a local maximum. To this end, let K be any large number and let
    r_i = 2K [ u_i(x_i) ]^{−(p_i−1)/2}.

In a small neighborhood of x_o, choose a local coordinate and let

    s_i(x) = u_i(x) ( r_i − |x − x_i| )^{2/(p_i−1)}.

Let a_i be a maximum point of s_i(x) in B_{r_i}(x_i), and set λ_i = [ u_i(a_i) ]^{−(p_i−1)/2}. Then, from the definition of a_i, one can verify that

    r_i − |a_i − x_i| ≥ 2 λ_i K;        (6.54)

consequently, for i sufficiently large, the ball B_{λ_i K}(a_i) is in the interior of B_{r_i}(x_i), and the values of u_i(x) in B_{λ_i K}(a_i) are bounded above by a constant multiple of u_i(a_i):

    u_i(x) ≤ 2^{2/(p_i−1)} u_i(a_i) ≤ C u_i(a_i),   for all x ∈ B_{λ_i K}(a_i).

To see the first part, we use s_i(a_i) ≥ s_i(x_i) and derive

    u_i(a_i) ( r_i − |a_i − x_i| )^{2/(p_i−1)} ≥ u_i(x_i) r_i^{2/(p_i−1)} = (2K)^{2/(p_i−1)},

hence r_i − |a_i − x_i| ≥ 2K [u_i(a_i)]^{−(p_i−1)/2} = 2 λ_i K. To see the second part, we use the fact that s_i(a_i) ≥ s_i(x) for x ∈ B_{λ_i K}(a_i), together with r_i − |x − x_i| ≥ (1/2)( r_i − |a_i − x_i| ) there.

Now we can make a rescaling:

    v_i(x) = (1/u_i(a_i)) u_i(λ_i x + a_i),   x ∈ B_K(0).

Obviously, v_i is bounded in B_K(0), and it follows by a standard argument that {v_i} converges to a harmonic function v_o in B_K(0) ⊂ R^n, with v_o(0) = 1. By a straightforward calculation, one can verify that, for any K > 0,

    ∫_{B_K(0)} v_i(x)^{τ+1} dx ≥ c K^n        (6.55)

for some c > 0. On the other hand, the boundedness of the energy E(u_i) implies that

    ∫_{S^n} u_i^{τ+1} dV ≤ C        (6.56)

for some constant C, while from the rescaling,

    ∫_{S^n} u_i^{τ+1} dV ≥ ∫_{B_K(0)} v_i^{τ+1} dx.        (6.57)

Obviously, (6.55), (6.56), and (6.57) contradict each other when K is large. This completes the proof of the Proposition.
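The normalization behind the rescaling can be illustrated with the standard bubble in R^n (a sketch, not from the text; the profile (λ/(λ²+|x|²))^{(n−2)/2} is used here only as an example of a blowing-up family). With the rescaling length μ = u(0)^{−(p−1)/2} at p = τ, the rescaled profiles v(x) = u(μx)/u(0) are independent of λ:

```python
def bubble(r, lam, n=4):
    # standard bubble u_lambda(r) = (lam / (lam^2 + r^2))^((n-2)/2)
    return (lam / (lam**2 + r**2)) ** ((n - 2) / 2)

n = 4
tau = (n + 2) / (n - 2)                    # critical exponent
for lam in [1.0, 0.1, 0.01]:
    u0 = bubble(0.0, lam, n)               # blows up as lam -> 0
    mu = u0 ** (-(tau - 1) / 2)            # rescaling length [u(0)]^(-(p-1)/2)
    for r in [0.5, 1.0, 3.0]:
        v = bubble(mu * r, lam, n) / u0             # rescaled profile
        limit = (1.0 + r * r) ** (-(n - 2) / 2)     # lambda-independent limit
        assert abs(v - limit) < 1e-12
print("rescaled bubbles collapse onto a single profile")
```

In the proof the limit profile is harmonic (because R(x_o) = 0 kills the nonlinear term); for the bubble above the same normalization produces the fixed profile (1+|x|²)^{−(n−2)/2}.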
174    6 Prescribing Scalar Curvature on S^n, for n ≥ 3

6.3.3 In the Regions Where R > 0

Proposition 6.3.3 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 6.2. Then there exists a p_o < τ, such that for all p_o < p ≤ τ and for any ε > 0, {u_p} are uniformly bounded in the regions where R(x) ≥ ε.

Proof. The proof is divided into three steps. In Step 1, we argue that the solutions can only blow up at finitely many points, because of the energy bound. In Step 2, we show that there is no more than one point of blow up, and that the point must be a local maximum of R. In Step 3, we show that even one point blow up is impossible.

Step 1. {u_i} can only blow up at finitely many points. The argument is standard. Let {x_i} be a sequence of points such that u_i(x_i)→∞ and x_i→x_o with R(x_o) > 0. Let r_i, s_i(x), and v_i(x) be defined as in Part II. Then, similar to the argument in Part II, we see that {v_i(x)} converges to a standard function v_o(x) in R^n with

    −Δ v_o = R(x_o) v_o^τ.

It follows that

    ∫_{B_ε(x_o)} u_i^{τ+1} dV ≥ c_o > 0.

Because the total energy of u_i is bounded, we can have only finitely many such x_o.

Step 2. As a consequence of a result in [Li2] (see also [CL6] or [SZ]), we have

Lemma 6.3.1 Let R satisfy the 'flatness condition' in Theorem 6.1.1. Then {u_i} can have at most one simple blow up point, and this point must be a local maximum of R. Moreover, if {u_i} blow up at x_o, then

    J_τ(u_i) → R(x_o) |S^n|.

Step 3. Finally, we use (6.53) to conclude that even one point blow up is impossible. Let {u_i} be the sequence of critical points of J_{p_i} obtained in Section 6.2. Because each u_i is the minimum of J_{p_i} on a path, from the proof of Proposition 6.2.2, one can obtain

    J_τ(u_i) ≤ min_k R(r_k) |S^n| − δ        (6.58)

for all positive local maxima r_k of R. Now, if {u_i} were to blow up at a point x_o, then by Lemma 6.3.1, x_o would be a local maximum of R and J_τ(u_i)→R(x_o)|S^n|. This is a contradiction with (6.58). Therefore, the sequence {u_i} is uniformly bounded and possesses a subsequence converging to a solution of (6.1).
For convenience, we summarize the structure of the blow up analysis. We have shown that {u_i} has only isolated blow up points. Near such a point, {u_i} behaves almost like a family of the standard functions φ_{λ_i,q̃}, with λ_i = (max u_i)^{−2/(n−2)} and with q̃ at the maximum of R. This is done mainly by using a Pohozaev type identity.
7

Maximum Principles

7.1 Introduction
7.2 Weak Maximum Principle
7.3 The Hopf Lemma and Strong Maximum Principle
7.4 Maximum Principle Based on Comparison
7.5 A Maximum Principle for Integral Equations

7.1 Introduction

As a simple example of a maximum principle, let's consider a C² function u(x) of one independent variable x. It is well-known in calculus that at a local maximum point x_o of u, we must have

    u''(x_o) ≤ 0.

Based on this observation, we have the following simplest version of maximum principle:

    Assume that u''(x) > 0 in an open interval (a, b); then u can not have any interior maximum in the interval.

One can also see this geometrically: under the condition u''(x) > 0, the graph is concave up, and hence it can not have any local maximum.

Correspondingly, for a C² function u(x) of n independent variables x = (x_1, ···, x_n), at a local maximum x_o, we have

    D²u(x_o) := ( u_{x_i x_j}(x_o) ) ≤ 0,

that is, the symmetric matrix is non-positive definite at the point x_o. Correspondingly, the simplest version of the maximum principle reads:
176    7 Maximum Principles

    If ( a_{ij}(x) ) ≥ 0 and

        Σ_{ij} a_{ij}(x) u_{x_i x_j}(x) > 0        (7.1)

    in an open bounded domain Ω, then u can not achieve its maximum in the interior of the domain.

Here, the validity of the maximum principle comes from the simple algebraic fact: for any two n × n matrices A and B, if A ≥ 0 and B ≤ 0, then tr(AB) ≤ 0.

Unlike its one-dimensional counterpart, condition (7.1) no longer implies that the graph of u(x) is concave up. A simple counter example is

    u(x_1, x_2) = x_1² − (1/2) x_2².

One can easily see from the graph of this function that (0, 0) is a saddle point. An interesting special case is when ( a_{ij}(x) ) is an identity matrix, in which case condition (7.1) becomes Δu > 0.

In this chapter, we will introduce various maximum principles, and most of them will be used in the method of moving planes in the next chapter. Besides this, there are numerous other applications. We will list some below.

i) Providing Estimates

Consider the boundary value problem

    −Δu = f(x),  x ∈ B_1(0) ⊂ R^n;   u(x) = 0,  x ∈ ∂B_1(0).        (7.2)

If a ≤ f(x) ≤ b in B_1(0), then we can compare the solution u with the two functions

    (a/2n)(1 − |x|²)   and   (b/2n)(1 − |x|²),

which satisfy the equation with f(x) replaced by a and b respectively, and which vanish on the boundary as u does. Applying the maximum principle for the operator −Δ (see Theorem 7.1.1 below), we obtain

    (a/2n)(1 − |x|²) ≤ u(x) ≤ (b/2n)(1 − |x|²).

ii) Proving Uniqueness of Solutions

In the above example, if f(x) ≡ 0, then we can choose a = b = 0, and this implies that u ≡ 0. In other words, the solution of the boundary value problem (7.2) is unique.

iii) Establishing the Existence of Solutions

(a) For a linear equation such as (7.2) in any bounded open domain Ω, let

    u(x) = sup_φ φ(x),
where the sup is taken among all the functions φ that satisfy the corresponding differential inequality

    −Δφ ≤ f(x),  x ∈ Ω;   φ(x) = 0,  x ∈ ∂Ω.

Then u is a solution of (7.2).

(b) Now consider the nonlinear problem

    −Δu = f(u),  x ∈ Ω;   u(x) = 0,  x ∈ ∂Ω.        (7.3)

Assume that f(·) is a smooth function with f'(·) ≥ 0. Suppose that there exist two functions u(x) ≤ ū(x), such that

    −Δu ≤ f(u) ≤ f(ū) ≤ −Δū,   x ∈ Ω.

These two functions are called sub (or lower) and super (or upper) solutions, respectively. To seek a solution of problem (7.3), we use successive approximations. Let

    −Δu_1 = f(u),   and   −Δu_{i+1} = f(u_i),   x ∈ Ω.

Then, by the maximum principle, we have

    u ≤ u_1 ≤ u_2 ≤ ··· ≤ u_i ≤ ··· ≤ ū.

Let u be the limit of the sequence {u_i}:

    u(x) = lim_{i→∞} u_i(x).

Then u is a solution of (7.3).

In Section 7.2, we introduce and prove the weak maximum principles.

Theorem 7.1.1 (Weak Maximum Principle for −Δ.)

i) If

    −Δu(x) ≥ 0,   x ∈ Ω,

then

    min_Ω̄ u ≥ min_∂Ω u.

ii) If

    −Δu(x) ≤ 0,   x ∈ Ω,

then

    max_Ω̄ u ≤ max_∂Ω u.
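The monotone iteration in iii)(b) can be sketched numerically in one dimension (an illustration, not from the text; the choices f(u) = e^u/2, Ω = (−1, 1), and the finite-difference solver are ours). With the subsolution u ≡ 0 and the supersolution ū = (1−x²)/2 one has −ū'' = 1 ≥ e^{1/2}/2 = f(max ū), and f' ≥ 0, so the iterates increase monotonically and stay below ū:

```python
import math

def solve_poisson_1d(rhs, h):
    # solve -u'' = rhs on interior nodes with zero boundary values
    # (tridiagonal system, Thomas algorithm; sub/diag/super = -1, 2, -1)
    m = len(rhs)
    a, b, c = -1.0, 2.0, -1.0
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = c / b, h * h * rhs[0] / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (h * h * rhs[i] - a * dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

N = 200
h = 2.0 / N
xs = [-1.0 + (i + 1) * h for i in range(N - 1)]   # interior nodes of (-1, 1)
f = lambda u: 0.5 * math.exp(u)                   # smooth, f' >= 0
u = [0.0] * (N - 1)                               # subsolution: -0'' = 0 <= f(0)
ubar = [(1.0 - x * x) / 2.0 for x in xs]          # supersolution: -ubar'' = 1 >= f(ubar)

for _ in range(30):
    u_next = solve_poisson_1d([f(v) for v in u], h)
    # monotonicity: iterates increase and stay below the supersolution
    assert all(un >= v - 1e-12 for un, v in zip(u_next, u))
    assert all(un <= ub + 1e-12 for un, ub in zip(u_next, ubar))
    u = u_next
print("iterates increase monotonically and stay below the supersolution")
```

Each comparison in the loop is exactly the maximum principle applied to u_{i+1} − u_i and to ū − u_{i+1}, in its discrete form.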
178    7 Maximum Principles

This result can be extended to general uniformly elliptic operators. Let

    D_i = ∂/∂x_i,   D_{ij} = ∂²/(∂x_i ∂x_j).

Define

    L = − Σ_{ij} a_{ij}(x) D_{ij} + Σ_i b_i(x) D_i + c(x).

Here we always assume that a_{ij}(x), b_i(x), and c(x) are bounded continuous functions in Ω̄. We say that L is uniformly elliptic if

    a_{ij}(x) ξ_i ξ_j ≥ δ |ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and for some δ > 0.

Theorem 7.1.2 (Weak Maximum Principle for L.)
Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.

i) If Lu(x) ≥ 0 in Ω, then

    min_Ω̄ u ≥ min_∂Ω u.

ii) If Lu(x) ≤ 0 in Ω, then

    max_Ω̄ u ≤ max_∂Ω u.

These weak maximum principles infer that the minima or maxima of u are attained at some points on the boundary ∂Ω. However, they do not exclude the possibility that the minima or maxima may also occur in the interior of Ω. Actually, this can not happen unless u is constant, as we will see in the following.

Theorem 7.1.3 (Strong Maximum Principle for L with c(x) ≡ 0.)
Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum value only on ∂Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum value only on ∂Ω unless u is constant.

This maximum principle (as well as the weak one) can also be applied to the case c(x) ≥ 0, with slight modifications.
7.1 Introduction    179

Theorem 7.1.4 (Strong Maximum Principle for L with c(x) ≥ 0.)
Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u can not attain its non-positive minimum in the interior of Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u can not attain its non-negative maximum in the interior of Ω unless u is constant.

We will prove these Theorems in Section 7.3 by using the Hopf Lemma.

Notice that in the previous Theorems, we all require that c(x) ≥ 0. Roughly speaking, maximum principles hold for 'positive' operators: −Δ is 'positive', and obviously so is −Δ + c(x) if c(x) ≥ 0. However, in practical problems it occurs frequently that the condition c(x) ≥ 0 can not be met. Do we really need c(x) ≥ 0? The answer is 'no'. Actually, if c(x) is not 'too negative', then the operator '−Δ + c(x)' can still remain 'positive' to ensure the maximum principle, as we will see in later sections. These will be studied in Section 7.4, where we prove the 'Maximum Principles Based on Comparisons'.

Theorem 7.1.5 (Maximum Principle Based on Comparison)
Assume that Ω is a bounded domain. Let φ be a positive function on Ω̄ satisfying

    −Δφ + λ(x)φ ≥ 0.        (7.4)

Let u be a function such that

    −Δu + c(x)u ≥ 0,   x ∈ Ω;
    u ≥ 0   on ∂Ω.        (7.5)

If

    c(x) > λ(x),   ∀x ∈ Ω,

then u ≥ 0 in Ω.

In Section 7.4, as consequences of Theorem 7.1.5, we derive the 'Narrow Region Principle' and the 'Decay at Infinity Principle'. These principles can be applied very conveniently in the 'Method of Moving Planes' to establish the symmetry of solutions for semilinear elliptic equations, as we will see in the next chapter. Also, in Section 7.5, we establish a maximum principle for integral inequalities.
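Application i) from this introduction can be illustrated numerically in one dimension (a sketch; the choice f(x) = 1 + sin(πx)/2, hence a = 1/2 and b = 3/2, and the finite-difference solver are ours). Solving −u'' = f on (−1, 1) with zero boundary data, the solution should be squeezed between (a/2)(1−x²) and (b/2)(1−x²), the n = 1 comparison functions:

```python
import math

def solve_poisson_1d(rhs, h):
    # -u'' = rhs with zero Dirichlet data (Thomas algorithm for -1, 2, -1)
    m = len(rhs)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = -0.5, h * h * rhs[0] / 2.0
    for i in range(1, m):
        denom = 2.0 + cp[i - 1]
        cp[i] = -1.0 / denom
        dp[i] = (h * h * rhs[i] + dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

N = 400
h = 2.0 / N
xs = [-1.0 + (i + 1) * h for i in range(N - 1)]
a, b = 0.5, 1.5
f = [1.0 + 0.5 * math.sin(math.pi * x) for x in xs]   # a <= f(x) <= b
u = solve_poisson_1d(f, h)

# squeeze: (a/2)(1-x^2) <= u(x) <= (b/2)(1-x^2) for n = 1
for x, ux in zip(xs, u):
    assert (a / 2.0) * (1.0 - x * x) - 1e-8 <= ux <= (b / 2.0) * (1.0 - x * x) + 1e-8
print("solution is squeezed between the two comparison functions")
```

The discrete analogue of the maximum principle (the tridiagonal matrix is an M-matrix) is what guarantees the squeeze at the grid level.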
180    7 Maximum Principles

7.2 Weak Maximum Principles

In this section, we prove the weak maximum principles.

Theorem 7.2.1 (Weak Maximum Principle for −Δ.)

i) If

    −Δu(x) ≥ 0,   x ∈ Ω,        (7.6)

then

    min_Ω̄ u ≥ min_∂Ω u.        (7.7)

ii) If

    −Δu(x) ≤ 0,   x ∈ Ω,        (7.8)

then

    max_Ω̄ u ≤ max_∂Ω u.        (7.9)

Proof. Here we only present the proof of part i); the entirely similar proof also works for part ii). To better illustrate the idea, we will deal with the one-dimensional case and the higher-dimensional case separately.

First, let Ω be the interval (a, b). Then condition (7.6) becomes u''(x) ≤ 0. This implies that the graph of u(x) on (a, b) is concave downward, and therefore one can roughly see that the values of u(x) in (a, b) are larger than or equal to the minimum value of u at the end points (see Figure 2).

[Figure 2: the graph of a concave function on (a, b), attaining its minimum at an end point.]

To prove the above observation rigorously, we first carry it out under the stronger assumption that

    −u''(x) > 0.        (7.10)

Let m = min_∂Ω u. Suppose, in contrary to (7.7), that there is a minimum x_o ∈ (a, b) of u such that u(x_o) < m. Then, by the second order Taylor expansion of u around x_o, we must have −u''(x_o) ≤ 0. This contradicts with the assumption (7.10).

Now, for u only satisfying the weaker condition (7.6), we consider a perturbation of u:

    u_ε(x) = u(x) − ε x².

Obviously, for each ε > 0, u_ε(x) satisfies the stronger condition (7.10), and hence
7.2 Weak Maximum Principles    181

    min_Ω̄ u_ε ≥ min_∂Ω u_ε.

Now, letting ε→0, we arrive at the desired conclusion (7.7).

To prove the theorem in dimensions higher than one, we need the following lemma. Roughly speaking, it says that if −Δu(x) > 0, then the graph of u is locally somewhat concave downward.

Lemma 7.2.1 (Mean Value Inequality) Let x_o be a point in Ω. Let B_r(x_o) ⊂ Ω be the ball of radius r centered at x_o, and ∂B_r(x_o) be its boundary.

i) If −Δu(x) > (=) 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

    u(x_o) > (=) (1/|∂B_r(x_o)|) ∫_{∂B_r(x_o)} u(x) dS.        (7.11)

ii) If −Δu(x) < 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

    u(x_o) < (1/|∂B_r(x_o)|) ∫_{∂B_r(x_o)} u(x) dS.        (7.12)

This Lemma tells us that, if −Δu(x) > 0, then the value of u at the center of the small ball B_r(x_o) is larger than its average value on the boundary ∂B_r(x_o). It follows that, if x_o is a minimum of u in Ω, then

    −Δu(x_o) ≤ 0;        (7.13)

and if x_o is a maximum of u in Ω, then

    −Δu(x_o) ≥ 0.        (7.14)

We postpone the proof of the Lemma for a moment. Now, based on this Lemma, to prove the theorem, we first consider the perturbation

    u_ε(x) = u(x) − ε|x|².

Obviously,

    −Δu_ε = −Δu + 2nε > 0.        (7.15)

Hence, if there existed a minimum x_o of u_ε in Ω, then by (7.13) we would have −Δu_ε(x_o) ≤ 0, contradicting (7.15). Hence we must have

    min_Ω̄ u_ε ≥ min_∂Ω u_ε.        (7.16)

Now in (7.16), letting ε→0, we arrive at (7.7).

The Proof of Lemma 7.2.1. By the Divergence Theorem,

    ∫_{B_r(x_o)} Δu(x) dx = ∫_{∂B_r(x_o)} (∂u/∂ν) dS = r^{n−1} ∫_{S^{n−1}} (∂u/∂r)(x_o + rω) dS_ω,        (7.17)

where dS_ω is the area element of the (n−1)-dimensional unit sphere S^{n−1} = {ω : |ω| = 1}.
182    7 Maximum Principles

If Δu < 0, then by (7.17),

    (d/dr) { ∫_{S^{n−1}} u(x_o + rω) dS_ω } < 0.        (7.18)

Integrating both sides of (7.18) from 0 to r yields

    ∫_{S^{n−1}} u(x_o + rω) dS_ω − u(x_o) |S^{n−1}| < 0,

where |S^{n−1}| is the area of S^{n−1}. It follows that

    u(x_o) > (1/(r^{n−1}|S^{n−1}|)) ∫_{∂B_r(x_o)} u(x) dS.

This verifies (7.11); part ii) is entirely similar. To see (7.13), we suppose in contrary that −Δu(x_o) > 0. Then, by the continuity of u, there exists a δ > 0 such that

    −Δu(x) > 0,   ∀x ∈ B_δ(x_o).

Consequently, (7.11) holds for any 0 < r < δ. This contradicts with the assumption that x_o is a minimum of u. This completes the proof of the Lemma.

From the proof of Theorem 7.2.1, one can see that if we replace the operator −Δ by −Δ + c(x) with c(x) ≥ 0, then the conclusion of Theorem 7.2.1 is still true (with slight modifications). Furthermore, we can replace the Laplace operator −Δ with general uniformly elliptic operators. Let

    D_i = ∂/∂x_i,   D_{ij} = ∂²/(∂x_i ∂x_j).

Define

    L = − Σ_{ij} a_{ij}(x) D_{ij} + Σ_i b_i(x) D_i + c(x).        (7.19)

Here we always assume that a_{ij}(x), b_i(x), and c(x) are bounded continuous functions in Ω̄. We say that L is uniformly elliptic if

    a_{ij}(x) ξ_i ξ_j ≥ δ |ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and for some δ > 0.

Theorem 7.2.2 (Weak Maximum Principle for L with c(x) ≡ 0.)
Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.

i) If Lu ≥ 0 in Ω, then

    min_Ω̄ u ≥ min_∂Ω u.        (7.20)

ii) If Lu ≤ 0 in Ω, then

    max_Ω̄ u ≤ max_∂Ω u.
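Lemma 7.2.1 can be confirmed on explicit quadratic examples in R² (a numerical sketch; the three test functions are our own choices):

```python
import math

def circle_average(u, r, m=1000):
    # average of u over the circle of radius r centered at the origin
    return sum(u(r * math.cos(2 * math.pi * k / m),
                 r * math.sin(2 * math.pi * k / m)) for k in range(m)) / m

harmonic = lambda x, y: x * x - y * y      # -Laplacian = 0: equality in (7.11)
subharm  = lambda x, y: x * x + y * y      # -Laplacian = -4 < 0: case (7.12)
superh   = lambda x, y: -x * x - y * y     # -Laplacian = 4 > 0: case (7.11)

for r in [0.1, 0.5, 1.0]:
    assert abs(circle_average(harmonic, r) - harmonic(0, 0)) < 1e-9
    assert circle_average(subharm, r) > subharm(0, 0)   # center below average
    assert circle_average(superh, r) < superh(0, 0)     # center above average
print("mean value (in)equalities hold on explicit examples")
```

The harmonic case gives the classical mean value equality; the other two show the strict inequalities of parts i) and ii).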
7.3 The Hopf Lemma and Strong Maximum Principles    183

For c(x) ≥ 0, the principle still applies with slight modifications.

Theorem 7.2.3 (Weak Maximum Principle for L with c(x) ≥ 0.)
Let L be the uniformly elliptic operator defined above. Assume that c(x) ≥ 0. Let

    u⁻(x) = min{u(x), 0}   and   u⁺(x) = max{u(x), 0}.

i) If Lu ≥ 0 in Ω, then

    min_Ω̄ u ≥ min_∂Ω u⁻.

ii) If Lu ≤ 0 in Ω, then

    max_Ω̄ u ≤ max_∂Ω u⁺.

Interested readers may find its proof in many standard books, say in [Ev], page 327.

7.3 The Hopf Lemma and Strong Maximum Principles

In the previous section, we proved a weak form of the maximum principle. In the case Lu ≥ 0, it concludes that the minimum of u is attained at some point on the boundary ∂Ω. However, it does not exclude the possibility that the minimum may also be attained at some point in the interior of Ω. In this section, we will show that this can not actually happen; that is, the minimum value of u can only be achieved on the boundary, unless u is constant. This is called the "Strong Maximum Principle". We will prove it by using the following

Lemma 7.3.1 (Hopf Lemma). Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let

    L = − Σ_{ij} a_{ij}(x) D_{ij} + Σ_i b_i(x) D_i + c(x)

be uniformly elliptic in Ω, with c(x) ≡ 0. Assume that

    Lu ≥ 0   in Ω.        (7.21)

Suppose there is a ball B contained in Ω with a point x^o ∈ ∂Ω ∩ ∂B, and suppose

    u(x) > u(x^o),   ∀x ∈ B.        (7.22)

Then, for any outward directional derivative at x^o,

    ∂u(x^o)/∂ν < 0.        (7.23)
In the case c(x) ≥ 0, if we require additionally that u(x^o) ≤ 0, then the same conclusion of the Hopf Lemma holds.

Proof. Without loss of generality, we may assume that B is centered at the origin with radius r. Define

    w(x) = e^{−αr²} − e^{−α|x|²}.

Consider v(x) = u(x) + εw(x) on the set D = B_{r/2}(x^o) ∩ B (see Figure 3).
[Figure 3: the ball B of radius r centered at 0, the region D = B_{r/2}(x^o) ∩ B, and the boundary point x^o.]
We will choose α and ε appropriately so that we can apply the Weak Maximum Principle to v(x) and arrive at

    v(x) ≥ v(x^o),   ∀x ∈ D.        (7.24)
We postpone the proof of (7.24) for a moment. Now, from (7.24), we have

    ∂v(x^o)/∂ν ≤ 0.        (7.25)

Noticing that

    ∂w(x^o)/∂ν > 0,

we arrive at the desired inequality

    ∂u(x^o)/∂ν < 0.

Now, to complete the proof of the Lemma, what is left is to verify (7.24). We will carry this out in two steps. First, we show that

    Lv ≥ 0.        (7.26)

Hence we can apply the Weak Maximum Principle to conclude that

    min_D̄ v ≥ min_∂D v.        (7.27)
Then we show that the minimum of v on the boundary ∂D is actually attained at x^o:

    v(x) ≥ v(x^o),   ∀x ∈ ∂D.        (7.28)

Obviously, (7.27) and (7.28) imply (7.24). To see (7.26), we directly calculate
    Lw = e^{−α|x|²} { 4α² Σ_{i,j=1}^n a_{ij}(x) x_i x_j − 2α Σ_{i=1}^n [ a_{ii}(x) − b_i(x) x_i ] − c(x) } + c(x) e^{−αr²}

       ≥ e^{−α|x|²} { 4α² Σ_{i,j=1}^n a_{ij}(x) x_i x_j − 2α Σ_{i=1}^n [ a_{ii}(x) − b_i(x) x_i ] − c(x) }.        (7.29)
By the ellipticity assumption, we have

    Σ_{i,j=1}^n a_{ij}(x) x_i x_j ≥ δ |x|² ≥ δ (r/2)² > 0   in D.        (7.30)
Hence we can choose α sufficiently large, such that Lw ≥ 0. This, together with the assumption Lu ≥ 0, implies Lv ≥ 0, and (7.27) follows from the Weak Maximum Principle.

To verify (7.28), we consider the two parts of the boundary ∂D separately.

(i) On ∂D ∩ B, since u(x) > u(x^o), there exists a c_o > 0 such that u(x) ≥ u(x^o) + c_o. Take ε small enough such that ε|w| ≤ c_o on ∂D ∩ B. Hence

    v(x) = u(x) + εw(x) ≥ u(x^o) = v(x^o),   ∀x ∈ ∂D ∩ B.

(ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption u(x) ≥ u(x^o), we have v(x) ≥ v(x^o).

This completes the proof of the Lemma.

Now we are ready to prove

Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0.)
Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum only on ∂Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum only on ∂Ω unless u is constant.

Proof. We prove part i) here; the proof of part ii) is similar. Let m be the minimum value of u in Ω̄. Set

    Σ = { x ∈ Ω : u(x) = m }.

It is relatively closed in Ω. We show that either Σ is empty or Σ = Ω.
7 Maximum Principles
We argue by contradiction. Suppose $\Sigma$ is a nonempty proper subset of $\Omega$. Then we can find an open ball $B \subset \Omega \setminus \Sigma$ with a point on its boundary belonging to $\Sigma$. Actually, we can first find a point $p \in \Omega \setminus \Sigma$ such that $d(p, \Sigma) < d(p, \partial\Omega)$, then increase the radius of a small ball centered at $p$ until it hits $\Sigma$ (before hitting $\partial\Omega$). Let $x^o$ be a point in $\partial B \cap \Sigma$. Obviously, we have in $B$
$$Lu \ge 0 \quad \text{and} \quad u(x) > u(x^o).$$
Now we can apply the Hopf Lemma to conclude that the outward normal derivative
$$\frac{\partial u}{\partial \nu}(x^o) < 0. \qquad (7.31)$$
On the other hand, $x^o$ is an interior minimum of $u$ in $\Omega$, so we must have $\nabla u(x^o) = 0$. This contradicts (7.31) and hence completes the proof of the Theorem.

In the case when $c(x) \ge 0$, the strong maximum principle still applies, with slight modifications.

Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume that $\Omega$ is an open, bounded, and connected domain in $\mathbb{R}^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that $c(x) \ge 0$ in $\Omega$.
i) If $Lu(x) \ge 0$, $x \in \Omega$, then $u$ cannot attain its nonpositive minimum in the interior of $\Omega$ unless $u$ is constant.
ii) If $Lu(x) \le 0$, $x \in \Omega$, then $u$ cannot attain its nonnegative maximum in the interior of $\Omega$ unless $u$ is constant.

Remark 7.3.1 In order for the maximum principle to hold, we assume that the domain $\Omega$ is bounded. This is essential, since it guarantees the existence of the maximum and minimum of $u$ in $\bar\Omega$. A simple counterexample is when $\Omega$ is the half space $\{x \in \mathbb{R}^n \mid x_n > 0\}$ and $u(x) = x_n$. Obviously, $\Delta u = 0$, but $u$ does not obey the maximum principle
$$\max_{\bar\Omega} u \le \max_{\partial\Omega} u.$$
Equally important is the nonnegativeness of the coefficient $c(x)$. For example, set
$$\Omega = \Big\{(x, y) \in \mathbb{R}^2 \;\Big|\; -\frac{\pi}{2} < x < \frac{\pi}{2}, \; -\frac{\pi}{2} < y < \frac{\pi}{2}\Big\}.$$
Then $u = \cos x \cos y$ satisfies
$$\begin{cases}-\Delta u - 2u = 0 & \text{in } \Omega,\\ u = 0 & \text{on } \partial\Omega.\end{cases}$$
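This example can be checked symbolically. The following sketch (Python with sympy; an illustration added here, not part of the original text) verifies that $u = \cos x \cos y$ solves the equation and vanishes on $\partial\Omega$:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.cos(x) * sp.cos(y)

# residual of -Laplacian(u) - 2u: it vanishes identically, so u solves
# the equation with the negative coefficient c(x) = -2
residual = -(sp.diff(u, x, 2) + sp.diff(u, y, 2)) - 2 * u
print(sp.simplify(residual))                          # 0

# u vanishes on the boundary pieces x = +-pi/2 and y = +-pi/2 of Omega
print(u.subs(x, sp.pi / 2), u.subs(y, -sp.pi / 2))    # 0 0
```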
But, obviously, the conclusion of the maximum principle fails here: $-u = -\cos x \cos y$ solves the same problem, and it is negative at some points in $\Omega$. However, if we impose some sign restriction on $u$, say $u \ge 0$, then both conditions can be relaxed. A simple version of such a result will be presented in the next theorem. Also, as one will see in the next section, $c(x)$ is actually allowed to be negative, but not 'too negative'.

Theorem 7.3.3 (Maximum Principle and Hopf Lemma for not necessarily bounded domains and not necessarily nonnegative c(x).) Let $\Omega$ be a domain in $\mathbb{R}^n$ with smooth boundary. Assume that $u \in C^2(\Omega) \cap C(\bar\Omega)$ and satisfies
$$\begin{cases}-\Delta u + \sum_{i=1}^n b_i(x) D_i u + c(x) u \ge 0, \;\; u(x) \ge 0, & x \in \Omega,\\ u(x) = 0, & x \in \partial\Omega,\end{cases} \qquad (7.32)$$
with bounded functions $b_i(x)$ and $c(x)$. Then
i) if $u$ vanishes at some point in $\Omega$, then $u \equiv 0$ in $\Omega$; and
ii) if $u \not\equiv 0$ in $\Omega$, then on $\partial\Omega$, the exterior normal derivative $\frac{\partial u}{\partial \nu} < 0$.
To prove the Theorem, we need the following lemma concerning eigenvalues.

Lemma 7.3.2 Let $\lambda_1$ be the first positive eigenvalue of
$$\begin{cases}-\Delta\varphi = \lambda_1 \varphi(x), & x \in B_1(0),\\ \varphi(x) = 0, & x \in \partial B_1(0),\end{cases} \qquad (7.33)$$
with the corresponding eigenfunction $\varphi(x) > 0$. Then for any $\rho > 0$, the first positive eigenvalue of the problem on $B_\rho(0)$ is $\frac{\lambda_1}{\rho^2}$. More precisely, if we let $\psi(x) = \varphi(\frac{x}{\rho})$, then
$$\begin{cases}-\Delta\psi = \frac{\lambda_1}{\rho^2}\,\psi(x), & x \in B_\rho(0),\\ \psi(x) = 0, & x \in \partial B_\rho(0).\end{cases} \qquad (7.34)$$

The proof is straightforward and will be left to the readers.

The Proof of Theorem 7.3.3. i) Suppose that $u = 0$ at some point in $\Omega$, but $u \not\equiv 0$ in $\Omega$. Let $\Omega^+ = \{x \in \Omega \mid u(x) > 0\}$. Then, by the regularity assumption on $u$, $\Omega^+$ is an open set with $C^2$ boundary. Obviously,
$$u(x) = 0, \quad \forall x \in \partial\Omega^+.$$
Let $x^o$ be a point on $\partial\Omega^+$, but not on $\partial\Omega$. Then for $\rho > 0$ sufficiently small, one can choose a ball $B_{\rho/2}(\bar x) \subset \Omega^+$ with $x^o$ as its boundary point. Let $\psi$ be
the positive eigenfunction of the eigenvalue problem (7.34) on $B_\rho(x^o)$ corresponding to the eigenvalue $\frac{\lambda_1}{\rho^2}$. Obviously, $B_\rho(x^o)$ completely covers $B_{\rho/2}(\bar x)$. Let $v = \frac{u}{\psi}$. Then from (7.32), it is easy to deduce that
$$0 \le -\Delta v - 2\nabla v \cdot \frac{\nabla\psi}{\psi} + \sum_{i=1}^n b_i(x) D_i v + \Big(\frac{-\Delta\psi}{\psi} + \sum_{i=1}^n b_i(x)\frac{D_i\psi}{\psi} + c(x)\Big) v \equiv -\Delta v + \sum_{i=1}^n \tilde b_i(x) D_i v + \tilde c(x) v.$$
Let $\varphi$ be the positive eigenfunction of the eigenvalue problem (7.33) on $B_1$; then
$$\tilde c(x) = \frac{\lambda_1}{\rho^2} + \frac{1}{\rho}\sum_{i=1}^n b_i(x)\frac{D_i\varphi}{\varphi} + c(x).$$
This allows us to choose $\rho$ sufficiently small so that $\tilde c(x) \ge 0$. Now we can apply the Hopf Lemma to conclude that, at the boundary point $x^o$ of $B_{\rho/2}(\bar x)$, the outward normal derivative satisfies
$$\frac{\partial v}{\partial \nu}(x^o) < 0, \qquad (7.35)$$
because
$$v(x) > 0, \;\; \forall x \in B_{\rho/2}(\bar x), \quad \text{and} \quad v(x^o) = \frac{u(x^o)}{\psi(x^o)} = 0.$$
On the other hand, since $x^o$ is also a minimum of $v$ in the interior of $\Omega$, we must have $\nabla v(x^o) = 0$. This contradicts (7.35) and hence proves part i) of the Theorem.

ii) The proof goes almost the same as in part i), except that we consider a point $x^o$ on $\partial\Omega$, and the ball $B_{\rho/2}(\bar x)$ is in $\Omega$ with $x^o \in \partial B_{\rho/2}(\bar x)$. Then for the outward normal derivative of $u$, we have
$$\frac{\partial u}{\partial \nu}(x^o) = \frac{\partial v}{\partial \nu}(x^o)\,\psi(x^o) + v(x^o)\,\frac{\partial \psi}{\partial \nu}(x^o) = \frac{\partial v}{\partial \nu}(x^o)\,\psi(x^o) < 0.$$
Here we have used the well-known fact that the eigenfunction $\psi$ on $B_\rho(x^o)$ is radially symmetric about the center $x^o$, and hence $\nabla\psi(x^o) = 0$. This completes the proof of the Theorem.
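The eigenvalue scaling in Lemma 7.3.2 can also be checked numerically. The sketch below (Python with numpy; a one-dimensional analogue chosen only for illustration, where the "ball" $B_\rho$ is the interval $(-\rho, \rho)$ and $\lambda_1 = (\pi/2)^2$) confirms that the first Dirichlet eigenvalue scales like $\lambda_1/\rho^2$:

```python
import numpy as np

def first_dirichlet_eigenvalue(radius, n=400):
    """Smallest eigenvalue of -d^2/dx^2 on (-radius, radius) with zero
    boundary values, via a second-order finite-difference discretization."""
    h = 2.0 * radius / (n + 1)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.eigvalsh(A)[0]

lam1 = first_dirichlet_eigenvalue(1.0)   # close to (pi/2)^2 = 2.4674...
for rho in (0.5, 2.0, 3.0):
    # first eigenvalue on B_rho versus the predicted lam1 / rho^2
    print(first_dirichlet_eigenvalue(rho), lam1 / rho**2)
```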
7.4 Maximum Principles Based on Comparisons
In the previous section, we showed that if $(-\Delta + c(x))u \ge 0$, then the maximum principle, i.e. (7.20), applies. There, we required $c(x) \ge 0$. We can think of $-\Delta$ as a 'positive' operator, and the maximum principle holds for any 'positive' operators. For $c(x) \ge 0$, $-\Delta + c(x)$ is also 'positive'. Do we really need $c(x) \ge 0$ here? To answer the question, let us consider the Dirichlet eigenvalue problem of $-\Delta$:
$$\begin{cases}-\Delta\varphi - \lambda\varphi(x) = 0, & x \in \Omega,\\ \varphi(x) = 0, & x \in \partial\Omega.\end{cases} \qquad (7.36)$$
We notice that the eigenfunction $\varphi$ corresponding to the first positive eigenvalue $\lambda_1$ is either positive or negative in $\Omega$, so its maximum or minimum is attained in the interior rather than only on the boundary $\partial\Omega$; that is, the solutions of (7.36) with $\lambda = \lambda_1$ do not obey the Maximum Principle. This suggests that, to ensure the Maximum Principle, $c(x)$ need not be nonnegative; it is allowed to be as negative as $-\lambda_1$. More precisely, we can establish the following more general maximum principle based on comparison.

Theorem 7.4.1 Assume that $\Omega$ is a bounded domain. Let $\phi$ be a positive function on $\bar\Omega$ satisfying
$$-\Delta\phi + \lambda(x)\phi \ge 0, \quad \forall x \in \Omega. \qquad (7.37)$$
Assume that $u$ is a solution of
$$\begin{cases}-\Delta u + c(x)u \ge 0, & x \in \Omega,\\ u \ge 0 & \text{on } \partial\Omega.\end{cases} \qquad (7.38)$$
If
$$c(x) > \lambda(x), \quad \forall x \in \Omega, \qquad (7.39)$$
then $u \ge 0$ in $\Omega$.

Proof. We argue by contradiction. Suppose that $u(x) < 0$ somewhere in $\Omega$. Let $v(x) = \frac{u(x)}{\phi(x)}$. Then since $\phi(x) > 0$, we must have $v(x) < 0$ somewhere in $\Omega$. Let $x^o \in \Omega$ be a minimum of $v(x)$. By a direct calculation, it is easy to verify that
$$-\Delta v = 2\nabla v \cdot \frac{\nabla\phi}{\phi} + \frac{1}{\phi}\Big(-\Delta u + \frac{\Delta\phi}{\phi}\,u\Big). \qquad (7.40)$$
On one hand, since $x^o$ is a minimum, we have
$$-\Delta v(x^o) \le 0 \quad \text{and} \quad \nabla v(x^o) = 0. \qquad (7.41)$$
While on the other hand, at the point $x^o$, by (7.37), (7.39), and taking into account that $u(x^o) < 0$, we have
$$\Big(-\Delta u + \frac{\Delta\phi}{\phi}\,u\Big)(x^o) \ge -\Delta u(x^o) + \lambda(x^o)u(x^o) > -\Delta u(x^o) + c(x^o)u(x^o) \ge 0.$$
Hence, by (7.40) and (7.41), $-\Delta v(x^o) > 0$. This is an obvious contradiction with (7.41), and thus completes the proof of the Theorem.
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39) are required only at the points where $v$ attains its minimum, or at points where $u$ is negative.

The Theorem is also valid on an unbounded domain if $u$ is "nonnegative" at infinity:

Theorem 7.4.2 If $\Omega$ is an unbounded domain, besides condition (7.38), we assume further that
$$\liminf_{|x|\to\infty} u(x) \ge 0. \qquad (7.42)$$
Then $u \ge 0$ in $\Omega$.

Proof. Still consider the same $v(x)$ as in the proof of Theorem 7.4.1. Now condition (7.42) guarantees that the minima of $v(x)$ do not "leak" away to infinity. Then the rest of the arguments are exactly the same as in the proof of Theorem 7.4.1.

For convenience in applications, we provide two typical situations where there exist such functions $\phi$ and $c(x)$ satisfying conditions (7.37) and (7.39), so that the Maximum Principle Based on Comparison applies:
i) narrow regions, and
ii) $c(x)$ decays fast enough at $\infty$.

i) Narrow Regions. When $\Omega = \{x \mid 0 < x_1 < l\}$ is a narrow region with width $l$, we can choose
$$\phi(x) = \sin\Big(\frac{x_1 + \epsilon}{l}\Big).$$
Then it is easy to see that $-\Delta\phi = (\frac{1}{l})^2\phi$, so that (7.37) holds with $\lambda(x) = -\frac{1}{l^2}$, which can be very negative when $l$ is sufficiently small.

Corollary 7.4.1 (Narrow Region Principle.) If $u$ satisfies (7.38) with bounded function $c(x)$, then when the width $l$ of the region $\Omega$ is sufficiently small, $c(x)$ satisfies (7.39), i.e. $c(x) > \lambda(x) = -\frac{1}{l^2}$. Hence we can directly apply Theorem 7.4.1 to conclude that $u \ge 0$ in $\Omega$, provided $\liminf_{|x|\to\infty} u(x) \ge 0$.

ii) Decay at Infinity. In dimension $n \ge 3$, one can choose some positive number $q < n - 2$ and let
$$\varphi(x) = \frac{1}{|x|^q}.$$
Then it is easy to verify that
∀(x. and for the region Ω as stated in Corollary 7.44) 1 with xs s(p − 1) > 2. they can be easily applied to a nonlinear equation.3 Although Theorem 7. If further assume that u ∂Ω ≥ 0. one can see that actually condition (7. for example. Assume that the solution u decays near inﬁnity at the rate of (7.38) on ¯ Ω. y)f (y)dy.2.4.43) in Ω. Assume K(x.2 (Decay at Inﬁnity) Assume there exist R > 0. Let Ω be a region in Rn .4.4. y) ∈ Ω × Ω.4.2 that u ≥ 0 in the entire region Ω. Remark 7.4. Let c(x) = −u(x)p−1 . y) ≥ 0.4. Deﬁne the integral operator T by (T f )(x) = Ω K(x. such that c(x) > − Suppose u(x) lim = 0. q(n − 2 − q) . . x2 (7. − u − up−1 u = 0 x ∈ Rn .7. x2 In the case c(x) decays fast enough near inﬁnity.4. Then for R suﬃciently large.43) Remark 7. x→∞ xq C Let Ω be a region containing in BR (0) ≡ Rn \ BR (0). may or may not be bounded. 7.1. we can adapt the proof of Theorem 7.2 From Remark 7. then we can derive from Corollary 7. we introduce a maximum principle for integral equations.5 A Maximum Principle for Integral Equations 191 − φ= q(n − 2 − q) φ.5 A Maximum Principle for Integral Equations In this section.4.1 as well as its Corollaries are stated in linear forms.1 to derive Corollary 7. ∀x > R. c(x) satisﬁes (7. then u(x) ≥ 0 for all x ∈ Ω. If u satisﬁes (7.43) is only required at points where u is negative.
7.5 A Maximum Principle for Integral Equations

In this section, we introduce a maximum principle for integral equations. Let $\Omega$ be a region in $\mathbb{R}^n$, which may or may not be bounded. Define the integral operator $T$ by
$$(Tf)(x) = \int_\Omega K(x, y) f(y)\, dy,$$
where the kernel satisfies $K(x, y) \ge 0$, $\forall (x, y) \in \Omega \times \Omega$. A typical example is
$$(Tf)(x) = \int_\Omega \frac{1}{|x - y|^{n-\alpha}}\, c(y) f(y)\, dy,$$
where $\alpha$ is a number between 0 and $n$ and $c(y) \ge 0$; in this setting one often assumes that $f$ satisfies the integral equation $f = Tf$ in $\Omega$.

Let $\|\cdot\|$ be a norm in a Banach space satisfying
$$\|f\| \le \|g\| \quad \text{whenever } 0 \le f(x) \le g(x) \text{ in } \Omega. \qquad (7.45)$$
There are many norms that satisfy (7.45), such as the $L^p(\Omega)$ norms, the $L^\infty(\Omega)$ norm, and the $C(\Omega)$ norm.

Theorem 7.5.1 Suppose that
$$\|Tg\| \le \theta \|g\| \quad \text{with some } 0 < \theta < 1, \; \forall g. \qquad (7.46)$$
If
$$f(x) \le (Tf)(x), \qquad (7.47)$$
then $f(x) \le 0$ for all $x \in \Omega$.

Proof. Define
$$f^+(x) = \max\{0, f(x)\}, \qquad f^-(x) = \min\{0, f(x)\}, \qquad (7.48)$$
and
$$\Omega^+ = \{x \in \Omega \mid f(x) > 0\}.$$
Then it is easy to see that, for any $x \in \Omega^+$,
$$0 \le f^+(x) \le (Tf)(x) = (Tf)^+(x) + (Tf)^-(x) \le (Tf)^+(x). \qquad (7.49)$$
While in the rest of the region $\Omega$, we have $f^+(x) = 0$ and $(Tf)^+(x) \ge 0$ by the definitions. Therefore, (7.49) holds for any $x \in \Omega$. It follows from (7.45), (7.49), the nonnegativity of the kernel (which gives $(Tf)^+ \le (Tf^+)$), and (7.46) that
$$\|f^+\| \le \|(Tf)^+\| \le \|Tf^+\| \le \theta \|f^+\|.$$
Since $\theta < 1$, we must have $\|f^+\| = 0$, which implies that $f^+(x) \equiv 0$, and hence $f(x) \le 0$. This completes the proof of the Theorem.
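A finite-dimensional analogue makes the mechanism of this proof transparent. In the sketch below (Python with numpy; purely illustrative, with a matrix playing the role of the integral operator), $T$ has nonnegative entries and operator norm $\theta = 0.9 < 1$ in the maximum norm, and any vector satisfying $f \le Tf$ componentwise is indeed nonpositive:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 50, 0.9

# nonnegative "kernel" with max row sum (the L^inf operator norm) equal to theta
K = rng.random((n, n))
T = theta * K / K.sum(axis=1, keepdims=True)

# build an f with f <= T f by solving f = T f + g for some g <= 0; then
# f = (I - T)^{-1} g = sum_k T^k g <= 0, as the theorem predicts
g = -rng.random(n)
f = np.linalg.solve(np.eye(n) - T, g)

print(np.all(f <= T @ f + 1e-12))   # True: the hypothesis f <= Tf holds
print(f.max() <= 0)                 # True: the conclusion f <= 0 holds
```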
Corollary 7.5.1 Assume that $c(y) \ge 0$ and $c(y) \in L^\tau(\mathbb{R}^n)$ for some $\tau > 1$. Let $f \in L^p(\mathbb{R}^n)$ satisfy the integral inequality
$$f \le Tf \quad \text{in } \Omega, \qquad (7.50)$$
and assume further that
$$\|Tf\|_{L^p(\Omega)} \le C\, \|c(y)\|_{L^\tau(\Omega)}\, \|f\|_{L^p(\Omega)} \qquad (7.51)$$
for some $p$. Then there exist positive numbers $R_o$ and $\epsilon_o$, depending on $c(y)$ only, such that if $\mu(\Omega \cap B_{R_o}(0)) \le \epsilon_o$, then $f^+ \equiv 0$ in $\Omega$. Here $\mu(D)$ is the measure of the set $D$.

Proof. Since $c(y) \in L^\tau(\mathbb{R}^n)$, by Lebesgue integral theory, we can make the integral $\int_\Omega |c(y)|^\tau dy$ as small as we wish when the measure of the intersection of $\Omega$ with $B_{R_o}(0)$ is sufficiently small, and thus obtain
$$C\, \|c(y)\|_{L^\tau(\Omega)} < 1.$$
Now it follows from Theorem 7.5.1 that $f^+(x) \equiv 0$. This completes the proof of the Corollary.

Remark 7.5.1 One can see that the condition $\mu(\Omega \cap B_{R_o}(0)) \le \epsilon_o$ in the Corollary is satisfied in the following two situations:
i) Narrow Regions: the width of $\Omega$ is very small;
ii) Near Infinity: say, $\Omega = B_R^C(0)$, the complement of the ball $B_R(0)$, with sufficiently large $R$.

As an immediate application of this "Maximum Principle", which will be applied to "Narrow Regions" and "Near Infinity", we study an integral equation in the next section. We will use the method of moving planes to obtain the radial symmetry and monotonicity of the positive solutions.
8 Methods of Moving Planes and of Moving Spheres

8.1 Outline of the Method of Moving Planes
8.2 Applications of Maximum Principles Based on Comparison
  8.2.1 Symmetry of Solutions on a Unit Ball
  8.2.2 Symmetry of Solutions of $-\Delta u = u^p$ in $\mathbb{R}^n$
  8.2.3 Symmetry of Solutions of $-\Delta u = e^u$ in $\mathbb{R}^2$
8.3 Method of Moving Planes in a Local Way
  8.3.1 The Background
  8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
  8.4.1 The Background
  8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and Symmetry of Solutions for Integral Equations

The Method of Moving Planes (MMP) was invented by the Soviet mathematician Alexanderoff in the early 1950s. Decades later, it was further developed by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas, and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY], and many others. The Method of Moving Planes and its variant, the Method of Moving Spheres, have become powerful tools in establishing symmetries and monotonicity for solutions of partial differential equations. They can also be used to obtain a priori estimates, to derive useful inequalities, and to prove nonexistence of solutions. This method has been applied to free boundary problems, semilinear partial differential equations, and other problems. Particularly for semilinear partial differential equations, there have been many significant contributions. We refer to the paper of Frenkel [F] for more descriptions on the method.
From the previous chapter, we have seen the beauty and power of maximum principles. The MMP greatly enhances the power of maximum principles. Roughly speaking, the MMP is a continuous way of repeated applications of maximum principles. During the moving of planes, the maximum principle has been used infinitely many times, and the advantage is that each time we only need to use the maximum principle in a very narrow region. In this chapter, we will apply the Method of Moving Planes and its variant, the Method of Moving Spheres, to study semilinear elliptic equations and integral equations. We will establish symmetry, monotonicity, a priori estimates, and nonexistence of the solutions. During this process, the Maximum Principles introduced in the previous chapter are applied in innovative ways. In particular, the Narrow Region Principle and the Decay at Infinity Principle will be used repeatedly in dealing with the three examples below. We recommend that the readers study this part carefully, so that they will be able to apply these methods to their own research.

In Section 8.2, we will establish radial symmetry and monotonicity for the solutions of the following three semilinear elliptic problems:
$$\begin{cases}-\Delta u = f(u), & x \in B_1(0),\\ u = 0 & \text{on } \partial B_1(0),\end{cases}$$
$$-\Delta u = u^{\frac{n+2}{n-2}}(x), \quad x \in \mathbb{R}^n, \; n \ge 3,$$
and
$$-\Delta u = e^{u(x)}, \quad x \in \mathbb{R}^2.$$

In Section 8.3, we will apply the Method of Moving Planes in a 'local way' to obtain a priori estimates on the solutions of the prescribing scalar curvature equation
$$-\frac{4(n-1)}{n-2}\,\Delta_o u + R_o(x)\,u = R(x)\,u^{\frac{n+2}{n-2}} \quad \text{in } M,$$
on a compact Riemannian manifold $M$.

To investigate the symmetry and monotonicity of solutions of integral equations, the Maximum Principles Based on Comparison will play a major role. It is well known that, by using a Green's function, one can change a differential equation into an integral equation, and under certain conditions, they are equivalent. The authors, together with Ou, created an integral form of MMP: instead of using local properties (say differentiability) of a differential equation, they employed the global properties of the solutions of integral equations. In our research practice, we also introduced a form of maximum principle at infinity into the MMP, and thereby simplified many proofs and extended the results in more natural ways.
We allow the function $R(x)$ to change signs. In this situation, the traditional blowing-up analysis fails near the set where $R(x) = 0$. We will use the Method of Moving Planes in an innovative way to obtain a priori estimates.

In Section 8.4, we use the Method of Moving Spheres to prove the nonexistence of solutions for the prescribing Gaussian and scalar curvature equations
$$-\Delta u + 2 = R(x)\,e^u$$
on $S^2$, and
$$-\Delta u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}$$
on $S^n$ ($n \ge 3$), respectively. We prove that if the function $R(x)$ is rotationally symmetric and monotone in the region where it is positive, then both equations admit no solution. This provides a stronger necessary condition than the well-known Kazdan-Warner condition, and it also becomes a sufficient condition for the existence of solutions in most cases. Since the Method of Moving Planes can not be applied to the solution $u$ directly, we introduce an auxiliary function to circumvent this difficulty.

In Section 8.5, as an application of the maximum principle for integral equations introduced in Section 7.5, we study the integral equation in $\mathbb{R}^n$
$$u(x) = \int_{\mathbb{R}^n} \frac{1}{|x - y|^{n-\alpha}}\, u^{\frac{n+\alpha}{n-\alpha}}(y)\, dy,$$
for any real number $\alpha$ between 0 and $n$. It arises as an Euler-Lagrange equation for a functional in the context of the Hardy-Littlewood-Sobolev inequalities. Due to the different nature of the integral equation, the traditional Method of Moving Planes does not work. Hence we exploit its global property and develop a new idea, the Integral Form of the Method of Moving Planes, to obtain the symmetry and monotonicity of the solutions.

8.1 Outline of the Method of Moving Planes

To outline how the Method of Moving Planes works, we take the Euclidean space $\mathbb{R}^n$ for an example. Let $u$ be a positive solution of a certain partial differential equation. If we want to prove that it is symmetric and monotone in a given direction, we may assign that direction as the $x_1$ axis. For any real number $\lambda$, let
$$T_\lambda = \{x = (x_1, x_2, \cdots, x_n) \in \mathbb{R}^n \mid x_1 = \lambda\}.$$
This is a plane perpendicular to the $x_1$ axis, and it is the plane that we will move with. Let $\Sigma_\lambda$ denote the region to the left of the plane.
$$\Sigma_\lambda = \{x \in \mathbb{R}^n \mid x_1 < \lambda\}.$$
Let
$$x^\lambda = (2\lambda - x_1, x_2, \cdots, x_n),$$
the reflection of the point $x = (x_1, x_2, \cdots, x_n)$ about the plane $T_\lambda$ (see Figure 1, which shows the plane $T_\lambda$, a point $x$ in $\Sigma_\lambda$, and its reflection $x^\lambda$).

We compare the values of the solution $u$ at the points $x$ and $x^\lambda$. To this end, let
$$w_\lambda(x) = u(x^\lambda) - u(x).$$
In order to show that $u$ is symmetric about some plane $T_{\lambda_o}$, that is, $w_{\lambda_o}(x) \equiv 0$ for all $x \in \Sigma_{\lambda_o}$, we generally go through the following two steps.

Step 1. We first show that for $\lambda$ sufficiently negative, we have
$$w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda. \qquad (8.1)$$
Then we are able to start off from this neighborhood of $x_1 = -\infty$, and move the plane $T_\lambda$ along the $x_1$ direction to the right as long as the inequality (8.1) holds.

Step 2. We continuously move the plane this way up to its limiting position. More precisely, we define
$$\lambda_o = \sup\{\lambda \mid w_\lambda(x) \ge 0, \; \forall x \in \Sigma_\lambda\}.$$
We prove that $u$ is symmetric about the plane $T_{\lambda_o}$, that is,
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}.$$
This is usually carried out by a contradiction argument. We show that if $w_{\lambda_o}(x) \not\equiv 0$, then there would exist $\lambda > \lambda_o$ such that (8.1) holds, and this contradicts the definition of $\lambda_o$.

From the above illustration, one can see that the key to the Method of Moving Planes is to establish inequality (8.1). For partial differential equations, maximum principles are powerful tools for this task, while for integral equations, we use a different idea: we estimate a certain norm of $w_\lambda$ on the set
$$\Sigma_\lambda^- = \{x \in \Sigma_\lambda \mid w_\lambda(x) < 0\},$$
where the inequality (8.1) is violated. We show that this norm must be zero, and hence $\Sigma_\lambda^-$ is empty.
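The two steps can be illustrated numerically in one dimension. In the sketch below (Python with numpy), the profile $u(x) = \frac{1}{1+x^2}$ is a hypothetical symmetric, decaying function chosen only for demonstration, not a solution from the text; the minimum of $w_\lambda$ over $\Sigma_\lambda$ stays nonnegative for every $\lambda \le 0$ and first fails past the plane of symmetry $\lambda_o = 0$:

```python
import numpy as np

u = lambda x: 1.0 / (1.0 + x**2)   # symmetric about 0, decaying at infinity

def w_min(lam, n=10001):
    """Minimum of w_lambda(x) = u(x^lambda) - u(x) over (a truncation of)
    Sigma_lambda = {x < lambda}, with the reflection x^lambda = 2*lambda - x."""
    x = np.linspace(-50.0, lam, n)
    return np.min(u(2 * lam - x) - u(x))

print(all(w_min(lam) >= -1e-12 for lam in (-10.0, -1.0, -0.1, 0.0)))  # True
print(w_min(0.5) < 0)   # True: inequality (8.1) fails once lambda > lambda_o
```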
8.2 Applications of the Maximum Principles Based on Comparisons

In this section, we study some semilinear elliptic equations. We will apply the method of moving planes to establish the symmetry of the solutions. The essence of the method of moving planes is the application of various maximum principles. In the proof of each theorem, the readers will see vividly how the Maximum Principles Based on Comparisons are applied to narrow regions and to solutions with decay at infinity.

8.2.1 Symmetry of Solutions in a Unit Ball

We first begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:

Theorem 8.2.1 Assume that $f(\cdot)$ is a Lipschitz continuous function such that
$$|f(p) - f(q)| \le C_o\,|p - q| \qquad (8.2)$$
for some constant $C_o$. Then every positive solution $u$ of
$$\begin{cases}-\Delta u = f(u), & x \in B_1(0),\\ u = 0 & \text{on } \partial B_1(0),\end{cases} \qquad (8.3)$$
is radially symmetric and monotone decreasing about the origin.

Proof. As shown on Figure 4, let $T_\lambda = \{x \mid x_1 = \lambda\}$ be the plane perpendicular to the $x_1$ axis, and let $\Sigma_\lambda$ be the part of $B_1(0)$ which is on the left of the plane $T_\lambda$. For each $x \in \Sigma_\lambda$, let $x^\lambda = (2\lambda - x_1, x_2, \cdots, x_n)$ be the reflection of the point $x$ about the plane $T_\lambda$. We compare the values of the solution $u$ on $\Sigma_\lambda$ with those on its reflection. Let
$$u_\lambda(x) = u(x^\lambda), \quad \text{and} \quad w_\lambda(x) = u_\lambda(x) - u(x).$$
Then it is easy to see that $u_\lambda$ satisfies the same equation as $u$ does. Applying the Mean Value Theorem to $f(u)$, one can verify that $w_\lambda$ satisfies
$$\Delta w_\lambda + C(x, \lambda)\,w_\lambda(x) = 0, \quad x \in \Sigma_\lambda, \qquad (8.4)$$
where
$$C(x, \lambda) = \frac{f(u_\lambda(x)) - f(u(x))}{u_\lambda(x) - u(x)},$$
and by condition (8.2), $|C(x, \lambda)| \le C_o$.

Step 1: Start Moving the Plane. We start from the near left end of the region. Obviously, for $\lambda$ sufficiently close to $-1$, $\Sigma_\lambda$ is a narrow (in the $x_1$ direction) region, and on $\partial\Sigma_\lambda$, $w_\lambda(x) \ge 0$. (On $T_\lambda$, $w_\lambda(x) = 0$; while on the curved part of $\partial\Sigma_\lambda$, $w_\lambda(x) > 0$ since $u > 0$ in $B_1(0)$.) Now we can apply the "Narrow Region Principle" (Corollary 7.4.1) to conclude that, for $\lambda$ close to $-1$,
$$w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda. \qquad (8.5)$$
This provides a starting point for us to move the plane $T_\lambda$.

Step 2: Move the Plane to Its Right Limit. We now increase the value of $\lambda$ continuously; that is, we move the plane $T_\lambda$ to the right as long as the inequality (8.5) holds. We show that, by moving this way, the plane will not stop before hitting the origin. More precisely, letting
$$\bar\lambda = \sup\{\lambda \mid w_\lambda(x) \ge 0, \; \forall x \in \Sigma_\lambda\},$$
we first claim that
$$\bar\lambda \ge 0. \qquad (8.6)$$
Otherwise, we will show that the plane can be further moved to the right by a small distance, and this would contradict the definition of $\bar\lambda$. In fact, if $\bar\lambda < 0$, then the image of the curved surface part of $\partial\Sigma_{\bar\lambda}$ under the reflection about $T_{\bar\lambda}$ lies inside $B_1(0)$, where $u(x) > 0$ by assumption. It follows that, on this part of $\partial\Sigma_{\bar\lambda}$, $w_{\bar\lambda}(x) > 0$. By the Strong Maximum Principle, we deduce that
$$w_{\bar\lambda}(x) > 0$$
in the interior of $\Sigma_{\bar\lambda}$. Let $d_o$ be the maximum width of narrow regions to which we can apply the "Narrow Region Principle". Then, in particular, $w_{\bar\lambda}$ is positive and bounded away from 0 on $\Sigma_{\bar\lambda - \frac{d_o}{2}}$: there exists a constant $c_o > 0$ such that
$$w_{\bar\lambda}(x) \ge c_o, \quad \forall x \in \Sigma_{\bar\lambda - \frac{d_o}{2}}.$$
Since $w_\lambda$ is continuous in $\lambda$, for $\delta$ sufficiently small, we still have
$$w_{\bar\lambda + \delta}(x) \ge 0, \quad \forall x \in \Sigma_{\bar\lambda - \frac{d_o}{2}}.$$
Choose a small positive number $\delta \le \frac{d_o}{2}$, and consider the function $w_{\bar\lambda + \delta}(x)$ on the narrow region (see Figure 5)
$$\Omega_\delta = \Sigma_{\bar\lambda + \delta} \cap \Big\{x \;\Big|\; x_1 > \bar\lambda - \frac{d_o}{2}\Big\}.$$
It satisfies
$$\begin{cases}\Delta w_{\bar\lambda + \delta} + C(x, \bar\lambda + \delta)\,w_{\bar\lambda + \delta} = 0, & x \in \Omega_\delta,\\ w_{\bar\lambda + \delta}(x) \ge 0, & x \in \partial\Omega_\delta.\end{cases} \qquad (8.7)$$
The equation is obvious. To see the boundary condition, we first notice that it is satisfied on the two curved parts and on the flat part where $x_1 = \bar\lambda + \delta$ of the boundary $\partial\Omega_\delta$, by the definition of $w_{\bar\lambda + \delta}$. On the rest of the boundary, where $x_1 = \bar\lambda - \frac{d_o}{2}$, the continuity argument above shows that (8.7) holds for such small $\delta$. Now we can apply the "Narrow Region Principle" to conclude that
$$w_{\bar\lambda + \delta}(x) \ge 0, \quad \forall x \in \Omega_\delta,$$
and therefore
$$w_{\bar\lambda + \delta}(x) \ge 0, \quad \forall x \in \Sigma_{\bar\lambda + \delta}.$$
This contradicts the definition of $\bar\lambda$ and thus establishes (8.6). In particular, $w_0(x) \ge 0$ for $x \in \Sigma_0$, that is,
$$u(-x_1, x') \le u(x_1, x'), \quad \forall x_1 \ge 0, \qquad (8.8)$$
where $x' = (x_2, \cdots, x_n)$. Similarly, we can start from $\lambda$ close to 1 and move the plane $T_\lambda$ toward the left, to obtain
$$u(-x_1, x') \ge u(x_1, x'), \quad \forall x_1 \ge 0. \qquad (8.9)$$
Combining the two opposite inequalities (8.8) and (8.9), we see that $u(x)$ is symmetric about the plane $T_0$. Since we can place the $x_1$ axis in any direction, we conclude that $u(x)$ must be radially symmetric about the origin. Also, the monotonicity easily follows from the argument. This completes the proof.

8.2.2 Symmetry of Solutions of $-\Delta u = u^p$ in $\mathbb{R}^n$

In an elegant paper of Gidas, Ni, and Nirenberg [2], an interesting result is the symmetry of the positive solutions of the semilinear elliptic equation
$$\Delta u + u^p = 0, \quad x \in \mathbb{R}^n, \; n \ge 3. \qquad (8.10)$$
They proved

Theorem 8.2.2 For $p = \frac{n+2}{n-2}$, all the positive solutions of (8.10) with reasonable behavior at infinity, namely
$$u = O\Big(\frac{1}{|x|^{n-2}}\Big),$$
are radially symmetric and monotone decreasing about some point, and hence assume the form
$$u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x - x^o|^2)^{\frac{n-2}{2}}}$$
for some $\lambda > 0$ and $x^o \in \mathbb{R}^n$.

This uniqueness result, as was pointed out by R. Schoen, is in fact equivalent to the geometric result due to Obata [O]: a Riemannian metric on $S^n$ which is conformal to the standard one and having the same constant scalar curvature is the pull back of the standard one under a conformal map of $S^n$ to itself.

Recently, Caffarelli, Gidas, and Spruck [CGS] removed the decay assumption $u = O(|x|^{2-n})$ and proved the same result. In the case that $1 \le p < \frac{n+2}{n-2}$, Gidas and Spruck [GS] showed that the only nonnegative solution of (8.10) is identically zero. Later, in the authors' paper [CL1], a simpler and more elementary proof was given for almost the same result:

Theorem 8.2.3 i) For $p = \frac{n+2}{n-2}$, every positive $C^2$ solution of (8.10) must be radially symmetric and monotone decreasing about some point, and hence assumes the form
$$u(x) = \frac{[n(n-2)\lambda^2]^{\frac{n-2}{4}}}{(\lambda^2 + |x - x^o|^2)^{\frac{n-2}{2}}}$$
for some $\lambda > 0$ and $x^o \in \mathbb{R}^n$.
ii) For $p < \frac{n+2}{n-2}$, the only nonnegative solution of (8.10) is identically zero.
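The explicit form in these theorems can be verified directly. The sketch below (Python with sympy; taking $n = 3$ and center $x^o = 0$, so that the Laplacian reduces to its radial form) checks that the stated $u$ solves $\Delta u + u^{\frac{n+2}{n-2}} = 0$:

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)
n = 3
u = (n * (n - 2) * lam**2)**sp.Rational(n - 2, 4) \
    / (lam**2 + r**2)**sp.Rational(n - 2, 2)

# radial Laplacian in R^n plus the critical nonlinearity u^{(n+2)/(n-2)}
lap = sp.diff(u, r, 2) + (n - 1) / r * sp.diff(u, r)
residual = sp.simplify(lap + u**sp.Rational(n + 2, n - 2))
print(residual)   # 0
```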
The proof of Theorem 8.2.2 is actually included in the more general proof of the first part of Theorem 8.2.3. However, to better illustrate the idea, we will first present the proof of Theorem 8.2.2 (mostly in our own idea). And the readers will see vividly how the "Decay at Infinity" principle is applied here.

Proof of Theorem 8.2.2. Define
$$\Sigma_\lambda = \{x = (x_1, \cdots, x_n) \in \mathbb{R}^n \mid x_1 < \lambda\}, \qquad T_\lambda = \partial\Sigma_\lambda,$$
and let $x^\lambda = (2\lambda - x_1, x_2, \cdots, x_n)$ be the reflection point of $x$ about the plane $T_\lambda$ (see the previous Figure 1). Let
$$u_\lambda(x) = u(x^\lambda) \quad \text{and} \quad w_\lambda(x) = u_\lambda(x) - u(x).$$
The proof consists of three steps. In the first step, we start from the very left end of our region $\mathbb{R}^n$, that is, near $x_1 = -\infty$. We will show that, for $\lambda$ sufficiently negative,
$$w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda. \qquad (8.11)$$
Then, in the second step, we will move our plane $T_\lambda$ in the $x_1$ direction toward the right, as long as inequality (8.11) holds. The plane will stop at some limiting position, say at $\lambda = \lambda_o$. We will show that
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}.$$
This implies that the solution $u$ is symmetric and monotone decreasing about the plane $T_{\lambda_o}$. Since the $x_1$ axis can be chosen in any direction, we conclude that $u$ must be radially symmetric and monotone about some point. Finally, in the third step, using the uniqueness theory in Ordinary Differential Equations, we will show that the solutions can only assume the given form.

Step 1. Prepare to Move the Plane from Near $-\infty$. Write $\tau = \frac{n+2}{n-2}$. By the definition of $u_\lambda$, it is easy to see that $u_\lambda$ satisfies the same equation as $u$ does. Then by the Mean Value Theorem, it is easy to verify that
$$-\Delta w_\lambda = u_\lambda^\tau(x) - u^\tau(x) = \tau\,\psi_\lambda^{\tau-1}(x)\,w_\lambda(x), \qquad (8.12)$$
where $\psi_\lambda(x)$ is some number between $u_\lambda(x)$ and $u(x)$. Recalling the "Maximum Principle Based on Comparison" (Theorem 7.4.1), we see that here $c(x) = -\tau\psi_\lambda^{\tau-1}(x)$. To verify (8.11) for $\lambda$ sufficiently negative, by the "Decay at Infinity" argument (Corollary 7.4.2), it suffices to check the decay rate of $\tau\psi_\lambda^{\tau-1}(x)$, and more precisely, only at the points $\tilde x$ where $w_\lambda$ is negative (see Remark 7.4.2). Apparently, at these points,
$$u_\lambda(\tilde x) < u(\tilde x),$$
and hence
$$0 \le u_\lambda(\tilde x) \le \psi_\lambda(\tilde x) \le u(\tilde x).$$
By the decay assumption on the solution,
$$u(x) = O\Big(\frac{1}{|x|^{n-2}}\Big),$$
we derive immediately that
$$\tau\psi_\lambda^{\tau-1}(\tilde x) = O\Big(\frac{1}{|\tilde x|^4}\Big),$$
which is what (actually more than) we desire for, because here the power of $\frac{1}{|\tilde x|}$ is greater than two. Hence we can apply the "Maximum Principle Based on Comparison" to conclude that, for $\lambda$ sufficiently negative ($|\tilde x|$ sufficiently large), we must have (8.11). This completes the preparation for the moving of planes.

Step 2. Move the Plane to the Limiting Position to Derive Symmetry. Now we can move the plane $T_\lambda$ toward the right, i.e. increase the value of $\lambda$, as long as the inequality (8.11) holds. Define
$$\lambda_o = \sup\{\lambda \mid w_\lambda(x) \ge 0, \; \forall x \in \Sigma_\lambda\}.$$
Obviously, $\lambda_o < +\infty$, due to the asymptotic behavior of $u$ near $x_1 = +\infty$. We claim that
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}. \qquad (8.13)$$
Otherwise, by the "Strong Maximum Principle" on unbounded domains (see Theorem 7.3.3), we have
$$w_{\lambda_o}(x) > 0 \qquad (8.14)$$
in the interior of $\Sigma_{\lambda_o}$. We show that the plane $T_{\lambda_o}$ can still be moved a small distance to the right; more precisely, that there exists a $\delta_o > 0$ such that, for all $0 < \delta < \delta_o$,
$$w_{\lambda_o + \delta}(x) \ge 0, \quad \forall x \in \Sigma_{\lambda_o + \delta}. \qquad (8.15)$$
This would contradict the definition of $\lambda_o$, and hence (8.13) must hold. Recall that in the last section, we used the "Narrow Region Principle" to derive (8.15). Unfortunately, it can not be applied in this situation.
xo must be on the boundary of Σλo . Then by the Hopf Lemma (see Theorem 7. ∀ i = 1.8.2.3). we already know wλo ≥ 0.17) φ φ + − wλ + wλ φ φ 1 φ (8.1. ∂ν This contradicts with (8. ¯ ¯ (8. to verify (8.2 Applications of the Maximum Principles Based on Comparisons 205 the “narrow region” here is unbounded. Then there exists a sequence of numbers {δi } tending to 0 and for each i.1 There exists a Ro > 0 ( independent of λ).2. we must have wλo (xo ) = 0. ¯ ¯ We postpone the proof of the Lemma for a moment. φ(x) 1 with 0 < q < n − 2. . ¯ ¯ i→∞ However. the outward normal derivative ∂wλo o (x ) < 0. It ¯ ¯ follows that wλo (xo ) = wλo (xo )φ(xo ) + wλo (xo ) φ = 0 + 0 = 0. · · · . we have. xq Then it is a straight forward calculation to verify that − wλ = 2 wλ · ¯ ¯ We have Lemma 8.14). Now. Consequently. we have xi  ≤ Ro . the corresponding negative minimum xi of wλo +δi . To overcome this diﬃculty. Then. Now suppose that (8. we introduce a new function wλ (x) = ¯ where φ(x) = wλ (x) . by (8. therefore. then xo  < Ro . what left is to prove the Lemma.15) is violated for any δ > 0. wλo (xo ) = lim ¯ i→ ∞ and wλo +δi (xi ) = 0 ¯ (8.3. such that if xo is a minimum point of wλ and wλ (xo ) < 0. By Lemma 8.18) On the other hand. since wλo (xo ) = 0.16) wλo (xo ) = lim wλo +δi (xi ) ≤ 0. there is a subsequence of {xi } (still denoted by {xi }) which converges to some point xo ∈ Rn .15).18). 2. and we are not able to guarantee that wλo is bounded away from 0 on the left boundary of the “narrow region”.
Proof of Lemma 8.2.1. Assume that xo is a negative minimum of w̄λ. Then

−Δw̄λ(xo) ≤ 0 and ∇w̄λ(xo) = 0. (8.18)

On the other hand, a direct computation gives

Δφ/φ (x) = −q(n − 2 − q)/|x|².

By the asymptotic behavior of u at infinity, c(xo) := −τψλ^{τ−1}(xo) = O(1/|xo|^4); hence, if |xo| is sufficiently large,

c(xo) > −q(n − 2 − q)/|xo|².

It follows from this and (8.17) that, at the negative minimum xo,

−Δw̄λ(xo) = (τψλ^{τ−1}(xo) + Δφ/φ(xo)) w̄λ(xo) > 0. (8.19)

This contradicts (8.18), and hence completes the proof of the Lemma.

Step 3. In the previous two steps, we showed that the positive solutions of (8.10) are symmetric and monotone decreasing in the x1 direction about the limiting plane. Since the x1 direction can be chosen arbitrarily, we conclude that every positive solution of (8.10) must be radially symmetric and monotone decreasing about some point in Rn. Since the equation is invariant under translation, we may assume, without loss of generality, that the solutions are symmetric about the origin. Then they satisfy the following ordinary differential equation problem:

−u″(r) − ((n−1)/r) u′(r) = u^τ(r), u′(0) = 0, u(0) > 0 prescribed.

One can verify that

u(r) = [n(n−2)λ²]^{(n−2)/4} / (λ² + r²)^{(n−2)/2}

is a solution for each λ > 0, and, by the uniqueness of the solution of the ODE problem, this is the only solution with the corresponding initial value. Therefore, every positive solution of (8.10) must assume the form

u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − xo|²)^{(n−2)/2}

for some λ > 0 and some xo ∈ Rn. This completes the proof of the Theorem.

Proof of Theorem 8.2.3. i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8.2.2. The main difference is that here we have no decay assumption on the solution u at infinity, and hence the method of moving planes cannot be applied directly to u. So we first make a Kelvin transform to define a new function

v(x) = (1/|x|^{n−2}) u(x/|x|²).
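Before moving on, the explicit solution obtained above can be checked by a short symbolic computation. The sketch below is our addition (not part of the original argument); it uses SymPy with the sample values n = 3, λ = 1 to verify that u(r) = [n(n−2)λ²]^{(n−2)/4}(λ² + r²)^{−(n−2)/2} satisfies the radial equation −u″ − ((n−1)/r) u′ = u^{(n+2)/(n−2)}:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n, lam = 3, 1                                   # sample dimension and scaling parameter
tau = sp.Rational(n + 2, n - 2)                 # critical exponent; tau = 5 for n = 3

u = (n * (n - 2) * lam**2)**sp.Rational(n - 2, 4) / (lam**2 + r**2)**sp.Rational(n - 2, 2)

# radial form of -Delta u - u^tau; it should vanish identically
residual = -sp.diff(u, r, 2) - sp.Rational(n - 1) / r * sp.diff(u, r) - u**tau
print(sp.simplify(residual))   # 0
```

Other dimensions and values of λ can be checked the same way by changing the two sample parameters.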
It is easy to verify that v satisfies the same equation as u does, except at the origin:

Δv + v^τ(x) = 0, x ∈ Rn \ {0}, n ≥ 3. (8.20)

Obviously, v(x) has the desired decay rate 1/|x|^{n−2} at infinity, but it may have a singularity at the origin. We will apply the method of moving planes to the function v, and show that v is radially symmetric and monotone decreasing about some point. If the center point is the origin, then, by the definition of v, u is also symmetric and monotone decreasing about the origin. If the center point of v is not the origin, then v has no singularity at the origin, and hence u has the desired decay at infinity; then the same argument as in the proof of Theorem 8.2.2 would imply that u is symmetric and monotone decreasing about some point.

Define

vλ(x) = v(xλ), wλ(x) = vλ(x) − v(x).

Because v(x) may be singular at the origin, correspondingly wλ may be singular at the point xλ = (2λ, 0, · · · , 0). Hence, instead of on Σλ, we consider wλ on Σ̃λ = Σλ \ {xλ}; and in our proof we treat the singular point carefully: each time, we show that the points of interest are away from the singularities, so that we can carry on the method of moving planes to the end, to show the existence of a λo such that wλo(x) ≡ 0 for x ∈ Σ̃λo, and that v is strictly increasing in the x1 direction in Σλo.
Step 1. As in the proof of Theorem 8.2.2, we show that, for λ sufficiently negative,

wλ(x) ≥ 0, ∀x ∈ Σ̃λ. (8.21)

From (8.20), wλ satisfies

Δwλ + τψλ^{τ−1}(x) wλ(x) = 0 in Σ̃λ,

where ψλ(x) is some number between vλ(x) and v(x). At a negative minimum point xo of wλ we have vλ(xo) < v(xo), hence ψλ(xo) ≤ v(xo), and, by the decay of v,

τψλ^{τ−1}(xo) ∼ (1/|xo|^{n−2})^{τ−1} = O(1/|xo|^4).

The power of 1/|xo| is greater than two, so, as mentioned in Corollary 7.4.2, we can apply the "Decay at Infinity" argument to wλ(x). The difference here is that wλ now has a singularity at the reflection of the origin.
Besides the decay at infinity, we therefore need to show that the minimum of wλ stays away from the singular point xλ. Actually, we will show that, if inf_{Σ̃λ} wλ(x) < 0, then the infimum is achieved in Σλ \ B1(xλ). To see this, we first note that, for x ∈ B1(0),

v(x) ≥ min_{∂B1(0)} v(x) =: εo > 0, (8.22)

due to the fact that v(x) > 0 and Δv ≤ 0. Then let λ be so negative that v(x) ≤ εo for x ∈ B1(xλ); this is possible because v(x) → 0 as |x| → ∞. For such λ, since the reflection of B1(xλ) about the plane Tλ is B1(0), we obviously have wλ(x) ≥ 0 on B1(xλ) \ {xλ}. Hence a negative infimum is attained away from the singularity, and, similarly to Step 1 in the proof of Theorem 8.2.2, we can deduce that wλ(x) ≥ 0 for λ sufficiently negative. This implies (8.21).

Step 2. Again define

λo = sup{λ | wλ(x) ≥ 0, ∀x ∈ Σ̃λ}.

We will show that if λo < 0, then wλo(x) ≡ 0 for x ∈ Σ̃λo. Define w̄λ = wλ/φ in the same way as in the proof of Theorem 8.2.2. Suppose that wλo(x) ≢ 0; then by the Maximum Principle we have wλo(x) > 0 in the interior of Σ̃λo. The rest of the proof is similar to Step 2 in the proof of Theorem 8.2.2, except that now we need to take care of the singularities. Again let λk ↘ λo be a sequence such that w̄λk(x) < 0 for some x ∈ Σ̃λk. We need to show that, for each k, inf_{Σ̃λk} w̄λk(x) can be achieved at some point xk ∈ Σ̃λk, and that the sequence {xk} is bounded away from the singularities xλk of w̄λk. This can be seen from the following facts:

a) there exist ε > 0 and δ > 0 such that w̄λo(x) ≥ ε for x ∈ Bδ(xλo) \ {xλo};
b) lim_{λ→λo} inf_{Bδ(xλ)} w̄λ(x) ≥ inf_{Bδ(xλo)} w̄λo(x) ≥ ε.

Fact (a) can be shown by noting that w̄λo(x) > 0 on Σ̃λo and Δw̄λo ≤ 0, while fact (b) is obvious. Now, through an argument similar to that in the proof of Theorem 8.2.2, one can easily arrive at a contradiction. Therefore wλo(x) ≡ 0.
If our planes Tλ stop somewhere before the origin, i.e. if λo < 0, then wλo ≡ 0 and v is symmetric about the plane Tλo; consequently v has no singularity at the origin, u has the desired decay at infinity, and the argument in the proof of Theorem 8.2.2 applies directly.

If λo = 0, then we can carry out the above procedure in the opposite direction; namely, we move the plane in the negative x1 direction from positive infinity toward the origin. If the planes again stop at the origin, then we derive the symmetry and monotonicity in the x1 direction by combining the two inequalities obtained in the two opposite directions. Since the x1 direction can be chosen arbitrarily, one can easily see that v can only be symmetric about the origin if it is not identically zero, and hence u must also be symmetric about the origin. In either case, u is radially symmetric and monotone decreasing about some point. This proves part i).

ii) To show the nonexistence of solutions in the case p < (n+2)/(n−2), we notice that, after the Kelvin transform, v satisfies

Δv + (1/|x|^{n+2−p(n−2)}) v^p(x) = 0, x ∈ Rn \ {0}.

Due to the singularity of the coefficient of v^p at the origin, the equation is no longer translation invariant, and the argument above shows that v, and hence u, must be symmetric about the origin. Now, given any two points x1 and x2 in Rn, since equation (8.10) is invariant under translations and rotations, we may assume that the origin is the midpoint of the line segment x1x2. Then, from the above argument, we must have u(x1) = u(x2). It follows that u is constant. Finally, from equation (8.10), we conclude that u ≡ 0. This completes the proof of the Theorem.
8.2.3 Symmetry of Solutions for −Δu = e^u in R²

When considering the problem of prescribing Gaussian curvature on two dimensional compact manifolds, if the sequence of approximate solutions "blows up", then, by rescaling and taking a limit, one would arrive at the following equation in the entire space R²:

Δu + exp u(x) = 0, x ∈ R², with ∫_{R²} exp u(x) dx < +∞. (8.23)

The classification of the solutions of this limiting equation would provide essential information on the original problems on manifolds; it is also interesting in its own right.

It is known that

φλ,xo(x) = ln ( 32λ² / (4 + λ²|x − xo|²)² ),

for any λ > 0 and any point xo ∈ R², is a family of explicit solutions. We will use the method of moving planes to prove:

Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect to some point in R², and hence assumes the form of φλ,xo(x).
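That φλ,xo indeed solves (8.23) can be confirmed symbolically. The following sketch is our addition; with xo = 0 and λ kept symbolic, it checks that the radial function φ(r) = ln(32λ²/(4 + λ²r)²) with r = |x| satisfies the two-dimensional equation Δφ + e^φ = 0:

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)

# phi_{lam,0} written radially, with r = |x|
phi = sp.log(32 * lam**2 / (4 + lam**2 * r**2)**2)

# two-dimensional radial Laplacian of phi plus e^phi should vanish identically
residual = sp.diff(phi, r, 2) + sp.diff(phi, r) / r + sp.exp(phi)
print(sp.simplify(residual))   # 0
```

Translating by xo does not affect the computation, since the equation is translation invariant.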
2. ∂Ωt ∂Ωt Hence −( and so d ( dt Ωt d Ωt ) · dt Ωt exp u(x)dx ≥ 4πΩt  d Ωt ) · dt exp u(x)dx)2 = 2 exp t · ( Ωt exp u(x)dx ≤ −8πΩt et .210 8 Methods of Moving Planes and of Moving Spheres Some Global and Asymptotic Behavior of the Solutions The following Theorem gives the asymptotic behavior of the solutions near inﬁnity. then as x → +∞. Lemma 8.5 If u(x) is a solution of (8.2 (W. let Ωt = {x  u(x) > t}. 1 u(x) →− ln x 2π uniformly. Lemma 8.2. This Theorem is a direct consequence of the following two Lemmas. . For −∞ < t < ∞. x ∈ R 2 and R2 R2 exp u(x)dx ≤ −4 exp u(x)dx < +∞. which is essential to the application of the method of moving planes. Integrating from −∞ to ∞ gives −( R2 exp u(x)dx)2 ≤ −8π R2 exp u(x)dx which implies R2 exp u(x)dx ≥ 8π as desired.2 enables us to obtain the asymptotic behavior of the solutions at inﬁnity. exp u(x)dx ≥ 8π.23).2. Ding) If u is a solution of − u = eu . one can obtain Ωt exp u(x)dx = − Ωt u= ∂Ωt  uds − d Ωt  = dt ∂Ωt ds  u By the Schwartz inequality and the isoperimetric inequality. Theorem 8. ds ·  u  u ≥ ∂Ωt 2 ≥ 4πΩt . then R2 Proof.
The next Lemma enables us to obtain the asymptotic behavior of the solutions at infinity.

Lemma 8.2.3 If u(x) is a solution of (8.23), then, as |x| → +∞,

u(x)/ln|x| → −(1/2π) ∫_{R²} exp u(x) dx, uniformly.

Proof. By a result of Brezis and Merle [BM], the condition ∫_{R²} exp u(x) dx < ∞ implies that the solution u is bounded from above. Let

w(x) = (1/2π) ∫_{R²} ( ln|x − y| − ln(|y| + 1) ) exp u(y) dy.

Then it is easy to see that

Δw(x) = exp u(x), x ∈ R²,

and we will show that

w(x)/ln|x| → (1/2π) ∫_{R²} exp u(x) dx, uniformly as |x| → +∞. (8.24)

To see this, we need only verify that

I := ∫_{R²} [ ( ln|x − y| − ln(|y| + 1) − ln|x| ) / ln|x| ] e^{u(y)} dy → 0, as |x| → ∞.

Write I = I1 + I2 + I3, where I1, I2, and I3 are the integrals on the three regions

D1 = {y | |x − y| ≤ 1}, D2 = {y | |x − y| > 1 and |y| ≤ K}, and D3 = {y | |x − y| > 1 and |y| > K},

respectively. We may assume that |x| ≥ 3.

a) To estimate I1, we simply notice that

|I1| ≤ (C/ln|x|) ∫_{|x−y|≤1} e^{u(y)} dy + (1/ln|x|) ∫_{|x−y|≤1} | ln|x − y| | e^{u(y)} dy.

Then, by the boundedness of e^{u(y)} and of ∫_{R²} e^{u(y)} dy, we see that I1 → 0 as |x| → ∞.

b) For each fixed K, in the region D2 we have, as |x| → ∞,

( ln|x − y| − ln(|y| + 1) − ln|x| ) / ln|x| → 0;

hence I2 → 0.
c) To see that I3 → 0, we use the fact that, for |x − y| > 1,

| ln|x − y| − ln(|y| + 1) − ln|x| | / ln|x| ≤ C,

and then let K → ∞. This verifies (8.24).

Now consider the function v(x) = u(x) + w(x). Then Δv ≡ 0, and v(x) ≤ C + C1 ln(|x| + 1) for some constants C and C1. Therefore v must be a constant. This completes the proof of our Lemma.

Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain Theorem 8.2.5.

The Method of Moving Planes and Symmetry

In this subsection, we apply the method of moving planes to establish the symmetry of the solutions. For a given solution, we move the family of lines which are orthogonal to a given direction from negative infinity to a critical position, and then show that the solution is symmetric in that direction about the critical position. We also show that the solution is strictly increasing before the critical position. Since the direction can be chosen arbitrarily, we conclude that the solution must be radially symmetric about some point. Finally, by the uniqueness of the solution of the following ODE problem,

u″(r) + (1/r) u′(r) = f(u), u′(0) = 0, u(0) = 1,

we see that the solutions of (8.23) must assume the form φλ,xo(x).

Assume that u(x) is a solution of (8.23). For λ ∈ R¹, let

Σλ = {(x1, x2) | x1 < λ} and Tλ = ∂Σλ = {(x1, x2) | x1 = λ},

and let xλ = (2λ − x1, x2) be the reflection point of x = (x1, x2) about the line Tλ (see the previous Figure 1). Define

wλ(x) = u(xλ) − u(x).

A straightforward calculation shows that

Δwλ(x) + (exp ψλ(x)) wλ(x) = 0, (8.25)
where ψλ(x) is some number between u(x) and uλ(x) := u(xλ).

Step 1. We show that, for λ sufficiently negative,

wλ(x) ≥ 0, ∀x ∈ Σλ. (8.26)

As in the previous examples, by the "Decay at Infinity" argument, the key is to check the decay rate of c(x) := −e^{ψλ(x)}, and only at the points xo where wλ is negative. At such a point, u(xλo) < u(xo), and hence ψλ(xo) ≤ u(xo). By the asymptotic behavior of the solution u near infinity (see Lemma 8.2.3, together with Lemma 8.2.2), we have

e^{ψλ(xo)} ≤ e^{u(xo)} = O(1/|xo|^4). (8.27)

This is what we desire. Notice, however, that we are now in dimension two, while the key function φ(x) = 1/|x|^q given in Corollary 7.4.2 requires 0 < q < n − 2; hence it does not work here. As a modification, we choose

φ(x) = ln(|x| − 1).

Then it is a straightforward calculation to verify that

Δφ/φ (x) = −1 / ( |x| (|x| − 1)² ln(|x| − 1) ).

It follows from this and (8.27) that

e^{ψλ(xo)} + Δφ/φ (xo) < 0, for sufficiently large |xo|.

Then, similarly to the previous arguments, we introduce the function

w̄λ(x) = wλ(x)/φ(x).

It satisfies

Δw̄λ + 2∇w̄λ · (∇φ/φ) + ( e^{ψλ(x)} + Δφ/φ ) w̄λ = 0. (8.28)

Moreover, by virtue of Theorem 8.2.5, for each fixed λ,

w̄λ(x) = ( u(xλ) − u(x) ) / ln(|x| − 1) → 0, as |x| → ∞. (8.29)

Now, if wλ(x) < 0 somewhere, then by (8.29) there exists a point xo which is a negative minimum of w̄λ(x). At this point, one can easily derive a contradiction from (8.28), as in the previous examples. This completes Step 1.
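The formula for Δφ/φ with the modified auxiliary function can also be checked symbolically. This sketch is our addition; it confirms the identity for φ = ln(|x| − 1) in dimension two, written radially with r = |x| (the computation is formal in r; the function is used for r > 1):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
phi = sp.log(r - 1)

# two-dimensional radial Laplacian of phi, divided by phi
lap_over_phi = (sp.diff(phi, r, 2) + sp.diff(phi, r) / r) / phi
claimed = -1 / (r * (r - 1)**2 * sp.log(r - 1))
print(sp.simplify(lap_over_phi - claimed))   # 0
```

Note that Δφ/φ decays only like 1/(r² ln r), which is why the comparison with the O(1/r⁴) rate in (8.27) still closes the argument.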
Step 2. Define

λo = sup{λ | wλ(x) ≥ 0, ∀x ∈ Σλ}.

We show that if λo < 0, then wλo(x) ≡ 0 for all x ∈ Σλo. The argument is entirely similar to that in Step 2 of the proof of Theorem 8.2.2, except that here we use φ(x) = ln(−x1 + 2). This completes the proof of the Theorem.

8.3 Method of Moving Planes in a Local Way

8.3.1 The Background

Let M be a Riemannian manifold of dimension n ≥ 3 with metric go. Given a function R(x) on M, one interesting problem in differential geometry is to find a metric g that is pointwise conformal to the original metric go and has scalar curvature R(x). This is equivalent to finding a positive solution of the semilinear elliptic equation

−(4(n−1)/(n−2)) Δo u + Ro(x) u = R(x) u^{(n+2)/(n−2)}, u > 0, x ∈ M, (8.30)

where Δo is the Beltrami–Laplace operator of (M, go) and Ro(x) is the scalar curvature of go. When (M, go) is the standard sphere Sⁿ, the equation becomes

−Δu + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}, u > 0, x ∈ Sⁿ. (8.31)

This is the so-called critical case, where a lack of compactness occurs. In recent years, there has been a lot of progress in understanding equation (8.30): a tremendous amount of effort has been put into finding the existence of solutions, and many interesting results have been obtained (see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein). However, the well-known Kazdan–Warner condition gives rise to many examples of R for which there is no solution.

One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. For equation (8.31) on Sⁿ, to establish a priori estimates, a useful technique employed by most authors was a 'blowing-up' analysis. However, it does not work near the points where R(x) = 0. Due to this limitation, people had to assume that R was positive and bounded away from 0. This technical assumption became somewhat standard and has been used by many authors for quite a few years.
For example, see the articles by Bahri [Ba], Bahri and Coron [BC], Li [Li2] [Li3], Chang, Gursky, and Yang [CY2], and Schoen and Zhang [SZ]. Then, for functions R with changing signs, are there any a priori estimates? In [CL6], we answered this question affirmatively in the case p = (n+2)/(n−2): we removed the above well-known technical assumption and obtained a priori estimates on the solutions of (8.31) for functions R which are allowed to change signs. There, by introducing a new auxiliary function, we used the 'method of moving planes' in a local way to control the growth of the solutions near the zero set of R. In fact, we derived a priori bounds for the solutions of the more general equations

−Δu + (n(n−2)/4) u = R(x) u^p, u > 0, x ∈ Sⁿ, (8.32)

for any exponent p between 1 + 1/A and A, where A is an arbitrary positive number; for a fixed A, the bound is independent of p.

Proposition 8.3.1 Assume that R ∈ C^{2,α}(Sⁿ) and |∇R(x)| ≥ β0 for any x with |R(x)| ≤ δ0. Then there exist positive constants ε and C, depending only on β0, δ0, and ‖R‖_{C^{2,α}(Sⁿ)}, such that any solution u of (8.32) satisfies u ≤ C in the region where |R(x)| ≤ ε.

In this proposition, we essentially required R(x) to grow linearly near its zeros, which seems a little restrictive. Later, Lin [Lin1] weakened the condition by assuming only a polynomial growth of R near its zeros; to prove this result, he first established a Liouville type theorem in Rⁿ, and then used a blowing up argument. Recently, we sharpened our method of moving planes to further weaken the condition and to obtain a more general result:

Theorem 8.3.1 Let M be a locally conformally flat manifold, and let Γ = {x ∈ M | R(x) = 0}. Assume that Γ ∈ C^{2,α} and R ∈ C^{2,α}(M) satisfies

∇R(x) ≠ 0 on Γ, ∇R(x)/|∇R(x)| is continuous near Γ, and ∂R/∂ν ≤ 0, (8.33)

where ν is the outward normal of Γ, pointing to the region where R is negative. Let D be any compact subset of M, and let p be any number greater than 1. Then the solutions of the equation

−Δo u + Ro(x) u = R(x) u^p, u > 0, in M, (8.34)

are uniformly bounded near Γ ∩ D.
Recall that the main difficulty encountered by the traditional 'blowing-up' analysis is near the points where R(x) = 0; the conditions above are designed precisely to handle a neighborhood of this set.
One can see that our condition (8.33) is weaker than Lin's polynomial growth restriction. To illustrate, one may simply look at functions R(x) which grow like exp{−1/d(x)} near Γ, where d(x) is the distance from the point x to Γ. Obviously, this kind of function satisfies our condition, but is not of polynomial growth. Moreover, our restriction on the exponent is much weaker: besides the critical number (n+2)/(n−2), p can be any number greater than 1.

Since M is a locally conformally flat manifold, a locally flat metric can be chosen so that equation (8.34) in a compact set D ⊂ M can be reduced to

−Δu = R(x) u^p, u > 0, p > 1, (8.35)

in a compact set K in Rⁿ. This is the equation we will consider throughout the section.

8.3.2 The A Priori Estimates

Now we estimate the solutions of equation (8.34) and prove Theorem 8.3.1. The proof is divided into two parts. We first derive the bound in the region(s) where R is negatively bounded away from zero:

Theorem 8.3.2 The solutions of (8.35) are uniformly bounded in the region {x ∈ K : R(x) ≤ 0 and dist(x, Γ) ≥ δ}. The bound depends only on δ and the lower bound of R.

Then, based on this bound, we use the 'method of moving planes' to estimate the solutions in the region(s) surrounding the set Γ.

Part I. In the region(s) where R is negative.

In this part, we derive estimates in the region(s) where R is negatively bounded away from zero. To establish an a priori bound of the solutions, one only needs to obtain an integral bound; the pointwise bound then comes from a standard elliptic estimate for subharmonic functions, the following lemma of Gilbarg and Trudinger [GT].

Lemma 8.3.1 (See Theorem 9.20 in [GT]) Let u ∈ W^{2,n}(D) and suppose that Δu ≥ 0. Then, for any ball B_{2r}(x) ⊂ D and any q > 0,

sup_{B_r(x)} u ≤ C ( (1/rⁿ) ∫_{B_{2r}(x)} (u⁺)^q dV )^{1/q}, (8.36)

where C = C(n, q).

In virtue of this Lemma, to prove Theorem 8.3.2 we need only obtain an integral bound on the solutions. We prove:
Lemma 8.3.2 Let B be any ball in K that is bounded away from Γ. Then there is a constant C such that

∫_B u^p dx ≤ C. (8.37)

Proof. Let Ω be an open neighborhood of K such that dist(∂Ω, K) ≥ 1, and let ϕ be the first eigenfunction of −Δ in Ω:

−Δϕ = λ1 ϕ(x), x ∈ Ω; ϕ(x) = 0, x ∈ ∂Ω.

Set

sgn R = 1 if R(x) > 0, 0 if R(x) = 0, −1 if R(x) < 0,

and let α = (1 + 2p)/(p − 1). Multiply both sides of equation (8.35) by ϕ^α |R|^α sgn R and then integrate. Through a straightforward calculation, we obtain

∫_Ω ϕ^α |R|^{1+α} u^p dx ≤ C1 ∫_Ω ϕ^{α−2} |R|^{α−2} u dx.

Applying the Hölder inequality on the right-hand side, and taking into account that α > 2 and that ϕ and R are bounded in Ω, we arrive at

∫_Ω ϕ^α |R|^{1+α} u^p dx ≤ C3. (8.38)

Now (8.37) follows suit. This completes the proof of the Lemma.

Theorem 8.3.2 is a direct consequence of Lemma 8.3.1 and Lemma 8.3.2.

Part II. In the region where |R| is small.

In this part, we estimate the solutions in a neighborhood of Γ, where |R| is small. Let u be a given solution of (8.35) and let x0 ∈ Γ; it is sufficient to show that u is bounded in a neighborhood of x0. The key ingredient is to use the method of moving planes to show that, near x0, the values of the solution u are comparable along any direction that is close to the direction of ∇R. This result, together with the boundedness of the integral of u^p in a small neighborhood of x0, leads to an a priori bound of the solutions. Since the method of moving planes cannot be applied directly to u, we construct some auxiliary functions. The proof consists of the following four steps.

Step 1. Transforming the region.

Make a translation, a rotation, or, if necessary, a Kelvin transform. Denote the new system by x = (x1, y) with y = (x2, · · · , xn), with the x1 axis pointing to the right.
In the new system, x0 becomes the origin, the image of Γ is tangent to the y-hyperplane at the origin, and the image of the set Ω⁺ := {x | R(x) > 0} is contained in the left of the y-hyperplane. The image of Γ is also uniformly convex near the origin. Let x1 = φ(y) be the equation of the image of Γ near the origin.

(Figure 6: the region D near Γ, bounded by the surfaces ∂1D and ∂2D, with R(x) > 0 to the left of the image of Γ, R(x) < 0 to the right, and the moving plane Tλ.)

Let

∂1D := {x | x1 = φ(y) + ε, x1 ≥ −2ε}.

The intersection of ∂1D with the plane {x ∈ Rⁿ | x1 = −2ε} encloses a part of that plane, which we denote by ∂2D, and we let D be the region enclosed by the two surfaces ∂1D and ∂2D. The small positive number ε is chosen to ensure that

(a) ∂1D is uniformly convex;
(b) ∂R/∂x1 ≤ 0 in D;
(c) |∂R/∂x1| ∼ |∇R|, and ∇R(x)/|∇R(x)| is continuous, in D.

Step 2. Introducing an auxiliary function.

Let m = max_{∂1D} u, and let ũ be a C² extension of u from ∂1D to the entire ∂D, such that

0 ≤ ũ ≤ 2m and |∇ũ| ≤ Cm.
Let w be the harmonic function

Δw = 0, x ∈ D; w = ũ, x ∈ ∂D.

Then, by the maximum principle and a standard elliptic estimate, we have

0 ≤ w(x) ≤ 2m and ‖w‖_{C¹(D)} ≤ C1 m.

Introduce a new function

v(x) = u(x) − w(x) + C0 m [ε + φ(y) − x1] + m [ε + φ(y) − x1]²,

which is well defined in D. Here C0 is a large constant to be determined later. One can easily verify that v(x) satisfies

Δv + ψ(y) + f(x, v) = 0, x ∈ D; v(x) = 0, x ∈ ∂1D,

where

ψ(y) = −C0 m Δφ(y) − m Δ([ε + φ(y)]²) − 2m

and

f(x, v) = 2m x1 Δφ(y) + R(x) { v + w(x) − C0 m [ε + φ(y) − x1] − m [ε + φ(y) − x1]² }^p.

At this step, we show that

v(x) > 0, ∀x ∈ D. (8.39)

We consider the following two cases.

Case 1) φ(y) + ε/2 ≤ x1 ≤ φ(y) + ε. In this region, R(x) is negative and bounded away from 0; hence, by Theorem 8.3.2 and the standard elliptic estimate, we have

|∂u/∂x1| ≤ C2 m

for some positive constant C2. It follows that

∂v/∂x1 ≤ (C1 + C2 − C0) m − 2m (ε + φ(y) − x1).

Noticing that (ε + φ(y) − x1) ≥ 0, one can choose C0 sufficiently large such that

∂v/∂x1 < 0.

Then, since v(x) = 0 on ∂1D, we have v(x) > 0 in this part of D.
Case 2) −2ε ≤ x1 ≤ φ(y) + ε/2. In this region, ε + φ(y) − x1 ≥ ε/2; hence, again choosing C0 sufficiently large (depending only on ε),

v(x) ≥ −w(x) + C0 m ε/2 ≥ −2m + C0 m ε/2 > 0.

Thus we arrive at (8.39). This lays a foundation for the moving of planes.

Step 3. Applying the Method of Moving Planes to v in the x1 direction.

In the previous step, we showed that v(x) > 0 in D and v(x) = 0 on ∂1D. Now we can start from the rightmost tip of the region D and move a plane perpendicular to the x1 axis toward the left. More precisely, let

Tλ = {x ∈ Rⁿ | x1 = λ}, Σλ = {x ∈ D | x1 > λ},

and let xλ = (2λ − x1, y) be the reflection point of x with respect to the plane Tλ. Define

vλ(x) = v(xλ), wλ(x) = vλ(x) − v(x).

Since y is unchanged under the reflection, it is easy to see that vλ satisfies

Δvλ + ψ(y) + f(xλ, vλ) = 0,

and hence wλ satisfies

−Δwλ = f(xλ, vλ) − f(xλ, v) + f(xλ, v) − f(x, v).

We are going to show that

wλ(x) ≥ 0, ∀x ∈ Σλ. (8.40)

Since Σλ is a narrow region for λ close to ε, and f(x, v) is Lipschitz continuous in v, inequality (8.40) is true when λ is less than, but close to, ε (for a detailed argument, the readers may see the first example in Section 5). We now decrease λ, that is, move the plane Tλ toward the left, as long as the inequality (8.40) remains true. We want to show that (8.40) holds for all −ε1 ≤ λ ≤ ε, for some 0 < ε1 < ε. This choice of ε1 guarantees that, when the plane Tλ reaches λ = −ε1, the reflection of Σλ about the plane Tλ still lies in the region D. We show that the moving of planes can be carried on provided

f(x, v(x)) ≤ f(xλ, v(x)), for x = (x1, y) ∈ D with x1 > λ ≥ −ε1. (8.41)

In fact, if (8.41) holds, then from the above identity for −Δwλ we deduce that
−Δwλ − c(x) wλ(x) ≥ 0, (8.42)

where c(x) := ∂f/∂v (xλ, η(x)) for some η(x) between v(x) and vλ(x). This enables us to apply the maximum principle to wλ. Let

λo = inf{λ | wλ(x) ≥ 0, ∀x ∈ Σλ}.

If λo > −ε1, then, since v(x) > 0 in D, by virtue of (8.42) and the maximum principle we have

wλo(x) > 0, ∀x ∈ Σλo, and ∂wλo/∂x1 < 0, ∀x ∈ Tλo ∩ Σλo.

Then, similarly to the argument in the previous examples, one can derive a contradiction; hence (8.40) holds for all −ε1 ≤ λ ≤ ε.

It remains to verify (8.41). It is elementary to see that (8.41) holds if ∂f/∂x1 ≤ 0 in the set {x ∈ D | R(x) < 0 or x1 > −2ε1}. To estimate ∂f/∂x1, we write

∂f/∂x1 = 2m Δφ(y) + u^{p−1} { (∂R/∂x1) u + R(x) p [ ∂w/∂x1 + C0 m + 2m(ε + φ(y) − x1) ] }.

First, we notice that the uniform convexity of the image of Γ near the origin implies

Δφ(y) ≤ −a0 < 0 (8.43)

for some constant a0 > 0. We consider the following two possibilities.

a) R(x) ≤ 0. Choose C0 sufficiently large so that ∂w/∂x1 + C0 m ≥ 0. Noticing that ∂R/∂x1 ≤ 0 and (ε + φ(y) − x1) ≥ 0 in D, we have ∂f/∂x1 ≤ 0.

b) R(x) > 0 and x = (x1, y) with x1 > −2ε1. In the part where u ≥ 1, we write

∂f/∂x1 = 2m Δφ(y) + u^{p−1} (∂R/∂x1) { u + [ R(x)/(∂R/∂x1) ] p [ ∂w/∂x1 + C0 m + 2m(ε + φ(y) − x1) ] },

and use u to control the term involving R(x)/(∂R/∂x1). More precisely, we proceed as follows.
Choose ε1 sufficiently small, so that |∂R/∂x1| ∼ |∇R| in a small neighborhood of the origin. Then, by condition (8.33) on R, we can make |R(x)| / |∂R/∂x1| arbitrarily small by a small choice of ε1, and therefore arrive at ∂f/∂x1 ≤ 0. In the part where u ≤ 1, we can use 2m Δφ(y) to control R(x) p [∂w/∂x1 + C0 m + 2m(ε + φ(y) − x1)], in virtue of (8.43); here again we use the smallness of |R| near Γ.

So far, our conclusion is: the method of moving planes can be carried on up to λ = −ε1. More precisely, the inequality (8.40) is true for every λ between −ε1 and ε. Therefore, in a small neighborhood of the origin, the function v(x) is monotone decreasing in the x1 direction.

Step 4. Deriving the a priori bound.

Inequality (8.40) implies that, for the point x0 ∈ Γ, one can find a cone ∆x0 with x0 as its vertex, staying to the left of x0, such that

v(x) ≥ v(x0), ∀x ∈ ∆x0. (8.44)

A similar argument shows that this remains true if we rotate the x1 axis by a small angle. Noticing that w(x) is bounded in D, (8.44) leads immediately to

u(x) + C6 ≥ u(x0), ∀x ∈ ∆x0. (8.45)

More generally, by a similar argument, one can show that (8.45) is true for any point x0 in a small neighborhood of Γ. Furthermore, the intersection of the cone ∆x0 with the set {x | |R(x)| ≥ δ0/2} has a positive measure, and the lower bound of the measure depends only on δ0 and the C¹ norm of R. Now the a priori bound of the solutions is a consequence of (8.45) and an integral bound on u, which can be derived from Lemma 8.3.2. This completes the proof of Theorem 8.3.1.

8.4 Method of Moving Spheres

8.4.1 The Background

Given a function K(x) on the two dimensional standard sphere S², the well-known Nirenberg problem is to find conditions on K(x) so that it can be realized as the Gaussian curvature of some conformally related metric. This is equivalent to solving the following nonlinear elliptic equation:
−Δu + 1 = K(x) e^{2u}, x ∈ S², (8.46)

where Δ is the Laplacian operator associated to the standard metric. On the higher dimensional sphere Sⁿ, a similar problem was raised by Kazdan and Warner: which functions R(x) can be realized as the scalar curvature of some conformally related metric? The problem is equivalent to the existence of positive solutions of another elliptic equation,

−Δu + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}, x ∈ Sⁿ. (8.48)

For a unifying description, on S² we let R(x) = 2K(x) be the scalar curvature; then (8.46) is equivalent to

−Δu + 2 = R(x) e^u, x ∈ S². (8.47)

Both equations are so-called 'critical' in the sense that a lack of compactness occurs. Then for which R can one solve the equations? What are the necessary and sufficient conditions for the equations to have a solution? These are very interesting problems in geometry.

Besides the obvious necessary condition that R(x) be positive somewhere, there are other well-known obstructions, found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions are:

∫_{Sⁿ} X(R) dA_g = 0, (8.49)

where dA_g is the volume element of the metric g = e^u go for n = 2 and g = u^{4/(n−2)} go for n ≥ 3, respectively, and the vector field X in (8.49) is a conformal Killing field associated to the standard metric go on Sⁿ. In the following, we call these necessary conditions Kazdan–Warner type conditions. These conditions give rise to many examples of R(x) for which (8.47) and (8.48) have no solution; in particular, a monotone rotationally symmetric function R admits no solution.

In recent years, various sufficient conditions were obtained by many authors (for example, see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS] [ES] [Ha] [Ho] [KW1] [KW2] [Li1] [Li2] [Mo] and the references therein). However, there are still gaps between those sufficient conditions and the necessary ones. To find necessary and sufficient conditions, it is natural to begin with functions having certain symmetries. A pioneering work in this direction is [Mo] by Moser. He showed that, for even functions R on S², the necessary and sufficient condition for (8.47) to have a solution is that R be positive somewhere.
Then one may naturally ask: 'Are the necessary conditions of Kazdan–Warner type also sufficient?' This question has been open for many years.
In [CL3], we gave a negative answer to this question. Note that, for rotationally symmetric functions R = R(θ), where θ is the distance from the north pole, the Kazdan–Warner type conditions take the form:

R > 0 somewhere and R′ changes signs. (8.50)

Moreover, for even functions R on S², condition (8.49) is satisfied automatically. Thus one can interpret Moser's result as: in the class of even functions, the Kazdan–Warner type conditions are the necessary and sufficient conditions for (8.47) to have a solution.

A simple question was asked by Kazdan [Ka]: what are the necessary and sufficient conditions for rotationally symmetric functions? In [XY], Xu and Yang found a family of rotationally symmetric functions R satisfying the conditions (8.50) for which problem (8.48) has no rotationally symmetric solution. Even though one did not know whether there were any other, nonsymmetric, solutions for these functions R, this result is still very interesting, because it suggests that (8.50) may not be the sufficient condition. Xu and Yang also proved:

Proposition 8.4.3 Let R be rotationally symmetric. Assume that
i) R is nondegenerate, and
ii) R′ changes signs in the region where R > 0.
Then problem (8.48) possesses a solution.

In our earlier paper [CL3],
we proved the nonexistence of rotationally symmetric solutions: Proposition 8. (8.4.50). the KazdanWarner type conditions take the form: R > 0 somewhere and R changes signs (8. However. this result is still very interesting. (1.50) may not be the suﬃcient condition. Even though one didn’t know whether there were any other nonsymmetric solutions for these functions R .
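For the reader's orientation, we record the classical obstructions behind the Kazdan–Warner type conditions. These are standard identities; the normalization below is a common one and may differ from that of (8.49) by constants.

```latex
% Kazdan--Warner identity on S^2: every solution u of
%     -\Delta u + 1 = K(x)\, e^{2u}  \quad \text{on } S^2
% satisfies, for the coordinate functions x_1, x_2, x_3 of R^3 restricted to S^2,
\int_{S^2} \nabla x_j \cdot \nabla K \; e^{2u}\, dA = 0, \qquad j = 1, 2, 3.
% Bourguignon--Ezin generalization on S^n: if g is conformal to the standard
% metric g_0 and X is a conformal Killing field of g_0, then the scalar
% curvature R_g of g satisfies
\int_{S^n} X(R_g)\, dV_g = 0.
% For rotationally symmetric K = K(\theta), taking x_3 = \cos\theta gives
% \int (-\sin\theta)\, K'(\theta)\, e^{2u}\, dA = 0; since \sin\theta \ge 0,
% this forces K' to change signs, which is the form quoted in the text.
```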
The above results and many other existence results tend to lead people to believe that what really counts is whether R′ changes signs in the region where R > 0. And we conjectured in [CL3] that, for rotationally symmetric R,

R > 0 somewhere and R′ changes signs in the region where R > 0, (8.52)

instead of (8.50), would be the necessary and sufficient condition for (8.48) to have a solution.

In [CL3], to prove Proposition 8.4.2, we used the method of 'moving planes' to show that the solutions of (8.48) are rotationally symmetric; however, this method only works for a special family of functions satisfying (8.50), and it does not work in higher dimensions. The main objective of this section is to present the proof of the necessary part of the conjecture, which appeared in our later paper [CL4]. The following are our main results.

Theorem 8.4.1 Let R be continuous and rotationally symmetric. If R is monotone in the region where R > 0 and R ≢ C, then problems (8.47) and (8.48) have no solution at all.

This generalizes Xu and Yang's result, and we also cover the higher dimensional cases. We also generalize this result to a family of nonsymmetric functions:

Theorem 8.4.2 Let R be a continuous function, and let B be a geodesic ball centered at the North Pole x_{n+1} = 1. Assume that
i) R(x) ≥ 0, for x ∈ B;
ii) R(x) ≤ 0, for x ∈ S^n \ B;
and that R is nondecreasing in the x_{n+1} direction, with R ≢ C. Then problems (8.47) and (8.48) have no solution at all.

Combining our Theorem 8.4.1 with Xu and Yang's Proposition 8.4.3, one can obtain a necessary and sufficient condition in the nondegenerate case:

Corollary 8.4.1 Let R be rotationally symmetric and nondegenerate, in the sense that R″ ≠ 0 whenever R′ = 0. Then the necessary and sufficient condition for (8.48) to have a solution is:

R > 0 somewhere and R′ changes signs in the region where R > 0. (8.53)

To prove Theorem 8.4.1 and Theorem 8.4.2, we introduce a new idea, which we call the method of 'moving spheres'. In an earlier work [P], a similar method was used by P. Padilla to show the radial symmetry of the solutions for some nonlinear Dirichlet problems on annuluses. Instead of showing the symmetry of the solutions, we obtain a comparison formula which leads to a direct contradiction. It works in all dimensions.

8.4.2 Necessary Conditions

In this subsection, we prove Theorem 8.4.1 and Theorem 8.4.2. For convenience, we make a stereographic projection from S^n to R^n. Then it is equivalent to consider the following equations in Euclidean space R^n.
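Before writing the projected equations, we sketch why this reduction works. The following is the standard conformal transformation law; the constant c_n is indicative only (normalizations vary across references, and it is absorbed into R below).

```latex
% Stereographic projection \pi : S^n \setminus \{N\} \to R^n is conformal:
%     (\pi^{-1})^* g_0 = \rho(x)^2\, dx^2, \qquad \rho(x) = \frac{2}{1+|x|^2}.
% For n \ge 3, if w solves the prescribing scalar curvature equation on S^n, set
u(x) = \rho(x)^{\frac{n-2}{2}}\, w\big(\pi^{-1}(x)\big), \qquad x \in R^n.
% The conformal covariance of the operator -\Delta_{g_0} + \tfrac{n(n-2)}{4}
% then yields the flat equation
-\Delta u = c_n\, R\big(\pi^{-1}(x)\big)\, u^{\frac{n+2}{n-2}}, \qquad x \in R^n,
% while for n = 2 the same projection turns the Gaussian curvature equation
% into one of the form -\Delta u = R(r)\, e^{u}. Since r = |x| is a monotone
% function of the polar angle \theta, the projection does not change the
% monotonicity of R.
```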
For n ≥ 3, it is

−Δu = R(r)u^τ, u > 0, x ∈ R^n, (8.54)

and for n = 2,

−Δu = R(r)e^u, x ∈ R², (8.55)

with appropriate asymptotic growth of the solutions at infinity:

u ∼ C|x|^{2−n} for n ≥ 3, for some C > 0, and u ∼ −4 ln|x| for n = 2. (8.56)

Here Δ is the Euclidean Laplacian operator, r = |x|, and τ = (n+2)/(n−2) is the so-called critical Sobolev exponent. The function R(r) is the projection of the original R in equations (8.47) and (8.48), and this projection does not change the monotonicity of the function. The function R is also bounded and continuous.

For a radial function R that is monotone in the region where R > 0, there are three possibilities:
i) R is nonpositive everywhere;
ii) R (≢ C) is nonnegative everywhere and monotone;
iii) R > 0 and nonincreasing for r < r_o, and R ≤ 0 for r > r_o, or vice versa.

The first two cases violate the Kazdan–Warner type conditions, hence there is no solution. Thus, we only need to show that in the remaining case there is also no solution. Without loss of generality, we may assume that

R(r) > 0 and R′(r) ≤ 0 for r < 1, while R(r) ≤ 0 for r ≥ 1, (8.57)

and we assume these throughout the subsection.

The proof of Theorem 8.4.1 is based on the following comparison formulas.

Lemma 8.4.1 Let R be a continuous function satisfying (8.57). Let u be a solution of (8.54) and (8.56) for n ≥ 3. Then

u(λx) > |x|^{2−n} u(λx/|x|²), ∀ x ∈ B₁(0), 0 < λ ≤ 1. (8.58)

Lemma 8.4.2 Let R be a continuous function satisfying (8.57). Let u be a solution of (8.55) and (8.56) for n = 2. Then

u(λx) > u(λx/|x|²) − 4 ln|x|, ∀ x ∈ B₁(0), 0 < λ ≤ 1. (8.59)

We first prove Lemma 8.4.1; a similar idea will work for Lemma 8.4.2.

Proof of Lemma 8.4.1. Let x ∈ B₁(0); then λx ∈ B_λ(0). The reflection point of λx about the sphere ∂B_λ(0) is λx/|x|². We use a new idea called the method of 'moving spheres': we compare the values of u at those pairs of points λx and λx/|x|². In step 1, we start with the unit sphere and show that (8.58) is true for λ = 1; this is done by the Kelvin transform and the strong maximum principle. Then in step 2, we move (shrink) the sphere ∂B_λ(0) towards λ = 0.
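Both steps rest on one computation: for n ≥ 3, the Kelvin transform intertwines the Laplacian with the inversion x ↦ x/|x|², and the critical exponent τ = (n+2)/(n−2) is exactly the power for which the nonlinearity transforms without a weight. Explicitly:

```latex
% Kelvin transform identity (n \ge 3): for v(x) = |x|^{2-n}\, u\!\left(\frac{x}{|x|^2}\right),
\Delta v(x) = |x|^{-n-2}\, (\Delta u)\!\left(\frac{x}{|x|^2}\right).
% Hence, if -\Delta u = R(|y|)\, u^{\tau}(y) with \tau = \frac{n+2}{n-2}, then, using
% u\!\left(\frac{x}{|x|^2}\right) = |x|^{n-2}\, v(x) and (n-2)\tau = n+2,
-\Delta v(x) = |x|^{-n-2}\, R\!\left(\frac{1}{r}\right) \big(|x|^{n-2}\, v(x)\big)^{\tau}
             = R\!\left(\frac{1}{r}\right) v^{\tau}(x), \qquad r = |x|.
% The same computation applied to u_\lambda shows that its Kelvin transform
% v_\lambda satisfies -\Delta v_\lambda = R(\lambda/r)\, v_\lambda^{\tau}.
```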
Step 1, the case λ = 1. We show that

u(x) > |x|^{2−n} u(x/|x|²), ∀ x ∈ B₁(0), (8.60)

which is (8.58) for λ = 1. Let v(x) = |x|^{2−n} u(x/|x|²) be the Kelvin transform of u. Then it is easy to verify that v satisfies the equation

−Δv = R(1/r) v^τ. (8.62)

Then, by (8.57), R(1/r) ≤ 0 ≤ R(r) for r < 1, so that −Δv ≤ 0 while −Δu ≥ 0 in B₁(0); since u = v on ∂B₁(0), the strong maximum principle, together with the asymptotic behavior (8.56), yields (8.60).

Step 2. We move the sphere ∂B_λ(0) towards λ = 0. By doing so, we establish the inequality for all x ∈ B₁(0) and λ ∈ (0, 1]. This is done again by the Kelvin transform and the strong maximum principle. Let u_λ(x) = λ^{(n−2)/2} u(λx). Then

−Δu_λ = R(λ|x|) u_λ^τ(x). (8.63)

Let v_λ(x) = |x|^{2−n} u_λ(x/|x|²) be the Kelvin transform of u_λ. Then it is easy to verify that

−Δv_λ = R(λ/r) v_λ^τ(x). (8.64)

Let w_λ = u_λ − v_λ; note that (8.58) is equivalent to w_λ > 0 in B₁(0). Then

Δw_λ + R(λ/r) ψ_λ w_λ = [R(λ/r) − R(λr)] u_λ^τ, (8.65)

where ψ_λ is some function with values between τ u_λ^{τ−1} and τ v_λ^{τ−1}. We show that the moving can