1 Hyperbolic equations
1.1 Introduction
The subject of this course is the theory of nonlinear hyperbolic equations. These
are a type of partial diﬀerential equations which describe the propagation of
waves. Perhaps the best-known member of this class is the standard wave equation
\[
\frac{\partial^2 u}{\partial t^2} = \sum_{i=1}^{n} \frac{\partial^2 u}{\partial (x^i)^2}. \qquad (1)
\]
Here u is a real-valued function and n is a positive integer. The cases met most
frequently in applications are n = 1, 2, 3. In a sense the theory of hyperbolic
equations consists in identifying a large class of partial diﬀerential equations
whose solutions have many qualitative properties in common with the equation
(1). This equation is second order but it should already be said at this point
that many important hyperbolic equations are ﬁrst order and that in fact most
of this course will be concerned with systems of ﬁrst order equations.
Next some general notation and terminology will be introduced. Let $x^\alpha$, $\alpha = 0, 1, 2, \ldots, n$, be local coordinates on an open subset U of $\mathbb{R}^{n+1}$. The coordinate $x^0$ will often play a preferred role in what follows and will also be denoted by t. In general in the examples to be considered it represents time. The coordinates $x^i$, $i = 1, 2, \ldots, n$, usually represent spatial coordinates although there are also
examples where they have a diﬀerent interpretation. The intuitive picture is
that the solutions of the equations represent states of a system which evolve
in time. For this reason the equation is referred to as an evolution equation.
The unknown u in the equation is a mapping from U to $\mathbb{R}^k$. In other words
this is a system of k equations. If the system is of order r then in order for
the system to make sense in a straightforward pointwise sense the mapping u
should be r times continuously diﬀerentiable. If this regularity condition holds
and u satisﬁes the equation it is called a classical solution. There are ways of
extending the notion of solutions leading to what are called weak solutions. In
this course the aim is the study of classical solutions.
If an evolution equation is intended to describe a deterministic process in the
real world then its solutions should be ﬁxed uniquely by a complete description
of the system at a ﬁxed time t = 0. This leads to the mathematical concept
of the initial value problem or Cauchy problem. What a complete description
means depends on the type of equation. For the wave equation (1) it means
giving the values of the function and its ﬁrst time derivative. In other words
we pose the conditions $u(0, x) = u_0(x)$ and $\frac{\partial u}{\partial t}(0, x) = u_1(x)$ for some functions $u_0$ and $u_1$. When these two functions are given and have suitable regularity
properties there should exist a solution u of the evolution equation satisfying
these initial conditions, at least in a neighbourhood of the initial hypersurface
t = 0. Moreover this solution should be unique. With a suitable choice of
regularity conditions for the data and the solutions this is true for the wave
equation (1). As another example consider the (inviscid) Burgers equation
\[
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0. \qquad (2)
\]
Here k = 1 (a scalar equation) and the equation is nonlinear. In this case we
pose the initial condition $u(0, x) = u_0(x)$. For suitable regularity assumptions
existence and uniqueness hold in this example.
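Locally in time the solution of (2) can be represented implicitly by the classical characteristic formula $u(t, x) = u_0(x_0)$ with $x = x_0 + t\,u_0(x_0)$ (a standard fact, not proved here). The sketch below, with an illustrative Gaussian datum and a simple fixed-point solver, checks numerically that this formula satisfies the equation:

```python
import math

def u0(x):
    # Illustrative smooth initial datum.
    return math.exp(-x * x)

def burgers_u(t, x, tol=1e-12):
    # Solve the implicit relation x = x0 + t*u0(x0) for the characteristic
    # foot point x0 by fixed-point iteration (converges for small t).
    x0 = x
    for _ in range(200):
        x0_new = x - t * u0(x0)
        if abs(x0_new - x0) < tol:
            break
        x0 = x0_new
    return u0(x0)

# Check u_t + u u_x = 0 at a sample point by centred finite differences.
t, x, h = 0.1, 0.3, 1e-5
ut = (burgers_u(t + h, x) - burgers_u(t - h, x)) / (2 * h)
ux = (burgers_u(t, x + h) - burgers_u(t, x - h)) / (2 * h)
residual = ut + burgers_u(t, x) * ux
print(abs(residual))  # small (finite-difference error only)
```

For sufficiently large t the characteristics cross and no classical solution exists; the fixed-point iteration is only reliable for small t, matching the local nature of the existence statement.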
In the Cauchy problem for an evolution equation existence and uniqueness
should hold. This means that for any suitable initial data a local solution exists
and that this solution is uniquely determined by the data. There is also a third
property which should be satisﬁed. The solution should depend continuously
on the initial data in a suitable sense. Making this precise requires deﬁning
topologies on the set of initial data and the set of solutions. If all three properties
hold then the initial value problem is said to be well-posed. (This is a translation of the term 'bien posé' introduced by Jacques Hadamard.) The importance
of the third property is as follows. Suppose that we want to use a system of
evolution equations to describe a phenomenon in nature. Then it is necessary
to compare the solution of the equation with experimental data. Experimental
data are always subject to error. This means that it is impossible to determine
exactly which initial data for the evolution equation is appropriate for describing
a given experiment. This data is only known up to a small error. If the model
is to be predictive a small error in the data should only lead to a small error
in the solution. This is precisely a statement on continuous dependence of the
solution on the data. There is also an additional property which is preliminary
to proving continuous dependence. It should be the case that the solutions
corresponding to data close to a particular datum have a common domain of
existence.
What does it mean for an evolution equation to be hyperbolic? Intuitively it
is the property that the equation should have a well-posed initial value problem with the data which are appropriate for a hyperbolic equation. This means that for a system of order r the data should consist of the restrictions to the initial
hypersurface of the solution and its time derivatives up to order r − 1. Note
that this is a plausible choice of data for the following reason. When these
data are given then u and its time derivatives of order up to r − 1 are known
for t = 0. Diﬀerentiating along the hypersurface allows all partial derivatives
of u of all orders to be determined for which the number of time derivatives
does not exceed r − 1. Diﬀerentiating the evolution equation repeatedly with
respect to time and substituting in the information already available allows all
derivatives of u to be determined on the initial hypersurface. This suggests that uniqueness holds in the Cauchy problem although it does not prove that this is the case.
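For a first order example this procedure can be made completely explicit. Taking the Burgers equation (2) in the form $u_t = -u u_x$, the time derivatives of u on t = 0 are generated from the initial datum by differentiation; the datum below is a hypothetical choice:

```python
import sympy as sp

x = sp.symbols('x')
u0 = sp.sin(x)  # hypothetical initial datum

# On t = 0 the equation u_t = -u u_x determines u_t from the datum alone:
ut0 = -u0 * sp.diff(u0, x)

# Differentiating the equation in t gives u_tt = -(u_t u_x + u u_xt),
# where u_xt is the x-derivative of the expression already found for u_t:
utt0 = -(ut0 * sp.diff(u0, x) + u0 * sp.diff(ut0, x))

# Independent check: eliminating u_t by hand gives
# u_tt = 2 u u_x^2 + u^2 u_xx on t = 0.
check = 2 * u0 * sp.diff(u0, x)**2 + u0**2 * sp.diff(u0, x, 2)
print(sp.simplify(utt0 - check))  # 0
```

Repeating the differentiation in t generates all higher time derivatives on the initial hypersurface, exactly as described above.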
There are evolution equations where different choices of initial data are appropriate. Consider for instance the heat equation in one dimension:
\[
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}. \qquad (3)
\]
In this case it turns out that it is appropriate to prescribe only the value of u
on the initial hypersurface and that this leads to a well-posed Cauchy problem.
It is not possible to additionally prescribe the time derivative of u and this
equation is not hyperbolic. It belongs to the class of parabolic equations. In
general these are of order r = 2s and only the time derivatives of order at most s − 1 may be prescribed. There is another important difference between hyperbolic and
parabolic equations. In the hyperbolic case the existence which is assumed for
well-posedness is in an open neighbourhood of t = 0. In the parabolic case it is only a one-sided neighbourhood satisfying the condition t ≥ 0. Parabolic
equations cannot be solved backwards in time for general initial data. Consider
next the Schrödinger equation in one dimension:
\[
\frac{\partial u}{\partial t} = i\,\frac{\partial^2 u}{\partial x^2}. \qquad (4)
\]
Here u is complex-valued. Everything can be expressed in terms of real variables
by thinking of this as a system of two equations for the real and imaginary parts
of u. In this case, as with the heat equation, it is appropriate to prescribe only
the value of u and not its time derivative. Then a well-posed Cauchy problem
is obtained. This equation is not hyperbolic. In contrast to the heat equation
(and other parabolic equations) it can be solved in both time directions.
It has already been mentioned that the wave equation is the model for the
class of hyperbolic equations. It should be remarked that there are other equations which are sometimes called wave equations but are not hyperbolic. To
prevent confusion they may be referred to as dispersive wave equations. An
example is the Korteweg-de Vries equation
\[
\frac{\partial u}{\partial t} = \frac{\partial^3 u}{\partial x^3} + 6u\,\frac{\partial u}{\partial x}. \qquad (5)
\]
This has a well-posed initial-value problem where only the value of u is prescribed. The wave equation (1) gets its name from the fact that it was first used to describe water waves. Interestingly the Korteweg-de Vries equation also
emerged as a description for water waves. The fact that its Cauchy problem is
so diﬀerent from that of the ordinary wave equation comes from the fact that it
is derived by taking a kind of singular limit. Although the Cauchy problem for
the KdV equation has been studied, most mathematical work on it is related to
ﬁnding explicit solutions of nonlinear PDE. The coeﬃcient six in the equation
comes from that source.
In this course no general and precise definition of the concept 'hyperbolic' will be given. One possibility would be to define an equation to be hyperbolic
if it has a well-posed Cauchy problem with initial data of the kind which we
have described as being appropriate. The problem with this deﬁnition is that
for a given equation it would be necessary to prove a (possibly very diﬃcult)
theorem in order to check whether the deﬁnition is satisﬁed. In fact what we
would like to have is a criterion which is easy to check and which, when it
is satisfied, guarantees the well-posedness of the Cauchy problem. There are different criteria of this kind and no single one of them is the most general. One
deﬁnition which has turned out to be widely applicable is that of a symmetric
hyperbolic system and this course concentrates on the theory of that class of
equations. A symmetric hyperbolic system is a system of ﬁrst order equations
and so it looks as if by concentrating on this case we would have lost contact with
the central example, the wave equation, since that is second order. In fact this
is not the case since the wave equation can be reduced to a ﬁrst order system
by introducing the ﬁrst order derivatives as new variables in such a way that a
symmetric hyperbolic system results. The definition of well-posedness depends
on a choice of the regularity of the data and solutions considered and of the
corresponding topologies of the spaces of data and solutions. The spaces which
occur naturally in the theory of symmetric hyperbolic systems are the Sobolev spaces of $L^2$ type (which will be introduced later) and the $C^\infty$ functions. There
are equations which do have some right to be called hyperbolic and which do not
allow a good theory of the Cauchy problem in spaces of this type. An example is the class of systems which are weakly hyperbolic in the sense of Leray-Ohya. In
that case smooth functions must be replaced by functions belonging to certain
Gevrey classes. These issues will not be discussed further in this course. They
were just mentioned here to emphasize that while symmetric hyperbolic systems
are suﬃcient for understanding a large part of the ’hyperbolic world’ they do
not cover all equations of interest in this context.
1.2 Examples
In the last section two simple examples of hyperbolic equations were given together with several examples of equations which are not hyperbolic. In this
section further examples of hyperbolic equations will be presented.
1. The wave equation
\[
-\frac{\partial^2 u}{\partial t^2} + \Delta u = 0. \qquad (6)
\]
Here u = u(t, x) is a mapping from $I \times \mathbb{R}^n \to \mathbb{R}$ where I is an open interval and ∆ is the Laplace operator on $\mathbb{R}^n$. As already mentioned this equation is
. As already mentioned this equation is
the model equation for hyperbolic equations. The case n = 2 describes waves
on the surface of a ﬂuid, for instance waves on the sea.
2. A linear transport equation
\[
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = 0 \qquad (7)
\]
where u = u(t, x) is a mapping from $I \times \mathbb{R} \to \mathbb{R}$ and a is a constant. The general solution of this equation is $u(t, x) = u_0(x - at)$. This allows the Cauchy problem to be solved explicitly in this case. If the initial datum $u_0$ is $C^\infty$ the same is true of the solution u. The solution is a profile (a wave) which propagates to the right with velocity a.
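Since the solution formula is explicit, it is easy to check directly; the speed a and the profile $u_0$ below are illustrative choices:

```python
import math

a = 1.5  # illustrative constant speed

def u0(x):
    # Illustrative smooth initial profile.
    return math.exp(-x * x)

def u(t, x):
    # The general solution u(t, x) = u0(x - a t) of the transport equation.
    return u0(x - a * t)

# Verify u_t + a u_x = 0 at a sample point by centred finite differences.
t, x, h = 0.7, 0.2, 1e-5
ut = (u(t + h, x) - u(t - h, x)) / (2 * h)
ux = (u(t, x + h) - u(t, x - h)) / (2 * h)
print(abs(ut + a * ux))  # close to zero
```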
3. The McKendrick equation
\[
\frac{\partial \rho}{\partial t} + \frac{\partial \rho}{\partial a} + \mu(a)\rho = 0 \qquad (8)
\]
This equation describes an agestructured population. The quantity ρ represents
the number of organisms of a given species of age a at time t. The function
µ(a) > 0, called the mortality function, is the rate at which organisms of age
a die. It is necessary to supplement this equation by a boundary condition at
a = 0 representing the birth of the organisms. Thus the problem of interest
for the application is not a pure Cauchy problem. When the organism is one
which reproduces sexually it is usual to only consider the population of females.
This example has been mentioned here to emphasize that the variables other
than time in a hyperbolic equation are not always spatial variables and that
hyperbolic equations come up in modelling biological systems.
4. Semilinear wave equations
\[
-\partial_t^2 u + \Delta u = u^{2k+1} \qquad (9)
\]
In this example u is a real-valued function and k is a positive integer. This is
one of the simplest examples of a nonlinear hyperbolic equation. An equation
is called semilinear if the terms of highest order occur linearly with coeﬃcients
which do not depend on the unknowns. In contrast to the case of linear equations
there is a major diﬀerence between considering solutions which exist locally or
globally in time. It is known that the equation (9) has local in time solutions
for relatively general initial data, for instance for $C^\infty$ data. It is much harder
to say whether these solutions can be extended so as to exist for all values of t.
If the space dimension n is equal to three and k = 1 then the answer is positive (Jörgens, about 1960). For n = 3 and k = 2 global existence holds (Grillakis,
about 1990). For n = 3 and k ≥ 3 nothing is known. Thus we see that even an
apparently simple equation like (9) leads to mysterious questions.
5. Wave map (hyperbolic plane)
\[
-\partial_t^2 u + \Delta u = -2(\partial_t u\,\partial_t v - \nabla u \cdot \nabla v)
\]
\[
-\partial_t^2 v + \Delta v = e^{-2v}\left[(\partial_t u)^2 - |\nabla u|^2\right] \qquad (10)
\]
This example is a semilinear system of equations for two unknown functions u
and v and is an important model system in the mathematical theory of hyperbolic equations. Similar equations are known in mathematical physics, where
they are called nonlinear σ-models. This system has an interpretation in terms
of diﬀerential geometry. There are diﬀerent types of wave map which are deﬁned
by the choice of a Riemannian manifold, the target manifold. In the case of (10)
the Riemannian manifold is the hyperbolic plane. This equation with n = 2 has
certain properties in common with equation (9) with k = 2. These are so-called critical equations. A general global existence theorem for this equation was only
proved as recently as 2009 by Krieger and Schlag.
6. Wave map (general)
\[
-\partial_t^2 u^A + \Delta u^A = \sum_{B,C} \Gamma^A_{BC}(u)\left(\partial_t u^B\,\partial_t u^C - \nabla u^B \cdot \nabla u^C\right) \qquad (11)
\]
Here we have a generalization of the last example with k unknowns $u^A$ ($A = 1, \ldots, k$). In this case the Riemannian manifold previously mentioned is of dimension k. The coefficients $\Gamma^A_{BC}$ are the Christoffel symbols of its metric.
7. The Euler equations
\[
\partial_t \rho + \sum_{j=1}^{3} \partial_j(\rho v^j) = 0
\]
\[
\partial_t(\rho v^i) + \sum_{j=1}^{3} \partial_j(\rho v^i v^j + \delta^{ij} p) = 0 \qquad (12)
\]
\[
\partial_t s + \sum_{j=1}^{3} v^j \partial_j s = 0
\]
The Euler equations describe a perfect fluid in $\mathbb{R}^3$ with mass density ρ (assumed
nonnegative), velocity v and entropy density s. The pressure p is given by an
equation of state p = f(ρ, s). The function f should satisfy the physical conditions that f ≥ 0 and ∂f/∂ρ > 0 for ρ > 0. A particularly simple case is the
isentropic case where s is assumed to be constant. Then p is a function of ρ
alone and the first two of the above equations define the isentropic Euler equations. The Euler equations (including the isentropic case) are quasilinear but
not semilinear. The highest order derivatives occur linearly but with coeﬃcients
which do depend on the unknowns. These equations are the Euler equations for
a compressible fluid. The incompressible Euler equations have certain properties in common with hyperbolic equations but are not actually hyperbolic and
they will not be considered further in this course.
8. The Maxwell equations in vacuum
\[
\partial E/\partial t = \operatorname{curl} B
\]
\[
\partial B/\partial t = -\operatorname{curl} E \qquad (13)
\]
\[
\operatorname{div} E = 0, \qquad \operatorname{div} B = 0
\]
The Maxwell equations describe the electromagnetic ﬁeld. Here we consider only
the case without sources (the vacuum case). In other words, no charged matter
is present. The quantities E and B are vector fields in $\mathbb{R}^3$ at each fixed time; E
is the electric ﬁeld and B is the magnetic ﬁeld. The ﬁrst two of these equations
are the Maxwell evolution equations and the other two the constraints. The
constraints contain no time derivatives and imply the restrictions $\operatorname{div} E_0 = 0$ and $\operatorname{div} B_0 = 0$, where $E_0$ and $B_0$ are the initial data for E and B, respectively. Thus
the initial data for these equations cannot be prescribed freely. The evolution
equations imply that $\partial_t(\operatorname{div} E) = 0$ and $\partial_t(\operatorname{div} B) = 0$. It follows that a solution
of the Maxwell evolution equations with data satisfying the constraints satisﬁes
all Maxwell equations (propagation of the constraints).
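The computation behind the propagation of the constraints is one line. Taking the divergence of the evolution equations and using the vector identity $\operatorname{div} \operatorname{curl} = 0$ gives
\[
\partial_t(\operatorname{div} E) = \operatorname{div}(\operatorname{curl} B) = 0, \qquad
\partial_t(\operatorname{div} B) = -\operatorname{div}(\operatorname{curl} E) = 0,
\]
so $\operatorname{div} E$ and $\operatorname{div} B$ are constant along the evolution and vanish for all t if they vanish at t = 0.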
9. The Maxwell equations in a medium. It is possible to give an eﬀective
description of the propagation of electromagnetic ﬁelds in a material. The ﬁeld
equations are derived from a Lagrangian L depending on E and B. In the vacuum case it is simply $\frac{1}{2}(|E|^2 - |B|^2)$. More general Lagrangians give rise to an interesting class of first order quasilinear hyperbolic systems.
10. Elasticity theory. The Euler equations can be generalized to give a description of an elastic solid. The resulting equations form a first order quasilinear
hyperbolic system.
11. Yang-Mills equations. The Yang-Mills equations are another nonlinear generalization of the Maxwell equations. They are semilinear. They are the
classical analogue of equations which play a big role in particle physics. The
mathematical formulation of these equations requires tools from differential geometry (connections on principal fibre bundles).
12. The Einstein equations. These equations give a fully relativistic description
of the gravitational ﬁeld and are the fundamental equations of general relativity.
They are formulated using concepts from differential geometry (Lorentzian metrics). These equations are second order and quasilinear but not semilinear. The
nonlinearity shows similarities to that of wave maps (terms which are quadratic
in the ﬁrst derivatives of the unknowns).
1.3 Characteristics
It has already been said that no attempt will be made in this course to give
a general and precise deﬁnition of the concept ‘hyperbolic’. The attempt will
nevertheless be made to explain what are the key features which play a role in
trying to formulate a deﬁnition. In order to do this it is necessary to introduce
the notion of characteristics. Before that the concept of linearization is required.
Consider the following system of partial diﬀerential equations:
\[
\sum_{|\alpha| = s} A_\alpha(y, u, \ldots, D^{s-1}u)\,D^\alpha u + B(y, u, \ldots, D^{s-1}u) = 0 \qquad (14)
\]
As before the unknown u is assumed to be vector-valued, so that the $A_\alpha$ are matrix-valued and B is vector-valued. Here the Greek indices are multi-indices. In other words α is a finite sequence $(\alpha_1, \ldots, \alpha_n)$ of non-negative integers and
\[
D^\alpha u = \frac{\partial^{|\alpha|} u}{(\partial y^1)^{\alpha_1} \cdots (\partial y^n)^{\alpha_n}} \qquad (15)
\]
with $|\alpha| = \alpha_1 + \cdots + \alpha_n$. The notation $D^{s-1}u$ represents all partial derivatives of u up to order $s - 1$, in other words the collection of all $D^\alpha u$ with $|\alpha| \le s - 1$.
Here no variable t has been singled out; the relation to what was written above
comes by letting y = (t, x). The equation (14) is quasilinear, i.e. it is linear
in the highest order derivatives. The discussion of the linearization would work
just as well for a fully nonlinear equation but it seemed helpful to restrict here to
the quasilinear case since all equations considered in this course are quasilinear.
Now let u be a solution of (14). The linearization of the system (14) about the
solution u is the linear equation
\[
\sum_{|\alpha| = s} \left[ A_\alpha(y, u, \ldots, D^{s-1}u)\,D^\alpha v + \left( \sum_{r=0}^{s-1} \frac{\partial A_\alpha}{\partial (D^r u)}\,D^r v \right) D^\alpha u \right] + \sum_{r=0}^{s-1} \frac{\partial B}{\partial (D^r u)}\,D^r v = 0, \qquad (16)
\]
considered as an equation for v. This equation has the following origin. Let
w(λ, y) be a parameter-dependent solution of the system (14) with w(0, y) =
u(y). This can also be considered as a family w(λ) of solutions of the system (14)
with w(0) = u. Then the derivative of this family with respect to λ, evaluated at
λ = 0, satisﬁes the linearized system (16). The intuitive idea is that a solution
of the linearized system could provide an approximation to solutions of the full
system.
In general the equation (14) is said to be hyperbolic at the solution u if the
linearized system about u is hyperbolic. The system (14) is called hyperbolic if
a solution with this property exists. Thus the problem of deﬁning hyperbolicity
for nonlinear systems has been reduced to the linear case. We now consider the linear system
\[
\sum_{|\alpha| \le s} A_\alpha(y)\,D^\alpha u + B(y) = 0 \qquad (17)
\]
At this point it is appropriate to make a remark about the semilinear case. The
system (14) is semilinear when $A_\alpha$ only depends on y. If the linearization is written in the form (17) then for $|\alpha| = s$ the coefficients $A_\alpha(y)$ do not depend on the solution, in contrast to the general case.
The principal part of the equation (17) is the expression $\sum_{|\alpha| = s} A_\alpha(y)\,D^\alpha u$, i.e. the part which contains the highest order derivatives. If we replace $D^\alpha u$ in this expression by $\xi^\alpha$ we get the principal symbol $\sum_{|\alpha| = s} A_\alpha(y)\,\xi^\alpha$, a matrix-valued function of y and $\xi \in \mathbb{R}^n$. Note that by definition for a multi-index α the expression $\xi^\alpha$ means $\xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$. The determinant of the principal symbol
is a real-valued function of y and ξ and is called the characteristic polynomial.
It will be denoted by P(y, ξ). The set
\[
\{(y, \xi) : P(y, \xi) = 0,\ \xi \neq 0\}
\]
is called the characteristic set. At this point it is appropriate to remark that in
contrast to the concept ‘hyperbolic’ the concept ‘elliptic’ is easy to deﬁne. A
system is elliptic if and only if its characteristic set is empty, i.e. if the principal symbol is invertible for all (y, ξ) with ξ ≠ 0. A hypersurface with normal vector ν is called
characteristic when ν lies in the characteristic set, otherwise noncharacteristic.
For a fixed value of y the polynomial P is of degree sk. Thus it is to be expected that the part of the characteristic set for this fixed value of y consists of at most sk (possibly singular) hypersurfaces. If complex values of ξ were allowed there would be exactly sk of these, counting multiplicity. However they do not all need to be real. The hyperbolicity of the system depends on what direction in $\mathbb{R}^{n+1}$ is chosen as the time direction. Let a splitting y = (t, x) be chosen. There is a corresponding splitting ξ = (τ, ζ). Consider the zero set of the expression $P(t, x, \tau, \zeta_0)$ for a fixed choice of $\zeta_0 \neq 0$. When t, x and $\zeta_0$ are fixed this is a polynomial of degree sk in τ.
Definition The system is called strictly hyperbolic at the point y if there is a choice of t such that for each choice of $\zeta_0$ the polynomial $P(t, x, \tau, \zeta_0)$ has exactly sk distinct real roots.
A strictly hyperbolic system has a well-posed initial value problem, i.e. existence, uniqueness and continuous dependence hold. The disadvantage of this definition is that it is not satisfied in most examples of hyperbolic systems. The multiplicity of the roots is usually greater than one and often depends on the choice of $\zeta_0$. Nevertheless it should be noted that the condition that all roots
are real is closely connected to the deﬁnition of hyperbolicity. This condition
is in particular satisﬁed for symmetric hyperbolic systems. The zeroes of the
characteristic polynomial correspond to diﬀerent types of waves. Before we leave
the concept of strict hyperbolicity we should confront it with our examples.
It is easy to see that the wave equation, the linear transport equation (7)
and the McKendrick equation (8) are strictly hyperbolic. The same is true for
the equation (9). The system (10), on the other hand, is not strictly hyperbolic.
The characteristic set looks like that of the wave equation but consists of double zeroes of the characteristic polynomial. A similar argument applies
to (11). The characteristics of the Euler and Maxwell equations are not simple
zeroes and therefore these equations are not strictly hyperbolic.
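For the wave equation itself (k = 1, s = 2) strict hyperbolicity is easy to verify symbolically. With n = 2 the principal symbol is the scalar $-\tau^2 + \zeta_1^2 + \zeta_2^2$, and for each fixed $\zeta \neq 0$ there are sk = 2 distinct real roots in τ. A quick check with a computer algebra system:

```python
import sympy as sp

tau, z1, z2 = sp.symbols('tau zeta1 zeta2', real=True)

# Characteristic polynomial of the wave equation in 2 space dimensions:
# replace d/dt by tau and d/dx^i by zeta_i in the principal part.
P = -tau**2 + z1**2 + z2**2

roots = sp.solve(P, tau)
print(roots)  # the two roots +/- sqrt(zeta1**2 + zeta2**2)
```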
1.4 Symmetric hyperbolic systems
Let $I \subset \mathbb{R}$ be an open interval containing zero and let $G \subset \mathbb{R}^k$ be an open subset. Let $A^0$ and $A^i$ be $C^\infty$ mappings from $I \times \mathbb{R}^n \times G$ to $M_k(\mathbb{R})$, the space of k by k real matrices. Let B be a $C^\infty$ mapping from $I \times \mathbb{R}^n \times G$ to $\mathbb{R}^k$.
Consider the quasilinear system
\[
A^0(t, x, u)\,\partial_t u + \sum_{i=1}^{n} A^i(t, x, u)\,\partial_i u + B(t, x, u) = 0 \qquad (18)
\]
of partial differential equations for a mapping u of class $C^\infty$ from $I \times \mathbb{R}^n$ to G.
Definition The system (18) is called symmetric hyperbolic if
(i) the matrices $A^0$ and $A^i$ are symmetric for $(t, x, u) \in I \times \mathbb{R}^n \times G$;
(ii) there is a constant C > 0 such that for each vector $v \in \mathbb{R}^k$ the inequality $\langle A^0(t, x, u)v, v\rangle \ge C\langle v, v\rangle$ holds for all $(t, x, u) \in I \times \mathbb{R}^n \times G$.
If a matrix A is positive definite there exists a constant C > 0 such that $\langle Av, v\rangle \ge C\langle v, v\rangle$. Therefore the second condition can be expressed by saying that $A^0$ is uniformly positive definite. The linearization of equation (18) is
\[
A^0(t, x, u)\,\partial_t v + \sum_{i=1}^{n} A^i(t, x, u)\,\partial_i v + \left[ \frac{\partial A^0}{\partial u}(t, x, u)\,\partial_t u + \sum_{i=1}^{n} \frac{\partial A^i}{\partial u}(t, x, u)\,\partial_i u + \frac{\partial B}{\partial u}(t, x, u) \right] v = 0 \qquad (19)
\]
The system (19) is a linear symmetric hyperbolic system for each ﬁxed function
u with values in G. In fact the system (18) is symmetric hyperbolic if and
only if the linearization of this equation about each function u with values in
G is symmetric hyperbolic. What do the characteristics of equation (18) look
like? To see this we need a little linear algebra. An eigenvalue of the matrix
M is by deﬁnition a solution λ of the equation det(M − λI) = 0. This can
be generalized in the following way. Let A be a positive deﬁnite symmetric
matrix. An eigenvalue of M with respect to A is a solution λ of the equation
det(M −λA) = 0. If M is symmetric then it is the case that, as in the special
case A = I, all eigenvalues of M with respect to A are real. The characteristic
polynomial of (19) is
\[
P(t, x, \tau, \zeta) = \det\Big[\tau A^0(t, x, u) + \sum_{i=1}^{n} \zeta_i A^i(t, x, u)\Big].
\]
The roots of the equation $\det(\tau A^0 + \sum_{i=1}^{n} \zeta^0_i A^i) = 0$ are, up to a factor −1, the eigenvalues of the matrix $\sum_{i=1}^{n} \zeta^0_i A^i$ with respect to $A^0$. Thus they are real.
They need not be distinct. For this reason the theory of symmetric hyperbolic
systems can be applied in many situations in which the system is not strictly
hyperbolic. If we have two symmetric hyperbolic systems with unknowns u and u′ and we produce a system for (u, u′) by placing them side by side then the combined system is also symmetric hyperbolic. An initial datum for the system (18) is a function $u_0(x)$ on $\mathbb{R}^n$ with values in G.
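The reality of eigenvalues with respect to a positive definite symmetric matrix can be illustrated numerically; the matrix size and random seed below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import eigh  # generalized symmetric eigenproblem

rng = np.random.default_rng(0)  # arbitrary seed

# A random symmetric M and a random positive definite symmetric A0.
X = rng.standard_normal((5, 5))
M = X + X.T
Y = rng.standard_normal((5, 5))
A0 = Y @ Y.T + 5.0 * np.eye(5)  # positive definite by construction

# Eigenvalues of M with respect to A0, i.e. solutions of det(M - lam*A0) = 0.
lam = eigh(M, A0, eigvals_only=True)
print(lam)  # all real, as the linear algebra above predicts
```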
Now, as promised, it will be shown how the wave equation can be written as a symmetric hyperbolic system. Let u be a solution of the wave equation. Let $v = \partial_t u$ and $w_i = \partial_i u$. Then the n + 2 quantities u, v, $w_i$ satisfy the following system
\[
\partial_t u = v, \qquad \partial_t v = \sum_{i=1}^{n} \partial_i w_i, \qquad \partial_t w_i = \partial_i v \qquad (20)
\]
This system is symmetric hyperbolic. In this way we obtain a solution of the system (20) from each solution of the wave equation. The initial data for this solution have the additional property that $\partial_i u_0 = (w_i)_0$. Conversely, if $(u, v, w_i)$ is a solution of (20) with initial data which have this additional property then u solves the wave equation. For the system (20) implies that $\partial_t(w_i - \partial_i u) = 0$. Together with the condition on the initial data this implies that $w_i = \partial_i u$ everywhere. Then the first two equations of (20) imply that u satisfies the wave equation. The same procedure allows the semilinear equations in the examples (9)-(11) to be put into symmetric hyperbolic form. In the examples (10) and (11)
it is important to note the following fact. It was already mentioned that it is
possible to make a system by placing two symmetric hyperbolic systems side by
side without the property of symmetric hyperbolicity being lost. In other words,
when a system of partial diﬀerential equations decomposes into two subsystems
which are symmetric hyperbolic and not coupled to each other then the full
system is symmetric hyperbolic. In fact more can be said. The analogous
statement still holds when the principal parts of the subsystems are decoupled.
This holds for the systems obtained by reduction of the equations for wave maps.
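For n = 2 the reduction of the wave equation can be checked in a few lines. The sketch below encodes the system (20) in the form $A^0 \partial_t U + \sum_i A^i \partial_i U + B = 0$ with $U = (u, v, w_1, w_2)$ and verifies both conditions of the definition, together with the reality of the characteristic roots for a sample covector:

```python
import numpy as np

# The system (20) for n = 2 with U = (u, v, w1, w2), written as
# A0 dU/dt + A1 dU/dx1 + A2 dU/dx2 + B(U) = 0 with A0 = I, B = (-v, 0, 0, 0).
A0 = np.eye(4)
A1 = np.zeros((4, 4)); A1[1, 2] = A1[2, 1] = -1.0  # couples v and w1
A2 = np.zeros((4, 4)); A2[1, 3] = A2[3, 1] = -1.0  # couples v and w2

# Condition (i): all coefficient matrices are symmetric.
for A in (A0, A1, A2):
    assert np.array_equal(A, A.T)

# Condition (ii): A0 = I is (uniformly) positive definite.
assert np.all(np.linalg.eigvalsh(A0) > 0)

# Characteristic roots for a sample covector zeta: the eigenvalues of
# zeta1*A1 + zeta2*A2 with respect to A0 = I. They are real (+/-|zeta| and
# a double root 0), so the system is symmetric hyperbolic but not strictly
# hyperbolic.
zeta1, zeta2 = 0.3, 0.4
lam = np.linalg.eigvalsh(zeta1 * A1 + zeta2 * A2)
print(np.round(lam, 6))
```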
Exercise The Maxwell equations are a symmetric hyperbolic system.
Now it will be shown that the Euler equations can be written in symmetric
hyperbolic form. This is already a ﬁrst order system. Thus it is not necessary
to do any reduction. In the nonisentropic case it turns out that it is necessary
to make the (physically motivated) assumption ∂f/∂ρ > 0 in order to obtain
a symmetric hyperbolic system. Then there exists a function g such that ρ = g(s, p). As a first step in obtaining a symmetric hyperbolic form certain linear combinations of the equations are taken. If $-v^i$ times the first equation is added to the equation for $v^i$ various terms cancel. Then ρ is replaced by p by means of the function g. The terms in the first equation which contain derivatives of s vanish as a consequence of the evolution equation for the entropy. It is then enough to multiply the first equation by a suitable positive factor. The set G can be chosen as that which is defined by the inequalities $\rho_1 < \rho < \rho_2$ and $0 < s < s_0$, where $\rho_1$, $\rho_2$ and $s_0$ are arbitrary positive constants with $\rho_1 < \rho_2$.
Exercise After the manipulations just described the Euler equations are a symmetric hyperbolic system.
Until now data was always given at $t = t_0$ with $t_0$ an arbitrary constant. It is,
however, possible to prescribe data on a more general hypersurface. Consider a
hypersurface which is defined by the equation t = f(x). At a given point (t, x) of the hypersurface the vector $(1, -\partial_i f)$ is normal to the hypersurface. The hypersurface is called spacelike with respect to the symmetric hyperbolic system (18) if $A^0 - \sum_{i=1}^{n} A^i \partial_i f$ is positive definite. We could introduce $t' = t - f(x)$ as a new time coordinate. Then we would be in the known situation with data on $t' = 0$ after the transformation. The transformed $A^0$ is positive definite in a neighbourhood of $t' = 0$. If this matrix is uniformly positive definite then we get a local existence theorem. This will be the case, for example, when f has compact support.
2 Background from functional analysis
Existence theorems for partial differential equations are generally proved by defining an iteration and showing that it converges to a solution or by studying a mapping whose fixed points are solutions. This is similar to the Picard iteration used to prove the standard existence theorem for ordinary differential equations, but more complicated. A standard tool used to prove convergence or the existence of a fixed point consists of inequalities between norms of various functions (estimates). For this purpose it is necessary to have various inequalities relating the norms of a function in different spaces and some important inequalities of this type will now be discussed.
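As an illustration of the Picard iteration mentioned above, here is a minimal numerical sketch for the ODE $u' = u$, $u(0) = 1$, whose iterates $u_{k+1}(t) = 1 + \int_0^t u_k(s)\,ds$ converge to $e^t$; the grid size and iteration count are arbitrary choices:

```python
import math

# Picard iteration for u' = u, u(0) = 1 on [0, 1]:
#   u_{k+1}(t) = 1 + integral_0^t u_k(s) ds,
# discretized on a uniform grid with the trapezoidal rule.
N = 1000
u = [1.0] * (N + 1)  # initial guess u_0(t) = 1

for _ in range(30):
    integral = 0.0
    new = [1.0]
    for i in range(1, N + 1):
        integral += 0.5 * (u[i - 1] + u[i]) / N
        new.append(1.0 + integral)
    u = new

# The iterates converge to the exact solution exp(t).
print(abs(u[-1] - math.e))  # small discretization error
```

For partial differential equations the convergence of such iterations is controlled precisely by the norm inequalities (estimates) discussed next.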
2.1 Embedding theorems
In the following we need the spaces $L^p(\mathbb{R}^n)$ of all functions on $\mathbb{R}^n$ whose pth power is integrable, with the norm
\[
\|f\|_{L^p} = \left( \int_{\mathbb{R}^n} |f|^p\,dx \right)^{1/p} \qquad (21)
\]
Here p is a real number belonging to the interval [1, ∞). The space $L^\infty(\mathbb{R}^n)$ is the space of all essentially bounded functions with the norm defined by the essential supremum. We also need the Sobolev spaces $W^{m,p}$ of all functions whose derivatives up to order m are in $L^p$. Here m is a non-negative integer and p satisfies the inequality $1 \le p \le \infty$. (The case p = 2 is particularly important and plays a central role in the theory of hyperbolic equations. For this reason we introduce a special notation and write $H^m = W^{m,2}$.) Strictly speaking it is necessary at this point to use distributions rather than functions and to interpret the derivatives as distributional derivatives. In this course it will not be possible to cover these topics. Fortunately they will not play a big role in what follows. The space $C_0^\infty(\mathbb{R}^n)$ of functions of class $C^\infty$ with compact support is dense in each of these Sobolev spaces with p < ∞.
At first sight it seems natural to solve hyperbolic equations in $C^k$ spaces. Thus, for instance, it might be hoped that initial data $u_0 \in C^2$ and $u_1 \in C^1$ might give rise to a solution $u \in C^2$. Unfortunately this is not the case in general. To get this type of statement it is necessary to replace $C^k$ by $H^k$.
The Sobolev spaces of type $L^2$ are the natural spaces for hyperbolic equations. An example gives some insight as to why $C^k$ regularity is not propagated by hyperbolic equations. Consider the wave equation (1) with $n = 3$. Specializing to the spherically symmetric case $u(t, x) = u(t, r)$, where $r = |x|$, gives the equation
$$\partial_t^2 u = \partial_r^2 u + \frac{2}{r}\partial_r u \qquad (22)$$
This may be usefully rewritten as $\partial_t^2(ru) = \partial_r^2(ru)$. The general solution is thus
$$u(t, r) = \frac{1}{r}\bigl(f(t - r) + g(t + r)\bigr) \qquad (23)$$
for arbitrary functions $f$ and $g$. Smoothness at $r = 0$ implies that $f(t) + g(t) = 0$ for all $t$, i.e. $g = -f$. Thus a solution evolving from initial data of compact support can be written, after renaming $-f$ as $f$, in the form $u(t, r) = r^{-1}(f(t + r) - f(t - r))$. The initial data at $t = 0$ are given by $u_0(r) = r^{-1}(f(r) - f(-r))$ and $u_1(r) = r^{-1}(f'(r) - f'(-r))$. For the example choose $f$ to be a function of compact support with the following properties. It vanishes on the interval $(-\infty, 1]$, it is equal to $(r - 1)^3$ on the interval $[1, 2]$ and it is smooth for $r \ge 1$.
Then $u_0$ is $C^2$ and $u_1$ is $C^1$. It will now be shown that the solution is not $C^2$. The intuitive idea is that the irregularity in the initial data at $r = 1$ propagates along an ingoing light cone and focuses at $t = 1$. Inside the past light cone of the point $(1, 0)$ the solution is identically zero. Thus if the solution were $C^2$ all its second order derivatives would have to vanish there. However for $r$ small $u(1, r) = r^2$ and thus $\partial_r^2 u(1, r) = 2$. By a more complicated construction involving data with an infinite number of irregularities of this type it is possible to see that for data where $u_0$ and $u_1$ are $C^2$ and $C^1$ respectively there may be no time interval where the corresponding solution is $C^2$.
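The value $u(1, r) = r^2$ for small $r$ can be checked directly: whichever sign convention is used for the reflected term, only the contribution at argument $t + r$ survives at $t = 1$ for $0 < r \le 0.1$, since the other argument lies in the region where $f$ vanishes. A small numerical sketch (only the piece of $f$ on $[1, 2]$ matters here):

```python
import numpy as np

# f vanishes for s <= 1 and equals (s - 1)**3 on [1, 2]; the smooth
# continuation beyond s = 2 is irrelevant for the arguments used below.
def f(s):
    return np.where(s <= 1.0, 0.0, (s - 1.0) ** 3)

r = np.array([0.01, 0.05, 0.1])
# at t = 1 the reflected term f(1 - r) vanishes, so u(1, r) = f(1 + r)/r
vals = f(1.0 + r) / r
```

The result is $r^2$, whose second radial derivative is $2 \ne 0$, confirming the loss of $C^2$ regularity at the vertex of the light cone.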
Since Sobolev spaces are the appropriate spaces for proving theorems on the well-posedness of hyperbolic equations, but at the end of the day we would like to have classical solutions, it is clearly important to have suitable results relating these two types of spaces. These are the Sobolev embedding theorems, which will be proved in this section. In what follows the presentation of these theorems in [4] will be adopted. First some preliminaries are necessary. Note that the Sobolev spaces are Banach spaces but not in general Hilbert spaces.
Lemma 2.1.1 Let $X_1$ and $X_2$ be Banach spaces and $Y$ a dense linear subspace of $X_1$. Let $L$ be a linear mapping from $Y$ to $X_2$ with $\|L(x)\|_{X_2} \le C\|x\|_{X_1}$ for a constant $C > 0$. Then there exists a linear mapping $\tilde L : X_1 \to X_2$ whose restriction to $Y$ is equal to $L$ and which satisfies the inequality $\|\tilde L(x)\|_{X_2} \le C\|x\|_{X_1}$.
Proof If $x \in X_1$ there is a sequence $x_n \in Y$ with $\|x_n - x\|_{X_1} \to 0$ for $n \to \infty$ ($Y$ is dense in $X_1$). Since the sequence $\{x_n\}$ converges it is Cauchy. Because of the inequality $\|L(x_m) - L(x_n)\|_{X_2} \le C\|x_m - x_n\|_{X_1}$ the sequence $L(x_n)$ is also a Cauchy sequence. Hence $L(x_n)$ converges to a limit $z$. If we replace $\{x_n\}$ by another sequence $\{x'_n\}$ then $L(x'_n)$ also converges to $z$. For we can apply the argument just given to the interleaved sequence $x_1, x'_1, x_2, x'_2, \ldots$. Thus $z$ depends only on $x$ and we can define a mapping from $X_1$ to $X_2$ by $\tilde L(x) = z$. It is easy to show that $\tilde L$ is linear. Moreover, the restriction of $\tilde L$ to $Y$ is equal to $L$: it suffices to consider constant sequences. It remains to show the desired inequality.
$$\|\tilde L(x)\| \le \|\tilde L(x_n)\| + \|\tilde L(x_n) - \tilde L(x)\| \qquad (24)$$
$$= \|L(x_n)\| + \|L(x_n) - \tilde L(x)\| \qquad (25)$$
$$\le C\|x_n\| + \|L(x_n) - \tilde L(x)\| \qquad (26)$$
Let $\epsilon > 0$. Since $L(x_n)$ converges to $\tilde L(x)$ the second term is smaller than $\epsilon/2$ for $n$ sufficiently large. On the other hand the first term is less than $C\|x\| + \epsilon/2$ for $n$ sufficiently large. It follows that
$$\|\tilde L(x)\| \le C\|x\| + \epsilon$$
Since $\epsilon$ was arbitrary the result follows.
We will use this statement for instance when $X_1$ is a Sobolev space and $Y$ is the space of $C^\infty$ functions with compact support. Recall the Hölder inequality. Let $1 \le p \le \infty$ and let $p'$ be the unique number (conjugate exponent) with $1/p + 1/p' = 1$. Let $f \in L^p(\mathbb{R}^n)$ and $g \in L^{p'}(\mathbb{R}^n)$. Then the product $fg$ is in $L^1$ and $\|fg\|_{L^1} \le \|f\|_{L^p}\|g\|_{L^{p'}}$. In order to prove the embedding theorems we need a slight generalization of this statement.
Lemma 2.1.2 (generalized Hölder inequality) Let $p_1, \ldots, p_s$ be (generalized) real numbers with $1 \le p_i \le \infty$ and $1/p_1 + 1/p_2 + \ldots + 1/p_s = 1$. For $i = 1, 2, \ldots, s$ let $f_i \in L^{p_i}(\mathbb{R}^n)$. Then the product $f_1 f_2 \cdots f_s$ is in $L^1$ and
$$\|f_1 f_2 \cdots f_s\|_{L^1} \le \|f_1\|_{L^{p_1}}\|f_2\|_{L^{p_2}}\cdots\|f_s\|_{L^{p_s}}$$
This inequality follows from the original H¨older inequality by induction. Now
we come to the ﬁrst embedding theorems.
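Since the generalized Hölder inequality also holds for weighted sums, it can be illustrated on a grid; the functions and exponents below are arbitrary illustrative choices:

```python
import numpy as np

# discrete check of ||f1 f2 f3||_L1 <= ||f1||_Lp1 ||f2||_Lp2 ||f3||_Lp3
# with 1/p1 + 1/p2 + 1/p3 = 1 (here 1/2 + 1/3 + 1/6 = 1)
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
f1 = np.exp(-x**2)
f2 = 1.0 / (1.0 + x**2)
f3 = np.cos(x) ** 2

def lp_norm(f, p):
    # Riemann-sum approximation of the L^p norm
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

lhs = np.sum(np.abs(f1 * f2 * f3)) * dx
rhs = lp_norm(f1, 2.0) * lp_norm(f2, 3.0) * lp_norm(f3, 6.0)
```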
Lemma 2.1.3 For $1 \le p < n$ every function in $W^{1,p}(\mathbb{R}^n)$ belongs to $L^{np/(n-p)}(\mathbb{R}^n)$ and there is a constant $C > 0$ such that
$$\|u\|_{L^{np/(n-p)}} \le C\|u\|_{W^{1,p}}$$
for all $u \in W^{1,p}(\mathbb{R}^n)$.
In order to prove the lemma it is enough to obtain the inequality in the case $u \in C^\infty_0(\mathbb{R}^n)$. If the inequality holds in that case then we can apply Lemma 2.1.1 with $X_1 = W^{1,p}(\mathbb{R}^n)$, $Y = C^\infty_0(\mathbb{R}^n)$ and $X_2 = L^{np/(n-p)}(\mathbb{R}^n)$. If we choose $L$ to be the identity we get an extension $\tilde L$. We claim that $\tilde L$ must also be the identity. For if $u_n \to u$ in $W^{1,p}(\mathbb{R}^n)$ and $u_n \to v$ in $L^{np/(n-p)}(\mathbb{R}^n)$ then it must be the case that $u = v$. This type of argument will occur frequently in what follows and will not be repeated explicitly. We can also replace the $W^{1,p}$ norm of $u$ in the inequality by the $L^p$ norm of $\nabla u$. Now suppose that $u$ is a function in $C^\infty_0(\mathbb{R}^n)$. Then
$$u(x^1, x^2, \ldots, x^n) = u(\bar x^1, x^2, \ldots, x^n) + \int_{\bar x^1}^{x^1}\partial_1 u(y, x^2, \ldots, x^n)\,dy$$
It follows that
$$|u(x^1, x^2, \ldots, x^n)| \le |u(\bar x^1, x^2, \ldots, x^n)| + \int_{\bar x^1}^{x^1}|\partial_1 u(y, x^2, \ldots, x^n)|\,dy$$
We can choose $\bar x^1$ so that $(\bar x^1, x^2, \ldots, x^n)$ lies outside the support of $u$. Thus
$$|u(x^1, x^2, \ldots, x^n)| \le \int_{-\infty}^\infty|\partial_1 u(y, x^2, \ldots, x^n)|\,dy$$
Similar estimates hold for the other coordinates. Taking the product of these $n$ inequalities and taking the root of order $n - 1$ of the result gives the inequality
$$|u(x^1, x^2, \ldots, x^n)|^{n/(n-1)} \le \prod_{i=1}^n\Bigl(\int_{-\infty}^\infty|\partial_i u|\,dx^i\Bigr)^{\frac{1}{n-1}}$$
Now this inequality should be successively integrated with respect to $x^1$ up to $x^n$, whereby the generalized Hölder inequality with all $p_i = n - 1$ is used in each step. The result is that
$$\int_{\mathbb{R}^n}|u|^{n/(n-1)}\,dx \le C\Bigl(\int_{\mathbb{R}^n}|\nabla u|\,dx\Bigr)^{n/(n-1)}$$
This completes the proof of the result in the case $p = 1$. In order to get the general case $u$ is replaced by $|u|^\gamma$ with a suitable value of $\gamma > 1$ in this inequality:
$$\||u|^\gamma\|_{L^{n/(n-1)}} \le C\||u|^{\gamma-1}|\nabla u|\|_{L^1} \le C\|\nabla u\|_{L^p}\||u|^{\gamma-1}\|_{L^{p'}}$$
The choice $\gamma = (n-1)p/(n-p)$ gives
$$\Bigl(\int_{\mathbb{R}^n}|u|^{np/(n-p)}\,dx\Bigr)^{(n-1)/n} \le C\Bigl(\int_{\mathbb{R}^n}|u|^{np/(n-p)}\,dx\Bigr)^{(p-1)/p}\|\nabla u\|_{L^p}$$
and the desired result.
Corollary For $1 \le kp < n$ every function in $W^{k,p}(\mathbb{R}^n)$ belongs to $L^{np/(n-kp)}(\mathbb{R}^n)$ and there exists a constant $C$ such that
$$\|u\|_{L^{np/(n-kp)}} \le C\|u\|_{W^{k,p}}$$
for all $u \in W^{k,p}(\mathbb{R}^n)$.
The theorems which were presented up to now concern the embedding of one Sobolev space into another. There are also theorems which assert the existence of an embedding of a Sobolev space into a space defined by pointwise differentiability properties. These will be discussed next.
Lemma 2.1.4 If $kp > n$ then every function in $W^{k,p}(\mathbb{R}^n)$ belongs to $L^\infty(\mathbb{R}^n)$ and is continuous. There exists a constant $C$ such that $\|u\|_{L^\infty} \le C\|u\|_{W^{k,p}}$ for all functions $u$ in $W^{k,p}(\mathbb{R}^n)$.
Proof In order to prove the statements with $L^\infty$ it is enough to prove the inequality for $u \in C^\infty_0(\mathbb{R}^n)$. We have already seen the justification for statements like this. To prove the statement about continuity it is enough to note that in the situation of Lemma 2.1.1 the mapping $\tilde L$ takes values in the closure of $L(Y)$ and that the continuous functions are a closed subspace of $L^\infty(\mathbb{R}^n)$. If we want to prove the inequality for a smooth function $u$ with compact support it is enough to bound $u(0)$. An arbitrary point $x$ of $\mathbb{R}^n$ can be written as $r\omega$ where $\omega$ lies on the unit sphere. Let $g : \mathbb{R} \to \mathbb{R}$ be a $C^\infty$ function with $g(r) = 1$ for $r < 1/2$ and $g(r) = 0$ for $r > 3/4$. For each $\omega$:
$$u(0) = -\int_0^1 \partial/\partial r\,[g(r)u(r, \omega)]\,dr \qquad (27)$$
$$= \frac{(-1)^k}{(k-1)!}\int_0^1 r^{k-n}(\partial/\partial r)^k[g(r)u(r, \omega)]\,r^{n-1}\,dr \qquad (28)$$
where the second line follows by partial integration. If this formula is integrated over $S^{n-1}$ we get
$$|u(0)| \le C\int_B r^{k-n}\,\bigl|(\partial/\partial r)^k[g(r)u(r, \omega)]\bigr|\,dx$$
where $B$ is the unit ball centred at the origin. By Hölder's inequality
$$|u(0)| \le C\,\|r^{k-n}\|_{L^{p'}(B)}\,\|(\partial/\partial r)^k[g(r)u(x)]\|_{L^p(B)}$$
with $1/p + 1/p' = 1$. The operator $(\partial/\partial r)^k$ can be written in the form $\sum_{|\alpha|\le k} A_\alpha D^\alpha$ where the $A_\alpha$ are bounded. Hence the second factor can be bounded by $\|u\|_{W^{k,p}}$. The first factor is finite under the assumption $kp > n$ of the theorem.
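The finiteness of the first factor can be made explicit by a short computation in polar coordinates (a supplementary calculation in the notation above):

```latex
\left\|r^{k-n}\right\|_{L^{p'}(B)}^{p'}
  = \int_B r^{(k-n)p'}\,dx
  = c_n \int_0^1 r^{(k-n)p'+n-1}\,dr < \infty
\;\Longleftrightarrow\; (k-n)p' + n > 0
\;\Longleftrightarrow\; k > n\Bigl(1-\tfrac{1}{p'}\Bigr) = \tfrac{n}{p}
\;\Longleftrightarrow\; kp > n .
```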
2.2 Moser inequalities
The aim is now to prove the Moser inequalities. Before we can do that we need the Gagliardo-Nirenberg inequalities.
Lemma 2.2.1 Let $j$ be an integer with $1 \le j \le n$. Let $k > 1$ be a real number and $1 \le p \le k$. We define $q_1 = 2k/(p+1)$ and $q_2 = 2k/(p-1)$. If $u \in L^{q_2}(\mathbb{R}^n) \cap W^{2,q_1}(\mathbb{R}^n)$ then $\partial_j u \in L^{2k/p}$ and there exists a constant $C > 0$ such that the following inequality holds
$$\|\partial_j u\|^2_{L^{2k/p}} \le C\|u\|_{L^{q_2}}\|\partial_j^2 u\|_{L^{q_1}}$$
Proof If $v \in C^\infty_0(\mathbb{R}^n)$ and $q \ge 2$ then the function $v|v|^{q-2}$ is in $C^1_0(\mathbb{R}^n)$ and
$$\partial_j(v|v|^{q-2}) = (q-1)(\partial_j v)|v|^{q-2}$$
Let $v = \partial_j u$. Then
$$|\partial_j u|^q = \partial_j\bigl(u\,\partial_j u\,|\partial_j u|^{q-2}\bigr) - (q-1)\,u\,\partial_j^2 u\,|\partial_j u|^{q-2}$$
We now integrate this equation over $\mathbb{R}^n$ and use the generalized Hölder inequality to estimate the second term. (The first term on the right hand side has integral zero.) The result is
$$\|\partial_j u\|^q_{L^q} \le (q-1)\,\|u\|_{L^{q_2}}\,\|\partial_j^2 u\|_{L^{q_1}}\,\|\partial_j u\|^{q-2}_{L^q}$$
where $q = 2k/p$. It only remains to divide both sides by $\|\partial_j u\|^{q-2}_{L^q}$.
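The simplest instance of this integration-by-parts argument is the case $q = 2$, where the generalized Hölder inequality reduces to the Cauchy-Schwarz inequality: $\|u'\|^2_{L^2} = -\int u\,u'' \le \|u\|_{L^2}\|u''\|_{L^2}$. This can be checked numerically; a rapidly decaying Gaussian stands in for a function of compact support:

```python
import numpy as np

# check ||u'||_L2^2 = -int u u''  and  ||u'||_L2^2 <= ||u||_L2 ||u''||_L2
# for u(x) = exp(-x**2), using the analytic derivatives
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
u = np.exp(-x**2)
du = -2.0 * x * np.exp(-x**2)              # u'
ddu = (4.0 * x**2 - 2.0) * np.exp(-x**2)   # u''

lhs = np.sum(du**2) * dx
by_parts = -np.sum(u * ddu) * dx
rhs = np.sqrt(np.sum(u**2) * dx) * np.sqrt(np.sum(ddu**2) * dx)
```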
Next this inequality is applied to $D^{l-1}u$. It follows that
$$\|D^l u\|^2_{L^{2k/p}} \le C\|D^{l-1}u\|_{L^{q_2}}\|D^{l+1}u\|_{L^{q_1}}$$
Here $p$ lies in the interval $[1, k]$ and $l \ge 1$ is an integer. Now we use the elementary inequality that for nonnegative real numbers $a$ and $b$ and $\epsilon > 0$ arbitrary $\sqrt{ab} \le \epsilon a + (1/4\epsilon)b$, with the result that
$$\|D^l u\|_{L^{2k/p}} \le C(\epsilon\|D^{l-1}u\|_{L^{2k/(p-1)}} + \epsilon^{-1}\|D^{l+1}u\|_{L^{2k/(p+1)}})$$
If $p \in [2, k]$ and $l \ge 2$ we get
$$\|D^{l-1}u\|_{L^{2k/(p-1)}} \le C(\epsilon_1\|D^{l-2}u\|_{L^{2k/(p-2)}} + \epsilon_1^{-1}\|D^l u\|_{L^{2k/p}})$$
These two inequalities can be combined, with $\epsilon_1$ fixed and small enough, to give
$$\|D^l u\|_{L^{2k/p}} \le C(\epsilon\|D^{l-2}u\|_{L^{2k/(p-2)}} + C(\epsilon)\|D^{l+1}u\|_{L^{2k/(p+1)}})$$
After further steps of this kind we get
$$\|D^l u\|_{L^{2k/p}} \le C(\epsilon\|D^{l-j}u\|_{L^{2k/(p-j)}} + C(\epsilon)\|D^{l+1}u\|_{L^{2k/(p+1)}})$$
for $j \le p \le k$ and $l \ge j$. We can also substitute for the second term. This leads to the following result:

Lemma 2.2.2 If $j \le p \le k + 1 - m$ and $l \ge j$ then for $\epsilon$ small enough the following estimate holds
$$\|D^l u\|_{L^{2k/p}} \le C(\epsilon\|D^{l-j}u\|_{L^{2k/(p-j)}} + C(\epsilon)\|D^{l+m}u\|_{L^{2k/(p+m)}})$$
The whole content of this statement is already given by the special case $l = j$, namely
$$\|D^l u\|_{L^{2k/p}} \le C(\epsilon\|u\|_{L^{2k/(p-l)}} + C(\epsilon)\|D^{l+m}u\|_{L^{2k/(p+m)}})$$
A further specialization of this inequality will be of interest in what follows. This is the case $p + m = k$:
$$\|D^l u\|_{L^{2k/p}} \le C(\epsilon\|u\|_{L^{2k/(p-l)}} + C(\epsilon)\|D^{k+l-p}u\|_{L^2})$$
In the inequalities which we have proved the right hand side is always a sum. Now the sum will be replaced by a product.
Lemma 2.2.3 Let $l$, $\mu$ and $m$ be nonnegative integers with $l$ not greater than the maximum of $\mu$ and $m$ and let $q$, $r$ and $\rho$ be real numbers which belong to the interval $[1, \infty]$. We define
$$\alpha = \frac{n}{q} - \frac{n}{r} + \mu - l, \qquad \beta = -\frac{n}{q} + \frac{n}{\rho} - m + l$$
and assume that neither $\alpha$ nor $\beta$ vanishes. If the inequality
$$\|D^l u\|_{L^q} \le C_1\|D^\mu u\|_{L^r} + C_2\|D^m u\|_{L^\rho}$$
holds for all $u \in C^\infty_0(\mathbb{R}^n)$ then
$$\|D^l u\|_{L^q} \le (C_1 + C_2)\|D^\mu u\|^{\beta/(\alpha+\beta)}_{L^r}\|D^m u\|^{\alpha/(\alpha+\beta)}_{L^\rho}$$
If the first inequality holds then $\alpha$ and $\beta$ have the same sign.
We write the first inequality schematically as $Q \le C_1 R + C_2 P$. Now we replace $u(x)$ by $u(sx)$ in the inequality and get
$$s^{l-n/q}Q \le C_1 s^{\mu-n/r}R + C_2 s^{m-n/\rho}P$$
for all $s > 0$. Dividing both sides by $s^{l-n/q}$ gives $Q \le C_1 s^\alpha R + C_2 s^{-\beta}P$. If $\alpha$ and $\beta$ had opposite signs then it would be possible to let $s$ tend to either zero or infinity and conclude that $Q = 0$, a contradiction. Thus $\alpha$ and $\beta$ have the same sign. Now choose $s = (P/R)^{1/(\alpha+\beta)}$. Then we get the desired inequality. (It should be noted that the choice of $s$ is such that the two terms on the right hand side are equal.)
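The parenthetical remark can be checked by a two-line computation; the exponents and the values of $R$ and $P$ below are arbitrary test values:

```python
# with s = (P/R)**(1/(alpha+beta)) the two terms s**alpha * R and
# s**(-beta) * P coincide; alpha, beta, R, P are arbitrary test values
alpha, beta = 0.75, 1.5
R, P = 2.0, 5.0
s = (P / R) ** (1.0 / (alpha + beta))
term1 = s**alpha * R
term2 = s**(-beta) * P
```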
If we apply the result of Lemma 2.2.3 to the special case of the inequality
which was proved in Lemma 2.2.2 then the following theorem is obtained:
Theorem 2.2.1 (Gagliardo-Nirenberg) Let $l$, $p$ and $k$ be positive integers with $l \le p \le k - 1$. Then
$$\|D^l u\|_{L^{2k/p}} \le C\|u\|^{(k-p)/(k+l-p)}_{L^{2k/(p-l)}}\|D^{k+l-p}u\|^{l/(k+l-p)}_{L^2}$$
In particular if $l < k$ we can set $p = l$ and we get
$$\|D^l u\|_{L^{2k/l}} \le C\|u\|^{1-l/k}_{L^\infty}\|D^k u\|^{l/k}_{L^2}$$
The Gagliardo-Nirenberg inequalities will now be used to show that under suitable circumstances multiplication and composition with smooth functions define mappings from Sobolev spaces (of type $L^2$) to themselves.
Lemma 2.2.4 If $\beta$ and $\gamma$ are multiindices with $|\beta| + |\gamma| = k$ then there exists a constant $C > 0$ such that for all $f$ and $g$ in $C^\infty_0(\mathbb{R}^n)$
$$\|(D^\beta f)(D^\gamma g)\|_{L^2} \le C(\|f\|_{L^\infty}\|D^k g\|_{L^2} + \|D^k f\|_{L^2}\|g\|_{L^\infty})$$
Proof The Hölder inequality implies that
$$\|(D^\beta f)(D^\gamma g)\|_{L^2} \le \|D^\beta f\|_{L^{2k/l}}\|D^\gamma g\|_{L^{2k/m}}$$
where $l = |\beta|$ and $m = |\gamma|$. If we apply the Gagliardo-Nirenberg inequality to the terms on the right hand side we get
$$\|(D^\beta f)(D^\gamma g)\|_{L^2} \le C\|f\|^{1-l/k}_{L^\infty}\|D^k f\|^{l/k}_{L^2}\|g\|^{1-m/k}_{L^\infty}\|D^k g\|^{m/k}_{L^2} \qquad (29)$$
$$= C(\|f\|_{L^\infty}\|D^k g\|_{L^2})^{m/k}(\|D^k f\|_{L^2}\|g\|_{L^\infty})^{l/k} \qquad (30)$$
The result now follows from the inequality $a^{1/p}b^{1/q} \le p^{-1}a + q^{-1}b$ for positive real numbers $a$, $b$ and $p$, $q$ in the interval $[1, \infty]$ with $p^{-1} + q^{-1} = 1$, which is essentially Young's inequality.
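The Young inequality used in the last step can be verified on random samples; the exponent $p$ below is an arbitrary choice:

```python
import numpy as np

# check a**(1/p) * b**(1/q) <= a/p + b/q with 1/p + 1/q = 1
rng = np.random.default_rng(0)
a = rng.uniform(0.1, 10.0, 1000)
b = rng.uniform(0.1, 10.0, 1000)
p = 4.0
q = p / (p - 1.0)
lhs = a ** (1.0 / p) * b ** (1.0 / q)
rhs = a / p + b / q
ok = bool(np.all(lhs <= rhs + 1e-12))
```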
With this result it is possible to prove the ﬁrst two Moser inequalities.
Theorem 2.2.2 (Moser) There is a constant $C > 0$ such that the inequality
$$\|D^\alpha(fg)\|_{L^2} \le C(\|f\|_{L^\infty}\|D^s g\|_{L^2} + \|D^s f\|_{L^2}\|g\|_{L^\infty})$$
holds for all $f, g \in H^s(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$ where $s = |\alpha|$. In particular $D^\alpha(fg)$ belongs to $L^2(\mathbb{R}^n)$. There is also a constant $C > 0$ such that the inequality
$$\|D^\alpha(fg) - fD^\alpha g\|_{L^2} \le C(\|D^s f\|_{L^2}\|g\|_{L^\infty} + \|\nabla f\|_{L^\infty}\|D^{s-1}g\|_{L^2})$$
holds for all $f \in H^s(\mathbb{R}^n) \cap W^{1,\infty}(\mathbb{R}^n)$ and $g \in H^{s-1}(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$.
Proof The proof will only be given for the case $f, g \in C^\infty_0$. We have
$$D^\alpha(fg) = \sum_{\beta+\gamma=\alpha}\binom{\alpha}{\beta}D^\beta f\,D^\gamma g$$
Each term on the right hand side can be estimated by the same expression using Lemma 2.2.4 and this gives the first inequality of the theorem. Next,
$$D^\alpha(fg) - fD^\alpha g = \sum_{\beta+\gamma=\alpha,\,|\beta|>0}\binom{\alpha}{\beta}D^\beta f\,D^\gamma g$$
The last expression is a linear combination of terms of the form $(D^\beta\partial_i f)(D^\gamma g)$ with $|\beta + \gamma| = s - 1$. Thus in order to prove the second inequality it is enough to apply Lemma 2.2.4 with $f$ replaced by $\partial_i f$.
To prove the third Moser inequality we need the following generalization of
Lemma 2.2.4.
Lemma 2.2.5 If $\beta_i$, $i = 1, \ldots, s$, are multiindices with $\sum_i|\beta_i| = k$ then
$$\|(D^{\beta_1}f_1)\cdots(D^{\beta_s}f_s)\|_{L^2} \le C(\|f_1\|_{L^\infty}\|f_2\|_{L^\infty}\cdots\|D^k f_s\|_{L^2} + \ldots + \|D^k f_1\|_{L^2}\|f_2\|_{L^\infty}\cdots\|f_s\|_{L^\infty}) \qquad (31)$$
Proof The proof is very similar to that of Lemma 2.2.4. First the generalized Hölder inequality is used and then the Gagliardo-Nirenberg inequality.
Theorem 2.2.3 (Moser) Let $F : \mathbb{R} \to \mathbb{R}$ be a $C^\infty$ function with $F(0) = 0$. There is a constant $C$, depending only on $\|f\|_{L^\infty}$, such that the inequality
$$\|D^\alpha(F(f))\|_{L^2} \le C(\|f\|_{L^\infty})\|D^s f\|_{L^2}$$
holds for all $f \in H^s(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$, where $s = |\alpha|$. In particular $D^\alpha(F(f))$ belongs to $L^2(\mathbb{R}^n)$.
Proof We have
$$D^\alpha(F(f)) = \sum_{r\le s}\ \sum_{\beta_1+\ldots+\beta_r=\alpha} C_\beta\,D^{\beta_1}f\cdots D^{\beta_r}f\,(d^r F/df^r)(f)$$
Applying Lemma 2.2.5 to this gives the result.
An important property of the Moser inequalities is the following. The right hand side contains expressions which may contain many derivatives (the $L^2$ norms) and ones which contain no more than one derivative (the $L^\infty$ norms). The expressions of the first type occur linearly. Estimates of this kind are called 'tame'. We will see later that they are very important in order to get a sharp continuation criterion for symmetric hyperbolic systems. If it is only desired to prove local existence the weaker consequences of the Moser inequalities given below are sufficient. It is, however, the case that the Moser inequalities make the proof of the existence theorem significantly cleaner and more efficient.
Lemma 2.2.6 Let $s > n/2$. There exists a constant $C > 0$ such that the inequality
$$\|fg\|_{H^s} \le C\|f\|_{H^s}\|g\|_{H^s}$$
holds for all $f, g \in H^s(\mathbb{R}^n)$. In particular $fg$ belongs to $H^s(\mathbb{R}^n)$. If $F$ is as in the assumptions of Theorem 2.2.3 and $C_1 > 0$ is a constant then there exists a constant $C_2 > 0$ such that $\|f\|_{H^s} \le C_1$ implies that $\|F(f)\|_{H^s} \le C_2$.
Proof The first part follows immediately from Theorem 2.2.2 by using the fact that, as a consequence of the Sobolev inequality, the $L^\infty$ norm can be bounded in terms of the $H^s$ norm. The second part follows in a similar way from Theorem 2.2.3.
2.3 The Banach-Alaoglu theorem

In the proof of the existence theorem for symmetric hyperbolic systems which will be given later we need to apply the Banach-Alaoglu theorem, and so this theorem will now be discussed briefly. If $X$ is a Banach space let $X'$ be the dual space and $X''$ the double dual, i.e. the dual space of $X'$. There is a natural embedding $i$ of $X$ into $X''$ given by $i(u)(\phi) = \phi(u)$. The mapping $i$ is in general not surjective. Consider the weak topology on $X'$. A sequence $\phi_n$ in $X'$ converges weakly if the sequence $\omega(\phi_n)$ converges for an arbitrary $\omega \in X''$. It is possible to introduce a weaker topology. For $u \in X$ let $\omega_u = i(u)$. A sequence $\phi_n$ in $X'$ is said to converge weak* if for any $u \in X$ the sequence $\omega_u(\phi_n)$ converges. (The terminology comes from the fact that the dual space is often denoted by $X^*$ instead of $X'$.) There is a topology on $X'$, the weak* topology, which gives this notion of convergence for sequences. If the space $X$ is reflexive, so that $X$ can be identified with $X''$, the weak and weak* topologies coincide. This is, however, in general not the case. An example of this which is important for what follows is that of the $L^\infty$ spaces. The consequence of the theorem of Banach-Alaoglu which we need will now be stated. For brevity we refer to it as the Banach-Alaoglu theorem. A discussion of the relation of this statement to the original statement of the Banach-Alaoglu theorem can be found in the book of Rudin [3].
Theorem 3.1.1 (Banach-Alaoglu) Let $X$ be a separable Banach space. Each bounded sequence in $X'$ has a weak* convergent subsequence.
In the case that $X$ is the separable space $L^1(\mathbb{R}^n)$, for instance, this theorem implies that each bounded sequence in the dual space $X' = L^\infty(\mathbb{R}^n)$ has a weak* convergent subsequence.
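A standard concrete example (an illustration, not from the notes): the sequence $\phi_n(x) = \sin(nx)$ is bounded in $L^\infty(\mathbb{R})$ and has no subsequence converging in norm, but it converges weak* to $0$, since $\int \phi_n g \to 0$ for every $g \in L^1$ by the Riemann-Lebesgue lemma. Numerically, with $g(x) = e^{-(x-1)^2}$:

```python
import numpy as np

# the pairings int sin(n x) g(x) dx tend to zero as n grows
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
g = np.exp(-(x - 1.0) ** 2)

def pairing(n):
    return np.sum(np.sin(n * x) * g) * dx

vals = [abs(pairing(n)) for n in (1, 5, 25, 125)]
```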
3 Local existence for linear symmetric hyperbolic systems

3.1 The problem

In this section we discuss existence and uniqueness in the initial value problem for a linear symmetric hyperbolic system with $C^\infty$ coefficients. We restrict ourselves to making some general remarks on this subject and no complete existence proof will be presented. The student interested in seeing such a proof is referred to [1] or to the previous version of these lecture notes (in German) where this topic was covered. For a linear system there is no need to make a distinction between the question of existence of solutions which are local in time or global in time. Consider the system
$$A^0(t, x)\partial_t u + \sum_{i=1}^n A^i(t, x)\partial_i u + B_1(t, x)u + B_2(t, x) = 0 \qquad (32)$$
The objects $A^0$, $A^i$ and $B_1$ are $C^\infty$ mappings from $\mathbb{R} \times \mathbb{R}^n$ to $M_k(\mathbb{R})$ and $B_2$ is a $C^\infty$ mapping from $\mathbb{R} \times \mathbb{R}^n$ to $\mathbb{R}^k$. The aim is to find a solution $u$ which is a $C^\infty$ mapping from $\mathbb{R} \times \mathbb{R}^n$ to $\mathbb{R}^k$. The interest of this theorem for this course is as a preparatory step for proving local existence and uniqueness for quasilinear hyperbolic systems.
3.2 The domain of dependence

When studying the question of local existence for hyperbolic equations it is possible to localize in space. This is done using the concept of the domain of dependence or that of the domain of influence. These ideas are also very important for the nonlinear case. Here it is necessary to give a warning. WARNING: Some authors interchange the concepts 'domain of dependence' and 'domain of influence' compared to the usage here.
Definition 1 Let a solution of a symmetric hyperbolic system be given on $I \times \mathbb{R}^n$. A domain of dependence for a point $(t_0, x_0) \in I \times \mathbb{R}^n$ is a subset $G$ of the initial hypersurface $t = 0$ with the property that $v(t_0, x_0) = u(t_0, x_0)$ for any smooth solution $v$ of the system which agrees with the solution $u$ on $G$.
With this definition the domain of dependence is not unique. For instance, $G = \mathbb{R}^n$ is always a possibility. Smaller domains of dependence are more interesting, but there does not exist a theorem which ensures the existence of a minimal domain of dependence for a given equation and solution. It is possible to define the domain of dependence of a subset $E$ of $I \times \mathbb{R}^n$ in a similar way as a subset $G$ of the hypersurface $t = 0$ which is a domain of dependence for each point of $E$. A kind of inverse definition is the following:
Definition 2 Let a solution of a symmetric hyperbolic system on $I \times \mathbb{R}^n$ be given. The domain of influence of a subset $G$ of the initial hypersurface $t = 0$ is the set of all points $(t, x)$ with the property that $G$ is a domain of dependence for $(t, x)$.

The domain of influence is uniquely defined but this does not mean that it is easy to determine it for a given system and a given solution.
Statements about the domain of dependence for symmetric hyperbolic systems can be obtained by the use of energy estimates. Let a classical (i.e. $C^1$) solution of a linear symmetric hyperbolic system be given. Let $S_0$ and $S_1$ be the hypersurfaces $t = 0$ and $t = f(x)$ respectively. We suppose that $S_1$ is a spacelike hypersurface. An open subset $G$ of $I \times \mathbb{R}^n$ is said to be a lens-shaped region defined by $S_0$ and $S_1$ if $G$ has compact closure and the boundary $\partial G$ of $G$ is contained in $S_0 \cup S_1$. The equation is
$$Pu = A^0\partial_t u + \sum_{i=1}^n A^i\partial_i u + B_1 u + B_2 = 0 \qquad (33)$$
We do not yet want to fix how differentiable the coefficients $A^0$, $A^i$, $B_1$ and $B_2$ should be. If we write $\partial_0 u = \partial_t u$ then $P$ can be written in the form $Pu = \sum_{i=0}^n A^i\partial_i u + B_1 u + B_2$. For the moment only the homogeneous case $B_2 = 0$ will be considered. Integrating the inner product $\langle Pu, e^{-Kt}u\rangle$ over $G$ for a constant $K$ gives
$$0 = \int_G e^{-Kt}\Bigl\langle u, \sum_{i=0}^n A^i\partial_i u + B_1 u\Bigr\rangle \qquad (34)$$
The integral over the terms which contain $A^i$ can be reexpressed as follows:
$$\int_G e^{-Kt}\Bigl\langle u, \sum_{i=0}^n A^i\partial_i u\Bigr\rangle = \int_G \sum_{i=0}^n \frac{1}{2}\partial_i\bigl(e^{-Kt}\langle u, A^i u\rangle\bigr) - \frac{1}{2}\int_G e^{-Kt}\Bigl\langle u, \sum_{i=0}^n (\partial_i A^i)u\Bigr\rangle + \frac{K}{2}\int_G e^{-Kt}\langle u, A^0 u\rangle \qquad (35)$$
The first term on the right hand side can be transformed into a boundary integral using Stokes' theorem. Let $(\partial G)_- = \partial G \cap S_0$ and $(\partial G)_+ = \partial G \cap S_1$. Then the result is:
$$\int_G \sum_{i=0}^n \partial_i\bigl(e^{-Kt}\langle u, A^i u\rangle\bigr) = \int_{(\partial G)_+} e^{-Kt}\Bigl\langle u, \Bigl(A^0 - \sum_{i=1}^n A^i\partial_i f\Bigr)u\Bigr\rangle - \int_{(\partial G)_-}\langle u, A^0 u\rangle \qquad (36)$$
The equations (34)-(36) combine to give:
$$\int_{(\partial G)_+} e^{-Kt}\Bigl\langle u, \Bigl(A^0 - \sum_{i=1}^n A^i\partial_i f\Bigr)u\Bigr\rangle = \int_{(\partial G)_-}\langle u, A^0 u\rangle + \int_G e^{-Kt}\Bigl\langle u, \sum_{i=0}^n (\partial_i A^i)u\Bigr\rangle - K\int_G e^{-Kt}\langle u, A^0 u\rangle - 2\int_G e^{-Kt}\langle u, B_1 u\rangle$$
$$= \int_{(\partial G)_-}\langle u, A^0 u\rangle + \int_G e^{-Kt}\Bigl\langle u, \Bigl(\sum_{i=0}^n \partial_i A^i - 2B_1 - KA^0\Bigr)u\Bigr\rangle \qquad (37)$$
Suppose that $u = 0$ on $(\partial G)_-$. If $K$ is chosen large enough then the last term in (37) is negative, provided that $u$ does not vanish identically on $G$. Thus the right hand side can be made negative, which is a contradiction since the left hand side is evidently nonnegative. Thus the vanishing of $u$ on $(\partial G)_-$ implies the vanishing of $u$ on $G$, and $(\partial G)_-$ is a domain of dependence for $G$. To justify this argument it is enough if $A^0$ and $A^i$ ($1 \le i \le n$) are $C^1$ and $B_1$ and $B_2$ are continuous.
This argument provides a uniqueness theorem for linear symmetric hyperbolic systems.

Theorem 3.2.1 Let $u$, $v$ be two classical solutions of the linear symmetric hyperbolic system (32) on $\mathbb{R} \times \mathbb{R}^n$ with the same initial datum $u_0$ on $t = 0$. Let $A^0$ and $A^i$ ($1 \le i \le n$) be $C^1$ mappings and $B_1$ and $B_2$ continuous. Then $u = v$ in a neighbourhood of the initial hypersurface. If the matrices $A^i$ are bounded then $u = v$ on all of $\mathbb{R} \times \mathbb{R}^n$.
Proof The function $u - v$ is a solution of the homogeneous system with vanishing initial data. Since a neighbourhood of the initial hypersurface can be covered by lens-shaped regions, $u - v$ vanishes on this neighbourhood by the above argument. This gives the first statement of the theorem. Now suppose that the $A^i$ are bounded for $i = 1, \ldots, n$. Then in order to show that the hypersurface $S_1$ in the definition of a lens-shaped region is spacelike it suffices to show that $|\nabla f|$ is smaller than a certain positive constant. By taking a lens-shaped region fulfilling this condition and translating it in spacelike directions it can be seen that in this case the neighbourhood of the initial hypersurface where uniqueness holds can be chosen to be of the form $I' \times \mathbb{R}^n$. Here $I'$ is an open interval containing zero. Consider now the supremum of all real numbers $T$ with the property that $u = v$ for $-T < t < T$. It follows from what has just been shown that $T > 0$. We want to show that $T = \infty$. If it is assumed that $T$ is finite then $u = v$ for $-T \le t \le T$ because of the continuity of the functions. By taking $t = T$ or $t = -T$ as a new initial hypersurface we see that $u = v$ on the interval $(-T - \epsilon, T + \epsilon)$ for some $\epsilon > 0$, which contradicts the definition of $T$. Thus $T = \infty$ and the theorem is proved.
Next it will be shown that if an initial datum $u_0$ for a solution $u$ of a linear symmetric hyperbolic system with $A^i$ bounded for $i = 1, \ldots, n$ has compact support then the restriction of $u$ to any hypersurface of the form $t = t_0$ also has compact support. To be concrete we use lens-shaped regions where $f(x) = \alpha - \beta|x - x_0|^2$ and $\alpha, \beta > 0$. It is clear that a function of this type defines a lens-shaped region as soon as the hypersurface $t = f(x)$ is spacelike. Under the condition that the $A^i$ are bounded for $i = 1, \ldots, n$ this property holds when $\beta$ is sufficiently small, say $\beta \le \beta_0$. Suppose now that the support of $u_0$ lies in the ball of radius $R$ about the origin. The base of such a lens-shaped region on $t = 0$ is the ball of radius $(\alpha/\beta)^{1/2}$ about $x_0$. Let $x_0 \in \mathbb{R}^n$ be a point which is outside the ball of radius $R + (t_0/\beta_0)^{1/2}$ about the origin. Considering the lens-shaped region with $\alpha = t_0$ and $\beta = \beta_0$ shows that the solution vanishes at the point $(t_0, x_0)$. Hence the support of the restriction of $u$ to $t = t_0$ is contained in the ball of radius $R + (t_0/\beta_0)^{1/2}$. We see that the support of the solution at each later time is compact and we get a rough estimate for how fast the support can spread out.
This estimate for the propagation speed is very rough and we now want to look at the domain of dependence for the wave equation more closely. In order to do this we choose $f(x) = t_0 - (\tau^2 + |x - x_0|^2)^{1/2}$, where $\tau$ is a positive constant. This function $f$ defines a lens-shaped region for each value of $\tau$. Thus $|x - x_0| \le (t_0^2 - \tau^2)^{1/2}$ is always a domain of dependence for the point $(t_0 - \tau, x_0)$. A simple continuity argument in the limit $\tau \to 0$ shows that $|x - x_0| \le t_0$ is a domain of dependence for the point $(t_0, x_0)$. This argument does not depend on the dimension. The question of whether it is possible to find a smaller domain of dependence for the wave equation is much more complicated. For $n$ odd the set $|x - x_0| = t_0$ is a domain of dependence for $(t_0, x_0)$ (Huygens principle) but for $n$ even (for example $n = 2$) this is not the case. The fact that we were able to restrict the domain of dependence for the wave equation as far as we did relies on the fact that we understand the geometry of the characteristics so precisely in that case. The analysis for nonlinear wave equations and wave maps is identical and that for the Maxwell equations in vacuum is very similar. In the case of the Euler equations things are not so easy, since if we follow the sound cone backwards this surface need not remain smooth. This is a general problem with quasilinear equations.
At this point we can already prove a uniqueness theorem for nonlinear symmetric hyperbolic systems. For this we use the following lemma.

Lemma 3.2.1 Let $U$ be an open subset of $\mathbb{R}^k$ and let $F : U \to \mathbb{R}^k$ be a $C^1$ mapping. Then there exists a continuous mapping $M : U \times U \to M_k(\mathbb{R})$ with the property that $F(v) - F(u) = M(u, v)(v - u)$.
Proof Here the lemma will only be proved under the additional assumption that $U$ is convex. Those who are familiar with partitions of unity will recognize that they can be used to pass from this special case to the general case. Let $w(t) = (1 - t)u + tv$. Because $U$ has been assumed to be convex, $w(t) \in U$ for $t \in [0, 1]$ and $F(w(t))$ is defined.
$$F(v) - F(u) = F(w(1)) - F(w(0)) \qquad (38)$$
$$= \int_0^1 (d/dt')F(w(t'))\,dt' \qquad (39)$$
$$= \int_0^1 DF(w(t'))(dw/dt')(t')\,dt' \qquad (40)$$
$$= \int_0^1 DF(w(t'))(v - u)\,dt' \qquad (41)$$
Thus we can choose $M(u, v) = \int_0^1 DF(w(t))\,dt$. This expression is clearly continuous when $F$ is $C^1$.
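The formula for $M(u, v)$ can be tested by quadrature for a concrete $C^1$ mapping; the map $F$ below is an arbitrary example, not taken from the notes:

```python
import numpy as np

# check F(v) - F(u) = M(u, v)(v - u) with M = int_0^1 DF((1-t)u + tv) dt
def F(w):
    return np.array([np.sin(w[0]) + w[1] ** 2, w[0] * w[1]])

def DF(w):
    return np.array([[np.cos(w[0]), 2.0 * w[1]],
                     [w[1], w[0]]])

u = np.array([0.3, -1.2])
v = np.array([1.1, 0.4])

ts = np.linspace(0.0, 1.0, 20001)
dt = ts[1] - ts[0]
Ms = np.array([DF((1.0 - t) * u + t * v) for t in ts])
# trapezoidal rule along the segment w(t) = (1 - t) u + t v
M = (Ms[0] + Ms[-1]) * 0.5 * dt + Ms[1:-1].sum(axis=0) * dt
residual = np.max(np.abs(F(v) - F(u) - M @ (v - u)))
```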
Theorem 3.2.2 Let $u$, $v$ be two classical solutions of the quasilinear symmetric hyperbolic system (18) on $I \times \mathbb{R}^n$ with the same initial datum $u_0$ at $t = 0$, where the $A^i$ and $B$ are $C^1$. Then $u = v$ in a neighbourhood of the initial hypersurface. If the matrices $A^i(u)$ are bounded then $u = v$ on the whole of $I \times \mathbb{R}^n$.
Proof The equations $\sum_{i=0}^n A^i(u)\partial_i u + B(u) = 0$ and $\sum_{i=0}^n A^i(v)\partial_i v + B(v) = 0$ hold. Thus
$$\sum_{i=0}^n [A^i(u)\partial_i(u - v) + (A^i(u) - A^i(v))\partial_i v] + B(u) - B(v) = 0 \qquad (42)$$
Using Lemma 3.2.1 we can rewrite this equation in the following form:
$$\sum_{i=0}^n A^i(u)\partial_i(u - v) + \Bigl[\sum_{i=0}^n \tilde A^i(u, v)(\partial_i v) + \tilde B(u, v)\Bigr](u - v) = 0 \qquad (43)$$
This is a linear homogeneous symmetric hyperbolic system for $u - v$. Since $u$ and $v$ are classical solutions the quantities $A^i(u)$ are $C^1$ and the other coefficients are continuous. Thus the desired result follows from Theorem 3.2.1. (Strictly speaking, we need the analogue of Theorem 3.2.1 where $\mathbb{R}$ is replaced by an interval $I$, but this analogue can be proved in exactly the same way.)
In order to prove a local existence theorem (i.e. local in time) for a symmetric hyperbolic system it is enough to do so locally in space, as will now be explained. Let $\phi$ be a smooth function with compact support which satisfies the conditions that $\phi = 1$ for $|x| < 1$, $\phi = 0$ for $|x| > 2$ and $0 \le \phi(x) \le 1$ everywhere in $\mathbb{R}^n$. Let $u_0$ be an initial datum for a symmetric hyperbolic system. For each point $y \in \mathbb{R}^n$ we can consider the initial datum $u_{0,y}(x) = \phi(x)u_0(y + x)$. Suppose that for each choice of $y$ a solution $u_y$ with initial datum $u_{0,y}$ exists on the region $|x| < 1$, $|t| < T(y)$. Then a solution with initial datum $u_0$ exists. Let $y_k$ be a sequence such that the balls with unit radius about the points $y_k$ define a locally finite cover of $\mathbb{R}^n$. First we define $u(t, x)$ on the region $|x - y_k| < 1$, $|t| < T(y_k)$ by $u(t, x) = u_{y_k}(t, x - y_k)$. There is an open neighbourhood of $t = 0$ where the function $u$ is well-defined, since by the uniqueness theorem the different definitions agree on the overlaps. This function is a solution of the system with initial datum $u_0$. This shows that in local existence theorems we can assume without loss of generality that the initial data have compact support.
3.3 Energy estimates

Due to the results of the last section we know that in order to prove local existence theorems for symmetric hyperbolic systems it is enough to work locally in space. Hence without loss of generality we can alter the coefficients of the equation outside a compact set if it makes life easier. Now we want to do this with the equation (32). Let $\phi$ be a smooth function with compact support which has the properties which were required above (cut-off function). By replacing the coefficients $A^i$ ($1 \le i \le n$), $B_1$ and $B_2$ in (32) by $\phi A^i$, $\phi B_1$ and $\phi B_2$ we can assume that these coefficients have compact support. For $A^0$ it is not quite so simple. In that case we replace $A^0$ by $\phi A^0 + (1 - \phi)\mathrm{Id}$. This matrix is positive definite and equal to the identity in a neighbourhood of infinity. Another simplification can be obtained by reducing the problem to the case with zero initial datum. The solution $u$ of the original equation with initial datum $u_0$ can be replaced by the function $v$, where $v(t, x) = u(t, x) - u_0(x)$. This function satisfies the equation
$$A^0\partial_t v + \sum_{i=1}^n A^i\partial_i v + B_1 v + \Bigl[B_2 + \sum_{i=1}^n A^i\partial_i u_0 + B_1 u_0\Bigr] = 0 \qquad (44)$$
with vanishing initial data for $t = 0$, and this equation has the same form as (32). Thus we can assume without loss of generality in the existence proof that $u_0 = 0$.
Lemma 3.3.1 Let $u$ be a smooth solution of the linear symmetric hyperbolic system (32) with vanishing initial data whose support lies in the region $|x| < R$ for a constant $R > 0$. If the coefficients $A^0$, $A^i$, $B_1$ and $B_2$ are smooth and if $A^0 - \mathrm{Id}$, $A^i$, $B_1$ and $B_2$ vanish for $|x| > R$ then there exists a constant $C > 0$ such that $\|u(t)\|_{H^s} \le C\sup_{0\le t'\le T}\|B_2(t')\|_{H^s}$.
Proof Since $A^0$ is uniformly positive definite there exists a constant $C_2 > 0$ such that $\langle v, A^0 v\rangle \ge C_2 |v|^2$ for all $v \in \mathbf{R}^k$. It follows that $(\langle v, A^0(t,x)v\rangle)^{1/2}$ defines a norm which is uniformly equivalent to the usual norm on $\mathbf{R}^k$. Now
\begin{align*}
\frac{d}{dt}\int_{\mathbf{R}^n}\langle u, A^0 u\rangle &= 2\int_{\mathbf{R}^n}\langle u, A^0\partial_t u\rangle + \int_{\mathbf{R}^n}\langle u, (\partial_t A^0)u\rangle \tag{45}\\
&= -2\int_{\mathbf{R}^n}\Big\langle u, \sum_{i=1}^n A^i\partial_i u + B_1 u + B_2\Big\rangle + \int_{\mathbf{R}^n}\langle u, (\partial_t A^0)u\rangle \tag{46}
\end{align*}
Now the spatial derivatives on the right hand side can be eliminated by means of a partial integration:
\[
\int_{\mathbf{R}^n}\Big\langle u, \sum_{i=1}^n A^i\partial_i u\Big\rangle = -\frac12\int_{\mathbf{R}^n}\sum_{i=1}^n \langle (\partial_i A^i)u, u\rangle \tag{47}
\]
If (47) is substituted into (46) then the result is
\[
\frac{d}{dt}\int_{\mathbf{R}^n}\langle u, A^0 u\rangle = \int_{\mathbf{R}^n}\Big\langle u, \Big(\partial_t A^0 + \sum_{i=1}^n \partial_i A^i - 2B_1\Big)u - 2B_2\Big\rangle \tag{48}
\]
It follows that
\[
\frac{d}{dt}\int_{\mathbf{R}^n}\langle u, A^0 u\rangle \le C\|u\|_{L^2}^2 + \|B_2\|_{L^2}^2 \tag{49}
\]
If the equivalence of norms which was already mentioned is used then we get the inequality
\[
\frac{d}{dt}\int_{\mathbf{R}^n}\langle u, A^0 u\rangle \le C\int_{\mathbf{R}^n}\langle u, A^0 u\rangle + \|B_2\|_{L^2}^2 \tag{50}
\]
It follows by integration in time that
\[
\|u(t)\|_{L^2} \le C \sup_{0\le t'\le t}\|B_2(t')\|_{L^2} \tag{51}
\]
To get the inequality for higher Sobolev norms we must derive an equation for $D^\alpha u$, where $\alpha$ is an arbitrary multiindex. The equation is:
\begin{align*}
A^0\partial_t(D^\alpha u) + \sum_{i=1}^n A^i\partial_i(D^\alpha u) + [D^\alpha(A^0\partial_t u) - A^0\partial_t(D^\alpha u)]&\\
{}+ \sum_{i=1}^n [D^\alpha(A^i\partial_i u) - A^i\partial_i(D^\alpha u)] + D^\alpha(B_1 u) + D^\alpha B_2 = 0& \tag{52}
\end{align*}
This equation for $D^\alpha u$ has a similar form to that for $u$, with the only difference that $B_2$ has been replaced by a more complicated expression. The Moser inequalities are exactly what is needed in order to estimate these terms. At the moment, however, a cruder estimate will be used:
\[
\|D^\alpha(A^i\partial_i u) - A^i D^\alpha\partial_i u\|_{L^2} \le C\|u\|_{H^s} \tag{53}
\]
and
\[
\|D^\alpha(B_1 u)\|_{L^2} \le C\|u\|_{H^s} \tag{54}
\]
It is also necessary to substitute the equation (32) into the term with $A^0$. First we get
\[
\|D^\alpha(A^0\partial_t u) - A^0 D^\alpha(\partial_t u)\|_{L^2} \le C\|\partial_t u\|_{H^{s-1}} \tag{55}
\]
The equation then gives
\[
\|\partial_t u\|_{H^{s-1}} \le C(\|u\|_{H^s} + \|B_2\|_{H^s}) \tag{56}
\]
This leads to the inequality
\[
\frac{d}{dt}\int_{\mathbf{R}^n}\langle D^\alpha u, A^0 D^\alpha u\rangle \le C\|u\|_{H^s}^2 + \|B_2\|_{H^s}^2 \tag{57}
\]
with $s = |\alpha|$. Summing these inequalities for the derivatives up to a certain order and proceeding as in the case $s = 0$ gives the statement of the lemma.
3.4 Existence for linear symmetric hyperbolic systems
This section is concerned with a proof of existence and uniqueness in the initial value problem for a linear symmetric hyperbolic system with $C^\infty$ data. It will only be sketched here, with the heavy calculations being omitted. The proof proceeds by discretizing the equation. First some notation will be introduced. Consider a time interval $[0,T]$ with $T$ a fixed real number and define a grid on $[0,T]\times\mathbf{R}^n$ as follows. Let $h$ and $k$ be two positive real numbers with the property that $T = kl$ for an integer $l$. Let $\Sigma$ be the set of all points of the form
\[
x = (x^1,\ldots,x^n) = (\alpha_1 h,\ldots,\alpha_n h),\qquad t = mk,\quad 0\le t\le T \tag{58}
\]
Here $\alpha_1,\ldots,\alpha_n,m$ are integers. The $\alpha_j$ are collected into a multiindex $\tilde\alpha$, where the tilde is supposed to show that the indices are allowed to take on all integer values. Then $\Sigma$ consists of the points
\[
x = \tilde\alpha h,\qquad t = mk \text{ with } 0\le m\le T/k \tag{59}
\]
Define operators
\[
E_j u(t, x^1,\ldots,x^n) = u(t, x^1,\ldots,x^j + h,\ldots,x^n);\qquad j = 1,\ldots,n \tag{60}
\]
and
\[
E_0 u(t, x^1,\ldots,x^n) = u(t+k, x^1,\ldots,x^n) \tag{61}
\]
We write symbolically
\[
E^{\tilde\alpha}u(t,x) = u(t, x + \tilde\alpha h) \tag{62}
\]
Next we define the following operators:
\[
\delta_j = h^{-1}(E_j - 1);\qquad j = 1,\ldots,n \tag{63}
\]
and
\[
\delta_0 = k^{-1}(E_0 - 1) \tag{64}
\]
All these operators commute with each other. For $C^2$ functions Taylor's theorem implies that
\[
\delta_j u(t,x) = \partial_j u(t,x) + O(h),\qquad \delta_0 u(t,x) = \partial_t u(t,x) + O(k) \tag{65}
\]
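The consistency statement (65) is easy to check numerically: halving $h$ should roughly halve the error of the difference quotient. An illustrative check, using $\sin$ as the test function:

```python
import math

def delta(u, t, h):
    """Forward difference operator h^{-1}(E - 1) from (63)."""
    return (u(t + h) - u(t)) / h

# Error against the exact derivative of sin at t = 1 for successively halved h.
errors = [abs(delta(math.sin, 1.0, h) - math.cos(1.0)) for h in (0.1, 0.05, 0.025)]
# Each halving of h roughly halves the error, confirming the O(h) term in (65).
```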
If the equation (32) is to be discretized it seems at first sight natural to consider the difference equation
\[
A^0\delta_0 v + \sum_{j=1}^n A^j\delta_j v + B_1 v + B_2 = 0 \tag{66}
\]
Unfortunately this expression has stability problems, so that a more complicated form must be used. A suitable choice is the following equation (which was introduced by Friedrichs):
\[
k^{-1}A^0\Big[E_0 - (2n)^{-1}\sum_{j=1}^n (E_j + E_j^{-1})\Big]v + (2h)^{-1}\sum_{j=1}^n A^j(E_j - E_j^{-1})v + B_1 v + B_2 = 0 \tag{67}
\]
This can be written as $\Lambda v = -B_2$ for a suitable linear operator $\Lambda$. The equation (67) is supposed to hold for all values of $(t,x)\in\Sigma$ such that $0\le t\le T-k$. Since $A^0$ is invertible, (67) can be solved for $E_0 v = v(t+k)$. Thus it is possible to calculate the values of $v$ at time $t+k$ when the values at time $t$ are known. Thus the initial datum $v(0,x) = 0$ defines a unique solution of (67). Hence it is unproblematic to obtain an existence theorem for the set of difference equations.
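In the simplest case $n = 1$, $A^0 = 1$, $A^1 = a$ constant and $B_1 = B_2 = 0$, solving (67) for $E_0 v$ gives the classical Lax–Friedrichs update. The sketch below (the grid, speed and initial bump are illustrative choices; periodic boundary conditions stand in for compact support) transports a bump and remains stable under the usual condition $|a|k/h \le 1$:

```python
import numpy as np

def lax_friedrichs_step(v, a, h, k):
    """One step of the scheme (67) for v_t + a v_x = 0 (n = 1, A^0 = 1):
        v(t+k, x) = (v(t, x+h) + v(t, x-h)) / 2
                    - (a k / (2h)) * (v(t, x+h) - v(t, x-h))
    """
    vp = np.roll(v, -1)  # v(t, x + h), periodic
    vm = np.roll(v, 1)   # v(t, x - h), periodic
    return 0.5 * (vp + vm) - a * k / (2.0 * h) * (vp - vm)

h, k, a = 0.01, 0.005, 1.0           # k/h = 0.5 satisfies the CFL condition
x = np.arange(0.0, 1.0, h)
v = np.exp(-200.0 * (x - 0.3) ** 2)  # smooth bump centred at x = 0.3
for _ in range(80):                  # evolve to t = 80 * k = 0.4
    v = lax_friedrichs_step(v, a, h, k)
# The bump is transported to about x = 0.7, smeared slightly by the
# numerical diffusion hidden in the averaging term of (67).
```

The averaging over neighbouring grid points is exactly what cures the instability of the naive scheme (66).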
The idea now is to show that the solution of the difference equation converges to a solution of the corresponding differential equation when the discretization parameters are allowed to tend to zero in a suitable way. Convergence is proved by using discrete versions of energy estimates and Sobolev inequalities. The grid is refined by setting $h = 2^{-q}$, $k = 2^{-q}\lambda$ for a positive integer $q$. Let $\Sigma_q$ be the grid for the given choice of $q$ and $v_q$ the solution of the discretized equation on this grid. For $q' \le q$ we have $\Sigma_{q'} \subset \Sigma_q$. The union of the subsets $\Sigma_q$ is a countable subset $\sigma$ of $[0,T]\times\mathbf{R}^n$. It can be shown that the functions $v_q$ and their difference quotients converge uniformly along a subsequence to limits. The limit of $v_q$ itself is a candidate for a solution of the differential equation. In fact it can be proved that it is a solution with the correct initial datum. This results in the following theorem.
Theorem 3.5.1 Let $u_0$ be a smooth initial datum with compact support for the linear symmetric hyperbolic system (32). Let $A^0 - \mathrm{Id}$, $A^i$, $B_1$ and $B_2$ be smooth with compact support. Then there exists a unique classical solution with the given initial datum on the time interval $[0,T]$.

This theorem is not exactly what we want, because it only gives a classical solution which is local in time, while there is in fact a global solution. The additional conclusions can be obtained with some further work.
Theorem 3.5.2 Let $u_0$ be a smooth initial datum with compact support for the linear symmetric hyperbolic system (32). Suppose that $A^0 - \mathrm{Id}$, $A^i$, $B_1$ and $B_2$ are smooth with compact support on each finite time interval. Then there exists a unique smooth solution with the prescribed initial datum on the interval $(-\infty,\infty)$.
Proof (sketch) The methods which lead to the existence of a classical solution on an interval $[0,T_1]$ can be generalized so as to show that a $C^2$ solution exists on an interval $[0,T_2]$. The length of this interval only depends on the size of the initial datum in the space $C^l$ for a certain integer $l$. The restriction on the time of existence is, however, only a consequence of the method of proof which was used and not an intrinsic feature of the problem. We can replace the datum $u_0$ and the inhomogeneous term $B_2$ by $cu_0$ and $cB_2$, and then the solution is replaced by $cu$ as a consequence of the linearity of the equation. By means of this transformation the case of general initial data and a general inhomogeneous term can be reduced to the case of small initial data and a small inhomogeneous term. If we differentiate the equation with respect to $t$ and $x^j$ and introduce new variables $w = \partial_t u$ and $u_j = \partial_j u$ we get a new symmetric hyperbolic system. This system differs from the original one only through the inhomogeneous term. Data for the new system can be obtained by differentiating the original data and substituting into the equation at $t = 0$. By Theorem 3.5.1 the new system has a classical solution on an interval $[0,T_2]$. The quantities $w - \partial_t u$ and $u_i - \partial_i u$ are classical solutions of a homogeneous linear symmetric hyperbolic system with vanishing initial data. Thus they vanish everywhere. It follows that the original equation has a solution of class $C^3$ on the interval $[0,T_2]$. It can be shown inductively that the solution on $[0,T_2]$ is $C^k$ for each finite value of $k$. Thus this solution is $C^\infty$. If we choose a finite time interval then by assumption the coefficients of the equation are uniformly bounded on this interval. For this reason we can choose the same time $T_2$ when we prescribe data for this equation at different times. Since the original interval can be covered by finitely many intervals of length $T_2$ the solution exists globally. (Here it was used that the problem is invariant under time reversal, so that it is just as easy to solve backwards in time.)
There exist different methods to prove this theorem; here we have just presented one of them. Many have the following general structure. In the first step the equation which is to be solved is replaced by another one which is easier to solve but which should approximate the original equation in a certain sense. In a second step it must be shown that the functions which are supposed to approximate a solution of the original problem in fact converge to a solution of that problem. In the second step (approximate) energy estimates play a prominent role. The approximating equation could be a difference equation (as it is in the above), an equation with analytic coefficients (so that the theorem of Cauchy–Kovalevskaya can be applied) or a regularized version of the equation where the differential operators are multiplied by smoothing operators. No doubt there are also other possibilities.
4 Local existence for quasilinear symmetric hyperbolic systems
4.1 The problem
In this section we will show local existence in the initial value problem for a quasilinear symmetric hyperbolic system with $C^\infty$ coefficients. The notation and assumptions are as in Section 1.4, with the exception that we now consider solutions which are not necessarily $C^\infty$. They will always be classical solutions. We suppose that the coefficients have been cut off as at the beginning of Section 3.3 and that only initial data with compact support are considered.
Uniqueness for (18) was already shown in Theorem 3.2.2. The initial datum $u_0$ for the function $u$ is supposed to belong to the Sobolev space $H^s(\mathbf{R}^n)$ with $s$ sufficiently large. (How large this is will be specified later.) A sharp continuation criterion is proved which shows when a solution defined locally in time can be extended to a longer time interval. The proof presented here is essentially taken from the book of Majda [2]. The general strategy of the proof is as follows. In order to avoid technical problems the initial datum $u_0$ is approximated by a sequence $\{u_0^j\}$ of smooth functions with compact support. Then an iteration is defined. If $u^{j-1}$ is given then $u^j$ should solve the equation
\[
A^0(t,x,u^{j-1})\partial_t u^j + \sum_{i=1}^n A^i(t,x,u^{j-1})\partial_i u^j + B(t,x,u^{j-1}) = 0 \tag{68}
\]
with initial datum $u_0^j$. Thus it is necessary to solve a linear equation with smooth coefficients for smooth initial data. This was done in Theorem 3.5.2. It is however necessary to note the following. The solution of this equation need not exist globally, since $u^{j-1}$ can reach the boundary of the region $G$ where the coefficients are defined. Thus the existence theorem obtained for (68) is only local and the time of existence of the solution $u^j$ can a priori depend on $j$. It must be shown that the time of existence does not tend to zero as $j \to \infty$. When we have obtained a sequence $u^j$ on a fixed time interval it must be shown that the sequence converges to a solution of (18). The method we use to achieve these things is provided by energy estimates.
4.2 The iteration
The iteration which was described briefly in the last section will now be formally defined. Let $u_0$ be a function with values in $\mathbf{R}^k$ which belongs to the Sobolev space $H^s(\mathbf{R}^n)$. Then there exists a sequence $u_0^j$ in $C_0^\infty(\mathbf{R}^n)$ with $\|u_0^j - u_0\|_{H^s} \to 0$ for $j \to \infty$. Define a function $u^0$ on $\mathbf{R}\times\mathbf{R}^n$ by $u^0(t,x) = u_0^0(x)$. Now a sequence $u^j$ is defined recursively. The domain of definition of $u^j$ is $[0,T_j)$, where
\[
T_j = \sup\{0 < t \le T_{j-1} : u^{j-1}([0,t)\times\mathbf{R}^n) \subset G\} \tag{69}
\]
The function $u^j$ is the unique solution of (68) with initial datum $u_0^j$, which exists by Theorem 3.5.2. Each of the functions $u^j$ is smooth and has a support which is contained in a region of the form $|x| < C$ on each closed subinterval of $[0,T_j)$. Hence partial integration and the interchange of integrals with derivatives are justified for these functions. On a fixed closed interval the constant $C$ can be chosen independently of $j$, provided the solution $u^j$ is defined on this interval.
4.3 Energy estimates
The fundamental energy estimate for the equation (68) is as follows:

Lemma 4.3.1 For $j \ge 1$ let $u^j$ be a smooth solution with compact support of the equation (68) on an interval $[0,T]$ with $T < T_j$. Suppose that the initial condition $u^j(0,x) = u_0^j(x)$ holds, where $u_0^j$ is a smooth function with compact support. If there is an open subset $G_1$ of $G$ with $\bar G_1$ a compact subset of $G$ such that $u^j([0,T]\times\mathbf{R}^n) \subset G_1$ then there exists a constant $C > 0$ only depending on $G_1$ and $s$ such that the following inequality holds for all $t \in [0,T]$:
\begin{align*}
\|u^j(t)\|_{H^s}^2 \le C\Big[\|u_0^j\|_{H^s}^2 + \int_0^t \big(1 &+ \|u^j(t')\|_{C^1} + \|u^{j-1}(t')\|_{C^1} + \|\partial_t u^{j-1}(t')\|_{C^0} + \|\partial_t u^j(t')\|_{C^0}\big)\\
\times\big(1 &+ \|u^{j-1}(t')\|_{H^s} + \|u^j(t')\|_{H^s} + \|\partial_t u^j(t')\|_{H^{s-1}}\big)\|u^j(t')\|_{H^s}\,dt'\Big] \tag{70}
\end{align*}
Proof The norm $(\langle v, A^0 v\rangle)^{1/2}$ is uniformly equivalent to $|v|$, as discussed in the proof of Lemma 3.3.1. If we apply the operator $D^\alpha$ to equation (68) we get the following analogue of (52):
\begin{align*}
A^0(u^{j-1})\partial_t(D^\alpha u^j) + \sum_{i=1}^n A^i(u^{j-1})\partial_i(D^\alpha u^j) + [D^\alpha(A^0(u^{j-1})\partial_t u^j) - A^0(u^{j-1})D^\alpha(\partial_t u^j)]&\\
{}+ \sum_{i=1}^n [D^\alpha(A^i(u^{j-1})\partial_i u^j) - A^i(u^{j-1})D^\alpha(\partial_i u^j)] + D^\alpha(B(u^{j-1})) = 0& \tag{71}
\end{align*}
Now take the inner product of this equation with $D^\alpha u^j$ and carry out the now familiar partial integration. The result is
\[
\frac{d}{dt}\int_{\mathbf{R}^n}\langle D^\alpha u^j, A^0 D^\alpha u^j\rangle = \int_{\mathbf{R}^n}\Big\langle D^\alpha u^j, \Big(\partial_t A^0(u^{j-1}) + \sum_{i=1}^n \partial_i A^i(u^{j-1})\Big)D^\alpha u^j - 2B_\alpha\Big\rangle \tag{72}
\]
where
\begin{align*}
B_\alpha = {}&[D^\alpha(A^0(u^{j-1})\partial_t u^j) - A^0(u^{j-1})D^\alpha(\partial_t u^j)]\\
&+ \sum_{i=1}^n [D^\alpha(A^i(u^{j-1})\partial_i u^j) - A^i(u^{j-1})D^\alpha(\partial_i u^j)] + D^\alpha(B(u^{j-1})) \tag{73}
\end{align*}
The right hand side can be estimated using the Moser inequalities. The chain rule gives
\[
\partial_t A^0(u^{j-1}) = (DA^0(u^{j-1}))\partial_t u^{j-1},\qquad \partial_i A^i(u^{j-1}) = (DA^i(u^{j-1}))\partial_i u^{j-1} \tag{74}
\]
The derivatives $DA^0$ and $DA^i$ are bounded on the relatively compact subset $G_1$. Hence
\[
\int_{\mathbf{R}^n}\Big\langle D^\alpha u^j, \Big(\partial_t A^0(u^{j-1}) + \sum_{i=1}^n \partial_i A^i(u^{j-1})\Big)D^\alpha u^j\Big\rangle \le C\big(\|\partial_t u^{j-1}\|_{C^0} + \|u^{j-1}\|_{C^1}\big)\|u^j\|_{H^s}^2 \tag{75}
\]
with $s = |\alpha|$. To estimate the other term an estimate for $B_\alpha$ is needed. Consider first the expression $D^\alpha(B(x,u))$. We would like to apply the third Moser inequality to this, but it is not immediately possible because of the $x$-dependence of $B$. This can be got around. Let $v$ be a mapping from $\mathbf{R}^n$ to $\mathbf{R}^n$ where the component $v^i$ is a smooth function of compact support which is equal to $x^i$ on the support of $u$. Then the third Moser estimate can be applied to the mapping $(u,v)$ with values in $\mathbf{R}^{n+k}$. It follows that
\[
\|D^\alpha B(x, u^{j-1})\|_{L^2} \le C(1 + \|D^s u^{j-1}\|_{L^2}) \tag{76}
\]
The second Moser inequality implies that
\begin{align*}
\|D^\alpha(A^i(u^{j-1})\partial_i u^j) &- A^i(u^{j-1})D^\alpha(\partial_i u^j)\|_{L^2} \tag{77}\\
&\le C\big(\|DA^i(u^{j-1})\|_{L^\infty}\|D^{s-1}\partial_i u^j\|_{L^2} + \|\partial_i u^j\|_{L^\infty}\|D^s(A^i(u^{j-1}))\|_{L^2}\big)\\
&\le C\big(\|u^{j-1}\|_{C^1}\|u^j\|_{H^s} + \|u^j\|_{C^1}(1 + \|u^{j-1}\|_{H^s})\big) \tag{78}
\end{align*}
where in the last line the third Moser inequality was used again. In a similar way we get
\begin{align*}
\|D^\alpha(A^0(u^{j-1})\partial_t u^j) &- A^0(u^{j-1})D^\alpha(\partial_t u^j)\|_{L^2}\\
&\le C\big(\|u^{j-1}\|_{C^1}\|\partial_t u^j\|_{H^{s-1}} + \|\partial_t u^j\|_{C^0}(1 + \|u^{j-1}\|_{H^s})\big) \tag{79}
\end{align*}
Using (75), (76), (78) and (79) in (72) and integrating gives the desired result.
Let $U_{j,s}(t)$ be defined by
\[
[U_{j,s}(t)]^2 = \sup_{0\le j'\le j}\ \sum_{|\alpha|\le s}\int_{\mathbf{R}^n}\langle A^0 D^\alpha u^{j'}, D^\alpha u^{j'}\rangle
\]
where the sequence $u^j$ is generated by the iteration. Let $N_j(t)$ be the corresponding supremum of $\|u^{j'}(t)\|_{C^1}$. With the help of the equation it follows immediately that for $j \ge 1$ the quantity $\|\partial_t u^j(t)\|_{C^0} + \|\partial_t u^{j-1}(t)\|_{C^0}$ can be estimated by $C(1 + N_j(t))$. We can suppose without loss of generality that $[U_{j,s}(0)]^2 \le 2\sum_{|\alpha|\le s}\int_{\mathbf{R}^n}\langle A^0 D^\alpha u_0, D^\alpha u_0\rangle$ for all $j$. Taking the supremum over $j$ of the inequality of Lemma 4.3.1 gives
\[
[U_{j,s}(t)]^2 \le [U_{j,s}(0)]^2 + C\int_0^t \big(1 + N_j(t')\big)\Big(1 + U_{j,s}(t') + \sup_{j'\le j}\|\partial_t u^{j'}(t')\|_{H^{s-1}}\Big)U_{j,s}(t')\,dt' \tag{80}
\]
The equation (68), together with the first and third Moser inequalities, gives the inequality
\[
\|\partial_t u^j(t)\|_{H^{s-1}} \le C(1 + N_j(t))(1 + U_{j,s}(t)) \tag{81}
\]
This allows the explicit norm of $\partial_t u$ to be eliminated from (80), with the result that
\[
[U_{j,s}(t)]^2 \le [U_{j,s}(0)]^2 + C\int_0^t (1 + N_j(t'))^2(1 + U_{j,s}(t'))U_{j,s}(t')\,dt' \tag{82}
\]
The Gronwall inequality can be applied to this integral inequality. Since there are many variants of this inequality in the literature we will give a form here which is sufficient for our needs.

Lemma 4.3.2 (Gronwall inequality) Let $v$ and $h$ be continuous functions on the interval $[0,T]$ with $h \ge 0$ which satisfy the inequality
\[
v(t) \le C_1 + \int_0^t h(t')v(t')\,dt' \tag{83}
\]
Then
\[
v(t) \le C_1 \exp\Big(\int_0^t h(t')\,dt'\Big) \tag{84}
\]
This is a special case of a statement which can be found on page 15 of [5]. In that reference there is a discussion of different forms of this inequality. On the
closed interval $[0,T_1]$ with $0 < T_1 \le T$ the inequality
\[
[U_{j,s}(t)]^2 \le [U_{j,s}(0)]^2 + C\int_0^{T_1}(1 + N_j(t'))^2\,dt' + C\int_0^t (1 + N_j(t'))^2[U_{j,s}(t')]^2\,dt' \tag{85}
\]
holds as a consequence of (82). Then (84) implies that
\[
[U_{j,s}(t)]^2 \le \Big([U_{j,s}(0)]^2 + C\int_0^{T_1}(1 + N_j(t'))^2\,dt'\Big)\exp\Big(C\int_0^t (1 + N_j(t'))^2\,dt'\Big) \tag{86}
\]
Putting $t = T_1$ in (86) and replacing $T_1$ by $t$ in the notation gives
\[
[U_{j,s}(t)]^2 \le \Big([U_{j,s}(0)]^2 + C\int_0^t (1 + N_j(t'))^2\,dt'\Big)\exp\Big(C\int_0^t (1 + N_j(t'))^2\,dt'\Big) \tag{87}
\]
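The Gronwall bound (84), which drove the last two steps, can be sanity-checked numerically on the extremal case $v' = hv$, $v(0) = C_1$, where (83) holds with equality; the choice of $h$ below is arbitrary:

```python
import math

def gronwall_bound(C1, h, T, steps=100000):
    """Integrate v' = h(t) v, v(0) = C1 (the case of equality in (83))
    with explicit Euler, and return (v(T), C1 * exp(integral of h))."""
    dt = T / steps
    v = C1
    H = 0.0  # running left Riemann sum of the integral of h
    for i in range(steps):
        t = i * dt
        v += dt * h(t) * v
        H += dt * h(t)
    return v, C1 * math.exp(H)

# Example: h(t) = 1 + t on [0, 2]; the exact value is exp(2 + 2) = exp(4).
v_T, bound = gronwall_bound(1.0, lambda t: 1.0 + t, 2.0)
# v_T agrees with the bound up to discretization error, and v_T <= bound
# always, since 1 + x <= exp(x) term by term in the Euler product.
```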
This will allow us to estimate the functions $u^j(t)$ in the $H^s$ norm. An obvious procedure would be to estimate the difference between two terms in the iteration and obtain a contraction in the $H^s$ norm. Unfortunately trying to do this directly reveals that a bound in the $H^{s+1}$ norm would be necessary. For this reason it is necessary to choose another route. For this all that is needed is an estimate for the $L^2$ norm of the difference, which will now be derived. For $j \ge 2$ the equation
\begin{align*}
A^0(u^j)\partial_t(u^j - u^{j-1}) + \sum_{i=1}^n A^i(u^j)\partial_i(u^j - u^{j-1})&\\
{}+ \Big[\tilde A^0(u^j,u^{j-1})\partial_t u^{j-1} + \sum_{i=1}^n \tilde A^i(u^j,u^{j-1})\partial_i u^{j-1} + \tilde B(u^j,u^{j-1})\Big](u^j - u^{j-1}) = 0& \tag{88}
\end{align*}
holds. Up to the choice of notation this is identical to (42) and is derived with the help of Lemma 3.2.1. From this equation the following energy estimate is obtained:
\begin{align*}
\|u^j(t) - u^{j-1}(t)\|_{L^2}^2 \le{}& \|u_0^j - u_0^{j-1}\|_{L^2}^2 \tag{89}\\
&+ C\int_0^t \big(1 + \|u^j(t')\|_{C^1} + \|u^{j-1}(t')\|_{C^1}\big)\|u^j(t') - u^{j-1}(t')\|_{L^2}^2\,dt'
\end{align*}
Suppose now that $s > n/2 + 1$. Let $V_j(t) = \|u^j - u^{j-1}\|_{L^2}$. Then it follows from (89) and the Sobolev embedding theorem that
\[
[V_j(t)]^2 \le [V_j(0)]^2 + C\int_0^t (1 + U_{j,s}(t'))[V_{j-1}(t')]^2\,dt' \tag{90}
\]
Taking the supremum over an interval $[0,T']$ with $T' \le T$ gives
\[
\sup_{0\le t\le T'}[V_j(t)]^2 \le [V_j(0)]^2 + CT'\sup_{0\le t\le T'}(1 + U_{j,s}(t))\sup_{0\le t\le T'}[V_{j-1}(t)]^2 \tag{91}
\]
4.4 Convergence
Before we prove the convergence of the iteration it is convenient to reduce the problem to the case with vanishing initial data. Since the initial datum is only of finite differentiability the equation for $u - u_0$ would not have smooth coefficients. For this reason we instead replace $u$ by $u - u_0^0$. Then the transformed equation has smooth coefficients. The initial datum does not vanish but can be made as small as desired by a suitable choice of $u_0^0$. From now on we suppose that a transformation of this kind has been made. It follows from the Sobolev embedding theorem that for $s > n/2 + 1$ there exists a constant $C > 0$ such that $N_j(t) \le CU_{j,s}(t)$. When this is combined with (82) the following integral inequality for $U_{j,s}(t)$ is obtained:
\[
[U_{j,s}(t)]^2 \le C\Big[\|u_0\|_{H^s}^2 + \int_0^t (1 + [U_{j,s}(t')]^2)^2\,dt'\Big] \tag{92}
\]
A function that satisfies this integral inequality can be estimated by the solution of the corresponding integral equation. The equation is
\[
f(t) = C\Big[\|u_0\|_{H^s}^2 + \int_0^t (1 + f(t'))^2\,dt'\Big] \tag{93}
\]
This solution is in turn given by the solution of the corresponding differential equation
\[
df/dt = C(1 + f)^2 \tag{94}
\]
with initial value $C\|u_0\|_{H^s}^2$. There exists a number $T > 0$ such that this solution is smaller than $2C\|u_0\|_{H^s}^2$ on the interval $[0,T]$. It follows that there is a number $T > 0$ such that the values of all functions $u^j$ on the interval $[0,T]$ stay in $G_1$ as long as they are defined. The definition of $T_j$ implies that under these conditions $T_j \ge T$ for all $j$. Moreover the $H^s$ norms of the $u^j$ are uniformly bounded on this interval.
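The comparison equation (94) can in fact be solved in closed form: with $f(0) = f_0$ one finds $1 + f(t) = (1 + f_0)/(1 - C(1+f_0)t)$, which blows up at $t^* = 1/(C(1+f_0))$ but stays below any prescribed bound on a short enough initial interval; this is exactly what produces the time $T$ above. A numerical cross-check with arbitrarily chosen values $C = 1$, $f_0 = 0.5$:

```python
def f_exact(t, C, f0):
    """Closed-form solution of df/dt = C (1 + f)^2, f(0) = f0,
    valid for t < t_star = 1 / (C * (1 + f0))."""
    return (1.0 + f0) / (1.0 - C * (1.0 + f0) * t) - 1.0

def f_euler(t, C, f0, steps=200000):
    # Explicit Euler integration of (94), for comparison.
    dt = t / steps
    f = f0
    for _ in range(steps):
        f += dt * C * (1.0 + f) ** 2
    return f

C, f0 = 1.0, 0.5
t_star = 1.0 / (C * (1.0 + f0))  # blow-up time, here 2/3
t = 0.1                          # well before blow-up; f(t) < 2*f0 = 1 here
```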
Now some function spaces must be introduced. They are spaces of functions which map the interval $[0,T]$ into a Banach space $X$. The functions which are continuous with respect to the topology defined by the norm make up the space $C^0([0,T],X)$. This is a Banach space with the norm $\|u\| = \sup_{0\le t\le T}\|u(t)\|_X$. Since differentiability of functions with values in $X$ is defined, we can define the Banach space $C^1([0,T],X)$ in a corresponding way. The vector space $C_w([0,T],X)$ of functions on $[0,T]$ with values in $X$ which are continuous with respect to the weak topology will also be required. It is, however, not necessary to define a norm on this space. The concept of measurable functions from $[0,T]$ to $X$ will also be needed. A function $u$ of this kind is called measurable if for each open subset $W$ of $X$ in the topology defined by the norm the set $u^{-1}(W)$ is measurable. The function $u$ is called weakly measurable if for each $\phi \in X^*$ the scalar function $\phi(u(t))$ is measurable. The theorem of Pettis says that when $X$ is separable, measurability and weak measurability are equivalent. With these preliminaries it is possible to define $L^p$ spaces of functions with values in $X$. What we need in practice are the spaces $L^1([0,T],X)$ and $L^\infty([0,T],X^*)$ in the case that $X$ is a separable and reflexive Banach space. They are Banach spaces, $L^1([0,T],X)$ is separable and $L^\infty([0,T],X^*)$ is the dual space of $L^1([0,T],X)$. Further details about these spaces can be found in the book of Zeidler [6].
For a real number $s$ the Sobolev space $H^s(\mathbf{R}^n)$ is defined as the completion of the space $C_0^\infty(\mathbf{R}^n)$ with respect to the norm
\[
\|u\|_{H^s}^2 = \int_{\mathbf{R}^n} |\hat u(\xi)|^2 (1 + |\xi|^2)^s\,d\xi \tag{95}
\]
Here $\hat u$ denotes the Fourier transform of $u$. For an integer $s$ this norm is equivalent to the usual $H^s$ norm, so that the new spaces can be identified with the old ones.
Exercise From Hölder's inequality it follows that
\[
\|u\|_{H^{s'}} \le \|u\|_{H^s}^{s'/s}\,\|u\|_{L^2}^{1-s'/s} \tag{96}
\]
for $0 < s' < s$ and all $u \in H^s(\mathbf{R}^n)$.
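Both the Fourier definition (95) and the interpolation inequality (96) can be probed numerically on a periodic grid, replacing the integral by a sum over discrete frequencies. A sketch (not a substitute for the exercise; the grid and test function are arbitrary):

```python
import numpy as np

def hs_norm(u, h, s):
    """Discrete analogue of (95) on a periodic 1D grid with spacing h:
    sum over frequencies xi of |u_hat(xi)|^2 (1 + |xi|^2)^s, times d(xi)."""
    n = len(u)
    u_hat = np.fft.fft(u) * h / np.sqrt(2.0 * np.pi)  # approximate Fourier transform
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=h)          # angular frequency grid
    dxi = 2.0 * np.pi / (n * h)                        # frequency spacing
    return float(np.sqrt(np.sum(np.abs(u_hat) ** 2 * (1.0 + xi ** 2) ** s) * dxi))

# Check (96) for a Gaussian bump with s' = 1, s = 2 (so s'/s = 1/2).
h = 0.01
x = np.arange(-10.0, 10.0, h)
u = np.exp(-x ** 2)
lhs = hs_norm(u, h, 1.0)
rhs = hs_norm(u, h, 2.0) ** 0.5 * hs_norm(u, h, 0.0) ** 0.5
# lhs <= rhs, as predicted; this is just Cauchy-Schwarz on the frequency side.
```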
The main result of this section is an existence theorem for quasilinear symmetric hyperbolic systems. It is assumed as before that the coefficients have been cut off in a suitable way.

Theorem 4.4.1 Let $u_0 \in H^s(\mathbf{R}^n)$ be an initial datum for the quasilinear symmetric hyperbolic system (18), with $s > n/2 + 1$ an integer. Let $A^0 - \mathrm{Id}$, $A^i$ and $B$ be smooth functions which vanish for $|x| > R$. Then there exists a unique classical solution with the given initial datum on a time interval $[0,T]$. This solution belongs to the space $C^0([0,T],H^{s'}(\mathbf{R}^n)) \cap C^1([0,T],H^{s'-1}(\mathbf{R}^n))$ for any $s'$ in the interval $[0,s)$.
Proof We consider the iteration introduced above, which defines a sequence $\{u^j\}$ of smooth functions on $[0,T]\times\mathbf{R}^n$. According to (92) this sequence is bounded in $C^0([0,T],H^s(\mathbf{R}^n))$. The number $T'$ in the inequality (91) can be chosen so small that $CT' < 1$. It can also be assumed without loss of generality that the initial datum has been approximated in such a way that $V_j(0) \le 2^{-j}$. Then as a consequence of (91):
\[
\sup_{0\le t\le T'}[V_j(t)]^2 \le 2^{-2j} + K\sup_{0\le t\le T'}[V_{j-1}(t)]^2 \tag{97}
\]
Summing this from $1$ to $N$ gives
\[
\sum_{j=0}^N \Big[\sup_{0\le t\le T'}[V_j(t)]^2\Big] \le 2 + K\sum_{j=0}^N \Big[\sup_{0\le t\le T'}[V_j(t)]^2\Big] \tag{98}
\]
By choosing $T'$ sufficiently small it can be assumed that $K < 1$. It follows from this that the infinite sum converges and that $u^j$ is a Cauchy sequence in the space $C^0([0,T],L^2(\mathbf{R}^n))$. The interpolation inequality (96) now implies that it is also a Cauchy sequence in $C^0([0,T],H^{s'}(\mathbf{R}^n))$ for all $s' < s$. Using the equation it can be proved that $\partial_t u^j$ is a Cauchy sequence in $C^0([0,T],H^{s'-1}(\mathbf{R}^n))$. In particular, since $s > n/2 + 1$, it is possible to choose $s' > n/2 + 1$. This proves that $u^j$ converges to a limit $u$ in $C^1([0,T]\times\mathbf{R}^n)$ and that the function $u$ is a classical solution of (18). This solution has the correct initial datum, namely $u_0$. The argument also gives the regularity statement of the theorem, namely that $u \in C^0([0,T],H^{s'}(\mathbf{R}^n)) \cap C^1([0,T],H^{s'-1}(\mathbf{R}^n))$ for $0 \le s' < s$.
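The contraction step (97)–(98) can be made concrete by iterating the scalar recursion $a_j = 2^{-2j} + K a_{j-1}$, an upper bound for $\sup_t [V_j(t)]^2$; for $K < 1$ the $a_j$ decay geometrically and their sum converges, which is what makes $u^j$ a Cauchy sequence (the values below are illustrative):

```python
def iterate_bound(K, N=60, a0=1.0):
    """Iterate a_j = 2**(-2*j) + K * a_{j-1}, the bound from (97)."""
    a = [a0]
    for j in range(1, N + 1):
        a.append(2.0 ** (-2 * j) + K * a[-1])
    return a

a = iterate_bound(K=0.5)
total = sum(a)
# With K = 1/2 and a0 = 1 one finds a_j = 2**(1-j) - 4**(-j) exactly,
# so a_j -> 0 geometrically and the series sums to 8/3.
```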
4.5 Additional regularity
Theorem 4.4.1 has the disadvantage that we lose some regularity. The initial datum is in $H^s(\mathbf{R}^n)$ but the solution is only known to lie in $H^{s'}(\mathbf{R}^n)$, $s' < s$, for each fixed value of $t$. In this section this problem will be overcome. The Sobolev space $H^{-s}(\mathbf{R}^n)$ is the dual space of $H^s(\mathbf{R}^n)$, and $H^{-s'}(\mathbf{R}^n)$ is dense in $H^{-s}(\mathbf{R}^n)$ for $s' < s$. Let $v$ be an element of $H^{-s}(\mathbf{R}^n)$. Then $v$ can be approximated arbitrarily well in the $H^{-s}$ norm by elements $w$ of $H^{-s'}(\mathbf{R}^n)$. In particular, given $\epsilon > 0$, $w$ can be chosen such that
\[
\langle u^j(t) - u^{j'}(t), v - w\rangle < \epsilon/2 \tag{99}
\]
for $t \in [0,T]$. Here the fact has been used that the sequence $u^j$ is bounded in $C^0([0,T],H^s(\mathbf{R}^n))$. Since $u^j - u$ converges to zero in $C^0([0,T],H^{s'}(\mathbf{R}^n))$ it follows that $j$ and $j'$ can be chosen so large that
\[
\langle u^j(t) - u^{j'}(t), w\rangle < \epsilon/2 \tag{100}
\]
Combining (99) and (100) gives the estimate
\[
\langle u^j(t) - u^{j'}(t), v\rangle < \epsilon \tag{101}
\]
It follows that $u \in C_w^0([0,T],H^s(\mathbf{R}^n))$. A further statement can be obtained from the Banach–Alaoglu theorem (Theorem 3.1.1). It is namely the case that $L^\infty([0,T],H^s(\mathbf{R}^n))$ is the dual space of $L^1([0,T],H^{-s}(\mathbf{R}^n))$. The second space is also separable. Hence Theorem 3.1.1 can be applied to show that $u \in L^\infty([0,T],H^s(\mathbf{R}^n))$.
To show that $u \in C^0([0,T],H^s(\mathbf{R}^n))$ another argument is needed. We already know that $u(t) \in H^s(\mathbf{R}^n)$ for each value of $t$. It remains to show the continuity with respect to the topology defined by the norm. We will show that $u$ is continuous from the right at $t = 0$. Since the argument is not affected by a time translation or a reversal of the time direction this suffices. Thus we want to show that $\lim_{m\to\infty}\|u(t_m) - u(0)\|_{H^s} = 0$ for each sequence of numbers $t_m$ in the interval $[0,T]$ which tends to zero. If we knew that $\|u(0)\|_{H^s} \ge \limsup \|u(t_m)\|_{H^s}$ we could get the desired result using the following lemma:
Lemma 4.5.1 Let $H$ be a Hilbert space and $\{u_m\}$ a sequence in $H$ which converges weakly to $u \in H$. If $\|u\| \ge \limsup \|u_m\|$ then $\|u - u_m\| \to 0$.

Proof First it will be shown that $\|u_m\| \to \|u\|$. For this it is enough under the given assumptions to show that $\|u\| \le \liminf \|u_m\|$. If $u$ is zero the inequality holds. It is also invariant under scaling. Thus we can assume w.l.o.g. that $\|u\| = 1$. Then
\[
\liminf \|u_m\| \ge \liminf \langle u, u_m\rangle = 1 = \|u\|
\]
This proves the first statement. Now
\begin{align*}
\|u - u_m\|^2 &= \langle u - u_m, u - u_m\rangle \tag{102}\\
&= \|u_m\|^2 - 2\langle u, u_m\rangle + \|u\|^2 \tag{103}
\end{align*}
The last expression tends to zero because of the weak convergence and the result of the first step. It follows that $u_m \to u$.
The norm which is defined by
\[
\|v\|_{s,A^0}^2 = \sum_{|\alpha|\le s}\int_{\mathbf{R}^n}\langle A^0 D^\alpha v, D^\alpha v\rangle \tag{104}
\]
is equivalent to the usual $H^s$ norm. This norm clearly comes from an inner product, and Lemma 4.5.1 will now be applied to the Hilbert space defined by this inner product. The energy estimates, together with what we know about the boundedness of the sequence $u^j$, give an inequality of the form
\[
\|u^j(t)\|_{s,A^0}^2 \le \|u^j(0)\|_{s,A^0}^2 + r(t) \tag{105}
\]
where the function $r(t)$ is independent of $j$ and satisfies $r(t) \to 0$ for $t \to 0$. It follows that:
\begin{align*}
\|u(t)\|_{s,A^0}^2 &\le \limsup_{j\to\infty}\|u^j(t)\|_{s,A^0}^2\\
&\le \limsup_{j\to\infty}\|u^j(0)\|_{s,A^0}^2 + r(t)\\
&= \|u(0)\|_{s,A^0}^2 + r(t) \tag{106}
\end{align*}
The first inequality uses the weak convergence of the sequence. Using (106) completes the argument.
4.6 A continuation criterion
If $u$ is a solution of the equation (18) which is in the space $C^0([0,T),H^s(\mathbf{R}^n)) \cap C^1([0,T),H^{s-1}(\mathbf{R}^n))$ then the analogue of the integrated form of (72) holds, where $u^{j-1}$ and $u^j$ are replaced by $u$ and the definition of $B_\alpha$ is changed accordingly. In the integrated form it is possible to justify the passage to the limit. This $B_\alpha$ can be estimated as in Section 4.3, with the result:
\begin{align*}
\|u(t)\|_{H^s}^2 \le C\Big[\|u_0\|_{H^s}^2 + \int_0^t \big(1 &+ \|u(t')\|_{C^1} + \|\partial_t u(t')\|_{C^0}\big)\\
\times\big(1 &+ \|u(t')\|_{H^s} + \|\partial_t u(t')\|_{H^{s-1}}\big)\|u(t')\|_{H^s}\,dt'\Big] \tag{107}
\end{align*}
If we have a solution in the space $C^0([0,T],H^s(\mathbf{R}^n))$ as in the last section we can use the equation to replace $\|\partial_t u\|_{H^{s-1}}$ by $\|u\|_{H^s}$. The inequality (107) then simplifies to
\[
\|u(t)\|_{H^s}^2 \le C\Big[\|u_0\|_{H^s}^2 + \int_0^t \big(1 + \|u(t')\|_{C^1} + \|\partial_t u\|_{C^0}\big)\big(1 + \|u(t')\|_{H^s}^2\big)\,dt'\Big] \tag{108}
\]
The time of existence $T$ of the solution in Theorem 4.4.1 only depends on the $H^s$ norm of the initial datum, provided the solution takes values in the set $G_1$. The inequality (107) shows that as long as the $C^1$ norm of the solution stays finite its $H^s$ norm also stays finite.
Theorem 4.6.1 Let $u$ be a classical solution of the equation (18) on an interval $[0,T)$ with an initial datum $u_0 \in H^s(\mathbf{R}^n)$, $s > n/2 + 1$. If the $C^1$ norm of $u$ and the $C^0$ norm of $\partial_t u$ are bounded on $[0,T)$ and the values of $u$ lie in an open subset $G_1$ whose compact closure is contained in $G$, then $u$ can be extended as a classical solution to an interval $[0,T')$ with $T' > T$, and the extension belongs to the space $C^0([0,T'),H^s(\mathbf{R}^n)) \cap C^1([0,T'),H^{s-1}(\mathbf{R}^n))$.
Proof By Theorem 4.4.1 there exists a solution in the space $C^0([0,T'),H^s(\mathbf{R}^n))$ on a short time interval. This is a classical solution and must agree with the given classical solution as long as both exist. The boundedness of the $C^1$ norm, and therefore of the $H^s$ norm, shows that the solution in $C^0([0,T'),H^s(\mathbf{R}^n))$ can be extended up to and beyond $T$ with the same regularity.
An interesting corollary of this statement is that for a $C^\infty$ initial datum there is a corresponding $C^\infty$ solution: the interval of existence cannot shrink to zero as $s$ grows.
For special symmetric hyperbolic systems this continuation criterion can be improved, as can be seen by a careful consideration of the proof. If, for instance, the system is semilinear, the $C^1$ norm can be replaced by the $C^0$ norm. For a semilinear wave equation this means that after reduction to first order the $C^0$ norm of the new variables is enough to ensure the continued existence of the solution. In other words it is enough to have a bound for the $C^1$ norm of the original variables and the $C^0$ norm of their time derivatives. If the unknowns in a semilinear symmetric hyperbolic system can be written in the form $u = (u_1, u_2)$, where the equation for $u_2$ is linear with coefficients which only depend on $t$ and $x$, then it is only necessary to control the $L^\infty$ norm of $u_1$ to ensure the continued existence of the solution. This can be used to show that a solution of the equation (9) exists as long as $u$ is pointwise bounded.
5 Global results
We have already seen that for linear symmetric hyperbolic systems it is possible to show that unique global solutions exist for smooth initial data. This takes care of the systems (1) and (13). In the quasilinear case there is no comparable general result. A unique solution exists locally in time, but the question whether a global solution exists must be investigated on a case by case basis. The criteria of Section 4.6 show that global existence holds for (9), (10) or (11) if the norms $\|u\|_{C^0}$, $\|(u,v)\|_{C^1} + \|(\partial_t u, \partial_t v)\|_{C^0}$ or $\|u^A\|_{C^1} + \|\partial_t u^A\|_{C^0}$ respectively remain bounded. Nothing will be said here about the Euler equations, since in that case global existence cannot be expected. In the next two sections examples will be presented where the continuation criterion can be verified.

If global existence does not hold for general initial data, or at least cannot be shown, it is possible to try to show global existence for data which are close to data for which global existence is known. The best known case is that where $u = 0$ satisfies the equation and data close to the vanishing initial data are investigated. This is referred to as the case of small data. An example of this type will be treated in a later section.
5.1 The onedimensional wave map
In this section it will be shown that for smooth initial data with compact support
for a wave map in one dimension a global solution exists. Only the special wave
map (10) will be treated. For a general wave map in one space dimension
no further analytical difficulties occur. It is, however, necessary to use a little
diﬀerential geometry which we do not want to introduce here. As a result of
the remarks in the last section we know that it is enough to show that for a
solution on a time interval [0, T) the quantity
$$\|u(t)\|_{C^1} + \|v(t)\|_{C^1} + \|\partial_t u(t)\|_{C^0} + \|\partial_t v(t)\|_{C^0} \quad (109)$$
is bounded. The energy
$$\mathcal{E} = \int_{\mathbb{R}} \frac{1}{2}\left\{e^{-2v}\left[(\partial_t u)^2 + (\partial_x u)^2\right] + (\partial_t v)^2 + (\partial_x v)^2\right\}dx \quad (110)$$
is constant in time. The equations for the wave map are of the form
$$-\partial_t^2 u + \partial_x^2 u = Q_u \quad (111)$$
$$-\partial_t^2 v + \partial_x^2 v = Q_v \quad (112)$$
for certain source terms $Q_u$ and $Q_v$. The $L^1$ norm of $Q_v(t)$ can be bounded by the energy. A classical representation formula for solutions of the inhomogeneous wave equation in one space dimension is
$$u(t, x) = \frac{1}{2}\left[u(0, x-t) + u(0, x+t) + \int_{x-t}^{x+t} \partial_t u(0, x')\,dx' + \int_\Delta Q_u(t', x')\,dt'\,dx'\right] \quad (113)$$
Here $\Delta$ denotes the triangle with vertices $(t, x)$, $(0, x-t)$ and $(0, x+t)$. Of course a similar formula holds for v. The only term on the right hand side of (113) which is not determined by the initial data, and therefore not automatically bounded, is the last one. In the case of v:
$$\left|\int_\Delta Q_v(t', x')\,dt'\,dx'\right| \le \int_0^t \|Q_v(t')\|_{L^1}\,dt' \quad (114)$$
The right hand side of this inequality is known to be bounded. Now the analogue of (113) for v shows that v is bounded. Under these circumstances the $L^1$ norm of $Q_u$ can also be bounded by the energy, since the exponential factors occurring in the estimate of $Q_u$ are now controlled pointwise. The boundedness of u then follows from (113).
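As a sanity check, (113) can be tested numerically in the source-free case. The following sketch (the data $u(0,x)=\sin x$, $\partial_t u(0,x)=\cos x$, the test point and the quadrature resolution are illustrative choices, not taken from the text) compares the formula with the exact solution $\sin(x+t)$:

```python
import numpy as np

# Numerical check of (113) with vanishing source Q_u:
# the data u(0,x) = sin(x), u_t(0,x) = cos(x) give the exact solution sin(x+t).
def dalembert(t, x, u0, u1, n=2000):
    # trapezoidal quadrature for the integral of u_t(0, x') over [x-t, x+t]
    s = np.linspace(x - t, x + t, n)
    fs = u1(s)
    integral = np.sum(0.5 * (fs[1:] + fs[:-1]) * np.diff(s))
    return 0.5 * (u0(x - t) + u0(x + t) + integral)

t0, x0 = 0.7, 1.3
approx = dalembert(t0, x0, np.sin, np.cos)
exact = np.sin(x0 + t0)
print(abs(approx - exact))  # tiny quadrature error
```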
Next we want to bound the first derivatives of u and v. For this it is useful to introduce coordinates $\xi = t + x$ and $\eta = t - x$, together with the corresponding derivatives
$$u_\xi = \partial_t u + \partial_x u, \qquad u_\eta = \partial_t u - \partial_x u \quad (115)$$
We also use $u_{\xi\eta}$ for the second derivative $\partial_t^2 u - \partial_x^2 u$. The wave map (10) in one space dimension is of the form
$$u_{\xi\eta} = u_\xi v_\eta + u_\eta v_\xi \quad (116)$$
$$v_{\xi\eta} = -e^{-2v} u_\eta u_\xi \quad (117)$$
These equations can be integrated in the $\xi$ and $\eta$ directions. If this is done naively then the right hand side is quadratic in the unknowns and it is not possible to say anything about the boundedness of the solutions. Instead it is necessary to use the special structure of the nonlinearity, as in the following calculation:
$$\partial_\xi\left(e^{-2v} u_\eta^2 + v_\eta^2\right) = 0, \qquad \partial_\eta\left(e^{-2v} u_\xi^2 + v_\xi^2\right) = 0 \quad (118)$$
It follows that the quantities $e^{-v}u_\xi$, $e^{-v}u_\eta$, $v_\xi$ and $v_\eta$ can be bounded in terms of the initial data. Since we already know that v is bounded we get estimates for $u_\xi$ and $u_\eta$. Estimates for $\partial_t u$, $\partial_x u$, $\partial_t v$ and $\partial_x v$ then follow. The fact that the quadratic terms can be handled in this way is connected with the fact that the system satisfies the null condition of Klainerman.
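The cancellation behind (118) can be confirmed symbolically. The sketch below (using sympy; the variable names are arbitrary) substitutes the mixed second derivatives from (116) and (117) and checks that both derivatives in (118) vanish identically:

```python
import sympy as sp

xi, eta = sp.symbols('xi eta')
u = sp.Function('u')(xi, eta)
v = sp.Function('v')(xi, eta)

# Wave-map equations (116), (117) for the mixed second derivatives
mixed = {
    sp.diff(u, xi, eta): sp.diff(u, xi)*sp.diff(v, eta) + sp.diff(u, eta)*sp.diff(v, xi),
    sp.diff(v, xi, eta): -sp.exp(-2*v)*sp.diff(u, xi)*sp.diff(u, eta),
}

# The two null quantities of (118) and the directions they are conserved in
checks = [
    (sp.exp(-2*v)*sp.diff(u, eta)**2 + sp.diff(v, eta)**2, xi),
    (sp.exp(-2*v)*sp.diff(u, xi)**2 + sp.diff(v, xi)**2, eta),
]
results = [sp.simplify(sp.diff(q, d).subs(mixed)) for q, d in checks]
print(results)  # [0, 0]
```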
In this problem another approach is possible. It would be possible to first use (118), which leads to the boundedness of $v_t$. An integration in t then ensures the boundedness of v. After that (118) can be used again to control u. From this point on the argument runs as before. The reason that the more complicated argument was presented first is that it has a wider range of applicability.
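Similarly, the constancy of the energy (110) can be checked. Rewriting (116) and (117) in the coordinates (t, x) gives $\partial_t^2 u = \partial_x^2 u + 2(\partial_t u\,\partial_t v - \partial_x u\,\partial_x v)$ and $\partial_t^2 v = \partial_x^2 v - e^{-2v}((\partial_t u)^2 - (\partial_x u)^2)$; under these equations the time derivative of the energy density equals the x-derivative of a flux, so the integral is constant for data vanishing at infinity. A symbolic sketch (the flux expression is a standard choice, not taken from the text):

```python
import sympy as sp

t, x = sp.symbols('t x')
u = sp.Function('u')(t, x)
v = sp.Function('v')(t, x)
ut, ux = sp.diff(u, t), sp.diff(u, x)
vt, vx = sp.diff(v, t), sp.diff(v, x)

# Energy density of (110) and a corresponding flux
e = sp.Rational(1, 2)*(sp.exp(-2*v)*(ut**2 + ux**2) + vt**2 + vx**2)
F = sp.exp(-2*v)*ut*ux + vt*vx

# (t,x)-form of (116), (117): substitute the second time derivatives
eqs = {
    sp.diff(u, t, 2): sp.diff(u, x, 2) + 2*(ut*vt - ux*vx),
    sp.diff(v, t, 2): sp.diff(v, x, 2) - sp.exp(-2*v)*(ut**2 - ux**2),
}
residual = sp.simplify((sp.diff(e, t) - sp.diff(F, x)).subs(eqs))
print(residual)  # 0
```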
5.2 A semilinear wave equation
The subject of this section is a global existence theorem for the equation (9) in
the case n = 3 and k = 1. We consider smooth data with compact support.
From Section 4.6 we know that it is enough to show that for an arbitrary solution on an interval $[0, T)$ the $L^\infty$ norm of the solution is bounded. Using the Sobolev embedding theorem this means that it is enough to show that the $H^2$ norm is bounded. This will be done using energy estimates. From (9) it follows that
$$-\partial_t^2(\partial_i u) + \Delta(\partial_i u) = 3u^2 \partial_i u \quad (119)$$
The energy
$$\mathcal{E} = \int_{\mathbb{R}^3} \frac{1}{2}\left[(\partial_t u)^2 + |\nabla u|^2\right] + \frac{1}{4} u^4 \quad (120)$$
is constant in time. Since we can control the support of the solution using the domain of dependence the $L^2$ norm can be controlled by the $L^4$ norm. It thus follows from energy conservation that $\|u\|_{H^1}$ is bounded. Multiplying (119) with $\partial_t \partial_i u$ and integrating in space gives
$$\frac{d}{dt}\left[\int_{\mathbb{R}^3} \frac{1}{2}\left((\partial_t \partial_i u)^2 + |\nabla \partial_i u|^2\right)\right] = -3\int_{\mathbb{R}^3} u^2\, \partial_i u\, \partial_t \partial_i u \quad (121)$$
The last integral can be bounded by $\frac{3}{2}\left[\|\partial_i \partial_t u\|_{L^2}^2 + \|u^2 \partial_i u\|_{L^2}^2\right]$. It remains to look closely at the second term.
$$\|u^2 \partial_i u\|_{L^2}^2 = \int_{\mathbb{R}^3} u^4 (\partial_i u)^2 \quad (122)$$
$$\le \left(\int_{\mathbb{R}^3} u^6\right)^{2/3} \left(\int_{\mathbb{R}^3} (\partial_i u)^6\right)^{1/3} \le C\|u\|_{H^1}^4 \|u\|_{H^2}^2 \quad (123)$$
The first step uses the Hölder inequality and the second the Sobolev inequality.
This argument does not work for higher powers in dimension n = 3. On the
other hand it gives global existence for any integer k ≥ 1 in the case n = 2.
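The Hölder step in (122)-(123), with exponents 3/2 and 3, is easy to illustrate on a grid. In the following sketch the sample profiles, the one-dimensional domain and the Riemann-sum quadrature are arbitrary illustrative choices, with w standing in for $\partial_i u$:

```python
import numpy as np

# Discrete check of  int u^4 w^2 <= (int u^6)^(2/3) * (int w^6)^(1/3),
# i.e. Hölder with exponents 3/2 and 3, on a one-dimensional grid.
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
for _ in range(5):
    a, b = rng.normal(size=2)
    u = a * np.exp(-x**2)          # sample profile
    w = b * x * np.exp(-x**2)      # derivative-like companion
    lhs = np.sum(u**4 * w**2) * dx
    rhs = (np.sum(u**6) * dx)**(2/3) * (np.sum(w**6) * dx)**(1/3)
    assert lhs <= rhs * (1 + 1e-12)
print("Hölder step verified on sample profiles")
```

The inequality holds exactly for the discrete sums as well, since Hölder's inequality applies to any weighted sum with the same weights on both sides.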
5.3 Dissipative symmetric hyperbolic equations
In this section the following symmetric hyperbolic equation is considered:
$$\partial_t u + \sum_{i=1}^n A^i(u) \partial_i u + \lambda u = 0 \quad (124)$$
It is assumed that $\lambda > 0$. This is certainly not the most general system which can be treated with the techniques discussed in what follows. It is, however, general enough to illustrate the essential ideas. The function u which is identically zero is obviously a solution of (124) which is global in time. The aim is now to show that solutions u which evolve from data $u_0$ with small Sobolev norm also exist globally in time, and that the Sobolev norm $\|u\|_{H^s}$ with s sufficiently large converges to zero exponentially as $t \to \infty$.
The basic idea is to derive an energy estimate for the system (124) where the term which contains $\lambda$ is included explicitly. An essential point is that the coefficients in the equation do not depend explicitly on t or x. Hence no constant summand occurs inside the bracket of the estimate. The estimate is
$$\frac{d}{dt}\|u(t)\|_{H^s}^2 \le \left(-\lambda + \|u(t)\|_{C^1} + \|\partial_t u(t)\|_{C^0}\right)\|u(t)\|_{H^s}^2 \quad (125)$$
When $\|u\|_{H^s}$ is small the norm $\|u\|_{C^1}$ is also small as a consequence of the Sobolev embedding theorem. The equation shows that $\|\partial_t u\|_{C^0}$ is small. It follows that there exists $\epsilon > 0$ such that $\|u(t)\|_{H^s} \le \epsilon$ implies that the expression $-\lambda + \|u(t)\|_{C^1} + \|\partial_t u(t)\|_{C^0}$ is negative. Consider an initial datum $u_0$ with the property that $\|u_0\|_{H^s} \le \epsilon/2$. Close to $t = 0$ it follows by continuity that $\|u(t)\|_{H^s} < \epsilon$. Now let $T_*$ be the supremum of all numbers T such that a solution of (124) exists on $[0, T]$ and $\|u(t)\|_{H^s} \le \epsilon$ there. If $T_* < \infty$ the continuation criterion implies that the solution exists on a longer time interval. But at $t = T_*$ the derivative of $\|u(t)\|_{H^s}$ is negative, contradicting the definition of $T_*$. The only remaining possibility is that $T_* = \infty$. Furthermore, the expression $-\lambda + \|u(t)\|_{C^1} + \|\partial_t u(t)\|_{C^0}$ is everywhere smaller than a negative constant and so $\|u(t)\|_{H^s}$ decays exponentially as $t \to \infty$. We see that the solution $u = 0$ is asymptotically stable.
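The stability mechanism of this section can be observed in a simple numerical experiment. In the sketch below the concrete system (a damped Burgers equation, i.e. n = 1 and $A^1(u) = u$ in (124)), the Lax-Friedrichs discretization and all grid parameters are illustrative assumptions, not taken from the text; the sup norm of a small periodic datum decays at a rate close to $e^{-\lambda t}$:

```python
import numpy as np

# Damped Burgers equation  u_t + u u_x + lam*u = 0  on a periodic grid,
# integrated with the Lax-Friedrichs scheme (flux f(u) = u^2/2).
lam = 1.0
N, L = 400, 2*np.pi
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
u = 0.1 * np.sin(x)          # small initial datum, sup norm 0.1
dt = 0.4 * dx                # CFL-safe time step for |u| << 1
T = 5.0
for _ in range(int(T / dt)):
    up = np.roll(u, -1)      # u_{j+1} (periodic)
    um = np.roll(u, 1)       # u_{j-1}
    flux = 0.25 * (up**2 - um**2) / dx      # (f_{j+1} - f_{j-1}) / (2 dx)
    u = 0.5 * (up + um) - dt * flux - dt * lam * u
amp = np.max(np.abs(u))
print(amp)  # well below the initial amplitude 0.1, roughly 0.1*exp(-lam*T)
```

The observed decay is slightly faster than $e^{-\lambda t}$ because the scheme adds numerical diffusion on top of the damping term.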