
Continuous Dynamics

on Metric Spaces

Craig Calcaterra

29 November 2008
Version 1.0

Preface v

Introduction vii
0.1 Context and objective . . . . . . . . . . . . . . . . . . . . . . . . vii
0.2 Example: flows on L2 (R) . . . . . . . . . . . . . . . . . . . . . . xi
0.3 Example: flows on manifolds . . . . . . . . . . . . . . . . . . . . xiv
0.4 Example: flows on a space with no linear structure . . . . . . . . xxi
0.5 Chapter outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
0.6 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
0.7 Abridged version of the book . . . . . . . . . . . . . . . . . . . . xxv
0.8 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi

I Theory 1
1 Flows 3
1.1 Generating flows with arc fields . . . . . . . . . . . . . . . . . . . 3
1.1.1 The fundamental theorem . . . . . . . . . . . . . . . . . . 3
1.1.2 Local flows . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.3 Global flows . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2 Forward flows and fixed points . . . . . . . . . . . . . . . . . . . 19
1.3 Invariant sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.4 Commutativity of flows . . . . . . . . . . . . . . . . . . . . . . . 22

2 Lie algebra on metric spaces 25
2.1 Metric space arithmetic . . . . . . . . . . . . . . . . . . . . . . . 25
2.2 Metric space Lie bracket . . . . . . . . . . . . . . . . . . . . . . . 31
2.3 Covariance and contravariance . . . . . . . . . . . . . . . . . . . 34

3 Foliations 41
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Local integrability . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 Commutativity of flows . . . . . . . . . . . . . . . . . . . . . . . 58
3.4 The Global Frobenius Theorem . . . . . . . . . . . . . . . . . . . 60


3.5 Control theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

II Examples 71
4 Brackets on function spaces 73

5 Approximation with non-orthogonal families 83
5.1 Gaussians . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.1.1 First approximation formula . . . . . . . . . . . . . . . . . 83
5.1.2 Signal synthesis . . . . . . . . . . . . . . . . . . . . . . . . 84
5.1.3 Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . 85
5.1.4 Coefficient formulas . . . . . . . . . . . . . . . . . . . . . 88
5.1.5 Instability . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2 Low-frequency trigonometric series . . . . . . . . . . . . . . . . . 90
5.2.1 Density in L2 . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.2.2 Coefficient formulas . . . . . . . . . . . . . . . . . . . . . 92
5.2.3 Damping gives a stable family . . . . . . . . . . . . . . . . 97

6 Partial differential equations 101
6.1 Metric space arithmetic . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 PDEs as arc fields . . . . . . . . . . . . . . . . . . . . . . . . . . 104

7 Flows on H (Rn ) 107
7.1 IFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.2 Continuous IFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.3 Fixed points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.4 Cyclically attracted sets . . . . . . . . . . . . . . . . . . . . . . . 114
7.5 Control theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

8 Counter-examples 119

Appendix A: Metric spaces 123
A.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
A.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
A.2.1 Regularity . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
A.2.2 Extensions . . . . . . . . . . . . . . . . . . . . . . . . 130
A.3 Geometric objects . . . . . . . . . . . . . . . . . . . . . . . . . . 131
A.3.1 Triangles . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
A.3.2 Metric coordinates . . . . . . . . . . . . . . . . . . . . . . 132
A.3.3 Conversion formulas . . . . . . . . . . . . . . . . . . . . . 133

Appendix B: ODEs as vector fields 137

Appendix C: Numerical differentiation 141

List of notation 151

This book explores the subject of metric geometry using continuous dynamics.
Metric geometry is currently experiencing intense interest, due to Perelman’s
solution of the Poincaré Conjecture and the influence of Gromov’s ideas on
string theory in physics. Despite this advanced pedigree, metric geometry begins
at a basic level, requiring no more than an undergraduate introduction to point-set
topology and the definition of a distance metric. The novel perspective of
this text is its focus on using flows on an abstract metric space to crack open
geometric objects such as foliations. The abstract environment allows us to
pinpoint the necessary ideas to make all our analytic constructions—we employ
the bare minimum definitions for creating dynamics, geometric decompositions,
and approximations on metric spaces. This book is written with students in
mind, with the intention of using this minimum apparatus to make learning and
understanding the ideas easier. Hopefully the treatment will be of interest to
researchers as well, being the first unified presentation of this dynamic approach
to metric geometry. Further, researchers can use this abstract environment to
test the limits of their understanding of fundamental constructions such as flows,
Lie derivatives, foliations, holonomy and connections.


In this chapter the case is made for the importance of studying flows on a metric
space. The concept of a metric space is the deepest point of contact between
geometry and analysis; we gain new perspective on these subjects by generalizing
several of their results to metric spaces. The generalized Fundamental Theorem
of Ordinary Differential Equations and Frobenius’ Foliation Theorem are the
major theoretical results of this book. The first theorem belongs to analysis
and the second to geometry.
The greater generality also gives a richer palette for mathematical modeling,
as demonstrated with novel dynamics on H (Rn), the space of nonempty compact
subsets of Rn. Innovative dynamics arise even on well-studied spaces. E.g.,
geometric control theory on function spaces leads to our centerpiece example:
low-frequency trigonometric series can approximate any L2 function on any
interval (Theorem 94 and Example 95), which the reader can turn to immediately,
before learning the details of the metric space dynamics that conceived the idea.

0.1 Context and objective
A metric space (M, d) is a set M with a function d : M × M → R called the
metric which is positive, definite, symmetric, and satisfies the triangle inequality:

(i) d(x, y) ≥ 0   (positivity)
(ii) d(x, y) = 0 iff x = y   (definiteness, or non-degeneracy)
(iii) d(x, y) = d(y, x)   (symmetry)
(iv) d(x, y) ≤ d(x, z) + d(z, y)   (triangle inequality)

for all x, y, z ∈ M. A metric space is locally complete if for each element
x ∈ M there exists an r > 0 such that the closed ball

B (x, r) := {y ∈ M|d (x, y) ≤ r}

is complete. Every major result in this book is written at this generality, so our
constant friend is the triangle inequality—exploited without acknowledgement.
The most important metric spaces include n-dimensional Euclidean space Rn ,
Riemannian manifolds and function spaces such as L2 (R). Appendix A gives


definitions for these and other examples and lists general properties of metric
spaces.
The term “continuous dynamics”, as opposed to “discrete dynamics”, means
the study of flows:

Definition 1 A flow is a continuous map F : M × R → M which, for all
x ∈ M and s, t ∈ R, satisfies
(i) F (x, 0) = x
(ii) F (F (x, s) , t) = F (x, s + t).

More efficient notations are

Ft (x) := F (x, t) =: Fx (t)

with the space variable x or time parameter t in the subscript, depending on
which quantity is active in a calculation. Flows will typically be denoted with
F , G, or H.
For fixed t, a flow gives a map Ft : M → M which is necessarily an automorphism,
i.e., a homeomorphism of M to itself, since F−t is the continuous
inverse of Ft:

F−t ◦ Ft = F0 = Id.
A flow may thus be viewed as a 1-parameter family of homeomorphisms. From
another point of view, the R parameter t often signifies time, and the map Fx :
R → M then describes the motion of a fixed x through its position/configuration
space M
Fx (t) for all t ∈ R with initial condition x = Fx (0).
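The two flow axioms are easy to test concretely. As a quick illustrative aside (not part of the text's formal development), here is a minimal Python sketch checking (i) and (ii) of Definition 1 for the simplest flow on M = R, translation F(x, t) := x + t:

```python
# Minimal sketch (illustrative only): the translation flow F(x, t) = x + t
# on M = R satisfies the two flow axioms of Definition 1.

def F(x, t):
    """Translation flow on the real line."""
    return x + t

x = 1.7
s, t = 0.4, -2.1

# (i) F(x, 0) = x
assert F(x, 0) == x

# (ii) F(F(x, s), t) = F(x, s + t)
assert abs(F(F(x, s), t) - F(x, s + t)) < 1e-12

# For fixed t, F_t is invertible with continuous inverse F_{-t}
assert abs(F(F(x, t), -t) - x) < 1e-12
```

The same three checks apply verbatim to any candidate flow; failing one of them disqualifies a map from being a flow.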

Our chief interest is to use continuous dynamics to explore the geometry
of general metric spaces. Insights into geometric structure, in turn, give us
deeper understanding of possible dynamics. It is surprising how many important
geometrical ideas require only a metric for their definition. Balls and spheres,
of course, are utilized at the inception of metric spaces. A more extensive list
of static geometric definitions (ellipses, cylinders, etc.) appears on page 133.
Ekeland’s variational principle and the Mountain Pass Theorem have natural
expressions on a metric space [43]. For many decades algebraic topologists
have been aware that a topology without further algebraic structure is sufficient
to define geometrically insightful indices, such as the fundamental homotopy
group or the homological Conley index [25]. More important for this book,
geometric notions such as curves, surfaces, tangency, and transversality have
natural expressions on metric spaces. The generalization of the Fundamental
Theorem of Ordinary Differential Equations to metric spaces ([52], [7], [18], [30])
and Frobenius’ Foliation Theorem (Chapter 3) are the major theoretical results
explicated in this text. Further, length, speed, angles, norm, curvature [14],
the Lie derivative (Chapter 2), gradients ([41], [3]) and many others also have
natural and fruitful generalizations. The spirit that guides the development of
metric geometry is the conviction that every major geometrical result has a
substantial expression on metric spaces.

Metric geometry’s goal may be summarized as the attempt to generalize Rie-
mannian geometry to metric spaces; a complementary point of view holds that
Riemannian geometry is a specialized pursuit within the wider goal of exploring
the geometry of general metric spaces. Hilbert’s 4th and 23rd problems are a
good place to start the long history of metric geometry. The major contributors
to metric geometry are unprintably populous, but an embarrassingly short list
of highlights, particularly relevant to the goal of this book, include Menger [48],
A. D. Aleksandrov [2], Busemann [15], and Gromov [37], who take the notions
“curve”, “path” or “arc” in M as primary objects of study. Different authors
have contradictory definitions. Let us define a curve to be a continuous map
c : I → M, where I ⊂ R is a subinterval with nonempty interior; and define
the path of c as its image, the set {c (t) : t ∈ I} ⊂ M . An arc is a curve
with a special property, e.g., it may minimize distance or energy. We will
reserve the freedom to use the term “arc” in this loose, evocative manner as any
distinguished curve. The length L(c) of a curve c is the supremum of the sums

Σ_{i=1}^{n} d(c(t_i), c(t_{i−1}))

taken over all finite partitions {t_0, t_1, ..., t_n} of its domain I. A curve c : I → M
has speed bounded by ρ, with ρ ≥ 0, if d(c(s), c(t)) ≤ ρ |s − t| for all s
and t in I. The speed of c is the infimum of all such bounds ρ. The length
of the curve restricted to any interval [t_1, t_2] ⊂ I is then less than or equal
to ρ |t_1 − t_2|, and the notion of speed as length-traveled-divided-by-time is still
valid (infinitesimally and on average) in metric spaces.
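These definitions are directly computable. As an illustrative sketch (my example, not the text's), the following Python code approximates the length of the unit-speed circle arc c(t) = (cos t, sin t) on [0, π] by partition sums, which increase toward the supremum L(c) = π:

```python
import math

# Sketch (illustrative only): the length of a curve as the supremum of
# partition sums sum_i d(c(t_i), c(t_{i-1})).
# Example: unit-speed circle arc c(t) = (cos t, sin t) on [0, pi], length pi.

def c(t):
    return (math.cos(t), math.sin(t))

def d(p, q):
    """Euclidean metric on R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def partition_length(n):
    """Chord-sum over the uniform partition of [0, pi] into n pieces."""
    ts = [math.pi * i / n for i in range(n + 1)]
    return sum(d(c(ts[i]), c(ts[i - 1])) for i in range(1, n + 1))

# The sums increase toward L(c) = pi as the partition refines, and each
# chord obeys the speed bound d(c(s), c(t)) <= 1 * |s - t|.
assert partition_length(4) < partition_length(16) < partition_length(64)
assert partition_length(64) < math.pi < partition_length(64) + 1e-3
```

The monotone convergence of the chord sums is exactly why the length is defined as a supremum rather than a limit along one particular partition.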

Much of the differential calculus may also be generalized, inspired by the
observation that tangency may be characterized using solely the metric (lines
(1) and (2) below) without requiring any algebraic properties for the underlying
space. Differential equations and their solutions are thereby expressible on gen-
eral metric spaces. Stripping manifolds of their local-linear structure and leaving
only the ability to compare distances between points helps us understand the
essence of these geometrical and dynamical facts, and it has the added benefit
of giving occasionally stronger theorems and a wider descriptive power that ac-
companies the more general framework. But our ulterior motivation is: focusing
on the metric alone often makes things easier. It is easier to prove and under-
stand a result when there are fewer assumptions; and it is easier to construct
examples when we are not restricted to a highly structured environment, such as
a finite-dimensional manifold. Throughout the book, though, we rigidly adhere
to the philosophy of faithfully and naturally generalizing analysis and geometry
on manifolds; this allows us to use the voluminous library of traditional results
whenever our generalized examples inhabit a more structured environment.
Two curves c_i : I_i → M for i = 1, 2 are tangent at a point t ∈ I_1 ∩ I_2 if

lim_{h→0} d(c_1(t + h), c_2(t + h)) / h = 0.        (1)
This definition faithfully generalizes tangency on M = Rn or any normed linear
space. For instance a curve c : I → Rn is differentiable with c′(t_0) ∈ Rn for
t_0 ∈ I if and only if c is tangent to the curve l(t) := c(t_0) + (t − t_0) c′(t_0), which
is a line in the direction of c′(t_0), since

lim_{h→0} d(c(t_0 + h), l(t_0 + h)) / h = lim_{h→0} ‖ (c(t_0 + h) − c(t_0))/h − c′(t_0) ‖ = 0        (2)

where the metric d is derived from the norm, d(x, y) := ‖x − y‖. So the smoothness
of a curve c is determined by its tangency with a special curve, an arc, l.
Remember (Appendix B) nearly any ODE may be rewritten as a vector field

x′ = V(x)

where V : Rn → Rn is the vector field, and a solution is a curve σ_{x_0} : I → Rn
with initial condition σ_{x_0}(0) = x_0 ∈ Rn satisfying

(d/dt) σ_{x_0}(t) = V(σ_{x_0}(t)).
The fundamental result of ODEs is: if V is Lipschitz continuous then there exists
a collection of solutions which generates a unique local flow F(x, t) := σ_x(t).
We generalize this result in Chapter 1 using the idea contained in (2) that a
curve can represent a vector or derivative. In analogy with vectors on a linear
space, we study arcs on a metric space. Whereas the vector field V specifies a
direction V (x) ∈ Rn at each point x ∈ Rn to which solutions must be tangent,
an arc field X specifies a direction with an arc at each point x. So an arc field
is a map X : M × [−1, 1] → M with X (x) : [−1, 1] → M being the arc at the
position x ∈ M .
To make the generalization claimed in the previous paragraph more concrete,
let us show how every vector field V may be naturally represented as an arc
field X. Define X : Rn × [−1, 1] → Rn by X(x, t) := x + tV(x). If σ_{x_0} : I → Rn
is a solution to the vector field problem, then σ_{x_0} is also tangent to X at each
value t ∈ I in the sense that

lim_{h→0} d(σ(t + h), X(σ(t), h)) / h = 0.
To check this, notice

lim_{h→0} d(σ(t + h), X(σ(t), h)) / h
= lim_{h→0} d(σ(t + h), σ(t) + hV(σ(t))) / h
= lim_{h→0} ‖ (σ(t + h) − σ(t))/h − V(σ(t)) ‖
= ‖ (d/dt) σ(t) − V(σ(t)) ‖ = 0.
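Composing the arcs X(·, h) over many short steps recovers the flow; on Rn this is just Euler's method. A small Python sketch (my illustration, with the sample field V(x) = −x, whose flow is x e^{−t}) makes the tangency claim concrete:

```python
import math

# Sketch (illustrative only): the arc field X(x, t) := x + t V(x) attached to
# the vector field V(x) = -x on R. Composing short arcs X(., h) is Euler's
# method, and it converges to the flow F(x, t) = x * exp(-t).

def V(x):
    return -x

def X(x, t):
    """Arc at position x, evaluated at time t."""
    return x + t * V(x)

def approx_flow(x, t, n):
    """Follow the arc field with n steps of size t/n."""
    h = t / n
    for _ in range(n):
        x = X(x, h)
    return x

exact = 2.0 * math.exp(-1.0)
err_coarse = abs(approx_flow(2.0, 1.0, 10) - exact)
err_fine = abs(approx_flow(2.0, 1.0, 1000) - exact)
assert err_fine < err_coarse  # refinement improves the tangency approximation
assert err_fine < 1e-3
```

Chapter 1 carries out exactly this composition argument on an abstract metric space, where no addition x + tV(x) is available and the arc X(x, ·) itself is the primitive datum.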

The motivation for generalizing the calculus is to analyze dynamics (i.e.,
flows) on such archetypical examples of metric spaces as the infinite-dimensional
space L2(R), manifolds, and the space of non-empty compact subsets of the
plane H(R2).

0.2 Example: flows on L2 (R)
The space of square integrable functions L2 (R) (see Appendix A.1 for a precise
definition) is a linear space and may seem an unlikely candidate to yield novel
results through our program of abstracting classical results to metric spaces
while avoiding the use of any linear structure. However, for this most elemen-
tary of all infinite-dimensional spaces—this Hilbert space—the linear structure is
actually a hindrance to understanding some of its most basic flows.

Example 2 On M := L2(R), the (Hilbert) space of square integrable functions
of one real variable, the metric is derived from the L2 norm:

d(f, g) := ( ∫ (f − g)² dµ )^{1/2} = ‖f − g‖₂.

What is the simplest example of a flow on M? For many visual thinkers, translating
the graph leaps to mind:

F(f, t)(x) := f(x + t).

Figure: the graphs of f(x + t) and f(x)
The two flow properties are automatically verified: (i) F (f, 0) = f and (ii)
F (F (f, s) , t) = F (f, s + t) for any f ∈ L2 (R). In fact {F (·, t) |t ∈ R} is
clearly a family of isometries of M .

This example seems so perfectly regular as to seem trivial. However, a confounding
blow to our intuition is that for most initial conditions f, the curves
F(f, ·) are non-differentiable with respect to either Gâteaux or Fréchet differentiability
(notions we won’t use and won’t define). To get a feel for this situation,
consider the initial condition f := χ_{[0,1]}. Here χ_S represents the characteristic
function of a set S, i.e.,

χ_S(x) := 1 for x ∈ S, and 0 otherwise.

For f := χ_{[0,1]},

(F(f, t + h) − F(f, t)) / h = (1/h) (χ_{[1+t,1+t+h]} − χ_{[t,t+h]})

has norm √(2/h) and does not converge to a member of L2(R) as h → 0. The
linear structure of the vector space L2(R) is not helping in our quest to analyze
this flow.¹
Even more fundamentally bothersome is the fact that the speed of the flow is
not locally bounded, i.e., the speed of the curves F (f, ·) can become arbitrarily
large on any neighborhood of M .
(Here we are referring to the notion of speed defined technically above. The
speed of F (f, ·) is not related to the rate the graph is translated on the R axis—
which is constantly 1. The metric is biased toward the structure of addition
of functions in order to achieve a norm and is less sensitive to comparing how
similar the graphs appear. Reread the definitions carefully so as not to be misled
by initial intuition.)
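The blow-up of the difference quotient can be seen on a grid. The following Python sketch (a grid-based illustration, not the book's computation) estimates ‖F(f, h) − f‖₂ / h for f = χ_{[0,1]} numerically and compares it with the exact value √(2/h), which grows without bound as h → 0:

```python
import math

# Sketch (illustrative only): the translation flow F(f, t)(x) = f(x + t) on
# L^2(R) has locally unbounded speed. For f = chi_[0,1], the difference
# quotient ||F(f, h) - f|| / h equals sqrt(2/h), which blows up as h -> 0.

def chi01(x):
    """Characteristic function of [0, 1]."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def quotient_norm(h, dx=1e-4):
    """Grid approximation of ||f(x + h) - f(x)||_2 / h over [-1, 2]."""
    n = int(3.0 / dx)
    total = 0.0
    for i in range(n):
        x = -1.0 + i * dx
        total += (chi01(x + h) - chi01(x)) ** 2 * dx
    return math.sqrt(total) / h

assert quotient_norm(0.01) > quotient_norm(0.1)  # speed grows without bound
assert abs(quotient_norm(0.1) - math.sqrt(2.0 / 0.1)) < 0.2
```

Shrinking h by a factor of 10 multiplies the quotient by √10, numerically confirming that no bound ρ on the speed of F(f, ·) exists near f.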
This difficulty with translation is at the heart of many obstacles to answering
the well-posedness of partial differential equations (PDEs), since translation is
the solution of

∂F/∂t = ∂F/∂x.

This is the simplest non-trivial partial differential equation, and yet we already
see the “unbounded” property of some functional analysis operators rearing its
head. This warns us about the difficulties inherent in transporting the language
and intuition of continuous dynamics in finite dimensions to infinite dimensions
or more general metric spaces. L2 is a beautiful, complete metric space which
is natural to consider as an environment for solving PDEs, but the pitfall men-
tioned in this paragraph may lead us to widen our search to other metric spaces.

Example 3 Another basic flow on M := L2 (R) is vector space translation,
G : L2 (R) × R → L2 (R) given by

G (f, t) := f + tg

for any choice of g ∈ L2 (R). The evolution of the graph of Gt (f) as t changes
is not quite as easy to visualize as Example 2; but since G respects the vector
space structure, it is much tamer analytically. Verifying the flow properties is
trivial. Continuity in particular follows immediately from the properties of the
norm. In fact the speed is globally bounded by ρ := ‖g‖ since

d(G(f, t), G(f, s)) = ‖(f + tg) − (f + sg)‖ = |t − s| ‖g‖.

G(·, t), like F(·, t) above, is again a family of isometries of M:

‖G(f_1, t) − G(f_2, t)‖ = ‖(f_1 + tg) − (f_2 + tg)‖ = ‖f_1 − f_2‖.

1 This difference quotient does, of course, converge to a difference of Dirac point distributions
δ_{t+1} − δ_t if we bother to define the wider notion of a distribution in the linear dual.
Admittedly we’re being overly critical on the value of linearity at this stage, but read on and
note for yourself why even the use of covectors won’t simplify the analysis.

How do our two flows F and G from Examples 2 and 3 compare? How do
they interact on M, and what does this tell us about M ? Let us determine the
reachable set for this pair of flows. The reachable set is an object of fundamental
concern in the subject of control theory, which we take up in greater detail in
§3.5. Imagine we are running some process which allows us to apply either flow
F or G successively, at will, to an initial condition in our configuration space
M. The reachable set starting from the initial point f is then defined as

R_{F,G}(f) := { G_{s_n} F_{t_n} G_{s_{n−1}} F_{t_{n−1}} ··· G_{s_1} F_{t_1}(f) ∈ M | s_i, t_i ∈ R, n ∈ N }.

Here we are dropping the composition parentheses, using Gs Ft (f ) = G (F (f, t) , s)
to simplify notation; the general associativity of composition means the extra
parentheses are unnecessary. So starting with the initial condition f ∈ M we
can steer our process in finite time to any configuration in RF,G (f) ⊂ M by
judiciously applying F and G by various amounts si and ti .
If RF,G (f) is dense in M, then M is said to be controllable by F and
G. For instance we could imagine M consists of the space of possible signals a
circuit can generate in a looped line. G then represents adding a waveform in
the shape of the graph of g; and F would correspond to time lag as the signal
naturally cycles around the loop. The reachable set in this idealized scenario
represents the possible signals that can be generated with our circuit.
As a first inquiry into the nature of RF,G (f) for our two types of translation
on L2 (R), let us test whether F and G commute, i.e., does F (G (f, s) , t) =
G (F (f, t) , s)? If so the reachable set will be merely a two-dimensional subset
of the infinite-dimensional space M = L2 (R), since any member collapses to
the simple representation

G_{s_n} F_{t_n} G_{s_{n−1}} F_{t_{n−1}} ··· G_{s_1} F_{t_1}(f) = G_{s_1+···+s_n} F_{t_1+···+t_n}(f) = G_s F_t(f).

Perhaps surprisingly F and G are usually far from commutative, and by how
much depends on the function g:

[F(G(f, s), t) − G(F(f, t), s)](x)        (3)
= [f(x + t) + s g(x + t)] − [f(x + t) + s g(x)] = s [g(x + t) − g(x)].

From the point of view of differentiable flows on a manifold, we would at least
expect d(F(G(f, t), t), G(F(f, t), t)) = O(t²), and, in fact, continuing from line
(3) we calculate

lim_{t→0} (F(G(f, t), t) − G(F(f, t), t)) / t² = dg/dx
if g is differentiable. Following the ideas of geometric control theory, this break
in holonomy suggests the reachable set is more than two-dimensional. In fact
RF,G (f ) should be dense in the span of the set of all Lie brackets generated by
F and G.
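The computation in line (3) is easy to confirm pointwise. The following Python sketch (an illustration with my sample choices of f and g, not the book's machinery) checks the commutator defect s [g(x + t) − g(x)] exactly:

```python
import math

# Sketch (illustrative only): pointwise check of line (3). With
# F(f, t)(x) = f(x + t) (graph translation) and G(f, t) = f + t g (vector
# translation), the two flows fail to commute by exactly s * (g(x+t) - g(x)).

g = math.sin                        # sample g; any g works pointwise here
f = lambda x: math.exp(-x * x)      # sample initial condition

def F(f, t):
    """Graph translation flow."""
    return lambda x: f(x + t)

def G(f, s):
    """Vector space translation flow along g."""
    return lambda x: f(x) + s * g(x)

s, t, x = 0.7, 1.3, 0.25
lhs = F(G(f, s), t)(x) - G(F(f, t), s)(x)
rhs = s * (g(x + t) - g(x))
assert abs(lhs - rhs) < 1e-12
```

The defect vanishes for all s and t exactly when g is constant under translation, i.e., g is constant, illustrating how the size of the non-commutativity is controlled by g.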

This turns out to be exactly correct:

span{ g^(n) | n ∈ N } ⊂ R̄_{F,G}(f)        (4)

where S̄ denotes the topological closure of a set S ⊂ M, span S denotes
the closed linear span of S in M, and g^(n) is the nth derivative of g. There
are algorithms for steering any initial
condition to a member of the reachable set, and for many choices of g ∈ M , e.g.,
g(x) := e^{−x²}, we find that all of M is controllable. Continuing the application
of this model to signal processing above, this means that for the correct choice
of g, any signal can be synthesized by alternately applying F and G. In the
course of this book we will clarify the terminology and ideas surrounding these
claims, culminating in §5.1 with Theorem 88.

One corollary is that low-frequency trigonometric series of the form Σ a_n e^{ix/n}
are dense in L2[a, b] for any interval [a, b] (Theorem 90). Here’s a quick motivation
for this shocking fact: approximate f(x) ≈ Σ b_n x^n, then notice

x^n = i^{−n} (d^n/dt^n) e^{itx} |_{t=0}.

Now approximate the derivatives with finite differences (the
formula is reviewed in Example 129). Example 95 gives the coefficients for
approximating x³ to arbitrary accuracy using just 3 low-frequency sine functions.
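Example 95's construction uses finite differences of derivatives; as a rough numerical hint in the same spirit (my least-squares illustration, not the book's construction), a handful of low-frequency sines sin(x/n) already capture x³ on [0, 1] remarkably well:

```python
import numpy as np

# Sketch (illustrative only, not the book's construction): least-squares fit
# of x^3 on [0, 1] by the low-frequency sines sin(x/1), ..., sin(x/N).
x = np.linspace(0.0, 1.0, 200)
target = x ** 3

def residual(N):
    """RMS residual of the best fit using N low-frequency sines."""
    A = np.column_stack([np.sin(x / n) for n in range(1, N + 1)])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.sqrt(np.mean((A @ coef - target) ** 2))

assert residual(3) < residual(1)   # more low frequencies, better fit
assert residual(3) < 1e-3          # three sines already fit x^3 closely
```

The fitted coefficients are huge and nearly cancelling, a preview of the instability discussed in §5.1.5 and the reason the damped families of §5.2.3 are introduced.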
To achieve these results we generalize the Chow-Rashevsky Theorem to met-
ric spaces: the closure of the reachable set is the closure of the integral manifold
to the distribution consisting of the set of all arc fields bracket-generated from
F and G, which gives line (4). The proof uses generalized versions of folia-
tions (Chapter 3), Lie brackets (§2.2), geometrical distributions (§3.4), and an
arithmetic of flows which works on a geometric as well as algebraic level (§2.1,
Theorem 43). To give a firm footing for such complicated constructions us-
ing only the abstract building blocks of metric spaces, we devote Chapter 1 to
carefully establishing the fundamental properties of existence, uniqueness and
regularity of flows on M and Chapter 2 to the algebraic properties.

0.3 Example: flows on manifolds
The ultimate source of inspiration for metric space generalization of geometri-
cal and dynamical ideas is the theory of differentiable manifolds, in addition to
being the colliery for our examples. The elaborate apparatus constructed to do
calculus on a differentiable manifold is remarkably successful in extending tra-
ditional calculus on Rn to a more general setting, indispensable in fundamental
areas of mathematics and physics. Much of this apparatus is ready to be further
extended to metric spaces.
Digesting the brief overview of differentiable manifolds in this section is not
necessary to digest the rest of the material in this book, but a familiarity with
manifold theory will allow you to anticipate all our results. This is an apology
to the beginner for how dense the next paragraph is. [23] or [12] or many other

proper introductions expand the following paragraph to chapters; [1] gives an
introduction to infinite-dimensional manifolds, or Banach manifolds. We make
no pretense toward rigor in this section, but promise to rectify this imprecision
in the presentation of the generalizations throughout the remainder of the text.
We focus instead on the properties of manifolds which are naturally generalized
to metric spaces.

Example 4 Define the torus

T 2 := S 1 × S 1 = {(x mod 2π, y mod 2π) = (x, y) mod 2π|x, y ∈ R}

where x mod 2π is the remainder upon dividing x by 2π. The torus T 2 is most
easily visualized geometrically when embedded in R3 as a doughnut: first embed
the circle S 1 in R3 via a map such as c(t) := (0, 3 + sin t, cos t), then rotate the
circle around the z-axis with the embedding S : T 2 → R3,

            [ cos s   −sin s   0 ] [     0     ]
S(s, t) :=  [ sin s    cos s   0 ] [ 3 + sin t ] .
            [   0        0     1 ] [   cos t   ]

Figure 1: T 2 embedded in R3

(This particular construction is easily extended to give more 2-dimensional
manifolds such as found on pp. 43-47.)
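As a quick sanity check (my illustration, not part of the text), the embedding S can be verified numerically: every image point satisfies the doughnut equation (√(x² + y²) − 3)² + z² = 1, since rotation about the z-axis preserves distance from that axis:

```python
import math

# Sketch (illustrative only): the embedding S(s, t) rotates the circle
# c(t) = (0, 3 + sin t, cos t) about the z-axis. Every image point satisfies
# (sqrt(x^2 + y^2) - 3)^2 + z^2 = 1, the implicit equation of the doughnut.

def S(s, t):
    r = 3.0 + math.sin(t)
    # rotation by s about the z-axis applied to (0, r, cos t)
    return (-math.sin(s) * r, math.cos(s) * r, math.cos(t))

for s, t in [(0.0, 0.0), (1.0, 2.0), (3.0, -1.5)]:
    x, y, z = S(s, t)
    assert abs((math.hypot(x, y) - 3.0) ** 2 + z ** 2 - 1.0) < 1e-12
```

The radius 3 of the core circle versus the radius 1 of the rotated circle is what keeps the embedding injective; any core radius greater than 1 would do.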
T 2 is an archetypical example of a manifold, which by definition is a set
which is locally flat; this means every point x ∈ T 2 has a neighborhood U with
φ : U → V a homeomorphism onto its image in the topological vector space V .
For the torus V = R2 which makes it 2-dimensional, hence the superscript in
T 2 . φ is called a chart for T 2 which gives local coordinates on the manifold. T 2
is also a differentiable manifold, which means the charts “match up nicely”:
for any two charts φ_i : U_i → V, i = 1, 2, the composition φ_2 ◦ φ_1^{−1}
is a differentiable map from V to V wherever it is defined. The existence of

these nicely-matched-up charts means any calculus done on V may be applied to
T 2 . Charts are easy to construct for T 2 ; the only difficulty arises near a point
(x, y) ∈ T 2 when x = 0 or y = 0, which the reader is encouraged to resolve.
In Figure 1 you can see the locally flat patches of R2 nicely matching up as
grid lines to form the manifold, but this is an aberration amongst manifolds.
Excepting the Klein bottle, no other compact 2-dimensional manifold has such
perfectly aligned patches globally. Consider, e.g., a globe, where longitude and
latitude grid lines degenerate at the poles; results from topology prove that any
way you attempt to construct such a grid on a globe will end with at least one
point of degeneration [40].

Particularly important for our investigation of geometry and dynamics is
the concept of the tangent space of a manifold, roughly the space of all possible
directions in which you can move from a point within the manifold. The tangent
space may be defined in several equivalent ways; let’s outline the most relevant
for our purposes. A tangent vector v at a point x ∈ M is an equivalence class
of curves in the manifold c : (−δ_c, δ_c) → M with c(0) = x under the equivalence
relation c_1 ∼ c_2 if (φ ◦ c_1)′(0) = (φ ◦ c_2)′(0) for any chart φ, i.e., c_1 and
c_2 are differentiable and tangent to each other at x. This equivalence relation
distills the idea of a “direction with magnitude” v located at the position x
into an abstract mathematical object, represented by an explicit, constructible
object c′(0). The tangent space at x is the set of all equivalence classes under
this relation, denoted T_x M. The tangent bundle T M is the collection of
all tangent spaces, i.e., the disjoint union T M := ⊔_{x∈M} T_x M, representing all
possible directions of motion (x, v). A vector field is a map f : M → T M with
f (x) ∈ Tx M , and represents a rule for motion on the manifold. A solution to
a vector field is a curve σ : (α, β) → M which is tangent to the vector field at all
points, i.e., a curve which follows the rule. More concretely, if the translation
map τ s : R → R is given by τ s (t) := s + t then σ ◦ τ s : (α − s, β − s) → M has
σ◦τ s ∈ f (σ (s)) for all s ∈ (α, β). In other words σ follows the rules of motion of
f through M . Assuming 0 ∈ (α, β) we say σ (0) ∈ M is the initial condition
of the solution, the place where the motion σ begins. By the Fundamental
Theorem of ODEs (Appendix B) applied to charts, we always have solutions to
a smooth vector field, and we can combine them to give a local flow. Conversely,
any differentiable flow on a manifold is generated by a vector field.

Example 5 A simple flow that spirals around the torus F : T 2 × R → T 2 is
defined by Ft (x, y) = (x + t, y + at) mod 2π. If a is a rational number then the
path F(x,0) (R) closes and is homeomorphic to the circle S 1 (Figure 2). These
paths are evocatively described as toral helices. The different paths, starting
from different points (x, 0), partition T 2 . This partition is an example of a 1-
dimensional foliation of the torus. Two more foliations perpendicular to each
other are illustrated by grid lines in Figure 1 above.
When a is not a rational number the path F(x,0) (R) does not close on itself
and is homeomorphic to R, as a dense subset of T 2 (Figure 3). Still there are
an infinite number of disjoint paths, starting from different points (x, 0), which

Figure 2: Rational flow-path

again foliate T 2 .
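The rational case is easy to verify directly. A short Python sketch (illustrative, with my sample slope a = 3/2 = p/q) checks that the spiral flow satisfies the flow axiom and that its path closes after time t = 2πq:

```python
import math

# Sketch (illustrative only): the spiral flow F_t(x, y) = (x + t, y + a t)
# mod 2*pi on the torus. For rational a = p/q the path closes after t = 2*pi*q.

TWO_PI = 2.0 * math.pi

def F(p, t, a):
    x, y = p
    return ((x + t) % TWO_PI, (y + a * t) % TWO_PI)

def close_mod(u, v, eps=1e-9):
    """Equality of angles mod 2*pi, robust to wraparound."""
    d = abs(u - v) % TWO_PI
    return min(d, TWO_PI - d) < eps

a = 3.0 / 2.0                 # rational slope p/q = 3/2
start = (1.0, 0.0)
end = F(start, TWO_PI * 2, a)  # t = 2*pi*q with q = 2
assert close_mod(end[0], start[0]) and close_mod(end[1], start[1])

# flow axiom (ii): F_s(F_t(p)) = F_{s+t}(p)
p1 = F(F(start, 0.7, a), 1.9, a)
p2 = F(start, 2.6, a)
assert close_mod(p1[0], p2[0]) and close_mod(p1[1], p2[1])
```

For irrational a no time t > 0 returns the path to its start, which is the source of the density pictured in Figure 3.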

Figure 3: Irrational flow-path

The solutions to the vector field f (x, y) := (1, a) generate the flow F . Here
the number (1, a) really represents the curve

c (t) := [(x, y) + t (1, a)] mod 2π ∈ T 2

which itself is a representative of an equivalence class of curves under tangency,
and so [c] ∈ T_{(x,y)} T 2. Sometimes it’s easier to just construct the flow than
to think about the vector fields; but vector fields are generally considered pri-
mary, and often have great descriptive power, giving a link between algebra and

geometry. E.g., the vector field illustrated here

V (x, y) := (cos y, sin x)

is smooth on T 2 since it matches up at 0 and 2π. V is easily solved on the plane
and transferred by charts (or S) to the manifold. However, the flow paths of V
do not foliate T 2 as before, since there are 4 fixed points; instead this leads to a
stratification since the paths are of different dimension—namely 1 and 0.
Another inequivalent foliation of T 2 is given by the paths in Figure 4, consisting
of two closed circles and a continuum’s worth of toral helices which accumulate
on the circles. More circles may be added, producing topologically distinct
foliations. Essentially the final twist we can add in foliating a 2-dimensional
compact manifold is a Reeb component, illustrated in Figure 5. See [40] for an
elementary classification of foliations on compact manifolds.

Higher-dimensional foliations of manifolds are vital to the study of geometry
and dynamics. Examples of 2-dimensional foliations of a 3-dimensional space
are illustrated on pages 42-47. Just as integral curves and 1-dimensional fo-
liations are generated by vector fields or by 1-dimensional subbundles of the
tangent bundle T M , surfaces and n-dimensional foliations are generated by n
transverse vector fields or by n-dimensional subbundles of T M called distribu-
tions2 . Not all distributions may be integrated to generate foliations, not even if
they are smoothly defined (see Example 58). However, a simple condition called
“involutivity” characterizes the integrable distributions—this characterization is
referred to as Frobenius’ Foliation Theorem.
To define involutivity, we use the Lie bracket [f, g] of two vector fields f
and g, which is a new vector field on M. The vector [f, g] (x) is the tangency
equivalence class represented by the curve G−√t F−√t G√t F√t (x) where F and
G are the respective flows of f and g. I.e., start at x ∈ M and move in an
2 The term “distribution” is not to be confused with the several other mathematical con-

cepts that share its name. As a striking case of poor terminology, when studying dynamics
on abstract function spaces three of these definitions may be needed in a single example:
probability distributions, functionals, and subbundles, e.g., in Example 85.

Figure 4: 4 leaves of another toral foliation are depicted: 2 circles and two
partially-complete squished toral helices. Everyone loves a Slinky.

Figure 5: Reeb component

approximate-parallelogram following F , then G, then F backwards, then G
backwards. The little “parallelogram” almost returns to x, but √t has infinite
speed at t = 0 which cancels the natural tendency of the parallelogram to
close, at least to order O (t), giving a curve with finite speed, Figure 6.

Figure 6: O (t) gap represents the new vector [f, g]

Why do we use √t? If we restrict our attention to x0 ∈ Rn and move ε > 0 in
each of the directions around the parallelogram in turn with the f and g vector
field directions starting at x0 , then using Taylor series, we get a curve

x (ε) = x0 + ε^2 ( ∂g/∂x f (x0 ) − ∂f/∂x g (x0 ) ) + O (ε^3) .

The displacement is thus second order in ε, so reparametrizing with ε = √t
produces a curve with finite nonzero speed in t.
The bracket encapsulates a subtle difference between f and g which is crit-
ical to appreciate. For example [f, g] = 0 if and only if F and G commute,
meaning the parallelogram closes perfectly. But there is much more. The Lie
bracket gives us fundamental geometrical information about the subbundle of
T M generated by f and g, i.e., the distribution
∆ (f, g) := { span {f (x) , g (x)} ⊂ Tx M | x ∈ M } .
The distribution ∆ (f, g) gives a plane at each point x and so is also called a plane
field. Frobenius’ Foliation Theorem says ∆ (f, g) foliates M into 2-dimensional
surfaces (“leaves”) exactly when [f, g] (x) ∈ ∆ (f, g) for all x ∈ M. Higher-
dimensional foliations are determined by a straightforward generalization.
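To see the bracket's defect-of-closure numerically, here is a small sketch (in Python; the fields f (x, y) = (1, 0) and g (x, y) = (0, x) are our own hypothetical choice with exactly computable flows, and for them [f, g] = (0, 1)):

```python
import math

# Exact flows of the illustrative fields f(x, y) = (1, 0), g(x, y) = (0, x):
def F(t, p):            # flow of f: translate in x
    x, y = p
    return (x + t, y)

def G(t, p):            # flow of g: shear with speed proportional to x
    x, y = p
    return (x, y + x * t)

def bracket_curve(t, p):
    """The curve G_{-sqrt t} F_{-sqrt t} G_{sqrt t} F_{sqrt t}(p), t >= 0."""
    s = math.sqrt(t)
    return G(-s, F(-s, G(s, F(s, p))))

p0 = (2.0, 5.0)
t = 1e-4
q = bracket_curve(t, p0)
defect = ((q[0] - p0[0]) / t, (q[1] - p0[1]) / t)
print(defect)   # approaches [f, g](p0) = (0, 1) as t -> 0
```

The parallelogram fails to close by (0, t) to leading order, which is exactly t · [f, g] (p0).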

Frobenius’ Theorem has an important corollary for control theory, the Chow-
Rashevsky Theorem, concerning the reachable set of a control system: If ∆ (f, g)
is involutive then the situation is simple, and the reachable set Rf,g (x) is the
leaf of the foliation through x; both sets consist of all points lying on piecewise
differentiable paths through x whose derivatives are linear combinations of f
and g. If ∆ (f, g) is not involutive then [f, g] is transverse
to any surface tangent to f and g, so cycling through the motions of f and g
according to the bracket definition sends us away from the tangent surface, and
thus the reachable set is not as simple as in the involutive case. But there is a
simple solution in this case as well. If ∆ (f, g) is not involutive, then we may form

the distribution ∆ [f, g] bracket-generated by f and g, consisting of the linear
combinations of f and g and all finitely iterated brackets, such as [[f, [f, g]] , f ].
By definition ∆ [f, g] is involutive and so foliates M by Frobenius’ Theorem.
The Chow-Rashevsky Theorem says the closure of the reachable set Rf,g (x) is
the leaf of the foliation from the bracket-generated distribution ∆ [f, g] through
x. This is easy to believe now since iterated brackets of f and g are tangent to
the flows of some complicated composition of F and G. E.g.,
[f, [f, g]] ∼ F_{−t^{1/4}} G_{−t^{1/4}} F_{t^{1/4}} G_{t^{1/4}} F_{−√t} G_{−t^{1/4}} F_{−t^{1/4}} G_{t^{1/4}} F_{t^{1/4}} F_{√t} (x) .

So the flow of any iterated bracket is in Rf,g (x). This shows that the leaf
is contained in the closure of the reachable set; the (less interesting) reverse
inclusion follows from the Nagumo-Brézis invariant-set theorem, proven in §1.3.

Vector fields on a manifold are generalized to metric spaces with arc fields
as a special family of curves (cf. the technical description on page 3). The
definition of Lie brackets (§2.2) is essentially the same on metric spaces as given
above for manifolds. But to define distributions (§3.4) using spans of arc fields,
and also to define involutivity, we need an arithmetic for flows on a metric space—
but metric spaces have no usable linear structure by definition. Surprisingly,
though you cannot add points together in a metric space, you can add arc fields
in a natural way which faithfully generalizes the linear properties of vector
fields on manifolds: scalar multiplication is defined by changing the speed of
the curves, and arc fields can be added simply by composing them (§2.1). Then
global foliations on metric spaces follow with a new proof of Frobenius’ Theorem
(Chapter 3). The Chow-Rashevsky Theorem is generalized in §3.5.

0.4 Example: flows on a space with no linear structure
As a final introductory example we consider a metric space which resists any
natural ascription of a linear structure, but still gives a fertile environment for
dynamics.

Example 6 Let (Rn , d) be the usual n-dimensional Euclidean space. The met-
ric space H (Rn ) is the set of all nonempty compact subsets of Rn and the
Hausdorff distance is given by
dH (a, b) := max { sup_{x∈a} inf_{y∈b} d (x, y) , sup_{y∈b} inf_{x∈a} d (x, y) } .

Using the simplifying notation d (x, a) := inf_{y∈a} d (x, y) =: d (a, x) for x ∈ Rn
and a ⊂ Rn , we have

dH (a, b) = sup_{x∈a, y∈b} { d (x, b) , d (y, a) } .

H (Rn ) has several useful topological properties in common with Rn . It is
separable, complete and even locally compact (separability is obvious by consid-
ering finite subsets of Rn ; for completeness, see [8]; for local compactness, see
[33, p. 183]).
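For finite subsets of Rn the Hausdorff distance can be computed straight from the definition. A minimal sketch (in Python; the function names are ours):

```python
import math

def d(x, y):
    """Euclidean distance in R^n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def d_point_set(x, a):
    """d(x, a) := inf over y in a of d(x, y), for a finite set a."""
    return min(d(x, y) for y in a)

def hausdorff(a, b):
    """dH(a, b) = max( sup_{x in a} d(x, b), sup_{y in b} d(y, a) )."""
    return max(max(d_point_set(x, b) for x in a),
               max(d_point_set(y, a) for y in b))

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 0.0)]
print(hausdorff(a, b))   # 1.0: the point (1, 0) is at distance 1 from b
```

Note the symmetrizing max is essential: the one-sided suprema alone are not symmetric in a and b.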

What makes this space interesting for modeling is that shapes of homogeneous matter are merely points in this metric space. A circle, a rectangle, a
pentagram: all points in H (R2 ). A ball, a box, a cloud: points in H (R3 ).
Exercise 7 Find dH (a, b) when a ∈ H (R2 ) is the unit coordinate box

a := { (x, y) | 0 ≤ x ≤ 1, 0 ≤ y ≤ 1}

and b ∈ H (R2 ) is the unit ball

b := { (x, y) | x^2 + y^2 ≤ 1 } .

Hint: Since a ≠ b we cannot have dH (a, b) = 0.

Figure 7: dH (ball, square) =?

Exercise 8 Determine which points are in the ball BdH (0, 1) ⊂ H (R2 ).
Hint: BdH (0, 1) ≠ Bd (0, 1), and the word “point” is easily misinterpreted.

As further motivation for the potential of this space, answer the question:

What is a curve in H (Rn ) ?

Looking at a black and white newspaper photo with a magnifying glass we see a
collection of black dots. This photograph may be thought of as a point in
H (R2 ), the compact set representing the union of the black dots which forms
a closed and bounded subset of R2 . Now if a black and white photograph is a
point in H (R2 ), a black and white film clip is a curve in H (R2 ). These points of
H (R2 ), the photographs or individual frames of the movie, move continuously
with respect to the Hausdorff metric as time goes by (or at least approximately
continuously, since there are only a finite number of frames in a film clip). Color
cinema is a curve in H (R3 ).
The ability to describe the motion of complex patterns makes H (Rn ) a very
interesting space. It is easy to imagine the motion and evolution of a homogeneous material simply as a curve in this metric space: a moving cloud may be
described as a curve in H (R3 ), and a lightning stroke is a very fast curve
in H (R3 ); the growth of a bacteria colony in a petri dish and the evolution of a
snowflake growing from a tiny ice seed are both geometrically curves
in H (R3 ).

Very well then, H (Rn ) has a strong potential for describing all sorts of
shape changes, but do we have any control on this profusion of information
with which H (Rn ) presents us? How do we mathematically encapsulate motion
or characterize forces on such a space? Can we generalize differential equations,
somehow? Even then, could we stomach any calculations with this complicated
metric? Happily, all of these questions have positive answers.
Let’s construct some curves in H (Rn ).

Example 9 For x, y ∈ Rn , let λxy : [−1, 1] → Rn be the line defined by

λxy (t) := (1 − t) x + ty.

For k functions fi : Rn → Rn define the arc field X : H (Rn ) × [−1, 1] → H (Rn ) by

Xa (t) := ∪_{x∈a} ∪_{i=1}^{k} λ_{x fi (x)} (t)

which describes curves from Xa (0) = a to Xa (1) = ∪_{i=1}^{k} fi (a) in H (Rn ). In
Chapter 7 this arc field X is shown to generate a flow F on H (Rn ). When the
fi are affine and contractive Ft (a) converges to a unique fixed point in H (Rn )
as t → ∞, the convex hull of a fractal. Another example in §7.5 characterizes
the reachable set of a control system as the limit of the flow of a similar arc
field.

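The arc field of Example 9 can be watched on a toy discretization: represent a compact set by a finite point cloud (rounded to a grid so it stays finite) and compose small steps X_h. The sketch below (in Python, our own illustration with the hypothetical contractive affine maps f1 (x) = x/2 and f2 (x) = x/2 + 1/2 on R) shows the cloud settling toward the interval [0, 1]:

```python
# Toy point-cloud discretization (ours, not from the text) of the arc field
# X_a(t) = union of the lines lambda_{x, f_i(x)}(t) over x in a and i.
def X(a, t, fs):
    # round to a grid so the point cloud stays finite under iteration
    return sorted({round((1 - t) * x + t * f(x), 3) for x in a for f in fs})

fs = (lambda x: x / 2, lambda x: x / 2 + 0.5)
cloud = [0.0, 4.0]               # the initial compact set a
for _ in range(40):              # follow the flow by composing X_h, h = 1/4
    cloud = X(cloud, 0.25, fs)
print(min(cloud), max(cloud))    # the cloud settles toward [0, 1], the convex
                                 # hull of the attractor of {f_1, f_2}
```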
0.5 Chapter outline
Part I: Theory In these chapters our will is bent to proving generalizations
of the basic theorems of dynamical systems and differential geometry.

Chapter 1: Flows The Fundamental Theorem of ODEs is generalized, prov-
ing the well-posedness of arc fields, Theorem 12. This gives a means
for generating flows on metric spaces. Global flows are guaranteed

when an arc field satisfies the extra condition of linearly bounded
speed, Theorem 25. A fixed point is guaranteed when the arc field is
suitably contractive, Theorem 31. An invariant-set theorem general-
izing the Nagumo-Brézis Theorem is given with Theorem 33, which
is used later to piece together integral surfaces in a global foliation
theorem. Theorem 35 gives a condition analogous to a vanishing Lie
bracket which guarantees forward flows commute.
Chapter 2: Lie algebra on a metric space An arithmetic for arc fields is in-
troduced which generalizes the algebraic structure of vector fields on
a manifold. Theorem 43 elucidates which module properties general
arc fields enjoy. Then the Lie bracket is introduced and its algebraic
properties are explored. Theorem 52 shows how pull-back and push-
forward operations are natural with respect to this new Lie algebra.
Chapter 3: Foliations Transverse arc fields generate geometric distributions.
The Lie bracket is used to prove a local Frobenius theorem, showing
involutive distributions have integral surfaces, Theorem 62. These
integral surfaces are pieced together to foliate metric spaces, culmi-
nating in a global Frobenius theorem, Theorem 75. A corollary of
this result is an application to control theory with Chow’s Theorem
on a metric space, Theorem 78.

Part II: Examples

Chapter 4: Brackets on function spaces The Lie bracket and the Frobenius
Theorem are applied to simple flows on L2 (R) to make good on the
promises of §0.2. Various foliations of L2 and other function spaces
are explored.
Chapter 5: Approximation with non-orthogonal families Applications of the
results of Chapter 4 give surprising new approximation methods us-
ing non-orthogonal families of functions such as translations of a
Gaussian { e^{−(x+1/n)^2} | n ∈ N } in §5.1 and low-frequency trigonometric functions { e^{ix/n} | n ∈ N } in §5.2.
Chapter 6: More flows on function spaces PDEs are rewritten as arc fields
avoiding derivatives.
Chapter 7: Flows on H (Rn ) A continuous version of the discrete IFS fractal
generator and other flows with novel dynamics are introduced.

Some sections are not logically dependent on others. The fastest tour of the
highpoints is
§1.1 → §2.1 → §2.2 → §3.2 → §4 → §5

0.6 Prerequisites
Technically the prerequisites for understanding this book are very basic; a single
semester of undergraduate analysis which introduces the concept of a limit in
a metric space is sufficient. We’ve made efforts to keep the book self-contained
and gently introduce each concept. Certainly, those with experience in the
differentiable-manifold presentations of flows, Lie brackets and foliations will
find this generalized environment easy to apprehend. When released from the
details of charts, atlases and coordinates, new students may likewise find these
concepts simpler to grasp.
Several proofs are extremely long. This is a good place to apologize, justify
ourselves, and prepare the reader. This is an abstract subject with concrete
claims. We are a bit defensive, therefore, and feel the need to detail every
pedestrian step exhaustively. Instead of relying on our readers’ mathematical
dexterity in this unfamiliar terrain, we spoiled the fun and printed out six-page
proofs. Instead of slogging through, line by line, it may be more productive for
you to read the proof’s outline, then create one yourself.

0.7 Abridged version of the book
Generalizations of the major ideas in dynamics and geometry can be fruitfully
made to metric spaces. As well as greater descriptive power, the extra generality
gives insight into classical questions on infinite-dimensional spaces.
A vector field on a manifold is recast as an arc field X, that is, a set of
curves on a metric space M , each curve representing a direction, i.e., X is a
continuous map X : M × [−1, 1] → M such that for all x ∈ M , X (x, 0) = x.
Tangency between two arc fields X and Y is given by the condition

d (Xx (t) , Yx (t)) = o (t) .

If X satisfies the regularity conditions E1 and E2 (§1.1.1) on a complete
metric space, then there exists a unique local flow tangent to X. If X has
linearly bounded speed, it generates a global flow.
An arithmetic for arc fields is given by X + Y and aX for a ∈ R defined by

(X + Y )t (x) := Yt Xt (x)

(aX)t (x) := Xat (x) .
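A quick plausibility check of this arithmetic (in Python, on the toy space M = R with our own choice of fields): the composition (X + Y)_t := Y_t X_t is tangent, to order o (t), to the arc field of the vector-field sum f + g.

```python
# Arc fields X_t(x) = x + t f(x), Y_t(x) = x + t g(x) on R; the composed
# arc field (X+Y)_t(x) := Y_t(X_t(x)) should be within o(t) of the arc
# field Z of the vector-field sum f + g (here the gap is in fact O(t^2)).
f = lambda x: x ** 2
g = lambda x: 3.0 * x

def X(x, t): return x + t * f(x)
def Y(x, t): return x + t * g(x)
def XplusY(x, t): return Y(X(x, t), t)
def Z(x, t): return x + t * (f(x) + g(x))   # arc field of f + g

x0 = 1.5
for t in (1e-2, 1e-3, 1e-4):
    gap = abs(XplusY(x0, t) - Z(x0, t))
    print(t, gap / t)          # gap/t -> 0, i.e. the gap is o(t)
```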
The Lie bracket [X, Y ] of two arc fields is given by

[X, Y ] (x, t) := G−√t F−√t G√t F√t (x)

where F and G are the flows generated by X and Y .
A distribution is a set of arc fields. A distribution ∆ is involutive if for
any X, Y ∈ ∆, we have [X, Y ] ∼ ∆. An involutive distribution has a unique

maximal integral surface through each point in M . The integral surfaces, pieced
together, foliate M.

One application on M := L2 (R) shows the flows Ft (f ) (x) := f (x + t)
and Gt (f ) := f + tg bracket-generate an infinite-dimensional distribution when
g (x) = e^{−x^2} , and the reachable set is all of M . Similarly G and Zt (f ) (x) :=
e^{ixt} f (x) have an infinite-dimensional bracket-generated distribution and L2 ([a, b] , C)
is controllable with G and Z, for any choice of interval [a, b]. Consequently series of Gaussians Σ_k ak e^{−(x+1/k)^2} or low-frequency trig series Σ_k ak e^{ix/k} may
be made arbitrarily close to any square integrable function.

0.8 Acknowledgements
This theory took more than 10 years to commit to paper, though I had assumed
it could be hammered out in a few months. It’s all Axel Boldt’s fault. To
my constant irritation, he corrected countless mistakes and misunderstandings,
which really slowed down the creative process. He also introduced me to several
branches of mathematics, which distracted me from metric spaces, and made
me a more versatile mathematician. Thanks for screwing up my focus, pal.
Michael Green was the mathematician who gave me the most extensive and
useful feedback on this manuscript. David Bleecker suggested I write this book,
which was the strangest thing I had seen him do, so I took him seriously. He
has been my greatest supporter in the development of these ideas.
Except for my wife, Karen. Often when authors thank their wives, I imagine
a shrew who speeds the writing of a book by folding her arms and tapping her
feet at the doorway to the study. But Karen took an interest in all the ideas
in this book, even the applications outside her field of expertise. She was my
best sounding board, my best critic. And by introducing me to fatherhood then
guiding me for a year abroad in China, she’s been my best teacher.
Part I

Theory

Chapter 1

Flows

“Panta rhei.” (Everything flows.)
-Heraclitus, ca. 500 B.C.

The purpose of this chapter is to introduce a general method for producing
flows (dynamical systems) on a metric space. A flow may proceed forward and
backward in time F : M × (−∞, +∞) → M, or possibly only forward in time
F : M × [0, +∞) → M as in the case of diffusion. We explore the generation of
both types of flows and study some conditions which guarantee global existence,
fixed points and commutativity.

1.1 Generating flows with arc fields
This section follows the generation of flows on a manifold M from a vector field:
first we find solutions for each initial condition x ∈ M , then we piece together
the solutions with domain (−δ, δ) in a neighborhood of x to get a local flow,
which is then continued to produce a global flow with domain (−∞, ∞).

1.1.1 The fundamental theorem
The following definition is made in analogy with the representation of a vector
field on a manifold as a family of curves, detailed in §0.3.

Definition 10 An arc field on a metric space M is a continuous map X :
M × [−1, 1] → M with locally uniformly bounded speed, such that for all x ∈ M,
X (x, 0) = x.

Saying X has locally uniformly bounded speed means X (x, ·) : [−1, 1] →
M is Lipschitz, locally uniformly in x. Specifically we have

ρ (x) := sup_{s≠t} d (X (x, s) , X (x, t)) / |s − t| < ∞

(i.e., X (x, ·) is Lipschitz), and the function ρ (x) is locally bounded, meaning
there exists r > 0 such that

ρ (x, r) := sup {ρ (y) | y ∈ B (x, r)} < ∞.

A solution curve to X is a curve σ which is tangent to X throughout its
domain, i.e., σ : (α, β) → M for some open interval (α, β) ⊂ R such that for
each t ∈ (α, β)
lim_{h→0} d (σ (t + h) , X (σ (t) , h)) / h = 0,     (1.1)

i.e., d (σ (t + h) , X (σ (t) , h)) = o (h).
Arc fields are typically denoted with X, Y , or Z. The two independent
variables for arc fields, usually denoted by x and t, are often thought of as
representing space and time. We typically use x, y, and z for space variables,
while r, s, t, and h fill the time variable slot. As with flows, the variables of an
arc field X will often migrate liberally between parentheses and subscripts

X (x, t) = Xx (t) = Xt (x)

depending on which variable we wish to emphasize in a calculation.
On Rn a vector field which is Lipschitz continuous generates a local flow
constructed by Euler curves. An arc field is a faithful analogy for a metric
space, and when it satisfies analogous regularity conditions (E1 and E2 detailed
below), we will soon show Euler curves converge to a flow. To further the
analogy with vector fields on manifolds, an arc field may be thought of as a
map X : M → AM where AM is the arc bundle, consisting of the set of all
Lipschitz continuous arcs, and we require X (x) (0) = x.
The initial condition of σ is the point x = σ (0) ∈ M . Notationally we use
σx to mean the solution with initial condition x. We say σx : (αx , ωx ) → M
is the unique solution to X with initial condition x if for any other solution
σ̃x : (α̃x , ω̃x ) → M also having initial condition x, we have (α̃x , ω̃x ) ⊂ (αx , ωx )
and σ̃x = σx |_{(α̃x ,ω̃x )} (i.e., σx is the unique maximal solution curve).
We will prove below that on a locally complete metric space the next two
conditions guarantee the arc field problem is well posed, i.e., there exists a
unique solution from any initial condition x ∈ M (Theorem 12).

Condition E1: For each x0 ∈ M , there exist positive constants r, δ and Λ such
that for all x, y ∈ B (x0 , r) and t ∈ (−δ, δ)

d (Xt (x) , Xt (y)) ≤ d (x, y) (1 + |t| Λ)

Condition E2: For each x0 ∈ M, there exist positive constants r, δ and Ω such
that for all x ∈ B (x0 , r) and s ∈ (−δ, δ) and any t with |t| ≤ |s|

d (Xs+t (x) , Xt (Xs (x))) ≤ |st| Ω.

These conditions may be restated as saying

d (Xt (x) , Xt (y)) / d (x, y) − 1 ≤ O (|t|)

d (Xs+t (x) , Xt (Xs (x))) = O (|st|)

for |t| ≤ |s| as s → 0, locally uniformly in x. Infinitesimally these conditions
have E1 limiting the spread of X, and E2 restraining X to be flow-like (Figure 1.1).

Figure 1.1: E1 and E2 are continuity conditions on X which ensure some geo-
metric regularity using only the metric.

Example 11 A Banach space (M, ‖·‖) is a complete normed vector space
(e.g., Rn with Euclidean norm). A Banach space is an example of a metric
space (M, d) where the metric is defined by d (u, v) := ‖u − v‖. A vector field
on a Banach space M is a map f : M → M . A solution to a vector field
f with initial condition x is a curve σx : (α, ω) → M defined on an open
interval (α, ω) ⊂ R containing 0 such that σx (0) = x and σx′ (t) = f (σx (t))
for all t ∈ (α, ω). The Fundamental Theorem of ODEs (detailed in Appendix
B) guarantees unique solutions for any locally Lipschitz vector field f . With a
few tricks, most differential equations can be represented as vector fields on a
suitably abstract space.
Every Lipschitz vector field f : M → M naturally gives rise to an arc field
X (x, t) := x + tf (x) on M , and it is easy to check X satisfies E1 and E2:

d (Xt (x) , Xt (y)) = ‖Xt (x) − Xt (y)‖
≤ ‖x − y‖ + |t| ‖f (x) − f (y)‖ ≤ (1 + |t| Kf ) ‖x − y‖

where Kf is the local Lipschitz constant for f , so ΛX := Kf gives Condition
E1. For Condition E2,

d (Xs+t (x) , Xt (Xs (x)))
= ‖x + (s + t) f (x) − [Xs (x) + tf (Xs (x))]‖ = ‖tf (x) − tf (Xs (x))‖
≤ |t| Kf ‖x − [x + sf (x)]‖ = |st| Kf ‖f (x)‖

so ΩX := Kf ‖f (x)‖, which is locally bounded. Further the solutions to the arc field are precisely the solutions
to the vector field guaranteed by the fundamental theorem since
d ( σ (t + h) , Xσ(t) (h) ) = |h| ‖ (σ (t + h) − σ (t)) / h − f (σ (t)) ‖ = o (h)
⇔ σ′ (t) = f (σ (t)) .

Therefore Theorem 12, below, is a generalization of the classical Fundamental
Theorem of ODEs (given in Appendix B). Similarly, Lipschitz vector fields on a
Banach manifold (a manifold whose charts map to a Banach space; if f is locally
Lipschitz in one chart, it is in any and on the manifold with any compatible
metric) give arc fields which satisfy E1 and E2.
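These two estimates can be spot-checked numerically. A sketch (in Python; the field f (x) = sin x with Kf = 1 and ‖f‖∞ ≤ 1 is our own hypothetical choice):

```python
import math

# Spot-check of the Example 11 bounds for the arc field X(x, t) = x + t f(x)
# on R with f(x) = sin x (Lipschitz constant K_f = 1, |sin| <= 1):
#   E1:  |X_t(x) - X_t(y)| <= (1 + |t| K_f) |x - y|
#   E2:  |X_{s+t}(x) - X_t(X_s(x))| <= |s t| K_f |f(x)| <= |s t|
f = lambda x: math.sin(x)
X = lambda x, t: x + t * f(x)

x, y, s, t = 0.3, 1.1, 0.05, 0.02
e1_ok = abs(X(x, t) - X(y, t)) <= (1 + abs(t) * 1.0) * abs(x - y)
e2_gap = abs(X(x, s + t) - X(X(x, s), t))
print(e1_ok, e2_gap <= abs(s * t))   # True True
```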

The basic iterative trick for proving ODEs are well-posed on Rn , or more
generally on a Banach space, applies just as well for arc fields on general metric
spaces. For economy of description we use round brackets in the superscript,
f (i) , to denote the composition of a map f : M → M with itself i times. So, for example,

X^{(i)}_{t/2^n} (x) = X_{t/2^n} ◦ X_{t/2^n} ◦ ... ◦ X_{t/2^n} (x)     (i compositions).

Then given x ∈ M and a positive integer n, we may define the n-th Euler
curve ξn : (−2^n , 2^n ) → M for X starting at x as

ξn (t) := X^{(2^n)}_{t/2^n} (x)     (1.2)

for n ∈ N such that 2^n > |t|. Taking n → ∞ generates a solution to X in the
following fundamental result.
following fundamental result.

Theorem 12 Let X be an arc field satisfying E1 and E2 on a locally complete
metric space M . Then given any point x ∈ M , there exists a unique solution
σx : (αx , ωx ) → M with initial condition σ x (0) = x for some αx < 0 < ωx ∈
R∪ {∞, −∞}.

Proof of Existence of Solutions. We will show
lim_{n→∞} ξn (t) = lim_{n→∞} X^{(2^n)}_{t/2^n} (x)

exists for each t sufficiently close to 0 and define σx (t) as this limit. Then σx (t)
will be shown to be tangent to X at t = 0. The elaborate chain of elementary

calculations checking these two facts becomes convoluted, but the inspiration
guiding us is sketched simply enough in Figures 2.2 and 2.3. We then establish
σx (s + t) = σσx (s) (t) which shows σx is tangent to X at all t in its domain
by the previous result. Uniqueness of solutions is elaborated and verified in
Remark 17, below.
First we show that for sufficiently small c > 0 the image of the Euler curves
ξ n ([−c, c]) must remain bounded for all n. This is intuitively true because the
arc field X from which the Euler curve is constructed has locally bounded speed
ρ < ∞, so successively following 2n compositions of X for small time t/2n does
not allow us to travel further than ρ |t| distance. This is exactly correct, but we
need to demonstrate how we can achieve this bound using only the metric. We
exhaust the rest of this voluminous paragraph with the tedious details. Suppose
r > 0 is chosen so ρ (x, r) < ∞. If ρ (x, r) = 0, then σ (t) := x defines a solution
curve and there is nothing to prove. Thus, assume ρ (x, r) > 0, and let

c := r/ρ (x, r) .

We assume hereafter that t is restricted to |t| < c and |t| < 1, guaranteeing the
Euler curve ξ n (t) is well defined. In this case we claim ξ n (t) ∈ B (x, r): the
triangle inequality gives


d (x, ξn (t)) ≤ Σ_{k=1}^{2^n} d ( X^{(k−1)}_{t/2^n} (x) , X^{(k)}_{t/2^n} (x) )

where X^{(0)}_{t/2^n} (x) = x by definition. For each k,

d ( X^{(k−1)}_{t/2^n} (x) , X^{(k)}_{t/2^n} (x) ) = d ( y , X_{t/2^n} (y) ) ≤ ρ (y) |t| /2^n

where y := X^{(k−1)}_{t/2^n} (x). So if y ∈ B (x, r) then ρ (y) ≤ ρ (x, r) and induction
allows us to conclude

d (ξn (t) , x) ≤ ρ (x, r) |t| < r.

Next we additionally assume the above r > 0 is chosen small enough that
Λ and Ω from Conditions E1 and E2 hold uniformly on B (x, r) and for conve-
nience, that Λ, Ω > 1. We may further assume the closure B (x, r) is a complete
metric subspace of M by again taking r to be smaller if need be. In this carefully
chosen neighborhood we will now show the Euler curves converge by proving ξ n
is Cauchy. (If M were locally compact, Arzela-Ascoli would allow us to bypass
this one page verification.)

Figure 2.2: To prove the Euler curves are Cauchy, apply E1 and E2 repeatedly
to estimate the distance between ξn (t) and ξn+1 (t), tracking back to
ξn (0) = x0 = ξn+1 (0).

Figure 2.3: To prove tangency apply E1 and E2 to estimate the distance
between Xt (x0 ) and ξn (t) → σx (t).

d ( ξn (t) , ξn+1 (t) ) = d ( X^{(2^n)}_{t/2^n} (x) , X^{(2^{n+1})}_{t/2^{n+1}} (x) )
≤ d ( X^{(2^n)}_{t/2^n} (x) , X^{(2)}_{t/2^{n+1}} X^{(2^n−1)}_{t/2^n} (x) ) + d ( X^{(2)}_{t/2^{n+1}} X^{(2^n−1)}_{t/2^n} (x) , X^{(2^{n+1})}_{t/2^{n+1}} (x) ) .

The first term is approximated by

d ( X^{(2^n)}_{t/2^n} (x) , X^{(2)}_{t/2^{n+1}} X^{(2^n−1)}_{t/2^n} (x) ) = d ( X_{2t/2^{n+1}} (y) , X^{(2)}_{t/2^{n+1}} (y) ) ≤ ( t/2^{n+1} )^2 Ω

for y := X^{(2^n−1)}_{t/2^n} (x) using Condition E2, while the second term is approximated by

d ( X^{(2)}_{t/2^{n+1}} X^{(2^n−1)}_{t/2^n} (x) , X^{(2^{n+1})}_{t/2^{n+1}} (x) )
= d ( X^{(2)}_{t/2^{n+1}} X^{(2^n−1)}_{t/2^n} (x) , X^{(2)}_{t/2^{n+1}} X^{(2^{n+1}−2)}_{t/2^{n+1}} (x) )
≤ d ( X^{(2^n−1)}_{t/2^n} (x) , X^{(2^{n+1}−2)}_{t/2^{n+1}} (x) ) ( 1 + (|t|/2^{n+1}) Λ )^2

using Condition E1 twice. Such calculations will now be performed frequently
and without comment for the rest of the proof; usually when a new Λ or Ω
erupts, the triangle inequality and Condition E1 or E2 have been used.

Inserting these last two estimates and iterating we have

d ( X^{(2^n)}_{t/2^n} (x) , X^{(2^{n+1})}_{t/2^{n+1}} (x) )
≤ d ( X^{(2^n−1)}_{t/2^n} (x) , X^{(2^{n+1}−2)}_{t/2^{n+1}} (x) ) ( 1 + (|t|/2^{n+1}) Λ )^2 + ( t/2^{n+1} )^2 Ω
≤ d ( X^{(2^n−2)}_{t/2^n} (x) , X^{(2^{n+1}−4)}_{t/2^{n+1}} (x) ) ( 1 + (|t|/2^{n+1}) Λ )^4
  + [ ( 1 + (|t|/2^{n+1}) Λ )^2 + 1 ] ( t/2^{n+1} )^2 Ω
≤ ... ≤ d ( X^{(0)}_{t/2^n} (x) , X^{(0)}_{t/2^{n+1}} (x) ) ( 1 + (|t|/2^{n+1}) Λ )^{2·2^n}
  + Σ_{k=0}^{2^n−1} ( 1 + (|t|/2^{n+1}) Λ )^{2k} ( t/2^{n+1} )^2 Ω
= 0 + ( t/2^{n+1} )^2 Ω Σ_{k=0}^{2^n−1} ( 1 + (|t|/2^{n+1}) Λ )^{2k}
= ( t/2^{n+1} )^2 Ω [ ( 1 + (|t|/2^{n+1}) Λ )^{2^{n+1}} − 1 ] / [ ( 1 + (|t|/2^{n+1}) Λ )^2 − 1 ]    (geometric series)
≤ ( t/2^{n+1} )^2 Ω ( e^{|t|Λ} − 1 ) / ( |t|Λ/2^n ) ≤ ( |t|/2^{n+1} ) Ω ( e^{|t|Λ} − 1 ) / Λ .

Then for m < n

d ( ξm (t) , ξn (t) ) ≤ Σ_{k=m}^{n−1} d ( ξk (t) , ξk+1 (t) ) = Σ_{k=m}^{n−1} d ( X^{(2^k)}_{t/2^k} (x) , X^{(2^{k+1})}_{t/2^{k+1}} (x) )
≤ Σ_{k=m}^{n−1} ( |t|/2^{k+1} ) Ω ( e^{|t|Λ} − 1 ) / Λ ≤ |t| Ω ( ( e^{|t|Λ} − 1 ) / Λ ) 2^{−(m+1)} Σ_{k=0}^{∞} 2^{−k} = |t| Ω ( ( e^{|t|Λ} − 1 ) / Λ ) 2^{−m}
and we see ξ n (t) is uniformly Cauchy on the interval |t| < c in the complete
metric space B (x, r). By the bound ρ on speed, the curves ξ n (t) are uniformly
continuous in t and so they converge to a (continuous) curve, denoted
σx (t) := lim_{n→∞} ξn (t) .

Let us now check σx is tangent to X, first at t = 0. Notice
d (σx (t) , Xx (t)) ≤ d (σ x (t) , ξ n (t)) + d (ξ n (t) , Xx (t)) .
The first summand is easily controlled. For the second summand consider the
fact that for any t ∈ [−1, 1] and n ∈ N we have

d ( Xt (x) , X^{(n)}_{t/n} (x) ) ≤ e^{|t|Λ} t^2 Ω     (1.3)

which holds since

d ( Xt (x) , X^{(n)}_{t/n} (x) ) ≤ Σ_{k=0}^{n−1} d ( X^{(n−k)}_{t/n} X_{kt/n} (x) , X^{(n−(k+1))}_{t/n} X_{(k+1)t/n} (x) )
≤ Σ_{k=0}^{n−1} ( 1 + (|t|/n) Λ )^{n−(k+1)} k ( t/n )^2 Ω ≤ e^{|t|Λ} t^2 Ω.

Replacing the n in (1.3) with 2^n , the bound is undisturbed, and we have

d (σx (t) , Xx (t)) ≤ d (σx (t) , ξn (t)) + e^{|t|Λ} t^2 Ω.

Letting n → ∞ gives

d (σx (t) , Xx (t)) ≤ e^{|t|Λ} t^2 Ω = O (t^2)     (1.4)
locally uniformly in x.
Next we show σx is locally 2nd-order tangent to X for all t. This will be
done if we show σx (s + t) = σσx (s) (t) because in that case

d ( σx (s + t) , Xσx (s) (t) ) = d ( σσx (s) (t) , Xσx (s) (t) ) = O (t^2) ,

this last equality having been established by line (1.4). Using (1.3) we have

d ( Xt (x) , X^{(n)}_{t/n} (y) ) ≤ ( d (x, y) + t^2 Ω ) e^{|t|Λ}     (1.5)

since

d ( Xt (x) , X^{(n)}_{t/n} (y) ) ≤ d ( Xt (x) , X^{(n)}_{t/n} (x) ) + d ( X^{(n)}_{t/n} (x) , X^{(n)}_{t/n} (y) )
≤ t^2 Ω e^{|t|Λ} + ( 1 + (|t|/n) Λ )^n d (x, y) ≤ ( d (x, y) + t^2 Ω ) e^{|t|Λ} .

Next if k divides j then using (1.5) we have

d ( X^{(j)}_{t/j} (x) , X^{(j/k)}_{kt/j} (x) ) ≤ e^{|t|Λ} t^2 Ω k/j     (1.6)

since

d ( X^{(j)}_{t/j} (x) , X^{(j/k)}_{kt/j} (x) ) = d ( X^{(k)}_{t/j} X^{(k[j/k−1])}_{t/j} (x) , X_{kt/j} X^{(j/k−1)}_{kt/j} (x) )
≤ [ d ( X^{(k[j/k−1])}_{t/j} (x) , X^{(j/k−1)}_{kt/j} (x) ) + ( kt/j )^2 Ω ] e^{|kt/j|Λ} ≤ ...

where the estimate is iterated j/k times; since d ( X^{(0)}_{t/j} (x) , X^{(0)}_{kt/j} (x) ) = 0 the
above is

≤ ( kt/j )^2 Ω ( e^{|kt/j|Λ} + e^{2|kt/j|Λ} + ... + e^{(j/k)|kt/j|Λ} )
≤ (j/k) ( kt/j )^2 Ω e^{|t|Λ} = (k/j) t^2 Ω e^{|t|Λ} .

(1.6) is useful because it gives us

lim_{j→∞} X^{(j)}_{t/j} (x) = lim_{j→∞} X^{(j/k)}_{kt/j} (x)

or better put

lim_{j→∞} X^{(kj)}_{t/j} (x) = lim_{j→∞} X^{(j)}_{kt/j} (x)     (1.7)

where k can be any function of j and t as long as k/j → 0 as j → ∞.
For each n ∈ N, choose i (n) ∈ N such that n/i (n) → 0 as n → ∞ (for
example, choose i (n) = n^2 ), and let j (i, n) , k (i, n) ∈ N be chosen so

| j (s + t) /2^i − t/2^n | < |s + t| /2^i    and    | k (s + t) /2^i − s/2^n | < |s + t| /2^i .

Then

| (j + k) (s + t) /2^i − (s + t) /2^n | < 2 |s + t| /2^i ,

which implies | (j + k) − 2^{i−n} | < 2, i.e.,

j + k = 2^{i−n} + δ (n)

where |δ (n)| < 2. Therefore

σx (s + t) = lim_{n→∞} X^{(2^n)}_{(s+t)/2^n} (x) = lim_{i→∞} X^{(2^i)}_{(s+t)/2^i} (x) = lim_{i→∞} X^{(2^n [j+k])}_{(s+t)/2^i} (x)
= lim_{i→∞} X^{(2^n j)}_{(s+t)/2^i} X^{(2^n k)}_{(s+t)/2^i} (x) , and using (1.7) this is
= lim_{i→∞} X^{(2^n)}_{j(s+t)/2^i} X^{(2^n)}_{k(s+t)/2^i} (x) = lim_{n→∞} X^{(2^n)}_{t/2^n} X^{(2^n)}_{s/2^n} (x)
= lim_{n→∞} X^{(2^n)}_{t/2^n} ( lim_{n→∞} X^{(2^n)}_{s/2^n} (x) ) = σσx (s) (t) .

This completes the proof that solutions exist which are locally uniformly 2nd-
order tangent to X. The proof of uniqueness follows from Theorem 16 below;
see Remark 17.

Remark 13 Theorem 12 has a simple corollary showing the well-posedness of
time-dependent dynamics following the exact same idea for time-dependent vec-
tor fields on a manifold. Simply consider a time-independent arc field on M ×R,
namely ((x, t) , h) → (Xx,t (h) , t + h) in M × R, and project solutions onto the
M factor.
2nd-order differential equations can be rewritten with 2nd-order vector fields.
A 2nd-order arc field is a straightforward generalization with well-posedness a
simple corollary of Theorem 12 (see [16] for details).
With a little extra effort Theorem 12 and those which follow are true in even
greater generality, and the reader is encouraged to study the work in, e.g., [52],
[7] and [18]. But in the examples throughout this book the stronger conditions
E1 and E2 are satisfied and are easier to use.

The above proof actually gives a result stronger than the statement of the
theorem which will be frequently useful:
Corollary 14 Assuming E1 and E2, the solutions σ are locally uniformly
2nd-order tangent to X in the variable x, i.e.,

d (Xx (t) , σx (t)) = O (t^2)

locally uniformly for x ∈ M ; i.e., for each x0 ∈ M there exist positive constants
r, δ, T > 0 such that for all x ∈ B (x0 , r)

d (Xx (t) , σx (t)) ≤ t^2 T

whenever |t| < δ.
Proof. This was established at line (1.4).
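The corollary can be observed numerically on the solvable example f (x) = x on R (our own sketch, in Python), where X (x, t) = x + tx, σx (t) = x e^t, and the tangency gap is |x| |e^t − 1 − t| = O (t^2):

```python
import math

# Euler curves xi_n(t) = X_{t/2^n}^{(2^n)}(x) for the arc field
# X(x, t) = x + t x, whose exact solution is sigma_x(t) = x e^t.
def xi(n, t, x):
    h = t / 2 ** n
    for _ in range(2 ** n):
        x = x + h * x            # X_h applied 2^n times
    return x

x0, t = 1.0, 0.5
sigma = x0 * math.exp(t)
errs = [abs(xi(n, t, x0) - sigma) for n in (2, 5, 8)]
print(errs)                      # errors decrease toward 0 as n grows

# 2nd-order tangency: d(X_x(t), sigma_x(t)) / t^2 stays bounded (-> 1/2 here)
for t in (1e-1, 1e-2, 1e-3):
    print(abs((x0 + t * x0) - x0 * math.exp(t)) / t ** 2)
```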
Denote local uniform tangency of two arc fields X and Y by X ∼ Y and
local uniform 2nd-order tangency by X ≈ Y . It is easy to check ∼ and ≈ are
equivalence relations. E.g., transitivity follows from the triangle inequality:
d (Xt (x) , Zt (x)) ≤ d (Xt (x) , Yt (x)) + d (Yt (x) , Zt (x)) .
We use the symbols ∼ and ≈ in many contexts in this monograph (particularly
§3.4), and always with an associated local-uniform-tangency property.
Further, the proof of Theorem 12 gives us another useful fact we will subse-
quently need:
Corollary 15 Assuming E1 and E2, the solutions σ are tangent uniformly over
all arc fields X which satisfy E1 and E2 for specified Λ and Ω.
Proof. This was also established at line (1.4).
Also notice the proof used only the special case s = t, and not the more
general |t| ≤ |s| from Condition E2, to prove the Euler curves are Cauchy. The
full assumption was used to prove the solution is tangent to the arc field.
Theorem 16 Let σx : (αx , βx ) → M and σy : (αy , βy ) → M be two solutions
to an arc field X which satisfies E1. Assume (αx , βx ) ∩ (αy , βy ) ⊃ I for
some interval I containing 0, and assume E1 holds uniformly with Λ on the set
{σx (t) |t ∈ I} ∪ {σy (t) |t ∈ I} . Then

d (σx (t) , σy (t)) ≤ e^{Λ|t|} d (x, y) for all t ∈ I.

Proof. We check t ≥ 0, the case t < 0 being similar. Let

g (t) := e^{−Λt} d (σx (t) , σy (t)) .

For h ≥ 0, we have

g (t + h) − g (t)
= e^{−Λ(t+h)} d (σx (t + h) , σy (t + h)) − e^{−Λt} d (σx (t) , σy (t))
= e^{−Λ(t+h)} [d (Xh (σx (t)) , Xh (σy (t))) + o (h)] − e^{−Λt} d (σx (t) , σy (t))
≤ e^{−Λt} e^{−Λh} d (σx (t) , σy (t)) (1 + Λh) − e^{−Λt} d (σx (t) , σy (t)) + o (h)
= [e^{−Λh} (1 + Λh) − 1] e^{−Λt} d (σx (t) , σy (t)) + o (h)
= o (h) e^{−Λt} d (σx (t) , σy (t)) + o (h) = o (h) (g (t) + 1) .

Hence, the upper forward derivative of g (t) is nonpositive; i.e.,

D⁺ g (t) := lim_{h→0⁺} (g (t + h) − g (t)) / h ≤ 0.

Consequently, g (t) ≤ g (0), i.e.,

d (σx (t) , σy (t)) ≤ e^{Λt} d (σx (0) , σy (0)) = e^{Λt} d (x, y) .

Theorem 16 says solutions locally diverge at most exponentially; it is the
most useful result we have for proving regularity of flows. When I is compact,

{σx (t) |t ∈ I} ∪ {σy (t) |t ∈ I}

is compact since σx and σy are continuous, and so it is often easy to find a
uniform bound Λ for E1 on this set.
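The exponential bound is also easy to observe numerically. The following sketch (an assumed example, not from the text: M = R, f (x) = sin x with Lipschitz constant Λ = 1, and solutions approximated by Euler curves) checks the estimate of Theorem 16:

```python
import math

def euler(f, x, t, n=20000):
    """Euler curve X_{t/n}^{(n)}(x) for the arc field X_t(x) = x + t*f(x)."""
    h = t / n
    for _ in range(n):
        x = x + h * f(x)
    return x

# f(x) = sin(x) is Lipschitz with constant Lambda = 1, so Theorem 16 gives
# d(sigma_x(t), sigma_y(t)) <= e^{Lambda*|t|} * d(x, y).
f = math.sin
x, y, t = 1.0, 1.1, 2.0
gap = abs(euler(f, x, t) - euler(f, y, t))
bound = math.exp(1.0 * t) * abs(x - y)
print(gap, "<=", bound)
```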

Remark 17 Uniqueness of solutions in Theorem 12 has the same meaning as
in classical ODE theory:
(1.) Any two solutions σ1x : (α1x , β1x ) → M and σ2x : (α2x , β2x ) → M with
initial condition x have σ1x (t) = σ2x (t) for all t ∈ (α1x , β1x ) ∩ (α2x , β2x ), and
(2.) There exists a solution σx : (αx , ωx ) → M with maximal domain,
meaning any other solution σ*x : (α*x , ω*x ) → M has (α*x , ω*x ) ⊂ (αx , ωx ).

Choosing x = y in Theorem 16 establishes (1.) for a small interval containing
the origin. The exact same extension argument as in ODEs then establishes
(1.) and (2.) fully (cf. practically any text introducing ODEs, e.g., [39]). The
maximal interval (αx , ωx ) described in (2.) is the union of the domains of all
solutions with initial condition x.

Example 18 The classic example of non-uniqueness from ODE theory is ẋ = √|x|
with initial condition x (0) = 0, which has the two distinct solutions x (t) = 0
and x (t) = t²/4 for t ≥ 0. The corresponding arc field X (x, t) := x + t √|x|
fails Condition E1 on any neighborhood of 0, since √|x| is not Lipschitz there.
This example indicates how E1 and E2 cannot be weakened too much if we want to
guarantee a general well-posedness result.

Remark 19 Theorem 16 gives uniqueness of solutions for any arc field which
satisfies E1 alone. E2 is only used to prove general existence, but E2 is typi-
cally the more difficult condition to verify, so if we can verify solutions exist in
some other manner (perhaps directly calculating the limit of Euler curves, as in
Example 100) E1 is sufficient.

Theorem 16 also gives an easy proof of a very general Nagumo-type invariant-
set theorem, Theorem 33 below in §1.3.

Notice in the proof of Theorem 12 the Euler curves were defined with nodes
spaced at a distance of t/2^n. This was for convenience. The simpler expression

lim_{n→∞} X_{t/n}^{(n)} (x) = σx (t)     (1.8)

may also be verified, but we won’t present the more tedious analysis.
Yet a third definition of Euler curves, for any real number r > 0, is common:
for i, n ∈ N define

ξr,n (t) := X_{(t − i·r2^{−n})} X_{r2^{−n}}^{(i)} (x)   for i·r2^{−n} ≤ t ≤ (i + 1) r2^{−n}
ξr,n (t) := X_{(t + i·r2^{−n})} X_{−r2^{−n}}^{(i)} (x)   for −(i + 1) r2^{−n} ≤ t ≤ −i·r2^{−n} .

This concatenation of arcs is more complicated notationally, but more intuitively
compelling, and is found in introductory texts on differential equations. Again
ξr,n → σx as n → ∞, as was proven in [18] with r = 1 to verify well-posedness
under commensurate conditions. Notice

ξt,n (t) = X_{t/2^n}^{(2^n)} (x)

since t = 2^n · t2^{−n} .
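For instance (a hypothetical one-dimensional check of line (1.8)): with X_t(x) = x + tx the Euler curve X_{t/n}^{(n)} (x) is x (1 + t/n)^n, which converges to the solution σx (t) = xe^t:

```python
import math

# Euler curves for the arc field X_t(x) = x*(1 + t) on R:
# X_{t/n}^{(n)}(x) = x*(1 + t/n)^n  ->  sigma_x(t) = x*e^t  as n -> infinity.
x, t = 1.0, 1.0
approx = [x * (1 + t / n)**n for n in (10, 100, 1000, 10000)]
print(approx, "->", x * math.exp(t))
```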

1.1.2 Local flows
From now on (αx , ωx ) will denote the maximal domain with initial condition x.

Corollary 20 Assume the conditions of Theorem 12 and let s ∈ (αx , ωx ).
Then α_{σx (s)} = αx − s and ω_{σx (s)} = ωx − s. Thus t ∈ (α_{σx (s)} , ω_{σx (s)}) if and only
if t + s ∈ (αx , ωx ), and then we have

σ_{σx (s)} (t) = σx (s + t) .

Defining W ⊂ M × R by

W := {(x, t) ∈ M × R | t ∈ (αx , ωx )}  and  F : W → M  by  F (x, t) := σx (t)     (1.9)

we have
(i) M × {0} ⊂ W and F (x, 0) = x for all x ∈ M (identity at 0 property)
(ii) F (F (x, s) , t) = F (x, t + s) (1-parameter local group property)
(iii) For each (fixed) x ∈ M , F (x, ·) : (αx , ωx ) → M is the maximal solution
σx to X.
The map F is called the local flow of X.
Compare Condition E2 with (ii) above to see why an arc field might be
described as a “pre-flow”.
Theorem 16 says if F is the local flow of an arc field X which satisfies
Condition E1 with uniform constant ΛX then

d (Ft (x) , Ft (y)) ≤ e^{ΛX |t|} d (x, y) .     (1.10)

Thus F (x, t) is continuous in x. Notice e^{ΛX |t|} = 1 + ΛX |t| + O(t²), and compare
Condition E1 with line (1.10) to see why E1 may be thought of as a local linearity
property for X, needed for the continuity of F . Now let’s check continuity in
the other variable, t:
Lemma 21 Suppose c > 0 and σ : (−c, c) → M is a solution curve of X.
Assume the speed of X is bounded by ρ ∈ [0, ∞) on σ ((−c, c)). Then the speed
of σ is also bounded by ρ.
Proof. First let t ≥ 0. For −c ≤ t0 ≤ t0 + t < c, let
f (t) := d (σ (t0 + t) , σ (t0 )) − ρt
Since f (0) = 0 we wish to show D+ f (t) ≤ 0, since then f (t) ≤ 0 and we will
then know
d (σ (t0 + t) , σ (t0 )) ≤ ρt
as desired.
f (t + h) − f (t) = d (σ (t0 + t + h) , σ (t0 )) − d (σ (t0 + t) , σ (t0 )) − ρh
≤ d (σ (t0 + t + h) , σ (t0 + t)) − ρh
≤ d (σ (t0 + t + h) , Xh (σ (t0 + t))) + d (Xh (σ (t0 + t)) , σ (t0 + t)) − ρh
= o (h) + d (Xh (σ (t0 + t)) , X0 (σ (t0 + t))) − ρh
≤ o (h) + ρh − ρh = o (h) .
Checking d (σ (t0 + t) , σ (t0 )) ≤ ρ |t| for t < 0 is similar, mutatis mutandis.

Theorem 22 For F and W as above (1.9) we have W open in M × R and F
continuous on W .

Proof. Continuity is easy to check using Theorem 16 and Lemma 21 on
the separate variables, once we’ve established a proper environment on which
their assumptions are satisfied. So we first check W is open, by showing for
any (x0 , t0 ) ∈ W there is a neighborhood V of x0 and ε0 > 0 such that
V × (t0 − ε0 , t0 + ε0 ) ⊂ W . We assume t0 ≥ 0, the case t0 < 0 being similar.
Define x1 := σx0 (t0 ). Since X has locally bounded speed there exists r > 0
such that ρ := ρ (x1 , r) < ∞ (we may assume ρ ≥ 1), and so for any
x ∈ B (x1 , r/2ρ) we have σx (t) ∈ B (x1 , r) for |t| < r/2ρ. Consequently
B (x1 , r/2ρ) × (−r/2ρ , r/2ρ) ⊂ W .

Now the rest of the proof follows the idea that there is a small enough
neighborhood V of x0 such that F (V, t0 ) ⊂ B (x1 , r/2ρ) by Theorem 16, which
guarantees V × (t0 − r/2ρ , t0 + r/2ρ) ⊂ W since

F (F (V, t0 ) , (−r/2ρ , r/2ρ)) = F (V, (t0 − r/2ρ , t0 + r/2ρ))

by the local group property. Theorem 16 requires only that there be a set on which
E1 is satisfied uniformly by some Λ > 0; then V := B (x0 , e^{−t0 Λ} r/4ρ) is suffi-
cient. Now to show the set for Theorem 16 exists. Notice [0, t0 + r/2ρ] is compact
and so its continuous image σx0 ([0, t0 + r/2ρ]) ⊂ M is compact. For each t ∈
[0, t0 + r/2ρ] there is a ball B (σx0 (t) , rt ) ⊂ M with rt > 0 on which Condition
E1 is satisfied with Λt . These neighborhoods cover σx0 ([0, t0 + r/2ρ]), so there is
a finite subcover {B (σx0 (ti ) , rti ) |i = 1, ..., n}. Let Λ := max {Λti |i = 1, ..., n},
let U := ∪_{i=1}^{n} B (σx0 (ti ) , rti ) and let M \U denote the set complement. The
function f : [0, t0 + r/2ρ] → R defined by f (t) := d (σx0 (t) , M \U ) is positive
and continuous on a compact domain and so has a minimum m > 0. There-
fore any y ∈ σx0 ([0, t0 + r/2ρ]) has a neighborhood ball B (y, m) ⊂ U , and
any solution curve which stays within a distance of m of the path
σx0 ([0, t0 + r/2ρ]) will have a uniform Λ satisfying E1. Therefore Theorem 16
applies and we can choose V := B (x0 , e^{−t0 Λ} r/4ρ) as explained above, giving

(x0 , t0 ) ∈ V × (t0 − r/2ρ , t0 + r/2ρ) ⊂ W

and W is open. (In fact we have proven V × [0, t0 + r/2ρ) ⊂ W .)
Now proving continuity is easy. Since X is an arc field it has locally bounded
speed, so there exists r > 0 and a local bound on speed ρ := ρ (σx0 (t0 ) , r) < ∞
for Xy for all y ∈ B (σx0 (t0 ) , r); in particular Lemma 21 implies the speed of
σx0 (t) is bounded by ρ for all t with |t − t0 | < r/2ρ. Using Theorem 16 (on the
set constructed in the previous paragraph for which Λ is uniform) and Lemma
21, as (x, t) → (x0 , t0 ) we have

d (F (x, t) , F (x0 , t0 ))
= d (σx (t) , σx0 (t0 )) ≤ d (σx (t) , σx0 (t)) + d (σx0 (t0 ) , σx0 (t))
≤ e^{Λ(|t0 |+1)} d (x0 , x) + ρ (σx0 (t0 ) , r) |t0 − t| → 0.

For fixed t it is clear Ft is a local lipeomorphism, when defined, by Theorem 16.

1.1.3 Global flows
We now investigate conditions which guarantee local flows are in fact “global”,
i.e., (αx , ωx ) = R for all x ∈ M . To achieve this, we mimic ODE theory.

Example 23 Consider the classic elementary example of “quadratic” speed

x′ = f (x)

where f : R → R is the (locally Lipschitz) vector field given by f (x) = x², which
has solutions

x (t) := x0 / (1 − tx0 )

so that when the initial condition is x (0) = x0 ≠ 0, the solutions “blow up” at
time t = 1/x0 . The vector field f (x) = x² grows too quickly as solutions x grow,
sending x to ∞ in finite time.
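A numerical sketch of Example 23 (an illustration only; the blow-up time 1/x0 comes from the exact solution above):

```python
# Euler curves for f(x) = x^2 with x(0) = 1 track the exact solution
# x(t) = 1/(1 - t), which blows up at t = 1.
def euler(x, t, n):
    h = t / n
    for _ in range(n):
        x = x + h * x**2
    return x

exact = lambda x0, t: x0 / (1 - t * x0)
print(euler(1.0, 0.9, 100000), "~", exact(1.0, 0.9))  # both large near t = 1
```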

To guarantee globally defined flows, first the space cannot have holes, i.e.,
M must be complete. Secondly we must limit the magnitude of the vector field
to prevent the situation in Example 23, which inspires the following

Definition 24 An arc field X on a metric space M is said to have linear
speed growth if there is a point x ∈ M and positive constants c1 and c2 such
that for all r > 0
ρ (x, r) ≤ c1 r + c2 , (1.11)
where ρ (x, r) is the local bound on speed given in Definition 10.

If y is any other point in M then B (y, r) ⊆ B (x, d (x, y) + r) . Thus,

ρ (y, r) ≤ ρ (x, d (x, y) + r) ≤ c1 (x) (d (x, y) + r) + c2 (x)
= c1 (x) r + (c1 (x) d (x, y) + c2 (x)) .

Hence, if the relation (1.11) holds for a point x, then for any other y ∈ M we
also have
ρ (y, r) ≤ c1 (y) r + c2 (y) ,     (1.12)
where c1 (y) = c1 (x) and c2 (y) = c1 (x) d (x, y) + c2 (x).

Theorem 25 Let X be an arc field on a complete metric space M , which sat-
isfies E1 and E2 and has linear speed growth. Then F has domain W = M × R,
i.e., F is a flow.

Proof. A similar proof in this context of metric spaces appears in [18]. Most
other proofs on manifolds can be easily transferred to our current situation.
Assume t ≥ 0 (the case t < 0 being similar). Then for any partition 0 =
t1 < t2 < ... < tn+1 = t of [0, t] we have

d (σx (t) , x) = d (σx (t) , σx (0)) ≤ Σ_{i=1}^{n} d (σx (ti ) , σx (ti+1 )) ≤ Σ_{i=1}^{n} ρ (σx (si )) |ti+1 − ti |

for some choice of si ∈ [ti , ti+1 ], which leads to

d (σx (t) , x) ≤ ∫_0^t ρ (σx (s)) ds ≤ ∫_0^t [c1 (x) d (σx (s) , x) + c2 (x)] ds.

In other words, for f (t) := d (σx (t) , x) we wish to use the inequality

f (t) ≤ ∫_0^t (c1 f (s) + c2 ) ds     (1.13)

to bound f . In fact (1.13) gives

D⁺ f (t) := lim_{h→0⁺} (f (t + h) − f (t)) / h ≤ c1 f (t) + c2 .

As motivation, the solution of x′ (t) = c1 x (t) + c2 satisfying x (0) = 0 is
x (t) = (c2 /c1 ) (e^{c1 t} − 1). Since we expect f (t) ≤ x (t) and f (0) = 0, we expect f
to grow at most exponentially. Then assuming the domain (αx , ωx ) has ωx < ∞
we would have by continuity and the boundedness of f that σx can be continued
to have σx (ωx ) ∈ M . But then the fundamental theorem allows us to continue
σx beyond ωx , giving us a contradiction.
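The motivating ODE in the proof is easily checked (a sketch; the constants c1, c2 here are arbitrary positive values):

```python
import math

# x(t) = (c2/c1)*(e^{c1 t} - 1) solves x'(t) = c1*x(t) + c2 with x(0) = 0;
# we verify the ODE with a finite-difference derivative.
c1, c2, t, h = 2.0, 3.0, 0.7, 1e-6
x = lambda t: (c2 / c1) * (math.exp(c1 * t) - 1)
lhs = (x(t + h) - x(t)) / h      # approximate x'(t)
rhs = c1 * x(t) + c2
print(lhs, "~", rhs)
```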
A flow is sometimes called a full flow, or a global flow, or a complete
flow to distinguish it from a local flow. Since local flows are continuous—and
continuity is a local property—full flows are continuous.
Example 26 The support of an arc field X is the closure of the set S :=
{x ∈ M | Xx ≁ 0}. Here 0 is the constant arc field 0x (t) = x. Assuming E1 and
E2 on a locally complete space M , it is easy to see that when the support of X
is compact, the flow F is complete; in particular if M is compact all such X
give complete flows.
Example 27 Every local flow on a metric space is generated by an arc field.
Any local flow F gives rise to an arc field F̃ : M × [−1, 1] → M defined by

F̃ (x, t) := F (x, t)        if t ∈ (αx /2, ωx /2)
F̃ (x, t) := F (x, αx /2)    if t ∈ [−1, αx /2]
F̃ (x, t) := F (x, ωx /2)    if t ∈ [ωx /2, 1] .

The issue here is that F , being a local flow, may have [−1, 1] ⊄ (αx , ωx ), so
we have to be careful at the endpoints. Clearly the local flow generated by F̃
is F . Since all our concerns with arc fields are local, we will never focus on
t ∉ (αx /2, ωx /2), and henceforth we will not notationally distinguish between F̃
and F as arc fields.

With this identification of flows being arc fields (but not usually vice-versa)
we may simplify Corollary 14 to:

Corollary 28 X ≈ F if X satisfies E1 and E2.

Examples relevant to this chapter occur in Chapter 7 and Example 100.

1.2 Forward flows and fixed points
In many applications the solution of a differential equation or vector field is not
defined for t < 0. For example, diffusion phenomena are usually tractable only
forward in time. In this case we work with forward flows (also called semi-
flows, or, in the context of operators on Banach spaces, semi-groups). We list
here the minor modifications to the above theory for this more general situation,
then prove a simple fixed point theorem. We don’t bother to introduce much new
forward-specific terminology, as it should be clear from context whether we mean
forward or bidirectional in any example.
Change the domain of arcs on M from c : [−1, 1] → M to c : [0, 1] → M
and similarly replace [−1, 1] with [0, 1] everywhere it occurs, e.g., (forward) arc
fields are defined as maps X : M × [0, 1] → M . Solutions σ x : [0, β x ) → M are
forward tangent to X defined by

d (σ (t + h) , X (σ (t) , h))
lim = 0,
h→0+ h

i.e., t and h are restricted to positive values. We explicitly spell out the minor
changes to Conditions E1 and E2 since a new possibility of allowing negative
ΛX will prove to be useful.

Condition E1: For each x0 ∈ M, there are constants r > 0, δ > 0 and Λ ∈ R
such that for all x, y ∈ B (x0 , r) and t ∈ [0, δ)

d (Xt (x) , Xt (y)) ≤ d (x, y) (1 + tΛ) .

Condition E2:
d (Xs+t (x) , Xt (Xs (x))) = O (st)
for 0 ≤ t ≤ s as s → 0, locally uniformly in x.

Corollary 29 Let X be an arc field satisfying E1 and E2 on a locally complete
metric space M. Then given any point x ∈ M , there exists a unique solution
σx : [0, ω x ) → M with initial condition σx (0) = x.

Proof. Follow the proof of Theorem 12.

Corollary 30 Let X be an arc field on a complete metric space M , which
satisfies E1 and E2. Solutions σ x : [0, ω x ) → M satisfy
σ σx (s) (t) = σx (s + t)
for s, t ≥ 0 and s + t < ω x . Defining W ⊂ M × R+ by
W : = {(x, t)|t ∈ [0, ω x )} and
F : W →M by F (x, t) := σx (t)
we know F is continuous and
(i) M × {0} ⊂ W and F (x, 0) = x for all x ∈ M (identity at 0 property)
(ii) F (F (x, s) , t) = F (x, t + s) (1-parameter local semi-group property)
(iii) For each x ∈ M, F (x, ·) : [0, ω x ) → M is the maximal solution σ x to X.
If in addition X has linear speed growth, then F has domain W = M × R+ ,
i.e., F is a forward flow.
Proof. Use Corollary 29 and adapt the proof of Theorem 22.
Theorem 31 Let X be an arc field on a complete metric space M, which has
linear speed growth and satisfies Conditions E1 and E2, with Λ < 0 uniformly
valid for all of M. Then the forward flow F : M ×[0, ∞) → M of X has a unique
fixed point. That is, there exists p ∈ M, such that for all t ≥ 0, F (p, t) = p, and
if F (x, t0 ) = x for some t0 > 0, then x = p. Furthermore, the flow converges to
the fixed point exponentially:
d (F (x, t) , p) = d (F (x, t) , F (p, t)) ≤ e^{Λt} d (x, p) .
Proof. Theorem 16 is valid mutatis mutandis and gives

d (F (a, t) , F (b, t)) ≤ e^{Λt} d (a, b) .

Thus, since Λ < 0, Ft := F (·, t) is a contraction mapping on M for each t > 0, and
therefore has a unique fixed point, say pt , by the Contraction Mapping Theorem
(Theorem 119, Appendix A.1). Note pt is a continuous function of t, since

d (pt , pt′ ) = d (F (pt , t) , F (pt′ , t′ ))
≤ d (F (pt , t) , F (pt′ , t)) + d (F (pt′ , t) , F (pt′ , t′ ))
≤ e^{Λt} d (pt , pt′ ) + d (F (pt′ , t) , F (pt′ , t′ ))

⇒ d (pt , pt′ ) ≤ d (F (pt′ , t) , F (pt′ , t′ )) / (1 − e^{Λt}) → 0 as t′ → t

by the continuity of F . The 1-parameter local semigroup property of F gives
pt = F (pt , t) = F (F (pt , t) , t) = F (pt , 2t) = · · · = F (pt , nt)
for any positive integer n. Hence, pnt = pt and further pi/j = pi = p1 for all
positive integers i and j. Since t → pt is continuous and constant on the positive
rationals, pt = p1 for all t > 0.
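A transparent instance of Theorem 31 (a sketch with an assumed vector field, not from the text): on M = R the field f (x) = 1 − x satisfies E1 with uniform Λ = −1, its forward flow is F (x, t) = 1 + (x − 1) e^{−t}, and p = 1 is the unique fixed point.

```python
import math

# Forward flow of f(x) = 1 - x on R; Lambda = -1 < 0 uniformly, and the
# flow contracts to the fixed point p = 1 at rate e^{Lambda*t}.
F = lambda x, t: 1 + (x - 1) * math.exp(-t)
x, p = 5.0, 1.0
for t in (0.0, 1.0, 2.0, 3.0):
    assert abs(F(x, t) - p) <= math.exp(-t) * abs(x - p) + 1e-12
print("exponential convergence to the fixed point verified")
```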
See Chapter 7 for examples in H (Rn ). Theorem 33 in the next section deals
with the more general question of invariant sets instead of just fixed points.

1.3 Invariant sets
Definition 32 A set S ⊂ M is defined to be locally uniformly tangent to
X if
d (Xt (x) , S) = o (t)
locally uniformly for x ∈ S, denoted S ∼ X.
S is invariant under the flow F if for any x ∈ S we have Ft (x) ∈ S for all
t ∈ (αx , ω x ).
The next theorem is a metric space generalization of the Nagumo-Brézis
Invariance Theorem (Example 11 shows how this generalizes the Banach space
setting). The bidirectional case is given, but the result obviously holds also for
forward flows mutatis mutandis. Cf. [50] for an exposition on general invariance results.
Theorem 33 Let X satisfy E1 and E2 and assume a closed set S ⊂ M has
S ∼ X. Then S is an invariant set of the flow F .
Proof. By choosing x ∈ S this theorem is an immediate corollary of the
following, slightly stronger fact:
Lemma 34 Let σx : (α, ω) → U ⊂ M be a solution to X which meets Condition
E1 with uniform constant Λ on a neighborhood U . Assume S ⊂ U is a closed
set with S ∼ X. Then
d (σx (t) , S) ≤ e^{Λ|t|} d (x, S) for all t ∈ (α, ω) .
Proof. (Adapted from the proof of Theorem 16, due to David Bleecker.)
We check only t > 0, the case t < 0 being similar. Define g (t) := e^{−Λt} d (σx (t) , S).
For h ≥ 0, we have

g (t + h) − g (t) = e^{−Λ(t+h)} d (σx (t + h) , S) − e^{−Λt} d (σx (t) , S)
≤ e^{−Λ(t+h)} [d (σx (t + h) , Xh (σx (t))) + d (Xh (σx (t)) , Xh (y)) + d (Xh (y) , S)]
− e^{−Λt} d (σx (t) , S)

for any y ∈ S, which in turn is

≤ e^{−Λ(t+h)} [d (Xh (σx (t)) , Xh (y)) + o (h)] − e^{−Λt} d (σx (t) , S)
≤ e^{−Λt} e^{−Λh} d (σx (t) , y) (1 + Λh) − e^{−Λt} d (σx (t) , S) + o (h)
= [e^{−Λh} (1 + Λh) d (σx (t) , y) − d (σx (t) , S)] e^{−Λt} + o (h) .

Since y was arbitrary in S, taking the infimum over y ∈ S gives

g (t + h) − g (t) ≤ [e^{−Λh} (1 + Λh) − 1] e^{−Λt} d (σx (t) , S) + o (h)
≤ o (h) e^{−Λt} d (σx (t) , S) + o (h) = o (h) (g (t) + 1) .

Hence, the upper forward derivative of g (t) is nonpositive; i.e.,

D⁺ g (t) := lim_{h→0⁺} (g (t + h) − g (t)) / h ≤ 0.

Consequently, g (t) ≤ g (0), i.e.,

d (σx (t) , S) ≤ e^{Λt} d (σx (0) , S) = e^{Λt} d (x, S) .

Theorem 33 will be used to piece together local integral surfaces to get
foliations in §3.4. Also see Example 87.
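Lemma 34 is also easy to watch numerically (a sketch with assumed data: the forward version on M = R with f (x) = −x, Λ = −1, and the closed set S = [−1, 1], to which each arc X_t (x) = x − tx is tangent):

```python
import math

# sigma_x(t) = x*e^{-t} solves x' = -x; S = [-1, 1] satisfies S ~ X, and
# Lemma 34 (forward version) gives d(sigma_x(t), S) <= e^{-t} * d(x, S).
dist = lambda x: max(abs(x) - 1.0, 0.0)   # distance to S = [-1, 1]
x = 3.0
for t in (0.0, 0.5, 1.0, 2.0):
    assert dist(x * math.exp(-t)) <= math.exp(-t) * dist(x) + 1e-12
print("invariance estimate verified")
```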

1.4 Commutativity of flows
The following theorem is valid for both bidirectional and forward flows.

Theorem 35 Let X and Y be arc fields on a complete metric space M which
satisfy Conditions E1 and E2. Let F and G be the local flows generated by X
and Y , respectively. If
d (Yt Xt (x) , Xt Yt (x)) = o (t²)     (1.14)

locally uniformly in x then
Fs Gt = Gt Fs

that is, the flows commute.

Proof. (1.14) means for any x ∈ M there exists a neighborhood U :=
B (x, δ) and a function φ with lim_{t→0} φ (t) = 0 such that for all y ∈ U we have
d (Yt Xt (y) , Xt Yt (y)) ≤ t² φ (t). By shrinking U if necessary, both arc fields will
satisfy E1 and E2 uniformly on U for some constants Λ > 1 and Ω, and also
the speeds of X and Y are uniformly bounded by ρ > 0. For the time being
we assume s and t are sufficiently small so all the compositions of X and Y
appearing below remain in U, i.e., |s| , |t| < δ/ (2ρ). In the last paragraph of the
proof, continuation will eliminate this restriction on s and t.
The calculations can become a little convoluted; after sketching the grid of
compositions of X and Y involved, you might find it easier to construct your
own proof than to read this one.
Let us first check the theorem in the case s = t. This next estimate is the
linchpin of the proof:

Lemma:  d ((Yr Xr )^(i) (x) , (Xr Yr )^(i) (x)) ≤ rφ (r) (1 + rΛ)^(2i)     (1.15)

Let us verify this estimate. Denote xj := (Yr Xr )^(j) (x). Then

d ((Yr Xr )^(i) (x) , (Xr Yr )^(i) (x))
≤ Σ_{k=0}^{i−1} d ((Xr Yr )^(k) (Yr Xr )^(i−k) (x) , (Xr Yr )^(k+1) (Yr Xr )^(i−k−1) (x))
= Σ_{k=0}^{i−1} d ((Xr Yr )^(k) (Yr Xr ) (x_{i−k−1}) , (Xr Yr )^(k) (Xr Yr ) (x_{i−k−1}))
≤ Σ_{k=0}^{i−1} d (Yr Xr (x_{i−k−1}) , Xr Yr (x_{i−k−1})) (1 + rΛ)^(2k)
≤ max_k d (Yr Xr (xk ) , Xr Yr (xk )) Σ_{k=0}^{i−1} (1 + rΛ)^(2k)
≤ r²φ (r) [(1 + rΛ)^(2i) − 1] / [(1 + rΛ)² − 1] ≤ rφ (r) (1 + rΛ)^(2i)

(the last step uses Λ > 1, so (1 + rΛ)² − 1 ≥ 2rΛ ≥ r), as desired.
We will show the following estimate can be made arbitrarily small:

d (Gt Ft (x) , Ft Gt (x)) ≤ d (Gt Ft (x) , (Yt/n Xt/n )^(n) (x))
+ d ((Yt/n Xt/n )^(n) (x) , (Xt/n Yt/n )^(n) (x)) + d ((Xt/n Yt/n )^(n) (x) , Ft Gt (x)) .     (1.16)

The above lemma (1.15), applied with r = t/n and i = n, satisfactorily bounds
the middle term:

d ((Yt/n Xt/n )^(n) (x) , (Xt/n Yt/n )^(n) (x)) ≤ (t/n) φ (t/n) (1 + Λt/n)^(2n) ≤ (t/n) φ (t/n) e^{2Λt} → 0     (1.17)

as n → ∞. Next

d (Gt Ft (x) , (Yt/n Xt/n )^(n) (x))
≤ d (Gt Ft (x) , (Yt/n )^(n) (Xt/n )^(n) (x)) + d ((Yt/n )^(n) (Xt/n )^(n) (x) , (Yt/n Xt/n )^(n) (x)) .

The first term converges to 0 as n → ∞ by the Euler curve approximation, so
let us consider the second term separately:

d ((Yt/n )^(n) (Xt/n )^(n) (x) , (Yt/n Xt/n )^(n) (x))
≤ Σ_{k=0}^{n−2} d ((Yt/n )^(n−k) (Xt/n Yt/n )^(k) (Xt/n )^(n−k) (x) , (Yt/n )^(n−k−1) (Xt/n Yt/n )^(k+1) (Xt/n )^(n−k−1) (x))     (1.18)

using only the triangle inequality, since the k = 0 term starts the telescoping at
(Yt/n )^(n) (Xt/n )^(n) (x) and Yt/n (Xt/n Yt/n )^(n−1) Xt/n (x) = (Yt/n Xt/n )^(n) (x) ends it.
Denote yk := (Xt/n )^(k) (x), so by Condition E1 and the lemma (1.15)

d ((Yt/n )^(n−k) (Xt/n Yt/n )^(k) (Xt/n )^(n−k) (x) , (Yt/n )^(n−k−1) (Xt/n Yt/n )^(k+1) (Xt/n )^(n−k−1) (x))
≤ d ((Yt/n Xt/n )^(k+1) (y_{n−k−1}) , (Xt/n Yt/n )^(k+1) (y_{n−k−1})) (1 + Λt/n)^(n−k−1)
≤ (t/n) φ (t/n) (1 + Λt/n)^(2(k+1)) (1 + Λt/n)^(n−k−1) = (t/n) φ (t/n) (1 + Λt/n)^(n+k+1) .

Therefore (1.18) is bounded by

Σ_{k=0}^{n−2} (t/n) φ (t/n) (1 + Λt/n)^(n+k+1) = (t/n) φ (t/n) (1 + Λt/n)^(n+1) Σ_{k=0}^{n−2} (1 + Λt/n)^(k)
= (t/n) φ (t/n) (1 + Λt/n)^(n+1) [(1 + Λt/n)^(n−1) − 1] / [(1 + Λt/n) − 1]
≤ φ (t/n) (1 + Λt/n)^(n+1) (1 + Λt/n)^(n−1) = φ (t/n) (1 + Λt/n)^(2n) ≤ φ (t/n) e^{2Λt} → 0

(using Λ > 1 in the penultimate inequality) as n → ∞. We just barely have a
bound going to zero at this point, which indicates a sharpness in the theorem.
Now tracing the calculations backward shows d (Gt Ft (x) , (Yt/n Xt/n )^(n) (x)) → 0
as n → ∞. Similar manipulations yield d ((Xt/n Yt/n )^(n) (x) , Ft Gt (x)) → 0 as
n → ∞, and putting these results together in line (1.16) gives
d (Gt Ft (x) , Ft Gt (x)) = 0 as desired.
Now since Ft Gt (x) = Gt Ft (x) for any valid t we can use this and the
semigroup property of flows to get

F2t Gt (x) = Ft Ft Gt (x) = Ft Gt Ft (x) = Gt Ft Ft (x) = Gt F2t (x)

and similarly
Fmt Gnt (x) = Gnt Fmt (x)
for any m, n ∈ N (or Z for the bidirectional case) by induction. Since t may
be chosen arbitrarily, for small enough t we have Fr Gs (x) = Gs Fr (x) for any
valid r, s ∈ Q. By the continuity of the flows F and G, the result follows.
This theorem is a generalization of a classical result in mechanics, as will be
obvious once we introduce the metric space analog of the Lie bracket in §2.2,
which allows an alternate proof given in §3.3.
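For a classical instance (a sketch with assumed fields, not from the text): on M = R², the scaling flow F_s (v) = e^s v and the rotation flow G_t come from the commuting linear vector fields X (v) = v and Y (v) = (−v2 , v1 ), whose arc fields satisfy (1.14), and indeed the flows commute:

```python
import math

# Scaling flow F_s(v) = e^s * v and rotation flow G_t (rotation by angle t)
# on R^2; the generating linear fields commute, and so do the flows.
def F(s, v):
    return (math.exp(s) * v[0], math.exp(s) * v[1])

def G(t, v):
    c, s = math.cos(t), math.sin(t)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

v, s, t = (1.0, 2.0), 0.5, 0.8
a, b = F(s, G(t, v)), G(t, F(s, v))
print(a, "==", b)
```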
Chapter 2

Lie algebra on metric spaces

On a metric space we cannot add or multiply elements without imposing further
structure on the space. But arc fields enjoy some natural algebraic properties
because of the fact that R is embedded in their definition. In fact just as
vector fields on a manifold form a module, arc fields with minimal regularity
assumptions (E1, E2, and closure) form a module up to tangency equivalence.
The Lie bracket of two vector fields is a key tool in the study of geometry
and dynamics on manifolds. In §2.2 a generalization is introduced to exploit
its power on metric spaces. The asymptotic characterization given at line (2.6)
below is the natural definition to choose for the metric space context. Remark-
ably, though, the Lie derivative interpretation is shown to be valid as well at
line (2.8).
The relation of the Lie bracket to the other algebraic definitions on arc fields
is explored, and we find the operations of pull-back and push-forward to be
natural with respect to this algebra. All of this machinery then allows us to
prove Frobenius’ Foliation Theorem on a metric space in Chapter 3.

2.1 Metric space arithmetic
Definition 36 If X and Y are arc fields on M then define the arc field X + Y
on M by
(X + Y )t (x) := Yt Xt (x) .

For any locally Lipschitz function a : M → R define the arc field aX by

aX (x, t) := X (x, a (x) t) . (2.1)

To be fastidiously precise we need to define aXx (t) for all t ∈ [−1, 1], so
technically when |a (x)| > 1 we must specify

aX (x, t) := X (x, a (x) t)   for −1/ |a (x)| ≤ t ≤ 1/ |a (x)| , when a (x) ≠ 0
aX (x, t) := X (x, 1)        for t > 1/ |a (x)| , when a (x) ≠ 0
aX (x, t) := X (x, −1)       for t < −1/ |a (x)| , when a (x) ≠ 0
aX (x, t) := x               for −1 ≤ t ≤ 1 , when a (x) = 0,

using the trick from Example 27. Again, we will not burden ourselves with this
detail; in all cases our concern with the properties of an arc field Xx (t) is only
near t = 0 and a is always continuous.
When X + Y satisfies Conditions E1 and E2, its flow H is then computable
with Euler curves (using line (1.8) above) as

H (x, t) = lim_{n→∞} (X + Y )_{t/n}^{(n)} (x) = lim_{n→∞} (Yt/n Xt/n )^(n) (x) .     (2.3)

Therefore, this definition of X + Y using compositions is a direct generalization
of the concept of adding vector fields on a differentiable manifold (see [1], §4.1A).
The sum of two flows on a metric space was introduced in [24] in the same spirit
as defined here, with commensurable conditions.
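A one-dimensional sketch of line (2.3) (illustration only, with the Banach-space arc fields of Example 11 for f (x) = x and g (x) = 1): the alternating Euler curves (Yt/n Xt/n )^(n) (x) converge to the flow of x′ = x + 1.

```python
import math

# Euler curves for X + Y, where X_t(x) = x + t*x and Y_t(x) = x + t:
# (Y_{t/n} X_{t/n})^{(n)}(x0) -> flow of x' = x + 1, i.e. (x0 + 1)*e^t - 1.
def sum_euler(x, t, n):
    h = t / n
    for _ in range(n):
        x = x + h * x     # one step of X_{t/n}
        x = x + h         # followed by one step of Y_{t/n}
    return x

x0, t = 1.0, 1.0
exact = (x0 + 1) * math.exp(t) - 1
print(sum_euler(x0, t, 100000), "~", exact)
```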
It is a simple definition check to prove aX is an arc field when a is locally
Lipschitz, since aXx (t) = Xx (a (x) t) is Lipschitz in t if Xx (t) is:

ρaX (x) := sup_{s≠t} d (Xx (a (x) s) , Xx (a (x) t)) / |s − t| = sup_{s≠t} d (Xx (s) , Xx (t)) / |s/a (x) − t/a (x)|
= |a (x)| sup_{s≠t} d (Xx (s) , Xx (t)) / |s − t| = |a (x)| ρX (x)

(when a (x) ≠ 0; ρaX (x) = 0 otherwise), and

ρaX (x, r) := sup_{y∈B(x,r)} {ρaX (y)} = sup_{y∈B(x,r)} {|a (y)| ρX (y)} ≤ (|a (x)| + rKa ) ρX (x, r) < ∞,

where Ka is a Lipschitz constant for a.

However, we need another condition to guarantee linear combinations of arc
fields with local flows still give arc fields with local flows:

Definition 37 We say X & Y close if

d (Ys Xt (x) , Xt Ys (x)) = O (|st|)

locally uniformly in x, i.e., if for each x0 ∈ M there exist positive constants
CXY , δ, and r such that for all x ∈ B (x0 , r)

d (Ys Xt (x) , Xt Ys (x)) ≤ CXY |st|

for all |s| , |t| < δ.

Example 38 As in Example 11 let f, g : B → B be Lipschitz vector fields on a
Banach space B, and let X and Y be their corresponding arc fields

X (x, t) := x + tf (x)  and  Y (x, t) := x + tg (x) .

Then

(X + Y ) (x, t) = [x + tf (x)] + tg (x + tf (x)) = x + t [f (x) + g (x + tf (x))] ,

which is tangent to the arc field

Z (x, t) := x + t [f (x) + g (x)]

since

d ((X + Y ) (x, t) , Z (x, t)) = |t| ‖[f (x) + g (x + tf (x))] − [f (x) + g (x)]‖
≤ t² Kg ‖f (x)‖ = o (t) ,

which motivates the definition of the sum of arc fields.
It is easy to check X & Y close:

d (Ys Xt (x) , Xt Ys (x))
= ‖x + tf (x) + sg (x + tf (x)) − [x + sg (x) + tf (x + sg (x))]‖
≤ |t| ‖f (x) − f (x + sg (x))‖ + |s| ‖g (x + tf (x)) − g (x)‖
≤ |t| Kf ‖x − (x + sg (x))‖ + |s| Kg ‖x + tf (x) − x‖
≤ |st| (Kf ‖g (x)‖ + Kg ‖f (x)‖) ,

so CXY := Kf ‖g (x)‖ + Kg ‖f (x)‖.
Proposition 39 Assume X & Y close and satisfy E1 and E2. Then their sum
X + Y satisfies E1 and E2.
Proof. Checking Condition E1:
d ((X + Y )t (x) , (X + Y )t (y))
= d (Yt Xt (x) , Yt Xt (y)) ≤ d (Xt (x) , Xt (y)) (1 + |t| ΛY )
≤ d (x, y) (1 + |t| ΛX ) (1 + |t| ΛY ) = d (x, y) (1 + |t| (ΛX + ΛY ) + t² ΛX ΛY )
≤ d (x, y) (1 + |t| ΛX+Y ) ,
where ΛX+Y := ΛX + ΛY + ΛX ΛY < ∞ (using |t| ≤ 1).
Condition E2:

d ((X + Y )s+t (x) , (X + Y )t (X + Y )s (x))
= d (Ys+t Xs+t (x) , Yt Xt Ys Xs (x))
≤ d (Ys+t Xs+t (x) , Yt Ys Xs+t (x)) + d (Yt Ys Xs+t (x) , Yt Xt Ys Xs (x))
≤ |st| ΩY + d (Ys Xs+t (x) , Xt Ys Xs (x)) (1 + |t| ΛY )
≤ |st| ΩY + [d (Ys Xs+t (x) , Ys Xt Xs (x)) + d (Ys Xt (y) , Xt Ys (y))] (1 + |t| ΛY )     (2.4)

where y := Xs (x). Notice

d (Ys Xs+t (x) , Ys Xt Xs (x)) ≤ d (Xs+t (x) , Xt Xs (x)) (1 + |s| ΛY )
≤ |st| ΩX (1 + |s| ΛY ) = O (|st|)

and the last summand of (2.4) is also O (|st|) since X & Y close, so E2 is verified.
When X & Y close and satisfy E1 and E2, we also have (X + Y ) ≈ (Y + X)
using (2.3), since

(Yt/n Xt/n )^(n) = Yt/n (Xt/n Yt/n )^(n−1) Xt/n     (2.5)

whence both arc fields X + Y and Y + X are (locally uniformly 2nd-order)
tangent to the same flow H.
Example 40 Under the conditions of Theorem 35 we can obviously see Ft Gt =
Ht where again H is the flow generated by X + Y . In fact, line (1.17) in its
proof gives a second (more tedious) verification of the fact that the Euler curves
for X + Y and Y + X converge to each other.
Proposition 41 If X satisfies E1 and E2 and a : M → R is a locally Lipschitz
function, then aX satisfies E1 and E2.
Proof. It suffices, by localizing, to assume a is globally Lipschitz.
d (aXx (t) , aXy (t))
= d (Xx (a (x) t) , Xy (a (y) t))
≤ d (Xx (a (x) t) , Xx (a (y) t)) + d (Xx (a (y) t) , Xy (a (y) t))
≤ |a (x) t − a (y) t| ρ (x) + d (x, y) (1 + |a (y)| |t| ΛX )
≤ d (x, y) (Ka |t| ρ (x) + 1 + |a (y)| |t| ΛX ) = d (x, y) (1 + |t| ΛaX )
where ΛaX := Ka ρ (x) + |a (y)| ΛX < ∞.
E2: For all x0 ∈ M and δ > 0 we know a is bounded by some A > 0 on
B (x0 , δ) since a is Lipschitz.
d aXx (s + t) , aXaXx (s) (t)
= d Xx (a (x) (s + t)) , XXx (a(x)s) (a (Xx (a (x) s)) t)
≤ d Xx (a (x) (s + t)) , XXx (a(x)s) (a (x) t)
+ d XXx (a(x)s) (a (x) t) , XXx (a(x)s) (a (Xx (a (x) s)) t)
≤ a (x) |s| · a (x) |t| ΩX + ρ · |a (x) t − a (Xx (a (x) s)) t|
≤ |st| [a (x)]2 ΩX + |t| ρKa d (x, Xx (a (x) s))
≤ |st| [a (x)]2 ΩX + |st| ρ2 Ka a (x) ≤ |st| ΩaX
where ΩaX := A2 ΩX + ρ2 Ka A.
Combining Propositions 39 and 41 gives

Theorem 42 If a and b are locally Lipschitz functions and X & Y close and
satisfy E1 and E2, then aX + bY is an arc field which satisfies E1 and E2 and
so has a unique local flow.
If in addition a and b are globally Lipschitz and X and Y have linear speed
growth, then aX + bY generates a unique flow.
Proof. We haven’t proven aX and bY close, but this is a straightforward
definition check, as is the fact that aX + bY has linear speed growth.
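For example (a sketch under the Banach-space assumptions of Example 38, with constant coefficients a = 2 and b = 1 and the assumed fields f (x) = 1 and g (x) = x): the Euler curves of aX + bY converge to the flow of the classical linear combination 2f + g.

```python
import math

# Euler curves of aX + bY with X_t(x) = x + t*1, Y_t(x) = x + t*x, a = 2,
# b = 1: they converge to the flow of x' = 2 + x, whose solution is
# x(t) = (x0 + 2)*e^t - 2.
def comb_euler(x, t, n, a=2.0, b=1.0):
    h = t / n
    for _ in range(n):
        x = x + h * a * 1.0   # (aX)_{t/n}: step along a*f, f = 1
        x = x + h * b * x     # (bY)_{t/n}: step along b*g, g = identity
    return x

x0, t = 1.0, 1.0
exact = (x0 + 2) * math.exp(t) - 2
print(comb_euler(x0, t, 100000), "~", exact)
```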
Now we have the beginnings of a linear structure associated with M. For
instance, expressions such as X − Y make sense:
X − Y := X + (−1)Y
where −1 is a constant function on M. Further, 0 is an arc field defined as the
constant map
0 (x, t) := x.
Note the space of all Lipschitz functions a : M → R is a ring.
Theorem 43 (Module properties) Let X,Y , and Z be arc fields which satisfy
Conditions E1 and E2 and assume X & Y , Y & Z, and X & Z all close. Let
a : M → R and b : M → R be locally Lipschitz functions. Then

(i) 0+X =X =X +0 additive identity
(ii) X + −X ≈ 0 additive inverse
(iii) X + (Y + Z) = (X + Y ) + Z additive associativity
(iv) 1X = X scalar identity
(v) a (bX) = (ab) X scalar associativity
(vi) (ab) X = (ba) X scalar commutativity
(vii) X +Y ≈Y +X additive commutativity
(viii) a (X + Y ) ≈ aX + aY additive distributivity
(ix) (a + b) X ≈ aX + bX scalar distributivity
Further, equivalence respects this linearity:
(x) if X ∼ X′ and Y ∼ Y′ then aX + bY ∼ aX′ + bY′
(xi) if X ≈ X′ and Y ≈ Y′ then aX + bY ≈ aX′ + bY′
Proof. All the equalities, (i) and (iii)-(vi), are immediate from the definitions;
(iii) and (v) particularly are due to the general associativity of composition of
maps. (ii) follows immediately from Condition E2:

d ((X − X)t (x) , 0 (x)) = d (X−t Xt (x) , X−t+t (x)) = O (t²) .

(vii) was shown at line (2.5). For (viii) and (ix) we may appeal, as we did
for (vii), to the fact that both sides of the relations are 2nd-order tangent to the
same flows; but they are also easy to verify directly. Checking (viii):
d (a (X + Y )t (x) , (aX + aY )t (x))
= d (Y_{a(x)t} X_{a(x)t} (x) , Y_{a(X_{a(x)t} (x))t} X_{a(x)t} (x)) = d (Y_{a(x)t} (y) , Y_{a(X_{a(x)t} (x))t} (y))

where y := X_{a(x)t} (x) . Then

d (Y_{a(x)t} (y) , Y_{a(X_{a(x)t} (x))t} (y)) ≤ |t| ρY (y) |a (x) − a (X_{a(x)t} (x))|
≤ |t| ρY (y) Ka d (x, X_{a(x)t} (x)) ≤ t² ρY (y) ρX (x) Ka |a (x)| = O (t²)

locally uniformly. Checking (ix), notice if a and b are constant, then this
is simply Condition E2. Since we want the result for Lipschitz functions we
carefully verify

d ([(a + b) X]t (x) , (aX + bX)t (x)) = d (X_{(a(x)+b(x))t} (x) , X_{b(X_{a(x)t} (x))t} X_{a(x)t} (x))
≤ d (X_{(a(x)+b(x))t} (x) , X_{b(x)t} X_{a(x)t} (x)) + d (X_{b(x)t} X_{a(x)t} (x) , X_{b(X_{a(x)t} (x))t} X_{a(x)t} (x))
≤ t² |ab (x)| Ω + d (X_{b(x)t} (y) , X_{b(X_{a(x)t} (x))t} (y))

where y := X_{a(x)t} (x), and this final estimate is bounded by

t² |ab (x)| Ω + ρX (y) |t| |b (x) − b (X_{a(x)t} (x))| ≤ t² (|ab (x)| Ω + ρX (y) ρX (x) Kb |a (x)|)

which is O (t²) locally uniformly.
(x) follows from the two facts

aX ∼ aX′  and  X + Y ∼ X′ + Y′ ,

which are verified easily:

d ([aX]t (x) , [aX′]t (x)) = d (X_{a(x)t} (x) , X′_{a(x)t} (x)) = o (a (x) t) = o (t)

locally uniformly, and

d ([X + Y ]t (x) , [X′ + Y′]t (x))
= d (Yt Xt (x) , Y′t X′t (x)) ≤ d (Yt Xt (x) , Yt X′t (x)) + d (Yt X′t (x) , Y′t X′t (x))
≤ d (Xt (x) , X′t (x)) (1 + ΛY |t|) + d (Yt (y) , Y′t (y))

where y := X′t (x), so the last estimate is o (t) locally uniformly.
(xi): Replace o (t) in the verification of (x) with O (t²).
Consequently under the conditions of closure and E1 and E2, we may now
perform soaring feats of algebra such as

X +Y ∼0 ⇔ Y ∼ −X.
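To see this arithmetic in action, here is a small numerical sketch (our own illustration, not from the text; the sample fields f and g are arbitrary choices): on M = R² with the Euclidean metric, build Euler arc fields X_t(x) = x + t f(x) as in Example 11, define the sum (X + Y)_t := Y_t ∘ X_t, and observe that the additive-commutativity defect d((X + Y)_t(x), (Y + X)_t(x)) decays like O(t²), consistent with relation (vii).

```python
import math

def f(x): return (x[1], -x[0])        # sample vector field on R^2 (a rotation field)
def g(x): return (x[0] ** 2, x[1])    # another sample vector field

def arc(v):
    # Euler arc field X_t(x) = x + t v(x), as in Example 11
    return lambda x, t: (x[0] + t * v(x)[0], x[1] + t * v(x)[1])

def add(X, Y):
    # metric space sum (X + Y)_t(x) := Y_t(X_t(x))
    return lambda x, t: Y(X(x, t), t)

X, Y = arc(f), arc(g)
x = (1.0, 2.0)
# X + Y and Y + X agree only to 2nd order: the ratio defect / t^2 stays bounded
for t in (0.1, 0.05, 0.025):
    defect = math.dist(add(X, Y)(x, t), add(Y, X)(x, t))
    print(round(defect / t ** 2, 3))
```

The bounded ratio is the numerical face of the O(t²) estimate; for constant fields the defect vanishes identically.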

Local flows have the following stronger linearity property, which is printed
here so it may be obliquely referred to in the depths of a long proof in the sequel.

Lemma 44 If F is a local flow then interpreting F as an arc field we can
perform the following operations when both sides are defined:

1. for a, b ∈ R, we have aF + bF = (a + b) F

2. if a and b are real functions then (aF + bF)_t(x) = [(a + b ∘ (aF)_t) F]_t(x).

Proof. This is another obvious definition check. For (2):

(aF + bF)_t(x) = (bF)_t (aF)_t(x) = F_{b((aF)_t(x))t} F_{a(x)t}(x)
= F_{(a(x)+(b∘(aF)_t)(x))t}(x) = [(a + b ∘ (aF)_t) F]_t(x)

and (1) follows from (2).

Proposition 45 Let X,Y , and Z be arc fields which satisfy Conditions E1 and
E2 and assume X & Y, Y & Z, and X & Z all close. Let a, b : M → R be
locally Lipschitz functions. Then
(i) X & X close
(ii) Y & X close
(iii) aX & Y close
(iv) X & Y + Z close.

Proof. (i) follows immediately from Condition E2. The others are easy; let
us do the most difficult here:

d((Y + Z)_s X_t(x), X_t (Y + Z)_s(x)) = d(Z_s Y_s X_t(x), X_t Z_s Y_s(x))
≤ d(Z_s Y_s X_t(x), Z_s X_t Y_s(x)) + d(Z_s X_t Y_s(x), X_t Z_s Y_s(x))
≤ d(Y_s X_t(x), X_t Y_s(x)) (1 + |s| Λ_Z) + O(|st|) = O(|st|).

Closure is not transitive; if it were, every pair of arc fields would close, since
each arc field closes with the constant 0 arc field. This prevents us from forming a natural local linear
structure fully analogous to the tangent bundle of a manifold via equivalence
classes under ≈ tangency. But by means of Proposition 45 and Theorem 42, we
can form successive linear combinations of arc fields which all close and have
unique solutions, making an object with properties akin to a linear subbundle
of the tangent bundle. We invite the reader to explore extra restrictions on
either arc fields or the space M which guarantee all arc fields close, giving a full
tangent bundle.

Examples for this section form the content of Chapter 6. With the module
properties of this section, a homological analysis of metric spaces would be an
interesting exercise.

2.2 Metric space Lie bracket
Review §0.3 from the Introduction for motivation from manifolds.

Definition 46 Given arc fields X and Y with local flows F and G, define the
bracket [X, Y] : M × [−1, 1] → M as

[X, Y](x, t) := G_{−√t} F_{−√t} G_{√t} F_{√t}(x) for t ≥ 0
[X, Y](x, t) := F_{−√|t|} G_{−√|t|} F_{√|t|} G_{√|t|}(x) for t < 0.   (2.6)

Here again, without spelling out the details, we implicitly use the trick from
Example 27 to force [X, Y ] to be well defined if the local flows are not defined
for all t ∈ [−1, 1].
There are many different equivalent characterizations of the Lie bracket on
a manifold. (2.6) uses the obvious choice of the asymptotic characterization to
generalize the concept to metric spaces. [X, Y ] (x, t) traces out a small “paral-
lelogram” in M starting at x, which hopefully almost returns to x. The bracket
measures the failure of F and G to commute as will be made clear in Theorems
63 and 62. Notice

 (F + G − F − G) x, |t|

for t ≥ 0
[X, Y ] (x, t) :=
 (G + F − G − F ) x, |t| for t < 0.

We should very much have preferred to define the bracket of two arc fields X
and Y directly in terms of the arc fields themselves instead of using their flows
F and G. This is not feasible if we want meaningful geometric information as
can be seen in Example 111 p. 119 and Example 112.
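Still, the parallelogram (2.6) is easy to compute when exact flows are available. A hedged numerical sketch (ours, not the book's; the matrices are an arbitrary choice): for linear fields f(x) = Ax and g(x) = Bx with A, B nilpotent, the flows are exactly F_t(x) = (I + tA)x and G_t(x) = (I + tB)x, and ([X, Y](x, t) − x)/t approaches the classical bracket [f, g](x) = (BA − AB)x as t ↓ 0, illustrating that d([X, Y]_t(x), x) = O(t).

```python
# flows of f(x) = Ax and g(x) = Bx for the nilpotent matrices
# A = [[0,1],[0,0]] and B = [[0,0],[1,0]]: e^{tA} = I + tA, e^{tB} = I + tB
def F(x, t): return (x[0] + t * x[1], x[1])
def G(x, t): return (x[0], x[1] + t * x[0])

def bracket(x, t):
    # [X, Y](x, t) := G_{-sqrt(t)} F_{-sqrt(t)} G_{sqrt(t)} F_{sqrt(t)}(x), t >= 0
    s = t ** 0.5
    return G(F(G(F(x, s), s), -s), -s)

x, t = (1.0, 2.0), 1e-4
c = bracket(x, t)
ratio = tuple((c[i] - x[i]) / t for i in range(2))
print(ratio)  # close to (BA - AB)x = (-1.0, 2.0)
```

Here BA − AB = [[-1, 0], [0, 1]], so the limiting direction at x = (1, 2) is (−1, 2); the O(t³/²) remainder explains the small discrepancy at finite t.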

The bracket is not a priori an arc field, since it is not clear whether the speed
is bounded when √t is employed. This is remedied if the arc fields close:

Lemma 47 If X & Y close and satisfy E1 and E2 then

d(Y_{−t} X_{−t} Y_t X_t(x), x) = O(t²)

locally uniformly for x ∈ M.

Proof.

d(Y_{−s} X_{−t} Y_s X_t(x), x)
≤ d(Y_{−s} X_{−t} Y_s X_t(x), Y_{−s} X_{−t} X_t Y_s(x)) + d(Y_{−s} X_{−t} X_t Y_s(x), Y_{−s} Y_s(x)) + d(Y_{−s} Y_s(x), x)
≤ d(Y_s X_t(x), X_t Y_s(x)) (1 + |s| Λ_Y)(1 + |t| Λ_X) + t² Ω_X (1 + |s| Λ_Y) + s² Ω_Y
≤ C_{XY} |st| (1 + |s| Λ_Y)(1 + |t| Λ_X) + t² Ω_X (1 + |s| Λ_Y) + s² Ω_Y ≤ C(|st| + t² + s²)

where

C := max{C_{XY} (1 + Λ_Y)(1 + Λ_X), Ω_X (1 + Λ_Y), Ω_Y}.

Setting s = t gives the result.

Proposition 48 If X and Y satisfy E1 and E2, and F & G close (as arc fields)
then [X, Y ] is an arc field.

Proof. We establish the local bound on speed. The main purpose of Lemma
47 is to give d([X, Y]_t(x), x) = O(t) for t ≥ 0:

d([X, Y]_{t²}(x), x) = d(G_{−t} F_{−t} G_t F_t(x), x) = O(t²)

since F and G satisfy E1 (Theorem 16) and E2 (the 1-parameter local group
property). Similarly, for t < 0,

d(F_{−t} G_{−t} F_t G_t(x), x) = d(F_{−t} G_{−t} F_t G_t(x), F_{−t} F_t(x)) ≤ e^{|t| Λ_X} d(G_{−t} F_t G_t(x), F_t(x))

which, using this trick again, gives

≤ e^{|t|(Λ_X + Λ_Y)} d(F_t G_t(x), G_t F_t(x)) = O(t²) since F & G close.

Therefore

d([X, Y]_t(x), x) = O(t)

for both positive and negative t. Then, since √|t| is Lipschitz except at t = 0,
we see [X, Y] has bounded speed.

Example 49 We haven’t proven Lipschitz vector fields necessarily have flows
which close, but their analog arc field bracket is still an arc field. The issue is
subtle. Let f and g be two vector fields with associated arc fields X and Y (as
in Example 11) and flows F and G. Then

d(G_s F_t(x), F_t G_s(x))
≤ d(G_s F_t(x), Y_s F_t(x)) + d(Y_s F_t(x), Y_s X_t(x)) + d(Y_s X_t(x), X_t Y_s(x))
+ d(X_t Y_s(x), X_t G_s(x)) + d(X_t G_s(x), F_t G_s(x))
≤ O(s²) + O(t²) + O(st) + O(s²) + O(t²)

which is not enough to give that F & G close. But it is enough to prove [X, Y ] is
an arc field, referring to the proof of Proposition 48, which only uses s = t from
the closure condition. So even though Lipschitz vector fields may be nonsmooth,
with undefined classical Lie bracket, their metric space bracket is meaningful
and will give us geometric information on any Banach manifold, as we shall
see in Theorem 62. It is an open question whether the arc field bracket of two
Lipschitz vector fields is always well-posed.

Exercise 50 What conditions do we need to guarantee the Lie algebra proper-
ties for the bracket?
(i) − [X, Y ] = [Y, X]
(ii) [Y, X] + [X, Y ] = 0

(iii) − [X, Y ] ∼ [−X, Y ].
(iv) [X + Y, Z] ∼ [X, Z] + [Y, Z]
(v) [aX, aY ] = a2 [X, Y ] for a ∈ R
(vi) [aX, Y ] ∼ a [X, Y ] for a ∈ R.
Hint: (i) , (ii) and (v) are true for any arc fields with flows. Invent restric-
tions on the arc fields or conditions on the space M which guarantee (iii), (iv)
and (vi).

2.3 Covariance and contravariance
In this section we demonstrate the arithmetic on arc fields is natural from the
category theory point of view.
Let φ : M1 → M2 be a lipeomorphism, i.e., a Lipschitz map with a Lipschitz
inverse. The push-forward of an arc field X on M1 is the arc field φ∗X on M2
given by

φ∗X(x, t) := φ(X(φ⁻¹(x), t)).

This is a direct analog of the push-forward of a vector field on a manifold. The
push-forward of any curve or flow on M1 is defined similarly, e.g., φ∗F(x, t) :=
φ(F(φ⁻¹(x), t)). The push-forward of a function a : M1 → R is the function
φ∗a : M2 → R defined as φ∗a(x) := a(φ⁻¹(x)).
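As a concrete sketch (our own; the particular φ and flow are arbitrary choices), take M1 = M2 = R, the lipeomorphism φ(x) = 2x + sin x (so φ′(x) = 2 + cos x ∈ [1, 3], and both φ and φ⁻¹ are Lipschitz), and the flow F_t(x) = eᵗx of the vector field f(x) = x. The push-forward φ∗F(x, t) = φ(F(φ⁻¹(x), t)) is again a flow, which we can confirm numerically through the 1-parameter group property.

```python
import math

def phi(x):
    # lipeomorphism R -> R: phi'(x) = 2 + cos(x) lies in [1, 3]
    return 2 * x + math.sin(x)

def phi_inv(y):
    # invert phi by bisection; 2x - 1 <= phi(x) <= 2x + 1 gives a bracket
    lo, hi = (y - 1) / 2, (y + 1) / 2
    for _ in range(80):
        mid = (lo + hi) / 2
        if phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def F(x, t):
    # flow of f(x) = x on M1 = R
    return math.exp(t) * x

def push_F(y, t):
    # push-forward flow (phi_* F)(y, t) := phi(F(phi^{-1}(y), t))
    return phi(F(phi_inv(y), t))

# the push-forward of a flow is again a flow: 1-parameter group property
print(abs(push_F(push_F(1.0, 0.3), 0.2) - push_F(1.0, 0.5)) < 1e-9)  # True
```

The group property holds exactly here because φ∗F(φ∗F(y, s), t) = φ(F(F(φ⁻¹(y), s), t)) = φ∗F(y, s + t); only the bisection tolerance enters numerically.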

Proposition 51 If φ : M1 → M2 is a lipeomorphism and the arc field X on
M1 has unique solutions then φ∗ X has unique solutions. If F is the local flow
of X, then φ∗ F is the local flow of φ∗ X.

Proof. This is not conceptually difficult, just notationally labyrinthine.
d(F_x(t + h), X_{F_x(t)}(h)) = o(h) for all x ∈ M1 implies

d((φ∗F)_x(t + h), (φ∗X)((φ∗F)_x(t), h))
= d(φ(F_{φ⁻¹(x)}(t + h)), φ(X(φ⁻¹ φ(F_{φ⁻¹(x)}(t)), h)))
≤ K_φ d(F_{φ⁻¹(x)}(t + h), X(F_{φ⁻¹(x)}(t), h)) = o(h)

since φ⁻¹(x) ∈ M1.
The push-forward of a flow is still a flow, since it clearly satisfies the 1-
parameter local group property and is the identity at t = 0.
The pull-back of a map is defined similarly, e.g.,

φ*F(x, t) := φ⁻¹(F(φ(x), t))

and Proposition 51 clearly holds with pull-backs in place of push-forwards, since
push-forward and pull-back are inverse operations; see part (vi) of the following

Theorem 52 (Algebraic properties of covariance and contravariance)

Let M_i be metric spaces and let φ : M1 → M2 and ψ : M2 → M3 be
lipeomorphisms. Let a, b : M1 → R and ã, b̃ : M2 → R be Lipschitz functions.
Then we have
(i) φ∗(ab) = (φ∗a)(φ∗b) and φ*(ãb̃) = (φ*ã)(φ*b̃)
(ii) φ∗(a + b) = φ∗a + φ∗b and φ*(ã + b̃) = φ*ã + φ*b̃
(iii) (ψ ∘ φ)* = φ* ∘ ψ* (contravariance flips the order)
and (ψ ∘ φ)∗ = ψ∗ ∘ φ∗ (covariance preserves the order).
Let X and Y be arc fields on M1 and X̃ and Ỹ be arc fields on M2. Then
(iv) φ∗(X + Y) = φ∗(X) + φ∗(Y) and φ*(X̃ + Ỹ) = φ*(X̃) + φ*(Ỹ)
(v) φ∗(aX) = φ∗(a) φ∗(X) and φ*(ãX̃) = (ã ∘ φ) φ*(X̃) = φ*(ã) φ*(X̃)
(vi) φ*φ∗ = Id and φ∗φ* = Id
(vii) [φ∗X, φ∗Y] = φ∗[X, Y] and [φ*X̃, φ*Ỹ] = φ*[X̃, Ỹ].
Proof. These are all obvious definition checks; most hold in more general
settings. (i):

φ∗(ab)(x) = (ab)(φ⁻¹(x)) = a(φ⁻¹(x)) b(φ⁻¹(x)) = (φ∗a)(x)(φ∗b)(x) = ((φ∗a)(φ∗b))(x);

for the pull-back version replace φ⁻¹ with φ. (ii):

φ∗(a + b)(x) = (a + b)(φ⁻¹(x)) = a(φ⁻¹(x)) + b(φ⁻¹(x)) = (φ∗a)(x) + (φ∗b)(x) = (φ∗a + φ∗b)(x).

(iii) This is valid for functions, arc fields and flows. Essentially it follows
from (ψ ∘ φ)⁻¹ = φ⁻¹ ∘ ψ⁻¹. Let us explicitly check arc fields:

(ψ ∘ φ)*X_t(x) = (ψ ∘ φ)⁻¹(X_t((ψ ∘ φ)(x))) = φ⁻¹(ψ⁻¹(X_t(ψφ(x))))
= φ⁻¹((ψ*X)_t(φ(x))) = (φ*(ψ*X))_t(x) = (φ* ∘ ψ*) X_t(x)

(ψ ∘ φ)∗X_t(x) = (ψ ∘ φ)(X_t((ψ ∘ φ)⁻¹(x))) = ψ(φ(X_t(φ⁻¹(ψ⁻¹(x)))))
= ψ((φ∗X)_t(ψ⁻¹(x))) = (ψ∗ ∘ φ∗) X_t(x).

(iv) We check only pull-backs:

φ*(X + Y)(x, t) = φ⁻¹((X + Y)(φ(x), t)) = φ⁻¹(Y_t X_t φ(x))
= φ⁻¹ Y_t φ φ⁻¹ X_t φ(x) = (φ*(X) + φ*(Y))(x, t).

(v)

φ*(aX)(x, t) = φ⁻¹((aX)(φ(x), t)) = φ⁻¹(X(φ(x), (a ∘ φ)(x) t))
= φ*(X)(x, (a ∘ φ)(x) t) = ((a ∘ φ) φ*(X))(x, t).

(vi) Let us check two cases; all the others are similarly obvious.

φ*φ∗F(x, t) = φ⁻¹(φ∗F(φ(x), t)) = φ⁻¹(φ(F(φ⁻¹φ(x), t))) = F(x, t)
φ*φ∗a(x) = (φ∗a)(φ(x)) = a(φ⁻¹φ(x)) = a(x).

(vii) Automatic from the definition, since φ∗F and φ∗G are the local flows
of φ∗X and φ∗Y. Checking t ≥ 0:

[φ∗X, φ∗Y]_{t²}(x) = (φ∗G)_{−t}(φ∗F)_{−t}(φ∗G)_t(φ∗F)_t(x)
= φ G_{−t} φ⁻¹ φ F_{−t} φ⁻¹ φ G_t φ⁻¹ φ F_t φ⁻¹(x)
= φ G_{−t} F_{−t} G_t F_t φ⁻¹(x) = φ [X, Y]_{t²}(φ⁻¹(x)) = φ∗[X, Y]_{t²}(x);

t < 0 is just as easy.
Notice the formulas hold formally for arbitrary functions a : M → R, but we
restrict ourselves to Lipschitz functions to guarantee φ∗(aX) still has locally
bounded speed.
Evidently this proposition shows lipeomorphisms on metric spaces engender
maps quite similar to module homomorphisms in view of Theorem 43. How
much of homology theory can be grafted onto this context?

Since pull-back and linearity are established for arc fields, we can now explore
another characterization of the bracket. In the context of M being a smooth
manifold, let F and G be local flows generated by smooth vector fields f :
M → T M and g : M → T M . There it is well known the following “dynamic”
characterization of the traditionally defined Lie bracket is equivalent to the
asymptotic characterization

[f, g] = d/dt|_{t=0} (F_t)* g = lim_{t→0} ((F_t)* g − g)/t.   (2.7)
Using this for inspiration, we return to the context of metric spaces with F and
G again the local flows of arc fields X and Y. We have

(F_t)* G_t(x) = (t[X, Y] + G)_t(x) for t ≥ 0 and   (2.8)
(F_s)* G_s(x) = (−s[−X, −Y] − G)_{−s}(x) for s < 0   (2.9)

which hold because

(t[X, Y] + G)_t(x) = G_t [X, Y]_{t²}(x)
= G_t G_{−t} F_{−t} G_t F_t(x) = F_{−t} G_t F_t(x) = (F_t)* G_t(x)

(−s[−X, −Y] − G)_{−s}(x)
= G_s [−X, −Y]_{s²}(x) = G_s (−G)_{−|s|} (−F)_{−|s|} (−G)_{|s|} (−F)_{|s|}(x)
= G_s G_{|s|} F_{|s|} G_{−|s|} F_{−|s|}(x) = F_{−s} G_s F_s(x) = (F_s)* G_s(x).

These facts will be used in Chapter 3 for a proof of the fundamental result on
foliations of metric spaces, Theorem 62, as will the following

Proposition 53 If X has local flow F then (F_s)* X ∼ X.
If X satisfies E1 and E2 then (F_s)* X ≈ X.

Proof. Using the properties of flows, F_t = F_{−s+t+s} = F_{−s} F_t F_s and
F_t⁻¹ = F_{−t}, we get

d(((F_s)* X)_t(x), X_t(x))
≤ d(F_{−s} X_t F_s(x), F_{−s} F_t F_s(x)) + d(F_t(x), X_t(x))
≤ e^{|s| Λ_X} d(X_t(y), F_t(y)) + o(t) = o(t)

where y := F_s(x) and the exponential comes from Theorem 16.
If X satisfies E1 and E2 then o(t) may be replaced with O(t²), since then
X ≈ F by Corollary 14.

Definition 54 Let φ : M1 → M2 be a lipeomorphism. The arc fields X : M1 →
AM1 and Y : M2 → AM2 are called φ-related, denoted X ∼φ Y , if Y ∼ φ∗ X.
If Y ≈ φ∗ X then X and Y are 2nd-order φ-related, denoted X ≈φ Y .
(Remember ∼ and ≈ have an implicit local uniformity condition.)

Proposition 53 may therefore be restated as X ∼Fs X.

Proposition 55 Let φ be a lipeomorphism.
(i) if X ∼ Y then φ∗ X ∼ φ∗ Y and φ∗ X ∼ φ∗ Y .
(ii) Y ∼ φ∗ X iff φ∗ Y ∼ X.
(Consequently φ-related may be equivalently written with the pull-back instead
of the push-forward, then re-rewritten as an equivalence relation due to the fol-
lowing transitive property.)
(iii) X ∼φ Y and Y ∼ψ Z implies X ∼ψ◦φ Z.
Further (i) , (ii) , and (iii) hold with ≈ in place of ∼.

Proof. (i)

d(φ∗X_t(x), φ∗Y_t(x)) = d(φ(X_t(φ⁻¹(x))), φ(Y_t(φ⁻¹(x))))
≤ K_φ d(X_t(φ⁻¹(x)), Y_t(φ⁻¹(x))) = o(t).

Similarly for φ∗ . (ii) follows from (i) since φ∗ and φ∗ are inverse:

Y ∼ φ∗ X ⇒ φ∗ Y ∼ φ∗ φ∗ X = X.

(iii) by definition Y ∼ φ∗ X and Z ∼ ψ∗ Y so (i) implies Z ∼ ψ∗ Y ∼ ψ∗ φ∗ X =
(ψ ◦ φ)∗ X.

Proposition 56 Assume X and Y are arc fields on M which satisfy Conditions
E1 and E2, and let F and G be the flows of X and Y. Then
(i) (G_t)∗F (respectively (G_t)*F) is the flow generated by (G_t)∗X (respectively
(G_t)*X).
(ii) X ∼φ Y iff G = φ∗F iff F ∼φ G.
(iii) (F_t)∗X ∼ X.
(iv) (G_t)∗X and (G_t)*X satisfy Condition E2 (but not necessarily E1).

Proof. (i)

((G_t)∗F)_{r+s}(x) = G_t F_{r+s}(G_{−t} x) = G_t F_r F_s G_{−t} x = G_t F_r G_{−t} G_t F_s G_{−t} x
= ((G_t)∗F_r)((G_t)∗F_s)(x)

((G_t)∗F)_0(x) = G_t F_0(G_{−t} x) = x

consequently (G_t)∗F is a flow, and it is tangent to (G_t)∗X since

d((G_t)∗F_s(x), (G_t)∗X_s(x)) = d(G_t F_s(G_{−t} x), G_t X_s(G_{−t} x))
≤ e^{Λ_Y |t|} d(F_s(G_{−t} x), X_s(G_{−t} x)) = O(s²).

The claim is settled by Proposition 51, which also informs (ii):

G ∼ Y ∼ φ∗X ∼ φ∗F

and so G = φ∗F, since φ∗F is a local flow and G is the unique flow tangent to Y.
(iii) We repeat this fact because (ii) now gives us an automatic proof with
φ := F_t.
(iv) E2:

d((G_t)∗X_{r+s}(x), (G_t)∗X_s((G_t)∗X_r(x)))
= d(G_t X_{r+s}(G_{−t} x), G_t X_s(G_{−t} G_t X_r(G_{−t} x)))
= d(G_t X_{r+s}(G_{−t} x), G_t X_s X_r(G_{−t} x)) ≤ e^{Λ_Y |t|} d(X_{r+s}(G_{−t} x), X_s X_r(G_{−t} x))
≤ e^{Λ_Y |t|} |rs| Ω_X = |rs| Ω

where Ω := e^{Λ_Y |t|} Ω_X.
E1 is not quite satisfied, though: for t ≥ 0

d((G_t)∗X_s(x), (G_t)∗X_s(y))
= d(G_t X_s(G_{−t} x), G_t X_s(G_{−t} y)) ≤ e^{Λ_Y t} d(X_s(G_{−t} x), X_s(G_{−t} y))
≤ e^{Λ_Y t} d(G_{−t} x, G_{−t} y)(1 + sΛ_X) ≤ e^{2Λ_Y |t|} d(x, y)(1 + sΛ_X).

Proposition 57 Assume E1 and E2 are satisfied by X, X′, Y and Y′.
If X ∼φ X′ and Y ∼φ Y′ then [X, Y] ∼φ [X′, Y′].

Proof. φ∗X ∼ X′ and φ∗Y ∼ Y′, so since φ∗F is the local flow of φ∗X it is
also the local flow of X′, and φ∗G is similarly the local flow of Y′. Then

φ∗[X, Y] = [φ∗X, φ∗Y] = [X′, Y′]

by definition of the bracket.
Chapter 3

Foliations

The geometry of a manifold M may be understood more easily if we can foliate it. A
foliation is a fundamental deconstruction of M , a partition of M into a family
of sets called the “leaves” of the foliation. The most elementary example is to
partition Rn with the leaves Rp × {x} where x ∈ Rq and p + q = n with p, q ≥ 0.
In this manner Rn is partitioned into Rq many copies of Rp . The leaves of
more interesting, curved foliations may be constructed by first specifying sets of
vector fields (called distributions) to which the leaves are tangent. If conversely
a foliation is first established, then the dynamics of flows for vector fields in the
distribution tangent to the foliation may be better understood in terms of the
geometry of the foliation.
In this chapter we again substitute arc fields for vector fields and apply the
limited Lie algebra developed in Chapter 2 to prove Frobenius’ Foliation Theo-
rem on a metric space. These ideas lead to an infinite-dimensional control theory
in §3.5, which gives new approximation schemes using previously unanticipated
families of functions in Chapter 5.

3.1 Introduction
To give a quick impression of the very geometrical topic of this chapter, let’s
look at a few drawings of leaves and foliations before we delve into the technical
definitions. In R3 consider the set S consisting of the x3 -axis and the unit circle
in the x1 -x2 plane,

S := {x ∈ R³ | x₁ = 0 = x₂} ∪ {x ∈ R³ | x₁² + x₂² = 1, x₃ = 0}

where x = (x1 , x2 , x3 ). Figures 3.1-3.5 demonstrate different foliations of the
space M := R3 \S.
Figure 3.1 displays a foliation of M by tori, generated by rotating the circles
in Figure 3.2 about the x3 -axis (see Example 5 in §0.3 for the formula). As
the tori shrink they converge to the circle {x ∈ R³ | x₁² + x₂² = 1, x₃ = 0}; as they grow
they fill up the rest of R³ except the z-axis {x ∈ R³ | x₁ = 0 = x₂}. Then we can


Figure 3.1: Torus foliation (4 leaves)

Figure 3.2: rotating circles

see the three-dimensional space M is homeomorphic to a continuum's worth of
copies of the torus T², i.e., M ≅ T² × (0, ∞). Each torus is a leaf, and the
collection of leaves forms a foliation of M.
We don’t include the x3 -axis nor the x1 ,x2 -plane unit circle in M , because
they are 1-dimensional instead of 2 and so cannot be leaves in the foliation.
Technically the collection of all the 2-D surfaces along with the two 1-D curves
are a stratification of R3 , cf., [22]. You might wish to compare these figures
with the stereographic projection to R3 ∪ {∞} of the Hopf fibration of the unit
three-sphere in R4

S³ := {x ∈ R⁴ | ‖x‖ = 1}
(cf. [63, p. 103], e.g.) where the construction is more natural. Through this
projection, these pictures give a hint on how to construct several topologically
distinct fixed-point-free flows on S 3 , and flows with extremely complex basins
of attraction on this simplest of compact 3-dimensional manifolds.
Now if we instead have these circles grow as they rotate around the x3 -axis,
we get an ouroboros (or snake-eating-its-own-tail) foliation, Figure 3.3. Each
ouroboros leaf is homeomorphic to an infinite cylinder, or S¹ × (0, ∞), where
S¹ denotes the circle. So M ≅ (S¹ × (0, ∞)) × S¹. To see this, start with an
initial ouroboros and follow the foliation, expanding along “larger” leaves. As
the leaves grow, filling out the foliation, we return in finite time to the initial ouroboros.

Figure 3.3: Ouroboros foliation (1 leaf)

Next a spiral is rotated around the z-axis, Figure 3.4. Again (but for
different reasons) the leaf is homeomorphic to a cylinder and
M ≅ ((0, ∞) × S¹) × S¹.

Figure 3.4: Spiral torus foliation (1 leaf)

You can test your geometric imagination by adding one more twist. Shrink
the spiral as it rotates around the z-axis. This spiral may or may not intersect
the previous copies of itself as it rotates and shrinks, depending on whether or
not the ratio of the rotation rate and the shrinking rate is rational. For Figure
3.5 the rates are chosen so the spiral rotates twice around the x3 -axis before
closing on itself perfectly, giving a surface homeomorphic to a cylinder as in the
case of Figure 3.4, but now the cylinder is twisted twice, Figure 3.6.

This twisted cylinder may be disorienting, but it is still orientable. Try
tracing an edge to see how the solution to a vector field tangent to one set of
grid lines would be periodic; then in Figure 3.7 we see how an initial condition
close to the center would follow a path that leads it far away before returning.

Figure 3.5: Ouroboros spiral with 720°-rotation/dilation symmetry (1 leaf)

If the ratio of dilation to rotation is irrational the surface will not close with
the rotation. Then a single leaf is homeomorphic to the plane, R2 , and dense
in M. See Figure 3.8. We get a situation reminiscent of the irrational flow on
the torus T 2 whose flow lines are dense (Example 5).

In the general case, a local flow gives a local foliation with 1-dimensional
leaves—the integral curves’ paths. In this sense Frobenius’ Foliation Theorem
generalizes the Fundamental Theorem of ODEs. A flow without equilibria fo-

Figure 3.6: Twisted cylinder

Figure 3.7: A “longer” twisted cylinder

Figure 3.8: Ouroboros spiral without rational rotation/dilation symmetry (1 leaf)

liates the whole space (remember Example 5). The existence of a nontrivial
foliation is not guaranteed on general spaces M and depends on the global
topology of M. For example, there is no 1-dimensional foliation of S n for even
numbers n, nor for any compact surface except the torus and the Klein bottle.

Higher-dimensional foliations are fundamental in many diverse subjects; the
three areas that inspire our interest are differential geometry, dynamical sys-
tems, and control theory. The heuristic geometric idea is to generalize vector
fields to “plane fields”, which may be algebraically defined. Plane fields of any
dimension are used and are called distributions, which are not to be confused
with the generalized functions from analysis and other mathematical concepts
which unfortunately share the bland term.
Intuitively we might expect to be able to “integrate” a plane field to get a
surface tangent to the plane field starting from any point; i.e., there should exist
a foliation tangent to any distribution, giving a basic link between algebra and
geometry. However, unlike the 1-dimensional case, where Lipschitz continuous
vector fields always have integral curves, many well-defined smooth plane fields
have no integral surfaces; such distributions are called non-holonomic. A
non-holonomic plane field is an intuitively disturbing object in geometry (but
it is the starting point of the subject called "contact geometry").

Example 58 The archetypical non-involutive distribution is in R3 with the set
of planes given by the linear spans of the vector fields f, g : R3 → R3 where
f (x1 , x2 , x3 ) := (1, 0, 0) and g (x1 , x2 , x3 ) := (0, 1, x1 ). We readily verify this

Figure 3.9: Non-involutive distribution

distribution has no surface whose tangent spaces coincide with the plane field,
because the reachable set from any point is all of R3 : we can move tangentially
to the plane field by moving parallel to the x1 -axis at any time; at the x2 ,x3 -plane
we can also move parallel to the x2 -axis; at any other point we can move either
up or down diagonally. If there were an integral surface for this distribution, the
reachable set from a point on the surface would be limited to the 2-dimensional
surface (by Nagumo invariance, Theorem 33). Therefore this distribution is non-holonomic.
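The classical bracket computation behind this example can be checked numerically (a sketch of ours using central differences, not the book's metric space machinery): [f, g] = Dg·f − Df·g = (0, 0, 1), while every combination af + bg = (a, b, bx₁) has its third component tied to the second, so (0, 0, 1) never lies in the plane field.

```python
def f(x): return (1.0, 0.0, 0.0)
def g(x): return (0.0, 1.0, x[0])

def dir_deriv(V, x, v, h=1e-5):
    # directional derivative DV(x).v by central differences
    xp = tuple(x[i] + h * v[i] for i in range(3))
    xm = tuple(x[i] - h * v[i] for i in range(3))
    return tuple((V(xp)[i] - V(xm)[i]) / (2 * h) for i in range(3))

def bracket(f, g, x):
    # classical Lie bracket [f, g](x) = Dg(x).f(x) - Df(x).g(x)
    dgf, dfg = dir_deriv(g, x, f(x)), dir_deriv(f, x, g(x))
    return tuple(dgf[i] - dfg[i] for i in range(3))

print(bracket(f, g, (0.2, -1.0, 3.0)))  # approximately (0.0, 0.0, 1.0)
```

Matching (0, 0, 1) against (a, b, bx₁) would force a = b = 0, contradicting the third component, so the distribution is not involutive.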

In control theory, a non-holonomic plane field may be a boon: if a 2-
dimensional distribution generated by a pair of vector fields is non-holonomic,
then the reachable set is more than 2 dimensional. In Chapter 4 we’ll see how
a 2-dimensional distribution in L² may have an infinite-dimensional reachable set.
The Local Frobenius Theorem (Theorem 62) gives an algebraic property
which characterizes holonomic distributions: when the bracket of any vector
fields in the plane field are still in the plane field, then the distribution has a
tangent foliation. The technical terminology is: involutive distributions are
integrable. Extending the integral surfaces by continuation gives us the Global
Frobenius Theorem (Theorem 75): each extended integral surface is a leaf, and
the collection of leaves partitions M into a foliation.
In §3.2 we will prove the Local Frobenius Theorem on a fully general metric
space. A novel approach to the proof is needed in order to use the metric
space bracket. This paragraph gives an outline of the proof, simplified to vector
fields on a manifold. The terminology will be clarified in §3.2, and Figures 3.10
and 3.11 from §3.2 may aid your intuition. The crux of the Local Frobenius
Theorem in two dimensions is as follows: Given two transverse vector fields
f, g : M → T M there exists an integral surface (tangent to linear combinations
of f and g) through any point x0 ∈ M , under the assumption that the Lie
bracket satisfies [f, g] = af + bg for some choice of functions a, b : M → R
(involutivity of f and g). To prove this, define

S := {Ft Gs (x0 ) ∈ M| |s| , |t| < δ}

where F and G are the local flows of f and g. Since f and g are transverse,
we may choose δ > 0 small enough for S to be a well-defined surface. S will
be shown to be the desired integral surface through x₀. Notice S is tangent to
f by construction, but it is not immediately clear S is tangent to a′f + b′g for
arbitrarily chosen a′, b′ ∈ R. Notice, though, that by construction S is tangent
to g at any point x = G_s(x₀), and also S is tangent to a″f + b″g at this same
x for any functions a″ and b″. Therefore establishing

(F_t)*(a′f + b′g) = a″f + b″g at x = G_s(x₀)   (3.1)

for some functions a″ and b″ proves S is tangent to a′f + b′g at an arbitrary
point z = F_t G_s(x₀) ∈ S, since the push-forward (F_t)∗ and the pull-back (F_t)*
are inverse to each other and preserve tangency, being local lipeomorphisms.
Next, since the Lie bracket equals the Lie derivative,

lim_{h→0} (F_h*(g) − g)/h = [f, g] = af + bg

for some a and b by involutivity, so

F_h*(g) = g + h(af + bg) + o(h) = ã f + b̃ g + o(h)

with ã := ha and b̃ := 1 + hb.

Using the fact that F_h*(f) = f for any h, and the linearity of pull-back for
fixed t, we have for functions a_i and b_i : M → R

F_{t/n}*(a_i f + b_i g) = a_{i+1} f + b_{i+1} g + o(1/n)

for some functions a_{i+1} and b_{i+1}. Then since

F_t* = F_{t/n}* F_{t/n}* ... F_{t/n}* (composition n times)

we get (3.1) as follows:

F_t*(a₀ f + b₀ g) = lim_{n→∞} (F_{t/n}*)ⁿ (a₀ f + b₀ g)
= lim_{n→∞} (a_n f + b_n g + n · o(1/n)) = a_∞ f + b_∞ g + 0

completing the sketch for manifolds.
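To illustrate the conclusion on a familiar example (our own choice, not from the text): the rotation fields f(x) = (−x₂, x₁, 0) and g(x) = (0, −x₃, x₂) on R³ \ {0} form an involutive pair away from the points where they are collinear, their flows are rotations about the x₃- and x₁-axes, and the surface S = {F_t G_s(x₀)} stays on the sphere of radius ‖x₀‖; the spheres are the leaves of the resulting foliation.

```python
import math

def rot_z(x, t):
    # flow of f(x) = (-x2, x1, 0): rotation about the x3-axis
    c, s = math.cos(t), math.sin(t)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1], x[2])

def rot_x(x, s):
    # flow of g(x) = (0, -x3, x2): rotation about the x1-axis
    c, si = math.cos(s), math.sin(s)
    return (x[0], c * x[1] - si * x[2], si * x[1] + c * x[2])

def norm(x): return math.sqrt(sum(c * c for c in x))

x0 = (1.0, 0.0, 0.5)
# sample the candidate integral surface S = {F_t G_s(x0)}:
# every sampled point stays on the sphere of radius |x0|
on_sphere = all(abs(norm(rot_z(rot_x(x0, s), t)) - norm(x0)) < 1e-12
                for s in (-0.5, 0.1, 0.4) for t in (-0.3, 0.2, 0.5))
print(on_sphere)  # True
```

This is exactly the S = {F_t G_s(x₀)} construction from the sketch above, in a case where the leaf (the sphere) is known in advance.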

[40], [22] and [64] are good introductions with deeper insights on foliations on
finite-dimensional manifolds. Topological, analytical, and geometric questions
have been explored voluminously; [64] has 263 pages of references up to 1996
and some examples in infinite dimensions.

3.2 Local integrability
In this section we prove the 2-dimensional local Frobenius Theorem on a metric
space, Theorem 62.

Definition 59 Two arc fields X and Y are (locally uniformly) transverse if
for each x0 ∈ M there exists a δ > 0 such that

d(X_s(x), Y_t(x)) ≥ δ(|s| + |t|)

for all |s|, |t| < δ and all x ∈ B(x₀, δ).

Example 60 On the plane R² with Euclidean norm ‖·‖, any two linearly
independent vectors u, v ∈ R² give us the transverse arc fields

X_t(x) := x + tu and Y_t(x) := x + tv.

To check this, it is perhaps easiest to define a new norm on R² by

‖x‖_uv := |x₁| + |x₂|

where x = x₁u + x₂v and x₁, x₂ ∈ R. Since all norms on R² are metrically
equivalent, there must exist a constant C > 0 such that ‖x‖_uv ≤ C‖x‖ for all
x ∈ R². Then taking δ := 1/C,

d(X_s(x), Y_t(x)) = ‖su − tv‖ ≥ δ‖su − tv‖_uv = δ(|s| + |t|).

Localization shows any pair of continuous vector fields f and g on a differentiable
manifold (metrized in any manner) give transverse arc fields if f and g are non-
colinear at each point.
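A quick numerical check of the estimate in Example 60 (our own; u, v and the constant are particular choices): with u = (1, 0) and v = (1, 1), any w = x₁u + x₂v has ‖w‖_uv = |w₁ − w₂| + |w₂| ≤ |w₁| + 2|w₂| ≤ √5 ‖w‖ by Cauchy-Schwarz, so δ = 1/√5 works.

```python
import math

u, v = (1.0, 0.0), (1.0, 1.0)      # linearly independent vectors in R^2
delta = 1 / math.sqrt(5)           # since ||w||_uv <= sqrt(5) ||w|| for this u, v

def d_XY(s, t):
    # d(X_s(x), Y_t(x)) = ||s u - t v|| (independent of the base point x)
    return math.hypot(s * u[0] - t * v[0], s * u[1] - t * v[1])

grid = [i / 10 for i in range(-10, 11)]
transverse = all(d_XY(s, t) >= delta * (abs(s) + abs(t)) - 1e-12
                 for s in grid for t in grid)
print(transverse)  # True
```

By homogeneity, checking the inequality on a grid of (s, t) with |s| + |t| ≤ 2 is representative of all scales.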

A (2-dimensional) surface is a 2-dimensional topological manifold, i.e., lo-
cally homeomorphic to R2 .
For any subset N ⊂ M and element x ∈ M the distance from x to N is
defined (with an excusable overload of notation d) as

d(x, N) := inf{d(x, y) : y ∈ N}.

This new function d is not a metric, obviously, but it does satisfy a kind of
triangle inequality:
d(x, N) ≤ d(x, y) + d(y, N)
for all x, y ∈ M , as is easy to verify.
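Indeed, it follows from d(x, z) ≤ d(x, y) + d(y, z) by taking the infimum over z ∈ N; a throwaway numerical confirmation (ours) on a finite sample set:

```python
import math, random

def dist_to_set(x, N):
    # d(x, N) := inf{ d(x, y) : y in N }, here over a finite sample N
    return min(math.dist(x, y) for y in N)

random.seed(0)
N = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
x, y = (2.0, 0.5), (-0.3, 1.7)
# the triangle-type inequality d(x, N) <= d(x, y) + d(y, N)
print(dist_to_set(x, N) <= math.dist(x, y) + dist_to_set(y, N))  # True
```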

Definition 61 A surface S ⊂ M is an integral surface for two transverse
arc fields X and Y if given any Lipschitz functions a, b : M → R we have S
locally uniformly tangent to aX + bY restricted to S, i.e.,

d((aX + bY)_t(x), S) = o(t)

locally uniformly for x ∈ S. Locally uniform tangency is denoted S ∼ aX + bY .

Theorem 62 Assume X and Y are transverse, and satisfy E1 and E2 on a
locally complete metric space M . If [X, Y ] ∼ aX + bY for some Lipschitz
functions a, b : M → R, then for each x0 ∈ M there exists an integral surface S
through x0 .

Proof. The metric space analogs of the bracket and the pullback defined
in §2.2 and §2.3 will now be inserted into the manifold outline given in §3.1. A
rigorous verification of the analytic estimates requires voluminous, but
straightforward, calculations painstakingly detailed in the next six pages.

Figure 3.10: integral surface S

Let F and G be the local flows of X and Y . Define

S := {Ft Gs (x0 ) | |s| , |t| < δ}

where δ > 0 is chosen small enough for S to be a well-defined surface (Figure
3.10). I.e., Ft1 Gs1 (x0 ) = Ft2 Gs2 (x0 ) implies t1 = t2 and s1 = s2 , so

φ : (−δ, δ) × (−δ, δ) ⊂ R2 → S ⊂ M

defined by φ (s, t) := Ft Gs (x0 ) is a homeomorphism. Finding such a δ is possible
since X and Y are transverse. To see this, assume the contrary. Then there are
different choices of si and ti which give Ft1 Gs1 (x0 ) = Ft2 Gs2 (x0 ) which implies
Gs1 (x0 ) = Ft3 Gs2 (x0 ) and letting y := Gs2 (x0 ) we must also then have

Ft (y) = Gs (y) . (3.2)

If our current contrary assumption were true, then for all ε > 0 there would
exist s and t with |s| , |t| < ε such that (3.2) holds. This contradicts the fact
that X and Y are transverse.
We will show S is a desired integral surface through x0 . Assume δ is also
chosen small enough so throughout S the functions |a| and |b| are bounded,
while the constants Λ, Ω, and ρ hold for X and Y uniformly, and the closure
of B (x, 2δ (ρ + 1)) is complete. This is possible because F and G have locally
bounded speeds, since X and Y do.
S ∼ X by construction, but it is not immediately clear S ∼ a′X + b′Y for
arbitrarily chosen a′, b′ ∈ R. We can use

a′X + b′Y ∼ a′F + b′G

and so we will show S ∼ a′F + b′G. We need to show this is true for an
arbitrary point z ∈ S, so assume z := F_t G_s(x₀) for some s and t ∈ R. When
t = 0, however, i.e., at any x := G_s(x₀), it is easy to see our desired result
holds because, due to the construction of S, we have S ∼ a′F + b′G since
a′F + b′G ∼ b′G + a′F (Theorem 43 (vii)) and

(b′G + a′F)_h(x) = F_{a′(G_{b′(x)h}(x))h} G_{b′(x)h}(x) = F_{a′(G_{b′(x)h}(x))h} G_{b′(x)h} G_s(x₀) ∈ S

when h is small.
(x0 , x, z, s and t are now fixed for the remainder of the proof; however, we
only explicitly check the case t > 0, indicating the changes where needed to
check the t < 0 case.)
If we prove

(F_t)*(a′F + b′G) ∼ S at x = G_s(x₀)   (3.3)

this will prove S ∼ a′F + b′G at z, since the push-forward (F_t)∗ and the
pull-back (F_t)* are inverse, and are local lipeomorphisms, and so preserve tangency.
(See Figure 3.11.)

Figure 3.11: pull-back to Gs (x0 )

Recalling (2.8):

F_t* G_t(x) = (t[X, Y] + G)_t(x)

so

F_{t/n}* G_{t/n}(x) = ((t/n)[X, Y] + G)_{t/n}(x)   (3.4)

for our previously fixed small t ≥ 0 and arbitrary positive integer n ∈ N. (For
t < 0 use (2.9) instead.) Clearly, for any arc fields Z and Z̃,

d(Z_s(x), Z̃_s(x)) = o(s) implies
d((sZ)_s(x), (sZ̃)_s(x)) = d(Z_{s²}(x), Z̃_{s²}(x)) = o(s²)   (3.5)

and so

[X, Y] ∼ aF + bG implies
d(((t/n)[X, Y])_{t/n}(x), ((t/n)(aF + bG))_{t/n}(x)) = o(1/n²)   (3.6)

since t is fixed.
We use these facts to establish (3.3), first checking

d((F_t*(a′F + b′G))_{t/n}(x), S) = o(1/n)

as n → ∞. At the end of the proof we will replace t/n by arbitrary r → 0.
Using the linearity of pull-back (Theorem 52) we get

d((F_t*(a′F + b′G))_{t/n}(x), S)
= d(((a′ ∘ F_t) F_t*(F) + (b′ ∘ F_t) F_{t/n}^{*(n)}(G))_{t/n}(x), S)
= d((a₀ F + b₀ F_{t/n}^{*(n)}(G))_{t/n}(x), S)

where a₀ := a′ ∘ F_t and b₀ := b′ ∘ F_t. Using (3.4) means this last estimate is

= d((a₀ F + b₀ F_{t/n}^{*(n−1)}((t/n)[X, Y] + G))_{t/n}(x), S)
≤ d((a₀ F + b₀ F_{t/n}^{*(n−1)}((t/n)[X, Y] + G))_{t/n}(x), (a₀ F + b₀ F_{t/n}^{*(n−1)}((t/n)(aF + bG) + G))_{t/n}(x))
+ d((a₀ F + b₀ F_{t/n}^{*(n−1)}((t/n)(aF + bG) + G))_{t/n}(x), S).   (3.7)

The first summand of (3.7) is now analyzed as
d((a_0 F + b_0 F_{t/n}^{*(n−1)}((t/n)[X, Y] + G))_{t/n}(x), (a_0 F + b_0 F_{t/n}^{*(n−1)}((t/n)(aF + bG) + G))_{t/n}(x))
= d((b_0 F_{t/n}^{*(n−1)}((t/n)[X, Y] + G))_{t/n}(y), (b_0 F_{t/n}^{*(n−1)}((t/n)(aF + bG) + G))_{t/n}(y))

where y := (a_0 F)_{t/n}(x)

= d(F_{(n−1)t/n}^{*}((t/n)[X, Y] + G)_{b_0(y)t/n}(y), F_{(n−1)t/n}^{*}((t/n)(aF + bG) + G)_{b_0(y)t/n}(y))
= d(F_{−(n−1)t/n}(((t/n)[X, Y] + G)_{b_0(y)t/n}(F_{(n−1)t/n}(y))), F_{−(n−1)t/n}(((t/n)(aF + bG) + G)_{b_0(y)t/n}(F_{(n−1)t/n}(y))))
= d(F_{−(n−1)t/n}(((t/n)[X, Y] + G)_{b_0(y)t/n}(z)), F_{−(n−1)t/n}(((t/n)(aF + bG) + G)_{b_0(y)t/n}(z)))    (3.8)

where z := F_{(n−1)t/n}(y). Then by Theorem 16, (3.8) is

≤ d(((t/n)[X, Y] + G)_{b_0(y)t/n}(z), ((t/n)(aF + bG) + G)_{b_0(y)t/n}(z)) e^{Λ_X(n−1)t/n}
= d(G_{b_0(y)t/n}(((t/n)[X, Y])_{b_0(y)t/n}(z)), G_{b_0(y)t/n}(((t/n)(aF + bG))_{b_0(y)t/n}(z))) e^{Λ_X(n−1)t/n}
≤ d(((t/n)[X, Y])_{b_0(y)t/n}(z), ((t/n)(aF + bG))_{b_0(y)t/n}(z)) e^{Λ_X(n−1)t/n} e^{Λ_Y b_0(y)t/n}
≤ r(b_0(y)(t/n)^2) e^{Λ_X(n−1)t/n + Λ_Y b_0(y)t/n} =: o_1(1/n^2)    (3.9)

where we define

r(s) := d([X, Y]_s(z), (aF + bG)_s(z)).

By the main assumption of the theorem, r(s) = o(s), so we have o_1(1/n^2) = o(1/n^2); but we need to keep a careful record of this estimate as we will be summing n terms like it. The subscript distinguishes o_1 as a specific function.
Substituting (3.9) into (3.7) gives

d((F_t^*(aF + bG))_{t/n}(x), S)
= d((a_0 F + b_0 F_{t/n}^{*(n)}(G))_{t/n}(x), S)    (3.10)
≤ d((a_0 F + b_0 F_{t/n}^{*(n−1)}((t/n)(aF + bG) + G))_{t/n}(x), S) + o_1(1/n^2)
= d((a_0 F + b_0 (t/n)(a ∘ F_{(n−1)t/n}) F + b_0 ((t/n)(b ∘ F_{(n−1)t/n}) + 1) F_{t/n}^{*(n−1)}(G))_{t/n}(x), S) + o_1(1/n^2)
= d(((a_0 + b_0 (t/n)(a ∘ F_{(n−1)t/n}) ∘ (a_0 F)_{t/n}) F + b_0 ((t/n)(b ∘ F_{(n−1)t/n}) + 1) F_{t/n}^{*(n−1)}(G))_{t/n}(x), S) + o_1(1/n^2)
= d((a_1 F + b_1 F_{t/n}^{*(n−1)}(G))_{t/n}(x), S) + o_1(1/n^2)    (3.11)

where
a_1 := a_0 + b_0 (t/n)(a ∘ F_{(n−1)t/n}) ∘ (a_0 F)_{t/n} and
b_1 := b_0 ((t/n)(b ∘ F_{(n−1)t/n}) + 1).

Getting from the third line to the fourth line uses the linearity of pull-back
(Theorem 52), while the fifth line is due to the linearity of F (Lemma 44).
After toiling through these many complicated estimates we can relax a bit,
since the rest of the proof follows more algebraically by iterating the result of

lines (3.10) and (3.11):
d((a_0 F + b_0 F_{t/n}^{*(n)}(G))_{t/n}(x), S)
≤ d((a_1 F + b_1 F_{t/n}^{*(n−1)}(G))_{t/n}(x), S) + o_1(1/n^2)
≤ d((a_2 F + b_2 F_{t/n}^{*(n−2)}(G))_{t/n}(x), S) + o_1(1/n^2) + o_2(1/n^2)
≤ ... ≤ d((a_n F + b_n G)_{t/n}(x), S) + Σ_{i=1}^n o_i(1/n^2)    (3.12)

where
a_2 := a_1 + b_1 (t/n)(a ∘ F_{(n−2)t/n}) ∘ (a_1 F)_{t/n},
b_2 := b_1 ((t/n)(b ∘ F_{(n−2)t/n}) + 1), and in general
a_i := a_{i−1} + b_{i−1} (t/n)(a ∘ F_{(n−i)t/n}) ∘ (a_{i−1} F)_{t/n},
b_i := b_{i−1} ((t/n)(b ∘ F_{(n−i)t/n}) + 1).

In the region of interest |a| and |a_0| are bounded by some A ∈ R and |b| and |b_0| are bounded by some B ∈ R, so

|b_1| = |b_0 ((t/n)(b ∘ F_{(n−1)t/n}) + 1)| ≤ B((t/n)B + 1)
|b_2| = |b_1 ((t/n)(b ∘ F_{(n−2)t/n}) + 1)| ≤ B((t/n)B + 1)^2
|b_i| ≤ B((t/n)B + 1)^i, and

|a_1| = |a_0 + b_0 (t/n)(a ∘ F_{(n−1)t/n})| ≤ A + B(t/n)A
|a_2| = |a_1 + b_1 (t/n)(a ∘ F_{(n−2)t/n})| ≤ A + B(t/n)A + B((t/n)B + 1)(t/n)A
|a_3| = |a_2 + b_2 (t/n)(a ∘ F_{(n−3)t/n})| ≤ A + B(t/n)A + B((t/n)B + 1)(t/n)A + B((t/n)B + 1)^2 (t/n)A

|a_i| ≤ A + (t/n)AB Σ_{k=0}^{i−1} ((t/n)B + 1)^k = A + (t/n)AB · (((t/n)B + 1)^i − 1)/((t/n)B) = A((t/n)B + 1)^i.

Thus

|b_n| ≤ B((t/n)B + 1)^n ≤ Be^{tB} and
|a_n| ≤ A((t/n)B + 1)^n ≤ Ae^{tB}.
Penultimately, we need to estimate Σ_{i=1}^n o_i(1/n^2). Remember from line (3.9)

o_1(1/n^2) := r(b_0(y)(t/n)^2) e^{Λ_X(n−1)t/n + Λ_Y b_0(y)t/n}

where r(s) = o(s), so

o_2(1/n^2) = r(b_1(y)(t/n)^2) e^{Λ_X(n−2)t/n + Λ_Y b_1(y)t/n}
≤ B((t/n)B + 1) o((t/n)^2) e^{Λ_X(n−2)t/n + Λ_Y B((t/n)B + 1)t/n}

and in general

o_i(1/n^2) = r(b_{i−1}(y)(t/n)^2) e^{Λ_X(n−i)t/n + Λ_Y b_{i−1}(y)t/n}

so that

Σ_{i=1}^n o_i(1/n^2) ≤ Σ_{i=1}^n r(b_{i−1}(y)(t/n)^2) e^{Λ_X(n−i)t/n + Λ_Y B((t/n)B + 1)^{i−1} t/n}
≤ o((t/n)^2 Be^{tB}) · n · e^{Λ_X t + Λ_Y Be^{tB} t/n}

since r(b_{i−1}(y)(t/n)^2) = o((t/n)^2 Be^{tB}) for all i. Therefore

Σ_{i=1}^n o_i(1/n^2) ≤ o((t/n)^2 Be^{tB}) n e^{Λ_X t + Λ_Y Be^{tB} t/n} = o(1/n)

as n → ∞.

as n → ∞. Putting this into (3.12) gives
d((F_t^*(aF + bG))_{t/n}(x), S) ≤ d((a_n F + b_n G)_{t/n}(x), S) + o(1/n) = o(1/n)

because of the uniform bound on |a_n| and |b_n|. To see this notice

d((a_* F + b_* G)_{t/n}(x), S) = o(1/n)

uniformly for bounded a_* and b_*, since a_* F + b_* G ∼ b_* G + a_* F and as before (b_* G + a_* F)_t(x) ∈ S using the uniform Λ and Ω derived in the proofs of Propositions 39 and 41 (cf. Remark 15).
Finally we need to check

d((F_t^*(aF + bG))_r(x), S) = o(r)

when r is not necessarily t/n. We may assume 0 < t < 1 and 0 < r < t, so t = nr + ε for some 0 ≤ ε < r and integer n with t/r − 1 < n ≤ t/r. Therefore the above calculations give

d((F_t^*(aF + bG))_r(x), S) = d((F_ε^* F_r^{*(n)}(cF + dG))_r(x), S)
≤ d((F_ε^*(a_n F + b_n G))_r(x), S) + o(r) = o(r).

The n-dimensional corollary of this 2-dimensional version of Frobenius’ The-
orem is given in Section 3.4.

3.3 Commutativity of flows
Theorem 63 Assume X and Y satisfy E1 and E2 on a locally complete metric
space M. Let F and G be the local flows of X and Y . Then [X, Y ] ∼ 0 if and
only if F and G commute, i.e.,

F_t G_s(x) = G_s F_t(x) for all s and t, i.e., F_t^*(G) = G.

Proof. The assumption [X, Y ] ∼ aX + bY with a = b = 0 allows us to copy
the approach in the proof of Theorem 62. Let δ > 0 be chosen small enough so that
1. the constants Λ, Ω, and ρ for X and Y hold uniformly, and
2. [X, Y] ∼ 0 uniformly
on S := B(x, 2δ(ρ + 1)), and so that S is also complete. We check t > 0. Since
Ft∗ (G) and G are both local flows, we only need to show they are tangent to
each other and then they must be equal by uniqueness of solutions.
As motivation, imagine being in the context of differentiable manifolds. There, for vector fields f and g with local flows F and G, we would have

lim_{h→0} (F_h^*(g) − g)/h = L_f g = [f, g] = 0

so we expect F_h^*(g) = g + o(h).

We might use this idea as before with the linearity of pull-back (Theorem 52) to get

F_t^*(g) = lim_{n→∞} F_{t/n}^{*(n)}(g) = lim_{n→∞} (g + n · o(1/n)) = g

as desired.
Now in our context of metric spaces with t > 0, line (2.8) again gives

F_{t/n}^*(G)_{t/n}(x) = ((t/n)[X, Y] + G)_{t/n}(x).

For t < 0 one would use (2.9). Also we again have

[X, Y] ∼ 0 implies d(((t/n)[X, Y])_{t/n}(x), x) = o(1/n^2).

Using these tricks (and Theorem 16 in the fourth line following) gives
d((F_t^*(G))_{t/n}(x), G_{t/n}(x)) = d((F_{t/n}^* F_{t/n}^{*(n−1)}(G))_{t/n}(x), G_{t/n}(x))
= d((F_{t/n}^{*(n−1)}((t/n)[X, Y] + G))_{t/n}(x), G_{t/n}(x))
≤ d(F_{−(n−1)t/n} G_{t/n}(((t/n)[X, Y])_{t/n}(y)), F_{−(n−1)t/n} G_{t/n}(y)) + d((F_{t/n}^{*(n−1)}(G))_{t/n}(x), G_{t/n}(x))

where y := F_{(n−1)t/n}(x)

≤ d(G_{t/n}(((t/n)[X, Y])_{t/n}(y)), G_{t/n}(y)) e^{Λ_X(n−1)t/n} + d((F_{t/n}^{*(n−1)}(G))_{t/n}(x), G_{t/n}(x))
≤ d(((t/n)[X, Y])_{t/n}(y), y) e^{Λ_Y t/n} e^{Λ_X(n−1)t/n} + d((F_{t/n}^{*(n−1)}(G))_{t/n}(x), G_{t/n}(x))

and so

d((F_t^*(G))_{t/n}(x), G_{t/n}(x)) ≤ d((F_{t/n}^{*(n−1)}(G))_{t/n}(x), G_{t/n}(x)) + e^{Λ_Y t/n + Λ_X(n−1)t/n} o_1(1/n^2)

where o_1(1/n^2) := d(((t/n)[X, Y])_{t/n}(y), y).

Iterating this result gives

d((F_{t/n}^{*(n)}(G))_{t/n}(x), G_{t/n}(x))
≤ d((F_{t/n}^{*(n−1)}(G))_{t/n}(x), G_{t/n}(x)) + e^{Λ_Y t/n + Λ_X(n−1)t/n} o_1(1/n^2)
≤ d((F_{t/n}^{*(n−2)}(G))_{t/n}(x), G_{t/n}(x)) + e^{Λ_Y t/n + Λ_X(n−2)t/n} o_2(1/n^2) + e^{Λ_Y t/n + Λ_X(n−1)t/n} o_1(1/n^2)
≤ ... ≤ Σ_{i=1}^n e^{Λ_Y t/n} o_i(1/n^2) e^{Λ_X(n−i)t/n}

where o_i(1/n^2) := d(((t/n)[X, Y])_{t/n}(y_i), y_i) and y_i := F_{(n−i)t/n}(x). Since

d(((t/n)[X, Y])_{t/n}(y), y) = o(1/n^2)

uniformly for y ∈ B(x, 2δ(ρ + 1)) we have

d((F_{t/n}^{*(n)}(G))_{t/n}(x), G_{t/n}(x)) ≤ Σ_{i=1}^n e^{Λ_Y t/n} o_i(1/n^2) e^{Λ_X(n−i)t/n} = o(1/n^2) Σ_{i=1}^n e^{Λ_Y t/n} e^{Λ_X(n−i)t/n}
= o(1/n^2) e^{Λ_Y t/n + Λ_X t} Σ_{i=1}^n (e^{−Λ_X t/n})^i = o(1/n^2) e^{Λ_Y t/n + Λ_X t} · (e^{−Λ_X t/n} − e^{−Λ_X t(n+1)/n})/(1 − e^{−Λ_X t/n}).

The last fraction is O(n) for fixed t, so

d((F_t^*(G))_{t/n}(x), G_{t/n}(x)) = o(1/n)

and F_t^*(G) ∼ G by the same argument as in the last paragraph of the proof of Theorem 62.
The converse is trivial.
Using Example 11, this theorem applies to the non-locally compact setting with nonsmooth vector fields. Reference [55], another paper which inspired this monograph, obtains similar results with a very different approach.

Example 64 If [X, Y ] ∼ 0 and F, G, and H are the flows of X, Y, and X + Y ,
respectively, then H = F ◦ G.
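The claim of Example 64 can be sanity-checked numerically in a simple finite-dimensional setting. The following sketch is a hypothetical illustration (the vectors u, v and the flows are our own choices, not from the text), using translation arc fields, which are their own flows:

```python
import numpy as np

# Translation arc fields X_t(x) = x + t*u and Y_t(x) = x + t*v are their
# own flows.  Since [X, Y] ~ 0, Theorem 63 says F and G commute, and
# Example 64 says the flow H of X + Y is the composition F o G.
u = np.array([1.0, 0.0])
v = np.array([0.0, 2.0])

def F(t, x):              # flow of X
    return x + t * u

def G(t, x):              # flow of Y
    return x + t * v

def H(t, x):              # flow of X + Y
    return x + t * (u + v)

x0 = np.array([3.0, -1.0])
t = 0.7
assert np.allclose(F(t, G(t, x0)), G(t, F(t, x0)))   # flows commute
assert np.allclose(H(t, x0), F(t, G(t, x0)))         # H = F o G
```

Of course this example is trivial precisely because the bracket vanishes; the content of the theorem is that tangency of the bracket to 0 suffices in a general locally complete metric space.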

3.4 The Global Frobenius Theorem
The goal of this section is to recast Theorem 62 in the language of distributions
and foliations, and so we begin with several definitions. As always M is a locally
complete metric space.

Definition 65 A distribution ∆ on M is a set of arc fields.

The following are archetypical examples of distributions. Using the addition
and multiplication operations defined for arc fields on M (§2.1) we may define
the linear span of arc fields:
∆(X^1, ..., X^n) := {Σ_{i=1}^n a_i X^i | a_i ∈ Lip(M, R)}.

(Remember Lip (M, R) is the set of Lipschitz functions on M .)
The linear span of two (or more) distributions ∆_1 and ∆_2 on M is also obviously defined to give another distribution

∆_1 + ∆_2 := {X + Y | X ∈ ∆_1, Y ∈ ∆_2}.

Defining

∆(X) := {aX | a ∈ Lip(M, R)}

we automatically have ∆(X, Y) = ∆(X) + ∆(Y). Associativity also holds for this formal sum:

(∆_1 + ∆_2) + ∆_3 = ∆_1 + (∆_2 + ∆_3)

so we may write finite sums without confusion. Then without difficulty we have

∆(X^1, ..., X^n) = Σ_{i=1}^n ∆(X^i).

Commutativity is not generally valid, but it does hold up to tangency, defined next.
For x ∈ M denote ∆_x := {X(x, ·) | X ∈ ∆}; i.e., ∆_x is a set of curves based at x. For y ∈ M define (with another overload of the d notation)

d(y, ∆_x) := inf{d(y, X(x, t)) | X ∈ ∆ and t ∈ [−1, 1]}.

Definition 66 An arc field X is (locally uniformly) tangent to ∆, denoted X ∼ ∆, if for each x ∈ M there is an arc field X^∆ ∈ ∆ with X ∼ X^∆ uniformly in a neighborhood of x.
Two distributions ∆ and ∆̃ are (locally uniformly) tangent, denoted ∆ ∼ ∆̃, if X ∼ ∆̃ for each X ∈ ∆ and X̃ ∼ ∆ for each X̃ ∈ ∆̃. Again, ∼ is an equivalence relation.
By definition, then, X ∼ ∆(X^1, X^2, ..., X^n) if and only if there exist Lipschitz functions a_k : M → R such that X ∼ Σ_{k=1}^n a_k X^k. Restating, tangency between two distributions means a correspondence between tangent arc fields within the two families.
Example 67 Using Theorem 43 we may check that when the arc fields X^i satisfy E1 and E2 and are mutually close, then we have

∆(X^1, X^2) ∼ ∆(X^2, X^1) and ∆(X) + ∆(X) ∼ ∆(X)

or more generally, assuming |I| < ∞,

∆(X^i | i ∈ I) ∼ ∆(X^i | i ∈ J) + ∆(X^i | i ∈ K) if J ∪ K = I.

Definition 68 X is (locally uniformly) transverse to ∆ if for all x_0 ∈ M there exists a δ > 0 such that for all x ∈ B(x_0, δ) we have

d(X_x(s), Y_x(t)) ≥ δ(|s| + |t|)

for all Y ∈ ∆ and all |s|, |t| < δ. In this case we have

d(X_x(t), ∆_x) ≥ δ|t|.

The arc fields X^1, X^2, ..., X^n are transverse to each other if for each i ∈ {1, ..., n} we have X^i transverse to

∆(X^1, X^2, ..., X^{i−1}, X^{i+1}, ..., X^n).

Inspecting Example 60 shows this definition generalizes transversality in R^n. A set of transverse arc fields is meant to generalize linearly independent vector fields.
Definition 69 Let F := {X^1, X^2, ..., X^n} be a set of n transverse arc fields which satisfy E1 and E2 on a neighborhood U ⊂ M and whose flows mutually close. F is a local frame for a distribution ∆ if ∆ ∼ ∆(X^1, X^2, ..., X^n) on U. F is a global frame for ∆ if local uniform tangency holds throughout M.
A distribution is n-dimensional if each point in M has a neighborhood with a local frame with cardinality n.
Whether global frames of a particular dimension even exist on a space M
may be difficult to answer—even when M is a manifold, where the question falls
under the purview of topology and global analysis.

Definition 70 An n-dimensional distribution ∆ is involutive if each local frame {X^1, X^2, ..., X^n} has

[X^i, X^j] ∼ ∆

for all i, j ∈ {1, ..., n}.
A surface (or n-surface) is a topological manifold S (of dimension n). A surface S ⊂ M is locally uniformly tangent to an arc field X, denoted X ∼ S, if d(X_t(x), S) = o(t) locally uniformly for x ∈ S.
An n-dimensional surface S is an integral surface for an n-dimensional distribution if for any local frame {X^1, X^2, ..., X^n} we have Σ_{k=1}^n a_k X^k ∼ S for any choice of Lipschitz functions a_k : M → R.
An n-dimensional distribution ∆ is said to be integrable if there exists an integral surface for ∆ through every point in M.
Theorem 62 then has the following corollary:
Proposition 71 An n-dimensional involutive distribution is integrable.
Proof. n = 1 is well-posedness of arc fields, Theorem 12. n = 2 is Theorem 62. Now proceed by induction. We do enough of the case n = 3 to suggest the path, and much of this is copied from the proof of Theorem 62; if you have understood that proof, this induction is easier to construct for yourself than to read.
Choose x_0 ∈ M. Let X, Y, and Z be the transverse arc fields guaranteed in
the definition of a 3-dimensional distribution. If we find an integral surface S
for ∆ (X, Y, Z) through x0 then obviously S is an integral surface for ∆. Let
F, G, and H be the local flows of X, Y , and Z and define
S := {Ft Gs Hr (x0 ) | |r| , |s| , |t| < δ}
with δ > 0 chosen small enough as in the proof of Theorem 62 so S is a 3-
dimensional manifold. Again we may assume δ is also chosen small enough so
that throughout S the functions |ak | are bounded by A, the constants Λ, Ω, and
ρ for X, Y and Z hold uniformly, and the closure of B (x, 3δ (ρ + 1)) is complete.
The surface

S′ := {G_s H_r(x_0) | |r|, |s| < δ}

is an integral surface through x_0 for ∆(Y, Z) by the proof of Theorem 62. Now X ∼ S by construction, but it is not immediately clear S ∼ a′X + b′Y + c′Z for arbitrarily chosen a′, b′, c′ ∈ R. Again we really only need to show S ∼ a′F + b′G + c′H at an arbitrary point z := F_t G_s H_r(x_0) ∈ S, and again it is sufficient to prove

(F_t)^*(a′F + b′G + c′H) ∼ S at y = G_s H_r(x_0)
by the construction of S. Continue as above adapting the same tricks from the
proof of Theorem 62 to the extra dimension.

Proposition 72 If S_1 and S_2 are integral surfaces through x ∈ M, then
(i) S_1 ∩ S_2 is an integral surface, and
(ii) S_1 ∪ S_2 is an integral surface.
Further, there is a unique maximal integral surface S through x, meaning S ∩ S_1 = S_1 for any integral surface S_1 through x.

Proof. The case n = 1 is true by the uniqueness of integral curves.
For higher dimensions n, Theorem 33 from §1.3 guarantees S_1 and S_2 contain local integral curves for Σ_{k=1}^n a_k X^k for all choices of a_k ∈ R with initial condition x. Since the X^k are transverse, there is a small neighborhood of x on which the choices of the parameters a_k give local non-intersecting curves in M which fill up n dimensions, giving an integral surface in S_1 ∩ S_2 (precisely the argument in the second paragraph of the proof of Theorem 62).
For (ii), since S_1 ∩ S_2 is an integral surface inside S_1 ∪ S_2, the only question is whether the union is still an n-dimensional manifold. Pick x ∈ S_1 ∪ S_2 and for i = 1, 2 let U_i ⊂ S_i be the n-dimensional neighborhood of x guaranteed by the fact that S_i is an integral surface. As with (i), each of these neighborhoods is a manifold filled by the flows of Σ_{k=1}^n a_k X^k. By Nagumo's invariance result, Theorem 33, they coincide near x.
The maximal integral surface is the union of all integral surfaces through x.

Definition 74 A foliation is a partition of M into a set of subsets Φ :=
{Li }i∈I for some indexing set I, where the subsets Li ⊂ M (called leaves) are
disjoint, connected topological manifolds each having the same dimension.
A foliation Φ is tangent to a distribution ∆ if the leaves are integral sur-
faces; in this case we say ∆ foliates M.

Collecting all these results we have the following version of the Global Frobe-
nius Theorem.

Theorem 75 Let ∆ be an n-dimensional distribution on a locally complete met-
ric space M .
(i) If ∆ is involutive, then ∆ is integrable.
(ii) If ∆ is integrable, then ∆ foliates M .
(iii) If ∆ foliates M into Φ := {Lx }x∈M then for any X, Y ∈ ∆, we have
[X, Y ]x (t) ∈ Lx for t ∈ [−1, 1].
(iv) ∆ is involutive if and only if ∆ has a local frame at each x ∈ M with
commutative flows.

Proof. (i) is Proposition 71.
(ii) is Proposition 72.
(iii) follows from Theorem 33 and the definition of the bracket.

(iv) (⇐) This is automatic since the bracket is trivial if the flows commute.
(⇒) Pick any local frame {X^1, ..., X^n} at x ∈ M and construct a commutative frame as follows. Let σ^1_x : (α, ω) → M be the solution of X^1. Define X̃^1 := F^2_{t∗} ∘ σ^1_x, the solution curve of X^1 pushed forward with the flow F^2 across the surface, and X̃^2 := X^2. This forces
(a) F̃^1 and F̃^2 commute,
(b) X̃^1 and X̃^2 span a surface locally, and
(c) ∨{F̃^1, F̃^2} ⊂ L_x.
Continue with n = 3, etc., pushing forward span{F̃^1, F̃^2} with F^3 to extend X̃^1 and X̃^2 on a local 3-D set, and let X̃^3 := X^3. In the end we have an n-dimensional surface (property (b)) in L_x (property (c), which is the key point of the proof and requires the hypothesis of the theorem) and so fills L_x locally and commutes (property (a)).
Part (iii) of Theorem 75 is as close to a converse of (i) as we have been able
to achieve. The bracket is tangent to the distribution in the sense given in the
theorem, but not necessarily locally uniformly tangent to a single arc field in
the distribution—which is the definition of ∼ required for involutivity.

The local frame with commutative flows gives local coordinates on the leaves
of the foliation. If the foliation is trivial having only one leaf, then the flows
give local coordinates near each point in M , called flow coordinates in which
case M is a topological manifold.

Function space examples relevant to this chapter are the content of Chapters
4 and 5. Further, the idea of a connection from differential geometry is now
straightforward to generalize to metric spaces. Use the interpretation that a
connection is a choice of horizontal subspace of T T M, i.e., a distribution on
T T M . As on a manifold, each choice of a connection gives a precise definition
of curvature. On a metric space, however, the situation is complicated by the
choice of arcs which must be made to define the analogs of T M and T T M.
An open question is how to guarantee a connection related to the metric which
gives length minimizing geodesics—the analog of the Fundamental Theorem of
Riemannian Geometry. Progress in this direction is one of the successes of
Finsler geometry.

3.5 Control theory
In this section we explore some ideas from control theory and how the geometric
results from this chapter impact the subject. The culmination is a generalization
of Chow’s Theorem to metric spaces.

Often we are able to affect a dynamical system at will, i.e., we can control,
directly or indirectly, some parameters independently of the evolution of a flow.
We can intervene in the evolution of a system, instead of merely observing it.
The study of such a scenario is the purview of control theory. Some of the
original models are of mechanical systems, but the applications are extremely
diverse: from signal analysis to sociological models, any differential equation
can be co-opted for study in control theory if we complicate the situation by
adding parameters. One of the most exciting contemporary applications of con-
trol theory that requires geometric theory to apprehend is programming robot
motion, and we are immersed daily in more prosaic control systems: driving a
car, changing a thermostat, or adjusting the dosage levels of a patient's medication.
The model for driving a car can be simplified to the action of turning the
steering wheel (p1 > 0 means rotate the steering wheel right and p1 < 0 means
left) and moving the car forward (p2 > 0) or backward (p2 < 0). Thinking
of a toy remote-control car with its two peg controls may help intuition. The
parameters pi in any system are the controls, which belong to subsets of R.
These subsets are often intervals, but with digital systems we need to be able
to use discrete subsets like {0, 1} and {−1, 0, 1}. The systems are modeled
on a space M of possible configurations of the system. In a basic example
“configuration” might mean the location of the object of interest, but this is
usually only one of the variables of interest. In the model of driving the car we
might take x ∈ M to represent location, in which case M = R2 (assuming we’re
driving on a flat plane). But this is hardly adequate, as we will be also interested
in the direction of the car—so let the space of configurations be M := R2 × S 1
where the R2 factor represents the location of the center of mass of the car,
and S 1 represents the orientation. But we may also represent the direction
the wheels are turned—an important part of the configuration—so the proper
configuration space is M := R^2 × S^1 × [−1, 1]. But to make flows tractable on M we will force the easier scenario of M := R^2 × S^1 × S^1 with coordinates (x_1, x_2, θ_1, θ_2). Even in this simple example we are led to study a manifold M instead of a vector space.

Since we are interested in how our system evolves in time, a differential
equation is typically used to model the behavior. Often a mechanical system’s
evolution is determined by forces, and Newton's equation is used:

ẍ = f(x, p)    (3.13)

where x ∈ M and p = (p_i)_{i∈I} are the control parameters. Control theory's fundamental questions are determining: 1) stability and 2) controllability of a system.
For the first question, there are many different meanings of stability, but they
all center on whether the system evolves in a qualitatively similar (“topologically
conjugate”) manner if small changes are made to the right hand side, to x, p
or f . Changing x determines sensitivity to initial conditions, adjusting
p determines the stability of the control, and perturbing the function f
determines the structural stability of the system.
Our focus in this chapter is on the second question of the controllability
of a system, which asks whether we can steer any initial condition x ∈ M
to any other configuration1 in M with a clever adjustment of parameters. In
infinite dimensions there is a difference between whether we can drive an initial
condition to a terminal condition, or whether we can only drive it arbitrarily
close to the terminal condition. Any imaginable practical application will not
distinguish these cases, though.
As a prerequisite for this full controllability, we must obviously have local
controllability, in which any initial condition has a neighborhood for which
the restriction of the system is controllable. Continuation then gives us full
controllability if M is path-connected. In this restricted local situation the 2nd-
order ODE may be equivalently rewritten (as illustrated in Appendix B) as a
1st-order ODE
ẋ = g(x, p) = g_p(x)
and the flow Gp generated by the vector field gp depends on the parameter p,
typically a member of Rn .
A simplified presentation of the following terminology is sufficient for our
purposes in metric spaces.

Definition 76 Let G = {G^p | p ∈ P} be a family of flows with indexing set P. The reachable set of G from x ∈ M is denoted

R_G(x) := {G^{p_n}_{t_n} G^{p_{n−1}}_{t_{n−1}} · · · G^{p_1}_{t_1}(x) | t_i ∈ R, p_i ∈ P, n ∈ N} ⊂ M    (3.14)

where x ∈ M itself is included as the trivial (n = 0) composition. R_G(x) is the set of all finite compositions of the G^p_t. The approximately reachable set from x is the closure R̄_G(x). If M is approximately reachable, then the system is controllable.
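Definition 76 can be made concrete with a small (hypothetical) sketch: sample the reachable set by applying finite words of flow compositions. The two flows and the words below are our own choices for illustration; here they are commuting translations on R^2, so the reachable set is all of R^2:

```python
import numpy as np

# Two flows on R^2 indexed by p in P = {0, 1}.  A "word"
# [(p1, t1), ..., (pn, tn)] applied left to right gives the composition
# G^{p_n}_{t_n} ... G^{p_1}_{t_1}(x) from (3.14).
flows = {
    0: lambda t, x: x + t * np.array([1.0, 0.0]),   # translate in x-direction
    1: lambda t, x: x + t * np.array([0.0, 1.0]),   # translate in y-direction
}

def apply_word(x, word):
    for p, t in word:
        x = flows[p](t, x)
    return x

x0 = np.zeros(2)
# A few elements of the reachable set R_G(x0):
assert np.allclose(apply_word(x0, [(0, 1.0)]), [1.0, 0.0])
assert np.allclose(apply_word(x0, [(0, 1.0), (1, -2.0)]), [1.0, -2.0])
# These two flows already reach every point of R^2, so this toy system
# is controllable in the sense of Definition 76.
```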

Reachable sets partition M. Comparing Definition 76 of the reachable set
RG (x) with Definition 36 of linear combinations of arc fields we see RG (x) as
the configurations attainable from the initial configuration x using the flows Gp
successively, or with little practical difference using all linear combinations of
the Gp .

To clarify the terminology, consider again whether the “driving a car” sys-
tem is controllable. Now we are asking whether controlling our 2-dimensional
¹We will concentrate on this basic controllability and not on optimal controllability, which seeks to find solutions that optimize some quantity, such as the time needed to reach the terminal condition using speed less than 1.

parameter space allows us to steer our car into any point of the 4-dimensional
configuration space M. Manipulations on the 2-dimensional parameter space
act on the configuration space through two simple flows: F_t(x), steer, which rotates the steering wheel, and G_t(x), drive, which moves the car forward and back with the steering fixed. Any driver's intuition will promise us that a car can be
moved to any configuration using only F and G. Mathematically, though, it
seems unlikely a 2-dimensional parameter space is enough to control the entire
4-dimensional configuration space. R(F,G) (x) should be a 2-dimensional surface
inside of M, perhaps something like the surface

{Gs Ft (x) |s, t ∈ R} ⊂ M .

But this naive mathematical intuition is incorrect, and stems from the fact
that the flows F and G do not commute, so the terms of the reachable set
(3.14) do not simplify to Gs Ft (x). Following the course G−t F−t Gt Ft (x) should
return us to our initial configuration x if the flows were to commute, but as
you can mentally check, the automobile will actually end up rotated with an
insignificant translation. This motivates us to introduce the bracket of two flows
and to consider the meaning of Frobenius’ Theorem for control systems.
In this example [F, G] is called wriggle. Thinking of wriggle as rotation (ig-
noring the minor translation) it is easy to verify that [[F, G] , G]—right rotation,
forward drive, left rotation, backward drive—is effectively translation transverse
to drive. So [[F, G] , G] has been traditionally dubbed slide, an algorithm for
parking your car in any space infinitesimally longer than your car [1]. Since wrig-
gle is not tangent to a simple sum of drive and steer, wriggle generates a 3rd
dimension of controllability/reachability. Similarly slide generates a 4th dimen-
sion. Since wriggle is the combination of drive and steer 4 times (the bracket) as
is slide (iterated bracket), then drive and steer—using Euler approximation—are
enough to reach all points in the 4-dimensional configuration space.
(Drivers will object that slide is not the algorithm they use; parallel parking
is better described by the arc field H as follows:

H_t(x) := F_{π/2} G_{√t} F_{−π} G_{√t} F_{π} G_{−√t} F_{−π} G_{−√t} F_{π/2}(x)
for t > 0. H is its own flow, translation perpendicular to drive, when θ_2 = 0.
Be careful here. If you replace the π/2 with √t then

H = [F, −G] + [−G, −F ] + [−F, −2G] + [−2G, F ] + [F, −G] + [−G, −F ]

and then using rules of Lie algebra we haven’t yet proven, we get H ∼ 0 which
doesn’t help us park a car. The use of π/2 reflects how drivers always turn their
wheels completely when shimmying into a space. Try writing out the formula
for slide explicitly to see the 4th root arise in the iterated bracket.)

Example 77 Consider an even simpler toy car restricted to moving forward
and backward or rotating, where the formulas are easy to intuit. The Segway,
the two-wheeled electric upright vehicle, is a good representative for this model.

Then the configuration space is simply M = R2 × S 1 , the R2 variable for the
center of mass and the S 1 variable representing its direction of orientation. Let
F be the flow moving the car forward or back in the direction of orientation,
and let G be rotation. For x = (x1 , x2 , θ)

Ft (x1 , x2 , θ) = (x1 + t cos θ, x2 + t sin θ, θ) = (x1 , x2 , θ) + t (cos θ, sin θ, 0)
Gt (x1 , x2 , θ) = (x1 , x2 , [θ + t] mod 2π)

then check

G_{−t}F_{−t}G_tF_t(x) = (x_1 + t cos θ − t cos(θ + t), x_2 + t sin θ − t sin(θ + t), θ) ≠ x

for t ≠ 0, assuming t is small enough to ignore the modulo 2π calculation. In fact for t > 0

[F, G]_t(x) = G_{−√t}F_{−√t}G_{√t}F_{√t}(x)
= (x_1, x_2, θ) + t ((cos θ − cos(θ + √t))/√t, (sin θ − sin(θ + √t))/√t, 0)
∼ (x_1, x_2, θ) + t (sin θ, −cos θ, 0) =: H_t(x)

H is translation perpendicular to the orientation. H ∉ ∆(F, G) and so M = L_x = R̄_{(F,G)}(x) and the system is controllable. More explicitly, since H is tangent to the bracket of F and G, H may be approximated by appropriate successive compositions of F and G, which means R_{(F,G)}(x) is at least dense in M. So the 3-dimensional system is controllable with 2 parameters because the bracket is nontrivial, i.e., the system is nonholonomic.
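The tangency [F, G]_t(x) ∼ H_t(x) claimed in Example 77 is easy to verify numerically. The sketch below is a hypothetical illustration with our own sample point; the error turns out to be of order t^{3/2}, hence o(t) as required:

```python
import numpy as np

def F(t, x):                       # drive: translate along the orientation
    x1, x2, th = x
    return (x1 + t * np.cos(th), x2 + t * np.sin(th), th)

def G(t, x):                       # rotate (mod 2*pi ignored for small t)
    x1, x2, th = x
    return (x1, x2, th + t)

def bracket(t, x):                 # [F, G]_t = G_{-s} F_{-s} G_s F_s with s = sqrt(t)
    s = np.sqrt(t)
    return G(-s, F(-s, G(s, F(s, x))))

def H(t, x):                       # translation perpendicular to the orientation
    x1, x2, th = x
    return (x1 + t * np.sin(th), x2 - t * np.cos(th), th)

x = (0.3, -0.4, 0.9)
for t in (1e-2, 1e-4):
    err = np.linalg.norm(np.subtract(bracket(t, x), H(t, x)))
    assert err / t < np.sqrt(t)    # d([F,G]_t(x), H_t(x)) = o(t)
```

Shrinking t by a factor of 100 shrinks err/t by roughly a factor of 10, consistent with the O(t^{3/2}) estimate.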

[27, Chapter 5] gives an example demonstrating the difficulties of applying
the traditional vector field Lie bracket to infinite-dimensional spaces directly
with differential operators on PDEs; in the same chapter, fruitful work on con-
trollability in the infinite dimensional context of Navier-Stokes and quantum
mechanics is cited. A careful use of the metric space Lie bracket and foliation
theorems means the approach can work in the greater generality of a metric space.
Define the distribution bracket-generated by the set of arc fields {X_i} to
be the distribution ∆ [{Xi }] consisting of the linear combinations of the Xi and
all finitely iterated brackets.

Theorem 78 Let ∆ be an n-dimensional distribution bracket-generated by the
G^p. Then R_G(x) ⊂ L_x and R̄_G(x) = L_x.

Proof. ∆[{G^p}] is involutive by definition and so has a foliation consisting of the leaves L_x. That R_G(x) ⊂ L_x is a corollary of Frobenius' Theorem (Theorem 75) and Theorem 33. Continuation on the connected leaves means one can move between any two points in a leaf using the flows, proving R̄_G(x) = L_x.

This is a generalization of Chow’s Theorem to metric spaces (also called the
Chow-Rashevsky Theorem and Hermes’ Theorem).

In Chapters 4 and 5 we investigate applications of these ideas. An infinite-
dimensional distribution version arises naturally using the same approach of
generalizing linear algebra to functional analysis, but we have not yet dwelled
on any potential pitfalls.
Part II


Chapter 4

Brackets on function spaces

Let’s explore how the ideas of Chapter 3 express themselves with the simplest
examples on function spaces.

Example 79 (Vector space translations and dilations) Let M be a Banach space with norm ‖·‖. First let X and Y be vector space translations in the
directions of u and v ∈ M

Xt (x) := x + tu Yt (x) := x + tv.

X and Y are their own flows (when extended to |t| > 1). Obviously [X, Y ] = 0,
and the flows commute.
Next consider the dilations X and Y about the respective centers u and v ∈ M

Xt (x) := (1 + t) (x − u) + u Yt (x) := (1 + t) (x − v) + v

(u and v are usually taken equal to 0 for simplicity). The flows are computable
by intuition, or with a little effort using Euler curves

F_t(x) = lim_{n→∞} X_{t/n}^{(n)}(x) = e^t x − (e^t − 1)u.

(So dilation about u = 0 is the familiar Ft (x) = et x.)
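The Euler-curve limit above is easy to test numerically. The following is a hypothetical one-dimensional instance with our own choice of center u:

```python
import numpy as np

u = 2.0                            # center of dilation (our choice)

def X(t, x):
    return (1 + t) * (x - u) + u   # the dilation arc field

def euler(t, x, n):
    """Compose X_{t/n} with itself n times (an Euler curve)."""
    for _ in range(n):
        x = X(t / n, x)
    return x

x0, t = 5.0, 0.8
exact = np.exp(t) * x0 - (np.exp(t) - 1) * u    # the flow F_t(x0)
assert abs(euler(t, x0, 10_000) - exact) < 1e-3
```

The convergence is the familiar (1 + t/n)^n → e^t applied to the displacement x − u.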
Then for t ≥ 0

[X, Y]_{t^2}(x) = G_{−t}F_{−t}G_tF_t(x)
= e^{−t}(e^{−t}(e^t(e^t x − (e^t − 1)u) − (e^t − 1)v) − (e^{−t} − 1)u) − (e^{−t} − 1)v
= x − u + e^{−t}u − e^{−t}v + e^{−2t}v − e^{−2t}u + e^{−t}u − e^{−t}v + v
= x + (v − u)(e^{−t} − 1)^2

so [X, Y] ∼ Z where Z is the translation Z_t(x) := x + t(v − u) since, for instance with t > 0,

d([X, Y]_t(x), Z_t(x)) = ‖v − u‖ |(e^{−√t} − 1)^2 − t| = |t| ‖v − u‖ |((e^{−√t} − 1)/√t)^2 − 1| = o(t).
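A quick numerical check of this bracket computation (with hypothetical scalar values for u, v, and the base point, chosen only for illustration):

```python
import numpy as np

u, v = 1.0, 4.0                    # centers of the two dilations

def F(t, x):                       # flow of the dilation about u
    return np.exp(t) * (x - u) + u

def G(t, x):                       # flow of the dilation about v
    return np.exp(t) * (x - v) + v

x0 = 0.5
for t in (0.1, 0.01):
    lhs = G(-t, F(-t, G(t, F(t, x0))))
    # closed form for these flows: x + (v - u)(e^{-t} - 1)^2
    assert abs(lhs - (x0 + (v - u) * (np.exp(-t) - 1) ** 2)) < 1e-12
    # to leading order this is the translation by t^2 (v - u)
    assert abs(lhs - (x0 + t**2 * (v - u))) < 5 * t**3 * abs(v - u)
```

The second assertion exhibits the tangency: at time parameter t^2 the commutator is the translation by t^2 (v − u) up to a cubic error.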

Hence the distribution ∆(X, Y) is not involutive. However, the set of all dilations generates all translations using brackets. Using the same tricks we've just employed, it is easy to check the bracket of a dilation and a vector space translation is tangent to a vector space translation, e.g., if F_t(x) := x + tu and G_t(x) := e^t x (dilation about 0) then [F, G] ∼ F since for t > 0

[X, Y]_{t^2}(x) = G_{−t}F_{−t}G_tF_t(x) = e^{−t}(e^t[x + tu] − tu) = x + tu(1 − e^{−t})

and so

d([X, Y]_t(x), F_t(x)) = ‖tu‖ |(1 − e^{−√t})/√t − 1| = o(t).

Bracket-generated distributions (definition on p. 68) are involutive by definition. Example 79 shows the distribution bracket-generated by the dilations is exactly the distribution consisting of all dilations and all translations.

With obvious modifications the results of Example 79 are valid on the metric
space (H (Rn ) , dH ) where H (Rn ) is the set of non-void compact subsets of Rn
and dH is the Hausdorff metric. Theorem 75 gives foliations of H (Rn ), which
is of interest because this space is incapable of accepting any natural linear
structure. H (Rn ) is a particularly strange space topologically because, despite
being locally compact, H (Rn ) is infinite dimensional by most any measure we
can attempt to apply. We can even find infinitely many transverse flows.

Example 80 (two parameter decomposition of L^2) Now let M be the real Hilbert space L^2(R). Since M is Banach, the results of Example 79 hold. Let's explore the example from §0.2 in further detail. Let F_t(f)(x) := f(x + t) denote function translation and let G_t(f) := f + tg denote vector space translation by g ∈ L^2(R). When g ∈ C^1(R) with derivative g′ ∈ L^2(R), Example 97 below shows F and G close and gives the formula for the flow of the sum.
Let's compute their bracket. For t > 0

[F, G]_{t^2}(f)(x) = G_{−t}F_{−t}G_tF_t(f)(x) = G_{−t}F_{−t}[f(x + t) + tg(x)]
= f(x) + tg(x − t) − tg(x) = f(x) − t^2 (g(x) − g(x − t))/t.

Defining a new arc field Zt (f) := f + t (−g  ) we therefore have

7 (  √ )2
7 g (x) − g x − t
d ([F, G]t (f ) , Zt (f)) = |t| 8 √ − g (x) dx = o (t)

when g ∈ C^1(R) with g' ∈ L2(R). Thus [F, G] ∼ Z, which we finish verifying by checking the case t ≤ 0. Let t = -s^2 for s > 0. Then

[F, G]_t(f)(x) = F_{-s} G_{-s} F_s G_s(f)(x) = F_{-s} G_{-s} [f(x + s) + sg(x + s)]
= f(x) + sg(x) - sg(x - s) = f(x) - s^2 (-(g(x) - g(x - s))/s)
= f(x) + t (-(g(x) - g(x - s))/s).

So again d([F, G]_t(f), Z_t(f)) = o(t). (See Figures 5.1 and 5.2 with g(x) = e^{-x^2}.)

Figure 5.1: F and G do not commute. The four panels show G_1(0) = e^{-x^2}, F_1G_1(0) = e^{-(x+1)^2}, G_{-1}F_1G_1(0) = e^{-(x+1)^2} - e^{-x^2}, and F_{-1}G_{-1}F_1G_1(0) = e^{-x^2} - e^{-(x-1)^2} ≠ 0.


Figure 5.2: ‖G_{-√t} F_{-√t} G_{√t} F_{√t}(0) - (-t (d/dx) e^{-x^2})‖ = o(t)

This simple calculation has remarkable consequences. By Theorem 78, if the (n + 1)-st derivative g^[n+1] is not contained in

span{g^[i] | 0 ≤ i ≤ n}

then iterating the process of bracketing F and G generates a large space reachable via repeated compositions. For instance when g(x) = e^{-x^2} the derivatives generate the famous Hermite functions, a basis of L2(R). In this case R_(F,G)(0) = L2(R). We devote §5.1 to exploring the function approximation schemes this fact inspires.

Continuing the example, for other choices of g we may alternately have

g^[n+1] ∈ span{g^[i] | 0 ≤ i ≤ n}.

Then the space reachable by F and G is precisely limited. E.g., if we limit ourselves to M := L2[a, b] then we may choose g to be a sine or cosine function; then R_(F,G)(0) is two-dimensional, as are the leaves of the foliation generated by ∆(F, G). Similarly if g is an n-th order polynomial then the parameter space is (n + 1)-dimensional. These choices of g would also give finite-dimensional foliations of L2_G(R), which denotes the space of square integrable functions with Gaussian weight, i.e., with norm

‖f‖_G := ( ∫_R |f(x)|^2 e^{-x^2} dx )^{1/2} < ∞.

Restating these results in different terminology: controlling amplitude and phase, the two-parameter system is holonomically constrained. Controlling phase and superposition perturbation (F and G) generates a larger space of signals; how much F and G deviate from holonomy depends on the choice of the perturbation function g.

Example 81 Let's continue Example 80 with M = L2(R) and

F_t(f)(x) := f(x + t) and G_t(f) := f + tg.

Now define the arc fields

V_t(f) := e^t f and W_t(f)(x) := f(e^t x),

which may be thought of as vector space dilation (about the point 0 ∈ M) and function dilation (about the point 0 ∈ R). Again, V and W are their own flows. Using the same approach as in Examples 79 and 80 it is easy to check the brackets satisfy

[F, G]_t(f) = f - tg' + o(t)                [G, V]_t = G_t + o(t)
[G, W]_t(f)(x) = f(x) + txg'(x) + o(t)      [F, V] = 0
[F, W]_t = -F_t + o(t)                      [V, W] = 0

assuming for the [F, G] and [G, W] calculations that g ∈ C^1(R) and g' ∈ L2(R).
∆ (F, G) may be highly non-involutive depending on g,
∆ (G, V ) is involutive, but G and V do not commute,
∆ (G, W ) may be highly non-involutive depending on g,
∆ (F, V ) is involutive; F and V commute,
∆ (F, W ) is involutive, but F and W do not commute,
∆ (V, W ) is involutive; V and W commute.

For many choices of g (e.g., e^{ax} or x^c for non-integer c) the flows G and W control many function spaces, similarly to F and G. This gives more opportunities to generate approximation schemes, which are explored in Chapter 5. Foliations of L2(R) generated by these distributions are now precisely understood.

Example 82 Now consider

G_t(f) := f + tg and Z_t(f)(x) := e^{tcx} f(x)

with g(x) = e^{-x^2} and c a non-zero constant. G and Z are their own flows.

[G, Z]_{t^2}(f)(x) = Z_{-t} G_{-t} Z_t G_t(f)(x) = f(x) + t^2 g(x) (1 - e^{-tcx})/t

so [G, Z] ∼ H where H_t(f)(x) := f(x) + tcxg(x). Therefore the distribution ∆(G, Z) is not involutive. In fact iterating the bracket generates polynomials of every degree and the system is controllable—the reachable set satisfies R_(G,Z) = L2(I) for any bounded interval I ⊂ R.
This then means that terms of the form

Z_{s_n} G_{t_n} ··· Z_{s_1} G_{t_1}(0) = e^{s_n cx}(··· e^{s_2 cx}(e^{s_1 cx} t_1 g(x) + t_2 g(x)) ··· + t_n g(x))
= g(x) (a_1 e^{b_1 x} + a_2 e^{b_2 x} + ··· + a_n e^{b_n x})

are dense in L2(R). Therefore linear combinations of exponentials

Σ_n a_n e^{b_n x}     (4.1)

are dense in L2(I) for any bounded interval I ⊂ R. Notice the choice of g was nearly immaterial—any nonvanishing function in L2(R) will do.

In this example the Stone-Weierstrass Theorem gives us the density of series such as (4.1), since they form an algebra generated by the set

{e^{rx} | r ∈ R},

which separates points. However Stone-Weierstrass cannot predict the results we will achieve extending this example in §5.2.

Example 83 The question arises: does Fourier analysis decompose L2(0, 1) with just a few flows? Yes and no. The typical point of view would more likely be that Fourier analysis is so powerful because we need only a countable set of orthogonal functions {e^{inx} | n ∈ Z} or {sin(nx) | n ∈ Z} ∪ {cos(nx) | n ∈ Z} to represent any of the uncountable profusion of functions in L2(0, π). The series

f(x) = Σ_{n=1}^∞ a_n sin nx + Σ_{n=1}^∞ b_n cos nx

represents (countably) infinitely many operations of superposition (+). That is, you need one circuit for each n to synthesize a signal. Finitely many circuits means limited fidelity. The translations F and G from Example 81, where G_t(f) := f + tg with g(x) := cos x, will certainly not work since the distribution generated by the brackets of F and G is merely 2-dimensional (the derivative of the cosine is a function translation of the cosine).
But from another point of view, the answer is yes. Three flows are enough to generate any Fourier series with a simple algorithm. Let us examine the flows

G_t(f) := f + tg and W_t(f)(x) := f(e^t x)

from Example 81 more closely. Choosing g(x) = cos x, notice any finite cosine series

f(x) := Σ_{k=1}^n b_k cos kx

is in the reachable set for G and W from 0, because

f = G_{b_1} W_{ln 2} G_{b_2} W_{ln(3/2)} G_{b_3} ··· W_{ln(n/(n-1))} G_{b_n}(0).

Let us illustrate this with a calculation for n = 3:

G_{b_1} W_{ln 2} G_{b_2} W_{ln(3/2)} G_{b_3}(0)(x)
= G_{b_1} W_{ln 2} G_{b_2} W_{ln(3/2)}(b_3 cos(x)) = G_{b_1} W_{ln 2} G_{b_2}(b_3 cos(e^{ln(3/2)} x))
= G_{b_1} W_{ln 2}(b_3 cos((3/2)x) + b_2 cos x)
= G_{b_1}(b_3 cos(e^{ln 2}(3/2)x) + b_2 cos(e^{ln 2} x))
= b_3 cos(3x) + b_2 cos(2x) + b_1 cos x.

So these two flows generate any Fourier cosine series, which means on [0, π] that R_(G,W) = L2[0, π].
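The n = 3 composition above is easy to verify numerically with the two flows implemented as Python callables; a sketch with sample coefficients b1, b2, b3 (the values are illustrative only):

```python
import numpy as np

def G(b, f, g):       # superposition flow: G_b(f) = f + b*g
    return lambda x: f(x) + b * g(x)

def W(t, f):          # function dilation flow: W_t(f)(x) = f(e^t x)
    return lambda x: f(np.exp(t) * x)

g = np.cos                        # perturbation g(x) = cos x
b1, b2, b3 = 0.5, -1.2, 2.0       # sample coefficients

h = G(b3, lambda x: 0.0 * x, g)   # b3 cos x
h = W(np.log(3 / 2), h)           # b3 cos(3x/2)
h = G(b2, h, g)                   # b3 cos(3x/2) + b2 cos x
h = W(np.log(2), h)               # b3 cos 3x + b2 cos 2x
h = G(b1, h, g)                   # b3 cos 3x + b2 cos 2x + b1 cos x

x = np.linspace(0.0, np.pi, 201)
target = b3 * np.cos(3 * x) + b2 * np.cos(2 * x) + b1 * np.cos(x)
assert np.allclose(h(x), target)
```

The same pattern extends to any n by alternating G_{b_k} with W_{ln(k/(k-1))}.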

This algorithm means we can make a universal synthesizer by putting a single variable-frequency oscillator (an LRC circuit whose capacitor has an adjustable plate separation) on a looped wire. There are, of course, many obstacles which need to be overcome in order to put this (or any) scheme into practice. The time it takes to load a signal onto the wire (the load time) gets larger as the fidelity is increased, so we need several circuits working in parallel. Looped signals degenerate in time due to friction and diffusion. Et cetera. There are, of course, answers to each problem, but knowing whether such fixes are practical is one of the gaping holes in the education of theorists such as myself.

Finally, using the ideas from Example 80 and a trick we will see in Theorem 94, there is another algorithm possible, since

[G, W]_t(f)(x) = f(x) + txg'(x) + o(t)     (4.2)

which for a function g(x) set equal to e^x or cos x means successive brackets again generate an infinite set of linearly independent functions.
Example 84 Again choosing some function space, M := L2(R) for example, let us slightly generalize V from Example 81 and consider X_t(f)(x) := e^{tg(x)} f(x), which is its own flow if g is some bounded function. Incidentally X is the solution to the differential equation f_t = gf. Let Y_t(f)(x) := f(x + t) and let's calculate their bracket:

[X, Y]_{t^2}(f)(x) = e^{t^2 [g(x) - g(x-t)]/t} f(x)

so [X, Y] ∼ Z where

Z_t(f) := e^{tg'} f,

and iterating we have [X^n, Y] ∼ Z^n where

[X^n, Y] := [[··· [[X, Y], Y], ···, Y], Y]     (n brackets)

Z^n_t(f) := e^{tg^[n]} f.


(a_m [X^m, Y] + a_n [X^n, Y])_t(f) ∼ e^{a_m t g^[m]} e^{a_n t g^[n]}(f) = e^{t(a_m g^[m] + a_n g^[n])}(f)

for a_i ∈ R, so

( Σ_{k=0}^n a_k [X^k, Y] )_t (f) ∼ exp( t Σ_{k=0}^n a_k g^[k] )(f).

Again, we understand the reachable set depending on the choice of g.
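The first bracket identity of Example 84 can again be checked by direct composition; a minimal sketch (the choices of f and g are illustrative; tolerances assume t = 10^-3):

```python
import numpy as np

def X(t, f, g):       # X_t(f)(x) = e^{t g(x)} f(x), its own flow
    return lambda x: np.exp(t * g(x)) * f(x)

def Y(t, f):          # translation flow: Y_t(f)(x) = f(x + t)
    return lambda x: f(x + t)

f = lambda x: np.exp(-x**2)
g, gp = np.sin, np.cos            # g and its derivative g'

t = 1e-3
x = np.linspace(-2.0, 2.0, 81)

h = Y(-t, X(-t, Y(t, X(t, f, g)), g))     # [X, Y]_{t^2}(f)

# exact algebra: e^{t (g(x) - g(x - t))} f(x)
exact = np.exp(t * (g(x) - g(x - t))) * f(x)
assert np.max(np.abs(h(x) - exact)) < 1e-12

# tangent to Z: differs from e^{t^2 g'} f only by o(t^2)
Z = np.exp(t**2 * gp(x)) * f(x)
assert np.max(np.abs(h(x) - Z)) < 1e-8
```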

Example 85 Modifying the previous example slightly gives us a technique for generating any probability distribution. Consider the new flow

X_t(f)(x) := e^{tg(x)} f(x) / ‖e^{tg} f‖.

Now M is chosen to be the unit sphere in some Banach function space B from which we get the norm used above, i.e., M := {f ∈ B | ‖f‖ = 1}. Further, g is chosen to be some suitably bounded function (though we are thinking merely of the Gaussian at this moment). Obviously ‖X_t(f)‖ = 1 for all t ∈ [−1, 1]. Again X is its own flow, which is possibly surprising, but not difficult to check:

X_t X_s(f)(x) = e^{tg(x)} [e^{sg(x)} f(x)/‖e^{sg} f‖] / ‖e^{tg} [(e^{sg} f)/‖e^{sg} f‖]‖
= e^{(s+t)g(x)} f(x) / ‖e^{(s+t)g} f‖ = X_{s+t}(f)(x).

Calculating the bracket with the translation flow Y_t(f)(x) := f(x + t) is only slightly more difficult than in the previous example, and we get [X, Y] ∼ Z where

Z_t(f) := e^{tg'} f / ‖e^{tg'} f‖.

Again the previous techniques work to generate any probability distribution as-
suming g is derivative-generating and the initial condition f (x) is sufficiently
regular (e.g., strictly positive).

Example 86 Let M := l2(R), i.e., the Hilbert space of all square summable sequences. Let S := {x ∈ M | even entries are 0}, e.g., x = (1, 0, 2^{-1}, 0, 2^{-2}, 0, ...) ∈ S. Then S is an infinite-dimensional closed linear submanifold of M and its infinitely many distinct cosets Φ := {L_x := x + S | x ∈ M} foliate M.
The set ∆ of vector fields which have 0 in all the even entries is a distribution with the same foliation Φ. Also ∆ is clearly involutive. This gives us motivation to generalize Frobenius' Theorem further to infinite-dimensional distributions. [64] gives examples of foliations in the following infinite-dimensional contexts: on the space of gauge fields on a principal bundle, on the space of Riemannian metrics, and on the space of probability measures on a manifold.

Example 87 Let M = L2(R). Consider the following arc fields

X_t(f)(x) := f(x + t) and Y_t(f)(x) := e^{t/2} f(e^t x).

X and Y each give 1-dimensional foliations of M\{0} and since they are each invariant on the unit ball B := B(0, 1) ⊂ M they also foliate B\{0} by a family of isometries, and foliate the unit sphere

S^∞ := {f ∈ L2(R) | ‖f‖_2 = 1}.

Interestingly the lengths of the integral curves are infinite (whenever the initial condition is not f ≡ 0) and the integral curves are far from being geodesics¹.
An insight into the infinite dimension of L2 comes from comparing these foliations to the 1-dimensional foliation of R2\{0} given by rotations, which also foliates the ball B_{R2}(0, 1)\{0}. These rotations are a family of isometries, where the integral curves have finite length 2πr.
As before it is easy to check the bracket satisfies [X, Y] ∼ −X so ∆ gives a 2-dimensional foliation of M\{0} and B\{0} and S^∞. The area of each leaf is again infinite. Higher dimensional foliations are given by adding transverse arc fields.
Let B^+ := {f ∈ M : f(x) ≥ 0, ∀x ∈ R}. Choose g ∈ B^+ and define the arc field Z on M by

Z_t(f) := (1 − t) f + tg.
It is easy to check Z satisfies E1 and E2, e.g.,

Zs Zt (f) = (1 − s) [(1 − t) f + tg] + sg = · · · = Zs+t (f ) + st (f − g)

so E2 is satisfied. B + is an attracting invariant set under the forward flow of
Z. In fact g is the unique attracting fixed point of the flow. However B + is not
invariant under the reverse flow of Z, e.g., pick f = χ[0,1] and g = χ[1,2] . B + is
also invariant (forward and backward) under X and Y and their bracket is also
in B + so X and Y give a 2-D foliation of B + \ {0}. Further, any positive linear
combination aX + bY + cZ for a, b, c > 0 also gives a flow which has B + as a
forward invariant set. But the bracket of X and Z is not tangent to B + (since
it uses t < 0 in Zt ). As before the reachable set for X and Z or for Y and Z is
generally a higher-dimensional space.
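Two of the claims in Example 87 can be spot-checked numerically: Y_t preserves the L2 norm (substitute u = e^t x in the integral), and Z satisfies the E2 identity exactly. A sketch (the particular f and g are illustrative; g is just some positive function):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

def l2norm(v):
    # trapezoid-rule L2 norm on the grid
    return np.sqrt(np.sum((v[:-1]**2 + v[1:]**2) / 2) * dx)

f = lambda u: np.exp(-u**2)

# Y_t(f)(x) = e^{t/2} f(e^t x) is an L2 isometry
for t in (0.5, -1.0, 2.0):
    Ytf = np.exp(t / 2) * f(np.exp(t) * x)
    assert abs(l2norm(Ytf) - l2norm(f(x))) < 1e-6

# Z_t(f) = (1 - t) f + t g satisfies Z_s Z_t(f) = Z_{s+t}(f) + st(f - g)
g = lambda u: np.exp(-(u - 1)**2)            # a positive "target" function
Z = lambda t, vals: (1 - t) * vals + t * g(x)
s, t = 0.2, 0.3
lhs = Z(s, Z(t, f(x)))
rhs = Z(s + t, f(x)) + s * t * (f(x) - g(x))
assert np.allclose(lhs, rhs)
```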

1 locally minimal length curves
Chapter 5

Approximation with
non-orthogonal families

Chapter 4 furnished a particularly surprising result: two simple flows can control
an infinite-dimensional space. Here we translate some of these results to the
language of numerical analysis.

5.1 Gaussians
5.1.1 First approximation formula
The result of Example 80—that successive sums and translations of Gaussians approximate any L2 function—can be profitably rephrased. Here f ≈_ε g means ‖f − g‖_2 < ε.

Theorem 88 For any f ∈ L2(R) and any ε > 0 there exist t > 0, N ∈ N, and a_n ∈ R such that

f ≈_ε Σ_{n=0}^N a_n e^{-(x-nt)^2}.     (5.1)

If f(x)e^{x^2/2} is integrable, then one choice of coefficients is

a_n = ((-1)^n / (n!√π)) Σ_{k=n}^N (1/((k-n)!(2t)^k)) ∫_R f(x) e^{x^2} (d^k/dx^k) e^{-x^2} dx.

If f(x)e^{x^2/2} is not integrable, replace f in the above formula with f·χ_[−M,M], where M is chosen large enough that ‖f − f·χ_[−M,M]‖ < ε.

Proof. Since the span of the Hermite functions is dense in L2(R), see [61], we have for some N

f ≈_{ε/2} Σ_{n=0}^N b_n (d^n/dx^n) e^{-x^2}.

Now use finite backward differences to approximate the derivatives (Appendix C reviews the well-known facts; use Example 129). We have for some small t > 0

Σ_{n=0}^N b_n (d^n/dx^n) e^{-x^2} ≈_{ε/2} Σ_{n=0}^N (b_n/t^n) Σ_{k=0}^n (-1)^k C(n, k) e^{-(x-kt)^2},

where C(n, k) denotes the binomial coefficient. The coefficients a_n are achieved by simplifying this last expression and calculating the coefficients b_n using orthogonal functions. These straightforward calculations are detailed in [20].

This result may be surprising; it promises we can approximate to any degree of accuracy a function such as the following characteristic function of an interval

χ_[−11,−10](x) := { 1 for x ∈ [−11, −10]; 0 otherwise }

with support far from the means of the Gaussians e^{-(x-nt)^2}, which are located in [0, ∞) at the points x = nt. The graphs of these functions e^{-(x-nt)^2} are extremely simple geometrically, being Gaussians with the same variance. We only use the right translates, and they all shrink precipitously (exponentially) away from their means.

[Figure: Σ a_n e^{-(x-nt)^2} ≈ characteristic function?]

5.1.2 Signal synthesis
Interpreting Theorem 88 in terms of signal analysis, we see a Gaussian filter is a universal synthesizer with arbitrarily short load time. To clarify this claim, let G(x) := (1/√π) e^{-x^2}. A Gaussian filter is a linear time-invariant system represented by the operator

W(f)(x) := (f ∗ G)(x) = (1/√π) ∫_R f(y) e^{-(x-y)^2} dy.

The symbol W is used in reference to the Weierstrass transform. If you feed W a Dirac delta distribution δ_t (an ideal impulse at time x = t) you get W(δ_t)(x) = G(x − t). The load time τ of a signal W(f) is the length of the support of the input function f. Theorem 88 gives

Corollary 89 For any f ∈ L2(R) and any ε > 0 and any τ > 0 there exist t > 0 and N ∈ N with tN < τ such that

f ≈_ε W( Σ_{n=0}^N a_n δ_{nt} )

for some choice of a_n ∈ R.
Feed a Gaussian filter a linear combination of impulses and we can synthesize any signal with arbitrarily small load time τ. The design of physical approximations to an analog electronic Gaussian filter is detailed in [29].

One of the principal deities in Hinduism is Shiva, who generates, maintains,
and destroys the universe by performing the Tandava (“dance of bliss”) in cyclic
eternity. Said to be the supreme representation of Hindu art, the Chola Nataraja
(Figure 5.1) depicts Shiva in the midst of Tandava. In this canonical represen-
tation fire is held in Shiva’s upper left hand, symbolizing destruction, and a
damaru drum in Shiva’s upper right hand, with which Shiva (re)generates the
cosmos. The damaru is a bifacial drum in the shape of an hourglass and is
sometimes made from human skulls (Figure 5.2). Delivering alternate pulses to
the opposite sides of a damaru is an excellent metaphor for the universal signal
synthesis of Corollary 89.
Appropriately, Shiva is depicted in the Chola Nataraja breaking the back of
the demon Apasmara, who represents ignorance.

5.1.3 Deconvolution
The inverse problem of convolution is an important one for signal analysis: given a set of transformed signals from a fixed process, how do we find the signals that were originally transformed? In a dramatic instance, astronomers collected
flawed images from the Hubble telescope’s slightly defective mirror from 1990
when it was launched until 1993 when the aberration was corrected. These
images were far from useless and were immediately improved with deconvolution.
All imaging systems (cameras, telescopes, microscopes, televisions, eyeballs)
have such flaws to a certain degree, due to inevitable imperfections in lenses
and mirrors, and deconvolution is an important technique for improving their
quality. (The bible of optics is [13].)
Let's frame the problem mathematically. We imagine the pictures that Hubble took are the image under the convolution

Φ_T(f)(x) := (f ∗ T)(x) := ∫ f(y) T(x − y) dy.

f is the perfect picture, and the output Φ_T(f) is the transformed, flawed picture. The goal is to find the inverse transform Φ_T^{-1} given only a few outputs.

Figure 5.1: Chola Nataraja. © Himalayan Academy Publications, Kapaa, Kauai.

Figure 5.2: Damaru made from crania. © National Music Museum: America's Shrine to Music.

Determining T solves the problem. The fundamental trick is to find the image of a point source or Dirac delta distribution. Knowing Φ_T(δ_0) gives us T since

Φ_T(δ_0)(x) = ∫ δ_0(y) T(x − y) dy = ∫ δ_0(x − y) T(y) dy = T(x).

Once the transfer function T is thus known, the Fourier transform may theo-
retically be used to find the inverse

f = F −1 (F (f ∗ T ) /F (T ))

using the Convolution Theorem F (f ∗ T ) = F (f ) F (T ). Division by 0 in
this calculation may demand another approach, though. In astronomy, distant
stars are regularly used as point sources. In microscopy, point sources are more
difficult to obtain. In astronomy and microscopy T is called the point spread
function; in signal analysis T is the impulse response; in control theory T is the
transfer function; in seismology T is the earth-reflectivity function.

Theoretically we can find T given ΦT (f) for any particular f since again

T = F −1 (F (f ∗ T ) /F (f))

but dividing by 0 again suggests this approach is not numerically feasible.
Theorem 88 allows us to use Gaussians as an alternative to point sources or Dirac delta distributions in constructing an approximation of T. Let G_z(x) := (1/√π) e^{-(x-z)^2}. Then

Φ_T(G_0)(x) = ∫ G_0(y) T(x − y) dy = ∫ G_0(y − x) T(y) dy
= ∫ G_x(y) T(y) dy = ⟨G_x, T⟩.     (5.2)

Now by Theorem 88 we can construct an orthonormal basis of L2 with linear combinations of the G_x and obtain the generalized Fourier coefficients of T from line (5.2). The most natural choice of orthonormal basis would be the Hermite functions. Specifically, given enough information about Φ_T(G_0) we can compute

(d^n/dx^n) Φ_T(G_0)(x) |_{x=0} = ⟨ (d^n/dx^n) G_x |_{x=0}, T ⟩ = (-1)^n ⟨H_n G, T⟩,

which are essentially the Hermite coefficients of T, with which we may reconstruct T as a Hermite series. This method is essentially inverting the Weierstrass transform [10], which has been largely neglected despite periodic rediscovery. One
advantage to using the Weierstrass transform in astronomy (or microscopy) is
that the light profile of a distant star (or quantum dot) is quite accurately
represented by a Gaussian compared with a Dirac delta distribution.
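The identity in line (5.2) — the filter's response to a Gaussian probe equals an inner product against translated Gaussians — can be checked with a discretized convolution. A sketch, with a hypothetical transfer function T:

```python
import numpy as np

y = np.linspace(-15.0, 15.0, 60001)
dy = y[1] - y[0]

G = lambda u: np.exp(-u**2) / np.sqrt(np.pi)    # Gaussian probe G_0
T = lambda u: np.exp(-(u - 1)**2 / 2)           # hypothetical transfer function

def phi_at(x):
    # Phi_T(G_0)(x) = ∫ G_0(u) T(x - u) du (Riemann sum)
    return np.sum(G(y) * T(x - y)) * dy

def inner_product(x):
    # <G_x, T> = ∫ G(u - x) T(u) du
    return np.sum(G(y - x) * T(y)) * dy

for x0 in (-2.0, 0.0, 1.5):
    assert abs(phi_at(x0) - inner_product(x0)) < 1e-8
```

So sampling the blurred image of a single Gaussian probe at many x recovers the inner products needed for the Hermite reconstruction of T.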

As discussed in Examples 80 and 81 this method may be expanded to other
choices of functions in place of the Gaussian, but the choice is precisely limited to
derivative-generating functions. Those examples demonstrate why polynomials
and sine functions, e.g., are poor choices.

5.1.4 Coefficient formulas
Theorem 88 promises any L2 function can be approximated

f(x) ≈_ε Σ_{n=0}^N a_n e^{-(x-nt)^2}     (5.3)

and gives a formula for the coefficients a_n. Unfortunately we cannot simply send N → ∞:

f(x) ≠ Σ_{n=0}^∞ a_n e^{-(x-nt)^2}.

The approximation, line (5.3), works because it is a weaker form of convergence than the typical Hilbert space series convergence. Consequently the coefficients a_n are not unique, and in fact are not "best" according to the classical continuous least squares technique.

[Two plots comparing the least squares approximation with the Theorem 88 formula, both with N = 5, t = .01.]

Least squares
With the least squares method we minimize the error function

E_2(a_0, ..., a_N) := ∫_R ( f(x) − Σ_{n=0}^N a_n e^{-(x-nt)^2} )^2 dx

by setting ∂E_2/∂a_j = 0 for j = 0, ..., N and solving for the a_n. These N + 1 linear equations are called the normal equations. The matrix form of this system is M·v = b where M is the matrix

M = [ √(π/2) e^{-(k^2 + j^2 - (k+j)^2/2) t^2} ]_{j,k=0}^N = [ √(π/2) e^{-(k-j)^2 t^2/2} ]_{j,k=0}^N

and

v = [a_j]_{j=0}^N and b = [ ∫_R f(x) e^{-(x-jt)^2} dx ]_{j=0}^N.
M is symmetric and invertible, so we can always solve for the a_n. But these least squares matrices are notorious for being ill-conditioned when using non-orthogonal approximating functions. The Hilbert matrix is the archetypal example. The current application is no exception since the matrix entries are very similar for most choices of N and t, so round-off error is extreme. Choosing N = 7 instead of 5 in the graphed example above requires almost 300 significant digits.
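The ill-conditioning is visible directly: each Gram-matrix entry is M_{jk} = ∫ e^{-(x-jt)^2} e^{-(x-kt)^2} dx = √(π/2) e^{-(j-k)^2 t^2/2}, so for small t the columns are nearly identical. A sketch:

```python
import numpy as np

def gram(N, t):
    # M[j, k] = sqrt(pi/2) * exp(-((j - k) * t)**2 / 2)
    idx = np.arange(N + 1)
    j, k = np.meshgrid(idx, idx, indexing="ij")
    return np.sqrt(np.pi / 2) * np.exp(-((j - k) * t)**2 / 2)

t = 0.01
for N in (5, 7):
    c = np.linalg.cond(gram(N, t))
    # nearly identical columns: solving the normal equations in
    # float64 is hopeless at this conditioning
    assert c > 1e10
```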

Lagrange multipliers formula

A third approach is to use Lagrange multipliers to minimize Σ_n a_n^2 subject to the constraint

∫_R ( f(x) − Σ_{n=0}^N (a_n cos ntx + b_n sin ntx) )^2 dx ≤ ε.

Changing the translation factors
There are many other approaches as well. The question of which method works best is difficult, and the answer (if there is one) depends on the situation. A completely different approach that may be effected is to change the choice of Gaussians—we don't need to use evenly spaced translations:

f(x) ≈_ε Σ_n a_n e^{-(x - α_n t)^2}     (5.4)

for any suitably bounded sequence α. The coefficients may be calculated by adjusting the proof of Theorem 88 with the n-point numerical differentiation formula appropriate to α (reviewed in Appendix C).

5.1.5 Instability
Be warned that the method is unstable for two reasons: the coefficients grow without bound as ε → 0, and as N increases all the coefficients need to be recalculated. These difficulties can be ameliorated using the numerical analysis canon (see Appendix C, e.g., for one approach to improving numerical differentiation), but not eliminated. Instability is a catastrophic problem that precludes the use of this method in many situations. However, we can take comfort from the fact that we know precisely where the instability arises, and so we can anticipate the error and adjust for it.

5.2 Low-frequency trigonometric series
“Signal analysts do it with high frequency.”
-bumper sticker

5.2.1 Density in L2
Taking the Fourier transform of line (5.1) gives the startling fact that low-frequency trigonometric series are dense in L2[a, b]. Let's go through the details. Define the norm

‖f‖_{2,G} := ( ∫ |f(x)|^2 e^{-x^2} dx )^{1/2}

with Gaussian weight function. Let L2_G(R) denote the set of functions f with finite norm ‖f‖_{2,G} < ∞. Write f ≈_{ε,G} g to mean ‖f − g‖_{2,G} < ε.

Theorem 90 For every f ∈ L2(R, C) and ε > 0 there exist N ∈ N and t_0 > 0 such that for any t ≠ 0 with |t| < t_0

f(x) ≈_{ε,G} Σ_{n=0}^N a_n e^{-intx}

for a_n ∈ C dependent on N and t.

Proof. We use the Fourier transform with convention

F[f](s) = (1/√(2π)) ∫_R f(x) e^{-isx} dx.

F is a linear isometry of L2(R, C) with

F[e^{-αx^2}] = (1/√(2α)) e^{-s^2/(4α)},
F[f(x + r)] = e^{irs} F[f(x)], and
F[g ∗ h] = √(2π) F[g] F[h],

where ∗ is convolution.
Let f ∈ L2; we now show f_2(x) := (1/√(2π)) (e^{-x^2} ∗ F^{-1}[f])(x) ∈ L2. Notice g := F^{-1}[f] ∈ L2 and, by the Cauchy-Schwarz inequality,

‖f_2‖_2^2 = ∫ | (1/√(2π)) ∫ g(x − y) e^{-y^2} dy |^2 dx ≤ c' ∫ ∫ |g(x − y)|^2 e^{-y^2} dy dx
= c ‖W_{t_0}[|g|^2]‖_1 = c ‖|g|^2‖_1 = c ‖g‖_2^2 = c ‖f‖_2^2 < ∞

for some constants c, c' > 0. Here W_t[h] is the solution to the diffusion equation for time t and initial condition h. (The notation W refers to the Weierstrass transform.)
The reason for the third equality in the previous calculation is that W_t maintains the L1 integral of any positive initial condition h for all time t > 0 [66].
Now approximate the real and imaginary parts of f_2 with Theorem 88. Then we get

(1/√(2π)) (e^{-x^2} ∗ F^{-1}[f])(x) ≈_ε Σ_{n=0}^N a_n e^{-(x-nt)^2},  a_n ∈ C,

and applying F gives

(1/√2) e^{-s^2/4} f(s) ≈_ε Σ_{n=0}^N a_n e^{-ints} (1/√2) e^{-s^2/4}

so

f(s) ≈_{√2 ε, G} Σ_{n=0}^N a_n e^{-ints},

using the fact e^{-s^2/4} > e^{-s^2}.
Another proof is furnished in Example 93, below.

Corollary 91 On any finite interval [a, b], for any ω > 0, the finite linear combinations of sine and cosine functions with frequency lower than ω are dense in L2([a, b], R).

Proof. On [a, b] the Gaussian is bounded and so the norms with or without weight function are equivalent. Apply Theorem 90 to f ∈ L2([a, b], R) and choose t such that Nt < ω to get

f ≈_ε Σ_{n=0}^N Re(a_n) cos(ntx) + Im(a_n) sin(ntx)

where

a_n = ((-1)^n/(n!√π)) Σ_{k=n}^N (1/((k-n)!(2t)^k)) ∫_R ((1/√(2π)) e^{-x^2} ∗ F^{-1}[f])(x) e^{x^2} (d^k/dx^k) e^{-x^2} dx.

High frequency trigonometric series are of course dense in L2[0, 2π], which is the basis of Fourier analysis; this idea was revolutionary in the 19th century, but it's common mathematical intuition today. Looking at the figures below we're not too surprised that linear combinations of trig functions sin kx and cos kx can represent most any function on [0, 2π]. Perhaps, though, even jaded readers will be surprised that the set of uninteresting, flat functions sin(x/k) and cos(x/k) can combine linearly to give practically any function on (−∞, ∞). The fact that a high-frequency "signal" can be constructed with low-frequency trig series casts into doubt our traditional interpretation of the fundamental facts of information theory. High channel capacity is possible with low bandwidth transmission. Spectral imaging techniques are theoretically possible with low-frequency electromagnetic waves.

[Plots: cos(kx), k = 1, ..., 6 (high frequency, ω ≥ 1) and cos(x/k), k = 1, ..., 6 (low frequency, ω ≤ 1).]

Remark 92 The above development actually shows something a bit stronger.
Just as Gaussians with centers near zero can approximate any L2 function, we
may pick any point x0 ∈ R in place of 0 and Gaussians with centers near x0
clearly may still approximate any L2 function.
Then as before, taking the Fourier transform shows we may replace trig func-
tions of near-0 frequency (low-frequency trig functions) with trig functions of fre-
quency near any x0 and still get a robust approximation technique. This means
as long as we have subtle control over an oscillator in any narrow bandwidth we
can synthesize every signal.

That low-frequency trigonometric series Σ a_n e^{-intx} are dense in L2_G(R, C) may be superficially surprising in light of the penultimate paragraph of Example 80, which shows series of the form Σ a_n e^{-i(x+r)} for all r ∈ R are far from dense, forming a 3-dimensional subset of the ∞-dimensional space. Use F_t(f)(x) := f(x + t) and G_t(f) = f + th with h(x) = e^{ix}.

5.2.2 Coefficient formulas
One formula for the coefficients of the low-frequency approximation is explicit in
the constructive proof given above for Corollary 91. Alternately we can directly
use classical least squares approximation or Lagrange multipliers as detailed in
§5.1. Let’s look at a few more approaches.
Example 93 As in Example 82 with c set equal to the complex number i,

G_t(f) := f + tg and Z_t(f)(x) := e^{ixt} f(x)

with g(x) = e^{-x^2} gives R_(G,Z) = L2(R, C). Z is effectively the Fourier transform of function translation. As before [G, Z] ∼ H where

H_t(f)(x) := f(x) + tixg(x).

Iterating the bracket generates polynomials of every degree, so the system is controllable. This is another proof of Theorem 90.
Example 93 inspires a third proof of Theorem 90:
Theorem 94 If f is represented by a power series

f(x) = Σ_{n=0}^∞ b_n x^n

with convergence in L2_G then

f ≈_{ε,G} Σ_{k=0}^N a_k e^{iktx}

for large enough N and small t ≠ 0, where

a_k = Σ_{n=k}^N ((-1)^{n-k} b_n / (it)^n) C(n, k),

with C(n, k) the binomial coefficient.

Proof. Since

i^n x^n = (d^n/dr^n) e^{irx} |_{r=0}

we may truncate the series for f to get

f(x) ≈_{ε/2,G} Σ_{n=0}^N b_n i^{-n} (d^n/dr^n) e^{irx} |_{r=0}.

Now use finite forward differences to approximate the derivatives (Appendix C). We have for some small t > 0

Σ_{n=0}^N (b_n/i^n) (d^n/dr^n) e^{irx} |_{r=0} ≈_{ε/2,G} Σ_{n=0}^N (b_n/(i^n t^n)) Σ_{k=0}^n (-1)^{n-k} C(n, k) e^{iktx}.

Switching the order of summation gives the result.
Every n-th order polynomial is then rewritten as a low-frequency trigono-
metric series with n (complex) terms. We can also use different polynomial
approximations of a function (e.g., orthogonal polynomials) to get different for-
mulas for the coefficients of the low-frequency trig-series approximation.
Example 95 Three low-frequency sine functions are enough to approximate x^3 arbitrarily closely, despite the fact that their graphs look linear around 0.

[Plots: sin(x/100) (not to scale) on [−60, 60], and sin(.01x), sin(.02x), sin(.03x) on [−10, 10].]

Using the formula from Theorem 94, we can pick t = .01 to get

x^3 ≈ (−3/(.01)^3) sin(.01x) + (3/(.01)^3) sin(.02x) − (1/(.01)^3) sin(.03x)

and graphs show close visual agreement for x ∈ [−20, 20].
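The three-sine approximation is a one-liner to verify numerically; a sketch with t = 10^-3 (expanding the sines shows the leading error term is -(5/4) t^2 x^5, so the tolerance below is conservative on [-5, 5]):

```python
import numpy as np

def cubic_approx(x, t):
    # Theorem 94 applied to f(x) = x^3:
    # x^3 ≈ (-3 sin(t x) + 3 sin(2 t x) - sin(3 t x)) / t^3
    return (-3 * np.sin(t * x) + 3 * np.sin(2 * t * x) - np.sin(3 * t * x)) / t**3

x = np.linspace(-5.0, 5.0, 101)
t = 1e-3
assert np.max(np.abs(cubic_approx(x, t) - x**3)) < 1e-2
```

Note the coefficients are of size 3/t^3; halving the error by shrinking t inflates them further, which is the instability discussed next.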



[Plots: x^3 and its low-frequency approximation with t = 0.1 (on [−3, 3]) and with t = 0.01 (on [−20, 20]).]

As t → 0 the graphs converge for every x. Notice this miracle comes at the cost of large coefficients—see the comments on instability in §5.1.5.
In Figures 5.3-5.5 a few more examples of Theorem 94 are displayed.

Figure 5.3: e^x (solid), Σ_{n=0}^4 x^n/n! (dots) and its low-freq. trig. series approximation (dashes) with t = 0.1

Figure 5.4: x(x + 1)(x − 1) and its low freq. trig. approximation; t = 0.1

The instability we referred to above arises, for example, when we wish to get a better
approximation for the graph in Figure 5.5. If we try to shrink t from 0.1 to say 0.01, the coefficients, which include 1/t^n, grow quickly. Depending on the limits of your machine precision, as t shrinks you will eventually see a meaningless graph like Figure 5.6. As suggested in §5.1.5, implementing the more sophisticated numerical differentiation formulas derived in Appendix C avoids the round-off error. We can leave t = 0.1 but add more terms in the n-point formula to improve the approximation while avoiding large coefficients, thereby sidestepping the instability—Figure 5.7.

It is not difficult to extend Remark 92, slightly generalizing Theorem 94 to
give a similarly simple formula for power series conversion to trig series with
frequency in any narrow range. One approach is to approximate f (x) e−ixc with
frequencies near 0 then multiply by eixc .



Figure 5.5: Σ_{n=0}^{17} sin(nπ/2) x^n/n! and its low-freq. trig. approximation; t = 0.1

Figure 5.6: Instability creeps in as t → 0.01 with 30 digits precision.

Figure 5.7: Improved approximation of Σ_{n=0}^{17} sin(nπ/2) x^n/n! keeping t = 0.1 but using a higher n-point formula.

Axel Boldt and Pangyen Weng suggest another approach which yields the same coefficients.
2nd proof of Theorem 94. This time just use

x = (1/i) (d/dr) e^{irx} |_{r=0} = lim_{h→0} (e^{ihx} − 1)/(ih) ≈ (e^{itx} − 1)/(it)   so that

x^n ≈ (e^{itx} − 1)^n/(it)^n = (1/(it)^n) Σ_{k=0}^n (-1)^{n-k} C(n, k) e^{iktx}.

Finally, the real part may be calculated in several ways to get the formula

f ≈_{ε,G} Σ_{n=0}^N [ (b_{2n}/(2^{2n} t^{2n})) ( C(2n, n) + 2 Σ_{k=0}^{n-1} (-1)^{n-k} C(2n, k) cos((2n − 2k)tx) )
+ (b_{2n+1}/(2^{2n} t^{2n+1})) Σ_{k=0}^{n} (-1)^{n-k} C(2n+1, k) sin((2n + 1 − 2k)tx) ].

Error bound
From the numerical analysis point of view, it is worth noting how Theorem 94 leads to an explicit error bound for low-frequency trig-series approximations:
1. Use Taylor's Theorem to get an error bound for the polynomial approximation.
2. Replace the monomials x^n with i^{-n} (d^n/dr^n) e^{irx} |_{r=0}.
3. Approximate the derivatives with finite differences, which have an explicit error bound (details in Appendix C).

5.2.3 Damping gives a stable family
Numerical differentiation formulas have led us above to new approximating
families—shifted Gaussians and low-frequency trig series. Another approach to
numerical differentiation gives a noteworthy pair of approximating families, low-
frequency trig series with exponential and Gaussian damping.

[Plots: exponentially and Gaussian-damped trig functions, e.g., e^{-x} cos x and e^{-x^2} cos x.]

Simply choose complex nodes α_j := e^{j(2πi/n)} instead of real nodes for sampling in the derivative approximation formulas. Example 130 from Appendix C guarantees we have

(d^m/dz^m) g(z) ≈ (m!/(n t^m)) Σ_{j=1}^n e^{(n-m)j(2πi/n)} g(z + t e^{j(2πi/n)}).

Using this formula for the coefficients to approximate the derivative (d^m/dr^m) e^{irx} |_{r=0} as in Theorem 94 leads to
as in Theorem 94 leads to

Theorem 96 If f is represented by a power series

f(x) = Σ_{n=0}^∞ b_n x^n

with L2 convergence on a bounded interval, then f is approximated by damped, low-frequency trigonometric functions

f(x) ≈_ε Σ_{k=0}^{n-1} a_k e^{itx cos(2πk/n)} e^{-tx sin(2πk/n)}

for large enough M, n and any fixed t ≠ 0, where

a_k = Σ_{m=0}^M (b_m/(it)^m) (m!/n) e^{2πi(n-m)k/n}.


\begin{align*}
f(x) = \sum_{m=0}^{M} b_m x^m &\approx \sum_{m=0}^{M} \frac{b_m}{i^m}\left.\frac{d^m}{dr^m}e^{irx}\right|_{r=0} \approx \sum_{m=0}^{M}\frac{b_m m!}{(it)^m n}\sum_{k=0}^{n} e^{\frac{2\pi i}{n}(n-m)k}\, e^{itxe^{\frac{2\pi i}{n}k}} \\
&= \sum_{m=0}^{M}\sum_{k=0}^{n}\frac{b_m m!}{(it)^m n}\, e^{i\left(tx\cos\left(\frac{2\pi k}{n}\right) + \frac{2\pi}{n}(n-m)k\right)}\, e^{-tx\sin\left(\frac{2\pi k}{n}\right)} \\
&= \sum_{k=0}^{n}\sum_{m=0}^{M}\frac{b_m m!}{(it)^m n}\, e^{\frac{2\pi i}{n}(n-m)k}\, e^{itx\cos\left(\frac{2\pi k}{n}\right)}\, e^{-tx\sin\left(\frac{2\pi k}{n}\right)}.
\end{align*}

Remarkably, there is no instability with this approximating family: t may remain
fixed at any value, even t = 1, while n is increased arbitrarily. In this special
instance numerical differentiation is not unstable because the function we are
approximating is entire. We can appreciate the stability of the method by noticing
the terms do not grow uncontrollably as n → ∞, but instead converge to Cauchy's
integral. See Example 130.
With a more careful choice of nodes $\alpha_j$ the damping terms' coefficients may
be manipulated by the modeler. Then, however, the coefficients $a_k$ are not
explicitly given, but may be determined by solving a Vandermonde system
(again, see Appendix C).

Applying this approach to the Gaussian approximation scheme in Theorem
88 gives an approximating family consisting of low-frequency trig functions $e^{it_k x}$
with Gaussian damping factors $e^{-(x+r_k)^2}$ which again boasts an explicit, stable
formula for the coefficients.

The shape of these approximating functions is similar to the sinc function
$\frac{\sin x}{x}$. Compare this scheme and the Gaussian translations of §5.1 with the
classical Whittaker-Shannon interpolation formula, which uses translates of a
single sinc function with carefully chosen frequency.
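The stability claim is easy to test numerically. The following minimal sketch (our own plain-Python check, not code from this book; the helper name `deriv_root_of_unity` is ours) implements the complex-node derivative formula displayed above and approximates a third derivative with t kept at a moderate size:

```python
import cmath
import math

def deriv_root_of_unity(g, z, m, n, t):
    # m-th derivative of g at z from n samples at the complex nodes
    # z + t*e^{2*pi*i*j/n}, j = 1,...,n, as in the displayed formula
    total = sum(cmath.exp(2j * math.pi * (n - m) * j / n)
                * g(z + t * cmath.exp(2j * math.pi * j / n))
                for j in range(1, n + 1))
    return math.factorial(m) / (n * t ** m) * total

# third derivative of exp at 0 is exp(0) = 1; note t = 0.5 is not small,
# yet the approximation is accurate because the leading error is O(t^n)
approx = deriv_root_of_unity(cmath.exp, 0.0, 3, 8, 0.5)
print(abs(approx - 1.0))  # tiny
```

The roots-of-unity weights cancel every Taylor term except those of order congruent to m mod n, which is why shrinking t is unnecessary.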
Chapter 6

Partial differential equations

Further application of metric space algebra on function spaces leads to a new
perspective on partial differential equations (PDEs) by rewriting them without
derivatives. This is desirable because on any reasonably robust space of
functions a differential operator is unbounded. Looking at flows without
resorting to such discontinuous vector fields on Banach spaces gives palpable
advantages.

6.1 Metric space arithmetic
Example 97 $F_t(f)(x) := f(x+t)$ and $G_t(f) := f + tg$ on, say, $M = L^2$ has
\begin{align*}
d(F_tG_s(f), G_sF_t(f)) &= \left(\int (f(x+t) + sg(x+t) - [f(x+t) + sg(x)])^2\,dx\right)^{1/2} \\
&= |st|\left(\int \left(\frac{g(x+t)-g(x)}{t}\right)^2 dx\right)^{1/2} = O(st)
\end{align*}
assuming $g \in C^1$ and $g' \in L^2$, so F and G close. Then computing the flow H of
F + G with Euler curves we get
$$\left(G_{t/n}F_{t/n}\right)^{(n)}(f)(x) = f(x+t) + \frac{t}{n}\sum_{m=0}^{n-1} g\!\left(x + m\tfrac{t}{n}\right)$$
$$H_t(f)(x) = \lim_{n\to\infty}\left(G_{t/n}F_{t/n}\right)^{(n)}(f)(x) = f(x+t) + \int_x^{x+t} g(y)\,dy.$$


H, representing the equal combination of translation on the x-axis (F) and $L^2$
vector space translation (G), gives a dynamic which may be described as x-axis
translation with a smeared $L^2$ vector space translation of g. The sum of these
flows was introduced in [24], §5.2, with other interesting function space examples
and a treatment of partial differential equations.

H inspires us to study a similar arc field $X_t(f)(x) := f(x) + \int_x^{x+t} g(y)\,dy$.
We may readily check that $X \sim G$, so X doesn't give a new dynamic, but it's
still interesting as a new representation of a fundamental arc field.
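The Euler-curve limit above is easy to test numerically. In the sketch below (our own minimal check, with f = sin and g = cos so the integral is known in closed form) the composition $(G_{t/n}F_{t/n})^{(n)}$ is built by actually composing the two maps:

```python
import math
import sys

sys.setrecursionlimit(20000)  # the composed lambdas nest about 2n calls deep

def F(s, f):
    # translation flow: F_s(f)(x) = f(x + s)
    return lambda x: f(x + s)

def G(s, f, g):
    # vector-space translation flow: G_s(f)(x) = f(x) + s*g(x)
    return lambda x: f(x) + s * g(x)

f, g = math.sin, math.cos
t, n, x = 1.0, 2000, 0.3

h = f
for _ in range(n):                 # Euler curve (G_{t/n} F_{t/n})^(n)
    h = G(t / n, F(t / n, h), g)

# limit flow: H_t(f)(x) = f(x+t) + integral of g over [x, x+t]
exact = f(x + t) + (math.sin(x + t) - math.sin(x))
print(abs(h(x) - exact))           # O(1/n) Riemann-sum error
```

The discrepancy is exactly the error of a left Riemann sum for $\int_x^{x+t} g$, so it vanishes as n grows.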

Example 98 Consider the effect of multiplication aF on Ft (f ) (x) := f (x + t).
First if a is a constant

(aF )t (f ) (x) = Fat (f ) (x) = f (x + at)

which changes the speed of the x-axis translation.
If we want to apply the metric space arithmetic of §2.1 to a non-constant
function a then it needs to be a functional $a: M \to \mathbb{R}$. E.g., $a(f) := \|f\|_1$ gives
$(aF)_t(f)(x) := f(x + \|f\|_1 t)$, which is its own flow (translation preserves
$\|\cdot\|_1$) and is the solution to the PDE $f_t = \|f\|_1 f_x$. Compare the related PDE
$f_t = f f_x$, equivalent to Newton's equation on a line (see [5, p. 2]).

Another possibility asserts itself here. Though it doesn’t make sense in terms
of the metric space arithmetic defined above, let us formally insert the function
a := x. Then

(aF )t (f ) (x) = Fat (f ) (x) = f (x + xt) = f ([1 + t] x) .

aF is not a flow now:

(aF )s+t (f) (x) = f ([1 + s + t] x)
(aF )t (aF )s (f) (x) = f ([1 + s] [1 + t] x) .

The arc field aF actually has flow given by x-axis dilation
$$H_t(f)(x) := f(e^t x).$$

Checking well-posedness requires choosing a metric, e.g., pick $M := L^1(\mathbb{R})$.
Condition E1 follows from
$$d((aF)_t(f), (aF)_t(g)) = \int |f(x+xt) - g(x+xt)|\,dx = \frac{1}{1+t}\int_{-\infty}^{\infty}|f(u)-g(u)|\,du \le d(f,g)\sum_{n=0}^{\infty}|t|^n \le d(f,g)(1 + |t|\Lambda)$$
when $|t| < 1$, guaranteeing uniqueness of solutions. The solution H exists,
printed above, but checking E2 hits a snag:
$$d\left((aF)_{s+t}(f), (aF)_t(aF)_s(f)\right) = \int |f([1+s+t]x) - f([1+s+t+st]x)|\,dx = \frac{1}{1+s+t}\int_{-\infty}^{\infty}|f(u) - f(u+stx)|\,du \le |st|\,\|xf'(x)\|_1 = O(st)$$
for $s, t > 0$, but only when $xf'(x)$ is integrable.
Picking $M := C_c^1(\mathbb{R})$, the set of $C^1$ functions with compact support, with, e.g.,
metric $d(f,g) := \|f-g\|_\infty + \|f'-g'\|_\infty$ gives
$$d((aF)_t(f), (aF)_t(g)) = \sup_x |f(x+xt) - g(x+xt)| = d(f,g)$$
$$d\left((aF)_{s+t}(f), (aF)_t(aF)_s(f)\right) = \sup_x |f([1+s+t]x) - f([1+s+t]x + stx)| \le |st| \sup_x |xf'(x)| = O(st).$$

Example 99 Continuing the non-standard arithmetic of Example 98, consider
aG for $G_t(f) := f + tg$ where a(x) is a function:
$$(aG)_t(f)(x) = G_{a(x)t}(f)(x) = f(x) + ta(x)g(x)$$
so $(aG)_t(f) = f + t(a \cdot g)$. If the curve $G_t(f)$ is thought of as a vector in $L^2$
or $L^1$ starting at f and moving in the direction of g, then multiplying G by a
suitably bounded function a gives a new direction aG, akin to rotating the vector.
aG is clearly its own flow and satisfies E2; checking E1,
$$d\left((aG)_t(f_1), (aG)_t(f_2)\right) = d(f_1, f_2),$$
proves we have unique solutions.
Example 100 Let $X_t(f)(x) := f(x) + tf(x+t)$. The Euler curves are
$$Z_{t/n}^{(n)}(f)(x) = \sum_{k=0}^{n}\binom{n}{k}\frac{t^k}{n^k}\,f(x+kt), \text{ so}$$
$$F_t(f)(x) = \sum_{k=0}^{\infty}\frac{t^k}{k!}\,f(x+kt).$$

Notice on any Banach space of functions, using the notation of the translation
operator $\tau_z: \mathbb{R}\to\mathbb{R}$ with $\tau_z(x) = x+z$, we have
$$d(X_t(f), F_t(f)) = \left\|f + t\tau_t f - \sum_{k=0}^{\infty}\frac{t^k}{k!}\tau_{kt}f\right\| = \left\|\sum_{k=2}^{\infty}\frac{t^k}{k!}\tau_{kt}f\right\| \le \sum_{k=2}^{\infty}\frac{t^k}{k!}\|\tau_{kt}f\| = \sum_{k=2}^{\infty}\frac{t^k}{k!}\|f\| = o(t).$$

Applying the formulas Fs+t = Ft Fs , etc., to various functions f gives new
series identities.

Try calculating limits of Euler curves for F + G with various G, such as
Gt (f) := f + tg using the model of Example 97.

Similar results hold for the generalization Xt (f ) (x) := f (x)+tf (x + (at + b)).

6.2 PDEs as arc fields
Consider the simplest PDE
ft = fx (6.1)
with initial condition f0 : R → R, i.e., f (x, 0) = f0 (x). The solution is trans-
lation f (x, t) = f0 (x + t). To cast (6.1) in the idiom of arc fields we therefore
may write Xt (f0 ) (x) := f0 (x + t) or equivalently Xt (f ) (x) := f (x + t) which
is obviously its own flow. Alternately we may write (6.1) as

∂f (x, r) f (x, r + t) ∂f (x, r)
fr = = lim = = fx or
∂r t→0 t ∂x
f (x, r + t) − f (x, r)
= fx + O (t) or
f (x, r + t) = f + tfx + o (t)

which, as a second arc field, would be written simply $X'_t(f) := f + tf_x$ with
$X \sim X'$. Diagrammatically the PDE-to-arc-field translation is
$$\begin{array}{ccc} f_t = f_x & \Longleftrightarrow & X_t(f)(x) := f(x+t) \\ \updownarrow & & \updownarrow \\ f(x,r+t) = f + tf_x + o(t) & \Longleftrightarrow & X'_t(f) := f + tf_x. \end{array}$$

Similarly consider the PDE

ft = fxx (6.2)

with initial condition $f_0: \mathbb{R}\to\mathbb{R}$. The solution is diffusion
$$f(x,t) = \frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty} e^{-(x-y)^2/(4t)}\,f_0(y)\,dy.$$
Diagrammatically the PDE-to-arc-field translation of (6.2) is
$$\begin{array}{ccc} f_t = f_{xx} & \Longleftrightarrow & Y_t(f)(x) := \frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty} e^{-(x-y)^2/(4t)}\,f(y)\,dy \\ \updownarrow & & \updownarrow \\ f(x,r+t) = f + tf_{xx} + o(t) & \Longleftrightarrow & Y'_t(f) := f + tf_{xx} \end{array}$$
where Y is restricted to being a forward arc field.

It is easy to check both X and Y satisfy E1 and E2 on $L^1(\mathbb{R})$, and further
X & Y close and even commute: $X_sY_t = Y_tX_s$. Consequently we have existence
and uniqueness of solutions to a family of PDEs
$$\begin{array}{ccc} f_t = af_x + bf_{xx} & \Longleftrightarrow & Z = aX + bY \\ \updownarrow & & \updownarrow \\ f(x,r+t) = f + t(af_x + bf_{xx}) + o(t) & \Longleftrightarrow & Z'_t(f) = f + t(af_x + bf_{xx}) = (f + atf_x) + btf_{xx} \end{array}$$
where a and b may be any locally Lipschitz functions of f, i.e., Lipschitz continuous
functionals $a, b: L^1(\mathbb{R})\to\mathbb{R}$. When a and b are constants the solutions
calculated with Euler curves are
$$f(t,x) = \lim_{n\to\infty} Z_{t/n}^{(n)}(f_0)(x) = (aX + bY)_t(f_0)(x)$$
since X and Y commute. With non-constant a and b the Euler curves still
converge, but are not as easy to simplify.
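For constant a and b the Euler curves above can be imitated on a grid. The following sketch (our own finite-difference discretization, not code from this book) steps $f \mapsto f + \Delta t\,(af_x + bf_{xx})$ on a periodic grid and compares against the exact solution of $f_t = af_x + bf_{xx}$ for $f_0 = \sin$:

```python
import math

# advection-diffusion f_t = a f_x + b f_xx on a periodic grid [0, 2*pi)
N, a, b = 64, 1.0, 0.1
dx = 2 * math.pi / N
x = [i * dx for i in range(N)]
f = [math.sin(xi) for xi in x]           # initial condition f_0 = sin

t, steps = 1.0, 400
dt = t / steps                            # dt < dx^2/(2b) for stability
for _ in range(steps):
    # one Euler-curve step: f + dt*(a f_x + b f_xx), central differences
    f = [f[i] + dt * (a * (f[(i+1) % N] - f[(i-1) % N]) / (2 * dx)
                      + b * (f[(i+1) % N] - 2 * f[i] + f[(i-1) % N]) / dx**2)
         for i in range(N)]

# exact solution with f_0 = sin: e^{-bt} sin(x + at)
exact = [math.exp(-b * t) * math.sin(xi + a * t) for xi in x]
print(max(abs(fi - ei) for fi, ei in zip(f, exact)))  # small discretization error
```

The translation factor $e^{-bt}\sin(x+at)$ is exactly the composition of the two commuting flows: decay from Y and shift from F.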

Translating Example 11 to the current notation, let Wt (f) := f + tΨ (f).
When Ψ : M → M is locally Lipschitz on a Banach space, E1 and E2 are
satisfied. Now adding W is straightforward and gives solutions to the family
ft = afx + bfxx + cΨ (f ) .

To extend these results to higher-dimensional, higher-order, time-dependent
PDEs, apply Remark 13. To determine the existence of long-time solutions,
apply Theorem 25 adjusted in accord with the forward-flow ideas of Section 1.2.
This proves well-posedness for a family of PDEs, but the most eagerly sought-after
well-posedness results allow a wider choice of coefficients than the functionals
$a, b, c: M\to\mathbb{R}$ above. Combining the approach of this section with Examples 98
and 99 extends the family. E.g.,
$$\begin{array}{ccc} f_t = xf_x & \Longleftrightarrow & X_t(f)(x) := f(x+xt) \\ \updownarrow & & \updownarrow \\ f(x,r+t) = f + txf_x + o(t) & \Longleftrightarrow & X'_t(f) := f + txf_x. \end{array}$$

The claim $X \sim X'$ follows from $\lim_{t\to 0}\frac{f(x+xt)-f(x)}{t} = xf_x$. The solution is the flow
$F_t(f)(x) := f(e^t x)$, so if $xf_x$ is a component of a PDE we call it dilation.

To see why nonlinear PDEs are notoriously difficult to solve, consider the
simplest, which in arc field language is
$$\begin{array}{ccc} f_t = f\cdot f_x & \Longleftrightarrow & X_t(f)(x) := f(x + tf(x)) \\ \updownarrow & & \updownarrow \\ f(x,r+t) = f + tf\cdot f_x + o(t) & \Longleftrightarrow & X'_t(f) := f + tf\cdot f_x. \end{array}$$
The claim $X \sim X'$ follows from $\lim_{t\to 0}\frac{f(x+tf(x))-f(x)}{t} = f\cdot f_x$. Checking Conditions
E1 and E2 quickly becomes convoluted. This equation is related to the
convective derivative from fluid mechanics.

Exercise 101 Apply the above results to a space of divergence-free vector fields
and solve the Navier-Stokes equations. Finding the precise metric that works
for both the f · fx component and the diffusion fxx component, yet keeps the arc
field at linear speed may be a challenge; but if you act now, you’ll get the coffee
maker, the furniture, and... $1,000,000!
Chapter 7

Flows on H (Rn)

The idea for generalizing vector fields to arc fields to study flows on a metric
space is natural and simple and was independently arrived at by several authors
[51], [7], [18]. All of the instigators had the same space in mind, $H(\mathbb{R}^n)$. This
space's rich modeling capability was originally used by Hausdorff to give a
new topology on function spaces by comparing the distance between the graphs
of functions. Later the space became a successful environment for generating
fractals, as detailed in §7.1. In this chapter we will apply the metric space results
to $H(\mathbb{R}^n)$, introducing novel dynamics with previously unimaginable modeling
capabilities.

7.1 IFS
One route towards generating a fractal is via a so-called iterated function system
(IFS). For the sake of completeness, we review the procedure; a more leisurely
treatment is found in [8]. Let $\|\cdot\|$ denote the usual Euclidean norm on $\mathbb{R}^n$ and
use the metric space $H(\mathbb{R}^n)$ from Example 6. Denote $\alpha\vee\beta := \max\{\alpha,\beta\}$ and
$\alpha\wedge\beta := \min\{\alpha,\beta\}$.
We prove the following lemma since one might expect the cross terms $d_H(a_1, b_2)$
and $d_H(a_2, b_1)$ to appear on the right-hand side as well.

Lemma 102 For any $a_1, a_2, b_1, b_2 \in H(\mathbb{R}^n)$,
$$d_H(a_1\cup a_2,\, b_1\cup b_2) \le d_H(a_1, b_1) \vee d_H(a_2, b_2).$$


Proof. We have
\begin{align*}
d_H(a_1\cup a_2, b_1\cup b_2) &= \max_{y\in b_1\cup b_2} d(a_1\cup a_2, y) \vee \max_{x\in a_1\cup a_2} d(x, b_1\cup b_2) \\
&= \max_{y\in b_1} d(a_1\cup a_2, y) \vee \max_{y\in b_2} d(a_1\cup a_2, y) \vee \max_{x\in a_1} d(x, b_1\cup b_2) \vee \max_{x\in a_2} d(x, b_1\cup b_2) \\
&\le \max_{y\in b_1} d(a_1, y) \vee \max_{y\in b_2} d(a_2, y) \vee \max_{x\in a_1} d(x, b_1) \vee \max_{x\in a_2} d(x, b_2) \\
&= \left[\max_{y\in b_1} d(a_1, y) \vee \max_{x\in a_1} d(x, b_1)\right] \vee \left[\max_{y\in b_2} d(a_2, y) \vee \max_{x\in a_2} d(x, b_2)\right] \\
&= d_H(a_1, b_1) \vee d_H(a_2, b_2).
\end{align*}
More generally,
$$d_H\!\left(\bigcup_{1\le i\le k} a_i,\ \bigcup_{1\le j\le k} b_j\right) \le \max_{1\le i\le k} d_H(a_i, b_i).$$

Now let $f_i: \mathbb{R}^n\to\mathbb{R}^n$, for $1\le i\le k$, be affine contractions with respective
Lipschitz constants $c_i < 1$. Let $f: H(\mathbb{R}^n)\to H(\mathbb{R}^n)$ be given by $f(a) := \bigcup_i f_i(a)$
where $f_i(a) := \{f_i(z) : z\in a\}$. Using Lemma 102, the following shows f is a
contraction mapping on $H(\mathbb{R}^n)$ with contraction factor $c := \max_i c_i < 1$:
\begin{align*}
d_H(f(a), f(b)) &= d_H\!\left(\bigcup_{i=1}^{k} f_i(a),\ \bigcup_{i=1}^{k} f_i(b)\right) \le \max_{1\le i\le k} d_H(f_i(a), f_i(b)) \\
&\le \max_{1\le i\le k}\left[\max_{y\in f_i(b)} d(f_i(a), y) \vee \max_{x\in f_i(a)} d(x, f_i(b))\right] \\
&\le \max_{1\le i\le k}\left[\max_{y\in b}(c_i\, d(a, y)) \vee \max_{x\in a}(c_i\, d(x, b))\right] \le \max_i c_i\, d_H(a,b) = c\,d_H(a,b).
\end{align*}

By the Contraction Mapping Theorem, Theorem 119, the iterates of f starting
with any point in H(Rn ) converge to a unique fixed point of f in H(Rn ). For
many choices of the IFS, f , this fixed point is an interesting fractal. For example
if we choose n = 2 and
$$f_1(x) := \tfrac{1}{2}x, \qquad f_2(x) := \tfrac{1}{2}x + \left(\tfrac{1}{2}, 0\right), \qquad f_3(x) := \tfrac{1}{2}x + \left(\tfrac{1}{4}, \tfrac{1}{2}\right)$$
then the fixed point of f is the famous Sierpinski triangle (Figure 7.1).

Figure 7.1: Fixed point of a discrete flow on $H(\mathbb{R}^2)$
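The contraction estimate can be watched numerically. The following minimal sketch (ours, not from this book; it uses a brute-force Hausdorff distance on finite point sets and the three Sierpinski maps just defined) checks that successive iterates draw together at rate c = 1/2:

```python
import math

def f(a):
    # f(a) = f1(a) U f2(a) U f3(a) for the Sierpinski IFS above
    out = set()
    for (x, y) in a:
        out.add((x / 2, y / 2))
        out.add((x / 2 + 0.5, y / 2))
        out.add((x / 2 + 0.25, y / 2 + 0.5))
    return out

def dH(a, b):
    # brute-force Hausdorff distance between finite subsets of R^2
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return max(max(min(d(p, q) for q in b) for p in a),
               max(min(d(p, q) for q in a) for p in b))

a = {(0.3, 0.7)}
gaps = []
for _ in range(6):
    b = f(a)
    gaps.append(dH(a, b))   # distance between successive iterates
    a = b

# each gap shrinks by at least the contraction factor c = 1/2
print(all(gaps[i + 1] <= 0.5 * gaps[i] + 1e-12 for i in range(len(gaps) - 1)))
```

By the contraction mapping argument this geometric shrinking is guaranteed, and the iterates converge to a finite-set approximation of the attractor.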

7.2 Continuous IFS
As an arc field application, we now create a continuous version of the above
process. For x, y ∈ Rn , let λxy : R → Rn be the line from x to y defined by

λxy (t) := (1 − t) x + ty.

Define $\lambda_{ab}: \mathbb{R}\to H(\mathbb{R}^n)$ by
$$\lambda_{ab}(t) := \bigcup_{x\in a,\,y\in b}\{\lambda_{xy}(t)\} = \{\lambda_{xy}(t) \mid x\in a,\ y\in b\}$$
so $\lambda_{ab}(0) = a$ and $\lambda_{ab}(1) = b$. With regard to the black and white film clip
analogy from §0.4, $\lambda_{ab}$ would be a morph from photo a to photo b.
Let $\operatorname{diam}(a) := \max_{x,y\in a}\|x - y\|$.

Proposition 103 We have
$$d_H(\lambda_{ab}(t), \lambda_{ab}(t')) \le \operatorname{diam}(a\cup b)\,|t - t'|.$$
Consequently, for any $a, b\in H(\mathbb{R}^n)$, $\lambda_{ab}: \mathbb{R}\to H(\mathbb{R}^n)$ is Lipschitz continuous
(i.e., has bounded speed), and the length $L(\lambda_{ab}|_{[0,1]})$ of $\lambda_{ab}$ restricted to the
domain [0,1] satisfies $d_H(a,b) \le L(\lambda_{ab}|_{[0,1]}) \le \operatorname{diam}(a\cup b)$.

Proof. Use Lemma 102 on the definition of $\lambda_{ab}$:
$$d_H(\lambda_{ab}(t), \lambda_{ab}(t')) \le \max_{x\in a,\,y\in b}\|(1-t)x + ty - [(1-t')x + t'y]\| = |t - t'|\max_{x\in a,\,y\in b}\|x - y\|.$$

Some more crucial properties of the curves $\lambda_{ab}$ are ultimately consequences
of the estimates
$$d(\lambda_{xy}(t), \lambda_{x'y'}(t')) \le \min\left\{\begin{array}{l} d(x,y)\,|t'-t| + |1-t'|\,d(x,x') + |t'|\,d(y,y') \\ d(x',y')\,|t'-t| + |1-t|\,d(x,x') + |t|\,d(y,y') \end{array}\right. \qquad (7.1)$$
and
$$d\left(\lambda_{xy}(s+h), \lambda_{\lambda_{xy}(s)\lambda_{yz}(s)}(h)\right) \le |sh|\,(d(x,y) + d(y,z)) \qquad (7.2)$$
whose proofs are straightforward. Using (7.1) it is then not hard to prove

Proposition 104 For $a, b, a', b'\in H(\mathbb{R}^n)$ and $t, t'\in\mathbb{R}$, we have
$$d_H(\lambda_{ab}(t), \lambda_{a'b'}(t')) \le (\operatorname{diam}(a\cup b)\wedge\operatorname{diam}(a'\cup b'))\,|t'-t| + (1 - (|t|\wedge|t'|))\,d_H(a,a') + (|t|\vee|t'|)\,d_H(b,b'), \qquad (7.3)$$
where $\alpha\wedge\beta := \min\{\alpha,\beta\}$.
Corollary 105 For $a, a', b, b'\in H(\mathbb{R}^n)$ and $t\in\mathbb{R}$, we have
$$d_H(\lambda_{ab}(t), \lambda_{a'b'}(t)) \le |1-t|\,d_H(a,a') + |t|\,d_H(b,b').$$
The following proposition will be used in verifying Condition E2 for the arc
field defined below in (7.4).
Proposition 106
$$d_H\left(\lambda_{ab}(s+h), \lambda_{\lambda_{ab}(s)\lambda_{bc}(s)}(h)\right) \le |sh|\,[\operatorname{diam}(a\cup b) + \operatorname{diam}(b\cup c)]$$
for any $a, b, c\in H(\mathbb{R}^n)$.

Proof. We have
$$d_H\left(\lambda_{ab}(s+h), \lambda_{\lambda_{ab}(s)\lambda_{bc}(s)}(h)\right) = \max_{z\in\lambda_{\lambda_{ab}(s)\lambda_{bc}(s)}(h)} d(\lambda_{ab}(s+h), z) \vee \max_{z\in\lambda_{ab}(s+h)} d\left(z, \lambda_{\lambda_{ab}(s)\lambda_{bc}(s)}(h)\right).$$
Using (7.2), we have
\begin{align*}
\max_{z\in\lambda_{\lambda_{ab}(s)\lambda_{bc}(s)}(h)} d(\lambda_{ab}(s+h), z) &= \max_{x'\in a,\,y'\in b,\,z'\in c}\ \min_{x\in a,\,y\in b} d\left(\lambda_{xy}(s+h), \lambda_{\lambda_{x'y'}(s)\lambda_{y'z'}(s)}(h)\right) \\
&\le \max_{x'\in a,\,y'\in b,\,z'\in c} d\left(\lambda_{x'y'}(s+h), \lambda_{\lambda_{x'y'}(s)\lambda_{y'z'}(s)}(h)\right) \\
&\le |sh|\max_{x'\in a,\,y'\in b,\,z'\in c}\{d(x',y') + d(y',z')\} \le |sh|\,[\operatorname{diam}(a\cup b) + \operatorname{diam}(b\cup c)]
\end{align*}
and
\begin{align*}
\max_{z\in\lambda_{ab}(s+h)} d\left(z, \lambda_{\lambda_{ab}(s)\lambda_{bc}(s)}(h)\right) &= \max_{x'\in a,\,y'\in b}\ \min_{x\in a,\,y\in b,\,z\in c} d\left(\lambda_{x'y'}(s+h), \lambda_{\lambda_{xy}(s)\lambda_{yz}(s)}(h)\right) \\
&\le \max_{x'\in a,\,y'\in b}\ \min_{z\in c} d\left(\lambda_{x'y'}(s+h), \lambda_{\lambda_{x'y'}(s)\lambda_{y'z}(s)}(h)\right) \\
&\le |sh|\max_{x'\in a,\,y'\in b}\ \min_{z\in c}\{d(x',y') + d(y',z)\} \le |sh|\,[\operatorname{diam}(a\cup b) + \operatorname{diam}(b\cup c)].
\end{align*}

We define an arc field $X: H(\mathbb{R}^n)\times[-1,1]\to H(\mathbb{R}^n)$ by
$$X_t(a) := \bigcup_{i=1}^{k}\lambda_{af_i(a)}(t) = \lambda_{af(a)}(t). \qquad (7.4)$$
The continuity of X follows from (7.3) of Proposition 104, while Proposition 103
provides a speed function $\rho: H(\mathbb{R}^n)\to\mathbb{R}_+$, namely $\rho(a) = \operatorname{diam}(a\cup f(a))$.
X has linear speed growth in the sense of Definition 24. Indeed, for b ∈
BdH (a, r), using the contractivity of the fi , we have ρ (b) = diam (b ∪ f (b)) ≤
diam (a ∪ f (a)) + 2r = ρ (a) + 2r and hence ρ (a, r) ≤ ρ (a) + 2r. Even without
contractivity, if we assume the fi are globally Lipschitz we still have linear speed
growth since k is finite, and so Theorem 25 gives a global flow on H (Rn ) once
we verify E1 and E2 below.
With this choice of X, one might expect the points of H(Rn ) to move under
the flow toward the attractor of the IFS, but our aim is to show they flow toward
the convex hull of the attractor. To this end we will employ Theorem 31 so we
restrict X to being a forward arc field. First let’s check Condition E1. By
Lemma 102,
$$d_H(X_t(a), X_t(b)) = d_H\left(\lambda_{af(a)}(t), \lambda_{bf(b)}(t)\right) = d_H\!\left(\bigcup_{i=1}^{k}\lambda_{af_i(a)}(t),\ \bigcup_{i=1}^{k}\lambda_{bf_i(b)}(t)\right) \le \max_{1\le i\le k} d_H\left(\lambda_{af_i(a)}(t), \lambda_{bf_i(b)}(t)\right).$$
Using Corollary 105, for each $i\in\{1,\dots,k\}$, we have
$$d_H\left(\lambda_{af_i(a)}(t), \lambda_{bf_i(b)}(t)\right) \le (1-t)\,d_H(a,b) + t\,d_H(f_i(a), f_i(b)) \le (1-t)\,d_H(a,b) + tc_i\,d_H(a,b) = d_H(a,b)(1 + t(c_i - 1)).$$
Thus, with
$$\Lambda := \max_i c_i - 1 < 0 \qquad (7.5)$$
Condition E1 is satisfied.
We now verify Condition E2. As the $f_j$ send lines to lines in $\mathbb{R}^n$,
$$f\left(\lambda_{af(a)}(s)\right) = \bigcup_{i=1}^{k} f_i\left(\lambda_{af(a)}(s)\right) = \bigcup_{i=1}^{k}\lambda_{f_i(a)f_i(f(a))}(s) \subseteq \lambda_{f(a)f(f(a))}(s).$$
Then using Proposition 106, we have
\begin{align*}
d_H\left(X_a(s+h), X_{X_a(s)}(h)\right) &= d_H\left(\lambda_{af(a)}(s+h), \lambda_{[\lambda_{af(a)}(s)][f(\lambda_{af(a)}(s))]}(h)\right) \\
&\le d_H\left(\lambda_{af(a)}(s+h), \lambda_{[\lambda_{af(a)}(s)][\lambda_{f(a)f(f(a))}(s)]}(h)\right) \\
&\le |hs|\,[\operatorname{diam}(a\cup f(a)) + \operatorname{diam}(f(a)\cup f(f(a)))].
\end{align*}
So
$$\Omega(a) := 2\operatorname{diam}(a\cup f(a)\cup f(f(a)))$$
satisfies Condition E2.

7.3 Fixed points
By Theorem 31, the negativity of Λ at line (7.5) guarantees a unique fixed point
for the forward flow F of the arc field X. We now show this fixed point is
the convex hull of the attractor of the associated IFS. The convex hull of a
set P ⊂ Rn (denoted C (P )) is the (convex) intersection of all convex subsets
of Rn containing P . The convex hull C ({x0 , . . . , xn }) of {x0 , . . . , xn } ⊆ Rn is
an n-simplex, where we allow the volume of C ({x0 , . . . , xn }) to be 0. It is
standard that
$$C(\{x_0,\dots,x_n\}) = \left\{\sum_{i=0}^{n}\alpha_i x_i \,\middle|\, \alpha_i\in[0,1],\ \sum_{i=0}^{n}\alpha_i = 1\right\}. \qquad (7.6)$$
For $f: \mathbb{R}^n\to\mathbb{R}^n$ affine and $\alpha_i$ as in (7.6), $f\left(\sum_{i=0}^{n}\alpha_i x_i\right) = \sum_{i=0}^{n}\alpha_i f(x_i)$. Thus,
$$f(C(\{x_0,\dots,x_n\})) = C(f(\{x_0,\dots,x_n\})).$$

In order to demonstrate the unique fixed point of the flow F generated
by the arc field $X_t(a) = \lambda_{af(a)}(t)$ is the convex hull C(A) of the attractor A
of the IFS (i.e., A is the fixed point of $f(\cdot) := \bigcup_i f_i(\cdot)$), it suffices to prove
$X_t(C(A)) = \lambda_{C(A)f(C(A))}(t)$ is actually constant for $0\le t\le\frac{1}{n+1}$. Then the
constant curve $\alpha: \mathbb{R}_+\to H(\mathbb{R}^n)$ defined by $\alpha(t) := C(A)$ is a solution curve of
X, and C(A) is the fixed point of F. We use the following theorem (proven,
for instance, in [32, p. 10]) which is another characterization of convex hulls in
$\mathbb{R}^n$.

Theorem 107 (Carathéodory’s theorem) The convex hull of a set P ⊂ Rn
is the union of all n-simplices with vertices in P .

Thus, for example, in the plane the convex hull of a set P consists of the union
of all filled triangles with vertices in P . In this case, every point x ∈ C (P ) is
inside a triangle with vertices in P .

Proposition 108 For the attractor A of an IFS f (·) = ∪i fi (·), we have f (C (A)) ⊆
C (A) .

Proof. Let p ∈ f (C (A)). Then by Theorem 107, for some i ∈ {1, . . . , k} ,
p = fi (q) for some q in some n-simplex C ({x0 , . . . , xn }) with vertices x0 , . . . , xn
in A. Since A = f (A) = ∪i fi (A) , we have fi ({x0 , . . . , xn }) ⊆ fi (A) ⊆ A and
hence C (fi ({x0 , . . . , xn })) ⊆ C (A). Thus,

$$p = f_i(q)\in f_i(C(\{x_0,\dots,x_n\})) = C(f_i(\{x_0,\dots,x_n\}))\subseteq C(A).$$

The inclusion in Proposition 108 is often strict.

Proposition 109 For $\{x_0,\dots,x_n\}\subseteq\mathbb{R}^n$, we have
$$\lambda_{C(\{x_0,\dots,x_n\})\{x_0,\dots,x_n\}}(t) = C(\{x_0,\dots,x_n\}) \quad\text{for } 0\le t\le\tfrac{1}{n+1}.$$
Proof. We have
$$\lambda_{C(\{x_0,\dots,x_n\})\{x_0,\dots,x_n\}}(t) = \{\lambda_{px_i}(t) \mid p\in C(\{x_0,\dots,x_n\}),\ i\in\{0,\dots,n\}\}.$$
Thus, $\lambda_{C(\{x_0,\dots,x_n\})\{x_0,\dots,x_n\}}(t)$ is the union of $n+1$ $n$-simplices, each of which
is shrinking into one of the vertices as $t\to 1$, and we need to show this union
is all of $C(\{x_0,\dots,x_n\})$ for $0\le t\le\frac{1}{n+1}$; i.e., for any $t\in\left[0,\frac{1}{n+1}\right]$, every
$q\in C(\{x_0,\dots,x_n\})$ can be written as
$$\lambda_{px_i}(t) = (1-t)p + tx_i$$
for some $p\in C(\{x_0,\dots,x_n\})$ and some $i\in\{0,\dots,n\}$. Since $q = \sum_{i=0}^{n}\alpha_i x_i$
where $\sum_{i=0}^{n}\alpha_i = 1$, one of the $\alpha_i\ge 0$, say $\alpha_0$, is in $\left[\frac{1}{n+1}, 1\right]$. Then
$$q = \sum_{i=1}^{n}\alpha_i x_i + \alpha_0 x_0 = (1-t)\left(\frac{\alpha_0 - t}{1-t}x_0 + \sum_{i=1}^{n}\frac{\alpha_i}{1-t}x_i\right) + tx_0.$$
Observe $\alpha_0 - t\ge 0$ for $t\in\left[0,\frac{1}{n+1}\right]$, and
$$\frac{\alpha_0 - t}{1-t} + \sum_{i=1}^{n}\frac{\alpha_i}{1-t} = \frac{\alpha_0 - t + \sum_{i=1}^{n}\alpha_i}{1-t} = \frac{1-t}{1-t} = 1,$$
so
$$p := \frac{\alpha_0 - t}{1-t}x_0 + \sum_{i=1}^{n}\frac{\alpha_i}{1-t}x_i \in C(\{x_0,\dots,x_n\}),$$
and $q = \lambda_{px_0}(t)$ as required.

It remains to show
$$\lambda_{C(A)f(C(A))}(t) = C(A) \quad\text{for } 0\le t\le\tfrac{1}{n+1}.$$
Since $f(C(A))\subseteq C(A)$ by Proposition 108, we have
$$\lambda_{C(A)f(C(A))}(t)\subseteq\lambda_{C(A)C(A)}(t) = C(A).$$
For the reverse inclusion, note by Proposition 109, for $0\le t\le\frac{1}{n+1}$,
$$\lambda_{C(A)f(C(A))}(t)\supseteq\bigcup_{\{x_0,\dots,x_n\}\subseteq A}\lambda_{C(\{x_0,\dots,x_n\})\{x_0,\dots,x_n\}}(t) = \bigcup_{\{x_0,\dots,x_n\}\subseteq A} C(\{x_0,\dots,x_n\}) = C(A).$$

In summary, we have proven

Theorem 110 Let $f(\cdot) = \bigcup_{i=1}^{k}f_i(\cdot)$ be the IFS determined by contractive affine
maps $f_1,\dots,f_k$ of $\mathbb{R}^n$, and let A be the unique fixed point of f. The arc field
X on $H(\mathbb{R}^n)$ defined by $X_t(a) = \lambda_{af(a)}(t)$ generates a contractive forward flow
$F: H(\mathbb{R}^n)\times[0,\infty)\to H(\mathbb{R}^n)$, whose (unique) fixed point is the convex hull
C(A) of A.

7.4 Cyclically attracted sets
Consider the following differential equations in Rn where x(t), y(t), and z(t)
represent curves in Rn .
dx/dt = y − x
dy/dt = z − y (7.7)
dz/dt = x − z.
Its solutions are three curves spiraling toward each other; x(t) moves toward
y(t), y(t) moves toward z(t), and z(t) moves toward x(t). Let’s define an arc
field that describes a similar situation on H(Rn ).
On the Cartesian product

H 3 (Rn ) := H (Rn ) × H (Rn ) × H (Rn )

use the complete metric

$$d((a,b,c), (a',b',c')) := d_H(a,a') + d_H(b,b') + d_H(c,c').$$

Define the arc field X on H 3 (Rn ) by

X(a,b,c) := (λab , λbc , λca ) .

In §7.2 properties of $\lambda_{ab}$ were demonstrated that make it easy to check E1 and E2.
The projections of the solution onto each of its three coordinates give curves
a (t), b (t), and c (t) in H (Rn ) beginning respectively at the initial points a, b,
and c and attracted to each other cyclically. As a special case if a, b, and c
are individual points in Rn ⊂ H (Rn ), the projections of the solutions to the
arc field X with those initial conditions are identical to the solutions of the
differential equation (7.7).
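For the special case of single points, (7.7) is easy to check numerically. The sketch below (our own Euler integration; the step size and initial points are arbitrary choices) shows the three curves spiraling together while their centroid, which is conserved by the system, stays fixed:

```python
import math

# Euler integration of dx/dt = y - x, dy/dt = z - y, dz/dt = x - z in R^2
x, y, z = (1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)
cx, cy = (x[0] + y[0] + z[0]) / 3, (x[1] + y[1] + z[1]) / 3  # centroid, conserved

dt, steps = 0.01, 2000
for _ in range(steps):
    x, y, z = (tuple(a + dt * (b - a) for a, b in zip(x, y)),
               tuple(a + dt * (b - a) for a, b in zip(y, z)),
               tuple(a + dt * (b - a) for a, b in zip(z, x)))

# all three curves spiral in toward the centroid of the initial points
spread = max(math.hypot(p[0] - cx, p[1] - cy) for p in (x, y, z))
print(spread)  # near 0
```

The cyclic coupling matrix has one zero eigenvalue (the conserved centroid) and two with negative real part, which is why the mutual pursuit collapses to a point.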

7.5 Control theory
Let $\{f_i\}_{i\in I}$ be a free control system where M is a manifold and $f_i: M\to TM$
are vector fields. Let $R_{\{f_i\}}$ denote the reachable set, i.e., the set of all points
reachable by solutions of
$$x' = \sum_{i\in I} p_i(t)\,f_i(x)$$
where $p_i: \mathbb{R}\to\mathbb{R}$ are unconstrained control parameters. This suggests a set-valued
approach, so we again use H(M), the set of all nonvoid compact subsets
of M, to find $R_{\{f_i\}}$.
Consider the map $\Phi: H(M)\to H(TM)$ defined by
$$\Phi(a) := \left\{\sum_{i\in I} t_i f_i(a) \,\middle|\, \sum_{i\in I}|t_i|\le 1\right\}$$
and the arc field on H(M) given by
$$X_t(a) := a + t\Phi(a) = \bigcup_{\substack{x\in a \\ y\in\Phi(a)}}\{x + ty\} \quad\text{when M is a linear space,}$$
$$X_t(a) := \bigcup_{\{t_i\}:\,\sum|t_i|\le 1} F_t^{\{t_i\}}(a) \quad\text{generally.}$$
Here $F^{\{t_i\}}$ is the flow of the vector field $\sum t_i f_i$ and $F_t^{\{t_i\}}(a) = \left\{F_t^{\{t_i\}}(x) \mid x\in a\right\}$.
We check E1 and E2. Then the reachable set R{fi } is the limit of the flow,
and Euler curves give a constructible (if computationally impractical) means for
approximating the reachable set. Denoting a ∨ b := max {a, b} we check E1 on

a linear space:

\begin{align*}
d_H(X_t(a), X_t(b)) &= \max_{z\in X_t(b)} d(X_t(a), z) \vee \max_{z\in X_t(a)} d(z, X_t(b)) \\
&= \max_{\substack{x'\in b \\ y'\in\Phi(b)}}\ \min_{\substack{x\in a \\ y\in\Phi(a)}}\|x + ty - (x' + ty')\| \vee \max_{\substack{x\in a \\ y\in\Phi(a)}}\ \min_{\substack{x'\in b \\ y'\in\Phi(b)}}\|x + ty - (x' + ty')\| \\
&\le \max_{\substack{x'\in b \\ y'\in\Phi(b)}}\ \min_{\substack{x\in a \\ y\in\Phi(a)}}\{\|x - x'\| + |t|\,\|y - y'\|\} \vee \max_{\substack{x\in a \\ y\in\Phi(a)}}\ \min_{\substack{x'\in b \\ y'\in\Phi(b)}}\{\|x - x'\| + |t|\,\|y - y'\|\} \\
&\le d_H(a,b) + |t|\,d_H(\Phi(a), \Phi(b)) \le d_H(a,b) + |t|\max_i d_H(f_i(a), f_i(b)) \\
&\le d_H(a,b) + |t|\,d_H(a,b)\max_i K_i
\end{align*}
where the $K_i$ are the Lipschitz constants of the $f_i$,

and E2:
 
$$X_tX_s(a) = X_t\Bigg(\bigcup_{\substack{x\in a \\ y\in\Phi(a)}}\{x + sy\}\Bigg) = \bigcup_{\substack{x\in a,\ y\in\Phi(a) \\ x'\in\Phi(a),\ y'\in\Phi^{(2)}(a)}}\{(x + sy) + t(x' + sy')\}$$
so
\begin{align*}
d_H(X_{s+t}(a), X_tX_s(a)) &= \max_{z\in X_tX_s(a)} d\Bigg(\bigcup_{\substack{x\in a \\ y\in\Phi(a)}}\{x + (s+t)y\},\ z\Bigg) \vee \max_{z\in X_{s+t}(a)} d\Bigg(z,\ \bigcup_{\substack{x\in a,\ y\in\Phi(a) \\ x'\in\Phi(a),\ y'\in\Phi^{(2)}(a)}}\{(x+sy) + t(x'+sy')\}\Bigg) \\
&= |t|\left[\max_{\substack{x\in a,\ x'\in\Phi(a) \\ y\in\Phi(a),\ y'\in\Phi^{(2)}(a)}}\ \min_{\substack{x\in a \\ y\in\Phi(a)}}\|y - (x' + sy')\| \vee \max_{\substack{x\in a \\ y\in\Phi(a)}}\ \min_{\substack{x\in a,\ x'\in\Phi(a) \\ y\in\Phi(a),\ y'\in\Phi^{(2)}(a)}}\|y - (x' + sy')\|\right] \\
&\le |st|\max_{y'\in\Phi^{(2)}(a)}\|y'\| = O(st).
\end{align*}

These calculations also verify E1 and E2 on a manifold via localization.

Examples similar to this one give part of the motivation for introducing
mutational analysis [7] and quasidifferential equations [51].
Chapter 8


Example 111 For computational purposes, we would much prefer to use the
original arc fields X and Y in the definition of the bracket [X, Y ] instead of their
flows F and G (particularly for examples with PDEs). The current example,
however, shows this is not generally feasible. Let us use the bracket
$$\{X,Y\}(x,t) := \begin{cases} Y_{-\sqrt{t}}X_{-\sqrt{t}}Y_{\sqrt{t}}X_{\sqrt{t}}(x) & t\ge 0 \\ X_{-\sqrt{|t|}}Y_{-\sqrt{|t|}}X_{\sqrt{|t|}}Y_{\sqrt{|t|}}(x) & t < 0 \end{cases}$$
instead of
$$[X,Y](x,t) := \begin{cases} G_{-\sqrt{t}}F_{-\sqrt{t}}G_{\sqrt{t}}F_{\sqrt{t}}(x) & t\ge 0 \\ F_{-\sqrt{|t|}}G_{-\sqrt{|t|}}F_{\sqrt{|t|}}G_{\sqrt{|t|}}(x) & t < 0. \end{cases}$$
Let
$$X_t(f) := f + tf_x \quad\text{and}\quad F_t(f)(x) := f(x+t)$$
so that on, e.g., $M := L^1(\mathbb{R})\cap C^1(\mathbb{R})$ the flow of X is F since
$$d(X_t(f), F_t(f)) = \int|f(x+t) - f(x) - tf'(x)|\,dx = |t|\int\left|\frac{f(x+t) - f(x)}{t} - f'(x)\right|dx = o(t)$$
(M is not complete with the $L^1$ norm/metric, but F is still the flow of X).
However, due to the presence of the square root of t it is still conceivable that
$\{X,X\}\nsim[X,X]$. In fact
\begin{align*}
\{X,X\}_{t^2}(f) &= f + tf_x + t(f_x + tf_{xx}) - t[f_x + tf_{xx} + t(f_{xx} + tf_{xxx})] \\
&\qquad - t(f_x + tf_{xx} + t(f_{xx} + tf_{xxx}) - t[f_{xx} + tf_{xxx} + t(f_{xxx} + tf_{xxxx})]) \\
&= f - 2t^2 f_{xx} + t^4 f_{xxxx}
\end{align*}
so
$$d(\{X,X\}_t(f), Y_t(f)) = o(t)$$
where $Y_t(f) := f - 2tf_{xx}$ for $f\in C^2$, and we see $\{X,X\}\nsim\{F,F\} = 0$. Consequently,
if we want any geometric information about flows from the bracket, it
is important not to interchange flows and arc fields in calculations.
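The expansion of $\{X,X\}_{t^2}$ can be verified exactly on polynomials, since $X_t = 1 + t\frac{d}{dx}$ acts linearly on coefficient lists. A small sketch of our own (coefficients listed lowest degree first; the identity is exact up to rounding because the operators commute):

```python
# Verify {X,X}_{t^2}(f) = f - 2 t^2 f_xx + t^4 f_xxxx for X_t(f) = f + t f_x.
def deriv(c):
    # derivative of a polynomial given by coefficients [c0, c1, ...]
    return [k * c[k] for k in range(1, len(c))] or [0.0]

def X(t, c):
    # arc field X_t(f) = f + t f_x on polynomial coefficients
    d = deriv(c)
    return [ci + t * (d[i] if i < len(d) else 0.0) for i, ci in enumerate(c)]

f = [1.0, -3.0, 0.0, 2.0, 5.0, 1.0]       # 1 - 3x + 2x^3 + 5x^4 + x^5
t = 0.37
lhs = X(-t, X(-t, X(t, X(t, f))))          # {X,X}_{t^2}(f)

d2, d4 = deriv(deriv(f)), deriv(deriv(deriv(deriv(f))))
rhs = [fi - 2 * t * t * (d2[i] if i < len(d2) else 0.0)
          + t ** 4 * (d4[i] if i < len(d4) else 0.0)
       for i, fi in enumerate(f)]

print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # essentially zero
```

Algebraically this is just $(1 - t\frac{d}{dx})^2(1 + t\frac{d}{dx})^2 = 1 - 2t^2\frac{d^2}{dx^2} + t^4\frac{d^4}{dx^4}$.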
Example 112 If $X\approx F$ and $Y\approx G$ then for $t > 0$
\begin{align*}
d(\{X,Y\}(x,t), [X,Y](x,t)) &= d\left(Y_{-\sqrt{t}}X_{-\sqrt{t}}Y_{\sqrt{t}}X_{\sqrt{t}}(x),\ G_{-\sqrt{t}}F_{-\sqrt{t}}G_{\sqrt{t}}F_{\sqrt{t}}(x)\right) \\
&\le d\left(Y_{-\sqrt{t}}X_{-\sqrt{t}}Y_{\sqrt{t}}X_{\sqrt{t}}(x),\ Y_{-\sqrt{t}}F_{-\sqrt{t}}G_{\sqrt{t}}F_{\sqrt{t}}(x)\right) + d\left(Y_{-\sqrt{t}}F_{-\sqrt{t}}G_{\sqrt{t}}F_{\sqrt{t}}(x),\ G_{-\sqrt{t}}F_{-\sqrt{t}}G_{\sqrt{t}}F_{\sqrt{t}}(x)\right) \\
&\le d\left(X_{-\sqrt{t}}Y_{\sqrt{t}}X_{\sqrt{t}}(x),\ F_{-\sqrt{t}}G_{\sqrt{t}}F_{\sqrt{t}}(x)\right)\left(1 + \Lambda_Y\sqrt{t}\right) + O(t) \\
&\le \dots \le d\left(X_{\sqrt{t}}(x), F_{\sqrt{t}}(x)\right)\left(1 + \Lambda_Y\sqrt{t}\right)^2\left(1 + \Lambda_X\sqrt{t}\right) + 3\,O(t) \\
&= O(t) \ne o(t).
\end{align*}
Since there are circumstances when all of these inequalities are equalities, this
estimate is tight. Consequently even 2nd-order tangency is not enough to allow
the indiscriminate use of arc fields to directly calculate the bracket.

Example 113 (local-uniformity necessary for Theorem 63)
Our results not only apply to all Lipschitz vector fields, but also to some
discontinuous vector fields. Consider the vector field $f: \mathbb{R}^2\to\mathbb{R}^2$ given by
$$f(x) = f(x_1, x_2) := \begin{cases} (1, 0) & x_1 < x_2 \\ (0, -1) & x_1\ge x_2. \end{cases}$$






Though discontinuous, f still has a unique (continuous) flow F given by
$$F_x(t) := \begin{cases} (x_1 + t,\ x_2) & x_1 < x_2,\ x_1 + t \le x_2 \\ (x_2,\ x_2 - (t - (x_2 - x_1))) & x_1 < x_2,\ x_1 + t \ge x_2 \\ (x_1,\ x_2 - t) & x_1\ge x_2. \end{cases}$$

Another vector field on R2 given by the constant function g (x) := (1, 0) has flow
Gx (t) := x + t (1, 0). Calculate their arc field bracket to find it is tangent to the
constant 0 flow

d ([X, Y ]t (x) , 0t (x)) = d ([X, Y ]t (x) , x) = o (t)

at every x ∈ M = R2 but not locally uniformly o (t) near the line of discontinuity
x1 = x2 . Consequently Theorem 63 on commutativity does not apply, and in
fact commutativity does not hold since, e.g.,

G10 F10 (−1, 0) = G10 (0, −9) = (10, −9)
F10 G10 (−1, 0) = F10 (9, 0) = (9, −10) .

F and G also satisfy

d (Gs Ft (x) , Ft Gs (x)) = O (|st|)

at each point, though not locally uniformly, so they do not close. Their sum
F + G is well defined, though.

Example 114 Let F be the flow derived from the discontinuous vector field f
as in Example 113 except extended to $\mathbb{R}^3$,
$$f(x_1, x_2, x_3) := \begin{cases} (1, 0, 0) & x_1 < x_2 \\ (0, -1, 0) & x_1\ge x_2, \end{cases}$$
and define a new flow $Z_t(x) := (x_1, x_2, x_3 + t)$. Then their bracket is identically 0
(and so locally uniformly tangent to 0), and these flows do commute and foliate
$\mathbb{R}^3$ like pages in a book opened to a right angle, or stacked, bent sheet metal.
So the locus of discontinuity matches up with a space of relative equilibrium
(perpendicular flows).
Appendix A: Metric spaces

“Metric spaces are everywhere.” –Mischa Gromov

A metric space (M, d) is a set M with a function $d: M\times M\to\mathbb{R}$ called
the metric which is positive, definite, symmetric and satisfies the triangle
inequality:
(i) $d(x,y)\ge 0$ (positivity)
(ii) $d(x,y) = 0$ iff $x = y$ (definiteness, or non-degeneracy)
(iii) $d(x,y) = d(y,x)$ (symmetry)
(iv) $d(x,y)\le d(x,z) + d(z,y)$ (triangle inequality)
for all $x, y, z\in M$. Maurice Fréchet introduced metric spaces in [34], 1906,
though the term was coined by Felix Hausdorff, who gave the first extensive
exploration of their properties in [38], 1914.

A.1 Examples
Example 115 For our purposes the most important metric space is $M = \mathbb{R}$,
the real number line, with metric $d(x,y) = |x - y|$. Properties (i)-(iv) are easy
to verify, and $\mathbb{R}$ is complete by definition. Next we might take $M = \mathbb{R}^n$ with
$d(x,y) = \|x - y\|$ where $\|\cdot\|$ denotes the Euclidean norm, $\|z\| := \left(\sum_{i=1}^{n} z_i^2\right)^{1/2}$ for
$z = (z_1,\dots,z_n)\in M$.
More generally any vector space with a norm $\|\cdot\|$ gives a metric space by
using $d(x,y) := \|x - y\|$. A normed vector space which is complete is called a
Banach space. Conversely a metric d on a vector space M which is translation
invariant,
$$d(x,y) = d(x+z, y+z)$$
for all $z\in M$, and homogeneous,
$$d(rx, ry) = |r|\,d(x,y)$$
for all $r\in\mathbb{R}$, gives a norm $\|x\| := d(x,0)$ on M.
Other common examples of metrics on $\mathbb{R}^n$ which come from norms include
the taxicab metric with
$$d_1(x,y) := \sum_{i=1}^{n}|x_i - y_i|$$
(compute B(x,r) to see why it's also called the “diamond metric”) and the
Chebyshev metric
$$d_\infty(x,y) := \max_i |x_i - y_i| =: \|x - y\|_\infty$$
which is also called the supremum metric.

Unit spheres $\|x\|_1 = 1$, $\|x\|_2 = 1$, $\|x\|_\infty = 1$

More generally, the Minkowski distance¹ of order $p\ge 1$ is defined as
$$d_p(x,y) := \left(\sum_{i=1}^{n}|x_i - y_i|^p\right)^{1/p}$$
and it happens that $\lim_{p\to\infty} d_p(x,y) = d_\infty(x,y)$ for any fixed $x, y\in\mathbb{R}^n$.
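The limit $d_p \to d_\infty$ is easy to watch numerically. A two-line check of our own (the sample points are arbitrary):

```python
def minkowski(x, y, p):
    # d_p(x, y) = (sum_i |x_i - y_i|^p)^(1/p)
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = (1.0, -2.0, 0.5), (0.0, 1.0, 2.0)
d_inf = max(abs(a - b) for a, b in zip(x, y))   # Chebyshev metric, here 3.0
for p in (1, 2, 8, 32, 128):
    print(p, minkowski(x, y, p))
# the values decrease toward d_inf as p grows
```

The largest coordinate gap dominates the sum as p grows, which is exactly why the limit is the Chebyshev metric.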

If we take $n\to\infty$ in $\mathbb{R}^n$ we get the set of real sequences
$$\mathbb{R}^{\mathbb{N}} := \{x = (x_i)_{i=1}^{\infty} = (x_1, x_2, \dots) \mid x_i\in\mathbb{R}\text{ for } i\in\mathbb{N}\}.$$
The $l^p$ metrics ($p\ge 1$)
$$d_p(x,y) := \left(\sum_{i=1}^{\infty}|x_i - y_i|^p\right)^{1/p} = \|x - y\|_p$$
on the sets
$$l^p(\mathbb{R}) := \left\{x\in\mathbb{R}^{\mathbb{N}} \mid d_p(x,0) < \infty\right\}$$
also give metric spaces. Here 0 represents the constant sequence $0 = (0, 0, \dots)\in\mathbb{R}^{\mathbb{N}}$.
Here again we have the supremum metric
$$d_\infty(x,y) := \sup_i |x_i - y_i| = \|x - y\|_\infty$$
and again $\lim_{p\to\infty} d_p(x,y) = d_\infty(x,y)$ is valid.

Finally we can further generalize to the $L^p$ spaces on subsets of the space of
real functions
$$\mathbb{R}^{\mathbb{R}} := \{f: \mathbb{R}\to\mathbb{R}\}.$$
1 not to be confused with the pseudo-Riemannian Minkowski metric fundamental to special

relativity theory.

The $L^p$ metrics ($p\ge 1$) are given by
$$d_p(f,g) := \left(\int|f - g|^p\,d\mu\right)^{1/p} = \|f - g\|_p.$$
Here µ is the Lebesgue measure, but there is a new difficulty: if $d_p(f,g) = 0$ we
may still have $f\ne g$ on a set of measure 0, and so property (ii) is invalid. The
standard trick is to introduce the equivalence relation $f\sim g$ when they agree on
a set of full measure. Then $d_p$ is a genuine metric on the quotient space
$$L^p(\mathbb{R}) := \left\{f\in\mathbb{R}^{\mathbb{R}} \mid d_p(f,0) < \infty\right\}/\sim$$
where now 0 is the constant function $0(x) = 0$ for all $x\in\mathbb{R}$. Again we have the
supremum metric $d_\infty(f,g) := \operatorname{ess\,sup}|f - g|$ where now ess sup refers to the
essential supremum
$$\operatorname{ess\,sup}(f) := \inf\{r\in\mathbb{R} \mid f(x) < r\text{ for almost all } x\} = \|f\|_\infty$$
on the set $L^\infty(\mathbb{R})$. Again $\lim_{p\to\infty} d_p(f,g) = d_\infty(f,g)$ holds.

L2 (R) is a particularly important space as it is a Hilbert space, i.e., a
complete inner product space. The inner product ⟨·, ·⟩ is given by

⟨f, g⟩ := ∫ f (x) g (x) dµ (x)

and we have ⟨f, f⟩ = ‖f‖2². Constructing the norm from the inner product
in this way shows all Hilbert spaces are Banach spaces. Hilbert spaces are the
starting point for the subject of functional analysis [26].
Another useful addition is to change the measure in Lp. Define the Lp_w metric

dp,w (f, g) := (∫ |f (x) − g (x)|^p w (x) dx)^(1/p) =: ‖f − g‖p,w.

The set of functions with bounded p, w norm is denoted Lp_w (R). w is the weight
function, which is assumed to satisfy w (x) > 0 and ∫ w (x) dx = 1. In this
book L2_G is particularly useful where G denotes the Gaussian G (x) := e^(−x²)/√π.

In the spaces of the two previous examples, R can be replaced by the set of
complex numbers C, and the claims remain valid.

Example 116 All of the above examples derive from normed vector spaces, but
metric spaces are much more general. E.g., the space

Cb∞ (R) := { f : R → R | sup_{x∈R} |d^n f (x) /dx^n| < ∞ ∀n ∈ N }

is a natural set to investigate for physical situations. Physicists usually assume
their objects of concern are smooth and bounded. So it is extremely unsettling
that Cb∞ (R) does not have a complete norm. However, it does have a few
complete metrics, including

d (f, g) := Σ∞_{n=0} 2^(−n) ‖f − g‖[n] / (1 + ‖f − g‖[n])

where
‖f‖[n] := sup_{x∈R} |d^n f (x) /dx^n|.
Here is another general situation where a vector space without a norm can
still be given a metric: Begin with an arbitrary set S and any complete metric
space (M, d). Denote by M̃ the set of all bounded functions from S to M, where
f : S → M is bounded if

f (S) := {f (x) | x ∈ S} ⊂ B (x0 , r) ⊂ M

for some x0 ∈ M and 0 < r < ∞. Then M̃ is complete under the supremum
metric d∞ (f, g) := sup_{x∈S} d (f (x), g (x)).

Example 117 Euclidean distance on R2 gives a metric on the 1-sphere

S 1 := {x ∈ R2 | ‖x‖ = 1}

by restriction. This is the extrinsic metric. The intrinsic metric dI (x, y) is
defined to be the length of the shortest path in S 1 ⊂ R2 connecting x and y.
This example is immediately generalized to the n-sphere S n ⊂ Rn+1 . A choice
of metric on Rn+1 other than the Euclidean distance will give new extrinsic and
intrinsic metrics on S n .
On the next simplest manifold, the torus T 2 , there are three natural choices
of metric. Viewing T 2 as embedded in R3 we have the extrinsic metric and the
intrinsic metric defined in the same way as the metrics on S 1 . The third metric
is the flat metric. Remember

T 2 := S 1 × S 1 = {x = (x1 mod 2π, x2 mod 2π) = (x1 , x2 ) mod 2π|x1 , x2 ∈ R}

then the flat metric is

dF (x, y) := min_{m,n,j,k∈Z} ‖(x1 + 2mπ, x2 + 2nπ) − (y1 + 2jπ, y2 + 2kπ)‖.

We call this the flat metric because the space is equivalent to a flat piece of
paper for which opposite edges are identified (via the modulo 2π operation). This
metric is geometrically inequivalent to the others since geodesics are different as
is the curvature.

Example 118 The metric space H (Rn ) is the set of all nonempty compact
subsets of Rn. Using the simplifying notation d (x, a) := inf_{y∈a} {d (x, y)} =: d (a, x)
for x ∈ Rn and a ⊂ Rn the Hausdorff metric on H (Rn ) is given by

dH (a, b) := max { sup_{x∈a} {d (x, b)}, sup_{y∈b} {d (y, a)} }. (1)


An equivalent definition for dH may aid intuition: define

B (a, r) := ∪_{x∈a} B (x, r)

for a ⊂ Rn then

dH (a, b) = inf { r | b ⊂ B (a, r) ∧ a ⊂ B (b, r)}.
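For finite subsets the suprema and infima in (1) become maxima and minima, so the Hausdorff metric can be computed directly; an illustrative Python sketch (not part of the original development):

```python
def hausdorff(a, b, d):
    """Hausdorff distance (1) between nonempty finite subsets a, b."""
    def d_point_set(x, s):
        # d(x, s) := inf over y in s of d(x, y)
        return min(d(x, y) for y in s)
    return max(max(d_point_set(x, b) for x in a),
               max(d_point_set(y, a) for y in b))

d = lambda x, y: abs(x - y)          # the usual metric on R
a, b = {0.0, 1.0}, {0.0, 1.0, 5.0}
print(hausdorff(a, b, d))            # the point 5 is at distance 4 from a
```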

H (Rn ) has several useful topological properties in common with Rn . It is
separable, complete and even locally compact (separability is obvious by consid-
ering finite subsets of Rn ; for completeness, see [8]; for local compactness, see
[33, p. 183]).
For any metric space (M, d) the space of nonvoid compact subsets H (M )
may be metrized with dH defined without change as (1). Again, if M is complete
(or separable, or locally compact) then (H (M ) , dH ) is also.
An interesting generalization is to consider the space MGH of all nonvoid
compact metric spaces. The Gromov—Hausdorff distance is a complete met-
ric on MGH defined by

dGH (M1 , M2 ) := inf {dH (f1 (M1 ) , f2 (M2 ))}

taken over all metric spaces M and all isometric embeddings fi : Mi → M for
i = 1, 2. This space is complete and separable.
With Riemannian geometry we may prove Gromov’s compactness theorem,
which states that for fixed dimension the set of Riemannian manifolds with Ricci
curvature ≥ c and diameter ≤ D is pre-compact in the Gromov-Hausdorff metric, [37].
In 2003 Perelman [53] used continuous dynamics on the metric space MGH ,
i.e., Ricci flow, to validate Thurston’s geometrization conjecture. Several essen-
tial ideas come from metric geometry, including Gromov’s compactness theorem,
which is no surprise since Perelman’s earlier accomplishments were in Aleksandrov
metric geometry and his thesis advisor was Burago [14].

.2 Properties
Here we tersely list results from elementary point-set topology relevant to metric
spaces. The choice of results covered includes facts used implicitly throughout
this text and also facts appropriate for furthering the program of generalizing
differential geometry and analysis theorems initiated in this book. Most of these
ideas may be generalized to topological spaces; on metric spaces the presentation

is more natural and simplified. Proofs of the facts in this section are commonly
available in elementary topology texts; [49] is recommended.

A sequence (xn ) in a metric space M is a map x : N → M. The image of
n ∈ N is typically denoted xn ∈ M instead of x (n). The point x∗ ∈ M is the
limit of (xn ) if for any ε > 0 there exists N ∈ N such that d (xn , x∗ ) < ε for all
n > N; this is denoted x∗ = lim xn or xn → x∗. A sequence with a limit is
convergent, and a sequence with no limit is divergent.
A subset S ⊂ M has closure S̄ in M defined by
S̄ := { x ∈ M | ∃ (xn ) ⊂ S with x = lim xn }.

S is a closed subset of M if S = S̄; and S is an open subset if its complement
M \ S is closed. S is dense in M if S̄ = M. If S is open and dense in M then the
property of belonging to S is generic.

.2.1 Regularity
A map f : M → N between metric spaces is continuous at x ∈ M if for
any ε > 0 there exists δ > 0 such that dN (f (x), f (y)) < ε for all y ∈ M
such that dM (x, y) < δ. The map f is continuous if f is continuous at every
x ∈ M. If for every convergent sequence xn → x∗ we have f (xn ) → f (x∗ )
then f is sequentially continuous. Continuity and sequential continuity are
equivalent; also f is continuous iff for any open set U ⊂ N the set f −1 (U) is
open in M. A map f is uniformly continuous if for any ε > 0 there exists
δ > 0 such that dN (f (x), f (y)) < ε for all x, y ∈ M such that dM (x, y) < δ.
A neighborhood of a point x ∈ M is an open set containing x. A map f is
locally uniformly continuous if for each x ∈ M there exists a neighborhood
on which f is uniformly continuous.
A sequence of functions fn : M → N converges uniformly to a function
f : M → N if for any ε > 0 there exists N such that

dN (fn (x), f (x)) < ε

for all n > N and all x ∈ M (in other words d∞ (fn , f) < ε for all n > N which
is why d∞ is occasionally referred to as the uniform metric). The Uniform
Limit Theorem states that if the fn are continuous and converge uniformly to
f, then f is continuous.
A map f : (M, dM ) → (N, dN ) between metric spaces is Lipschitz continu-
ous if there exists K ≥ 0 such that

dN (f (x1 ) , f (x2 )) ≤ KdM (x1 , x2 ) (2)

for all x1 , x2 ∈ M. The number K is a Lipschitz constant for f. The infimum
of all such Lipschitz constants is denoted Kf. The map f is locally Lipschitz
continuous if for each x ∈ M there exists a neighborhood on which f is Lip-
schitz. f is a contraction if it has a Lipschitz constant with Kf < 1. In this
case Kf is called its contraction factor.
Lipschitz continuity is a natural regularity restriction on a metric space; it is
closely related to smooth maps by Rademacher’s Theorem: a locally Lipschitz
map f : Rn → Rm is differentiable at almost every point x ∈ Rn . Metric space
versions of Rademacher’s Theorem are also valid, [3]. Smoothness on Rn is gen-
erally associated in some way with tangency to a linear map, or in the simplest
case, a smooth curve is tangent to a line, a special choice of curve in Rn . Since
there is no automatic “special curve” to define smoothness in a general met-
ric space, we sometimes rely on Lipschitz continuity instead. Fortunately, many
important results on Rn hold under the more general Lipschitz regularity as well
as smoothness. E.g., Lipschitz vector fields can be substituted for smooth vec-
tor fields in the Fundamental Theorem of ODE’s (Theorem 128), the Flow-box
Theorem [19], the definition of the Lie bracket [55] (or p. 32 above), Frobe-
nius’ Theorem (p. 51 above), the Inverse Function Theorem [7], etc. Even the
Fundamental Theorem of Calculus works for Lipschitz functions f : [a, b] → R:

∫[a,b] f ′ (x) dµ (x) = f (b) − f (a)

with the integral taken in the Lebesgue sense. (Use Rademacher’s Theorem
to define f ′ almost everywhere, then note this holds more generally for any
absolutely continuous f [56]). For those interested in studying non-smooth
analysis an excellent starting point is the subject of geometric measure theory.
It is convenient to denote associated spaces of maps as follows:

C (M, N) : = {f : M → N | f is continuous}
Lip (M, N) : = {f : M → N | f is Lipschitz}
LipK (M, N) : = {f : M → N | f is Lipschitz with constant ≤ K} .

A homeomorphism is a continuous map with continuous inverse. A lipeo-
morphism is a Lipschitz map with Lipschitz inverse. An isometry is a map
f : M → N such that dN (f (x) , f (y)) = dM (x, y) for all x, y ∈ M . An isome-
try is a lipeomorphism onto its image. M and N are isometric if there exists
an isometry from M onto N.

A Cauchy sequence is a sequence (xn ) with the property that for any 0 > 0
there exists N ∈ N such that d (xn , xm ) < 0 for all m, n > N . A metric space
M is complete if every Cauchy sequence converges. M is locally complete if
every point x ∈ M has a complete neighborhood. A closed subset of a complete
space is complete. A locally complete metric space is isometrically isomorphic
with an open subset of a complete metric space, and conversely every open
subset of a complete metric space is locally complete. Every space in Examples
115-118 is complete.

A sequence (yn ) is a subsequence of (xn ) if there exists an increasing
sequence (nm ) in N such that xnm = ym. A metric space M is complete iff
every Cauchy sequence has a convergent subsequence.
Every metric space M has a metric completion M′, that is a complete
metric space for which M may be isometrically embedded as a dense subset.
The space M′ may be constructed as the set of all equivalence classes of Cauchy
sequences in M where two sequences x = {xn } and y = {yn } ⊂ M are equivalent
if d (xn , yn ) → 0 as n → ∞. Clearly M′ is unique up to
isometry. A second approach to constructing M′ is to define φa (x) := d (x, a) −
d (x, x0 ) and Φ : M → B (M, R) by Φ (a) := φa , where B (M, R) is the space of
bounded real functions on M with the supremum metric; then the closure
M′ := cl Φ (M) is complete. A third approach is to use Kuratowski’s embedding
theorem: every metric space can be embedded in a Banach space. Whence the
completion is the closure.

Theorem 119 (Contraction Mapping Theorem) A contraction f : M →
M on a complete metric space M has a unique fixed point x∗. For any x ∈ M the
iterates f (n) (x) converge to x∗ exponentially, i.e., if K < 1 is the contraction
factor of f, then
d (f (n) (x), x∗ ) ≤ K^n d (x, x∗ ).
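The iteration in the theorem is directly computable; an illustrative Python sketch (the stopping rule is ours):

```python
import math

def fixed_point(f, x, tol=1e-12, max_iter=1000):
    """Iterate a contraction f from x until successive iterates settle."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:
            return fx
        x = fx
    return x

# cos is a contraction on [0, 1] since |cos'(x)| = |sin x| <= sin 1 < 1 there
x_star = fixed_point(math.cos, 0.5)
print(x_star)   # ~0.739085, the unique fixed point of cos
```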

.2.2 Extensions
Theorem 120 (Tietze extension) If S is a closed subset of a metric space M, and
if f : S → R is continuous, then there exists an extension f : M → R which is
continuous.

Theorem 121 (McShane’s Lipschitz extension) If S is a subset of a met-
ric space M and f : S → R is K-Lipschitz, then f : M → R defined by

f (x) := sup { f (y) − K · d (x, y)| y ∈ S}

equals f on S and is K-Lipschitz.

The proof of this is easy to check since f is given explicitly [47].
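The explicit formula makes the extension computable when S is finite; an illustrative Python sketch:

```python
def mcshane_extend(f_vals, S, K, d):
    """K-Lipschitz extension of f (given on the finite set S) to the whole space."""
    # f(x) := sup over y in S of { f(y) - K d(x, y) }
    return lambda x: max(f_vals[s] - K * d(x, s) for s in S)

d = lambda x, y: abs(x - y)
S = [0.0, 1.0, 3.0]
f_vals = {0.0: 0.0, 1.0: 2.0, 3.0: 1.0}   # a 2-Lipschitz function on S
F = mcshane_extend(f_vals, S, 2.0, d)
print([F(s) for s in S])   # agrees with f on S
print(F(2.0))              # a value of the extension off S
```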
The support of a function φ : M → R is defined to be supp (φ) :=
cl {x ∈ M | φ (x) ≠ 0}, the closure of the set on which the function does not vanish.

Theorem 122 Let {Ui }i∈I be an arbitrary open covering of a metric space
M, i.e., Ui ⊂ M is open for all i and ∪i Ui = M. Then there exists a partition
of unity dominated by {Ui }i∈I. This means there exist functions φi : M → [0, 1]
for all i ∈ I such that
(i) supp (φi ) ⊂ Ui for all i ∈ I
(ii) {supp (φi )} is locally finite
(iii) Σi φi (x) = 1 for each x ∈ M.

(ii) means each x ∈ M has a neighborhood on which only finitely many φi are
nonzero. This makes the sum in (iii) well-defined.
A partition of unity is used in manifold theory to demonstrate the existence
of constructions whenever the desired object may be constructed on charts. For
example, Riemannian metrics always exist on a manifold because the Euclidean
inner product exists on the chart in Rn; vector fields may be approximated with
C ∞ vector fields since they can be approximated in charts; and the integral
∫M ω of a form on a manifold is defined using charts.
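For a finite open cover, the standard construction φi (x) = d (x, M \ Ui) / Σj d (x, M \ Uj) (the usual explicit device, not the theorem's general proof) can be checked numerically; an illustrative Python sketch:

```python
def dist_to_complement(x, interval):
    """Distance from x to the complement of the open interval (a, b) in R."""
    a, b = interval
    return max(0.0, min(x - a, b - x))

cover = [(-0.5, 0.6), (0.4, 1.5)]      # a finite open cover of [0, 1]

def phi(i, x):
    # normalized distances give a partition of unity on the covered set
    weights = [dist_to_complement(x, U) for U in cover]
    return weights[i] / sum(weights)

for x in (0.0, 0.5, 1.0):
    vals = [phi(i, x) for i in range(len(cover))]
    print(x, vals, sum(vals))           # the values sum to 1
```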

Compactness is a fundamental property in metric spaces which is given short
shrift in this book, since all our results hold in the more general setting of a
complete metric space. One result worth mentioning in keeping with our interest
in generalizing results from manifolds to metric spaces is the fact that a compact
metric space with topological dimension n can be embedded in R2n+1 . Unlike
the case with manifolds this value of 2n + 1 is sharp for metric spaces.

We are focused on metric spaces and not topological spaces, so we do not wish
to immerse ourselves in the dense terminology of topological spaces. Therefore
we print the next two theorems without explanation. We’ve been able to forgive
ourselves this shoddiness because they are not directly used anywhere in the
manuscript, despite the perspective they impart.

Theorem 123 Every metric space is a Hausdorff, first countable, paracompact
and perfectly normal topological space.

Theorem 124 (Nagata-Smirnov-Bing metrization) A topological space is
metrizable if and only if it is regular and has a basis which is countably locally
finite.

.3 Geometric objects
This book largely ignores the “static” objects listed in this dusty appendix
in favor of more dynamic interests. We cannot ignore their fundamental im-
portance completely, though, and recognize that further developments in this
subject will benefit from the rich constructions generalized from Riemannian
and Finsler geometry.
The diameter of a nonvoid subset A of a metric space M is the number

D (A) := sup_{x,y∈A} d (x, y).

.3.1 Triangles
The cosine angle formula for a triangle in the plane with sides of length A, B,
and C is C 2 = A2 + B 2 − 2AB cos θ where θ is the angle between the sides of

length A and B. This may be immediately generalized to metric spaces to give
a notion of angle. For three points x, y, and z ∈ M let
∠̃xyz := arccos ((d (x, y)² + d (y, z)² − d (x, z)²) / (2 d (x, y) d (y, z)))

denote the comparison angle ∠̃xyz, which is used to define the angle between
curves c1 and c2 : [0, 1] → M with common origin c1 (0) = x = c2 (0) as

∠̃ (c1 , c2 ) := lim_{s,t→0+} ∠̃ c1 (s) x c2 (t)

when the limit exists. This gives the inspiration for defining a metric space
inner product

⟨c1 , c2 ⟩M := ‖c1 ‖ ‖c2 ‖ cos ∠̃ (c1 , c2 )

where the norm ‖c‖ denotes the speed of c at t = 0

‖c‖ := lim sup_{t→0} d (c (t), c (0)) / |t|

similar to Definition 10. Similarly curvature (defined with the angle given pre-
viously), convex sets, geodesics, gradients, etc., can be generalized profitably
to metric spaces, see [37], [14] and [41]. Usually the metric space needs to be
restricted (to length spaces or even locally Euclidean spaces) to get nontrivial
results from these definitions. A different approach is to use the interpreta-
tion of a connection on a manifold M as a distribution on T M, which may be
generalized to metric spaces using the ideas of Chapter 3.

.3.2 Metric coordinates
Let C ⊂ M be a set of points and define for each x ∈ M and c ∈ C the number
xc := d (x, c) called the c-th metric coordinate of x. Assuming for each pair
of elements x, y ∈ M with x ≠ y we have (xc )c∈C ≠ (yc )c∈C ∈ RC then C
coordinatizes M.

Example 125 Consider the open half-plane H 2 in the Euclidean plane E2 with
the Euclidean metric d. Pick any two distinct points a and b on the boundary.
We can locate any point x in H 2 if we know its distances to a and b, say
xa = d (x, a) and xb = d (x, b). Thus {a, b} is a metric coordinatizing set for
H 2.
Equations in metric coordinates obviously give different graphs from those
in Cartesian or polar coordinates. E.g., for any r > d (a, b), the locus of the
equation
xa + xb = r (3)
in metric coordinates is the set

{x ∈ H 2 : d (x, a) + d (x, b) = r}.

The graph of (3) is half of an ellipse with foci at a and b.
E2 , the plane, requires 3 non-colinear points for a metric coordinatizing set.
H 3 (the open half-space in E3 ) is metrically coordinatized with 3 non-colinear points
on its boundary, and E3 needs 4 non-coplanar points. Many geometric objects
are readily described in metric coordinates on E3 :

Sphere (center a, radius r):          xa = r                 (r ≥ 0)
Ellipsoid (foci a, b):                xa + xb = r            (r ≥ d (a, b))
Hyperboloid (foci a, b):              |xa − xb | = r         (0 < r < d (a, b))
Infinite cylinder (axis line ab,      √(s (s − xa ) (s − xb ) (s − d (a, b))) = r,
  radius 2r/d (a, b)):                  where s = (xa + xb + d (a, b))/2
Infinite cone (axis ray ab,           xb ² = d (a, b)² + xa ² − 2 xa d (a, b) cos θ
  vertex a, angle θ):
Plane (⊥ ab):                         xa = xb
Segment ab:                           xa + xb = d (a, b)
Ray ab:                               xa ± xb = d (a, b)
Line ab:                              |xa ± xb | = d (a, b)

The equation for the cylinder comes from Heron’s formula for the area of a
triangle. The equation for the cone is simply the cosine angle formula for a
triangle and represents only one half of a two-sided cone; the other half is given
when θ is replaced with π − θ. More general equations for lines and planes
are available but are not so concise. Choosing the coordinates according to the
problem simplifies the formulae.
Since each of the above formulae use only metric coordinates, they may serve
as definitions for the various geometric objects in general metric spaces.

.3.3 Conversion formulas

Choose a metric coordinatizing subset {a, b, c} of the Euclidean plane E2 so the
rays →ca and →cb are perpendicular with d (a, c) = 1 = d (b, c). Define a Cartesian
coordinate system on the plane with the origin (0, 0) at c, the positive x-axis
along the ray →ca and the positive y-axis along the ray →cb so E2 is given the
structure of R2. The conversion formulae2 are easy to find:
Metric (wa , wb , wc ) = w = (x, y) Cartesian (4)

wc = √(x² + y²)
wb = √(x² + (y − 1)²)
wa = √((x − 1)² + y²).
Solving these same equations for x and y yields the inverse formulae

x = (wc ² − wa ² + 1)/2   and   y = (wc ² − wb ² + 1)/2. (5)
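The conversion formulae (4) and (5) are easy to verify numerically; an illustrative Python sketch:

```python
import math

def to_metric(x, y):
    """Cartesian -> metric coordinates w.r.t. a = (1,0), b = (0,1), c = (0,0)."""
    wa = math.hypot(x - 1.0, y)
    wb = math.hypot(x, y - 1.0)
    wc = math.hypot(x, y)
    return wa, wb, wc

def to_cartesian(wa, wb, wc):
    """Inverse conversion, formula (5)."""
    x = (wc**2 - wa**2 + 1.0) / 2.0
    y = (wc**2 - wb**2 + 1.0) / 2.0
    return x, y

print(to_cartesian(*to_metric(0.3, 0.8)))   # round trip recovers (0.3, 0.8)
```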
More generally, on a Hilbert space we have:

Theorem 126 Let (H, ⟨·, ·⟩) be a real Hilbert space with orthonormal basis B.
The set C := B ∪ {0} ⊂ H is a metric coordinatizing set.

Proof. For u, v ∈ H assume d (u, c) = d (v, c) for all c ∈ C. Then since 0 is
in C we have ⟨u, u⟩ = ⟨v, v⟩. Further

⟨u − c, u − c⟩ = ⟨v − c, v − c⟩
⟨u, u⟩ − 2 ⟨c, u⟩ + ⟨c, c⟩ = ⟨v, v⟩ − 2 ⟨c, v⟩ + ⟨c, c⟩
⟨c, u⟩ = ⟨c, v⟩

for all c ∈ B so u = v.
Using the basis B write an element w ∈ H in orthonormal coordinates as
w = (w̃c ) where w̃c = ⟨w, c⟩ for each c ∈ B. Any point w ∈ H is given in metric
coordinates by w = (wc )c∈B∪{0} where wc := ‖w − c‖ = d (w, c). With this, the
conversion formulae are

w̃c = (w0 ² − wc ² + 1)/2        c ∈ B (6)
wc = (‖w‖² − 2w̃c + 1)^(1/2)     c ∈ B (7)
w0 = ‖w‖

a straightforward generalization of the finite-dimensional formulae, (5) and (4).
(7) results from the easy calculation

wc = ‖w − c‖ = ⟨w − c, w − c⟩^(1/2)
   = (⟨w, w⟩ − ⟨w, c⟩ − ⟨c, w⟩ + ⟨c, c⟩)^(1/2)
   = (‖w‖² − 2w̃c + 1)^(1/2).

Solving this equation for w̃c yields (6).
2 To write (wa , wb , wc ) = w = (x, y) is technically abuse of notation. (wa , wb , wc ) and
(x, y) are actually representations of w, and in the sequel we write wC = (wa , wb , wc ) to make
this distinction explicit.

Example 127 One must be careful in applying these formulae. They do not
necessarily apply on non-Hilbert vector spaces. The finite-dimensional Banach
space R2 with the infinity norm has basis {(1, 0) , (0, 1)} which does not produce
a coordinatizing set in the above manner.
Appendix B: ODEs as
vector fields

The most important source of flows is the subject of ordinary differential equa-
tions (ODEs). Let us demonstrate the elementary fact that for local questions,
practically any n-th order ODE may be rewritten as a vector field by adding
dependent variables.
Denoting higher-order derivatives with square brackets

y [n] := d^n y/dt^n = y ′′···′ (n primes)

consider for some g : Rn+2 → R the ODE

g (y [n], y [n−1], ..., y ′, y, t) = 0. (8)

Using the implicit function theorem we can locally solve (8) for y [n] when
∂g/∂x1 ≠ 0, where x1 denotes the first argument of g. (In case this is not true,
we are in the realm of the subject of singularity theory; see [6], [4].) And so we
have

y [n] = h (y [n−1], ..., y ′, y, t)

for some h : Rn+1 → R. Substituting x1 := y, x2 := y ′, x3 := y ′′, ..., xn := y [n−1]
we get the equivalent 1st-order system

x1 ′ = x2
⋮
xn−1 ′ = xn
xn ′ = h (xn , ..., x2 , x1 , t).

Introducing a final variable xn+1 := t eliminates the right hand side’s depen-
dence on t to get the autonomous system

x1 ′ = x2
⋮
xn−1 ′ = xn
xn ′ = h (xn , ..., x2 , x1 , xn+1 )
xn+1 ′ = 1

which may be rewritten more concisely with vector notation as

x′ = f (x) (9)

where x = (x1 , ..., xn+1 ) and f : Rn+1 → Rn+1.

f is called the vector field associated with the ODE (8). A solution to
the vector field is a map x : I → Rn+1 for some interval I ⊂ R which satisfies
(9). The point x (0) = x0 ∈ Rn+1 is called the initial condition of x. Such a
function x clearly gives a solution y to (8) by retracing our steps, using the first
coordinate of x. This can be immediately generalized, mutatis mutandis, by
assuming y is a vector quantity. Consequently we take solutions to vector fields
as primary, and Theorem 128 below has come to be known as the Fundamental
Theorem of ODEs3 , which uses the next two definitions.
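As an illustration of the reduction, y ′′ = −y with y (0) = 0, y ′ (0) = 1 becomes the system x1 ′ = x2, x2 ′ = −x1, whose Euler polygons approximate y = sin t; a minimal Python sketch:

```python
import math

# y'' = -y rewritten as the first-order system x' = f(x), with x = (x1, x2) = (y, y')
def f(x):
    return (x[1], -x[0])

def euler(f, x0, T, n):
    """Euler polygon approximation of the solution through x0 on [0, T]."""
    h = T / n
    x = x0
    for _ in range(n):
        x = tuple(xi + h * vi for xi, vi in zip(x, f(x)))
    return x

x = euler(f, (0.0, 1.0), math.pi / 2, 100000)
print(x[0])   # close to sin(pi/2) = 1
```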
Our geometric intuition is usually rooted in finite dimensions (typically R2
with the occasional stretch to R3 ). However, a routine experience in the “un-
reasonable effectiveness of mathematics” is how easily proofs inspired by low-
dimensional intuition can be generalized to abstract spaces. Case in point, this
next theorem and its proofs have the same form in dimension 1 or infinity.

Theorem 128 If f : B → B is a Lipschitz vector field on a Banach space
B then there exists a unique solution to the ODE x = f (x) for each initial
condition x0 ∈ B.

Proof. This is given in detail in introductory ODE texts, such as [39].
(Sketch) The solution σx0 is the limit of the sequence {φi } defined by

φ0 (t) := x0
φi+1 (t) := x0 + ∫0^t f (φi (s)) ds.

The limit exists by the Contraction Mapping Theorem, Theorem 119. The detail
that complicates matters is the domain of the solution. By continuity of f we
find r and M > 0 such that ‖f (x)‖ < M on B (x0 , r). Then the domain is at
least (−r/M, r/M) and we use the supremum norm on C ((−r/M, r/M ), B)
3 alternately referred to as the “Cauchy-Lipschitz Theorem”, “Picard-Lindelöf Theorem”,

“Well-posedness Theorem”, “Existence and Uniqueness Theorem”, etc., but most often it’s
used without comment.

to get the metric space used in the Contraction Mapping Theorem. Then
Grönwall’s lemma (a specialization of Theorem 16 to real functions) guaran-
tees uniqueness.
Alternately, Theorem 12’s proof is transferrable, demonstrating the conver-
gence of the sequence of Euler curves.
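For the linear example x′ = x, x (0) = 1, the Picard iterates are exactly the Taylor partial sums of e^t, which is easy to check; an illustrative Python sketch:

```python
import math

# Picard iteration for x' = x, x(0) = 1; each iterate is a polynomial,
# represented by its coefficient list [a0, a1, ...] meaning sum a_k t^k.
def picard_step(coeffs):
    # phi_{i+1}(t) = 1 + integral_0^t phi_i(s) ds
    return [1.0] + [a / (k + 1) for k, a in enumerate(coeffs)]

phi = [1.0]                     # phi_0 = x0 = 1
for _ in range(15):
    phi = picard_step(phi)

t = 1.0
approx = sum(a * t**k for k, a in enumerate(phi))
print(approx, math.e)           # the partial sums converge to e^t
```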

This result may be carried to the slightly more general context of a smooth
Banach manifold, M . A map f : M → T M is a vector field if π ◦ f = idM where
π : T M → M is the natural projection. Remember, a tangent vector (x, x′ ) =
v ∈ T M can be represented as an equivalence class v = [c] of curves c which
are differentiable and tangent. Therefore a vector field f may be represented
as a family of curves on M with cx ∈ [cx ] = f (x) ∈ T M requiring cx (0) =
x. Theorem 128 guarantees unique solutions when the transferred f is locally
Lipschitz on some (and therefore any) chart.
Appendix C: Numerical differentiation

To approximate the derivative of a function f : R → R we may employ the
difference quotient

f [1] (x) = df/dx ≈ (f (x + t) − f (x)) / t

f [2] (x) = d²f/dx² ≈ ((f (x + 2t) − f (x + t))/t − (f (x + t) − f (x))/t) / t
          = (f (x + 2t) − 2f (x + t) + f (x)) / t²

f [m] (x) ≈ (1/t^m) Σ_{j=0}^{m} (−1)^{m−j} (m choose j) f (x + jt).

In this appendix we derive an error estimate and generalize this formula to
include more points x + αj t. In approximating the mth derivative with an n + 1
point formula

f [m] (x) = (1/t^m) Σ_{j=0}^{n} cj f (x + αj t) + Error

we wish to calculate the coefficients cj and keep track of the Error. In the
forward difference method, the αj = j, but keeping these values general allows
us to find the coefficients for the central, backward, and other difference formulas
just as easily. The following method for finding the cj was shown to us by Jeffrey
Thornton who rediscovered the approach.
Taylor’s Theorem has

f (x + αj t) = Σ_{k=0}^{n} ((αj t)^k / k!) f [k] (x) + ((αj t)^{n+1} / (n + 1)!) f [n+1] (ξj)

for some ξj between x and x + αj t. From this it follows

Σ_{j=0}^{n} cj f (x + αj t)
  = Σ_{k=0}^{n} t^k f [k] (x) (Σ_{j=0}^{n} (αj^k / k!) cj)
    + (t^{n+1} / (n + 1)!) Σ_{j=0}^{n} cj αj^{n+1} f [n+1] (ξj).

Now pick c = [cj ] as a solution to the linear system

Σ_{j=0}^{n} (αj^k / k!) cj = 0 for k ≠ m,   Σ_{j=0}^{n} (αj^m / m!) cj = 1, (10)

i.e., the (n + 1) × (n + 1) matrix with entries αj^k/k! times c equals the vector
with 1 in the mth entry and 0 elsewhere. This is possible whenever the αj are
distinct, because then the matrix is invertible, as is seen using the Vandermonde
determinant:

det = Π_{0≤j<k≤n} (αk − αj) / Π_{k=0}^{n} k! ≠ 0.
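System (10) can be solved exactly for small n; an illustrative Python sketch using rational arithmetic (the function name is ours):

```python
from fractions import Fraction
from math import factorial

def fd_coefficients(alphas, m):
    """Solve system (10): sum_j c_j alpha_j^k / k! = delta_{km}, k = 0..n."""
    n = len(alphas) - 1
    A = [[Fraction(a) ** k / factorial(k) for a in alphas] for k in range(n + 1)]
    b = [Fraction(int(k == m)) for k in range(n + 1)]
    # Gauss-Jordan elimination with exact arithmetic
    for i in range(n + 1):
        p = next(r for r in range(i, n + 1) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
        for r in range(n + 1):
            if r != i and A[r][i] != 0:
                factor = A[r][i] / A[i][i]
                A[r] = [x - factor * y for x, y in zip(A[r], A[i])]
                b[r] = b[r] - factor * b[i]
    return [b[i] / A[i][i] for i in range(n + 1)]

# central difference for f'': nodes -1, 0, 1 give coefficients 1, -2, 1
print(fd_coefficients([-1, 0, 1], 2))
```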

Then we must have

Σ_{j=0}^{n} cj f (x + αj t) = t^m f [m] (x) + (t^{n+1} / (n + 1)!) Σ_{j=0}^{n} cj αj^{n+1} f [n+1] (ξj).
Therefore

f [m] (x) = (1/t^m) Σ_{j=0}^{n} cj f (x + αj t) + Error

for cj which satisfy (10) where

Error = −(t^{n+1−m} / (n + 1)!) Σ_{j=0}^{n} cj αj^{n+1} f [n+1] (ξj).

This Error formula shows how truncation error may be decreased by increas-
ing n without shrinking t, thus combatting round-off error at the expense of
increased computation of sums.
Example 129 For n = m and αj = j the cj which satisfy (10) are

cj = (−1)^{n−j} (n choose j)

which gives the famous forward difference formula

f [m] (x) ≈ (1/t^m) Σ_{j=0}^{n} (−1)^{n−j} (n choose j) f (x + jt)

and similarly we can derive the backward difference formula

f [m] (x) ≈ (1/t^m) Σ_{j=0}^{n} (−1)^{j} (n choose j) f (x − jt).
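The forward difference formula is easy to test numerically; an illustrative Python sketch (requires Python 3.8+ for math.comb):

```python
from math import comb, exp

def forward_difference(f, x, m, t):
    """Forward difference approximation of the m-th derivative (n = m case)."""
    return sum((-1) ** (m - j) * comb(m, j) * f(x + j * t)
               for j in range(m + 1)) / t ** m

# every derivative of exp at 0 is 1
print(forward_difference(exp, 0.0, 3, 1e-3))
```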

Notice the αj may be chosen as complex values when f is analytic (as is
the case with our Gaussians in §5.1 and particularly with complex exponentials
for the low-frequency trigonometric series in §5.2.2). This gives us another
opportunity to mitigate round-off error, since a greater quantity of regularly-
spaced nodes αj can be packed into an epsilon ball around zero in the complex
plane than on the real line. However, Taylor’s remainder as used above needs
to be adjusted in the complex case to the familiar integral form.
Example 130 Thornton chose the roots of unity in C as nodes αj := e^{2πij/n}
and found cj = (m!/n) e^{2πi(n−m)j/n} so

f [m] (z) ≈ (m! / (n t^m)) Σ_{j=1}^{n} e^{2πi(n−m)j/n} f (z + t e^{2πij/n}). (11)

Controlling the error and taking n → ∞ proves Cauchy’s Integral Formula:

f [m] (z) = (m! / (2πi)) ∫γ f (w) / (w − z)^{m+1} dw

where γ is the circle centered at z of radius t.
Since line (11) is merely the Riemann sum approximation of the complex
integral, a more sophisticated numerical integration technique would benefit a
practical implementation of the results of §5.2.3.
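Formula (11) is likewise easy to test on an entire function; an illustrative Python sketch:

```python
import cmath, math

def derivative_roots_of_unity(f, z, m, t, n):
    """Approximate f^[m](z) via the n-th roots of unity, as in (11)."""
    s = sum(cmath.exp(2 * math.pi * 1j * (n - m) * j / n)
            * f(z + t * cmath.exp(2 * math.pi * 1j * j / n))
            for j in range(1, n + 1))
    return math.factorial(m) / (n * t ** m) * s

# third derivative of exp at 0 is 1; the trapezoid sum on a circle converges fast
print(derivative_roots_of_unity(cmath.exp, 0.0, 3, 0.5, 32))
```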

As a final note we mention there have been numerous advances to the present
day in inverting the Vandermonde matrix. We mention only the earliest appli-
cation to numerical differentiation [59] which gives a formula in terms of the
Stirling numbers.

[1] Ralph Abraham, Jerrold Marsden and Tudor Ratiu, “Manifolds, Tensor
Analysis, and Applications”, 2nd Ed., Springer-Verlag, 1988.

[2] Aleksandr Danilovich Aleksandrov, “Convex Polyhedra”, Springer-Verlag,
Berlin, 2005.

[3] Luigi Ambrosio, Nicola Gigli, Giuseppe Savaré, “Gradient Flows in Metric
Spaces and in the Space of Probability Measures”, 2nd ed., Birkhäuser,

[4] Vladimir Igorevich Arnol’d, “Geometrical Methods in the Theory of Ordi-
nary Differential Equations”, Springer-Verlag, Berlin, 1988.

[5] Vladimir Igorevich Arnol’d, “Lectures on Partial Differential Equations”,
Springer-Verlag, Berlin, 2004.

[6] Vladimir Igorevich Arnol’d, V. S. Afrajmovich, “Bifurcation Theory and
Catastrophe Theory”, Springer-Verlag, Berlin, 1999.

[7] Jean-Pierre Aubin, “Mutational and Morphological Analysis”, Birkhäuser,
Boston, 1999.

[8] Michael Barnsley,“Fractals Everywhere”, Academic Press Professional,
New York, 1993.

[9] Alain Bensoussan, Giuseppe Da Prato, Michel C. Delfour, Sanjoy K. Mitter,
“Representation and Control of Infinite Dimensional Systems”, 2nd Ed.,
web-version, April 2006.

[10] G. G. Bilodeau, The Weierstrass Transform and Hermite Polynomials,
Duke Mathematical Journal, Vol. 29, No. 2, 1962.

[11] Leonard M. Blumenthal, Karl Menger, “Studies in Geometry”, W. H. Free-
man and Co., 1970.

[12] William M. Boothby, “An Introduction to Differentiable Manifolds and
Riemannian Geometry”, 2nd Ed., Academic Press, Inc., 1986.


[13] Max Born and Emil Wolf, “Principles of Optics”, 7th ed., Cambridge Uni-
versity Press, (corrected) 2002.

[14] Dmitri Burago, Yuri Burago and Sergei Ivanov, “A Course in Metric Geom-
etry”, American Mathematical Society, 2001.

[15] Herbert Busemann, “The Geometry of Geodesics”, Academic Press Inc.,
New York, N. Y., 1955.

[16] Craig Calcaterra, “Arc Fields”, Ph. D. dissertation, University of Hawaii,

[17] Craig Calcaterra, Linear Combinations of Gaussians with a Single Variance
are dense in L2 , Proceedings of the World Congress on Engineering, 2008.

[18] Craig Calcaterra and David Bleecker, Generating Flows on a Metric Space,
Journal of Mathematical Analysis and Applications, 248, pp. 645-677, 2000.

[19] Craig Calcaterra and Axel Boldt, Lipschitz Flow-box Theorem, Journal of
Mathematical Analysis and Applications, 338, issue 2, pp. 1108-1115, 2008.

[20] Craig Calcaterra and Axel Boldt, Approximating with Gaussians,
arXiv:0805.3795v1 [math.CA]

[21] Craig Calcaterra, Axel Boldt, Michael Green, David Bleecker, Metric Co-
ordinate Systems, math.DS/0206253

[22] César Camacho, Alcides Lins Neto, “Geometric Theory of Foliations”,
Birkhäuser Boston, 1985.

[23] Shiing-Shen Chern, W. H. Chen, K. S. Lam, “Lectures on Differential
Geometry”, World Scientific Publishing Co., 1999.

[24] Rinaldo M. Colombo and Andrea Corli, A Semilinear Structure on Semi-
groups in a Metric Space, Semigroup Forum, 68, pp. 419-444, 2004.

[25] Charles C. Conley, Isolated Invariant Sets and the Morse Index, CBMS
Regional Conference, vol. 89, American Mathematical Society, 1978.

[26] John B. Conway, “A Course in Functional Analysis”, 2nd edition, Springer-
Verlag, 1994.

[27] Jean-Michel Coron, “Control and Nonlinearity”, American Mathematical
Society, 2007.

[28] S. Darlington, Synthesis and Reactance of 4-poles, J. Math. & Phys., 18,
pp. 257-353, 1939.

[29] Milton Dishal, Gaussian-Response Filter Design, Electrical Communica-
tion, 36, no. 1, pp. 3-26, 1959.

[30] J. R. Dorroh and J. W. Neuberger, A Theory of Strongly Continuous Semi-
groups in Terms of Generators, Journal of Functional Analysis, 136, pp.
114—126, 1996.
[31] James Dugundji, “Topology”, Allyn and Bacon, Boston, 1966.

[32] Günter Ewald, “Combinatorial Convexity and Algebraic Geometry”,
Springer-Verlag, New York, 1996.

[33] Herbert Federer, “Geometric Measure Theory”, Springer-Verlag, New York,
1969.

[34] Maurice Fréchet, Sur Quelques Points du Calcul Fonctionnel, Rendic. Circ.
Mat. Palermo 22, pp. 1-74, 1906.

[35] David Gottlieb and Steven Orszag, “Numerical Analysis of Spectral Meth-
ods”, SIAM, 1977.

[36] Leslie Greengard and Xiaobai Sun, A New Version of the Fast Gauss Trans-
form, Documenta Mathematica, Extra Volume ICM, III, pp. 575-584, 1998.

[37] M. Gromov, “Metric Structures for Riemannian and Non-Riemannian
Spaces”, Birkhäuser, 1999.

[38] Felix Hausdorff, “Grundzüge der Mengenlehre”, Leipzig: Veit, 1914.
(Reprinted in “Felix Hausdorff—Gesammelte Werke. Band II”, Springer-
Verlag, pp. 91-576, 2002.)

[39] Philip Hartman, “Ordinary Differential Equations”, 2nd Ed., SIAM, 2002.

[40] Gilbert Hector and Ulrich Hirsch, “Introduction to the Geometry of Folia-
tions, Part A”, Friedr. Vieweg & Sohn, 1981.

[41] Juha Heinonen, “Lectures on Analysis on Metric Spaces”, Springer-Verlag,
New York, 2001.

[42] John E. Hutchinson, Fractals and Self Similarity, Indiana University Math-
ematics Journal, 30, pp. 713-747, 1981.

[43] Youssef Jabri, “The Mountain Pass Theorem”, Cambridge University
Press, 2003.

[44] Donald Knuth, “Concrete Mathematics”, 2nd ed., Addison-Wesley, 1994.

[45] Greg Leibon, Daniel Rockmore & Gregory Chirikjian, A Fast Hermite
Transform with Applications to Protein Structure Determination, Proceed-
ings of the 2007 International Workshop on Symbolic-Numeric Computa-
tion, ACM, New York, NY, pp. 117-124, 2007.
[46] J. Madrenas, M. Verleysen, P. Thissen, and J. L. Voz, A CMOS Ana-
log Circuit for Gaussian Functions, IEEE Transactions on Circuits and
Systems-II: Analog and Digital Signal Processing, 43, no. 1, 1996.

[47] E. J. McShane, Extensions of Range of Functions, Bull. Am. Math. Soc.,
40, pp. 837-842, 1934.

[48] Karl Menger, “Géométrie Générale”, Mémor. Sci. Math. no. 124, Gauthier-
Villars, Paris, 1954.

[49] James R. Munkres, “Topology”, Prentice Hall, 2000.

[50] D. Motreanu and N. H. Pavel, “Tangency, Flow Invariance for Differential
Equations, and Optimization Problems”, Marcel Dekker, 1999.

[51] A. I. Panasyuk, Quasidifferential Equations in a Complete Metric Space
under Conditions of the Caratheodory Type. I, Differential Equations 31,
pp. 901-910, 1995.

[52] A. I. Panasyuk, Quasidifferential Equations in a Complete Metric Space
under Caratheodory-type Conditions. II, Differential Equations 31, no. 8,
pp. 1308-1317, 1995.

[53] Grisha Perel’man, Finite Extinction Time for the Solutions to the Ricci
Flow on Certain Three-manifolds, arXiv:math/0307245v1 [math.DG]

[54] Anthony Ralston and Philip Rabinowitz, “A First Course in Numerical
Analysis”, McGraw-Hill, 1978.

[55] Franco Rampazzo and Hector J. Sussmann, Commutators of Flow Maps of
Nonsmooth Vector Fields, Journal of Differential Equations, 232, no. 1, pp.
134-175, 2007.

[56] H. L. Royden, “Real Analysis”, Macmillan Publishing Company, 1988.

[57] Slobodan Simić, Lipschitz Distributions and Anosov Flows, Proceedings of
the AMS, 124, no. 6, pp. 1869-1877, 1996.

[58] Eduardo D. Sontag, “Mathematical Control Theory”, 2nd Ed., Springer-
Verlag, 1998.

[59] A. Spitzbart and N. Macon, Numerical Differentiation Formulas, The
American Mathematical Monthly, Vol. 64, No. 10, pp. 721-723, 1957.

[60] M. H. Stone, Developments in Hermite Polynomials, The Annals of Math-
ematics, 2nd Ser., Vol. 29, No. 1/4, pp. 1-13, 1927-1928.

[61] Gabor Szegö, “Orthogonal Polynomials”, American Mathematical Society,
3rd ed., 1967.

[62] E. C. Titchmarsh, “The Theory of Functions”, 2nd ed., Oxford University
Press, Fair Lawn, N.J., 1939.

[63] William P. Thurston, “Three-Dimensional Geometry and Topology”, vol.
1, Princeton University Press, 1997.

[64] Philippe Tondeur, “Geometry of Foliations”, Birkhäuser Verlag, 1991.

[65] Gilbert G. Walter, “Wavelets and Other Orthogonal Systems With Appli-
cations”, CRC Press, 1994.

[66] David Widder, “The Heat Equation”, Pure and Applied Mathematics, Vol.
67, Academic Press, 1975.
List of notation


Terms in bold are defined in that paragraph.

A := B A is defined by B, or B is denoted by A; the colon distinguishes a
definition from the case when = represents a step in a calculation.

A =: B means B := A

A :=: B means A and B are interchangeable notations for a concept

Square brackets for bibliographical citations, e.g., [1].

Round brackets refer to displayed lines, e.g., (2.3) refers to the third numbered
displayed line in Chapter 2.


⇒ implication

⇐ is a consequence of

⇔ equivalence

iff “if and only if”

∃ there exists

∀ for any

Set theory and algebra

R denotes the set of real numbers

C the complex numbers

N := {1, 2, ...} the natural numbers


Z := {..., −1, 0, 1, 2, ...} the integers
Rn := R × R × · · · × R Cartesian product
R+ := [0, ∞) ⊂ R
Lp , LpG , lp , H (Rn ), etc., are various spaces defined in Appendix A.1.

Bᶜ denotes the complement of the set B, i.e., {x ∈ M | x ∉ B}
A\B := A ∩ Bᶜ set difference
f :A→B signifies a function with domain A and codomain B
f : A × B × C → M may be denoted f (x, y, z) :=: f (x) (y) (z) :=: fx (y, z) :=:
fx,y (z)
f1 , f2 : A → B signifies two functions with the same domain and codomain
f^(n) composition n times: f ◦ f ◦ ... ◦ f
f^[n] nth derivative of f
fn usually distinguishes the nth object f ; alternatively may mean the nth
multiplicative power
sup A supremum (least upper bound) of A ⊂ R
inf A infimum (greatest lower bound) of A ⊂ R
span {Ai } linear span of objects Ai , e.g.,

span {Ai } := {c1 A1 + · · · + cn An | n ∈ N, ci ∈ R}

span topological closure of span, i.e., closed linear span
∨ a ∨ b := maximum of a and b
∧ a ∧ b := minimum of a and b

Metric spaces

B (x, r) := {y ∈ M | d (x, y) < r} :=: BM (x, r) :=: Bd (x, r) is the ball about
the center x with radius r in M.
B̄ (x, r) := {y ∈ M | d (x, y) ≤ r} the closed ball.

C (M, N ) := {f : M → N | f is continuous}
Lip (M, N ) := {f : M → N | f is Lipschitz} (not the set of lipeomorphisms
from M to N )
LipK (M, N ) := {f : M → N | f has Lipschitz constant ≤ K}
homeomorphism, lipeomorphism: see Appendix A.2
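As a numerical aside (not part of the text; the sampling scheme below is our own illustration), membership in LipK (M, N ) can be probed by sampling difference quotients. For instance, sin belongs to Lip1 (R, R), and no sampled quotient exceeds 1:

```python
import math
import itertools

# Rough numerical check that sin is in Lip_1(R, R): sample the difference
# quotient |sin(x) - sin(y)| / |x - y| over many pairs from a grid and
# confirm it never exceeds the Lipschitz constant 1.
xs = [k / 10 for k in range(-50, 51)]
K = max(abs(math.sin(x) - math.sin(y)) / abs(x - y)
        for x, y in itertools.combinations(xs, 2))
print(K <= 1.0)  # True: every sampled quotient is at most 1
```

Of course sampling only bounds the constant from below; the true Lipschitz constant is the supremum over all pairs.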

∼, ≈ A ∼ B and A ≈ B denote 1st and 2nd-order local-uniform tangency.
A and B may be arc fields (p. 12), an arc field and a set (p. 21), 2-D
integral surfaces (p. 51), and distributions (p. 60).
Be warned ≈ is alternately used for function approximation: f ≈ g means
‖f − g‖ < ε.

“Big oh” and “little oh”

O, o For a real function Ψ the statement

“ Ψ (t) = O (tⁿ) as t → 0 ”

means there exist K > 0 and δ > 0 such that |Ψ (t)| < K |tⁿ| for 0 < |t| <
δ. The statement

“ Ψ (t) = o (tⁿ) as t → 0 ”

means
lim_{t→0} Ψ (t) /tⁿ = 0.

For a family of functions {Ψx : x ∈ M } we say Ψx (t) = O (tⁿ) locally uni-
formly in x when for each x0 ∈ M there are positive constants r, δ and
K such that for all x ∈ B (x0 , r) and 0 ≠ t ∈ (−δ, δ) we have

|Ψx (t)| < |tⁿ| K.

Ψx (t) = o (tⁿ) locally uniformly in x when for each x0 ∈ M and any ε > 0
there are positive constants r and δ such that for all x ∈ B (x0 , r) and
0 ≠ t ∈ (−δ, δ) we have
|Ψx (t)| < |tⁿ| ε.
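As a numerical aside (not part of the text; the slope-estimation scheme is our own illustration), the order n in Ψ (t) = O (tⁿ) can be estimated by measuring the slope of log |Ψ (t)| against log |t| near 0. For Ψ (t) = sin t − t, the Taylor expansion gives Ψ (t) = O (t³):

```python
import math

# Psi(t) = sin(t) - t = -t^3/6 + O(t^5), so Psi(t) = O(t^3) as t -> 0.
def psi(t):
    return math.sin(t) - t

def estimated_order(f, t1=1e-2, t2=1e-3):
    # Slope of log|f(t)| versus log|t| between two small arguments:
    # if f(t) ~ C t^n then this ratio tends to n.
    return ((math.log(abs(f(t1))) - math.log(abs(f(t2))))
            / (math.log(t1) - math.log(t2)))

n = estimated_order(psi)
print(round(n))  # 3
```

This is only a heuristic: the slope approaches the true order as the sample points shrink toward 0.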
Index


(αx , ω x ), 14
angle, 132
approximately reachable set, 66
approximation, 83
    Fourier synthesis, 78
    probability distributions, 80
    with exponentials, 77
    with Gaussians, 76, 83
    with low-frequency trig series, 90
        coefficients, 93
        damping, 98
        error bound, 97
approximation field, see arc field
arc, ix
arc bundle, 4
arc field, x, 3
    generalizes vector field, x, 5
    parallel parking example, 67
    scalar multiple, 25
    second order, 12
    sum, 25
    time dependent, 12
    well posed result, 6
automorphism, viii

B (x, r), 152
Banach manifold, xv, 6
    FTODE, 139
Banach space, 5, 27, 73, 123
    embedding, 130
    flow on unit sphere, 80
    foliation of unit sphere in L2, 81
    FTODE, 138
bounded speed, ix
bracket, see Lie bracket
bracket-generated distribution, see distribution, bracket-generated

C (M, N), 129
Cauchy sequence, 129
Cauchy-Lipschitz Theorem, 138
characteristic function, χS, xi
Chebyshev metric, see supremum metric
Chow's Theorem, xiv
    manifold, xx
    metric space, 68
close, 26
    bracket is arc field, 33
    closure closes, 31
closed set, 128
closure, 128
commutativity of flows, 22, 58, 60, 63
complete, 129
complete flow, see flow
completion of a metric space, 130
Conditions E1 and E2, 4
    E1 implies uniqueness, 14
    forward arc fields, 19
    implicit in frame definition, 61
    imply module properties, 29
    well posedness, 6
configuration space, 65
Conley index, viii
continuity, 128
continuous dynamics, viii
contraction, 129
Contraction Mapping Theorem, 130
control theory, 65
    H (Rn), 115
    infinite dimensional, xiii
controllability, xiii, 66
converge, 128
convex set, 132
convolution, 85
coordinates
    flow, 64
    local on a manifold, xv
    metric, 132
curvature, 132
curve, ix

d, vii
d∞, see supremum metric
deconvolution, 85
∆ (X, Y), 60
∆ [X, Y], xxi, 68
∆x, 60
dense, 128
dilation
    function dilation, 77
    vector space dilation, 77
distance, see metric
distance to a set, 51
distribution
    bracket generated, xxi
    bracket-generated, 68, 74
distribution (geometric), xiv
    manifold, xviii, 48
    metric space, 60
    n-dimensional, 61
diverge, 128

error
    bound for low-freq. trig approx., 97
    numerical differentiation, 143
Euler curve, 6
    alternate definitions, 14
existence and uniqueness
    arc field solutions, 6
    vector field solutions, 138
exponential growth, 13
extrinsic metric, 126

F, viii
Finsler geometry, 64
fixed point, 20
flow, viii, 3
    well posed, 17
foliation, xiv, 41
    manifold, xvi
    ouroboros, 43
    spiral ouroboros, 44, 45
    metric space, 63
    unit sphere in L2, 81
forward flow, 3, 19, 21, 22
    fixed point, 20
Fourier
    analysis, 78
    transform, 87, 90, 92
frame, 64
    local, 61
Frobenius' Theorem
    generalization of FTODE, 45
    manifold, xx
        proof, 49
    metric space
        global, 63
        local, 62
        local 2-D, 51
FTODE, see Fundamental Theorem of ODEs
full flow, see flow
Fundamental Theorem of ODEs, x, 6, 138
    forward flows, 19
    generalized to Frobenius' Theorem, 45

Gaussian filter, 84
generic, 128
geodesic, 132
geometric objects on metric spaces, 131, 133
global flow, see flow
global frame, 61
Global Frobenius Theorem, 63
gradient, 132

Hausdorff distance, xxi, 127
Hausdorff metric, see Hausdorff distance
Hermite polynomial, 76, 83, 87
Hilbert space, 125
holonomy, 48, 76
homeomorphism, 129
H (Rn), xxi, 74, 107, 127
    and reachable sets, 115
    cyclically attracted sets, 114
    fixed point of discrete map, 108
    fixed point of flow on, 114

initial condition
    arc field, 4
    vector field
        Banach space, 5
        manifold, xvi
        Rn, x
inner product (metric space), 132
integrability, 49, 51, 62, 63
integral surface
    2-dimensional, 51
    maximal, 63
    metric space, 62
intrinsic metric, 126
invariant set, 21
involutivity, 48, 49, 74
    metric space, 62
irrational flow of the torus, 45
irrational foliation of the torus, xvii
isometry, 129

K, 128
Kf, 128

L2 (R), xi, 74
    foliations, 77
L2 (R), 125
L2G, 125
Lagrange multipliers, 89
Λ, 4
    < 0 implies fixed point, 20
leaf, 63
least squares, 88
Lebesgue measure, 125
length of a curve, ix
length space, 132
Lie bracket, xiv
    dynamic characterization, 36
    manifold, xviii
    metric space, 32
        arc field vs. flow definition, 119
limit of a sequence, 128
linear span, see span
linear speed growth, 17, 29
    fixed point, 20
Lip (M, N), 129
lipeomorphism, 34, 129
LipK (M, N), 129
Lipschitz, 4, 128
    constant, 128
    locally, 129
    vector field, 27
load time, 85
local coordinates
    manifold, xv
    metric space, 64
local flow, 15
    continuity, 16
    forward, 20
local flows are arc fields, 18
local frame, see frame
local uniformity
    bounded speed arc field, 3
    continuity, 128
    tangency
        arc fields, 12
        distributions, 60
        integral surface, 51
        φ related, 37
        surface, 21, 62
locally complete, 129
locally complete metric space, vii
low-frequency trigonometric series, 90
    coefficients, 93
    damping, 98
    dense in L2 [a, b], xiv
    error bound, 97
Lp, 125
lp, 124

M, vii
manifold, xv
    chart, xv
metric, vii, 123
    coordinate, 132
metric space, vii, 123
metric space arithmetic, 25
    Conditions E1 and E2, 29
    examples, 101
    module properties of arc fields, 29
Minkowski distance, 124
module properties of arc fields, 29

n-dimensional distribution, 61
Nagumo's Invariance Theorem, xxi
    generates leaves of foliation, 63
    metric space, 21
neighborhood, 128
norm (metric space), 132
normal equations, 88
numerical differentiation
    backward difference, 143
    forward difference, 143
    n-point formula, 142

O (tn), 153
o (tn), 153
ODE, ordinary differential equation, 137
Ω, 4
open covering, 130
open subset, 128
ouroboros, 43

partition of unity, 130
path, ix, xvi
Picard-Lindelöf Theorem, 138
pre-flow, see arc field
probability distribution, 80
pull-back, 34
push-forward, 34
    natural, 34

Rademacher's Theorem, 129
reachable set, xiii, xx, 66
    H (Rn), 115
Reeb component, xviii
ρ, ix, 4
Ricci flow, 127

S, see closure
semi-flow, see forward flow
semi-group, see forward flow
sequence, 128
Shiva, 85
σ, 4
σx, 4
signal synthesis
    low-frequency trig series, 91
    Shiva's damaru, 85
    with Gaussians, 85
slide, 67
solution
    arc field, 4
        (αx , ω x ), 14
        maximal, 4
    vector field, xvi
        Banach space, 5
solution curve, see solution
span, xiv, xx, 76
    of arc fields, 60
    of distributions, 60
speed of a curve, ix
    locally uniformly bounded, 3
stratification, xviii, 43
subsequence, 130
support, 85
    arc field, 18
    function, 130
supremum metric
    L∞ (R), 125
    Rn, 124
surface, 62
    2-dimensional, 51

t, viii
tangency
    arc field to distribution, 60
    arc field to solution, 4
    between curves, ix
    distribution to distribution, 60
    foliation, 63
    forward tangency, 19
    generalization from Rn, x
    second order, 12
    surface, 62
    ∼, ≈, 37
        arc fields, 12
        distribution, 60
        equivalence relations, 12
        integral surface, 51
        invariant set, 21
        module properties, 29
        φ related, 37
tangent bundle, TM, xvi
tangent space, Tx M, xvi
tangent vector, xvi
taxicab metric, 123
torus, T2, xv
translation
    function translation
        bracketed with dilation, 77
        bracketed with vector space translation, 74
        foliates the Hilbert ball, 81
        on probability distributions, PDE, 104
    vector space translation, xii, 73
        arc fields, 50
        distribution, 61
triangle inequality, vii

O (tn) locally, 153
o (tn) locally, 153
locally uniformly continuous map,
solution to arc field, 4, 13

Vandermonde determinant, 142
vector field, xviii, 138
    Banach space, 5
    discontinuous examples, 120, 121
    manifold, xvi
    nonsmooth, 59
    on Rn, x

Weierstrass transform, 84, 87
weight function, 125
well posed
    arc field, 4, 6
    distribution, 63
    plane field, 51
    vector field, 138
wriggle, 67

X, 3
x, viii

**Check on the status of Colombo and Corli's new work
**Good spot for the non-unique solutions example ẋ = √x.
**Commuting figure
**add one more picture with the intermediate step approximation to Frobe-
nius proof
**infinite D distributions and frobenius thm
**The picture on the cover gives the scaffolding for constructing a flow on
S 3 with no equilibria. The flow alternately consists of open sets of points
whose alpha limit set is the first circle and whose omega limit set is the
second circle, and other open sets with alpha and omega limit sets reversed.
These open sets are intertwined in interesting, complex ways, separated by the
surfaces described before, consisting of periodic paths.
**integrate valid traditional foliation results  
**Shannon-Hartley law/theorem: C = B log₂ (1 + S/N) where
C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz;
S is the total signal power over the bandwidth, measured in watts or volts²;
N is the total noise power over the bandwidth, measured in watts or volts².
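(A quick numeric check of the formula, not part of the note; the function name `capacity` and the sample figures are ours:)

```python
import math

# Shannon-Hartley: channel capacity in bits per second.
def capacity(B, S, N):
    # B: bandwidth in hertz; S, N: signal and noise power over the bandwidth
    return B * math.log2(1 + S / N)

# A 3 kHz channel at 20 dB signal-to-noise ratio (S/N = 100):
print(capacity(3000, 100, 1))  # about 19975 bits per second
```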
So the information is limited (a lot) by how much power you can use, if
you are shrinking the bandwidth as suggested above. Or does this theorem not
apply? Considering the cubic function approximation with 3 sine functions, Example
95, we have 3 = C and S/N ∼ 1/C, so C ∼ B, violating the Shannon-Hartley law.