Differential Geometry in Physics

Gabriel Lugo
Department of Mathematical Sciences and Statistics
University of North Carolina at Wilmington
© 1992, 1998, 2006

This document was reproduced by the University of North Carolina at Wilmington from a camera-ready copy supplied by the authors. The text was generated on a desktop computer using LaTeX.

© 1992, 1998, 2006
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or
otherwise, without the written permission of the author. Printed in the United States of America.
Preface
These notes were developed as a supplement to a course on Differential Geometry at the advanced
undergraduate, first year graduate level, which the author has taught for several years. There are
many excellent texts in Differential Geometry but very few have an early introduction to differential
forms and their applications to Physics. It is the purpose of these notes to bridge some of these
gaps and thus help the student get a more profound understanding of the concepts involved. When
appropriate, the notes also correlate classical equations to the more elegant but less intuitive modern
formulation of the subject.
These notes should be accessible to students who have completed traditional training in Advanced
Calculus, Linear Algebra, and Differential Equations. Students who master the entirety of this
material will have gained enough background to begin a formal study of the General Theory of
Relativity.
Gabriel Lugo, Ph. D.
Mathematical Sciences and Statistics
UNCW
Wilmington, NC 28403
lugo@uncw.edu
Contents

Preface

1 Vectors and Curves
  1.1 Tangent Vectors
  1.2 Curves in R^3
  1.3 Fundamental Theorem of Curves

2 Differential Forms
  2.1 1-Forms
  2.2 Tensors and Forms of Higher Rank
  2.3 Exterior Derivatives
  2.4 The Hodge-∗ Operator

3 Connections
  3.1 Frames
  3.2 Curvilinear Coordinates
  3.3 Covariant Derivative
  3.4 Cartan Equations

4 Theory of Surfaces
  4.1 Manifolds
  4.2 The First Fundamental Form
  4.3 The Second Fundamental Form
  4.4 Curvature
Chapter 1
Vectors and Curves
1.1 Tangent Vectors
1.1 Definition Euclidean n-space R^n is defined as the set of ordered n-tuples p = (p^1, . . . , p^n), where p^i ∈ R, for each i = 1, . . . , n.

Given any two n-tuples p = (p^1, . . . , p^n), q = (q^1, . . . , q^n) and any real number c, we define two operations:

    p + q = (p^1 + q^1, . . . , p^n + q^n)                    (1.1)
    cp = (cp^1, . . . , cp^n)

With the sum and the scalar multiplication of ordered n-tuples defined this way, Euclidean space acquires the structure of a vector space of n dimensions.¹

1.2 Definition Let x^i be the real valued functions in R^n such that x^i(p) = p^i for any point p = (p^1, . . . , p^n). The functions x^i are then called the natural coordinates of the point p. When the dimension of the space is n = 3, we often write x^1 = x, x^2 = y and x^3 = z.

1.3 Definition A real valued function in R^n is of class C^r if all the partial derivatives of the function up to order r exist and are continuous. The space of infinitely differentiable (smooth) functions will be denoted by C^∞(R^n).
In advanced calculus, vectors are usually regarded as arrows characterized by a direction and a length. Vectors are thus considered independent of their location in space. For physical and mathematical reasons, it is advantageous to introduce a notion of vectors that does depend on location. For example, if the vector is to represent a force acting on a rigid body, then the resulting equations of motion will obviously depend on the point at which the force is applied.

In a later chapter we will consider vectors on curved spaces. In these cases the positions of the vectors are crucial. For instance, a unit vector pointing north at the earth's equator is not at all the same as a unit vector pointing north at the tropic of Capricorn. This example should help motivate the following definition.

1.4 Definition A tangent vector X_p in R^n is an ordered pair (X, p). We may regard X as an ordinary advanced calculus vector and p as the position vector of the foot of the arrow.
¹ In these notes we will use the following index conventions:
  Indices such as i, j, k, l, m, n run from 1 to n.
  Indices such as µ, ν, ρ, σ run from 0 to n.
  Indices such as α, β, γ, δ run from 1 to 2.
The collection of all tangent vectors at a point p ∈ R^n is called the tangent space at p and will be denoted by T_p(R^n). Given two tangent vectors X_p, Y_p and a constant c, we can define new tangent vectors at p by (X + Y)_p = X_p + Y_p and (cX)_p = cX_p. With this definition, it is easy to see that for each point p, the corresponding tangent space T_p(R^n) at that point has the structure of a vector space. On the other hand, there is no natural way to add two tangent vectors at different points.

Let U be an open subset of R^n. The set T(U) consisting of the union of all tangent vectors at all points in U is called the tangent bundle. This object is not a vector space, but as we will see later it has much more structure than just a set.

1.5 Definition A vector field X in U ⊂ R^n is a smooth function from U to T(U).
We may think of a vector field as a smooth assignment of a tangent vector X_p to each point in U. Given any two vector fields X and Y and any smooth function f, we can define new vector fields X + Y and fX by

    (X + Y)_p = X_p + Y_p                                     (1.2)
    (fX)_p = f X_p

Remark Since the space of smooth functions is not a field but only a ring, the operations above give the space of vector fields the structure of a ring module. The subscript notation X_p to indicate the location of a tangent vector is sometimes cumbersome. At the risk of introducing some confusion, we will drop the subscript to denote a tangent vector. Hopefully, it will be clear from the context whether we are referring to a vector or to a vector field.

Vector fields are essential objects in physical applications. If we consider the flow of a fluid in a region, the velocity vector field indicates the speed and direction of the flow of the fluid at each point. Other examples of vector fields in classical physics are the electric, magnetic and gravitational fields.
1.6 Definition Let X_p be a tangent vector in an open neighborhood U of a point p ∈ R^n and let f be a C^∞ function in U. The directional derivative of f at the point p, in the direction of X_p, is defined by

    ∇_X(f)(p) = ∇f(p) · X(p),                                 (1.3)

where ∇f(p) is the gradient of the function f at the point p. The notation

    X_p(f) = ∇_X(f)(p)

is also often used in these notes. We may think of a tangent vector at a point as an operator on the space of smooth functions in a neighborhood of the point. The operator assigns to a function the directional derivative of the function in the direction of the vector. It is easy to generalize the notion of directional derivatives to vector fields by defining X(f)(p) = X_p(f).
1.7 Proposition If f, g ∈ C^∞(R^n), a, b ∈ R, and X is a vector field, then

    X(af + bg) = aX(f) + bX(g)                                (1.4)
    X(fg) = f X(g) + g X(f)

The proof of this proposition follows from fundamental properties of the gradient, and it is found in any advanced calculus text.

Any quantity in Euclidean space which satisfies relations (1.4) is called a linear derivation on the space of smooth functions. The word linear here is used in the usual sense of a linear operator in linear algebra, and the word derivation means that the operator satisfies Leibnitz' rule.
The proof of the following proposition is slightly beyond the scope of this course, but the propo-
sition is important because it characterizes vector fields in a coordinate-independent manner.
1.8 Proposition Any linear derivation on C^∞(R^n) is a vector field.
This result allows us to identify vector fields with linear derivations. This step is a big departure
from the usual concept of a “calculus” vector. To a differential geometer, a vector is a linear operator
whose inputs are functions. At each point, the output of the operator is the directional derivative
of the function in the direction of X.
Let p ∈ U be a point and let x^i be the coordinate functions in U. Suppose that X_p = (X, p), where the components of the Euclidean vector X are a^1, . . . , a^n. Then, for any function f, the tangent vector X_p operates on f according to the formula

    X_p(f) = Σ_{i=1}^{n} a^i (∂f/∂x^i)(p).                    (1.5)

It is therefore natural to identify the tangent vector X_p with the differential operator

    X_p = Σ_{i=1}^{n} a^i (∂/∂x^i)_p                          (1.6)
        = a^1 (∂/∂x^1)_p + . . . + a^n (∂/∂x^n)_p.
Notation: We will be using Einstein’s convention to suppress the summation symbol whenever
an expression contains a repeated index. Thus, for example, the equation above could be simply
written
    X_p = a^i (∂/∂x^i)_p.                                     (1.7)

This equation implies that the action of the vector X_p on the coordinate functions x^i yields the components a^i of the vector. In elementary treatments, vectors are often identified with the components of the vector, and this may cause some confusion.
The difference between a tangent vector and a vector field is that in the latter case the coefficients a^i are smooth functions of x^i. The quantities

    (∂/∂x^1)_p , . . . , (∂/∂x^n)_p

form a basis for the tangent space T_p(R^n) at the point p, and any tangent vector can be written as a linear combination of these basis vectors. The quantities a^i are called the contravariant components of the tangent vector. Thus, for example, the Euclidean vector in R^3

    X = 3i + 4j − 3k

located at a point p would correspond to the tangent vector

    X_p = 3(∂/∂x)_p + 4(∂/∂y)_p − 3(∂/∂z)_p.
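The identification of a tangent vector with a first-order differential operator is easy to experiment with symbolically. The following sketch (Python with the sympy library; the function f and the point p are arbitrary test data) applies the tangent vector X_p = 3(∂/∂x)_p + 4(∂/∂y)_p − 3(∂/∂z)_p of the example above to a test function and checks that the result agrees with the advanced calculus expression ∇f(p) · X.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**2*y + sp.sin(z)          # an arbitrary test function
    a = (3, 4, -3)                  # components of the tangent vector X_p
    p = {x: 1, y: 2, z: 0}          # the point of application p

    # X_p acting on f: the sum a^i (df/dx^i) evaluated at p, as in (1.5)
    Xp_f = sum(ai*sp.diff(f, xi) for ai, xi in zip(a, (x, y, z))).subs(p)

    # the same number from the definition grad f(p) . X of the directional derivative
    grad_dot_X = sum(sp.diff(f, xi).subs(p)*ai for xi, ai in zip((x, y, z), a))

    print(Xp_f, grad_dot_X)         # both print 13 for this choice of f and p
    assert sp.simplify(Xp_f - grad_dot_X) == 0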
1.2 Curves in R^3

1.9 Definition A curve α(t) in R^3 is a C^∞ map from an open subset of R into R^3. The curve assigns to each value of a parameter t ∈ R a point (x^1(t), x^2(t), x^3(t)) in R^3:

    α : U ⊂ R −→ R^3
    t −→ α(t) = (x^1(t), x^2(t), x^3(t))
One may think of the parameter t as representing time, and the curve α as representing the
trajectory of a moving point particle.
1.10 Example Let

    α(t) = (a^1 t + b^1, a^2 t + b^2, a^3 t + b^3).

This equation represents a straight line passing through the point p = (b^1, b^2, b^3), in the direction of the vector v = (a^1, a^2, a^3).
1.11 Example Let
α(t) = (a cos ωt, a sin ωt, bt).
This curve is called a circular helix. Geometrically, we may view the curve as the path described by
the hypotenuse of a triangle with slope b, which is wrapped around a circular cylinder of radius a.
The projection of the helix onto the xy-plane is a circle and the curve rises at a constant rate in the
z-direction.
Occasionally, we will revert to the position vector notation

    x(t) = (x^1(t), x^2(t), x^3(t))                           (1.8)

which is more prevalent in vector calculus and elementary physics textbooks. Of course, what this notation really means is

    x^i(t) = (x^i ◦ α)(t),                                    (1.9)

where x^i are the coordinate slot functions in an open set in R^3.
1.12 Definition The derivative α'(t) of the curve is called the velocity vector and the second derivative α''(t) is called the acceleration. The length v = |α'(t)| of the velocity vector is called the speed of the curve. The components of the velocity vector are simply given by

    V(t) = dx/dt = ( dx^1/dt , dx^2/dt , dx^3/dt ),           (1.10)

and the speed is

    v = √[ (dx^1/dt)^2 + (dx^2/dt)^2 + (dx^3/dt)^2 ].         (1.11)
The differential dx of the classical position vector, given by

    dx = ( dx^1/dt , dx^2/dt , dx^3/dt ) dt,                  (1.12)

is called an infinitesimal tangent vector, and the norm |dx| of the infinitesimal tangent vector is called the differential of arclength ds. Clearly, we have

    ds = |dx| = v dt.                                         (1.13)
As we will see later in this text, the notion of infinitesimal objects needs to be treated in a more
rigorous mathematical setting. At the same time, we must not discard the great intuitive value of
this notion as envisioned by the masters who invented Calculus, even at the risk of some possible
confusion! Thus, whereas in the more strict sense of modern differential geometry, the velocity
vector is really a tangent vector and hence it should be viewed as a linear derivation on the space
of functions, it is helpful to regard dx as a traditional vector which, at the infinitesimal level, gives
a linear approximation to the curve.
If f is any smooth function on R^3, we formally define α'(t) in local coordinates by the formula

    α'(t)(f) |_{α(t)} = d/dt (f ◦ α) |_t .                    (1.14)

The modern notation is more precise, since it takes into account that the velocity has a vector part as well as a point of application. Given a point on the curve, the velocity of the curve acting on a function yields the directional derivative of that function in the direction tangential to the curve at the point in question.
The diagram below provides a more geometrical interpretation of the velocity vector formula (1.14). The map α from R to R^3 induces a map α_* from the tangent space of R to the tangent space of R^3. The image α_*(d/dt) in TR^3 of the tangent vector d/dt is what we call α'(t):

    α_*(d/dt) = α'(t).

Since α'(t) is a tangent vector in R^3, it acts on functions in R^3. The action of α'(t) on a function f on R^3 is the same as the action of d/dt on the composition f ◦ α. In particular, if we apply α'(t) to the coordinate functions x^i, we get the components of the tangent vector, as illustrated:

    d/dt ∈ TR  --α_*-->  TR^3 ∋ α'(t)
      |                    |
      R    --α-->   R^3  --x^i-->  R

    α'(t)(x^i) |_{α(t)} = d/dt (x^i ◦ α) |_t .                (1.15)

The map α_* on the tangent spaces induced by the curve α is called the push-forward. Many authors use the notation dα to denote the push-forward, but we prefer to avoid this notation because most students fresh out of advanced calculus have not yet been introduced to the interpretation of the differential as a linear map on tangent spaces.
1.13 Definition If t = t(s) is a smooth, real valued function and α(t) is a curve in R^3, we say that the curve β(s) = α(t(s)) is a reparametrization of α.

A common reparametrization of a curve is obtained by using the arclength as the parameter. Using this reparametrization is quite natural, since we know from basic physics that the rate of change of the arclength is what we call speed,

    v = ds/dt = |α'(t)|.                                      (1.16)

The arclength is obtained by integrating the above formula,

    s = ∫ |α'(t)| dt = ∫ √[ (dx^1/dt)^2 + (dx^2/dt)^2 + (dx^3/dt)^2 ] dt.   (1.17)

In practice it is typically difficult to actually find an explicit arclength parametrization of a curve, since not only does one have to calculate the integral, but one also needs to be able to find the inverse function giving t in terms of s. On the other hand, from a theoretical point of view, arclength parametrizations are ideal, since any curve so parametrized has unit speed. The proof of this fact is a simple application of the chain rule and the inverse function theorem:
    β'(s) = [α(t(s))]'
          = α'(t(s)) t'(s)
          = α'(t(s)) (1 / s'(t(s)))
          = α'(t(s)) / |α'(t(s))| ,

and any vector divided by its length is a unit vector. Leibnitz notation makes this even more self evident:

    dx/ds = (dx/dt)(dt/ds) = (dx/dt) / (ds/dt) = (dx/dt) / |dx/dt| .
1.14 Example Let α(t) = (a cos ωt, a sin ωt, bt). Then

    V(t) = (−aω sin ωt, aω cos ωt, b),

    s(t) = ∫_0^t √[ (−aω sin ωu)^2 + (aω cos ωu)^2 + b^2 ] du
         = ∫_0^t √( a^2 ω^2 + b^2 ) du
         = ct,   where c = √( a^2 ω^2 + b^2 ).

The helix of unit speed is then given by

    β(s) = (a cos (ωs/c), a sin (ωs/c), bs/c).
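The arclength computation above is easy to reproduce with a computer algebra system. The following sketch (Python with sympy; the symbols a, ω, b are assumed positive) integrates the speed of the helix and recovers s = ct with c = √(a^2 ω^2 + b^2).

    import sympy as sp

    t, u = sp.symbols('t u', positive=True)
    a, w, b = sp.symbols('a omega b', positive=True)

    alpha = sp.Matrix([a*sp.cos(w*t), a*sp.sin(w*t), b*t])   # the circular helix
    velocity = alpha.diff(t)
    speed = sp.simplify(sp.sqrt(velocity.dot(velocity)))     # |alpha'(t)|

    s = sp.integrate(speed.subs(t, u), (u, 0, t))            # arclength s(t)
    print(speed)   # sqrt(a**2*omega**2 + b**2)
    print(s)       # t*sqrt(a**2*omega**2 + b**2), that is, s = c t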
Frenet Frames
Let β(s) be a curve parametrized by arclength and let T(s) be the vector

    T(s) = β'(s).                                             (1.18)

The vector T(s) is tangential to the curve and it has unit length. Hereafter, we will call T the unit tangent vector. Differentiating the relation

    T · T = 1,                                                (1.19)

we get

    2 T · T' = 0,                                             (1.20)

so we conclude that the vector T' is orthogonal to T. Let N be a unit vector orthogonal to T, and let κ be the scalar such that

    T'(s) = κ N(s).                                           (1.21)

We call N the unit normal to the curve, and κ the curvature. Taking the length of both sides of the last equation, and recalling that N has unit length, we deduce that

    κ = |T'(s)|.                                              (1.22)
It makes sense to call κ the curvature since, if T is a unit vector, then T'(s) is nonzero only if the direction of T is changing. The rate of change of the direction of the tangent vector is precisely what one would expect to measure how much a curve is curving. In particular, if T' = 0 at a particular point, we expect that at that point the curve is locally well approximated by a straight line.

We now introduce a third vector

    B = T × N,                                                (1.23)

which we will call the binormal vector. The triplet of vectors (T, N, B) forms an orthonormal set; that is,

    T · T = N · N = B · B = 1
    T · N = T · B = N · B = 0.                                (1.24)

If we differentiate the relation B · B = 1, we find that B · B' = 0; hence B' is orthogonal to B. Furthermore, differentiating the equation T · B = 0, we get

    B' · T + B · T' = 0.

Rewriting the last equation,

    B' · T = −T' · B = −κ N · B = 0,

we also conclude that B' must be orthogonal to T. This can only happen if B' is orthogonal to the TB-plane, so B' must be proportional to N. In other words, we must have

    B'(s) = −τ N(s)                                           (1.25)

for some quantity τ, which we will call the torsion. The torsion is similar to the curvature in the sense that it measures the rate of change of the binormal. Since the binormal also has unit length, the only way one can have a non-zero derivative is if B is changing directions. The quantity B' then measures the rate of change in the up and down direction of an observer moving with the curve, always facing forward in the direction of the tangent vector. The binormal B is something like the flag in the back of a sand dune buggy.

The set of basis vectors {T, N, B} is called the Frenet frame, or the repère mobile (moving frame). The advantage of this basis over the fixed (i, j, k) basis is that the Frenet frame is naturally adapted to the curve. It propagates along with the curve, with the tangent vector always pointing in the direction of motion, and the normal and binormal vectors pointing in the directions in which the curve is tending to curve. In particular, a complete description of how the curve is curving can be obtained by calculating the rate of change of the frame in terms of the frame itself.
1.15 Theorem Let β(s) be a unit speed curve with curvature κ and torsion τ. Then

    T' =  κN
    N' = −κT + τB                                             (1.26)
    B' = −τN .

Proof: We need only establish the equation for N'. Differentiating the equation N · N = 1, we get 2 N · N' = 0, so N' is orthogonal to N. Hence, N' must be a linear combination of T and B,

    N' = aT + bB.

Taking the dot product of the last equation with T and B respectively, we see that

    a = N' · T,   and   b = N' · B.

On the other hand, differentiating the equations N · T = 0 and N · B = 0, we find that

    N' · T = −N · T' = −N · (κN) = −κ
    N' · B = −N · B' = −N · (−τN) = τ.

We conclude that a = −κ, b = τ, and thus

    N' = −κT + τB.
The Frenet frame equations (1.26) can also be written in matrix form as shown below:

    [ T ]'    [  0   κ   0 ] [ T ]
    [ N ]  =  [ −κ   0   τ ] [ N ]                            (1.27)
    [ B ]     [  0  −τ   0 ] [ B ]

The group-theoretic significance of this matrix formulation is quite important, and we will come back to this later when we talk about general orthonormal frames. At this time, perhaps it suffices to point out that the appearance of an antisymmetric matrix in the Frenet equations is not at all coincidental.
The following theorem provides a computational method to calculate the curvature and torsion
directly from the equation of a given unit speed curve.
1.16 Proposition Let β(s) be a unit speed curve with curvature κ > 0 and torsion τ. Then

    κ = |β''(s)|
    τ = [β' β'' β'''] / (β'' · β'')                           (1.28)

Proof: If β(s) is a unit speed curve, we have β'(s) = T. Then

    T' = β''(s) = κN,
    β'' · β'' = (κN) · (κN),
    β'' · β'' = κ^2
    κ^2 = |β''|^2

    β'''(s) = κ'N + κN'
            = κ'N + κ(−κT + τB)
            = −κ^2 T + κ'N + κτB.

    [β' β'' β'''] = T · [κN × (−κ^2 T + κ'N + κτB)]
                  = T · [κ^3 B + κ^2 τ T]
                  = κ^2 τ

    τ = [β' β'' β'''] / κ^2 = [β' β'' β'''] / (β'' · β'')
1.17 Example Consider a circle of radius r whose equation is given by
α(t) = (r cos t, r sin t, 0).
Then,

    α'(t) = (−r sin t, r cos t, 0)
    |α'(t)| = √[ (−r sin t)^2 + (r cos t)^2 + 0^2 ]
            = √[ r^2 (sin^2 t + cos^2 t) ]
            = r.

Therefore, ds/dt = r and s = rt, which we recognize as the formula for the length of an arc of a circle of radius r, subtended by a central angle whose measure is t radians. We conclude that

    β(s) = (r cos (s/r), r sin (s/r), 0)

is a unit speed reparametrization. The curvature of the circle can now be easily computed:

    T = β'(s) = (−sin (s/r), cos (s/r), 0)
    T' = (−(1/r) cos (s/r), −(1/r) sin (s/r), 0)

    κ = |β''| = |T'|
      = √[ (1/r^2) cos^2 (s/r) + (1/r^2) sin^2 (s/r) + 0^2 ]
      = √[ (1/r^2)(sin^2 (s/r) + cos^2 (s/r)) ]
      = 1/r .
This is a very simple but important example. The fact that for a circle of radius r the curvature is κ = 1/r could not be more intuitive. A small circle has large curvature and a large circle has small curvature. As the radius of the circle approaches infinity, the circle locally looks more and more like a straight line, and the curvature approaches 0. If one were walking along a great circle on a very large sphere (like the earth), one would perceive the space to be locally flat.
1.18 Proposition Let α(t) be a curve of velocity V, acceleration A, speed v and curvature κ. Then

    V = vT,
    A = (dv/dt) T + v^2 κ N.                                  (1.29)

Proof: Let s(t) be the arclength and let β(s) be a unit speed reparametrization. Then α(t) = β(s(t)) and, by the chain rule,

    V = α'(t)
      = β'(s(t)) s'(t)
      = vT

    A = α''(t)
      = (dv/dt) T + v T'(s(t)) s'(t)
      = (dv/dt) T + v (κN) v
      = (dv/dt) T + v^2 κ N
Equation (1.29) is important in physics. The equation states that a particle moving along a curve in space feels a component of acceleration along the direction of motion whenever there is a change of speed, and a centripetal acceleration in the direction of the normal whenever it changes direction. The centripetal acceleration at any point is

    a = v^2 κ = v^2 / r,

where r is the radius of a circle which has maximal tangential contact with the curve at the point in question. This tangential circle is called the osculating circle. The osculating circle can be envisioned by a limiting process similar to that of the tangent to a curve in differential calculus. Let p be a point on the curve, and let q_1 and q_2 be two nearby points. The three points determine a circle uniquely. This circle is a "secant" approximation to the tangent circle. As the points q_1 and q_2 approach the point p, the "secant" circle approaches the osculating circle. The osculating circle always lies in the TN-plane, which, by analogy, is called the osculating plane.
1.19 Example (Helix)

    β(s) = (a cos (ωs/c), a sin (ωs/c), bs/c),  where c = √( a^2 ω^2 + b^2 )

    β'(s) = (−(aω/c) sin (ωs/c), (aω/c) cos (ωs/c), b/c)
    β''(s) = (−(aω^2/c^2) cos (ωs/c), −(aω^2/c^2) sin (ωs/c), 0)
    β'''(s) = ((aω^3/c^3) sin (ωs/c), −(aω^3/c^3) cos (ωs/c), 0)

    κ^2 = β'' · β'' = a^2 ω^4 / c^4
    κ = aω^2 / c^2

    τ = [β' β'' β'''] / (β'' · β'')
      = (b/c) [ (aω^2/c^2)(aω^3/c^3) cos^2 (ωs/c) + (aω^2/c^2)(aω^3/c^3) sin^2 (ωs/c) ] · c^4/(a^2 ω^4)
      = (b/c)(a^2 ω^5 / c^5) · c^4/(a^2 ω^4)
      = bω / c^2 .

Simplifying the last expression and substituting the value of c, we get

    τ = bω / (a^2 ω^2 + b^2)
    κ = aω^2 / (a^2 ω^2 + b^2)

Notice that if b = 0 the helix collapses to a circle in the xy-plane. In this case the formulas above reduce to κ = 1/a and τ = 0. The ratio κ/τ = aω/b is particularly simple. Any curve for which κ/τ is constant is called a helix, of which the circular helix is a special case.
1.20 Example (Plane curves) Let α(t) = (x(t), y(t), 0). Then

    α' = (x', y', 0)
    α'' = (x'', y'', 0)
    α''' = (x''', y''', 0)

    κ = |α' × α''| / |α'|^3 = |x'y'' − y'x''| / (x'^2 + y'^2)^{3/2}
    τ = 0
1.21 Example (Cornu Spiral) Let β(s) = (x(s), y(s), 0), where

    x(s) = ∫_0^s cos (t^2 / 2c^2) dt
    y(s) = ∫_0^s sin (t^2 / 2c^2) dt.                         (1.30)

Then, using the fundamental theorem of calculus, we have

    β'(s) = (cos (s^2 / 2c^2), sin (s^2 / 2c^2), 0).

Since |β'| = v = 1, the curve has unit speed, and s is indeed the arclength. The curvature of the Cornu spiral is given by

    κ = |x'y'' − y'x''| = (β'' · β'')^{1/2}
      = | (−(s/c^2) sin (s^2 / 2c^2), (s/c^2) cos (s^2 / 2c^2), 0) |
      = s / c^2 .

The integrals (1.30) defining the coordinates of the Cornu spiral are the classical Fresnel integrals. These functions, as well as the spiral itself, arise in the computation of the diffraction pattern of a coherent beam of light by a straight edge.
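A short symbolic check of this example is sketched below (Python with sympy; s and c are assumed positive): it verifies that the Cornu spiral has unit speed and that its curvature grows linearly, κ(s) = s/c^2.

    import sympy as sp

    s, c = sp.symbols('s c', positive=True)

    # beta'(s) follows from the fundamental theorem of calculus applied to (1.30)
    beta1 = sp.Matrix([sp.cos(s**2/(2*c**2)), sp.sin(s**2/(2*c**2)), 0])
    speed = sp.simplify(sp.sqrt(beta1.dot(beta1)))
    print(speed)                       # 1, so s is indeed the arclength

    beta2 = beta1.diff(s)              # T'(s), since the curve has unit speed
    kappa = sp.simplify(sp.sqrt(beta2.dot(beta2)))
    print(kappa)                       # s/c**2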
In cases where the given curve α(t) is not of unit speed, the following proposition provides formulas to compute the curvature and torsion directly in terms of the derivatives of α.
1.22 Proposition If α(t) is a regular curve in R^3, then

    κ^2 = |α' × α''|^2 / |α'|^6                               (1.31)
    τ = (α' α'' α''') / |α' × α''|^2 ,                        (1.32)

where (α' α'' α''') is the triple vector product (α' × α'') · α'''.

Proof:

    α' = vT
    α'' = v'T + v^2 κ N
    α''' = (v^2 κ) N'(s(t)) s'(t) + . . .
         = v^3 κ N' + . . .
         = v^3 κτ B + . . .

The other terms are unimportant here because, as we will see, α' × α'' is proportional to B:

    α' × α'' = v^3 κ (T × N) = v^3 κ B
    |α' × α''| = v^3 κ
    κ = |α' × α''| / v^3

    (α' × α'') · α''' = v^6 κ^2 τ
    τ = (α' α'' α''') / (v^6 κ^2) = (α' α'' α''') / |α' × α''|^2
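Formulas (1.31) and (1.32) are convenient to apply directly, without ever reparametrizing by arclength. The sketch below (Python with sympy; a, ω, b assumed positive) evaluates them for the helix α(t) = (a cos ωt, a sin ωt, bt) and recovers κ = aω^2/(a^2 ω^2 + b^2) and τ = bω/(a^2 ω^2 + b^2), in agreement with Example 1.19.

    import sympy as sp

    t = sp.symbols('t')
    a, w, b = sp.symbols('a omega b', positive=True)

    alpha = sp.Matrix([a*sp.cos(w*t), a*sp.sin(w*t), b*t])
    d1, d2, d3 = alpha.diff(t), alpha.diff(t, 2), alpha.diff(t, 3)

    cross = d1.cross(d2)                       # alpha' x alpha''
    kappa = sp.simplify(sp.sqrt(cross.dot(cross)) / sp.sqrt(d1.dot(d1))**3)
    tau = sp.simplify(cross.dot(d3) / cross.dot(cross))

    print(kappa)   # a*omega**2/(a**2*omega**2 + b**2)
    print(tau)     # b*omega/(a**2*omega**2 + b**2)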
1.3 Fundamental Theorem of Curves

Some geometrical insight into the significance of the curvature and torsion can be gained by considering the Taylor series expansion of an arbitrary unit speed curve β(s) about s = 0:

    β(s) = β(0) + β'(0) s + (β''(0)/2!) s^2 + (β'''(0)/3!) s^3 + . . .   (1.33)

Since we are assuming that s is an arclength parameter,

    β'(0) = T(0) = T_0
    β''(0) = (κN)(0) = κ_0 N_0
    β'''(0) = (−κ^2 T + κ'N + κτB)(0) = −κ_0^2 T_0 + κ_0' N_0 + κ_0 τ_0 B_0

Keeping only the lowest terms in the components of T, N, and B, we get the first order Frenet approximation to the curve,

    β(s) ≈ β(0) + T_0 s + (1/2) κ_0 N_0 s^2 + (1/6) κ_0 τ_0 B_0 s^3.     (1.34)

The first two terms represent the linear approximation to the curve. The first three terms approximate the curve by a parabola which lies in the osculating plane (TN-plane). If κ_0 = 0, then locally the curve looks like a straight line. If τ_0 = 0, then locally the curve is a plane curve which lies in the osculating plane. In this sense, the curvature measures the deviation of the curve from being a straight line and the torsion (also called the second curvature) measures the deviation of the curve from being a plane curve.

1.23 Theorem (Fundamental Theorem of Curves) Let κ(s) and τ(s), (s > 0), be any two analytic functions. Then there exists a unique curve (unique up to its position in R^3) for which s is the arclength, κ(s) its curvature and τ(s) its torsion.

Proof: Pick a point in R^3. By an appropriate affine transformation, we may assume that this point is the origin. Pick any orthonormal frame {T, N, B}. The curve is then determined uniquely by its Taylor expansion in the Frenet frame as in equation (1.34).
1.24 Remark It is possible to prove the theorem assuming only that κ(s) and τ(s) are continuous. The proof, however, becomes much harder, and we refer the reader to other standard texts for it.
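The constructive content of the theorem is easy to see numerically: given κ(s) and τ(s), one integrates the Frenet system (1.26) together with x'(s) = T(s) and reads off the curve. The sketch below (Python with scipy; the constant values κ = 1, τ = 0.3 are arbitrary test data) does exactly that, and constant κ and τ should reproduce a circular helix up to a rigid motion.

    import numpy as np
    from scipy.integrate import solve_ivp

    kappa = lambda s: 1.0          # prescribed curvature (test data)
    tau   = lambda s: 0.3          # prescribed torsion  (test data)

    def frenet(s, y):
        x, T, N, B = y[0:3], y[3:6], y[6:9], y[9:12]
        dT = kappa(s)*N
        dN = -kappa(s)*T + tau(s)*B
        dB = -tau(s)*N
        return np.concatenate([T, dT, dN, dB])   # x' = T together with (1.26)

    # start at the origin with the standard frame {i, j, k}
    y0 = np.concatenate([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    sol = solve_ivp(frenet, (0, 20), y0, max_step=0.01)

    curve = sol.y[0:3]             # the reconstructed curve, unique up to rigid motion
    print(curve[:, -1])            # endpoint of the numerically generated helix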
1.25 Proposition A curve with κ = 0 is part of a straight line.

We leave the proof as an exercise.
1.26 Proposition A curve α(t) with τ = 0 is a plane curve.

Proof: If τ = 0, then (α' α'' α''') = 0. This means that the three vectors α', α'', and α''' are linearly dependent and hence there exist functions a_1(s), a_2(s) and a_3(s) such that

    a_3 α''' + a_2 α'' + a_1 α' = 0.

This linear homogeneous equation will have a solution of the form

    α = c_1 α_1 + c_2 α_2 + c_3,   c_i = constant vectors.

This curve lies in the plane

    (x − c_3) · n = 0,   where n = c_1 × c_2.
Chapter 2
Differential Forms
2.1 1-Forms
One of the most puzzling ideas in elementary calculus is the idea of the differential. In the usual definition, the differential of a dependent variable y = f(x) is given in terms of the differential of the independent variable by dy = f'(x) dx. The problem is with the quantity dx. What does dx mean? What is the difference between ∆x and dx? How much "smaller" than ∆x does dx have to be? There is no trivial resolution to this question. Most introductory calculus texts evade the issue by treating dx as an arbitrarily small quantity (which lacks mathematical rigor) or by simply referring to dx as an infinitesimal (a term introduced by Newton for an idea that could not otherwise be clearly defined at the time).

In this section we introduce linear algebraic tools that will allow us to interpret the differential in terms of a linear operator.
2.1 Definition Let p ∈ R^n, and let T_p(R^n) be the tangent space at p. A 1-form at p is a linear map φ from T_p(R^n) into R. We recall that such a map must satisfy the following properties:

    a) φ(X_p) ∈ R,  ∀ X_p ∈ T_p(R^n)                          (2.1)
    b) φ(aX_p + bY_p) = aφ(X_p) + bφ(Y_p),  ∀ a, b ∈ R,  X_p, Y_p ∈ T_p(R^n)
A 1-form is a smooth choice of a linear map φ as above for each point in the space.
2.2 Definition Let f : R^n → R be a real-valued C^∞ function. We define the differential df of the function as the 1-form such that

    df(X) = X(f)                                              (2.2)

for every vector field X in R^n.

In other words, at any point p, the differential df of a function is an operator which assigns to a tangent vector X_p the directional derivative of the function in the direction of that vector:

    df(X)(p) = X_p(f) = ∇f(p) · X(p).                         (2.3)

In particular, if we apply the differential of the coordinate functions x^i to the basis vector fields, we get

    dx^i (∂/∂x^j) = ∂x^i/∂x^j = δ^i_j .                       (2.4)
The set of all linear functionals on a vector space is called the dual of the vector space. It is a standard theorem in linear algebra that the dual of a vector space is also a vector space of the same dimension. Thus, the space T*_p(R^n) of all 1-forms at p is a vector space which is the dual of the tangent space T_p(R^n). The space T*_p(R^n) is called the cotangent space of R^n at the point p. Equation (2.4) indicates that the set of differential forms {(dx^1)_p, . . . , (dx^n)_p} constitutes the basis of the cotangent space which is dual to the standard basis {(∂/∂x^1)_p, . . . , (∂/∂x^n)_p} of the tangent space. The union of all the cotangent spaces as p ranges over all points in R^n is called the cotangent bundle T*(R^n).
2.3 Proposition Let f be any smooth function in R^n and let {x^1, . . . , x^n} be coordinate functions in a neighborhood U of a point p. Then, the differential df is given locally by the expression

    df = Σ_{i=1}^{n} (∂f/∂x^i) dx^i                           (2.5)
       = (∂f/∂x^i) dx^i

Proof: The differential df is by definition a 1-form, so, at each point, it must be expressible as a linear combination of the basis elements {(dx^1)_p, . . . , (dx^n)_p}. Therefore, to prove the proposition, it suffices to show that the expression (2.5), applied to an arbitrary tangent vector, coincides with definition 2.2. To see this, consider a tangent vector X_p = a^j (∂/∂x^j)_p and apply the expression above:

    ((∂f/∂x^i) dx^i)_p (X_p) = ((∂f/∂x^i) dx^i)(a^j ∂/∂x^j)(p)          (2.6)
                             = a^j ((∂f/∂x^i) dx^i)(∂/∂x^j)(p)
                             = a^j ((∂f/∂x^i)(∂x^i/∂x^j))(p)
                             = a^j ((∂f/∂x^i) δ^i_j)(p)
                             = ((∂f/∂x^i) a^i)(p)
                             = ∇f(p) · X(p)
                             = df(X)(p)
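The coordinate expression (2.5) and the coordinate-free definition (2.2) are easy to compare on a computer. The following sketch (Python with sympy; the function f and the vector field components are arbitrary test data) represents a 1-form by its covariant components, builds df as in (2.5), applies it to a vector field, and checks the result against the directional derivative X(f).

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')
    coords = (x1, x2, x3)

    def d(f):
        """Differential of a 0-form: covariant components (df/dx^1, df/dx^2, df/dx^3)."""
        return tuple(sp.diff(f, xi) for xi in coords)

    def pair(one_form, X):
        """Apply a 1-form (covariant components) to a vector field (contravariant components)."""
        return sum(w*a for w, a in zip(one_form, X))

    f = x1*x2**2 + sp.exp(x3)          # a smooth test function
    X = (x2, -x1, 1)                   # components a^i of a test vector field

    lhs = pair(d(f), X)                                        # df(X), from expression (2.5)
    rhs = sum(a*sp.diff(f, xi) for a, xi in zip(X, coords))    # X(f), the directional derivative
    print(sp.simplify(lhs - rhs))                              # 0, as required by Definition 2.2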
The definition of differentials as linear functionals on the space of vector fields is much more satisfactory than the notion of infinitesimals, since the new definition is based on the rigorous machinery of linear algebra. If α is an arbitrary 1-form, then locally

    α = a_1(x) dx^1 + . . . + a_n(x) dx^n,                    (2.7)

where the coefficients a_i are C^∞ functions. A 1-form is also called a covariant tensor of rank 1, or just simply a covector. The coefficients (a_1, . . . , a_n) are called the covariant components of the covector. We will adopt the convention to always write the covariant components of a covector with the indices down. Physicists often refer to the covariant components of a 1-form as a covariant vector, and this causes some confusion about the position of the indices. We emphasize that not all 1-forms are obtained by taking the differential of a function. If there exists a function f such that α = df, then the 1-form α is called exact. In vector calculus and elementary physics, exact forms are important in understanding the path independence of line integrals of conservative vector fields.
As we have already noted, the cotangent space T*_p(R^n) of 1-forms at a point p has a natural vector space structure. We can easily extend the operations of addition and scalar multiplication to the space of all 1-forms by defining

    (α + β)(X) = α(X) + β(X)                                  (2.8)
    (fα)(X) = f α(X)

for all vector fields X and all smooth functions f.
2.2 Tensors and Forms of Higher Rank
As we mentioned at the beginning of this chapter, the notion of the differential dx is not made precise in elementary treatments of calculus, so consequently the differential of area dxdy in R^2, as well as the differential of surface area in R^3, also need to be revisited in a more rigorous setting. For this purpose, we introduce a new type of multiplication between forms which not only captures the essence of differentials of area and volume, but also provides a rich algebraic and geometric structure which is a vast generalization of cross products (which only make sense in R^3) to Euclidean spaces of all dimensions.
2.4 Definition A map φ : T(R^n) × T(R^n) −→ R is called a bilinear map on the tangent space if it is linear on each slot. That is,

    φ(f_1 X_1 + f_2 X_2, Y_1) = f_1 φ(X_1, Y_1) + f_2 φ(X_2, Y_1)
    φ(X_1, f_1 Y_1 + f_2 Y_2) = f_1 φ(X_1, Y_1) + f_2 φ(X_1, Y_2),   ∀ X_i, Y_i ∈ T(R^n), f_i ∈ C^∞(R^n).
Tensor Products
2.5 Definition Let α and β be 1-forms. The tensor product of α and β is defined as the bilinear map α ⊗ β such that

    (α ⊗ β)(X, Y) = α(X) β(Y)                                 (2.9)

for all vector fields X and Y.

Thus, for example, if α = a_i dx^i and β = b_j dx^j, then

    (α ⊗ β)(∂/∂x^k, ∂/∂x^l) = α(∂/∂x^k) β(∂/∂x^l)
                            = (a_i dx^i)(∂/∂x^k) (b_j dx^j)(∂/∂x^l)
                            = a_i δ^i_k b_j δ^j_l
                            = a_k b_l .

A quantity of the form T = T_{ij} dx^i ⊗ dx^j is called a covariant tensor of rank 2, and we may think of the set {dx^i ⊗ dx^j} as a basis for all such tensors. We must caution the reader again that there is possible confusion about the location of the indices, since physicists often refer to the components T_{ij} as a covariant tensor.

In a similar fashion, one can also define the tensor product of vectors X and Y as the bilinear map X ⊗ Y such that

    (X ⊗ Y)(f, g) = X(f) Y(g)                                 (2.10)

for any pair of arbitrary functions f and g.
If X = a^i ∂/∂x^i and Y = b^j ∂/∂x^j, then the components of X ⊗ Y in the basis ∂/∂x^i ⊗ ∂/∂x^j are simply given by a^i b^j. Any bilinear map of the form

    T = T^{ij} ∂/∂x^i ⊗ ∂/∂x^j                                (2.11)

is called a contravariant tensor of rank 2 in R^n.

The notion of tensor products can easily be generalized to higher rank, and in fact one can have tensors of mixed ranks. For example, a tensor of contravariant rank 2 and covariant rank 1 in R^n is represented in local coordinates by an expression of the form

    T = T^{ij}_k ∂/∂x^i ⊗ ∂/∂x^j ⊗ dx^k .

This object is also called a tensor of type T^{2,1}. Thus, we may think of a tensor of type T^{2,1} as a map with three input slots. The map expects two functions in the first two slots and a vector in the third one. The action of the map is bilinear on the two functions and linear on the vector. The output is a real number. An assignment of a tensor to each point in R^n is called a tensor field.
Inner Products
Let X = a^i ∂/∂x^i and Y = b^j ∂/∂x^j be two vector fields and let

    g(X, Y) = δ_{ij} a^i b^j .                                (2.12)

The quantity g(X, Y) is an example of a bilinear map which the reader will recognize as the usual dot product.

2.6 Definition A bilinear map g(X, Y) on the tangent space is called a vector inner product if

    1. g(X, Y) = g(Y, X),
    2. g(X, X) ≥ 0, ∀X,
    3. g(X, X) = 0 iff X = 0.

Since we assume g(X, Y) to be bilinear, an inner product is completely specified by its action on ordered pairs of basis vectors. The components g_{ij} of the inner product are thus given by

    g(∂/∂x^i, ∂/∂x^j) = g_{ij} ,                              (2.13)

where g_{ij} is a symmetric n × n matrix, which we assume to be non-singular. By linearity, it is easy to see that if X = a^i ∂/∂x^i and Y = b^j ∂/∂x^j are two arbitrary vectors, then

    g(X, Y) = g_{ij} a^i b^j .

In this sense, an inner product can be viewed as a generalization of the dot product. The standard Euclidean inner product is obtained if we take g_{ij} = δ_{ij}. In this case the quantity g(X, X) = |X|^2 gives the square of the length of the vector. For this reason g_{ij} is also called a metric and g is called a metric tensor.
Another interpretation of the dot product can be seen if instead one considers a vector X = a^i ∂/∂x^i and a 1-form α = b_j dx^j. The action of the 1-form on the vector gives

    α(X) = (b_j dx^j)(a^i ∂/∂x^i)
         = b_j a^i (dx^j)(∂/∂x^i)
         = b_j a^i δ^j_i
         = a^i b_i .
If we now define

    b_i = g_{ij} b^j ,                                        (2.14)

we see that the equation above can be rewritten as

    a^i b_i = g_{ij} a^i b^j ,

and we recover the expression for the inner product.

Equation (2.14) shows that the metric can be used as a mechanism to lower indices, thus transforming the contravariant components of a vector to covariant ones. If we let g^{ij} be the inverse of the matrix g_{ij}, that is,

    g^{ik} g_{kj} = δ^i_j ,                                   (2.15)

we can also raise covariant indices by the equation

    b^i = g^{ij} b_j .                                        (2.16)
We have mentioned that the tangent and cotangent spaces of Euclidean space at a particular point are isomorphic. In view of the above discussion, we see that the metric accepts a dual interpretation: one as a bilinear pairing of two vectors,

    g : T(R^n) × T(R^n) −→ R,

and another as a linear isomorphism

    g : T*(R^n) −→ T(R^n)

that maps vectors to covectors and vice-versa.

In elementary treatments of calculus, authors often ignore the subtleties of differential 1-forms and tensor products and define the differential of arclength as

    ds^2 ≡ g_{ij} dx^i dx^j ,

although what is really meant by such an expression is

    ds^2 ≡ g_{ij} dx^i ⊗ dx^j .                               (2.17)
2.7 Example In cylindrical coordinates, the differential of arclength is

    ds^2 = dr^2 + r^2 dθ^2 + dz^2 .                           (2.18)

In this case the metric tensor has components

             [ 1   0    0 ]
    g_{ij} = [ 0  r^2   0 ]                                   (2.19)
             [ 0   0    1 ]
2.8 Example In spherical coordinates

    x = ρ sin θ cos φ
    y = ρ sin θ sin φ
    z = ρ cos θ,                                              (2.20)

the differential of arclength is given by

    ds^2 = dρ^2 + ρ^2 dθ^2 + ρ^2 sin^2 θ dφ^2 .               (2.21)

In this case the metric tensor has components

             [ 1    0         0        ]
    g_{ij} = [ 0   ρ^2        0        ]                      (2.22)
             [ 0    0    ρ^2 sin^2 θ   ]
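The components g_{ij} in these examples can be generated rather than memorized: substituting the coordinate transformation into ds^2 = dx^2 + dy^2 + dz^2 amounts to computing J^T J, where J is the Jacobian of (x, y, z) with respect to the new coordinates. A sketch for the spherical case (Python with sympy):

    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)

    # Cartesian coordinates in terms of spherical coordinates, as in (2.20)
    X = sp.Matrix([rho*sp.sin(theta)*sp.cos(phi),
                   rho*sp.sin(theta)*sp.sin(phi),
                   rho*sp.cos(theta)])

    J = X.jacobian([rho, theta, phi])   # Jacobian d(x, y, z)/d(rho, theta, phi)
    g = sp.simplify(J.T * J)            # g_ij = sum_k (dx^k/du^i)(dx^k/du^j)

    sp.pprint(g)    # diag(1, rho**2, rho**2*sin(theta)**2), matching (2.22)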
Minkowski Space
An important object in mathematical physics is the so-called Minkowski space, which can be defined as the pair (M^{1,3}, g), where

    M^{1,3} = {(t, x^1, x^2, x^3) | t, x^i ∈ R}               (2.23)

and g is the bilinear map such that

    g(X, X) = −t^2 + (x^1)^2 + (x^2)^2 + (x^3)^2 .            (2.24)

The matrix representing Minkowski's metric g is given by

    g = diag(−1, 1, 1, 1),

in which case the differential of arclength is given by

    ds^2 = g_{µν} dx^µ ⊗ dx^ν
         = −dt ⊗ dt + dx^1 ⊗ dx^1 + dx^2 ⊗ dx^2 + dx^3 ⊗ dx^3
         = −dt^2 + (dx^1)^2 + (dx^2)^2 + (dx^3)^2 .           (2.25)

Note: Technically speaking, Minkowski's metric is not really a metric, since g(X, X) = 0 does not imply that X = 0. Non-zero vectors with zero length are called light-like vectors, and they are associated with particles which travel at the speed of light (which we have set equal to 1 in our system of units).
The Minkowski metric g_{µν} and its matrix inverse g^{µν} are also used to raise and lower indices in the space, in a manner completely analogous to R^n. Thus, for example, if A is a covariant vector with components

    A_µ = (ρ, A_1, A_2, A_3),

then the contravariant components of A are

    A^µ = g^{µν} A_ν = (−ρ, A_1, A_2, A_3).
Wedge Products and n-Forms
2.9 Definition A map φ : T(R^n) × T(R^n) −→ R is called alternating if

    φ(X, Y) = −φ(Y, X).

The alternating property is reminiscent of determinants of square matrices, which change sign if any two column vectors are switched. In fact, the determinant function is a perfect example of an alternating bilinear map on the space M_{2×2} of two by two matrices. Of course, for the definition above to apply, one has to view M_{2×2} as the space of column vectors.

2.10 Definition A 2-form φ is a map φ : T(R^n) × T(R^n) −→ R which is alternating and bilinear.
2.11 Definition Let α and β be 1-forms in R^n and let X and Y be any two vector fields. The wedge product of the two 1-forms is the map α ∧ β : T(R^n) × T(R^n) −→ R given by the equation

    (α ∧ β)(X, Y) = α(X) β(Y) − α(Y) β(X)                     (2.26)
2.12 Theorem If α and β are 1-forms, then α ∧ β is a 2-form.

Proof: We break up the proof into the following two lemmas.

2.13 Lemma The wedge product of two 1-forms is alternating.

Proof: Let α and β be 1-forms in R^n and let X and Y be any two vector fields. Then

    (α ∧ β)(X, Y) = α(X) β(Y) − α(Y) β(X)
                  = −(α(Y) β(X) − α(X) β(Y))
                  = −(α ∧ β)(Y, X)

2.14 Lemma The wedge product of two 1-forms is bilinear.

Proof: Consider 1-forms α, β, vector fields X_1, X_2, Y and functions f_1, f_2. Then, since the 1-forms are linear functionals, we get

    (α ∧ β)(f_1 X_1 + f_2 X_2, Y)
        = α(f_1 X_1 + f_2 X_2) β(Y) − α(Y) β(f_1 X_1 + f_2 X_2)
        = [f_1 α(X_1) + f_2 α(X_2)] β(Y) − α(Y)[f_1 β(X_1) + f_2 β(X_2)]
        = f_1 α(X_1) β(Y) + f_2 α(X_2) β(Y) − f_1 α(Y) β(X_1) − f_2 α(Y) β(X_2)
        = f_1 [α(X_1) β(Y) − α(Y) β(X_1)] + f_2 [α(X_2) β(Y) − α(Y) β(X_2)]
        = f_1 (α ∧ β)(X_1, Y) + f_2 (α ∧ β)(X_2, Y)

The proof of linearity in the second slot is quite similar and is left to the reader.
2.15 Corollary If α and β are 1-forms, then

    α ∧ β = −β ∧ α                                            (2.27)

This last result tells us that wedge products have characteristics similar to cross products of vectors, in the sense that both of these products are anti-commutative. This means that we need to be careful to introduce a minus sign every time we interchange the order of the operation. Thus, for example, we have

    dx^i ∧ dx^j = −dx^j ∧ dx^i   if i ≠ j,

whereas

    dx^i ∧ dx^i = −dx^i ∧ dx^i = 0,

since any quantity which is equal to the negative of itself must vanish. The similarity between wedge products and cross products is even more striking in the next proposition, but we emphasize again that wedge products are far more powerful than cross products, because wedge products can be computed in any dimension.
2.16 Proposition Let α = A_i dx^i and β = B_i dx^i be any two 1-forms in R^n. Then

    α ∧ β = (A_i B_j) dx^i ∧ dx^j                             (2.28)

Proof: Let X and Y be arbitrary vector fields. Then

    (α ∧ β)(X, Y) = (A_i dx^i)(X)(B_j dx^j)(Y) − (A_i dx^i)(Y)(B_j dx^j)(X)
                  = (A_i B_j)[dx^i(X) dx^j(Y) − dx^i(Y) dx^j(X)]
                  = (A_i B_j)(dx^i ∧ dx^j)(X, Y)
Because of the antisymmetry of the wedge product, the last equation above can also be written as

    α ∧ β = Σ_{i=1}^{n} Σ_{j<i} (A_i B_j − A_j B_i)(dx^i ∧ dx^j).

In particular, if n = 3, then the coefficients of the wedge product are the components of the cross product of A = A_1 i + A_2 j + A_3 k and B = B_1 i + B_2 j + B_3 k.
2.17 Example Let α = x^2 dx − y^2 dy and β = dx + dy − 2xy dz. Then

    α ∧ β = (x^2 dx − y^2 dy) ∧ (dx + dy − 2xy dz)
          = x^2 dx ∧ dx + x^2 dx ∧ dy − 2x^3 y dx ∧ dz − y^2 dy ∧ dx − y^2 dy ∧ dy + 2xy^3 dy ∧ dz
          = x^2 dx ∧ dy − 2x^3 y dx ∧ dz − y^2 dy ∧ dx + 2xy^3 dy ∧ dz
          = (x^2 + y^2) dx ∧ dy − 2x^3 y dx ∧ dz + 2xy^3 dy ∧ dz
2.18 Example Let x = r cos θ and y = r sin θ. Then

    dx ∧ dy = (−r sin θ dθ + cos θ dr) ∧ (r cos θ dθ + sin θ dr)
            = −r sin^2 θ dθ ∧ dr + r cos^2 θ dr ∧ dθ
            = (r cos^2 θ + r sin^2 θ)(dr ∧ dθ)
            = r (dr ∧ dθ)                                     (2.29)
2.19 Remark

1. The result of the last example yields the familiar differential of area in polar coordinates.

2. The differential of area in polar coordinates is a special example of the change of coordinates theorem for multiple integrals. It is easy to establish that if x = f_1(u, v) and y = f_2(u, v), then dx ∧ dy = det J du ∧ dv, where det J is the determinant of the Jacobian of the transformation (see the sketch following this remark).

3. Quantities such as dxdy and dydz, which often appear in calculus, are not well defined. In most cases what is meant by these entities are wedge products of 1-forms.

4. We state (without proof) that all 2-forms φ in R^n can be expressed as linear combinations of wedge products of differentials, such as

    φ = F_{ij} dx^i ∧ dx^j                                    (2.30)

In a more elementary (i.e., sloppier) treatment of this subject one could simply define 2-forms to be gadgets which look like the quantity in equation (2.30). This is in fact what we will do in the next definition.
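The Jacobian statement in item 2 of the remark above is straightforward to check symbolically. The sketch below (Python with sympy) recomputes Example 2.18 by expanding dx and dy in dr and dθ, reading off the coefficient of dr ∧ dθ, and comparing it with the Jacobian determinant.

    import sympy as sp

    r, theta = sp.symbols('r theta', positive=True)
    x, y = r*sp.cos(theta), r*sp.sin(theta)

    # dx = x_r dr + x_theta dtheta and dy = y_r dr + y_theta dtheta, so by
    # antisymmetry dx ^ dy = (x_r y_theta - x_theta y_r) dr ^ dtheta
    coeff = sp.simplify(sp.diff(x, r)*sp.diff(y, theta) - sp.diff(x, theta)*sp.diff(y, r))
    print(coeff)                      # r, i.e. dx ^ dy = r dr ^ dtheta, as in (2.29)

    # the same number as the determinant of the Jacobian of (x, y) w.r.t. (r, theta)
    J = sp.Matrix([x, y]).jacobian([r, theta])
    print(sp.simplify(J.det()))       # r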
2.20 Definition A 3-form φ in R^n is an object of the following type:

    φ = A_{ijk} dx^i ∧ dx^j ∧ dx^k                            (2.31)

where we assume that the wedge product of three 1-forms is associative but still alternating, in the sense that if one switches any two differentials, then the entire expression changes by a minus sign. We challenge the reader to come up with a rigorous definition of 3-forms (or n-forms, for that matter) more in the spirit of multilinear maps. There is nothing really wrong with using definition (2.31). It is just that this definition is coordinate dependent, and mathematicians in general (especially differential geometers) prefer coordinate-free definitions, theorems and proofs.
And now, a little combinatorics. Let us count the number of differential forms in Euclidean space. More specifically, we want to count the dimensions of the space of k-forms in R^n in the sense of vector spaces. We will think of 0-forms as being ordinary functions. Since functions are the "scalars", the space of 0-forms as a vector space has dimension 1.

    R^2          Forms                                              Dim
    0-forms      f                                                   1
    1-forms      f dx^1, g dx^2                                      2
    2-forms      f dx^1 ∧ dx^2                                       1

    R^3          Forms                                              Dim
    0-forms      f                                                   1
    1-forms      f_1 dx^1, f_2 dx^2, f_3 dx^3                        3
    2-forms      f_1 dx^2 ∧ dx^3, f_2 dx^3 ∧ dx^1, f_3 dx^1 ∧ dx^2   3
    3-forms      f_1 dx^1 ∧ dx^2 ∧ dx^3                              1

The binomial coefficient pattern should be evident to the reader.
2.3 Exterior Derivatives
In this section we introduce a differential operator which generalizes the classical gradient, curl and
divergence operators.
Denote by Λ^m_p(R^n) the space of m-forms at p ∈ R^n. This vector space has dimension

    dim Λ^m_p(R^n) = n! / (m!(n − m)!)

for m ≤ n, and dimension 0 if m > n. We shall identify Λ^0_p(R^n) with the space of C^∞ functions at p. Also, we will call Λ^m(R^n) the union of all Λ^m_p(R^n) as p ranges through all the points in R^n. In other words, we have

    Λ^m(R^n) = ∪_p Λ^m_p(R^n).

If α ∈ Λ^m(R^n), then α can be written in the form

    α = A_{i_1...i_m}(x) dx^{i_1} ∧ . . . ∧ dx^{i_m} .        (2.32)
2.21 Definition Let α be an m-form (written in coordinates as in equation (2.32)). The exterior derivative of α is the (m+1)-form dα given by

    dα = dA_{i_1...i_m} ∧ dx^{i_1} ∧ . . . ∧ dx^{i_m}
       = (∂A_{i_1...i_m}/∂x^{i_0}) dx^{i_0} ∧ dx^{i_1} ∧ . . . ∧ dx^{i_m} .   (2.33)

In the special case where α is a 0-form, that is, a function, we write

    df = (∂f/∂x^i) dx^i .
2.22 Proposition

    a) d : Λ^m −→ Λ^{m+1}
    b) d^2 = d ◦ d = 0
    c) d(α ∧ β) = dα ∧ β + (−1)^p α ∧ dβ,   ∀ α ∈ Λ^p, β ∈ Λ^q     (2.34)

Proof:

a) Obvious from equation (2.33).

b) First we prove the proposition for α = f ∈ Λ^0. We have

    d(df) = d( (∂f/∂x^i) dx^i )
          = (∂^2 f / ∂x^j ∂x^i) dx^j ∧ dx^i
          = (1/2) [ ∂^2 f / ∂x^j ∂x^i − ∂^2 f / ∂x^i ∂x^j ] dx^j ∧ dx^i
          = 0.

Now, suppose that α is represented locally as in equation (2.32). It follows from (2.33) that

    d(dα) = d(dA_{i_1...i_m}) ∧ dx^{i_1} ∧ . . . ∧ dx^{i_m} = 0.
c) Let α ∈ Λ^p, β ∈ Λ^q. Then we can write

    α = A_{i_1...i_p}(x) dx^{i_1} ∧ . . . ∧ dx^{i_p}
    β = B_{j_1...j_q}(x) dx^{j_1} ∧ . . . ∧ dx^{j_q} .        (2.35)

By definition,

    α ∧ β = A_{i_1...i_p} B_{j_1...j_q} (dx^{i_1} ∧ . . . ∧ dx^{i_p}) ∧ (dx^{j_1} ∧ . . . ∧ dx^{j_q}).

Now we take the exterior derivative of the last equation, taking into account that d(fg) = f dg + g df for any functions f and g. We get

    d(α ∧ β) = [d(A_{i_1...i_p}) B_{j_1...j_q} + A_{i_1...i_p} d(B_{j_1...j_q})] ∧ (dx^{i_1} ∧ . . . ∧ dx^{i_p}) ∧ (dx^{j_1} ∧ . . . ∧ dx^{j_q})
             = [dA_{i_1...i_p} ∧ (dx^{i_1} ∧ . . . ∧ dx^{i_p})] ∧ [B_{j_1...j_q} (dx^{j_1} ∧ . . . ∧ dx^{j_q})]
               + [A_{i_1...i_p} (dx^{i_1} ∧ . . . ∧ dx^{i_p})] ∧ (−1)^p [dB_{j_1...j_q} ∧ (dx^{j_1} ∧ . . . ∧ dx^{j_q})]
             = dα ∧ β + (−1)^p α ∧ dβ.                        (2.36)

The (−1)^p factor comes in because to pass the term dB_{j_1...j_q} through p 1-forms of the type dx^i, one has to perform p transpositions.
2.23 Example Let α = P(x, y) dx + Q(x, y) dy. Then

    dα = ( (∂P/∂x) dx + (∂P/∂y) dy ) ∧ dx + ( (∂Q/∂x) dx + (∂Q/∂y) dy ) ∧ dy
       = (∂P/∂y) dy ∧ dx + (∂Q/∂x) dx ∧ dy
       = ( ∂Q/∂x − ∂P/∂y ) dx ∧ dy.                           (2.37)

This example is related to Green's theorem in R^2.
2.24 Example Let α = M(x, y) dx + N(x, y) dy, and suppose that dα = 0. Then, by the previous example,

    dα = ( ∂N/∂x − ∂M/∂y ) dx ∧ dy.

Thus, dα = 0 iff N_x = M_y, which implies that N = f_y and M = f_x for some C^1 function f(x, y). Hence,

    α = f_x dx + f_y dy = df.

The reader should also be familiar with this example in the context of exact differential equations of first order, and conservative force fields.
2.25 Definition A differential form α is called closed if dα = 0.

2.26 Definition A differential form α is called exact if there exists a form β such that α = dβ.

Since d ◦ d = 0, it is clear that an exact form is also closed. The converse is not at all obvious in general, and we state it here without proof.

2.27 Poincaré's Lemma In a simply connected space (such as R^n), if a differential form is closed, then it is exact.

The hypothesis that the space must be simply connected is somewhat subtle. The condition is reminiscent of Cauchy's integral theorem for functions of a complex variable, which states that if f(z) is a holomorphic function and C is a simple closed curve, then

    ∮_C f(z) dz = 0.

This theorem does not hold if the region bounded by the curve C is not simply connected. The standard example is the integral of the complex 1-form ω = (1/z) dz around the unit circle C bounding a punctured disk. In this case,

    ∮_C (1/z) dz = 2πi.
2.4 The Hodge-∗ Operator
One of the important lessons that students learn in linear algebra is that all vector spaces of finite dimension n are isomorphic to each other. Thus, for instance, the space P_3 of all real polynomials in x of degree 3, and the space M_{2×2} of real 2 by 2 matrices, are basically no different from the Euclidean vector space R^4 in terms of their vector space properties. We have already encountered a number of vector spaces of finite dimension in these notes. A good example of this is the tangent space T_p R^3. The "vector" part a^1 ∂/∂x + a^2 ∂/∂y + a^3 ∂/∂z can be mapped to a regular advanced calculus vector a^1 i + a^2 j + a^3 k, by replacing ∂/∂x by i, ∂/∂y by j and ∂/∂z by k. Of course, we must not confuse a tangent vector, which is a linear operator, with a Euclidean vector, which is just an ordered triplet, but as far as their vector space properties are concerned, there is basically no difference.

We have also observed that the tangent space T_p R^n is isomorphic to the cotangent space T*_p R^n. In this case, the vector space isomorphism maps the standard basis vectors {∂/∂x^i} to their duals {dx^i}. This isomorphism then transforms a contravariant vector to a covariant vector.
Another interesting example is provided by the spaces Λ^1_p(R^3) and Λ^2_p(R^3), both of which have dimension 3. It follows that these two spaces must be isomorphic. In this case the isomorphism is given by the map

    dx −→ dy ∧ dz
    dy −→ −dx ∧ dz
    dz −→ dx ∧ dy                                             (2.38)

More generally, we have seen that the dimension of the space of m-forms in R^n is given by the binomial coefficient C(n, m). Since

    C(n, m) = C(n, n − m) = n! / (m!(n − m)!) ,

it must be the case that

    Λ^m_p(R^n) ≅ Λ^{n−m}_p(R^n).                              (2.39)
To describe the isomorphism between these two spaces, we will first need to introduce the totally antisymmetric Levi-Civita permutation symbol, which is defined as follows:

                    { +1  if (i_1, . . . , i_m) is an even permutation of (1, . . . , m)
    ε_{i_1...i_m} = { −1  if (i_1, . . . , i_m) is an odd permutation of (1, . . . , m)     (2.40)
                    {  0  otherwise

In dimension 3, there are only 3! = 6 nonvanishing components of ε_{ijk}, namely

    ε_{123} = ε_{231} = ε_{312} = +1
    ε_{132} = ε_{213} = ε_{321} = −1.                         (2.41)
The permutation symbols are useful in the theory of determinants. In fact, if A = (a^i_j) is a 3 × 3 matrix, then, using equation (2.41), the reader can easily verify that

    det A = |A| = ε_{i_1 i_2 i_3} a^{i_1}_1 a^{i_2}_2 a^{i_3}_3 .   (2.42)

This formula for determinants extends in an obvious manner to n × n matrices. A more thorough discussion of the Levi-Civita symbols will appear later in these notes.
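Formula (2.42) is easy to test numerically. The sketch below (Python with numpy; the 3 × 3 matrix is random test data) builds the Levi-Civita symbol explicitly, contracts it against the columns of A, and compares the result with numpy's determinant.

    import numpy as np

    # Levi-Civita symbol in three dimensions (indices 0, 1, 2 standing for 1, 2, 3)
    eps = np.zeros((3, 3, 3))
    for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[p] = 1.0          # even permutations
    for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
        eps[p] = -1.0         # odd permutations

    A = np.random.rand(3, 3)

    # det A = eps_{i1 i2 i3} a^{i1}_1 a^{i2}_2 a^{i3}_3, equation (2.42)
    det_from_eps = np.einsum('ijk,i,j,k->', eps, A[:, 0], A[:, 1], A[:, 2])
    print(det_from_eps, np.linalg.det(A))   # the two numbers agree up to round-off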
In R^n, the Levi-Civita symbol with some or all of its indices up is numerically equal to the permutation symbol with all indices down,

    ε^{i_1...i_m} = ε_{i_1...i_m} ,

since the Euclidean metric used to raise and lower indices is δ_{ij}. On the other hand, in Minkowski space, raising an index with a value of 0 costs a minus sign, because g_{00} = g^{00} = −1. Thus, in M^{1,3},

    ε^{i_0 i_1 i_2 i_3} = −ε_{i_0 i_1 i_2 i_3} ,

since any permutation of {0, 1, 2, 3} must contain a 0.
2.28 Definition The Hodge-∗ operator is a linear map ∗ : Λ^m_p(R^n) −→ Λ^{n−m}_p(R^n) defined in standard local coordinates by the equation

    ∗(dx^{i_1} ∧ . . . ∧ dx^{i_m}) = (1/(n − m)!) ε^{i_1...i_m}_{i_{m+1}...i_n} dx^{i_{m+1}} ∧ . . . ∧ dx^{i_n} .   (2.43)

Since the forms dx^{i_1} ∧ . . . ∧ dx^{i_m} constitute a basis of the vector space Λ^m_p(R^n) and the ∗-operator is assumed to be a linear map, equation (2.43) completely specifies the map for all m-forms.
2.29 Example Consider the case of dimension n = 3. Then

    ∗dx^1 = (1/2!) ε^1_{jk} dx^j ∧ dx^k
          = (1/2!) [ ε^1_{23} dx^2 ∧ dx^3 + ε^1_{32} dx^3 ∧ dx^2 ]
          = (1/2!) [ dx^2 ∧ dx^3 − dx^3 ∧ dx^2 ]
          = (1/2!) [ dx^2 ∧ dx^3 + dx^2 ∧ dx^3 ]
          = dx^2 ∧ dx^3 .

We leave it to the reader to complete the computation of the action of the ∗-operator on the other basis forms. The results are

    ∗dx^1 = +dx^2 ∧ dx^3
    ∗dx^2 = −dx^1 ∧ dx^3
    ∗dx^3 = +dx^1 ∧ dx^2 ,                                    (2.44)

    ∗(dx^2 ∧ dx^3) = dx^1
    ∗(−dx^1 ∧ dx^3) = dx^2
    ∗(dx^1 ∧ dx^2) = dx^3 ,                                   (2.45)

and

    ∗(dx^1 ∧ dx^2 ∧ dx^3) = 1.                                (2.46)

In particular, if f : R^3 −→ R is any 0-form (a function), then

    ∗f = f (dx^1 ∧ dx^2 ∧ dx^3)
       = f dV,                                                (2.47)

where dV is the differential of volume, also called the volume form.
2.30 Example Let α = A_1 dx^1 + A_2 dx^2 + A_3 dx^3 and β = B_1 dx^1 + B_2 dx^2 + B_3 dx^3. Then

    ∗(α ∧ β) = (A_2 B_3 − A_3 B_2) ∗(dx^2 ∧ dx^3) + (A_1 B_3 − A_3 B_1) ∗(dx^1 ∧ dx^3) + (A_1 B_2 − A_2 B_1) ∗(dx^1 ∧ dx^2)
             = (A_2 B_3 − A_3 B_2) dx^1 − (A_1 B_3 − A_3 B_1) dx^2 + (A_1 B_2 − A_2 B_1) dx^3
             = (A × B)_i dx^i                                 (2.48)
The previous examples provide some insight into the action of the ∧ and ∗ operators. If one thinks of the quantities dx^1, dx^2 and dx^3 as playing the role of i, j and k, then it should be apparent that equations (2.44) are the differential geometry versions of the well known relations

    i = j × k
    j = −i × k
    k = i × j .

This is even more evident upon inspection of equation (2.48), which relates the ∧ operator to the Cartesian cross product.
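Componentwise, equation (2.48) says that ∗(α ∧ β) has components ε_{ijk} A_i B_j (with k the free index), which is exactly the Euclidean cross product. A quick numerical sketch (Python with numpy; the two covectors are arbitrary test data):

    import numpy as np

    # Levi-Civita symbol in R^3 (indices 0, 1, 2 standing for 1, 2, 3)
    eps = np.zeros((3, 3, 3))
    for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[p] = 1.0
    for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
        eps[p] = -1.0

    A = np.array([3.0, -1.0, 2.0])    # components A_i of the 1-form alpha
    B = np.array([0.5, 4.0, 1.0])     # components B_i of the 1-form beta

    # components of *(alpha ^ beta): contract A_i B_j against eps_{ijk}
    star_wedge = np.einsum('ijk,i,j->k', eps, A, B)
    print(star_wedge)
    print(np.cross(A, B))             # identical: the cross product A x B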
2.31 Example In Minkowski space the collection of all 2-forms has dimension C(4, 2) = 6. The Hodge-∗ operator in this case splits Λ^2(M^{1,3}) into two 3-dimensional subspaces Λ^2_±, such that ∗ : Λ^2_± −→ Λ^2_∓.

More specifically, Λ^2_+ is spanned by the forms {dx^0 ∧ dx^1, dx^0 ∧ dx^2, dx^0 ∧ dx^3}, and Λ^2_− is spanned by the forms {dx^2 ∧ dx^3, −dx^1 ∧ dx^3, dx^1 ∧ dx^2}. The action of ∗ on Λ^2_+ is

    ∗(dx^0 ∧ dx^1) = (1/2) ε^{01}_{kl} dx^k ∧ dx^l = −dx^2 ∧ dx^3
    ∗(dx^0 ∧ dx^2) = (1/2) ε^{02}_{kl} dx^k ∧ dx^l = +dx^1 ∧ dx^3
    ∗(dx^0 ∧ dx^3) = (1/2) ε^{03}_{kl} dx^k ∧ dx^l = −dx^1 ∧ dx^2 ,

and on Λ^2_−,

    ∗(+dx^2 ∧ dx^3) = (1/2) ε^{23}_{kl} dx^k ∧ dx^l = dx^0 ∧ dx^1
    ∗(−dx^1 ∧ dx^3) = −(1/2) ε^{13}_{kl} dx^k ∧ dx^l = dx^0 ∧ dx^2
    ∗(+dx^1 ∧ dx^2) = (1/2) ε^{12}_{kl} dx^k ∧ dx^l = dx^0 ∧ dx^3 .

In verifying the equations above, we recall that the Levi-Civita symbols which contain an index with value 0 in the up position have an extra minus sign as a result of raising the index with g^{00}. If F ∈ Λ^2(M), we will formally write F = F_+ + F_−, where F_± ∈ Λ^2_±. We would like to note that the action of the dual operator on Λ^2(M) is such that ∗ : Λ^2(M) −→ Λ^2(M), and ∗^2 = −1. Thus, the ∗ operator is a linear involution of the space and, in fact, Λ^2_± are the eigenspaces corresponding to the two eigenvalues of this involution.

It is also worthwhile to calculate the duals of 1-forms in M^{1,3}. The results are

    ∗dt = −dx^1 ∧ dx^2 ∧ dx^3
    ∗dx^1 = +dx^2 ∧ dt ∧ dx^3
    ∗dx^2 = +dt ∧ dx^1 ∧ dx^3
    ∗dx^3 = +dx^1 ∧ dt ∧ dx^2 .                               (2.49)
Gradient, Curl and Divergence
Classical differential operators which enter in Green's and Stokes' Theorems are better understood as
special manifestations of the exterior differential and the Hodge-$\ast$ operators in $\mathbb{R}^3$. Here is precisely
how this works:
1. Let $f : \mathbb{R}^3 \longrightarrow \mathbb{R}$ be a $C^{\infty}$ function. Then
$$df = \frac{\partial f}{\partial x^j}\, dx^j = \nabla f \cdot d\mathbf{x}. \tag{2.50}$$
2. Let $\alpha = A_i\, dx^i$ be a 1-form in $\mathbb{R}^3$. Then
$$(\ast d)\alpha = \frac{1}{2}\left(\frac{\partial A_j}{\partial x^i} - \frac{\partial A_i}{\partial x^j}\right) \ast (dx^i \wedge dx^j) = (\nabla \times \vec{A}) \cdot d\mathbf{x}. \tag{2.51}$$
3. Let $\alpha = B_1\, dx^2 \wedge dx^3 + B_2\, dx^3 \wedge dx^1 + B_3\, dx^1 \wedge dx^2$ be a 2-form in $\mathbb{R}^3$. Then
$$d\alpha = \left(\frac{\partial B_1}{\partial x^1} + \frac{\partial B_2}{\partial x^2} + \frac{\partial B_3}{\partial x^3}\right) dx^1 \wedge dx^2 \wedge dx^3 = (\nabla \cdot \vec{B})\, dV. \tag{2.52}$$
4. Let $\alpha = B_i\, dx^i$ be a 1-form in $\mathbb{R}^3$. Then
$$\ast d \ast \alpha = \nabla \cdot \vec{B}. \tag{2.53}$$
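As a quick sanity check of items 1-3, one can expand the exterior derivatives symbolically. The short sympy sketch below (illustrative only, not from the text) computes the coefficient arrays appearing in (2.50)-(2.52); they are literally the gradient, the curl components, and the divergence of the corresponding vector fields.

    # Sketch: d on 0-, 1- and 2-forms in R^3 reproduces grad, curl, div.
    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')
    X = (x1, x2, x3)
    f = sp.Function('f')(*X)
    A = [sp.Function(f'A{i+1}')(*X) for i in range(3)]

    # (2.50): the components of df are the gradient
    grad = [sp.diff(f, xi) for xi in X]

    # (2.51): the three independent coefficients of d(A_i dx^i) are the curl components
    curl = [sp.diff(A[2], x2) - sp.diff(A[1], x3),   # coefficient of dx^2 ^ dx^3
            sp.diff(A[0], x3) - sp.diff(A[2], x1),   # coefficient of dx^3 ^ dx^1
            sp.diff(A[1], x1) - sp.diff(A[0], x2)]   # coefficient of dx^1 ^ dx^2

    # (2.52): d of A1 dx^2^dx^3 + A2 dx^3^dx^1 + A3 dx^1^dx^2 has the divergence as coefficient
    div = sum(sp.diff(A[i], X[i]) for i in range(3))

    print(grad, curl, div, sep='\n')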
It is also possible to define and manipulate formulas of classical vector calculus using the per-
mutation symbols. For example, let $\vec{A} = (A_1, A_2, A_3)$ and $\vec{B} = (B_1, B_2, B_3)$ be any two Euclidean
vectors. Then it is easy to see that
$$(\vec{A} \times \vec{B})_k = \epsilon^{ij}{}_{k}\, A_i B_j,$$
and
$$(\nabla \times \vec{A})_k = \epsilon^{ij}{}_{k}\, \frac{\partial A_j}{\partial x^i}.$$
To derive many classical vector identities in this formalism, it is necessary to first establish the
following identity (see the exercises):
$$\epsilon^{ijm}\,\epsilon_{klm} = \delta^i{}_k\,\delta^j{}_l - \delta^i{}_l\,\delta^j{}_k. \tag{2.54}$$
2.32 Example
$$\begin{aligned}
[\vec{A} \times (\vec{B} \times \vec{C})]_l &= \epsilon_{mnl}\, A_m (\vec{B} \times \vec{C})_n \\
&= \epsilon_{mnl}\, A_m (\epsilon_{jkn}\, B_j C_k) \\
&= \epsilon_{mnl}\,\epsilon_{jkn}\, A_m B_j C_k \\
&= \epsilon_{lmn}\,\epsilon_{jkn}\, A_m B_j C_k \\
&= (\delta_{lj}\delta_{mk} - \delta_{lk}\delta_{mj})\, A_m B_j C_k \\
&= B_l (A_m C_m) - C_l (A_m B_m).
\end{aligned}$$
Or, rewriting in vector form,
$$\vec{A} \times (\vec{B} \times \vec{C}) = \vec{B}(\vec{A} \cdot \vec{C}) - \vec{C}(\vec{A} \cdot \vec{B}). \tag{2.55}$$
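A brute-force check of identity (2.54) and of the resulting "BAC-CAB" rule (2.55) is easy to automate. The following numpy sketch (illustrative only) tabulates the permutation symbol, compares both sides of (2.54) for every choice of free indices, and tests (2.55) on random vectors.

    # Sketch: numerical check of eps_ijm eps_klm = d_ik d_jl - d_il d_jk and of (2.55).
    import numpy as np

    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0
    delta = np.eye(3)

    # identity (2.54), checked for all values of the free indices i, j, k, l
    lhs = np.einsum('ijm,klm->ijkl', eps, eps)
    rhs = np.einsum('ik,jl->ijkl', delta, delta) - np.einsum('il,jk->ijkl', delta, delta)
    assert np.allclose(lhs, rhs)

    # identity (2.55) on random vectors
    rng = np.random.default_rng(0)
    A, B, C = rng.standard_normal((3, 3))
    assert np.allclose(np.cross(A, np.cross(B, C)), B * (A @ C) - C * (A @ B))
    print("identities (2.54) and (2.55) verified")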
Maxwell Equations
The classical equations of Maxwell describing electromagnetic phenomena are
$$\nabla \cdot \vec{E} = 4\pi\rho, \qquad \nabla \times \vec{B} = 4\pi\vec{J} + \frac{\partial \vec{E}}{\partial t}, \qquad
\nabla \cdot \vec{B} = 0, \qquad \nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}. \tag{2.56}$$
We would like to formulate these equations in the language of differential forms. Let $x^{\mu} =
(t, x^1, x^2, x^3)$ be local coordinates in Minkowski space $\mathcal{M}^{1,3}$. Define the Maxwell 2-form $F$ by the
equation
$$F = \frac{1}{2} F_{\mu\nu}\, dx^{\mu} \wedge dx^{\nu}, \qquad (\mu, \nu = 0, 1, 2, 3), \tag{2.57}$$
where
$$F_{\mu\nu} = \begin{pmatrix}
0 & -E_x & -E_y & -E_z \\
E_x & 0 & B_z & -B_y \\
E_y & -B_z & 0 & B_x \\
E_z & B_y & -B_x & 0
\end{pmatrix}. \tag{2.58}$$
Written in complete detail, Maxwell's 2-form is given by
$$\begin{aligned}
F = {} & -E_x\, dt \wedge dx^1 - E_y\, dt \wedge dx^2 - E_z\, dt \wedge dx^3 \\
& + B_z\, dx^1 \wedge dx^2 - B_y\, dx^1 \wedge dx^3 + B_x\, dx^2 \wedge dx^3.
\end{aligned} \tag{2.59}$$
We also define the source current 1-form
$$J = J_{\mu}\, dx^{\mu} = \rho\, dt + J_1\, dx^1 + J_2\, dx^2 + J_3\, dx^3. \tag{2.60}$$
2.33 Proposition Maxwell's Equations (2.56) are equivalent to the equations
$$dF = 0, \qquad d \ast F = 4\pi \ast J. \tag{2.61}$$
Proof: The proof is by direct computation using the definitions of the exterior derivative and
the Hodge-$\ast$ operator.
$$\begin{aligned}
dF = {} & -\frac{\partial E_x}{\partial x^2}\, dx^2 \wedge dt \wedge dx^1 - \frac{\partial E_x}{\partial x^3}\, dx^3 \wedge dt \wedge dx^1 \\
& - \frac{\partial E_y}{\partial x^1}\, dx^1 \wedge dt \wedge dx^2 - \frac{\partial E_y}{\partial x^3}\, dx^3 \wedge dt \wedge dx^2 \\
& - \frac{\partial E_z}{\partial x^1}\, dx^1 \wedge dt \wedge dx^3 - \frac{\partial E_z}{\partial x^2}\, dx^2 \wedge dt \wedge dx^3 \\
& + \frac{\partial B_z}{\partial t}\, dt \wedge dx^1 \wedge dx^2 + \frac{\partial B_z}{\partial x^3}\, dx^3 \wedge dx^1 \wedge dx^2 \\
& - \frac{\partial B_y}{\partial t}\, dt \wedge dx^1 \wedge dx^3 - \frac{\partial B_y}{\partial x^2}\, dx^2 \wedge dx^1 \wedge dx^3 \\
& + \frac{\partial B_x}{\partial t}\, dt \wedge dx^2 \wedge dx^3 + \frac{\partial B_x}{\partial x^1}\, dx^1 \wedge dx^2 \wedge dx^3.
\end{aligned}$$
Collecting terms and using the antisymmetry of the wedge operator, we get
$$\begin{aligned}
dF = {} & \left(\frac{\partial B_x}{\partial x^1} + \frac{\partial B_y}{\partial x^2} + \frac{\partial B_z}{\partial x^3}\right) dx^1 \wedge dx^2 \wedge dx^3 \\
& + \left(\frac{\partial E_y}{\partial x^3} - \frac{\partial E_z}{\partial x^2} - \frac{\partial B_x}{\partial t}\right) dx^2 \wedge dt \wedge dx^3 \\
& + \left(\frac{\partial E_z}{\partial x^1} - \frac{\partial E_x}{\partial x^3} - \frac{\partial B_y}{\partial t}\right) dt \wedge dx^1 \wedge dx^3 \\
& + \left(\frac{\partial E_x}{\partial x^2} - \frac{\partial E_y}{\partial x^1} - \frac{\partial B_z}{\partial t}\right) dx^1 \wedge dt \wedge dx^2.
\end{aligned}$$
Therefore, $dF = 0$ iff
$$\frac{\partial B_x}{\partial x^1} + \frac{\partial B_y}{\partial x^2} + \frac{\partial B_z}{\partial x^3} = 0,$$
which is the same as
$$\nabla \cdot \vec{B} = 0,$$
and
$$\frac{\partial E_y}{\partial x^3} - \frac{\partial E_z}{\partial x^2} - \frac{\partial B_x}{\partial t} = 0, \qquad
\frac{\partial E_z}{\partial x^1} - \frac{\partial E_x}{\partial x^3} - \frac{\partial B_y}{\partial t} = 0, \qquad
\frac{\partial E_x}{\partial x^2} - \frac{\partial E_y}{\partial x^1} - \frac{\partial B_z}{\partial t} = 0,$$
which means that
$$-\nabla \times \vec{E} - \frac{\partial \vec{B}}{\partial t} = 0. \tag{2.62}$$
To verify the second set of Maxwell equations, we first compute the dual of the current density
1-form (2.60) using the results of example 2.31. We get
$$\ast J = -\rho\, dx^1 \wedge dx^2 \wedge dx^3 + J_1\, dx^2 \wedge dt \wedge dx^3 + J_2\, dt \wedge dx^1 \wedge dx^3 + J_3\, dx^1 \wedge dt \wedge dx^2. \tag{2.63}$$
We could now proceed to compute $d \ast F$, but perhaps it is more elegant to notice that $F \in \bigwedge^2(\mathcal{M})$,
and so, according to example 2.31, $F$ splits into $F = F^{+} + F^{-}$. In fact, we see from (2.58) that the
components of $F^{+}$ are those of $-\vec{E}$ and the components of $F^{-}$ constitute the magnetic field vector
$\vec{B}$. Using the results of example 2.31, we can immediately write the components of $\ast F$:
$$\begin{aligned}
\ast F = {} & B_x\, dt \wedge dx^1 + B_y\, dt \wedge dx^2 + B_z\, dt \wedge dx^3 \\
& + E_z\, dx^1 \wedge dx^2 - E_y\, dx^1 \wedge dx^3 + E_x\, dx^2 \wedge dx^3,
\end{aligned} \tag{2.64}$$
or equivalently,
$$(\ast F)_{\mu\nu} = \begin{pmatrix}
0 & B_x & B_y & B_z \\
-B_x & 0 & E_z & -E_y \\
-B_y & -E_z & 0 & E_x \\
-B_z & E_y & -E_x & 0
\end{pmatrix}. \tag{2.65}$$
Since the effect of the dual operator amounts to exchanging
$$\vec{E} \longrightarrow -\vec{B}, \qquad \vec{B} \longrightarrow +\vec{E},$$
we can infer from equations (2.62) and (2.63) that
$$\nabla \cdot \vec{E} = 4\pi\rho$$
and
$$\nabla \times \vec{B} - \frac{\partial \vec{E}}{\partial t} = 4\pi\vec{J}.$$
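The content of the first equation in (2.61) can also be verified mechanically. In the sketch below (an illustrative sympy computation, not part of the text), $F_{\mu\nu}$ is built from symbolic fields as in (2.58), and the totally antisymmetrized derivative $\partial_{[\lambda} F_{\mu\nu]}$, which is the component form of $dF$, is shown to reduce to $\nabla \cdot \vec{B}$ and $\partial\vec{B}/\partial t + \nabla \times \vec{E}$.

    # Sketch: dF = 0 in components, using F_{mu nu} from eq. (2.58).
    import sympy as sp

    t, x, y, z = sp.symbols('t x y z')
    coords = (t, x, y, z)
    Ex, Ey, Ez, Bx, By, Bz = [sp.Function(n)(t, x, y, z)
                              for n in ('Ex', 'Ey', 'Ez', 'Bx', 'By', 'Bz')]

    F = sp.Matrix([[0, -Ex, -Ey, -Ez],
                   [Ex,  0,  Bz, -By],
                   [Ey, -Bz,  0,  Bx],
                   [Ez,  By, -Bx,  0]])

    def dF(l, m, n):
        # component (dF)_{lmn} = d_l F_{mn} + d_m F_{nl} + d_n F_{lm}
        return (sp.diff(F[m, n], coords[l]) + sp.diff(F[n, l], coords[m])
                + sp.diff(F[l, m], coords[n]))

    print(sp.simplify(dF(1, 2, 3)))   # div B
    print(sp.simplify(dF(0, 2, 3)))   # dBx/dt + (curl E)_x
    print(sp.simplify(dF(0, 3, 1)))   # dBy/dt + (curl E)_y
    print(sp.simplify(dF(0, 1, 2)))   # dBz/dt + (curl E)_z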
Chapter 3
Connections
3.1 Frames
As we have already noted in chapter 1, the theory of curves in $\mathbb{R}^3$ can be elegantly formulated by
introducing orthonormal triplets of vectors which we called Frenet frames. The Frenet vectors are
adapted to the curves in such a manner that the rate of change of the frame gives information about
the curvature of the curve. In this chapter we will study the properties of arbitrary frames and their
corresponding rates of change in the direction of the various vectors in the frame. These concepts will
then be applied later to special frames adapted to surfaces.
3.1 Definition A coordinate frame in $\mathbb{R}^n$ is an $n$-tuple of vector fields $\{e_1, \ldots, e_n\}$ which are
linearly independent at each point $p$ in the space.
In local coordinates $x^1, \ldots, x^n$, we can always express the frame vectors as linear combinations of
the standard basis vectors
$$e_i = \partial_j\, A^j{}_i, \tag{3.1}$$
where $\partial_j = \frac{\partial}{\partial x^j}$. We assume the matrix $A = (A^j{}_i)$ to be nonsingular at each point. In linear
algebra, this concept is referred to as a change of basis, the difference being that in our case, the
transformation matrix $A$ depends on the position. A frame field is called orthonormal if at each
point,
$$\langle e_i, e_j \rangle = \delta_{ij}. \tag{3.2}$$
Throughout this chapter, we will assume that all frame fields are orthonormal. Whereas this
restriction is not necessary, it is very convenient because it simplifies considerably the formulas to
compute the components of an arbitrary vector in the frame.
3.2 Proposition If $\{e_1, \ldots, e_n\}$ is an orthonormal frame, then the transformation matrix is
orthogonal (i.e., $AA^T = I$).
Proof: The proof is by direct computation. Let $e_i = \partial_j A^j{}_i$. Then
$$\begin{aligned}
\delta_{ij} &= \langle e_i, e_j \rangle \\
&= \langle \partial_k A^k{}_i,\ \partial_l A^l{}_j \rangle \\
&= A^k{}_i A^l{}_j\, \langle \partial_k, \partial_l \rangle \\
&= A^k{}_i A^l{}_j\, \delta_{kl} \\
&= A^k{}_i A_{kj} \\
&= A^k{}_i (A^T)_{jk}.
\end{aligned}$$
Hence
$$(A^T)_{jk} A^k{}_i = \delta_{ij}, \qquad (A^T)^j{}_k A^k{}_i = \delta^j{}_i, \qquad A^T A = I.$$
Given the frame vectors $e_i$, we can also introduce the corresponding dual coframe forms $\theta^i$ by
requiring
$$\theta^i(e_j) = \delta^i{}_j. \tag{3.3}$$
Since the dual coframe is a set of 1-forms, they can also be expressed in terms of local coordinates as
linear combinations
$$\theta^i = B^i{}_k\, dx^k.$$
It follows from equation (3.3) that
$$\begin{aligned}
\theta^i(e_j) &= B^i{}_k\, dx^k(\partial_l A^l{}_j) \\
&= B^i{}_k A^l{}_j\, dx^k(\partial_l) \\
&= B^i{}_k A^l{}_j\, \delta^k{}_l, \\
\delta^i{}_j &= B^i{}_k A^k{}_j.
\end{aligned}$$
Therefore we conclude that $BA = I$, so $B = A^{-1} = A^T$. In other words, when the frames are
orthonormal, we have
$$e_i = \partial_k\, A^k{}_i, \qquad \theta^i = A^i{}_k\, dx^k. \tag{3.4}$$
3.3 Example Consider the transformation from Cartesian to cylindrical coordinates
$$x = r\cos\theta, \qquad y = r\sin\theta, \qquad z = z.$$
Using the chain rule for partial derivatives, we have
$$\begin{aligned}
\frac{\partial}{\partial r} &= \cos\theta\,\frac{\partial}{\partial x} + \sin\theta\,\frac{\partial}{\partial y}, \\
\frac{\partial}{\partial \theta} &= -r\sin\theta\,\frac{\partial}{\partial x} + r\cos\theta\,\frac{\partial}{\partial y}, \\
\frac{\partial}{\partial z} &= \frac{\partial}{\partial z}.
\end{aligned}$$
From these equations we easily verify that the quantities
$$e_1 = \frac{\partial}{\partial r}, \qquad e_2 = \frac{1}{r}\frac{\partial}{\partial \theta}, \qquad e_3 = \frac{\partial}{\partial z}$$
are a triplet of mutually orthogonal unit vectors and thus constitute an orthonormal frame.
3.4 Example For spherical coordinates (2.20),
$$x = \rho\sin\theta\cos\phi, \qquad y = \rho\sin\theta\sin\phi, \qquad z = \rho\cos\theta,$$
the chain rule leads to
$$\begin{aligned}
\frac{\partial}{\partial \rho} &= \sin\theta\cos\phi\,\frac{\partial}{\partial x} + \sin\theta\sin\phi\,\frac{\partial}{\partial y} + \cos\theta\,\frac{\partial}{\partial z}, \\
\frac{\partial}{\partial \theta} &= \rho\cos\theta\cos\phi\,\frac{\partial}{\partial x} + \rho\cos\theta\sin\phi\,\frac{\partial}{\partial y} - \rho\sin\theta\,\frac{\partial}{\partial z}, \\
\frac{\partial}{\partial \phi} &= -\rho\sin\theta\sin\phi\,\frac{\partial}{\partial x} + \rho\sin\theta\cos\phi\,\frac{\partial}{\partial y}.
\end{aligned}$$
In this case, the vectors
$$e_1 = \frac{\partial}{\partial \rho}, \qquad e_2 = \frac{1}{\rho}\frac{\partial}{\partial \theta}, \qquad e_3 = \frac{1}{\rho\sin\theta}\frac{\partial}{\partial \phi} \tag{3.5}$$
also constitute an orthonormal frame.
The fact that the chain rule in the two situations above leads to orthonormal frames is not
coincidental. The results are related to the orthogonality of the level surfaces $x^i = \text{constant}$. Since
the level surfaces are orthogonal whenever they intersect, one expects the gradients of the surfaces
to also be orthogonal. Transformations of this type are called triply orthogonal systems.
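As a numerical spot check of example 3.4, one can evaluate the three chain-rule vectors at a sample point and confirm that, after dividing by the weights in (3.5), they form a mutually orthogonal set of unit vectors. The numpy sketch below is illustrative only.

    # Sketch: verify that the spherical frame of (3.5) is orthonormal at a sample point.
    import numpy as np

    rho, theta, phi = 2.0, 0.7, 1.3
    d_rho   = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    d_theta = rho * np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
    d_phi   = rho * np.sin(theta) * np.array([-np.sin(phi), np.cos(phi), 0.0])

    frame = np.array([d_rho, d_theta / rho, d_phi / (rho*np.sin(theta))])  # e1, e2, e3
    print(np.round(frame @ frame.T, 12))   # prints the 3x3 identity matrix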
3.2 Curvilinear Coordinates
Orthogonal transformations such as spherical and cylindrical coordinates appear ubiquitously in
mathematical physics because the geometry of a large number of problems in this area exhibits sym-
metry with respect to an axis or to the origin. In such situations, transformation to the appropriate
coordinate system often results in considerable simplification of the field equations involved in the
problem. It has been shown that the Laplace operator, which enters into all three of the main classical
field equations (the potential, the heat, and the wave equations), is separable in 12 coordinate systems. A sim-
ple and efficient method to calculate the Laplacian in orthogonal coordinates can be implemented
by the use of differential forms.
3.5 Example In spherical coordinates the differential of arc length is given by (see equation 2.21)
$$ds^2 = d\rho^2 + \rho^2\, d\theta^2 + \rho^2\sin^2\theta\, d\phi^2.$$
Let
$$\theta^1 = d\rho, \qquad \theta^2 = \rho\, d\theta, \qquad \theta^3 = \rho\sin\theta\, d\phi. \tag{3.6}$$
Note that these three 1-forms constitute the dual coframe to the orthonormal frame just
derived in equation (3.5). Consider a scalar field $f = f(\rho, \theta, \phi)$. We now calculate the Laplacian
of $f$ in spherical coordinates using the methods of section 2.4. To do this, we first compute the
differential $df$ and express the result in terms of the coframe:
$$\begin{aligned}
df &= \frac{\partial f}{\partial \rho}\, d\rho + \frac{\partial f}{\partial \theta}\, d\theta + \frac{\partial f}{\partial \phi}\, d\phi \\
&= \frac{\partial f}{\partial \rho}\, \theta^1 + \frac{1}{\rho}\frac{\partial f}{\partial \theta}\, \theta^2 + \frac{1}{\rho\sin\theta}\frac{\partial f}{\partial \phi}\, \theta^3.
\end{aligned}$$
The components of $df$ in the coframe represent the gradient in spherical coordinates. Continuing with
the scheme of section 2.4, we first apply the Hodge-$\ast$ operator. Then we rewrite the resulting 2-form
in terms of wedges of coordinate differentials so that we can apply the definition of the exterior
derivative:
$$\begin{aligned}
\ast df &= \frac{\partial f}{\partial \rho}\, \theta^2 \wedge \theta^3 - \frac{1}{\rho}\frac{\partial f}{\partial \theta}\, \theta^1 \wedge \theta^3 + \frac{1}{\rho\sin\theta}\frac{\partial f}{\partial \phi}\, \theta^1 \wedge \theta^2 \\
&= \rho^2\sin\theta\, \frac{\partial f}{\partial \rho}\, d\theta \wedge d\phi - \rho\sin\theta\,\frac{1}{\rho}\frac{\partial f}{\partial \theta}\, d\rho \wedge d\phi + \rho\,\frac{1}{\rho\sin\theta}\frac{\partial f}{\partial \phi}\, d\rho \wedge d\theta \\
&= \rho^2\sin\theta\, \frac{\partial f}{\partial \rho}\, d\theta \wedge d\phi - \sin\theta\,\frac{\partial f}{\partial \theta}\, d\rho \wedge d\phi + \frac{1}{\sin\theta}\frac{\partial f}{\partial \phi}\, d\rho \wedge d\theta, \\
d \ast df &= \frac{\partial}{\partial \rho}\!\left(\rho^2\sin\theta\,\frac{\partial f}{\partial \rho}\right) d\rho \wedge d\theta \wedge d\phi
- \frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right) d\theta \wedge d\rho \wedge d\phi
+ \frac{\partial}{\partial \phi}\!\left(\frac{1}{\sin\theta}\frac{\partial f}{\partial \phi}\right) d\phi \wedge d\rho \wedge d\theta \\
&= \left[\sin\theta\,\frac{\partial}{\partial \rho}\!\left(\rho^2\frac{\partial f}{\partial \rho}\right) + \frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right) + \frac{1}{\sin\theta}\frac{\partial^2 f}{\partial \phi^2}\right] d\rho \wedge d\theta \wedge d\phi.
\end{aligned}$$
Finally, rewriting the differentials back in terms of the coframe, we get
$$d \ast df = \frac{1}{\rho^2\sin\theta}\left[\sin\theta\,\frac{\partial}{\partial \rho}\!\left(\rho^2\frac{\partial f}{\partial \rho}\right) + \frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right) + \frac{1}{\sin\theta}\frac{\partial^2 f}{\partial \phi^2}\right] \theta^1 \wedge \theta^2 \wedge \theta^3.$$
So, the Laplacian of $f$ is given by
$$\nabla^2 f = \frac{1}{\rho^2}\frac{\partial}{\partial \rho}\!\left(\rho^2\frac{\partial f}{\partial \rho}\right) + \frac{1}{\rho^2\sin\theta}\frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right) + \frac{1}{\rho^2\sin^2\theta}\frac{\partial^2 f}{\partial \phi^2}. \tag{3.7}$$
The derivation of the expression for the spherical Laplacian through the use of differential forms
is elegant and leads naturally to the operator in Sturm-Liouville form.
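Formula (3.7) is easy to test against functions whose Laplacian is known. The sympy sketch below (illustrative only) implements the right-hand side of (3.7) and applies it to $f = \rho^2$ (whose Laplacian is 6) and to $f = 1/\rho$ (harmonic away from the origin).

    # Sketch: spherical Laplacian from eq. (3.7), tested on simple functions.
    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)

    def laplacian_spherical(f):
        term_r = sp.diff(rho**2 * sp.diff(f, rho), rho) / rho**2
        term_t = sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / (rho**2 * sp.sin(theta))
        term_p = sp.diff(f, phi, 2) / (rho**2 * sp.sin(theta)**2)
        return sp.simplify(term_r + term_t + term_p)

    print(laplacian_spherical(rho**2))              # 6
    print(laplacian_spherical(1/rho))               # 0
    print(laplacian_spherical(rho*sp.cos(theta)))   # 0  (this is just the Cartesian z)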
The process above can also be carried out for general orthogonal transformations. A change of
coordinates $x^i = x^i(u^k)$ leads to an orthogonal transformation if in the new coordinate system $u^k$,
the line metric
$$ds^2 = g_{11}(du^1)^2 + g_{22}(du^2)^2 + g_{33}(du^3)^2 \tag{3.8}$$
only has diagonal entries. In this case, we choose the coframe
$$\theta^1 = \sqrt{g_{11}}\, du^1 = h_1\, du^1, \qquad
\theta^2 = \sqrt{g_{22}}\, du^2 = h_2\, du^2, \qquad
\theta^3 = \sqrt{g_{33}}\, du^3 = h_3\, du^3.$$
The quantities $\{h_1, h_2, h_3\}$ are classically called the weights. Please note that in the interest of
connecting to classical terminology we have exchanged two indices for one and this will cause small
discrepancies with the index summation convention. We will revert to using a summation symbol
when these discrepancies occur. To satisfy the duality condition $\theta^i(e_j) = \delta^i{}_j$, we must choose the
corresponding frame vectors $e_i$ according to
$$e_1 = \frac{1}{\sqrt{g_{11}}}\frac{\partial}{\partial u^1} = \frac{1}{h_1}\frac{\partial}{\partial u^1}, \qquad
e_2 = \frac{1}{\sqrt{g_{22}}}\frac{\partial}{\partial u^2} = \frac{1}{h_2}\frac{\partial}{\partial u^2}, \qquad
e_3 = \frac{1}{\sqrt{g_{33}}}\frac{\partial}{\partial u^3} = \frac{1}{h_3}\frac{\partial}{\partial u^3}.$$
Gradient. Let $f = f(x^i)$ and $x^i = x^i(u^k)$. Then
$$df = \frac{\partial f}{\partial x^k}\, dx^k = \frac{\partial f}{\partial u^i}\frac{\partial u^i}{\partial x^k}\, dx^k
= \frac{\partial f}{\partial u^i}\, du^i = \sum_i \frac{1}{h_i}\frac{\partial f}{\partial u^i}\, \theta^i = e_i(f)\, \theta^i.$$
As expected, the components of the gradient in the coframe $\theta^i$ are just the frame vectors applied to $f$:
$$\nabla = \left(\frac{1}{h_1}\frac{\partial}{\partial u^1},\ \frac{1}{h_2}\frac{\partial}{\partial u^2},\ \frac{1}{h_3}\frac{\partial}{\partial u^3}\right). \tag{3.9}$$
Curl. Let $\vec{F} = (F_1, F_2, F_3)$ be a classical vector field. Construct the corresponding 1-form
$F = F_i\,\theta^i$ in the coframe. We calculate the curl using the dual of the exterior derivative:
$$\begin{aligned}
F &= F_1\,\theta^1 + F_2\,\theta^2 + F_3\,\theta^3 \\
&= (h_1 F_1)\, du^1 + (h_2 F_2)\, du^2 + (h_3 F_3)\, du^3 = (hF)_i\, du^i, \quad \text{where } (hF)_i = h_i F_i, \\
dF &= \frac{1}{2}\left[\frac{\partial (hF)_j}{\partial u^i} - \frac{\partial (hF)_i}{\partial u^j}\right] du^i \wedge du^j
= \frac{1}{2}\,\frac{1}{h_i h_j}\left[\frac{\partial (hF)_j}{\partial u^i} - \frac{\partial (hF)_i}{\partial u^j}\right] \theta^i \wedge \theta^j, \\
\ast dF &= \frac{1}{2}\,\epsilon^{ij}{}_{k}\,\frac{1}{h_i h_j}\left[\frac{\partial (hF)_j}{\partial u^i} - \frac{\partial (hF)_i}{\partial u^j}\right] \theta^k = (\nabla \times \vec{F})_k\, \theta^k.
\end{aligned}$$
Thus, the components of the curl are
$$\frac{1}{h_2 h_3}\left[\frac{\partial (h_3 F_3)}{\partial u^2} - \frac{\partial (h_2 F_2)}{\partial u^3}\right], \qquad
\frac{1}{h_1 h_3}\left[\frac{\partial (h_1 F_1)}{\partial u^3} - \frac{\partial (h_3 F_3)}{\partial u^1}\right], \qquad
\frac{1}{h_1 h_2}\left[\frac{\partial (h_2 F_2)}{\partial u^1} - \frac{\partial (h_1 F_1)}{\partial u^2}\right]. \tag{3.10}$$
Divergence. As before, let $F = F_i\,\theta^i$ and recall that $\nabla \cdot \vec{F} = \ast d \ast F$. The computation yields
$$\begin{aligned}
F &= F_1\,\theta^1 + F_2\,\theta^2 + F_3\,\theta^3, \\
\ast F &= F_1\,\theta^2 \wedge \theta^3 + F_2\,\theta^3 \wedge \theta^1 + F_3\,\theta^1 \wedge \theta^2 \\
&= (h_2 h_3 F_1)\, du^2 \wedge du^3 + (h_1 h_3 F_2)\, du^3 \wedge du^1 + (h_1 h_2 F_3)\, du^1 \wedge du^2, \\
d \ast F &= \left[\frac{\partial (h_2 h_3 F_1)}{\partial u^1} + \frac{\partial (h_1 h_3 F_2)}{\partial u^2} + \frac{\partial (h_1 h_2 F_3)}{\partial u^3}\right] du^1 \wedge du^2 \wedge du^3.
\end{aligned}$$
Therefore,
$$\nabla \cdot \vec{F} = \ast d \ast F = \frac{1}{h_1 h_2 h_3}\left[\frac{\partial (h_2 h_3 F_1)}{\partial u^1} + \frac{\partial (h_1 h_3 F_2)}{\partial u^2} + \frac{\partial (h_1 h_2 F_3)}{\partial u^3}\right]. \tag{3.11}$$
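With the spherical weights $h_1 = 1$, $h_2 = \rho$, $h_3 = \rho\sin\theta$ from example 3.5, formulas (3.9)-(3.11) reproduce the familiar spherical operators. The sympy sketch below (illustrative only) codes (3.11) for general weights and checks, for instance, that the radial field $\vec{F} = \rho\, e_1$ has divergence 3, while the inverse-square field is divergence free.

    # Sketch: divergence in orthogonal coordinates, eq. (3.11), specialized to spherical.
    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)
    u = (rho, theta, phi)
    h = (sp.Integer(1), rho, rho*sp.sin(theta))   # spherical weights h1, h2, h3

    def divergence(F):
        """F = (F1, F2, F3): physical components in the orthonormal frame e1, e2, e3."""
        total = sum(sp.diff(h[(i+1) % 3] * h[(i+2) % 3] * F[i], u[i]) for i in range(3))
        return sp.simplify(total / (h[0]*h[1]*h[2]))

    print(divergence((rho, 0, 0)))          # 3
    print(divergence((1/rho**2, 0, 0)))     # 0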
3.3 Covariant Derivative
In this section we introduce a generalization of directional derivatives. The directional derivative
measures the rate of change of a function in the direction of a vector. What we want is a quantity
which measures the rate of change of a vector field in the direction of another.
3.6 Definition Let $X$ be an arbitrary vector field in $\mathbb{R}^n$. A map $\nabla_X : T(\mathbb{R}^n) \longrightarrow T(\mathbb{R}^n)$ is
called a Koszul connection if it satisfies the following properties:
1. $\nabla_{fX}(Y) = f\nabla_X Y$,
2. $\nabla_{(X_1 + X_2)} Y = \nabla_{X_1} Y + \nabla_{X_2} Y$,
3. $\nabla_X (Y_1 + Y_2) = \nabla_X Y_1 + \nabla_X Y_2$,
4. $\nabla_X (fY) = X(f)\, Y + f\nabla_X Y$,
for all vector fields $X, X_1, X_2, Y, Y_1, Y_2 \in T(\mathbb{R}^n)$ and all smooth functions $f$. The definition states
that the map $\nabla_X$ is linear on $X$ but behaves like a linear derivation on $Y$. For this reason, the
quantity $\nabla_X Y$ is called the covariant derivative of $Y$ in the direction of $X$.
3.7 Proposition Let $Y = f^i\,\frac{\partial}{\partial x^i}$ be a vector field in $\mathbb{R}^n$, and let $X$ be another $C^{\infty}$ vector field.
Then the operator given by
$$\nabla_X Y = X(f^i)\,\frac{\partial}{\partial x^i} \tag{3.12}$$
defines a Koszul connection. The proof just requires verification that the four properties above are
satisfied, and it is left as an exercise. The operator defined in this proposition is called the standard
connection compatible with the standard Euclidean metric. The action of this connection on a
vector field $Y$ yields a new vector field whose components are the directional derivatives of the
components of $Y$.
3.8 Example Let
$$X = x\frac{\partial}{\partial x} + xz\frac{\partial}{\partial y}, \qquad Y = x^2\frac{\partial}{\partial x} + xy^2\frac{\partial}{\partial y}.$$
Then
$$\begin{aligned}
\nabla_X Y &= X(x^2)\,\frac{\partial}{\partial x} + X(xy^2)\,\frac{\partial}{\partial y} \\
&= \left[x\frac{\partial}{\partial x}(x^2) + xz\frac{\partial}{\partial y}(x^2)\right]\frac{\partial}{\partial x}
+ \left[x\frac{\partial}{\partial x}(xy^2) + xz\frac{\partial}{\partial y}(xy^2)\right]\frac{\partial}{\partial y} \\
&= 2x^2\,\frac{\partial}{\partial x} + (xy^2 + 2x^2 yz)\,\frac{\partial}{\partial y}.
\end{aligned}$$
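The computation in example 3.8 is the kind of bookkeeping sympy does well. The sketch below (illustrative only) applies definition (3.12) componentwise and reproduces the result above.

    # Sketch: standard Euclidean connection, eq. (3.12), applied to example 3.8.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    coords = (x, y, z)

    def nabla(X, Y):
        """Covariant derivative of Y in the direction X; both given as component tuples."""
        apply_X = lambda f: sum(X[i] * sp.diff(f, coords[i]) for i in range(3))
        return tuple(sp.expand(apply_X(c)) for c in Y)

    X = (x, x*z, 0)
    Y = (x**2, x*y**2, 0)
    print(nabla(X, Y))   # (2*x**2, x*y**2 + 2*x**2*y*z, 0)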
3.9 Definition A Koszul connection $\nabla_X$ is compatible with the metric $g(Y, Z) = \langle Y, Z \rangle$ if
$$\nabla_X \langle Y, Z \rangle = \langle \nabla_X Y, Z \rangle + \langle Y, \nabla_X Z \rangle. \tag{3.13}$$
In Euclidean space, the components of the standard frame vectors are constant, and thus their
rates of change in any direction vanish. Let $e_i$ be an arbitrary frame field with dual forms $\theta^i$. The
covariant derivatives of the frame vectors in the direction of a vector $X$ will in general yield new
vectors. The new vectors must be linear combinations of the basis vectors:
$$\begin{aligned}
\nabla_X e_1 &= \omega^1{}_1(X)\, e_1 + \omega^2{}_1(X)\, e_2 + \omega^3{}_1(X)\, e_3, \\
\nabla_X e_2 &= \omega^1{}_2(X)\, e_1 + \omega^2{}_2(X)\, e_2 + \omega^3{}_2(X)\, e_3, \\
\nabla_X e_3 &= \omega^1{}_3(X)\, e_1 + \omega^2{}_3(X)\, e_2 + \omega^3{}_3(X)\, e_3.
\end{aligned} \tag{3.14}$$
The coefficients can be more succinctly expressed using the compact index notation
$$\nabla_X e_i = e_j\, \omega^j{}_i(X). \tag{3.15}$$
It follows immediately that
$$\omega^j{}_i(X) = \theta^j(\nabla_X e_i). \tag{3.16}$$
Equivalently, one can take the inner product of both sides of equation (3.15) with $e_k$ to get
$$\begin{aligned}
\langle \nabla_X e_i, e_k \rangle &= \langle e_j\, \omega^j{}_i(X),\ e_k \rangle \\
&= \omega^j{}_i(X)\, \langle e_j, e_k \rangle \\
&= \omega^j{}_i(X)\, g_{jk}.
\end{aligned}$$
Hence,
$$\langle \nabla_X e_i, e_k \rangle = \omega_{ki}(X). \tag{3.17}$$
The left hand side of the last equation is the inner product of two vectors, so the expression
represents an array of functions. Consequently, the right hand side also represents an array of
functions. In addition, both expressions are linear on $X$, since by definition $\nabla_X$ is linear on $X$. We
conclude that the right hand side can be interpreted as a matrix in which each entry is a 1-form
acting on the vector $X$ to yield a function. The matrix-valued quantity $\omega^i{}_j$ is called the connection
form.
3.10 Definition Let $\nabla_X$ be a Koszul connection and let $\{e_i\}$ be a frame. The Christoffel
symbols associated with the connection in the given frame are the functions $\Gamma^k{}_{ij}$ given by
$$\nabla_{e_i} e_j = \Gamma^k{}_{ij}\, e_k. \tag{3.18}$$
The Christoffel symbols are the coefficients which give the representation of the rate of change of
the frame vectors in the direction of the frame vectors themselves. Many physicists therefore refer
to the Christoffel symbols as the connection, once again giving rise to possible confusion. The precise
relation between the Christoffel symbols and the connection 1-forms is captured by the equations
$$\omega^k{}_i(e_j) = \Gamma^k{}_{ij}, \tag{3.19}$$
or equivalently,
$$\omega^k{}_i = \Gamma^k{}_{ij}\, \theta^j. \tag{3.20}$$
In a general frame in $\mathbb{R}^n$ there are $n^2$ entries in the connection 1-form and $n^3$ Christoffel symbols.
The number of independent components is reduced if one assumes that the frame is orthonormal.
3.11 Proposition Let $e_i$ be an orthonormal frame and $\nabla_X$ be a Koszul connection compatible
with the metric. Then
$$\omega_{ji} = -\omega_{ij}. \tag{3.21}$$
Proof: Since it is given that $\langle e_i, e_j \rangle = \delta_{ij}$, we have
$$\begin{aligned}
0 &= \nabla_X \langle e_i, e_j \rangle \\
&= \langle \nabla_X e_i, e_j \rangle + \langle e_i, \nabla_X e_j \rangle \\
&= \langle \omega^k{}_i\, e_k, e_j \rangle + \langle e_i, \omega^k{}_j\, e_k \rangle \\
&= \omega^k{}_i\, \langle e_k, e_j \rangle + \omega^k{}_j\, \langle e_i, e_k \rangle \\
&= \omega^k{}_i\, g_{kj} + \omega^k{}_j\, g_{ik} \\
&= \omega_{ji} + \omega_{ij},
\end{aligned}$$
thus proving that $\omega$ is indeed antisymmetric.
3.12 Corollary The Christoffel symbols of a Koszul connection in an orthonormal frame are
antisymmetric in their first two indices; that is,
$$\Gamma^k{}_{ij} = -\Gamma^i{}_{kj}. \tag{3.22}$$
We conclude that in an orthonormal frame in $\mathbb{R}^n$ the number of independent coefficients of the
connection 1-form is $\frac{1}{2}n(n-1)$, since by antisymmetry the diagonal entries are zero, and one
only needs to count the number of entries in the upper triangular part of the $n \times n$ matrix $\omega_{ij}$.
Similarly, the number of independent Christoffel symbols gets reduced to $\frac{1}{2}n^2(n-1)$. In the case
of an orthonormal frame in $\mathbb{R}^3$, where $g_{ij}$ is diagonal, $\omega^i{}_j$ is also antisymmetric, so the connection
equations become
$$\nabla_X \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}
= \begin{pmatrix} 0 & \omega^1{}_2(X) & \omega^1{}_3(X) \\ -\omega^1{}_2(X) & 0 & \omega^2{}_3(X) \\ -\omega^1{}_3(X) & -\omega^2{}_3(X) & 0 \end{pmatrix}
\begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}. \tag{3.23}$$
Comparing with the Frenet frame equations (1.27), we notice the obvious similarity to the general frame
equations above. Clearly, the Frenet frame is a special case in which the basis vectors have been
adapted to a curve, resulting in a simpler connection in which some of the coefficients vanish. A
further simplification occurs in the Frenet frame since there the equations represent the rate of change
of the frame only along the direction of the curve rather than an arbitrary direction vector $X$.
3.4 Cartan Equations
Perhaps the most important contribution to the development of Differential Geometry is the work
of Cartan, culminating in the famous equations of structure which we discuss in this chapter.
First Structure Equation
3.13 Theorem Let $\{e_i\}$ be a frame with connection $\omega^i{}_j$ and dual coframe $\theta^i$. Then
$$\Theta^i \equiv d\theta^i + \omega^i{}_j \wedge \theta^j = 0. \tag{3.24}$$
Proof: Let
$$e_i = \partial_j\, A^j{}_i$$
be a frame, and let $\theta^i$ be the corresponding coframe. Since $\theta^i(e_j) = \delta^i{}_j$, we have
$$\theta^i = (A^{-1})^i{}_j\, dx^j.$$
Let $X$ be an arbitrary vector field. Then
$$\begin{aligned}
\nabla_X e_i &= \nabla_X (\partial_j A^j{}_i), \\
e_j\, \omega^j{}_i(X) &= \partial_j\, X(A^j{}_i) \\
&= \partial_j\, d(A^j{}_i)(X) \\
&= e_k\, (A^{-1})^k{}_j\, d(A^j{}_i)(X), \\
\omega^k{}_i(X) &= (A^{-1})^k{}_j\, d(A^j{}_i)(X).
\end{aligned}$$
Hence,
$$\omega^k{}_i = (A^{-1})^k{}_j\, d(A^j{}_i),$$
or in matrix notation,
$$\omega = A^{-1}\, dA. \tag{3.25}$$
On the other hand, taking the exterior derivative of $\theta^i$, we find
$$d\theta^i = d(A^{-1})^i{}_j \wedge dx^j = d(A^{-1})^i{}_j \wedge A^j{}_k\, \theta^k,$$
that is, $d\theta = d(A^{-1})A \wedge \theta$.
But since $A^{-1}A = I$, we have $d(A^{-1})A = -A^{-1}dA = -\omega$, hence
$$d\theta = -\omega \wedge \theta. \tag{3.26}$$
In other words,
$$d\theta^i + \omega^i{}_j \wedge \theta^j = 0.$$
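Equation (3.25) gives a purely mechanical way to obtain the connection forms of an orthonormal frame. As an illustration (a sketch only, using the cylindrical frame of example 3.3), the code below forms the matrix $A$ whose columns are $e_1, e_2, e_3$ in the standard basis, computes $\omega = A^{-1}dA$ coordinate by coordinate, and checks that each coefficient matrix is antisymmetric, as proposition 3.11 requires.

    # Sketch: connection forms of the cylindrical frame via omega = A^{-1} dA, eq. (3.25).
    import sympy as sp

    r, th, z = sp.symbols('r theta z', positive=True)
    u = (r, th, z)

    # columns are e1 = d/dr, e2 = (1/r) d/dtheta, e3 = d/dz in Cartesian components
    A = sp.Matrix([[sp.cos(th), -sp.sin(th), 0],
                   [sp.sin(th),  sp.cos(th), 0],
                   [0,           0,          1]])

    # omega evaluated on d/du^k is the coefficient matrix of du^k in A^{-1} dA
    omega = [sp.simplify(A.inv() * A.diff(var)) for var in u]
    for var, w in zip(u, omega):
        print(var, w, 'antisymmetric:', sp.simplify(w + w.T) == sp.zeros(3, 3))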
Second Structure Equation
Let $\theta^i$ be a coframe in $\mathbb{R}^n$ with connection $\omega^i{}_j$. Taking the exterior derivative of the first equation
of structure and recalling the properties (2.34), we get
$$\begin{aligned}
d(d\theta^i) + d(\omega^i{}_j \wedge \theta^j) &= 0, \\
d\omega^i{}_j \wedge \theta^j - \omega^i{}_j \wedge d\theta^j &= 0.
\end{aligned}$$
Substituting recursively from the first equation of structure, we get
$$\begin{aligned}
d\omega^i{}_j \wedge \theta^j - \omega^i{}_j \wedge (-\omega^j{}_k \wedge \theta^k) &= 0, \\
d\omega^i{}_j \wedge \theta^j + \omega^i{}_k \wedge \omega^k{}_j \wedge \theta^j &= 0, \\
(d\omega^i{}_j + \omega^i{}_k \wedge \omega^k{}_j) \wedge \theta^j &= 0, \\
d\omega^i{}_j + \omega^i{}_k \wedge \omega^k{}_j &= 0.
\end{aligned}$$
We now introduce the following
3.14 Definition The curvature $\Omega$ of a connection $\omega$ is the matrix-valued 2-form
$$\Omega^i{}_j \equiv d\omega^i{}_j + \omega^i{}_k \wedge \omega^k{}_j. \tag{3.27}$$
3.15 Theorem Let $\theta$ be a coframe with connection $\omega$ in $\mathbb{R}^n$. Then the curvature form vanishes:
$$\Omega = d\omega + \omega \wedge \omega = 0. \tag{3.28}$$
Proof: Given that there is a non-singular matrix $A$ such that $\theta = A^{-1}dx$ and $\omega = A^{-1}dA$, we have
$$d\omega = d(A^{-1}) \wedge dA.$$
On the other hand,
$$\begin{aligned}
\omega \wedge \omega &= (A^{-1}dA) \wedge (A^{-1}dA) \\
&= -d(A^{-1})A \wedge A^{-1}dA \\
&= -d(A^{-1})(AA^{-1}) \wedge dA \\
&= -d(A^{-1}) \wedge dA.
\end{aligned}$$
Therefore, $d\omega = -\omega \wedge \omega$, which is to say $\Omega = 0$.
Change of Basis
We briefly explore the behavior of the quantities $\Theta^i$ and $\Omega^i{}_j$ under a change of basis.
Let $e_i$ be a frame with dual forms $\theta^i$, and let $\overline{e}_i$ be another frame related to the first frame by an
invertible transformation
$$\overline{e}_i = e_j\, B^j{}_i, \tag{3.29}$$
which we will write in matrix notation as $\overline{e} = eB$. Referring back to the definition of connec-
tions (3.15), we introduce the covariant differential $\nabla$ by the formula
$$\nabla e_i = e_j \otimes \omega^j{}_i = e_j\, \omega^j{}_i, \qquad \nabla e = e\,\omega, \tag{3.30}$$
where once again we have simplified the equation by using matrix notation. This definition is
elegant because it does not explicitly show the dependence on $X$ in the connection (3.15). The idea
of switching from derivatives to differentials is familiar from basic calculus, but we should point out
that in the present context, the situation is more subtle. The operator $\nabla$ here maps a vector field
to a matrix-valued tensor of rank $T^{1,1}$. Another way to view the covariant differential is to think
of $\nabla$ as an operator such that if $e$ is a frame and $X$ a vector field, then $\nabla e(X) = \nabla_X e$. If $f$ is a
function, then $\nabla f(X) = \nabla_X f = df(X)$, so that $\nabla f = df$. In other words, $\nabla$ behaves like a covariant
derivative on vectors, but like a differential on functions. We require $\nabla$ to behave like a derivation
on tensor products:
$$\nabla(T_1 \otimes T_2) = \nabla T_1 \otimes T_2 + T_1 \otimes \nabla T_2. \tag{3.31}$$
Taking the covariant differential of (3.29) and using (3.30) recursively, we get
$$\begin{aligned}
\nabla \overline{e} &= \nabla e \otimes B + e \otimes \nabla B \\
&= (\nabla e)B + e\,(dB) \\
&= e\,\omega B + e\,(dB) \\
&= \overline{e}\, B^{-1}\omega B + \overline{e}\, B^{-1}dB \\
&= \overline{e}\, [B^{-1}\omega B + B^{-1}dB] \\
&= \overline{e}\, \overline{\omega},
\end{aligned}$$
provided that the connection $\overline{\omega}$ in the new frame $\overline{e}$ is related to the connection $\omega$ by the transfor-
mation law
$$\overline{\omega} = B^{-1}\omega B + B^{-1}dB. \tag{3.32}$$
It should be noted that if $e$ is the standard frame $e_i = \partial_i$ in $\mathbb{R}^n$, then $\nabla e = 0$, so that $\omega = 0$. In this
case, the formula above reduces to $\overline{\omega} = B^{-1}dB$, showing that the transformation rule is consistent
with equation (3.25).
Chapter 4
Theory of Surfaces
Surfaces in R^3
4.1 Manifolds
4.1 Definition A $C^{\infty}$ coordinate chart is a $C^{\infty}$ map $\mathbf{x}$ from an open subset of $\mathbb{R}^2$ into $\mathbb{R}^3$:
$$\mathbf{x} : U \subset \mathbb{R}^2 \longrightarrow \mathbb{R}^3, \qquad (u, v) \overset{\mathbf{x}}{\longmapsto} (x(u,v),\, y(u,v),\, z(u,v)). \tag{4.1}$$
We will always assume that the Jacobian of the map has maximal rank. In local coordinates, a
coordinate chart is represented by three equations in two variables
$$x^i = f^i(u^{\alpha}), \qquad \text{where } i = 1, 2, 3,\ \alpha = 1, 2. \tag{4.2}$$
The local coordinate representation allows us to use the tensor index formalism introduced in earlier
chapters. The assumption that the Jacobian $J = (\partial x^i / \partial u^{\alpha})$ be of maximal rank allows one to invoke the
implicit function theorem. Thus, in principle, one can locally solve for one of the coordinates, say
$x^3$, in terms of the other two:
$$x^3 = f(x^1, x^2). \tag{4.3}$$
The locus of points in $\mathbb{R}^3$ satisfying the equations $x^i = f^i(u^{\alpha})$ can also be locally represented by
an expression of the form
$$F(x^1, x^2, x^3) = 0. \tag{4.4}$$
4.2 Definition Let $\mathbf{x}(u^1, u^2) : U \longrightarrow \mathbb{R}^3$ and $\mathbf{y}(v^1, v^2) : V \longrightarrow \mathbb{R}^3$ be two coordinate charts with
a non-empty intersection $\mathbf{x}(U) \cap \mathbf{y}(V) \neq \emptyset$. The two charts are said to be $C^{\infty}$ equivalent if the
map $\phi = \mathbf{y}^{-1} \circ \mathbf{x}$ and its inverse $\phi^{-1}$ (see fig 4.1) are infinitely differentiable.
In more lucid terms, the definition just states that two equivalent charts $\mathbf{x}(u^{\alpha})$ and $\mathbf{y}(v^{\beta})$ repre-
sent different reparametrizations for the same set of points in $\mathbb{R}^3$.
Figure 4.1: Chart Equivalence
4.3 Definition A differentiably smooth surface in $\mathbb{R}^3$ is a set of points $\mathcal{M}$ in $\mathbb{R}^3$ such that
1. If $p \in \mathcal{M}$, then $p$ belongs to some $C^{\infty}$ chart.
2. If $p \in \mathcal{M}$ belongs to two different charts $\mathbf{x}$ and $\mathbf{y}$, then the two charts are $C^{\infty}$ equivalent.
Intuitively, we may think of a surface as consisting locally of a number of patches "sewn" to each other
so as to form a quilt from a global perspective.
The first condition in the definition states that each local patch looks like a piece of $\mathbb{R}^2$, whereas
the second differentiability condition indicates that the patches are joined together smoothly. An-
other way to state this idea is to say that a surface is a space that is locally Euclidean and has a
differentiable structure so that the notion of differentiation makes sense. If the Euclidean space is
of dimension $n$, the "surface" is called an $n$-dimensional manifold.
4.4 Example Consider the local coordinate chart
$$\mathbf{x}(u, v) = (\sin u\cos v,\ \sin u\sin v,\ \cos u).$$
The vector equation is equivalent to three scalar functions in two variables:
$$x = \sin u\cos v, \qquad y = \sin u\sin v, \qquad z = \cos u. \tag{4.5}$$
Clearly, the surface represented by this chart is part of the sphere $x^2 + y^2 + z^2 = 1$. The chart can-
not possibly represent the whole sphere because, although a sphere is locally Euclidean (the earth
is locally flat), there is certainly a topological difference between a sphere and a plane. Indeed, if
one analyzes the coordinate chart carefully, one will note that at the North pole ($u = 0$, $z = 1$) the
coordinates become singular. This happens because $u = 0$ implies that $x = y = 0$ regardless of the
value of $v$, so that the North pole has an infinite number of labels. The fact that it is required to
have two parameters to describe a patch on a surface in $\mathbb{R}^3$ is a manifestation of the 2-dimensional
nature of the surfaces. If one holds one of the parameters constant while varying the other, then
the resulting 1-parameter equations describe a curve on the surface. Thus, for example, letting
$v = \text{constant}$ in equation (4.5), we get the equation of a meridian great circle.
4.5 Notation Given a parametrization of a surface in a local chart $\mathbf{x}(u, v) = \mathbf{x}(u^1, u^2) = \mathbf{x}(u^{\alpha})$,
we will denote the partial derivatives by any of the following notations:
$$\mathbf{x}_u = \mathbf{x}_1 = \frac{\partial \mathbf{x}}{\partial u}, \qquad \mathbf{x}_{uu} = \mathbf{x}_{11} = \frac{\partial^2 \mathbf{x}}{\partial u^2}, \qquad
\mathbf{x}_v = \mathbf{x}_2 = \frac{\partial \mathbf{x}}{\partial v}, \qquad \mathbf{x}_{vv} = \mathbf{x}_{22} = \frac{\partial^2 \mathbf{x}}{\partial v^2},$$
$$\mathbf{x}_{\alpha} = \frac{\partial \mathbf{x}}{\partial u^{\alpha}}, \qquad \mathbf{x}_{\alpha\beta} = \frac{\partial^2 \mathbf{x}}{\partial u^{\alpha}\,\partial u^{\beta}}.$$
4.2 The First Fundamental Form
Let $x^i(u^{\alpha})$ be a local parametrization of a surface. Then the Euclidean inner product in $\mathbb{R}^3$ induces
an inner product in the space of tangent vectors at each point in the surface. This metric on the
surface is obtained as follows:
$$dx^i = \frac{\partial x^i}{\partial u^{\alpha}}\, du^{\alpha},$$
$$ds^2 = \delta_{ij}\, dx^i\, dx^j = \delta_{ij}\, \frac{\partial x^i}{\partial u^{\alpha}}\frac{\partial x^j}{\partial u^{\beta}}\, du^{\alpha}\, du^{\beta}.$$
Thus,
$$ds^2 = g_{\alpha\beta}\, du^{\alpha}\, du^{\beta}, \tag{4.6}$$
where
$$g_{\alpha\beta} = \delta_{ij}\, \frac{\partial x^i}{\partial u^{\alpha}}\frac{\partial x^j}{\partial u^{\beta}}. \tag{4.7}$$
We conclude that the surface, by virtue of being embedded in $\mathbb{R}^3$, inherits a natural metric (4.6)
which we will call the induced metric. A pair $\{\mathcal{M}, g\}$, where $\mathcal{M}$ is a manifold and $g = g_{\alpha\beta}\, du^{\alpha} \otimes du^{\beta}$
is a metric, is called a Riemannian manifold if considered as an entity in itself, and a Riemannian
submanifold of $\mathbb{R}^n$ if viewed as an object embedded in Euclidean space. An equivalent version of
the metric (4.6) can be obtained by using a more traditional calculus notation:
$$\begin{aligned}
d\mathbf{x} &= \mathbf{x}_u\, du + \mathbf{x}_v\, dv, \\
ds^2 &= d\mathbf{x} \cdot d\mathbf{x} \\
&= (\mathbf{x}_u\, du + \mathbf{x}_v\, dv) \cdot (\mathbf{x}_u\, du + \mathbf{x}_v\, dv) \\
&= (\mathbf{x}_u \cdot \mathbf{x}_u)\, du^2 + 2(\mathbf{x}_u \cdot \mathbf{x}_v)\, du\, dv + (\mathbf{x}_v \cdot \mathbf{x}_v)\, dv^2.
\end{aligned}$$
We can rewrite the last result as
$$ds^2 = E\, du^2 + 2F\, du\, dv + G\, dv^2, \tag{4.8}$$
where
$$E = g_{11} = \mathbf{x}_u \cdot \mathbf{x}_u, \qquad
F = g_{12} = \mathbf{x}_u \cdot \mathbf{x}_v = g_{21} = \mathbf{x}_v \cdot \mathbf{x}_u, \qquad
G = g_{22} = \mathbf{x}_v \cdot \mathbf{x}_v.$$
That is,
$$g_{\alpha\beta} = \mathbf{x}_{\alpha} \cdot \mathbf{x}_{\beta} = \langle \mathbf{x}_{\alpha}, \mathbf{x}_{\beta} \rangle.$$
4.6 Definition The element of arclength
$$ds^2 = g_{\alpha\beta}\, du^{\alpha} \otimes du^{\beta} \tag{4.9}$$
is also called the first fundamental form. We must caution the reader that this quantity is not
a form in the sense of differential geometry, since $ds^2$ involves the symmetric tensor product rather
than the wedge product.
The first fundamental form plays such a crucial role in the theory of surfaces that we will find it
convenient to introduce yet a third, more modern version. Following the same development as in the
theory of curves, consider a surface $\mathcal{M}$ defined locally by a function $(u^1, u^2) \longmapsto \alpha(u^1, u^2)$. We say
that a quantity $X_p$ is a tangent vector at a point $p \in \mathcal{M}$ if $X_p$ is a linear derivation on the space of
$C^{\infty}$ real-valued functions $\{f \mid f : \mathcal{M} \longrightarrow \mathbb{R}\}$ on the surface. The set of all tangent vectors at a point
$p \in \mathcal{M}$ is called the tangent space $T_p\mathcal{M}$. As before, a vector field $X$ on the surface is a smooth
choice of a tangent vector at each point on the surface, and the union of all tangent spaces is called
the tangent bundle $T\mathcal{M}$.
The coordinate chart map
$$\alpha : \mathbb{R}^2 \longrightarrow \mathcal{M} \subset \mathbb{R}^3$$
induces a push-forward map
$$\alpha_* : T\mathbb{R}^2 \longrightarrow T\mathcal{M}$$
defined by
$$\alpha_*(V)(f)\big|_{\alpha(u^{\alpha})} = V(f \circ \alpha)\big|_{u^{\alpha}}.$$
Just as in the case of curves, when we revert back to classical notation to describe a surface as
$x^i(u^{\alpha})$, what we really mean is $(x^i \circ \alpha)(u^{\alpha})$, where $x^i$ are the coordinate functions in $\mathbb{R}^3$. Particular
examples of tangent vectors on $\mathcal{M}$ are given by the push-forward of the standard basis of $T\mathbb{R}^2$.
These tangent vectors, which earlier we called $\mathbf{x}_{\alpha}$, are defined by
$$\alpha_*\!\left(\frac{\partial}{\partial u^{\alpha}}\right)(f)\Big|_{\alpha(u^{\alpha})} = \frac{\partial}{\partial u^{\alpha}}(f \circ \alpha)\Big|_{u^{\alpha}}.$$
In this formalism, the first fundamental form $I$ is just the symmetric bilinear tensor defined by the
induced metric
$$I(X, Y) = g(X, Y) = \langle X, Y \rangle, \tag{4.10}$$
where $X$ and $Y$ are any pair of vector fields in $T\mathcal{M}$.
Orthogonal Parametric Curves
Let $V$ and $W$ be vectors tangent to a surface $\mathcal{M}$ defined locally by a chart $\mathbf{x}(u^{\alpha})$. Since the vectors
$\mathbf{x}_{\alpha}$ span the tangent space of $\mathcal{M}$ at each point, the vectors $V$ and $W$ can be written as linear
combinations
$$V = V^{\alpha}\,\mathbf{x}_{\alpha}, \qquad W = W^{\alpha}\,\mathbf{x}_{\alpha}.$$
The functions $V^{\alpha}$ and $W^{\alpha}$ are called the curvilinear coordinates of the vectors. We can calculate
the length and the inner product of the vectors using the induced Riemannian metric:
$$\|V\|^2 = \langle V, V \rangle = \langle V^{\alpha}\mathbf{x}_{\alpha}, V^{\beta}\mathbf{x}_{\beta} \rangle = V^{\alpha}V^{\beta}\langle \mathbf{x}_{\alpha}, \mathbf{x}_{\beta} \rangle,$$
$$\|V\|^2 = g_{\alpha\beta}\, V^{\alpha}V^{\beta}, \qquad \|W\|^2 = g_{\alpha\beta}\, W^{\alpha}W^{\beta},$$
and
$$\langle V, W \rangle = \langle V^{\alpha}\mathbf{x}_{\alpha}, W^{\beta}\mathbf{x}_{\beta} \rangle = V^{\alpha}W^{\beta}\langle \mathbf{x}_{\alpha}, \mathbf{x}_{\beta} \rangle = g_{\alpha\beta}\, V^{\alpha}W^{\beta}.$$
The angle $\theta$ subtended by the vectors $V$ and $W$ is then given by the equation
$$\cos\theta = \frac{\langle V, W \rangle}{\|V\|\,\|W\|}
= \frac{I(V, W)}{\sqrt{I(V, V)}\,\sqrt{I(W, W)}}
= \frac{g_{\alpha\beta}\, V^{\alpha}W^{\beta}}{\sqrt{g_{\alpha\beta}\, V^{\alpha}V^{\beta}}\ \sqrt{g_{\alpha\beta}\, W^{\alpha}W^{\beta}}}. \tag{4.11}$$
Let $u^{\alpha} = \phi^{\alpha}(t)$ and $u^{\alpha} = \psi^{\alpha}(t)$ be two curves on the surface. Then the total differentials
$$du^{\alpha} = \frac{d\phi^{\alpha}}{dt}\, dt, \qquad \text{and} \qquad \delta u^{\alpha} = \frac{d\psi^{\alpha}}{dt}\, \delta t$$
represent infinitesimal tangent vectors (1.12) to the curves. Thus, the angle between two infinitesimal
vectors tangent to two intersecting curves on the surface satisfies
$$\cos\theta = \frac{g_{\alpha\beta}\, du^{\alpha}\,\delta u^{\beta}}{\sqrt{g_{\alpha\beta}\, du^{\alpha}du^{\beta}}\ \sqrt{g_{\alpha\beta}\, \delta u^{\alpha}\delta u^{\beta}}}. \tag{4.12}$$
In particular, if the two curves happen to be the parametric curves, $u^1 = \text{constant}$ and $u^2 = \text{constant}$,
then along one curve we have $du^1 = 0$, with $du^2$ arbitrary, and along the second we have $\delta u^1$
arbitrary and $\delta u^2 = 0$. In this case, the cosine of the angle subtended by the infinitesimal tangent
vectors reduces to
$$\cos\theta = \frac{g_{12}\,\delta u^1\, du^2}{\sqrt{g_{11}(\delta u^1)^2}\ \sqrt{g_{22}(du^2)^2}} = \frac{g_{12}}{\sqrt{g_{11}}\sqrt{g_{22}}} = \frac{F}{\sqrt{EG}}. \tag{4.13}$$
As a result, we have the following
4.7 Proposition The parametric lines are orthogonal if $F = 0$.
4.8 Examples
a) Sphere
$$\begin{aligned}
\mathbf{x} &= (a\sin\theta\cos\phi,\ a\sin\theta\sin\phi,\ a\cos\theta) \\
\mathbf{x}_{\theta} &= (a\cos\theta\cos\phi,\ a\cos\theta\sin\phi,\ -a\sin\theta) \\
\mathbf{x}_{\phi} &= (-a\sin\theta\sin\phi,\ a\sin\theta\cos\phi,\ 0) \\
E &= \mathbf{x}_{\theta} \cdot \mathbf{x}_{\theta} = a^2, \qquad F = \mathbf{x}_{\theta} \cdot \mathbf{x}_{\phi} = 0, \qquad G = \mathbf{x}_{\phi} \cdot \mathbf{x}_{\phi} = a^2\sin^2\theta \\
ds^2 &= a^2\, d\theta^2 + a^2\sin^2\theta\, d\phi^2
\end{aligned}$$
b) Surface of Revolution
$$\begin{aligned}
\mathbf{x} &= (r\cos\theta,\ r\sin\theta,\ f(r)) \\
\mathbf{x}_r &= (\cos\theta,\ \sin\theta,\ f'(r)) \\
\mathbf{x}_{\theta} &= (-r\sin\theta,\ r\cos\theta,\ 0) \\
E &= \mathbf{x}_r \cdot \mathbf{x}_r = 1 + f'^2(r), \qquad F = \mathbf{x}_r \cdot \mathbf{x}_{\theta} = 0, \qquad G = \mathbf{x}_{\theta} \cdot \mathbf{x}_{\theta} = r^2 \\
ds^2 &= [1 + f'^2(r)]\, dr^2 + r^2\, d\theta^2
\end{aligned}$$
c) Pseudosphere
$$\begin{aligned}
\mathbf{x} &= (a\sin u\cos v,\ a\sin u\sin v,\ a(\cos u + \ln\tan\tfrac{u}{2})) \\
E &= a^2\cot^2 u, \qquad F = 0, \qquad G = a^2\sin^2 u \\
ds^2 &= a^2\cot^2 u\, du^2 + a^2\sin^2 u\, dv^2
\end{aligned}$$
d) Torus
$$\begin{aligned}
\mathbf{x} &= ((b + a\cos u)\cos v,\ (b + a\cos u)\sin v,\ a\sin u) \\
E &= a^2, \qquad F = 0, \qquad G = (b + a\cos u)^2 \\
ds^2 &= a^2\, du^2 + (b + a\cos u)^2\, dv^2
\end{aligned}$$
e) Helicoid
$$\begin{aligned}
\mathbf{x} &= (u\cos v,\ u\sin v,\ av) \\
E &= 1, \qquad F = 0, \qquad G = u^2 + a^2 \\
ds^2 &= du^2 + (u^2 + a^2)\, dv^2
\end{aligned}$$
f) Catenoid
$$\begin{aligned}
\mathbf{x} &= (u\cos v,\ u\sin v,\ c\cosh^{-1}\tfrac{u}{c}) \\
E &= \frac{u^2}{u^2 - c^2}, \qquad F = 0, \qquad G = u^2 \\
ds^2 &= \frac{u^2}{u^2 - c^2}\, du^2 + u^2\, dv^2
\end{aligned}$$
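Each entry in examples 4.8 is a routine application of (4.7). The sympy sketch below (illustrative only) computes E, F, G from a parametrization and reproduces, for instance, the torus and helicoid metrics.

    # Sketch: first fundamental form coefficients E, F, G from a chart x(u, v), eq. (4.7).
    import sympy as sp

    u, v, a, b = sp.symbols('u v a b', positive=True)

    def first_fundamental_form(x):
        xu, xv = [sp.Matrix([sp.diff(c, w) for c in x]) for w in (u, v)]
        return (sp.simplify(xu.dot(xu)), sp.simplify(xu.dot(xv)), sp.simplify(xv.dot(xv)))

    torus = ((b + a*sp.cos(u))*sp.cos(v), (b + a*sp.cos(u))*sp.sin(v), a*sp.sin(u))
    helicoid = (u*sp.cos(v), u*sp.sin(v), a*v)

    print(first_fundamental_form(torus))      # (a**2, 0, (a*cos(u) + b)**2)
    print(first_fundamental_form(helicoid))   # (1, 0, a**2 + u**2)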
4.3 The Second Fundamental Form
Let $\mathbf{x} = \mathbf{x}(u^{\alpha})$ be a coordinate patch on a surface $\mathcal{M}$. Since $\mathbf{x}_u$ and $\mathbf{x}_v$ are tangential to the surface,
we can construct a unit normal $\mathbf{n}$ to the surface by taking
$$\mathbf{n} = \frac{\mathbf{x}_u \times \mathbf{x}_v}{\|\mathbf{x}_u \times \mathbf{x}_v\|}. \tag{4.14}$$
Now, consider a curve on the surface given by $u^{\alpha} = u^{\alpha}(s)$. Without loss of generality, we assume
that the curve is parametrized by arclength $s$ so that the curve has unit speed. Using the chain rule,
we see that the unit tangent vector $T$ to the curve is given by
$$T = \frac{d\mathbf{x}}{ds} = \frac{d\mathbf{x}}{du^{\alpha}}\frac{du^{\alpha}}{ds} = \mathbf{x}_{\alpha}\frac{du^{\alpha}}{ds}. \tag{4.15}$$
Since the curve lives on the surface and the vector $T$ is tangent to the curve, it is clear that
$T$ is also tangent to the surface. On the other hand, the vector $T' = dT/ds$ does not in general have
this property, so what we will do is to decompose $T'$ into its normal and tangential components (see
fig (4.2)):
$$T' = K_n + K_g = \kappa_n\,\mathbf{n} + K_g, \tag{4.16}$$
where $\kappa_n = \|K_n\| = \langle T', \mathbf{n} \rangle$.
The scalar quantity $\kappa_n$ is called the normal curvature of the curve and $K_g$ is called the
geodesic curvature vector. The normal curvature measures the curvature of $\mathbf{x}(u^{\alpha}(s))$ resulting
from the constraint of the curve to lie on a surface.
Figure 4.2: Normal Curvature
The geodesic curvature vector measures the "sideward" component of the curvature in the tangent
plane to the surface. Thus, if one draws a straight line on a flat piece of paper and then smoothly
bends the paper into a surface, the straight line would now acquire some curvature. Since the line
was originally straight, there is no sideward component of curvature, so $K_g = 0$ in this case. This
means that the entire contribution to the curvature comes from the normal component, reflecting
the fact that the only reason there is curvature here is due to the bend in the surface itself.
Similarly, if one specifies a point $p \in \mathcal{M}$ and a direction vector $X_p \in T_p\mathcal{M}$, one can geometrically
envision the normal curvature by considering the equivalence class of all unit speed curves in $\mathcal{M}$
which contain the point $p$ and whose tangent vectors line up with the direction of $X$. Of course,
there are infinitely many such curves, but at an infinitesimal level, all these curves can be obtained
by intersecting the surface with a "vertical" plane containing the vector $X$ and the normal to $\mathcal{M}$.
All curves in this equivalence class have the same normal curvature and their geodesic curvatures
vanish. In this sense, the normal curvature is more of a property pertaining to a direction on the
surface at a point, whereas the geodesic curvature really depends on the curve itself. It might be
impossible for a hiker walking on the undulating hills of the Ozarks to find a straight line trail, since
the rolling hills of the terrain extend in all directions. However, it might be possible to walk on a
path with zero geodesic curvature as long as the hiker can maintain the same compass direction.
To find an explicit formula for the normal curvature we first differentiate equation (4.15):
$$\begin{aligned}
T' &= \frac{dT}{ds} = \frac{d}{ds}\!\left(\mathbf{x}_{\alpha}\frac{du^{\alpha}}{ds}\right) \\
&= \frac{d}{ds}(\mathbf{x}_{\alpha})\frac{du^{\alpha}}{ds} + \mathbf{x}_{\alpha}\frac{d^2 u^{\alpha}}{ds^2} \\
&= \left(\frac{d\mathbf{x}_{\alpha}}{du^{\beta}}\frac{du^{\beta}}{ds}\right)\frac{du^{\alpha}}{ds} + \mathbf{x}_{\alpha}\frac{d^2 u^{\alpha}}{ds^2} \\
&= \mathbf{x}_{\alpha\beta}\frac{du^{\alpha}}{ds}\frac{du^{\beta}}{ds} + \mathbf{x}_{\alpha}\frac{d^2 u^{\alpha}}{ds^2}.
\end{aligned}$$
Taking the inner product of the last equation with the normal and noticing that $\langle \mathbf{x}_{\alpha}, \mathbf{n} \rangle = 0$, we
get
$$\kappa_n = \langle T', \mathbf{n} \rangle = \langle \mathbf{x}_{\alpha\beta}, \mathbf{n} \rangle \frac{du^{\alpha}}{ds}\frac{du^{\beta}}{ds}
= \frac{b_{\alpha\beta}\, du^{\alpha}\, du^{\beta}}{g_{\alpha\beta}\, du^{\alpha}\, du^{\beta}}, \tag{4.17}$$
where
$$b_{\alpha\beta} = \langle \mathbf{x}_{\alpha\beta}, \mathbf{n} \rangle. \tag{4.18}$$
4.9 Definition The expression
$$II = b_{\alpha\beta}\, du^{\alpha} \otimes du^{\beta} \tag{4.19}$$
is called the second fundamental form.
4.10 Proposition The second fundamental form is symmetric.
Proof: In the classical formulation of the second fundamental form the proof is trivial. We have
$b_{\alpha\beta} = b_{\beta\alpha}$, since for a $C^{\infty}$ patch $\mathbf{x}(u^{\alpha})$ we have $\mathbf{x}_{\alpha\beta} = \mathbf{x}_{\beta\alpha}$, because the partial derivatives commute.
We will denote the coefficients of the second fundamental form by
$$\begin{aligned}
e &= b_{11} = \langle \mathbf{x}_{uu}, \mathbf{n} \rangle, \\
f &= b_{12} = \langle \mathbf{x}_{uv}, \mathbf{n} \rangle = b_{21} = \langle \mathbf{x}_{vu}, \mathbf{n} \rangle, \\
g &= b_{22} = \langle \mathbf{x}_{vv}, \mathbf{n} \rangle,
\end{aligned}$$
so that equation (4.19) can be written as
$$II = e\, du^2 + 2f\, du\, dv + g\, dv^2 \tag{4.20}$$
and equation (4.17) as
$$\kappa_n = \frac{II}{I} = \frac{e\, du^2 + 2f\, du\, dv + g\, dv^2}{E\, du^2 + 2F\, du\, dv + G\, dv^2}. \tag{4.21}$$
We would also like to point out that just as the first fundamental form can be represented as
$$I = \langle d\mathbf{x}, d\mathbf{x} \rangle,$$
so can we represent the second fundamental form as
$$II = -\langle d\mathbf{x}, d\mathbf{n} \rangle.$$
To see this, it suffices to note that differentiation of the identity $\langle \mathbf{x}_{\alpha}, \mathbf{n} \rangle = 0$ implies that
$$\langle \mathbf{x}_{\alpha\beta}, \mathbf{n} \rangle = -\langle \mathbf{x}_{\alpha}, \mathbf{n}_{\beta} \rangle.$$
Therefore,
$$\begin{aligned}
\langle d\mathbf{x}, d\mathbf{n} \rangle &= \langle \mathbf{x}_{\alpha}\, du^{\alpha},\ \mathbf{n}_{\beta}\, du^{\beta} \rangle \\
&= \langle \mathbf{x}_{\alpha}, \mathbf{n}_{\beta} \rangle\, du^{\alpha}\, du^{\beta} \\
&= -\langle \mathbf{x}_{\alpha\beta}, \mathbf{n} \rangle\, du^{\alpha}\, du^{\beta} \\
&= -II.
\end{aligned}$$
From a computational point of view, a more useful formula for the coefficients of the second
fundamental form can be derived by first applying the classical vector identity
$$(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D}) = \begin{vmatrix} \mathbf{A} \cdot \mathbf{C} & \mathbf{A} \cdot \mathbf{D} \\ \mathbf{B} \cdot \mathbf{C} & \mathbf{B} \cdot \mathbf{D} \end{vmatrix} \tag{4.22}$$
to compute
$$\|\mathbf{x}_u \times \mathbf{x}_v\|^2 = (\mathbf{x}_u \times \mathbf{x}_v) \cdot (\mathbf{x}_u \times \mathbf{x}_v)
= \det\begin{pmatrix} \mathbf{x}_u \cdot \mathbf{x}_u & \mathbf{x}_u \cdot \mathbf{x}_v \\ \mathbf{x}_v \cdot \mathbf{x}_u & \mathbf{x}_v \cdot \mathbf{x}_v \end{pmatrix} = EG - F^2. \tag{4.23}$$
Consequently, the normal vector can be written as
$$\mathbf{n} = \frac{\mathbf{x}_u \times \mathbf{x}_v}{\|\mathbf{x}_u \times \mathbf{x}_v\|} = \frac{\mathbf{x}_u \times \mathbf{x}_v}{\sqrt{EG - F^2}}.$$
Thus, we can write the coefficients $b_{\alpha\beta}$ directly as triple products involving derivatives of $\mathbf{x}$. The
expressions for these coefficients are
$$e = \frac{(\mathbf{x}_u\ \mathbf{x}_v\ \mathbf{x}_{uu})}{\sqrt{EG - F^2}}, \qquad
f = \frac{(\mathbf{x}_u\ \mathbf{x}_v\ \mathbf{x}_{uv})}{\sqrt{EG - F^2}}, \qquad
g = \frac{(\mathbf{x}_u\ \mathbf{x}_v\ \mathbf{x}_{vv})}{\sqrt{EG - F^2}}. \tag{4.24}$$
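Formula (4.24) is straightforward to automate. The sympy sketch below (illustrative only) computes e, f, g for the sphere of example 4.8(a) via the triple products; with the normal determined by $\mathbf{x}_u \times \mathbf{x}_v$ for this ordering of the parameters, the result is $e = -a$, $f = 0$, $g = -a\sin^2\theta$ (the signs flip if the opposite normal is chosen).

    # Sketch: second fundamental form coefficients via the triple products of eq. (4.24).
    import sympy as sp

    theta, phi, a = sp.symbols('theta phi a', positive=True)

    x = sp.Matrix([a*sp.sin(theta)*sp.cos(phi), a*sp.sin(theta)*sp.sin(phi), a*sp.cos(theta)])
    xu, xv = x.diff(theta), x.diff(phi)
    xuu, xuv, xvv = xu.diff(theta), xu.diff(phi), xv.diff(phi)

    E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)
    area = a**2 * sp.sin(theta)     # sqrt(E*G - F**2) for 0 < theta < pi

    triple = lambda p, q, r: sp.Matrix.hstack(p, q, r).det()
    e, f, g = [sp.simplify(triple(xu, xv, w) / area) for w in (xuu, xuv, xvv)]
    print(e, f, g)    # -a, 0, -a*sin(theta)**2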
The first fundamental form on a surface measures the (square of the) distance between two
infinitesimally separated points. There is a similar interpretation of the second fundamental form,
as we show below. The second fundamental form measures the distance from a point on the surface
to the tangent plane at a second, infinitesimally separated point. To see this simple geometrical
interpretation, consider a point $\mathbf{x}_0 = \mathbf{x}(u^{\alpha}_0) \in \mathcal{M}$ and a nearby point $\mathbf{x}(u^{\alpha}_0 + du^{\alpha})$. Expanding in a
Taylor series, we get
$$\mathbf{x}(u^{\alpha}_0 + du^{\alpha}) = \mathbf{x}_0 + (\mathbf{x}_0)_{\alpha}\, du^{\alpha} + \frac{1}{2}(\mathbf{x}_0)_{\alpha\beta}\, du^{\alpha}\, du^{\beta} + \ldots$$
We recall that the distance formula from a point $\mathbf{x}$ to a plane which contains $\mathbf{x}_0$ is just the scalar
projection of $(\mathbf{x} - \mathbf{x}_0)$ onto the normal. Since the normal to the plane at $\mathbf{x}_0$ is the same as the unit
normal to the surface and $\langle \mathbf{x}_{\alpha}, \mathbf{n} \rangle = 0$, we find that the distance $D$ is
$$D = \langle \mathbf{x} - \mathbf{x}_0, \mathbf{n} \rangle = \frac{1}{2}\langle (\mathbf{x}_0)_{\alpha\beta}, \mathbf{n} \rangle\, du^{\alpha}\, du^{\beta} = \frac{1}{2} II_0.$$
The first fundamental form (or rather, its determinant) also appears in calculus in the context
of calculating the area of a parametrized surface. The reason is that if one considers an infinitesimal
parallelogram subtended by the vectors $\mathbf{x}_u\, du$ and $\mathbf{x}_v\, dv$, then the differential of surface area is given
by the length of the cross product of these two infinitesimal tangent vectors. That is,
$$dS = \|\mathbf{x}_u \times \mathbf{x}_v\|\, du\, dv, \qquad S = \int\!\!\int \sqrt{EG - F^2}\, du\, dv.$$
The second fundamental form contains information about the shape of the surface at a point.
For example, the discussion above indicates that if $b = |b_{\alpha\beta}| = eg - f^2 > 0$, then all the neighboring
points lie on the same side of the tangent plane, and hence, the surface is concave in one direction.
If at a point on a surface $b > 0$, the point is called an elliptic point; if $b < 0$, the point is called
hyperbolic or a saddle point; and if $b = 0$, the point is called parabolic.
4.4 Curvature
Curvature, and all the related questions which surround it, constitutes the central object of study
in differential geometry. One would like to be able to answer questions such as: what quantities
remain invariant as one surface is smoothly changed into another? There is certainly something
intrinsically different between a cone, which we can construct from a flat piece of paper, and a
sphere, which we cannot. What is it that makes these two surfaces so different? How does one
calculate the shortest path between two objects when the path is constrained to be on a surface?
These and many other questions of similar type can be quantitatively answered through the
study of curvature. We cannot overstate the great importance of this subject; perhaps it suffices to say
that without a clear understanding of curvature, there would not be a general theory of relativity,
no concept of black holes, and even more disastrous, no Star Trek.
The study of curvature of a hypersurface in $\mathbb{R}^n$ (a surface of dimension $n-1$) begins by trying to
understand the covariant derivative of the normal to the surface. The reason is simple. If the normal
to a surface is constant, then the surface is a flat hyperplane. Thus, it is variations in the normal
that indicate the presence of curvature. For simplicity, we constrain our discussion to surfaces in
$\mathbb{R}^3$, but the formalism we use is applicable to any dimension. We will also introduce in this section
the modern version of the second fundamental form.
4.11 Definition Let $X$ be a vector field on a surface $\mathcal{M}$ in $\mathbb{R}^3$, and let $N$ be the normal vector.
The map $L$ given by
$$LX = -\overline{\nabla}_X N \tag{4.25}$$
is called the Weingarten map.
In this definition we will be careful to differentiate between operators which live on the surface
and operators which live in the ambient space. We will adopt the convention of overlining objects
which live in the ambient space, the operator $\overline{\nabla}$ above being an example of one such object. The
Weingarten map is clearly a good place to start, since it is the rate of change of the normal in an
arbitrary direction which we wish to quantify.
4.12 Definition The Lie bracket $[X, Y]$ of two vector fields $X$ and $Y$ on a surface $\mathcal{M}$ is defined
as the commutator
$$[X, Y] = XY - YX, \tag{4.26}$$
meaning that if $f$ is a function on $\mathcal{M}$, then $[X, Y]f = X(Y(f)) - Y(X(f))$.
4.13 Proposition The Lie bracket of two vectors $X, Y \in T(\mathcal{M})$ is another vector in $T(\mathcal{M})$.
Proof: It suffices to prove that the bracket is a linear derivation on the space of $C^{\infty}$ functions.
Consider vectors $X, Y \in T(\mathcal{M})$ and smooth functions $f, g$ in $\mathcal{M}$. Then
$$\begin{aligned}
[X, Y](f + g) &= X(Y(f + g)) - Y(X(f + g)) \\
&= X(Y(f) + Y(g)) - Y(X(f) + X(g)) \\
&= X(Y(f)) - Y(X(f)) + X(Y(g)) - Y(X(g)) \\
&= [X, Y](f) + [X, Y](g),
\end{aligned}$$
and
$$\begin{aligned}
[X, Y](fg) &= X(Y(fg)) - Y(X(fg)) \\
&= X[fY(g) + gY(f)] - Y[fX(g) + gX(f)] \\
&= X(f)Y(g) + fX(Y(g)) + X(g)Y(f) + gX(Y(f)) \\
&\quad - Y(f)X(g) - fY(X(g)) - Y(g)X(f) - gY(X(f)) \\
&= f[X(Y(g)) - Y(X(g))] + g[X(Y(f)) - Y(X(f))] \\
&= f[X, Y](g) + g[X, Y](f).
\end{aligned}$$
4.14 Proposition The Weingarten map is a linear transformation on $T(\mathcal{M})$.
Proof: Linearity follows from the linearity of $\overline{\nabla}$, so it suffices to show that $L : X \longrightarrow LX$ maps
$X \in T(\mathcal{M})$ to a vector $LX \in T(\mathcal{M})$. Since $N$ is the unit normal to the surface, $\langle N, N \rangle = 1$, so
that any derivative of $\langle N, N \rangle$ is 0. Assuming that the connection is compatible with the metric,
$$\begin{aligned}
\overline{\nabla}_X \langle N, N \rangle &= \langle \overline{\nabla}_X N, N \rangle + \langle N, \overline{\nabla}_X N \rangle \\
&= 2\langle \overline{\nabla}_X N, N \rangle \\
&= -2\langle LX, N \rangle = 0.
\end{aligned}$$
Therefore, $LX$ is orthogonal to $N$, and hence it lies in $T(\mathcal{M})$.
We recall at this point that in the previous section we gave two equivalent definitions ($\langle d\mathbf{x}, d\mathbf{x} \rangle$
and $\langle X, Y \rangle$) of the first fundamental form. We will now do the same for the second
fundamental form.
4.15 Definition The second fundamental form is the bilinear map
$$II(X, Y) = \langle LX, Y \rangle. \tag{4.27}$$
4.16 Remark It should be noted that the two definitions of the second fundamental form are
consistent. This is easy to see if one chooses $X$ to have components $\mathbf{x}_{\alpha}$ and $Y$ to have components
$\mathbf{x}_{\beta}$. With these choices, $LX$ has components $-\mathbf{n}_{\alpha}$, and $II(X, Y)$ becomes $b_{\alpha\beta} = -\langle \mathbf{x}_{\alpha}, \mathbf{n}_{\beta} \rangle$. We
also note that there is a third fundamental form defined by
$$III(X, Y) = \langle LX, LY \rangle = \langle L^2 X, Y \rangle. \tag{4.28}$$
In classical notation, the third fundamental form would be denoted by $\langle d\mathbf{n}, d\mathbf{n} \rangle$. As one would
expect, the third fundamental form contains third order Taylor series information about the surface.
We will not treat $III(X, Y)$ in much detail in this work.
4.17 Definition The torsion of a connection $\nabla$ is the operator $T$ such that $\forall X, Y$,
$$T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y]. \tag{4.29}$$
A connection is called torsion free if $T(X, Y) = 0$. In this case,
$$\nabla_X Y - \nabla_Y X = [X, Y].$$
We will say much more later about the torsion and the importance of torsion free connections.
For the time being, it suffices to assume that for the rest of this section, all connections are torsion
free. Using this assumption, it is possible to prove the following very important theorem.
4.18 Theorem The Weingarten map is a self-adjoint operator on $T\mathcal{M}$.
Proof: We have already shown that $L : T\mathcal{M} \longrightarrow T\mathcal{M}$ is a linear map. Recall that an operator
$L$ on a linear space is self-adjoint if $\langle LX, Y \rangle = \langle X, LY \rangle$, so the theorem is equivalent to
proving that the second fundamental form is symmetric ($II[X, Y] = II[Y, X]$). Computing the
difference of these two quantities, we get
$$\begin{aligned}
II[X, Y] - II[Y, X] &= \langle LX, Y \rangle - \langle LY, X \rangle \\
&= -\langle \overline{\nabla}_X N, Y \rangle + \langle \overline{\nabla}_Y N, X \rangle.
\end{aligned}$$
Since $\langle X, N \rangle = \langle Y, N \rangle = 0$ and the connection is compatible with the metric, we know that
$$\langle \overline{\nabla}_X N, Y \rangle = -\langle N, \overline{\nabla}_X Y \rangle, \qquad
\langle \overline{\nabla}_Y N, X \rangle = -\langle N, \overline{\nabla}_Y X \rangle,$$
hence,
$$\begin{aligned}
II[X, Y] - II[Y, X] &= \langle N, \overline{\nabla}_X Y \rangle - \langle N, \overline{\nabla}_Y X \rangle \\
&= \langle N, \overline{\nabla}_X Y - \overline{\nabla}_Y X \rangle \\
&= \langle N, [X, Y] \rangle \\
&= 0, \quad \text{since } [X, Y] \in T(\mathcal{M}).
\end{aligned}$$
One of the most important topics in an introductory course in linear algebra deals with the spectrum
of self-adjoint operators. The main result in this area states that if one considers the eigenvalue
equation
$$LX = \kappa X, \tag{4.30}$$
then the eigenvalues are always real and eigenvectors corresponding to different eigenvalues are or-
thogonal. In the current situation, the vector spaces in question are the tangent spaces at each point
of a surface in $\mathbb{R}^3$, so the dimension is 2. Hence, we expect two eigenvalues and two eigenvectors:
$$LX_1 = \kappa_1 X_1, \tag{4.31}$$
$$LX_2 = \kappa_2 X_2. \tag{4.32}$$
4.19 Definition The eigenvalues $\kappa_1$ and $\kappa_2$ of the Weingarten map $L$ are called the principal
curvatures and the eigenvectors $X_1$ and $X_2$ are called the principal directions.
Several possible situations may occur depending on the classification of the eigenvalues at each
point $p$ on the surface:
1. If $\kappa_1 \neq \kappa_2$ and both eigenvalues are positive, then $p$ is called an elliptic point.
2. If $\kappa_1\kappa_2 < 0$, then $p$ is called a hyperbolic point.
3. If $\kappa_1 = \kappa_2 \neq 0$, then $p$ is called an umbilic point.
4. If $\kappa_1\kappa_2 = 0$, then $p$ is called a parabolic point.
It is also well known from linear algebra that the determinant and the trace of a self-adjoint
operator are the only invariants under an adjoint (similarity) transformation. Clearly these invariants
are important in the case of the operator $L$, and they deserve to be given special names.
4.20 Definition The determinant $K = \det(L)$ is called the Gaussian curvature of $\mathcal{M}$ and
$H = \frac{1}{2}\mathrm{Tr}(L)$ is called the mean curvature.
Since any self-adjoint operator is diagonalizable, and in a diagonal basis the matrix representing
$L$ is $\mathrm{diag}(\kappa_1, \kappa_2)$, it follows immediately that
$$K = \kappa_1\kappa_2, \qquad H = \frac{1}{2}(\kappa_1 + \kappa_2). \tag{4.33}$$
4.21 Proposition Let $X$ and $Y$ be any linearly independent vectors in $T(\mathcal{M})$. Then
$$\begin{aligned}
LX \times LY &= K\,(X \times Y), \\
(LX \times Y) + (X \times LY) &= 2H\,(X \times Y).
\end{aligned} \tag{4.34}$$
Proof: Since $LX, LY \in T(\mathcal{M})$, they can be expressed as linear combinations of the basis vectors
$X$ and $Y$:
$$LX = a_1 X + b_1 Y, \qquad LY = a_2 X + b_2 Y.$$
Computing the cross product, we get
$$LX \times LY = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}\,(X \times Y) = \det(L)\,(X \times Y).$$
Similarly,
$$(LX \times Y) + (X \times LY) = (a_1 + b_2)(X \times Y) = \mathrm{Tr}(L)\,(X \times Y) = (2H)(X \times Y).$$
4.22 Proposition
$$K = \frac{eg - f^2}{EG - F^2}, \qquad H = \frac{1}{2}\,\frac{Eg - 2Ff + eG}{EG - F^2}. \tag{4.35}$$
Proof: Starting with equations (4.34), take the dot product of both sides with $X \times Y$ and use the
vector identity (4.22). We immediately get
$$K = \frac{\begin{vmatrix} \langle LX, X \rangle & \langle LX, Y \rangle \\ \langle LY, X \rangle & \langle LY, Y \rangle \end{vmatrix}}
{\begin{vmatrix} \langle X, X \rangle & \langle X, Y \rangle \\ \langle Y, X \rangle & \langle Y, Y \rangle \end{vmatrix}}$$
and
$$2H = \frac{\begin{vmatrix} \langle LX, X \rangle & \langle LX, Y \rangle \\ \langle Y, X \rangle & \langle Y, Y \rangle \end{vmatrix}
+ \begin{vmatrix} \langle X, X \rangle & \langle X, Y \rangle \\ \langle LY, X \rangle & \langle LY, Y \rangle \end{vmatrix}}
{\begin{vmatrix} \langle X, X \rangle & \langle X, Y \rangle \\ \langle Y, X \rangle & \langle Y, Y \rangle \end{vmatrix}}.$$
The result follows by taking $X = \mathbf{x}_u$ and $Y = \mathbf{x}_v$.
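Proposition 4.22 turns the computation of K and H into pure calculus. The sympy sketch below (illustrative only) assembles E, F, G, e, f, g for the torus of example 4.8(d) and evaluates (4.35); the resulting Gaussian curvature is $K = \cos u / (a(b + a\cos u))$, positive on the outer half of the torus and negative on the inner half.

    # Sketch: Gaussian and mean curvature of the torus via eq. (4.35).
    import sympy as sp

    u, v, a, b = sp.symbols('u v a b', positive=True)
    x = sp.Matrix([(b + a*sp.cos(u))*sp.cos(v), (b + a*sp.cos(u))*sp.sin(v), a*sp.sin(u)])

    xu, xv = x.diff(u), x.diff(v)
    xuu, xuv, xvv = xu.diff(u), xu.diff(v), xv.diff(v)

    E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)
    n = xu.cross(xv)
    n = n / sp.sqrt(n.dot(n))
    e, f, g = xuu.dot(n), xuv.dot(n), xvv.dot(n)

    K = sp.simplify((e*g - f**2) / (E*G - F**2))
    H = sp.simplify((E*g - 2*F*f + e*G) / (2*(E*G - F**2)))
    print(K)    # cos(u)/(a*(a*cos(u) + b))
    print(H)    # mean curvature (its sign depends on the choice of normal)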
4.23 Theorem (Euler) Let $X_1$ and $X_2$ be unit eigenvectors of $L$ and let $X = (\cos\theta)X_1 + (\sin\theta)X_2$.
Then
$$II(X, X) = \kappa_1\cos^2\theta + \kappa_2\sin^2\theta. \tag{4.36}$$
Proof: Easy. Just compute $II(X, X) = \langle LX, X \rangle$, using the fact that $LX_1 = \kappa_1 X_1$, $LX_2 = \kappa_2 X_2$,
and noting that the eigenvectors are orthogonal. We get
$$\begin{aligned}
\langle LX, X \rangle &= \langle (\cos\theta)\kappa_1 X_1 + (\sin\theta)\kappa_2 X_2,\ (\cos\theta)X_1 + (\sin\theta)X_2 \rangle \\
&= \kappa_1\cos^2\theta\,\langle X_1, X_1 \rangle + \kappa_2\sin^2\theta\,\langle X_2, X_2 \rangle \\
&= \kappa_1\cos^2\theta + \kappa_2\sin^2\theta.
\end{aligned}$$

as illustrated d dt ∗ ∈ T R −→ T R3 ↓ ↓ α α (t) xi R −→ R3 −→ R d i (x ◦ α) |t . Using this reparametrization is quite natural. dt (1. but also one needs to be able to find the inverse function t in terms of s. The action of α (t) on a d function f on R3 is the same as the action of dt on the composition f ◦ α. The diagram below provides a more geometrical interpretation of the the velocity vector formula (1. Given a point on the curve. α (t)(xi ) |α(t) = 1. we say that the curve β(s) = α(t(s)) is a reparametrization of α.1. Many authors use the notation dα to denote the push-forward. The map α(t) from R to R3 induces a map α∗ from the tangent space of R to the d d tangent space of R3 . dt Since α (t) is a tangent vector in R3 . the velocity of the curve acting on a function. (1.15) dt The map α∗ on the tangent spaces induced by the curve α is called the push-forward. The image α∗ ( dt ) in T R3 of the tangent vector dt is what we call α (t) α∗ ( d ) = α (t). since we know from basic physics that the rate of change of the arclength is what we call speed ds v= = α (t) .2. from a theoretical point of view. since it takes into account that the velocity has a vector part as well as point of application. A common reparametrization of curve is obtained by using the arclength as the parameter.16) dt The arc length is obtained by integrating the above formula s= α (t) dt = dx1 dt 2 α + dx2 dt 2 + dx3 dt 2 dt (1. β (s) = [α(t(s))] .14) The modern notation is more precise. yields the directional derivative of that function in the direction tangential to the curve at the point in question. but we prefer to avoid this notation because most students fresh out of advanced calculus have not yet been introduced to the interpretation of the differential as a linear map on tangent spaces. In particular. we get the components of the the tangent vector.13 Definition If t = t(s) is a smooth. arclength parametrizations are ideal since any curve so parametrized has unit speed.14). it acts on functions in R3 . On the other hand. real valued function and α(t) is a curve in R3 . CURVES IN R3 5 If f is any smooth function on R3 . if we apply i α (t) to the coordinate functions x . The proof of this fact is a simple application of the chain rule and the inverse function theorem.17) In practice it is typically difficult to actually find an explicit arclength parametrization of a curve since not only does one have calculate the integral. (1. we formally define α (t) in local coordinates by the formula α (t)(f ) |α(t) = d (f ◦ α) |t .

Differentiating the relation T · T = 1. a sin .20) so we conclude that the vector T is orthogonal to T . Taking the length of both sides of last equation. t s(t) = 0 t (−aω sin ωu)2 + (aω cos ωu)2 + b2 du a2 ω 2 + b2 du 0 = = ct. we will call T the unit unit tangent vector. Then V(t) = (−aω sin ωt. b ). b). aω cos ωt. bt).19) . a sin ωt. ωs ωs ωs . where. Let N be a unit vector orthogonal to T .6 CHAPTER 1. we get 2T · T = 0. and recalling that N has unit length.21) We call N the unit normal to the curve.18) The vector T (s) is tangential to the curve and it has unit length. we deduce that κ = T (s) (1. (1. and κ the curvature. VECTORS AND CURVES = α (t(s))t (s) 1 = α (t(s)) s (t(s)) α (t(s)) . and let κ be the scalar such that T (s) = κN (s). Leibnitz notation makes this even more self evident dx ds = = dx dt = dt ds dx dt dx dt dx dt ds dt 1. c c c Frenet Frames Let β(s) be a curve parametrized by arc length and let T(s) be the vector T (s) = β (s).14 Example Let α(t) = (a cos ωt. (1. Hereafter. c = The helix of unit speed is then given by β(s) = (a cos a2 ω 2 + b2 . = α (t(s)) and any vector divided by its length is a unit vector. (1.22) (1.

j. In particular. N.25) for some quantity τ . which we will call the torsion. (1. we must have B (s) = −τ N (s) (1. In other words. differentiating the equation T · B = 0. N must be a linear combination of T and B. B) forms an orthonormal set. we find that B · B = 0. we get B · T + B · T = 0. we also conclude that B must also be orthogonal to T . We now introduce a third vector B = T × N.26) Proof: We need only establish the equation for N . B} is called the Frenet Frame or the repere mobile (moving frame). The advantage of this basis over the fixed (i.1. Hence.k) basis is that the Frenet frame is naturally adapted to the curve. rewriting the last equation B · T = −T · B = −κN · B = 0. The binormal B is something like the flag in the back of sand dune buggy.23) which we will call the binormal vector. This can only happen if B is orthogonal to the T B-plane. it T = 0 at a particular point. we see that a = N · T. N. we get 2N · N = 0.15 Theorem Let β(s) be a unit speed curve with curvature κ and torsion τ . In particular. −τ B (1. the curve is locally well approximated by a straight line. The triplet of vectors (T. Differentiating the equation N · N = 1. and b = N · B. The quantity B then measures the rate of change in the up and down direction of an observer moving with the curve always facing forward in the direction of the tangent vector. CURVES IN R3 7 It makes sense to call κ the curvature since. The set of basis vectors {T. The torsion is similar to the curvature in the sense that it measures the rate of change of the binormal. so B must be proportional to N .24) If we differentiate the relation B · B = 1. It propagates along with the curve with the tangent vector always pointing in the direction of motion.2. . and the normal and binormal vectors pointing in the directions in which the curve is tending to curve. Then T N B = = −κT = κN τB . (1. 1. N = aT + bB. Taking the dot product of last equation with T and B respectively. The rate of change of the direction of the tangent vector is precisely what one would expect to measure how much a curve is curving. the only way one can have a non-zero derivative is if B is changing directions. Since the binormal also has unit length. Furthermore. so N is orthogonal to N. T ·T =N ·N =B·B =1 T · N = T · B = N · B = 0. hence B is orthogonal to B. a complete description of how the curve is curving can be obtained by calculating the rate of change of the frame in terms of the frame itself. if T is a unit vector. we expect that at that point. that is. then T (s) is not zero only if the direction of T is changing.

VECTORS AND CURVES On the other hand. 0). . we find that N · T = −N · T = −N · (κN ) = −κ N · B = −N · B = −N · (−τ N ) = τ.17 Example Consider a circle of radius r whose equation is given by α(t) = (r cos t.27) The group-theoretic significance of this matrix formulation is quite important and we will come back to this later when we talk about general orthonormal frames. perhaps it suffices to point out that the appearance of an antisymmetric matrix in the Frenet equations is not at all coincidental. The Frenet frame equations (1. and N · B = 0. B 0 −τ 0 B (1. = T · [κN × (κ N + −κ2 T + κτ B)] = T · [κ3 B + κ2 τ T ] = κ2 τ β · [β × β ] = κ2 β · [β × β ] = β ·β β · [β × β ] τ 1. b = τ . differentiating the equations N · T = 0. β · β = κ2 κ2 = β 2 β (s) = κ N + κN = κ N + κ(−κT + τ B) = κ N + −κ2 T + κτ B.28) Proof: If β(s) is a unit speed curve. we have β (s) = T . We conclude that a = −κ. r sin t. At this time.8 CHAPTER 1.16 Proposition Let β(s) be a unit speed curve with curvature κ > 0 and torsion τ . The following theorem provides a computational method to calculate the curvature and torsion directly from the equation of a given unit speed curve. and thus N = −κT + τ B. β · β = (κN ) · (κN ). 1. Then κ τ = = β (s) β · [β × β ] β ·β (1.26) can also be written in matrix form as shown below.      T 0 κ 0 T  N  =  −κ 0 τ   N  . Then T = β (s) = κN.

0) r r 1 s 1 s T = ( sin . r cos t. 1. = dt (1. and the curvature approaches 0.29) Proof: Let s(t) be the arclength and let β(s) be a unit speed reparametrization. CURVES IN R3 Then. ds/dt = r and s = rt. Therefore. which we recognize as the formula for the length of an arc of circle of radius t. r cos . 0) r r r r κ = β = T = = = 1 1 s s sin2 + 2 cos2 + 02 r2 r r r s 1 s (sin2 + cos2 ) r2 r r 1 r This is a very simple but important example. speed v and curvature κ.1. If one were walking along a great circle on a very large sphere (like the earth) one would be perceive the space to be locally flat. subtended by a central angle whose measure is t radians.2. 0) (−r sin t)2 + (r cos t)2 + 02 9 r2 (sin2 t + cos2 t) = = r. the circle locally looks more and more like a straight line. acceleration A. α (t) α (t) = = (−r sin t. A small circle has large curvature and a large circle has small curvature. We conclude that s s β(s) = (−r sin . Then α(t) = β(s(t)) and by the chain rule V = α (t) = β (s(t))s (t) = vT A = α (t) dv = T + vT (s(t))s (t) dt dv = T + v(κN )v dt dv = T + v 2 κN dt . − sin . dv T + v 2 κN. The fact that for a circle of radius r the curvature is κ = 1/r could not be more intuitive.18 then Proposition Let α(t) be a curve of velocity V. 0) r r is a unit speed reparametrization. V A = vT. The curvature of the circle can now be easily computed s s T = β (s) = (− cos . − cos . As the radius of the circle approaches infinity.

Let p be point on the curve. This circle is a “secant” approximation to the tangent circle. 1. 1. The osculating circle can be envisioned by a limiting process similar to that of the tangent to a curve in differential calculus. Any curve where κ/τ = constant is called a helix. where c = c c c aω ωs aω ωs b (− sin . − 2 sin . This tangential circle is called the osculating circle. The equation states that a particle moving along a curve in space feels a component of acceleration along the direction of motion whenever there is a change of speed. The ratio κ/τ = aω/b is particularly simple. and let q1 and q2 two nearby points. ) c c c c c aω 2 ω 2 s aω 2 ωs (− 2 cos . Then α = (x . 0) . cos . VECTORS AND CURVES Equation 1. b a2 ω 5 c4 c c5 a2 ω 4 Simplifying the last expression and substituting the value of c.19 Example (Helix) β(s) β (s) β (s) beta (s) κ2 = = = = = = κ τ = = = = ωs bs ωs .20 Example (Plane curves) Let α(t) = (x(t). y . ). 0) c c c c aω 3 ω 2 s aω 3 ωs (− 3 cos .10 CHAPTER 1. As the points q1 and q2 approach the point p. 0) c c c c β ·β a2 ω 4 c4 aω 2 ± 2 c (β β β ) β ·β (a cos b c − aω cos ωs c2 c aω 3 ωs c2 sin c 2 a2 ω 2 + b2 − aω sin ωs c23 c − aω cos ωs c2 c 2 c4 a2 ω 4 . The three points determine a circle uniquely. of which the circular helix is a special case. The osculating circle always lies in the the T N -plane. 0).29 is important in physics. a sin . which by analogy. we get τ κ = bω a2 ω 2 + b2 aω 2 = ± 2 2 a ω + b2 Notice that if b = 0 the helix collapses to a circle in the xy-plane. − 3 sin . y(t). the “secant” circle approaches the osculating circle. In this case the formulas above reduce to κ = 1/a and τ = 0. The centripetal acceleration and any point is a = v3 κ = v2 r where r is the radius of a circle which has maximal tangential contact with the curve at the point in question. and a centripetal acceleration in the direction of the normal whenever it changes direction. is called the osculating plane.

c2 The integrals (1. v 3 κτ B + . sin 2 . y . In cases where the given curve α(t) is not of unit speed. CURVES IN R3 α α κ = = (x . . .32) where (α α α ) is the triple vector product [α × α ] · α . . . using the fundamental theorem of calculus. Proof: α α α = = = = = vT v T + v 2 κN (v 2 κ)N ((s(t))s (t) + . 2c 0 cos s (1. we have β (s) = (cos s2 t2 . 0) (x .31) (1. . 0 c 2c c 2c s = . as well as the spiral itself arise in the computation of the diffraction pattern of a coherent beam of light by a straight edge. the following proposition provides formulas to compute the curvature and torsion in terms of α 1. The curvature is of the Cornu spiral is given by κ = | x y − y x |= (β · β )1/2 t2 s t2 s = − 2 sin 2 .2. the curve is of unit speed. . 0). y(s).1. 0). . v 3 κN + . then κ2 τ = = α ×α 2 α 6 (α α α ) . and s is indeed the arc length.30) Then. 0) α ×α = α 3 |xy −y x | = (x 2 + y 2 )3/2 = 0 11 τ 1. 2 cos 2 . These functions.30) defining the coordinates of the Cornu spiral are the classical Frenel Integrals. y .21 Example (Cornu Spiral) Let β(s) = (x(s). 2c2 2c Since β = v = 1 . where x(s) y(s) = = t2 dt 2c2 0 s t2 sin 2 dt. α ×α 2 (1.22 Proposition If α(t) is a regular curve in R3 .

κ(s) its curvature and τ (s) its torsion. N . (s > 0) be any two analytic functions.23 Theorem (Fundamental Theorem of Curves) Let κ(s) and τ (s). β(s) = β(0) + T0 s + κ0 N0 s2 + κ0 τ0 B0 s3 . The proof however. β (0) = T (0) = T0 β (0) = (κN )(0) = κ0 N0 β (0) = (−κ2 T + κ N + κτ B)(0) = −κ2 T0 + κ0 N0 + κ0 τ0 B0 0 Keeping only the lowest terms in the components of T . Then there exists a unique curve (unique up to its position in R3 ) for which s is the arclength. 1. VECTORS AND CURVES The other terms are unimportant here because as we will see α × α is proportional to B α ×α α ×α κ (α × α ) · α τ = v 3 κ(T × N ) = v 3 κB = v3 κ α ×α = v3 6 2 = v κ τ (α α α ) = v 6 κ2 (α α α ) = α ×α 2 1. 2 6 (1.. 2! 3! (1. The first three terms approximate the curve by a parabola which lies in the osculating plane (T N -plane). we leave the proof as an exercise. Proof: Pick a point in R3 . 1. 1. If τ0 = 0.25 Proposition A curve with κ = 0 is part of a straight line. then locally the curve looks like a straight line. If κ0 = 0. Pick any orthogonal frame {T. The curve is then determined uniquely by its Taylor expansion in the Frenet frame as in equation (1.. In this sense.34). N B}. then locally the curve is a plane curve which lies on the osculating plane. and B. becomes much harder and we refer the reader to other standard texts for the proof.12 CHAPTER 1. we get the first order Frenet approximation to the curve 1 1 .34) The first two terms represent the linear approximation to the curve.33) Since we are assuming that s is an arclength parameter. .24 Remark It is possible to prove the theorem just assuming that κ(s) and τ (s) are continuous. By an appropriate affine transformation. the curvature measures the deviation of the curve from being a straight line and the torsion (also called the second curvature) measures the deviation of the curve from being a plane curve.3 Fundamental Theorem of Curves Some geometrical insight into the significance of the curvature and torsion can be gained by considering the Taylor series expansion of an arbitrary unit speed curve β(s) about s = 0 β(s) = β(0) + β (0)s + β (0) 2 β (0) 3 s + s + . we may assume that this point is the origin.

1. 13 are linearly .26 Proposition A curve α(t) with τ = 0 is a plane curve. This linear homogeneous equation will have a solution of the form α = c1 α1 + c2 α2 + c3 .3. α . then (α α α ) = 0. where n = c1 × c2 ci = constant vectors. Proof: If τ = 0. This means that the three vectors α . This curve lies in the plane (x − c3 ) · n = 0. and α dependent and hence there exist functions a1 (s). FUNDAMENTAL THEOREM OF CURVES 1.a2 (s) and a3 (s) such that a3 α + a2 α + a1 α = 0.
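Before leaving the theory of curves, the formulas of Proposition 1.22 can be checked symbolically on the circular helix of Example 1.19, α(t) = (a cos ωt, a sin ωt, bt). The following is only a sketch (Python with the sympy library assumed; any computer algebra system would do):

    import sympy as sp

    t, a, b, w = sp.symbols('t a b omega', positive=True)
    alpha = sp.Matrix([a*sp.cos(w*t), a*sp.sin(w*t), b*t])
    d1, d2, d3 = (alpha.diff(t, k) for k in (1, 2, 3))

    cross = d1.cross(d2)
    kappa = sp.simplify(sp.sqrt(cross.dot(cross)) / sp.sqrt(d1.dot(d1))**3)   # |a' x a''| / |a'|^3
    tau   = sp.simplify(cross.dot(d3) / cross.dot(cross))                     # (a' a'' a''') / |a' x a''|^2

    print(kappa)   # equals a*omega**2 / (a**2*omega**2 + b**2)
    print(tau)     # equals b*omega   / (a**2*omega**2 + b**2)

The output reproduces the values of κ and τ obtained for the helix in Example 1.19.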


) In this section we introduce linear algebraic tools that will allow us to interpret the differential in terms of an linear operator. and let Tp (Rn ) be the tangent space at p.2 Definition Let f : Rn → R be a real-valued C ∞ function.1 1-Forms One of the most puzzling ideas in elementary calculus is the idea of the differential. We recall that such a map must satisfy the following properties a) b) φ(Xp ) ∈ R.3) In particular.Chapter 2 Differential Forms 2. 2. the differential of a dependent variable y = f (x). Xp . the differential df of a function is an operator which assigns to a tangent vector Xp . In the usual definition.4) dxi ( j ) = ∂x ∂xj The set of all linear functionals on a vector space is called the dual of the vector space. Yp ∈ Tp (Rn ) (2. A 1-form at p is a linear map φ from Tp (Rn ) into R. is given in terms of the differential of the independent variable by dy = f (x)dx.2) for every vector field in X in Rn . if we apply the differential of the coordinate functions xi to the basis vector fields. It is an standard theorem in linear algebra that the dual of a vector space is also a vector space of the 15 . φ(aXp + bYp ) = aφ(Xp ) + bφ(Yp ). we get ∂xi ∂ i = δj (2. at any point p. the directional derivative of the function in the direction of that vector df (X)(p) = Xp (f ) = f (p) · X(p) (2. What does dx mean? What is the difference between ∆x and dx? How much ”smaller” than ∆x does dx have to be? There is no trivial resolution to this question. The problem is with the quantity dx.1 Definition Let p ∈ Rn .1) A 1-form is a smooth choice of a linear map φ as above for each point in the space. We define the differential df of the function as the 1-form such that df (X) = X(f ) (2. Most introductory calculus texts evade the issue by treating dx as an arbitrarily small quantity (which lacks mathematical rigor) or by simply referring to dx as an infinitesimal (a term introduced by Newton for an idea that could not otherwise be clearly defined at the time. ∀Xp ∈ Rn ∀a. 2. In other words. b ∈ R.

then the one form α is called exact. A 1-form is also called a covariant tensor of rank 1.6) The definition of differentials as linear functionals on the space of vector fields is much more satisfactory than the notion of infinitesimals. We emphasize that not all one forms are obtained by taking the differential of a function. then locally α = a1 (x)dx1 +. xn } be coordinate functions in a neighborhood U of a point p. We will adopt the convention to always write the covariant components of a covector with the indices down. . In vector calculus and elementary physics. .5) = ∂f i dx ∂xi Proof: The differential df is by definition a 1-form. ( ∂xn )p } of the tangent space. Therefore. If there exists a function f . such that α = df . (dxn )p }. or just simply a covector. an ) are called the covariant components of the covector. to prove the proposition. (2. . . since the new definition is based on the rigorous machinery of linear algebra. . coincides with ∂ definition 2. . ∗ As we have already noted. . If α is an arbitrary 1-form. . DIFFERENTIAL FORMS same dimension. exact forms are important in understanding the path independence of line integrals of conservative vector fields.3 Proposition Let f be any smooth function in Rn and let {x1 . . 2. To see this. The coefficients (a1 .5 applied to an arbitrary tangent vector. the cotangent space Tp (Rn ) of 1-forms at a point p has a natural vector space structure. . Physicists often refer to the covariant components of a 1-form as a covariant vector and this causes some confusion about the position of the indices. . . at each point. . + an (x)dxn . The space Tp (Rn ) is called the cotangent space of Rn at the point p. Equation (2. it suffices to show that the expression 2. Thus. .4) indicates that the set of differential forms {(dx1 )p .16 CHAPTER 2. . so. . (dxn )p } constitutes the basis ∂ ∂ of the cotangent space which is dual to the standard basis {( ∂x1 )p . the space Tp Rn of all 1-forms at p is a vector space which is the dual of the tangent space Tp Rn . . The union of all the cotangent spaces as p ranges over all points in Rn is called the cotangent bundle T ∗ (Rn ).7) where the coefficients ai are C ∞ functions. it must be expressible as a linear combination of the basis elements {(dx1 )p . We can easily extend the operations of addition and scalar multiplication to . Then.2. the differential df is given locally by the expression n df = i=1 ∂f i dx ∂xi (2. . . . . consider a tangent vector Xp = aj ( ∂xj )p and apply the expression above ( ∂f i dx )p (Xp ) ∂xi = = = = = = = ∂f i j ∂ dx )(a )(p) ∂xi ∂xj ∂ ∂f aj ( i dxi )( j )(p) ∂x ∂x i j ∂f ∂x a ( i j )(p) ∂x ∂x ∂f i aj ( i δj )(p) ∂x ∂f ( i ai )(p) ∂x f (p) · X(p) df (X)(p) ( (2.
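Since exact forms reappear in the discussion of path independence, a small symbolic sketch may be useful here (Python with sympy assumed; the coefficients M and N below are made up for the illustration). It tests the mixed-partials condition ∂N/∂x = ∂M/∂y for a 1-form α = M dx + N dy and then reconstructs a function f with α = df:

    import sympy as sp

    x, y = sp.symbols('x y')
    M = 2*x*y + sp.cos(x)           # hypothetical components of alpha = M dx + N dy
    N = x**2 + 3*y**2

    integrable = sp.simplify(sp.diff(N, x) - sp.diff(M, y)) == 0   # mixed-partials condition
    f = sp.integrate(M, x)
    f += sp.integrate(sp.simplify(N - sp.diff(f, y)), y)           # add the purely y-dependent part

    print(integrable)                                                        # True
    print(sp.simplify(sp.diff(f, x) - M), sp.simplify(sp.diff(f, y) - N))    # 0 0, so df = alpha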

if α = ai dxi and β = bj dxj . The tensor product of α and β is defined as the bilinear map α ⊗ β such that (α ⊗ β)(X. since physicists often refer to the components Ti j as a covariant tensor. Yi ∈ T (Rn ). For this purpose.8) 2. but also provides a rich algebraic and geometric structure which is vast generalization of cross products (which only make sense in R3 ) to Euclidean spaces of all dimensions. ) ∂xk ∂xl ∂ ∂ )β( l ) ∂xk ∂x ∂ ∂ = (ai dxi )( k )(bj dxj )( l ) ∂x ∂x j i = ai δk bj δl = ak bl = α( A quantity of the form T = Tij dxi ⊗ dxj is called a covariant tensor of rank 2. f 1 Y1 + f 2 Y2 ) = f 1 φ(X1 . That is φ(f 1 X1 + f 2 X2 .10) . Y ) = α(X)β(Y ) (2. and we may think of the set {dxi ⊗ dxj } as a basis for all such tensors. 2. then. (2. TENSORS AND FORMS OF HIGHER RANK the space of all 1-forms by defining (α + β)(X) = α(X) + β(X) (f α)(X) = f α(X) for all vector fields X and all smooth functions f . Y1 ) + f 2 φ(X2 . the notion of the differential dx is not made precise in elementary treatments of calculus. the differential of area dxdy in R2 . We must caution the reader again that there is possible confusion about the location of the indices. Y2 ). as well as the differential of surface area in R3 also need to be revisited in a more rigorous setting.2. Y1 ) + f 2 φ(X1 . if it is linear on each slot. for example. (α ⊗ β)( ∂ ∂ . Y1 ) = f 1 φ(X1 .2 Tensors and Forms of Higher Rank As we mentioned at the beginning of this chapter. In a similar fashion.4 Definition A map φ : T (Rn ) × T (Rn ) −→ R is called a bilinear map on the tangent space. Y1 ) φ(X1 . f i ∈ C ∞ Rn Tensor Products 2. we introduce a new type of multiplication between forms which not only captures the essence of differentials of area and volume. g) = X(f )Y (g) for any pair of arbitrary functions f and g.9) for all vector fields X and Y .5 Definition Let α and β be 1-forms. one can also define the tensor product of vectors X and Y as the bilinear map X ⊗ Y such that (X ⊗ Y )(f.2. so consequently. ∀Xi . Thus. 17 (2.

the components of X ⊗ Y in the basis given by ai bj . 3. an inner product can be viewed as a generalization of the dot product. By linearity. Y ) on the tangent space is called a vector inner product if 1.6 Definition A bilinear map g(X. X) ≥ 0. For this reason gij is also called a metric and g is called a metric tensor.18 CHAPTER 2. we may think of a tensor of type T 2. An assignment of a tensor to each point in Rn is called a tensor field. In this case the quantity g(X. The output is a real number. X) = X 2 gives the square of the length of the vector. Y ) = δij ai bj . then g( g(X. Since we assume g(X. The action of the 1-form on the vector gives α(X) ∂ ) ∂xi ∂ = bj ai (dxj )( i ) ∂x j = bj ai δi = ai bi .12) The quantity g(X. Thus. then. it is easy ∂ ∂ to see that if X = ai ∂xi and Y = bj ∂xj are two arbitrary vectors. Y ) = g(Y. ∂xi ∂x This object is also called a tensor of type T 2.1 as map with three input slots. ∀X. X). X) = 0 iff X = 0. DIFFERENTIAL FORMS ∂ ∂xi ∂ ∂ If X = ai ∂xi and Y = bj ∂xj . Y ) = gij ai bj . and in fact one can have tensors of mixed ranks. The map expects two functions in the first two slots and a vector in the third one. The notion of tensor products can easily be generalized to higher rank. which we assume to be non-singular. Y ) to be bilinear. The action of the map is bilinear on the two functions and linear on the vector. In this sense. The standard Euclidean inner product is obtained if we take gij = δij . The components gij of the inner product as thus given by ∂ ∂ . g(X. g(X. T = T ij k Inner Products ∂ ∂ Let X = ai ∂xi and Y = bj ∂xj be two vector fields and let g(X. an inner product is completely specified by its action on ordered pairs of basis vectors.13) i ∂xj ∂x where gij is a symmetric n × n matrix. = (bj dxj )(ai . Y ) is an example of a bilinear map which the reader will recognize as the usual dot product. ) = gij (2.11) i ∂x ∂x is called a contravariant tensor of rank 2 in Rn . For example. ∂ Another interpretation of the dot product can be seen if instead one considers a vector X = ai ∂xi j and a 1-form α = bj dx . 2. Any bilinear map of the form ⊗ ∂ ∂xj are simply ∂ ∂ ⊗ j (2. a tensor of contravariant rank 2 and covariant rank 1 in Rn is represented in local coordinates by an expression of the form T = T ij ∂ ∂ ⊗ j ⊗ dxk . 2.1 . g(X. (2.
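As a concrete illustration of the components g_ij, the following sketch (Python with sympy assumed) pulls back the Euclidean inner product along the spherical coordinate map by forming J^T J for the Jacobian J; it anticipates the arclength expressions in the examples below.

    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)
    X = sp.Matrix([rho*sp.sin(theta)*sp.cos(phi),
                   rho*sp.sin(theta)*sp.sin(phi),
                   rho*sp.cos(theta)])
    J = X.jacobian([rho, theta, phi])
    g = sp.simplify(J.T * J)        # g_ij = < d/du^i , d/du^j > in the coordinates (rho, theta, phi)
    print(g)                        # diag(1, rho**2, rho**2*sin(theta)**2)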

19 (2. 1 (2. If we let g ij be the inverse of the matrix gij . the differential of arclength is ds2 = dr2 + r2 dθ2 + dz 2 .7 Example In cylindrical coordinates. 0 ρ2 sin θ2 (2. In this case the metric tensor has components  1 0 gij =  0 r2 0 0  0 0 .15) we can also raise covariant indices by the equation bi = g ij bj (2.14) and we recover the expression for the inner product. although.20) the differential of arclength is given by ds2 = dρ2 + ρ2 dθ2 + ρ2 sin2 θdφ2 . In this case the metric tensor has components  1 0 gij =  0 ρ2 0 0  0 . that is i g ik gkj = δj .18) (2.22) . we see that the metric accepts a dual interpretation.21) (2. In elementary treatments of calculus authors often ignore the subtleties of differential 1-forms and tensor products and define the differential of arclength as ds2 ≡ gij dxi dxj . one as bilinear pairing of two vectors g : T (Rn ) × T (Rn ) −→ R and another as a linear isomorphism g : T (Rn ) −→ T (Rn ) that maps vectors to covectors and vice-versa.17) (2. what is really meant by such an expression is ds2 ≡ gij dxi ⊗ dxj . we see that the equation above can be rewritten as ai bj = gij ai bj .2. 2. thus transforming the contravariant components of a vector to covariant ones.2. In view of the above discussion.19) 2. (2. (2. Equation (2.14) shows that the metric can be used as mechanism to lower indices.8 Example In spherical coordinates x = ρ sin θ cos φ y = ρ sin θ sin φ z = ρ cos θ.16) We have mentioned that the tangent and cotangent spaces of Euclidean space at a particular point are isomorphic. TENSORS AND FORMS OF HIGHER RANK If we now define bi = gij bj .

A1 . Y ) = −φ(Y. if A is a covariant vector with components Aµ = (ρ. the determinant function is a perfect example of an alternating bilinear map on the space M2×2 of two by two matrices. Y ) = α(X)β(Y ) − α(Y )β(X) (2. g) be the pair.24) (2. A1 .10 Definition A 2-form φ is a map φ : T (Rn ) × T (Rn ) −→ R which is alternating and bilinear. A2 . The matrix representing Minkowski’s metric g is given by g = diag(−1.20 CHAPTER 2. x2 . (2. in which case. 1). The wedge product of the two 1-forms is the map α ∧ β : T (Rn ) × T (Rn ) −→ R given by the equation (α ∧ β)(X.26) . X) = −t2 + (x1 )2 + (x2 )2 + (x3 )2 . where M(1. 1. DIFFERENTIAL FORMS Minkowski Space An important object in mathematical physics is the so called Minkowski space which is can be defined as the pair Let (M1. In fact. Non-zero vectors with zero length are called Light-like vectors and they are associated with with particles which travel at the speed of light (which we have set equal to 1 in our system of units. X) The alternating property is reminiscent of determinants of square matrices which change sign if any two column vectors are switched. for example.) The Minkowski metric gµν and its matrix inverse g µν are also used to raise and lower indices in the space in a manner completely analogous to Rn . x1 . 1. the differential of arclength is given by ds2 = gµν dxµ ⊗ dxν = −dt ⊗ dt + dx1 ⊗ dx1 + dx2 ⊗ dx2 + dx3 ⊗ dx3 = −dt2 + (dx1 )2 + (dx2 )2 + (dx3 )2 .11 Definition Let α and β be 1-forms in Rn and let X and Y be any two vector fields.23) (2. 2. Minkowski’s metric is not really a metric since g(X.25) Note: Technically speaking.3) = {(t. xi ∈ R} and g is the bilinear map such that g(X. X) = 0 does not imply that X = 0. A3 ) Wedge Products and n-Forms 2. then the contravariant components of A are Aµ = g µν Aν = (−ρ. one has to view M2×2 as the space of column vectors. x3 )| t. A3 ). Thus.9 Definition A map φ : T (Rn ) × T (Rn ) −→ R is called alternating if φ(X. Of course.3 . A2 . for the definition above to apply. 2.

Then α ∧ β = (Ai Bj )dxi ∧ dxj Proof: Let X and Y be arbitrary vector fields. Y ) = = = = = α(f 1 X1 + f 2 X2 )β(Y ) − α(Y )β(f 1 X1 + f 2 X2 ) [f 1 α(X1 ) + f 2 α(X2 )]β(Y ) − α(Y )[f 1 β(X1 ) + f 2 α(X2 )] f 1 α(X1 )β(Y ) + f 2 α(X2 )β(Y ) + f 1 α(Y )β(X1 ) + f 2 α(Y )β(X2 ) f 1 [α(X1 )β(Y ) + α(Y )β(X1 )] + f 2 [α(X2 )β(Y ) + α(Y )β(X2 )] f 1 (α ∧ β)(X1 .12 Theorem If α and β are 1-forms. we have dxi ∧ dxj = −dxj ∧ dxi if i = j. F 2 . The similarity between wedge products is even more striking in the next proposition but we emphasize again that wedge products are by far much more powerful than cross products.15 Corollary If α and β are 1-forms. Y and functions f 1 . TENSORS AND FORMS OF HIGHER RANK 2. Proof: Consider 1-forms. Proof: : We break up the proof into the following two lemmas.27) This last result tells us that wedge products have characteristics similar to cross products of vectors in the sense that both of these products are anti-commutative. since the 1-forms are linear functionals. whereas dxi ∧ dxi = −dxi ∧ dxi = 0 since any quantity which is equal to the negative of itself must vanish. Y ) (2. because wedge products can be computed in any dimension. then α ∧ β is a 2-form.28) .2. we get (α ∧ β)(f 1 X1 + f 2 X2 .13 Lemma The wedge product of two 1-forms is alternating. 2. Then. β. 2.16 Proposition Let α = Ai dxi and β = Bi dxi be any two 1-forms in Rn . vector fields X1 . X) 21 2. X2 . Then (α ∧ β)((X. for example.2.14 Lemma The wedge product of two 1-forms is bilinear. Y ) + f 2 (α ∧ β)(X2 . Y ) The proof of linearity on the second slot is quite similar and it is left to the reader. 2. This means that we need to be careful to introduce a minus sign every time we interchange the order of the operation. then (α ∧ β)(X. Y ) = (Ai dxi )(X)(Bj dxj )(Y ) − (Ai dxi )(Y )(Bj dxj )(X) = (Ai Bj )[dxi (X)dxj (Y ) − dxi (Y )dxj (X)] = (Ai Bj )(dxi ∧ dxj )(X. Y ) = α(X)β(Y ) − α(Y )β(X) = −(α(Y )β(X) − α(X)β(Y )) = −(α ∧ β)(Y. Proof: Let α and β be 1-forms in Rn and let X and Y be any two vector fields. α. then α ∧ β = −β ∧ α (2. Thus.

then the coefficients of the wedge product are the components of the cross product of A = A1 i + A2 j + A3 k and B = B1 i + B2 j + B3 k 2.18 Example let x = r cos θ and y = r sin θ.22 CHAPTER 2. but still alternating in the sense that if one switches any two differentials. The differential of area in polar coordinates is a special example of the change of coordinate theorem for multiple integrals. In most cases what is meant by these entities are wedge products of 1-forms 4.30).17 Example Let α = x2 dx − y 2 dy and β = dx + dy − 2xydz. We state (without proof) that all 2-forms φ in Rn can be expressed as linear combinations of wedge products of differentials such as φ = Fij dxi ∧ dxj (2. Then dx ∧ dy = = = = (−r sin θdθ + cos θdr) ∧ (r cos θdθ + sin θdr) −r sin2 θdθ ∧ dr + r cos2 θdr ∧ dθ (r cos2 θ + r sin2 θ)(dr ∧ dθ) r(dr ∧ dθ) (2. then dx ∧ dy = det|J|du ∧ dv.20 Definition A 3=form φ in Rn is an object of the following type φ = Aijk dxi ∧ dxj ∧ dxk (2. The result of the last example yields the familiar differential of area in polar coordinates 2. 2.19 Remark 1. There is nothing really wrong with using . if n = 3.29) 2. This is fact what we will do in the next definition. It is easy to establish that if x = f 1 (u.31) where we assume that the wedge product of three 1-forms is associative. Quantities such as dxdy and dydz which often appear in calculus. for that matter) more in the spirit of multilinear maps. where det|J| is the determinant of the Jacobian of the transformation. v) and y = f 2 (u. 3. DIFFERENTIAL FORMS Because of the antisymmetry of the wedge product the last equation above can also be written as n n α∧β = i=1 j<i (Ai Bj − Aj Bi )(dxi ∧ dxj ) In particular. are not well defined. we challenge the reader to come up with a rigorous definition of three forms (or an n-form. Then α∧β = (x2 dx − y 2 dy) ∧ (dx + dy − 2xydz) = x2 dx ∧ dx + x2 dx ∧ dy − 2x3 ydx ∧ dz − y 2 dy ∧ dx − y 2 dy ∧ dy + 2xy 3 dy ∧ dz = x2 dx ∧ dy − 2x3 ydx ∧ dz − y 2 dy ∧ dx + 2xy 3 dy ∧ dz = (x2 + y 2 )dx ∧ dy − 2x3 ydx ∧ dx + 2xy 3 dy ∧ dz 2.30) In a more elementary (ie: sloppier) treatment of this subject one could simply define 2-forms to be gadgets which look like the quantity in equation (2. v). then the entire expression changes by a minus sign.
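The change-of-variables statement quoted in the remark above, dx ∧ dy = det|J| du ∧ dv, can be checked symbolically; the sketch below (Python with sympy assumed) recovers the factor r of Example 2.18 for polar coordinates.

    import sympy as sp

    r, theta = sp.symbols('r theta', positive=True)
    x, y = r*sp.cos(theta), r*sp.sin(theta)
    J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
                   [sp.diff(y, r), sp.diff(y, theta)]])
    print(sp.simplify(J.det()))     # r, hence dx ^ dy = r dr ^ dtheta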

32) . we have m m n (Rn ) = p (R ). that is. f2 dx3 ∧ dx1 .. . dxim = ∂dxi0 (2.3 Exterior Derivatives In this section we introduce a differential operator which generalizes the classical gradient. We will thing of 0-forms as being ordinary functions.. a little combinatorics. .2. EXTERIOR DERIVATIVES 23 definition ref3form. It is just that this definition is coordinate dependent and mathematicians in general (specially differential geometers) prefer coordinate-free definitions.im (x)dxi0 ∧ dxi1 . gdx2 f dx1 ∧ dx2 Dim 1 2 1 Dim 1 3 3 1 Forms f f1 dx1 . The exterior derivative of α is the (m+1-form) dα given by dα = dAi1 . 2.21 Definition Let α be an m-form (written in coordinates as in equation (2.. m Denote by (p) (Rn ) the space of m-forms at p ∈ Rn . dxim ∂Ai1 .32)). More specifically. 2.. .. . . . we want to count the dimensions of the space of k-forms in Rn in the sense of vector spaces.. And now. This vector space has dimension dim m n (p) (R ) = n! m!(n − m)! 0 for m ≤ n and dimension 0 if m > n. Let us count the number of differential forms in Euclidean space. then α can be written in the form α = Ai1 . f3 dx1 ∧ dx2 f1 dx1 ∧ dx2 ∧ dx3 The binomial coefficient pattern should be evident to the reader.. Also we will call (Rn ) the union of all (p) (Rn ) as p ranges through all the points in Rn . p If α ∈ m (Rn ).3. f2 dx2 . In other words. we write df = ∂f i dx ∂xi . curl and divergence operators. We shall identify (p) (Rn ) with the space of C ∞ functions at m m p. theorems and proofs.im ∧ dxi0 ∧ dxi1 .im (x)dxi1 ∧ .33) In the special case where α is a 0-form.. Since functions are the ”scalars”. R2 0-forms 1-forms 2-forms R3 0-forms 1-forms 2-forms 3-forms Forms f f dx1 . f3 dx3 f1 dx2 ∧ dx3 . a function.. the space of 0-forms as a vector space has dimension 1. dxim (2.

32).23 Example Let α = P (x... . suppose that α is represented locally as in equation (2.jq + (Ai1 ...jq ∧ (dxj1 ∧ .36) = [Ai1 . y)dx + Q(x.jq )](dxi1 ∧ ... . dxjq .. We have ∂f ) ∂dxi ∂2f dxj ∧ dxi = ∂xj ∂xi 1 ∂2f ∂2f = [ j i i j ]dxj ∧ dxi 2 ∂x ∂x ∂x ∂x = 0 Now. .. The (−1)p factor comes in because to pass the term dBji .ip Bj1 .β ∈ q (2. ∧ dxip ) ∧ (dxj1 ∧ .32). . ∧ dxip )] ∧ [Bj1 .. .34) Proof: a) Obvious from equation (2.. ∧ dxjq ) Now we take take the exterior derivative of the last equation taking into account that d(f g) = f dg + gdf for any functions f and g.. . . .. It follows from 2.ip (x)dxi1 ∧ . .33 that d(dα) = d(dAi1 .jq (x)dxj1 ∧ ...ip )Bj1 .ip ∧ (dxi1 ∧ . ∧ dxip ) ∧ (dxj1 ∧ .jp through p 1-forms of the type dxi ... (2.jq (dxi1 ∧ . .jq ∧ (dxj1 ∧ .. α ∧ β = Ai1 . .22 Proposition a) b) c) CHAPTER 2.. . .. Then we can write α β = Ai1 . ∧ dxip )] ∧ (−1)p [dBj1 .. . . Then.. ∧ dxjq )] = dα ∧ β + (−1)p α ∧ dβ. . We get d(α ∧ β) = = [d(Ai1 . dxim = 0 c) Let α ∈ p . dxip = Bj1 .. b) First we prove the proposition for α = f ∈ d(dα) = d( 0 .ip d(Bj1 .35) By definition... . dα = = = ∂P ∂Q ∂Q ∂P dx + ) ∧ dx + ( dx + ) ∧ dy ∂x ∂y ∂x ∂y ∂P ∂Q dy ∧ dx + dx ∧ dy ∂y ∂x ∂Q ∂P ( − )dx ∧ dy. ∧ dxjq )] + (2. y)dβ. . ∂x ∂y ( (2.ip ∧ (dxi1 ∧ .. DIFFERENTIAL FORMS d: −→ d2 = d ◦ d = 0 d(α ∧ β) = dα ∧ β + (−1)p α ∧ dβ m m+1 ∀α ∈ p .. ... 2. .. . ∧ dxjq ) [dAi1 ..24 2.. .β ∈ q .37) This example is related to Green’s theorem in R2 . one has to perform p transpositions. .im ) ∧ dxi0 ∧ dxi1 .

y). In this case.27 Poincare’s Lemma In a simply connected space (such as Rn ). The reader should also be familiar with this example in the context of exact differential equations of first order. by the previous example. In this case the isomorphism is given by the map dx −→ dy ∧ dz . ∂x ∂y Thus. The condition is reminiscent of Cauchy’s integral theorem for functions of a complex variable. Then. y)dx + N (x. ∂y by j and ∂z by k. we must not confuse a tangent vector which is a linear operator with a Euclidean vector which is just an ordered triplet.24 Example Let α = M (x. by replacing ∂x by i. The ”vector” part a1 ∂x + a2 ∂y + a3 ∂z can be mapped to a regular advanced calculus ∂ ∂ ∂ 1 2 3 vector a i + a j + a k.25 Definition A differential form α is called closed if dα = 0. the vector space isomorphism maps the standard basis vectors { ∂xi } to their duals {dxi }. f (z)dz = 0 C This theorem does not hold if the region bounded by the curve C is not simply connected. y)dy. Of course. A good example of this is the tangent ∂ ∂ ∂ space Tp R3 . both of which have dimension 3. 2. the space P3 of all real polynomials in x of degree 3.2. and the space M2×2 of real 2 by 2 matrices. The converse is not at all obvious in general and we state it here without proof.4 The Hodge-∗ Operator One of the important lessons that students learn in linear algebra is that all vector space of finite dimension n are isomorphic to each other. if a differential is closed then it is exact. are basically no different than the Euclidean vector space R4 in terms of their vector space properties. for instance. 2. THE HODGE-∗ OPERATOR 25 2. The assumption hypothesis that the space must be simply connected is somewhat subtle. ∂ In this case. 1 dz = 2πi z C 2. there is basically no difference. and conservative force fields. Thus. and suppose that dα = 0. This isomorphism then transforms a contravariant vector to a covariant vector. It follows that these two spaces must be isomorphic. which states that if f (z) is holomorphic function and C is a simple closed curve. We have already encountered a number of vector spaces of finite dimension in these notes. dα = 0 iff Nx = My which implies that N = fy and Mx for some C 1 function f (x. it is clear that a exact form is also closed. 2. Since d ◦ d = 0. but as far their vector space properties. We have also observed that the tangent space Tp Rn is isomorphic to the cotangent space Tp Rn .4. 1 2 Another interesting example is provided by the spaces p (R3 ) and p (R3 ). then. Hence α = fx dx + fy df = df.26 Definition A differential form α is called exact if there exist a form β such that α = dβ. The standard example is the integral of the complex 1-form ω = (1/z)dz around the unit circle C bounding a punctured disk. ∂N ∂M dα = ( − )dx ∧ dy.

.. if A = (ai ) is a 3 × 3 j matrix. .. 3} must contain a 0. im ) is an odd permutation of(1. m) = (2. . the reader can easily verify that detA = |A| = i 1 i 2 i3 i1 i2 i3 a1 a2 a3 (2... (n − m)! (2..im .j. . . raising an index with a value of 0 costs a minus sign. m) −1 if(i1 . Since n m = it must be the case that m n p (R ) n n−m = n! . since the Euclidean metric used to raise and lower indices is δij . 2. . equation (2. in calM(1. .41). . . . In fact.in dx ∧ . in Minkowski space. On the other hand. 1. DIFFERENTIAL FORMS −dx ∧ dz dx ∧ dy (2.43) completely specifies the map for all m-forms.28 Definition The Hodge-∗ operator is a linear map ∗ : standard local coordinates by the equation ∗(dxi1 ∧ . m (2. .im im+1 im+1 .∧dxim constitute a basis of the vector space p (Rn ) and the ∗-operator is assumed to be a linear map. the Levi-Civita symbol with some or all the indices up is numerically equal to the permutation symbol will all indices down i1 . using equation (2. .3) i0 i1 i2 i3 =− i0 i1 i2 i3 . we will first need to introduce the totally antisymmetric Levi-Civita permutation symbol which is defined as follows   +1 if(i1 .39) ∼ = m n−m ) p (R To describe the isomorphism between these two spaces. . ∧ dxin .im = i1 .43) Since the forms dxi1 ∧. 2. . .im  0 otherwise In dimension 3. A more thorough discussion of the Levi-Civita symbols will appear later in these notes. In Rn . because g00 = g 00 = −1. ..k in = = 231 213 = = 312 321 =1 =1 (2. im ) is an even permutation of(1. . there are only 3 (3!=6) nonvanishing components of 123 132 i. . .. since any permutation of {0. then.. .40) i1 . . Thus.42) This formula for determinants extends in an obvious manner to n × n matrices..41) The permutation symbols are useful in the theory of determinants. . we have seen that the dimension of the space of m-forms in Rn is given by the n binomial coefficient (m).38) More generally. .26 dy dz −→ −→ CHAPTER 2. ∧ dxim ) = 1 (n − m)! m n p (R ) −→ m n−m ) p (R defined in i1 ..

and ∗(dx1 ∧ dx2 ∧ dx3 ) = 1. then ∗dx1 = = = = = 1 1 dxj ∧ dxk 2! jk 1 1 [ dx2 ∧ dx3 + 1 32 dx3 ∧ dx2 ] 2! 23 1 [dx2 ∧ dx3 − dx3 ∧ dx2 ] 2! 1 [dx2 ∧ dx3 + dx2 ∧ dx3 ] 2! dx2 ∧ dx3 .46) (2. THE HODGE-∗ OPERATOR 2.4. If one thinks of the quantities dx1 .44 are the differential geometry versions of the well known relations i = j×k j = −i × k k = i×j . ∗(α ∧ β) (A2 B3 − A3 B2 ) ∗ (dx2 ∧ dx3 ) + (A1 B3 − A3 B1 ) ∗ (dx1 ∧ dx3 ) + (A1 B2 − A2 B1 ) ∗ (dx1 ∧ dx2 ) = (A2 B3 − A3 B2 )dx1 + (A1 B3 − A3 B1 )dx2 + (A1 B2 − A2 B1 )dx3 = = (A × B)i dxi (2. ∗f = f (dx1 ∧ dx2 ∧ dx3 ) = f dV.47) where dV is the differential of volume. if f : R −→ R is any 0-form (a function) then.30 Example Let α = A1 dx1 A2 dx2 + A3 dx3 . dx2 and dx3 as playing the role of i. j and k. . 3 (2. and β = B1 dx1 B2 dx2 + B3 dx3 . Then. This is even more evident upon inspection of equation (2.48). 27 We leave it to reader to complete the computation of the action of the ∗-operator on the other basis forms. which relates the ∧ operator to the Cartesian cross product.48) The previous examples provide some insight on the action of the ∧ and ∗ operators. also called the volume form.2. The results are ∗dx1 = +dx2 ∧ dx3 ∗dx2 = −dx1 ∧ dx3 ∗dx3 = +dx1 ∧ dx2 . 2. ∗(dx2 ∧ dx3 ) = dx1 ∗(−dx3 ∧ dx1 ) = dx2 ∗(dx1 ∧ dx2 ) = dx3 . In particular.29 Example Consider the dimension n=3 case.44) (2. then it should be apparent that equations 2.45) (2.
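The correspondence between ∗(α ∧ β) and the cross product in the example above can be spot-checked with the Levi-Civita symbol. Below is a sketch (Python with sympy assumed; sympy's LeviCivita function implements the permutation symbol):

    import sympy as sp

    A = sp.symbols('A1:4')
    B = sp.symbols('B1:4')
    # i-th component of *(alpha ^ beta): eps_{ijk} A_j B_k  (summation over j and k)
    star_wedge = [sp.expand(sum(sp.LeviCivita(i, j, k) * A[j] * B[k]
                                for j in range(3) for k in range(3)))
                  for i in range(3)]
    print(star_wedge)   # [A2*B3 - A3*B2, -A1*B3 + A3*B1, A1*B2 - A2*B1], the components of A x B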

If 2 2 F ∈ (M). Here is precisely how this works: 1. It is also worthwhile to calculate the duals of 1-forms in M1. Thus. dx1 ∧ dx2 }. We would like to note that the 2 2 2 action of the dual operator on (M) is such that ∗ (M) −→ (M). 2 2 0 1 0 2 0 3 More specifically.50) (2. −dx1 ∧ dx3 .3 . In verifying the equations above. ∗(+dx2 ∧ dx3 ) = ∗(−dx1 ∧ dx3 ) = ∗(+dx1 ∧ dx2 ) = 1 23 k kl dx 2 1 13 k kl dx 2 1 12 k kl dx 2 ∧ dxl = dx0 ∧ dx1 ∧ dxl = dx0 ∧ dx2 ∧ dxl = dx0 ∧ dx3 . such that ∗ : ± −→ . we recall that the Levi-Civita symbols which contain an index with value 0 in the up position have an extra minus sign as a result of raising the index with g 00 . Curl and Divergence Classical differential operators which enter in Green’s and Stokes Theorems are better understood as special manifestations of the exterior differential and the Hodge-∗ operators in R3 . the 2 operator is an linear involution of the space and in fact. ± are the eigenspaces corresponding to the two eigenvalues of this involution.49) Gradient. DIFFERENTIAL FORMS 4 2. The action of ∗ on + is ∗(dx0 ∧ dx1 ) = ∗(dx0 ∧ dx2 ) = ∗(dx0 ∧ dx3 ) = and on 2 −. we will formally write F = F+ + F− . Let α = Ai dxi be a 1-form in R3 . + is spanned by the forms {dx ∧ dx . dx ∧ dx . Then (∗d)α = = 1 ∂Ai ∂Ai ( − ) ∗ (dxi ∧ dxj ) 2 ∂xj ∂xj ( × A) · dx ∂f dxj = ∂xj f · dx (2. Let α = B1 dx2 ∧ dx3 + B2 dx3 ∧ dx1 + B3 dx1 ∧ dx2 be a 2-form in R3 . splits (M1. where F± ∈ ± .31 Example In Minkowski space the collection of all 2-forms has dimension (2) = 6. and − is spanned 2 by the forms {dx2 ∧ dx3 . The results are ∗dt ∗dx1 ∗dx2 ∗dx3 = −dx1 ∧ dx2 ∧ dx3 = +dx2 ∧ dt ∧ dx3 = +dt ∧ dx1 ∧ dx3 = +dx1 ∧ dt ∧ dx2 . (2. dx ∧ dx }. The Hodge2 2 2 2 ∗ operator in this case.28 CHAPTER 2. Then df = 2.52) . Then dα = = ∂B1 ∂B2 ∂B3 + + )dx1 ∧ dx2 ∧ dx3 1 2 ∂x ∂x ∂x3 ( · B)dV ( (2. and ∗2 = −1.3 ) into two 3-dim subspaces ± . Let f : R3 −→ R be a C ∞ function. 1 01 k kl dx 2 1 02 k kl dx 2 1 03 k kl dx 2 ∧ dxl = −dx2 ∧ dx3 ∧ dxl = +dx1 ∧ dx3 ∧ dxl = −dx1 ∧ dx2 .51) 3.

k j Maxwell Equations The classical equations of Maxwell describing electromagnetic phenomena are · E = 4πρ ·B=0 × B = 4πJ + × E = − ∂B ∂t ∂E ∂t (2. Let α = Bi dxi . THE HODGE-∗ OPERATOR 4. Fµν =  x  Ey −Bz 0 Bx  Ez By −Bx 0 (2. For example.59) . B3 ) be any two Euclidean vectors. Define the Maxwell 2-form F by the equation F = where 1 Fµν dxµ ∧ dxν .54) klm = δk δl − δl δk 2. ν = 0.55) mn l A m (B × C)m mn jk l Am ( n Bj Ck ) mn jk l n Am Bj Ck ) jkn m A Bj Ck ) mnl j k k j (δl δm − δl δn )Am Bj Ck Bl Am Cm − Cl Am bn ij ∂Ai . Then it is easy to see that (A × B)k = and ( × B)k = ij k Ai Bj . x1 . ()) ijm i j i j (2. A2 .4. (2. (µ. then ∗d ∗ α = ·B 29 (2.56) We would like to formulate these equations in the language of differential forms. 3).3 . Maxwell’s 2-form is given by F = −Ex dt ∧ dx1 − Ey dt ∧ dx2 − Ez dt ∧ dx3 + Bz dx1 ∧ dx2 − By dx1 ∧ dx3 + Bx dx2 ∧ dx3 .57) (2. rewriting in vector form A × (B × C) = B(A · C) − C(A · B) (2. 2. let a = (A1 . x2 . x3 ) be local coordinates in Minkowski’s space M1. 1. ∂x To derive many classical vector identities in this formalism. B2 .53) It is also possible to define and manipulate formulas of classical vector calculus using the permutation symbols.32 Example [A × (B × C)]l = = = = = = Or.2.58) Written in complete detail. 2   0 −Ex −Ey −Ey  E 0 Bz −By  . Let xµ = (t. A3 ) and B = (B1 . it is necessary to first establish the following identity (see Ex.

∂x1 ∂x2 ∂x3 which is the same as · B = 0. ∂x3 ∂x2 ∂x1 ∂Ez ∂Ex ∂By − − = 0.56 are equivalent to the equations dF d∗F = 0. dF = 0 iff ∂Bx ∂By ∂By + + = 0. DIFFERENTIAL FORMS J = Jµ dxµ = ρdt + J1 dx1 + J2 dx2 + J3 dx3 . ∂x2 ∂x1 ∂x3 . we get. = 4π ∗ J. (2.60) (2. 1 ∂x ∂t + dF = Therefore.61) Proof: The proof is by direct computation using the definitions of the exterior derivative and the Hodge-∗ operator. ∂Bx ∂x1 ∂Ey ( 3 ∂x ∂Ez ( 1 ∂x ∂Ex ( 2 ∂x ( ∂By ∂Bz + )dx1 ∧ dx2 ∧ dx3 + ∂x2 ∂x3 ∂Ez ∂Bx − − )dx2 ∧ dt ∧ dx3 + ∂x2 ∂t ∂Ex ∂By )dt ∧ dx1 ∧ x3 + − − ∂dx3 ∂t ∂Ey ∂Bz − − )dx1 ∧ dt ∧ dx2 . 2. ∂Ex ∂Ex ∧ dx2 ∧ dt ∧ dx1 − ∧ dx3 ∧ dt ∧ dx1 + ∂x2 ∂x3 ∂Ey ∂Ey − 1 ∧ dx1 ∧ dt ∧ dx2 − ∧ dx3 ∧ dt ∧ dx2 + ∂x ∂x3 ∂Ez ∂Ez − 1 ∧ dx1 ∧ dt ∧ dx3 − ∧ dx2 ∧ dt ∧ dx3 + ∂x ∂x2 ∂Bz ∂Bz ∧ dt ∧ dx1 ∧ dx2 − ∧ dx3 ∧ dx1 ∧ dx2 − ∂t ∂x3 ∂By ∂By ∧ dt ∧ dx1 ∧ dx3 − ∧ dx2 ∧ dx1 ∧ dx3 + ∂t ∂x2 ∂Bx ∂Bx ∧ dt ∧ dx2 ∧ dx3 + ∧ dx1 ∧ dx2 ∧ dx3 . ∂x1 ∂x3 ∂x2 ∂Ex ∂Ey ∂Bz − − = 0.30 We also define the source current 1-form CHAPTER 2. and ∂Ey ∂Ez ∂Bx − − = 0. ∂t ∂x1 dF = − Collecting terms and using the antisymmetry of the wedge operator.33 Proposition Maxwell’s Equations 2.

62) and (2. THE HODGE-∗ OPERATOR which means that 31 ∂B = 0.4).  0  Bx By By 0 Ez −Ey  . according to example (2. we see from (2.64) or equivalently. (2.62) ∂t To verify the second set of Maxwell equations.58) that the components of F+ are those of −E and the components of F− constitute the magnetic field vector B.  −Bx Fµν =   −By −Bz (2.4. (2. −Ez 0 Ex  Ey −Ex 0 (2. we first compute the dual of the current density 1-form (2. Using the results of example (2. We get − ×E− ∗J = −ρdx1 ∧ dx2 ∧ dx3 + J1 dx2 ∧ dt ∧ dx3 + J2 dt ∧ dx1 ∧ dx3 + J3 dx1 ∧ dt ∧ dx2 . In fact.2.4). ×B− ∂E = 4πJ. we can immediately write the components of ∗F ∗F = Bx dt ∧ dx1 + By dt ∧ dx2 + Bz dt ∧ dx3 + Ez dx1 ∧ dx2 − Ey dx1 ∧ dx3 + Ex dx2 ∧ dx3 . F splits into F = F+ + F− .63) 2 We could now proceed to compute d∗F . and so.65) Since the effect of the dual operator amounts to exchanging E −→ B −→ we can infer from equations (2. but perhaps it is more elegant to notice that F ∈ (M). .4.63) that · E = 4πρ and. ∂t −B +E.60) using the results from example 2.
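As a consistency check of the homogeneous pair dF = 0, one may note that these equations hold automatically whenever F arises from a potential 1-form, so that F = dA. The potential is not introduced in these notes at this point, so the sketch below (Python with sympy assumed; the names A0, ..., A3 are chosen only for the sketch) should be read as an aside. It verifies the component form of dF = 0, namely ∂_λ F_μν + ∂_μ F_νλ + ∂_ν F_λμ = 0:

    import sympy as sp
    import itertools

    t, x, y, z = sp.symbols('t x y z')
    X = (t, x, y, z)
    A = [sp.Function('A%d' % m)(*X) for m in range(4)]     # arbitrary potential 1-form (hypothetical)
    F = [[sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]) for n in range(4)] for m in range(4)]

    dF = [sp.simplify(sp.diff(F[m][n], X[l]) + sp.diff(F[n][l], X[m]) + sp.diff(F[l][m], X[n]))
          for l, m, n in itertools.combinations(range(4), 3)]
    print(all(c == 0 for c in dF))      # True: dF = 0 identically when F = dA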


Chapter 3

Connections
3.1 Frames

As we have already noted in chapter 1, the theory of curves in R^3 can be elegantly formulated by introducing orthonormal triplets of vectors which we called Frenet frames. The Frenet vectors are adapted to the curves in such a manner that the rate of change of the frame gives information about the curvature of the curve. In this chapter we will study the properties of arbitrary frames and their corresponding rates of change in the direction of the various vectors in the frame. These concepts will then be applied later to special frames adapted to surfaces.

3.1 Definition A coordinate frame in R^n is an n-tuple of vector fields {e_1, . . . , e_n} which are linearly independent at each point p in the space. In local coordinates x^1, . . . , x^n, we can always express the frame vectors as linear combinations of the standard basis vectors

    e_i = ∂_j A^j_i,    (3.1)

where ∂_j = ∂/∂x^j. We assume the matrix A = (A^j_i) to be nonsingular at each point. In linear algebra, this concept is referred to as a change of basis, the difference being that in our case the transformation matrix A depends on position. A frame field is called orthonormal if at each point

    < e_i , e_j > = δ_ij.    (3.2)

Throughout this chapter, we will assume that all frame fields are orthonormal. While this restriction is not necessary, it is very convenient, because it simplifies considerably the formulas needed to compute the components of an arbitrary vector in the frame.

3.2 Proposition If {e_1, . . . , e_n} is an orthonormal frame, then the transformation matrix is orthogonal (i.e., AA^T = I).
Proof: The proof is by direct computation. Let e_i = ∂_j A^j_i. Then

    δ_ij = < e_i , e_j >
         = < ∂_k A^k_i , ∂_l A^l_j >
         = A^k_i A^l_j < ∂_k , ∂_l >
         = A^k_i A^l_j δ_kl
         = A^k_i A^k_j
         = A^k_i (A^T)^j_k.

Hence (A^T)^j_k A^k_i = δ_ij, that is, A^T A = I.

Given frame vectors e_i, we can also introduce the corresponding dual coframe forms θ^i by requiring

    θ^i(e_j) = δ^i_j.    (3.3)

Since the dual coframe is a set of 1-forms, it can also be expressed in terms of local coordinates as linear combinations θ^i = B^i_k dx^k. It follows from equation (3.3) that

    δ^i_j = θ^i(e_j) = B^i_k dx^k(∂_l A^l_j) = B^i_k A^l_j dx^k(∂_l) = B^i_k A^l_j δ^k_l = B^i_k A^k_j.

Therefore we conclude that BA = I, so B = A^{-1} = A^T. In other words, when the frames are orthonormal we have

    e_i = ∂_k A^k_i,    θ^i = A^i_k dx^k.    (3.4)

3.3 Example Consider the transformation from Cartesian to cylindrical coordinates,

    x = r cos θ,   y = r sin θ,   z = z.

Using the chain rule for partial derivatives, we have

    ∂/∂r = cos θ ∂/∂x + sin θ ∂/∂y,
    ∂/∂θ = −r sin θ ∂/∂x + r cos θ ∂/∂y,
    ∂/∂z = ∂/∂z.

From these equations we easily verify that the quantities

    e_1 = ∂/∂r,   e_2 = (1/r) ∂/∂θ,   e_3 = ∂/∂z

are a triplet of mutually orthogonal unit vectors and thus constitute an orthonormal frame.

3.4 Example For spherical coordinates (2.20),

    x = ρ sin θ cos φ,   y = ρ sin θ sin φ,   z = ρ cos θ,

the chain rule leads to

    ∂/∂ρ = sin θ cos φ ∂/∂x + sin θ sin φ ∂/∂y + cos θ ∂/∂z,
    ∂/∂θ = ρ cos θ cos φ ∂/∂x + ρ cos θ sin φ ∂/∂y − ρ sin θ ∂/∂z,
    ∂/∂φ = −ρ sin θ sin φ ∂/∂x + ρ sin θ cos φ ∂/∂y.

In this case, the vectors

    e_1 = ∂/∂ρ,   e_2 = (1/ρ) ∂/∂θ,   e_3 = (1/(ρ sin θ)) ∂/∂φ    (3.5)

also constitute an orthonormal frame. The fact that the chain rule in the two situations above leads to orthonormal frames is not coincidental. The results are related to the orthogonality of the level surfaces xi = constant. Since the level surfaces are orthogonal whenever they intersect, one expects the gradients of the surfaces to also be orthogonal. Transformations of this type are called triply orthogonal systems.
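The claim that e_1, e_2, e_3 above form an orthonormal frame can also be verified directly from their Cartesian components. The following is a sketch (Python with sympy assumed):

    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)
    X = sp.Matrix([rho*sp.sin(theta)*sp.cos(phi),
                   rho*sp.sin(theta)*sp.sin(phi),
                   rho*sp.cos(theta)])
    J = X.jacobian([rho, theta, phi])         # columns: d/d rho, d/d theta, d/d phi in Cartesian components
    e = [J[:, 0], J[:, 1] / rho, J[:, 2] / (rho*sp.sin(theta))]
    G = sp.Matrix(3, 3, lambda i, j: sp.simplify(e[i].dot(e[j])))
    print(G)                                  # identity matrix: < e_i , e_j > = delta_ij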

3.2 Curvilinear Coordinates

Orthogonal transformations such as spherical and cylindrical coordinates appear ubiquitously in mathematical physics because the geometry of a large number of problems in this area exhibits symmetry with respect to an axis or to the origin. In such situations, transformation to the appropriate coordinate system often results in considerable simplification of the field equations involved in the problem. It has been shown that the Laplace operator, which enters into all three of the main classical field equations (the potential, the heat, and the wave equations), is separable in 12 coordinate systems. A simple and efficient method to calculate the Laplacian in orthogonal coordinates can be implemented by the use of differential forms.

3.5 Example In spherical coordinates the differential of arc length is given by (see equation 2.21)

    ds^2 = dρ^2 + ρ^2 dθ^2 + ρ^2 sin^2 θ dφ^2.

Let

    θ^1 = dρ,   θ^2 = ρ dθ,   θ^3 = ρ sin θ dφ.    (3.6)

Note that these three 1-forms constitute the dual coframe to the orthonormal frame we just derived in equation (3.5). Consider a scalar field f = f(ρ, θ, φ). We now calculate the Laplacian of f in spherical coordinates using the methods of section 2.4. To do this, we first compute the differential df and express the result in terms of the coframe:

    df = (∂f/∂ρ) dρ + (∂f/∂θ) dθ + (∂f/∂φ) dφ
       = (∂f/∂ρ) θ^1 + (1/ρ)(∂f/∂θ) θ^2 + (1/(ρ sin θ))(∂f/∂φ) θ^3.

The components of df in the coframe represent the gradient in spherical coordinates. Continuing with the scheme of section 2.4, we apply the Hodge-∗ operator and then rewrite the resulting 2-form in terms of wedges of coordinate differentials, so that we can apply the definition of the exterior derivative:

    ∗df = (∂f/∂ρ) θ^2 ∧ θ^3 − (1/ρ)(∂f/∂θ) θ^1 ∧ θ^3 + (1/(ρ sin θ))(∂f/∂φ) θ^1 ∧ θ^2
        = ρ^2 sin θ (∂f/∂ρ) dθ ∧ dφ − sin θ (∂f/∂θ) dρ ∧ dφ + (1/sin θ)(∂f/∂φ) dρ ∧ dθ,

    d∗df = [ ∂/∂ρ(ρ^2 sin θ ∂f/∂ρ) + ∂/∂θ(sin θ ∂f/∂θ) + (1/sin θ) ∂^2 f/∂φ^2 ] dρ ∧ dθ ∧ dφ.

Finally, rewriting the differentials back in terms of the coframe, we get

    d∗df = (1/(ρ^2 sin θ)) [ sin θ ∂/∂ρ(ρ^2 ∂f/∂ρ) + ∂/∂θ(sin θ ∂f/∂θ) + (1/sin θ) ∂^2 f/∂φ^2 ] θ^1 ∧ θ^2 ∧ θ^3.

So, the Laplacian of f is given by

    ∇^2 f = (1/ρ^2) ∂/∂ρ(ρ^2 ∂f/∂ρ) + (1/(ρ^2 sin θ)) ∂/∂θ(sin θ ∂f/∂θ) + (1/(ρ^2 sin^2 θ)) ∂^2 f/∂φ^2.    (3.7)

The derivation of the expression for the spherical Laplacian through the use of differential forms is elegant and leads naturally to the operator in Sturm-Liouville form.

The process above can also be carried out for general orthogonal transformations. A change of coordinates x^i = x^i(u^k) leads to an orthogonal transformation if, in the new coordinate system u^k, the line metric

    ds^2 = g_11 (du^1)^2 + g_22 (du^2)^2 + g_33 (du^3)^2    (3.8)

only has diagonal entries. In this case, we choose the coframe

    θ^1 = √g_11 du^1 = h_1 du^1,
    θ^2 = √g_22 du^2 = h_2 du^2,
    θ^3 = √g_33 du^3 = h_3 du^3.

The quantities {h_1, h_2, h_3} are classically called the weights. Please note that, in the interest of connecting to classical terminology, we have exchanged two indices for one, and this will cause small discrepancies with the index summation convention. We will revert to using a summation symbol when these discrepancies occur.
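The weights h_i give a quick way to reproduce formula (3.7) by composing the gradient components derived above with the classical divergence formula in these weights. The sketch below (Python with sympy assumed) carries this out for spherical coordinates and compares the result with (3.7):

    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)
    u = (rho, theta, phi)
    h = (sp.Integer(1), rho, rho*sp.sin(theta))            # weights for spherical coordinates
    f = sp.Function('f')(*u)

    H = h[0]*h[1]*h[2]
    grad = [sp.diff(f, u[i]) / h[i] for i in range(3)]     # frame components of df
    lap = sp.expand(sum(sp.diff(H * grad[i] / h[i], u[i]) for i in range(3)) / H)

    target = (sp.diff(rho**2*sp.diff(f, rho), rho)/rho**2
              + sp.diff(sp.sin(theta)*sp.diff(f, theta), theta)/(rho**2*sp.sin(theta))
              + sp.diff(f, phi, 2)/(rho**2*sp.sin(theta)**2))
    print(sp.simplify(lap - target))                       # 0, in agreement with (3.7)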

The process above can also be carried out for general orthogonal transformations. A change of coordinates $x^i = x^i(u^k)$ leads to an orthogonal transformation if, in the new coordinate system $u^k$, the line metric
$$ds^2 = g_{11}(du^1)^2 + g_{22}(du^2)^2 + g_{33}(du^3)^2 \qquad (3.8)$$
has only diagonal entries. In this case, we choose the coframe
$$\theta^1 = \sqrt{g_{11}}\, du^1 = h_1\, du^1, \qquad \theta^2 = \sqrt{g_{22}}\, du^2 = h_2\, du^2, \qquad \theta^3 = \sqrt{g_{33}}\, du^3 = h_3\, du^3.$$
The quantities $\{h_1, h_2, h_3\}$ are classically called the weights. Please note that, in the interest of connecting to classical terminology, we have exchanged two indices for one; this will cause small discrepancies with the index summation convention, and we will revert to using a summation symbol $\sum_i$ when these discrepancies occur. To satisfy the duality condition $\theta^i(e_j) = \delta^i_j$, we must choose the corresponding frame vectors $e_i$ according to
$$e_1 = \frac{1}{\sqrt{g_{11}}}\frac{\partial}{\partial u^1} = \frac{1}{h_1}\frac{\partial}{\partial u^1}, \qquad
  e_2 = \frac{1}{\sqrt{g_{22}}}\frac{\partial}{\partial u^2} = \frac{1}{h_2}\frac{\partial}{\partial u^2}, \qquad
  e_3 = \frac{1}{\sqrt{g_{33}}}\frac{\partial}{\partial u^3} = \frac{1}{h_3}\frac{\partial}{\partial u^3}.$$

Gradient. Let $f = f(x^i)$ and $x^i = x^i(u^k)$. Then
$$df = \frac{\partial f}{\partial x^k}\, dx^k
     = \frac{\partial f}{\partial u^i}\frac{\partial u^i}{\partial x^k}\, dx^k
     = \frac{\partial f}{\partial u^i}\, du^i
     = \sum_i \frac{1}{h_i}\frac{\partial f}{\partial u^i}\, \theta^i
     = \sum_i e_i(f)\, \theta^i.$$
Thus, the components of the gradient in the coframe $\theta^i$ are obtained by applying the frame vectors to $f$; that is, the gradient operator is
$$\nabla = \Bigl(\frac{1}{h_1}\frac{\partial}{\partial u^1},\; \frac{1}{h_2}\frac{\partial}{\partial u^2},\; \frac{1}{h_3}\frac{\partial}{\partial u^3}\Bigr). \qquad (3.9)$$

Curl. Let $F = (F_1, F_2, F_3)$ be a classical vector field. Construct the corresponding 1-form $F = F_i\theta^i$ in the coframe. We calculate the curl using the dual of the exterior derivative:
\begin{align*}
F &= F_1\theta^1 + F_2\theta^2 + F_3\theta^3 = (h_1F_1)\,du^1 + (h_2F_2)\,du^2 + (h_3F_3)\,du^3 = (hF)_i\, du^i, \quad\text{where } (hF)_i = h_iF_i,\\
dF &= \frac{1}{2}\Bigl[\frac{\partial (hF)_j}{\partial u^i} - \frac{\partial (hF)_i}{\partial u^j}\Bigr] du^i\wedge du^j
    = \frac{1}{2}\,\frac{1}{h_ih_j}\Bigl[\frac{\partial (hF)_j}{\partial u^i} - \frac{\partial (hF)_i}{\partial u^j}\Bigr] \theta^i\wedge \theta^j,\\
\ast dF &= \frac{1}{2}\,\epsilon^{ij}{}_k\, \frac{1}{h_ih_j}\Bigl[\frac{\partial (hF)_j}{\partial u^i} - \frac{\partial (hF)_i}{\partial u^j}\Bigr] \theta^k = (\nabla\times F)_k\, \theta^k.
\end{align*}
As expected, the components of the curl are
$$\frac{1}{h_2h_3}\Bigl[\frac{\partial (h_3F_3)}{\partial u^2} - \frac{\partial (h_2F_2)}{\partial u^3}\Bigr], \qquad
  \frac{1}{h_3h_1}\Bigl[\frac{\partial (h_1F_1)}{\partial u^3} - \frac{\partial (h_3F_3)}{\partial u^1}\Bigr], \qquad
  \frac{1}{h_1h_2}\Bigl[\frac{\partial (h_2F_2)}{\partial u^1} - \frac{\partial (h_1F_1)}{\partial u^2}\Bigr].$$
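As a consistency check on the gradient and curl formulas above, the following SymPy sketch (helper names are ours) specializes the weights to the spherical values $h_1 = 1$, $h_2 = \rho$, $h_3 = \rho\sin\theta$ and verifies that the curl of a gradient vanishes identically.

\begin{verbatim}
import sympy as sp

u1, u2, u3 = sp.symbols('rho theta phi', positive=True)
u = [u1, u2, u3]
h = [sp.Integer(1), u1, u1 * sp.sin(u2)]      # spherical weights h_1, h_2, h_3

def grad(f):
    """Components of grad f in the orthonormal coframe theta^i."""
    return [sp.diff(f, u[i]) / h[i] for i in range(3)]

def curl(F):
    """Components of curl F in the coframe, from the formula above."""
    hF = [h[i] * F[i] for i in range(3)]
    return [sp.simplify(
        (sp.diff(hF[k], u[j]) - sp.diff(hF[j], u[k])) / (h[j] * h[k]))
        for (j, k) in [(1, 2), (2, 0), (0, 1)]]

f = sp.Function('f')(u1, u2, u3)
print(curl(grad(f)))        # [0, 0, 0]: the curl of a gradient vanishes
\end{verbatim}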

Divergence. As before, let $F = F_i\theta^i$ and recall that
$$\nabla\cdot F = \ast d \ast F. \qquad (3.10)$$
The computation yields
\begin{align*}
\ast F &= F_1\,\theta^2\wedge\theta^3 + F_2\,\theta^3\wedge\theta^1 + F_3\,\theta^1\wedge\theta^2 \\
       &= (h_2h_3F_1)\,du^2\wedge du^3 + (h_1h_3F_2)\,du^3\wedge du^1 + (h_1h_2F_3)\,du^1\wedge du^2,\\
d\ast F &= \Bigl[\frac{\partial (h_2h_3F_1)}{\partial u^1} + \frac{\partial (h_1h_3F_2)}{\partial u^2} + \frac{\partial (h_1h_2F_3)}{\partial u^3}\Bigr] du^1\wedge du^2\wedge du^3.
\end{align*}
Therefore,
$$\nabla\cdot F = \ast d \ast F = \frac{1}{h_1h_2h_3}\Bigl[\frac{\partial (h_2h_3F_1)}{\partial u^1} + \frac{\partial (h_1h_3F_2)}{\partial u^2} + \frac{\partial (h_1h_2F_3)}{\partial u^3}\Bigr]. \qquad (3.11)$$

3.3 Covariant Derivative

In this section we introduce a generalization of directional derivatives. The directional derivative measures the rate of change of a function in the direction of a vector. What we want is a quantity which measures the rate of change of a vector field in the direction of another.

3.6 Definition Let $X$ be an arbitrary vector field in $\mathbf{R}^n$. A map $\nabla_X : T(\mathbf{R}^n) \longrightarrow T(\mathbf{R}^n)$ is called a Koszul connection if it satisfies the following properties:
1. $\nabla_{fX}Y = f\,\nabla_X Y$,
2. $\nabla_{X_1+X_2}Y = \nabla_{X_1}Y + \nabla_{X_2}Y$,
3. $\nabla_X(Y_1 + Y_2) = \nabla_X Y_1 + \nabla_X Y_2$,
4. $\nabla_X(fY) = X(f)\,Y + f\,\nabla_X Y$,
for all vector fields $X, X_1, X_2, Y, Y_1, Y_2 \in T(\mathbf{R}^n)$ and all smooth functions $f$. The definition states that the map $\nabla_X$ is linear on $X$ but behaves like a linear derivation on $Y$. For this reason, the quantity $\nabla_X Y$ is called the covariant derivative of $Y$ in the direction of $X$.

3.7 Proposition Let $Y = f^i\,\partial/\partial x^i$ be a vector field in $\mathbf{R}^n$, and let $X$ be another $C^\infty$ vector field. Then the operator given by
$$\nabla_X Y = X(f^i)\,\frac{\partial}{\partial x^i} \qquad (3.12)$$
defines a Koszul connection. The proof just requires verification that the four properties above are satisfied, and it is left as an exercise. The operator defined in this proposition is called the standard connection compatible with the standard Euclidean metric. The action of this connection on a vector field $Y$ yields a new vector field whose components are the directional derivatives of the components of $Y$.

3.8 Example Let
$$X = x\frac{\partial}{\partial x} + xz\frac{\partial}{\partial y}, \qquad Y = x^2\frac{\partial}{\partial x} + xy^2\frac{\partial}{\partial y}.$$
Then
\begin{align*}
\nabla_X Y &= X(x^2)\,\frac{\partial}{\partial x} + X(xy^2)\,\frac{\partial}{\partial y}\\
&= \Bigl[x\frac{\partial}{\partial x}(x^2) + xz\frac{\partial}{\partial y}(x^2)\Bigr]\frac{\partial}{\partial x} + \Bigl[x\frac{\partial}{\partial x}(xy^2) + xz\frac{\partial}{\partial y}(xy^2)\Bigr]\frac{\partial}{\partial y}\\
&= 2x^2\,\frac{\partial}{\partial x} + (xy^2 + 2x^2yz)\,\frac{\partial}{\partial y}.
\end{align*}

3.9 Definition A Koszul connection $\nabla_X$ is compatible with the metric $g(Y,Z) = \langle Y, Z\rangle$ if
$$\nabla_X\langle Y, Z\rangle = \langle \nabla_X Y, Z\rangle + \langle Y, \nabla_X Z\rangle. \qquad (3.13)$$
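The computation in Example 3.8, and the compatibility condition (3.13) for the standard connection, can be verified with a short symbolic computation. The sketch below (SymPy assumed; the sample field $Z$ is ours, chosen only for illustration) treats vector fields as lists of component functions and takes the metric to be the Euclidean dot product of components.

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')

def apply_vec(Xc, g):
    """Apply the vector field X = Xc[0] d/dx + Xc[1] d/dy to a function g."""
    return Xc[0]*sp.diff(g, x) + Xc[1]*sp.diff(g, y)

def nabla(Xc, Yc):
    """Standard connection (3.12): differentiate the components of Y along X."""
    return [sp.expand(apply_vec(Xc, c)) for c in Yc]

X = [x, x*z]
Y = [x**2, x*y**2]
print(nabla(X, Y))     # [2*x**2, 2*x**2*y*z + x*y**2], as in Example 3.8

# Compatibility (3.13) with the Euclidean metric <Y, Z> = sum of products:
Z = [sp.sin(y), x*y]   # an arbitrary sample field
lhs = apply_vec(X, sum(a*b for a, b in zip(Y, Z)))
rhs = (sum(a*b for a, b in zip(nabla(X, Y), Z))
       + sum(a*b for a, b in zip(Y, nabla(X, Z))))
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}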

In Euclidean space, the components of the standard frame vectors are constant, and thus their rates of change in any direction vanish. Let $e_i$ be an arbitrary frame field with dual forms $\theta^i$. The covariant derivatives of the frame vectors in the directions of a vector $X$ will in general yield new vectors. The new vectors must be linear combinations of the basis vectors:
\begin{align*}
\nabla_X e_1 &= \omega^1{}_1(X)\,e_1 + \omega^2{}_1(X)\,e_2 + \omega^3{}_1(X)\,e_3 \\
\nabla_X e_2 &= \omega^1{}_2(X)\,e_1 + \omega^2{}_2(X)\,e_2 + \omega^3{}_2(X)\,e_3 \qquad (3.14)\\
\nabla_X e_3 &= \omega^1{}_3(X)\,e_1 + \omega^2{}_3(X)\,e_2 + \omega^3{}_3(X)\,e_3.
\end{align*}
The coefficients can be more succinctly expressed using the compact index notation
$$\nabla_X e_i = e_j\,\omega^j{}_i(X). \qquad (3.15)$$
It follows immediately that
$$\omega^j{}_i(X) = \theta^j(\nabla_X e_i). \qquad (3.16)$$
Equivalently, one can take the inner product of both sides of equation (3.15) with $e_k$ to get
$$\langle \nabla_X e_i, e_k\rangle = \langle e_j\,\omega^j{}_i(X), e_k\rangle = \omega^j{}_i(X)\,\langle e_j, e_k\rangle = \omega^j{}_i(X)\,g_{jk}.$$
Hence,
$$\langle \nabla_X e_i, e_k\rangle = \omega_{ki}(X). \qquad (3.17)$$
The left-hand side of the last equation is the inner product of two vectors, so the expression represents an array of functions. Consequently, the right-hand side also represents an array of functions. In addition, both expressions are linear on $X$, since by definition $\nabla_X$ is linear on $X$. We conclude that the right-hand side can be interpreted as a matrix in which each entry is a 1-form acting on the vector $X$ to yield a function. The matrix-valued quantity $\omega^i{}_j$ is called the connection form.

3.10 Definition Let $\nabla_X$ be a Koszul connection and let $\{e_i\}$ be a frame. The Christoffel symbols associated with the connection in the given frame are the functions $\Gamma^k_{ij}$ given by
$$\nabla_{e_i}e_j = \Gamma^k_{ij}\,e_k. \qquad (3.18)$$
The Christoffel symbols are the coefficients which give the representation of the rate of change of the frame vectors in the direction of the frame vectors themselves. Many physicists therefore refer to the Christoffel symbols as the connection, once again giving rise to possible confusion. The precise relation between the Christoffel symbols and the connection 1-forms is captured by the equations
$$\omega^k{}_i(e_j) = \Gamma^k_{ji}, \qquad (3.19)$$
or, equivalently,
$$\omega^k{}_i = \Gamma^k_{ji}\,\theta^j. \qquad (3.20)$$
In a general frame in $\mathbf{R}^n$ there are $n^2$ entries in the connection 1-form and $n^3$ Christoffel symbols. The number of independent components is reduced if one assumes that the frame is orthonormal.
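A concrete illustration of Definition 3.10: the sketch below (SymPy assumed; names are ours) computes the Christoffel symbols of the standard connection for the orthonormal polar frame $e_1 = (\cos t, \sin t)$, $e_2 = (-\sin t, \cos t)$ in $\mathbf{R}^2$, written in polar coordinates $(r, t)$. Only $\Gamma^2_{21} = 1/r$ and $\Gamma^1_{22} = -1/r$ survive, which already exhibits the antisymmetry established in Proposition 3.11 below.

\begin{verbatim}
import sympy as sp

r, t = sp.symbols('r t', positive=True)
e = [sp.Matrix([sp.cos(t), sp.sin(t)]),       # e_1, Cartesian components
     sp.Matrix([-sp.sin(t), sp.cos(t)])]      # e_2
# e_1 and e_2 are dual to theta^1 = dr, theta^2 = r dt, so as derivations
# they act as d/dr and (1/r) d/dt on functions of (r, t).
dirs = [lambda g: sp.diff(g, r), lambda g: sp.diff(g, t) / r]

def christoffel(i, j, k):
    """Gamma^k_{ij} defined by  nabla_{e_i} e_j = Gamma^k_{ij} e_k."""
    nabla_ei_ej = e[j].applyfunc(dirs[i])     # componentwise derivative along e_i
    return sp.simplify(nabla_ei_ej.dot(e[k])) # orthonormal frame: project onto e_k

for i in range(2):
    for j in range(2):
        for k in range(2):
            print('Gamma^%d_%d%d =' % (k + 1, i + 1, j + 1), christoffel(i, j, k))
# Only Gamma^2_21 = 1/r and Gamma^1_22 = -1/r are nonzero.
\end{verbatim}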

3.11 Proposition Let $\{e_i\}$ be an orthonormal frame and let $\nabla_X$ be a Koszul connection compatible with the metric. Then
$$\omega_{ji} = -\omega_{ij}. \qquad (3.21)$$
Proof: Since it is given that $\langle e_i, e_j\rangle = \delta_{ij}$, we have
\begin{align*}
0 &= \nabla_X\langle e_i, e_j\rangle \\
  &= \langle \nabla_X e_i, e_j\rangle + \langle e_i, \nabla_X e_j\rangle \\
  &= \langle \omega^k{}_i(X)\,e_k, e_j\rangle + \langle e_i, \omega^k{}_j(X)\,e_k\rangle \\
  &= \omega^k{}_i(X)\,g_{kj} + \omega^k{}_j(X)\,g_{ik} \\
  &= \omega_{ji}(X) + \omega_{ij}(X),
\end{align*}
thus proving that $\omega$ is indeed antisymmetric.

3.12 Corollary In an orthonormal frame, the Christoffel symbols of a Koszul connection compatible with the metric are antisymmetric in the indices $i$ and $k$; that is,
$$\Gamma^k_{ji} = -\Gamma^i_{jk}. \qquad (3.22)$$
We conclude that in an orthonormal frame in $\mathbf{R}^n$ the number of independent coefficients of the connection 1-form is $\frac{1}{2}n(n-1)$, since by antisymmetry the diagonal entries are zero and one only needs to count the number of entries in the upper triangular part of the $n\times n$ matrix $\omega_{ij}$. Similarly, the number of independent Christoffel symbols gets reduced to $\frac{1}{2}n^2(n-1)$.

In the case of an orthonormal frame in $\mathbf{R}^3$, where $g_{ij}$ is diagonal, $\omega^i{}_j$ is also antisymmetric, so the connection equations become
$$\nabla_X\begin{pmatrix} e_1\\ e_2\\ e_3\end{pmatrix} =
\begin{pmatrix} 0 & \omega^1{}_2(X) & \omega^1{}_3(X)\\ -\omega^1{}_2(X) & 0 & \omega^2{}_3(X)\\ -\omega^1{}_3(X) & -\omega^2{}_3(X) & 0\end{pmatrix}
\begin{pmatrix} e_1\\ e_2\\ e_3\end{pmatrix}. \qquad (3.23)$$
Comparing with the Frenet frame equations (1.27), we notice the obvious similarity to the general frame equations above. Clearly, the Frenet frame is a special case in which the basis vectors have been adapted to a curve, resulting in a simpler connection in which some of the coefficients vanish. A further simplification occurs in the Frenet frame, since there the equations represent the rate of change of the frame only along the direction of the curve, rather than along an arbitrary direction vector $X$.

3.4 Cartan Equations

Perhaps the most important contribution to the development of Differential Geometry is the work of Cartan, culminating in the famous equations of structure which we discuss in this chapter.

First Structure Equation

3.13 Theorem Let $\{e_i\}$ be a frame with connection $\omega^i{}_j$ and dual coframe $\theta^i$. Then
$$\Theta^i \equiv d\theta^i + \omega^i{}_j\wedge\theta^j = 0. \qquad (3.24)$$
Proof: Let $e_i = \partial_j A^j{}_i$ be a frame, and let $\theta^i$ be the corresponding coframe. Since $\theta^i(e_j) = \delta^i_j$, we have $\theta^i = (A^{-1})^i{}_j\,dx^j$.

Let $X$ be an arbitrary vector field. Then
\begin{align*}
\nabla_X e_i &= \nabla_X(\partial_j A^j{}_i)\\
             &= \partial_j\, X(A^j{}_i)\\
             &= \partial_j\, dA^j{}_i(X)\\
             &= e_k\,(A^{-1})^k{}_j\, dA^j{}_i(X).
\end{align*}
Hence,
$$\omega^k{}_i = (A^{-1})^k{}_j\, dA^j{}_i, \qquad\text{or, in matrix notation,}\qquad \omega = A^{-1}dA. \qquad (3.25)$$
On the other hand, taking the exterior derivative of $\theta^i$, we find that
$$d\theta^i = d(A^{-1})^i{}_j\wedge dx^j = d(A^{-1})^i{}_j\wedge A^j{}_k\,\theta^k,$$
or, in matrix notation, $d\theta = d(A^{-1})A\wedge\theta$. But, since $A^{-1}A = I$, we have $d(A^{-1})A = -A^{-1}dA = -\omega$; hence $d\theta = -\omega\wedge\theta$. In other words,
$$d\theta^i + \omega^i{}_j\wedge\theta^j = 0. \qquad (3.26)$$

Second Structure Equation

Let $\theta^i$ be a coframe in $\mathbf{R}^n$ with connection $\omega^i{}_j$. Taking the exterior derivative of the first equation of structure and recalling the properties of the exterior derivative, we get
\begin{align*}
d(d\theta^i) + d(\omega^i{}_j\wedge\theta^j) &= 0\\
d\omega^i{}_j\wedge\theta^j - \omega^i{}_j\wedge d\theta^j &= 0.
\end{align*}
Substituting recursively from the first equation of structure, we get
\begin{align*}
d\omega^i{}_j\wedge\theta^j - \omega^i{}_j\wedge(-\omega^j{}_k\wedge\theta^k) &= 0\\
d\omega^i{}_j\wedge\theta^j + \omega^i{}_k\wedge\omega^k{}_j\wedge\theta^j &= 0\\
(d\omega^i{}_j + \omega^i{}_k\wedge\omega^k{}_j)\wedge\theta^j &= 0\\
d\omega^i{}_j + \omega^i{}_k\wedge\omega^k{}_j &= 0.
\end{align*}
We now introduce the following.

3.14 Definition The curvature $\Omega$ of a connection $\omega$ is the matrix-valued 2-form
$$\Omega^i{}_j \equiv d\omega^i{}_j + \omega^i{}_k\wedge\omega^k{}_j. \qquad (3.27)$$

3.15 Theorem Let $\theta$ be a coframe with connection $\omega$ in $\mathbf{R}^n$. Then the curvature form vanishes:
$$\Omega = d\omega + \omega\wedge\omega = 0. \qquad (3.28)$$

Proof: Given that there is a non-singular matrix $A$ such that $\theta = A^{-1}dx$ and $\omega = A^{-1}dA$, we have
$$d\omega = d(A^{-1})\wedge dA.$$
On the other hand,
\begin{align*}
\omega\wedge\omega &= (A^{-1}dA)\wedge(A^{-1}dA)\\
&= -d(A^{-1})A\wedge A^{-1}dA\\
&= -d(A^{-1})(AA^{-1})\wedge dA\\
&= -d(A^{-1})\wedge dA.
\end{align*}
Therefore, $d\omega = -\omega\wedge\omega$, so that $\Omega = d\omega + \omega\wedge\omega = 0$.

Change of Basis

We briefly explore the behavior of the quantities $\Theta^i$ and $\Omega^i{}_j$ under a change of basis. Referring back to the definition of connections (3.15), we introduce the covariant differential by the formula
$$\nabla e_i = e_j\otimes\omega^j{}_i, \qquad (3.29)$$
which we will write in matrix notation as $\nabla e = e\,\omega$, where once again we have simplified the equation by using matrix notation. The operator $\nabla$ maps a vector field to a matrix-valued tensor of rank $T^1_1$. This definition is elegant because it does not explicitly show the dependence on $X$ in the connection. The idea of switching from derivatives to differentials is familiar from basic calculus: if $f$ is a function, then $\nabla f(X) = \nabla_X f = df(X)$, so that $\nabla f = df$. Another way to view the covariant differential is to think of $\nabla$ as an operator such that, if $e$ is a frame and $X$ a vector field, then $\nabla e(X) = \nabla_X e$. We should point out, however, that in the present context the situation is more subtle: $\nabla$ behaves like a covariant derivative on vectors, but like a differential on functions. We require $\nabla$ to behave like a derivation on tensor products:
$$\nabla(T_1\otimes T_2) = \nabla T_1\otimes T_2 + T_1\otimes\nabla T_2. \qquad (3.30)$$
Let $e_i$ be a frame with dual forms $\theta^i$, and let $\bar e_i$ be another frame related to the first frame by an invertible transformation
$$\bar e_i = e_j\,B^j{}_i, \qquad (3.31)$$
which we will write in matrix notation as $\bar e = eB$. Taking the covariant differential of equation (3.31) and using (3.29) and (3.30) recursively, we get
\begin{align*}
\nabla\bar e &= \nabla e\cdot B + e\otimes dB\\
&= (\nabla e)B + e\,(dB)\\
&= e\,\omega B + e\,(dB)\\
&= \bar e\,B^{-1}\omega B + \bar e\,B^{-1}dB\\
&= \bar e\,[B^{-1}\omega B + B^{-1}dB]\\
&= \bar e\,\bar\omega,
\end{align*}
provided that the connection $\bar\omega$ in the new frame $\bar e$ is related to the connection $\omega$ by the transformation law
$$\bar\omega = B^{-1}\omega B + B^{-1}dB. \qquad (3.32)$$
It should be noted that if $e$ is the standard frame $e_i = \partial_i$ in $\mathbf{R}^n$, then $\nabla e = 0$, so that $\omega = 0$. In this case, the formula above reduces to $\bar\omega = B^{-1}dB$, showing that the transformation rule is consistent with equation (3.25).
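Both structure equations can be verified symbolically in simple cases. The sketch below (SymPy assumed; the representation of forms by component pairs is ours) checks them for the polar coframe $\theta^1 = dr$, $\theta^2 = r\,dt$ in the plane, with connection $\omega^1{}_2 = -dt$, $\omega^2{}_1 = dt$, as obtained from the Christoffel symbols computed earlier via equation (3.20).

\begin{verbatim}
import sympy as sp

r, t = sp.symbols('r t', positive=True)

def d(one_form):                       # exterior derivative of a 1-form (a_r, a_t)
    a_r, a_t = one_form
    return sp.simplify(sp.diff(a_t, r) - sp.diff(a_r, t))   # dr^dt coefficient

def wedge(a, b):                       # wedge of two 1-forms, dr^dt coefficient
    return sp.simplify(a[0]*b[1] - a[1]*b[0])

theta = [(sp.Integer(1), 0), (0, r)]             # theta^1 = dr,  theta^2 = r dt
omega = [[(0, 0), (0, -1)],                      # omega^1_1, omega^1_2 = -dt
         [(0, 1), (0, 0)]]                       # omega^2_1 = dt, omega^2_2

# First structure equation:  d theta^i + omega^i_j ^ theta^j = 0
for i in range(2):
    print(sp.simplify(d(theta[i])
          + sum(wedge(omega[i][j], theta[j]) for j in range(2))))   # 0, 0

# Second structure equation in the plane:  d omega^i_j + omega^i_k ^ omega^k_j = 0
for i in range(2):
    for j in range(2):
        print(sp.simplify(d(omega[i][j])
              + sum(wedge(omega[i][k], omega[k][j]) for k in range(2))))  # all 0
\end{verbatim}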

Chapter 4

Theory of Surfaces

4.1 Manifolds

Surfaces in $\mathbf{R}^3$

4.1 Definition A $C^\infty$ coordinate chart is a $C^\infty$ map $\mathbf{x}$ from an open subset of $\mathbf{R}^2$ into $\mathbf{R}^3$,
$$\mathbf{x}: U\subset\mathbf{R}^2 \longrightarrow \mathbf{R}^3, \qquad (u,v)\longmapsto \mathbf{x}(u,v) = \bigl(x(u,v),\, y(u,v),\, z(u,v)\bigr). \qquad (4.1)$$
We will always assume that the Jacobian of the map has maximal rank. In local coordinates, a coordinate chart is represented by three equations in two variables,
$$x^i = f^i(u^\alpha), \qquad \text{where } i = 1,2,3,\ \alpha = 1,2. \qquad (4.2)$$
The local coordinate representation allows us to use the tensor index formalism introduced in earlier chapters. The assumption that the Jacobian $J = (\partial x^i/\partial u^\alpha)$ be of maximal rank allows one to invoke the implicit function theorem. Thus, in principle, one can locally solve for one of the coordinates, say $x^3$, in terms of the other two:
$$x^3 = f(x^1, x^2). \qquad (4.3)$$
The locus of points in $\mathbf{R}^3$ satisfying the equations $x^i = f^i(u^\alpha)$ can also be locally represented by an expression of the form
$$F(x^1, x^2, x^3) = 0. \qquad (4.4)$$

4.2 Definition Let $\mathbf{x}(u^1,u^2): U\longrightarrow\mathbf{R}^3$ and $\mathbf{y}(v^1,v^2): V\longrightarrow\mathbf{R}^3$ be two coordinate charts with a non-empty intersection $\mathbf{x}(U)\cap\mathbf{y}(V)\neq\emptyset$. The two charts are said to be $C^\infty$ equivalent if the map $\phi = \mathbf{y}^{-1}\circ\mathbf{x}$ and its inverse $\phi^{-1}$ (see figure 4.1) are infinitely differentiable.

[Figure 4.1: Chart Equivalence]

In more lucid terms, the definition just states that two equivalent charts $\mathbf{x}(u^\alpha)$ and $\mathbf{y}(v^\beta)$ represent different reparametrizations for the same set of points in $\mathbf{R}^3$.

4.3 Definition A differentiably smooth surface in $\mathbf{R}^3$ is a set of points $M$ in $\mathbf{R}^3$ such that
1. If $p\in M$, then $p$ belongs to some $C^\infty$ chart.
2. If $p\in M$ belongs to two different charts $\mathbf{x}$ and $\mathbf{y}$, then the two charts are $C^\infty$ equivalent.
Intuitively, we may think of a surface as consisting locally of a number of patches "sewn" to each other, so as to form a quilt from a global perspective. The first condition in the definition states that each local patch looks like a piece of $\mathbf{R}^2$, whereas the second differentiability condition indicates that the patches are joined together smoothly. Another way to state this idea is to say that a surface is a space that is locally Euclidean and that has a differentiable structure, so that the notion of differentiation makes sense. If the Euclidean space is of dimension $n$, the "surface" is called an $n$-dimensional manifold.

4.4 Example Consider the local coordinate chart
$$\mathbf{x}(u,v) = (\sin u\cos v,\ \sin u\sin v,\ \cos u).$$
The vector equation is equivalent to three scalar functions in two variables,
$$x = \sin u\cos v, \qquad y = \sin u\sin v, \qquad z = \cos u. \qquad (4.5)$$
Clearly, the surface represented by this chart is part of the sphere $x^2 + y^2 + z^2 = 1$. The chart cannot possibly represent the whole sphere because, although a sphere is locally Euclidean (the earth is locally flat), there is certainly a topological difference between a sphere and a plane. Indeed, if one analyzes the coordinate chart carefully, one will note that at the North pole ($u = 0$, $z = 1$) the coordinates become singular. This happens because $u = 0$ implies that $x = y = 0$ regardless of the value of $v$, so that the North pole has an infinite number of labels. If one holds one of the parameters constant while varying the other, then the resulting 1-parameter equations describe a curve on the surface. Thus, for example, letting $u = \text{constant}$ in equation (4.5) we get a circle of latitude, while letting $v = \text{constant}$ we get a meridian great circle. The fact that it is required to have two parameters to describe a patch on a surface in $\mathbf{R}^3$ is a manifestation of the 2-dimensional nature of the surfaces.

4.5 Notation Given a parametrization of a surface in a local chart $\mathbf{x}(u,v) = \mathbf{x}(u^1,u^2) = \mathbf{x}(u^\alpha)$, we will denote the partial derivatives by any of the following notations:
$$\mathbf{x}_u = \mathbf{x}_1 = \frac{\partial\mathbf{x}}{\partial u}, \qquad \mathbf{x}_{uu} = \mathbf{x}_{11} = \frac{\partial^2\mathbf{x}}{\partial u^2},$$
$$\mathbf{x}_v = \mathbf{x}_2 = \frac{\partial\mathbf{x}}{\partial v}, \qquad \mathbf{x}_{vv} = \mathbf{x}_{22} = \frac{\partial^2\mathbf{x}}{\partial v^2},$$
$$\mathbf{x}_\alpha = \frac{\partial\mathbf{x}}{\partial u^\alpha}, \qquad \mathbf{x}_{\alpha\beta} = \frac{\partial^2\mathbf{x}}{\partial u^\alpha\,\partial u^\beta}.$$
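Returning to Example 4.4, the singular behavior at the pole can be seen directly from the Jacobian. The following SymPy sketch (names are ours) computes the three $2\times 2$ minors of the Jacobian of the sphere chart; they are generically nonzero, but all vanish at $u = 0$, so the chart fails to have maximal rank there.

\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v')
X = sp.Matrix([sp.sin(u)*sp.cos(v), sp.sin(u)*sp.sin(v), sp.cos(u)])
J = X.jacobian(sp.Matrix([u, v]))                   # 3 x 2 Jacobian

minors = [sp.simplify(J.extract([i, j], [0, 1]).det())
          for (i, j) in [(0, 1), (0, 2), (1, 2)]]
print(minors)                           # generically nonzero expressions
print([m.subs(u, 0) for m in minors])   # [0, 0, 0] at the North pole
\end{verbatim}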

4.2 The First Fundamental Form

Let $x^i(u^\alpha)$ be a local parametrization of a surface. Then the Euclidean inner product in $\mathbf{R}^3$ induces an inner product on the space of tangent vectors at each point of the surface. This metric on the surface is obtained as follows:
$$dx^i = \frac{\partial x^i}{\partial u^\alpha}\,du^\alpha,$$
so that
$$ds^2 = \delta_{ij}\,dx^i\,dx^j = \delta_{ij}\,\frac{\partial x^i}{\partial u^\alpha}\frac{\partial x^j}{\partial u^\beta}\,du^\alpha\,du^\beta.$$
Thus,
$$ds^2 = g_{\alpha\beta}\,du^\alpha\,du^\beta, \qquad (4.6)$$
where
$$g_{\alpha\beta} = \delta_{ij}\,\frac{\partial x^i}{\partial u^\alpha}\frac{\partial x^j}{\partial u^\beta}. \qquad (4.7)$$
We conclude that the surface, by virtue of being embedded in $\mathbf{R}^3$, inherits a natural metric (4.6), which we will call the induced metric. A pair $\{M, g\}$, where $M$ is a manifold and $g = g_{\alpha\beta}\,du^\alpha\otimes du^\beta$ is a metric, is called a Riemannian manifold if considered as an entity in itself, and a Riemannian submanifold of $\mathbf{R}^n$ if viewed as an object embedded in Euclidean space. An equivalent version of the metric (4.6) can be obtained by using a more traditional calculus notation:
\begin{align*}
d\mathbf{x} &= \mathbf{x}_u\,du + \mathbf{x}_v\,dv,\\
ds^2 &= d\mathbf{x}\cdot d\mathbf{x} = (\mathbf{x}_u\,du + \mathbf{x}_v\,dv)\cdot(\mathbf{x}_u\,du + \mathbf{x}_v\,dv)
      = (\mathbf{x}_u\cdot\mathbf{x}_u)\,du^2 + 2(\mathbf{x}_u\cdot\mathbf{x}_v)\,du\,dv + (\mathbf{x}_v\cdot\mathbf{x}_v)\,dv^2.
\end{align*}
We can rewrite the last result as
$$ds^2 = E\,du^2 + 2F\,du\,dv + G\,dv^2, \qquad (4.8)$$
where
$$E = g_{11} = \mathbf{x}_u\cdot\mathbf{x}_u, \qquad F = g_{12} = g_{21} = \mathbf{x}_u\cdot\mathbf{x}_v = \mathbf{x}_v\cdot\mathbf{x}_u, \qquad G = g_{22} = \mathbf{x}_v\cdot\mathbf{x}_v;$$
that is, $g_{\alpha\beta} = \mathbf{x}_\alpha\cdot\mathbf{x}_\beta = \langle\mathbf{x}_\alpha,\mathbf{x}_\beta\rangle$.

4.6 Definition The element of arclength
$$ds^2 = g_{\alpha\beta}\,du^\alpha\otimes du^\beta \qquad (4.9)$$
is also called the first fundamental form. We must caution the reader that this quantity is not a form in the sense of differential geometry, since $ds^2$ involves the symmetric tensor product rather than the wedge product.
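The coefficients $E$, $F$, $G$ are straightforward to compute symbolically. As a quick illustration (SymPy assumed; names are ours), the sketch below evaluates them for the unit-sphere chart of Example 4.4, anticipating Example 4.8(a) with $a = 1$.

\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v')
X = sp.Matrix([sp.sin(u)*sp.cos(v), sp.sin(u)*sp.sin(v), sp.cos(u)])
Xu, Xv = X.diff(u), X.diff(v)

E = sp.simplify(Xu.dot(Xu))    # 1
F = sp.simplify(Xu.dot(Xv))    # 0
G = sp.simplify(Xv.dot(Xv))    # sin(u)**2
print(E, F, G)                 # so ds^2 = du^2 + sin(u)^2 dv^2
\end{verbatim}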

The first fundamental form plays such a crucial role in the theory of surfaces that we will find it convenient to introduce yet a third, more modern, version. Following the same development as in the theory of curves, consider a surface $M$ defined locally by a function $(u^1,u^2)\longmapsto\alpha(u^1,u^2)$. We say that a quantity $X_p$ is a tangent vector at a point $p\in M$ if $X_p$ is a linear derivation on the space of $C^\infty$ real-valued functions $\{f\mid f: M\longrightarrow\mathbf{R}\}$ on the surface. The set of all tangent vectors at a point $p\in M$ is called the tangent space $T_pM$. As before, a vector field $X$ on the surface is a smooth choice of a tangent vector at each point on the surface, and the union of all tangent spaces is called the tangent bundle $TM$.

The coordinate chart map $\alpha: \mathbf{R}^2\longrightarrow M\subset\mathbf{R}^3$ induces a push-forward map $\alpha_*: T\mathbf{R}^2\longrightarrow TM$ defined by
$$\alpha_*(V)(f)\big|_{\alpha(u^\alpha)} = V(f\circ\alpha)\big|_{u^\alpha}.$$
Just as in the case of curves, when we revert back to classical notation to describe a surface as $x^i(u^\alpha)$, where $x^i$ are the coordinate functions in $\mathbf{R}^3$, what we really mean is $(x^i\circ\alpha)(u^\alpha)$. Particular examples of tangent vectors on $M$ are given by the push-forward of the standard basis of $T\mathbf{R}^2$. These tangent vectors, which earlier we called $\mathbf{x}_\alpha$, are defined by
$$\alpha_*\Bigl(\frac{\partial}{\partial u^\alpha}\Bigr)(f)\Big|_{\alpha(u^\alpha)} = \frac{\partial}{\partial u^\alpha}(f\circ\alpha)\Big|_{u^\alpha}.$$
In this formalism, the first fundamental form $I$ is just the symmetric bilinear tensor defined by the induced metric,
$$I(X,Y) = g(X,Y) = \langle X, Y\rangle, \qquad (4.10)$$
where $X$ and $Y$ are any pair of vector fields in $TM$.

Orthogonal Parametric Curves

Let $V$ and $W$ be vectors tangent to a surface $M$ defined locally by a chart $\mathbf{x}(u^\alpha)$. Since the vectors $\mathbf{x}_\alpha$ span the tangent space of $M$ at each point, the vectors $V$ and $W$ can be written as linear combinations
$$V = V^\alpha\mathbf{x}_\alpha, \qquad W = W^\alpha\mathbf{x}_\alpha.$$
The functions $V^\alpha$ and $W^\alpha$ are called the curvilinear coordinates of the vectors. We can calculate the length and the inner product of the vectors using the induced Riemannian metric:
$$\|V\|^2 = \langle V, V\rangle = \langle V^\alpha\mathbf{x}_\alpha, V^\beta\mathbf{x}_\beta\rangle = V^\alpha V^\beta\langle\mathbf{x}_\alpha,\mathbf{x}_\beta\rangle = g_{\alpha\beta}V^\alpha V^\beta,
\qquad \|W\|^2 = g_{\alpha\beta}W^\alpha W^\beta,$$
and
$$\langle V, W\rangle = \langle V^\alpha\mathbf{x}_\alpha, W^\beta\mathbf{x}_\beta\rangle = V^\alpha W^\beta\langle\mathbf{x}_\alpha,\mathbf{x}_\beta\rangle = g_{\alpha\beta}V^\alpha W^\beta.$$
The angle $\theta$ subtended by the vectors $V$ and $W$ is then given by the equation
$$\cos\theta = \frac{\langle V, W\rangle}{\|V\|\,\|W\|}
            = \frac{I(V,W)}{\sqrt{I(V,V)}\,\sqrt{I(W,W)}}
            = \frac{g_{\alpha\beta}V^\alpha W^\beta}{\sqrt{g_{\alpha\beta}V^\alpha V^\beta}\,\sqrt{g_{\alpha\beta}W^\alpha W^\beta}}. \qquad (4.11)$$
Let $u^\alpha = \phi^\alpha(t)$ and $u^\alpha = \psi^\alpha(t)$ be two curves on the surface. Then the total differentials
$$du^\alpha = \frac{d\phi^\alpha}{dt}\,dt \qquad\text{and}\qquad \delta u^\alpha = \frac{d\psi^\alpha}{dt}\,\delta t$$

represent infinitesimal tangent vectors to the curves. Applying equation (4.11) to these infinitesimal vectors, the angle between two infinitesimal vectors tangent to two intersecting curves on the surface satisfies
$$\cos\theta = \frac{g_{\alpha\beta}\,du^\alpha\,\delta u^\beta}{\sqrt{g_{\alpha\beta}\,du^\alpha\,du^\beta}\,\sqrt{g_{\alpha\beta}\,\delta u^\alpha\,\delta u^\beta}}. \qquad (4.12)$$
In particular, if the two curves happen to be the parametric curves $u^1 = \text{constant}$ and $u^2 = \text{constant}$, then along the first curve we have $du^1 = 0$, with $du^2$ arbitrary, and along the second we have $\delta u^1$ arbitrary and $\delta u^2 = 0$. In this case, the cosine of the angle subtended by the infinitesimal tangent vectors reduces to
$$\cos\theta = \frac{g_{12}\,\delta u^1\,du^2}{\sqrt{g_{11}(\delta u^1)^2}\,\sqrt{g_{22}(du^2)^2}} = \frac{g_{12}}{\sqrt{g_{11}g_{22}}} = \frac{F}{\sqrt{EG}}. \qquad (4.13)$$
As a result, we have the following.

4.7 Proposition The parametric lines are orthogonal if $F = 0$.

4.8 Examples

a) Sphere
\begin{align*}
\mathbf{x} &= (a\sin\theta\cos\phi,\ a\sin\theta\sin\phi,\ a\cos\theta)\\
\mathbf{x}_\theta &= (a\cos\theta\cos\phi,\ a\cos\theta\sin\phi,\ -a\sin\theta)\\
\mathbf{x}_\phi &= (-a\sin\theta\sin\phi,\ a\sin\theta\cos\phi,\ 0)\\
E &= \mathbf{x}_\theta\cdot\mathbf{x}_\theta = a^2\\
F &= \mathbf{x}_\theta\cdot\mathbf{x}_\phi = 0\\
G &= \mathbf{x}_\phi\cdot\mathbf{x}_\phi = a^2\sin^2\theta\\
ds^2 &= a^2\,d\theta^2 + a^2\sin^2\theta\,d\phi^2
\end{align*}

b) Surface of Revolution
\begin{align*}
\mathbf{x} &= (r\cos\theta,\ r\sin\theta,\ f(r))\\
\mathbf{x}_r &= (\cos\theta,\ \sin\theta,\ f'(r))\\
\mathbf{x}_\theta &= (-r\sin\theta,\ r\cos\theta,\ 0)\\
E &= \mathbf{x}_r\cdot\mathbf{x}_r = 1 + f'^2(r)\\
F &= \mathbf{x}_r\cdot\mathbf{x}_\theta = 0\\
G &= \mathbf{x}_\theta\cdot\mathbf{x}_\theta = r^2\\
ds^2 &= [1 + f'^2(r)]\,dr^2 + r^2\,d\theta^2
\end{align*}

c) Pseudosphere
\begin{align*}
\mathbf{x} &= \bigl(a\sin u\cos v,\ a\sin u\sin v,\ a(\cos u + \ln\tan\tfrac{u}{2})\bigr)\\
E &= a^2\cot^2 u\\
F &= 0\\
G &= a^2\sin^2 u\\
ds^2 &= a^2\cot^2 u\,du^2 + a^2\sin^2 u\,dv^2
\end{align*}

d) Torus

\begin{align*}
\mathbf{x} &= \bigl((b + a\cos u)\cos v,\ (b + a\cos u)\sin v,\ a\sin u\bigr)\\
E &= a^2\\
F &= 0\\
G &= (b + a\cos u)^2\\
ds^2 &= a^2\,du^2 + (b + a\cos u)^2\,dv^2
\end{align*}

e) Helicoid
\begin{align*}
\mathbf{x} &= (u\cos v,\ u\sin v,\ av)\\
E &= 1\\
F &= 0\\
G &= u^2 + a^2\\
ds^2 &= du^2 + (u^2 + a^2)\,dv^2
\end{align*}

f) Catenoid
\begin{align*}
\mathbf{x} &= \Bigl(u\cos v,\ u\sin v,\ c\cosh^{-1}\frac{u}{c}\Bigr)\\
E &= \frac{u^2}{u^2 - c^2}\\
F &= 0\\
G &= u^2\\
ds^2 &= \frac{u^2}{u^2 - c^2}\,du^2 + u^2\,dv^2
\end{align*}

4.3 The Second Fundamental Form

Let $\mathbf{x} = \mathbf{x}(u^\alpha)$ be a coordinate patch on a surface $M$. Since $\mathbf{x}_u$ and $\mathbf{x}_v$ are tangential to the surface, we can construct a unit normal $\mathbf{n}$ to the surface by taking
$$\mathbf{n} = \frac{\mathbf{x}_u\times\mathbf{x}_v}{\|\mathbf{x}_u\times\mathbf{x}_v\|}. \qquad (4.14)$$
Now, consider a curve on the surface given by $u^\alpha = u^\alpha(s)$. Without loss of generality, we assume that the curve is parametrized by arc length $s$, so that the curve has unit speed. Using the chain rule, we see that the unit tangent vector $T$ to the curve is given by
$$T = \frac{d\mathbf{x}}{ds} = \frac{d\mathbf{x}}{du^\alpha}\frac{du^\alpha}{ds} = \mathbf{x}_\alpha\frac{du^\alpha}{ds}. \qquad (4.15)$$
Since the curve lives on the surface and the vector $T$ is tangent to the curve, it is clear that $T$ is also tangent to the surface. On the other hand, the vector $T' = dT/ds$ does not in general have this property, so what we will do is to decompose $T'$ into its normal and tangential components (see figure 4.2):
$$T' = K_n + K_g = \kappa_n\,\mathbf{n} + K_g, \qquad (4.16)$$
where $\kappa_n = \|K_n\| = \langle T',\mathbf{n}\rangle$. The scalar quantity $\kappa_n$ is called the normal curvature of the curve, and $K_g$ is called the geodesic curvature vector. The normal curvature measures the curvature of $\mathbf{x}(u^\alpha(s))$ resulting

from the constraint of the curve to lie on the surface. The geodesic curvature vector $K_g$ measures the "sideward" component of the curvature in the tangent plane to the surface. Thus, if one draws a straight line on a flat piece of paper and then smoothly bends the paper into a surface, the straight line acquires some curvature. Since the line was originally straight, there is no sideward component of curvature, so $K_g = 0$ in this case. This means that the entire contribution to the curvature comes from the normal component, reflecting the fact that the only reason there is curvature here is the bend in the surface itself.

In this sense, the normal curvature is more of a property pertaining to a direction on the surface at a point, whereas the geodesic curvature really depends on the curve itself. It might be impossible for a hiker walking on the undulating hills of the Ozarks to find a straight-line trail, since the rolling hills of the terrain extend in all directions; however, it might be possible to walk on a path with zero geodesic curvature as long as the hiker can maintain the same compass direction. Thus, if one specifies a point $p\in M$ and a direction vector $X_p\in T_pM$, one can geometrically envision the normal curvature by considering the equivalence class of all unit-speed curves in $M$ which contain the point $p$ and whose tangent vectors line up with the direction of $X$. Of course, there are infinitely many such curves, but at an infinitesimal level all these curves can be obtained by intersecting the surface with a "vertical" plane containing the vector $X$ and the normal to $M$. All curves in this equivalence class have the same normal curvature, and their geodesic curvatures vanish.

[Figure 4.2: Normal Curvature]

To find an explicit formula for the normal curvature, we first differentiate equation (4.15):
\begin{align*}
T' &= \frac{dT}{ds}\\
   &= \frac{d}{ds}\Bigl(\mathbf{x}_\alpha\frac{du^\alpha}{ds}\Bigr)\\
   &= \frac{d}{ds}(\mathbf{x}_\alpha)\frac{du^\alpha}{ds} + \mathbf{x}_\alpha\frac{d^2u^\alpha}{ds^2}\\
   &= \Bigl(\frac{d\mathbf{x}_\alpha}{du^\beta}\frac{du^\beta}{ds}\Bigr)\frac{du^\alpha}{ds} + \mathbf{x}_\alpha\frac{d^2u^\alpha}{ds^2}\\
   &= \mathbf{x}_{\alpha\beta}\frac{du^\alpha}{ds}\frac{du^\beta}{ds} + \mathbf{x}_\alpha\frac{d^2u^\alpha}{ds^2}.
\end{align*}
Taking the inner product of the last equation with the normal and noticing that $\langle\mathbf{x}_\alpha,\mathbf{n}\rangle = 0$, we get
$$\kappa_n = \langle T',\mathbf{n}\rangle = \langle\mathbf{x}_{\alpha\beta},\mathbf{n}\rangle\frac{du^\alpha}{ds}\frac{du^\beta}{ds}
           = \frac{b_{\alpha\beta}\,du^\alpha\,du^\beta}{g_{\alpha\beta}\,du^\alpha\,du^\beta}, \qquad (4.17)$$
where
$$b_{\alpha\beta} = \langle\mathbf{x}_{\alpha\beta},\mathbf{n}\rangle. \qquad (4.18)$$

4.9 Definition The expression
$$II = b_{\alpha\beta}\,du^\alpha\otimes du^\beta \qquad (4.19)$$
is called the second fundamental form.

4.10 Proposition The second fundamental form is symmetric.
Proof: In the classical formulation of the second fundamental form, the proof is trivial: we have $b_{\alpha\beta} = b_{\beta\alpha}$, since for a $C^\infty$ patch $\mathbf{x}(u^\alpha)$ we have $\mathbf{x}_{\alpha\beta} = \mathbf{x}_{\beta\alpha}$, because the partial derivatives commute.

We will denote the coefficients of the second fundamental form by
\begin{align*}
e &= b_{11} = \langle\mathbf{x}_{uu},\mathbf{n}\rangle,\\
f &= b_{12} = b_{21} = \langle\mathbf{x}_{uv},\mathbf{n}\rangle = \langle\mathbf{x}_{vu},\mathbf{n}\rangle,\\
g &= b_{22} = \langle\mathbf{x}_{vv},\mathbf{n}\rangle,
\end{align*}
so that equation (4.19) can be written as
$$II = e\,du^2 + 2f\,du\,dv + g\,dv^2 \qquad (4.20)$$
and equation (4.17) as
$$\kappa_n = \frac{II}{I} = \frac{e\,du^2 + 2f\,du\,dv + g\,dv^2}{E\,du^2 + 2F\,du\,dv + G\,dv^2}. \qquad (4.21)$$
We would also like to point out that, just as the first fundamental form can be represented as
$$I = \langle d\mathbf{x}, d\mathbf{x}\rangle,$$
so can we represent the second fundamental form as
$$II = -\langle d\mathbf{x}, d\mathbf{n}\rangle.$$
To see this, it suffices to note that differentiation of the identity $\langle\mathbf{x}_\alpha,\mathbf{n}\rangle = 0$ implies that
$$\langle\mathbf{x}_{\alpha\beta},\mathbf{n}\rangle = -\langle\mathbf{x}_\alpha,\mathbf{n}_\beta\rangle.$$
Therefore,
\begin{align*}
\langle d\mathbf{x}, d\mathbf{n}\rangle &= \langle\mathbf{x}_\alpha\,du^\alpha,\ \mathbf{n}_\beta\,du^\beta\rangle\\
&= \langle\mathbf{x}_\alpha,\mathbf{n}_\beta\rangle\,du^\alpha\,du^\beta\\
&= -\langle\mathbf{x}_{\alpha\beta},\mathbf{n}\rangle\,du^\alpha\,du^\beta\\
&= -II.
\end{align*}
From a computational point of view, a more useful formula for the coefficients of the second fundamental form can be derived by first applying the classical vector identity
$$(A\times B)\cdot(C\times D) = \det\begin{pmatrix} A\cdot C & A\cdot D\\ B\cdot C & B\cdot D\end{pmatrix} \qquad (4.22)$$
to compute
$$\|\mathbf{x}_u\times\mathbf{x}_v\|^2 = (\mathbf{x}_u\times\mathbf{x}_v)\cdot(\mathbf{x}_u\times\mathbf{x}_v)
 = \det\begin{pmatrix}\mathbf{x}_u\cdot\mathbf{x}_u & \mathbf{x}_u\cdot\mathbf{x}_v\\ \mathbf{x}_v\cdot\mathbf{x}_u & \mathbf{x}_v\cdot\mathbf{x}_v\end{pmatrix} = EG - F^2. \qquad (4.23)$$
Consequently, the normal vector can be written as
$$\mathbf{n} = \frac{\mathbf{x}_u\times\mathbf{x}_v}{\|\mathbf{x}_u\times\mathbf{x}_v\|} = \frac{\mathbf{x}_u\times\mathbf{x}_v}{\sqrt{EG - F^2}}.$$
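The formulas above are easy to apply with a computer algebra system. The following sketch (SymPy assumed; names are ours) computes the unit normal and the coefficients $e$, $f$, $g$ for the torus chart of Example 4.8(d); for a ring torus with $b > a$ the length of $\mathbf{x}_u\times\mathbf{x}_v$ is $a(b + a\cos u)$, a fact the code also checks. The signs of $e$ and $g$ depend, of course, on the chosen orientation of the normal.

\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v')
a, b = sp.symbols('a b', positive=True)

X = sp.Matrix([(b + a*sp.cos(u))*sp.cos(v),
               (b + a*sp.cos(u))*sp.sin(v),
               a*sp.sin(u)])
Xu, Xv = X.diff(u), X.diff(v)

N = Xu.cross(Xv)
length = a*(b + a*sp.cos(u))                 # |x_u x x_v| for a ring torus (b > a)
print(sp.simplify(N.dot(N) - length**2))     # 0, confirming the stated length
n = (N / length).applyfunc(sp.simplify)      # unit normal for this orientation

e = sp.simplify(X.diff(u, 2).dot(n))         # <x_uu, n> = a
f = sp.simplify(X.diff(u).diff(v).dot(n))    # <x_uv, n> = 0
g = sp.simplify(X.diff(v, 2).dot(n))         # <x_vv, n> = (b + a cos u) cos u
print(e, f, g)
\end{verbatim}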

Thus, we can write the coefficients $b_{\alpha\beta}$ directly as triple products involving derivatives of $\mathbf{x}$. The expressions for these coefficients are
$$e = \frac{(\mathbf{x}_u\ \mathbf{x}_v\ \mathbf{x}_{uu})}{\sqrt{EG - F^2}}, \qquad
  f = \frac{(\mathbf{x}_u\ \mathbf{x}_v\ \mathbf{x}_{uv})}{\sqrt{EG - F^2}}, \qquad
  g = \frac{(\mathbf{x}_u\ \mathbf{x}_v\ \mathbf{x}_{vv})}{\sqrt{EG - F^2}}. \qquad (4.24)$$

The first fundamental form on a surface measures the (square of the) distance between two infinitesimally separated points. There is a similar interpretation of the second fundamental form: it measures the distance from a point on the surface to the tangent plane at a second, infinitesimally separated point. To see this simple geometrical interpretation, consider a point $\mathbf{x}_0 = \mathbf{x}(u^\alpha_0)\in M$ and a nearby point $\mathbf{x}(u^\alpha_0 + du^\alpha)$. Expanding in a Taylor series, we get
$$\mathbf{x}(u^\alpha_0 + du^\alpha) = \mathbf{x}_0 + (\mathbf{x}_0)_\alpha\,du^\alpha + \tfrac{1}{2}(\mathbf{x}_0)_{\alpha\beta}\,du^\alpha du^\beta + \cdots.$$
We recall that the distance from a point $\mathbf{x}$ to a plane containing $\mathbf{x}_0$ is just the scalar projection of $\mathbf{x}-\mathbf{x}_0$ onto the normal. Since the normal to the plane at $\mathbf{x}_0$ is the same as the unit normal to the surface, and $\langle\mathbf{x}_\alpha,\mathbf{n}\rangle = 0$, we find that the distance $D$ is
$$D = \langle\mathbf{x}-\mathbf{x}_0,\ \mathbf{n}\rangle = \tfrac{1}{2}\langle(\mathbf{x}_0)_{\alpha\beta},\mathbf{n}\rangle\,du^\alpha du^\beta = \tfrac{1}{2}\,II_0.$$
The first fundamental form (or rather, its determinant) also appears in calculus in the context of calculating the area of a parametrized surface. The reason is that if one considers an infinitesimal parallelogram subtended by the vectors $\mathbf{x}_u\,du$ and $\mathbf{x}_v\,dv$, then the differential of surface area is given by the length of the cross product of these two infinitesimal tangent vectors. That is,
$$dS = \|\mathbf{x}_u\times\mathbf{x}_v\|\,du\,dv, \qquad S = \int\!\!\int\sqrt{EG - F^2}\;du\,dv.$$
The second fundamental form contains information about the shape of the surface at a point. For example, the discussion above indicates that if $b = |b_{\alpha\beta}| = eg - f^2 > 0$, then all the neighboring points lie on the same side of the tangent plane, whereas if $b < 0$ the surface is concave in one direction and convex in another, so that neighboring points lie on both sides of the tangent plane. If at a point on a surface $b > 0$, the point is called an elliptic point; if $b < 0$, the point is called hyperbolic or a saddle point; and if $b = 0$, the point is called parabolic.

4.4 Curvature

Curvature, and all the related questions which surround it, constitutes the central object of study in differential geometry. One would like to be able to answer questions such as: what quantities remain invariant as one surface is smoothly changed into another? There is certainly something intrinsically different between a cone, which we can construct from a flat piece of paper, and a sphere, which we cannot. What is it that makes these two surfaces so different? How does one calculate the shortest path between two objects when the path is constrained to lie on a surface?

These and many other questions of a similar type can be quantitatively answered through the study of curvature. We cannot overstate the great importance of this subject; perhaps it suffices to say that, without a clear understanding of curvature, there would not be a general theory of relativity, no concept of black holes, and, even more disastrous, no Star Trek.

The study of curvature of a hypersurface in $\mathbf{R}^n$ (a surface of dimension $n-1$) begins by trying to understand the covariant derivative of the normal to the surface. The reason is simple: if the normal to a surface is constant, then the surface is a flat hyperplane. Thus, it is variations in the normal that indicate the presence of curvature. For simplicity, we constrain our discussion to surfaces in $\mathbf{R}^3$, but the formalism we use is applicable to any dimension. We will also introduce in this section the modern version of the second fundamental form.

4.11 Definition Let $X$ be a vector field on a surface $M$ in $\mathbf{R}^3$, and let $N$ be the normal vector. The map $L$ given by
$$LX = -\overline{\nabla}_X N \qquad (4.25)$$
is called the Weingarten map.

In this definition we will be careful to differentiate between operators which live on the surface and operators which live in the ambient space. We will adopt the convention of overlining objects which live in the ambient space, the covariant derivative $\overline{\nabla}$ above being an example of one such object.

4.12 Definition The Lie bracket $[X,Y]$ of two vector fields $X$ and $Y$ on a surface $M$ is defined as the commutator
$$[X,Y] = XY - YX, \qquad (4.26)$$
meaning that if $f$ is a function on $M$, then $[X,Y](f) = X(Y(f)) - Y(X(f))$.

4.13 Proposition The Lie bracket of two vectors $X, Y\in T(M)$ is another vector in $T(M)$.
Proof: It suffices to prove that the bracket is a linear derivation on the space of $C^\infty$ functions. Consider vectors $X, Y\in T(M)$ and smooth functions $f, g$ in $M$. Then
\begin{align*}
[X,Y](f+g) &= X(Y(f+g)) - Y(X(f+g))\\
&= X(Y(f) + Y(g)) - Y(X(f) + X(g))\\
&= X(Y(f)) - Y(X(f)) + X(Y(g)) - Y(X(g))\\
&= [X,Y](f) + [X,Y](g),
\end{align*}
and
\begin{align*}
[X,Y](fg) &= X(Y(fg)) - Y(X(fg))\\
&= X[fY(g) + gY(f)] - Y[fX(g) + gX(f)]\\
&= X(f)Y(g) + fX(Y(g)) + X(g)Y(f) + gX(Y(f))\\
&\qquad - Y(f)X(g) - fY(X(g)) - Y(g)X(f) - gY(X(f))\\
&= f[X(Y(g)) - Y(X(g))] + g[X(Y(f)) - Y(X(f))]\\
&= f[X,Y](g) + g[X,Y](f).
\end{align*}

4.14 Proposition The Weingarten map is a linear transformation on $T(M)$.
Proof: Linearity follows from the linearity of $\overline{\nabla}$, so it suffices to show that $L: X\longmapsto LX$ maps $X\in T(M)$ to a vector $LX\in T(M)$. Since $N$ is the unit normal to the surface, $\langle N, N\rangle = 1$, so that any derivative of $\langle N, N\rangle$ is $0$.

Assuming that the connection is compatible with the metric, we then have
$$0 = X\langle N, N\rangle = \langle\overline{\nabla}_X N, N\rangle + \langle N, \overline{\nabla}_X N\rangle = 2\langle\overline{\nabla}_X N, N\rangle = -2\langle LX, N\rangle.$$
Therefore, $LX$ is orthogonal to $N$, and hence it lies in $T(M)$.

We recall at this point that in the previous section we gave two equivalent definitions, $\langle d\mathbf{x}, d\mathbf{x}\rangle$ and $\langle X, Y\rangle$, of the first fundamental form. We will now do the same for the second fundamental form.

4.15 Definition The second fundamental form is the bilinear map
$$II(X,Y) = \langle LX, Y\rangle. \qquad (4.27)$$

4.16 Remark It should be noted that the two definitions of the second fundamental form are consistent. This is easy to see if one chooses $X$ to have components $\mathbf{x}_\alpha$ and $Y$ to have components $\mathbf{x}_\beta$. With these choices, $LX$ has components $-\mathbf{n}_\alpha$, and $II(X,Y)$ becomes
$$b_{\alpha\beta} = -\langle\mathbf{x}_\alpha, \mathbf{n}_\beta\rangle.$$
We also note that there is a third fundamental form, defined by
$$III(X,Y) = \langle LX, LY\rangle = \langle L^2X, Y\rangle. \qquad (4.28)$$
In classical notation, the third fundamental form would be denoted by $\langle d\mathbf{n}, d\mathbf{n}\rangle$. As one would expect, the third fundamental form contains third-order Taylor series information about the surface. We will not treat $III(X,Y)$ in much detail in this work.

4.17 Definition The torsion of a connection $\overline{\nabla}$ is the operator $T$ such that, for all $X, Y$,
$$T(X,Y) = \overline{\nabla}_X Y - \overline{\nabla}_Y X - [X,Y]. \qquad (4.29)$$
A connection is called torsion-free if $T(X,Y) = 0$, that is, if
$$\overline{\nabla}_X Y - \overline{\nabla}_Y X = [X,Y].$$
We will say much more later about the torsion and the importance of torsion-free connections. For the time being, it suffices to assume that, for the rest of this section, all connections are torsion-free. Using this assumption, it is possible to prove the following very important theorem.

4.18 Theorem The Weingarten map is a self-adjoint operator on $TM$.
Proof: We have already shown that $L: TM\longrightarrow TM$ is a linear map. Recall that an operator $L$ on a linear space is self-adjoint if $\langle LX, Y\rangle = \langle X, LY\rangle$, so the theorem is equivalent to proving that the second fundamental form is symmetric, $II[X,Y] = II[Y,X]$. Since $\langle X, N\rangle = 0$ and $\langle Y, N\rangle = 0$, and the connection is compatible with the metric, we know that
$$\langle\overline{\nabla}_X N, Y\rangle = -\langle N, \overline{\nabla}_X Y\rangle \qquad\text{and}\qquad \langle\overline{\nabla}_Y N, X\rangle = -\langle N, \overline{\nabla}_Y X\rangle.$$
Computing the difference of the two quantities $II[X,Y]$ and $II[Y,X]$, we get

\begin{align*}
II[X,Y] - II[Y,X] &= \langle LX, Y\rangle - \langle LY, X\rangle\\
&= -\langle\overline{\nabla}_X N, Y\rangle + \langle\overline{\nabla}_Y N, X\rangle\\
&= \langle N, \overline{\nabla}_X Y\rangle - \langle N, \overline{\nabla}_Y X\rangle\\
&= \langle N, \overline{\nabla}_X Y - \overline{\nabla}_Y X\rangle\\
&= \langle N, [X,Y]\rangle\\
&= 0, \qquad\text{since } [X,Y]\in T(M).
\end{align*}

One of the most important topics in an introductory course in linear algebra deals with the spectrum of self-adjoint operators. The main result in this area states that if one considers the eigenvalue equation
$$LX = \kappa X, \qquad (4.30)$$
then the eigenvalues are always real, and eigenvectors corresponding to different eigenvalues are orthogonal. In the current situation, the vector spaces in question are the tangent spaces at each point of a surface in $\mathbf{R}^3$, so the dimension is 2. Hence, we expect two eigenvalues and two eigenvectors,
$$LX_1 = \kappa_1 X_1, \qquad LX_2 = \kappa_2 X_2. \qquad (4.31)$$

4.19 Definition The eigenvalues $\kappa_1$ and $\kappa_2$ of the Weingarten map $L$ are called the principal curvatures, and the eigenvectors $X_1$ and $X_2$ are called the principal directions.

Several possible situations may occur, depending on the classification of the eigenvalues at each point $p$ on the surface:
1. If $\kappa_1 = \kappa_2$ and both eigenvalues are positive, then $p$ is called an elliptic point.
2. If $\kappa_1\kappa_2 < 0$, then $p$ is called a hyperbolic point.
3. If $\kappa_1\kappa_2 = 0$, then $p$ is called a parabolic point.
4. If $\kappa_1 = \kappa_2 = 0$, then $p$ is called an umbilic point.

It is also well known from linear algebra that the determinant and the trace of a self-adjoint operator are the only invariants under an adjoint (similarity) transformation. Clearly these invariants are important in the case of the operator $L$, and they deserve to be given special names.

4.20 Definition The determinant $K = \det(L)$ is called the Gaussian curvature of $M$, and $H = \frac{1}{2}\mathrm{Tr}(L)$ is called the mean curvature.

Since any self-adjoint operator is diagonalizable, and in a diagonal basis the matrix representing $L$ is $\mathrm{diag}(\kappa_1,\kappa_2)$, it follows immediately that
$$K = \kappa_1\kappa_2, \qquad H = \tfrac{1}{2}(\kappa_1 + \kappa_2). \qquad (4.32),\ (4.33)$$

4.21 Proposition Let $X$ and $Y$ be any two linearly independent vectors in $T(M)$. Then
$$LX\times LY = K\,(X\times Y), \qquad (LX\times Y) + (X\times LY) = 2H\,(X\times Y). \qquad (4.34)$$
Proof: Since $LX, LY\in T(M)$, they can be expressed as linear combinations of the basis vectors $X$ and $Y$:

$$LX = a_1X + b_1Y, \qquad LY = a_2X + b_2Y.$$
Computing the cross product, we get
$$LX\times LY = (a_1b_2 - a_2b_1)\,(X\times Y) = \begin{vmatrix} a_1 & b_1\\ a_2 & b_2\end{vmatrix}\,(X\times Y) = \det(L)\,(X\times Y) = K\,(X\times Y).$$
Similarly,
$$(LX\times Y) + (X\times LY) = (a_1 + b_2)\,(X\times Y) = \mathrm{Tr}(L)\,(X\times Y) = 2H\,(X\times Y).$$

4.22 Proposition
$$K = \frac{eg - f^2}{EG - F^2}, \qquad H = \frac{1}{2}\,\frac{Eg - 2Ff + eG}{EG - F^2}. \qquad (4.35)$$
Proof: Starting with equations (4.34), take the dot product of both sides with $X\times Y$ and use the vector identity (4.22). We immediately get
$$K = \frac{\begin{vmatrix}\langle LX, X\rangle & \langle LX, Y\rangle\\ \langle LY, X\rangle & \langle LY, Y\rangle\end{vmatrix}}
          {\begin{vmatrix}\langle X, X\rangle & \langle X, Y\rangle\\ \langle Y, X\rangle & \langle Y, Y\rangle\end{vmatrix}}, \qquad
2H = \frac{\begin{vmatrix}\langle LX, X\rangle & \langle LX, Y\rangle\\ \langle Y, X\rangle & \langle Y, Y\rangle\end{vmatrix}
         + \begin{vmatrix}\langle X, X\rangle & \langle X, Y\rangle\\ \langle LY, X\rangle & \langle LY, Y\rangle\end{vmatrix}}
          {\begin{vmatrix}\langle X, X\rangle & \langle X, Y\rangle\\ \langle Y, X\rangle & \langle Y, Y\rangle\end{vmatrix}}.$$
The result follows by taking $X = \mathbf{x}_u$ and $Y = \mathbf{x}_v$.

4.23 Theorem (Euler) Let $X_1$ and $X_2$ be unit eigenvectors of $L$, and let $X = (\cos\theta)X_1 + (\sin\theta)X_2$. Then
$$II(X,X) = \kappa_1\cos^2\theta + \kappa_2\sin^2\theta. \qquad (4.36)$$
Proof: Easy; just compute $II(X,X) = \langle LX, X\rangle$. Using the facts that $LX_1 = \kappa_1X_1$ and $LX_2 = \kappa_2X_2$, and noting that the eigenvectors are orthogonal, we get
\begin{align*}
\langle LX, X\rangle &= \langle(\cos\theta)\kappa_1X_1 + (\sin\theta)\kappa_2X_2,\ (\cos\theta)X_1 + (\sin\theta)X_2\rangle\\
&= \kappa_1\cos^2\theta\,\langle X_1, X_1\rangle + \kappa_2\sin^2\theta\,\langle X_2, X_2\rangle\\
&= \kappa_1\cos^2\theta + \kappa_2\sin^2\theta.
\end{align*}
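As a closing illustration (SymPy assumed; names are ours), the sketch below applies the formulas of Proposition 4.22 to the torus of Example 4.8(d). The resulting Gaussian curvature is positive on the outer half of the torus, negative on the inner half, and zero on the top and bottom circles; the sign of the mean curvature depends on the orientation chosen for the normal.

\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v')
a, b = sp.symbols('a b', positive=True)

X = sp.Matrix([(b + a*sp.cos(u))*sp.cos(v),
               (b + a*sp.cos(u))*sp.sin(v),
               a*sp.sin(u)])
Xu, Xv = X.diff(u), X.diff(v)
n = (Xu.cross(Xv) / (a*(b + a*sp.cos(u)))).applyfunc(sp.simplify)  # unit normal, b > a

E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)
e, f, g = X.diff(u, 2).dot(n), X.diff(u).diff(v).dot(n), X.diff(v, 2).dot(n)

K = sp.simplify((e*g - f**2) / (E*G - F**2))
H = sp.simplify((E*g - 2*F*f + e*G) / (2*(E*G - F**2)))
print(K)    # K = cos(u) / (a*(b + a*cos(u)))
print(H)    # H = (b + 2*a*cos(u)) / (2*a*(b + a*cos(u))), for this orientation
\end{verbatim}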