# Tensor Analysis

# Introduction to vector and tensor analysis

Jesper Ferkinghoff-Borg
September 6, 2007
Contents

1 Physical space
  1.1 Coordinate systems
  1.2 Distances
  1.3 Symmetries
2 Scalars and vectors
  2.1 Definitions
  2.2 Basic vector algebra
    2.2.1 Scalar product
    2.2.2 Cross product
  2.3 Coordinate systems and bases
    2.3.1 Components and notation
    2.3.2 Triplet algebra
  2.4 Orthonormal bases
    2.4.1 Scalar product
    2.4.2 Cross product
  2.5 Ordinary derivatives and integrals of vectors
    2.5.1 Derivatives
    2.5.2 Integrals
  2.6 Fields
    2.6.1 Definition
    2.6.2 Partial derivatives
    2.6.3 Differentials
    2.6.4 Gradient, divergence and curl
    2.6.5 Line, surface and volume integrals
    2.6.6 Integral theorems
  2.7 Curvilinear coordinates
    2.7.1 Cylindrical coordinates
    2.7.2 Spherical coordinates
3 Tensors
  3.1 Definition
  3.2 Outer product
  3.3 Basic tensor algebra
    3.3.1 Transposed tensors
    3.3.2 Contraction
    3.3.3 Special tensors
  3.4 Tensor components in orthonormal bases
    3.4.1 Matrix algebra
    3.4.2 Two-point components
  3.5 Tensor fields
    3.5.1 Gradient, divergence and curl
    3.5.2 Integral theorems
4 Tensor calculus
  4.1 Tensor notation and Einstein's summation rule
  4.2 Orthonormal basis transformation
    4.2.1 Cartesian coordinate transformation
    4.2.2 The orthogonal group
    4.2.3 Algebraic invariance
    4.2.4 Active and passive transformation
    4.2.5 Summary on scalars, vectors and tensors
  4.3 Tensors of any rank
    4.3.1 Introduction
    4.3.2 Definition
    4.3.3 Basic algebra
    4.3.4 Differentiation
  4.4 Reflection and pseudotensors
    4.4.1 Levi-Civita symbol
    4.4.2 Manipulation of the Levi-Civita symbol
  4.5 Tensor fields of any rank
5 Invariance and symmetries
Chapter 1
Physical space
1.1 Coordinate systems
In order to consider mechanical - or any other physical - phenomena it is necessary to choose a frame of reference, which is a set of rules for ascribing numbers to physical objects. Physical space has three dimensions, which means that it takes exactly three numbers - say $x_1, x_2, x_3$ - to locate a point, $P$, in space. These numbers are called the coordinates of the point, and the reference frame for the coordinates is called the coordinate system $C$. The coordinates of $P$ can conveniently be collected into a triplet, $\underline{x}_C(P) = (x_1(P), x_2(P), x_3(P))_C$, called the position of the point. Here, we use an underline simply as a shorthand notation for a triplet of quantities. The same point will have a different position $\underline{x}'_{C'}(P) = (x'_1(P), x'_2(P), x'_3(P))_{C'}$ in a different coordinate system $C'$. However, since different coordinate systems are just different ways of representing the same physical space in terms of real numbers, the coordinates of $P$ in $C'$ must be a unique function of its coordinates in $C$. This functional relation can be expressed as
$$ \underline{x}' = \underline{x}'(\underline{x}) \tag{1.1} $$

which is a shorthand notation for

$$ \begin{aligned} x'_1 &= x'_1(x_1, x_2, x_3) \\ x'_2 &= x'_2(x_1, x_2, x_3) \\ x'_3 &= x'_3(x_1, x_2, x_3) \end{aligned} \tag{1.2} $$

This functional relation is called a coordinate transformation. The inverse transformation must also exist, because both systems are assumed to cover the same space.
1.2 Distances
We will take it for granted that one can ascribe a unique value (i.e. independent of the chosen coordinate system), $D(O, P)$, to the distance between two arbitrarily chosen points $O$ and $P$. We also assume (or take it as an observational fact) that when $D$ becomes sufficiently small, a coordinate system exists in which $D$ can be calculated according to Pythagoras' law:

$$ D(O, P) = \sqrt{\sum_{i=1}^{d} \big(x_i(P) - x_i(O)\big)^2}, \qquad \text{Cartesian coordinate system,} \tag{1.3} $$

where $d = 3$ is the dimension.
We will call such a coordinate system a Cartesian or rectangular coordinate system. Without loss of generality we can choose one of the points as the origin - say $O$ - so $\underline{x}(O) = (0, 0, 0)$. The neighborhood, $\Omega(O)$, around $O$ in which the distance between any two points, $P_1, P_2 \in \Omega(O)$, can be calculated according to Eq. (1.3) for the same fixed coordinate system is called a (local) Euclidean space. In a Euclidean space we also write $|P_1 P_2|$ for $D(P_1, P_2)$, representing the length of the line segment connecting $P_1$ and $P_2$. Trivially, $O \in \Omega(O)$.
Classical physics takes place in a three-dimensional, globally Euclidean space, $\Omega(O) = \mathbb{R}^3$. In general relativity, space is intrinsically curved and the assumption of a Euclidean space can only be applied locally. The principle of curved space is easier to envisage for 2d surfaces. For instance, all points on the surface of the earth can be represented by two numbers - say $x_1, x_2$ - representing the longitude and latitude. However, the law of Pythagoras (with $d = 2$) can only be applied for small right triangles¹ on the surface, i.e. locally. For larger right triangles the sum of the angles will exceed 180° and Pythagoras' law will not be correct. All geometric analysis, however, relies on the assumption that at sufficiently small scales space will appear flat. In differential geometry one only requires flatness in a differential sense.
1.3 Symmetries
For classical mechanical phenomena it is found that a frame of reference can always be chosen in which space is homogeneous and isotropic and time is homogeneous. This implies that nature has no sense of an absolute origin in time and space and no sense of an absolute direction. Mechanical and geometrical laws should therefore be invariant under coordinate transformations involving rotations and translations. Motivated by this fact, we are inclined to develop a mathematical formalism for operating with geometrical objects without referring to a coordinate system or, equivalently, a formalism which takes the same form (i.e. is covariant) for all coordinates. This fundamental principle is called the principle of covariance. The mathematics of scalar, vector and tensor algebra is precisely such a formalism.
¹ Small here means that the lengths of the line segments are much smaller than the radius of the earth.
Chapter 2
Scalars and vectors
2.1 Definitions
A vector is a quantity having both a magnitude and a direction in space, such as displacement, velocity, force and acceleration.
Graphically, a vector is represented by an arrow from a point $O$ to a point $P$, the direction of the arrow defining the direction of the vector and its length the magnitude. Here, $O$ is called the initial point and $P$ the terminal point. Analytically, the vector is represented by either $\overrightarrow{OP}$ or $\mathbf{OP}$, and the magnitude by $|\overrightarrow{OP}|$ or $|\mathbf{OP}|$. We shall use the boldface notation in these notes. In this chapter we will assume that all points $P$ belong to a Euclidean space, $P \in \Omega(O)$, meaning that lengths of line segments can be calculated according to Pythagoras' law.
A scalar is a quantity having magnitude but no direction, e.g. mass, length, time, temperature and any real number.
We indicate scalars by letters of ordinary type. For example, a vector $\mathbf{a}$ will have length $a = |\mathbf{a}|$. Operations with scalars follow the same rules as elementary algebra; multiplication, addition and subtraction (provided the scalars have the same units) follow the usual algebraic rules.
2.2 Basic vector algebra
The operations defined for real numbers are, with suitable definitions, capable of extension to an algebra of vectors. The following definitions are fundamental and define the basic algebraic rules of vectors:

1. Two vectors $\mathbf{a}$ and $\mathbf{b}$ are equal if they have the same magnitude and direction, regardless of the position of their initial points.
2. A vector having the direction opposite to that of a vector $\mathbf{a}$ but the same magnitude is denoted $-\mathbf{a}$.
3. The sum or resultant of vectors $\mathbf{a}$ and $\mathbf{b}$ is a vector $\mathbf{c}$ formed by placing the initial point of $\mathbf{b}$ on the terminal point of $\mathbf{a}$ and then joining the initial point of $\mathbf{a}$ to the terminal point of $\mathbf{b}$. The sum is written $\mathbf{c} = \mathbf{a} + \mathbf{b}$.
4. The difference between two vectors, $\mathbf{a}$ and $\mathbf{b}$, represented by $\mathbf{a} - \mathbf{b}$, is defined as the sum $\mathbf{a} + (-\mathbf{b})$.
5. The product of a vector $\mathbf{a}$ with a scalar $m$ is a vector $m\mathbf{a}$ with magnitude $|m|$ times the magnitude of $\mathbf{a}$, i.e. $|m\mathbf{a}| = |m||\mathbf{a}|$, and with direction the same as or opposite to that of $\mathbf{a}$, according as $m$ is positive or negative.

We stress that these definitions of vector addition, subtraction and scalar multiplication are geometric, i.e. they make no reference to coordinates. It then becomes a pure geometric exercise to prove the following laws:

$$ \begin{aligned} \mathbf{a} + \mathbf{b} &= \mathbf{b} + \mathbf{a} && \text{Commutative law for addition} \\ \mathbf{a} + (\mathbf{b} + \mathbf{c}) &= (\mathbf{a} + \mathbf{b}) + \mathbf{c} && \text{Associative law for addition} \\ m(n\mathbf{a}) &= (mn)\mathbf{a} && \text{Associative law for multiplication} \\ (m + n)\mathbf{a} &= m\mathbf{a} + n\mathbf{a} && \text{Distributive law} \\ m(\mathbf{a} + \mathbf{b}) &= m\mathbf{a} + m\mathbf{b} && \text{Distributive law} \\ \mathbf{a}m &\stackrel{\text{def}}{=} m\mathbf{a} && \text{Commutative law for multiplication} \end{aligned} \tag{2.1} $$

Here, the last formula should just be read as the definition of a vector times a scalar. Note that in all cases only multiplication of a vector by one or more scalars is defined. One can define different types of bilinear vector products. The three basic types are called the scalar product (or inner product), the cross product and the outer product (or tensor product). We shall define each in turn. The definition of the outer product is postponed to chapter 3.
The following definition will become useful:
A unit vector is a vector having unit magnitude. If $\mathbf{a}$ is not a null vector, then $\mathbf{a}/|\mathbf{a}|$ is a unit vector having the same direction as $\mathbf{a}$.
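The laws of Eq. (2.1) are stated for abstract geometric vectors, but once components are introduced (section 2.3) they can be checked numerically. A minimal sketch, assuming numpy, with component triplets standing in for vectors:

```python
import numpy as np

# Component triplets stand in for vectors once a basis is chosen, cf. section 2.3.
a, b, c = np.array([1., 2., 3.]), np.array([-1., 0., 2.]), np.array([4., 1., -2.])
m, n = 2.0, -3.0

assert np.allclose(a + b, b + a)                # commutative law for addition
assert np.allclose(a + (b + c), (a + b) + c)    # associative law for addition
assert np.allclose(m * (n * a), (m * n) * a)    # associative law for multiplication
assert np.allclose((m + n) * a, m * a + n * a)  # distributive law
assert np.allclose(m * (a + b), m * a + m * b)  # distributive law
```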
2.2.1 Scalar product
The scalar product of two vectors, $\mathbf{a}$ and $\mathbf{b}$, is defined by

$$ \mathbf{a} \cdot \mathbf{b} = ab \cos(\theta), \qquad 0 \le \theta \le \pi, \tag{2.2} $$

where $a = |\mathbf{a}|$, $b = |\mathbf{b}|$ and $\theta$ is the angle between the two vectors. Note that $\mathbf{a} \cdot \mathbf{b}$ is a scalar. The following rules apply:

$$ \begin{aligned} \mathbf{a} \cdot \mathbf{b} &= \mathbf{b} \cdot \mathbf{a} && \text{Commutative law for scalar product} \\ (\mathbf{a} + \mathbf{b}) \cdot \mathbf{c} &= \mathbf{a} \cdot \mathbf{c} + \mathbf{b} \cdot \mathbf{c} && \text{Distributive law in 1st argument} \\ \mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) &= \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c} && \text{Distributive law in 2nd argument} \\ m(\mathbf{a} \cdot \mathbf{b}) &= (m\mathbf{a}) \cdot \mathbf{b} = \mathbf{a} \cdot (m\mathbf{b}) && m \text{ is a scalar} \end{aligned} \tag{2.3} $$

The three last formulas make the scalar product bilinear in the two arguments. Note also:

1. $\mathbf{a} \cdot \mathbf{a} = |\mathbf{a}|^2$.
2. If $\mathbf{a} \cdot \mathbf{b} = 0$ and $\mathbf{a}$ and $\mathbf{b}$ are not null vectors, then $\mathbf{a}$ and $\mathbf{b}$ are perpendicular.
3. The projection of a vector $\mathbf{a}$ on $\mathbf{b}$ equals $\mathbf{a} \cdot \mathbf{e}_b$, where $\mathbf{e}_b = \mathbf{b}/|\mathbf{b}|$ is the unit vector in the direction of $\mathbf{b}$.

(2.4)
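Note 3 can be sketched numerically; a minimal example, assuming numpy, with vectors chosen so the projection is easy to read off:

```python
import numpy as np

# Project a on b via the unit vector e_b = b/|b|, cf. note 3 above.
a = np.array([3., 4., 0.])
b = np.array([1., 0., 0.])
e_b = b / np.linalg.norm(b)
projection = np.dot(a, e_b)   # scalar component of a along b
print(projection)             # 3.0 here, since b points along the first axis
```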
2.2.2 Cross product
The cross product, $\mathbf{a} \times \mathbf{b}$, of two vectors $\mathbf{a}$ and $\mathbf{b}$ is a vector defined by

$$ \mathbf{a} \times \mathbf{b} = ab \sin(\theta)\,\mathbf{u}, \qquad 0 \le \theta \le \pi, \tag{2.5} $$

where $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$ and $\mathbf{u}$ is a unit vector perpendicular to the plane of $\mathbf{a}$ and $\mathbf{b}$ such that $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{u}$ form a right-handed system¹.
The following laws are valid:

$$ \begin{aligned} \mathbf{a} \times \mathbf{b} &= -\mathbf{b} \times \mathbf{a} && \text{Cross product is not commutative} \\ (\mathbf{a} + \mathbf{b}) \times \mathbf{c} &= \mathbf{a} \times \mathbf{c} + \mathbf{b} \times \mathbf{c} && \text{Distributive law in 1st argument} \\ \mathbf{a} \times (\mathbf{b} + \mathbf{c}) &= \mathbf{a} \times \mathbf{b} + \mathbf{a} \times \mathbf{c} && \text{Distributive law in 2nd argument} \\ m(\mathbf{a} \times \mathbf{b}) &= (m\mathbf{a}) \times \mathbf{b} = \mathbf{a} \times (m\mathbf{b}) && m \text{ is a scalar} \end{aligned} \tag{2.6} $$

The last three formulas make the cross product bilinear.
Note that:
1. The absolute value of the cross product, $|\mathbf{a} \times \mathbf{b}|$, has a particular geometric meaning: it equals the area of the parallelogram spanned by $\mathbf{a}$ and $\mathbf{b}$.
2. The absolute value of the triple product, $|\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|$, has a particular geometric meaning: it equals the volume of the parallelepiped spanned by $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$.
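The two geometric facts above can be sketched with numpy's built-in cross product; axis-aligned vectors are used here so the area and volume are obvious by inspection:

```python
import numpy as np

a = np.array([2., 0., 0.])
b = np.array([0., 3., 0.])
c = np.array([0., 0., 4.])

area = np.linalg.norm(np.cross(a, b))     # parallelogram spanned by a, b
volume = abs(np.dot(a, np.cross(b, c)))   # parallelepiped spanned by a, b, c
print(area, volume)                       # 6.0 24.0 for these axis-aligned vectors
```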
2.3 Coordinate systems and bases
We emphasize again that all definitions and laws of vector algebra, as introduced above, are invariant under the choice of coordinate system². Once we introduce a way of ascribing positions to points by the choice of a coordinate system, $C$, we obtain a way of representing vectors in terms of triplets.

¹ The definition of the cross product explicitly depends on an arbitrary choice of handedness. A vector whose direction depends on the choice of handedness is called an axial vector or pseudovector, as opposed to ordinary or polar vectors, whose directions are independent of the choice of handedness.
² With the one important exception that we have assumed an absolute notion of handedness in the definition of the cross product.
By assumption, we can choose this coordinate system to be rectangular (R), $C_R$, cf. chapter 1. By selecting four points $O, P_1, P_2$ and $P_3$ having the coordinates

$$ \underline{x}_{C_R}(O) = (0, 0, 0), \quad \underline{x}_{C_R}(P_1) = (1, 0, 0), \quad \underline{x}_{C_R}(P_2) = (0, 1, 0), \quad \underline{x}_{C_R}(P_3) = (0, 0, 1), \tag{2.7} $$

we can construct three vectors

$$ \mathbf{e}_1 = \mathbf{OP}_1, \quad \mathbf{e}_2 = \mathbf{OP}_2, \quad \mathbf{e}_3 = \mathbf{OP}_3. \tag{2.8} $$

Furthermore, from the choice of an origin $O$, a position vector, $\mathbf{r} = \mathbf{OP}$, can be associated with any point $P$ by the formula

$$ \mathbf{r}(P) = \mathbf{r}(\underline{x}_{C_R}(P)) = \sum_i x_i(P)\,\mathbf{e}_i, \qquad C_R \text{ rectangular.} \tag{2.9} $$

The position vector is an improper vector, meaning that it explicitly depends on the position of its initial point $O$. It will therefore change upon a transformation to a coordinate system with a different origin. However, we shall see that all physical laws only involve positional displacements,

$$ \Delta\mathbf{r} = \mathbf{r}(P_2) - \mathbf{r}(P_1) = \mathbf{OP}_2 - \mathbf{OP}_1 = \mathbf{P}_1\mathbf{P}_2, $$

which is a vector invariant under the choice of coordinate system.
Any point has one and only one associated position vector by Eq. (2.9). Thus, the three vectors $R = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ form a basis for the vector space³. In any coordinate system, $C$, a well-defined procedure exists for constructing the associated basis, $G$, of the vector space⁴. However, there are two defining characteristics of a rectangular system:

1. The basis is spatially constant, i.e. the basis vectors are not themselves a function of position.
2. The basis is orthonormal, meaning that the basis vectors are mutually orthogonal, $\mathbf{e}_i \cdot \mathbf{e}_j = 0$ for $i \neq j$, and unitary, $|\mathbf{e}_i| = 1$.

The basis $G = \{\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3\}$ associated with a general coordinate system, $C$, may satisfy neither of the above. There will always be a one-to-one relation between the coordinates and the position vector in geometrical problems of classical physics, but it is only for bases satisfying 1 that this relation takes the form of Eq. (2.9) with the replacement $\mathbf{e}_i \to \mathbf{g}_i$. In these cases, $G$ and the choice of origin, $O$, fully specify the coordinate system, $C = (O, G)$.

³ Recall that a basis is a set of vectors, $G$, that span the vector space and are linearly independent. A set of three vectors will be a basis if and only if the triple product $\mathbf{g}_1 \cdot (\mathbf{g}_2 \times \mathbf{g}_3) \neq 0$.
⁴ The simple procedure of Eqs. (2.7) and (2.8) for obtaining the basis vectors is not generally applicable. We shall illustrate the general method in section 2.7.
In the following we shall retain the notation $E = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ for the particular case of an orthonormal basis⁵. If $\mathbf{e}_i$ is non-constant we write explicitly $\mathbf{e}_i(\underline{x})$. On the other hand, if $E$ is constant the coordinate system is rectangular, and we use the letter $R$ for the basis.
2.3.1 Components and notation
From the definition of any basis, $G$, spatially dependent or not, an arbitrary vector $\mathbf{a}$ has a unique triplet of quantities $\underline{a} = (a_1, a_2, a_3) \in \mathbb{R}^3$ with respect to $G$ such that

$$ \mathbf{a} = \sum_i a_i \mathbf{g}_i. \tag{2.10} $$

These quantities are called the components of $\mathbf{a}$ with respect to $G$. Mathematically speaking, with the choice of a basis we obtain a one-to-one map $\phi_G : V \to \mathbb{R}^3$ between the vector space, $V$, and the triplets of scalars, $\mathbb{R}^3$. It is useful to have a shorthand notation for this map ($\phi_G$) and its inverse ($\phi_G^{-1}$), i.e. for the resolution of a vector into its components and for the vector obtained by expanding a set of components along the basis vectors. We define the following two bracket functions for the two operations, respectively:

$$ [\mathbf{a}]_G \stackrel{\text{def}}{=} \phi_G(\mathbf{a}) = \underline{a} \in \mathbb{R}^3, $$
$$ (\underline{a})_G \stackrel{\text{def}}{=} \phi_G^{-1}(\underline{a}) = \sum_i a_i \mathbf{g}_i = \mathbf{a} \in V. $$

The notation $\underline{a} = [\mathbf{a}]_G$ for the set of components of a vector $\mathbf{a}$ also means that we shall refer to the $i$'th component as $a_i = [\mathbf{a}]_{G,i}$. Furthermore, when it is obvious from the context (or irrelevant) which basis is implied, we will omit the basis subscript and use the notation $\underline{a} = [\mathbf{a}]$ and $a_i = [\mathbf{a}]_i$. For the collection of components, $a_i$, into a triplet, ordinary brackets are commonly used, so

$$ (a_i) \stackrel{\text{def}}{=} (a_1, a_2, a_3) = \underline{a}. $$

In order not to confuse this notation with the vector obtained from a set of components,

$$ (\underline{a})_G \stackrel{\text{def}}{=} \sum_i a_i \mathbf{g}_i, $$

we shall always retain the basis subscript in the latter expression.
Trivially, we have

$$ \mathbf{g}_1 = (1, 0, 0)_G, \quad \mathbf{g}_2 = (0, 1, 0)_G, \quad \mathbf{g}_3 = (0, 0, 1)_G, $$

or, in the more dense, index-based notation⁶,

$$ [\mathbf{g}_i]_j = \delta_{ij}, \qquad \text{wrt. basis } G = \{\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3\}, $$

⁵ This basis need not be spatially independent. In the case of a constant orthonormal basis, i.e. a Cartesian basis, the notation $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ is also often used. However, we shall retain the other notation for algebraic convenience.
⁶ Expressing vector identities in terms of their components is also referred to as tensor notation.
where $\delta_{ij}$ is the Kronecker delta,

$$ \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{2.11} $$

Furthermore, if $G$ is constant in space, the relation between the coordinates and the position vector takes the form of Eq. (2.9):

$$ \mathbf{r}(\underline{x}) = (\underline{x})_G = \sum_i x_i \mathbf{g}_i \quad \Leftrightarrow \quad [\mathbf{r}]_{G,i} = x_i, \qquad G \text{ const.} \tag{2.12} $$
At the risk of being pedantic, let us stress the difference between a vector and its components. For two different bases $G \neq G'$, a vector $\mathbf{a}$ will be represented by the components $\underline{a} \in \mathbb{R}^3$ and $\underline{a}' \in \mathbb{R}^3$ respectively, so that

$$ \mathbf{a} = (\underline{a})_G = (\underline{a}')_{G'} \qquad \text{but} \qquad \underline{a} \neq \underline{a}'. $$

Though representing the same vector, the two sets of components $\underline{a}$ and $\underline{a}'$ differ because they refer to different bases. The distinction between a vector and its components becomes irrelevant only in the case where a single basis is involved.
2.3.2 Triplet algebra
By choosing a basis $G$ for the vector space, the basic algebraic vector laws of addition, subtraction and scalar multiplication, Eq. (2.1), can be expressed in terms of ordinary triplet algebra for the components. For instance, since

$$ \mathbf{a} + \mathbf{b} = \sum_i a_i \mathbf{g}_i + \sum_i b_i \mathbf{g}_i = \sum_i (a_i + b_i)\,\mathbf{g}_i, $$

we have

$$ \mathbf{a} + \mathbf{b} = (a_1, a_2, a_3)_G + (b_1, b_2, b_3)_G = (a_1 + b_1,\; a_2 + b_2,\; a_3 + b_3)_G, $$

or equivalently (for all $i$)

$$ [\mathbf{a} + \mathbf{b}]_i = [\mathbf{a}]_i + [\mathbf{b}]_i. \tag{2.13} $$

The same naturally holds for vector subtraction. Also, since

$$ m\mathbf{a} = m\Big(\sum_i a_i \mathbf{g}_i\Big) = \sum_i m a_i\, \mathbf{g}_i $$

for a scalar quantity $m$, we have

$$ m\mathbf{a} = m(a_1, a_2, a_3)_G = (ma_1, ma_2, ma_3)_G, $$

or

$$ [m\mathbf{a}]_i = m[\mathbf{a}]_i. \tag{2.14} $$
The bilinear scalar and cross products are not as easy to operate with in terms of the components in an arbitrary basis $G$ as vector addition, subtraction and scalar multiplication are. For instance,

$$ \mathbf{a} \cdot \mathbf{b} = (a_1, a_2, a_3)_G \cdot (b_1, b_2, b_3)_G = \Big(\sum_i a_i \mathbf{g}_i\Big) \cdot \Big(\sum_j b_j \mathbf{g}_j\Big) = \sum_{ij} a_i b_j\, \mathbf{g}_i \cdot \mathbf{g}_j = \sum_{ij} a_i b_j\, g_{ij}, \qquad g_{ij} = \mathbf{g}_i \cdot \mathbf{g}_j. \tag{2.15} $$

The quantities $g_{ij}$ are called the metric coefficients. If $G$ is spatially dependent, then the $g_{ij} = g_{ij}(\underline{x})$ will be functions of the coordinates. In fact, the functional form of the metric coefficients fully specifies the type of geometry involved (Euclidean or not), and they are the starting point for extending vector calculus to arbitrary geometries. Here it suffices to say that a Cartesian coordinate system uniquely implies that the metric coefficients are constant and satisfy $g_{ij} = \delta_{ij}$.
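Eq. (2.15) can be sketched numerically; the basis below is a made-up non-orthonormal example, chosen only to illustrate that contracting the components with $g_{ij}$ reproduces the ordinary dot product:

```python
import numpy as np

# An illustrative (made-up) non-orthonormal basis G:
g1, g2, g3 = np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([0., 0., 2.])
G = np.array([g1, g2, g3])

g = G @ G.T                 # metric coefficients g_ij = g_i . g_j

a_comp = np.array([1., 2., 0.])   # components of a with respect to G
b_comp = np.array([0., 1., 3.])   # components of b with respect to G

# a . b via the metric, sum_ij a_i b_j g_ij ...
dot_via_metric = a_comp @ g @ b_comp
# ... equals the ordinary dot product of the expanded vectors sum_i a_i g_i:
dot_direct = np.dot(a_comp @ G, b_comp @ G)
assert np.isclose(dot_via_metric, dot_direct)
```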
2.4 Orthonormal bases
An orthonormal basis, $E$, satisfies by definition

$$ \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}. \tag{2.16} $$

This relation is particularly handy for obtaining the components of a vector $\mathbf{a}$. Indeed, we have

$$ \mathbf{a} = \sum_i a_i \mathbf{e}_i \quad \Rightarrow \quad a_i = \mathbf{a} \cdot \mathbf{e}_i, \qquad E \text{ orthonormal.} \tag{2.17} $$

So we obtain the $i$'th component of a vector $\mathbf{a}$ by taking the scalar product of $\mathbf{a}$ with the $i$'th basis vector.
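Eq. (2.17) can be sketched numerically: build a vector from known components in a rotated orthonormal basis, then recover the components via $\mathbf{a} \cdot \mathbf{e}_i$. The rotation angle is arbitrary.

```python
import numpy as np

theta = 0.3                                   # arbitrary rotation angle
e1 = np.array([np.cos(theta), np.sin(theta), 0.])
e2 = np.array([-np.sin(theta), np.cos(theta), 0.])
e3 = np.array([0., 0., 1.])

a = 2.0 * e1 - 1.0 * e2 + 0.5 * e3            # built from known components
components = [np.dot(a, e) for e in (e1, e2, e3)]
assert np.allclose(components, [2.0, -1.0, 0.5])   # recovered via a . e_i
```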
2.4.1 Scalar product
In an orthonormal basis the scalar product, Eq. (2.15), takes a simple form in terms of the components:

$$ \mathbf{a} \cdot \mathbf{b} = \Big(\sum_i a_i \mathbf{e}_i\Big) \cdot \Big(\sum_j b_j \mathbf{e}_j\Big) = \sum_i a_i b_i. \tag{2.18} $$

Furthermore, if the orthonormal basis is spatially independent, the displacement vector, $\mathbf{PQ}$, between two points $P$ and $Q$ with coordinates $\underline{p} = (p_1, p_2, p_3)$ and $\underline{q} = (q_1, q_2, q_3)$ respectively, will be

$$ \mathbf{PQ} = \mathbf{r}(Q) - \mathbf{r}(P) = \sum_i (q_i - p_i)\,\mathbf{e}_i. $$

The distance between $P$ and $Q$ is given by the length $|\mathbf{PQ}|$, and

$$ |\mathbf{PQ}|^2 = \sum_i (q_i - p_i)^2 \quad \Rightarrow \quad |\mathbf{PQ}| = \sqrt{\sum_i (q_i - p_i)^2}. \tag{2.19} $$
In other words, we recover the law of Pythagoras, cf. Eq. (1.3), which is reassuring, since a constant orthonormal basis is equivalent to the choice of a rectangular coordinate system.
2.4.2 Cross product
Let $E$ be an orthonormal, right-handed basis, so $\mathbf{e}_3 = \mathbf{e}_1 \times \mathbf{e}_2$. Using the laws of the cross product, Eq. (2.6), one can show that

$$ \mathbf{a} \times \mathbf{b} = \det \begin{pmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix} = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1)_E, \tag{2.20} $$

where $\det()$ denotes the determinant of the array considered as a matrix. An alternative expression, defined only in terms of the components, is

$$ [\mathbf{a} \times \mathbf{b}]_i = \sum_{jk} \epsilon_{ijk}\, a_j b_k. \tag{2.21} $$

The symbol $\epsilon_{ijk}$, with the three indices, is the Levi-Civita symbol. It consists of $3 \times 3 \times 3 = 27$ real numbers given by

$$ \begin{aligned} \epsilon_{123} &= \epsilon_{231} = \epsilon_{312} = +1, \\ \epsilon_{321} &= \epsilon_{132} = \epsilon_{213} = -1, \\ \epsilon_{ijk} &= 0 \quad \text{otherwise.} \end{aligned} \tag{2.22} $$

In words, $\epsilon_{ijk}$ is antisymmetric in all indices. From this it follows that the orthonormal basis vectors satisfy

$$ \mathbf{e}_i \times \mathbf{e}_j = \sum_k \epsilon_{ijk}\, \mathbf{e}_k, \tag{2.23} $$

i.e. $\mathbf{e}_3 = \mathbf{e}_1 \times \mathbf{e}_2$, $\mathbf{e}_1 = \mathbf{e}_2 \times \mathbf{e}_3$, and so on.
Applying Eqs. (2.3) and (2.6), one can show that the triple product $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$ in an orthonormal, right-handed basis is given by

$$ \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \sum_{ijk} \epsilon_{ijk}\, a_i b_j c_k = \det \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}. \tag{2.24} $$
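Eqs. (2.21) and (2.24) can be sketched numerically by building $\epsilon_{ijk}$ explicitly and contracting it with components; the checks against numpy's own `cross` and `det` confirm the index bookkeeping:

```python
import numpy as np
from itertools import permutations

# Build eps_ijk from Eq. (2.22) (indices 0,1,2 stand for 1,2,3):
eps = np.zeros((3, 3, 3))
for (i, j, k) in permutations(range(3)):
    # +1 for cyclic permutations of (1,2,3), -1 for anticyclic ones
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])
c = np.array([-1., 0., 2.])

cross = np.einsum('ijk,j,k->i', eps, a, b)        # Eq. (2.21)
assert np.allclose(cross, np.cross(a, b))

triple = np.einsum('ijk,i,j,k->', eps, a, b, c)   # Eq. (2.24)
assert np.isclose(triple, np.linalg.det(np.array([a, b, c])))
```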
The problem of handedness
Let us emphasize that Eqs. (2.20), (2.21), (2.23) and (2.24) would have been the same had we defined the cross product by a left-hand rule and expressed the components in a left-handed coordinate system. Indeed, the definition of the Levi-Civita symbol, Eq. (2.22), does not itself depend on the handedness of the coordinate system. Since it is in fact not possible to give an absolute prescription of how to construct a right-handed (or left-handed) coordinate system, it is more common in modern vector analysis to define the cross product through the determinant, Eq. (2.20), or equivalently through the Levi-Civita symbol, Eq. (2.21). In replacing the geometric definition, Eq. (2.5), which presumes an absolute notion of handedness, with an algebraic one, the direction of the cross product is simply determined by whatever convention of handedness has been adopted with the orthonormal coordinate system in the first place.
Physically, the arbitrariness in the choice of handedness implies that the orientation of the cross product of two ordinary vectors is not an objective physical quantity, in contrast to vectors representing displacements, velocities, forces etc. One therefore distinguishes between proper or polar vectors, whose direction is independent of the choice of handedness, and axial vectors or pseudovectors, whose direction depends on the choice of handedness. The distinction between the two types of vectors becomes important when one considers transformations between left- and right-handed coordinate systems.
Dimensionality
Of all the vector operations, the only explicit reference to the dimension of physical space appears in the cross product. This can be seen from the definition of the $\epsilon$-symbol, which explicitly has three indices running from 1 to 3. All other vector operations are valid in arbitrary dimensions.
The generalization of the cross product to any dimension $d$ is obtained by constructing a Levi-Civita symbol having $d$ indices, each running from 1 to $d$, and being totally antisymmetric. For instance, for $d = 2$:

$$ \epsilon_{12} = 1, \quad \epsilon_{21} = -1, \quad \epsilon_{11} = \epsilon_{22} = 0. \tag{2.25} $$

Here we get the "cross product", $\hat{\mathbf{a}}$, of a vector $\mathbf{a} = (a_1, a_2)_E$ by

$$ [\hat{\mathbf{a}}]_j = \sum_i \epsilon_{ji}\, a_i, $$

or, in doublet notation,

$$ \hat{\mathbf{a}} = (\hat{a}_1, \hat{a}_2)_E = (a_2, -a_1)_E, $$

which is the familiar expression in $d = 2$ for obtaining a vector orthogonal to a given vector. Generally, in $d > 1$ dimensions the cross product takes $d - 1$ vectors as arguments.
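The $d = 2$ case above amounts to contracting a $2 \times 2$ antisymmetric matrix with one vector; a minimal sketch:

```python
import numpy as np

eps2 = np.array([[0., 1.],
                 [-1., 0.]])              # eps_12 = 1, eps_21 = -1, cf. Eq. (2.25)

a = np.array([3., 5.])
a_hat = eps2 @ a                          # [a_hat]_j = sum_i eps_ji a_i
assert np.allclose(a_hat, [5., -3.])      # (a_2, -a_1)
assert np.isclose(np.dot(a, a_hat), 0.0)  # orthogonal to a
```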
2.5 Ordinary derivatives and integrals of vectors
Let $\mathbf{c}(u)$ be a vector depending on a single scalar variable $u$. In this section we will consider how to define derivatives and integrals of $\mathbf{c}$ with respect to $u$.
2.5.1 Derivatives
The definition of the ordinary derivative of a function can be extended directly to vector functions of a scalar:

$$ \frac{d\mathbf{c}(u)}{du} = \lim_{\Delta u \to 0} \frac{\mathbf{c}(u + \Delta u) - \mathbf{c}(u)}{\Delta u}. \tag{2.26} $$

Note that the definition involves subtracting two vectors, $\Delta\mathbf{c} = \mathbf{c}(u + \Delta u) - \mathbf{c}(u)$, which is a vector. Therefore, $\frac{d\mathbf{c}(u)}{du}$ will also be a vector, provided the limit, Eq. (2.26), exists. Eq. (2.26) is defined independently of coordinate systems. In an orthonormal basis independent of $u$,

$$ \frac{d}{du}\mathbf{a} = \sum_i \frac{da_i}{du}\,\mathbf{e}_i, \tag{2.27} $$

whence the derivative of a vector reduces to ordinary derivatives of the scalar functions $a_i$.
If $u$ represents the time, $u = t$, and $\mathbf{c}(t)$ is the position vector $\mathbf{r}(t)$ of a particle with respect to some coordinate system, then $\frac{d\mathbf{r}(t)}{dt}$ represents the velocity $\mathbf{v}(t)$. The velocity is a tangent vector to the space curve mapped out by $\mathbf{r}(t)$. Even though the position vector itself is not a proper vector, due to its dependence on the choice of origin of the coordinate system, cf. section 2.3, the velocity is a proper vector, since it is defined through the positional displacement, $\Delta\mathbf{r}$, which is a proper vector. Consequently, the acceleration $\mathbf{a}(t) \stackrel{\text{def}}{=} \frac{d\mathbf{v}}{dt}$ is also a proper vector.
Ordinary differentiation of a vector coexists nicely with the basic vector operations, e.g.

$$ \frac{d}{du}(\mathbf{a} + \mathbf{b}) = \frac{d\mathbf{a}}{du} + \frac{d\mathbf{b}}{du} \tag{2.28} $$
$$ \frac{d}{du}(\mathbf{a} \cdot \mathbf{b}) = \frac{d\mathbf{a}}{du} \cdot \mathbf{b} + \mathbf{a} \cdot \frac{d\mathbf{b}}{du} \tag{2.29} $$
$$ \frac{d}{du}(\mathbf{a} \times \mathbf{b}) = \frac{d\mathbf{a}}{du} \times \mathbf{b} + \mathbf{a} \times \frac{d\mathbf{b}}{du} \tag{2.30} $$
$$ \frac{d}{du}(\phi\,\mathbf{a}) = \frac{d\phi}{du}\,\mathbf{a} + \phi\,\frac{d\mathbf{a}}{du} \tag{2.31} $$

where $\phi(u)$ is an ordinary scalar function. The above formulas can of course be expressed in terms of the vector components, $a_i$ and $b_i$, using Eq. (2.27).
2.5.2 Integrals
The definition of the ordinary integral of a vector depending on a single scalar variable, $\mathbf{c}(u)$, also follows that of ordinary calculus. With respect to a basis $E$ independent of $u$, the indefinite integral of $\mathbf{c}(u)$ is defined by

$$ \int \mathbf{c}(u)\,du = \sum_i \Big(\int c_i(u)\,du\Big)\mathbf{e}_i, $$

where $\int c_i(u)\,du$ is the indefinite integral of an ordinary scalar function. If $\mathbf{s}(u)$ is a vector satisfying $\mathbf{c}(u) = \frac{d}{du}\mathbf{s}(u)$, then

$$ \int \mathbf{c}(u)\,du = \int \frac{d}{du}\big(\mathbf{s}(u)\big)\,du = \mathbf{s}(u) + \mathbf{k}, $$

where $\mathbf{k}$ is an arbitrary constant vector independent of $u$. The definite integral between two limits $u = u_0$ and $u = u_1$ can in this case be written

$$ \int_{u_0}^{u_1} \mathbf{c}(u)\,du = \int_{u_0}^{u_1} \frac{d}{du}\big(\mathbf{s}(u)\big)\,du = \big[\mathbf{s}(u) + \mathbf{k}\big]_{u_0}^{u_1} = \mathbf{s}(u_1) - \mathbf{s}(u_0). $$

This integral can also be defined as the limit of a sum in a manner analogous to that of elementary integral calculus.
2.6 Fields
2.6.1 Definition
In continuous systems the basic physical variables are distributed over space. A function of space is known as a field. Let an arbitrary coordinate system be given.
Scalar field. If to each position $\underline{x} = (x_1, x_2, x_3)$ of a region in space there corresponds a number or scalar $\phi(x_1, x_2, x_3)$, then $\phi$ is called a scalar function of position, or scalar field. Physical examples of scalar fields are the mass or charge density distribution of an object, and the temperature or pressure distribution at a given time in a fluid.
Vector field. If to each position $\underline{x} = (x_1, x_2, x_3)$ of a region in space there corresponds a vector $\mathbf{a}(x_1, x_2, x_3)$, then $\mathbf{a}$ is called a vector function of position, or vector field. Physical examples of vector fields are the gravitational field around the earth, the velocity field of a moving fluid, and the electromagnetic field of systems of charged particles.
In the following we shall assume the choice of a rectangular coordinate system, $C = (O, R)$. Introducing the position vector $\mathbf{r}(\underline{x}) = \sum_i x_i \mathbf{e}_i$, the expressions $\phi(\mathbf{r})$ and $\mathbf{a}(\mathbf{r})$ are taken to have the same meaning as $\phi(\underline{x})$ and $\mathbf{a}(\underline{x})$, respectively.
2.6.2 Partial derivatives
Let $\mathbf{a}$ be a vector field, $\mathbf{a}(\mathbf{r}) = \mathbf{a}(x_1, x_2, x_3)$. We can define the partial derivative $\frac{\partial \mathbf{a}}{\partial x_i}$ in the same fashion as in Eq. (2.26). Here we will use the shorthand notation

$$ \partial_i = \frac{\partial}{\partial x_i}, \qquad \partial^2_{ij} = \frac{\partial^2}{\partial x_i \partial x_j}. $$

In complete analogy with the usual definition of partial derivatives of a scalar function, the partial derivative of a vector field with respect to $x_i$ ($i = 1, 2, 3$) is

$$ (\partial_i \mathbf{a})(\mathbf{r}) = \lim_{\Delta x_i \to 0} \frac{\mathbf{a}(\mathbf{r} + \Delta x_i \mathbf{e}_i) - \mathbf{a}(\mathbf{r})}{\Delta x_i}. \tag{2.32} $$
Equation (2.32) expresses how the vector field changes in the spatial direction of e_i. By the same arguments as in the previous section, ∂_i a will for each i also be a vector field. Note that in general, ∂_j a ≠ ∂_i a when j ≠ i.

Rules for partial differentiation of vectors are similar to those used in elementary calculus for scalar functions. Thus if a and b are functions of x then

∂_i(a · b) = a · (∂_i b) + (∂_i a) · b
∂_i(a × b) = (∂_i a) × b + a × (∂_i b)
∂²_ji(a · b) = ∂_j(∂_i(a · b)) = ∂_j((∂_i a) · b + a · (∂_i b))
            = a · (∂²_ji b) + (∂_j a) · (∂_i b) + (∂_i a) · (∂_j b) + (∂²_ji a) · b

We can verify these expressions by resolving the vector fields into R = {e_1, e_2, e_3}. Then all differentiation is reduced to ordinary differentiation of the scalar functions a_k(x) and b_k(x). Notice that for the position vector

∂_i r = e_i.    (2.33)
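The product rule for partial differentiation of vector fields can be checked numerically. The following sketch (not part of the original text) uses central finite differences; the test fields a(x) and b(x) are arbitrary illustrative choices.

```python
# Sketch: verify the product rule ∂_i(a·b) = a·(∂_i b) + (∂_i a)·b
# with central finite differences. The fields below are arbitrary
# illustrative choices, not examples from the text.

def a(x):  # a sample vector field
    return [x[0] * x[1], x[1] ** 2, x[2]]

def b(x):  # another sample vector field
    return [x[2], x[0] + x[1], x[0] * x[2]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def partial(f, x, i, h=1e-6):
    """Central-difference approximation of the i'th partial derivative."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    fp, fm = f(xp), f(xm)
    if isinstance(fp, list):  # vector field
        return [(p - m) / (2 * h) for p, m in zip(fp, fm)]
    return (fp - fm) / (2 * h)  # scalar field

x0 = [0.3, 0.7, 1.1]
i = 1  # differentiate with respect to x_2

lhs = partial(lambda x: dot(a(x), b(x)), x0, i)
rhs = dot(a(x0), partial(b, x0, i)) + dot(partial(a, x0, i), b(x0))
assert abs(lhs - rhs) < 1e-6
```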
If the vector field is resolved into a non-cartesian coordinate system C′,

a(x′) = ∑_j a_j(x′) g_j(x′)    (2.34)

then one must remember to include the spatial derivatives of the basis vectors as well:

∂a/∂x′_i (x′) = ∑_j [ (∂a_j/∂x′_i)(x′) g_j(x′) + a_j(x′) (∂g_j/∂x′_i)(x′) ]    (2.35)
2.6.3 Differentials

Since partial derivatives of a vector field follow the same rules as those used in elementary calculus for scalar functions, the same will be true for vector differentials. For example, for C = (O, R),

if a(x) = ∑_i a_i(x) e_i, then da = ∑_i da_i(x) e_i    (2.36)

The change of the vector components due to an infinitesimal change of coordinates dx_j is obtained by the chain rule:

da_i(x) = ∑_j (∂_j a_i)(x) dx_j    (2.37)

da(x) = ∑_i ( ∑_j (∂_j a_i)(x) dx_j ) e_i    (2.38)

or simply

da = ∑_i da_i e_i.    (2.39)
The position vector r = ∑_i x_i e_i is a special vector field for which Eq. (2.38) implies

dr = ∑_i dx_i e_i.    (2.40)

The square (ds)² of the infinitesimal distance moved is then given by

(ds)² = dr · dr = ∑_i dx_i²

If r(u) is a space curve parametrized by u then

(ds/du)² = (dr/du) · (dr/du).

Therefore, the arc length between two points on the curve r(u), given by u = u_1 and u = u_2, is

s = ∫_{u_1}^{u_2} √( (dr/du) · (dr/du) ) du.

In general, we have (irrespective of the choice of basis)

d(a · b) = da · b + a · db
d(a × b) = da × b + a × db
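The arc-length formula can be checked numerically. The helix below is an illustrative choice, not an example from the text: for r(u) = (cos u, sin u, u) one has (dr/du)·(dr/du) = 2, so the length from u = 0 to u = 2π is 2π√2.

```python
import math

# Sketch: arc length s = ∫ sqrt(dr/du · dr/du) du for the helix
# r(u) = (cos u, sin u, u), evaluated with a midpoint Riemann sum.
# The curve is an arbitrary illustrative choice.

def r(u):
    return (math.cos(u), math.sin(u), u)

def drdu(u, h=1e-6):
    rp, rm = r(u + h), r(u - h)
    return [(p - m) / (2 * h) for p, m in zip(rp, rm)]

N = 1000
u0, u1 = 0.0, 2 * math.pi
du = (u1 - u0) / N
s = sum(
    math.sqrt(sum(c * c for c in drdu(u0 + (k + 0.5) * du))) * du
    for k in range(N)
)
assert abs(s - 2 * math.pi * math.sqrt(2)) < 1e-6
```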
2.6.4 Gradient, divergence and curl

Let a rectangular coordinate system be given, C = (O, R).

Nabla operator ∇

The vector differential operator del or nabla, written ∇, is defined by

∇(·) = ∑_i e_i ∂_i(·)    (2.41)

where · represents a scalar or –as we shall see later– a vector field. Notice that the i'th component of the operator is given by

∇_i = e_i · ∇ = ∂_i.

This vector operator possesses properties analogous to those of ordinary vectors, which is not obvious at first sight, since its operation is defined relative to the choice of the coordinate system, in contrast to the vector algebra defined hitherto. We shall postpone the demonstration of its vectorial nature until section 4.5, where we will have become more familiar with tensor calculus.

The gradient of a scalar field φ(r) is defined by inserting a scalar field in the above definition:

(∇φ)(r) = ∑_i (∂_i φ)(r) e_i
The gradient of a scalar field has some interesting geometrical properties. Consider the change of φ in some particular direction. For an infinitesimal vector displacement dr, forming its scalar product with ∇φ we have

(∇φ)(r) · dr = ( ∑_i (∂_i φ)(r) e_i ) · ( ∑_j dx_j e_j ) = ∑_i (∂_i φ)(r) dx_i = dφ,

which is the infinitesimal change in φ in going from position r to r + dr. In particular, if r depends on some parameter t such that r(t) defines a space curve, then the total derivative of φ along that curve is simply

dφ/dt = ∇φ · (dr/dt)    (2.42)

Setting v = dr/dt we obtain from the definition of the scalar product

dφ/dt = |∇φ| |v| cos(θ)    (2.43)

where θ is the angle between ∇φ and v. This also shows that the largest increase of the scalar field is obtained in the direction of ∇φ and that, if v is a unit vector, the rate of change in this case equals |∇φ|.
We can extend the above analysis to find the rate of change of a vector field in a particular direction v = dr/dt:

d/dt a(r(t)) = ∑_i (da_i(r(t))/dt) e_i
             = ∑_i ( ∑_j (∂_j a_i)(r) v_j ) e_i
             = ∑_j v_j ∂_j ( ∑_i a_i(r) e_i )
             = ∑_j v_j ∂_j a(r)
             = (v · ∇) a(r)    (2.44)

This shows that the operator

v · ∇ = ∑_i v_i ∂_i    (2.45)

gives the rate of change in the direction of v of the quantity (vector or scalar) on which it acts.
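The chain-rule identity dφ/dt = ∇φ · (dr/dt) behind this operator is easy to verify numerically. In the sketch below (not from the text), the scalar field φ and the space curve r(t) are arbitrary illustrative choices.

```python
# Sketch: check dφ/dt = ∇φ · dr/dt along a space curve, using central
# finite differences. The field φ and curve r(t) are arbitrary choices.

def phi(x):
    return x[0] ** 2 + x[1] * x[2]

def r(t):
    return [t, t ** 2, 1.0]

def grad(f, x, h=1e-6):
    g = []
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

t0, h = 0.5, 1e-6
v = [(p - m) / (2 * h) for p, m in zip(r(t0 + h), r(t0 - h))]  # dr/dt
lhs = (phi(r(t0 + h)) - phi(r(t0 - h))) / (2 * h)              # dφ/dt
rhs = sum(gi * vi for gi, vi in zip(grad(phi, r(t0)), v))      # ∇φ · v
assert abs(lhs - rhs) < 1e-5
```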
A second interesting geometrical property of ∇φ may be found by considering the surface defined by φ(r) = c, where c is some constant. If r(t) is a space curve in the surface, we clearly have dφ/dt = 0, and consequently, for any tangent vector v in the surface, ∇φ · v = 0, according to Eq. (2.42). In other words, ∇φ is a vector normal to the surface φ(r) = c at every point.
Divergence of a vector field

The divergence of a vector field a(r) is defined by

(div a)(r) = (∇ · a)(r) = ∑_i (∂_i a_i)(r)    (2.46)
The full physical and geometrical meaning of the divergence is discussed in the next section. Clearly (∇ · a)(r) is a scalar field. Now if some vector field a is itself derived from a scalar field via a = ∇φ, then ∇ · a has the form ∇ · ∇φ or, as it is usually written, ∇²φ, where

∇² = ∑_i ∂²_ii    ( = ∑_i ∂²/∂x_i² )

∇²φ is called the laplacian of φ and is typically encountered in electrostatic problems or in diffusion equations of physical scalar fields, such as a temperature or density distribution. The laplacian can also act on a vector through its components:

∇²a = ∑_i ∂²_ii a = ∑_j ( ∑_i ∂²_ii a_j ) e_j
Curl of a vector field

The curl of a vector field a(r) is defined by

curl(a) = ∇ × a = (∂_2 a_3 − ∂_3 a_2) e_1 + (∂_3 a_1 − ∂_1 a_3) e_2 + (∂_1 a_2 − ∂_2 a_1) e_3,
In analogy to the definition of the cross product between two ordinary vectors, we can express the definition symbolically

∇ × a = det ( e_1  e_2  e_3
              ∂_1  ∂_2  ∂_3
              a_1  a_2  a_3 ),

where it is understood that, on expanding the determinant, the partial derivatives act on the components of a. The tensor notation of this expression (for the i'th component) is even more compact:

[∇ × a]_i = ∑_{jk} ε_{ijk} ∂_j a_k    (2.47)

Clearly, ∇ × a is itself a vector field.
For a vector field v(x) describing the local velocity at any point in a fluid, ∇ × v is a measure of the angular velocity of the fluid in the neighbourhood of that point. If a small paddle wheel were placed at various points in the fluid, it would tend to rotate in regions where ∇ × v ≠ 0, while it would not rotate in regions where ∇ × v = 0.

Another insight into the physical interpretation of the curl operator is gained by considering the vector field v describing the velocity at any point in a rigid body rotating about some axis with angular velocity ω. If r is the position vector of the point with respect to some origin on the axis of rotation, then the velocity field is given by v(x) = ω × r(x). The curl of this vector field is then found to be ∇ × v = 2ω.
There is a myriad of identities between various combinations of the three important vector operators grad, div and curl. The identities involving cross products are more easily proven using tensor calculus, which is postponed to chapter 4. Here we simply list the most important identities:

∇(φ + ψ) = ∇φ + ∇ψ
∇ · (a + b) = ∇ · a + ∇ · b
∇ × (a + b) = ∇ × a + ∇ × b
∇ · (φ a) = φ ∇ · a + a · ∇φ
∇ · (a × b) = b · (∇ × a) − a · (∇ × b)
∇ × (∇ × a) = ∇(∇ · a) − ∇²a
∇ · (∇ × a) = 0
∇ × ∇φ = 0
(2.48)

Here, φ and ψ are scalar fields and a and b are vector fields. The last identity has an important meaning. If a is derived from the gradient of some scalar field, a = ∇φ, then the identity shows that a is necessarily irrotational, ∇ × a = 0.
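Two of the listed identities, ∇ × ∇φ = 0 and ∇ · (∇ × a) = 0, can be spot-checked numerically before their tensor-calculus proofs in chapter 4. The sketch below (not from the text) uses arbitrary smooth test fields and central finite differences.

```python
# Sketch: numerically verify ∇×∇φ = 0 and ∇·(∇×a) = 0 at a sample
# point. The fields φ and a are arbitrary smooth test fields.

def phi(x):
    return x[0] * x[1] ** 2 + x[2] ** 3

def a(x):
    return [x[1] * x[2], x[0] ** 2, x[0] + x[2] ** 2]

def partial(f, x, i, h=1e-4):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def grad(f, x):
    return [partial(f, x, i) for i in range(3)]

def curl(F, x):
    comp = lambda i: lambda y: F(y)[i]
    return [partial(comp(2), x, 1) - partial(comp(1), x, 2),
            partial(comp(0), x, 2) - partial(comp(2), x, 0),
            partial(comp(1), x, 0) - partial(comp(0), x, 1)]

def div(F, x):
    return sum(partial(lambda y, i=i: F(y)[i], x, i) for i in range(3))

x0 = [0.4, -0.2, 0.9]
curl_grad = curl(lambda y: grad(phi, y), x0)
div_curl = div(lambda y: curl(a, y), x0)
assert all(abs(c) < 1e-6 for c in curl_grad)
assert abs(div_curl) < 1e-6
```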
2.6.5 Line, surface and volume integrals

In the previous sections we have discussed continuously varying scalar and vector fields and the action of various differential operators on them. Often the need arises to consider the integration of field quantities along lines, over surfaces and throughout volumes. In general the integrand may be scalar or vector, but the evaluation of such integrals involves their reduction to one or more scalar integrals, which are then evaluated. This procedure is equivalent to the way one in practice operates with differential operators on scalar and vector fields.
Line integrals

In general, one may encounter line integrals of the forms

∫_C φ dr,    ∫_C a · dr,    ∫_C a × dr,    (2.49)

where φ is a scalar field, a is a vector field and C is a prescribed curve in space joining two given points A and B. The three integrals themselves are respectively vector, scalar and vector in nature.
The formal definition of a line integral closely follows that of ordinary integrals and can be considered as the limit of a sum. We may divide the path joining the points A and B into N small line elements Δr_p, p = 1, ..., N. If x_p = (x_{1,p}, x_{2,p}, x_{3,p}) is any point on the line element Δr_p, then the second type of line integral in Eq. (2.49), for example, is defined as

∫_C a · dr =def lim_{N→∞} ∑_{p=1}^{N} a(x_p) · Δr_p,

where all |Δr_p| → 0 as N → ∞.
Each of the line integrals in Eq. (2.49) is evaluated over some curve C that may be either open (A and B being distinct points) or closed (A and B coincide). In the latter case one writes ∮_C to indicate this. The curve C is a space curve, c.f. section 2.5.1, most often defined in parametric form. In a cartesian coordinate system C = (O, R), it becomes

C : r(u) = ∑_i x_i(u) e_i,  u_0 ≤ u ≤ u_1,  with [r(u_0)] = x(A), [r(u_1)] = x(B)    (2.50)
The method of evaluating a line integral is to reduce it to a set of scalar integrals. It is usual to work in cartesian coordinates, in which case dr = ∑_i e_i dx_i. The three integrals in Eq. (2.49) then become

∫_C φ(r) dr = ∫_C φ(r) (∑_i e_i dx_i) = ∑_i ( ∫_C φ(r) dx_i ) e_i

∫_C a(r) · dr = ∫_C (∑_i a_i(r) e_i) · (∑_j e_j dx_j) = ∑_i ∫_C a_i(r) dx_i    (2.51)
and

∫_C a(r) × dr = ∫_C (∑_i a_i(r) e_i) × (∑_j e_j dx_j)
  = ( ∫_C a_2(r) dx_3 − ∫_C a_3(r) dx_2 ) e_1
  + ( ∫_C a_3(r) dx_1 − ∫_C a_1(r) dx_3 ) e_2
  + ( ∫_C a_1(r) dx_2 − ∫_C a_2(r) dx_1 ) e_3    (2.52)
Note that in the above we have used relations of the form

∫_C a_i e_j dx_j = ( ∫_C a_i dx_j ) e_j,

which is allowable since the cartesian basis is independent of the coordinates. If E = E(x), then the basis vectors could not be factorised out of the integral.
The final scalar integrals in Eq. (2.51) and Eq. (2.52) can be solved using the parametrized form for C, Eq. (2.50). For instance

∫_C a_2(r) dx_3 = ∫_{u_0}^{u_1} a_2(x_1(u), x_2(u), x_3(u)) (dx_3/du) du.
In general, a line integral between two points will depend on the specific path C defining the integral. However, for line integrals of the form ∫_C a · dr there exists a class of vector fields for which the line integral between two points is independent of the path taken. Such vector fields are called conservative. A vector field a that has continuous partial derivatives in a simply connected region R⁷ is conservative if, and only if, any of the following is true:

1. The integral ∫_A^B a · dr, where A, B ∈ R, is independent of the path from A to B. Hence ∮_C a · dr = 0 around any closed loop in R.
2. There exists a scalar field φ in R such that a = ∇φ.
3. ∇ × a = 0.
4. a · dr is an exact differential.

⁷ A simply connected region R is a region in space for which any closed path can be continuously shrunk to a point, i.e. R has no "holes".
We will not demonstrate the equivalence of these statements. If a vector field is conservative we can write

a · dr = ∇φ · dr = dφ

and

∫_A^B a · dr = ∫_A^B ∇φ · dr = ∫_A^B dφ = φ(B) − φ(A).

This situation is encountered whenever a = f represents the force f derived from a potential (scalar) field φ, such as the potential energy in a gravitational field, the potential energy in an elastic spring, the voltage in an electrical circuit, etc.
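Path independence for a conservative field can be demonstrated numerically. In the sketch below (not from the text), the potential φ(x) = x_1 x_2 + x_3², its gradient field a = (x_2, x_1, 2x_3), and the path r(u) = (u, u², u³) are all arbitrary illustrative choices; the line integral computed as a sum of a · Δr must equal φ(B) − φ(A).

```python
# Sketch: for a conservative field a = ∇φ, the line integral along any
# path from A to B equals φ(B) − φ(A). All fields and the path here
# are arbitrary illustrative choices.

def a(x):
    return [x[1], x[0], 2 * x[2]]  # a = ∇φ for φ below

def phi(x):
    return x[0] * x[1] + x[2] ** 2

def r(u):
    return [u, u ** 2, u ** 3]

N = 10_000
total = 0.0
for k in range(N):
    u0, u1 = k / N, (k + 1) / N
    mid = a(r(0.5 * (u0 + u1)))                 # field at midpoint
    dr = [p - m for p, m in zip(r(u1), r(u0))]  # line element Δr
    total += sum(ai * di for ai, di in zip(mid, dr))

A, B = r(0.0), r(1.0)
assert abs(total - (phi(B) - phi(A))) < 1e-6
```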
Surface integrals

As with line integrals, integrals over surfaces can involve vector and scalar fields and, equally, result in either a vector or a scalar. We shall focus on surface integrals of the form

∫_S a · dS,    (2.53)

where a is a vector field and S is a surface in space which may be either open or closed. Following the notation of line integrals, for surface integrals over a closed surface ∫_S is replaced by ∮_S. The vector differential dS in Eq. (2.53) represents a vector area element of the surface S. It may also be written dS = n dS, where n is a unit normal to the surface at the position of the element and dS is the scalar area of the element. The convention for the direction of the normal n to a surface depends on whether the surface is open or closed. For a closed surface the direction of n is taken to be outwards from the enclosed volume. An open surface spans some perimeter curve C. The direction of n is then given by the right-hand sense with respect to the direction in which the perimeter is traversed, i.e. it follows the right-hand screw rule.
The formal definition of a surface integral is very similar to that of a line integral. One divides the surface into N elements of area ΔS_p, p = 1, ..., N, each with a unit normal n_p. If x_p is any point in ΔS_p, then

∫_S a · dS = lim_{N→∞} ∑_{p=1}^{N} a(x_p) · n_p ΔS_p,

where it is required that ΔS_p → 0 as N → ∞.
A standard way of evaluating surface integrals is to use cartesian coordinates and project the surface onto one of the basis planes. For instance, suppose a surface S has projection R onto the 12-plane (xy-plane), so that an element of surface area dS projects onto the area element dA. Then

dA = |e_3 · dS| = |e_3 · n dS| = |e_3 · n| dS.

Since in the 12-plane dA = dx_1 dx_2, we have the expression for the surface integral

∫_S a(r) · dS = ∫_R a(r) · n dS = ∫_R a(r) · n (dx_1 dx_2)/|e_3 · n|
Now, if the surface S is given by the equation x_3 = z(x_1, x_2), where z(x_1, x_2) gives the third coordinate of the surface for each (x_1, x_2), then the scalar field

f(x_1, x_2, x_3) = x_3 − z(x_1, x_2)    (2.54)

is identically zero on S. The unit normal at any point of the surface will be given by n = ∇f/|∇f| evaluated at that point, c.f. section 2.6.4. We then obtain

dS = n dS = (∇f/|∇f|) dA/|n · e_3| = ∇f dA/|∇f · e_3| = ∇f dA/|∂_3 f| = ∇f dx_1 dx_2,

where the last identity follows from the fact that ∂_3 f = 1, from Eq. (2.54). The surface integral then becomes

∫_S a(r) · dS = ∫_R a(x_1, x_2, z(x_1, x_2)) · (∇f)(x_1, x_2) dx_1 dx_2,

which is an ordinary scalar integral in the two variables x_1 and x_2.
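The projection formula lends itself to direct numerical evaluation. In the sketch below (not from the text), the surface z = x_1² + x_2² over [0,1]×[0,1] and the field a = r are arbitrary choices; with a · ∇f = −2x_1² − 2x_2² + z = −x_1² − x_2², the exact flux is −2/3.

```python
# Sketch: evaluate ∫_S a·dS over the surface x3 = z(x1, x2) using
# dS = ∇f dx1 dx2 with f = x3 − z(x1, x2). Surface and field are
# arbitrary illustrative choices; the exact answer is −2/3.

def z(x1, x2):
    return x1 ** 2 + x2 ** 2

def grad_f(x1, x2, h=1e-6):
    # ∇f = (−∂z/∂x1, −∂z/∂x2, 1), via central finite differences
    dz1 = (z(x1 + h, x2) - z(x1 - h, x2)) / (2 * h)
    dz2 = (z(x1, x2 + h) - z(x1, x2 - h)) / (2 * h)
    return [-dz1, -dz2, 1.0]

def a(x1, x2, x3):
    return [x1, x2, x3]

N = 400
dA = (1.0 / N) ** 2
flux = 0.0
for i in range(N):
    for j in range(N):
        x1, x2 = (i + 0.5) / N, (j + 0.5) / N
        av = a(x1, x2, z(x1, x2))
        nf = grad_f(x1, x2)
        flux += sum(p * q for p, q in zip(av, nf)) * dA

assert abs(flux - (-2.0 / 3.0)) < 1e-4
```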
Volume integrals

Volume integrals are generally simpler than line or surface integrals, since the element of volume dV is a scalar quantity. Volume integrals most often take the form

∫_V φ(r) dV,    ∫_V a(r) dV    (2.55)

Clearly, the first form results in a scalar, whereas the second one yields a vector. Two closely related physical examples, one of each kind, are provided by the total mass M of a fluid contained in a volume V, given by M = ∫_V ρ(r) dV, and the total linear momentum of that same fluid, given by ∫_V ρ(r) v(r) dV, where ρ(r) is the density field and v(r) is the velocity field of the fluid.

The evaluation of the first volume integral in Eq. (2.55) is an ordinary multiple integral. The evaluation of the second type of volume integral follows directly, since we can resolve the vector field into cartesian components

∫_V a(r) dV = ∑_i ( ∫_V a_i(r) dV ) e_i.

Of course, we could have written a in terms of the basis vectors of another coordinate system (e.g. spherical coordinates), but since such basis vectors are not in general constant, they cannot be taken out of the integrand.
2.6.6 Integral theorems

There are two important theorems relating surface and volume integrals of vector fields: the divergence theorem of Gauss and Stokes' theorem.

Gauss' theorem

Let a be a (differentiable) vector field, and V be any volume bounded by a closed surface S. Then Gauss' theorem states

∫_V (∇ · a)(r) dV = ∮_S a(r) · dS.    (2.56)
The proof goes as follows. Let C = (O, R) be a cartesian coordinate system, and let B(r_0) be a small box around r_0 with its edges oriented along the directions of the basis vectors and with volume V_B = Δx_1 Δx_2 Δx_3. Each face can be labelled as sk, where s = ±1 and k = 1, 2, 3 indicates the orientation of the face. Thus the outward normal of face sk is n_sk = s e_k. If the area A_k = ∏_{i≠k} Δx_i of each face F_sk is small, then we can approximate each surface integral

∫_{F_sk} a(r) · dS = ∫_{F_sk} a(r) · s e_k dS

by evaluating the vector field a(r) at the center point of the face, r_0 + s (Δx_k/2) e_k:

∫_{F_sk} a(r) · dS ≈ a(r_0 + s (Δx_k/2) e_k) · s e_k A_k
The surface integral on the rhs. of Eq. (2.56) for the box B(r_0) then becomes

∮_{B(r_0)} a(r) · dS ≈ ∑_{sk} a(r_0 + s (Δx_k/2) e_k) · s e_k A_k
  = ∑_k [ a(r_0 + (Δx_k/2) e_k) − a(r_0 − (Δx_k/2) e_k) ] · e_k A_k
  = ∑_k [ a_k(r_0 + (Δx_k/2) e_k) − a_k(r_0 − (Δx_k/2) e_k) ] A_k
  ≈ ∑_k (∂_k a_k)(r_0) Δx_k A_k
  = (∇ · a)(r_0) V_B    (2.57)
Any volume V can be approximated by a set of small boxes B_i centered around the points r_{0,i}, where i = 1, ..., N. For the lhs. of Eq. (2.56) we then have

∫_V ∇ · a(r) dV ≈ ∑_i (∇ · a)(r_{0,i}) V_{B_i}.    (2.58)

Adding the surface integrals of each of these boxes, the contributions from the mutual interfaces vanish (since the outward normals of two adjacent boxes point in opposite directions). Consequently, the only contributions come from the surface S of V:

∮_S a(r) · dS ≈ ∑_i ∮_{B(r_{0,i})} a(r) · dS    (2.59)

Since each term on the rhs. of Eq. (2.58) equals a corresponding term on the rhs. of Eq. (2.59), Gauss' theorem is demonstrated.
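Gauss' theorem can be checked numerically on a simple geometry. In the sketch below (not from the text), the unit cube [0,1]³ and the field a = (x², yz, z) are arbitrary choices; with ∇·a = 2x + z + 1, both sides should equal ∫_V (2x + z + 1) dV = 5/2.

```python
# Sketch: check Gauss' theorem on the unit cube for the arbitrary test
# field a = (x², y·z, z), whose divergence is 2x + z + 1.

def a(x, y, z):
    return [x * x, y * z, z]

N = 40
h = 1.0 / N

# volume integral of the divergence (midpoint rule, analytic divergence)
vol = sum(
    (2 * (i + 0.5) * h + (k + 0.5) * h + 1.0) * h ** 3
    for i in range(N) for j in range(N) for k in range(N)
)

# surface integral: flux through the six faces, outward normals ±e_k
flux = 0.0
for k in range(3):          # face orientation (normal along axis k)
    for s in (0.0, 1.0):    # face at coordinate 0 or 1
        sign = 1.0 if s == 1.0 else -1.0
        for i in range(N):
            for j in range(N):
                u, v = (i + 0.5) * h, (j + 0.5) * h
                p = [u, v]
                p.insert(k, s)           # point on the face
                flux += sign * a(*p)[k] * h ** 2

assert abs(vol - 2.5) < 1e-6
assert abs(flux - 2.5) < 1e-6
```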
Gauss' theorem is often used in conjunction with the following mathematical theorem

d/dt ∫_V φ(r, t) dV = ∫_V (∂/∂t) φ(r, t) dV,

where t is time and φ is a time-dependent scalar field. (The theorem works in arbitrary spatial dimension.) The two theorems are central in deriving partial differential equations for dynamical systems, in particular the so-called continuity equations, linking a flow field to the time changes of a scalar field advected by the flow. For instance, if ρ(r, t) is a density and v(r, t) is the velocity field of a fluid, then the vector j(r, t) = ρ(r, t) v(r, t) gives the density current. It can then be shown (try it) that under mass conservation

∂ρ/∂t + ∇ · j = 0.

The divergence of a vector field therefore has the physical meaning of giving the net "outflux" of a scalar advected by the field within an infinitesimal volume.
Stokes' theorem

Stokes' theorem is the "curl analogue" of the divergence theorem: it relates the integral of the curl of a vector field over an open surface S to the line integral of the vector field around the perimeter C bounding the surface,

∫_S (∇ × a)(r) · dS = ∮_C a(r) · dr    (2.60)

Following the same lines as in the derivation of the divergence theorem, the surface S can be divided into many small areas S_i with boundaries C_i and unit normals n_i. For each small area one can show that

(∇ × a) · n_i S_i ≈ ∮_{C_i} a · dr.

Summing over i, one finds that on the rhs. all parts of all interior boundaries that are not part of C are included twice, being traversed in opposite directions on each occasion and thus cancelling each other. Only contributions from line elements that are also part of C survive. If each S_i is allowed to tend to zero, Stokes' theorem, Eq. (2.60), is obtained.
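Stokes' theorem can likewise be spot-checked numerically. In the sketch below (not from the text), the field a = (−y, x, 0) and the unit square in the plane x_3 = 0 are arbitrary choices; since ∇ × a = (0, 0, 2), the surface integral is 2, and the counter-clockwise line integral around the boundary must match.

```python
# Sketch: check Stokes' theorem for a = (−y, x, 0) on the unit square
# in the x3 = 0 plane; ∇×a = (0, 0, 2), so both sides should equal 2.

def a(x, y):
    return [-y, x]  # only the in-plane components are needed

N = 1000
h = 1.0 / N

surface = 2.0 * 1.0  # (∇×a)·e3 = 2, times the unit area

line = 0.0
for k in range(N):
    t = (k + 0.5) * h
    line += a(t, 0.0)[0] * h          # bottom edge, dx > 0
    line += a(1.0, t)[1] * h          # right edge, dy > 0
    line += a(1.0 - t, 1.0)[0] * -h   # top edge, dx < 0
    line += a(0.0, 1.0 - t)[1] * -h   # left edge, dy < 0

assert abs(line - surface) < 1e-9
```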
2.7 Curvilinear coordinates

The vector operators ∇φ, ∇ · a and ∇ × a that we have discussed so far have all been defined in terms of cartesian coordinates. In that respect we have been more restricted than in the algebraic definitions of the analogous ordinary scalar and cross products, Eq. (2.18) and Eq. (2.20) respectively, where we only assumed orthonormality of the basis. The reason is that the nabla operator involves spatial derivatives, which implies that one must account for the possible non-constancy of E, c.f. Eq. (2.35).

Many systems possess some particular symmetry which makes other coordinate systems more natural, notably cylindrical or spherical coordinates. These coordinates are just two examples of what are called curvilinear coordinates. Curvilinear coordinates refer to the general case in which the rectangular coordinates x of any point can be expressed as functions of another set of coordinates x′, thus defining a coordinate transformation, x = x(x′) or x′ = x′(x).⁸ Here we shall discuss the algebraic form of the standard vector operators for the two particular transformations into cylindrical and spherical coordinates. An important feature of these coordinates is that although they lead to non-constant bases, these bases retain the property of orthonormality.
The starting point for the discussion is to realize that a non-constant basis

E′(x′) = {e′_1(x′), e′_2(x′), e′_3(x′)}

implies that the basis vectors are vector fields, as opposed to the constant basis R = {e_1, e_2, e_3} associated with cartesian coordinates. Any vector field a(x′) is then generally written as

a(x′) = ∑_j a′_j(x′) e′_j(x′)    (2.61)

where a′_j are the components of the vector field in the new basis, and x′ = x′(x) is the coordinate transformation. One notes that the functional form of the components a′_j(x′) differs from the functional form of the cartesian components a_j(x), because the same point P will have two different numerical representations, x′(P) and x(P). For a scalar field one must have the identity

φ′(x′) = φ(x),

where φ′ is the functional form of the scalar field in the primed coordinate system and φ is the functional form with respect to the unprimed system. Again, the two functional forms must be different because the same point has different numerical representations. However, the values of the two functions must be the same, since x′ and x represent the same point in space.

⁸ Recall that the notation x′ = x′(x) is a shorthand notation for the three functional relationships listed in Eq. (1.2).
2.7.1 Cylindrical coordinates

Cylindrical coordinates, x′ = (ρ, φ, z), are defined in terms of the normal cartesian coordinates x = (x_1, x_2, x_3) = (x, y, z) by the coordinate transformation

x = ρ cos(φ)
y = ρ sin(φ)
z = z    (2.62)

The domain of variation is 0 ≤ ρ < ∞, 0 ≤ φ < 2π.

The position vector may be written as

r(x′) = ρ cos(φ) e_1 + ρ sin(φ) e_2 + z e_3
Local basis

If we take the partial derivatives with respect to ρ, φ, z, c.f. Eq. (2.33), and normalize, we obtain

e′_1(x′) = e_ρ(x′) = ∂_ρ r = cos(φ) e_1 + sin(φ) e_2
e′_2(x′) = e_φ(x′) = (1/ρ) ∂_φ r = −sin(φ) e_1 + cos(φ) e_2
e′_3 = e_z = e_3

These three unit vectors, like the cartesian unit vectors e_i, form an orthonormal basis at each point in space. An arbitrary vector field may therefore be resolved in this basis

a(x′) = a_ρ(x′) e_ρ(x′) + a_φ(x′) e_φ(x′) + a_z(x′) e_z

where the vector components

a_ρ = a · e_ρ,    a_φ = a · e_φ,    a_z = a · e_z

are the projections of a on the local basis vectors.
The derivatives with respect to the cylindrical coordinates are found by differentiating through the cartesian coordinates (chain rule)

∂_ρ = (∂x/∂ρ) ∂_x + (∂y/∂ρ) ∂_y = cos(φ) ∂_x + sin(φ) ∂_y
∂_φ = (∂x/∂φ) ∂_x + (∂y/∂φ) ∂_y = −ρ sin(φ) ∂_x + ρ cos(φ) ∂_y

From these relations we can calculate the projections of the gradient operator ∇ = ∑_i e_i ∂_i on the cylindrical basis, and we obtain

∇_ρ = e_ρ · ∇ = ∂_ρ
∇_φ = e_φ · ∇ = (1/ρ) ∂_φ
∇_z = e_z · ∇ = ∂_z

The resolution of the gradient in the two bases therefore becomes

∇ = e_ρ ∇_ρ + e_φ ∇_φ + e_z ∇_z = e_ρ ∂_ρ + e_φ (1/ρ) ∂_φ + e_z ∂_z
Together with the only non-vanishing derivatives of the basis vectors,

∂_φ e_ρ = e_φ
∂_φ e_φ = −e_ρ

we have the necessary tools for calculating in cylindrical coordinates. Note that it is the vanishing derivatives of the basis vectors that lead to the simple form of the vector operators in cartesian coordinates.
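These basis-vector derivatives can be confirmed numerically by differentiating the explicit expressions for e_ρ(φ) and e_φ(φ). A minimal sketch (not from the text):

```python
import math

# Sketch: verify ∂_φ e_ρ = e_φ and ∂_φ e_φ = −e_ρ by central finite
# differences on the explicit cylindrical basis vectors.

def e_rho(phi):
    return [math.cos(phi), math.sin(phi), 0.0]

def e_phi(phi):
    return [-math.sin(phi), math.cos(phi), 0.0]

def d_dphi(e, phi, h=1e-6):
    return [(p - m) / (2 * h) for p, m in zip(e(phi + h), e(phi - h))]

phi0 = 0.8  # arbitrary sample angle
assert all(abs(d - t) < 1e-8
           for d, t in zip(d_dphi(e_rho, phi0), e_phi(phi0)))
assert all(abs(d + t) < 1e-8
           for d, t in zip(d_dphi(e_phi, phi0), e_rho(phi0)))
```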
Laplacian

The laplacian in cylindrical coordinates takes the form

∇² = ∇ · ∇ = (e_ρ ∇_ρ + e_φ ∇_φ + e_z ∇_z) · (e_ρ ∇_ρ + e_φ ∇_φ + e_z ∇_z)

Using the linearity of the scalar product it can be rewritten in the form

∑_{ij} e′_i ∇′_i · e′_j ∇′_j,

where e′_1 = e_ρ, ∇′_1 = ∇_ρ etc. The interpretation of each of these terms is

e′_i ∇′_i · e′_j ∇′_j = e′_i · ∇′_i(e′_j ∇′_j)    (2.63)

Applying the chain rule of differentiation and the derivatives of the basis vectors listed above, one can then show

∇² = ∂²_ρρ + (1/ρ) ∂_ρ + (1/ρ²) ∂²_φφ + ∂²_zz
2.7.2 Spherical coordinates

The treatment of spherical coordinates follows the same lines as cylindrical coordinates. The spherical coordinates are x′ = (r, θ, φ), and the coordinate transformation is given by

x = r sin(θ) cos(φ)
y = r sin(θ) sin(φ)
z = r cos(θ)    (2.64)

The domain of variation is 0 ≤ r < ∞, 0 ≤ θ ≤ π, 0 ≤ φ < 2π. The position vector is

r(x′) = r sin(θ) cos(φ) e_1 + r sin(θ) sin(φ) e_2 + r cos(θ) e_3
Local basis

The normalized tangent vectors along the directions of the spherical coordinates are

e_r(x′) = ∂_r r = sin(θ) cos(φ) e_1 + sin(θ) sin(φ) e_2 + cos(θ) e_3
e_θ(x′) = (1/r) ∂_θ r = cos(θ) cos(φ) e_1 + cos(θ) sin(φ) e_2 − sin(θ) e_3
e_φ(x′) = (1/(r sin(θ))) ∂_φ r = −sin(φ) e_1 + cos(φ) e_2

They define an orthonormal basis, such that an arbitrary vector field may be resolved in these directions

a(x′) = a_r(x′) e_r(x′) + a_θ(x′) e_θ(x′) + a_φ(x′) e_φ(x′)

with a′_i = e′_i · a, where a′_1 = a_r, e′_1 = e_r etc.
The gradient operator may also be resolved in this basis

∇ = e_r ∇_r + e_θ ∇_θ + e_φ ∇_φ = e_r ∂_r + e_θ (1/r) ∂_θ + e_φ (1/(r sin(θ))) ∂_φ    (2.65)
The non-vanishing derivatives of the basis vectors are

∂_θ e_r = e_θ        ∂_φ e_r = sin(θ) e_φ
∂_θ e_θ = −e_r       ∂_φ e_θ = cos(θ) e_φ
∂_φ e_φ = −sin(θ) e_r − cos(θ) e_θ    (2.66)
This is all we need for expressing vector operators in spherical coordinates.
Laplacian

The laplacian in spherical coordinates becomes

∇² = (e_r ∇_r + e_θ ∇_θ + e_φ ∇_φ) · (e_r ∇_r + e_θ ∇_θ + e_φ ∇_φ),

which, after using the linearity of the scalar product and Eq. (2.66), becomes

∇² = ∂²_rr + (2/r) ∂_r + (1/r²) ∂²_θθ + (cos(θ)/(r² sin(θ))) ∂_θ + (1/(r² sin²(θ))) ∂²_φφ

Notice that the first two terms, involving radial derivatives, can be given alternative expressions:

(∂²_rr φ) + (2/r)(∂_r φ) = (1/r²) ∂_r( r² (∂_r φ) ) = (1/r) ∂²_rr(r φ),

where φ is a scalar field.
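The equivalence of these radial forms can be checked numerically. In the sketch below (not from the text), the radial test function φ(r) = exp(−r) and the sample point are arbitrary choices.

```python
import math

# Sketch: check (1/r²)∂_r(r² ∂_r φ) = (1/r)∂²_rr(rφ) for the arbitrary
# radial test function φ(r) = exp(−r), using finite differences.

def phi(r):
    return math.exp(-r)

def d(f, r, h=1e-4):
    return (f(r + h) - f(r - h)) / (2 * h)

def d2(f, r, h=1e-4):
    return (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2

r0 = 1.7  # arbitrary sample radius
lhs = d(lambda r: r ** 2 * d(phi, r), r0) / r0 ** 2
rhs = d2(lambda r: r * phi(r), r0) / r0
assert abs(lhs - rhs) < 1e-6
```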
Chapter 3

Tensors

In this chapter we will limit ourselves to the discussion of rank 2 tensors, unless stated otherwise. The precise definition of the rank of a tensor will become clear later.

3.1 Definition

A tensor, T, of rank two is a geometrical object that to any vector a associates another vector u = T(a) by a linear operation:

T(m a) = m T(a)    (3.1)
T(a + b) = T(a) + T(b)    (3.2)

In other words, a rank 2 tensor is a linear vector operator. We will denote tensors (of rank 2 or higher) with capital bold-face letters.

Any linear transformation of vectors, such as a rotation, reflection or projection, is an example of a tensor. In fact, we have already encountered tensors in disguise. The operator T = c× is a tensor. To each vector a it associates the vector T(a) = c × a, obtained by rotating a 90° counter-clockwise around c and scaling it with the magnitude |c|. Since the cross product is linear in its second argument, T is linear and thus a tensor. For reasons to become clear we will also extend the dot-notation to indicate the operation of a tensor on a vector, so

T · a =def T(a)    (3.3)

Physical examples of tensors include the inertia tensor, I, which relates the angular velocity, ω (a vector), to the angular momentum, l (a vector):

l = I · ω

Another example is the (transposed) stress tensor, σᵗ, which specifies the force f that a continuous medium exerts on a surface element defined by the normal vector n:

f = σᵗ · n

A tensor often associated with the stress tensor is the strain tensor, U, which specifies the distortion Δu of a strained material in some direction Δr:

Δu = U · Δr

We shall be more specific on these physical tensors later.
3.2 Outer product

The outer product a ⊗ b, or simply ab, between two vectors a and b is a tensor defined by the following equation, where c is any vector:

(ab)(c) = a(b · c)    (3.4)

In words, to each c the tensor ab associates a vector in the direction of a, scaled by the projection b · c of c onto b. In order to call the object (ab) a tensor, we should verify that it is a linear operator. Using the definition, Eq. (3.3), allows us to "place the brackets where we want",

(ab) · c = a(b · c),

which eases the notation. Now, demonstrating that (ab) indeed is a tensor amounts to "moving brackets":

(ab) · (m c) = a(b · (m c)) = m a(b · c) = m (ab) · c
(ab) · (c + d) = a(b · (c + d)) = a(b · c) + a(b · d) = (ab) · c + (ab) · d
(3.5)

We note that since the definition, Eq. (3.4), involves vector operations that bear no reference to coordinates/components, the outer product will itself be invariant to the choice of coordinate system. The outer product is also known in the literature as the tensor, direct, exterior or dyadic product. The tensor formed by the outer product of two vectors is called a dyad.
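The defining property (ab) · c = a(b · c) is easy to check with a concrete component representation, anticipating the matrix form [ab]_ij = a_i b_j introduced in section 3.4. The vectors in this sketch (not from the text) are arbitrary numeric examples.

```python
# Sketch: check (ab)·c = a(b·c) using the 3×3 component matrix of the
# dyad ab, [ab]_ij = a_i b_j. Vectors are arbitrary numeric examples.

a = [1.0, 2.0, -1.0]
b = [0.5, 0.0, 3.0]
c = [2.0, 1.0, 1.0]

outer = [[ai * bj for bj in b] for ai in a]  # matrix of the dyad ab

# (ab)·c via the matrix representation
lhs = [sum(outer[i][j] * c[j] for j in range(3)) for i in range(3)]

# a(b·c) directly from the definition
bc = sum(bj * cj for bj, cj in zip(b, c))
rhs = [ai * bc for ai in a]

assert lhs == rhs
```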
3.3 Basic tensor algebra

We can form more general linear vector operators by taking sums of dyads. The sum of any two tensors, S and T, and the multiplication of a tensor with a scalar m are naturally defined by

(S + T) · c = S · c + T · c
(m S) · c = m (S · c),
(3.6)

where c is any vector. It is easy to show that (S + T) and (m S) are also tensors, i.e. they satisfy the definition of being linear vector operators, Eqs. (3.1)-(3.2). Definition Eq. (3.6) and the properties Eq. (3.5) guarantee that dyad products, sums of tensors and dot products of tensors with vectors satisfy all the usual algebraic rules for sums and products. For instance, the outer product between two vectors of the form c = ∑_i m_i a_i and d = ∑_j n_j b_j, where m_i and n_j are scalars, is

cd = ( ∑_i m_i a_i )( ∑_j n_j b_j ) = ∑_{ij} m_i n_j a_i b_j,    (3.7)

i.e. it is a sum of all dyad combinations a_i b_j. The sum of two or more dyads is called a dyadic. As we shall demonstrate in section 3.4, any tensor can be expressed as a dyadic.
We can also extend the definition of the dot product. The dot product of two tensors is defined by the following formula, where c is any vector:

(T · S) · c = T · (S · c)    (3.8)

In words, application of the operator (T · S) to any vector means first applying S and then T. Since the composition of two linear functions is a linear function, (T · S) is a tensor itself. Sums and products of tensors also obey the usual rules of algebra, except that dot multiplication of two tensors, in general, is not commutative:

T + S = S + T    (3.9)
T · (S + P) = T · S + T · P    (3.10)
T · (S · P) = (T · S) · P    (3.11)
3.3.1 Transposed tensors

Furthermore, we define the dot product of a dyad with a vector on the left in the obvious way

c · (ab) = (c · a) b    (3.12)

and correspondingly for a dyadic. Since the dot product of a dyadic with a vector is not in general commutative,

T · c ≠ c · T,

it makes sense to introduce the transpose Tᵗ of a tensor:

Tᵗ · c = c · T    (3.13)

Since c · (ab) = (ba) · c, we have for a dyad

(ab)ᵗ = ba.    (3.14)

As we shall see in section 3.4, any tensor can be expressed as a sum of dyads. Using this fact, the following properties can easily be proved:

(T + S)ᵗ = Tᵗ + Sᵗ    (3.15)
(T · S)ᵗ = Sᵗ · Tᵗ    (3.16)
(Tᵗ)ᵗ = T.    (3.17)
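The transpose properties can be checked on component matrices, since (as shown in section 3.4) the tensor dot product corresponds to matrix multiplication and the tensor transpose to matrix transposition. The matrices in this sketch (not from the text) are arbitrary numeric examples.

```python
# Sketch: check (T·S)ᵗ = Sᵗ·Tᵗ on 3×3 component matrices of two
# arbitrary tensors.

T = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [4.0, 0.0, 1.0]]
S = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 2.0, 2.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

assert transpose(matmul(T, S)) == matmul(transpose(S), transpose(T))
```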
3.3.2 Contraction

Another useful operation on tensors is that of contraction. The contraction, ab : cd, of two dyads results in a scalar, defined by

ab : cd = (a · c)(b · d)    (3.18)

Note the following useful relation for the contraction of two dyads formed by basis vectors:

e_i e_j : e_k e_l = δ_ik δ_jl

The contraction of two dyadics is defined as the bilinear operation

( ∑_i m_i a_i b_i ) : ( ∑_j n_j c_j d_j ) = ∑_{ij} m_i n_j (a_i · c_j)(b_i · d_j),    (3.19)

where m_i and n_j are scalars.

In some textbooks another type of contraction (a "transposed contraction") is also defined:

ab ·· cd = ab : dc
( ∑_i m_i a_i b_i ) ·· ( ∑_j n_j c_j d_j ) = ∑_{ij} m_i n_j (a_i · d_j)(b_i · c_j)
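In components, the contraction of two dyads reduces to the sum ∑_{ij} (ab)_ij (cd)_ij, which must agree with (a · c)(b · d). A minimal numeric sketch (not from the text; the vectors are arbitrary):

```python
# Sketch: check ab : cd = (a·c)(b·d) against the component formula
# ∑_ij (ab)_ij (cd)_ij. Vectors are arbitrary numeric examples.

a = [1.0, 0.0, 2.0]
b = [0.0, 3.0, 1.0]
c = [2.0, 1.0, 0.0]
d = [1.0, 1.0, 2.0]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def dyad(u, v):
    return [[ui * vj for vj in v] for ui in u]

AB, CD = dyad(a, b), dyad(c, d)
component_sum = sum(AB[i][j] * CD[i][j]
                    for i in range(3) for j in range(3))
assert component_sum == dot(a, c) * dot(b, d)
```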
3.3.3 Special tensors

Some special tensors arise as a consequence of the basic tensor algebra.

A symmetric tensor is a tensor that satisfies Tᵗ = T.
An anti-symmetric tensor is a tensor that satisfies Tᵗ = −T.

Any tensor T can trivially be decomposed into a symmetric and an anti-symmetric part. To see this, define

S(T) = T + Tᵗ
A(T) = T − Tᵗ
(3.20)

Then Sᵗ = S is symmetric, Aᵗ = −A is anti-symmetric, and

T = (1/2) S + (1/2) A

Finally, an important tensor is the identity or unit tensor, 1. It may be defined as the operator which, acting on any vector, yields the vector itself. Evidently, 1 is one of the special cases for which, for all tensors A,

1 · A = A · 1

If m is any scalar, the product m1 is called a constant tensor and has the property

(m1) · A = A · (m1) = m A

Constant tensors therefore commute with all tensors, and no other tensors have this property.
3.4 Tensor components in orthonormal bases
Just as is the case with vectors, we can represent a tensor in terms of its components once we choose a basis. Let therefore E be an orthonormal basis and T be any tensor. Define
T_ij = e_i · T · e_j (3.21)
In words, T_ij is the i’th component of the image of the j’th basis vector. T_ij is also called the ij’th component of T.
Using the linearity of T and scalar products one observes that the tensor is fully specified by these 3 × 3 = 9 quantities. Indeed, for any vector c = Σ_j c_j e_j one obtains the i’th component of the image u = T · c by
e_i · (T · (Σ_j c_j e_j)) = e_i · (Σ_j c_j T · e_j) = Σ_j c_j (e_i · T · e_j) = Σ_j T_ij c_j (3.22)
Thus,
T · c = Σ_ij T_ij c_j e_i (3.23)
Since two indices are required for a full representation of T, it is classified as a tensor of second rank. We will use the notation [T]_E,ij for the ij’th component of the tensor in basis E, or simply [T]_ij when it is clear from the context (or irrelevant) which basis is implied. Note that the kl’th component of a dyad e_i e_j is
(e_i e_j)_kl = (e_k · e_i)(e_j · e_l) = δ_ik δ_jl
and that the 3 × 3 dyad combinations e_i e_j form a basis for the tensor space:
T = Σ_ij T_ij e_i e_j (3.24)
Thus, the concepts of a dyadic and a linear vector operator or tensor are identical, and are equivalent to the concept of a linear vector function, in the sense that every linear vector function defines a certain tensor or dyadic, and conversely.
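These two facts, T_ij = e_i · T · e_j and the expansion along the nine basis dyads, can be checked numerically; the component matrix below is arbitrary example data:

```python
import numpy as np

E = np.eye(3)                        # orthonormal basis e_1, e_2, e_3
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 5.0]])      # component matrix of some tensor

# T_ij = e_i . T . e_j, Eq. (3.21)
comps = np.array([[E[i] @ T @ E[j] for j in range(3)] for i in range(3)])
assert np.allclose(comps, T)

# Expansion along the 9 basis dyads e_i e_j reproduces the tensor, Eq. (3.24)
rebuilt = sum(T[i, j] * np.outer(E[i], E[j])
              for i in range(3) for j in range(3))
assert np.allclose(rebuilt, T)
```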
Conveniently, the components T_ij can be arranged in a matrix T = (T_ij),¹
T = (T_ij) =def
( T_11 T_12 T_13 )
( T_21 T_22 T_23 ) (3.25)
( T_31 T_32 T_33 )
In analogy with the notation used for vectors we write T = [T]_E – or simply T = [T] when the basis is irrelevant – as a shorthand notation for the component matrix of a tensor with respect to a given basis. Also, for the inverse operation (i.e. the tensor obtained by expanding the components along the basis dyads) we shall use the notation
T = (T)_E =def Σ_ij T_ij e_i e_j . (3.26)
The row-column convention in Eq. (3.25) implies that the representation of the dyad e_i e_j is a matrix with a 1 in the i’th row and j’th column and zeros everywhere else.
As for vectors one should notice the diﬀerence between a tensor and its
matrix representation. The concept of a matrix is purely algebraic; matrices are
arrays of numbers which may be added and multiplied according to particular
rules (see below). The concept of a tensor is geometrical; a tensor may be
represented in any particular coordinate system by a matrix, but the matrix
must be transformed according to a deﬁnite rule if the coordinate system is
changed.
3.4.1 Matrix algebra
Our definitions of the tensor-vector dot product and the tensor-tensor dot product are consistent with the ordinary rules of matrix algebra.
Tensor · vector
For instance, Eq. (3.22) shows that if u = T · c then the relationship between the components of u, T and c in any orthonormal basis will satisfy
u_i = [T · c]_i = Σ_j T_ij c_j , (3.27)
In standard matrix notation this is precisely equivalent to
( u_1 )   ( T_11 T_12 T_13 )   ( c_1 )
( u_2 ) = ( T_21 T_22 T_23 ) · ( c_2 )
( u_3 )   ( T_31 T_32 T_33 )   ( c_3 )
¹ In accordance with standard notation in the literature, the bracket around the components, (T_ij), is used to indicate the matrix collection of these. This is in analogy to the notation used for triplets a = (a_i), cf. section 2.3.
or
u = T · c,
where the triplets u and c are to be considered as columns. Therefore, we have the correspondence between the tensor operation and its matrix representation
[T · c] = [T] · [c]
Tensor · tensor
Also, let us consider the implication of the definition of the dot product of two tensors, Eq. (3.8), in terms of the components:
T · (S · c) = T · (Σ_kj S_kj c_j e_k)
            = Σ_kj S_kj c_j T · e_k
            = Σ_kj S_kj c_j Σ_i T_ik e_i
            = Σ_ij (Σ_k T_ik S_kj) c_j e_i (3.28)
Comparing this with Eq. (3.23) shows that
[T · S]_ij = Σ_k T_ik S_kj (3.29)
which again is identical to the ij’th element of the matrix product T · S. A more transparent derivation is obtained by noting that for any vector c
(e_k e_l) · c = e_k (e_l · c) = e_k c_l
and consequently
(e_i e_j · e_k e_l) · c = (e_i e_j) · (e_k e_l · c) = (e_i e_j) · e_k c_l = e_i (e_j · e_k) c_l
                        = e_i δ_jk c_l = δ_jk e_i (e_l · c) = δ_jk (e_i e_l) · c
Therefore, one obtains the simple result for the dot product of two basis dyads
e_i e_j · e_k e_l = δ_jk e_i e_l (3.30)
Now, evaluating T · S in terms of its dyadic representations and collecting terms,
T · S = (Σ_ij T_ij e_i e_j) · (Σ_kl S_kl e_k e_l)
      = Σ_ijkl T_ij S_kl (e_i e_j) · (e_k e_l)
      = Σ_ijkl T_ij S_kl δ_jk e_i e_l
      = Σ_ij (Σ_l T_il S_lj) e_i e_j (3.31)
In obtaining the last expression we have exchanged the summation indices j
and l. The ﬁnal expression conﬁrms Eq. (3.29). In summary,
[T· S] = [T] · [S]
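The consistency between tensor operations and matrix algebra is directly observable in numpy; the random matrices and vector below are arbitrary stand-ins for component representations:

```python
import numpy as np

rng = np.random.default_rng(1)
T, S = rng.standard_normal((2, 3, 3))   # two component matrices
c = rng.standard_normal(3)              # a component triplet

# [T . c]_i = sum_j T_ij c_j, Eq. (3.27): tensor-on-vector is matrix-vector.
assert np.allclose(np.einsum('ij,j->i', T, c), T @ c)

# [T . S]_ij = sum_k T_ik S_kj, Eq. (3.29): tensor dot is matrix product.
assert np.allclose(np.einsum('ik,kj->ij', T, S), T @ S)

# (T . S)^t = S^t . T^t, Eq. (3.16), in component form.
assert np.allclose((T @ S).T, S.T @ T.T)
```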
Tensor + tensor
One can similarly show that the definition of the sum of two tensors, Eq. (3.6),
(S + T) · c = S · c + T · c, (3.6)
is consistent with matrix addition:
(S + T) · c = (Σ_ij S_ij e_i e_j + Σ_ij T_ij e_i e_j) · c
            = Σ_ij S_ij e_i (e_j · c) + Σ_ij T_ij e_i (e_j · c) (def. Eq. (3.6))
            = Σ_ij (S_ij + T_ij) e_i (e_j · c)
            = (Σ_ij (S_ij + T_ij) e_i e_j) · c
(3.32)
Consequently,
S + T = Σ_ij (S_ij + T_ij) e_i e_j (3.33)
or equivalently,
[S + T]_ij = S_ij + T_ij , (3.34)
which means that the component matrix of S + T is obtained by adding the component matrices of S and T respectively:
[S + T] = [S] + [T]
It is left as an exercise to demonstrate that
[mT]_ij = m T_ij , (3.35)
where m is a scalar.
Transposition
Finally, the definition of the transpose of a tensor, Eq. (3.13), is also consistent with the algebraic definition of transposition. Specifically, comparing
c · (Σ_ij T_ij e_i e_j) = Σ_ij T_ij (c · e_i) e_j = Σ_ij T_ij e_j (e_i · c) = (Σ_ij T_ji e_i e_j) · c (3.36)
with the definition of the transpose,
T^t · c = c · T,
the components satisfy
[T^t]_ij = T_ji , (3.37)
in agreement with the matrix definition of transposition. In other words, the matrix representation of the transposed tensor equals the transposed matrix representation of the tensor itself:
[T^t] = [T]^t
Eq. (3.36) also demonstrates
[c · T]_i = Σ_j c_j T_ji (3.38)
In keeping with the convention that c represents a column, the matrix form of this equation is
c^t · T,
where c^t and the image c^t · T will be rows.
3.4.2 Two-point components
As a curiosity, we mention that it is not compulsory to use the same basis for the left and the right basis-vectors when expressing a tensor in terms of its components. There are indeed cases where it is advantageous to use different left and right bases. Such tensor components are called two-point components. In these cases one should make it clear in the notation that the first index of the component refers to a different basis than the second. For a two-point component matrix T̃ we write
T = (T̃)_E',E = Σ_ij T̃_ij e'_i e_j
and similarly [T]_E'E,ij for T̃_ij. Two-point components do not involve any extra formalism, though. For instance, getting the components in the dyad basis E'E' or in the basis EE is only a question of respectively multiplying with the basis-vectors e'_i to the right or the basis-vectors e_i to the left on the expression Σ_kl T̃_kl e'_k e_l and collecting the terms. For instance,
[T]_EE,ij = e_i · (Σ_kl T̃_kl e'_k e_l) · e_j = Σ_kl T̃_kl (e_i · e'_k)(e_l · e_j)
          = Σ_kl T̃_kl (e_i · e'_k) δ_lj = Σ_k T̃_kj (e_i · e'_k)
(3.39)
showing that the ij’th component in basis EE, T_ij, is
T_ij = Σ_k T̃_kj (e_i · e'_k)
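The conversion rule of Eq. (3.39) can be sketched numerically. The primed basis is generated here by a rotation about the z-axis and the two-point components are random data, both made-up assumptions for the illustration:

```python
import numpy as np

# Primed basis obtained by rotating the standard basis (assumed example).
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
E = np.eye(3)
Ep = np.array([R @ E[i] for i in range(3)])      # e'_i

rng = np.random.default_rng(2)
Ttilde = rng.standard_normal((3, 3))             # two-point components

# The tensor itself: T = sum_ij Ttilde_ij e'_i e_j; in the standard basis
# this array IS the EE component matrix.
T = sum(Ttilde[i, j] * np.outer(Ep[i], E[j])
        for i in range(3) for j in range(3))

# Eq. (3.39): T_ij = sum_k Ttilde_kj (e_i . e'_k)
M = np.array([[E[i] @ Ep[k] for k in range(3)] for i in range(3)])
T_EE = np.einsum('kj,ik->ij', Ttilde, M)
assert np.allclose(T_EE, T)
```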
3.5 Tensor fields
As for scalar and vector fields, one speaks of a tensor field (here of rank 2) if to each position x = (x_1, x_2, x_3) of a region R in space there corresponds a tensor T(x). The best known examples of tensor fields in physics are the stress and strain tensor fields.
In cartesian coordinates a tensor field is given by
T(x) = Σ_ij T_ij(x) e_i e_j (3.40)
Note in particular that for each i = 1, 2, 3
a_i(x) = T(x) · e_i = Σ_j T_ji(x) e_j   and   b_i(x) = e_i · T(x) = Σ_j T_ij(x) e_j (3.41)
are vector fields with components [a_i]_j = T_ji and [b_i]_j = T_ij, respectively.
3.5.1 Gradient, divergence and curl
Let a cartesian coordinate system C = (O, R) be given.
Nabla operator
For any vector field a(r) we can construct a tensor field by means of the nabla operator.
Inserting a(r) = Σ_i a_i(r) e_i “blindly” into the definition Eq. (2.41) gives
(∇a)(r) = Σ_i e_i ∂_i (Σ_j a_j(r) e_j) = Σ_ij (∂_i a_j)(r) e_i e_j (3.42)
Assuming that ∇ is a proper vector operator² then (∇a)(r) represents a tensor field with the components
[∇a]_ij(r) = (∂_i a_j)(r) (3.43)
Notice that the components of ∇a are those of an outer product between two ordinary vectors, [ba]_ji = b_j a_i.
The tensor ∇a represents how the vector field a changes for a given displacement ∆r:
a(r + ∆r) ≈ a(r) + ∆r · (∇a)(r)
or
da = dr · (∇a)
An alternative expression for the differential da was obtained in Eq. (2.44),
da = dr · (∇a) = (dr · ∇)a
Notice that this identity is in accordance with the usual definition of a dyad operating on a vector from the left.
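The first-order relation da = dr · (∇a) can be verified for a concrete smooth field; the field a = (xy, yz, zx) and the evaluation point are made-up examples, with the gradient components [∇a]_ij = ∂_i a_j written out by hand:

```python
import numpy as np

# Hypothetical smooth vector field: a = (x*y, y*z, z*x).
def a(r):
    x, y, z = r
    return np.array([x * y, y * z, z * x])

# Exact gradient components [grad a]_ij = d_i a_j, Eq. (3.43).
def grad_a(r):
    x, y, z = r
    return np.array([[y,   0.0, z],
                     [x,   z,   0.0],
                     [0.0, y,   x]])

r = np.array([1.0, 2.0, 3.0])
dr = 1e-6 * np.array([0.3, -0.5, 0.2])   # small displacement

# Taylor step: a(r + dr) - a(r) = dr . (grad a) + O(|dr|^2)
da_exact = a(r + dr) - a(r)
da_tensor = dr @ grad_a(r)
assert np.allclose(da_exact, da_tensor, atol=1e-10)
```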
² We recall that the ∇ operator has been defined relative to a chosen coordinate system. It remains to be proven that operators derived from ∇, such as gradients, curls, divergences etc., “behave as” proper scalars or vectors. We return to this point in section 4.5.
Divergence
The divergence of a tensor field is obtained straightforwardly by applying the definition of ∇ and the dot product. With respect to a cartesian basis R we have
(∇ · T)(x) = (Σ_i e_i ∂_i) · (Σ_jk T_jk(x) e_j e_k) = Σ_ik ∂_i T_ik(x) e_k = Σ_k (Σ_i ∂_i T_ik(x)) e_k .
(3.44)
Consequently, a = ∇ · T is a vector with components a_k = Σ_i ∂_i T_ik, cf. Eq. (3.41). The result is actually easier seen in pure subscript notation,
[∇ · T]_k = Σ_i ∂_i T_ik ,
because it follows directly from the matrix algebra, Eq. (3.38). Another way of viewing the divergence of a tensor field is to take the divergence of each of the vector fields a_i = T · e_i:
∇ · T = Σ_i (∇ · a_i) e_i
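A small numerical check of [∇·T]_k = Σ_i ∂_i T_ik, using the made-up tensor field T_ik(x) = x_i x_k, for which Σ_i ∂_i (x_i x_k) = 3 x_k + x_k = 4 x_k; central differences are exact here (up to rounding) because the field is quadratic:

```python
import numpy as np

# Hypothetical tensor field: T_ik(x) = x_i x_k, so [div T]_k = 4 x_k.
def T(x):
    return np.outer(x, x)

def div_T_numeric(x, h=1e-5):
    """[div T]_k = sum_i d_i T_ik by central differences."""
    out = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = h
        out += (T(x + dx)[i] - T(x - dx)[i]) / (2 * h)
    return out

x = np.array([0.5, -1.0, 2.0])
assert np.allclose(div_T_numeric(x), 4 * x, atol=1e-7)
```

Since this particular T is symmetric, the column fields a_i = T · e_i equal the row fields, and summing the divergence of each column reproduces the same vector.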
Curl
The curl of a tensor field is obtained similarly to the divergence. The result is most easily obtained by applying the algebraic definition of the curl, Eq. (2.47). Then we obtain
[∇ × T]_ij = Σ_mn ε_imn ∂_m T_nj
Thus the curl of a tensor field is another tensor field. As for the divergence, we obtain the same result by considering the curl operator on each of the three vector fields a_j = T · e_j:
∇ × T = Σ_j (∇ × a_j) e_j ,
where an outer vector product is involved in each of the terms (∇ × a_j) e_j .
3.5.2 Integral theorems
The two important theorems for vector fields, Gauss’ theorem and Stokes’ theorem, can be directly translated to tensor fields. The analogy is most easily seen in tensor notation. For a vector field a(r) the two theorems read
∫_V (Σ_i ∂_i a_i) dV = ∮_S (Σ_i a_i n_i) dS   (Gauss)
∫_S (Σ_ijk ε_ijk ∂_j a_k) n_i dS = ∮_C (Σ_i a_i dx_i)   (Stokes)
(3.45)
The corresponding tensor versions are then
∫_V (Σ_i ∂_i T_il) dV = ∮_S (Σ_i T_il n_i) dS   (Gauss)
∫_S (Σ_ijk ε_ijk ∂_j T_kl) n_i dS = ∮_C (Σ_i T_il dx_i)   (Stokes)
(3.46)
Here l refers to any component index, l = 1, 2, 3. The reason these formulas also work for tensors is that for a fixed l, a_l = T · e_l = Σ_i T_il e_i defines a vector field, cf. (3.41), and Gauss’ and Stokes’ theorems work for each of these.
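The tensor version of Gauss’ theorem can be sanity-checked numerically. The sketch below uses a made-up linear tensor field T_il(x) = B_il (d · x) on the unit cube; because the field is linear, the volume integrand Σ_i ∂_i T_il = Σ_i B_il d_i is constant and a single face-centre evaluation per unit face integrates the surface term exactly:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))   # arbitrary constant matrix (assumption)
d = rng.standard_normal(3)        # arbitrary constant vector (assumption)

def T(x):
    """Linear tensor field: T_il(x) = B_il (d . x)."""
    return B * (d @ x)

# Volume integral over the unit cube of sum_i d_i T_il: the integrand is the
# constant vector with components sum_i B_il d_i, and the cube has volume 1.
vol = B.T @ d

# Surface integral sum_i T_il n_i over the 6 faces; midpoint rule per face
# is exact for a linear integrand (face area = 1).
surf = np.zeros(3)
for i in range(3):
    for sign, x_i in ((1.0, 1.0), (-1.0, 0.0)):
        centre = np.full(3, 0.5)
        centre[i] = x_i
        n = np.zeros(3)
        n[i] = sign
        surf += n @ T(centre)     # adds sum_i n_i T_il for this face

assert np.allclose(vol, surf)
```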
Due to Gauss’ theorem, the physical interpretation of the divergence of a tensor field is analogous to the divergence of a vector field. For instance, if a flow field v(r, t) is advecting a vector a(r, t) then the outer product J(r, t) = v(r, t)a(r, t) is a tensor field where [J]_ij(r, t) = v_i(r, t) a_j(r, t) describes how much of the j’th component of the vector a is transported in direction e_i. The divergence ∇ · J is then a vector where each component [∇ · J]_j corresponds to the accumulation of a_j in an infinitesimal volume due to the flow. In other words, if a represents a conserved quantity then we have a continuity equation for the vector field a:
∂a/∂t + ∇ · J = 0
Chapter 4
Tensor calculus
In the two preceding chapters we have developed the basic algebra for scalars, vectors and rank 2 tensors. We have seen that the notation of vectors and tensors comes in two flavours. The first notation insists on not referring to coordinates or components at all. This geometrically defined notation, called direct notation, is explicitly invariant to the choice of coordinate system and is the one adopted in the first part of chapters two and three (2.1-2.2, 3.1-3.3). The second notation, based on components and indices, is called tensor notation and is the one that naturally arises with a given coordinate system. Here, vectorial or tensorial relations are expressed algebraically.
Seemingly, a discrepancy exists between the two notations in that the latter appears to depend on the choice of coordinates. In section 4.2 we remove this discrepancy by learning how to transform the components of vectors and tensors when the coordinate system is changed. As we shall see, these transformation rules guarantee that one can unambiguously express vectorial or tensorial relations using component/tensor notation, because the expressions preserve their form upon a coordinate transformation. Indeed, we should already expect this to be the case, since we have not made any specific assumptions about the coordinate system in the propositions regarding relations between vector or tensor components already presented, except for this system being rectangular. It is possible to generalize the tensor notation to ensure that the formalism takes the same form in non-rectangular coordinate systems as well. This requirement is known as general covariance. In the following we will, however, restrict the discussion to rectangular coordinates.
Since tensor notation often requires knowledge of transformation rules between coordinate systems, one may validly ask why use it at all. First of all, in quantifying any physical system we will eventually have to give a numerical representation, which for vectors or tensors implies specifying the value of their components with respect to some basis. Secondly, in many physical problems it is natural to introduce tensors of rank higher than two¹. To insist on a direct notation for these objects becomes increasingly tedious and unnatural. In tensor notation no new notation needs to be defined, only those we have already seen (some in disguise), which is reviewed in section 4.3. It is therefore strongly recommended to become familiar with the tensor notation once and for all.
¹ The Levi-Civita symbol is an example of a (pseudo-)tensor of rank 3.
4.1 Tensor notation and Einstein’s summation rule
In tensor notation an index is either free or bound. A free index occurs exactly once in a term and has the meaning “for any component”. For instance, in the formula for the addition of two vectors in terms of their components, Eq. (2.13),
[a + b]_i = [a]_i + [b]_i ,
the index i is free. The same free indices must occur in each term. As in the above example, an equation with exactly one free index in each term expresses a vector identity, an expression with two free indices in each term expresses a tensor identity, etc.
A bound index (or a dummy index) refers to a summation index. For instance, the scalar product in an orthonormal basis, Eq. (2.18), reads
a · b = Σ_i a_i b_i .
Here, i is a bound index and can be renamed without changing the meaning of the expression. The situation where a dummy index appears exactly twice in a product occurs so often that it is convenient to introduce the convention that the summation symbol may be left out in this specific case. In particular, it occurs in any component representation of a “dot” or inner product. Hence we may write
a · b = Σ_i a_i b_i = a_i b_i   (i a bound index)
[T · a]_j = Σ_i T_ji a_i = T_ji a_i   (i a bound index, j a free index)
[T · S]_ij = Σ_k T_ik S_kj = T_ik S_kj   (i, j free indices, k a bound index)
T : S = Σ_ij T_ij S_ij = T_ij S_ij   (i, j bound indices)
(4.1)
Similarly, for the expansion of the components along the basis vectors, Eq. (2.10),
a = Σ_i a_i g_i = a_i g_i
and for the divergence of a vector field,
∇ · a = Σ_i ∂_i a_i = ∂_i a_i
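The bound/free index bookkeeping of Eq. (4.1) maps directly onto numpy's einsum notation, where repeated letters are summed and the letters after `->` are the free indices; the arrays below are arbitrary example data:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = rng.standard_normal((2, 3))
T, S = rng.standard_normal((2, 3, 3))

# Repeated (bound) indices are summed; free indices label the result.
assert np.isclose(np.einsum('i,i->', a, b), a @ b)            # a . b = a_i b_i
assert np.allclose(np.einsum('ji,i->j', T, a), T @ a)         # [T.a]_j = T_ji a_i
assert np.allclose(np.einsum('ik,kj->ij', T, S), T @ S)       # [T.S]_ij = T_ik S_kj
assert np.isclose(np.einsum('ij,ij->', T, S), np.sum(T * S))  # T : S = T_ij S_ij
```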
The convention of implicit summation, which is very convenient for tensor calculus, is due to Einstein and is referred to as Einstein’s summation rule. In this and the following chapters we shall adopt the implicit summation rule unless otherwise stated.
4.2 Orthonormal basis transformation
Let E and E' be two different orthonormal bases. The basis vectors of the primed basis may be resolved in the unprimed basis:
e'_j = a_ji e_i   (sum over i), (4.2)
where
a_ji =def e'_j · e_i (4.3)
represents the cosine of the angle between e'_j and e_i. The relation between the primed and unprimed components of any (proper) vector
v = v_i e_i = v'_j e'_j
can be obtained by dotting with e'_j or e_i:
v'_j = a_ji v_i (4.4)
v_i = a_ji v'_j (4.5)
Here, v'_j = [v]_E',j and v_i = [v]_E,i. Note that the two transformations differ in whether the sum is over the first or the second index of a_ij. The matrix version of Eq. (4.4) is
v' = A · v,   A = (a_ij), (4.6)
and of Eq. (4.5)
v = A^t · v' .
Using the same procedure, the primed and unprimed components of a tensor
T = T'_ik e'_i e'_k = T_jl e_j e_l ²
are related by
T'_ik = e'_i · T · e'_k = a_ij a_kl T_jl (4.7)
T_ik = e_i · T · e_k = a_ji a_lk T'_jl , (4.8)
T
ik
= e
i
· T· e
k
= a
ij
a
kl
T
jl
, (4.8)
where T

ik
= [T]
E

,ik
and T
ik
= [T]
E,ik
. Note that the transformation matrix A
is simply the two-point components of the identity tensor
A = [1]
E

E
² In keeping with Einstein’s summation rule this expression is shorthand notation for
T = Σ_ik T'_ik e'_i e'_k = Σ_jl T_jl e_j e_l
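The component transformations (4.4)-(4.8) are easy to exercise with a concrete orthogonal matrix; the rotation angle and the random components below are arbitrary example data:

```python
import numpy as np

# An orthogonal transformation matrix A = (a_ji), here a rotation about z.
th = 0.7
A = np.array([[np.cos(th),  np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(5)
v = rng.standard_normal(3)        # unprimed vector components
T = rng.standard_normal((3, 3))   # unprimed tensor components

vp = A @ v          # v'_j = a_ji v_i, Eq. (4.4)
Tp = A @ T @ A.T    # T'_ik = a_ij a_kl T_jl, Eq. (4.7)

assert np.allclose(A.T @ vp, v)              # inverse transform, Eq. (4.5)
assert np.allclose(A.T @ Tp @ A, T)          # inverse tensor transform
assert np.isclose(vp @ Tp @ vp, v @ T @ v)   # the scalar v.T.v is invariant
```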
4.2.1 Cartesian coordinate transformation
A particular case of an orthonormal basis transformation is the transformation between two cartesian coordinate systems, C = (O, R) and C' = (O', R'). Here, the form of the vector transformation, Eq. (4.4), can be directly translated to the coordinates themselves. To show this we employ the simple relation between coordinates and the position vector, Eq. (2.9), valid in any cartesian system. The displacement vector between any two points is given by ∆r = ∆x_i e_i. This vector – or its differential analogue dr = dx_i e_i – is a proper vector independent of the coordinate system. The following relation between the coordinate differentials dx_i and dx'_j must then be satisfied:
dr = dx_i e_i = dx'_j e'_j
Multiplying with e'_j we obtain
dx'_j = dx_i e_i · e'_j
or equivalently
∂x'_j/∂x_i = e'_j · e_i = a_ji , (4.9)
where a_ji is defined as in Eq. (4.3). Since the rhs is constant (constant basis vectors) the integration of Eq. (4.9) gives
x'_j = a_ji x_i + d_j . (4.10)
Here, d_j is the j’th coordinate of the origin, O, of the unprimed coordinate system as seen from the primed coordinate system, d = (x')_C'(O). In matrix notation Eq. (4.10) takes the form
x' = A · x + d,   A = (a_ij), (4.11)
which is identical to Eq. (4.6) apart from the optional displacement.
4.2.2 The orthogonal group
Considering Eq. (4.4) and Eq. (4.5) as a pair of matrix equations, we note that the transposed matrix A^t operates as the inverse of A:
A · A^t = 1, (4.12)
where 1 is the identity matrix, (1)_ij = δ_ij. In index notation,
(A · A^t)_ij = a_ik a_jk = δ_ij (4.13)
A matrix with the above property is called orthogonal, and the set of all orthogonal 3 × 3 matrices constitutes a continuous group³ called O(3). According to Eq. (4.12),
det(A · A^t) = det(A)² = 1,
³ Recall that a mathematical group (G, ∗) is a set G with a binary operator, ∗, that satisfies the following axioms:
so
det(A) = ±1
Transformations with determinant +1 cannot be continuously connected to transformations with determinant −1. Therefore, the set of transformations with determinant +1 itself forms a group, called SO(3), which represents the set of all rotations. By considering a pure reflection, R = (r_ij), in the origin of a cartesian coordinate system,
x'_i = −x_i = r_ij x_j ,   r_ij = −δ_ij ,
we see that det(R) = −1. Clearly, R · R = 1, so the set Z(2) = {1, R} also forms a group. Consequently, we may write O(3) = Z(2) ⊗ SO(3). In words, any orthogonal matrix can be decomposed into a pure rotation and an optional reflection. Note that any matrix with det = −1 will change the handedness of the coordinate system.
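These group-theoretic facts can be spot-checked numerically; the rotation angle is an arbitrary choice, the reflection is the inversion −1 described above:

```python
import numpy as np

th = 1.1
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])   # a rotation: element of SO(3)
P = -np.eye(3)                     # reflection in the origin, r_ij = -delta_ij

# All three are orthogonal: A . A^t = 1, Eq. (4.12); closure under products.
for M in (Rz, P, Rz @ P):
    assert np.allclose(M @ M.T, np.eye(3))

assert np.isclose(np.linalg.det(Rz), 1.0)        # rotation: det = +1
assert np.isclose(np.linalg.det(P), -1.0)        # reflection: det = -1
assert np.isclose(np.linalg.det(Rz @ P), -1.0)   # rotation times reflection
```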
4.2.3 Algebraic invariance
In view of the fact that the various vector and tensor operations (sections 2.1-2.3, 3.1-3.3) were defined without reference to a coordinate system, it is clear that all algebraic rules for computing sums, products, transposes, etc. of vectors and tensors (cf. section 3.4.1) will be unaffected by an orthogonal basis transformation. Thus, for example,
a · b = a'_i b'_i = a_j b_j
[a + b]_E',i = a'_i + b'_i
[T · c]_E',i = T'_ij c'_j
[T^t]_E',ij = T'_ji ,
(4.14)
where a'_i = [a]_E',i , a_j = [a]_E,j , T'_ji = [T]_E',ji etc.
Any property or relation between vectors and tensors which is expressed in
the same algebraic form in all coordinate systems has a geometrical meaning
independent of the coordinate system and is called an invariant property or
relation.
The invariance of the algebraic rules for vector and tensor operations can be verified directly by using the orthogonality property of the transformation matrix, Eq. (4.13). In particular, whenever a “dot” or inner product appears between two vectors, a vector and a tensor, or two tensors, we can pull together two transformation matrices and exploit Eq. (4.13). For instance, to verify the
1. Closure: ∀a, b ∈ G : a ∗ b ∈ G.
2. Associativity: ∀a, b, c ∈ G : (a ∗ b) ∗ c = a ∗ (b ∗ c).
3. Identity element: There exists an element e ∈ G so that ∀a : a ∗ e = e ∗ a = a.
4. Inverse element: For each a ∈ G there exists an inverse a^(−1) ∈ G such that a ∗ a^(−1) = a^(−1) ∗ a = e.
Orthogonal matrices form a group with matrix multiplication as the binary operator and the identity matrix as the identity element. Notice in particular that the composition or multiplication of two orthogonal transformations is orthogonal.
third invariance above, i.e. that the i’th component of the image u = T · c of T operating on c is always obtained as
u_i = T_ij c_j , (3.27)
irrespective of the chosen coordinate system, we can transform the components of u from one (unprimed) to another (primed) coordinate system and see if these equal the transformed component matrix of T times the transformed component triplet of c:
[T]_E',ij [c]_E',j = T'_ij c'_j = (a_ik a_jl T_kl)(a_jm c_m)   (transf. rules for components)
= a_ik T_kl (a_jl a_jm) c_m
= a_ik T_kl δ_lm c_m   (orthogonality, summing over j)
= a_ik (T_kl c_l)
= a_ik [u]_E,k = [u]_E',i   (transf. rules for components)
(4.15)
4.2.4 Active and passive transformation
Comparing the transformation of the components of a vector upon a basis transformation, Eq. (4.4), with the matrix representation of a tensor operating on a vector, Eq. (3.27), it is clear that these two operations are defined in the same manner algebraically. Since the first equation only expresses the change of components due to a change of basis (the vector itself remains unchanged), it is referred to as a passive transformation, as opposed to the second case which reflects an active transformation of the vector. Here, we shall show a simple algebraic relation between the two types of transformations. Let O represent an orthogonal tensor, i.e. a tensor for which
O^t · O = 1,
where 1 as usual denotes the identity tensor. In any orthonormal basis the matrix representation of this identity will be that of Eq. (4.12). Further, let a new basis system E' = {e'_1, e'_2, e'_3} be defined by
e'_i = O · e_i ,
where E = {e_1, e_2, e_3} is the old basis. We shall use the short-hand notation
E' = O(E)
Due to the orthogonality property of O, E' will be orthonormal iff E is.
The components of a vector v in the primed basis relate to the unprimed components as
v'_i = e'_i · v
     = (O · e_i) · v
     = [O · e_i]_E,j [v]_E,j   (scalar product wrt. E)
     = [O]_jk [e_i]_k v_j   (Eq. (3.27), all components wrt. E)
     = [O]_jk δ_ik v_j
     = [O]_ji v_j
     = [O^t]_ij v_j
     = (O^t · v)_i
(4.16)
This shows that the matrix A representing the basis transformation, Eq. (4.6), satisfies
A = [O^t]_E = [O^(−1)]_E .
Being explicit about the bases involved, we may write this identity as
[1]_E'E = [O^(−1)]_E ,   where E' = O(E), (4.17)
which is to say that the matrix representing a passive transformation from one basis to another, E → E', equals the matrix representation wrt. E of the tensor mapping the new basis vectors onto the old ones, O^(−1)(E') = E.
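The relation (4.16) between the active tensor O and the passive transformation matrix can be sketched numerically; the rotation and test vector are made-up examples:

```python
import numpy as np

# Orthogonal tensor O (here a rotation) and the basis it generates.
th = 0.4
O = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
E = np.eye(3)
Ep = np.array([O @ E[i] for i in range(3)])   # e'_i = O . e_i

v = np.array([1.0, -2.0, 0.5])
vp = np.array([Ep[i] @ v for i in range(3)])  # passive: v'_i = e'_i . v

# Eq. (4.16): the passive transformation matrix is O^t = O^(-1).
assert np.allclose(vp, O.T @ v)
```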
4.2.5 Summary on scalars, vectors and tensors
To summarize the present section, we may give an algebraic definition of scalars, vectors and tensors by referring to the transformation properties of their components upon a change of coordinates. For a cartesian coordinate transformation, Eq. (4.10), the following transformation properties of scalars, vectors and tensors apply.
Scalars
A quantity m is a scalar if it has the same value in every coordinate system. Consequently it transforms according to the rule
m' = m
Vectors
A triplet of real numbers, v, is a vector if the components transform according to
v'_j = a_ji v_i
The matrix notation of the transformation rule is
v' = A · v
Rank 2 tensors
A rank two tensor is a 3 × 3 matrix of real numbers, T, which transforms as the outer product of two vectors:
T'_ij = a_ik a_jl T_kl
Again it should be noticed that it is the transformation rule which guarantees that the nine quantities collected in a matrix form a tensor. The matrix notation of this transformation is
T' = A · T · A^t
4.3 Tensors of any rank
4.3.1 Introduction
Tensors of rank higher than two are obtained by a straightforward iteration of the definition of the outer product. For instance, the outer product between a dyad a_1 a_2 and a vector a_3 can be defined as the linear operator, a_1 a_2 a_3, that maps any vector c into a tensor according to the rule
a_1 a_2 a_3 (c) =def a_1 a_2 (a_3 · c) (4.18)
In words, for each c the object a_1 a_2 a_3 associates a tensor obtained by multiplying the dyad a_1 a_2 with the scalar a_3 · c. Definition Eq. (4.18) preserves the invariance with respect to the choice of coordinates. Addition of triple products and scalar multiplication are defined analogously to the algebra for dyads.
In section 3.4 we demonstrated that a basis for second order tensors is obtained by forming all 3 × 3 dyad combinations, e_i e_j, between orthonormal basis vectors. Similarly, one can demonstrate that a basis for all linear operators, T_3, which map a vector into a second rank tensor is obtained by forming the 3³ combinations of the direct products e_i e_j e_k. Consequently, for any T_3 there exists a unique set of quantities, T_ijk, such that
T_3 = T_ijk e_i e_j e_k   (NB! Einstein summation convention)
Not surprisingly, these quantities, T_ijk, are called the components of T_3, and since three indices are needed, T_3 is called a third rank tensor. Upon a coordinate transformation we would need to transform the components of each individual vector in the triple product. Expressing the original basis vectors in terms of the new basis,
e_i = a_ji e'_j ,
one obtains
T_3 = T'_lmn e'_l e'_m e'_n = T_ijk e_i e_j e_k = T_ijk (a_li e'_l)(a_mj e'_m)(a_nk e'_n)
    = a_li a_mj a_nk T_ijk e'_l e'_m e'_n
(4.19)
This shows that upon an orthogonal transformation the components of a third rank tensor transform as
T'_lmn = a_li a_mj a_nk T_ijk
The inverse transformation follows from expressing the new basis vectors in terms of the old ones, e'_j = a_ji e_i, in Eq. (4.19):
T_ijk = a_li a_mj a_nk T'_lmn .
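The rank-3 transformation and its inverse are one-liners in einsum notation; the random tensor and the orthogonal matrix (built here from a QR decomposition, an arbitrary choice) are made-up example data:

```python
import numpy as np

rng = np.random.default_rng(6)
T3 = rng.standard_normal((3, 3, 3))   # components of a rank-3 tensor

# Any orthogonal matrix will do; QR of a random matrix yields one.
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Forward transform: T'_lmn = a_li a_mj a_nk T_ijk
T3p = np.einsum('li,mj,nk,ijk->lmn', A, A, A, T3)

# Inverse: summing on the first index of each a-factor recovers T.
T3back = np.einsum('li,mj,nk,lmn->ijk', A, A, A, T3p)
assert np.allclose(T3back, T3)
```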
We can continue this procedure, defining the outer product of four vectors in terms of triple products, cf. Eq. (4.18), and introducing addition and scalar multiplication of these objects as well.
In keeping with previous notation, the set of all components is written as
T = (T_ijk).
Here, ijk are dummy indices and the bracket around T_ijk indicates the set of all of these:
(T_ijk) = { T_ijk | i = 1, 2, 3 ; j = 1, 2, 3 ; k = 1, 2, 3 }.
Also, the tensor is obtained from the components as
T = (T)_E = ( (T_ijk) )_E =def T_ijk e_i e_j e_k
4.3.2 Definition
In general, a tensor T of rank r is defined as a set of 3^r quantities, T_i1i2···ir ⁴, that upon an orthogonal change of coordinates, Eq. (4.11), transform as the outer product of r vectors:
T'_j1j2···jr = a_j1i1 a_j2i2 · · · a_jrir T_i1i2···ir . (4.20)
Accordingly, a vector is a rank 1 tensor and a scalar is a rank 0 tensor.
It is common to refer to both T and the set (T_i1i2···ir) as a tensor, although the latter quantities are, strictly speaking, only the components of the tensor with respect to some particular basis E. However, the distinction between a tensor and its components becomes unimportant, provided we keep in mind that T'_j1j2···jr in Eq. (4.20) are components of the same tensor, only with respect to a different basis E'. It is precisely the transformation rule, Eq. (4.20), that ensures that the two sets of components represent the same tensor.
For completeness, the inverse transformation is obtained by summing on the first indices of the transformation matrix elements instead of the second ones, cf. section 4.2:
T_i1i2···ir = a_j1i1 a_j2i2 · · · a_jrir T'_j1j2···jr .
⁴ The notation T_i1i2···ir may be confusing at first sight. However, i_1, i_2, · · ·, i_r are simply r independent indices, each taking values in the range {1, 2, 3}. Choosing a second rank tensor as an example and comparing with previous notation, T_ij, it simply means that i_1 = i, i_2 = j. For any particular component of the tensor, say T_31, we would have with the old notation i = 3, j = 1 and in the new notation i_1 = 3, i_2 = 1.
4.3.3 Basic algebra
Tensors can be combined in various ways to form new tensors. Indeed, we have already seen several examples⁵. The upshot of the rules below is that tensor operations that look “natural” are permissible, in the sense that if one starts with tensors the result will again be a tensor.
Addition
Tensors of the same rank may be added together. Thus
C_i1i2···ir = A_i1i2···ir + B_i1i2···ir
is a tensor of rank r if A and B are tensors of rank r. This follows directly from the linearity of Eq. (4.20),
C'_j1j2···jr = A'_j1j2···jr + B'_j1j2···jr
            = a_j1i1 a_j2i2 · · · a_jrir A_i1i2···ir + a_j1i1 a_j2i2 · · · a_jrir B_i1i2···ir
            = a_j1i1 a_j2i2 · · · a_jrir (A_i1i2···ir + B_i1i2···ir)
            = a_j1i1 a_j2i2 · · · a_jrir C_i1i2···ir .
Consequently, the 3^r quantities (C_i1i2···ir) transform as the components of a rank r tensor when both (A_i1i2···ir) and (B_i1i2···ir) do (we made use of this in the second line of the demonstration above). It follows directly that subtraction of two equally ranked tensors is also a tensor of the same rank.
Outer product
The product of two tensors is a tensor whose rank is the sum of the ranks of the given tensors. This product, which involves ordinary multiplication of the components of the tensors, is called the outer product. It is the natural generalization of the outer product of two vectors (two tensors of rank 1) defined in section 3.2. For example,
C_i1i2···ir j1j2···js = A_i1i2···ir B_j1j2···js (4.21)
is a tensor of rank r + s if A_i1i2···ir is a tensor of rank r and B_j1j2···js is a tensor of rank s. Note that this rule is consistent with the cartesian components of a dyad c = ab in the specific case where A = a and B = b are vectors. Also, a tensor may be multiplied with a scalar m = B (a tensor of rank 0) according to the same rule:
C_i1i2···ir = m A_i1i2···ir
Note that not every tensor can be written as a product of two tensors of lower rank. We have already emphasized this point in the case of second rank tensors, which cannot in general be expressed as a single dyad. For this reason division of tensors is not always possible.
⁵ The composition, T · S, of two second rank tensors, T and S, forms a new second rank tensor. The operation of a second rank tensor on a vector (rank 1 tensor) gives a new vector. The addition of two second rank tensors gives a new second rank tensor, etc.
Permutation
It is permitted to exchange or permute indices in a tensor and still remain with a tensor. For instance, if we define a new set of 3^r quantities C_i1i2···ir from a tensor A_i1i2···ir by permuting two arbitrarily chosen index numbers α and β > α,
C_i1i2···iα−1 iα iα+1···iβ−1 iβ iβ+1···ir = A_i1i2···iα−1 iβ iα+1···iβ−1 iα iβ+1···ir ,
this new set will also be a tensor. Its tensorial property follows from the symmetry among the a-factors in Eq. (4.20). Tensors obtained from permuting indices are called isomers. Tensors of rank less than two (i.e. scalars and vectors) have no isomers. A tensor of rank two has precisely one isomer, the transposed one, obtained by setting α = 1 and β = 2 in the above notation:
(T^t)_i1i2 = T_i2i1
Contraction
The most important rule in tensor algebra is the contraction rule, which states
that if two indices in a tensor of rank r + 2 are put equal and summed over,
then the result is again a tensor, with rank r. Because of the permutation rule
discussed above we only have to demonstrate it for the first two indices. The
contraction rule states that if A is a tensor of rank r + 2 then

B_{j_1 j_2···j_r} = A_{i i j_1 j_2···j_r}    (NB! A_{i i j_1 j_2···j_r} = Σ_{i=1}^{3} A_{i i j_1 j_2···j_r})

is also a tensor, of rank r. The proof follows:

B'_{j_1 j_2···j_r} = A'_{i i j_1 j_2···j_r}
  = a_{ik} a_{il} a_{j_1 m_1} a_{j_2 m_2}···a_{j_r m_r} A_{k l m_1 m_2···m_r}
  = δ_{kl} a_{j_1 m_1} a_{j_2 m_2}···a_{j_r m_r} A_{k l m_1 m_2···m_r}
  = a_{j_1 m_1} a_{j_2 m_2}···a_{j_r m_r} A_{k k m_1 m_2···m_r}
  = a_{j_1 m_1} a_{j_2 m_2}···a_{j_r m_r} B_{m_1 m_2···m_r}

Consequently, B_{j_1···j_r} does transform as a tensor of rank r, as claimed.
Inner product
By the process of forming the outer product of two tensors followed by a con-
traction, we obtain a new tensor called an inner product of the given tensors. If
the rank of the two given tensors are respectively r and s then the rank of the
new tensor will be r + s − 2. Its tensorial nature follows from the fact that an
outer product of two tensors is a tensor and the contraction of two indices in a
tensor gives a new tensor. In the previous chapters we have reserved the “dot”-
symbol for precisely this operation. For instance, a scalar product between two
vectors, a and b is a inner product of two rank 1 tensors giving a 0 rank tensor
(scalar):
[ab]
ij
= a
i
b
j
(Outer product)
a · b = a
i
b
i
(Contraction of outer product, setting j = i)
A rank two tensor, T, operating on a vector v is an inner product between a
rank two and a rank one tensor, giving a rank 1 tensor:

[Tv]_{ijk} = T_{ij} v_k    (Outer product)
[T · v]_i = T_{ij} v_j    (Contraction of outer product, setting k = j)

It is left as an exercise to see that the "dot"-operation defined in chapter 2 for
two second rank tensors, T · S, is an inner product.
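A small numerical sketch of the second example, with an arbitrarily chosen T and v, confirms that contracting the outer product [Tv]_{ijk} over k = j reproduces the familiar matrix-vector product:

```python
# T·v as contraction of the outer product [Tv]_ijk = T_ij v_k, setting k = j
T = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [4.0, 0.0, 1.0]]
v = [1.0, -1.0, 2.0]

# Rank-3 outer product, then contract the second and third indices
outer = [[[T[i][j] * v[k] for k in range(3)] for j in range(3)] for i in range(3)]
Tv = [sum(outer[i][j][j] for j in range(3)) for i in range(3)]

# Same result as the usual matrix-vector product T_ij v_j
assert Tv == [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]
```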
In general, any two indices in the outer product between two tensors can
be contracted to define an inner product. For example,

C_{i_1 i_2 i_4 i_5 j_1 j_3} = A_{i_1 i_2 k i_4 i_5} B_{j_1 k j_3}

is a tensor of rank 6 obtained as an inner product between a tensor A_{i_1 i_2 i_3 i_4 i_5} of
rank 5 and a tensor B_{j_1 j_2 j_3} of rank 3 by setting i_3 = j_2. Fortunately, tensors of
rank higher than 4 are an oddity in the realm of physics.
Summary
In summary, all tensorial algebra of any rank can basically be boiled down to
these few operations: addition, outer product, permutation, contraction and
inner product. Only one more operation is essential to know, namely that of
diﬀerentiation.
4.3.4 Diﬀerentiation
Let us consider a tensor A of rank r which is a differentiable function of all
of the elements of another tensor B of rank s. The direct notation for this
situation would be

A = A(B)    (Rank(A) = r, Rank(B) = s)

In effect, it implies the existence of 3^r functions, A_{i_1···i_r}, each differentiable in
each of its 3^s arguments B = (B_{j_1 j_2···j_s}),

A_{i_1 i_2···i_r} = A_{i_1 i_2···i_r}((B_{j_1 j_2···j_s}))

Then the set of partial derivatives

C_{i_1 i_2···i_r, j_1 j_2···j_s} = ∂A_{i_1 i_2···i_r} / ∂B_{j_1 j_2···j_s}    (4.22)

is itself a tensor of rank r + s. As for ordinary derivatives of functions, the
components C_{i_1 i_2···i_r, j_1 j_2···j_s} will in general also be functions of B,

C_{i_1 i_2···i_r, j_1 j_2···j_s} = C_{i_1 i_2···i_r, j_1 j_2···j_s}((B_{j_1 j_2···j_s}))

The direct notation for taking partial derivatives with respect to a set of tensor
components is

C(B) = (∂A/∂B)(B)
To demonstrate that C is a tensor, and so to justify this direct notation in the
first place, we will have to look at the transformation properties of its elements.
Indeed, we have

C'_{i_1 i_2···i_r, j_1 j_2···j_s}(B') = (∂A'_{i_1 i_2···i_r} / ∂B'_{j_1 j_2···j_s})(B')
  = (∂B_{l_1 l_2···l_s} / ∂B'_{j_1 j_2···j_s}) (∂(a_{i_1 k_1} a_{i_2 k_2}···a_{i_r k_r} A_{k_1 k_2···k_r}) / ∂B_{l_1 l_2···l_s})(B)
  = a_{i_1 k_1} a_{i_2 k_2}···a_{i_r k_r} a_{j_1 l_1} a_{j_2 l_2}···a_{j_s l_s} C_{k_1 k_2···k_r, l_1 l_2···l_s}(B)
    (4.23)

In the second step we have used the chain rule for differentiation. The last step
follows from

B_{l_1 l_2···l_s} = a_{j_1 l_1} a_{j_2 l_2}···a_{j_s l_s} B'_{j_1 j_2···j_s}.

It is essential that all components of B are independent and can vary freely
without constraints.
From this rule follows the quotient rule, which states that if the tensor
A_{i_1 i_2···i_r} is a linear function of the unconstrained tensor B_{j_1 j_2···j_s} through the
relation

A_{i_1 i_2···i_r} = C_{i_1 i_2···i_r j_1 j_2···j_s} B_{j_1 j_2···j_s} + D_{i_1 i_2···i_r}

then C is a tensor of rank r + s and consequently D must also be a tensor of
rank r.
4.4 Reﬂection and pseudotensors
Applying the general tensor transformation formula, Eq. (4.20), to a reflection
through the origin of a cartesian coordinate system, x' = −x, we find

T'_{i_1···i_r} = (−1)^r T_{i_1···i_r}    (4.24)
There are quantities, notably the Levi-Civita symbol, that do not obey this
rule, but acquire an extra minus sign. Such quantities are called pseudo-tensors
in contradistinction to ordinary or proper tensors. Consequently, a set of 3^r
quantities, P_{i_1 i_2···i_r}, is called a pseudo-tensor if it transforms according to the
rule

P'_{i_1 i_2···i_r} = det(A) a_{i_1 j_1} a_{i_2 j_2}···a_{i_r j_r} P_{j_1 j_2···j_r}    Pseudo-tensor, (4.25)

upon the coordinate transformation, Eq. (4.11). For a pure reflection, a_{ij} = −δ_{ij},
so

P'_{i_1 i_2···i_r} = −(−1)^r P_{i_1 i_2···i_r}.
By comparing Eq. (4.25) with Eq. (4.20) one observes that the difference be-
tween a tensor and a pseudo-tensor only appears if the transformation includes a
reflection, det(A) = −1. A proper tensor can be considered as a real geometrical
object, independent of the coordinate system. A proper vector is an "arrow"
in space for which the components change sign upon a reflection of the axes
of the coordinate system, cf. Eq. (4.24). Direct products of ordinary vectors
are ordinary tensors, changing sign once for each index. An even number of
direct products of pseudo-tensors will make an ordinary tensor, whereas an odd
number of direct products of pseudo-tensors leads to another pseudo-tensor.
4.4.1 Levi-Civita symbol
If we apply the transformation rule, Eq. (4.20), to the Levi-Civita symbol, ε_{ijk},
we obtain

ε'_{ijk} = a_{il} a_{jm} a_{kn} ε_{lmn}.    (4.26)
This expression is antisymmetric in all indices ijk^6. Consequently, it must be
proportional to ε_{ijk}. To get the constant of proportionality, we should observe
the following connection between the determinant of a matrix and the Levi-
Civita symbol:

det(A) = ε_{lmn} a_{1l} a_{2m} a_{3n},    A = (a_{ij}).    (4.27)

Taking ijk = 123 in Eq. (4.26) we get

ε'_{123} = det(A) = det(A) ε_{123},
since ε_{123} = +1. Thus the constant of proportionality is det(A), and ε'_{ijk} =
det(A) ε_{ijk}. If ε'_{ijk} is to be identical to ε_{ijk}, we must multiply the transforma-
tion by an extra det(A) to account for the possibility that the transformation
involves a reflection, where det(A) = −1. The correct transformation law must
therefore be

ε'_{ijk} = det(A) a_{il} a_{jm} a_{kn} ε_{lmn},

whence the Levi-Civita symbol is a third rank pseudo-tensor. The transforma-
tion rule for pseudo-tensors leaves the Levi-Civita symbol invariant under all
orthogonal transformations, ε'_{ijk} = ε_{ijk}.^7
The most important application of the Levi-Civita symbol is in the algebraic
definition of the cross product, c = a × b, between two ordinary vectors:

c_i = [a × b]_i = ε_{ijk} a_j b_k.
Since the rhs. is a double inner product between a pseudo-tensor of rank three
and two ordinary vectors, the lhs. will be a pseudo-vector. Indeed, the direc-
tion of c depends on the handedness of the coordinate system, as previously
mentioned. An equivalent manifestation of its pseudo-vectorial nature is that c
does not change its direction upon an active reﬂection.
6
For instance,

ε'_{ikj} = a_{il} a_{km} a_{jn} ε_{lmn} = a_{il} a_{kn} a_{jm} ε_{lnm} = a_{il} a_{jm} a_{kn} ε_{lnm} = −a_{il} a_{jm} a_{kn} ε_{lmn} = −ε'_{ijk}

The second identity follows from the fact that n and m are bound indices and can therefore
be renamed into one another.
7
We should not be surprised by the fact that ε_{ijk} is unaffected by coordinate transforma-
tions, since its definition makes no distinction between the 1, 2 and 3 directions.
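The pseudo-vector nature of the cross product is easy to verify numerically. The sketch below (pure Python, with indices running 0-2 instead of 1-3 as an implementation choice) builds c_i = ε_{ijk} a_j b_k and checks that c is unchanged when both polar vectors are actively reflected:

```python
# Cross product via the Levi-Civita symbol, c_i = eps_ijk a_j b_k
def eps(i, j, k):
    # Levi-Civita symbol: +1 for even permutations of (0,1,2), -1 for odd, else 0
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def cross(a, b):
    return [sum(eps(i, j, k) * a[j] * b[k]
                for j in range(3) for k in range(3)) for i in range(3)]

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
c = cross(a, b)
assert c == [2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4]  # the familiar formula

# Reflect both polar vectors; the cross product does NOT change sign
c_reflected = cross([-x for x in a], [-x for x in b])
assert c_reflected == c
```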
4.4.2 Manipulation of the Levi-Civita symbol
Two important relations for the Levi-Civita symbol are very useful for deriving
the myriad of known identities between various vector products. The first is a
formula reducing the product of two Levi-Civita symbols to Kronecker deltas:

ε_{ijk} ε_{lmn} = det [ [δ_{il}, δ_{im}, δ_{in}], [δ_{jl}, δ_{jm}, δ_{jn}], [δ_{kl}, δ_{km}, δ_{kn}] ]    (4.28)
which after calculating the determinant becomes

ε_{ijk} ε_{lmn} = δ_{il} δ_{jm} δ_{kn} + δ_{im} δ_{jn} δ_{kl} + δ_{in} δ_{jl} δ_{km} − δ_{in} δ_{jm} δ_{kl} − δ_{jn} δ_{km} δ_{il} − δ_{kn} δ_{im} δ_{jl}    (4.29)
Due to this relation an arbitrary tensor expression may be reduced to contain
zero ε-symbols, whereas a pseudo-tensor expression may be reduced to contain
only one such symbol. Every time two ε's meet in an expression they may be
reduced away. This is the secret behind all complicated formulas involving
vector products. From Eq. (4.29) we may deduce a few more expressions by
contraction of indices:

ε_{ijk} ε_{lmk} = δ_{il} δ_{jm} − δ_{im} δ_{jl}
ε_{ijk} ε_{ljk} = 2 δ_{il}
ε_{ijk} ε_{ijk} = 6
    (4.30)
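The identities (4.29) and (4.30) can be verified by brute force over all index values, e.g.:

```python
# Numerical verification of the contracted Levi-Civita identities (4.30)
def eps(i, j, k):
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def delta(i, j):
    return 1 if i == j else 0

R = range(3)

# eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl
for i in R:
    for j in R:
        for l in R:
            for m in R:
                lhs = sum(eps(i, j, k) * eps(l, m, k) for k in R)
                assert lhs == delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)

# eps_ijk eps_ljk = 2 delta_il
for i in R:
    for l in R:
        assert sum(eps(i, j, k) * eps(l, j, k) for j in R for k in R) == 2 * delta(i, l)

# eps_ijk eps_ijk = 6
assert sum(eps(i, j, k) ** 2 for i in R for j in R for k in R) == 6
```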
Another important relation for the ε-symbol follows from the observation that
it is impossible to construct a non-trivial totally antisymmetric symbol with
more than three indices. The reason is that the antisymmetry implies that any
component with two equal indices must vanish. Hence a non-vanishing
component must have all indices different, and since there are only three possible
values for an index this is impossible. Consequently, all components of a totally
antisymmetric symbol with four indices must vanish. From this follows the rule

a_i ε_{jkl} − a_j ε_{ikl} − a_k ε_{jil} − a_l ε_{jki} = 0,    (4.31)

because the lhs is antisymmetric in four indices.
Derivation of Eq. (4.28)
To derive Eq. (4.28) one uses two well-known properties of the determinant of
a matrix. First, the determinant of a product of matrices equals the product of
the determinants of the individual matrices:

det(M · N) = det(M) det(N)

Secondly, the determinant of the transpose, N^t, of a matrix, N, equals the
determinant of the matrix itself:

det(N^t) = det(N)
Set

M = [ [a_1, a_2, a_3], [b_1, b_2, b_3], [c_1, c_2, c_3] ],    N = [ [x_1, x_2, x_3], [y_1, y_2, y_3], [z_1, z_2, z_3] ],

then the matrix product M · N^t becomes

M · N^t = [ [a·x, a·y, a·z], [b·x, b·y, b·z], [c·x, c·y, c·z] ]    (4.32)
Also, det(M) = ε_{ijk} a_i b_j c_k and det(N) = ε_{lmn} x_l y_m z_n according to Eq. (2.24).
Consequently,

det(M · N^t) = det(M) det(N) = ε_{ijk} ε_{lmn} a_i b_j c_k x_l y_m z_n,

which together with Eq. (4.32) gives

ε_{ijk} ε_{lmn} a_i b_j c_k x_l y_m z_n = det [ [a·x, a·y, a·z], [b·x, b·y, b·z], [c·x, c·y, c·z] ].    (4.33)
Setting a = e_i, b = e_j, c = e_k, x = e_l, y = e_m, z = e_n, we have for the lhs of
Eq. (4.33)

Σ_{i'j'k'} Σ_{l'm'n'} ε_{i'j'k'} ε_{l'm'n'} a_{i'} b_{j'} c_{k'} x_{l'} y_{m'} z_{n'}
  = Σ_{i'j'k'} Σ_{l'm'n'} ε_{i'j'k'} ε_{l'm'n'} δ_{i'i} δ_{j'j} δ_{k'k} δ_{l'l} δ_{m'm} δ_{n'n}
  = ε_{ijk} ε_{lmn},
    (4.34)

where we for pedagogical reasons have reinserted the involved sums, denoting
summation indices with a prime. The above expression equals the lhs of Eq.
(4.28). For the rhs of Eq. (4.33) we also recover the rhs of Eq. (4.28) by noting
that each scalar product indeed becomes a Kronecker delta, i.e. a·x = e_i · e_l = δ_{il}
etc. This proves Eq. (4.28).
4.5 Tensor fields of any rank
All that has been said about transformation properties of tensors and pseudo-
tensors carries over directly to tensor fields and pseudo-tensor
fields. One only needs to remember that all the (pseudo-)tensor components
are now functions of the coordinates. To be specific, upon the cartesian coor-
dinate transformation, Eq. (4.11), the components of a tensor field T of rank r
transform as

T'_{i_1 i_2···i_r}(x') = a_{i_1 j_1} a_{i_2 j_2}···a_{i_r j_r} T_{j_1 j_2···j_r}(x)    (4.35)

and the components of a pseudo-tensor field P of rank r transform as

P'_{i_1 i_2···i_r}(x') = det(A) a_{i_1 j_1} a_{i_2 j_2}···a_{i_r j_r} P_{j_1 j_2···j_r}(x)    (4.36)
The same transformation rules also hold true for a transformation between any
two orthonormal bases, for instance from a rectangular to a spherical basis.
One must bear in mind, however, that in this case the transformation matrix
elements, a_{ij}, will themselves be functions of the position, a_{ij} = a_{ij}(x).
A tensor field is just a particular realization of the more general case con-
sidered in section 4.3.4, with a tensor A being a function of another tensor B,
A = A(B). Here, B = r. Therefore, we can apply Eq. (4.22) to demonstrate
that the derivative of a tensor field is another tensor field of one higher rank. Al-
though r is an improper vector, cf. section 2.3, C = ∂T/∂r will still be a proper
tensor. The reason is that in deriving the tensorial nature of C, Eq. (4.23), we
have used the tensorial nature of B only to show that

∂B_{l_1 l_2···l_s} / ∂B'_{j_1 j_2···j_s} = a_{j_1 l_1} a_{j_2 l_2}···a_{j_s l_s}

However, this also holds true for any improper tensor in cartesian coordinates.
Specifically, for the position vector we have [r]_{R,l} = x_l, [r]_{R',j} = x'_j and

∂x_l / ∂x'_j = a_{jl}.
Consequently, if T is a tensor field of rank r, T(r) = (T_{i_1 i_2···i_r}(r))_R, then the
operation

∂T/∂r = ∇T

gives a tensor field of rank r + 1 with the components

[∇T]_{i_1 i_2···i_r, j}(r) = (∂_j T_{i_1 i_2···i_r})(r).

Specifically, ∇φ is a vector field when T = φ is a scalar field, cf. section 2.6.4,
and ∇a is a rank two tensor field when T = a is a vector, cf. section 3.5.^8
Divergence
The fact that spatial derivatives of a tensor always lead to a new tensor of one
higher rank makes it easy to deduce the type of object resulting from various
vector operations. For instance, if a(x) is a vector field then ∇a is a tensor field
of rank two. By contracting the two indices we therefore obtain a scalar, cf.
section 4.3.2:

[∇a]_{i,i} = ∂_i a_i,    a scalar quantity.

Thus, the divergence of a vector field is a scalar.
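For a linear vector field a_i(x) = M_{ij} x_j (an arbitrary illustrative choice), the divergence ∂_i a_i = M_{ii} is simply the trace of M, so its scalar nature can be checked directly: rotating the components of M as a rank-2 tensor leaves the trace unchanged.

```python
import math

# A linear vector field a_i(x) = M_ij x_j has divergence d_i a_i = M_ii (the trace)
M = [[1.0, 2.0, 0.0], [3.0, -1.0, 4.0], [0.0, 5.0, 2.0]]
div = sum(M[i][i] for i in range(3))

# Rotate the components as a rank-2 tensor, M'_ij = a_ik a_jl M_kl,
# using a passive rotation about the z-axis
t = 0.3
c, s = math.cos(t), math.sin(t)
a = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]
Mp = [[sum(a[i][k] * a[j][l] * M[k][l] for k in range(3) for l in range(3))
       for j in range(3)] for i in range(3)]
div_p = sum(Mp[i][i] for i in range(3))

assert abs(div - div_p) < 1e-12  # the divergence is a scalar
```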
For a rank two tensor, T, ∇T is a rank 3 tensor, and we can perform two
different contractions, a_j = ∂_i T_{ij} and b_i = ∂_j T_{ij}, yielding the components of
two different vectors a and b. Only if T is symmetric will a = b. The direct
notations for the two operations are ∇ · T and ∇ · T^t, respectively.
8
Note that the convention in tensor notation is to place the index of the spatial derivative
at the end of the tensor components, in conflict with the natural convention arising from the
direct notation, which for, say, a vector a reads [∇a]_{ij} = ∇_i a_j. However, in tensor notation
one always explicitly writes the index to be summed over, so no ambiguities arise in practice.
Curl
The tensor notation for the curl operator ∇ × a on a vector field a involves the
Levi-Civita symbol:

[∇ × a]_i = ε_{ijk} ∂_j a_k.

If a is a polar vector then the rhs will be a double inner product between a
pseudo-tensor of rank 3 and two polar vectors, yielding an axial (or pseudo)
vector field.
Chapter 5
Invariance and symmetries

Contents
1 Physical space 1.1 Coordinate systems . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Distances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Symmetries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Scalars and vectors 2.1 Deﬁnitions . . . . . . . . . . . . . . . . . . . . 2.2 Basic vector algebra . . . . . . . . . . . . . . 2.2.1 Scalar product . . . . . . . . . . . . . 2.2.2 Cross product . . . . . . . . . . . . . . 2.3 Coordinate systems and bases . . . . . . . . . 2.3.1 Components and notation . . . . . . . 2.3.2 Triplet algebra . . . . . . . . . . . . . 2.4 Orthonormal bases . . . . . . . . . . . . . . . 2.4.1 Scalar product . . . . . . . . . . . . . 2.4.2 Cross product . . . . . . . . . . . . . . 2.5 Ordinary derivatives and integrals of vectors . 2.5.1 Derivatives . . . . . . . . . . . . . . . 2.5.2 Integrals . . . . . . . . . . . . . . . . . 2.6 Fields . . . . . . . . . . . . . . . . . . . . . . 2.6.1 Deﬁnition . . . . . . . . . . . . . . . . 2.6.2 Partial derivatives . . . . . . . . . . . 2.6.3 Diﬀerentials . . . . . . . . . . . . . . . 2.6.4 Gradient, divergence and curl . . . . . 2.6.5 Line, surface and volume integrals . . 2.6.6 Integral theorems . . . . . . . . . . . . 2.7 Curvilinear coordinates . . . . . . . . . . . . 2.7.1 Cylindrical coordinates . . . . . . . . 2.7.2 Spherical coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 3 3 4 5 5 5 6 7 7 9 10 11 11 12 13 14 14 15 15 15 16 17 20 24 26 27 28

3 Tensors 30 3.1 Deﬁnition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.2 Outer product . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.3 Basic tensor algebra . . . . . . . . . . . . . . . . . . . . . . . . . 31

1

3.4

3.5

3.3.1 Transposed tensors . . . . . . . . . 3.3.2 Contraction . . . . . . . . . . . . . 3.3.3 Special tensors . . . . . . . . . . . Tensor components in orthonormal bases . 3.4.1 Matrix algebra . . . . . . . . . . . 3.4.2 Two-point components . . . . . . . Tensor ﬁelds . . . . . . . . . . . . . . . . . 3.5.1 Gradient, divergence and curl . . . 3.5.2 Integral theorems . . . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .

32 33 33 34 35 38 38 39 40 42 43 44 45 45 46 47 48 49 49 50 51 53 54 55 56 57 60

4 Tensor calculus 4.1 Tensor notation and Einsteins summation rule . 4.2 Orthonormal basis transformation . . . . . . . . 4.2.1 Cartesian coordinate transformation . . . 4.2.2 The orthogonal group . . . . . . . . . . . 4.2.3 Algebraic invariance . . . . . . . . . . . . 4.2.4 Active and passive transformation . . . . 4.2.5 Summary on scalars, vectors and tensors . 4.3 Tensors of any rank . . . . . . . . . . . . . . . . 4.3.1 Introduction . . . . . . . . . . . . . . . . 4.3.2 Deﬁnition . . . . . . . . . . . . . . . . . . 4.3.3 Basic algebra . . . . . . . . . . . . . . . . 4.3.4 Diﬀerentiation . . . . . . . . . . . . . . . 4.4 Reﬂection and pseudotensors . . . . . . . . . . . 4.4.1 Levi-Civita symbol . . . . . . . . . . . . . 4.4.2 Manipulation of the Levi-Civita symbol . 4.5 Tensor ﬁelds of any rank . . . . . . . . . . . . . . 5 Invariance and symmetries

2

This functional relation can be expressed as x = x (x) (1.to locate a point.Chapter 1 Physical space 1. P . in space. x2 . we use an underline simply as a short hand notation for a triplet of quantities. x2 . Here. which is a set of rules for ascribing numbers to physical objects. Distances 3 . (x)C (P ) = (x1 (P ). because both systems are assumed to cover the same space. x3 ) (1. x2 (P ). These numbers are called coordinates of a point. x3 (P ))C in a diﬀerent coordinate system C . called the position of the point. and the reference frame for the coordinates is called the coordinate system C. 1. x3 ) = x3 (x1 . The coordinates of P can conveniently be collected into a a triplet. since diﬀerence coordinate systems are just diﬀerent ways of representing the same physical space in terms of real numbers. The inverse transformation must also exist. The physical space has three dimensions. x2 . x3 . x3 ) = x2 (x1 . x2 (P ).phenomena it is necessary to choose a frame of reference. x3 (P ))C .1 Coordinate systems In order to consider mechanical -or any other physical.2 . which means that it takes exactly three numbers -say x1 . The same point will have a diﬀerent position (x )C (P ) = (x1 (P ).1) which is a shorthand notation for x1 x2 x3 = x1 (x1 . However. x2 .2) This functional relation is called a coordinate transformation. the coordinates of P in C must be a unique function of its coordinates in C.

Mechanical or geometrical laws should therefore be invariant to coordinate transformations involving rotations and translations. locally. For instance all points on the surface of earth can be represented by two numbers -say x1 . All geometric analysis. We will call such a coordinate system a Cartesian or a Rectangular coordinate system. In diﬀerential geometry one only requires ﬂatness in a diﬀerential sence. independent of the chosen coordinate system). D(O. Motivated by this fact we are inclined to develop a mathematical formalism for operating with geometrical objects without referring to a coordinate system or. equivalently.3 Symmetries For classical mechanical phenomena it is found that a frame of reference can always be chosen in which space is homogeneous and isotropic and time is homogeneous. rely on the assumption that at suﬃcently small scales the space will appear ﬂat.3) where d = 3 is the dimension. However. Cartesian coordinate system. 0). Without loss of generality we can choose one of the points as origen.We will take it for given that one can ascribe a unique value (ie. around O for which distances between any two points. Classical physics takes place in a 3-dimensional globally Euclidean space Ω(O) = R3 . a coordinate system exists by which D can be calculated according to Pythagoras law: d D(O. vector and tensor algebra is precisely such a formalism. In general relativity space are intrinsically curved and the assumption of an Euklidean space can only be applied locally. For larger rectangular triangles the sum of the angles will be larger than 1800 and Pythagoras’ law will not be correct. we also write |P1 P2 | for D(P1 . the law of Pythagoras (with d = 2) can only be applied for small rectangular triangles1 on the surface. a formalism which will take the same form (ie. 1. (1. Trivially. The neighborhood.representing the longitude and latitude. 
It implies that nature has no sence of an absolute origin in time and space and no sence of an absolute direction.3) for the same ﬁxed coordinate system is coined a (local) Euklidean space. however. (1. The mathematics of scalar. is covariant) for all coordinates. 1 Small would mean that the length of line segments are much smaller than the radius of earth 4 . P ). O ∈ Ω(O). P1 . This fundamental principle is called the principle of covariance. The principle of curved space is easier to envisage for 2d-surfaces. Ω(O). for the distance between two arbitraily chosen points O and P . P ) = i=1 (xi (P ) − xi (O))2 . representing the length of the line segment connecting P1 and P2 . Also we assume (or we may take it as an observational fact) that when D becomes suﬃciently small. x2 . 0. ie. P2 ). P2 ∈ Ω(O) can be calculated according to Eq. so x(O) = (0. -say O-. In an Euklidean space.

capable of extension to an algebra of vectors. In this chapter will assume that all points P belong to an Euklidean space. force and acceleration. with suitable deﬁnitions. addition and substraction (provided the scalars have same units) follow the usual algebraic rules. We shall use the bold face notation in these notes. mass. Operations with scalars follow the same rules as elementary algebra. A vector having direction opposite of a vector a but having the same magnitude is denoted −a.2 Basic vector algebra The operations deﬁned for real numbers are. Analytically.1 Deﬁnitions A vector is a quantity having both magnitude and a direction in space. the vector is represented by either OP or OP and the magnitude by |OP | or |OP|. a vector a will have length a = |a|. Here. e. Graphically a vector is represented by an arrow OP from a point O to a point P . so multiplication. velocity. We indicate scalars by letters of ordinary types. 5 . time. 2. A scalar is a quantity having magnitude but no direction.Chapter 2 Scalars and vectors 2. such as displacement. length. O is called the initial point and P is called the terminal point. For example. 2. Two vectors a and b are equal if they have the same magnitude and direction regardless of the position of their initial point. deﬁning the direction and the magnitude of the vector being indicated by the length of the arrow.g. meaning that lengths of line segments can be calculated according to Pythagoras. temperature and any real number. The following deﬁnitions are fundamental and deﬁne the basic algebraic rules of vectors: 1. P ∈ Ω(O).

ie |ma| = |m||a|. and with direction the same as or opposite to that of a. argument Distributive law for 2. represented by a − b is deﬁned as the sum a + (−b).2) where a = |a|. 5. If a is not a null vector then a/|a| is a unit vector having the same direction as a.1 Scalar product The scalar product between two vectors. The deﬁnition of the outer product is postponed to chapter 3. argment m is a scalar (2. The diﬀerence between two vectors. 4. a and b is deﬁned by a · b = ab cos(θ). The sum is written c = a + b. they have no reference to coordinates. the last formula should just be read as a deﬁnition of a vector times a scalar. cross product and outer product (or tensor product). Following deﬁnition will become useful: A unit vector is a vector having unit magnitude.2. The sum of resultant of vectors a and b is a vector c formed by placing the initial point of b on the terminal point of a and then joining the initial point of a to the terminal point of b. a and b. 2. We stress that these deﬁnitions for vector addition. b = |b| and θ is the angle between the two vectors. Note that a · b is a scalar. Following rules apply: a·b (a + b) · c a · (b + c) m(a · b) =b·a =a·c+b·c =a·b+a·c = (ma) · b = a · (mb) Commutative law for scalar product Distributive law in 1. ie. 0≤θ≤π (2.3. The three basic types are called scalar product (or inner product).1) Here. We shall deﬁne each in turn. according as m is positive or negative. The product of a vector a with a scalar m is a vector ma with magnitude |m| times the magnitude of a.3) 6 . Note that in all cases only multiplication of a vector by one or more scalars are deﬁned. It then becomes a pure geometric exercise to prove the following laws: a+b a + (b + c) m(na) (m + n)a m(a + b) am =b+a = (a + b) + c = (mn)a = ma + na = ma + nb =def am Commutative law for addition Associate law for addition Associate law for multiplication Distributive law Distributive law Commutative law for multiplication (2. 
One can deﬁne diﬀerent types of bilinear vector products. substraction and scalar multiplications are deﬁned geometrically.

a × b between two vectors a and b is a vector deﬁned by a × b = ab sin(θ)u. Note that: 1.6) The last three formulas make the cross product bilinear.2 Cross product The cross product. argument m is a scalar (2. A vector whose direction depends on the choise of handedness is called an axial vector or pseudovector as opposed to ordinary or polar vectors. 0 ≤ θ ≤ π. 3 The projection of a vector a on b is equal to a · eb . 2. 7 . (2. b and c a×b (a + b) × c a × (b + c) m(a × b) = −b × a =a×c+b×c =a×b+a×c = (ma) × b = a × (mb) 2. are invariant to the choise of coordinate system2 . argument Distributive law for 2. The absolute value of the cross product |a × b| has a particular geometric meaning. we obtain a way of representing vectors in terms of triplets. The absolute value of the triple product a·(b×c) has a particular geometric meaning. b and u form a right-handed system1 .5) where θ is the angle between a and b and u is a unit vector in the direction perpendicular to the plane of a and b such that a. where eb = b/|b| is the unit vector in direction of b. The following laws are valid: Cross product is not commutative. as introduced above.2. then a and b are perpendicular.The three last formulas make the scalar product bilinear in the two arguments. Note also. It equals the volume of the parallelepiped spanned by a. whose directions are independent of the choise of handedness.3 Coordinate systems and bases We emphasize again that all deﬁnitions and laws of vector algebra. 2 With the one important exception that we have assumed the absolute notion of handedness in the deﬁnition of the cross product. (2. Once we introduce a way of ascribing positions to points by the choise of a coordinate system.4) 2. 1 a · a = |a|2 2 If a · b = 0 and a and b are not null vectors. C. Distributive law for 1. 1 It is seen that the deﬁnition of the cross-product explicitly depends on an arbitrary choise of handedness. It equals the area of the parallelogram spanned by a and b.

0). C = (O. G. The basis is spatially constant. In thes cases. a well-deﬁned procedure exists for constructing the associated basis. C. (2. 0. can be associated to any point P by the formula r(P ) = r(xCR (P )) = i xi (P )ei CR rectangular. xCR (P1 ) = (1. ∆r = r(P2 ) − r(P1 ) = OP2 − OP1 = P1 P2 . e2 = OP2 . meaning that the basis vectors are mutually orthogonal. ie. (2. there are two deﬁnining characteristica for a rectangular system 1.9). Thus. which is a vector invariant to the choise of the coordinate system. e3 } form a basis for the vector space3 . G and the choise of origin.9) with the replacement ei → gi . (2. C. 4 The simple procedure of Eq. Any point will have one and only one associated position vector by Eq. that span the vector space and are linear independent.7) and (2. 0). 0. It will therefore change upon a transformation to a coordinate system with a diﬀerent origin. O. CR . 0. meaning that it explicitly depends on the position of its initial point O. and unitary |ei | = 1. e3 = OP3 . P1 . e2 . 1. 1) (2. P2 and P3 having the coordinates xCR (O) = (0. (2.By assumption. The basis is orthonormal. ei · ej = 0 for i = j. we shall see that all physical laws only involve positional displacements. but it is only for bases satisfying 1. fully speciﬁes the coordinate system. The basis G = {g1 .7) we can construct three vectors e1 = OP1 . We shall illustrate the general method in section (2. 2. By selecting four points O. In any coordinate system. a position vector. r = OP. However. the three vectors R = {e1 . xCR (P3 ) = (0.9) The position vector is an improper vector. 8 . (2. the basis vectors are not themselves a function of position. we can choose this coordinate system to be rectangular (R). A set of vectors will be a basis if and only if the triple product g1 · (g2 × g3 ) = 0. 3 Recall that the deﬁnition of a basis is a set of vectors. cf. g3 } associated with a general coordinate system. 
There will always be a one to one relation between the coordinates and the position vector in geometrical problems of classical physics.7).8) Furthermore. xCR (P2 ) = (0. of the vector space4 . G). G. chapter 1. 0). However. from the choise of an origin O.8) for obtaining the basis vectors is not generally applicable. g2 . that this relation will be on the form of Eq. may satisfy none of the above.

j. 0. spatially dependent or not. a3 ) ∈ R3 with respect to G such that a i gi . 1)G or in the more dense. for the resolvent of a vector into its components and for the vector obtained by expanding a set of components along the basis vectors. 2. However. a3 ) = a.3.1 Components and notation From the deﬁnition of any basis. In the case of a constant orthonormal basis. V . an abitrary vector a will have a unique triplet of quantities a = (a1 . Trivially. basis G = {g1 . we will omit the basis subscript and use the notation a = [a] and ai = [a]i . we have g1 = (1. the notation {i. we obtain a one-to-one map φG : V → R3 between the vector space. If ei is non-constant we explicitly write ei (x). 9 . index-based notation6 [gi ]j = δij . 1. On the other hand. e2 . we shall retain the other notation for algebraic convenience. a = [a]G . and the triplet of scalars R3 . G. a2 . with the choise of a basis. It is useful to have a short-hand notation for this map (φ) and its inverse (φ−1 ). so (ai ) =def. ordinary brackets are commonly used. wrt. when it is obvious from the context (or irrelevant) which basis is implied. (a1 . For the collection of components. g2 = (0. In order not to confuse this notation with the vector obtained from a set of components (a)G =def. a Cartesian basis. 0)G . into a triplet. 6 Expressing vector identities in terms of their components is also referred to as tensor notation. a2 . e3 } for the particular case of an orthonormal basis5 .i . g3 } 5 This basis need not be spatially independent.10) a= i These quantities are called the components of a with respect to G. also mean that we shall refer to the i ’th component as ai = [a]G. ie. ai . i whe shall always retain the basis subscript in the latter expression. ie. g2 . (2. if E is constant the coordinate system is rectangular and we use the letter R for the basis. a i gi . φ−1 (a) = G ∈ R3 i a i gi = a ∈ V The notation for the set of components of a vector a. φG (a) = a =def. g3 = (0. 
where $\delta_{ij}$ is the Kronecker delta,

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases} \quad (2.11)$$

In the following we shall retain the notation $E = \{e_1, e_2, e_3\}$ for the particular case of an orthonormal basis.⁵ If $E$ is constant the coordinate system is rectangular, and we use the letter $R$ for the basis; for a Cartesian basis the notation $\{i, j, k\}$ is also often used. Note also that if $\mathcal{G}$ is constant in space, the relation between the coordinates and the position vector takes the form of Eq. (2.9),

$$r(x) = (x)_{\mathcal{G}} = \sum_i x_i g_i \;\Leftrightarrow\; [r]_{\mathcal{G},i} = x_i, \qquad \mathcal{G} \text{ const.}$$

2.3.2 Triplet algebra

By choosing a basis $\mathcal{G}$ for the vector space, the basic algebraic vector laws of addition, subtraction and scalar multiplication, Eq. (2.1), can be expressed in terms of ordinary triplet algebra for the components. For instance, since

$$a + b = \sum_i a_i g_i + \sum_i b_i g_i = \sum_i (a_i + b_i) g_i,$$

we have

$$(a_1, a_2, a_3)_{\mathcal{G}} + (b_1, b_2, b_3)_{\mathcal{G}} = (a_1 + b_1, a_2 + b_2, a_3 + b_3)_{\mathcal{G}}, \quad (2.12)$$

or equivalently (for all $i$)

$$[a + b]_i = [a]_i + [b]_i. \quad (2.13)$$

The same naturally holds for vector subtraction. Furthermore, for a scalar quantity $m$, since $m a = m \sum_i a_i g_i = \sum_i m a_i g_i$, we have

$$m (a_1, a_2, a_3)_{\mathcal{G}} = (m a_1, m a_2, m a_3)_{\mathcal{G}}, \quad \text{or} \quad [m a]_i = m [a]_i. \quad (2.14)$$

At the risk of being pedantic, let us stress the difference between a vector and its components. For two different bases $\mathcal{G} \ne \mathcal{G}'$, a vector $a$ will be represented by the components $a \in \mathbb{R}^3$ and $a' \in \mathbb{R}^3$ respectively. Though they represent the same vector, the two sets of components differ because they refer to different bases: $a = (a)_{\mathcal{G}} = (a')_{\mathcal{G}'}$ but $a \ne a'$. The distinction between a vector and its components becomes irrelevant only in the case where a single basis is involved.

⁵ This basis need not be spatially independent. If $e_i$ is non-constant we explicitly write $e_i(x)$.
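The map $\phi_{\mathcal{G}}$ resolving a vector into its components amounts to solving the linear system $\sum_i a_i g_i = a$. A minimal pure-Python sketch using Cramer's rule; the skew basis below is an illustrative choice, not one used in the text:

```python
# Resolve a vector into components with respect to a (possibly
# non-orthonormal) basis G = {g1, g2, g3}: solve sum_i a_i g_i = a.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def components(a, basis):
    """Return [a]_G such that a = sum_i [a]_G,i * g_i (Cramer's rule).

    The columns of the coefficient matrix are the basis vectors.
    """
    M = [[basis[j][i] for j in range(3)] for i in range(3)]
    d = det3(M)
    comps = []
    for k in range(3):
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = a[i]          # replace k'th column by a
        comps.append(det3(Mk) / d)
    return comps

G = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 0.0, 2.0)]  # illustrative skew basis
a = (3.0, 2.0, 4.0)
print(components(a, G))  # -> [1.0, 2.0, 2.0], since a = 1*g1 + 2*g2 + 2*g3
```

For an orthonormal basis the same components would follow directly from scalar products, as shown in the next section; the general solve is only needed when the basis is skew.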


The bilinear scalar and cross products are not as easy to operate with in terms of the components in an arbitrary basis $\mathcal{G}$ as in the case of vector addition, subtraction and scalar multiplication. Indeed, for the scalar product we have

$$a \cdot b = \Big(\sum_i a_i g_i\Big) \cdot \Big(\sum_j b_j g_j\Big) = \sum_{ij} a_i b_j \, g_i \cdot g_j = \sum_{ij} a_i b_j \, g_{ij}, \qquad g_{ij} := g_i \cdot g_j. \quad (2.15)$$

The quantities $g_{ij}$ are called the metric coefficients. If $\mathcal{G}$ is spatially dependent, then $g_{ij} = g_{ij}(x)$ will be functions of the coordinates. In fact, the functional form of the metric coefficients fully specifies the type of geometry involved (Euclidean or not), and they are the starting point for extending vector calculus to arbitrary geometries. Here it suffices to say that a Cartesian coordinate system uniquely implies that the metric coefficients are constant and satisfy $g_{ij} = \delta_{ij}$.

2.4 Orthonormal bases

An orthonormal basis $E$ satisfies by definition

$$e_i \cdot e_j = \delta_{ij}. \quad (2.16)$$

2.4.1 Scalar product

In an orthonormal basis the scalar product takes a simple form in terms of the components,

$$a \cdot b = \Big(\sum_i a_i e_i\Big) \cdot \Big(\sum_j b_j e_j\Big) = \sum_i a_i b_i. \quad (2.17)$$

This relation is particularly handy for obtaining the components of a vector $a$: since $a = \sum_i a_i e_i$,

$$a_i = a \cdot e_i. \quad (2.18)$$

So we obtain the $i$'th component of a vector $a$ by taking the scalar product between $a$ and the $i$'th basis vector. Furthermore, if the orthonormal basis is spatially independent, the displacement vector $\mathbf{PQ}$ between two points $P$ and $Q$, with coordinates $p = (p_1, p_2, p_3)$ and $q = (q_1, q_2, q_3)$ respectively, will be

$$\mathbf{PQ} = r(Q) - r(P) = \sum_i (q_i - p_i) e_i.$$

The distance between $P$ and $Q$ is given by the length $|\mathbf{PQ}|$, and

$$|\mathbf{PQ}|^2 = \sum_i (q_i - p_i)^2 \;\Rightarrow\; |\mathbf{PQ}| = \sqrt{\sum_i (q_i - p_i)^2}. \quad (2.19)$$

In other words, we recover the law of Pythagoras, which is reassuring since a constant orthonormal basis is equivalent to the choice of a rectangular coordinate system.
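Eq. (2.15) can be spot-checked numerically: the scalar product computed through the metric coefficients must agree with the ordinary dot product of the expanded vectors. A pure-Python sketch with an illustrative non-orthonormal basis:

```python
# Scalar product in an arbitrary basis via the metric coefficients
# g_ij = g_i . g_j (Eq. 2.15): a . b = sum_ij a_i b_j g_ij.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

G = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 1.0)]  # illustrative skew basis
gij = [[dot(gi, gj) for gj in G] for gi in G]             # metric coefficients

a = (1.0, 2.0, -1.0)   # components of a with respect to G
b = (0.5, 1.0,  2.0)   # components of b with respect to G

# a . b through the metric coefficients ...
ab_metric = sum(a[i] * b[j] * gij[i][j] for i in range(3) for j in range(3))

# ... equals the ordinary dot product of the expanded vectors
expand = lambda c: tuple(sum(c[i] * G[i][k] for i in range(3)) for k in range(3))
ab_direct = dot(expand(a), expand(b))
print(abs(ab_metric - ab_direct) < 1e-12)  # -> True (both equal 5.5 here)
```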

2.4.2 Cross product

Let $E$ be an orthonormal right-handed basis, so $e_1 = e_2 \times e_3$, $e_3 = e_1 \times e_2$, and so on. Using the laws of the cross product, Eqs. (2.3) and (2.6), one can show that in such a basis

$$a \times b = \det\begin{pmatrix} e_1 & e_2 & e_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix} = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1)_E, \quad (2.20)$$

where $\det()$ denotes the determinant of the array considered as a matrix. An alternative expression, defined only in terms of the components, is

$$[a \times b]_i = \sum_{jk} \epsilon_{ijk} a_j b_k. \quad (2.21)$$

The symbol $\epsilon_{ijk}$ with the three indices is the Levi-Civita symbol. It consists of $3 \times 3 \times 3 = 27$ real numbers given by

$$\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = +1, \quad \epsilon_{321} = \epsilon_{132} = \epsilon_{213} = -1, \quad \epsilon_{ijk} = 0 \text{ otherwise}; \quad (2.22)$$

in other words, $\epsilon_{ijk}$ is antisymmetric in all indices. From this it follows that the orthonormal basis vectors satisfy

$$e_i \times e_j = \sum_k \epsilon_{ijk} e_k. \quad (2.23)$$

Applying Eqs. (2.21) and (2.22), one can show that in an orthonormal right-handed basis the triple product $a \cdot (b \times c)$ is given by

$$a \cdot (b \times c) = \sum_{ijk} \epsilon_{ijk} a_i b_j c_k = \det\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}. \quad (2.24)$$

The problem of handedness. Let us emphasize that Eq. (2.24) would have been the same had we defined the cross product by a left-hand rule and expressed the components in a left-handed coordinate system. Indeed, the definition of the Levi-Civita symbol, Eq. (2.22), does not itself depend on the handedness of the coordinate system. Since it is in fact not possible to give an absolute prescription of how to construct a right-handed (or left-handed) coordinate system, it is more common in modern vector analysis to define the cross product through the determinant, Eq. (2.20), or equivalently through the Levi-Civita symbol, Eq. (2.21).
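The Levi-Civita symbol of Eq. (2.22) and the component form of the cross product, Eq. (2.21), translate directly into code. A pure-Python sketch; the closed-form expression for $\epsilon_{ijk}$ used below is a standard identity, not from the text:

```python
# Cross product from the Levi-Civita symbol:
# [a x b]_i = sum_jk eps_ijk a_j b_k  (indices 0, 1, 2 here).

def eps(i, j, k):
    """Levi-Civita symbol: +1, -1 for even/odd permutations, else 0."""
    return (i - j) * (j - k) * (k - i) // 2

def cross(a, b):
    return tuple(sum(eps(i, j, k) * a[j] * b[k]
                     for j in range(3) for k in range(3))
                 for i in range(3))

print(cross((1, 0, 0), (0, 1, 0)))  # -> (0, 0, 1), i.e. e1 x e2 = e3
```

The antisymmetry of $\epsilon_{ijk}$ also makes $a \times a = 0$ automatic, since the symmetric product $a_j a_k$ is contracted with an antisymmetric symbol.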

In replacing the geometric definition, Eq. (2.5), which presumes an absolute notion of handedness, with an algebraic one, the direction of the cross product will simply be determined by whatever convention of handedness has been adopted with the orthonormal coordinate system in the first place. Generally, the arbitrariness in the choice of handedness implies that the orientation of the cross product of two ordinary vectors is not a physically objective quantity, in contrast to vectors representing displacements, velocities, forces etc. One therefore distinguishes between proper or polar vectors, whose direction is independent of the choice of handedness, and axial vectors or pseudovectors, whose direction depends on the choice of handedness. The distinction between the two types of vectors becomes important when one considers transformations between left- and right-handed coordinate systems.

Dimensionality. For all vector operations, the only explicit reference to the dimension of physical space appears in the cross product, through the Levi-Civita symbol, which explicitly has three indices running from 1 to 3. All other vector operations are valid in arbitrary dimensions. The generalization of the cross product to any dimension $d$ is obtained by constructing a Levi-Civita symbol with $d$ indices, each running from 1 to $d$, that is totally antisymmetric. For instance, for $d = 2$:

$$\epsilon_{12} = 1, \quad \epsilon_{21} = -1, \quad \epsilon_{11} = \epsilon_{22} = 0. \quad (2.25)$$

Taking the "cross product" of a single vector $a = (a_1, a_2)_E$ then gives

$$[\hat{a}]_j = \sum_i \epsilon_{ji} a_i, \quad \text{or in doublet notation} \quad \hat{a} = (\hat{a}_1, \hat{a}_2)_E = (a_2, -a_1)_E,$$

which is the familiar expression in $d = 2$ for obtaining a vector $\hat{a}$ orthogonal to a given vector $a$.
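The $d = 2$ "cross product" of a single vector can be sketched the same way; the construction below is an illustrative implementation of Eq. (2.25):

```python
# d = 2 generalization: the "cross product" takes one vector and
# returns the perpendicular vector [a_hat]_j = sum_i eps_ji a_i,
# i.e. (a1, a2) -> (a2, -a1).

def eps2(i, j):
    """Two-index Levi-Civita symbol: eps_01 = 1, eps_10 = -1, else 0."""
    return j - i if abs(i - j) == 1 else 0

def perp(a):
    return tuple(sum(eps2(j, i) * a[i] for i in range(2)) for j in range(2))

a = (3.0, 4.0)
print(perp(a))                                 # -> (4.0, -3.0)
print(sum(x * y for x, y in zip(a, perp(a))))  # -> 0.0, i.e. orthogonal to a
```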
Generally, in $d > 1$ dimensions the cross product will take $d - 1$ vectors as arguments; this can be seen from the definition of the $\epsilon$-symbol.

2.5 Ordinary derivatives and integrals of vectors

Let $c(u)$ be a vector depending on a single scalar variable $u$. In this section we will consider how to define derivatives and integrals of $c$ with respect to $u$.

2.5.1 Derivatives

The definition of the ordinary derivative of a function can be extended directly to vector functions of a scalar:

$$\frac{dc(u)}{du} = \lim_{\Delta u \to 0} \frac{c(u + \Delta u) - c(u)}{\Delta u}. \quad (2.26)$$

Note that the definition involves subtracting two vectors, $\Delta c = c(u + \Delta u) - c(u)$, which is a vector. Therefore $dc(u)/du$ will also be a vector, provided the limit in Eq. (2.26) exists. Notice that Eq. (2.26) is defined independently of coordinate systems. In an orthonormal basis independent of $u$,

$$\frac{da}{du} = \sum_i \frac{da_i}{du} e_i, \quad (2.27)$$

whence the derivative of a vector is reduced to ordinary derivatives of the scalar functions $a_i$.

If $u$ represents the time, $u = t$, and $c(t)$ is the position vector $r(t)$ of a particle with respect to some coordinate system, then $dr(t)/dt$ will represent the velocity $v(t)$. The velocity is a tangent vector to the space curve mapped out by $r(t)$. Even though the position vector itself is not a proper vector, due to its dependence on the choice of origin of the coordinate system, the velocity is a proper vector, since it is defined through the positional displacement $\Delta r$, which is a proper vector. Consequently, the acceleration $a(t) := dv/dt$ is also a proper vector.

Ordinary differentiation of a vector coexists nicely with the basic vector operations, e.g.

$$\frac{d}{du}(a + b) = \frac{da}{du} + \frac{db}{du}, \quad (2.28)$$
$$\frac{d}{du}(a \cdot b) = \frac{da}{du} \cdot b + a \cdot \frac{db}{du}, \quad (2.29)$$
$$\frac{d}{du}(a \times b) = \frac{da}{du} \times b + a \times \frac{db}{du}, \quad (2.30)$$
$$\frac{d}{du}(\phi a) = \frac{d\phi}{du} a + \phi \frac{da}{du}, \quad (2.31)$$

where $\phi(u)$ is an ordinary scalar function. The above formulas can of course also be expressed in terms of the vector components $a_i$ and $b_i$, using Eq. (2.27).

2.5.2 Integrals

The definition of the ordinary integral of a vector depending on a single scalar variable also follows that of ordinary calculus. With respect to a basis $E$ independent of $u$, the indefinite integral of $c(u)$ is defined by

$$\int c(u)\, du = \sum_i \Big(\int c_i(u)\, du\Big) e_i,$$

where $\int c_i(u)\, du$ is the indefinite integral of an ordinary scalar function.
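The product rules (2.28)-(2.31) can be verified numerically with central differences; a sketch for Eq. (2.29), with illustrative curves $a(u)$ and $b(u)$:

```python
# Numerical check of d/du (a . b) = a' . b + a . b'  (Eq. 2.29),
# using central differences on two illustrative curves.
import math

def a(u):  return (math.cos(u), math.sin(u), u)
def b(u):  return (u * u, 1.0, math.exp(-u))
def dot(x, y):  return sum(xi * yi for xi, yi in zip(x, y))

u, h = 0.7, 1e-5
# left-hand side: derivative of the scalar function a(u) . b(u)
lhs = (dot(a(u + h), b(u + h)) - dot(a(u - h), b(u - h))) / (2 * h)

# right-hand side: componentwise derivatives, then the product rule
da = tuple((x - y) / (2 * h) for x, y in zip(a(u + h), a(u - h)))
db = tuple((x - y) / (2 * h) for x, y in zip(b(u + h), b(u - h)))
rhs = dot(da, b(u)) + dot(a(u), db)
print(abs(lhs - rhs) < 1e-6)  # -> True
```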

If $s(u)$ is a vector satisfying $c(u) = \frac{d}{du} s(u)$, then

$$\int c(u)\, du = \int \frac{d}{du}\big(s(u)\big)\, du = s(u) + k,$$

where $k$ is an arbitrary constant vector independent of $u$. The definite integral between two limits $u = u_0$ and $u = u_1$ can in this case be written

$$\int_{u_0}^{u_1} c(u)\, du = \int_{u_0}^{u_1} \frac{d}{du}\big(s(u)\big)\, du = \big[s(u) + k\big]_{u_0}^{u_1} = s(u_1) - s(u_0).$$

This integral can also be defined as a limit of a sum, in a manner analogous to that of elementary integral calculus.

2.6 Fields

2.6.1 Definition

In continuous systems the basic physical variables are distributed over space. Let an arbitrary coordinate system be given.

Scalar field. If to each position $x = (x_1, x_2, x_3)$ of a region in space there corresponds a number or scalar $\phi(x_1, x_2, x_3)$, then $\phi$ is called a scalar function of position, or scalar field. Physical examples of scalar fields are the mass or charge density distribution of an object, or the temperature or pressure distribution at a given time in a fluid.

Vector field. If to each position $x = (x_1, x_2, x_3)$ of a region in space there corresponds a vector $a(x_1, x_2, x_3)$, then $a$ is called a vector function of position, or vector field. Physical examples of vector fields are the gravitational field around the earth, the velocity field of a moving fluid, and the electromagnetic field of systems of charged particles.

In the following we shall assume the choice of a rectangular coordinate system $C = (O, R)$, and we will use the short-hand notation

$$\partial_i = \frac{\partial}{\partial x_i}, \qquad \partial^2_{ij} = \frac{\partial^2}{\partial x_i \partial x_j}.$$

2.6.2 Partial derivatives

Let $a$ be a vector field, $a(r) = a(x_1, x_2, x_3)$. We can define the partial derivative $\partial a / \partial x_i$ in the same fashion as in Eq. (2.26). In complete analogy to the usual definition of partial derivatives of a scalar function, the partial derivative of a vector field with respect to $x_i$ ($i = 1, 2, 3$) is

$$(\partial_i a)(r) = \lim_{\Delta x_i \to 0} \frac{a(r + \Delta x_i e_i) - a(r)}{\Delta x_i}. \quad (2.32)$$
Introducing the position vector $r(x) = \sum_i x_i e_i$, the expressions $\phi(r)$ and $a(r)$ are taken to have the same meaning as $\phi(x)$ and $a(x)$. A function of space is generally known as a field.

Equation (2.32) expresses how the vector field changes in the spatial direction of $e_i$. For each $i$, $\partial_i a$ will also be a vector field. Note that in general $\partial_j a \ne \partial_i a$ when $j \ne i$. Rules for partial differentiation of vectors are similar to those used in elementary calculus for scalar functions. Thus, if $a$ and $b$ are functions of $x$, then

$$\partial_i (a \cdot b) = a \cdot (\partial_i b) + (\partial_i a) \cdot b,$$
$$\partial_i (a \times b) = (\partial_i a) \times b + a \times (\partial_i b),$$
$$\partial^2_{ji}(a \cdot b) = \partial_j\big(\partial_i(a \cdot b)\big) = a \cdot (\partial^2_{ji} b) + (\partial_j a) \cdot (\partial_i b) + (\partial_i a) \cdot (\partial_j b) + (\partial^2_{ji} a) \cdot b.$$

We can verify these expressions by resolving the vector fields into a constant basis $R = \{e_1, e_2, e_3\}$, i.e. $a(x) = \sum_i a_i(x) e_i$ for $C = (O, R)$; all differentiation is then reduced to ordinary differentiation of the scalar functions $a_k(x)$ and $b_k(x)$. Notice that for the position vector,

$$\partial_i r = e_i. \quad (2.33)$$

If the vector field is resolved into a non-Cartesian coordinate system,

$$a(x') = \sum_j a'_j(x')\, g_j(x'), \quad (2.34)$$

then one must remember to include the spatial derivatives of the basis vectors as well:

$$\frac{\partial a}{\partial x'_i}(x') = \sum_j \frac{\partial a'_j}{\partial x'_i}(x')\, g_j(x') + a'_j(x')\, \frac{\partial g_j}{\partial x'_i}(x'). \quad (2.35)$$

2.6.3 Differentials

Since partial derivatives of a vector field follow the rules of elementary calculus for scalar functions, the same will be true for vector differentials. For $C = (O, R)$,

$$da = \sum_i da_i(x)\, e_i. \quad (2.36)$$

The change of the vector components due to an infinitesimal change of coordinates $dx_j$ is obtained by the chain rule,

$$da_i(x) = \sum_j (\partial_j a_i)(x)\, dx_j. \quad (2.37)$$

In vector notation this reads

$$da(x) = \sum_i \sum_j (\partial_j a_i)(x)\, dx_j\, e_i, \quad (2.38)$$

or simply

$$da = \sum_i \partial_i a\, dx_i. \quad (2.39)$$

For the same reasons as listed in the previous section, if the basis is spatially dependent one must remember also to include the differentials of the basis vectors.

The position vector $r = \sum_i x_i e_i$ is a special vector field, for which Eq. (2.39) implies

$$dr = \sum_i dx_i\, e_i. \quad (2.40)$$

The square $(ds)^2$ of the infinitesimal distance moved is then given by

$$(ds)^2 = dr \cdot dr = \sum_i dx_i^2.$$

If $r(u)$ is a space curve parametrized by $u$, then

$$\Big(\frac{ds}{du}\Big)^2 = \frac{dr}{du} \cdot \frac{dr}{du},$$

and therefore the arc length between two points on the curve $r(u)$, given by $u = u_1$ and $u = u_2$, is

$$s = \int_{u_1}^{u_2} \sqrt{\frac{dr}{du} \cdot \frac{dr}{du}}\; du.$$

In general, irrespective of the choice of basis, we have

$$d(a \cdot b) = da \cdot b + a \cdot db, \qquad d(a \times b) = da \times b + a \times db.$$

2.6.4 Gradient, divergence and curl

Let a rectangular coordinate system $C = (O, R)$ be given.

Nabla operator. The vector differential operator del or nabla, written $\nabla$, is defined by

$$\nabla(\cdot) = \sum_i e_i\, \partial_i(\cdot), \quad (2.41)$$

where $\cdot$ represents a scalar or, as we shall see later, a vector field. Notice that the $i$'th component of the operator is given by $\nabla_i = e_i \cdot \nabla = \partial_i$. This vector operator possesses properties analogous to those of ordinary vectors, which is not obvious at first sight, since its operation is defined relative to the choice of the coordinate system, in contrast to the vector algebra defined hitherto. We shall postpone the demonstration of its vectorial nature until chapter 4, where we have become more familiar with tensor calculus.

Gradient of a scalar field. The gradient of a scalar field $\phi(r)$ is defined by inserting a scalar field in the above definition:

$$\mathrm{grad}\,\phi = \nabla\phi = \sum_i (\partial_i \phi)\, e_i.$$

The gradient of a scalar field has some interesting geometrical properties. Consider the change of $\phi$ in some particular direction. For an infinitesimal vector displacement $dr$, forming its scalar product with $\nabla\phi$ we have

$$(\nabla\phi)(r) \cdot dr = \Big(\sum_i (\partial_i \phi)(r)\, e_i\Big) \cdot \Big(\sum_j dx_j\, e_j\Big) = \sum_i (\partial_i \phi)(r)\, dx_i = d\phi, \quad (2.42)$$

which is the infinitesimal change in $\phi$ in going from position $r$ to $r + dr$. In particular, if $r$ depends on some parameter $t$ such that $r(t)$ defines a space curve, then the total derivative of $\phi$ along that curve is simply

$$\frac{d\phi}{dt} = \nabla\phi \cdot \frac{dr}{dt}.$$

Setting $v = dr/dt$, we obtain from the definition of the scalar product

$$\frac{d\phi}{dt} = |\nabla\phi|\,|v| \cos\theta, \quad (2.43)$$

where $\theta$ is the angle between $\nabla\phi$ and $v$. This shows that the largest increase of the scalar field is obtained in the direction of $\nabla\phi$ and that, if $v$ is a unit vector, the rate of change in that case equals $|\nabla\phi|$.

A second interesting geometrical property of $\nabla\phi$ may be found by considering the surface defined by $\phi(r) = c$, where $c$ is some constant. If $r(t)$ is a space curve in the surface, we clearly have $d\phi/dt = 0$, and consequently, for any tangent vector $v$ in the plane, $\nabla\phi \cdot v = 0$. In other words, $\nabla\phi$ is a vector normal to the surface $\phi(r) = c$ at every point.

We can extend the above analysis to find the rate of change of a vector field in a particular direction $v = dr/dt$:

$$\frac{d}{dt} a(r(t)) = \sum_i \frac{da_i(r(t))}{dt}\, e_i = \sum_i \sum_j (\partial_j a_i)(r)\, v_j\, e_i = \sum_j v_j \partial_j \Big(\sum_i a_i(r)\, e_i\Big) = \sum_j v_j \partial_j a(r) = (v \cdot \nabla)\, a(r). \quad (2.44)$$

This shows that the operator

$$v \cdot \nabla = \sum_i v_i \partial_i \quad (2.45)$$

gives the rate of change, in the direction of $v$, of the quantity (vector or scalar) on which it acts.

Divergence of a vector field. The divergence of a vector field $a(r)$ is defined by

$$(\mathrm{div}\, a)(r) = (\nabla \cdot a)(r) = \sum_i (\partial_i a_i)(r). \quad (2.46)$$
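The statement that $d\phi/dt = \nabla\phi \cdot v$ along a curve, Eqs. (2.42)-(2.45), can be checked with finite differences; the field $\phi = xy + z^2$ and the direction $v$ below are illustrative choices:

```python
# Directional derivative via the gradient: d(phi)/dt = grad(phi) . v.
# Finite-difference sketch with the illustrative field phi = x*y + z**2.

def phi(x, y, z):
    return x * y + z * z

def grad_phi(x, y, z, h=1e-6):
    """Central-difference gradient of phi."""
    return ((phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
            (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
            (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))

p = (1.0, 2.0, 0.5)
v = (0.3, -0.4, 0.2)                     # direction of motion dr/dt
g = grad_phi(*p)                         # analytically (y, x, 2z) = (2, 1, 1)
rate = sum(gi * vi for gi, vi in zip(g, v))

# compare with the change of phi along a short step in direction v
h = 1e-6
ahead  = phi(*(pi + h * vi for pi, vi in zip(p, v)))
behind = phi(*(pi - h * vi for pi, vi in zip(p, v)))
print(abs(rate - (ahead - behind) / (2 * h)) < 1e-4)  # -> True
```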

Clearly, $(\nabla \cdot a)(r)$ is a scalar field. The full physical and geometrical meaning of the divergence is discussed in the next section. Now, if a vector field $a$ is itself derived from a scalar field via $a = \nabla\phi$, then $\nabla \cdot a$ has the form $\nabla \cdot \nabla\phi$ or, as it is usually written, $\nabla^2 \phi$, where

$$\nabla^2 = \sum_i \partial^2_{ii} = \sum_i \frac{\partial^2}{\partial x_i^2}.$$

$\nabla^2 \phi$ is called the Laplacian of $\phi$ and is typically encountered in electrostatic problems or in diffusion equations for physical scalar fields such as a temperature or density distribution. The Laplacian can also act on a vector through its components:

$$\nabla^2 a = \sum_i \partial^2_{ii} a = \sum_i \sum_j (\partial^2_{ii} a_j)\, e_j.$$

Curl of a vector field. The curl of a vector field $a(r)$ is defined by

$$\mathrm{curl}\,(a) = \nabla \times a = (\partial_2 a_3 - \partial_3 a_2)\, e_1 + (\partial_3 a_1 - \partial_1 a_3)\, e_2 + (\partial_1 a_2 - \partial_2 a_1)\, e_3. \quad (2.47)$$

Clearly, $\nabla \times a$ is itself a vector field. The tensor notation of this expression (for the $i$'th component) is even more compact,

$$[\nabla \times a]_i = \sum_{jk} \epsilon_{ijk}\, \partial_j a_k.$$

In analogy to the definition of the cross product between two ordinary vectors, we can express the definition symbolically,

$$\nabla \times a = \det\begin{pmatrix} e_1 & e_2 & e_3 \\ \partial_1 & \partial_2 & \partial_3 \\ a_1 & a_2 & a_3 \end{pmatrix},$$

where it is understood that, on expanding the determinant, the partial derivatives act on the components of $a$. For a vector field $v(x)$ describing the local velocity at any point in a fluid, $\nabla \times v$ is a measure of the angular velocity of the fluid in the neighbourhood of that point. If a small paddle wheel were placed at various points in the fluid, it would tend to rotate in regions where $\nabla \times v \ne 0$, while it would not rotate in regions where $\nabla \times v = 0$. Another insight into the physical interpretation of the curl operator is gained by considering the vector field $v$ describing the velocity at any point in a rigid body rotating about some axis with angular velocity $\omega$. If $r$ is the position vector of the point with respect to some origin on the axis of rotation, the velocity field is given by $v(x) = \omega \times r(x)$. The curl of this vector field is then found to be $\nabla \times v = 2\omega$.
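The claim that the rigid-rotation field $v = \omega \times r$ has $\nabla \times v = 2\omega$ can be checked numerically; a finite-difference sketch, where the angular velocity below is an arbitrary illustrative value:

```python
# Curl of a rigid-rotation velocity field v = omega x r equals 2*omega.

OMEGA = (0.0, 0.0, 1.5)   # illustrative angular velocity

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def v(x, y, z):
    return cross(OMEGA, (x, y, z))

def curl(f, p, h=1e-6):
    """Central-difference curl of the vector field f at point p."""
    def d(i, j):   # d f_i / d x_j at p
        q1 = list(p); q1[j] += h
        q2 = list(p); q2[j] -= h
        return (f(*q1)[i] - f(*q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

print(curl(v, (0.3, -1.0, 2.0)))  # approximately (0.0, 0.0, 3.0) = 2*OMEGA
```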

This procedure is equivalent to the way one in practice operates with differential operators on scalar and vector fields.

Combinations of grad, div and curl. There is a myriad of identities between various combinations of the three important vector operators grad, div and curl. Here we simply list the most important ones:

$$\nabla(\phi + \psi) = \nabla\phi + \nabla\psi$$
$$\nabla \cdot (a + b) = \nabla \cdot a + \nabla \cdot b$$
$$\nabla \times (a + b) = \nabla \times a + \nabla \times b$$
$$\nabla \cdot (\phi a) = \phi\, \nabla \cdot a + a \cdot \nabla\phi$$
$$\nabla \cdot (a \times b) = b \cdot (\nabla \times a) - a \cdot (\nabla \times b)$$
$$\nabla \times (\nabla \times a) = \nabla(\nabla \cdot a) - \nabla^2 a$$
$$\nabla \cdot (\nabla \times a) = 0$$
$$\nabla \times \nabla\phi = 0 \quad (2.48)$$

Here, $\phi$ and $\psi$ are scalar fields and $a$ and $b$ are vector fields. The identities involving cross products are most easily proven using tensor calculus, which is postponed until chapter 4. The last identity has an important meaning: if $a$ is derived from the gradient of some scalar field, $a = \nabla\phi$, then the identity shows that $a$ is necessarily irrotational, $\nabla \times a = 0$. We shall return to this point in the next section.

2.6.5 Line, surface and volume integrals

In the previous sections we have discussed continuously varying scalar and vector fields and the action of various differential operators on them. Often, the need arises to consider the integration of field quantities along lines, over surfaces and throughout volumes. In general the integrand may be scalar or vector, but the evaluation of such integrals involves their reduction to one or more scalar integrals, which are then evaluated.

Line integrals

In general, one may encounter line integrals of the forms

$$\int_C \phi\, dr, \qquad \int_C a \cdot dr, \qquad \int_C a \times dr, \quad (2.49)$$

where $\phi$ is a scalar field, $a$ is a vector field and $C$ is a prescribed curve in space joining two given points $A$ and $B$. The three integrals themselves are respectively vector, scalar and vector in nature. The formal definition of a line integral closely follows that of ordinary integrals, and it can be considered as the limit of a sum. We may divide the path joining the points $A$ and $B$ into $N$ small line elements $\Delta r_p$, $p = 1, \dots, N$. If $x_p = (x_{1,p}, x_{2,p}, x_{3,p})$ is any point on the line element $\Delta r_p$, then the second type of line integral in Eq. (2.49) is defined as

$$\int_C a \cdot dr := \lim_{N \to \infty} \sum_{p=1}^{N} a(x_p) \cdot \Delta r_p,$$

where all $|\Delta r_p| \to 0$ as $N \to \infty$.

The method of evaluating a line integral is to reduce it to a set of scalar integrals. It is usual to work in Cartesian coordinates, in which case $dr = \sum_i e_i\, dx_i$. The first two integrals in Eq. (2.49) then become

$$\int_C \phi(r)\, dr = \int_C \phi(r) \Big(\sum_i e_i\, dx_i\Big) = \sum_i \Big(\int_C \phi(r)\, dx_i\Big) e_i \quad (2.50)$$

and

$$\int_C a(r) \cdot dr = \int_C \Big(\sum_i a_i(r)\, e_i\Big) \cdot \Big(\sum_j e_j\, dx_j\Big) = \sum_i \int_C a_i(r)\, dx_i. \quad (2.51)$$

Note that in the above we have used relations of the form

$$\int_C a_i\, e_j\, dx_j = \Big(\int_C a_i\, dx_j\Big) e_j,$$

which is allowable since the Cartesian basis is independent of the coordinates. If $E = E(x)$, the basis vectors could not be factorised out of the integral. The final scalar integrals in Eqs. (2.50) and (2.51) can be solved using a parametrized form for $C$. The curve $C$ is a space curve, most often defined in parametric form; in a Cartesian coordinate system $C = (O, R)$ it becomes

$$C: \quad r(u) = \sum_i x_i(u)\, e_i, \quad u_0 \le u \le u_1, \quad \text{with } [r(u_0)] = x(A), \;\; [r(u_1)] = x(B).$$

For instance,

$$\int_C a_2(r)\, dx_3 = \int_{u_0}^{u_1} a_2\big(x_1(u), x_2(u), x_3(u)\big)\, \frac{dx_3}{du}\, du.$$

Each of the line integrals in Eq. (2.49) is evaluated over some curve $C$ that may be either open ($A$ and $B$ being distinct points) or closed ($A$ and $B$ coinciding); in the latter case one writes $\oint_C$ to indicate this. However, for line integrals of the form $\int_C a \cdot dr$ there exists a class of vector fields for which the line integral between two points is independent of the path taken. Such vector fields are called conservative. In general,
a line integral between two points will depend on the specific path $C$ defining the integral. Similarly, the third line integral in Eq. (2.49) expands as

$$\int_C a(r) \times dr = \Big(\int_C a_2(r)\, dx_3 - \int_C a_3(r)\, dx_2\Big) e_1 + \Big(\int_C a_3(r)\, dx_1 - \int_C a_1(r)\, dx_3\Big) e_2 + \Big(\int_C a_1(r)\, dx_2 - \int_C a_2(r)\, dx_1\Big) e_3. \quad (2.52)$$

A vector field $a$ that has continuous partial derivatives in a simply connected region $R$,⁷ is conservative if, and only if, any of the following is true:

⁷ A simply connected region $R$ is a region in space for which any closed path can be continuously shrunk to a point, i.e. $R$ has no "holes".

1. The integral $\int_A^B a \cdot dr$, where $A, B \in R$, is independent of the path from $A$ to $B$. Hence $\oint_C a \cdot dr = 0$ around any closed loop in $R$.
2. There exists a scalar field $\phi$ in $R$ such that $a = \nabla\phi$.
3. $\nabla \times a = 0$.
4. $a \cdot dr$ is an exact differential.

We will not demonstrate the equivalence of these statements here. If a vector field is conservative, we can write $a \cdot dr = \nabla\phi \cdot dr = d\phi$, and

$$\int_A^B a \cdot dr = \int_A^B \nabla\phi \cdot dr = \int_A^B d\phi = \phi(B) - \phi(A).$$

This situation is encountered whenever $a = f$ represents a force $f$ derived from a potential (scalar) field, such as the potential energy in a gravitational field, the potential energy in an elastic spring, the voltage in electrical circuits, etc.

Surface integrals

As with line integrals, integrals over surfaces can involve vector and scalar fields and, equally, result in either a vector or a scalar. We shall focus on surface integrals of the form

$$\int_S a \cdot dS, \quad (2.53)$$

where $a$ is a vector field and $S$ is a surface in space, which may be either open or closed. Following the notation for line integrals, $\int_S$ is replaced by $\oint_S$ for surface integrals over a closed surface. The vector differential $dS$ in Eq. (2.53) represents a vector area element of the surface $S$. It may also be written $dS = n\, dS$, where $n$ is a unit normal to the surface at the position of the element and $dS$ is the scalar area of the element. The convention for the direction of the normal $n$ to a surface depends on whether the surface is open or closed. An open surface spans some perimeter curve $C$; the direction of $n$ is then given by the right-hand sense with respect to the direction in which the perimeter is traversed, i.e. it follows the right-hand screw rule. For a closed surface, the direction of $n$ is taken to be outwards from the enclosed volume.

The formal definition of a surface integral is very similar to that of a line integral. One divides the surface into $N$ elements of area $\Delta S_p$, $p = 1, \dots, N$, each with a unit normal $n_p$. If $x_p$ is any point in $\Delta S_p$, then

$$\int_S a \cdot dS = \lim_{N \to \infty} \sum_{p=1}^{N} a(x_p) \cdot n_p\, \Delta S_p,$$

where it is required that $\Delta S_p \to 0$ for $N \to \infty$.
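Path independence for a conservative field, condition 1 above, can be demonstrated numerically: for $a = \nabla\phi$, the line integral along any path from $A$ to $B$ equals $\phi(B) - \phi(A)$. A sketch with the illustrative potential $\phi = xyz$ and two different paths:

```python
# Line integrals of a conservative field a = grad(phi) with phi = x*y*z:
# the result depends only on the endpoints, here phi(B) - phi(A) = 1.
import math

def a_field(x, y, z):            # gradient of phi = x*y*z
    return (y * z, x * z, x * y)

def line_integral(path, n=20000):
    """Integrate a . dr along path(u), u in [0, 1], by the midpoint rule."""
    total, du = 0.0, 1.0 / n
    for k in range(n):
        u = (k + 0.5) * du
        p1, p2 = path(u - 0.5 * du), path(u + 0.5 * du)
        dr = tuple(q - p for p, q in zip(p1, p2))
        total += sum(ai * dri for ai, dri in zip(a_field(*path(u)), dr))
    return total

# two illustrative paths from A = (0,0,0) to B = (1,1,1)
straight = lambda u: (u, u, u)
curved   = lambda u: (u, u * u, math.sin(math.pi * u / 2))
print(round(line_integral(straight), 4), round(line_integral(curved), 4))
# both approximately 1.0 = phi(B) - phi(A)
```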

A standard way of evaluating surface integrals is to use Cartesian coordinates and project the surface onto one of the basis planes. For instance, suppose a surface $S$ has projection $R$ onto the 12-plane (the $xy$-plane), so that an element of surface area $dS$ projects onto the area element $dA$. Then

$$dA = |e_3 \cdot dS| = |e_3 \cdot n|\, dS.$$

Since in the 12-plane $dA = dx_1\, dx_2$, we have the expression for the surface integral

$$\int_S a(r) \cdot dS = \int_R a(r) \cdot n\, dS = \int_R a(r) \cdot n\, \frac{dx_1\, dx_2}{|e_3 \cdot n|}.$$

Now, if the surface $S$ is given by the equation $x_3 = z(x_1, x_2)$, where $z(x_1, x_2)$ gives the third coordinate of the surface for each $(x_1, x_2)$, then the scalar field

$$f(x_1, x_2, x_3) = x_3 - z(x_1, x_2) \quad (2.54)$$

is identically zero on $S$. The unit normal at any point of the surface will be given by $n = \nabla f / |\nabla f|$ evaluated at that point, cf. section 2.6.4. We then obtain

$$n\, dS = \frac{n\, dA}{|n \cdot e_3|} = \frac{\nabla f}{|\nabla f \cdot e_3|}\, dA = \frac{\nabla f}{|\partial_3 f|}\, dA = \nabla f\, dx_1\, dx_2,$$

where the last identity follows from the fact that $\partial_3 f = 1$, by Eq. (2.54). The surface integral then becomes

$$\int_S a(r) \cdot dS = \int_R a\big(x_1, x_2, z(x_1, x_2)\big) \cdot (\nabla f)(x_1, x_2)\, dx_1\, dx_2,$$

which is an ordinary scalar integral in the two variables $x_1$ and $x_2$.

Volume integrals

Volume integrals are generally simpler than line or surface integrals, since the volume element $dV$ is a scalar quantity. Volume integrals are most often of the form

$$\int_V \phi(r)\, dV, \qquad \int_V a(r)\, dV. \quad (2.55)$$

Clearly, the first form results in a scalar, whereas the second yields a vector. The evaluation of the first volume integral in Eq. (2.55) is an ordinary multiple integral. The evaluation of the second type follows directly, since we can resolve the vector field into Cartesian coordinates,

$$\int_V a(r)\, dV = \sum_i \Big(\int_V a_i(r)\, dV\Big) e_i.$$

Two closely related physical examples, one of each kind, are provided by the total mass $M$ of a fluid contained in a volume $V$,
given by $M = \int_V \rho(r)\, dV$, and the total linear momentum of that same fluid, given by $\int_V \rho(r)\, v(r)\, dV$, where $\rho(r)$ is the density field and $v(r)$ is the velocity field of the fluid.
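The projection method above reduces a flux integral to an ordinary double integral, $\int_S a \cdot dS = \int_R a \cdot (-\partial_1 z, -\partial_2 z, 1)\, dx_1\, dx_2$. A numerical sketch for the illustrative choices $a = r$ and $z = x_1^2 + x_2^2$ over the unit square, for which the flux is analytically $-2/3$:

```python
# Surface integral by projection: n dS = (-dz/dx1, -dz/dx2, 1) dx1 dx2
# for the surface x3 = z(x1, x2). Illustrative: a = r, z = x1^2 + x2^2.

def z(x, y):      return x * x + y * y
def a(x, y, zz):  return (x, y, zz)

def flux(n=400):
    """Midpoint-rule double integral over the unit square in the 12-plane."""
    h, total = 1.0 / n, 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            ndS = (-2 * x, -2 * y, 1.0)          # (-dz/dx, -dz/dy, 1)
            ax, ay, az = a(x, y, z(x, y))
            total += (ax * ndS[0] + ay * ndS[1] + az * ndS[2]) * h * h
    return total

print(round(flux(), 3))  # -> -0.667, matching the analytic value -2/3
```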

Of course, we could have written $a$ in terms of the basis vectors of another coordinate system (e.g. spherical coordinates), but since such basis vectors are not in general constant, they cannot be taken outside the integrand.

2.6.6 Integral theorems

There are two important theorems relating surface and volume integrals of vector fields: the divergence theorem of Gauss, and Stokes' theorem.

Gauss' theorem. Let $a$ be a (differentiable) vector field, and let $V$ be any volume bounded by a closed surface $S$. Then Gauss' theorem states that

$$\int_V (\nabla \cdot a)(r)\, dV = \oint_S a(r) \cdot dS. \quad (2.56)$$

The proof goes as follows. Let $C = (O, R)$ be a Cartesian coordinate system, and let $B(r_0)$ be a small box around $r_0$ with its edges oriented along the directions of the basis vectors and with volume $V_B = \Delta x_1 \Delta x_2 \Delta x_3$. Each face can be labelled by $sk$, where $s = \pm 1$ and $k = 1, 2, 3$ indicates the orientation of the face; thus the outward normal of face $sk$ is $n_{sk} = s\, e_k$. If the area $A_k = \prod_{i \ne k} \Delta x_i$ of each face $F_{sk}$ is small, then we can approximate each surface integral

$$\int_{F_{sk}} a(r) \cdot dS = \int_{F_{sk}} a(r) \cdot s\, e_k\, dS$$

by evaluating the vector field $a(r)$ at the center point of the face, $r_0 + s \frac{\Delta x_k}{2} e_k$:

$$\int_{F_{sk}} a(r) \cdot dS \approx a\Big(r_0 + s\, \frac{\Delta x_k}{2}\, e_k\Big) \cdot s\, e_k\, A_k.$$

The surface integral on the rhs of Eq. (2.56) for the box $B(r_0)$ then becomes

$$\oint_{B(r_0)} a(r) \cdot dS \approx \sum_{sk} a\Big(r_0 + s\, \frac{\Delta x_k}{2}\, e_k\Big) \cdot s\, e_k\, A_k = \sum_k \Big[ a_k\Big(r_0 + \frac{\Delta x_k}{2} e_k\Big) - a_k\Big(r_0 - \frac{\Delta x_k}{2} e_k\Big) \Big] A_k \approx \sum_k (\partial_k a_k)(r_0)\, \Delta x_k\, A_k = (\nabla \cdot a)(r_0)\, V_B. \quad (2.57)$$

Any volume $V$ can be approximated by a set of $N$ small boxes $B_i$, centered around the points $r_{0,i}$, $i = 1, \dots, N$. For the lhs of Eq. (2.56) we then have

$$\int_V (\nabla \cdot a)(r)\, dV \approx \sum_i (\nabla \cdot a)(r_{0,i})\, V_{B_i}. \quad (2.58)$$

Adding the surface integrals of each of these boxes, the contributions from the mutual interfaces vanish (since the outward normals of two adjacent boxes

point in opposite directions); consequently, the only contributions come from the outer surface $S$ of $V$:

$$\oint_S a(r) \cdot dS \approx \sum_i \oint_{B(r_{0,i})} a(r) \cdot dS_i. \quad (2.59)$$

Since each term on the rhs of Eq. (2.59) equals a corresponding term on the rhs of Eq. (2.58), Gauss' theorem is demonstrated.

The divergence of a vector field therefore has the physical meaning of giving the net "outflux" of a scalar advected by the field within an infinitesimal volume. For instance, if $\rho(r, t)$ is a density and $v(r, t)$ is the velocity field of a fluid, then the vector $j(r) = \rho(r)\, v(r)$ gives the density current. It can then be shown (try it) that under mass conservation,

$$\frac{\partial \rho}{\partial t} + \nabla \cdot j = 0.$$

Gauss' theorem is often used in conjunction with the following mathematical theorem:

$$\frac{d}{dt} \int_V \phi(r, t)\, dV = \int_V \frac{\partial}{\partial t} \phi(r, t)\, dV,$$

where $t$ is time and $\phi$ is a time-dependent scalar field (the theorem works in arbitrary spatial dimension). The two theorems are central in deriving partial differential equations for dynamical systems, in particular the so-called continuity equations, linking a flow field to the time changes of a scalar field advected by the flow.

Stokes' theorem. Stokes' theorem is the "curl analogue" of the divergence theorem: it relates the integral of the curl of a vector field over an open surface $S$ to the line integral of the vector field around the perimeter $C$ bounding the surface,

$$\int_S (\nabla \times a)(r) \cdot dS = \oint_C a(r) \cdot dr. \quad (2.60)$$

Following the same lines as in the derivation of the divergence theorem, the surface $S$ can be divided into many small areas $S_i$ with boundaries $C_i$ and unit normals $n_i$. For each small area one can show that

$$(\nabla \times a) \cdot n_i\, S_i \approx \oint_{C_i} a \cdot dr.$$

Summing over $i$, one finds that, on the rhs of Eq. (2.60), all parts of all interior boundaries that are not part of $C$ are included twice, being traversed in opposite directions on each occasion, and thus cancel each other. Only contributions from line elements that are also part of $C$ survive. If each $S_i$ is allowed to tend to zero, Eq. (2.60) is obtained.
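Gauss' theorem, Eq. (2.56), can be verified numerically on a unit cube; for the illustrative field $a = (xy, yz, xz)$, with $\nabla \cdot a = x + y + z$, both sides equal $3/2$:

```python
# Numerical check of Gauss' theorem on the unit cube [0,1]^3
# for the illustrative field a = (x*y, y*z, x*z), div a = x + y + z.

def a(x, y, z):      return (x * y, y * z, x * z)
def div_a(x, y, z):  return x + y + z

def volume_integral(n=40):
    """Midpoint rule for the volume integral of div a."""
    h, s = 1.0 / n, 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                s += div_a((i + .5) * h, (j + .5) * h, (k + .5) * h) * h**3
    return s

def surface_integral(n=200):
    """Outward flux of a through the six faces of the cube."""
    h, s = 1.0 / n, 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) * h, (j + 0.5) * h
            s += (a(1.0, u, v)[0] - a(0.0, u, v)[0]) * h * h  # faces x = 1, 0
            s += (a(u, 1.0, v)[1] - a(u, 0.0, v)[1]) * h * h  # faces y = 1, 0
            s += (a(u, v, 1.0)[2] - a(u, v, 0.0)[2]) * h * h  # faces z = 1, 0
    return s

print(round(volume_integral(), 3), round(surface_integral(), 3))  # -> 1.5 1.5
```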

2.7 Curvilinear coordinates

The vector operators we have discussed so far, $\nabla\phi$, $\nabla \cdot a$ and $\nabla \times a$, have all been defined in terms of Cartesian coordinates. In that respect we have been more restricted than in the algebraic definitions of the analogous ordinary scalar and cross products, Eqs. (2.18) and (2.20) respectively, where we only assumed orthonormality of the basis. The reason is that the nabla operator involves spatial derivatives, which implies that one must account for the possible non-constancy of $E$, cf. Eq. (2.35).

Many systems possess some particular symmetry which makes other coordinate systems more natural, notably cylindrical or spherical coordinates. These coordinates are just two examples of what are called curvilinear coordinates. Curvilinear coordinates refer to the general case in which the rectangular coordinates $x$ of any point can be expressed as functions of another set of coordinates $x'$, thus defining a coordinate transformation, $x = x(x')$ or $x' = x'(x)$.⁸ Here we shall discuss the algebraic form of the standard vector operators for the two particular transformations into cylindrical or spherical coordinates. An important feature of these coordinates is that, although they lead to non-constant bases, these bases retain the property of orthonormality.

The starting point for the digression is to realize that a non-constant basis $E'(x') = \{e'_1(x'), e'_2(x'), e'_3(x')\}$, as opposed to the constant basis $R = \{e_1, e_2, e_3\}$ associated with Cartesian coordinates, implies that the basis vectors are vector fields. Any vector field $a(x')$ is then generally written as

$$a(x') = \sum_j a'_j(x')\, e'_j(x'), \quad (2.61)$$

where the $a'_j$ are the components of the vector field in the new basis. One notes that the functional form of the components $a'_j(x')$ differs from the functional form of the Cartesian components $a_j(x)$, because the same point $P$ has two different numerical representations, $x'(P)$ and $x(P)$. For a scalar field one must correspondingly have the identity

$$\phi'(x') = \phi(x),$$

where $\phi'$ is the functional form of the scalar field in the primed coordinate system, $\phi$ is the functional form with respect to the unprimed system, and $x' = x'(x)$ is the coordinate transformation. Again, the two functional forms must be different, because the same point has different numerical representations; however, the values of the two functions must be the same, since $x$ and $x'$ represent the same point in space.

⁸ Recall that the notation $x' = x'(x)$ is a short-hand notation for the three functional relationships listed in Eq. (1.2).

2.7.1 Cylindrical coordinates

Cylindrical coordinates, x' = (ρ, φ, z), are defined in terms of normal cartesian coordinates x = (x_1, x_2, x_3) = (x, y, z) by the coordinate transformation

   x = ρ cos(φ),   y = ρ sin(φ),   z = z.

The domain of variation is 0 ≤ ρ < ∞, 0 ≤ φ < 2π. The position vector may be written as

   r(x') = ρ cos(φ) e_1 + ρ sin(φ) e_2 + z e_3.

Local basis
If we take the partial derivatives with respect to ρ, φ, z and normalize, we obtain

   e'_1(x') = e_ρ(x') = ∂_ρ r           = cos(φ) e_1 + sin(φ) e_2
   e'_2(x') = e_φ(x') = (1/ρ) ∂_φ r     = −sin(φ) e_1 + cos(φ) e_2
   e'_3     = e_z                        = e_3.

These three unit vectors, like the cartesian unit vectors e_i, form an orthonormal basis at each point in space. An arbitrary vector field may therefore be resolved in this basis,

   a(x') = a_ρ(x') e_ρ(x') + a_φ(x') e_φ(x') + a_z(x') e_z,

where the vector components

   a_ρ = a · e_ρ,   a_φ = a · e_φ,   a_z = a · e_z                 (2.62)

are the projections of a on the local basis vectors.

Resolution of the gradient
The derivatives with respect to the cylindrical coordinates are found by differentiation through the cartesian coordinates (chain rule):

   ∂_ρ = (∂x/∂ρ) ∂_x + (∂y/∂ρ) ∂_y = cos(φ) ∂_x + sin(φ) ∂_y
   ∂_φ = (∂x/∂φ) ∂_x + (∂y/∂φ) ∂_y = −ρ sin(φ) ∂_x + ρ cos(φ) ∂_y.

From these relations we can calculate the projections of the gradient operator ∇ = Σ_i e_i ∂_i on the cylindrical basis, and we obtain

   ∇_ρ = e_ρ · ∇ = ∂_ρ
   ∇_φ = e_φ · ∇ = (1/ρ) ∂_φ
   ∇_z = e_z · ∇ = ∂_z.
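The orthonormality of the local basis and the 1/ρ factor in ∇_φ can be checked numerically. The following sketch is not part of the original text; it assumes NumPy is available, and the scalar field f(x, y, z) = xy and the sample point are arbitrary choices for illustration:

```python
import numpy as np

def cyl_basis(rho, phi):
    """Local cylindrical basis (e_rho, e_phi, e_z) in cartesian components."""
    e_rho = np.array([np.cos(phi), np.sin(phi), 0.0])
    e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])
    e_z = np.array([0.0, 0.0, 1.0])
    return e_rho, e_phi, e_z

rho, phi, z = 2.0, 0.7, 1.0
e_rho, e_phi, e_z = cyl_basis(rho, phi)

# orthonormality: the mutual dot products form the identity matrix
B = np.stack([e_rho, e_phi, e_z])
assert np.allclose(B @ B.T, np.eye(3))

# check nabla_phi f = (1/rho) df/dphi for the sample field f(x, y, z) = x*y
f = lambda x, y, zz: x * y
grad = np.array([rho * np.sin(phi), rho * np.cos(phi), 0.0])  # exact grad of x*y at the point
h = 1e-6
df_dphi = (f(rho * np.cos(phi + h), rho * np.sin(phi + h), z)
           - f(rho * np.cos(phi - h), rho * np.sin(phi - h), z)) / (2 * h)
assert np.isclose(e_phi @ grad, df_dphi / rho, atol=1e-5)
```

The last assertion confirms that projecting the cartesian gradient on e_φ reproduces (1/ρ) ∂_φ f, as derived above by the chain rule.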

The resolution of the gradient in the two bases therefore becomes

   ∇ = e_ρ ∇_ρ + e_φ ∇_φ + e_z ∇_z
     = e_ρ ∂_ρ + e_φ (1/ρ) ∂_φ + e_z ∂_z.

Together with the only non-vanishing derivatives of the basis vectors,

   ∂_φ e_ρ = e_φ,   ∂_φ e_φ = −e_ρ,

we have the necessary tools for calculating in cylindrical coordinates. Note that it is the vanishing derivatives of the basis vectors that lead to the simple form of the vector operators in cartesian coordinates.

Laplacian
The laplacian in cylindrical coordinates takes the form

   ∇² = ∇ · ∇ = (e_ρ ∇_ρ + e_φ ∇_φ + e_z ∇_z) · (e_ρ ∇_ρ + e_φ ∇_φ + e_z ∇_z).

Using the linearity of the scalar product it can be rewritten to the form Σ_ij e'_i ∇'_i · e'_j ∇'_j, where e'_1 = e_ρ, ∇'_1 = ∇_ρ, etc. The interpretation of each of these terms is

   e'_i ∇'_i · e'_j ∇'_j = e'_i · ∇'_i (e'_j ∇'_j).                (2.63)

Applying the chain rule of differentiation, Eq. (2.63), and the derivatives of the basis vectors above, one can then show

   ∇² = ∂²_ρρ + (1/ρ) ∂_ρ + (1/ρ²) ∂²_φφ + ∂²_zz.

2.7.2 Spherical coordinates

The treatment of spherical coordinates follows the same line as cylindrical coordinates. The spherical coordinates are x' = (r, φ, θ), and the coordinate transformation is given by

   x = r sin(θ) cos(φ)
   y = r sin(θ) sin(φ)                                             (2.64)
   z = r cos(θ).

The domain of variation is 0 ≤ r < ∞, 0 ≤ θ ≤ π, 0 ≤ φ < 2π. The position vector is

   r(x') = r sin(θ) cos(φ) e_1 + r sin(θ) sin(φ) e_2 + r cos(θ) e_3.
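As a sanity check (not from the text), the cylindrical Laplacian formula can be compared against the exact cartesian Laplacian of a test function. The test field f(x, y, z) = x²y + z², with exact Laplacian 2y + 2, is an arbitrary choice; NumPy is assumed:

```python
import numpy as np

def f_cart(x, y, z):
    return x**2 * y + z**2          # exact cartesian laplacian: 2*y + 2

def f_cyl(rho, phi, z):
    return f_cart(rho * np.cos(phi), rho * np.sin(phi), z)

rho, phi, z = 1.3, 0.4, 0.8
h = 1e-4

def d1(g, t):                       # central first derivative
    return (g(t + h) - g(t - h)) / (2 * h)

def d2(g, t):                       # central second derivative
    return (g(t + h) - 2 * g(t) + g(t - h)) / h**2

# laplacian = d2_rho + (1/rho) d_rho + (1/rho^2) d2_phi + d2_z
lap_cyl = (d2(lambda r: f_cyl(r, phi, z), rho)
           + d1(lambda r: f_cyl(r, phi, z), rho) / rho
           + d2(lambda p: f_cyl(rho, p, z), phi) / rho**2
           + d2(lambda t: f_cyl(rho, phi, t), z))

lap_exact = 2 * rho * np.sin(phi) + 2      # 2*y + 2 at the sample point
assert np.isclose(lap_cyl, lap_exact, atol=1e-4)
```

All four terms of the formula are exercised; the agreement is to finite-difference accuracy.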

Local basis
The normalized tangent vectors along the directions of the spherical coordinates are

   e_r(x') = ∂_r r                  = sin(θ) cos(φ) e_1 + sin(θ) sin(φ) e_2 + cos(θ) e_3
   e_θ(x') = (1/r) ∂_θ r            = cos(θ) cos(φ) e_1 + cos(θ) sin(φ) e_2 − sin(θ) e_3
   e_φ(x') = (1/(r sin(θ))) ∂_φ r   = −sin(φ) e_1 + cos(φ) e_2.

They define an orthonormal basis, such that an arbitrary vector field may be resolved in these directions,

   a(x') = a_r(x') e_r(x') + a_θ(x') e_θ(x') + a_φ(x') e_φ(x'),

with a'_i = e'_i · a.

Resolution of the gradient
The gradient operator may also be resolved on the basis,

   ∇ = e_r ∇_r + e_θ ∇_θ + e_φ ∇_φ
     = e_r ∂_r + e_θ (1/r) ∂_θ + e_φ (1/(r sin(θ))) ∂_φ.           (2.65)

The non-vanishing derivatives of the basis vectors are

   ∂_θ e_r = e_θ        ∂_φ e_r = sin(θ) e_φ
   ∂_θ e_θ = −e_r       ∂_φ e_θ = cos(θ) e_φ
                        ∂_φ e_φ = −sin(θ) e_r − cos(θ) e_θ.        (2.66)

This is all we need for expressing vector operators in spherical coordinates.

Laplacian
The laplacian in spherical coordinates becomes

   ∇² = (e_r ∇_r + e_θ ∇_θ + e_φ ∇_φ) · (e_r ∇_r + e_θ ∇_θ + e_φ ∇_φ),

which, after using the linearity of the scalar product and Eq. (2.66), becomes

   ∇² = ∂²_rr + (2/r) ∂_r + (1/r²) ∂²_θθ + (cos(θ)/(r² sin(θ))) ∂_θ + (1/(r² sin²(θ))) ∂²_φφ.

Notice that the first two terms, involving radial derivatives, can be given alternative expressions:

   ∂²_rr φ + (2/r) ∂_r φ = (1/r²) ∂_r (r² ∂_r φ) = (1/r) ∂²_rr (r φ),

where φ is a scalar field.
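The equivalence of the alternative radial expressions is easy to confirm numerically. A minimal sketch (not from the text; NumPy assumed, with an arbitrary smooth test profile f):

```python
import numpy as np

f = lambda r: np.exp(-r) * r**2     # arbitrary smooth radial test profile
r0, h = 1.7, 1e-4

d1 = lambda g, t: (g(t + h) - g(t - h)) / (2 * h)
d2 = lambda g, t: (g(t + h) - 2 * g(t) + g(t - h)) / h**2

# d2_rr f + (2/r) d_r f  versus  (1/r) d2_rr (r f)
lhs = d2(f, r0) + (2 / r0) * d1(f, r0)
rhs = d2(lambda r: r * f(r), r0) / r0
assert np.isclose(lhs, rhs, atol=1e-5)
```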

Chapter 3

Tensors

In this chapter we will limit ourselves to the discussion of rank 2 tensors, unless stated otherwise. The precise definition of the rank of a tensor will become clear later.

3.1 Definition

A tensor, T, of rank two is a geometrical object that to any vector a associates another vector u = T(a) by a linear operation:

   T(m a)   = m T(a)                                               (3.1)
   T(a + b) = T(a) + T(b).                                         (3.2)

In other words, a rank 2 tensor is a linear vector operator. We will denote tensors (of rank 2 or higher) with capital bold-face letters. For reasons to become clear, we will also extend and use the dot-notation to indicate the operation of a tensor on a vector, so

   T · a =def T(a).                                                (3.3)

Any linear transformation of vectors, such as rotations, reflections or projections, is an example of a tensor. In fact, we have already encountered tensors in disguise. The operator T = c× is a tensor: to each vector a it associates the vector T(a) = c × a, obtained by rotating a 90° counter-clockwise around c and scaling it with the magnitude |c|. Since the cross-product is linear in the second argument, T is linear and thus a tensor.

Physical examples of tensors include the inertia tensor, or the moment of inertia, I, that specifies how a torque, τ (a vector), changes the angular momentum, dl/dt (a vector):

   dl/dt = I · τ.

Another example is the (transposed) stress tensor, σᵗ, that specifies the force f a continuous medium exerts on a surface element defined by the normal vector

n,

   f = σᵗ · n.

A tensor often associated with the stress tensor is the strain tensor, U, that specifies how a strained material is distorted, ∆u, in some direction ∆r:

   ∆u = U · ∆r.

We shall be more specific on these physical tensors later.

3.2 Outer product

The outer product a ⊗ b, or simply ab, between two vectors a and b is a tensor defined by the following equation, where c is any vector:

   (ab)(c) = a (b · c).                                            (3.4)

In words, for each c the tensor ab associates a vector in the direction of a, with a magnitude equal to the projection of c onto b. In order to call the object (ab) a tensor we should verify that it is a linear operator. Using the definition, demonstrating that (ab) indeed is a tensor amounts to "moving brackets":

   (ab) · (m c)   = a (b · (m c)) = m a (b · c) = m (ab) · c
   (ab) · (c + d) = a (b · (c + d)) = a (b · c) + a (b · d)
                  = (ab) · c + (ab) · d.                           (3.5)

We note that since the definition, Eq. (3.4), involves vector operations that bear no reference to coordinates/components, the outer product will itself be invariant to the choice of coordinate system. Note also that since the definition allows us to "place the brackets where we want", (ab) · c = a (b · c), we may drop the brackets altogether, which will ease the notation. The outer product is also known in the literature as the tensor, exterior, direct or dyadic product. The tensor formed by the outer product of two vectors is called a dyad.

3.3 Basic tensor algebra

We can form more general linear vector operators by taking sums of dyads. The sum of any two tensors, S and T, and the multiplication of a tensor with a scalar m are naturally defined by

   (S + T) · c = S · c + T · c
   (m S) · c   = m (S · c),                                        (3.6)

where c is any vector.
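In components, the dyad ab is represented by the matrix of products a_i b_j, so its defining property and linearity can be verified directly. A small sketch (not from the text; NumPy assumed, random vectors as arbitrary inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))
m = 2.5

ab = np.outer(a, b)                 # component matrix of the dyad ab

# defining property, Eq. (3.4): (ab)·c = a (b·c)
assert np.allclose(ab @ c, a * (b @ c))

# linearity, Eq. (3.5)
assert np.allclose(ab @ (m * c), m * (ab @ c))
assert np.allclose(ab @ (c + d), ab @ c + ab @ d)
```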

The sum of two or more dyads is called a dyadic. As we shall demonstrate in section 3.4, any tensor can be expressed as a dyadic, but not necessarily as a single dyad. It is easy to show that (S + T) and (m S) are also tensors, i.e. they satisfy the definition of being linear vector operators. Definition (3.6) and the properties (3.5) guarantee that dyad products, sums of tensors and dot products of tensors with vectors satisfy all the usual algebraic rules for sums and products. For instance, the outer product between two vectors of the form c = Σ_i m_i a_i and d = Σ_j n_j b_j, where m_i and n_j are scalars, is

   cd = (Σ_i m_i a_i)(Σ_j n_j b_j) = Σ_ij m_i n_j a_i b_j,         (3.7)

i.e. a sum of all dyad combinations a_i b_j.

We can also extend the definition of the dot product. For any vector c we define the dot product of two tensors by the following formula:

   (T · S) · c = T · (S · c).                                      (3.8)

In words, application of the operator (T · S) to any vector means first applying S and then T. Since the composition of two linear functions is a linear function, (T · S) is a tensor itself. Sums and products of tensors obey the usual rules of algebra, except that dot multiplication of two tensors, in general, is not commutative:

   T + S       = S + T                                             (3.9)
   T · (S + P) = T · S + T · P                                     (3.10)
   T · (S · P) = (T · S) · P.                                      (3.11)

Since the dot product of a dyad with a vector is not in general commutative, we define a dot product of a dyad with a vector on the left in the obvious way,

   c · (ab) = (c · a) b,                                           (3.12)

and correspondingly for a dyadic.

3.3.1 Transposed tensors

Since in general T · c ≠ c · T, it makes sense to introduce the transpose Tᵗ of a tensor:

   Tᵗ · c = c · T.                                                 (3.13)

Since c · (ab) = (ba) · c, we have for a dyad

   (ab)ᵗ = ba.                                                     (3.14)

As we shall see in section 3.4, any tensor can be expressed as a sum of dyads. Using this fact, the following properties can easily be proved:

   (T + S)ᵗ = Tᵗ + Sᵗ                                              (3.15)
   (T · S)ᵗ = Sᵗ · Tᵗ                                              (3.16)
   (Tᵗ)ᵗ    = T.                                                   (3.17)

3.3.2 Contraction

Another useful operation on tensors is that of contraction. The contraction, ab : cd, of two dyads results in a scalar, defined by

   ab : cd = (a · c)(b · d).                                       (3.18)

Note the following useful relation for the contraction of two dyads formed by basis vectors:

   e_i e_j : e_k e_l = δ_ik δ_jl.

The contraction of two dyadics is defined as the bilinear operation

   (Σ_i m_i a_i b_i) : (Σ_j n_j c_j d_j) = Σ_ij m_i n_j (a_i · c_j)(b_i · d_j),   (3.19)

where m_i and n_j are scalars. In some textbooks another type of contraction (a "transposed contraction") is also defined,

   ab ·· cd = ab : dc.

Similarly, for two dyadics,

   (Σ_i m_i a_i b_i) ·· (Σ_j n_j c_j d_j) = Σ_ij m_i n_j (a_i · d_j)(b_i · c_j).

3.3.3 Special tensors

Some special tensors arise as a consequence of the basic tensor algebra. A symmetric tensor is a tensor that satisfies Tᵗ = T. An anti-symmetric tensor is a tensor that satisfies Tᵗ = −T. Any tensor T can trivially be decomposed into a symmetric and an anti-symmetric part. To see this, define

   S(T) = T + Tᵗ
   A(T) = T − Tᵗ.                                                  (3.20)

Then Sᵗ = S is symmetric and Aᵗ = −A is anti-symmetric, and

   T = (1/2) S + (1/2) A.
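Both the symmetric/anti-symmetric decomposition and the contraction rule are one-liners in components. A sketch (not from the text; NumPy assumed, random inputs as arbitrary data):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))

# decomposition T = S/2 + A/2 with S = T + T^t, A = T - T^t, Eq. (3.20)
S = T + T.T
A = T - T.T
assert np.allclose(S, S.T)                       # S is symmetric
assert np.allclose(A, -A.T)                      # A is anti-symmetric
assert np.allclose(T, 0.5 * S + 0.5 * A)

# contraction of two dyads, Eq. (3.18): ab : cd = (a·c)(b·d)
a, b, c, d = rng.standard_normal((4, 3))
contraction = np.sum(np.outer(a, b) * np.outer(c, d))   # sum_ij (ab)_ij (cd)_ij
assert np.isclose(contraction, (a @ c) * (b @ d))
```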

Finally, an important tensor is the identity or unit tensor, 1. It may be defined as the operator which, acting on any vector, yields the vector itself. Evidently, for all tensors A,

   1 · A = A · 1 = A.

If m is any scalar, the product m1 is called a constant tensor and has the property

   (m1) · A = A · (m1) = m A.

Constant tensors therefore commute with all tensors, and no other tensors have this property.

3.4 Tensor components in orthonormal bases

Just as is the case with vectors, we can represent a tensor in terms of its components once we choose a basis. Let therefore E be an orthonormal basis and T be any tensor. Define

   T_ij = e_i · T · e_j.                                           (3.21)

In words, T_ij is the i'th component of the image of the j'th basis vector; T_ij is also called the ij'th component of T. Using the linearity of T and of scalar products, one observes that the tensor is fully specified by these 3 × 3 = 9 quantities. Indeed, for any vector c = Σ_j c_j e_j one obtains the i'th component of the image u = T · c by

   e_i · T · (Σ_j c_j e_j) = Σ_j c_j (e_i · T · e_j) = Σ_j T_ij c_j.   (3.22)

Thus,

   T · c = Σ_ij T_ij c_j e_i.                                      (3.23)

Note that the kl component of a dyad e_i e_j is

   (e_i e_j)_kl = (e_k · e_i)(e_j · e_l) = δ_ik δ_jl,

and that the 3 × 3 dyad combinations e_i e_j form a basis for the tensor space:

   T = Σ_ij T_ij e_i e_j.                                          (3.24)

Thus, the concepts of a dyadic and a linear vector operator or tensor are identical, and are equivalent to the concept of a linear vector function, in the sense that every linear vector function defines a certain tensor or dyadic, and conversely. Since two indices are required for a full representation of T, it is classified as a tensor of second rank. We will use the notation [T]_{E,ij} for the ij component of the tensor in basis E, or simply [T]_ij when it is clear from the context (or irrelevant) which basis is implied.

In analogy with the notation used for vectors we write T = [T]_E — or simply T = [T] when the basis is irrelevant — as a short-hand notation for the component matrix of a tensor with respect to a given basis. Thus, the components T_ij can be arranged in a matrix¹

                   ⎛ T11  T12  T13 ⎞
   T = (T_ij) =def ⎜ T21  T22  T23 ⎟                               (3.25)
                   ⎝ T31  T32  T33 ⎠

This is in analogy to the notation used for triplets, a = (a_i). Conveniently, for the inverse operation (i.e. the tensor obtained by expanding the components along the basis dyads) we shall use the notation

   T = (T)_E =def Σ_ij T_ij e_i e_j.                               (3.26)

The row-column convention in Eq. (3.25) implies that the representation of the dyad e_i e_j is a matrix with a 1 in the i'th row and j'th column and zero everywhere else. As for vectors, one should notice the difference between a tensor and its matrix representation. The concept of a matrix is purely algebraic: matrices are arrays of numbers which may be added and multiplied according to particular rules (see below). The concept of a tensor is geometrical: a tensor may be represented in any particular coordinate system by a matrix, but the matrix must be transformed according to a definite rule if the coordinate system is changed.

3.4.1 Matrix algebra

Our definitions of the tensor-vector dot product and the tensor-tensor dot product are consistent with the ordinary rules of matrix algebra.

Tensor · vector
For instance, Eq. (3.22) shows that if u = T · c, then the relationship between the components of u, T and c in any orthonormal basis will satisfy

   u_i = [T · c]_i = Σ_j T_ij c_j.                                 (3.27)

In standard matrix notation this is precisely equivalent to

   ⎛ u1 ⎞   ⎛ T11  T12  T13 ⎞   ⎛ c1 ⎞
   ⎜ u2 ⎟ = ⎜ T21  T22  T23 ⎟ · ⎜ c2 ⎟
   ⎝ u3 ⎠   ⎝ T31  T32  T33 ⎠   ⎝ c3 ⎠

¹ In accordance with standard notation in the literature, the bracket around the components, (T_ij), is used to indicate the matrix collection of these.

or u = T · a, where the triplets u and a are to be considered as columns. Therefore, we have the correspondence between the tensor operation and its matrix representation:

   [T · c] = [T] · [c].

Tensor · tensor
Also, let us consider the implication of the definition of the dot product of two tensors, Eq. (3.8), in terms of the components:

   T · (S · c) = T · (Σ_kj S_kj c_j e_k)
               = Σ_kj S_kj c_j T · e_k
               = Σ_kj S_kj c_j Σ_i T_ik e_i
               = Σ_ij (Σ_k T_ik S_kj) c_j e_i.                     (3.28)

Comparing this with Eq. (3.23) shows that

   [T · S]_ij = Σ_k T_ik S_kj,                                     (3.29)

which is identical to the ij'th element of the matrix product T · S. A more transparent derivation is obtained by noting that for any vector c,

   (e_k e_l) · c = e_k (e_l · c) = e_k c_l,                        (3.30)

and consequently

   (e_i e_j · e_k e_l) · c = (e_i e_j) · (e_k e_l · c) = (e_i e_j) · e_k c_l
                           = e_i (e_j · e_k) c_l = δ_jk e_i c_l = δ_jk (e_i e_l) · c.

Therefore, one obtains the simple result for the dot product of two basis dyads,

   e_i e_j · e_k e_l = δ_jk e_i e_l.

Now, evaluating T · S in terms of its dyadic representations and collecting terms leads to

   T · S = (Σ_ij T_ij e_i e_j) · (Σ_kl S_kl e_k e_l)
         = Σ_ijkl T_ij S_kl (e_i e_j) · (e_k e_l)
         = Σ_ijkl T_ij S_kl δ_jk e_i e_l
         = Σ_ij (Σ_l T_il S_lj) e_i e_j.                           (3.31)

In obtaining the last expression we have exchanged the summation indices j and l. The final expression confirms Eq. (3.29). In summary,

   [T · S] = [T] · [S].
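The dyadic evaluation of T · S can be reproduced literally with an index contraction against δ_jk and compared with the matrix product. A sketch (not from the text; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))

# Eq. (3.31): T·S = sum_ijkl T_ij S_kl delta_jk e_i e_l, i.e. contract j with k
delta = np.eye(3)
dyadic_product = np.einsum('ij,kl,jk->il', T, S, delta)
assert np.allclose(dyadic_product, T @ S)        # agrees with the matrix product

# and the defining property (3.8): (T·S)·c = T·(S·c) for any c
c = rng.standard_normal(3)
assert np.allclose((T @ S) @ c, T @ (S @ c))
```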

Tensor + tensor
One can similarly show that the definition of the sum of two tensors, Eq. (3.6), implies that tensors are added by adding their component matrices. Specifically,

   (S + T) · c = Σ_ij S_ij e_i e_j · c + Σ_ij T_ij e_i e_j · c     (def. (3.6))
               = Σ_ij S_ij e_i (e_j · c) + Σ_ij T_ij e_i (e_j · c)
               = Σ_ij (S_ij + T_ij) e_i (e_j · c)
               = (Σ_ij (S_ij + T_ij) e_i e_j) · c.                 (3.32)

Consequently,

   S + T = Σ_ij (S_ij + T_ij) e_i e_j,                             (3.33)

or equivalently

   [S + T]_ij = S_ij + T_ij,                                       (3.34)

which means that the component matrix of S + T is obtained by adding the component matrices of S and T respectively:

   [S + T] = [S] + [T].                                            (3.35)

It is left as an exercise to demonstrate that [m T]_ij = m T_ij, where m is a scalar.

Transposition
Finally, the definition of the transpose of a tensor, Eq. (3.13), is also consistent with the algebraic definition of transposition. Indeed,

   c · (Σ_ij T_ij e_i e_j) = Σ_ij T_ij (c · e_i) e_j
                           = Σ_ij T_ij e_j (e_i · c)
                           = (Σ_ij T_ji e_i e_j) · c,              (3.36)

so comparing with the definition of the transpose, Tᵗ · c = c · T, the components satisfy

   (Tᵗ)_ij = T_ji,                                                 (3.37)

in agreement with the matrix definition of transposition. In other words, the matrix representation of the transposed tensor equals the transposed matrix representation of the tensor itself:

   [Tᵗ] = [T]ᵗ.

Eq. (3.36) also demonstrates that

   [c · T]_i = Σ_j c_j T_ji.                                       (3.38)

In keeping with the convention that c represents a column, the matrix form of this equation is cᵗ · T, where cᵗ and the image cᵗ · T will be rows.
3.4.2 Two-point components

As a curiosity, we mention that it is not compulsory to use the same basis for the left and the right basis-vectors when expressing a tensor in terms of its components. There are indeed cases where it is advantageous to use different left and right bases. Such tensor components are called two-point components. In these cases one should make it clear in the notation that the first index of the component refers to a different basis than the second. For a two-point component matrix T̃ we write

   T = (T̃)_{E',E} = Σ_ij T̃_ij e'_i e_j,

and similarly [T]_{E'E,ij} for T̃_ij. Two-point components do not involve any extra formalism, though. For instance, getting the components in the dyad basis E'E' or in the basis EE is only a question of respectively multiplying with the basis vectors e'_i to the right or the basis vectors e_i to the left on the expression Σ_kl T̃_kl e'_k e_l and collecting the terms. For instance,

   [T]_{EE,ij} = e_i · (Σ_kl T̃_kl e'_k e_l) · e_j
               = Σ_kl T̃_kl (e_i · e'_k)(e_l · e_j)
               = Σ_kl T̃_kl (e_i · e'_k) δ_lj
               = Σ_k T̃_kj (e_i · e'_k),                            (3.39)

showing that the ij'th component in basis EE is

   T_ij = Σ_k T̃_kj (e_i · e'_k).

3.5 Tensor fields

As for scalar and vector fields, one speaks of a tensor field (here of rank 2) if to each position x = (x_1, x_2, x_3) of a region R in space there corresponds a tensor T(x). The best known examples of tensor fields in physics are the stress and strain tensor fields.

In cartesian coordinates a tensor field is given by

   T(x) = Σ_ij T_ij(x) e_i e_j.                                    (3.40)

Note in particular that for each i = 1, 2, 3,

   a_i(x) = T(x) · e_i = Σ_j T_ji(x) e_j   and   b_i(x) = e_i · T(x) = Σ_j T_ij(x) e_j   (3.41)

are vector fields with components [a_i]_j = T_ji and [b_i]_j = T_ij, respectively.

3.5.1 Gradient, divergence and curl

Let a cartesian coordinate system be given, C = (O, R).

Nabla operator
For any vector field a(r) we can construct a tensor field by the nabla operator. Inserting a(r) = Σ_i a_i(r) e_i "blindly" into the definition Eq. (2.41) gives

   (∇a)(r) = Σ_i e_i ∂_i (Σ_j a_j(r) e_j) = Σ_ij (∂_i a_j)(r) e_i e_j.   (3.42)

Assuming that ∇ is a proper vector operator², (∇a)(r) then represents a tensor field with the components

   [∇a]_ij(r) = (∂_i a_j)(r).                                      (3.43)

Notice that the components of ∇a are those of an outer product between two ordinary vectors, [ba]_ji = b_j a_i. The tensor ∇a represents how the vector field a changes for a given displacement ∆r,

   a(r + ∆r) ≈ a(r) + ∆r · (∇a)(r),   or   da = dr · (∇a).

An alternative expression for the differential da was obtained in Eq. (2.44),

   da = dr · (∇a) = (dr · ∇) a.

Notice that this identity is in accordance with the usual definition for a dyad operating on a vector from the left.

² We recall that the operator ∇ has been defined relative to a chosen coordinate system. It remains to be proven that operators derived from ∇, such as gradients, curls, divergences etc., "behave as" proper scalars or vectors. We return to this point in section 4.5.
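The first-order relation da = dr · (∇a) can be checked directly on a concrete field. A sketch (not from the text; NumPy assumed, with the field a(r) = (xy, yz, zx) chosen arbitrarily and its gradient tensor written out by hand):

```python
import numpy as np

def a_field(r):
    x, y, z = r
    return np.array([x * y, y * z, z * x])

def grad_a(r):
    # [grad a]_ij = d_i a_j; rows: derivative direction i, columns: component j
    x, y, z = r
    return np.array([[y,   0.0, z  ],
                     [x,   z,   0.0],
                     [0.0, y,   x  ]])

r = np.array([1.0, 2.0, 3.0])
dr = 1e-6 * np.array([0.3, -0.5, 0.2])

# first-order change of the field: da ~= dr · (grad a)
da_exact = a_field(r + dr) - a_field(r)
da_linear = dr @ grad_a(r)
assert np.allclose(da_exact, da_linear, atol=1e-10)
```

The residual is second order in |dr|, as expected from the Taylor expansion above.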

Divergence
The divergence of a tensor field is obtained straightforwardly by applying the definition of ∇ and the dot product. With respect to a cartesian basis R we have

   ∇ · T(x) = (Σ_i e_i ∂_i) · (Σ_jk T_jk(x) e_j e_k)
            = Σ_ik ∂_i T_ik(x) e_k
            = Σ_k (Σ_i ∂_i T_ik(x)) e_k.                           (3.44)

Consequently, a = ∇ · T is a vector with components a_k = Σ_i ∂_i T_ik, cf. Eq. (3.41). The result is actually easier seen in pure subscript notation,

   [∇ · T]_k = Σ_i ∂_i T_ik,

because it follows directly from the matrix algebra, Eq. (3.38). Another way of viewing the divergence of a tensor field is to take the divergence of each of the vector fields a_i = T · e_i:

   ∇ · T = Σ_i (∇ · a_i) e_i.

Curl
The curl of a tensor field is obtained similarly to the divergence. The result is most easily obtained by applying the algebraic definition of the curl, Eq. (2.47). Then we obtain

   [∇ × T]_ij = Σ_mn ε_imn ∂_m T_nj.

Thus the curl of a tensor field is another tensor field. As for the divergence, we obtain the same result by considering the curl operator on each of the three vector fields a_j = T · e_j:

   ∇ × T = Σ_j (∇ × a_j) e_j,

where an outer vector product is involved in each of the terms (∇ × a_j) e_j.
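The subscript formula for the divergence of a tensor field is easy to test numerically. A sketch (not from the text; NumPy assumed), using the field T_ik(x) = x_i x_k, for which Σ_i ∂_i (x_i x_k) = 3 x_k + x_k = 4 x_k by hand:

```python
import numpy as np

def T_field(x):
    return np.outer(x, x)           # sample tensor field T_ik = x_i * x_k

def div_T(x, h=1e-5):
    """[div T]_k = sum_i d_i T_ik, via central finite differences."""
    d = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        d += (T_field(x + e) - T_field(x - e))[i] / (2 * h)   # row i: d_i T_ik
    return d

x = np.array([0.5, -1.0, 2.0])
assert np.allclose(div_T(x), 4 * x, atol=1e-6)
```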

3.5.2 Integral theorems

The two important theorems for vector fields, Gauss' theorem and Stokes' theorem, can be directly translated to tensor fields. The analogy is most easily seen in tensor notation. For a vector field a(r) the two theorems read

   ∫_V (Σ_i ∂_i a_i) dV            = ∮_S (Σ_i a_i n_i) dS     (Gauss)
   ∫_S (Σ_ijk ε_ijk ∂_j a_k n_i) dS = ∮_C (Σ_i a_i dx_i).     (Stokes)   (3.45)

The corresponding tensor versions are then

   ∫_V (Σ_i ∂_i T_il) dV            = ∮_S (Σ_i T_il n_i) dS    (Gauss)
   ∫_S (Σ_ijk ε_ijk ∂_j T_kl n_i) dS = ∮_C (Σ_i T_il dx_i).    (Stokes)   (3.46)

Here l refers to any component index, l = 1, 2, 3. The reason these formulas also work for tensors is that for a fixed l, a_l = T · e_l = Σ_i T_il e_i defines a vector field, and Gauss' and Stokes' theorems work for each of these.

Due to Gauss' theorem, the physical interpretation of the divergence of a tensor field is analogous to the divergence of a vector field. For instance, if a flow field v(r, t) is advecting a vector a(r, t), then the outer product J(r, t) = v(r, t) a(r, t) is a tensor field, where [J]_ij(r, t) = v_i(r, t) a_j(r, t) describes how much of the j'th component of the vector a is transported in direction e_i. The divergence ∇ · J is then a vector where each component [∇ · J]_j corresponds to the accumulation of a_j in an infinitesimal volume due to the flow. In other words, if a represents a conserved quantity then we have a continuity equation for the vector field a:

   ∂a/∂t + ∇ · J = 0.

Chapter 4

Tensor calculus

In the two preceding chapters we have developed the basic algebra for scalars, vectors and rank 2 tensors. However, in many physical problems it is natural to introduce tensors of rank higher than two¹. To insist on a direct notation for these objects becomes increasingly tedious and unnatural.

We have seen that the notation of vectors and tensors comes in two flavours. The first notation insists on not referring to coordinates or components at all. This geometrically defined notation, called direct notation, is explicitly invariant to the choice of coordinate system, and is the one adopted in the first part of chapters two and three (2.1-2.3 and 3.1-3.3). The second notation, based on components and indices, is called tensor notation and is the one that naturally arises with a given coordinate system. Indeed, in quantifying any physical system we will eventually have to give a numerical representation, which for vectors or tensors implies specifying the value of their components with respect to some basis. Since tensor notation often requires knowledge of transformation rules between coordinate systems, one may validly ask why use it at all.

Seemingly, a discrepancy exists between the two notations, in that the latter appears to depend on the choice of coordinates. In section 4.2 we remove this discrepancy by learning how to transform the components of vectors and tensors when the coordinate system is changed. As we shall see, these transformation rules guarantee that one can unambiguously express vectorial or tensorial relations using component/tensor notation, because the expressions will preserve their form upon a coordinate transformation. We should already expect this to be the case, since we have not made any specific assumptions about the coordinate system in the propositions regarding relations between vector or tensor components already presented, except for this system being rectangular. It is possible to generalize the tensor notation to ensure that the formalism takes the same form in non-rectangular coordinate systems as well. This requirement is known as general covariance. In the following we will, however, restrict the discussion to rectangular coordinates. It is therefore strongly recommended to become familiar with the tensor notation once and for all.

¹ The Levi-Civita symbol being an example of a (pseudo)-tensor of rank 3.

(2. Eq.notation for these objects become increasingly tedious and unnatural.10) a= i a i gi = a i gi and for the divergence of a vector ﬁeld ·a = i ∂i ai = ∂ i ai The convention of implicit summation which is very convenient for tensor calculus is due to Einstein and is referred to as Einsteins summation rule. [a + b]i = [a]i + [b]i . In particular. an equation with exactly one free index in each term expresses a vector identity. Eq.13). in the formula for the addition of two vectors in terms of its components. reads a·b= i a i bi . The same free indices must occur in each term. Eq. In this and the following chapters we shall adopt the implicit summation rule unless otherwise stated. For instance the scalar product in an orthonormal basis. etc.3. It is therefore strongly recommended to become familiar with the tensor notation once and for all. it occurs in any component representation of a “dot” or inner product. j a free index) (ij free indices.1) Similar. which is reviewed in section 4.18). (2. Here. A free index occurs exactly once in a term and has the meaning “for any component”. For instance. Hence we may write a·b [T · a]j [T · S]ij T:S = = = = a i bi Tji ai i k Tik Skj ij Tij Sij i = a i bi = Tji ai = Tik Skj = Tij Sij (i a bound index) (i a bound index. an expression with two free indices in each term expresses a tensor identity. for the expansion of the components along the basis vectors. In tensor notation no new notation needs to be deﬁned. i is a bound index and can be renamed without changing the meaning of the expression. A bound index (or a dummy index) refers to a summation index. 4. The situation where a dummy index appears exactly twice in a product occurs so often that it is convenient to introduce the convention that the summation symbol may be left out in this speciﬁc case. index i is free. As in the above example. (2. k bound index) ij bound indices (4. 
only those we have already seen (some in disguise).1 Tensor notation and Einsteins summation rule Using tensor notation an index will either be free or bound. 43 .
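The bound/free index bookkeeping in the summary above maps directly onto NumPy's `einsum` notation, where repeated indices are summed and the surviving indices label the result. A sketch (not from the text; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = rng.standard_normal((2, 3))
T, S = rng.standard_normal((2, 3, 3))

# bound indices are summed away; free indices survive in the result
assert np.isclose(np.einsum('i,i->', a, b), a @ b)             # a_i b_i
assert np.allclose(np.einsum('ji,i->j', T, a), T @ a)          # T_ji a_i
assert np.allclose(np.einsum('ik,kj->ij', T, S), T @ S)        # T_ik S_kj
assert np.isclose(np.einsum('ij,ij->', T, S), np.sum(T * S))   # T_ij S_ij
```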

Note.4) reads v = A · v. The basis vectors of the primed basis may be revolved into the unprimed basis e j = aji ei where aji =def ej · ei (4. The matrix version of Eq. (4. A = (aij ). Note that the transformation matrix A is simply the two-point components of the identity tensor A = [1]E 2 In E keeping with Einsteins summation rule this expression is short hand notation for X X T= Tij ei ek = Tjl ej el ik jl 44 .8) where Tik = [T]E . (4. that the two transformations diﬀer in whether the sum is over the ﬁrst or the second index of aij . (4. Using the same procedure the primed and unprimed components of a tensor T = Tij ei ek = Tjl ej el 2 are related by Tik Tik =ei·T·ek = ei · T · ek = aij akl Tjl = aij akl Tjl . vj = [v]E .5) (sum over i). The relation between the primed and unprimed components of any (proper) vector v = c i ei = c j e j can be obtained by dotting with e j or ei vj vi = aji vi = aji vj (4.j and vi = [v]E.7) (4.2) Here.4.6) and Eq.3) represents the cosine of the angle between ej and ei .2 Orthonormal basis transformation Let E and E be two diﬀerent orthonormal bases. (4.ik .4) (4.5) v = At · v .i . (4.ik and Tik = [T]E.

4.2.1 Cartesian coordinate transformation

A particular case of an orthonormal basis transformation is the transformation between two cartesian coordinate systems, C = (O, R) and C' = (O', R'). The form of the vector transformation, Eq. (4.4), can be directly translated to the coordinates themselves. To show this we employ the simple relation between coordinates and the position vector, valid in any cartesian system. The displacement vector between any two points is given by ∆r = ∆x_i e_i. This vector — or its differential analogue dr = dx_i e_i — is a proper vector, independent of the coordinate system. The following relation between the coordinate differentials dx_i and dx'_j must then be satisfied:

   dr = dx_i e_i = dx'_j e'_j.

Multiplying with e'_j we obtain dx'_j = dx_i e_i · e'_j, or equivalently

   ∂x'_j/∂x_i = e'_j · e_i = a_ji,                                 (4.9)

where a_ji is defined as in Eq. (4.3). Since the rhs. is constant (constant basis vectors), the integration of Eq. (4.9) gives

   x'_j = a_ji x_i + d_j.                                          (4.10)

Here, d_j is the j'th coordinate of the origin, O, of the unprimed coordinate system as seen from the primed coordinate system, d = (x')_{C'}(O). In matrix notation, Eq. (4.10) takes the form

   x' = A · x + d,                                                 (4.11)

which is identical to Eq. (4.4) except for the optional displacement, d.

4.2.2 The orthogonal group

Considering Eq. (4.4) and Eq. (4.5) as a set of two matrix equations, we note that the transposed matrix Aᵗ operates as the inverse of A:

   A · Aᵗ = 1,                                                     (4.12)

where 1 is the identity matrix, (1)_ij = δ_ij. In index notation,

   (A · Aᵗ)_ij = a_ik a_jk = δ_ij.                                 (4.13)

A matrix with the above property is called orthogonal. From Eq. (4.12),

   det(A · Aᵗ) = det(A)² = 1,

and the set of all orthogonal 3×3 matrices constitutes a continuous group³ called O(3).

³ Recall that a mathematical group (G, ∗) is a set G with a binary operator, ∗, that satisfies the following axioms:

1. Closure: ∀ a, b ∈ G : a ∗ b ∈ G.
2. Associativity: ∀ a, b, c ∈ G : (a ∗ b) ∗ c = a ∗ (b ∗ c).
3. Identity element: There exists an element e ∈ G so that ∀ a : a ∗ e = e ∗ a = a.
4. Inverse element: For each a ∈ G there exists an inverse a⁻¹ ∈ G such that a ∗ a⁻¹ = a⁻¹ ∗ a = e.

Orthogonal matrices form a group with matrix multiplication as the binary operator and the identity matrix as the identity element. Notice in particular that the composition or multiplication of two orthogonal transformations is orthogonal. Since det(A)² = 1,

   det(A) = ±1.

Transformations with determinant +1 and −1 cannot be continuously connected. Consequently, the set of transformations with determinant +1 itself forms a group, called SO(3), which represents the set of all rotations. By considering a pure reflection in the origin of a cartesian coordinate system,

   x'_i = −x_i = r_ij x_j,   R = (r_ij),   r_ij = −δ_ij,

we see that det(R) = −1 and R · R = 1, so the set Z(2) = {1, R} also forms a group. Therefore, we may write O(3) = Z(2) ⊗ SO(3): any orthogonal matrix can be decomposed into a pure rotation and an optional reflection. Note that any matrix with det = −1 will change the handedness of the coordinate system.

4.2.3 Algebraic invariance

In view of the fact that the various vector and tensor operations (sections 2.1-3.3) were defined without reference to a coordinate system, it is clear that all algebraic rules for computing sums, products, transposes, etc. of vectors and tensors (cf. Eq. (4.1)) will be unaffected by an orthogonal basis transformation. Thus, for example,

   a · b          = Σ_i a_i b_i = Σ_j a'_j b'_j
   [a + b]_{E',i} = a'_i + b'_i
   [T · c]_{E',i} = Σ_j T'_ij c'_j
   [Tᵗ]_{E',ij}   = T'_ji,
   etc.,                                                           (4.14)

where a_j = [a]_{E,j}, a'_i = [a]_{E',i}, T'_ji = [T]_{E',ji}, etc. Any property or relation between vectors and tensors which is expressed in the same algebraic form in all coordinate systems has a geometrical meaning independent of the coordinate system, and is called an invariant property or relation.

The invariance of the algebraic rules for vector and tensor operations can be verified directly by using the orthogonality property of the transformation matrix, Eq. (4.13). In words, whenever a "dot" or inner product appears between two vectors, a vector and a tensor or two tensors, we can pull together the two transformation matrices and exploit Eq. (4.13). For instance, to verify the third invariance above,

   [T · c]_{E',i} = T'_ij c'_j                       (transf. rules for components)
                  = (a_ik a_jl T_kl)(a_jm c_m)
                  = a_ik T_kl (a_jl a_jm) c_m
                  = a_ik T_kl δ_lm c_m                (orthogonality, summing over j)
                  = a_ik (T_kl c_l)
                  = a_ik [u]_{E,k}
                  = [u]_{E',i}.                       (transf. rules for components)   (4.15)

We shall return to other important invariances in chapter 5.

4.2.4 Active and passive transformation

Comparing the transformation of the components of a vector upon a basis transformation, Eq. (4.4), with the matrix representation of a tensor operating on a vector, Eq. (3.27), it is clear that these two operations are defined in the same manner algebraically. Since the first equation only expresses the change of components due to a change of basis (the vector itself remains unchanged), it is referred to as a passive transformation, as opposed to the second case, which reflects an active transformation of the vector. Here, we shall show a simple algebraic relation between the two types of transformation.

Let O represent an orthogonal tensor, i.e. a tensor for which Oᵗ · O = 1, where 1 as usual denotes the identity tensor. In any orthonormal basis, the matrix representation of this identity will be that of Eq. (4.12), irrespective of the chosen coordinate system. Further, let a new basis system E' = {e'_1, e'_2, e'_3} be defined by

   e'_i = O · e_i,

where E = {e_1, e_2, e_3} is the old basis. We shall use the short-hand notation E' = O(E). Due to the orthogonality property of O, E' will be orthonormal iff E is.

The components of a vector v in the primed basis relate to the unprimed components as

    v'i = e'i · v
        = (O · ei) · v
        = [O · ei]E,j [v]E,j                       (Scalar product wrt. E)
        = [O]jk [ei]k vj
        = [O]jk δik vj
        = [O]ji vj
        = [Ot]ij vj
        = (Ot · v)i                                (4.16)

This shows that the matrix A representing the basis transformation E → E' = O(E) satisfies A = [Ot]E = [O−1]E. Being explicit about the bases involved, we may write this identity as

    A = [1]E',E = [O−1]E,   where O−1(E') = E,     (4.17)

which is to say that the matrix representing a passive transformation from one basis to another equals the matrix representation, wrt. E, of the tensor mapping the new basis vectors onto the old ones.

4.2.5 Summary on scalars, vectors and tensors

To summarize the present section, we may give an algebraic definition of scalars, vectors and tensors by referring to the transformation properties of their components upon a change of coordinates. For a cartesian coordinate transformation, Eq. (4.10), the following transformation properties of scalars, vectors and tensors apply.

Scalars. A quantity m is a scalar if it has the same value in every coordinate system. Consequently, it transforms according to the rule

    m' = m.

Vectors. A triplet of real numbers, v, is a vector if the components transform according to

    v'j = aji vi.

The matrix notation of the transformation rule is v' = A · v.

Rank 2 tensors. A rank two tensor, T, is a 3 × 3 matrix of real numbers, Tij, which transforms as the outer product of two vectors,

    T'ij = aik ajl Tkl.

Again it should be noticed that it is the transformation rule which guarantees that the nine quantities collected in a matrix form a tensor. The matrix notation of this transformation is

    T' = A · T · At.

4.3 Tensors of any rank

4.3.1 Introduction

Tensors of rank higher than two are obtained by a straightforward iteration of the definition of the outer product. For instance, the outer product between a dyad a1a2 and a vector a3 can be defined as the linear operator, a1a2a3, that maps any vector c into a tensor according to the rule

    a1a2a3(c) =def a1a2 (a3 · c).                  (4.18)

In words, for each c the object a1a2a3 associates a tensor obtained by multiplying the dyad a1a2 with the scalar a3 · c. Addition of triple products and scalar multiplication is defined analogously to the algebra for dyads. In section 3.4 we demonstrated that a basis for second order tensors is obtained by forming all 3 × 3 dyad combinations ei ej of orthonormal basis vectors. Similarly, one can demonstrate that a basis for all linear operators which map a vector into a second rank tensor is obtained by forming the 3³ combinations of the direct products ei ej ek. Consequently, for any T3 there exists a unique set of quantities, Tijk, such that

    T3 = Tijk ei ej ek.                            (NB! Einstein summation convention)

These quantities, Tijk, are called the components of T3, and since three indices are needed, T3 is called a third rank tensor.

Not surprisingly, the definition Eq. (4.18) preserves the invariance with respect to the choice of coordinates. Expressing the original basis vectors in terms of the new basis, ei = ali e'l, one obtains

    T3 = Tijk ei ej ek
       = Tijk (ali e'l)(amj e'm)(ank e'n)
       = ali amj ank Tijk e'l e'm e'n
       = T'lmn e'l e'm e'n.                        (4.19)
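The third rank transformation law T'lmn = ali amj ank Tijk and its inverse can be spot-checked numerically. The rotation matrix and the components Tijk below are hypothetical illustration values; any orthogonal matrix would do.

```python
import math

c_, s_ = math.cos(0.7), math.sin(0.7)
A = [[c_, s_, 0.0], [-s_, c_, 0.0], [0.0, 0.0, 1.0]]   # hypothetical rotation

R = range(3)
T = [[[float(i + 2 * j - k) for k in R] for j in R] for i in R]  # arbitrary T_ijk

# Forward: T'_lmn = a_li a_mj a_nk T_ijk
Tp = [[[sum(A[l][i] * A[m][j] * A[n][k] * T[i][j][k]
            for i in R for j in R for k in R)
        for n in R] for m in R] for l in R]

# Inverse: T_ijk = a_li a_mj a_nk T'_lmn  (sum on the *first* matrix indices)
Tb = [[[sum(A[l][i] * A[m][j] * A[n][k] * Tp[l][m][n]
            for l in R for m in R for n in R)
        for k in R] for j in R] for i in R]

assert all(abs(T[i][j][k] - Tb[i][j][k]) < 1e-12
           for i in R for j in R for k in R)
```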

This shows that upon an orthogonal transformation the components of a third rank tensor transform as

    T'lmn = ali amj ank Tijk.

The inverse transformation follows from expressing the new basis vectors in terms of the old ones, e'j = aji ei, in Eq. (4.19):

    Tijk = ali amj ank T'lmn,

i.e. the inverse transformation is obtained by summing on the first indices of the transformation matrix elements instead of the second ones. We can continue this procedure, defining the outer product of four vectors in terms of triple products, and introducing addition and scalar multiplication of these objects as well.

4.3.2 Definition

In general, a tensor T of rank r is defined as a set of 3^r quantities, Ti1i2···ir, that upon an orthogonal change of coordinates, Eq. (4.11), transform as the outer product of r vectors:

    T'j1j2···jr = aj1i1 aj2i2 · · · ajrir Ti1i2···ir.       (4.20)

Here i1, i2, · · · , ir are simply r independent indices, each taking values in the range {1, 2, 3}.⁴ Accordingly, a vector is a rank 1 tensor and a scalar is a rank 0 tensor. In keeping with previous notation, the set of all components is written as T = (Tijk), where ijk are dummy indices and the bracket around Tijk indicates the set of all of these,

    (Tijk) = { Tijk | i, j, k = 1, 2, 3 }.

For completeness, the tensor is obtained from its components as

    T = (T)E = ( (Tijk) )E =def Tijk ei ej ek.

It is common to refer to both T and the set (Ti1i2···ir) as a tensor, although the latter quantities are strictly speaking only the components of the tensor with respect to some particular basis E. It is precisely the transformation rule, Eq. (4.20), that ensures that the two sets of components represent the same tensor, provided we keep in mind that T'j1j2···jr in Eq. (4.20) are components of the same tensor only with respect to a different basis E'. Consequently, the distinction between a tensor and its components becomes unimportant.

⁴ The notation Ti1i2···ir may be confusing at first sight. Choosing a second rank tensor as an example and comparing with previous notation, Tij, it simply means that i1 = i and i2 = j. For any particular component of the tensor, say T31, we would have with the old notation i = 3, j = 1, and in the new notation i1 = 3, i2 = 1.

4.3.3 Basic algebra

Tensors can be combined in various ways to form new tensors; we have already seen several examples thereof.⁵ The upshot of the rules below is that tensor operations that look "natural" are permissible, in the sense that if one starts with tensors the result will again be a tensor.

Addition/subtraction. Tensors of the same rank may be added together. Thus

    Ci1i2···ir = Ai1i2···ir + Bi1i2···ir

is a tensor of rank r if A and B are tensors of rank r. Indeed,

    C'j1j2···jr = A'j1j2···jr + B'j1j2···jr
                = aj1i1 aj2i2 · · · ajrir Ai1i2···ir + aj1i1 aj2i2 · · · ajrir Bi1i2···ir
                = aj1i1 aj2i2 · · · ajrir (Ai1i2···ir + Bi1i2···ir)
                = aj1i1 aj2i2 · · · ajrir Ci1i2···ir,

i.e. the 3^r quantities (Ci1i2···ir) transform as the components of a rank r tensor when both (Ai1i2···ir) and (Bi1i2···ir) do. It follows directly that subtraction of two equally ranked tensors is also a tensor of the same rank, and that tensorial addition/subtraction is commutative and associative. Also, a tensor may be multiplied by a scalar m (a tensor of rank 0) according to the rule

    Ci1i2···ir = m Ai1i2···ir.

Outer product. The product of two tensors is a tensor whose rank is the sum of the ranks of the given tensors. For example,

    Ci1i2···ir j1j2···js = Ai1i2···ir Bj1j2···js            (4.21)

is a tensor of rank r + s if Ai1i2···ir is a tensor of rank r and Bj1j2···js is a tensor of rank s. This follows directly from the linearity of Eq. (4.20). This product, which involves ordinary multiplication of the components of the tensors, is called the outer product. It is the natural generalization of the outer product of two vectors (two tensors of rank 1) defined in section 3.2. Note that this rule is consistent with the cartesian components of a dyad c = ab in the specific case where A = a and B = b are vectors. Note also that not every tensor can be written as a product of two tensors of lower rank; we have already emphasized this point in the case of second rank tensors, which cannot in general be expressed as a single dyad. For this reason division of tensors is not always possible.

⁵ The addition of two second rank tensors gives a new second rank tensor. The operation of a second rank tensor on a vector (rank 1 tensor) gives a new vector. The composition, T · S, of two second rank tensors forms a new second rank tensor, etc.
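A minimal sketch of the outer product's rank bookkeeping: combining a rank 1 and a rank 2 object component-wise yields an object with 3³ components. The arrays are arbitrary examples.

```python
R = range(3)
a = [1.0, 2.0, 3.0]                          # rank 1 (3 components)
B = [[float(i - j) for j in R] for i in R]   # rank 2 (3**2 components)

# Outer product C_ijk = a_i B_jk is a rank 1+2 = 3 object: 3**3 components.
C = [[[a[i] * B[j][k] for k in R] for j in R] for i in R]

assert len(C) == len(C[0]) == len(C[0][0]) == 3
assert C[1][2][0] == 4.0     # a[1] * B[2][0] = 2.0 * 2.0
```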

Permutation. It is permitted to exchange or permute indices in a tensor and still remain with a tensor. For instance, if we define a new set of 3^r quantities Ci1i2···ir from a tensor Ai1i2···ir by permuting two arbitrarily chosen index numbers α and β > α,

    Ci1i2···iα−1 iα iα+1···iβ−1 iβ iβ+1···ir = Ai1i2···iα−1 iβ iα+1···iβ−1 iα iβ+1···ir,

then this new set will also be a tensor. Its tensorial property follows from the symmetry among the a-factors in Eq. (4.20). Tensors obtained from permuting indices are called isomers. A tensor of rank two has precisely one isomer, the transposed one, obtained by setting α = 1 and β = 2 in the above notation:

    (Tt)i1i2 = Ti2i1.

Tensors of rank less than two (i.e. scalars and vectors) have no isomers.

Contraction. The most important rule in tensor algebra is the contraction rule, which states that if two indices in a tensor of rank r + 2 are put equal and summed over, then the result is again a tensor, with rank r. Because of the permutation rule discussed above, we only have to demonstrate this for the first two indices. The contraction rule thus states that if A is a tensor of rank r + 2, then

    Bj1j2···jr = Aiij1j2···jr        (NB! Aiij1j2···jr = Σ_{i=1..3} Aiij1j2···jr)

is also a tensor, of rank r. The proof follows:

    B'j1j2···jr = A'iij1j2···jr
                = aik ail aj1m1 aj2m2 · · · ajrmr Aklm1m2···mr
                = δkl aj1m1 aj2m2 · · · ajrmr Aklm1m2···mr      (Orthogonality)
                = aj1m1 aj2m2 · · · ajrmr Akkm1m2···mr
                = aj1m1 aj2m2 · · · ajrmr Bm1m2···mr.

Consequently, B'j1···jr does transform as a tensor of rank r, as claimed.

Inner product. By the process of forming the outer product of two tensors followed by a contraction, we obtain a new tensor called an inner product of the given tensors. If the ranks of the two given tensors are respectively r and s, then the rank of the new tensor will be r + s − 2. Its tensorial nature follows from the fact that an outer product of two tensors is a tensor and that the contraction of two indices in a tensor gives a new tensor. In the previous chapters we have reserved the "dot" symbol for precisely this operation. For instance, the scalar product between two vectors a and b is an inner product of two rank 1 tensors giving a rank 0 tensor (a scalar):

    [ab]ij = ai bj          (Outer product)
    a · b  = ai bi          (Contraction of outer product, setting j = i)
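The simplest instance of the contraction rule, r = 0, says that the trace Tii of a rank two tensor is a scalar. A hypothetical rotation and tensor make the invariance easy to check numerically:

```python
import math

c_, s_ = math.cos(0.5), math.sin(0.5)
A = [[c_, s_, 0.0], [-s_, c_, 0.0], [0.0, 0.0, 1.0]]   # hypothetical rotation
R = range(3)
T = [[1.0, 4.0, 0.0], [2.0, 5.0, 1.0], [0.0, 3.0, 6.0]]  # arbitrary T_kl

# T'_ij = a_ik a_jl T_kl
Tp = [[sum(A[i][k] * A[j][l] * T[k][l] for k in R for l in R)
       for j in R] for i in R]

# Contracting the two indices of a rank 2 tensor gives a rank 0 tensor:
# the trace T_ii is the same number in both coordinate systems.
tr, trp = sum(T[i][i] for i in R), sum(Tp[i][i] for i in R)
assert abs(tr - trp) < 1e-12 and abs(tr - 12.0) < 1e-12
```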

A rank two tensor, T, operating on a vector v is an inner product between a rank two and a rank one tensor, giving a rank 1 tensor:

    [Tv]ijk  = Tij vk       (Outer product)
    [T · v]i = Tij vj       (Contraction of outer product, setting k = j)

It is left as an exercise to see that the "dot" operation defined in chapter 2 for two second rank tensors, T · S, is an inner product. In general, any two indices in the outer product between two tensors can be contracted to define an inner product. For example,

    Ci1i2i4i5j1j3 = Ai1i2ki4i5 Bj1kj3

is a tensor of rank 6, obtained as an inner product between a tensor Ai1i2i3i4i5 of rank 5 and a tensor Bj1j2j3 of rank 3 by setting i3 = j2 (= k).

Summary. In summary, all tensorial algebra of any rank can basically be boiled down to these few operations: addition, permutation, outer product, contraction and inner product. Tensors of rank higher than 4 are, fortunately, an oddity in the realm of physics. Only one more operation is essential to know, namely that of differentiation.

4.3.4 Differentiation

Let us consider a tensor A of rank r which is a differentiable function of all of the elements of another tensor B of rank s. The direct notation for this situation would be

    A = A(B)        (Rank(A) = r, Rank(B) = s)

In effect, it implies the existence of 3^r functions, Ai1i2···ir, each differentiable in each of its 3^s arguments B = (Bj1j2···js):

    Ai1i2···ir = Ai1i2···ir( (Bj1j2···js) ).

Then the set of partial derivatives

    Ci1i2···ir,j1j2···js = ∂Ai1i2···ir / ∂Bj1j2···js        (4.22)

is itself a tensor, of rank r + s. As for ordinary derivatives of functions, the components Ci1i2···ir will in general also be functions of B:

    Ci1i2···ir = Ci1i2···ir( (Bj1j2···js) ).

The direct notation for taking partial derivatives with respect to a set of tensor components is

    C(B) = ∂A/∂B (B).

To demonstrate that C is a tensor, and so to justify this direct notation in the first place, we will have to look at the transformation properties of its elements. Indeed,

    C'i1i2···ir,j1j2···js(B') = ∂A'i1i2···ir/∂B'j1j2···js (B')
        = ∂(ai1k1 ai2k2 · · · airkr Ak1k2···kr)/∂Bl1l2···ls · ∂Bl1l2···ls/∂B'j1j2···js (B)
        = ai1k1 ai2k2 · · · airkr aj1l1 aj2l2 · · · ajsls Ck1k2···kr,l1l2···ls(B).    (4.23)

In the second step we have used the chain rule for differentiation. The last step follows from

    Bl1l2···ls = aj1l1 aj2l2 · · · ajsls B'j1j2···js.

It is essential here that all components of B are independent and can vary freely without constraints.

From this rule follows the quotient rule, which states that if the tensor Ai1i2···ir is a linear function of the unconstrained tensor Bj1j2···js through the relation

    Ai1i2···ir = Ci1i2···ir j1j2···js Bj1j2···js + Di1i2···ir,

then C is a tensor of rank r + s, and consequently D must also be a tensor of rank r.

4.4 Reflection and pseudotensors

Applying the general tensor transformation formula, Eq. (4.20), to a reflection through the origin of a cartesian coordinate system, x' = −x, for which aij = −δij, we find

    T'i1···ir = (−1)^r Ti1···ir.                   (4.24)

A proper tensor can be considered as a real geometrical object, independent of the coordinate system; in particular, a proper vector is an "arrow" in space, for which the components change sign upon a reflection of the axes of the coordinate system. There are quantities, however, that do not obey this rule, but acquire an extra minus sign; such quantities are called pseudo-tensors, in contradistinction to ordinary or proper tensors. Thus, a set of 3^r quantities, Pi1i2···ir, is called a pseudo-tensor if it transforms according to the rule

    P'i1i2···ir = det(A) ai1j1 ai2j2 · · · airjr Pj1j2···jr.        (Pseudo-tensor)  (4.25)

By comparing Eq. (4.25) with Eq. (4.20), one observes that the difference between a tensor and a pseudo-tensor only appears if the transformation includes a reflection, det(A) = −1. For a pure reflection, aij = −δij and det(A) = −1, so

    P'i1i2···ir = −(−1)^r Pi1i2···ir.

Direct products of ordinary vectors are ordinary tensors, whereas there exist important pseudo-tensors, notably the Levi-Civita symbol.
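The pseudo-vector behaviour of the cross product under a pure reflection, aij = −δij, can be seen directly. A minimal sketch with two ordinary (polar) vectors:

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]

# Reflect both proper vectors through the origin (a_ij = -δ_ij, det(A) = -1):
ar, br = [-x for x in a], [-x for x in b]

# A proper vector's components change sign ...
assert ar == [-1.0, 0.0, 0.0]

# ... but the cross product's components do not: c = a × b is a pseudo-vector.
assert cross(ar, br) == cross(a, b) == [0.0, 0.0, 1.0]
```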

An even number of direct products of pseudo-tensors will make an ordinary tensor, whereas an odd number of direct products of pseudo-tensors leads to another pseudo-tensor.

4.4.1 Levi-Civita symbol

If we apply the transformation rule, Eq. (4.20), to the Levi-Civita symbol, εijk, we obtain

    ε'ijk = ail ajm akn εlmn.                      (4.26)

This expression is antisymmetric in all the indices ijk.⁶ Consequently, it must be proportional to εijk. To get the constant of proportionality, we should observe the following connection between the determinant of a matrix A = (aij) and the Levi-Civita symbol:

    det(A) = εlmn a1l a2m a3n.                     (4.27)

Taking ijk = 123 in Eq. (4.26), we get ε'123 = det(A) = det(A) ε123, since ε123 = +1. Thus the constant of proportionality is det(A), and

    ε'ijk = det(A) εijk.

If ε'ijk should be identical to εijk, we must therefore multiply the transformation with an extra det(A) to account for the possibility that the transformation involves a reflection. The correct transformation law must accordingly be

    ε'ijk = det(A) ail ajm akn εlmn = εijk,

whence the Levi-Civita symbol is a third rank pseudo-tensor. The transformation rule for pseudo-tensors thus leaves the Levi-Civita symbol invariant under all orthogonal transformations.⁷

The most important application of the Levi-Civita symbol is in the algebraic definition of the cross product, c = a × b, between two ordinary vectors:

    ci = [a × b]i = εijk aj bk.

The rhs is a double inner product between a pseudo-tensor of rank three and two ordinary vectors, so c will be a pseudo-vector. Consequently, as previously mentioned, the direction of c depends on the handedness of the coordinate system. An equivalent manifestation of its pseudo-vectorial nature is that c does not change its direction upon an active reflection.

⁶ For instance,

    ε'ikj = ail akm ajn εlmn = ail akn ajm εlnm = ail ajm akn εlnm = −ail ajm akn εlmn = −ε'ijk.

The second identity follows from the fact that n and m are bound indices and can therefore be renamed one to another.

⁷ We should not be surprised by the fact that the εijk are unaffected by coordinate transformations, since the definition of the Levi-Civita symbol makes no distinction between the 1, 2 and 3 directions.
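A sketch of the component formula ci = εijk aj bk, with the Levi-Civita symbol implemented by hand over 0-based indices; the vectors are arbitrary examples.

```python
def eps(i, j, k):
    """Levi-Civita symbol ε_ijk for indices 0, 1, 2."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0   # any repeated index

R = range(3)
a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]

# c_i = ε_ijk a_j b_k  (double inner product)
c = [sum(eps(i, j, k) * a[j] * b[k] for j in R for k in R) for i in R]

# Agrees with the familiar component formula for a × b:
assert c == [2.0 * 6.0 - 3.0 * 5.0,
             3.0 * 4.0 - 1.0 * 6.0,
             1.0 * 5.0 - 2.0 * 4.0]   # = [-3.0, 6.0, -3.0]
```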

4.4.2 Manipulation of the Levi-Civita symbol

Two important relations for the Levi-Civita symbol are very useful when deriving the myriad of known identities between various vector products. The first is a formula that reduces the product of two Levi-Civita symbols to Kronecker deltas:

    εijk εlmn = det | δil δim δin |
                    | δjl δjm δjn |
                    | δkl δkm δkn |                (4.28)

which after calculating the determinant becomes

    εijk εlmn = δil δjm δkn + δim δjn δkl + δin δjl δkm
              − δin δjm δkl − δjn δkm δil − δkn δim δjl.       (4.29)

Due to this relation, an arbitrary tensor expression may be reduced to contain zero ε-symbols, whereas a pseudo-tensor expression may be reduced to contain only one such symbol: every time two ε's meet in an expression they may be reduced away. This is the secret behind all complicated formulas involving vector products. From Eq. (4.29) we may deduce a few more expressions by contraction of indices:

    εijk εlmk = δil δjm − δim δjl
    εijk εljk = 2 δil
    εijk εijk = 6                                  (4.30)

Another important relation for the ε-symbol follows from the observation that it is impossible to construct a non-trivial totally antisymmetric symbol with more than three indices. The reason is that antisymmetry implies that any component with two equal indices must vanish; hence a non-vanishing component must have all indices different, and since there are only three possible values for an index, this is impossible with four indices. Consequently, all components of a totally antisymmetric symbol with four indices must vanish. From this follows the rule

    ai εjkl − aj εikl − ak εjil − al εjki = 0,     (4.31)

because the lhs is antisymmetric in the four indices i, j, k, l.

Derivation of Eq. (4.28). To derive Eq. (4.28) one uses two well-known properties of the determinant. First, the determinant of a product of matrices equals the product of the determinants of the individual matrices,

    det(M · N) = det(M) det(N).

Secondly, the determinant of the transpose, Nt, of a matrix N equals the determinant of the matrix itself,

    det(Nt) = det(N).
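The ε-δ identities lend themselves to exhaustive verification, since each index only takes three values. The following sketch checks the three contracted identities of Eq. (4.30) over all index combinations (0-based indices):

```python
def eps(i, j, k):
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}: return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}: return -1
    return 0

def d(i, j):                      # Kronecker delta
    return 1 if i == j else 0

R = range(3)

# ε_ijk ε_lmk = δ_il δ_jm − δ_im δ_jl
assert all(sum(eps(i, j, k) * eps(l, m, k) for k in R)
           == d(i, l) * d(j, m) - d(i, m) * d(j, l)
           for i in R for j in R for l in R for m in R)

# ε_ijk ε_ljk = 2 δ_il
assert all(sum(eps(i, j, k) * eps(l, j, k) for j in R for k in R) == 2 * d(i, l)
           for i in R for l in R)

# ε_ijk ε_ijk = 6
assert sum(eps(i, j, k) ** 2 for i in R for j in R for k in R) == 6
```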

(4.35) and the components of a pseudo tensor ﬁeld P of rank r transforms as Pi1 i2 ···ir (x ) = det(A)ai1 j1 ai2 j2 · · · air jr Pj1 j2 ···jr (x) (4. c·z (4.Set Also. (4. det(M ) = Consequently. y = em . z = en we have for the lhs of Eq. ie. This proves Eq.5 Tensor ﬁelds of any rank All that has been said about transformation properties of tensors and pseudotensors can be converted directly to that of tensor ﬁelds and pseudo-tensor ﬁelds. (4. b = ej .34) where we for pedagogical reasons have reinserted the involved sums. For the rhs of Eq.32) according to Eq.28). ijk Setting a = ei . of Eq.28). 4. then the matrix product M · N t becomes  a·x a·y M · Nt =  b · x b · y c·x c·y ijk ai bj ck . denoting summation indices with a prime.32) gives ijk lmn ai bj ck xl ym zn .33) we also recover the rhs. (4. c3 x1 N =  y1 z1  x2 y2 z2  x3 y3  . (4. (4. (4.33) ij k lmn ij k lmn ij k ij k lmn lmn a·x  b·x lmn ai bj ck xl ym zn = det c·x  a·y b·y c·y  a·z b·z . upon the cartesian coordinate transformation. (2. det(M · N t ) = det(M ) det(N ) = which together with Eq.36) 57 . a1 M =  b1 c1  a2 b2 c2  a3 b3  . z3 and det(N ) = lmn xl ym zn  a·z b·z  c·z (4. (4.28) by noting that each scalar product indeed becomes a kronecker delta.33) a i b j c k xl y m z n δi i δj j δk k δl l δm m δn n = = ijk lmn . Eq. a·x = ei ·el = δil etc. One only needs to remember that all the (pseudo) tensor components are now functions of the coordinates. The above expression equals the lhs of Eq. c = ek . x = el .11) the components of a tensor ﬁeld T of rank r transforms as Ti1 i2 ···ir (x ) = ai1 j1 ai2 j2 · · · air jr Tj1 j2 ···jr (x) (4.24). To be speciﬁc.

The same transformation rules also hold true for a transformation between any two orthonormal bases, for instance from a rectangular to a spherical basis. One must bear in mind, however, that in this case the transformation matrix elements will themselves be functions of the position, aij = aij(x), cf. section 2.7.

A tensor field is just a particular realization of the more general case considered in section 4.3.4, with a tensor A being a function of another tensor B; specifically, B = r. We can therefore apply Eq. (4.23) to demonstrate that the derivative of a tensor field is another tensor field of one higher rank. For the position vector we have [r]R,l = xl and

    ∂xl/∂x'j = ajl.

Thus, if T is a tensor field of rank r, then the operation

    ∇T = ∂T/∂r

gives a tensor field of rank r + 1 with the components

    [∇T]i1i2···ir,j(r) = ∂j Ti1i2···ir(r).⁸

Although r is an improper vector, C = ∇T will still be a proper tensor. The reason is that in deriving the tensorial nature of C, we have used the tensorial nature of B only to show that ∂Bl1l2···ls/∂B'j1j2···js = aj1l1 aj2l2 · · · ajsls; this also holds true for any improper tensor in cartesian coordinates.

⁸ Note that the convention in tensor notation is to place the index of the spatial derivative at the end of the tensor components, in conflict with the natural convention arising from the direct notation, which for, say, a vector a reads [∇a]ij = ∂i aj, cf. section 3.5.1. However, in tensor notation one always explicitly writes the index to be summed over, so no ambiguities arise in practice.

Divergence

The fact that spatial derivatives of a tensor always lead to a new tensor of one higher rank makes it easy to deduce the type of object resulting from various vector operations. Thus, ∇φ is a vector field when T = φ is a scalar field, and if a(x) is a vector field, then ∇a is a tensor field of rank two, cf. section 2.6.4. By contracting the two indices of ∇a we therefore obtain a scalar quantity, ∂i ai, i.e. the divergence of a vector field is a scalar. For a rank two tensor field T, ∇T is a rank 3 tensor field, and we can perform two different contractions, yielding the components of two different vectors, a and b, respectively:

    aj = ∂i Tij   and   bi = ∂j Tij.

Only if T is symmetric will a = b. The direct notation for the two operations is ∇ · T and ∇ · Tt, respectively.

Curl

The tensor notation for the curl operator ∇ × a on a vector field a involves the Levi-Civita symbol:

    [∇ × a]i = εijk ∂j ak.

If a is a polar vector, then the rhs will be a double inner product between a pseudo-tensor of rank 3 and two polar vectors, yielding an axial (or pseudo-) vector field.
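As a check of the component formula [∇ × a]i = εijk ∂j ak, one can differentiate a hypothetical field a = (−y, x, 0) by central differences; its curl should be (0, 0, 2) everywhere. The field and the evaluation point are illustration choices.

```python
def eps(i, j, k):
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}: return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}: return -1
    return 0

def a(x):                                   # hypothetical field a = (-y, x, 0)
    return [-x[1], x[0], 0.0]

def partial(f, j, k, x, h=1e-6):            # ∂_j f_k by central difference
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (f(xp)[k] - f(xm)[k]) / (2 * h)

R = range(3)
x0 = [0.3, -1.2, 0.7]

# [∇ × a]_i = ε_ijk ∂_j a_k
curl = [sum(eps(i, j, k) * partial(a, j, k, x0) for j in R for k in R)
        for i in R]
assert all(abs(curl[i] - [0.0, 0.0, 2.0][i]) < 1e-6 for i in R)
```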

Chapter 5

Invariance and symmetries
