Curvilinear Analysis in a Euclidean Space

Presented in a framework and notation customized for students and professionals
who are already familiar with Cartesian analysis in ordinary 3D physical engineering
space.
Rebecca M. Brannon
UNM SUPPLEMENTAL BOOK DRAFT
June 2004
Written by Rebecca Moss Brannon of Albuquerque NM, USA, in connection with adjunct
teaching at the University of New Mexico. This document is the intellectual property of Rebecca
Brannon.
Copyright is reserved.
June 2004
Table of contents
PREFACE ................................................................................................................. iv
Introduction ............................................................................................................. 1
Vector and Tensor Notation ........................................................................................................... 5
Homogeneous coordinates ............................................................................................................. 10
Curvilinear coordinates .................................................................................................................. 10
Difference between Affine (non-metric) and Metric spaces .......................................................... 11
Dual bases for irregular bases .............................................................................. 11
Modified summation convention ................................................................................................... 15
Important notation glitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
The metric coefficients and the dual contravariant basis ............................................................... 18
Non-trivial lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
A faster way to get the metric coefficients: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Super-fast way to get the dual (contravariant) basis and metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Transforming components by raising and lower indices. .............................................................. 26
Finding contravariant vector components — classical method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Finding contravariant vector components — accelerated method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Tensor algebra for general bases ......................................................................... 30
The vector inner (“dot”) product for general bases. ....................................................................... 30
Index contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Other dot product operations .......................................................................................................... 32
The transpose operation ................................................................................................................. 33
Symmetric Tensors ......................................................................................................................... 34
The identity tensor for a general basis. .......................................................................................... 34
Eigenproblems and similarity transformations .............................................................................. 35
The alternating tensor ..................................................................................................................... 38
Vector valued operations ................................................................................................................ 40
CROSS PRODUCT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Axial vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Scalar valued operations ................................................................................................................ 40
TENSOR INNER (double dot) PRODUCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
TENSOR MAGNITUDE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
TRACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
TRIPLE SCALAR-VALUED VECTOR PRODUCT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
DETERMINANT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Tensor-valued operations ............................................................................................................... 43
TRANSPOSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
INVERSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
COFACTOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
DYAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Basis and Coordinate transformations ................................................................. 46
Coordinate transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
What is a vector? What is a tensor? ............................................................................................... 55
Definition #0 (used for undergraduates):. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Definition #1 (classical, but our least favorite): . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Definition #2 (our preference for ordinary engineering applications): . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Definition #3 (for mathematicians or for advanced engineering) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Coordinates are not the same thing as components ....................................................................... 59
Do there exist a “complementary” or “dual” coordinates? ............................................................ 60
Curvilinear calculus ................................................................................................ 62
A introductory example .................................................................................................................. 62
Curvilinear coordinates .................................................................................................................. 65
The “associated” curvilinear covariant basis ................................................................................. 65
EXAMPLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
The gradient of a scalar .................................................................................................................. 71
Example: cylindrical coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Gradient of a vector -- “simplified” example ................................................................................. 72
Gradient of a vector in curvilinear coordinates .............................................................................. 74
Christoffel Symbols. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Important comment about notation:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Increments in the base vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Manifold torsion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
EXAMPLE: Christoffel symbols for cylindrical coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Covariant differentiation of contravariant components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Example: Gradient of a vector in cylindrical coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Covariant differentiation of covariant components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Product rules for covariant differentiation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Backward and forward gradient operators ..................................................................................... 83
Divergence of a vector ................................................................................................................... 84
Curl of a vector ............................................................................................................................... 85
Gradient of a tensor ........................................................................................................................ 86
Ricci’s lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Corollary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Christoffel symbols of the first kind .............................................................................................. 87
The fourth-order Riemann-Christoffel curvature tensor ................................................................ 88
Embedded bases and objective rates ................................................................... 90
Concluding remarks. .............................................................................................. 94
REFERENCES ......................................................................................................... 95
PREFACE
This document started out as a small set of notes assigned as supplemental reading
and exercises for the graduate students taking my continuum mechanics course at the
University of New Mexico (Fall of 1999). After the class, I posted the early versions of the
manuscript on my UNM web page to solicit interactions from anyone on the internet
who cared to comment. Since then, I have regularly fielded questions and added
enhancements in response to the encouragement (and friendly hounding) that I received
from numerous grad students and professors from all over the world.
Perhaps the most important acknowledgement I could give for this work goes to
Prof. Howard “Buck” Schreyer, who introduced me to curvilinear coordinates when I
was a student in his Continuum Mechanics class back in 1987. Buck’s daughter, Lynn
Betthenum (sp?) has also contributed to this work by encouraging her own grad stu-
dents to review it and send me suggestions and corrections.
Although the vast majority of this work was performed in the context of my Univer-
sity appointment, I must acknowledge the professional colleagues who supported the
continued refinement of the work while I have been a staff member and manager at San-
dia National Laboratories in New Mexico.
Rebecca Brannon
rmbrann@me.unm.edu
June 2004
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 1
Curvilinear Analysis in a Euclidean Space
Rebecca M.Brannon, University of New Mexico, First draft: Fall 1998, Present draft: 6/17/04.
1. Introduction
This manuscript is a student’s introduction on the mathematics of curvilinear coordinates,
but can also serve as an information resource for practicing scientists. Being an introduction,
we have made every effort to keep the analysis well connected to concepts that should be
familiar to anyone who has completed a first course in regular-Cartesian-coordinate
1
(RCC)
vector and tensor analysis. Our principal goal is to introduce engineering specialists to the
mathematician’s language of general curvilinear vector and tensor analysis. Readers who are
already well-versed in functional analysis will probably find more rigorous manuscripts
(such as [14]) more suitable. If you are completely new to the subject of general curvilinear
coordinates or if you seek guidance on the basic machinery associated with non-orthonormal
base vectors, then you will probably find the approach taken in this report to be unique and
(comparatively) accessible. Many engineering students presume that they can get along in
their careers just fine without ever learning any of this stuff. Quite often, that’s true. Nonethe-
less, there will undoubtedly crop up times when a system operates in a skewed or curved
coordinate system, and a basic knowledge of curvilinear coordinates makes life a lot easier.
Another reason to learn curvilinear coordinates — even if you never explicitly apply the
knowledge to any practical problems — is that you will develop a far deeper understanding
of Cartesian tensor analysis.
Learning the basics of curvilinear analysis is an essential first step to reading much of the
older materials modeling literature, and the theory is still needed today for non-Euclidean
surface and quantum mechanics problems. We added the proviso “older” to the materials
modeling literature reference because more modern analyses are typically presented using
structured notation (also known as Gibbs, symbolic, or direct notation) in which the highest-
level fundamental meaning of various operations are called out by using a notation that does
not explicitly suggest the procedure for actually performing the operation. For example,
would be the structure notation for the vector dot product whereas would be
the procedural notation that clearly shows how to compute the dot product but has the disad-
1. Here, “regular” means that the basis is right, rectangular, and normalized. “Right” means the basis forms a right-handed
system (i.e., crossing the first base vector into the second results in a third vector that has a positive dot product with the
third base vectors). “Rectangular” means that the base vectors are mutually perpendicular. “Normalized” means that the
base vectors are dimensionless and of unit length. “Cartesian” means that all three coordinates have the same physical
units [12, p90]. The last “C” in the RCC abbreviation stands for “coordinate” and its presence implies that the basis is
itself defined in a manner that is coupled to the coordinates. Specifically, the basis is always tangent to the coordinate
grid. A goal of this paper is to explore the implications of removing the constraints of RCC systems. What happens when
the basis is not rectangular? What happens when coordinates of different dimensions are used? What happens when the
basis is selected independently from the coordinates?
a
˜
b
˜

a
1
b
1
a
2
b
2
a
3
b
3
+ +
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 2
vantage of being applicable only for RCC systems. The same operation would be computed
differently in a non-RCC system — the fundamental operation itself doesn’t change; instead
the method for computing it changes depending on the system you adopt. Operations such as
the dot and cross products are known to be invariant when expressed using combined com-
ponent+basis notation. Anyone who chooses to perform such operations using Cartesian com-
ponents will obtain the same result as anyone else who opts to use general curvilinear
components — provided that both researchers understand the connections between the two
approaches! Every now and then, the geometry of a problem clearly calls for the use of non-
orthonormal or spatially varying base vectors. Knowing the basics of curvilinear coordinates
permits analysts to choose the approach that most simplifies their calculations. This manu-
script should be regarded as providing two services: (1) enabling students of Cartesian analy-
sis to solidify their knowledge by taking a foray into curvilinear analysis and (2) enabling
engineering professionals to read older literature wherein it was (at the time) considered
more “rigorous” or stylish to present all analyses in terms of general curvilinear analysis.
In the field of materials modeling, the stress tensor is regarded as a function of the strain
tensor and other material state variables. In such analyses, the material often contains certain
“preferred directions” such as the direction of fibers in a composite matrix, and curvilinear
analysis becomes useful if those directions are not orthogonal. For plasticity modeling, the
machinery of non-orthonormal base vectors can be useful to understand six-dimensional
stress space, and it is especially useful when analyzing the response of a material when the
stress resides at a so-called yield surface “vertex”. Such a vertex is defined by the convergence
of two or more surfaces having different and generally non-orthogonal orientation normals,
and determination of whether or not a trial elastic stress rate is progressing into the “cone of
limiting normals” becomes quite straightforward using the formal mathematics of non-
orthonormal bases.
This manuscript is broken into three key parts: Syntax, Algebra, and Calculus. Chapter 2
introduces the most common coordinate systems and iterates the distinction between irregu-
lar bases and curvilinear coordinates; that chapter introduces the several fundamental quanti-
ties (such as metrics) which appear with irresistible frequency throughout the literature of
generalized tensor analysis. Chapter 3 shows how Cartesian formulas for basic vector and
tensor operations must be altered for non-Cartesian systems. Chapter 4 covers basis and coor-
dinate transformations, and it provides a gentle introduction to the fact that base vectors can
vary with position.
The fact that the underlying base vectors might be non-normalized, non-orthogonal, and/
or non-right-handed is the essential focus of Chapter 4. By contrast, Chapter 5 focuses on how
extra terms must appear in gradient expressions (in addition to the familiar terms resulting
from spatial variation of scalar and vector components); these extra terms account for the fact
that the coordinate base vectors vary in space. The fact that different base vectors can be used
at different points in space is an essential feature of curvilinear coordinates analysis.
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 3
The vast majority of engineering applications use one of the coordinate systems illustrated
in Fig. 1.1. Of these, the rectangular Cartesian coordinate system is the most popular choice.
For all three systems in Fig. 1.1, the base vectors are unit vectors. The base vectors are also
mutually perpendicular, and the ordering is “right-handed” (i.e., the third base vector is
obtained by crossing the first into the second). Each base vector points in the direction that the
position vector moves if one coordinate is increased, holding the other two coordinates con-
stant; thus, base vectors for spherical and cylindrical coordinates vary with position. This is a
crucial concept: although the coordinate system has only one origin, there can be an infinite
number base vectors because the base vector orientations can depend on position.
Most practicing engineers can get along just fine without ever having to learn the theory
behind general curvilinear coordinates. Naturally, every engineer must, at some point, deal
with cylindrical and spherical coordinates, but they can look up whatever formulas they need
in handbook tables. So why bother learning about generalized curvilinear coordinates? Dif-
ferent people have different motivations for studying general curvilinear analysis. Those
dealing with general relativity, for example, must be able to perform tensor analysis on four
dimensional curvilinear manifolds. Likewise, engineers who analyze shells and membranes
in 3D space greatly benefit from general tensor analysis. Reading the literature of continuum
mechanics — especially the older work — demands an understanding of the notation. Finally,
the topic is just plain interesting in its own right. James Simmonds [7] begins his book on ten-
sor analysis with the following wonderful quote:
The magic of this theory will hardly fail to impose itself on anybody who has truly understood it; it
represents a genuine triumph of the method of absolute differential calculus, founded by Gauss, Rie-
mann, Ricci, and Levi-Civita.
—Albert Einstein
1
An important message articulated in this quote is the suggestion that, once you have mas-
x
1
x
2
x
3
e
˜
1
e
˜
2
e
˜
3
e
˜
r
e
˜
θ
e
˜
z
x
1
x
2
x
3
θ r
z
φ
θ
e
˜
r
e
˜
θ
e
˜
φ
(orthogonal)
Cartesian coordinates
(orthogonal)
Cylindrical coordinates
(orthogonal)
Spherical coordinates
FIGURE 1.1 The most common engineering coordinate systems. Note that all three systems are orthogonal because
the associated base vectors are mutually perpendicular. The cylindrical and spherical coordinate systems are
inhomogeneous because the base vectors vary with position. As indicated, depends on θ for cylindrical
coordinates and depends on both θ and ψ for spherical coordinates.
e
˜
r
e
˜
r
r
x
1
x
2
x
3
x
1
x
2
x
3
, , { } r θ z , , { }
r θ φ , , { }
x
˜
x
˜
x
˜
x
˜
r e
˜
r
θ φ , ( ) = x
˜
r e
˜
r
θ ( ) ze
˜
z
+ = x
˜
x
1
e
˜
1
x
2
e
˜
2
x
3
e
˜
3
+ + =
(a)
(c)
(b)
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 4
tered tensor analysis, you will begin to recognize its basic concepts in many other seemingly
unrelated fields of study. Your knowledge of tensors will therefore help you master a broader
range of subjects.
1. From: “Contribution to the Theory of General Relativity,” 1915; as quoted and translated by C. Lanczos in The Einstein
Decade, p213.
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 5
This document is a teaching and learning tool. To assist with this goal, you will note that the
text is color-coded as follows:
Please direct comments to
rmbrann@me.unm.edu
1.1 Vector and Tensor Notation
The tensorial order of quantities will be indicated by the number of underlines. For exam-
ple, is a scalar, is a vector, is a second-order tensor, is a third order tensor, etc. We fol-
low Einstein’s summation convention where repeated indices are to be summed (this rule will
be later clarified for curvilinear coordinates).
You, the reader, are presumed familiar with basic operations in Cartesian coordinates (dot
product, cross-product, determinant, etc.). Therefore, we may define our structured terminol-
ogy and notational conventions by telling you their meanings in terms ordinary Cartesian
coordinates. A principal purpose of this document is to show how these same structured oper-
ations must be computed using different procedures when using non-RCC systems. In this sec-
tion, where we are merely explaining the meanings of the non-indicial notation structures, we
will use standard RCC conventions that components of vectors and tensors are identified by
subscripts that take on the values 1, 2, and 3. Furthermore, when exactly two indices are
repeated in a single term, they are understood to be summed from 1 to 3. Later on, for non-
RCC systems, the conventions for subscripts will be generalized.
The term “array” is often used for any matrix having one dimension equal to 1. This docu-
ment focuses exclusively on ordinary 3D physical space. Thus, unless otherwise indicated, the
word “array” denotes either a or a matrix. Any array of three numbers may be
expanded as the sum of the array components times the corresponding primitive basis arrays:
(1.1)
Everyday engineering problems typically characterize vectors using only the regular Carte-
sian (orthonormal right-handed) laboratory basis, . Being orthonormal, the base
vectors have the property that , where is the Kronecker delta and the indices
( and ) take values from 1 to 3. Equation (1.1) is the matrix-notation equivalent of the usual
expansion of a vector as a sum of components times base vectors:
(1.2)
BLUE definition ⇒
RED important concept ⇒
s v
˜
T
˜
˜
ξ
˜
˜
˜
3 1 × 1 3 ×
v
1
v
2
v
3 ¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
v
1
1
0
0
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
v
2
0
1
0
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
v
3
0
0
1
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
+ + =
e
˜
1
e
˜
2
e
˜
3
, , { }
e
˜
i
e
˜
j
• δ
ij
= δ
ij
i j
v
˜
v
1
e
˜
1
v
2
e
˜
2
v
3
e
˜
3
+ + =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 6
More compactly,
(1.3)
Many engineering problems (e.g., those with spherical or cylindrical symmetry) become
extremely complicated when described using the orthonormal laboratory basis, but they sim-
plify superbly when phrased in terms of some other basis, . Most of the time, this
“other” basis is also a “regular” basis, meaning that it is orthonormal ( ) and
right handed ( ) — the only difference is that the new basis is oriented differ-
ently than the laboratory basis. The best choice for this other, more convenient, basis might
vary in space. Note, for example, that the bases for spherical and cylindrical coordinates illus-
trated in Fig. 1.1 are orthonormal and right-handed (and therefore “regular”) even though a
different set of base vectors is used at each point in space. This harks back to our earlier com-
ment that the properties of being orthonormal and curvilinear are distinct — one does not
imply or exclude the other.
Generalized curvilinear coordinates show up when studying quantum mechanics or shell
theory (or even when interpreting a material deformation from the perspective of a person
who translates, rotates, and stretches along with the material). For most of these advanced
physics problems, the governing equations are greatly simplified when expressed in terms of
an “irregular” basis (i.e., one that is not orthogonal, not normalized, and/or not right-
handed). To effectively study curvilinear coordinates and irregular bases, the reader must
practice constant vigilance to keep track of what particular basis a set of components is refer-
enced to. When working with irregular bases, it is customary to construct a complementary or
“dual” basis that is intimately related to the original irregular basis. Additionally, even
though it is might not be convenient for the application at hand, the regular laboratory basis
still exists. Sometimes a quantity is most easily interpreted using yet other bases. For example,
a tensor is usually described in terms of the laboratory basis or some “applications” basis, but
we all know that the tensor is particularly simplified if it is expressed in terms of its principal
basis. Thus, any engineering problem might involve the simultaneous use of many different
bases. If the basis is changed, then the components of vectors and tensors must change too. To
emphasize the inextricable interdependence of components and bases, vectors are routinely
expanded in the form of components times base vectors .
What is it that distinguishes vectors from simple arrays of numbers? The answer is
that the component array for a vector is determined by the underlying basis and this compo-
nent array must change is a very particular manner when the basis is changed. Vectors have
(by definition) an invariant quality with respect to a change of basis. Even though the compo-
nents themselves change when a basis changes, they must change in a very specific way — it
they don’t change that way, then the thing you are dealing with (whatever it may be) is not a
vector. Even though components change when the basis changes, the sum of the components
times the base vectors remains the same. Suppose, for example, that is the regular
v
˜
v
k
e
˜
k
=
E
˜
1
E
˜
2
E
˜
3
, , { }
E
˜
i
E
˜
j
• δ
ij
=
E
˜
3
E
˜
1
E
˜
2
× =
v
˜
v
1
e
˜
1
v
2
e
˜
2
v
3
e
˜
3
+ + =
3 1 ×
e
˜
1
e
˜
2
e
˜
3
, , { }
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 7
laboratory basis and is some alternative orthonormal right-handed basis. Let the
components of a vector with respect to the lab basis be denoted by and let denote the
components with respect to the second basis. The invariance of vectors requires that the basis
expansion of the vector must give the same result regardless of which basis is used. Namely,
(1.4)
The relationship between the components and the components can be easily character-
ized as follows:
Dotting both sides of (1.4) by gives: (1.5a)
Dotting both sides of (1.4) by gives: (1.5b)
Note that . Similarly, . We can define
a set of nine numbers (known as direction cosines) . Therefore,
and , where we have used the fact that the dot product is commutative
( for any vectors and ). With these observations and definitions, Eq. (1.5)
becomes
(1.6a)
(1.6b)
These relationships show how the { } components are related to the components. Satisfy-
ing these relationships is often the identifying characteristic used to identify whether or not
something really is a vector (as opposed to a simple collection of three numbers). This discus-
sion was limited to changing from one regular (i.e., orthonormal right-handed) to another.
Later on, the concepts will be revisited to derive the change of component formulas that apply
to irregular bases. The key point (which is exploited throughout the remainder of this docu-
ment) is that, although the vector components themselves change with the basis, the sum of
components times base vectors in invariant.
The statements made above about vectors also have generalizations to tensors. For exam-
ple, the analog of Eq. (1.1) is the expansion of a matrix into a sum of individual compo-
nents times base tensors:
(1.7)
Looking at a tensor in this way helps clarify why tensors are often treated as nine-dimen-
sional vectors: there are nine components and nine associated “base tensors.” Just as the inti-
mate relationship between a vector and its components is emphasized by writing the vector in
the form of Eq. (1.4), the relationship between a tensor’s components and the underlying basis
is emphasized by writing tensors as the sum of components times “basis dyads”. Specifically
E
˜
1
E
˜
2
E
˜
3
, , { }
v
˜
v
k
v
˜
k
v
k
e
˜
k
v
˜
k
E
˜
k
=
v
˜
k
v
k
E
˜
m
v
k
e
˜
k
E
˜
m
• ( ) v
˜
k
E
˜
k
E
˜
m
• ( ) =
e
˜
m
v
k
e
˜
k
e
˜
m
• ( ) v
˜
k
E
˜
k
e
˜
m
• ( ) =
v
˜
k
E
˜
k
E
˜
m
• ( ) v
˜
k
δ
km
v
˜
m
= = v
k
e
˜
k
e
˜
m
• ( ) v
k
δ
km
v
m
= =
L
ij
e
˜
i
E
˜
j
• ≡ e
˜
k
E
˜
m
• L
km
=
E
˜
k
e
˜
m
• L
mk
=
a
˜
b
˜
• b
˜
a
˜
• = a
˜
b
˜
v
k
L
km
v
˜
m
=
v
m
v
˜
k
L
mk
=
v
k
v
˜
m
3 3 ×
A
11
A
12
A
13
A
21
A
22
A
23
A
31
A
32
A
33
A
11
1 0 0
0 0 0
0 0 0
A
12
0 1 0
0 0 0
0 0 0
… A
33
0 0 0
0 0 0
0 0 1
+ + + =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 8
in direct correspondence to Eq. (1.7) we write
. (1.8)
Two vectors written side-by-side are to be multiplied dyadically, for example, is a sec-
ond-order tensor dyad with Cartesian ij components . Any tensor can be expressed as a
linear combination of the nine possible basis dyads. Specifically the dyad corresponds to
an RCC component matrix that has zeros everywhere except “1” in the position, which was
what enabled us to write Eq. (1.7) in the more compact form of Eq. (1.8). Even a dyad itself
can be expanded in terms of basis dyads as . Dyadic multiplication is often
alternatively denoted with the symbol ⊗. For example, means the same thing as .
We prefer using no “⊗” symbol for dyadic multiplication because it allows more appealing
identities such as .
Working with third-order tensors requires introduction of triads, which are denoted struc-
turally by three vectors written side-by-side. Specifically, is a third-order tensor with
RCC ijk components . Any third-order tensor can always be expressed as a linear com-
bination of the fundamental basis triads. The concept of dyadic multiplication extends simi-
larly to fourth and higher-order tensors.
Using our summation notation that repeated indices are to be summed, the standard com-
ponent-basis expression for a tensor (Eq. 1.8) can be written
. (1.9)
The components of a tensor change when the basis changes, but the sum of components times
basis dyads remains invariant. Even though a tensor comprises many components and basis
triads, it is this sum of individual parts that’s unique and physically meaningful.
A raised single dot is the first-order inner product. For example, in terms of a Cartesian
basis, . When applied between tensors of higher or mixed orders, the single dot
continues to denote the first order inner product; that is, adjacent vectors in the basis dyads
are dotted together so that .
Here is the Kronecker delta, defined to equal 1 if and 0 otherwise. The common opera-
tion, denotes a first order vector whose Cartesian component is . If, for exam-
ple, , then the RCC components of may be found by the matrix multiplication:
implies (for RCC) , or (1.10)
Similarly, , where the superscript “T” denotes the tensor
transpose (i.e., ). Note that the effect of the raised single dot is to sum adjacent indi-
A
˜
˜
A
11
e
˜
1
e
˜
1
A
12
e
˜
1
e
˜
2
… A
33
e
˜
3
e
˜
3
+ + + =
a
˜
b
˜
a
i
b
j
e
˜
i
e
˜
j
ij
a
˜
b
˜
a
˜
b
˜
a
i
b
i
e
˜
i
e
˜
j
=
a
˜
b
˜
⊗ a
˜
b
˜
a
˜
b
˜
( ) c
˜
• a
˜
b
˜
c
˜
• ( ) =
u
˜
v
˜
w
˜
u
i
v
j
w
k
A
˜
˜
A
ij
e
˜
i
e
˜
j
=
u
˜
v
˜
• u
k
v
k
=
A
˜
˜
B
˜
˜
• A
ij
e
˜
i
e
˜
j
( ) B
pq
e
˜
p
e
˜
q
( ) • A
ij
B
pq
δ
jp
e
˜
i
e
˜
q
( ) A
ij
B
jq
e
˜
i
e
˜
q
= = =
δ
ij
i=j
A
˜
˜
u
˜
• i
th
A
ij
u
j
w
˜
A
˜
˜
u
˜
• = w
˜
w
˜
A
˜
˜
u
˜
• = w
i
A
ij
u
j
=
w
1
w
2
w
3
A
11
A
12
A
13
A
21
A
22
A
23
A
31
A
32
A
33
u
1
u
2
u
3
=
v
˜
B
˜
˜
• v
k
B
km
e
˜
m
B
˜
˜
T
v
˜
• = =
B
ij
T
B
ji
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 9
ces. Applying similar heuristic notational interpretation, the reader can verify that
must be a scalar computed in RCC by .
A first course in tensor analysis, for example, teaches that the cross-product between two
vectors is a new vector obtained in RCC by
(1.11)
or, more compactly,
, (1.12)
where is the permutation symbol
= 1 if = 123, 231, or 312
= –1 if = 321, 132, or 213
= 0 if any of the indices , , or are equal. (1.13)
Importantly,
(1.14)
This permits us to alternatively write Eq. (1.12) as
(1.15)
We will employ a self-defining notational structure for all conventional vector operations. For
example, the expression can be immediately inferred to mean
(1.16)
The “triple scalar-valued product” is denoted with square brackets around a list of three
vectors and is defined . Note that
(1.17)
We denote the second-order inner product by a “double dot” colon. For rectangular Carte-
sian components, the second-order inner product sums adjacent pairs of components. For
example, , , and . Caution: many authors
insidiously use the term “inner product” for the similar looking scalar-valued operation
, but this operation is not an inner product because it fails the positivity axiom required
for any inner product.
u
˜
C
˜
˜
v
˜
• •
u
i
C
ij
v
j
v
˜
w
˜
×
v
˜
w
˜
× v
2
w
3
v
3
w
2
– ( )e
˜
1
v
3
w
1
v
1
w
3
– ( )e
˜
2
v
1
w
2
v
2
w
1
– ( )e
˜
3
+ + =
v
˜
w
˜
× ε
ijk
v
j
w
k
e
˜
i
=
ε
ijk
ε
ijk
ijk
ε
ijk
ijk
ε
ijk
i j k
e
˜
i
e
˜
j
× ε
ijk
e
˜
k
=
v
˜
w
˜
× v
j
w
k
e
˜
j
e
˜
k
× ( ) =
C
˜
˜
v
˜
×
C
˜
˜
v
˜
× C
ij
e
˜
i
e
˜
i
v
k
e
˜
k
× C
ij
v
k
e
˜
i
e
˜
i
e
˜
k
× C
ij
v
k
ε
mik
e
˜
m
= = =
u
˜
v
˜
w
˜
, , | | u
˜
v
˜
w
˜
× ( ) • ≡
ε
ijk
e
˜
i
e
˜
j
e
˜
k
, , | | =
A
˜
˜
: B
˜
˜
A
ij
B
ij
= ξ
˜
˜
˜
: C
˜
˜
ξ
ijk
C
jk
e
˜
i
=
ξ
˜
˜
˜
:a
˜
b
˜
ξ
ijk
a
j
b
k
e
˜
i
=
A
ij
B
ji
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 10
1.2 Homogeneous coordinates
A coordinate system is called “homoge-
neous” if the associated base vectors are
the same throughout space. A basis is
“orthogonal” (or “rectangular”) if the
base vectors are everywhere mutually
perpendicular. Most authors use the term
“Cartesian coordinates” to refer to the
conventional orthonormal homogeneous
right-handed system of Fig. 1.1a. As seen
in Fig. 1.2b, a homogeneous system is not required to be orthogonal. Furthermore, no coordi-
nate system is required to have unit base vectors. The opposite of homogeneous is “curvilin-
ear,” and Fig. 1.3 below shows that a coordinate system can be both curvilinear and
orthogonal. In short, the properties of being “orthogonal” or “homogeneous” are indepen-
dent (one does not imply or exclude the other).
1.3 Curvilinear coordinates
The coordinate grid is the family of
lines along which only one coordinate var-
ies. If the grid has at least some curved
lines, the coordinate system is called “cur-
vilinear,” and, as shown in Fig. 1.3, the
associated base vectors (tangent to the
grid lines) necessarily change with posi-
tion, so curvilinear systems are always
inhomogeneous. The system in Fig. 1.3a
has base vectors that are everywhere orthogonal, so it is simultaneously curvilinear and
orthogonal. Note from Fig. 1.1 that conventional cylindrical and spherical coordinates are
both orthogonal and curvilinear. Incidentally, no matter what type of coordinate system is
used, base vectors need not be of unit length; they only need to point in the direction that the
g
˜
1
g
˜
2
g
˜
1
g
˜
2
g
˜
1
g
˜
2
(b)
e
˜
1
e
˜
2
e
˜
1
e
˜
2
e
˜
1
e
˜
2
Orthogonal (or rectangular)
homogeneous coordinates
Nonorthogonal
homogeneous coordinates
FIGURE 1.2 Homogeneous coordinates. The base vectors
are the same at all points in space. This condition is possible
only if the coordinate grid is formed by straight lines.
(a)
g
˜
1
g
˜
2
g
˜
2
g
˜
1
g
˜
2
g
˜
1
(a)
orthogonal
curvilinear coordinates
g
˜
1
g
˜
2
g
˜
2
g
˜
1
g
˜
1
g
˜
2
nonorthogonal
curvilinear coordinates
FIGURE 1.3 Curvilinear coordinates. The base vectors are still
tangent to coordinate lines. The left system is curvilinear and
orthogonal (the coordinate lines always meet at right angles).
(b)
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 11
position vector would move when changing the associated coordinate, holding others con-
stant.
1
We will call a basis “regular” if it consists of a right-handed orthonormal triad. The sys-
tems in Fig. 1.3 have irregular associated base vectors. The system in Fig 1.3a can be
“regularized” by normalizing the base vectors. Cylindrical and spherical systems are exam-
ples of regularized curvilinear systems.
In Section 2, we introduce mathematical tools for both irregular homogeneous and irregu-
lar curvilinear coordinates first deals with the possibility that the base vectors might be non-
orthogonal, non-normalized, and/or non-right-handed. Section 3 shows that the component
formulas for many operations such as the dot product take on forms that are different from
the regular (right-handed orthonormal) formulas. The distinction between homogeneous and
curvilinear coordinates becomes apparent in Section 5, where the derivative of a vector or
higher order tensor requires additional terms to account for the variation of curvilinear base
vectors with position. By contrast, homogeneous base vectors do not vary with position, so
the tensor calculus formulas look very much like their Cartesian counterparts, even if the
associated basis is irregular.
1.4 Difference between Affine (non-metric) and Metric spaces
As discussed by Papastavridis [12], there are situations where the axes used to define a
space don’t have the same physical dimensions, and there is no possibility of comparing the
units of one axis against the units of another axis. Such spaces are called “affine” or “non-met-
ric.” The apropos example cited by Papastavridis is “thermodynamic state space” in which
the pressure, volume, and temperature of a fluid are plotted against one another. In such a
space, the concept of lengths (and therefore angles) between two points becomes meaning-
less. In affine geometries, we are only interested in properties that remain invariant under
arbitrary scale and angle changes of the axes.
The remainder of this document is dedicated to metric spaces such as the ordinary physical
3D space that we all (hopefully) live in.
2. Dual bases for irregular bases
Suppose there are compelling physical reasons to use an irregular basis .
Here, “irregular” means the basis might be nonorthogonal, non-normalized, and/or non-
right-handed. In this section we develop tools needed to derive modified component formu-
las for tensor operations such as the dot product. For tensor algebra, it is irrelevant whether the
basis is homogeneous or curvilinear; all that matters is the possibility that the base vectors
1. Strictly speaking, it is not necessary to require that the base vectors have any relationship whatsoever with the coordinate
lines. If desired, for example, we could use arbitrary curvilinear coordinates while taking the basis to be everywhere
aligned with the laboratory basis. In this document, however, the basis is always assumed tangent to coordinate lines.
Such a basis is called the “associated” basis.
g
˜
1
g
˜
2
g
˜
3
, , { }
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 12
might not be orthogonal and/or might not be of unit length and/or might not form a right-
handed system. Again, keep in mind that we will be deriving new procedures for computing
the operations, but the ultimate result and meanings for the operations will be unchanged. If,
for example, you had two vectors expressed in terms of an irregular basis, then you could
always transform those vectors into conventional RCC expansions in order to compute the
dot product. The point of this section is to deduce faster methods that permit you to obtain the
same result directly from the irregular vector components without having to transform to
RCC.
To simplify the discussion, we will assume that the underlying space is our ordinary 3D
physical Euclidean space.
1
Whenever needed, we may therefore assume there exists a right-
handed orthonormal laboratory basis, where . This is particularly
convenient because we can then claim that there exists a transformation tensor such that
. (2.1)
If this transformation tensor is written in component form with respect to the laboratory
basis, then the column of the matrix [F] contains the components of the base vector with
respect to the laboratory basis. In terms of the lab components of [F], Eq. (2.1) can be written
(2.2)
Comparing Eq. (2.1) with our formula for how to dot a tensor into a vector [Eq. (1.10)], you
might wonder why Eq. (2.2) involves instead of . After all, Eq. (1.10) appears to be tell-
ing us that adjacent indices should be summed, but Eq. (2.2) shows the summation index
being summed with the farther (first) index on the tensor. There’s a subtle and important phe-
nomenon here that needs careful attention whenever you deal with equations like (2.1) that
really represent three separate equations for each value of i from 1 to 3. To unravel the mystery,
let’s start by changing the symbol used for the free index in Eq. (2.1) by writing it equivalently
by . Now, applying Eq. (1.10) gives . Any vector, , can be
expanded as . Applying this identity with replaced by gives , or,
. The expression represents the lab component of , so it must equal
. Consequently, , which is equivalent to Eq. (2.2).
Incidentally, the transformation tensor may be written in a purely dyadic form as
(2.3)
1. To quote from Ref. [7], “Three-dimensional Euclidean space, , may be characterized by a set of axioms that expresses
relationships among primitive, undefined quantities called points, lines, etc. These relationships so closely correspond to
the results of ordinary measurements of distance in the physical world that, until the appearance of general relativity, it
was thought that Euclidean geometry was the kinematic model of the universe.”
E
3
e
˜
1
e
˜
2
e
˜
3
, , { } e
˜
i
e
˜
j
• δ
ij
=
F
˜
˜
g
˜
i
F
˜
˜
e
˜
i
• =
i
th
g
˜
i
g
˜
i
F
ji
e
˜
j
=
F
ji
F
ij
j
g
˜
k
F
˜
˜
e
˜
k
• = g
˜
k
( )
i
F
ij
e
˜
k
( )
j
= v
˜
v
˜
v
i
e
˜
i
= v
˜
g
˜
k
g
˜
k
g
˜
k
( )
i
e
˜
i
=
g
˜
k
F
ij
e
˜
k
( )
j
e
˜
i
= e
˜
k
( )
j
j
th
e
˜
k
δ
kj
g
˜
k
F
ij
δ
kj
e
˜
i
F
ik
e
˜
i
= =
F
˜
˜
F
˜
˜
g
˜
k
e
˜
k
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 13
Partial Answer: (a) The term “regular” is defined on page 10.
(b) In terms of the lab basis, the component array of the first base vector is , so this must be the first
column of the [F] matrix.
This transformation tensor is defined for the specific irregular basis of interest as it relates to
the laboratory basis. The transformation tensor for a different pair of bases will be different.
This does not imply that is not a tensor. Readers who are familiar with continuum mechan-
ics may be wondering whether our basis transformation tensor has anything to do with the
deformation gradient tensor used to describe continuum motion. The answer is “no.” In
general, the tensor in this document merely represents the relationship between the labora-
tory basis and the irregular basis. Even though our tensor is generally unrelated to the
deformation gradient tensor from continuum mechanics, it’s still interesting to consider the
special case in which these two tensors are the same. If a deforming material is conceptually
“painted” with an orthogonal grid in its reference state, then this grid will deform with the
material, thereby providing a natural “embedded” curvilinear coordinate system with an
associated “natural” basis that is everywhere tangent to the painted grid lines. When this
“natural” embedded basis is used, our transformation tensor will be identical to the defor-
mation gradient tensor . The component forms of many constitutive material models
become intoxicatingly simple in structure when expressed using an embedded basis (it
remains a point of argument, however, whether or not simple structure implies intuitiveness).
The embedded basis co-varies with the grid lines — in other words, these vectors stay always
tangent to the grid lines and they stretch in proportion with the stretching of the grid lines.
For this reason, the embedded basis is called the covariant basis. Later on, we will introduce a
companion triad of vectors, called the contravariant basis, that does not move with the grid
lines; instead we will find that the contravariant basis moves in a way that it remains always
perpendicular to material planes that do co-vary with the deformation. When a plane of parti-
cles moves with the material, its normal does not generally move with the material!
Study Question 2.1 Consider the following irregular base
vectors expressed in terms of the laboratory basis:
.
(a) Explain why this basis is irregular.
(b) Find the 3×3 matrix of components of the transformation
tensor with respect to the laboratory basis.
g
˜
2
g
˜
1
e
˜
1
e
˜
2
g
˜
1
e
˜
1
2e
˜
2
+ =
g
˜
2
e
˜
1
– e
˜
2
+ =
g
˜
3
e
˜
3
=
F
˜
˜
1
2
0
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 14
Our tensor can be seen as characterizing a transformation operation that will take you
from the orthonormal laboratory base vectors to the irregular base vectors. The three irregular
base vectors, form a triad, which in turn defines a parallelepiped. The volume of
the parallelepiped is given by the Jacobian of the transformation tensor , defined by
. (2.4)
Geometrically, the Jacobian J in Eq. (2.4) equals the volume of the parallelepiped formed by
the covariant base vectors . To see why this triple scalar product is identically
equal to the determinant of the transformation tensor , we now introduce the direct notation
definition of a determinant:
The determinant, det[ ] (also called the Jacobian), of a tensor is the unique scalar
satisfying
[ ] = det[ ] for all vectors . (2.5)
Geometrically this strange-looking definition of the determinate states that if a parallelepiped
is formed by three vectors, , and a transformed parallelepiped is formed by the
three transformed vectors , then the ratio of the transformed volume to the
original volume will have a unique value, regardless what three vectors are chosen to form
original the parallelepiped! This volume ratio is the determinant of the transformation tensor.
Since Eq. (2.5) must hold for all vectors, it must hold for any particular choices of those vectors.
Suppose we choose to identify with the underlying orthonormal basis .
Then, recalling from Eq. (2.4) that is denoted by the Jacobian , Eq. (2.5) becomes
. The underlying Cartesian basis is orthonor-
mal and right-handed, so . Recalling from Eq. (2.1) that the covariant basis is
obtained by the transformation , we get
, (2.6)
which completes the proof that the Jacobian J can be computed by taking the determinant of
the Cartesian transformation tensor or by simply taking the triple scalar product of the covari-
ant base vectors, whichever method is more convenient:
. (2.7)
The set of vectors forms a basis if and only if is invertible — i.e., the Jacobian
must be nonzero. By choice, the laboratory basis is regular and therefore right-
handed. Hence, the irregular basis is
right-handed if
left-handed if . (2.8)
F
˜
˜
g
˜
1
g
˜
2
g
˜
3
, , { }
F
˜
˜
J det
F
˜
˜
| | ≡
g
˜
1
g
˜
2
g
˜
3
, , { }
F
˜
˜
F
˜
˜
F
˜
˜
F
˜
˜
u
˜
• F
˜
˜
v
˜
• F
˜
˜
w
˜
• , , F
˜
˜
u
˜
v
˜
w
˜
, , | | u
˜
v
˜
w
˜
, , { }
u
˜
v
˜
w
˜
, , { }
F
˜
˜
u
˜
• F
˜
˜
v
˜
• F
˜
˜
w
˜
• , ,
{ }
u
˜
v
˜
w
˜
, , { } e
˜
1
e
˜
2
e
˜
3
, , { }
det F
˜
˜
| | J
F
˜
˜
e
˜
1
• F
˜
˜
e
˜
2
• F
˜
˜
e
˜
3
• , ,
| |=J e
˜
1
e
˜
2
e
˜
3
, , | | e
˜
1
e
˜
2
e
˜
3
, , { }
e
˜
1
e
˜
2
e
˜
3
, , | |=1
g
˜
i
F
˜
˜
e
˜
i
• =
g
˜
1
g
˜
2
g
˜
3
, , | | J =
J det
F
˜
˜
| | g
˜
1
g
˜
2
g
˜
3
× ( ) • g
˜
1
g
˜
2
g
˜
3
, , | | ≡ = =
g
˜
1
g
˜
2
g
˜
3
, , { }
F
˜
˜
e
˜
1
e
˜
2
e
˜
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { }
J 0 >
J 0 <
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 15
2.1 Modified summation convention
Given that is a basis, we know there exist unique coefficients
such that any vector can be written . Using Einstein’s summation
notation, you may write this expansion as
(2.9)
By convention, components with respect to an irregular basis are identified
with superscripts,
1
rather than subscripts. Summations always occur on different levels — a
superscript is always paired with a subscript in these implied summations. The summation
convention rules for an irregular basis are:
1. An index that appears exactly once in any term is called a “free index,” and it must ap-
pear exactly once in every term in the expression.
2. Each particular free index must appear at the same level in every term. Distinct free in-
dices may permissibly appear at different levels.
3. Any index that appears exactly twice in a given term is called a dummy sum index and
implies summation from 1 to 3. No index may appear more than twice in a single term.
4. Given a dummy sum pair, one index must appear at the upper “contravariant” level,
and one must appear at the lower “covariant” level.
5. Exceptions to the above rules must be clearly indicated whenever the need arises.
• Exceptions of rule #1 are extremely rare in tensor analysis because rule #1 can never be violated in
any well-formed tensor expression. However, exceptions to rule #1 do regularly appear in non-ten-
sor (matrix) equations. For example, one might define a matrix with components given by
. Here, both and are free indices, and the right-hand-side of this equation violates
rule #1 because the index occurs exactly once in the first term but not in the second term. This
definition of the numbers is certainly well-defined in a matrix sense
2
, but the equation is a vio-
lation of tensor index rule #1. Consequently if you really do wish to use the equation
to define some matrix , then you should include a parenthetical comment that the tensor index
conventions are not to be applied — otherwise your readers will think you made a typo.
• Exceptions of rules #2 and #4 can occur when working with a regular (right-handed orthonormal)
basis because it turns out that there is no distinction between covariant and contravariant components
when the basis is regular. For example, is identically equal to when the basis is regular. That’s
why indicial expressions in most engineering publications show all components using only sub-
scripts.
• Exceptions of rule #3 sometimes occur when the indices are actually referenced to a particular basis
and are not intended to apply to any basis. Consider, for example, how you would need to handle an
exception to rule #3 when defining the principle direction and eigenvalue associated with
some tensor . You would have to write something like “ ”
in order to call attention to the fact that the index is supposed to be a free index, not summed. An-
other exception to rule #3 occurs when an index appears only once, but you really do wish for a sum-
mation over that index. In that case you must explicitly show the summation sign in front of the
equation. Similarly, if you really do wish for an index to appear more than twice, then you must ex-
plicitly indicate whether that index is free or summed.
1. The superscripts are only indexes, not exponents. For example, is the second contravariant component of a vector
— it is not the square of some quantity . If your work does involve some scalar quantity “ ”, then you should typeset
its square as whenever there is any chance for confusion.
2. This equation is not well defined as an indicial tensor equation because it will not transform properly under a basis
change. The concept of what constitutes a well-formed tensor operation will be discussed in more detail later.
g
˜
1
g
˜
2
g
˜
3
, , { } a
1
a
2
a
3
, , { }
a
˜
a
˜
a
1
g
˜
1
a
2
g
˜
2
a
3
g
˜
3
+ + =
a
˜
a
i
g
˜
i
=
g
˜
1
g
˜
2
g
˜
3
, , { }
a
2
a
˜
a a
a ( )
2
A | |
A
ij
v
i
v
j
+ = i j
i
A
ij
A
ij
v
i
v
j
+ =
A | |
v
i
v
i
i
th
p
˜
i
λ
i
T
˜
˜
T
˜
˜
p
˜
i
• λ
i
p
˜
i
(no sum over index i) =
i
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 16
We arbitrarily elected to place the index on the lower level of our basis , so
(recalling rule #4) we call it the “covariant” basis. The coefficients have the index
on the upper level and are therefore called the contravariant components of the vector .
Later on, we will define covariant components with respect to a carefully defined
complementary “contravariant” basis . We will then have two ways to write the
vector: . Keep in mind: we have not yet indicated how these contra- and co-
variant components are computed, or what they mean physically, or why they are useful. For
now, we are just introducing the standard “high-low” notation used in the study of irregular
bases. (You may find the phrase “co-go-below” helpful to remember the difference between co- and con-
tra-variant.)
We will eventually show there are four ways to write a second-order tensor. We will intro-
duce contravariant components and covariant components , such that
. We will also introduce “mixed” components and such that
. Note the use of a “dot” to serve as a place holder to indicate the
order of the indices (the order of the indices is dictated by the order of the dyadic basis pair).
As shown in Section 3.4, use of a “dot” placeholder is necessary only for nonsymmetric tensors.
(namely, we will find that symmetric tensor components satisfy the property that ,
so the placement of the “dot” is inconsequential for symmetric tensors.). In professionally typeset
manuscripts, the dot placeholder might not be necessary because, for example, can be
typeset in a manner that is clearly distinguishable from . The dot placeholders are more
frequently used in handwritten work, where individuals have unreliable precision or clarity
of penmanship. Finally, the number of dot placeholders used in an expression is typically
kept to the minimum necessary to clearly demark the order of the indices. For example,
means the same thing as . Either of these expressions clearly show that the indices are sup-
posed to be ordered as “ followed by ,” not vice versa. Thus, only one dot is enough to
serve the purpose of indicating order. Similarly, means the same thing as , but the dots
serve no clarifying purpose for this case when all indices are on the same level (thus, they are
omitted). The importance of clearly indicating the order of the indices is inadequately empha-
sized in some texts.
1
1. For example, Ref. [4] fails to clearly indicate index ordering. They use neither well-spaced typesetting nor dot placehold-
ers, which can be confusing.
g
˜
1
g
˜
2
g
˜
3
, , { }
a
1
a
2
a
3
, , { }
a
˜
a
1
a
2
a
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { }
a
˜
a
i
g
˜
i
a
i
g
˜
i
= =
T
ij
T
ij
T
˜
˜
T
ij
g
˜
i
g
˜
j
T
ij
g
˜
i
g
˜
j
˜
= = T
•j
i
T
i
•j
T
˜
˜
T
•j
i
g
˜
i
g
˜
j
T
i
•j
g
˜
i
g
˜
j
= =
T
•j
i
T
j
•i
=
T
j
i
T
j
i
T
•j
i•
T
•j
i
i j
T
••
ij
T
ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 17
IMPORTANT: As proved later (Study question 2.6), there is no difference between cova-
riant and contravariant components whenever the basis is orthonormal. Hence, for example,
is the same as . Nevertheless, in order to always satisfy rules #2 and
#4 of the sum conventions, we rewrite all familiar orthonormal formulas so that the summed
subscripts are on different levels. Furthermore, throughout this document, the following are
all equivalent symbols for the Kronecker delta:
= . (2.10)
BEWARE: as discussed in Section 3.5, the set of values should be regarded as indexed
symbols as defined above, not as components of any particular tensor. Yes, it’s true that
are components of the identity tensor with respect to the underlying rectangular Cartesian
basis , but they are not the contravariant components of the identity tensor with
respect to an irregular basis. Likewise, are not the covariant components of
the identity tensor with respect to the irregular basis. Interestingly, the mixed-level Kronecker
delta components, , do turn out to be the mixed components of the identity tensor with
respect to either basis! Most of the time, we will be concerned only with the components of
tensors with respect to the irregular basis. The Kronecker delta is important in its own right.
This is one reason why we denote the identity tensor by a symbol different from its compo-
nents. Later on, we will note the importance of the permutation symbol (which equals +1
if ijk ={123, 231, or 312}, -1 if ijk={321, 213, or 132}, and zero otherwise). The permutation sym-
bol represents the components of the alternating tensor with respect to the any regular (i.e.,
right-handed orthonormal basis), but not with respect to an irregular basis. Consequently, we
will represent the alternating tensor by a different symbol so that we can continue to use the
permutation symbol as an independent indexed quantity. Tracking the basis to which
components are referenced is one the most difficult challenges of curvilinear coordinates.
Important notation glitch Square brackets [ ] will be used to indicate a matrix, and
braces { } will indicate a matrix containing vector components. For example,
denotes the matrix that contains the contravariant components of a vector . Similarly,
is the matrix that contains the covariant components of a second-order tensor , and
will be used to denote the matrix containing the contravariant components of . Any
indices appearing inside a matrix merely indicate the co/contravariant nature of the matrix —
they are not interpreted in the same way as indices in an indicial expression. The indices
merely to indicate the (high/low/mixed) level of the matrix components. The rules on page 15
apply only to proper indicial equations, not to equations involving matrices. We will later
e
˜
1
e
˜
2
e
˜
3
, , { } e
˜
1
e
˜
2
e
˜
3
, , { }
δ
ij
δ
i
j
δ
ij
, ,
1 if i=j
0 if i j ≠
¹
´
¦
δ
ij
δ
ij
I
˜
˜
e
˜
1
e
˜
2
e
˜
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { } δ
ij
δ
i
j
I
˜
˜
ε
ijk
ξ
˜
˜
˜
ε
ijk
3 3 ×
3 1 × v
i
{ }
3 1 × v
˜
T
ij
| |
T
˜
˜
T
ij
| |
T
˜
˜
ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 18
prove, for example, that the determinant can be computed by the determinant of the
mixed components of . Thus, we might write . The brackets around
indicate that this equation involves the matrix of mixed components, so the rules on page 15
do not apply to the indices. It’s okay that and don’t appear on the left-hand side.
2.2 The metric coefficients and the dual contravariant basis
For reasons that will soon become apparent, we introduce a symmetric set of numbers ,
called the “metric coefficients,” defined
. (2.11)
When the space is Euclidean, the base vectors can be expressed as linear combi-
nations of the underlying orthonormal laboratory basis and the above set of dot products can
be computed using the ordinary orthonormal basis formulas.
1
We also introduce a dual “contravariant” basis defined such that
. (2.12)
Geometrically, Eq. (2.12) requires that the first contravariant base vector must be perpen-
dicular to both and , so it must be of the form . The as-yet undeter-
mined scalar is determined by requiring that equal unity.
(2.13)
In the second-to-last step, we recognized the triple scalar product, , to be the
Jacobian defined in Eq. (2.7). In the last step we asserted that the result must equal unity.
Consequently, the scalar is merely the reciprocal of the Jacobian:
(2.14)
All three contravariant base vectors can be determined similarly to eventually give the final
result:
, , . (2.15)
where
. (2.16)
1. Note: If the space is not Euclidean, then an orthonormal basis does not exist, and the metric coefficients must be spec-
ified a priori. Such a space is called Riemannian. Shell and membrane theory deals with 2D curved Riemannian mani-
folds embedded in 3D space. The geometry of general relativity is that of a four-dimensional Riemannian manifold. For
further examples of Riemannian spaces, see, e.g., Refs. [5, 7].
det
T
˜
˜
T
˜
˜
det
T
˜
˜
det T
•j
i
| | = T
•j
i
ij i j
g
ij
g
ij
g
˜
i
g
˜
j
• ≡
g
˜
1
g
˜
2
g
˜
3
, , { }
g
ij
g
˜
1
g
˜
2
g
˜
3
, , { }
g
˜
i
g
˜
j
• δ
j
i
=
g
˜
1
g
˜
2
g
˜
3
g
˜
1
α g
˜
2
g
˜
3
× ( ) =
α g
˜
1
g
˜
1

g
˜
1
g
˜
1
• α g
˜
2
g
˜
3
× ( ) | | g
˜
1
• αg
˜
1
g
˜
2
g
˜
3
× ( ) • αJ 1 = = = =
“set”
g
˜
1
g
˜
2
g
˜
3
× ( ) •
J
α
α
1
J
--- =
g
˜
1
1
J
--- g
˜
2
g
˜
3
× ( ) = g
˜
2
1
J
--- g
˜
3
g
˜
1
× ( ) = g
˜
3
1
J
--- g
˜
1
g
˜
2
× ( ) =
J g
˜
1
g
˜
2
g
˜
3
× ( ) • g
˜
1
g
˜
2
g
˜
3
, , | | ≡ =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 19
An alternative way to obtain the dual contravariant basis is to assert that is in
fact a basis; we may therefore demand that coefficients must exist such that each covariant
base vector can be written as a linear combination of the contravariant basis: .
Dotting both sides with and imposing Eqs. (2.11) and (2.12) shows that the transformation
coefficients must be identical to the covariant metric coefficients: . Thus
. (2.17)
This equation may be solved for the contravariant basis. Namely,
, (2.18)
where the matrix of contravariant metric components is obtained by inverting the covari-
ant metric matrix . Dotting both sides of Eq. (2.18) by we note that
, (2.19)
which is similar in form to Eq. (2.11).
In later analyses, keep in mind that is the inverse of the matrix. Furthermore, both
metric matrices are symmetric. Thus, whenever these are multiplied together with a con-
tracted index, the result is the Kronecker delta:
. (2.20)
Another quantity that will appear frequently in later analyses is the determinant of the
covariant metric matrix and the determinant of the contravariant metric matrix:
and . (2.21)
Recalling that the matrix is the inverse of the matrix, we note that
. (2.22)
Furthermore, as shown in Study Question 2.5, is related to the Jacobian J from Eqs. (2.4)
and (2.6) by
, (2.23)
Thus
. (2.24)
g
˜
1
g
˜
2
g
˜
3
, , { }
L
ik
g
˜
i
g
˜
i
L
ik
g
˜
k
=
g
˜
k
L
ik
g
ik
=
g
˜
i
g
ik
g
˜
k
=
g
˜
i
g
ik
g
˜
k
=
g
ij
g
ij
| | g
˜
j
g
ij
g
˜
i
g
˜
j
• =
g
ij
g
ij
g
ik
g
kj
g
ki
g
kj
g
ik
g
jk
g
ki
g
jk
δ
j
i
= = = =
g
o
g
ij
g
o
g
ij
g
o
det
g
11
g
12
g
13
g
21
g
22
g
23
g
31
g
32
g
33
≡ g
o
det
g
11
g
12
g
13
g
21
g
22
g
23
g
31
g
32
g
33

g
ij
| | g
ij
| |
g
o
1
g
o
----- =
g
o
g
o
J
2
=
g
o
1
J
2
----- =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 20
Non-trivial lemma
1
Note that Eq. (2.21) shows that may be regarded as a function of the
nine components of the matrix. Taking the partial derivative of with respect to a partic-
ular component gives a result that is identical to the signed subminor (also called the
cofactor) associated with that component. The “subminor” is a number associated with each
matrix position that is equal to the determinant of the submatrix obtained by striking
out the row and column of the original matrix. To obtain the cofactor (i.e., the signed
subminor) associated with the ij position, the subminor is multiplied by . Denoting this
cofactor by , we have
(2.25)
Any book on matrix analysis will include a proof that the inverse of a matrix can be
obtained by taking the transpose of the cofactor matrix and dividing by the determinant:
(2.26)
From which it follows that . Applying this result to the case that is
the symmetric matrix , and recalling that is denoted , and also recalling that
is given by , Eq. (2.25) can be written
(2.27)
Similarly,
(2.28)
With these equations, we are introduced for the first time to a new index notation rule: sub-
script indices that appear in the “denominator” of a derivative should be regarded as super-
script indices in the expression as a whole. Similarly, superscripts in the denominator should
be regarded as subscripts in the derivative as a whole. With this convention, there is no viola-
tion of the index rule that requires free indices to be on the same level in all terms.
Recalling Eq. (2.23), we can apply the chain rule to note that
(2.29)
Equation (2.27) permits us to express the left-hand-side of this equation in terms of the contra-
1. This side-bar can be skipped without impacting your ability to read subsequent material.
g
o
g
ij
g
o
g
ij
2 2 ×
i
th
j
th
1 – ( )
i j +
g
ij
C
∂g
o
∂g
ij
--------- g
ij
C
=
A | |
A | |
1 –
A | |
CT
det A | |
---------------- =
A | |
C
det A | | ( ) A | |
T –
= A | |
g
ij
| | det g
ij
| | g
o
g
ij
| |
1 –
g
ij
| |
∂g
o
∂g
ij
--------- g
o
g
ij
=
∂g
o
∂g
ij
--------- g
o
g
ij
=
∂g
o
∂g
ij
--------- 2J
∂J
∂g
ij
--------- =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 21
variant metric. Thus, we may solve for the derivative on the right-hand-side to obtain
(2.30)
where we have again recalled that .
Partial Answer: (a) (b) (c) ,
. (d) = .
∂J
∂g
ij
---------
1
2
--- Jg
ij
=
g
o
J
2
=
Study Question 2.2 Let represent the ordinary orthonormal laboratory basis.
Consider the following irregular base vectors:
.
(a) Construct the metric coefficients .
(b) Construct by inverting .
(c) Construct the contravariant (dual) basis by
directly using the formula of Eq. (2.15). Sketch
the dual basis in the picture at right and visually
verify that it satisfies the condition of Eq. (2.12).
(d) Confirm that the formula of Eq. (2.18)
gives the same result as derived in part (c).
(e) Redo parts (a) through (d) if is now replaced by .
e
˜
1
e
˜
2
e
˜
3
, , { }
g
˜
2
g
˜
1
e
ˆ
˜
1
e
ˆ
˜
2
g
˜
1
e
˜
1
2e
˜
2
+ =
g
˜
2
e
˜
1
– e
˜
2
+ =
g
˜
3
e
˜
3
=
g
ij
g
ij
| | g
ij
| |
g
˜
3
g
˜
3
5e
˜
3
=
g
11
=5 g
13
=0 g
22
=2 , , g
11
=2 9 ⁄ g
33
=1 g
21
= 1 – 9 ⁄ , , J=3
g
˜
2
=
1
3
---
g
˜
3
g
˜
1
× ( )= –
2
3
---
e
˜
1
+
1
3
---
e
˜
2
g
˜
2
g
21
g
˜
1
g
22
g
˜
2
g
23
g
˜
3
+ + =
1
9
---
g
˜
1

5
9
---
g
˜
2
+
2
3
---
e
ˆ
˜
1

1
3
---
e
ˆ
˜
2
+ =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 22
Partial Answer: (a) yes -- note the direction of the third base vector (b)
A faster way to get the metric coefficients: Suppose you already have the with
respect to the regular laboratory basis. We now prove that the matrix can by obtained by
. Recall that . Therefore
, (2.31)
The left hand side is , and the right hand side represents the ij components of with
respect to the regular laboratory basis, which completes the proof that the covariant metric
coefficients are equal to the ij laboratory components of the tensor
Study Question 2.3 Let represent the ordinary orthonormal laboratory basis.
Consider the following irregular base vectors:
.
(a) is this basis right handed?
(b) Compute and .
(c) Construct the contravariant (dual) basis.
(d) Prove that points in the direction of the
outward normal to the upper surface of the
shaded parallelogram.
(d) Prove that (see drawing label) is the
reciprocal of the length of .
(e) Prove that the area of the face of the parallelepiped whose normal is parallel to is
given by the Jacobian times the magnitude of .
e
˜
1
e
˜
2
e
˜
3
, , { }
g
˜
2
g
˜
1
e
ˆ
˜
1
e
ˆ
˜
2
g
˜
1
θ
h
1
h
2
g
˜
1
e
˜
1
2e
˜
2
+ =
g
˜
2
3e
˜
1
e
˜
2
+ =
g
˜
3
7e
˜
3
– =
g
ij
| | g
ij
| |
g
˜
1
h
k
g
˜
k
g
˜
k
J g
˜
k
F
ij
| |
g
ij
| |
F | |
T
F | | g
˜
i
F
˜
˜
e
˜
i
• =
g
˜
i
g
˜
j
• F
˜
˜
e
˜
i
• ( ) F
˜
˜
e
˜
j
• ( ) • e
˜
i
F
˜
˜
T
• ( ) F
˜
˜
e
˜
j
• ( ) • e
˜
i
F
˜
˜
T
F
˜
˜
• ( ) e
˜
j
• • = = =
g
ij
F | |
T
F | |
g
ij
F
˜
˜
T
F
˜
˜

Study Question 2.4 Using the [F] matrix from Study Question 2.1, verify that
gives the same matrix for as computed in Question 2.2.
F | |
T
F | |
g
ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 23
Partial Answer: (a) Substitute Eqs (2.1) and (2.32) into Eq. (2.12) and use the fact that
. (b) Follow logic similar to Eq. (2.31). (c) A correct proof must use the fact that is
invertible. Reminder: a tensor is positive definite iff . The dot product
positivity property states that . (d) Easy: the determinant of a product is the product
of the determinants.
(a) By definition, , so the fact that these dot products equal the Kronecker delta says,
for example, that (i.e., it is a unit vector) and is perpendicular to , etc. (b) There is no
info about the handedness of the system. (c) By definition, the first contravariant base vector must be
Study Question 2.5 Recall that the covariant basis may be regarded as a transformation
of the right-handed orthonormal laboratory basis. Namely, .
(a) Prove that the contravariant basis is obtained by the inverse transpose transformation:
, (2.32)
where is the same as .
(b) Prove that the contravariant metric coefficients are equal to the ij laboratory compo-
nents of the tensor .
(c) Prove that, for any invertible tensor , the tensor is positive definite. Explain
why this implies that the and matrices are positive definite.
(d) Use the result from part (b) to prove that the determinant of the covariant metric
matrix equals the square of the Jacobian .
g
˜
i
F
˜
˜
e
˜
i
• =
g
˜
i
F
˜
˜
T –
e
˜
i
• =
e
˜
i
e
˜
i
g
ij
F
˜
˜
1 –
F
˜
˜
T –

S
˜
˜
S
˜
˜
T
S
˜
˜

g
ij
| | g
ij
g
o
g
ij
J
2
e
˜
i
e
˜
j
• δ
j
i
= S
˜
˜
A
˜
˜
u
˜
A
˜
˜
u
˜
• • 0 > u
˜
0
˜
≠ ∀
v
˜
v
˜

> 0 if v
˜
0
˜

= 0 iff v
˜
=
0
˜
¹
´
¦
Study Question 2.6 SPECIAL CASE (orthonormal systems)
Suppose that the metric coefficients happen to equal the Kronecker delta:
(2.33)
(a) Explain why this condition implies that the covariant base vectors are orthonormal.
(b) Does the above equation tell us anything about the handedness of the systems?
(c) Explain why orthonormality implies that contravariant base vectors are identically
equal to the covariant base vectors.
g
ij
δ
ij
=
g
ij
g
˜
i
g
˜
j
• ≡
g
˜
1
1 = g
˜
1
g
˜
2
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 24
perpendicular to the second and third covariant vectors, and it must satisfy , which (with
some thought) leads to the conclusion that .
Super-fast way to get the dual (contravariant) basis and metrics Recall [Eq. (2.1)] that
the covariant basis can be connected to the lab basis through the equation
(2.34)
When we introduced this equation, we explained that the column of the lab component
matrix [F] would contain the lab components of . Later, in Study Question 2.5, Eq. (2.34), we
asserted that
, (2.35)
Consequently, we may conclude that the column of must contain the lab compo-
nents of . This means that the row of must contain the contravariant base vec-
tor.
Partial Answer: (a) (b)
(c), . (d)
(e) All results the same. This way was computationally faster!
g
˜
1
g
˜
1
• 1 =
g
˜
1
g
˜
1
=
g
˜
i
F
˜
˜
e
˜
i
• =
i
th
g
˜
i
g
˜
i
F
˜
˜
T –
e
˜
i
• =
i
th
F | |
T –
g
˜
i
i
th
F | |
1 –
i
th
Study Question 2.7 Let represent the ordinary orthonormal laboratory basis.
Consider the following irregular base vectors:
.
(a) Construct the matrix by putting the lab compo-
nents of into the column.
(b) Find
(c) Find from the row of .
(d) Directly compute from the result of part (c).
(e) Compare the results with those found in earlier study questions and comment on
which method was fastest.
e
˜
1
e
˜
2
e
˜
3
, , { }
g
˜
2
g
˜
1
e
ˆ
˜
1
e
ˆ
˜
2
g
˜
1
e
˜
1
2e
˜
2
+ =
g
˜
2
e
˜
1
– e
˜
2
+ =
g
˜
3
e
˜
3
=
F | |
g
˜
i
i
th
F | |
1 –
g
˜
i
i
th
F | |
1 –
g
ij
F | |
1 1 – 0
2 1 0
0 0 1
= F | |
1 –
1 3 ⁄ 1 3 ⁄ 0
2 3 ⁄ – 1 3 ⁄ 0
0 0 1
=
g
˜
2
= –
2
3
---
e
˜
1
+
1
3
---
e
˜
2
g
12
g
˜
1
g
˜
2
• dot product of 1st and 2nd rows
1
9
--- – = = =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 25
Partial Answer: (a) We know that because it’s the coefficient of in the expression
. (b) Draw a path from the origin to the tip of the vector such that the path consists of
straight line segments that are always parallel to one of the base vectors or . You can thereby
demonstrate graphically that . Hence , (c) You’re on your own. (d) The
length of the second segment equals the magnitude of , or . (e) In general, when a vector is
broken into segments parallel to the covariant basis, then the length of the segment parallel to must
equal , with no implied sum on i. Lesson here? Because the base vectors are not necessarily of
unit length, the meaning of the contravariant component must never be confused with the length of
the associated line segment! The contravariant component is merely the coefficient of in the lin-
ear expansion . The component’s magnitude equals the length of the segment divided by the
length of the base vector!
Study Question 2.8 Consider the same irregular base vectors in Study Question 2.2.
Namely, , , .
Now consider three vectors, , , and , in
the 1-2 plane as shown.
(a) Referring to the sketch, graphically
demonstrate that gives a vector
identically equal to . Thereby explain
why the contravariant components of
are: , , and .
(b) Using similar geometrical arguments,
find the contravariant components of .
(c) Find the contravariant components of .
(d) The path from the tail to the tip of a vec-
tor can be decomposed into parts that are parallel to the base vectors. For example, the vec-
tor can be viewed as a segment equal to plus a segment equal to . What are the
lengths of each of these individual segments?
(e) In general, when any given vector is broken into segments parallel to the covariant
base vectors { , how are the lengths of these segments related (if at all) to the
contravariant components of the vector?
g
˜
1
e
˜
1
2e
˜
2
+ = g
˜
2
e
˜
1
– e
˜
2
+ = g
˜
3
e
˜
3
=
a
˜
b
˜
g
˜
2
g
˜
1
e
ˆ
˜
1
e
ˆ
˜
2
c
˜
a
˜
b
˜
c
˜
g
˜
1
g
˜
2

a
˜
a
˜
a
1
=1 a
2
= 1 – a
3
=0
b
˜
c
˜
b
˜
g
˜
1
2g
˜
2
u
˜
g
˜
1
g
˜
2
g
˜
3
, , { }
u
1
u
2
u
3
, , { }
a
1
=1 g
˜
1
a
˜
g
˜
1
g
˜
2
– = b
˜
g
˜
1
g
˜
2
b
˜
g
˜
1
2g
˜
2
+ = b
1
=1 b
2
= etc
2g
˜
2
2 5 u
˜
g
˜
i
u
i
g
ii
u
i
u
i
g
˜
i
u
˜
u
i
g
˜
i
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 26
2.3 Transforming components by raising and lower indices.
At this point, we have not yet identified a procedural means of determining the contravar-
iant components or covariant components — we merely assert that
they exist. The sets and are each bases, so we know that we may
express any vector as
. (2.36)
Dotting both sides of this equation by gives .
Equation (2.12) allows us to simplify the second term as . Using
Eq. (2.19), the third term simplifies as , where the last step utilized
the fact that the matrix is symmetric. Thus, we conclude
(2.37)
. (2.38)
Similarly, by dotting both sides of Eq. (2.36) by you can show that
(2.39)
. (2.40)
These very important equations provide a means of transforming between types of compo-
nents. In Eq. (2.40), note how the matrix effectively “lowers” the index on , changing it
to an “i”. Similarly, in Eq. (2.38), the matrix “raises” the index on , changing it to an “i”.
Thus the metric coefficients serve a role that is very similar to that of the Kronecker delta in
orthonormal theory. Incidentally, note that Eqs. (2.17) and (2.18) are also examples of raising
a
1
a
2
a
3
, , { } a
1
a
2
a
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { } g
˜
1
g
˜
2
g
˜
3
, , { }
a
˜
a
˜
a
k
g
˜
k
a
k
g
˜
k
= =
g
˜
i
a
˜
g
˜
i
• a
k
g
˜
k
g
˜
i
• a
k
g
˜
k
g
˜
i
• = =
a
k
g
˜
k
g
˜
i
• a
k
δ
k
i
a
i
= =
a
k
g
˜
k
g
˜
i
• a
k
g
ki
g
ik
a
k
= =
g
ik
a
i
a
˜
g
˜
i
• =
a
i
g
ik
a
k
=
g
˜
i
a
i
a
˜
g
˜
i
• =
a
i
g
ik
a
k
=
g
ik
a
k
g
ik
a
k
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 27
and lowering indices.
Partial Answer: For the first one, the index “m” is repeated. The allows you to lower the super-
script “m” on so that it becomes an “l” when the metric coefficient is removed. The final simplified
result for the first expression is . Be certain that your final simplified results still have the same free
indices on the same level as they were in the original expression. The very last expression has a twist:
recognizing the repeated index , we can lower the index on and turn it into an “i” when we
remove so that a simplified expression becomes . Alternatively, if we prefer to remove
the , we could raise the index on , making it into a “k” so that an alternative simplification of
the very last expression becomes . Either method gives the same result. The final simplifi-
cation comes from recalling that the mixed metric components are identically equal to the Kronecker
delta. Thus, , which is merely one of the expressions given in Eq. (2.20).
Finding contravariant vector components — classical method In Study Question 2.8,
we used geometrical methods to find the contravariant components of several vectors. Eqs
(2.37) through (2.40) provide us with an algebraic means of determining the contravariant and
covariant components of vectors. Namely, given a basis and a vector , the con-
tra- and co-variant components of the vector are determined as follows:
STEP 1. Compute the covariant metric coefficients .
STEP 2. Compute the contravariant metric coefficients by inverting .
STEP 3. Compute the contravariant basis .
STEP 4. Compute the covariant components .
STEP 5. Compute the contravariant components . Alternatively, .
Finding contravariant vector components — accelerated method If the lab components
of the vector are available, then you can quickly compute the covariant and contravariant
components by noting that and, similarly,
. The steps are as follows:
STEP 1. Construct the matrix by putting the lab components of into the column.
STEP 2. The covariant components are found from , using lab components.
STEP 3. The contravariant components are found from , using lab components.
Study Question 2.9 Simplify the following expressions so that there are no metric coeffi-
cients:
, , , , , (2.41) a
m
g
ml
g
pk
u
p
f
n
g
ni
r
i
g
ij
s
j
g
ij
b
k
g
kj
g
ij
g
jk
g
ml
a
m
a
l
j j g
jk
g
ij
g
ij
g
jk
g
i
k
=
g
jk
j g
ij
g
ij
g
jk
g
i
k
=
g
ij
g
jk
δ
i
k
=
g
˜
1
g
˜
2
g
˜
3
, , { } a
˜
g
ij
g
˜
i
g
˜
j
• =
g
ij
g
ij
| |
g
˜
i
g
ij
g
˜
j
=
a
i
a
˜
g
˜
i
• =
a
i
a
˜
g
˜
i
• = a
i
g
ij
a
j
=
a
˜
a
i
a
˜
g
˜
i
• a
˜
F
˜
˜
e
˜
i
• • F
˜
˜
T
a
˜
• ( ) e
˜
i
• = = =
a
i
F
˜
˜
1 –
a
˜
• ( ) e
˜
i
• =
F | | g
˜
i
i
th
F | |
T
a { }
F | |
1 –
a { }
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 28
Partial Answer: Classical method: First get the contra- and co-variant base vectors, then apply the
formulas:
, and so on giving .
, and so on giving .
ACCELERATED METHOD:
covariant components: . . . agrees!
contravariant components: . . . agrees!
By applying techniques similar to those used to derive Eqs. (2.37) through (2.40), the follow-
ing analogous results can be shown to hold for tensors:
(2.42)
(2.43a)
(2.43b)
(2.43c)
. (2.43d)
Again, observe how the metric coefficients play a role similar to that of the Kronecker delta
for orthonormal bases. For example, in Eq. (2.43a), the expression “becomes” as
follows: upon seeing , you look for a subscript i or m on T. Finding the subscript m, you
replace it with an i and change its level from a subscript to a superscript. The metric coeffi-
cient similarly raises the subscript n on to a become a superscript j on .
Incidentally, if the matrix and lab components are available for , then
, , , and (2.44)
These are nifty quick-answer formulas, but for general discussions, Eqs. (2.43) are really more
meaningful and those equations are applicable even when you don’t have lab components.
Study Question 2.10 Using the basis and vectors from Study Question 2.8, apply the clas-
sical step-by-step algorithm to compute the covariant and contravariant components of
the vectors, and . Check whether the accelerated method gives the same answer. Be
sure to verify that your contravariant components agree with those determined geometri-
cally in Question 2.8.
a
˜
b
˜
a
2
a
˜
g
˜
2
• 2e
˜
1
e
˜
2
+ ( ) e
˜
1
– e
˜
2
+ ( ) • 1 – = = = a
1
a
2
a
3
, , { } 4 1 – 0 , , { } =
a
2
a
˜
g
˜
2
• 2e
˜
1
e
˜
2
+ ( )
2
3
---
e
˜
1

1
3
---
e
˜
2
+ ( ) • 1 – = = = a
1
a
2
a
3
, , { } 1 1 – 0 , , { } =
F | |
T
a { }
lab
1 2 0
1 – 1 0
0 0 1
2
1
0
4
1 –
0
= =
F | |
1 –
a { }
lab
1 3 ⁄ 1 3 ⁄ 0
2 3 ⁄ – 1 3 ⁄ 0
0 0 1
2
1
0
1
1 –
0
= =
T
˜
˜
T
ij
g
˜
i
g
˜
j
T
ij
g
˜
i
g
˜
j
T
•j
i
g
˜
i
g
˜
j
T
i
•j
g
˜
i
g
˜
j
= = = =
T
ij
g
˜
i
T
˜
˜
g
˜
j
• • T
mn
g
im
g
jn
T
•k
i
g
kj
T
k
•j
g
ki
= = = =
T
ij
g
˜
i
T
˜
˜
g
˜
j
• • T
mn
g
mi
g
nj
T
•j
k
g
ki
T
i
•k
g
kj
= = = =
T
•j
i
g
˜
i
T
˜
˜
g
˜
j
• • T
ik
g
kj
T
kj
g
ki
T
m
•n
g
mi
g
nj
= = = =
T
i
•j
g
˜
i
T
˜
˜
g
˜
j
• • T
kj
g
ki
T
ik
g
kj
T
•n
m
g
mi
g
nj
= = = =
T
mn
g
im
g
jn
T
ij
g
im
g
jn
T
mn
T
ij
F | | T | |
lab
T
˜
˜
T
ij
| | F | |
1 –
T | |
lab
F | |
T –
= T
ij
| | F | |
T
T | |
lab
F | | = T
•j
i
| | F | |
1 –
T | |
lab
F | | = T
i
•j
| | F | |
T
T | |
lab
F | |
T –
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 29
Our final index changing properties involve the mixed Kronecker delta itself. The mixed
Kronecker delta changes the index symbol without changing its level. For example,
(2.45)
etc. (2.46)
Contrast this with the metric coefficients and , which change both the symbol and the
level of the sub/superscripts.
Partial Answer: (a) Seeing the , you can get rid of it if you can find a subscript “i” or “m.” Find-
ing “i” as a subscript on H, you raise its level and change it to an “m,” leaving a “dot” placeholder
where the old subscript lived. After similarly eliminating the , the final simplified result is .
(b) (c) For the first and last “g” factor, you can lower the “i” to make it an “m” and then raise
the “m” to make it a “p.” Alternatively, you can recognize the opportunity to apply Eq. (2.20) so that
, and then Eq. (2.46) allows you to change the original superscript “i” on the H tensor to
a “p” leaving its level unchanged. (d) The i and j are already on the upper level, so you leave them
alone. To raise the “k” subscript, you multiply by . The “p” is raised similarly so that the final
result is . (g) Yes, of course!
δ
i
j
v
i
δ
i
j
v
j
= v
i
δ
j
i
v
j
=
T
ij
δ
i
k
T
kj
= δ
j
n
T
i
•j
δ
m
i
T
m
•n
=
g
ij
g
ij
Study Question 2.11 Let denote a fourth-order tensor. Use the method of raising and
lowering indices to simplify the following expressions so that there are no metric coeffi-
cients or Kronecker deltas ( , , or ) present.
(a) (b) (c)
Multiply the following covariant, or mixed components by appropriate combinations of the
metric coefficients to convert them all to pure contravariant components (i.e., all super-
scripts).
(d) (e) (f)
(g) Are we having fun yet?
H
~
~
~
~
g
ij
g
ij
δ
i
j
H
ijkl
g
im
g
jn
δ
m
q
H
ipqn
g
is
g
nj
H
•j•l
i•k•
g
im
g
jn
g
mp
H
••kp
ij••
H
ijkl
H
•j•l
i•k•
g
im
g
jn
H
••kl
mn••
H
•pm•
s••j
g
im
g
mp
δ
i
p
=
g
km
H
ijmn
H
••kp
ij••
g
km
g
pn
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 30
3. Tensor algebra for general bases
A general basis is one that is not restricted to be orthogonal, normalized, or right-handed.
Throughout this section, it is important to keep in mind that structured (direct) notation for-
mulas remain unchanged. Here we show how the component formulas take on different forms
for general bases that are permissibly nonorthogonal, non-normalized, and/or non-right-
handed.
3.1 The vector inner (“dot”) product for general bases.
Presuming that the contravariant components of two vectors and are known, we may
expand the direct notation for the dot product as
. (3.1)
Substituting the definition (2.11) into Eq. (3.1), the formula for the dot product becomes
= . (3.2)
This result can be written in other forms by raising and lowering indices. Noting, for example,
that , we obtain a much simpler formula for the dot product
= . (3.3)
The simplicity of this formula and its close similarity to the familiar formula from orthonor-
mal theory is one of the principal reasons for introducing the dual bases. We can lower the
index on by writing it as so we obtain another formula for the dot product:
= . (3.4)
Finally, we recognize the index raising combination , to obtain yet another for-
mula:
= . (3.5)
Whenever new results are derived for a general basis, it is always advisable to ensure that
they all reduce to a familiar form whenever the basis is orthonormal. For the special case that
the basis happens to be orthonormal, we note that . Furthermore, for an
orthonormal basis, there is no difference between contravariant and covariant components.
Thus Eqs. 3.2, 3.3, 3.4, and 3.5 do indeed all reduce to the usual orthonormal formula.
a
˜
b
˜
a
˜
b
˜
• a
i
g
˜
i
( ) b
j
g
˜
j
( ) • a
i
b
j
g
˜
i
g
˜
j
• ( ) = =
a
˜
b
˜
• a
i
b
j
g
ij
= a
1
b
1
g
11
a
1
b
2
g
12
a
1
b
3
g
13
a
2
b
1
g
21
… a
3
b
3
g
33
+ + + + +
b
j
g
ij
b
i
=
a
˜
b
˜
• a
i
b
i
= a
1
b
1
a
2
b
2
a
3
b
3
+ +
a
i
g
ki
a
k
a
˜
b
˜
• a
k
b
i
g
ki
= a
1
b
1
g
11
a
1
b
2
g
12
a
1
b
3
g
13
a
2
b
1
g
21
… a
3
b
3
g
33
+ + + + +
b
i
g
ki
b
k
=
a
˜
b
˜
• a
i
b
i
= a
1
b
1
a
2
b
2
a
3
b
3
+ +
g
˜
1
g
˜
2
g
˜
3
, , { } g
ij

ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 31
Partial Answer: (a) (b) (c) yes, of course it agrees!
Index contraction The vector dot product is a special case of a more general operation,
which we will call index contraction. Given two free indices in an expression, we say that we
“contract” them when they are turned into dummy summation indices according to the fol-
lowing rules:
1. if the indices started at the same level, multiply them by a metric coefficient
2. if the indices are at different levels, multiply by a Kronecker delta (and simplify).
In both cases, the indices on the metric coefficient or Kronecker delta must be the same sym-
bol as the indices being contracted, but at different levels so that they become dummy sum-
mation symbols upon application of the summation conventions. The symbol denotes
contraction of the first and second indices; would denote contraction of the second and
eighth indices (assuming, of course, that the expression has eight free indices to start with!).
Index contraction is actually basis contraction and it is the ordering of the basis that dictates
the contraction.
Consider, for example, the expression . This expression has six free indices, and
we will suppose that it therefore corresponds to a sixth-order tensor with the associated base
vectors being ordered the same as the free indices . To contract the first and sec-
ond indices ( and ), which lie on different levels, we must dot the first and second base vec-
tors into themselves, resulting in the Kronecker delta . Thus, contracting the first and
Study Question 3.1 Consider the following nonorthonormal base vectors:
and
Consider two vectors, and , lying in the
1-2 plane as drawn to scale in the figure.
(a) Express and in terms of the lab
basis.
(b) Compute in the ordinary familiar
manner by using lab components.
(c) Use Eqs. (3.2) and (3.3) to compute .
Does the result agree with part (b)?
a
˜
b
˜
g
˜
2
g
˜
1
e
˜
1
e
˜
2
g
˜
1
e
˜
1
2e
˜
2
+ =
g
˜
2
e
˜
1
– e
˜
2
+ = g
˜
3
e
˜
3
=
a
˜
b
˜
a
˜
b
˜
e
˜
a
˜
b
˜

a
˜
b
˜

a
˜
2e
˜
1
e
˜
2
+ = a
˜
b
˜
• 2 =
C
1
2
C
2
8
v
i
W
jk
Z
lmn
g
˜
i
g
˜
j
g
˜
k
g
˜
l
g
˜
m
g
˜
n
i j
δ
j
i
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 32
second indices of gives , which simplifies to . Note that the
contraction operation has reduced the order of the result from a sixth-order tensor down to a
fourth-order tensor. To compute the contraction , we must dot the fourth base vector
into the sixth base vector , which results in . Thus, contracting the fourth and sixth indi-
ces of gives . To summarize,
Index contraction is equivalent to dotting together the base vectors associated with the identified indices.
denotes contraction of the and base vectors.
No matter what basis is used to expand a tensor, the result of a contraction operation will be
the same. In other words, the contraction operation is invariant under a change in basis. By
this, we mean that you can apply the contract and then change the basis or vice versa — the
result will be the same.
Incidentally, note that the vector dot product can be expressed in terms of a contrac-
tion as operating on the dyad . The formal notation for index contraction is rarely
used, but the phrase “index contraction” is very common.
3.2 Other dot product operations
Using methods similar to those in Section 3.1, indicial forms of the operation are
found to be
etc.
In all these formulas, the dot product results in a summation between adjacent indices on
opposite levels. If you know only and , you must first raise an index to apply the dot
product. For example,
. (3.6)
In general, the correct indicial expression for any operation involving dot products can be
derived by starting with the direct notation and expanding each argument in whatever form
happens to be available. This approach typically leads to the opportunity to apply one or
more of Eqs. (2.11), (2.12), or (2.19). For example, suppose you know two tensors in the forms
and . Then
. (3.7)
Note the change in dummy sum indices from ij to mn required to avoid violation of the sum-
mation conventions. For the final step, we merely applied Eq. (2.11). If desired, the above
result may be simplified by using the to lower the “j” superscript on (changing it to an
v
i
W
jk
Z
lmn
δ
j
i
v
i
W
jk
Z
lmn
v
i
W
ik
Z
lmn
C
4
6
g
˜
l
g
˜
n
g
ln
v
i
W
jk
Z
lmn
g
ln
v
i
W
jk
Z
lmn
C
α
β
α
th
β
th
u
˜
v
˜

C
1
2
u
˜
v
˜
C
α
β
T
˜
˜
v
˜

T
˜
˜
v
˜
• T
ij
v
j
g
˜
i
T
ij
v
j
g
˜
i
T
•j
i
v
j
g
˜
i
= = =
T
ij
v
k
T
˜
˜
v
˜
• T
ij
v
k
g
kj
g
˜
i
=
A
˜
˜
A
ij
g
˜
i
g
˜
j
=
B
˜
˜
B
•j
i
g
˜
i
g
˜
j
=
A
˜
˜
B
˜
˜
• A
ij
g
˜
i
g
˜
j
( ) B
•n
m
g
˜
m
g
˜
n
( ) • = A
ij
B
•n
m
g
˜
i
g
˜
j
g
˜
m
• ( )g
˜
n
= A
ij
B
•n
m
g
jm
g
˜
i
g
˜
n
=
g
jm
A
ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 33
“m”) so that
. (3.8)
Alternatively, we could have used the to lower the “m” superscript on B (changing it to
a “j”) so that
. (3.9)
These are all valid expressions for the composition of two tensors. Note that the final result in
the last three equations involved components times the basis dyad . Hence those compo-
nents represent the mixed “ ” components of .
3.3 The transpose operation
The transpose of a tensor is defined in direct notation such that
for all vectors and . (3.10)
Since this must hold for all vectors, it must hold for any particular choice. Taking and
, we see that
. (3.11)
Similarly, you can show that
. (3.12)
A
˜
˜
B
˜
˜
• A
•m
i
B
•n
m
g
˜
i
g
˜
n
=
g
jm
A
˜
˜
B
˜
˜
• A
ij
B
jn
g
˜
i
g
˜
n
=
g
˜
i
g
˜
n
i
•n
A
˜
˜
B
˜
˜

Study Question 3.2 Complete the following table, which shows the indicial components
for all of the sixteen possible ways to express the operation , depending on what
type of components are available for and . The shaded cells have the simplest form
because they do not involve metric coefficients.
A
˜
˜
B
˜
˜

A
˜
˜
B
˜
˜
B
nj
g
˜
n
g
˜
j
B
nj
g
˜
n
g
˜
j
B
•j
n
g
˜
n
g
˜
j
B
n
•j
g
˜
n
g
˜
j
A
im
g
˜
i
g
˜
m
A
im
B
nj
g
mn
g
˜
i
g
˜
j
A
im
B
mj
g
˜
i
g
˜
j
A
im
B
n
•j
g
mn
g
˜
i
g
˜
j
A
im
g
˜
i
g
˜
m
A
im
B
mj
g
˜
i
g
˜
j
A
im
B
•j
n
g
mn
g
˜
i
g
˜
j
A
•m
i
g
˜
i
g
˜
m
A
•m
i
B
•j
m
g
˜
i
g
˜
j
A
•m
i
B
n
•j
g
mn
g
˜
i
g
˜
j
A
i
•m
g
˜
i
g
˜
m
A
i
•m
B
mj
g
˜
i
g
˜
j
A
i
•m
B
nj
g
mn
g
˜
i
g
˜
j
B
˜
˜
T
B
˜
˜
u
˜
B
˜
˜
T
v
˜
• • v
˜
B
˜
˜
u
˜
• • = u
˜
v
˜
u
˜
=g
˜
i
v
˜
=g
˜
j
B
˜
˜
T
( )
ij
B
ji
=
B
˜
˜
T
( )
ij
B
ji
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 34
The mixed components of the transpose are more different. Namely, taking and ,
. (3.13)
Note that the high/low level of the indices remains unchanged. Only their ordering is
switched. Similarly,
. (3.14)
All of the above relations could have been derived in the following alternative manner: The
transpose of any dyad is simply . Therefore, knowing that the transpose of a sum is
the sum of the transposes and that any tensor can be written as a sum of dyads, we can write:
. (3.15)
The coefficient of is . Thus, , which is the same as Eq. (3.11).
3.4 Symmetric Tensors
A tensor is symmetric only if it equals its own transpose. Therefore, referring to
Eqs. (3.11) through (3.15) the components of a symmetric tensor satisfy
, , . (3.16)
The last relationship shows that the “dot placeholder” is unnecessary for symmetric tensors,
and we may write simply without ambiguity.
3.5 The identity tensor for a general basis.
The identity tensor is the unique symmetric tensor for which
for all vectors and . (3.17)
Since this must hold for all vectors, it must hold for any particular choice. Taking and
, we find that . The left-hand side represents the ij covariant compo-
nents of the identity, and the right hand side is . By taking and , it can be simi-
larly shown that the contravariant components of the identity are . By taking and
, the mixed components of the identity are found to equal . Thus,
(3.18)
This result further validates raising and lowering indices by multiplying by appropriate met-
ric coefficients — such an operation merely represents dotting by the identity tensor.
u
˜
=g
˜
i
v
˜
=g
˜
j
B
˜
˜
T
( )
i
•j
B
•i
j
=
B
˜
˜
T
( )
•j
i
B
j
•i
=
u
˜
v
˜
( )
T
v
˜
u
˜
B
˜
˜
T
B
ij
g
˜
i
g
˜
j
( )
T
B
ij
g
˜
i
g
˜
j
( )
T
B
ij
g
˜
j
g
˜
i
= = =
g
˜
i
g
˜
j
B
ij
B
˜
˜
T
( )
ji
B
ij
=
A
˜
˜
A
ij
A
ji
= A
ij
A
ji
= A
•j
i
A
j
•i
=
A
i
j
I
˜
˜
u
˜
I
˜
˜
v
˜
• • u
˜
v
˜
• = u
˜
v
˜
u
˜
=g
˜
i
v
˜
=g
˜
j
g
˜
i
I
˜
˜
g
˜
j
• • g
˜
i
g
˜
j
• =
g
ij
u
˜
=g
˜
i
u
˜
=g
˜
j
g
ij
u
˜
=g
˜
i
u
˜
=g
˜
j
δ
i
j
I
˜
˜
g
ij
g
˜
i
g
˜
j
g
ij
g
˜
i
g
˜
j
δ
i
j
g
˜
i
g
˜
j
δ
j
i
g
˜
i
g
˜
j
= = = =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 35
3.6 Eigenproblems and similarity transformations
The eigenproblem for a tensor requires the determination of all eigenvectors and
corresponding eigenvalues λ such that
. (3.19)
In this form, the eigenvector is called the “right” eigenvector. We can also define a “left”
eigenvector such that
. (3.20)
In other words, the left eigenvectors are the right eigenvectors of . The characteristic
equation for is the same as that for , so the eigenvalues are the same for both the
right and left eigenproblems. Consider a particular eigenpair :
. (no sum on k) (3.21)
Dot from the left by a left eigenvector , noting that (no sum on m). Then
. (no sum on k) (3.22)
Rearranging,
. (no sums) (3.23)
From this result we conclude that the left and right eigenvectors corresponding distinct
( ) eigenvalues are orthogonal. This motivates renaming the eigenvectors using dual
basis notation:
Rename .
Rename . (3.24)
The magnitudes of eigenvectors are arbitrary, so we can select normalization such that the
renamed vectors are truly dual bases:
. (3.25)
Nonsymmetric tensors might not possess a complete set of eigenvectors (i.e., the geometric
multiplicity of eigenvalues might be less than their algebraic multiplicity). If, however, the
tensor happens to possess a complete (or “spanning”) set of eigenvectors, then those
eigenvectors form an acceptable basis. The mixed components of with respect to this
principal basis are then
. (no sum on j) (3.26)
Ah! Whenever a tensor possesses a complete set of eigenvectors, it is diagonal in its mixed
T
˜
˜
u
˜
T
˜
˜
u
˜
• λu
˜
=
u
˜
v
˜
v
˜
T
˜
˜
• λv
˜
=
T
˜
˜
T
T
˜
˜
T
T
˜
˜
λ
k
u
˜
k
, ( )
T
˜
˜
u
˜
k
• λ
k
u
˜
k
=
v
˜
m
v
˜
m
T
˜
˜
• λ
m
v
˜
m
=
λ
m
v
˜
m
u
˜
k
• λ
k
v
˜
m
u
˜
k
• =
λ
m
λ
k
– ( ) v
˜
m
u
˜
k
• ( ) 0 =
λ
m
λ
k

u
˜
k
p
˜
k
=
v
˜
m
p
˜
m
=
p
˜
m
p
˜
k
• δ
k
m
=
T
˜
˜
T
˜
˜
T
•j
i
p
˜
i
T
˜
˜
p
˜
j
• • p
˜
i
λ
j
p
˜
j
( ) • λ
j
δ
j
i
= = =
T
˜
˜
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 36
principal basis! Stated differently,
. (3.27)
or
. (3.28)
In matrix analysis books, this result is usually presented as a similarity transformation. As
was done in Eqs. (2.2) and (2.32), we can invoke the existence of a basis transformation tensor
, such that
and . (3.29)
The columns of the matrix of with respect to the laboratory basis are simply
the right eigenvectors expressed in the lab basis:
. (3.30)
With Eq. (3.29), the diagonalization result (3.28) can be written as similarity transformation.
Namely,
= , (3.31)
where
. (3.32)
T
•j
i
| |
λ
1
0 0
0 λ
2
0
0 0 λ
3 p
˜
i
p
˜
j
=
T
˜
˜
λ
k
p
˜
k
p
˜
k
k 1 =
3

=
F
˜
˜
p
˜
k
F
˜
˜
e
˜
k
• = p
˜
k
F
˜
˜
T –
e
˜
k
• =
F
˜
˜
e
˜
1
e
˜
2
e
˜
3
, , { }
F
˜
˜
| |
e
˜
e
˜
p
˜
1
{ }
e
˜
p
˜
2
{ }
e
˜
p
˜
3
{ }
e
˜
| | =
T
˜
˜
λ
k
F
˜
˜
e
˜
k
• ( )
F
˜
˜
T –
e
˜
k
• ( )
k 1 =
3

=
F
˜
˜
Λ
˜
˜
F
˜
˜
1 –
• •
Λ
˜
˜
λ
k
e
˜
k
e
˜
k
k 1 =
3

λ
1
0 0
0 λ
2
0
0 0 λ
3 e
˜
k
e
˜
k
→ ≡
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 37
Partial Answer: .
The above discussion focused on eigenproblems. We now mention the more general relation-
ship between similar components and similarity transformations. When two tensors and
possess the same components but with respect to different mixed bases, i.e., when
and , (identical mixed components) (3.34)
then the tensors are similar. In other words, there exists a transformation tensor such that
and therefore , (3.35)
In the context of continuum mechanics, the transformation tensor is typically the defor-
mation gradient tensor. It is important to have at least a vague notion of this concept in order
to communicate effectively with researchers who prefer to do all analyses in general curvilin-
ear coordinates. To them, the discovery that two tensors have identical mixed components
with respect to different bases has great significance, whereas you might find it more mean-
ingful to recognize this situation as merely a similarity transformation.
Study Question 3.3 Consider a tensor having components with respect to the orthonor-
mal laboratory basis given by
. (3.33)
Prove that this tensor has a spanning set of eigenvectors . Find the labora-
tory components of the basis transformation tensor , such that . Also
verify Eq. (3.31) that is similar to a tensor that is diagonal in the laboratory basis,
with the diagonal components being equal to the eigenvalues of .
T
˜
˜
| |
2 1 2 –
1 2 2 –
0 0 2
e
˜
k
e
˜
k
=
p
˜
1
p
˜
2
p
˜
3
, , { }
F
˜
˜
p
˜
k
F
˜
˜
e
˜
k
• =
T
˜
˜
T
˜
˜
F
˜
˜
| |
1 2 1
1 – 2 1
0 1 0
=
T
˜
˜
S
˜
˜
T
˜
˜
α
•j
i
g
˜
i
g
˜
j
=
S
˜
˜
α
•j
i
G
˜
i
G
˜
j
=
F
˜
˜
g
˜
k
F
˜
˜
G
˜
k
• =
T
˜
˜
F
˜
˜
S
˜
˜
F
˜
˜
1 –
• • =
F
˜
˜
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 38
3.7 The alternating tensor
When using a nonorthonormal or non-right-handed basis, it is a good idea to use a differ-
ent symbol for the alternating tensor so that the permutation symbol may retain its
usual meaning. As we did for the Kronecker delta, we will assign the same meaning to the
permutation symbol regardless of the contra/covariant level of the indices. Namely,
= . (3.39)
Important: These quantities are all defined the same regardless of the contra- or co- level at
which the indices are placed. The above defined permutation symbols are the components of
the alternating tensor with respect to any regular right-handed orthonormal laboratory basis,
so they do not transform co/contravariant level via the metric tensors. It is not allowed to
raise or lower indices on the ordinary permutation symbol with the metric tensors. A similar
situation was encountered in connection with the Kronecker delta of Eq. (2.10).
Through the use of the permutation symbol, the three formulas written explicitly in Eq.
(2.15) can be expressed compactly as a single indicial equation:
. (3.40)
In terms of an orthonormal right-handed basis, the alternating tensor is
. (3.41)
Study Question 3.4 Suppose two tensors and possess the same contravariant
components with respect to different bases. In other words,
and (same contravariant components) (3.36)
Demonstrate by direct substitution that
, (3.37)
where is a basis transformation tensor defined such that
. (3.38)
T
˜
˜
S
˜
˜
T
˜
˜
β
ij
g
˜
i
g
˜
j
=
S
˜
˜
β
ij
G
˜
i
G
˜
j
=
T
˜
˜
F
˜
˜
S
˜
˜
F
˜
˜
T
• • =
F
˜
˜
g
˜
k
F
˜
˜
G
˜
k
• =
ξ
˜
˜
˜
ε
ijk
ε
ijk
ε
ijk
ε
ij•
••k
etc. , , ,
+1 if ijk 123 231 312 , , =
–1 if ijk 321 213 132 , , =
0 otherwise
¹
¦
´
¦
¦
ε
ijm
g
˜
m
1
J
--- g
˜
i
g
˜
j
× ( ) =
ξ
˜
˜
˜
ε
ijk
e
˜
i
e
˜
j
e
˜
k
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 39
In terms of a general basis, the basis form for the alternating tensor is
, (3.42)
where
, , , etc. (3.43)
Using Eq. (3.40),
. (3.44)
Thus, simplifying the last expression,
. (3.45)
Hence, the covariant alternating tensor components simply equal to the permutation symbol
times the Jacobian. This result could have been derived in the following alternative way: Sub-
stituting Eq. (2.1) into Eq. (3.43) and using the direct notation definition of determinant gives
= [ , , ] = . (3.46)
ξ
˜
˜
˜
ξ
ijk
g
˜
i
g
˜
j
g
˜
k
ξ
ijk
g
˜
i
g
˜
j
g
˜
k
ξ
ij•
••k
g
˜
i
g
˜
j
g
˜
k
etc. = = = =
ξ
ijk
g
˜
i
g
˜
j
g
˜
k
, , | | = ξ
ijk
g
˜
i
g
˜
j
g
˜
k
, , | | = ξ
ij•
••k
g
˜
i
g
˜
j
g
˜
k
, , | | =
ξ
ijk
g
˜
i
g
˜
j
× ( ) g
˜
k
• Jε
ijm
g
˜
m
( ) g
˜
k
• Jε
ijm
δ
k
m
= = =
ξ
ijk

ijk
=
ξ
ijk
F
˜
˜
e
˜
i

F
˜
˜
e
˜
j

F
˜
˜
e
˜
k
• J e
˜
i
e
˜
j
e
˜
k
, , | | Jε
ijk
=
Study Question 3.5 Use Eq. (2.32), in Eq. (3.43) to prove that
(3.47) ξ
ijk
1
J
--- ε
ijk
=
Study Question 3.6 Consider a basis that is orthonormal but left-handed (see Eq. 2.8).
Prove that
. (3.48)
This is why some textbooks claim to “define” the permutation symbol to be its negative
when the basis is left-handed. We do not adopt this practice. We keep the permutation
symbol unchanged. For a left-handed basis, the permutation symbol stays the same, but
the components of the alternating tensor change sign.
ξ
ijk
ε
ijk
– =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 40
3.8 Vector valued operations
CROSS PRODUCT In direct notation, the cross product is defined
. (3.49)
Hence,
. (3.50)
Alternatively,
. (3.51)
Axial vectors Some textbooks choose to identify certain vectors such as angular velocity as
“different” because, according to these books, they take on different form in a left-handed
basis. This viewpoint is objectionable since it intimates that physical quantities are affected by
the choice of bases. Those textbooks handle these supposedly different “axial” vectors by
redefining the left-handed cross-product to be negative of the right-hand definition. Malvern
suggests alternatively redefining the permutation symbol to be its negative for left-hand
basis. Malvern’s suggested sign change is handled automatically by defining the alternating
tensor as we have above. Specifically Eqs. (3.44) and (2.8) show that the alternating tensor
components will automatically change sign for left-handed systems. Hence, with this
approach, there is no need for special formulas for left-hand bases.
3.9 Scalar valued operations
TENSOR INNER (double dot) PRODUCT The inner product between two dyads, and
is a scalar defined
(3.52)
When applied to base vectors, this result tells us that
etc. (3.53)
The “etc” stands for the many other ways we could possibly mix up the level of the indices;
the basic trend should be clear.
The inner product between two tensors, and , is defined to be distributive over addi-
tion. In other words, each tensor can be expanded in terms of known components times base
vectors and then Eq. (3.53) can be applied to the base vectors. If, for example, the mixed
u
˜
v
˜
× ξ: u
˜
v
˜
=
~
~
~
u
˜
v
˜
× ξ
ijk
u
j
v
k
g
˜
i
=
u
˜
v
˜
× ξ
ijk
u
j
v
k
g
˜
i
=
ξ
ijk
a
˜
b
˜
r
˜
s
˜
a
˜
b
˜
( ): r
˜
s
˜
( ) a
˜
r
˜
• ( ) b
˜
s
˜
• ( ) ≡
g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) g
im
g
jn
= g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) δ
i
m
δ
j
n
=
g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) δ
m
i
δ
n
j
= g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) g
im
g
jn
=
g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) g
im
δ
n
j
= g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) δ
i
m
g
jn
=
A
˜
˜
B
˜
˜
A
•j
i
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 41
components of and the contravariant components of are known, then the inner
product between and is
(3.54)
Stated differently, to find the inner product between two tensors, and , you should con-
tract
1
the adjacent indices pairwise so that the first index in is contracted with the first index
in and the second index in is contracted with the second index in . To clarify, here are
some expressions for the tensor inner product for various possibilities of which components
of each tensor are available:
(3.55)
Note that
(3.56)
where “tr” is the trace operation.
TENSOR MAGNITUDE The magnitude of a tensor is defined
(3.57)
Based on the result of Eq. (3.55), note that the magnitude of a tensor is not found by simply
summing the squares of the tensor components (and rooting the result). Instead, the tensor
magnitude is computed by rooting the summation running over each component of multi-
plied by its dual component. If, for example, the components are known, then the dual
components must be computed by so that
(3.58)
TRACE The trace an operation in which the two base vectors of a second order tensor are
contracted
2
together, resulting in a scalar. In direct notation, the trace of a tensor can be
defined . There are four ways to write the tensor :
. (3.59)
The double dot operation is distributive over addition. Furthermore, for any vectors and ,
. Therefore, the trace is found by contracting the base vectors to give
, (3.60)
1. The definition of index “contraction” is given on page 31.
2. The definition of index “contraction” is given on page 31.
A
˜
˜
B
mn
B
˜
˜
A
˜
˜
B
˜
˜
A
˜˜
:B
˜˜
A
•j
i
g
˜
i
g
˜
j
( ): B
mn
g
˜
m
g
˜
n
( ) A
•j
i
B
mn
g
˜
i
g
˜
j
( ): g
˜
m
g
˜
n
( ) A
•j
i
B
mn
g
im
δ
n
j
A
•j
i
B
mj
g
im
= = = =
A
˜
˜
B
˜
˜
A
˜
˜
B
˜
˜
A
˜
˜
B
˜
˜
A
˜
˜
: B
˜
˜
A
ij
B
ij
A
ij
B
ij
A
•j
i
B
mj
g
im
A
•j
i
B
i
•j
A
ij
B
mn
g
im
g
jn
… = = = = = =
A
˜
˜
: B
˜
˜
tr A
˜
˜
T
B
˜
˜
• ( ) tr A
˜
˜
B
˜
˜
T
• ( ) = =
A
˜
˜
A
˜
˜
A
˜
˜
:A
˜
˜
=
A
˜
˜
A
•j
i
A
i
•j
A
i
•j
g
im
A
•n
m
g
jn
=
A
˜
˜
: A
˜
˜
A
•j
i
A
i
•j
A
•j
i
A
•n
m
( )g
im
g
jn
= =
B
˜
˜
trB
˜
˜
I
˜
˜
: B
˜
˜
= B
˜
˜
B
˜
˜
B
ij
g
˜
i
g
˜
j
B
ij
g
˜
i
g
˜
j
B
•j
i
g
˜
i
g
˜
j
B
i
•j
g
˜
i
g
˜
j
= = = =
u
˜
v
˜
I
˜
˜
: u
˜
v
˜
( ) u
˜
v
˜
• =
trB
˜
˜
B
ij
g
˜
i
g
˜
j
• B
ij
g
˜
i
g
˜
j
• B
•j
i
g
˜
i
g
˜
j
• B
i
•j
g
˜
i
g
˜
j
• = = = =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 42
or
. (3.61)
Note that the last two expressions are the closest in form to the familiar formula for the trace
in orthonormal bases.
TRIPLE SCALAR-VALUED VECTOR PRODUCT In direct notation,
. (3.62)
Thus, applying Eq. (3.50),
. (3.63)
DETERMINANT The determinant of a tensor is defined in direct notation as
for all vectors . (3.64)
Recalling that the triple scalar product is defined we may apply
previous formulas for the dot and cross product to conclude that
, (3.65)
Recalling Eqs. (3.44) and (3.47), we may introduce the ordinary permutation symbol to write
this result as
. (3.66)
Now that we are using the ordinary permutation symbol, we may interpret this result as a
matrix equation. Specifically, the left hand side represents the determinant of the contravari-
ant component matrix times the permutation symbol . Therefore, the above equation
implies that . Recalling Eq. (2.23),
. (3.67)
Similarly, it can be shown that
. (3.68)
In other words, if you have the matrix of covariant components, you compute the determi-
nant of the tensor by finding the determinant of the matrix and multiplying the result
by .
Suppose the components of are known in mixed form. Then Eq. (3.64) gives
. (3.69)
trB
˜
˜
B
ij
g
ij
B
ij
g
ij
B
•k
k
B
k
•k
= = = =
u
˜
v
˜
w
˜
, , | | u
˜
v
˜
w
˜
× ( ) • ≡
u
˜
v
˜
w
˜
, , | | ξ
ijk
u
i
v
j
w
k
=
T
˜
˜
T
˜
˜
u
˜
• T
˜
˜
v
˜
• T
˜
˜
w
˜
• , , | | detT
˜
˜
u
˜
v
˜
w
˜
, , | | = u
˜
v
˜
w
˜
, , { }
u
˜
v
˜
w
˜
, , | | u
˜
v
˜
w
˜
× ( ) • =
ξ
ijk
T
ip
T
jq
T
kr
detT
˜
˜
( )ξ
pqr
=
ε
ijk
T
ip
T
jq
T
kr
detT
˜
˜
J
2
----------- ε
pqr
=
T
ij
ε
pqr
det T
ij
| | detT
˜
˜
( ) J
2
⁄ =
detT
˜
˜
g
o
det T
ij
| | =
detT
˜
˜
g
o
det T
ij
| | =
T
˜
˜
T
ij
| |
g
o
T
˜
˜
ξ
ijk
T
•p
i
T
•q
j
T
•r
k
detT
˜
˜
ξ
pqr
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 43
Recalling Eqs. (3.45), we may introduce the ordinary permutation symbol. Noting that the J’s
on each side cancel, the above result as
. (3.70)
Therefore, the determinant of the tensor is equal to the determinant of the matrix of mixed
components:
. (3.71)
Likewise, it can be shown that
. (3.72)
3.10 Tensor-valued operations
TRANSPOSE We have already discussed one tensor-valued operation, the transpose. Spe-
cifically, consider a tensor expressed in one of its four possible ways:
. (3.73)
In Section 3.3, we showed that the transpose may be obtained by transposing the matrix base
dyads.
. (3.74)
Most people are more accustomed to thinking of transposing components rather than base
vectors. Referring to the above result, we note that
, or (3.75a)
, or (3.75b)
, or (3.75c)
, or (3.75d)
Using our matrix notational conventions, these equations would be written
. (3.76a)
. (3.76b)
. (3.76c)
. (3.76d)
Here, we have intentionally used different indices ( and ) on the left-hand-side to remind
the reader that matrix equations are not subject to the same index rules. The indices are
present within the matrix brackets only to indicate to the reader which components (covari-
ant, contravariant, or mixed) are contained in the matrix.
ε
ijk
T
•p
i
T
•q
j
T
•r
k
detT
˜
˜
ε
pqr
=
T
˜
˜
detT
˜
˜
det T
•j
i
| | =
detT
˜
˜
det T
i
•j
| | =
T
˜
˜
T
ij
g
˜
i
g
˜
j
T
ij
g
˜
i
g
˜
j
T
•j
i
g
˜
i
g
˜
j
T
i
•j
g
˜
i
g
˜
j
= = = =
T
˜
˜
T
T
ij
g
˜
j
g
˜
i
T
ij
g
˜
j
g
˜
i
T
•j
i
g
˜
j
g
˜
i
T
i
•j
g
˜
j
g
˜
i
= = = =
T
˜
˜
T
( )
ji
T
ij
= g
˜
j
T
˜
˜
T
g
˜
i
• • g
˜
i
T
˜
˜
g
˜
j
• • =
T
˜
˜
T
( )
ji
T
ij
= g
˜
j
T
˜
˜
T
g
˜
i
• • g
˜
i
T
˜
˜
g
˜
j
• • =
T
˜
˜
T
( )
j
•i
T
•j
i
= g
˜
j
T
˜
˜
T
g
˜
i
• • g
˜
i
T
˜
˜
g
˜
j
• • =
T
˜
˜
T
( )
•i
j
T
i
•j
= g
˜
j
T
˜
˜
T
g
˜
i
• • g
˜
i
T
˜
˜
g
˜
j
• • =
T
˜
˜
T
( )
mn
| | T
ij
| |
T
=
T
˜
˜
T
( )
mn
| | T
ij
| |
T
=
T
˜
˜
T
( )
m
•n
| | T
•j
i
| |
T
=
T
˜
˜
T
( )
•n
m
| | T
i
•j
| |
T
=
m n
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 44
Equation (3.76a) says that the matrix of contravariant components of are obtained by
taking the transpose of the matrix of contravariant components of the original tensor .
Similarly, Eq. (3.76b) says that the covariant component matrix for is just the transpose of
the covariant matrix for .
The mixed component formulas are more tricky. Equation (3.76c) states that the low-high
mixed components of are obtained by taking the transpose of the high-low components
of . Likewise, Eq. (3.76d) states that the high-low mixed components of are obtained by
taking the transpose of the low-high components of .
INVERSE Consider a tensor . Let . Then
. (3.77)
Hence
. (3.78)
Hence, the contravariant components of are obtained by inverting the matrix of covari-
ant components.
Alternatively note a different component form of Eq. (3.77) is
. (3.79)
This result states that the high-low mixed components of are obtained by inverting the
high-low mixed matrix.
The formulas for inverses may be written in our matrix notation as
. (3.80a)
. (3.80b)
. (3.80c)
. (3.80d)
COFACTOR Without proof, we claim that similar methods can be used to show that the
matrix of high-low mixed components of are the cofactor of the low-high matrix.
Note that the components come from the matrix. The contravariant matrix for will
equal the cofactor of the covariant matrix for times the metric scale factor . The full set of
T
˜
˜
T
| |
T
˜
˜
| |
T
˜
˜
T
T
˜
˜
T
˜
˜
T
T
˜
˜
T
˜
˜
T
T
˜
˜
T
˜
˜
U
˜
˜
T
˜
˜
1 –
=
T
˜
˜
U
˜
˜
• I
˜
˜
=
T
ik
U
kj
δ
i
j
=
T
˜
˜
1 –
T
ij
T
•k
i
U
•j
k
δ
j
i
=

•j
i
T
˜
˜
1 –
T
•j
i
| |
T
˜
˜
1 –
( )
mn
| | T
ij
| |
1 –
=
T
˜
˜
1 –
( )
mn
| | T
ij
| |
1 –
=
T
˜
˜
1 –
( )
m
•n
| | T
˜
˜
1 –
( )
i
•j
| |
1 –
=
T
˜
˜
1 –
( )
•n
m
| | T
˜
˜
1 –
( )
•j
i
| |
1 –
=

•j
i
T
˜
˜
C
T
i
•j
| |

•j
i

i
•j
T
˜
˜
C
T
˜
˜
g
o
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 45
formulas is
. (3.81a)
. (3.81b)
. (3.81c)
. (3.81d)
DYAD By direct expansion, . Thus,
. (3.82)
Similarly,
. (3.83)
. (3.84)
. (3.85)
T
˜
˜
C
( )
mn
| | g
o
T
ij
| |
C
=
T
˜
˜
C
( )
mn
| | g
o
T
ij
| |
C
=
T
˜
˜
C
( )
m
•n
| | T
•j
i
| |
C
=
T
˜
˜
C
( )
•n
m
| | T
i
•j
| |
C
=
u
˜
v
˜
u
i
g
˜
i
( ) v
j
g
˜
j
( ) u
i
v
j
g
˜
i
g
˜
j
= =
u
˜
v
˜
( )
ij
u
i
v
j
=
u
˜
v
˜
( )
ij
u
i
v
j
=
u
˜
v
˜
( )
•j
i
u
i
v
j
=
u
˜
v
˜
( )
i
•j
u
i
v
j
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 46
4. Basis and Coordinate transformations
This chapter discusses how vector and tensor components transform when the basis changes.
This chapter may be skipped without any negative impact on the understandability of later chapters.
Chapter 3 had focused exclusively on the implications of using an irregular basis. For vec-
tor and tensor algebra (Chapter 3), all that matters is the possibility that the basis might be
non-normalized and/or non-right-handed and/or non-orthogonal. For vector and tensor
algebra, it doesn’t matter whether or not the basis changes from one point in space to another
— that’s because algebraic operations are always applied at a single location. By contrast, dif-
ferential operations such as the spatial gradient (discussed later in Chapter 5) are dramatically
affected by whether or not the basis is fixed or spatially varying. This present chapter pro-
vides a gentle transition between the topics of Chapter 3 to the topics of Chapter 4 by outlin-
ing the transformation rules that govern how components of a vector with respect to one
irregular basis will change if a different basis is used at that same point in space.
Throughout this document, we have asserted that any vector can (and should) be
regarded as invariant in the sense that the vector itself is unchanged upon a change in basis.
The vector can be expanded as a sum of components times corresponding base vectors :
(4.1)
It’s true that the individual components will change if the basis is changed, but the sum of
components times base vectors will be invariant. Consequently, a vector’s components must
change in a very specific way if a different basis is used.
Basis transformation discussions are complicated and confusing because you have to con-
sider two different systems at the same time. Since each system has both a covariant and a
contravariant basis, talking about two different systems entails keeping track of four different
basis triads. To help with this book-keeping nightmare, we will now refer to the first system
as the “A” system and the other system as the “B” system. Each contravariant index (which is
a superscript) will be accompanied by a subscript (either A or B) to indicate which system the
index refers to. Likewise, each covariant index (a subscript) will now be accompanied by a
superscript (A or B) to indicate the associated system. You should regard the system indicator
(A or B) to serve the same sort of role as the “dot placeholder” discussed on page 16 — they
are not indices. With this convention, we may now say
are the covariant base vectors for system-A (4.2)
are the contravariant base vectors for system-A (4.3)
are the covariant base vectors for system-B (4.4)
are the contravariant base vectors for system-B (4.5)
v
˜
v
i
g
˜
i
v
˜
v
i
g
˜
i
=
g
˜
1
A
g
˜
2
A
g
˜
3
A
, , { }
g
˜
A
1
g
˜
A
2
g
˜
A
3
, , { }
g
˜
1
B
g
˜
2
B
g
˜
3
B
, , { }
g
˜
B
1
g
˜
B
2
g
˜
B
3
, , { }
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 47
The system flag, A or B, acts as a “dot-placeholder” and moves along with its associated index
in operations. Now that we are dealing with up to four basis triads, the components of vectors
must also be flagged to indicate the associated basis. Hence, depending on which of the above
four bases are used, the basis expansion of a vector can be any of the following:
(4.6)
(4.7)
(4.8)
(4.9)
Keep in mind that the system label (A or B) is to be regarded as a “dot-placeholder,” not an
implicitly summed index. The metric coefficients for any given system are defined in the
usual way and the Kronecker delta relationship between contravariant and covariant base
vectors within a single system still holds. Specifically,
(4.10)
(4.11)
(4.12)
(4.13)
In previous chapters (which dealt with only a single system), we showed that the matrix con-
taining the components could be obtained by inverting the matrix. This relationship
still holds within each individual system (A or B) listed above. Namely,
(4.14)
(4.15)
Now that we are considering two systems simultaneously, we can further define new
matrices that inter-relate the two systems:
(4.16a)
(4.16b)
(4.16c)
(4.16d)
v
˜
v
A
i
g
˜
i
A
=
v
˜
v
i
A
g
˜
A
i
=
v
˜
v
B
i
g
˜
i
B
=
v
˜
v
B
i
g
˜
i
B
=
g
ij
AA
g
˜
i
A
g
˜
j
A
• = g
iA
Aj
g
˜
i
A
g
˜
A
j
• δ
i
j
= =
g
Aj
iA
g
˜
A
i
g
˜
j
A
• δ
j
i
= = g
AA
ij
g
˜
A
i
g
˜
A
j
• =
g
ij
BB
g
˜
i
B
g
˜
j
B
• = g
iB
Bj
g
˜
i
B
g
˜
B
j
• δ
i
j
= =
g
Bj
iB
g
˜
B
i
g
˜
j
B
• δ
j
i
= = g
BB
ij
g
˜
B
i
g
˜
B
j
• =
g
ij
g
ij
| |
g
ik
AA
g
AA
kj
g
iA
Aj
δ
i
j
= =
g
ik
BB
g
BB
kj
g
iB
Bj
δ
i
j
= =
g
ij
AB
g
˜
i
A
g
˜
j
B
• = g
AB
ij
g
˜
A
i
g
˜
B
j
• =
g
ij
BA
g
˜
i
B
g
˜
j
A
• = g
BA
ij
g
˜
B
i
g
˜
A
j
• =
g
Aj
iB
g
˜
A
i
g
˜
j
B
• = g
iB
Aj
g
˜
i
A
g
˜
B
j
• =
g
Bj
iA
g
˜
B
i
g
˜
j
A
• = g
iA
Bj
g
˜
i
B
g
˜
A
j
• =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 48
Partial Answer: (a) , , , .
(b) , , ,
(c) , , , ,
, , ,
From the commutativity of the dot product, Eq. (4.16) implies that
, , , and . (4.17)
System metrics (i.e., “g” matrices that involve AA or BB) are components of a tensor (the iden-
tity tensor). Coupling matrix components (i.e., ones that involve A and B) are not components
of a tensor — instead, a coupling matrix characterizes interrelationship between two bases.
Study Question 4.1 Consider the following irregular base vectors (the A-system):
and
Additionally consider a second B-system:
and
(a) Find the contravariant and covariant met-
rics for each individual system.
(b) Find the contravariant basis for each indi-
vidual system.
(c) Directly apply Eq. (4.16) to find the cou-
pling matrices. Then verify that .
g
˜
1
A
e
˜
1
e
˜
2
g
˜
1
B
g
˜
2
B
g
˜
2
A
g
˜
1
A
e
˜
1
2e
˜
2
+ =
g
˜
2
A
e
˜
1
– e
˜
2
+ = g
˜
3
A
e
˜
3
=
g
˜
1
B
2e
˜
1
e
˜
2
+ =
g
˜
2
B
e
˜
1
– 4e
˜
2
+ = g
˜
3
B
e
˜
3
=
g
i j
AB
| | g
ij
BA
| |
T
=
g
i j
AA
| |
5 1 0
1 2 0
0 0 1
= g
AA
ij
| |
2 9 ⁄ 1 9 ⁄ – 0
1 9 ⁄ – 5 9 ⁄ 0
0 0 1
= g
ij
BB
5 2 0
2 17 0
0 0 1
= g
ij
AA
17 81 ⁄ 1 81 ⁄ – 0
1 81 ⁄ – 5 81 ⁄ 0
0 0 1
=
g
˜
A
1
1
3
---e
˜
1
1
3
---e
˜
2
+ = g
˜
A
2
2 –
3
------e
˜
1
1
3
---e
˜
2
+ = g
˜
B
1
4
9
---e
˜
1
1
9
---e
˜
2
+ = g
˜
B
2
1 –
9
------e
˜
1
2
9
---e
˜
2
+ =
g
i j
AB
| |
4 7 0
1 – 5 0
0 0 1
= g
i j
BA
| |
4 1 – 0
7 5 0
0 0 1
= g
Bj
iA
| |
2 3 ⁄ 1 3 ⁄ – 0
1 3 ⁄ 1 3 ⁄ 0
0 0 1
= g
i B
Aj
| |
2 3 ⁄ 1 3 ⁄ 0
1 3 ⁄ – 1 3 ⁄ 0
0 0 1
=
g
AB
ij
| |
5 27 ⁄ 1 27 ⁄ 0
7 27 ⁄ – 4 27 ⁄ 0
0 0 1
= g
BA
ij
| |
5 27 ⁄ 7 27 ⁄ – 0
1 27 ⁄ 4 27 ⁄ 0
0 0 1
= g
Aj
iB
| |
1 1 0
1 – 2 0
0 0 1
= g
i A
Bj
| |
1 1 – 0
1 2 0
0 0 1
=
g
ij
AB
g
ji
BA
= g
AB
ij
g
BA
ij
= g
Aj
iB
g
jA
Bi
= g
i B
A j
g
Bi
jA
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 49
The first relationship in Eq. (4.17) does not say that the matrix containing is symmet-
ric. Instead this equation is saying that the matrix containing may be obtained by the
transpose of the different matrix that contains . Given a “g” matrix of any type, you can
transform it immediately to other types of g-matrices by using the following rules:
•To exchange system labels (A or B) left-right, apply a transpose.
•To exchange system labels (A or B) up-down, apply an inverse.
•To exchange system labels (A or B) along a diagonal, apply an inverse transpose.
For example, if is available, then
, (4.18)
Similarly, if is available, then
, (4.19)
Here, and in all upcoming matrix equations involving “g” components, a star (*) is inserted
where indices normally reside in indicial equations. These equations show how to move two
system labels simultaneously.
To move only one system label to a new location, you need a matrix product. In matrix
products, abutting system labels must be on opposite levels (the system labels are not
summed — index rules apply only to indicial expressions, not to matrix expressions. For
example, what operation would you need to move only the “B” label in from the bot-
tom to the top? The answer is that you need to use the B-system metric as follows:
(this is the matrix form of ) (4.20)
This behavior also applies to transforming bases. For example,
(4.21)
All of the “g” matrices may be obtained only from knowledge of one system metric (same sys-
tem labels) and one coupling matrix (different system labels).
g
ij
AB
g
ij
AB
g
ij
BA
g
ij
AB
g
**
BA
| | g
**
AB
| |
T
=
left-right
g
AB
**
g
**
AB
| |
1 –
=
up-down
g
BA
**
g
**
AB
| |
T –
=
diagonal
g
iB
Aj
g
B*
*A
| | g
*B
A*
| |
T
=
left-right
g
A*
*B
g
*B
A*
| |
1 –
=
up-down
g
*A
B*
g
*B
A*
| |
T –
=
diagonal
g
B*
*A
| |
g
**
BA
| | g
**
BB
| | g
B*
*A
| | = g
ij
BA
g
ik
BB
g
Bj
kA
=
g
˜
i
A
g
iB
Ak
g
˜
k
B
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 50
Partial Answer: (a) We are given a regular metric and a mixed coupling matrix. Start by expressing
as the product of any AB matrix times a BB metric times a BA matrix: .
We now need to get expressed in terms of the given coupling matrix . In other words, we
need to swap A and B along the diagonal, which requires an inverse-transpose: . Next,
we need to express the coupling matrix in terms of the given coupling matrix . This requires
only an up-down motion of the system labels, so that’s just an inverse: . Putting it all
back in the first equation gives
(b) . The matrix in this relation is . Therefore
, , and .
(c) is similar.
Study Question 4.2 Suppose you are studying a problem involving two bases, A and
B, and you know one system metric and one coupling matrix as follows:
and (4.22)
(a) Find and
(b) Express the base vectors in terms of the base vectors.
(b) Express the base vectors in terms of the base vectors.
g
ij
BB
g
iA
Bj
g
**
BB
| |
4 2 3
2 4 1 –
3 1 – 5
= g
*A
B*
| |
1 2 4 –
0 1 – 2
3 0 2 –
=
g
**
AA
| | g
AA
**
| |
g
˜
i
A
g
˜
k
B
g
˜
A
i
g
˜
k
B
g
**
AA
| | g
**
AA
| | g
*B
A*
| | g
**
BB
| | g
B*
*A
| | =
g
*B
A*
| | g
*A
B*
| |
g
*B
A*
| | g
*A
B*
| |
T –
=
g
B*
*A
| | g
*A
B*
| |
g
B*
*A
| | g
*A
B*
| |
1 –
=
g
**
AA
| | g
*A
B*
| |
T –
g
**
BB
| | g
*A
B*
| |
1 –
1 2 4 –
0 1 – 2
3 0 2 –
T –
4 2 3
2 4 1 –
3 1 – 5
1 2 4 –
0 1 – 2
3 0 2 –
19
2
------
5
2
---
57
2
------
5 11 49
9
2
--- –
1
2
---
15
2
------ –
= = =
g
˜
i
A
g
iB
Ak
g
˜
k
B
= g
*B
A*
| | g
*A
B*
| |
T –
1 3
3
2
---
2 5 3
0 1 –
1
2
--- –
= =
g
˜
1
A
g
˜
1
B
3g
˜
2
B
3
2
---g
˜
3
B
+ + = g
˜
2
A
2g
˜
1
B
5g
˜
2
B
3g
˜
3
B
+ + = g
˜
3
A
g
˜
2
B

1
2
---g
˜
3
B
– =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 51
We showed in previous sections how the covariant components of a vector are related to
the contravariant components from the same system. For example, we showed that
when considering only one system. When considering more than one basis at a time, this
result needs to explicitly show the system (A or B) flags:
(4.23)
(4.24)
These equations show how to transform components within a single system. To transform com-
ponents from one system to another, the formulas are
(4.25)
(4.26)
Equivalently,
(4.27)
(4.28)
Tensor transformations are similar. For example
and (4.29)
These formulas are constructed one index position at a time. In the last formula, for example,
the first free index position on the T is the pair . On the other side of the equation, the first
index pair on the T is . Because is a dummy summed index, the “g” must have . Thus,
the first “g” is a combination of the free index pair on the left side and the “flipped” index pair
on the right . The “T” in the above equations may be moved to the middle if you like it
better that way:
and (4.30)
If you wanted to write these as matrix equations, you can write them in a way that
dummy summed indices are abutting, keeping in mind that the g components are unchanged
when indices along with their system label are moved left-to-right. The g components require an
inverse on their matrix forms to move system labels up-down. Thus, the matrix forms for
these transformation formulas are
and (4.31)
Numerous matrix versions of component transformation formulas may be written. In this
case, we wrote the matrix expressions on the assumption that the known coupling and metric
matrices were and .
v
i
g
ij
v
j
=
v
i
A
g
ij
AA
v
A
j
=
v
i
B
g
ij
BB
v
B
j
=
v
i
A
g
iB
Ak
v
k
B
= v
i
A
g
ik
AB
v
B
k
=
v
A
i
g
AB
ik
v
k
B
= v
A
i
g
Ak
iB
v
B
k
=
v
i
A
v
k
B
g
Bi
kA
= v
i
A
v
B
k
g
ki
BA
=
v
A
i
v
k
B
g
BA
ki
= v
A
i
v
B
k
g
kA
Bi
=
T
ij
AA
T
pq
BB
g
Bi
pA
g
Bj
qA
= T
Bj
iB
T
AA
pq
g
pB
Ai
g
qj
AB
=
i
B
p
A
p
A
p
g
pB
Ai
T
ij
AA
g
Bi
pA
T
pq
BB
g
Bj
qA
= T
Bj
iB
g
pB
Ai
T
AA
pq
g
qj
AB
=
T
**
AA
| | g
B*
*A
| |
T
T
**
BB
| | g
B*
*A
| | = T
Bj
iB
g
B*
*A
| |
T
T
AA
**
| | g
B*
*A
| |
T –
g
**
AA
| | =
g
B*
*A
| | g
**
AA
| |
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 52
Partial Answer: (a) , , , , , , , .
(b) The first third and fourth results of part (a) imply that . Drawing grid
lines parallel to and , the path from the origin to the tip of vector may be reached by first tra-
versing along a line segment equal to , and then moving anti-parallel to to reach the tip.
(c) Using results from study question 4.1, becomes in matrix form .
Multiplying this out verifies that it is true.
(d) The answer is .
Study Question 4.3 Consider the same irregular base vectors in Study Question 4.1.
Namely,
and
and
and
Let .
(a) Determine the covariant and contravari-
ant components of with respect to sys-
tems A and B. (i.e., find ).
(b) Demonstrate graphically that your
answers to part (a) make sense.
(c) Verify the following:
(d) Fill in the question marks to make the following equation true:
e
ˆ
˜
1
e
ˆ
˜
2
v
˜
g
˜
1
A
g
˜
1
B
g
˜
2
B
g
˜
2
A
g
˜
1
A
e
˜
1
2e
˜
2
+ =
g
˜
2
A
e
˜
1
– e
˜
2
+ = g
˜
3
A
e
˜
3
=
g
˜
1
B
2e
˜
1
e
˜
2
+ =
g
˜
2
B
e
˜
1
– 4e
˜
2
+ = g
˜
3
B
e
˜
3
=
v
˜
3e
˜
1
3e
˜
2
+ =
v
˜
v
i
A
v
A
i
v
i
B
v
B
i
, , ,
v
i
A
g
ik
AB
v
B
k
= v
i
A
g
iB
Ak
v
k
B
=
v
A
i
g
Ak
iB
v
B
k
= v
A
i
g
AB
ik
v
k
B
=
v
i
B
g
??
??
v
?
A
=
v
1
A
9 = v
2
A
0 = v
A
1
2 = v
A
2
1 – = v
1
B
9 = v
2
B
9 = v
B
1
5
3
--- = v
B
2
1
3
--- =
v
˜
v
A
1
g
˜
1
A
v
A
2
g
˜
2
A
+ 2g
˜
1
A
g
˜
2
A
– = =
g
˜
1
A
g
˜
2
A
v
˜
2g
˜
1
A
g
˜
2
A
v
i
A
g
i k
AB
v
B
k
=
9
0
0
4 7 0
1 – 5 0
0 0 1
5 3 ⁄
1 3 ⁄
0
=
v
i
B
g
i A
Bk
v
k
A
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 53
Coordinate transformations A principal goal of this chapter is to show how the compo-
nents must be related to the components. We are seeking component transformations
resulting from a change of basis. Strictly speaking, coordinates may always be selected inde-
pendently from the basis. This, however, is rarely done. Usually the basis is taken to be
defined to be the one that is naturally defined by the coordinate system grid lines. The natural
covariant basis co-varies with the coordinates — i.e., each covariant base vector always points
tangent to the coordinate a grid line and has a length that varies in proportion to the grid
spacing density. The natural contravariant basis contra-varies with the grid — i.e., each con-
travariant base vector points normal to a surface of constant coordinate value. When people
talk about component transformations resulting from a change in coordinates, they implicitly
assuming that the basis is coupled to the choice of coordinates. There is an unfortunate
implicit assumption in many publications that vector and tensor components must be coupled
to the spatial coordinates. In fact, it’s fine to use a basis that is selected entirely independently
from the coordinates. For example, to study loads on a ferris wheel, you might decide to use a
fixed Cartesian basis that is aligned with gravity, while also using cylindrical coordi-
nates to identify points in space.
Spatial coordinates are any three numbers that uniquely define a location in space. For
example, Cartesian coordinates are frequently denoted , cylindrical coordinates are
, and so on. Any quantity that is known to vary in space can be written as a function
of the coordinates. Coordinates uniquely define the position vector , but coordinates are not
the same thing as components of the position vector. For example, the position vector in cylin-
drical coordinates is . Notice that position vector is not . The
components of the position vector are , not . The position vector has only
two nonzero components. It has no term. Does this mean that the position vector depends
only on and ? Nope. The position vector’s dependence on is buried implicitly in the
dependence of on .
Whether the basis is chosen to be coupled to the coordinates is entirely up to you. If conve-
nient to do so (as in the ferris wheel example), you might choose to use cylindrical coordinates
but a Cartesian basis so that the position vector would be written
. In this case, the chosen basis is the Cartesian laboratory
basis , which is entirely uncoupled from the coordinates .
To speak in generality, we will denote the three spatial coordinates by . If, for
example, cylindrical coordinates are used, then , , and . If spherical
coordinates are used, then , , and . If Cartesian coordinates are used,
then , , and .
v
B
i
v
A
i
r θ z , , ( )
x y z , , { }
r θ z , , { }
x
˜
x
˜
re
˜
r
ze
˜
z
+ = re
˜
r
θe
˜
θ
ze
˜
z
+ +
r zero z , , { } r θ z , , { }
θe
˜
θ
r z θ
e
˜
r
θ
x
˜
r θ cos ( )e
˜
x
r θ sin ( )e
˜
y
ze
˜
z
+ + =
e
˜
x
e
˜
y
e
˜
z
, , { } r θ z , , { }
η
1
η
2
η
3
, , { }
η
1
r = η
2
θ = η
3
z =
η
1
r = η
2
θ = η
3
ψ =
η
1
x = η
2
y = η
3
z =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 54
The position vector is truly a function of all three coordinates. The basis can
be selected independently from the choice of coordinates, However, we can always define the
natural basis [12] that is associated with the choice of coordinates by
(4.32)
The natural covariant base vector equals the derivative of the position vector with
respect to the coordinate. Hence, this base vector points in the direction of increasing
and it is tangent to the grid line along which the other two coordinates remain constant.
Examples of the natural basis for various coordinate systems are provided in Section 5.3. The
natural basis defined in Eq. (4.32) is clearly coupled to the coordinates themselves. Conse-
quently, a change in coordinates will result in a change of the natural basis. Thus, the compo-
nents of any vector expressed using the natural basis will change upon a change in
coordinates. Having the basis be coupled to the coordinates is a choice. One can alternatively
choose the basis to be uncoupled from the coordinates (see the discussion of non-holonomic-
ity in Ref. [12]).
Each particular value of the coordinates defines a unique location in space.
Conversely, each location in space corresponds to a unique set of coordinates.
That means the coordinates themselves can be regarded as functions of the position vector,
and we can therefore take the spatial gradients of the coordinates. As further explained in Sec-
tion 5.3, the contravariant natural base vector is defined to equal these coordinate gradients:
(4.33)
Being the gradient of , the contravariant base vector must point normal to surfaces of
constant . Proving that these are indeed the contravariant base vectors associated with the
covariant natural basis defined in Eq. (4.32) is simply a matter of applying the chain rule to
demonstrate that comes out to equal the Kronecker delta:
(4.34)
The metric coefficients corresponding to Eq. (4.32) are
(4.35)
Many textbooks present this result in pure component form by writing the vector in terms
of its cartesian components as so that the above expression becomes
for the natural basis (4.36)
η
1
η
2
η
3
, , { }
g
˜
i
∂x
˜
∂η
i
-------- ≡
i
th
g
˜
i
x
˜
i
th
η
i
η
1
η
2
η
3
, , { } x
˜
η
1
η
2
η
3
, , { }
g
˜
j
∂η
j
∂x
˜
-------- ≡
η
j
g
˜
j
η
j
g
˜
i
g
˜
j

g
˜
i
g
˜
j

∂x
˜
∂η
i
--------
∂η
j
∂x
˜
-------- •
∂η
j
∂η
i
-------- δ
i
j
= = =
g
ij
g
˜
i
g
˜
j

∂x
˜
∂η
i
--------
∂x
˜
∂η
j
-------- • = =
x
˜
x
˜
x
k
e
˜
k
=
g
ij
∂x
k
∂η
i
--------
∂x
k
∂η
j
--------
k 1 =
3

=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 55
When written this way, it’s important to recognize that are the Cartesian components of the
position vector (not the covariant components with respect to an irregular basis). That’s why
we introduced the summation in Eq. (4.36) -- otherwise, we would be violating the high-low
rule of the summation conventions.
Similarly, the contravariant metric coefficients are
for the natural basis (4.37)
These are the expressions for the metric coefficients when the natural basis is used. Let us reit-
erate that there is no law that says the basis used to described vectors and tensors must neces-
sarily be coupled to the choice of spatial coordinates in any way. We cite these relationships to
assist those readers who wish to make connections between our own present formalism and
other published discourses on curvilinear coordinates.
In the upcoming discussion of coordinate transformations, we will not assume that the
metric coefficients are given by the above expressions. Because our transformation equations
will be expressed in terms of metrics only, they will apply even when the chosen basis is not
the natural basis. When we derive the component transformation rules, we will include both
the general basis transformation expression and its specialization to the case that the basis is
chosen to be the natural basis.
4.1 What is a vector? What is a tensor?
The bane of many rookie graduate students is the common qualifier question: “what is a
vector?” The frustration stems, in part, from the fact that the “correct” answer depends on
who is doing the asking. Different people have different answers to this question. More than
likely, the professor follows up with the even harder question “what is a tensor?”
Definition #0 (used for undergraduates): A college freshman is typically told that a vector
is something with length and direction, and nothing more is said (certainly no mention is
made of tensors!). The idea of a tensor is tragically withheld from most undergraduates.
Definition #1 (classical, but our least favorite): Some professors merely want their stu-
dent to say that there are two kinds of vectors: A contravariant vector is a set of three numbers
that transform according to when changing from an “A-basis” to a
“B-basis”, whereas a covariant vector is a set of three numbers that transform
such that . These professors typically wish for their students to define four differ-
ent kinds of tensors, the contravariant tensor being a set of nine numbers that transform
x
k
g
ij
∂η
i
∂x
k
--------
∂η
j
∂x
k
--------
k 1 =
3

=
v
1
v
2
v
3
, , { } v
B
k
v
A
i
g
iB
Ak
=
v
1
v
2
v
3
, , { }
v
k
B
v
i
A
g
Ak
iB
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 56
according to , the mixed tensors being ones whose components transform
according to or , etc.
Definition #2 (our preference for ordinary engineering applications): We regard a vec-
tor to be an entity that exists independent of our act of observing it — independent of our act
of representing it quantitatively. Of course, to decide if something is an engineering vector,
we demand that it must nevertheless lend itself to a quantitative representation as a sum of
three components times
1
three base vectors , and we test whether or
not we have a vector by verifying that holds upon a change of basis. The base
vectors themselves are regarded as abstractions defined in terms of geometrical fundamental
primitives. Namely, the base vectors are regarded as directed line segments between points in
space, which are presumed (by axiom) to exist. The key distinction between definition #2 and
#1 is that definition #2 makes no distinction between covariant vectors and contravariant vec-
tors. Definition #2 instead uses the terms “covariant components” and “contravariant compo-
nents” — these are components of the same vector.
Recall that definition of a second-order tensor under our viewpoint required introduction of
new objects, called dyads, that could be constructed from vectors. From there, we asserted
that any collection of nine numbers could be assembled as a sum of these numbers times basis
dyads . A tensor would then be defined as any linear combination of tensor dyads for
which the coefficients of these dyads (i.e., the tensor components) follow the transformation
rules outlined in the preceding section.
Definition #3 (for mathematicians or for advanced engineering) Mathematicians define
vectors as being “members of a vector space.” A vector space must comprise certain basic components:
A1. A field R must exist. (For engineering applications, the field is the set of reals.)
A2. There must be a discerning definition of membership in a set V.
A3. There must be a rule for multiplying a scalar times a member of V. Furthermore, this
multiplication rule must be proved closed in :
If and then
A4. There must be a rule for adding two members of V.
Furthermore, this vector addition rule must be proved closed in :
If and then
A5. There must be a well defined process for determining whether two members of are equal.
A6. The multiplication and addition rules must satisfy the following rules:
• and

•There must exist a zero vector such that .
1. The act of multiplying a scalar times a vector is presumed to be well defined and to satisfy the rules outlined in
definition #3.
T
BB
ij
T
AA
mn
g
Bm
iA
g
Bn
jA
=
T
iB
Bj
T
mA
An
g
im
BA
g
BA
jn
= T
Bj
iB
T
An
mA
g
BA
im
g
jn
BA
=
v
1
v
2
v
3
, , { } g
˜
1
g
˜
2
g
˜
3
, , { }
v
B
k
v
A
i
g
iB
Ak
=
g
˜
i
g
˜
j
α v
˜
V
α R ∈ v
˜
V ∈ αv
˜
V ∈
V
v
˜
V ∈ w
˜
V ∈ v
˜
w
˜
+ V ∈
V
v
˜
w
˜
+ w
˜
v
˜
+ = αv
˜
v
˜
α =
u
˜
v
˜
w
˜
+ ( ) + u
˜
v
˜
+ ( ) w
˜
+ =
0
˜
V ∈ v
˜
0
˜
+ v
˜
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 57
Unfortunately, textbooks seem to fixate on item A6, completely neglecting the far more subtle and dif-
ficult items A2, A3, A4, and A5. Engineering vectors are more than something with length and direc-
tion. Likewise, engineering vectors are more than simply an array of three numbers. When people
define vectors according to the way their components change upon a change of basis (definitions #1 and
#2), they are implicitly addressing axiom A2. Our “definition #2” is a special case of this more general
definition. In general, axiom A2 is the most difficult axiom to satisfy when discussing specific vector
spaces.
Whenever possible, vector spaces are typically supplemented with the definition of an inner product
(here denoted ), which is a scalar-valued binary
1
operation between two vectors, and , that
must satisfy the following rules:
A7.
A8. if and only if .
An inner product space is just a vector space that has an inner product.
Partial Answer: (a) You may assume that the set of reals constitutes an adequate field, but you
may obtain a careful definition of the term “field” in Reference [6]. In order to deal with axiom A2, you
will need to locate a carefully crafted definition of what it means to be a “real valued continuous func-
1.The term “binary” is just an fancy way of saying that the function has two arguments.
a
˜
b
˜
, ( ) a
˜
b
˜
a
˜
b
˜
, ( ) b
˜
a
˜
, ( ) =
a
˜
a
˜
, ( ) 0 > a
˜
0
˜
≠ a
˜
a
˜
, ( ) 0
˜
= a
˜
0
˜
=
Study Question 4.4
(a) Apply definition #3 to prove that the set of all real-valued continuous functions of
one variable (on a domain being reals) is a vector space.
(b) Suppose that and are members of this vector space. Prove that the scalar-valued
functional
(4.38)
is an acceptable definition of the inner product, .
(c) Explain what a Taylor-series expansion has in common with a basis expansion
( ) of a vector. A Fourier-series expansion is related to the Taylor series in a
way that is analogous to what operation from ordinary vector analysis in 3D space?
(d) Explain how the binary function is analogous to the dyad from conven-
tional vector/tensor analysis.
(e) Explain why the dummy variables and play roles similar to the indices and
(f) Explain why a general binary function is analogous to a conventional tensor
components .
(g) Assuming that the goal of this document is to give you greater insight into vector
and tensor analysis in 3D space, do you think this problem contributes toward that
goal? Explain?
f g
f x ( )g x ( ) x d
∞ –


f g , ( )
f x ( )
f
˜
f
i
g
i
=
f x ( )g y ( ) f
˜
g
˜
x y i j
a x y , ( )
A
ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 58
tion of one (real) variable” [see References 8, 9, or 10 if you are stumped]. (b) Demonstrate that axioms
A7 and A8 hold true. (c) The Taylor series expansion writes a function as a linear combination of more
simple functions, namely, the powers , for ranging from 0 to . The Fourier series expansion
writes a function as a linear combination of trigonometric functions. (d) The dyad has components
, where the indices and range from 1 to 3. The function f(x)g(y) has values defined for values of
the independent variables and ranging from to . A dyad is the most primitive of tensors,
and it is defined such that it has no meaning (other than book-keeping) unless it operates on an arbi-
trary vector so that means . Likewise, we can define the most primitive function of two
variables, f(x)g(y) so that it takes on meaning when appearing in an operation on a general function
so that f(x)g(y) is defined so that it means . (e) see pre-
vious answer (f).
When addressing the question of whether or not something is indeed a tensor, you must
commit yourself to which of the definitions discussed on page 55 you wish to use. When we
cover the topic of curvilinear calculus, we will encounter the Christoffel symbols and .
These three-index quantities characterize how the natural curvilinear basis varies in space.
Their definition is based upon your choice of basis, . Naturally it stands to reason
that choosing some other basis will still permit you to construct Christoffel symbols for that
system. Any review of the literature will include a statement that the Christoffel symbols “are
not third-order tensors.” This statement merely means that
(4.39)
Note that it is perfectly acceptable for you to construct a distinct tensors defined
and (4.40)
Equation (4.39) tells us that these two tensors are not equal. That is,
(4.41)
Stated differently, for each basis, there exists a basis-dependent Christoffel tensor. This should
not be disturbing. After all, the base vectors themselves are, by definition, basis-dependent, but
that doesn’t mean they aren’t vectors. Changing the basis will change the associated Christof-
fel tensor. A particular choice for the basis is itself (of course) basis dependent -- change the
basis, and you will (obviously) change the base vectors themselves. You can always construct
other vectors and tensors from the basis, but you would naturally expect these tensors to
change upon a change of basis if the new definition of the tensor is defined the same as the old
definition except that the new base vectors are used. What’s truly remarkable is that there
exist certain combinations of base vectors and basis-referenced definitions of components that
turn out to be invariant under a change of basis.
x
n
n ∞
f
˜
g
˜
f
i
g
j
i j
x y ∞ – ∞ f
˜
g
˜
f
˜
g
˜
v
˜
• f
˜
g
˜
v
˜
• ( )
v y ( ) f x ( )g y ( )v y ( ) y d
∞ –


f x ( ) g y ( )v y ( ) y d
∞ –


=
Γ
ij
k
Γ
kij
g
˜
1
g
˜
2
g
˜
3
, , { }
Γ
ijB
BBk
Γ
pqA
AAr
g
ip
BA
g
jq
BA
g
BA
kr

Γ
˜
˜
˜
A
Γ
ijA
AAk
g
˜
A
m
g
˜
A
n
g
˜
p
A
= Γ
˜
˜
˜
B
Γ
ijB
BBk
g
˜
B
m
g
˜
B
n
g
˜
p
B
=
Γ
˜
˜
˜
A
Γ
˜
˜
˜
B

rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 59
Consider two tensors constructed from the metrics and :
and (4.42)
These two tensors, it turns out, are equal even though they were constructed using different
bases and different components! Even though and even though , it turns
out that the differences cancel each other out in the combinations defined in Eq. (4.42) so that
both tensors are equal. In fact, as was shown in Eq. (3.18), these tensors are simply the identity
tensor!
(4.43)
This is an exceptional situation. More often than not, when you construct a tensor by mul-
tiplying an ordered array of numbers by corresponding base vectors in different systems, you
will end up with two different tensors. To clarify, suppose that is a function (scalar, vector,
or tensor valued) which can be constructed from the basis. Presume that the same function can
be applied to any basis, and furthermore presume that the results will be different, so we
must denote the results by different symbols.
and if (4.44)
In the older literature, the components of were called tensor if and only if . In
more modern treatments, the function is called basis invariant if , whereas the
function f is basis-dependent if . These two tensors are equal only if the components of
with respect to one basis happen to equal the components of with respect to the same
basis. It’s not uncommon for two distinct tensors to have the same components with respect
to different bases — such a result would not imply equality of the two tensors.
4.2 Coordinates are not the same thing as components
All base vectors in this section are to be regarded as the natural basis so that they are
related to the coordinates by . Recall that we have typeset the three coordi-
nates as . Despite this typesetting, these numbers are not contravariant components of any
vector . By this we mean that if you compare a different set of components from a second
system, they are not generally related to the original set of components via a vector transfor-
mation rule. We may only presume that transformation rules exist such that each can be
expressed as a function of the coordinates. If coordinates were also vector components,
then they would have to be related by a transformation formula of the form
valid for homogeneous, but not curvilinear coordinates (4.45)
A coordinate transformation of this form is a linear, which holds only for homogenous (i.e.,
non-curvilinear) coordinate systems. Even though the coordinates are not generally com-
g
ij
AA
g
ij
BB
G
˜
˜
AA
g
ij
AA
g
˜
A
i
g
˜
A
j
= G
˜
˜
BB
g
ij
BB
g
˜
B
i
g
˜
B
j
=
g
˜
A
i
g
˜
B
i
≠ g
ij
AA
g
ij
BB

G
˜
˜
AA
G
˜
˜
BB
I
˜
˜
= =
f
F
A
f g
˜
1
A
g
˜
2
A
g
˜
3
A
, , ( ) = F
B
f g
˜
1
B
g
˜
2
B
g
˜
3
B
, , ( ) =
F
A
F
A
F
B
=
f F
A
F
B
=
F
A
F
B

F
A
F
B
η
k
g
˜
k
∂x
˜
∂η
k
⁄ =
η
k
η
˜
η
k
η
k
η
m
η
k
∂η
k
∂η
m
---------- η
m
= ←
η
k
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 60
ponents of a vector, their increments do transform like vectors. That is,
this is true for both homogenous and curvilinear coordinates! (4.46)
For curvilinear systems, the transformation formulas that express coordinates in terms of
are nonlinear functions, but their increments are linear for the same reason that the incre-
ment of a nonlinear function is an expression that is linear with respect to incre-
ments: . Geometrically, at any point on a curvy line, we can construct a straight
tangent to that line.
4.3 Do there exist a “complementary” or “dual” coordinates?
Recall that the coordinates are typeset as . We’ve explained that the coordinate incre-
ments transform as vectors even though the coordinates themselves are not components
of vectors. The increments are the components of the position increment vector . In
other words,
(4.47)
To better emphasize the order of operation, we should write this as
(4.48)
On the left side, we have the component of the vector differential, whereas on the right
side we have the differential of the coordinate.
The covariant component of the position increment is well-defined:
(4.49)
Looking back at Eq. (4.48), natural question would be: is it possible to define complementary
or dual coordinates that are related to the baseline coordinates such that
this is NOT possible in general (4.50)
We will now prove (by contradiction) that such dual coordinates do not generally exist. If
such coordinates did exist, then it would have to be true that
(4.51)
For the to exist, this equation would need to be integrable. In other words, the expression
would have to be an exact differential, which means that the metrics would have to
be given by
(4.52)

k

k
∂η
k
∂η
m
---------- dη
m
= ←
η
k
η
m
y f x ( ) =
dy f′ x ( )dx =
η
k

k

k
dx
˜
dx
˜
( )
k

k
=
dx
˜
( )
k
d η
k
( ) =
k
th
k
th
dx
˜
( )
i
g
ik
dx
˜
( )
k
=
η
i
η
i
dx
˜
( )
i
d η
i
( ) = ←
d η
i
( ) g
ij
d η
j
( ) =
η
i
g
ij
d η
j
( )
g
ij
∂η
i
∂η
j
-------- =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 61
and therefore, the second partials would need to be interchangeable:
(4.53)
This constraint is not satisfied by curvilinear systems. Consider, for example, cylindrical coor-
dinates, , where the metric matrix is
(4.54)
Note that
, but (4.55)
Hence, since these are not equal, the integrability condition of Eq. (4.53) is violated and there-
fore there do not exist dual coordinates.
∂g
ij
∂η
k
---------

2
η
i
∂η
j
∂η
k
------------------

2
η
i
∂η
k
∂η
j
------------------
∂g
ik
∂η
j
---------- = = =
η
1
=r η
2
=θ η
3
=z , , ( )
g
i j
| |
1 0 0
0 r
2
0
0 0 1
=
∂g
22
∂η
1
----------
∂g
22
∂r
---------- 2r = =
∂g
21
∂η
2
----------
∂g
21
∂θ
---------- 0 = =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 62
5. Curvilinear calculus
Chapter 3 focused on algebra, where the only important issue was the possibility that the
basis might be irregular (i.e., nonorthogonal, nonnormalized, and/or non-right-handed). In
that chapter, it made no difference whether the coordinates are homogeneous or curvilinear.
For homogeneous coordinates (where the base vectors are the same at all points in space), ten-
sor calculus formulas for the gradient are similar to the formulas for ordinary rectangular
Cartesian coordinates. By contrast, when the coordinates are curvilinear, there are new addi-
tional terms in gradient formulas to account for the variation of the base vectors with posi-
tion.
5.1 A introductory example
Before developing the general curvilinear theory, it is useful to first show how to develop
the formulas without using the full power of curvilinear calculus. Suppose an engineering
problem is most naturally described using cylindrical coordinates, which are related to the
laboratory Cartesian coordinates by
, , . (5.1)
The base vectors for cylindrical coordinates are related to the lab base vectors by
. (5.2)
Suppose you need the gradient of a scalar, . If s is written as a function of , then
the familiar Cartesian formula applies:
. (5.3)
Because curvilinear coordinates were selected, the function is probably simple in
form. If you were a sucker for punishment, you could substitute the inverse relationships,
, , . (5.4)
Then you’d have the function , with which you could directly apply Eq. (5.3). This
approach is unsatisfactory for several reasons. Strictly speaking, Eq. (5.4) is incorrect because
the arctangent has two solutions in the range from 0 to ; the correct formula would have to
be the two-argument arctangent. Furthermore, actually computing the derivative would be
tedious, and the final result would be expressed in terms of the lab Cartesian basis instead of
the cylindrical basis.
x
1
r θ cos = x
2
r θ sin = x
3
z =
e
˜
r
θ cos e
˜
1
θ sin e
˜
2
+ =
e
˜
θ
θ sin – e
˜
1
θ cos e
˜
2
+ =
e
˜
z
e
˜
3
=
ds dx
˜
⁄ x
i
ds
dx
˜
------
∂s
∂x
1
-------- e
˜
1
∂s
∂x
2
-------- e
˜
2
∂s
∂x
3
-------- e
˜
3
+ + =
s r θ z , , ( )
r x
1
2
x
1
2
+ = θ tan
1 –
x
2
x
1
-----
\ .
| |
= z x
3
=
s x
1
x
2
x
3
, , ( )

rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 63
The better approach is to look at the problem from a more academic slant by applying the
chain rule. Given that then
, (5.5)
where the subscripts indicate which variables are held constant in the derivatives. The deriva-
tives of the cylindrical coordinates with respect to can be computed a priori. For example,
the quantity is the gradient of . Physically, we know that the gradient of a quantity is
perpendicular to surfaces of constant values of that quantity. Surfaces of constant are cylin-
ders of radius , so we know that must be perpendicular to the cylinder. In other
words, we know that must be parallel to . The gradients of the cylindrical coordi-
nates are obtained by applying the ordinary Cartesian formula:
. (5.6)
We must determine these derivatives implicitly by using Eqs. (5.1) to first compute the
derivatives of the Cartesian coordinates with respect to the cylindrical coordinates. We can
arrange the nine possible derivatives of Eqs. (5.1) in a matrix:
. (5.7)
The inverse derivatives are obtained inverting this matrix to give
. (5.8)
s s r θ z , , ( ) =
ds
dx
˜
------
∂s
∂r
-----
\ .
| |
θ z ,
dr
dx
˜
------
∂s
∂θ
------
\ .
| |
r z ,

dx
˜
------
∂s
∂z
-----
\ .
| |
r θ ,
dz
dx
˜
------ + + =
x
˜
dr dx
˜
⁄ r
r
r dr dx
˜

dr dx
˜
⁄ e
˜
r
dr
dx
˜
------
∂r
∂x
1
-------- e
˜
1
∂r
∂x
2
-------- e
˜
2
∂r
∂x
3
-------- e
˜
3
+ + =

dx
˜
------
∂θ
∂x
1
-------- e
˜
1
∂θ
∂x
2
-------- e
˜
2
∂θ
∂x
3
-------- e
˜
3
+ + =
dz
dx
˜
------
∂z
∂x
1
-------- e
˜
1
∂z
∂x
2
-------- e
˜
2
∂z
∂x
3
-------- e
˜
3
+ + =
∂x
1
∂r
--------
∂x
1
∂θ
--------
∂x
1
∂z
--------
∂x
2
∂r
--------
∂x
2
∂θ
--------
∂x
2
∂z
--------
∂x
3
∂r
--------
∂x
3
∂θ
--------
∂x
3
∂z
--------
θ cos r θ sin – 0
θ sin r θ cos 0
0 0 1
=
∂r
∂x
1
--------
∂r
∂x
2
--------
∂r
∂x
3
--------
∂θ
∂x
1
--------
∂θ
∂x
2
--------
∂θ
∂x
3
--------
∂z
∂x
1
--------
∂z
∂x
2
--------
∂z
∂x
3
--------
θ cos θ sin 0
θ sin
r
----------- –
θ cos
r
------------ 0
0 0 1
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 64
Substituting these derivatives into Eq. (5.6) gives
. (5.9)
Referring to Eq. (5.2) we conclude that
This result was rather tedious to derive, but it was a one-time-only task. For any coordinate
system, it is always a good idea to derive the coordinate gradients and save them for later use.
Now that we have the coordinate gradients, Eq. (5.5) becomes
, (5.11)
where the commas are a shorthand notation for derivatives. This is the formula for the gradi-
ent of a scalar that you would find in a handbook.
The above analysis showed that high-powered curvilinear theory is not necessary to
derive the formula for the gradient of a scalar function of the cylindrical coordinates. All
dr
dx
˜
------ θ cos e
˜
1
θ sin e
˜
2
+ =

dx
˜
------
θ sin
r
----------- – e
˜
1
θ cos
r
------------ e
˜
2
+ =
dz
dx
˜
------ e
˜
3
=
(5.10)
dr
dx
˜
------ e
˜
r
=

dx
˜
------
e
˜
θ
r
----- =
dz
dx
˜
------ e
˜
3
=
ds
dx
˜
------ s
, r
e
˜
r
s
, θ
r
------ e
˜
θ
s
, z
e
˜
3
+ + =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 65
that’s needed is knowledge of tensor analysis in Cartesian coordinates. However, for more
complicated coordinate systems, a fully developed curvilinear theory is indispensable.
5.2 Curvilinear coordinates
In what follows, three coordinates are pre-
sumed to identify the position of a point in space. These
coordinates are identified with superscripts merely by con-
vention. As sketched in Figure 5.1, the associated base vectors
at any point in space are tangent to the grid lines at that point. The base vector points in
the direction of increasing . Importantly, the position vector is not generally equal to
. For example, the position vector for spherical coordinates is simply (not
); for spherical coordinates, the dependence of the position vector on the
coordinates and is hidden in dependence of on and .
5.3 The “associated” curvilinear covariant basis
Given three coordinates that identify the location in space, the associated
covariant base vectors are defined to be tangent to the grid lines along which only one coordi-
nate varies (the others being held constant). By definition, the three coordinates
identify the position in space:
; i.e., is a function of , , and . (5.12)
The i
th
covariant base vector is defined to point in the direction that moves when is
O
g
˜
1
g
˜
2
g
˜
3
x
˜
FIGURE 5.1 Curvilinear covariant basis. The covariant base
vectors are tangent to the grid lines associated with . Different
base vectors are used at different locations in space because the grid
itself varies in space. The lengths of the covariant base vectors “co-
vary” with the coordinate. Each unit change in the coordinate
corresponds to a specific change in the position vector, and the
covariant basis has a length equal to the change in position per unit
change in coordinate.
η
k
η
1
η
2
x
˜
g
˜
1
g
˜
2
η
1
η
2
η
3
, , { }
x
˜
g
˜
k
η
k
x
˜
η
k
g
˜
k
x
˜
re
˜
r
=
x
˜
re
˜
r
θe
˜
θ
φe
˜
ψ
+ + =
θ φ e
˜
r
θ φ
η
1
η
2
η
3
, , { }
η
1
η
2
η
3
, , { }
x
˜
x
˜
η
1
η
2
η
3
, , ( ) = x
˜
η
1
η
2
η
3
x
˜
η
k
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 66
increased, holding the other coordinates constant. Hence, the natural definition of the i
th
cova-
riant basis is the partial derivative of with respect to :
. (5.13)
Note the following new summation convention: the superscript on is interpreted as a sub-
script in the final result because appears in the “denominator” of the derivative.
Note that the natural covariant base vectors are not necessarily unit vectors. Furthermore,
because the coordinates do not necessarily have the same physical units, the nat-
ural base vectors themselves will have different physical units. For example, cylindrical coor-
dinates have dimensions of length, radians, and length, respectively. Consequently,
two of the base vectors ( and ) are dimensionless, but has dimensions of length. Such
a situation is not unusual and should not be alarming. We will find that the components of a
vector will have physical dimensions appropriate to ensure that the each term in the sum of
components times base vectors will have the same physical dimensions as the vector itself.
Again this point harks back to the fact that neither components nor base vectors are invariant,
but the sum of components times base vectors is invariant.
Equation (5.12) states that the coordinates uniquely identify the position in space. Con-
versely, any position in space corresponds to a unique set of coordinates. That is, each coordi-
nate may be regarded as a single-valued function of position vector:
. (5.14)
By the chain rule, note that
. (5.15)
Therefore, recalling Eq. (5.13), the contravariant dual basis must be given by
. (5.16)
This derivative is the spatial gradient of . Hence, as
sketched in Fig. 5.2, each contravariant base vector is
normal to surfaces of constant , and it points in the
direction of increasing .
Starting with Eq. (5.12), the increment in the position
vector is given by
, (5.17)
x
˜
η
i
g
˜
i
∂x
˜
∂η
i
-------- ≡
η
i
η
i
η
1
η
2
η
3
, , { }
r θ z , , { }
g
˜
1
g
˜
3
g
˜
2
η
j
η
j
x
˜
( ) =

j
dx
˜
--------
∂x
˜
∂η
i
-------- •
∂η
j
∂η
i
-------- δ
i
j
= =
η
1
η
2
FIGURE 5.2 Curvilinear contravariant
basis. The contravariant base vectors are
normal to surfaces of constant . η
k
x
˜
g
˜
1
g
˜
2
g
˜
j
∂η
j
∂x
˜
-------- ≡
η
j
g
˜
j
η
j
η
j
dx
˜
∂x
˜
∂η
k
--------- dη
k
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 67
or, recalling Eq. (5.13),
. (5.18)
Now we note a key difference between homogeneous and curvilinear coordinates:
ix. For homogeneous coordinates, each of the base vectors is same
throughout space — they are independent of the coordinates
. Hence, for homogeneous coordinates, Eq. (5.18) can be
integrated to show that .
x. For curvilinear coordinates, each varies in space — they depend on
the coordinates . Hence, for curvilinear coordinates,
Eq. (5.18) does not imply that .
EXAMPLE Consider cylindrical coordinates: , , .
Figure 5.3 illustrates how, for simple enough coordinate systems, you can determine the cova-
riant base vectors graphically. Recall that and therefore that is the
coefficient of when is varied by differentially changing holding and con-
stant. For cylindrical coordinates, when the radial coordinate, , is varied holding the oth-
ers constant, the position vector moves such , and therefore must equal
because it is the coefficient of . Similarly, must be equal to since that is the coeffi-
cient of in when the second coordinate, , is varied holding the others constant.
Summarizing,
, , and . (5.19)
To derive these results analytically (rather than geometrically), we utilize the underlying rect-
dx
˜

k
g
˜
k
=
g
˜
k
η
1
η
2
η
3
, , { }
x
˜
η
k
g
˜
k
=
g
˜
k
η
1
η
2
η
3
, , { }
x
˜
η
k
g
˜
k
=
η
1
=r η
2
=θ η
3
=z
e
˜
r
e
˜
θ
e
˜
z
θ r
z
x
1
x
2
x
3
x
˜
x
˜
r e
˜
r
θ ( ) ze
˜
z
+ =
x
˜
dx
˜
=dre
˜
r
x
˜
dx
˜
=rdθe
˜
θ
Here we vary r
FIGURE 5.3 Covariant basis for cylindrical coordinates. The views down the -axis show how the covariant base
vectors can be determined graphically by varying one coordinate, holding the others constant.
z
holding θ and z constant
Here we vary θ
holding r and z constant
x
1
x
1
x
2
x
2
g
˜
1
∂x
˜
∂η
1
⁄ ( )
η
2
η
3
,
= g
˜
1

1
x
˜
η
1
η
2
η
3
η
1
=r
x
˜
dx
˜
dre
˜
r
= g
˜
1
e
˜
r
dr g
˜
2
re
˜
θ
dθ dx
˜
η
2

g
˜
1
e
˜
r
= g
˜
2
re
˜
θ
= g
˜
3
e
˜
z
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 68
angular Cartesian basis to write the position vector as
, (5.20)
where
(5.21a)
(5.21b)
. (5.21c)
Then
(5.22a)
(5.22b)
. (5.22c)
which we note is equivalent to the graphically derived Eqs. (5.19). Also note that and
are dimensionless, whereas has physical dimensions of length.
The metric coefficients for cylindrical coordinates are derived in the usual way:
(5.23a)
(5.23b)
, (5.23c)
or, in matrix form,
. ← cylindrical coordinates (5.24)
Whenever the metric tensor comes out to be diagonal as it has here, the coordinate system is
orthogonal and the base vectors at each point in space are mutually perpendicular. Inverting
the covariant matrix gives the contravariant metric coefficients:
. ← cylindrical coordinates (5.25)
e
˜
1
e
˜
2
e
˜
3
, , { }
x
˜
x
1
e
˜
1
x
2
e
˜
2
x
3
e
˜
3
+ + =
x
1
r θ cos =
x
2
r θ sin =
x
3
z =
g
˜
1
∂x
˜
∂r
------
\ .
| |
θ z ,
∂x
1
∂r
--------
\ .
| |
θ z ,
e
˜
1
∂x
2
∂r
--------
\ .
| |
θ z ,
e
˜
2
∂x
3
∂r
--------
\ .
| |
θ z ,
e
˜
3
+ + θ cos e
˜
1
θe
˜
2
sin + = = =
g
˜
2
∂x
˜
∂θ
------
\ .
| |
r z ,
∂x
1
∂θ
--------
\ .
| |
r z ,
e
˜
1
∂x
2
∂θ
--------
\ .
| |
r z ,
e
˜
2
∂x
3
∂θ
--------
\ .
| |
r z ,
e
˜
3
+ + r θ sin e
˜
1
– r θ cos e
˜
2
+ = = =
g
˜
3
∂x
˜
∂z
------
\ .
| |
r θ ,
∂x
1
∂z
--------
\ .
| |
r θ ,
e
˜
1
∂x
2
∂z
--------
\ .
| |
r θ ,
e
˜
2
∂x
3
∂z
--------
\ .
| |
r θ ,
e
˜
3
+ + e
˜
3
= = =
g
˜
1
g
˜
3
g
˜
2
g
ij
g
11
g
˜
1
g
˜
1
• 1 = = g
12
g
˜
1
g
˜
2
• 0 = = g
13
g
˜
1
g
˜
3
• 0 = =
g
12
g
˜
2
g
˜
1
• 0 = = g
22
g
˜
2
g
˜
2
• r
2
= = g
23
g
˜
2
g
˜
3
• 0 = =
g
31
g
˜
3
g
˜
1
• 0 = = g
32
g
˜
3
g
˜
2
• 0 = = g
33
g
˜
3
g
˜
3
• 1 = =
g
ij
| |
1 0 0
0 r
2
0
0 0 1
=
g
ij
| |
g
ij
| |
1 0 0
0 1 r
2
⁄ 0
0 0 1
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 69
The contravariant dual basis can be derived in the usual way by applying the formula of
Eq (2.18):
(5.26a)
(5.26b)
. (5.26c)
With considerably more effort, we can alternatively derive these results by directly applying
Eq. (5.16). To do this, we must express the coordinates in terms of :
(5.27a)
(5.27b)
. (5.27c)
Then applying Eq. (5.16) gives
(5.28a)
(5.28b)
. (5.28c)
which agrees with the previous result in Eq. (5.26).
Because the curvilinear basis associated with cylindrical coordinates is
orthogonal, the dual vectors are in exactly the same directions but have different
magnitudes. It is common practice to use the associated orthonormal basis as we
have in the above equations. This practice also eliminates the sometimes confusing complica-
tion of dimensional base vectors. When a coordinate system is orthogonal, the shared
orthonormal basis is called the “physical basis.” Notationally, vector components
or are defined and , respectively. Because cylindrical coor-
dinates are orthogonal, it is conventional to also define so-called “physical components” with
respect to the orthonormalized cylindrical basis.
For cylindrical coordinates, the physical components are denoted . Whenever
you derive a formula in terms of the general covariant and contravariant vector components,
g
˜
1
g
11
g
˜
1
g
12
g
˜
2
g
13
g
˜
3
+ + g
˜
1
e
˜
r
= = =
g
˜
2
g
21
g
˜
1
g
22
g
˜
2
g
23
g
˜
3
+ +
g
˜
2
r
2
-----
e
˜
θ
r
----- = = =
g
˜
3
g
31
g
˜
1
g
32
g
˜
2
g
33
g
˜
3
+ + g
˜
3
e
˜
z
= = =
r θ z , , { } x
1
x
2
x
3
, , { }
r x
1
2
x
2
2
+ =
θ tan
1 –
x
2
x
1
-----
\ .
| |
=
z x
3
=
g
˜
1
dr
dx
˜
------
∂r
∂x
1
-------- e
˜
1
∂r
∂x
2
-------- e
˜
2
∂r
∂x
3
-------- e
˜
3
+ +
x
1
r
----- e
˜
1
x
2
r
----- e
˜
2
+ θ cos e
˜
1
θ sin e
˜
2
+ e
˜
r
= = = = =
g
˜
2

dx
˜
------
∂θ
∂x
1
-------- e
˜
1
∂θ
∂x
2
-------- e
˜
2
∂θ
∂x
3
-------- e
˜
3
+ +
x
2

x
1
2
--------
\ .
| |
cos
2
θe
˜
1
1
x
1
-----
\ .
| |
cos
2
θe
˜
2
+
1
r
-- - e
˜
θ
= = = =
g
˜
3
dz
dx
˜
------
∂z
∂x
1
-------- e
˜
1
∂z
∂x
2
-------- e
˜
2
∂z
∂x
3
-------- e
˜
3
+ + e
˜
3
e
˜
z
= = = =
g
˜
1
g
˜
2
g
˜
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { }
e
˜
r
e
˜
θ
e
˜
z
, , { }
v
1
v
2
v
3
, , { }
v
1
v
2
v
3
, , { } v
k
g
˜
k
v
˜
• ≡ v
k
g
˜
k
v
˜
• ≡
v
r
v
θ
v
z
, , { }
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 70
it’s a good idea to convert the final result to physical coordinates and the physical basis. For
cylindrical coordinates, these conversion formulas are:
(5.29a)
(5.29b)
. (5.29c)
v
1
g
˜
1
v
˜
• e
˜
r
v
˜
• v
r
= = = v
1
g
˜
1
v
˜
• e
˜
r
v
˜
• v
r
= = =
v
2
g
˜
2
v
˜
• re
˜
θ
( ) v
˜
• rv
θ
= = = v
2
g
˜
2
v
˜

1
r
--- e
˜
θ
\ .
| |
v
˜

v
θ
r
----- = = =
v
3
g
˜
3
v
˜
• e
˜
z
v
˜
• v
z
= = = v
3
g
˜
3
v
˜
• e
˜
z
v
˜
• v
z
= = =
Study Question 5.1 For spherical coordinates, , the underlying rectan-
gular Cartesian coordinates are
(5.30a)
(5.30b)
. (5.30c)
(a) Follow the above example to prove that the covariant basis for spherical coordinates is
, where (5.31a)
, where (5.31b)
where . (5.31c)
(b) Prove that the dual contravariant basis for spherical coordinates is
, (5.32a)
, (5.32b)
. (5.32c)
(c) As was done for cylindrical coordinates in Eq. (5.29) show that the spherical covariant
and contravariant vector components are related to the spherical “physical” components by
(5.33a)
(5.33b)
. (5.33c)
η
1
=r η
2
=θ η
3
=φ , , { }
x
1
r θ sin φ cos =
x
2
r θ sin φ sin =
x
3
r θ cos =
g
˜
1
e
˜
r
= e
˜
r
θ sin φ cos e
˜
1
θ sin φ sin e
˜
2
θ cos e
˜
3
+ + =
g
˜
2
re
˜
θ
= e
˜
θ
θ cos φ cos e
˜
1
θ φ sin cos e
˜
2
θ sin e
˜
3
– + =
g
˜
3
r θ sin e
˜
φ
= e
˜
φ
φ sin e
˜
1
– φ cos e
˜
2
+ =
g
˜
1
e
˜
r
=
g
˜
2
1
r
--- e
˜
θ
=
g
˜
3
1
r θ sin
-------------- e
˜
φ
=
v
1
v
r
= v
1
v
r
=
v
2
rv
θ
= v
2
v
θ
r
----- =
v
3
r θ sin ( )v
φ
= v
3
v
φ
r θ sin
-------------- =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 71
5.4 The gradient of a scalar
Section 5.1 provided a simplified example for finding the formula for the gradient of a scalar
function of cylindrical coordinates. Now we outline the procedure for more complicated gen-
eral curvilinear coordinates. Recall Eq. (5.18):
, (5.34)
where
. (5.35)
Dotting both sides of Eq. (5.34) with shows that
. (5.36)
This important equation is crucial in determining expressions for gradient operations. Con-
sider, for example, a scalar-valued field
. (5.37)
The increment in this function is given by
, (5.38)
or, using Eq. (5.36),
, (5.39)
which holds for all . The direct notation definition of the gradient of a scalar field is
. (5.40)
Comparing the above two equations gives the formula for the gradient of a scalar in curvilin-
ear coordinates:
. (5.41)
Notice that this formula is very similar in form to the familiar formula for rectangular Carte-
sian coordinates. Gradient formulas won’t look significantly different until we compute vec-
tor gradients in the next section.
Example: cylindrical coordinates Consider a scalar field
. (5.42)
dx
˜

k
g
˜
k
=
g
˜
k
∂x
˜
∂η
k
--------- =
g
˜
i

i
g
˜
i
dx
˜
• =
s s η
1
η
2
η
3
, , ( ) =
ds
∂s
∂η
k
--------- dη
k
=
ds
∂s
∂η
k
--------- g
˜
k
dx
˜
• =
dx
˜
ds dx
˜

ds
ds
dx
˜
------ dx
˜
• = dx
˜

ds
dx
˜
------
∂s
∂η
k
--------- g
˜
k
=
s s r θ z , , ( ) =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 72
Applying Eq. (5.41) with the contravariant basis of Eq. (5.26) gives
, (5.43)
which is the gradient formula typically found in math handbooks.
5.5 Gradient of a vector -- “simplified” example
Now let’s look at the gradient of a vector. If the vector is expressed in Cartesian coordi-
nates, the formula for the gradient is
. (5.45)
Suppose a vector is expressed in the cylindrical coordinates:
. (5.46)
The components , , and are presumably known as functions of the coordi-
nates. Importantly, the base vectors also vary with the coordinates!
First, recall the product rule for the gradient of a scalar times a vector:
. (5.47)
Applying the product rule to each of the terms in Eq. (5.46), the gradient of gives
+ . (5.48)
Eq. (5.11) applies to the gradient of any scalar. Hence, the first three terms of Eq. (5.48) can be
ds
dx
˜
------
∂s
∂r
----- e
˜
r
∂s
∂θ
------
e
˜
θ
r
-----
∂s
∂z
----- e
˜
z
+ + =
Study Question 5.2 Follow the above example [using Eq. (5.31) in Eq. (5.41)] to prove that
the formula for the gradient of a scalar s in spherical coordinates is
. (5.44)
ds
dx
˜
------
∂s
∂r
----- e
˜
r
∂s
∂θ
------
e
˜
θ
r
-----
∂s
∂φ
------
e
˜
φ
r θ sin
-------------- + + =
dv
˜
dx
˜
------
∂v
i
∂x
j
------- e
˜
i
e
˜
j
=
v
˜
v
˜
v
r
e
˜
r
v
θ
e
˜
θ
v
z
e
˜
z
+ + =
v
r
v
θ
v
z
r θ z , , { }
d su
˜
( )
dx
˜
--------------- u
˜
ds
dx
˜
------ s
du
˜
dx
˜
------- + =
dv
˜
dx
˜

dv
˜
dx
˜
------ e
˜
r
dv
r
dx
˜
-------- e
˜
θ
dv
θ
dx
˜
-------- e
˜
z
dv
z
dx
˜
-------- + + =
v
r
de
˜
r
dx
˜
-------- v
θ
de
˜
θ
dx
˜
--------- v
z
de
˜
z
dx
˜
-------- + +
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 73
written
. (5.49)
Now we need formulas for the gradients of the base vectors. Applying the product rule to Eq.
(5.2), noting that the gradients of the Cartesian basis are all zero, gives
. (5.50)
Applying (5.10),
Substituting Eqs. (5.49) and (5.51) into (5.48) gives
+
+ . (5.52)
dv
r
dx
˜
-------- v
r
, r
e
˜
r
v
r
, θ
r
-------- e
˜
θ
v
r
, z
e
˜
3
+ + =
dv
θ
dx
˜
-------- v
θ
, r
e
˜
r
v
θ
, θ
r
--------- e
˜
θ
v
θ
, z
e
˜
3
+ + =
dv
z
dx
˜
-------- v
z
, r
e
˜
r
v
z
, θ
r
-------- e
˜
θ
v
z
, z
e
˜
3
+ + =
de
˜
r
dx
˜
-------- θ sin – e
˜
1
θ cos e
˜
2
+ ( )

dx
˜
------ e
˜
θ

dx
˜
------ = =
de
˜
θ
dx
˜
--------- θ cos – e
˜
1
θ sin e
˜
2
– ( )

dx
˜
------ e
˜
r

dx
˜
------ – = =
de
˜
z
dx
˜
-------- 0
˜
˜
=
. (5.51)
de
˜
r
dx
˜
--------
1
r
--- e
˜
θ
e
˜
θ
=
de
˜
θ
dx
˜
---------
1
r
--- e
˜
r
e
˜
θ
– =
de
˜
z
dx
˜
-------- 0
˜
˜
=
dv
˜
dx
˜
------ e
˜
r
v
r
, r
e
˜
r
v
r
, θ
r
-------- e
˜
θ
v
r
, z
e
˜
3
+ +
\ .
| |
=
e
˜
θ
v
θ
, r
e
˜
r
v
θ
, θ
r
--------- e
˜
θ
v
θ
, z
e
˜
3
+ +
\ .
| |
e
˜
z
v
z
, r
e
˜
r
v
z
, θ
r
-------- e
˜
θ
v
z
, z
e
˜
3
+ +
\ .
| |
v
r
1
r
--- e
˜
θ
e
˜
θ
\ .
| |
v
θ
1
r
--- e
˜
r
e
˜
θ

\ .
| |
+
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 74
Collecting terms gives
= + +
+ + +
+ + + . (5.53)
This result is usually given in textbooks in matrix form with respect to the cylindrical basis:
. (5.54)
There’s no doubt that this result required a considerable amount of effort to derive. Typically,
these kinds of formulas are compiled in the appendices of most tensor analysis reference
books. The appendix of R.B. Bird’s book on macromolecular hydrodynamics is particularly
well-organized and error-free. If, however, you use Bird’s appendix, you will notice that the
components given for the gradient of a vector seem to be the transpose of what we have pre-
sented above; that’s because Bird (and some others) define the gradient of a tensor to be the
transpose of our definition. Before using anyone’s gradient table, you should always ascertain
which definition the author uses.
Now we are going to perform the same sort of analysis to show how the gradient is deter-
mined for general curvilinear coordinates.
5.6 Gradient of a vector in curvilinear coordinates
The formula for the gradient of a scalar in curvilinear coordinates was not particularly tough
to derive and comprehend — it didn’t look profoundly different from the formula for rectan-
gular Cartesian coordinates. Taking gradients of vectors, however, begins a new nightmare.
Consider a vector field,
. (5.55)
Each component of the vector is of course a function of the coordinates, but for general curvi-
dv
˜
dx
˜
------ v
r
, r
( )e
˜
r
e
˜
r
v
r
, θ
r
--------
v
θ
r
----- –
\ .
| |
e
˜
r
e
˜
θ
v
r
, z
( )e
˜
r
e
˜
z
v
θ
, r
( )e
˜
θ
e
˜
r
v
θ
, θ
r
---------
v
r
r
---- +
\ .
| |
e
˜
θ
e
˜
θ
v
θ
, z
( )e
˜
θ
e
˜
z
v
z
, r
( )e
˜
z
e
˜
r
v
z
, θ
r
--------
\ .
| |
e
˜
z
e
˜
θ
v
z
, z
e
˜
z
e
˜
z
dv
˜
dx
˜
------
v
r
, r
v
r
, θ
-v
θ
r
----------------- v
r
, z
v
θ
, r
v
θ
, θ
+v
r
r
------------------ v
θ
, z
v
z
, r
v
z
, θ
r
-------- v
z
, z
=
v
˜
v
˜
η
1
η
2
η
3
, , ( ) =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 75
linear coordinates, so are the base vectors! Written out,
. (5.56)
Therefore the increment involves both increments of the components and increments
of the base vectors:
. (5.57)
Applying the chain rule and using Eq. (5.36), the component increments can be written
. (5.58)
Similarly, the base vector increments are
. (5.59)
Substituting these results into Eq. (5.57) and rearranging gives
, (5.60)
which holds for all . Recall the gradient of a vector is defined in direct notation
such that
. (5.61)
Comparing the above two equations gives us a formula for the gradient:
. (5.62)
Incidentally, these equations serve as further examples of how a superscript in the “denomi-
nator” is regarded as a subscript in the summation convention.
Christoffel Symbols Note that the nine vectors [i.e., the coefficients of in the last
term of Eq. (5.62)] are strictly properties of the coordinate system and they may be computed
and tabulated a priori. This family of system-dependent vectors is denoted
. (5.63)
Recalling Eq. (5.35), note that
. (5.64)
Thus, only six of the nine vectors are independent due to the symmetry in . The k
th
con-
v
˜
v
i
η
1
η
2
η
3
, , ( ) g
˜
i
η
1
η
2
η
3
, , ( ) =
dv
˜
dv
i
dg
˜
i
dv
˜
dv
i
g
˜
i
v
i
dg
˜
i
+ =
dv
i
∂v
i
∂η
j
-------- dη
j
∂v
i
∂η
j
--------g
˜
j
dx
˜
• = =
dg
˜
i
∂g
˜
i
∂η
j
-------- dη
j
∂g
˜
i
∂η
j
-------- g
˜
j
dx
˜
• = =
dv
˜
∂v
i
∂η
j
--------g
˜
i
g
˜
j
dx
˜
• v
i
∂g
˜
i
∂η
j
--------g
˜
j
dx
˜
• + =
dx
˜
dv
˜
dx
˜

dv
˜
dv
˜
dx
˜
------ dx
˜
• = dx
˜

dv
˜
dx
˜
------
∂v
i
∂η
j
-------- g
˜
i
g
˜
j
v
i
∂g
˜
i
∂η
j
-------- g
˜
j
+ =
∂g
˜
i
∂η
j
⁄ v
i
Γ
˜
ij
∂g
˜
i
∂η
j
-------- ≡
Γ
˜
ij

2
x
˜
∂η
j
∂η
i
-----------------

2
x
˜
∂η
i
∂η
j
----------------- Γ
˜
ji
= = =
Γ
˜
ij
ij
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 76
travariant component of is obtained by dotting into :
. (5.65)
The quantity is called the Christoffel symbol of the second kind.
Important comment about notation: Even though the Christoffel symbols have three
indices, they are not components of a third-order tensor. By this, we mean that a second basis
will have a different set of Christoffel symbols and, as discussed below, they are that
are not obtainable via a simple tensor transformation of the Christoffel symbols of the original
system [i.e., ]. Instead of the notation , Christoffel
symbols are therefore frequently denoted in the literature with the odd-looking notation .
The fact that Christoffel symbols are not components of a tensor is, of course, a strong justifi-
cation for avoiding typesetting Christoffel symbols in the same way as tensors. However, to
be 100% consistent, proponents of this notational ideology would — by the same arguments
— be compelled to not typeset coordinates using the notation which erroneously makes it
look as though the are contravariant components of some vector , even though they
aren’t. Being comfortable with the typesetting , we are also comfortable with the typeset-
ting . The key is for the analyst to recognize that neither of these symbols connote tensors.
Instead, they are “basis-intrinsic” quantities (i.e., indexed quantities whose meaning is
defined for a particular basis and whose connection with counterparts from a different basis
are not obtained via a tensor transformation rule). Of course, the base vectors themselves are
basis-intrinsic objects. Any new object that is constructed from basis-intrinsic quantities
should be itself regarded as basis-intrinsic until proven otherwise. For example, the metric
were initially regarded as basis-intrinsic because they were constructed from
basis-intrinsic objects (the base vectors), but it was proved that they turned out to also satisfy
the tensor transformation rule. Consequently, even though the metric matrix is constructed
from basis-intrinsic quantities, it turns out to not be basis intrinsic itself (the metric compo-
nents are components of the identity tensor).
On page 86, we define Christoffel symbols of the “first” kind, which are useful in Rieman-
nian spaces where there is no underlying rectangular Cartesian basis. In general, if the term
“Christoffel symbol” is used by itself, it should be taken to mean the Christoffel symbol of the
second kind defined above. Christoffel symbols may appear rather arcane, but keep in mind
that these quantities simply characterize how the base vectors vary in space. Christoffel are
also sometimes called the “affinities” [12].
Γ
˜
ij
g
˜
k
Γ
˜
ij
Γ
ij
k
g
˜
k
Γ
˜
ij
• ≡
Γ
ij
k
Γ
ij
k
g
˜
k
Γ
ij
k
Γ
ij
k
Γ
mn
p
g
˜
m
g
˜
m
• ( ) g
˜
n
g
˜
n
• ( ) g
˜
p
g
˜
k
• ( ) ≠ Γ
ij
k
k
ij
{ }
η
k
η
k
η
˜
η
k
Γ
ij
k
g
ij
g
˜
i
g
˜
j
• ≡
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 77
By virtue of Eq. (5.64), note that
. (5.66)
Like the components of the tensor , the Christoffel symbols are properties of the par-
ticular coordinate system and its basis. Consequently, the Christoffel symbols are not the com-
ponents of a third-order tensor. Specifically, if we were to consider some different curvilinear
coordinate system and compute the Christoffel symbols for that system, the result would not
be the same as would be obtained by a tensor transformation of the Christoffel symbols of the
first coordinate system to the second system. This is in contrast to the metric coefficients
which do happen to be components of a tensor (namely, the identity tensor).
Recalling that is the k
th
contravariant component of and by Eq. (5.63)
, we conclude that the variation of the base vectors in space is completely
characterized by the Christoffel symbols. Namely,
. (5.67)
Increments in the base vectors By the chain rule, the increment in the covariant base vec-
tor can always be written
(5.68)
or, using the notation introduced in Eq. (5.67)
(5.69)
Manifold torsion Recall that the Christoffel symbols are not components of basis-indepen-
dent tensors. Consider, however [12], the anti-symmetric part of the Christoffel symbols:
(5.70)
As long as Eq. (5.66) holds, then the manifold torsion will be zero. For a non-holomomic sys-
tem, it’s possible that the manifold torsion will be nonzero, but it will turn out to be a basis
independent (i.e., “free” vector). Henceforth, we assume that Eq. (5.66) holds true, so no fur-
ther mention will be made of the manifold torsion.
Γ
ij
k
Γ
ji
k
=
F
ij
F
˜
˜
Γ
ij
k
g
ij
Γ
ij
k
Γ
˜
ij
Γ
˜
ij
∂g
˜
i
∂η
j
⁄ =
∂g
˜
i
∂η
j
-------- Γ
ij
k
g
˜
k
=
dg
˜
i
∂g
˜
i
∂η
j
-------- dη
j
=
dg
˜
i
Γ
ij
k
g
˜
k

j
=

ij | |
k
Γ
ij
k
Γ
ji
k
– ≡
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 78
EXAMPLE: Christoffel symbols for cylindrical coordinates In terms of the underlying
rectangular Cartesian basis, the covariant base vectors from Eq. (5.19) can be written
(5.71a)
(5.71b)
. (5.71c)
Therefore, applying Eq. (5.63), using
(5.72a)
(5.72b)
. (5.72c)
Noting that is the coefficient of in the expression for , we find that only three of the
27 Christoffel symbols are nonzero. Namely, noting that is the coefficient of in
and noting that the coefficient of in ,
and (all other ). (5.73)
If you look up cylindrical Christoffel symbols in a handbook, you will probably find the sub-
scripts (1,2,3) replaced with the coordinate symbols (r,θ,z) for clarity so they are listed as
and (all other ). (5.74)
g
˜
1
θ cos e
˜
1
θ sin e
˜
2
+ =
g
˜
2
r θ sin – e
˜
1
r θe
˜
2
cos + =
g
˜
3
e
˜
3
=
η
1
=r η
2
=θ η
3
=z , , { }
Γ
˜
11
∂g
˜
1
∂r
--------- 0
˜
= = Γ
˜
12
∂g
˜
1
∂θ
--------- θ sin – e
˜
1
θe
˜
2
cos +
1
r
---g
˜
2
= = = Γ
˜
13
∂g
˜
1
∂z
--------- 0
˜
= =
Γ
˜
21
Γ
˜
12
= Γ
˜
22
∂g
˜
2
∂θ
--------- r θ cos – e
˜
1
r θ sin e
˜
2
– rg
˜
1
– = = = Γ
˜
23
∂g
˜
2
∂z
--------- 0
˜
= =
Γ
˜
31
Γ
˜
13
= Γ
˜
32
Γ
˜
23
= Γ
˜
33
∂g
˜
3
∂z
--------- 0
˜
= =
Γ
ij
k
g
˜
k
Γ
˜
ij
Γ
12
2
g
˜
2
Γ
˜
12
Γ
22
1
g
˜
1
Γ
˜
22
Γ
12
2
Γ
21
2
1
r
--- = = Γ
22
1
r – = Γ
ij
k
0 =
Γ

θ
Γ
θr
θ
1
r
--- = = Γ
θθ
r
r – = Γ
ij
k
0 =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 79
Partial Answer: To find the vector , first differentiate from Eq. (5.31a) with respect to
(i.e., with respect to θ). You then recognize from (5.31b) that the result is . Then t is
found by dotting by from Eq. (5.32b). Hence
.
Covariant differentiation of contravariant components Substituting Eq. (5.67) into
Eq. (5.62) gives
, (5.77)
or, changing the dummy summation indices so that the basis dyads will have the same sub-
scripts in both terms,
. (5.78)
We now introduce a compact notation called covariant vector differentiation:
. (5.79)
The notation for covariant differentiation varies widely: the slash in is also often denoted
with a comma although many writers use a comma to denote ordinary partial differentia-
tion. Keep in mind: the Christoffel terms in Eq. (5.79) account for the variation of the base vec-
tors in space. Using covariant differentiation, the gradient of a vector is then written
compactly as
. (5.80)
Study Question 5.3 Using the spherical covariant base vectors in Eq. (5.31), prove that
(5.75a)
(5.75b)
. (5.75c)
Therefore show that the Christoffel symbols for spherical coordinates are
, , (5.76a)
, (5.76b)
, (all other ). (5.76c)
Γ
˜
11
0
˜
= Γ
˜
12
1
r
--- g
˜
2
= Γ
˜
13
1
r
--- g
˜
3
=
Γ
˜
21
Γ
˜
12
= Γ
˜
22
rg
˜
1
– = Γ
˜
23
φ cot g
˜
3
=
Γ
˜
31
Γ
˜
13
= Γ
˜
32
Γ
˜
23
= Γ
˜
33
rsin
2
θg
˜
1
– θ sin θ cos g
˜
2
– =
r θ φ , , { }
Γ

θ
Γ
θr
θ
1
r
--- = = Γ

φ
Γ
φr
φ
1
r
--- = =
Γ
θθ
r
r – = Γ
θφ
φ
Γ
φθ
φ
φ cot = =
Γ
φφ
r
rsin
2
θ – = Γ
φφ
θ
θ sin θ cos – = Γ
ij
k
0 =
Γ
˜
12
g
˜
1
η
2
Γ
˜
12
e
˜
θ
= Γ
12
2
Γ
˜
12
g
˜
2
Γ
12
2
Γ

θ
Γ
˜
12
g
˜
2
• e
˜
θ
e
˜
θ
r ⁄ ( ) • 1 r ⁄ = = = =
dv
˜
dx
˜
------
∂v
i
∂η
j
-------- g
˜
i
g
˜
j
v
i
Γ
ij
k
g
˜
k
g
˜
j
+ =
dv
˜
dx
˜
------
∂v
i
∂η
j
-------- g
˜
i
g
˜
j
v
k
Γ
kj
i
g
˜
i
g
˜
j
+ =
v
i
/
j
∂v
i
∂η
j
-------- v
k
Γ
kj
i
+ ≡
v
i
/j
v
i
, j
dv
˜
dx
˜
------ v
/j
i
g
˜
i
g
˜
j
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 80
Example: Gradient of a vector in cylindrical coordinates Recalling Eq. (5.72), we can
apply Eq. (5.79) to obtain
= (5.81rr)
= (5.81rt)
= (5.81rz)
= (5.81tr)
= (5.81tt)
= (5.81tz)
= (5.81zr)
= (5.81zt)
= (5.81zz)
Hence, the gradient of a vector is given by
= + +
+ + +
+ + + . (5.82)
Substituting Eqs. (5.19), (5.26), and (5.29) into the above formula gives
= + +
+ + +
+ + + . (5.83)
v
1
/1
∂v
1
∂η
1
--------- v
1
Γ
11
1
v
2
Γ
21
1
v
3
Γ
31
1
+ + + =
∂v
1
∂r
--------
v
1
/2
∂v
1
∂η
2
--------- v
1
Γ
12
1
v
2
Γ
22
1
v
3
Γ
32
1
+ + + =
∂v
1
∂θ
-------- v
2
r –
v
1
/3
∂v
1
∂η
3
--------- v
1
Γ
13
1
v
2
Γ
23
1
v
3
Γ
33
1
+ + + =
∂v
1
∂z
--------
v
2
/1
∂v
2
∂η
1
--------- v
1
Γ
11
2
v
2
Γ
21
2
v
3
Γ
31
2
+ + + =
∂v
2
∂r
--------
v
2
/2
∂v
2
∂η
2
--------- v
1
Γ
12
2
v
2
Γ
22
2
v
3
Γ
32
2
+ + + =
∂v
2
∂θ
--------
v
1
r
----- +
v
2
/3
∂v
2
∂η
3
--------- v
1
Γ
13
2
v
2
Γ
23
2
v
3
Γ
33
2
+ + + =
∂v
2
∂z
--------
v
3
/1
∂v
3
∂η
1
--------- v
1
Γ
11
3
v
2
Γ
21
3
v
3
Γ
31
3
+ + + =
∂v
3
∂r
--------
v
3
/2
∂v
3
∂η
2
--------- v
1
Γ
12
3
v
2
Γ
22
3
v
3
Γ
32
3
+ + + =
∂v
3
∂θ
--------
v
3
/3
∂v
3
∂η
3
--------- v
1
Γ
13
3
v
2
Γ
23
3
v
3
Γ
33
3
+ + + =
∂v
3
∂z
--------
dv
˜
dx
˜
------
∂v
1
∂r
--------
\ .
| |
g
˜
1
g
˜
1
∂v
1
∂θ
-------- v
2
r –
\ .
| |
g
˜
1
g
˜
2
∂v
1
∂z
--------
\ .
| |
g
˜
1
g
˜
3
∂v
2
∂r
--------
\ .
| |
g
˜
2
g
˜
1
∂v
2
∂θ
--------
v
1
r
----- +
\ .
| |
g
˜
2
g
˜
2
∂v
2
∂z
--------
\ .
| |
g
˜
2
g
˜
3
∂v
3
∂r
--------
\ .
| |
g
˜
3
g
˜
1
∂v
3
∂θ
--------
\ .
| |
g
˜
3
g
˜
2
∂v
3
∂z
--------
\ .
| |
g
˜
3
g
˜
3
dv
˜
dx
˜
------
∂v
r
∂r
--------
\ .
| |
e
˜
r
e
˜
r
∂v
r
∂θ
-------- v
θ

\ .
| |
e
˜
r
e
˜
θ
r
-----
\ .
| |
∂v
r
∂z
--------
\ .
| |
e
˜
r
e
˜
z
1
r
---
∂v
θ
∂r
--------
\ .
| |
re
˜
θ
( )e
˜
r
1
r
---
∂v
θ
∂θ
--------
v
r
r
---- +
\ .
| |
re
˜
θ
( )
e
˜
θ
r
-----
\ .
| |
1
r
---
∂v
θ
∂z
--------
\ .
| |
re
˜
θ
( )e
˜
z
∂v
z
∂r
--------
\ .
| |
e
˜
z
e
˜
r
∂v
z
∂θ
--------
\ .
| |
e
˜
z
e
˜
θ
r
-----
\ .
| |
∂v
z
∂z
--------
\ .
| |
e
˜
z
e
˜
z
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 81
Upon simplification, the matrix of with respect to the usual orthonormalized basis
is
w.r.t unit basis. (5.84)
This is the formula usually cited in handbooks.
dv
˜
dx
˜

e
˜
r
e
˜
θ
e
˜
z
, , { }
dv
˜
dx
˜
------
∂v
r
∂r
--------
1
r
---
∂v
r
∂θ
-------- v
θ

\ .
| |
∂v
r
∂z
--------
∂v
θ
∂r
--------
1
r
---
∂v
θ
∂θ
--------
v
r
r
---- +
\ .
| |
∂v
θ
∂z
--------
∂v
z
∂r
--------
1
r
---
∂v
z
∂θ
--------
∂v
z
∂z
--------
= e
˜
r
e
˜
θ
e
˜
z
, , { }
Study Question 5.4 Again we will duplicate the methods of the preceding example for
spherical coordinates. However, rather than duplicating the entire hideous analyses, we
here compute only the component of . In particular, we seek
. (5.85)
(a) Noting from Eqs. (5.31) and (5.32) how and are related to , show that
. (5.86)
(b) Explain why Eq. (5.80) therefore implies that
. (5.87)
(c) With respect to the orthonormal basis recall from Eq. (5.33) that and
. Use the Christoffel symbols of Eq. (5.76) in the formula (5.79) to show that
. (5.88)
(d) The final step is to substitute this result into Eq. (5.87) to deduce that
. (5.89)
Cite a textbook (or other reference) that tabulates formulas for gradients in spherical coordi-
nates. Does your Eq. (5.89) agree with the formula for the component of pro-
vided in the textbook?
rφ dv
˜
dx
˜

e
˜
r
dv
˜
dx
˜
------ e
˜
φ
• •
e
˜
r
e
˜
φ
g
˜
1
g
˜
2
g
˜
3
, , { }
e
˜
r
dv
˜
dx
˜
------ e
˜
φ
• •
1
r θ sin
-------------- g
˜
1
dv
˜
dx
˜
------ g
˜
3
• • =
e
˜
r
dv
˜
dx
˜
------ e
˜
φ
• •
v
3 ;
1
r θ sin
-------------- =
e
˜
r
e
˜
θ
e
˜
φ
, , { } v
1
=v
r
v
3
=v
φ
r θ sin ⁄
v
1
/3
∂v
r
∂φ
-------- v
φ
sinθ – ( ) + =
e
˜
r
dv
˜
dx
˜
------ e
˜
φ
• •
1
r θ sin
--------------
∂v
r
∂φ
--------
v
φ
r
----- – =
rφ dv
˜
dx
˜

rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 82
Covariant differentiation of covariant components Recalling, Eq. (5.63), we now consider
a similarly-defined basis-dependent vector:
, (5.90)
and analogously to Eq. (5.65) we define
. (5.91)
Given a vector for which the covariant components are known, an analysis similar to
that of the previous sections eventually reveals that the gradient of the vector can be written:
, (5.92)
where
. (5.93)
The question naturally arises: what connection, if any, does have with ? To answer this
question, differentiate both sides of Eq. (2.12) with respect to :
, (5.94)
or
and therefore . (5.95)
In other words, is just the negative of . Consequently, equation (5.92) becomes
where . (5.96)
We now have an equation for increments of the contravariant base vectors
(5.97)
which should be compared with Eq. (5.69)
Product rules for covariant differentiation Most of the usual rules of differential calculus
apply. For example,
and , etc. (5.98)
P
˜
i
k
∂g
˜
k
∂η
i
--------- ≡
P
ij
k
P
˜
i
k
g
˜
j
• ≡
v
˜
v
k
dv
˜
dx
˜
------ v
i/j
g
˜
i
g
˜
j
=
v
i/j
∂v
i
∂η
j
-------- v
k
P
ij
k
+ ≡
P
ij
k
Γ
ij
k
η
k
∂g
˜
i
∂η
k
--------- g
˜
j
• g
˜
i
∂g
˜
j
∂η
k
--------- • + 0 =
P
˜
k
i
g
˜
j
• g
˜
i
Γ
˜
jk
• + 0 = P
kj
i
Γ
kj
i
+ 0 =
P
ij
k
Γ
ij
k
dv
˜
dx
˜
------ v
i/j
g
˜
i
g
˜
j
= v
i/j
∂v
i
∂η
j
-------- v
k
Γ
ij
k
– ≡
dg
˜
k
Γ
ij
k
g
˜
j
– ( )dη
i
=
v
i
w
j
( )
/k
v
i
w
j/k
v
i/k
w
j
+ = v
i
w
j
( )
/k
v
i
w
j/k
v
i
/k
w
j
+ =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 83
5.7 Backward and forward gradient operators
The gradient that we defined in the preceding section is really a backward operating
gradient. By this we mean that the component follows an index ordering that is analogous
to the index ordering on the dyad . This dyad has components , which is comparable
to the index ordering in the Cartesian ij component formula .
In Cartesian coordinates, the gradient is a second order tensor whose Cartesian ij
components are given by . But couldn’t we just have well have defined a second order
tensor whose Cartesian ij components would be . The only difference between these
two choices is that the index placement is swapped. Thus, the transpose of one choice gives
the other choice.
Following definitions and notation used by Malvern [11], we will denote the backward
operating gradient by and the forward operating gradient by . The “ ” operates on
arguments to its right, while the operates on arguments to its left. As mentioned earlier,
our previously defined vector gradient is actually a backward del definition:
means the same thing as . Thus, (5.99)
means the same thing as Thus, (5.100)
The component ordering for the forward gradient operator is defined in a manner that is
analogous to the dyad , whereas the component ordering for the backward
gradient is analogous to that on the dyad . This leads naturally to the ques-
tion of whether or not it is possible to define right and left gradient operators in a manner that
permits some “heuristic assistance.”
Recall that
(5.101)
Suppose that we wish to identify a left-acting operator such that
(5.102)
Let’s suppose we desire to define this operator such that it follows a product rule so that
(5.103)
This suggests that we should define
and (5.104)
dv
˜
dx
˜

ij
v
˜
d
˜
v
i
d
j
∂v
i
∂x
j

dv
˜
dx
˜

∂v
i
∂x
j

∂v
j
∂x
i

v
˜
∇ ∇v
˜


v
˜

dv
˜
dx
˜
------ v
˜
∇ v
/j
i
g
˜
i
g
˜
j
=
∇v
˜
dv
˜
dx
˜
------
\ .
| |
T
∇v
˜
v
/i
j
g
˜
i
g
˜
j
=
d
˜
v
˜
d
i
v
j
( )g
˜
i
g
˜
j
=
v
˜
d
˜
v
j
d
i
( )g
˜
i
g
˜
j
=
dv
˜
dx
˜
------
∂v
i
∂η
j
-------- g
˜
i
g
˜
j
v
k
Γ
kj
i
g
˜
i
g
˜
j
+ =

dv
˜
dx
˜
------ v
i
g
˜
i
( )∇ =
v
i
g
˜
i
( )∇ g
˜
i
v
i
( )∇ | | v
i
g
˜
i
( )∇ | | + =
v
i
( )∇
∂v
i
∂η
j
-------- g
˜
j
= g
˜
k
( )∇ Γ
kj
i
g
˜
i
g
˜
j
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 84
Similarly, for the forward operation gradient, we define
(5.105)
from which it follows
and (5.106)
These definitions are cute, but we caution against using them too cavalierly. A careful applica-
tion of the original definition of the gradient is probably safer.
5.8 Divergence of a vector
The divergence of a vector is denoted and it is defined by
(5.107)
The simplicity of the component expression for the trace depends on the expression chosen
for the gradient operation. For example, taking the trace of Eq. (5.96) gives a valid but some-
what ugly expression:
(5.108)
A much simpler expression for the divergence can be obtained by taking the trace of Eq. (5.80)
to give
(5.109)
Written out explicitly, this result is
(5.110)
It will later be shown [in Eq. (5.126)] that
(5.111)
Therefore, Eq. (5.110) gives the very useful formula [7]:
(5.112)
∇ v
i
g
˜
i
( ) ∇ v
i
( ) g
˜
i
v
i
∇g
˜
i
+ =
∇ v
i
( )
∂v
i
∂η
j
-------- g
˜
j
= ∇ g
˜
k
( ) Γ
kj
i
g
˜
j
g
˜
i
=
v
˜
∇ v
˜

∇ v
˜
• tr
dv
˜
dx
˜
------
\ .
| |
=
∇ v
˜
• v
i/j
g
˜
i
g
˜
j
• ( ) v
i/j
g
ij
( )
∂v
i
∂η
j
-------- v
k
Γ
ij
k
– g
ij
= = =
∇ v
˜
• v
/j
i
g
˜
i
g
˜
j
• v
/j
i
δ
i
j
v
/i
i
= = =
∇ v
˜

∂v
i
∂η
i
-------- v
k
Γ
ki
i
+ =
Γ
ki
i
1
J
---
∂J
∂η
k
--------- =
∇ v
˜

1
J
---
∂ Jv
k
( )
∂η
k
---------------- =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 85
5.9 Curl of a vector
The curl of a vector is denoted and it is defined to equal the axial vector
1
associ-
ated with . Thus, recalling Eq. (5.100),
(5.113)
In the last step, we have used the skew-symmetry of the permutation tensor. Substituting Eq.
(5.77) gives
(5.114)
This result can be written as
(5.115)
We can alternatively start with the covariant expression for the gradient given in Eq. (5.96).
Namely,
where . (5.116)
which gives
(5.117)
Recalling from Eq. (3.47) that , the expanded component form of this result is
(5.118)
where (recall)
(5.119)
1. The axial vector associated with any tensor is defined by , where is the permutation tensor.
v
˜
∇ v
˜
×
T
˜
˜
1
2
---
ξ
˜˜
˜
:T
˜
˜
– ξ
˜˜
˜
∇v
˜
∇ v
˜
×
1
2
--- ξ
˜
˜
˜
:
dv
˜
dx
˜
------
\ .
| |
T

1
2
--- ξ
˜
˜
˜
:
dv
˜
dx
˜
------
\ .
| |
= =
∇ v
˜
×
1
2
--- ξ
˜
˜
˜
:
∂v
i
∂η
j
-------- v
k
Γ
kj
i
+
\ .
| |
g
˜
i
g
˜
j
=
∇ v
˜
×
1
2
---
∂v
i
∂η
j
-------- v
k
Γ
kj
i
+
\ .
| |
g
˜
i
g
˜
j
× =
dv
˜
dx
˜
------ v
i/j
g
˜
i
g
˜
j
= v
i/j
∂v
i
∂η
j
-------- v
k
Γ
ij
k
– ≡
∇ v
˜
×
1
2
--- ξ
˜
˜
˜
: v
i/j
g
˜
i
g
˜
j
| |
1
2
--- ξ
kij
v
i/j
g
˜
k
= =
ξ
kij
1
J
--- ε
ijk
=
∇ v
˜
×
1
2J
------ v
2/3
v
3/2
– ( )g
˜
1
v
3/1
v
1/3
– ( )g
˜
2
v
1/2
v
2/1
– ( )g
˜
1
+ + | | =
J g
˜
1
g
˜
2
× ( ) g
˜
3
• =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 86
5.10 Gradient of a tensor
Using methods similar to the preceding sections, one can apply the direct notation defini-
tion of the gradient to eventually prove that
, where , (5.120)
or, if the tensor is known in its covariant components,
, where . (5.121)
For mixed tensor components, we have
, where . (5.122)
Note that there is a Christoffel term for each subscript on the tensor. The term is negative if the
summed index is a subscript on T and positive if the summed index is a superscript on T.
Ricci’s lemma Recall from Eq. (3.18) that the metric coefficients are components of the iden-
tity tensor. Knowing that the gradient of the identity tensor must be zero, it follows that
and . (5.123)
Corollary Recall that the Jacobian equals the volume of the parallelepiped formed by the
three base vectors . Since the base vectors vary with position, it follows that var-
ies with position. In other words, the Jacobian may be regarded as a function of the coordi-
nates. Taking the derivative of the Jacobian with respect to the coordinate and applying
the chain rule gives
, where we have used Eq. (2.30). (5.124)
Recalling from Eq. (5.123) that the metric components have vanishing covariant derivatives, Eq.
(5.121) tells us that
(5.125)
Thus, Eq. (5.124) becomes
=
dT
˜
˜
dx
˜

dT
˜
˜
dx
˜
------- T
ij
/k
g
˜
i
g
˜
j
g
˜
k
= T
ij
/k
∂T
ij
∂η
k
---------- T
mj
Γ
mk
i
T
im
Γ
mk
j
+ + ≡
dT
˜
˜
dx
˜
------- T
ij
/k
g
˜
i
g
˜
j
g
˜
k
= T
ij
/k
∂T
ij
∂η
k
--------- T
mj
Γ
ik
m
– T
im
Γ
jk
m
– ≡
dT
˜
˜
dx
˜
------- T
•j /k
i
g
˜
i
g
˜
j
g
˜
k
= T
•j /k
i
∂T
•j
i
∂η
k
---------- T
•j
m
Γ
mk
i
T
•m
i
Γ
jk
m
– + ≡
g
ij /k
0 = g
ij
/k
0 =
J
g
˜
1
g
˜
2
g
˜
3
, , | | J
J
η
k
∂J
∂η
k
---------
∂J
∂g
ij
---------
∂g
ij
∂η
k
---------
1
2
--- Jg
ij
∂g
ij
∂η
k
--------- = =
∂g
ij
∂η
k
--------- g
mj
Γ
ik
m
g
mi
Γ
jk
m
+ =
∂J
∂η
k
---------
1
2
--- Jg
ij
g
mj
Γ
ik
m
g
mi
Γ
jk
m
+ | | =
1
2
--- J δ
m
i
Γ
ik
m
δ
m
j
Γ
jk
m
+ | |
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 87
=
= (5.126)
or
(5.127)
5.11 Christoffel symbols of the first kind
Section 5.6 showed how to compute the Christoffel symbols of the second kind by directly
applying the formula:
. (5.128)
This method was tractable because the space was Euclidean and we could therefore utilize the
existence of an underlying rectangular Cartesian basis to determine the change in the base
vectors with respect to the coordinates. However, for non-Euclidean spaces, you have only
the metric coefficients , and the simplest way to compute the Christoffel symbols of the sec-
ond kind then begins by first computing the Christoffel symbols of the first kind , which
are related to the Christoffel symbols of the second kind by
. (5.129)
These are frequently denoted in the literature as , presumably to emphasize that
they are not components of any third-order-tensor, in the sense discussed on page __. Substi-
tuting Eq. (5.128) into Eq. (5.129) gives
, (5.130)
where the final term results from lowering the index on . Thus, using Eq. (5.63),
. (5.131)
Recalling that , this result also reveals that is symmetric in its first two indices:
. (5.132)
Now note that
. (5.133)
1
2
--- J Γ
mk
m
Γ
mk
m
+ | |

mk
m
Γ
mk
m
1
J
---
∂J
∂η
k
--------- =
Γ
ij
k
g
˜
k
∂g
˜
i
∂η
j
-------- • =
g
ij
Γ
ijk
Γ
ijk
Γ
ij
m
g
mk
=
Γ
ijk
ij k , | |
Γ
ijk
g
˜
m
∂g
˜
i
∂η
j
-------- •
\ .
| |
g
mk
∂g
˜
i
∂η
j
--------
\ .
| |
g
˜
m
• g
mk
∂g
˜
i
∂η
j
-------- g
˜
k
• = = =
g
˜
m
Γ
ijk
Γ
˜
ij
g
˜
k
• =
Γ
˜
ij
Γ
˜
ji
= Γ
ijk
Γ
ijk
Γ
jik
=
∂ g
ik
( )
∂η
j
---------------
∂ g
˜
i
g
˜
k
• ( )
∂η
j
-------------------------
∂g
˜
i
∂η
j
-------- g
˜
k
• g
˜
i
∂g
˜
k
∂η
j
--------- • + Γ
˜
ij
g
˜
k
• g
˜
i
Γ
˜
kj
• + = = =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 88
Using Eq. (5.131) gives
1

. (5.134)
This relationship can be easily remembered by noting the structure of the indices. Note, for
example, that the middle subscript on both ‘s is the same as that on the coordinate .
Direct substitution of Eq. (5.134) into (5.135) validates the following expression that can be
used to directly obtain the Christoffel symbols of the first kind given only the metric coeffi-
cients:
. (5.135)
This formula (which can be verified by substituting Eq. (5.134) into (5.135) represents the
easiest way to obtain the Christoffel symbols of the first kind when only the metric coeffi-
cients are known. This formula has easily remembered index structure: for , the index
symbols on each of the coordinates are ordered . Once the are known, the Christ-
offel symbols of the second kind are obtained by solving Eq. (5.129) to give
. (5.136)
5.12 The fourth-order Riemann-Christoffel curvature tensor
Recall that this document has limited its scope to curvilinear systems embedded in a
Euclidean space of the same dimension. When the Euclidean space is three-dimensional, then
this document has focused on alternative coordinate systems that still define points in this
space and therefore require three coordinates. An example of a two-dimensional curvilinear
space embedded in Euclidean space is the surface of a sphere. Only two coordinates are
required to specify a point on a sphere and this two dimensional space is “Riemannian” (i.e.,
non-Euclidean) because it is not possible for us to construct a rectangular coordinate grid on a
sphere. In ordinary engineering mechanics applications, the mathematics of reduced dimen-
sion spaces within ordinary physical space is needed to study plate and beam theory.
We have focused on the three-dimensional Euclidean space that we think we live it. In this
modern (post-Einstein) era, this notion of a Euclidean physical world is now recognized to be
only an approximation (referred to as Newtonian space). Einstein and colleagues threw a
wrench in our thinking by introducing the notion that space and time are curved.
We have now mentioned two situations (prosaic shell/beam theory and exciting relativity
theory) where understanding some basics of non-Euclidean spaces is needed. We have men-
1. An alternative way to obtain this result is to simply apply Eq. (5.129) to Eq. (5.125).
∂g
ik
∂η
j
---------- Γ
ijk
Γ
kji
+ =
Γ η
j
Γ
ijk
1
2
---
∂g
jk
∂η
i
----------
∂g
ki
∂η
j
----------
∂g
ij
∂η
k
--------- – + =
Γ
ijk
η i j k , , Γ
ijk
Γ
ij
n
Γ
ijk
g
kn
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 89
tioned that a Riemannian space is one that does not permit construction of a rectangular coor-
dinate grid. How is this statement cast in mathematical terms? In other words, what process is
needed to decide if a space is Euclidean or Riemannian in the first place. The answer is tied to
a special fourth-order tensor called the Riemann-Christoffel tensor soon to be defined. If this
tensor turns out to be zero, then your space is Euclidean. Otherwise, it is Riemannian.
The Riemann-Christoffel tensor is defined
(5.137)
or
(5.138)
This tensor is skew-symmetric in the first two indices and in the last two indices. It is major
symmetric:
(5.139)
In a two-dimensional space, only the component is independent, the others being either
zero or related to this component by .
Note from Eq. (5.137) that depends on the Christoffel symbols. For a Cartesian sys-
tem, all Christoffel symbols are zero. Hence, in a Cartesian system, . Thus, if a space
is capable of supporting a Cartesian system (i.e., if the space is Euclidean), then the Riemann-
Christoffel tensor must be zero. This would be true even if you aren’t actually using a Carte-
sian system. For example, ordinary 3D Newtonian space is Euclidean and therefore its Rie-
mann-Christoffel tensor must be zero even if you happen to employ a different set of three
curvilinear coordinates such as spherical coordinates . This would follow because the
transformation relations are linear. Hence, if there exists a Cartesian system (in which the Rie-
mann-Christoffel tensor is zero), a linear transformation to a different, possibly curvilinear,
system would result again in the zero tensor. For the Riemann-Christoffel tensor to not be
zero, you would have to be working in a reduced dimension space [such as the surface of a
sphere where the coordinates are ]. The Riemann-Christoffel tensor is, therefore, a mea-
sure of curvature of a Riemannian space. Because the Riemann-Christoffel tensor transforms
like a tensor, it is not a basis-intrinsic quantity despite the fact that it has been here defined in
terms of basis-intrinsic quantities. A similar situation was encountered with the metric coeffi-
cients , which were defined . The metrics are components of the identity
tensor. Therefore the product of these components times base vectors is basis-inde-
pendent (it equals the identity tensor). Similarly, the product of the Riemann-Christoffel com-
ponents times basis vectors is basis-independent:
(5.140)
R
ijkl
∂Γ
jli
∂η
k
-----------
∂Γ
jki
∂η
l
------------ – Γ
ilp
Γ
jk
p
Γ
ikp
Γ
jl
p
– + =
R
ijkl
1
2
---

2
g
il
∂η
j
∂η
k
------------------

2
g
jl
∂η
i
∂η
k
------------------ –

2
g
ik
∂η
j
∂η
l
----------------- –

2
g
jk
∂η
i
∂η
l
----------------- +
\ .
| |
g
mn
Γ
jkn
Γ
ilm
Γ
jlm
Γ
ikn
– ( ) + =
R
ijkl
R
jikl
– R
ijlk
– R
klij
= = =
R
1212
R
1212
R
2112
– R
1221
– R
2121
= = =
R
ijkl
R
ijkl
0 =
r θ φ , , ( )
θ φ , ( )
g
ij
g
ij
g
˜
i
g
˜
j
• = g
ij
g
ij
g
˜
i
g
˜
j
R
˜
˜
˜
˜
R
ijkl
g
˜
i
g
˜
j
g
˜
k
g
˜
l
=
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 90
or
(5.141)
or
(5.142)
or
(5.143)
where
(5.144)
Noting that the indices on this tensor may be permuted in any manner, note that the first and
third terms in Eq. (5.143) may be canceled, giving
(5.145)
or, rearranging slightly,
(5.146)
or
(5.147)
where
(5.148)
6. Embedded bases and objective rates
We introduced the mapping tensor in Eq. (2.1) to serve as a mere “helper” tensor.
Namely, if the basis exists in a Euclidean space, then we defined
(6.1)
Here, the basis, is the same as the “laboratory” basis that we had originally used
R
˜
˜
˜
˜
∂Γ
jli
∂η
k
-----------
∂Γ
jki
∂η
l
------------ – Γ
ilp
Γ
jk
p
Γ
ikp
Γ
jl
p
– + g
˜
i
g
˜
j
g
˜
k
g
˜
l
=
R
˜˜˜˜
∂ Γ
˜
j l
g
˜
i
• ( )
∂η
k
-------------------------
∂ Γ
˜
jk
g
˜
i
• ( )
∂η
l
------------------------- – Γ
˜
i l
g
˜
p
• ( ) Γ
˜
j k
g
˜
p
• ( ) Γ
˜
i k
g
˜
p
• ( ) Γ
˜
jl
g
˜
p
• ( ) – + g
˜
i
g
˜
j
g
˜
k
g
˜
=
R
˜˜˜˜
Γ
˜
j lk
g
˜
i
• Γ
˜
jl
Γ
˜
i k
• Γ
˜
jkl
g
˜
i
• – Γ
˜
jk
Γ
˜
il
• – Γ
˜
il
g
˜
p
• ( ) Γ
˜
jk
g
˜
p
• ( ) Γ
˜
ik
g
˜
p
• ( ) Γ
˜
j l
g
˜
p
• ( ) – + + | |g
˜
i
g
˜
j
g
˜
k
g
˜
l
=
Γ
˜
ijk
∂Γ
˜
ij
∂η
k
----------

2
g
˜
i
∂η
j
∂η
k
------------------

3
x
˜
∂η
i
∂η
j
∂η
k
--------------------------- = = =
R
˜˜˜˜
Γ
˜
j l
Γ
˜
ik
• Γ
˜
j k
Γ
˜
il
• – Γ
˜
il
Γ
˜
jk
• Γ
˜
ik
Γ
˜
j l
• – + | |g
˜
i
g
˜
j
g
˜
k
g
˜
l
=
R
˜˜˜˜
Γ
˜
ik
Γ
˜
jl
• Γ
˜
il
Γ
˜
j k
• – Γ
˜
j k
Γ
˜
il
• Γ
˜
j l
Γ
˜
ik
• – + | |g
˜
i
g
˜
j
g
˜
k
g
˜
l
– =
R
˜˜˜˜
g
ikjl
g
ilj k
– g
j ki l
g
j lik
– + | |g
˜
i
g
˜
j
g
˜
k
g
˜
l
– =
g
ijkl
Γ
˜
ij
Γ
˜
kl
• ≡

2
x
˜
∂η
i
∂η
j
-----------------

2
x
˜
∂η
k
∂η
l
------------------ • =
F
˜
˜
g
˜
1
g
˜
2
g
˜
3
, , { }
g
˜
i
F
˜
˜
E
˜
i
• =
E
˜
1
E
˜
2
E
˜
3
, , { }
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 91
in Section 2. In continuum mechanics (especially the older literature), a special “convected”
coordinate system is often adopted in which the coordinates of a point currently located a
a position at time t are identified to equal the initial Cartesian coordinates of the same
material particle at time . For this special case, the coordinate grid is identical to the defor-
mation of the initial coordinate grid when it is convected along with the material particles. For
this special case, the mapping tensor in Eq. (6.1) is identical to the deformation gradient ten-
sor from traditional continuum mechanics. In this physical context, the term “covariant” is
especially apropos because the covariant base vector varies coincidently with the underly-
ing material particles. That is, the unit cube is defined by three unit vectors will,
upon deformation, deform to a parallelepiped whose sides move with the material and the
sides of this parallelepiped are given by the three convected base vectors .
By contrast, consider the contravariant base vectors, which (recall) are related to the map-
ping tensor by
(6.2)
These base vectors do not move coincidently with the material particles. Instead, the contra-
variant base vectors move somewhat contrary to the material motion. In particular, if
are the outward unit normals to the unit cube, then material deformation will
generally move the cube to a parallelepiped whose faces now have outward unit normals par-
allel to . In general, the motion of the normal to a plane moves somewhat con-
trary to the motion of material fibers that were originally parallel to the plane’s normal.
Of course, it is not really necessary for the initial coordinate system to be Cartesian itself.
When the initial coordinate system is permitted to be curvilinear, then we will denote its asso-
ciated set of base vectors by . As before, these covariant base vectors deform to
new orientations given by
(6.3)
The associated dual basis is given by
(6.4)
Here is here retaining its meaning as the physical deformation gradient tensor, which
necessitates introducing new symbols to denote the mapping tensors for the individual curvi-
linear bases. Namely, we will presume the existence of tensors and such that
and (6.5)
and (6.6)
η
k
x
˜
X
k
0
F
˜
˜
g
˜
i
E
˜
1
E
˜
2
E
˜
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { }
g
˜
k
F
˜
˜
1 –
E
˜
k
• =
E
˜
1
E
˜
2
E
˜
3
, , { }
g
˜
1
g
˜
2
g
˜
3
, , { }
G
˜
1
G
˜
2
G
˜
3
, , { }
g
˜
i
F
˜
˜
G
˜
i
• =
g
˜
k
F
˜
˜
T –
G
˜
k
• =
F
˜
˜
h
˜
˜
H
˜
˜
g
˜
i
h
˜
˜
E
˜
i
• = g
˜
k
h
˜
˜
T –
E
˜
k
• =
G
˜
i
H
˜
˜
E
˜
i
• = G
˜
k
H
˜
˜
T –
E
˜
k
• =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 92
from which it follows that
and , where (6.7)
and , where (6.8)
Furthermore, substituting Eqs. (6.5) and (6.6) into (6.3) implies that
(6.9)
In continuum mechanics, the Seth-Hill generalized strain [13] measure is defined in direct
notation as
(6.10)
Here, the Seth-Hill parameter is selected according to whichever strain measure the analyst
prefers. Namely,
--> True/Swainger strain
--> engineering/Cauchy strain
--> Green/Lagrange strain
--> Almansi
--> logarithmic/Hencky strain
Of course, the case of must be applied in the limit.
The Lagrangian strain measure corresponds to choosing to give
(6.11)
The “Euler” strain measure corresponds to choosing to give
(6.12)
Strain measures in older literature look drastically different from this because they are
typically expressed in terms of initial and deformed metric tensors. What is the connection?
First note from Eq. (6.3) that
(6.13)
Furthermore, by definition,
(6.14)
Thus, dotting Eq. (6.11) from the left by and from the right by gives
(6.15)
This result shows that the difference between deformed and initial covariant metrics (which
g
ij
E
˜
i
y
˜
˜
E
˜
j
• • = g
ij
E
˜
i
y
˜
˜
1 –
E
˜
j
• • = y
˜
˜
h
˜
˜
T
h
˜
˜
• ≡
G
ij
E
˜
i
Y
˜
˜
E
˜
j
• • = G
ij
E
˜
i
Y
˜
˜
1 –
E
˜
j
• • = Y
˜
˜
H
˜
˜
T
H
˜
˜
• ≡
F
˜
˜
h
˜
˜
H
˜
˜
1 –
• =
e
˜
˜
1
k
--- F
˜
˜
T
F
˜
˜
• ( )
k 2 /
I
˜
˜
– | | =
k
k 1 – =
k=1
k=2
k= 2 –
k=0
k=0
k=2
e
˜
˜
Lagr
1
2
--- F
˜
˜
T
F
˜
˜
• ( ) I
˜
˜
– | | =
k= 2 –
e
˜
˜
Euler
1
2
--- I
˜
˜
F
˜
˜
T
F
˜
˜
• ( )
1 –
– | | =
g
ij
G
˜
i
F
˜
˜
T
F
˜
˜
• G
˜
j
• • =
G
ij
G
˜
i
G
˜
j
• ≡ G
˜
i
I
˜
˜
G
˜
j
• • =
G
˜
i
G
˜
j
G
˜
i
e
˜
˜
Lagr
G
˜
j
• •
1
2
--- g
ij
G
ij
– | | =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 93
appears frequently in older literature) is identically equal to the covariant components of the
Lagrangian strain tensor with respect to the initial basis.
Similarly, you can show that
(6.16)
In general, if you wish to convert a component equation from the older literature into modern
invariant direct notation form, you can use Eqs. (6.5), (6.6) and (6.9) as long as you can deduce
which basis applies to the component formulas. Converting an index equation to direct sym-
bolic form is harder than the reverse (which is a key argument used in favor of direct sym-
bolic formulations of governing equations). Consider, for example, the definition of the
second Piola Kirchhoff tensor:
, where . (6.17)
Here, and is the Cauchy stress. The tensor is called the “Kirchhoff” stress,
and it is identically equal to the Cauchy stress for incompressible materials. Dotting both
sides of the above equation by the initial contravariant base vectors gives
(6.18)
or, using Eq. (6.4),
(6.19)
This shows that the contravariant components of the second Piola Kirchhoff tensor with
respect to the initial basis are equal to the contravariant components of the Kirchhoff tensor
with respect to the current basis. Results like this are worth noting and recording in your per-
sonal file so that you can quickly convert older curvilinear constitutive model equations into
modern direct notation form.
G
˜
i
e
˜
˜
Euler
G
˜
j
• •
1
2
--- G
ij
g
ij
– | | =
s
˜
˜
F
˜
˜
1 –
τ
˜
˜
F
˜
˜
T –
• • ≡ τ
˜
˜

˜
˜
=
J det F
˜
˜
( ) = σ
˜
˜
τ
˜
˜
G
˜
i
s
˜
˜
G
˜
j
• • G
˜
i
F
˜
˜
1 –
τ
˜
˜
F
˜
˜
T –
G
˜
j
• • • • =
G
˜
i
s
˜
˜
G
˜
j
• • g
˜
i
τ
˜
˜
g
˜
j
• • =
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 94
7. Concluding remarks.
R.B. Bird et al. [1] astutely remark “...some authors feel it is stylish to use general tensor
notation even when it is not needed, and hence it is necessary to be familiar with the use of
base vectors and covariant and contravariant components in order to study the literature...”
Modern researchers realize that operations such as the dot product, cross product, and
gradient are proper tensor operations. Such operations commute with basis and/or coordi-
nate transformations as long as the computational procedure for evaluating them is defined
in a manner appropriate for the underlying coordinates or basis. In other words, one can
apply the applicable formula for such an operation in any convenient system and transform the
result to a second system, and the result will be the same as if the operation had been applied
directly in the second system at the outset.
Once an operation is known to be a proper tensor operation, it is endowed with a struc-
tured (direct) notation symbolic form. Derivations of new identities usually begin by casting a
direct notation formula in a conveniently simple system such as a principal basis or maybe
just ordinary rectangular Cartesian coordinates. From there, proper tensor operations are per-
formed within that system. In the final step, the result is re-cast in structured (direct) notation.
A structured result can then be justifiably expressed into any other system because all the oper-
ations used in the derivation had been proper operations. Consequently, one should always
perform derivations using only proper tensor operations using whatever coordinate system
makes the analysis accessible to the largest audience of readers. The last step is to cast the
final result back in direct notation, thereby allowing it to be recast in any other desired basis
or coordinate system.
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 95
8. REFERENCES
All of the following references are books, not journal articles. This choice has been made to
emphasize that the subject of curvilinear analysis has been around for centuries -- there exists
no new literature for us to quote that has not already been quoted in one or more of the fol-
lowing texts. So why, you might ask, is this new manuscript called for? The answer is that
the present manuscript is designed specifically for new students and researchers. It contains
many heuristically derived results. Any self-respecting student should follow up this manu-
script’s elementary introduction with careful reading of the following references.
Greenberg’s applied math book should be part of any engineer’s personal library. Readers
should make an effort to read all footnotes and student exercises in Greenberg’s book —
they’re both enlightening and entertaining! Simmond’s book is a good choice for continued
pursuit of curvilinear coordinates. Tables of gradient formulas for particular coordinate sys-
tems can be found in the appendix of Bird, Armstrong, and Hassager’s book.
1
Bird, R.B., Armstrong, R.C., Hassager, O., Dynamics of Polymeric Liquids. Wiley. (1987).
2
Schreyer, H.L., Introduction to Continuum Mechanics, University of New Mexico internal report.
(1976)
3
Arfken, G.B and Weber, H.J., Mathematical Methods for Physicists. Academic Press (1995).
4
McConnel, A.J., Applications of Tensor Analysis, Dover, NY, (1957)
5
Lovelock, David and Rund, Hanno. Tensors, differential forms, and variational principles., Wiley,
NY (1975).
6
Ramkrishna, D. and Amundson, NR. Linear Operator Methods in Chemical Engineering, Pren-
tice-Hall, New Jersey, (1985).
7
Simmonds, James G. A Brief on Tensor Analysis, Springer-Verlag, NY (1994).
8
Buck, R.C., Advanced Calculus, McGraw-Hill, NY (1978)
9
Rudin, W., Real and Complex Analysis, 3rd Ed., McGraw-Hill, NY (1987).
10
Greenberg, M.D., Foundations of applied mathematics, Prentice Hall, New Jersey (1978).
11
Malvern, L.E., Introduction to the mechanics of a continuous medium. Prentice-Hall, New Jersey
(1969).
12
Papastravridis, John G., Tensor Calculus and Analytical Dynamics, CRC Press, (1998).
13
Narasimhan, Mysore N., Principles of continuum mechanics., Wiley-Interscience, NY (1993.
rmbrann@me.unm.edu http://me.unm.edu/~rmbrann/gobag.html DRAFT June 17, 2004 96
14
Abraham, R., Marsden, J.E., and Ratiu, T. Manifolds, Tensor Analysis, and Applications, 2nd
Ed. (Vol. 75 in series entitled: Applied Mathematical Sciences). Springer, N.Y. (1988).

Written by Rebecca Moss Brannon of Albuquerque NM, USA, in connection with adjunct teaching at the University of New Mexico. This document is the intellectual property of Rebecca Brannon.

Copyright is reserved. June 2004

Table of contents
PREFACE ................................................................................................................. Introduction .............................................................................................................
Vector and Tensor Notation ........................................................................................................... Homogeneous coordinates ............................................................................................................. Curvilinear coordinates .................................................................................................................. Difference between Affine (non-metric) and Metric spaces ..........................................................

iv 1
5 10 10 11

Dual bases for irregular bases ..............................................................................
Modified summation convention ...................................................................................................
Important notation glitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

11
15
17

The metric coefficients and the dual contravariant basis ...............................................................
Non-trivial lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A faster way to get the metric coefficients: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Super-fast way to get the dual (contravariant) basis and metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

18
20 22 24

Transforming components by raising and lower indices. ..............................................................
Finding contravariant vector components — classical method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Finding contravariant vector components — accelerated method . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

26
27 27

Tensor algebra for general bases .........................................................................
The vector inner (“dot”) product for general bases. .......................................................................
Index contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

30
30
31

Other dot product operations .......................................................................................................... The transpose operation ................................................................................................................. Symmetric Tensors ......................................................................................................................... The identity tensor for a general basis. .......................................................................................... Eigenproblems and similarity transformations .............................................................................. The alternating tensor ..................................................................................................................... Vector valued operations ................................................................................................................
CROSS PRODUCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Axial vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

32 33 34 34 35 38 40
40 40

Scalar valued operations ................................................................................................................
TENSOR INNER (double dot) PRODUCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . TENSOR MAGNITUDE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . TRACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . TRIPLE SCALAR-VALUED VECTOR PRODUCT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . DETERMINANT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

40
40 41 41 42 42

Tensor-valued operations ...............................................................................................................
TRANSPOSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . INVERSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . COFACTOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . DYAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

43
43 44 44 45

Basis and Coordinate transformations .................................................................
Coordinate transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

46
53

What is a vector? What is a tensor? ...............................................................................................
Definition #0 (used for undergraduates):. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Definition #1 (classical, but our least favorite): . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Definition #2 (our preference for ordinary engineering applications): . . . . . . . . . . . . . . . . . . . . . . . . . . Definition #3 (for mathematicians or for advanced engineering) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

55
55 55 56 56

Coordinates are not the same thing as components .......................................................................

59

........... . . ... ... . .. . ...... . ..... . .. ........... .... ... ..... ....... ........ Product rules for covariant differentiation. . ... . ... . . . ....... . ... ..... ...... ....... Concluding remarks..... .. ........ ........... ....... .. .. ..... Example: Gradient of a vector in cylindrical coordinates . . . . ... . .. .... .... . . . . . . . ........ . .. ... . .. ... . .... .. . . . .. . ... . ........ . . ..... . . ......... .... ....... ... . .... .. . ... ...... . .... . ... ..... 60 Curvilinear calculus . .. .. .. ... ... .......... .. . .. . .... .. ..... .. ....... ..... .. .. .. . ... ...... .. . .. . ... . .. ... .. .. ....... .... ........ . .. .. ... . .. ....... . . ... . . . ...... .... ..... ... ... .... . ... . .. ... ......... 71 71 Gradient of a vector -.... ..... . ... .. . ... ... Gradient of a vector in curvilinear coordinates ..... . ........ . ....... . . . . .............. . . . . .. .. .. .... . ... . 87 88 Embedded bases and objective rates .. .... ... ..... Manifold torsion ....... . .. .... .. REFERENCES ............. ... ...... . .. ..... ... . . .... . ... . .... ... ........ ..... ... ........ . .... ..... . ... Example: cylindrical coordinates .. . ... .. ............ . ...... .... .. .. ... .... Ricci’s lemma .. .... . .. .... .... ..... ........ .... .. . .. ..... . .. ...... . . .. ....... Corollary .... .. . .. .. ... . . .. ... EXAMPLE ....... ........ .. ..... .... .. . .... The “associated” curvilinear covariant basis . ... ... ........ .......... . . ..... . .... .... .. .......... ... . .. ... . ..........“simplified” example ..... .... .. ... ... . . Curvilinear coordinates ... .. Divergence of a vector ..... . .... . .... . .. ........ ..... ... .. .......... ... ...................... .... . . ... ...... Covariant differentiation of covariant components ..... . . . ......... .... . . .. ............. . ... . ... .... . ... . .. .. ... .... .......... ... . .... .. ... ... . ....... ... .. .. Christoffel Symbols .... The fourth-order Riemann-Christoffel curvature tensor .... .......... ... . .. ..... . ....... .. . ...... . ...... ... ....... ... ... .... .. .... . .... . ... ....... .... ... ... ........ . . .. . .. ......... . Important comment about notation:... ... .. ... . .. .... . . .. ........ . .. ... .. .. EXAMPLE: Christoffel symbols for cylindrical coordinates ....... ... .... ....... . ... .. ......... .... ... ... .. .... .... . ..... ......... ..... ..... . .... ......... .. . ... .. ...... . .... 72 74 75 76 77 77 78 79 80 82 82 Backward and forward gradient operators ... .. ..... . .. . ... .... .. ...... .. .. ... . . .... ..... ....... ... . .. .Do there exist a “complementary” or “dual” coordinates? . .... . .... ... .. . ... . ............... .. . . . ..... . . . .. . . . . .... . .. ..... .. ..... .. ..... ... . .......... .. . . . .. . . .. . . .......... ..... .... . . . .. ... ... ............ . ...... .... ........ . .. .. .. . ... . ......... . . ... ... ....... .... .. ... ... .. . ............. .. ... ... . . .... .... . . .. .. .. ....... .. .. . . . ...... ........ ....... ... ..... .... .. .. .... .. . ..... ....... ... . .. .. .... ...... ..... ..... . . .......... .... ... ....... . .... ...... ........ ... . . ... .... .. Increments in the base vectors.. ..... . ... ... . ..... ... . ... .. 62 62 65 65 67 The gradient of a scalar ... .. .. .. . .. . . .. ....... .. ..... Curl of a vector ........ ......... . .......... ..... ... ..... . ... . . .. ... .. .. .. Covariant differentiation of contravariant components...... ... . . ... ... ... ......... ..... .... . .... ...... .. . . . . .. . . .. 83 84 85 86 86 86 Christoffel symbols of the first kind . .. ... ..... . .. . ..... ..... .. .. ....... . ........ ... .. ...... ..... .......... . .... . Gradient of a tensor . ..... .. . .. ........ .... . .. .. .. .... ........ . . ... . .... .. .. .. ... .. .. . . .. .. ..... ..... .. ... . 90 94 95 ........ .. ........ ... . ...... . . . ............ . .. .............. ...... .... A introductory example . . . ... . . ........ ........ .

I must acknowledge the professional colleagues who supported the continued refinement of the work while I have been a staff member and manager at Sandia National Laboratories in New Mexico. Rebecca Brannon rmbrann@me. I posted the early versions of the manuscript on my UNM web page to solicit interactions from anyone on the internet who cared to comment.edu June 2004 .unm. Although the vast majority of this work was performed in the context of my University appointment. Perhaps the most important acknowledgement I could give for this work goes to Prof. Since then. Buck’s daughter. After the class. Lynn Betthenum (sp?) has also contributed to this work by encouraging her own grad students to review it and send me suggestions and corrections. Howard “Buck” Schreyer. who introduced me to curvilinear coordinates when I was a student in his Continuum Mechanics class back in 1987.PREFACE This document started out as a small set of notes assigned as supplemental reading and exercises for the graduate students taking my continuum mechanics course at the University of New Mexico (Fall of 1999). I have regularly fielded questions and added enhancements in response to the encouragement (and friendly hounding) that I received from numerous grad students and professors from all over the world.

Readers who are already well-versed in functional analysis will probably find more rigorous manuscripts (such as [14]) more suitable. or direct notation) in which the highestlevel fundamental meaning of various operations are called out by using a notation that does not explicitly suggest the procedure for actually performing the operation. and a basic knowledge of curvilinear coordinates makes life a lot easier. symbolic.edu/~rmbrann/gobag. A goal of this paper is to explore the implications of removing the constraints of RCC systems. the basis is always tangent to the coordinate grid. but can also serve as an information resource for practicing scientists. Present draft: 6/17/04. and the theory is still needed today for non-Euclidean surface and quantum mechanics problems. p90]. then you will probably find the approach taken in this report to be unique and (comparatively) accessible. Here.html DRAFT June 17. Being an introduction. rectangular.unm. Our principal goal is to introduce engineering specialists to the mathematician’s language of general curvilinear vector and tensor analysis. 1.. “Rectangular” means that the base vectors are mutually perpendicular. First draft: Fall 1998. and normalized. Nonetheless. we have made every effort to keep the analysis well connected to concepts that should be 1 familiar to anyone who has completed a first course in regular-Cartesian-coordinate (RCC) vector and tensor analysis. Another reason to learn curvilinear coordinates — even if you never explicitly apply the knowledge to any practical problems — is that you will develop a far deeper understanding of Cartesian tensor analysis.rmbrann@me. 2004 1 Curvilinear Analysis in a Euclidean Space Rebecca M.unm. Learning the basics of curvilinear analysis is an essential first step to reading much of the older materials modeling literature. “Cartesian” means that all three coordinates have the same physical units [12. “regular” means that the basis is right. The last “C” in the RCC abbreviation stands for “coordinate” and its presence implies that the basis is itself defined in a manner that is coupled to the coordinates. “Normalized” means that the base vectors are dimensionless and of unit length. “Right” means the basis forms a right-handed system (i. For example. University of New Mexico. Specifically. that’s true. What happens when the basis is not rectangular? What happens when coordinates of different dimensions are used? What happens when the basis is selected independently from the coordinates? .Brannon. Many engineering students presume that they can get along in their careers just fine without ever learning any of this stuff. Introduction This manuscript is a student’s introduction on the mathematics of curvilinear coordinates. If you are completely new to the subject of general curvilinear coordinates or if you seek guidance on the basic machinery associated with non-orthonormal base vectors. crossing the first base vector into the second results in a third vector that has a positive dot product with the third base vectors).e. We added the proviso “older” to the materials modeling literature reference because more modern analyses are typically presented using structured notation (also known as Gibbs.edu http://me. a • b ˜ ˜ would be the structure notation for the vector dot product whereas a 1 b1 + a2 b 2 + a 3 b 3 would be the procedural notation that clearly shows how to compute the dot product but has the disad1. there will undoubtedly crop up times when a system operates in a skewed or curved coordinate system. Quite often.

instead the method for computing it changes depending on the system you adopt.html DRAFT June 17. Chapter 2 introduces the most common coordinate systems and iterates the distinction between irregular bases and curvilinear coordinates. For plasticity modeling. Algebra. and/ or non-right-handed is the essential focus of Chapter 4. . 2004 2 vantage of being applicable only for RCC systems. Anyone who chooses to perform such operations using Cartesian components will obtain the same result as anyone else who opts to use general curvilinear components — provided that both researchers understand the connections between the two approaches! Every now and then. the material often contains certain “preferred directions” such as the direction of fibers in a composite matrix. that chapter introduces the several fundamental quantities (such as metrics) which appear with irresistible frequency throughout the literature of generalized tensor analysis. In such analyses.edu http://me. the geometry of a problem clearly calls for the use of nonorthonormal or spatially varying base vectors. Such a vertex is defined by the convergence of two or more surfaces having different and generally non-orthogonal orientation normals. these extra terms account for the fact that the coordinate base vectors vary in space. This manuscript is broken into three key parts: Syntax. By contrast. and determination of whether or not a trial elastic stress rate is progressing into the “cone of limiting normals” becomes quite straightforward using the formal mathematics of nonorthonormal bases. Chapter 5 focuses on how extra terms must appear in gradient expressions (in addition to the familiar terms resulting from spatial variation of scalar and vector components). the machinery of non-orthonormal base vectors can be useful to understand six-dimensional stress space. the stress tensor is regarded as a function of the strain tensor and other material state variables. and it is especially useful when analyzing the response of a material when the stress resides at a so-called yield surface “vertex”. and it provides a gentle introduction to the fact that base vectors can vary with position. The fact that the underlying base vectors might be non-normalized. In the field of materials modeling. Operations such as the dot and cross products are known to be invariant when expressed using combined component+basis notation. Knowing the basics of curvilinear coordinates permits analysts to choose the approach that most simplifies their calculations.unm. Chapter 4 covers basis and coordinate transformations. The fact that different base vectors can be used at different points in space is an essential feature of curvilinear coordinates analysis. and curvilinear analysis becomes useful if those directions are not orthogonal. The same operation would be computed differently in a non-RCC system — the fundamental operation itself doesn’t change.rmbrann@me. This manuscript should be regarded as providing two services: (1) enabling students of Cartesian analysis to solidify their knowledge by taking a foray into curvilinear analysis and (2) enabling engineering professionals to read older literature wherein it was (at the time) considered more “rigorous” or stylish to present all analyses in terms of general curvilinear analysis. and Calculus.edu/~rmbrann/gobag. non-orthogonal.unm. Chapter 3 shows how Cartesian formulas for basic vector and tensor operations must be altered for non-Cartesian systems.

but they can look up whatever formulas they need in handbook tables. ˜ (orthogonal) { x 1. engineers who analyze shells and membranes in 3D space greatly benefit from general tensor analysis. As indicated. Reading the literature of continuum mechanics — especially the older work — demands an understanding of the notation. The base vectors are also mutually perpendicular.html DRAFT June 17. e r depends on θ for cylindrical ˜ coordinates and e r depends on both θ and ψ for spherical coordinates. thus.. The cylindrical and spherical coordinate systems are inhomogeneous because the base vectors vary with position. the rectangular Cartesian coordinate system is the most popular choice.1. Likewise. x 2. For all three systems in Fig.edu/~rmbrann/gobag. 1. James Simmonds [7] begins his book on tensor analysis with the following wonderful quote: The magic of this theory will hardly fail to impose itself on anybody who has truly understood it. at some point. deal with cylindrical and spherical coordinates.unm.edu http://me.rmbrann@me. must be able to perform tensor analysis on four dimensional curvilinear manifolds. 2004 3 The vast majority of engineering applications use one of the coordinate systems illustrated in Fig. the topic is just plain interesting in its own right. Of these. θ. Ricci. the base vectors are unit vectors. once you have mas- . founded by Gauss. —Albert Einstein 1 An important message articulated in this quote is the suggestion that. there can be an infinite number base vectors because the base vector orientations can depend on position. for example. holding the other two coordinates constant. Each base vector points in the direction that the position vector moves if one coordinate is increased. Riemann.1 The most common engineering coordinate systems. and the ordering is “right-handed” (i. z } (orthogonal) { r. φ ) x = r e r ( θ ) + ze z x1 x1 ˜ ˜ ˜ ˜ ˜ FIGURE 1.1. every engineer must. Those dealing with general relativity.e. θ. 1. x 3 } Cartesian coordinates x = x1 e1 + x2 e2 + x3 e3 ˜ ˜ ˜ ˜ Most practicing engineers can get along just fine without ever having to learn the theory behind general curvilinear coordinates. This is a crucial concept: although the coordinate system has only one origin. it represents a genuine triumph of the method of absolute differential calculus. Note that all three systems are orthogonal because the associated base vectors are mutually perpendicular. Finally. the third base vector is obtained by crossing the first into the second).unm. Naturally. base vectors for spherical and cylindrical coordinates vary with position. So why bother learning about generalized curvilinear coordinates? Different people have different motivations for studying general curvilinear analysis. and Levi-Civita. φ } Cylindrical coordinates Spherical coordinates x1 x = r e r ( θ. x3 e3 ˜ x3 e2 ˜ z x2 r (c) ez ˜ x3 eθ ˜ r θ x ˜ φ er ˜ eφ ˜ eθ ˜ x2 e1 ˜ x ˜ (a) er ˜ x ˜ (b) x2 θ (orthogonal) { r.

.rmbrann@me. 2004 4 tered tensor analysis.html DRAFT June 17.” 1915. you will begin to recognize its basic concepts in many other seemingly unrelated fields of study.unm.unm. as quoted and translated by C. Lanczos in The Einstein Decade. Your knowledge of tensors will therefore help you master a broader range of subjects.edu/~rmbrann/gobag.edu http://me. p213. 1. From: “Contribution to the Theory of General Relativity.

Furthermore.edu 1. ξ is a third order tensor. we will use standard RCC conventions that components of vectors and tensors are identified by subscripts that take on the values 1. Therefore.edu/~rmbrann/gobag. Being orthonormal. This document focuses exclusively on ordinary 3D physical space. are presumed familiar with basic operations in Cartesian coordinates (dot product.unm. The term “array” is often used for any matrix having one dimension equal to 1.). etc. T is a second-order tensor. unless otherwise indicated.html DRAFT June 17. v is a vector.edu http://me. Equation (1. s is a scalar. determinant.rmbrann@me. they are understood to be summed from 1 to 3. We fol˜ ˜ ˜ low Einstein’s summation convention where repeated indices are to be summed (this rule will be later clarified for curvilinear coordinates).unm. { e 1.2) . the base ˜ ˜ ˜ vectors have the property that e i • e j = δ ij . when exactly two indices are repeated in a single term. A principal purpose of this document is to show how these same structured operations must be computed using different procedures when using non-RCC systems. the reader. cross-product. Any array of three numbers may be expanded as the sum of the array components times the corresponding primitive basis arrays:  v1  1  0  0           v2  = v1  0  + v2  1  + v3  0           v3  0  0  1  (1.1) Everyday engineering problems typically characterize vectors using only the regular Cartesian (orthonormal right-handed) laboratory basis. Later on. for nonRCC systems. where we are merely explaining the meanings of the non-indicial notation structures.unm.1) is the matrix-notation equivalent of the usual expansion of a vector as a sum of components times base vectors: v = v1 e1 + v2 e2 + v3 e3 ˜ ˜ ˜ ˜ (1. where δ ij is the Kronecker delta and the indices ˜ ˜ ( i and j ) take values from 1 to 3. e 3 } . you will note that the text is color-coded as follows: BLUE ⇒ definition RED ⇒ important concept Please direct comments to rmbrann@me. and 3. In this section. Thus. For example. 2. the conventions for subscripts will be generalized. we may define our structured terminology and notational conventions by telling you their meanings in terms ordinary Cartesian coordinates. You. etc. the word “array” denotes either a 3 × 1 or a 1 × 3 matrix. To assist with this goal.1 Vector and Tensor Notation The tensorial order of quantities will be indicated by the number of underlines. 2004 5 This document is a teaching and learning tool. e 2.

rotates.edu http://me. { E 1. Generalized curvilinear coordinates show up when studying quantum mechanics or shell theory (or even when interpreting a material deformation from the perspective of a person who translates. Thus. for example. Even though the components themselves change when a basis changes. This harks back to our earlier comment that the properties of being orthonormal and curvilinear are distinct — one does not imply or exclude the other. this ˜ ˜ ˜ “other” basis is also a “regular” basis. E 3 } . ˜ ˜ ˜ ˜ What is it that distinguishes vectors from simple 3 × 1 arrays of numbers? The answer is that the component array for a vector is determined by the underlying basis and this component array must change is a very particular manner when the basis is changed. and/or not righthanded).. meaning that it is orthonormal ( E i • E j = δ ij ) and ˜ ˜ right handed ( E 3 = E 1 × E 2 ) — the only difference is that the new basis is oriented differ˜ ˜ ˜ ently than the laboratory basis. E 2. Most of the time. e 2. even though it is might not be convenient for the application at hand. e 3 } is the regular ˜ ˜ ˜ .html DRAFT June 17. it is customary to construct a complementary or “dual” basis that is intimately related to the original irregular basis. more convenient. that { e 1.unm. Additionally. the regular laboratory basis still exists. For most of these advanced physics problems. that the bases for spherical and cylindrical coordinates illustrated in Fig. Sometimes a quantity is most easily interpreted using yet other bases. Suppose. 1. 2004 6 More compactly.1 are orthonormal and right-handed (and therefore “regular”) even though a different set of base vectors is used at each point in space.edu/~rmbrann/gobag. Vectors have (by definition) an invariant quality with respect to a change of basis. vectors are routinely expanded in the form of components times base vectors v = v 1 e 1 + v 2 e 2 + v 3 e 3 . and stretches along with the material). not normalized.e.. then the thing you are dealing with (whatever it may be) is not a vector. but we all know that the tensor is particularly simplified if it is expressed in terms of its principal basis. When working with irregular bases. Even though components change when the basis changes. then the components of vectors and tensors must change too. any engineering problem might involve the simultaneous use of many different bases. basis might vary in space. The best choice for this other. those with spherical or cylindrical symmetry) become extremely complicated when described using the orthonormal laboratory basis. v = vk ek (1.unm.rmbrann@me. the sum of the components times the base vectors remains the same. the governing equations are greatly simplified when expressed in terms of an “irregular” basis (i. the reader must practice constant vigilance to keep track of what particular basis a set of components is referenced to. a tensor is usually described in terms of the laboratory basis or some “applications” basis.3) ˜ ˜ Many engineering problems (e. If the basis is changed. Note. but they simplify superbly when phrased in terms of some other basis. one that is not orthogonal.g. they must change in a very specific way — it they don’t change that way. To effectively study curvilinear coordinates and irregular bases. To emphasize the inextricable interdependence of components and bases. for example. For example.

the sum of components times base vectors in invariant. The invariance of vectors requires that the basis expansion of the vector must give the same result regardless of which basis is used. (1.edu http://me. For example.html DRAFT June 17.5b) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Note that v k ( E k • E m ) = v k δ km = v m . the analog of Eq.1) is the expansion of a 3 × 3 matrix into a sum of individual components times base tensors: A 11 A 12 A 13 1 0 0 0 1 0 0 0 0 A 21 A 22 A 23 = A 11 0 0 0 + A 12 0 0 0 + … + A 33 0 0 0 0 0 0 0 0 0 0 0 1 A 31 A 32 A 33 (1. e k • E m = L km ˜ ˜ ˜ ˜ and E k • e m = L mk . Namely.edu/~rmbrann/gobag. (1.unm. Later on.4) ˜ ˜ ˜ The relationship between the v k components and the v k components can be easily characterized as follows: ˜ Dotting both sides of (1.” Just as the intimate relationship between a vector and its components is emphasized by writing the vector in the form of Eq. ˜ vk ek = vk Ek (1.rmbrann@me.. With these observations and definitions.6b) ˜ These relationships show how the { v k } components are related to the v m components. This discussion was limited to changing from one regular (i. E 2. Therefore.4) by E m gives: v k ( e k • E m ) = v k ( E k • E m ) (1.6a) (1.unm. Eq. The key point (which is exploited throughout the remainder of this document) is that.4). although the vector components themselves change with the basis.5) ˜ ˜ ˜ ˜ ˜ ˜ becomes ˜ v k L km = v m ˜ v m = v k L mk (1. (1. v k ( e k • e m ) = v k δ km = v m . E 3 } is some alternative orthonormal right-handed basis. Satisfying these relationships is often the identifying characteristic used to identify whether or not something really is a vector (as opposed to a simple collection of three numbers). Similarly. The statements made above about vectors also have generalizations to tensors. the concepts will be revisited to derive the change of component formulas that apply to irregular bases. where we have used the fact that the dot product is commutative ˜ ˜ ( a • b = b • a for any vectors a and b ). We can define ˜ ˜ ˜ ˜ a set of nine numbers (known as direction cosines) L ij ≡ e i • E j . Let the ˜ ˜ ˜ ˜ components of a vector v with respect to the lab basis be denoted by v k and let v k denote the ˜ components with respect to the second basis. the relationship between a tensor’s components and the underlying basis is emphasized by writing tensors as the sum of components times “basis dyads”. 2004 7 laboratory basis and { E 1. orthonormal right-handed) to another.5a) ˜ ˜ ˜ ˜ ˜ ˜ Dotting both sides of (1.4) by e m gives: v k ( e k • e m ) = v k ( E k • e m ) (1.e.7) Looking at a tensor in this way helps clarify why tensors are often treated as nine-dimensional vectors: there are nine components and nine associated “base tensors. Specifically .

(1. for example. Using our summation notation that repeated indices are to be summed.8). (1. the standard component-basis expression for a tensor (Eq. Even though a tensor comprises many components and basis triads. in terms of a Cartesian basis.7) in the more compact form of Eq. Even a dyad a b itself ˜˜ can be expanded in terms of basis dyads as a b = a i b i e i e j .html DRAFT June 17. that is. the single dot ˜ ˜ continues to denote the first order inner product. A • u denotes a first order vector whose i th Cartesian component is A ij u j . (1. (1. w = A • u .unm.8) ˜ ˜ ˜ ˜ ˜ ˜ ˜ Two vectors written side-by-side are to be multiplied dyadically. For example.edu/~rmbrann/gobag. Dyadic multiplication is often ˜˜ ˜ ˜ alternatively denoted with the symbol ⊗. The common operation.edu http://me.. ˜˜ ˜ ˜ ˜ ˜ Working with third-order tensors requires introduction of triads.unm. it is this sum of individual parts that’s unique and physically meaningful. When applied between tensors of higher or mixed orders. Any tensor can be expressed as a linear combination of the nine possible basis dyads. Specifically the dyad e i e j corresponds to ˜ ˜ an RCC component matrix that has zeros everywhere except “1” in the ij position. defined to equal 1 if i=j and 0 otherwise.9) ˜ ˜ ˜ The components of a tensor change when the basis changes. where the superscript “T” denotes the tensor ˜ ˜ ˜ ˜ ˜ T transpose (i. which was what enabled us to write Eq. ˜ ˜ ˜˜ We prefer using no “⊗” symbol for dyadic multiplication because it allows more appealing identities such as ( a b ) • c = a ( b • c ) . then the RCC components of w may be found by the matrix multiplication: ˜ ˜ ˜ ˜ w = A • u implies (for RCC) w i = A ij u j . for exam˜ ˜ ple. u • v = u k v k . Any third-order tensor can always be expressed as a linear combination of the fundamental basis triads. If. v • B = v k B km e m = BT • v .10) Similarly. A raised single dot is the first-order inner product. B ij = B ji ). 2004 8 in direct correspondence to Eq. (1. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Here δ ij is the Kronecker delta. 1.rmbrann@me. Specifically. or w 2 = A 21 A 22 A 23 u2 ˜ ˜ ˜ w3 A 31 A 32 A 33 u 3 w1 A 11 A 12 A 13 u 1 (1. Note that the effect of the raised single dot is to sum adjacent indi- . The concept of dyadic multiplication extends similarly to fourth and higher-order tensors. a b is a sec˜˜ ond-order tensor dyad with Cartesian ij components a i b j . a ⊗ b means the same thing as a b . but the sum of components times basis dyads remains invariant.e. adjacent vectors in the basis dyads are dotted together so that A • B = ( A ij e i e j ) • ( B pq e p e q ) = ( A ij B pq δ jp e i e q ) = A ij B jq e i e q .8) can be written A = A ij e i e j . which are denoted structurally by three vectors written side-by-side. For example.7) we write A = A 11 e 1 e 1 + A 12 e 1 e 2 + … + A 33 e 3 e 3 . u v w is a third-order tensor with ˜˜ ˜ RCC ijk components u i v j w k .

11) (1.edu http://me. Importantly. v. or 213 ε ijk = 0 if any of the indices i .edu/~rmbrann/gobag. Caution: many authors ˜ ˜ ˜ ˜˜ ˜ ˜ ˜ ˜ ˜ insidiously use the term˜ “inner product” for the similar looking scalar-valued operation A ij B ji .16) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ The “triple scalar-valued product” is denoted with square brackets around a list of three vectors and is defined [ u.unm. Applying similar heuristic notational interpretation. e i × e j = ε ijk e k ˜ ˜ ˜ This permits us to alternatively write Eq. ξ : C = ξ ijk C jk e i . or 312 ε ijk = –1 if ijk = 321. more compactly. 231. e k ] (1.12) (1. (1. w ] ≡ u • ( v × w ) . A : B = A ij B ij . v × w = ε ijk v j w k e i .rmbrann@me.13) v × w = vj wk ( ej × ek ) (1.12) as (1. or k are equal. the reader can verify that u • C • v ˜ ˜ ˜ must be a scalar computed in RCC by u i C ij v j . the second-order inner product sums adjacent pairs of components.14) (1. For rectangular Cartesian components. the expression C × v can be immediately inferred to mean ˜ ˜ C × v = C ij e i e i × v k e k = C ij v k e i e i × e k = C ij v k ε mik e m (1. Note that ˜ ˜ ˜ ˜ ˜ ˜ ε ijk = [ e i. teaches that the cross-product between two vectors v × w is a new vector obtained in RCC by ˜ ˜ v × w = ( v 2 w 3 – v 3 w 2 )e 1 + ( v 3 w 1 – v 1 w 3 )e 2 + ( v 1 w 2 – v 2 w 1 )e 3 ˜ ˜ ˜ ˜ ˜ or. A first course in tensor analysis.html DRAFT June 17. For example. but this operation is not an inner product because it fails the positivity axiom required for any inner product.17) ˜ ˜ ˜ We denote the second-order inner product by a “double dot” colon. For example. 132.unm. j . and ξ :a b = ξ ijk a j b k e i . e j.15) ˜ ˜ ˜ ˜ We will employ a self-defining notational structure for all conventional vector operations. ˜ ˜ ˜ where ε ijk is the permutation symbol ε ijk = 1 if ijk = 123. . 2004 9 ces. for example.

rmbrann@me.unm.edu

http://me.unm.edu/~rmbrann/gobag.html

DRAFT June 17, 2004

10

1.2 Homogeneous coordinates

g2 A coordinate system is called “homogee1 ˜ g1 ˜ neous” if the associated base vectors are e2 ˜ g2 ˜ e2 ˜ g1 ˜ g2 the same throughout space. A basis is ˜ e1 ˜ g1 ˜ e1 (b) “orthogonal” (or “rectangular”) if the (a) ˜ ˜ Nonorthogonal base vectors are everywhere mutually Orthogonal (or rectangular) homogeneous coordinates homogeneous coordinates perpendicular. Most authors use the term FIGURE 1.2 Homogeneous coordinates. The base vectors “Cartesian coordinates” to refer to the are the same at all points in space. This condition is possible conventional orthonormal homogeneous only if the coordinate grid is formed by straight lines. right-handed system of Fig. 1.1a. As seen in Fig. 1.2b, a homogeneous system is not required to be orthogonal. Furthermore, no coordinate system is required to have unit base vectors. The opposite of homogeneous is “curvilinear,” and Fig. 1.3 below shows that a coordinate system can be both curvilinear and orthogonal. In short, the properties of being “orthogonal” or “homogeneous” are independent (one does not imply or exclude the other).

e2 ˜

1.3 Curvilinear coordinates

g2 ˜

The coordinate grid is the family of g1 ˜ g2 lines along which only one coordinate var- g 2 ˜ ˜ ies. If the grid has at least some curved g2 g2 ˜ lines, the coordinate system is called “cur- (a) g1 ˜ g1 g1 (b) g1 ˜ ˜ ˜ ˜ vilinear,” and, as shown in Fig. 1.3, the orthogonal nonorthogonal associated base vectors (tangent to the curvilinear coordinates curvilinear coordinates FIGURE 1.3 Curvilinear coordinates. The base vectors are still grid lines) necessarily change with positangent to coordinate lines. The left system is curvilinear and tion, so curvilinear systems are always orthogonal (the coordinate lines always meet at right angles). inhomogeneous. The system in Fig. 1.3a has base vectors that are everywhere orthogonal, so it is simultaneously curvilinear and orthogonal. Note from Fig. 1.1 that conventional cylindrical and spherical coordinates are both orthogonal and curvilinear. Incidentally, no matter what type of coordinate system is used, base vectors need not be of unit length; they only need to point in the direction that the

g2 ˜ g1 ˜

rmbrann@me.unm.edu

http://me.unm.edu/~rmbrann/gobag.html

DRAFT June 17, 2004

11

position vector would move when changing the associated coordinate, holding others con1 stant. We will call a basis “regular” if it consists of a right-handed orthonormal triad. The systems in Fig. 1.3 have irregular associated base vectors. The system in Fig 1.3a can be “regularized” by normalizing the base vectors. Cylindrical and spherical systems are examples of regularized curvilinear systems. In Section 2, we introduce mathematical tools for both irregular homogeneous and irregular curvilinear coordinates first deals with the possibility that the base vectors might be nonorthogonal, non-normalized, and/or non-right-handed. Section 3 shows that the component formulas for many operations such as the dot product take on forms that are different from the regular (right-handed orthonormal) formulas. The distinction between homogeneous and curvilinear coordinates becomes apparent in Section 5, where the derivative of a vector or higher order tensor requires additional terms to account for the variation of curvilinear base vectors with position. By contrast, homogeneous base vectors do not vary with position, so the tensor calculus formulas look very much like their Cartesian counterparts, even if the associated basis is irregular.

1.4 Difference between Affine (non-metric) and Metric spaces
As discussed by Papastavridis [12], there are situations where the axes used to define a space don’t have the same physical dimensions, and there is no possibility of comparing the units of one axis against the units of another axis. Such spaces are called “affine” or “non-metric.” The apropos example cited by Papastavridis is “thermodynamic state space” in which the pressure, volume, and temperature of a fluid are plotted against one another. In such a space, the concept of lengths (and therefore angles) between two points becomes meaningless. In affine geometries, we are only interested in properties that remain invariant under arbitrary scale and angle changes of the axes. The remainder of this document is dedicated to metric spaces such as the ordinary physical 3D space that we all (hopefully) live in.

2. Dual bases for irregular bases
Suppose there are compelling physical reasons to use an irregular basis { g 1, g 2, g 3 } . ˜ ˜ ˜ Here, “irregular” means the basis might be nonorthogonal, non-normalized, and/or nonright-handed. In this section we develop tools needed to derive modified component formulas for tensor operations such as the dot product. For tensor algebra, it is irrelevant whether the basis is homogeneous or curvilinear; all that matters is the possibility that the base vectors
1. Strictly speaking, it is not necessary to require that the base vectors have any relationship whatsoever with the coordinate lines. If desired, for example, we could use arbitrary curvilinear coordinates while taking the basis to be everywhere aligned with the laboratory basis. In this document, however, the basis is always assumed tangent to coordinate lines. Such a basis is called the “associated” basis.

rmbrann@me.unm.edu

http://me.unm.edu/~rmbrann/gobag.html

DRAFT June 17, 2004

12

might not be orthogonal and/or might not be of unit length and/or might not form a righthanded system. Again, keep in mind that we will be deriving new procedures for computing the operations, but the ultimate result and meanings for the operations will be unchanged. If, for example, you had two vectors expressed in terms of an irregular basis, then you could always transform those vectors into conventional RCC expansions in order to compute the dot product. The point of this section is to deduce faster methods that permit you to obtain the same result directly from the irregular vector components without having to transform to RCC. To simplify the discussion, we will assume that the underlying space is our ordinary 3D 1 physical Euclidean space. Whenever needed, we may therefore assume there exists a righthanded orthonormal laboratory basis, { e 1, e 2, e 3 } where e i • e j = δ ij . This is particularly ˜ ˜ ˜ ˜ ˜ convenient because we can then claim that there exists a transformation tensor F such that ˜ gi = F • ei . (2.1) ˜ ˜ ˜ If this transformation tensor is written in component form with respect to the laboratory basis, then the i th column of the matrix [F] contains the components of the g i base vector with ˜ respect to the laboratory basis. In terms of the lab components of [F], Eq. (2.1) can be written g i = F ji e j (2.2) ˜ ˜ Comparing Eq. (2.1) with our formula for how to dot a tensor into a vector [Eq. (1.10)], you might wonder why Eq. (2.2) involves F ji instead of F ij . After all, Eq. (1.10) appears to be telling us that adjacent indices should be summed, but Eq. (2.2) shows the summation index j being summed with the farther (first) index on the tensor. There’s a subtle and important phenomenon here that needs careful attention whenever you deal with equations like (2.1) that really represent three separate equations for each value of i from 1 to 3. To unravel the mystery, let’s start by changing the symbol used for the free index in Eq. (2.1) by writing it equivalently by g k = F • e k . Now, applying Eq. (1.10) gives ( g k ) i = F ij ( e k ) j . Any vector, v , can be ˜ ˜ ˜ ˜ ˜ ˜ expanded as v = v i e i . Applying this identity with v replaced by g k gives g k = ( g k ) i e i , or, ˜ ˜ ˜ ˜ ˜ ˜ g k = F ij ( e k ) j e i . The expression ( e k ) j represents the j th lab component of e k ˜ so it must equal , ˜ ˜ ˜ ˜ ˜ δ kj . Consequently, g k = F ij δ kj e i = F ik e i , which is equivalent to Eq. (2.2). ˜ ˜ ˜ Incidentally, the transformation tensor F may be written in a purely dyadic form as ˜ F = gk ek (2.3) ˜ ˜ ˜

1. To quote from Ref. [7], “Three-dimensional Euclidean space, E 3 , may be characterized by a set of axioms that expresses relationships among primitive, undefined quantities called points, lines, etc. These relationships so closely correspond to the results of ordinary measurements of distance in the physical world that, until the appearance of general relativity, it was thought that Euclidean geometry was the kinematic model of the universe.”

When this “natural” embedded basis is used. This does not imply that F is not a tensor. The transformation tensor for a different pair of bases will be different.1 Consider the following irregular base vectors expressed in terms of the laboratory basis: g 1 = e 1 + 2e 2 ˜ ˜ ˜ g2 = – e1 + e2 ˜ ˜ ˜ g3 = e3 .edu http://me. Readers who are familiar with continuum mechan˜ ics may be wondering whether our basis transformation tensor F has anything to do with the ˜ deformation gradient tensor F used to describe continuum motion. Later on. When a plane of particles moves with the material. 1 (b) In terms of the lab basis. the component array of the first base vector is 2 . thereby providing a natural “embedded” curvilinear coordinate system with an associated “natural” basis that is everywhere tangent to the painted grid lines. called the contravariant basis. these vectors stay always tangent to the grid lines and they stretch in proportion with the stretching of the grid lines. The answer is “no. For this reason. it’s still interesting to consider the ˜ special case in which these two tensors are the same. the embedded basis is called the covariant basis.unm. ˜ ˜ (a) Explain why this basis is irregular.rmbrann@me.unm. 0 This transformation tensor F is defined for the specific irregular basis of interest as it relates to ˜ the laboratory basis. however.edu/~rmbrann/gobag. 2004 13 Study Question 2. The component forms of many constitutive material models ˜ become intoxicatingly simple in structure when expressed using an embedded basis (it remains a point of argument.html DRAFT June 17. ˜ g1 ˜ g2 ˜ e2 ˜ e1 ˜ Partial Answer: (a) The term “regular” is defined on page 10. then this grid will deform with the material. If a deforming material is conceptually “painted” with an orthogonal grid in its reference state. instead we will find that the contravariant basis moves in a way that it remains always perpendicular to material planes that do co-vary with the deformation. we will introduce a companion triad of vectors.” In ˜ general. our transformation tensor F will be identical to the defor˜ mation gradient tensor F . that does not move with the grid lines. whether or not simple structure implies intuitiveness). its normal does not generally move with the material! . Even though our F tensor is generally unrelated to the ˜ deformation gradient F tensor from continuum mechanics. so this must be the first column of the [F] matrix. (b) Find the 3×3 matrix of components of the transformation tensor F with respect to the laboratory basis. The embedded basis co-varies with the grid lines — in other words. the tensor F in this document merely represents the relationship between the labora˜ tory basis and the irregular basis.

and a transformed parallelepiped is formed by the ˜ ˜ ˜ three transformed vectors { F • u. ˜ ˜ handed. F • w ] = det[ F ] [ u.4) that det [ F ] is denoted by the Jacobian J .edu http://me. (2. F • w } . defined by ˜ J ≡ det [F] . e 2. g 3 } forms a basis if and only if ˜ ˜ ˜ must be nonzero. g 3 ] = J . v.rmbrann@me. The three irregular base vectors. the irregular basis { g 1. Hence. (2. g 2. v. (2. F • e 2. recalling from Eq. (2. (2. (2. e 3 } is orthonor˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ mal and right-handed. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ The set of vectors { g 1. g 2.5) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Geometrically this strange-looking definition of the determinate states that if a parallelepiped is formed by three vectors.. g 3 } . F • e 3 ]=J [ e 1. w } . g 2. g 3 } form a triad. { u. g 2. Recalling from Eq. (2. F • v. e 2. then the ratio of the transformed volume to the ˜ ˜ ˜ ˜ ˜ ˜ original volume will have a unique value. (2. we now introduce the direct notation ˜ definition of a determinant: The determinant. By choice.8) . det[ F ] (also called the Jacobian). w ] for all vectors { u. e 2.5) becomes ˜ [ F • e 1. e 2. which in turn defines a parallelepiped. F • v. To see why this triple scalar product is identically ˜ ˜ ˜ equal to the determinant of the transformation tensor F .6) ˜ ˜ ˜ which completes the proof that the Jacobian J can be computed by taking the determinant of the Cartesian transformation tensor or by simply taking the triple scalar product of the covariant base vectors. we get ˜ ˜ ˜ [ g 1.4) equals the volume of the parallelepiped formed by the covariant base vectors { g 1. Suppose we choose to identify { u.edu/~rmbrann/gobag. e 3 } .unm.7) F is invertible — i.4) ˜ ˜ Geometrically. The volume of ˜ is ˜ the parallelepiped ˜ given by the Jacobian of the transformation tensor F . g 3 } is ˜ ˜ ˜ right-handed if J > 0 left-handed if J < 0 . the laboratory basis { e 1. so [ e 1. the Jacobian ˜ e˜3 } is regular and therefore right˜ (2.5) must hold for all vectors.html DRAFT June 17. (2. regardless what three vectors are chosen to form original the parallelepiped! This volume ratio is the determinant of the transformation tensor. The underlying Cartesian basis { e 1. w } . w } with the underlying orthonormal basis { e 1. g 2. Since Eq.unm. g 3 ] . of a tensor F is the unique scalar ˜ ˜ satisfying [ F • u. whichever method is more convenient: J = det [F] = g 1 • ( g 2 × g 3 ) ≡ [ g 1. ˜ ˜ ˜ ˜ ˜ ˜ Then. it must hold for any particular choices of those vectors. v. e 3 ] . g 2. { g 1. e 2. Eq.1) that the covariant basis is ˜ ˜ ˜ obtained by the transformation g i = F • e i . v.e. 2004 14 Our tensor F can be seen as characterizing a transformation operation that will take you ˜ from the orthonormal laboratory base vectors to the irregular base vectors. e 3 ]=1 . the Jacobian J in Eq.

Similarly. v i is identically equal to v i when the basis is regular. for example. we know there exist unique coefficients { a 1. Consequently if you really do wish to use the equation A ij = v i + v j to define some matrix [ A ] . Consider. Distinct free indices may permissibly appear at different levels. then you must explicitly indicate whether that index is free or summed. not exponents. 3. Exceptions of rules #2 and #4 can occur when working with a regular (right-handed orthonormal) basis because it turns out that there is no distinction between covariant and contravariant components when the basis is regular. No index may appear more than twice in a single term. Another exception to rule #3 occurs when an index appears only once. a 3 } ˜ ˜ such that any vector˜ a can be written a = a 1 g 1 + a 2 g 2 + a 3 g 3 . • • 1. and the right-hand-side of this equation violates rule #1 because the index i occurs exactly once in the first term but not in the second term. then you should include a parenthetical comment that the tensor index conventions are not to be applied — otherwise your readers will think you made a typo. Any index that appears exactly twice in a given term is called a dummy sum index and implies summation from 1 to 3. Each particular free index must appear at the same level in every term. one index must appear at the upper “contravariant” level.unm. but you really do wish for a summation over that index. Exceptions to the above rules must be clearly indicated whenever the need arises.9) ˜ ˜ By convention. This equation is not well defined as an indicial tensor equation because it will not transform properly under a basis change.rmbrann@me. Exceptions of rule #3 sometimes occur when the indices are actually referenced to a particular basis and are not intended to apply to any basis. Given a dummy sum pair. not summed. For example. one might define a matrix [ A ] with components given by A ij = v i + v j . g 3 } is a basis. Summations always occur on different levels — a superscript is always paired with a subscript in these implied summations.unm. The concept of what constitutes a well-formed tensor operation will be discussed in more detail later. For example. This 2 definition of the A ij numbers is certainly well-defined in a matrix sense . In that case you must explicitly show the summation sign in front of the equation. g 2. 2004 15 2. If your work does involve some scalar quantity “ a ”. exceptions to rule #1 do regularly appear in non-tensor (matrix) equations. components with respect to an irregular basis { g 1. you may write this expansion as a = a i gi (2. You would have to write something like “ T •˜ p i = λ i p i (no sum over index i) ” ˜ ˜ ˜ in order to call attention to the fact that the index i is supposed ˜to be a free index. Here. then you should typeset its square as ( a ) 2 whenever there is any chance for confusion.edu/~rmbrann/gobag. a 2 is the second contravariant component of a vector a ˜ — it is not the square of some quantity a . 2.html DRAFT June 17.1 Modified summation convention Given that { g 1. The superscripts are only indexes. how you would need to handle an exception to rule #3 when defining the i th principle direction p i and eigenvalue λ i associated with some tensor T .” and it must appear exactly once in every term in the expression.edu http://me. • Exceptions of rule #1 are extremely rare in tensor analysis because rule #1 can never be violated in any well-formed tensor expression. The summation convention rules for an irregular basis are: 1. 4. An index that appears exactly once in any term is called a “free index. but the equation is a violation of tensor index rule #1. g 3 } are identified 1 ˜ ˜ ˜ with superscripts. Using Einstein’s summation ˜ ˜ ˜ ˜ ˜ notation. a 2. 5. g 2. . if you really do wish for an index to appear more than twice. However. That’s why indicial expressions in most engineering publications show all components using only subscripts. and one must appear at the lower “covariant” level. 2. rather than subscripts. For example. both i and j are free indices.

or what they mean physically.and contra-variant. For now. Either of these expressions clearly show that the indices are sup- posed to be ordered as “ i followed by j . 2004 16 We arbitrarily elected to place the index on the lower level of our basis { g 1. for example. T •• means the same thing as T ij . a 2. g 3 } . [4] fails to clearly indicate index ordering. We will also introduce “mixed” components T •j and T i•j such that ˜ ˜ ˜ ˜ ˜˜ ˜ i T = T •j g i g j = T i•j g i g j . a 3 } have the index on the upper level and are therefore called the contravariant components of the vector a . g 2. we will define covariant components { a 1. or why they are useful.html DRAFT June 17. We will introduce contravariant components T ij and covariant components T ij . where individuals have unreliable precision or clarity of penmanship. g 2. ˜ Later on. Similarly. the number of dot placeholders used in an expression is typically i• kept to the minimum necessary to clearly demark the order of the indices. 1 1. we will find that symmetric tensor components satisfy the property that T •j = T j•i . For example. (You may find the phrase “co-go-below” helpful to remember the difference between co.) We will eventually show there are four ways to write a second-order tensor. In professionally typeset manuscripts. The dot placeholders are more frequently used in handwritten work. i (namely.unm. but the dots serve no clarifying purpose for this case when all indices are on the same level (thus. T i j can be typeset in a manner that is clearly distinguishable from T j i . use of a “dot” placeholder is necessary only for nonsymmetric tensors. so ˜ ˜ ˜ (recalling rule #4) we call it the “covariant” basis.). so the placement of the “dot” is inconsequential for symmetric tensors. For example. a 3 } with respect to a carefully defined complementary “contravariant” basis { g 1.edu http://me. They use neither well-spaced typesetting nor dot placeholders. a 2.and co˜ ˜ ˜ variant components are computed.” not vice versa. the dot placeholder might not be necessary because. The importance of clearly indicating the order of the indices is inadequately emphasized in some texts. .4. Note the use of a “dot” to serve as a place holder to indicate the ˜ ˜ ˜ ˜ ˜ ˜ order of the indices (the order of the indices is dictated by the order of the dyadic basis pair).unm.edu/~rmbrann/gobag. Keep in mind: we have not yet indicated how these contra. T •j i means the same thing as T •j . Ref. only one dot is enough to ij serve the purpose of indicating order. they are omitted). such that i T = T ij g i g j = T ij g i g j . The coefficients { a 1. Finally. g 3 } . Thus. We will then have two ways to write the ˜ ˜ ˜ vector: a = a i g i = a i g i .rmbrann@me. which can be confusing. we are just introducing the standard “high-low” notation used in the study of irregular bases. As shown in Section 3.

Any ˜ ˜ indices appearing inside a matrix merely indicate the co/contravariant nature of the matrix — they are not interpreted in the same way as indices in an indicial expression.unm. for example. Yes. there is no difference between covariant and contravariant components whenever the basis is orthonormal.html DRAFT June 17. we will note the importance of the permutation symbol ε ijk (which equals +1 if ijk ={123. 213. and braces { } will indicate a 3 × 1 matrix containing vector components.rmbrann@me. not to equations involving matrices. δ ij are not the covariant components of ˜ ˜ ˜ the identity tensor with respect to the irregular basis. Tracking the basis to which components are referenced is one the most difficult challenges of curvilinear coordinates. ˜ [ T ij ] is the matrix that contains the covariant components of a second-order tensor T . Important notation glitch Square brackets [ ] will be used to indicate a 3 × 3 matrix. the mixed-level Kronecker delta components. we will be concerned only with the components of tensors with respect to the irregular basis. do turn out to be the mixed components of the identity tensor with respect to either basis! Most of the time. e 2. δ ij.unm.  0 if i ≠ j (2. the set of δ ij values should be regarded as indexed symbols as defined above. or 132}. The Kronecker delta is important in its own right. Furthermore. we rewrite all familiar orthonormal formulas so that the summed subscripts are on different levels. 231. in order to always satisfy rules #2 and ˜ ˜ ˜ ˜ ˜ ˜ #4 of the sum conventions.5. { e 1. -1 if ijk={321. The permutation symbol represents the components of the alternating tensor with respect to the any regular (i. and zero otherwise).6). e 3 } is the same as { e 1. The rules on page 15 apply only to proper indicial equations. but not with respect to an irregular basis. right-handed orthonormal basis). and ˜ ˜ [ T ij ] will be used to denote the matrix containing the contravariant components of T .edu http://me. For example. The indices ij merely to indicate the (high/low/mixed) level of the matrix components. δ ij  1 if i=j = . { v i } denotes the 3 × 1 matrix that contains the contravariant components of a vector v . Consequently. δ ij . Nevertheless.10) BEWARE: as discussed in Section 3. or 312}.edu/~rmbrann/gobag. e 2. but they are not the contravariant components of the identity tensor with ˜ ˜ ˜ respect to an irregular { g 1. it’s true that δ ij are components of the identity tensor I with respect to the underlying rectangular Cartesian ˜ basis { e 1. e 3 } . g 2. not as components of any particular tensor. Similarly. Likewise. the following are all equivalent symbols for the Kronecker delta: δ ij. we will represent the alternating tensor by a different symbol ξ so that we can continue to use the ˜ permutation symbol ε ijk as an independent indexed quantity. g 3 } basis. e 2. 2004 17 IMPORTANT: As proved later (Study question 2.e. We will later . e 3 } . Hence. Later on. Interestingly.. This is one reason why we denote the identity tensor I by a symbol different from its compo˜ nents. throughout this document.

unm.( g 3 × g 1 ) . g 3 } defined such that ˜ ˜ ˜ g i • g j = δ ji . (2. Thus. g 1 • ( g 2 × g 3 ) . the { g 1.” defined g ≡ gi • gj .2 The metric coefficients and the dual contravariant basis For reasons that will soon become apparent. 7]. then an orthonormal basis does not exist. Shell and membrane theory deals with 2D curved Riemannian manifolds embedded in 3D space.edu http://me. 2. and the metric coefficients g ij must be specified a priori. .unm.13) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ In the second-to-last step.html DRAFT June 17. 2004 18 prove.edu/~rmbrann/gobag.g. For further examples of Riemannian spaces.. (2.11) ij ˜ ˜ When the space is Euclidean. we recognized the triple scalar product. (2. Consequently. Eq. The brackets around T •j ˜ ˜ ˜ ˜ indicate that this equation involves the matrix of mixed components. In the last step we asserted that the result˜must equal unity. Note: If the space is not Euclidean.12) requires that the first contravariant base vector g 1 must be perpendicular to both g 2 and g 3 . e. 1 We also introduce a dual “contravariant” basis { g 1. we might write det T = det [ T •j ] . the scalar α is merely the reciprocal of the Jacobian: α = 1 -J (2. called the “metric coefficients.16) 1. so the rules on page 15 do not apply to the ij indices. (2.( g 1 × g 2 ) .12) ˜ ˜ Geometrically. g 3 ] . g 3 } base vectors can be expressed as linear combi˜ laboratory basis and the above set of dot products can nations of the underlying orthonormal ˜ ˜ be computed using the ordinary orthonormal basis formulas.15) J = g 1 • ( g 2 × g 3 ) ≡ [ g 1. It’s okay that i and j don’t appear on the left-hand side.rmbrann@me. g 2.( g 2 × g 3 ) .7). ˜ The as-yet undeter˜ determined by requiring that g 1 • g ˜equal unity. we introduce a symmetric set of numbers g ij . J ˜ ˜ ˜ where 1 g 2 = -. [5. so it must be of the form g 1 = α ( g 2 × g 3 ) . g 2. to be the ˜ ˜ Jacobian J defined in Eq. The geometry of general relativity is that of a four-dimensional Riemannian manifold. Such a space is called Riemannian. g 2. J ˜ ˜ ˜ (2. see. J ˜ ˜ ˜ 1 g 3 = -.14) All three contravariant base vectors can be determined similarly to eventually give the final result: 1 g 1 = -. for example. Refs. ˜ ˜ ˜ ˜ ˜ ˜ (2. that the determinant det T can be computed by the determinant of the ˜ ˜ i i mixed components of T . ˜ ˜ ˜ mined scalar α is 1 ˜ ˜ “set” g 1 • g 1 = [ α ( g 2 × g 3 ) ] • g 1 = αg 1 • ( g 2 × g 3 ) = αJ = 1 (2.

rmbrann@me. (2.21) Recalling that the [ g ij ] matrix is the inverse of the [ g ij ] matrix. as shown in Study Question 2. Namely. g 2. Thus 1 g o = ---. (2. both metric matrices are symmetric. In later analyses.11).23) .6) by go = J 2 ... (2.edu/~rmbrann/gobag. go (2. Furthermore. 2004 19 An alternative way to obtain the dual contravariant basis is to assert that { g 1.edu http://me. (2. whenever these are multiplied together with a contracted index.18) ˜ ˜ where the matrix of contravariant metric components g ij is obtained by inverting the covariant metric matrix [ g ij ] . (2. (2.18) by g j we note that ˜ ij = g i • g j . Thus. the result is the Kronecker delta: g ik g kj = g ki g kj = g ik g jk = g ki g jk = δ ji .24) (2. g 31 g 32 g 33 (2.4) and (2. J2 (2. keep in mind that g ij is the inverse of the g ij matrix.12) shows that the transformation ˜ coefficients must be identical to the covariant metric coefficients: L ik = g ik .unm.22) Furthermore.20) Another quantity that will appear frequently in later analyses is the determinant g o of the covariant g ij metric matrix and the determinant g o of the contravariant g ij metric matrix: g 11 g 12 g 13 g o ≡ det g 21 g 22 g 23 g 31 g 32 g 33 and g 11 g 12 g 13 g o ≡ det g 21 g 22 g 23 .unm. ˜ ˜ ˜ Dotting both sides with g k and imposing Eqs. we may therefore demand that coefficients L ik must exist such that each covariant base vector g i can be written as a linear combination of the contravariant basis: g i = L ik g k . (2.19) g ˜ ˜ which is similar in form to Eq. ˜ ˜ This equation may be solved for the contravariant basis. Dotting both sides of Eq.11) and (2.5.17) g i = g ik g k . g 3 } is in ˜ ˜ ˜ fact a basis.html DRAFT June 17. g o is related to the Jacobian J from Eqs. we note that 1g o = ---. Thus g i = g ik g k . (2.

Applying this result to the case that [ A ] is the symmetric matrix [ g ij ] . (2.= 2J -------∂g ij ∂g ij (2.edu/~rmbrann/gobag. This side-bar can be skipped without impacting your ability to read subsequent material. and recalling that det [ g ij ] is denoted g o . 2004 20 Non-trivial lemma1 Note that Eq. ∂g o -------. Eq. Taking the partial derivative of g o with respect to a particular g ij component gives a result that is identical to the signed subminor (also called the cofactor) associated with that component.27) permits us to express the left-hand-side of this equation in terms of the contra- 1.23).unm. we are introduced for the first time to a new index notation rule: subscript indices that appear in the “denominator” of a derivative should be regarded as superscript indices in the expression as a whole.html DRAFT June 17. and also recalling that [ g ij ] – 1 is given by [ g ij ] . Recalling Eq.= g o g ij ∂g ij (2. the signed subminor) associated with the ij position.25) Any book on matrix analysis will include a proof that the inverse of a matrix [ A ] can be obtained by taking the transpose of the cofactor matrix and dividing by the determinant: [ A ] CT [ A ] – 1 = --------------det [ A ] (2. the subminor is multiplied by ( – 1 ) i + j .25) can be written ∂g o -------.29) Equation (2.26) From which it follows that [ A ] C = ( det [ A ] ) [ A ] – T . superscripts in the denominator should be regarded as subscripts in the derivative as a whole.e.= g ij ∂g ij (2.28) (2. To obtain the cofactor (i. (2.edu http://me.21) shows that g o may be regarded as a function of the nine components of the g ij matrix.= g o g ij ∂g ij Similarly. Similarly. With this convention. we have ∂g o C -------. Denoting this C cofactor by g ij . (2.27) With these equations. The “subminor” is a number associated with each matrix position that is equal to the determinant of the 2 × 2 submatrix obtained by striking out the i th row and j th column of the original matrix. there is no violation of the index rule that requires free indices to be on the same level in all terms.. . we can apply the chain rule to note that ∂g o ∂J-------.unm.rmbrann@me.

---g 2 = 1 ( g 3 × g 1 )= – 2 e 1 + 1 e 2 . (c) Construct the contravariant (dual) basis by directly using the formula of Eq. (2. e 2.rmbrann@me.edu/~rmbrann/gobag. e 3 } represent the ordinary orthonormal laboratory basis. g 33 =1. 2004 21 variant metric.html DRAFT June 17.18) gives the same result as derived in part (c). (d) Confirm that the formula of Eq. (b) Construct [ g ij ] by inverting [ g ij ] .ˆ (d) g 2 = g 21 g 1 + g 22 g 2 + g 23 g 3 = – 1 g 1 + 5 g 2 = – 2 e 1 + 1 e 2 .12). ˜ ˜ ˜ Partial Answer: (a) g =5. Sketch the dual basis in the picture at right and visually verify that it satisfies the condition of Eq. (e) Redo parts (a) through (d) if g 3 is now replaced by g 3 = 5e 3 .= -. (2. Thus. 9 9 3˜ 3˜ ˜ ˜ ˜ ˜ ˜ ˜ . ˜ ˜ ˜ Consider the following irregular base vectors: g 1 = e 1 + 2e 2 ˜ ˜ ˜ g2 = – e1 + e2 ˜ ˜ ˜ g3 = e3 .unm.Jg ij 2 ∂g ij where we have again recalled that g o = J 2 . g 21 = – 1 ⁄ 9 (c) J=3 .2 Let { e 1. ˜ ˜ (a) Construct the metric coefficients g ij .ˆ -.15). g 22 =2 11 g1 ˜ g2 ˜ ˆ e2 ˜ ˆ e1 ˜ (b) g 11 =2 ⁄ 9. (2. (2.30) Study Question 2. 3 3˜ 3˜ ˜ ˜ ˜ ---.edu http://me. g 13 =0. we may solve for the derivative on the right-hand-side to obtain 1 ∂J -------.unm.

˜ ˜ (a) is this basis right handed? (b) Compute [ g ij ] and [ g ij ] . Recall that g i = F • e i . (2. ˆ e2 ˜ g1 ˜ h2 g1 ˜ h1 g2 ˜ ˆ e1 ˜ θ (d) Prove that h k (see drawing label) is the reciprocal of the length of g k . (c) Construct the contravariant (dual) basis. We now prove that the [ g ij ] matrix can by obtained by [ F ] T [ F ] .unm.1. e 3 } represent the ordinary orthonormal laboratory basis. ˜ Partial Answer: (a) yes -.31) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ The left hand side is g ij .rmbrann@me.edu http://me.4 Using the [F] matrix from Study Question 2. (d) Prove that g 1 points in the direction of the outward normal ˜to the upper surface of the shaded parallelogram. 2004 22 Study Question 2. which completes the proof that the covariant metric coefficients g ij are equal to the ij laboratory components of the tensor F T • F ˜ ˜ ˜ ˜ Study Question 2.edu/~rmbrann/gobag. ˜ (e) Prove that the area of the face of the parallelepiped whose normal is parallel to g k is ˜ given by the Jacobian J times the magnitude of g k .unm. .html DRAFT June 17. Therefore ˜ ˜ ˜ gi • gj = ( F • ei ) • ( F • ej ) = ( ei • F T ) • ( F • ej ) = ei • ( F T • F ) • ej . ˜ ˜ ˜ Consider the following irregular base vectors: g 1 = e 1 + 2e 2 ˜ ˜ ˜ g 2 = 3e 1 + e 2 ˜ ˜ ˜ g 3 = – 7e 3 .note the direction of the third base vector (b) A faster way to get the metric coefficients: Suppose you already have the [ F ij ] with respect to the regular laboratory basis.2.3 Let { e 1. verify that [ F ] T [ F ] gives the same matrix for g ij as computed in Question 2. e 2. and the right hand side represents the ij components of [ F ] T [ F ] with respect to the regular laboratory basis.

unm. it˜ is a unit vector) and g 1 is perpendicular to g 2 .rmbrann@me. ˜ ˜ (c) Prove that. etc. 2004 23 Study Question 2. The dot product ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ > 0 if positivity property states that v • v  = 0 iff v ≠ 0 . (2. Namely.edu/~rmbrann/gobag. (b) Does the above equation tell us anything about the handedness of the systems? (c) Explain why orthonormality implies that contravariant base vectors are identically equal to the covariant base vectors. (d) Use the result from part (b) to prove that the determinant g o of the covariant metric matrix g ij equals the square of the Jacobian J 2 . ˜ for example. (b) Follow logic similar to Eq.5 Recall that the covariant basis may be regarded as a transformation of the right-handed orthonormal laboratory basis. (b) There is no ˜ ˜ ˜ info about the handedness of the system. Reminder: a tensor A is positive definite iff u • A • u > 0 ∀u ≠ 0 . the first contravariant base vector must be . ˜ ˜ ˜ (a) Prove that the contravariant basis is obtained by the inverse transpose transformation: (2.. Partial Answer: (a) Substitute Eqs (2. ˜ ˜ (b) Prove that the contravariant metric coefficients g ij are equal to the ij laboratory compo–1 –T nents of the tensor F • F . for any invertible tensor S . the tensor S T • S is positive definite. Study Question 2.e. (a) By definition. Explain ˜ ˜ ˜ why this implies that the [ g ij ] and g ij matrices are positive definite.edu http://me.1) and (2.html DRAFT June 17. that g 1 = 1 (i. g ij ≡ g i • g j . (2.12) and use the fact that e i • e j = δ ji . (c) A correct proof must use the fact that S is ˜ ˜ ˜ invertible.6 SPECIAL CASE (orthonormal systems) Suppose that the metric coefficients happen to equal the Kronecker delta: g ij = δ ij (2.33) (a) Explain why this condition implies that the covariant base vectors are orthonormal.unm. so the fact that these dot products equal the Kronecker delta says. (d) Easy: the determinant of a product is the product ˜ =˜ v 0 ˜ ˜ ˜ ˜  –T of the determinants.32) gi = F • ei . (c) By definition. g i = F • e i .32) into Eq. ˜ ˜ ˜ where e i is the same as e i .31).

(2.35) gi = F • ei .unm.34). Partial Answer: (a) [ F ] = 2 1 0 0 0 1 1 –1 0 (b) [ F ] –1 = – 2 ⁄ 3 1 ⁄ 3 0 0 0 1 1⁄3 1⁄3 0 1 --(c). 2004 24 perpendicular to the second and third covariant vectors. This means that the i th row of [ F ] – 1 must contain the i th contravariant base vec˜ tor.html DRAFT June 17. and it must satisfy g 1 • g 1 = 1 .rmbrann@me. ˜ ˜ (a) Construct the [ F ] matrix by putting the lab components of g i into the i th column.1)] that the covariant basis can be connected to the lab basis through the equation gi = F • ei (2. e 2. which (with ˜ ˜ some thought) leads to the conclusion that g 1 = g 1 . ˜ ˜ ˜ Consequently. ˜ ˜ Super-fast way to get the dual (contravariant) basis and metrics Recall [Eq. Later. ˜ (b) Find [ F ] – 1 (c) Find g i from the i th row of [ F ] – 1 . g1 ˜ g2 ˜ ˆ e2 ˜ ˆ e1 ˜ (e) Compare the results with those found in earlier study questions and comment on which method was fastest. (2. we may conclude that the i th column of [ F ] – T must contain the lab components of g i . ˜ (d) Directly compute g ij from the result of part (c). (d) g 12 = g 1 • g 2 = dot product of 1st and 2nd rows = – -3˜ 3˜ 9 ˜ ˜ ˜ (e) All results the same. g 2 = – 2 e 1 + 1 e 2 .5. we ˜ asserted that (2.34) ˜ ˜ ˜ When we introduced this equation. –T Study Question 2. in Study Question 2.7 Let { e 1.edu http://me. Eq. ˜ ˜ ˜ Consider the following irregular base vectors: g 1 = e 1 + 2e 2 ˜ ˜ ˜ g2 = – e1 + e2 ˜ ˜ ˜ g3 = e3 .unm.edu/~rmbrann/gobag. we explained that the i th column of the lab component matrix [F] would contain the lab components of g i . This way was computationally faster! . e 3 } represent the ordinary orthonormal laboratory basis.

2004 25 Study Question 2.edu/~rmbrann/gobag. You can thereby ˜ ˜ demonstrate graphically that b = g 1 + 2g 2 .rmbrann@me. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Now consider three vectors. how are the lengths of these segments related (if at all) to the ˜ ˜ ˜ contravariant components { u 1.8 Consider the same irregular base vectors in Study Question 2. with no implied sum on i. Lesson here? Because the base vectors are not necessarily of unit length. (e) In general. g2 ˆ a e2 ˜ ˜ ˜ (b) Using similar geometrical arguments.2. and c . then the length of the segment parallel to g i must ˜ equal u i g ii . Namely. and a 3 =0 .unm. Thereby explain g1 ˜ ˜ why the contravariant components of a ˜ are: a 1 =1 . in ˜ ˜ ˜ b the 1-2 plane as shown.edu http://me. a 2 = – 1 . when any given vector u is broken into segments parallel to the covariant ˜ base vectors { { g 1. g 3 = e 3 . the meaning of the contravariant component u i must never be confused with the length of the associated line segment! The contravariant component u i is merely the coefficient of g i in the lin˜ ear expansion u = u i g i . the vector b can be viewed as a segment equal to g 1 plus a segment equal to 2g 2 . (d) The ˜ ˜ length of the second segment equals ˜ magnitude of 2g 2 . g 3 } . when a vector u is the ˜ ˜ broken into segments parallel to the covariant basis. For example. find the contravariant components of b . b . ˜ c (a) Referring to the sketch.unm. g 2 = – e 1 + e 2 . Hence b 1 =1 . What are the ˜ ˜ ˜ lengths of each of these individual segments? (e) In general. or 2 5 . g 2. g 1 = e 1 + 2e 2 . ˆ e1 ˜ ˜ (d) The path from the tail to the tip of a vector can be decomposed into parts that are parallel to the base vectors. b 2 = etc (c) You’re on your own.html DRAFT June 17. ˜ (c) Find the contravariant components of c . u 2. graphically ˜ demonstrate that g 1 – g 2 gives a vector ˜ ˜ identically equal to a . The component’s magnitude equals the length of the segment divided by the ˜ ˜ length of the base vector! . u 3 } of the vector? Partial Answer: (a) We know that a 1 =1 because it’s the coefficient of g 1 in the expression ˜ a = g 1 – g 2 . (b) Draw a path from the origin to the tip of the vector b such that the path consists of ˜ ˜ ˜ ˜ straight line segments that are always parallel to one of the base vectors g 1 or g 2 . a .

by dotting both sides of Eq.18) are also examples of raising . Thus the metric coefficients serve a role that is very similar to that of the Kronecker delta in orthonormal theory. Similarly. g 3 } are each bases. The sets { g 1. a 2. so we know that we may ˜ ˜ ˜ ˜ express any vector a as ˜ ˜ ˜ a = a k gk = ak g k .40). a 3 } — we merely assert that they exist. the third term simplifies as a k g k • g i = a k g ki = g ik a k˜.˜ Using ˜ Eq. changing it to an “i”. 2004 26 2. g 2. in Eq.unm.36) ˜ ˜ ˜ Dotting both sides of this equation by g i gives a • g i = a k g k • g i = a k g k • g i . Incidentally. ˜Thus. note how the g ik matrix effectively “lowers” the index on a k .19).17) and (2.36) by g i you can show that ˜ ai = a • gi ˜ ˜ a i = g ik a k .edu http://me. changing it to an “i”. we conclude ai = a • gi ˜ ˜ a i = g ik a k . (2. In Eq. (2.rmbrann@me.12) allows us to simplify the second term as a k g k • g i =˜ a k δ˜k = a i . (2. where the last step utilized ˜ the fact that the g ik matrix is symmetric.unm.39) (2. (2.38) (2.edu/~rmbrann/gobag.38). (2.40) These very important equations provide a means of transforming between types of components. a 2. Similarly. (2. we have not yet identified a procedural means of determining the contravariant components { a 1. At this point.3 Transforming components by raising and lower indices. (2. the g ik matrix “raises” the index on a k . g 3 } and { g 1. note that Eqs. g 2.html DRAFT June 17. a 3 } or covariant components { a 1.37) (2. ˜ ˜ ˜ ˜ i Equation (2.

r i g ij s j . Namely. the index “m” is repeated. f n g ni . using lab components. The contravariant components are found from [ F ] – 1 { a } . g pk u p .37) through (2.8.41) Partial Answer: For the first one.edu http://me.20). ˜ ˜ Finding contravariant vector components — accelerated method If the lab components of the vector a are available. we could raise the index j on g ij . Eqs (2. ˜ ˜ STEP 4. Compute the covariant metric coefficients g ij = g i • g j . The final simplified result for the first expression is a l .edu/~rmbrann/gobag. The covariant components are found from [ F ] T { a } . STEP 3. Alternatively. Finding contravariant vector components — classical method In Study Question 2. ˜ ˜ STEP 2.rmbrann@me. 2004 27 and lowering indices.and co-variant components of the vector are determined as follows: STEP 1. Thus. The final simplification comes from recalling that the mixed metric components are identically equal to the Kronecker delta. ˜ ˜ STEP 5. The steps are as follows: ˜ ˜ ˜ STEP 1. The g ml allows you to lower the superscript “m” on a m so that it becomes an “l” when the metric coefficient is removed. Compute the covariant components a i = a • g i . g ij g jk (2. Compute the contravariant basis g i = g ij g j . then you can quickly compute the covariant and contravariant ˜ components by noting that a i = a • g i = a • F • e i = ( F T • a ) • e i and. making it into a “k” so that an alternative simplification of the very last expression becomes g ij g jk = g i k . g 3 } and a vector a . similarly. Alternatively.40) provide us with an algebraic means of determining the contravariant and covariant components of vectors. given a basis { g 1. Construct the [ F ] matrix by putting the lab components of g i into the i th column. STEP 3. the con˜ ˜ ˜ ˜ tra. using lab components. if we prefer to remove the g jk . ˜ STEP 2. g 2. g ij b k g kj . The very last expression has a twist: recognizing the repeated index j . Compute the contravariant components a i = a • g i . (2. Either method gives the same result. g ij g jk = δ ik . .unm. a i = g ij a j . Study Question 2. Compute the contravariant metric coefficients g ij by inverting [ g ij ] . we used geometrical methods to find the contravariant components of several vectors.html DRAFT June 17.9 Simplify the following expressions so that there are no metric coefficients: a m g ml .unm. we can lower the index j on g jk and turn it into an “i” when we remove g ij so that a simplified expression becomes g ij g jk = g i k . ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ a i = ( F – 1 • a ) • e i . Be certain that your final simplified results still have the same free indices on the same level as they were in the original expression. which is merely one of the expressions given in Eq.

in Eq.43c) ˜ ˜ ˜ ˜ m T i•j = g i • T • g j = T kj g ki = T ik g kj = T •n g mi g nj . 0 } .unm. Incidentally.43a) ˜ ˜ ˜ ˜ k T ij = g i • T • g j = T mn g mi g nj = T •j g ki = T i•k g kj (2. 3˜ 3˜ ˜ ˜ ˜ ˜ ACCELERATED METHOD: covariant components: [ F ] T { a } lab = – 1 1 0 1 = – 1 . For example.43) are really more meaningful and those equations are applicable even when you don’t have lab components. 0 } . ˜ ˜ ˜ ˜ ˜ ˜ --a 2 = a • g 2 = ( 2e 1 + e 2 ) • ( – 2 e 1 + 1 e 2 ) = – 1 . the following analogous results can be shown to hold for tensors: i T = T ij g i g j = T ij g i g j = T •j g i g j = T i•j g i g j (2. then ˜ ˜ i [ T ij ] = [ F ] – 1 [ T ] lab [ F ] – T . you look for a subscript i or m on T. . apply the classical step-by-step algorithm to compute the covariant and contravariant components of the vectors. observe how the metric coefficients play a role similar to that of the Kronecker delta for orthonormal bases. . . a and b . if the [ F ] matrix and lab components [ T ] lab are available for T . – 1. 2004 28 Study Question 2.rmbrann@me. The metric coefficient g jn similarly raises the subscript n on T mn to a become a superscript j on T ij . (2. but for general discussions. [ T •j ] = [ F ] – 1 [ T ] lab [ F ] .43a). [ T ij ] = [ F ] T [ T ] lab [ F ] . Be ˜ ˜ sure to verify that your contravariant components agree with those determined geometrically in Question 2. (2. Finding the subscript m. then apply the formulas: a 2 = a • g 2 = ( 2e 1 + e 2 ) • ( – e 1 + e 2 ) = – 1 .43d) ˜ ˜ ˜ ˜ Again. and [ T i•j ] = [ F ] T [ T ] lab [ F ] – T (2. a 3 } = { 4. Check whether the accelerated method gives the same answer.10 Using the basis and vectors from Study Question 2.html DRAFT June 17. and so on giving { a 1. and so on giving { a 1. Eqs.edu http://me.42) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ i •j T ij = g i • T • g j = T mn g im g jn = T •k g kj = T k g ki (2. – 1. Partial Answer: Classical method: First get the contra.40).unm.edu/~rmbrann/gobag. a 3 } = { 1. a 2.37) through (2. agrees! 0 By applying techniques similar to those used to derive Eqs. (2.and co-variant base vectors. . . the expression T mn g im g jn “becomes” T ij as follows: upon seeing g im . a 2. 0 0 1 0 0 1 1⁄3 1⁄3 0 2 0 0 1 0 1 2 0 2 4 agrees! contravariant components: [ F ] –1 { a } lab = – 2 ⁄ 3 1 ⁄ 3 0 1 = – 1 . you replace it with an i and change its level from a subscript to a superscript.8.8. (2.43b) ˜ ˜ ˜ ˜i i •n T •j = g • T • g j = T ik g kj = T kj g ki = T m g mi g nj (2.44) These are nifty quick-answer formulas.

you raise its level and change it to an “m.edu/~rmbrann/gobag.html DRAFT June 17. Study Question 2. or mixed components by appropriate combinations of the metric coefficients to convert them all to pure contravariant components (i. The mixed Kronecker delta δ ij changes the index symbol without changing its level. (a) H ijkl g im g jn q (b) δ m H ipqn g is g nj i•k• (c) H •j•l g im g jn g mp Multiply the following covariant. s••j (b) H •pm• (c) For the first and last “g” factor.unm. (2.” leaving a “dot” placeholder mn•• where the old subscript lived.46) Contrast this with the metric coefficients g ij and g ij . the final simplified result is H ••kl . or δ ij ) present..” Alternatively.e. (d) The i and j are already on the upper level. 2004 29 Our final index changing properties involve the mixed Kronecker delta itself.unm. you multiply by g km . which change both the symbol and the level of the sub/superscripts. g ij .edu http://me. you can recognize the opportunity to apply Eq.” Finding “i” as a subscript on H.45) etc. and then Eq. (2. all superscripts).11 Let H denote a fourth-order tensor. so you leave them alone.rmbrann@me. Use the method of raising and ~ ~ lowering indices to simplify the following expressions so that there are no metric coefficients or Kronecker deltas ( g ij . of course! . For example.20) so that g im g mp = δ ip . (g) Yes. To raise the “k” subscript. you can lower the “i” to make it an “m” and then raise the “m” to make it a “p. you can get rid of it if you can find a subscript “i” or “m. ij•• (d) H ••kp (e) H ijkl i•k• (f) H •j•l (g) Are we having fun yet? Partial Answer: (a) Seeing the g im .46) allows you to change the original superscript “i” on the H tensor to a “p” leaving its level unchanged. After similarly eliminating the g jn . The “p” is raised similarly so that the final ij•• result is H ijmn = H ••kp g km g pn . (2. v i δ ij = v j T ij δ ik = T kj v i δ ji = v j i •n δ jn T i•j δ m = T m (2.

we note that g =δ ij . (3.edu http://me.1) ˜ ˜ ˜ ˜ ˜ ˜ Substituting the definition (2.5) ˜ ˜ Whenever new results are derived for a general basis. non-normalized. (3.html DRAFT June 17. and 3. it is important to keep in mind that structured (direct) notation formulas remain unchanged.2. 3. we recognize the index raising combination b i g ki = b k . the formula for the dot product becomes a • b = aibjg = a 1 b 1 g 11 + a 1 b 2 g 12 + a 1 b 3 g 13 + a 2 b 1 g 21 + … + a 3 b 3 g 33 . normalized. 2004 30 3. Presuming that the contravariant components of two vectors a and b are known. Tensor algebra for general bases A general basis is one that is not restricted to be orthogonal. Thus Eqs. for an ij ˜ ˜ ˜ orthonormal basis. .unm.4. g 3 } happens to be orthonormal. for example.3. (3.2) ij ˜ ˜ This result can be written in other forms by raising and lowering indices.11) into Eq. we may ˜ ˜ expand the direct notation for the dot product as a • b = ( a i gi ) • ( b j gj ) = a i b j ( gi • gj ) . (3.edu/~rmbrann/gobag. 3. 3. (3. (3.5 do indeed all reduce to the usual orthonormal formula. For the special case that the basis { g 1. there is no difference between contravariant and covariant components.3) ˜ ˜ The simplicity of this formula and its close similarity to the familiar formula from orthonormal theory is one of the principal reasons for introducing the dual bases. Here we show how the component formulas take on different forms for general bases that are permissibly nonorthogonal. Noting. to obtain yet another formula: a • b = ai bi = a1 b 1 + a2 b 2 + a3 b 3 . Furthermore. it is always advisable to ensure that they all reduce to a familiar form whenever the basis is orthonormal. 3. We can lower the index on a i by writing it as g ki a k so we obtain another formula for the dot product: = a 1 b 1 g 11 + a 1 b 2 g 12 + a 1 b 3 g 13 + a 2 b 1 g 21 + … + a 3 b 3 g 33 .4) a • b = a k b i g ki ˜ ˜ Finally. we obtain a much simpler formula for the dot product ij a • b = a i bi = a 1 b1 + a 2 b2 + a 3 b3 . or right-handed. Throughout this section.rmbrann@me. g 2. and/or non-righthanded. that b j g = b i .unm.1).1 The vector inner (“dot”) product for general bases.

resulting in the Kronecker delta δ ji .rmbrann@me. Consider. (a) Express a and b in terms of the e lab ˜ ˜ ˜ basis. multiply them by a metric coefficient 2.1 Consider the following nonorthonormal base vectors: g 1 = e 1 + 2e 2 ˜ ˜ ˜ g2 = – e1 + e2 and g3 = e3 ˜ ˜ ˜ ˜ ˜ Consider two vectors.unm.2) and (3. of course it agrees! ˜ ˜ ˜ ˜ ˜ g2 ˜ e2 ˜ b ˜ g1 ˜ a ˜ e1 ˜ Index contraction The vector dot product is a special case of a more general operation. the expression v i W jk Z lmn . (b) Compute a • b in the ordinary familiar ˜ ˜ manner by using lab components. of course. Given two free indices in an expression. In both cases. the indices on the metric coefficient or Kronecker delta must be the same symbol as the indices being contracted. and we will suppose that it therefore corresponds to a sixth-order tensor with the associated base vectors being ordered the same as the free indices g i g j g k g l g m g n . Thus.unm. if the indices started at the same level. which lie on different levels. Index contraction is actually basis contraction and it is the ordering of the basis that dictates the contraction. multiply by a Kronecker delta (and simplify). This expression has six free indices. The symbol C 1 denotes 8 contraction of the first and second indices. C 2 would denote contraction of the second and eighth indices (assuming. contracting the first and . that the expression has eight free indices to start with!).edu http://me. if the indices are at different levels. a and b . ˜ ˜ Does the result agree with part (b)? Partial Answer: (a) a = 2e 1 + e 2 (b) a • b = 2 (c) yes.3) to compute a • b . (3. To contract the first and sec˜ ˜ ˜ ˜ ˜ ond indices ( i and j ). we say that we “contract” them when they are turned into dummy summation indices according to the following rules: 1. lying in the ˜ ˜ 1-2 plane as drawn to scale in the figure. ˜we must dot the first and second base vectors into themselves. which we will call index contraction.html DRAFT June 17. 2004 31 Study Question 3. but at different levels so that they become dummy sum2 mation symbols upon application of the summation conventions.edu/~rmbrann/gobag. (c) Use Eqs. for example.

To summarize. Then ˜ ˜ ˜ ˜ ˜ ˜ m m m (3.6) T • v = T ij v k g kj g i . (2. (2. ˜ ˜ ˜ In general. 2004 32 second indices of v i W jk Z lmn gives δ ji v i W jk Z lmn . To compute the contraction C 4 . we merely applied Eq.html DRAFT June 17. Incidentally. ˜ ˜ ˜ ˜ ˜ In all these formulas.1. For the final step.12).2 Other dot product operations Using methods similar to those in Section 3. Note that the contraction operation has reduced the order of the result from a sixth-order tensor down to a 6 fourth-order tensor. which results in g ln . If you know only T ij and v k . the dot product results in a summation between adjacent indices on opposite levels. note that the vector dot product u • v can be expressed in terms of a contrac˜ β˜ 2 tion as C 1 operating on the dyad u v .unm. indicial forms of the operation T • v are ˜ ˜ ˜ found to be i T • v = T ij v j g i = T ij v j g i = T •j v j g i etc. 3. which simplifies to v i W ik Z lmn . contracting the fourth and sixth indi˜ ces of v i W jk Z lmn gives g ln v i W jk Z lmn .11). the above result may be simplified by using the g jm to lower the “j” superscript on A ij (changing it to an . No matter what basis is used to expand a tensor. we must dot the fourth base vector g l ˜ into the sixth base vector g n . the correct indicial expression for any operation involving dot products can be derived by starting with the direct notation and expanding each argument in whatever form happens to be available. Index contraction is equivalent to dotting together the base vectors associated with the identified indices. For example.rmbrann@me. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Note the change in dummy sum indices from ij to mn required to avoid violation of the summation conventions. This approach typically leads to the opportunity to apply one or more of Eqs.edu/~rmbrann/gobag.11). For example. the result of a contraction operation will be the same.unm. In other words.7) A • B = ( A ij g i g j ) • ( B •n g m g n ) = A ij B •n g i ( g j • g m )g n = A ij B •n g jm g i g n . but the phrase “index contraction” is very common. (2. you must first raise an index to apply the dot product. or (2. By this. we mean that you can apply the contract and then change the basis or vice versa — the result will be the same. Thus. the contraction operation is invariant under a change in basis. If desired. (3. The formal C α notation for index contraction is rarely ˜˜ used.19).edu http://me. suppose you know two tensors in the forms i A = A ij g i g j and B = B •j g i g j . β C α denotes contraction of the α th and β th base vectors.

unm. it must hold for any particular choice.unm. B nj g n g j ˜ ˜ A im B nj g mn g i g j ˜ ˜ im B g g j A mj i ˜ ˜ A i•m B mj g i g j ˜ ˜ B nj g n g j ˜ ˜ A im B mj g i g j ˜ ˜ n B •j g n g j ˜ ˜ •j Bn g n gj ˜ ˜ •j A im B n g mn g i g j ˜ ˜ A im g i g m ˜ ˜ im g g A i m ˜ ˜ i A •m g i g m ˜ ˜ A i•m g i g m ˜ ˜ n A im B •j g mn g i g j ˜ ˜ i B mg gj A •m •j i ˜ ˜ i •j A •m B n g mn g i g j ˜ ˜ A i•m B nj g mn g i g j ˜ ˜ 3.10) Since this must hold for all vectors.2 Complete the following table.9) A • B = A ij B jn g i g n . which shows the indicial components for all of the sixteen possible ways to express the operation A • B . Hence those compo˜ ˜ i nents represent the mixed “ •n ” components of A • B .8) A • B = A •m B •n g i g n . ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ T (3.rmbrann@me.11) ˜ ij ˜ Similarly. ˜ ˜ Study Question 3. ˜ ˜ ˜ ˜ These are all valid expressions for the composition of two tensors. we could have used the g jm to lower the “m” superscript on B (changing it to a “j”) so that (3.edu http://me.3 The transpose operation The transpose B of a tensor B is defined in direct notation such that ˜ ˜ ˜ ˜ T u • B • v = v • B • u for all vectors u and v .12) . Taking u =g i and ˜ ˜ v =g j . ˜ ˜ ˜ ˜ Alternatively. Note that the final result in the last three equations involved components times the basis dyad g i g n . (3. ˜ ˜ T (3.html DRAFT June 17. depending on what ˜ ˜ type of components are available for A and B . 2004 33 “m”) so that i m (3. The shaded cells have the simplest form ˜ ˜ because they do not involve metric coefficients.edu/~rmbrann/gobag. you can show that ( B ) ij = B ji . we see that ˜ ˜ T ( B ) = B ji .

i A •j = A j•i . Only their ordering is switched.13) Note that the high/low level of the indices remains unchanged.18) This result further validates raising and lowering indices by multiplying by appropriate metric coefficients — such an operation merely represents dotting by the identity tensor. (3. Thus. ˜ ˜ T (3.15) the components of a symmetric tensor satisfy A ij = A ji .unm.17) Since this must hold for all vectors. (3.html DRAFT June 17.11) through (3. 3. The identity tensor I is the unique symmetric tensor for which ˜ ˜ u • I • v = u • v for all vectors u and v . it must hold for any particular choice. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ The coefficient of g i g j is B ij .edu http://me.15) 3. Therefore.edu/~rmbrann/gobag. (3. ˜ ˜ ˜ ˜ j ( B ) i•j = B •i .11). Namely. it can be simi˜ ˜ ˜ ˜ larly shown that the contravariant components of the identity are g ij . taking u =g i and v =g j . and the right hand side is g ij . Taking u =g i and ˜ ˜ v =g j . The left-hand side represents the ij covariant compo˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ nents of the identity. A ij = A ji . ( B )ji = B ij . Thus.rmbrann@me.16) The last relationship shows that the “dot placeholder” is unnecessary for symmetric tensors. the mixed components of the identity are found to equal δ ij . i ( B ) •j = B j•i . Therefore. 2004 34 The mixed components of the transpose are more different.4 Symmetric Tensors A tensor A is symmetric only if it equals its own transpose. ˜ ˜ I = g ij g i g j = g ij g i g j = δ ij g i g j = δ ji g i g j ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ (3.14) All of the above relations could have been derived in the following alternative manner: The transpose ( u v ) T of any dyad is simply v u . and we may write simply A ij without ambiguity.5 The identity tensor for a general basis. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ (3. ˜ ˜ ˜ ˜ T T (3. Similarly. referring to ˜ ˜ Eqs. . we can write: B = ( B ij g i g j ) T = B ij ( g i g j ) T = B ij g j g i . By taking u =g i and u =g j .unm. we find that g i • I • g j = g i • g j . knowing that the transpose of a sum is ˜˜ ˜˜ the sum of the transposes and that any tensor can be written as a sum of dyads. By taking u =g i and ˜ ˜ u =g j . ˜ ˜ T (3. which is the same as Eq.

(3. The mixed components of T with respect to this ˜ ˜ principal basis are then i T •j = p i • T • p j = p i • ( λ j p j ) = λ j δ ji ..6 Eigenproblems and similarity transformations The eigenproblem for a tensor T requires the determination of all eigenvectors u and ˜ ˜ ˜ corresponding eigenvalues λ such that T • u = λu . The characteristic ˜ equation for T T is the same as that for T . then those ˜ ˜ eigenvectors form an acceptable basis. so we can select normalization such that the renamed vectors are truly dual bases: m p m • pk = δk .21) T • uk = λk uk .unm. Then ˜ ˜ ˜ ˜ ˜ λm vm • uk = λk vm • uk . 2004 35 3. it is diagonal in its mixed ˜ ˜ . so the eigenvalues are ˜ the same for both the ˜ ˜ ˜ ˜ right and left eigenproblems. Consider a particular eigenpair ( λ k. u k ) : ˜ (no sum on k) (3. ˜ ˜ ˜ ˜ ˜ ˜ (no sum on j) (3. noting that v m • T = λ m v m (no sum on m). however. (no sums) (3.unm.20) ˜ ˜ ˜ ˜ In other words. (no sum on k) (3. If. (3. We can also define a “left” ˜ eigenvector v such that ˜ v • T = λv . ˜ ˜ ˜ ˜ Dot from the left by a left eigenvector v m .edu/~rmbrann/gobag. the geometric multiplicity of eigenvalues might be less than their algebraic multiplicity).edu http://me. the left eigenvectors are the right eigenvectors of T T .e.26) Ah! Whenever a tensor T possesses a complete set of eigenvectors. the tensor T happens to possess a complete (or “spanning”) set of eigenvectors.rmbrann@me. (3. ˜ ˜ Rename v m = p m . ˜ ˜ ˜ ˜ (3. This motivates renaming the eigenvectors using dual basis notation: Rename u k = p k .24) ˜ ˜ The magnitudes of eigenvectors are arbitrary.23) ˜ ˜ From this result we conclude that the left and right eigenvectors corresponding distinct ( λ m ≠ λ k ) eigenvalues are orthogonal.19) In this form.25) ˜ ˜ Nonsymmetric tensors might not possess a complete set of eigenvectors (i.html DRAFT June 17.22) ˜ ˜ ˜ ˜ Rearranging. the eigenvector u is called the “right” eigenvector. ( λm – λk ) ( vm • uk ) = 0 .

edu http://me. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ 1 (3. (3. e 2. (3. such that ˜ ˜ p k = F • e k and p k = F – T • e k .2) and (2.32) .28) In matrix analysis books. (2.29) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ The columns of the matrix of F with respect to the laboratory { e 1. this result is usually presented as a similarity transformation.28) can be written as similarity transformation. As was done in Eqs. 3 T = ˜ ˜ where ∑ k = λ k ( F • e k ) ( F –T • e k ) = F • Λ • F –1 . (3.rmbrann@me.31) 3 Λ ≡ ˜ ˜ ∑ k = 1 λ1 0 0 λk ek → ˜ ˜ ek 0 λ2 0 0 0 λ3 ek ek ˜ ˜ . the diagonalization result (3.unm.32). we can invoke the existence of a basis transformation tensor F . Namely. λ1 0 0 i [ T •j ] = 0 λ2 0 0 0 λ3 pi pj ˜ ˜ .edu/~rmbrann/gobag.html DRAFT June 17. 2004 36 principal basis! Stated differently.30) With Eq. ˜ ee ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜˜ (3. (3. e 3 } basis are simply ˜ ˜ ˜ ˜ ˜ the right eigenvectors expressed in the lab basis: [F ] = [ { p1 }e { p2 }e { p3 }e ] .unm.27) or 3 T = ˜ ˜ ∑ k = 1 λk pk p k . ˜ ˜ (3.29).

2004 37 Study Question 3. Find the labora˜ ˜ tory components of the basis transformation tensor F . the transformation tensor F is typically the defor˜ ˜ mation gradient tensor. ˜ ˜ with the diagonal components being equal to the eigenvalues of T . It is important to have at least a vague notion of this concept in order to communicate effectively with researchers who prefer to do all analyses in general curvilinear coordinates.. T = α •j g i g j ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ then the tensors are similar.edu/~rmbrann/gobag. gk = F • Gk and therefore (3. .3 Consider a tensor having components with respect to the orthonormal laboratory basis given by [T ] = ˜ ˜ 2 1 0 1 –2 2 –2 0 2 . When two tensors T and ˜ S possess the same components but with respect to different mixed bases. there exists a transformation tensor F such that ˜ ˜ –1 . p 2. p 3 } .31) that T is similar to a tensor that is ˜ diagonal in the laboratory basis. (3.rmbrann@me. Also ˜ ˜ ˜ ˜ ˜ verify Eq. when ˜ ˜ ˜ i i and (identical mixed components) (3. We now mention the more general relationship between similar components and similarity transformations.33) Prove that this tensor has a spanning set of eigenvectors { p 1.e. the discovery that two tensors have identical mixed components with respect to different bases has great significance.unm.edu http://me.html DRAFT June 17. whereas you might find it more meaningful to recognize this situation as merely a similarity transformation.35) T = F • S • F ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ In the context of continuum mechanics. In other words. To them.34) S = α •j G i G j . ek ek ˜ ˜ (3. ˜ ˜ Partial Answer: [ F ] = ˜ ˜ 1 2 –1 2 0 1 1 1 0 . i.unm. The above discussion focused on eigenproblems. such˜ that p k = F • e k .

41) .  +1 if ijk = 123.( g i × g j ) . In other words.unm. (2. The above defined permutation symbols are the components of the alternating tensor with respect to any regular right-handed orthonormal laboratory basis.15) can be expressed compactly as a single indicial equation: 1 ε ijm g m = -. the alternating tensor is ξ = ε ijk e i e j e k .  otherwise 0 (3. the three formulas written explicitly in Eq. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ where F ˜ ˜ gk = ˜ is a basis transformation tensor defined such that F • Gk . T = β ij g i g j and ˜ ˜ ˜ ˜ T = F • S • FT . ˜ ˜ ˜ (3.level at which the indices are placed. Namely.39) Important: These quantities are all defined the same regardless of the contra. A similar situation was encountered in connection with the Kronecker delta of Eq. (2. It is not allowed to raise or lower indices on the ordinary permutation symbol with the metric tensors. 132 . As we did for the Kronecker delta.7 The alternating tensor When using a nonorthonormal or non-right-handed basis. ˜ ˜ ˜ ˜ ˜ (3.html DRAFT June 17.4 Suppose two tensors T and S possess the same contravariant ˜ ˜ ˜ ˜ components with respect to different bases. =  –1 if ijk = 321. ε ij• .36) Demonstrate by direct substitution that (3.37) 3. it is a good idea to use a different symbol ξ for the alternating tensor so that the permutation symbol ε ijk may retain its ˜ ˜ usual meaning.edu/~rmbrann/gobag.or co. 312  ••k ε ijk. we will assign the same meaning to the permutation symbol regardless of the contra/covariant level of the indices. 2004 38 Study Question 3.edu http://me.40) (3.10).unm. 213. J ˜ ˜ ˜ In terms of an orthonormal right-handed basis. ε ijk.rmbrann@me.38) S = β ij G i G j ˜ ˜ ˜ ˜ (same contravariant components) (3. Through the use of the permutation symbol. 231. so they do not transform co/contravariant level via the metric tensors. etc.

F • e k ] = J [ e i. We do not adopt this practice.46) Study Question 3.42) ξ ijk = [ g i. e j.html DRAFT June 17.unm. (3. .ε ijk J (3.1) into Eq. For a left-handed basis. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ where (3. Prove that ξ ijk = – ε ijk .44) ξ ijk = Jε ijk . (3. 2004 39 In terms of a general basis. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Study Question 3.45) Hence. (3. g j.43) to prove that 1 ξ ijk = -. g j. g k ] .40).48) This is why some textbooks claim to “define” the permutation symbol to be its negative when the basis is left-handed.47) (3.6 Consider a basis that is orthonormal but left-handed (see Eq. g j. but the components of the alternating tensor change sign.43) and using the direct notation definition of determinant gives ξ ijk = [ F • e i . F • e j . in Eq.edu http://me.unm. (3. g k ] . g k ] .32). the basis form for the alternating tensor is ••k ξ = ξ ijk g i g j g k = ξ ijk g i g j g k = ξ ij• g i g j g k = etc. ˜ ˜ ˜ ••k ξ ij• = [ g i. (3. . (2. ˜ ˜ ˜ etc.5 Use Eq. (2. 2. simplifying the last expression. ξ ijk = [ g i. We keep the permutation symbol unchanged. (3. (3.rmbrann@me. the permutation symbol stays the same. e k ] = Jε ijk . ˜ ˜ ˜ Using Eq. This result could have been derived in the following alternative way: Substituting Eq.8). the covariant alternating tensor components simply equal to the permutation symbol times the Jacobian. ˜ ˜ ˜ ˜ ˜ Thus.edu/~rmbrann/gobag.43) m ξ ijk = ( g i × g j ) • g k = ( Jε ijm g m ) • g k = Jε ijm δ k .

according to these books. In other words. Hence.rmbrann@me.edu http://me. each tensor can be expanded in terms of known components times base i vectors and then Eq.˜ ( g i g j ): ( g m g n ) = δ im δ jn ˜ ˜ ˜ ˜ ( g i g j ): ( g m g n ) = g im g jn ˜ ˜ ˜ ˜ ( g i g j ): ( g m g n ) = δ im g jn ˜ ˜ ˜ ˜ (3.49) (3. the cross product is defined u × v = ξ:u v . a b and ˜˜ r s is a scalar defined ˜˜ ( a b ): ( r s ) ≡ ( a • r ) ( b • s ) (3.edu/~rmbrann/gobag.51) Axial vectors Some textbooks choose to identify certain vectors such as angular velocity as “different” because. ˜ ˜ ˜ Alternatively. is defined to be distributive over addi˜ ˜ tion.53) The “etc” stands for the many other ways we could possibly mix up the level of the indices. (3. this result tells us that ( g i g j ): ( g m g n ) = g im g jn ˜ ˜ ˜ ˜ i j ( g i g j ): ( g m g n ) = δ m δ n ˜ ˜ ˜ ˜ j ( g i g j ): ( g m g n ) = g im δ n ˜ ˜ ˜ etc. ~ ˜˜ ˜ ˜ ~ Hence. Malvern suggests alternatively redefining the permutation symbol to be its negative for left-hand basis. The inner product between two tensors.9 Scalar valued operations TENSOR INNER (double dot) PRODUCT The inner product between two dyads. This viewpoint is objectionable since it intimates that physical quantities are affected by the choice of bases.52) ˜ ˜ ˜˜ ˜ ˜ ˜ ˜ When applied to base vectors. the mixed A •j .53) can be applied to the base vectors. they take on different form in a left-handed basis. (3. u × v = ξ ijk u j v k g i .8 Vector valued operations CROSS PRODUCT In direct notation. Specifically Eqs. If.8) show that the alternating tensor components ξ ijk will automatically change sign for left-handed systems. the basic trend should be clear. Those textbooks handle these supposedly different “axial” vectors by redefining the left-handed cross-product to be negative of the right-hand definition.html DRAFT June 17.44) and (2. Malvern’s suggested sign change is handled automatically by defining the alternating tensor as we have above. there is no need for special formulas for left-hand bases. with this approach. u × v = ξ ijk u j v k g i .50) (3. 2004 40 3.unm. for example. ˜ ˜ ˜ (3. A and B . 3.unm.

The definition of index “contraction” is given on page 31. the tensor magnitude is computed by rooting the summation running over each component of A multi˜ i plied by its dual component. the trace is found by contracting the base vectors to give ˜ ˜˜ ˜ ˜ i trB = B ij g i • g j = B ij g i • g j = B •j g i • g j = B i•j g i • g j . A and B . note that the magnitude of a tensor is not found by simply summing the squares of the tensor components (and rooting the result). the components A •j are known.59) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ The double dot operation is distributive over addition. There are four ways to write the tensor B : ˜ ˜ ˜ ˜ i B = B ij g i g j = B ij g i g j = B •j g i g j = B i•j g i g j . To clarify.unm.60) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ 1. (3.html DRAFT June 17. you should con˜ ˜ 1 tract the adjacent indices pairwise so that the first index in A is contracted with the first index ˜ in B and the second index in A is contracted with the second index in B . In direct notation.edu http://me.rmbrann@me. If. 2. (3.edu/~rmbrann/gobag. 2004 41 components of A and the contravariant B mn components of B are known. for any vectors u and v . then the inner ˜ ˜ product between A and B is ˜ ˜ i i i j i A :B = ( A •j g i g j ): ( B mn g m g n ) = A •j B mn ( g i g j ): ( g m g n ) = A •j B mn g im δ n = A •j B mj g im (3. to find the inner product between two tensors. resulting in a scalar. ˜ ˜ I : ( u v ) = u • v .58) TRACE The trace an operation in which the two base vectors of a second order tensor are 2 contracted together.55). Therefore. the trace of a tensor B can be ˜ defined trB = I :B . Furthermore.54) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Stated differently.56) TENSOR MAGNITUDE The magnitude of a tensor A is defined ˜ A = A :A (3. The definition of index “contraction” is given on page 31. then the dual m components A i•j must be computed by A i•j = g im A •n g jn so that i i m A :A = A •j A i•j = ( A •j A •n )g im g jn ˜ ˜ (3. here are ˜ ˜ ˜ some expressions for the tensor inner product for various possibilities of which components of each tensor are available: i i A :B = A ij B ij = A ij B ij = A •j B mj g im = A •j B i•j = A ij B mn g im g jn = … ˜ ˜ Note that (3. for example. . Instead.57) ˜ ˜ ˜ Based on the result of Eq. (3. (3.55) A :B = tr ( A T • B ) = tr ( A • B T ) ˜ ˜ ˜ ˜ ˜ ˜ where “tr” is the trace operation.unm.

edu http://me. [ u. the left hand side represents the determinant of the contravariant component matrix T ij times the permutation symbol ε pqr . (3. (3.edu/~rmbrann/gobag. (3.67) ˜ Similarly. v.unm. v. T • w ] = detT [ u.68) ˜ In other words. applying Eq.50). 2004 42 or k •k trB = B ij g ij = B ij g ij = B •k = B k . Recalling Eq. Suppose the components of T are known in mixed form. v.62) (3. Then Eq. ˜ ˜ ˜ (3.61) ˜ Note that the last two expressions are the closest in form to the familiar formula for the trace in orthonormal bases. (2. ˜ ˜ ij ] . (3.ε pqr .65) ˜ Recalling Eqs.47). the above equation implies that det [ T ij ] = ( detT ) ⁄ J 2 .64) gives ˜ ˜ i T j T k = detT ξ .63) DETERMINANT The determinant of a tensor T is defined in direct notation as ˜ [ T • u. ξ ijk T •p •q •r ˜ pqr (3.64) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Recalling that the triple scalar product is defined [ u. Specifically.html DRAFT June 17. w } . w ] = ξ ijk u i v j w k . you compute the determinant of the tensor T by finding the determinant of the [ T ij ] matrix and multiplying the result ˜ ˜ by g o . if you have the matrix of covariant components. detT = g o det [ T (3. ˜ ˜ ˜ ˜ ˜ ˜ Thus.23).rmbrann@me. w ] ≡ u • ( v × w ) . (3. v. we may introduce the ordinary permutation symbol to write this result as detT ˜ε ijk T ip T jq T kr = ---------. Therefore.unm. v. w ] for all vectors { u. it can be shown that detT = g o det [ T ij ] . J2 (3. (3. we may interpret this result as a matrix equation.69) . w ] = u • ( v × w ) we may apply ˜ ˜ ˜ ˜ ˜ ˜ previous formulas for the dot and cross product to conclude that ξ ijk T ip T jq T kr = ( detT )ξ pqr .66) Now that we are using the ordinary permutation symbol. (3. T • v. [ u. TRIPLE SCALAR-VALUED VECTOR PRODUCT In direct notation.44) and (3.

html DRAFT June 17. (3.10 Tensor-valued operations TRANSPOSE We have already discussed one tensor-valued operation.75b) (3.73) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ In Section 3.unm.rmbrann@me. (3. we have intentionally used different indices ( m and n ) on the left-hand-side to remind the reader that matrix equations are not subject to the same index rules.72) 3. Specifically.74) ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Most people are more accustomed to thinking of transposing components rather than base vectors.edu http://me.76a) ˜ [ ( T T ) mn ] = [ T ij ] T . T • gi = gi • T • g (T j or g j • T •j ˜ ˜ ˜ ˜j ˜ ˜ ˜ j ( T T ) •i = T i•j .45). we note that ( T T ) ji = T ij . consider a tensor expressed in one of its four possible ways: i T = T ij g i g j = T ij g i g j = T •j g i g j = T i•j g i g j . The indices are present within the matrix brackets only to indicate to the reader which components (covariant.unm.76d) ˜ Here. we may introduce the ordinary permutation symbol. (3. ˜ ˜ (3.75c) (3. (3. contravariant.edu/~rmbrann/gobag. or g j • T i i ˜ ˜ ˜j ˜ ˜ ˜ ˜ T ) •i = T i .76c) ˜ m [ ( T T ) •n ] = [ T i•j ] T . Referring to the above result.71) detT = det [ T i•j ] .75d) [ ( T T ) mn ] = [ T ij ] T . the determinant of the tensor T is equal to the determinant of the matrix of mixed ˜ components: i detT = det [ T •j ] . these equations would be written (3. or mixed) are contained in the matrix.75a) (3. (3. (3. the above result as i j k ε ijk T •p T •q T •r = detT ε pqr . we showed that the transpose may be obtained by transposing the matrix base dyads. it can be shown that (3. or g j • T T • g i = g i • T • g j ˜ ˜ ˜ ˜ ˜ ˜ ˜ Using our matrix notational conventions. Noting that the J’s on each side cancel.76b) ˜ •n i [ ( T T ) m ] = [ T •j ] T . (3. i T T = T ij g j g i = T ij g j g i = T •j g j g i = T i•j g j g i . . (3. or g j • T T • g i = g i • T • g j ˜ ˜ ˜ ˜ ˜ ˜ ˜ T) T • g = g • T • g ( T ji = T ij .3. the transpose. 2004 43 Recalling Eqs.70) ˜ Therefore. ˜ ˜ Likewise.

Eq. ˜ ˜ ˜ Hence T ik U kj = δ ij . (3.rmbrann@me. Let U = T – 1 . (3. ˜ INVERSE Consider a tensor T . (3.html DRAFT June 17.80d) COFACTOR Without proof. = [ T ij ] – 1 .edu http://me. = [ ( T – 1 ) i•j ] – 1 . (3.edu/~rmbrann/gobag.80a) (3. Then ˜ ˜ ˜ T • U = I.79) i •j This result states that the high-low mixed i high-low mixed [ T •j ] matrix. ˜ The mixed component formulas are more tricky.76b) says that the covariant component matrix for T T is just the transpose of ˜ the covariant matrix for T .76a) says that the matrix of contravariant components of [ T T ] are obtained by ˜ taking the transpose of the matrix of contravariant components of the original tensor [ T ] . ˜ (3. The full set of ˜ . ˜ Similarly. Likewise. ˜ i = [ ( T – 1 ) •j ] – 1 .76d) states that the high-low mixed components of T T are obtained by ˜ ˜ taking the transpose of the low-high components of T .unm. we claim that similar methods can be used to show that the i matrix of high-low mixed •j components of T C are the cofactor of the low-high [ T i•j ] matrix.77) is i k T •k U •j = δ ji .unm.78) Hence. Equation (3.80c) (3. ˜ i Note that the •j components come from the i•j matrix.77) (3. components of T – 1 are obtained by inverting the ˜ The formulas for inverses may be written in our matrix notation as [ ( T – 1 ) mn ] ˜ [ ( T – 1 ) mn ] ˜ •n [ ( T –1 ) m ] ˜ m [ ( T – 1 ) •n ] ˜ = [ T ij ] – 1 . 2004 44 Equation (3. The contravariant matrix for T C will ˜ equal the cofactor of the covariant matrix for T times the metric scale factor g o . Eq.76c) states that the low-high mixed components of T T are obtained by taking the transpose of the high-low components ˜ of T . the contravariant components of T – 1 are obtained by inverting the T ij matrix of covari˜ ant components. (3.80b) (3. Alternatively note a different component form of Eq.

= g o [ T ij ] C .unm. u v = ( u i g i ) ( v j g j ) = u i v j g i g j . ˜˜ .rmbrann@me. i = [ T •j ] C . ˜˜ i ( u v ) •j = u i v j .html DRAFT June 17.83) (3. Thus. 2004 45 formulas is [ ( T C ) mn ] ˜ [ ( T C ) mn ] ˜ •n [ ( T C )m ] ˜ m [ ( T C ) •n ] ˜ = g o [ T ij ] C .81b) (3. DYAD By direct expansion. (3.edu http://me. ˜˜ ˜ ˜ ˜ ˜ ij = u i v j .81a) (3.85) ( u v ) i•j = u i v j .84) (3.81c) (3. (uv) ˜˜ Similarly.edu/~rmbrann/gobag.unm.82) (3.81d) = [ T i•j ] C . ˜˜ (3. ( u v ) ij = u i v j .

Throughout this document. we will now refer to the first system as the “A” system and the other system as the “B” system. ˜ 1 { gB.1) ˜ ˜ It’s true that the individual components will change if the basis is changed. Consequently. a vector’s components must change in a very specific way if a different basis is used. You should regard the system indicator (A or B) to serve the same sort of role as the “dot placeholder” discussed on page 16 — they are not indices.unm. ˜ 2 gB .unm.4) (4. 2004 46 4. To help with this book-keeping nightmare. With this convention. we have asserted that any vector v can (and should) be ˜ regarded as invariant in the sense that the vector itself is unchanged upon a change in basis. Chapter 3 had focused exclusively on the implications of using an irregular basis.html DRAFT June 17. ˜ A g2 . Basis transformation discussions are complicated and confusing because you have to consider two different systems at the same time. all that matters is the possibility that the basis might be non-normalized and/or non-right-handed and/or non-orthogonal. This chapter may be skipped without any negative impact on the understandability of later chapters. we may now say A { g1 .edu/~rmbrann/gobag.5) . but the sum of components times base vectors will be invariant. Since each system has both a covariant and a contravariant basis. ˜ 1 { gA. Each contravariant index (which is a superscript) will be accompanied by a subscript (either A or B) to indicate which system the index refers to. The vector can be expanded as a sum of components v i times corresponding base vectors g i : ˜ v = v i gi (4. Likewise. ˜ A g3 } ˜ 3 gA } ˜ B g3 } ˜ 3 gB } ˜ are the covariant base vectors for system-A are the contravariant base vectors for system-A are the covariant base vectors for system-B are the contravariant base vectors for system-B (4. This present chapter provides a gentle transition between the topics of Chapter 3 to the topics of Chapter 4 by outlining the transformation rules that govern how components of a vector with respect to one irregular basis will change if a different basis is used at that same point in space. For vector and tensor algebra.edu http://me.2) (4. ˜ B { g1 . each covariant index (a subscript) will now be accompanied by a superscript (A or B) to indicate the associated system. Basis and Coordinate transformations This chapter discusses how vector and tensor components transform when the basis changes. ˜ 2 gA . By contrast.3) (4. talking about two different systems entails keeping track of four different basis triads.rmbrann@me. ˜ B g2 . For vector and tensor algebra (Chapter 3). it doesn’t matter whether or not the basis changes from one point in space to another — that’s because algebraic operations are always applied at a single location. differential operations such as the spatial gradient (discussed later in Chapter 5) are dramatically affected by whether or not the basis is fixed or spatially varying.

The metric coefficients for any given system are defined in the usual way and the Kronecker delta relationship between contravariant and covariant base vectors within a single system still holds.7) ˜ ˜ i v = v B g iB (4.9) ˜ ˜ Keep in mind that the system label (A or B) is to be regarded as a “dot-placeholder. 2004 47 The system flag.unm. Now that we are dealing with up to four basis triads.edu http://me. we can further define new matrices that inter-relate the two systems: AB g ij = g iA • g jB ˜ ˜ BA g ij = g iB • g jA ˜ ˜ iB = g i • g B g Aj A j ˜ ˜ i iA g Bj = g B • g jA ˜ ˜ i j ij g AB = g A • g B ˜ ˜ i j ij g BA = g B • g A ˜ ˜ Aj = g A • g j g iB i B ˜ ˜ j Bj g iA = g iB • g A ˜ ˜ (4.edu/~rmbrann/gobag.16a) (4. AA g ij = g iA • g jA ˜ ˜ iA = g i • g A = δ i g Aj j A j ˜ ˜ j Aj g iA = g iA • g A = δ ij ˜ ˜ ij i • gj g AA = g A A ˜ ˜ (4.rmbrann@me.16c) (4.” not an implicitly summed index. Hence. Namely.10) (4. A or B.14) (4. the basis expansion of a vector can be any of the following: i (4.6) v = v A g iA ˜ ˜ i v = v iA g A (4. depending on which of the above four bases are used.12) ˜ ˜ ˜ ˜ i i j iB ij g Bj = g B • g jB = δ ji g BB = g B • g B (4.8) ˜ ˜ i v = v B g iB (4.html DRAFT June 17. This relationship still holds within each individual system (A or B) listed above.13) ˜ ˜ ˜ ˜ In previous chapters (which dealt with only a single system).unm. the components of vectors must also be flagged to indicate the associated basis. Specifically.15) Now that we are considering two systems simultaneously.11) j BB Bj g ij = g iB • g jB g iB = g iB • g B = δ ij (4.16d) .16b) (4. acts as a “dot-placeholder” and moves along with its associated index in operations. AA kj Aj g ik g AA = g iA = δ ij BB kj Bj g ik g BB = g iB = δ ij (4. we showed that the matrix containing the g ij components could be obtained by inverting the [ g ij ] matrix.

(4. [ g ij ] = 7 5 0 . “g” matrices that involve AA or BB) are components of a tensor (the identity tensor). g A = – 2 e 1 + 1 e 2 . (c) Directly apply Eq. ij ij g AB = g BA . [ g Aj ] = – 1 2 0 .. and j jA gA B = g Bi . Coupling matrix components (i. [ g AA ] = – 1 ⁄ 9 5 ⁄ 9 0 . iB Bi g Aj = g jA . 4 7 0 4 –1 0 0 0 1 2 ⁄ 3 –1 ⁄ 3 0 0 0 1 2⁄3 1⁄3 0 0 0 1 0 0 1 5 ⁄ 27 1 ⁄ 27 0 ij [ g AB ] = – 7 ⁄ 27 4 ⁄ 27 0 0 0 1 ij iB Bj .html DRAFT June 17. g B = 4 e 1 + 1 e 2 . B g2 ˜ A g1 ˜ A g2 ˜ B g1 ˜ e2 ˜ (b) Find the contravariant basis for each individual system. 5 1 0 0 0 1 3˜ 3˜ 3 ˜ 3˜ 9˜ 9˜ 2 ⁄ 9 –1 ⁄ 9 0 0 0 1 9 ˜ 9˜ 5 2 0 0 0 1 e1 ˜ AA ij BB AA Partial Answer: (a) [ g ij ] = 1 2 0 .e.e. 17 ⁄ 81 – 1 ⁄ 81 0 0 0 1 1 2 1 2 --------------(b) g A = 1 e 1 + 1 e2 . gij = – 1 ⁄ 81 5 ⁄ 81 0 . Eq.unm.1 Consider the following irregular base vectors (the A-system): A g 1 = e 1 + 2e 2 ˜ ˜ ˜ A A g2 = – e1 + e2 and g3 = e3 ˜ ˜ ˜ ˜ ˜ Additionally consider a second B-system: B g 1 = 2e 1 + e 2 ˜ ˜ ˜ B = – e + 4e B g2 and g3 = e3 ˜1 ˜2 ˜ ˜ ˜ (a) Find the contravariant and covariant metrics for each individual system. g B = – 1 e 1 + 2 e 2 ˜ ˜ ˜ ˜ AB BA iA Aj (c) [ g ij ] = – 1 5 0 ..16) implies that AB BA g ij = g ji . [ g iB ] = – 1 ⁄ 3 1 ⁄ 3 0 . [ g iA ] = 1 2 0 5 ⁄ 27 – 7 ⁄ 27 0 0 0 1 1 1 0 0 0 1 1 –1 0 0 0 1 From the commutativity of the dot product. . i (4.16) to find the couAB BA pling matrices. g ij = 2 17 0 .rmbrann@me. 2004 48 Study Question 4. [ g BA ] = 1 ⁄ 27 4 ⁄ 27 0 . a coupling matrix characterizes interrelationship between two bases.edu http://me. [ g Bj ] = 1 ⁄ 3 1 ⁄ 3 0 .unm.edu/~rmbrann/gobag. (4. ones that involve A and B) are not components of a tensor — instead.17) System metrics (i. Then verify that [ g ij ] = [ gij ] T .

• To exchange system labels (A or B) along a diagonal. and in all upcoming matrix equations involving “g” components.18) Aj Similarly. you can transform it immediately to other types of g-matrices by using the following rules: • To exchange system labels (A or B) left-right.edu http://me. not to matrix expressions. Ak B g iA = g iB g k (4.unm. . 2004 49 AB The first relationship in Eq. then left-right *A A* [ g B* ] = [ g *B ] T . Instead this equation is saying that the matrix containing g ij may be obtained by the BA transpose of the different matrix that contains g ij . apply an inverse. then left-right BA AB [ g ** ] = [ g ** ] T .edu/~rmbrann/gobag. you need a matrix product. AB For example. up-down *B A* g A* = [ g *B ] – 1 diagonal B* A* g *A = [ g *B ] – T (4.17) does not say that the matrix containing g ij is symmetAB ric. These equations show how to move two system labels simultaneously.20) This behavior also applies to transforming bases. apply an inverse transpose. For example. apply a transpose. • To exchange system labels (A or B) up-down. For *A example. Given a “g” matrix of any type.unm. what operation would you need to move only the “B” label in [ g B* ] from the bottom to the top? The answer is that you need to use the B-system metric as follows: BA BB *A [ g ** ] = [ g ** ] [ g B* ] BA BB kA (this is the matrix form of g ij = g ik g Bj ) (4. (4. up-down ** AB g AB = [ g ** ] – 1 diagonal ** AB g BA = [ g ** ] – T (4.html DRAFT June 17. abutting system labels must be on opposite levels (the system labels are not summed — index rules apply only to indicial expressions. a star (*) is inserted where indices normally reside in indicial equations.rmbrann@me. In matrix products.21) ˜ ˜ All of the “g” matrices may be obtained only from knowledge of one system metric (same system labels) and one coupling matrix (different system labels).19) Here. if g ij is available. if g iB is available. To move only one system label to a new location.

In other words. Next. .edu http://me. ˜ ˜ The matrix in this relation is A* B* [ g *B ] = [ g *A ] – T 3 1 3 -2 = 2 5 3 0 –1 – 1 -2 . which requires an inverse-transpose: [ g*B ] = [ g *A ] –T . A and BB Bj B. *A B* we need to express the coupling matrix [ g B* ] in terms of the given coupling matrix [ g *A ] .2 Suppose you are studying a problem involving two bases.edu/~rmbrann/gobag. we A* B* need to swap A and B along the diagonal.unm.22) AA ** (a) Find [ g ** ] and [ g AA ] B (b) Express the g iA base vectors in terms of the g k base vectors. Putting it all AA B* BB B* back in the first equation gives [ g** ] = [ g *A ] –T [ g ** ] [ g *A ] –1 1 2 –4 = 0 –1 2 3 0 –2 –T 4 2 3 1 2 –4 2 4 –1 0 –1 2 = 3 –1 5 3 0 –2 19 ----2 5 9 – -2 5 -2 11 1 -2 57 ----2 49 15 – ----2 (b) Ak B g iA = g iB g k . A* B* We now need to get [ g *B ] expressed in terms of the given coupling matrix [ g*A ] .html DRAFT June 17. ˜ ˜ i B (b) Express the g A base vectors in terms of the g k base vectors. 2004 50 Study Question 4. This requires *A B* only an up-down motion of the system labels.unm.rmbrann@me. Therefore 3 B A B B A B B B A B 1 B g 1 = g 1 + 3g 2 + -. Start by expressing AA AA A* BB *A [ g ** ] as the product of any AB matrix times a BB metric times a BA matrix: [ g ** ] = [ g*B ] [ g ** ] [ g B* ] .g 3 .g 3 . so that’s just an inverse: [ g B* ] = [ g *A ] –1 . g 2 = 2g 1 + 5g 2 + 3g 3 . and you know one system metric g ij and one coupling matrix g iA as follows: BB [ g ** ] = 4 2 3 2 4 –1 3 –1 5 and B* [ g *A ] = 1 2 –4 0 –1 2 3 0 –2 (4. ˜ ˜ Partial Answer: (a) We are given a regular metric and a mixed coupling matrix. 2˜ 2˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ (c) is similar. and g 3 = – g 2 – -.

The g components require an inverse on their matrix forms to move system labels up-down. In this case.unm. For example AA BB pA qA T ij = T pq g Bi g Bj and iB pq Ai AB T Bj = T AA g pB g qj (4. For example.rmbrann@me.29) These formulas are constructed one index position at a time. keeping in mind that the g components are unchanged when indices along with their system label are moved left-to-right. the matrix forms for these transformation formulas are AA *A BB *A [ T ** ] = [ g B* ] T [ T ** ] [ g B* ] and iB *A ** *A AA T Bj = [ g B* ] T [ T AA ] [ g B* ] – T [ g ** ] (4. this result needs to explicitly show the system (A or B) flags: AA j v iA = g ij v A BB j v iB = g ij v B (4. for example. the “g” must have A . you can write them in a way that dummy summed indices are abutting. the first index pair on the T is p . p A the first “g” is a combination of the free index pair on the left side and the “flipped” index pair Ai on the right g pB . On the other side of the equation. the formulas are Ak B v iA = g iB v k i ik B v A = g AB v k AB k v iA = g ik v B i iB k v A = g Ak v B (4.24) These equations show how to transform components within a single system.27) (4. The “T” in the above equations may be moved to the middle if you like it better that way: AA pA BB qA T ij = g Bi T pq g Bj and iB Ai pq AB T Bj = g pB T AA g qj (4.html DRAFT June 17. i the first free index position on the T is the pair B . In the last formula. Thus.edu http://me.unm. Because p is a dummy summed index. B kA v iA = v k g Bi i B ki v A = v k g BA k BA v iA = v B g ki i k Bi v A = v B g kA (4. Thus.30) If you wanted to write these as matrix equations.31) Numerous matrix versions of component transformation formulas may be written. . To transform components from one system to another.23) (4.edu/~rmbrann/gobag.26) Equivalently. When considering more than one basis at a time. 2004 51 We showed in previous sections how the covariant components of a vector are related to the contravariant components from the same system.25) (4.28) Tensor transformations are similar. we wrote the matrix expressions on the assumption that the known coupling and metric *A AA matrices were [ g B* ] and [ g ** ] . we showed that v i = g ij v j when considering only one system.

A g1 ˜ and A g2 . ˜ ˜ ˜ (a) Determine the covariant and contravariant components of v with respect to sys˜ i i tems A and B.unm. Bk A (d) The answer is v iB = g iA v k . 2004 52 Study Question 4. v B ).unm. Namely. v 2 = 0 . v A = 2 . (c) Verify the following: AB k v iA = g ik v B i iB k v A = g Ak v B Ak B v iA = g iB v k i ik B v A = g AB v k ˆ e1 ˜ (d) Fill in the question marks to make the following equation true: ?? A v iB = g ?? v ? A A 1 2 B B 1 Partial Answer: (a) v 1 = 9 . Drawing grid ˜ ˜ lines parallel to the path from the origin to the tip of vector reached by first tra˜ A A versing along a line segment equal to 2g 1 . and then moving anti-parallel to g 2 to reach the tip. v iA = gik v B becomes in matrix form 0 = – 1 5 0 1 ⁄ 3 . v B = 1 . A g 1 = e 1 + 2e 2 ˜ ˜ ˜ A g2 = – e1 + e2 ˜ ˜ ˜ and B g2 ˜ and A g3 = e3 ˜ ˜ v ˜ A g1 ˜ B g 1 = 2e 1 + e 2 ˜ ˜ ˜ B B and g 2 = – e 1 + 4e 2 g3 = e3 ˜ ˜ ˜ ˜ ˜ Let v = 3e 1 + 3e 2 . . 9 0 4 7 0 5⁄3 0 0 1 0 Multiplying this out verifies that it is true. v iB.rmbrann@me. (i.1. A g2 ˜ ˆ e2 ˜ B g1 ˜ (b) Demonstrate graphically that your answers to part (a) make sense. v 1 = 9 . -..2 -- 3 3 A A 1 A 2 A (b) The first third and fourth results of part (a) imply that v = v A g1 + v A g 2 = 2g 1 – g 2 . ˜ ˜ v may be ˜ ˜ ˜ ˜ AB k (c) Using results from study question 4.3 Consider the same irregular base vectors in Study Question 4. v 2 = 9 . v A .html DRAFT June 17.e.edu http://me.1. find v iA.edu/~rmbrann/gobag. v A = –1 . v B = 5 .

edu http://me. If. η 2 = y . It has no θe θ term. then η 1 = r . θ. The position vector’s dependence on θ is buried implicitly in the dependence of e r on θ . which is entirely uncoupled from the coordinates { r. θ. z } . θ. then η 1 = r . each contravariant base vector points normal to a surface of constant coordinate value.. and η 3 = z . Coordinates uniquely define the position vector x . not { r. z } . the position vector in cylindrical coordinates is x = re r + ze z . and η 3 = ψ . e y. z ) coordinates to identify points in space.. θ. but coordinates are not ˜ the same thing as components of the position vector.e. Cartesian coordinates are frequently denoted { x. and so on. and η 3 = z . ˜ Whether the basis is chosen to be coupled to the coordinates is entirely up to you. If spherical coordinates are used. e z } . Notice that position vector is not re r + θe θ + ze z . 2004 53 i Coordinate transformations A principal goal of this chapter is to show how the v B compoi nents must be related to the v A components. In fact. cylindrical coordinates are used. In this case. . Usually the basis is taken to be defined to be the one that is naturally defined by the coordinate system grid lines. Strictly speaking. for example. it’s fine to use a basis that is selected entirely independently from the coordinates. η 2. This. We are seeking component transformations resulting from a change of basis. we will denote the three spatial coordinates by { η 1. For example. Does this mean that the position vector depends ˜ only on r and z ? Nope. the chosen basis is the Cartesian laboratory ˜ ˜ ˜ ˜ basis { e x. while also using cylindrical ( r. ˜ ˜ ˜ To speak in generality. η 3 } . If Cartesian coordinates are used.e. z } . y. The ˜ ˜ ˜ ˜ ˜ ˜ components of the position vector are { r. you might choose to use cylindrical coordinates but a Cartesian basis so that the position vector would be written x = ( r cos θ )e x + ( r sin θ )e y + ze z . however. you might decide to use a fixed Cartesian basis that is aligned with gravity. The natural covariant basis co-varies with the coordinates — i. There is an unfortunate implicit assumption in many publications that vector and tensor components must be coupled to the spatial coordinates. Any quantity that is known to vary in space can be written as a function of the coordinates.edu/~rmbrann/gobag.rmbrann@me. Spatial coordinates are any three numbers that uniquely define a location in space. z } . The natural contravariant basis contra-varies with the grid — i. z } . For example. zero.unm. coordinates may always be selected independently from the basis. each covariant base vector always points tangent to the coordinate a grid line and has a length that varies in proportion to the grid spacing density. When people talk about component transformations resulting from a change in coordinates. If convenient to do so (as in the ferris wheel example). For example. η 2 = θ . The position vector has only two nonzero components.html DRAFT June 17. cylindrical coordinates are { r. then η 1 = x . is rarely done. to study loads on a ferris wheel.unm. they implicitly assuming that the basis is coupled to the choice of coordinates. η 2 = θ .

As further explained in Section 5. That means the coordinates themselves can be regarded as functions of the position vector. The basis can be selected independently from the choice of coordinates. One can alternatively choose the basis to be uncoupled from the coordinates (see the discussion of non-holonomicity in Ref.edu http://me.unm.------. each location in space corresponds to a unique set of { η 1. 2004 54 The position vector is truly a function of all three { η 1.33) ∂x ˜ ˜ Being the gradient of η j . η 2. The natural basis defined in Eq. Each particular value of the coordinates { η 1. Examples of the natural basis for various coordinate systems are provided in Section 5. the contravariant base vector g j must point normal to surfaces of ˜ constant η j .32) The i th natural covariant base vector g i equals the derivative of the position vector x with ˜ ˜ respect to the i th coordinate.rmbrann@me. Thus.∂η i ∂η j for the natural basis (4.unm. Hence. ˜ Conversely.32) is clearly coupled to the coordinates themselves. η 3 } coordinates.35) Many textbooks present this result in pure component form by writing the vector x in terms ˜ of its cartesian components as x = x k e k so that the above expression becomes ˜ ˜ g ij = ∑ k = 1 3 ∂x k ∂x k ------.html DRAFT June 17.edu/~rmbrann/gobag. η 2.32) are ∂x ∂x g ij = g i • g j = -------i • -------j ˜˜∂η ∂η ˜ ˜ (4.3. Proving that these are indeed the contravariant base vectors associated with the covariant natural basis defined in Eq. η 3 } defines a unique location x in space.= δ ij ------(4. (4.= ∂η. Having the basis be coupled to the coordinates is a choice. this base vector points in the direction of increasing η i and it is tangent to the grid line along which the other two coordinates remain constant. (4. η 2. a change in coordinates will result in a change of the natural basis.34) ˜∂x ∂η ∂η i ˜ ˜ ˜ The metric coefficients corresponding to Eq. we can always define the natural basis [12] that is associated with the choice of coordinates by ∂x g i ≡ -------i ˜∂η ˜ (4. Consequently. [12]). However.3. the components of any vector expressed using the natural basis will change upon a change in coordinates.32) is simply a matter of applying the chain rule to demonstrate that g i • g j comes out to equal the Kronecker delta: ˜ ˜ j ∂x ∂η j g i • g j = -------i • ------. and we can therefore take the spatial gradients of the coordinates. η 3 } coordinates. the contravariant natural base vector is defined to equal these coordinate gradients: j ∂ηg j ≡ ------(4.36) . (4.

the professor follows up with the even harder question “what is a tensor?” Definition #0 (used for undergraduates): A college freshman is typically told that a vector is something with length and direction.unm.edu http://me. we will not assume that the metric coefficients are given by the above expressions. the contravariant tensor being a set of nine numbers that transform . The idea of a tensor is tragically withheld from most undergraduates. they will apply even when the chosen basis is not the natural basis. (4. 4. it’s important to recognize that x k are the Cartesian components of the position vector (not the covariant components with respect to an irregular basis). 2004 55 When written this way. More than likely. In the upcoming discussion of coordinate transformations. v 2. from the fact that the “correct” answer depends on who is doing the asking.37) These are the expressions for the metric coefficients when the natural basis is used.rmbrann@me. When we derive the component transformation rules. but our least favorite): Some professors merely want their student to say that there are two kinds of vectors: A contravariant vector is a set of three numbers k i Ak { v 1. and nothing more is said (certainly no mention is made of tensors!). Definition #1 (classical.otherwise. That’s why we introduced the summation in Eq.36) -.edu/~rmbrann/gobag.∂η------. We cite these relationships to assist those readers who wish to make connections between our own present formalism and other published discourses on curvilinear coordinates. Let us reiterate that there is no law that says the basis used to described vectors and tensors must necessarily be coupled to the choice of spatial coordinates in any way. whereas a covariant vector is a set of three numbers { v 1. v 3 } that transform B iB such that v k = v iA g Ak . v 2. Because our transformation equations will be expressed in terms of metrics only. These professors typically wish for their students to define four different kinds of tensors.html DRAFT June 17.------∂x k ∂x k for the natural basis (4.unm. v 3 } that transform according to v B = v A g iB when changing from an “A-basis” to a “B-basis”.1 What is a vector? What is a tensor? The bane of many rookie graduate students is the common qualifier question: “what is a vector?” The frustration stems. Different people have different answers to this question. we will include both the general basis transformation expression and its specialization to the case that the basis is chosen to be the natural basis. we would be violating the high-low rule of the summation conventions. in part. the contravariant metric coefficients are g ij = ∑ k = 1 3 i j ∂η. Similarly.

etc.html DRAFT June 17..edu/~rmbrann/gobag. Definition #3 (for mathematicians or for advanced engineering) Mathematicians define vectors as being “members of a vector space. 2004 56 ij mn iA jA according to T BB = T AA g Bm g Bn . The act of multiplying a scalar times a vector is presumed to be well defined and to satisfy the rules outlined in definition #3. and we test whether or ˜ ˜ k i Ak not we have a vector by verifying that v B = v A g iB holds upon ˜a change of basis. The multiplication and addition rules must satisfy the following rules: • v + w = w + v and αv = v α ˜ ˜ ˜ ˜ ˜ ˜ •u + (v + w) = (u + v) + w ˜ ˜ ˜ ˜ ˜ ˜ •There must exist a zero vector 0 ∈ V such that v + 0 = v . that could be constructed from vectors. . we demand that it must nevertheless lend itself to a quantitative representation as a sum of 1 three components { v 1. There must be a rule for adding two members of V. Of course.) A2. A field R must exist. v 2. There must be a well defined process for determining whether two members of V are equal. g 3 } . The base vectors themselves are regarded as abstractions defined in terms of geometrical fundamental primitives. the base vectors are regarded as directed line segments between points in space. There must be a discerning definition of membership in a set V. called dyads. (For engineering applications.rmbrann@me. A tensor would then be defined as any linear combination of tensor dyads for ˜ ˜ which the coefficients of these dyads (i. Namely. Furthermore. g 2. Furthermore. ˜ ˜ ˜ ˜ 1. The key distinction between definition #2 and #1 is that definition #2 makes no distinction between covariant vectors and contravariant vectors. this vector addition rule must be proved closed in V : If v ∈ V and w ∈ V then v + w ∈ V ˜ ˜ ˜ ˜ A5. From there. Recall that definition of a second-order tensor under our viewpoint required introduction of new objects.unm. A3. There must be a rule for multiplying a scalar α times a member v of V.e.unm. Definition #2 (our preference for ordinary engineering applications): We regard a vector to be an entity that exists independent of our act of observing it — independent of our act of representing it quantitatively. which are presumed (by axiom) to exist.” A vector space must comprise certain basic components: A1. the field is the set of reals.edu http://me. we asserted that any collection of nine numbers could be assembled as a sum of these numbers times basis dyads g i g j . v 3 } times three base vectors { g 1. to decide if something is an engineering vector. the tensor components) follow the transformation rules outlined in the preceding section. this ˜ multiplication rule must be proved closed in V : If α ∈ R and v ∈ V then αv ∈ V ˜ ˜ A4. A6. the mixed tensors being ones whose components transform Bj An BA jn iB mA im BA according to T iB = T mA g im g BA or T Bj = T An g BA g jn . Definition #2 instead uses the terms “covariant components” and “contravariant components” — these are components of the same vector.

g ) .unm. b ) ). When people define vectors according to the way their components change upon a change of basis (definitions #1 and #2). completely neglecting the far more subtle and difficult items A2. you will need to locate a carefully crafted definition of what it means to be a “real valued continuous func1. textbooks seem to fixate on item A6. ( a. but you may obtain a careful definition of the term “field” in Reference [6]. Likewise.38) is an acceptable definition of the inner product. engineering vectors are more than simply an array of three numbers. which is a scalar-valued binary1 operation between two vectors. do you think this problem contributes toward that goal? Explain? Partial Answer: (a) You may assume that the set of reals constitutes an adequate field. ˜ A8. In general. y ) is analogous to a conventional tensor components A ij . Prove that the scalar-valued functional ∞ ∫ f ( x )g ( x ) dx –∞ (4. (g) Assuming that the goal of this document is to give you greater insight into vector and tensor analysis in 3D space. A Fourier-series expansion is related to the Taylor series in a ˜ way that is analogous to what operation from ordinary vector analysis in 3D space? (d) Explain how the binary function f ( x )g ( y ) is analogous to the dyad f g from conven˜˜ tional vector/tensor analysis. Engineering vectors are more than something with length and direction. a ) ˜ ˜ ˜ a ) > 0 if a ≠ 0 and ( a. ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ An inner product space is just a vector space that has an inner product. (c) Explain what a Taylor-series expansion f ( x ) has in common with a basis expansion ( f = f i g i ) of a vector. . a and b . that ˜ ˜ must satisfy the ˜ ˜ following rules: A7. (b) Suppose that f and g are members of this vector space. and A5.4 (a) Apply definition #3 to prove that the set of all real-valued continuous functions of one variable (on a domain being reals) is a vector space.edu http://me. ( a. ˜ b ) = ( b. ( f.edu/~rmbrann/gobag.rmbrann@me. vector spaces are typically supplemented with the definition of an inner product (here denoted ( a.unm. A4. 2004 57 Unfortunately. a ) = 0 only if a = 0 . Whenever possible. Our “definition #2” is a special case of this more general definition. Study Question 4. A3.The term “binary” is just an fancy way of saying that the function has two arguments. axiom A2 is the most difficult axiom to satisfy when discussing specific vector spaces. In order to deal with axiom A2. they are implicitly addressing axiom A2.html DRAFT June 17. (e) Explain why the dummy variables x and y play roles similar to the indices i and j (f) Explain why a general binary function a ( x.

edu/~rmbrann/gobag.change the basis. Likewise. –∞ –∞ When addressing the question of whether or not something is indeed a tensor. f(x)g(y) so ˜that it takes on˜meaning when appearing in an operation on a general function v ( y ) so that f(x)g(y) is defined so that it means ∫ f ( x )g ( y )v ( y ) dy = f ( x ) ∫ g ( y )v ( y ) dy . where the indices i and j range from 1 to 3. for n ranging from 0 to ∞ . When we k cover the topic of curvilinear calculus.” This statement merely means that BBk AAr BA BA kr Γ ijB ≠ Γ pqA g ip g jq g BA (4. basis-dependent. but you would naturally expect these tensors to change upon a change of basis if the new definition of the tensor is defined the same as the old definition except that the new base vectors are used. This should not be disturbing. g 2. or 10 if you are stumped]. g 3 } . the base vectors themselves are. (e) see previous answer (f).40) Equation (4. ΓA ≠ ΓB (4.39) Note that it is perfectly acceptable for you to construct a distinct tensors defined AAk m n A Γ A = Γ ijA g A g A g p ˜ ˜ ˜ ˜ ˜ and BBk m n B Γ B = Γ ijB g B g B g p ˜ ˜ ˜ ˜ ˜ (4. Changing the basis will change the associated Christoffel tensor. 2004 58 tion of one (real) variable” [see References 8.rmbrann@me.39) tells us that these two tensors are not equal.html DRAFT June 17. you must commit yourself to which of the definitions discussed on page 55 you wish to use.41) ˜ ˜ ˜ ˜ Stated differently. The function f(x)g(y) has values defined for values of the independent variables x and y ranging from – ∞ to ∞ . we will encounter the Christoffel symbols Γ ij and Γ kij . A dyad f g is the most primitive of tensors. there exists a basis-dependent Christoffel tensor. and you will (obviously) change the base vectors themselves. A particular choice for the basis is itself (of course) basis dependent -. (b) Demonstrate that axioms A7 and A8 hold true. 9.edu http://me.unm. After all. The Fourier series expansion writes a function as a linear combination of trigonometric functions. { g 1. but that doesn’t mean they aren’t vectors. (c) The Taylor series expansion writes a function as a linear combination of more simple functions.unm. namely. . we can define the most primitive function of two ˜ ˜ ˜ ˜ ∞ ∞ variables. the powers x n . by definition. That is. Naturally it stands to reason ˜ ˜ ˜ that choosing some other basis will still permit you to construct Christoffel symbols for that system. Their definition is based upon your choice of basis. You can always construct other vectors and tensors from the basis. These three-index quantities characterize how the natural curvilinear basis varies in space. (d) The dyad f g has components ˜˜ f i g j . Any review of the literature will include a statement that the Christoffel symbols “are not third-order tensors. What’s truly remarkable is that there exist certain combinations of base vectors and basis-referenced definitions of components that turn out to be invariant under a change of basis. for each basis. ˜˜ and it is defined such that it has no meaning (other than book-keeping) unless it operates on an arbitrary vector so that f g • v means f ( g • v ) .

vector. and furthermore presume that the results will be different. or tensor valued) which can be constructed from the basis.html DRAFT June 17. these numbers are not contravariant components of any vector η . suppose that f is a function (scalar. then they would have to be related by a transformation formula of the form ∂η k η k = --------. (3. these tensors are simply the identity tensor! G AA = G BB = I (4. More often than not.45) A coordinate transformation of this form is a linear. 2004 59 AA BB Consider two tensors constructed from the metrics g ij and g ij : AA i j BB i j G AA = g ij g A g A and G BB = g ij g B g B (4. In more modern treatments. whereas the function f is basis-dependent if F A ≠ F B . These two tensors are equal only if the components of F A with respect to one basis happen to equal the components of F B with respect to the same basis. it turns ˜ out that the differences cancel each other out in˜ the combinations defined in Eq. It’s not uncommon for two distinct tensors to have the same components with respect to different bases — such a result would not imply equality of the two tensors. which holds only for homogenous (i. In fact. Recall that we have typeset the three coordi˜ ˜ nates as η k .e. as was shown in Eq. Despite this typesetting. We may only presume that transformation rules exist such that each η k can be expressed as a function of the η m coordinates. To clarify.43) ˜ ˜ ˜ This is an exceptional situation.edu http://me.unm.2 Coordinates are not the same thing as components All base vectors in this section are to be regarded as the natural basis so that they are related to the coordinates η k by g k = ∂x ⁄ ∂η k . Presume that the same function can be applied to any basis. g 3 ) and if F B = f ( g 1 .42) ˜ ˜ ˜ ˜ ˜ ˜ These two tensors. 4. are equal even though they were constructed using different i i AA BB bases and different components! Even though g A ≠ g B and even though g ij ≠ g ij .rmbrann@me. Even though the η k coordinates are not generally com- .44) F A = f ( g 1 . If coordinates were also vector components.unm.η m ∂η m ← valid for homogeneous. g 2 .42) so that both tensors are equal. (4.edu/~rmbrann/gobag. it turns out. g 3 ) ˜ ˜ ˜ ˜ ˜ ˜ In the older literature.. A A A B B B (4. you will end up with two different tensors. By this we mean that if you compare a different set of components η k from a second system. g 2 . but not curvilinear coordinates (4.18). the function f is called basis invariant if F A = F B . so we must denote the results by different symbols. non-curvilinear) coordinate systems. the components of F A were called tensor if and only if F A = F B . when you construct a tensor by multiplying an ordered array of numbers by corresponding base vectors in different systems.˜ they are not generally related to the original set of components via a vector transformation rule.

natural question would be: is it possible to define complementary or dual coordinates η i that are related to the baseline coordinates η i such that ( dx ) i = d ( η i ) ← this is NOT possible in general (4.52) . The covariant component of the position increment is well-defined: (4. In other words.47) (4.dη m ∂η m ← this is true for both homogenous and curvilinear coordinates! (4. the expression g ij d ( η j ) would have to be an exact differential. Geometrically. which means that the metrics would have to be given by ∂η g ij = -------i ∂η j (4.edu/~rmbrann/gobag. That is. In ˜ other words. we can construct a straight tangent to that line.html DRAFT June 17.3 Do there exist a “complementary” or “dual” coordinates? Recall that the coordinates are typeset as η k .50) ˜ We will now prove (by contradiction) that such dual coordinates do not generally exist. this equation would need to be integrable. whereas on the right side we have the differential of the k th coordinate. we have the k th component of the vector differential.unm. but their increments are linear for the same reason that the increment of a nonlinear function y = f ( x ) is an expression that is linear with respect to increments: dy = f′ ( x )dx . 4. their increments dη k do transform like vectors. we should write this as (4.49) ( dx ) i = g ik ( dx ) k ˜ ˜ Looking back at Eq. then it would have to be true that d ( η i ) = g ij d ( η j ) (4. ( dx ) k = dη k ˜ To better emphasize the order of operation.rmbrann@me. the transformation formulas that express coordinates η k in terms of η m are nonlinear functions. ∂η k dη k = --------.46) For curvilinear systems. at any point on a curvy line.48). If such coordinates did exist.unm. (4.edu http://me.51) For the η i to exist. The dη k increments are the components of the position increment vector dx .48) ( dx ) k = d ( η k ) ˜ On the left side. 2004 60 ponents of a vector. We’ve explained that the coordinate increments dη k transform as vectors even though the coordinates themselves are not components of vectors.

( η 1 =r. η 3 =z ) . 2004 61 and therefore.53) is violated and therefore there do not exist dual coordinates.= ----------------. where the metric matrix is 1 0 0 [ g ij ] = 0 r 2 0 0 0 1 (4. since these are not equal.= --------. 1 ∂r ∂η but ∂g 21 ∂g 21 --------. the integrability condition of Eq.53) This constraint is not satisfied by curvilinear systems. .54) Note that ∂g 22 ∂g 22 --------.= --------. Consider.edu/~rmbrann/gobag.rmbrann@me.= ----------------. for example. the second partials would need to be interchangeable: ∂2 ηi ∂ 2 ηi ∂g ik ∂g ij -------. (4.unm.= 0 2 ∂θ ∂η (4.55) Hence.unm. cylindrical coordinates.= --------∂η k ∂η j ∂η k ∂η k ∂η j ∂η j (4.html DRAFT June 17. η 2 =θ.edu http://me.= 2r .

(5. θ.4) is incorrect because the arctangent has two solutions in the range from 0 to 2π .edu/~rmbrann/gobag. the function s ( r. which are related to the laboratory Cartesian coordinates by x 1 = r cos θ . the correct formula would have to be the two-argument arctangent. If s is written as a function of x i . 2004 62 5. For homogeneous coordinates (where the base vectors are the same at all points in space). z ) is probably simple in form. x 2. In that chapter.4) Then you’d have the function s ( x 1.unm. By contrast. there are new additional terms in gradient formulas to account for the variation of the base vectors with position. then ˜ the familiar Cartesian formula applies: ∂s∂s∂sds ----.e 2 + ------. x 2 = r sin θ .e. x 3 ) . ds ⁄ dx .  x 1 z = x3 . when the coordinates are curvilinear. (5. (5. Curvilinear calculus Chapter 3 focused on algebra. tensor calculus formulas for the gradient are similar to the formulas for ordinary rectangular Cartesian coordinates. . nonorthogonal. and/or non-right-handed). (5. it is useful to first show how to develop the formulas without using the full power of curvilinear calculus.html DRAFT June 17. you could substitute the inverse relationships. actually computing the derivative would be tedious. r = 2 2 x1 + x1 .. Strictly speaking. This approach is unsatisfactory for several reasons.= ------. 5. with which you could directly apply Eq.3).e 1 + ------.e 3 . x3 = z .unm. Eq.3) ∂x 1 ˜ ∂x 2 ˜ ∂x 3 ˜ dx ˜ Because curvilinear coordinates were selected.rmbrann@me. Suppose an engineering problem is most naturally described using cylindrical coordinates.1) The base vectors for cylindrical coordinates are related to the lab base vectors by e r = cos θ e 1 + sin θ e 2 ˜ ˜ ˜ e θ = – sin θ e 1 + cos θ e 2 ˜ ˜ ˜ ez = e3 . (5. and the final result would be expressed in terms of the lab Cartesian basis instead of the cylindrical basis.edu http://me. nonnormalized. where the only important issue was the possibility that the basis might be irregular (i. If you were a sucker for punishment. it made no difference whether the coordinates are homogeneous or curvilinear. x2 θ = tan – 1  ---- .2) ˜ ˜ Suppose you need the gradient of a scalar. (5.1 A introductory example Before developing the general curvilinear theory. Furthermore.

e 3 ∂x 2 ˜ ∂x 3 ˜ ∂x 1 ˜ dx ˜ ∂θ∂θ∂θdθ = ------. (5. we know that the gradient of a quantity is ˜ perpendicular to surfaces of constant values of that quantity.e 1 + ------.edu http://me.+  ---. ----.1) in a matrix: ∂x 1 ------∂r ∂x 2 ------∂r ∂x 3 ------∂r ∂x 1 ------∂θ ∂x 2 ------∂θ ∂x 3 ------∂θ ∂x 1 ------∂z ∂x 2 ------∂z ∂x 3 ------∂z = cos θ sin θ 0 – r sin θ r cos θ 0 0 0 . Physically. (5.. r r 0 0 1 (5. For example.e 3 .unm. In other ˜ words.e 1 + ------.= ------. z dx  ∂z r.e + ------.8) . so we know that dr ⁄ dx must be perpendicular to the cylinder.1) to first compute the derivatives of the Cartesian coordinates with respect to the cylindrical coordinates.+  ----. The gradients of the cylindrical coordi˜ ˜ nates are obtained by applying the ordinary Cartesian formula: ∂r∂r∂rdr ----.6) ∂x 1 ˜ ∂x 2 ˜ ∂x 3 ˜ dx ˜ We must determine these derivatives implicitly by using Eqs.7) The inverse derivatives are obtained inverting this matrix to give ∂r------∂x 1 ∂θ------∂x 1 ∂z------∂x 1 ∂r------∂x 2 ∂θ------∂x 2 ∂z------∂x 2 ∂r------∂x 3 ∂θ------∂x 3 ∂z------∂x 3 = cos θ sin θ 0 sin θ cos θ – ---------.html DRAFT June 17.edu/~rmbrann/gobag.e 2 + ------.5)  ∂r θ. ----.----------. θ dx dx ˜ ˜ ˜ ˜ where the subscripts indicate which variables are held constant in the derivatives. Surfaces of constant r are cylinders of radius r . ˜ the quantity dr ⁄ dx is the gradient of r . ----.rmbrann@me.e 2 + ------.=  ---. θ.e + ------. we know that dr ⁄ dx must be parallel to e r . 2004 63 The better approach is to look at the problem from a more academic slant by applying the chain rule. The derivatives of the cylindrical coordinates with respect to x can be computed a priori. Given that s = s ( r.= ------. We can arrange the nine possible derivatives of Eqs. (5. 1 (5.0 . z ) then ∂s dr ∂s dθ ∂s dz ds ----. (5.e ----1 2 ∂x 2 ˜ ∂x 3 ˜ 3 ∂x 1 ˜ dx ˜ ∂z∂z∂zdz ----.unm. z dx  ∂θ r.

edu http://me.= e 3 .rmbrann@me. 2004 64 Substituting these derivatives into Eq.θ ds ----. (5.e sin θ ----r ˜2 dx r ˜1 ˜ dz ----. (5.html DRAFT June 17. dx ˜ ˜ Referring to Eq.unm. (5.= e 3 dx ˜ ˜ (5.edu/~rmbrann/gobag. The above analysis showed that high-powered curvilinear theory is not necessary to derive the formula for the gradient of a scalar function of the cylindrical coordinates.z e 3 . Now that we have the coordinate gradients.e θ + s . This is the formula for the gradient of a scalar that you would find in a handbook. For any coordinate system. it is always a good idea to derive the coordinate gradients and save them for later use.6) gives dr ----.2) we conclude that dr ----.e + ----------.r e r + ----. Eq. All .10) This result was rather tedious to derive.= e r dx ˜ ˜ eθ dθ = -------˜dx r ˜ dz ----.unm.5) becomes s . but it was a one-time-only task.= cos θ e 1 + sin θ e 2 dx ˜ ˜ ˜ cos θ dθ = – ---------. r ˜ dx ˜ ˜ ˜ (5.= s .11) where the commas are a shorthand notation for derivatives.9) (5.

η 2. Each unit change in the coordinate corresponds to a specific change in the position vector. the three coordinates { η 1. for more complicated coordinate systems. These ˜ ˜ ˜ coordinates are identified with superscripts merely by cong1 η1 ˜ vention. (5. As sketched in Figure 5. the position vector for spherical coordinates is simply x = re r (not ˜ ˜ ˜ x = re r + θe θ + φe ψ ). However.12) ˜ ˜ ˜ th The i covariant base vector is defined to point in the direction that x moves when η k is ˜ . the position vector x is not generally equal to ˜ η k g k . the associated base vectors at any point in space are tangent to the grid lines at that point. η3 } In what follows.edu/~rmbrann/gobag.unm. η 3 } identify the position in space: x = x ( η 1.rmbrann@me.html DRAFT June 17. η 3 } that identify the location in space. η 2. the dependence of the position vector on the ˜ ˜ ˜ ˜ coordinates θ and φ is hidden in dependence of e r on θ and φ . i. the associated covariant base vectors are defined to be tangent to the grid lines along which only one coordinate varies (the others being held constant). Importantly.1 Curvilinear covariant basis. 2004 65 that’s needed is knowledge of tensor analysis in Cartesian coordinates.e.unm.2 Curvilinear coordinates { η 1. η 2. ˜ η2 5. η 2 . and the covariant basis has a length equal to the change in position per unit change in coordinate. g3 ˜ g2 ˜ x ˜ g1 ˜ O FIGURE 5.3 The “associated” curvilinear covariant basis Given three coordinates { η 1. a fully developed curvilinear theory is indispensable. η 3 ) . The base vector g k points in ˜ the direction of increasing η k .1. By definition. The lengths of the covariant base vectors “covary” with the coordinate. For example. The covariant base vectors are tangent to the grid lines associated with η k .edu http://me. Different base vectors are used at different locations in space because the grid itself varies in space. three coordinates are prex g2 sumed to identify the position of a point x in space. x is a function of η 1 . 5. η 2. and η 3 .. for spherical coordinates.

Again this point harks back to the fact that neither components nor base vectors are invariant.2. recalling Eq. but the sum of components times base vectors is invariant. Conversely. (5.= δ ij . η 3 } do not necessarily have the same physical units.12). (5.13) th Note the following new summation convention: the superscript on η i is interpreted as a subscript in the final result because η i appears in the “denominator” of the derivative. each contravariant base vector g j is η1 ˜ normal to surfaces of constant η j . Consequently. holding the other coordinates constant. Such ˜ ˜ a situation is not unusual and should not be alarming. 2004 66 increased. ˜∂η k ˜ (5. Hence. any position in space corresponds to a unique set of coordinates. Equation (5. (5. each coordinate may be regarded as a single-valued function of position vector: ηj = ηj( x ) . Furthermore.14) (5.17) .html DRAFT June 17.2 Curvilinear contravariant basis.12) states that the coordinates uniquely identify the position in space. as g1 sketched in Fig. and it points in˜ the FIGURE 5. respectively. but g 2 has dimensions of length. ˜ By the chain rule. z } have dimensions of length.16) η2 ∂x ˜ g2 ˜ x ˜ ˜ This derivative is the spatial gradient of η j .= ∂η. because the coordinates { η 1. the natural base vectors themselves will have different physical units. Note that the natural covariant base vectors are not necessarily unit vectors.rmbrann@me. cylindrical coordinates { r. θ. and length.------------.• ˜.dη k .13).unm.15) ∂η j g j ≡ ------. Hence. ------i dx ∂η ∂η i ˜ Therefore.unm.edu/~rmbrann/gobag. radians. That is. We˜will find that the components of a vector will have physical dimensions appropriate to ensure that the each term in the sum of components times base vectors will have the same physical dimensions as the vector itself.. ˜∂η ˜ (5. For example. Starting with Eq. η 2. note that j j ∂x dη. 5. The contravariant base vectors are direction of increasing η j . the contravariant dual basis must be given by (5. the natural definition of the i covariant basis is the partial derivative of x with respect to η i : ˜ ∂x g i ≡ -------i . normal to surfaces of constant η k . two of the base vectors ( g 1 and g 3 ) are dimensionless.edu http://me. the increment in the position vector is given by ∂x dx = -------.

Eq. η 3 and therefore that g 1 is the ˜ ˜ coefficient of dη 1 when x is varied by differentially changing η 1 holding η 2 and ˜ η 3 con˜ stant. you can determine the covariant base vectors graphically. the position vector x moves such dx = dre r .18) η 1 =r . η 2 =θ .3 Covariant basis for cylindrical coordinates. each of the g k base vectors is same ˜ throughout space — they are independent of the coordinates { η 1. recalling Eq. η 2. 2004 67 or. η 2 =θ . g3 = ez .edu/~rmbrann/gobag. ˜ ˜ EXAMPLE Consider cylindrical coordinates: x3 ez ˜ eθ ˜ x ˜ x2 er ˜ x ˜ θ x1 x = r e r ( θ ) + ze z ˜ ˜ ˜ FIGURE 5.3 illustrates how. we utilize the underlying rect- . η 1 =r . η 3 } .13). Hence.rmbrann@me.html DRAFT June 17. η 3 =z .unm. holding the others constant. For homogeneous coordinates. is varied holding the others constant. for curvilinear coordinates. (5. r z x2 dx =dre r ˜ ˜ x1 (5.18) can be integrated to show that x = η k g k . g1 = er . g 2 = re θ . (5. The views down the z -axis show how the covariant base vectors can be determined graphically by varying one coordinate. for homogeneous coordinates. dx = dη k g k . For cylindrical coordinates. Similarly. ˜ ˜ x.edu http://me. when the radial coordinate. is varied holding the others constant. η 2. Eq. η 3 } .unm.18) does not imply that x = η k g k . ˜ Summarizing. Recall that g 1 = ( ∂x ⁄ ∂η 1 ) η 2. ˜ ˜ Now we note a key difference between homogeneous and curvilinear coordinates: ix. x2 dx =rdθe θ ˜ ˜ Here we vary r holding θ and z constant Here we vary θ holding r and z constant x ˜ x1 Figure 5. and (5. for simple enough coordinate systems. each g k varies in space — they depend on ˜ the coordinates { η 1. For curvilinear coordinates.19) ˜ ˜ ˜ ˜ ˜ ˜ To derive these results analytically (rather than geometrically). and therefore g 1 must equal e r ˜ ˜ ˜ ˜ because it is the coefficient of dr . g 2 must be equal to re θ since ˜that is the coeffi˜ ˜ cient of dθ in dx when the second coordinate. (5. Hence.

whereas g 2 has physical dimensions of length. e 1 +  ------. z ˜ ˜ ˜ ∂x 1 ∂x 2 ∂x 3 ∂x =  ------.22c) which we note is equivalent to the graphically derived Eqs. 2004 68 angular Cartesian basis { e 1.22a) (5. =  ------. 0 0 1 ← cylindrical coordinates (5.21c) (5. Inverting the covariant [ g ij ] matrix gives the contravariant metric coefficients: [ g ij ] 1 = 0 0 0 1 ⁄ r2 0 . θ ˜  ∂z  r.19). θ  ∂z  r. the coordinate system is orthogonal and the base vectors at each point in space are mutually perpendicular.html DRAFT June 17. Then ∂x 1 ∂x 2 ∂x 3 ∂x g 1 =  ----. (5.23a) (5. z ˜  ∂r  θ.24) Whenever the metric tensor comes out to be diagonal as it has here. z ˜  ∂θ  r.21b) (5. z  ∂r  θ.22b) (5. in matrix form. e 2. e 1 +  ------. z ˜  ∂θ  r. θ ˜  ∂z  r.21a) (5.edu http://me. 0 0 1 g 12 = g 1 • g 2 = 0 ˜ ˜ g 22 = g 2 • g 2 = r 2 ˜ ˜ g 32 = g 3 • g 2 = 0 ˜ ˜ g 13 = g 1 • g 3 = 0 ˜ ˜ g 23 = g 2 • g 3 = 0 ˜ ˜ g 33 = g 3 • g 3 = 1 . Also note that g 1 and g 3 ˜ ˜ are dimensionless. e 2 +  ------. e 3 } to write the position vector as ˜ ˜ ˜ x = x1 e1 + x2 e2 + x3 e3 . [ g ij ] = 1 0 0 0 r2 0 . ˜ ˜ ˜ ˜ where x 1 = r cos θ x 2 = r sin θ x3 = z . ˜ The metric coefficients g ij for cylindrical coordinates are derived in the usual way: g 11 = g 1 • g 1 = 1 ˜ ˜ g 12 = g 2 • g 1 = 0 ˜ ˜ g 31 = g 3 • g 1 = 0 ˜ ˜ or. ˜  ∂z  r.edu/~rmbrann/gobag. z ˜  ∂r  θ.25) .unm. z ˜ ˜ ˜ ˜ ∂x 1 ∂x 2 ∂x 3 ∂x g 2 =  ----. z ˜  ∂θ r. g 3 =  ----. e 1 +  ------. e 2 +  ------.rmbrann@me.23c) ← cylindrical coordinates (5. e 3 = cos θ e 1 + sin θe 2 ˜  ∂r θ.20) (5. e 3 = – r sin θ e 1 + r cos θ e 2 ˜  ∂θ  r.23b) (5. θ ˜ ˜ ˜ (5. e 3 = e 3 . e 2 +  ------. =  ------. ˜ ˜ (5.unm.

html DRAFT June 17. g 3 } associated with cylindrical coordinates is ˜ orthogonal.e θ ---- x2   x 1 ∂x 2 ˜ ∂x 3 ˜ ∂x 1 ˜ r˜ dx ˜ ˜ ˜ 1 ˜ ∂z∂z∂zdz = ------.e 3 = ---. x 3 } : 2 2 x1 + x2 x2 θ = tan – 1  ----  x 1 r = (5.26).26a) ˜ ˜ ˜ ˜ ˜ ˜ g2 eθ g 2 = g 21 g 1 + g 22 g 2 + g 23 g 3 = ----. the physical components are denoted { v r. respectively.e 2 = cos θ e 1 + sin θ e 2 = e r ∂x 1 ˜ ∂x 2 ˜ ∂x 3 ˜ r ˜ r ˜ dx ˜ ˜ ˜ ˜ ˜ –x2 ∂θ∂θ1∂θ1 g 2 = dθ = ------. 2004 69 The contravariant dual basis can be derived in the usual way by applying the formula of Eq (2. (5. (5.28c) Because the curvilinear basis { g 1.edu http://me.e + ------. It is common practice to˜use the associated orthonormal basis { e r.28b) (5. Whenever you derive a formula in terms of the general covariant and contravariant vector components.27b) (5. the dual vectors { g 1.18): g 1 = g 11 g 1 + g 12 g 2 + g 13 g 3 = g 1 = e r (5. e z } as we ˜ ˜ ˜ have in the above equations.16).e = e = e g 3 = ----. To do this. it is conventional to also ˜define so-called “physical components” with respect to the orthonormalized cylindrical basis.e 3 =  ------. (5. This practice also eliminates the sometimes confusing complication of dimensional base vectors. v 2.16) gives x1 x2 ∂r∂r∂rdr g 1 = ----. the shared orthonormal basis is called the “physical basis. When a coordinate system is orthogonal.” Notationally. g 2.unm. Because cylindrical coor˜ ˜ ˜ dinates are orthogonal. v 2. v z } .27c) z = x3 . v 3 } are defined v k ≡ g k • v and v k ≡ g k • v . v 3 } or { v 1.e + ------. Then applying Eq. vector components { v 1. we must express the coordinates { r. (5.rmbrann@me. g 2. we can alternatively derive these results by directly applying Eq. e θ. (5.e 2 + ------. ∂x 1 ˜ 1 ∂x 2 ˜ 2 ∂x 3 ˜ 3 dx ˜3 ˜z ˜ ˜ which agrees with the previous result in Eq.edu/~rmbrann/gobag.26b) ˜˜2 r r ˜ ˜ ˜ ˜ g 3 = g 31 g 1 + g 32 g 2 + g 33 g 3 = g 3 = e z .26c) ˜ ˜ ˜ ˜ ˜ ˜ With considerably more effort.= ---(5. . x 2.unm. For cylindrical coordinates.e 1 + ------. θ.28a) (5. cos 2 θe 1 +  ---- cos 2 θe 2 = -.e 1 + ---.27a) (5.= ------. z } in terms of { x 1.e 2 + ------. g 3 } ˜are˜ in exactly the same directions but have different ˜ ˜ magnitudes. v θ.e 1 + ------.

e r = sin θ cos φ e 1 + sin θ sin φ e 2 + cos θ e 3 where ˜ ˜ ˜ ˜ ˜ ˜ g 2 = re θ . { η 1 =r.edu/~rmbrann/gobag.32c) r sin θ ˜ ˜ (c) As was done for cylindrical coordinates in Eq.29a) (5..rmbrann@me. where ˜ ˜ ˜ ˜ ˜ (b) Prove that the dual contravariant basis for spherical coordinates is (5.30c) (a) Follow the above example to prove that the covariant basis for spherical coordinates is g1 = er . η 2 =θ.30a) (5.32a) ˜ ˜ 1 g 2 = -. e θ = cos θ cos φ e 1 + cos θ sin φ e 2 – sin θ e 3 where ˜ ˜ ˜ ˜ ˜ ˜ g 3 = r sin θ e φ e φ = – sin φ e 1 + cos φ e 2 . (5. η 3 =φ } .29) show that the spherical covariant and contravariant vector components are related to the spherical “physical” components by v1 = vr v 2 = rv θ v 3 = ( r sin θ )v φ v 1 = vr vθ v 2 = ---r vφ v 3 = ------------.θ ---r˜ r ˜ ˜ ˜ v3 = g3 • v = ez • v = vz . (5.32b) r˜ ˜ 1 g 3 = ------------.html DRAFT June 17.30b) (5.29b) (5.31a) (5. For cylindrical coordinates.33a) (5. (5.1 For spherical coordinates.33b) (5.unm.31c) g1 = er .29c) Study Question 5. the underlying rectangular Cartesian coordinates are x 1 = r sin θ cos φ x 2 = r sin θ sin φ x 3 = r cos θ . r sin θ (5.e θ .unm.e φ . these conversion formulas are: v1 = g1 • v = er • v = vr ˜ ˜ ˜ ˜ v 2 = g 2 • v = ( re θ ) • v = rv θ ˜ ˜ ˜ ˜ v3 = g3 • v = ez • v = vz ˜ ˜ ˜ ˜ v1 = g1 • v = er • v = vr ˜ ˜ ˜ ˜  1 e  • v = vθ v 2 = g 2 • v =  -. (5. (5.33c) . ˜ ˜ ˜ ˜ (5.edu http://me.31b) (5. 2004 70 it’s a good idea to convert the final result to physical coordinates and the physical basis.

η 3 ) .37) which holds for all dx . Now we outline the procedure for more complicated general curvilinear coordinates. z ) . (5.unm.4 The gradient of a scalar Section 5.34) (5.html DRAFT June 17. Gradient formulas won’t look significantly different until we compute vector gradients in the next section. θ.edu/~rmbrann/gobag. (5. a scalar-valued field s = s ( η 1.18): dx = dη k g k . The increment in this function is given by ∂s ds = -------. The direct notation definition of the gradient ds ⁄ dx of a scalar field is ˜ ˜ ds ds = ----.42) . Recall Eq. (5. ˜∂η k ˜ Dotting both sides of Eq.unm.39) (5.dη k .40) dx ˜ ˜ ˜ Comparing the above two equations gives the formula for the gradient of a scalar in curvilinear coordinates: ∂sds ----. using Eq. (5.34) with g i shows that ˜ dη i = g i • dx ˜ ˜ . Example: cylindrical coordinates Consider a scalar field s = s ( r.36).1 provided a simplified example for finding the formula for the gradient of a scalar function of cylindrical coordinates. ˜ ˜ where ∂x g k = -------.• dx ∀dx . 2004 71 5.edu http://me.= -------.35) This important equation is crucial in determining expressions for gradient operations. Consider. ∂sds = -------. (5. (5.41) Notice that this formula is very similar in form to the familiar formula for rectangular Cartesian coordinates.g k • dx .. ∂η k or.rmbrann@me. (5.g k dx ∂η k ˜ ˜ . for example.38) (5. η 2. ∂η k ˜ ˜ (5.36) (5.

(5. recall the product rule for the gradient of a scalar times a vector: du ds d ( su ) -------------.e z .+ s -----.11) applies to the gradient of any scalar.+ v z ------..˜ ----.---. (5.rmbrann@me. (5. (5. z } coordinates. (5.------------.46) v = vr er + vθ eθ + vz ez .45) ----. and v z are presumably known as functions of the { r.48) dx dx dx ˜ ˜ ˜ Eq.˜----.e r + ----. (5.unm. v θ .5 Gradient of a vector -. the base vectors also vary with the coordinates! First.---. ˜ ∂x j ˜ ˜ dx ˜ Suppose a vector v is expressed in the cylindrical coordinates: ˜ (5.44) 5. the formula for the gradient is ∂v i dv (5.edu http://me.46).41) with the contravariant basis of Eq.43) Study Question 5. (5.+ ----. the gradient of dv ⁄ dx gives ˜ ˜ dv r dv θ dv z dv ----.26) gives ∂s ∂s e θ ∂s ds .48) can be . If the vector is expressed in Cartesian coordinates. (5. ∂r ˜ ∂z ˜ ∂θ r dx ˜ which is the gradient formula typically found in math handbooks.= e r ------.+ ---.+ v θ -------.= ---. ∂φ r sin θ ∂r ˜ ∂θ r dx ˜ (5. (5.41)] to prove that the formula for the gradient of a scalar s in spherical coordinates is ∂s e φ ∂s ∂s e θ ds .= -----.+ e θ ------.html DRAFT June 17.+ e z ------˜ dx ˜ dx ˜ dx ˜ dx ˜ ˜ ˜ ˜ de r de θ de z ˜˜˜+ v r ------.47) ˜˜dx dx ˜ dx ˜ ˜ ˜ Applying the product rule to each of the terms in Eq.e i e j .e r + ----.unm.“simplified” example Now let’s look at the gradient of a vector..˜. Hence.= u ----.= ---. the first three terms of Eq. (5. Importantly. 2004 72 Applying Eq..31) in Eq.edu/~rmbrann/gobag. ˜ ˜ ˜ ˜ The components v r . θ.2 Follow the above example [using Eq.

= -. ˜dx ˜ ˜ Applying (5.= 0 .49) and (5. gives de r dθ dθ ------.e r e θ ˜r˜ ˜ dx ˜ de z ------.= v r .html DRAFT June 17.z e 3 r ˜ ˜  ˜ ˜  1 1 + v r  -.z e 3 ˜ r ˜ dx ˜ ˜  ˜  ˜ v θ .= – -.e θ + v r . (5.θ + e θ  v θ .r e r + -------.θ dv r ------.edu/~rmbrann/gobag.e θ + v θ . ˜dx ˜ ˜ Substituting Eqs.48) gives v r .r e r + -------.e θ e θ + v θ  – -.z e 3 r ˜ ˜  ˜ ˜  v z .= e θ ----˜dx ˜ ˜ dx ˜ dx ˜ ˜ ˜ de θ dθ dθ -------.edu http://me.52) (5.z e 3 r ˜ dx ˜ ˜ ˜ v θ .unm. Applying the product rule to Eq.50) (5.z e 3 .θ dv θ ------.θ dv z ------.2).r e r + -------. r ˜ ˜   r˜ ˜  (5.unm.49) r ˜ dx ˜ ˜ ˜ Now we need formulas for the gradients of the base vectors.e θ + v θ .rmbrann@me.e θ + v r .e θ + v z .= e r  v r .51) .θ dv ----.= ( – sin θ e 1 + cos θ e 2 ) ----. (5.= v θ .r e r + -------.z e 3 r ˜ dx ˜ ˜ ˜ v z .e r e θ .10).θ e z  v z . de r 1 ------.r e r + -------.e θ e θ ˜r˜ ˜ dx ˜ de θ 1 -------. 2004 73 written v r .= – e r ----˜dx ˜ ˜ dx ˜ dx ˜ ˜ ˜ de z ------. noting that the gradients of the Cartesian basis are all zero. (5.= v z .r e r + -------.e θ + v z .= ( – cos θ e 1 – sin θ e 2 ) ----.51) into (5.= 0 .

Bird’s book on macromolecular hydrodynamics is particularly well-organized and error-free. 5. η 2. If.z . Typically.r )e θ e r ˜ ˜ + ( v z .θ v +  -------. that’s because Bird (and some others) define the gradient of a tensor to be the transpose of our definition. ˜ ˜ (5. you use Bird’s appendix.55) ˜ ˜ Each component of the vector is of course a function of the coordinates.unm. e r e θ  r r˜ ˜ v θ .r )e z e r ˜ ˜ v r . v = v ( η 1.rmbrann@me.θ v θ +  -------. but for general curvi- .z v z .θ -v θ ---------------r v θ .edu/~rmbrann/gobag.z e z e z .r v z .54) There’s no doubt that this result required a considerable amount of effort to derive. begins a new nightmare.z )e r e z ˜ ˜ + ( v θ .z )e θ e z ˜ ˜ + v z .edu http://me.r dv ----˜ dx ˜ = v θ . Taking gradients of vectors. Before using anyone’s gradient table. you will notice that the components given for the gradient of a vector seem to be the transpose of what we have presented above. 2004 74 Collecting terms gives dv ----.53) This result is usually given in textbooks in matrix form with respect to the cylindrical basis: v r .r v r .θ +v r ----------------r v z .+ ---r e θ e θ  r r ˜ ˜ v z .html DRAFT June 17.θ +  -------- e z e θ  r ˜ ˜ + ( v r .– ---. The appendix of R. (5.r )e r e r ˜ ˜ + ( v θ . Consider a vector field.unm. however. however. you should always ascertain which definition the author uses.θ -------r v r .B. Now we are going to perform the same sort of analysis to show how the gradient is determined for general curvilinear coordinates.z v θ . η 3 ) .= ˜ dx ˜ ( v r . (5. these kinds of formulas are compiled in the appendices of most tensor analysis reference books.6 Gradient of a vector in curvilinear coordinates The formula for the gradient of a scalar in curvilinear coordinates was not particularly tough to derive and comprehend — it didn’t look profoundly different from the formula for rectangular Cartesian coordinates.

(5.36).e.edu http://me. 2004 75 linear coordinates. ˜ dx ˜ ˜ ˜ Comparing the above two equations gives us a formula for the gradient: dv = ˜ (5.= -------j g i g j + v i -------j g j . The k con˜ . note that ∂2x ∂2x Γ ij = ----------------.62) ˜˜ dx ∂η ˜ ˜ ∂η ˜ ˜ Incidentally.57) ˜ ˜ ˜ Applying the chain rule and using Eq. v = v i ( η 1. This family of system-dependent vectors is denoted ∂g i Γ ij ≡ -------j .35). (5. (5.58) Similarly. η 3 ) g i ( η 1. ˜˜∂η ∂η ˜ ˜ ˜ Substituting these results into Eq.60) (5. η 3 ) . ˜˜ ∂η ˜ ˜ ∂η ˜ ˜ ˜ (5. Christoffel Symbols Note that the nine ∂g i ⁄ ∂η j vectors [i.= Γ ji .unm. these equations serve as further examples of how a superscript in the “denominator” is regarded as a subscript in the summation convention. ∂η ∂η ˜ ˜ (5. η 2. the coefficients of v i in the last ˜ term of Eq.64) th (5.= ----------------.56) ˜ ˜ Therefore the increment dv involves both increments dv i of the components and increments ˜ dg i of the base vectors: ˜ dv = dv i g i + v i dg i .63) Thus. the base vector increments are ∂g i ∂g i dg i = -------j dη j = -------j g j • dx . ˜∂η ˜ Recalling Eq. (5.• dx ∀dx . (5.html DRAFT June 17. only six of the nine Γ ij vectors are independent due to the symmetry in ij . (5.rmbrann@me..59) which holds for all dx . the component increments can be written i i ∂v∂vdv i = -------j dη j = -------j g j • dx .62)] are strictly properties of the coordinate system and they may be computed and tabulated a priori. η 2.57) and rearranging gives ∂g i i ∂vdv = -------j g i g j • dx + v i -------j g j • dx .61) ∂g i i ∂vdv ----. Recall the gradient dv ⁄ dx of a vector is defined in direct notation ˜ ˜ ˜ such that dv ----. (5.unm.edu/~rmbrann/gobag. so are the base vectors! Written out. ˜ i ˜ ∂η j ∂η ∂η i ∂η j ˜ ˜ (5.

˜ symbols are therefore frequently denoted ij The fact that Christoffel symbols are not components of a tensor is. even though they ˜ aren’t.unm. we are also comfortable with the typesetk ting Γ ij . but keep in mind that these quantities simply characterize how the base vectors vary in space. we define Christoffel symbols of the “first” kind.html DRAFT June 17. it should be taken to mean the Christoffel symbol of the second kind defined above. even though the metric matrix is constructed from basis-intrinsic quantities..e. of course. Christoffel are also sometimes called the “affinities” [12]. but it was proved that they turned out to also satisfy the tensor transformation rule. it turns out to not be basis intrinsic itself (the metric components are components of the identity tensor). they are “basis-intrinsic” quantities (i. For example. which are useful in Riemannian spaces where there is no underlying rectangular Cartesian basis. proponents of this notational ideology would — by the same arguments — be compelled to not typeset coordinates using the notation η k which erroneously makes it look as though the η k are contravariant components of some vector η . Of course. Instead of the notation Γ ij . Γ ij ≠ Γ mn ( g m • g m ) ( g n • g n ) ( g p • g k ) ]. Christoffel symbols may appear rather arcane. 2004 76 travariant component of Γ ij is obtained by dotting g k into Γ ij : ˜ ˜ ˜ k Γ ij ≡ g k • Γ ij ˜ ˜ . In general. we mean that a second basis k g k will have a different set of Christoffel symbols Γ ij and. a strong justification for avoiding typesetting Christoffel symbols in the same way as tensors. On page 86. if the term “Christoffel symbol” is used by itself. Any new object that is constructed from basis-intrinsic quantities should be itself regarded as basis-intrinsic until proven otherwise.e. Christoffel ˜ ˜ ˜ ˜ in the˜literature with the odd-looking notation { k } . Being comfortable with the typesetting η k . as discussed below. Consequently.. they are that ˜ are not obtainable via a simple tensor transformation of the Christoffel symbols of the original k p k system [i. indexed quantities whose meaning is defined for a particular basis and whose connection with counterparts from a different basis are not obtained via a tensor transformation rule).unm. However. . to be 100% consistent.edu/~rmbrann/gobag. the metric g ij ≡ g i • g j were initially regarded as basis-intrinsic because they were constructed from ˜ ˜ basis-intrinsic objects (the base vectors). Instead. k Important comment about notation: Even though the Christoffel symbols Γ ij have three indices. the base vectors themselves are basis-intrinsic objects.rmbrann@me. they are not components of a third-order tensor. By this.65) k The quantity Γ ij is called the Christoffel symbol of the second kind.edu http://me. The key is for the analyst to recognize that neither of these symbols connote tensors. (5.

70) As long as Eq.64). (5. the anti-symmetric part of the Christoffel symbols: k k 2Γ [kij ] ≡ Γ ij – Γ ji (5. the Christoffel symbols are not the components of a third-order tensor. This is in contrast to the metric coefficients g ij which do happen to be components of a tensor (namely.e. (5. it’s possible that the manifold torsion will be nonzero. so no further mention will be made of the manifold torsion.rmbrann@me.. For a non-holomomic system.67) Increments in the base vectors By the chain rule. the increment in the covariant base vector can always be written ∂g i dg i = -------j dη j ˜∂η ˜ or.66) k Like the components F ij of the tensor F . the identity tensor).edu http://me. if we were to consider some different curvilinear coordinate system and compute the Christoffel symbols for that system. (5. 2004 77 By virtue of Eq. (5. Consequently. (5. we conclude that the variation of the base vectors˜ in space is completely ˜ ˜ characterized by the Christoffel symbols. Henceforth.edu/~rmbrann/gobag.69) Manifold torsion Recall that the Christoffel symbols are not components of basis-independent tensors.67) k dg i = Γ ij g k dη j ˜ ˜ (5. . Namely. “free” vector).63) Γ ij = ∂g i ⁄ ∂η j . Specifically. th ∂g k -------i = Γ ij g k . k Recalling that Γ ij is the k contravariant component of Γ ij and by Eq.68) (5.html DRAFT June 17. using the notation introduced in Eq. then the manifold torsion will be zero. note that k k Γ ij = Γ ji .66) holds true.unm. the Christoffel symbols Γ ij are properties of the par˜ ˜ ticular coordinate system and its basis. the result would not be the same as would be obtained by a tensor transformation of the Christoffel symbols of the first coordinate system to the second system. we assume that Eq.unm. Consider. however [12]. but it will turn out to be a basis independent (i. ˜∂η j ˜ (5. (5.66) holds.

63). you will probably find the subscripts (1.edu http://me.html DRAFT June 17.71b) (5.72c) k Noting that Γ ij is the coefficient of g k in the expression for Γ ij .= – sin θ e 1 + cos θe 2 = ˜∂θ ˜ ˜ ˜ ∂g 2 Γ 22 = -------. η 2 =θ.19) can be written g 1 = cos θ e 1 + sin θ e 2 ˜ ˜ ˜ g 2 = – r sin θ e 1 + r cos θe 2 ˜ ˜ ˜ g3 = e3 .z) for clarity so they are listed as θ θ Γ rθ = Γ θr = 1 -r and r Γ θθ = – r k (all other Γ ij = 0 ). applying Eq. ˜∂z ˜ ˜ (5.unm. (5.72b) (5.g r ˜2 = – rg 1 ˜ ∂g 1 Γ 13 = -------. noting that Γ 12 is the coefficient of g 2 in Γ 12 ˜ ˜ 1 and noting that Γ 22 the coefficient of g 1 in Γ 22 .= 0 ˜∂r ˜ ˜ Γ 21 = Γ 12 ˜ ˜ Γ 31 = Γ 13 ˜ ˜ ∂g 1 Γ 12 = -------.71c) (5. (5.71a) (5. 2004 78 EXAMPLE: Christoffel symbols for cylindrical coordinates In terms of the underlying rectangular Cartesian basis.73) r If you look up cylindrical Christoffel symbols in a handbook. the covariant base vectors from Eq.= 0 . (5.˜ Namely. (5.2.θ.rmbrann@me. ˜ ˜ Therefore. ˜ ˜ 1 2 2 1 k Γ 12 = Γ 21 = -and Γ 22 = – r (all other Γ ij = 0 ).72a) (5.edu/~rmbrann/gobag.3) replaced with the coordinate symbols (r. we find that only three of the ˜ 2 27 Christoffel symbols are nonzero. η 3 =z } ∂g 1 Γ 11 = -------.= 0 ˜∂z ˜ ˜ ∂g 2 Γ 23 = -------.= – r cos θ e 1 – r sin θ e 2 ˜∂θ ˜ ˜ ˜ Γ 32 = Γ 23 ˜ ˜ 1 -. using { η 1 =r.unm.= 0 ˜∂z ˜ ˜ ∂g 3 Γ 33 = -------.74) .

first differentiate g 1 from Eq.g 3 r˜ r˜ ˜ ˜ ˜ ˜ Γ 22 = – rg 1 Γ 23 = cot φ g 3 Γ 21 = Γ 12 ˜ ˜ ˜ ˜ ˜ ˜ Γ 31 = Γ 13 Γ 32 = Γ 23 Γ 33 = – rsin 2 θg 1 – sin θ cos θ g 2 . (5.edu/~rmbrann/gobag.76c) Partial Answer: To find the vector Γ 12 .html DRAFT June 17. Keep in mind: the Christoffel terms in Eq.67) into ∂v i dv k ----. 2004 79 Study Question 5.rmbrann@me.75c) (5.76b) (5.62) gives Substituting Eq.= -------j g i g j + v k Γ kj g i g j . (5. changing the dummy summation indices so that the basis dyads will have the same subscripts in both terms.unm.79) The notation for covariant differentiation varies widely: the slash in v i /j is also often denoted with a comma v i .31a) with respect to η 2 ˜ 2 (i. φ φ Γ rφ = Γ φr = 1 . (5.j although many writers use a comma to denote ordinary partial differentiation.31). Then t Γ 12 is ˜ ˜ found by dotting Γ 12 by g2 from Eq. (5. r Γ φφ = – rsin 2 θ .+ v k Γ kj .= v /ij g i g j .edu http://me. (5. θ. Hence ˜ ˜ 2 θ Γ 12 = Γ rθ = Γ 12 • g 2 = e θ • ( e θ ⁄ r ) = 1 ⁄ r . prove that 1 1 Γ 11 = 0 Γ 12 = -.75b) (5.75a) (5. Using covariant differentiation. ˜ ˜ ˜ ˜ Covariant differentiation of contravariant components Eq.79) account for the variation of the base vectors in space. with respect to θ).31b) that˜ the result is Γ 12 = e θ .= -------j g i g j + v i Γ ij g k g j .3 Using the spherical covariant base vectors in Eq.g 2 Γ 13 = -. ∂η j (5.. ˜ ˜ ˜ ˜ ˜ ˜ ˜ Therefore show that the Christoffel symbols for spherical { r. You then recognize from (5.unm. (5. i ∂vdv i ----. -r φ φ Γ θφ = Γ φθ = cot φ θ Γ φφ = – sin θ cos θ k (all other Γ ij = 0 ).78) ∂v i i v i / j ≡ ------.80) .76a) (5. (5.32b).e. the gradient of a vector is then written compactly as dv ----. -r r Γ θθ = – r .77) ˜ dx ∂η ˜ ˜ ˜ ˜ ˜ or. ˜ dx ∂η ˜ ˜ ˜ ˜ ˜ We now introduce a compact notation called covariant vector differentiation: (5. ˜ dx ˜ ˜ ˜ (5. (5. φ } coordinates are θ θ Γ rθ = Γ θr = 1 .

rmbrann@me.unm.edu

http://me.unm.edu/~rmbrann/gobag.html

DRAFT June 17, 2004

80

Example: Gradient of a vector in cylindrical coordinates Recalling Eq. (5.72), we can apply Eq. (5.79) to obtain ∂v 1 1 1 1 v 1 / 1 = -------- + v 1 Γ 11 + v 2 Γ 21 + v 3 Γ 31 1 ∂η ∂v 1 1 1 1 v 1 / 2 = -------- + v 1 Γ 12 + v 2 Γ 22 + v 3 Γ 32 2 ∂η ∂v 1 1 1 1 v 1 / 3 = -------- + v 1 Γ 13 + v 2 Γ 23 + v 3 Γ 33 ∂η 3 ∂v 2 2 2 2 v 2 / 1 = -------- + v 1 Γ 11 + v 2 Γ 21 + v 3 Γ 31 ∂η 1 ∂v 2 2 2 2 v 2 / 2 = -------- + v 1 Γ 12 + v 2 Γ 22 + v 3 Γ 32 ∂η 2 ∂v 2 2 2 2 v 2 / 3 = -------- + v 1 Γ 13 + v 2 Γ 23 + v 3 Γ 33 3 ∂η ∂v 3 3 3 3 v 3 / 1 = -------- + v 1 Γ 11 + v 2 Γ 21 + v 3 Γ 31 ∂η 1 ∂v 3 3 3 3 v 3 / 2 = -------- + v 1 Γ 12 + v 2 Γ 22 + v 3 Γ 32 ∂η 2 ∂v 3 3 3 3 v 3 / 3 = -------- + v 1 Γ 13 + v 2 Γ 23 + v 3 Γ 33 3 ∂η Hence, the gradient of a vector is given by dv ----- = ˜ dx ˜  ∂v - g g 1 ------ ∂r  1 ˜ ˜ 2 ∂v +  -------  g 2 g 1  ∂r  ˜ ˜
1

∂v 1 = ------∂r = ∂v - – v 2 r ------∂θ ∂v 1 = ------∂z
1

(5.81rr) (5.81rt) (5.81rz)

∂v 2 = ------∂r ∂v 2 v 1 = ------- + ---∂θ r ∂v 2 = ------∂z ∂v 3 = ------∂r ∂v 3 = ------∂θ ∂v 3 = ------∂z

(5.81tr) (5.81tt) (5.81tz)

(5.81zr) (5.81zt) (5.81zz)

∂v 1 +  ------- – v 2 r g 1 g 2  ∂θ  ˜ ˜ ∂v 2 v 1 +  ------- + ----  g 2 g 2  ∂θr ˜ ˜ ∂v +  -------  g 3 g 2  ∂θ  ˜ ˜
3

∂v 1 +  -------  g 1 g 3  ∂z  ˜ ˜ ∂v 2 +  -------  g 2 g 3  ∂z  ˜ ˜ ∂v +  -------  g 3 g 3 .  ∂z  ˜ ˜
3

∂v +  -------  g 3 g 1  ∂r  ˜ ˜
3 r  ∂v- e e ------ ∂r  ˜ r ˜ r

(5.82)

Substituting Eqs. (5.19), (5.26), and (5.29) into the above formula gives dv ----- = ˜ dx ˜ eθ ∂v r ˜+  ------- – v θ e r  ----   ∂θ ˜  r eθ 1 ∂v θ v - ˜- +  -- ------- + ---r ( re θ )  ----   r ∂θ r ˜  r eθ ∂v z ˜+  ------- e z  ----   ∂θ  ˜  r  ∂v r +  ------- e r e z  ∂z  ˜ ˜ 1 ∂v θ - +  -- -------  ( re θ )e z  r ∂z  ˜ ˜ ∂v z +  ------- e z e z .  ∂z  ˜ ˜ (5.83)

1 ∂v θ - +  -- -------  ( re θ )e r  r ∂r  ˜ ˜ ∂v z +  ------- e z e r  ∂r  ˜ ˜

rmbrann@me.unm.edu

http://me.unm.edu/~rmbrann/gobag.html

DRAFT June 17, 2004

81

Upon simplification, the matrix of dv ⁄ dx with respect to the usual orthonormalized basis ˜ ˜ { e r, e θ, e z } is ˜ ˜ ˜ ∂v r ------∂r dv -----˜ dx ˜ = ∂v θ ------∂r ∂v z ------∂r 1  ∂v r -- ------- – v θ  r  ∂θ  1 ∂v θ + v- -- ------- ---r -  r ∂θ r 1 ∂v z -- ------- r ∂θ ∂v r ------∂z ∂v θ ------∂z ∂v z ------∂z w.r.t { e r, e θ, e z } unit basis. ˜ ˜ ˜ (5.84)

This is the formula usually cited in handbooks.

Study Question 5.4 Again we will duplicate the methods of the preceding example for spherical coordinates. However, rather than duplicating the entire hideous analyses, we here compute only the rφ component of dv ⁄ dx . In particular, we seek ˜ ˜ dv (5.85) e r • ----- • e φ . ˜ dx ˜ ˜ ˜ (a) Noting from Eqs. (5.31) and (5.32) how e r and e φ are related to { g 1, g 2, g 3 } , show that ˜ ˜ ˜ ˜ ˜ dv dv 1 - 1 ----(5.86) e r • ----- • e φ = ------------- g • ˜ • g 3 . ˜ dx ˜ r sin θ ˜ dx ˜ ˜ ˜ ˜ (b) Explain why Eq. (5.80) therefore implies that v ;1 dv 3 (5.87) e r • ----- • e φ = ------------- . ˜ dx ˜ r sin θ ˜ ˜ (c) With respect to the orthonormal basis { e r, e θ, e φ } recall from Eq. (5.33) that v 1 =v r and ˜ ˜ ˜ v 3 =v φ ⁄ r sin θ . Use the Christoffel symbols of Eq. (5.76) in the formula (5.79) to show that ∂v r v 1 / 3 = ------- + v φ ( – sinθ ) . ∂φ (d) The final step is to substitute this result into Eq. (5.87) to deduce that dv 1 - ∂v- v φ r e r • ----- • e φ = ------------- ------- – ---- . (5.89) ˜ r sin θ ∂φ dx ˜ r ˜ ˜ Cite a textbook (or other reference) that tabulates formulas for gradients in spherical coordinates. Does your Eq. (5.89) agree with the formula for the rφ component of dv ⁄ dx pro˜ ˜ vided in the textbook? (5.88)

rmbrann@me.unm.edu

http://me.unm.edu/~rmbrann/gobag.html

DRAFT June 17, 2004

82

Covariant differentiation of covariant components Recalling, Eq. (5.63), we now consider a similarly-defined basis-dependent vector: ∂g k P ik ≡ -------- , ˜˜ ∂η i and analogously to Eq. (5.65) we define
k P ij ≡ P ik • g j ˜ ˜

(5.90)

.

(5.91)

Given a vector v for which the covariant v k components are known, an analysis similar to ˜ that of the previous sections eventually reveals that the gradient of the vector can be written: dv ----- = v i / j g i g j , ˜ dx ˜ ˜ ˜ where ∂v k v i / j ≡ -------i + v k P ij . j ∂η (5.92)

(5.93)

k k The question naturally arises: what connection, if any, does P ij have with Γ ij ? To answer this question, differentiate both sides of Eq. (2.12) with respect to η k :

∂g j ∂g i -------- • g + g i • -------- = 0 , ˜˜∂η k ˜ j ˜ ∂η k or
i i i P k • g j + g i • Γ jk = 0 and therefore P kj + Γ kj = 0 . ˜ ˜ ˜ ˜ k is just the negative of Γ k . Consequently, equation (5.92) becomes In other words, P ij ij

(5.94)

(5.95)

∂v dv k v i / j ≡ -------i – v k Γ ij . ----- = v i / j g i g j where ˜ dx ∂η j ˜ ˜ ˜ We now have an equation for increments of the contravariant base vectors
k dg k = ( – Γ ij g j )dη i ˜ ˜

(5.96)

(5.97)

which should be compared with Eq. (5.69) Product rules for covariant differentiation Most of the usual rules of differential calculus apply. For example, ( vi wj )/k = vi wj/k + vi/k wj and ( v i w j ) / k = v i w j / k + v i / k w j , etc. (5.98)

The “ ∇ ” operates on ˜ ˜ arguments to its right. the gradient dv ⁄ dx is a second order tensor whose Cartesian ij ˜ ˜ components are given by ∂v i ⁄ ∂x j . This dyad has components v i d j . Thus. In Cartesian coordinates. But couldn’t we just have well have defined a second order tensor whose Cartesian ij components would be ∂v j ⁄ ∂x i . As mentioned earlier.= ( v i g i )∇ (5.104) . By this we mean that the ij component follows an index ordering that is analogous to the index ordering on the dyad v d . we will denote the backward operating gradient by v ∇ and the forward operating gradient by ∇v .103) and i ( g k )∇ = Γ kj g i g j ˜ ˜ ˜ (5.edu http://me. while the ∇ operates on arguments to its left.101) dv ----. whereas the component ordering for the backward ˜˜ gradient is analogous to that on the ˜ ˜ dyad v d = ( v j d i )g i g j .unm.” Recall that i ∂vdv i ----. the transpose of one choice gives the other choice. This leads naturally to the ques˜˜ ˜ tion of whether or not it is possible to define right and left ˜gradient operators in a manner that permits some “heuristic assistance. Thus.unm. which is comparable ˜˜ to the index ordering in the Cartesian ij component formula ∂v i ⁄ ∂x j .= -------j g i g j + v k Γ kj g i g j ˜ dx ∂η ˜ ˜ ˜ ˜ ˜ Suppose that we wish to identify a left-acting operator ∇ such that (5.99) ˜ dx ˜ ˜ ˜ ˜ ˜ dv T ∇v means the same thing as  ------ Thus.7 Backward and forward gradient operators The gradient dv ⁄ dx that we defined in the preceding section is really a backward operating ˜ ˜ gradient.102) ˜ dx ˜ ˜ Let’s suppose we desire to define this operator such that it follows a product rule so that ( v i g i )∇ = g i [ ( v i )∇ ] + v i [ ( g i )∇ ] ˜ ˜ ˜ This suggests that we should define i ∂v( v i )∇ = -------j g j ∂η ˜ (5. The only difference between these two choices is that the index placement is swapped. Following definitions and notation used by Malvern [11].html DRAFT June 17.edu/~rmbrann/gobag.100) ˜  dx ˜ ˜ ˜ ˜ ˜ The component ordering for the forward gradient operator is defined in a manner that is analogous to the dyad d v = ( d i v j )g i g j . 2004 83 5. v ∇ = v /ij g i g j (5.. ∇v = v j/ i g i g j (5. our previously defined vector gradient is actually a backward del definition: dv v ∇ means the same thing as ----.rmbrann@me.

(5.110) gives the very useful formula [7]: ) 1 ∂ ( Jv k ∇ • v = -.108) A much simpler expression for the divergence can be obtained by taking the trace of Eq. taking the trace of Eq. for the forward operation gradient. 5.-------J ∂η k Therefore.rmbrann@me. A careful application of the original definition of the gradient is probably safer.111) .105) and i ∇ ( g k ) = Γ kj g j g i ˜ ˜ ˜ (5. For example.unm.+ v k Γ ki i ˜ ∂η (5.html DRAFT June 17. but we caution against using them too cavalierly. 2004 84 Similarly.109) (5. Eq.106) These definitions are cute.80) to give ∇ • v = v /ij g i • g j = v /ij δ ij = v /ii ˜ ˜ ˜ Written out explicitly.107) ˜  dx ˜ ˜ The simplicity of the component expression for the trace depends on the expression chosen for the gradient operation. we define ∇ ( v i g i ) = ∇ ( v i ) g i + v i ∇g i ˜ ˜ ˜ from which it follows i ∂v∇ ( v i ) = -------j g j ∂η ˜ (5. this result is i ∂vi ∇ • v = ------.112) (5. (5.110) It will later be shown [in Eq.unm.8 Divergence of a vector The divergence of a vector v is denoted ∇ • v and it is defined by ˜ ˜ dv ∇ • v = tr  ------ (5.--------------k J ∂η ˜ (5.96) gives a valid but somewhat ugly expression: ∇ • v = v i / j ( g i • g j ) = v i / j ( g ij ) = ˜ ˜ ˜ ∂v k -------i – v k Γ ij g ij ∂η j (5.126)] that 1 ∂Ji Γ ki = -. (5.edu/~rmbrann/gobag.edu http://me. (5.

118) 2J ˜ ˜ ˜ ˜ where (recall) J = ( g1 × g2 ) • g3 ˜ ˜ ˜ (5. 2˜ ˜ ˜ ˜ ˜ ˜ .edu http://me. Namely. (5. (3.115) We can alternatively start with the covariant expression for the gradient given in Eq.html DRAFT June 17.unm.114) This result can be written as 1 ∂v i i ∇ × v = -.ξ :  ------ = -. ∂η j (5. where ξ is the permutation tensor. 2004 85 5.ε ijk .100).ξ :  ------ (5. the expanded component form of this result is J 1 ∇ × v = ----. (5.= v i / j g i g j where ˜ dx ˜ ˜ ˜ which gives ∂v k v i / j ≡ -------i – v k Γ ij . we have used the skew-symmetry of the permutation tensor. Thus.77) gives i 1 ∂vi ∇ × v = -.113) ˜ ˜  dx  dx 2˜ 2˜ ˜ ˜ ˜ In the last step. recalling Eq.47) that ξ kij = -.rmbrann@me.117) 1 Recalling from Eq. ------. ˜ dv T dv 1 1 ∇ × v = – -.96). Substituting Eq.ξ kij v i / j g k 2˜ 2 ˜ ˜ ˜ ˜ (5.+ v k Γ kj g i g j  2 ˜  ∂η j ˜ ˜ ˜ 1 (5. The axial vector associated with any tensor -T is defined by – 1 ξ :T .119) 1.unm.[ ( v 2 / 3 – v 3 / 2 )g 1 + ( v 3 / 1 – v 1 / 3 )g 2 + ( v 1 / 2 – v 2 / 1 )g 1 ] (5.edu/~rmbrann/gobag. dv ----.ξ :  ------.9 Curl of a vector The curl of a vector v is denoted ∇ × v and it is defined to equal the axial vector associ˜ ˜ ated with ∇v . (5.+ v k Γ kj g i × g j  2  ∂η j ˜ ˜ ˜ (5.ξ : [ v i / j g i g j ] = -.116) 1 1 ∇ × v = -.

= T •j / k g g j g k . where T ij / k ≡ --------. (5. In˜other words.unm. we have (5.rmbrann@me. Eq. ˜k dx ∂η ˜ ˜ ˜ ˜ For mixed tensor components.30). (5. where T ij / k ≡ ∂T .124) becomes 1 ∂Jm m -------. one can apply the direct notation definition of the gradient dT ⁄ dx to eventually prove that ˜ ˜ ˜ dT ij i j ˜-----. (5.121) i dT ∂T •j i i m i i m ˜.120) ˜ k dx ∂η ˜ ˜ ˜ ˜ or. (5.Jg ij [ g mj Γ ik + g mi Γ jk ] k 2 ∂η 1 i m j m = -.= -.= T ij / k g i g j g k .= -.+ T •j Γ mk – T •m Γ jk . where T •j / k ≡ --------.= g mj Γ ik + g mi Γ jk ∂η k Thus. (5. Eq.J [ δ m Γ ik + δ m Γ jk ] 2 (5..Jg ij -------. where we have used Eq. it follows that g ij / k = 0 and g ij / k = 0 .edu http://me.125) .123) Corollary Recall that the Jacobian J equals the volume of the parallelepiped formed by the three base vectors [ g 1. --------(5. ∂g ij ∂η k 2 ∂η k ∂η k (5. Taking the derivative of the Jacobian with respect to the coordinate η k and applying the chain rule gives ∂g ij ∂J ∂g ij 1 ∂J . the Jacobian J may be regarded as a function of the coordinates.unm. Knowing that the gradient of the identity tensor must be zero.-------.123) that the metric components have vanishing covariant derivatives.124) Recalling from Eq.121) tells us that ∂g ij m m -------. 2004 86 5. (2.-------. if the tensor is known in its covariant components.= T ij / k g i g j g k .+ T mj Γ mk + T im Γ mk . (3. it follows that J var˜ ˜ ies with position.– T mj Γ ik – T im Γ jk . Ricci’s lemma Recall from Eq.18) that the metric coefficients are components of the identity tensor. dT ∂T ij m m -----.= -------. g 3 ] . g 2.html DRAFT June 17.10 Gradient of a tensor Using methods similar to the preceding sections.edu/~rmbrann/gobag.122) -----˜ i dx ∂η k ˜ ˜ ˜ ˜ Note that there is a Christoffel term for each subscript on the tensor. Since the base vectors vary with position. The term is negative if the summed index is a subscript on T and positive if the summed index is a superscript on T.

html DRAFT June 17.rmbrann@me.unm. Thus.132) Now note that ∂g k ∂g i ∂ ( gi • gk ) ∂ ( g ik ) -------------. ˜˜˜ j˜ ∂η ∂η j ∂η j ∂η ˜ ˜ ˜ ˜ ˜ ˜ (5. (5. you have only the metric coefficients g ij .127) 5. in the sense discussed on page __.= -------j • g k + g i • -------.unm. (5. k ] .11 Christoffel symbols of the first kind Section 5. ˜∂η ˜ (5.128) This method was tractable because the space was Euclidean and we could therefore utilize the existence of an underlying rectangular Cartesian basis to determine the change in the base vectors with respect to the coordinates. using Eq.edu http://me.= -----------------------. this result also reveals that Γ ijk is symmetric in its first two indices: ˜ ˜ Γ ijk = Γ jik . ˜ - ˜ - ˜  ∂η ∂η ∂η ˜ ˜ ˜ (5.= Γ ij • g k + g i • Γ kj .J [ Γ mk + Γ mk ] 2 m = JΓ mk or 1 ∂J m .128) into Eq. (5.-------J ∂η k (5. (5.6 showed how to compute the Christoffel symbols of the second kind by directly applying the formula: ∂g i k Γ ij = g k • -------j . (5.129) gives ∂g i ∂g i ∂g i Γ ijk =  g m • -------j g mk =  -------j • g m g mk = -------j • g k .131) ˜ ˜ Recalling that Γ ij = Γ ji . However. which are related to the Christoffel symbols of the second kind by m Γ ijk = Γ ij g mk . Substituting Eq. and the simplest way to compute the Christoffel symbols of the second kind then begins by first computing the Christoffel symbols of the first kind Γ ijk .133) .Γ mk = -. 2004 87 1 m m = -.130) where the final term results from lowering the index on g m . presumably to emphasize that they are not components of any third-order-tensor.129) These Γ ijk are frequently denoted in the literature as [ ij.edu/~rmbrann/gobag.126) (5. for non-Euclidean spaces. ˜ Γ ijk = Γ ij • g k . (5.63).

(5.134) into (5. We have focused on the three-dimensional Euclidean space that we think we live it. (5. An example of a two-dimensional curvilinear space embedded in Euclidean space is the surface of a sphere.unm. (5.135) represents the easiest way to obtain the Christoffel symbols of the first kind when only the metric coefficients are known.rmbrann@me. the mathematics of reduced dimension spaces within ordinary physical space is needed to study plate and beam theory.136) 5. In ordinary engineering mechanics applications. that the middle subscript on both Γ ‘s is the same as that on the coordinate η j .135) validates the following expression that can be used to directly obtain the Christoffel symbols of the first kind given only the metric coefficients: 1 ∂g jk ∂g ki ∂g ij Γ ijk = -.129) to Eq.edu http://me. Once the Γ ijk are known.edu/~rmbrann/gobag. . then this document has focused on alternative coordinate systems that still define points in this space and therefore require three coordinates.125). This formula has easily remembered index structure: for Γ ijk .12 The fourth-order Riemann-Christoffel curvature tensor Recall that this document has limited its scope to curvilinear systems embedded in a Euclidean space of the same dimension. We have men- 1. (5. (5.--------. Only two coordinates are required to specify a point on a sphere and this two dimensional space is “Riemannian” (i. When the Euclidean space is three-dimensional.= Γ ijk + Γ kji . ∂η j (5.html DRAFT June 17.135) This formula (which can be verified by substituting Eq..134) into (5. j. Note.unm.129) to give n Γ ij = Γ ijk g kn .134) This relationship can be easily remembered by noting the structure of the indices. An alternative way to obtain this result is to simply apply Eq. non-Euclidean) because it is not possible for us to construct a rectangular coordinate grid on a sphere.+ --------. Direct substitution of Eq. the Christoffel symbols of the second kind are obtained by solving Eq. (5. Einstein and colleagues threw a wrench in our thinking by introducing the notion that space and time are curved.e. (5. for example. (5. k . the index symbols on each of the η coordinates are ordered i.131) gives 1 ∂g ik --------. In this modern (post-Einstein) era. 2004 88 Using Eq.– -------2 ∂η i ∂η j ∂η k . this notion of a Euclidean physical world is now recognized to be only an approximation (referred to as Newtonian space). We have now mentioned two situations (prosaic shell/beam theory and exciting relativity theory) where understanding some basics of non-Euclidean spaces is needed.

A similar situation was encountered with the metric coefficients g ij . How is this statement cast in mathematical terms? In other words. This would follow because the transformation relations are linear. ----------------. For the Riemann-Christoffel tensor to not be zero.138) (5. therefore.+ ----------------- + g mn ( Γ jkn Γ ilm – Γ jlm Γ ikn ) 2  ∂η j ∂η k ∂η i ∂η k ∂η j ∂η l ∂η i ∂η l (5. a measure of curvature of a Riemannian space.edu/~rmbrann/gobag. 2004 89 tioned that a Riemannian space is one that does not permit construction of a rectangular coordinate grid.137) that R ijkl depends on the Christoffel symbols.unm. it is Riemannian. This would be true even if you aren’t actually using a Cartesian system.rmbrann@me. The metrics g ij are components of the identity ˜ tensor.html DRAFT June 17. the product of the Riemann-Christoffel components times basis vectors is basis-independent: R = R ijkl g i g j g k g l ˜ ˜ ˜ ˜ ˜ ˜ (5. If this tensor turns out to be zero. Thus. if a space is capable of supporting a Cartesian system (i.– ----------------. Hence. possibly curvilinear. The answer is tied to a special fourth-order tensor called the Riemann-Christoffel tensor soon to be defined.. only the R 1212 component is independent. all Christoffel symbols are zero.unm. you would have to be working in a reduced dimension space [such as the surface of a sphere where the coordinates are ( θ.– ----------.e.140) . θ. then your space is Euclidean. Otherwise. The Riemann-Christoffel tensor is. For a Cartesian system. then the RiemannChristoffel tensor must be zero. the others being either zero or related to this component by R 1212 = – R 2112 = – R 1221 = R 2121 . ordinary 3D Newtonian space is Euclidean and therefore its Riemann-Christoffel tensor must be zero even if you happen to employ a different set of three curvilinear coordinates such as spherical coordinates ( r. φ ) ]. if the space is Euclidean). a linear transformation to a different.edu http://me.– ----------------. Note from Eq. what process is needed to decide if a space is Euclidean or Riemannian in the first place. Hence. Therefore the product of these˜ components times base vectors g ij g i g j is basis-inde˜ ˜ pendent (it equals the identity tensor).+ Γ ilp Γ jk – Γ ikp Γ jl ∂η k ∂η l or ∂ 2 g jl ∂ 2 g ik ∂ 2 g jk 1 ∂ 2 g il R ijkl = -. Because the Riemann-Christoffel tensor transforms like a tensor. For example. it is not a basis-intrinsic quantity despite the fact that it has been here defined in terms of basis-intrinsic quantities. It is major symmetric: R ijkl = – R jikl = – R ijlk = R klij (5.137) This tensor is skew-symmetric in the first two indices and in the last two indices. if there exists a Cartesian system (in which the Riemann-Christoffel tensor is zero). system would result again in the zero tensor. which were defined g ij = g i • g j . φ ) . (5. R ijkl = 0 .139) In a two-dimensional space. in a Cartesian system. Similarly. The Riemann-Christoffel tensor is defined ∂Γ jli ∂Γ jki p p R ijkl = ----------.

141) (5.1) ˜ ˜ ˜ Here. the basis.1) to serve as a mere “helper” tensor.– ----------. Embedded bases and objective rates We introduced the mapping tensor F in Eq. E 3 } is the same as the “laboratory” basis that we had originally used ˜ ˜ ˜ .unm. g 2. ˜ Namely. g 3 } exists in a Euclidean space.edu/~rmbrann/gobag. { E 1. giving R = [ Γ jl • Γ ik – Γ jk • Γ il + Γ il • Γ jk – Γ ik • Γ jl ]g i g j g k g l ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ (5.• -----------------l ˜ j ˜ ∂η i ∂η ∂η k ∂η ˜ ˜ (5. R = – [ Γ ik • Γ jl – Γ il • Γ jk + Γ jk • Γ il – Γ jl • Γ ik ]g i g j g k g l ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ (5.edu http://me. if the basis { g 1.= -------------------------˜ k ˜k ˜ ∂η ∂η j ∂η ∂η i ∂η j ∂η k ˜ (5. 2004 90 or R = ˜ ˜ or ∂ ( Γ jl • g i ) ∂ ( Γ jk • g i ) R = -----------------------.unm.143) where ∂ 2 gi ∂Γ ij ∂3x Γ ijk = --------. rearranging slightly.+ ( Γ il • g p ) ( Γ jk • g p ) – ( Γ ik • g p ) ( Γ jl • g p ) g i g j g k g ˜ k˜ ˜ l˜ ˜ ∂η ∂η ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ∂Γ jli ∂Γ jki p p ---------.html DRAFT June 17.+ Γ ilp Γ jk – Γ ikp Γ jl g i g j g k g l ∂η k ∂η l ˜ ˜ ˜ ˜ (5.142) or R = [ Γ jlk • g i + Γ jl • Γ ik – Γ jkl • g i – Γ jk • Γ il + ( Γ il • g p ) ( Γ jk • g p ) – ( Γ ik • g p ) ( Γ jl • g p ) ]g i g j g k g l ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ (5. (5. (2.143) may be canceled.146) or R = – [ g ikjl – g iljk + g jkil – g jlik ]g i g j g k g l ˜ ˜ ˜ ˜ ˜ ˜ (5.rmbrann@me.– ------------------------.= ----------------. note that the first and third terms in Eq. E 2. then we defined ˜ ˜ ˜ gi = F • Ei (6.148) 6.145) or.144) Noting that the indices on this tensor may be permuted in any manner.147) where ∂2x ∂2 x g ijkl ≡ Γ ij • Γ kl = ----------------.

edu/~rmbrann/gobag. For this special case. For this special case. which (recall) are related to the mapping tensor by g k = F –1 • E k (6. consider the contravariant base vectors. As before. E 3 } are the outward unit normals to the unit cube. E 2.1) is identical to the deformation gradient ten˜ sor from traditional continuum mechanics. then material deformation will ˜ ˜ ˜ generally move the cube to a parallelepiped whose faces now have outward unit normals parallel to { g 1.rmbrann@me. the term “covariant” is especially apropos because the covariant base vector g i varies coincidently with the underlying material particles. deform to a parallelepiped whose sides move with the material and the sides of this parallelepiped are given by the three convected base vectors { g 1. it is not really necessary for the initial coordinate system to be Cartesian itself. a special “convected” coordinate system is often adopted in which the coordinates η k of a point currently located a a position x at time t are identified to equal the initial Cartesian coordinates X k of the same ˜ material particle at time 0 . g 2. E 3 } will. G 2. if { E 1. G 3 } .4) ˜ ˜ ˜ Here F is here retaining its meaning as the physical deformation gradient tensor.html DRAFT June 17. then we will denote its associated set of base vectors by { G 1. the contravariant base vectors move somewhat contrary to the material motion. In particular. Namely. the unit cube is defined˜by three unit vectors { E 1.6) .unm. ˜ ˜ ˜ upon deformation. ˜ ˜ ˜ By contrast. When the initial coordinate system is permitted to be curvilinear. the motion of the normal to a plane moves somewhat con˜ ˜ ˜ trary to the motion of material fibers that were originally parallel to the plane’s normal. the coordinate grid is identical to the deformation of the initial coordinate grid when it is convected along with the material particles. g 3 } .2) ˜ ˜ ˜ These base vectors do not move coincidently with the material particles. g 3 } .5) ˜ ˜ ˜ ˜ ˜ ˜ G i = H • E i and G k = H – T • E k ˜ ˜ ˜ ˜ ˜ ˜ (6. we will presume the existence of tensors h and H such that ˜ ˜ k = h –T • E k g i = h • E i and g (6. That is. which ˜ necessitates introducing new symbols to denote the mapping tensors for the individual curvilinear bases. In continuum mechanics (especially the older literature). the mapping tensor F in Eq. Of course.unm. E 2. In general. (6. In this physical context. 2004 91 in Section 2.edu http://me.3) g k = F –T • G k (6. g 2. Instead. these covariant base vectors deform to ˜ ˜ ˜ new orientations given by gi = F • Gi ˜ ˜ ˜ The associated dual basis is given by (6.

(6.11) Strain measures in older literature look drastically different from this because they are typically expressed in terms of initial and deformed metric tensors.edu/~rmbrann/gobag.12) (6.14) (6.rmbrann@me. the Seth-Hill generalized strain [13] measure is defined in direct notation as 1 e = -. What is the connection? First note from Eq. (6.7) (6.11) from the left by G i and from the right by G j gives ˜ ˜ 1 G i • e Lagr • G j = -.[ I – ( F T • F ) – 1 ] 2 ˜ ˜ ˜ ˜ (6.unm.html DRAFT June 17. the Seth-Hill parameter k is selected according to whichever strain measure the analyst prefers.6) into (6. by definition. where y ≡ h T • h ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ G ij = E i • Y • E j and G ij = E i • Y – 1 • E j . where Y ≡ H T • H ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ Furthermore. the case of k=0 must be applied in the limit. G ij ≡ G i • G j = G i • I • G j ˜ ˜ ˜ ˜ ˜ Thus. The Lagrangian strain measure corresponds to choosing k=2 to give 1 e Lagr = -. k = –1 k=1 k=2 k= – 2 k=0 --> --> --> --> --> True/Swainger strain engineering/Cauchy strain Green/Lagrange strain Almansi logarithmic/Hencky strain Of course.10) Here. 2004 92 from which it follows that g ij = E i • y • E j and g ij = E i • y – 1 • E j .5) and (6. dotting Eq.3) that g ij = G i • F T • F • G j ˜ ˜ ˜ ˜ Furthermore.15) This result shows that the difference between deformed and initial covariant metrics (which .[ ( F T • F ) – I ] 2 ˜ ˜ ˜ ˜ The “Euler” strain measure corresponds to choosing k= – 2 to give 1 e Euler = -.8) F = h • H –1 (6.13) (6.[ g ij – G ij ] 2 ˜ ˜ ˜ (6.9) ˜ ˜ ˜ In continuum mechanics. (6. substituting Eqs.unm.3) implies that (6.edu http://me.[ ( F T • F ) k / 2 – I ] k ˜ ˜ ˜ ˜ (6. Namely.

18) Gi • s • Gj = gi • τ • gj (6. (6.unm. you can show that 1 G i • e Euler • G j = -.5).6) and (6.4). Converting an index equation to direct symbolic form is harder than the reverse (which is a key argument used in favor of direct symbolic formulations of governing equations). J = det ( F ) and σ is the Cauchy stress.9) as long as you can deduce which basis applies to the component formulas. Results like this are worth noting and recording in your personal file so that you can quickly convert older curvilinear constitutive model equations into modern direct notation form.[ G ij – g ij ] 2 ˜ ˜ ˜ (6.16) In general.html DRAFT June 17. using Eq.edu/~rmbrann/gobag.unm. The tensor τ is called the “Kirchhoff” stress.17) ˜ ˜ ˜ ˜ ˜ ˜ Here. ˜ ˜ ˜ and it is identically equal to the Cauchy stress for incompressible materials.edu http://me. (6. Consider. (6. where τ = J σ .19) ˜ ˜ ˜ ˜ ˜ ˜ This shows that the contravariant components of the second Piola Kirchhoff tensor with respect to the initial basis are equal to the contravariant components of the Kirchhoff tensor with respect to the current basis. (6. Similarly. for example. 2004 93 appears frequently in older literature) is identically equal to the covariant components of the Lagrangian strain tensor with respect to the initial basis. you can use Eqs. (6.rmbrann@me. if you wish to convert a component equation from the older literature into modern invariant direct notation form. . the definition of the second Piola Kirchhoff tensor: s ≡ F – 1 • τ • F – T . Dotting both sides of the above equation by the initial contravariant base vectors gives G i • s • G j = G i • F –1 • τ • F –T • G j ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ or.

. and hence it is necessary to be familiar with the use of base vectors and covariant and contravariant components in order to study the literature.edu http://me. The last step is to cast the final result back in direct notation.. In other words. cross product. Concluding remarks.rmbrann@me. From there.unm.edu/~rmbrann/gobag.. A structured result can then be justifiably expressed into any other system because all the operations used in the derivation had been proper operations. and the result will be the same as if the operation had been applied directly in the second system at the outset. proper tensor operations are performed within that system. Once an operation is known to be a proper tensor operation.html DRAFT June 17.” Modern researchers realize that operations such as the dot product.. one can apply the applicable formula for such an operation in any convenient system and transform the result to a second system. the result is re-cast in structured (direct) notation. one should always perform derivations using only proper tensor operations using whatever coordinate system makes the analysis accessible to the largest audience of readers. Such operations commute with basis and/or coordinate transformations as long as the computational procedure for evaluating them is defined in a manner appropriate for the underlying coordinates or basis. Bird et al.unm..some authors feel it is stylish to use general tensor notation even when it is not needed. it is endowed with a structured (direct) notation symbolic form. 2004 94 7. R. thereby allowing it to be recast in any other desired basis or coordinate system. and gradient are proper tensor operations.B. [1] astutely remark “. Consequently. In the final step. Derivations of new identities usually begin by casting a direct notation formula in a conveniently simple system such as a principal basis or maybe just ordinary rectangular Cartesian coordinates.

Principles of continuum mechanics.. Springer-Verlag. R.J. A Brief on Tensor Analysis. Applications of Tensor Analysis.. New Jersey. NY (1978) Rudin. (1957) Lovelock. New Jersey (1978). R. Foundations of applied mathematics. McGraw-Hill. O. (1998).C. and variational principles.. University of New Mexico internal report. Wiley. M. 2 3 4 5 6 7 8 9 10 11 12 13 . Schreyer. Prentice-Hall. not journal articles. Greenberg.. H. Malvern. Tables of gradient formulas for particular coordinate systems can be found in the appendix of Bird. Buck. Tensor Calculus and Analytical Dynamics.unm.rmbrann@me.B and Weber. Papastravridis. NY (1987). 1 Bird. Greenberg’s applied math book should be part of any engineer’s personal library. McGraw-Hill. Real and Complex Analysis. Wiley.L.edu http://me.D. L. Prentice-Hall. Academic Press (1995). D. NY (1993.. Introduction to Continuum Mechanics. 2004 95 8.. NY. Mysore N. (1987). NY (1994). (1985). McConnel. (1976) Arfken. NR.. Introduction to the mechanics of a continuous medium. Prentice Hall. Wiley-Interscience. David and Rund... and Amundson. This choice has been made to emphasize that the subject of curvilinear analysis has been around for centuries -.E. Any self-respecting student should follow up this manuscript’s elementary introduction with careful reading of the following references.unm.. differential forms.there exists no new literature for us to quote that has not already been quoted in one or more of the following texts.. NY (1975). Tensors. Hanno. Linear Operator Methods in Chemical Engineering. G. Hassager. So why.J. Readers should make an effort to read all footnotes and student exercises in Greenberg’s book — they’re both enlightening and entertaining! Simmond’s book is a good choice for continued pursuit of curvilinear coordinates. CRC Press. John G. Armstrong. Advanced Calculus. REFERENCES All of the following references are books. Simmonds. and Hassager’s book.. New Jersey (1969). Ramkrishna. W. James G.C. Dynamics of Polymeric Liquids. Dover. Armstrong. R.html DRAFT June 17.. you might ask. A..edu/~rmbrann/gobag. Narasimhan.. It contains many heuristically derived results. Mathematical Methods for Physicists. is this new manuscript called for? The answer is that the present manuscript is designed specifically for new students and researchers. H. 3rd Ed.B.

.Y. Marsden.rmbrann@me. J. Springer. 75 in series entitled: Applied Mathematical Sciences).unm. 14 . and Ratiu.E. R. (Vol.edu/~rmbrann/gobag. 2nd Ed. 2004 96 Abraham. N. T.edu http://me.unm.. Manifolds. and Applications. Tensor Analysis. (1988).html DRAFT June 17.

Sign up to vote on this title
UsefulNot useful