The Mechanics of Elastic Solids
Volume 1: A Brief Review of Some Mathematical
Preliminaries
Version 1.0
Rohan Abeyaratne
Quentin Berg Professor of Mechanics
Department of Mechanical Engineering
MIT
Copyright © Rohan Abeyaratne, 1987
All rights reserved.
http://web.mit.edu/abeyaratne/lecture_notes.html
December 2, 2006
Electronic Publication
Rohan Abeyaratne
Quentin Berg Professor of Mechanics
Department of Mechanical Engineering
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307, USA
Copyright © by Rohan Abeyaratne, 1987
All rights reserved
Abeyaratne, Rohan, 1952-
Lecture Notes on The Mechanics of Elastic Solids. Volume 1: A Brief Review of Some Mathematical Preliminaries / Rohan Abeyaratne - 1st Edition - Cambridge, MA:
ISBN-13: 9780979186509
ISBN-10: 0979186501
Please send corrections, suggestions and comments to abeyaratne.vol.1@gmail.com
Updated June 25, 2007
Dedicated with admiration and aﬀection
to Matt Murphy and the miracle of science,
for the gift of renaissance.
PREFACE
The Department of Mechanical Engineering at MIT offers a series of graduate-level subjects on the Mechanics of Solids and Structures, which include:
2.071: Mechanics of Solid Materials,
2.072: Mechanics of Continuous Media,
2.074: Solid Mechanics: Elasticity,
2.073: Solid Mechanics: Plasticity and Inelastic Deformation,
2.075: Advanced Mechanical Behavior of Materials,
2.080: Structural Mechanics,
2.094: Finite Element Analysis of Solids and Fluids,
2.095: Molecular Modeling and Simulation for Mechanics, and
2.099: Computational Mechanics of Materials.
Over the years, I have had the opportunity to regularly teach the second and third of these subjects, 2.072 and 2.074 (formerly known as 2.083), and the current three volumes comprise the lecture notes I developed for them. The first draft of these notes was produced in 1987, and they have been corrected, refined and expanded on every following occasion that I taught these classes. The material in the current presentation is still meant to be a set of lecture notes, not a textbook. It has been organized as follows:
Volume I: A Brief Review of Some Mathematical Preliminaries
Volume II: Continuum Mechanics
Volume III: Elasticity
My appreciation for mechanics was nucleated by Professors Douglas Amarasekara and
Munidasa Ranaweera of the (then) University of Ceylon, and was subsequently shaped and
grew substantially under the inﬂuence of Professors James K. Knowles and Eli Sternberg
of the California Institute of Technology. I have been most fortunate to have had the
opportunity to apprentice under these inspiring and distinctive scholars. I would especially
like to acknowledge a great many illuminating and stimulating interactions with my mentor,
colleague and friend Jim Knowles, whose inﬂuence on me cannot be overstated.
I am also indebted to the many MIT students who have given me enormous fulﬁllment
and joy to be part of their education.
My understanding of elasticity, as well as these notes, has also benefitted greatly from many useful conversations with Kaushik Bhattacharya, Janet Blume, Eliot Fried, Morton E. Gurtin, Richard D. James, Stelios Kyriakides, David M. Parks, Phoebus Rosakis, Stewart Silling and Nicolas Triantafyllidis, which I gratefully acknowledge.
Volume I of these notes provides a collection of essential definitions, results, and illustrative examples, designed to review those aspects of mathematics that will be encountered in the subsequent volumes. It is most certainly not meant to be a source for learning these topics for the first time. The treatment is concise, selective and limited in scope. For example, Linear Algebra is a far richer subject than the treatment here, which is limited to real 3-dimensional Euclidean vector spaces.
The topics covered in Volumes II and III are largely those one would expect to see covered in such a set of lecture notes. Personal taste has led me to include a few special (but still well-known) topics. Examples of this include sections on the statistical mechanical theory of polymer chains and the lattice theory of crystalline solids in the discussion of constitutive theory in Volume II; and sections on the so-called Eshelby problem and the effective behavior of two-phase materials in Volume III.
There are a number of Worked Examples at the end of each chapter, and they are an essential part of the notes. Many of these examples provide more details, or a proof, of a result quoted previously in the text; others illustrate a general concept; and still others establish a result that will be used subsequently (possibly in a later volume).
The content of these notes is entirely classical, in the best sense of the word, and none of the material here is original. I have drawn on a number of sources over the years as I prepared my lectures. I cannot recall every source I have used, but they certainly include those listed at the end of each chapter. In a more general sense, the broad approach and philosophy taken has been influenced by:
Volume I: A Brief Review of Some Mathematical Preliminaries

I.M. Gelfand and S.V. Fomin, Calculus of Variations, Prentice Hall, 1963.
J.K. Knowles, Linear Vector Spaces and Cartesian Tensors, Oxford University Press, New York, 1997.

Volume II: Continuum Mechanics

P. Chadwick, Continuum Mechanics: Concise Theory and Problems, Dover, 1999.
J.L. Ericksen, Introduction to the Thermodynamics of Solids, Chapman and Hall, 1991.
M.E. Gurtin, An Introduction to Continuum Mechanics, Academic Press, 1981.
J.K. Knowles and E. Sternberg, (Unpublished) Lecture Notes for AM136: Finite Elasticity, California Institute of Technology, Pasadena, CA, 1978.
C. Truesdell and W. Noll, The non-linear field theories of mechanics, in Handbuch der Physik, edited by S. Flügge, Volume III/3, Springer, 1965.

Volume III: Elasticity

M.E. Gurtin, The linear theory of elasticity, in Mechanics of Solids, Volume II, edited by C. Truesdell, Springer-Verlag, 1984.
J.K. Knowles, (Unpublished) Lecture Notes for AM135: Elasticity, California Institute of Technology, Pasadena, CA, 1976.
A.E.H. Love, A Treatise on the Mathematical Theory of Elasticity, Dover, 1944.
S.P. Timoshenko and J.N. Goodier, Theory of Elasticity, McGraw-Hill, 1987.
The following notation will be used consistently in Volume I: Greek letters will denote real numbers; lower-case boldface Latin letters will denote vectors; and upper-case boldface Latin letters will denote linear transformations. Thus, for example, α, β, γ, ... will denote scalars (real numbers); a, b, c, ... will denote vectors; and A, B, C, ... will denote linear transformations. In particular, "o" will denote the null vector while "0" will denote the null linear transformation. As much as possible this notation will also be used in Volumes II and III, though there will be some lapses (for reasons of tradition).
Contents

1 Matrix Algebra and Indicial Notation
  1.1 Matrix algebra
  1.2 Indicial notation
  1.3 Summation convention
  1.4 Kronecker delta
  1.5 The alternator or permutation symbol
  1.6 Worked Examples.

2 Vectors and Linear Transformations
  2.1 Vectors
    2.1.1 Euclidean point space
  2.2 Linear Transformations.
  2.3 Worked Examples.

3 Components of Tensors. Cartesian Tensors
  3.1 Components of a vector in a basis.
  3.2 Components of a linear transformation in a basis.
  3.3 Components in two bases.
  3.4 Determinant, trace, scalar-product and norm
  3.5 Cartesian Tensors
  3.6 Worked Examples.

4 Symmetry: Groups of Linear Transformations
  4.1 An example in two dimensions.
  4.2 An example in three dimensions.
  4.3 Lattices.
  4.4 Groups of Linear Transformations.
  4.5 Symmetry of a scalar-valued function
  4.6 Worked Examples.

5 Calculus of Vector and Tensor Fields
  5.1 Notation and definitions.
  5.2 Integral theorems
  5.3 Localization
  5.4 Worked Examples.

6 Orthogonal Curvilinear Coordinates
  6.1 Introductory Remarks
  6.2 General Orthogonal Curvilinear Coordinates
    6.2.1 Coordinate transformation. Inverse transformation.
    6.2.2 Metric coefficients, scale moduli.
    6.2.3 Inverse partial derivatives
    6.2.4 Components of ∂ê_i/∂x̂_j in the local basis (ê_1, ê_2, ê_3)
  6.3 Transformation of Basic Tensor Relations
    6.3.1 Gradient of a scalar field
    6.3.2 Gradient of a vector field
    6.3.3 Divergence of a vector field
    6.3.4 Laplacian of a scalar field
    6.3.5 Curl of a vector field
    6.3.6 Divergence of a symmetric 2-tensor field
    6.3.7 Differential elements of volume
    6.3.8 Differential elements of area
  6.4 Examples of Orthogonal Curvilinear Coordinates
  6.5 Worked Examples.

7 Calculus of Variations
  7.1 Introduction.
  7.2 Brief review of calculus.
  7.3 A necessary condition for an extremum
  7.4 Application of necessary condition δF = 0
    7.4.1 The basic problem. Euler equation.
    7.4.2 An example. The Brachistochrone Problem.
    7.4.3 A Formalism for Deriving the Euler Equation
  7.5 Generalizations.
    7.5.1 Generalization: Free end point; natural boundary conditions.
    7.5.2 Generalization: Higher derivatives.
    7.5.3 Generalization: Multiple functions.
    7.5.4 Generalization: End point of extremal lying on a curve.
  7.6 Constrained Minimization
    7.6.1 Integral constraints.
    7.6.2 Algebraic constraints
    7.6.3 Differential constraints
  7.7 Weierstrass-Erdmann corner conditions
    7.7.1 Piecewise smooth minimizer with non-smoothness occurring at a prescribed location.
    7.7.2 Piecewise smooth minimizer with non-smoothness occurring at an unknown location
  7.8 Generalization to higher dimensional space.
  7.9 Second variation
  7.10 Sufficient condition for convex functionals
  7.11 Direct method
    7.11.1 The Ritz method
  7.12 Worked Examples.
Chapter 1
Matrix Algebra and Indicial Notation
Notation:
{a} ..... m×1 matrix, i.e. a column matrix with m rows and one column
a_i ..... element in the i-th row of the column matrix {a}
[A] ..... m×n matrix
A_ij ..... element in the i-th row, j-th column of the matrix [A]
1.1 Matrix algebra
Even though more general matrices can be considered, for our purposes it is suﬃcient to
consider a matrix to be a rectangular array of real numbers that obeys certain rules of
addition and multiplication. An m×n matrix [A] has m rows and n columns:

        | A_11  A_12  ...  A_1n |
  [A] = | A_21  A_22  ...  A_2n |                                    (1.1)
        | ...   ...   ...  ...  |
        | A_m1  A_m2  ...  A_mn |

A_ij denotes the element located in the i-th row and j-th column. The column matrix

        | x_1 |
  {x} = | x_2 |                                                      (1.2)
        | ... |
        | x_m |
has m rows and one column; the row matrix

  {y} = {y_1, y_2, ..., y_n}                                         (1.3)

has one row and n columns. If all the elements of a matrix are zero it is said to be a null matrix and is denoted by [0] or {0} as the case may be.

Two m×n matrices [A] and [B] are said to be equal if and only if all of their corresponding elements are equal:

  A_ij = B_ij,   i = 1, 2, ..., m,  j = 1, 2, ..., n.                (1.4)
If [A] and [B] are both m×n matrices, their sum is the m×n matrix [C] = [A] + [B] whose elements are

  C_ij = A_ij + B_ij,   i = 1, 2, ..., m,  j = 1, 2, ..., n.         (1.5)

If [A] is a p×q matrix and [B] is a q×r matrix, their product is the p×r matrix [C] with elements

  C_ij = Σ_{k=1}^{q} A_ik B_kj,   i = 1, 2, ..., p,  j = 1, 2, ..., r;   (1.6)

one writes [C] = [A][B]. In general [A][B] ≠ [B][A]; therefore, rather than referring to [A][B] as the product of [A] and [B], we should more precisely refer to [A][B] as [A] post-multiplied by [B], or [B] pre-multiplied by [A]. It is worth noting that if two matrices [A] and [B] obey the equation [A][B] = [0], this does not necessarily mean that either [A] or [B] must be the null matrix [0]. Similarly, if three matrices [A], [B] and [C] obey [A][B] = [A][C], this does not necessarily mean that [B] = [C] (even if [A] ≠ [0]). The product of an m×n matrix [A] by a scalar α is the m×n matrix [B] with components

  B_ij = αA_ij,   i = 1, 2, ..., m,  j = 1, 2, ..., n;               (1.7)

one writes [B] = α[A].
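These facts are easy to confirm numerically. The short sketch below (Python with NumPy; the code and the particular matrices are illustrative additions, not part of the original notes) exhibits a pair of matrices that do not commute, and a pair of non-null matrices whose product is the null matrix.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    # [A][B] and [B][A] are generally different matrices.
    print(np.allclose(A @ B, B @ A))   # False

    # [A][B] = [0] does not force [A] or [B] to be null:
    C = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
    D = np.array([[0.0, 0.0],
                  [0.0, 1.0]])
    print(C @ D)                       # the 2x2 null matrix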
Note that an m_1×n_1 matrix [A_1] can be post-multiplied by an m_2×n_2 matrix [A_2] if and only if n_1 = m_2. In particular, consider an m×n matrix [A] and an n×1 (column) matrix {x}. Then we can post-multiply [A] by {x} to get the m×1 column matrix [A]{x}; but we cannot pre-multiply [A] by {x} (unless m = 1), i.e. {x}[A] does not exist in general.
The transpose of the m×n matrix [A] is the n×m matrix [B] where

  B_ij = A_ji   for each i = 1, 2, ..., n, and j = 1, 2, ..., m.     (1.8)

Usually one denotes the matrix [B] by [A]^T. One can verify that

  [A + B]^T = [A]^T + [B]^T,   [AB]^T = [B]^T [A]^T.                 (1.9)
The transpose of a column matrix is a row matrix, and vice versa. Suppose that [A] is an m×n matrix and that {x} is an m×1 (column) matrix. Then we can pre-multiply [A] by {x}^T, i.e. {x}^T[A] exists (and is a 1×n row matrix). For any n×1 column matrix {x} note that

  {x}^T{x} = x_1^2 + x_2^2 + ... + x_n^2 = Σ_{i=1}^{n} x_i^2.        (1.10)
An n×n matrix [A] is called a square matrix; the diagonal elements of this matrix are the A_ii's. A square matrix [A] is said to be symmetric if

  A_ij = A_ji   for each i, j = 1, 2, ..., n;                        (1.11)

and skew-symmetric if

  A_ij = −A_ji   for each i, j = 1, 2, ..., n.                       (1.12)

Thus for a symmetric matrix [A] we have [A]^T = [A]; for a skew-symmetric matrix [A] we have [A]^T = −[A]. Observe that each diagonal element of a skew-symmetric matrix must be zero.
If the off-diagonal elements of a square matrix are all zero, i.e. A_ij = 0 for each i, j = 1, 2, ..., n with i ≠ j, the matrix is said to be diagonal. If every diagonal element of a diagonal matrix is 1, the matrix is called a unit matrix and is usually denoted by [I].
Suppose that [A] is an n×n square matrix and that {x} is an n×1 (column) matrix. Then we can post-multiply [A] by {x} to get an n×1 column matrix [A]{x}, and pre-multiply the resulting matrix by {x}^T to get a 1×1 matrix, effectively just a scalar, {x}^T[A]{x}. Note that

  {x}^T[A]{x} = Σ_{i=1}^{n} Σ_{j=1}^{n} A_ij x_i x_j.                (1.13)
This is referred to as the quadratic form associated with [A]. In the special case of a diagonal matrix [A],

  {x}^T[A]{x} = A_11 x_1^2 + A_22 x_2^2 + ... + A_nn x_n^2.          (1.14)
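As a quick sanity check, the double sum in (1.13) can be evaluated directly and compared against the matrix product; the following sketch (NumPy, illustrative only) does this for a random 4×4 matrix.

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    x = rng.standard_normal(n)

    q_matrix = x @ A @ x   # {x}^T [A] {x} as a matrix product
    q_sums = sum(A[i, j] * x[i] * x[j]
                 for i in range(n) for j in range(n))   # the double sum (1.13)
    print(np.isclose(q_matrix, q_sums))   # True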
The trace of a square matrix is the sum of the diagonal elements of that matrix and is denoted by trace[A]:

  trace[A] = Σ_{i=1}^{n} A_ii.                                       (1.15)

One can show that

  trace([A][B]) = trace([B][A]).                                     (1.16)
Let det[A] denote the determinant of a square matrix. For a 2×2 matrix,

  det | A_11  A_12 |  =  A_11 A_22 − A_12 A_21,                      (1.17)
      | A_21  A_22 |

and for a 3×3 matrix,

  det | A_11  A_12  A_13 |
      | A_21  A_22  A_23 |  =  A_11 det | A_22  A_23 |  −  A_12 det | A_21  A_23 |  +  A_13 det | A_21  A_22 |.   (1.18)
      | A_31  A_32  A_33 |              | A_32  A_33 |              | A_31  A_33 |              | A_31  A_32 |

The determinant of an n×n matrix is defined recursively in a similar manner. One can show that

  det([A][B]) = (det[A])(det[B]).                                    (1.19)
Note that trace[A] and det[A] are both scalar-valued functions of the matrix [A].
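Both identities (1.16) and (1.19) are easily spot-checked numerically; a minimal sketch (NumPy, illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    # (1.16): trace([A][B]) = trace([B][A])
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))       # True

    # (1.19): det([A][B]) = det[A] det[B]
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))    # True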
Consider a square matrix [A]. For each i = 1, 2, ..., n, a row matrix {a}_i can be created by assembling the elements in the i-th row of [A]: {a}_i = {A_i1, A_i2, A_i3, ..., A_in}. If the only scalars α_i for which

  α_1{a}_1 + α_2{a}_2 + α_3{a}_3 + ... + α_n{a}_n = {0}              (1.20)

are α_1 = α_2 = ... = α_n = 0, the rows of [A] are said to be linearly independent. If at least one of the α's is non-zero, they are said to be linearly dependent, and then at least one row of [A] can be expressed as a linear combination of the other rows.
Consider a square matrix [A] and suppose that its rows are linearly independent. Then the matrix is said to be non-singular and there exists a matrix [B], usually denoted by [B] = [A]^{-1} and called the inverse of [A], for which [B][A] = [A][B] = [I]. For [A] to be non-singular it is necessary and sufficient that det[A] ≠ 0. If the rows of [A] are linearly dependent, the matrix is singular and an inverse matrix does not exist.
Consider an n×n square matrix [A]. First consider the (n−1)×(n−1) matrix obtained by eliminating the i-th row and j-th column of [A]; then consider the determinant of that matrix; and finally consider the product of that determinant with (−1)^{i+j}. The number thus obtained is called the cofactor of A_ij. If [B] is the inverse of [A], [B] = [A]^{-1}, then

  B_ij = (cofactor of A_ji) / det[A].                                (1.21)
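The cofactor formula (1.21) translates directly into code. The sketch below (NumPy; the function name and test matrix are illustrative additions) builds the inverse element by element from cofactors and compares it with a library inverse.

    import numpy as np

    def inverse_by_cofactors(A):
        # B_ij = (cofactor of A_ji) / det[A], equation (1.21)
        n = A.shape[0]
        B = np.empty_like(A, dtype=float)
        detA = np.linalg.det(A)
        for i in range(n):
            for j in range(n):
                # cofactor of A_ji: delete row j and column i, take the
                # determinant of what remains, attach the sign (-1)^(i+j)
                minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
                B[i, j] = (-1) ** (i + j) * np.linalg.det(minor) / detA
        return B

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(np.allclose(inverse_by_cofactors(A), np.linalg.inv(A)))   # True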
If the transpose and inverse of a matrix coincide, i.e. if

  [A]^{-1} = [A]^T,                                                  (1.22)

then the matrix is said to be orthogonal. Note that for an orthogonal matrix [A], one has [A][A]^T = [A]^T[A] = [I] and det[A] = ±1.
1.2 Indicial notation
Consider an n×n square matrix [A] and two n×1 column matrices {x} and {b}. Let A_ij denote the element of [A] in its i-th row and j-th column, and let x_i and b_i denote the elements in the i-th row of {x} and {b} respectively. Now consider the matrix equation [A]{x} = {b}:

  | A_11  A_12  ...  A_1n |  | x_1 |     | b_1 |
  | A_21  A_22  ...  A_2n |  | x_2 |  =  | b_2 |                     (1.23)
  | ...   ...   ...  ...  |  | ... |     | ... |
  | A_n1  A_n2  ...  A_nn |  | x_n |     | b_n |
Carrying out the matrix multiplication, this is equivalent to the system of linear algebraic equations

  A_11 x_1 + A_12 x_2 + ... + A_1n x_n = b_1,
  A_21 x_1 + A_22 x_2 + ... + A_2n x_n = b_2,
     ...                                                             (1.24)
  A_n1 x_1 + A_n2 x_2 + ... + A_nn x_n = b_n.
This system of equations can be written more compactly as

  A_i1 x_1 + A_i2 x_2 + ... + A_in x_n = b_i,  with i taking each value in the range 1, 2, ..., n;   (1.25)

or even more compactly, by omitting the statement "with i taking each value in the range 1, 2, ..., n" and simply writing

  A_i1 x_1 + A_i2 x_2 + ... + A_in x_n = b_i,                        (1.26)

with the understanding that (1.26) holds for each value of the subscript i in the range i = 1, 2, ..., n. This understanding is referred to as the range convention. The subscript i is called a free subscript because it is free to take on each value in its range. From here on, we shall always use the range convention unless explicitly stated otherwise.
Observe that

  A_j1 x_1 + A_j2 x_2 + ... + A_jn x_n = b_j                         (1.27)

is identical to (1.26); this is because j is a free subscript in (1.27) and so (1.27) is required to hold "for all j = 1, 2, ..., n", and this leads back to (1.24). This illustrates the fact that the particular choice of index for the free subscript in an equation is not important, provided that the same free subscript appears in every symbol grouping.¹

¹ By a "symbol group" we mean a set of terms contained between +, − and = signs.
As a second example, suppose that f(x_1, x_2, ..., x_n) is a function of x_1, x_2, ..., x_n. Then, if we write the equation

  ∂f/∂x_k = 3x_k,                                                    (1.28)

the index k in it is a free subscript and so takes all values in the range 1, 2, ..., n. Thus (1.28) is a compact way of writing the n equations

  ∂f/∂x_1 = 3x_1,   ∂f/∂x_2 = 3x_2,   ...,   ∂f/∂x_n = 3x_n.         (1.29)
As a third example, the equation

  A_pq = x_p x_q                                                     (1.30)

has two free subscripts p and q, and each, independently, takes all values in the range 1, 2, ..., n. Therefore (1.30) corresponds to the n² equations

  A_11 = x_1 x_1,   A_12 = x_1 x_2,   ...,   A_1n = x_1 x_n,
  A_21 = x_2 x_1,   A_22 = x_2 x_2,   ...,   A_2n = x_2 x_n,
     ...                                                             (1.31)
  A_n1 = x_n x_1,   A_n2 = x_n x_2,   ...,   A_nn = x_n x_n.

In general, if an equation involves N free indices, then it represents n^N scalar equations.
In order to be consistent it is important that the same free subscript(s) must appear once, and only once, in every group of symbols in an equation. For example, in equation (1.26), since the index i appears once in the symbol group A_i1 x_1, it must necessarily appear once in each of the remaining symbol groups A_i2 x_2, A_i3 x_3, ..., A_in x_n and b_i of that equation. Similarly, since the free subscripts p and q appear in the symbol group on the left-hand side of equation (1.30), they must also appear in the symbol group on the right-hand side. An equation of the form A_pq = x_i x_j would violate this consistency requirement, as would A_i1 x_i + A_j2 x_2 = 0.
Note finally that had we adopted the range convention in Section 1.1, we would have omitted the various "i = 1, 2, ..., n" statements there and written, for example, equation (1.4) for the equality of two matrices as simply A_ij = B_ij; equation (1.5) for the sum of two matrices as simply C_ij = A_ij + B_ij; equation (1.7) for the scalar multiple of a matrix as B_ij = αA_ij; equation (1.8) for the transpose of a matrix as simply B_ij = A_ji; equation (1.11) defining a symmetric matrix as simply A_ij = A_ji; and equation (1.12) defining a skew-symmetric matrix as simply A_ij = −A_ji.
1.3 Summation convention
Next, observe that (1.26) can be written as

  Σ_{j=1}^{n} A_ij x_j = b_i.                                        (1.32)

We can simplify the notation even further by agreeing to drop the summation sign and instead imposing the rule that summation is implied over a subscript that appears twice in a symbol grouping. With this understanding in force, we would write (1.32) as

  A_ij x_j = b_i                                                     (1.33)

with summation on the subscript j being implied. A subscript that appears twice in a symbol grouping is called a repeated or dummy subscript; the subscript j in (1.33) is a dummy subscript.
Note that

  A_ik x_k = b_i                                                     (1.34)

is identical to (1.33); this is because k is a dummy subscript in (1.34) and therefore summation on k is implied in (1.34). Thus the particular choice of index for the dummy subscript is not important.

In order to avoid ambiguity, no subscript is allowed to appear more than twice in any symbol grouping. Thus we shall never write, for example, A_ii x_i = b_i since, if we did, the index i would appear three times in the first symbol group.
Summary of Rules:
1. Lower-case Latin subscripts take on values in the range 1, 2, ..., n.
2. A given index may appear either once or twice in a symbol grouping. If it appears
once, it is called a free index and it takes on each value in its range. If it appears twice,
it is called a dummy index and summation is implied over it.
3. The same index may not appear more than twice in the same symbol grouping.
4. All symbol groupings in an equation must have the same free subscripts.
Free and dummy indices may be changed without altering the meaning of an expression, provided that one does not violate the preceding rules. Thus, for example, we can change the free subscript p in every term of the equation

  A_pq x_q = b_p                                                     (1.35)

to any other index, say k, and equivalently write

  A_kq x_q = b_k.                                                    (1.36)

We can also change the repeated subscript q to some other index, say s, and write

  A_ks x_s = b_k.                                                    (1.37)

The three preceding equations are identical.
It is important to emphasize that each of the equations in, for example, (1.24) involves scalar quantities, and therefore the order in which the terms appear within a symbol group is irrelevant. Thus, for example, (1.24)_1 is equivalent to x_1 A_11 + x_2 A_12 + ... + x_n A_1n = b_1. Likewise we can write (1.33) equivalently as x_j A_ij = b_i. Note that both A_ij x_j = b_i and x_j A_ij = b_i represent the matrix equation [A]{x} = {b}; the second equation does not correspond to {x}[A] = {b}. In an indicial equation it is the location of the subscripts that is crucial; in particular, it is the location where the repeated subscript appears that tells us whether {x} multiplies [A] or [A] multiplies {x}.
Note finally that had we adopted the range and summation conventions in Section 1.1, we would have written equation (1.6) for the product of two matrices as C_ij = A_ik B_kj; equation (1.10) for the product of a column matrix by its transpose as {x}^T{x} = x_i x_i; equation (1.13) for the quadratic form as {x}^T[A]{x} = A_ij x_i x_j; and equation (1.15) for the trace as trace[A] = A_ii.
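The range and summation conventions map directly onto NumPy's einsum, whose subscript strings mimic indicial notation; the sketch below (illustrative only) is one way to see the correspondence.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    x = rng.standard_normal(3)

    # A_ij x_j = b_i : j is summed (dummy), i survives (free)
    b = np.einsum('ij,j->i', A, x)
    print(np.allclose(b, A @ x))                          # True

    # x_j A_ij = b_i is the same equation; factor order is irrelevant
    print(np.allclose(np.einsum('j,ij->i', x, A), b))     # True

    # trace[A] = A_ii : both indices repeated, no free index, one scalar
    print(np.isclose(np.einsum('ii->', A), np.trace(A)))  # True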
1.4 Kronecker delta
The Kronecker delta, δ_ij, is defined by

  δ_ij =  1 if i = j,
          0 if i ≠ j.                                                (1.38)

Note that it represents the elements of the identity matrix. If [Q] is an orthogonal matrix, then we know that [Q][Q]^T = [Q]^T[Q] = [I]. This implies, in indicial notation, that

  Q_ik Q_jk = Q_ki Q_kj = δ_ij.                                      (1.39)
The following useful property of the Kronecker delta is sometimes called the substitution rule. Consider, for example, any column matrix {u} and suppose that one wishes to simplify the expression u_i δ_ij. Recall that u_i δ_ij = u_1 δ_1j + u_2 δ_2j + ... + u_n δ_nj. Since δ_ij is zero unless i = j, it follows that all terms on the right-hand side vanish trivially except for the one term for which i = j. Thus the term that survives on the right-hand side is u_j, and so

  u_i δ_ij = u_j.                                                    (1.40)

Thus we have used the facts that (i) since δ_ij is zero unless i = j, the expression being simplified has a non-zero value only if i = j; and (ii) when i = j, δ_ij is unity. Thus replacing the Kronecker delta by unity, and changing the repeated subscript i → j, gives u_i δ_ij = u_j. Similarly, suppose that [A] is a square matrix and one wishes to simplify A_jk δ_jℓ. Then by the same reasoning, we replace the Kronecker delta by unity and change the repeated subscript j → ℓ to obtain²

  A_jk δ_jℓ = A_ℓk.                                                  (1.41)

More generally, if δ_ip multiplies a quantity C_ijkℓ representing n^4 numbers, one replaces the Kronecker delta by unity and changes the repeated subscript i → p to obtain

  C_ijkℓ δ_ip = C_pjkℓ.                                              (1.42)

The substitution rule applies even more generally: for any quantity or expression T_ipq...z, one simply replaces the Kronecker delta by unity and changes the repeated subscript i → j to obtain

  T_ipq...z δ_ij = T_jpq...z.                                        (1.43)
² Observe that these results are immediately apparent by using matrix algebra. In the first example, note that δ_ji u_i (which is equal to the quantity δ_ij u_i that is given) is simply the j-th element of the column matrix [I]{u}. Since [I]{u} = {u} the result follows at once. Similarly, in the second example, δ_jℓ A_jk is simply the ℓ,k-element of the matrix [I][A]. Since [I][A] = [A], the result follows.
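Since δ_ij is just the identity matrix in disguise, the substitution rule can be watched in action numerically; a minimal sketch (NumPy, illustrative only):

    import numpy as np

    n = 4
    rng = np.random.default_rng(3)
    u = rng.standard_normal(n)
    A = rng.standard_normal((n, n))
    delta = np.eye(n)   # the Kronecker delta as the n x n identity matrix

    # (1.40): u_i delta_ij = u_j
    print(np.allclose(np.einsum('i,ij->j', u, delta), u))     # True

    # (1.41): A_jk delta_jl = A_lk
    print(np.allclose(np.einsum('jk,jl->lk', A, delta), A))   # True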
1.5 The alternator or permutation symbol
We now limit attention to subscripts that range over 1, 2, 3 only. The alternator or permutation symbol is defined by

  e_ijk =  0 if two or more of the subscripts i, j, k are equal,
          +1 if the subscripts i, j, k are in cyclic order, i.e. (i, j, k) = (1, 2, 3), (2, 3, 1) or (3, 1, 2),   (1.44)
          −1 if the subscripts i, j, k are in anti-cyclic order, i.e. (i, j, k) = (1, 3, 2), (2, 1, 3) or (3, 2, 1).
Observe from its definition that the sign of e_ijk changes whenever any two adjacent subscripts are switched:

  e_ijk = −e_jik = e_jki.                                            (1.45)
One can show by direct calculation that the determinant of a 3×3 matrix [A] can be written in either of two forms,

  det[A] = e_ijk A_1i A_2j A_3k   or   det[A] = e_ijk A_i1 A_j2 A_k3;   (1.46)

as well as in the form

  det[A] = (1/6) e_ijk e_pqr A_ip A_jq A_kr.                         (1.47)
Another useful identity involving the determinant is

  e_pqr det[A] = e_ijk A_ip A_jq A_kr.                               (1.48)

The following relation involving the alternator and the Kronecker delta will be useful in subsequent calculations:

  e_ijk e_pqk = δ_ip δ_jq − δ_iq δ_jp.                               (1.49)

It is left to the reader to develop proofs of these identities. They can, of course, be verified directly by simply writing out all of the terms in (1.46)-(1.49).
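One direct verification, as suggested, is to tabulate e_ijk and test (1.46) and (1.49) numerically; the sketch below (NumPy, with 0-based indices standing in for 1, 2, 3) is illustrative only.

    import numpy as np

    # Tabulate the alternator e_ijk.
    e = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        e[i, j, k] = 1.0     # cyclic orders
    for i, j, k in [(0, 2, 1), (1, 0, 2), (2, 1, 0)]:
        e[i, j, k] = -1.0    # anti-cyclic orders

    rng = np.random.default_rng(4)
    A = rng.standard_normal((3, 3))

    # (1.46): det[A] = e_ijk A_1i A_2j A_3k
    d = np.einsum('ijk,i,j,k->', e, A[0], A[1], A[2])
    print(np.isclose(d, np.linalg.det(A)))    # True

    # (1.49): e_ijk e_pqk = delta_ip delta_jq - delta_iq delta_jp
    lhs = np.einsum('ijk,pqk->ijpq', e, e)
    I3 = np.eye(3)
    rhs = np.einsum('ip,jq->ijpq', I3, I3) - np.einsum('iq,jp->ijpq', I3, I3)
    print(np.allclose(lhs, rhs))              # True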
1.6 Worked Examples.
Example (1.1): If [A] and [B] are n×n square matrices and {x}, {y}, {z} are n×1 column matrices, express the matrix equation

  {y} = [A]{x} + [B]{z}

as a set of scalar equations.

Solution: By the rules of matrix multiplication, the element y_i in the i-th row of {y} is obtained by first pairwise multiplying the elements A_i1, A_i2, ..., A_in of the i-th row of [A] by the respective elements x_1, x_2, ..., x_n of {x} and summing; then doing the same for the elements of [B] and {z}; and finally adding the two together. Thus

  y_i = A_ij x_j + B_ij z_j,

where summation over the dummy index j is implied, and this equation holds for each value of the free index i = 1, 2, ..., n. Note that one can alternatively, and equivalently, write the above equation in any of the following forms:

  y_k = A_kj x_j + B_kj z_j,   y_k = A_kp x_p + B_kp z_p,   y_i = A_ip x_p + B_iq z_q.

Observe that all rules for indicial notation are satisfied by each of the three equations above.
Example (1.2): The n×n matrices [C], [D] and [E] are defined in terms of the two n×n matrices [A] and [B] by

  [C] = [A][B],   [D] = [B][A],   [E] = [A][B]^T.

Express the elements of [C], [D] and [E] in terms of the elements of [A] and [B].

Solution: By the rules of matrix multiplication, the element C_ij in the i-th row and j-th column of [C] is obtained by multiplying the elements of the i-th row of [A], pairwise, by the respective elements of the j-th column of [B] and summing. So C_ij is obtained by multiplying the elements A_i1, A_i2, ..., A_in by, respectively, B_1j, B_2j, ..., B_nj and summing. Thus

  C_ij = A_ik B_kj;

note that i and j are both free indices here, so this represents n² scalar equations; moreover, summation is carried out over the repeated index k. It follows likewise that the equation [D] = [B][A] leads to

  D_ij = B_ik A_kj;   or equivalently   D_ij = A_kj B_ik,

where the second expression was obtained by simply changing the order in which the terms appear in the first expression (since, as noted previously, the order of terms within a symbol group is insignificant because these are scalar quantities). In order to calculate E_ij, we first multiply [A] by [B]^T to obtain E_ij = A_ik B^T_kj. However, by the definition of transposition, the i,j-element of a matrix [B]^T equals the j,i-element of the matrix [B], B^T_ij = B_ji, and so we can write

  E_ij = A_ik B_jk.

All four expressions here involve the ik, kj or jk elements of [A] and [B]. The precise locations of the subscripts vary, and the meanings of the terms depend crucially on these locations. It is worth repeating that the location of the repeated subscript k tells us what term multiplies what term.
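In einsum notation the three products are distinguished purely by where the repeated subscript sits, just as in the indicial expressions above; a brief illustrative sketch:

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    C = np.einsum('ik,kj->ij', A, B)   # C_ij = A_ik B_kj, i.e. [A][B]
    D = np.einsum('ik,kj->ij', B, A)   # D_ij = B_ik A_kj, i.e. [B][A]
    E = np.einsum('ik,jk->ij', A, B)   # E_ij = A_ik B_jk, i.e. [A][B]^T

    print(np.allclose(C, A @ B),
          np.allclose(D, B @ A),
          np.allclose(E, A @ B.T))     # True True True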
Example (1.3): If [S] is any symmetric matrix and [W] is any skew-symmetric matrix, show that

  S_ij W_ij = 0.

Solution: Note that both i and j are dummy subscripts here; therefore there are summations over each of them. Also, there is no free subscript, so this is just a single scalar equation.

Whenever there is a dummy subscript, the choice of the particular index for that dummy subscript is arbitrary, and we can change it to another index, provided that we change both repeated subscripts to the new symbol (and as long as we do not have any subscript appearing more than twice). Thus, for example, since i is a dummy subscript in S_ij W_ij, we can change i → p and get S_ij W_ij = S_pj W_pj. Note that we can change i to any other index except j; if we did change it to j, then there would be four j's and that violates one of our rules.

By changing the dummy indices i → p and j → q, we get S_ij W_ij = S_pq W_pq. We can now change dummy indices again, from p → j and q → i, which gives S_pq W_pq = S_ji W_ji. On combining these we get

  S_ij W_ij = S_ji W_ji.

Effectively, we have changed both i and j simultaneously from i → j and j → i.

Next, since [S] is symmetric, S_ji = S_ij; and since [W] is skew-symmetric, W_ji = −W_ij. Therefore S_ji W_ji = −S_ij W_ij. Using this on the right-hand side of the preceding equation gives

  S_ij W_ij = −S_ij W_ij,

from which it follows that S_ij W_ij = 0.

Remark: As a special case, take S_ij = u_i u_j where {u} is an arbitrary column matrix; note that this [S] is symmetric. It follows that for any skew-symmetric [W],

  W_ij u_i u_j = 0   for all u_i.
Example (1.4): Show that any matrix [A] can be additively decomposed into the sum of a symmetric matrix and a skew-symmetric matrix.

Solution: Define matrices [S] and [W] in terms of the given matrix [A] as follows:

  S_ij = (1/2)(A_ij + A_ji),   W_ij = (1/2)(A_ij − A_ji).

It may be readily verified from these definitions that S_ij = S_ji and that W_ij = −W_ji. Thus, the matrix [S] is symmetric and [W] is skew-symmetric. Adding the two equations above gives

  S_ij + W_ij = A_ij,

or in matrix form, [A] = [S] + [W].
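Both this decomposition and the result of Example 1.3 are easy to check numerically; a minimal sketch (NumPy, illustrative only):

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((3, 3))

    S = 0.5 * (A + A.T)   # symmetric part
    W = 0.5 * (A - A.T)   # skew-symmetric part

    print(np.allclose(S, S.T), np.allclose(W, -W.T))     # True True
    print(np.allclose(S + W, A))                         # True

    # Example 1.3: S_ij W_ij = 0 for symmetric [S], skew-symmetric [W]
    print(np.isclose(np.einsum('ij,ij->', S, W), 0.0))   # True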
Example (1.5): Show that the quadratic form T_ij u_i u_j is unchanged if T_ij is replaced by its symmetric part, i.e. show that for any matrix [T],

  T_ij u_i u_j = S_ij u_i u_j   for all u_i,   where S_ij = (1/2)(T_ij + T_ji).   (i)

Solution: The result follows from the following calculation:

  T_ij u_i u_j = [ (1/2)T_ij + (1/2)T_ij + (1/2)T_ji − (1/2)T_ji ] u_i u_j
               = (1/2)(T_ij + T_ji) u_i u_j + (1/2)(T_ij − T_ji) u_i u_j = S_ij u_i u_j,

where in the last step we have used the facts that A_ij = T_ij − T_ji is skew-symmetric, that B_ij = u_i u_j is symmetric, and that A_ij B_ij = 0 for any skew-symmetric matrix [A] and any symmetric matrix [B] (Example 1.3).
Example (1.6): Suppose that D_1111, D_1112, ..., D_111n, ..., D_1121, D_1122, ..., D_112n, ..., D_nnnn are n^4 constants; and let D_ijkℓ denote a generic element of this set, where each of the subscripts i, j, k, ℓ takes all values in the range 1, 2, ..., n. Let [E] be an arbitrary symmetric matrix and define the elements of a matrix [A] by A_ij = D_ijkℓ E_kℓ. Show that [A] is unchanged if D_ijkℓ is replaced by its "symmetric part" C_ijkℓ, where

  C_ijkℓ = (1/2)(D_ijkℓ + D_ijℓk).   (i)

Solution: In a manner entirely analogous to the previous example,

  A_ij = D_ijkℓ E_kℓ = [ (1/2)D_ijkℓ + (1/2)D_ijkℓ + (1/2)D_ijℓk − (1/2)D_ijℓk ] E_kℓ
       = (1/2)(D_ijkℓ + D_ijℓk) E_kℓ + (1/2)(D_ijkℓ − D_ijℓk) E_kℓ = C_ijkℓ E_kℓ,

where in the last step we have used the fact that (D_ijkℓ − D_ijℓk)E_kℓ = 0, since D_ijkℓ − D_ijℓk is skew-symmetric in the subscripts k, ℓ while E_kℓ is symmetric in the subscripts k, ℓ.
Example (1.7): Evaluate the expression δ_ij δ_ik δ_jk.

Solution: By using the substitution rule, first on the repeated index i and then on the repeated index j, we have δ_ij δ_ik δ_jk = δ_jk δ_jk = δ_kk = δ_11 + δ_22 + ... + δ_nn = n.
Example (1.8): Given an orthogonal matrix [Q], use indicial notation to solve the matrix equation [Q]{x} = {a} for {x}.

Solution: In indicial form, the equation [Q]{x} = {a} reads

  Q_ij x_j = a_i.

Multiplying both sides by Q_ik gives

  Q_ik Q_ij x_j = Q_ik a_i.

Since [Q] is orthogonal, we know from (1.39) that Q_rp Q_rq = δ_pq. Thus the preceding equation simplifies to

  δ_jk x_j = Q_ik a_i,

which, by the substitution rule, reduces further to

  x_k = Q_ik a_i.

In matrix notation this reads {x} = [Q]^T {a}, which we could, of course, have written down immediately from the fact that {x} = [Q]^{-1} {a} and, for an orthogonal matrix, [Q]^{-1} = [Q]^T.
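The computational payoff of orthogonality is that solving [Q]{x} = {a} requires only a transpose, never a linear solve; a short illustrative sketch (NumPy), where a QR factorization is used merely to manufacture a random orthogonal matrix:

    import numpy as np

    rng = np.random.default_rng(7)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal [Q]
    a = rng.standard_normal(3)

    x = Q.T @ a                    # x_k = Q_ik a_i
    print(np.allclose(Q @ x, a))   # True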
Example (1.9): Consider the function f(x_1, x_2, ..., x_n) = A_ij x_i x_j, where the A_ij's are constants. Calculate the partial derivatives ∂f/∂x_i.

Solution: We begin by making two general observations. First, note that because of the summation on the indices i and j, it is incorrect to conclude that ∂f/∂x_i = A_ij x_j by viewing this in the same way as differentiating the function A_12 x_1 x_2 with respect to x_1. Second, observe that if we differentiate f with respect to x_i and write ∂f/∂x_i = ∂(A_ij x_i x_j)/∂x_i, we would violate our rules because the right-hand side has the subscript i appearing three times in one symbol grouping. In order to get around this difficulty we make use of the fact that the specific choice of the index in a dummy subscript is not significant, and so we can write f = A_pq x_p x_q.

Differentiating f and using the fact that [A] is constant gives

  ∂f/∂x_i = ∂(A_pq x_p x_q)/∂x_i = A_pq ∂(x_p x_q)/∂x_i = A_pq [ (∂x_p/∂x_i) x_q + x_p (∂x_q/∂x_i) ].

Since the x_i's are independent variables, it follows that

  ∂x_i/∂x_j = 1 if i = j, and 0 if i ≠ j;   i.e.   ∂x_i/∂x_j = δ_ij.

Using this above gives

  ∂f/∂x_i = A_pq [ δ_pi x_q + x_p δ_qi ] = A_pq δ_pi x_q + A_pq x_p δ_qi,

which, by the substitution rule, simplifies to

  ∂f/∂x_i = A_iq x_q + A_pi x_p = A_ij x_j + A_ji x_j = (A_ij + A_ji) x_j.
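A finite-difference check of this gradient is reassuring; the sketch below (NumPy, illustrative only) compares (A_ij + A_ji) x_j against central differences of f.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 4
    A = rng.standard_normal((n, n))
    x = rng.standard_normal(n)

    grad_exact = (A + A.T) @ x     # (A_ij + A_ji) x_j

    h = 1e-6
    grad_fd = np.empty(n)
    for i in range(n):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grad_fd[i] = (xp @ A @ xp - xm @ A @ xm) / (2.0 * h)

    print(np.allclose(grad_exact, grad_fd, atol=1e-4))   # True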
Example (1.10): Suppose that {x}^T[A]{x} = 0 for all column matrices {x}, where the square matrix [A] is independent of {x}. What does this imply about [A]?

Solution: We know from a previous example that if [A] is skew-symmetric and [S] is symmetric then A_ij S_ij = 0, and as a special case of this that A_ij x_i x_j = 0 for all {x}. Thus a sufficient condition for the given equation to hold is that [A] be skew-symmetric. Now we show that this is also a necessary condition.

We are given that A_ij x_i x_j = 0 for all x_i. Since this equation holds for all x_i, we may differentiate both sides with respect to x_k and proceed as follows:

  0 = ∂(A_ij x_i x_j)/∂x_k = A_ij ∂(x_i x_j)/∂x_k = A_ij (∂x_i/∂x_k) x_j + A_ij x_i (∂x_j/∂x_k) = A_ij δ_ik x_j + A_ij x_i δ_jk,   (i)

where we have used the fact that ∂x_i/∂x_j = δ_ij in the last step. On using the substitution rule, this simplifies to

  A_kj x_j + A_ik x_i = (A_kj + A_jk) x_j = 0.   (ii)

Since this also holds for all x_i, it may be differentiated again with respect to x_i to obtain

  (A_kj + A_jk)(∂x_j/∂x_i) = (A_kj + A_jk) δ_ji = A_ki + A_ik = 0.   (iii)

Thus [A] must necessarily be a skew-symmetric matrix.

Therefore it is necessary and sufficient that [A] be skew-symmetric.
Example (1.11): Let C_ijkl be a set of n^4 constants. Define the function Ŵ([E]) for all matrices [E] by Ŵ([E]) = W(E_11, E_12, ..., E_nn) = (1/2) C_ijkl E_ij E_kl. Calculate

  ∂W/∂E_ij   and   ∂²W/(∂E_ij ∂E_kl).   (i)

Solution: First, since the E_ij's are independent variables, it follows that

  ∂E_pq/∂E_ij = 1 if p = i and q = j, and 0 otherwise;   i.e.   ∂E_pq/∂E_ij = δ_pi δ_qj.   (ii)

Keeping this in mind and differentiating W(E_11, E_12, ..., E_nn) with respect to E_ij gives

  ∂W/∂E_ij = ∂/∂E_ij [ (1/2) C_pqrs E_pq E_rs ]
           = (1/2) C_pqrs [ (∂E_pq/∂E_ij) E_rs + E_pq (∂E_rs/∂E_ij) ]
           = (1/2) C_pqrs (δ_pi δ_qj E_rs + δ_ri δ_sj E_pq)
           = (1/2) C_ijrs E_rs + (1/2) C_pqij E_pq = (1/2)(C_ijpq + C_pqij) E_pq,

where we have made use of the substitution rule. (Note that in the first step we wrote W = (1/2) C_pqrs E_pq E_rs rather than W = (1/2) C_ijkl E_ij E_kl, because we would violate our rules for indices had we written ∂((1/2) C_ijkl E_ij E_kl)/∂E_ij.)

Differentiating this once more with respect to E_kl gives

  ∂²W/(∂E_ij ∂E_kl) = ∂/∂E_kl [ (1/2)(C_ijpq + C_pqij) E_pq ] = (1/2)(C_ijpq + C_pqij) δ_pk δ_ql   (iii)
                    = (1/2)(C_ijkl + C_klij).   (iv)
Example (1.12): Evaluate the expression e_ijk e_kij.

Solution: By first using the skew-symmetry property (1.45), then using the identity (1.49), and finally using the substitution rule, we have e_ijk e_kij = −e_ijk e_ikj = −(δ_jk δ_kj − δ_jj δ_kk) = −(δ_jj − δ_jj δ_kk) = −(3 − 3×3) = 6.
Example (1.13): Show that

  e_ijk S_jk = 0   (i)

if and only if the matrix [S] is symmetric.

Solution: First, suppose that [S] is symmetric. Pick and fix the free subscript i at any value i = 1, 2, 3. Then we can think of e_ijk as the j,k element of a 3×3 matrix. Since e_ijk = −e_ikj, this is a skew-symmetric matrix. In a previous example we showed that S_ij W_ij = 0 for any symmetric matrix [S] and any skew-symmetric matrix [W]. Consequently (i) must hold.

Conversely, suppose that (i) holds for some matrix [S]. Multiplying (i) by e_ipq and using the identity (1.49) leads to

  e_ipq e_ijk S_jk = (δ_pj δ_qk − δ_pk δ_qj) S_jk = S_pq − S_qp = 0,

where in the last step we have used the substitution rule. Thus S_pq = S_qp and so [S] is symmetric.

Remark: Note as a special case of this result that

  e_ijk v_j v_k = 0   (ii)

for any arbitrary column matrix {v}.
References
1. R.A. Frazer, W.J. Duncan and A.R. Collar, Elementary Matrices, Cambridge University Press, 1965.
2. R. Bellman, Introduction to Matrix Analysis, McGraw-Hill, 1960.
Chapter 2
Vectors and Linear Transformations
Notation:
α ..... scalar
a ..... vector
A ..... linear transformation
As mentioned in the Preface, Linear Algebra is a far richer subject than the very restricted glimpse provided here might suggest. The discussion in these notes is limited almost entirely to (a) real 3-dimensional Euclidean vector spaces, and (b) linear transformations that carry vectors from one vector space into the same vector space. These notes are designed to review those aspects of linear algebra that will be encountered in our study of continuum mechanics; they are not meant to be a source for learning the subject of linear algebra for the first time.

The following notation will be consistently used: Greek letters will denote real numbers; lower-case boldface Latin letters will denote vectors; and upper-case boldface Latin letters will denote linear transformations. Thus, for example, α, β, γ, ... will denote scalars (real numbers); a, b, c, ... will denote vectors; and A, B, C, ... will denote linear transformations. In particular, "o" will denote the null vector while "0" will denote the null linear transformation.
2.1 Vectors
A vector space V is a collection of elements, called vectors, together with two operations, addition and multiplication by a scalar. The operation of addition has certain properties (which we do not list here) and associates with each pair of vectors x and y in V a vector, denoted by x + y, that is also in V. In particular, it is assumed that there is a unique vector o ∈ V, called the null vector, such that x + o = x. The operation of scalar multiplication has certain properties (which we do not list here) and associates with each vector x ∈ V and each real number α another vector in V denoted by αx.
Let x_1, x_2, ..., x_k be k vectors in V. These vectors are said to be linearly independent if the only real numbers α_1, α_2, ..., α_k for which

  α_1 x_1 + α_2 x_2 + ... + α_k x_k = o                              (2.1)

are the numbers α_1 = α_2 = ... = α_k = 0. If V contains n linearly independent vectors but does not contain n + 1 linearly independent vectors, we say that the dimension of V is n. Unless stated otherwise, from here on we restrict attention to 3-dimensional vector spaces.
If V is a vector space, any set of three linearly independent vectors {e_1, e_2, e_3} is said to be a basis for V. Given any vector x ∈ V there exists a unique set of numbers ξ_1, ξ_2, ξ_3 such that

  x = ξ_1 e_1 + ξ_2 e_2 + ξ_3 e_3;                                   (2.2)

the numbers ξ_1, ξ_2, ξ_3 are called the components of x in the basis {e_1, e_2, e_3}.
Let U be a subset of a vector space V; we say that U is a subspace (or linear manifold) of V if, for every x, y ∈ U and every real number α, the vectors x + y and αx are also in U. Thus a linear manifold U of V is itself a vector space under the same operations of addition and multiplication by a scalar as in V.
A scalar-product (or inner product or dot product) on V is a function which assigns to each pair of vectors x, y in V a scalar, which we denote by x · y. A scalar-product has certain properties which we do not list here, except to note that it is required that

  x · y = y · x   for all x, y ∈ V.                                  (2.3)

A Euclidean vector space is a vector space together with an inner product on that space. From here on we shall restrict attention to 3-dimensional Euclidean vector spaces and denote such a space by E³.
The length (or magnitude or norm) of a vector x is the scalar denoted by |x| and defined by

  |x| = (x · x)^{1/2}.                                               (2.4)

A vector has zero length if and only if it is the null vector. A unit vector is a vector of unit length. The angle θ between two vectors x and y is defined by

  cos θ = (x · y)/(|x||y|),   0 ≤ θ ≤ π.                             (2.5)

Two vectors x and y are orthogonal if x · y = 0. It is obvious, but nevertheless helpful, to note that if we are given two vectors x and y where x · y = 0 and y ≠ o, this does not necessarily imply that x = o; on the other hand, if x · y = 0 for every vector y, then x must be the null vector.

An orthonormal basis is a triplet of mutually orthogonal unit vectors e_1, e_2, e_3 ∈ E³. For such a basis,

  e_i · e_j = δ_ij   for i, j = 1, 2, 3,                             (2.6)

where the Kronecker delta δ_ij is defined in the usual way by

  δ_ij =  1 if i = j,
          0 if i ≠ j.                                                (2.7)
A vector-product (or cross-product) on E³ is a function which assigns to each ordered pair of vectors x, y ∈ E³ a vector, which we denote by x × y. The vector-product has certain properties which we do not list here, except to note that it is required that

  y × x = −x × y   for all x, y ∈ V.                                 (2.8)

One can show that

  x × y = |x||y| sin θ n,                                            (2.9)

where θ is the angle between x and y as defined by (2.5), and n is a unit vector in the direction of x × y, which therefore is normal to the plane defined by x and y. Since n is parallel to x × y, and since it has unit length, it follows that n = (x × y)/|x × y|. The magnitude |x × y| of the cross-product can be interpreted geometrically as the area of the parallelogram formed by the vectors x and y. A basis {e_1, e_2, e_3} is said to be right-handed if

  (e_1 × e_2) · e_3 > 0.                                             (2.10)
2.1.1 Euclidean point space
A Euclidean point space P, whose elements are called points, is related to a Euclidean vector space E³ in the following manner. Every ordered pair of points (p, q) is uniquely associated with a vector in E³, say \vec{pq}, such that

(i) \vec{pq} = −\vec{qp} for all p, q ∈ P.

(ii) \vec{pq} + \vec{qr} = \vec{pr} for all p, q, r ∈ P.

(iii) Given an arbitrary point p ∈ P and an arbitrary vector x ∈ E³, there is a unique point q ∈ P such that x = \vec{pq}. Here x is called the position of point q relative to the point p.

Pick and fix an arbitrary point o ∈ P (which we call the origin of P) and an arbitrary basis for E³ of unit vectors e_1, e_2, e_3. Corresponding to any point p ∈ P there is a unique vector \vec{op} = x = x_1 e_1 + x_2 e_2 + x_3 e_3 ∈ E³. The triplet (x_1, x_2, x_3) is called the coordinates of p in the (coordinate) frame F = {o; e_1, e_2, e_3} comprised of the origin o and the basis vectors e_1, e_2, e_3. If e_1, e_2, e_3 is an orthonormal basis, the coordinate frame {o; e_1, e_2, e_3} is called a rectangular Cartesian coordinate frame.
2.2 Linear Transformations.
Consider a three-dimensional Euclidean vector space E³. Let F be a function (or transformation) which assigns to each vector x ∈ E³ a second vector y ∈ E³,

  y = F(x),   x ∈ E³,  y ∈ E³;                                       (2.11)

F is said to be a linear transformation if it is such that

  F(αx + βy) = αF(x) + βF(y)                                         (2.12)

for all scalars α, β and all vectors x, y ∈ E³. When F is a linear transformation, we usually omit the parentheses and write Fx instead of F(x). Note that Fx is a vector: it is the image of x under the transformation F.
A linear transformation is defined by the way it operates on vectors in E³. A geometric example of a linear transformation is the "projection operator" Π which projects vectors onto a given plane P. Let P be the plane normal to the unit vector n; see Figure 2.1.

[Figure 2.1: The projection Πx of a vector x onto the plane P.]

For any vector x ∈ E³, Πx ∈ P is the vector obtained by projecting x onto P. It can be verified geometrically that Π is defined by

  Πx = x − (x · n)n   for all x ∈ E³.                                (2.13)
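In components, Π is the matrix I − n ⊗ n, and (2.13) can be checked directly; a minimal sketch (NumPy, with an arbitrarily chosen normal and vector, illustrative only):

    import numpy as np

    n = np.array([0.0, 0.0, 1.0])      # unit normal to the plane P
    x = np.array([1.0, 2.0, 3.0])

    Px = x - (x @ n) * n               # equation (2.13)
    print(Px, np.isclose(Px @ n, 0.0)) # [1. 2. 0.] True: Px lies in P

    # As a matrix: Pi = I - n (x) n; it is linear and satisfies Pi Pi = Pi.
    Pi = np.eye(3) - np.outer(n, n)
    print(np.allclose(Pi @ x, Px), np.allclose(Pi @ Pi, Pi))   # True True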
Linear transformations tell us how vectors are mapped into other vectors. In particular, suppose that {y_1, y_2, y_3} are any three vectors in E³ and that {x_1, x_2, x_3} are any three linearly independent vectors in E³. Then there is a unique linear transformation F that maps {x_1, x_2, x_3} into {y_1, y_2, y_3}: y_1 = Fx_1, y_2 = Fx_2, y_3 = Fx_3. This follows from the fact that {x_1, x_2, x_3} is a basis for E³. Therefore any arbitrary vector x can be expressed uniquely in the form x = ξ_1 x_1 + ξ_2 x_2 + ξ_3 x_3; consequently the image Fx of any vector x is given by Fx = ξ_1 y_1 + ξ_2 y_2 + ξ_3 y_3, which is a rule for assigning a unique vector Fx to any given vector x.
The null linear transformation 0 is the linear transformation that takes every vector x into the null vector o. The identity linear transformation I takes every vector x into itself. Thus

  0x = o,   Ix = x   for all x ∈ E³.                                 (2.14)
Let A and B be linear transformations on E³ and let α be a scalar. The linear transformations A + B, AB and αA are defined as those linear transformations which are such that

  (A + B)x = Ax + Bx   for all x ∈ E³,                               (2.15)
  (AB)x = A(Bx)   for all x ∈ E³,                                    (2.16)
  (αA)x = α(Ax)   for all x ∈ E³,                                    (2.17)

respectively; A + B is called the sum of A and B, AB the product, and αA the scalar multiple of A by α. In general,

  AB ≠ BA.                                                           (2.18)
The range of a linear transformation A (i.e., the collection of all vectors Ax as x takes all values in E³) is a subspace of E³. The dimension of this particular subspace is known as the rank of A. The set of all vectors x for which Ax = o is also a subspace of E³; it is known as the null space of A.

Given any linear transformation A, one can show that there is a unique linear transformation, usually denoted by A^T, such that

  Ax · y = x · A^T y   for all x, y ∈ E³.                            (2.19)
A^T is called the transpose of A. One can show that

  (αA)^T = αA^T,   (A + B)^T = A^T + B^T,   (AB)^T = B^T A^T.        (2.20)
A linear transformation A is said to be symmetric if

  A = A^T,                                                           (2.21)

and skew-symmetric if

  A = −A^T.                                                          (2.22)

Every linear transformation A can be represented as the sum of a symmetric linear transformation S and a skew-symmetric linear transformation W as follows:

  A = S + W,   where   S = (1/2)(A + A^T),   W = (1/2)(A − A^T).     (2.23)

For every skew-symmetric linear transformation W, it may be shown that

  Wx · x = 0   for all x ∈ E³;                                       (2.24)

moreover, there exists a vector w (called the axial vector of W) which has the property that

  Wx = w × x   for all x ∈ E³.                                       (2.25)
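In an orthonormal basis the skew-symmetric W built from an axial vector w has the familiar component form below, and (2.24), (2.25) can be verified at once (NumPy sketch, illustrative only):

    import numpy as np

    w = np.array([1.0, -2.0, 0.5])          # a candidate axial vector

    # components of the skew-symmetric W whose axial vector is w
    W = np.array([[ 0.0,  -w[2],  w[1]],
                  [ w[2],   0.0, -w[0]],
                  [-w[1],  w[0],   0.0]])

    x = np.array([3.0, 1.0, 2.0])
    print(np.allclose(W @ x, np.cross(w, x)))   # True: Wx = w x x, (2.25)
    print(np.isclose(x @ (W @ x), 0.0))         # True: Wx . x = 0, (2.24)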
Given a linear transformation A, if the only vector x for which Ax = o is the zero vector, then we say that A is non-singular. It follows from this that if A is non-singular then Ax ≠ Ay whenever x ≠ y. Thus, a non-singular transformation A is a one-to-one transformation in the sense that, for any given y ∈ E³, there is one and only one vector x ∈ E³ for which Ax = y. Consequently, corresponding to any non-singular linear transformation A, there exists a second linear transformation, denoted by A^{-1} and called the inverse of A, such that Ax = y if and only if x = A^{-1}y, or equivalently, such that

  AA^{-1} = A^{-1}A = I.                                             (2.26)
If {y_1, y_2, y_3} and {x_1, x_2, x_3} are two sets of linearly independent vectors in E³, then there is a unique non-singular linear transformation F that maps {x_1, x_2, x_3} into {y_1, y_2, y_3}: y_1 = Fx_1, y_2 = Fx_2, y_3 = Fx_3. The inverse of F maps {y_1, y_2, y_3} into {x_1, x_2, x_3}. If both bases {x_1, x_2, x_3} and {y_1, y_2, y_3} are right-handed (or both are left-handed), we say that the linear transformation F preserves the orientation of the vector space.
If two linear transformations A and B are both non-singular, then so is AB; moreover,

  (AB)^{-1} = B^{-1}A^{-1}.                                          (2.27)

If A is non-singular then so is A^T; moreover,

  (A^T)^{-1} = (A^{-1})^T,                                           (2.28)

and so there is no ambiguity in writing this linear transformation as A^{-T}.
A linear transformation Q is said to be orthogonal if it preserves length, i.e., if

  |Qx| = |x|   for all x ∈ E³.                                       (2.29)

If Q is orthogonal, it follows that it also preserves the inner product:

  Qx · Qy = x · y   for all x, y ∈ E³.                               (2.30)

Thus an orthogonal linear transformation preserves both the length of a vector and the angle between two vectors. If Q is orthogonal, it is necessarily non-singular and

  Q^{-1} = Q^T.                                                      (2.31)
A linear transformation A is said to be positive definite if

  Ax · x > 0   for all x ∈ E³, x ≠ o;                                (2.32)

and positive semi-definite if

  Ax · x ≥ 0   for all x ∈ E³.                                       (2.33)

A positive definite linear transformation is necessarily non-singular. Moreover, A is positive definite if and only if its symmetric part (1/2)(A + A^T) is positive definite.
Let A be a linear transformation. A subspace U is known as an invariant subspace of A if Av ∈ U for all v ∈ U. Given a linear transformation A, suppose that there exists an associated one-dimensional invariant subspace U. Since U is one-dimensional, it follows that if v ∈ U then any other vector in U can be expressed in the form λv for some scalar λ. Since U is an invariant subspace we know in addition that Av ∈ U whenever v ∈ U. Combining these two facts shows that Av = λv for all v ∈ U. A vector v and a scalar λ such that

  Av = λv                                                            (2.34)

are known, respectively, as an eigenvector and an eigenvalue of A. Each eigenvector of A characterizes a one-dimensional invariant subspace of A. Every linear transformation A (on a 3-dimensional vector space E³) has at least one eigenvalue.
It can be shown that a symmetric linear transformation A has three real eigenvalues λ_1, λ_2, and λ_3, and a corresponding set of three mutually orthogonal eigenvectors e_1, e_2, and e_3. The particular basis of E³ comprised of {e_1, e_2, e_3} is said to be a principal basis of A.
Every eigenvalue of a positive definite linear transformation must be positive, and no eigenvalue of a non-singular linear transformation can be zero. A symmetric linear transformation is positive definite if and only if all three of its eigenvalues are positive.

If e and λ are an eigenvector and eigenvalue of a linear transformation A, then for any positive integer n, it is easily seen that e and λ^n are an eigenvector and an eigenvalue of A^n, where A^n = AA...(n times)...AA; this continues to be true for negative integers m provided A is non-singular, where by A^{-m} we mean (A^{-1})^m, m > 0.
Finally, according to the polar decomposition theorem, given any nonsingular linear
transformation F, there exist unique symmetric positive definite linear transformations U and
V and a unique orthogonal linear transformation R such that

    F = RU = VR.    (2.35)

If λ and r are an eigenvalue and eigenvector of U, then it can be readily shown that λ and
Rr are an eigenvalue and eigenvector of V.
Given two vectors a, b ∈ E^3, their tensor-product is the linear transformation usually
denoted by a ⊗ b, which is such that

    (a ⊗ b)x = (x · b)a  for all x ∈ E^3.    (2.36)

Observe that for any x ∈ E^3, the vector (a ⊗ b)x is parallel to the vector a. Thus the range
of the linear transformation a ⊗ b is the one-dimensional subspace of E^3 consisting of all
vectors parallel to a. The rank of the linear transformation a ⊗ b is thus unity.
For any vectors a, b, c, and d it is easily shown that

    (a ⊗ b)^T = b ⊗ a,   (a ⊗ b)(c ⊗ d) = (b · c)(a ⊗ d).    (2.37)

The product of a linear transformation A with the linear transformation a ⊗ b gives

    A(a ⊗ b) = (Aa) ⊗ b,   (a ⊗ b)A = a ⊗ (A^T b).    (2.38)
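The tensor-product identities (2.36)-(2.38) can be checked componentwise with outer products. The sketch below is an aside assuming Python with numpy; np.outer(a, b) gives the component matrix of a ⊗ b.

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, c, d, x = rng.standard_normal((5, 3))

    T = np.outer(a, b)                   # components of a ⊗ b

    # (2.36): (a ⊗ b)x = (x · b) a
    assert np.allclose(T @ x, np.dot(x, b) * a)

    # (2.37): (a ⊗ b)^T = b ⊗ a  and  (a ⊗ b)(c ⊗ d) = (b · c)(a ⊗ d)
    assert np.allclose(T.T, np.outer(b, a))
    assert np.allclose(T @ np.outer(c, d), np.dot(b, c) * np.outer(a, d))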
Let {e_1, e_2, e_3} be an orthonormal basis. Since this is a basis, any vector in E^3, and
therefore in particular each of the vectors Ae_1, Ae_2, Ae_3, can be expressed as a unique
linear combination of the basis vectors e_1, e_2, e_3. It follows that there exist unique real
numbers A_ij such that

    Ae_j = Σ_{i=1}^{3} A_ij e_i,   j = 1, 2, 3,    (2.39)

where A_ij is the i-th component of the vector Ae_j. They can equivalently be expressed as
A_ij = e_i · (Ae_j). The linear transformation A can now be represented as

    A = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j).    (2.40)

One refers to the A_ij's as the components of the linear transformation A in the basis
{e_1, e_2, e_3}. Note that

    Σ_{i=1}^{3} e_i ⊗ e_i = I,   Σ_{i=1}^{3} (Ae_i) ⊗ e_i = A.    (2.41)
Let S be a symmetric linear transformation with eigenvalues λ_1, λ_2, λ_3 and corresponding
(mutually orthogonal unit) eigenvectors e_1, e_2, e_3. Since Se_j = λ_j e_j for each j = 1, 2, 3, it
follows from (2.39) that the components of S in the principal basis {e_1, e_2, e_3} are S_11 = λ_1,
S_21 = S_31 = 0; S_12 = 0, S_22 = λ_2, S_32 = 0; S_13 = S_23 = 0, S_33 = λ_3. It follows from the
general representation (2.40) that S admits the representation

    S = Σ_{i=1}^{3} λ_i (e_i ⊗ e_i);    (2.42)

this is called the spectral representation of a symmetric linear transformation. It can be
readily shown that, for any positive integer n,

    S^n = Σ_{i=1}^{3} λ_i^n (e_i ⊗ e_i);    (2.43)

if S is symmetric and nonsingular, then

    S^{-1} = Σ_{i=1}^{3} (1/λ_i) (e_i ⊗ e_i).    (2.44)

If S is symmetric and positive definite, there is a unique symmetric positive definite linear
transformation T such that T^2 = S. We call T the positive definite square root of S and
denote it by T = √S. It is readily seen that

    √S = Σ_{i=1}^{3} √λ_i (e_i ⊗ e_i).    (2.45)
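The representations (2.42)-(2.45) translate directly into a few lines of numerical linear algebra. The following sketch is an aside assuming Python with numpy; np.linalg.eigh returns the eigenvalues and orthonormal eigenvectors of a symmetric matrix.

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.standard_normal((3, 3))
    S = B.T @ B + 3.0 * np.eye(3)    # symmetric positive definite by construction

    lam, E = np.linalg.eigh(S)       # eigenvalues lam[i], eigenvectors E[:, i]

    # spectral representation (2.42): S = sum_i lam_i (e_i ⊗ e_i)
    S_rebuilt = sum(lam[i] * np.outer(E[:, i], E[:, i]) for i in range(3))
    assert np.allclose(S, S_rebuilt)

    # square root (2.45): sqrt(S) = sum_i sqrt(lam_i) (e_i ⊗ e_i), so T^2 = S
    T = sum(np.sqrt(lam[i]) * np.outer(E[:, i], E[:, i]) for i in range(3))
    assert np.allclose(T @ T, S)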
2.3 Worked Examples.
Example 2.1: Given three vectors a, b, c, show that

    a · (b × c) = b · (c × a) = c · (a × b).

Solution: By the properties of the vector-product, the vector (a + b) is normal to the vector (a + b) × c.
Thus

    (a + b) · [(a + b) × c] = 0.

On expanding this out one obtains

    a · (a × c) + a · (b × c) + b · (a × c) + b · (b × c) = 0.

Since a is normal to (a × c), and b is normal to (b × c), the first and last terms in this equation vanish.
Finally, recall that a × c = −c × a. Thus the preceding equation simplifies to

    a · (b × c) = b · (c × a).

This establishes the first part of the result. The second part is shown analogously.
Example 2.2: Show that a necessary and sufficient condition for three vectors a, b, c in E^3 – none of which
is the null vector – to be linearly dependent is that a · (b × c) = 0.
Solution: To show necessity, suppose that the three vectors a, b, c are linearly dependent. It follows that

    αa + βb + γc = o

for some real numbers α, β, γ, at least one of which is nonzero. Taking the vector-product of this equation
with c and then taking the scalar-product of the result with a leads to

    β a · (b × c) = 0.

Analogous calculations with the other pairs of vectors, keeping in mind that a · (b × c) = b · (c × a) =
c · (a × b), lead to

    α a · (b × c) = 0,   β a · (b × c) = 0,   γ a · (b × c) = 0.

Since at least one of α, β, γ is nonzero it follows that necessarily a · (b × c) = 0.
To show sufficiency, let a · (b × c) = 0 and assume that a, b, c are linearly independent. We will show that
this is a contradiction, whence a, b, c must be linearly dependent. By the properties of the vector-product,
the vector b × c is normal to the plane defined by the vectors b and c. By assumption, a · (b × c) = 0, and
this implies that a is normal to b × c. Since we are in E^3, this means that a must lie in the plane defined by
b and c, so a, b, c cannot be linearly independent; this is the desired contradiction.
Example 2.3: Interpret the quantity a · (b × c) geometrically in terms of the volume of the tetrahedron defined
by the vectors a, b, c.
Solution: Consider the tetrahedron formed by the three vectors a, b, c as depicted in Figure 2.2. Its volume
is V_0 = (1/3) A_0 h_0 where A_0 is the area of its base and h_0 is its height.

Figure 2.2: Volume of the tetrahedron defined by vectors a, b, c. (The figure shows the base of area A_0 spanned by a and b, the unit normal n = (a × b)/|a × b|, and the height h_0 = c · n.)

Consider the triangle defined by the vectors a and b to be the base of the tetrahedron. Its area A_0 can
be written as (1/2) base × height = (1/2)|a|(|b| sin θ) where θ is the angle between a and b. However, from the
property (2.9) of the vector-product we have |a × b| = |a||b| sin θ and so A_0 = |a × b|/2.
Next, n = (a × b)/|a × b| is a unit vector that is normal to the base of the tetrahedron, and so the
height of the tetrahedron is h_0 = c · n; see Figure 2.2.
Therefore

    V_0 = (1/3) A_0 h_0 = (1/3) (|a × b|/2)(c · n) = (1/6) (a × b) · c.    (i)

Observe that this provides a geometric explanation for why the vectors a, b, c are linearly dependent if and
only if (a × b) · c = 0.
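For a concrete check of (i), consider the right-angled corner below, where the volume is known in closed form. This numerical aside assumes Python with numpy.

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 2.0, 0.0])
    c = np.array([0.0, 0.0, 3.0])

    # (i): V_0 = (1/6)(a × b) · c; for this corner V_0 = (1/6)(1)(2)(3) = 1
    V0 = np.dot(np.cross(a, b), c) / 6.0
    assert np.isclose(V0, 1.0)

    # linear dependence <=> zero volume: replace c by a combination of a and b
    assert np.isclose(np.dot(np.cross(a, b), 2.0 * a - b), 0.0)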
Example 2.4: Let φ(x) be a scalar-valued function defined on the vector space E^3. If φ is linear, i.e. if
φ(αx + βy) = αφ(x) + βφ(y) for all scalars α, β and all vectors x, y, show that φ(x) = c · x for some constant
vector c. Remark: This shows that the scalar-product is the most general scalar-valued linear function of a
vector.
Solution: Let {e_1, e_2, e_3} be any orthonormal basis for E^3. Then an arbitrary vector x can be written in
terms of its components as x = x_1 e_1 + x_2 e_2 + x_3 e_3. Therefore

    φ(x) = φ(x_1 e_1 + x_2 e_2 + x_3 e_3),

which because of the linearity of φ leads to

    φ(x) = x_1 φ(e_1) + x_2 φ(e_2) + x_3 φ(e_3).

On setting c_i = φ(e_i), i = 1, 2, 3, we find

    φ(x) = x_1 c_1 + x_2 c_2 + x_3 c_3 = c · x,

where c = c_1 e_1 + c_2 e_2 + c_3 e_3.
Example 2.5: If two linear transformations A and B have the property that Ax · y = Bx · y for all vectors
x and y, show that A = B.
Solution: Since (Ax − Bx) · y = 0 for all vectors y, we may choose y = Ax − Bx in this, leading to
|Ax − Bx|^2 = 0. Since the only vector of zero length is the null vector, this implies that

    Ax = Bx  for all vectors x    (i)

and so A = B.
Example 2.6: Let n be a unit vector, and let P be the plane through o normal to n. Let Π and R be the
transformations which, respectively, project and reflect a vector in the plane P.
a. Show that Π and R are linear transformations; Π is called the “projection linear transformation”
while R is known as the “reflection linear transformation”.
b. Show that R(Rx) = x for all x ∈ E^3.
c. Verify that a reflection linear transformation R is nonsingular while a projection linear transformation
Π is singular. What is the inverse of R?
d. Verify that a projection linear transformation Π is symmetric and that a reflection linear transformation
R is orthogonal.

Figure 2.3: The projection Πx and reflection Rx of a vector x on the plane P.

e. Show that the projection linear transformation and reflection linear transformation can be represented
as Π = I − n ⊗ n and R = I − 2(n ⊗ n) respectively.
Solution:
a. Figure 2.3 shows a sketch of the plane P, its unit normal vector n, a generic vector x, its projection
Πx and its reflection Rx. By geometry we see that

    Πx = x − (x · n)n,   Rx = x − 2(x · n)n.    (i)

These define the images Πx and Rx of a generic vector x under the transformations Π and R. One
can readily verify that Π and R satisfy the requirement (2.12) of a linear transformation.
b. Applying the definition (i)_2 of R to the vector Rx gives

    R(Rx) = (Rx) − 2[(Rx) · n] n.

Replacing Rx on the right-hand side of this equation by (i)_2, and expanding the resulting expression,
shows that the right-hand side simplifies to x. Thus R(Rx) = x.
c. Applying the definition (i)_1 of Π to the vector n gives

    Πn = n − (n · n)n = n − n = o.

Therefore Πn = o and (since n ≠ o) we see that o is not the only vector that is mapped to the null
vector by Π. The transformation Π is therefore singular.
Next consider the transformation R and consider a vector x that is mapped by it to the null vector,
i.e. Rx = o. Using (i)_2 this requires

    x = 2(x · n)n.

Taking the scalar-product of this equation with the unit vector n yields x · n = 2(x · n), from which
we conclude that x · n = 0. Substituting this into the right-hand side of the preceding equation leads
to x = o. Therefore Rx = o if and only if x = o, and so R is nonsingular.
To find the inverse of R, recall from part (b) that R(Rx) = x. Operating on both sides of this
equation by R^{-1} gives Rx = R^{-1}x. Since this holds for all vectors x it follows that R^{-1} = R.
d. To show that Π is symmetric we simply use its definition (i)_1 to calculate Πx · y and x · Πy for
arbitrary vectors x and y. This yields

    Πx · y = [x − (x · n)n] · y = x · y − (x · n)(y · n)

and

    x · Πy = x · [y − (y · n)n] = x · y − (x · n)(y · n).

Thus Πx · y = x · Πy and so Π is symmetric.
To show that R is orthogonal we must show that RR^T = I, or R^T = R^{-1}. We begin by calculating
R^T. Recall from the definition (2.19) that the transpose satisfies the requirement x · R^T y = Rx · y.
Using the definition (i)_2 of R on the right-hand side of this equation yields

    x · R^T y = x · y − 2(x · n)(y · n).

We can rearrange the right-hand side of this equation so it reads

    x · R^T y = x · [y − 2(y · n)n].

Since this holds for all x it follows that R^T y = y − 2(y · n)n. Comparing this with (i)_2 shows that
R^T = R. In part (c) we showed that R^{-1} = R, and so it now follows that R^T = R^{-1}. Thus R is
orthogonal.
e. Applying the operation (I − n ⊗ n) to an arbitrary vector x gives

    (I − n ⊗ n)x = x − (n ⊗ n)x = x − (x · n)n = Πx

and so Π = I − n ⊗ n.
Similarly

    (I − 2n ⊗ n)x = x − 2(x · n)n = Rx

and so R = I − 2n ⊗ n.
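All of parts (b)-(e) can be verified numerically from the representations in part (e). The sketch below is an aside assuming Python with numpy and a randomly chosen unit normal n.

    import numpy as np

    rng = np.random.default_rng(3)
    n = rng.standard_normal(3)
    n /= np.linalg.norm(n)               # unit normal to the plane P

    I = np.eye(3)
    P = I - np.outer(n, n)               # part (e): Π = I − n ⊗ n
    R = I - 2.0 * np.outer(n, n)         # part (e): R = I − 2 n ⊗ n

    assert np.allclose(R @ R, I)         # part (b): R(Rx) = x, so R^{-1} = R
    assert np.allclose(P @ n, 0.0)       # part (c): Πn = o, so Π is singular
    assert np.allclose(P, P.T)           # part (d): Π is symmetric
    assert np.allclose(R.T @ R, I)       # part (d): R is orthogonal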
Example 2.7: If W is a skew-symmetric linear transformation, show that

    Wx · x = 0  for all x.    (i)

Solution: By the definition (2.19) of the transpose, we have Wx · x = x · W^T x; and since W = −W^T for a
skew-symmetric linear transformation, this can be written as Wx · x = −x · Wx. Finally, the property (2.3)
of the scalar-product allows this to be written as Wx · x = −Wx · x, from which the desired result follows.
Example 2.8: Show that (AB)^T = B^T A^T.
Solution: First, by the definition (2.19) of the transpose,

    (AB)x · y = x · (AB)^T y.    (i)

Second, note that (AB)x · y = A(Bx) · y. By the definition of the transpose of A we have A(Bx) · y =
Bx · A^T y; and by the definition of the transpose of B we have Bx · A^T y = x · B^T A^T y. Therefore combining
these three equations shows that

    (AB)x · y = x · B^T A^T y.    (ii)

Equating the two expressions (i) and (ii) for (AB)x · y shows that x · (AB)^T y = x · B^T A^T y for all vectors x, y,
which establishes the desired result.
Example 2.9: If o is the null vector, then show that Ao = o for any linear transformation A.
Solution: The null vector o has the property that when it is added to any vector, the vector remains
unchanged. Therefore x + o = x, and similarly Ax + o = Ax. However operating on the ﬁrst of these
equations by A shows that Ax + Ao = Ax, which when combined with the second equation yields the
desired result.
Example 2.10: If A and B are nonsingular linear transformations, show that AB is also nonsingular and
that (AB)^{-1} = B^{-1} A^{-1}.
Solution: Let C = B^{-1} A^{-1}. We will show that (AB)C = C(AB) = I and therefore that C is the inverse
of AB. (Since the inverse would thus have been shown to exist, necessarily AB must be nonsingular.)
Observe first that

    (AB)C = (AB) B^{-1} A^{-1} = A(BB^{-1})A^{-1} = A I A^{-1} = I,

and similarly that

    C(AB) = B^{-1} A^{-1} (AB) = B^{-1}(A^{-1}A)B = B^{-1} I B = I.

Therefore (AB)C = C(AB) = I and so C is the inverse of AB.
Example 2.11: If A is nonsingular, show that (A^{-1})^T = (A^T)^{-1}.
Solution: Since (A^T)^{-1} is the inverse of A^T we have (A^T)^{-1} A^T = I. Post-operating on both sides of this
equation by (A^{-1})^T gives

    (A^T)^{-1} A^T (A^{-1})^T = (A^{-1})^T.

Recall that (AB)^T = B^T A^T for any two linear transformations A and B. Thus the preceding equation
simplifies to

    (A^T)^{-1} (A^{-1} A)^T = (A^{-1})^T.

Since A^{-1} A = I the desired result follows.
Example 2.12: Show that an orthogonal linear transformation Q preserves inner products, i.e. show that
Qx · Qy = x · y for all vectors x, y.
Solution: Since

    (x − y) · (x − y) = x · x + y · y − 2 x · y,

it follows that

    x · y = (1/2)[ |x|^2 + |y|^2 − |x − y|^2 ].    (i)

Since this holds for all vectors x, y it must also hold when x and y are replaced by Qx and Qy:

    Qx · Qy = (1/2)[ |Qx|^2 + |Qy|^2 − |Qx − Qy|^2 ].

By definition, an orthogonal linear transformation Q preserves length, i.e. |Qv| = |v| for all vectors v. Thus
the preceding equation simplifies to

    Qx · Qy = (1/2)[ |x|^2 + |y|^2 − |x − y|^2 ].    (ii)

Since the right-hand sides of the preceding expressions for x · y and Qx · Qy are the same, it follows that
Qx · Qy = x · y.
Remark: Thus an orthogonal linear transformation preserves the length of any vector and the inner product
between any two vectors. It follows therefore that an orthogonal linear transformation preserves the angle
between a pair of vectors as well.
Example 2.13: Let Q be an orthogonal linear transformation. Show that
a. Q is nonsingular, and that
b. Q^{-1} = Q^T.
Solution:
a. To show that Q is nonsingular we must show that the only vector x for which Qx = o is the null
vector x = o. Suppose that Qx = o for some vector x. Taking the norm of the two sides of this
equation leads to |Qx| = |o| = 0. However an orthogonal linear transformation preserves length and
therefore |Qx| = |x|. Consequently |x| = 0. However the only vector of zero length is the null vector,
and so necessarily x = o. Thus Q is nonsingular.
b. Since Q is orthogonal it preserves the inner product: Qx · Qy = x · y for all vectors x and y. However
the property (2.19) of the transpose shows that Qx · Qy = x · Q^T Qy. It follows that x · Q^T Qy = x · y
for all vectors x and y, and therefore that Q^T Q = I. Thus Q^{-1} = Q^T.
Example 2.14: If α_1 and α_2 are two distinct eigenvalues of a symmetric linear transformation A, show that
the corresponding eigenvectors a_1 and a_2 are orthogonal to each other.
Solution: Recall from the definition of the transpose that Aa_1 · a_2 = a_1 · A^T a_2, and since A is symmetric
that A = A^T. Thus

    Aa_1 · a_2 = a_1 · Aa_2.

Since a_1 and a_2 are eigenvectors of A corresponding to the eigenvalues α_1 and α_2, we have Aa_1 = α_1 a_1 and
Aa_2 = α_2 a_2. Thus the preceding equation reduces to α_1 a_1 · a_2 = α_2 a_1 · a_2, or equivalently

    (α_1 − α_2)(a_1 · a_2) = 0.

Since α_1 ≠ α_2 it follows that necessarily a_1 · a_2 = 0.
Example 2.15: If λ and e are an eigenvalue and eigenvector of an arbitrary linear transformation A, show
that λ and P^{-1}e are an eigenvalue and eigenvector of the linear transformation P^{-1}AP. Here P is an
arbitrary nonsingular linear transformation.
Solution: Since PP^{-1} = I it follows that Ae = APP^{-1}e. However, we are told that Ae = λe, whence
APP^{-1}e = λe. Operating on both sides with P^{-1} gives P^{-1}APP^{-1}e = λP^{-1}e, which establishes the
result.
Example 2.16: If λ is an eigenvalue of an orthogonal linear transformation Q, show that |λ| = 1.
Solution: Let λ and e be an eigenvalue and corresponding eigenvector of Q. Thus Qe = λe and so |Qe| =
|λe| = |λ| |e|. However, Q preserves length and so |Qe| = |e|. Thus |λ| = 1.
Remark: We will show later that +1 is an eigenvalue of a “proper” orthogonal linear transformation on E^3.
The corresponding eigenvector is known as the axis of Q.
Example 2.17: The components of a linear transformation A in an orthonormal basis {e_1, e_2, e_3} are the
unique real numbers A_ij defined by

    Ae_j = Σ_{i=1}^{3} A_ij e_i,   j = 1, 2, 3.    (i)

Show that the linear transformation A can be represented as

    A = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j).    (ii)

Solution: Consider the linear transformation on the right-hand side of (ii) and operate it on an arbitrary
vector x:

    [Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j)] x = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (x · e_j) e_i = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij x_j e_i = Σ_{j=1}^{3} x_j [Σ_{i=1}^{3} A_ij e_i],

where we have used the facts that (p ⊗ q)r = (q · r)p and x_i = x · e_i. On using (i) in the right-most expression
above, we can continue this calculation as follows:

    [Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j)] x = Σ_{j=1}^{3} x_j Ae_j = A (Σ_{j=1}^{3} x_j e_j) = Ax.

The desired result follows from this since this holds for arbitrary vectors x.
Example 2.18: Let R be a “rotation transformation” that rotates vectors in E^3 through an angle θ, 0 < θ < π,
about an axis e (in the sense of the right-hand rule). Show that R can be represented as

    R = e ⊗ e + (e_1 ⊗ e_1 + e_2 ⊗ e_2) cos θ − (e_1 ⊗ e_2 − e_2 ⊗ e_1) sin θ,    (i)

where e_1 and e_2 are any two mutually orthogonal vectors such that {e_1, e_2, e} forms a right-handed
orthonormal basis for E^3.
Solution: We begin by listing what is given to us in the problem statement. Since the transformation R
simply rotates vectors, it necessarily preserves the length of a vector and so

    |Rx| = |x|  for all vectors x.    (ii)

In addition, since the angle through which R rotates a vector is θ, the angle between any vector x and its
image Rx is θ:

    Rx · x = |x|^2 cos θ  for all vectors x.    (iii)

Next, since R rotates vectors about the axis e, the angle between any vector x and e must equal the angle
between Rx and e:

    Rx · e = x · e  for all vectors x;    (iv)

moreover, it leaves the axis e itself unchanged:

    Re = e.    (v)

And finally, since the rotation is in the sense of the right-hand rule, for any vector x that is not parallel to
the axis e, the vectors x, Rx and e must obey the inequality

    (x × Rx) · e > 0  for all vectors x that are not parallel to e.    (vi)

Let {e_1, e_2, e} be a right-handed orthonormal basis. This implies that any vector in E^3, and therefore
in particular the vectors Re_1, Re_2 and Re, can be expressed as linear combinations of e_1, e_2 and e:

    Re_1 = R_11 e_1 + R_21 e_2 + R_31 e,
    Re_2 = R_12 e_1 + R_22 e_2 + R_32 e,    (vii)
    Re  = R_13 e_1 + R_23 e_2 + R_33 e,

for some unique real numbers R_ij, i, j = 1, 2, 3.
First, it follows from (v) and (vii)_3 that

    R_13 = 0,   R_23 = 0,   R_33 = 1.

Second, we conclude from (iv) with the choice x = e_1 that Re_1 · e = 0. Similarly Re_2 · e = 0. These, together
with (vii), imply that

    R_31 = R_32 = 0.

Third, it follows from (iii) with x = e_1 and (vii)_1 that R_11 = cos θ. One similarly shows that R_22 = cos θ.
Thus

    R_11 = R_22 = cos θ.

Collecting these results allows us to write (vii) as

    Re_1 = cos θ e_1 + R_21 e_2,
    Re_2 = R_12 e_1 + cos θ e_2,    (viii)
    Re  = e.

Fourth, the inequality (vi) with the choice x = e_1, together with (viii) and the fact that {e_1, e_2, e} forms
a right-handed basis, yields R_21 > 0. Similarly the choice x = e_2 yields R_12 < 0. Fifth, (ii) with x = e_1
gives |Re_1| = 1, which in view of (viii)_1 requires that R_21 = ±sin θ. Similarly we find that R_12 = ±sin θ.
Collecting these results shows that

    R_21 = +sin θ,   R_12 = −sin θ,

since 0 < θ < π. Thus in conclusion we can write (viii) as

    Re_1 = cos θ e_1 + sin θ e_2,
    Re_2 = −sin θ e_1 + cos θ e_2,    (ix)
    Re  = e.

Finally, recall the representation (2.40) of a linear transformation in terms of its components as defined
in (2.39). Applying this to (ix) allows us to write

    R = cos θ (e_1 ⊗ e_1) + sin θ (e_2 ⊗ e_1) − sin θ (e_1 ⊗ e_2) + cos θ (e_2 ⊗ e_2) + (e ⊗ e),    (x)

which can be rearranged to give the desired result.
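The representation (i) is easy to exercise numerically: build R from outer products and confirm the defining properties (ii)-(vi). The sketch below is an aside assuming Python with numpy, with the axis taken along the third coordinate direction for simplicity.

    import numpy as np

    theta = 0.7
    e  = np.array([0.0, 0.0, 1.0])       # axis of rotation
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([0.0, 1.0, 0.0])       # {e1, e2, e} right-handed orthonormal

    # the representation (i) of Example 2.18
    R = (np.outer(e, e)
         + (np.outer(e1, e1) + np.outer(e2, e2)) * np.cos(theta)
         - (np.outer(e1, e2) - np.outer(e2, e1)) * np.sin(theta))

    assert np.allclose(R @ e, e)                          # (v): axis unchanged
    assert np.allclose(R.T @ R, np.eye(3))                # R is orthogonal
    assert np.isclose(np.dot(R @ e1, e1), np.cos(theta))  # (iii) with x = e1
    assert np.dot(np.cross(e1, R @ e1), e) > 0            # (vi): right-hand rule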
Example 2.19: If F is a nonsingular linear transformation, show that F^T F is symmetric and positive definite.
Solution: For any linear transformations A and B we know that (AB)^T = B^T A^T and (A^T)^T = A. It
therefore follows that

    (F^T F)^T = F^T (F^T)^T = F^T F;    (i)

this shows that F^T F is symmetric.
In order to show that F^T F is positive definite, we consider the quadratic form F^T Fx · x. By using the
property (2.19) of the transpose, we can write

    F^T Fx · x = (Fx) · (Fx) = |Fx|^2 ≥ 0.    (ii)

Further, equality holds here if and only if Fx = o, which, since F is nonsingular, can happen only if x = o.
Thus F^T Fx · x > 0 for all vectors x ≠ o, and so F^T F is positive definite.
Example 2.20: Consider a symmetric positive definite linear transformation S. Show that it has a unique
symmetric positive definite square root, i.e. show that there is a unique symmetric positive definite linear
transformation T for which T^2 = S.
Solution: Since S is symmetric and positive definite it has three real positive eigenvalues σ_1, σ_2, σ_3 with
corresponding eigenvectors s_1, s_2, s_3 which may be taken to be orthonormal. Further, we know that S can
be represented as

    S = Σ_{i=1}^{3} σ_i (s_i ⊗ s_i).    (i)

If one defines a linear transformation T by

    T = Σ_{i=1}^{3} √σ_i (s_i ⊗ s_i),    (ii)

one can readily verify that T is symmetric, positive definite, and that T^2 = S. This establishes the existence
of a symmetric positive definite square root of S. What remains is to show the uniqueness of this square root.
Suppose that S has two symmetric positive definite square roots T_1 and T_2: S = T_1^2 = T_2^2. Let σ > 0
and s be an eigenvalue and corresponding eigenvector of S. Then Ss = σs and so T_1^2 s = σs. Thus we have

    (T_1 + √σ I)(T_1 − √σ I)s = o.    (iii)

If we set f = (T_1 − √σ I)s, this can be written as

    T_1 f = −√σ f.    (iv)

Thus either f = o or f is an eigenvector of T_1 corresponding to the eigenvalue −√σ (< 0). Since T_1 is
positive definite it cannot have a negative eigenvalue. Thus f = o and so

    T_1 s = √σ s.    (v)

It similarly follows that T_2 s = √σ s, and therefore that

    T_1 s = T_2 s.    (vi)

This holds for every eigenvector s of S: i.e. T_1 s_i = T_2 s_i, i = 1, 2, 3. Since the triplet of eigenvectors forms a
basis for the underlying vector space, this in turn implies that T_1 x = T_2 x for any vector x. Thus T_1 = T_2.
Example 2.21: Polar Decomposition Theorem: If F is a nonsingular linear transformation, show that there
exist a unique positive definite symmetric linear transformation U and a unique orthogonal linear
transformation R such that F = RU.
Solution: It follows from Example 2.19 that F^T F is symmetric and positive definite. It then follows from
Example 2.20 that F^T F has a unique symmetric positive definite square root, say, U:

    U = √(F^T F).    (i)

Finally, since U is positive definite, it is nonsingular, and its inverse U^{-1} exists. Define the linear
transformation R through

    R = FU^{-1}.    (ii)

All we have to do is to show that R is orthogonal. But this follows from

    R^T R = (FU^{-1})^T (FU^{-1}) = (U^{-1})^T F^T F U^{-1} = U^{-1} U^2 U^{-1} = I.    (iii)

In this calculation we have used the fact that U, and so U^{-1}, is symmetric. This establishes the proposition
(except for the uniqueness, which is left as an exercise).
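The construction in this proof is itself an algorithm: take the square root of F^T F by the spectral recipe of Example 2.20, then form R = FU^{-1}. The following sketch is an aside assuming Python with numpy.

    import numpy as np

    rng = np.random.default_rng(4)
    F = rng.standard_normal((3, 3))      # almost surely nonsingular
    assert abs(np.linalg.det(F)) > 1e-12

    # U = sqrt(F^T F) via the spectral representation of Example 2.20
    lam, E = np.linalg.eigh(F.T @ F)
    U = sum(np.sqrt(lam[i]) * np.outer(E[:, i], E[:, i]) for i in range(3))

    R = F @ np.linalg.inv(U)             # (ii): R = F U^{-1}

    assert np.allclose(R.T @ R, np.eye(3))    # (iii): R is orthogonal
    assert np.allclose(R @ U, F)              # F = RU
    assert np.allclose((R @ U @ R.T) @ R, F)  # F = VR with V = R U R^T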
Example 2.22: The polar decomposition theorem states that any nonsingular linear transformation F can
be represented uniquely in the forms F = RU = VR where R is orthogonal and U and V are symmetric
and positive definite. Let λ_i, r_i, i = 1, 2, 3 be the eigenvalues and eigenvectors of U. From Example 2.15 it
follows that the eigenvalues of V are the same as those of U and that the corresponding eigenvectors ℓ_i of
V are given by ℓ_i = Rr_i. Thus U and V have the spectral decompositions

    U = Σ_{i=1}^{3} λ_i (r_i ⊗ r_i),   V = Σ_{i=1}^{3} λ_i (ℓ_i ⊗ ℓ_i).

Show that

    F = Σ_{i=1}^{3} λ_i (ℓ_i ⊗ r_i),   R = Σ_{i=1}^{3} (ℓ_i ⊗ r_i).

Solution: First, by using the property (2.38)_1 and ℓ_i = Rr_i we have

    F = RU = R Σ_{i=1}^{3} λ_i (r_i ⊗ r_i) = Σ_{i=1}^{3} λ_i (Rr_i) ⊗ r_i = Σ_{i=1}^{3} λ_i (ℓ_i ⊗ r_i).    (i)

Next, since U is nonsingular,

    U^{-1} = Σ_{i=1}^{3} λ_i^{-1} (r_i ⊗ r_i),

and therefore

    R = FU^{-1} = [Σ_{i=1}^{3} λ_i (ℓ_i ⊗ r_i)] [Σ_{j=1}^{3} λ_j^{-1} (r_j ⊗ r_j)] = Σ_{i=1}^{3} Σ_{j=1}^{3} λ_i λ_j^{-1} (ℓ_i ⊗ r_i)(r_j ⊗ r_j).

By using the property (2.37)_2 and the fact that r_i · r_j = δ_ij, we have (ℓ_i ⊗ r_i)(r_j ⊗ r_j) = (r_i · r_j)(ℓ_i ⊗ r_j) =
δ_ij (ℓ_i ⊗ r_j). Therefore

    R = Σ_{i=1}^{3} Σ_{j=1}^{3} λ_i λ_j^{-1} δ_ij (ℓ_i ⊗ r_j) = Σ_{i=1}^{3} λ_i λ_i^{-1} (ℓ_i ⊗ r_i) = Σ_{i=1}^{3} (ℓ_i ⊗ r_i).    (ii)
Example 2.23: Determine the rank and the null space of the linear transformation C = a ⊗ b where a ≠ o, b ≠ o.
Solution: Recall that the rank of any linear transformation A is the dimension of its range. (The range of A
is the particular subspace of E^3 comprised of all vectors Ax as x takes all values in E^3.) Since Cx = (b · x)a,
the vector Cx is parallel to the vector a for every choice of the vector x. Thus the range of C is the set of
vectors parallel to a and its dimension is one. The linear transformation C therefore has rank one.
Recall that the null space of any linear transformation A is the particular subspace of E^3 comprised of
the set of all vectors x for which Ax = o. Since Cx = (b · x)a and a ≠ o, the null space of C consists of all
vectors x for which b · x = 0, i.e. the set of all vectors normal to b.
Example 2.24: Let λ_1 ≤ λ_2 ≤ λ_3 be the eigenvalues of the symmetric linear transformation S. Show that S
can be expressed in the form

    S = (I + a ⊗ b)(I + b ⊗ a),   a ≠ o, b ≠ o,    (i)

if and only if

    0 ≤ λ_1 ≤ 1,   λ_2 = 1,   λ_3 ≥ 1.    (ii)
Example 2.25: Calculate the square roots of the identity tensor.
Solution: The identity is certainly a symmetric positive definite tensor. By the result of a previous example
on the square root of a symmetric positive definite tensor, it follows that there is a unique symmetric positive
definite tensor which is the square root of I. Obviously, this square root is also I. However, there are other
square roots of I that are not symmetric positive definite. We are to explore them here: thus we wish to
determine a tensor A on E^3 such that A^2 = I, A ≠ I and A ≠ −I.
First, if Ax = x for every vector x ∈ E^3, then, by definition, A = I. Since we are given that A ≠ I,
there must exist at least one non-null vector x for which Ax ≠ x; call this vector f_1, so that Af_1 ≠ f_1. Set

    e_1 = (A − I) f_1;    (i)

since Af_1 ≠ f_1, it follows that e_1 ≠ o. Observe that

    (A + I) e_1 = (A + I)(A − I) f_1 = (A^2 − I) f_1 = O f_1 = o.    (ii)

Therefore

    Ae_1 = −e_1    (iii)

and so −1 is an eigenvalue of A with corresponding eigenvector e_1. Without loss of generality we can assume
that |e_1| = 1.
Second, the fact that A ≠ −I, together with A^2 = I, similarly implies that there must exist a unit vector
e_2 for which

    Ae_2 = e_2,    (iv)

from which we conclude that +1 is an eigenvalue of A with corresponding eigenvector e_2.
Third, one can show that {e_1, e_2} is a linearly independent pair of vectors. To see this, suppose that for
some scalars ξ_1, ξ_2 one has

    ξ_1 e_1 + ξ_2 e_2 = o.

Operating on this by A yields ξ_1 Ae_1 + ξ_2 Ae_2 = o, which on using (iii) and (iv) leads to

    −ξ_1 e_1 + ξ_2 e_2 = o.

Subtracting and adding the preceding two equations shows that ξ_1 e_1 = ξ_2 e_2 = o. Since e_1 and e_2 are
eigenvectors, neither of them is the null vector o, and therefore ξ_1 = ξ_2 = 0. Therefore e_1 and e_2 are linearly
independent.
Fourth, let e_3 be a unit vector that is perpendicular to both e_1 and e_2. The triplet of vectors {e_1, e_2, e_3}
is linearly independent and therefore forms a basis for E^3.
Fifth, the components A_ij of the tensor A in the basis {e_1, e_2, e_3} are given, as usual, by

    Ae_j = A_ij e_i.    (v)

Comparing (v) with (iii) yields A_11 = −1, A_21 = A_31 = 0, and similarly comparing (v) with (iv) yields
A_22 = 1, A_12 = A_32 = 0. The matrix of components of A in this basis is therefore

    [A] = | −1   0   A_13 |
          |  0   1   A_23 |    (vi)
          |  0   0   A_33 |.

It follows that

    [A^2] = [A]^2 = [A][A] = | 1   0   −A_13 + A_13 A_33 |
                             | 0   1    A_23 + A_23 A_33 |    (vii)
                             | 0   0    A_33^2           |.

(Notation: [A^2] is the matrix of components of A^2 while [A]^2 is the square of the matrix of components of
A. Why is [A^2] = [A]^2?) However, since A^2 = I, the matrix of components of A^2 in any basis has to be the
identity matrix. Therefore we must have

    −A_13 + A_13 A_33 = 0,   A_23 + A_23 A_33 = 0,   A_33^2 = 1,    (viii)

which implies that

    either   A_13 arbitrary,  A_23 = 0,  A_33 = 1,
    or       A_13 = 0,  A_23 arbitrary,  A_33 = −1.    (ix)

Consequently the matrix [A] must necessarily have one of the two forms

    | −1   0   α_1 |         | −1   0   0   |
    |  0   1   0   |   or    |  0   1   α_2 |    (x)
    |  0   0   1   |         |  0   0   −1  |,

where α_1 and α_2 are arbitrary scalars.
Sixth, set

    p_1 = e_1,   q_1 = −e_1 + (α_1/2) e_3.    (xi)

Then

    p_1 ⊗ q_1 = −e_1 ⊗ e_1 + (α_1/2) e_1 ⊗ e_3,

and therefore

    I + 2 p_1 ⊗ q_1 = (e_1 ⊗ e_1 + e_2 ⊗ e_2 + e_3 ⊗ e_3) − 2 e_1 ⊗ e_1 + α_1 e_1 ⊗ e_3
                    = −e_1 ⊗ e_1 + e_2 ⊗ e_2 + e_3 ⊗ e_3 + α_1 e_1 ⊗ e_3.

Note from this that the components of the tensor I + 2 p_1 ⊗ q_1 are given by (x)_1. Conversely, one can readily
verify that the tensor

    A = I + 2 p_1 ⊗ q_1    (xii)

has the desired properties A^2 = I, A ≠ I, A ≠ −I for any value of the scalar α_1.
Alternatively set

    p_2 = e_2,   q_2 = e_2 + (α_2/2) e_3.    (xiii)

Then

    p_2 ⊗ q_2 = e_2 ⊗ e_2 + (α_2/2) e_2 ⊗ e_3,

and therefore

    −I + 2 p_2 ⊗ q_2 = (−e_1 ⊗ e_1 − e_2 ⊗ e_2 − e_3 ⊗ e_3) + 2 e_2 ⊗ e_2 + α_2 e_2 ⊗ e_3
                     = −e_1 ⊗ e_1 + e_2 ⊗ e_2 − e_3 ⊗ e_3 + α_2 e_2 ⊗ e_3.

Note from this that the components of the tensor −I + 2 p_2 ⊗ q_2 are given by (x)_2. Conversely, one can
readily verify that the tensor

    A = −I + 2 p_2 ⊗ q_2    (xiv)

has the desired properties A^2 = I, A ≠ I, A ≠ −I for any value of the scalar α_2.
Thus the tensors defined in (xii) and (xiv) are both square roots of the identity tensor that are not
symmetric positive definite.
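The family (xii) can be spot-checked numerically for a particular choice of α_1. The sketch below is an aside assuming Python with numpy, taking {e_1, e_2, e_3} as the standard basis.

    import numpy as np

    I = np.eye(3)
    e1, e2, e3 = I                       # the orthonormal basis of Example 2.25
    alpha1 = 2.5                         # any scalar works

    # (xi), (xii): A = I + 2 p1 ⊗ q1 with p1 = e1, q1 = −e1 + (α1/2) e3
    p1 = e1
    q1 = -e1 + 0.5 * alpha1 * e3
    A = I + 2.0 * np.outer(p1, q1)

    assert np.allclose(A @ A, I)                      # A is a square root of I
    assert not np.allclose(A, I) and not np.allclose(A, -I)
    assert not np.allclose(A, A.T)                    # and it is not symmetric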
Chapter 3

Components of Vectors and Tensors. Cartesian Tensors.

Notation:
α ..... scalar
{a} ..... 3 × 1 column matrix
a ..... vector
a_i ..... i-th component of the vector a in some basis; or i-th element of the column matrix {a}
[A] ..... 3 × 3 square matrix
A ..... linear transformation
A_ij ..... i, j component of the linear transformation A in some basis; or i, j element of the square matrix [A]
C_ijkl ..... i, j, k, l component of the 4-tensor C in some basis
T_{i1 i2 .... in} ..... i_1 i_2 .... i_n component of the n-tensor T in some basis.
3.1 Components of a vector in a basis.

Let IE^3 be a three-dimensional Euclidean vector space. A set of three linearly independent
vectors {e_1, e_2, e_3} forms a basis for IE^3 in the sense that an arbitrary vector v can always
be expressed as a linear combination of the three basis vectors; i.e. given any v ∈ IE^3, there
are unique scalars α, β, γ such that

    v = α e_1 + β e_2 + γ e_3.    (3.1)

If each basis vector e_i has unit length, and if each pair of basis vectors e_i, e_j is mutually
orthogonal, we say that {e_1, e_2, e_3} forms an orthonormal basis for IE^3. Thus, for an
orthonormal basis,

    e_i · e_j = δ_ij    (3.2)

where δ_ij is the Kronecker delta. In these notes we shall always restrict attention to
orthonormal bases unless explicitly stated otherwise. If the basis is right-handed, one has in
addition that

    e_i · (e_j × e_k) = e_ijk    (3.3)

where e_ijk is the alternator introduced previously in (1.44).
The components v_i of a vector v in a basis {e_1, e_2, e_3} are defined by

    v_i = v · e_i.    (3.4)

The vector can be expressed in terms of its components and the basis vectors as

    v = v_i e_i.    (3.5)

The components of v may be assembled into a column matrix

    {v} = | v_1 |
          | v_2 |    (3.6)
          | v_3 |.

Figure 3.1: Components {v_1, v_2, v_3} and {v'_1, v'_2, v'_3} of the same vector v in two different bases.
Even though this is obvious from the definition (3.4), it is still important to emphasize
that the components v_i of a vector depend on both the vector v and the choice of basis.
Suppose, for example, that we are given two bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3} as shown in
Figure 3.1. Then the vector v has one set of components v_i in the first basis and a different
set of components v'_i in the second basis:

    v_i = v · e_i,   v'_i = v · e'_i.    (3.7)

Thus the one vector v can be expressed in either of the two equivalent forms

    v = v_i e_i   or   v = v'_i e'_i.    (3.8)

The components v_i and v'_i are related to each other (as we shall discuss later) but in general
v_i ≠ v'_i.
Once a basis {e_1, e_2, e_3} is chosen and fixed, there is a unique vector x associated with
any given column matrix {x} such that the components of x in {e_1, e_2, e_3} are {x}. Thus,
once the basis is fixed, there is a one-to-one correspondence between column matrices and
vectors. It follows, for example, that once the basis is fixed, the vector equation z = x + y
can be written equivalently as

    {z} = {x} + {y}   or   z_i = x_i + y_i    (3.9)

in terms of the components x_i, y_i and z_i in the given basis.
If u_i and v_i are the components of two vectors u and v in a basis, then the scalar-product
u · v can be expressed as

    u · v = u_i v_i;    (3.10)

the vector-product u × v can be expressed as

    u × v = (e_ijk u_j v_k) e_i   or equivalently as   (u × v)_i = e_ijk u_j v_k,    (3.11)

where e_ijk is the alternator introduced previously in (1.44).
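The index formula (3.11) is exactly a contraction of the alternator against the two component columns, which makes it a one-line einsum. The sketch below is an aside assuming Python with numpy.

    import numpy as np

    # the alternator e_ijk: +1 for even permutations of (1,2,3), −1 for odd
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
        eps[i, k, j] = -1.0

    rng = np.random.default_rng(5)
    u, v = rng.standard_normal((2, 3))

    # (3.11): (u × v)_i = e_ijk u_j v_k, summing over the repeated j and k
    w = np.einsum('ijk,j,k->i', eps, u, v)
    assert np.allclose(w, np.cross(u, v))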
3.2 Components of a linear transformation in a basis.

Consider a linear transformation A. Any vector in IE^3 can be expressed as a linear combination
of the basis vectors e_1, e_2 and e_3. In particular this is true of the three vectors Ae_1, Ae_2
and Ae_3. Let A_ij be the i-th component of the vector Ae_j so that

    Ae_j = A_ij e_i.    (3.12)

We can also write

    A_ij = e_i · (Ae_j).    (3.13)

The 9 scalars A_ij are known as the components of the linear transformation A in the
basis {e_1, e_2, e_3}. The components A_ij can be assembled into a square matrix:

    [A] = | A_11   A_12   A_13 |
          | A_21   A_22   A_23 |    (3.14)
          | A_31   A_32   A_33 |.

The linear transformation A can be expressed in terms of its components A_ij and the basis
vectors e_i as

    A = Σ_{j=1}^{3} Σ_{i=1}^{3} A_ij (e_i ⊗ e_j).    (3.15)
The components A_ij of a linear transformation depend on both the linear transformation
A and the choice of basis. Suppose, for example, that we are given two bases {e_1, e_2, e_3}
and {e'_1, e'_2, e'_3}. Then the linear transformation A has one set of components A_ij in the first
basis and a different set of components A'_ij in the second basis:

    A_ij = e_i · (Ae_j),   A'_ij = e'_i · (Ae'_j).    (3.16)

The components A_ij and A'_ij are related to each other (as we shall discuss later) but in
general A_ij ≠ A'_ij.
The components of the linear transformation A = a ⊗ b are

    A_ij = a_i b_j.    (3.17)
Once a basis {e_1, e_2, e_3} is chosen and fixed, there is a unique linear transformation M
associated with any given square matrix [M] such that the components of M in {e_1, e_2, e_3}
are [M]. Thus, once the basis is fixed, there is a one-to-one correspondence between square
matrices and linear transformations. It follows, for example, that the equation y = Ax
relating the linear transformation A and the vectors x and y can be written equivalently as

    {y} = [A]{x}   or   y_i = A_ij x_j    (3.18)

in terms of the components A_ij, x_i and y_i in the given basis. Similarly, if A, B and C are
linear transformations such that C = AB, then their component matrices [A], [B] and [C]
are related by

    [C] = [A][B]   or   C_ij = A_ik B_kj.    (3.19)

The component matrix [I] of the identity linear transformation I in any orthonormal basis
is the unit matrix; its components are therefore given by the Kronecker delta δ_ij. If [A] and
[A^T] are the component matrices of the linear transformations A and A^T, then [A^T] = [A]^T
and (A^T)_ij = A_ji.
As mentioned in Section 2.2, a symmetric linear transformation S has three real eigenvalues
λ_1, λ_2, λ_3 and corresponding orthonormal eigenvectors e_1, e_2, e_3. The eigenvectors are
referred to as the principal directions of S. The particular basis consisting of the eigenvectors
is called a principal basis for S. The component matrix [S] of the symmetric linear
transformation S in its principal basis is

    [S] = | λ_1   0     0   |
          | 0     λ_2   0   |    (3.20)
          | 0     0     λ_3 |.
As a final remark we note that if we are to establish certain results for vectors and linear
transformations, we can, if it is more convenient to do so, pick and fix a basis, and then
work with the components in that basis. If necessary, we can revert back to the vectors and
linear transformations at the end. For example the first example in the previous chapter
asked us to show that a · (b × c) = b · (c × a). In terms of components, the left-hand
side of this reads a · (b × c) = a_i (b × c)_i = a_i e_ijk b_j c_k = e_ijk a_i b_j c_k. Similarly the right-hand
side reads b · (c × a) = b_i (c × a)_i = b_i e_ijk c_j a_k = e_ijk a_k b_i c_j. Since i, j, k are dummy
subscripts in the right-most expression, they can be changed to any other subscript; thus by
changing k → i, i → j and j → k we can write b · (c × a) = e_jki a_i b_j c_k. Finally, recalling
that the sign of e_ijk changes when any two adjacent subscripts are switched, we find that
b · (c × a) = e_jki a_i b_j c_k = −e_jik a_i b_j c_k = e_ijk a_i b_j c_k, where we have first switched the ki and
then the ji in the subscripts of the alternator. The right-most expressions of a · (b × c) and
b · (c × a) are identical and therefore this establishes the desired identity.
3.3 Components in two bases.

Consider a 3-dimensional Euclidean vector space together with two orthonormal bases
{e_1, e_2, e_3} and {e'_1, e'_2, e'_3}. Since {e_1, e_2, e_3} forms a basis, any vector, and therefore in
particular the vectors e'_i, can be represented as a linear combination of the basis vectors
e_1, e_2, e_3. Let Q_ij be the j-th component of the vector e'_i in the basis {e_1, e_2, e_3}:

    e'_i = Q_ij e_j.    (3.21)

By taking the dot-product of (3.21) with e_k, one sees that

    Q_ij = e'_i · e_j,    (3.22)

and so Q_ij is the cosine of the angle between the basis vectors e'_i and e_j. Observe from
(3.21) that Q_ji can also be interpreted as the j-th component of e_i in the basis {e'_1, e'_2, e'_3},
whence we also have

    e_i = Q_ji e'_j.    (3.23)
The 9 numbers Q_ij can be assembled into a square matrix [Q]. This matrix relates the
two bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3}. Since both bases are orthonormal it can be readily
shown that [Q] is an orthogonal matrix. If, in addition, one basis can be rotated into the
other, which means that both bases are right-handed or both are left-handed, then [Q] is a
proper orthogonal matrix and det[Q] = +1; if the two bases are related by a reflection, which
means that one basis is right-handed and the other is left-handed, then [Q] is an improper
orthogonal matrix and det[Q] = −1.
We may now relate the different components of a single vector v in two bases.
Let v_i and v'_i be the i-th components of the same vector v in the two bases {e_1, e_2, e_3} and
{e'_1, e'_2, e'_3}. Then one can show that

    v'_i = Q_ij v_j   or equivalently   {v'} = [Q]{v}.    (3.24)

Since [Q] is orthogonal, one also has the inverse relationships

    v_i = Q_ji v'_j   or equivalently   {v} = [Q]^T {v'}.    (3.25)

In general, the component matrices {v} and {v'} of a vector v in two different bases are
different. A vector whose components in every basis happen to be the same is called an
isotropic vector: {v} = [Q]{v} for all orthogonal matrices [Q]. It is possible to show that
the only isotropic vector is the null vector o.
Similarly, we may relate the different components of a single linear transformation
A in two bases. Let A_ij and A'_ij be the ij-components of the same linear transformation
A in the two bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3}. Then one can show that

    A'_ij = Q_ip Q_jq A_pq   or equivalently   [A'] = [Q][A][Q]^T.    (3.26)

Since [Q] is orthogonal, one also has the inverse relationships

    A_ij = Q_pi Q_qj A'_pq   or equivalently   [A] = [Q]^T [A'][Q].    (3.27)

In general, the component matrices [A] and [A'] of a linear transformation A in two
different bases are different. A linear transformation whose components in every basis happen
to be the same is called an isotropic linear transformation: [A] = [Q][A][Q]^T for all
orthogonal matrices [Q]. It is possible to show that the most general isotropic symmetric
linear transformation is a scalar multiple of the identity, αI, where α is an arbitrary scalar.
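The transformation rules (3.24)-(3.27) and their inverses can be exercised with a random orthogonal matrix. The sketch below is an aside assuming Python with numpy; the QR factorization of a random matrix supplies a suitable [Q].

    import numpy as np

    rng = np.random.default_rng(6)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal [Q]

    v = rng.standard_normal(3)       # components {v} of a vector, first basis
    A = rng.standard_normal((3, 3))  # components [A] of a transformation, same basis

    v_p = Q @ v                      # (3.24): {v'} = [Q]{v}
    A_p = Q @ A @ Q.T                # (3.26): [A'] = [Q][A][Q]^T

    # the inverse relationships (3.25) and (3.27) recover the original components
    assert np.allclose(Q.T @ v_p, v)
    assert np.allclose(Q.T @ A_p @ Q, A)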
3.4 Scalar-valued functions of linear transformations. Determinant, trace, scalar-product and norm.

Let Φ(A; e_1, e_2, e_3) be a scalar-valued function that depends on a linear transformation
A and a (not necessarily orthonormal) basis {e_1, e_2, e_3}. For example Φ(A; e_1, e_2, e_3) =
Ae_1 · e_1. Certain such functions are in fact independent of the basis, so that for every two
(not necessarily orthonormal) bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3} one has Φ(A; e_1, e_2, e_3) =
Φ(A; e'_1, e'_2, e'_3), and in such a case we can simply write Φ(A). One example of such a
function is

    Φ(A; e_1, e_2, e_3) = [(Ae_1 × Ae_2) · Ae_3] / [(e_1 × e_2) · e_3]    (3.28)

(though it is certainly not obvious that this function is independent of the choice of basis).
Equivalently, let A be a linear transformation and let [A] be the components of A in some
basis {e_1, e_2, e_3}. Let φ([A]) be some real-valued function defined on the set of all square
matrices. If [A'] are the components of A in some other basis {e'_1, e'_2, e'_3}, then in general
φ([A]) ≠ φ([A']). This means that the function φ depends on both the linear transformation A
and the underlying basis. Certain functions φ have the property that φ([A]) = φ([A']) for
all pairs of bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3}, and such a function therefore depends on the
linear transformation only and not the basis. For such a function we may write φ(A).
We first consider two important examples here. Since the components [A] and [A'] of
a linear transformation A in two bases are related by [A'] = [Q][A][Q]^T, if we take the
determinant of this matrix equation we get

    det[A'] = det([Q][A][Q]^T) = det[Q] det[A] det[Q]^T = (det[Q])^2 det[A] = det[A],    (3.29)

since the determinant of an orthogonal matrix is ±1. Therefore without ambiguity we may
define the determinant of a linear transformation A to be the (basis independent) scalar-valued
function given by

    det A = det[A].    (3.30)

We will see in an example at the end of this chapter that the particular function Φ defined
in (3.28) is in fact the determinant det A.
Similarly, we may define the trace of a linear transformation A to be the (basis independent)
scalar-valued function given by

    trace A = tr[A].    (3.31)

In terms of its components in a basis one has

    det A = e_ijk A_1i A_2j A_3k = e_ijk A_i1 A_j2 A_k3,   trace A = A_ii;    (3.32)

see (1.46). It is useful to note the following properties of the determinant of a linear
transformation:

    det(AB) = det(A) det(B),   det(αA) = α^3 det(A),   det(A^T) = det(A).    (3.33)

As mentioned previously, a linear transformation A is said to be nonsingular if the only
vector x for which Ax = o is the null vector x = o. Equivalently, one can show that A is
nonsingular if and only if

    det A ≠ 0.    (3.34)

If A is nonsingular, then

    det(A^{-1}) = 1/det(A).    (3.35)
Suppose that λ and v ≠ o are an eigenvalue and eigenvector of a given linear transformation
A. Then by definition, Av = λv, or equivalently (A − λI)v = o. Since v ≠ o it
follows that A − λI must be singular and so

    det(A − λI) = 0.    (3.36)

The eigenvalues are the roots λ of this cubic equation. The eigenvalues and eigenvectors of a
linear transformation do not depend on any choice of basis. Thus the eigenvalues of a linear
transformation are also scalar-valued functions of A whose values depend only on A and
not the basis: λ_i = λ_i(A). If S is symmetric, its matrix of components in a principal basis
is

    [S] = | λ_1   0     0   |
          | 0     λ_2   0   |    (3.37)
          | 0     0     λ_3 |.
The particular scalar-valued functions

    I_1(A) = tr A,
    I_2(A) = (1/2)[(tr A)^2 − tr(A^2)],    (3.38)
    I_3(A) = det A,

will appear frequently in what follows. It can be readily verified that for any linear transformation
A and all orthogonal linear transformations Q,

    I_1(Q^T AQ) = I_1(A),   I_2(Q^T AQ) = I_2(A),   I_3(Q^T AQ) = I_3(A),    (3.39)

and for this reason the three functions (3.38) are said to be invariant under orthogonal
transformations. Observe from (3.37) that for a symmetric linear transformation S with eigenvalues
λ_1, λ_2, λ_3,

    I_1(S) = λ_1 + λ_2 + λ_3,
    I_2(S) = λ_1 λ_2 + λ_2 λ_3 + λ_3 λ_1,    (3.40)
    I_3(S) = λ_1 λ_2 λ_3.

The mapping (3.40) between invariants and eigenvalues is one-to-one. In addition one can
show that for any linear transformation A and any real number α,

    det(A − αI) = −α^3 + I_1(A) α^2 − I_2(A) α + I_3(A).

Note in particular that the cubic equation for the eigenvalues of a linear transformation can
be written as

    λ^3 − I_1(A) λ^2 + I_2(A) λ − I_3(A) = 0.

Finally, one can show that

    A^3 − I_1(A) A^2 + I_2(A) A − I_3(A) I = O,    (3.41)

which is known as the Cayley-Hamilton theorem.
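A quick numerical check of (3.38) and (3.41) is instructive: any 3 × 3 component matrix satisfies its own characteristic equation. The sketch below is an aside assuming Python with numpy.

    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.standard_normal((3, 3))
    I = np.eye(3)

    # the principal invariants (3.38)
    I1 = np.trace(A)
    I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
    I3 = np.linalg.det(A)

    # Cayley-Hamilton (3.41): A^3 − I1 A^2 + I2 A − I3 I = O
    CH = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * I
    assert np.allclose(CH, np.zeros((3, 3)))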
One can similarly define scalar-valued functions of two linear transformations A and B.
The particular function φ(A, B) defined by

    φ(A, B) = tr(AB^T)    (3.42)

will play an important role in what follows. Note that in terms of components in a basis,

    φ(A, B) = tr(AB^T) = A_ij B_ij.    (3.43)

This particular scalar-valued function is often known as the scalar-product of the two
linear transformations A and B and is written as A · B:

    A · B = tr(AB^T).    (3.44)

It is natural then to define the magnitude (or norm) of a linear transformation A, denoted
by |A|, as

    |A| = √(A · A) = √(tr(AA^T)).    (3.45)

Note that in terms of components in a basis,

    |A|^2 = A_ij A_ij.    (3.46)

Observe the useful property that if |A| → 0, then each component

    A_ij → 0.    (3.47)

This will be used later when we linearize the theory of large deformations.
3.5 Cartesian Tensors

Consider two orthonormal bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3}. A quantity whose components
v_i and v'_i in these two bases are related by

    v'_i = Q_ij v_j    (3.48)

is called a 1st-order Cartesian tensor or a 1-tensor. It follows from our preceding discussion
that a vector is a 1-tensor.
A quantity whose components A_ij and A'_ij in two bases are related by

    A'_ij = Q_ip Q_jq A_pq    (3.49)

is called a 2nd-order Cartesian tensor or a 2-tensor. It follows from our preceding discussion
that a linear transformation is a 2-tensor.
The concept of an n-th order tensor can be introduced similarly: let T be a physical entity
which, in a given basis {e_1, e_2, e_3}, is defined completely by a set of 3^n ordered numbers
T_{i1 i2 .... in}. The numbers T_{i1 i2 .... in} are called the components of T in the basis {e_1, e_2, e_3}. If,
for example, T is a scalar, vector or linear transformation, it is represented by 3^0, 3^1 and 3^2
components respectively in the given basis. Let {e'_1, e'_2, e'_3} be a second basis related to the
first one by the orthogonal matrix [Q], and let T'_{i1 i2 .... in} be the components of the entity T
in the second basis. Then, if for every pair of such bases, these two sets of components are
related by

    T'_{i1 i2 .... in} = Q_{i1 j1} Q_{i2 j2} .... Q_{in jn} T_{j1 j2 .... jn},    (3.50)

the entity T is called an n-th order Cartesian tensor, or more simply an n-tensor.
Note that the components of a tensor in an arbitrary basis can be calculated if its components
in any one basis are known.
Two tensors of the same order are added by adding corresponding components.
Recall that the outer-product of two vectors a and b is the 2-tensor C = a ⊗ b whose
components are given by C_ij = a_i b_j. This can be generalized to higher-order tensors. Given
an n-tensor A and an m-tensor B, their outer-product is the (m + n)-tensor C = A ⊗ B
whose components are given by

    C_{i1 i2 .. in j1 j2 .. jm} = A_{i1 i2 ... in} B_{j1 j2 ... jm}.    (3.51)

Let A be a 2-tensor with components A_ij in some basis. Then “contracting” A over its
subscripts leads to the scalar A_ii. This can be generalized to higher-order tensors. Let A
be an n-tensor with components A_{i1 i2 ... in} in some basis. Then “contracting” A over two of its
subscripts, say the i_j-th and i_k-th subscripts, leads to the (n − 2)-tensor whose components
in this basis are A_{i1 i2 .. i_{j−1} p i_{j+1} ... i_{k−1} p i_{k+1} .... in}. Contracting over two subscripts involves
setting those two subscripts equal, and therefore summing over them.
Let a, b and T be entities whose components in a basis are denoted by a_i, b_i and T_ij.
Suppose that the components of T in some basis are related to the components of a and b
in that same basis by a_i = T_ij b_j. If a and b are 1-tensors, then one can readily show that
T is necessarily a 2-tensor. This is called the quotient rule since it has the appearance
of saying that the quotient of two 1-tensors is a 2-tensor. This rule generalizes naturally to
tensors of more general order. Suppose that A, B and T are entities whose components in a
basis are related by

    A_{i1 i2 .. in} = T_{k1 k2 ... kp} B_{j1 j2 ... jm},    (3.52)

where some of the subscripts may be repeated. If it is known that A and B are tensors, then
T is necessarily a tensor as well.
In general, the components of a tensor T in two different bases are different: T'_{i1 i2 ... in} ≠
T_{i1 i2 ... in}. However, there are certain special tensors whose components in one basis are the
same as those in any other basis; an example of this is the identity 2-tensor I. Such a tensor
is said to be isotropic. In general, a tensor T is said to be an isotropic tensor if its
components have the same values in all bases, i.e. if

    T'_{i1 i2 ... in} = T_{i1 i2 ... in}    (3.53)

in all bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3}. Equivalently, for an isotropic tensor,

    T_{i1 i2 .... in} = Q_{i1 j1} Q_{i2 j2} .... Q_{in jn} T_{j1 j2 .... jn}   for all orthogonal matrices [Q].    (3.54)

One can show that (a) the only isotropic 1-tensor is the null vector o; (b) the most general
isotropic 2-tensor is a scalar multiple of the identity linear transformation, αI; (c) the most
general isotropic 3-tensor is the null 3-tensor; (d) and the most general isotropic 4-tensor
C has components (in any basis)

    C_ijkl = α δ_ij δ_kl + β δ_ik δ_jl + γ δ_il δ_jk    (3.55)

where α, β, γ are arbitrary scalars.
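The isotropy of (3.55) can be confirmed by applying the transformation rule (3.50) for a 4-tensor with a random orthogonal matrix and checking that the components are unchanged. The sketch below is an aside assuming Python with numpy.

    import numpy as np

    rng = np.random.default_rng(8)
    alpha, beta, gamma = rng.standard_normal(3)
    d = np.eye(3)                        # the Kronecker delta as a matrix

    # (3.55): C_ijkl = α δ_ij δ_kl + β δ_ik δ_jl + γ δ_il δ_jk
    C = (alpha * np.einsum('ij,kl->ijkl', d, d)
         + beta * np.einsum('ik,jl->ijkl', d, d)
         + gamma * np.einsum('il,jk->ijkl', d, d))

    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal [Q]

    # the 4-tensor transformation rule (3.50) leaves C unchanged
    C_p = np.einsum('ia,jb,kc,ld,abcd->ijkl', Q, Q, Q, Q, C)
    assert np.allclose(C_p, C)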
3.6 Worked Examples.

In some of the examples below, we are asked to establish certain results for vectors and linear
transformations. As noted previously, whenever it is more convenient we may pick and fix a
basis, and then work using components in that basis. If necessary, we can revert back to the
vectors and linear transformations at the end. We shall do this frequently in what follows
and will not bother to explain this each time.
It is also worth pointing out that in some of the examples below, calculations involving
vectors and/or linear transformations are carried out without reference to their components.
One might have expected such examples to have been presented in Chapter 2. They are
contained in the present chapter because they all involve either the determinant or trace of a
linear transformation, and we chose to define these quantities in terms of components (even
though they are basis independent).
Example 3.1: Suppose that A is a symmetric linear transformation. Show that its matrix of components [A]
in any basis is a symmetric matrix.
Solution: According to (3.13), the components of A in the basis {e_1, e_2, e_3} are defined by

    A_ji = e_j · Ae_i.    (i)

The property (2.19) of the transpose shows that e_j · Ae_i = A^T e_j · e_i, which, on using the fact that A
is symmetric, further simplifies to e_j · Ae_i = Ae_j · e_i; and finally, since the order of the vectors in a scalar-product
does not matter, we have e_j · Ae_i = e_i · Ae_j. Thus

    A_ji = e_i · Ae_j.    (ii)

By (3.13), the right-most term here is the A_ij component of A, and so (ii) yields

    A_ji = A_ij.    (iii)

Thus [A] = [A]^T and so the matrix [A] is symmetric.
Remark: Conversely, if it is known that the matrix of components [A] of a linear transformation in some
basis is symmetric, then the linear transformation A is also symmetric.
Example 2.5: Choose any convenient basis and calculate the components of the projection linear transformation
Π and the reflection linear transformation R in that basis.
Solution: Let e_3 be a unit vector normal to the plane P and let e_1 and e_2 be any two unit vectors in
P such that {e_1, e_2, e_3} forms an orthonormal basis. From an example in the previous chapter we know
that the projection transformation Π and the reflection transformation R in the plane P can be written as
Π = I − e_3 ⊗ e_3 and R = I − 2(e_3 ⊗ e_3) respectively. Since the components of e_3 in the chosen basis are δ_3i,
we find that

    Π_ij = δ_ij − (e_3)_i (e_3)_j = δ_ij − δ_3i δ_3j,   R_ij = δ_ij − 2 δ_3i δ_3j.
Example 3.2: Consider the scalar-valued function

    f(A, B) = trace(AB^T)    (i)

and show that, for all linear transformations A, B, C, and for all scalars α, this function f has the following
properties:
i) f(A, B) = f(B, A),
ii) f(αA, B) = α f(A, B),
iii) f(A + C, B) = f(A, B) + f(C, B), and
iv) f(A, A) > 0 provided A ≠ 0.
Solution: Let A_ij and B_ij be the components of A and B in an arbitrary basis. In terms of these components,
(AB^T)_ij = A_ik (B^T)_kj = A_ik B_jk and so

    f(A, B) = A_ik B_ik.    (ii)

It is now trivial to verify that all of the above requirements hold.
Remark: It follows from this that the function f has all of the usual requirements of a scalar-product.
Therefore we may define the scalar-product of two linear transformations A and B, denoted by A · B, as

    A · B = trace(AB^T) = A_ij B_ij.    (iii)

Note that, based on this scalar-product, we can define the magnitude of a linear transformation to be

    |A| = √(A · A) = √(A_ij A_ij).    (iv)
Example 3.3: For any two vectors u and v, show that their cross-product u × v is orthogonal to both u and
v.
Solution: We are to show, for example, that u · (u × v) = 0. In terms of their components we can write

    u · (u × v) = u_i (u × v)_i = u_i (e_ijk u_j v_k) = e_ijk u_i u_j v_k.    (i)

Since e_ijk = −e_jik and u_i u_j = u_j u_i, it follows that e_ijk is skew-symmetric in the subscripts ij and u_i u_j is
symmetric in the subscripts ij. Thus it follows from Example 1.3 that e_ijk u_i u_j = 0 and so u · (u × v) = 0.
The orthogonality of v and u × v can be established similarly.
Example 3.4: Suppose that a, b, c are any three linearly independent vectors and that F is an arbitrary
nonsingular linear transformation. Show that

    (Fa × Fb) · Fc = det F (a × b) · c.    (i)

Solution: First consider the left-hand side of (i). On using (3.10) and (3.11), we can express this as

    (Fa × Fb) · Fc = (Fa × Fb)_i (Fc)_i = e_ijk (Fa)_j (Fb)_k (Fc)_i,    (ii)

and consequently

    (Fa × Fb) · Fc = e_ijk (F_jp a_p)(F_kq b_q)(F_ir c_r) = e_ijk F_ir F_jp F_kq a_p b_q c_r.    (iii)

Turning next to the right-hand side of (i), we note that

    det F (a × b) · c = det[F] (a × b)_i c_i = det[F] e_ijk a_j b_k c_i = det[F] e_rpq a_p b_q c_r.    (iv)

Recalling the identity e_rpq det[F] = e_ijk F_ir F_jp F_kq in (1.48) for the determinant of a matrix and substituting
this into (iv) gives

    det F (a × b) · c = e_ijk F_ir F_jp F_kq a_p b_q c_r.    (v)

Since the right-hand sides of (iii) and (v) are identical, it follows that the left-hand sides must also be equal,
thus establishing the desired result.
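The identity (i) is also easy to confirm numerically for random inputs. The sketch below is an aside assuming Python with numpy.

    import numpy as np

    rng = np.random.default_rng(9)
    F = rng.standard_normal((3, 3))
    a, b, c = rng.standard_normal((3, 3))

    lhs = np.dot(np.cross(F @ a, F @ b), F @ c)
    rhs = np.linalg.det(F) * np.dot(np.cross(a, b), c)
    assert np.isclose(lhs, rhs)      # (i): (Fa × Fb) · Fc = det F (a × b) · c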
Example 3.5: Suppose that a, b and c are three non-coplanar vectors in IE^3. Let V_0 be the volume of the
tetrahedron defined by these three vectors. Next, suppose that F is a nonsingular 2-tensor and let V denote
the volume of the tetrahedron defined by the vectors Fa, Fb and Fc. Note that the second tetrahedron is
the image of the first tetrahedron under the transformation F. Derive a formula for V in terms of V_0 and F.

Figure 3.2: Tetrahedron of volume V_0 defined by three non-coplanar vectors a, b and c; and its image
under the linear transformation F.

Solution: Recall from an example in the previous chapter that the volume V_0 of the tetrahedron defined by
any three non-coplanar vectors a, b, c is

    V_0 = (1/6) (a × b) · c.

The volume V of the tetrahedron defined by the three vectors Fa, Fb, Fc is likewise

    V = (1/6) (Fa × Fb) · Fc.

It follows from the result of the previous example that

    V/V_0 = det F,

which describes how volumes are mapped by the transformation F.
Example 3.6: Suppose that a and b are two non-colinear vectors in IE^3. Let α_0 be the area of the parallelogram defined by these two vectors and let n_0 be a unit vector that is normal to the plane of this parallelogram. Next, suppose that F is a nonsingular 2-tensor and let α and n denote the area and unit normal of the parallelogram defined by the vectors Fa and Fb. Derive formulas for α and n in terms of α_0, n_0 and F.

Solution: By the properties of the vector-product we know that

    α_0 = |a × b|,    n_0 = (a × b)/|a × b|;

and similarly that

    α = |Fa × Fb|,    n = (Fa × Fb)/|Fa × Fb|.

[Figure 3.3: Parallelogram of area α_0 with unit normal n_0 defined by two non-colinear vectors a and b; and its image under the linear transformation F.]

Therefore

    α_0 n_0 = a × b,    and    α n = Fa × Fb.    (i)

But

    (Fa × Fb)_s = e_sij (Fa)_i (Fb)_j = e_sij F_ip a_p F_jq b_q.    (ii)

Also recall the identity e_pqr det[F] = e_ijk F_ip F_jq F_kr introduced in (1.48). Multiplying both sides of this identity by F^{-1}_rs leads to

    e_pqr det[F] F^{-1}_rs = e_ijk F_ip F_jq F_kr F^{-1}_rs = e_ijk F_ip F_jq δ_ks = e_ijs F_ip F_jq = e_sij F_ip F_jq.    (iii)

Substituting (iii) into (ii) gives

    (Fa × Fb)_s = det[F] e_pqr F^{-1}_rs a_p b_q = det[F] e_rpq a_p b_q F^{-1}_rs = det F (a × b)_r F^{-T}_sr = det F (F^{-T}(a × b))_s,

and so, using (i),

    α n = α_0 det F (F^{-T} n_0).

This describes how (vectorial) areas are mapped by the transformation F. Taking the norm of this vector equation gives

    α/α_0 = |det F| |F^{-T} n_0|;

and substituting this result into the preceding equation gives

    n = (F^{-T} n_0) / |F^{-T} n_0|.
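The vectorial-area relation α n = α_0 det F (F^{-T} n_0) (Nanson's formula) can likewise be checked numerically; the sketch below is an illustrative addition assuming numpy and generic random data:

    import numpy as np

    rng = np.random.default_rng(2)
    F = rng.standard_normal((3, 3))
    a, b = rng.standard_normal((2, 3))

    alpha0_n0 = np.cross(a, b)           # alpha_0 n_0
    alpha_n = np.cross(F @ a, F @ b)     # alpha n
    J = np.linalg.det(F)
    FinvT = np.linalg.inv(F).T
    # alpha n = alpha_0 det F (F^{-T} n_0)
    assert np.allclose(alpha_n, J * (FinvT @ alpha0_n0))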
Example 3.5: Let {e_1, e_2, e_3} and {e'_1, e'_2, e'_3} be two bases related by nine scalars Q_ij through e'_i = Q_ij e_j. Let Q be the linear transformation whose components in the basis {e_1, e_2, e_3} are Q_ij. Show that

    e'_i = Q^T e_i;

thus Q^T is the transformation that carries the first basis into the second.

Solution: Since Q_ij are the components of the linear transformation Q in the basis {e_1, e_2, e_3}, it follows from the definition of components that

    Q e_j = Q_ij e_i.

Since [Q] is an orthogonal matrix, one readily sees that Q is an orthogonal transformation. Operating on both sides of the preceding equation by Q^T and using the orthogonality of Q leads to

    e_j = Q_ij Q^T e_i.

Multiplying both sides of this by Q_kj and noting by the orthogonality of Q that Q_kj Q_ij = δ_ki, we are now led to

    Q_kj e_j = Q^T e_k,    or equivalently    Q^T e_i = Q_ij e_j.

This, together with the given fact that e'_i = Q_ij e_j, yields the desired result.
Example 3.6: Determine the relationship between the components v_i and v'_i of a vector v in two bases.

Solution: The components v_i of v in the basis {e_1, e_2, e_3} are defined by

    v_i = v · e_i,

and its components v'_i in the second basis {e'_1, e'_2, e'_3} are defined by

    v'_i = v · e'_i.

It follows from this and (NNN) that

    v'_i = v · e'_i = v · (Q_ij e_j) = Q_ij v · e_j = Q_ij v_j.

Thus, the components of the vector v in the two bases are related by

    v'_i = Q_ij v_j.
Example 3.7: Determine the relationship between the components A_ij and A'_ij of a linear transformation A in two bases.

Solution: The components A_ij of the linear transformation A in the basis {e_1, e_2, e_3} are defined by

    A_ij = e_i · (A e_j),    (i)

and its components A'_ij in a second basis {e'_1, e'_2, e'_3} are defined by

    A'_ij = e'_i · (A e'_j).    (ii)

By first making use of (NNN), and then (i), we can write (ii) as

    A'_ij = e'_i · (A e'_j) = Q_ip e_p · (A Q_jq e_q) = Q_ip Q_jq e_p · (A e_q) = Q_ip Q_jq A_pq.    (iii)

Thus, the components of the linear transformation A in the two bases are related by

    A'_ij = Q_ip Q_jq A_pq.    (iv)
Example 3.8: Suppose that the basis {e'_1, e'_2, e'_3} is obtained by rotating the basis {e_1, e_2, e_3} through an angle θ about the unit vector e_3; see Figure 3.4. Write out the transformation rule for 2-tensors explicitly in this case.

[Figure 3.4: A basis {e'_1, e'_2, e'_3} obtained by rotating the basis {e_1, e_2, e_3} through an angle θ about the unit vector e_3.]

Solution: In view of the given relationship between the two bases it follows that

    e'_1 = cos θ e_1 + sin θ e_2,
    e'_2 = −sin θ e_1 + cos θ e_2,
    e'_3 = e_3.

The matrix [Q] which relates the two bases is defined by Q_ij = e'_i · e_j, and so it follows that

    [Q] = |  cos θ   sin θ   0 |
          | −sin θ   cos θ   0 |
          |    0       0     1 |.

Substituting this [Q] into [A'] = [Q][A][Q]^T and multiplying out the matrices leads to the 9 equations

    A'_11 = (A_11 + A_22)/2 + ((A_11 − A_22)/2) cos 2θ + ((A_12 + A_21)/2) sin 2θ,
    A'_12 = (A_12 − A_21)/2 + ((A_12 + A_21)/2) cos 2θ − ((A_11 − A_22)/2) sin 2θ,
    A'_21 = −(A_12 − A_21)/2 + ((A_12 + A_21)/2) cos 2θ − ((A_11 − A_22)/2) sin 2θ,
    A'_22 = (A_11 + A_22)/2 − ((A_11 − A_22)/2) cos 2θ − ((A_12 + A_21)/2) sin 2θ,
    A'_13 = A_13 cos θ + A_23 sin θ,    A'_31 = A_31 cos θ + A_32 sin θ,
    A'_23 = A_23 cos θ − A_13 sin θ,    A'_32 = A_32 cos θ − A_31 sin θ,
    A'_33 = A_33.

In the special case when [A] is symmetric, and in addition A_13 = A_23 = 0, these nine equations simplify to

    A'_11 = (A_11 + A_22)/2 + ((A_11 − A_22)/2) cos 2θ + A_12 sin 2θ,
    A'_22 = (A_11 + A_22)/2 − ((A_11 − A_22)/2) cos 2θ − A_12 sin 2θ,
    A'_12 = A_12 cos 2θ − ((A_11 − A_22)/2) sin 2θ,

together with A'_13 = A'_23 = 0 and A'_33 = A_33. These are the well-known equations underlying Mohr's circle for transforming 2-tensors in two dimensions.
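These relations are straightforward to confirm numerically. The sketch below (an illustrative addition assuming numpy) builds [Q] for a rotation about e_3 and spot-checks two of the nine equations against the matrix product [Q][A][Q]^T:

    import numpy as np

    theta = 0.3
    c, s = np.cos(theta), np.sin(theta)
    Q = np.array([[ c,  s, 0.],
                  [-s,  c, 0.],
                  [0., 0., 1.]])           # Q_ij = e'_i . e_j for a rotation about e_3

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))
    Ap = Q @ A @ Q.T                       # [A'] = [Q][A][Q]^T

    # spot-check the A'_11 and A'_12 equations above
    A11p = (A[0,0]+A[1,1])/2 + (A[0,0]-A[1,1])/2*np.cos(2*theta) + (A[0,1]+A[1,0])/2*np.sin(2*theta)
    A12p = (A[0,1]-A[1,0])/2 + (A[0,1]+A[1,0])/2*np.cos(2*theta) - (A[0,0]-A[1,1])/2*np.sin(2*theta)
    assert np.isclose(Ap[0, 0], A11p) and np.isclose(Ap[0, 1], A12p)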
Example 3.9:

a. Let a, b and T be entities whose components in some arbitrary basis are a_i, b_i and T_ijk. The components of T in any basis are defined in terms of the components of a and b in that basis by

    T_ijk = a_i b_j b_k.    (i)

If a and b are vectors, show that T is a 3-tensor.

b. Suppose that A and B are 2-tensors and that their components in some basis are related by

    A_ij = C_ijkl B_kl.    (ii)

Show that the C_ijkl's are the components of a 4-tensor.

Solution:

a. Let a_i, b_i and a'_i, b'_i be the components of a and b in two arbitrary bases. We are told that the components of the entity T in these two bases are defined by

    T_ijk = a_i b_j b_k,    T'_ijk = a'_i b'_j b'_k.    (iii)

Since a and b are known to be vectors, their components transform according to the 1-tensor transformation rule

    a'_i = Q_ij a_j,    b'_i = Q_ij b_j.    (iv)

Combining equations (iii) and (iv) gives

    T'_ijk = a'_i b'_j b'_k = Q_ip a_p Q_jq b_q Q_kr b_r = Q_ip Q_jq Q_kr a_p b_q b_r = Q_ip Q_jq Q_kr T_pqr.    (v)

Therefore the components of T in two bases transform according to T'_ijk = Q_ip Q_jq Q_kr T_pqr, and so T is a 3-tensor.

b. Let A_ij, B_ij, C_ijkl and A'_ij, B'_ij, C'_ijkl be the components of A, B, C in two arbitrary bases:

    A_ij = C_ijkl B_kl,    A'_ij = C'_ijkl B'_kl.    (vi)

We are told that A and B are 2-tensors, whence

    A'_ij = Q_ip Q_jq A_pq,    B'_ij = Q_ip Q_jq B_pq,    (vii)

and we must show that C is a 4-tensor, i.e. that C'_ijkl = Q_ip Q_jq Q_kr Q_ls C_pqrs. Substituting (vii) into (vi)_2 gives

    Q_ip Q_jq A_pq = C'_ijkl Q_kp Q_lq B_pq.    (viii)

Multiplying both sides by Q_im Q_jn and using the orthogonality of [Q], i.e. the fact that Q_ip Q_im = δ_pm, leads to

    δ_pm δ_qn A_pq = C'_ijkl Q_im Q_jn Q_kp Q_lq B_pq,    (ix)

which by the substitution rule tells us that

    A_mn = C'_ijkl Q_im Q_jn Q_kp Q_lq B_pq,    (x)

or, on using (vi)_1 in this, that

    C_mnpq B_pq = C'_ijkl Q_im Q_jn Q_kp Q_lq B_pq.    (xi)

Since this holds for all matrices [B] we must have

    C_mnpq = C'_ijkl Q_im Q_jn Q_kp Q_lq.    (xii)

Finally, multiplying both sides by Q_am Q_bn Q_cp Q_dq, using the orthogonality of [Q] and the substitution rule yields the desired result

    Q_am Q_bn Q_cp Q_dq C_mnpq = C'_abcd.    (xiii)
Example 3.10: Verify that the alternator e_ijk has the property that

    e_ijk = Q_ip Q_jq Q_kr e_pqr    for all proper orthogonal matrices [Q],    (i)

but that, more generally,

    e_ijk ≠ Q_ip Q_jq Q_kr e_pqr    for all orthogonal matrices [Q]    (ii)

(equality fails when [Q] is improper). Note from this that the alternator is not an isotropic 3-tensor.
Example 3.11: If C_ijkl is an isotropic 4-tensor, show that necessarily C_iikl = α δ_kl for some scalar α.

Solution: Since C_ijkl is an isotropic 4-tensor, by definition,

    C_ijkl = Q_ip Q_jq Q_kr Q_ls C_pqrs

for all orthogonal matrices [Q]. On setting i = j in this; then using the orthogonality of [Q]; and finally using the substitution rule, we are led to

    C_iikl = Q_ip Q_iq Q_kr Q_ls C_pqrs = δ_pq Q_kr Q_ls C_pqrs = Q_kr Q_ls C_pprs.

Thus C_iikl obeys

    C_iikl = Q_kr Q_ls C_pprs    for all orthogonal matrices [Q],

and therefore it is an isotropic 2-tensor. The desired result now follows since the most general isotropic 2-tensor is a scalar multiple of the identity.
Example 3.12: Show that the most general isotropic vector is the null vector o.

Solution: In order to show this we must determine the most general vector u which is such that

    u_i = Q_ij u_j    for all orthogonal matrices [Q].    (i)

Since (i) is to hold for all orthogonal matrices [Q], it must necessarily hold for the special choice [Q] = −[I]. Then Q_ij = −δ_ij, and so (i) reduces to

    u_i = −δ_ij u_j = −u_i;    (ii)

thus u_i = 0 and so u = o.

Conversely, u = o obviously satisfies (i) for all orthogonal matrices [Q]. Thus u = o is the most general isotropic vector.
Example 3.13: Show that the most general isotropic symmetric tensor is a scalar multiple of the identity.

Solution: We must find the most general symmetric 2-tensor A whose components in every basis are the same; i.e.

    [A] = [Q][A][Q]^T    for all orthogonal matrices [Q].    (i)

First, since A is symmetric, we know that there is some basis in which [A] is diagonal. Since A is also isotropic, it follows that [A] must therefore be diagonal in every basis. Thus [A] has the form

    [A] = | λ_1   0    0  |
          |  0   λ_2   0  |
          |  0    0   λ_3 |    (ii)

in any basis. Thus (i) takes the form

    [A] = [Q][A][Q]^T    with [A] as in (ii)    (iii)

for all orthogonal matrices [Q]. Thus (iii) must necessarily hold for the special choice

    [Q] = | 0  0  1 |
          | 1  0  0 |
          | 0  1  0 |,    (iv)

in which case (iii) reduces to

    | λ_1   0    0  |     | λ_3   0    0  |
    |  0   λ_2   0  |  =  |  0   λ_1   0  |.    (v)
    |  0    0   λ_3 |     |  0    0   λ_2 |

Therefore λ_1 = λ_2. A permutation of this special choice of [Q] similarly shows that λ_2 = λ_3. Thus λ_1 = λ_2 = λ_3 = α, say. Therefore [A] necessarily must have the form [A] = α[I].

Conversely, by direct substitution, [A] = α[I] is readily shown to obey (i) for any orthogonal matrix [Q]. This establishes the result.
Example 3.14: If W is a skew-symmetric tensor, show that there is a vector w such that Wx = w × x for all x ∈ IE.

Solution: Let W_ij be the components of W in some basis and let w be the vector whose components in this basis are defined by

    w_i = −(1/2) e_ijk W_jk.    (i)

Then we merely have to show that w has the desired property stated above.

Multiplying both sides of the preceding equation by e_ipq, then using the identity e_ijk e_ipq = δ_jp δ_kq − δ_jq δ_kp, and finally using the substitution rule gives

    e_ipq w_i = −(1/2)(δ_jp δ_kq − δ_jq δ_kp) W_jk = −(1/2)(W_pq − W_qp).

Since W is skew-symmetric we have W_ij = −W_ji and thus conclude that

    W_ij = −e_ijk w_k.

Now for any vector x,

    W_ij x_j = −e_ijk w_k x_j = e_ikj w_k x_j = (w × x)_i.

Thus the vector w defined by (i) has the desired property Wx = w × x.
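A short numerical sketch of this construction (an illustrative addition, assuming numpy; the alternator is built explicitly):

    import numpy as np

    # the alternator e_ijk
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0

    rng = np.random.default_rng(4)
    M = rng.standard_normal((3, 3))
    W = (M - M.T) / 2                             # a skew-symmetric tensor
    w = -0.5 * np.einsum('ijk,jk->i', eps, W)     # w_i = -(1/2) e_ijk W_jk

    x = rng.standard_normal(3)
    assert np.allclose(W @ x, np.cross(w, x))     # Wx = w x x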
Example 3.15: Verify that the 4-tensor

    C_ijkl = α δ_ij δ_kl + β δ_ik δ_jl + γ δ_il δ_jk,    (i)

where α, β, γ are scalars, is isotropic. If this isotropic 4-tensor is to possess the symmetry C_ijkl = C_jikl, show that one must have β = γ.

Solution: In order to verify that C_ijkl are the components of an isotropic 4-tensor we have to show that C_ijkl = Q_ip Q_jq Q_kr Q_ls C_pqrs for all orthogonal matrices [Q]. The right-hand side of this can be simplified by using the given form of C_ijkl, the substitution rule, and the orthogonality of [Q] as follows:

    Q_ip Q_jq Q_kr Q_ls C_pqrs = Q_ip Q_jq Q_kr Q_ls (α δ_pq δ_rs + β δ_pr δ_qs + γ δ_ps δ_qr)
        = α Q_iq Q_jq Q_ks Q_ls + β Q_ir Q_js Q_kr Q_ls + γ Q_is Q_jr Q_kr Q_ls
        = α (Q_iq Q_jq)(Q_ks Q_ls) + β (Q_ir Q_kr)(Q_js Q_ls) + γ (Q_is Q_ls)(Q_jr Q_kr)
        = α δ_ij δ_kl + β δ_ik δ_jl + γ δ_il δ_jk = C_ijkl.    (ii)

This establishes the desired result.

Turning to the second question, enforcing the requirement C_ijkl = C_jikl on (i) leads, after some simplification, to

    (β − γ)(δ_ik δ_jl − δ_jk δ_il) = 0.    (iii)

Since this must hold for all values of the free indices i, j, k, l, it must necessarily hold for the special choice i = 1, j = 2, k = 1, l = 2. Therefore (β − γ)(δ_11 δ_22 − δ_21 δ_12) = 0 and so

    β = γ.    (iv)

Remark: We have shown that β = γ is necessary if C given in (i) is to have the symmetry C_ijkl = C_jikl. One can readily verify that it is sufficient as well. It is useful for later use to record here that the most general isotropic 4-tensor C with the symmetry property C_ijkl = C_jikl is

    C_ijkl = α δ_ij δ_kl + β (δ_ik δ_jl + δ_il δ_jk),    (v)

where α and β are scalars.

Remark: Observe that C_ijkl given by (v) automatically has the symmetry C_ijkl = C_klij.
Example 3.16: If A is a tensor such that

    Ax · x = 0    for all x,    (i)

show that A is necessarily skew-symmetric.

Solution: By the definition of the transpose and the properties of the scalar product, Ax · x = x · A^T x = A^T x · x. Therefore A has the properties that

    Ax · x = 0,    and    A^T x · x = 0    for all vectors x.

Adding these two equations gives

    Sx · x = 0    where S = A + A^T.

Observe that S is symmetric. Therefore in terms of components in a principal basis of S,

    Sx · x = σ_1 x_1² + σ_2 x_2² + σ_3 x_3² = 0,

where the σ_k's are the eigenvalues of S. Since this must hold for all real numbers x_k, it follows that every eigenvalue must vanish: σ_1 = σ_2 = σ_3 = 0. Therefore S = O, whence

    A = −A^T.

Remark: An important consequence of this is that if A is a tensor with the property that Ax · x = 0 for all x, it does not follow that A = 0 necessarily.
Example 2.18: For any orthogonal linear transformation Q, show that det Q = ±1.

Solution: Recall that for any two linear transformations A and B we have det(AB) = det A det B and det B = det B^T. Since QQ^T = I it now follows that 1 = det I = det(QQ^T) = det Q det Q^T = (det Q)². The desired result now follows.
Example 2.20: If Q is a proper orthogonal linear transformation on the vector space IE^3, show that there exists a vector v such that Qv = v. This vector is known as the axis of Q.

Solution: To show that there is a vector v such that Qv = v, it is sufficient to show that Q has an eigenvalue +1, i.e. that (Q − I)v = o for some nonzero v, or equivalently that det(Q − I) = 0.

Since QQ^T = I we have Q(Q^T − I) = I − Q. On taking the determinant of both sides and using the fact that det(AB) = det A det B we get

    det Q det(Q^T − I) = det(I − Q).    (i)

Recall that det Q = +1 for a proper orthogonal linear transformation, that det A = det A^T, and that det(−A) = (−1)³ det A for a 3-dimensional vector space. Therefore this leads to

    det(Q − I) = −det(Q − I),    (ii)

and the desired result now follows.
Example 2.22: For any linear transformation A, show that det(A − µI) = det(Q^T AQ − µI) for all orthogonal linear transformations Q and all scalars µ.

Solution: This follows readily since

    det(Q^T AQ − µI) = det(Q^T AQ − µ Q^T Q) = det(Q^T (A − µI) Q) = det Q^T det(A − µI) det Q = det(A − µI).

Remark: Observe from this result that the eigenvalues of Q^T AQ coincide with those of A, so that in particular the same is true of their product and their sum: det(Q^T AQ) = det A and trace(Q^T AQ) = trace A.
Example 2.26: Define a scalar-valued function φ(A; e_1, e_2, e_3) for all linear transformations A and all (not necessarily orthonormal) bases {e_1, e_2, e_3} by

    φ(A; e_1, e_2, e_3) = [ Ae_1 · (e_2 × e_3) + e_1 · (Ae_2 × e_3) + e_1 · (e_2 × Ae_3) ] / [ e_1 · (e_2 × e_3) ].

Show that φ(A; e_1, e_2, e_3) is in fact independent of the choice of basis, i.e. show that

    φ(A; e_1, e_2, e_3) = φ(A; e'_1, e'_2, e'_3)

for any two bases {e_1, e_2, e_3} and {e'_1, e'_2, e'_3}. Thus we can simply write φ(A) instead of φ(A; e_1, e_2, e_3); φ(A) is called a scalar invariant of A.

Pick any orthonormal basis and express φ(A) in terms of the components of A in that basis, and hence show that φ(A) = trace A.
Example 3.7: Let F(t) be a one-parameter family of nonsingular 2-tensors that depends smoothly on the parameter t. Calculate

    (d/dt) det F(t).

Solution: From the result of Example 3.NNN we have

    (F(t)a × F(t)b) · F(t)c = det F(t) (a × b) · c.

Differentiating this with respect to t gives

    (Ḟ(t)a × F(t)b) · F(t)c + (F(t)a × Ḟ(t)b) · F(t)c + (F(t)a × F(t)b) · Ḟ(t)c = (d/dt det F(t)) (a × b) · c,

where we have set Ḟ(t) = dF/dt. We can write this as

    (ḞF^{-1}Fa × Fb) · Fc + (Fa × ḞF^{-1}Fb) · Fc + (Fa × Fb) · ḞF^{-1}Fc = (d/dt det F) (a × b) · c.

In view of the result of Example 3.NNN, this can be written as

    trace(ḞF^{-1}) (Fa × Fb) · Fc = (d/dt det F) (a × b) · c,

and now using the result of Example 3.NNN once more we get

    trace(ḞF^{-1}) det F (a × b) · c = (d/dt det F) (a × b) · c,

or

    d/dt det F = trace(ḞF^{-1}) det F.
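This formula for the derivative of a determinant can be checked by finite differences. The sketch below is an illustrative addition (numpy assumed, with an arbitrarily chosen smooth family F(t)):

    import numpy as np

    def F(t):
        # an arbitrary smooth one-parameter family of (generically nonsingular) tensors
        return np.eye(3) + t * np.array([[0., 1., 2.], [3., 0., 1.], [1., 2., 0.]])

    t, h = 0.2, 1e-6
    Fdot = (F(t + h) - F(t - h)) / (2 * h)                         # central-difference dF/dt
    ddet = (np.linalg.det(F(t + h)) - np.linalg.det(F(t - h))) / (2 * h)
    rhs = np.trace(Fdot @ np.linalg.inv(F(t))) * np.linalg.det(F(t))
    assert np.isclose(ddet, rhs, rtol=1e-5)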
Example 2.30: For any integer N > 0, show that the polynomial

    P_N(A) = c_0 I + c_1 A + c_2 A² + . . . + c_k A^k + . . . + c_N A^N

can be written as a quadratic polynomial in A.

Solution: This follows readily from the Cayley–Hamilton theorem (3.41) as follows. Suppose that A is nonsingular so that I_3(A) = det A ≠ 0. Then (3.41) shows that A³ can be written as a linear combination of I, A and A². Next, multiplying this by A tells us that A⁴ can be written as a linear combination of A, A² and A³, and therefore, on using the result of the previous step, as a linear combination of I, A and A². This process can be continued an arbitrary number of times to see that for any integer k, A^k can be expressed as a linear combination of I, A and A². The result thus follows.
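The reduction argument can be mirrored numerically. In the sketch below (an illustrative addition, assuming numpy), the Cayley–Hamilton relation A³ = I_1 A² − I_2 A + I_3 I is verified and then used once to express A⁴ as a quadratic polynomial in A:

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((3, 3))
    I1 = np.trace(A)
    I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
    I3 = np.linalg.det(A)
    I = np.eye(3)

    # Cayley-Hamilton: A^3 = I1 A^2 - I2 A + I3 I
    assert np.allclose(np.linalg.matrix_power(A, 3), I1 * (A @ A) - I2 * A + I3 * I)

    # one reduction step: A^4 = I1 A^3 - I2 A^2 + I3 A, then substitute for A^3 again
    A4 = (I1**2 - I2) * (A @ A) + (I3 - I1 * I2) * A + I1 * I3 * I
    assert np.allclose(np.linalg.matrix_power(A, 4), A4)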
Example 2.31: For any linear transformation A, show that

    det(A − αI) = −α³ + I_1(A) α² − I_2(A) α + I_3(A)

for all real numbers α, where I_1(A), I_2(A) and I_3(A) are the principal scalar invariants of A:

    I_1(A) = trace A,    I_2(A) = 1/2 [(trace A)² − trace(A²)],    I_3(A) = det A.

Example 2.32: Calculate the principal scalar invariants I_1, I_2 and I_3 of the linear transformation a ⊗ b.
Chapter 4
Characterizing Symmetry: Groups of
Linear Transformations.
Linear transformations are mappings of vector spaces into vector spaces. When an object is mapped using a linear transformation, certain transformations preserve its symmetry while others do not. One way in which to characterize the symmetry of an object is to consider the collection of all linear transformations that preserve its symmetry. The set of such transformations depends on the object: for example, the set of linear transformations that preserve the symmetry of a cube is different from the set of linear transformations that preserve the symmetry of a tetrahedron. In this chapter we touch briefly on the question of characterizing symmetry by linear transformations.

Intuitively a "uniform all-around expansion", i.e. a linear transformation of the form αI that rescales the object by changing its size but not its shape, does not affect symmetry. We are interested in other linear transformations that also preserve symmetry, principally rotations and reflections. In this chapter we shall consider those linear transformations that map the object back into itself. The collection of such transformations has certain important and useful properties.
[Figure 4.1: Mapping a square into itself.]
4.1 An example in two dimensions.

We begin with an illustrative example. Consider a square, ABCD, which lies in a plane normal to the unit vector k, whose center is at the origin o and whose sides are parallel to the orthonormal vectors {i, j}. Consider mappings that carry the square into itself. The vertex A can be placed in one of 4 positions; see Figure 4.1. Once the location of A has been determined, the vertex B can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). And once the locations of A and B have been fixed, there is no further flexibility and the locations of the remaining vertices are fixed. Thus there are a total of 4 × 2 = 8 symmetry-preserving transformations of the square, 4 of which are rotations and 4 of which are reflections.

Consider the 4 rotations. In order to determine them, we (a) identify the axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case there is just 1 axis to consider, viz. k, and we note that 0°, 90°, 180° and 270° rotations about this axis map the square back into itself. Thus the following 4 distinct rotations are symmetry transformations: I, R^{π/2}_k, R^π_k, R^{3π/2}_k, where we are using the notation introduced previously, i.e. R^φ_n is a right-handed rotation through an angle φ about the axis n.

Let G_square denote the set consisting of these 4 symmetry-preserving rotations:

    G_square = {I, R^{π/2}_k, R^π_k, R^{3π/2}_k}.
This collection of linear transformations has two important properties: first, observe that the successive application of any two symmetries yields a third symmetry, i.e. if P_1 and P_2 are in G_square, then so is their product P_1 P_2. For example, R^π_k R^{π/2}_k = R^{3π/2}_k, R^{π/2}_k R^{3π/2}_k = I, R^{3π/2}_k R^{3π/2}_k = R^π_k, etc. Second, observe that if P is any member of G_square, then so is its inverse P^{-1}. For example, (R^π_k)^{-1} = R^π_k, (R^{3π/2}_k)^{-1} = R^{π/2}_k, etc. As we shall see in Section 4.4, these two properties endow the set G_square with a certain special structure.

Next consider the rotation R^{π/2}_k and observe that every element of the set G_square can be represented in the form (R^{π/2}_k)^n for the integer choices n = 0, 1, 2, 3. Therefore we can say that the set G_square is "generated" by the element R^{π/2}_k.

Finally observe that

    G'_square = {I, R^π_k}

is a subset of G_square and that it too has the properties that if P_1, P_2 ∈ G'_square then their product P_1 P_2 is also in G'_square; and if P ∈ G'_square so is its inverse P^{-1}.

We shall generalize all of this in Section 4.4.
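These closure and inverse properties are easy to verify by brute force. The following sketch (an illustrative addition, assuming numpy) represents the four rotations as 3 × 3 matrices and checks both properties, as well as generation by the quarter turn:

    import numpy as np

    def Rk(angle):
        # right-handed rotation about k, as a 3x3 matrix
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

    G = [Rk(n * np.pi / 2) for n in range(4)]   # {I, R^{pi/2}_k, R^pi_k, R^{3pi/2}_k}

    def member(P, group):
        return any(np.allclose(P, Q, atol=1e-12) for Q in group)

    assert all(member(P1 @ P2, G) for P1 in G for P2 in G)        # closure
    assert all(member(np.linalg.inv(P), G) for P in G)            # inverses
    assert all(member(np.linalg.matrix_power(G[1], n), G) for n in range(4))  # generated by R^{pi/2}_k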
4.2 An example in three dimensions.

[Figure 4.2: Mapping a cube into itself.]

Before considering some general theory, it is useful to consider the three-dimensional version of the previous problem. Consider a cube whose center is at the origin o and whose edges are parallel to the orthonormal vectors {i, j, k}, and consider mappings that carry the cube into itself. Consider a vertex A and its three adjacent vertices B, C, D. The vertex A can be placed in one of 8 positions. Once the location of A has been determined, the vertex B can be placed in one of 3 positions. And once the locations of A and B have been fixed, the vertex C can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). Once the vertices A, B and C have been placed, the locations of the remaining vertices are fixed. Thus there are a total of 8 × 3 × 2 = 48 symmetry-preserving transformations of the cube, 24 of which are rotations and 24 of which are reflections.
First, consider the 24 rotations. In order to determine these rotations we again (a) identify all axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case we see that, in addition to the identity transformation I itself, we have the following rotational transformations that preserve symmetry:

1. There are 3 axes that join the center of one face of the cube to the center of the opposite face, which we can take to be i, j, k (which in materials science are called the {100} directions); and 90°, 180° and 270° rotations about each of these axes map the cube back into the cube. Thus the following 3 × 3 = 9 distinct rotations are symmetry transformations:

    R^{π/2}_i, R^π_i, R^{3π/2}_i, R^{π/2}_j, R^π_j, R^{3π/2}_j, R^{π/2}_k, R^π_k, R^{3π/2}_k.

2. There are 4 axes that join one vertex of the cube to the diagonally opposite vertex, which we can take to be i + j + k, i − j + k, i + j − k, i − j − k (which in materials science are called the {111} directions); and 120° and 240° rotations about each of these axes map the cube back into the cube. Thus the following 4 × 2 = 8 distinct rotations are symmetry transformations:

    R^{2π/3}_{i+j+k}, R^{4π/3}_{i+j+k}, R^{2π/3}_{i−j+k}, R^{4π/3}_{i−j+k}, R^{2π/3}_{i+j−k}, R^{4π/3}_{i+j−k}, R^{2π/3}_{i−j−k}, R^{4π/3}_{i−j−k}.

3. Finally, there are 6 axes that join the center of one edge of the cube to the center of the diagonally opposite edge, which we can take to be i + j, i − j, i + k, i − k, j + k, j − k (which in materials science are called the {110} directions); and 180° rotations about each of these axes map the cube back into the cube. Thus the following 6 × 1 = 6 distinct rotations are symmetry transformations:

    R^π_{i+j}, R^π_{i−j}, R^π_{i+k}, R^π_{i−k}, R^π_{j+k}, R^π_{j−k}.
Let G_cube denote the collection of these 24 symmetry-preserving rotations:

    G_cube = { I,
               R^{π/2}_i, R^π_i, R^{3π/2}_i, R^{π/2}_j, R^π_j, R^{3π/2}_j, R^{π/2}_k, R^π_k, R^{3π/2}_k,
               R^{2π/3}_{i+j+k}, R^{4π/3}_{i+j+k}, R^{2π/3}_{i−j+k}, R^{4π/3}_{i−j+k}, R^{2π/3}_{i+j−k}, R^{4π/3}_{i+j−k}, R^{2π/3}_{i−j−k}, R^{4π/3}_{i−j−k},
               R^π_{i+j}, R^π_{i−j}, R^π_{i+k}, R^π_{i−k}, R^π_{j+k}, R^π_{j−k} }.    (4.1)
If one considers rotations and reflections, then there are 48 elements in this set, where the 24 reflections are obtained by multiplying each rotation by −I. (It is important to remark that this just happens to be true for the cube, and is not generally true. In general, if R is a rotational symmetry of an object then −R is, of course, a reflection, but it need not describe a reflectional symmetry of the object; e.g. see the example of the tetrahedron discussed later.)

The collection of linear transformations G_cube has two important properties that one can verify: (i) if P_1 and P_2 ∈ G_cube, then their product P_1 P_2 is also in G_cube, and (ii) if P ∈ G_cube, then so is its inverse P^{-1}.
Next, one can verify that every element of the set G_cube can be represented in the form (R^{π/2}_i)^p (R^{π/2}_j)^q (R^{π/2}_k)^r for integer choices of p, q, r. For example, the rotation R^{2π/3}_{i+j+k} (about a {111} axis) and the rotation R^π_{i+k} (about a {110} axis) can be represented as

    R^{2π/3}_{i+j+k} = (R^{π/2}_k)^{-1} (R^{π/2}_j)^{-1},    R^π_{i+k} = (R^{π/2}_j)^{-1} (R^{π/2}_k)².

(One way in which to verify this is to use the representation of a rotation tensor determined in Example 2.18.) Therefore we can say that the set G_cube is "generated" by the three elements R^{π/2}_i, R^{π/2}_j and R^{π/2}_k.
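One can also confirm by brute force that the three quarter-turns generate exactly 24 rotations. The sketch below (an illustrative addition, assuming numpy; the rotation matrices are built from Rodrigues' formula) simply multiplies the generators together until no new elements appear:

    import numpy as np

    def rot(axis, angle):
        # rotation through `angle` about the unit vector along `axis` (Rodrigues' formula)
        n = np.asarray(axis, float); n /= np.linalg.norm(n)
        K = np.array([[0., -n[2], n[1]], [n[2], 0., -n[0]], [-n[1], n[0], 0.]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    gens = [rot(a, np.pi / 2) for a in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]

    group = [np.eye(3)]
    frontier = list(group)
    while frontier:                                    # multiply until nothing new appears
        new = []
        for P in frontier:
            for g in gens:
                Q = np.round(g @ P)                    # entries are exactly 0 or +/-1 here
                if not any(np.array_equal(Q, R) for R in group):
                    group.append(Q); new.append(Q)
        frontier = new

    assert len(group) == 24                            # the 24 rotations of G_cube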
4.3 Lattices.

A geometric structure of particular interest in solid mechanics is a lattice, and we now make a few observations on the symmetry of lattices. The simplest lattice, a Bravais lattice L{o; ℓ_1, ℓ_2, ℓ_3}, is an infinite set of periodically arranged points in space generated by the translation of a single point o through three linearly independent lattice vectors {ℓ_1, ℓ_2, ℓ_3}:

    L{o; ℓ_1, ℓ_2, ℓ_3} = { x | x = o + Σ_{i=1}^{3} n_i ℓ_i,  n_i ∈ Z }    (4.2)

where Z is the set of integers. Figure 4.3 shows a two-dimensional square lattice and one possible set of lattice vectors ℓ_1, ℓ_2. (It is clear from the figure that different sets of lattice vectors can correspond to the same lattice.)
[Figure 4.3: A two-dimensional square lattice with lattice vectors ℓ_1, ℓ_2.]
It can be shown that a linear transformation P maps a lattice back into itself if and only if

    P ℓ_i = Σ_{j=1}^{3} M_ij ℓ_j    (4.3)

for some 3 × 3 matrix [M] whose elements M_ij are integers and where det[M] = ±1. Given a lattice, let G_lattice be the set of all linear transformations P that map the lattice back into itself. One can show that if P_1, P_2 ∈ G_lattice then their product P_1 P_2 is also in G_lattice; and if P ∈ G_lattice so is its inverse P^{-1}. The set G_lattice is called the symmetry group of the lattice; and the set of rotations in G_lattice is known as the point group of the lattice. For example, the point group of a simple cubic lattice¹ is the set G_cube of 24 rotations given in (4.1).
4.4 Groups of Linear Transformations.

A collection G of nonsingular linear transformations is said to be a group of linear transformations if it possesses the following two properties:

(i) if P_1 ∈ G and P_2 ∈ G then P_1 P_2 ∈ G,
(ii) if P ∈ G then P^{-1} ∈ G.

Note from this that the identity transformation I is necessarily a member of every group G. Clearly the three sets G_square, G_cube and G_lattice encountered in the previous sections are groups. One can show that each of the following sets of linear transformations forms a group:

– the set of all orthogonal linear transformations;
– the set of all proper orthogonal linear transformations;
– the set of all unimodular linear transformations² (i.e. linear transformations with determinant equal to ±1); and
– the set of all proper unimodular linear transformations (i.e. linear transformations with determinant equal to +1).

The generators of a group G are those elements P_1, P_2, . . . , P_n which, when they and their inverses are multiplied among themselves in various combinations, yield all the elements of the group. Generators of the groups G_square and G_cube were given previously.

In general, a collection of linear transformations G' is said to be a subgroup of a group G if

(i) G' ⊂ G, and
(ii) G' is itself a group.
¹ There are seven different types of symmetry that arise in Bravais lattices, viz. triclinic, monoclinic, orthorhombic, tetragonal, cubic, trigonal and hexagonal. Because, for example, a cubic lattice can be body-centered or face-centered, and so on, the number of different types of lattices is greater than seven.

² While the determinant of an orthogonal tensor is ±1, the converse is not necessarily true. There are unimodular tensors, e.g. P = I + αi ⊗ j, that are not orthogonal. Thus the unimodular group is not equivalent to the orthogonal group.
One can readily show that the group of proper orthogonal linear transformations is a subgroup of the group of orthogonal linear transformations, which in turn is a subgroup of the group of unimodular linear transformations. In our first example, G'_square is a subgroup of G_square.
It should be mentioned that the general theory of groups deals with collections of elements (together with certain "rules" including "multiplication") where the elements need not be linear transformations. For example, the set of all integers Z, with "multiplication" defined as the addition of numbers, the identity taken to be zero, and the inverse of x taken to be −x, is a group. Similarly the set of all matrices of the form

    | cosh x   sinh x |
    | sinh x   cosh x |    where −∞ < x < ∞,

with "multiplication" defined as matrix multiplication, the identity being the identity matrix, and the inverse being

    | cosh(−x)   sinh(−x) |
    | sinh(−x)   cosh(−x) |,

can be shown to be a group. However, our discussion in these notes is limited to groups of linear transformations.
4.5 Symmetry of a scalar-valued function of symmetric positive-definite tensors.

When we discuss the constitutive behavior of a material in Volume 2, we will encounter a scalar-valued function ψ(C) defined for all symmetric positive-definite tensors C. (This represents the energy in the material and characterizes its mechanical response.) The symmetry of the material will be characterized by a set G of nonsingular tensors P which has the property that, for each P ∈ G,

    ψ(C) = ψ(P^T C P)    for all symmetric positive-definite C.    (4.4)

It can be readily shown that this set of tensors G is a group. To see this, first let P_1, P_2 ∈ G so that

    ψ(C) = ψ(P_1^T C P_1),    ψ(C) = ψ(P_2^T C P_2),    (4.5)

for all symmetric positive-definite C. Then ψ((P_1 P_2)^T C P_1 P_2) = ψ(P_2^T (P_1^T C P_1) P_2) = ψ(P_1^T C P_1) = ψ(C), where we have used (4.5)_2 and (4.5)_1 in the penultimate and ultimate steps respectively. Thus if P_1 and P_2 are in G, then so is P_1 P_2. Next, suppose that P ∈ G. Since P is nonsingular, the equation S = P^T C P provides a one-to-one relation between symmetric positive-definite tensors C and S. Thus, since (4.4) holds for all symmetric positive-definite C, it also holds for all symmetric positive-definite linear transformations S = P^T C P. Substituting this into (4.4) gives ψ(S) = ψ(P^{-T} S P^{-1}) for all symmetric positive-definite S; and so P^{-1} is also in G. Thus the set G of nonsingular tensors obeying (4.4) is a group; we shall refer to it as the symmetry group of ψ.

Observe from (4.4) that the symmetry group of ψ contains the elements I and −I, and as a consequence, if P ∈ G then −P ∈ G also.

To examine an explicit example, consider the function

    ψ(C) = ψ̂(det C).    (4.6)

It is seen trivially that for this ψ̂, equation (4.4) holds if and only if det P = ±1. Thus the symmetry group of this ψ consists of all unimodular tensors (i.e. tensors with determinant equal to ±1).

As a second example consider the function

    ψ(C) = ψ̂(Cn · n)    (4.7)

where n is a given fixed unit vector. Let Q_n be a rotation about the axis n through an arbitrary angle. Then since n is the axis of Q_n we know that Q_n n = n. Therefore

    ψ(Q_n^T C Q_n) = ψ̂(Q_n^T C Q_n n · n) = ψ̂(C Q_n n · Q_n n) = ψ̂(Cn · n) = ψ(C).    (4.8)

The symmetry group of the function (4.7) therefore contains the set of all rotations about n. (Are there any other tensors in G?)
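A numerical spot-check of (4.8) (an illustrative addition, assuming numpy, with the rotation about n built from Rodrigues' formula):

    import numpy as np

    def rot(axis, angle):
        # rotation through `angle` about the unit vector along `axis` (Rodrigues' formula)
        n = np.asarray(axis, float); n /= np.linalg.norm(n)
        K = np.array([[0., -n[2], n[1]], [n[2], 0., -n[0]], [-n[1], n[0], 0.]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    rng = np.random.default_rng(6)
    M = rng.standard_normal((3, 3))
    C = M.T @ M + 3 * np.eye(3)          # a symmetric positive-definite C
    n = np.array([0., 0., 1.])

    psi = lambda C: C @ n @ n            # psi(C) = Cn . n (any function of Cn . n would do)
    Qn = rot(n, 1.234)                   # a rotation about n
    assert np.isclose(psi(Qn.T @ C @ Qn), psi(C))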
The following result will be useful in Volume 2. Let H be some fixed nonsingular linear transformation, and consider two functions ψ_1(C) and ψ_2(C), each defined for all symmetric positive-definite tensors C. Suppose that ψ_1 and ψ_2 are related by

    ψ_2(C) = ψ_1(H^T C H)    for all symmetric positive-definite tensors C.    (4.9)

If G_1 and G_2 are the symmetry groups of ψ_1 and ψ_2 respectively, then it can be shown that

    G_2 = H G_1 H^{-1}    (4.10)

in the sense that a tensor P ∈ G_1 if and only if the tensor H P H^{-1} ∈ G_2. As a special case of this, if H is a spherical tensor, i.e. if H = αI, then G_1 = G_2.

Next, note that any nonsingular tensor P can be written as the product of a spherical tensor αI and a unimodular tensor T as P = (αI)T, provided that we take α = |det P|^{1/3}, since then det T = ±1. This, together with the special case of the result noted in the preceding paragraph, provides a hint of why we might want to limit attention to unimodular tensors rather than consider all nonsingular tensors in our discussion of symmetry.

This motivates the following slight modification to our original notion of symmetry of a function ψ(C). We characterize the symmetry of ψ by the set G of unimodular tensors P which have the property that, for each P ∈ G,

    ψ(C) = ψ(P^T C P)    for all symmetric positive-definite C.    (4.11)

It can be readily shown that this set of tensors G is also a group, necessarily a subgroup of the unimodular group.

A function ψ is said to be isotropic if its symmetry group G contains all orthogonal tensors. Thus for an isotropic function ψ,

    ψ(C) = ψ(P^T C P)    (4.12)

for all symmetric positive-definite C and all orthogonal P. From a theorem in algebra it follows that an isotropic function ψ depends on C only through its principal scalar invariants defined previously in (3.38), i.e. that there exists a function ψ̂ such that

    ψ(C) = ψ̂(I_1(C), I_2(C), I_3(C))    (4.13)

where

    I_1(C) = trace C,
    I_2(C) = 1/2 [(trace C)² − trace(C²)],
    I_3(C) = det C.    (4.14)
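The invariance underlying (4.13) is easy to check numerically: conjugating C by any orthogonal Q leaves I_1, I_2 and I_3 unchanged. A sketch (an illustrative addition, assuming numpy):

    import numpy as np

    def invariants(C):
        I1 = np.trace(C)
        I2 = 0.5 * (I1**2 - np.trace(C @ C))
        I3 = np.linalg.det(C)
        return np.array([I1, I2, I3])

    rng = np.random.default_rng(7)
    M = rng.standard_normal((3, 3))
    C = M.T @ M + np.eye(3)                            # symmetric positive-definite
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # an orthogonal matrix

    # the principal invariants -- and hence any isotropic psi -- are unchanged by C -> Q^T C Q
    assert np.allclose(invariants(Q.T @ C @ Q), invariants(C))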
As a second example consider "cubic symmetry", where the symmetry group G coincides with the set of 24 rotations G_cube given in (4.1) plus the corresponding reflections obtained by multiplying these rotations by −I. As noted previously, this group is generated by R^{π/2}_i, R^{π/2}_j, R^{π/2}_k and −I, and contains 24 rotations and 24 reflections. Then, according to a theorem in algebra (see pg. 312 of Truesdell and Noll),

    ψ(C) = ψ̂( i_1(C), i_2(C), i_3(C), i_4(C), i_5(C), i_6(C), i_7(C), i_8(C), i_9(C) )    (4.15)
where

    i_1(C) = C_11 + C_22 + C_33,
    i_2(C) = C_22 C_33 + C_33 C_11 + C_11 C_22,
    i_3(C) = C_11 C_22 C_33,
    i_4(C) = C_23² + C_31² + C_12²,
    i_5(C) = C_12² C_23² + C_23² C_31² + C_31² C_12²,
    i_6(C) = C_23 C_31 C_12,
    i_7(C) = C_22 C_12² + C_33 C_31² + C_33 C_23² + C_11 C_12² + C_11 C_31² + C_22 C_23²,
    i_8(C) = C_11 C_31² C_12² + C_22 C_12² C_23² + C_33 C_23² C_31²,
    i_9(C) = C_23² C_22 C_33 + C_31² C_33 C_11 + C_12² C_11 C_22.    (4.16)
If G contains I and all rotations R^φ_n, 0 < φ < 2π, through all angles φ about a fixed axis n, the corresponding symmetry is called transverse isotropy.

If G includes the three elements −R^π_i, −R^π_j, −R^π_k, which represent reflections in the planes normal to i, j and k, the symmetry is called orthotropy.
4.6 Worked Examples.

Example 4.1: Characterize the set H_square of linear transformations that map a square back into a square, including both rotations and reflections.

[Figure 4.4: Mapping a square into itself.]

Solution: We return to the problem described in Section 4.1 and now consider the set H_square of rotations and reflections that map the square back into itself. The set of rotations that do this was determined earlier:

    G_square = {I, R^{π/2}_k, R^π_k, R^{3π/2}_k}.

As the 4 reflectional symmetries we can pick

    H = reflection in the horizontal axis i,
    V = reflection in the vertical axis j,
    D = reflection in the diagonal with positive slope i + j,
    D' = reflection in the diagonal with negative slope −i + j,

and so

    H_square = {I, R^{π/2}_k, R^π_k, R^{3π/2}_k, H, V, D, D'}.    (i)

One can verify that H_square is a group, since it possesses the property that if P_1 and P_2 are two transformations in H_square, then so is their product P_1 P_2; e.g. D' = R^{3π/2}_k H, D = H R^{3π/2}_k, etc. And if P is any member of H_square, then so is its inverse; e.g. H^{-1} = H, etc.
Example 4.2: Find the generators of H_square and all subgroups of H_square.

Solution: All elements of H_square can be represented in the form (R^{π/2}_k)^i H^j for the integer choices i = 0, 1, 2, 3 and j = 0, 1:

    R^π_k = (R^{π/2}_k)²,    R^{3π/2}_k = (R^{π/2}_k)³,    I = (R^{π/2}_k)⁴,
    D' = (R^{π/2}_k)³ H,    V = (R^{π/2}_k)² H,    D = R^{π/2}_k H.

Therefore the group H_square is generated by the two elements H and R^{π/2}_k.

One can verify that the following 8 collections of linear transformations are subgroups of H_square:

    {I, R^{π/2}_k, R^π_k, R^{3π/2}_k},    {I, D, D', R^π_k},    {I, H, V, R^π_k},    {I, R^π_k},
    {I, D},    {I, D'},    {I, H},    {I, V}.

Geometrically, each of these subgroups leaves some aspect of the square invariant. The first leaves the face invariant, the second leaves a diagonal invariant, the third leaves the axis invariant, the fourth leaves an axis and a diagonal invariant, etc. There are no other (proper, nontrivial) subgroups of H_square.
Example 4.3: Characterize the rotational symmetry of a regular tetrahedron.

[Figure 4.5: A regular tetrahedron ABCD, three orthonormal vectors {i, j, k} and a unit vector p. The axis k passes through the vertex A and the centroid of the opposite face BCD, while the unit vector p passes through the center of the edge AD and the center of the opposite edge BC.]

Solution:

1. There are 4 axes like k in the figure that pass through a vertex of the tetrahedron and the centroid of the opposite face; and right-handed rotations of 120° and 240° about each of these axes map the tetrahedron back onto itself. Thus these 4 × 2 = 8 distinct rotations – of the form R^{2π/3}_k, R^{4π/3}_k, etc. – are symmetry transformations of the tetrahedron.

2. There are three axes like p shown in the figure that pass through the midpoints of a pair of opposite edges; and a right-handed rotation through 180° about each of these axes maps the tetrahedron back onto itself. Thus these 3 × 1 = 3 distinct rotations – of the form R^π_p, etc. – are symmetry transformations of the tetrahedron.

The group G_tetrahedron of rotational symmetries of a tetrahedron therefore consists of these 11 rotations plus the identity transformation I.
Example 4.4: Are all symmetry-preserving linear transformations necessarily either rotations or reflections?

Solution: We began this chapter by considering the symmetry of a square, and examining the different ways in which the square could be mapped back into itself. Now consider the example of a two-dimensional a × a square lattice, i.e. the infinite set of points

    L_square = { x | x = n_1 a i + n_2 a j,  n_1, n_2 ∈ Z ≡ integers }    (i)

depicted in Figure 4.6, and examine the different ways in which this lattice can be mapped back into itself.

[Figure 4.6: A two-dimensional a × a square lattice.]

We first note that the rotational and reflectional symmetry transformations of an a × a square are also symmetry transformations for the lattice since they leave the lattice invariant. There are however other transformations, that are neither rotations nor reflections, that also leave the lattice invariant. For example, if for every integer n one rigidly translates the n-th row of the lattice by precisely the amount na in the i direction, one recovers the original lattice. Thus the "shearing" of the lattice described by the linear transformation

    P = I + i ⊗ j    (ii)

is also a symmetry-preserving transformation.
Example 4.5: Show that each of the following sets of linear transformations forms a group: all orthogonal tensors; all proper orthogonal tensors; all unimodular tensors (i.e. tensors with determinant equal to ±1); and all proper unimodular tensors (i.e. tensors with determinant equal to +1).

Example 4.6: Show that the group of proper orthogonal tensors is a subgroup of the group of orthogonal tensors, which in turn is a subgroup of the group of unimodular tensors.
Example 4.7: Suppose that a function ψ(C) is defined for all symmetric positive-definite tensors C and that its symmetry group is the set of all orthogonal tensors. Show that ψ depends on C only through its principal scalar invariants, i.e. show that there is a function ψ̂ such that

    ψ(C) = ψ̂(I_1(C), I_2(C), I_3(C))

where I_i(C), i = 1, 2, 3, are the principal scalar invariants of C defined previously in (3.38).

Solution: We are given that ψ has the property that for all symmetric positive-definite tensors C and all orthogonal tensors Q

    ψ(C) = ψ(Q^T C Q).    (i)

In order to prove the desired result it is sufficient to show that, if C_1 and C_2 are two symmetric tensors whose principal invariants I_i are the same,

    I_1(C_1) = I_1(C_2),    I_2(C_1) = I_2(C_2),    I_3(C_1) = I_3(C_2),    (ii)

then ψ(C_1) = ψ(C_2).

Recall that the mapping (3.40) between principal invariants and eigenvalues is one-to-one. It follows from this and (ii) that the eigenvalues of C_1 and C_2 are the same. Thus we can write

    C_1 = Σ_{i=1}^{3} λ_i e^{(1)}_i ⊗ e^{(1)}_i,    C_2 = Σ_{i=1}^{3} λ_i e^{(2)}_i ⊗ e^{(2)}_i,    (iii)

where the two sets of orthonormal vectors {e^{(1)}_1, e^{(1)}_2, e^{(1)}_3} and {e^{(2)}_1, e^{(2)}_2, e^{(2)}_3} are the respective principal bases of C_1 and C_2. Since each set of basis vectors is orthonormal, there is an orthogonal tensor R that carries {e^{(1)}_1, e^{(1)}_2, e^{(1)}_3} into {e^{(2)}_1, e^{(2)}_2, e^{(2)}_3}:

    R e^{(1)}_i = e^{(2)}_i,    i = 1, 2, 3.    (iv)

Thus

    R^T ( Σ_{i=1}^{3} λ_i e^{(2)}_i ⊗ e^{(2)}_i ) R = Σ_{i=1}^{3} λ_i R^T (e^{(2)}_i ⊗ e^{(2)}_i) R = Σ_{i=1}^{3} λ_i (R^T e^{(2)}_i) ⊗ (R^T e^{(2)}_i) = Σ_{i=1}^{3} λ_i e^{(1)}_i ⊗ e^{(1)}_i,    (v)

and so R^T C_2 R = C_1. Therefore ψ(C_1) = ψ(R^T C_2 R) = ψ(C_2), where in the last step we have used (i). This establishes the desired result.
Example 4.8: Consider a scalar-valued function f(x) that is defined for all vectors x. Let G be the set of all nonsingular linear transformations P that have the property that, for each P ∈ G, one has f(x) = f(Px) for all vectors x.

i) Show that G is a group.

ii) Find the most general form of f if G contains the set of all orthogonal transformations.

Solution:

i) Suppose that P_1 and P_2 are in G, i.e. that

    f(x) = f(P_1 x) for all vectors x,    and    f(x) = f(P_2 x) for all vectors x.    (i)

Then

    f((P_1 P_2) x) = f(P_1 (P_2 x)) = f(P_2 x) = f(x),

where in the penultimate and ultimate steps we have used (i)_1 and (i)_2 respectively.

Next, suppose that P ∈ G so that

    f(x) = f(Px) for all vectors x.

Since P is nonsingular we can set y = Px and obtain

    f(P^{-1} y) = f(y) for all vectors y.

It thus follows that G has the two defining properties of a group.

ii) If x_1 and x_2 are two vectors that have the same length, we will show that f(x_1) = f(x_2), whence f(x) depends on x only through its length |x|, i.e. there exists a function f̂ such that

    f(x) = f̂(|x|) for all vectors x.

If x_1 and x_2 are two vectors that have the same length, there is a rotation tensor R that carries x_2 to x_1: R x_2 = x_1. Therefore

    f(x_1) = f(R x_2) = f(x_2),

where in the last step we have used the fact that G contains the set of all orthogonal transformations, i.e. that f(x) = f(Px) for all vectors x and all orthogonal P. This establishes the result claimed above.
Example 4.9: Consider a scalar-valued function g(C, m ⊗ m) that is defined for all symmetric positive-definite tensors C and all unit vectors m. Let G be the set of all nonsingular linear transformations P that have the property that, for each P ∈ G, one has g(C, n ⊗ n) = g(P^T C P, P^T (n ⊗ n) P) for all symmetric positive-definite tensors C and some particular unit vector n. If G contains the set of all orthogonal transformations, show that there exists a function ĝ such that

    g(C, n ⊗ n) = ĝ( I_1(C), I_2(C), I_3(C), I_4(C, n), I_5(C, n) )

where I_1(C), I_2(C), I_3(C) are the three fundamental scalar invariants of C and

    I_4(C, n) = Cn · n,    I_5(C, n) = C²n · n.

Remark: Observe that with respect to an orthonormal basis {e_1, e_2, e_3} where e_3 = n, one has I_4 = C_33 and I_5 = C_31² + C_32² + C_33².

Solution: We are told that

    g(C, n ⊗ n) = g(Q^T C Q, Q^T (n ⊗ n) Q)    (i)

for all orthogonal Q and all symmetric positive-definite C. As in Example 4.7, it is sufficient to show that if C_1 and C_2 are two symmetric positive-definite linear transformations whose "invariants" I_i, i = 1, 2, 3, 4, 5, are the same, i.e.

    I_1(C_1) = I_1(C_2),    I_2(C_1) = I_2(C_2),    I_3(C_1) = I_3(C_2),    I_4(C_1, n) = I_4(C_2, n),    I_5(C_1, n) = I_5(C_2, n),    (ii)

then g(C_1, n ⊗ n) = g(C_2, n ⊗ n). From (ii)_{1,2,3} and the analysis in Example 4.7 it follows that there is an orthogonal tensor R such that R^T C_2 R = C_1. It is readily seen from this that R^T C_2² R = C_1² as well. It now follows from this, the fact that R is orthogonal, (ii)_{4,5} and the definitions of I_4 and I_5 that

    Rn · Rn = n · n,    C_2 Rn · Rn = C_2 n · n,    C_2² Rn · Rn = C_2² n · n,    (iii)

and this must hold for all symmetric positive-definite C_2. This implies that

    Rn = ±n,    and consequently    R^T n = ±n,

as may be seen, for example, by expressing (iii) in a principal basis of C_2. Consequently

    g(C_1, n ⊗ n) = g(R^T C_2 R, (R^T n) ⊗ (R^T n)) = g(R^T C_2 R, R^T (n ⊗ n) R) = g(C_2, n ⊗ n),

where we have used (i) in the very last step. This establishes the desired result.
REFERENCES

1. M.A. Armstrong, Groups and Symmetry, Springer-Verlag, 1988.
2. G. Birkhoff and S. MacLane, A Survey of Modern Algebra, Macmillan, 1977.
3. C. Truesdell and W. Noll, The Non-Linear Field Theories of Mechanics, in Handbuch der Physik, Volume III/3, edited by S. Flügge, Springer-Verlag, 1965.
4. A.J.M. Spencer, Theory of Invariants, in Continuum Physics, Volume 1, edited by A.C. Eringen, Academic Press, 1971.
Chapter 5

Calculus of Vector and Tensor Fields

Notation:

    α ..... scalar
    {a} ..... 3 × 1 column matrix
    a ..... vector
    a_i ..... i-th component of the vector a in some basis; or i-th element of the column matrix {a}
    [A] ..... 3 × 3 square matrix
    A ..... second-order tensor (2-tensor)
    A_ij ..... i, j component of the 2-tensor A in some basis; or i, j element of the square matrix [A]
    C ..... fourth-order tensor (4-tensor)
    C_ijkl ..... i, j, k, l component of the 4-tensor C in some basis
    T_{i1 i2 ... in} ..... i_1 i_2 . . . i_n component of the n-tensor T in some basis.
5.1 Notation and definitions.

Let R be a bounded region of three-dimensional space whose boundary is denoted by ∂R, and let x denote the position vector of a generic point in R + ∂R. We shall consider scalar and tensor fields such as φ(x), v(x), A(x) and T(x) defined on R + ∂R. The region R + ∂R and these fields will always be assumed to be sufficiently regular so as to permit the calculations carried out below.

While the subject of the calculus of tensor fields can be dealt with directly, we shall take the more limited approach of working with the components of these fields. The components will always be taken with respect to a single fixed orthonormal basis {e_1, e_2, e_3}. Each component of, say, a vector field v(x) or a 2-tensor field A(x) is effectively a scalar-valued function on three-dimensional space, v_i(x_1, x_2, x_3) and A_ij(x_1, x_2, x_3), and we can use the well-known operations of classical calculus on such fields, such as partial differentiation with respect to x_k.

In order to simplify writing, we shall use the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding x-coordinate. Thus, for example, we will write

    φ,i = ∂φ/∂x_i,    φ,ij = ∂²φ/(∂x_i ∂x_j),    v_i,j = ∂v_i/∂x_j,    (5.1)

and so on, where v_i and x_i are the i-th components of the vectors v and x in the basis {e_1, e_2, e_3}.
The gradient of a scalar field φ(x) is a vector field denoted by grad φ (or ∇φ). Its i-th component in the orthonormal basis is

    (grad φ)_i = φ,i,    (5.2)

so that

    grad φ = φ,i e_i.

The gradient of a vector field v(x) is a 2-tensor field denoted by grad v (or ∇v). Its ij-th component in the orthonormal basis is

    (grad v)_ij = v_i,j,    (5.3)

so that

    grad v = v_i,j e_i ⊗ e_j.

The gradient of a scalar field φ in the particular direction of the unit vector n is denoted by ∂φ/∂n and defined by

    ∂φ/∂n = ∇φ · n.    (5.4)

The divergence of a vector field v(x) is a scalar field denoted by div v (or ∇ · v). It is given by

    div v = v_i,i.    (5.5)

The divergence of a 2-tensor field A(x) is a vector field denoted by div A (or ∇ · A). Its i-th component in the orthonormal basis is

    (div A)_i = A_ij,j,    (5.6)

so that

    div A = A_ij,j e_i.

The curl of a vector field v(x) is a vector field denoted by curl v (or ∇ × v). Its i-th component in the orthonormal basis is

    (curl v)_i = e_ijk v_k,j,    (5.7)

so that

    curl v = e_ijk v_k,j e_i.

The Laplacians of a scalar field φ(x), a vector field v(x) and a 2-tensor field A(x) are the scalar, vector and 2-tensor fields with components

    ∇²φ = φ,kk,    (∇²v)_i = v_i,kk,    (∇²A)_ij = A_ij,kk.    (5.8)
5.2 Integral theorems

Let D be an arbitrary regular subregion of the region R. The divergence theorem allows one to relate a surface integral on ∂D to a volume integral on D. In particular, for a scalar field φ(x),

    ∫_∂D φ n dA = ∫_D ∇φ dV,    or    ∫_∂D φ n_k dA = ∫_D φ,k dV.    (5.9)

Likewise for a vector field v(x) one has

    ∫_∂D v · n dA = ∫_D ∇ · v dV,    or    ∫_∂D v_k n_k dA = ∫_D v_k,k dV,    (5.10)

as well as

    ∫_∂D v ⊗ n dA = ∫_D ∇v dV,    or    ∫_∂D v_i n_k dA = ∫_D v_i,k dV.    (5.11)

More generally, for an n-tensor field T(x) the divergence theorem gives

    ∫_∂D T_{i1 i2 ... in} n_k dA = ∫_D ∂(T_{i1 i2 ... in})/∂x_k dV,    (5.12)

where some of the subscripts i_1, i_2, . . . , i_n may be repeated and one of them might equal k.
5.3 Localization

Certain physical principles are described to us in terms of equations that hold on an arbitrary portion of a body, i.e. in terms of an integral over a subregion D of R. It is often useful to derive an equivalent statement of such a principle in terms of equations that must hold at each point x in the body. In what follows, we shall frequently have need to do this, i.e. convert a "global principle" to an equivalent "local field equation".

Consider for example a scalar field φ(x) that is defined and continuous at all x ∈ R + ∂R, and suppose that

    ∫_D φ(x) dV = 0    for all subregions D ⊂ R.    (5.13)

We will show that this "global principle" is equivalent to the "local field equation"

    φ(x) = 0    at every point x ∈ R.    (5.14)

[Figure 5.1: The region R, a subregion D and a neighborhood B_ε(z) of the point z.]

We will prove this by contradiction. Suppose that (5.14) does not hold. This implies that there is a point, say z ∈ R, at which φ(z) ≠ 0. Suppose that φ is positive at this point: φ(z) > 0. Since we are told that φ is continuous, φ is necessarily (strictly) positive in some neighborhood of z as well. Let B_ε(z) be a sphere with its center at z and radius ε > 0. We can always choose ε sufficiently small so that B_ε(z) is a sufficiently small neighborhood of z and

    φ(x) > 0    at all x ∈ B_ε(z).    (5.15)

Now pick a region D which is a subset of B_ε(z). Then φ(x) > 0 for all x ∈ D. Integrating φ over this D gives

    ∫_D φ(x) dV > 0,    (5.16)

thus contradicting (5.13). An entirely analogous calculation can be carried out in the case φ(z) < 0. Thus our starting assumption must be false and (5.14) must hold.
5.4 Worked Examples.

In all of the examples below the region R will be a bounded regular region and its boundary ∂R will be smooth. All fields are defined on this region and are as smooth as is necessary.

In some of the examples below, we are asked to establish certain results for vector and tensor fields. When it is more convenient, we will carry out our calculations by first picking and fixing a basis, and then working with the components in that basis. If necessary, we will revert back to the vector and tensor fields at the end. We shall do this frequently in what follows and will not bother to explain this strategy each time.
Example 5.1: Calculate the gradient of the scalar-valued function φ(x) = Ax · x where A is a constant 2-tensor.

Solution: Writing φ in terms of components,

    φ = A_ij x_i x_j.

Calculating the partial derivative of φ with respect to x_k yields

    φ,k = A_ij (x_i x_j),k = A_ij (x_i,k x_j + x_i x_j,k) = A_ij (δ_ik x_j + x_i δ_jk) = A_kj x_j + A_ik x_i = (A_kj + A_jk) x_j,

or equivalently ∇φ = (A + A^T)x.
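As a numerical sanity check of ∇φ = (A + A^T)x, one can compare a finite-difference gradient with the closed form; the sketch below is an illustrative addition assuming numpy:

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((3, 3))
    x = rng.standard_normal(3)

    phi = lambda x: A @ x @ x                  # phi(x) = Ax . x

    # central-difference gradient vs the closed form (A + A^T) x
    h = 1e-6
    grad_fd = np.array([(phi(x + h*e) - phi(x - h*e)) / (2*h) for e in np.eye(3)])
    assert np.allclose(grad_fd, (A + A.T) @ x, atol=1e-6)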
Example 5.2: Let v(x) be a vector field and let v_i(x_1, x_2, x_3) be the i-th component of v in a fixed orthonormal basis {e_1, e_2, e_3}. For each i and j define

    F_ij = v_i,j.

Show that F_ij are the components of a 2-tensor.

Solution: Since v and x are 1-tensors, their components obey the transformation rules

    v'_i = Q_ik v_k,    v_i = Q_ki v'_k,    and    x'_j = Q_jk x_k,    x_l = Q_jl x'_j.

Therefore

    F'_ij = ∂v'_i/∂x'_j = (∂v'_i/∂x_l)(∂x_l/∂x'_j) = (∂v'_i/∂x_l) Q_jl = (∂(Q_ik v_k)/∂x_l) Q_jl = Q_ik Q_jl ∂v_k/∂x_l = Q_ik Q_jl F_kl,

which is the transformation rule for a 2-tensor.
Example 5.3: Let φ(x), u(x) and A(x) be scalar, vector and 2-tensor fields respectively. Establish the identities

a. div(φu) = u · grad φ + φ div u,
b. grad(φu) = u ⊗ grad φ + φ grad u,
c. div(φA) = A grad φ + φ div A.

Solution:

a. In terms of components we are asked to show that (φu_i),i = u_i φ,i + φ u_i,i. This follows immediately by expanding (φu_i),i using the product rule.

b. In terms of components we are asked to show that (φu_i),j = u_i φ,j + φ u_i,j. Again, this follows immediately by expanding (φu_i),j using the product rule.

c. In terms of components we are asked to show that (φA_ij),j = A_ij φ,j + φ A_ij,j. Again, this follows immediately by expanding (φA_ij),j using the product rule.
Example 5.4: If φ(x) and v(x) are scalar and vector fields respectively, show that

∇×(φv) = φ(∇×v) − v×∇φ. (i)

Solution: Recall that the curl of a vector field u can be expressed as ∇×u = e_ijk u_k,j e_i, where e_i is a fixed basis vector. Thus, evaluating ∇×(φv):

∇×(φv) = e_ijk (φv_k)_,j e_i = e_ijk φ v_k,j e_i + e_ijk φ_,j v_k e_i = φ ∇×v + ∇φ×v, (ii)

from which the desired result follows because a×b = −b×a.
Example 5.5: Let u(x) be a vector field and define a second vector field ξ(x) by ξ(x) = curl u(x). Show that

a. ∇·ξ = 0;

b. (∇u − ∇u^T)a = ξ×a for any vector field a(x); and

c. ξ·ξ = ∇u·∇u − ∇u·∇u^T.

Solution: Recall that in terms of its components, ξ = curl u = ∇×u can be expressed as

ξ_i = e_ijk u_k,j. (i)

a. A direct calculation gives

∇·ξ = ξ_i,i = (e_ijk u_k,j)_,i = e_ijk u_k,ji = 0, (ii)

where in the last step we have used the fact that e_ijk is skew-symmetric in the subscripts i, j, while u_k,ji is symmetric in the subscripts i, j (since the order of partial differentiation can be switched), and therefore their product vanishes.

b. Multiplying both sides of (i) by e_ipq gives

e_ipq ξ_i = e_ipq e_ijk u_k,j = (δ_pj δ_qk − δ_pk δ_qj) u_k,j = u_q,p − u_p,q, (iii)

where we have made use of the identity e_ipq e_ijk = δ_pj δ_qk − δ_pk δ_qj between the alternator and the Kronecker delta introduced in (1.49), as well as the substitution rule. Multiplying both sides of this by a_q and using the fact that e_ipq = −e_piq gives

e_piq ξ_i a_q = (u_p,q − u_q,p) a_q, (iv)

or ξ×a = (∇u − ∇u^T)a.

c. Since (∇u)_ij = u_i,j and the inner product of two 2-tensors is A·B = A_ij B_ij, the right-hand side of the equation we are asked to establish can be written as ∇u·∇u − ∇u·∇u^T = (∇u)_ij (∇u)_ij − (∇u)_ij (∇u)_ji = u_i,j u_i,j − u_i,j u_j,i. The left-hand side, on the other hand, is ξ·ξ = ξ_i ξ_i. Using (i), the aforementioned identity between the alternator and the Kronecker delta, and the substitution rule leads to the desired result as follows:

ξ_i ξ_i = (e_ijk u_k,j)(e_ipq u_q,p) = (δ_jp δ_qk − δ_jq δ_pk) u_k,j u_q,p = u_p,q u_p,q − u_q,p u_p,q. (v)
Example 5.6: Let u(x), E(x) and S(x) be, respectively, a vector and two 2-tensor fields. These fields are related by

E = ½(∇u + ∇u^T),  S = 2µE + λ trace(E) 1, (i)

where λ and µ are constants. Suppose that

u(x) = b x/r³ where r = |x|, x ≠ 0, (ii)

and b is a constant. Use (i)_1 to calculate the field E(x) corresponding to the field u(x) given in (ii), and then use (i)_2 to calculate the associated field S(x). Thus verify that the field S(x) corresponding to (ii) satisfies the differential equation

div S = o, x ≠ 0. (iii)

Solution: We proceed in the manner suggested in the problem statement by first using (i)_1 to calculate the E corresponding to the u given by (ii); substituting the result into (i)_2 gives the corresponding S; and finally we can then check whether or not this S satisfies (iii).

In components,

E_ij = ½(u_i,j + u_j,i), (iv)

and therefore we begin by calculating u_i,j. For this, it is convenient to first calculate ∂r/∂x_j = r_,j. Observe by differentiating r² = |x|² = x_i x_i that

2r r_,j = 2x_i,j x_i = 2δ_ij x_i = 2x_j, (v)

and therefore

r_,j = x_j/r. (vi)

Now differentiating the given vector field u_i = b x_i/r³ with respect to x_j gives

u_i,j = (b/r³) x_i,j + b x_i (r⁻³)_,j = (b/r³) δ_ij − 3b (x_i/r⁴) r_,j = b δ_ij/r³ − 3b x_i x_j/r⁵. (vii)

Substituting this into (iv) gives us E_ij:

E_ij = ½(u_i,j + u_j,i) = b (δ_ij/r³ − 3 x_i x_j/r⁵). (viii)

Next, substituting (viii) into (i)_2 gives us S_ij:

S_ij = 2µ E_ij + λ E_kk δ_ij = 2µb (δ_ij/r³ − 3x_i x_j/r⁵) + λb (δ_kk/r³ − 3 x_k x_k/r⁵) δ_ij
     = 2µb (δ_ij/r³ − 3x_i x_j/r⁵) + λb (3/r³ − 3r²/r⁵) δ_ij = 2µb (δ_ij/r³ − 3x_i x_j/r⁵). (ix)

Finally we use this to calculate ∂S_ij/∂x_j = S_ij,j:

(1/2µb) S_ij,j = δ_ij (r⁻³)_,j − (3/r⁵)(x_i x_j)_,j − 3x_i x_j (r⁻⁵)_,j
             = δ_ij (−3r⁻⁴ r_,j) − (3/r⁵)(δ_ij x_j + x_i δ_jj) − 3x_i x_j (−5r⁻⁶ r_,j)
             = −3 (δ_ij/r⁴)(x_j/r) − (3/r⁵)(x_i + 3x_i) + (15x_i x_j/r⁶)(x_j/r) = 0. (x)
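Computations such as (iv)–(x) are easy to verify with a computer algebra system. The following sympy sketch (ours, not part of the original notes) reproduces the conclusion div S = o for x ≠ 0:

# Symbolic verification of Example 5.6 (our sketch, using sympy).
import sympy as sp

x1, x2, x3, b, mu, lam = sp.symbols('x1 x2 x3 b mu lam')
x = sp.Matrix([x1, x2, x3])
r = sp.sqrt(x.dot(x))
u = b * x / r**3

gradu = u.jacobian(x)                      # (grad u)_ij = u_i,j
E = (gradu + gradu.T) / 2
S = 2*mu*E + lam*E.trace()*sp.eye(3)       # trace(E) turns out to be 0 here

divS = sp.Matrix([sum(sp.diff(S[i, j], x[j]) for j in range(3)) for i in range(3)])
print(sp.simplify(divS))                   # Matrix([[0], [0], [0]])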
Example 5.7: Show that

∫_∂R x ⊗ n dA = V I, (i)

where V is the volume of the region R, and x is the position vector of a typical point in R + ∂R.

Solution: In terms of components in a fixed basis, we have to show that

∫_∂R x_i n_j dA = V δ_ij. (ii)

The result follows immediately by using the divergence theorem (5.11):

∫_∂R x_i n_j dA = ∫_R x_i,j dV = ∫_R δ_ij dV = δ_ij ∫_R dV = δ_ij V. (iii)
Example 5.8: Let A(x) be a 2-tensor field with the property that

∫_∂D A(x) n(x) dA = o for all subregions D ⊂ R, (i)

where n(x) is the unit outward normal vector at a point x on the boundary ∂D. Show that (i) holds if and only if div A = o at each point x ∈ R.

Solution: In terms of components in a fixed basis, we are told that

∫_∂D A_ij(x) n_j(x) dA = 0 for all subregions D ⊂ R. (ii)

By using the divergence theorem (5.12), this implies that

∫_D A_ij,j dV = 0 for all subregions D ⊂ R. (iii)

If A_ij,j is continuous on R, the localization result established earlier allows us to conclude that

A_ij,j = 0 at each x ∈ R. (iv)

Conversely, if (iv) holds, one can easily reverse the preceding steps to conclude that (i) also holds. This shows that (iv) is both necessary and sufficient for (i) to hold.
Example 5.9: Let A(x) be a 2-tensor field which satisfies the differential equation div A = o at each point in R. Suppose that in addition

∫_∂D x × An dA = o for all subregions D ⊂ R.

Show that A must be a symmetric 2-tensor.

Solution: In terms of components we are given that

∫_∂D e_ijk x_j A_kp n_p dA = 0,

which on using the divergence theorem yields

∫_D e_ijk (x_j A_kp)_,p dV = ∫_D e_ijk [δ_jp A_kp + x_j A_kp,p] dV = 0.

We are also given that A_ij,j = 0 at each point in R, and so the preceding equation simplifies, after using the substitution rule, to

∫_D e_ijk A_kj dV = 0.

Since this holds for all subregions D ⊂ R we can localize it to

e_ijk A_kj = 0 at each x ∈ R.

Finally, multiplying both sides by e_ipq and using the identity e_ipq e_ijk = δ_pj δ_qk − δ_pk δ_qj in (1.49) yields

(δ_pj δ_qk − δ_pk δ_qj) A_kj = A_qp − A_pq = 0,

and so A is symmetric.
Example 5.10: Let ε_1(x_1, x_2) and ε_2(x_1, x_2) be defined on a simply connected two-dimensional domain R. Find necessary and sufficient conditions under which there exists a function u(x_1, x_2) such that

u_,1 = ε_1,  u_,2 = ε_2  for all (x_1, x_2) ∈ R. (i)

Solution: In the presence of sufficient smoothness, the order of partial differentiation does not matter and so we necessarily have u_,12 = u_,21. Therefore a necessary condition for (i) to hold is that ε_1, ε_2 obey

ε_1,2 = ε_2,1 for all (x_1, x_2) ∈ R. (ii)

Figure 5.2: (a) Path C from (0, 0) to (x_1, x_2). The curve is parameterized by arc length s as ξ_1 = ξ_1(s), ξ_2 = ξ_2(s), 0 ≤ s ≤ s_0. The unit tangent vector on C, s, has components (s_1, s_2) = (ξ_1′(s), ξ_2′(s)). (b) A closed path C′ passing through (0, 0) and (x_1, x_2) and coinciding with C over part of its length. The unit outward normal vector on C′ is n, with components (n_1, n_2); one has s_1 = −n_2, s_2 = n_1.

To show that (ii) is also sufficient for the existence of u, we shall provide a formula for explicitly calculating the function u in terms of the given functions ε_1 and ε_2. Let C be an arbitrary regular oriented curve in R that connects (0, 0) to (x_1, x_2). A generic point on the curve is denoted by (ξ_1, ξ_2) and the curve is characterized by the parameterization

ξ_1 = ξ_1(s),  ξ_2 = ξ_2(s),  0 ≤ s ≤ s_0, (iii)

where s is arc length on C and (ξ_1(0), ξ_2(0)) = (0, 0) and (ξ_1(s_0), ξ_2(s_0)) = (x_1, x_2). We will show that the function

u(x_1, x_2) = ∫_0^{s_0} [ ε_1(ξ_1(s), ξ_2(s)) ξ_1′(s) + ε_2(ξ_1(s), ξ_2(s)) ξ_2′(s) ] ds (iv)

satisfies the requirement (i) when (ii) holds.

To see this we must first show that the integral (iv) does in fact define a function of (x_1, x_2), i.e. that it does not depend on the path of integration. (Note that if a function u satisfies (i), then so does the function u + constant, and so the dependence on the arbitrary starting point of the integral is to be expected.) Thus consider a closed path C′ that starts and ends at (0, 0) and passes through (x_1, x_2), as sketched in Figure 5.2(b). We need to show that

∮_{C′} [ ε_1(ξ_1(s), ξ_2(s)) ξ_1′(s) + ε_2(ξ_1(s), ξ_2(s)) ξ_2′(s) ] ds = 0. (v)

Recall that (ξ_1′(s), ξ_2′(s)) are the components of the unit tangent vector on C′ at the point (ξ_1(s), ξ_2(s)): s_1 = ξ_1′(s), s_2 = ξ_2′(s). Observe further from the figure that the components of the unit tangent vector s and the unit outward normal vector n are related by s_1 = −n_2 and s_2 = n_1. Thus the left-hand side of (v) can be written as

∮_{C′} (ε_1 s_1 + ε_2 s_2) ds = ∮_{C′} (ε_2 n_1 − ε_1 n_2) ds = ∫_{D′} (ε_2,1 − ε_1,2) dA, (vi)

where we have used the divergence theorem in the last step and D′ is the region enclosed by C′. In view of (ii), this last integral vanishes. Thus the integral (v) vanishes on any closed path C′, and so the integral (iv) is independent of path and depends only on the end points. Thus (iv) does in fact define a function u(x_1, x_2).

Finally, it remains to show that the function (iv) satisfies the requirements (i). This is readily seen by writing (iv) in the form

u(x_1, x_2) = ∫_{(0,0)}^{(x_1,x_2)} [ ε_1(ξ_1, ξ_2) dξ_1 + ε_2(ξ_1, ξ_2) dξ_2 ] (vii)

and then differentiating this with respect to x_1 and x_2.
Example 5.11: Let a_1(x_1, x_2) and a_2(x_1, x_2) be defined on a simply connected two-dimensional domain R. Suppose that a_1 and a_2 satisfy the partial differential equation

a_1,1(x_1, x_2) + a_2,2(x_1, x_2) = 0 for all (x_1, x_2) ∈ R. (i)

Show that (i) holds if and only if there is a function φ(x_1, x_2) such that

a_1(x_1, x_2) = φ_,2(x_1, x_2),  a_2(x_1, x_2) = −φ_,1(x_1, x_2). (ii)

Solution: This is simply a restatement of the previous example in a form that we will find useful in what follows.
Example 5.12: Find the most general vector field u(x) which satisfies the differential equation

½(∇u + ∇u^T) = O at all x ∈ R. (i)

Solution: In terms of components, ∇u = −∇u^T reads

u_i,j = −u_j,i. (ii)

Differentiating this with respect to x_k, and then changing the order of differentiation, gives

u_i,jk = −u_j,ik = −u_j,ki.

However, by (ii), u_j,k = −u_k,j. Using this and then changing the order of differentiation leads to

u_i,jk = −u_j,ki = u_k,ji = u_k,ij.

Again, by (ii), u_k,i = −u_i,k. Using this and changing the order of differentiation once again leads to

u_i,jk = u_k,ij = −u_i,kj = −u_i,jk.

It therefore follows that

u_i,jk = 0.

Integrating this once gives

u_i,j = C_ij, (iii)

where the C_ij's are constants. Integrating this once more gives

u_i = C_ij x_j + c_i, (iv)

where the c_i's are constants. The vector field u(x) must necessarily have this form if (ii) is to hold.

To examine sufficiency, substituting (iv) into (ii) shows that [C] must be skew-symmetric. Thus, in summary, the most general vector field u(x) that satisfies (i) is

u(x) = Cx + c,

where C is a constant skew-symmetric 2-tensor and c is a constant vector.
Example 5.13: Suppose that a scalar-valued function f(A) is defined for all symmetric tensors A. In terms of components in a fixed basis we have f = f(A_11, A_12, A_13, A_21, . . . , A_33). The partial derivatives of f with respect to A_ij,

∂f/∂A_ij, (i)

are the components of a 2-tensor. Is this tensor symmetric?

Solution: Consider, for example, the particular function f = A·A = A_ij A_ij which, when written out in components, reads

f = f_1(A_11, A_12, A_13, A_21, . . . , A_33) = A_11² + A_22² + A_33² + 2A_12² + 2A_23² + 2A_31². (ii)

Proceeding formally and differentiating (ii) with respect to A_12, and separately with respect to A_21, gives

∂f_1/∂A_12 = 4A_12,  ∂f_1/∂A_21 = 0, (iii)

which implies that ∂f_1/∂A_12 ≠ ∂f_1/∂A_21.

On the other hand, since A_ij is symmetric we can write

A_ij = ½(A_ij + A_ji). (iv)

Substituting (iv) into the formula (ii) for f gives f = f_2(A_11, A_12, A_13, A_21, . . . , A_33):

f_2(A_11, A_12, A_13, A_21, . . . , A_33)
 = A_11² + A_22² + A_33² + 2[½(A_12 + A_21)]² + 2[½(A_23 + A_32)]² + 2[½(A_31 + A_13)]²
 = A_11² + A_22² + A_33² + ½A_12² + A_12 A_21 + ½A_21² + . . . + ½A_31² + A_31 A_13 + ½A_13². (v)

Note that the values f_1[A] = f_2[A] for any symmetric matrix [A]. Differentiating f_2 leads to

∂f_2/∂A_12 = A_12 + A_21,  ∂f_2/∂A_21 = A_21 + A_12, (vi)

and so now ∂f_2/∂A_12 = ∂f_2/∂A_21.

The source of the original difficulty is the fact that the 9 A_ij's in the argument of f_1 are not independent variables since A_ij = A_ji; and yet we have been calculating partial derivatives as if they were independent. In fact, the original problem statement itself is ill-posed since we are asked to calculate ∂f/∂A_ij but told that [A] is restricted to being symmetric.

Suppose that f_2 is defined by (v) for all matrices [A] and not just symmetric matrices [A]. We see that the values of the functions f_1 and f_2 are equal at all symmetric matrices, and so in going from f_1 → f_2 we have effectively relaxed the constraint of symmetry and expanded the domain of definition of f to all matrices [A]. We may differentiate f_2 by treating the 9 A_ij's as independent, and the result can then be evaluated at symmetric matrices. We assume that this is what was meant in the problem statement.

In general, if a function f(A_11, A_12, . . . , A_33) is expressed in symmetric form, by changing A_ij → ½(A_ij + A_ji), then ∂f/∂A_ij will be symmetric; but not otherwise. Throughout these volumes, whenever we encounter a function of a symmetric tensor, we shall always assume that it has been written in symmetric form; therefore its derivative with respect to the tensor can be assumed to be symmetric.
Remark: We will encounter a similar situation involving tensors whose determinant is unity. On occasion we will have need to differentiate a function g_1(A) defined for all tensors with det A = 1, and we shall do this by extending the definition of the given function and defining a second function g_2(A) for all tensors; g_2 is defined such that g_1(A) = g_2(A) for all tensors with unit determinant. We then differentiate g_2 and evaluate the result at tensors with unit determinant.
References
1. P. Chadwick, Continuum Mechanics, Chapter 1, Sections 10 and 11, Dover, 1999.
2. M.E. Gurtin, An Introduction to Continuum Mechanics, Chapter 2, Academic Press, 1981.
3. L.A. Segel, Mathematics Applied to Continuum Mechanics, Section 2.3, Dover, New York, 1987.
Chapter 6
Orthogonal Curvilinear Coordinates
6.1 Introductory Remarks
The notes in this section are a somewhat simplified version of notes developed by Professor Eli Sternberg of Caltech. The discussion here, which is a general treatment of orthogonal curvilinear coordinates, is a compromise between a general tensorial treatment that includes oblique coordinate systems and an ad hoc treatment of special orthogonal curvilinear coordinate systems. A summary of the main tensor-analytic results of this section is given in equations (6.32)–(6.37) in terms of the scale factors h_i defined in (6.17) that relate the rectangular cartesian coordinates (x_1, x_2, x_3) to the orthogonal curvilinear coordinates (x̂_1, x̂_2, x̂_3).
It is helpful to begin by reviewing a few aspects of the familiar case of circular cylindrical coordinates. Let {e_1, e_2, e_3} be a fixed orthonormal basis, and let O be a fixed point chosen as the origin. The point O and the basis {e_1, e_2, e_3}, together, constitute a frame which we denote by {O; e_1, e_2, e_3}. Consider a generic point P in R³ whose position vector relative to this origin O is x. The rectangular cartesian coordinates of the point P in the frame {O; e_1, e_2, e_3} are the components (x_1, x_2, x_3) of the position vector x in this basis.

We introduce circular cylindrical coordinates (r, θ, z) through the mappings

x_1 = r cos θ,  x_2 = r sin θ,  x_3 = z,  for all (r, θ, z) ∈ [0, ∞) × [0, 2π) × (−∞, ∞). (6.1)

The mapping (6.1) is one-to-one except at r = 0 (i.e. x_1 = x_2 = 0). Indeed (6.1) may be
explicitly inverted for r > 0 to give

r = √(x_1² + x_2²);  cos θ = x_1/r,  sin θ = x_2/r;  z = x_3. (6.2)
For a general set of orthogonal curvilinear coordinates one cannot, in general, explicitly
invert the coordinate mapping in this way.
The Jacobian determinant of the mapping (6.1) is

Δ(r, θ, z) = det ⎡ ∂x_1/∂r  ∂x_1/∂θ  ∂x_1/∂z ⎤
                 ⎢ ∂x_2/∂r  ∂x_2/∂θ  ∂x_2/∂z ⎥ = r ≥ 0.
                 ⎣ ∂x_3/∂r  ∂x_3/∂θ  ∂x_3/∂z ⎦

Note that Δ(r, θ, z) = 0 if and only if r = 0 and is otherwise strictly positive; this reflects the invertibility of (6.1) on (r, θ, z) ∈ (0, ∞) × [0, 2π) × (−∞, ∞), and the breakdown in invertibility at r = 0.
Figure 6.1: Circular cylindrical coordinates (r, θ, z), showing the r-, θ- and z-coordinate lines and the coordinate surfaces r = constant, θ = constant and z = constant.
The circular cylindrical coordinates (r, θ, z) admit the familiar geometric interpretation illustrated in Figure 6.1. In view of (6.2), one has:

r = r_o = constant : circular cylinders, coaxial with the x_3-axis,
θ = θ_o = constant : meridional half-planes through the x_3-axis,
z = z_o = constant : planes perpendicular to the x_3-axis.
The above surfaces constitute a triply orthogonal family of coordinate surfaces; each “regular point” of E³ (i.e. a point at which r > 0) is the intersection of a unique triplet of (mutually perpendicular) coordinate surfaces. The coordinate lines are the pairwise intersections of the coordinate surfaces; thus, for example, as illustrated in Figure 6.1, the line along which an r-coordinate surface and a z-coordinate surface intersect is a θ-coordinate line. Along any coordinate line only one of the coordinates (r, θ, z) varies, while the other two remain constant.
In terms of the circular cylindrical coordinates the position vector x can be written as

x = x(r, θ, z) = (r cos θ) e_1 + (r sin θ) e_2 + z e_3. (6.3)
The vectors ∂x/∂r, ∂x/∂θ, ∂x/∂z are tangent to the coordinate lines corresponding to r, θ and z respectively. The so-called metric coefficients h_r, h_θ, h_z denote the magnitudes of these vectors, i.e.

h_r = |∂x/∂r|,  h_θ = |∂x/∂θ|,  h_z = |∂x/∂z|,

and so the unit tangent vectors corresponding to the respective coordinate lines r, θ and z are

e_r = (1/h_r)(∂x/∂r),  e_θ = (1/h_θ)(∂x/∂θ),  e_z = (1/h_z)(∂x/∂z).

In the present case one has h_r = 1, h_θ = r, h_z = 1 and

e_r = (1/h_r)(∂x/∂r) = cos θ e_1 + sin θ e_2,
e_θ = (1/h_θ)(∂x/∂θ) = −sin θ e_1 + cos θ e_2,
e_z = (1/h_z)(∂x/∂z) = e_3.
The triplet of vectors {e_r, e_θ, e_z} forms a local orthonormal basis at the point x. They are local because they depend on the point x; sometimes, when we need to emphasize this fact, we will write {e_r(x), e_θ(x), e_z(x)}.
In order to calculate the derivatives of various field quantities it is clear that we will need to calculate quantities such as ∂e_r/∂r, ∂e_r/∂θ, etc.; and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form

e_r · (∂e_r/∂r),  e_θ · (∂e_r/∂r),  e_z · (∂e_r/∂r),
e_r · (∂e_θ/∂r),  e_θ · (∂e_θ/∂r),  e_z · (∂e_θ/∂r),  etc. (6.4)
Much of the analysis in the general case to follow, leading eventually to Equation (6.30) in Subsection 6.2.4, is devoted to calculating these quantities.

Notation: As far as possible, we will consistently denote the fixed cartesian coordinate system and all components and quantities associated with it by symbols such as x_i, e_i, f(x_1, x_2, x_3), v_i(x_1, x_2, x_3), A_ij(x_1, x_2, x_3), etc., and we shall consistently denote the local curvilinear coordinate system and all components and quantities associated with it by similar symbols with “hats” over them, e.g. x̂_i, ê_i, f̂(x̂_1, x̂_2, x̂_3), v̂_i(x̂_1, x̂_2, x̂_3), Â_ij(x̂_1, x̂_2, x̂_3), etc.
6.2 General Orthogonal Curvilinear Coordinates

Let {e_1, e_2, e_3} be a fixed right-handed orthonormal basis, let O be the fixed point chosen as the origin, and let {O; e_1, e_2, e_3} be the associated frame. The rectangular cartesian coordinates of the point with position vector x in this frame are (x_1, x_2, x_3), where x_i = x · e_i.
6.2.1 Coordinate transformation. Inverse transformation.

We introduce curvilinear coordinates (x̂_1, x̂_2, x̂_3) through a triplet of scalar mappings

x_i = x_i(x̂_1, x̂_2, x̂_3) for all (x̂_1, x̂_2, x̂_3) ∈ R̂, (6.5)

where the domain of definition R̂ is a subset of E³. Each curvilinear coordinate x̂_i belongs to some linear interval L_i, and R̂ = L_1 × L_2 × L_3. For example, in the case of circular cylindrical coordinates we have L_1 = {x̂_1 | 0 ≤ x̂_1 < ∞}, L_2 = {x̂_2 | 0 ≤ x̂_2 < 2π} and L_3 = {x̂_3 | −∞ < x̂_3 < ∞}, and the “box” R̂ is given by R̂ = {(x̂_1, x̂_2, x̂_3) | 0 ≤ x̂_1 < ∞, 0 ≤ x̂_2 < 2π, −∞ < x̂_3 < ∞}. Observe that the “box” R̂ includes some, but possibly not all, of its faces.
Equation (6.5) may be interpreted as a mapping of R̂ into E³. We shall assume that (x_1, x_2, x_3) ranges over all of E³ as (x̂_1, x̂_2, x̂_3) takes on all values in R̂. We assume further that the mapping (6.5) is one-to-one and sufficiently smooth in the interior of R̂ so that the inverse mapping

x̂_i = x̂_i(x_1, x_2, x_3) (6.6)

exists and is appropriately smooth at all (x_1, x_2, x_3) in the image of the interior of R̂.

Note that the mapping (6.5) might not be uniquely invertible on some of the faces of R̂, which are mapped into “singular” lines or surfaces in E³. (For example, in the case of circular cylindrical coordinates, x̂_1 = r = 0 is a singular surface; see Section 6.1.) Points that are not on a singular line or surface will be referred to as “regular points” of E³.
The Jacobian matrix [J] of the mapping (6.5) has elements

J_ij = ∂x_i/∂x̂_j, (6.7)

and, by the assumed smoothness and one-to-oneness of the mapping, the Jacobian determinant does not vanish on the interior of R̂. Without loss of generality we can therefore take it to be positive:

det[J] = (1/6) e_ijk e_pqr (∂x_i/∂x̂_p)(∂x_j/∂x̂_q)(∂x_k/∂x̂_r) > 0. (6.8)

The Jacobian matrix of the inverse mapping (6.6) is [J]⁻¹.
The coordinate surface x̂_i = constant is defined by

x̂_i(x_1, x_2, x_3) = x̂_i° = constant, i = 1, 2, 3;

the pairwise intersections of these surfaces are the corresponding coordinate lines, along which only one of the curvilinear coordinates varies. Thus every regular point of E³ is the point of intersection of a unique triplet of coordinate surfaces and coordinate lines, as illustrated in Figure 6.2.
Recall that the tangent vector along an arbitrary regular curve

Γ : x = x(t), (α ≤ t ≤ β), (6.9)

can be taken to be¹ ẋ(t) = ẋ_i(t) e_i; it is oriented in the direction of increasing t. Thus in the case of the special curve

Γ_1 : x = x(x̂_1, c_2, c_3), x̂_1 ∈ L_1, c_2 = constant, c_3 = constant,

corresponding to an x̂_1-coordinate line, the tangent vector can be taken to be ∂x/∂x̂_1. Generalizing this, the ∂x/∂x̂_i are tangent vectors and

ê_i = (1/|∂x/∂x̂_i|) ∂x/∂x̂_i (no sum) (6.10)

¹ Here and in the sequel a superior dot indicates differentiation with respect to the parameter t.
Figure 6.2: Orthogonal curvilinear coordinates (x̂_1, x̂_2, x̂_3) and the associated local orthonormal basis vectors {ê_1, ê_2, ê_3}. Here ê_i is the unit tangent vector along the x̂_i-coordinate line, the sense of ê_i being determined by the direction in which x̂_i increases. The proper orthogonal matrix [Q], Q_ij = ê_i · e_j, characterizes the rotational transformation relating this basis to the rectangular cartesian basis {e_1, e_2, e_3}.
are the unit tangent vectors along the x̂_i-coordinate lines, both of which point in the sense of increasing x̂_i. Since our discussion is limited to orthogonal curvilinear coordinate systems, we must require, for i ≠ j:

ê_i · ê_j = 0  or  (∂x/∂x̂_i) · (∂x/∂x̂_j) = 0  or  (∂x_k/∂x̂_i)(∂x_k/∂x̂_j) = 0. (6.11)
6.2.2 Metric coefficients, scale moduli.

Consider again the arbitrary regular curve Γ parameterized by (6.9). If s(t) is the arc length of Γ, measured from an arbitrary fixed point on Γ, one has

ṡ(t) = |ẋ(t)| = √(ẋ(t) · ẋ(t)). (6.12)

One concludes from (6.12), (6.5) and the chain rule that

(ds/dt)² = (dx/dt) · (dx/dt) = (dx_k/dt)(dx_k/dt) = [(∂x_k/∂x̂_i)(dx̂_i/dt)][(∂x_k/∂x̂_j)(dx̂_j/dt)] = [(∂x_k/∂x̂_i)(∂x_k/∂x̂_j)](dx̂_i/dt)(dx̂_j/dt),

where

x̂_i(t) = x̂_i(x_1(t), x_2(t), x_3(t)), (α ≤ t ≤ β).

Thus

(ds/dt)² = g_ij (dx̂_i/dt)(dx̂_j/dt)  or  (ds)² = g_ij dx̂_i dx̂_j, (6.13)

in which g_ij are the metric coefficients of the curvilinear coordinate system under consideration. They are defined by

g_ij = (∂x/∂x̂_i) · (∂x/∂x̂_j) = (∂x_k/∂x̂_i)(∂x_k/∂x̂_j). (6.14)
Note that

g_ij = 0, (i ≠ j), (6.15)

as a consequence of the orthogonality condition (6.11). Observe that in terms of the Jacobian matrix [J] defined earlier in (6.7) we can write g_ij = J_ki J_kj, or equivalently [g] = [J]^T [J].

Because of (6.14) and (6.15) the metric coefficients can be written as

g_ij = h_i h_j δ_ij (no sum), (6.16)

where the scale moduli h_i are defined by²,³

h_i = √g_ii = √((∂x_k/∂x̂_i)(∂x_k/∂x̂_i)) = √((∂x_1/∂x̂_i)² + (∂x_2/∂x̂_i)² + (∂x_3/∂x̂_i)²) > 0 (no sum on i), (6.17)

noting that h_i = 0 is precluded by (6.8). The matrix of metric coefficients is therefore

[g] = ⎡ h_1²  0    0   ⎤
      ⎢ 0    h_2²  0   ⎥ . (6.18)
      ⎣ 0    0    h_3² ⎦

From (6.13), (6.14), (6.17) follows

(ds)² = (h_1 dx̂_1)² + (h_2 dx̂_2)² + (h_3 dx̂_3)² along Γ, (6.19)

² Here and henceforth the underlining of one of two or more repeated indices indicates suspended summation with respect to this index; in this text it is written “(no sum)”.
³ Some authors, such as Love, define h_i as 1/√g_ii instead of as √g_ii.
which reveals the geometric significance of the scale moduli, i.e.

h_i = ds/dx̂_i along the x̂_i-coordinate lines. (6.20)

It follows from (6.17), (6.10) and (6.11) that the unit vector ê_i can be expressed as

ê_i = (1/h_i) ∂x/∂x̂_i, (6.21)

and therefore the proper orthogonal matrix [Q] relating the two sets of basis vectors is given by

Q_ij = ê_i · e_j = (1/h_i) ∂x_j/∂x̂_i. (6.22)
6.2.3 Inverse partial derivatives

In view of (6.5), (6.6) one has the identity

x_i = x_i(x̂_1(x_1, x_2, x_3), x̂_2(x_1, x_2, x_3), x̂_3(x_1, x_2, x_3)),

so that from the chain rule,

(∂x_i/∂x̂_k)(∂x̂_k/∂x_j) = δ_ij.

Multiply this by ∂x_i/∂x̂_m, noting the implied contraction on the index i, and use (6.14), (6.16) to confirm that

∂x_j/∂x̂_m = g_km ∂x̂_k/∂x_j = h_m² ∂x̂_m/∂x_j.

Thus the inverse partial derivatives are given by

∂x̂_i/∂x_j = (1/h_i²) ∂x_j/∂x̂_i. (6.23)

By (6.22), the elements of the matrix [Q] that relates the two coordinate systems can be written in the alternative form

Q_ij = ê_i · e_j = h_i ∂x̂_i/∂x_j. (6.24)

Moreover, (6.23) and (6.17) yield the following alternative expression for h_i:

h_i = 1 / √((∂x̂_i/∂x_1)² + (∂x̂_i/∂x_2)² + (∂x̂_i/∂x_3)²).
6.2.4 Components of ∂ê_i/∂x̂_j in the local basis {ê_1, ê_2, ê_3}

In order to calculate the derivatives of various field quantities it is clear that we will need to calculate the quantities ∂ê_i/∂x̂_j; and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form ê_k · ∂ê_i/∂x̂_j. Calculating these quantities is an essential prerequisite for the transformation of basic tensor-analytic relations into arbitrary orthogonal curvilinear coordinates, and this subsection is devoted to this calculation.

From (6.21),

(∂ê_i/∂x̂_j) · ê_k = [ −(1/h_i²)(∂h_i/∂x̂_j)(∂x/∂x̂_i) + (1/h_i)(∂²x/∂x̂_i∂x̂_j) ] · (1/h_k)(∂x/∂x̂_k),

while by (6.14), (6.17),

(∂x/∂x̂_i) · (∂x/∂x̂_j) = g_ij = δ_ij h_i h_j. (6.25)

Therefore

(∂ê_i/∂x̂_j) · ê_k = −(δ_ik/h_i)(∂h_i/∂x̂_j) + (1/(h_i h_k))(∂²x/∂x̂_i∂x̂_j) · (∂x/∂x̂_k). (6.26)

In order to express the second-derivative term in (6.26) in terms of the scale moduli and their first partial derivatives, we begin by differentiating (6.25) with respect to x̂_k. Thus,

(∂²x/∂x̂_i∂x̂_k) · (∂x/∂x̂_j) + (∂²x/∂x̂_j∂x̂_k) · (∂x/∂x̂_i) = δ_ij ∂(h_i h_j)/∂x̂_k. (6.27)

If we refer to (6.27) as (a), and let (b) and (c) be the identities resulting from (6.27) when (i, j, k) are replaced by (j, k, i) and (k, i, j) respectively, then ½{(b) + (c) − (a)} is readily found to yield

(∂²x/∂x̂_i∂x̂_j) · (∂x/∂x̂_k) = ½[ δ_jk ∂(h_j h_k)/∂x̂_i + δ_ki ∂(h_k h_i)/∂x̂_j − δ_ij ∂(h_i h_j)/∂x̂_k ]. (6.28)
Substituting (6.28) into (6.26) leads to

(∂ê_i/∂x̂_j) · ê_k = −(δ_ik/h_i)(∂h_i/∂x̂_j) + (1/(2h_i h_k))[ δ_jk ∂(h_j h_k)/∂x̂_i + δ_ki ∂(h_k h_i)/∂x̂_j − δ_ij ∂(h_i h_j)/∂x̂_k ]. (6.29)

Equation (6.29) provides the explicit expressions for the terms (∂ê_i/∂x̂_j) · ê_k that we sought. Observe the following properties that follow from it:

(∂ê_i/∂x̂_j) · ê_k = 0 if i, j, k are distinct,
(∂ê_i/∂x̂_j) · ê_k = 0 if k = i,
(∂ê_i/∂x̂_i) · ê_k = −(1/h_k)(∂h_i/∂x̂_k) if i ≠ k,
(∂ê_i/∂x̂_k) · ê_k = (1/h_i)(∂h_k/∂x̂_i) if i ≠ k. (6.30)
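As a concrete check of (6.30) (our addition, using sympy; not part of the original notes), one can differentiate the cylindrical basis vectors directly. With (x̂_1, x̂_2, x̂_3) = (r, θ, z) and (h_1, h_2, h_3) = (1, r, 1), the only nonzero components of (∂ê_i/∂x̂_j) · ê_k are the two predicted by the last two relations in (6.30):

# Verifying (6.30) for cylindrical coordinates (our sympy sketch).
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
q = [r, th, z]
h = [sp.Integer(1), r, sp.Integer(1)]
x = sp.Matrix([r*sp.cos(th), r*sp.sin(th), z])
e = [sp.simplify(x.diff(qi) / hi) for qi, hi in zip(q, h)]   # e_1, e_2, e_3

for i in range(3):
    for j in range(3):
        for k in range(3):
            val = sp.simplify(e[i].diff(q[j]).dot(e[k]))
            if val != 0:
                print(f'de{i+1}/dq{j+1} . e{k+1} =', val)
# prints: de1/dq2 . e2 = 1    (= (1/h_1) dh_2/dq_1, the fourth relation)
#         de2/dq2 . e1 = -1   (= -(1/h_1) dh_2/dq_1, the third relation)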
6.3 Transformation of Basic Tensor Relations

Let T be a cartesian tensor field of order N ≥ 1, defined on a region R ⊂ E³, and suppose that the points of R are regular points of E³ with respect to a given orthogonal curvilinear coordinate system.

The curvilinear components T̂_ijk...n of T are the components of T in the local basis {ê_1, ê_2, ê_3}. Thus,

T̂_ij...n = Q_ip Q_jq · · · Q_nr T_pq...r, where Q_ip = ê_i · e_p. (6.31)
6.3.1 Gradient of a scalar field

Let φ(x) be a scalar-valued function and let v(x) denote its gradient:

v = ∇φ  or equivalently  v_i = φ_,i.

The components of v in the two bases {e_1, e_2, e_3} and {ê_1, ê_2, ê_3} are related in the usual way by v̂_k = Q_ki v_i, and so

v̂_k = Q_ki v_i = Q_ki ∂φ/∂x_i.

On using (6.22) this leads to

v̂_k = [(1/h_k)(∂x_i/∂x̂_k)] ∂φ/∂x_i = (1/h_k)(∂φ/∂x_i)(∂x_i/∂x̂_k),

so that by the chain rule

v̂_k = (1/h_k) ∂φ̂/∂x̂_k, (6.32)

where we have set

φ̂(x̂_1, x̂_2, x̂_3) = φ(x_1(x̂_1, x̂_2, x̂_3), x_2(x̂_1, x̂_2, x̂_3), x_3(x̂_1, x̂_2, x̂_3)).
6.3.2 Gradient of a vector field

Let v(x) be a vector-valued function and let W(x) denote its gradient:

W = ∇v  or equivalently  W_ij = v_i,j.

The components of W and v in the two bases {e_1, e_2, e_3} and {ê_1, ê_2, ê_3} are related in the usual way by

Ŵ_ij = Q_ip Q_jq W_pq,  v_p = Q_np v̂_n,

and therefore

Ŵ_ij = Q_ip Q_jq ∂v_p/∂x_q = Q_ip Q_jq ∂(Q_np v̂_n)/∂x_q = Q_ip Q_jq [∂(Q_np v̂_n)/∂x̂_m] ∂x̂_m/∂x_q.

Thus by (6.24)⁴

Ŵ_ij = Q_ip Q_jq Σ_{m=1}^{3} (1/h_m) Q_mq ∂(Q_np v̂_n)/∂x̂_m.

Since Q_jq Q_mq = δ_mj, this simplifies to

Ŵ_ij = Q_ip (1/h_j) ∂(Q_np v̂_n)/∂x̂_j,

which, on expanding the terms in parentheses, yields

Ŵ_ij = (1/h_j)[ ∂v̂_i/∂x̂_j + Q_ip (∂Q_np/∂x̂_j) v̂_n ].

However, by (6.22),

Q_ip (∂Q_np/∂x̂_j) v̂_n = Q_ip [∂(ê_n · e_p)/∂x̂_j] v̂_n = (ê_i · ∂ê_n/∂x̂_j) v̂_n,

and so

Ŵ_ij = (1/h_j)[ ∂v̂_i/∂x̂_j + (ê_i · ∂ê_n/∂x̂_j) v̂_n ], (6.33)

in which the coefficient in brackets is given by (6.29).

⁴ We explicitly use the summation sign in this equation (and elsewhere) when an index is repeated 3 or more times and we wish to sum over it.
6.3.3 Divergence of a vector field

Let v(x) be a vector-valued function and let W(x) = ∇v(x) denote its gradient. Then

div v = trace W = v_i,i.

Therefore from (6.33), the invariance of the trace of W, and (6.30),

div v = trace W = trace Ŵ = Ŵ_ii = Σ_{i=1}^{3} (1/h_i)[ ∂v̂_i/∂x̂_i + Σ_{n≠i} (1/h_n)(∂h_i/∂x̂_n) v̂_n ].

Collecting the terms involving v̂_1, v̂_2 and v̂_3 alone, one has

div v = (1/h_1) ∂v̂_1/∂x̂_1 + (v̂_1/(h_2 h_1)) ∂h_2/∂x̂_1 + (v̂_1/(h_3 h_1)) ∂h_3/∂x̂_1 + . . . + . . .

Thus

div v = (1/(h_1 h_2 h_3)) [ ∂(h_2 h_3 v̂_1)/∂x̂_1 + ∂(h_3 h_1 v̂_2)/∂x̂_2 + ∂(h_1 h_2 v̂_3)/∂x̂_3 ]. (6.34)
6.3.4 Laplacian of a scalar field

Let φ(x) be a scalar-valued function. Since

∇²φ = div(grad φ) = φ_,kk,

the results from Subsections 6.3.1 and 6.3.3 permit us to infer that

∇²φ = (1/(h_1 h_2 h_3)) [ ∂/∂x̂_1 ( (h_2 h_3/h_1) ∂φ̂/∂x̂_1 ) + ∂/∂x̂_2 ( (h_3 h_1/h_2) ∂φ̂/∂x̂_2 ) + ∂/∂x̂_3 ( (h_1 h_2/h_3) ∂φ̂/∂x̂_3 ) ], (6.35)

where we have set

φ̂(x̂_1, x̂_2, x̂_3) = φ(x_1(x̂_1, x̂_2, x̂_3), x_2(x̂_1, x̂_2, x̂_3), x_3(x̂_1, x̂_2, x̂_3)).
6.3.5 Curl of a vector field

Let v(x) be a vector-valued field and let w(x) be its curl, so that

w = curl v  or equivalently  w_i = e_ijk v_k,j.

Let

W = ∇v  or equivalently  W_ij = v_i,j.

Then, as we have shown in an earlier chapter,

w_i = e_ijk W_kj,  ŵ_i = e_ijk Ŵ_kj.

Consequently, from Subsection 6.3.2,

ŵ_i = Σ_{j=1}^{3} (1/h_j) e_ijk [ ∂v̂_k/∂x̂_j + (ê_k · ∂ê_n/∂x̂_j) v̂_n ].

By (6.30), the second term within the brackets sums out to zero unless n = j. Thus, using the third of (6.30), one arrives at

ŵ_i = Σ_{j,k=1}^{3} (1/h_j) e_ijk [ ∂v̂_k/∂x̂_j − (1/h_k)(∂h_j/∂x̂_k) v̂_j ].

This yields

ŵ_1 = (1/(h_2 h_3)) [ ∂(h_3 v̂_3)/∂x̂_2 − ∂(h_2 v̂_2)/∂x̂_3 ],
ŵ_2 = (1/(h_3 h_1)) [ ∂(h_1 v̂_1)/∂x̂_3 − ∂(h_3 v̂_3)/∂x̂_1 ],
ŵ_3 = (1/(h_1 h_2)) [ ∂(h_2 v̂_2)/∂x̂_1 − ∂(h_1 v̂_1)/∂x̂_2 ]. (6.36)
6.3.6 Divergence of a symmetric 2-tensor field

Let S(x) be a symmetric 2-tensor field and let v(x) denote its divergence:

v = div S, S = S^T,  or equivalently  v_i = S_ij,j, S_ij = S_ji.

The components of v and S in the two bases {e_1, e_2, e_3} and {ê_1, ê_2, ê_3} are related in the usual way by

v̂_i = Q_ip v_p,  S_ij = Q_mi Q_nj Ŝ_mn,

and consequently

v̂_i = Q_ip S_pj,j = Q_ip ∂(Q_mp Q_nj Ŝ_mn)/∂x_j = Q_ip [∂(Q_mp Q_nj Ŝ_mn)/∂x̂_k] ∂x̂_k/∂x_j.

By using (6.24), the orthogonality of the matrix [Q], (6.30) and Ŝ_ij = Ŝ_ji, we obtain

v̂_1 = (1/(h_1 h_2 h_3)) [ ∂(h_2 h_3 Ŝ_11)/∂x̂_1 + ∂(h_3 h_1 Ŝ_12)/∂x̂_2 + ∂(h_1 h_2 Ŝ_13)/∂x̂_3 ]
      + (1/(h_1 h_2))(∂h_1/∂x̂_2) Ŝ_12 + (1/(h_1 h_3))(∂h_1/∂x̂_3) Ŝ_13 − (1/(h_1 h_2))(∂h_2/∂x̂_1) Ŝ_22 − (1/(h_1 h_3))(∂h_3/∂x̂_1) Ŝ_33, (6.37)

with analogous expressions for v̂_2 and v̂_3.
Equations (6.32)–(6.37) provide the fundamental expressions for the basic tensor-analytic quantities that we will need. Observe that they reduce to their classical rectangular cartesian forms in the special case x_i = x̂_i (in which case h_1 = h_2 = h_3 = 1).
6.3.7 Differential elements of volume

When evaluating a volume integral over a region D, we sometimes find it convenient to transform it from the form

∫∫∫_D . . . dx_1 dx_2 dx_3

into an equivalent expression of the form

∫∫∫_D . . . dx̂_1 dx̂_2 dx̂_3.

In order to do this we must relate dx_1 dx_2 dx_3 to dx̂_1 dx̂_2 dx̂_3. By (6.22),

det[Q] = (1/(h_1 h_2 h_3)) det[J].

However, since [Q] is a proper orthogonal matrix, its determinant takes the value +1. Therefore det[J] = h_1 h_2 h_3, and so the basic relation dx_1 dx_2 dx_3 = det[J] dx̂_1 dx̂_2 dx̂_3 leads to

dx_1 dx_2 dx_3 = h_1 h_2 h_3 dx̂_1 dx̂_2 dx̂_3. (6.38)
6.3.8 Differential elements of area

Let dÂ_1 denote a differential element of (vector) area on an x̂_1-coordinate surface, so that dÂ_1 = (dx̂_2 ∂x/∂x̂_2) × (dx̂_3 ∂x/∂x̂_3). In view of (6.21) this leads to dÂ_1 = (dx̂_2 h_2 ê_2) × (dx̂_3 h_3 ê_3) = h_2 h_3 dx̂_2 dx̂_3 ê_1. Thus the differential elements of (scalar) area on the x̂_1-, x̂_2- and x̂_3-coordinate surfaces are given by

dÂ_1 = h_2 h_3 dx̂_2 dx̂_3,  dÂ_2 = h_3 h_1 dx̂_3 dx̂_1,  dÂ_3 = h_1 h_2 dx̂_1 dx̂_2, (6.39)

respectively.
6.4 Some Examples of Orthogonal Curvilinear Coordinate Systems

Circular Cylindrical Coordinates (r, θ, z):

x_1 = r cos θ,  x_2 = r sin θ,  x_3 = z;
for all (r, θ, z) ∈ [0, ∞) × [0, 2π) × (−∞, ∞);
h_r = 1,  h_θ = r,  h_z = 1. (6.40)

Spherical Coordinates (r, θ, φ):

x_1 = r sin θ cos φ,  x_2 = r sin θ sin φ,  x_3 = r cos θ;
for all (r, θ, φ) ∈ [0, ∞) × [0, π] × [0, 2π);
h_r = 1,  h_θ = r,  h_φ = r sin θ. (6.41)

Elliptical Cylindrical Coordinates (ξ, η, z):

x_1 = a cosh ξ cos η,  x_2 = a sinh ξ sin η,  x_3 = z;
for all (ξ, η, z) ∈ [0, ∞) × (−π, π] × (−∞, ∞);
h_ξ = h_η = a √(sinh²ξ + sin²η),  h_z = 1. (6.42)

Parabolic Cylindrical Coordinates (u, v, w):

x_1 = ½(u² − v²),  x_2 = uv,  x_3 = w;
for all (u, v, w) ∈ (−∞, ∞) × [0, ∞) × (−∞, ∞);
h_u = h_v = √(u² + v²),  h_w = 1. (6.43)
6.5 Worked Examples.

Example 6.1: Let E(x) be a symmetric 2-tensor field that is related to a vector field u(x) through

E = ½(∇u + ∇u^T).

In a cartesian coordinate system this can be written equivalently as

E_ij = ½(∂u_i/∂x_j + ∂u_j/∂x_i).

Establish the analogous formulas in a general orthogonal curvilinear coordinate system.

Solution: Using the result from Subsection 6.3.2 and the formulas (6.30) for ê_k · (∂ê_i/∂x̂_j), one finds after elementary simplification that

Ê_11 = (1/h_1) ∂û_1/∂x̂_1 + (1/(h_1 h_2))(∂h_1/∂x̂_2) û_2 + (1/(h_1 h_3))(∂h_1/∂x̂_3) û_3,  Ê_22 = . . . ,  Ê_33 = . . . ,

Ê_12 = Ê_21 = ½ [ (h_1/h_2) ∂/∂x̂_2 (û_1/h_1) + (h_2/h_1) ∂/∂x̂_1 (û_2/h_2) ],  Ê_23 = . . . ,  Ê_31 = . . . . (i)
Example 6.2: Consider a symmetric 2-tensor field S(x) and a vector field b(x) that satisfy the equation

div S + b = o.

In a cartesian coordinate system this can be written equivalently as

∂S_ij/∂x_j + b_i = 0.

Establish the analogous formulas in a general orthogonal curvilinear coordinate system.

Solution: From the results in Subsection 6.3.6 we have

(1/(h_1 h_2 h_3)) [ ∂(h_2 h_3 Ŝ_11)/∂x̂_1 + ∂(h_3 h_1 Ŝ_12)/∂x̂_2 + ∂(h_1 h_2 Ŝ_13)/∂x̂_3 ]
 + (1/(h_1 h_2))(∂h_1/∂x̂_2) Ŝ_12 + (1/(h_1 h_3))(∂h_1/∂x̂_3) Ŝ_13 − (1/(h_1 h_2))(∂h_2/∂x̂_1) Ŝ_22 − (1/(h_1 h_3))(∂h_3/∂x̂_1) Ŝ_33 + b̂_1 = 0,
 . . . etc., (i)

where b̂_i = Q_ip b_p.
Example 6.3: Consider circular cylindrical coordinates (x̂_1, x̂_2, x̂_3) = (r, θ, z), which are related to (x_1, x_2, x_3) through

x_1 = r cos θ,  x_2 = r sin θ,  x_3 = z,  0 ≤ r < ∞, 0 ≤ θ < 2π, −∞ < z < ∞.

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express the following quantities,

(a) grad f
(b) ∇²f
(c) div u
(d) curl u
(e) ½(∇u + ∇u^T) and
(f) div S

in this coordinate system.

Figure 6.3: Cylindrical coordinates (r, θ, z) and the associated local curvilinear orthonormal basis {e_r, e_θ, e_z}: e_r = cos θ e_1 + sin θ e_2, e_θ = −sin θ e_1 + cos θ e_2, e_z = e_3.

Solution: We simply need to specialize the basic results established in Section 6.3. In the present case we have

(x̂_1, x̂_2, x̂_3) = (r, θ, z) (i)

and the coordinate mapping (6.5) takes the particular form

x_1 = r cos θ,  x_2 = r sin θ,  x_3 = z. (ii)

The matrix [∂x_i/∂x̂_j] therefore specializes to

⎡ ∂x_1/∂r  ∂x_1/∂θ  ∂x_1/∂z ⎤   ⎡ cos θ  −r sin θ  0 ⎤
⎢ ∂x_2/∂r  ∂x_2/∂θ  ∂x_2/∂z ⎥ = ⎢ sin θ   r cos θ  0 ⎥ ,
⎣ ∂x_3/∂r  ∂x_3/∂θ  ∂x_3/∂z ⎦   ⎣   0        0     1 ⎦

and the scale moduli are

h_r = √((∂x_1/∂r)² + (∂x_2/∂r)² + (∂x_3/∂r)²) = 1,
h_θ = √((∂x_1/∂θ)² + (∂x_2/∂θ)² + (∂x_3/∂θ)²) = r,
h_z = √((∂x_1/∂z)² + (∂x_2/∂z)² + (∂x_3/∂z)²) = 1. (iii)

We use the natural notation

(u_r, u_θ, u_z) = (û_1, û_2, û_3) (iv)

for the components of a vector field,

(S_rr, S_rθ, . . .) = (Ŝ_11, Ŝ_12, . . .) (v)

for the components of a 2-tensor field, and

(e_r, e_θ, e_z) = (ê_1, ê_2, ê_3) (vi)

for the unit vectors associated with the local cylindrical coordinate system.

From (ii),

x = (r cos θ) e_1 + (r sin θ) e_2 + z e_3,

and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local cylindrical coordinate system:

e_r = cos θ e_1 + sin θ e_2,  e_θ = −sin θ e_1 + cos θ e_2,  e_z = e_3,

which, in this case, could also have been obtained geometrically from Figure 6.3.

(a) Substituting (i) and (iii) into (6.32) gives

∇f = (∂f̂/∂r) e_r + (1/r)(∂f̂/∂θ) e_θ + (∂f̂/∂z) e_z,

where we have set f̂(r, θ, z) = f(x_1, x_2, x_3).

(b) Substituting (i) and (iii) into (6.35) gives

∇²f = ∂²f̂/∂r² + (1/r) ∂f̂/∂r + (1/r²) ∂²f̂/∂θ² + ∂²f̂/∂z².

(c) Substituting (i) and (iii) into (6.34) gives

div u = ∂u_r/∂r + u_r/r + (1/r) ∂u_θ/∂θ + ∂u_z/∂z.

(d) Substituting (i) and (iii) into (6.36) gives

curl u = [ (1/r) ∂u_z/∂θ − ∂u_θ/∂z ] e_r + [ ∂u_r/∂z − ∂u_z/∂r ] e_θ + [ ∂u_θ/∂r + u_θ/r − (1/r) ∂u_r/∂θ ] e_z.

(e) Set E = ½(∇u + ∇u^T). Substituting (i) and (iii) into (6.33) enables us to calculate ∇u, whence we can calculate E. Writing the cylindrical components Ê_ij of E as (E_rr, E_rθ, E_rz, . . .) = (Ê_11, Ê_12, Ê_13, . . .), one finds

E_rr = ∂u_r/∂r,
E_θθ = (1/r) ∂u_θ/∂θ + u_r/r,
E_zz = ∂u_z/∂z,
E_rθ = ½ [ (1/r) ∂u_r/∂θ + ∂u_θ/∂r − u_θ/r ],
E_θz = ½ [ ∂u_θ/∂z + (1/r) ∂u_z/∂θ ],
E_zr = ½ [ ∂u_z/∂r + ∂u_r/∂z ].

Alternatively these could have been obtained from the results of Example 6.1.

(f) Finally, substituting (i) and (iii) into (6.37) gives

div S = [ ∂S_rr/∂r + (1/r) ∂S_rθ/∂θ + ∂S_rz/∂z + (S_rr − S_θθ)/r ] e_r
      + [ ∂S_rθ/∂r + (1/r) ∂S_θθ/∂θ + ∂S_θz/∂z + 2S_rθ/r ] e_θ
      + [ ∂S_zr/∂r + (1/r) ∂S_zθ/∂θ + ∂S_zz/∂z + S_zr/r ] e_z.

Alternatively these could have been obtained from the results of Example 6.2.
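The formulas above can be spot-checked against their cartesian counterparts. The sympy sketch below (ours, not part of the original notes) does this for the cylindrical Laplacian in part (b), using an arbitrary sample field:

# Spot-check of the cylindrical Laplacian formula (our sympy sketch).
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x1, x2, x3 = r*sp.cos(th), r*sp.sin(th), z

# sample scalar field, in cartesian form and then rewritten in cylindrical form
f_cart = lambda a, b, c: a**2*b + c*a          # f(x1, x2, x3) = x1^2 x2 + x3 x1
fhat = sp.simplify(f_cart(x1, x2, x3))

# cylindrical Laplacian from part (b)
lap_cyl = sp.diff(fhat, r, 2) + sp.diff(fhat, r)/r + sp.diff(fhat, th, 2)/r**2 + sp.diff(fhat, z, 2)

# cartesian Laplacian, then converted to (r, theta, z)
a, b, c = sp.symbols('a b c')
lap_cart = sum(sp.diff(f_cart(a, b, c), v, 2) for v in (a, b, c)).subs({a: x1, b: x2, c: x3})

print(sp.simplify(lap_cyl - lap_cart))         # 0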
Example 6.4: Consider spherical coordinates (x̂_1, x̂_2, x̂_3) = (r, θ, φ), which are related to (x_1, x_2, x_3) through

x_1 = r sin θ cos φ,  x_2 = r sin θ sin φ,  x_3 = r cos θ,  0 ≤ r < ∞, 0 ≤ θ ≤ π, 0 ≤ φ < 2π.

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express the following quantities,

(a) grad f
(b) ∇²f
(c) div u
(d) curl u
(e) ½(∇u + ∇u^T) and
(f) div S

in this coordinate system.

Figure 6.4: Spherical coordinates (r, θ, φ) and the associated local curvilinear orthonormal basis {e_r, e_θ, e_φ}: e_r = (sin θ cos φ) e_1 + (sin θ sin φ) e_2 + cos θ e_3, e_θ = (cos θ cos φ) e_1 + (cos θ sin φ) e_2 − sin θ e_3, e_φ = −sin φ e_1 + cos φ e_2.

Solution: We simply need to specialize the basic results established in Section 6.3. In the present case we have

(x̂_1, x̂_2, x̂_3) = (r, θ, φ), (i)

and the coordinate mapping (6.5) takes the particular form

x_1 = r sin θ cos φ,  x_2 = r sin θ sin φ,  x_3 = r cos θ. (ii)

The matrix [∂x_i/∂x̂_j] therefore specializes to

⎡ sin θ cos φ   r cos θ cos φ   −r sin θ sin φ ⎤
⎢ sin θ sin φ   r cos θ sin φ    r sin θ cos φ ⎥ ,
⎣    cos θ        −r sin θ            0        ⎦

and the scale moduli are

h_r = √((∂x_1/∂r)² + (∂x_2/∂r)² + (∂x_3/∂r)²) = 1,
h_θ = √((∂x_1/∂θ)² + (∂x_2/∂θ)² + (∂x_3/∂θ)²) = r,
h_φ = √((∂x_1/∂φ)² + (∂x_2/∂φ)² + (∂x_3/∂φ)²) = r sin θ. (iii)

We use the natural notation

(u_r, u_θ, u_φ) = (û_1, û_2, û_3) (iv)

for the components of a vector field,

(S_rr, S_rθ, S_rφ, . . .) = (Ŝ_11, Ŝ_12, Ŝ_13, . . .) (v)

for the components of a 2-tensor field, and

(e_r, e_θ, e_φ) = (ê_1, ê_2, ê_3) (vi)

for the unit vectors associated with the local spherical coordinate system.

From (ii),

x = (r sin θ cos φ) e_1 + (r sin θ sin φ) e_2 + (r cos θ) e_3,

and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local spherical coordinate system:

e_r = (sin θ cos φ) e_1 + (sin θ sin φ) e_2 + cos θ e_3,
e_θ = (cos θ cos φ) e_1 + (cos θ sin φ) e_2 − sin θ e_3,
e_φ = −sin φ e_1 + cos φ e_2,

which, in this case, could also have been obtained geometrically from Figure 6.4.

(a) Substituting (i) and (iii) into (6.32) gives

∇f = (∂f̂/∂r) e_r + (1/r)(∂f̂/∂θ) e_θ + (1/(r sin θ))(∂f̂/∂φ) e_φ,

where we have set f̂(r, θ, φ) = f(x_1, x_2, x_3).

(b) Substituting (i) and (iii) into (6.35) gives

∇²f = ∂²f̂/∂r² + (2/r) ∂f̂/∂r + (1/r²) ∂²f̂/∂θ² + (cot θ/r²) ∂f̂/∂θ + (1/(r² sin²θ)) ∂²f̂/∂φ².

(c) Substituting (i) and (iii) into (6.34) gives

div u = (1/(r² sin θ)) [ ∂(r² sin θ u_r)/∂r + ∂(r sin θ u_θ)/∂θ + ∂(r u_φ)/∂φ ].

(d) Substituting (i) and (iii) into (6.36) gives

curl u = (1/(r² sin θ)) [ ∂(r sin θ u_φ)/∂θ − ∂(r u_θ)/∂φ ] e_r + (1/(r sin θ)) [ ∂u_r/∂φ − ∂(r sin θ u_φ)/∂r ] e_θ + (1/r) [ ∂(r u_θ)/∂r − ∂u_r/∂θ ] e_φ.

(e) Set E = ½(∇u + ∇u^T). We substitute (i) and (iii) into (6.33) to calculate ∇u, from which one can calculate E. Writing the spherical components Ê_ij of E as (E_rr, E_rθ, E_rφ, . . .) = (Ê_11, Ê_12, Ê_13, . . .), one finds

E_rr = ∂u_r/∂r,
E_θθ = (1/r) ∂u_θ/∂θ + u_r/r,
E_φφ = (1/(r sin θ)) ∂u_φ/∂φ + u_r/r + (cot θ/r) u_θ,
E_rθ = ½ [ (1/r) ∂u_r/∂θ + ∂u_θ/∂r − u_θ/r ],
E_θφ = ½ [ (1/(r sin θ)) ∂u_θ/∂φ + (1/r) ∂u_φ/∂θ − (cot θ/r) u_φ ],
E_φr = ½ [ (1/(r sin θ)) ∂u_r/∂φ + ∂u_φ/∂r − u_φ/r ].

Alternatively these could have been obtained from the results of Example 6.1.

(f) Finally, substituting (i) and (iii) into (6.37) gives

div S = [ ∂S_rr/∂r + (1/r) ∂S_rθ/∂θ + (1/(r sin θ)) ∂S_rφ/∂φ + (1/r)(2S_rr − S_θθ − S_φφ + cot θ S_rθ) ] e_r
      + [ ∂S_rθ/∂r + (1/r) ∂S_θθ/∂θ + (1/(r sin θ)) ∂S_θφ/∂φ + (1/r)(3S_rθ + cot θ (S_θθ − S_φφ)) ] e_θ
      + [ ∂S_rφ/∂r + (1/r) ∂S_θφ/∂θ + (1/(r sin θ)) ∂S_φφ/∂φ + (1/r)(3S_rφ + 2 cot θ S_θφ) ] e_φ.

Alternatively these could have been obtained from the results of Example 6.2.
Example 6.5: Show that the matrix [Q] defined by (6.22) is a proper orthogonal matrix.

Proof: From (6.22),

Q_ij = (1/h_i) ∂x_j/∂x̂_i,

and therefore

Q_ik Q_jk = (1/(h_i h_j)) (∂x_k/∂x̂_i)(∂x_k/∂x̂_j) = (1/(h_i h_j)) g_ij = δ_ij,

where in the penultimate step we have used (6.14) and in the last step we have used (6.16). Thus [Q] is an orthogonal matrix. Next, from (6.22) and (6.7),

Q_ij = (1/h_i) ∂x_j/∂x̂_i = (1/h_i) J_ji,

where J_ij = ∂x_i/∂x̂_j are the elements of the Jacobian matrix. Thus

det[Q] = (1/(h_1 h_2 h_3)) det[J] > 0,

where the inequality is a consequence of the inequalities in (6.8) and (6.17). Hence [Q] is proper orthogonal.
References
1. H. Reismann and P.S. Pawlik, Elasticity: Theory and Applications, Wiley, 1980.
2. L.A. Segel, Mathematics Applied to Continuum Mechanics, Dover, New York, 1987.
3. E. Sternberg, (Unpublished) Lecture Notes for AM 135: Elasticity, California Institute of Technology,
Pasadena, California, 1976.
Chapter 7
Calculus of Variations
7.1 Introduction.
Numerous problems in physics can be formulated as mathematical problems in optimization.
For example in optics, Fermat’s principle states that the path taken by a ray of light in
propagating from one point to another is the path that minimizes the travel time. Most
equilibrium theories of mechanics involve ﬁnding a conﬁguration of a system that minimizes
its energy. For example a heavy cable that hangs under gravity between two ﬁxed pegs
adopts the shape that, from among all possible shapes, minimizes the gravitational potential
energy of the system. Or, if we subject a straight beam to a compressive load, its deformed
conﬁguration is the shape which minimizes the total energy of the system. Depending on
the load, the energy minimizing conﬁguration may be straight or bent (buckled). If we dip a
(nonplanar) wire loop into soapy water, the soap ﬁlm that forms across the loop is the one
that minimizes the surface energy (which under most circumstances equals minimizing the
surface area of the soap ﬁlm). Another common problem occurs in geodesics where, given
some surface and two points on it, we want to ﬁnd the path of shortest distance joining those
two points which lies entirely on the given surface.
In each of these problems we have a scalar-valued quantity F, such as energy or time, that depends on a function φ, such as the shape or path, and we want to find the function φ that minimizes the quantity F of interest. Note that the scalar-valued function F is defined on a set of functions. One refers to F as a functional and writes F{φ}.

As a specific example, consider the so-called Brachistochrone Problem. We are given two points (0, 0) and (1, h) in the x, y-plane, with h > 0, that are to be joined by a smooth wire.
A bead is released from rest from the point (0, 0) and slides along the wire due to gravity.
For what shape of wire is the time of travel from (0, 0) to (1, h) least?
Figure 7.1: A curve y = φ(x) joining (0, 0) to (1, h) along which a bead slides under gravity.
In order to formulate this problem, let y = φ(x), 0 ≤ x ≤ 1, describe a generic curve
joining (0, 0) to (1, h). Let s(t) denote the distance traveled by the bead along the wire at
time t so that v(t) = ds/dt is its corresponding speed. The travel time of the bead is
T = ∫ ds/v,
where the integral is taken along the entire path. In the question posed to us, we are to ﬁnd
the curve, i.e. the function φ(x), which makes T a minimum. Since we are to minimize T by
varying φ, it is natural to ﬁrst rewrite the formula for T in a form that explicitly displays
its dependency on φ.
Note first that, by elementary calculus, the arc length ds is related to dx by

ds = √(dx² + dy²) = √(1 + (dy/dx)²) dx = √(1 + (φ′)²) dx,

and so we can write

T = ∫_0^1 √(1 + (φ′)²) / v dx.
Next, we wish to express the speed v in terms of φ. If (x(t), y(t)) denote the coordinates
of the bead at time t, the conservation of energy tells us that the sum of the potential and
kinetic energies does not vary with time:
−mg φ(x(t)) + ½ m v²(t) = 0,

where the right-hand side is the total energy at the initial instant. Solving this for v gives

v = √(2gφ).
Finally, substituting this back into the formula for the travel time gives

T{φ} = ∫_0^1 √( (1 + (φ′)²) / (2gφ) ) dx. (7.1)

Given a curve characterized by y = φ(x), this formula gives the corresponding travel time for the bead. Our task is to find, from among all such curves, the one that minimizes T{φ}.
This minimization takes place over a set of functions φ. In order to complete the formulation of the problem, we should carefully characterize this set of “admissible functions” (or “test functions”). A generic curve is described by y = φ(x), 0 ≤ x ≤ 1. Since we are only interested in curves that pass through the points (0, 0) and (1, h), we must require that φ(0) = 0, φ(1) = h. Finally, for analytical reasons we only consider curves that are continuous and have a continuous slope, i.e. φ and φ′ are both continuous on [0, 1]. Thus the set A of admissible functions that we wish to consider is

A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1], φ(0) = 0, φ(1) = h }. (7.2)

Our task is to minimize T{φ} over the set A.
Remark: Since the shortest distance between two points is given by the straight line that
joins them, it is natural to wonder whether a straight line is also the curve that gives the
minimum travel time. To investigate this, consider (a) a straight line, and (b) a circular arc,
that joins (0, 0) to (1, h). Use (7.1) to calculate the travel time for each of these paths and
show that the straight line is not the path that gives the least travel time.
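The following Python sketch (ours, not part of the original notes; it assumes numpy and scipy are available) carries out the kind of comparison suggested in the Remark. Rather than constructing a circular arc, it perturbs the straight line into the one-parameter family φ(x) = hx + c x(1 − x), c ≥ 0, and evaluates (7.1) numerically; the straight line c = 0 is not the fastest member of the family:

# Travel times (7.1) for curves sagging below the straight line (our sketch).
import numpy as np
from scipy.integrate import quad

g, h = 9.81, 0.5

def travel_time(phi, dphi):
    # T{phi} = integral of sqrt((1 + phi'^2) / (2 g phi)) over [0,1]; the integrand
    # has an integrable singularity at x = 0, which quad's interior nodes tolerate.
    f = lambda x: np.sqrt((1.0 + dphi(x)**2) / (2.0*g*phi(x)))
    T, _ = quad(f, 0.0, 1.0, limit=200)
    return T

for c in (0.0, 0.25, 0.5, 1.0):
    T = travel_time(lambda x: h*x + c*x*(1-x), lambda x: h + c*(1-2*x))
    print(f'c = {c:4.2f}:  T = {T:.4f} s')
# c = 0 (the straight line) does not give the smallest travel time.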
Remark: One can consider various variants of the Brachistochrone Problem. For example, the length of the curve joining the two points might be prescribed, in which case the minimization is to be carried out subject to the constraint that the length is given. Or perhaps the position of the left-hand end might be prescribed as above, but the right-hand end of the wire might be allowed to lie anywhere on the vertical line through x = 1. Or there might be some prohibited region of the x, y-plane through which the path is disallowed from passing. And so on.
In summary, in the simplest problem in the calculus of variations we are required to find a function φ(x) ∈ C¹[0, 1] that minimizes a functional F{φ} of the form

F{φ} = ∫_0^1 f(x, φ, φ′) dx

over an admissible set of test functions A. The test functions (or admissible functions) φ are subject to certain conditions, including smoothness requirements; possibly (but not necessarily) boundary conditions at both ends x = 0, 1; and possibly (but not necessarily) side constraints of various forms. Other types of problems will also be encountered in what follows.
7.2 Brief review of calculus.

Perhaps it is useful to begin by reviewing the familiar question of minimization in calculus. Consider a subset A of n-dimensional space Rⁿ and let F(x) = F(x_1, x_2, . . . , x_n) be a real-valued function defined on A. We say that x_o ∈ A is a minimizer of F if¹

F(x) ≥ F(x_o) for all x ∈ A. (7.3)

Sometimes we are only interested in finding a “local minimizer”, i.e. a point x_o that minimizes F relative to all x that are “close” to x_o. In order to speak of such a notion we must have a measure of “closeness”. Thus suppose that the vector space Rⁿ is Euclidean so that a norm is defined on Rⁿ. Then we say that x_o is a local minimizer of F if F(x) ≥ F(x_o) for all x in a neighborhood of x_o, i.e. if

F(x) ≥ F(x_o) for all x such that |x − x_o| < r (7.4)

for some r > 0.
Define the function F̂(ε) for −ε_0 < ε < ε_0 by

F̂(ε) = F(x_0 + εn), (7.5)

where n is a fixed vector and ε_0 is small enough to ensure that x_0 + εn ∈ A for all ε ∈ (−ε_0, ε_0). In the presence of sufficient smoothness we can write

F̂(ε) − F̂(0) = F̂′(0) ε + ½ F̂″(0) ε² + O(ε³). (7.6)

Since F(x_0 + εn) ≥ F(x_0) it follows that F̂(ε) ≥ F̂(0). Thus if x_0 is to be a minimizer of F it is necessary that

F̂′(0) = 0,  F̂″(0) ≥ 0. (7.7)

¹ A maximizer of F is a minimizer of −F, so we don't need to address maximizing separately from minimizing.
It is customary to use the following notation and terminology: we set

δF(x_o, n) = F̂′(0), (7.8)

which is called the first variation of F, and similarly set

δ²F(x_o, n) = F̂″(0), (7.9)

which is called the second variation of F. At an interior local minimizer x_0 one necessarily must have

δF(x_o, n) = 0 and δ²F(x_o, n) ≥ 0 for all unit vectors n. (7.10)

In the present setting of calculus, we know from (7.5), (7.8) that δF(x_o, n) = ∇F(x_o) · n and that δ²F(x_o, n) = (∇∇F(x_o)) n · n. Here the vector field ∇F is the gradient of F and the tensor field ∇∇F is the gradient of ∇F. Therefore (7.10) is equivalent to the requirements that

∇F(x_o) · n = 0 and (∇∇F(x_o)) n · n ≥ 0 for all unit vectors n, (7.11)

or equivalently

Σ_{i=1}^{n} (∂F/∂x_i)|_{x=x_0} n_i = 0 and Σ_{i=1}^{n} Σ_{j=1}^{n} (∂²F/∂x_i∂x_j)|_{x=x_0} n_i n_j ≥ 0, (7.12)

whence we must have ∇F(x_o) = o and the Hessian ∇∇F(x_o) must be positive semi-definite.
Remark: It is worth recalling that a function need not have a minimizer. For example, the function F_1(x) = x defined on A_1 = (−∞, ∞) is unbounded as x → ±∞. Another example is given by the function F_2(x) = x defined on A_2 = (−1, 1), noting that F_2 ≥ −1 on A_2; however, while the value of F_2 can get as close as one wishes to −1, it cannot actually achieve the value −1 since there is no x ∈ A_2 at which F_2(x) = −1; note that −1 ∉ A_2. Finally, consider the function F_3(x) defined on A_3 = [−1, 1] where F_3(x) = 1 for −1 ≤ x ≤ 0 and F_3(x) = x for 0 < x ≤ 1; the value of F_3 can get as close as one wishes to 0 but cannot achieve it since F_3(0) = 1. In the first example A_1 was unbounded. In the second, A_2 was bounded but open. And in the third example A_3 was bounded and closed but the function was discontinuous on A_3. These examples show what can go wrong when A fails to be compact (i.e. bounded and closed) or F fails to be continuous. It can be shown that if A is compact and if F is continuous on A, then F assumes both maximum and minimum values on A.
7.3 The basic idea: necessary conditions for a minimum: δF = 0, δ²F ≥ 0.

In the calculus of variations, we are typically given a functional F defined on a function space A, where F : A → R, and we are asked to find a function φ₀ ∈ A that minimizes F over A: i.e. to find φ₀ ∈ A for which

$$F\{\phi\} \geq F\{\phi_0\} \quad \text{for all } \phi \in A.$$
Figure 7.2: Two functions φ₁ and φ₂ that are "close" in the sense of the norm ‖·‖₀ but not in the sense of the norm ‖·‖₁.
Most often, we will be looking for a local (or relative) minimizer, i.e. for a function φ₀ that minimizes F relative to all "nearby functions". This requires that we select a norm so that the distance between two functions can be quantified. For a function φ in the set of functions that are continuous on an interval [x₁, x₂], i.e. for φ ∈ C[x₁, x₂], one can define a norm by

$$\|\phi\|_0 = \max_{x_1 \leq x \leq x_2} |\phi(x)|.$$

For a function φ in the set of functions that are continuous and have continuous first derivatives on [x₁, x₂], i.e. for φ ∈ C¹[x₁, x₂], one can define a norm by

$$\|\phi\|_1 = \max_{x_1 \leq x \leq x_2} |\phi(x)| + \max_{x_1 \leq x \leq x_2} |\phi'(x)|;$$

and so on. (Of course the norm ‖φ‖₀ can also be used on C¹[x₁, x₂].)
When seeking a local minimizer of a functional F we might say we want to find φ₀ for which

$$F\{\phi\} \geq F\{\phi_0\} \quad \text{for all admissible } \phi \text{ such that } \|\phi - \phi_0\|_0 < r$$

for some r > 0. In this case the minimizer φ₀ is being compared with all admissible functions φ whose values are close to those of φ₀ for all x₁ ≤ x ≤ x₂. Such a local minimizer is called a strong minimizer. On the other hand, when seeking a local minimizer we might say we want to find φ₀ for which

$$F\{\phi\} \geq F\{\phi_0\} \quad \text{for all admissible } \phi \text{ such that } \|\phi - \phi_0\|_1 < r$$

for some r > 0. In this case the minimizer is being compared with all functions whose values and whose first derivatives are close to those of φ₀ for all x₁ ≤ x ≤ x₂. Such a local minimizer is called a weak minimizer. A strong minimizer is automatically a weak minimizer.
Unless explicitly stated otherwise, in this chapter we will be examining weak local extrema. The approach for finding such extrema of a functional is essentially the same as that used in the more familiar case of calculus reviewed in the preceding section. Consider a functional F{φ} defined on a function space A and suppose that φ₀ ∈ A minimizes F. In order to determine φ₀ we consider the one-parameter family of admissible functions

$$\phi(x; \varepsilon) = \phi_0(x) + \varepsilon\, \eta(x) \tag{7.13}$$

that are close to φ₀; here ε is a real variable in the range −ε₀ < ε < ε₀ and η(x) is a once continuously differentiable function. Since φ is to be admissible, we must have φ₀ + εη ∈ A for each ε ∈ (−ε₀, ε₀). Define a function F̂(ε) by

$$\hat F(\varepsilon) = F\{\phi_0 + \varepsilon\eta\}, \qquad -\varepsilon_0 < \varepsilon < \varepsilon_0. \tag{7.14}$$

Since φ₀ minimizes F it follows that F{φ₀ + εη} ≥ F{φ₀}, or equivalently F̂(ε) ≥ F̂(0). Therefore ε = 0 minimizes F̂(ε). The first and second variations of F are defined by δF{φ₀, η} = F̂′(0) and δ²F{φ₀, η} = F̂″(0) respectively, and so if φ₀ minimizes F, then it is necessary that

$$\delta F\{\phi_0, \eta\} = 0, \qquad \delta^2 F\{\phi_0, \eta\} \geq 0. \tag{7.15}$$

These are necessary conditions on a minimizer φ₀. We cannot go further in general. In any specific problem, such as those in the subsequent sections, the necessary condition δF{φ₀, η} = 0 can be further simplified by exploiting the fact that it must hold for all admissible η. This allows one to eliminate η, leading to a condition (or conditions) that only involves the minimizer φ₀.
Remark: Note that when η is independent of ε the functions φ₀(x) and φ₀(x) + εη(x), and their derivatives, are close to each other for small ε. On the other hand the functions φ₀(x) and φ₀(x) + ε sin(x/ε) are close to each other but their derivatives are not close to each other. Throughout these notes we will consider functions η that are independent of ε and so, as noted previously, we will be restricting attention exclusively to weak minimizers.
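This distinction is easy to see numerically. The sketch below (Python; the grid resolution and the choice ε = 0.01 are illustrative assumptions) approximates the two norms of the difference ε sin(x/ε) on [0, 1]: the ‖·‖₀ norm is of order ε while the ‖·‖₁ norm stays of order one, so the two functions are close in ‖·‖₀ but not in ‖·‖₁.

```python
import numpy as np

eps = 0.01
x = np.linspace(0.0, 1.0, 200001)   # fine grid to resolve the oscillation

d  = eps * np.sin(x / eps)          # difference between the two functions
dp = np.cos(x / eps)                # its derivative

norm0 = np.max(np.abs(d))                       # ~ eps
norm1 = np.max(np.abs(d)) + np.max(np.abs(dp))  # ~ 1 + eps
print(norm0, norm1)   # approximately 0.01 and 1.01
```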
7.4 Application of the necessary condition δF = 0 to the basic problem. Euler equation.

7.4.1 The basic problem. Euler equation.

Consider the following class of problems: let A be the set of all continuously differentiable functions φ(x) defined for 0 ≤ x ≤ 1 with φ(0) = a, φ(1) = b:

$$A = \left\{ \phi(\cdot) \;\middle|\; \phi : [0,1] \to \mathbb{R},\; \phi \in C^1[0,1],\; \phi(0) = a,\; \phi(1) = b \right\}. \tag{7.16}$$

Let f(x, y, z) be a given function, defined and smooth for all real x, y, z. Define a functional F{φ}, for every φ ∈ A, by

$$F\{\phi\} = \int_0^1 f(x, \phi(x), \phi'(x))\, dx. \tag{7.17}$$

We wish to find a function φ ∈ A which minimizes F{φ}.
Figure 7.3: The minimizer φ₀ and a neighboring function φ₀ + εη.
Suppose that φ₀(x) ∈ A is a minimizer of F, so that F{φ} ≥ F{φ₀} for all φ ∈ A. In order to determine φ₀ we consider the one-parameter family of admissible functions φ(x; ε) = φ₀(x) + ε η(x), where ε is a real variable in the range −ε₀ < ε < ε₀ and η(x) is a once continuously differentiable function on [0, 1]; see Figure 7.3. Since φ must be admissible we need φ₀ + ε η ∈ A for each ε. Therefore we must have φ(0; ε) = a and φ(1; ε) = b, which in turn requires that

$$\eta(0) = \eta(1) = 0. \tag{7.18}$$

Pick any function η(x) with the property (7.18) and fix it. Define the function F̂(ε) = F{φ₀ + εη}, so that

$$\hat F(\varepsilon) = F\{\phi_0 + \varepsilon\eta\} = \int_0^1 f(x, \phi_0 + \varepsilon\eta, \phi_0' + \varepsilon\eta')\, dx. \tag{7.19}$$
We know from the analysis of the preceding section that a necessary condition for φ₀ to minimize F is that

$$\delta F\{\phi_0, \eta\} = \hat F'(0) = 0. \tag{7.20}$$

On using the chain rule, we find F̂′(ε) from (7.19) to be

$$\hat F'(\varepsilon) = \int_0^1 \left[ \frac{\partial f}{\partial y}(x, \phi_0 + \varepsilon\eta, \phi_0' + \varepsilon\eta')\, \eta + \frac{\partial f}{\partial z}(x, \phi_0 + \varepsilon\eta, \phi_0' + \varepsilon\eta')\, \eta' \right] dx,$$

and so (7.20) leads to

$$\delta F\{\phi_0, \eta\} = \hat F'(0) = \int_0^1 \left[ \frac{\partial f}{\partial y}(x, \phi_0, \phi_0')\, \eta + \frac{\partial f}{\partial z}(x, \phi_0, \phi_0')\, \eta' \right] dx = 0. \tag{7.21}$$
Thus far we have simply repeated the general analysis of the preceding section in the context of the particular functional (7.17). Our goal is to find φ₀ and so we must eliminate η from (7.21). To do this we rearrange the terms in (7.21) into a convenient form and exploit the fact that (7.21) must hold for all functions η that satisfy (7.18).

In order to do this we proceed as follows. Integrating the second term in (7.21) by parts gives

$$\int_0^1 \left( \frac{\partial f}{\partial z} \right) \eta'\, dx = \left[ \eta\, \frac{\partial f}{\partial z} \right]_{x=0}^{x=1} - \int_0^1 \frac{d}{dx}\left( \frac{\partial f}{\partial z} \right) \eta\, dx.$$

However by (7.18) we have η(0) = η(1) = 0 and therefore the first term on the right-hand side drops out. Thus (7.21) reduces to

$$\int_0^1 \left[ \frac{\partial f}{\partial y} - \frac{d}{dx}\left( \frac{\partial f}{\partial z} \right) \right] \eta\, dx = 0. \tag{7.22}$$
Though we have viewed η as fixed up to this point, we recognize that the above derivation is valid for all once continuously differentiable functions η(x) which have η(0) = η(1) = 0. Therefore (7.22) must hold for all such functions.

Lemma: The following is a basic result from calculus. Let p(x) be a continuous function on [0, 1] and suppose that

$$\int_0^1 p(x)\, n(x)\, dx = 0$$

for all continuous functions n(x) with n(0) = n(1) = 0. Then

$$p(x) = 0 \quad \text{for } 0 \leq x \leq 1.$$
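The idea behind the Lemma can be made concrete: if p were nonzero somewhere, the particular choice n(x) = p(x)x(1 − x), which is continuous and vanishes at both ends, would make the integral strictly positive. The short sketch below (Python; the example p and the quadrature grid are illustrative assumptions) demonstrates this witness function numerically.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]

def test_integral(p):
    """Integrate p(x) * n(x) with the witness n(x) = p(x) x (1 - x)."""
    n = p(x) * x * (1.0 - x)
    return float(np.sum(p(x) * n) * dx)   # simple Riemann sum

print(test_integral(lambda x: np.zeros_like(x)))   # 0.0: p ≡ 0 passes
print(test_integral(lambda x: x - 0.3))            # > 0: a nonzero p is detected
```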
In view of this Lemma we conclude that the integrand of (7.22) must vanish and therefore obtain the differential equation

$$\frac{d}{dx}\left[ \frac{\partial f}{\partial z}(x, \phi_0, \phi_0') \right] - \frac{\partial f}{\partial y}(x, \phi_0, \phi_0') = 0 \quad \text{for } 0 \leq x \leq 1. \tag{7.23}$$

This is a differential equation for φ₀, which together with the boundary conditions

$$\phi_0(0) = a, \qquad \phi_0(1) = b, \tag{7.24}$$

provides the mathematical problem governing the minimizer φ₀(x). The differential equation (7.23) is referred to as the Euler equation (sometimes referred to as the Euler-Lagrange equation) associated with the functional (7.17).

Notation: In order to avoid the (precise though) cumbersome notation above, we shall drop the subscript "0" from the minimizing function φ₀; moreover, we shall write the Euler equation (7.23) as

$$\frac{d}{dx}\left[ \frac{\partial f}{\partial \phi'}(x, \phi, \phi') \right] - \frac{\partial f}{\partial \phi}(x, \phi, \phi') = 0, \tag{7.25}$$

where, in carrying out the partial differentiation in (7.25), one treats x, φ and φ′ as if they were independent variables.
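Equation (7.25) is mechanical enough that a computer algebra system can generate it for any given integrand. The minimal sketch below (Python with SymPy; the integrand f = (φ′)²/2 + φ² is an illustrative choice, not one from the text) uses SymPy's euler_equations helper, which treats φ and φ′ as independent variables exactly as prescribed above.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
phi = sp.Function('phi')

# An illustrative integrand f(x, phi, phi'); swap in any smooth f.
f = sp.Rational(1, 2) * phi(x).diff(x)**2 + phi(x)**2

# Expect (up to an overall sign, which is SymPy's convention for (7.25)):
#   2*phi(x) - Derivative(phi(x), (x, 2)) = 0
print(euler_equations(f, [phi(x)], [x]))
```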
7.4.2 An example. The Brachistochrone Problem.

Consider the Brachistochrone Problem formulated in the first example of Section 7.1. Here we have

$$f(x, \phi, \phi') = \sqrt{\frac{1 + (\phi')^2}{2g\phi}}$$

and we wish to find the function φ₀(x) that minimizes

$$F\{\phi\} = \int_0^1 f(x, \phi(x), \phi'(x))\, dx = \int_0^1 \sqrt{\frac{1 + [\phi'(x)]^2}{2g\phi(x)}}\, dx$$

over the class of functions φ(x) that are continuous and have continuous first derivatives on [0, 1], and satisfy the boundary conditions φ(0) = 0, φ(1) = h. Treating x, φ and φ′ as if they are independent variables and differentiating the function f(x, φ, φ′) gives

$$\frac{\partial f}{\partial \phi} = -\sqrt{\frac{1 + (\phi')^2}{2g}}\; \frac{1}{2\phi^{3/2}}, \qquad \frac{\partial f}{\partial \phi'} = \frac{\phi'}{\sqrt{2g\phi\left(1 + (\phi')^2\right)}},$$

and therefore the Euler equation (7.23) specializes to

$$\frac{d}{dx}\left[ \frac{\phi'}{\sqrt{\phi\left(1 + (\phi')^2\right)}} \right] + \frac{\sqrt{1 + (\phi')^2}}{2\phi^{3/2}} = 0, \qquad 0 < x < 1, \tag{7.26}$$

with associated boundary conditions

$$\phi(0) = 0, \qquad \phi(1) = h. \tag{7.27}$$

The minimizer φ(x) therefore must satisfy the boundary-value problem consisting of the second-order (nonlinear) ordinary differential equation (7.26) and the boundary conditions (7.27).
The rest of this subsection has nothing to do with the calculus of variations. It is simply concerned with solving the boundary value problem (7.26), (7.27). We can write the differential equation as

$$\frac{\phi'}{\sqrt{\phi\left(1 + (\phi')^2\right)}}\; \frac{d}{d\phi}\left[ \frac{\phi'}{\sqrt{\phi\left(1 + (\phi')^2\right)}} \right] + \frac{1}{2\phi^2} = 0,$$

which can be immediately integrated to give

$$\frac{1}{(\phi'(x))^2} = \frac{\phi(x)}{c^2 - \phi(x)} \tag{7.28}$$

where c is a constant of integration that is to be determined.

It is most convenient to find the path of fastest descent in parametric form, x = x(θ), φ = φ(θ), θ₁ < θ < θ₂, and to this end we adopt the substitution

$$\phi = \frac{c^2}{2}(1 - \cos\theta) = c^2 \sin^2(\theta/2), \qquad \theta_1 < \theta < \theta_2. \tag{7.29}$$
Differentiating this with respect to x gives

$$\phi'(x) = \frac{c^2}{2} \sin\theta\; \theta'(x),$$

so that, together with (7.28) and (7.29), this leads to

$$\frac{dx}{d\theta} = \frac{c^2}{2}(1 - \cos\theta),$$

which integrates to give

$$x = \frac{c^2}{2}(\theta - \sin\theta) + c_1, \qquad \theta_1 < \theta < \theta_2. \tag{7.30}$$

We now turn to the boundary conditions. The requirement φ(x) = 0 at x = 0, together with (7.29) and (7.30), gives us θ₁ = 0 and c₁ = 0. We thus have

$$x = \frac{c^2}{2}(\theta - \sin\theta), \qquad \phi = \frac{c^2}{2}(1 - \cos\theta), \qquad 0 \leq \theta \leq \theta_2. \tag{7.31}$$

The remaining boundary condition φ(x) = h at x = 1 gives the following two equations for finding the two constants θ₂ and c:

$$1 = \frac{c^2}{2}(\theta_2 - \sin\theta_2), \qquad h = \frac{c^2}{2}(1 - \cos\theta_2). \tag{7.32}$$

Once this pair of equations is solved for c and θ₂, then (7.31) provides the solution of the problem. We now address the solvability of (7.32).
To this end, first, if we define the function p(θ) by

$$p(\theta) = \frac{\theta - \sin\theta}{1 - \cos\theta}, \qquad 0 < \theta < 2\pi, \tag{7.33}$$

then, by dividing the first equation in (7.32) by the second, we see that θ₂ is a root of the equation

$$p(\theta_2) = h^{-1}. \tag{7.34}$$

One can readily verify that the function p(θ) has the properties

$$p \to 0 \text{ as } \theta \to 0^+, \qquad p \to \infty \text{ as } \theta \to 2\pi^-,$$

$$\frac{dp}{d\theta} = \frac{\cos(\theta/2)}{\sin^3(\theta/2)}\left( \tan(\theta/2) - \theta/2 \right) > 0 \quad \text{for } 0 < \theta < 2\pi.$$
Figure 7.4: A graph of the function p(θ) defined in (7.33) versus θ. Note that given any h > 0 the equation h⁻¹ = p(θ) has a unique root θ = θ₂ ∈ (0, 2π).
Therefore it follows that as θ goes from 0 to 2π, the function p(θ) increases monotonically from 0 to ∞; see Figure 7.4. Therefore, given any h > 0, the equation p(θ₂) = h⁻¹ can be solved for a unique value of θ₂ ∈ (0, 2π). The value of c is then given by (7.32)₁.
Thus in summary, the path of minimum descent is given by the curve defined in (7.31) with the values of θ₂ and c given by (7.34) and (7.32)₁ respectively. Figure 7.5 shows that the curve (7.31) is a cycloid – the path traversed by a point on the rim of a wheel that rolls without slipping.
Figure 7.5: A cycloid x = x(θ), y = y(θ) is generated by rolling a circle of radius R along the x-axis as shown, the parameter θ having the significance of being the angle of rolling: x(θ) = R(θ − sin θ), y(θ) = R(1 − cos θ).
7.4.3 A Formalism for Deriving the Euler Equation

In order to expedite the steps involved in deriving the Euler equation, one usually uses the following formal procedure. First, we adopt the following notation: if H{φ} is any quantity that depends on φ, then by δH we mean²

$$\delta H = H\{\phi + \varepsilon\eta\} - H\{\phi\} \quad \text{up to linear terms}, \tag{7.35}$$

that is,

$$\delta H = \varepsilon \left. \frac{d H\{\phi + \varepsilon\eta\}}{d\varepsilon} \right|_{\varepsilon = 0}. \tag{7.36}$$

For example, by δφ we mean

$$\delta\phi = (\phi + \varepsilon\eta) - \phi = \varepsilon\eta; \tag{7.37}$$

by δφ′ we mean

$$\delta\phi' = (\phi' + \varepsilon\eta') - \phi' = \varepsilon\eta' = (\delta\phi)'; \tag{7.38}$$

by δf we mean

$$\delta f = f(x, \phi + \varepsilon\eta, \phi' + \varepsilon\eta') - f(x, \phi, \phi') = \frac{\partial f}{\partial \phi}(x, \phi, \phi')\, \varepsilon\eta + \frac{\partial f}{\partial \phi'}(x, \phi, \phi')\, \varepsilon\eta' = \left( \frac{\partial f}{\partial \phi} \right) \delta\phi + \left( \frac{\partial f}{\partial \phi'} \right) \delta\phi'; \tag{7.39}$$

and by δF, or δ∫₀¹ f dx, we mean

$$\delta F = F\{\phi + \varepsilon\eta\} - F\{\phi\} = \varepsilon \left[ \frac{d}{d\varepsilon} F\{\phi + \varepsilon\eta\} \right]_{\varepsilon = 0} = \varepsilon \int_0^1 \left[ \left( \frac{\partial f}{\partial \phi} \right) \eta + \left( \frac{\partial f}{\partial \phi'} \right) \eta' \right] dx = \int_0^1 \left[ \frac{\partial f}{\partial \phi}\, \delta\phi + \frac{\partial f}{\partial \phi'}\, \delta\phi' \right] dx = \int_0^1 \delta f\, dx. \tag{7.40}$$

We refer to δφ(x) as an admissible variation. When η(0) = η(1) = 0, it follows that δφ(0) = δφ(1) = 0.

²Note the following minor change in notation: what we call δH here is what we previously would have called ε δH.
We refer to δF as the first variation of the functional F. Observe from (7.40) that

$$\delta F = \delta \int_0^1 f\, dx = \int_0^1 \delta f\, dx. \tag{7.41}$$

Finally observe that the necessary condition for a minimum that we wrote down previously can be written as

$$\delta F\{\phi, \delta\phi\} = 0 \quad \text{for all admissible variations } \delta\phi. \tag{7.42}$$

For purposes of illustration, let us now repeat our previous derivation of the Euler equation using this new notation³. Given the functional F, a necessary condition for an extremum of F is

$$\delta F = 0$$

and so our task is to calculate δF:

$$\delta F = \delta \int_0^1 f\, dx = \int_0^1 \delta f\, dx.$$

Since f = f(x, φ, φ′), this in turn leads to⁴

$$\delta F = \int_0^1 \left[ \left( \frac{\partial f}{\partial \phi} \right) \delta\phi + \left( \frac{\partial f}{\partial \phi'} \right) \delta\phi' \right] dx.$$

From here on we can proceed as before by setting δF = 0, integrating the second term by parts, and using the boundary conditions and the arbitrariness of an admissible variation δφ(x) to derive the Euler equation.

³If ever in doubt about a particular step during a calculation, always go back to the meaning of the symbols δφ, etc., or revert to using ε, η, etc.

⁴Note that the variation δ does not operate on x since it is the function φ that is being varied, not the independent variable x. So in particular, δf = f_φ δφ + f_φ′ δφ′ and not δf = f_x δx + f_φ δφ + f_φ′ δφ′.
7.5 Generalizations.

7.5.1 Generalization: Free endpoint; Natural boundary conditions.

Consider the following modified problem: suppose that we want to find the function φ(x), from among all once continuously differentiable functions, that makes the functional

$$F\{\phi\} = \int_0^1 f(x, \phi, \phi')\, dx$$

a minimum. Note that we do not restrict attention here to those functions that satisfy φ(0) = a, φ(1) = b. So the set of admissible functions A is

$$A = \left\{ \phi(\cdot) \;\middle|\; \phi : [0,1] \to \mathbb{R},\; \phi \in C^1[0,1] \right\}. \tag{7.43}$$

Note that the class of admissible functions A is much larger than before. The functional F{φ} is defined for all φ ∈ A by

$$F\{\phi\} = \int_0^1 f(x, \phi, \phi')\, dx. \tag{7.44}$$
We begin by calculating the first variation of F:

$$\delta F = \delta \int_0^1 f\, dx = \int_0^1 \delta f\, dx = \int_0^1 \left[ \left( \frac{\partial f}{\partial \phi} \right) \delta\phi + \left( \frac{\partial f}{\partial \phi'} \right) \delta\phi' \right] dx. \tag{7.45}$$

Integrating the last term by parts yields

$$\delta F = \int_0^1 \left[ \frac{\partial f}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) \right] \delta\phi\, dx + \left[ \frac{\partial f}{\partial \phi'}\, \delta\phi \right]_0^1. \tag{7.46}$$
Since δF = 0 at an extremum, we must have

$$\int_0^1 \left[ \frac{\partial f}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) \right] \delta\phi\, dx + \left[ \frac{\partial f}{\partial \phi'}\, \delta\phi \right]_0^1 = 0 \tag{7.47}$$

for all admissible variations δφ(x). Note that the boundary term in (7.47) does not automatically drop out now because δφ(0) and δφ(1) do not have to vanish. First restrict attention to all variations δφ with the additional property δφ(0) = δφ(1) = 0; equation (7.47) must necessarily hold for all such variations δφ. The boundary terms now drop out and by the Lemma in Section 7.4.1 it follows that

$$\frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) - \frac{\partial f}{\partial \phi} = 0 \quad \text{for } 0 < x < 1. \tag{7.48}$$

This is the same Euler equation as before. Next, return to (7.47) and keep (7.48) in mind. We see that we must have

$$\left. \frac{\partial f}{\partial \phi'} \right|_{x=1} \delta\phi(1) - \left. \frac{\partial f}{\partial \phi'} \right|_{x=0} \delta\phi(0) = 0 \tag{7.49}$$

for all admissible variations δφ. Since δφ(0) and δφ(1) are both arbitrary (and not necessarily zero), (7.49) requires that

$$\frac{\partial f}{\partial \phi'} = 0 \quad \text{at } x = 0 \text{ and } x = 1. \tag{7.50}$$
Equation (7.50) provides the boundary conditions to be satisfied by the extremizing function φ(x). These boundary conditions were determined as part of the extremization; they are referred to as natural boundary conditions, in contrast to boundary conditions that are given as part of a problem statement.

Example: Reconsider the Brachistochrone Problem analyzed previously, but now suppose that we want to find the shape of the wire that commences from (0, 0) and ends somewhere on the vertical line through x = 1; see Figure 7.6. The only difference between this and the first Brachistochrone Problem is that here the set of admissible functions is

$$A_2 = \left\{ \phi(\cdot) \;\middle|\; \phi : [0,1] \to \mathbb{R},\; \phi \in C^1[0,1],\; \phi(0) = 0 \right\};$$

note that there is no restriction on φ at x = 1. Our task is to minimize the travel time of the bead T{φ} over the set A₂.
Figure 7.6: Curve joining (0, 0) to an arbitrary point on the vertical line through x = 1.
The minimizer must satisfy the same Euler equation (7.26) as in the first problem, and the same boundary condition φ(0) = 0 at the left hand end. To find the natural boundary condition at the other end, recall that

$$f(x, \phi, \phi') = \sqrt{\frac{1 + (\phi')^2}{2g\phi}}.$$

Differentiating this gives

$$\frac{\partial f}{\partial \phi'} = \frac{\phi'}{\sqrt{2g\phi\left(1 + (\phi')^2\right)}},$$

and so by (7.50), the natural boundary condition is

$$\frac{\phi'}{\sqrt{2g\phi\left(1 + (\phi')^2\right)}} = 0 \quad \text{at } x = 1,$$

which simplifies to φ′(1) = 0.
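Combining this with the parametric solution (7.31) pins the free end down completely (a quick calculation, assuming the cycloid family still applies): along (7.31) one has φ′ = sin θ/(1 − cos θ) = cot(θ/2), so the natural boundary condition φ′(1) = 0 forces θ₂ = π, i.e. the wire meets the vertical line x = 1 horizontally; x(θ₂) = 1 then gives c² = 2/π, and the bead arrives at the depth φ(θ₂) = c² = 2/π.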
7.5.2 Generalization: Higher derivatives.

The functional F{φ} considered above involved a function φ and its first derivative φ′. One can consider functionals that involve higher derivatives of φ, for example

$$F\{\phi\} = \int_0^1 f(x, \phi, \phi', \phi'')\, dx.$$

We begin with the formulation and analysis of a specific example and then turn to some theory.
Figure 7.7: The neutral axis of a beam in reference (straight) and deformed (curved) states. The bold lines represent a cross-section of the beam in the reference and deformed states. In the classical Bernoulli-Euler theory of beams, cross-sections remain perpendicular to the neutral axis, so the rotation is φ = u′; the bending moment is M = EIφ′ and the elastic energy per unit length is ½Mφ′ = ½EI(u″)².
Example: The Bernoulli-Euler Beam. Consider an elastic beam of length L and bending stiffness EI, which is clamped at its left hand end. The beam carries a distributed load p(x) along its length and a concentrated force F at the right hand end x = L; both loads act in the −y-direction. Let u(x), 0 ≤ x ≤ L, be a geometrically admissible deflection of the beam. Since the beam is clamped at the left hand end, this means that u(x) is any (smooth enough) function that satisfies the geometric boundary conditions

$$u(0) = 0, \qquad u'(0) = 0; \tag{7.51}$$

the boundary condition (7.51)₁ describes the geometric condition that the beam is clamped at x = 0 and therefore cannot deflect at that point; the boundary condition (7.51)₂ describes the geometric condition that the beam is clamped at x = 0 and therefore cannot rotate at the left end. The set of admissible test functions that we consider is

$$A = \left\{ u(\cdot) \;\middle|\; u : [0, L] \to \mathbb{R},\; u \in C^4[0, L],\; u(0) = 0,\; u'(0) = 0 \right\}, \tag{7.52}$$

which consists of all "geometrically possible configurations".

From elasticity theory we know that the elastic energy associated with a deformed configuration of the beam is (1/2)EI(u″)² per unit length. Therefore the total potential energy of the system is

$$\Phi\{u\} = \int_0^L \frac{1}{2} EI (u''(x))^2\, dx - \int_0^L p(x)\, u(x)\, dx - F\, u(L), \tag{7.53}$$

where the last two terms represent the potential energy of the distributed and concentrated loading respectively; the negative sign in front of these terms arises because the loads act in the −y-direction while u is the deflection in the +y-direction. Note that the integrand of the functional involves the higher derivative term u″. In addition, note that only two boundary conditions u(0) = 0, u′(0) = 0 are given, and so we expect to derive additional natural boundary conditions at the right hand end x = L.
By using (7.53) this can be written explicitly as
_
L
0
EI u
δu
dx −
_
L
0
p δu dx −F δu(L) = 0.
Twice integrating the ﬁrst term by parts leads to
_
L
0
EI u
δu dx −
_
L
0
p δu dx −F δu(L) −
_
EIu
δu
_
L
0
+
_
EIu
δu
_
L
0
= 0.
The given boundary conditions (7.51) require that an admissible variation δu must obey
δu(0) = 0, δu
(0) = 0. Therefore the preceding equation simpliﬁes to
_
L
0
(EI u
−p) δu dx −[EIu
(L) + F] δu(L) + EIu
(L)δu
(L) = 0.
Since this must hold for all admissible variations δu(x), it follows in the usual way that the extremizing function u(x) must obey

$$EI\, u''''(x) - p(x) = 0 \;\; \text{for } 0 < x < L, \qquad EI\, u'''(L) + F = 0, \qquad EI\, u''(L) = 0. \tag{7.54}$$

Thus the extremizer u(x) obeys the fourth order linear ordinary differential equation (7.54)₁, the prescribed boundary conditions (7.51) and the natural boundary conditions (7.54)₂,₃. The natural boundary condition (7.54)₂ describes the mechanical condition that the beam carries a concentrated force F at the right hand end; and the natural boundary condition (7.54)₃ describes the mechanical condition that the beam is free to rotate (and therefore has zero "bending moment") at the right hand end.
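For constant EI and a uniform load p, the boundary value problem (7.51), (7.54) can be solved in closed form; the following sketch (Python with SymPy; treating EI, p and F as symbolic constants is an illustrative setup) recovers the familiar clamped-free deflection.

```python
import sympy as sp

x, L, EI, p, F = sp.symbols('x L EI p F', positive=True)
u = sp.Function('u')

# Solve EI u'''' = p, eq. (7.54)_1, with clamped conditions (7.51) at x = 0
# and natural conditions (7.54)_{2,3} at x = L.
sol = sp.dsolve(
    sp.Eq(EI * u(x).diff(x, 4), p),
    u(x),
    ics={
        u(0): 0,                              # u(0) = 0
        u(x).diff(x).subs(x, 0): 0,           # u'(0) = 0
        u(x).diff(x, 2).subs(x, L): 0,        # EI u''(L) = 0
        u(x).diff(x, 3).subs(x, L): -F / EI,  # EI u'''(L) + F = 0
    },
)
print(sp.expand(sol.rhs))
# p*x**4/(24*EI) - (F + L*p)*x**3/(6*EI) + (F*L + L**2*p/2)*x**2/(2*EI)
```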
Exercise: Consider the functional

$$F\{\phi\} = \int_0^1 f(x, \phi, \phi', \phi'')\, dx$$

defined on the set of admissible functions A consisting of functions φ that are defined and four times continuously differentiable on [0, 1] and that satisfy the four boundary conditions

$$\phi(0) = \phi_0, \qquad \phi'(0) = \phi_0', \qquad \phi(1) = \phi_1, \qquad \phi'(1) = \phi_1'.$$

Show that the function φ that extremizes F over the set A must satisfy the Euler equation

$$\frac{\partial f}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) + \frac{d^2}{dx^2}\left( \frac{\partial f}{\partial \phi''} \right) = 0 \quad \text{for } 0 < x < 1,$$

where, as before, the partial derivatives ∂f/∂φ, ∂f/∂φ′ and ∂f/∂φ″ are calculated by treating φ, φ′ and φ″ as if they are independent variables in f(x, φ, φ′, φ″).
7.5.3 Generalization: Multiple functions.

The functional F{φ} considered above involved a single function φ and its derivatives. One can consider functionals that involve multiple functions, for example a functional

$$F\{u, v\} = \int_0^1 f(x, u, u', v, v')\, dx$$
that involves two functions u(x) and v(x). We begin with the formulation and analysis of a specific example and then turn to some theory.

Example: The Timoshenko Beam. Consider a beam of length L, bending stiffness⁵ EI and shear stiffness GA. The beam is clamped at x = 0, it carries a distributed load p(x) along its length which acts in the −y-direction, and carries a concentrated force F at the end x = L, also in the −y-direction.

In the simplest model of a beam – the so-called Bernoulli-Euler model – the deformed state of the beam is completely defined by the deflection u(x) of the centerline (the neutral axis) of the beam. In that theory, shear deformations are neglected and therefore a cross-section of the beam remains perpendicular to the neutral axis even in the deformed state. Here we discuss a more general theory of beams, one that accounts for shear deformations.
Figure 7.8: The neutral axis of a beam in reference (straight) and deformed (curved) states. The bold lines represent a cross-section of the beam in the reference and deformed states. The thin line is perpendicular to the deformed neutral axis, so that in the classical Bernoulli-Euler theory of beams, where cross-sections remain perpendicular to the neutral axis, the thin line and the bold line would coincide. The angle between the vertical and the bold line is φ. The angle between the neutral axis and the horizontal, which equals the angle between the perpendicular to the neutral axis (the thin line) and the vertical dashed line, is u′. The decrease in the angle between the cross-section and the neutral axis is therefore u′ − φ.
In the theory considered here, a cross-section of the beam is not constrained to remain perpendicular to the neutral axis. Thus a deformed state of the beam is characterized by two fields: one, u(x), characterizes the deflection of the centerline of the beam at a location x, and the second, φ(x), characterizes the rotation of the cross-section at x. (In the Bernoulli-Euler model, φ(x) = u′(x) since for small angles, the rotation equals the slope.) The fact that the left hand end is clamped implies that the point x = 0 cannot deflect and that the cross-section at x = 0 cannot rotate. Thus we have the geometric boundary conditions

$$u(0) = 0, \qquad \phi(0) = 0. \tag{7.55}$$

Note that the zero rotation boundary condition is φ(0) = 0 and not u′(0) = 0.

In the more accurate beam theory discussed here, the so-called Timoshenko beam theory, one does not neglect shear deformations and so u(x) and φ(x) are (geometrically) independent functions. Since the shear strain is defined as the change in angle between two fibers that are initially at right angles to each other, the shear strain in the present situation is

$$\gamma(x) = u'(x) - \phi(x);$$

see Figure 7.8. Observe that in the Bernoulli-Euler theory γ(x) = 0.

⁵E and G are the Young's modulus and shear modulus of the material, while I and A are the second moment of cross-section and the area of the cross-section respectively.
Figure 7.9: Basic constitutive relationships for a beam: bending, M = EIφ′; shear, S = GAγ.
The basic equations of elasticity tell us that the moment-curvature relation for bending is

$$M(x) = EI\, \phi'(x)$$

and that the associated elastic energy per unit length of the beam, (1/2)Mφ′, is

$$\frac{1}{2} EI (\phi'(x))^2.$$

Similarly, we know from elasticity that the shear force-shear strain relation for a beam is⁶

$$S(x) = GA\, \gamma(x)$$

and that the associated elastic energy per unit length of the beam, (1/2)Sγ, is

$$\frac{1}{2} GA (\gamma(x))^2.$$

The total potential energy of the system is thus

$$\Phi = \Phi\{u, \phi\} = \int_0^L \left[ \frac{1}{2} EI (\phi'(x))^2 + \frac{1}{2} GA (u'(x) - \phi(x))^2 \right] dx - \int_0^L p\, u(x)\, dx - F\, u(L), \tag{7.56}$$

where the last two terms in this expression represent the potential energy of the distributed and concentrated loading respectively (and the negative signs arise because u is the deflection in the +y-direction while the loadings p and F are applied in the −y-direction). We allow for the possibility that p, EI and GA may vary along the length of the beam and therefore might be functions of x.

The displacement and rotation fields u(x) and φ(x) associated with an equilibrium configuration of the beam minimize the potential energy Φ{u, φ} over the admissible set A of test functions, where we take

$$A = \left\{ u(\cdot), \phi(\cdot) \;\middle|\; u : [0, L] \to \mathbb{R},\; \phi : [0, L] \to \mathbb{R},\; u \in C^2[0, L],\; \phi \in C^2[0, L],\; u(0) = 0,\; \phi(0) = 0 \right\}.$$

Note that all admissible functions are required to satisfy the geometric boundary conditions (7.55).
To find a minimizer of Φ we calculate its first variation, which from (7.56) is

$$\delta\Phi = \int_0^L \left[ EI \phi'\, \delta\phi' + GA(u' - \phi)(\delta u' - \delta\phi) \right] dx - \int_0^L p\, \delta u\, dx - F\, \delta u(L).$$

⁶Since the top and bottom faces of the differential element shown in Figure 7.9 are free of shear traction, we know that the element is not in a state of simple shear. Instead, the shear stress must vary with y such that it vanishes at the top and bottom. In engineering practice, this is taken into account approximately by replacing GA by κGA where the heuristic parameter κ ≈ 0.8–0.9.
Integrating the terms involving δu′ and δφ′ by parts gives

$$\delta\Phi = \left[ EI\phi'\, \delta\phi \right]_0^L - \int_0^L \frac{d}{dx}\left( EI\phi' \right) \delta\phi\, dx + \left[ GA(u' - \phi)\, \delta u \right]_0^L - \int_0^L \frac{d}{dx}\left[ GA(u' - \phi) \right] \delta u\, dx - \int_0^L GA(u' - \phi)\, \delta\phi\, dx - \int_0^L p\, \delta u\, dx - F\, \delta u(L).$$

Finally, on using the facts that an admissible variation must satisfy δu(0) = 0 and δφ(0) = 0, and collecting the like terms in the preceding equation, we are led to

$$\delta\Phi = EI\phi'(L)\, \delta\phi(L) + \left[ GA\left( u'(L) - \phi(L) \right) - F \right] \delta u(L) - \int_0^L \left[ \frac{d}{dx}\left( EI\phi' \right) + GA(u' - \phi) \right] \delta\phi(x)\, dx - \int_0^L \left[ \frac{d}{dx}\left[ GA(u' - \phi) \right] + p \right] \delta u(x)\, dx. \tag{7.57}$$
At a minimizer, we have δΦ = 0 for all admissible variations. Since the variations δu(x), δφ(x) are arbitrary on 0 < x < L and since δu(L) and δφ(L) are also arbitrary, it follows from (7.57) that the field equations

$$\frac{d}{dx}\left( EI\phi' \right) + GA(u' - \phi) = 0, \qquad \frac{d}{dx}\left[ GA(u' - \phi) \right] + p = 0, \qquad 0 < x < L, \tag{7.58}$$

and the natural boundary conditions

$$EI\phi'(L) = 0, \qquad GA\left( u'(L) - \phi(L) \right) = F \tag{7.59}$$

must hold.

Thus in summary, an equilibrium configuration of the beam is described by the deflection u(x) and rotation φ(x) that satisfy the differential equations (7.58) and the boundary conditions (7.55), (7.59). [Remark: Can you recover the Bernoulli-Euler theory from the Timoshenko theory in the limit as the shear rigidity GA → ∞?]
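For constant EI, GA and p the system (7.58) can be integrated directly, and the limit GA → ∞ answers the remark above. The sketch below (Python with SymPy; the successive-integration route and symbolic constants are an illustrative setup) constructs the solution satisfying (7.55) and (7.59) and then recovers the Bernoulli-Euler deflection in the limit.

```python
import sympy as sp

x, L, EI, GA, p, F = sp.symbols('x L EI GA p F', positive=True)

# (7.58)_2 integrated with natural BC (7.59)_2: GA(u' - phi) = F + p(L - x).
shear = F + p * (L - x)

# (7.58)_1: (EI phi')' = -GA(u' - phi); integrate with EI phi'(L) = 0, phi(0) = 0.
phi_p = sp.integrate(-shear, x)              # EI * phi', up to a constant
phi_p = (phi_p - phi_p.subs(x, L)) / EI      # enforce phi'(L) = 0
phi = sp.integrate(phi_p, x)                 # phi(0) = 0 already holds

# u' = phi + shear/GA; integrate with u(0) = 0.
u = sp.integrate(phi + shear / GA, x)

print(sp.expand(u))
print(sp.limit(u, GA, sp.oo))   # Bernoulli-Euler deflection, cf. (7.54)
```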
Exercise: Consider a smooth function f(x, y₁, y₂, . . . , yₙ, z₁, z₂, . . . , zₙ) defined for all x, y₁, . . . , yₙ, z₁, . . . , zₙ. Let φ₁(x), φ₂(x), . . . , φₙ(x) be n once-continuously differentiable functions on [0, 1] with φᵢ(0) = aᵢ, φᵢ(1) = bᵢ. Let F be the functional defined by

$$F\{\phi_1, \phi_2, \ldots, \phi_n\} = \int_0^1 f(x, \phi_1, \phi_2, \ldots, \phi_n, \phi_1', \phi_2', \ldots, \phi_n')\, dx \tag{7.60}$$

on the set of all such admissible functions. Show that the functions φ₁(x), φ₂(x), . . . , φₙ(x) that extremize F must necessarily satisfy the n Euler equations

$$\frac{d}{dx}\left( \frac{\partial f}{\partial \phi_i'} \right) - \frac{\partial f}{\partial \phi_i} = 0 \quad \text{for } 0 < x < 1, \quad (i = 1, 2, \ldots, n). \tag{7.61}$$
7.5.4 Generalization: End point of extremal lying on a curve.

Consider the set A of all functions that describe curves in the x, y-plane that commence from a given point (0, a) and end at some point on the curve G(x, y) = 0. We wish to minimize a functional F{φ} over this set of functions.

Figure 7.10: Curve joining (0, a) to an arbitrary point on the given curve G(x, y) = 0.
Suppose that φ(x) ∈ A is a minimizer of F. Let x = x_R be the abscissa of the point at which the curve y = φ(x) intersects the curve G(x, y) = 0. Observe that x_R is not known a priori and is to be determined along with φ. Moreover, note that the abscissa of the point at which a neighboring curve defined by y = φ(x) + δφ(x) intersects the curve G = 0 is not x_R but x_R + δx_R; see Figure 7.10.
At the minimizer,

$$F\{\phi\} = \int_0^{x_R} f(x, \phi, \phi')\, dx$$

and at a neighboring test function

$$F\{\phi + \delta\phi\} = \int_0^{x_R + \delta x_R} f(x, \phi + \delta\phi, \phi' + \delta\phi')\, dx.$$

Therefore on calculating the first variation δF, which equals the linearized form of F{φ + δφ} − F{φ}, we find

$$\delta F = \int_0^{x_R + \delta x_R} \left[ f(x, \phi, \phi') + f_\phi(x, \phi, \phi')\, \delta\phi + f_{\phi'}(x, \phi, \phi')\, \delta\phi' \right] dx - \int_0^{x_R} f(x, \phi, \phi')\, dx,$$

where we have set f_φ = ∂f/∂φ and f_φ′ = ∂f/∂φ′. This leads to

$$\delta F = \int_{x_R}^{x_R + \delta x_R} f(x, \phi, \phi')\, dx + \int_0^{x_R} \left[ f_\phi(x, \phi, \phi')\, \delta\phi + f_{\phi'}(x, \phi, \phi')\, \delta\phi' \right] dx,$$

which in turn reduces to

$$\delta F = f\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta x_R + \int_0^{x_R} \left[ f_\phi\, \delta\phi + f_{\phi'}\, \delta\phi' \right] dx.$$

Thus setting the first variation δF equal to zero gives

$$f\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta x_R + \int_0^{x_R} \left[ f_\phi\, \delta\phi + f_{\phi'}\, \delta\phi' \right] dx = 0.$$
After integrating the last term by parts we get

$$f\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta x_R + \left[ f_{\phi'}\, \delta\phi \right]_0^{x_R} + \int_0^{x_R} \left[ f_\phi - \frac{d}{dx} f_{\phi'} \right] \delta\phi\, dx = 0,$$

which, on using the fact that δφ(0) = 0, reduces to

$$f\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta x_R + f_{\phi'}\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta\phi(x_R) + \int_0^{x_R} \left[ f_\phi - \frac{d}{dx} f_{\phi'} \right] \delta\phi\, dx = 0. \tag{7.62}$$
First limit attention to the subset of all test functions that terminate at the same point (x_R, φ(x_R)) as the minimizer. In this case δx_R = 0 and δφ(x_R) = 0, and so the first two terms in (7.62) vanish. Since this specialized version of equation (7.62) must hold for all such variations δφ(x), this leads to the Euler equation

$$f_\phi - \frac{d}{dx} f_{\phi'} = 0, \qquad 0 \leq x \leq x_R. \tag{7.63}$$
We now return to arbitrary admissible test functions. Substituting (7.63) into (7.62) gives

$$f\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta x_R + f_{\phi'}\left( x_R, \phi(x_R), \phi'(x_R) \right) \delta\phi(x_R) = 0, \tag{7.64}$$
which must hold for all admissible δx_R and δφ(x_R). It is important to observe that since admissible test curves must end on the curve G = 0, the quantities δx_R and δφ(x_R) are not independent of each other. Thus (7.64) does not hold for all δx_R and δφ(x_R); only for those that are consistent with this geometric requirement. The requirement that the minimizing curve and the neighboring test curve terminate on the curve G(x, y) = 0 implies that

$$G(x_R, \phi(x_R)) = 0, \qquad G\left( x_R + \delta x_R,\; \phi(x_R + \delta x_R) + \delta\phi(x_R + \delta x_R) \right) = 0.$$
Note that linearization gives

$$\begin{aligned} G\left( x_R + \delta x_R,\; \phi(x_R + \delta x_R) + \delta\phi(x_R + \delta x_R) \right) &= G\left( x_R + \delta x_R,\; \phi(x_R) + \phi'(x_R)\,\delta x_R + \delta\phi(x_R) \right) \\ &= G(x_R, \phi(x_R)) + G_x(x_R, \phi(x_R))\, \delta x_R + G_y(x_R, \phi(x_R)) \left[ \phi'(x_R)\,\delta x_R + \delta\phi(x_R) \right] \\ &= G(x_R, \phi(x_R)) + \left[ G_x(x_R, \phi(x_R)) + \phi'(x_R)\, G_y(x_R, \phi(x_R)) \right] \delta x_R + G_y(x_R, \phi(x_R))\, \delta\phi(x_R), \end{aligned}$$

where we have set G_x = ∂G/∂x and G_y = ∂G/∂y. Setting δG = G(x_R + δx_R, φ(x_R + δx_R) + δφ(x_R + δx_R)) − G(x_R, φ(x_R)) = 0 thus leads to the following relation between the variations δx_R and δφ(x_R):

$$\left[ G_x(x_R, \phi(x_R)) + \phi'(x_R)\, G_y(x_R, \phi(x_R)) \right] \delta x_R + G_y(x_R, \phi(x_R))\, \delta\phi(x_R) = 0. \tag{7.65}$$
Thus (7.64) must hold for all δx_R and δφ(x_R) that satisfy (7.65). This implies that⁷

$$\begin{aligned} f\left( x_R, \phi(x_R), \phi'(x_R) \right) - \lambda \left[ G_x(x_R, \phi(x_R)) + \phi'(x_R)\, G_y(x_R, \phi(x_R)) \right] &= 0, \\ f_{\phi'}\left( x_R, \phi(x_R), \phi'(x_R) \right) - \lambda\, G_y(x_R, \phi(x_R)) &= 0, \end{aligned}$$

for some constant λ (referred to as a Lagrange multiplier). We can use the second equation above to simplify the first equation, which then leads to the pair of equations

$$\begin{aligned} f\left( x_R, \phi(x_R), \phi'(x_R) \right) - \phi'(x_R)\, f_{\phi'}\left( x_R, \phi(x_R), \phi'(x_R) \right) - \lambda\, G_x(x_R, \phi(x_R)) &= 0, \\ f_{\phi'}\left( x_R, \phi(x_R), \phi'(x_R) \right) - \lambda\, G_y(x_R, \phi(x_R)) &= 0. \end{aligned} \tag{7.66}$$
⁷It may be helpful to recall from calculus that if we are to minimize a function I(ε₁, ε₂), we must satisfy the condition dI = (∂I/∂ε₁)dε₁ + (∂I/∂ε₂)dε₂ = 0. But if this minimization is carried out subject to the side constraint J(ε₁, ε₂) = 0 then we must respect the side condition dJ = (∂J/∂ε₁)dε₁ + (∂J/∂ε₂)dε₂ = 0. Under these circumstances, one finds that one must require the conditions ∂I/∂ε₁ = λ∂J/∂ε₁, ∂I/∂ε₂ = λ∂J/∂ε₂, where the Lagrange multiplier λ is unknown and is also to be determined. The constraint equation J = 0 provides the extra condition required for this purpose.
Equation (7.66) provides two natural boundary conditions at the right hand end x = x_R.

In summary: an extremal φ(x) must satisfy the differential equation (7.63) on 0 ≤ x ≤ x_R, the boundary condition φ = a at x = 0, the two natural boundary conditions (7.66) at x = x_R, and the equation G(x_R, φ(x_R)) = 0. (Note that the presence of the additional unknown λ is compensated for by the imposition of the additional condition G(x_R, φ(x_R)) = 0.)
Example: Suppose that G(x, y) = c₁x + c₂y + c₃ and that we are to find the curve of shortest length that commences from (0, a) and ends on G = 0.

Since ds = √(dx² + dy²) = √(1 + (φ′)²) dx, we are to minimize the functional

$$F = \int_0^{x_R} \sqrt{1 + (\phi')^2}\, dx.$$

Thus

$$f(x, \phi, \phi') = \sqrt{1 + (\phi')^2}, \qquad f_\phi(x, \phi, \phi') = 0, \qquad f_{\phi'}(x, \phi, \phi') = \frac{\phi'}{\sqrt{1 + (\phi')^2}}. \tag{7.67}$$

On using (7.67), the Euler equation (7.63) can be integrated immediately to give

$$\phi'(x) = \text{constant} \quad \text{for } 0 \leq x \leq x_R.$$

The boundary condition at the left hand end is

$$\phi(0) = a,$$

while the boundary conditions (7.66) at the right hand end give

$$\frac{1}{\sqrt{1 + (\phi'(x_R))^2}} = \lambda c_1, \qquad \frac{\phi'(x_R)}{\sqrt{1 + (\phi'(x_R))^2}} = \lambda c_2.$$

Finally the condition G(x_R, φ(x_R)) = 0 requires that

$$c_1 x_R + c_2 \phi(x_R) + c_3 = 0.$$

Solving the preceding equations leads to the minimizer

$$\phi(x) = (c_2/c_1)\, x + a \quad \text{for } 0 \leq x \leq -\frac{c_1(a c_2 + c_3)}{c_1^2 + c_2^2}.$$
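Observe that the extremal is a straight line of slope c₂/c₁, while the line G = 0 has slope −c₁/c₂; the product of these slopes is −1, so the shortest curve meets the constraint line at a right angle, exactly as elementary geometry predicts.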
7.6 Constrained Minimization

7.6.1 Integral constraints.

Consider a problem of the following general form: find admissible functions φ₁(x), φ₂(x) that minimize

$$F\{\phi_1, \phi_2\} = \int_0^1 f(x, \phi_1(x), \phi_2(x), \phi_1'(x), \phi_2'(x))\, dx \tag{7.68}$$

subject to the constraint

$$G\{\phi_1, \phi_2\} = \int_0^1 g(x, \phi_1(x), \phi_2(x), \phi_1'(x), \phi_2'(x))\, dx = 0, \tag{7.69}$$

where g, like f, is a given smooth function.
For reasons of clarity we shall return to the more detailed approach where we introduce parameters ε₁, ε₂ and functions η₁(x), η₂(x), rather than following the formal approach using variations δφ₁(x), δφ₂(x). Accordingly, suppose that the pair φ₁(x), φ₂(x) is the minimizer. By evaluating F and G on a family of neighboring admissible functions φ₁(x) + ε₁η₁(x), φ₂(x) + ε₂η₂(x) we have

$$\hat F(\varepsilon_1, \varepsilon_2) = F\{\phi_1 + \varepsilon_1\eta_1, \phi_2 + \varepsilon_2\eta_2\}, \qquad \hat G(\varepsilon_1, \varepsilon_2) = G\{\phi_1 + \varepsilon_1\eta_1, \phi_2 + \varepsilon_2\eta_2\} = 0. \tag{7.70}$$

If we begin by keeping η₁ and η₂ fixed, this is a classical minimization problem for a function of two variables: we are to minimize the function F̂(ε₁, ε₂) with respect to the variables ε₁ and ε₂, subject to the constraint Ĝ(ε₁, ε₂) = 0. A necessary condition for this is that dF̂(ε₁, ε₂) = 0, i.e. that

$$d\hat F = \frac{\partial \hat F}{\partial \varepsilon_1}\, d\varepsilon_1 + \frac{\partial \hat F}{\partial \varepsilon_2}\, d\varepsilon_2 = 0, \tag{7.71}$$

for all dε₁, dε₂ that are consistent with the constraint. Because of the constraint, dε₁ and dε₂ cannot be varied independently. Instead the constraint requires that they be related by

$$d\hat G = \frac{\partial \hat G}{\partial \varepsilon_1}\, d\varepsilon_1 + \frac{\partial \hat G}{\partial \varepsilon_2}\, d\varepsilon_2 = 0. \tag{7.72}$$
If we didn't have the constraint, then (7.71) would imply the usual requirements ∂F̂/∂ε₁ = ∂F̂/∂ε₂ = 0. However when the constraint equation (7.72) holds, (7.71) only requires that

$$\frac{\partial \hat F}{\partial \varepsilon_1} = \lambda\, \frac{\partial \hat G}{\partial \varepsilon_1}, \qquad \frac{\partial \hat F}{\partial \varepsilon_2} = \lambda\, \frac{\partial \hat G}{\partial \varepsilon_2}, \tag{7.73}$$

for some constant λ, or equivalently

$$\frac{\partial}{\partial \varepsilon_1}\left( \hat F - \lambda \hat G \right) = 0, \qquad \frac{\partial}{\partial \varepsilon_2}\left( \hat F - \lambda \hat G \right) = 0. \tag{7.74}$$

Therefore minimizing F̂ subject to the constraint Ĝ = 0 is equivalent to minimizing F̂ − λĜ without regard to the constraint; λ is known as a Lagrange multiplier. Proceeding from here on leads to the Euler equation associated with F − λG. The presence of the additional unknown parameter λ is balanced by the availability of the constraint equation G = 0.
Example: Consider a heavy inextensible cable of mass per unit length m that hangs under
gravity. The two ends of the cable are held at the same vertical height, a distance 2H apart.
The cable has a given length L. We know from physics that the cable adopts a shape that
minimizes the potential energy. We are asked to determine this shape.
Let y = φ(x), −H ≤ x ≤ H, describe an admissible shape of the cable. The potential energy of the cable is determined by integrating mgφ with respect to the arc length s along the cable which, since ds = √(dx² + dy²) = √(1 + (φ′)²) dx, is given by

$$V\{\phi\} = \int_0^L mg\,\phi\, ds = mg \int_{-H}^{H} \phi\, \sqrt{1 + (\phi')^2}\, dx. \tag{7.75}$$

Since the cable is inextensible, its length

$$\ell\{\phi\} = \int_0^L ds = \int_{-H}^{H} \sqrt{1 + (\phi')^2}\, dx \tag{7.76}$$
must equal L. Therefore we are asked to find a function φ(x), with φ(−H) = φ(H), that minimizes V{φ} subject to the constraint ℓ{φ} = L. According to the theory developed above, this function must satisfy the Euler equation associated with the functional V{φ} − λℓ{φ} where the Lagrange multiplier λ is a constant. The resulting boundary value problem together with the constraint ℓ = L yields the shape of the cable φ(x).

Calculating the first variation of V − λmgℓ (it is convenient to write the multiplier as λmg), where the constant λ is a Lagrange multiplier, leads to the Euler equation

$$\frac{d}{dx}\left[ (\phi - \lambda)\, \frac{\phi'}{\sqrt{1 + (\phi')^2}} \right] - \sqrt{1 + (\phi')^2} = 0, \qquad -H < x < H.$$

This can be integrated once to yield

$$\phi' = \sqrt{\frac{(\phi - \lambda)^2}{c^2} - 1}$$

where c is a constant of integration. Integrating this again leads to

$$\phi(x) = c \cosh[(x + d)/c] + \lambda, \qquad -H < x < H,$$

where d is a second constant of integration. By symmetry, we must have φ′(0) = 0 and therefore d = 0. Thus

$$\phi(x) = c \cosh(x/c) + \lambda, \qquad -H < x < H. \tag{7.77}$$
The constant λ in (7.77) is simply a reference height. For example we could take the x-axis to pass through the two pegs, in which case φ(±H) = 0 and then λ = −c cosh(H/c), and so

$$\phi(x) = c \left[ \cosh(x/c) - \cosh(H/c) \right], \qquad -H < x < H. \tag{7.78}$$

Substituting (7.78) into the constraint condition ℓ{φ} = L, with ℓ given by (7.76), yields

$$L = 2c \sinh(H/c). \tag{7.79}$$

Thus in summary, if equation (7.79) can be solved for c, then (7.78) gives the equation describing the shape of the cable.
All that remains is to examine the solvability of (7.79). To this end set z = H/c and µ = L/(2H). Then we must solve sinh z = µz where µ > 1 is a constant. (The requirement µ > 1 follows from the physical necessity that the distance between the pegs, 2H, be less than the length of the rope, L.) One can show that as z increases from 0 to ∞, the function sinh z − µz starts from the value 0, decreases monotonically to some finite negative value at some z = z* > 0, and then increases monotonically to ∞. Thus for each µ > 1 the function sinh z − µz vanishes at some unique positive value of z. Consequently (7.79) has a unique root c > 0.
7.6.2 Algebraic constraints

Now consider a problem of the following general type: find a pair of admissible functions φ₁(x), φ₂(x) that minimizes

$$\int_0^1 f(x, \phi_1, \phi_2, \phi_1', \phi_2')\, dx$$

subject to the algebraic constraint

$$g(x, \phi_1(x), \phi_2(x)) = 0 \quad \text{for } 0 \leq x \leq 1.$$

One can show that a necessary condition is that the minimizer should satisfy the Euler equation associated with f − λg. In this problem the Lagrange multiplier λ may be a function of x.
Example: Consider a conical surface characterized by

$$g(x_1, x_2, x_3) = x_1^2 + x_2^2 - R^2(x_3) = 0, \qquad R(x_3) = x_3 \tan\alpha, \quad x_3 > 0.$$

Let P = (p₁, p₂, p₃) and Q = (q₁, q₂, q₃), q₃ > p₃, be two points on this surface. A smooth wire lies entirely on the conical surface and joins the points P and Q. A bead slides along the wire under gravity, beginning at rest from P. From among all such wires, we are to find the one that gives the minimum travel time.

Figure 7.11: A curve that joins the points (p₁, p₂, p₃) and (q₁, q₂, q₃) and lies on the conical surface x₁² + x₂² − x₃² tan²α = 0.
Suppose that the wire can be described parametrically by x₁ = φ₁(x₃), x₂ = φ₂(x₃) for p₃ ≤ x₃ ≤ q₃. (Not all of the permissible curves can be described this way and so by using this characterization we are limiting ourselves to a subset of all the permitted curves.) Since the curve has to lie on the conical surface it is necessary that

$$g(\phi_1(x_3), \phi_2(x_3), x_3) = 0, \qquad p_3 \leq x_3 \leq q_3. \tag{7.80}$$
The travel time is found by integrating ds/v along the path. The arc length ds along the path is given by

$$ds = \sqrt{dx_1^2 + dx_2^2 + dx_3^2} = \sqrt{(\phi_1')^2 + (\phi_2')^2 + 1}\; dx_3.$$

The conservation of energy tells us that ½mv²(t) − mgx₃(t) = −mgp₃, or

$$v = \sqrt{2g(x_3 - p_3)}.$$

Therefore the travel time is

$$T\{\phi_1, \phi_2\} = \int_{p_3}^{q_3} \frac{\sqrt{(\phi_1')^2 + (\phi_2')^2 + 1}}{\sqrt{2g(x_3 - p_3)}}\; dx_3.$$
Our task is to minimize T{φ₁, φ₂} over the set of admissible functions

$$A = \left\{ (\phi_1, \phi_2) \;\middle|\; \phi_i : [p_3, q_3] \to \mathbb{R},\; \phi_i \in C^2[p_3, q_3],\; \phi_i(p_3) = p_i,\; \phi_i(q_3) = q_i,\; i = 1, 2 \right\},$$

subject to the constraint

$$g(\phi_1(x_3), \phi_2(x_3), x_3) = 0, \qquad p_3 \leq x_3 \leq q_3.$$

According to the theory developed above, the solution is given by solving the Euler equations associated with f − λ(x₃)g, where

$$f(x_3, \phi_1, \phi_2, \phi_1', \phi_2') = \frac{\sqrt{(\phi_1')^2 + (\phi_2')^2 + 1}}{\sqrt{2g(x_3 - p_3)}} \quad \text{and} \quad g(x_1, x_2, x_3) = x_1^2 + x_2^2 - x_3^2 \tan^2\alpha,$$

subject to the prescribed conditions at the ends and the constraint g(x₁, x₂, x₃) = 0.
7.6.3 Differential constraints

Now consider a problem of the following general type: find a pair of admissible functions φ₁(x), φ₂(x) that minimizes

$$\int_0^1 f(x, \phi_1, \phi_2, \phi_1', \phi_2')\, dx$$

subject to the differential equation constraint

$$g(x, \phi_1(x), \phi_2(x), \phi_1'(x), \phi_2'(x)) = 0 \quad \text{for } 0 \leq x \leq 1.$$

Suppose that the constraint is not integrable, i.e. suppose that there does not exist a function h(x, φ₁(x), φ₂(x)) such that g = dh/dx. (In dynamics, such constraints are called non-holonomic.) One can show that it is necessary that the minimizer satisfy the Euler equation associated with f − λg. In these problems, the Lagrange multiplier λ may be a function of x.
Example: Determine functions φ₁(x) and φ₂(x) that minimize

$$\int_0^1 f(x, \phi_1, \phi_1', \phi_2')\, dx$$

over an admissible set of functions subject to the non-holonomic constraint

$$g(x, \phi_1, \phi_2, \phi_1', \phi_2') = \phi_2 - \phi_1' = 0 \quad \text{for } 0 \leq x \leq 1. \tag{7.81}$$

According to the theory above, the minimizers satisfy the Euler equations

$$\frac{d}{dx}\left( \frac{\partial h}{\partial \phi_1'} \right) - \frac{\partial h}{\partial \phi_1} = 0, \qquad \frac{d}{dx}\left( \frac{\partial h}{\partial \phi_2'} \right) - \frac{\partial h}{\partial \phi_2} = 0 \quad \text{for } 0 < x < 1, \tag{7.82}$$

where h = f − λg. On substituting for f and g, these Euler equations reduce to

$$\frac{d}{dx}\left( \frac{\partial f}{\partial \phi_1'} + \lambda \right) - \frac{\partial f}{\partial \phi_1} = 0, \qquad \frac{d}{dx}\left( \frac{\partial f}{\partial \phi_2'} \right) + \lambda = 0 \quad \text{for } 0 < x < 1. \tag{7.83}$$

Thus the functions φ₁(x), φ₂(x), λ(x) are determined from the three differential equations (7.81), (7.83).
Remark: Note, by substituting the constraint φ₂ = φ₁′ into the integrand of the functional, that we can equivalently pose this problem as one for determining the single function φ₁(x) that minimizes

$$\int_0^1 f(x, \phi_1, \phi_1', \phi_1'')\, dx$$

over an admissible set of functions.
7.7 Piecewise smooth minimizers. Weierstrass-Erdmann corner conditions.

In order to motivate the discussion to follow, first consider the problem of minimizing the functional

$$F\{\phi\} = \int_0^1 \left( (\phi')^2 - 1 \right)^2 dx \tag{7.84}$$

over functions φ with φ(0) = φ(1) = 0.

This is apparently a problem of the classical type where in the present case we are to minimize the integral of f(x, φ, φ′) = [(φ′)² − 1]² with respect to x over the interval [0, 1]. Assuming that the class of admissible functions consists of those that are C¹[0, 1] and satisfy φ(0) = φ(1) = 0, the minimizer must necessarily satisfy the Euler equation d/dx(∂f/∂φ′) − ∂f/∂φ = 0. In the present case this specializes to 2[(φ′)² − 1](2φ′) = constant for 0 ≤ x ≤ 1, which in turn gives φ′(x) = constant for 0 ≤ x ≤ 1. On enforcing the boundary conditions φ(0) = φ(1) = 0, this gives φ(x) = 0 for 0 ≤ x ≤ 1. This is an extremizer of F{φ} over the class of admissible functions under consideration. It is readily seen from (7.84) that the value of F at this particular function φ(x) = 0 is F = 1.
Note from (7.84) that F ≥ 0. It is natural to wonder whether there is a function φ∗(x) that gives F{φ∗} = 0. If so, φ∗ would be a minimizer. If there is such a function φ∗, we know that it cannot belong to the class of admissible functions considered above, since if it did, we would have found it from the preceding calculation. Therefore if there is a function φ∗ of this type, it does not belong to the set of functions A. The functions in A were required to be C¹[0, 1] and to vanish at the two ends x = 0 and x = 1. Since φ∗ ∉ A it must not satisfy one or both of these two conditions. The problem statement requires that the boundary conditions must hold. Therefore it must be true that φ∗ is not as smooth as C¹[0, 1].

If there is a function φ∗ such that F{φ∗} = 0, it follows from the non-negative character of the integrand in (7.84) that the integrand itself should vanish almost everywhere in [0, 1]. This requires that φ∗′(x) = ±1 almost everywhere in [0, 1]. The piecewise linear function

$$\phi_*(x) = \begin{cases} x & \text{for } 0 \leq x \leq 1/2, \\ 1 - x & \text{for } 1/2 \leq x \leq 1, \end{cases} \tag{7.85}$$

has this property. It is continuous, is piecewise C¹, and gives F{φ∗} = 0. Moreover φ∗(x) satisfies the Euler equation except at x = 1/2.
But is it legitimate for us to consider piecewise smooth functions? If so, are there any restrictions that we must enforce? Physical problems involving discontinuities in certain physical fields or their derivatives often arise when, for example, the problem concerns an interface separating two different materials. A specific example will be considered below.
7.7.1 Piecewise smooth minimizer with non-smoothness occurring at a prescribed location.

Suppose that we wish to extremize the functional

$$F\{\phi\} = \int_0^1 f(x, \phi, \phi')\, dx$$

over some suitable set of admissible functions, and suppose further that we know that the extremal φ(x) is continuous but has a discontinuity in its slope at x = s: i.e. φ′(s−) ≠ φ′(s+), where φ′(s±) denote the limiting values of φ′(s ± ε) as ε → 0. Thus the set of admissible functions is composed of all functions that are smooth on either side of x = s, that are continuous at x = s, and that satisfy the given boundary conditions φ(0) = φ₀, φ(1) = φ₁:

$$A = \left\{ \phi(\cdot) \;\middle|\; \phi : [0,1] \to \mathbb{R},\; \phi \in C^1([0,s) \cup (s,1]),\; \phi \in C[0,1],\; \phi(0) = \phi_0,\; \phi(1) = \phi_1 \right\}.$$

Observe that an admissible function is required to be continuous on [0, 1], required to have a continuous first derivative on either side of x = s, and its first derivative is permitted to have a jump discontinuity at the given location x = s.
Figure 7.12: Extremal φ(x) and a neighboring test function φ(x) + δφ(x), both with kinks at x = s.
Suppose that F is extremized by a function φ(x) ∈ A and suppose that this extremal has a jump discontinuity in its first derivative at x = s. Let δφ(x) be an admissible variation, meaning that the neighboring function φ(x) + δφ(x) is also in A: it is C¹ on [0, s) ∪ (s, 1] and may have a jump discontinuity in its first derivative at the location x = s; see Figure 7.12. This implies that

$$\delta\phi(x) \in C[0,1], \qquad \delta\phi(x) \in C^1([0,s) \cup (s,1]), \qquad \delta\phi(0) = \delta\phi(1) = 0.$$

In view of the lack of smoothness at x = s it is convenient to split the integral into two parts and write

$$F\{\phi\} = \int_0^s f(x, \phi, \phi')\, dx + \int_s^1 f(x, \phi, \phi')\, dx,$$

and

$$F\{\phi + \delta\phi\} = \int_0^s f(x, \phi + \delta\phi, \phi' + \delta\phi')\, dx + \int_s^1 f(x, \phi + \delta\phi, \phi' + \delta\phi')\, dx.$$

Upon calculating δF, which by definition equals F{φ + δφ} − F{φ} up to terms linear in δφ, and setting δF = 0, we obtain

$$\int_0^s \left( f_\phi\, \delta\phi + f_{\phi'}\, \delta\phi' \right) dx + \int_s^1 \left( f_\phi\, \delta\phi + f_{\phi'}\, \delta\phi' \right) dx = 0.$$
Integrating the terms involving δφ′ by parts leads to

$$\int_0^s \left[ f_\phi - \frac{d}{dx}(f_{\phi'}) \right] \delta\phi\, dx + \int_s^1 \left[ f_\phi - \frac{d}{dx}(f_{\phi'}) \right] \delta\phi\, dx + \left[ \frac{\partial f}{\partial \phi'}\, \delta\phi \right]_{x=0}^{s-} + \left[ \frac{\partial f}{\partial \phi'}\, \delta\phi \right]_{x=s+}^{1} = 0.$$

However, since δφ(0) = δφ(1) = 0, this simplifies to

$$\int_0^1 \left[ \frac{\partial f}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) \right] \delta\phi(x)\, dx + \left[ \left. \frac{\partial f}{\partial \phi'} \right|_{x=s-} - \left. \frac{\partial f}{\partial \phi'} \right|_{x=s+} \right] \delta\phi(s) = 0.$$
First, if we limit attention to variations that are such that δφ(s) = 0, the second term in the equation above vanishes, and only the integral remains. Since δφ(x) can be chosen arbitrarily for all x ∈ (0, 1), x ≠ s, this implies that the term within the brackets in the integrand must vanish at each of these x's. This leads to the Euler equation

$$\frac{\partial f}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) = 0 \quad \text{for } 0 < x < 1,\; x \neq s.$$

Second, when this is substituted back into the equation above it, the integral now disappears. Since the resulting equation must hold for all variations δφ(s), it follows that we must have

$$\left. \frac{\partial f}{\partial \phi'} \right|_{x=s-} = \left. \frac{\partial f}{\partial \phi'} \right|_{x=s+}$$

at x = s. This is a "matching condition" or "jump condition" that relates the solution on the left of x = s to the solution on its right. The matching condition shows that even though φ′ has a jump discontinuity at x = s, the quantity ∂f/∂φ′ is continuous at this point.
Thus in summary an extremal φ must obey the following boundary value problem:

$$\left. \begin{aligned} &\frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right) - \frac{\partial f}{\partial \phi} = 0 \quad \text{for } 0 < x < 1,\; x \neq s, \\ &\phi(0) = \phi_0, \qquad \phi(1) = \phi_1, \\ &\left. \frac{\partial f}{\partial \phi'} \right|_{x=s-} = \left. \frac{\partial f}{\partial \phi'} \right|_{x=s+} \quad \text{at } x = s. \end{aligned} \right\} \tag{7.86}$$
Figure 7.13: Ray of light in a two-phase material; the materials occupying x < 0 and x > 0 have refractive indices n₁(x, y) and n₂(x, y), and the ray joins (a, A) to (b, B), making angles θ⁻ and θ⁺ with the x-axis on either side of the interface x = 0.
Example: Consider a two-phase material that occupies all of x, y-space. The material occupying x < 0 is different from the material occupying x > 0, and so x = 0 is the interface between the two materials. In particular, suppose that the refractive indices of the materials occupying x < 0 and x > 0 are n₁(x, y) and n₂(x, y) respectively; see Figure 7.13. We are asked to determine the path y = φ(x), a ≤ x ≤ b, followed by a ray of light travelling from a point (a, A) in the left half-plane to the point (b, B) in the right half-plane. In particular, we are to determine the conditions at the point where the ray crosses the interface between the two media.
According to Fermat's principle, a ray of light travelling between two given points follows the path that it can traverse in the shortest possible time. Also, we know that light travels at a speed c/n(x, y) where n(x, y) is the index of refraction at the point (x, y). Thus the transit time is determined by integrating n/c along the path followed by the light, which, since ds = √(1 + (φ′)²) dx, can be written as

$$T\{\phi\} = \int_a^b \frac{1}{c}\, n(x, \phi(x))\, \sqrt{1 + (\phi')^2}\, dx.$$

Thus the problem at hand is to determine φ that minimizes the functional T{φ} over the set of admissible functions

$$A = \left\{ \phi(\cdot) \;\middle|\; \phi \in C[a, b],\; \phi \in C^1([a, 0) \cup (0, b]),\; \phi(a) = A,\; \phi(b) = B \right\}.$$

Note that this set of admissible functions allows the path followed by the light to have a kink at x = 0 even though the path is continuous.

The functional we are asked to minimize can be written in the standard form

$$T\{\phi\} = \int_a^b f(x, \phi, \phi')\, dx \quad \text{where} \quad f(x, \phi, \phi') = \frac{n(x, \phi)}{c}\, \sqrt{1 + (\phi')^2}.$$
Therefore

$$\frac{\partial f}{\partial \phi'} = \frac{n(x, \phi)}{c}\, \frac{\phi'}{\sqrt{1 + (\phi')^2}},$$

and so the matching condition at the kink at x = 0 requires that

$$\frac{n}{c}\, \frac{\phi'}{\sqrt{1 + (\phi')^2}} \quad \text{be continuous at } x = 0.$$

Observe that, if θ is the angle made by the ray of light with the x-axis at some point along its path, then tan θ = φ′ and so sin θ = φ′/√(1 + (φ′)²). Therefore the matching condition requires that n sin θ be continuous, or

$$n^+ \sin\theta^+ = n^- \sin\theta^-,$$

where n± and θ± are the limiting values of n(x, φ(x)) and θ(x) as x → 0±. This is Snell's well-known law of refraction.
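This can be checked numerically without any calculus of variations: for piecewise-constant indices, the fastest piecewise-straight path crosses the interface at a height found by one-dimensional minimization, and the crossing satisfies Snell's law. A minimal sketch (Python with SciPy; the endpoints and indices are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

n1, n2 = 1.0, 1.5                           # indices for x < 0 and x > 0 (illustrative)
(a, A), (b, B) = (-1.0, 1.0), (1.0, -1.0)   # ray endpoints (illustrative)

def transit_time(y):
    # Straight segments (a, A) -> (0, y) -> (b, B); time = n * length (take c = 1).
    return n1*np.hypot(0.0 - a, y - A) + n2*np.hypot(b - 0.0, B - y)

y = minimize_scalar(transit_time, bounds=(-5.0, 5.0), method='bounded').x

# sin(theta) with theta measured from the x-axis, as in the text.
sin_minus = (y - A) / np.hypot(-a, y - A)
sin_plus  = (B - y) / np.hypot(b, B - y)
print(n1*sin_minus, n2*sin_plus)   # the two values agree: n- sin(theta-) = n+ sin(theta+)
```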
7.7.2 Piecewise smooth minimizer with non-smoothness occurring at an unknown location

Suppose again that we wish to extremize the functional

$$F\{\phi\} = \int_0^1 f(x, \phi, \phi')\, dx$$

over the admissible set of functions

$$A = \left\{ \phi(\cdot) \;\middle|\; \phi : [0,1] \to \mathbb{R},\; \phi \in C[0,1],\; \phi \in C^1_p[0,1],\; \phi(0) = a,\; \phi(1) = b \right\}.$$

Just as before, the admissible functions are continuous and have a piecewise continuous first derivative. However in contrast to the preceding case, if there is a discontinuity in the first derivative of φ at some location x = s, the location s is not known a priori and so is also to be determined.
Figure 7.14: Extremal φ(x) with a kink at x = s and a neighboring test function φ(x) + δφ(x) with kinks at x = s and s + δs.
Suppose that F is extremized by the function φ(x) and that it has a jump discontinuity in its first derivative at x = s; (we shall say that φ has a "kink" at x = s). Suppose further that φ is C¹ on either side of x = s. Consider a variation δφ(x) that vanishes at the two ends x = 0 and x = 1, is continuous on [0, 1], and is C¹ on [0, 1] except at x = s + δs, where it has a jump discontinuity in its first derivative:

$$\delta\phi \in C[0,1], \qquad \delta\phi \in C^1([0, s + \delta s) \cup (s + \delta s, 1]), \qquad \delta\phi(0) = \delta\phi(1) = 0.$$

Note that φ(x) + δφ(x) has kinks at both x = s and x = s + δs. Note further that we have varied both the function φ(x) and the location of the kink s. See Figure 7.14.
Since the extremal φ(x) has a kink at x = s it is convenient to split the integral and express F{φ} as

$$F\{\phi\} = \int_0^s f(x, \phi, \phi')\, dx + \int_s^1 f(x, \phi, \phi')\, dx.$$

Similarly, since the neighboring function φ(x) + δφ(x) has kinks at x = s and x = s + δs, it is convenient to express F{φ + δφ} by splitting the integral into three terms as follows:

$$F\{\phi + \delta\phi\} = \int_0^s f(x, \phi + \delta\phi, \phi' + \delta\phi')\, dx + \int_s^{s + \delta s} f(x, \phi + \delta\phi, \phi' + \delta\phi')\, dx + \int_{s + \delta s}^1 f(x, \phi + \delta\phi, \phi' + \delta\phi')\, dx.$$

We can now calculate the first variation δF which, by definition, equals F{φ + δφ} − F{φ} up to terms linear in δφ. Calculating δF in this way and setting the result equal to zero leads, after integrating by parts, to

$$\int_0^1 A\, \delta\phi(x)\, dx + B\, \delta\phi(s) + C\, \delta s = 0,$$

where

$$A = \frac{\partial f}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial f}{\partial \phi'} \right), \qquad B = \left[ \frac{\partial f}{\partial \phi'} \right]_{x=s-} - \left[ \frac{\partial f}{\partial \phi'} \right]_{x=s+}, \qquad C = \left[ f - \phi' \frac{\partial f}{\partial \phi'} \right]_{x=s-} - \left[ f - \phi' \frac{\partial f}{\partial \phi'} \right]_{x=s+}. \tag{7.87}$$
By the arbitrariness of the variations above, it follows in the usual way that A, B and C all must vanish. This leads to the usual Euler equation on (0, s) ∪ (s, 1), and the following two additional requirements at x = s:

$$\left. \frac{\partial f}{\partial \phi'} \right|_{s-} = \left. \frac{\partial f}{\partial \phi'} \right|_{s+}, \tag{7.88}$$

$$\left. \left( f - \phi' \frac{\partial f}{\partial \phi'} \right) \right|_{s-} = \left. \left( f - \phi' \frac{\partial f}{\partial \phi'} \right) \right|_{s+}. \tag{7.89}$$

The two matching conditions (or jump conditions) (7.88) and (7.89) are known as the Weierstrass-Erdmann corner conditions (the term "corner" referring to the "kink" in φ). Equation (7.88) is the same condition that was derived in the preceding subsection.
Example: Find the extremals of the functional

$$F\{\phi\} = \int_0^4 f(x, \phi, \phi')\, dx = \int_0^4 (\phi' - 1)^2 (\phi' + 1)^2\, dx$$

over the set of piecewise smooth functions subject to the end conditions φ(0) = 0, φ(4) = 2. For simplicity, restrict attention to functions that have at most one point at which φ′ has a discontinuity.

Here

$$f(x, \phi, \phi') = \left[ (\phi')^2 - 1 \right]^2 \tag{7.90}$$

and therefore, on differentiating f,

$$\frac{\partial f}{\partial \phi'} = 4\phi' \left[ (\phi')^2 - 1 \right], \qquad \frac{\partial f}{\partial \phi} = 0. \tag{7.91}$$
Consequently the Euler equation (at points of smoothness) is
\[
\frac{d}{dx} f_{\phi'} - f_\phi = \frac{d}{dx}\left[4\phi'\left((\phi')^2 - 1\right)\right] = 0. \tag{7.92}
\]
First, consider an extremal that is smooth everywhere. (Such an extremal might not, of course, exist.) In this case the Euler equation (7.92) holds on the entire interval (0, 4) and so we conclude that φ'(x) = constant for 0 ≤ x ≤ 4. On integrating this and using the boundary conditions φ(0) = 0, φ(4) = 2, we find that φ(x) = x/2, 0 ≤ x ≤ 4, is a smooth extremal. In order to compare this with what follows, it is helpful to denote it by $\phi_0$. Thus
\[
\phi_0(x) = \tfrac{1}{2}\,x \quad \text{for } 0 \le x \le 4
\]
is a smooth extremal of F.
Next consider a piecewise smooth extremizer of F which has a kink at some location x = s; the value of s ∈ (0, 4) is not known a priori and is to be determined. (Again, such an extremal might not, of course, exist.) The Euler equation (7.92) now holds on either side of x = s, and so we find from (7.92) that φ' = c = constant on (0, s) and φ' = d = constant on (s, 4), where c ≠ d (if c = d there would be no kink at x = s and we have already dealt with this case above). Thus
\[
\phi'(x) = \begin{cases} c & \text{for } 0 < x < s, \\ d & \text{for } s < x < 4. \end{cases}
\]
Integrating this, separately on (0, s) and (s, 4), and enforcing the boundary conditions φ(0) = 0, φ(4) = 2, leads to
\[
\phi(x) = \begin{cases} cx & \text{for } 0 \le x \le s, \\ d(x - 4) + 2 & \text{for } s \le x \le 4. \end{cases} \tag{7.93}
\]
Since φ is required to be continuous, we must have φ(s−) = φ(s+), which requires that cs = d(s − 4) + 2, whence
\[
s = \frac{2 - 4d}{c - d}. \tag{7.94}
\]
Note that s would not exist if c = d.
All that remains is to find c and d, and the two Weierstrass-Erdmann corner conditions (7.88), (7.89) provide us with the two equations for doing this. From (7.90), (7.91) and (7.93),
\[
\frac{\partial f}{\partial \phi'} = \begin{cases} 4c(c^2 - 1) & \text{for } 0 < x < s, \\ 4d(d^2 - 1) & \text{for } s < x < 4, \end{cases}
\]
and
\[
f - \phi'\,\frac{\partial f}{\partial \phi'} = \begin{cases} -(c^2 - 1)(1 + 3c^2) & \text{for } 0 < x < s, \\ -(d^2 - 1)(1 + 3d^2) & \text{for } s < x < 4. \end{cases}
\]
Therefore the Weierstrass-Erdmann corner conditions (7.88) and (7.89), which require respectively the continuity of $\partial f/\partial \phi'$ and $f - \phi'\,\partial f/\partial \phi'$ at x = s, give us the pair of simultaneous equations
\[
c(c^2 - 1) = d(d^2 - 1), \qquad (c^2 - 1)(1 + 3c^2) = (d^2 - 1)(1 + 3d^2).
\]
Keeping in mind that c ≠ d, solving these equations leads to the two solutions:
\[
c = 1,\ d = -1, \qquad \text{and} \qquad c = -1,\ d = 1.
\]
Corresponding to the former we find from (7.94) that s = 3, while the latter leads to s = 1.
Thus from (7.93) there are two piecewise smooth extremals $\phi_1(x)$ and $\phi_2(x)$ of the assumed form:
\[
\phi_1(x) = \begin{cases} x & \text{for } 0 \le x \le 3, \\ -x + 6 & \text{for } 3 \le x \le 4, \end{cases}
\qquad
\phi_2(x) = \begin{cases} -x & \text{for } 0 \le x \le 1, \\ x - 2 & \text{for } 1 \le x \le 4. \end{cases}
\]
Figure 7.15: Smooth extremal $\phi_0(x)$ and piecewise smooth extremals $\phi_1(x)$ and $\phi_2(x)$.
Figure 7.15 shows graphs of $\phi_0$, $\phi_1$ and $\phi_2$. By evaluating the functional F at each of the extremals $\phi_0$, $\phi_1$ and $\phi_2$, we find
\[
F\{\phi_0\} = 9/4, \qquad F\{\phi_1\} = F\{\phi_2\} = 0.
\]
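These values are easy to confirm by quadrature; the following Python sketch evaluates the functional at the three extremals found above.
\begin{verbatim}
# Sketch: evaluate F{phi} = int_0^4 ((phi')^2 - 1)^2 dx at the extremals.
from scipy.integrate import quad

def F(phi_prime):
    # the integrand depends only on phi'
    return quad(lambda x: (phi_prime(x)**2 - 1)**2, 0, 4,
                points=[1, 3])[0]

phi0 = lambda x: 0.5                      # phi_0(x) = x/2
phi1 = lambda x: 1.0 if x < 3 else -1.0   # kink at s = 3
phi2 = lambda x: -1.0 if x < 1 else 1.0   # kink at s = 1

print(F(phi0), F(phi1), F(phi2))   # 2.25, 0.0, 0.0
\end{verbatim}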
Remark: By inspection of the given functional
\[
F\{\phi\} = \int_0^4 \left[(\phi')^2 - 1\right]^2 dx,
\]
it is clear that (a) F ≥ 0, and (b) F = 0 if and only if φ' = ±1 everywhere (except at isolated points where φ' may be undefined). The extremals $\phi_1$ and $\phi_2$ have this property and therefore correspond to absolute minimizers of F.
7.8 Generalization to higher dimensional space.

In order to help motivate the way in which we will approach higher-dimensional problems (which will in fact be entirely parallel to the approach we took for one-dimensional problems) we begin with some preliminary observations.

First, consider the one-dimensional variational problem of minimizing a functional
\[
F\{\phi\} = \int_0^1 f(x, \phi, \phi', \phi'')\,dx
\]
on a set of suitably smooth functions with no prescribed boundary conditions at either end. The analogous two-dimensional problem would be to consider a set of suitably smooth functions φ(x, y) defined on a domain D of the x, y-plane and to minimize a given functional
\[
F\{\phi\} = \int_D f\big(x, y, \phi,\ \partial\phi/\partial x,\ \partial\phi/\partial y,\ \partial^2\phi/\partial x^2,\ \partial^2\phi/\partial x\,\partial y,\ \partial^2\phi/\partial y^2\big)\,dA
\]
over this set of functions with no boundary conditions prescribed anywhere on the boundary ∂D.
In deriving the Euler equation in the one-dimensional case our strategy was to exploit the fact that the variation δφ(x) was arbitrary in the interior 0 < x < 1 of the domain. This motivated us to express the integrand in the form of some quantity A (independent of any variations) multiplied by δφ(x). Then the arbitrariness of δφ allowed us to conclude that A must vanish on the entire domain. We approach two-dimensional problems similarly: our strategy will be to exploit the fact that δφ(x, y) is arbitrary in the interior of D, and so we attempt to express the integrand as some quantity A, independent of any variations, multiplied by δφ. Similarly, concerning the boundary terms, in the one-dimensional case we were able to exploit the fact that δφ and its derivative δφ' are arbitrary at the boundary points x = 0 and x = 1, and this motivated us to express the boundary terms as some quantity B independent of any variations multiplied by δφ(0), another quantity C independent of any variations multiplied by δφ'(0), and so on. We approach two-dimensional problems similarly, and our strategy for the boundary terms is to exploit the fact that δφ and its normal derivative ∂(δφ)/∂n are arbitrary on the boundary ∂D. Thus the goal in our calculations will be to express the boundary terms as some quantity independent of any variations multiplied by δφ, another quantity independent of any variations multiplied by ∂(δφ)/∂n, etc. Thus in the two-dimensional case our strategy will be to take the first variation of F and carry out appropriate calculations that lead us to an equation of the form
\[
\delta F = \int_D A\,\delta\phi(x,y)\,dA + \int_{\partial D} B\,\delta\phi(x,y)\,ds + \int_{\partial D} C\,\frac{\partial}{\partial n}\big(\delta\phi(x,y)\big)\,ds = 0 \tag{7.95}
\]
where A, B, C are independent of δφ and its derivatives, and the latter two integrals are on the boundary of the domain D. We then exploit the arbitrariness of δφ(x, y) in the interior of the domain of integration, and the arbitrariness of δφ and ∂(δφ)/∂n on the boundary ∂D, to conclude that the minimizer must satisfy the partial differential equation A = 0 for (x, y) ∈ D and the boundary conditions B = C = 0 on ∂D.
Next, recall that one of the steps involved in calculating the minimizer of a one-dimensional problem is integration by parts. This converts a term that is an integral over [0, 1] into terms that are only evaluated at the boundary points x = 0 and x = 1. The analog of this in higher dimensions is carried out using the divergence theorem, which in two dimensions reads
\[
\int_D \left(\frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y}\right) dA = \int_{\partial D} \big(P\,n_x + Q\,n_y\big)\,ds, \tag{7.96}
\]
which expresses the left-hand side, an integral over D, in a form that only involves terms on the boundary. Here $n_x$, $n_y$ are the components of the unit normal vector n on ∂D that points out of D. Note that in the special case where P = ∂χ/∂x and Q = ∂χ/∂y for some χ(x, y), the integrand of the right-hand side is ∂χ/∂n.
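A quick sanity check of (7.96) can be carried out numerically on the unit disk; in the Python sketch below the fields P and Q are arbitrary illustrative choices.
\begin{verbatim}
# Sketch: check the 2-D divergence theorem (7.96) on the unit disk.
import numpy as np
from scipy.integrate import quad, dblquad

P  = lambda x, y: x**2 * y           # arbitrary smooth test fields
Q  = lambda x, y: np.sin(x) + y**3
Px = lambda x, y: 2*x*y              # their partial derivatives
Qy = lambda x, y: 3*y**2

# Left side: area integral over the unit disk, in polar coordinates.
lhs = dblquad(lambda r, t: (Px(r*np.cos(t), r*np.sin(t)) +
                            Qy(r*np.cos(t), r*np.sin(t))) * r,
              0, 2*np.pi, 0, 1)[0]

# Right side: on the unit circle n = (cos t, sin t) and ds = dt.
rhs = quad(lambda t: P(np.cos(t), np.sin(t))*np.cos(t) +
                     Q(np.cos(t), np.sin(t))*np.sin(t),
           0, 2*np.pi)[0]

print(lhs, rhs)   # agree to quadrature accuracy
\end{verbatim}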
Remark: The derivative of a function φ(x, y) in a direction corresponding to a unit vector m is written as ∂φ/∂m and defined by $\partial\phi/\partial m = \nabla\phi \cdot \mathbf{m} = (\partial\phi/\partial x)\,m_x + (\partial\phi/\partial y)\,m_y$, where $m_x$ and $m_y$ are the components of m in the x- and y-directions respectively. On the boundary ∂D of a two-dimensional domain D we frequently need to calculate the derivative of φ in the directions n and s that are normal and tangential to ∂D. In vector form we have
\[
\nabla\phi = \frac{\partial\phi}{\partial x}\,\mathbf{i} + \frac{\partial\phi}{\partial y}\,\mathbf{j} = \frac{\partial\phi}{\partial n}\,\mathbf{n} + \frac{\partial\phi}{\partial s}\,\mathbf{s},
\]
where i and j are unit vectors in the x- and y-directions. Recall also that a function φ(x, y) and its tangential derivative ∂φ/∂s along the boundary ∂D are not independent of each other, in the following sense: if one knows the values of φ along ∂D one can differentiate φ along the boundary to get ∂φ/∂s; and conversely, if one knows the values of ∂φ/∂s along ∂D one can integrate it along the boundary to find φ to within a constant. This is why equation (7.95) does not involve a term of the form E ∂(δφ)/∂s integrated along the boundary ∂D: such a term can be rewritten as the integral of −(∂E/∂s) δφ along the boundary.
Example 1: A stretched membrane. A stretched flexible membrane occupies a regular region D of the x, y-plane. A pressure p(x, y) is applied normal to the surface of the membrane in the negative z-direction. Let u(x, y) be the resulting deflection of the membrane in the z-direction. The membrane is fixed along its entire edge ∂D and so
\[
u = 0 \quad \text{for } (x, y) \in \partial D. \tag{7.97}
\]
One can show that the potential energy Φ associated with any deflection u that is compatible with the given boundary condition is
\[
\Phi\{u\} = \int_D \tfrac{1}{2}\,\big|\nabla u\big|^2\,dA - \int_D p\,u\,dA,
\]
where we have taken the relevant stiffness of the membrane to be unity. The actual deflection of the membrane is the function that minimizes the potential energy over the set of test functions
\[
\mathcal{A} = \big\{u\ \big|\ u \in C^2(D),\ u = 0 \text{ for } (x, y) \in \partial D\big\}.
\]
Figure 7.16: A stretched elastic membrane whose midplane occupies a region D of the x, y-plane and whose boundary ∂D is fixed. The membrane surface is subjected to a pressure loading p(x, y) that acts in the negative z-direction.
Since
\[
\Phi\{u\} = \int_D \tfrac{1}{2}\,(u_{,x} u_{,x} + u_{,y} u_{,y})\,dA - \int_D p\,u\,dA,
\]
its first variation is
\[
\delta\Phi = \int_D (u_{,x}\,\delta u_{,x} + u_{,y}\,\delta u_{,y})\,dA - \int_D p\,\delta u\,dA,
\]
where an admissible variation δu(x, y) vanishes on ∂D. Here we are using the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate; for example $u_{,x} = \partial u/\partial x$ and $u_{,xy} = \partial^2 u/\partial x\,\partial y$. In order to make use of the divergence theorem and convert the area integral into a boundary integral we must write the integrand so that it involves terms of the form $(\ldots)_{,x} + (\ldots)_{,y}$; see (7.96). This suggests that we rewrite the preceding equation as
\[
\delta\Phi = \int_D \big[(u_{,x}\,\delta u)_{,x} + (u_{,y}\,\delta u)_{,y} - (u_{,xx} + u_{,yy})\,\delta u\big]\,dA - \int_D p\,\delta u\,dA,
\]
or equivalently as
\[
\delta\Phi = \int_D \big[(u_{,x}\,\delta u)_{,x} + (u_{,y}\,\delta u)_{,y}\big]\,dA - \int_D \big(u_{,xx} + u_{,yy} + p\big)\,\delta u\,dA.
\]
By using the divergence theorem on the first integral we get
\[
\delta\Phi = \int_{\partial D} \big(u_{,x} n_x + u_{,y} n_y\big)\,\delta u\,ds - \int_D (u_{,xx} + u_{,yy} + p)\,\delta u\,dA,
\]
where n is the unit outward normal along ∂D. We can write this equivalently as
\[
\delta\Phi = \int_{\partial D} \frac{\partial u}{\partial n}\,\delta u\,ds - \int_D \big(\nabla^2 u + p\big)\,\delta u\,dA. \tag{7.98}
\]
Since the variation δu vanishes on ∂D the first integral drops out and we are left with
\[
\delta\Phi = -\int_D \big(\nabla^2 u + p\big)\,\delta u\,dA, \tag{7.99}
\]
which must vanish for all admissible variations δu(x, y). Thus the minimizer satisfies the partial differential equation
\[
\nabla^2 u + p = 0 \quad \text{for } (x, y) \in D,
\]
which is the Euler equation in this case; it is to be solved subject to the prescribed boundary condition (7.97). Note that if some part of the boundary of D had not been fixed, then we would not have δu = 0 on that part, in which case (7.98) and (7.99) would yield the natural boundary condition ∂u/∂n = 0 on that segment.
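A minimal numerical sketch of this Euler equation: discretize $\nabla^2 u + p = 0$ on the unit square with a five-point Laplacian and compare against the manufactured solution $u = \sin\pi x \sin\pi y$, which corresponds to the pressure $p = 2\pi^2 \sin\pi x \sin\pi y$ (an illustrative choice).
\begin{verbatim}
# Sketch: solve laplace(u) + p = 0, u = 0 on the boundary of the unit
# square, by finite differences; compare with the exact solution
# u = sin(pi x) sin(pi y) for p = 2 pi^2 sin(pi x) sin(pi y).
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

N = 64                                   # interior points per direction
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
X, Y = np.meshgrid(x, x, indexing='ij')

# 1-D second-difference matrix and 2-D five-point Laplacian
T = diags([1, -2, 1], [-1, 0, 1], shape=(N, N)) / h**2
L = kron(T, identity(N)) + kron(identity(N), T)

p = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = spsolve((-L).tocsr(), p.ravel())     # -laplace(u) = p

u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print(np.abs(u - u_exact.ravel()).max()) # O(h^2) error
\end{verbatim}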
Example 2: The Kirchhoff theory of plates. We consider the bending of a thin plate according to the so-called Kirchhoff theory. Solely for purposes of mathematical simplicity we shall assume that the Poisson ratio ν of the material is zero. A discussion of the case ν ≠ 0 can be found in many books, for example in "Energy and Finite Element Methods in Structural Mechanics" by I.H. Shames & C.L. Dym. When ν = 0 the plate bending stiffness is D = Et³/12, where E is the Young's modulus of the material and t is the thickness of the plate. The midplane of the plate occupies a domain of the x, y-plane and w(x, y) denotes the deflection (displacement) of a point on the midplane in the z-direction. The basic constitutive relationships of elastic plate theory relate the internal moments $M_x, M_y, M_{xy}, M_{yx}$ (see Figure 7.17) to the second derivatives of the displacement field $w_{,xx}, w_{,yy}, w_{,xy}$ by
\[
M_x = -D\,w_{,xx}, \qquad M_y = -D\,w_{,yy}, \qquad M_{xy} = M_{yx} = -D\,w_{,xy}, \tag{7.100}
\]
where a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate and D is the plate bending stiffness. The shear forces in the plate are given by
\[
V_x = -D\,(\nabla^2 w)_{,x}, \qquad V_y = -D\,(\nabla^2 w)_{,y}. \tag{7.101}
\]
The elastic energy per unit area of the plate midplane is given by
\[
\tfrac{1}{2}\big(M_x w_{,xx} + M_y w_{,yy} + M_{xy} w_{,xy} + M_{yx} w_{,yx}\big) = \frac{D}{2}\big(w_{,xx}^2 + w_{,yy}^2 + 2w_{,xy}^2\big). \tag{7.102}
\]
Figure 7.17: A differential element dx × dy × t of a thin plate. A bold arrow represents a force, and thus $V_x$ and $V_y$ are (shear) forces. A bold arrow with two arrowheads represents a moment whose sense is given by the right-hand rule. Thus $M_{xy}$ and $M_{yx}$ are (twisting) moments while $M_x$ and $M_y$ are (bending) moments.
Figure 7.18: Left: A thin elastic plate whose midplane occupies a region Ω of the x, y-plane. The segment $\partial\Omega_1$ of the boundary is clamped while the remainder $\partial\Omega_2$ is free of loading. Right: A rectangular a × b plate with a load-free edge at its right-hand side x = a, 0 ≤ y ≤ b (there $n_x = 1$, $n_y = 0$, $s_x = 0$, $s_y = 1$).
It is worth noting the following puzzling question. Consider the rectangular plate shown in the right-hand diagram of Figure 7.18. Based on Figure 7.17 we know that there is a bending moment $M_x$, a twisting moment $M_{xy}$, and a shear force $V_x$ acting on any surface x = constant in the plate. Therefore, in particular, since the right-hand edge x = a is free of loading, one would expect to have the three conditions $M_x = M_{xy} = V_x = 0$ along that boundary. However, we will find that the differential equation to be solved in the interior of the plate requires (and can only accommodate) two boundary conditions at any point on the edge. The question then arises as to what the correct boundary conditions on this edge should be. Our variational approach will give us precisely two natural boundary conditions on this edge. They will involve $M_x$, $M_{xy}$ and $V_x$ but will not require that each of them vanish individually.
Consider a thin elastic plate whose midplane occupies a domain Ω of the x, y-plane as shown in the left-hand diagram of Figure 7.18. A normal loading p(x, y) is applied on the flat face of the plate in the −z-direction. A part of the plate boundary denoted by $\partial\Omega_1$ is clamped while the remainder $\partial\Omega_2$ is free of any external loading. Thus, if w(x, y) denotes the deflection of the plate in the z-direction, we have the geometric boundary conditions
\[
w = \partial w/\partial n = 0 \quad \text{for } (x, y) \in \partial\Omega_1. \tag{7.103}
\]
The total potential energy of the system is
\[
\Phi\{w\} = \int_\Omega \left[\frac{D}{2}\big(w_{,xx}^2 + 2w_{,xy}^2 + w_{,yy}^2\big) - p\,w\right] dA, \tag{7.104}
\]
where the first group of terms represents the elastic energy in the plate and the last term represents the potential energy of the pressure loading (the negative sign arising from the fact that p acts in the minus z-direction while w is the deflection in the positive z-direction). This functional Φ is defined on the set of all kinematically admissible deflection fields, which is the set of all suitably smooth functions w(x, y) that satisfy the geometric requirements (7.103). The actual deflection field is the one that minimizes the potential energy Φ over this set.

We now determine the Euler equation and natural boundary conditions associated with (7.104) by calculating the first variation of Φ{w} and setting it equal to zero:
\[
\int_\Omega \left[w_{,xx}\,\delta w_{,xx} + 2w_{,xy}\,\delta w_{,xy} + w_{,yy}\,\delta w_{,yy} - \frac{p}{D}\,\delta w\right] dA = 0. \tag{7.105}
\]
To simplify this we begin by rearranging the terms into a form that will allow us to use the divergence theorem, thereby converting part of the area integral on Ω into a boundary integral on ∂Ω. In order to use the divergence theorem we must write the integrand so that it involves terms of the form $(\ldots)_{,x} + (\ldots)_{,y}$; see (7.96). Accordingly we rewrite (7.105) as
\[
\begin{aligned}
0 &= \int_\Omega \big[w_{,xx}\,\delta w_{,xx} + 2w_{,xy}\,\delta w_{,xy} + w_{,yy}\,\delta w_{,yy} - (p/D)\,\delta w\big]\,dA \\
&= \int_\Omega \Big[\big(w_{,xx}\,\delta w_{,x} + w_{,xy}\,\delta w_{,y}\big)_{,x} + \big(w_{,xy}\,\delta w_{,x} + w_{,yy}\,\delta w_{,y}\big)_{,y} \\
&\qquad\quad - w_{,xxx}\,\delta w_{,x} - w_{,xxy}\,\delta w_{,y} - w_{,xyy}\,\delta w_{,x} - w_{,yyy}\,\delta w_{,y} - (p/D)\,\delta w\Big]\,dA \\
&= \int_{\partial\Omega} \Big[\big(w_{,xx}\,\delta w_{,x} + w_{,xy}\,\delta w_{,y}\big)\,n_x + \big(w_{,xy}\,\delta w_{,x} + w_{,yy}\,\delta w_{,y}\big)\,n_y\Big]\,ds \\
&\qquad - \int_\Omega \Big[w_{,xxx}\,\delta w_{,x} + w_{,xxy}\,\delta w_{,y} + w_{,xyy}\,\delta w_{,x} + w_{,yyy}\,\delta w_{,y} + (p/D)\,\delta w\Big]\,dA \\
&= \int_{\partial\Omega} I_1\,ds - \int_\Omega I_2\,dA.
\end{aligned} \tag{7.106}
\]
We have used the divergence theorem (7.96) in going from the second equation above to the third. In order to facilitate further simplification, in the last step we have let $I_1$ and $I_2$ denote the integrands of the boundary and area integrals. To simplify the area integral in (7.106) we again rearrange the terms in $I_2$ into a form that will allow us to use the divergence theorem. Thus
\[
\begin{aligned}
\int_\Omega I_2\,dA &= \int_\Omega \Big[w_{,xxx}\,\delta w_{,x} + w_{,xxy}\,\delta w_{,y} + w_{,xyy}\,\delta w_{,x} + w_{,yyy}\,\delta w_{,y} + (p/D)\,\delta w\Big]\,dA \\
&= \int_\Omega \Big[\big(w_{,xxx}\,\delta w + w_{,xyy}\,\delta w\big)_{,x} + \big(w_{,xxy}\,\delta w + w_{,yyy}\,\delta w\big)_{,y} \\
&\qquad\quad - \big(w_{,xxxx} + 2w_{,xxyy} + w_{,yyyy} - p/D\big)\,\delta w\Big]\,dA \\
&= \int_{\partial\Omega} \Big[\big(w_{,xxx}\,\delta w + w_{,xyy}\,\delta w\big)\,n_x + \big(w_{,xxy}\,\delta w + w_{,yyy}\,\delta w\big)\,n_y\Big]\,ds - \int_\Omega \big(\nabla^4 w - p/D\big)\,\delta w\,dA \\
&= \int_{\partial\Omega} \big(w_{,xxx}\,n_x + w_{,xyy}\,n_x + w_{,xxy}\,n_y + w_{,yyy}\,n_y\big)\,\delta w\,ds - \int_\Omega \big(\nabla^4 w - p/D\big)\,\delta w\,dA \\
&= \int_{\partial\Omega} P_1\,\delta w\,ds - \int_\Omega P_2\,\delta w\,dA,
\end{aligned} \tag{7.107}
\]
where we have set
\[
P_1 = w_{,xxx}\,n_x + w_{,xyy}\,n_x + w_{,xxy}\,n_y + w_{,yyy}\,n_y \qquad \text{and} \qquad P_2 = \nabla^4 w - p/D. \tag{7.108}
\]
In the preceding calculation we have used the divergence theorem (7.96) in going from the second equation in (7.107) to the third, and we have set
\[
\nabla^4 w = \nabla^2(\nabla^2 w) = w_{,xxxx} + 2w_{,xxyy} + w_{,yyyy}.
\]
Next we simplify the boundary term in (7.106) by converting the derivatives of the variation with respect to x and y into derivatives with respect to the normal and tangential coordinates n and s. To do this we use the fact that $\nabla\delta w = \delta w_{,x}\,\mathbf{i} + \delta w_{,y}\,\mathbf{j} = \delta w_{,n}\,\mathbf{n} + \delta w_{,s}\,\mathbf{s}$, from which it follows that $\delta w_{,x} = \delta w_{,n} n_x + \delta w_{,s} s_x$ and $\delta w_{,y} = \delta w_{,n} n_y + \delta w_{,s} s_y$. Thus from (7.106),
\[
\begin{aligned}
\int_{\partial\Omega} I_1\,ds &= \int_{\partial\Omega} \Big[\big(w_{,xx} n_x\,\delta w_{,x} + w_{,xy} n_x\,\delta w_{,y}\big) + \big(w_{,xy} n_y\,\delta w_{,x} + w_{,yy} n_y\,\delta w_{,y}\big)\Big]\,ds \\
&= \int_{\partial\Omega} \Big[\big(w_{,xx} n_x + w_{,xy} n_y\big)\,\delta w_{,x} + \big(w_{,xy} n_x + w_{,yy} n_y\big)\,\delta w_{,y}\Big]\,ds \\
&= \int_{\partial\Omega} \Big[\big(w_{,xx} n_x + w_{,xy} n_y\big)\big(\delta w_{,n} n_x + \delta w_{,s} s_x\big) + \big(w_{,xy} n_x + w_{,yy} n_y\big)\big(\delta w_{,n} n_y + \delta w_{,s} s_y\big)\Big]\,ds \\
&= \int_{\partial\Omega} \Big[\big(w_{,xx} n_x^2 + 2w_{,xy} n_x n_y + w_{,yy} n_y^2\big)\,\delta w_{,n} \\
&\qquad\quad + \big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y\big)\,\delta w_{,s}\Big]\,ds \\
&= \int_{\partial\Omega} \Big[\big(w_{,xx} n_x^2 + 2w_{,xy} n_x n_y + w_{,yy} n_y^2\big)\,\delta w_{,n} + I_3\Big]\,ds.
\end{aligned} \tag{7.109}
\]
To further simplify this we have set $I_3$ equal to the last term in the integrand of (7.109); this term can be written as
\[
\begin{aligned}
\int_{\partial\Omega} I_3\,ds &= \int_{\partial\Omega} \big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y\big)\,\delta w_{,s}\,ds \\
&= \int_{\partial\Omega} \Big[\Big(\big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y\big)\,\delta w\Big)_{,s} \\
&\qquad\quad - \big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y\big)_{,s}\,\delta w\Big]\,ds.
\end{aligned} \tag{7.110}
\]
(7.110)
If a ﬁeld f(x, y) varies smoothly along ∂Ω, and if the curve ∂Ω itself is smooth, then
_
∂Ω
∂f
∂s
ds = 0 (7.111)
7.8. GENERALIZATION TO HIGHER DIMENSIONAL SPACE. 175
since this is an integral over a closed curve
8
. It follows from this that the ﬁrst term in the
last expression of (7.110) vanishes and so
\[
\int_{\partial\Omega} I_3\,ds = -\int_{\partial\Omega} \big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y\big)_{,s}\,\delta w\,ds. \tag{7.112}
\]
Substituting (7.112) into (7.109) yields
\[
\int_{\partial\Omega} I_1\,ds = \int_{\partial\Omega} P_3\,\frac{\partial}{\partial n}(\delta w)\,ds - \int_{\partial\Omega} \frac{\partial}{\partial s}(P_4)\,\delta w\,ds, \tag{7.113}
\]
where we have set
\[
\begin{aligned}
P_3 &= w_{,xx} n_x^2 + 2w_{,xy} n_x n_y + w_{,yy} n_y^2, \\
P_4 &= w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y.
\end{aligned} \tag{7.114}
\]
Finally, substituting (7.113) and (7.107) into (7.106) leads to
\[
\int_\Omega P_2\,\delta w\,dA - \int_{\partial\Omega} \left[P_1 + \frac{\partial}{\partial s}(P_4)\right]\delta w\,ds + \int_{\partial\Omega} P_3\,\frac{\partial}{\partial n}(\delta w)\,ds = 0, \tag{7.115}
\]
which must hold for all admissible variations δw. First restrict attention to variations which vanish on the boundary ∂Ω and whose normal derivative ∂(δw)/∂n also vanishes on ∂Ω. This leads us to the Euler equation $P_2 = 0$:
\[
\nabla^4 w - p/D = 0 \quad \text{for } (x, y) \in \Omega. \tag{7.116}
\]
Returning to (7.115) with this gives
\[
-\int_{\partial\Omega} \left[P_1 + \frac{\partial}{\partial s}(P_4)\right]\delta w\,ds + \int_{\partial\Omega} P_3\,\frac{\partial}{\partial n}(\delta w)\,ds = 0. \tag{7.117}
\]
Since the portion $\partial\Omega_1$ of the boundary is clamped we have w = ∂w/∂n = 0 for (x, y) ∈ $\partial\Omega_1$. Thus the variations δw and ∂(δw)/∂n must also vanish on $\partial\Omega_1$, and (7.117) simplifies to
\[
-\int_{\partial\Omega_2} \left[P_1 + \frac{\partial}{\partial s}(P_4)\right]\delta w\,ds + \int_{\partial\Omega_2} P_3\,\frac{\partial}{\partial n}(\delta w)\,ds = 0 \tag{7.118}
\]
for variations δw and ∂(δw)/∂n that are arbitrary on $\partial\Omega_2$, where $\partial\Omega_2$ is the complement of $\partial\Omega_1$, i.e. $\partial\Omega = \partial\Omega_1 \cup \partial\Omega_2$. Thus we conclude that $P_1 + \partial P_4/\partial s = 0$ and $P_3 = 0$ on $\partial\Omega_2$:
\[
\left.
\begin{aligned}
&w_{,xxx} n_x + w_{,xyy} n_x + w_{,xxy} n_y + w_{,yyy} n_y \\
&\qquad + \frac{\partial}{\partial s}\big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,xy} n_x s_y + w_{,yy} n_y s_y\big) = 0, \\
&w_{,xx} n_x^2 + 2w_{,xy} n_x n_y + w_{,yy} n_y^2 = 0,
\end{aligned}
\right\} \quad \text{for } (x, y) \in \partial\Omega_2. \tag{7.119}
\]
Thus, in summary, the Kirchhoff theory of plates for the problem at hand requires that one solve the field equation (7.116) on Ω subject to the displacement boundary conditions (7.103) on $\partial\Omega_1$ and the natural boundary conditions (7.119) on $\partial\Omega_2$.
Remark: If we define the moments $M_n$, $M_{ns}$ and force $V_n$ by
\[
\begin{aligned}
M_n &= -D\,\big(w_{,xx} n_x n_x + w_{,xy} n_x n_y + w_{,yx} n_y n_x + w_{,yy} n_y n_y\big), \\
M_{ns} &= -D\,\big(w_{,xx} n_x s_x + w_{,xy} n_y s_x + w_{,yx} n_x s_y + w_{,yy} n_y s_y\big), \\
V_n &= -D\,\big(w_{,xxx} n_x + w_{,xyy} n_x + w_{,yxx} n_y + w_{,yyy} n_y\big),
\end{aligned} \tag{7.120}
\]
then the two natural boundary conditions can be written as
\[
M_n = 0, \qquad \frac{\partial}{\partial s}\big(M_{ns}\big) + V_n = 0. \tag{7.121}
\]
As a special case suppose that the plate is rectangular, 0 ≤ x ≤ a, 0 ≤ y ≤ b, and that the right edge x = a, 0 ≤ y ≤ b is free of load; see the right diagram in Figure 7.18. Then $n_x = 1$, $n_y = 0$, $s_x = 0$, $s_y = 1$ on $\partial\Omega_2$ and so (7.120) simplifies to
\[
M_n = -D\,w_{,xx}, \qquad M_{ns} = -D\,w_{,yx}, \qquad V_n = -D\,(w_{,xxx} + w_{,xyy}), \tag{7.122}
\]
which, because of (7.100), shows that in this case $M_n = M_x$, $M_{ns} = M_{xy}$, $V_n = V_x$. Thus the natural boundary conditions (7.121) can be written as
\[
M_x = 0, \qquad \frac{\partial}{\partial y}\big(M_{xy}\big) + V_x = 0. \tag{7.123}
\]
This answers the question we posed soon after (7.101) as to what the correct boundary conditions on a free edge should be. We had noted that intuitively we would have expected the moments and forces to vanish on a free edge, and therefore that $M_x = M_{xy} = V_x = 0$ there; but this is in contradiction with the mathematical fact that the differential equation (7.116) only requires two conditions at each point on the boundary. The two natural boundary conditions (7.123) require that certain combinations of $M_x$, $M_{xy}$, $V_x$ vanish, but not that all three vanish individually.
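This reduction can be checked symbolically. The Python sketch below verifies that, with n = (1, 0) and s = (0, 1) so that ∂/∂s = ∂/∂y, the free-edge quantities $P_1 + \partial P_4/\partial s$ and $P_3$ of (7.119) coincide, up to the factor −D, with the combinations appearing in (7.123).
\begin{verbatim}
# Sketch: on the edge x = a, n = (1, 0), s = (0, 1); check that (7.119)
# reduces to the natural boundary conditions (7.123).
import sympy as sp

x, y, D = sp.symbols('x y D')
w = sp.Function('w')(x, y)
nx, ny, sx, sy = 1, 0, 0, 1

P1 = (sp.diff(w, x, 3)*nx + sp.diff(w, x, y, y)*nx
      + sp.diff(w, x, x, y)*ny + sp.diff(w, y, 3)*ny)
P4 = (sp.diff(w, x, 2)*nx*sx + sp.diff(w, x, y)*ny*sx
      + sp.diff(w, x, y)*nx*sy + sp.diff(w, y, 2)*ny*sy)
P3 = (sp.diff(w, x, 2)*nx**2 + 2*sp.diff(w, x, y)*nx*ny
      + sp.diff(w, y, 2)*ny**2)

# Moments and shear force from (7.100), (7.101)
Mx  = -D*sp.diff(w, x, 2)
Mxy = -D*sp.diff(w, x, y)
Vx  = -D*sp.diff(sp.diff(w, x, 2) + sp.diff(w, y, 2), x)

print(sp.simplify(-D*(P1 + sp.diff(P4, y)) - (sp.diff(Mxy, y) + Vx)))  # 0
print(sp.simplify(-D*P3 - Mx))                                         # 0
\end{verbatim}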
Example 3: Minimal surface equation. Let C be a closed curve in $\mathbb{R}^3$ as sketched in Figure 7.19. From among all surfaces S in $\mathbb{R}^3$ that have C as their boundary, we wish to determine the surface that has minimum area. As a physical example, if C corresponds to a wire loop which we dip in a soapy solution, a thin soap film will form across C. The surface that forms is the one that, from among all possible surfaces S that are bounded by C, minimizes the total surface energy of the film, which (if the surface energy density is constant) is the surface with minimal area.

Figure 7.19: The closed curve C in $\mathbb{R}^3$ is given. From among all surfaces S in $\mathbb{R}^3$ that have C as their boundary, the surface with minimal area is to be sought. The curve ∂D is the projection of C onto the x, y-plane.
Let C be a closed curve in $\mathbb{R}^3$. Suppose that its projection onto the x, y-plane is denoted by ∂D, and let D denote the simply connected region contained within ∂D; see Figure 7.19. Suppose that C is characterized by z = h(x, y) for (x, y) ∈ ∂D. Let z = φ(x, y) for (x, y) ∈ D describe a surface S in $\mathbb{R}^3$ that has C as its boundary; necessarily φ = h on ∂D. Thus the admissible set of functions we are considering is
\[
\mathcal{A} = \big\{\phi\ \big|\ \phi : D \to \mathbb{R},\ \phi \in C^1(D),\ \phi = h \text{ on } \partial D\big\}.
\]
Consider a rectangular differential element on the x, y-plane that is contained within D. The vector joining (x, y) to (x + dx, y) is $d\mathbf{x} = dx\,\mathbf{i}$, while the vector joining (x, y) to (x, y + dy) is $d\mathbf{y} = dy\,\mathbf{j}$. If $d\mathbf{u}$ and $d\mathbf{v}$ are vectors on the surface z = φ(x, y) whose projections are $d\mathbf{x}$ and $d\mathbf{y}$ respectively, then we know that
\[
d\mathbf{u} = dx\,\mathbf{i} + \phi_x\,dx\,\mathbf{k}, \qquad d\mathbf{v} = dy\,\mathbf{j} + \phi_y\,dy\,\mathbf{k}.
\]
The vectors $d\mathbf{u}$ and $d\mathbf{v}$ define a parallelogram on the surface z = φ(x, y), and the area of this parallelogram is $|d\mathbf{u} \times d\mathbf{v}|$. Thus the area of a differential element on S is
\[
|d\mathbf{u} \times d\mathbf{v}| = \big|-\phi_x\,dx\,dy\,\mathbf{i} - \phi_y\,dx\,dy\,\mathbf{j} + dx\,dy\,\mathbf{k}\big| = \sqrt{1 + \phi_x^2 + \phi_y^2}\;dx\,dy.
\]
Consequently the problem at hand is to minimize the functional
\[
F\{\phi\} = \int_D \sqrt{1 + \phi_x^2 + \phi_y^2}\;dA
\]
over the set of admissible functions
\[
\mathcal{A} = \big\{\phi\ \big|\ \phi : D \to \mathbb{R},\ \phi \in C^2(D),\ \phi = h \text{ on } \partial D\big\}.
\]
It is left as an exercise to show that setting the first variation of F equal to zero leads to the so-called minimal surface equation
\[
(1 + \phi_y^2)\,\phi_{xx} - 2\phi_x \phi_y\,\phi_{xy} + (1 + \phi_x^2)\,\phi_{yy} = 0.
\]
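As a quick symbolic check, Scherk's classical surface z = ln(cos y) − ln(cos x) (defined where both cosines are positive) is a graph that satisfies the minimal surface equation; the following Python sketch verifies this.
\begin{verbatim}
# Sketch: verify that Scherk's surface phi = ln(cos y) - ln(cos x)
# satisfies the minimal surface equation.
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.log(sp.cos(y)) - sp.log(sp.cos(x))

px, py = sp.diff(phi, x), sp.diff(phi, y)
lhs = ((1 + py**2)*sp.diff(phi, x, 2)
       - 2*px*py*sp.diff(phi, x, y)
       + (1 + px**2)*sp.diff(phi, y, 2))

print(sp.simplify(lhs))   # 0
\end{verbatim}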
Remark: See en.wikipedia.org/wiki/Soap_bubble and www.susqu.edu/facstaff/b/brakke/ for additional discussion.
7.9 Second variation. Another necessary condition for a minimum.

In order to illustrate the basic ideas of this section in the simplest possible setting, we confine the discussion to the particular functional
\[
F\{\phi\} = \int_0^1 f(x, \phi, \phi')\,dx
\]
defined over a set of admissible functions $\mathcal{A}$. Suppose that a particular function φ minimizes F, and that for some given function η the one-parameter family of functions φ + εη is admissible for all sufficiently small values of the parameter ε. Define $\hat{F}(\varepsilon)$ by
\[
\hat{F}(\varepsilon) = F\{\phi + \varepsilon\eta\} = \int_0^1 f(x, \phi + \varepsilon\eta, \phi' + \varepsilon\eta')\,dx,
\]
so that by Taylor expansion,
\[
\hat{F}(\varepsilon) = \hat{F}(0) + \varepsilon\,\hat{F}'(0) + \frac{\varepsilon^2}{2}\,\hat{F}''(0) + O(\varepsilon^3),
\]
where
\[
\hat{F}(0) = \int_0^1 f(x, \phi, \phi')\,dx = F\{\phi\}, \qquad
\varepsilon\,\hat{F}'(0) = \delta F\{\phi, \eta\},
\]
\[
\varepsilon^2\,\hat{F}''(0) = \varepsilon^2 \int_0^1 \big\{f_{\phi\phi}\,\eta^2 + 2f_{\phi\phi'}\,\eta\eta' + f_{\phi'\phi'}\,(\eta')^2\big\}\,dx \;\stackrel{\mathrm{def}}{=}\; \delta^2 F\{\phi, \eta\}.
\]
Since φ minimizes F, it follows that ε = 0 minimizes $\hat{F}(\varepsilon)$, and consequently that
\[
\delta^2 F\{\phi, \eta\} \ge 0,
\]
in addition to the requirement δF{φ, η} = 0. Thus a necessary condition for a function φ to minimize a functional F is that the second variation of F be non-negative for all admissible variations δφ:
\[
\delta^2 F\{\phi, \delta\phi\} = \int_0^1 \big[f_{\phi\phi}\,(\delta\phi)^2 + 2f_{\phi\phi'}\,(\delta\phi)(\delta\phi') + f_{\phi'\phi'}\,(\delta\phi')^2\big]\,dx \ge 0, \tag{7.124}
\]
where we have set δφ = εη. The inequality is reversed if φ maximizes F.

The condition δ²F{φ, η} ≥ 0 is necessary but not sufficient for the functional F to have a minimum at φ. We shall not discuss sufficient conditions in general in these notes.
Proposition (Legendre condition): A necessary condition for (7.124) to hold is that
\[
f_{\phi'\phi'}\big(x, \phi(x), \phi'(x)\big) \ge 0 \quad \text{for } 0 \le x \le 1
\]
for the minimizing function φ.
Example: Consider a curve in the x, y-plane characterized by y = φ(x) that begins at (0, φ₀) and ends at (1, φ₁). From among all such curves, find the one that, when rotated about the x-axis, generates the surface of minimum area.

Thus we are asked to minimize the functional
\[
F\{\phi\} = \int_0^1 f(x, \phi, \phi')\,dx \qquad \text{where} \qquad f(x, \phi, \phi') = \phi\,\sqrt{1 + (\phi')^2},
\]
over a set of admissible functions that satisfy the boundary conditions φ(0) = φ₀, φ(1) = φ₁.

A function φ that minimizes F must satisfy the boundary value problem consisting of the Euler equation and the given boundary conditions:
\[
\frac{d}{dx}\left(\frac{\phi\,\phi'}{\sqrt{1 + (\phi')^2}}\right) - \sqrt{1 + (\phi')^2} = 0, \qquad \phi(0) = \phi_0, \quad \phi(1) = \phi_1.
\]
The general solution of this Euler equation is
\[
\phi(x) = \alpha \cosh\frac{x - \beta}{\alpha} \quad \text{for } 0 \le x \le 1,
\]
where the constants α and β are determined through the boundary conditions. To test the Legendre condition we calculate $f_{\phi'\phi'}$ and find that
\[
f_{\phi'\phi'} = \frac{\phi}{\big(\sqrt{1 + (\phi')^2}\,\big)^3},
\]
which, when evaluated at the particular function φ(x) = α cosh((x − β)/α), yields
\[
f_{\phi'\phi'}\Big|_{\phi = \alpha\cosh((x-\beta)/\alpha)} = \frac{\alpha}{\cosh^2\big((x - \beta)/\alpha\big)}.
\]
Therefore as long as α > 0 the Legendre condition is satisfied.
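The same computation can be reproduced symbolically; in the Python sketch below the symbol z stands for φ'.
\begin{verbatim}
# Sketch: compute f_{phi' phi'} for f = phi*sqrt(1 + z^2) (z stands for
# phi') and evaluate it on the catenary phi = alpha*cosh((x - beta)/alpha).
import sympy as sp

x, z, phi, alpha, beta = sp.symbols('x z phi alpha beta', positive=True)
f = phi * sp.sqrt(1 + z**2)

fzz = sp.simplify(sp.diff(f, z, 2))
print(fzz)                       # phi/(1 + z**2)**(3/2)

cat = alpha * sp.cosh((x - beta)/alpha)
on_catenary = fzz.subs({phi: cat, z: sp.diff(cat, x)})
target = alpha / sp.cosh((x - beta)/alpha)**2
print(sp.simplify(on_catenary - target))   # 0
\end{verbatim}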
7.10 Sufficient condition for minimization of convex functionals

Figure 7.20: A convex curve y = F(x) lies above the tangent line through any point of the curve.

We now turn to a brief discussion of sufficient conditions for a minimum for a special class of functionals. It is useful to begin by reviewing the question of finding the minimum of a real-valued function of a real variable. A function F(x) defined for x ∈ $\mathcal{A}$ with continuous first derivatives is said to be convex if
\[
F(x_1) \ge F(x_2) + F'(x_2)(x_1 - x_2) \quad \text{for all } x_1, x_2 \in \mathcal{A};
\]
see Figure 7.20 for a geometric interpretation of convexity. If a convex function has a stationary point, say at $x_0$, then it follows, by setting $x_2 = x_0$ in the preceding equation, that $x_0$ is a minimizer of F. Therefore a stationary point of a convex function is necessarily a minimizer. If F is strictly convex on $\mathcal{A}$, i.e. if F is convex and $F(x_1) = F(x_2) + F'(x_2)(x_1 - x_2)$ if and only if $x_1 = x_2$, then F can have only one stationary point and so can have only one interior minimum.

This is also true for a real-valued function F with continuous first derivatives on a domain $\mathcal{A}$ in $\mathbb{R}^n$, where convexity is defined by⁹
\[
F(\mathbf{x}_1) \ge F(\mathbf{x}_2) + \delta F(\mathbf{x}_2, \mathbf{x}_1 - \mathbf{x}_2) \quad \text{for all } \mathbf{x}_1, \mathbf{x}_2 \in \mathcal{A}.
\]
If a convex function has a stationary point at, say, $\mathbf{x}_0$, then since $\delta F(\mathbf{x}_0, \mathbf{y}) = 0$ for all y it follows that $\mathbf{x}_0$ is a minimizer of F. Therefore a stationary point of a convex function is necessarily a minimizer. If F is strictly convex on $\mathcal{A}$, i.e. if F is convex and $F(\mathbf{x}_1) = F(\mathbf{x}_2) + \delta F(\mathbf{x}_2, \mathbf{x}_1 - \mathbf{x}_2)$ if and only if $\mathbf{x}_1 = \mathbf{x}_2$, then F can have only one stationary point and so can have only one interior minimum.
We now turn to a functional F{φ}, which is said to be convex on $\mathcal{A}$ if
\[
F\{\phi + \eta\} \ge F\{\phi\} + \delta F\{\phi, \eta\} \quad \text{for all } \phi,\ \phi + \eta \in \mathcal{A}.
\]
If F is stationary at $\phi_0 \in \mathcal{A}$, then $\delta F\{\phi_0, \eta\} = 0$ for all admissible η, and it follows that $\phi_0$ is in fact a minimizer of F. Therefore a stationary point of a convex functional is necessarily a minimizer.

For example, consider the generic functional
\[
F\{\phi\} = \int_0^1 f(x, \phi, \phi')\,dx. \tag{7.125}
\]
Then
\[
\delta F\{\phi, \eta\} = \int_0^1 \left(\frac{\partial f}{\partial \phi}\,\eta + \frac{\partial f}{\partial \phi'}\,\eta'\right) dx,
\]
and so the convexity condition F{φ + η} − F{φ} ≥ δF{φ, η} takes the special form
\[
\int_0^1 \big[f(x, \phi + \eta, \phi' + \eta') - f(x, \phi, \phi')\big]\,dx \ge \int_0^1 \left(\frac{\partial f}{\partial \phi}\,\eta + \frac{\partial f}{\partial \phi'}\,\eta'\right) dx. \tag{7.126}
\]
In general it might not be simple to test whether this condition holds in a particular case. It is readily seen that a sufficient condition for (7.126) to hold is that the integrands satisfy the inequality
\[
f(x, y + v, z + w) - f(x, y, z) \ge \frac{\partial f}{\partial y}\,v + \frac{\partial f}{\partial z}\,w \tag{7.127}
\]
for all (x, y, z), (x, y + v, z + w) in the domain of f. This is precisely the requirement that the function f(x, y, z) be a convex function of (y, z) at each fixed x.

Thus, in summary: if the integrand f of the functional F defined in (7.125) satisfies the convexity condition (7.127), then a function φ that extremizes F is in fact a minimizer of F. Note that this is simply a sufficient condition for ensuring that an extremum is a minimum.

⁹See equation (??) for the definition of δF(x, y).
Remark: In the special case where f(x, y, z) is independent of y, one sees from basic calculus that if $\partial^2 f/\partial z^2 > 0$ then f(x, z) is a strictly convex function of z at each fixed x.
Example: Geodesics. Find the curve of shortest length that lies entirely on a circular cylinder of radius a, beginning (in circular cylindrical coordinates (r, θ, ξ)) at (a, θ₁, ξ₁) and ending at (a, θ₂, ξ₂), as shown in Figure 7.21.
Figure 7.21: A curve that lies entirely on a circular cylinder of radius a, beginning (in circular cylindrical coordinates) at (a, θ₁, ξ₁) and ending at (a, θ₂, ξ₂).
We can characterize a curve in $\mathbb{R}^3$ parametrically, using circular cylindrical coordinates, by r = r(θ), ξ = ξ(θ), θ₁ ≤ θ ≤ θ₂. When the curve lies on the surface of a circular cylinder of radius a, this specializes to
\[
r = a, \qquad \xi = \xi(\theta) \quad \text{for } \theta_1 \le \theta \le \theta_2.
\]
Since the arc length can be written as
\[
ds = \sqrt{dr^2 + r^2\,d\theta^2 + d\xi^2} = \sqrt{\big(r'(\theta)\big)^2 + \big(r(\theta)\big)^2 + \big(\xi'(\theta)\big)^2}\;d\theta = \sqrt{a^2 + \big(\xi'(\theta)\big)^2}\;d\theta,
\]
our task is to minimize the functional
\[
F\{\xi\} = \int_{\theta_1}^{\theta_2} f\big(\theta, \xi(\theta), \xi'(\theta)\big)\,d\theta \qquad \text{where} \qquad f(x, y, z) = \sqrt{a^2 + z^2},
\]
over the set of all suitably smooth functions ξ(θ) defined for θ₁ ≤ θ ≤ θ₂ which satisfy ξ(θ₁) = ξ₁, ξ(θ₂) = ξ₂.
Evaluating the necessary condition δF = 0 leads to the Euler equation. This second-order differential equation for ξ(θ) can be readily solved, and after using the boundary conditions ξ(θ₁) = ξ₁, ξ(θ₂) = ξ₂ this leads to
\[
\xi(\theta) = \xi_1 + \left(\frac{\xi_1 - \xi_2}{\theta_1 - \theta_2}\right)(\theta - \theta_1). \tag{7.128}
\]
Direct differentiation of $f(x, y, z) = \sqrt{a^2 + z^2}$ shows that
\[
\frac{\partial^2 f}{\partial z^2} = \frac{a^2}{(a^2 + z^2)^{3/2}} > 0,
\]
and so f is a strictly convex function of z. Thus the curve of minimum length is given uniquely by (7.128): a helix. Note that if the circular cylindrical surface is cut along a vertical line and unrolled into a flat sheet, this curve unfolds into a straight line.
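A small numerical check of these two observations (with arbitrarily chosen endpoints): the length of the helix (7.128) equals the straight-line distance between the endpoints in the unrolled (aθ, ξ) plane, and perturbing the helix (while keeping the endpoints fixed) only increases the length.
\begin{verbatim}
# Sketch: the helix (7.128) has length equal to the straight-line
# distance in the unrolled plane; perturbations increase the length.
import numpy as np
from scipy.integrate import quad

a = 1.0
th1, th2, xi1, xi2 = 0.0, 1.5*np.pi, 0.0, 2.0   # illustrative endpoints
k = (xi2 - xi1) / (th2 - th1)                    # slope of the helix

def length(xi_prime):
    return quad(lambda t: np.sqrt(a**2 + xi_prime(t)**2), th1, th2)[0]

helix = lambda t: k
# perturbation whose integral vanishes, so the endpoints are unchanged
perturb = lambda t: k + 0.3*np.pi/(th2 - th1) * \
                    np.cos(np.pi*(t - th1)/(th2 - th1))

unrolled = np.hypot(a*(th2 - th1), xi2 - xi1)
print(length(helix), unrolled)          # equal
print(length(perturb) > length(helix))  # True
\end{verbatim}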
7.11 Direct method of the calculus of variations and minimizing sequences.

We now turn to a different method of seeking minima and, for purposes of introduction, begin by reviewing the familiar case of a real-valued function f(x) of a real variable x ∈ (−∞, ∞). Consider the specific example f(x) = x². This function is non-negative and has a minimum value of zero, which it attains at x = 0. Consider the sequence of numbers
\[
x_0, x_1, x_2, x_3, \ldots, x_k, \ldots \qquad \text{where } x_k = \frac{1}{2^k},
\]
and note that
\[
\lim_{k\to\infty} f(x_k) = 0.
\]
Figure 7.22: (a) The function f(x) = x² for −∞ < x < ∞ and (b) the function f(x) = 1 for x ≤ 0, f(x) = x² for x > 0.
The sequence $1/2, 1/2^2, \ldots, 1/2^k, \ldots$ is called a minimizing sequence, in the sense that the value of the function $f(x_k)$ converges to the minimum value of f as k → ∞. Moreover, observe that
\[
\lim_{k\to\infty} x_k = 0
\]
as well, and so the sequence itself converges to the minimizer of f, i.e. to x = 0. This latter feature holds because in this example
\[
f\Big(\lim_{k\to\infty} x_k\Big) = \lim_{k\to\infty} f(x_k).
\]
As we know, not all functions have a minimum value, even if they happen to have a finite greatest lower bound. We now consider an example to illustrate the fact that a minimizing sequence can be used to find the greatest lower bound of a function that does not have a minimum. Consider the function f(x) = 1 for x ≤ 0 and f(x) = x² for x > 0. This function is non-negative and, in fact, can take values arbitrarily close to the value 0. However, it does not have a minimum value, since there is no value of x for which f(x) = 0 (note that f(0) = 1). The greatest lower bound or infimum (denoted by "inf") of f is
\[
\inf_{-\infty < x < \infty} f(x) = 0.
\]
Again consider the sequence of numbers
\[
x_0, x_1, x_2, x_3, \ldots, x_k, \ldots \qquad \text{where } x_k = \frac{1}{2^k},
\]
and note that
\[
\lim_{k\to\infty} f(x_k) = 0.
\]
In this case the value of the function $f(x_k)$ converges to the infimum of f as k → ∞. However, since
\[
\lim_{k\to\infty} x_k = 0,
\]
the limit of the sequence itself is x = 0, and f(0) is not the infimum of f. This is because in this example
\[
f\Big(\lim_{k\to\infty} x_k\Big) \ne \lim_{k\to\infty} f(x_k).
\]
Returning now to a functional, suppose that we are to find the infimum (or the minimum if it exists) of a functional F{φ} over an admissible set of functions $\mathcal{A}$. Let
\[
\inf_{\phi \in \mathcal{A}} F\{\phi\} = m \;(> -\infty).
\]
Necessarily there must exist a sequence of functions φ₁, φ₂, ... in $\mathcal{A}$ such that
\[
\lim_{k\to\infty} F\{\phi_k\} = m;
\]
such a sequence is called a minimizing sequence. If the sequence φ₁, φ₂, ... converges to a limiting function φ*, and if
\[
F\Big\{\lim_{k\to\infty} \phi_k\Big\} = \lim_{k\to\infty} F\{\phi_k\},
\]
then it follows that F{φ*} = m and the function φ* is the minimizer of F. The functions $\phi_k$ of a minimizing sequence can be considered to be approximate solutions of the minimization problem.

Just as in the second example of this section, in some variational problems the limiting function φ* of a minimizing sequence φ₁, φ₂, ... does not minimize the functional F; see the last example of this section.
7.11.1 The Ritz method

Suppose that we are to minimize a functional F{φ} over an admissible set $\mathcal{A}$. Consider an infinite sequence of functions φ₁, φ₂, ... in $\mathcal{A}$. Let $\mathcal{A}_p$ be the subset of functions in $\mathcal{A}$ that can be expressed as a linear combination of the first p functions φ₁, φ₂, ..., φ_p. In order to minimize F over the subset $\mathcal{A}_p$ we must simply minimize
\[
\widehat{F}(\alpha_1, \alpha_2, \ldots, \alpha_p) = F\{\alpha_1 \phi_1 + \alpha_2 \phi_2 + \cdots + \alpha_p \phi_p\}
\]
with respect to the real parameters α₁, α₂, ..., α_p. Suppose that the minimum of F on $\mathcal{A}_p$ is denoted by $m_p$. Clearly $\mathcal{A}_1 \subset \mathcal{A}_2 \subset \mathcal{A}_3 \ldots \subset \mathcal{A}$ and therefore $m_1 \ge m_2 \ge m_3 \ge \ldots$¹⁰ Thus, in the so-called Ritz method, we minimize F over a subset $\mathcal{A}_p$ to find an approximate minimizer; moreover, increasing the value of p improves the approximation in the sense of the preceding footnote.
Example: Consider an elastic bar of length L and modulus E that is fixed at both ends and carries a distributed axial load b(x). A displacement field u(x) must satisfy the boundary conditions u(0) = u(L) = 0, and the associated potential energy is
\[
F\{u\} = \int_0^L \tfrac{1}{2}\,E\,(u')^2\,dx - \int_0^L b\,u\,dx.
\]
We now use the Ritz method to find an approximate displacement field that minimizes F. Consider the sequence of functions v₁, v₂, v₃, ... where
\[
v_p = \sin\frac{p\pi x}{L};
\]
observe that $v_p(0) = v_p(L) = 0$ for all integers p. Consider the function
\[
u_n(x) = \sum_{p=1}^n \alpha_p \sin\frac{p\pi x}{L}
\]
for any integer n ≥ 1 and evaluate
\[
\widehat{F}(\alpha_1, \alpha_2, \ldots, \alpha_n) = F\{u_n\} = \int_0^L \tfrac{1}{2}\,E\,(u_n')^2\,dx - \int_0^L b\,u_n\,dx.
\]
Since
\[
\int_0^L 2\cos\frac{p\pi x}{L}\cos\frac{q\pi x}{L}\,dx = \begin{cases} 0 & \text{for } p \ne q, \\ L & \text{for } p = q, \end{cases}
\]
it follows that
\[
\int_0^L (u_n')^2\,dx = \int_0^L \left(\sum_{p=1}^n \alpha_p\,\frac{p\pi}{L}\cos\frac{p\pi x}{L}\right)\left(\sum_{q=1}^n \alpha_q\,\frac{q\pi}{L}\cos\frac{q\pi x}{L}\right) dx = \frac{1}{2}\sum_{p=1}^n \alpha_p^2\,\frac{p^2\pi^2}{L}.
\]
Therefore
\[
\widehat{F}(\alpha_1, \alpha_2, \ldots, \alpha_n) = F\{u_n\} = \sum_{p=1}^n \left[\frac{1}{4}\,E\,\alpha_p^2\,\frac{p^2\pi^2}{L} - \alpha_p \int_0^L b\,\sin\frac{p\pi x}{L}\,dx\right]. \tag{7.129}
\]

¹⁰If the sequence φ₁, φ₂, ... is complete, and the functional F{φ} is continuous in the appropriate norm, then one can show that $\lim_{p\to\infty} m_p = m$.
To minimize $\widehat{F}(\alpha_1, \alpha_2, \ldots, \alpha_n)$ with respect to $\alpha_p$ we set $\partial\widehat{F}/\partial\alpha_p = 0$. This leads to
\[
\alpha_p = \frac{\displaystyle\int_0^L b\,\sin\frac{p\pi x}{L}\,dx}{\displaystyle E\,\frac{p^2\pi^2}{2L}} \qquad \text{for } p = 1, 2, \ldots, n. \tag{7.130}
\]
Therefore, by substituting (7.130) into (7.129), we find that the n-term Ritz approximation of the energy is
\[
-\sum_{p=1}^n \frac{1}{4}\,E\,\alpha_p^2\,\frac{p^2\pi^2}{L},
\]
and the corresponding approximate displacement field is given by
\[
u_n = \sum_{p=1}^n \alpha_p \sin\frac{p\pi x}{L},
\]
with the coefficients $\alpha_p$ given by (7.130).
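A minimal implementation of this Ritz approximation is sketched below in Python. The constant load b(x) = b₀ is an illustrative choice, for which the exact solution of E u″ + b₀ = 0 with u(0) = u(L) = 0 is u = b₀ x(L − x)/(2E).
\begin{verbatim}
# Sketch: n-term Ritz approximation (7.130) for the fixed-fixed bar,
# tested against the exact solution for a constant axial load b0.
import numpy as np
from scipy.integrate import quad

E, L, b0, n = 2.0, 1.0, 3.0, 25
b = lambda x: b0

# Ritz coefficients from (7.130)
alpha = [quad(lambda x: b(x)*np.sin(p*np.pi*x/L), 0, L)[0]
         / (E*p**2*np.pi**2/(2*L)) for p in range(1, n + 1)]

def u_ritz(x):
    return sum(a*np.sin(p*np.pi*x/L)
               for p, a in zip(range(1, n + 1), alpha))

x = np.linspace(0, L, 201)
u_exact = b0*x*(L - x)/(2*E)    # solves E u'' + b0 = 0, u(0)=u(L)=0
print(np.abs(u_ritz(x) - u_exact).max())   # small; decreases with n
\end{verbatim}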
7.12 Worked Examples.

Example 7.N: Consider two given points (x₁, h₁) and (x₂, h₂), with h₁ > h₂, that are to be joined by a smooth wire. The wire is permitted to have any shape, provided that it does not enter the interior of the circular region (x − x₀)² + (y − y₀)² ≤ R². A bead is released from rest from the point (x₁, h₁) and slides
Figure 7.23: A curve y = φ(x) joining (x₁, h₁) to (x₂, h₂) which is disallowed from entering the forbidden region (x − x₀)² + (φ(x) − y₀)² < R².
along the wire (without friction) due to gravity. For what shape of wire is the time of travel from (x₁, h₁) to (x₂, h₂) least?

Here the wire may not enter the interior of the prescribed circular region. Therefore, in considering different wires that connect (x₁, h₁) to (x₂, h₂), we may only consider those that lie entirely outside this region:
\[
(x - x_0)^2 + (\phi(x) - y_0)^2 \ge R^2, \qquad x_1 \le x \le x_2. \tag{i}
\]
The travel time of the bead is again given by (7.1) and the test functions must satisfy the same requirements as in the first example except that, in addition, they must satisfy the (inequality) constraint (i). Our task is to minimize T{φ} over the set $\mathcal{A}_1$ subject to the constraint (i).
Example 7.N: Buckling. Consider a beam whose centerline occupies the interval y = 0, 0 < x < L, in an undeformed configuration. A compressive force P is applied at x = L and the beam adopts a buckled shape described by y = φ(x). Figure 7.24 shows the centerline of the beam in both the undeformed and deformed configurations. The beam is fixed by a pin at x = 0; the end x = L is also pinned but is permitted to move along the x-axis. The prescribed geometric boundary conditions on the deflected shape of the beam are thus φ(0) = φ(L) = 0.

By geometry, the curvature κ(x) of a curve y = φ(x) is given by
\[
\kappa(x) = \frac{\phi''(x)}{\big[1 + (\phi'(x))^2\big]^{3/2}}.
\]
From elasticity theory we know that the bending energy per unit length of a beam is (1/2)Mκ and that the bending moment M is related to the curvature κ by M = EIκ, where EI is the bending stiffness of the beam. Thus the bending energy associated with a differential element of the beam is (1/2)EIκ² ds, where ds
Figure 7.24: An elastic beam in undeformed (lower figure) and buckled (upper figure) configurations.
is arc length along the deformed beam. Thus the total bending energy in the beam is
\[
\int_0^L \tfrac{1}{2}\,EI\,\kappa^2(x)\,ds,
\]
where the arc length s is related to the coordinate x by the geometric relation
\[
ds = \sqrt{1 + (\phi'(x))^2}\;dx.
\]
Thus the total bending energy of the beam is
\[
\int_0^L \frac{1}{2}\,EI\,\frac{(\phi'')^2}{\big[1 + (\phi')^2\big]^{5/2}}\,dx.
\]
Next we need to account for the potential energy associated with the compressive force P on the beam. Since the change in length of a differential element is ds − dx, the amount by which the right-hand end of the beam moves leftwards is
\[
-\left(\int_0^L ds - \int_0^L dx\right) = -\left(\int_0^L \sqrt{1 + (\phi')^2}\;dx - L\right).
\]
Thus the potential energy associated with the applied force P is
\[
-P\left(\int_0^L \sqrt{1 + (\phi')^2}\;dx - L\right).
\]
Therefore the total potential energy of the system is
\[
\Phi\{\phi\} = \int_0^L \frac{1}{2}\,EI\,\frac{(\phi'')^2}{\big[1 + (\phi')^2\big]^{5/2}}\,dx - \int_0^L P\left(\sqrt{1 + (\phi')^2} - 1\right)dx.
\]
The Euler equation, which for such a functional has the general form
\[
\frac{d^2}{dx^2}\big(f_{\phi''}\big) - \frac{d}{dx}\big(f_{\phi'}\big) + f_\phi = 0,
\]
simplifies in the present case since f does not depend explicitly on φ. The last term above therefore drops out and the resulting equation can be integrated once immediately. This eventually leads to the Euler equation
\[
\frac{d}{dx}\left(\frac{\phi''}{\big[1 + (\phi')^2\big]^{5/2}}\right) + \frac{\phi'}{2\big[1 + (\phi')^2\big]^{1/2}}\left[\frac{P}{EI/2} + 5\left(\frac{\phi''}{\big[1 + (\phi')^2\big]^{3/2}}\right)^{2}\right] = c,
\]
where c is a constant of integration, and the natural boundary conditions are
\[
\phi''(0) = \phi''(L) = 0.
\]
Example 7.N: Linearize the boundary value problem in the buckling problem above. Also, approximate the energy and derive the Euler equation associated with it.
Example 7.N: Let u(x, t) be defined for 0 ≤ x ≤ L, 0 ≤ t ≤ T, and consider the functional
\[
F\{u\} = \int_0^T\!\!\int_0^L \left(\frac{1}{2}\,u_t^2 - \frac{1}{2}\,u_x^2\right) dx\,dt.
\]
Its Euler equation is the wave equation
\[
u_{tt} - u_{xx} = 0.
\]
Example 7.N: Consider the functional
\[
F\{u\} = \int_0^T\!\!\int_0^L \left[\frac{1}{2}\,u_t^2 - \left(\frac{1}{2}\,u_x^2 + \frac{1}{2}\,m^2 u^2\right)\right] dx\,dt.
\]
Its Euler equation is the Klein-Gordon equation
\[
u_{tt} - u_{xx} + m^2 u = 0.
\]
Example 7.N: Null Lagrangian.
Example 7.2: Soap Film Problem. Consider two circular wires, each of radius R, that are placed coaxially, a distance 2H apart. The planes defined by the two circles are parallel to each other and perpendicular to their common axis. This arrangement of wires is dipped into a soapy bath and taken out. Determine the shape of the soap film that forms.

We shall assume that the soap film adopts the shape with minimum surface energy, which implies that we are to find the shape with minimum surface area. Suppose that the film spans across the two circular wires.
Figure: The soap film forms a surface of revolution y = φ(x), −H ≤ x ≤ H, spanning the two coaxial wires of radius R.
By symmetry, the surface must coincide with the surface of revolution of some curve y = φ(x), −H ≤ x ≤ H. By geometry, the surface area of this film is
\[
\text{Area}\{\phi\} = 2\pi \int \phi(x)\,ds = 2\pi \int_{-H}^{H} \phi(x)\sqrt{1 + (\phi')^2}\;dx,
\]
where we have used the fact that $ds = \sqrt{1 + (\phi')^2}\,dx$. This is to be minimized subject to the requirements φ(−H) = φ(H) = R and
\[
\phi(x) \ge 0 \quad \text{for } -H < x < H.
\]
In order to determine the shape that minimizes the surface area we calculate its first variation δArea and set it equal to zero. This gives the Euler equation
\[
\frac{d}{dx}\left(\frac{\phi\,\phi'}{\sqrt{1 + (\phi')^2}}\right) - \sqrt{1 + (\phi')^2} = 0,
\]
which we can write as
\[
\frac{\phi\,\phi'}{\sqrt{1 + (\phi')^2}}\;\frac{d}{dx}\left(\frac{\phi\,\phi'}{\sqrt{1 + (\phi')^2}}\right) - \phi\,\phi' = 0,
\]
or
\[
\frac{d}{dx}\left(\frac{\phi\,\phi'}{\sqrt{1 + (\phi')^2}}\right)^{2} - \frac{d}{dx}\,\phi^2 = 0.
\]
This can be integrated to give
\[
(\phi')^2 = \left(\frac{\phi}{c}\right)^{2} - 1,
\]
where c is a constant. Integrating again and using the boundary conditions φ(H) = φ(−H) = R leads to
\[
\phi(x) = c \cosh\left(\frac{x}{c}\right), \tag{i}
\]
where c is to be determined from the algebraic equation
\[
\cosh\frac{H}{c} = \frac{R}{c}. \tag{ii}
\]
Figure 7.25: Intersection of the curve described by ζ = cosh ξ with the straight line ζ = µξ.
Given H and R, if this equation can be solved for c, then the minimizing shape is given by (i) with this value (or values) of c. To examine the solvability of (ii), set ξ = H/c and µ = R/H, whereupon this equation can be written as
\[
\cosh\xi = \mu\,\xi.
\]
As seen from Figure 7.25, the graph ζ = cosh ξ intersects the straight line ζ = µξ twice if µ > µ*; once if µ = µ*; and there is no intersection if µ < µ*. Here µ* ≈ 1.50888 is found by solving the pair of algebraic equations cosh ξ = µ*ξ, sinh ξ = µ*, where the latter equation reflects the tangency of the two curves at the contact point in this limiting case.

Thus, in summary: if R < µ*H there is no shape of the soap film that extremizes the surface area; if R = µ*H there is a unique shape of the soap film given by (i) that extremizes the surface area; and if R > µ*H there are two shapes of the soap film given by (i) that extremize the surface area (and further analysis investigating the stability of these configurations is needed in order to determine the physically realized shape).
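The tangency computation is easily reproduced numerically: eliminating µ* from cosh ξ = µ*ξ and sinh ξ = µ* gives ξ tanh ξ = 1, which can be solved by bracketing, as in the Python sketch below.
\begin{verbatim}
# Sketch: compute the critical ratio mu* for the soap-film problem.
# Tangency requires cosh(xi) = mu*xi and sinh(xi) = mu; eliminating mu
# gives xi*tanh(xi) = 1.
import numpy as np
from scipy.optimize import brentq

xi_star = brentq(lambda xi: xi*np.tanh(xi) - 1.0, 0.1, 5.0)
mu_star = np.sinh(xi_star)
print(xi_star, mu_star)   # approximately 1.19968 and 1.50888
\end{verbatim}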
Remark: In order to understand what happens when R < µ*H, consider the following heuristic argument. There are three possible configurations of the soap film to consider: one, the film bridges across from one circular wire to the other but does not form on the flat faces of the two circular wires themselves (which is the case analyzed above); two, the film forms on each circular wire but does not bridge the two wires; and three, the film does both of the above. We can immediately discard the third case, since it involves more surface area than either of the first two cases. Consider the first possibility: the soap film spans across the two circular wires and, as an approximation, suppose that it forms a circular cylinder of radius R and length 2H. In this case the area of the soap film is 2πR(2H). In the second case, the soap film covers only the two end regions formed by the circular wires and so its area is 2πR². Since 4πRH < 2πR² for 2H < R, and 4πRH > 2πR² for 2H > R, this suggests that the soap film will span across the two circular wires if R > 2H, whereas it will not span across the two circular wires if R < 2H (and would instead cover only the two circular ends).
Example 7.4: Minimum Drag Problem. Consider a spacecraft whose shape is to be designed such that the drag on it is a minimum. The outer surface of the spacecraft is composed of two segments as shown in Figure 7.26: the inner portion (x = 0 with 0 < y < h₁ in the figure) is a flat circular disk-shaped nose of radius h₁, and the outer portion is obtained by rigidly rotating the curve y = φ(x), 0 < x < 1, about the x-axis. We are told that φ(0) = h₁, φ(1) = h₂, with the value of h₂ being given; the value of h₁, however, is not prescribed and is to be chosen along with the function φ(x) such that the drag is minimized.
Figure 7.26: The shape of the spacecraft with minimum drag is generated by rotating the curve y = φ(x) about the x-axis. The spacecraft moves at a speed V in the −x-direction.
According to the most elementary model of drag (due to Newton), the pressure at some point on a surface is proportional to the square of the normal speed of that point. Thus, if the spacecraft has speed V relative to the surrounding medium, the pressure on the body at some generic point is proportional to (V cos θ)², where θ is the angle shown in Figure 7.26; this acts on a differential area dA = 2πy ds = 2πφ ds. The horizontal component of this force is therefore obtained by integrating dF cos θ = (V cos θ)² × (2πφ ds) × cos θ over the entire body. Thus the drag D is given, in suitable units, by
\[
D = \pi h_1^2 + 2\pi \int_0^1 \frac{\phi\,(\phi')^3}{1 + (\phi')^2}\,dx,
\]
where we have used the fact that $ds = dx\,\sqrt{1 + (\phi')^2}$ and $\cos\theta = \phi'/\sqrt{1 + (\phi')^2}$.
To optimize this we calculate the first variation of D, remembering that both the function φ and the parameter h₁ can be varied. Thus we are led to
\[
\begin{aligned}
\delta D &= 2\pi h_1\,\delta h_1 + 2\pi \int_0^1 \frac{(\phi')^3}{1 + (\phi')^2}\,\delta\phi\,dx + 2\pi \int_0^1 \phi\left[\frac{3(\phi')^2}{1 + (\phi')^2} - \frac{2(\phi')^4}{\big[1 + (\phi')^2\big]^2}\right]\delta\phi'\,dx \\
&= 2\pi h_1\,\delta h_1 + 2\pi \int_0^1 \frac{(\phi')^3}{1 + (\phi')^2}\,\delta\phi\,dx + 2\pi \int_0^1 \frac{\phi\,(\phi')^2\big(3 + (\phi')^2\big)}{\big[1 + (\phi')^2\big]^2}\,\delta\phi'\,dx.
\end{aligned}
\]
Integrating the last term by parts and recalling that δφ(1) = 0 and δφ(0) = δh₁ (since the value of φ(1) is prescribed but the value of φ(0) = h₁ is not), we are led to
\[
\delta D = 2\pi h_1\,\delta h_1 + 2\pi \int_0^1 \frac{(\phi')^3}{1 + (\phi')^2}\,\delta\phi\,dx - 2\pi \int_0^1 \frac{d}{dx}\left[\frac{\phi\,(\phi')^2\big(3 + (\phi')^2\big)}{\big[1 + (\phi')^2\big]^2}\right]\delta\phi\,dx - 2\pi\,\frac{\phi\,(\phi')^2\big(3 + (\phi')^2\big)}{\big[1 + (\phi')^2\big]^2}\Bigg|_{x=0}\,\delta h_1.
\]
The arbitrariness of δφ and δh₁ now yields
\[
\frac{d}{dx}\left[\frac{\phi\,(\phi')^2\big(3 + (\phi')^2\big)}{\big[1 + (\phi')^2\big]^2}\right] - \frac{(\phi')^3}{1 + (\phi')^2} = 0 \quad \text{for } 0 < x < 1,
\qquad
\frac{(\phi')^2\big(3 + (\phi')^2\big)}{\big[1 + (\phi')^2\big]^2}\Bigg|_{x=0} = 1.
\tag{i}
\]
The differential equation (i)₁ and the natural boundary condition (i)₂ can be readily reduced to
\[
\frac{d}{dx}\left[\frac{\phi\,(\phi')^3}{\big[1 + (\phi')^2\big]^2}\right] = 0 \quad \text{for } 0 < x < 1, \qquad \phi'(0) = 1. \tag{ii}
\]
The differential equation (ii) tells us that
\[
\frac{\phi\,(\phi')^3}{\big[1 + (\phi')^2\big]^2} = c_1 \quad \text{for } 0 < x < 1, \tag{iii}
\]
where c₁ is a constant. Together with the given boundary conditions, we are therefore to solve the differential equation (iii) subject to the conditions
\[
\phi(0) = h_1, \qquad \phi(1) = h_2, \qquad \phi'(0) = 1, \tag{iv}
\]
in order to find the shape φ(x) and the parameter h₁. Since φ(0) = h₁ and φ'(0) = 1, it follows that c₁ = h₁/4.
It is most convenient to write the solution of (iii) with c₁ = h₁/4 parametrically by setting φ' = ξ. This leads to
\[
\phi = \frac{h_1}{4}\left(\xi^{-3} + 2\xi^{-1} + \xi\right), \qquad
x = \frac{h_1}{4}\left(\frac{3}{4}\,\xi^{-4} + \xi^{-2} + \log\xi\right) + c_2, \qquad 1 > \xi > \xi_2,
\]
where c₂ is a constant of integration and ξ is the parameter. On physical grounds we expect that the slope φ' will decrease with increasing x, and so we have supposed that ξ decreases as x increases; thus as x increases from 0 to 1 we have supposed that ξ decreases from ξ₁ to ξ₂ (where we know that ξ₁ = φ'(0) = 1). Since ξ = φ' = 1 when x = 0, the preceding equation gives c₂ = −7h₁/16. Thus
\[
\phi = \frac{h_1}{4}\left(\xi^{-3} + 2\xi^{-1} + \xi\right), \qquad
x = \frac{h_1}{4}\left(\frac{3}{4}\,\xi^{-4} + \xi^{-2} + \log\xi - \frac{7}{4}\right), \qquad 1 > \xi > \xi_2. \tag{v}
\]
The boundary condition φ = h₁, φ' = 1 at x = 0 has already been satisfied. The boundary condition φ = h₂ at x = 1, where ξ = ξ₂, requires that
\[
h_2 = \frac{h_1}{4}\left(\xi_2^{-3} + 2\xi_2^{-1} + \xi_2\right), \qquad
1 = \frac{h_1}{4}\left(\frac{3}{4}\,\xi_2^{-4} + \xi_2^{-2} + \log\xi_2 - \frac{7}{4}\right), \tag{vi}
\]
which are two equations for determining h₁ and ξ₂. Dividing the first of (vi) by the second yields a single equation for ξ₂:
\[
\xi_2^5 - h_2\,\xi_2^4\log\xi_2 + \frac{7}{4}\,h_2\,\xi_2^4 + 2\xi_2^3 - h_2\,\xi_2^2 + \xi_2 - \frac{3}{4}\,h_2 = 0. \tag{vii}
\]
If this can be solved for ξ₂, then either equation in (vi) gives the value of h₁, and (v) then provides a parametric description of the optimal shape. For example, if we take h₂ = 1, then the root of (vii) is ξ₂ ≈ 0.521703 and then h₁ ≈ 0.350943.
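These numbers are easy to reproduce: solve (vii) for ξ₂ by bracketing and then recover h₁ from the second equation of (vi), as in the Python sketch below.
\begin{verbatim}
# Sketch: solve (vii) for xi2 with h2 = 1, then get h1 from (vi)_2.
import numpy as np
from scipy.optimize import brentq

h2 = 1.0
def eq_vii(s):
    return (s**5 - h2*s**4*np.log(s) + 1.75*h2*s**4
            + 2*s**3 - h2*s**2 + s - 0.75*h2)

xi2 = brentq(eq_vii, 0.1, 0.9)
h1 = 4.0 / (0.75*xi2**-4 + xi2**-2 + np.log(xi2) - 1.75)
print(xi2, h1)   # approximately 0.521703 and 0.350943
\end{verbatim}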
Example 7.5: Consider the variational problem where we are asked to minimize the functional
\[
F\{\phi\} = \int_0^1 f(\phi, \phi')\,dx
\]
over some admissible set of functions $\mathcal{A}$. Note that this is a special case of the standard problem in which the function f(x, φ, φ') is not explicitly dependent on x: in the present case f depends on x only through φ(x) and φ'(x).

The Euler equation is given, as usual, by
\[
\frac{d}{dx} f_{\phi'} - f_\phi = 0 \quad \text{for } 0 < x < 1.
\]
Multiplying this by φ' gives
\[
\phi'\,\frac{d}{dx} f_{\phi'} - \phi'\,f_\phi = 0 \quad \text{for } 0 < x < 1,
\]
which can be written equivalently as
\[
\left[\frac{d}{dx}\big(\phi'\,f_{\phi'}\big) - \phi''\,f_{\phi'}\right] - \left[\frac{d}{dx} f - \phi''\,f_{\phi'}\right] = 0 \quad \text{for } 0 < x < 1.
\]
Since this simplifies to
\[
\frac{d}{dx}\big(\phi'\,f_{\phi'} - f\big) = 0 \quad \text{for } 0 < x < 1,
\]
it follows that in this special case the Euler equation can be integrated once to have the simplified form
\[
\phi'\,f_{\phi'} - f = \text{constant} \quad \text{for } 0 < x < 1.
\]
Remark: We could have taken advantage of this in, for example, the preceding problem.
Example 7.6: Elastic bar. The following problem arises when examining the equilibrium state of a one-dimensional bar composed of a nonlinearly elastic material. An equilibrium state of the bar is characterized by a displacement field u(x), and the material of which the bar is composed is characterized by a potential $\widehat{W}(u')$. It is convenient to denote the derivative of $\widehat{W}$ by $\widehat{\sigma}$,
\[
\widehat{\sigma}(u') = \widehat{W}{}'(u'),
\]
so that $\sigma(x) = \widehat{\sigma}(u'(x))$ represents the stress at the point x in the bar. The bar has unit cross-sectional area and occupies the interval 0 ≤ x ≤ L in a reference configuration. The end x = 0 of the bar is fixed, so that u(0) = 0; a prescribed force P is applied at the end x = L; and a distributed force per unit length b(x) is applied along the length of the bar.

An admissible displacement field is required to be continuous on [0, L], piecewise continuously differentiable on [0, L], and to conform to the boundary condition u(0) = 0. The total potential energy associated with any admissible displacement field is
\[
V\{u\} = \int_0^L \widehat{W}(u'(x))\,dx - \int_0^L b(x)\,u(x)\,dx - P\,u(L),
\]
which can be written in the conventional form
\[
V\{u\} = \int_0^L f(x, u, u')\,dx \qquad \text{where} \qquad f(x, u, u') = \widehat{W}(u') - b\,u - P\,u'.
\]
The actual displacement field minimizes the potential energy V over the admissible set, and so the three basic ingredients of the theory can now be derived as follows:

i. At any point x at which the displacement field is smooth, the Euler equation
\[
\frac{d}{dx}\left(\frac{\partial f}{\partial u'}\right) - \frac{\partial f}{\partial u} = 0
\]
takes the explicit form
\[
\frac{d}{dx}\,\widehat{W}{}'(u') + b = 0,
\]
which can be written in terms of stress as
\[
\frac{d\sigma}{dx} + b = 0.
\]

ii. The displacement field u(x) satisfies the prescribed boundary condition u = 0 at x = 0. The natural boundary condition at the right-hand end is given, according to equation (7.50) in Section 7.5.1, by
\[
f_{u'} = 0 \quad \text{at } x = L,
\]
which in the present case reduces to
\[
\sigma(L) = \widehat{\sigma}\big(u'(L)\big) = P.
\]

iii. Finally, suppose that u' has a jump discontinuity at some location x = s. Then the first Weierstrass-Erdmann corner condition (7.88) requires that ∂f/∂u' be continuous at x = s, i.e. that the stress σ(x) must be continuous at x = s:
\[
\sigma\big|_{x=s-} = \sigma\big|_{x=s+}. \tag{i}
\]
The second Weierstrass-Erdmann corner condition (7.89) requires that $f - u'\,\partial f/\partial u'$ be continuous at x = s, i.e. that the quantity $W - u'\sigma$ must be continuous at x = s:
\[
\big(W - u'\sigma\big)\Big|_{x=s-} = \big(W - u'\sigma\big)\Big|_{x=s+}. \tag{ii}
\]
Remark: The generalization of the quantity $W - u'\sigma$ to three dimensions is known as the Eshelby tensor.
Figure 7.27: (a) A non-monotonic (rising-falling-rising) stress response function $\widehat{\sigma}(u')$ and (b) the corresponding non-convex energy $\widehat{W}(u')$.
In order to illustrate how a discontinuity in u' can arise in an elastic bar, observe first that, according to the first Weierstrass-Erdmann condition (i), the stress σ on either side of x = s has to be continuous. Thus if the function $\widehat{\sigma}(u')$ is monotonically increasing, then it follows that $\sigma = \widehat{\sigma}(u')$ has a unique solution u' corresponding to a given σ, and so u' must also be continuous at x = s. On the other hand, if $\widehat{\sigma}(u')$ is a non-monotonic function as, for example, shown in Figure 7.27(a), then more than one value of u' can correspond to the same value of σ, and so in such a case, even though σ(x) is continuous at x = s, it is possible for u' to be discontinuous, i.e. for u'(s−) ≠ u'(s+), as shown in the figure. The energy function $\widehat{W}(u')$ sketched in Figure 7.27(b) corresponds to the stress function $\widehat{\sigma}(u')$ shown in Figure 7.27(a). In particular, the values of u' at which $\widehat{\sigma}$ has a local maximum and local minimum correspond to inflection points of the energy function $\widehat{W}(u')$, since $\widehat{W}{}'' = 0$ when $\widehat{\sigma}{}' = 0$.
The second Weierstrass-Erdmann condition (ii) tells us that the stress σ at the discontinuity has to have a special value. To see this we write out (ii) explicitly as
$$\widehat{W}(u'(s+)) - u'(s+)\,\sigma = \widehat{W}(u'(s-)) - u'(s-)\,\sigma$$
and then use $\widehat{\sigma}(u') = \widehat{W}'(u')$ to express it in the form
$$\int_{u'(s-)}^{u'(s+)} \widehat{\sigma}(v)\,dv = \sigma\left[\,u'(s+) - u'(s-)\,\right]. \qquad (iii)$$
This implies that the value of σ must be such that the area under the stress response curve in Figure 7.27(a) from $u'(s-)$ to $u'(s+)$ equals the area of the rectangle which has the same base and has height σ; or equivalently, that the two shaded areas in Figure 7.27(a) must be equal.
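As a supplementary illustration (an added sketch, not from the notes), the equal-area condition (iii) can be solved numerically for an assumed rising-falling-rising response of the kind sketched in Figure 7.27(a). The cubic $\widehat{\sigma}(u') = (u')^3 - u'$ used below is hypothetical; by its symmetry the resulting "Maxwell stress" should come out as $\sigma = 0$, with $u'(s-) = -1$ and $u'(s+) = +1$.

```python
# Solve the equal-area rule (iii) for the cubic response sigma_hat(g) = g^3 - g,
# where g stands for u'.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def sigma_hat(g):
    return g**3 - g                       # non-monotonic stress response

def outer_roots(sigma):
    # smallest and largest solutions g of sigma_hat(g) = sigma
    r = np.sort(np.roots([1.0, 0.0, -1.0, -sigma]).real)
    return r[0], r[-1]

def area_mismatch(sigma):
    # left-hand side of (iii) minus right-hand side
    g_minus, g_plus = outer_roots(sigma)
    integral, _ = quad(sigma_hat, g_minus, g_plus)
    return integral - sigma * (g_plus - g_minus)

# sigma must lie in the non-monotone window |sigma| < 2/(3*sqrt(3)) ~ 0.385
sigma_star = brentq(area_mismatch, -0.3, 0.3)
print(sigma_star, outer_roots(sigma_star))   # ~ 0.0 and (-1.0, 1.0)
```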
Example 7.7: Non-smooth extremal. Find a curve that extremizes
$$F\{\phi\} = \int_0^1 f(x, \phi(x), \phi'(x))\,dx,$$
that begins from (0, a) and ends at (1, b) after contacting a given curve y = g(x).
Remark: By identifying the curve y = g(x) with the surface of a mirror and specializing the functional F to the travel time of light, one can derive the law of reflection for light.
Example 7.8: Inequality constraint. Find a curve that extremizes
$$I\{\phi\} = \int_0^a (\phi'(x))^3\,dx, \qquad \phi(0) = \phi(a) = 0,$$
and that is prohibited from entering the interior of the circle
$$(x - a/2)^2 + y^2 = b^2.$$
Example 7.9: An example to caution against simple-minded discretization. (Due to John Ball.) Let
$$F\{u\} = \int_0^1 \left( u^3(x) - x \right)^2 \left( u'(x) \right)^6 dx$$
for all functions such that $u(0) = 0$, $u(1) = 1$. Clearly $F\{u\} \geq 0$. Moreover $F\{\bar{u}\} = 0$ for $\bar{u}(x) = x^{1/3}$. Therefore the minimizer of $F\{u\}$ is $\bar{u}(x) = x^{1/3}$.
Discretize the interval [0, 1] into N segments, and take a continuous test function that is linear on each segment. Calculate the functional F at this test function, next minimize it at fixed N, and finally take the limit as N tends to infinity. What do you get? (You will get an answer, but not the correct one.)
To anticipate this difficulty in a different way, consider a 2-element discretization, and take the continuous test function
$$u_1(x) = \begin{cases} c\,x & \text{for } 0 < x < h, \\ \bar{u}(x) & \text{for } h < x < 1, \end{cases} \qquad (i)$$
where continuity at $x = h$ requires $c = \bar{u}(h)/h = h^{-2/3}$. Calculate $F\{u_1\}$ for this function. Take the limit as $h \to 0$ and observe that $F\{u_1\}$ does not go to zero (i.e. to $F\{\bar{u}\}$).
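A quick numerical check of this behavior (an added sketch, not part of the notes, and relying on the sixth-power integrand written above): evaluating $F\{u_1\}$ for decreasing h shows the energy growing like $(8/105)\,h^{-1}$ rather than tending to $F\{\bar{u}\} = 0$, even though $u_1$ agrees with the minimizer on $(h, 1)$.

```python
# Evaluate F{u_1} for the test function (i); c = h**(-2/3) is fixed by
# continuity at x = h. F{u_1} grows, rather than decays, as h -> 0.
from scipy.integrate import quad

def F_piece(u, du, lo, hi):
    integrand = lambda x: (u(x)**3 - x)**2 * du(x)**6
    val, _ = quad(integrand, lo, hi, limit=200)
    return val

for h in [1e-1, 1e-2, 1e-3]:
    c = h**(-2.0 / 3.0)
    on_first = F_piece(lambda x: c * x, lambda x: c, 0.0, h)       # (0, h)
    on_rest = F_piece(lambda x: x**(1.0 / 3.0),
                      lambda x: x**(-2.0 / 3.0) / 3.0, h, 1.0)     # (h, 1)
    print(h, on_first + on_rest)   # approximately (8/105)/h
```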
Example 7.10: Legendre necessary condition for a local minimum. Let
$$F\{\varepsilon\} = \int_0^1 f(x,\ \phi + \varepsilon\eta,\ \phi' + \varepsilon\eta')\,dx$$
for all functions $\eta(x)$ with $\eta(0) = \eta(1) = 0$. Show that
$$F''(0) = \int_0^1 \left[\, f_{\phi\phi}\,\eta^2 + 2 f_{\phi\phi'}\,\eta\,\eta' + f_{\phi'\phi'}\,(\eta')^2 \,\right] dx.$$
Suppose that $F''(0) \geq 0$ for all admissible functions $\eta$. Show that it is necessary that
$$f_{\phi'\phi'}(x, \phi(x), \phi'(x)) \geq 0 \quad \text{for } 0 \leq x \leq 1.$$
Example 7.11: Bending of a thin plate. Consider a thin rectangular plate of dimensions a × b that occupies the region A = {(x, y) : 0 < x < a, 0 < y < b} of the x, y-plane. A distributed pressure loading p(x, y) is applied on the planar face of the plate in the z-direction, and the resulting deflection of the plate in the z-direction is denoted by w(x, y). The edges x = 0 and y = 0 of the plate are clamped, which implies the geometric restrictions that the plate can neither deflect nor rotate along these edges:
$$w = 0, \quad \frac{\partial w}{\partial x} = 0 \quad \text{on } x = 0,\ 0 < y < b; \qquad w = 0, \quad \frac{\partial w}{\partial y} = 0 \quad \text{on } y = 0,\ 0 < x < a; \qquad (i)$$
the edge y = b is hinged, which means that its deflection must be zero but there is no geometric restriction on the slope:
$$w = 0 \quad \text{on } y = b,\ 0 < x < a; \qquad (ii)$$
and finally the edge x = a is free in the sense that the deflection and the slope are not geometrically restricted in any way.
The potential energy of the plate and loading associated with an admissible deflection field, i.e. a function w(x, y) that obeys (i) and (ii), is given by
$$\Phi\{w\} = \frac{D}{2}\int_A \left[ \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \right)^{\!2} - 2(1-\nu)\left( \frac{\partial^2 w}{\partial x^2}\,\frac{\partial^2 w}{\partial y^2} - \left( \frac{\partial^2 w}{\partial x\,\partial y} \right)^{\!2}\, \right) \right] dx\,dy - \int_A p\,w\,dx\,dy, \qquad (iii)$$
where D and ν are constants. The actual deflection of the plate is given by the minimizer of Φ. We are asked to derive the Euler equation and the natural boundary conditions to be satisfied by the minimizing deflection w.
Answer: The Euler equation is
$$\frac{\partial^4 w}{\partial x^4} + 2\,\frac{\partial^4 w}{\partial x^2 \partial y^2} + \frac{\partial^4 w}{\partial y^4} = \frac{p}{D}, \qquad 0 < x < a,\ 0 < y < b, \qquad (iv)$$
and the natural boundary conditions are
$$\frac{\partial^2 w}{\partial y^2} + \nu\,\frac{\partial^2 w}{\partial x^2} = 0 \quad \text{on } y = b,\ 0 < x < a; \qquad (v)$$
and
$$\frac{\partial^2 w}{\partial x^2} + \nu\,\frac{\partial^2 w}{\partial y^2} = 0 \quad \text{and} \quad \frac{\partial^3 w}{\partial x^3} + (2 - \nu)\,\frac{\partial^3 w}{\partial x\,\partial y^2} = 0, \quad \text{on } x = a,\ 0 < y < b. \qquad (vi)$$
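The field equation (iv) can be checked mechanically. The sketch below (an added illustration, not part of the notes) uses SymPy's euler_equations, which accepts Lagrangians containing second derivatives; the term multiplying $2(1-\nu)$ in (iii) is a null Lagrangian, so it drops out of the field equation and only affects the natural boundary conditions.

```python
# Verify the plate Euler equation (iv) symbolically.
import sympy as sp
from sympy.calculus.euler import euler_equations

x, y, D, nu = sp.symbols('x y D nu')
w = sp.Function('w')(x, y)
p = sp.Function('p')(x, y)

wxx, wyy, wxy = sp.diff(w, x, 2), sp.diff(w, y, 2), sp.diff(w, x, y)
L = D/2 * ((wxx + wyy)**2 - 2*(1 - nu)*(wxx*wyy - wxy**2)) - p*w

eq, = euler_equations(L, [w], [x, y])
print(sp.simplify(eq))   # D*(w_xxxx + 2*w_xxyy + w_yyyy) - p = 0, i.e. (iv)
```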
Example 7.12: Consider the functional
$$F\{\phi\} = \int_0^1 \left[ \left( (\phi')^2 - 1 \right)^2 + \phi^2 \right] dx, \qquad \phi(0) = \phi(1) = 0, \qquad (i)$$
and determine a minimizing sequence $\phi_1, \phi_2, \phi_3, \ldots$ such that $F\{\phi_k\}$ approaches its infimum as $k \to \infty$. If the minimizing sequence itself converges to $\phi_*$, show that $F\{\phi_*\}$ is not the infimum of F.
Remark: Note that this functional is non-negative. If the functional takes the value zero then, since its integrand is the sum of two non-negative terms, each of those terms must vanish individually. Thus we must have $\phi'(x) = \pm 1$ and $\phi(x) = 0$ on the interval 0 < x < 1. These cannot both be satisfied by a regular function.
Figure 7.28: Sawtooth function $\phi_k(x)$ with k local maxima and linear segments of slope ±1. Here h = 1/k; on a typical tooth, $\phi_k(x) = x - nh$ for $nh < x < nh + h/2$ and $\phi_k(x) = -x + (n+1)h$ for $nh + h/2 < x < (n+1)h$.
Let $\phi_k(x)$ be the piecewise linear sawtooth function with k local maxima shown in Figure 7.28; the slope of each linear segment is ±1. Note that the base of each tooth is h = 1/k and its height is h/2. Thus as k increases there are more and more teeth, each of which has a smaller base and a smaller height. Observe that the first term in the integrand of (i) vanishes identically for every k; the second term, which equals the area under the square of $\phi_k$, approaches zero as $k \to \infty$. Thus the family of sawtooth functions is a minimizing sequence of (i). However, note that since $\phi_k(x) \to 0$ at each fixed x as $k \to \infty$, the limiting function to which this sequence converges, i.e. φ(x) = 0, is not a minimizer of (i): indeed $F\{0\} = 1$, whereas the infimum of F is 0.
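A short numerical confirmation (an added sketch, not part of the notes): evaluating F at the sawtooth functions gives $F\{\phi_k\} = 1/(12k^2) \to 0$, while the limit function $\phi = 0$ gives $F\{0\} = 1$.

```python
# Evaluate F{phi_k} for the sawtooth functions of Figure 7.28.
import numpy as np

def phi_k(x, k):
    h = 1.0 / k
    s = np.mod(x, h)                        # position within one tooth
    return np.where(s < h / 2, s, h - s)    # rise then fall, height h/2

def dphi_k(x, k):
    h = 1.0 / k
    return np.where(np.mod(x, h) < h / 2, 1.0, -1.0)   # slopes +/- 1

def F(phi, dphi, n=400000):
    x = (np.arange(n) + 0.5) / n            # midpoint rule on [0, 1]
    return np.mean((dphi(x)**2 - 1.0)**2 + phi(x)**2)

for k in [1, 4, 16, 64]:
    print(k, F(lambda x: phi_k(x, k), lambda x: dphi_k(x, k)))  # ~ 1/(12*k*k)

print("limit", F(lambda x: np.zeros_like(x), lambda x: np.zeros_like(x)))  # = 1
```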
2
3
Electronic Publication
Rohan Abeyaratne Quentin Berg Professor of Mechanics Department of Mechanical Engineering 77 Massachusetts Institute of Technology Cambridge, MA 021394307, USA Copyright c by Rohan Abeyaratne, 1987 All rights reserved
Abeyaratne, Rohan, 1952Lecture Notes on The Mechanics of Elastic Solids. Volume 1: A Brief Review of Some Mathematical Preliminaries / Rohan Abeyaratne – 1st Edition – Cambridge, MA:
ISBN13: 9780979186509 ISBN10: 0979186501
QC
Please send corrections, suggestions and comments to abeyaratne.vol.1@gmail.com
Updated June 25 2007
4
i Dedicated with admiration and aﬀection to Matt Murphy and the miracle of science. . for the gift of renaissance.
.
I have had the opportunity to regularly teach the second and third of these subjects. Advanced Mechanical Behavior of Materials. Molecular Modeling and Simulation for Mechanics.iii PREFACE The Department of Mechanical Engineering at MIT oﬀers a series of graduate level subjects on the Mechanics of Solids and Structures which include: 2. 2. and Computational Mechanics of Materials. Morton E. My understanding of elasticity as well as these notes have also beneﬁtted greatly from many useful conversations with Kaushik Bhattacharya. I have been most fortunate to have had the opportunity to apprentice under these inspiring and distinctive scholars. colleague and friend Jim Knowles. reﬁned and expanded on every following occasion that I taught these classes. . Eliot Fried. The material in the current presentation is still meant to be a set of lecture notes. Over the years.071: 2. and the current three volumes are comprised of the lecture notes I developed for them. not a text book. Janet Blume. The ﬁrst draft of these notes was produced in 1987 and they have been corrected. Solid Mechanics: Elasticity. Finite Element Analysis of Solids and Fluids.080: 2. Structural Mechanics. I would especially like to acknowledge a great many illuminating and stimulating interactions with my mentor. It has been organized as follows: Volume I: A Brief Review of Some Mathematical Preliminaries Volume II: Continuum Mechanics Volume III: Elasticity My appreciation for mechanics was nucleated by Professors Douglas Amarasekara and Munidasa Ranaweera of the (then) University of Ceylon. Solid Mechanics: Plasticity and Inelastic Deformation. and was subsequently shaped and grew substantially under the inﬂuence of Professors James K.094: 2.073: 2.074: 2. Mechanics of Continuous Media.072 and 2.075: 2.072: 2. whose inﬂuence on me cannot be overstated.083).099: Mechanics of Solid Materials. Knowles and Eli Sternberg of the California Institute of Technology.074 (formerly known as 2.095: 2. I am also indebted to the many MIT students who have given me enormous fulﬁllment and joy to be part of their education.
Oxford University Press. I cannot recall every source I have used but certainly they include those listed at the end of each chapter. Ericksen. Phoebus Rosakis. Academic Press. Linear Algebra is a far richer subject than the treatment here. Prentice Hall. Gelfand and S. California Institute of Technology. 1963. which is limited to real 3dimensional Euclidean vector spaces. Sternberg. and sections on the socalled Eshelby problem and the eﬀective behavior of twophase materials in Volume III. The topics covered in Volumes II and III are largely those one would expect to see covered in such a set of lecture notes.M. James. There are a number of Worked Examples at the end of each chapter which are an essential part of the notes. or it illustrates a general concept. J. CA 1978. or a proof. It is most certainly not meant to be a source for learning these topics for the ﬁrst time. Calculus of Variations. J. Continuum Mechanics: Concise Theory and Problems. selective and limited in scope.V. or it establishes a result that will be used subsequently (possibly in a later volume). David M. For example. Volume II: Continuum Mechanics P. which I gratefully acknowledge. Gurtin. Knowles and E. . 1997. Parks. Many of these examples either provide. Dover. (Unpublished) Lecture Notes for AM136: Finite Elasticity. An Introduction to Continuum Mechanics. designed to review those aspects of mathematics that will be encountered in the subsequent volumes. results.L. Pasadena. The content of these notes are entirely classical. more details. Knowles. M.iv Gurtin. New York. 1981. Richard D. and illustrative examples. Stelios Kyriakides. in the best sense of the word. Chadwick. Stewart Silling and Nicolas Triantafyllidis. Personal taste has led me to include a few special (but still wellknown) topics. Fomin. Introduction to the Thermodynamics of Solids. I have drawn on a number of sources over the years as I prepared my lectures. The treatment is concise. K. In a more general sense the broad approach and philosophy taken has been inﬂuenced by: Volume I: A Brief Review of Some Mathematical Preliminaries I. Volume I of these notes provides a collection of essential deﬁnitions. Chapman and Hall. 1991. Examples of this include sections on the statistical mechanical theory of polymer chains and the lattice theory of crystalline solids in the discussion of constitutive theory in Volume II. J.K. of a result that had been quoted previously in the text.E.1999. and none of the material here is original. Linear Vector Spaces and Cartesian Tensors.
Volume II. in Mechanics of Solids .. Truesdell and W. 1976. (Unpublished) Lecture Notes for AM135: Elasticity. Thus. b. Knowles. u Volume IIII: Elasticity M. As much as possible this notation will also be used in Volumes II and III though there will be some lapses (for reasons of tradition). A. Gurtin. H. Truesdell. E. Timoshenko and J. SpringerVerlag. Volume III/3. B. Pasadena. The linear theory of elasticity.. for example. Noll. . Edited by S. γ. α.. A Treatise on the Mathematical Theory of Elasticity. will denote scalars (real numbers). and A. 1944. lowercase boldface Latin letters will denote vectors. will denote linear transformations. The nonlinear ﬁeld theories of mechanics. β. 1984. 1987. CA.N. . c.. In particular. edited by C. Springer. Fl¨gge. Dover. California Institute of Technology. will denote vectors. 1965. in Handb¨ch der u Physik. Goodier. McGrawHill. S.E.. and uppercase boldface Latin letters will denote linear transformations. Love. a. J. .v C. “o” will denote the null vector while “0” will denote the null linear transformation. Theory of Elasticity.. K. C. The following notation will be used consistently in Volume I: Greek letters will denote real numbers. P.
vi .
. . . . . . . . . . . . . . . . . . . . . . . .2 3. . 2. . . . . . Linear Transformations. . 3 Components of Tensors. .1 1. . . . . . . . . . . . . . . . . . . . . . .3 3.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . vii . .2 2. . . . . . .1. . . . . . . . . . . . . . . . . . . . Determinant.4 1. . . . . . . Components of a linear transformation in a basis. . . . . . . . . . . . . . . . . . . . . . . . trace. . . . . . . . . . . . . . . . . The alternator or permutation symbol . . . . . . . . . . . . . . Components in two bases. . . . . . . . . . Indicial notation .1 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Contents 1 Matrix Algebra and Indicial Notation 1. . . .3 Euclidean point space . . . . . .6 Matrix algebra . . . Worked Examples. . . . . . . . . . . .2 1. . .4 Components of a vector in a basis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cartesian Tensors 3. . . . . . . . . . . . . . . . . . . . . . . Worked Examples.5 1. . .3 1. . . . . . . scalarproduct and norm . . . . . . . . . . . . 2 Vectors and Linear Transformations 2. . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kronecker delta . . . . . 1 1 5 7 9 10 11 17 18 20 20 26 41 41 43 45 47 Summation convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . .1 6. . . . . . . . . . . .4 Notation and deﬁnitions. . . . . . . . . . . . . .2 Introductory Remarks . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 5. . . . . . . . . . . . . . 108 Gradient of a vector ﬁeld . 107 e ˆ e ˆ ˆ 6. . . . . .2. . . . . . . .1 6. . . . . . General Orthogonal Curvilinear Coordinates . . . . . . . . . . . . . . . . . 5 Calculus of Vector and Tensor Fields 5. .2 5. . . . . .5 4. . . . . . . . . . . . . . . . . . .1 5. . . . . Worked Examples. . . Symmetry of a scalarvalued function . . . . . . . . . . . Inverse transformation. . . . . . . .3 6. . . . . . . . . . 109 . . . . . . . . . . . . . . . . .6 An example in twodimensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Worked Examples. . . . .2 6. 106 Components of ∂ˆi /∂ xj in the local basis (ˆ1 . . . . . . . . . .4 Coordinate transformation. . . . . . . . . . .1 6. . . . . . . . . . . . . . . . . . . . Groups of Linear Transformations. . . . Worked Examples. . . . . . . . . . . . . . . . . . . . . .2 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Orthogonal Curvilinear Coordinates 6. . . . . . . . . . . . . . . . 104 Inverse partial derivatives .5 3. . . . scale moduli. . . . . . . . . . . . . . . . . . . .viii 3. . .1 4. . . . . . . . e3 ) .3 Transformation of Basic Tensor Relations . . . . . . . . . . 50 52 67 68 69 72 73 74 77 85 85 87 88 89 99 99 4 Symmetry: Groups of Linear Transformations 4. . . . . . . . . . . . . . . 108 6. . . . . . . . . . . . . . . . Lattices. . . .6 CONTENTS Cartesian Tensors . . . . . . . . . . 102 6. . . . . . . . . . . . .2. . . . .2 Gradient of a scalar ﬁeld . An example in threedimensions. .2. . . . . . 102 Metric coeﬃcients. . . . . . . . . . . . . .4 4. . . . . . . . . . . . .3. . . . . . Localization . . . . . . . . . . Integral theorems .2. e2 . . . . . . . . .
.3 The basic problem. 113 123 7 Calculus of Variations 7.4. . . . . .3 7. . .3. . . . . .7 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Generalizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 Diﬀerential elements of area . . . . 130 An example.CONTENTS 6. . . . .5. . . . . . . . . . . . . . .5. . . . . . . .5 6. . . . . . . . . . . . .4 Generalization: Free endpoint.6 6. .1 7. . . . . 112 Examples of Orthogonal Curvilinear Coordinates . .3. . . . . . . . . . . . . . . .3. . . . . . . . .1 7. . . . . 110 Divergence of a symmetric 2tensor ﬁeld . . . . 147 7. . . . . . . 130 7. . . . 154 . . .3. . . . . . . . . . . . . . .2 7.1 7. . . 126 A necessary condition for an extremum . . . . Euler equation. . . 111 Diﬀerential elements of volume . . . . . . . . . . . . . . .5 ix Divergence of a vector ﬁeld . . . . . . . . . . .4.4 6. . . . . . . . . . . . . . . . . . . . . . . . 151 7. . . . . . . 137 Generalization: Higher derivatives.4 Introduction. . . 136 7. . . . . . . 113 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . 128 Application of necessary condition δF = 0 . . . . . . .5. . . . .4 6. .2 Integral constraints. . . . . . . . . . . . . . . . .6. . . . . . . . .2 7. . . . 123 Brief review of calculus. . . . . . . . . . . . . . . . . . .2 7. . The Brachistochrone Problem. . . . . . . . . . .5. . . . . . . .6 Constrained Minimization . . . . . . . . . . 142 Generalization: End point of extremal lying on a curve. . . . . 151 Algebraic constraints . . . . . . . . . . . .6. .3. . . . . . . . .8 6. . . . . . . . . . . . . . .1 7. . . . . . . 132 A Formalism for Deriving the Euler Equation . . . . . . . . . . . . . . . . . . . . . . Natural boundary conditions. . . . . . . . . . . . . . . . . . . .3. . . . . . . 140 Generalization: Multiple functions. . 137 7. . . . . . . . . .3 6. . . 110 Laplacian of a scalar ﬁeld .3 7. . . . 110 Curl of a vector ﬁeld . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . .
. . . . . .11. . . . . . . . . . . . . . . . . . . . . . .11 Direct method . . . . . . .1 Piecewise smooth minimizer with nonsmoothness occuring at a prescribed location. . . . . . . . . . .12 Worked Examples. . . .7. . . .1 The Ritz method . .6. . . . . . . . . . . . . . . . . . . . 158 Piecewise smooth minimizer with nonsmoothness occuring at an unknown location . . . . . . . 183 7. . . . . . . . . . . . . . . . . . . . 187 . . . . . . . . . . . . . . . . . . . .3 7. . . . . . . . . . . . . . . 157 7. . . . . . . 178 7.x 7. Second variation . . . . . . . . . . . .2 7. 155 WeirstrassErdman corner conditions . .7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 Suﬃcient condition for convex functionals . . . . . . . . . . . . . . . . . . . .7 CONTENTS Diﬀerential constraints . . . . . . . . 162 . . . 166 7. . . . . . . . . 185 7. . . . . . . . . . . . . . . . . . . . . .8 7. . . . . . . . . . . . . . . . . . 180 7. . . . . .9 Generalization to higher dimensional space. . . . . . .
Chapter 1 Matrix Algebra and Indicial Notation Notation: {a} . i.. columnj of the matrix [A] 1.... A1n A2n . .. xm (1... for our purposes it is suﬃcient to consider a matrix to be a rectangular array of real numbers that obeys certain rules of addition and multiplication...... Amn .. . m × 1 matrix..... [A] .. a column matrix with m rows and one column element in rowi of the column matrix {a} m × n matrix element in rowi..2) . A m × n matrix [A] has m rows and n columns: [A] = A11 A21 ... (1.1) Aij denotes the element located in the ith row and jth column. Am2 .... Am1 A12 A22 ...e... .... ai .1 Matrix algebra Even though more general matrices can be considered. Aij .. The column matrix {x} = 1 x1 x2 .
. It is worth noting that if two matrices [A] and [B] obey the equation [A][B] = [0] this does not necessarily mean that either [A] or [B] has to be the null matrix [0]. . j = 1. (1. [B] and [C] obey [A][B] = [A][C] this does not necessarily mean that [B] = [C] (even if [A] = [0]. n. j = 1.5) If [A] is a p × q matrix and [B] is a q × r matrix. 2. one writes [B] = α[A]. their product is the p × r matrix [C] with elements q Cij = k=1 Aik Bkj . 2. In general [A][B] = [B][A]. Then we can postmultiply [A] by {x} to get the m × 1 column matrix [A]{x}. .6) one writes [C] = [A][B]. 2. . . .8) i = 1. . m. 2. . .) The product by a scalar α of a m × n matrix [A] is the m × n matrix [B] with components Bij = αAij . Two m×n matrices [A] and [B] are said to be equal if and only if all of their corresponding elements are equal: Aij = Bij . (1. The transpose of the m × n matrix [A] is the n × m matrix [B] where Bij = Aji for each i = 1. 2. . . and j = 1. . Note that a m1 × n1 matrix [A1 ] can be postmultiplied by a m2 × n2 matrix [A2 ] if and only if n1 = m2 . i = 1. q. {x}[A] does not exist is general. . . The row matrix {y} = {y1 . or [B] premultiplied by [A]. i. . If all the elements of a matrix are zero it is said to be a null matrix and is denoted by [0] or {0} as the case may be. consider a m × n matrix [A] and a n × 1 (column) matrix {x}. . In particular. yn } (1. . 2. i = 1. n. m. Similarly if three matrices [A]. their sum is the m × n matrix [C] denoted by [C] = [A] + [B] whose elements are Cij = Aij + Bij . . m. .7) . but we cannot premultiply [A] by {x} (unless m=1). i = 1. . . j = 1. .3) has one row and n columns. . p. n. m. . . j = 1.4) If [A] and [B] are both m × n matrices. . . . . therefore rather than referring to [A][B] as the product of [A] and [B] we should more precisely refer to [A][B] as [A] postmultiplied by [B]. . . . (1.e. 2. MATRIX ALGEBRA AND INDICIAL NOTATION has m rows and one column. 2. . (1. 2. . . y2 .2 CHAPTER 1. n. 2. (1. . . . .
the diagonal elements of this matrix are the Aii ’s. One can verify that [A + B]T = [A]T + [B]T . n. j = 1. In the special case of a diagonal matrix [A] {x}T [A]{x} = A11 x2 + A22 x2 + . Then we can premultiply [A] by {x}T . Suppose that [A] is a m × n matrix and that {x} is a m × 1 (column) matrix.10) A n × n matrix [A] is called a square matrix. 2. . j = 1. {x}T [A] exists (and is a 1 × n row matrix). i = j. .e. .11) (1. (1. Suppose that [A] is a n × n square matrix and that {x} is a n × 1 (column) matrix. n. If every diagonal element of a diagonal matrix is 1 the matrix is called a unit matrix and is usually denoted by [I]. . n. i. and vice versa. + x2 = 1 2 n x2 . For any n × 1 column matrix {x} note that n {x}T {x} = {x}{x}T = x2 + x2 . + Ann x2 . and premultiply the resulting matrix by {x}T to get a 1 × 1 square matrix.1. (1. i. A square matrix [A] is said to be symmetrical if Aij = Aji skewsymmetrical if Aij = −Aji for each i.e. [AB]T = [B]T [A]T .13) This is referred to as the quadratic form associated with [A]. . MATRIX ALGEBRA Usually one denotes the matrix [B] by [A]T . . i i=1 (1. Observe that each diagonal element of a skewsymmetric matrix must be zero. . . . . . eﬀectively just a scalar. Note that n n {x}T [A]{x} = Aij xi xj . 3 (1.1.9) The transpose of a column matrix is a row matrix. the matrix is said to be diagonal.12) Thus for a symmetric matrix [A] we have [A]T = [A]. {x}T [A]{x}. for a skewsymmetric matrix [A] we have [A]T = −[A]. . Then we can postmultiply [A] by {x} to get a n × 1 column matrix [A]{x}. (1. . Aij = 0 for each i. 2. j = 1. 2.14) 1 1 n The trace of a square matrix is the sum of the diagonal elements of that matrix and is denoted by trace[A]: n trace[A] = i=1 Aii . i=1 j=1 for each i.15) . (1. If the oﬀdiagonal elements of a square matrix are all zero.
the rows of [A] are said to be linearly independent. Consider a square matrix [A].16) (1. αn {a}n = {0} (1. For each i = 1. then consider the determinant of that second matrix. a row matrix {a}i can be created by assembling the elements in the ith row of [A]: {a}i = {Ai1 . and then at least one row of [A] can be expressed as a linear combination of the other rows. = αn = 0. The number thus obtained is called the cofactor of Aij . . . . For [A] to be nonsingular it is necessary and suﬃcient that det[A] = 0. Ai3 . Then for a 2 × 2 matrix det A11 A12 A21 A22 = A11 A22 − A12 A21 . .4 One can show that CHAPTER 1.19) Note that trace[A] and det[A] are both scalarvalued functions of the matrix [A]. . Ain }. If the only scalars αi for which α1 {a}1 + α2 {a}2 + α3 {a}3 + . Consider a n×n square matrix [A]. usually denoted by [B] = [A]−1 and called the inverse of [A]. Consider a square matrix [A] and suppose that its rows are linearly independent. (1. . Ai2 . the matrix is singular and an inverse matrix does not exist. . Let det[A] denote the determinant of a square matrix. then Bij = cofactor of Aji det[A] (1. are α1 = α2 = . Then the matrix is said to be nonsingular and there exists a matrix [B]. If [B] is the inverse of [A]. they are said to be linearly dependent. MATRIX ALGEBRA AND INDICIAL NOTATION trace([A][B]) = trace([B][A]).18) The determinant of a n × n matrix is deﬁned recursively in a similar manner. . First consider the (n−1)×(n−1) matrix obtained by eliminating the ith row and jth column of [A].20) and for a 3 × 3 matrix A11 A12 A13 det A21 A22 A23 = A11 det A31 A32 A33 A22 A23 A32 A33 − A12 det A21 A23 A31 A33 + A13 det A21 A22 A31 A32 . If the rows of [A] are linearly dependent. n. 2. . One can show that det([A][B]) = (det[A]) (det[B]). (1. . for which [B][A] = [A][B] = [I].21) . If at least one of the α’s is nonzero. . [B] = [A]−1 .17) (1. . and ﬁnally consider the product of that determinant with (−1)i+j .
. Let Aij denote the element of [A] in its ith row and j th column. . 1.26) holds for each value of the subscript i in the range i = 1. n”.24) .23) . +A1n xn = b1 .22) then the matrix is said to be orthogonal. .. this is equivalent to the system of linear algebraic equations A11 x1 +A12 x2 + . ... A21 x1 +A22 x2 + . INDICIAL NOTATION If the transpose and inverse of a matrix coincide. .. 5 (1.. (1. if [A]−1 = [A]T . .. .. we shall always use the range convention unless explicitly stated otherwise. . (1. A1n A b 2 21 A22 . . and let xi and bi denote the elements in the ith row of {x} and {b} respectively.. .. . . .. and simply writing Ai1 x1 + Ai2 x2 + .. . 2. .. n. . . Note that for an orthogonal matrix [A].. An1 An2 ... . From here on... one has [A][A]T = [A]T [A] = [I] and that det[A] = ±1.. +. The subscript i is called a free subscript because it is free to take on each value in its range. . . . . . Ann xn bn or even more compactly by omitting the statement “with i taking each value in the range 1.e. = . . This understanding is referred to as the range convention. +Ann xn = bn . n. 2.26) with the understanding that (1. A2n x2 (1.. +.. . = . +A2n xn = b2 .2 Indicial notation This system of equations can be written more compactly as Ai1 x1 + Ai2 x2 + . .25) Consider a n × n square matrix [A] and two n × 1 column matrices {x} and {b}. . +.. . .2. i. 2. + Ain xn = bi (1. . .1. Now consider the matrix equation [A]{x} = {b}: b1 x1 A11 A12 . Ain xn = bi Carrying out the matrix multiplication. .. with i taking each value in the range 1. An1 x1 +An2 x2 + .
.. . . (1. this is because j is a free subscript in (1. Then. n.. + Ajn xn = bj (1. . ∂x1 ∂f = 3x2 . . Thus (1. n. . . . . xn . . then it represents 3N scalar equations.. . . ∂f = 3xn . In order to be consistent it is important that the same free subscript(s) must appear once.24)..31) . . suppose that f (x1 . 2. MATRIX ALGEBRA AND INDICIAL NOTATION Aj1 x1 + Aj2 x2 + . . .30) corresponds to the nine equations A11 = x1 x1 .6 Observe that CHAPTER 1. . 2. x2 . 2. in equation (1. n” and this leads back to (1. . . . .30) has two free subscripts p and q.26). A22 = x2 x2 . . A21 = x2 x1 . Ain xn and bi of that equation. independently. . . ... it must necessarily appear once in each of the remaining symbol groups Ai2 x2 . the equation Apq = xp xq (1. ..27) is required to hold “for all j = 1. if an equation involves N free indices. This illustrates the fact that the particular choice of index for the free subscript in an equation is not important provided that the same free subscript appears in every symbol grouping. takes all values in the range 1.. Therefore (1. . .. . An2 = xn x2 . ∂x2 .. . − and = signs. (1.27) and so (1. . since the index i appears once in the symbol group Ai1 x1 . ∂xn (1. Ai3 x3 .29) As a third example. In general. ..27) is identical to (1. .. For example.26). and only once. . in every group of symbols in an equation. . Ann = xn xn . . A1n = x1 xn . x2 . An1 = xn x1 . if we write the equation ∂f = 3xk . Similarly since the free subscripts p and q appear in the symbol group on the lefthand 1 By a “symbol group” we mean a set of terms contained between +. A12 = x1 x2 . . = .28) is a compact way of writing the n equations ∂f = 3x1 .28) ∂xk the index k in it is a free subscript and so takes all values in the range 1. . .. and each.1 As a second example. xn ) is a function of x1 . A2n = x2 xn .
Thus we shall never write. equation (1. equation (1.11) deﬁning a symmetric matrix as simply Aij = Aji . no subscript is allowed to appear more than twice in any symbol grouping. this is because k is a dummy subscript in (1.34) is identical to (1. for example. A subscript that appears twice in a symbol grouping is called a repeated or dummy subscript. Aii xi = bi since. Note ﬁnally that had we adopted the range convention in Section 1. the subscript j in (1. we would have omitted the various “i=1. . it must also appear in the symbol group on the righthand side.8) for the transpose of a matrix as simply Bij = Aji . In order to avoid ambiguity. j=1 (1. Thus the particular choice of index for the dummy subscript is not important. equation (1. for example. and equation (1.5) for the sum of two matrices as simply Cij = Aij + Bij .32) as Aij xj = bi (1.1. SUMMATION CONVENTION 7 side of equation (1.34).33). Note that Aik xk = bi (1. . .33) is a dummy subscript.12) deﬁning a skewsymmetric matrix as simply Aij = −Aji ..7) for the scalar multiple of a matrix as Bij = αAij . we would write (1.1.32) We can simplify the notation even further by agreeing to drop the summation sign and instead imposing the rule that summation is implied over a subscript that appears twice in a symbol grouping. An equation of the form Apq = xi xj would violate this consistency requirement as would Ai1 xi + Aj2 x2 = 0.34) and therefore summation on k in implied in (1.30).3.4) for the equality of two matrices as simply Aij = Bij .n” statements there and written. the index i would appear 3 times in the ﬁrst symbol group. if we did.26) can be written as n Aij xj = bi . With this understanding in force.33) with summation on the subscript j being implied. observe that (1.3 Summation convention Next. . 1. equation (1. equation (1.2.
(1. we would have written equation (1. If it appears twice. Lowercase latin subscripts take on values in the range (1. + xn A1n = b1 . it is called a free index and it takes on each value in its range.15) for the trace as trace [A] = Aii . Thus.33) equivalently as xj Aij = bi . .6) for the product of two matrices as Cij = Aik Bkj . it is the location where the repeated subscript appears that tells us whether {x} multiplies [A] or [A] multiplies {x}. in particular. . Thus.1. equation (1.13) for the quadratic form as {x}T [A]{x} = Aij xi xj . n). it is called a dummy index and summation is implied over it. and equation (1. for example. The three preceding equations are identical. involves scalar quantities. say s. We can also change the repeated subscript q to some other index. the second equation does not correspond to {x}[A] = {b}. we can change the free subscript p in every term of the equation Apq xq = bp to any other index. for example (1. 4.35) . . and therefore.36) (1. If it appears once. It is important to emphasize that each of the equations in. Note ﬁnally that had we adopted the range and summation conventions in Section 1. . Note that both Aij xj = bi and xj Aij = bi represent the matrix equation [A]{x} = {b}.24). Free and dummy indices may be changed without altering the meaning of an expression provided that one does not violate the preceding rules. 2. equation (1. (1. the order in which the terms appear within a symbol group is irrelevant. All symbol groupings in an equation must have the same free subscripts. . for example. MATRIX ALGEBRA AND INDICIAL NOTATION 1. and write Aks xs = bk . 2. In an indicial equation it is the location of the subscripts that is crucial.10) for the product of a matrix by its transpose as {x}T {x} = xi xi . The same index may not appear more than twice in the same symbol grouping.24)1 is equivalent to x1 A11 + x2 A12 + . and equivalently write Akq xq = bk . 3.8 Summary of Rules: CHAPTER 1. A given index may appear either once or twice in a symbol grouping. Likewise we can write (1.37) (1. say k. .
suppose that [A] is a square matrix and one wishes to simplify Ajk δ j .z . δij . (1.40) Thus we have used the facts that (i) since δij is zero unless i = j. Since [I]{u} = {u} the result follows at once. If [Q] is an orthogonal matrix.41) More generally. the result follows. Similarly.1.39) The following useful property of the Kronecker delta is sometimes called the substitution rule. Thus replacing the Kronecker delta by unity. any column matrix {u} and suppose that one wishes to simplify the expression ui δij .. it follows that all terms on the righthand side vanish trivially except for the one term for which i = j..z δij = Tjpq. that Qik Qjk = Qki Qkj = δij . Since [I][A] = [A]. note that δji ui (which is equal to the quantity δij ui that is given) is simply the jth element of the column matrix [I]{u}. we replace the Kronecker delta by unity and change the repeated subscript j → to obtain2 Ajk δ j = A k ..4. δ j Ajk is simply the . Recall that ui δij = u1 δ1j + u2 δ2j + . one simply replaces the Kronecker delta by unity and changes the repeated subscript i → j to obtain Tipq. then we know that [Q][Q]T = [Q]T [Q] = [I]... is deﬁned by δij = 1 if i = j.z . gives ui δij = uj . (1. in indicial notation. Thus the term that survives on the righthand side is uj and so ui δij = uj . if δip multiplies a quantity Cij k representing n4 numbers. Consider. 2 . (1. for example. kelement of the matrix [I][A]. This implies. . Similarly in the second example. one replaces the Kronecker delta by unity and changes the repeated subscript i → p to obtain Cij k δip = Cpj k . 0 if i = j. .43) Observe that these results are immediately apparent by using matrix algebra.38) Note that it represents the elements of the identity matrix. + un δnj .4 Kronecker delta The Kronecker Delta. δij is unity.42) The substitution rule applies even more generally: for any quantity or expression Tipq. Then by the same reasoning. and changing the repeated subscript i → j. (1. (ii) and when i = j. (1. In the ﬁrst example. KRONECKER DELTA 9 1. (1.. the expression being simpliﬁed has a nonzero value only if i = j. Since δij is zero unless i = j.
(2. eijk (1.48) The following relation involving the alternator and the Kronecker delta will be useful in subsequent calculations eijk epqk = δip δjq − δiq δjp . are in cyclic order. 3). are in anticyclic order.44) Observe from its deﬁnition that the sign of eijk changes whenever any two adjacent subscripts are switched: eijk = −ejik = ejki . 3).5 The alternator or permutation symbol We now limit attention to subscripts that range over 1. k.45) One can show by direct calculation that the determinant of a 3 matrix [A] can be written in either of two forms det[A] = eijk A1i A2j A3k as well as in the form det[A] = 1 eijk epqr Aip Ajq Akr .46) Another useful identity involving the determinant is epqr det[A] = eijk Aip Ajq Akr . 2. k. 3.47) or det[A] = eijk Ai1 Aj2 Ak3 . (2. 1). 2). (1. (3. k) = (1. are equal. = +1 for (i. j. (1. by simply writing out all of the terms in (1. j.49). j. k) = (1.(1. −1 for (i. 2). (3.46) . directly. 1. are equal. 1). k.49) It is left to the reader to develop proofs of these identities. The alternator or permutation symbol is deﬁned by 0 if two or more subscripts i. = +1 if the subscripts i. MATRIX ALGEBRA AND INDICIAL NOTATION 1. 0 if two or more subscripts i. 6 (1. be veriﬁed . 3.10 CHAPTER 1. −1 if the subscripts i. (1. k. They can. 2. 1. j. 3 only. j. 2. of course. j. (1.
jelement of a matrix [B]T equals the j. or equivalently Dij = Akj Bik . B2j . B1j . 2. Bnj and summing. n. by deﬁnition of transposition. as noted previously. {y}. Example(1.6. the element Cij in the ith row and j th column of [C] is obtained by multiplying the elements of the ith row of [A]. Express the elements of [C].1. x2 . . ielement of the matrix T [B]: Bij = Bji and so we can write Eij = Aik Bjk . . . Ai2 . where the second expression was obtained by simply changing the order in which the terms appear in the ﬁrst expression (since. moreover summation is carried out over the repeated index k. It follows likewise that the equation [D] = [B][A] leads to Dij = Bik Akj .6 Worked Examples. .2): The n × n matrices [C]. pairwise. Thus yi = Aij xj + Bij zj . Ain of the ith row of [A] by the respective elements x1 . Ai2 . xn of {x} and summing. So. Solution: By the rules of matrix multiplication. Cij is obtained by multiplying the elements Ai1 . the element yi in the ith row of {y} is obtained by ﬁrst pairwise multiplying the elements Ai1 . . Ain by. note that i and j are both free indices here and so this represents n2 scalar equations. and this equation holds for each value of the free index i = 1. [D] and [E] are deﬁned in terms of the two n × n matrices [A] and [B] by [C] = [A][B]. then doing the same for the elements of [B] and {z}. {z} are n × 1 column matrices. . However. [E] = [A][B]T . express the matrix equation {y} = [A]{x} + [B]{z} as a set of scalar equations. . Thus Cij = Aik Bkj . we ﬁrst multiply [A] by [B]T to obtain Eij = Aik Bkj . Example(1. . . the i. . [D] = [B][A]. where summation over the dummy index j is implied. [D] and [E] in terms of the elements of [A] and [B]. . by the respective elements of the j th column of [B] and summing. yi = Aip xp + Biq zq . . the order of terms within a symbol group is insigniﬁcant since T these are scalar quantities. Observe that all rules for indicial notation are satisﬁed by each of the three equations above. and ﬁnally adding the two together. yk = Akp xp + Bkp zp .1): If [A] and [B] are n × n square matrices and {x}. respectively. . . . Solution: By the rules of matrix multiplication. . WORKED EXAMPLES.) In order to calculate Eij . . . Note that one can alternatively – and equivalently – write the above equation in any of the following forms: yk = Akj xj + Bkj zj . 11 1. .
take Sij = ui uj where {u} is an arbitrary column matrix.4): Show that any matrix [A] can be additively decomposed into the sum of a symmetric matrix and a skewsymmetric matrix. By adding the two equations in above one obtains Sij = Sij + Wij = Aij . . from p → j and q → i which gives Spq Wpq = Sji Wji . MATRIX ALGEBRA AND INDICIAL NOTATION All four expressions here involve the ik. and we can change it to another index. Also. Wji = −Wij . Remark: As a special case. we get Sij Wij = Spq Wpq . It follows that for any skewsymmetric [W ]. Wij = (Aij − Aji ).3): If [S] is any symmetric matrix and [W ] is any skewsymmetric matrix. we can change i → p and get Sij Wij = Spj Wpj . Next. Note that we can change i to any other index except j. Thus. Therefore Sji Wji = −Sij Wij . Thus. By changing the dummy indices i → p and j → q.12 CHAPTER 1. if we did change it to j. On combining. Eﬀectively. kj or jk elements of [A] and [B]. provided that we change both repeated subscripts to the new symbol (and as long as we do not have any subscript appearing more than twice). Whenever there is a dummy subscript. for example. these we get Sij Wij = Sji Wji . Example(1. show that Sij Wij = 0. Solution: Note that both i and j are dummy subscripts here. We can now change dummy indices again. 2 2 It may be readily veriﬁed from these deﬁnitions that Sij = Sji and that Wij = −Wij . Wij ui uj = 0 for all ui . there is no free subscript so this is just a single scalar equation. therefore there are summations over each of them. since [S] is symmetric Sji = Sij . note that this [S] is symmetric. we have changed both i and j simultaneously from i → j and j → i. the choice of the particular index for that dummy subscript is arbitrary. Using this in the righthand side of the preceding equation gives Sij Wij = −Sij Wij from which it follows that Sij Wij = 0. the matrix [S] is symmetric and [W ] is skewsymmetric. Solution: Deﬁne matrices [S] and [W ] in terms of the given the matrix [A] as follows: 1 1 (Aij + Aji ). and since [W ] is skewsymmetric. The precise locations of the subscripts vary and the meaning of the terms depend crucially on these locations. It is worth repeating that the location of the repeated subscript k tells us what term multiplies what term. Example(1. then there would be four j’s and that violates one of our rules. since i is a dummy subscript in Sij Wij .
Tij ui uj = Sij ui uj for all ui where Sij = 1 (Tij + Tji ). . Let [E] be an arbitrary symmetric matrix and deﬁne the elements of a matrix [A] by Aij = Dijk Ek . WORKED EXAMPLES. k. one has Aij Bij = 0. Show that [A] is unchanged if Dijk is replaced by its “symmetric part” Cijk where Cijk = 1 (Dijk + Dij k ).8): Given an orthogonal matrix [Q]. 13 Example (1. take all values in the range 1. we have δij δik δjk = δjk δjk = δkk = δ11 + δ22 + . ﬁrst on the repeated index i and then on the repeated index j.1. . . show that for any matrix [T ]. Solution: By using the substitution rule. Example(1. is skew symmetric Example (1. . 2 (i) Solution: In a manner entirely analogous to the previous example. D1122 . j. n. . 2. . that Bij = ui uj is symmetric.5): Show that the quadratic form Tij ui uj is unchanged if Tij is replaced by its symmetric part. + δnn = n. or in matrix form. D1112 . i.6. . D112n . and let Dijk denote a generic element of this set where each of the subscripts i. use indicial notation to solve the matrix equation [Q]{x} = {a} for {x}.7): Evaluate the expression δij δik δjk . . . Dnnnn are n4 constants. . D111n . while Ek is symmetric in the subscripts k. D1121 . . Aij = Dijk Ek = = = 1 1 1 1 Dijk + Dijk + Dij k − Dij k Ek 2 2 2 2 1 1 (Dijk + Dij k ) Ek + (Dijk − Dij k ) Ek 2 2 Cijk Ek . . and that for any symmetric matrix [A] and any skewsymmetric matrix [B]. . . .6): Suppose that D1111 . = where in the last step we have used the facts that Aij = Tij − Tji is skewsymmetric. . [A] = [S] + [W ]. 2 (i) Solution: The result follows from the following calculation: Tij ui uj = 1 1 1 1 Tij + Tij + Tji − Tji ui uj 2 2 2 2 1 1 (Tij + Tji ) ui uj + (Tij − Tji ) ui uj 2 2 = Sij ui uj . k where in the last step we have used the fact that (Dijk − Dij k ) Ek = 0 since Dijk − Dij in the subscripts k. . Example (1.e. . .
Example(1. In order to get around this diﬃculty we make use of the fact that the speciﬁc choice of the index in a dummy subscript is not signiﬁcant and so we can write f = Apq xp xq . Second. [Q]−1 = [Q]T . ∂xi = ∂xj 1 if i = j. of course. note that because of the summation on the indices i and j. . . Multiplying both sides by Qik gives Qik Qij xj = Qik ai . reduces further to xk = Qik ai . xn ) = Aij xi xj where the Aij ’s are constants.14 CHAPTER 1. MATRIX ALGEBRA AND INDICIAL NOTATION Solution: In indicial form. Since [Q] is orthogonal. i. . it follows that 0 if i = j. we know from (1. by the substitution rule.39) that Qrp Qrq = δpq . Calculate the partial derivatives ∂f /∂xi . . simpliﬁes to ∂f = Aiq xq + Api xp = Aij xj + Aji xj = (Aij + Aji )xj . have written down immediately from the fact that {x} = [Q]−1 {a}. ∂xi = δij .9): Consider the function f (x1 . Diﬀerentiating f and using the fact that [A] is constant gives ∂ ∂f ∂ ∂xp ∂xq = (Apq xp xq ) = Apq (xp xq ) = Apq xq + xp ∂xi ∂xi ∂xi ∂xi ∂xi Since the xi ’s are independent variables. First. observe that if we diﬀerentiatiate f with respect to xi and write ∂f /∂xi = ∂(Aij xi xj )/∂xi . x2 . . and for an orthogonal matrix. the equation [Q]{x} = {a} reads Qij xj = ai . it is incorrect to conclude that ∂f /∂xi = Aij xj by viewing this in the same way as diﬀerentiating the function A12 x1 x2 with respect to x1 . Solution: We begin by making two general observations. by the substitution rule. In matrix notation this reads {x} = [Q]T {a} which we could. which. Thus the preceding equation simpliﬁes to δjk xj = Qik ai .e. we would violate our rules because the righthand side has the subscript i appearing three times in one symbol grouping. ∂xj Using this above gives ∂f = Apq [δpi xq + xp δqi ] = Apq δpi xq + Apq xp δqi ∂xi which. ∂xi .
2 . it follows that 1 if p = i and q = j. Since this equation holds for all xi . it may be diﬀerentiated again with respect to xi to obtain (Akj + Ajk ) ∂xj = (Akj + Ajk ) δji = Aki + Aik = 0. ∂xk ∂xk ∂xk ∂xk (i) where we have used the fact that ∂xi /∂xj = δij in the last step. .Enn ) = 1 2 Cijkl Eij Ekl . Therefore it is necessary and suﬃcient that [A] be skewsymmetric. Calculate ∂W ∂Eij and ∂2W .. What does this imply about [A]? Solution: We know from a previous example that that if [A] is a skewsymmetric and [S] is symmetric then Aij Sij = 0.. Therefore. WORKED EXAMPLES.1. Example (1. Deﬁne the function W ([E]) for all matrices [E] by W ([E]) = W (E11 . Now we show that this is also a necessary condition. ∂xi (iii) Thus [A] must necessarily be a skew symmetric matrix..11): Let Cijkl be a set of n4 constants. ∂Epq = ∂Eij 0 otherwise.. E12 . Thus a suﬃcient condition for the given equation to hold is that [A] be skewsymmetric. we may diﬀerentiate both sides with respect to xk and proceed as follows: 0= ∂ ∂xi ∂xj ∂ (Aij xi xj ) = Aij (xi xj ) = Aij xj + Aij xi = Aij δik xj + Aij xi δjk . and as a special case of this that Aij xi xj = 0 for all {x}. E12 . On using the substitution rule. We are given that Aij xi xj = 0 for all xi .6.. since the Eij ’s are independent variables. 15 Example (1..10): Suppose that {x}T [A]{x} = 0 for all column matrices {x} where the square matrix [A] is independent of {x}. (ii) Since this also holds for all xi . ∂Epq = δpi δqj .E33 ) with respect to Eij gives ∂W ∂ = ∂Eij ∂Eij 1 Cpqrs Epq Ers 2 = 1 Cpqrs 2 ∂Epq ∂Ers Ers + Epq ∂Eij ∂Eij = = = 1 Cpqrs (δpi δqj Ers + δri δsj Epq ) 2 1 1 Cijrs Ers + Cpqij Epq 2 2 1 (Cijpq + Cpqij ) Epq . ∂Eij ∂Ekl (i) Solution: First. ∂Eij (ii) Keeping this in mind and diﬀerentiating W (E11 . . this simpliﬁes to Akj xj + Aik xi = (Akj + Ajk ) xj = 0.
and ﬁnally using the substitution rule. 2.49). then using the identity (1. Introduction to Matrix Analysis. Solution: First.) 2 Diﬀerentiating this once more with respect to Ekl gives ∂2W ∂ = ∂Eij ∂Ekl ∂Ek 1 (Cijpq + Cpqij ) Epq 2 = 1 (Cijpq + Cpqij ) δpk δql 2 1 (Cijkl + Cklij ) 2 (iii) = (iv) Example (1. W. MATRIX ALGEBRA AND INDICIAL NOTATION where we have made use of the substitution rule. Example(1. Pick and ﬁx the free subscript i at any value i = 1. suppose that [S] is symmetric. Since eijk = −eikj this is a skewsymmetric matrix.R. In a previous example we showed that Sij Wij = 0 for any symmetric matrix [S] and any skewsymmetric matrix [W ]. Solution: By ﬁrst using the skew symmetry property (1.49) leads to eipq eijk Sjk = (δpj δqk − δpk δqj )Sjk = Spq − Sqp = 0 where in the last step we have used the substitutin rule. k element of a 3 × 3 matrix. Thus Spq = Sqp and so [S] is symmetric.A. Conversely suppose that (i) holds for some matrix [S].16 CHAPTER 1. Then. Elementary Matrices.45). McGrawHill. 2.12): Evaluate the expression eijk ekij . R. 3. we have eijk ekij = −eijk eikj = −(δjk δkj −δjj δkk ) = −(δjj −δjj δkk ) = −(3−3×3) = 6. R. Bellman.13): Show that eijk Sjk = 0 if and only if the matrix [S] is symmetric. Collar. Multiplying (i) by eipq and using the identity (1. 1960. (ii) (i) References 1. Cambridge University Press. 1965.J. (Note that in the ﬁrst step we wrote W = 1 Cpqrs Epq Ers 2 1 rather than W = 2 Cijkl Eij Ekl because we would violate our rules for indices had we written ∂( 1 Cijkl Eij Ekl )/∂Eij . we can think of eijk as the j. Frazer. Duncan and A. . Consequently (i) must hold. Remark: Note as a special case of this result that eijk vj vk = 0 for any arbitrary column matrix {v}.
C.. The following notation will be consistently used: Greek letters will denote real numbers.. . γ. and A... linear transformation As mentioned in the Preface. Thus. for example. will denote scalars (real numbers). c. it is not meant to be a source for learning the subject of linear algebra for the ﬁrst time.. and (b) to linear transformations that carry vectors from one vector space into the same vector space. and uppercase boldface Latin letters will denote linear transformations. 17 . lowercase boldface Latin letters will denote vectors. vector A .. b. scalar a .. These notes are designed to review those aspects of linear algebra that will be encountered in our study of continuum mechanics. a. α. β. will denote vectors.. will denote linear transformations. “o” will denote the null vector while “0” will denote the null linear transformation...Chapter 2 Vectors and Linear Transformations Notation: α .. B.. . Linear Algebra is a far richer subject than the very restricted glimpse provided here might suggest. The discussion in these notes is limited almost entirely to (a) real 3dimensional Euclidean vector spaces.... In particular....
From hereon we shall restrict attention to 3dimensional Euclidean vector spaces and denote such a space by E3 . xk be k vectors in V. y in V a scalar. A scalarproduct has certain properties which we do not list here except to note that it is required that x·y =y·x for all x. The operation of addition (has certain properties which we do not list here) and associates with each pair of vectors x and y in V. In particular. . for every x. x2 .3) A Euclidean vector space is a vector space together with an inner product on that space. which we denote by x · y. . VECTORS AND LINEAR TRANSFORMATIONS 2. Let U be a subset of a vector space V. ξ3 are called the components of x in the basis {e1 .1) are the numbers α1 = α2 = . together with two operations. Given any vector x ∈ V there exist a unique set of numbers ξ1 . . αk = 0. e3 } is said to be a basis for V. . from hereon we restrict attention to 3dimensional vector spaces. If V is a vector space. called vectors. we say that U is a subspace (or linear manifold) of V if. ξ2 . If V contains n linearly independent vectors but does not contain n + 1 linearly independent vectors. it is assumed that there is a unique vector o ∈ V called the null vector such that x + o = x. αk for which α1 x1 + α2 x2 · · · + αk xk = o (2.1 Vectors A vector space V is a collection of elements. ξ3 such that x = ξ1 e1 + ξ2 e2 + ξ3 e3 . the vectors x + y and αx are also in U. we say that the dimension of V is n. e2 . α2 . e3 }. . Thus a linear manifold U of V is itself a vector space under the same operations of addition and multiplication by a scalar as in V. A scalarproduct (or inner product or dot product) on V is a function which assigns to each pair of vectors x. . Unless stated otherwise. . . . (2. e2 . y ∈ V. These vectors are said to be linearly independent if the only real numbers α1 . y ∈ U and every real number α. ξ2 .18 CHAPTER 2. a vector denoted by x + y that is also in V. (2. The operation of scalar multiplication (has certain properties which we do not list here) and associates with each vector x ∈ V and each real number α. Let x1 . . any set of three linearly independent vectors {e1 . addition and multiplication by a scalar. another vector in V denoted by αx.2) the numbers ξ1 .
to note that if we are given two vectors x and y where x · y = 0 and y = o. It is obvious. e3 } is said to be righthanded if (e1 × e2 ) · e3 > 0. (2. y ∈ E3 . (2.9) for all x. it follows that n = (x × y)/(x × y). (2. 0 if i = j.7) A vectorproduct (or crossproduct) on E3 is a function which assigns to each ordered pair of vectors x. VECTORS 19 by The length (or magnitude or norm) of a vector x is the scalar denoted by x and deﬁned x = (x · x)1/2 . Since n is parallel to x × y.5) Two vectors x and y are orthogonal if x · y = 0. then x must be the null vector. e2 . j = 1. y ∈ V. A basis {e1 . The angle θ between two vectors x and y is deﬁned by cos θ = x·y . A unit vector is a vector of unit length. (2.8) where θ is the angle between x and y as deﬁned by (2. For such a basis.4) A vector has zero length if and only if it is the null vector.10) .2. An orthonormal basis is a triplet of mutually orthogonal unit vectors e1 . on the other hand if x · y = 0 for every vector y. (2. xy 0 ≤ θ ≤ π. (2. which we denote by x × y. this does not necessarily imply that x = o.5). ei · ej = δij for i. (2. The vectorproduct must have certain properties (which we do not list here) except to note that it is required that y × x = −x × y One can show that x × y = x y sin θ n.1. e2 . 2. and since it has unit length. The magnitude x × y of the crossproduct can be interpreted geometrically as the area of the triangle formed by the vectors x and y. 3.6) where the Kronecker delta δij is deﬁned in the usual way by δij = 1 if i = j. e3 ∈ E3 . a vector. nevertheless helpful. and n is a unit vector in the direction x × y which therefore is normal to the plane deﬁned by x and y.
(2. e1 . e2 . e3 is an orthonormal basis. x3 ) are called the coordinates of p in the (coordinate) frame F = {o. e3 . 2. Consider a threedimensional Euclidean vector space E3 .2 Linear Transformations.1 Euclidean point space A Euclidean point space P whose elements are called points. e3 .. y ∈ E3 .20 CHAPTER 2. say pq. (iii) given an arbitrary point p ∈ P and an arbitrary vector x ∈ E3 . and it is the image of x under the transformation F. q. The triplet (x1 .11) F is said to be a linear transformation if it is such that F(αx + βy) = αF(x) + βF(y) (2. When F is a linear transformation. For .1.12) for all scalars α. we usually omit the parenthesis and write Fx instead of F(x). e1 . Let F be a function (or transformation) which assigns to each vector x ∈ E3 . x2 . Pick and ﬁx an arbitrary point o ∈ P (which we call the origin of P) and an arbitrary basis for E3 of unit vectors e1 . Note that Fx is a vector. VECTORS AND LINEAR TRANSFORMATIONS 2. y ∈ E3 . e2 . Let P be the plane normal to the unit vector n. e2 . e3 } comprised of the origin o and the basis vectors e1 . e2 . q) is uniquely associated → with a vector in E3 . the coordinate frame {o. β and all vectors x. see Figure 2. If e1 . e3 } is called a rectangular cartesian coordinate frame. is related to a Euclidean vector space E3 in the following manner. there is a unique point → q ∈ P such that x =pq. A geometric example of a linear transformation is the “projection operator” Π which projects vectors onto a given plane P. for all p. y = F(x). x ∈ E3 .1. e2 . Here x is called the position of point q relative to the point p. such that (i) pq = − qp (ii) pq + qr=pr → → → → → for all p. Every order pair of points (p. r ∈ P. A linear transformation is deﬁned by the way it operates on vectors in E3 . q ∈ P. Corresponding to any point p ∈ P there is a unique vector → op= x = x1 e1 + x2 e2 + x3 e3 ∈ E3 . a second vector y ∈ E3 .
x3 } is a basis for E3 .13) Linear transformations tell us how vectors are mapped into other vectors.15) (AB)x = A(Bx) (αA)x = α(Ax) for all x ∈ E3 . LINEAR TRANSFORMATIONS.14) Let A and B be linear transformations on E3 and let α be a scalar. This follows from the fact that {x1 . In particular. consequently the image Fx of any vector x is given by Fx = ξ1 y1 + ξ2 y2 + ξ3 y3 which is a rule for assigning a unique vector Fx to any given vector x. (2. The identity linear transformation I takes every vector x into itself. x3 } are any three linearly independent vectors in E3 .16) (2. y2 = Fx2 . AB = BA.1: The projection Πx of a vector x onto the plane P. y3 = Fx3 .2.2.18) . It can be veriﬁed geometrically that P is deﬁned by Πx = x − (x · n)n for all x ∈ E3 . AB the product. Then there is a unique linear transformation F that maps {x1 . A + B is called the sum of A and B. x2 . P n x 21 Πx Figure 2. and αA is the scalar multiple of A by α. x2 . The linear transformations A + B. y2 . (2. (2. y3 }: y1 = Fx1 . x2 . (2. Therefore any arbitrary vector x can be expressed uniquely in the form x = ξ1 x1 + ξ2 x2 + ξ3 x3 . (2. y3 } are any three vectors in E3 and that {x1 .17) respectively. Ix = x for all x ∈ E3 . The null linear transformation 0 is the linear transformation that takes every vector x into the null vector o. x3 } into {y1 . suppose that {y1 . AB and αA are deﬁned as those linear transformations which are such that (A + B)x = Ax + Bx for all x ∈ E3 . any vector x ∈ E3 . for all x ∈ E3 . In general. Thus 0x = o. Πx ∈ P is the vector obtained by projecting x onto P. y2 .
it may be shown that Wx · x = 0 for all x ∈ E3 .. (AB)T = BT AT . corresponding to any nonsingular linear transformation .21) Every linear transformation A can be represented as the sum of a symmetric linear transformation S and a skewsymmetric linear transformation W as follows: 1 1 A = S + W where S = (A + AT ). Consequently.22) (2. if the only vector x for which Ax = o is the zero vector.19) AT is called the transpose of A. y ∈ E3 .22 CHAPTER 2. It follows from this that if A is nonsingular then Ax = Ay whenever x = y. VECTORS AND LINEAR TRANSFORMATIONS The range of a linear transformation A (i. a nonsingular transformation A is a onetoone transformation in the sense that. (A + B)T = AT + BT . The set of all vectors x for which Ax = o is also a subspace of E3 . 2 2 For every skewsymmetric linear transformation W. for any given y ∈ E3 .24) (2. (2. it is known as the null space of A. there exists a vector w (called the axial vector of W) which has the property that Wx = w × x for all x ∈ E3 . the collection of all vectors Ax as x takes all values in E3 ) is a subspace of E3 . one can show that there is a unique linear transformation usually denoted by AT such that Ax · y = x · AT y for all x. (2. Thus.e. W = (A − AT ). Given any linear transformation A. (2. One can show that (αA)T = αAT . there is one and only one vector x ∈ E3 for which Ax = y. then we say that A is nonsingular. skewsymmetric if A = −AT .23) moreover. (2.20) A linear transformation A is said to be symmetric if A = AT . The dimension of this particular subspace is known as the rank of A.25) Given a linear transformation A. (2.
If the only vector x for which Ax = o is the zero vector, then we say that A is nonsingular. It follows from this that if A is nonsingular then Ax ≠ Ay whenever x ≠ y. Thus a nonsingular transformation A is a one-to-one transformation in the sense that, for any given y ∈ E3, there is one and only one vector x ∈ E3 for which Ax = y. Consequently, corresponding to any nonsingular linear transformation A, there exists a second linear transformation, denoted by A⁻¹ and called the inverse of A, such that Ax = y if and only if x = A⁻¹y, or equivalently, such that

    AA⁻¹ = A⁻¹A = I.    (2.26)

If {x1, x2, x3} and {y1, y2, y3} are two sets of linearly independent vectors in E3, then there is a unique nonsingular linear transformation F that maps {x1, x2, x3} into {y1, y2, y3}: y1 = Fx1, y2 = Fx2, y3 = Fx3. The inverse of F maps {y1, y2, y3} into {x1, x2, x3}. If both bases {x1, x2, x3} and {y1, y2, y3} are right-handed (or both are left-handed) we say that the linear transformation F preserves the orientation of the vector space.

If two linear transformations A and B are both nonsingular, then so is AB; moreover,

    (AB)⁻¹ = B⁻¹A⁻¹.    (2.29)

If A is nonsingular then so is Aᵀ; moreover, (Aᵀ)⁻¹ = (A⁻¹)ᵀ, and so there is no ambiguity in writing this linear transformation as A⁻ᵀ.    (2.30)

A linear transformation Q is said to be orthogonal if it preserves length, i.e., if

    |Qx| = |x|    for all x ∈ E3.    (2.31)

If Q is orthogonal, it follows that it also preserves the inner product:

    Qx · Qy = x · y    for all x, y ∈ E3.    (2.32)

Thus an orthogonal linear transformation preserves both the length of a vector and the angle between two vectors. If Q is orthogonal, it is necessarily nonsingular and

    Q⁻¹ = Qᵀ.    (2.33)

A linear transformation A is said to be positive definite if Ax · x > 0 for all x ∈ E3, x ≠ o, and positive-semi-definite if Ax · x ≥ 0 for all x ∈ E3.
A positive definite linear transformation is necessarily nonsingular. Moreover, A is positive definite if and only if its symmetric part (1/2)(A + Aᵀ) is positive definite.

Let A be a linear transformation. A subspace U is known as an invariant subspace of A if Av ∈ U for all v ∈ U. Suppose that there exists an associated one-dimensional invariant subspace U. Since U is one-dimensional, if v ∈ U then any other vector in U can be expressed in the form λv for some scalar λ. Since U is an invariant subspace we know in addition that Av ∈ U whenever v ∈ U. Combining these two facts shows that Av = λv for all v ∈ U. A vector v and a scalar λ such that

    Av = λv    (2.34)

are known, respectively, as an eigenvector and an eigenvalue of A. Each eigenvector of A characterizes a one-dimensional invariant subspace of A. Every linear transformation A (on a 3-dimensional vector space E3) has at least one eigenvalue.

If e and λ are an eigenvector and eigenvalue of a linear transformation A, then for any positive integer n, it is easily seen that e and λⁿ are an eigenvector and an eigenvalue of Aⁿ, where Aⁿ = AA...A (n times); this continues to be true for negative integers −m, m > 0, provided A is nonsingular and if by A⁻ᵐ we mean (A⁻¹)ᵐ.

Every eigenvalue of a positive definite linear transformation must be positive, and no eigenvalue of a nonsingular linear transformation can be zero. It can be shown that a symmetric linear transformation A has three real eigenvalues λ1, λ2, λ3 and a corresponding set of three mutually orthogonal eigenvectors e1, e2, e3. The particular basis of E3 comprised of {e1, e2, e3} is said to be a principal basis of A. A symmetric linear transformation is positive definite if and only if all three of its eigenvalues are positive.

Finally, according to the polar decomposition theorem, given any nonsingular linear transformation F, there exist unique symmetric positive definite linear transformations U and V and a unique orthogonal linear transformation R such that

    F = RU = VR.    (2.35)

If λ and r are an eigenvalue and eigenvector of U, then it can be readily shown that λ and Rr are an eigenvalue and eigenvector of V.

Given two vectors a, b ∈ E3, their tensor-product is the linear transformation usually denoted by a ⊗ b, which is such that

    (a ⊗ b)x = (x · b)a    for all x ∈ E3.    (2.36)
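In components the tensor-product is simply the outer product of the component columns, which makes (2.36) easy to check numerically. The sketch below is an editorial addition, not part of the original notes; numpy is assumed.

    import numpy as np

    a = np.array([1.0, 2.0, 0.0])
    b = np.array([0.0, 1.0, 3.0])
    T = np.outer(a, b)                            # components of a (x) b

    x = np.array([0.5, -1.0, 2.0])
    assert np.allclose(T @ x, np.dot(x, b) * a)   # (a (x) b) x = (x . b) a
    assert np.linalg.matrix_rank(T) == 1          # the rank of a (x) b is unity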
Observe that for any x ∈ E3, the vector (a ⊗ b)x is parallel to the vector a. Thus the range of the linear transformation a ⊗ b is the one-dimensional subspace of E3 consisting of all vectors parallel to a; the rank of a ⊗ b is thus unity. For any vectors a, b, c and d it is easily shown that

    (a ⊗ b)ᵀ = b ⊗ a,    (a ⊗ b)(c ⊗ d) = (b · c)(a ⊗ d),    (2.37)
    A(a ⊗ b) = (Aa) ⊗ b,    (a ⊗ b)A = a ⊗ (Aᵀb).    (2.38)

Let {e1, e2, e3} be an orthonormal basis. Since this is a basis, any vector in E3, and therefore in particular each of the vectors Ae1, Ae2, Ae3, can be expressed as a unique linear combination of the basis vectors e1, e2, e3. It follows that there exist unique real numbers Aij such that

    Aej = Σᵢ Aij ei,    j = 1, 2, 3,    (2.39)

where Aij is the ith component of the vector Aej (here and below, sums such as Σᵢ range over 1, 2, 3); equivalently, Aij = ei · (Aej). The linear transformation A can now be represented as

    A = Σᵢ Σⱼ Aij (ei ⊗ ej).    (2.40)

One refers to the Aij's as the components of the linear transformation A in the basis {e1, e2, e3}. Note that

    Σᵢ ei ⊗ ei = I,    Σᵢ (Aei) ⊗ ei = A.    (2.41)

Let S be a symmetric linear transformation with eigenvalues λ1, λ2, λ3 and corresponding (mutually orthogonal unit) eigenvectors e1, e2, e3. Since Sej = λj ej for each j = 1, 2, 3, it follows from (2.39) that the components of S in the principal basis {e1, e2, e3} are S11 = λ1, S22 = λ2, S33 = λ3, and S12 = S21 = S13 = S31 = S23 = S32 = 0. It then follows from the general representation (2.40) that S admits the representation

    S = Σᵢ λi (ei ⊗ ei).    (2.42)
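The component representation (2.40) can be verified directly, here with the standard basis of R³ standing in for {e1, e2, e3}. This numpy sketch is an addition to these notes, not part of the original.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    e = np.eye(3)                    # e[i] plays the role of the basis vector e_i

    # A_ij = e_i . (A e_j), and A = sum_ij A_ij (e_i (x) e_j)
    Aij = np.array([[e[i] @ (A @ e[j]) for j in range(3)] for i in range(3)])
    A_rebuilt = sum(Aij[i, j] * np.outer(e[i], e[j]) for i in range(3) for j in range(3))
    assert np.allclose(A, A_rebuilt)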
The representation (2.42) is called the spectral representation of a symmetric linear transformation. It can be readily shown that, for any positive integer n,

    Sⁿ = Σᵢ λiⁿ (ei ⊗ ei);    (2.43)

if S is symmetric and nonsingular, then

    S⁻¹ = Σᵢ (1/λi)(ei ⊗ ei).    (2.44)

If S is symmetric and positive definite, there is a unique symmetric positive definite linear transformation T such that T² = S. We call T the positive definite square root of S and denote it by T = √S. It is readily seen that

    √S = Σᵢ √λi (ei ⊗ ei).    (2.45)

2.3 Worked Examples.

Example 2.1: Given three vectors a, b, c, show that

    a · (b × c) = b · (c × a) = c · (a × b).

Solution: By the properties of the vector-product, the vector (a + b) is normal to the vector (a + b) × c. Thus (a + b) · [(a + b) × c] = 0. On expanding this out one obtains

    a · (a × c) + a · (b × c) + b · (a × c) + b · (b × c) = 0.

Since a is normal to (a × c), and b is normal to (b × c), the first and last terms in this equation vanish. Next, recall that a × c = −c × a. Thus the preceding equation simplifies to a · (b × c) = b · (c × a). This establishes the first part of the result; the second part is shown analogously.

Example 2.2: Show that a necessary and sufficient condition for three vectors a, b, c in E3 – none of which is the null vector – to be linearly dependent is that a · (b × c) = 0.
Solution: To show necessity, suppose that the three vectors a, b, c are linearly dependent. It follows that

    αa + βb + γc = o

for some real numbers α, β, γ, at least one of which is non-zero. Taking the vector-product of this equation with c and then taking the scalar-product of the result with a leads to βa · (b × c) = 0. Analogous calculations with the other pairs of vectors, keeping in mind that a · (b × c) = b · (c × a) = c · (a × b), lead to

    αa · (b × c) = 0,    βa · (b × c) = 0,    γa · (b × c) = 0.

Since at least one of α, β, γ is nonzero, it follows that necessarily a · (b × c) = 0.

To show sufficiency, let a · (b × c) = 0 and assume that a, b, c are linearly independent; we will show that this leads to a contradiction, whence a, b, c must be linearly dependent. By the properties of the vector-product, the vector b × c is normal to the plane defined by the vectors b and c. By assumption, a · (b × c) = 0, and this implies that a is normal to b × c. Since we are in E3 this means that a must lie in the plane defined by b and c, so the three vectors cannot be linearly independent, which is the desired contradiction.

Example 2.3: Interpret the quantity a · (b × c) geometrically in terms of the volume of the tetrahedron defined by the vectors a, b, c.

Solution: Consider the tetrahedron formed by the three vectors a, b, c as depicted in Figure 2.2. Its volume is V0 = (1/3) A0 h0, where A0 is the area of its base and h0 is its height. Consider the triangle defined by the vectors a and b to be the base of the tetrahedron. Its area A0 can be written as (1/2) base × height = (1/2)|a|(|b| sin θ), where θ is the angle between a and b. However, from the property (2.9) of the vector-product we have |a × b| = |a||b| sin θ, and so A0 = |a × b|/2. Next, n = (a × b)/|a × b| is a unit vector that is normal to the base of the tetrahedron, and so the height of the tetrahedron is h0 = c · n. Therefore

    V0 = (1/3) A0 h0 = (1/3) (|a × b|/2)(c · n) = (1/6)(a × b) · c.    (i)

Figure 2.2: Volume V0 = (1/3) A0 h0 of the tetrahedron defined by the vectors a, b, c, with A0 = |a × b|/2, h0 = c · n and n = (a × b)/|a × b|.

Observe that this provides a geometric explanation for why the vectors a, b, c are linearly dependent if and only if (a × b) · c = 0.
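A quick numerical check of Example 2.1 and of the volume formula (i) (an added sketch, not part of the original notes; numpy assumed):

    import numpy as np

    rng = np.random.default_rng(3)
    a, b, c = rng.standard_normal((3, 3))

    t1 = np.dot(a, np.cross(b, c))
    t2 = np.dot(b, np.cross(c, a))
    t3 = np.dot(c, np.cross(a, b))
    assert np.isclose(t1, t2) and np.isclose(t2, t3)   # cyclic invariance of the triple product

    V0 = t1 / 6.0    # signed volume of the tetrahedron defined by a, b, c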
Example 2.4: Let φ(x) be a scalar-valued function defined on the vector space E3. If φ is linear, i.e., if φ(αx + βy) = αφ(x) + βφ(y) for all scalars α, β and all vectors x, y, show that φ(x) = c · x for some constant vector c.

Solution: Let {e1, e2, e3} be any orthonormal basis for E3. Then an arbitrary vector x can be written in terms of its components as x = x1 e1 + x2 e2 + x3 e3. Therefore φ(x) = φ(x1 e1 + x2 e2 + x3 e3), which because of the linearity of φ leads to φ(x) = x1 φ(e1) + x2 φ(e2) + x3 φ(e3). On setting ci = φ(ei), i = 1, 2, 3, we find

    φ(x) = x1 c1 + x2 c2 + x3 c3 = c · x,    where c = c1 e1 + c2 e2 + c3 e3.

Remark: This shows that the scalar-product is the most general scalar-valued linear function of a vector.

Example 2.5: If two linear transformations A and B have the property that Ax · y = Bx · y for all vectors x and y, show that A = B.

Solution: Since (Ax − Bx) · y = 0 for all vectors y, we may choose y = Ax − Bx in this, leading to |Ax − Bx|² = 0. Since the only vector of zero length is the null vector, this implies that Ax = Bx for all vectors x, and so A = B.

Example 2.6: Let n be a unit vector, and let P be the plane through o normal to n. Let Π and R be the transformations which, respectively, project and reflect a vector in the plane P.
a. Show that Π and R are linear transformations; Π is called the "projection linear transformation" while R is known as the "reflection linear transformation".
b. Show that R(Rx) = x for all x ∈ E3.
c. What is the inverse of R?
d. Verify that a reflection linear transformation R is nonsingular while a projection linear transformation Π is singular.
e. Verify that a projection linear transformation Π is symmetric and that a reflection linear transformation R is orthogonal.
f. Show that the projection and reflection linear transformations can be represented as Π = I − n ⊗ n and R = I − 2(n ⊗ n), respectively.
Solution: Figure 2.3 shows a sketch of the plane P, its unit normal vector n, a generic vector x, its projection Πx and its reflection Rx. By geometry we see that

    Πx = x − (x · n)n,    Rx = x − 2(x · n)n.    (i)

These define the images Πx and Rx of a generic vector x under the transformations Π and R.

a. One can readily verify that Π and R satisfy the requirement (2.12) of a linear transformation.

b. Applying the definition (i)2 of R to the vector Rx gives R(Rx) = (Rx) − 2((Rx) · n)n. Replacing Rx on the right-hand side of this equation by (i)2, and expanding the resulting expression, shows that the right-hand side simplifies to x. Thus R(Rx) = x.

c. To find the inverse of R, recall from part (b) that R(Rx) = x. Operating on both sides of this equation by R⁻¹ gives Rx = R⁻¹x. Since this holds for all vectors x, it follows that R⁻¹ = R.

d. Applying the definition (i)1 of Π to the vector n gives Πn = n − (n · n)n = n − n = o. Therefore Πn = o, and (since n ≠ o) we see that o is not the only vector that is mapped to the null vector by Π. The transformation Π is therefore singular. Next consider the transformation R, and consider a vector x that is mapped by it to the null vector, i.e., Rx = o. Using (i)2, x = 2(x · n)n. Taking the scalar-product of this equation with the unit vector n yields x · n = 2(x · n), from which we conclude that x · n = 0. Substituting this into the right-hand side of the preceding equation leads to x = o. Therefore Rx = o if and only if x = o, and so R is nonsingular.

Figure 2.3: The projection Πx and reflection Rx of a vector x on the plane P.
e. To show that Π is symmetric we simply use its definition (i)1 to calculate Πx · y and x · Πy for arbitrary vectors x and y. This yields

    Πx · y = (x − (x · n)n) · y = x · y − (x · n)(y · n),
    x · Πy = x · (y − (y · n)n) = x · y − (x · n)(y · n).

Thus Πx · y = x · Πy, and so Π is symmetric. To show that R is orthogonal we must show that RRᵀ = I, or equivalently Rᵀ = R⁻¹. We begin by calculating Rᵀ. Recall from the definition (2.19) that the transpose satisfies the requirement x · Rᵀy = Rx · y. Using the definition (i)2 of R on the right-hand side of this equation yields

    x · Rᵀy = x · y − 2(x · n)(y · n).

We can rearrange the right-hand side of this equation so it reads x · Rᵀy = x · (y − 2(y · n)n). Since this holds for all x, it follows that Rᵀy = y − 2(y · n)n. Comparing this with (i)2 shows that Rᵀ = R. In part (c) we showed that R⁻¹ = R, and so it now follows that Rᵀ = R⁻¹. Thus R is orthogonal.

f. Applying the operation (I − n ⊗ n) to an arbitrary vector x gives

    (I − n ⊗ n)x = x − (n ⊗ n)x = x − (x · n)n = Πx,

and so Π = I − n ⊗ n. Similarly, (I − 2n ⊗ n)x = x − 2(x · n)n = Rx, and so R = I − 2(n ⊗ n).
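The representations in part (f) make the properties established above very easy to confirm in components (an added numpy sketch, not part of the original notes):

    import numpy as np

    n = np.array([1.0, 2.0, 2.0])
    n /= np.linalg.norm(n)            # unit normal to the plane P

    I = np.eye(3)
    Pi = I - np.outer(n, n)           # projection:  Pi = I - n (x) n
    R  = I - 2.0 * np.outer(n, n)     # reflection:  R  = I - 2 (n (x) n)

    assert np.allclose(Pi @ Pi, Pi)           # projecting twice changes nothing
    assert np.allclose(R @ R, I)              # R(Rx) = x, so R is its own inverse
    assert np.allclose(Pi, Pi.T)              # Pi is symmetric
    assert np.allclose(R.T @ R, I)            # R is orthogonal
    assert np.isclose(np.linalg.det(Pi), 0)   # Pi is singular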
Example 2.7: If W is a skew-symmetric linear transformation, show that Wx · x = 0 for all x.

Solution: By the definition (2.19) of the transpose, we have Wx · x = x · Wᵀx, and since W = −Wᵀ for a skew-symmetric linear transformation, this can be written as Wx · x = −x · Wx. Finally, the property (2.3) of the scalar-product allows this to be written as Wx · x = −Wx · x, from which the desired result follows.

Example 2.8: Show that (AB)ᵀ = BᵀAᵀ.

Solution: First, by the definition (2.19) of the transpose,

    (AB)x · y = x · (AB)ᵀy.    (i)

Second, note that (AB)x · y = A(Bx) · y. By the definition of the transpose of A we have A(Bx) · y = Bx · Aᵀy, and by the definition of the transpose of B we have Bx · Aᵀy = x · BᵀAᵀy. Combining these three equations shows that

    (AB)x · y = x · BᵀAᵀy.    (ii)

Equating the two expressions (i) and (ii) for (AB)x · y shows that x · (AB)ᵀy = x · BᵀAᵀy for all vectors x, y, which establishes the desired result.

Example 2.9: If o is the null vector, show that Ao = o for any linear transformation A.

Solution: The null vector o has the property that when it is added to any vector, the vector remains unchanged. Therefore x + o = x, and similarly Ax + o = Ax. However, operating on the first of these equations by A shows that Ax + Ao = Ax, which when combined with the second equation yields the desired result.

Example 2.10: If A and B are nonsingular linear transformations, show that AB is also nonsingular and that (AB)⁻¹ = B⁻¹A⁻¹.

Solution: Let C = B⁻¹A⁻¹. We will show that (AB)C = C(AB) = I and therefore that C is the inverse of AB. (Since the inverse would thus have been shown to exist, necessarily AB must be nonsingular.) Observe first that

    (AB)C = (AB)B⁻¹A⁻¹ = A(BB⁻¹)A⁻¹ = AIA⁻¹ = I,

and similarly that

    C(AB) = B⁻¹A⁻¹(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = I.

Therefore (AB)C = C(AB) = I, and so C is the inverse of AB.

Example 2.11: If A is nonsingular, show that (A⁻¹)ᵀ = (Aᵀ)⁻¹.

Solution: Since (Aᵀ)⁻¹ is the inverse of Aᵀ we have (Aᵀ)⁻¹Aᵀ = I. Post-operating on both sides of this equation by (A⁻¹)ᵀ gives (Aᵀ)⁻¹Aᵀ(A⁻¹)ᵀ = (A⁻¹)ᵀ. Recall that (AB)ᵀ = BᵀAᵀ for any two linear transformations A and B. Thus the preceding equation simplifies to (Aᵀ)⁻¹(A⁻¹A)ᵀ = (A⁻¹)ᵀ. Since A⁻¹A = I, the desired result follows.

Example 2.12: Show that an orthogonal linear transformation Q preserves inner products, i.e., show that Qx · Qy = x · y for all vectors x, y.
Solution: Since (x − y) · (x − y) = x · x + y · y − 2x · y, it follows that

    x · y = (1/2)(|x|² + |y|² − |x − y|²).    (i)

Since this holds for all vectors x, y, it must also hold when x and y are replaced by Qx and Qy:

    Qx · Qy = (1/2)(|Qx|² + |Qy|² − |Qx − Qy|²).    (ii)

By definition, an orthogonal linear transformation preserves length: |Qv| = |v| for all vectors v. Thus the preceding equation simplifies to Qx · Qy = (1/2)(|x|² + |y|² − |x − y|²). Since the right-hand sides of the expressions (i) and (ii) are now the same, it follows that Qx · Qy = x · y.

Remark: Thus an orthogonal linear transformation preserves the length of any vector and the inner product between any two vectors. It follows therefore that an orthogonal linear transformation preserves the angle between a pair of vectors as well.

Example 2.13: Let Q be an orthogonal linear transformation. Show that a. Q is nonsingular, and b. Q⁻¹ = Qᵀ.

Solution: a. To show that Q is nonsingular we must show that the only vector x for which Qx = o is the null vector x = o. Suppose that Qx = o for some vector x. Taking the norm of the two sides of this equation leads to |Qx| = |o| = 0. However, an orthogonal linear transformation preserves length, and therefore |Qx| = |x|. Consequently |x| = 0. However, the only vector of zero length is the null vector, and so necessarily x = o. Thus Q is nonsingular.

b. Since Q is orthogonal it preserves the inner product: Qx · Qy = x · y for all vectors x and y. However, the property (2.19) of the transpose shows that Qx · Qy = x · QᵀQy. It follows that x · QᵀQy = x · y for all vectors x and y, and therefore that QᵀQ = I. Thus Q⁻¹ = Qᵀ.

Example 2.14: If α1 and α2 are two distinct eigenvalues of a symmetric linear transformation A, show that the corresponding eigenvectors a1 and a2 are orthogonal to each other.
Solution: Recall from the definition of the transpose that Aa1 · a2 = a1 · Aᵀa2, and, since A is symmetric, that A = Aᵀ. Thus Aa1 · a2 = a1 · Aa2. Since a1 and a2 are eigenvectors of A corresponding to the eigenvalues α1 and α2, we have Aa1 = α1 a1 and Aa2 = α2 a2. Thus the preceding equation reduces to α1 a1 · a2 = α2 a1 · a2, or equivalently

    (α1 − α2)(a1 · a2) = 0.

Since α1 ≠ α2, it follows that necessarily a1 · a2 = 0.

Example 2.15: If λ and e are an eigenvalue and eigenvector of an arbitrary linear transformation A, show that λ and P⁻¹e are an eigenvalue and eigenvector of the linear transformation P⁻¹AP. Here P is an arbitrary nonsingular linear transformation.

Solution: Since PP⁻¹ = I, it follows that Ae = APP⁻¹e. However, we are told that Ae = λe, whence APP⁻¹e = λe. Operating on both sides with P⁻¹ gives P⁻¹APP⁻¹e = λP⁻¹e, which establishes the result.

Example 2.16: If λ is an eigenvalue of an orthogonal linear transformation Q, show that |λ| = 1.

Solution: Let λ and e be an eigenvalue and corresponding eigenvector of Q. Thus Qe = λe, and so |Qe| = |λe| = |λ||e|. Since Q preserves length, |Qe| = |e|. Thus |λ| = 1.

Remark: We will show later that +1 is an eigenvalue of a "proper" orthogonal linear transformation on E3. The corresponding eigenvector is known as the axis of Q.

Example 2.17: The components of a linear transformation A in an orthonormal basis {e1, e2, e3} are the unique real numbers Aij defined by

    Aej = Σᵢ Aij ei,    j = 1, 2, 3.    (i)

Show that the linear transformation A can be represented as

    A = Σᵢ Σⱼ Aij (ei ⊗ ej).    (ii)

Solution: Consider the linear transformation given on the right-hand side of (ii) and operate it on an arbitrary vector x:

    (Σᵢ Σⱼ Aij ei ⊗ ej)x = Σᵢ Σⱼ Aij (x · ej)ei = Σᵢ Σⱼ Aij xj ei = Σⱼ xj (Σᵢ Aij ei) = Σⱼ xj Aej = A(Σⱼ xj ej) = Ax,

where we have used the facts that (p ⊗ q)r = (q · r)p and xi = x · ei.
The desired result follows since this holds for arbitrary vectors x.

Example 2.18: Let R be a "rotation transformation" that rotates vectors in E3 through an angle θ, 0 < θ < π, about an axis e (in the sense of the right-hand rule). Show that R can be represented as

    R = e ⊗ e + (e1 ⊗ e1 + e2 ⊗ e2) cos θ − (e1 ⊗ e2 − e2 ⊗ e1) sin θ,    (i)

where e1 and e2 are any two mutually orthogonal vectors such that {e1, e2, e} forms a right-handed orthonormal basis for E3.

Solution: We begin by listing what is given to us in the problem statement. First, since the transformation R simply rotates vectors, it necessarily preserves the length of a vector, and so |Rx| = |x| for all vectors x. (ii) Second, since R rotates vectors about the axis e, it leaves the axis e itself unchanged: Re = e. (iii) Next, the angle between any vector x and e must equal the angle between Rx and e: Rx · e = x · e for all vectors x. (iv) In addition, since the angle through which R rotates a vector is θ, the angle between any vector x that is not parallel to the axis e and its image Rx is θ: Rx · x = |x|² cos θ for all such vectors x. (v) And finally, since the rotation is in the sense of the right-hand rule, the vectors x, Rx and e must obey the inequality (x × Rx) · e > 0 for all vectors x that are not parallel to e. (vi)

Let {e1, e2, e} be a right-handed orthonormal basis. Since this is a basis, any vector in E3, and therefore in particular each of the vectors Re1, Re2 and Re, can be expressed as a linear combination of e1, e2 and e:

    Re1 = R11 e1 + R21 e2 + R31 e,
    Re2 = R12 e1 + R22 e2 + R32 e,
    Re  = R13 e1 + R23 e2 + R33 e,    (vii)

for some unique real numbers Rij, i, j = 1, 2, 3. First, it follows from (iii) and (vii)3 that R13 = 0, R23 = 0, R33 = 1. Second, we conclude from (iv) with the choice x = e1 that Re1 · e = 0, and similarly that Re2 · e = 0. These together with (vii) imply that R31 = R32 = 0.
Third, it follows from (v) with x = e1 and (vii)1 that R11 = cos θ; one similarly shows that R22 = cos θ. Thus R11 = R22 = cos θ. (viii) Fourth, (ii) with x = e1 gives |Re1| = 1, which in view of (viii) and R31 = 0 requires that R21 = ± sin θ; similarly we find that R12 = ± sin θ. Fifth, the inequality (vi) with the choice x = e1, together with (viii) and the fact that {e1, e2, e} forms a right-handed basis, yields R21 > 0, which, since 0 < θ < π, gives R21 = + sin θ. Similarly, the choice x = e2 yields R12 < 0, so that R12 = − sin θ. Collecting these results allows us to write (vii) as

    Re1 = cos θ e1 + sin θ e2,
    Re2 = − sin θ e1 + cos θ e2,
    Re  = e.    (ix)

Finally, recall the representation (2.40) of a linear transformation in terms of its components as defined in (2.39). Applying this to (ix) allows us to write

    R = cos θ (e1 ⊗ e1) + sin θ (e2 ⊗ e1) − sin θ (e1 ⊗ e2) + cos θ (e2 ⊗ e2) + (e ⊗ e),

which can be rearranged to give the desired result (i).

Example 2.19: If F is a nonsingular linear transformation, show that FᵀF is symmetric and positive definite.

Solution: For any linear transformations A and B we know that (AB)ᵀ = BᵀAᵀ and (Aᵀ)ᵀ = A. It therefore follows that

    (FᵀF)ᵀ = Fᵀ(Fᵀ)ᵀ = FᵀF;

this shows that FᵀF is symmetric. In order to show that FᵀF is positive definite, we consider the quadratic form FᵀFx · x. By using the property (2.19) of the transpose, we can write

    FᵀFx · x = (Fx) · (Fx) = |Fx|² ≥ 0.

Further, equality holds here if and only if Fx = o, which, since F is nonsingular, can happen only if x = o. Thus FᵀFx · x > 0 for all vectors x ≠ o, and so FᵀF is positive definite.
Example 2.20: Consider a symmetric positive definite linear transformation S. Show that it has a unique symmetric positive definite square root, i.e., show that there is a unique symmetric positive definite linear transformation T for which T² = S.

Solution: Since S is symmetric and positive definite, it has three real positive eigenvalues σ1, σ2, σ3 with corresponding eigenvectors s1, s2, s3, which may be taken to be orthonormal. Further, we know that S can be represented as

    S = Σᵢ σi (si ⊗ si).    (i)

If one defines a linear transformation T by

    T = Σᵢ √σi (si ⊗ si),    (ii)

one can readily verify that T is symmetric, positive definite and that T² = S. This establishes the existence of a symmetric positive definite square root of S.

What remains is to show uniqueness. Suppose that S has two symmetric positive definite square roots T1 and T2: S = T1² = T2². Let σ > 0 and s be an eigenvalue and corresponding eigenvector of S. Then Ss = σs, and so T1²s = σs. Thus we have

    (T1 + √σ I)(T1 − √σ I)s = o.    (iii)

If we set f = (T1 − √σ I)s, this can be written as

    T1 f = −√σ f.    (iv)

Thus either f = o or f is an eigenvector of T1 corresponding to the eigenvalue −√σ (< 0). Since T1 is positive definite, it cannot have a negative eigenvalue. Thus f = o, and so

    T1 s = √σ s.    (v)

It similarly follows that T2 s = √σ s, and therefore that T1 s = T2 s. (vi) This holds for every eigenvector s of S, i.e., T1 si = T2 si, i = 1, 2, 3. Since the triplet of eigenvectors forms a basis for the underlying vector space, this in turn implies that T1 x = T2 x for any vector x. Thus T1 = T2.
Example 2.21 (Polar Decomposition Theorem): If F is a nonsingular linear transformation, show that there exists a unique positive definite symmetric linear transformation U and a unique orthogonal linear transformation R such that F = RU.

Solution: It follows from Example 2.19 that FᵀF is symmetric and positive definite. It then follows from Example 2.20 that FᵀF has a unique symmetric positive definite square root, say U:

    U = √(FᵀF).    (i)

Since U is positive definite, it is nonsingular, and its inverse U⁻¹ exists. Define the linear transformation R through

    R = FU⁻¹.    (ii)

All we have to do is to show that R is orthogonal. But this follows from

    RᵀR = (FU⁻¹)ᵀ(FU⁻¹) = (U⁻¹)ᵀFᵀFU⁻¹ = U⁻¹U²U⁻¹ = I.    (iii)

In this calculation we have used the fact that U, and so U⁻¹, are symmetric. This establishes the proposition (except for the uniqueness, which is left as an exercise).

Example 2.22: The polar decomposition theorem states that any nonsingular linear transformation F can be represented uniquely in the forms F = RU = VR, where R is orthogonal and U and V are symmetric and positive definite. Let λi, ri, i = 1, 2, 3, be the eigenvalues and eigenvectors of U. From Example 2.15 it follows that the eigenvalues of V are the same as those of U and that the corresponding eigenvectors ℓi of V are given by ℓi = Rri. Show that

    F = Σᵢ λi (ℓi ⊗ ri),    R = Σᵢ (ℓi ⊗ ri),    V = Σᵢ λi (ℓi ⊗ ℓi).

Solution: First, U and V have the spectral decompositions

    U = Σᵢ λi (ri ⊗ ri),    V = Σᵢ λi (ℓi ⊗ ℓi).    (i)

Next, since U is nonsingular,

    U⁻¹ = Σᵢ λi⁻¹ (ri ⊗ ri),

and therefore

    R = FU⁻¹ = (Σᵢ λi ℓi ⊗ ri)(Σⱼ λj⁻¹ rj ⊗ rj) = Σᵢ Σⱼ λi λj⁻¹ (ℓi ⊗ ri)(rj ⊗ rj).

By using the property (2.37)2 and the fact that ri · rj = δij, we have (ℓi ⊗ ri)(rj ⊗ rj) = (ri · rj)(ℓi ⊗ rj) = δij (ℓi ⊗ rj). Therefore

    R = Σᵢ Σⱼ λi λj⁻¹ δij (ℓi ⊗ rj) = Σᵢ λi λi⁻¹ (ℓi ⊗ ri) = Σᵢ (ℓi ⊗ ri).

Finally, by using the property (2.38)1 and ℓi = Rri, we have

    F = RU = R(Σᵢ λi ri ⊗ ri) = Σᵢ λi (Rri) ⊗ ri = Σᵢ λi (ℓi ⊗ ri).    (ii)
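The constructions of Examples 2.20 and 2.21 translate directly into a few lines of numpy (an added sketch, not part of the original notes; any nonsingular F will do):

    import numpy as np

    def spd_sqrt(S):
        # square root of a symmetric positive definite matrix via its spectral representation
        sigma, s = np.linalg.eigh(S)
        return s @ np.diag(np.sqrt(sigma)) @ s.T

    F = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.5, 0.5],
                  [0.0, 0.0, 1.0]])
    U = spd_sqrt(F.T @ F)               # U = sqrt(F^T F), Example 2.20
    R = F @ np.linalg.inv(U)            # R = F U^{-1}, Example 2.21
    assert np.allclose(F, R @ U)               # F = RU
    assert np.allclose(R.T @ R, np.eye(3))     # R is orthogonal
    assert np.allclose(U, U.T)                 # U is symmetric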
Example 2.23: Determine the rank and the null space of the linear transformation C = a ⊗ b, where a ≠ o, b ≠ o.

Solution: Recall that the rank of any linear transformation A is, by definition, the dimension of its range. (The range of A is the particular subspace of E3 comprised of all vectors Ax as x takes all values in E3.) Since Cx = (b · x)a, the vector Cx is parallel to the vector a for every choice of the vector x. Thus the range of C is the set of vectors parallel to a, and its dimension is one. The linear transformation C therefore has rank one.

Recall that the null space of any linear transformation A is the particular subspace of E3 comprised of the set of all vectors x for which Ax = o. Since Cx = (b · x)a and a ≠ o, the null space of C consists of all vectors x for which b · x = 0, i.e., the set of all vectors normal to b.

Example 2.24: Let λ1 ≤ λ2 ≤ λ3 be the eigenvalues of the symmetric linear transformation S. Show that S can be expressed in the form S = (I + a ⊗ b)(I + b ⊗ a) if and only if 0 ≤ λ1 ≤ 1, λ2 = 1, λ3 ≥ 1.

Example 2.25: Calculate the square roots of the identity tensor, i.e., determine the tensors A on E3 such that A² = I.

Solution: The identity is certainly a symmetric positive definite tensor. By the result of a previous example on the square root of a symmetric positive definite tensor, it follows that there is a unique symmetric positive definite tensor which is the square root of I; obviously, this square root is I itself. However, there are other square roots of I that are not symmetric positive definite, and we are to explore them here. Clearly A = I and A = −I are square roots of I, so suppose now that A ≠ I and A ≠ −I.

First, since A ≠ I, there must exist at least one non-null vector for which Ax ≠ x; call this vector f1, so that Af1 ≠ f1. Set e1 = (A − I)f1; since Af1 ≠ f1, it follows that e1 ≠ o. Observe that

    (A + I)e1 = (A + I)(A − I)f1 = (A² − I)f1 = Of1 = o.

Therefore Ae1 = −e1, and so −1 is an eigenvalue of A with corresponding eigenvector e1. Without loss of generality we can assume that |e1| = 1. Second, the fact that A ≠ −I, together with A² = I, similarly implies that there must exist a unit vector e2 for which Ae2 = e2, from which we conclude that +1 is an eigenvalue of A with corresponding eigenvector e2.
Third, suppose that for some scalars ξ1, ξ2 one has ξ1 e1 + ξ2 e2 = o. Operating on this by A yields ξ1 Ae1 + ξ2 Ae2 = o, which on using Ae1 = −e1 and Ae2 = e2 leads to −ξ1 e1 + ξ2 e2 = o. Subtracting and adding the preceding two equations shows that ξ1 e1 = ξ2 e2 = o. Since e1 and e2 are eigenvectors, neither of them is the null vector o, and therefore ξ1 = ξ2 = 0. Therefore {e1, e2} is a linearly independent pair of vectors.

Fourth, let e3 be a unit vector that is perpendicular to both e1 and e2. One can show that the triplet of vectors {e1, e2, e3} is linearly independent and therefore forms a basis for E3. Let the components Aij of the tensor A in this basis be given, as usual, by Aej = Aij ei. Comparing this with Ae1 = −e1 yields A11 = −1, A21 = A31 = 0, and comparing it with Ae2 = e2 yields A22 = 1, A12 = A32 = 0. The matrix of components of A in this basis is therefore

    [A] = | −1   0   A13 |
          |  0   1   A23 |
          |  0   0   A33 |.    (v)

Fifth, since A² = I, the matrix of components of A² in any basis has to be the identity matrix. (Notation: [A²] is the matrix of components of A², while [A]² is the square of the matrix of components of A. Why is [A²] = [A]²?) It follows that

    [A²] = [A]² = |  1   0   −A13 + A13 A33 |     |  1  0  0 |
                  |  0   1    A23 + A23 A33 |  =  |  0  1  0 |.
                  |  0   0    A33²          |     |  0  0  1 |

Therefore we must have A33² = 1, −A13 + A13 A33 = 0 and A23 + A23 A33 = 0, which implies that either A33 = 1, A23 = 0, A13 arbitrary, or A33 = −1, A13 = 0, A23 arbitrary. Consequently the matrix [A] must necessarily have one of the two forms

    | −1   0   α1 |         | −1   0   0  |
    |  0   1   0  |    or   |  0   1   α2 |,    (x)
    |  0   0   1  |         |  0   0  −1  |
where α1 and α2 are arbitrary scalars.

Sixth, set p1 = e1 and q1 = −e1 + (α1/2)e3. Then p1 ⊗ q1 = −e1 ⊗ e1 + (α1/2) e1 ⊗ e3, and therefore

    I + 2 p1 ⊗ q1 = e1 ⊗ e1 + e2 ⊗ e2 + e3 ⊗ e3 − 2 e1 ⊗ e1 + α1 e1 ⊗ e3
                  = −e1 ⊗ e1 + e2 ⊗ e2 + e3 ⊗ e3 + α1 e1 ⊗ e3.    (xi)

Note from this that the components of the tensor I + 2 p1 ⊗ q1 are given by (x)1. Conversely, one can readily verify that the tensor

    A = I + 2 p1 ⊗ q1    (xii)

has the desired properties A² = I, A ≠ I, A ≠ −I for any value of the scalar α1. Alternatively, set p2 = e2 and q2 = e2 + (α2/2)e3. Then p2 ⊗ q2 = e2 ⊗ e2 + (α2/2) e2 ⊗ e3, and therefore

    −I + 2 p2 ⊗ q2 = −e1 ⊗ e1 − e2 ⊗ e2 − e3 ⊗ e3 + 2 e2 ⊗ e2 + α2 e2 ⊗ e3
                   = −e1 ⊗ e1 + e2 ⊗ e2 − e3 ⊗ e3 + α2 e2 ⊗ e3.    (xiii)

Note from this that the components of the tensor −I + 2 p2 ⊗ q2 are given by (x)2. Conversely, one can readily verify that the tensor

    A = −I + 2 p2 ⊗ q2    (xiv)

has the desired properties A² = I, A ≠ I, A ≠ −I for any value of the scalar α2. Thus the tensors defined in (xii) and (xiv) are both square roots of the identity tensor that are not symmetric positive definite.
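As a numerical footnote to Example 2.25 (an added sketch, not part of the original notes), one can confirm that I + 2 p1 ⊗ q1 squares to the identity without being symmetric:

    import numpy as np

    I = np.eye(3)
    e1, e3 = I[0], I[2]
    alpha1 = 0.7                          # any nonzero scalar works
    p1 = e1
    q1 = -e1 + 0.5 * alpha1 * e3

    A = I + 2.0 * np.outer(p1, q1)
    assert np.allclose(A @ A, I)          # A^2 = I
    assert not np.allclose(A, A.T)        # yet A is not symmetric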
References

1. I.M. Gelfand, Lectures on Linear Algebra, Wiley, New York, 1963.
2. P.R. Halmos, Finite Dimensional Vector Spaces, Van Nostrand, New Jersey, 1958.
3. J.K. Knowles, Linear Vector Spaces and Cartesian Tensors, Oxford University Press, New York, 1997.

Chapter 3

Components of Vectors and Tensors. Cartesian Tensors.

Notation:
    α                scalar
    {a}              3 × 1 column matrix
    a                vector
    ai               ith component of the vector a in some basis, or ith element of the column matrix {a}
    [A]              3 × 3 square matrix
    A                linear transformation
    Aij              i, j component of the linear transformation A in some basis, or i, j element of the square matrix [A]
    Cijkl            i, j, k, l component of the 4-tensor C in some basis
    Ti1 i2 ... in    i1 i2 ... in component of the n-tensor T in some basis

3.1 Components of a vector in a basis.

Let IE3 be a three-dimensional Euclidean vector space. A set of three linearly independent vectors {e1, e2, e3} forms a basis for IE3 in the sense that an arbitrary vector v can always be expressed as a linear combination of the three basis vectors; i.e., given any v ∈ IE3, there are unique scalars α, β, γ such that

    v = αe1 + βe2 + γe3.    (3.1)

If each basis vector ei has unit length, and if each pair of basis vectors ei, ej are mutually orthogonal, we say that {e1, e2, e3} forms an orthonormal basis for IE3. For an orthonormal basis,
    ei · ej = δij,    (3.2)

where δij is the Kronecker delta. If the basis is right-handed, one has in addition that

    ei · (ej × ek) = eijk,    (3.3)

where eijk is the alternator introduced previously in (1.44). In these notes we shall always restrict attention to orthonormal bases unless explicitly stated otherwise.

The components vi of a vector v in a basis {e1, e2, e3} are defined by

    vi = v · ei.    (3.4)

The vector can be expressed in terms of its components and the basis vectors as

    v = vi ei.    (3.5)

The components of v may be assembled into a column matrix

    {v} = | v1 |
          | v2 |
          | v3 |.    (3.6)

Figure 3.1: Components {v1, v2, v3} and {v1', v2', v3'} of the same vector v in two different bases.

Even though this is obvious from the definition (3.4), it is still important to emphasize that the components vi of a vector depend on both the vector v and the choice of basis. Suppose, for example, that we are given two bases {e1, e2, e3} and {e1', e2', e3'} as shown in
Figure 3.1. Then the vector v has one set of components vi in the first basis and a different set of components vi' in the second basis:

    vi = v · ei,    vi' = v · ei'.    (3.7)

Thus the one vector v can be expressed in either of the two equivalent forms

    v = vi ei    or    v = vi' ei'.    (3.8)

The components vi and vi' are related to each other (as we shall discuss later), but in general vi ≠ vi'.

Once a basis {e1, e2, e3} is chosen and fixed, there is a one-to-one correspondence between column matrices and vectors: there is a unique vector x associated with any given column matrix {x} such that the components of x in {e1, e2, e3} are {x}. It follows, for example, that once the basis is fixed, the vector equation z = x + y can be written equivalently as {z} = {x} + {y}, or zi = xi + yi, in terms of the components xi, yi and zi in the given basis.

If ui and vi are the components of two vectors u and v in a basis, then the scalar-product u · v can be expressed as

    u · v = ui vi,    (3.9)

and the vector-product u × v can be expressed as

    u × v = (eijk uj vk) ei,    or equivalently    (u × v)i = eijk uj vk,    (3.10)

where eijk is the alternator introduced previously in (1.44).
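The index formulas (3.9) and (3.10) can be exercised directly in numpy (an added sketch, not part of the original notes; the alternator eijk is built explicitly):

    import numpy as np

    # alternator (permutation symbol) e_ijk
    eijk = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eijk[i, j, k] = 1.0
        eijk[i, k, j] = -1.0

    u = np.array([1.0, -2.0, 0.5])
    v = np.array([0.0, 3.0, 1.0])

    assert np.isclose(np.dot(u, v), np.sum(u * v))   # u . v = u_i v_i
    assert np.allclose(np.cross(u, v),
                       np.einsum('ijk,j,k->i', eijk, u, v))   # (u x v)_i = e_ijk u_j v_k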
3.2 Components of a linear transformation in a basis.

Consider a linear transformation A. Any vector in IE3 can be expressed as a linear combination of the basis vectors e1, e2 and e3; in particular this is true of the three vectors Ae1, Ae2 and Ae3. Let Aij be the ith component of the vector Aej, so that

    Aej = Aij ei.    (3.12)

The 9 scalars Aij are known as the components of the linear transformation A in the basis {e1, e2, e3}. We can also write

    Aij = ei · (Aej).    (3.13)

The components Aij can be assembled into a square matrix:

    [A] = | A11  A12  A13 |
          | A21  A22  A23 |
          | A31  A32  A33 |.    (3.14)

The linear transformation A can be expressed in terms of its components Aij and the basis vectors ei as

    A = Σᵢ Σⱼ Aij (ei ⊗ ej).    (3.15)

The components Aij of a linear transformation depend on both the linear transformation A and the choice of basis. Suppose, for example, that we are given two bases {e1, e2, e3} and {e1', e2', e3'}. Then the linear transformation A has one set of components Aij in the first basis and a different set of components Aij' in the second basis:

    Aij = ei · (Aej),    Aij' = ei' · (Aej').    (3.16)

The components Aij and Aij' are related to each other (as we shall discuss later), but in general Aij ≠ Aij'. The components of the linear transformation A = a ⊗ b are

    Aij = ai bj.    (3.17)

Once a basis {e1, e2, e3} is chosen and fixed, there is a one-to-one correspondence between square matrices and linear transformations: there is a unique linear transformation M associated with any given square matrix [M] such that the components of M in {e1, e2, e3} are [M]. It follows, for example, that the equation y = Ax relating the linear transformation A and the vectors x and y can be written equivalently as

    {y} = [A]{x}    or    yi = Aij xj    (3.18)

in terms of the components Aij, xi and yi in the given basis. Similarly, if A, B and C are linear transformations such that C = AB, then their component matrices [A], [B] and [C] are related by

    [C] = [A][B]    or    Cij = Aik Bkj.    (3.19)
The component matrix [I] of the identity linear transformation I in any orthonormal basis is the unit matrix; since I maps any vector into itself, its components are given by the Kronecker delta δij. If [A] and [Aᵀ] are the component matrices of the linear transformations A and Aᵀ, then [Aᵀ] = [A]ᵀ and (Aᵀ)ij = Aji.

As mentioned in Section 2, a symmetric linear transformation S has three real eigenvalues λ1, λ2, λ3 and corresponding orthonormal eigenvectors e1, e2, e3. The eigenvectors are referred to as the principal directions of S, and the particular basis consisting of the eigenvectors is called a principal basis for S. The component matrix [S] of the symmetric linear transformation S in its principal basis is

    [S] = | λ1   0    0  |
          | 0    λ2   0  |
          | 0    0    λ3 |.    (3.20)

As a final remark, we note that if we are to establish certain results for vectors and linear transformations, we can, if it is more convenient to do so, pick and fix a basis and then work with the components in that basis; if necessary, we can revert back to the vectors and linear transformations at the end. For example, the first example in the previous chapter asked us to show that a · (b × c) = b · (c × a). In terms of components, the left-hand side of this reads

    a · (b × c) = ai (b × c)i = ai eijk bj ck = eijk ai bj ck.

Similarly the right-hand side reads

    b · (c × a) = bi (c × a)i = bi eijk cj ak = eijk ak bi cj.

Since i, j, k are dummy subscripts in the rightmost expression, they can be changed to any other subscripts; thus by changing k → i, i → j and j → k we can write b · (c × a) = ejki ai bj ck. Finally, recalling that the sign of eijk changes when any two adjacent subscripts are switched, we find that

    b · (c × a) = ejki ai bj ck = −ejik ai bj ck = eijk ai bj ck,

where we have first switched the k and i and then the j and i in the subscripts of the alternator. The rightmost expressions for a · (b × c) and b · (c × a) are identical, and this establishes the desired identity.

3.3 Components in two bases.

Consider a 3-dimensional Euclidean vector space together with two orthonormal bases {e1, e2, e3} and {e1', e2', e3'}. Let Qij be the jth component of the vector ei' in the basis {e1, e2, e3}:

    ei' = Qij ej.    (3.21)
By taking the dot-product of (3.21) with ek, one sees that

    Qij = ei' · ej,    (3.22)

and so Qij is the cosine of the angle between the basis vectors ei' and ej. Observe from (3.21) that Qji can also be interpreted as the jth component of ei in the basis {e1', e2', e3'}, whence we also have

    ei = Qji ej'.    (3.23)

The 9 numbers Qij can be assembled into a square matrix [Q]. This matrix relates the two bases {e1, e2, e3} and {e1', e2', e3'}. Since both bases are orthonormal, it can be readily shown that [Q] is an orthogonal matrix. If both bases are right-handed or both are left-handed, i.e., if one basis can be rotated into the other, then [Q] is a proper orthogonal matrix and det[Q] = +1. If instead the two bases are related by a reflection, which means that one basis is right-handed and the other is left-handed, then [Q] is an improper orthogonal matrix and det[Q] = −1.

We may now relate the different components of a single vector v in two bases. Let vi and vi' be the ith components of the same vector v in the two bases {e1, e2, e3} and {e1', e2', e3'}. Then one can show that

    vi' = Qij vj    or equivalently    {v'} = [Q]{v}.    (3.24)

Since [Q] is orthogonal, one also has the inverse relationships

    vi = Qji vj'    or equivalently    {v} = [Q]ᵀ{v'}.    (3.25)

In general, the component matrices {v} and {v'} of a vector v in two different bases are different. A vector whose components in every basis happen to be the same is called an isotropic vector: {v} = [Q]{v} for all orthogonal matrices [Q]. It is possible to show that the only isotropic vector is the null vector o.

Similarly, we may relate the different components of a single linear transformation A in two bases. Let Aij and Aij' be the ij-components of the same linear transformation A in the two bases {e1, e2, e3} and {e1', e2', e3'}. Then one can show that

    Aij' = Qip Qjq Apq    or equivalently    [A'] = [Q][A][Q]ᵀ.    (3.26)

Since [Q] is orthogonal, one also has the inverse relationships

    Aij = Qpi Qqj Apq'    or equivalently    [A] = [Q]ᵀ[A'][Q].    (3.27)
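The transformation rules (3.24) and (3.26) are easy to exercise numerically. In the added sketch below (not part of the original notes), the proper orthogonal matrix [Q] is chosen, purely for illustration, as a rotation of the basis about e3; its rows are the components of the primed basis vectors.

    import numpy as np

    th = 0.3
    Q = np.array([[ np.cos(th), np.sin(th), 0.0],
                  [-np.sin(th), np.cos(th), 0.0],
                  [ 0.0,        0.0,        1.0]])
    assert np.allclose(Q @ Q.T, np.eye(3)) and np.isclose(np.linalg.det(Q), 1.0)

    v  = np.array([1.0, 2.0, 3.0])
    A  = np.diag([1.0, 2.0, 3.0])
    vp = Q @ v                 # {v'} = [Q]{v}
    Ap = Q @ A @ Q.T           # [A'] = [Q][A][Q]^T

    # basis-independent quantities are unchanged by the change of basis
    assert np.isclose(np.trace(Ap), np.trace(A))
    assert np.isclose(np.linalg.det(Ap), np.linalg.det(A))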
In general, the component matrices [A] and [A'] of a linear transformation A in two different bases are different. A linear transformation whose components in every basis happen to be the same is called an isotropic linear transformation: [A] = [Q][A][Q]ᵀ for all orthogonal matrices [Q]. It is possible to show that the most general isotropic symmetric linear transformation is a scalar multiple of the identity, αI, where α is an arbitrary scalar.

3.4 Scalar-valued functions of linear transformations. Determinant, trace, scalar-product and norm.

Let φ([A]) be some real-valued function defined on the set of all square matrices. Equivalently, let A be a linear transformation and let [A] be the components of A in some basis {e1, e2, e3}. If [A'] are the components of A in some other basis {e1', e2', e3'}, then in general φ([A]) ≠ φ([A']). Certain functions φ, however, have the property that φ([A]) = φ([A']) for all pairs of bases {e1, e2, e3} and {e1', e2', e3'}; such a function depends on the linear transformation only and not on the basis, and for it we may write φ(A).

More generally, let Φ(A; e1, e2, e3) be a scalar-valued function that depends on a linear transformation A and a (not-necessarily orthonormal) basis {e1, e2, e3}. For example, Φ(A; e1, e2, e3) = Ae1 · e1; this function depends on both the linear transformation A and the underlying basis. Certain such functions are in fact independent of the basis, so that for every two (not-necessarily orthonormal) bases {e1, e2, e3} and {e1', e2', e3'} one has Φ(A; e1, e2, e3) = Φ(A; e1', e2', e3'); in such a case we can simply write Φ(A). One example of such a function is

    Φ(A; e1, e2, e3) = (Ae1 × Ae2) · Ae3 / (e1 × e2) · e3    (3.28)

(though it is certainly not obvious that this function is independent of the choice of basis). We first consider two important examples here: the determinant and the trace.

Since the components [A] and [A'] of a linear transformation A in two bases are related by [A'] = [Q][A][Q]ᵀ, if we take the determinant of this matrix equation we get

    det[A'] = det([Q][A][Q]ᵀ) = det[Q] det[A] det[Q]ᵀ = (det[Q])² det[A] = det[A],    (3.29)

since the determinant of an orthogonal matrix is ±1. Therefore without ambiguity we may define the determinant of a linear transformation A to be the (basis-independent) scalar-valued function given by

    det A = det[A].    (3.30)
We will see in an example at the end of this chapter that the particular function Φ defined in (3.28) is in fact the determinant det A. In terms of its components in a basis one has

    det A = eijk A1i A2j A3k = eijk Ai1 Aj2 Ak3;    (3.31)

see (1.46). It is useful to note the following properties of the determinant of a linear transformation:

    det(AB) = det(A) det(B),    det(αA) = α³ det(A),    det(Aᵀ) = det(A).    (3.32), (3.33)

As mentioned previously, a linear transformation A is said to be nonsingular if the only vector x for which Ax = o is the null vector x = o. One can show that A is nonsingular if and only if det A ≠ 0. If A is nonsingular, then

    det(A⁻¹) = 1/det(A).    (3.34)

Similarly, taking the trace of [A'] = [Q][A][Q]ᵀ and using the fact that the trace of a matrix product is unchanged under cyclic permutation shows that tr[A'] = tr[A]. Thus we may define the trace of a linear transformation A to be the (basis-independent) scalar-valued function given by

    trace A = tr[A].    (3.35)

In terms of components in a basis, trace A = Aii.

Suppose that λ and v ≠ o are an eigenvalue and eigenvector of a given linear transformation A. Then by definition Av = λv, or equivalently (A − λI)v = o. Since v ≠ o, it follows that A − λI must be singular, and so

    det(A − λI) = 0.    (3.36)

The eigenvalues are the roots λ of this cubic equation. The eigenvalues and eigenvectors of a linear transformation do not depend on any choice of basis; thus the eigenvalues of a linear transformation are also scalar-valued functions of A whose values depend only on A and not on the basis: λi = λi(A). If S is symmetric, its matrix of components in a principal basis is

    [S] = | λ1   0    0  |
          | 0    λ2   0  |
          | 0    0    λ3 |.    (3.37)
The particular scalar-valued functions

    I1(A) = tr A,    I2(A) = (1/2)[(tr A)² − tr(A²)],    I3(A) = det A    (3.38)

will appear frequently in what follows. It can be readily verified that for any linear transformation A and all orthogonal linear transformations Q,

    I1(QᵀAQ) = I1(A),    I2(QᵀAQ) = I2(A),    I3(QᵀAQ) = I3(A),    (3.39)

and for this reason the three functions (3.38) are said to be invariant under orthogonal transformations. Observe from (3.37) that for a symmetric linear transformation with eigenvalues λ1, λ2, λ3,

    I1(S) = λ1 + λ2 + λ3,    I2(S) = λ1 λ2 + λ2 λ3 + λ3 λ1,    I3(S) = λ1 λ2 λ3.    (3.40)

The mapping (3.40) between invariants and eigenvalues is one-to-one. In addition, one can show that for any linear transformation A and any real number α,

    det(A − αI) = −α³ + I1(A)α² − I2(A)α + I3(A).    (3.41)

Note in particular that the cubic equation for the eigenvalues of a linear transformation can be written as

    λ³ − I1(A)λ² + I2(A)λ − I3(A) = 0.    (3.42)

Finally, one can show that

    A³ − I1(A)A² + I2(A)A − I3(A)I = O,    (3.43)

which is known as the Cayley-Hamilton theorem.
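A quick numerical check of the invariants (3.38) and the Cayley-Hamilton identity (3.43) (an added sketch, not from the original notes):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((3, 3))

    I1 = np.trace(A)
    I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
    I3 = np.linalg.det(A)

    # A^3 - I1 A^2 + I2 A - I3 I = O
    CH = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * np.eye(3)
    assert np.allclose(CH, np.zeros((3, 3)))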
One can similarly define scalar-valued functions of two linear transformations A and B. The particular function φ(A, B) defined by

    φ(A, B) = tr(ABᵀ)

will play an important role in what follows; note that in terms of components in a basis, φ(A, B) = tr(ABᵀ) = Aij Bij. This particular scalar-valued function is often known as the scalar-product of the two linear transformations A and B, and is written as A · B:

    A · B = tr(ABᵀ).    (3.44)

It is natural then to define the magnitude (or norm) of a linear transformation A, denoted by |A|, as

    |A| = √(A · A) = √(tr(AAᵀ)).    (3.45)

In terms of components in a basis, |A|² = Aij Aij.    (3.46)

Observe the useful property that if |A| → 0, then each component Aij → 0. This will be used later when we linearize the theory of large deformations.

3.5 Cartesian Tensors

Consider two orthonormal bases {e1, e2, e3} and {e1', e2', e3'}. A quantity whose components vi and vi' in these two bases are related by

    vi' = Qij vj    (3.48)

is called a 1st-order Cartesian tensor, or a 1-tensor. It follows from our preceding discussion that a vector is a 1-tensor. A quantity whose components Aij and Aij' in the two bases are related by

    Aij' = Qip Qjq Apq    (3.49)

is called a 2nd-order Cartesian tensor, or a 2-tensor. It follows from our preceding discussion that a linear transformation is a 2-tensor.

The concept of an nth-order tensor can be introduced similarly: let T be a physical entity which, in a given basis {e1, e2, e3}, is defined completely by a set of 3ⁿ ordered numbers Ti1 i2 ... in. The numbers Ti1 i2 ... in are called the components of T in the basis {e1, e2, e3}. If, for example, T is a scalar, vector or linear transformation, it is represented by 3⁰, 3¹ and 3² components, respectively, in the given basis.
Let {e1', e2', e3'} be a second basis related to the first one by the orthogonal matrix [Q], and let Ti1 i2 ... in' be the components of the entity T in the second basis. Then, if for every pair of such bases these two sets of components are related by

    Ti1 i2 ... in' = Qi1 j1 Qi2 j2 ... Qin jn Tj1 j2 ... jn,    (3.50)

the entity T is called an nth-order Cartesian tensor, or more simply an n-tensor. In general, the components of a tensor T in two different bases are different: Ti1 i2 ... in' ≠ Ti1 i2 ... in. Note that the components of a tensor in an arbitrary basis can be calculated if its components in any one basis are known.

Two tensors of the same order are added by adding corresponding components. Recall that the outer-product of two vectors a and b is the 2-tensor C = a ⊗ b whose components are given by Cij = ai bj. This can be generalized to higher-order tensors: given an n-tensor A and an m-tensor B, their outer-product is the (m + n)-tensor C = A ⊗ B whose components are given by

    Ci1 i2 ... in j1 j2 ... jm = Ai1 i2 ... in Bj1 j2 ... jm.    (3.51)

Let A be a 2-tensor with components Aij in some basis. Then "contracting" A over its subscripts leads to the scalar Aii; contracting over two subscripts involves setting those two subscripts equal, and therefore summing over them. This can be generalized to higher-order tensors: let A be an n-tensor with components Ai1 i2 ... in in some basis. Then contracting A over two of its subscripts, say the ij-th and ik-th subscripts, leads to the (n − 2)-tensor whose components in this basis are Ai1 i2 ... ij−1 p ij+1 ... ik−1 p ik+1 ... in.

Let a, b and T be entities whose components in a basis are denoted by ai, bi and Tij, respectively. Suppose that the components of T in some basis are related to the components of a and b in that same basis by ai = Tij bj. If it is known that a and b are 1-tensors, then one can readily show that T is necessarily a 2-tensor. This is called the quotient rule, since it has the appearance of saying that the quotient of two 1-tensors is a 2-tensor. The rule generalizes naturally to tensors of more general order: suppose that A, B and T are entities whose components in a basis are related by

    Ai1 i2 ... in = Ti1 i2 ... in j1 j2 ... jm Bj1 j2 ... jm,    (3.52)

where some of the subscripts may be repeated. If it is known that A and B are tensors, then T is necessarily a tensor as well.
In general, the components of a tensor in two different bases are different. However, there are certain special tensors whose components in one basis are the same as those in any other basis; an example of this is the identity 2-tensor I. Such a tensor is said to be isotropic. A tensor T is an isotropic tensor if its components have the same values in all bases, i.e., if

    Ti1 i2 ... in' = Ti1 i2 ... in    in all bases {e1, e2, e3} and {e1', e2', e3'},    (3.53)

or equivalently, if

    Ti1 i2 ... in = Qi1 j1 Qi2 j2 ... Qin jn Tj1 j2 ... jn    for all orthogonal matrices [Q].    (3.54)

One can show that (a) the only isotropic 1-tensor is the null vector o; (b) the most general isotropic 2-tensor is a scalar multiple of the identity linear transformation, αI; (c) the most general isotropic 3-tensor is the null 3-tensor o; and (d) the most general isotropic 4-tensor C has components (in any basis)

    Cijkl = α δij δkl + β δik δjl + γ δil δjk,    (3.55)

where α, β, γ are arbitrary scalars.

3.6 Worked Examples.

In some of the examples below, we are asked to establish certain results for vectors and linear transformations. As noted previously, whenever it is more convenient we may pick and fix a basis and then work using components in that basis; if necessary, we can revert back to the vectors and linear transformations at the end. We shall do this frequently in what follows and will not bother to explain it each time. It is also worth pointing out that in some of the examples below, calculations involving vectors and/or linear transformations are carried out without reference to their components. One might have expected such examples to have been presented in Chapter 2; they are contained in the present chapter because they all involve either the determinant or the trace of a linear transformation, and we chose to define these quantities in terms of components (even though they are basis-independent).

Example 3.1: Suppose that A is a symmetric linear transformation. Show that its matrix of components [A] in any basis is a symmetric matrix.
Solution: By (3.13), the components of A in the basis {e1, e2, e3} are defined by Aij = ei · (Aej), so that in particular Aji = ej · (Aei). (i) The property (2.19) of the transpose shows that ej · (Aei) = (Aᵀej) · ei, which, on using the fact that A is symmetric, simplifies to ej · (Aei) = (Aej) · ei. (ii) Finally, since the order of the vectors in a scalar-product does not matter, we have (Aej) · ei = ei · (Aej). Thus Aji = ei · (Aej), and so (i) and (ii) yield Aji = Aij. Thus [A] = [A]ᵀ, and so the matrix [A] is symmetric.

Remark: Conversely, if it is known that the matrix of components [A] of a linear transformation in some basis is symmetric, then the linear transformation A is also symmetric.

Example 3.2: Consider the scalar-valued function f(A, B) = trace(ABᵀ), (i) and show that, for all linear transformations A, B, C, and for all scalars α, this function f has the following properties: i) f(A, B) = f(B, A); ii) f(αA, B) = αf(A, B); iii) f(A + C, B) = f(A, B) + f(C, B); and iv) f(A, A) > 0 provided A ≠ 0.

Solution: Let Aij and Bij be the components of A and B in an arbitrary basis. In terms of these components, (ABᵀ)ij = Aik(Bᵀ)kj = Aik Bjk, and so f(A, B) = (ABᵀ)ii = Aik Bik. (ii) It is now trivial to verify that all of the above requirements hold.

Remark: It follows from this that the function f has all of the usual requirements of a scalar-product. Therefore we may define the scalar-product of two linear transformations A and B, denoted by A · B, as A · B = trace(ABᵀ) = Aij Bij. Note that, based on this scalar-product, we can define the magnitude of a linear transformation to be |A| = √(A · A) = √(Aij Aij).

Example 3.3: Choose any convenient basis and calculate the components of the projection linear transformation Π and the reflection linear transformation R in that basis.

Solution: Let e3 be a unit vector normal to the plane P, and let e1 and e2 be any two unit vectors in P such that {e1, e2, e3} forms an orthonormal basis. From an example in the previous chapter we know that the projection transformation Π and the reflection transformation R in the plane P can be written as Π = I − e3 ⊗ e3 and R = I − 2(e3 ⊗ e3), respectively. Since the components of e3 in the chosen basis are δ3i, we find that

    Πij = δij − (e3)i(e3)j = δij − δ3i δ3j,    Rij = δij − 2δ3i δ3j.
Example 3.4: For any two vectors u and v, show that their cross-product u × v is orthogonal to both u and v.

Solution: We are to show, for example, that u · (u × v) = 0. In terms of their components we can write

    u · (u × v) = ui (u × v)i = ui (eijk uj vk) = eijk ui uj vk.    (i)

Since eijk = −ejik and ui uj = uj ui, it follows that eijk is skew-symmetric in the subscripts i, j while ui uj is symmetric in the subscripts i, j. Thus it follows from Example 1.3 that eijk ui uj = 0, and so u · (u × v) = 0. The orthogonality of v and u × v can be established similarly.

Example 3.5: Suppose that a, b, c are any three linearly independent vectors and that F is an arbitrary nonsingular linear transformation. Show that

    (Fa × Fb) · Fc = det F (a × b) · c.    (i)

Solution: First consider the left-hand side of (i). On using (3.9) and (3.10), we can express this as

    (Fa × Fb) · Fc = (Fa × Fb)i (Fc)i = eijk (Fa)j (Fb)k (Fc)i,    (ii)

and consequently

    (Fa × Fb) · Fc = eijk (Fjp ap)(Fkq bq)(Fir cr) = eijk Fir Fjp Fkq ap bq cr.    (iii)

Turning next to the right-hand side of (i), we note that

    det F (a × b) · c = det[F](a × b)i ci = det[F] eijk aj bk ci = det[F] erpq ap bq cr.    (iv)

Recalling the identity erpq det[F] = eijk Fir Fjp Fkq in (1.48) and substituting this into (iv) gives

    det F (a × b) · c = eijk Fir Fjp Fkq ap bq cr.    (v)

Since the right-hand sides of (iii) and (v) are identical, it follows that the left-hand sides must also be equal, thus establishing the desired result.
Example 3.5: Suppose that a, b and c are three noncoplanar vectors in IE3. Let V0 be the volume of the tetrahedron defined by these three vectors. Next, suppose that F is a nonsingular 2-tensor and let V denote the volume of the tetrahedron defined by the vectors Fa, Fb and Fc. Note that the second tetrahedron is the image of the first tetrahedron under the transformation F. Derive a formula for V in terms of V0 and F.

Figure 3.2: Tetrahedron of volume V0 defined by three noncoplanar vectors a, b and c, and its image under the linear transformation F.

Solution: Recall from an example in the previous chapter that the volume V0 of the tetrahedron defined by any three noncoplanar vectors a, b, c is

V0 = (1/6) (a × b) · c.

The volume V of the tetrahedron defined by the three vectors Fa, Fb, Fc is likewise

V = (1/6) (Fa × Fb) · Fc.

It follows from the result of the previous example that V / V0 = det F, which describes how volumes are mapped by the transformation F.

Example 3.6: Suppose that a and b are two noncolinear vectors in IE3. Let α0 be the area of the parallelogram defined by these two vectors and let n0 be a unit vector that is normal to the plane of this parallelogram. Next, suppose that F is a nonsingular 2-tensor and let α and n denote the area and unit normal to the parallelogram defined by the vectors Fa and Fb. Derive formulas for α and n in terms of α0, n0 and F.

Solution: By the properties of the vector-product we know that

α0 = |a × b|,   n0 = (a × b)/|a × b|,

and similarly that

α = |Fa × Fb|,   n = (Fa × Fb)/|Fa × Fb|.
Figure 3.3: Parallelogram of area α0 with unit normal n0 defined by two noncolinear vectors a and b, and its image under the linear transformation F.

Therefore

α0 n0 = a × b   (i)   and   α n = Fa × Fb.   (ii)

But

(Fa × Fb)_s = e_sij (Fa)_i (Fb)_j = e_sij F_ip a_p F_jq b_q.

Also recall the identity e_pqr det[F] = e_ijk F_ip F_jq F_kr introduced in (1.48). Multiplying both sides of this identity by F^{-1}_rs leads to

e_pqr det[F] F^{-1}_rs = e_ijk F_ip F_jq F_kr F^{-1}_rs = e_ijk F_ip F_jq δ_ks = e_ijs F_ip F_jq = e_sij F_ip F_jq.   (iii)

Substituting (iii) into (ii) gives

(Fa × Fb)_s = det[F] e_pqr F^{-1}_rs a_p b_q = det[F] e_rpq a_p b_q F^{-1}_rs = det F (a × b)_r F^{-T}_sr = ( det F F^{-T}(a × b) )_s,

and so, using (i),

α n = α0 det F (F^{-T} n0).

Taking the norm of this vector equation gives

α = α0 |det F| |F^{-T} n0|,

and substituting this result into the preceding equation gives

n = F^{-T} n0 / |F^{-T} n0|.

This describes how (vectorial) areas are mapped by the transformation F.
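Remark: the area-mapping formula αn = α0 det F (F^{-T} n0) lends itself to the same kind of numerical spot-check as before (again a Python sketch assuming numpy; F, a and b are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))
a, b = rng.standard_normal((2, 3))

area0_n0 = np.cross(a, b)                  # alpha0 * n0
area_n = np.cross(F @ a, F @ b)            # alpha * n
nanson = np.linalg.det(F) * np.linalg.inv(F).T @ area0_n0
assert np.allclose(area_n, nanson)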
Example 3.5: Let {e1, e2, e3} and {e1', e2', e3'} be two bases related by nine scalars Q_ij through e_i' = Q_ij e_j. Let Q be the linear transformation whose components in the basis {e1, e2, e3} are Q_ij. Show that e_i' = Q^T e_i; thus Q^T is the transformation that carries the first basis into the second.

Solution: Since Q_ij are the components of the linear transformation Q in the basis {e1, e2, e3}, it follows from the definition of components that

Qe_j = Q_ij e_i.

Operating on both sides of the preceding equation by Q^T and using the orthogonality of Q leads to

e_j = Q_ij Q^T e_i.

Multiplying both sides of this by Q_kj and noting by the orthogonality of Q that Q_kj Q_ij = δ_ki, we are led to

Q_kj e_j = Q^T e_k,   or equivalently   Q^T e_i = Q_ij e_j.

Since [Q] is an orthogonal matrix, one readily sees that Q is an orthogonal transformation. This, together with the given fact that e_i' = Q_ij e_j, yields the desired result.

Example 3.6: Determine the relationship between the components v_i and v_i' of a vector v in two bases.

Solution: The components v_i of v in the basis {e1, e2, e3} are defined by

v_i = v · e_i,   (i)

and its components v_i' in the second basis {e1', e2', e3'} are defined by

v_i' = v · e_i'.   (ii)

It follows from this and (NNN) that

v_i' = v · e_i' = v · (Q_ij e_j) = Q_ij v · e_j = Q_ij v_j.

Thus, the components of the vector v in the two bases are related by v_i' = Q_ij v_j.

Example 3.7: Determine the relationship between the components A_ij and A_ij' of a linear transformation A in two bases.

Solution: The components A_ij of the linear transformation A in the basis {e1, e2, e3} are defined by

A_ij = e_i · (Ae_j),   (i)

and its components A_ij' in a second basis {e1', e2', e3'} are defined by

A_ij' = e_i' · (Ae_j').   (ii)
By first making use of (NNN), and then (i), we can write (ii) as

A_ij' = e_i' · (Ae_j') = Q_ip e_p · (A Q_jq e_q) = Q_ip Q_jq e_p · (Ae_q) = Q_ip Q_jq A_pq.

Thus, the components of the linear transformation A in the two bases are related by A_ij' = Q_ip Q_jq A_pq.

Example 3.8: Suppose that the basis {e1', e2', e3'} is obtained by rotating the basis {e1, e2, e3} through an angle θ about the unit vector e3; see Figure 3.4. Write out the transformation rule for 2-tensors explicitly in this case.

Figure 3.4: A basis {e1', e2', e3'} obtained by rotating the basis {e1, e2, e3} through an angle θ about the unit vector e3.

Solution: In view of the given relationship between the two bases it follows that

e1' = cos θ e1 + sin θ e2,   e2' = − sin θ e1 + cos θ e2,   e3' = e3.

The matrix [Q] which relates the two bases is defined by Q_ij = e_i' · e_j, and so it follows that

[Q] = [ cos θ, sin θ, 0 ; − sin θ, cos θ, 0 ; 0, 0, 1 ].
Substituting this [Q] into [A'] = [Q][A][Q]^T and multiplying out the matrices leads to the nine equations

A_11' = (A_11 + A_22)/2 + ((A_11 − A_22)/2) cos 2θ + ((A_12 + A_21)/2) sin 2θ,
A_12' = (A_12 − A_21)/2 + ((A_12 + A_21)/2) cos 2θ − ((A_11 − A_22)/2) sin 2θ,
A_21' = −(A_12 − A_21)/2 + ((A_12 + A_21)/2) cos 2θ − ((A_11 − A_22)/2) sin 2θ,
A_22' = (A_11 + A_22)/2 − ((A_11 − A_22)/2) cos 2θ − ((A_12 + A_21)/2) sin 2θ,
A_13' = A_13 cos θ + A_23 sin θ,   A_23' = A_23 cos θ − A_13 sin θ,
A_31' = A_31 cos θ + A_32 sin θ,   A_32' = A_32 cos θ − A_31 sin θ,
A_33' = A_33.

In the special case when [A] is symmetric, and in addition A_13 = A_23 = 0, these nine equations simplify to

A_11' = (A_11 + A_22)/2 + ((A_11 − A_22)/2) cos 2θ + A_12 sin 2θ,
A_22' = (A_11 + A_22)/2 − ((A_11 − A_22)/2) cos 2θ − A_12 sin 2θ,
A_12' = −((A_11 − A_22)/2) sin 2θ + A_12 cos 2θ,

together with A_13' = A_23' = 0 and A_33' = A_33. These are the well-known equations underlying Mohr's circle for transforming symmetric 2-tensors in two dimensions.
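Remark: the Mohr's-circle formulas can be checked directly against the matrix product [Q][A][Q]^T. A short Python sketch (the symmetric [A] and the angle θ below are arbitrary illustrative choices):

import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
Q = np.array([[ c, s, 0.],
              [-s, c, 0.],
              [0., 0., 1.]])

# an arbitrary symmetric [A] with A13 = A23 = 0, as in the Mohr's-circle case
A = np.array([[2.0, 0.7, 0.0],
              [0.7, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

Ap = Q @ A @ Q.T   # the transformation rule A'_ij = Q_ip Q_jq A_pq

A11p = (A[0,0]+A[1,1])/2 + (A[0,0]-A[1,1])/2*np.cos(2*theta) + A[0,1]*np.sin(2*theta)
A22p = (A[0,0]+A[1,1])/2 - (A[0,0]-A[1,1])/2*np.cos(2*theta) - A[0,1]*np.sin(2*theta)
A12p = -(A[0,0]-A[1,1])/2*np.sin(2*theta) + A[0,1]*np.cos(2*theta)

assert np.allclose([Ap[0,0], Ap[1,1], Ap[0,1], Ap[2,2]],
                   [A11p, A22p, A12p, A[2,2]])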
Example 3.9:
a. Let a, b and T be entities whose components in some arbitrary basis are a_i, b_i and T_ijk. The components of T in any basis are defined in terms of the components of a and b in that basis by T_ijk = a_i b_j b_k. If a and b are vectors, show that T is a 3-tensor.
b. Suppose that A and B are 2-tensors and that their components in some basis are related by A_ij = C_ijkℓ B_kℓ. Show that the C_ijkℓ's are the components of a 4-tensor.

Solution: a. Let a_i, b_i and a_i', b_i' be the components of a and b in two arbitrary bases. We are told that the components of the entity T in these two bases are defined by

T_ijk = a_i b_j b_k,   T_ijk' = a_i' b_j' b_k'.   (iii)

Since a and b are known to be vectors, their components transform according to the 1-tensor transformation rule

a_i' = Q_ij a_j,   b_i' = Q_ij b_j.   (iv)

Combining equations (iii) and (iv) gives

T_ijk' = a_i' b_j' b_k' = Q_ip a_p Q_jq b_q Q_kr b_r = Q_ip Q_jq Q_kr a_p b_q b_r = Q_ip Q_jq Q_kr T_pqr.   (v)

Therefore the components of T in two bases transform according to T_ijk' = Q_ip Q_jq Q_kr T_pqr, and so T is a 3-tensor.

b. Let A_ij, B_ij, C_ijkℓ and A_ij', B_ij', C_ijkℓ' be the components of A, B and C in two arbitrary bases:

A_ij = C_ijkℓ B_kℓ,   A_ij' = C_ijkℓ' B_kℓ',   (vi)

and we must show that C_ijkℓ is a 4-tensor, i.e. that C_ijkℓ' = Q_ip Q_jq Q_kr Q_ℓs C_pqrs. We are told that A and B are 2-tensors, whence

A_ij' = Q_ip Q_jq A_pq,   B_ij' = Q_ip Q_jq B_pq.   (vii)

Substituting (vii) into (vi)2 gives

Q_ip Q_jq A_pq = C_ijkℓ' Q_kp Q_ℓq B_pq.   (viii)

Multiplying both sides by Q_im Q_jn and using the orthogonality of [Q], i.e. the fact that Q_ip Q_im = δ_pm, leads to

δ_pm δ_qn A_pq = C_ijkℓ' Q_im Q_jn Q_kp Q_ℓq B_pq,   (ix)

which by the substitution rule tells us that

A_mn = C_ijkℓ' Q_im Q_jn Q_kp Q_ℓq B_pq,   (x)

or, on using (vi)1 in this, that

C_mnpq B_pq = C_ijkℓ' Q_im Q_jn Q_kp Q_ℓq B_pq.   (xi)

Since this holds for all matrices [B] we must have

C_mnpq = C_ijkℓ' Q_im Q_jn Q_kp Q_ℓq.   (xii)

Finally, multiplying both sides by Q_am Q_bn Q_cp Q_dq, using the orthogonality of [Q] and the substitution rule yields the desired result

C_abcd' = Q_am Q_bn Q_cp Q_dq C_mnpq.   (xiii)

Example 3.10: Verify that the alternator e_ijk has the property that e_ijk = Q_ip Q_jq Q_kr e_pqr for all proper orthogonal matrices [Q], but that e_ijk ≠ Q_ip Q_jq Q_kr e_pqr for general orthogonal matrices [Q].
Note from this that the alternator is not an isotropic 3-tensor. (Recall from (1.48) that Q_ip Q_jq Q_kr e_pqr = det[Q] e_ijk, which equals e_ijk when det[Q] = +1 but equals −e_ijk when det[Q] = −1.)

Example 3.11: If C_ijkℓ is an isotropic 4-tensor, show that necessarily C_iikℓ = α δ_kℓ for some arbitrary scalar α.

Solution: Since C_ijkℓ is an isotropic 4-tensor, by definition,

C_ijkℓ = Q_ip Q_jq Q_kr Q_ℓs C_pqrs   for all orthogonal matrices [Q].

On setting i = j in this, then using the orthogonality of [Q], and finally using the substitution rule, we are led to

C_iikℓ = Q_ip Q_iq Q_kr Q_ℓs C_pqrs = δ_pq Q_kr Q_ℓs C_pqrs = Q_kr Q_ℓs C_pprs.

Thus C_iikℓ obeys C_iikℓ = Q_kr Q_ℓs C_pprs for all orthogonal matrices [Q], and therefore it is an isotropic 2-tensor. The desired result now follows since the most general isotropic 2-tensor is a scalar multiple of the identity.

Example 3.12: Show that the most general isotropic vector is the null vector o.

Solution: In order to show this we must determine the most general vector u which is such that

u_i = Q_ij u_j   for all orthogonal matrices [Q].   (i)

First, u = o obviously satisfies (i) for all orthogonal matrices [Q]. Conversely, since (i) is to hold for all orthogonal matrices [Q], it must necessarily hold for the special choice [Q] = −[I]. Then Q_ij = −δ_ij, and so (i) reduces to

u_i = −δ_ij u_j = −u_i;   (ii)

thus u_i = 0 and so u = o. Thus u = o is the most general isotropic vector.

Example 3.13: Show that the most general isotropic symmetric tensor is a scalar multiple of the identity.

Solution: We must find the most general symmetric 2-tensor A whose components in every basis are the same, i.e.

[A] = [Q][A][Q]^T   for all orthogonal matrices [Q].   (i)

Since A is symmetric, we know that there is some basis in which [A] is diagonal. Since A is also isotropic, it follows that [A] must therefore be diagonal in every basis. Thus [A] has the form

[A] = [ λ1, 0, 0 ; 0, λ2, 0 ; 0, 0, λ3 ]   (ii)

in any basis.
Thus (i) takes the form

[ λ1, 0, 0 ; 0, λ2, 0 ; 0, 0, λ3 ] = [Q] [ λ1, 0, 0 ; 0, λ2, 0 ; 0, 0, λ3 ] [Q^T]   (iii)

for all orthogonal matrices [Q]. Thus (iii) must necessarily hold for the special choice

[Q] = [ 0, 0, 1 ; 1, 0, 0 ; 0, 1, 0 ],   (iv)

in which case (iii) reduces to

[ λ1, 0, 0 ; 0, λ2, 0 ; 0, 0, λ3 ] = [ λ3, 0, 0 ; 0, λ1, 0 ; 0, 0, λ2 ].   (v)

Therefore λ1 = λ3 and λ1 = λ2; a permutation of this special choice of [Q] similarly shows that λ2 = λ3. Thus λ1 = λ2 = λ3 = α, say, and therefore [A] necessarily must have the form [A] = α[I]. Conversely, [A] = α[I] is readily shown to obey (i) for any orthogonal matrix [Q]. This establishes the result.

Example 3.14: If W is a skew-symmetric tensor, show that there is a vector w such that Wx = w × x for all x ∈ IE.

Solution: Let W_ij be the components of W in some basis and let w be the vector whose components in this basis are defined by

w_i = −(1/2) e_ijk W_jk.   (i)

Then, by direct substitution, we merely have to show that w has the desired property stated above. Multiplying both sides of (i) by e_ipq, then using the identity e_ijk e_ipq = δ_jp δ_kq − δ_jq δ_kp, and finally using the substitution rule gives

e_ipq w_i = −(1/2)(δ_jp δ_kq − δ_jq δ_kp) W_jk = −(1/2)(W_pq − W_qp).

Since W is skew-symmetric we have W_ij = −W_ji and thus conclude that

W_ij = −e_ijk w_k.

Now for any vector x,

W_ij x_j = −e_ijk w_k x_j = e_ikj w_k x_j = (w × x)_i.

Thus the vector w defined by (i) has the desired property Wx = w × x.
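Remark: the construction of Example 3.14 is easy to verify numerically. In the Python sketch below, the array e holds the components of the alternator (a helper built for this check, not from the text), W is an arbitrary skew-symmetric tensor, and the final assertion confirms Wx = w × x:

import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
W = (B - B.T) / 2                      # an arbitrary skew-symmetric tensor

e = np.zeros((3, 3, 3))                # the alternator e_ijk
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0

w = -0.5 * np.einsum('ijk,jk->i', e, W)   # w_i = -(1/2) e_ijk W_jk

x = rng.standard_normal(3)
assert np.allclose(W @ x, np.cross(w, x))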
Example 3.15: Verify that the 4-tensor

C_ijkℓ = α δ_ij δ_kℓ + β δ_ik δ_jℓ + γ δ_iℓ δ_jk,   (i)

where α, β, γ are scalars, is isotropic. If this isotropic 4-tensor is to possess the symmetry C_ijkℓ = C_jikℓ, show that one must have β = γ.

Solution: In order to verify that C_ijkℓ are the components of an isotropic 4-tensor we have to show that C_ijkℓ = Q_ip Q_jq Q_kr Q_ℓs C_pqrs for all orthogonal matrices [Q]. The right-hand side of this can be simplified by using the given form of C_ijkℓ, the substitution rule, and the orthogonality of [Q] as follows:

Q_ip Q_jq Q_kr Q_ℓs C_pqrs
   = Q_ip Q_jq Q_kr Q_ℓs (α δ_pq δ_rs + β δ_pr δ_qs + γ δ_ps δ_qr)
   = α Q_iq Q_jq Q_ks Q_ℓs + β Q_ir Q_js Q_kr Q_ℓs + γ Q_is Q_jr Q_kr Q_ℓs
   = α (Q_iq Q_jq)(Q_ks Q_ℓs) + β (Q_ir Q_kr)(Q_js Q_ℓs) + γ (Q_is Q_ℓs)(Q_jr Q_kr)
   = α δ_ij δ_kℓ + β δ_ik δ_jℓ + γ δ_iℓ δ_jk
   = C_ijkℓ.   (ii)

This establishes the desired result.

Turning to the second question, enforcing the requirement C_ijkℓ = C_jikℓ on (i) leads, after some simplification, to

(β − γ)(δ_ik δ_jℓ − δ_jk δ_iℓ) = 0.   (iii)

Since this must hold for all values of the free indices i, j, k, ℓ, it must necessarily hold for the special choice i = 1, j = 2, k = 1, ℓ = 2. Therefore (β − γ)(δ_11 δ_22 − δ_21 δ_12) = 0 and so β = γ.   (iv)

Remark: We have shown that β = γ is necessary if C given in (i) is to have the symmetry C_ijkℓ = C_jikℓ. One can readily verify that it is sufficient as well. It is useful for later use to record here that the most general isotropic 4-tensor C with the symmetry property C_ijkℓ = C_jikℓ is

C_ijkℓ = α δ_ij δ_kℓ + β (δ_ik δ_jℓ + δ_iℓ δ_jk),   (v)

where α and β are scalars.

Remark: Observe that C_ijkℓ given by (v) automatically has the symmetry C_ijkℓ = C_kℓij.

Example 3.16: If A is a tensor such that Ax · x = 0 for all x, show that A is necessarily skew-symmetric.

Solution: By definition of the transpose and the properties of the scalar product,

Ax · x = x · A^T x = A^T x · x.   (i)

Therefore A has the properties that

Ax · x = 0   and   A^T x · x = 0   for all vectors x.
Adding these two equations gives Sx · x = 0 where S = A + A^T. Observe that S is symmetric. Therefore, in terms of components in a principal basis of S,

Sx · x = σ1 x1² + σ2 x2² + σ3 x3² = 0,

where the σk's are the eigenvalues of S. Since this must hold for all real numbers x_k, it follows that every eigenvalue must vanish: σ1 = σ2 = σ3 = 0. Therefore S = O, whence A = −A^T.

Remark: An important consequence of this is that if A is a tensor with the property that Ax · x = 0 for all x, it does not follow that A = 0 necessarily.

Example 2.18: For any orthogonal linear transformation Q, show that det Q = ±1.

Solution: Recall that for any two linear transformations A and B we have det(AB) = det A det B, and that det B = det B^T. Since QQ^T = I it now follows that

1 = det I = det(QQ^T) = det Q det Q^T = (det Q)²,

and the desired result follows.

Example 2.20: If Q is a proper orthogonal linear transformation on the vector space IE3, show that there exists a vector v such that Qv = v. This vector is known as the axis of Q.

Solution: To show that there is a vector v such that Qv = v, i.e. that (Q − I)v = o, or equivalently that det(Q − I) = 0, it is sufficient to show that Q has an eigenvalue +1. Recall that det Q = +1 for a proper orthogonal linear transformation, that det A = det A^T, and that det(−A) = (−1)³ det A for a 3-dimensional vector space. Since QQ^T = I we have Q(Q^T − I) = I − Q. On taking the determinant of both sides and using the fact that det(AB) = det A det B, we get

det Q det(Q^T − I) = det(I − Q).   (i)

Therefore this leads to

det(Q − I) = − det(Q − I),   (ii)

so that det(Q − I) = 0, and the desired result now follows.

Example 2.22: For any linear transformation A, show that det(A − µI) = det(Q^T AQ − µI) for all orthogonal linear transformations Q and all scalars µ.

Solution: This follows readily since

det(Q^T AQ − µI) = det(Q^T AQ − µQ^T Q) = det( Q^T (A − µI) Q ) = det Q^T det(A − µI) det Q = det(A − µI).
Remark: Observe from this result that the eigenvalues of Q^T AQ coincide with those of A, so that in particular the same is true of their product and their sum: det(Q^T AQ) = det A and tr(Q^T AQ) = tr A.

Example 2.26: Define a scalar-valued function φ(A; e1, e2, e3) by

φ(A; e1, e2, e3) = [ Ae1 · (e2 × e3) + e1 · (Ae2 × e3) + e1 · (e2 × Ae3) ] / [ e1 · (e2 × e3) ]

for all linear transformations A and all (not necessarily orthonormal) bases {e1, e2, e3}. Show that φ(A; e1, e2, e3) = φ(A; e1', e2', e3') for any two bases {e1, e2, e3} and {e1', e2', e3'}, i.e. that φ(A; e1, e2, e3) is in fact independent of the choice of basis. Thus, we can simply write φ(A) instead of φ(A; e1, e2, e3); φ(A) is called a scalar invariant of A. Pick any orthonormal basis and express φ(A) in terms of the components of A in that basis, and hence show that φ(A) = trace A.

Example 3.7: Let F(t) be a one-parameter family of nonsingular 2-tensors that depends smoothly on the parameter t. Calculate (d/dt) det F(t).

Solution: From the result of Example 3.NNN we have

( F(t)a × F(t)b ) · F(t)c = det F(t) (a × b) · c.

Differentiating this with respect to t gives

( Ḟ(t)a × F(t)b ) · F(t)c + ( F(t)a × Ḟ(t)b ) · F(t)c + ( F(t)a × F(t)b ) · Ḟ(t)c = [ (d/dt) det F(t) ] (a × b) · c,

where we have set Ḟ(t) = dF/dt. We can write this as

( ḞF^{-1}Fa × Fb ) · Fc + ( Fa × ḞF^{-1}Fb ) · Fc + ( Fa × Fb ) · ḞF^{-1}Fc = [ (d/dt) det F ] (a × b) · c.

In view of the result of Example 3.NNN, this can be written as

trace(ḞF^{-1}) ( Fa × Fb ) · Fc = [ (d/dt) det F ] (a × b) · c,

and now using the result of Example 3.NNN once more we get

trace(ḞF^{-1}) det F (a × b) · c = [ (d/dt) det F ] (a × b) · c,

or

(d/dt) det F = trace(ḞF^{-1}) det F.
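Remark: the formula (d/dt) det F = trace(ḞF^{-1}) det F can be checked against a central difference. A minimal Python sketch (the linear family F(t) = F0 + tF1 is an arbitrary smooth choice, not from the text):

import numpy as np

rng = np.random.default_rng(3)
F0, F1 = rng.standard_normal((2, 3, 3))
F = lambda t: F0 + t * F1              # a smooth family; Fdot = F1
t, h = 0.4, 1e-6

num = (np.linalg.det(F(t + h)) - np.linalg.det(F(t - h))) / (2 * h)
exact = np.trace(F1 @ np.linalg.inv(F(t))) * np.linalg.det(F(t))
print(num, exact)   # the two values agree to within the differencing error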
Example 2.30: For any integer N > 0, show that the polynomial

P_N(A) = c0 I + c1 A + c2 A² + ... + c_k A^k + ... + c_N A^N

can be written as a quadratic polynomial of A, i.e. as a linear combination of I, A and A².

Solution: This follows readily from the Cayley-Hamilton Theorem (3.41) as follows: suppose that A is nonsingular so that I3(A) = det A ≠ 0. Then (3.41) shows that A³ can be written as a linear combination of I, A and A². Next, multiplying this by A tells us that A⁴ can be written as a linear combination of A, A² and A³, and therefore, on using the result of the previous step, as a linear combination of I, A and A². This process can be continued an arbitrary number of times to see that for any integer k, A^k can be expressed as a linear combination of I, A and A². The result thus follows.

Example 2.31: For any linear transformation A show that

det(A − αI) = −α³ + I1(A) α² − I2(A) α + I3(A)

for all real numbers α, where I1(A), I2(A) and I3(A) are the principal scalar invariants of A:

I1(A) = trace A,   I2(A) = (1/2)[ (trace A)² − trace(A²) ],   I3(A) = det A.

Example 2.32: Calculate the principal scalar invariants I1, I2 and I3 of the linear transformation a ⊗ b.
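Remark: both Example 2.31 and the reduction argument of Example 2.30 rest on algebra that is easy to spot-check. The Python sketch below (numpy assumed; A and α are arbitrary choices) verifies the characteristic-polynomial formula and the Cayley-Hamilton relation A³ = I1 A² − I2 A + I3 I on which Example 2.30 is based:

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

alpha = 0.7
lhs = np.linalg.det(A - alpha * np.eye(3))
rhs = -alpha**3 + I1 * alpha**2 - I2 * alpha + I3
assert np.isclose(lhs, rhs)

# Cayley-Hamilton: A^3 = I1 A^2 - I2 A + I3 I
assert np.allclose(A @ A @ A, I1 * (A @ A) - I2 * A + I3 * np.eye(3))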
References
1. H. Jeffreys, Cartesian Tensors, Cambridge, 1931.
2. J.K. Knowles, Linear Vector Spaces and Cartesian Tensors, Oxford University Press, New York, 1997.
3. L.A. Segel, Mathematics Applied to Continuum Mechanics, Dover, New York, 1987.

Chapter 4

Characterizing Symmetry: Groups of Linear Transformations

Linear transformations are mappings of vector spaces into vector spaces. When an object is mapped using a linear transformation, certain transformations preserve its symmetry while others don't. One way in which to characterize the symmetry of an object is to consider the collection of all linear transformations that preserve its symmetry. The set of such transformations depends on the object: for example, the set of linear transformations that preserve the symmetry of a cube is different to the set of linear transformations that preserve the symmetry of a tetrahedron.

Intuitively, a "uniform all-around expansion", i.e. a linear transformation of the form αI that rescales the object by changing its size but not its shape, does not affect symmetry. We are interested in other linear transformations that also preserve symmetry, principally rotations and reflections. In this Chapter we shall consider those linear transformations that map the object back into itself; the collection of such transformations has certain important and useful properties. In this chapter we touch briefly on the question of characterizing symmetry by linear transformations.
4.1 An example in two dimensions.

We begin with an illustrative example. Consider a square, ABCD, whose center is at the origin o and whose sides are parallel to the orthonormal vectors {i, j}; the square lies in a plane normal to the unit vector k. See Figure 4.1.

Figure 4.1: Mapping a square into itself.

Consider mappings that carry the square into itself. The vertex A can be placed in one of 4 positions. Once the location of A has been determined, the vertex B can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). And once the locations of A and B have been fixed, there is no further flexibility and the locations of the remaining vertices are fixed. Thus there are a total of 4 × 2 = 8 symmetry preserving transformations of the square, 4 of which are rotations and 4 of which are reflections.

Consider the 4 rotations. In order to determine them, we (a) identify the axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case there is just 1 axis to consider, viz. k, and we note that 0°, 90°, 180° and 270° rotations about this axis map the square back into itself. Thus the following 4 distinct rotations are symmetry transformations:

I, R_k^{π/2}, R_k^π, R_k^{3π/2},

where we are using the notation introduced previously: R_n^φ is a right-handed rotation through an angle φ about the axis n. Let G_square denote the set consisting of these 4 symmetry preserving rotations:

G_square = { I, R_k^{π/2}, R_k^π, R_k^{3π/2} }.
This collection of linear transformations has two important properties. First, observe that the successive application of any two symmetries yields a third symmetry, i.e. if P1 and P2 are in G_square, then so is their product P1 P2; for example, R_k^π R_k^{π/2} = R_k^{3π/2}, R_k^{3π/2} R_k^{3π/2} = R_k^π, etc. Second, observe that if P is any member of G_square, then so is its inverse P^{-1}; for example, (R_k^π)^{-1} = R_k^π, (R_k^{π/2})^{-1} = R_k^{3π/2}, etc. As we shall see in Section 4.4, these two properties endow the set G_square with a certain special structure.

Next consider the rotation R_k^{π/2}, and observe that every element of the set G_square can be represented in the form (R_k^{π/2})^n for the integer choices n = 0, 1, 2, 3. Therefore we can say that the set G_square is "generated" by the element R_k^{π/2}.

Finally observe that G'_square = {I, R_k^π} is a subset of G_square and that it too has the properties that if P1, P2 ∈ G'_square then their product P1 P2 is also in G'_square, and if P ∈ G'_square so is its inverse P^{-1}. We shall generalize all of this in Section 4.4.
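Remark: these closure and inverse properties, and the fact that R_k^{π/2} generates G_square, are easy to confirm numerically. The following Python fragment is a minimal sketch of such a check (numpy is an assumed tool here, not part of the notes):

import numpy as np

# components of R_k^{pi/2}, a right-handed 90-degree rotation about k
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])

G_square = [np.linalg.matrix_power(R, n) for n in range(4)]  # (R_k^{pi/2})^n

# closure: the product of any two elements lies in the set
for P1 in G_square:
    for P2 in G_square:
        assert any(np.allclose(P1 @ P2, P) for P in G_square)

# inverses: P^{-1} = P^T for a rotation, and it lies in the set
for P in G_square:
    assert any(np.allclose(P.T, P2) for P2 in G_square)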
4.2 An example in three dimensions.

Before considering some general theory, it is useful to consider the three-dimensional version of the previous problem. Consider a cube whose center is at the origin o and whose edges are parallel to the orthonormal vectors {i, j, k}, see Figure 4.2, and consider mappings that carry the cube into itself.

Figure 4.2: Mapping a cube into itself.

Consider a vertex A and its three adjacent vertices B, C, D. The vertex A can be placed in one of 8 positions. Once the location of A has been determined, the vertex B can be placed in one of 3 positions. And once the locations of A and B have been fixed, the vertex C can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). Once the vertices A, B and C have been placed, the locations of the remaining vertices are fixed. Thus there are a total of 8 × 3 × 2 = 48 symmetry preserving transformations of the cube, 24 of which are rotations and 24 of which are reflections.

First, consider the 24 rotations. In order to determine these rotations we again (a) identify all axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case we see that, in addition to the identity transformation I itself, we have the following rotational transformations that preserve symmetry:

1. There are 3 axes that join the center of one face of the cube to the center of the opposite face, which we can take to be i, j, k (which in materials science are called the {100} directions), and 90°, 180° and 270° rotations about each of these axes map the cube back into the cube. Thus the following 3 × 3 = 9 distinct rotations are symmetry transformations: R_i^{π/2}, R_i^π, R_i^{3π/2}, R_j^{π/2}, R_j^π, R_j^{3π/2}, R_k^{π/2}, R_k^π, R_k^{3π/2}.

2. There are 4 axes that join one vertex of the cube to the diagonally opposite vertex, which we can take to be i+j+k, i−j+k, i+j−k, i−j−k (which in materials science are called the {111} directions), and 120° and 240° rotations about each of these axes map the cube back into the cube. Thus the following 4 × 2 = 8 distinct rotations are symmetry transformations: R_{i+j+k}^{2π/3}, R_{i+j+k}^{4π/3}, R_{i−j+k}^{2π/3}, R_{i−j+k}^{4π/3}, R_{i+j−k}^{2π/3}, R_{i+j−k}^{4π/3}, R_{i−j−k}^{2π/3}, R_{i−j−k}^{4π/3}.

3. Finally, there are 6 axes that join the center of one edge of the cube to the center of the diagonally opposite edge, which we can take to be i+j, i−j, i+k, i−k, j+k, j−k (which in materials science are called the {110} directions);
180° rotations about each of these axes map the cube back into the cube. Thus the following 6 × 1 = 6 distinct rotations are symmetry transformations: R_{i+j}^π, R_{i−j}^π, R_{i+k}^π, R_{i−k}^π, R_{j+k}^π, R_{j−k}^π.

Let G_cube denote the collection of these 24 symmetry preserving rotations:

G_cube = { I, R_i^{π/2}, R_i^π, R_i^{3π/2}, R_j^{π/2}, R_j^π, R_j^{3π/2}, R_k^{π/2}, R_k^π, R_k^{3π/2}, R_{i+j+k}^{2π/3}, R_{i+j+k}^{4π/3}, R_{i−j+k}^{2π/3}, R_{i−j+k}^{4π/3}, R_{i+j−k}^{2π/3}, R_{i+j−k}^{4π/3}, R_{i−j−k}^{2π/3}, R_{i−j−k}^{4π/3}, R_{i+j}^π, R_{i−j}^π, R_{i+k}^π, R_{i−k}^π, R_{j+k}^π, R_{j−k}^π }.   (4.1)

If one considers rotations and reflections, then there are 48 elements in this set, where the 24 reflections are obtained by multiplying each rotation by −I. (It is important to remark that this just happens to be true for the cube, but is not generally true; see the example of the tetrahedron discussed later. In general, if R is a rotational symmetry of an object then −R is, of course, a reflection, but it need not describe a reflectional symmetry of the object.)

The collection of linear transformations G_cube has two important properties that one can verify: (i) if P1 and P2 ∈ G_cube, then their product P1 P2 is also in G_cube, and (ii) if P ∈ G_cube, then so does its inverse P^{-1}.

Next, one can verify that every element of the set G_cube can be represented in the form (R_i^{π/2})^p (R_j^{π/2})^q (R_k^{π/2})^r for integer choices of p, q, r. For example, the rotation R_{i+j+k}^{2π/3} (about a {111} axis) and the rotation R_{i+k}^π (about a {110} axis) can be represented as

R_{i+j+k}^{2π/3} = R_k^{π/2} R_i^{π/2},   R_{i+k}^π = (R_k^{π/2})² (R_j^{π/2})^{-1}.

(One way in which to verify this is to use the representation of a rotation tensor determined in Example 2.18.) Therefore we can say that the set G_cube is "generated" by the three elements R_i^{π/2}, R_j^{π/2} and R_k^{π/2}.
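Remark: the closure of G_cube and the claim that the three 90° face rotations generate all 24 elements can be confirmed by brute force. The following Python sketch (numpy assumed; the integer matrices below are the components of R_i^{π/2}, R_j^{π/2}, R_k^{π/2} in the basis {i, j, k}) builds the group by repeated multiplication:

import numpy as np

Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # R_i^{pi/2}
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])   # R_j^{pi/2}
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # R_k^{pi/2}

group = {np.eye(3, dtype=int).tobytes(): np.eye(3, dtype=int)}
frontier = list(group.values())
while frontier:                       # close the set under multiplication
    new = []
    for P in frontier:
        for G in (Rx, Ry, Rz):
            Q = G @ P
            if Q.tobytes() not in group:
                group[Q.tobytes()] = Q
                new.append(Q)
    frontier = new

print(len(group))   # 24: the three generators indeed generate all of G_cube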
4.3 Lattices.

A geometric structure of particular interest in solid mechanics is a lattice, and we now make a few observations on the symmetry of lattices. The simplest lattice, a Bravais lattice L{o; ℓ1, ℓ2, ℓ3}, is an infinite set of periodically arranged points in space generated by the translation of a single point o through three linearly independent lattice vectors {ℓ1, ℓ2, ℓ3}:

L{o; ℓ1, ℓ2, ℓ3} = { x : x = o + Σ_{i=1}^{3} n_i ℓ_i,  n_i ∈ Z },   (4.2)

where Z is the set of integers. Figure 4.3 shows a two-dimensional square lattice and one possible set of lattice vectors ℓ1, ℓ2. (It is clear from the figure that different sets of lattice vectors can correspond to the same lattice.)

Figure 4.3: A two-dimensional square lattice with lattice vectors ℓ1, ℓ2.

Given a lattice, let G_lattice be the set of all linear transformations P that map the lattice back into itself. One can show that if P1, P2 ∈ G_lattice then their product P1 P2 is also in G_lattice, and if P ∈ G_lattice so is its inverse P^{-1}. The set G_lattice is called the symmetry group of the lattice. It can be shown that a linear transformation P maps a lattice back into itself if and only if

P ℓ_i = Σ_{j=1}^{3} M_ij ℓ_j   (4.3)

for some 3 × 3 matrix [M] whose elements M_ij are integers and where det[M] = ±1,
and the set of rotations in G_lattice is known as the point group of the lattice. For example, the point group of a simple cubic lattice¹ is the set G_cube of 24 rotations given in (4.1). There are seven different types of symmetry that arise in Bravais lattices, viz. triclinic, monoclinic, orthorhombic, tetragonal, cubic, trigonal and hexagonal.

¹ Because, for example, a cubic lattice can be body-centered or face-centered, the number of different types of lattices is greater than seven.

4.4 Groups of Linear Transformations.

A collection G of nonsingular linear transformations is said to be a group of linear transformations if it possesses the following two properties:

(i) if P1 ∈ G and P2 ∈ G then P1 P2 ∈ G, and
(ii) if P ∈ G then P^{-1} ∈ G.

Note from this that the identity transformation I is necessarily a member of every group G. Clearly the three sets G_square, G_cube and G_lattice encountered in the previous sections are groups.

The generators of a group G are those elements P1, P2, ..., Pn which, when they and their inverses are multiplied among themselves in various combinations, yield all the elements of the group. Generators of the groups G_square and G_cube were given previously.

One can show that each of the following sets of linear transformations forms a group:
- the set of all orthogonal linear transformations,
- the set of all proper orthogonal linear transformations,
- the set of all unimodular linear transformations² (i.e. linear transformations with determinant equal to ±1), and
- the set of all proper unimodular linear transformations (i.e. linear transformations with determinant equal to +1).

² While the determinant of an orthogonal tensor is ±1, the converse is not necessarily true. There are unimodular tensors, e.g. P = I + α i ⊗ j, that are not orthogonal. Thus the unimodular group is not equivalent to the orthogonal group.
One can readily show that the group of proper orthogonal linear transformations is a subgroup of the group of orthogonal linear transformations, which in turn is a subgroup of the group of unimodular linear transformations. In general, a collection of linear transformations G' is said to be a subgroup of a group G if (i) G' ⊂ G and (ii) G' is itself a group. For example, G'_square is a subgroup of G_square.

It should be mentioned that the general theory of groups deals with collections of elements (together with certain "rules" including "multiplication") where the elements need not be linear transformations. For example, the set of all integers Z, with "multiplication" defined as the addition of numbers, the identity taken to be zero, and the inverse of x taken to be −x, is a group. Similarly the set of all matrices of the form

[ cosh x, sinh x ; sinh x, cosh x ],   −∞ < x < ∞,

with "multiplication" defined as matrix multiplication, the identity being the identity matrix, and the inverse being

[ cosh(−x), sinh(−x) ; sinh(−x), cosh(−x) ],

can be shown to be a group. However, our discussion in these notes is limited to groups of linear transformations.

4.5 Symmetry of a scalar-valued function of symmetric positive-definite tensors.

When we discuss the constitutive behavior of a material in Volume 2, we will encounter a scalar-valued function ψ(C) defined for all symmetric positive-definite tensors C. (This represents the energy in the material and characterizes its mechanical response.) The symmetry of the material will be characterized by a set G of nonsingular tensors P which has the property that, for each P ∈ G,

ψ(C) = ψ(P^T CP)   for all symmetric positive-definite C.   (4.4)
It can be readily shown that this set of tensors G is a group. To see this, first suppose that P1, P2 ∈ G, so that

ψ(C) = ψ(P1^T CP1),   ψ(C) = ψ(P2^T CP2),   (4.5)

for all symmetric positive-definite C. Then

ψ( (P1 P2)^T C P1 P2 ) = ψ( P2^T (P1^T CP1) P2 ) = ψ(P1^T CP1) = ψ(C),   (4.6)

where we have used (4.5)2 and (4.5)1 in the penultimate and ultimate steps respectively. Thus if P1 and P2 are in G, then so is their product P1 P2. Next, suppose that P ∈ G. Since P is nonsingular, the equation S = P^T CP provides a one-to-one relation between symmetric positive-definite tensors C and S. Since (4.4) holds for all symmetric positive-definite C, it also holds for all symmetric positive-definite linear transformations S = P^T CP; substituting this into (4.4) gives ψ(S) = ψ(P^{-T} S P^{-1}) for all symmetric positive-definite S, and so P^{-1} is also in G. Thus the set G of nonsingular tensors obeying (4.4) is a group, and we shall refer to it as the symmetry group of ψ.

Observe from (4.4) that the symmetry group of ψ contains the elements I and −I; and that, in particular, if P ∈ G then −P ∈ G also.

To examine an explicit example, consider the function

ψ(C) = ψ̄( det C ).

It is seen trivially that for this ψ, equation (4.4) holds if and only if det P = ±1. Thus the symmetry group of this ψ consists of all unimodular tensors (i.e. tensors with determinant equal to ±1).

As a second example consider the function

ψ(C) = ψ̄( Cn · n ),   (4.7)

where n is a given fixed unit vector. Let Q_n be a rotation about the axis n through an arbitrary angle. Then, since n is the axis of Q_n, we know that Q_n n = n. Therefore

ψ(Q_n^T C Q_n) = ψ̄( Q_n^T C Q_n n · n ) = ψ̄( C Q_n n · Q_n n ) = ψ̄( Cn · n ) = ψ(C).   (4.8)

The symmetry group of the function (4.7) therefore contains the set of all rotations about n. (Are there any other tensors in G?)
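Remark: the invariance (4.8) is easily spot-checked numerically. The sketch below (Python with numpy; Rodrigues' formula is used here as a standard way to build the rotations about n, and the particular C, n and angles are arbitrary illustrative choices) confirms that ψ(C) = Cn · n is unchanged under C → Q_n^T C Q_n:

import numpy as np

def rotation_about(n, phi):
    # Rodrigues' formula for R_n^phi (a standard construction, not from the text)
    n = n / np.linalg.norm(n)
    N = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(phi) * N + (1 - np.cos(phi)) * N @ N

rng = np.random.default_rng(5)
B = rng.standard_normal((3, 3))
C = B @ B.T + 3 * np.eye(3)           # a symmetric positive-definite C
n = np.array([0.0, 0.0, 1.0])

psi = lambda C: C @ n @ n             # psi(C) = Cn . n
for phi in (0.4, 1.1, 2.9):
    Q = rotation_about(n, phi)
    assert np.isclose(psi(Q.T @ C @ Q), psi(C))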
The following result will be useful in Volume 2. Let H be some fixed nonsingular linear transformation, and consider two functions ψ1(C) and ψ2(C), each defined for all symmetric positive-definite tensors C. Suppose that ψ1 and ψ2 are related by

ψ2(C) = ψ1(H^T CH)   for all symmetric positive-definite tensors C.   (4.9)

If G1 and G2 are the symmetry groups of ψ1 and ψ2 respectively, then it can be shown that

G2 = H G1 H^{-1}   (4.10)

in the sense that a tensor P ∈ G1 if and only if the tensor HPH^{-1} ∈ G2. As a special case of this, if H is a spherical tensor, i.e. if H = αI, then G1 = G2.

Next, note that any nonsingular tensor P can be written as the product of a spherical tensor αI and a unimodular tensor T as P = (αI)T, provided that we take α = (|det P|)^{1/3}, since then det T = ±1. This, together with the special case of the result noted in the preceding paragraph, provides a hint of why we might want to limit attention to unimodular tensors rather than consider all nonsingular tensors in our discussion of symmetry. This motivates the following slight modification to our original notion of symmetry of a function ψ(C). We characterize the symmetry of ψ by the set G of unimodular tensors P which have the property that, for each P ∈ G,

ψ(C) = ψ(P^T CP)   for all symmetric positive-definite C.   (4.11)

It can be readily shown that this set of tensors G is also a group, necessarily a subgroup of the unimodular group.

A function ψ is said to be isotropic if its symmetry group G contains all orthogonal tensors, i.e. if

ψ(C) = ψ(P^T CP)   for all symmetric positive-definite C and all orthogonal P.   (4.12)

From a theorem in algebra it follows that an isotropic function ψ depends on C only through its principal scalar invariants defined previously in (3.38). Thus for an isotropic function ψ there exists a function ψ̄ such that

ψ(C) = ψ̄( I1(C), I2(C), I3(C) ),   (4.13)
where

I1(C) = trace C,   I2(C) = (1/2)[ (trace C)² − trace(C²) ],   I3(C) = det C.   (4.14)

If G contains I and all rotations R_n^φ, 0 < φ < 2π, through all angles φ about a fixed axis n, the corresponding symmetry is called transverse isotropy. If G includes the three elements −R_i^π, −R_j^π, −R_k^π, which represent reflections in the planes normal to i, j and k, the symmetry is called orthotropy.

As a second example consider "cubic symmetry", where the symmetry group G coincides with the set of 24 rotations G_cube given in (4.1) plus the corresponding reflections obtained by multiplying these rotations by −I. As noted previously, this group is generated by R_i^{π/2}, R_j^{π/2}, R_k^{π/2} and −I, and contains 24 rotations and 24 reflections. Then, according to a theorem in algebra (see pg 312 of Truesdell and Noll), there exists a function ψ̄ such that

ψ(C) = ψ̄( i1(C), i2(C), ..., i9(C) )   (4.15)

where

i1(C) = C_11 + C_22 + C_33,
i2(C) = C_22 C_33 + C_33 C_11 + C_11 C_22,
i3(C) = C_11 C_22 C_33,
i4(C) = C_23² + C_31² + C_12²,
i5(C) = C_31² C_12² + C_12² C_23² + C_23² C_31²,
i6(C) = C_23 C_31 C_12,
i7(C) = C_22 C_12² + C_33 C_31² + C_33 C_23² + C_11 C_12² + C_11 C_31² + C_22 C_23²,
i8(C) = C_11 C_31² C_12² + C_22 C_12² C_23² + C_33 C_23² C_31²,
i9(C) = C_23² C_22 C_33 + C_31² C_33 C_11 + C_12² C_11 C_22.   (4.16)

4.6 Worked Examples.

Example 4.1: Characterize the set H_square of linear transformations, including both rotations and reflections, that map a square back into a square.
Figure 4.4: Mapping a square into itself.

Solution: We return to the problem described in Section 4.1 and now consider the set of rotations and reflections H_square that map the square back into itself. The set of rotations that do this were determined earlier:

G_square = { I, R_k^{π/2}, R_k^π, R_k^{3π/2} }.

As the 4 reflectional symmetries we can pick

H = reflection in the horizontal axis i,
V = reflection in the vertical axis j,
D = reflection in the diagonal with positive slope i + j,
D' = reflection in the diagonal with negative slope −i + j,

and so

H_square = { I, R_k^{π/2}, R_k^π, R_k^{3π/2}, H, V, D, D' }.   (i)

One can verify that H_square is a group, since it possesses the property that if P1 and P2 are two transformations in H_square, then so is their product P1 P2; e.g. R_k^π R_k^{3π/2} = R_k^{π/2}, R_k^{3π/2} H = D', D' H = R_k^{3π/2}, etc. And if P is any member of H_square, then so is its inverse; e.g. (R_k^{3π/2})^{-1} = R_k^{π/2}, H^{-1} = H, etc.

Example 4.2: Find the generators of H_square and all subgroups of H_square.

Solution: All elements of H_square can be represented in the form (R_k^{π/2})^i H^j for the integer choices i = 0, 1, 2, 3 and j = 0, 1; e.g.

I = (R_k^{π/2})⁰,  R_k^π = (R_k^{π/2})²,  R_k^{3π/2} = (R_k^{π/2})³,  V = (R_k^{π/2})² H,  D = R_k^{π/2} H,  D' = (R_k^{π/2})³ H.

Therefore the group H_square is generated by the two elements H and R_k^{π/2}.

One can verify that the following 8 collections of linear transformations are subgroups of H_square:

{I, H},  {I, V},  {I, D},  {I, D'},  {I, R_k^π},  {I, R_k^{π/2}, R_k^π, R_k^{3π/2}},  {I, R_k^π, H, V},  {I, R_k^π, D, D'}.
Geometrically, each of these subgroups leaves some aspect of the square invariant: the first leaves the face invariant, the second leaves a diagonal invariant, the third leaves an axis invariant, the fourth leaves an axis and a diagonal invariant, etc. There are no other subgroups of H_square.

Example 4.3: Characterize the rotational symmetry of a regular tetrahedron.

Figure 4.5: A regular tetrahedron ABCD; three orthonormal vectors {i, j, k} and a unit vector p. The axis k passes through the vertex A and the centroid of the opposite face BCD, while the unit vector p passes through the center of the edge AD and the center of the opposite edge BC.

Solution:
1. There are 4 axes like k in the figure that pass through a vertex of the tetrahedron and the centroid of the opposite face, and right-handed rotations of 120° and 240° about each of these axes map the tetrahedron back onto itself. Thus these 4 × 2 = 8 distinct rotations, of the form R_k^{2π/3}, R_k^{4π/3}, etc., are symmetry transformations of the tetrahedron.

2. There are three axes like p shown in the figure that pass through the midpoints of a pair of opposite edges, and a right-handed rotation through 180° about each of these axes maps the tetrahedron back onto itself. Thus these 3 × 1 = 3 distinct rotations, of the form R_p^π, etc., are symmetry transformations of the tetrahedron.

The group G_tetrahedron of rotational symmetries of a tetrahedron therefore consists of these 11 rotations plus the identity transformation I.

Example 4.4: Are all symmetry preserving linear transformations necessarily either rotations or reflections?

Solution: We began this chapter by considering the symmetry of a square, and examining the different ways in which the square could be mapped back into itself. Now consider the example of a two-dimensional a × a
square lattice, i.e. the set of infinite points

L_square = { x : x = n1 a i + n2 a j,  n1, n2 ∈ Z ≡ Integers }   (i)

depicted in Figure 4.6, and examine the different ways in which this lattice can be mapped back into itself.

Figure 4.6: A two-dimensional a × a square lattice.

We first note that the rotational and reflectional symmetry transformations of an a × a square are also symmetry transformations for the lattice, since they leave the lattice invariant. There are however other transformations, that are neither rotations nor reflections, that also leave the lattice invariant. For example, if for every integer n one rigidly translates the nth row of the lattice by precisely the amount na in the i direction, one recovers the original lattice. Thus the "shearing" of the lattice described by the linear transformation

P = I + i ⊗ j   (ii)

is also a symmetry preserving transformation, even though P (which is unimodular, with det P = 1, but not orthogonal) is neither a rotation nor a reflection.
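Remark: one can check this claim directly against the criterion (4.3). The following Python sketch expresses the shear in the lattice basis and confirms that the resulting matrix [M] has integer entries and determinant ±1; the two-dimensional setting and the value of a are illustrative assumptions made for this check:

import numpy as np

a = 1.3                                   # an arbitrary lattice spacing
l1, l2 = np.array([a, 0.0]), np.array([0.0, a])   # lattice vectors
P = np.eye(2) + np.outer([1.0, 0.0], [0.0, 1.0])  # P = I + i (x) j

# P maps the lattice into itself iff its matrix in the lattice basis is an
# integer matrix with determinant +-1 (the two-dimensional analogue of (4.3))
L = np.column_stack([l1, l2])
M = np.linalg.inv(L) @ P @ L              # P expressed in the lattice basis
assert np.allclose(M, np.round(M)) and np.isclose(abs(np.linalg.det(M)), 1.0)
print(M)   # [[1, 1], [0, 1]]: integer entries, det = 1, yet P is not orthogonal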
Example 4.5: Show that each of the following sets of linear transformations forms a group: all orthogonal tensors; all proper orthogonal tensors; all unimodular tensors (i.e. tensors with determinant equal to ±1); and all proper unimodular tensors (i.e. tensors with determinant equal to +1).

Example 4.6: Show that the group of proper orthogonal tensors is a subgroup of the group of orthogonal tensors, which in turn is a subgroup of the group of unimodular tensors.

Example 4.7: Suppose that a function ψ(C) is defined for all symmetric positive-definite tensors C and that its symmetry group is the set of all orthogonal tensors. Show that ψ depends on C only through its principal scalar invariants, i.e. show that there is a function ψ̄ such that

ψ(C) = ψ̄( I1(C), I2(C), I3(C) ),

where I1(C), I2(C), I3(C) are the principal scalar invariants of C defined previously in (3.38).

Solution: We are given that ψ has the property that, for all symmetric positive-definite tensors C and all orthogonal tensors Q,

ψ(C) = ψ(Q^T CQ).   (i)

In order to prove the desired result it is sufficient to show that, if C1 and C2 are two symmetric tensors whose principal invariants are the same, i.e.

I1(C1) = I1(C2),   I2(C1) = I2(C2),   I3(C1) = I3(C2),   (ii)

then ψ(C1) = ψ(C2).

Recall that the mapping (3.40) between principal invariants and eigenvalues is one-to-one. It follows from this and (ii) that the eigenvalues of C1 and C2 are the same, λ1, λ2, λ3 say. Thus we can write

C1 = Σ_{i=1}^{3} λ_i e_i^(1) ⊗ e_i^(1),   C2 = Σ_{i=1}^{3} λ_i e_i^(2) ⊗ e_i^(2),   (iii)

where the two sets of orthonormal vectors {e_1^(1), e_2^(1), e_3^(1)} and {e_1^(2), e_2^(2), e_3^(2)} are the respective principal bases of C1 and C2. Since each set of basis vectors is orthonormal, there is an orthogonal tensor R that carries the first basis into the second:

R e_i^(1) = e_i^(2),   i.e.   R^T e_i^(2) = e_i^(1),   i = 1, 2, 3.   (iv)

Thus

R^T C2 R = R^T ( Σ_{i=1}^{3} λ_i e_i^(2) ⊗ e_i^(2) ) R = Σ_{i=1}^{3} λ_i (R^T e_i^(2)) ⊗ (R^T e_i^(2)) = Σ_{i=1}^{3} λ_i e_i^(1) ⊗ e_i^(1),   (v)

and so R^T C2 R = C1. Therefore ψ(C1) = ψ(R^T C2 R) = ψ(C2), where in the last step we have used (i). This establishes the desired result.

Example 4.8: Consider a scalar-valued function f(x) that is defined for all vectors x. Let G be the set of all nonsingular linear transformations P that have the property that, for each P ∈ G,

f(x) = f(Px)   for all vectors x.

i) Show that G is a group. ii) Find the most general form of f if G contains the set of all orthogonal transformations.

Solution: i) Suppose that P1 and P2 are in G, i.e. that

f(x) = f(P1 x),   f(x) = f(P2 x),   for all vectors x.   (i)
Then

f( (P1 P2) x ) = f( P1 (P2 x) ) = f(P2 x) = f(x),

where in the penultimate and ultimate steps we have used (i)1 and (i)2 respectively. Thus if P1, P2 ∈ G then P1 P2 ∈ G. Next, suppose that P ∈ G, so that f(x) = f(Px) for all vectors x. Since P is nonsingular we can set y = Px and obtain f(P^{-1} y) = f(y) for all vectors y. It thus follows that G has the two defining properties of a group.

ii) If x1 and x2 are two vectors that have the same length, we will show that f(x1) = f(x2), whence f(x) depends on x only through its length |x|; i.e. there exists a function f̄ such that f(x) = f̄(|x|) for all vectors x.

If x1 and x2 are two vectors that have the same length, there is a rotation tensor R that carries x2 to x1: R x2 = x1. Therefore f(x1) = f(R x2) = f(x2), where in the last step we have used the fact that G contains the set of all orthogonal transformations, i.e. that f(x) = f(Px) for all vectors x and all orthogonal P. This establishes the result claimed above.

Example 4.9: Consider a scalar-valued function g(C, m ⊗ m) that is defined for all symmetric positive-definite tensors C and all unit vectors m. Let G be the set of all nonsingular linear transformations P that have the property that, for each P ∈ G,

g(C, n ⊗ n) = g( P^T CP, P^T (n ⊗ n) P )

for all symmetric positive-definite tensors C and some particular unit vector n. If G contains the set of all orthogonal transformations, show that there exists a function ḡ such that

g(C, n ⊗ n) = ḡ( I1(C), I2(C), I3(C), I4(C, n), I5(C, n) ),

where I1(C), I2(C), I3(C) are the three fundamental scalar invariants of C and

I4(C, n) = Cn · n,   I5(C, n) = C²n · n.

Solution: We are told that

g(C, n ⊗ n) = g( Q^T CQ, Q^T (n ⊗ n) Q )   (i)

for all orthogonal Q and all symmetric positive-definite C. As in Example 4.7, it is sufficient to show that if C1 and C2 are two symmetric positive-definite linear transformations whose "invariants" I_i, i = 1, 2, 3, 4, 5, are the same, i.e.

I1(C1) = I1(C2),  I2(C1) = I2(C2),  I3(C1) = I3(C2),  I4(C1, n) = I4(C2, n),  I5(C1, n) = I5(C2, n),   (ii)

then g(C1, n ⊗ n) = g(C2, n ⊗ n).

Remark: Observe that, with respect to an orthonormal basis {e1, e2, e3} where e3 = n, one has I4 = C_33 and I5 = C_31² + C_32² + C_33².
From (ii)1,2,3 and the analysis in Example 4.7 it follows that there is an orthogonal tensor R such that R^T C2 R = C1. It now follows from this, (ii)4,5, the fact that R is orthogonal, and the definitions of I4 and I5 that

Rn · Rn = n · n,   C2 Rn · Rn = C2 n · n,   C2² Rn · Rn = C2² n · n.   (iii)

This implies that Rn = ±n, as may be seen, for example, by expressing (iii) in a principal basis of C2; consequently R^T n = ±n, and it is readily seen from this that R^T (n ⊗ n) R = (R^T n) ⊗ (R^T n) = n ⊗ n. Consequently

g(C1, n ⊗ n) = g( R^T C2 R, R^T (n ⊗ n) R ) = g(C2, n ⊗ n),

where we have used (i) in the very last step. This establishes the desired result.

References
1. M.A. Armstrong, Groups and Symmetry, Springer-Verlag, 1988.
2. G. Birkhoff and S. MacLane, A Survey of Modern Algebra, MacMillan, 1977.
3. A.J.M. Spencer, Theory of invariants, in Continuum Physics, Volume I, edited by A.C. Eringen, Academic Press, 1971.
4. C. Truesdell and W. Noll, The non-linear field theories of mechanics, in Handbuch der Physik, Volume III/3, edited by S. Flugge, Springer-Verlag, 1965.
Chapter 5

Calculus of Vector and Tensor Fields

Notation:
α ..... scalar
{a} ..... 3 × 1 column matrix
a ..... vector
a_i ..... ith component of the vector a in some basis, or ith element of the column matrix {a}
[A] ..... 3 × 3 square matrix
A ..... second-order tensor (2-tensor)
A_ij ..... i, j component of the 2-tensor A in some basis, or i, j element of the square matrix [A]
C ..... fourth-order tensor (4-tensor)
C_ijkℓ ..... i, j, k, ℓ component of 4-tensor C in some basis
T_{i1 i2 ... in} ..... i1 i2 ... in component of n-tensor T in some basis

5.1 Notation and definitions.

Let R be a bounded region of three-dimensional space whose boundary is denoted by ∂R, and let x denote the position vector of a generic point in R + ∂R. We shall consider scalar and tensor fields such as φ(x), v(x), A(x) and T(x) defined on R + ∂R. The region R + ∂R and these fields will always be assumed to be sufficiently regular so as to permit the calculations carried out below.

While the subject of the calculus of tensor fields can be dealt with directly, we shall take the more limited approach of working with the components of these fields. The components will always be taken with respect to a single fixed orthonormal basis {e1, e2, e3}. Each component of, say, a vector field v(x) or a 2-tensor field A(x) is effectively a scalar-valued function
of position, e.g. φ(x1, x2, x3), v_i(x1, x2, x3) and A_ij(x1, x2, x3), where v_i and x_i are the ith components of the vectors v and x in the basis {e1, e2, e3}; and we can use the well-known operations of classical calculus on such fields, such as partial differentiation with respect to x_k. In order to simplify writing, we shall use the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding x-coordinate. Thus, for example, we will write

φ,i = ∂φ/∂x_i,   φ,ij = ∂²φ/(∂x_i ∂x_j),   v_i,j = ∂v_i/∂x_j,   (5.1)

and so on.

The gradient of a scalar field φ(x) is a vector field denoted by grad φ (or ∇φ). Its ith component in the orthonormal basis is

(grad φ)_i = φ,i   so that   grad φ = φ,i e_i.   (5.2)

The gradient of a vector field v(x) is a 2-tensor field denoted by grad v (or ∇v). Its ij-th component in the orthonormal basis is

(grad v)_ij = v_i,j   so that   grad v = v_i,j e_i ⊗ e_j.   (5.3)

The gradient of a scalar field φ in the particular direction of the unit vector n is denoted by ∂φ/∂n and defined by

∂φ/∂n = ∇φ · n.   (5.4)

The divergence of a vector field v(x) is a scalar field denoted by div v (or ∇ · v). It is given by

div v = v_i,i.   (5.5)

The divergence of a 2-tensor field A(x) is a vector field denoted by div A (or ∇ · A). Its ith component in the orthonormal basis is

(div A)_i = A_ij,j   so that   div A = A_ij,j e_i.   (5.6)

The curl of a vector field v(x) is a vector field denoted by curl v (or ∇ × v). Its ith component in the orthonormal basis is

(curl v)_i = e_ijk v_k,j   so that   curl v = e_ijk v_k,j e_i.   (5.7)

The Laplacians of a scalar field φ(x), a vector field v(x) and a 2-tensor field A(x) are the scalar, vector and 2-tensor fields with components

∇²φ = φ,kk,   (∇²v)_i = v_i,kk,   (∇²A)_ij = A_ij,kk.   (5.8)

5.2 Integral theorems

Let D be an arbitrary regular subregion of the region R. The divergence theorem allows one to relate a surface integral on ∂D to a volume integral on D. In particular, for a scalar field φ(x),

∫_{∂D} φ n dA = ∫_D ∇φ dV   or   ∫_{∂D} φ n_k dA = ∫_D φ,k dV.   (5.9)

Likewise, for a vector field v(x) one has

∫_{∂D} v · n dA = ∫_D ∇ · v dV   or   ∫_{∂D} v_k n_k dA = ∫_D v_k,k dV,   (5.10)

as well as

∫_{∂D} v ⊗ n dA = ∫_D ∇v dV   or   ∫_{∂D} v_i n_k dA = ∫_D v_i,k dV.   (5.11)

More generally, for an n-tensor field T(x) the divergence theorem gives

∫_{∂D} T_{i1 i2 ... in} n_k dA = ∫_D ∂(T_{i1 i2 ... in})/∂x_k dV,   (5.12)

where some of the subscripts i1, i2, ..., in may be repeated and one of them might equal k.
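As a concrete illustration of (5.10), the following Python sketch evaluates both sides of the divergence theorem for the arbitrarily chosen polynomial field v(x) = (x1² x2, x2² x3, x3² x1) on the unit cube, using a simple midpoint rule; this is a numerical sanity check under stated assumptions, not part of the development:

import numpy as np

N = 100
t = (np.arange(N) + 0.5) / N                     # midpoint rule on [0, 1]
X1, X2, X3 = np.meshgrid(t, t, t, indexing='ij')

# left-hand side: int_D div v dV with div v = 2 x1 x2 + 2 x2 x3 + 2 x3 x1
lhs = np.mean(2*X1*X2 + 2*X2*X3 + 2*X3*X1)       # D has unit volume

# right-hand side: v.n vanishes on the faces x_k = 0; on x1 = 1 it equals x2,
# on x2 = 1 it equals x3, and on x3 = 1 it equals x1 (each face has unit area)
A, B = np.meshgrid(t, t, indexing='ij')
rhs = np.mean(A) + np.mean(B) + np.mean(A)

print(lhs, rhs)   # both close to 3/2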
5.3 Localization

Certain physical principles are described to us in terms of equations that hold on an arbitrary portion of a body, i.e. in terms of an integral over a subregion D of R. It is often useful to derive an equivalent statement of such a principle in terms of equations that must hold at each point x in the body. In what follows, we shall frequently have need to do this, i.e. convert a "global principle" to an equivalent "local field equation".

Consider for example a scalar field φ(x) that is defined and continuous at all x ∈ R + ∂R, and suppose that

∫_D φ(x) dV = 0   for all subregions D ⊂ R.   (5.13)

We will show that this "global principle" is equivalent to the "local field equation"

φ(x) = 0   at every point x ∈ R.   (5.14)

Figure 5.1: The region R, a subregion D and a neighborhood B_ε(z) of the point z.

We will prove this by contradiction. Suppose that (5.14) does not hold. This implies that there is a point, say z ∈ R, at which φ(z) ≠ 0. Suppose that φ is positive at this point: φ(z) > 0. Since we are told that φ is continuous, φ is necessarily (strictly) positive in some neighborhood of z as well. Let B_ε(z) be a sphere with its center at z and radius ε > 0. We can always choose ε sufficiently small so that B_ε(z) is a sufficiently small neighborhood of z and

φ(x) > 0   at all x ∈ B_ε(z).   (5.15)

Now pick a region D which is a subset of B_ε(z). Then φ(x) > 0 for all x ∈ D. Integrating φ over this D gives

∫_D φ(x) dV > 0,   (5.16)

thus contradicting (5.13). An entirely analogous calculation can be carried out in the case φ(z) < 0. Thus our starting assumption must be false and (5.14) must hold.
5.4 Worked Examples.
In all of the examples below the region R will be a bounded regular region and its boundary ∂R will be smooth. All fields are defined on this region and are as smooth as is necessary. In some of the examples below, we are asked to establish certain results for vector and tensor fields. When it is more convenient, we will carry out our calculations by first picking and fixing a basis, and then working with the components in that basis. If necessary, we will revert back to the vector and tensor fields at the end. We shall do this frequently in what follows and will not bother to explain this strategy each time.
Example 5.1: Calculate the gradient of the scalar-valued function φ(x) = Ax · x where A is a constant 2-tensor.

Solution: Writing φ in terms of components, φ = A_ij x_i x_j. Calculating the partial derivative of φ with respect to x_k yields

φ,k = A_ij (x_i x_j),k = A_ij (x_i,k x_j + x_i x_j,k) = A_ij (δ_ik x_j + x_i δ_jk) = A_kj x_j + A_ik x_i = (A_kj + A_jk) x_j,

or equivalently

∇φ = (A + A^T) x.
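Remark: the formula ∇φ = (A + A^T)x can be spot-checked with central differences. A minimal Python sketch (numpy assumed; A, x and the step h are arbitrary choices):

import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
phi = lambda x: x @ A @ x              # phi(x) = Ax . x

h = 1e-6
grad_fd = np.array([(phi(x + h*np.eye(3)[k]) - phi(x - h*np.eye(3)[k])) / (2*h)
                    for k in range(3)])
assert np.allclose(grad_fd, (A + A.T) @ x, atol=1e-6)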
Example 5.2: Let v(x) be a vector field and let v_i(x1, x2, x3) be the ith component of v in a fixed orthonormal basis {e1, e2, e3}. For each i and j define F_ij = v_i,j. Show that F_ij are the components of a 2-tensor.

Solution: Since v and x are 1-tensors, their components obey the transformation rules

v_i' = Q_ik v_k,   v_i = Q_ki v_k',   and   x_j' = Q_jk x_k,   x_ℓ = Q_jℓ x_j'.

Therefore

F_ij' = ∂v_i'/∂x_j' = (∂v_i'/∂x_ℓ)(∂x_ℓ/∂x_j') = Q_jℓ ∂(Q_ik v_k)/∂x_ℓ = Q_ik Q_jℓ ∂v_k/∂x_ℓ = Q_ik Q_jℓ F_kℓ,

which is the transformation rule for a 2-tensor.
Example 5.3: If φ(x), u(x) and A(x) are scalar, vector and 2-tensor fields respectively, establish the identities

a. div(φu) = u · grad φ + φ div u,
b. grad(φu) = u ⊗ grad φ + φ grad u,
c. div(φA) = A grad φ + φ div A.

Solution:
a. In terms of components we are asked to show that (φu_i),i = u_i φ,i + φ u_i,i. This follows immediately by expanding (φu_i),i using the chain rule.
b. In terms of components we are asked to show that (φu_i),j = u_i φ,j + φ u_i,j. Again, this follows immediately by expanding (φu_i),j using the chain rule.
c. In terms of components we are asked to show that (φA_ij),j = A_ij φ,j + φ A_ij,j. Again, this follows immediately by expanding (φA_ij),j using the chain rule.
Example 5.4: If φ(x) and v(x) are scalar and vector fields respectively, show that

∇ × (φv) = φ(∇ × v) − v × ∇φ.   (i)

Solution: Recall that the curl of a vector field u can be expressed as ∇ × u = e_ijk u_k,j e_i where e_i is a fixed basis vector. Thus evaluating ∇ × (φv):

∇ × (φv) = e_ijk (φv_k),j e_i = e_ijk φ v_k,j e_i + e_ijk φ,j v_k e_i = φ ∇ × v + ∇φ × v,   (ii)

from which the desired result follows because a × b = −b × a.
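Remark: identity (i) can also be confirmed numerically by differencing. In the Python sketch below, curl and grad are small finite-difference helpers written for this check (they are illustrative, not part of the notes), and the fields φ and v are arbitrary smooth choices:

import numpy as np

phi = lambda x: np.sin(x[0]) * x[1] + x[2]**2
v = lambda x: np.array([x[1]*x[2], np.cos(x[0]), x[0]*x[1]])

def curl(f, x, h=1e-5):
    # numerical curl: (curl f)_i = e_ijk f_k,j via central differences
    J = np.empty((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2*h)   # J[k, j] = f_k,j
    return np.array([J[2,1]-J[1,2], J[0,2]-J[2,0], J[1,0]-J[0,1]])

def grad(f, x, h=1e-5):
    return np.array([(f(x + h*np.eye(3)[j]) - f(x - h*np.eye(3)[j])) / (2*h)
                     for j in range(3)])

x = np.array([0.3, -0.7, 1.2])
lhs = curl(lambda y: phi(y)*v(y), x)
rhs = phi(x)*curl(v, x) + np.cross(grad(phi, x), v(x))
assert np.allclose(lhs, rhs, atol=1e-6)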
Example 5.5: Let u(x) be a vector field and define a second vector field ξ(x) by ξ(x) = curl u(x). Show that

a. ∇ · ξ = 0;
b. (∇u − ∇u^T) a = ξ × a for any vector field a(x); and
c. ξ · ξ = ∇u · ∇u − ∇u · ∇u^T.

Solution: Recall that in terms of its components, ξ = curl u = ∇ × u can be expressed as

ξ_i = e_ijk u_k,j.   (i)

a. A direct calculation gives

∇ · ξ = ξ_i,i = (e_ijk u_k,j),i = e_ijk u_k,ji = 0,   (ii)

where in the last step we have used the fact that e_ijk is skew-symmetric in the subscripts i, j while u_k,ji is symmetric in the subscripts i, j (since the order of partial differentiation can be switched), and therefore their product vanishes.

b. Multiplying both sides of (i) by e_ipq gives

e_ipq ξ_i = e_ipq e_ijk u_k,j = (δ_pj δ_qk − δ_pk δ_qj) u_k,j = u_q,p − u_p,q,   (iii)

where we have made use of the identity e_ipq e_ijk = δ_pj δ_qk − δ_pk δ_qj between the alternator and the Kronecker delta introduced in (1.49), as well as the substitution rule. Multiplying both sides of this by a_q and using the fact that e_ipq = −e_piq gives

e_piq ξ_i a_q = (u_p,q − u_q,p) a_q,   or   ξ × a = (∇u − ∇u^T) a.   (iv)

c. Since (∇u)_ij = u_i,j and the inner product of two 2-tensors is A · B = A_ij B_ij, the right-hand side of the equation we are asked to establish can be written as

∇u · ∇u − ∇u · ∇u^T = (∇u)_ij (∇u)_ij − (∇u)_ij (∇u)_ji = u_i,j u_i,j − u_i,j u_j,i.

The left-hand side on the other hand is ξ · ξ = ξ_i ξ_i. Using (i), the aforementioned identity between the alternator and the Kronecker delta, and the substitution rule leads to the desired result as follows:

ξ_i ξ_i = (e_ijk u_k,j)(e_ipq u_q,p) = (δ_jp δ_kq − δ_jq δ_kp) u_k,j u_q,p = u_k,j u_k,j − u_k,j u_j,k.   (v)
c. Since (∇u)ij = ui,j and the inner product of two 2-tensors is A · B = Aij Bij, the right-hand side of the equation we are asked to establish can be written as ∇u · ∇u − ∇u · ∇uᵀ = (∇u)ij (∇u)ij − (∇u)ij (∇u)ji = ui,j ui,j − ui,j uj,i. The left-hand side, on the other hand, is ξ · ξ = ξi ξi. Using (i), the aforementioned identity between the alternator and the Kronecker delta, and the substitution rule leads to the desired result as follows:

    ξi ξi = (eijk uk,j)(eipq uq,p) = (δjp δkq − δjq δkp) uk,j uq,p = uk,j uk,j − uk,j uj,k.    (v)
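Part c is also easily confirmed symbolically; the sketch below (an added check with an arbitrary sample field u) verifies ξ · ξ = ∇u · ∇u − ∇u · ∇uᵀ:

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')
    X = (x1, x2, x3)
    u = sp.Matrix([x1 * x2, sp.sin(x3), x1**2 * x3])             # sample vector field

    grad_u = sp.Matrix(3, 3, lambda i, j: sp.diff(u[i], X[j]))   # (grad u)_ij = u_i,j
    xi = sp.Matrix([grad_u[2, 1] - grad_u[1, 2],
                    grad_u[0, 2] - grad_u[2, 0],
                    grad_u[1, 0] - grad_u[0, 1]])                # xi = curl u
    rhs = sum(grad_u[i, j] * (grad_u[i, j] - grad_u[j, i])
              for i in range(3) for j in range(3))
    assert sp.simplify(xi.dot(xi) - rhs) == 0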
Example 5.6: Let u(x), E(x) and S(x) be, respectively, a vector and two 2-tensor fields. These fields are related by

    E = ½(∇u + ∇uᵀ),  S = 2μE + λ trace(E) 1,    (i)

where λ and μ are constants. Suppose that

    u(x) = b x/r³  where r = |x|,  x ≠ 0,    (ii)

and b is a constant. Use (i)₁ to calculate the field E(x) corresponding to the field u(x) given in (ii), and then use (i)₂ to calculate the associated field S(x). Thus verify that the field S(x) corresponding to (ii) satisfies the differential equation

    div S = o,  x ≠ 0.    (iii)

Solution: We proceed in the manner suggested in the problem statement by first using (i)₁ to calculate the E corresponding to the u given by (ii); substituting the result into (i)₂ gives the corresponding S; and finally we can then check whether or not this S satisfies (iii). In components,

    Eij = ½(ui,j + uj,i),    (iv)

and therefore we begin by calculating ui,j. For this it is convenient to first calculate ∂r/∂xj = r,j. Observe by differentiating r² = |x|² = xi xi that 2r r,j = 2xi,j xi = 2δij xi = 2xj, and therefore

    r,j = xj/r.    (v)

Now differentiating the given vector field ui = b xi/r³ with respect to xj gives

    ui,j = b xi,j r⁻³ + b xi (r⁻³),j = b δij/r³ − 3b xi r⁻⁴ r,j = b (δij/r³ − 3 xi xj/r⁵).    (vi)

Since this is symmetric in the subscripts i, j, substituting it into (iv) gives

    Eij = b (δij/r³ − 3 xi xj/r⁵).    (vii)

Next, substituting (vii) into (i)₂ gives Sij:

    Sij = 2μ Eij + λ Ekk δij = 2μb (δij/r³ − 3 xi xj/r⁵) + λb (δkk/r³ − 3 xk xk/r⁵) δij = 2μb (δij/r³ − 3 xi xj/r⁵),    (viii)

since δkk/r³ − 3 xk xk/r⁵ = 3/r³ − 3/r³ = 0. Finally we use this to calculate ∂Sij/∂xj = Sij,j:

    (1/2μb) Sij,j = δij (r⁻³),j − 3 (xi xj),j r⁻⁵ − 3 xi xj (r⁻⁵),j
                  = −3 δij xj/r⁵ − 3 (δij xj + xi δjj)/r⁵ + 15 xi xj xj/r⁷
                  = (−3 − 12 + 15) xi/r⁵ = 0,    (ix)

where we have used (xi xj),j = δij xj + xi δjj = xi + 3xi = 4xi. This establishes (iii).

Example 5.7: Show that

    ∫_∂R x ⊗ n dA = V I,    (i)

where V is the volume of the region R, n is the unit outward normal vector on ∂R, and x is the position vector of a typical point in R + ∂R.

Solution: In terms of components in a fixed basis, we have to show that

    ∫_∂R xi nj dA = V δij.    (ii)

The result follows immediately by using the divergence theorem (5.11):

    ∫_∂R xi nj dA = ∫_R xi,j dV = ∫_R δij dV = δij ∫_R dV = δij V.    (iii)
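Since the calculation in Example 5.6 is somewhat lengthy, a direct symbolic verification is reassuring. The following sketch (an addition to the notes) computes E and S from (i) for the field (ii) and confirms that div S = o away from the origin:

    import sympy as sp

    b, lam, mu = sp.symbols('b lamda mu')
    x = sp.Matrix(sp.symbols('x1 x2 x3'))
    r = sp.sqrt(x.dot(x))
    u = b * x / r**3

    grad_u = sp.Matrix(3, 3, lambda i, j: sp.diff(u[i], x[j]))
    E = (grad_u + grad_u.T) / 2
    S = 2 * mu * E + lam * E.trace() * sp.eye(3)
    div_S = sp.Matrix([sum(sp.diff(S[i, j], x[j]) for j in range(3)) for i in range(3)])
    assert sp.simplify(div_S) == sp.zeros(3, 1)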
Example 5.8: Let A(x) be a 2-tensor field with the property that

    ∫_∂D A(x) n(x) dA = o    (i)

for all subregions D ⊂ R, where n(x) is the unit outward normal vector at a point x on the boundary ∂D. Show that (i) holds if and only if div A = o at each point x ∈ R.

Solution: In terms of components in a fixed basis, we are told that

    ∫_∂D Aij(x) nj(x) dA = 0  for all subregions D ⊂ R.    (ii)

By using the divergence theorem (5.12), this implies that

    ∫_D Aij,j dV = 0  for all subregions D ⊂ R.    (iii)

If Aij,j is continuous on R, the result established in the previous problem allows us to localize this to

    Aij,j = 0 at each x ∈ R.    (iv)

Conversely if (iv) holds, one can easily reverse the preceding steps to conclude that then (i) also holds. This shows that (iv) is both necessary and sufficient for (i) to hold.

Example 5.9: Let A(x) be a 2-tensor field which satisfies the differential equation div A = o at each point in R. Suppose that in addition

    ∫_∂D x × (An) dA = o

for all subregions D ⊂ R. Show that A must be a symmetric 2-tensor.

Solution: In terms of components we are given that ∫_∂D eijk xj Akp np dA = 0, which on using the divergence theorem yields

    ∫_D eijk (xj Akp),p dV = ∫_D eijk [δjp Akp + xj Akp,p] dV = 0.

We are also given that Akp,p = 0 at each point in R, and so the preceding equation simplifies, after using the substitution rule, to ∫_D eijk Akj dV = 0. Since this holds for all subregions D ⊂ R we can localize it to eijk Akj = 0 at each x ∈ R. Finally, multiplying both sides by eipq and using the identity eipq eijk = δpj δqk − δpk δqj in (1.49) yields (δpj δqk − δpk δqj) Akj = Aqp − Apq = 0, and so A is symmetric.
Example 5.10: Let ε1(x1, x2) and ε2(x1, x2) be defined on a simply connected two-dimensional domain R. Find necessary and sufficient conditions under which there exists a function u(x1, x2) such that

    u,1 = ε1,  u,2 = ε2  for all (x1, x2) ∈ R.    (i)

Solution: In the presence of sufficient smoothness the order of partial differentiation does not matter, and so we necessarily have u,12 = u,21. Therefore a necessary condition for (i) to hold is that

    ε1,2 = ε2,1  for all (x1, x2) ∈ R.    (ii)

To show that (ii) is also sufficient for the existence of u, we shall provide a formula for explicitly calculating the function u in terms of the given functions ε1 and ε2. Let C be an arbitrary regular oriented curve in R that connects (0, 0) to (x1, x2). A generic point on the curve is denoted by (ξ1, ξ2) and the curve is characterized by the parameterization

    ξ1 = ξ1(s),  ξ2 = ξ2(s),  0 ≤ s ≤ s0,    (iii)

where s is arc length on C and (ξ1(0), ξ2(0)) = (0, 0), (ξ1(s0), ξ2(s0)) = (x1, x2). We will show that the function

    u(x1, x2) = ∫₀^{s0} [ε1(ξ1(s), ξ2(s)) ξ1′(s) + ε2(ξ1(s), ξ2(s)) ξ2′(s)] ds    (iv)

satisfies the requirement (i) when (ii) holds.

Figure 5.2: (a) A path C from (0, 0) to (x1, x2), parameterized by arc length s. (b) A closed path C passing through (0, 0) and (x1, x2), enclosing a region D. The unit tangent vector on C has components (s1, s2) and the unit outward normal vector n has components (n1, n2); they are related by s1 = −n2, s2 = n1.

To see this we must first show that the integral (iv) does in fact define a function of (x1, x2), i.e. that it does not depend on the path of integration. (Note that if a function u satisfies (i), then so does the function u + constant, and so the dependence on the arbitrary starting point of the integral is to be expected.) Thus consider a closed path C that starts and ends at (0, 0) and passes through (x1, x2), as sketched in Figure 5.2(b). We need to show that

    ∮_C [ε1(ξ1(s), ξ2(s)) ξ1′(s) + ε2(ξ1(s), ξ2(s)) ξ2′(s)] ds = 0.    (v)

Recall that (ξ1′(s), ξ2′(s)) are the components of the unit tangent vector on C at the point (ξ1(s), ξ2(s)): s1 = ξ1′(s), s2 = ξ2′(s). Observe further from the figure that the components of the unit tangent vector s and the unit outward normal vector n are related by s1 = −n2 and s2 = n1. Thus the left-hand side of (v) can be written as

    ∮_C (ε1 s1 + ε2 s2) ds = ∮_C (ε2 n1 − ε1 n2) ds = ∫_D (ε2,1 − ε1,2) dA,    (vi)

where we have used the divergence theorem in the last step and D is the region enclosed by C. In view of (ii), this last integral vanishes. Thus the integral (v) vanishes on any closed path C, and so the integral (iv) is independent of path and depends only on the end points. Thus (iv) does in fact define a function u(x1, x2). Finally it remains to show that the function (iv) satisfies the requirements (i). This is readily seen by writing (iv) in the form

    u(x1, x2) = ∫_{(0,0)}^{(x1,x2)} ε1(ξ1, ξ2) dξ1 + ε2(ξ1, ξ2) dξ2    (vii)

and then differentiating this with respect to x1 and x2.

Example 5.11: Let a1(x1, x2) and a2(x1, x2) be defined on a simply connected two-dimensional domain R. Suppose that a1 and a2 satisfy the partial differential equation

    a1,1(x1, x2) + a2,2(x1, x2) = 0  for all (x1, x2) ∈ R.    (i)

Show that (i) holds if and only if there is a function φ(x1, x2) such that

    a1(x1, x2) = φ,2(x1, x2),  a2(x1, x2) = −φ,1(x1, x2).    (ii)

Solution: This is simply a restatement of the previous example in a form that we will find useful in what follows.

Example 5.12: Find the most general vector field u(x) which satisfies the differential equation

    ½(∇u + ∇uᵀ) = O  at all x ∈ R.    (i)

Solution: In terms of components, ∇u = −∇uᵀ reads

    ui,j = −uj,i.    (ii)
Differentiating this with respect to xk, and then changing the order of differentiation, gives ui,jk = −uj,ik = −uj,ki. Using (ii) and then changing the order of differentiation once more leads to ui,jk = uk,ji = uk,ij; and using (ii) and changing the order of differentiation once again leads to ui,jk = −ui,kj = −ui,jk. It therefore follows that

    ui,jk = 0.    (iii)

Integrating this once gives ui,j = Cij, where the Cij's are constants. Integrating this once more gives

    ui = Cij xj + ci,    (iv)

where the ci's are constants. The vector field u(x) must necessarily have this form if (ii) is to hold. To examine sufficiency, substituting (iv) into (ii) shows that [C] must be skew-symmetric. Thus in summary the most general vector field u(x) that satisfies (i) is u(x) = Cx + c, where C is a constant skew-symmetric 2-tensor and c is a constant vector.

Example 5.13: Suppose that a scalar-valued function f(A) is defined for all symmetric tensors A. The partial derivatives of f with respect to Aij,

    ∂f/∂Aij,    (i)

are the components of a 2-tensor. Is this tensor symmetric?

Solution: Consider, for example, the particular function f = A · A = Aij Aij which, when written out in components, reads

    f = f1(A11, A12, A13, A21, ..., A33) = A11² + A22² + A33² + 2A12² + 2A23² + 2A31².    (ii)

Proceeding formally and differentiating (ii) with respect to A12, and separately with respect to A21, gives

    ∂f1/∂A12 = 4A12,  ∂f1/∂A21 = 0,    (iii)

which implies that ∂f1/∂A12 ≠ ∂f1/∂A21. On the other hand, since Aij is symmetric we can write

    Aij = ½(Aij + Aji).    (iv)

Substituting (iv) into the formula (ii) for f gives

    f = f2(A11, A12, ..., A33) = A11² + A22² + A33² + 2[½(A12 + A21)]² + 2[½(A23 + A32)]² + 2[½(A31 + A13)]²
      = A11² + A22² + A33² + ½A12² + A12A21 + ½A21² + ... + ½A31² + A31A13 + ½A13².    (v)

Note that the values of f1[A] = f2[A] for any symmetric matrix [A]. Differentiating f2 leads to

    ∂f2/∂A12 = A12 + A21,  ∂f2/∂A21 = A21 + A12,    (vi)

and so now ∂f2/∂A12 = ∂f2/∂A21.

The source of the original difficulty is the fact that the 9 Aij's in the argument of f1 are not independent variables, since Aij = Aji, and yet we have been calculating partial derivatives as if they were independent. In fact, the original problem statement itself is ill-posed, since we are asked to calculate ∂f/∂Aij but told that [A] is restricted to being symmetric. We see that the values of the functions f1 and f2 are equal at all symmetric matrices, and so in going from f1 → f2, by changing Aij → ½(Aij + Aji), we have effectively relaxed the constraint of symmetry and expanded the domain of definition of f to all matrices [A]. Suppose then that f2 is defined by (v) for all matrices [A] and not just symmetric matrices [A]. We may differentiate f2 by treating the 9 Aij's as independent, and the result can then be evaluated at symmetric matrices. In general, if a function f(A11, A12, ..., A33) is expressed in symmetric form, then ∂f/∂Aij will be symmetric, but not otherwise. Throughout these volumes, whenever we encounter a function of a symmetric tensor, we shall always assume that it has been written in symmetric form, and therefore its derivative with respect to the tensor can be assumed to be symmetric; we assume that this is what was meant in the problem statement.

Remark: We will encounter a similar situation involving tensors whose determinant is unity. On occasion we will have need to differentiate a function g1(A) defined for all tensors with det A = 1, and we shall do this by extending the definition of the given function: we define a second function g2(A) for all tensors, where g2 is defined such that g1(A) = g2(A) for all tensors with unit determinant. We then differentiate g2 and evaluate the result at tensors with unit determinant.

References
1. P. Chadwick, Continuum Mechanics, Chapter 1, Sections 10 and 11, Dover, 1999.
2. M.E. Gurtin, An Introduction to Continuum Mechanics, Chapter 2, Academic Press, 1981.
3. L.A. Segel, Mathematics Applied to Continuum Mechanics, Section 2.3, Dover, New York, 1987.
Chapter 6

Orthogonal Curvilinear Coordinates

6.1 Introductory Remarks

The notes in this section are a somewhat simplified version of notes developed by Professor Eli Sternberg of Caltech. The discussion here, which is a general treatment of orthogonal curvilinear coordinates, is a compromise between a general tensorial treatment that includes oblique coordinate systems and an ad hoc treatment of special orthogonal curvilinear coordinate systems. A summary of the main tensor-analytic results of this section is given in equations (6.32)–(6.37) in terms of the scale factors hi defined in (6.17) that relate the rectangular cartesian coordinates (x1, x2, x3) to the orthogonal curvilinear coordinates (x̂1, x̂2, x̂3).

It is helpful to begin by reviewing a few aspects of the familiar case of circular cylindrical coordinates. Let {e1, e2, e3} be a fixed orthonormal basis, and let O be a fixed point chosen as the origin. The point O and the basis {e1, e2, e3} together constitute a frame which we denote by {O; e1, e2, e3}. Consider a generic point P in R3 whose position vector relative to this origin O is x. The rectangular cartesian coordinates of the point P in the frame {O; e1, e2, e3} are the components (x1, x2, x3) of the position vector x in this basis. We introduce circular cylindrical coordinates (r, θ, z) through the mappings

    x1 = r cos θ,  x2 = r sin θ,  x3 = z,  for all (r, θ, z) ∈ [0, ∞) × [0, 2π) × (−∞, ∞).    (6.1)

The mapping (6.1) is one-to-one except at r = 0 (i.e. x1 = x2 = 0). Indeed (6.1) may be
explicitly inverted for r > 0 to give

    r = √(x1² + x2²),  cos θ = x1/r,  sin θ = x2/r,  z = x3.    (6.2)

For a general set of orthogonal curvilinear coordinates one cannot, in general, explicitly invert the coordinate mapping in this way. The Jacobian determinant of the mapping (6.1) is

    Δ(r, θ, z) = det [ ∂x1/∂r  ∂x1/∂θ  ∂x1/∂z ; ∂x2/∂r  ∂x2/∂θ  ∂x2/∂z ; ∂x3/∂r  ∂x3/∂θ  ∂x3/∂z ] = r ≥ 0.

Note that Δ(r, θ, z) = 0 if and only if r = 0 and is otherwise strictly positive; this reflects the invertibility of (6.1) on (r, θ, z) ∈ (0, ∞) × [0, 2π) × (−∞, ∞), and the breakdown in invertibility at r = 0.

The circular cylindrical coordinates (r, θ, z) admit the familiar geometric interpretation illustrated in Figure 6.1. In view of (6.2), one has:

    r = r0 = constant:  circular cylinders, co-axial with the x3-axis,
    θ = θ0 = constant:  meridional half-planes through the x3-axis,
    z = z0 = constant:  planes perpendicular to the x3-axis.

The above surfaces constitute a triply orthogonal family of coordinate surfaces; each "regular point" of E3 (i.e. a point at which r > 0) is the intersection of a unique triplet of (mutually perpendicular) coordinate surfaces.

Figure 6.1: Circular cylindrical coordinates (r, θ, z), showing the r-, θ- and z-coordinate lines through a point.
The coordinate lines are the pairwise intersections of the coordinate surfaces; thus, for example, the line along which an r-coordinate surface and a z-coordinate surface intersect is a θ-coordinate line, as illustrated in Figure 6.1. Along any coordinate line only one of the coordinates (r, θ, z) varies, while the other two remain constant.

In terms of the circular cylindrical coordinates the position vector x can be written as

    x = x(r, θ, z) = (r cos θ) e1 + (r sin θ) e2 + z e3.

The vectors ∂x/∂r, ∂x/∂θ, ∂x/∂z are tangent to the coordinate lines corresponding to r, θ and z respectively. The so-called metric coefficients hr, hθ, hz denote the magnitudes of these vectors,

    hr = |∂x/∂r|,  hθ = |∂x/∂θ|,  hz = |∂x/∂z|,

and so the unit tangent vectors corresponding to the respective coordinate lines r, θ and z are

    er = (1/hr) ∂x/∂r,  eθ = (1/hθ) ∂x/∂θ,  ez = (1/hz) ∂x/∂z.

In the present case one has hr = 1, hθ = r, hz = 1 and

    er = ∂x/∂r = cos θ e1 + sin θ e2,
    eθ = (1/r) ∂x/∂θ = −sin θ e1 + cos θ e2,
    ez = ∂x/∂z = e3.

The triplet of vectors {er, eθ, ez} forms a local orthonormal basis at the point x. They are local because they depend on the point x; sometimes, when we need to emphasize this fact, we will write {er(x), eθ(x), ez(x)}.

In order to calculate the derivatives of various field quantities it is clear that we will need to calculate quantities such as ∂er/∂r, ∂er/∂θ, etc., and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form er · (∂er/∂r), eθ · (∂er/∂r), ez · (∂er/∂r), er · (∂eθ/∂r), eθ · (∂eθ/∂r), ez · (∂eθ/∂r), etc. Much of the analysis in the general case to follow, leading eventually to equation (6.30) in Subsection 6.2.4, is devoted to calculating these quantities.
6.2 General Orthogonal Curvilinear Coordinates

6.2.1 Coordinate transformation. Inverse transformation.

Let {e1, e2, e3} be a fixed right-handed orthonormal basis, let O be the fixed point chosen as the origin, and let {O; e1, e2, e3} be the associated frame. The rectangular cartesian coordinates of the point with position vector x in this frame are (x1, x2, x3) where xi = x · ei. We introduce curvilinear coordinates (x̂1, x̂2, x̂3) through a triplet of scalar mappings

    xi = xi(x̂1, x̂2, x̂3)  for all (x̂1, x̂2, x̂3) ∈ R̂,    (6.5)

where the domain of definition R̂ is a subset of E3. Each curvilinear coordinate x̂i belongs to some linear interval Li, and R̂ = L1 × L2 × L3. For example, in the case of circular cylindrical coordinates we have L1 = {x̂1 | 0 ≤ x̂1 < ∞}, L2 = {x̂2 | 0 ≤ x̂2 < 2π} and L3 = {x̂3 | −∞ < x̂3 < ∞}, and the "box" R̂ is given by R̂ = {(x̂1, x̂2, x̂3) | 0 ≤ x̂1 < ∞, 0 ≤ x̂2 < 2π, −∞ < x̂3 < ∞}. Observe that the "box" R̂ includes some but possibly not all of its faces.

Equation (6.5) may be interpreted as a mapping of R̂ into E3; we shall assume that (x1, x2, x3) ranges over all of E3 as (x̂1, x̂2, x̂3) takes on all values in R̂. We assume further that the mapping (6.5) is one-to-one and sufficiently smooth in the interior of R̂, so that the inverse mapping

    x̂i = x̂i(x1, x2, x3)    (6.6)

exists and is appropriately smooth at all (x1, x2, x3) in the image of the interior of R̂.

Notation: As far as possible, we will consistently denote the fixed cartesian coordinate system and all components and quantities associated with it by symbols such as xi, ei, f(x1, x2, x3), vi(x1, x2, x3), Aij(x1, x2, x3) etc., and we shall consistently denote the local curvilinear coordinate system and all components and quantities associated with it by similar symbols with "hats" over them: x̂i, êi, f̂(x̂1, x̂2, x̂3), v̂i(x̂1, x̂2, x̂3), Âij(x̂1, x̂2, x̂3) etc.
Note that the mapping (6.5) might not be uniquely invertible on some of the faces of R̂, which are mapped into "singular" lines or surfaces in E3. (For example, in the case of circular cylindrical coordinates, x̂1 = r = 0 is a singular surface; see Section 6.1.) Points that are not on a singular line or surface will be referred to as "regular points" of E3.

The Jacobian matrix [J] of the mapping (6.5) has elements

    Jij = ∂xi/∂x̂j,    (6.7)

and by the assumed smoothness and one-to-oneness of the mapping, the Jacobian determinant does not vanish on the interior of R̂. Without loss of generality we can therefore take it to be positive:

    det[J] = (1/6) eijk epqr (∂xi/∂x̂p)(∂xj/∂x̂q)(∂xk/∂x̂r) > 0.    (6.8)

The Jacobian matrix of the inverse mapping (6.6) is [J]⁻¹.

The coordinate surface x̂i = constant is defined by x̂i(x1, x2, x3) = x̂iᵒ = constant; the pairwise intersections of these surfaces are the corresponding coordinate lines, along which only one of the curvilinear coordinates varies. Thus every regular point of E3 is the point of intersection of a unique triplet of coordinate surfaces and coordinate lines, as is illustrated in Figure 6.2.

Recall that the tangent vector along an arbitrary regular curve

    Γ:  x = x(t),  (α ≤ t ≤ β),    (6.9)

can be taken to be¹ ẋ(t) = ẋi(t) ei; it is oriented in the direction of increasing t. Thus in the case of the special curve Γ1: x = x(x̂1, c2, c3), x̂1 ∈ L1, c2 = constant, c3 = constant, corresponding to a x̂1-coordinate line, the tangent vector can be taken to be ∂x/∂x̂1.    (6.10)

¹Here and in the sequel a superior dot indicates differentiation with respect to the parameter t.
Figure 6.2: Orthogonal curvilinear coordinates (x̂1, x̂2, x̂3) and the associated local orthonormal basis vectors {ê1, ê2, ê3}. Here êi is the unit tangent vector along the x̂i-coordinate line, the sense of êi being determined by the direction in which x̂i increases. The proper orthogonal matrix [Q], Qij = êi · ej, characterizes the rotational transformation relating this basis to the rectangular cartesian basis {e1, e2, e3}.

Since our discussion is limited to orthogonal curvilinear coordinate systems, we must require for i ≠ j:

    êi · êj = 0  or  (∂x/∂x̂i) · (∂x/∂x̂j) = 0  or  (∂xk/∂x̂i)(∂xk/∂x̂j) = 0.    (6.11)

6.2.2 Metric coefficients, scale moduli.

Consider again the arbitrary regular curve Γ parameterized by (6.9). If s(t) is the arc length of Γ, measured from an arbitrary fixed point on Γ, one has

    ṡ(t) = |ẋ(t)| = √(ẋ(t) · ẋ(t)).    (6.12)

One concludes from (6.5) and the chain rule that

    (ds/dt)² = (dxk/dt)(dxk/dt) = (∂xk/∂x̂i)(∂xk/∂x̂j)(dx̂i/dt)(dx̂j/dt) = gij (dx̂i/dt)(dx̂j/dt),

where x̂i(t) = x̂i(x1(t), x2(t), x3(t)) along Γ, or

    (ds)² = gij dx̂i dx̂j,    (6.13)

in which the gij are the metric coefficients of the curvilinear coordinate system under consideration. They are defined by

    gij = (∂xk/∂x̂i)(∂xk/∂x̂j) = (∂x/∂x̂i) · (∂x/∂x̂j).    (6.14)

Note that

    gij = 0,  (i ≠ j),    (6.15)

as a consequence of the orthogonality condition (6.11). Because of (6.15) the metric coefficients can be written as²

    gij = hi hj δij,    (6.16)

where the scale moduli hi are defined by³

    hi = √g_ii = √[(∂x1/∂x̂i)² + (∂x2/∂x̂i)² + (∂x3/∂x̂i)²] > 0,    (6.17)

noting that hi = 0 is precluded by (6.8). The matrix of metric coefficients is therefore

    [g] = diag(h1², h2², h3²).    (6.18)

Observe that in terms of the Jacobian matrix [J] defined earlier in (6.7) we can write gij = Jki Jkj, or equivalently [g] = [J]ᵀ[J]. From (6.13) and (6.16) follows

    (ds)² = (h1 dx̂1)² + (h2 dx̂2)² + (h3 dx̂3)².    (6.19)

²Here and henceforth the underlining of one of two or more repeated indices indicates suspended summation with respect to this index.
³Some authors, such as Love, define hi as 1/√g_ii instead of as √g_ii.
Equation (6.19) reveals the geometric significance of the scale moduli:

    hi = ds/dx̂i along the x̂i-coordinate lines.    (6.20)

It follows from (6.10) and (6.17) that the unit vector êi can be expressed as

    êi = (1/hi) ∂x/∂x̂i,    (6.21)

and therefore the proper orthogonal matrix [Q] relating the two sets of basis vectors is given by

    Qij = êi · ej = (1/hi) ∂xj/∂x̂i.    (6.22)

6.2.3 Inverse partial derivatives

In view of (6.6) one has the identity xi = xi(x̂1(x1, x2, x3), x̂2(x1, x2, x3), x̂3(x1, x2, x3)), so that from the chain rule

    (∂xi/∂x̂k)(∂x̂k/∂xj) = δij.

Multiply this by ∂xi/∂x̂m, noting the implied contraction on the index i, and use (6.14) and (6.16) to confirm that

    g_mk (∂x̂k/∂xj) = h_m² (∂x̂m/∂xj) = ∂xj/∂x̂m.

Thus the inverse partial derivatives are given by

    ∂x̂i/∂xj = (1/hi²) ∂xj/∂x̂i.    (6.23)

Moreover, by (6.22) and (6.23), the elements of the matrix [Q] that relates the two coordinate systems can be written in the alternative form

    Qij = êi · ej = hi (∂x̂i/∂xj).    (6.24)
6.2.4 Components of ∂êi/∂x̂j in the local basis {ê1, ê2, ê3}

In order to calculate the derivatives of various field quantities it is clear that we will need to calculate the quantities ∂êi/∂x̂j, and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form êk · (∂êi/∂x̂j). Calculating these quantities is an essential prerequisite for the transformation of basic tensor-analytic relations into arbitrary orthogonal curvilinear coordinates, and this subsection is devoted to this calculation.

From (6.14) and (6.16),

    (∂x/∂x̂i) · (∂x/∂x̂j) = gij = δij hi hj.    (6.25)

We begin by differentiating (6.21) with respect to x̂j:

    ∂êi/∂x̂j = −(1/hi²)(∂hi/∂x̂j)(∂x/∂x̂i) + (1/hi)(∂²x/∂x̂i∂x̂j).    (6.26)

Therefore, by (6.21) and (6.25),

    (∂êi/∂x̂j) · êk = −(δik/hi)(∂hi/∂x̂j) + (1/(hi hk))(∂²x/∂x̂i∂x̂j) · (∂x/∂x̂k).    (6.27)

In order to express the second-derivative term in (6.27) in terms of the scale moduli and their first partial derivatives, we differentiate (6.25) with respect to x̂k:

    (∂²x/∂x̂i∂x̂k) · (∂x/∂x̂j) + (∂x/∂x̂i) · (∂²x/∂x̂j∂x̂k) = δij ∂(hi hj)/∂x̂k.

If we refer to this as (a), and let (b) and (c) be the identities resulting from it when (i, j, k) are replaced by (j, k, i) and (k, i, j) respectively, then ½{(b) + (c) − (a)} is readily found to yield

    (∂²x/∂x̂i∂x̂j) · (∂x/∂x̂k) = ½[ δjk ∂(hj hk)/∂x̂i + δki ∂(hk hi)/∂x̂j − δij ∂(hi hj)/∂x̂k ].    (6.28)

Substituting (6.28) into (6.27) leads to

    (∂êi/∂x̂j) · êk = −(δik/hi)(∂hi/∂x̂j) + (1/(2 hi hk))[ δjk ∂(hj hk)/∂x̂i + δki ∂(hk hi)/∂x̂j − δij ∂(hi hj)/∂x̂k ].    (6.29)

Equation (6.29) provides the explicit expressions for the terms (∂êi/∂x̂j) · êk that we sought.
Observe the following properties that follow from it:

    (∂êi/∂x̂j) · êk = 0  if i, j, k are distinct,
    (∂êi/∂x̂j) · êk = 0  if k = i,
    (∂êi/∂x̂i) · êk = −(1/hk)(∂hi/∂x̂k)  if i ≠ k,
    (∂êi/∂x̂k) · êk = (1/hi)(∂hk/∂x̂i)  if i ≠ k.    (6.30)
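The properties (6.30) can be tested on a concrete coordinate system. The sketch below (an added check, using the spherical coordinates that appear in Section 6.4) constructs the unit vectors êi and scale moduli hi directly from the coordinate mapping and verifies the fourth of (6.30) for i = r, k = θ; the other cases can be checked the same way:

    import sympy as sp

    r, th, ph = sp.symbols('r theta phi', positive=True)
    q = (r, th, ph)
    x = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
                   r * sp.sin(th) * sp.sin(ph),
                   r * sp.cos(th)])

    t = [x.diff(s) for s in q]                        # tangent vectors dx/dx̂_i
    h = [sp.sqrt(sp.trigsimp(v.dot(v))) for v in t]   # scale moduli h_i
    e = [sp.simplify(t[i] / h[i]) for i in range(3)]  # unit vectors ê_i

    i, k = 0, 1                                       # i = r, k = theta
    lhs = sp.simplify(e[i].diff(q[k]).dot(e[k]))      # (∂ê_i/∂x̂_k)·ê_k
    rhs = sp.simplify(sp.diff(h[k], q[i]) / h[i])     # (1/h_i) ∂h_k/∂x̂_i
    assert sp.simplify(lhs - rhs) == 0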
6.3 Transformation of Basic Tensor Relations

Let T be a cartesian tensor field of order N ≥ 1, defined on a region R ⊂ E3, and suppose that the points of R are regular points of E3 with respect to a given orthogonal curvilinear coordinate system. The curvilinear components T̂ijk...n of T are the components of T in the local basis {ê1, ê2, ê3}. Thus

    T̂ij...n = Qip Qjq ... Qnr Tpq...r,  where  Qip = êi · ep.    (6.31)
6.3.1 Gradient of a scalar field

Let φ(x) be a scalar-valued function and let v(x) denote its gradient:

    v = ∇φ  or equivalently  vi = φ,i.

The components of v in the two bases {e1, e2, e3} and {ê1, ê2, ê3} are related in the usual way by v̂k = Qki vi, and so

    v̂k = Qki vi = Qki ∂φ/∂xi.

On using (6.22) this leads to

    v̂k = (1/hk)(∂xi/∂x̂k)(∂φ/∂xi),

so that by the chain rule

    v̂k = (1/hk) ∂φ̂/∂x̂k,    (6.32)

where we have set φ̂(x̂1, x̂2, x̂3) = φ(x1(x̂1, x̂2, x̂3), x2(x̂1, x̂2, x̂3), x3(x̂1, x̂2, x̂3)).
6.3.2 Gradient of a vector field

Let v(x) be a vector-valued function and let W(x) denote its gradient:

    W = ∇v  or equivalently  Wij = vi,j.

The components of W and v in the two bases {e1, e2, e3} and {ê1, ê2, ê3} are related in the usual way by Ŵij = Qip Qjq Wpq and vp = Qnp v̂n, and therefore

    Ŵij = Qip Qjq ∂vp/∂xq = Qip Qjq ∂(Qnp v̂n)/∂xq = Qip Qjq (∂(Qnp v̂n)/∂x̂m)(∂x̂m/∂xq).

Thus by (6.24)⁴

    Ŵij = Qip Qjq Σ_{m=1}^{3} (1/hm) Qmq ∂(Qnp v̂n)/∂x̂m.

Since Qjq Qmq = δmj, this simplifies to

    Ŵij = Qip (1/hj) ∂(Qnp v̂n)/∂x̂j,

which, on expanding the terms in parentheses, yields

    Ŵij = (1/hj)[ ∂v̂i/∂x̂j + Qip (∂Qnp/∂x̂j) v̂n ].

However, by (6.22),

    Qip (∂Qnp/∂x̂j) v̂n = Qip [∂(ên · ep)/∂x̂j] v̂n = êi · (∂ên/∂x̂j) v̂n,

and so

    Ŵij = (1/hj)[ ∂v̂i/∂x̂j + êi · (∂ên/∂x̂j) v̂n ],    (6.33)

in which the coefficient in the brackets is given by (6.29).

⁴We explicitly use the summation sign in this equation (and elsewhere) when an index is repeated 3 or more times and we wish to sum over it.
6.3.3 Divergence of a vector field

Let v(x) be a vector-valued function and let W(x) = ∇v(x) denote its gradient. Then

    div v = trace W = vi,i.

Therefore from (6.33), the invariance of the trace of W, and (6.30),

    div v = trace W = trace Ŵ = Σᵢ Ŵii = Σᵢ (1/hi)[ ∂v̂i/∂x̂i + Σ_{n≠i} (1/hn)(∂hi/∂x̂n) v̂n ].

Collecting terms involving v̂1, v̂2 and v̂3 alone, one has

    div v = (1/h1) ∂v̂1/∂x̂1 + v̂1/(h2 h1) ∂h2/∂x̂1 + v̂1/(h3 h1) ∂h3/∂x̂1 + ... + ...

Thus

    div v = 1/(h1 h2 h3) [ ∂(h2 h3 v̂1)/∂x̂1 + ∂(h3 h1 v̂2)/∂x̂2 + ∂(h1 h2 v̂3)/∂x̂3 ].    (6.34)
6.3.4 Laplacian of a scalar field

Let φ(x) be a scalar-valued function. Since

    ∇²φ = div(grad φ) = φ,kk,

the results from Subsections 6.3.1 and 6.3.3 permit us to infer that

    ∇²φ = 1/(h1 h2 h3) [ ∂/∂x̂1( (h2 h3/h1) ∂φ̂/∂x̂1 ) + ∂/∂x̂2( (h3 h1/h2) ∂φ̂/∂x̂2 ) + ∂/∂x̂3( (h1 h2/h3) ∂φ̂/∂x̂3 ) ],    (6.35)

where we have set φ̂(x̂1, x̂2, x̂3) = φ(x1(x̂1, x̂2, x̂3), x2(x̂1, x̂2, x̂3), x3(x̂1, x̂2, x̂3)).
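Formula (6.35) is readily tested against the cartesian Laplacian. The sketch below (an added check) does this for cylindrical coordinates, where h1 = 1, h2 = r, h3 = 1, using an arbitrarily chosen sample field:

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
    rr, tt, zz = sp.symbols('rr tt zz', positive=True)

    f_hat = rr**2 * sp.cos(2 * tt) + zz**2 * rr      # sample field in cylindrical form
    lap_cyl = (sp.diff(f_hat, rr, 2) + sp.diff(f_hat, rr) / rr
               + sp.diff(f_hat, tt, 2) / rr**2 + sp.diff(f_hat, zz, 2))

    # push the field and the result of (6.35) back to cartesian coordinates
    to_cart = {rr: sp.sqrt(x1**2 + x2**2), tt: sp.atan2(x2, x1), zz: x3}
    f = sp.expand_trig(f_hat).subs(to_cart)
    lap_cart = sum(sp.diff(f, s, 2) for s in (x1, x2, x3))
    assert sp.simplify(lap_cart - sp.expand_trig(lap_cyl).subs(to_cart)) == 0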
6.3.5 Curl of a vector field

Let v(x) be a vector-valued field and let w(x) be its curl, so that

    w = curl v  or equivalently  wi = eijk vk,j.

Let W = ∇v, or equivalently Wij = vi,j. Then, as we have shown in an earlier chapter,

    wi = eijk Wkj,  ŵi = eijk Ŵkj.

Consequently, from Subsection 6.3.2,

    ŵi = Σⱼ (1/hj) eijk [ ∂v̂k/∂x̂j + êk · (∂ên/∂x̂j) v̂n ].

By (6.30), the second term within the brackets sums out to zero unless n = j. Thus, using (6.30), one arrives at

    ŵi = Σ_{j,k} (1/hj) eijk [ ∂v̂k/∂x̂j − (1/hk)(∂hj/∂x̂k) v̂j ].

This yields

    ŵ1 = 1/(h2 h3) [ ∂(h3 v̂3)/∂x̂2 − ∂(h2 v̂2)/∂x̂3 ],
    ŵ2 = 1/(h3 h1) [ ∂(h1 v̂1)/∂x̂3 − ∂(h3 v̂3)/∂x̂1 ],
    ŵ3 = 1/(h1 h2) [ ∂(h2 v̂2)/∂x̂1 − ∂(h1 v̂1)/∂x̂2 ].    (6.36)
6.3.6 Divergence of a symmetric 2-tensor field

Let S(x) be a symmetric 2-tensor field and let v(x) denote its divergence:

    v = div S,  S = Sᵀ,  or equivalently  vi = Sij,j,  Sij = Sji.

The components of v and S in the two bases {e1, e2, e3} and {ê1, ê2, ê3} are related in the usual way by v̂i = Qip vp and Sij = Qmi Qnj Ŝmn, and consequently

    v̂i = Qip Spj,j = Qip ∂(Qmp Qnj Ŝmn)/∂xj = Qip (∂x̂k/∂xj) ∂(Qmp Qnj Ŝmn)/∂x̂k.

By using (6.24), the orthogonality of the matrix [Q], (6.30) and Ŝij = Ŝji, we obtain

    v̂1 = 1/(h1 h2 h3)[ ∂(h2 h3 Ŝ11)/∂x̂1 + ∂(h3 h1 Ŝ12)/∂x̂2 + ∂(h1 h2 Ŝ13)/∂x̂3 ]
         + Ŝ12/(h1 h2) ∂h1/∂x̂2 + Ŝ13/(h1 h3) ∂h1/∂x̂3 − Ŝ22/(h1 h2) ∂h2/∂x̂1 − Ŝ33/(h1 h3) ∂h3/∂x̂1,    (6.37)

with analogous expressions for v̂2 and v̂3.
Equations (6.32)–(6.37) provide the fundamental expressions for the basic tensor-analytic quantities that we will need. Observe that they reduce to their classical rectangular cartesian forms in the special case x̂i = xi (in which case h1 = h2 = h3 = 1).
6.3.7 Differential elements of volume

When evaluating a volume integral over a region D, we sometimes find it convenient to transform it from the form

    ∫_D ... dx1 dx2 dx3

into an equivalent expression of the form

    ∫_D̂ ... dx̂1 dx̂2 dx̂3.

In order to do this we must relate dx1 dx2 dx3 to dx̂1 dx̂2 dx̂3. By (6.22),

    det[Q] = (1/(h1 h2 h3)) det[J].

However, since [Q] is a proper orthogonal matrix, its determinant takes the value +1. Therefore det[J] = h1 h2 h3, and so the basic relation dx1 dx2 dx3 = det[J] dx̂1 dx̂2 dx̂3 leads to

    dx1 dx2 dx3 = h1 h2 h3 dx̂1 dx̂2 dx̂3.    (6.38)
6.3.8 Differential elements of area

Let dÂ1 denote a differential element of (vector) area on a x̂1-coordinate surface, so that dÂ1 = (dx̂2 ∂x/∂x̂2) × (dx̂3 ∂x/∂x̂3). In view of (6.21) this leads to dÂ1 = (dx̂2 h2 ê2) × (dx̂3 h3 ê3) = h2 h3 dx̂2 dx̂3 ê1. Thus the differential elements of (scalar) area on the x̂1-, x̂2- and x̂3-coordinate surfaces are given by

    dÂ1 = h2 h3 dx̂2 dx̂3,  dÂ2 = h3 h1 dx̂3 dx̂1,  dÂ3 = h1 h2 dx̂1 dx̂2,    (6.39)

respectively.
6.4 Some Examples of Orthogonal Curvilinear Coordinate Systems

Circular cylindrical coordinates (r, θ, z):

    x1 = r cos θ,  x2 = r sin θ,  x3 = z,  for all (r, θ, z) ∈ [0, ∞) × [0, 2π) × (−∞, ∞);
    hr = 1,  hθ = r,  hz = 1.    (6.40)

Spherical coordinates (r, θ, φ):

    x1 = r sin θ cos φ,  x2 = r sin θ sin φ,  x3 = r cos θ,  for all (r, θ, φ) ∈ [0, ∞) × [0, π] × [0, 2π);
    hr = 1,  hθ = r,  hφ = r sin θ.    (6.41)

Elliptical cylindrical coordinates (ξ, η, z):

    x1 = a cosh ξ cos η,  x2 = a sinh ξ sin η,  x3 = z,  for all (ξ, η, z) ∈ [0, ∞) × (−π, π] × (−∞, ∞);
    hξ = hη = a √(sinh²ξ + sin²η),  hz = 1.    (6.42)

Parabolic cylindrical coordinates (u, v, w):

    x1 = ½(u² − v²),  x2 = uv,  x3 = w,  for all (u, v, w) ∈ (−∞, ∞) × [0, ∞) × (−∞, ∞);
    hu = hv = √(u² + v²),  hw = 1.    (6.43)
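The scale moduli quoted above follow directly from definition (6.17). As an illustration (an addition to the notes), the following sketch verifies hξ = hη = a √(sinh²ξ + sin²η) for the elliptical cylindrical coordinates (6.42):

    import sympy as sp

    a, xi, eta, z = sp.symbols('a xi eta z', positive=True)
    x = sp.Matrix([a * sp.cosh(xi) * sp.cos(eta),
                   a * sp.sinh(xi) * sp.sin(eta),
                   z])

    h_sq = a**2 * (sp.sinh(xi)**2 + sp.sin(eta)**2)   # expected value of h_xi^2 = h_eta^2
    assert sp.simplify(x.diff(xi).dot(x.diff(xi)) - h_sq) == 0
    assert sp.simplify(x.diff(eta).dot(x.diff(eta)) - h_sq) == 0
    assert x.diff(z).dot(x.diff(z)) == 1              # h_z = 1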
6.5 Worked Examples.

Example 6.1: Let E(x) be a symmetric 2-tensor field that is related to a vector field u(x) through

    E = ½(∇u + ∇uᵀ).

In a cartesian coordinate system this can be written equivalently as

    Eij = ½( ∂ui/∂xj + ∂uj/∂xi ).

Establish the analogous formulas in a general orthogonal curvilinear coordinate system.

Solution: Using the result from Subsection 6.3.2 and the formulas (6.29) for êk · (∂êi/∂x̂j), one finds after elementary simplification that

    Ê11 = (1/h1) ∂û1/∂x̂1 + û2/(h1 h2) ∂h1/∂x̂2 + û3/(h1 h3) ∂h1/∂x̂3,
    Ê12 = Ê21 = ½[ (h1/h2)(∂/∂x̂2)(û1/h1) + (h2/h1)(∂/∂x̂1)(û2/h2) ],

with the remaining components following by cyclic permutation of the indices (1, 2, 3).

Example 6.2: Consider a symmetric 2-tensor field S(x) and a vector field b(x) that satisfy the equation

    div S + b = o.

In a cartesian coordinate system this can be written equivalently as

    ∂Sij/∂xj + bi = 0.

Establish the analogous formulas in a general orthogonal curvilinear coordinate system.

Solution: Using the result from Subsection 6.3.6 we have

    1/(h1 h2 h3)[ ∂(h2 h3 Ŝ11)/∂x̂1 + ∂(h3 h1 Ŝ12)/∂x̂2 + ∂(h1 h2 Ŝ13)/∂x̂3 ]
    + Ŝ12/(h1 h2) ∂h1/∂x̂2 + Ŝ13/(h1 h3) ∂h1/∂x̂3 − Ŝ22/(h1 h2) ∂h2/∂x̂1 − Ŝ33/(h1 h3) ∂h3/∂x̂1 + b̂1 = 0,    (i)

where b̂i = Qip bp, with analogous expressions for the other two component equations.

Example 6.3: Consider circular cylindrical coordinates (x̂1, x̂2, x̂3) = (r, θ, z) which are related to (x1, x2, x3) through

    x1 = r cos θ,  x2 = r sin θ,  x3 = z,  0 ≤ r < ∞, 0 ≤ θ < 2π, −∞ < z < ∞.

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express the following quantities, (a) grad f, (b) ∇²f, (c) div u, (d) curl u, (e) ½(∇u + ∇uᵀ) and (f) div S, in this coordinate system.
Solution: We simply need to specialize the basic results established in Section 6.3. In the present case we have (x̂1, x̂2, x̂3) = (r, θ, z) and the coordinate mapping (6.5) takes the particular form

    x1 = r cos θ,  x2 = r sin θ,  x3 = z.    (i)

Figure 6.3: Cylindrical coordinates (r, θ, z) and the associated local orthonormal basis {er, eθ, ez}.

The matrix [∂xi/∂x̂j] therefore specializes to

    [ ∂x1/∂r  ∂x1/∂θ  ∂x1/∂z ]   [ cos θ   −r sin θ   0 ]
    [ ∂x2/∂r  ∂x2/∂θ  ∂x2/∂z ] = [ sin θ    r cos θ   0 ]    (ii)
    [ ∂x3/∂r  ∂x3/∂θ  ∂x3/∂z ]   [ 0        0         1 ]
and the scale moduli are

    hr = √[(∂x1/∂r)² + (∂x2/∂r)² + (∂x3/∂r)²] = 1,
    hθ = √[(∂x1/∂θ)² + (∂x2/∂θ)² + (∂x3/∂θ)²] = r,
    hz = √[(∂x1/∂z)² + (∂x2/∂z)² + (∂x3/∂z)²] = 1.    (iii)

In terms of cylindrical coordinates the position vector x is

    x = (r cos θ) e1 + (r sin θ) e2 + z e3,    (iv)

and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local cylindrical coordinate system:

    er = cos θ e1 + sin θ e2,  eθ = −sin θ e1 + cos θ e2,  ez = e3,    (v)

which, in this case, could also have been obtained geometrically from Figure 6.3. We use the natural notation (ur, uθ, uz) = (û1, û2, û3) for the components of a vector field, (er, eθ, ez) = (ê1, ê2, ê3) for the unit vectors associated with the local cylindrical coordinate system, and (Srr, Srθ, ...) = (Ŝ11, Ŝ12, ...) for the components of a 2-tensor field.    (vi)

(a) Substituting (i) and (iii) into (6.32) gives

    grad f = (∂f̂/∂r) er + (1/r)(∂f̂/∂θ) eθ + (∂f̂/∂z) ez,

where we have set f̂(r, θ, z) = f(x1, x2, x3).

(b) Substituting (i) and (iii) into (6.35) gives

    ∇²f = ∂²f̂/∂r² + (1/r) ∂f̂/∂r + (1/r²) ∂²f̂/∂θ² + ∂²f̂/∂z²,

where again f̂(r, θ, z) = f(x1, x2, x3).
(c) Substituting (i) and (iii) into (6.34) gives

    div u = ∂ur/∂r + ur/r + (1/r) ∂uθ/∂θ + ∂uz/∂z.

(d) Substituting (i) and (iii) into (6.36) gives

    curl u = [ (1/r) ∂uz/∂θ − ∂uθ/∂z ] er + [ ∂ur/∂z − ∂uz/∂r ] eθ + [ ∂uθ/∂r + uθ/r − (1/r) ∂ur/∂θ ] ez.

(e) Set E = ½(∇u + ∇uᵀ). Substituting (i) and (iii) into (6.33) enables us to calculate ∇u, whence we can calculate E. Writing the cylindrical components Êij of E as (Err, Erθ, Erz, ...) = (Ê11, Ê12, Ê13, ...), one finds

    Err = ∂ur/∂r,
    Eθθ = (1/r) ∂uθ/∂θ + ur/r,
    Ezz = ∂uz/∂z,
    Erθ = ½( (1/r) ∂ur/∂θ + ∂uθ/∂r − uθ/r ),
    Eθz = ½( ∂uθ/∂z + (1/r) ∂uz/∂θ ),
    Ezr = ½( ∂ur/∂z + ∂uz/∂r ).

Alternatively these could have been obtained from the results of Example 6.1.

(f) Finally, substituting (i) and (iii) into (6.37) gives

    div S = [ ∂Srr/∂r + (1/r) ∂Srθ/∂θ + ∂Srz/∂z + (Srr − Sθθ)/r ] er
          + [ ∂Srθ/∂r + (1/r) ∂Sθθ/∂θ + ∂Sθz/∂z + 2Srθ/r ] eθ
          + [ ∂Szr/∂r + (1/r) ∂Szθ/∂θ + ∂Szz/∂z + Szr/r ] ez.

Alternatively these could have been obtained from the results of Example 6.2.
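As a consistency check on these cylindrical formulas (an addition to the notes), the sketch below compares the divergence computed from (c) with the cartesian divergence of the same vector field, for arbitrarily chosen physical components (ur, uθ, uz):

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
    rr, tt, zz = sp.symbols('rr tt zz', positive=True)

    ur, ut, uz = rr * sp.cos(tt), rr**2, zz * rr       # sample physical components
    div_cyl = sp.diff(ur, rr) + ur / rr + sp.diff(ut, tt) / rr + sp.diff(uz, zz)

    to_cart = {rr: sp.sqrt(x1**2 + x2**2), tt: sp.atan2(x2, x1), zz: x3}
    R = sp.sqrt(x1**2 + x2**2)
    er = sp.Matrix([x1, x2, 0]) / R                    # cartesian forms of e_r, e_theta, e_z
    et = sp.Matrix([-x2, x1, 0]) / R
    ez = sp.Matrix([0, 0, 1])
    u = (sp.expand_trig(ur).subs(to_cart) * er
         + sp.expand_trig(ut).subs(to_cart) * et
         + sp.expand_trig(uz).subs(to_cart) * ez)
    div_cart = sum(sp.diff(u[i], s) for i, s in enumerate((x1, x2, x3)))
    assert sp.simplify(div_cart - sp.expand_trig(div_cyl).subs(to_cart)) == 0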
Example 6.4: Consider spherical coordinates (x̂1, x̂2, x̂3) = (r, θ, φ) which are related to (x1, x2, x3) through

    x1 = r sin θ cos φ,  x2 = r sin θ sin φ,  x3 = r cos θ,  0 ≤ r < ∞, 0 ≤ θ ≤ π, 0 ≤ φ < 2π.

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express the following quantities, (a) grad f, (b) ∇²f, (c) div u, (d) curl u, (e) ½(∇u + ∇uᵀ) and (f) div S, in this coordinate system.

Figure 6.4: Spherical coordinates (r, θ, φ) and the associated local curvilinear orthonormal basis {er, eθ, eφ}.

Solution: We simply need to specialize the basic results established in Section 6.3. In the present case we have (x̂1, x̂2, x̂3) = (r, θ, φ) and the coordinate mapping (6.5) takes the particular form

    x1 = r sin θ cos φ,  x2 = r sin θ sin φ,  x3 = r cos θ.    (i)

The matrix [∂xi/∂x̂j] therefore specializes to

    [ sin θ cos φ   r cos θ cos φ   −r sin θ sin φ ]
    [ sin θ sin φ   r cos θ sin φ    r sin θ cos φ ]    (ii)
    [ cos θ        −r sin θ           0            ]

and the scale moduli are

    hr = 1,  hθ = r,  hφ = r sin θ.    (iii)

In terms of spherical coordinates the position vector x is

    x = (r sin θ cos φ) e1 + (r sin θ sin φ) e2 + (r cos θ) e3,    (iv)

and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local spherical coordinate system:

    er = (sin θ cos φ) e1 + (sin θ sin φ) e2 + cos θ e3,
    eθ = (cos θ cos φ) e1 + (cos θ sin φ) e2 − sin θ e3,
    eφ = −sin φ e1 + cos φ e2,    (v)

which, in this case, could also have been obtained geometrically from Figure 6.4. We use the natural notation (ur, uθ, uφ) = (û1, û2, û3) for the components of a vector field, (er, eθ, eφ) = (ê1, ê2, ê3) for the unit vectors associated with the local spherical coordinate system, and (Srr, Srθ, Srφ, ...) = (Ŝ11, Ŝ12, Ŝ13, ...) for the components of a 2-tensor field.    (vi)

(a) Substituting (i) and (iii) into (6.32) gives

    grad f = (∂f̂/∂r) er + (1/r)(∂f̂/∂θ) eθ + (1/(r sin θ))(∂f̂/∂φ) eφ,

where we have set f̂(r, θ, φ) = f(x1, x2, x3).

(b) Substituting (i) and (iii) into (6.35) gives

    ∇²f = ∂²f̂/∂r² + (2/r) ∂f̂/∂r + (1/r²) ∂²f̂/∂θ² + (cot θ/r²) ∂f̂/∂θ + (1/(r² sin²θ)) ∂²f̂/∂φ²,

where again f̂(r, θ, φ) = f(x1, x2, x3).
(c) Substituting (i) and (iii) into (6.34) gives

    div u = 1/(r² sin θ) [ ∂(r² sin θ ur)/∂r + ∂(r sin θ uθ)/∂θ + ∂(r uφ)/∂φ ].

(d) Substituting (i) and (iii) into (6.36) gives

    curl u = 1/(r² sin θ)[ ∂(r sin θ uφ)/∂θ − ∂(r uθ)/∂φ ] er
           + 1/(r sin θ)[ ∂ur/∂φ − (sin θ) ∂(r uφ)/∂r ] eθ
           + (1/r)[ ∂(r uθ)/∂r − ∂ur/∂θ ] eφ.

(e) Set E = ½(∇u + ∇uᵀ). We substitute (i) and (iii) into (6.33) to calculate ∇u, from which one can calculate E. Writing the spherical components Êij of E as (Err, Erθ, Erφ, ...) = (Ê11, Ê12, Ê13, ...), one finds

    Err = ∂ur/∂r,
    Eθθ = (1/r) ∂uθ/∂θ + ur/r,
    Eφφ = (1/(r sin θ)) ∂uφ/∂φ + ur/r + (cot θ/r) uθ,
    Erθ = ½( (1/r) ∂ur/∂θ + ∂uθ/∂r − uθ/r ),
    Eθφ = ½( (1/(r sin θ)) ∂uθ/∂φ + (1/r) ∂uφ/∂θ − (cot θ/r) uφ ),
    Eφr = ½( (1/(r sin θ)) ∂ur/∂φ + ∂uφ/∂r − uφ/r ).

Alternatively these could have been obtained from the results of Example 6.1.

(f) Finally, substituting (i) and (iii) into (6.37) gives

    div S = [ ∂Srr/∂r + (1/r) ∂Srθ/∂θ + (1/(r sin θ)) ∂Srφ/∂φ + (1/r)(2Srr − Sθθ − Sφφ + cot θ Srθ) ] er
          + [ ∂Srθ/∂r + (1/r) ∂Sθθ/∂θ + (1/(r sin θ)) ∂Sθφ/∂φ + (1/r)(3Srθ + cot θ (Sθθ − Sφφ)) ] eθ
          + [ ∂Srφ/∂r + (1/r) ∂Sθφ/∂θ + (1/(r sin θ)) ∂Sφφ/∂φ + (1/r)(3Srφ + 2 cot θ Sθφ) ] eφ.

Alternatively these could have been obtained from the results of Example 6.2.

Example 6.5: Show that the matrix [Q] defined by (6.22) is a proper orthogonal matrix.

Proof: From (6.22),

    Qij = (1/hi) ∂xj/∂x̂i = (1/hi) Jji,

where Jij = ∂xi/∂x̂j are the elements of the Jacobian matrix, and therefore

    Qik Qjk = (1/(hi hj)) (∂xk/∂x̂i)(∂xk/∂x̂j) = (1/(hi hj)) gij = δij,
where in the penultimate step we have used (6.14) and in the ultimate step we have used (6.16). Thus [Q] is an orthogonal matrix. Next, from (6.22) and (6.7),

    det[Q] = (1/(h1 h2 h3)) det[J] > 0,

where the inequality is a consequence of the inequalities in (6.8) and (6.17). Since the determinant of an orthogonal matrix is ±1, it follows that det[Q] = +1. Hence [Q] is proper orthogonal.

References
1. L.A. Segel, Mathematics Applied to Continuum Mechanics, Dover, New York, 1987.
2. E. Sternberg, (Unpublished) Lecture Notes for AM 135: Elasticity, California Institute of Technology, Pasadena, California, 1976.
3. H. Reismann and P.S. Pawlik, Elasticity: Theory and Applications, Wiley, New York, 1980.
Chapter 7

Calculus of Variations

7.1 Introduction.

Numerous problems in physics can be formulated as mathematical problems in optimization. For example in optics, Fermat's principle states that the path taken by a ray of light in propagating from one point to another is the path that minimizes the travel time. Most equilibrium theories of mechanics involve finding a configuration of a system that minimizes its energy. For example, a heavy cable that hangs under gravity between two fixed pegs adopts the shape that, from among all possible shapes, minimizes the gravitational potential energy of the system. If we dip a (nonplanar) wire loop into soapy water, the soap film that forms across the loop is the one that minimizes the surface energy (which under most circumstances equals minimizing the surface area of the soap film). Or, if we subject a straight beam to a compressive load, its deformed configuration is the shape which minimizes the total energy of the system; depending on the load, the energy minimizing configuration may be straight or bent (buckled). Another common problem occurs in geodesics where, given some surface and two points on it, we want to find the path of shortest distance joining those two points which lies entirely on the given surface.

In each of these problems we have a scalar-valued quantity F, such as energy or time, that depends on a function φ, such as the shape or path, and we want to find the function φ that minimizes the quantity F of interest. One refers to F as a functional and writes F{φ}. Note that the scalar-valued function F is defined on a set of functions.

As a specific example, consider the so-called Brachistochrone Problem. We are given two
points (0, 0) and (1, h), with h > 0, in the x, y-plane that are to be joined by a smooth wire. A bead is released from rest from the point (0, 0) and slides along the wire due to gravity. For what shape of wire is the time of travel from (0, 0) to (1, h) least?

Figure 7.1: Curve y = φ(x) joining (0, 0) to (1, h) along which a bead slides under gravity.

In order to formulate this problem, let y = φ(x), 0 ≤ x ≤ 1, describe a generic curve joining (0, 0) to (1, h). In the question posed to us, we are to find the curve, i.e. the function φ(x), which makes T a minimum, where T is the travel time of the bead:

    T = ∫ ds/v,

the integral being taken along the entire path. Let s(t) denote the distance traveled by the bead along the wire at time t, so that v(t) = ds/dt is its corresponding speed. Note first that, by elementary calculus, the arc length ds is related to dx by

    ds = √(dx² + dy²) = √(1 + (dy/dx)²) dx = √(1 + (φ′)²) dx,

and so we can write

    T = ∫₀¹ √(1 + (φ′)²) / v dx.

Next, since we are to minimize T by varying φ, we wish to express the speed v in terms of φ. If (x(t), y(t)) denote the coordinates of the bead at time t, the conservation of energy tells us that the sum of the potential and kinetic energies does not vary with time:

    −mgφ(x(t)) + ½ m v²(t) = 0,
where the right-hand side is the total energy at the initial instant. Solving this for v gives v = √(2gφ). Finally, substituting this back into the formula for the travel time gives

    T{φ} = ∫₀¹ √[ (1 + (φ′)²) / (2gφ) ] dx.    (7.1)

Given a curve characterized by y = φ(x), this formula gives the corresponding travel time for the bead.

This minimization takes place over a set of functions φ. In order to complete the formulation of the problem, we should carefully characterize this set of "admissible functions" (or "test functions"). Since we are only interested in curves that pass through the points (0, 0) and (1, h), we must require that φ(0) = 0, φ(1) = h. Moreover, for analytical reasons we only consider curves that are continuous and have a continuous slope, i.e. φ ∈ C¹[0, 1]. Thus the set A of admissible functions that we wish to consider is

    A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1], φ(0) = 0, φ(1) = h }.

Our task is to find, from among all such curves, the one that minimizes T{φ} over the set A.

Remark: Since the shortest distance between two points is given by the straight line that joins them, it is natural to wonder whether a straight line is also the curve that gives the minimum travel time. To investigate this, consider (a) a straight line and (b) a circular arc joining (0, 0) to (1, h). Use (7.1) to calculate the travel time for each of these paths and show that the straight line is not the path that gives the least travel time.

Remark: One can consider various variants of the Brachistochrone Problem. For example, the left-hand end of the wire might be prescribed as above, but the right-hand end might be allowed to lie anywhere on the vertical line through x = 1. Or the length of the curve joining the two points might be prescribed, in which case the minimization is to be carried out subject to the constraint that the length is given. Or there might be some prohibited region of the x, y-plane through which the path is disallowed from passing. And so on.

In summary, in the simplest problem in the calculus of variations we are required to find a function φ(x) ∈ C¹[0, 1] that minimizes a functional F{φ} of the form

    F{φ} = ∫₀¹ f(x, φ, φ′) dx    (7.2)

over an admissible set of test functions A. The test functions (or admissible functions) φ are subject to certain conditions, including smoothness requirements, possibly (but not necessarily) boundary conditions at both ends x = 0, 1, and possibly (but not necessarily) side constraints of various forms. Other types of problems will also be encountered in what follows.
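The comparison suggested in the first Remark above is easy to carry out numerically. The following sketch (an addition; it assumes Python with numpy and scipy, and takes h = 1, g = 9.81) evaluates (7.1) for the straight line and for one particular circular arc, the quarter circle centred at (1, 0):

    import numpy as np
    from scipy.integrate import quad

    g, h = 9.81, 1.0

    def travel_time(phi, dphi):
        # integrate (7.1); the substitution x = t**4 tames the integrable
        # singularity of the integrand at x = 0
        f = lambda t: 4.0 * t**3 * np.sqrt((1.0 + dphi(t**4)**2) / (2.0 * g * phi(t**4)))
        value, _ = quad(f, 0.0, 1.0)
        return value

    T_line = travel_time(lambda x: h * x, lambda x: h)             # (a) straight line
    T_arc = travel_time(lambda x: np.sqrt(2.0 * x - x * x),        # (b) circular arc
                        lambda x: (1.0 - x) / np.sqrt(2.0 * x - x * x))
    print(T_line, T_arc)   # approximately 0.639 s and 0.595 s

The arc starts out steeper, so the bead picks up speed sooner, and it indeed beats the straight line: the straight line is not the minimizer.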
7.2 Brief review of calculus.

Perhaps it is useful to begin by reviewing the familiar question of minimization in calculus. Consider a subset A of n-dimensional space Rⁿ and let F(x) = F(x1, x2, ..., xn) be a real-valued function defined on A. We say that x0 ∈ A is a minimizer of F if¹

    F(x) ≥ F(x0) for all x ∈ A.    (7.3)

Sometimes we are only interested in finding a "local minimizer", i.e. a point x0 that minimizes F relative to all x that are "close" to x0. In order to speak of such a notion we must have a measure of "closeness". Thus suppose that the vector space Rⁿ is Euclidean, so that a norm is defined on Rⁿ. Then we say that x0 is a local minimizer of F if F(x) ≥ F(x0) for all x in a neighborhood of x0, i.e. if F(x) ≥ F(x0) for all x such that |x − x0| < r, for some r > 0.

Define the function F̂(ε) for −ε0 < ε < ε0 by

    F̂(ε) = F(x0 + εn),    (7.4)

where n is a fixed vector and ε0 is small enough to ensure that x0 + εn ∈ A for all ε ∈ (−ε0, ε0). Since F(x0 + εn) ≥ F(x0) it follows that

    F̂(ε) ≥ F̂(0),    (7.5)

and therefore ε = 0 minimizes F̂(ε). In the presence of sufficient smoothness we can write

    F̂(ε) − F̂(0) = F̂′(0) ε + ½ F̂″(0) ε² + O(ε³).    (7.6)

Thus if x0 is to be a minimizer of F it is necessary that

    F̂′(0) = 0,  F̂″(0) ≥ 0.    (7.7)

¹A maximizer of F is a minimizer of −F, so we don't need to address maximizing separately from minimizing.
It is customary to use the following notation and terminology: we set

    δF(x0, n) = F̂′(0),    (7.8)

which is called the first variation of F, and similarly set

    δ²F(x0, n) = F̂″(0),    (7.9)

which is called the second variation of F. At an interior local minimizer x0 one necessarily must have

    δF(x0, n) = 0 and δ²F(x0, n) ≥ 0 for all unit vectors n.    (7.10)

In the present setting of calculus we find that δF(x0, n) = ∇F(x0) · n and δ²F(x0, n) = (∇∇F(x0))n · n; here the vector field ∇F is the gradient of F and the tensor field ∇∇F is the gradient of ∇F. Therefore (7.10) is equivalent to the requirements that

    ∇F(x0) · n = 0 and (∇∇F(x0))n · n ≥ 0 for all unit vectors n,    (7.11)

or, in components,

    Σᵢ (∂F/∂xᵢ)|_{x=x0} nᵢ = 0 and Σᵢ Σⱼ (∂²F/∂xᵢ∂xⱼ)|_{x=x0} nᵢ nⱼ ≥ 0,    (7.12)

whence we must have ∇F(x0) = o and the Hessian ∇∇F(x0) must be positive semi-definite.

Remark: It is worth recalling that a function need not have a minimizer. For example, the function F1(x) = x defined on A1 = (−∞, ∞) is unbounded as x → ±∞. Another example is given by the function F2(x) = x defined on A2 = (−1, 1): noting that F2 ≥ −1 on A2, the value of F2 can get as close as one wishes to −1, but it cannot actually achieve the value −1 since there is no x ∈ A2 at which F2(x) = −1; note that −1 ∉ A2. Finally, consider the function F3(x) defined on A3 = [−1, 1] by F3(x) = 1 for −1 ≤ x ≤ 0 and F3(x) = x for 0 < x ≤ 1: the value of F3 can get as close as one wishes to 0 but cannot achieve it, since F3(0) = 1. In the first example A1 was unbounded; in the second, A2 was bounded but open; and in the third, A3 was bounded and closed but the function was discontinuous on A3. Conditions of this kind must be excluded in order to guarantee a minimizer: it can be shown that if A is compact (i.e. bounded and closed) and if F is continuous on A, then F assumes both maximum and minimum values on A.
7.3 The basic idea: necessary conditions for a minimum: δF = 0, δ²F ≥ 0.

In the calculus of variations we are typically given a functional F defined on a function space A, where F : A → R, and we are asked to find a function φ0 ∈ A that minimizes F over A: i.e. to find φ0 ∈ A for which

    F{φ} ≥ F{φ0} for all φ ∈ A.

Most often, we will be looking for a local (or relative) minimizer, i.e. for a function φ0 that minimizes F relative to all "nearby functions". This requires that we select a norm so that the distance between two functions can be quantified. For a function φ in the set of functions that are continuous on an interval [x1, x2], i.e. for φ ∈ C[x1, x2], one can define a norm by

    ‖φ‖0 = max_{x1 ≤ x ≤ x2} |φ(x)|.

For a function φ in the set of functions that are continuous and have continuous first derivatives on [x1, x2], i.e. for φ ∈ C¹[x1, x2], one can define a norm by

    ‖φ‖1 = max_{x1 ≤ x ≤ x2} |φ(x)| + max_{x1 ≤ x ≤ x2} |φ′(x)|.

(Of course the norm ‖φ‖0 can also be used on C¹[x1, x2].)

Figure 7.2: Two functions φ1 and φ2 that are "close" in the sense of the norm ‖·‖0 but not in the sense of the norm ‖·‖1.
When seeking a local minimizer of a functional F we might say we want to find φ0 for which

    F{φ} ≥ F{φ0} for all admissible φ such that ‖φ − φ0‖0 < r,

for some r > 0. In this case the minimizer φ0 is being compared with all admissible functions φ whose values are close to those of φ0 for all x1 ≤ x ≤ x2. Such a local minimizer is called a strong minimizer. On the other hand, we might instead seek a φ0 for which

    F{φ} ≥ F{φ0} for all admissible φ such that ‖φ − φ0‖1 < r,

for some r > 0. In this case the minimizer is being compared with all functions whose values and whose first derivatives are close to those of φ0 for all x1 ≤ x ≤ x2. Such a local minimizer is called a weak minimizer. A strong minimizer is automatically a weak minimizer. Unless explicitly stated otherwise, in this Chapter we will be examining weak local extrema.

The approach for finding such extrema of a functional is essentially the same as that used in the more familiar case of calculus reviewed in the preceding subsection. Consider a functional F{φ} defined on a function space A and suppose that φ0 ∈ A minimizes F. In order to determine φ0 we consider the one-parameter family of admissible functions

    φ(x, ε) = φ0(x) + ε η(x)    (7.13)

that are close to φ0; here ε is a real variable in the range −ε0 < ε < ε0 and η(x) is a once continuously differentiable function. Since φ is to be admissible, we must have φ0 + εη ∈ A for each ε ∈ (−ε0, ε0). Define a function F̂(ε) by

    F̂(ε) = F{φ0 + εη}.    (7.14)

Since φ0 minimizes F it follows that F{φ0 + εη} ≥ F{φ0}, or equivalently F̂(ε) ≥ F̂(0). The first and second variations of F are defined by

    δF{φ0, η} = F̂′(0)  and  δ²F{φ0, η} = F̂″(0)

respectively, and so if φ0 minimizes F, then it is necessary that

    δF{φ0, η} = 0,  δ²F{φ0, η} ≥ 0.    (7.15)

These are necessary conditions on a minimizer φ0. We cannot go further in general. In any specific problem, such as those in the subsequent sections, the necessary condition δF{φ0, η} = 0 can be further simplified by exploiting the fact that it must hold for all admissible η. This allows one to eliminate η, leading to a condition (or conditions) that only involves the minimizer φ0.
Remark: Note that when η is independent of ε, the functions φ0(x) and φ0(x) + εη(x), and their derivatives, are close to each other for small ε. On the other hand, the functions φ0(x) and φ0(x) + ε sin(x/ε) are close to each other, but their derivatives are not close to each other. Throughout these notes we will consider functions η that are independent of ε and so, as noted previously, we will be restricting attention exclusively to weak minimizers.

7.4 Application of the necessary condition δF = 0 to the basic problem.

7.4.1 The basic problem. Euler equation.

Consider the following class of problems: let A be the set of all continuously differentiable functions φ(x) defined for 0 ≤ x ≤ 1 with φ(0) = a, φ(1) = b:

    A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1], φ(0) = a, φ(1) = b }.    (7.16)

Let f(x, y, z) be a given function, defined and smooth for all real x, y, z. Define a functional F{φ}, for every φ ∈ A, by

    F{φ} = ∫₀¹ f(x, φ(x), φ′(x)) dx.    (7.17)

We wish to find a function φ ∈ A which minimizes F{φ}.

Figure 7.3: The minimizer φ0 and a neighboring function φ0 + εη.
Suppose that φ0(x) ∈ A is a minimizer of F, so that F{φ} ≥ F{φ0} for all φ ∈ A. In order to determine φ0 we consider the one-parameter family of admissible functions φ(x, ε) = φ0(x) + ε η(x), where ε is a real variable in the range −ε0 < ε < ε0 and η(x) is a once continuously differentiable function on [0, 1]; see Figure 7.3. Since φ must be admissible we need φ0 + εη ∈ A for each ε. Therefore we must have φ(0, ε) = a and φ(1, ε) = b, which in turn requires that

    η(0) = η(1) = 0.                                                         (7.18)

Pick any function η(x) with the property (7.18) and fix it. Define the function F̂(ε) = F{φ0 + εη}, so that

    F̂(ε) = F{φ0 + εη} = ∫_0^1 f(x, φ0 + εη, φ0' + εη') dx.                   (7.19)

We know from the analysis of the preceding section that a necessary condition for φ0 to minimize F is that

    δF{φo, η} = F̂'(0) = 0.                                                   (7.20)

On using the chain rule to find F̂'(0) from (7.19), this leads to

    δF{φo, η} = F̂'(0) = ∫_0^1 [ (∂f/∂y)(x, φ0, φ0') η + (∂f/∂z)(x, φ0, φ0') η' ] dx = 0.   (7.21)

Thus far we have simply repeated the general analysis of the preceding section in the context of the particular functional (7.17).

Our goal is to find φ0 and so we must eliminate η from (7.21). To do this we rearrange the terms in (7.21) into a convenient form and exploit the fact that (7.21) must hold for all functions η that satisfy (7.18). In order to do this we proceed as follows. Integrating the second term in (7.21) by parts gives

    ∫_0^1 (∂f/∂z) η' dx = [ (∂f/∂z) η ]_{x=0}^{x=1} − ∫_0^1 d/dx(∂f/∂z) η dx.

However by (7.18) we have η(0) = η(1) = 0, and therefore the first term on the right-hand side drops out. Thus (7.21) reduces to

    ∫_0^1 [ ∂f/∂y − d/dx(∂f/∂z) ] η dx = 0.                                  (7.22)
Though we have viewed η as fixed up to this point, we recognize that the above derivation is valid for all once continuously differentiable functions η(x) which have η(0) = η(1) = 0. Therefore (7.22) must hold for all such functions.

Lemma: The following is a basic result from calculus. Let p(x) be a continuous function on [0, 1] and suppose that

    ∫_0^1 p(x) n(x) dx = 0

for all continuous functions n(x) with n(0) = n(1) = 0. Then p(x) = 0 for 0 ≤ x ≤ 1.

In view of this Lemma we conclude that the integrand of (7.22) must vanish, and therefore obtain the differential equation

    d/dx [ (∂f/∂z)(x, φ0, φ0') ] − (∂f/∂y)(x, φ0, φ0') = 0   for 0 ≤ x ≤ 1,  (7.23)

which together with the boundary conditions

    φ0(0) = a,   φ0(1) = b                                                   (7.24)

provides the mathematical problem governing the minimizer φ0(x). The differential equation (7.23) is referred to as the Euler equation (sometimes referred to as the Euler-Lagrange equation) associated with the functional (7.17).

Notation: In order to avoid the (precise though) cumbersome notation above, we shall drop the subscript "0" from the minimizing function φ0; moreover, we shall write the Euler equation (7.23) as

    d/dx ( ∂f/∂φ' ) − ∂f/∂φ = 0,                                             (7.25)

where, in carrying out the partial differentiation in (7.25), one treats x, φ and φ' as if they were independent variables.
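The partial-differentiation rule in (7.25) is mechanical enough to hand to a computer algebra system. Below is a small sketch using sympy (our illustration; the helper name euler_equation is ours): f is written in terms of independent symbols y ~ φ and z ~ φ', and the Euler equation is assembled exactly as in (7.25).

```python
import sympy as sp

x = sp.Symbol('x')
phi = sp.Function('phi')
y, z = sp.symbols('y z')

def euler_equation(f_expr):
    """Form (7.25) for f_expr = f(x, y, z), where y ~ phi and z ~ phi'
    are treated as independent symbols during the partial differentiation."""
    fy = sp.diff(f_expr, y)                         # corresponds to df/dphi
    fz = sp.diff(f_expr, z)                         # corresponds to df/dphi'
    repl = {y: phi(x), z: sp.Derivative(phi(x), x)}
    # total x-derivative of df/dphi' minus df/dphi:
    return sp.Eq(sp.diff(fz.subs(repl), x) - fy.subs(repl), 0)

# Example: f = z**2/2 - y gives the Euler equation phi'' + 1 = 0.
print(euler_equation(z**2 / 2 - y))
# Eq(Derivative(phi(x), (x, 2)) + 1, 0)
```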
7.4.2 An example. The Brachistochrone Problem.

Consider the Brachistochrone Problem formulated in the first example of Section 7.1. Here we have

    f(x, φ, φ') = √( (1 + (φ')²) / (2gφ) ),

and we wish to find the function φ0(x) that minimizes

    F{φ} = ∫_0^1 f(x, φ(x), φ'(x)) dx = ∫_0^1 √( (1 + [φ'(x)]²) / (2gφ(x)) ) dx

over the class of functions φ(x) that are continuous and have continuous first derivatives on [0, 1], and satisfy the boundary conditions φ(0) = 0, φ(1) = h.

Treating x, φ and φ' as if they were independent variables and differentiating the function f(x, φ, φ') gives

    ∂f/∂φ' = φ' / √( 2gφ(1 + (φ')²) ),   ∂f/∂φ = − √( (1 + (φ')²)/(2g) ) · 1/(2φ^{3/2}),

and therefore the Euler equation (7.23) specializes to

    d/dx [ φ' / √( φ(1 + (φ')²) ) ] + √(1 + (φ')²) / (2φ^{3/2}) = 0,   0 < x < 1,   (7.26)

with associated boundary conditions

    φ(0) = 0,   φ(1) = h.                                                    (7.27)

The minimizer φ(x) therefore must satisfy the boundary-value problem consisting of the second-order (nonlinear) ordinary differential equation (7.26) and the boundary conditions (7.27).

The rest of this subsection has nothing to do with the calculus of variations; it is simply concerned with solving the boundary value problem (7.26), (7.27). We can write the differential equation as

    [ φ' / √( φ(1 + (φ')²) ) ] · d/dφ [ φ' / √( φ(1 + (φ')²) ) ] + 1/(2φ²) = 0,

which can be immediately integrated to give

    (φ'(x))² = ( c² − φ(x) ) / φ(x),                                         (7.28)

where c is a constant of integration that is to be determined. It is most convenient to find the path of fastest descent in parametric form, x = x(θ), φ = φ(θ), θ1 < θ < θ2, and to this end we adopt the substitution

    φ = (c²/2)(1 − cos θ) = c² sin²(θ/2),   θ1 < θ < θ2.                     (7.29)
Differentiating this with respect to x gives

    φ'(x) = (c²/2) sin θ θ'(x),

so that, together with (7.28) and (7.29), this leads to

    dx/dθ = (c²/2)(1 − cos θ),

which integrates to give

    x = (c²/2)(θ − sin θ) + c1.                                              (7.30)

We now turn to the boundary conditions. The requirement φ(x) = 0 at x = 0 gives, first, θ1 = 0, and then c1 = 0. We thus have

    x = (c²/2)(θ − sin θ),   φ = (c²/2)(1 − cos θ),   0 ≤ θ ≤ θ2.            (7.31)

The remaining boundary condition φ(x) = h at x = 1 gives the following two equations for finding the two constants θ2 and c:

    1 = (c²/2)(θ2 − sin θ2),   h = (c²/2)(1 − cos θ2).                       (7.32)

Once this pair of equations is solved for c and θ2, (7.31) provides the solution of the problem.

We now address the solvability of (7.32). To this end, if we define the function p(θ) by

    p(θ) = (θ − sin θ)/(1 − cos θ),   0 < θ < 2π,                            (7.33)

then, on dividing the first equation in (7.32) by the second, we see that θ2 is a root of the equation

    p(θ2) = h⁻¹.                                                             (7.34)

One can readily verify that the function p(θ) has the properties

    p → 0 as θ → 0+,   p → ∞ as θ → 2π−,

and, moreover,

    dp/dθ = [ cos(θ/2) / sin³(θ/2) ] ( tan(θ/2) − θ/2 ) > 0   for 0 < θ < 2π.
Therefore it follows that as θ goes from 0 to 2π, the function p(θ) increases monotonically from 0 to ∞; see Figure 7.4. Therefore, given any h > 0, the equation p(θ2) = h⁻¹ can be solved for a unique value of θ2 ∈ (0, 2π), and the value of c is then given by (7.32)1. Thus in summary, the path of minimum descent is given by the curve defined in (7.31), with the values of θ2 and c given by (7.34) and (7.32)1 respectively.

Figure 7.4: A graph of the function p(θ) defined in (7.33) versus θ. Note that given any h > 0 the equation h⁻¹ = p(θ) has a unique root θ = θ2 ∈ (0, 2π).

Figure 7.5 shows that the curve (7.31) is a cycloid, the path traversed by a point on the rim of a wheel that rolls without slipping, the parameter θ having the significance of being the angle of rolling.

Figure 7.5: A cycloid x = x(θ), y = y(θ) is generated by rolling a circle of radius R along the x-axis as shown: x(θ) = PP' − AP sin θ = R(θ − sin θ), y(θ) = AP − AP cos θ = R(1 − cos θ).
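Since p(θ) is strictly increasing on (0, 2π), equation (7.34) is well suited to bisection. The sketch below (illustrative code we have added; the value h = 0.5 is an arbitrary choice) solves (7.34) for θ2, recovers c² from (7.32)_1, and evaluates the cycloid (7.31), confirming that it ends at (1, h).

```python
import numpy as np

# Solve the brachistochrone end conditions (7.32):
# given h, find theta2 in (0, 2*pi) with p(theta2) = 1/h, then c^2 from (7.32)_1.
def p(theta):
    return (theta - np.sin(theta)) / (1.0 - np.cos(theta))

def solve_theta2(h, tol=1e-12):
    lo, hi = 1e-9, 2.0 * np.pi - 1e-9   # p increases monotonically on (0, 2*pi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p(mid) < 1.0 / h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h = 0.5
theta2 = solve_theta2(h)
c2 = 2.0 / (theta2 - np.sin(theta2))          # from 1 = (c^2/2)(theta2 - sin theta2)
theta = np.linspace(0.0, theta2, 201)
x_path = 0.5 * c2 * (theta - np.sin(theta))   # the cycloid (7.31)
y_path = 0.5 * c2 * (1.0 - np.cos(theta))
print(f"theta2 = {theta2:.6f}")
print(f"endpoint = ({x_path[-1]:.6f}, {y_path[-1]:.6f})")   # should be (1, h)
```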
7.4.3 A Formalism for Deriving the Euler Equation

In order to expedite the steps involved in deriving the Euler equation, one usually uses the following formal procedure. First, we adopt the following notation: if H{φ} is any quantity that depends on φ, then by δH we mean

    δH = H(φ + εη) − H(φ)   up to linear terms,

that is,

    δH = ε d/dε H{φ + εη} |_{ε=0}.                                           (7.35)

(Note the following minor change in notation: what we call δH here is what we previously would have called ε δH.) For example, by δφ we mean

    δφ = (φ + εη) − (φ) = εη;                                                (7.36)

by δφ' we mean

    δφ' = (φ' + εη') − (φ') = εη' = (δφ)';                                   (7.37)

by δf we mean

    δf = f(x, φ + εη, φ' + εη') − f(x, φ, φ')
       = (∂f/∂φ)(x, φ, φ') εη + (∂f/∂φ')(x, φ, φ') εη'
       = (∂f/∂φ) δφ + (∂f/∂φ') δφ';                                          (7.38)

and by δF, or δ∫_0^1 f dx, we mean

    δF = F{φ + εη} − F{φ} = ε [ d/dε F{φ + εη} ]_{ε=0}
       = ε ∫_0^1 [ (∂f/∂φ) η + (∂f/∂φ') η' ] dx
       = ∫_0^1 [ (∂f/∂φ) δφ + (∂f/∂φ') δφ' ] dx,                             (7.39)

so that

    δF = δ ∫_0^1 f dx = ∫_0^1 δf dx.                                         (7.40)

We refer to δφ(x) as an admissible variation. When η(0) = η(1) = 0, it follows that δφ(0) = δφ(1) = 0.
For purposes of illustration, let us now repeat our previous derivation of the Euler equation using this new notation. (If ever in doubt about a particular step during a calculation, always go back to the meaning of the symbols δφ, δf, etc., or revert to using ε, η, etc.) Given the functional F, a necessary condition for an extremum of F is δF = 0, and so our task is to calculate δF. Observe from (7.40) that

    δF = δ ∫_0^1 f dx = ∫_0^1 δf dx.

Since f = f(x, φ, φ'), we have δf = (∂f/∂φ) δφ + (∂f/∂φ') δφ', and not δf = (∂f/∂x) δx + (∂f/∂φ) δφ + (∂f/∂φ') δφ': the variation δ does not operate on x, since it is the function φ that is being varied, not the independent variable x. This in turn leads to

    δF = ∫_0^1 [ (∂f/∂φ) δφ + (∂f/∂φ') δφ' ] dx.                             (7.41)

From here on we can proceed as before by setting δF = 0, integrating the second term by parts, and using the boundary conditions and the arbitrariness of an admissible variation δφ(x) to derive the Euler equation. We refer to δF as the first variation of the functional F. Finally, observe that the necessary condition for a minimum that we wrote down previously can be written as

    δF{φ, δφ} = 0   for all admissible variations δφ.                        (7.42)
7.5 Generalizations.

7.5.1 Generalization: Free end-point. Natural boundary conditions.

Consider the following modified problem: suppose that we want to find the function φ(x), from among all once continuously differentiable functions, that makes the functional

    F{φ} = ∫_0^1 f(x, φ, φ') dx

a minimum. Note that we do not restrict attention here to those functions that satisfy φ(0) = a, φ(1) = b. So the set of admissible functions A is

    A = { φ(·) | φ : [0,1] → R, φ ∈ C¹[0,1] }.                               (7.43)

Note that the class of admissible functions A is much larger than before. The functional F{φ} is defined for all φ ∈ A by

    F{φ} = ∫_0^1 f(x, φ, φ') dx.                                             (7.44)

We begin by calculating the first variation of F:

    δF = δ ∫_0^1 f dx = ∫_0^1 δf dx = ∫_0^1 [ (∂f/∂φ) δφ + (∂f/∂φ') δφ' ] dx.   (7.45)

Integrating the last term by parts yields

    δF = ∫_0^1 [ ∂f/∂φ − d/dx(∂f/∂φ') ] δφ dx + [ (∂f/∂φ') δφ ]_0^1.         (7.46)

Since δF = 0 at an extremum, we must have

    ∫_0^1 [ ∂f/∂φ − d/dx(∂f/∂φ') ] δφ dx + [ (∂f/∂φ') δφ ]_0^1 = 0           (7.47)

for all admissible variations δφ(x). First restrict attention to all variations δφ with the additional property δφ(0) = δφ(1) = 0. The boundary terms now drop out, and by the Lemma in Section 7.4.1 it follows that

    d/dx(∂f/∂φ') − ∂f/∂φ = 0   for 0 < x < 1.                                (7.48)

This is the same Euler equation as before.

Next, return to (7.47) and keep (7.48) in mind. The boundary term in (7.47) does not automatically drop out now, because δφ(0) and δφ(1) do not have to vanish. Since (7.47) must necessarily hold for all admissible variations δφ, we see that we must have

    (∂f/∂φ')|_{x=1} δφ(1) − (∂f/∂φ')|_{x=0} δφ(0) = 0                        (7.49)

for all admissible variations δφ. Since δφ(0) and δφ(1) are both arbitrary (and not necessarily zero), equation (7.49) requires that

    ∂f/∂φ' = 0   at x = 0 and x = 1.                                         (7.50)
Equation (7.50) provides the boundary conditions to be satisfied by the extremizing function φ(x). These boundary conditions were determined as part of the extremization; they are referred to as natural boundary conditions, in contrast to boundary conditions that are given as part of a problem statement.

Example: Reconsider the Brachistochrone Problem analyzed previously, but now suppose that we want to find the shape of the wire that commences from (0, 0) and ends somewhere on the vertical line through x = 1; see Figure 7.6.

Figure 7.6: Curve y = φ(x) joining (0, 0) to an arbitrary point on the vertical line through x = 1; gravity g acts in the y-direction.

The only difference between this and the first Brachistochrone Problem is that here the set of admissible functions is

    A2 = { φ(·) | φ : [0,1] → R, φ ∈ C¹[0,1], φ(0) = 0 };

note that there is no restriction on φ at x = 1. Our task is to minimize the travel time of the bead T{φ} over the set A2.

The minimizer must satisfy the same Euler equation (7.26) as in the first problem, and the same boundary condition φ(0) = 0 at the left hand end. To find the natural boundary condition at the other end, recall that

    f(x, φ, φ') = √( (1 + (φ')²) / (2gφ) ).

Differentiating this gives

    ∂f/∂φ' = φ' / √( 2gφ(1 + (φ')²) ),

and so by (7.50) the natural boundary condition is

    φ' / √( 2gφ(1 + (φ')²) ) = 0   at x = 1,

which simplifies to

    φ'(1) = 0.
7.5.2 Generalization: Higher derivatives.

The functional F{φ} considered above involved a function φ and its first derivative φ'. One can consider functionals that involve higher derivatives of φ, for example

    F{φ} = ∫_0^1 f(x, φ, φ', φ'') dx.

We begin with the formulation and analysis of a specific example and then turn to some theory.

Example: The Bernoulli-Euler Beam. Consider an elastic beam of length L and bending stiffness EI, which is clamped at its left hand end. The beam carries a distributed load p(x) along its length and a concentrated force F at the right hand end x = L; both loads act in the −y-direction.

Figure 7.7: The neutral axis of a beam in reference (straight) and deformed (curved) states. The bold lines represent a cross-section of the beam in the reference and deformed states. In the classical Bernoulli-Euler theory of beams, cross-sections remain perpendicular to the neutral axis: the rotation is φ = u', the bending moment is M = EIφ', and the elastic energy per unit length is (1/2)Mφ' = (1/2)EI(u'')².

Let u(x), 0 ≤ x ≤ L, be a geometrically admissible deflection of the beam. Since the beam is clamped at the left hand end, this means that u(x) is any (smooth enough) function that satisfies the geometric boundary conditions

    u(0) = 0,   u'(0) = 0.                                                   (7.51)
The boundary condition (7.51)1 describes the geometric condition that the beam is clamped at x = 0 and therefore cannot deflect at that point; the boundary condition (7.51)2 describes the geometric condition that the beam is clamped at x = 0 and therefore cannot rotate at the left end. The set of admissible test functions that we consider is

    A = { u(·) | u : [0, L] → R, u ∈ C⁴[0, L], u(0) = 0, u'(0) = 0 },        (7.52)

which consists of all "geometrically possible configurations". Note that only two boundary conditions u(0) = 0, u'(0) = 0 are given, and so we expect to derive additional natural boundary conditions at the right hand end x = L.

From elasticity theory we know that the elastic energy associated with a deformed configuration of the beam is (1/2)EI(u'')² per unit length. Therefore the total potential energy of the system is

    Φ{u} = ∫_0^L (1/2) EI (u''(x))² dx − ∫_0^L p(x) u(x) dx − F u(L),        (7.53)

where the last two terms represent the potential energy of the distributed and concentrated loading respectively; the negative sign in front of these terms arises because the loads act in the −y-direction while u is the deflection in the +y-direction. Note that the integrand of the functional involves the higher derivative term u''.

The actual deflection of the beam minimizes the potential energy (7.53) over the set (7.52). We proceed in the usual way by calculating the first variation δΦ and setting it equal to zero: δΦ = 0. By using (7.53) this can be written explicitly as

    ∫_0^L EI u'' δu'' dx − ∫_0^L p δu dx − F δu(L) = 0.

The given boundary conditions (7.51) require that an admissible variation δu must obey

    δu(0) = 0,   δu'(0) = 0.

Twice integrating the first term by parts leads to

    ∫_0^L EI u'''' δu dx − ∫_0^L p δu dx − F δu(L) − [ EI u''' δu ]_0^L + [ EI u'' δu' ]_0^L = 0,

and therefore, on using δu(0) = δu'(0) = 0, the preceding equation simplifies to

    ∫_0^L ( EI u'''' − p ) δu dx − [ EI u'''(L) + F ] δu(L) + EI u''(L) δu'(L) = 0.
Since this must hold for all admissible variations δu(x), it follows in the usual way that the extremizing function u(x) must obey

    EI u''''(x) − p(x) = 0   for 0 < x < L,
    EI u'''(L) + F = 0,                                                      (7.54)
    EI u''(L) = 0.

The natural boundary condition (7.54)2 describes the mechanical condition that the beam carries a concentrated force F at the right hand end; the natural boundary condition (7.54)3 describes the mechanical condition that the beam is free to rotate (and therefore has zero "bending moment") at the right hand end. Thus the extremizer u(x) obeys the fourth-order linear ordinary differential equation (7.54)1, the prescribed boundary conditions (7.51) and the natural boundary conditions (7.54)2,3.
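For constant EI and a uniform load p, the boundary value problem (7.54), (7.51) can be integrated in closed form; the following sympy sketch does so symbolically. (This is an illustration we have added: the symbol names and the use of dsolve are ours, and the sign conventions follow (7.54).)

```python
import sympy as sp

# Clamped-free Bernoulli-Euler beam: EI u'''' = p on (0, L), with
# u(0) = u'(0) = 0 from (7.51) and EI u''(L) = 0, EI u'''(L) = -F from (7.54).
x, L, EI, p, F = sp.symbols('x L EI p F', positive=True)
u = sp.Function('u')

ode = sp.Eq(EI * u(x).diff(x, 4), p)
sol = sp.dsolve(ode, u(x), ics={
    u(0): 0,                                  # clamped end: no deflection
    u(x).diff(x).subs(x, 0): 0,               # clamped end: no rotation
    u(x).diff(x, 2).subs(x, L): 0,            # natural BC: zero bending moment
    u(x).diff(x, 3).subs(x, L): -F / EI,      # natural BC: tip shear carries F
})
print(sp.simplify(sol.rhs))
# F*x**2*(3*L - x)/(6*EI) + p*x**2*(x**2 - 4*L*x + 6*L**2)/(24*EI)
```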
Exercise: Consider the functional

    F{φ} = ∫_0^1 f(x, φ, φ', φ'') dx

defined on the set of admissible functions A consisting of functions φ that are defined and four times continuously differentiable on [0, 1], and that satisfy the four boundary conditions

    φ(0) = φ0,   φ'(0) = φ0',   φ(1) = φ1,   φ'(1) = φ1'.

Show that the function φ that extremizes F over the set A must satisfy the Euler equation

    ∂f/∂φ − d/dx( ∂f/∂φ' ) + d²/dx²( ∂f/∂φ'' ) = 0   for 0 < x < 1,

where, as before, the partial derivatives ∂f/∂φ, ∂f/∂φ' and ∂f/∂φ'' are calculated by treating φ, φ' and φ'' as if they were independent variables in f(x, φ, φ', φ'').

7.5.3 Generalization: Multiple functions.

The functional F{φ} considered above involved a single function φ and its derivatives. One can consider functionals that involve multiple functions, for example a functional

    F{u, v} = ∫_0^1 f(x, u, u', v, v') dx

that involves two functions u(x) and v(x). We begin with the formulation and analysis of a specific example and then turn to some theory.

Example: The Timoshenko Beam. In the simplest model of a beam, the so-called Bernoulli-Euler model, the deformed state of the beam is completely defined by the deflection u(x) of the centerline (the neutral axis) of the beam. In that theory, shear deformations are neglected and therefore a cross-section of the beam remains perpendicular to the neutral axis even in the deformed state. Here we discuss a more general theory of beams, one that accounts for shear deformations; in the theory considered here, a cross-section of the beam is not constrained to remain perpendicular to the neutral axis.

Consider a beam of length L, bending stiffness EI and shear stiffness GA, where E and G are the Young's modulus and shear modulus of the material, while I and A are the second moment of cross-section and the area of the cross-section respectively. The beam is clamped at x = 0; it carries a distributed load p(x) along its length which acts in the −y-direction, and a concentrated force F at the end x = L, also in the −y-direction.

Figure 7.8: The neutral axis of a beam in reference (straight) and deformed (curved) states. The bold lines represent a cross-section of the beam in the reference and deformed states. The thin line is perpendicular to the deformed neutral axis, so that in the classical Bernoulli-Euler theory of beams the thin line and the bold line would coincide. The angle between the vertical and the bold line is φ. The angle between the neutral axis and the horizontal, which equals the angle between the perpendicular to the neutral axis (the thin line) and the vertical dashed line, is u'. The decrease in the angle between the cross-section and the neutral axis is therefore u' − φ.

Thus a deformed state of the beam is characterized by two
fields: one, u(x), characterizes the deflection of the centerline of the beam at a location x, and the second, φ(x), characterizes the rotation of the cross-section at x. (In the Bernoulli-Euler model, φ(x) = u'(x) since, for small angles, the rotation equals the slope.)

The fact that the left hand end is clamped implies that the point x = 0 cannot deflect and that the cross-section at x = 0 cannot rotate. Thus we have the geometric boundary conditions

    u(0) = 0,   φ(0) = 0.                                                    (7.55)

Note that the zero-rotation boundary condition is φ(0) = 0 and not u'(0) = 0: in the more accurate beam theory discussed here, the so-called Timoshenko beam theory, one does not neglect shear deformations, and so u(x) and φ(x) are (geometrically) independent functions.

Since the shear strain is defined as the change in angle between two fibers that are initially at right angles to each other, the shear strain in the present situation is

    γ(x) = u'(x) − φ(x);

see Figure 7.8. Observe that in the Bernoulli-Euler theory γ(x) = 0.

The basic equations of elasticity tell us that the moment-curvature relation for bending is

    M(x) = EI φ'(x),

Figure 7.9: Basic constitutive relationships for a beam: bending moment M = EIφ' and shear force S = GAγ.
and that the associated elastic energy per unit length of the beam, (1/2)Mφ', is (1/2)EI(φ'(x))². Similarly, we know from elasticity that the shear force-shear strain relation for a beam is

    S(x) = GA γ(x),

and that the associated elastic energy per unit length of the beam, (1/2)Sγ, is (1/2)GA(γ(x))². (In engineering practice, the nonuniformity of the shear stress over the cross-section is taken into account approximately by replacing GA by κGA, where the heuristic parameter κ ≈ 0.8 − 0.9. Since the top and bottom faces of the differential element shown in Figure 7.9 are free of shear traction, we know that the element is not in a state of simple shear; instead, the shear stress must vary with y such that it vanishes at the top and bottom.)

The total potential energy of the system is thus

    Φ = Φ{u, φ} = ∫_0^L [ (1/2) EI (φ'(x))² + (1/2) GA (u'(x) − φ(x))² ] dx − ∫_0^L p u(x) dx − F u(L),   (7.56)

where the last two terms in this expression represent the potential energy of the distributed and concentrated loading respectively (and the negative signs arise because u is the deflection in the +y-direction while the loadings p and F are applied in the −y-direction). We allow for the possibility that p, EI and GA may vary along the length of the beam and therefore might be functions of x.

The displacement and rotation fields u(x) and φ(x) associated with an equilibrium configuration of the beam minimize the potential energy Φ{u, φ} over the admissible set A of test functions, where we take

    A = { u(·), φ(·) | u : [0, L] → R, φ : [0, L] → R, u ∈ C²[0, L], φ ∈ C²[0, L], u(0) = 0, φ(0) = 0 }.

Note that all admissible functions are required to satisfy the geometric boundary conditions (7.55).

To find a minimizer of Φ we calculate its first variation, which from (7.56) is

    δΦ = ∫_0^L [ EI φ' δφ' + GA (u' − φ)(δu' − δφ) ] dx − ∫_0^L p δu dx − F δu(L).
Integrating the terms involving δu' and δφ' by parts gives

    δΦ = [ EI φ' δφ ]_0^L − ∫_0^L d/dx(EI φ') δφ dx + [ GA(u' − φ) δu ]_0^L − ∫_0^L d/dx[ GA(u' − φ) ] δu dx
         − ∫_0^L GA(u' − φ) δφ dx − ∫_0^L p δu dx − F δu(L).

Finally, on using the facts that an admissible variation must satisfy δu(0) = 0 and δφ(0) = 0, collecting the like terms in the preceding equation leads to

    δΦ = EI φ'(L) δφ(L) + [ GA( u'(L) − φ(L) ) − F ] δu(L)
         − ∫_0^L [ d/dx(EI φ') + GA(u' − φ) ] δφ(x) dx
         − ∫_0^L [ d/dx( GA(u' − φ) ) + p ] δu(x) dx.                        (7.57)

At a minimizer, we have δΦ = 0 for all admissible variations. Since the variations δu(x), δφ(x) are arbitrary on 0 < x < L, and since δu(L) and δφ(L) are also arbitrary, it follows from (7.57) that the field equations

    d/dx [ EI φ' ] + GA(u' − φ) = 0,   0 < x < L,
    d/dx [ GA(u' − φ) ] + p = 0,       0 < x < L,                            (7.58)

and the natural boundary conditions

    EI φ'(L) = 0,   GA( u'(L) − φ(L) ) = F,                                  (7.59)

must hold. Thus in summary, an equilibrium configuration of the beam is described by the deflection u(x) and rotation φ(x) that satisfy the differential equations (7.58), the boundary conditions (7.55) and the natural boundary conditions (7.59).

[Remark: Can you recover the Bernoulli-Euler theory from the Timoshenko theory in the limit as the shear rigidity GA → ∞?]
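For constant EI, GA and p, the system (7.58), (7.55), (7.59) can be integrated directly, since (7.58)_2 determines the shear force S = GA(u' − φ) up to its known end value. The sympy sketch below (our illustration, with symbolic constants) carries out the successive integrations and also checks the Remark above: as GA → ∞ the deflection reduces to the Bernoulli-Euler result of the previous example.

```python
import sympy as sp

x, s, L, EI, GA, p, F = sp.symbols('x s L EI GA p F', positive=True)

# Shear force S = GA(u' - phi): (7.58)_2 gives S' = -p, (7.59)_2 gives S(L) = F.
S = lambda t: F + p * (L - t)
# (7.58)_1 gives (EI phi')' = -S with EI phi'(L) = 0 from (7.59)_1,
# hence EI phi'(x) equals the integral of S from x to L:
EIphi_prime = sp.integrate(S(s), (s, x, L))
# phi(0) = 0 from (7.55):
phi = sp.integrate(EIphi_prime.subs(x, s) / EI, (s, 0, x))
# u' = phi + S/GA, with u(0) = 0 from (7.55):
u = sp.integrate(phi.subs(x, s) + S(s) / GA, (s, 0, x))
print(sp.expand(u))
# Bernoulli-Euler limit: the 1/GA shear correction vanishes as GA -> infinity.
print(sp.expand(sp.limit(u, GA, sp.oo)))
```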
Exercise: Consider a smooth function f(x, y1, y2, ..., yn, z1, z2, ..., zn) defined for all x, y1, ..., yn, z1, ..., zn. Let φ1(x), φ2(x), ..., φn(x) be n once-continuously differentiable functions on [0, 1] with φi(0) = ai, φi(1) = bi, (i = 1, 2, ..., n). Let F be the functional defined by

    F{φ1, φ2, ..., φn} = ∫_0^1 f(x, φ1, φ2, ..., φn, φ1', φ2', ..., φn') dx  (7.60)

on the set of all such admissible functions. Show that the functions φ1(x), φ2(x), ..., φn(x) that extremize F must necessarily satisfy the n Euler equations

    d/dx( ∂f/∂φi' ) − ∂f/∂φi = 0   for 0 < x < 1,   (i = 1, 2, ..., n).      (7.61)

7.5.4 Generalization: End point of extremal lying on a curve.

Consider the set A of all functions that describe curves in the x, y-plane that commence from a given point (0, a) and end at some point on the curve G(x, y) = 0; see Figure 7.10. We wish to minimize a functional F{φ} over this set of functions.

Figure 7.10: Curve y = φ(x) joining (0, a) to an arbitrary point on the given curve G(x, y) = 0. A neighboring curve y = φ(x) + δφ(x) intersects G = 0 at the abscissa xR + δxR.

Let x = xR be the abscissa of the point at which the curve y = φ(x) intersects the curve G(x, y) = 0. Observe that xR is not known a priori and is to be determined along with φ. Moreover, note that the abscissa of the point at which a neighboring curve defined by y = φ(x) + δφ(x) intersects the curve G = 0 is not xR but xR + δxR.

Suppose that φ(x) ∈ A is a minimizer of F. At the minimizer,
    F{φ} = ∫_0^{xR} f(x, φ, φ') dx,

where we have set fφ = ∂f/∂φ and fφ' = ∂f/∂φ', and at a neighboring test function

    F{φ + δφ} = ∫_0^{xR + δxR} f(x, φ + δφ, φ' + δφ') dx.

Therefore, on calculating the first variation δF, which equals the linearized form of F{φ + δφ} − F{φ}, we find

    δF = ∫_0^{xR + δxR} f(x, φ + δφ, φ' + δφ') dx − ∫_0^{xR} f(x, φ, φ') dx
       = ∫_{xR}^{xR + δxR} f(x, φ, φ') dx + ∫_0^{xR} [ fφ(x, φ, φ') δφ + fφ'(x, φ, φ') δφ' ] dx,

which in turn reduces to

    δF = f( xR, φ(xR), φ'(xR) ) δxR + ∫_0^{xR} ( fφ δφ + fφ' δφ' ) dx.

Thus setting the first variation δF equal to zero gives

    f( xR, φ(xR), φ'(xR) ) δxR + ∫_0^{xR} ( fφ δφ + fφ' δφ' ) dx = 0.

After integrating the last term by parts we get

    f( xR, φ(xR), φ'(xR) ) δxR + [ fφ' δφ ]_0^{xR} + ∫_0^{xR} [ fφ − d/dx fφ' ] δφ dx = 0,

which, on using the fact that δφ(0) = 0, reduces to

    f( xR, φ(xR), φ'(xR) ) δxR + fφ'( xR, φ(xR), φ'(xR) ) δφ(xR) + ∫_0^{xR} [ fφ − d/dx fφ' ] δφ dx = 0.   (7.62)

First limit attention to the subset of all test functions that terminate at the same point (xR, φ(xR)) as the minimizer. In this case δxR = 0 and δφ(xR) = 0, and so the first two terms in (7.62) vanish. Since this specialized version of equation (7.62) must hold for all such variations δφ(x), this leads to the Euler equation

    fφ − d/dx fφ' = 0,   0 ≤ x ≤ xR.                                         (7.63)

We now return to arbitrary admissible test functions. Substituting (7.63) into (7.62) gives

    f( xR, φ(xR), φ'(xR) ) δxR + fφ'( xR, φ(xR), φ'(xR) ) δφ(xR) = 0,        (7.64)
which must hold for all admissible δxR and δφ(xR).

It is important to observe that since admissible test curves must end on the curve G = 0, the quantities δxR and δφ(xR) are not independent of each other. Thus (7.64) does not hold for all δxR and δφ(xR), only for those that are consistent with this geometric requirement. The requirement that the minimizing curve and the neighboring test curve terminate on the curve G(x, y) = 0 implies that

    G( xR, φ(xR) ) = 0,   G( xR + δxR, φ(xR + δxR) + δφ(xR + δxR) ) = 0.

Setting δG = G( xR + δxR, φ(xR + δxR) + δφ(xR + δxR) ) − G( xR, φ(xR) ) = 0, and noting that linearization gives

    G( xR + δxR, φ(xR + δxR) + δφ(xR + δxR) ) = G( xR + δxR, φ(xR) + φ'(xR) δxR + δφ(xR) )
        = G( xR, φ(xR) ) + Gx( xR, φ(xR) ) δxR + Gy( xR, φ(xR) ) [ φ'(xR) δxR + δφ(xR) ],

thus leads to the following relation between the variations δxR and δφ(xR):

    [ Gx( xR, φ(xR) ) + φ'(xR) Gy( xR, φ(xR) ) ] δxR + Gy( xR, φ(xR) ) δφ(xR) = 0,   (7.65)

where we have set Gx = ∂G/∂x and Gy = ∂G/∂y.

Thus (7.64) must hold for all δxR and δφ(xR) that satisfy (7.65). This implies that

    f( xR, φ(xR), φ'(xR) ) − λ [ Gx( xR, φ(xR) ) + φ'(xR) Gy( xR, φ(xR) ) ] = 0,
    fφ'( xR, φ(xR), φ'(xR) ) − λ Gy( xR, φ(xR) ) = 0,

for some constant λ (referred to as a Lagrange multiplier). (It may be helpful to recall from calculus that if we are to minimize a function I(ε1, ε2), we must satisfy the condition dI = (∂I/∂ε1)dε1 + (∂I/∂ε2)dε2 = 0. But if this minimization is carried out subject to the side constraint J(ε1, ε2) = 0, then we must respect the side condition dJ = (∂J/∂ε1)dε1 + (∂J/∂ε2)dε2 = 0. Under these circumstances, one finds that one must require the conditions ∂I/∂ε1 = λ ∂J/∂ε1, ∂I/∂ε2 = λ ∂J/∂ε2, where the Lagrange multiplier λ is unknown and is also to be determined; the constraint equation J = 0 provides the extra condition required for this purpose.) We can use the second equation above to simplify the first, which then leads to the pair of equations

    f( xR, φ(xR), φ'(xR) ) − φ'(xR) fφ'( xR, φ(xR), φ'(xR) ) − λ Gx( xR, φ(xR) ) = 0,
    fφ'( xR, φ(xR), φ'(xR) ) − λ Gy( xR, φ(xR) ) = 0.                        (7.66)
In summary: an extremal φ(x) must satisfy the differential equation (7.63) on 0 ≤ x ≤ xR, the boundary condition φ = a at x = 0, the two natural boundary conditions (7.66) at the right hand end x = xR, and the equation G(xR, φ(xR)) = 0. (Note that the presence of the additional unknown λ is compensated for by the imposition of the additional condition G(xR, φ(xR)) = 0.)

Example: Suppose that G(x, y) = c1 x + c2 y + c3 and that we are to find the curve of shortest length that commences from (0, a) and ends on G = 0. Since ds = √(dx² + dy²) = √(1 + (φ')²) dx, we are to minimize the functional

    F = ∫_0^{xR} √(1 + (φ')²) dx.

Thus

    f(x, φ, φ') = √(1 + (φ')²),   fφ(x, φ, φ') = 0,   fφ'(x, φ, φ') = φ' / √(1 + (φ')²),

and the Euler equation (7.63) can be integrated immediately to give

    φ'(x) = constant   for 0 ≤ x ≤ xR.

The boundary condition at the left hand end is φ(0) = a, while the boundary conditions (7.66) at the right hand end give

    1 / √(1 + φ'²(xR)) = λ c1,   φ'(xR) / √(1 + φ'²(xR)) = λ c2.             (7.67)

Finally, the condition G(xR, φ(xR)) = 0 requires that

    c1 xR + c2 φ(xR) + c3 = 0.

On using (7.67), solving the preceding equations leads to the minimizer

    φ(x) = (c2/c1) x + a   for 0 ≤ x ≤ xR = − c1 (a c2 + c3) / (c1² + c2²).
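A quick numerical check (with arbitrarily chosen constants c1, c2, c3, a) confirms two features of this minimizer: the extremal meets the line G = 0 perpendicularly, and its right end point indeed lies on G = 0.

```python
# Sanity check for the shortest-path example: the extremal phi(x) = (c2/c1)x + a
# should be perpendicular to the line G(x, y) = c1*x + c2*y + c3 = 0,
# whose slope is -c1/c2. All numerical values here are hypothetical.
c1, c2, c3, a = 1.0, 2.0, -4.0, 0.5
xR = -c1 * (a * c2 + c3) / (c1**2 + c2**2)
slope_extremal = c2 / c1
slope_G = -c1 / c2
print(f"xR = {xR:.4f}")
print(f"product of slopes = {slope_extremal * slope_G:.1f}")   # -1: perpendicular
# the endpoint lies on G = 0:
print(f"G(xR, phi(xR)) = {c1*xR + c2*(slope_extremal*xR + a) + c3:.2e}")
```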
7.6 Constrained Minimization

7.6.1 Integral constraints.

Consider a problem of the following general form: find admissible functions φ1(x), φ2(x) that minimize

    F{φ1, φ2} = ∫_0^1 f( x, φ1(x), φ2(x), φ1'(x), φ2'(x) ) dx                (7.68)

subject to the constraint

    G{φ1, φ2} = ∫_0^1 g( x, φ1(x), φ2(x), φ1'(x), φ2'(x) ) dx = 0.           (7.69)

Rather than following the formal approach using variations δφ1(x), δφ2(x), for reasons of clarity we shall return to the more detailed approach where we introduce parameters ε1, ε2 and functions η1(x), η2(x). Accordingly, suppose that the pair φ1(x), φ2(x) is the minimizer. By evaluating F and G on a family of neighboring admissible functions φ1(x) + ε1 η1(x), φ2(x) + ε2 η2(x), we have

    F̂(ε1, ε2) = F{ φ1(x) + ε1 η1(x), φ2(x) + ε2 η2(x) },
    Ĝ(ε1, ε2) = G{ φ1(x) + ε1 η1(x), φ2(x) + ε2 η2(x) } = 0.                 (7.70)

If we begin by keeping η1 and η2 fixed, this is a classical minimization problem for a function of two variables: we are to minimize the function F̂(ε1, ε2) with respect to the variables ε1 and ε2, subject to the constraint Ĝ(ε1, ε2) = 0. A necessary condition for this is that

    dF̂ = (∂F̂/∂ε1) dε1 + (∂F̂/∂ε2) dε2 = 0                                    (7.71)

for all dε1, dε2 that are consistent with the constraint. Because of the constraint, dε1 and dε2 cannot be varied independently. Instead the constraint requires that they be related by

    dĜ = (∂Ĝ/∂ε1) dε1 + (∂Ĝ/∂ε2) dε2 = 0.                                    (7.72)

If we didn't have the constraint, then (7.71) would imply the usual requirements ∂F̂/∂ε1 = ∂F̂/∂ε2 = 0. However when the constraint equation (7.72) holds, (7.71) only requires that

    ∂F̂/∂ε1 = λ ∂Ĝ/∂ε1,   ∂F̂/∂ε2 = λ ∂Ĝ/∂ε2,                                  (7.73)
for some constant λ, or equivalently that

    ∂/∂ε1 ( F̂ − λĜ ) = 0,   ∂/∂ε2 ( F̂ − λĜ ) = 0.                            (7.74)

Therefore minimizing F̂ subject to the constraint Ĝ = 0 is equivalent to minimizing F̂ − λĜ without regard to the constraint. Proceeding from here on leads to the Euler equation associated with F − λG. The constant λ is known as a Lagrange multiplier; the presence of the additional unknown parameter λ is balanced by the availability of the constraint equation G = 0.

Example: Consider a heavy inextensible cable of mass per unit length m that hangs under gravity. The two ends of the cable are held at the same vertical height, a distance 2H apart, and the cable has a given length L. We are asked to determine its shape.

Figure: A hanging cable y = φ(x), −H ≤ x ≤ H, under gravity g.

Let y = φ(x), −H ≤ x ≤ H, describe an admissible shape of the cable. We know from physics that the cable adopts a shape that minimizes the potential energy. The potential energy of the cable is determined by integrating mgφ with respect to the arc length s along the cable which, since ds = √(dx² + dy²) = √(1 + (φ')²) dx, is given by

    V{φ} = ∫_0^L mgφ ds = mg ∫_{−H}^{H} φ √(1 + (φ')²) dx.                   (7.75)

Since the cable is inextensible, its length

    ℓ{φ} = ∫_0^L ds = ∫_{−H}^{H} √(1 + (φ')²) dx                             (7.76)

must equal L. Therefore we are asked to find a function φ(x), with φ(−H) = φ(H), that minimizes V{φ} subject to the constraint ℓ{φ} = L. According to the theory developed
above, this function must satisfy the Euler equation associated with the functional V{φ} − λmg ℓ{φ}, where λ is a constant (the factor mg is included in the Lagrange multiplier merely for convenience). Calculating the first variation of V − λmg ℓ leads to the Euler equation

    d/dx [ (φ − λ) φ' / √(1 + (φ')²) ] − √(1 + (φ')²) = 0,   −H < x < H.

This can be integrated once to yield

    (φ')² = (φ − λ)² / c² − 1,

where c is a constant of integration. Integrating this again leads to

    φ = φ(x) = c cosh[ (x + d)/c ] + λ,

where d is a second constant of integration. By symmetry, we must have φ'(0) = 0 and therefore d = 0. Thus

    φ(x) = c cosh(x/c) + λ,   −H < x < H.                                    (7.77)

The constant λ in (7.77) is simply a reference height. For example, we could take the x-axis to pass through the two pegs, in which case φ(±H) = 0; then λ = −c cosh(H/c) and so

    φ(x) = c [ cosh(x/c) − cosh(H/c) ],   −H < x < H.                        (7.78)

Substituting (7.78) into the constraint condition ℓ{φ} = L yields

    L = 2c sinh(H/c).                                                        (7.79)

Thus in summary, if equation (7.79) can be solved for c, then (7.78) gives the equation describing the shape of the cable.

All that remains is to examine the solvability of (7.79). To this end set z = H/c and µ = L/(2H). Then we must solve

    µz = sinh z,

where µ > 1 is a constant. (The requirement µ > 1 follows from the physical necessity that the distance between the pegs, 2H, be less than the length of the rope, L.) One can show that as z increases from 0 to ∞, the function sinh z − µz starts from the value 0, decreases monotonically to some finite negative value at some z = z∗ > 0, and then increases monotonically to ∞. Thus for each µ > 1 the function sinh z − µz vanishes at some unique positive value of z. Consequently (7.79) has a unique root c > 0, and the resulting solution together with the constraint ℓ = L yields the shape of the cable φ(x).
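Solving µz = sinh z is a one-dimensional root-finding exercise, since sinh z − µz changes sign exactly once on (0, ∞) when µ > 1. The sketch below (hypothetical values H = 1, L = 2.5) finds z by bisection, recovers c, and verifies the length constraint (7.79).

```python
import numpy as np

# Solve (7.79), L = 2c sinh(H/c): set z = H/c and mu = L/(2H) > 1, then find
# the unique positive root of sinh(z) - mu*z = 0 by bisection.
H, Lcable = 1.0, 2.5          # pegs at x = +/- 1, cable length 2.5 (hypothetical)
mu = Lcable / (2.0 * H)       # mu = 1.25 > 1

lo, hi = 1e-9, 10.0           # sinh(z) - mu*z < 0 just above 0, > 0 for large z
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if np.sinh(mid) - mu * mid < 0.0:
        lo = mid
    else:
        hi = mid
z = 0.5 * (lo + hi)
c = H / z
print(f"z = {z:.6f}, c = {c:.6f}")
# sag at the midpoint from (7.78): phi(0) = c(1 - cosh(H/c))
print(f"midpoint sag = {c * (1.0 - np.cosh(H / c)):.6f}")
# the length constraint should be recovered:
print(f"2c sinh(H/c) = {2.0 * c * np.sinh(H / c):.6f}")   # equals L = 2.5
```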
7.6.2 Algebraic constraints

Now consider a problem of the following general type: find a pair of admissible functions φ1(x), φ2(x) that minimizes

    ∫_0^1 f( x, φ1, φ2, φ1', φ2' ) dx

subject to the algebraic constraint

    g( x, φ1(x), φ2(x) ) = 0   for 0 ≤ x ≤ 1.

One can show that a necessary condition is that the minimizer satisfy the Euler equations associated with f − λg. In this problem the Lagrange multiplier λ may be a function of x.

Example: Consider a conical surface characterized by

    g(x1, x2, x3) = x1² + x2² − R²(x3) = 0,   R(x3) = x3 tan α,   x3 > 0.

Let P = (p1, p2, p3) and Q = (q1, q2, q3), q3 > p3, be two points on this surface. A smooth wire lies entirely on the conical surface and joins the points P and Q. A bead slides along the wire under gravity, beginning at rest from P. From among all such wires, we are to find the one that gives the minimum travel time.

Figure 7.11: A curve that joins the points (p1, p2, p3) and (q1, q2, q3) and lies on the conical surface x1² + x2² − x3² tan²α = 0; gravity g acts in the x3-direction.

Suppose that the wire can be described parametrically by x1 = φ1(x3), x2 = φ2(x3) for p3 ≤ x3 ≤ q3. (Not all of the permissible curves can be described this way, and so by using
this characterization we are limiting ourselves to a subset of all the permitted curves.)

Since the curve has to lie on the conical surface, it is necessary that

    g( φ1(x3), φ2(x3), x3 ) = 0,   p3 ≤ x3 ≤ q3,   where g(x1, x2, x3) = x1² + x2² − x3² tan²α.

The arc length ds along the path is given by

    ds = √( dx1² + dx2² + dx3² ) = √( (φ1')² + (φ2')² + 1 ) dx3,

and the conservation of energy, (1/2)mv²(t) − mg x3(t) = −mg p3, tells us that the speed of the bead is

    v = √( 2g(x3 − p3) ).

The travel time is found by integrating ds/v along the path:

    T{φ1, φ2} = ∫_{p3}^{q3} √( ( (φ1')² + (φ2')² + 1 ) / ( 2g(x3 − p3) ) ) dx3.

Our task is to minimize T{φ1, φ2} over the set of admissible functions

    A = { (φ1, φ2) | φi : [p3, q3] → R, φi ∈ C²[p3, q3], φi(p3) = pi, φi(q3) = qi, i = 1, 2 },

subject to the constraint g( φ1(x3), φ2(x3), x3 ) = 0 for p3 ≤ x3 ≤ q3. According to the theory developed above, the solution is given by solving the Euler equations associated with f − λ(x3) g, where

    f( x3, φ1, φ2, φ1', φ2' ) = √( ( (φ1')² + (φ2')² + 1 ) / ( 2g(x3 − p3) ) )   and   g(x1, x2, x3) = x1² + x2² − x3² tan²α.

7.6.3 Differential constraints

Now consider a problem of the following general type: find a pair of admissible functions φ1(x), φ2(x) that minimizes

    ∫_0^1 f( x, φ1, φ2, φ1', φ2' ) dx
subject to the differential equation constraint

    g( x, φ1(x), φ2(x), φ1'(x), φ2'(x) ) = 0   for 0 ≤ x ≤ 1.                (7.81)

Suppose that the constraint is not integrable, i.e. suppose that there does not exist a function h(x, φ1(x), φ2(x)) such that g = dh/dx. (In dynamics, such constraints are called nonholonomic.) One can show that it is necessary that the minimizer satisfy the Euler equations associated with f − λg; in these problems too, the Lagrange multiplier λ may be a function of x. Writing h = f − λg, the minimizers satisfy the Euler equations

    d/dx( ∂h/∂φ1' ) − ∂h/∂φ1 = 0,   d/dx( ∂h/∂φ2' ) − ∂h/∂φ2 = 0   for 0 < x < 1,   (7.82)

Example: Determine functions φ1(x) and φ2(x) that minimize

    ∫_0^1 f( x, φ1, φ2, φ1', φ2' ) dx

over an admissible set of functions, subject to the nonholonomic constraint

    g( x, φ1, φ2, φ1', φ2' ) = φ2 − φ1' = 0   for 0 ≤ x ≤ 1.

On substituting for f and g, the Euler equations (7.82) reduce to

    d/dx ( ∂f/∂φ1' + λ ) − ∂f/∂φ1 = 0,   d/dx ( ∂f/∂φ2' ) − ∂f/∂φ2 + λ = 0   for 0 < x < 1.   (7.83)

Thus the functions φ1(x), φ2(x), λ(x) are determined from the three differential equations (7.81), (7.83).

Remark: Note, by substituting the constraint φ2 = φ1' into the integrand of the functional, that we can equivalently pose this problem as one for determining the single function φ1(x) that minimizes a functional whose integrand depends on x, φ1, φ1' and φ1'', over an admissible set of functions.
7.7 Piecewise smooth minimizers. Weierstrass-Erdmann corner conditions.

In order to motivate the discussion to follow, first consider the problem of minimizing the functional

    F{φ} = ∫_0^1 ( (φ')² − 1 )² dx                                           (7.84)

over functions φ with φ(0) = φ(1) = 0. This is apparently a problem of the classical type, where in the present case we are to minimize the integral of f(x, φ, φ') = ((φ')² − 1)² with respect to x over the interval [0, 1]. Assuming that the class of admissible functions consists of those functions that are C¹[0, 1] and satisfy φ(0) = φ(1) = 0, the minimizer must necessarily satisfy the Euler equation d/dx(∂f/∂φ') − ∂f/∂φ = 0. In the present case this specializes to

    2 [ (φ')² − 1 ] (2φ') = constant   for 0 ≤ x ≤ 1,

which in turn gives φ'(x) = constant for 0 ≤ x ≤ 1. On enforcing the boundary conditions φ(0) = φ(1) = 0, this gives

    φ(x) = 0   for 0 ≤ x ≤ 1.

This is an extremizer of F{φ} over the class of admissible functions under consideration. Note from (7.84) that the value of F at this particular function φ(x) = 0 is F = 1.

Note from (7.84) that F ≥ 0. It is natural to wonder whether there is a function φ∗(x) that gives F{φ∗} = 0; if there is such a function φ∗, it would be a minimizer. It follows from the non-negative character of the integrand in (7.84) that the integrand itself should then vanish almost everywhere in [0, 1], which requires that φ∗'(x) = ±1 almost everywhere in [0, 1]. If there is a function φ∗ of this type, we know that it cannot belong to the class of admissible functions considered above, since if it did, we would have found it from the preceding calculation. The functions in A were required to be C¹[0, 1] and to vanish at the two ends x = 0 and x = 1; since φ∗ ∉ A, it must not satisfy one or both of these two conditions. The problem statement requires that the boundary conditions must hold, and therefore it must be true that φ∗ is not as smooth as C¹[0, 1].

The piecewise linear function

    φ∗(x) = x for 0 ≤ x ≤ 1/2,   φ∗(x) = 1 − x for 1/2 ≤ x ≤ 1,              (7.85)

has this property: it is continuous, is piecewise C¹, satisfies the boundary conditions, and gives F{φ∗} = 0. Moreover, φ∗(x) satisfies the Euler equation except at x = 1/2.
But is it legitimate for us to consider piecewise smooth functions? If so, are there any restrictions that we must enforce? Physical problems involving discontinuities in certain physical fields, or in their derivatives, often arise when, for example, the problem concerns an interface separating two different materials. A specific example will be considered below.

7.7.1 Piecewise smooth minimizer with non-smoothness occurring at a prescribed location.

Suppose that we wish to extremize the functional

    F{φ} = ∫_0^1 f(x, φ, φ') dx

over some suitable set of admissible functions, and suppose further that we know that the extremal φ(x) is continuous but has a discontinuity in its slope at a given location x = s: i.e. φ'(s−) ≠ φ'(s+), where φ'(s±) denotes the limiting values of φ'(s ± ε) as ε → 0. Thus the set of admissible functions is composed of all functions that are continuous on [0, 1], have a continuous first derivative on either side of x = s, and satisfy the given boundary conditions φ(0) = φ0, φ(1) = φ1:

    A = { φ(·) | φ : [0,1] → R, φ ∈ C[0,1], φ ∈ C¹( [0,s) ∪ (s,1] ), φ(0) = φ0, φ(1) = φ1 }.

Observe that an admissible function is required to be continuous on [0, 1], while its first derivative is permitted to have a jump discontinuity at the given location x = s.

Figure 7.12: Extremal φ(x) and a neighboring test function φ(x) + δφ(x), both with kinks at x = s.
Suppose that F is extremized by a function φ(x) ∈ A, and suppose that this extremal has a jump discontinuity in its first derivative at x = s; see Figure 7.12. Let δφ(x) be an admissible variation, which means that the neighboring function φ(x) + δφ(x) is also in A. This implies that δφ(x) ∈ C([0, 1]) and δφ(x) ∈ C¹( [0,s) ∪ (s,1] ), that is, δφ (may) have a jump discontinuity in its first derivative at the location x = s, and that δφ(0) = δφ(1) = 0.

In view of the lack of smoothness at x = s, it is convenient to split the integral into two parts and write

    F{φ} = ∫_0^s f(x, φ, φ') dx + ∫_s^1 f(x, φ, φ') dx

and

    F{φ + δφ} = ∫_0^s f(x, φ + δφ, φ' + δφ') dx + ∫_s^1 f(x, φ + δφ, φ' + δφ') dx.

Upon calculating δF, which by definition equals F{φ + δφ} − F{φ} up to terms linear in δφ, and setting δF = 0, we obtain

    ∫_0^s ( fφ δφ + fφ' δφ' ) dx + ∫_s^1 ( fφ δφ + fφ' δφ' ) dx = 0.

Integrating the terms involving δφ' by parts leads to

    ∫_0^s [ fφ − d/dx(fφ') ] δφ dx + ∫_s^1 [ fφ − d/dx(fφ') ] δφ dx + [ fφ' δφ ]_{x=0}^{s−} + [ fφ' δφ ]_{s+}^{x=1} = 0,

which, since δφ(0) = δφ(1) = 0, simplifies to

    ∫_0^1 [ ∂f/∂φ − d/dx( ∂f/∂φ' ) ] δφ(x) dx + [ (∂f/∂φ')|_{x=s−} − (∂f/∂φ')|_{x=s+} ] δφ(s) = 0.

First, if we limit attention to variations that are such that δφ(s) = 0, the second term in the equation above vanishes and only the integral remains. Since δφ(x) can be chosen arbitrarily for all x ∈ (0, 1), x ≠ s, this implies that the term within the brackets in the integrand must vanish at each of these x's. This leads to the Euler equation

    ∂f/∂φ − d/dx( ∂f/∂φ' ) = 0   for 0 < x < 1, x ≠ s.

Second, when this is substituted back into the equation above, the integral disappears, and since the resulting equation must hold for all variations δφ(s), it follows that we must have

    (∂f/∂φ')|_{x=s−} = (∂f/∂φ')|_{x=s+}
at x = s. This is a "matching condition" or "jump condition" that relates the solution on the left of x = s to the solution on its right. The matching condition shows that even though φ' has a jump discontinuity at x = s, the quantity ∂f/∂φ' is continuous at this point.

Thus in summary, an extremal φ must obey the following boundary value problem:

    d/dx( ∂f/∂φ' ) − ∂f/∂φ = 0   for 0 < x < 1, x ≠ s,
    φ(0) = φ0,   φ(1) = φ1,                                                  (7.86)
    (∂f/∂φ')|_{x=s−} = (∂f/∂φ')|_{x=s+}   at x = s.

Example: Consider a two-phase material that occupies all of x, y-space. The material occupying x < 0 is different to the material occupying x > 0, and so x = 0 is the interface between the two materials. In particular, suppose that the refractive indices of the materials occupying x < 0 and x > 0 are n1(x, y) and n2(x, y) respectively. We are asked to determine the path y = φ(x), a ≤ x ≤ b, followed by a ray of light travelling from a point (a, A) in the left half-plane to the point (b, B) in the right half-plane; see Figure 7.13. In particular, we are to determine the conditions at the point where the ray crosses the interface between the two media.

Figure 7.13: Ray of light in a two-phase material: the path y = φ(x) runs from (a, A) to (b, B), crossing the interface x = 0 at angles θ− and θ+ to the x-axis; the refractive index is n1(x, y) for x < 0 and n2(x, y) for x > 0.
According to Fermat's principle, a ray of light travelling between two given points follows the path that it can traverse in the shortest possible time. Also, we know that light travels at a speed c/n(x, y), where n(x, y) is the index of refraction at the point (x, y). Thus the transit time is determined by integrating n/c along the path followed by the light, which, since ds = √(1 + (φ')²) dx, can be written as

    T{φ} = ∫_a^b (1/c) n(x, φ(x)) √(1 + (φ')²) dx.

Thus the problem at hand is to determine φ that minimizes the functional T{φ} over the set of admissible functions

    A = { φ(·) | φ ∈ C[a, b], φ ∈ C¹( [a, 0) ∪ (0, b] ), φ(a) = A, φ(b) = B }.

Note that this set of admissible functions allows the path followed by the light to have a kink at x = 0, even though the path is continuous.

The functional we are asked to minimize can be written in the standard form

    T{φ} = ∫_a^b f(x, φ, φ') dx,   where   f(x, φ, φ') = [ n(x, φ)/c ] √(1 + (φ')²).

Therefore

    ∂f/∂φ' = [ n(x, φ)/c ] φ' / √(1 + (φ')²),

and so the matching condition at the kink at x = 0 requires that

    (n/c) φ' / √(1 + (φ')²)   be continuous at x = 0.

Observe that if θ is the angle made by the ray of light with the x-axis at some point along its path, then tan θ = φ' and so sin θ = φ' / √(1 + (φ')²). Therefore the matching condition requires that n sin θ be continuous, or

    n+ sin θ+ = n− sin θ−,

where n± and θ± are the limiting values of n(x, φ(x)) and θ(x) as x → 0±. This is Snell's well-known law of refraction.
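Snell's law can also be checked directly from Fermat's principle without any calculus of variations: for piecewise-constant indices the extremal is a straight segment in each half-plane, so the transit time is a function of the single crossing ordinate y0, which can be minimized numerically. The sketch below is our illustration (the endpoints and indices are arbitrary, and scipy's bounded scalar minimizer is used); it confirms that n sin θ matches on the two sides of the interface.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Fermat's principle for two homogeneous half-planes with constant indices
# n1 (x < 0) and n2 (x > 0): minimize the transit time over the crossing
# point (0, y0) of a two-segment path, then compare n*sin(theta) on each side.
n1, n2 = 1.0, 1.5
a, A = -1.0, 0.0          # start point (a, A)
b, B = 1.0, 1.0           # end point (b, B)

def travel_time(y0):      # the common factor 1/c is omitted
    return (n1 * np.hypot(0.0 - a, y0 - A) +
            n2 * np.hypot(b - 0.0, B - y0))

y0 = minimize_scalar(travel_time, bounds=(A, B), method='bounded').x
sin_minus = (y0 - A) / np.hypot(0.0 - a, y0 - A)   # sin(theta-) on the left
sin_plus  = (B - y0) / np.hypot(b - 0.0, B - y0)   # sin(theta+) on the right
print(f"n- sin(theta-) = {n1 * sin_minus:.6f}")
print(f"n+ sin(theta+) = {n2 * sin_plus:.6f}")     # equal, as Snell's law requires
```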
7.7.2 Piecewise smooth minimizer with non-smoothness occurring at an unknown location

Suppose again that we wish to extremize the functional

    F{φ} = ∫_0^1 f(x, φ, φ') dx

over the admissible set of functions

    A = { φ(·) | φ : [0,1] → R, φ ∈ C[0,1], φ piecewise C¹ on [0,1], φ(0) = a, φ(1) = b }.

Just as before, the admissible functions are continuous and have a piecewise continuous first derivative, so that the first derivative of φ may have a jump discontinuity at some location x = s (we shall say that φ has a "kink" at x = s). However, in contrast to the preceding case, the location s is not known a priori and so is also to be determined.

Suppose that F is extremized by the function φ(x) and that it has a jump discontinuity in its first derivative at x = s. Consider a variation δφ(x) that vanishes at the two ends x = 0 and x = 1, δφ(0) = δφ(1) = 0, is continuous on [0, 1], and is C¹ on [0, 1] except at x = s + δs, where it has a jump discontinuity in its first derivative: δφ ∈ C[0,1], δφ ∈ C¹( [0, s+δs) ∪ (s+δs, 1] ). Note that φ(x) + δφ(x) then has kinks at both x = s and x = s + δs; that is, we have varied both the function φ(x) and the location of the kink s. See Figure 7.14.

Figure 7.14: Extremal φ(x) with a kink at x = s, and a neighboring test function φ(x) + δφ(x) with kinks at x = s and s + δs.
Since the extremal φ(x) has a kink at x = s, it is convenient to split the integral and express F{φ} as

    F{φ} = ∫_0^s f(x, φ, φ') dx + ∫_s^1 f(x, φ, φ') dx.

Similarly, since the neighboring function φ(x) + δφ(x) has kinks at x = s and x = s + δs, it is convenient to express F{φ + δφ} by splitting the integral into three terms as follows:

    F{φ + δφ} = ∫_0^s f(x, φ + δφ, φ' + δφ') dx + ∫_s^{s+δs} f(x, φ + δφ, φ' + δφ') dx
                + ∫_{s+δs}^1 f(x, φ + δφ, φ' + δφ') dx.

We can now calculate the first variation δF which, by definition, equals F{φ + δφ} − F{φ} up to terms linear in δφ. Calculating δF in this way and setting the result equal to zero leads, after integrating by parts, to

    ∫_0^1 A δφ(x) dx + B δφ(s) + C δs = 0,

where

    A = ∂f/∂φ − d/dx( ∂f/∂φ' ),
    B = (∂f/∂φ')|_{x=s−} − (∂f/∂φ')|_{x=s+},                                 (7.87)
    C = ( f − φ' ∂f/∂φ' )|_{x=s−} − ( f − φ' ∂f/∂φ' )|_{x=s+}.

By the arbitrariness of the variations above, it follows in the usual way that A, B and C must all vanish. This leads to the usual Euler equation on (0, s) ∪ (s, 1), and the following two additional requirements at x = s:

    (∂f/∂φ')|_{s−} = (∂f/∂φ')|_{s+},                                         (7.88)

    ( f − φ' ∂f/∂φ' )|_{s−} = ( f − φ' ∂f/∂φ' )|_{s+}.                       (7.89)

The two matching conditions (or jump conditions) (7.88) and (7.89) are known as the Weierstrass-Erdmann corner conditions (the term "corner" referring to the "kink" in φ). Equation (7.88) is the same condition that was derived in the preceding subsection.
Example: Find the extremals of the functional

    F{φ} = ∫_0^4 f(x, φ, φ') dx = ∫_0^4 ( (φ')² − 1 )² dx = ∫_0^4 (φ' − 1)²(φ' + 1)² dx

over the set of piecewise smooth functions subject to the end conditions φ(0) = 0, φ(4) = 2. For simplicity, restrict attention to functions that have at most one point at which φ' has a discontinuity.

Here f(x, φ, φ') = ((φ')² − 1)², and therefore, on differentiating f,

    ∂f/∂φ' = 4φ'( (φ')² − 1 ),   ∂f/∂φ = 0.                                  (7.90)

Consequently the Euler equation (at points of smoothness) is

    d/dx( fφ' ) − fφ = d/dx [ 4φ'( (φ')² − 1 ) ] = 0.                        (7.92)

First, consider an extremal that is smooth everywhere. (Such an extremal might not, of course, exist.) In this case the Euler equation (7.92) holds on the entire interval (0, 4), and so we conclude that φ'(x) = constant for 0 ≤ x ≤ 4. On integrating this and using the boundary conditions φ(0) = 0, φ(4) = 2, we find that φ(x) = x/2 for 0 ≤ x ≤ 4. In order to compare this with what follows, it is helpful to call this, say, φo. Thus

    φo(x) = x/2,   0 ≤ x ≤ 4,                                                (7.91)

is a smooth extremal of F.

Next consider a piecewise smooth extremizer of F which has a kink at some location x = s. (Again, such an extremal might not, of course, exist.) The value of s ∈ (0, 4) is not known a priori and is to be determined. The Euler equation (7.92) now holds on either side of x = s, and so we find from (7.92) that φ' = c = constant on (0, s) and φ' = d = constant on (s, 4), where c ≠ d (if c = d there would be no kink at x = s, and we have already dealt with this case above). Thus

    φ'(x) = c for 0 < x < s,   φ'(x) = d for s < x < 4.
Since φ is required to be continuous, we must have φ(s−) = φ(s+). Integrating φ' separately on (0, s) and (s, 4), and enforcing the boundary conditions φ(0) = 0, φ(4) = 2, leads to

    φ(x) = cx for 0 ≤ x ≤ s,   φ(x) = d(x − 4) + 2 for s ≤ x ≤ 4.            (7.93)

The continuity requirement cs = d(s − 4) + 2 then gives

    s = (2 − 4d)/(c − d).                                                    (7.94)

(Note that s would not exist if c = d.)

All that remains is to find c and d; the two Weierstrass-Erdmann corner conditions (7.88) and (7.89) provide us with the two equations for doing this. From (7.90) and (7.93),

    ∂f/∂φ' = 4c(c² − 1) for 0 < x < s,   ∂f/∂φ' = 4d(d² − 1) for s < x < 4,

and

    f − φ' ∂f/∂φ' = −(c² − 1)(1 + 3c²) for 0 < x < s,
    f − φ' ∂f/∂φ' = −(d² − 1)(1 + 3d²) for s < x < 4.

Therefore the Weierstrass-Erdmann corner conditions (7.88), (7.89), which require respectively the continuity of ∂f/∂φ' and f − φ' ∂f/∂φ' at x = s, give us the pair of simultaneous equations

    c(c² − 1) = d(d² − 1),   (c² − 1)(1 + 3c²) = (d² − 1)(1 + 3d²).

Keeping in mind that c ≠ d and solving these equations leads to the two solutions c = 1, d = −1 and c = −1, d = 1. Corresponding to the former we find from (7.94) that s = 3, while the latter leads to s = 1. Thus from (7.93) there are two piecewise smooth extremals φ1(x) and φ2(x) of the assumed form:

    φ1(x) = x for 0 ≤ x ≤ 3,   φ1(x) = −x + 6 for 3 ≤ x ≤ 4;

    φ2(x) = −x for 0 ≤ x ≤ 1,   φ2(x) = x − 2 for 1 ≤ x ≤ 4.
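Evaluating F at these extremals is a short quadrature, since the integrand depends only on φ'. The sketch below (illustrative code we have added) reproduces the values quoted next: F{φo} = 9/4, while F{φ1} = F{φ2} = 0.

```python
import numpy as np

# Evaluate F{phi} = integral over [0,4] of ((phi')^2 - 1)^2 dx for the smooth
# extremal phi_o and the two piecewise smooth extremals phi_1, phi_2.
x = np.linspace(0.0, 4.0, 400001)

def F(dphi):                              # the integrand depends only on phi'
    y = (dphi**2 - 1.0)**2
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0   # trapezoidal rule

dphi_o = np.full_like(x, 0.5)             # phi_o = x/2
dphi_1 = np.where(x < 3.0, 1.0, -1.0)     # phi_1: kink at s = 3
dphi_2 = np.where(x < 1.0, -1.0, 1.0)     # phi_2: kink at s = 1
print(f"F(phi_o) = {F(dphi_o):.4f}")      # 2.2500 = 9/4
print(f"F(phi_1) = {F(dphi_1):.4e}")      # 0
print(f"F(phi_2) = {F(dphi_2):.4e}")      # 0
```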
Figure 7.15 shows graphs of φo, φ1 and φ2. By evaluating the functional F at each of the extremals φo, φ1 and φ2, we find

    F{φo} = 9/4,   F{φ1} = F{φ2} = 0.

Figure 7.15: Smooth extremal φ0(x) and piecewise smooth extremals φ1(x) and φ2(x).

Remark: By inspection of the given functional

    F{φ} = ∫_0^4 ( (φ')² − 1 )² dx,

it is clear that (a) F ≥ 0, and (b) F = 0 if and only if φ' = ±1 everywhere (except at isolated points where φ' may be undefined). The extremals φ1 and φ2 have this property and therefore correspond to absolute minimizers of F.

7.8 Generalization to higher dimensional space.

In order to help motivate the way in which we will approach higher-dimensional problems (which will in fact be entirely parallel to the approach we took for one-dimensional problems), we begin with some preliminary observations.

First, consider the one-dimensional variational problem of minimizing a functional

    F{φ} = ∫_0^1 f(x, φ, φ') dx
on a set of suitably smooth functions with no prescribed boundary conditions at either end. In deriving the Euler equation in the one-dimensional case, our strategy was to exploit the fact that the variation δφ(x) was arbitrary in the interior 0 < x < 1 of the domain. This motivated us to express the integrand in the form of some quantity A (independent of any variations) multiplied by δφ(x); the arbitrariness of δφ then allowed us to conclude that A must vanish on the entire domain. Similarly, concerning the boundary terms, in the one-dimensional case we were able to exploit the fact that δφ and its derivative δφ' are arbitrary at the boundary points x = 0 and x = 1. Thus the goal in our calculations was to express the boundary terms as some quantity B independent of any variations multiplied by δφ(0), another quantity C independent of any variations multiplied by δφ'(0), and so on.

The analogous two-dimensional problem would be to consider a set of suitably smooth functions φ(x, y) defined on a domain D of the x, y-plane, and to minimize a given functional

    F{φ} = ∫∫_D f( x, y, φ, ∂φ/∂x, ∂φ/∂y, ∂²φ/∂x², ∂²φ/∂x∂y, ∂²φ/∂y² ) dA

over this set of functions, with no boundary conditions prescribed anywhere on the boundary ∂D.

We approach two-dimensional problems similarly. In the interior, our strategy is to exploit the fact that δφ(x, y) is arbitrary in the interior of D, and so we attempt to express the integrand as some quantity A that is independent of any variations multiplied by δφ; for the boundary terms, our strategy is to exploit the fact that δφ and its normal derivative ∂(δφ)/∂n are arbitrary on the boundary ∂D. Thus in the two-dimensional case we take the first variation of F and carry out appropriate calculations that lead us to an equation of the form

    δF = ∫∫_D A δφ(x, y) dA + ∫_{∂D} B δφ(x, y) ds + ∫_{∂D} C ∂(δφ(x, y))/∂n ds = 0,   (7.95)

where A, B, C are independent of δφ and its derivatives, and the latter two integrals are on the boundary of the domain D. We then exploit the arbitrariness of δφ(x, y) on the interior of D, and the arbitrariness of δφ and ∂(δφ)/∂n on the boundary ∂D, to conclude that the minimizer must satisfy the partial differential equation A = 0 for (x, y) ∈ D and the boundary conditions B = C = 0 on ∂D.
that are only evaluated at the boundary points x = 0 and x = 1. The analog of this in higher dimensions is carried out using the divergence theorem, which in two dimensions reads

∫_D (∂P/∂x + ∂Q/∂y) dA = ∫_∂D (P nx + Q ny) ds,   (7.96)

which expresses the left-hand side, an integral over D, in a form that only involves terms on the boundary. Here nx, ny are the components of the unit normal vector n on ∂D that points out of D. Note that in the special case where P = ∂χ/∂x and Q = ∂χ/∂y for some χ(x, y), the integrand of the right-hand side is ∂χ/∂n.

Remark: The derivative of a function φ(x, y) in a direction corresponding to a unit vector m is written as ∂φ/∂m and defined by ∂φ/∂m = ∇φ · m = (∂φ/∂x) mx + (∂φ/∂y) my, where mx and my are the components of m in the x- and y-directions. On the boundary ∂D of a two-dimensional domain D we frequently need to calculate the derivative of φ in the directions n and s that are normal and tangential to ∂D. In vector form we have

∇φ = (∂φ/∂x) i + (∂φ/∂y) j = (∂φ/∂n) n + (∂φ/∂s) s,

where i and j are unit vectors in the x- and y-directions respectively.

Recall also that a function φ(x, y) and its tangential derivative ∂φ/∂s along the boundary ∂D are not independent of each other in the following sense: if one knows the values of φ along ∂D one can differentiate φ along the boundary to get ∂φ/∂s, and conversely if one knows the values of ∂φ/∂s along ∂D one can integrate it along the boundary to find φ to within a constant. This is why equation (7.95) does not involve a term of the form E ∂(δφ)/∂s integrated along the boundary ∂D: such a term can be rewritten as the integral of −(∂E/∂s) δφ along the boundary.
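Before proceeding, the divergence theorem (7.96) can be checked numerically on a simple case. The following sketch (an illustration, not part of the notes) takes the unit disk for D and the sample fields P = xy², Q = x + y; both sides should come out to 5π/4.

import numpy as np

nt, nr = 2000, 1000
t = np.linspace(0, 2*np.pi, nt, endpoint=False)   # angle / boundary nodes
r = (np.arange(nr) + 0.5) / nr                    # midpoint-rule radii
dt, dr = 2*np.pi/nt, 1.0/nr

# Left-hand side of (7.96): dP/dx + dQ/dy = y^2 + 1 over the unit disk,
# integrated in polar coordinates with area element r dr dtheta.
R, T = np.meshgrid(r, t)
lhs = np.sum(((R*np.sin(T))**2 + 1.0) * R) * dr * dt

# Right-hand side: boundary integral of P*nx + Q*ny with n = (cos t, sin t)
# and ds = dtheta on the unit circle.
xb, yb = np.cos(t), np.sin(t)
rhs = np.sum((xb*yb**2)*xb + (xb + yb)*yb) * dt

print(lhs, rhs)   # both should be close to 5*pi/4 = 3.92699...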
Example 1: A stretched membrane. A stretched flexible membrane occupies a regular region D of the x, y-plane. A pressure p(x, y) is applied normal to the surface of the membrane in the negative z-direction, and we let u(x, y) be the resulting deflection of the membrane in the z-direction. The membrane is fixed along its entire edge ∂D, so that

u = 0 for (x, y) ∈ ∂D.   (7.97)

Figure 7.16: A stretched elastic membrane whose mid-plane occupies a region D of the x, y-plane and whose boundary ∂D is fixed. The membrane surface is subjected to a pressure loading p(x, y) that acts in the negative z-direction.

One can show that the potential energy Φ associated with any deflection u that is compatible with the given boundary condition is

Φ{u} = ∫_D ½ (u,x² + u,y²) dA − ∫_D p u dA,

where we have taken the relevant stiffness of the membrane to be unity. Here we are using the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate; for example u,x = ∂u/∂x and u,xy = ∂²u/∂x∂y. The actual deflection of the membrane is the function that minimizes the potential energy over the set of test functions

A = {u | u ∈ C²(D), u = 0 for (x, y) ∈ ∂D}.

The first variation of Φ is

δΦ = ∫_D (u,x δu,x + u,y δu,y) dA − ∫_D p δu dA,

where an admissible variation δu(x, y) vanishes on ∂D. In order to make use of the divergence theorem and convert the area integral into a boundary integral we must write the integrand so that it involves terms of the form (...),x + (...),y; see (7.96). This suggests that we rewrite the preceding equation as

δΦ = ∫_D [(u,x δu),x + (u,y δu),y] dA − ∫_D (u,xx + u,yy) δu dA − ∫_D p δu dA,

or equivalently as

δΦ = ∫_D [(u,x δu),x + (u,y δu),y] dA − ∫_D (u,xx + u,yy + p) δu dA.
By using the divergence theorem on the first integral we get

δΦ = ∫_∂D (u,x nx + u,y ny) δu ds − ∫_D (u,xx + u,yy + p) δu dA = ∫_∂D (∂u/∂n) δu ds − ∫_D (∇²u + p) δu dA,   (7.98)

where n is the unit outward normal along ∂D. Since the variation δu vanishes on ∂D, the first integral drops out and we are left with

δΦ = −∫_D (∇²u + p) δu dA,   (7.99)

which must vanish for all admissible variations δu(x, y). Thus the minimizer satisfies the partial differential equation

∇²u + p = 0 for (x, y) ∈ D,

which is the Euler equation in this case; it is to be solved subject to the prescribed boundary condition (7.97). Note that if some part of the boundary of D had not been fixed, then we would not have δu = 0 on that part, in which case (7.98) and (7.99) would yield the natural boundary condition ∂u/∂n = 0 on that segment.
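The Euler equation ∇²u + p = 0 with u = 0 on ∂D can be solved numerically; the following minimal finite-difference sketch (an illustration, not part of the notes; the grid size n and uniform load p0 are arbitrary sample choices) uses Jacobi iteration on the unit square.

import numpy as np

n, p0 = 50, 1.0                  # interior grid points per side; uniform pressure
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))     # includes the boundary, which stays 0 (eq. 7.97)

# Jacobi sweeps: at convergence the array satisfies the standard 5-point
# discretization of  u,xx + u,yy + p0 = 0  at every interior node.
for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2] + h**2 * p0)

print("max deflection ~", u.max())   # roughly 0.0737*p0 for the unit square

A few thousand sweeps suffice here; any standard Poisson solver would do equally well, since the variational problem has reduced to a linear boundary value problem.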
Example 2: The Kirchhoff theory of plates. We consider the bending of a thin plate according to the so-called Kirchhoff theory. The mid-plane of the plate occupies a domain of the x, y-plane, and w(x, y) denotes the deflection (displacement) of a point on the mid-plane in the z-direction. Solely for purposes of mathematical simplicity we shall assume that the Poisson ratio ν of the material is zero; a discussion of the case ν ≠ 0 can be found in many books, for example in “Energy and Finite Element Methods in Structural Mechanics” by I.H. Shames and C.L. Dym.

The basic constitutive relationships of elastic plate theory relate the internal moments Mx, My, Mxy, Myx (see Figure 7.17) to the second derivatives of the displacement field w,xx, w,xy, w,yy by

Mx = −D w,xx,  My = −D w,yy,  Mxy = Myx = −D w,xy,   (7.100)

where a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate and D is the plate bending stiffness; when ν = 0, D = Et³/12, where E is the Young's modulus of the material and t is the thickness of the plate. The shear forces in the plate are given by

Vx = −D (∇²w),x,  Vy = −D (∇²w),y.   (7.101)

The elastic energy of the plate, per unit area of its mid-plane, is

(D/2) (w,xx² + 2 w,xy² + w,yy²),   (7.102)

which, with the sign convention (7.100), equals −½ (Mx w,xx + My w,yy + Mxy w,xy + Myx w,yx).

Figure 7.17: A differential element dx × dy × t of a thin plate. A bold arrow represents a force, and thus Vx and Vy are (shear) forces. A bold arrow with two arrowheads represents a moment whose sense is given by the right-hand rule; thus Mxy and Myx are (twisting) moments while Mx and My are (bending) moments.

Figure 7.18: Left: A thin elastic plate whose mid-plane occupies a region Ω of the x, y-plane. The segment ∂Ω1 of the boundary is clamped while the remainder ∂Ω2 is free of loading. Right: A rectangular a × b plate with a load-free edge at its right-hand side x = a (where nx = 1, ny = 0 and sx = 0, sy = 1).
Consider a thin elastic plate whose mid-plane occupies a domain Ω of the x, y-plane, as shown in the left-hand diagram of Figure 7.18. A part of the plate boundary, denoted by ∂Ω1, is clamped, while the remainder ∂Ω2 is free of any external loading. A normal loading p(x, y) is applied on the flat face of the plate in the −z-direction. Thus if w(x, y) denotes the deflection of the plate in the z-direction, we have the geometric boundary conditions

w = ∂w/∂n = 0 for (x, y) ∈ ∂Ω1.   (7.103)

The total potential energy of the system is

Φ{w} = ∫_Ω { (D/2)(w,xx² + 2 w,xy² + w,yy²) − p w } dA,   (7.104)

where the first group of terms represents the elastic energy in the plate and the last term represents the potential energy of the pressure loading (the negative sign arising from the fact that p acts in the minus z-direction while w is the deflection in the positive z-direction). This functional Φ is defined on the set of all kinematically admissible deflection fields, which is the set of all suitably smooth functions w(x, y) that satisfy the geometric requirements (7.103). The actual deflection field is the one that minimizes the potential energy Φ over this set.

It is worth noting the following puzzling question. Consider the rectangular plate shown in the right-hand diagram of Figure 7.18. From Figure 7.17 we know that there is a bending moment Mx, a twisting moment Mxy, and a shear force Vx acting on any surface x = constant in the plate. Since the right-hand edge x = a is free of loading, one would expect to have the three conditions Mx = Mxy = Vx = 0 along that boundary. However we will find that the differential equation to be solved in the interior of the plate requires (and can only accommodate) two boundary conditions at any point on the edge. The question then arises as to what the correct boundary conditions on this edge should be. Our variational approach will give us precisely two natural boundary conditions on this edge; they will involve Mx, Mxy and Vx, but will not require that each of them vanish individually.

We now determine the Euler equation and natural boundary conditions associated with (7.104) by calculating the first variation of Φ{w} and setting it equal to zero:

∫_Ω { w,xx δw,xx + 2 w,xy δw,xy + w,yy δw,yy − (p/D) δw } dA = 0.   (7.105)

To simplify this we begin by rearranging the terms into a form that will allow us to use the divergence theorem, thereby converting part of the area integral on Ω into a boundary
integral on ∂Ω. In order to use the divergence theorem we must write the integrand so that it involves terms of the form (...),x + (...),y; see (7.96). Accordingly we rewrite (7.105) as

0 = ∫_Ω { (w,xx δw,x + w,xy δw,y),x + (w,xy δw,x + w,yy δw,y),y − (w,xxx + w,xyy) δw,x − (w,xxy + w,yyy) δw,y − (p/D) δw } dA
  = ∫_∂Ω { (w,xx δw,x + w,xy δw,y) nx + (w,xy δw,x + w,yy δw,y) ny } ds − ∫_Ω { (w,xxx + w,xyy) δw,x + (w,xxy + w,yyy) δw,y + (p/D) δw } dA
  = ∫_∂Ω I1 ds − ∫_Ω I2 dA.   (7.106)

We have used the divergence theorem (7.96) in going from the first equation above to the second, and in the last step we have let I1 and I2 denote the integrands of the boundary and area integrals respectively.

To simplify the area integral in (7.106) we again rearrange the terms in I2 into a form that will allow us to use the divergence theorem. Thus

∫_Ω I2 dA = ∫_Ω { ((w,xxx + w,xyy) δw),x + ((w,xxy + w,yyy) δw),y − (w,xxxx + 2 w,xxyy + w,yyyy − p/D) δw } dA
  = ∫_∂Ω { (w,xxx + w,xyy) nx + (w,xxy + w,yyy) ny } δw ds − ∫_Ω (∇⁴w − p/D) δw dA
  = ∫_∂Ω P1 δw ds − ∫_Ω P2 δw dA,   (7.107)
where we have set

P1 = (w,xxx + w,xyy) nx + (w,xxy + w,yyy) ny,  P2 = ∇⁴w − p/D,   (7.108)

and

∇⁴w = ∇²(∇²w) = w,xxxx + 2 w,xxyy + w,yyyy.

In the preceding calculation we have again used the divergence theorem (7.96), in going from the second equation of (7.107) to the third.

Next we simplify the boundary term in (7.106) by converting the derivatives of the variation with respect to x and y into derivatives with respect to the normal and tangential coordinates n and s. To do this we use the fact that

∇(δw) = δw,x i + δw,y j = δw,n n + δw,s s,

from which it follows that

δw,x = δw,n nx + δw,s sx,  δw,y = δw,n ny + δw,s sy.

Thus from (7.106),

∫_∂Ω I1 ds = ∫_∂Ω { (w,xx nx + w,xy ny) δw,x + (w,xy nx + w,yy ny) δw,y } ds
  = ∫_∂Ω { (w,xx nx + w,xy ny)(δw,n nx + δw,s sx) + (w,xy nx + w,yy ny)(δw,n ny + δw,s sy) } ds
  = ∫_∂Ω { (w,xx nx² + 2 w,xy nx ny + w,yy ny²) δw,n + (w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy) δw,s } ds
  = ∫_∂Ω { (w,xx nx² + 2 w,xy nx ny + w,yy ny²) δw,n + I3 } ds.   (7.109)

To further simplify this we have set I3 equal to the last term of the integrand in (7.109),

I3 = (w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy) δw,s,

and this term can be written as

∫_∂Ω I3 ds = ∫_∂Ω (w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy) ∂(δw)/∂s ds
  = ∫_∂Ω ∂/∂s { (w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy) δw } ds − ∫_∂Ω ∂/∂s (w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy) δw ds.   (7.110)

If a field f(x, y) varies smoothly along ∂Ω, and if the curve ∂Ω itself is smooth, then

∫_∂Ω ∂f/∂s ds = 0,   (7.111)
112) into (7.xy nx sy + w. GENERALIZATION TO HIGHER DIMENSIONAL SPACE.xxy ny + w.109) yields I1 ds = ∂Ω ∂Ω P3 ∂ (δw) ds − ∂n ∂Ω ∂ (P4 ) δw ds.yy ny sy ∂Ω . y) ∈ Ω. x y P4 = w.xyy nx + w. Thus the variations δw and ∂(δw)/∂n must also vanish on ∂Ω1 . ∂Ω = ∂Ω1 ∪ ∂Ω2 . Thus (7.s δw ds.115) which must hold for all admissible variations δw.116) Returning to (7.115) with this gives − P1 + ∂Ω δw ds + ∂ (δw) ds = 0. P3 ∂Ω (7. ∂s (7.yy ny sy ) = 0 for (x. (7.yy ny sy .113) and (7.xy nx ny + w. It follows from this that the ﬁrst term in the last expression of (7. 175 since this is an integral over a closed curve8 .xx n2 + w. ∂n (7.xxx nx + w.yyy ny ∂ + ∂s (w.yy ny = 0.7.119) In the present setting one would have this degree of smoothness if there are no concentrated loads applied on the boundary of the plate ∂Ω and the boundary curve itself has no corners. First restrict attention to variations which vanish on the boundary ∂Ω and whose normal derivative ∂(δw)/∂n also vanish on ∂Ω.8.112) Substituting (7.xy nx ny + w. (7.117) Since the portion ∂Ω1 of the boundary is clamped we have w = ∂w/∂n = 0 for (x. y) ∈ ∂Ω1 .xx nx sx + w. Thus we conclude that P1 + ∂P4 /∂s = 0 and P3 = 0 on ∂Ω2 : w.107) into (7.114) P2 δw dA − P1 + ∂Ω ∂ (P4 ) ∂s δw ds + ∂Ω P3 ∂ (δw) ds = 0 ∂n (7. Finally. 2 2 w. substituting (7.xy sx ny + w.xy ny sx + w.yy n2 . 8 .xy nx sy + w.106) leads to Ω (7.118) for variations δw and ∂(δw)/∂n that are arbitrary on ∂Ω2 where ∂Ω2 is the complement of ∂Ω1 .110) vanishes and so I ∂Ω 3 ds = − w.xy ny sx + w. y) ∈ ∂Ω2 .xx nx + w.xy nx ny + w.117) simpliﬁes to − P1 + ∂Ω2 ∂ (P4 ) ∂s δw ds + ∂Ω2 P3 ∂ (δw) ds = 0 ∂n (7.e.113) where we have set P3 = w. i. This leads us to the Euler equation P2 = 0: 4 w − p/D = 0 ∂ (P4 ) ∂s for (x.xx nx sx + w.xy nx sy + w.xy nx ny + w.xx nx sx + w.
Remark: If we define the moments Mn, Mns and force Vn by

Mn = −D (w,xx nx nx + 2 w,xy nx ny + w,yy ny ny),
Mns = −D (w,xx nx sx + w,xy nx sy + w,yx ny sx + w,yy ny sy),   (7.120)
Vn = −D ((w,xxx + w,xyy) nx + (w,yxx + w,yyy) ny),

then the two natural boundary conditions (7.119) can be written as

Mn = 0,  Vn + ∂Mns/∂s = 0.   (7.121)

As a special case suppose that the plate is rectangular, 0 ≤ x ≤ a, 0 ≤ y ≤ b, and that the right edge x = a, 0 ≤ y ≤ b, is free of load; see the right diagram in Figure 7.18. Then nx = 1, ny = 0, sx = 0, sy = 1 on this edge, and so (7.120) simplifies to

Mn = −D w,xx,  Mns = −D w,xy,  Vn = −D (w,xxx + w,xyy),   (7.122)

which because of (7.100) and (7.101) shows that in this case Mn = Mx, Mns = Mxy, Vn = Vx. Thus the natural boundary conditions (7.121) can be written as

Mx = 0,  ∂Mxy/∂y + Vx = 0.   (7.123)

This answers the question we posed soon after (7.101) as to what the correct boundary conditions on a free edge should be. We had noted that intuitively we would have expected the moments and forces to vanish on a free edge, and therefore that Mx = Mxy = Vx = 0 there; but this is in contradiction to the mathematical fact that the differential equation (7.116) only requires two conditions at each point on the boundary. The two natural boundary conditions (7.123) require that certain combinations of Mx, Mxy and Vx vanish, but not that all three vanish individually.

Example 3: Minimal surface equation. Let C be a closed curve in R³ as sketched in Figure 7.19. From among all surfaces S in R³ that have C as its boundary, we wish to
determine the surface that has minimum area. As a physical example, if C corresponds to a wire loop which we dip in a soapy solution, a thin soap film will form across C. The surface that forms is the one that minimizes the total surface energy of the film, which (if the surface energy density is constant) is the surface with minimal area.

Suppose that the projection of C onto the x, y-plane is denoted by ∂D, and let D denote the simply connected region contained within ∂D. Suppose further that C is characterized by z = h(x, y) for (x, y) ∈ ∂D. Let z = φ(x, y) for (x, y) ∈ D describe a surface S in R³ that has C as its boundary; necessarily φ = h on ∂D. Thus the admissible set of functions we are considering is

A = {φ | φ : D → R, φ ∈ C¹(D), φ = h on ∂D}.

Figure 7.19: The closed curve C: z = h(x, y) in R³ is given. The curve ∂D is the projection of C onto the x, y-plane. From among all surfaces S: z = φ(x, y) in R³ that have C as its boundary, the surface with minimal area is to be sought.

Consider a rectangular differential element on the x, y-plane that is contained within D. The vector joining (x, y) to (x+dx, y) is dx = dx i, while the vector joining (x, y) to (x, y+dy) is dy = dy j. If du and dv are vectors on the surface z = φ(x, y) whose projections are dx and dy respectively, then we know that

du = dx i + φx dx k,  dv = dy j + φy dy k.

The vectors du and dv define a parallelogram on the surface z = φ(x, y), and the area of this parallelogram is |du × dv|. Thus the area of a differential element on S is

|du × dv| = |−φx dxdy i − φy dxdy j + dxdy k| = √(1 + φx² + φy²) dxdy.

Consequently the problem at hand is to minimize the functional

F{φ} = ∫_D √(1 + φx² + φy²) dA

over the set of admissible functions A. It is left as an exercise to show that setting the first variation of F equal to zero leads to the so-called minimal surface equation

(1 + φy²) φxx − 2 φx φy φxy + (1 + φx²) φyy = 0.

Remark: See en.wikipedia.org/wiki/Soap_bubble and www.susqu.edu/facstaff/b/brakke/ for additional discussion.
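As a quick plausibility check of the minimal surface equation (not part of the notes), one can verify symbolically that a classical minimal surface satisfies it. The sketch below uses Scherk's surface φ(x, y) = ln(cos x / cos y), defined for |x|, |y| < π/2.

import sympy as sp

x, y = sp.symbols('x y')
phi = sp.log(sp.cos(x) / sp.cos(y))          # Scherk's first surface
px, py = phi.diff(x), phi.diff(y)
pxx, pxy, pyy = phi.diff(x, 2), phi.diff(x, y), phi.diff(y, 2)

# Left-hand side of the minimal surface equation derived above:
lhs = (1 + py**2)*pxx - 2*px*py*pxy + (1 + px**2)*pyy
print(sp.simplify(lhs))   # expected output: 0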
7.9 Second variation. Another necessary condition for a minimum.

In order to illustrate the basic ideas of this section in the simplest possible setting, we confine the discussion to the particular functional

F{φ} = ∫₀¹ f(x, φ, φ′) dx

defined over a set of admissible functions A. Suppose that a particular function φ minimizes F, and that for some given function η the one-parameter family of functions φ + εη is admissible for all sufficiently small values of the parameter ε. Define F̂(ε) by

F̂(ε) = F{φ + εη} = ∫₀¹ f(x, φ + εη, φ′ + εη′) dx,

so that by Taylor expansion

F̂(ε) = F̂(0) + ε F̂′(0) + (ε²/2) F̂″(0) + O(ε³),

where

F̂(0) = ∫₀¹ f(x, φ, φ′) dx = F{φ},

ε F̂′(0) = ε ∫₀¹ (fφ η + fφ′ η′) dx = δF{φ, η},

ε² F̂″(0) = ε² ∫₀¹ { fφφ η² + 2 fφφ′ η η′ + fφ′φ′ (η′)² } dx = δ²F{φ, η},

the last equality serving as the definition of the second variation δ²F{φ, η}.
Since φ minimizes F, it follows that ε = 0 minimizes F̂(ε), and consequently that δF{φ, η} = 0 and δ²F{φ, η} ≥ 0. Thus a necessary condition for a function φ to minimize a functional F is that, in addition to the requirement δF{φ, η} = 0, the second variation of F be non-negative for all admissible variations δφ:

δ²F{φ, δφ} = ∫₀¹ { fφφ (δφ)² + 2 fφφ′ (δφ)(δφ′) + fφ′φ′ (δφ′)² } dx ≥ 0,   (7.124)

where we have set δφ = εη. The inequality is reversed if φ maximizes F. The condition δ²F{φ, η} ≥ 0 is necessary but not sufficient for the functional F to have a minimum at φ; we shall not discuss sufficient conditions in general in these notes.

Proposition (Legendre condition): A necessary condition for (7.124) to hold is that

fφ′φ′(x, φ(x), φ′(x)) ≥ 0

for the minimizing function φ.

Example: Consider a curve in the x, y-plane characterized by y = φ(x) that begins at (0, φ0) and ends at (1, φ1). From among all such curves, find the one that, when rotated about the x-axis, generates the surface of minimum area. Thus we are asked to minimize the functional

F{φ} = ∫₀¹ f(x, φ, φ′) dx, where f(x, φ, φ′) = φ √(1 + (φ′)²),

over a set of admissible functions that satisfy the boundary conditions φ(0) = φ0, φ(1) = φ1. A function φ that minimizes F must satisfy the boundary value problem consisting of the Euler equation and the given boundary conditions:

d/dx [ φ φ′ / √(1 + (φ′)²) ] − √(1 + (φ′)²) = 0,  φ(0) = φ0,  φ(1) = φ1.

The general solution of this Euler equation is

φ(x) = α cosh((x − β)/α) for 0 ≤ x ≤ 1,
where the constants α and β are determined through the boundary conditions. To test the Legendre condition we calculate fφ′φ′ and find that

fφ′φ′ = φ / (1 + (φ′)²)^(3/2),

which, when evaluated at the particular function φ(x) = α cosh((x − β)/α), yields

fφ′φ′ = α / cosh²((x − β)/α).

Therefore as long as α > 0 the Legendre condition is satisfied.
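The constants α and β must in general be found numerically. The sketch below (an illustration, not part of the notes; the end heights phi0 = phi1 = 1 are an assumed sample) solves the two boundary conditions for α and β and then checks the sign of α, which by the computation above decides the Legendre condition.

import numpy as np
from scipy.optimize import fsolve

phi0, phi1 = 1.0, 1.0    # sample end heights (an assumption)

def residual(v):
    alpha, beta = v
    return [alpha*np.cosh((0.0 - beta)/alpha) - phi0,
            alpha*np.cosh((1.0 - beta)/alpha) - phi1]

alpha, beta = fsolve(residual, x0=[0.9, 0.5])
print(alpha, beta, "Legendre condition satisfied:", alpha > 0)

Note that for end heights like these, two distinct catenaries satisfy the boundary conditions; fsolve returns the root nearest the initial guess.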
7.10 Sufficient condition for minimization of convex functionals

We now turn to a brief discussion of sufficient conditions for a minimum for a special class of functionals. It is useful to begin by reviewing the question of finding the minimum of a real-valued function of a real variable. A function F(x) defined for x ∈ A with continuous first derivatives is said to be convex if

F(x1) ≥ F(x2) + F′(x2)(x1 − x2) for all x1, x2 ∈ A;

see Figure 7.20 for a geometric interpretation of convexity.

Figure 7.20: A convex curve y = F(x) lies above the tangent line through any point of the curve.

If a convex function has a stationary point at, say, x0, then by setting x2 = x0 in the preceding inequality it follows that x0 is a minimizer of F. Therefore a stationary point of a convex function is necessarily a minimizer. If F is strictly convex on A, i.e. if F is convex and F(x1) = F(x2) + F′(x2)(x1 − x2) if and only if x1 = x2, then F can have only one stationary point and so can have only one interior minimum.

This is also true for a real-valued function F with continuous first derivatives on a domain A in Rⁿ, where convexity is defined by⁹

F(x1) ≥ F(x2) + δF(x2, x1 − x2) for all x1, x2 ∈ A.

If such a convex function has a stationary point at x0, then since δF(x0, y) = 0 for all y, it follows that x0 is a minimizer of F. If F is strictly convex on A, i.e. if F is convex and F(x1) = F(x2) + δF(x2, x1 − x2) if and only if x1 = x2, then F can have only one stationary point and so can have only one interior minimum.

We now turn to a functional F{φ}, which is said to be convex on A if

F{φ + η} ≥ F{φ} + δF{φ, η} for all φ, φ + η ∈ A.

If F is stationary at φ0 ∈ A, then δF{φ0, η} = 0 for all admissible η, and it follows that φ0 is in fact a minimizer of F. Therefore a stationary point of a convex functional is necessarily a minimizer.

In general it might not be simple to test whether this convexity condition holds in a particular case. For example, consider the generic functional

F{φ} = ∫₀¹ f(x, φ, φ′) dx.   (7.125)

Then

δF{φ, η} = ∫₀¹ ( ∂f/∂φ η + ∂f/∂φ′ η′ ) dx,

and so the convexity condition F{φ + η} − F{φ} ≥ δF{φ, η} requires that

∫₀¹ { f(x, φ + η, φ′ + η′) − f(x, φ, φ′) } dx ≥ ∫₀¹ ( ∂f/∂φ η + ∂f/∂φ′ η′ ) dx.   (7.126)

⁹ See equation (??) for the definition of δF(x, y).
It is readily seen that a sufficient condition for (7.126) to hold is that the integrands satisfy the inequality

f(x, y + v, z + w) − f(x, y, z) ≥ (∂f/∂y) v + (∂f/∂z) w   (7.127)

for all (x, y, z) and (x, y + v, z + w) in the domain of f. This is precisely the requirement that the function f(x, y, z) be a convex function of y, z at fixed x.

Thus, in summary: if the integrand f of the functional F defined in (7.125) satisfies the convexity condition (7.127), then a function φ that extremizes F is in fact a minimizer of F. Note that this is simply a sufficient condition for ensuring that an extremum is a minimum.

Remark: In the special case where f(x, y, z) is independent of y, one sees from basic calculus that if ∂²f/∂z² > 0 then f(x, z) is a strictly convex function of z at each fixed x.

Example: Geodesics. Find the curve of shortest length that lies entirely on a circular cylinder of radius a, beginning (in circular cylindrical coordinates (r, θ, ξ)) at (a, θ1, ξ1) and ending at (a, θ2, ξ2), as shown in Figure 7.21.

Figure 7.21: A curve that lies entirely on a circular cylinder of radius a, beginning at (a, θ1, ξ1) and ending at (a, θ2, ξ2); on the cylinder, x = a cos θ, y = a sin θ, ξ = ξ(θ) for θ1 ≤ θ ≤ θ2.

We can characterize a curve in R³ parametrically, using circular cylindrical coordinates, by r = r(θ), ξ = ξ(θ), θ1 ≤ θ ≤ θ2. When the curve lies on the surface of a circular cylinder of radius a this specializes to

r = a,  ξ = ξ(θ),  θ1 ≤ θ ≤ θ2.
Since the arc length can be written as

ds = √(dr² + r² dθ² + dξ²) = √( (r′(θ))² + (r(θ))² + (ξ′(θ))² ) dθ = √( a² + (ξ′(θ))² ) dθ,

our task is to minimize the functional

F{ξ} = ∫_θ1^θ2 f(θ, ξ(θ), ξ′(θ)) dθ, where f(x, y, z) = √(a² + z²),

over the set of all suitably smooth functions ξ(θ) defined for θ1 ≤ θ ≤ θ2 which satisfy ξ(θ1) = ξ1, ξ(θ2) = ξ2.

Evaluating the necessary condition δF = 0 leads to the Euler equation. This second-order differential equation for ξ(θ) can be readily solved, which, after using the boundary conditions ξ(θ1) = ξ1, ξ(θ2) = ξ2, leads to

ξ(θ) = ξ1 + (ξ1 − ξ2)/(θ1 − θ2) (θ − θ1).   (7.128)

Direct differentiation of f(x, y, z) = √(a² + z²) shows that

∂²f/∂z² = a² / (a² + z²)^(3/2) > 0,

and so f is a strictly convex function of z. Thus the curve of minimum length is given uniquely by (7.128); it is a helix. Note that if the circular cylindrical surface is cut along a vertical line and unrolled into a flat sheet, this curve unfolds into a straight line.
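A simple numerical illustration (not from the notes; the cylinder radius and end points are sample values) confirms that the helix (7.128) is shorter than nearby competing curves on the cylinder.

import numpy as np

a, th1, th2 = 1.0, 0.0, np.pi
xi1, xi2 = 0.0, 2.0
th = np.linspace(th1, th2, 20001)

def length(xi):
    # arc length of xi(theta) on the cylinder: integral of sqrt(a^2 + xi'^2)
    dxi_dth = np.diff(xi) / np.diff(th)
    return float(np.sum(np.sqrt(a**2 + dxi_dth**2) * np.diff(th)))

helix = xi1 + (xi1 - xi2)/(th1 - th2) * (th - th1)          # eq. (7.128)
bent  = helix + 0.3*np.sin((th - th1)*np.pi/(th2 - th1))    # same end points

print(length(helix))   # ~ sqrt(a^2*(th2-th1)^2 + (xi2-xi1)^2) = 3.7242
print(length(bent))    # strictly larger, as the convexity argument predicts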
7.11 Direct method of the calculus of variations and minimizing sequences.

We now turn to a different method of seeking minima and, for purposes of introduction, begin by reviewing the familiar case of a real-valued function f(x) of a real variable x ∈ (−∞, ∞). Consider the specific example f(x) = x². This function is non-negative and has a minimum value of zero, which it attains at x = 0. Consider the sequence of numbers x0, x1, x2, x3, ..., where xk = 1/2^k, and note that

lim_{k→∞} f(xk) = 0.

Figure 7.22: (a) The function f(x) = x² for −∞ < x < ∞ and (b) the function f(x) = 1 for x ≤ 0, f(x) = x² for x > 0.
The sequence 1/2, 1/2², ..., 1/2^k, ... is called a minimizing sequence, in the sense that the value of the function f(xk) converges to the minimum value of f as k → ∞. Moreover, observe that

lim_{k→∞} xk = 0

as well, and so the sequence itself converges to the minimizer of f, i.e. to x = 0. This latter feature holds because in this example

f( lim_{k→∞} xk ) = lim_{k→∞} f(xk).
As we know, not all functions have a minimum value, even if they happen to have a finite greatest lower bound. We now consider an example to illustrate the fact that a minimizing sequence can be used to find the greatest lower bound of a function that does not have a minimum. Consider the function f(x) = 1 for x ≤ 0 and f(x) = x² for x > 0. This function is non-negative and, in fact, can take values arbitrarily close to the value 0. However it does not have a minimum value, since there is no value of x for which f(x) = 0 (note that f(0) = 1). The greatest lower bound, or infimum (denoted by "inf"), of f is

inf_{−∞<x<∞} f(x) = 0.

Again consider the sequence of numbers x0, x1, x2, x3, ..., xk, ..., where xk = 1/2^k, and note that

lim_{k→∞} f(xk) = 0.

In this case the value of the function f(xk) converges to the infimum of f as k → ∞. However, since lim_{k→∞} xk = 0, the limit of the sequence itself is x = 0, and f(0) = 1 is not the infimum of f. This is because in this example

f( lim_{k→∞} xk ) ≠ lim_{k→∞} f(xk).
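A few lines of code (illustrative, not part of the notes) make the discontinuous example above concrete: the values f(xk) approach the infimum 0, yet the limit point x = 0 does not attain it.

def f(x):
    return 1.0 if x <= 0 else x**2

xs = [1/2**k for k in range(1, 25)]
print([f(x) for x in xs[:5]])   # 0.25, 0.0625, ... -> 0
print(f(0.0))                   # 1.0: f at the limit of the sequence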
Returning now to a functional: suppose that we are to find the infimum (or the minimum, if it exists) of a functional F{φ} over an admissible set of functions A. Let

inf_{φ∈A} F{φ} = m (> −∞).

Necessarily there must exist a sequence of functions φ1, φ2, ... in A such that

lim_{k→∞} F{φk} = m;

such a sequence is called a minimizing sequence. If the sequence φ1, φ2, ... converges to a limiting function φ*, and if

F{ lim_{k→∞} φk } = lim_{k→∞} F{φk},

then it follows that F{φ*} = m and the function φ* is the minimizer of F. The functions φk of a minimizing sequence can be considered to be approximate solutions of the minimization problem. Just as in the second example of this section, however, in some variational problems the limiting function φ* of a minimizing sequence φ1, φ2, ... does not minimize the functional F; see the last Example of this section.
7.11.1 The Ritz method

Suppose that we are to minimize a functional F{φ} over an admissible set A. Consider an infinite sequence of functions φ1, φ2, ... in A, and let Ap be the subset of functions in A that can be expressed as a linear combination of the first p functions φ1, φ2, ..., φp. In order to minimize F over the subset Ap we must simply minimize the function

F̄(α1, α2, ..., αp) = F{α1 φ1 + α2 φ2 + ... + αp φp}
with respect to the real parameters α1, α2, ..., αp. Suppose that the minimum of F on Ap is denoted by mp. Clearly A1 ⊂ A2 ⊂ A3 ⊂ ... ⊂ A, and therefore m1 ≥ m2 ≥ m3 ≥ ...¹⁰. Thus, in the so-called Ritz method, we minimize F over a subset Ap to find an approximate minimizer; moreover, increasing the value of p improves the approximation in the sense of the preceding footnote.

¹⁰ If the sequence φ1, φ2, ... is complete, and the functional F{φ} is continuous in the appropriate norm, then one can show that lim_{p→∞} mp = m.

Example: Consider an elastic bar of length L and modulus E that is fixed at both ends and carries a distributed axial load b(x). A displacement field u(x) must satisfy the boundary conditions u(0) = u(L) = 0, and the associated potential energy is

F{u} = ∫₀^L ½ E (u′)² dx − ∫₀^L b u dx.

We now use the Ritz method to find an approximate displacement field that minimizes F. Consider the sequence of functions v1, v2, v3, ..., where

vp = sin(pπx/L);

observe that vp(0) = vp(L) = 0 for all integers p. Consider the function

un(x) = Σ_{p=1}^{n} αp sin(pπx/L)

for any integer n ≥ 1 and evaluate
L
F (α1 , α2 , . . . αn ) = F {un } = Since
L
0
1 E(un )2 dx − 2 0
L
bun dx.
0
2 cos
0
it follows that
L L n
pπx qπx cos dx = L L
n
for p = q,
L for p = q, 1 dx = 2
n 2 αp p=1
(un ) dx =
0 0
2
pπx pπ cos αp L L p=1
n
qπ qπx αq cos L L q=1
L
p2 π 2 L
Therefore F (α1 , α2 , . . . αn ) = F {un } =
10
p=1
2 2 1 2 p π E αp − αp 4 L
b sin
0
pπx dx L
(7.129)
If the sequence φ1 , φ2 , . . . is complete, and the functional F {φ} is continuous in the appropriate norm, then one can show that lim mp = m.
p→∞
To minimize F̄(α1, α2, ..., αn) with respect to αp we set ∂F̄/∂αp = 0. This leads to

αp = [ ∫₀^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ] for p = 1, 2, ..., n.   (7.130)
Therefore, by substituting (7.130) into (7.129), we find that the n-term Ritz approximation of the energy is

− Σ_{p=1}^{n} (1/4) E αp² p²π²/L, where αp = [ ∫₀^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ],

and the corresponding approximate displacement field is given by

un = Σ_{p=1}^{n} αp sin(pπx/L), where αp = [ ∫₀^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ].
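The Ritz recipe above is short to implement. The sketch below (an illustration, not part of the notes; L, E, b0 and n are arbitrary sample values) evaluates the coefficients (7.130) for a uniform load b(x) = b0 and compares the n-term approximation with the exact solution u = b0 x(L − x)/(2E) of the corresponding Euler equation E u″ + b0 = 0, u(0) = u(L) = 0.

import numpy as np

L, E, b0, n = 1.0, 1.0, 1.0, 7
x = np.linspace(0.0, L, 1001)

u_n = np.zeros_like(x)
for p in range(1, n + 1):
    # For constant b0, the load integral in (7.130) evaluates in closed form:
    # integral of b0*sin(p*pi*x/L) dx over [0, L] = b0*L*(1 - cos(p*pi))/(p*pi)
    load_int = b0 * L * (1 - np.cos(p * np.pi)) / (p * np.pi)
    alpha_p = load_int / (E * p**2 * np.pi**2 / (2 * L))     # eq. (7.130)
    u_n += alpha_p * np.sin(p * np.pi * x / L)

u_exact = b0 * x * (L - x) / (2 * E)
print(np.max(np.abs(u_n - u_exact)))   # small, and decreases as n grows

Only the odd-p terms contribute for a uniform load, and the coefficients decay like 1/p³, which is why a handful of terms already gives a good approximation.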
7.12 Worked Examples.
Example 7.N: Consider two given points (x1, h1) and (x2, h2), with h1 > h2, that are to be joined by a smooth wire. The wire is permitted to have any shape, provided that it does not enter the interior of the circular region (x − x0)² + (y − y0)² ≤ R². A bead is released from rest from the point (x1, h1) and slides along the wire (without friction) due to gravity. For what shape of wire is the time of travel from (x1, h1) to (x2, h2) least?

Figure 7.23: A curve y = φ(x) joining (x1, h1) to (x2, h2) which is disallowed from entering the forbidden region (x − x0)² + (φ(x) − y0)² < R².

Here the wire may not enter the interior of the prescribed circular region. Therefore, in considering different wires that connect (x1, h1) to (x2, h2), we may only consider those that lie entirely outside this region:

(x − x0)² + (φ(x) − y0)² ≥ R², x1 ≤ x ≤ x2.   (i)

The travel time of the bead is again given by (7.1), and the test functions must satisfy the same requirements as in the first example except that, in addition, they must satisfy the (inequality) constraint (i). Our task is to minimize T{φ} over the set A1 subject to the constraint (i).

Example 7.N: Buckling: Consider a beam whose centerline occupies the interval y = 0, 0 < x < L, in an undeformed configuration. A compressive force P is applied at x = L and the beam adopts a buckled shape described by y = φ(x). Figure 7.24 shows the centerline of the beam in both the undeformed and deformed configurations. The beam is fixed by a pin at x = 0; the end x = L is also pinned, but is permitted to move along the x-axis. The prescribed geometric boundary conditions on the deflected shape of the beam are thus φ(0) = φ(L) = 0.

By geometry, the curvature κ(x) of a curve y = φ(x) is given by

κ(x) = φ″(x) / [1 + (φ′(x))²]^(3/2).

From elasticity theory we know that the bending energy per unit length of a beam is (1/2)Mκ and that the bending moment M is related to the curvature κ by M = EIκ, where EI is the bending stiffness of the beam. Thus the bending energy associated with a differential element of the beam is (1/2)EIκ² ds, where ds
is arc length along the deformed beam. Thus the total bending energy in the beam is

∫₀^L (1/2) EI κ²(x) ds,

where the arc length s is related to the coordinate x by the geometric relation ds = √(1 + (φ′(x))²) dx. Thus the total bending energy of the beam is

∫₀^L (1/2) EI (φ″)² / [1 + (φ′)²]^(5/2) dx.

Next we need to account for the potential energy associated with the compressive force P on the beam. Since the change in length of a differential element is ds − dx, the amount by which the right-hand end of the beam moves leftwards is

∫₀^L (ds − dx) = ∫₀^L ( √(1 + (φ′)²) − 1 ) dx.

Thus the potential energy associated with the applied force P is

−P ∫₀^L ( √(1 + (φ′)²) − 1 ) dx.

Therefore the total potential energy of the system is

Φ{φ} = ∫₀^L (1/2) EI (φ″)² / [1 + (φ′)²]^(5/2) dx − P ∫₀^L ( √(1 + (φ′)²) − 1 ) dx.

Figure 7.24: An elastic beam in undeformed (lower figure) and buckled (upper figure) configurations.

The Euler equation, which for a functional of the form ∫ f(x, φ, φ′, φ″) dx has the general form

d²/dx² (∂f/∂φ″) − d/dx (∂f/∂φ′) + ∂f/∂φ = 0,

simplifies in the present case since f does not depend explicitly on φ. The last term above therefore drops out and the resulting equation can be integrated once immediately. This eventually leads to the Euler equation

d/dx [ φ″ / (1 + (φ′)²)^(5/2) ] + (5/2) (φ″)² φ′ / (1 + (φ′)²)^(7/2) + (P/(EI)) φ′ / (1 + (φ′)²)^(1/2) = c,

where c is a constant of integration, and the natural boundary conditions are φ″(0) = φ″(L) = 0.
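The first integral quoted above can be checked symbolically. The sketch below (an illustration, not part of the notes) forms d/dx(∂f/∂φ″) − ∂f/∂φ′ for the integrand of Φ; dividing the printed expression by EI and setting it equal to a constant c reproduces the Euler equation above.

import sympy as sp

x, EI, P = sp.symbols('x EI P', positive=True)
phi = sp.Function('phi')(x)
p1, p2 = phi.diff(x), phi.diff(x, 2)
f = EI/2 * p2**2 / (1 + p1**2)**sp.Rational(5, 2) - P*(sp.sqrt(1 + p1**2) - 1)

# Since f has no explicit phi-dependence, the Euler equation integrates once:
# d/dx(df/dphi'') - df/dphi' = constant.
first_integral = sp.diff(f.diff(p2), x) - f.diff(p1)
print(sp.simplify(first_integral))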
Example 7.N: Linearize the boundary value problem in the buckling problem above.

Example 7.N: Null Lagrangian.

Example 7.N: u(x, t) where 0 ≤ x ≤ L.

Example 7.N: Consider two circular wires, each of radius R. This arrangement of wires is dipped into a soapy bath and taken out. We shall assume that the soap film that forms adopts the shape with minimum surface energy. Determ