Lecture Notes on

The Mechanics of Elastic Solids
Volume 1: A Brief Review of Some Mathematical
Preliminaries
Version 1.0
Rohan Abeyaratne
Quentin Berg Professor of Mechanics
Department of Mechanical Engineering
MIT
Copyright © Rohan Abeyaratne, 1987
All rights reserved.
http://web.mit.edu/abeyaratne/lecture_notes.html
December 2, 2006
Electronic Publication
Rohan Abeyaratne
Quentin Berg Professor of Mechanics
Department of Mechanical Engineering
Massachusetts Institute of Technology
77 Massachusetts Avenue, Cambridge, MA 02139-4307, USA
Copyright © by Rohan Abeyaratne, 1987
All rights reserved
Abeyaratne, Rohan, 1952-
Lecture Notes on The Mechanics of Elastic Solids. Volume 1: A Brief Review of Some Mathematical Preliminaries / Rohan Abeyaratne – 1st Edition – Cambridge, MA:
ISBN-13: 978-0-9791865-0-9
ISBN-10: 0-9791865-0-1
QC
Please send corrections, suggestions and comments to abeyaratne.vol.1@gmail.com
Updated June 25 2007
Dedicated with admiration and affection
to Matt Murphy and the miracle of science,
for the gift of renaissance.
PREFACE
The Department of Mechanical Engineering at MIT offers a series of graduate level subjects on the Mechanics of Solids and Structures which include:
2.071: Mechanics of Solid Materials,
2.072: Mechanics of Continuous Media,
2.074: Solid Mechanics: Elasticity,
2.073: Solid Mechanics: Plasticity and Inelastic Deformation,
2.075: Advanced Mechanical Behavior of Materials,
2.080: Structural Mechanics,
2.094: Finite Element Analysis of Solids and Fluids,
2.095: Molecular Modeling and Simulation for Mechanics, and
2.099: Computational Mechanics of Materials.
Over the years, I have had the opportunity to regularly teach the second and third of
these subjects, 2.072 and 2.074 (formerly known as 2.083), and the current three volumes
comprise the lecture notes I developed for them. The first draft of these notes was
produced in 1987 and they have been corrected, refined and expanded on every following
occasion that I taught these classes. The material in the current presentation is still meant
to be a set of lecture notes, not a textbook. It has been organized as follows:
Volume I: A Brief Review of Some Mathematical Preliminaries
Volume II: Continuum Mechanics
Volume III: Elasticity
My appreciation for mechanics was nucleated by Professors Douglas Amarasekara and
Munidasa Ranaweera of the (then) University of Ceylon, and was subsequently shaped and
grew substantially under the influence of Professors James K. Knowles and Eli Sternberg
of the California Institute of Technology. I have been most fortunate to have had the
opportunity to apprentice under these inspiring and distinctive scholars. I would especially
like to acknowledge a great many illuminating and stimulating interactions with my mentor,
colleague and friend Jim Knowles, whose influence on me cannot be overstated.
I am also indebted to the many MIT students who have given me enormous fulfillment
and joy to be part of their education.
My understanding of elasticity as well as these notes have also benefitted greatly from
many useful conversations with Kaushik Bhattacharya, Janet Blume, Eliot Fried, Morton E.
Gurtin, Richard D. James, Stelios Kyriakides, David M. Parks, Phoebus Rosakis, Stewart
Silling and Nicolas Triantafyllidis, which I gratefully acknowledge.
Volume I of these notes provides a collection of essential definitions, results, and illus-
trative examples, designed to review those aspects of mathematics that will be encountered
in the subsequent volumes. It is most certainly not meant to be a source for learning these
topics for the first time. The treatment is concise, selective and limited in scope. For exam-
ple, Linear Algebra is a far richer subject than the treatment here, which is limited to real
3-dimensional Euclidean vector spaces.
The topics covered in Volumes II and III are largely those one would expect to see covered
in such a set of lecture notes. Personal taste has led me to include a few special (but still
well-known) topics. Examples of this include sections on the statistical mechanical theory
of polymer chains and the lattice theory of crystalline solids in the discussion of constitutive
theory in Volume II; and sections on the so-called Eshelby problem and the effective behavior
of two-phase materials in Volume III.
There are a number of Worked Examples at the end of each chapter, and they are an essential part of the notes. Many of these examples provide more details, or a proof, of a result that was quoted previously in the text; others illustrate a general concept; and still others establish a result that will be used subsequently (possibly in a later volume).
The content of these notes is entirely classical, in the best sense of the word, and none
of the material here is original. I have drawn on a number of sources over the years as I
prepared my lectures. I cannot recall every source I have used but certainly they include
those listed at the end of each chapter. In a more general sense the broad approach and
philosophy taken has been influenced by:
Volume I: A Brief Review of Some Mathematical Preliminaries
I.M. Gelfand and S.V. Fomin, Calculus of Variations, Prentice Hall, 1963.
J.K. Knowles, Linear Vector Spaces and Cartesian Tensors, Oxford University Press,
New York, 1997.
Volume II: Continuum Mechanics
P. Chadwick, Continuum Mechanics: Concise Theory and Problems, Dover, 1999.
J.L. Ericksen, Introduction to the Thermodynamics of Solids, Chapman and Hall, 1991.
M.E. Gurtin, An Introduction to Continuum Mechanics, Academic Press, 1981.
J. K. Knowles and E. Sternberg, (Unpublished) Lecture Notes for AM136: Finite Elasticity, California Institute of Technology, Pasadena, CA, 1978.
C. Truesdell and W. Noll, The nonlinear field theories of mechanics, in Handbuch der Physik, edited by S. Flügge, Volume III/3, Springer, 1965.
Volume III: Elasticity
M.E. Gurtin, The linear theory of elasticity, in Mechanics of Solids - Volume II, edited
by C. Truesdell, Springer-Verlag, 1984.
J. K. Knowles, (Unpublished) Lecture Notes for AM135: Elasticity, California Institute
of Technology, Pasadena, CA, 1976.
A. E. H. Love, A Treatise on the Mathematical Theory of Elasticity, Dover, 1944.
S. P. Timoshenko and J.N. Goodier, Theory of Elasticity, McGraw-Hill, 1987.
The following notation will be used consistently in Volume I: Greek letters will denote real
numbers; lowercase boldface Latin letters will denote vectors; and uppercase boldface Latin
letters will denote linear transformations. Thus, for example, α, β, γ... will denote scalars
(real numbers); a, b, c, ... will denote vectors; and A, B, C, ... will denote linear transforma-
tions. In particular, “o” will denote the null vector while “0” will denote the null linear
transformation. As much as possible this notation will also be used in Volumes II and III
though there will be some lapses (for reasons of tradition).
Contents
1 Matrix Algebra and Indicial Notation 1
1.1 Matrix algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Indicial notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Summation convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Kronecker delta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 The alternator or permutation symbol . . . . . . . . . . . . . . . . . . . . . 10
1.6 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Vectors and Linear Transformations 17
2.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1.1 Euclidean point space . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Linear Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3 Components of Tensors. Cartesian Tensors 41
3.1 Components of a vector in a basis. . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Components of a linear transformation in a basis. . . . . . . . . . . . . . . . 43
3.3 Components in two bases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Determinant, trace, scalar-product and norm . . . . . . . . . . . . . . . . . . 47
3.5 Cartesian Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.6 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4 Symmetry: Groups of Linear Transformations 67
4.1 An example in two-dimensions. . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2 An example in three-dimensions. . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3 Lattices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4 Groups of Linear Transformations. . . . . . . . . . . . . . . . . . . . . . . . 73
4.5 Symmetry of a scalar-valued function . . . . . . . . . . . . . . . . . . . . . . 74
4.6 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5 Calculus of Vector and Tensor Fields 85
5.1 Notation and definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.2 Integral theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3 Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.4 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6 Orthogonal Curvilinear Coordinates 99
6.1 Introductory Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.2 General Orthogonal Curvilinear Coordinates . . . . . . . . . . . . . . . . . . 102
6.2.1 Coordinate transformation. Inverse transformation. . . . . . . . . . . 102
6.2.2 Metric coefficients, scale moduli. . . . . . . . . . . . . . . . . . . . . . 104
6.2.3 Inverse partial derivatives . . . . . . . . . . . . . . . . . . . . . . . . 106
6.2.4 Components of ∂ê_i/∂x̂_j in the local basis (ê_1, ê_2, ê_3) . . . . . . . . . 107
6.3 Transformation of Basic Tensor Relations . . . . . . . . . . . . . . . . . . . . 108
6.3.1 Gradient of a scalar field . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.3.2 Gradient of a vector field . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.3.3 Divergence of a vector field . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3.4 Laplacian of a scalar field . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3.5 Curl of a vector field . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3.6 Divergence of a symmetric 2-tensor field . . . . . . . . . . . . . . . . 111
6.3.7 Differential elements of volume . . . . . . . . . . . . . . . . . . . . . 112
6.3.8 Differential elements of area . . . . . . . . . . . . . . . . . . . . . . . 112
6.4 Examples of Orthogonal Curvilinear Coordinates . . . . . . . . . . . . . . . 113
6.5 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7 Calculus of Variations 123
7.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7.2 Brief review of calculus. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
7.3 A necessary condition for an extremum . . . . . . . . . . . . . . . . . . . . . 128
7.4 Application of necessary condition δF = 0 . . . . . . . . . . . . . . . . . . . 130
7.4.1 The basic problem. Euler equation. . . . . . . . . . . . . . . . . . . . 130
7.4.2 An example. The Brachistochrone Problem. . . . . . . . . . . . . . . 132
7.4.3 A Formalism for Deriving the Euler Equation . . . . . . . . . . . . . 136
7.5 Generalizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.5.1 Generalization: Free end-point; Natural boundary conditions. . . . . 137
7.5.2 Generalization: Higher derivatives. . . . . . . . . . . . . . . . . . . . 140
7.5.3 Generalization: Multiple functions. . . . . . . . . . . . . . . . . . . . 142
7.5.4 Generalization: End point of extremal lying on a curve. . . . . . . . . 147
7.6 Constrained Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.6.1 Integral constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.6.2 Algebraic constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7.6.3 Differential constraints . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7.7 Weierstrass-Erdmann corner conditions . . . . . . . . . . . . . . . . . . . . 157
7.7.1 Piecewise smooth minimizer with non-smoothness occurring at a prescribed location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.7.2 Piecewise smooth minimizer with non-smoothness occurring at an unknown location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
7.8 Generalization to higher dimensional space. . . . . . . . . . . . . . . . . . . 166
7.9 Second variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
7.10 Sufficient condition for convex functionals . . . . . . . . . . . . . . . . . . . 180
7.11 Direct method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
7.11.1 The Ritz method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7.12 Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Chapter 1
Matrix Algebra and Indicial Notation
Notation:
{a} ..... m×1 matrix, i.e. a column matrix with m rows and one column
a_i ..... element in row-i of the column matrix {a}
[A] ..... m×n matrix
A_ij ..... element in row-i, column-j of the matrix [A]
1.1 Matrix algebra
Even though more general matrices can be considered, for our purposes it is sufficient to
consider a matrix to be a rectangular array of real numbers that obeys certain rules of
addition and multiplication. An m×n matrix [A] has m rows and n columns:

\[
[A] = \begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1n}\\
A_{21} & A_{22} & \cdots & A_{2n}\\
\vdots & \vdots &        & \vdots\\
A_{m1} & A_{m2} & \cdots & A_{mn}
\end{bmatrix}; \tag{1.1}
\]

A_ij denotes the element located in the ith row and jth column. The column matrix
\[
\{x\} = \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_m \end{bmatrix} \tag{1.2}
\]
has m rows and one column. The row matrix

\[
\{y\} = \{y_1, y_2, \dots, y_n\} \tag{1.3}
\]
has one row and n columns. If all the elements of a matrix are zero it is said to be a null
matrix and is denoted by [0] or {0} as the case may be.
Two m×n matrices [A] and [B] are said to be equal if and only if all of their corresponding
elements are equal:
\[
A_{ij} = B_{ij}, \qquad i = 1, 2, \dots, m, \quad j = 1, 2, \dots, n. \tag{1.4}
\]
If [A] and [B] are both m× n matrices, their sum is the m× n matrix [C] denoted by
[C] = [A] + [B] whose elements are
\[
C_{ij} = A_{ij} + B_{ij}, \qquad i = 1, 2, \dots, m, \quad j = 1, 2, \dots, n. \tag{1.5}
\]
If [A] is a p ×q matrix and [B] is a q ×r matrix, their product is the p ×r matrix [C] with
elements
\[
C_{ij} = \sum_{k=1}^{q} A_{ik} B_{kj}, \qquad i = 1, 2, \dots, p, \quad j = 1, 2, \dots, r; \tag{1.6}
\]
one writes [C] = [A][B]. In general [A][B] ≠ [B][A]; therefore, rather than referring to [A][B] as the product of [A] and [B], we should more precisely refer to [A][B] as [A] postmultiplied by [B], or as [B] premultiplied by [A]. It is worth noting that if two matrices [A] and [B] obey the equation [A][B] = [0], this does not necessarily mean that either [A] or [B] has to be the null matrix [0]. Similarly, if three matrices [A], [B] and [C] obey [A][B] = [A][C], this does not necessarily mean that [B] = [C] (even if [A] ≠ [0]). The product of an m×n matrix [A] by a scalar α is the m×n matrix [B] with components

\[
B_{ij} = \alpha A_{ij}, \qquad i = 1, 2, \dots, m, \quad j = 1, 2, \dots, n; \tag{1.7}
\]

one writes [B] = α[A].
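Since these facts about matrix products are easy to get wrong, here is a minimal numerical illustration — an addition to these notes, assuming the NumPy library is available:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# Matrix multiplication does not commute in general: [A][B] != [B][A].
print(np.allclose(A @ B, B @ A))        # False

# [A][B] = [0] does not force [A] or [B] to be the null matrix.
A2 = np.array([[1.0, 0.0], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [0.0, 1.0]])
print(A2 @ B2)                          # the 2x2 null matrix, yet A2, B2 != [0]
```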
Note that an m_1×n_1 matrix [A_1] can be postmultiplied by an m_2×n_2 matrix [A_2] if and only if n_1 = m_2. In particular, consider an m×n matrix [A] and an n×1 (column) matrix {x}. Then we can postmultiply [A] by {x} to get the m×1 column matrix [A]{x}; but we cannot premultiply [A] by {x} (unless m = 1), i.e. {x}[A] does not exist in general.
The transpose of the m×n matrix [A] is the n×m matrix [B] where

\[
B_{ij} = A_{ji} \qquad \text{for each } i = 1, 2, \dots, n \text{ and } j = 1, 2, \dots, m. \tag{1.8}
\]
Usually one denotes the matrix [B] by [A]^T. One can verify that

\[
[A + B]^T = [A]^T + [B]^T, \qquad [AB]^T = [B]^T [A]^T. \tag{1.9}
\]
The transpose of a column matrix is a row matrix, and vice versa. Suppose that [A] is an m×n matrix and that {x} is an m×1 (column) matrix. Then we can premultiply [A] by {x}^T, i.e. {x}^T[A] exists (and is a 1×n row matrix). For any n×1 column matrix {x} note that

\[
\{x\}^T\{x\} = x_1^2 + x_2^2 + \dots + x_n^2 = \sum_{i=1}^{n} x_i^2. \tag{1.10}
\]
An n×n matrix [A] is called a square matrix; the diagonal elements of this matrix are the A_ii's. A square matrix [A] is said to be symmetrical if

\[
A_{ij} = A_{ji} \qquad \text{for each } i, j = 1, 2, \dots, n; \tag{1.11}
\]

skew-symmetrical if

\[
A_{ij} = -A_{ji} \qquad \text{for each } i, j = 1, 2, \dots, n. \tag{1.12}
\]

Thus for a symmetric matrix [A] we have [A]^T = [A]; for a skew-symmetric matrix [A] we have [A]^T = −[A]. Observe that each diagonal element of a skew-symmetric matrix must be zero.
If the off-diagonal elements of a square matrix are all zero, i.e. A_ij = 0 for each i, j = 1, 2, . . . , n with i ≠ j, the matrix is said to be diagonal. If every diagonal element of a diagonal matrix is 1, the matrix is called a unit matrix and is usually denoted by [I].
Suppose that [A] is an n×n square matrix and that {x} is an n×1 (column) matrix. Then we can postmultiply [A] by {x} to get an n×1 column matrix [A]{x}, and premultiply the resulting matrix by {x}^T to get a 1×1 matrix, effectively just a scalar, {x}^T[A]{x}. Note that

\[
\{x\}^T [A] \{x\} = \sum_{i=1}^{n} \sum_{j=1}^{n} A_{ij}\, x_i x_j. \tag{1.13}
\]
This is referred to as the quadratic form associated with [A]. In the special case of a diagonal matrix [A],

\[
\{x\}^T [A] \{x\} = A_{11} x_1^2 + A_{22} x_2^2 + \dots + A_{nn} x_n^2. \tag{1.14}
\]
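As a concrete numerical check of the quadratic form — a small added sketch, assuming NumPy:

```python
import numpy as np

A = np.diag([2.0, 5.0, 7.0])            # a diagonal matrix
x = np.array([1.0, 2.0, 3.0])

# {x}^T [A] {x} via matrix products, and via (1.14) for a diagonal [A].
print(x @ A @ x)                                  # 85.0
print(2.0*x[0]**2 + 5.0*x[1]**2 + 7.0*x[2]**2)    # 85.0
```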
The trace of a square matrix is the sum of the diagonal elements of that matrix and is
denoted by trace[A]:
\[
\text{trace}[A] = \sum_{i=1}^{n} A_{ii}. \tag{1.15}
\]
One can show that
trace([A][B]) = trace([B][A]). (1.16)
Let det[A] denote the determinant of a square matrix. Then for a 2×2 matrix

\[
\det \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} = A_{11} A_{22} - A_{12} A_{21}, \tag{1.17}
\]
and for a 3×3 matrix

\[
\det \begin{bmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{bmatrix}
= A_{11} \det \begin{bmatrix} A_{22} & A_{23}\\ A_{32} & A_{33} \end{bmatrix}
- A_{12} \det \begin{bmatrix} A_{21} & A_{23}\\ A_{31} & A_{33} \end{bmatrix}
+ A_{13} \det \begin{bmatrix} A_{21} & A_{22}\\ A_{31} & A_{32} \end{bmatrix}. \tag{1.18}
\]
The determinant of an n×n matrix is defined recursively in a similar manner. One can show
that
det([A][B]) = (det[A]) (det[B]). (1.19)
Note that trace[A] and det[A] are both scalar-valued functions of the matrix [A].
Consider a square matrix [A]. For each i = 1, 2, . . . , n, a row matrix {a}_i can be created by assembling the elements in the ith row of [A]: {a}_i = {A_i1, A_i2, A_i3, . . . , A_in}. If the only scalars α_i for which

\[
\alpha_1 \{a\}_1 + \alpha_2 \{a\}_2 + \alpha_3 \{a\}_3 + \dots + \alpha_n \{a\}_n = \{0\} \tag{1.20}
\]

are α_1 = α_2 = . . . = α_n = 0, the rows of [A] are said to be linearly independent. If (1.20) holds with at least one of the α's non-zero, the rows are said to be linearly dependent, and then at least one row of [A] can be expressed as a linear combination of the other rows.
Consider a square matrix [A] and suppose that its rows are linearly independent. Then the matrix is said to be non-singular, and there exists a matrix [B], usually denoted by [B] = [A]^{-1} and called the inverse of [A], for which [B][A] = [A][B] = [I]. For [A] to be non-singular it is necessary and sufficient that det[A] ≠ 0. If the rows of [A] are linearly dependent, the matrix is singular and an inverse matrix does not exist.
Consider an n×n square matrix [A]. First consider the (n−1)×(n−1) matrix obtained by eliminating the ith row and jth column of [A]; then consider the determinant of that second matrix; and finally consider the product of that determinant with (−1)^{i+j}. The number thus obtained is called the cofactor of A_ij. If [B] is the inverse of [A], [B] = [A]^{-1}, then

\[
B_{ij} = \frac{\text{cofactor of } A_{ji}}{\det[A]}. \tag{1.21}
\]
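The cofactor formula (1.21) translates directly into code. The following sketch — added here for illustration, not part of the original notes — builds the inverse from cofactors and checks it against NumPy's built-in inverse:

```python
import numpy as np

def inverse_from_cofactors(A):
    """Invert a square matrix via (1.21): B_ij = cofactor(A_ji) / det[A]."""
    n = A.shape[0]
    B = np.empty_like(A, dtype=float)
    detA = np.linalg.det(A)
    for i in range(n):
        for j in range(n):
            # Minor of A_ji: delete row j and column i (note the transposition).
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            B[i, j] = (-1) ** (i + j) * np.linalg.det(minor) / detA
    return B

A = np.array([[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 1.0]])
print(np.allclose(inverse_from_cofactors(A), np.linalg.inv(A)))  # True
```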
If the transpose and inverse of a matrix coincide, i.e. if

\[
[A]^{-1} = [A]^T, \tag{1.22}
\]

then the matrix is said to be orthogonal. Note that for an orthogonal matrix [A] one has [A][A]^T = [A]^T[A] = [I], and that det[A] = ±1.
1.2 Indicial notation
Consider an n×n square matrix [A] and two n×1 column matrices {x} and {b}. Let A_ij denote the element of [A] in its ith row and jth column, and let x_i and b_i denote the elements in the ith row of {x} and {b} respectively. Now consider the matrix equation [A]{x} = {b}:
\[
\begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1n}\\
A_{21} & A_{22} & \cdots & A_{2n}\\
\vdots & \vdots &        & \vdots\\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix}. \tag{1.23}
\]
Carrying out the matrix multiplication, this is equivalent to the system of linear algebraic equations

\[
\left.
\begin{aligned}
A_{11} x_1 + A_{12} x_2 + \dots + A_{1n} x_n &= b_1,\\
A_{21} x_1 + A_{22} x_2 + \dots + A_{2n} x_n &= b_2,\\
&\;\;\vdots\\
A_{n1} x_1 + A_{n2} x_2 + \dots + A_{nn} x_n &= b_n.
\end{aligned}
\right\} \tag{1.24}
\]
This system of equations can be written more compactly as

\[
A_{i1} x_1 + A_{i2} x_2 + \dots + A_{in} x_n = b_i \quad \text{with } i \text{ taking each value in the range } 1, 2, \dots, n; \tag{1.25}
\]
or even more compactly, by omitting the statement "with i taking each value in the range 1, 2, . . . , n" and simply writing

\[
A_{i1} x_1 + A_{i2} x_2 + \dots + A_{in} x_n = b_i, \tag{1.26}
\]

with the understanding that (1.26) holds for each value of the subscript i in the range i = 1, 2, . . . , n. This understanding is referred to as the range convention. The subscript i is called a free subscript because it is free to take on each value in its range. From here on, we shall always use the range convention unless explicitly stated otherwise.
Observe that

\[
A_{j1} x_1 + A_{j2} x_2 + \dots + A_{jn} x_n = b_j \tag{1.27}
\]

is identical to (1.26); this is because j is a free subscript in (1.27), and so (1.27) is required to hold "for all j = 1, 2, . . . , n", and this leads back to (1.24). This illustrates the fact that the particular choice of index for the free subscript in an equation is not important, provided that the same free subscript appears in every symbol grouping.¹
As a second example, suppose that f(x_1, x_2, . . . , x_n) is a function of x_1, x_2, . . . , x_n. Then, if we write the equation

\[
\frac{\partial f}{\partial x_k} = 3 x_k, \tag{1.28}
\]

the index k in it is a free subscript and so takes all values in the range 1, 2, . . . , n. Thus (1.28) is a compact way of writing the n equations

\[
\frac{\partial f}{\partial x_1} = 3 x_1, \quad \frac{\partial f}{\partial x_2} = 3 x_2, \quad \dots, \quad \frac{\partial f}{\partial x_n} = 3 x_n. \tag{1.29}
\]
As a third example, the equation

\[
A_{pq} = x_p x_q \tag{1.30}
\]

has two free subscripts p and q, and each, independently, takes all values in the range 1, 2, . . . , n. Therefore (1.30) corresponds to the n² equations

\[
\left.
\begin{array}{llll}
A_{11} = x_1 x_1, & A_{12} = x_1 x_2, & \dots & A_{1n} = x_1 x_n,\\
A_{21} = x_2 x_1, & A_{22} = x_2 x_2, & \dots & A_{2n} = x_2 x_n,\\
\vdots & & & \vdots\\
A_{n1} = x_n x_1, & A_{n2} = x_n x_2, & \dots & A_{nn} = x_n x_n.
\end{array}
\right\} \tag{1.31}
\]

In general, if an equation involves N free indices, each taking values in the range 1, 2, . . . , n, then it represents n^N scalar equations.
In order to be consistent it is important that the same free subscript(s) must appear once, and only once, in every group of symbols in an equation. For example, in equation (1.26), since the index i appears once in the symbol group A_i1 x_1, it must necessarily appear once in each of the remaining symbol groups A_i2 x_2, A_i3 x_3, . . . , A_in x_n and b_i of that equation. Similarly, since the free subscripts p and q appear in the symbol group on the left-hand side of equation (1.30), they must also appear in the symbol group on the right-hand side. An equation of the form A_pq = x_i x_j would violate this consistency requirement, as would A_i1 x_i + A_j2 x_2 = 0.

¹ By a "symbol group" we mean a set of terms contained between +, − and = signs.
Note finally that had we adopted the range convention in Section 1.1, we would have omitted the various "i = 1, 2, . . . , n" statements there and written, for example, equation (1.4) for the equality of two matrices as simply A_ij = B_ij; equation (1.5) for the sum of two matrices as simply C_ij = A_ij + B_ij; equation (1.7) for the scalar multiple of a matrix as B_ij = αA_ij; equation (1.8) for the transpose of a matrix as simply B_ij = A_ji; equation (1.11) defining a symmetric matrix as simply A_ij = A_ji; and equation (1.12) defining a skew-symmetric matrix as simply A_ij = −A_ji.
1.3 Summation convention
Next, observe that (1.26) can be written as

\[
\sum_{j=1}^{n} A_{ij}\, x_j = b_i. \tag{1.32}
\]
We can simplify the notation even further by agreeing to drop the summation sign and instead imposing the rule that summation is implied over a subscript that appears twice in a symbol grouping. With this understanding in force, we would write (1.32) as

\[
A_{ij}\, x_j = b_i \tag{1.33}
\]

with summation on the subscript j being implied. A subscript that appears twice in a symbol grouping is called a repeated or dummy subscript; the subscript j in (1.33) is a dummy subscript.
Note that

\[
A_{ik}\, x_k = b_i \tag{1.34}
\]

is identical to (1.33); this is because k is a dummy subscript in (1.34) and therefore summation on k is implied in (1.34). Thus the particular choice of index for the dummy subscript is not important.
In order to avoid ambiguity, no subscript is allowed to appear more than twice in any symbol grouping. Thus we shall never write, for example, A_ii x_i = b_i since, if we did, the index i would appear three times in the first symbol group.
Summary of Rules:
1. Lower-case Latin subscripts take on values in the range (1, 2, . . . , n).
2. A given index may appear either once or twice in a symbol grouping. If it appears
once, it is called a free index and it takes on each value in its range. If it appears twice,
it is called a dummy index and summation is implied over it.
3. The same index may not appear more than twice in the same symbol grouping.
4. All symbol groupings in an equation must have the same free subscripts.
Free and dummy indices may be changed without altering the meaning of an expression, provided that one does not violate the preceding rules. Thus, for example, we can change the free subscript p in every term of the equation

\[
A_{pq}\, x_q = b_p \tag{1.35}
\]

to any other index, say k, and equivalently write

\[
A_{kq}\, x_q = b_k. \tag{1.36}
\]

We can also change the repeated subscript q to some other index, say s, and write

\[
A_{ks}\, x_s = b_k. \tag{1.37}
\]
The three preceding equations are identical.
It is important to emphasize that each of the equations in, for example, (1.24) involves scalar quantities, and therefore the order in which the terms appear within a symbol group is irrelevant. Thus, for example, (1.24)₁ is equivalent to x_1 A_11 + x_2 A_12 + . . . + x_n A_1n = b_1. Likewise we can write (1.33) equivalently as x_j A_ij = b_i. Note that both A_ij x_j = b_i and x_j A_ij = b_i represent the matrix equation [A]{x} = {b}; the second equation does not correspond to {x}[A] = {b}. In an indicial equation it is the location of the subscripts that is crucial; in particular, it is the location where the repeated subscript appears that tells us whether {x} multiplies [A] or [A] multiplies {x}.
Note finally that had we adopted the range and summation conventions in Section 1.1, we would have written equation (1.6) for the product of two matrices as C_ij = A_ik B_kj; equation (1.10) for the product of a matrix by its transpose as {x}^T{x} = x_i x_i; equation (1.13) for the quadratic form as {x}^T[A]{x} = A_ij x_i x_j; and equation (1.15) for the trace as trace[A] = A_ii.
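For readers who like to experiment, the summation convention maps almost verbatim onto NumPy's einsum function, whose subscript strings obey the same free/dummy index rules. A short illustrative sketch (an addition to these notes):

```python
import numpy as np

n = 4
A = np.random.rand(n, n)
x = np.random.rand(n)

# A_ij x_j = b_i : j is a dummy (summed) index, i is free.
b = np.einsum('ij,j->i', A, x)            # same as A @ x

# {x}^T {x} = x_i x_i : a single scalar.
print(np.einsum('i,i->', x, x))           # same as x @ x

# Quadratic form {x}^T [A] {x} = A_ij x_i x_j.
print(np.einsum('ij,i,j->', A, x, x))     # same as x @ A @ x

# trace[A] = A_ii : a repeated index sums the diagonal.
print(np.einsum('ii->', A))               # same as np.trace(A)
```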
1.4 Kronecker delta
The Kronecker delta, δ_ij, is defined by

\[
\delta_{ij} = \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i \neq j. \end{cases} \tag{1.38}
\]
Note that it represents the elements of the identity matrix. If [Q] is an orthogonal matrix, then we know that [Q][Q]^T = [Q]^T[Q] = [I]. This implies, in indicial notation, that

\[
Q_{ik} Q_{jk} = Q_{ki} Q_{kj} = \delta_{ij}. \tag{1.39}
\]
The following useful property of the Kronecker delta is sometimes called the substitution rule. Consider, for example, any column matrix {u} and suppose that one wishes to simplify the expression u_i δ_ij. Recall that u_i δ_ij = u_1 δ_1j + u_2 δ_2j + . . . + u_n δ_nj. Since δ_ij is zero unless i = j, all terms on the right-hand side vanish trivially except for the one term for which i = j. Thus the term that survives on the right-hand side is u_j, and so

\[
u_i\, \delta_{ij} = u_j. \tag{1.40}
\]

Thus we have used the facts that (i) since δ_ij is zero unless i = j, the expression being simplified has a non-zero value only if i = j; and (ii) when i = j, δ_ij is unity. Thus replacing the Kronecker delta by unity and changing the repeated subscript i → j gives u_i δ_ij = u_j.
Similarly, suppose that [A] is a square matrix and one wishes to simplify A_jk δ_jℓ. Then by the same reasoning, we replace the Kronecker delta by unity and change the repeated subscript j → ℓ to obtain²

\[
A_{jk}\, \delta_{j\ell} = A_{\ell k}. \tag{1.41}
\]
More generally, if δ_ip multiplies a quantity C_ijkℓ representing n⁴ numbers, one replaces the Kronecker delta by unity and changes the repeated subscript i → p to obtain

\[
C_{ijk\ell}\, \delta_{ip} = C_{pjk\ell}. \tag{1.42}
\]

The substitution rule applies even more generally: for any quantity or expression T_ipq...z, one simply replaces the Kronecker delta by unity and changes the repeated subscript i → j to obtain

\[
T_{ipq \dots z}\, \delta_{ij} = T_{jpq \dots z}. \tag{1.43}
\]
² Observe that these results are immediately apparent by using matrix algebra. In the first example, note that δ_ji u_i (which is equal to the quantity δ_ij u_i that is given) is simply the jth element of the column matrix [I]{u}. Since [I]{u} = {u} the result follows at once. Similarly, in the second example, δ_jℓ A_jk is simply the ℓ,k-element of the matrix [I][A]. Since [I][A] = [A], the result follows.
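Numerically, the substitution rule is just multiplication by the identity matrix, as the footnote observes. A brief check (an added sketch):

```python
import numpy as np

n = 4
delta = np.eye(n)                     # the Kronecker delta as a matrix
u = np.random.rand(n)
A = np.random.rand(n, n)

# u_i delta_ij = u_j : multiplying by [I] leaves {u} unchanged.
print(np.allclose(np.einsum('i,ij->j', u, delta), u))        # True

# A_jk delta_jl = A_lk : the l,k-element of [I][A] is A_lk.
print(np.allclose(np.einsum('jk,jl->lk', A, delta), A))      # True
```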
1.5 The alternator or permutation symbol
We now limit attention to subscripts that range over 1, 2, 3 only. The alternator or permutation symbol is defined by

\[
e_{ijk} =
\begin{cases}
0 & \text{if two or more of the subscripts } i, j, k \text{ are equal},\\
+1 & \text{if the subscripts } i, j, k \text{ are in cyclic order},\\
-1 & \text{if the subscripts } i, j, k \text{ are in anticyclic order},
\end{cases}
=
\begin{cases}
0 & \text{if two or more of the subscripts } i, j, k \text{ are equal},\\
+1 & \text{for } (i, j, k) = (1,2,3), (2,3,1), (3,1,2),\\
-1 & \text{for } (i, j, k) = (1,3,2), (2,1,3), (3,2,1).
\end{cases} \tag{1.44}
\]
Observe from its definition that the sign of e_ijk changes whenever any two adjacent subscripts are switched:

\[
e_{ijk} = -e_{jik} = e_{jki}. \tag{1.45}
\]
One can show by direct calculation that the determinant of a 3×3 matrix [A] can be written in either of two forms,

\[
\det[A] = e_{ijk}\, A_{1i} A_{2j} A_{3k} \qquad \text{or} \qquad \det[A] = e_{ijk}\, A_{i1} A_{j2} A_{k3}; \tag{1.46}
\]
as well as in the form

\[
\det[A] = \tfrac{1}{6}\, e_{ijk}\, e_{pqr}\, A_{ip} A_{jq} A_{kr}. \tag{1.47}
\]
Another useful identity involving the determinant is

\[
e_{pqr} \det[A] = e_{ijk}\, A_{ip} A_{jq} A_{kr}. \tag{1.48}
\]
The following relation involving the alternator and the Kronecker delta will be useful in subsequent calculations:

\[
e_{ijk}\, e_{pqk} = \delta_{ip}\,\delta_{jq} - \delta_{iq}\,\delta_{jp}. \tag{1.49}
\]
It is left to the reader to develop proofs of these identities. They can, of course, be verified
directly, by simply writing out all of the terms in (1.46) - (1.49).
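Such a brute-force verification is easy to automate. The sketch below (an added illustration) builds e_ijk as a 3×3×3 array and checks (1.46) and (1.49) numerically:

```python
import itertools
import numpy as np

# Build the alternator e_ijk as a 3x3x3 array.
e = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    # The sign of the permutation (i, j, k) of (0, 1, 2).
    e[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

A = np.random.rand(3, 3)

# (1.46): det[A] = e_ijk A_1i A_2j A_3k  (rows are 0-based here).
print(np.allclose(np.einsum('ijk,i,j,k->', e, A[0], A[1], A[2]),
                  np.linalg.det(A)))                        # True

# (1.49): e_ijk e_pqk = delta_ip delta_jq - delta_iq delta_jp.
d = np.eye(3)
lhs = np.einsum('ijk,pqk->ijpq', e, e)
rhs = np.einsum('ip,jq->ijpq', d, d) - np.einsum('iq,jp->ijpq', d, d)
print(np.allclose(lhs, rhs))                                # True
```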
1.6 Worked Examples.
Example (1.1): If [A] and [B] are n×n square matrices and {x}, {y}, {z} are n×1 column matrices, express
the matrix equation
{y} = [A]{x} + [B]{z}
as a set of scalar equations.
Solution: By the rules of matrix multiplication, the element y_i in the ith row of {y} is obtained by first pairwise multiplying the elements A_i1, A_i2, . . . , A_in of the ith row of [A] by the respective elements x_1, x_2, . . . , x_n of {x} and summing; then doing the same for the elements of [B] and {z}; and finally adding the two together. Thus

\[
y_i = A_{ij}\, x_j + B_{ij}\, z_j,
\]
where summation over the dummy index j is implied, and this equation holds for each value of the free index
i = 1, 2, . . . , n. Note that one can alternatively – and equivalently – write the above equation in any of the
following forms:
\[
y_k = A_{kj}\, x_j + B_{kj}\, z_j, \qquad y_k = A_{kp}\, x_p + B_{kp}\, z_p, \qquad y_i = A_{ip}\, x_p + B_{iq}\, z_q.
\]
Observe that all rules for indicial notation are satisfied by each of the three equations above.
Example (1.2): The n×n matrices [C], [D] and [E] are defined in terms of the two n×n matrices [A] and [B] by

\[
[C] = [A][B], \qquad [D] = [B][A], \qquad [E] = [A][B]^T.
\]
Express the elements of [C], [D] and [E] in terms of the elements of [A] and [B].
Solution: By the rules of matrix multiplication, the element C_ij in the ith row and jth column of [C] is obtained by multiplying the elements of the ith row of [A], pairwise, by the respective elements of the jth column of [B] and summing. So C_ij is obtained by multiplying the elements A_i1, A_i2, . . . , A_in by, respectively, B_1j, B_2j, . . . , B_nj and summing. Thus

\[
C_{ij} = A_{ik}\, B_{kj};
\]
note that i and j are both free indices here, so this represents n² scalar equations; moreover, summation is carried out over the repeated index k. It follows likewise that the equation [D] = [B][A] leads to

\[
D_{ij} = B_{ik}\, A_{kj}, \qquad \text{or equivalently} \qquad D_{ij} = A_{kj}\, B_{ik},
\]
where the second expression was obtained by simply changing the order in which the terms appear in the first expression (since, as noted previously, the order of terms within a symbol group is insignificant, these being scalar quantities). In order to calculate E_ij, we first multiply [A] by [B]^T to obtain E_ij = A_ik (B^T)_kj. However, by the definition of transposition, the i,j-element of the matrix [B]^T equals the j,i-element of the matrix [B]: (B^T)_ij = B_ji, and so we can write

\[
E_{ij} = A_{ik}\, B_{jk}.
\]
All four expressions here involve the ik, kj or jk elements of [A] and [B]. The precise locations of the subscripts vary, and the meaning of the terms depends crucially on these locations. It is worth repeating that the location of the repeated subscript k tells us what term multiplies what term.
Example (1.3): If [S] is any symmetric matrix and [W] is any skew-symmetric matrix, show that

\[
S_{ij}\, W_{ij} = 0.
\]
Solution: Note that both i and j are dummy subscripts here; therefore there are summations over each of them. Also, there is no free subscript, so this is just a single scalar equation.

Whenever there is a dummy subscript, the choice of the particular index for that dummy subscript is arbitrary, and we can change it to another index, provided that we change both repeated subscripts to the new symbol (and as long as we do not have any subscript appearing more than twice). Thus, for example, since i is a dummy subscript in S_ij W_ij, we can change i → p and get S_ij W_ij = S_pj W_pj. Note that we can change i to any other index except j; if we did change it to j, then there would be four j's, and that violates one of our rules.

By changing the dummy indices i → p and j → q, we get S_ij W_ij = S_pq W_pq. We can now change dummy indices again, from p → j and q → i, which gives S_pq W_pq = S_ji W_ji. On combining these, we get

\[
S_{ij}\, W_{ij} = S_{ji}\, W_{ji}.
\]
Effectively, we have changed both i and j simultaneously from i →j and j →i.
Next, since [S] is symmetric, S_ji = S_ij; and since [W] is skew-symmetric, W_ji = −W_ij. Therefore S_ji W_ji = −S_ij W_ij. Using this on the right-hand side of the preceding equation gives

\[
S_{ij}\, W_{ij} = -S_{ij}\, W_{ij},
\]

from which it follows that S_ij W_ij = 0.
Remark: As a special case, take S_ij = u_i u_j where {u} is an arbitrary column matrix; note that this [S] is symmetric. It follows that for any skew-symmetric [W],

\[
W_{ij}\, u_i u_j = 0 \qquad \text{for all } u_i.
\]
Example (1.4): Show that any matrix [A] can be additively decomposed into the sum of a symmetric matrix and a skew-symmetric matrix.

Solution: Define matrices [S] and [W] in terms of the given matrix [A] as follows:

\[
S_{ij} = \tfrac{1}{2}\,(A_{ij} + A_{ji}), \qquad W_{ij} = \tfrac{1}{2}\,(A_{ij} - A_{ji}).
\]

It may be readily verified from these definitions that S_ij = S_ji and that W_ij = −W_ji. Thus, the matrix [S] is symmetric and [W] is skew-symmetric. By adding the two equations above one obtains

\[
S_{ij} + W_{ij} = A_{ij},
\]
or in matrix form, [A] = [S] + [W].
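This decomposition is one line per part in NumPy; the following added sketch also verifies the symmetry properties:

```python
import numpy as np

A = np.random.rand(4, 4)
S = 0.5 * (A + A.T)     # symmetric part:      S_ij = (A_ij + A_ji)/2
W = 0.5 * (A - A.T)     # skew-symmetric part: W_ij = (A_ij - A_ji)/2

print(np.allclose(S, S.T))       # True
print(np.allclose(W, -W.T))      # True
print(np.allclose(S + W, A))     # True: [A] = [S] + [W]
```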
Example (1.5): Show that the quadratic form T_ij u_i u_j is unchanged if T_ij is replaced by its symmetric part, i.e. show that for any matrix [T],

\[
T_{ij}\, u_i u_j = S_{ij}\, u_i u_j \quad \text{for all } u_i, \qquad \text{where} \quad S_{ij} = \tfrac{1}{2}\,(T_{ij} + T_{ji}). \tag{i}
\]
Solution: The result follows from the following calculation:

\[
T_{ij}\, u_i u_j
= \left( \tfrac{1}{2} T_{ij} + \tfrac{1}{2} T_{ji} + \tfrac{1}{2} T_{ij} - \tfrac{1}{2} T_{ji} \right) u_i u_j
= \tfrac{1}{2}\,(T_{ij} + T_{ji})\, u_i u_j + \tfrac{1}{2}\,(T_{ij} - T_{ji})\, u_i u_j
= S_{ij}\, u_i u_j,
\]
where in the last step we have used the facts that A_ij = T_ij − T_ji is skew-symmetric, that B_ij = u_i u_j is symmetric, and that for any skew-symmetric matrix [A] and any symmetric matrix [B], one has A_ij B_ij = 0.
Example (1.6): Suppose that D_1111, D_1112, . . . , D_111n, . . . , D_1121, D_1122, . . . , D_112n, . . . , D_nnnn are n⁴ constants, and let D_ijkℓ denote a generic element of this set, where each of the subscripts i, j, k, ℓ takes all values in the range 1, 2, . . . , n. Let [E] be an arbitrary symmetric matrix and define the elements of a matrix [A] by A_ij = D_ijkℓ E_kℓ. Show that [A] is unchanged if D_ijkℓ is replaced by its "symmetric part" C_ijkℓ, where

\[
C_{ijk\ell} = \tfrac{1}{2}\,(D_{ijk\ell} + D_{ij\ell k}). \tag{i}
\]
Solution: In a manner entirely analogous to the previous example,

\[
A_{ij} = D_{ijk\ell}\, E_{k\ell}
= \left( \tfrac{1}{2} D_{ijk\ell} + \tfrac{1}{2} D_{ij\ell k} + \tfrac{1}{2} D_{ijk\ell} - \tfrac{1}{2} D_{ij\ell k} \right) E_{k\ell}
= \tfrac{1}{2}\,(D_{ijk\ell} + D_{ij\ell k})\, E_{k\ell} + \tfrac{1}{2}\,(D_{ijk\ell} - D_{ij\ell k})\, E_{k\ell}
= C_{ijk\ell}\, E_{k\ell},
\]

where in the last step we have used the fact that (D_ijkℓ − D_ijℓk)E_kℓ = 0, since D_ijkℓ − D_ijℓk is skew-symmetric in the subscripts k, ℓ while E_kℓ is symmetric in the subscripts k, ℓ.
Example (1.7): Evaluate the expression δ_ij δ_ik δ_jk.

Solution: By using the substitution rule, first on the repeated index i and then on the repeated index j, we have

\[
\delta_{ij}\,\delta_{ik}\,\delta_{jk} = \delta_{jk}\,\delta_{jk} = \delta_{kk} = \delta_{11} + \delta_{22} + \dots + \delta_{nn} = n.
\]
Example (1.8): Given an orthogonal matrix [Q], use indicial notation to solve the matrix equation [Q]{x} = {a} for {x}.

Solution: In indicial form, the equation [Q]{x} = {a} reads

\[
Q_{ij}\, x_j = a_i.
\]

Multiplying both sides by Q_ik gives

\[
Q_{ik}\, Q_{ij}\, x_j = Q_{ik}\, a_i.
\]

Since [Q] is orthogonal, we know from (1.39) that Q_rp Q_rq = δ_pq. Thus the preceding equation simplifies to

\[
\delta_{jk}\, x_j = Q_{ik}\, a_i,
\]

which, by the substitution rule, reduces further to

\[
x_k = Q_{ik}\, a_i.
\]

In matrix notation this reads {x} = [Q]^T{a}, which we could, of course, have written down immediately from the fact that {x} = [Q]^{-1}{a} and, for an orthogonal matrix, [Q]^{-1} = [Q]^T.
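A quick numerical confirmation (an added sketch): for an orthogonal [Q] the solution is obtained with a transpose alone, no linear solver being needed.

```python
import numpy as np

# Build a random orthogonal matrix via QR factorization.
Q, _ = np.linalg.qr(np.random.rand(3, 3))
a = np.random.rand(3)

x = Q.T @ a                                   # x_k = Q_ik a_i
print(np.allclose(Q @ x, a))                  # True
print(np.allclose(x, np.linalg.solve(Q, a)))  # True: matches the general solver
```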
Example (1.9): Consider the function f(x_1, x_2, . . . , x_n) = A_ij x_i x_j, where the A_ij's are constants. Calculate the partial derivatives ∂f/∂x_i.

Solution: We begin by making two general observations. First, note that because of the summation on the indices i and j, it is incorrect to conclude that ∂f/∂x_i = A_ij x_j by viewing this in the same way as differentiating the function A_12 x_1 x_2 with respect to x_1. Second, observe that if we differentiate f with respect to x_i and write ∂f/∂x_i = ∂(A_ij x_i x_j)/∂x_i, we would violate our rules, because the right-hand side has the subscript i appearing three times in one symbol grouping. In order to get around this difficulty we make use of the fact that the specific choice of the index in a dummy subscript is not significant, and so we can write f = A_pq x_p x_q.
Differentiating f and using the fact that [A] is constant gives

\[
\frac{\partial f}{\partial x_i} = \frac{\partial}{\partial x_i}\,(A_{pq}\, x_p x_q) = A_{pq}\, \frac{\partial}{\partial x_i}\,(x_p x_q) = A_{pq} \left[ \frac{\partial x_p}{\partial x_i}\, x_q + x_p\, \frac{\partial x_q}{\partial x_i} \right].
\]

Since the x_i's are independent variables, it follows that

\[
\frac{\partial x_i}{\partial x_j} = \begin{cases} 0 & \text{if } i \neq j,\\ 1 & \text{if } i = j, \end{cases}
\qquad \text{i.e.} \qquad \frac{\partial x_i}{\partial x_j} = \delta_{ij}.
\]
Using this above gives

\[
\frac{\partial f}{\partial x_i} = A_{pq} \left[ \delta_{pi}\, x_q + x_p\, \delta_{qi} \right] = A_{pq}\, \delta_{pi}\, x_q + A_{pq}\, x_p\, \delta_{qi},
\]

which, by the substitution rule, simplifies to

\[
\frac{\partial f}{\partial x_i} = A_{iq}\, x_q + A_{pi}\, x_p = A_{ij}\, x_j + A_{ji}\, x_j = (A_{ij} + A_{ji})\, x_j.
\]
Example (1.10): Suppose that {x}^T[A]{x} = 0 for all column matrices {x}, where the square matrix [A] is independent of {x}. What does this imply about [A]?

Solution: We know from a previous example that if [A] is skew-symmetric and [S] is symmetric then A_ij S_ij = 0, and, as a special case of this, that A_ij x_i x_j = 0 for all {x}. Thus a sufficient condition for the given equation to hold is that [A] be skew-symmetric. Now we show that this is also a necessary condition.

We are given that A_ij x_i x_j = 0 for all x_i. Since this equation holds for all x_i, we may differentiate both sides with respect to x_k and proceed as follows:
\[
0 = \frac{\partial}{\partial x_k}\,(A_{ij}\, x_i x_j) = A_{ij}\, \frac{\partial}{\partial x_k}\,(x_i x_j) = A_{ij}\, \frac{\partial x_i}{\partial x_k}\, x_j + A_{ij}\, x_i\, \frac{\partial x_j}{\partial x_k} = A_{ij}\, \delta_{ik}\, x_j + A_{ij}\, x_i\, \delta_{jk}, \tag{i}
\]

where we have used the fact that ∂x_i/∂x_j = δ_ij in the last step. On using the substitution rule, this simplifies to

\[
A_{kj}\, x_j + A_{ik}\, x_i = (A_{kj} + A_{jk})\, x_j = 0. \tag{ii}
\]

Since this also holds for all x_i, it may be differentiated again with respect to x_i to obtain

\[
(A_{kj} + A_{jk})\, \frac{\partial x_j}{\partial x_i} = (A_{kj} + A_{jk})\, \delta_{ji} = A_{ki} + A_{ik} = 0. \tag{iii}
\]

Thus [A] must necessarily be a skew-symmetric matrix. Therefore it is necessary and sufficient that [A] be skew-symmetric.
Example (1.11): Let C_ijkl be a set of n⁴ constants. Define the function Ŵ([E]) for all matrices [E] by Ŵ([E]) = W(E_11, E_12, . . . , E_nn) = ½ C_ijkl E_ij E_kl. Calculate

\[
\frac{\partial W}{\partial E_{ij}} \qquad \text{and} \qquad \frac{\partial^2 W}{\partial E_{ij}\, \partial E_{kl}}. \tag{i}
\]
Solution: First, since the E_ij's are independent variables, it follows that

\[
\frac{\partial E_{pq}}{\partial E_{ij}} = \begin{cases} 1 & \text{if } p = i \text{ and } q = j,\\ 0 & \text{otherwise}, \end{cases}
\qquad \text{i.e.} \qquad \frac{\partial E_{pq}}{\partial E_{ij}} = \delta_{pi}\, \delta_{qj}. \tag{ii}
\]

Keeping this in mind and differentiating W(E_11, E_12, . . . , E_nn) with respect to E_ij gives

\[
\frac{\partial W}{\partial E_{ij}}
= \frac{\partial}{\partial E_{ij}} \left( \tfrac{1}{2}\, C_{pqrs}\, E_{pq} E_{rs} \right)
= \tfrac{1}{2}\, C_{pqrs} \left( \frac{\partial E_{pq}}{\partial E_{ij}}\, E_{rs} + E_{pq}\, \frac{\partial E_{rs}}{\partial E_{ij}} \right)
= \tfrac{1}{2}\, C_{pqrs} \left( \delta_{pi}\delta_{qj}\, E_{rs} + \delta_{ri}\delta_{sj}\, E_{pq} \right)
= \tfrac{1}{2}\, C_{ijrs}\, E_{rs} + \tfrac{1}{2}\, C_{pqij}\, E_{pq}
= \tfrac{1}{2}\,(C_{ijpq} + C_{pqij})\, E_{pq},
\]
where we have made use of the substitution rule. (Note that in the first step we wrote W = ½ C_pqrs E_pq E_rs rather than W = ½ C_ijkl E_ij E_kl, because we would violate our rules for indices had we written ∂(½ C_ijkl E_ij E_kl)/∂E_ij.)
Differentiating this once more with respect to E_kl gives

\[
\frac{\partial^2 W}{\partial E_{ij}\, \partial E_{kl}}
= \frac{\partial}{\partial E_{kl}} \left( \tfrac{1}{2}\,(C_{ijpq} + C_{pqij})\, E_{pq} \right)
= \tfrac{1}{2}\,(C_{ijpq} + C_{pqij})\, \delta_{pk}\, \delta_{ql} \tag{iii}
\]
\[
= \tfrac{1}{2}\,(C_{ijkl} + C_{klij}). \tag{iv}
\]
Example (1.12): Evaluate the expression e_ijk e_kij.

Solution: By first using the skew-symmetry property (1.45), then using the identity (1.49), and finally using the substitution rule, we have

\[
e_{ijk}\, e_{kij} = -e_{ijk}\, e_{ikj} = -(\delta_{jk}\,\delta_{kj} - \delta_{jj}\,\delta_{kk}) = -(\delta_{jj} - \delta_{jj}\,\delta_{kk}) = -(3 - 3 \times 3) = 6.
\]
Example (1.13): Show that

\[
e_{ijk}\, S_{jk} = 0 \tag{i}
\]

if and only if the matrix [S] is symmetric.
Solution: First, suppose that [S] is symmetric. Pick and fix the free subscript i at any value i = 1, 2, 3. Then we can think of e_ijk as the j,k element of a 3×3 matrix. Since e_ijk = −e_ikj, this is a skew-symmetric matrix. In a previous example we showed that S_jk W_jk = 0 for any symmetric matrix [S] and any skew-symmetric matrix [W]. Consequently (i) must hold.

Conversely, suppose that (i) holds for some matrix [S]. Multiplying (i) by e_ipq and using the identity (1.49) leads to

\[
e_{ipq}\, e_{ijk}\, S_{jk} = (\delta_{pj}\,\delta_{qk} - \delta_{pk}\,\delta_{qj})\, S_{jk} = S_{pq} - S_{qp} = 0,
\]

where in the last step we have used the substitution rule. Thus S_pq = S_qp, and so [S] is symmetric.
Remark: Note as a special case of this result that

\[
e_{ijk}\, v_j v_k = 0 \tag{ii}
\]

for any arbitrary column matrix {v}.
References
1. R.A. Frazer, W.J. Duncan and A.R. Collar, Elementary Matrices, Cambridge University Press, 1965.
2. R. Bellman, Introduction to Matrix Analysis, McGraw-Hill, 1960.
Chapter 2
Vectors and Linear Transformations
Notation:
α ..... scalar
a ..... vector
A ..... linear transformation
As mentioned in the Preface, Linear Algebra is a far richer subject than the very restricted
glimpse provided here might suggest. The discussion in these notes is limited almost entirely
to (a) real 3-dimensional Euclidean vector spaces, and (b) to linear transformations that
carry vectors from one vector space into the same vector space. These notes are designed to review those aspects of linear algebra that will be encountered in our study of continuum mechanics; they are not meant to be a source for learning the subject of linear algebra for the first time.
The following notation will be consistently used: Greek letters will denote real numbers;
lowercase boldface Latin letters will denote vectors; and uppercase boldface Latin letters will
denote linear transformations. Thus, for example, α, β, γ... will denote scalars (real num-
bers); a, b, c, ... will denote vectors; and A, B, C, ... will denote linear transformations. In
particular, “o” will denote the null vector while “0” will denote the null linear transforma-
tion.
2.1 Vectors
A vector space V is a collection of elements, called vectors, together with two operations, addition and multiplication by a scalar. The operation of addition, which has certain properties that we do not list here, associates with each pair of vectors x and y in V a vector denoted by x + y that is also in V. In particular, it is assumed that there is a unique vector o ∈ V, called the null vector, such that x + o = x. The operation of scalar multiplication, which likewise has certain properties that we do not list here, associates with each vector x ∈ V and each real number α another vector in V denoted by αx.
Let x_1, x_2, . . . , x_k be k vectors in V. These vectors are said to be linearly independent if the only real numbers α_1, α_2, . . . , α_k for which

\[
\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_k x_k = o \tag{2.1}
\]

are the numbers α_1 = α_2 = . . . = α_k = 0. If V contains n linearly independent vectors but does not contain n + 1 linearly independent vectors, we say that the dimension of V is n. Unless stated otherwise, from here on we restrict attention to 3-dimensional vector spaces.
If V is a vector space, any set of three linearly independent vectors {e_1, e_2, e_3} is said to be a basis for V. Given any vector x ∈ V there exists a unique set of numbers ξ_1, ξ_2, ξ_3 such that

\[
x = \xi_1 e_1 + \xi_2 e_2 + \xi_3 e_3; \tag{2.2}
\]

the numbers ξ_1, ξ_2, ξ_3 are called the components of x in the basis {e_1, e_2, e_3}.
Let U be a subset of a vector space V; we say that U is a subspace (or linear manifold) of V if, for every x, y ∈ U and every real number α, the vectors x + y and αx are also in U. Thus a linear manifold U of V is itself a vector space under the same operations of addition and multiplication by a scalar as in V.

A scalar-product (or inner product or dot product) on V is a function which assigns to each pair of vectors x, y in V a scalar, which we denote by x · y. A scalar-product has certain properties which we do not list here, except to note that it is required that

\[
x \cdot y = y \cdot x \qquad \text{for all } x, y \in V. \tag{2.3}
\]

A Euclidean vector space is a vector space together with an inner product on that space. From here on we shall restrict attention to 3-dimensional Euclidean vector spaces and denote such a space by E^3.
The length (or magnitude or norm) of a vector x is the scalar denoted by |x| and defined by

\[
|x| = (x \cdot x)^{1/2}. \tag{2.4}
\]

A vector has zero length if and only if it is the null vector. A unit vector is a vector of unit length. The angle θ between two vectors x and y is defined by

\[
\cos\theta = \frac{x \cdot y}{|x|\,|y|}, \qquad 0 \le \theta \le \pi. \tag{2.5}
\]

Two vectors x and y are orthogonal if x · y = 0. It is obvious, but nevertheless helpful, to note that if we are given two vectors x and y where x · y = 0 and y ≠ o, this does not necessarily imply that x = o; on the other hand, if x · y = 0 for every vector y, then x must be the null vector.
An orthonormal basis is a triplet of mutually orthogonal unit vectors e_1, e_2, e_3 ∈ E^3. For such a basis,

\[
e_i \cdot e_j = \delta_{ij} \qquad \text{for } i, j = 1, 2, 3, \tag{2.6}
\]

where the Kronecker delta δ_ij is defined in the usual way by

\[
\delta_{ij} = \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i \neq j. \end{cases} \tag{2.7}
\]
A vector-product (or cross-product) on E^3 is a function which assigns to each ordered pair of vectors x, y ∈ E^3 a vector, which we denote by x × y. The vector-product must have certain properties, which we do not list here except to note that it is required that

\[
y \times x = -x \times y \qquad \text{for all } x, y \in V. \tag{2.8}
\]

One can show that

\[
x \times y = |x|\,|y| \sin\theta\; n, \tag{2.9}
\]

where θ is the angle between x and y as defined by (2.5), and n is a unit vector in the direction of x × y, which is therefore normal to the plane defined by x and y. Since n is parallel to x × y, and since it has unit length, it follows that n = (x × y)/|x × y|. The magnitude |x × y| of the cross-product can be interpreted geometrically as the area of the parallelogram formed by the vectors x and y. A basis {e_1, e_2, e_3} is said to be right-handed if

\[
(e_1 \times e_2) \cdot e_3 > 0. \tag{2.10}
\]
2.1.1 Euclidean point space

A Euclidean point space P, whose elements are called points, is related to a Euclidean vector space E^3 in the following manner. Every ordered pair of points (p, q) is uniquely associated with a vector in E^3, say \vec{pq}, such that

(i) \vec{pq} = −\vec{qp} for all p, q ∈ P.

(ii) \vec{pq} + \vec{qr} = \vec{pr} for all p, q, r ∈ P.

(iii) given an arbitrary point p ∈ P and an arbitrary vector x ∈ E^3, there is a unique point q ∈ P such that x = \vec{pq}. Here x is called the position of point q relative to the point p.
Pick and fix an arbitrary point o ∈ P (which we call the origin of P) and an arbitrary basis for E^3 of unit vectors e_1, e_2, e_3. Corresponding to any point p ∈ P there is a unique vector \vec{op} = x = x_1 e_1 + x_2 e_2 + x_3 e_3 ∈ E^3. The triplet (x_1, x_2, x_3) are called the coordinates of p in the (coordinate) frame F = {o; e_1, e_2, e_3} comprised of the origin o and the basis vectors e_1, e_2, e_3. If e_1, e_2, e_3 is an orthonormal basis, the coordinate frame {o; e_1, e_2, e_3} is called a rectangular Cartesian coordinate frame.
2.2 Linear Transformations.

Consider a three-dimensional Euclidean vector space E^3. Let F be a function (or transformation) which assigns to each vector x ∈ E^3 a second vector y ∈ E^3,

\[
y = F(x), \qquad x \in E^3,\ y \in E^3; \tag{2.11}
\]

F is said to be a linear transformation if it is such that

\[
F(\alpha x + \beta y) = \alpha F(x) + \beta F(y) \tag{2.12}
\]

for all scalars α, β and all vectors x, y ∈ E^3. When F is a linear transformation, we usually omit the parentheses and write Fx instead of F(x). Note that Fx is a vector: it is the image of x under the transformation F.

A linear transformation is defined by the way it operates on vectors in E^3. A geometric example of a linear transformation is the "projection operator" Π which projects vectors onto a given plane P. Let P be the plane normal to the unit vector n; see Figure 2.1. For
Figure 2.1: The projection Πx of a vector x onto the plane P.
any vector x ∈ E^3, Πx ∈ P is the vector obtained by projecting x onto P. It can be verified geometrically that Π is defined by

\[
\Pi x = x - (x \cdot n)\, n \qquad \text{for all } x \in E^3. \tag{2.13}
\]
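In matrix form the projection operator is Π = I − n ⊗ n. A small NumPy sketch (an illustrative addition to the notes):

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])            # unit normal of the plane P
P_op = np.eye(3) - np.outer(n, n)        # matrix of Pi: x -> x - (x.n) n

x = np.array([1.0, 2.0, 3.0])
px = P_op @ x
print(px)                                # [1. 2. 0.]: component along n removed
print(np.isclose(px @ n, 0.0))           # True: the image lies in the plane
print(np.allclose(P_op @ P_op, P_op))    # True: projecting twice changes nothing
```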
Linear transformations tell us how vectors are mapped into other vectors. In particular, suppose that {y_1, y_2, y_3} are any three vectors in E^3 and that {x_1, x_2, x_3} are any three linearly independent vectors in E^3. Then there is a unique linear transformation F that maps {x_1, x_2, x_3} into {y_1, y_2, y_3}: y_1 = Fx_1, y_2 = Fx_2, y_3 = Fx_3. This follows from the fact that {x_1, x_2, x_3} is a basis for E^3. Therefore any arbitrary vector x can be expressed uniquely in the form x = ξ_1 x_1 + ξ_2 x_2 + ξ_3 x_3; consequently the image Fx of any vector x is given by Fx = ξ_1 y_1 + ξ_2 y_2 + ξ_3 y_3, which is a rule for assigning a unique vector Fx to any given vector x.
The null linear transformation 0 is the linear transformation that takes every vector x into the null vector o. The identity linear transformation I takes every vector x into itself. Thus

\[
0x = o, \qquad Ix = x \qquad \text{for all } x \in E^3. \tag{2.14}
\]

Let A and B be linear transformations on E^3 and let α be a scalar. The linear transformations A + B, AB and αA are defined as those linear transformations which are such that

\[
(A + B)x = Ax + Bx \qquad \text{for all } x \in E^3, \tag{2.15}
\]
\[
(AB)x = A(Bx) \qquad \text{for all } x \in E^3, \tag{2.16}
\]
\[
(\alpha A)x = \alpha(Ax) \qquad \text{for all } x \in E^3, \tag{2.17}
\]

respectively; A + B is called the sum of A and B, AB the product, and αA the scalar multiple of A by α. In general,

\[
AB \neq BA. \tag{2.18}
\]
The range of a linear transformation A (i.e., the collection of all vectors Ax as x takes all values in E^3) is a subspace of E^3. The dimension of this particular subspace is known as the rank of A. The set of all vectors x for which Ax = o is also a subspace of E^3; it is known as the null space of A.

Given any linear transformation A, one can show that there is a unique linear transformation, usually denoted by A^T, such that

\[
Ax \cdot y = x \cdot A^T y \qquad \text{for all } x, y \in E^3. \tag{2.19}
\]
A^T is called the transpose of A. One can show that

\[
(\alpha A)^T = \alpha A^T, \qquad (A + B)^T = A^T + B^T, \qquad (AB)^T = B^T A^T. \tag{2.20}
\]

A linear transformation A is said to be symmetric if

\[
A = A^T; \tag{2.21}
\]

skew-symmetric if

\[
A = -A^T. \tag{2.22}
\]

Every linear transformation A can be represented as the sum of a symmetric linear transformation S and a skew-symmetric linear transformation W as follows:

\[
A = S + W \qquad \text{where} \quad S = \tfrac{1}{2}(A + A^T), \quad W = \tfrac{1}{2}(A - A^T). \tag{2.23}
\]

For every skew-symmetric linear transformation W, it may be shown that

\[
Wx \cdot x = 0 \qquad \text{for all } x \in E^3; \tag{2.24}
\]

moreover, there exists a vector w (called the axial vector of W) which has the property that

\[
Wx = w \times x \qquad \text{for all } x \in E^3. \tag{2.25}
\]
Given a linear transformation A, if the only vector x for which Ax = o is the zero vector, then we say that A is non-singular. It follows from this that if A is non-singular then Ax ≠ Ay whenever x ≠ y. Thus, a non-singular transformation A is a one-to-one transformation in the sense that, for any given y ∈ E^3, there is one and only one vector x ∈ E^3 for which Ax = y. Consequently, corresponding to any non-singular linear transformation
A, there exists a second linear transformation, denoted by A^{-1} and called the inverse of A, such that Ax = y if and only if x = A^{-1}y, or equivalently, such that

\[
A A^{-1} = A^{-1} A = I. \tag{2.26}
\]
If {y_1, y_2, y_3} and {x_1, x_2, x_3} are two sets of linearly independent vectors in E^3, then there is a unique non-singular linear transformation F that maps {x_1, x_2, x_3} into {y_1, y_2, y_3}: y_1 = Fx_1, y_2 = Fx_2, y_3 = Fx_3. The inverse of F maps {y_1, y_2, y_3} into {x_1, x_2, x_3}. If both bases {x_1, x_2, x_3} and {y_1, y_2, y_3} are right-handed (or both are left-handed) we say that the linear transformation F preserves the orientation of the vector space.

If two linear transformations A and B are both non-singular, then so is AB; moreover,

\[
(AB)^{-1} = B^{-1} A^{-1}. \tag{2.27}
\]
If A is non-singular then so is A^T; moreover,

\[
(A^T)^{-1} = (A^{-1})^T, \tag{2.28}
\]

and so there is no ambiguity in writing this linear transformation as A^{-T}.
A linear transformation Q is said to be orthogonal if it preserves length, i.e., if

\[
|Qx| = |x| \qquad \text{for all } x \in E^3. \tag{2.29}
\]

If Q is orthogonal, it follows that it also preserves the inner product:

\[
Qx \cdot Qy = x \cdot y \qquad \text{for all } x, y \in E^3. \tag{2.30}
\]

Thus an orthogonal linear transformation preserves both the length of a vector and the angle between two vectors. If Q is orthogonal, it is necessarily non-singular, and

\[
Q^{-1} = Q^T. \tag{2.31}
\]

A linear transformation A is said to be positive definite if

\[
Ax \cdot x > 0 \qquad \text{for all } x \in E^3,\ x \neq o; \tag{2.32}
\]

positive-semi-definite if

\[
Ax \cdot x \ge 0 \qquad \text{for all } x \in E^3. \tag{2.33}
\]
A positive definite linear transformation is necessarily non-singular. Moreover, A is positive definite if and only if its symmetric part (1/2)(A + A^T) is positive definite.

Let A be a linear transformation. A subspace U is known as an invariant subspace of A if Av ∈ U for all v ∈ U. Given a linear transformation A, suppose that there exists an associated one-dimensional invariant subspace U. Since U is one-dimensional, it follows that if v ∈ U then any other vector in U can be expressed in the form λv for some scalar λ. Since U is an invariant subspace we know in addition that Av ∈ U whenever v ∈ U. Combining these two facts shows that Av = λv for all v ∈ U. A vector v and a scalar λ such that

\[
Av = \lambda v \tag{2.34}
\]

are known, respectively, as an eigenvector and an eigenvalue of A. Each eigenvector of A characterizes a one-dimensional invariant subspace of A. Every linear transformation A (on a 3-dimensional vector space E^3) has at least one eigenvalue.
It can be shown that a symmetric linear transformation A has three real eigenvalues λ_1, λ_2, and λ_3, and a corresponding set of three mutually orthogonal eigenvectors e_1, e_2, and e_3. The particular basis of E^3 comprised of {e_1, e_2, e_3} is said to be a principal basis of A.

Every eigenvalue of a positive definite linear transformation must be positive, and no eigenvalue of a non-singular linear transformation can be zero. A symmetric linear transformation is positive definite if and only if all three of its eigenvalues are positive.
If e and λ are an eigenvector and eigenvalue of a linear transformation A, then for any positive integer n, it is easily seen that e and λ^n are an eigenvector and an eigenvalue of A^n, where A^n = AA...(n times)...AA; this continues to be true for negative integers m provided A is non-singular and if by A^{-m} we mean (A^{-1})^m, m > 0.
Finally, according to the polar decomposition theorem, given any non-singular linear transformation F, there exist unique symmetric positive definite linear transformations U and V and a unique orthogonal linear transformation R such that

F = RU = VR.    (2.35)

If λ and r are an eigenvalue and eigenvector of U, then it can be readily shown that λ and Rr are an eigenvalue and eigenvector of V.
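For readers who wish to experiment numerically, the following short sketch (written in Python with NumPy; the function names sqrt_spd and polar_decompose are our own labels, not part of any standard library) computes the polar factors of a concrete non-singular component matrix F using the constructions above.

```python
import numpy as np

def sqrt_spd(S):
    # Spectral square root of a symmetric positive definite matrix:
    # S = sum_i s_i (q_i x q_i)  =>  sqrt(S) = sum_i sqrt(s_i) (q_i x q_i).
    s, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(s)) @ Q.T

def polar_decompose(F):
    # U = sqrt(F^T F) is symmetric positive definite; R = F U^{-1} is orthogonal.
    U = sqrt_spd(F.T @ F)
    R = F @ np.linalg.inv(U)
    V = R @ U @ R.T              # the left stretch, so that F = RU = VR
    return R, U, V

F = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.5, 0.3],
              [0.1, 0.0, 1.0]])
R, U, V = polar_decompose(F)
assert np.allclose(R @ U, F) and np.allclose(V @ R, F)
assert np.allclose(R.T @ R, np.eye(3))   # R is orthogonal
```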
Given two vectors a, b ∈ E^3, their tensor-product is the linear transformation usually denoted by a ⊗ b, which is such that

(a ⊗ b)x = (x · b)a for all x ∈ E^3.    (2.36)

Observe that for any x ∈ E^3, the vector (a ⊗ b)x is parallel to the vector a. Thus the range of the linear transformation a ⊗ b is the one-dimensional subspace of E^3 consisting of all vectors parallel to a. The rank of the linear transformation a ⊗ b is thus unity.

For any vectors a, b, c and d it is easily shown that

(a ⊗ b)^T = b ⊗ a,   (a ⊗ b)(c ⊗ d) = (b · c)(a ⊗ d).    (2.37)

The product of a linear transformation A with the linear transformation a ⊗ b gives

A(a ⊗ b) = (Aa) ⊗ b,   (a ⊗ b)A = a ⊗ (A^T b).    (2.38)
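In components, a ⊗ b is just the outer product of the two component columns, so the identities (2.36)–(2.38) can be spot-checked numerically. The NumPy sketch below is our own illustration, using randomly chosen vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d, x = (rng.standard_normal(3) for _ in range(5))
A = rng.standard_normal((3, 3))

ab = np.outer(a, b)                                   # components of a (x) b
assert np.allclose(ab @ x, np.dot(x, b) * a)          # (2.36)
assert np.allclose(ab.T, np.outer(b, a))              # (2.37)_1
assert np.allclose(ab @ np.outer(c, d),
                   np.dot(b, c) * np.outer(a, d))     # (2.37)_2
assert np.allclose(A @ ab, np.outer(A @ a, b))        # (2.38)_1
assert np.allclose(ab @ A, np.outer(a, A.T @ b))      # (2.38)_2
```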
Let {e_1, e_2, e_3} be an orthonormal basis. Since this is a basis, any vector in E^3, and therefore in particular each of the vectors Ae_1, Ae_2, Ae_3, can be expressed as a unique linear combination of the basis vectors e_1, e_2, e_3. It follows that there exist unique real numbers A_ij such that

Ae_j = Σ_{i=1}^{3} A_ij e_i,   j = 1, 2, 3,    (2.39)

where A_ij is the i-th component of the vector Ae_j. They can equivalently be expressed as A_ij = e_i · (Ae_j). The linear transformation A can now be represented as

A = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j).    (2.40)

One refers to the A_ij's as the components of the linear transformation A in the basis {e_1, e_2, e_3}. Note that

Σ_{i=1}^{3} e_i ⊗ e_i = I,   Σ_{i=1}^{3} (Ae_i) ⊗ e_i = A.    (2.41)
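The representation (2.40) and the identities (2.41) are easy to confirm numerically; the sketch below (NumPy, ours rather than anything in the notes) uses the standard basis, for which the components A_ij are just the matrix entries.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
e = np.eye(3)                      # rows e[i] form an orthonormal basis

# A_ij = e_i . (A e_j), eq. (2.39)
Aij = np.array([[e[i] @ (A @ e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(Aij, A)

# eq. (2.40): A = sum_ij A_ij (e_i x e_j)
assert np.allclose(sum(Aij[i, j] * np.outer(e[i], e[j])
                       for i in range(3) for j in range(3)), A)

# eq. (2.41): sum_i e_i x e_i = I and sum_i (A e_i) x e_i = A
assert np.allclose(sum(np.outer(e[i], e[i]) for i in range(3)), np.eye(3))
assert np.allclose(sum(np.outer(A @ e[i], e[i]) for i in range(3)), A)
```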
Let S be a symmetric linear transformation with eigenvalues λ_1, λ_2, λ_3 and corresponding (mutually orthogonal unit) eigenvectors e_1, e_2, e_3. Since Se_j = λ_j e_j for each j = 1, 2, 3, it follows from (2.39) that the components of S in the principal basis {e_1, e_2, e_3} are S_11 = λ_1, S_21 = S_31 = 0; S_12 = 0, S_22 = λ_2, S_32 = 0; S_13 = S_23 = 0, S_33 = λ_3. It follows from the general representation (2.40) that S admits the representation

S = Σ_{i=1}^{3} λ_i (e_i ⊗ e_i);    (2.42)

this is called the spectral representation of a symmetric linear transformation. It can be readily shown that, for any positive integer n,

S^n = Σ_{i=1}^{3} λ_i^n (e_i ⊗ e_i);    (2.43)

if S is symmetric and non-singular, then

S^{-1} = Σ_{i=1}^{3} (1/λ_i)(e_i ⊗ e_i).    (2.44)

If S is symmetric and positive definite, there is a unique symmetric positive definite linear transformation T such that T^2 = S. We call T the positive definite square root of S and denote it by T = √S. It is readily seen that

√S = Σ_{i=1}^{3} √λ_i (e_i ⊗ e_i).    (2.45)
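The spectral formulas (2.42)–(2.45) translate directly into a few lines of NumPy. This is again our own illustrative sketch, with np.linalg.eigh supplying the eigenvalues and orthonormal eigenvectors of a symmetric matrix S.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
S = B @ B.T + 3 * np.eye(3)        # symmetric positive definite by construction

lam, E = np.linalg.eigh(S)         # columns E[:, i] are the eigenvectors e_i
P = [np.outer(E[:, i], E[:, i]) for i in range(3)]

assert np.allclose(sum(l * p for l, p in zip(lam, P)), S)        # (2.42)
assert np.allclose(sum(l**3 * p for l, p in zip(lam, P)),
                   S @ S @ S)                                    # (2.43), n = 3
assert np.allclose(sum(p / l for l, p in zip(lam, P)),
                   np.linalg.inv(S))                             # (2.44)
T = sum(np.sqrt(l) * p for l, p in zip(lam, P))                  # (2.45)
assert np.allclose(T @ T, S)
```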
2.3 Worked Examples.

Example 2.1: Given three vectors a, b, c, show that

a · (b × c) = b · (c × a) = c · (a × b).

Solution: By the properties of the vector-product, the vector (a + b) is normal to the vector (a + b) × c. Thus

(a + b) · [(a + b) × c] = 0.

On expanding this out one obtains

a · (a × c) + a · (b × c) + b · (a × c) + b · (b × c) = 0.

Since a is normal to (a × c), and b is normal to (b × c), the first and last terms in this equation vanish. Finally, recall that a × c = −c × a. Thus the preceding equation simplifies to

a · (b × c) = b · (c × a).

This establishes the first part of the result. The second part is shown analogously.
Example 2.2: Show that a necessary and sufficient condition for three vectors a, b, c in E^3 – none of which is the null vector – to be linearly dependent is that a · (b × c) = 0.

Solution: To show necessity, suppose that the three vectors a, b, c are linearly dependent. It follows that

αa + βb + γc = o

for some real numbers α, β, γ, at least one of which is non-zero. Taking the vector-product of this equation with c and then taking the scalar-product of the result with a leads to

βa · (b × c) = 0.

Analogous calculations with the other pairs of vectors, keeping in mind that a · (b × c) = b · (c × a) = c · (a × b), lead to

αa · (b × c) = 0,   βa · (b × c) = 0,   γa · (b × c) = 0.

Since at least one of α, β, γ is non-zero, it follows that necessarily a · (b × c) = 0.

To show sufficiency, let a · (b × c) = 0 and assume that a, b, c are linearly independent. We will show that this leads to a contradiction, whence a, b, c must be linearly dependent. By the properties of the vector-product, the vector b × c is normal to the plane defined by the vectors b and c. By assumption, a · (b × c) = 0, and this implies that a is normal to b × c. Since we are in E^3 this means that a must lie in the plane defined by b and c, and so a, b, c cannot be linearly independent.
Example 2.3: Interpret the quantity a · (b × c) geometrically in terms of the volume of the tetrahedron defined by the vectors a, b, c.

Solution: Consider the tetrahedron formed by the three vectors a, b, c as depicted in Figure 2.2. Its volume V_0 = (1/3) A_0 h_0, where A_0 is the area of its base and h_0 is its height.

Figure 2.2: Volume of the tetrahedron defined by vectors a, b, c. [The figure shows the base of area A_0 spanned by a and b, the height h_0, and the unit normal n = (a × b)/|a × b|.]

Consider the triangle defined by the vectors a and b to be the base of the tetrahedron. Its area A_0 can be written as (1/2) base × height = (1/2)|a|(|b| |sin θ|), where θ is the angle between a and b. However, from the property (2.9) of the vector-product we have |a × b| = |a||b| |sin θ|, and so A_0 = |a × b|/2.

Next, n = (a × b)/|a × b| is a unit vector that is normal to the base of the tetrahedron, and so the height of the tetrahedron is h_0 = c · n; see Figure 2.2.

Therefore

V_0 = (1/3) A_0 h_0 = (1/3)(|a × b|/2)(c · n) = (1/6)(a × b) · c.    (i)

Observe that this provides a geometric explanation for why the vectors a, b, c are linearly dependent if and only if (a × b) · c = 0.
Example 2.4: Let φ(x) be a scalar-valued function defined on the vector space E^3. If φ is linear, i.e. if φ(αx + βy) = αφ(x) + βφ(y) for all scalars α, β and all vectors x, y, show that φ(x) = c · x for some constant vector c. Remark: This shows that the scalar-product is the most general scalar-valued linear function of a vector.

Solution: Let {e_1, e_2, e_3} be any orthonormal basis for E^3. Then an arbitrary vector x can be written in terms of its components as x = x_1 e_1 + x_2 e_2 + x_3 e_3. Therefore

φ(x) = φ(x_1 e_1 + x_2 e_2 + x_3 e_3),

which because of the linearity of φ leads to

φ(x) = x_1 φ(e_1) + x_2 φ(e_2) + x_3 φ(e_3).

On setting c_i = φ(e_i), i = 1, 2, 3, we find

φ(x) = x_1 c_1 + x_2 c_2 + x_3 c_3 = c · x,

where c = c_1 e_1 + c_2 e_2 + c_3 e_3.
Example 2.5: If two linear transformations A and B have the property that Ax · y = Bx · y for all vectors x and y, show that A = B.

Solution: Since (Ax − Bx) · y = 0 for all vectors y, we may choose y = Ax − Bx in this, leading to |Ax − Bx|^2 = 0. Since the only vector of zero length is the null vector, this implies that

Ax = Bx for all vectors x,    (i)

and so A = B.
Example 2.6: Let n be a unit vector, and let P be the plane through o normal to n. Let Π and R be the transformations which, respectively, project and reflect a vector in the plane P.

a. Show that Π and R are linear transformations; Π is called the “projection linear transformation” while R is known as the “reflection linear transformation”.

b. Show that R(Rx) = x for all x ∈ E^3.

c. Verify that a reflection linear transformation R is non-singular while a projection linear transformation Π is singular. What is the inverse of R?

d. Verify that a projection linear transformation Π is symmetric and that a reflection linear transformation R is orthogonal.

Figure 2.3: The projection Πx and reflection Rx of a vector x on the plane P.

e. Show that the projection linear transformation and reflection linear transformation can be represented as Π = I − n ⊗ n and R = I − 2(n ⊗ n) respectively.

Solution:

a. Figure 2.3 shows a sketch of the plane P, its unit normal vector n, a generic vector x, its projection Πx and its reflection Rx. By geometry we see that

Πx = x − (x · n)n,   Rx = x − 2(x · n)n.    (i)

These define the images Πx and Rx of a generic vector x under the transformations Π and R. One can readily verify that Π and R satisfy the requirement (2.12) of a linear transformation.

b. Applying the definition (i)_2 of R to the vector Rx gives

R(Rx) = (Rx) − 2[(Rx) · n]n.

Replacing Rx on the right-hand side of this equation by (i)_2, and expanding the resulting expression, shows that the right-hand side simplifies to x. Thus R(Rx) = x.

c. Applying the definition (i)_1 of Π to the vector n gives

Πn = n − (n · n)n = n − n = o.

Therefore Πn = o and (since n ≠ o) we see that o is not the only vector that is mapped to the null vector by Π. The transformation Π is therefore singular.

Next consider the transformation R and consider a vector x that is mapped by it to the null vector, i.e. Rx = o. Using (i)_2,

x = 2(x · n)n.

Taking the scalar-product of this equation with the unit vector n yields x · n = 2(x · n), from which we conclude that x · n = 0. Substituting this into the right-hand side of the preceding equation leads to x = o. Therefore Rx = o if and only if x = o and so R is non-singular.

To find the inverse of R, recall from part (b) that R(Rx) = x. Operating on both sides of this equation by R^{-1} gives Rx = R^{-1}x. Since this holds for all vectors x it follows that R^{-1} = R.

d. To show that Π is symmetric we simply use its definition (i)_1 to calculate Πx · y and x · Πy for arbitrary vectors x and y. This yields

Πx · y = [x − (x · n)n] · y = x · y − (x · n)(y · n)

and

x · Πy = x · [y − (y · n)n] = x · y − (x · n)(y · n).

Thus Πx · y = x · Πy and so Π is symmetric.

To show that R is orthogonal we must show that RR^T = I or R^T = R^{-1}. We begin by calculating R^T. Recall from the definition (2.19) that the transpose satisfies the requirement x · R^T y = Rx · y. Using the definition (i)_2 of R on the right-hand side of this equation yields

x · R^T y = x · y − 2(x · n)(y · n).

We can rearrange the right-hand side of this equation so it reads

x · R^T y = x · [y − 2(y · n)n].

Since this holds for all x it follows that R^T y = y − 2(y · n)n. Comparing this with (i)_2 shows that R^T = R. In part (c) we showed that R^{-1} = R, and so it now follows that R^T = R^{-1}. Thus R is orthogonal.

e. Applying the operation (I − n ⊗ n) to an arbitrary vector x gives

(I − n ⊗ n)x = x − (n ⊗ n)x = x − (x · n)n = Πx,

and so Π = I − n ⊗ n. Similarly,

(I − 2n ⊗ n)x = x − 2(x · n)n = Rx,

and so R = I − 2n ⊗ n.
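The results of this example are easily confirmed numerically. The sketch below (NumPy, our own) builds Π = I − n ⊗ n and R = I − 2(n ⊗ n) for a particular unit normal and checks parts (b)–(d).

```python
import numpy as np

n = np.array([1.0, 2.0, 2.0]) / 3.0        # a unit normal (|n| = 1)
I = np.eye(3)
P = I - np.outer(n, n)                     # projection onto the plane normal to n
R = I - 2 * np.outer(n, n)                 # reflection in that plane

assert np.allclose(R @ R, I)               # (b): R(Rx) = x, so R^{-1} = R
assert np.allclose(P @ n, 0)               # (c): Pi n = o, so Pi is singular
assert np.allclose(P, P.T)                 # (d): Pi is symmetric
assert np.allclose(R.T @ R, I)             # (d): R is orthogonal
```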
Example 2.7: If W is a skew symmetric linear transformation, show that

Wx · x = 0 for all x.    (i)

Solution: By the definition (2.19) of the transpose, we have Wx · x = x · W^T x; and since W = −W^T for a skew symmetric linear transformation, this can be written as Wx · x = −x · Wx. Finally the property (2.3) of the scalar-product allows this to be written as Wx · x = −Wx · x, from which the desired result follows.
Example 2.8: Show that (AB)^T = B^T A^T.

Solution: First, by the definition (2.19) of the transpose,

(AB)x · y = x · (AB)^T y.    (i)

Second, note that (AB)x · y = A(Bx) · y. By the definition of the transpose of A we have A(Bx) · y = Bx · A^T y; and by the definition of the transpose of B we have Bx · A^T y = x · B^T A^T y. Combining these three equations shows that

(AB)x · y = x · B^T A^T y.    (ii)

Equating the two expressions (i) and (ii) for (AB)x · y shows that x · (AB)^T y = x · B^T A^T y for all vectors x, y, which establishes the desired result.
Example 2.9: If o is the null vector, then show that Ao = o for any linear transformation A.
Solution: The null vector o has the property that when it is added to any vector, the vector remains
unchanged. Therefore x + o = x, and similarly Ax + o = Ax. However operating on the first of these
equations by A shows that Ax + Ao = Ax, which when combined with the second equation yields the
desired result.
Example 2.10: If A and B are non-singular linear transformations, show that AB is also non-singular and that (AB)^{-1} = B^{-1}A^{-1}.

Solution: Let C = B^{-1}A^{-1}. We will show that (AB)C = C(AB) = I and therefore that C is the inverse of AB. (Since the inverse would thus have been shown to exist, necessarily AB must be non-singular.) Observe first that

(AB)C = (AB)B^{-1}A^{-1} = A(BB^{-1})A^{-1} = AIA^{-1} = I,

and similarly that

C(AB) = B^{-1}A^{-1}(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = I.
Therefore (AB)C = C(AB) = I and so C is the inverse of AB.
Example 2.11: If A is non-singular, show that (A^{-1})^T = (A^T)^{-1}.

Solution: Since (A^T)^{-1} is the inverse of A^T we have (A^T)^{-1}A^T = I. Post-operating on both sides of this equation by (A^{-1})^T gives

(A^T)^{-1} A^T (A^{-1})^T = (A^{-1})^T.

Recall that (AB)^T = B^T A^T for any two linear transformations A and B. Thus the preceding equation simplifies to

(A^T)^{-1}(A^{-1}A)^T = (A^{-1})^T.

Since A^{-1}A = I the desired result follows.
Example 2.12: Show that an orthogonal linear transformation Q preserves inner products, i.e. show that Qx · Qy = x · y for all vectors x, y.

Solution: Since

(x − y) · (x − y) = x · x + y · y − 2x · y,

it follows that

x · y = (1/2)[ |x|^2 + |y|^2 − |x − y|^2 ].    (i)

Since this holds for all vectors x, y it must also hold when x and y are replaced by Qx and Qy:

Qx · Qy = (1/2)[ |Qx|^2 + |Qy|^2 − |Qx − Qy|^2 ].

By definition, an orthogonal linear transformation Q preserves length, i.e. |Qv| = |v| for all vectors v; moreover, by linearity, Qx − Qy = Q(x − y). Thus the preceding equation simplifies to

Qx · Qy = (1/2)[ |x|^2 + |y|^2 − |x − y|^2 ].    (ii)

Since the right-hand sides of the preceding expressions for x · y and Qx · Qy are the same, it follows that Qx · Qy = x · y.

Remark: Thus an orthogonal linear transformation preserves the length of any vector and the inner product between any two vectors. It follows therefore that an orthogonal linear transformation preserves the angle between a pair of vectors as well.
Example 2.13: Let Q be an orthogonal linear transformation. Show that

a. Q is non-singular, and that

b. Q^{-1} = Q^T.

Solution:

a. To show that Q is non-singular we must show that the only vector x for which Qx = o is the null vector x = o. Suppose that Qx = o for some vector x. Taking the norm of the two sides of this equation leads to |Qx| = |o| = 0. However an orthogonal linear transformation preserves length and therefore |Qx| = |x|. Consequently |x| = 0. However the only vector of zero length is the null vector and so necessarily x = o. Thus Q is non-singular.

b. Since Q is orthogonal it preserves the inner product: Qx · Qy = x · y for all vectors x and y. However the property (2.19) of the transpose shows that Qx · Qy = x · Q^T Qy. It follows that x · Q^T Qy = x · y for all vectors x and y, and therefore that Q^T Q = I. Thus Q^{-1} = Q^T.
Example 2.14: If α_1 and α_2 are two distinct eigenvalues of a symmetric linear transformation A, show that the corresponding eigenvectors a_1 and a_2 are orthogonal to each other.

Solution: Recall from the definition of the transpose that Aa_1 · a_2 = a_1 · A^T a_2, and since A is symmetric that A = A^T. Thus

Aa_1 · a_2 = a_1 · Aa_2.

Since a_1 and a_2 are eigenvectors of A corresponding to the eigenvalues α_1 and α_2, we have Aa_1 = α_1 a_1 and Aa_2 = α_2 a_2. Thus the preceding equation reduces to α_1 a_1 · a_2 = α_2 a_1 · a_2, or equivalently,

(α_1 − α_2)(a_1 · a_2) = 0.

Since α_1 ≠ α_2, it follows that necessarily a_1 · a_2 = 0.
Example 2.15: If λ and e are an eigenvalue and eigenvector of an arbitrary linear transformation A, show that λ and P^{-1}e are an eigenvalue and eigenvector of the linear transformation P^{-1}AP. Here P is an arbitrary non-singular linear transformation.

Solution: Since PP^{-1} = I it follows that Ae = APP^{-1}e. However we are told that Ae = λe, whence APP^{-1}e = λe. Operating on both sides with P^{-1} gives P^{-1}APP^{-1}e = λP^{-1}e, which establishes the result.
Example 2.16: If λ is an eigenvalue of an orthogonal linear transformation Q, show that |λ| = 1.

Solution: Let λ and e be an eigenvalue and corresponding eigenvector of Q. Thus Qe = λe and so |Qe| = |λe| = |λ| |e|. However, Q preserves length and so |Qe| = |e|. Thus |λ| = 1.

Remark: We will show later that +1 is an eigenvalue of a “proper” orthogonal linear transformation on E^3. The corresponding eigenvector is known as the axis of Q.
Example 2.17: The components of a linear transformation A in an orthonormal basis {e_1, e_2, e_3} are the unique real numbers A_ij defined by

Ae_j = Σ_{i=1}^{3} A_ij e_i,   j = 1, 2, 3.    (i)

Show that the linear transformation A can be represented as

A = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j).    (ii)

Solution: Consider the linear transformation on the right-hand side of (ii) and operate it on an arbitrary vector x:

[ Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j) ] x = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (x · e_j) e_i = Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij x_j e_i = Σ_{j=1}^{3} x_j ( Σ_{i=1}^{3} A_ij e_i ),

where we have used the facts that (p ⊗ q)r = (q · r)p and x_i = x · e_i. On using (i) in the right-most expression above, we can continue this calculation as follows:

[ Σ_{i=1}^{3} Σ_{j=1}^{3} A_ij (e_i ⊗ e_j) ] x = Σ_{j=1}^{3} x_j Ae_j = A ( Σ_{j=1}^{3} x_j e_j ) = Ax.

The desired result follows from this since it holds for arbitrary vectors x.
Example 2.18: Let R be a “rotation transformation” that rotates vectors in E^3 through an angle θ, 0 < θ < π, about an axis e (in the sense of the right-hand rule). Show that R can be represented as

R = e ⊗ e + (e_1 ⊗ e_1 + e_2 ⊗ e_2) cos θ − (e_1 ⊗ e_2 − e_2 ⊗ e_1) sin θ,    (i)

where e_1 and e_2 are any two mutually orthogonal vectors such that {e_1, e_2, e} forms a right-handed orthonormal basis for E^3.

Solution: We begin by listing what is given to us in the problem statement. Since the transformation R simply rotates vectors, it necessarily preserves the length of a vector and so

|Rx| = |x| for all vectors x.    (ii)

In addition, since the angle through which R rotates a vector is θ, the angle between any vector x and its image Rx is θ:

Rx · x = |x|^2 cos θ for all vectors x.    (iii)

Next, since R rotates vectors about the axis e, the angle between any vector x and e must equal the angle between Rx and e:

Rx · e = x · e for all vectors x;    (iv)

moreover, it leaves the axis e itself unchanged:

Re = e.    (v)

And finally, since the rotation is in the sense of the right-hand rule, for any vector x that is not parallel to the axis e, the vectors x, Rx and e must obey the inequality

(x × Rx) · e > 0 for all vectors x that are not parallel to e.    (vi)

Let {e_1, e_2, e} be a right-handed orthonormal basis. This implies that any vector in E^3, and therefore in particular the vectors Re_1, Re_2 and Re, can be expressed as linear combinations of e_1, e_2 and e:

Re_1 = R_11 e_1 + R_21 e_2 + R_31 e,
Re_2 = R_12 e_1 + R_22 e_2 + R_32 e,    (vii)
Re  = R_13 e_1 + R_23 e_2 + R_33 e,

for some unique real numbers R_ij, i, j = 1, 2, 3.

First, it follows from (v) and (vii)_3 that

R_13 = 0,   R_23 = 0,   R_33 = 1.

Second, we conclude from (iv) with the choice x = e_1 that Re_1 · e = 0. Similarly Re_2 · e = 0. These together with (vii) imply that

R_31 = R_32 = 0.

Third, it follows from (iii) with x = e_1 and (vii)_1 that R_11 = cos θ. One similarly shows that R_22 = cos θ. Thus

R_11 = R_22 = cos θ.

Collecting these results allows us to write (vii) as

Re_1 = cos θ e_1 + R_21 e_2,
Re_2 = R_12 e_1 + cos θ e_2,    (viii)
Re  = e.

Fourth, the inequality (vi) with the choice x = e_1, together with (viii) and the fact that {e_1, e_2, e} forms a right-handed basis, yields R_21 > 0. Similarly the choice x = e_2 yields R_12 < 0. Fifth, (ii) with x = e_1 gives |Re_1| = 1, which in view of (viii)_1 requires that R_21 = ± sin θ. Similarly we find that R_12 = ± sin θ. Collecting these results shows that

R_21 = + sin θ,   R_12 = − sin θ,

since 0 < θ < π. Thus in conclusion we can write (viii) as

Re_1 = cos θ e_1 + sin θ e_2,
Re_2 = − sin θ e_1 + cos θ e_2,    (ix)
Re  = e.

Finally, recall the representation (2.40) of a linear transformation in terms of its components as defined in (2.39). Applying this to (ix) allows us to write

R = cos θ (e_1 ⊗ e_1) + sin θ (e_2 ⊗ e_1) − sin θ (e_1 ⊗ e_2) + cos θ (e_2 ⊗ e_2) + (e ⊗ e),    (x)

which can be rearranged to give the desired result.
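As a numerical sanity check of the representation (i) — our own sketch, in NumPy — one can pick an axis e, complete it to a right-handed orthonormal triad {e_1, e_2, e}, assemble R from (i), and confirm the defining properties used in the derivation.

```python
import numpy as np

theta = 0.7
e = np.array([0.0, 0.0, 1.0])             # axis of rotation
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.cross(e, e1)                      # {e1, e2, e} is right-handed orthonormal

R = (np.outer(e, e)
     + (np.outer(e1, e1) + np.outer(e2, e2)) * np.cos(theta)
     - (np.outer(e1, e2) - np.outer(e2, e1)) * np.sin(theta))

assert np.allclose(R.T @ R, np.eye(3))    # (ii): R preserves length (orthogonal)
assert np.allclose(R @ e, e)              # (v): the axis is unchanged

x = 2.0 * e1 - 1.0 * e2                   # any vector perpendicular to the axis
assert np.isclose((R @ x) @ x, (x @ x) * np.cos(theta))   # rotated through theta
assert np.cross(x, R @ x) @ e > 0         # (vi): right-hand-rule sense
```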
Example 2.19: If F is a non-singular linear transformation, show that F^T F is symmetric and positive definite.

Solution: For any linear transformations A and B we know that (AB)^T = B^T A^T and (A^T)^T = A. It therefore follows that

(F^T F)^T = F^T (F^T)^T = F^T F;    (i)

this shows that F^T F is symmetric.

In order to show that F^T F is positive definite, we consider the quadratic form F^T Fx · x. By using the property (2.19) of the transpose, we can write

F^T Fx · x = (Fx) · (Fx) = |Fx|^2 ≥ 0.    (ii)

Further, equality holds here if and only if Fx = o, which, since F is non-singular, can happen only if x = o. Thus F^T Fx · x > 0 for all vectors x ≠ o and so F^T F is positive definite.
Example 2.20: Consider a symmetric positive definite linear transformation S. Show that it has a unique symmetric positive definite square root, i.e. show that there is a unique symmetric positive definite linear transformation T for which T^2 = S.

Solution: Since S is symmetric and positive definite it has three real positive eigenvalues σ_1, σ_2, σ_3 with corresponding eigenvectors s_1, s_2, s_3 which may be taken to be orthonormal. Further, we know that S can be represented as

S = Σ_{i=1}^{3} σ_i (s_i ⊗ s_i).    (i)

If one defines a linear transformation T by

T = Σ_{i=1}^{3} √σ_i (s_i ⊗ s_i),    (ii)

one can readily verify that T is symmetric, positive definite and that T^2 = S. This establishes the existence of a symmetric positive definite square root of S. What remains is to show uniqueness of this square root.

Suppose that S has two symmetric positive definite square roots T_1 and T_2: S = T_1^2 = T_2^2. Let σ > 0 and s be an eigenvalue and corresponding eigenvector of S. Then Ss = σs and so T_1^2 s = σs. Thus we have

(T_1 + √σ I)(T_1 − √σ I)s = o.    (iii)

If we set f = (T_1 − √σ I)s this can be written as

T_1 f = −√σ f.    (iv)

Thus either f = o or f is an eigenvector of T_1 corresponding to the eigenvalue −√σ (< 0). Since T_1 is positive definite it cannot have a negative eigenvalue. Thus f = o and so

T_1 s = √σ s.    (v)

It similarly follows that T_2 s = √σ s and therefore that

T_1 s = T_2 s.    (vi)

This holds for every eigenvector s of S: i.e. T_1 s_i = T_2 s_i, i = 1, 2, 3. Since the triplet of eigenvectors forms a basis for the underlying vector space, this in turn implies that T_1 x = T_2 x for any vector x. Thus T_1 = T_2.
Example 2.21: Polar Decomposition Theorem: If F is a non-singular linear transformation, show that there exists a unique positive definite symmetric linear transformation U, and a unique orthogonal linear transformation R, such that F = RU.

Solution: It follows from Example 2.19 that F^T F is symmetric and positive definite. It then follows from Example 2.20 that F^T F has a unique symmetric positive definite square root, say, U:

U = √(F^T F).    (i)

Finally, since U is positive definite, it is non-singular, and its inverse U^{-1} exists. Define the linear transformation R through

R = FU^{-1}.    (ii)

All we have to do is to show that R is orthogonal. But this follows from

R^T R = (FU^{-1})^T (FU^{-1}) = (U^{-1})^T F^T F U^{-1} = U^{-1} U^2 U^{-1} = I.    (iii)

In this calculation we have used the fact that U, and so U^{-1}, is symmetric. This establishes the proposition (except for the uniqueness, which is left as an exercise).
Example 2.22: The polar decomposition theorem states that any non-singular linear transformation F can be represented uniquely in the forms F = RU = VR, where R is orthogonal and U and V are symmetric and positive definite. Let λ_i, r_i, i = 1, 2, 3, be the eigenvalues and eigenvectors of U. From Example 2.15 it follows that the eigenvalues of V are the same as those of U and that the corresponding eigenvectors ℓ_i of V are given by ℓ_i = Rr_i. Thus U and V have the spectral decompositions

U = Σ_{i=1}^{3} λ_i (r_i ⊗ r_i),   V = Σ_{i=1}^{3} λ_i (ℓ_i ⊗ ℓ_i).

Show that

F = Σ_{i=1}^{3} λ_i (ℓ_i ⊗ r_i),   R = Σ_{i=1}^{3} (ℓ_i ⊗ r_i).

Solution: First, by using the property (2.38)_1 and ℓ_i = Rr_i we have

F = RU = R Σ_{i=1}^{3} λ_i (r_i ⊗ r_i) = Σ_{i=1}^{3} λ_i (Rr_i) ⊗ r_i = Σ_{i=1}^{3} λ_i (ℓ_i ⊗ r_i).    (i)

Next, since U is non-singular,

U^{-1} = Σ_{i=1}^{3} λ_i^{-1} (r_i ⊗ r_i),

and therefore

R = FU^{-1} = [ Σ_{i=1}^{3} λ_i (ℓ_i ⊗ r_i) ][ Σ_{j=1}^{3} λ_j^{-1} (r_j ⊗ r_j) ] = Σ_{i=1}^{3} Σ_{j=1}^{3} λ_i λ_j^{-1} (ℓ_i ⊗ r_i)(r_j ⊗ r_j).

By using the property (2.37)_2 and the fact that r_i · r_j = δ_ij, we have (ℓ_i ⊗ r_i)(r_j ⊗ r_j) = (r_i · r_j)(ℓ_i ⊗ r_j) = δ_ij (ℓ_i ⊗ r_j). Therefore

R = Σ_{i=1}^{3} Σ_{j=1}^{3} λ_i λ_j^{-1} δ_ij (ℓ_i ⊗ r_j) = Σ_{i=1}^{3} λ_i λ_i^{-1} (ℓ_i ⊗ r_i) = Σ_{i=1}^{3} (ℓ_i ⊗ r_i).    (ii)
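These two sums are easy to verify numerically. The sketch below (NumPy, ours) computes U, R and the eigenpairs of U for a concrete F and reassembles F and R from the dyads ℓ_i ⊗ r_i.

```python
import numpy as np

F = np.array([[1.8, 0.4, 0.0],
              [0.2, 1.2, 0.1],
              [0.0, 0.3, 0.9]])

# Polar factors: U = sqrt(F^T F), R = F U^{-1}
lam2, Q = np.linalg.eigh(F.T @ F)
r = [Q[:, i] for i in range(3)]               # eigenvectors r_i of U
lam = np.sqrt(lam2)                           # eigenvalues lambda_i of U
U = sum(l * np.outer(v, v) for l, v in zip(lam, r))
R = F @ np.linalg.inv(U)
ell = [R @ v for v in r]                      # l_i = R r_i, eigenvectors of V

assert np.allclose(sum(l * np.outer(e, v) for l, e, v in zip(lam, ell, r)), F)
assert np.allclose(sum(np.outer(e, v) for e, v in zip(ell, r)), R)
```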
Example 2.23: Determine the rank and the null space of the linear transformation C = a ⊗ b where a ≠ o, b ≠ o.

Solution: Recall that the rank of any linear transformation A is the dimension of its range. (The range of A is the particular subspace of E^3 comprised of all vectors Ax as x takes all values in E^3.) Since Cx = (b · x)a, the vector Cx is parallel to the vector a for every choice of the vector x. Thus the range of C is the set of vectors parallel to a and its dimension is one. The linear transformation C therefore has rank one.

Recall that the null space of any linear transformation A is the particular subspace of E^3 comprised of the set of all vectors x for which Ax = o. Since Cx = (b · x)a and a ≠ o, the null space of C consists of all vectors x for which b · x = 0, i.e. the set of all vectors normal to b.
Example 2.24: Let λ_1 ≤ λ_2 ≤ λ_3 be the eigenvalues of the symmetric linear transformation S. Show that S can be expressed in the form

S = (I + a ⊗ b)(I + b ⊗ a),   a ≠ o, b ≠ o,    (i)

if and only if

0 ≤ λ_1 ≤ 1,   λ_2 = 1,   λ_3 ≥ 1.    (ii)
Example 2.25: Calculate the square roots of the identity tensor.

Solution: The identity is certainly a symmetric positive definite tensor. By the result of a previous example on the square root of a symmetric positive definite tensor, it follows that there is a unique symmetric positive definite tensor which is the square root of I. Obviously, this square root is also I. However, there are other square roots of I that are not symmetric positive definite. We are to explore them here: thus we wish to determine a tensor A on E^3 such that A^2 = I, A ≠ I and A ≠ −I.

First, if Ax = x for every vector x ∈ E^3, then, by definition, A = I. Since we are given that A ≠ I, there must exist at least one non-null vector x for which Ax ≠ x; call this vector f_1, so that Af_1 ≠ f_1. Set

e_1 = (A − I)f_1;    (i)

since Af_1 ≠ f_1, it follows that e_1 ≠ o. Observe that

(A + I)e_1 = (A + I)(A − I)f_1 = (A^2 − I)f_1 = Of_1 = o.    (ii)

Therefore

Ae_1 = −e_1,    (iii)

and so −1 is an eigenvalue of A with corresponding eigenvector e_1. Without loss of generality we can assume that |e_1| = 1.

Second, the fact that A ≠ −I, together with A^2 = I, similarly implies that there must exist a unit vector e_2 for which

Ae_2 = e_2,    (iv)

from which we conclude that +1 is an eigenvalue of A with corresponding eigenvector e_2.

Third, one can show that {e_1, e_2} is a linearly independent pair of vectors. To see this, suppose that for some scalars ξ_1, ξ_2 one has

ξ_1 e_1 + ξ_2 e_2 = o.

Operating on this by A yields ξ_1 Ae_1 + ξ_2 Ae_2 = o, which on using (iii) and (iv) leads to

−ξ_1 e_1 + ξ_2 e_2 = o.

Subtracting and adding the preceding two equations shows that ξ_1 e_1 = ξ_2 e_2 = o. Since e_1 and e_2 are eigenvectors, neither of them is the null vector o, and therefore ξ_1 = ξ_2 = 0. Therefore e_1 and e_2 are linearly independent.

Fourth, let e_3 be a unit vector that is perpendicular to both e_1 and e_2. The triplet of vectors {e_1, e_2, e_3} is linearly independent and therefore forms a basis for E^3.

Fifth, the components A_ij of the tensor A in the basis {e_1, e_2, e_3} are given, as usual, by

Ae_j = A_ij e_i.    (v)

Comparing (v) with (iii) yields A_11 = −1, A_21 = A_31 = 0, and similarly comparing (v) with (iv) yields A_22 = 1, A_12 = A_32 = 0. The matrix of components of A in this basis is therefore

[A] = [ −1   0   A_13
         0   1   A_23
         0   0   A_33 ].    (vi)

It follows that

[A^2] = [A]^2 = [A][A] = [ 1   0   −A_13 + A_13 A_33
                           0   1    A_23 + A_23 A_33
                           0   0    A_33^2            ].    (vii)

(Notation: [A^2] is the matrix of components of A^2 while [A]^2 is the square of the matrix of components of A. Why is [A^2] = [A]^2?) However, since A^2 = I, the matrix of components of A^2 in any basis has to be the identity matrix. Therefore we must have

−A_13 + A_13 A_33 = 0,   A_23 + A_23 A_33 = 0,   A_33^2 = 1,    (viii)

which implies that

either   A_13 arbitrary,  A_23 = 0,  A_33 = 1,   or   A_13 = 0,  A_23 arbitrary,  A_33 = −1.    (ix)

Consequently the matrix [A] must necessarily have one of the two forms

[ −1   0   α_1 ]        [ −1   0    0  ]
[  0   1    0  ]   or   [  0   1   α_2 ],    (x)
[  0   0    1  ]        [  0   0   −1  ]

where α_1 and α_2 are arbitrary scalars.

Sixth, set

p_1 = e_1,   q_1 = −e_1 + (α_1/2) e_3.    (xi)

Then

p_1 ⊗ q_1 = −e_1 ⊗ e_1 + (α_1/2) e_1 ⊗ e_3,

and therefore

I + 2p_1 ⊗ q_1 = (e_1 ⊗ e_1 + e_2 ⊗ e_2 + e_3 ⊗ e_3) − 2e_1 ⊗ e_1 + α_1 e_1 ⊗ e_3 = −e_1 ⊗ e_1 + e_2 ⊗ e_2 + e_3 ⊗ e_3 + α_1 e_1 ⊗ e_3.

Note from this that the components of the tensor I + 2p_1 ⊗ q_1 are given by (x)_1. Conversely, one can readily verify that the tensor

A = I + 2p_1 ⊗ q_1    (xii)

has the desired properties A^2 = I, A ≠ I, A ≠ −I for any value of the scalar α_1.

Alternatively, set

p_2 = e_2,   q_2 = e_2 + (α_2/2) e_3.    (xiii)

Then

p_2 ⊗ q_2 = e_2 ⊗ e_2 + (α_2/2) e_2 ⊗ e_3,

and therefore

−I + 2p_2 ⊗ q_2 = (−e_1 ⊗ e_1 − e_2 ⊗ e_2 − e_3 ⊗ e_3) + 2e_2 ⊗ e_2 + α_2 e_2 ⊗ e_3 = −e_1 ⊗ e_1 + e_2 ⊗ e_2 − e_3 ⊗ e_3 + α_2 e_2 ⊗ e_3.

Note from this that the components of the tensor −I + 2p_2 ⊗ q_2 are given by (x)_2. Conversely, one can readily verify that the tensor

A = −I + 2p_2 ⊗ q_2    (xiv)

has the desired properties A^2 = I, A ≠ I, A ≠ −I for any value of the scalar α_2.

Thus the tensors defined in (xii) and (xiv) are both square roots of the identity tensor that are not symmetric positive definite.
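These families are easy to check numerically; the following NumPy sketch (ours) verifies that A = I + 2p_1 ⊗ q_1 squares to the identity without being ±I or symmetric.

```python
import numpy as np

e1, e2, e3 = np.eye(3)
alpha1 = 2.5                                # an arbitrary scalar
p1 = e1
q1 = -e1 + 0.5 * alpha1 * e3
A = np.eye(3) + 2 * np.outer(p1, q1)        # the square root (xii)

assert np.allclose(A @ A, np.eye(3))        # A^2 = I
assert not np.allclose(A, np.eye(3)) and not np.allclose(A, -np.eye(3))
assert not np.allclose(A, A.T)              # not symmetric (for alpha1 != 0)
```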
References
1. I.M. Gelfand, Lectures on Linear Algebra, Wiley, New York, 1963.
2. P.R. Halmos, Finite Dimensional Vector Spaces, Van Nostrand, New Jersey, 1958.
3. J.K. Knowles, Linear Vector Spaces and Cartesian Tensors, Oxford University Press, New York, 1997.
Chapter 3

Components of Vectors and Tensors. Cartesian Tensors.

Notation:

α ..... scalar
{a} ..... 3 × 1 column matrix
a ..... vector
a_i ..... i-th component of the vector a in some basis; or i-th element of the column matrix {a}
[A] ..... 3 × 3 square matrix
A ..... linear transformation
A_ij ..... i, j component of the linear transformation A in some basis; or i, j element of the square matrix [A]
C_ijkl ..... i, j, k, l component of the 4-tensor C in some basis
T_{i1 i2 ... in} ..... i_1 i_2 ... i_n component of the n-tensor T in some basis.
3.1 Components of a vector in a basis.

Let E^3 be a three-dimensional Euclidean vector space. A set of three linearly independent vectors {e_1, e_2, e_3} forms a basis for E^3 in the sense that an arbitrary vector v can always be expressed as a linear combination of the three basis vectors; i.e. given any v ∈ E^3, there are unique scalars α, β, γ such that

v = αe_1 + βe_2 + γe_3.    (3.1)

If each basis vector e_i has unit length, and if each pair of distinct basis vectors e_i, e_j is mutually orthogonal, we say that {e_1, e_2, e_3} forms an orthonormal basis for E^3. Thus, for an orthonormal basis,

e_i · e_j = δ_ij,    (3.2)

where δ_ij is the Kronecker delta. In these notes we shall always restrict attention to orthonormal bases unless explicitly stated otherwise. If the basis is right-handed, one has in addition that

e_i · (e_j × e_k) = e_ijk,    (3.3)

where e_ijk is the alternator introduced previously in (1.44).

The components v_i of a vector v in a basis {e_1, e_2, e_3} are defined by

v_i = v · e_i.    (3.4)

The vector can be expressed in terms of its components and the basis vectors as

v = v_i e_i.    (3.5)

The components of v may be assembled into a column matrix

{v} = [ v_1
        v_2
        v_3 ].    (3.6)
Figure 3.1: Components {v_1, v_2, v_3} and {v′_1, v′_2, v′_3} of the same vector v in two different bases.

Even though this is obvious from the definition (3.4), it is still important to emphasize that the components v_i of a vector depend on both the vector v and the choice of basis. Suppose, for example, that we are given two bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3} as shown in Figure 3.1. Then the vector v has one set of components v_i in the first basis and a different set of components v′_i in the second basis:

v_i = v · e_i,   v′_i = v · e′_i.    (3.7)

Thus the one vector v can be expressed in either of the two equivalent forms

v = v_i e_i   or   v = v′_i e′_i.    (3.8)

The components v_i and v′_i are related to each other (as we shall discuss later) but in general v_i ≠ v′_i.
Once a basis {e_1, e_2, e_3} is chosen and fixed, there is a unique vector x associated with any given column matrix {x} such that the components of x in {e_1, e_2, e_3} are {x}. Thus, once the basis is fixed, there is a one-to-one correspondence between column matrices and vectors. It follows, for example, that once the basis is fixed, the vector equation z = x + y can be written equivalently as

{z} = {x} + {y}   or   z_i = x_i + y_i    (3.9)

in terms of the components x_i, y_i and z_i in the given basis.

If u_i and v_i are the components of two vectors u and v in a basis, then the scalar-product u · v can be expressed as

u · v = u_i v_i;    (3.10)

the vector-product u × v can be expressed as

u × v = (e_ijk u_j v_k) e_i   or equivalently as   (u × v)_i = e_ijk u_j v_k,    (3.11)

where e_ijk is the alternator introduced previously in (1.44).
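The component formulas (3.10) and (3.11) are conveniently tested with an explicit alternator. In the NumPy sketch below (our illustration), levi_civita is a helper we define ourselves, and np.einsum carries out the repeated-subscript summations.

```python
import numpy as np

def levi_civita():
    # The alternator e_ijk: +1 for even permutations of (1, 2, 3),
    # -1 for odd permutations, 0 otherwise.
    e = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        e[i, j, k], e[i, k, j] = 1.0, -1.0
    return e

eps = levi_civita()
rng = np.random.default_rng(3)
u, v = rng.standard_normal(3), rng.standard_normal(3)

assert np.isclose(np.dot(u, v), np.einsum('i,i', u, v))              # (3.10)
assert np.allclose(np.cross(u, v), np.einsum('ijk,j,k', eps, u, v))  # (3.11)
```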
3.2 Components of a linear transformation in a basis.

Consider a linear transformation A. Any vector in E^3 can be expressed as a linear combination of the basis vectors e_1, e_2 and e_3. In particular this is true of the three vectors Ae_1, Ae_2 and Ae_3. Let A_ij be the i-th component of the vector Ae_j, so that

Ae_j = A_ij e_i.    (3.12)

We can also write

A_ij = e_i · (Ae_j).    (3.13)

The 9 scalars A_ij are known as the components of the linear transformation A in the basis {e_1, e_2, e_3}. The components A_ij can be assembled into a square matrix:

[A] = [ A_11  A_12  A_13
        A_21  A_22  A_23
        A_31  A_32  A_33 ].    (3.14)

The linear transformation A can be expressed in terms of its components A_ij and the basis vectors e_i as

A = Σ_{j=1}^{3} Σ_{i=1}^{3} A_ij (e_i ⊗ e_j).    (3.15)

The components A_ij of a linear transformation depend on both the linear transformation A and the choice of basis. Suppose, for example, that we are given two bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. Then the linear transformation A has one set of components A_ij in the first basis and a different set of components A′_ij in the second basis:

A_ij = e_i · (Ae_j),   A′_ij = e′_i · (Ae′_j).    (3.16)

The components A_ij and A′_ij are related to each other (as we shall discuss later) but in general A_ij ≠ A′_ij.

The components of the linear transformation A = a ⊗ b are

A_ij = a_i b_j.    (3.17)

Once a basis {e_1, e_2, e_3} is chosen and fixed, there is a unique linear transformation M associated with any given square matrix [M] such that the components of M in {e_1, e_2, e_3} are [M]. Thus, once the basis is fixed, there is a one-to-one correspondence between square matrices and linear transformations. It follows, for example, that the equation y = Ax relating the linear transformation A and the vectors x and y can be written equivalently as

{y} = [A]{x}   or   y_i = A_ij x_j    (3.18)

in terms of the components A_ij, x_i and y_i in the given basis. Similarly, if A, B and C are linear transformations such that C = AB, then their component matrices [A], [B] and [C] are related by

[C] = [A][B]   or   C_ij = A_ik B_kj.    (3.19)

The component matrix [I] of the identity linear transformation I in any orthonormal basis is the unit matrix; its components are therefore given by the Kronecker delta δ_ij. If [A] and [A^T] are the component matrices of the linear transformations A and A^T, then [A^T] = [A]^T and (A^T)_ij = A_ji.
As mentioned in Section 2.2, a symmetric linear transformation S has three real eigenvalues λ_1, λ_2, λ_3 and corresponding orthonormal eigenvectors e_1, e_2, e_3. The eigenvectors are referred to as the principal directions of S. The particular basis consisting of the eigenvectors is called a principal basis for S. The component matrix [S] of the symmetric linear transformation S in its principal basis is

[S] = [ λ_1  0    0
        0    λ_2  0
        0    0    λ_3 ].    (3.20)
As a final remark we note that if we are to establish certain results for vectors and linear transformations, we can, if it is more convenient to do so, pick and fix a basis, and then work with the components in that basis. If necessary, we can revert back to the vectors and linear transformations at the end. For example, the first example in the previous chapter asked us to show that a · (b × c) = b · (c × a). In terms of components, the left-hand side of this reads a · (b × c) = a_i (b × c)_i = a_i e_ijk b_j c_k = e_ijk a_i b_j c_k. Similarly the right-hand side reads b · (c × a) = b_i (c × a)_i = b_i e_ijk c_j a_k = e_ijk a_k b_i c_j. Since i, j, k are dummy subscripts in the right-most expression, they can be changed to any other subscripts; thus by changing k → i, i → j and j → k we can write b · (c × a) = e_jki a_i b_j c_k. Finally, recalling that the sign of e_ijk changes when any two adjacent subscripts are switched, we find that b · (c × a) = e_jki a_i b_j c_k = −e_jik a_i b_j c_k = e_ijk a_i b_j c_k, where we have first switched the ki and then the ji in the subscripts of the alternator. The right-most expressions for a · (b × c) and b · (c × a) are identical, and this establishes the desired identity.
3.3 Components in two bases.

Consider a 3-dimensional Euclidean vector space together with two orthonormal bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. Since {e_1, e_2, e_3} forms a basis, any vector, and therefore in particular each of the vectors e′_i, can be represented as a linear combination of the basis vectors e_1, e_2, e_3. Let Q_ij be the j-th component of the vector e′_i in the basis {e_1, e_2, e_3}:

e′_i = Q_ij e_j.    (3.21)

By taking the dot-product of (3.21) with e_j, one sees that

Q_ij = e′_i · e_j,    (3.22)

and so Q_ij is the cosine of the angle between the basis vectors e′_i and e_j. Observe from (3.22) that Q_ji can also be interpreted as the j-th component of e_i in the basis {e′_1, e′_2, e′_3}, whence we also have

e_i = Q_ji e′_j.    (3.23)

The 9 numbers Q_ij can be assembled into a square matrix [Q]. This matrix relates the two bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. Since both bases are orthonormal it can be readily shown that [Q] is an orthogonal matrix. If, in addition, one basis can be rotated into the other, which means that both bases are right-handed or both are left-handed, then [Q] is a proper orthogonal matrix and det[Q] = +1; if the two bases are related by a reflection, which means that one basis is right-handed and the other is left-handed, then [Q] is an improper orthogonal matrix and det[Q] = −1.
We may now relate the different components of a single vector v in two bases. Let v_i and v′_i be the i-th components of the same vector v in the two bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. Then one can show that

v′_i = Q_ij v_j   or equivalently   {v′} = [Q]{v}.    (3.24)

Since [Q] is orthogonal, one also has the inverse relationships

v_i = Q_ji v′_j   or equivalently   {v} = [Q]^T {v′}.    (3.25)

In general, the component matrices {v} and {v′} of a vector v in two different bases are different. A vector whose components in every basis happen to be the same is called an isotropic vector: {v} = [Q]{v} for all orthogonal matrices [Q]. It is possible to show that the only isotropic vector is the null vector o.

Similarly, we may relate the different components of a single linear transformation A in two bases. Let A_ij and A′_ij be the ij-components of the same linear transformation A in the two bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. Then one can show that

A′_ij = Q_ip Q_jq A_pq   or equivalently   [A′] = [Q][A][Q]^T.    (3.26)

Since [Q] is orthogonal, one also has the inverse relationships

A_ij = Q_pi Q_qj A′_pq   or equivalently   [A] = [Q]^T [A′][Q].    (3.27)

In general, the component matrices [A] and [A′] of a linear transformation A in two different bases are different. A linear transformation whose components in every basis happen to be the same is called an isotropic linear transformation: [A] = [Q][A][Q]^T for all orthogonal matrices [Q]. It is possible to show that the most general isotropic symmetric linear transformation is a scalar multiple of the identity, αI, where α is an arbitrary scalar.
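A short numerical confirmation of the transformation rules (3.24)–(3.27), written as a NumPy sketch of our own: [Q] below is a proper orthogonal matrix built from a rotation, {v} and [A] are components in the first basis, and the primed components are computed and inverted both ways.

```python
import numpy as np

# A proper orthogonal [Q] (rotation about e3 through 0.4 rad), so [Q][Q]^T = [I]
c, s = np.cos(0.4), np.sin(0.4)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
assert np.isclose(np.linalg.det(Q), 1.0)

rng = np.random.default_rng(4)
v = rng.standard_normal(3)            # components {v} in the basis {e1, e2, e3}
A = rng.standard_normal((3, 3))       # components [A] in the same basis

v_p = Q @ v                           # (3.24): {v'} = [Q]{v}
A_p = Q @ A @ Q.T                     # (3.26): [A'] = [Q][A][Q]^T
assert np.allclose(Q.T @ v_p, v)      # (3.25)
assert np.allclose(Q.T @ A_p @ Q, A)  # (3.27)
```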
3.4 Scalar-valued functions of linear transformations. Determinant, trace, scalar-product and norm.

Let Φ(A; e_1, e_2, e_3) be a scalar-valued function that depends on a linear transformation A and a (not necessarily orthonormal) basis {e_1, e_2, e_3}. For example, Φ(A; e_1, e_2, e_3) = Ae_1 · e_1. Certain such functions are in fact independent of the basis, so that for every two (not necessarily orthonormal) bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3} one has Φ(A; e_1, e_2, e_3) = Φ(A; e′_1, e′_2, e′_3), and in such a case we can simply write Φ(A). One example of such a function is

Φ(A; e_1, e_2, e_3) = [(Ae_1 × Ae_2) · Ae_3] / [(e_1 × e_2) · e_3]    (3.28)

(though it is certainly not obvious that this function is independent of the choice of basis).

Equivalently, let A be a linear transformation and let [A] be the components of A in some basis {e_1, e_2, e_3}. Let φ([A]) be some real-valued function defined on the set of all square matrices. If [A′] are the components of A in some other basis {e′_1, e′_2, e′_3}, then in general φ([A]) ≠ φ([A′]). This means that the function φ depends on the linear transformation A and the underlying basis. Certain functions φ have the property that φ([A]) = φ([A′]) for all pairs of bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}, and therefore such a function depends on the linear transformation only and not the basis. For such a function we may write φ(A).
linear transformation only and not the basis. For such a function we may write φ(A).
We first consider two important examples here. Since the components [A] and [A

] of
a linear tranformation A in two bases are related by [A

] = [Q][A][Q]
T
, if we take the
determinant of this matrix equation we get
det[A

] = det([Q][A][Q]
T
) = det[Q] det[A] det[Q]
T
= (det[Q])
2
det[A] = det[A], (3.29)
since the determinant of an orthogonal matrix is ±1. Therefore without ambiguity we may
define the determinant of a linear transformation A to be the (basis independent) scalar-
valued function given by
det A = det[A]. (3.30)
48 CHAPTER 3. COMPONENTS OF TENSORS. CARTESIAN TENSORS
We will see in an example at the end of this Chapter that the particular function Φ defined
in (3.28) is in fact the determinant det A.
Similarly, we may define the trace of a linear transformation A to be the (basis inde-
pendent) scalar-valued function given by
trace A = tr[A]. (3.31)
In terms of its components in a basis one has
det A = e
ijk
A
1i
A
2j
A
3k
= e
ijk
A
i1
A
j2
A
k3
, traceA = A
ii
; (3.32)
see (1.46). It is useful to note the following properties of the determinant of a linear trans-
formation:
det(AB) = det(A) det(B), det(αA) = α
3
det(A), det(A
T
) = det (A). (3.33)
As mentioned previously, a linear transformation A is said to be non-singular if the only
vector x for which Ax = o is the null vector x = o. Equivalently, one can show that A is
non-singular if and only if
det A = 0. (3.34)
If A is non-singular, then
det(A
−1
) = 1/ det(A). (3.35)
Suppose that λ and v = o are an eigenvalue and eigenvector of given a linear transfor-
mation A. Then by definition, Av = λv, or equivalently (A − λI)v = o. Since v = o it
follows that A−λI must be singular and so
det(A−λI) = 0. (3.36)
The eigenvalues are the roots λ of this cubic equation. The eigenvalues and eigenvectors of a
linear transformation do not depend on any choice of basis. Thus the eigenvalues of a linear
transformation are also scalar-valued functions of A whose values depends only on A and
not the basis: λ
i
= λ
i
(A). If S is symmetric, its matrix of components in a principal basis
are
[S] =

¸
¸
¸
¸
¸
λ
1
0 0
0 λ
2
0
0 0 λ
3

. (3.37)
3.4. DETERMINANT, TRACE, SCALAR-PRODUCT AND NORM 49
The particular scalar-valued functions

I_1(A) = tr A,
I_2(A) = (1/2)[(tr A)^2 − tr(A^2)],    (3.38)
I_3(A) = det A,

will appear frequently in what follows. It can be readily verified that for any linear transformation A and all orthogonal linear transformations Q,

I_1(Q^T AQ) = I_1(A),   I_2(Q^T AQ) = I_2(A),   I_3(Q^T AQ) = I_3(A),    (3.39)

and for this reason the three functions (3.38) are said to be invariant under orthogonal transformations. Observe from (3.37) that for a symmetric linear transformation S with eigenvalues λ_1, λ_2, λ_3,

I_1(S) = λ_1 + λ_2 + λ_3,
I_2(S) = λ_1 λ_2 + λ_2 λ_3 + λ_3 λ_1,    (3.40)
I_3(S) = λ_1 λ_2 λ_3.

The mapping (3.40) between invariants and eigenvalues is one-to-one. In addition one can show that for any linear transformation A and any real number α,

det(A − αI) = −α^3 + I_1(A)α^2 − I_2(A)α + I_3(A).

Note in particular that the cubic equation for the eigenvalues of a linear transformation can be written as

λ^3 − I_1(A)λ^2 + I_2(A)λ − I_3(A) = 0.

Finally, one can show that

A^3 − I_1(A)A^2 + I_2(A)A − I_3(A)I = O,    (3.41)

which is known as the Cayley-Hamilton theorem.
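The invariants (3.38), the characteristic polynomial and the Cayley-Hamilton identity (3.41) can all be checked in a few lines; the NumPy sketch below is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# det(A - alpha I) = -alpha^3 + I1 alpha^2 - I2 alpha + I3
alpha = 0.37
assert np.isclose(np.linalg.det(A - alpha * np.eye(3)),
                  -alpha**3 + I1 * alpha**2 - I2 * alpha + I3)

# Cayley-Hamilton (3.41): A^3 - I1 A^2 + I2 A - I3 I = O
CH = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * np.eye(3)
assert np.allclose(CH, np.zeros((3, 3)))
```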
One can similarly define scalar-valued functions of two linear transformations A and B. The particular function φ(A, B) defined by

φ(A, B) = tr(AB^T)    (3.42)

will play an important role in what follows. Note that in terms of components in a basis,

φ(A, B) = tr(AB^T) = A_ij B_ij.    (3.43)

This particular scalar-valued function is often known as the scalar product of the two linear transformations A and B and is written as A · B:

A · B = tr(AB^T).    (3.44)

It is natural then to define the magnitude (or norm) of a linear transformation A, denoted by |A|, as

|A| = √(A · A) = √(tr(AA^T)).    (3.45)

Note that in terms of components in a basis,

|A|^2 = A_ij A_ij.    (3.46)

Observe the useful property that if |A| → 0, then each component

A_ij → 0.    (3.47)

This will be used later when we linearize the theory of large deformations.
3.5 Cartesian Tensors

Consider two orthonormal bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. A quantity whose components v_i and v′_i in these two bases are related by

v′_i = Q_ij v_j    (3.48)

is called a 1st-order Cartesian tensor or a 1-tensor. It follows from our preceding discussion that a vector is a 1-tensor.

A quantity whose components A_ij and A′_ij in two bases are related by

A′_ij = Q_ip Q_jq A_pq    (3.49)

is called a 2nd-order Cartesian tensor or a 2-tensor. It follows from our preceding discussion that a linear transformation is a 2-tensor.

The concept of an n-th-order tensor can be introduced similarly: let T be a physical entity which, in a given basis {e_1, e_2, e_3}, is defined completely by a set of 3^n ordered numbers T_{i1 i2 ... in}. The numbers T_{i1 i2 ... in} are called the components of T in the basis {e_1, e_2, e_3}. If, for example, T is a scalar, vector or linear transformation, it is represented by 3^0, 3^1 and 3^2 components respectively in the given basis. Let {e′_1, e′_2, e′_3} be a second basis related to the first one by the orthogonal matrix [Q], and let T′_{i1 i2 ... in} be the components of the entity T in the second basis. Then, if for every pair of such bases, these two sets of components are related by

T′_{i1 i2 ... in} = Q_{i1 j1} Q_{i2 j2} ... Q_{in jn} T_{j1 j2 ... jn},    (3.50)

the entity T is called an n-th-order Cartesian tensor, or more simply an n-tensor.

Note that the components of a tensor in an arbitrary basis can be calculated if its components in any one basis are known. Two tensors of the same order are added by adding corresponding components.

Recall that the outer-product of two vectors a and b is the 2-tensor C = a ⊗ b whose components are given by C_ij = a_i b_j. This can be generalized to higher-order tensors. Given an n-tensor A and an m-tensor B, their outer-product is the (m + n)-tensor C = A ⊗ B whose components are given by

C_{i1 i2 ... in j1 j2 ... jm} = A_{i1 i2 ... in} B_{j1 j2 ... jm}.    (3.51)

Let A be a 2-tensor with components A_ij in some basis. Then “contracting” A over its subscripts leads to the scalar A_ii. This can be generalized to higher-order tensors. Let A be an n-tensor with components A_{i1 i2 ... in} in some basis. Then “contracting” A over two of its subscripts, say the i_j-th and i_k-th subscripts, leads to the (n − 2)-tensor whose components in this basis are A_{i1 i2 ... i_{j−1} p i_{j+1} ... i_{k−1} p i_{k+1} ... in}. Contracting over two subscripts involves setting those two subscripts equal, and therefore summing over them.

Let a, b and T be entities whose components in a basis are denoted by a_i, b_i and T_ij. Suppose that the components of T in some basis are related to the components of a and b in that same basis by a_i = T_ij b_j. If a and b are 1-tensors, then one can readily show that T is necessarily a 2-tensor. This is called the quotient rule since it has the appearance of saying that the quotient of two 1-tensors is a 2-tensor. This rule generalizes naturally to tensors of more general order. Suppose that A, B and T are entities whose components in a basis are related by

A_{i1 i2 ... in} = T_{k1 k2 ... kℓ} B_{j1 j2 ... jm},    (3.52)

where some of the subscripts may be repeated. If it is known that A and B are tensors, then T is necessarily a tensor as well.

In general, the components of a tensor T in two different bases are different: T′_{i1 i2 ... in} ≠ T_{i1 i2 ... in}. However, there are certain special tensors whose components in one basis are the same as those in any other basis; an example of this is the identity 2-tensor I. Such a tensor is said to be isotropic. In general, a tensor T is said to be an isotropic tensor if its components have the same values in all bases, i.e. if

T′_{i1 i2 ... in} = T_{i1 i2 ... in}    (3.53)

in all bases {e_1, e_2, e_3} and {e′_1, e′_2, e′_3}. Equivalently, for an isotropic tensor,

T_{i1 i2 ... in} = Q_{i1 j1} Q_{i2 j2} ... Q_{in jn} T_{j1 j2 ... jn} for all orthogonal matrices [Q].    (3.54)

One can show that (a) the only isotropic 1-tensor is the null vector o; (b) the most general isotropic 2-tensor is a scalar multiple of the identity linear transformation, αI; (c) the most general isotropic 3-tensor is the null 3-tensor; and (d) the most general isotropic 4-tensor C has components (in any basis)

C_ijkl = α δ_ij δ_kl + β δ_ik δ_jl + γ δ_il δ_jk,    (3.55)

where α, β, γ are arbitrary scalars.
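The isotropy of (3.55) can be tested directly against the transformation rule (3.50). The NumPy sketch below (ours) builds C_ijkl from arbitrary α, β, γ, transforms it with a random orthogonal matrix via np.einsum, and confirms that the components are unchanged.

```python
import numpy as np

d = np.eye(3)
alpha, beta, gamma = 1.3, -0.7, 0.4
# C_ijkl = alpha d_ij d_kl + beta d_ik d_jl + gamma d_il d_jk, eq. (3.55)
C = (alpha * np.einsum('ij,kl->ijkl', d, d)
     + beta * np.einsum('ik,jl->ijkl', d, d)
     + gamma * np.einsum('il,jk->ijkl', d, d))

# A random orthogonal [Q] from a QR factorization
rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Transformation rule (3.50): C'_ijkl = Q_ip Q_jq Q_kr Q_ls C_pqrs
C_p = np.einsum('ip,jq,kr,ls,pqrs->ijkl', Q, Q, Q, Q, C)
assert np.allclose(C_p, C)    # the components are the same in every basis
```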
3.6 Worked Examples.

In some of the examples below, we are asked to establish certain results for vectors and linear transformations. As noted previously, whenever it is more convenient we may pick and fix a basis, and then work using components in that basis. If necessary, we can revert back to the vectors and linear transformations at the end. We shall do this frequently in what follows and will not bother to explain this each time.

It is also worth pointing out that in some of the examples below, calculations involving vectors and/or linear transformations are carried out without reference to their components. One might have expected such examples to have been presented in Chapter 2. They are contained in the present chapter because they all involve either the determinant or trace of a linear transformation, and we chose to define these quantities in terms of components (even though they are basis independent).
Example 3.1: Suppose that A is a symmetric linear transformation. Show that its matrix of components [A] in any basis is a symmetric matrix.

Solution: According to (3.13), the components of A in the basis {e_1, e_2, e_3} are defined by

A_ji = e_j · Ae_i.    (i)

The property (2.19) of the transpose shows that e_j · Ae_i = A^T e_j · e_i, which, on using the fact that A is symmetric, further simplifies to e_j · Ae_i = Ae_j · e_i; and finally, since the order of the vectors in a scalar product does not matter, we have e_j · Ae_i = e_i · Ae_j. Thus

A_ji = e_i · Ae_j.    (ii)

By (3.13), the right-most term here is the A_ij component of A, and so (ii) yields

A_ji = A_ij.    (iii)

Thus [A] = [A]^T and so the matrix [A] is symmetric.

Remark: Conversely, if it is known that the matrix of components [A] of a linear transformation in some basis is symmetric, then the linear transformation A is also symmetric.
Example 2.5: Choose any convenient basis and calculate the components of the projection linear transformation Π and the reflection linear transformation R in that basis.

Solution: Let e_3 be a unit vector normal to the plane P and let e_1 and e_2 be any two unit vectors in P such that {e_1, e_2, e_3} forms an orthonormal basis. From an example in the previous chapter we know that the projection transformation Π and the reflection transformation R in the plane P can be written as Π = I − e_3 ⊗ e_3 and R = I − 2(e_3 ⊗ e_3) respectively. Since the components of e_3 in the chosen basis are δ_3i, we find that

Π_ij = δ_ij − (e_3)_i (e_3)_j = δ_ij − δ_3i δ_3j,   R_ij = δ_ij − 2δ_3i δ_3j.
Example 3.2: Consider the scalar-valued function

f(A, B) = trace(AB^T)    (i)

and show that, for all linear transformations A, B, C, and for all scalars α, this function f has the following properties:

i) f(A, B) = f(B, A),

ii) f(αA, B) = αf(A, B),

iii) f(A + C, B) = f(A, B) + f(C, B), and

iv) f(A, A) > 0 provided A ≠ 0.

Solution: Let A_ij and B_ij be the components of A and B in an arbitrary basis. In terms of these components, (AB^T)_ij = A_ik (B^T)_kj = A_ik B_jk and so

f(A, B) = A_ik B_ik.    (ii)

It is now trivial to verify that all of the above requirements hold.

Remark: It follows from this that the function f has all of the usual requirements of a scalar product. Therefore we may define the scalar-product of two linear transformations A and B, denoted by A · B, as

A · B = trace(AB^T) = A_ij B_ij.    (iii)

Note that, based on this scalar-product, we can define the magnitude of a linear transformation to be

|A| = √(A · A) = √(A_ij A_ij).    (iv)
Example 3.3: For any two vectors u and v, show that their cross-product u × v is orthogonal to both u and v.

Solution: We are to show, for example, that u · (u × v) = 0. In terms of their components we can write

u · (u × v) = u_i (u × v)_i = u_i (e_ijk u_j v_k) = e_ijk u_i u_j v_k.    (i)

Since e_ijk = −e_jik and u_i u_j = u_j u_i, it follows that e_ijk is skew-symmetric in the subscripts ij and u_i u_j is symmetric in the subscripts ij. Thus it follows from Example 1.3 that e_ijk u_i u_j = 0 and so u · (u × v) = 0. The orthogonality of v and u × v can be established similarly.
Example 3.4: Suppose that $a, b, c$ are any three linearly independent vectors and that $F$ is an arbitrary non-singular linear transformation. Show that
$$(Fa \times Fb) \cdot Fc = \det F \, (a \times b) \cdot c. \tag{i}$$

Solution: First consider the left-hand side of (i). On using (3.10) and (3.11), we can express this as
$$(Fa \times Fb) \cdot Fc = (Fa \times Fb)_i (Fc)_i = e_{ijk} (Fa)_j (Fb)_k (Fc)_i, \tag{ii}$$
and consequently
$$(Fa \times Fb) \cdot Fc = e_{ijk} (F_{jp} a_p)(F_{kq} b_q)(F_{ir} c_r) = e_{ijk} F_{ir} F_{jp} F_{kq} a_p b_q c_r. \tag{iii}$$
Turning next to the right-hand side of (i), we note that
$$\det F \, (a \times b) \cdot c = \det[F] (a \times b)_i c_i = \det[F] e_{ijk} a_j b_k c_i = \det[F] e_{rpq} a_p b_q c_r. \tag{iv}$$
Recalling the identity $e_{rpq} \det[F] = e_{ijk} F_{ir} F_{jp} F_{kq}$ in (1.48) for the determinant of a matrix, and substituting this into (iv), gives
$$\det F \, (a \times b) \cdot c = e_{ijk} F_{ir} F_{jp} F_{kq} a_p b_q c_r. \tag{v}$$
Since the right-hand sides of (iii) and (v) are identical, it follows that the left-hand sides must also be equal, thus establishing the desired result.
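As a quick sanity check of (i), one can compare both sides numerically; this snippet is an added illustration (assuming numpy), not part of the original notes:

# Illustrative check: (Fa x Fb) . Fc = det(F) (a x b) . c for random inputs.
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))
a, b, c = rng.standard_normal((3, 3))

lhs = np.dot(np.cross(F @ a, F @ b), F @ c)
rhs = np.linalg.det(F) * np.dot(np.cross(a, b), c)
assert np.isclose(lhs, rhs)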
Example 3.5: Suppose that $a$, $b$ and $c$ are three non-coplanar vectors in $\mathbb{E}^3$. Let $V_0$ be the volume of the tetrahedron defined by these three vectors. Next, suppose that $F$ is a non-singular 2-tensor and let $V$ denote the volume of the tetrahedron defined by the vectors $Fa$, $Fb$ and $Fc$. Note that the second tetrahedron is the image of the first tetrahedron under the transformation $F$. Derive a formula for $V$ in terms of $V_0$ and $F$.

Figure 3.2: Tetrahedron of volume $V_0$ defined by three non-coplanar vectors $a$, $b$ and $c$; and its image under the linear transformation $F$.

Solution: Recall from an example in the previous chapter that the volume $V_0$ of the tetrahedron defined by any three non-coplanar vectors $a, b, c$ is
$$V_0 = \frac{1}{6} (a \times b) \cdot c.$$
The volume $V$ of the tetrahedron defined by the three vectors $Fa, Fb, Fc$ is likewise
$$V = \frac{1}{6} (Fa \times Fb) \cdot Fc.$$
It follows from the result of the previous example that
$$V / V_0 = \det F,$$
which describes how volumes are mapped by the transformation $F$.
Example 3.6: Suppose that $a$ and $b$ are two non-colinear vectors in $\mathbb{E}^3$. Let $\alpha_0$ be the area of the parallelogram defined by these two vectors and let $n_0$ be a unit vector that is normal to the plane of this parallelogram. Next, suppose that $F$ is a non-singular 2-tensor and let $\alpha$ and $n$ denote the area and unit normal to the parallelogram defined by the vectors $Fa$ and $Fb$. Derive formulas for $\alpha$ and $n$ in terms of $\alpha_0$, $n_0$ and $F$.

Solution: By the properties of the vector-product we know that
$$\alpha_0 = |a \times b|, \qquad n_0 = \frac{a \times b}{|a \times b|};$$
and similarly that
$$\alpha = |Fa \times Fb|, \qquad n = \frac{Fa \times Fb}{|Fa \times Fb|}.$$

Figure 3.3: Parallelogram of area $\alpha_0$ with unit normal $n_0$ defined by two non-colinear vectors $a$ and $b$; and its image under the linear transformation $F$.

Therefore
$$\alpha_0 n_0 = a \times b, \qquad \alpha n = Fa \times Fb. \tag{i}$$
But
$$(Fa \times Fb)_s = e_{sij} (Fa)_i (Fb)_j = e_{sij} F_{ip} a_p F_{jq} b_q. \tag{ii}$$
Also recall the identity $e_{pqr} \det[F] = e_{ijk} F_{ip} F_{jq} F_{kr}$ introduced in (1.48). Multiplying both sides of this identity by $F^{-1}_{rs}$ leads to
$$e_{pqr} \det[F] F^{-1}_{rs} = e_{ijk} F_{ip} F_{jq} F_{kr} F^{-1}_{rs} = e_{ijk} F_{ip} F_{jq} \delta_{ks} = e_{ijs} F_{ip} F_{jq} = e_{sij} F_{ip} F_{jq}. \tag{iii}$$
Substituting (iii) into (ii) gives
$$(Fa \times Fb)_s = \det[F] e_{pqr} F^{-1}_{rs} a_p b_q = \det[F] e_{rpq} a_p b_q F^{-1}_{rs} = \det F \, (a \times b)_r F^{-T}_{sr} = \det F \left( F^{-T} (a \times b) \right)_s,$$
and so, using (i),
$$\alpha n = \alpha_0 \det F \, (F^{-T} n_0).$$
This describes how (vectorial) areas are mapped by the transformation $F$. Taking the norm of this vector equation gives
$$\frac{\alpha}{\alpha_0} = |\det F| \, |F^{-T} n_0|;$$
and substituting this result into the preceding equation gives
$$n = \frac{F^{-T} n_0}{|F^{-T} n_0|}.$$
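A hedged numerical illustration of this area-mapping (Nanson) formula follows; it is not part of the notes and assumes numpy, with det F > 0 so that the signs agree:

# Illustrative check: alpha*n = alpha0 * det(F) * F^{-T} n0.
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:        # keep det F > 0 so n comes out with the right sign
    F[:, 0] *= -1.0
a, b = rng.standard_normal((2, 3))

area0_n0 = np.cross(a, b)                            # alpha0 * n0
lhs = np.cross(F @ a, F @ b)                         # alpha * n
rhs = np.linalg.det(F) * np.linalg.inv(F).T @ area0_n0
assert np.allclose(lhs, rhs)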
Example 3.5: Let $\{e_1, e_2, e_3\}$ and $\{e'_1, e'_2, e'_3\}$ be two bases related by nine scalars $Q_{ij}$ through $e'_i = Q_{ij} e_j$. Let $Q$ be the linear transformation whose components in the basis $\{e_1, e_2, e_3\}$ are $Q_{ij}$. Show that
$$e'_i = Q^T e_i;$$
thus $Q^T$ is the transformation that carries the first basis into the second.

Solution: Since $Q_{ij}$ are the components of the linear transformation $Q$ in the basis $\{e_1, e_2, e_3\}$, it follows from the definition of components that
$$Q e_j = Q_{ij} e_i.$$
Since $[Q]$ is an orthogonal matrix, one readily sees that $Q$ is an orthogonal transformation. Operating on both sides of the preceding equation by $Q^T$ and using the orthogonality of $Q$ leads to
$$e_j = Q_{ij} Q^T e_i.$$
Multiplying both sides of this by $Q_{kj}$ and noting by the orthogonality of $Q$ that $Q_{kj} Q_{ij} = \delta_{ki}$, we are now led to
$$Q_{kj} e_j = Q^T e_k,$$
or equivalently
$$Q^T e_i = Q_{ij} e_j.$$
This, together with the given fact that $e'_i = Q_{ij} e_j$, yields the desired result.
Example 3.6: Determine the relationship between the components $v_i$ and $v'_i$ of a vector $v$ in two bases.

Solution: The components $v_i$ of $v$ in the basis $\{e_1, e_2, e_3\}$ are defined by
$$v_i = v \cdot e_i,$$
and its components $v'_i$ in the second basis $\{e'_1, e'_2, e'_3\}$ are defined by
$$v'_i = v \cdot e'_i.$$
It follows from this and (NNN) that
$$v'_i = v \cdot e'_i = v \cdot (Q_{ij} e_j) = Q_{ij} \, v \cdot e_j = Q_{ij} v_j.$$
Thus, the components of the vector $v$ in the two bases are related by
$$v'_i = Q_{ij} v_j.$$
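For concreteness, here is a small illustrative sketch (not from the notes; it assumes numpy) of this transformation rule, using the rotation-about-$e_3$ matrix $[Q]$ that appears in Example 3.8 below:

# Illustrative check: the components of a fixed vector v in a basis rotated
# by an angle theta about e3 obey v'_i = Q_ij v_j.
import numpy as np

theta = 0.3
Q = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])   # Q_ij = e'_i . e_j

v = np.array([1.0, 2.0, 3.0])   # components v_i in the unprimed basis
v_primed = Q @ v                # components v'_i in the primed basis
assert np.isclose(np.linalg.norm(v_primed), np.linalg.norm(v))  # length is invariant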
Example 3.7: Determine the relationship between the components $A_{ij}$ and $A'_{ij}$ of a linear transformation $A$ in two bases.

Solution: The components $A_{ij}$ of the linear transformation $A$ in the basis $\{e_1, e_2, e_3\}$ are defined by
$$A_{ij} = e_i \cdot (A e_j), \tag{i}$$
and its components $A'_{ij}$ in a second basis $\{e'_1, e'_2, e'_3\}$ are defined by
$$A'_{ij} = e'_i \cdot (A e'_j). \tag{ii}$$
By first making use of (NNN), and then (i), we can write (ii) as
$$A'_{ij} = e'_i \cdot (A e'_j) = Q_{ip} e_p \cdot (A Q_{jq} e_q) = Q_{ip} Q_{jq} \, e_p \cdot (A e_q) = Q_{ip} Q_{jq} A_{pq}. \tag{iii}$$
Thus, the components of the linear transformation $A$ in the two bases are related by
$$A'_{ij} = Q_{ip} Q_{jq} A_{pq}. \tag{iv}$$
Example 3.8: Suppose that the basis $\{e'_1, e'_2, e'_3\}$ is obtained by rotating the basis $\{e_1, e_2, e_3\}$ through an angle $\theta$ about the unit vector $e_3$; see Figure 3.4. Write out the transformation rule for 2-tensors explicitly in this case.

Figure 3.4: A basis $\{e'_1, e'_2, e'_3\}$ obtained by rotating the basis $\{e_1, e_2, e_3\}$ through an angle $\theta$ about the unit vector $e_3$.

Solution: In view of the given relationship between the two bases it follows that
$$e'_1 = \cos\theta \, e_1 + \sin\theta \, e_2, \qquad e'_2 = -\sin\theta \, e_1 + \cos\theta \, e_2, \qquad e'_3 = e_3.$$
The matrix $[Q]$ which relates the two bases is defined by $Q_{ij} = e'_i \cdot e_j$, and so it follows that
$$[Q] = \begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Substituting this $[Q]$ into $[A'] = [Q][A][Q]^T$ and multiplying out the matrices leads to the 9 equations
$$A'_{11} = \frac{A_{11}+A_{22}}{2} + \frac{A_{11}-A_{22}}{2}\cos 2\theta + \frac{A_{12}+A_{21}}{2}\sin 2\theta,$$
$$A'_{12} = \frac{A_{12}-A_{21}}{2} + \frac{A_{12}+A_{21}}{2}\cos 2\theta - \frac{A_{11}-A_{22}}{2}\sin 2\theta,$$
$$A'_{21} = -\frac{A_{12}-A_{21}}{2} + \frac{A_{12}+A_{21}}{2}\cos 2\theta - \frac{A_{11}-A_{22}}{2}\sin 2\theta,$$
$$A'_{22} = \frac{A_{11}+A_{22}}{2} - \frac{A_{11}-A_{22}}{2}\cos 2\theta - \frac{A_{12}+A_{21}}{2}\sin 2\theta,$$
$$A'_{13} = A_{13}\cos\theta + A_{23}\sin\theta, \qquad A'_{31} = A_{31}\cos\theta + A_{32}\sin\theta,$$
$$A'_{23} = A_{23}\cos\theta - A_{13}\sin\theta, \qquad A'_{32} = A_{32}\cos\theta - A_{31}\sin\theta,$$
$$A'_{33} = A_{33}.$$
In the special case when $[A]$ is symmetric, and in addition $A_{13} = A_{23} = 0$, these nine equations simplify to
$$A'_{11} = \frac{A_{11}+A_{22}}{2} + \frac{A_{11}-A_{22}}{2}\cos 2\theta + A_{12}\sin 2\theta,$$
$$A'_{22} = \frac{A_{11}+A_{22}}{2} - \frac{A_{11}-A_{22}}{2}\cos 2\theta - A_{12}\sin 2\theta,$$
$$A'_{12} = A_{12}\cos 2\theta - \frac{A_{11}-A_{22}}{2}\sin 2\theta,$$
together with $A'_{13} = A'_{23} = 0$ and $A'_{33} = A_{33}$. These are the well-known equations underlying the Mohr's circle for transforming 2-tensors in two dimensions.
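The closed-form $2\theta$ expressions can be compared against a direct evaluation of $[A'] = [Q][A][Q]^T$; the following snippet is an added illustration (assuming numpy), not part of the notes:

# Illustrative check: the 2*theta formulas for a symmetric [A] with
# A13 = A23 = 0 agree with the direct product [Q][A][Q]^T.
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
A = np.array([[2.0, 0.5, 0.0], [0.5, -1.0, 0.0], [0.0, 0.0, 3.0]])

Ap = Q @ A @ Q.T
m, d = 0.5 * (A[0, 0] + A[1, 1]), 0.5 * (A[0, 0] - A[1, 1])
assert np.isclose(Ap[0, 0], m + d * np.cos(2 * theta) + A[0, 1] * np.sin(2 * theta))
assert np.isclose(Ap[0, 1], A[0, 1] * np.cos(2 * theta) - d * np.sin(2 * theta))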
Example 3.9:

a. Let $a$, $b$ and $T$ be entities whose components in some arbitrary basis are $a_i$, $b_i$ and $T_{ijk}$. The components of $T$ in any basis are defined in terms of the components of $a$ and $b$ in that basis by
$$T_{ijk} = a_i b_j b_k. \tag{i}$$
If $a$ and $b$ are vectors, show that $T$ is a 3-tensor.

b. Suppose that $A$ and $B$ are 2-tensors and that their components in some basis are related by
$$A_{ij} = C_{ijkl} B_{kl}. \tag{ii}$$
Show that the $C_{ijkl}$'s are the components of a 4-tensor.
Solution:

a. Let $a_i, a'_i$ and $b_i, b'_i$ be the components of $a$ and $b$ in two arbitrary bases. We are told that the components of the entity $T$ in these two bases are defined by
$$T_{ijk} = a_i b_j b_k, \qquad T'_{ijk} = a'_i b'_j b'_k. \tag{iii}$$
Since $a$ and $b$ are known to be vectors, their components transform according to the 1-tensor transformation rule
$$a'_i = Q_{ij} a_j, \qquad b'_i = Q_{ij} b_j. \tag{iv}$$
Combining equations (iii) and (iv) gives
$$T'_{ijk} = a'_i b'_j b'_k = Q_{ip} a_p \, Q_{jq} b_q \, Q_{kr} b_r = Q_{ip} Q_{jq} Q_{kr} a_p b_q b_r = Q_{ip} Q_{jq} Q_{kr} T_{pqr}. \tag{v}$$
Thus the components of $T$ in the two bases transform according to $T'_{ijk} = Q_{ip} Q_{jq} Q_{kr} T_{pqr}$, and therefore $T$ is a 3-tensor.
b. Let $A_{ij}, B_{ij}, C_{ijkl}$ and $A'_{ij}, B'_{ij}, C'_{ijkl}$ be the components of $A$, $B$, $C$ in two arbitrary bases:
$$A_{ij} = C_{ijkl} B_{kl}, \qquad A'_{ij} = C'_{ijkl} B'_{kl}. \tag{vi}$$
We are told that $A$ and $B$ are 2-tensors, whence
$$A'_{ij} = Q_{ip} Q_{jq} A_{pq}, \qquad B'_{ij} = Q_{ip} Q_{jq} B_{pq}, \tag{vii}$$
and we must show that $C_{ijkl}$ is a 4-tensor, i.e. that $C'_{ijkl} = Q_{ip} Q_{jq} Q_{kr} Q_{ls} C_{pqrs}$. Substituting (vii) into (vi)$_2$ gives
$$Q_{ip} Q_{jq} A_{pq} = C'_{ijkl} Q_{kp} Q_{lq} B_{pq}. \tag{viii}$$
Multiplying both sides by $Q_{im} Q_{jn}$ and using the orthogonality of $[Q]$, i.e. the fact that $Q_{ip} Q_{im} = \delta_{pm}$, leads to
$$\delta_{pm} \delta_{qn} A_{pq} = C'_{ijkl} Q_{im} Q_{jn} Q_{kp} Q_{lq} B_{pq}, \tag{ix}$$
which by the substitution rule tells us that
$$A_{mn} = C'_{ijkl} Q_{im} Q_{jn} Q_{kp} Q_{lq} B_{pq}, \tag{x}$$
or on using (vi)$_1$ in this that
$$C_{mnpq} B_{pq} = C'_{ijkl} Q_{im} Q_{jn} Q_{kp} Q_{lq} B_{pq}. \tag{xi}$$
Since this holds for all matrices $[B]$ we must have
$$C_{mnpq} = C'_{ijkl} Q_{im} Q_{jn} Q_{kp} Q_{lq}. \tag{xii}$$
Finally, multiplying both sides by $Q_{am} Q_{bn} Q_{cp} Q_{dq}$, using the orthogonality of $[Q]$ and the substitution rule yields the desired result
$$Q_{am} Q_{bn} Q_{cp} Q_{dq} C_{mnpq} = C'_{abcd}. \tag{xiii}$$
Example 3.10: Verify that the alternator $e_{ijk}$ has the property that
$$e_{ijk} = Q_{ip} Q_{jq} Q_{kr} e_{pqr} \quad \text{for all proper orthogonal matrices } [Q], \tag{i}$$
but that, more generally,
$$e_{ijk} \neq Q_{ip} Q_{jq} Q_{kr} e_{pqr} \quad \text{for all orthogonal matrices } [Q]. \tag{ii}$$
Note from this that the alternator is not an isotropic 3-tensor.
Example 3.11: If $C_{ijkl}$ is an isotropic 4-tensor, show that necessarily $C_{iikl} = \alpha \delta_{kl}$ for some arbitrary scalar $\alpha$.

Solution: Since $C_{ijkl}$ is an isotropic 4-tensor, by definition,
$$C_{ijkl} = Q_{ip} Q_{jq} Q_{kr} Q_{ls} C_{pqrs}$$
for all orthogonal matrices $[Q]$. On setting $i = j$ in this; then using the orthogonality of $[Q]$; and finally using the substitution rule, we are led to
$$C_{iikl} = Q_{ip} Q_{iq} Q_{kr} Q_{ls} C_{pqrs} = \delta_{pq} Q_{kr} Q_{ls} C_{pqrs} = Q_{kr} Q_{ls} C_{pprs}.$$
Thus $C_{iikl}$ obeys
$$C_{iikl} = Q_{kr} Q_{ls} C_{pprs} \quad \text{for all orthogonal matrices } [Q],$$
and therefore it is an isotropic 2-tensor. The desired result now follows since the most general isotropic 2-tensor is a scalar multiple of the identity.
Example 3.12: Show that the most general isotropic vector is the null vector $o$.

Solution: In order to show this we must determine the most general vector $u$ which is such that
$$u_i = Q_{ij} u_j \quad \text{for all orthogonal matrices } [Q]. \tag{i}$$
Since (i) is to hold for all orthogonal matrices $[Q]$, it must necessarily hold for the special choice $[Q] = -[I]$. Then $Q_{ij} = -\delta_{ij}$, and so (i) reduces to
$$u_i = -\delta_{ij} u_j = -u_i; \tag{ii}$$
thus $u_i = 0$ and so $u = o$.

Conversely, $u = o$ obviously satisfies (i) for all orthogonal matrices $[Q]$. Thus $u = o$ is the most general isotropic vector.
Example 3.13: Show that the most general isotropic symmetric tensor is a scalar multiple of the identity.

Solution: We must find the most general symmetric 2-tensor $A$ whose components in every basis are the same; i.e.,
$$[A] = [Q][A][Q]^T \quad \text{for all orthogonal matrices } [Q]. \tag{i}$$
First, since $A$ is symmetric, we know that there is some basis in which $[A]$ is diagonal. Since $A$ is also isotropic, it follows that $[A]$ must therefore be diagonal in every basis. Thus $[A]$ has the form
$$[A] = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} \tag{ii}$$
in any basis. Thus (i) takes the form
$$\begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} = [Q] \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} [Q]^T \tag{iii}$$
for all orthogonal matrices $[Q]$. Thus (iii) must necessarily hold for the special choice
$$[Q] = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \tag{iv}$$
in which case (iii) reduces to
$$\begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} = \begin{pmatrix} \lambda_3 & 0 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{pmatrix}. \tag{v}$$
Therefore, $\lambda_1 = \lambda_2$. A permutation of this special choice of $[Q]$ similarly shows that $\lambda_2 = \lambda_3$. Thus $\lambda_1 = \lambda_2 = \lambda_3 = \alpha$, say. Therefore $[A]$ necessarily must have the form $[A] = \alpha[I]$.

Conversely, by direct substitution, $[A] = \alpha[I]$ is readily shown to obey (i) for any orthogonal matrix $[Q]$. This establishes the result.
Example 3.14: If $W$ is a skew-symmetric tensor, show that there is a vector $w$ such that $Wx = w \times x$ for all $x \in \mathbb{E}$.

Solution: Let $W_{ij}$ be the components of $W$ in some basis and let $w$ be the vector whose components in this basis are defined by
$$w_i = -\frac{1}{2} e_{ijk} W_{jk}. \tag{i}$$
Then, we merely have to show that $w$ has the desired property stated above.

Multiplying both sides of the preceding equation by $e_{ipq}$, then using the identity $e_{ijk} e_{ipq} = \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}$, and finally using the substitution rule, gives
$$e_{ipq} w_i = -\frac{1}{2}\left( \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp} \right) W_{jk} = -\frac{1}{2}\left( W_{pq} - W_{qp} \right).$$
Since $W$ is skew-symmetric we have $W_{ij} = -W_{ji}$ and thus conclude that
$$W_{ij} = -e_{ijk} w_k.$$
Now for any vector $x$,
$$W_{ij} x_j = -e_{ijk} w_k x_j = e_{ikj} w_k x_j = (w \times x)_i.$$
Thus the vector $w$ defined by (i) has the desired property $Wx = w \times x$.
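A short numerical sketch of this construction, added here for illustration (it assumes numpy and is not part of the notes):

# Illustrative check: the axial vector w of a skew-symmetric W, built from
# w_i = -(1/2) e_ijk W_jk (i.e. w1 = -W23, w2 = -W31, w3 = -W12),
# satisfies W x = w x x.
import numpy as np

W = np.array([[ 0.0,  3.0, -2.0],
              [-3.0,  0.0,  1.0],
              [ 2.0, -1.0,  0.0]])              # an arbitrary skew-symmetric tensor
w = np.array([-W[1, 2], -W[2, 0], -W[0, 1]])

x = np.array([0.4, -1.1, 2.5])
assert np.allclose(W @ x, np.cross(w, x))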
Example 3.15: Verify that the 4-tensor
$$C_{ijkl} = \alpha \, \delta_{ij}\delta_{kl} + \beta \, \delta_{ik}\delta_{jl} + \gamma \, \delta_{il}\delta_{jk}, \tag{i}$$
where $\alpha, \beta, \gamma$ are scalars, is isotropic. If this isotropic 4-tensor is to possess the symmetry $C_{ijkl} = C_{jikl}$, show that one must have $\beta = \gamma$.

Solution: In order to verify that $C_{ijkl}$ are the components of an isotropic 4-tensor we have to show that $C_{ijkl} = Q_{ip} Q_{jq} Q_{kr} Q_{ls} C_{pqrs}$ for all orthogonal matrices $[Q]$. The right-hand side of this can be simplified by using the given form of $C_{ijkl}$; the substitution rule; and the orthogonality of $[Q]$ as follows:
$$\begin{aligned}
Q_{ip} Q_{jq} Q_{kr} Q_{ls} C_{pqrs} &= Q_{ip} Q_{jq} Q_{kr} Q_{ls} \left( \alpha\,\delta_{pq}\delta_{rs} + \beta\,\delta_{pr}\delta_{qs} + \gamma\,\delta_{ps}\delta_{qr} \right) \\
&= \alpha\, Q_{iq} Q_{jq} Q_{ks} Q_{ls} + \beta\, Q_{ir} Q_{js} Q_{kr} Q_{ls} + \gamma\, Q_{is} Q_{jr} Q_{kr} Q_{ls} \\
&= \alpha\, (Q_{iq} Q_{jq})(Q_{ks} Q_{ls}) + \beta\, (Q_{ir} Q_{kr})(Q_{js} Q_{ls}) + \gamma\, (Q_{is} Q_{ls})(Q_{jr} Q_{kr}) \\
&= \alpha\, \delta_{ij}\delta_{kl} + \beta\, \delta_{ik}\delta_{jl} + \gamma\, \delta_{il}\delta_{jk} = C_{ijkl}.
\end{aligned} \tag{ii}$$
This establishes the desired result.

Turning to the second question, enforcing the requirement $C_{ijkl} = C_{jikl}$ on (i) leads, after some simplification, to
$$(\beta - \gamma)\left( \delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il} \right) = 0. \tag{iii}$$
Since this must hold for all values of the free indices $i, j, k, l$, it must necessarily hold for the special choice $i = 1, j = 2, k = 1, l = 2$. Therefore $(\beta - \gamma)(\delta_{11}\delta_{22} - \delta_{21}\delta_{12}) = 0$ and so
$$\beta = \gamma. \tag{iv}$$

Remark: We have shown that $\beta = \gamma$ is necessary if $C$ given in (i) is to have the symmetry $C_{ijkl} = C_{jikl}$. One can readily verify that it is sufficient as well. It is useful for later use to record here that the most general isotropic 4-tensor $C$ with the symmetry property $C_{ijkl} = C_{jikl}$ is
$$C_{ijkl} = \alpha\, \delta_{ij}\delta_{kl} + \beta\, \left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right) \tag{v}$$
where $\alpha$ and $\beta$ are scalars.

Remark: Observe that $C_{ijkl}$ given by (v) automatically has the symmetry $C_{ijkl} = C_{klij}$.
Example 3.16: If $A$ is a tensor such that
$$Ax \cdot x = 0 \quad \text{for all } x, \tag{i}$$
show that $A$ is necessarily skew-symmetric.

Solution: By definition of the transpose and the properties of the scalar product, $Ax \cdot x = x \cdot A^T x = A^T x \cdot x$. Therefore $A$ has the properties that
$$Ax \cdot x = 0, \quad \text{and} \quad A^T x \cdot x = 0 \quad \text{for all vectors } x.$$
Adding these two equations gives
$$Sx \cdot x = 0 \quad \text{where} \quad S = A + A^T.$$
Observe that $S$ is symmetric. Therefore in terms of components in a principal basis of $S$,
$$Sx \cdot x = \sigma_1 x_1^2 + \sigma_2 x_2^2 + \sigma_3 x_3^2 = 0,$$
where the $\sigma_k$'s are the eigenvalues of $S$. Since this must hold for all real numbers $x_k$, it follows that every eigenvalue must vanish: $\sigma_1 = \sigma_2 = \sigma_3 = 0$. Therefore $S = O$, whence
$$A = -A^T.$$

Remark: An important consequence of this is that if $A$ is a tensor with the property that $Ax \cdot x = 0$ for all $x$, it does not follow that $A = 0$ necessarily.
Example 2.18: For any orthogonal linear transformation $Q$, show that $\det Q = \pm 1$.

Solution: Recall that for any two linear transformations $A$ and $B$ we have $\det(AB) = \det A \det B$ and $\det B = \det B^T$. Since $QQ^T = I$ it now follows that $1 = \det I = \det(QQ^T) = \det Q \det Q^T = (\det Q)^2$. The desired result now follows.
Example 2.20: If $Q$ is a proper orthogonal linear transformation on the vector space $\mathbb{E}^3$, show that there exists a vector $v$ such that $Qv = v$. This vector is known as the axis of $Q$.

Solution: To show that there is a vector $v$ such that $Qv = v$, it is sufficient to show that $Q$ has an eigenvalue $+1$, i.e. that $(Q - I)v = o$ for some non-null $v$, or equivalently that $\det(Q - I) = 0$.

Since $QQ^T = I$ we have $Q(Q^T - I) = I - Q$. On taking the determinant of both sides and using the fact that $\det(AB) = \det A \det B$, we get
$$\det Q \, \det(Q^T - I) = \det(I - Q). \tag{i}$$
Recall that $\det Q = +1$ for a proper orthogonal linear transformation, and that $\det A = \det A^T$ and $\det(-A) = (-1)^3 \det A$ for a 3-dimensional vector space. Therefore this leads to
$$\det(Q - I) = -\det(Q - I), \tag{ii}$$
and the desired result now follows.
Example 2.22: For any linear transformation $A$, show that $\det(A - \mu I) = \det(Q^T A Q - \mu I)$ for all orthogonal linear transformations $Q$ and all scalars $\mu$.

Solution: This follows readily since
$$\det(Q^T A Q - \mu I) = \det(Q^T A Q - \mu Q^T Q) = \det\left( Q^T (A - \mu I) Q \right) = \det Q^T \det(A - \mu I) \det Q = \det(A - \mu I).$$

Remark: Observe from this result that the eigenvalues of $Q^T A Q$ coincide with those of $A$, so that in particular the same is true of their product and their sum: $\det(Q^T A Q) = \det A$ and $\text{tr}(Q^T A Q) = \text{tr}\, A$.
Example 2.26: Define a scalar-valued function $\phi(A; e_1, e_2, e_3)$ for all linear transformations $A$ and all (not necessarily orthonormal) bases $\{e_1, e_2, e_3\}$ by
$$\phi(A; e_1, e_2, e_3) = \frac{Ae_1 \cdot (e_2 \times e_3) + e_1 \cdot (Ae_2 \times e_3) + e_1 \cdot (e_2 \times Ae_3)}{e_1 \cdot (e_2 \times e_3)}.$$
Show that $\phi(A; e_1, e_2, e_3)$ is in fact independent of the choice of basis, i.e., show that
$$\phi(A; e_1, e_2, e_3) = \phi(A; e'_1, e'_2, e'_3)$$
for any two bases $\{e_1, e_2, e_3\}$ and $\{e'_1, e'_2, e'_3\}$. Thus, we can simply write $\phi(A)$ instead of $\phi(A; e_1, e_2, e_3)$; $\phi(A)$ is called a scalar invariant of $A$.

Pick any orthonormal basis and express $\phi(A)$ in terms of the components of $A$ in that basis; and hence show that $\phi(A) = \text{trace}\, A$.
Example 3.7: Let $F(t)$ be a one-parameter family of non-singular 2-tensors that depends smoothly on the parameter $t$. Calculate
$$\frac{d}{dt} \det F(t).$$

Solution: From the result of Example 3.NNN we have
$$\left( F(t)a \times F(t)b \right) \cdot F(t)c = \det F(t) \, (a \times b) \cdot c.$$
Differentiating this with respect to $t$ gives
$$\left( \dot F(t)a \times F(t)b \right) \cdot F(t)c + \left( F(t)a \times \dot F(t)b \right) \cdot F(t)c + \left( F(t)a \times F(t)b \right) \cdot \dot F(t)c = \frac{d}{dt}\det F(t) \, (a \times b) \cdot c,$$
where we have set $\dot F(t) = dF/dt$. We can write this as
$$\left( \dot F F^{-1} Fa \times Fb \right) \cdot Fc + \left( Fa \times \dot F F^{-1} Fb \right) \cdot Fc + \left( Fa \times Fb \right) \cdot \dot F F^{-1} Fc = \left( \frac{d}{dt}\det F \right) (a \times b) \cdot c.$$
In view of the result of Example 3.NNN, this can be written as
$$\text{trace}\left( \dot F F^{-1} \right) \left( Fa \times Fb \right) \cdot Fc = \left( \frac{d}{dt}\det F \right) (a \times b) \cdot c,$$
and now using the result of Example 3.NNN once more we get
$$\text{trace}\left( \dot F F^{-1} \right) \det F \, (a \times b) \cdot c = \left( \frac{d}{dt}\det F \right) (a \times b) \cdot c,$$
or
$$\frac{d}{dt}\det F = \text{trace}\left( \dot F F^{-1} \right) \det F.$$
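This formula for the derivative of the determinant can be verified by finite differences; the check below is an added illustration (assuming numpy), not part of the notes:

# Illustrative check: d/dt det F(t) = trace(Fdot F^{-1}) det F(t).
import numpy as np

rng = np.random.default_rng(3)
F0 = rng.standard_normal((3, 3))
Fdot = rng.standard_normal((3, 3))
F = lambda t: F0 + t * Fdot          # a smooth one-parameter family

t, h = 0.2, 1e-6
numeric = (np.linalg.det(F(t + h)) - np.linalg.det(F(t - h))) / (2 * h)
exact = np.trace(Fdot @ np.linalg.inv(F(t))) * np.linalg.det(F(t))
assert np.isclose(numeric, exact, rtol=1e-4)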
Example 2.30: For any integer $N > 0$, show that the polynomial
$$P_N(A) = c_0 I + c_1 A + c_2 A^2 + \ldots + c_k A^k + \ldots + c_N A^N$$
can be written as a quadratic polynomial of $A$.

Solution: This follows readily from the Cayley-Hamilton Theorem (3.41) as follows: suppose that $A$ is non-singular so that $I_3(A) = \det A \neq 0$. Then (3.41) shows that $A^3$ can be written as a linear combination of $I$, $A$ and $A^2$. Next, multiplying this by $A$ tells us that $A^4$ can be written as a linear combination of $A$, $A^2$ and $A^3$, and therefore, on using the result of the previous step, as a linear combination of $I$, $A$ and $A^2$. This process can be continued an arbitrary number of times to see that for any integer $k$, $A^k$ can be expressed as a linear combination of $I$, $A$ and $A^2$. The result thus follows.
Example 2.31: For any linear transformation $A$ show that
$$\det(A - \alpha I) = -\alpha^3 + I_1(A)\alpha^2 - I_2(A)\alpha + I_3(A)$$
for all real numbers $\alpha$, where $I_1(A)$, $I_2(A)$ and $I_3(A)$ are the principal scalar invariants of $A$:
$$I_1(A) = \text{trace}\, A, \qquad I_2(A) = \tfrac{1}{2}\left[ (\text{trace}\, A)^2 - \text{trace}(A^2) \right], \qquad I_3(A) = \det A.$$
Example 2.32: Calculate the principal scalar invariants $I_1$, $I_2$ and $I_3$ of the linear transformation $a \otimes b$.
Chapter 4

Characterizing Symmetry: Groups of Linear Transformations.

Linear transformations are mappings of vector spaces into vector spaces. When an object is mapped using a linear transformation, certain transformations preserve its symmetry while others don't. One way in which to characterize the symmetry of an object is to consider the collection of all linear transformations that preserve its symmetry. The set of such transformations depends on the object: for example, the set of linear transformations that preserve the symmetry of a cube is different from the set of linear transformations that preserve the symmetry of a tetrahedron. In this chapter we touch briefly on the question of characterizing symmetry by linear transformations.

Intuitively a "uniform all-around expansion", i.e. a linear transformation of the form $\alpha I$ that rescales the object by changing its size but not its shape, does not affect symmetry. We are interested in other linear transformations that also preserve symmetry, principally rotations and reflections. In this chapter we shall consider those linear transformations that map the object back into itself. The collection of such transformations has certain important and useful properties.

Figure 4.1: Mapping a square into itself.
4.1 An example in two-dimensions.

We begin with an illustrative example. Consider a square, ABCD, which lies in a plane normal to the unit vector $k$, whose center is at the origin $o$ and whose sides are parallel to the orthonormal vectors $\{i, j\}$. Consider mappings that carry the square into itself. The vertex A can be placed in one of 4 positions; see Figure 4.1. Once the location of A has been determined, the vertex B can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). And once the locations of A and B have been fixed, there is no further flexibility and the locations of the remaining vertices are fixed. Thus there are a total of $4 \times 2 = 8$ symmetry preserving transformations of the square, 4 of which are rotations and 4 of which are reflections.

Consider the 4 rotations. In order to determine them, we (a) identify the axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case there is just 1 axis to consider, viz. $k$, and we note that $0°$, $90°$, $180°$ and $270°$ rotations about this axis map the square back into itself. Thus the following 4 distinct rotations are symmetry transformations: $I$, $R^{\pi/2}_k$, $R^{\pi}_k$, $R^{3\pi/2}_k$, where we are using the notation introduced previously, i.e. $R^{\phi}_n$ is a right-handed rotation through an angle $\phi$ about the axis $n$.

Let $G_{\text{square}}$ denote the set consisting of these 4 symmetry preserving rotations:
$$G_{\text{square}} = \left\{ I, \ R^{\pi/2}_k, \ R^{\pi}_k, \ R^{3\pi/2}_k \right\}.$$
This collection of linear transformations has two important properties: first, observe that the successive application of any two symmetries yields a third symmetry, i.e. if $P_1$ and $P_2$ are in $G_{\text{square}}$, then so is their product $P_1 P_2$. For example, $R^{\pi}_k R^{\pi/2}_k = R^{3\pi/2}_k$, $R^{\pi/2}_k R^{3\pi/2}_k = I$, $R^{3\pi/2}_k R^{3\pi/2}_k = R^{\pi}_k$, etc. Second, observe that if $P$ is any member of $G_{\text{square}}$, then so is its inverse $P^{-1}$. For example $(R^{\pi}_k)^{-1} = R^{\pi}_k$, $(R^{3\pi/2}_k)^{-1} = R^{\pi/2}_k$, etc. As we shall see in Section 4.4, these two properties endow the set $G_{\text{square}}$ with a certain special structure.

Next consider the rotation $R^{\pi/2}_k$ and observe that every element of the set $G_{\text{square}}$ can be represented in the form $(R^{\pi/2}_k)^n$ for the integer choices $n = 0, 1, 2, 3$. Therefore we can say that the set $G_{\text{square}}$ is "generated" by the element $R^{\pi/2}_k$.

Finally observe that
$$G'_{\text{square}} = \left\{ I, \ R^{\pi}_k \right\}$$
is a subset of $G_{\text{square}}$ and that it too has the properties that if $P_1, P_2 \in G'_{\text{square}}$ then their product $P_1 P_2$ is also in $G'_{\text{square}}$; and if $P \in G'_{\text{square}}$ so is its inverse $P^{-1}$.

We shall generalize all of this in Section 4.4.
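The two group properties and the statement about the generator can be made concrete with a small numerical sketch; the code below is an added illustration (not from the notes, assuming numpy) that represents each rotation by its matrix:

# Illustrative check: the four rotations of G_square are closed under
# products, contain their inverses, and are all powers of R^{pi/2}_k.
import numpy as np

def rot_k(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

G = [rot_k(n * np.pi / 2) for n in range(4)]   # I, R^{pi/2}_k, R^pi_k, R^{3pi/2}_k

def member(M, group):
    return any(np.allclose(M, P) for P in group)

assert all(member(P1 @ P2, G) for P1 in G for P2 in G)                    # closure
assert all(member(np.linalg.inv(P), G) for P in G)                        # inverses
assert all(member(np.linalg.matrix_power(G[1], n), G) for n in range(4))  # generated by R^{pi/2}_k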
4.2 An example in three-dimensions.

Figure 4.2: Mapping a cube into itself.
Before considering some general theory, it is useful to consider the three-dimensional version of the previous problem. Consider a cube whose center is at the origin $o$ and whose edges are parallel to the orthonormal vectors $\{i, j, k\}$, and consider mappings that carry the cube into itself. Consider a vertex A, and its three adjacent vertices B, C, D. The vertex A can be placed in one of 8 positions. Once the location of A has been determined, the vertex B can be placed in one of 3 positions. And once the locations of A and B have been fixed, the vertex C can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). Once the vertices A, B and C have been placed, the locations of the remaining vertices are fixed. Thus there are a total of $8 \times 3 \times 2 = 48$ symmetry preserving transformations of the cube, 24 of which are rotations and 24 of which are reflections.

First, consider the 24 rotations. In order to determine these rotations we again (a) identify all axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case we see that, in addition to the identity transformation $I$ itself, we have the following rotational transformations that preserve symmetry:

1. There are 3 axes that join the center of one face of the cube to the center of the opposite face of the cube, which we can take to be $i, j, k$ (which in materials science are called the $\{100\}$ directions); and $90°$, $180°$ and $270°$ rotations about each of these axes map the cube back into the cube. Thus the following $3 \times 3 = 9$ distinct rotations are symmetry transformations:
$$R^{\pi/2}_i, R^{\pi}_i, R^{3\pi/2}_i, \quad R^{\pi/2}_j, R^{\pi}_j, R^{3\pi/2}_j, \quad R^{\pi/2}_k, R^{\pi}_k, R^{3\pi/2}_k.$$

2. There are 4 axes that join one vertex of the cube to the diagonally opposite vertex of the cube, which we can take to be $i+j+k$, $i-j+k$, $i+j-k$, $i-j-k$ (which in materials science are called the $\{111\}$ directions); and $120°$ and $240°$ rotations about each of these axes map the cube back into the cube. Thus the following $4 \times 2 = 8$ distinct rotations are symmetry transformations:
$$R^{2\pi/3}_{i+j+k}, R^{4\pi/3}_{i+j+k}, \quad R^{2\pi/3}_{i-j+k}, R^{4\pi/3}_{i-j+k}, \quad R^{2\pi/3}_{i+j-k}, R^{4\pi/3}_{i+j-k}, \quad R^{2\pi/3}_{i-j-k}, R^{4\pi/3}_{i-j-k}.$$
3. Finally, there are 6 axes that join the center of one edge of the cube to the center of the diagonally opposite edge of the cube, which we can take to be $i+j$, $i-j$, $i+k$, $i-k$, $j+k$, $j-k$ (which in materials science are called the $\{110\}$ directions); and $180°$ rotations about each of these axes map the cube back into the cube. Thus the following $6 \times 1 = 6$ distinct rotations are symmetry transformations:
$$R^{\pi}_{i+j}, R^{\pi}_{i-j}, R^{\pi}_{i+k}, R^{\pi}_{i-k}, R^{\pi}_{j+k}, R^{\pi}_{j-k}.$$

Let $G_{\text{cube}}$ denote the collection of these 24 symmetry preserving rotations:
$$G_{\text{cube}} = \left\{ I, \ R^{\pi/2}_i, R^{\pi}_i, R^{3\pi/2}_i, \ R^{\pi/2}_j, R^{\pi}_j, R^{3\pi/2}_j, \ R^{\pi/2}_k, R^{\pi}_k, R^{3\pi/2}_k, \ R^{2\pi/3}_{i+j+k}, R^{4\pi/3}_{i+j+k}, \ R^{2\pi/3}_{i-j+k}, R^{4\pi/3}_{i-j+k}, \ R^{2\pi/3}_{i+j-k}, R^{4\pi/3}_{i+j-k}, \ R^{2\pi/3}_{i-j-k}, R^{4\pi/3}_{i-j-k}, \ R^{\pi}_{i+j}, R^{\pi}_{i-j}, R^{\pi}_{i+k}, R^{\pi}_{i-k}, R^{\pi}_{j+k}, R^{\pi}_{j-k} \right\}. \tag{4.1}$$

If one considers rotations and reflections, then there are 48 elements in this set, where the 24 reflections are obtained by multiplying each rotation by $-I$. (It is important to remark that this just happens to be true for the cube, but is not generally true. In general, if $R$ is a rotational symmetry of an object then $-R$ is, of course, a reflection, but it need not describe a reflectional symmetry of the object; e.g. see the example of the tetrahedron discussed later.)
The collection of linear transformations $G_{\text{cube}}$ has two important properties that one can verify: (i) if $P_1$ and $P_2 \in G_{\text{cube}}$, then their product $P_1 P_2$ is also in $G_{\text{cube}}$, and (ii) if $P \in G_{\text{cube}}$, then so is its inverse $P^{-1}$.

Next, one can verify that every element of the set $G_{\text{cube}}$ can be represented in the form $(R^{\pi/2}_i)^p (R^{\pi/2}_j)^q (R^{\pi/2}_k)^r$ for integer choices of $p, q, r$. For example the rotation $R^{2\pi/3}_{i+j+k}$ (about a $\{111\}$ axis) and the rotation $R^{\pi}_{i+k}$ (about a $\{110\}$ axis) can be represented as
$$R^{2\pi/3}_{i+j+k} = \left( R^{\pi/2}_k \right)^{-1} \left( R^{\pi/2}_j \right)^{-1}, \qquad R^{\pi}_{i+k} = \left( R^{\pi/2}_j \right)^{-1} \left( R^{\pi/2}_k \right)^{2}.$$
(One way in which to verify this is to use the representation of a rotation tensor determined in Example 2.18.) Therefore we can say that the set $G_{\text{cube}}$ is "generated" by the three elements $R^{\pi/2}_i$, $R^{\pi/2}_j$ and $R^{\pi/2}_k$.
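As an added illustration (not part of the notes, assuming numpy), one can generate these 24 rotations on a computer as the signed permutation matrices with determinant +1 and confirm the closure and inverse properties stated above:

# Illustrative check: build the 24 rotations of the cube and verify that the
# set is closed under products and under inverses.
import itertools
import numpy as np

rotations = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1, -1], repeat=3):
        M = np.zeros((3, 3))
        for row, (col, sgn) in enumerate(zip(perm, signs)):
            M[row, col] = sgn
        if np.isclose(np.linalg.det(M), 1.0):
            rotations.append(M)

assert len(rotations) == 24
member = lambda M: any(np.allclose(M, P) for P in rotations)
assert all(member(P1 @ P2) for P1 in rotations for P2 in rotations)  # closure
assert all(member(P.T) for P in rotations)                           # inverse = transpose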
4.3 Lattices.

A geometric structure of particular interest in solid mechanics is a lattice, and we now make a few observations on the symmetry of lattices. The simplest lattice, a Bravais lattice $L\{o; \ell_1, \ell_2, \ell_3\}$, is an infinite set of periodically arranged points in space generated by the translation of a single point $o$ through three linearly independent lattice vectors $\{\ell_1, \ell_2, \ell_3\}$:
$$L\{o; \ell_1, \ell_2, \ell_3\} = \left\{ x \ \Big| \ x = o + \sum_{i=1}^{3} n_i \ell_i, \quad n_i \in \mathbb{Z} \right\} \tag{4.2}$$
where $\mathbb{Z}$ is the set of integers. Figure 4.3 shows a two-dimensional square lattice and one possible set of lattice vectors $\ell_1, \ell_2$. (It is clear from the figure that different sets of lattice vectors can correspond to the same lattice.)

Figure 4.3: A two-dimensional square lattice with lattice vectors $\ell_1, \ell_2$.

It can be shown that a linear transformation $P$ maps a lattice back into itself if and only if
$$P \ell_i = \sum_{j=1}^{3} M_{ij} \ell_j \tag{4.3}$$
for some $3 \times 3$ matrix $[M]$ whose elements $M_{ij}$ are integers and where $\det[M] = \pm 1$. Given a lattice, let $G_{\text{lattice}}$ be the set of all linear transformations $P$ that map the lattice back into itself. One can show that if $P_1, P_2 \in G_{\text{lattice}}$ then their product $P_1 P_2$ is also in $G_{\text{lattice}}$; and if $P \in G_{\text{lattice}}$ so is its inverse $P^{-1}$. The set $G_{\text{lattice}}$ is called the symmetry group of the lattice; and the set of rotations in $G_{\text{lattice}}$ is known as the point group of the lattice. For example the point group of a simple cubic lattice¹ is the set $G_{\text{cube}}$ of 24 rotations given in (4.1).

¹ There are seven different types of symmetry that arise in Bravais lattices, viz. triclinic, monoclinic, orthorhombic, tetragonal, cubic, trigonal and hexagonal. Because, for example, a cubic lattice can be body-centered or face-centered, and so on, the number of different types of lattices is greater than seven.
4.4 Groups of Linear Transformations.

A collection $G$ of non-singular linear transformations is said to be a group of linear transformations if it possesses the following two properties:

(i) if $P_1 \in G$ and $P_2 \in G$ then $P_1 P_2 \in G$,
(ii) if $P \in G$ then $P^{-1} \in G$.

Note from this that the identity transformation $I$ is necessarily a member of every group $G$. Clearly the three sets $G_{\text{square}}$, $G_{\text{cube}}$ and $G_{\text{lattice}}$ encountered in the previous sections are groups. One can show that each of the following sets of linear transformations forms a group:

- the set of all orthogonal linear transformations;
- the set of all proper orthogonal linear transformations;
- the set of all unimodular linear transformations² (i.e. linear transformations with determinant equal to ±1); and
- the set of all proper unimodular linear transformations (i.e. linear transformations with determinant equal to +1).

The generators of a group $G$ are those elements $P_1, P_2, \ldots, P_n$ which, when they and their inverses are multiplied among themselves in various combinations, yield all the elements of the group. Generators of the groups $G_{\text{square}}$ and $G_{\text{cube}}$ were given previously.

In general, a collection of linear transformations $G'$ is said to be a subgroup of a group $G$ if

(i) $G' \subset G$ and
(ii) $G'$ is itself a group.

² While the determinant of an orthogonal tensor is ±1, the converse is not necessarily true. There are unimodular tensors, e.g. $P = I + \alpha\, i \otimes j$, that are not orthogonal. Thus the unimodular group is not equivalent to the orthogonal group.
One can readily show that the group of proper orthogonal linear transformations is a subgroup of the group of orthogonal linear transformations, which in turn is a subgroup of the group of unimodular linear transformations. In our first example, $G'_{\text{square}}$ is a subgroup of $G_{\text{square}}$.

It should be mentioned that the general theory of groups deals with collections of elements (together with certain "rules" including "multiplication") where the elements need not be linear transformations. For example the set of all integers $\mathbb{Z}$, with "multiplication" defined as the addition of numbers, the identity taken to be zero, and the inverse of $x$ taken to be $-x$, is a group. Similarly the set of all matrices of the form
$$\begin{pmatrix} \cosh x & \sinh x \\ \sinh x & \cosh x \end{pmatrix}, \qquad -\infty < x < \infty,$$
with "multiplication" defined as matrix multiplication, the identity being the identity matrix, and the inverse being
$$\begin{pmatrix} \cosh(-x) & \sinh(-x) \\ \sinh(-x) & \cosh(-x) \end{pmatrix},$$
can be shown to be a group. However, our discussion in these notes is limited to groups of linear transformations.
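As an added illustration (not from the notes, assuming numpy), the group properties of this family follow from the hyperbolic addition formulas, since $M(x)M(y) = M(x+y)$:

# Illustrative check of the cosh/sinh matrix group.
import numpy as np

def M(x):
    return np.array([[np.cosh(x), np.sinh(x)],
                     [np.sinh(x), np.cosh(x)]])

x, y = 0.7, -1.3
assert np.allclose(M(x) @ M(y), M(x + y))     # closure
assert np.allclose(M(0.0), np.eye(2))         # identity
assert np.allclose(M(x) @ M(-x), np.eye(2))   # inverse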
4.5 Symmetry of a scalar-valued function of symmetric positive-definite tensors.

When we discuss the constitutive behavior of a material in Volume 2, we will encounter a scalar-valued function $\psi(C)$ defined for all symmetric positive-definite tensors $C$. (This represents the energy in the material and characterizes its mechanical response.) The symmetry of the material will be characterized by a set $G$ of non-singular tensors $P$ which has the property that, for each $P \in G$,
$$\psi(C) = \psi(P^T C P) \quad \text{for all symmetric positive-definite } C. \tag{4.4}$$

It can be readily shown that this set of tensors $G$ is a group. To see this, first let $P_1, P_2 \in G$ so that
$$\psi(C) = \psi(P_1^T C P_1), \qquad \psi(C) = \psi(P_2^T C P_2), \tag{4.5}$$
for all symmetric positive-definite $C$. Then $\psi((P_1 P_2)^T C P_1 P_2) = \psi(P_2^T (P_1^T C P_1) P_2) = \psi(P_1^T C P_1) = \psi(C)$, where we have used (4.5)$_2$ and (4.5)$_1$ in the penultimate and ultimate steps respectively. Thus if $P_1$ and $P_2$ are in $G$, then so is $P_1 P_2$. Next, suppose that $P \in G$. Since $P$ is non-singular, the equation $S = P^T C P$ provides a one-to-one relation between symmetric positive-definite tensors $C$ and $S$. Thus, since (4.4) holds for all symmetric positive-definite $C$, it also holds for all symmetric positive-definite linear transformations $S = P^T C P$. Substituting this into (4.4) gives $\psi(S) = \psi(P^{-T} S P^{-1})$ for all symmetric positive-definite $S$; and so $P^{-1}$ is also in $G$. Thus the set $G$ of nonsingular tensors obeying (4.4) is a group; we shall refer to it as the symmetry group of $\psi$.

Observe from (4.4) that the symmetry group of $\psi$ contains the elements $I$ and $-I$, and as a consequence, if $P \in G$ then $-P \in G$ also.

To examine an explicit example, consider the function
$$\psi(C) = \hat\psi\left( \det C \right). \tag{4.6}$$
It is seen trivially that for this $\hat\psi$, equation (4.4) holds if and only if $\det P = \pm 1$. Thus the symmetry group of this $\psi$ consists of all unimodular tensors (i.e. tensors with determinant equal to ±1).
As a second example consider the function
$$\psi(C) = \hat\psi\left( Cn \cdot n \right) \tag{4.7}$$
where $n$ is a given fixed unit vector. Let $Q_n$ be a rotation about the axis $n$ through an arbitrary angle. Then since $n$ is the axis of $Q_n$ we know that $Q_n n = n$. Therefore
$$\psi(Q_n^T C Q_n) = \hat\psi\left( Q_n^T C Q_n n \cdot n \right) = \hat\psi\left( C Q_n n \cdot Q_n n \right) = \hat\psi\left( Cn \cdot n \right) = \psi(C). \tag{4.8}$$
The symmetry group of the function (4.7) therefore contains the set of all rotations about $n$. (Are there any other tensors in $G$?)
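A brief numerical sketch of (4.8), added here for illustration (not part of the notes, assuming numpy), taking $n = e_3$:

# Illustrative check: psi(C) = C n . n is unchanged when C is replaced by
# Q^T C Q for a rotation Q about the axis n.
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
C = B @ B.T + 3.0 * np.eye(3)        # a symmetric positive-definite tensor

n = np.array([0.0, 0.0, 1.0])
phi = 1.1
c, s = np.cos(phi), np.sin(phi)
Q = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about n

psi = lambda C: (C @ n) @ n
assert np.isclose(psi(Q.T @ C @ Q), psi(C))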
The following result will be useful in Volume 2. Let $H$ be some fixed nonsingular linear transformation, and consider two functions $\psi_1(C)$ and $\psi_2(C)$, each defined for all symmetric positive-definite tensors $C$. Suppose that $\psi_1$ and $\psi_2$ are related by
$$\psi_2(C) = \psi_1(H^T C H) \quad \text{for all symmetric positive-definite tensors } C. \tag{4.9}$$
If $G_1$ and $G_2$ are the symmetry groups of $\psi_1$ and $\psi_2$ respectively, then it can be shown that
$$G_2 = H G_1 H^{-1} \tag{4.10}$$
in the sense that a tensor $P \in G_1$ if and only if the tensor $H P H^{-1} \in G_2$. As a special case of this, if $H$ is a spherical tensor, i.e. if $H = \alpha I$, then $G_1 = G_2$.

Next, note that any nonsingular tensor $P$ can be written as the product of a spherical tensor $\alpha I$ and a unimodular tensor $T$ as $P = (\alpha I) T$, provided that we take $\alpha = (|\det P|)^{1/3}$, since then $\det T = \pm 1$. This, together with the special case of the result noted in the preceding paragraph, provides a hint of why we might want to limit attention to unimodular tensors rather than consider all nonsingular tensors in our discussion of symmetry.

This motivates the following slight modification to our original notion of symmetry of a function $\psi(C)$. We characterize the symmetry of $\psi$ by the set $G$ of unimodular tensors $P$ which have the property that, for each $P \in G$,
$$\psi(C) = \psi(P^T C P) \quad \text{for all symmetric positive-definite } C. \tag{4.11}$$
It can be readily shown that this set of tensors $G$ is also a group, necessarily a subgroup of the unimodular group.
A function $\psi$ is said to be isotropic if its symmetry group $G$ contains all orthogonal tensors. Thus for an isotropic function $\psi$,
$$\psi(C) = \psi(P^T C P) \tag{4.12}$$
for all symmetric positive-definite $C$ and all orthogonal $P$. From a theorem in algebra it follows that an isotropic function $\psi$ depends on $C$ only through its principal scalar invariants defined previously in (3.38), i.e. that there exists a function $\hat\psi$ such that
$$\psi(C) = \hat\psi\left( I_1(C), I_2(C), I_3(C) \right) \tag{4.13}$$
where
$$I_1(C) = \text{trace}\, C, \qquad I_2(C) = \tfrac{1}{2}\left[ (\text{trace}\, C)^2 - \text{trace}(C^2) \right], \qquad I_3(C) = \det C. \tag{4.14}$$
As a second example consider "cubic symmetry", where the symmetry group $G$ coincides with the set of 24 rotations $G_{\text{cube}}$ given in (4.1) plus the corresponding reflections obtained by multiplying these rotations by $-I$. As noted previously, this group is generated by $R^{\pi/2}_i$, $R^{\pi/2}_j$, $R^{\pi/2}_k$ and $-I$, and contains 24 rotations and 24 reflections. Then, according to a theorem in algebra (see pg 312 of Truesdell and Noll),
$$\psi(C) = \hat\psi\left( i_1(C), i_2(C), i_3(C), i_4(C), i_5(C), i_6(C), i_7(C), i_8(C), i_9(C) \right) \tag{4.15}$$
where
$$\begin{aligned}
i_1(C) &= C_{11} + C_{22} + C_{33}, \\
i_2(C) &= C_{22}C_{33} + C_{33}C_{11} + C_{11}C_{22}, \\
i_3(C) &= C_{11}C_{22}C_{33}, \\
i_4(C) &= C_{23}^2 + C_{31}^2 + C_{12}^2, \\
i_5(C) &= C_{31}^2 C_{12}^2 + C_{12}^2 C_{23}^2 + C_{23}^2 C_{31}^2, \\
i_6(C) &= C_{23}C_{31}C_{12}, \\
i_7(C) &= C_{22}C_{12}^2 + C_{33}C_{31}^2 + C_{33}C_{23}^2 + C_{11}C_{12}^2 + C_{11}C_{31}^2 + C_{22}C_{23}^2, \\
i_8(C) &= C_{11}C_{31}^2 C_{12}^2 + C_{22}C_{12}^2 C_{23}^2 + C_{33}C_{23}^2 C_{31}^2, \\
i_9(C) &= C_{23}^2 C_{22}C_{33} + C_{31}^2 C_{33}C_{11} + C_{12}^2 C_{11}C_{22}.
\end{aligned} \tag{4.16}$$
If $G$ contains $I$ and all rotations $R^{\phi}_n$, $0 < \phi < 2\pi$, through all angles $\phi$ about a fixed axis $n$, the corresponding symmetry is called transverse isotropy.

If $G$ includes the three elements $-R^{\pi}_i$, $-R^{\pi}_j$, $-R^{\pi}_k$, which represent reflections in the planes normal to $i$, $j$ and $k$, the symmetry is called orthotropy.
4.6 Worked Examples.

Example 4.1: Characterize the set $H_{\text{square}}$ of linear transformations that map a square back into a square, including both rotations and reflections.

Figure 4.4: Mapping a square into itself.

Solution: We return to the problem described in Section 4.1 and now consider the set of rotations and reflections $H_{\text{square}}$ that map the square back into itself. The set of rotations that do this were determined earlier and they are
$$G_{\text{square}} = \left\{ I, R^{\pi/2}_k, R^{\pi}_k, R^{3\pi/2}_k \right\}.$$
As the 4 reflectional symmetries we can pick
$$\begin{aligned}
H &= \text{reflection in the horizontal axis } i, \\
V &= \text{reflection in the vertical axis } j, \\
D &= \text{reflection in the diagonal with positive slope } i + j, \\
D' &= \text{reflection in the diagonal with negative slope } -i + j,
\end{aligned}$$
and so
$$H_{\text{square}} = \left\{ I, R^{\pi/2}_k, R^{\pi}_k, R^{3\pi/2}_k, H, V, D, D' \right\}. \tag{i}$$
One can verify that $H_{\text{square}}$ is a group since it possesses the property that if $P_1$ and $P_2$ are two transformations in it, then so is their product $P_1 P_2$; e.g. $D' = R^{3\pi/2}_k H$, $D = H R^{3\pi/2}_k$, etc. And if $P$ is any member of it, then so is its inverse; e.g. $H^{-1} = H$, etc.
Example 4.2: Find the generators of $H_{\text{square}}$ and all subgroups of $H_{\text{square}}$.

Solution: All elements of $H_{\text{square}}$ can be represented in the form $(R^{\pi/2}_k)^i H^j$ for integer choices of $i = 0, 1, 2, 3$ and $j = 0, 1$:
$$R^{\pi}_k = (R^{\pi/2}_k)^2, \qquad R^{3\pi/2}_k = (R^{\pi/2}_k)^3, \qquad I = (R^{\pi/2}_k)^4,$$
$$D' = (R^{\pi/2}_k)^3 H, \qquad V = (R^{\pi/2}_k)^2 H, \qquad D = R^{\pi/2}_k H.$$
Therefore the group $H_{\text{square}}$ is generated by the two elements $H$ and $R^{\pi/2}_k$.

One can verify that the following 8 collections of linear transformations are subgroups of $H_{\text{square}}$:
$$\left\{ I, R^{\pi/2}_k, R^{\pi}_k, R^{3\pi/2}_k \right\}, \quad \left\{ I, D, D', R^{\pi}_k \right\}, \quad \left\{ I, H, V, R^{\pi}_k \right\}, \quad \left\{ I, R^{\pi}_k \right\},$$
$$\left\{ I, D \right\}, \quad \left\{ I, D' \right\}, \quad \left\{ I, H \right\}, \quad \left\{ I, V \right\}.$$
Geometrically, each of these subgroups leaves some aspect of the square invariant. The first leaves the face invariant, the second leaves a diagonal invariant, the third leaves the axis invariant, the fourth leaves an axis and a diagonal invariant, etc. There are no other subgroups of $H_{\text{square}}$.
Example 4.3: Characterize the rotational symmetry of a regular tetrahedron.

Figure 4.5: A regular tetrahedron ABCD, three orthonormal vectors $\{i, j, k\}$ and a unit vector $p$. The axis $k$ passes through the vertex A and the centroid of the opposite face BCD, while the unit vector $p$ passes through the center of the edge AD and the center of the opposite edge BC.

Solution:

1. There are 4 axes like $k$ in the figure that pass through a vertex of the tetrahedron and the centroid of the opposite face; and right-handed rotations of $120°$ and $240°$ about each of these axes map the tetrahedron back onto itself. Thus these $4 \times 2 = 8$ distinct rotations – of the form $R^{2\pi/3}_k$, $R^{4\pi/3}_k$, etc. – are symmetry transformations of the tetrahedron.

2. There are three axes like $p$ shown in the figure that pass through the mid-points of a pair of opposite edges; and a right-handed rotation through $180°$ about each of these axes maps the tetrahedron back onto itself. Thus these $3 \times 1 = 3$ distinct rotations – of the form $R^{\pi}_p$, etc. – are symmetry transformations of the tetrahedron.

The group $G_{\text{tetrahedron}}$ of rotational symmetries of a tetrahedron therefore consists of these 11 rotations plus the identity transformation $I$.
Example 4.4: Are all symmetry preserving linear transformations necessarily either rotations or reflections?

Solution: We began this chapter by considering the symmetry of a square, and examining the different ways in which the square could be mapped back into itself. Now consider the example of a two-dimensional $a \times a$ square lattice, i.e. the infinite set of points
$$L_{\text{square}} = \left\{ x \ | \ x = n_1 a\, i + n_2 a\, j, \quad n_1, n_2 \in \mathbb{Z} \equiv \text{Integers} \right\} \tag{i}$$
depicted in Figure 4.6, and examine the different ways in which this lattice can be mapped back into itself.

Figure 4.6: A two-dimensional $a \times a$ square lattice.

We first note that the rotational and reflectional symmetry transformations of an $a \times a$ square are also symmetry transformations for the lattice since they leave the lattice invariant. There are however other transformations, that are neither rotations nor reflections, that also leave the lattice invariant. For example, if for every integer $n$, one rigidly translates the $n$th row of the lattice by precisely the amount $na$ in the $i$ direction, one recovers the original lattice. Thus, the "shearing" of the lattice described by the linear transformation
$$P = I + i \otimes j \tag{ii}$$
is also a symmetry preserving transformation.
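The following sketch (an added illustration, not from the notes, assuming numpy) confirms that this shear maps every lattice point to a lattice point even though it is not orthogonal:

# Illustrative check: P = I + i (x) j maps the square lattice into itself.
import numpy as np

a = 0.5
P = np.array([[1.0, 1.0], [0.0, 1.0]])   # I + i (x) j in the {i, j} basis

for n1 in range(-3, 4):
    for n2 in range(-3, 4):
        x = np.array([n1 * a, n2 * a])
        y = P @ x                        # = (n1 + n2) a i + n2 a j
        m = np.round(y / a)
        assert np.allclose(y, m * a)     # image is again a lattice point

assert not np.allclose(P @ P.T, np.eye(2))   # P is not orthogonal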
Example 4.5: Show that each of the following sets of linear transformations forms a group: all orthogonal
tensors; all proper orthogonal tensors; all unimodular tensors (i.e. tensors with determinant equal to ±1);
and all proper unimodular tensors (i.e. tensors with determinant equal to +1).
Example 4.6: Show that the group of proper orthogonal tensors is a subgroup of the group of orthogonal
tensors, which in turn is a subgroup of the group of unimodular tensors.
Example 4.7: Suppose that a function $\psi(C)$ is defined for all symmetric positive-definite tensors $C$ and that its symmetry group is the set of all orthogonal tensors. Show that $\psi$ depends on $C$ only through its principal scalar invariants, i.e. show that there is a function $\hat\psi$ such that
$$\psi(C) = \hat\psi\left( I_1(C), I_2(C), I_3(C) \right)$$
where $I_i(C)$, $i = 1, 2, 3$, are the principal scalar invariants of $C$ defined previously in (3.38).

Solution: We are given that $\psi$ has the property that for all symmetric positive-definite tensors $C$ and all orthogonal tensors $Q$
$$\psi(C) = \psi(Q^T C Q). \tag{i}$$
In order to prove the desired result it is sufficient to show that, if $C_1$ and $C_2$ are two symmetric tensors whose principal invariants $I_i$ are the same,
$$I_1(C_1) = I_1(C_2), \qquad I_2(C_1) = I_2(C_2), \qquad I_3(C_1) = I_3(C_2), \tag{ii}$$
then $\psi(C_1) = \psi(C_2)$.

Recall that the mapping (3.40) between principal invariants and eigenvalues is one-to-one. It follows from this and (ii) that the eigenvalues of $C_1$ and $C_2$ are the same. Thus we can write
$$C_1 = \sum_{i=1}^{3} \lambda_i \, e^{(1)}_i \otimes e^{(1)}_i, \qquad C_2 = \sum_{i=1}^{3} \lambda_i \, e^{(2)}_i \otimes e^{(2)}_i, \tag{iii}$$
where the two sets of orthonormal vectors $\{e^{(1)}_1, e^{(1)}_2, e^{(1)}_3\}$ and $\{e^{(2)}_1, e^{(2)}_2, e^{(2)}_3\}$ are the respective principal bases of $C_1$ and $C_2$. Since each set of basis vectors is orthonormal, there is an orthogonal tensor $R$ that carries $\{e^{(1)}_1, e^{(1)}_2, e^{(1)}_3\}$ into $\{e^{(2)}_1, e^{(2)}_2, e^{(2)}_3\}$:
$$R e^{(1)}_i = e^{(2)}_i, \qquad i = 1, 2, 3. \tag{iv}$$
Thus
$$R^T \left( \sum_{i=1}^{3} \lambda_i \, e^{(2)}_i \otimes e^{(2)}_i \right) R = \sum_{i=1}^{3} \lambda_i \, R^T \left( e^{(2)}_i \otimes e^{(2)}_i \right) R = \sum_{i=1}^{3} \lambda_i \, (R^T e^{(2)}_i) \otimes (R^T e^{(2)}_i) = \sum_{i=1}^{3} \lambda_i \, e^{(1)}_i \otimes e^{(1)}_i, \tag{v}$$
and so $R^T C_2 R = C_1$. Therefore $\psi(C_1) = \psi(R^T C_2 R) = \psi(C_2)$, where in the last step we have used (i). This establishes the desired result.
Example 4.8: Consider a scalar-valued function $f(x)$ that is defined for all vectors $x$. Let $G$ be the set of all non-singular linear transformations $P$ that have the property that for each $P \in G$, one has $f(x) = f(Px)$ for all vectors $x$.

i) Show that $G$ is a group.
ii) Find the most general form of $f$ if $G$ contains the set of all orthogonal transformations.

Solution:

i) Suppose that $P_1$ and $P_2$ are in $G$, i.e. that
$$f(x) = f(P_1 x) \quad \text{for all vectors } x, \qquad \text{and} \qquad f(x) = f(P_2 x) \quad \text{for all vectors } x. \tag{i}$$
Then
$$f\left( (P_1 P_2) x \right) = f\left( P_1 (P_2 x) \right) = f(P_2 x) = f(x),$$
where in the penultimate and ultimate steps we have used (i)$_1$ and (i)$_2$ respectively.

Next, suppose that $P \in G$ so that
$$f(x) = f(Px) \quad \text{for all vectors } x.$$
Since $P$ is non-singular we can set $y = Px$ and obtain
$$f(P^{-1} y) = f(y) \quad \text{for all vectors } y.$$
It thus follows that $G$ has the two defining properties of a group.

ii) If $x_1$ and $x_2$ are two vectors that have the same length, we will show that $f(x_1) = f(x_2)$, whence $f(x)$ depends on $x$ only through its length $|x|$, i.e. there exists a function $\hat f$ such that
$$f(x) = \hat f(|x|) \quad \text{for all vectors } x.$$
If $x_1$ and $x_2$ are two vectors that have the same length, there is a rotation tensor $R$ that carries $x_2$ to $x_1$: $R x_2 = x_1$. Therefore
$$f(x_1) = f(R x_2) = f(x_2),$$
where in the last step we have used the fact that $G$ contains the set of all orthogonal transformations, i.e. that $f(x) = f(Px)$ for all vectors $x$ and all orthogonal $P$. This establishes the result claimed above.
Example 4.9: Consider a scalar-valued function $g(C, m \otimes m)$ that is defined for all symmetric positive-definite tensors $C$ and all unit vectors $m$. Let $G$ be the set of all non-singular linear transformations $P$ that have the property that for each $P \in G$, one has $g(C, n \otimes n) = g(P^T C P, P^T (n \otimes n) P)$ for all symmetric positive-definite tensors $C$ and some particular unit vector $n$. If $G$ contains the set of all orthogonal transformations, show that there exists a function $\hat g$ such that
$$g(C, n \otimes n) = \hat g\left( I_1(C), I_2(C), I_3(C), I_4(C, n), I_5(C, n) \right)$$
where $I_1(C), I_2(C), I_3(C)$ are the three fundamental scalar invariants of $C$ and
$$I_4(C, n) = Cn \cdot n, \qquad I_5(C, n) = C^2 n \cdot n.$$

Remark: Observe that with respect to an orthonormal basis $\{e_1, e_2, e_3\}$ where $e_3 = n$, one has $I_4 = C_{33}$ and $I_5 = C_{31}^2 + C_{32}^2 + C_{33}^2$.

Solution: We are told that
$$g(C, n \otimes n) = g(Q^T C Q, Q^T (n \otimes n) Q) \tag{i}$$
for all orthogonal $Q$ and all symmetric positive-definite $C$. As in Example 4.7, it is sufficient to show that if $C_1$ and $C_2$ are two symmetric positive-definite linear transformations whose "invariants" $I_i$, $i = 1, 2, 3, 4, 5$, are the same, i.e.
$$I_1(C_1) = I_1(C_2), \quad I_2(C_1) = I_2(C_2), \quad I_3(C_1) = I_3(C_2), \quad I_4(C_1, n) = I_4(C_2, n), \quad I_5(C_1, n) = I_5(C_2, n), \tag{ii}$$
then $g(C_1, n \otimes n) = g(C_2, n \otimes n)$. From (ii)$_{1,2,3}$ and the analysis in Example 4.7 it follows that there is an orthogonal tensor $R$ such that $R^T C_2 R = C_1$. It is readily seen from this that $R^T C_2^2 R = C_1^2$ as well. It now follows from this, the fact that $R$ is orthogonal, (ii)$_{4,5}$ and the definitions of $I_4$ and $I_5$ that
$$Rn \cdot Rn = n \cdot n, \qquad C_2 Rn \cdot Rn = C_2 n \cdot n, \qquad C_2^2 Rn \cdot Rn = C_2^2 n \cdot n, \tag{iii}$$
and this must hold for all symmetric positive-definite $C_2$. This implies that
$$Rn = \pm n \quad \text{and consequently} \quad R^T n = \pm n,$$
as may be seen, for example, by expressing (iii) in a principal basis of $C_2$. Consequently
$$g(C_1, n \otimes n) = g(R^T C_2 R, (R^T n) \otimes (R^T n)) = g(R^T C_2 R, R^T (n \otimes n) R) = g(C_2, n \otimes n),$$
where we have used (i) in the very last step. This establishes the desired result.
References

1. M.A. Armstrong, Groups and Symmetry, Springer-Verlag, 1988.
2. G. Birkhoff and S. MacLane, A Survey of Modern Algebra, MacMillan, 1977.
3. C. Truesdell and W. Noll, The Nonlinear Field Theories of Mechanics, in Handbuch der Physik, Volume III/3, edited by S. Flugge, Springer-Verlag, 1965.
4. A.J.M. Spencer, Theory of Invariants, in Continuum Physics, Volume I, edited by A.C. Eringen, Academic Press, 1971.
Chapter 5

Calculus of Vector and Tensor Fields

Notation:

$\alpha$ ..... scalar
$\{a\}$ ..... $3 \times 1$ column matrix
$a$ ..... vector
$a_i$ ..... $i$th component of the vector $a$ in some basis; or $i$th element of the column matrix $\{a\}$
$[A]$ ..... $3 \times 3$ square matrix
$A$ ..... second-order tensor (2-tensor)
$A_{ij}$ ..... $i, j$ component of the 2-tensor $A$ in some basis; or $i, j$ element of the square matrix $[A]$
$\mathbb{C}$ ..... fourth-order tensor (4-tensor)
$C_{ijkl}$ ..... $i, j, k, l$ component of the 4-tensor $\mathbb{C}$ in some basis
$T_{i_1 i_2 \ldots i_n}$ ..... $i_1 i_2 \ldots i_n$ component of the $n$-tensor $T$ in some basis.
5.1 Notation and definitions.

Let $R$ be a bounded region of three-dimensional space whose boundary is denoted by $\partial R$, and let $x$ denote the position vector of a generic point in $R + \partial R$. We shall consider scalar and tensor fields such as $\phi(x)$, $v(x)$, $A(x)$ and $T(x)$ defined on $R + \partial R$. The region $R + \partial R$ and these fields will always be assumed to be sufficiently regular so as to permit the calculations carried out below.

While the subject of the calculus of tensor fields can be dealt with directly, we shall take the more limited approach of working with the components of these fields. The components will always be taken with respect to a single fixed orthonormal basis $\{e_1, e_2, e_3\}$. Each component of, say, a vector field $v(x)$ or a 2-tensor field $A(x)$ is effectively a scalar-valued function on three-dimensional space, $v_i(x_1, x_2, x_3)$ and $A_{ij}(x_1, x_2, x_3)$, and we can use the well-known operations of classical calculus on such fields, such as partial differentiation with respect to $x_k$.
In order to simplify writing, we shall use the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding $x$-coordinate. Thus, for example, we will write
$$\phi_{,i} = \frac{\partial \phi}{\partial x_i}, \qquad \phi_{,ij} = \frac{\partial^2 \phi}{\partial x_i \partial x_j}, \qquad v_{i,j} = \frac{\partial v_i}{\partial x_j}, \tag{5.1}$$
and so on, where $v_i$ and $x_i$ are the $i$th components of the vectors $v$ and $x$ in the basis $\{e_1, e_2, e_3\}$.

The gradient of a scalar field $\phi(x)$ is a vector field denoted by $\text{grad}\,\phi$ (or $\nabla\phi$). Its $i$th component in the orthonormal basis is
$$(\text{grad}\,\phi)_i = \phi_{,i}, \tag{5.2}$$
so that
$$\text{grad}\,\phi = \phi_{,i}\, e_i.$$
The gradient of a vector field $v(x)$ is a 2-tensor field denoted by $\text{grad}\,v$ (or $\nabla v$). Its $ij$th component in the orthonormal basis is
$$(\text{grad}\,v)_{ij} = v_{i,j}, \tag{5.3}$$
so that
$$\text{grad}\,v = v_{i,j}\, e_i \otimes e_j.$$
The gradient of a scalar field $\phi$ in the particular direction of the unit vector $n$ is denoted by $\partial\phi/\partial n$ and defined by
$$\frac{\partial \phi}{\partial n} = \nabla\phi \cdot n. \tag{5.4}$$
The divergence of a vector field $v(x)$ is a scalar field denoted by $\text{div}\,v$ (or $\nabla \cdot v$). It is given by
$$\text{div}\,v = v_{i,i}. \tag{5.5}$$
The divergence of a 2-tensor field $A(x)$ is a vector field denoted by $\text{div}\,A$ (or $\nabla \cdot A$). Its $i$th component in the orthonormal basis is
$$(\text{div}\,A)_i = A_{ij,j}, \tag{5.6}$$
so that
$$\text{div}\,A = A_{ij,j}\, e_i.$$
The curl of a vector field $v(x)$ is a vector field denoted by $\text{curl}\,v$ (or $\nabla \times v$). Its $i$th component in the orthonormal basis is
$$(\text{curl}\,v)_i = e_{ijk} v_{k,j}, \tag{5.7}$$
so that
$$\text{curl}\,v = e_{ijk} v_{k,j}\, e_i.$$
The Laplacians of a scalar field $\phi(x)$, a vector field $v(x)$ and a 2-tensor field $A(x)$ are the scalar, vector and 2-tensor fields with components
$$\nabla^2 \phi = \phi_{,kk}, \qquad (\nabla^2 v)_i = v_{i,kk}, \qquad (\nabla^2 A)_{ij} = A_{ij,kk}. \tag{5.8}$$
5.2 Integral theorems

Let $D$ be an arbitrary regular sub-region of the region $R$. The divergence theorem allows one to relate a surface integral on $\partial D$ to a volume integral on $D$. In particular, for a scalar field $\phi(x)$,
$$\int_{\partial D} \phi\, n \, dA = \int_D \nabla\phi \, dV \qquad \text{or} \qquad \int_{\partial D} \phi\, n_k \, dA = \int_D \phi_{,k} \, dV. \tag{5.9}$$
Likewise for a vector field $v(x)$ one has
$$\int_{\partial D} v \cdot n \, dA = \int_D \nabla \cdot v \, dV \qquad \text{or} \qquad \int_{\partial D} v_k n_k \, dA = \int_D v_{k,k} \, dV, \tag{5.10}$$
as well as
$$\int_{\partial D} v \otimes n \, dA = \int_D \nabla v \, dV \qquad \text{or} \qquad \int_{\partial D} v_i n_k \, dA = \int_D v_{i,k} \, dV. \tag{5.11}$$
More generally, for an $n$-tensor field $T(x)$ the divergence theorem gives
$$\int_{\partial D} T_{i_1 i_2 \ldots i_n} n_k \, dA = \int_D \frac{\partial}{\partial x_k}\left( T_{i_1 i_2 \ldots i_n} \right) dV \tag{5.12}$$
where some of the subscripts $i_1, i_2, \ldots, i_n$ may be repeated and one of them might equal $k$.
5.3 Localization

Certain physical principles are described to us in terms of equations that hold on an arbitrary portion of a body, i.e. in terms of an integral over a subregion $D$ of $R$. It is often useful to derive an equivalent statement of such a principle in terms of equations that must hold at each point $x$ in the body. In what follows, we shall frequently have need to do this, i.e. convert a "global principle" to an equivalent "local field equation".

Consider for example the scalar field $\phi(x)$ that is defined and continuous at all $x \in R + \partial R$ and suppose that
$$\int_D \phi(x) \, dV = 0 \qquad \text{for all subregions } D \subset R. \tag{5.13}$$
We will show that this "global principle" is equivalent to the "local field equation"
$$\phi(x) = 0 \quad \text{at every point } x \in R. \tag{5.14}$$

Figure 5.1: The region $R$, a subregion $D$ and a neighborhood $B_\varepsilon(z)$ of the point $z$.

We will prove this by contradiction. Suppose that (5.14) does not hold. This implies that there is a point, say $z \in R$, at which $\phi(z) \neq 0$. Suppose that $\phi$ is positive at this point: $\phi(z) > 0$. Since we are told that $\phi$ is continuous, $\phi$ is necessarily (strictly) positive in some neighborhood of $z$ as well. Let $B_\varepsilon(z)$ be a sphere with its center at $z$ and radius $\varepsilon > 0$. We can always choose $\varepsilon$ sufficiently small so that $B_\varepsilon(z)$ is a sufficiently small neighborhood of $z$ and
$$\phi(x) > 0 \quad \text{at all } x \in B_\varepsilon(z). \tag{5.15}$$
Now pick a region $D$ which is a subset of $B_\varepsilon(z)$. Then $\phi(x) > 0$ for all $x \in D$. Integrating $\phi$ over this $D$ gives
$$\int_D \phi(x) \, dV > 0, \tag{5.16}$$
thus contradicting (5.13). An entirely analogous calculation can be carried out in the case $\phi(z) < 0$. Thus our starting assumption must be false and (5.14) must hold.
5.4 Worked Examples.

In all of the examples below the region $R$ will be a bounded regular region and its boundary $\partial R$ will be smooth. All fields are defined on this region and are as smooth as is necessary.

In some of the examples below, we are asked to establish certain results for vector and tensor fields. When it is more convenient, we will carry out our calculations by first picking and fixing a basis, and then working with the components in that basis. If necessary, we will revert back to the vector and tensor fields at the end. We shall do this frequently in what follows and will not bother to explain this strategy each time.
Example 5.1: Calculate the gradient of the scalar-valued function $\phi(x) = Ax \cdot x$ where $A$ is a constant 2-tensor.

Solution: Writing $\phi$ in terms of components,
$$\phi = A_{ij} x_i x_j.$$
Calculating the partial derivative of $\phi$ with respect to $x_k$ yields
$$\phi_{,k} = A_{ij} (x_i x_j)_{,k} = A_{ij} \left( x_{i,k} x_j + x_i x_{j,k} \right) = A_{ij} \left( \delta_{ik} x_j + x_i \delta_{jk} \right) = A_{kj} x_j + A_{ik} x_i = (A_{kj} + A_{jk}) x_j,$$
or equivalently $\nabla\phi = (A + A^T)x$.
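A finite-difference check of this result (an added illustration, assuming numpy; not part of the notes):

# Illustrative check: grad(Ax . x) = (A + A^T) x.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
phi = lambda x: (A @ x) @ x

h = 1e-6
grad_numeric = np.array([
    (phi(x + h * e) - phi(x - h * e)) / (2 * h)
    for e in np.eye(3)
])
assert np.allclose(grad_numeric, (A + A.T) @ x, atol=1e-5)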
Example 5.2: Let $v(x)$ be a vector field and let $v_i(x_1, x_2, x_3)$ be the $i$th component of $v$ in a fixed orthonormal basis $\{e_1, e_2, e_3\}$. For each $i$ and $j$ define
$$F_{ij} = v_{i,j}.$$
Show that the $F_{ij}$ are the components of a 2-tensor.

Solution: Since $v$ and $x$ are 1-tensors, their components obey the transformation rules
$$v'_i = Q_{ik} v_k, \quad v_i = Q_{ki} v'_k \qquad \text{and} \qquad x'_j = Q_{jk} x_k, \quad x_l = Q_{jl} x'_j.$$
Therefore
$$F'_{ij} = \frac{\partial v'_i}{\partial x'_j} = \frac{\partial v'_i}{\partial x_l} \frac{\partial x_l}{\partial x'_j} = \frac{\partial v'_i}{\partial x_l} Q_{jl} = \frac{\partial (Q_{ik} v_k)}{\partial x_l} Q_{jl} = Q_{ik} Q_{jl} \frac{\partial v_k}{\partial x_l} = Q_{ik} Q_{jl} F_{kl},$$
which is the transformation rule for a 2-tensor.
Example 5.3: Let φ(x), u(x) and A(x) be scalar, vector and 2-tensor fields respectively. Establish the identities
a. div (φu) = u · grad φ + φ div u,
b. grad (φu) = u ⊗ grad φ + φ grad u,
c. div (φA) = A grad φ + φ div A.
Solution:
a. In terms of components we are asked to show that (φu_i)_{,i} = u_i φ_{,i} + φ u_{i,i}. This follows immediately by expanding (φu_i)_{,i} using the chain rule.
b. In terms of components we are asked to show that (φu_i)_{,j} = u_i φ_{,j} + φ u_{i,j}. Again, this follows immediately by expanding (φu_i)_{,j} using the chain rule.
c. In terms of components we are asked to show that (φA_{ij})_{,j} = A_{ij} φ_{,j} + φ A_{ij,j}. Again, this follows immediately by expanding (φA_{ij})_{,j} using the chain rule.
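Identities of this kind are also easy to confirm symbolically. The sketch below checks identity a. with sympy; the particular fields φ and u are arbitrary illustrative choices.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

phi = sp.sin(x1)*x2 + x3**2                        # an arbitrary scalar field
u = sp.Matrix([x1*x2, sp.cos(x3), x1 + x2*x3])     # an arbitrary vector field

div = lambda w: sum(sp.diff(w[i], X[i]) for i in range(3))
grad = lambda f: sp.Matrix([sp.diff(f, xi) for xi in X])

lhs = div(phi * u)                                 # div(phi u)
rhs = (u.T * grad(phi))[0, 0] + phi * div(u)       # u . grad(phi) + phi div(u)
print(sp.simplify(lhs - rhs))                      # 0
```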
Example 5.4: If φ(x) and v(x) are scalar and vector fields respectively, show that
\[ \nabla\times(\phi v) = \phi(\nabla\times v) - v \times \nabla\phi. \tag{i} \]
Solution: Recall that the curl of a vector field u can be expressed as ∇×u = e_{ijk} u_{k,j} e_i where e_i is a fixed basis vector. Thus evaluating ∇×(φv):
\[ \nabla\times(\phi v) = e_{ijk}(\phi v_k)_{,j}\, e_i = e_{ijk}\, \phi\, v_{k,j}\, e_i + e_{ijk}\, \phi_{,j}\, v_k\, e_i = \phi\, \nabla\times v + \nabla\phi \times v, \tag{ii} \]
from which the desired result follows because a × b = −b × a.
Example 5.5: Let u(x) be a vector field and define a second vector field ξ(x) by ξ(x) = curl u(x). Show that
a. ∇ · ξ = 0;
b. (∇u − ∇uᵀ)a = ξ × a for any vector field a(x); and
c. ξ · ξ = ∇u · ∇u − ∇u · ∇uᵀ.
Solution: Recall that in terms of its components, ξ = curl u = ∇×u can be expressed as
\[ \xi_i = e_{ijk}\, u_{k,j}. \tag{i} \]
a. A direct calculation gives
\[ \nabla\cdot\xi = \xi_{i,i} = (e_{ijk}\, u_{k,j})_{,i} = e_{ijk}\, u_{k,ji} = 0, \tag{ii} \]
where in the last step we have used the fact that e_{ijk} is skew-symmetric in the subscripts i, j, and u_{k,ji} is symmetric in the subscripts i, j (since the order of partial differentiation can be switched), and therefore their product vanishes.
b. Multiplying both sides of (i) by e_{ipq} gives
\[ e_{ipq}\,\xi_i = e_{ipq} e_{ijk}\, u_{k,j} = (\delta_{pj}\delta_{qk} - \delta_{pk}\delta_{qj})\, u_{k,j} = u_{q,p} - u_{p,q}, \tag{iii} \]
where we have made use of the identity e_{ipq} e_{ijk} = δ_{pj}δ_{qk} − δ_{pk}δ_{qj} between the alternator and the Kronecker delta introduced in (1.49), as well as the substitution rule. Multiplying both sides of this by a_q and using the fact that e_{ipq} = −e_{piq} gives
\[ e_{piq}\,\xi_i\, a_q = (u_{p,q} - u_{q,p})\, a_q, \tag{iv} \]
or ξ × a = (∇u − ∇uᵀ)a.
c. Since (∇u)_{ij} = u_{i,j} and the inner product of two 2-tensors is A · B = A_{ij} B_{ij}, the right-hand side of the equation we are asked to establish can be written as ∇u · ∇u − ∇u · ∇uᵀ = (∇u)_{ij}(∇u)_{ij} − (∇u)_{ij}(∇u)_{ji} = u_{i,j} u_{i,j} − u_{i,j} u_{j,i}. The left-hand side, on the other hand, is ξ · ξ = ξ_i ξ_i. Using (i), the aforementioned identity between the alternator and the Kronecker delta, and the substitution rule leads to the desired result as follows:
\[ \xi_i \xi_i = (e_{ijk}\, u_{k,j})(e_{ipq}\, u_{q,p}) = (\delta_{jp}\delta_{qk} - \delta_{jq}\delta_{kp})\, u_{k,j}\, u_{q,p} = u_{q,p}\, u_{q,p} - u_{p,q}\, u_{q,p}. \tag{v} \]
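All three identities admit a quick symbolic check. The sketch below verifies a. and c. with sympy for an arbitrarily chosen field u (again a purely illustrative choice).

```python
import sympy as sp

x1, x2, x3 = X = sp.symbols('x1 x2 x3')
u = sp.Matrix([x2*x3, sp.sin(x1)*x3, x1**2 * x2])   # an arbitrary vector field

# grad u, with (grad u)_ij = u_{i,j}
G = u.jacobian(sp.Matrix(X))

# xi = curl u
xi = sp.Matrix([G[2,1] - G[1,2], G[0,2] - G[2,0], G[1,0] - G[0,1]])

# a. div(curl u) = 0
print(sp.simplify(sum(sp.diff(xi[i], X[i]) for i in range(3))))        # 0

# c. xi . xi = grad u : grad u - grad u : (grad u)^T
rhs = sum(G[i,j]*G[i,j] - G[i,j]*G[j,i] for i in range(3) for j in range(3))
print(sp.simplify((xi.T*xi)[0,0] - rhs))                               # 0
```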
Example 5.6: Let u(x), E(x) and S(x) be, respectively, a vector and two 2-tensor fields. These fields are related by
\[ E = \tfrac{1}{2}\big(\nabla u + \nabla u^T\big), \qquad S = 2\mu E + \lambda\, \mathrm{trace}(E)\, 1, \tag{i} \]
where λ and µ are constants. Suppose that
\[ u(x) = b\,\frac{x}{r^3} \quad\text{where } r = |x|, \ |x| \neq 0, \tag{ii} \]
and b is a constant. Use (i)₁ to calculate the field E(x) corresponding to the field u(x) given in (ii), and then use (i)₂ to calculate the associated field S(x). Thus verify that the field S(x) corresponding to (ii) satisfies the differential equation
\[ \mathrm{div}\, S = o, \quad |x| \neq 0. \tag{iii} \]
Solution: We proceed in the manner suggested in the problem statement by first using (i)₁ to calculate the E corresponding to the u given by (ii); substituting the result into (i)₂ gives the corresponding S; and finally we can then check whether or not this S satisfies (iii).
In components,
\[ E_{ij} = \tfrac{1}{2}\,(u_{i,j} + u_{j,i}), \tag{iv} \]
and therefore we begin by calculating u_{i,j}. For this, it is convenient to first calculate ∂r/∂x_j = r_{,j}. Observe by differentiating r² = |x|² = x_i x_i that
\[ 2 r\, r_{,j} = 2 x_{i,j}\, x_i = 2\delta_{ij}\, x_i = 2 x_j, \tag{v} \]
and therefore
\[ r_{,j} = \frac{x_j}{r}. \tag{vi} \]
Now differentiating the given vector field u_i = b x_i / r³ with respect to x_j gives
\[ u_{i,j} = \frac{b}{r^3}\, x_{i,j} + b\, x_i\, (r^{-3})_{,j} = \frac{b}{r^3}\,\delta_{ij} - 3b\,\frac{x_i}{r^4}\, r_{,j} = b\,\frac{\delta_{ij}}{r^3} - 3b\,\frac{x_i x_j}{r^5}. \tag{vii} \]
Substituting this into (iv) gives us E_{ij}:
\[ E_{ij} = \tfrac{1}{2}\,(u_{i,j} + u_{j,i}) = b\left(\frac{\delta_{ij}}{r^3} - 3\,\frac{x_i x_j}{r^5}\right). \tag{viii} \]
Next, substituting (viii) into (i)₂ gives us S_{ij}:
\[ S_{ij} = 2\mu E_{ij} + \lambda E_{kk}\,\delta_{ij} = 2\mu b\left(\frac{\delta_{ij}}{r^3} - \frac{3 x_i x_j}{r^5}\right) + \lambda b\left(\frac{\delta_{kk}}{r^3} - 3\,\frac{x_k x_k}{r^5}\right)\delta_{ij} = 2\mu b\left(\frac{\delta_{ij}}{r^3} - \frac{3 x_i x_j}{r^5}\right), \tag{ix} \]
since δ_{kk}/r³ − 3 x_k x_k /r⁵ = 3/r³ − 3 r²/r⁵ = 0. Finally we use this to calculate ∂S_{ij}/∂x_j = S_{ij,j}:
\[ \frac{1}{2\mu b}\, S_{ij,j} = \delta_{ij}\,(r^{-3})_{,j} - \frac{3}{r^5}\,(x_i x_j)_{,j} - 3 x_i x_j\, (r^{-5})_{,j} = \delta_{ij}\left(-\frac{3}{r^4}\, r_{,j}\right) - \frac{3}{r^5}\,(\delta_{ij}\, x_j + x_i\, \delta_{jj}) - 3 x_i x_j \left(-\frac{5}{r^6}\, r_{,j}\right) = -3\,\frac{\delta_{ij}}{r^4}\,\frac{x_j}{r} - \frac{3}{r^5}\,(x_i + 3 x_i) + \frac{15\, x_i x_j}{r^6}\,\frac{x_j}{r} = 0. \tag{x} \]
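This verification is easy to reproduce symbolically. In the sketch below, b and µ are set to 1 since they merely scale S, and λ plays no role because trace E = 0; the check confirms S_{ij,j} = 0.

```python
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3'))
r = sp.sqrt(sum(xi**2 for xi in x))

u = x / r**3                                   # u = b x / r^3 with b = 1
G = u.jacobian(x)                              # grad u
E = (G + G.T) / 2
S = 2*E + E.trace()*sp.eye(3)                  # mu = lambda = 1 (trace E = 0 anyway)

divS = sp.Matrix([sum(sp.diff(S[i, j], x[j]) for j in range(3))
                  for i in range(3)])
print(sp.simplify(divS))                       # Matrix([[0], [0], [0]])
```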
Example 5.7: Show that
\[ \int_{\partial R} x \otimes n \, dA = V\, I, \tag{i} \]
where V is the volume of the region R, and x is the position vector of a typical point in R + ∂R.
Solution: In terms of components in a fixed basis, we have to show that
\[ \int_{\partial R} x_i\, n_j \, dA = V\, \delta_{ij}. \tag{ii} \]
The result follows immediately by using the divergence theorem (5.11):
\[ \int_{\partial R} x_i\, n_j \, dA = \int_R x_{i,j} \, dV = \int_R \delta_{ij} \, dV = \delta_{ij} \int_R dV = \delta_{ij}\, V. \tag{iii} \]
Example 5.8: Let A(x) be a 2-tensor field with the property that
\[ \int_{\partial D} A(x)\, n(x) \, dA = o \quad \text{for all subregions } D \subset R, \tag{i} \]
where n(x) is the unit outward normal vector at a point x on the boundary ∂D. Show that (i) holds if and only if div A = o at each point x ∈ R.
Solution: In terms of components in a fixed basis, we are told that
\[ \int_{\partial D} A_{ij}(x)\, n_j(x) \, dA = 0 \quad \text{for all subregions } D \subset R. \tag{ii} \]
By using the divergence theorem (5.12), this implies that
\[ \int_D A_{ij,j} \, dV = 0 \quad \text{for all subregions } D \subset R. \tag{iii} \]
If A_{ij,j} is continuous on R, the result established in the previous problem allows us to conclude that
\[ A_{ij,j} = 0 \quad \text{at each } x \in R. \tag{iv} \]
Conversely if (iv) holds, one can easily reverse the preceding steps to conclude that then (i) also holds. This shows that (iv) is both necessary and sufficient for (i) to hold.
Example 5.9: Let A(x) be a 2-tensor field which satisfies the differential equation div A = o at each point in R. Suppose that in addition
\[ \int_{\partial D} x \times A n \, dA = o \quad \text{for all subregions } D \subset R. \]
Show that A must be a symmetric 2-tensor.
Solution: In terms of components we are given that
\[ \int_{\partial D} e_{ijk}\, x_j\, A_{kp}\, n_p \, dA = 0, \]
which on using the divergence theorem yields
\[ \int_D e_{ijk}\, (x_j A_{kp})_{,p} \, dV = \int_D e_{ijk}\, \big[\delta_{jp}\, A_{kp} + x_j\, A_{kp,p}\big] \, dV = 0. \]
We are also given that A_{ij,j} = 0 at each point in R and so the preceding equation simplifies, after using the substitution rule, to
\[ \int_D e_{ijk}\, A_{kj} \, dV = 0. \]
Since this holds for all subregions D ⊂ R we can localize it to
\[ e_{ijk}\, A_{kj} = 0 \quad \text{at each } x \in R. \]
Finally, multiplying both sides by e_{ipq} and using the identity e_{ipq} e_{ijk} = δ_{pj}δ_{qk} − δ_{pk}δ_{qj} in (1.49) yields
\[ (\delta_{pj}\delta_{qk} - \delta_{pk}\delta_{qj})\, A_{kj} = A_{qp} - A_{pq} = 0, \]
and so A is symmetric.
Example 5.10: Let ε₁(x₁, x₂) and ε₂(x₁, x₂) be defined on a simply connected two-dimensional domain R. Find necessary and sufficient conditions under which there exists a function u(x₁, x₂) such that
\[ u_{,1} = \varepsilon_1, \quad u_{,2} = \varepsilon_2 \quad \text{for all } (x_1, x_2) \in R. \tag{i} \]
Solution: In the presence of sufficient smoothness, the order of partial differentiation does not matter and so we necessarily have u_{,12} = u_{,21}. Therefore a necessary condition for (i) to hold is that ε₁, ε₂ obey
\[ \varepsilon_{1,2} = \varepsilon_{2,1} \quad \text{for all } (x_1, x_2) \in R. \tag{ii} \]
Figure 5.2: (a) Path C from (0, 0) to (x₁, x₂). The curve is parameterized by arc length s as ξ₁ = ξ₁(s), ξ₂ = ξ₂(s), 0 ≤ s ≤ s₀; the unit tangent vector on C, s, has components (s₁, s₂). (b) A closed path C′ passing through (0, 0) and (x₁, x₂) and coinciding with C over part of its length. The unit outward normal vector on C′ is n, with components (n₁, n₂); the tangent and normal components are related by s₁ = −n₂, s₂ = n₁.
To show that (ii) is also sufficient for the existence of u, we shall provide a formula for explicitly calculating the function u in terms of the given functions ε₁ and ε₂. Let C be an arbitrary regular oriented curve in R that connects (0, 0) to (x₁, x₂). A generic point on the curve is denoted by (ξ₁, ξ₂) and the curve is characterized by the parameterization
\[ \xi_1 = \xi_1(s), \quad \xi_2 = \xi_2(s), \quad 0 \le s \le s_0, \tag{iii} \]
where s is arc length on C and (ξ₁(0), ξ₂(0)) = (0, 0) and (ξ₁(s₀), ξ₂(s₀)) = (x₁, x₂). We will show that the function
\[ u(x_1, x_2) = \int_0^{s_0} \Big[ \varepsilon_1(\xi_1(s), \xi_2(s))\, \xi_1'(s) + \varepsilon_2(\xi_1(s), \xi_2(s))\, \xi_2'(s) \Big]\, ds \tag{iv} \]
satisfies the requirement (i) when (ii) holds.
To see this we must first show that the integral (iv) does in fact define a function of (x₁, x₂), i.e. that it does not depend on the path of integration. (Note that if a function u satisfies (i), then so does the function u + constant, and so the dependence on the arbitrary starting point of the integral is to be expected.) Thus consider a closed path C′ that starts and ends at (0, 0) and passes through (x₁, x₂) as sketched in Figure 5.2(b). We need to show that
\[ \oint_{C'} \Big[ \varepsilon_1(\xi_1(s), \xi_2(s))\, \xi_1'(s) + \varepsilon_2(\xi_1(s), \xi_2(s))\, \xi_2'(s) \Big]\, ds = 0. \tag{v} \]
Recall that (ξ₁′(s), ξ₂′(s)) are the components of the unit tangent vector on C′ at the point (ξ₁(s), ξ₂(s)): s₁ = ξ₁′(s), s₂ = ξ₂′(s). Observe further from the figure that the components of the unit tangent vector s and the unit outward normal vector n are related by s₁ = −n₂ and s₂ = n₁. Thus the left-hand side of (v) can be written as
\[ \oint_{C'} \big( \varepsilon_1 s_1 + \varepsilon_2 s_2 \big)\, ds = \oint_{C'} \big( \varepsilon_2 n_1 - \varepsilon_1 n_2 \big)\, ds = \int_{D'} \big( \varepsilon_{2,1} - \varepsilon_{1,2} \big)\, dA, \tag{vi} \]
where we have used the divergence theorem in the last step and D′ is the region enclosed by C′. In view of (ii), this last integral vanishes. Thus the integral (v) vanishes on any closed path C′ and so the integral (iv) is independent of path and depends only on the end points. Thus (iv) does in fact define a function u(x₁, x₂).
Finally it remains to show that the function (iv) satisfies the requirements (i). This is readily seen by writing (iv) in the form
\[ u(x_1, x_2) = \int_{(0,0)}^{(x_1, x_2)} \Big[ \varepsilon_1(\xi_1, \xi_2)\, d\xi_1 + \varepsilon_2(\xi_1, \xi_2)\, d\xi_2 \Big] \tag{vii} \]
and then differentiating this with respect to x₁ and x₂.
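Formula (iv) can be exercised directly. The sketch below integrates along the straight segment from (0, 0) to (x₁, x₂) for an arbitrarily chosen compatible pair ε₁, ε₂ and confirms that the resulting u satisfies (i). (The path is parameterized by a parameter s on [0, 1] rather than by arc length; the line integral is unaffected.)

```python
import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s')
X1, X2 = sp.symbols('X1 X2', positive=True)   # end point of the path

# a compatible pair: eps1 = u,1 and eps2 = u,2 for u = x1^2 x2 + sin(x2)
eps1 = 2*x1*x2
eps2 = x1**2 + sp.cos(x2)
assert sp.simplify(sp.diff(eps1, x2) - sp.diff(eps2, x1)) == 0   # condition (ii)

# straight path (xi1, xi2) = (s X1, s X2), 0 <= s <= 1
xi1, xi2 = s*X1, s*X2
integrand = (eps1.subs({x1: xi1, x2: xi2}) * sp.diff(xi1, s)
             + eps2.subs({x1: xi1, x2: xi2}) * sp.diff(xi2, s))
u = sp.integrate(integrand, (s, 0, 1))

print(sp.simplify(sp.diff(u, X1) - eps1.subs({x1: X1, x2: X2})))  # 0
print(sp.simplify(sp.diff(u, X2) - eps2.subs({x1: X1, x2: X2})))  # 0
```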
Example 5.11: Let a₁(x₁, x₂) and a₂(x₁, x₂) be defined on a simply connected two-dimensional domain R. Suppose that a₁ and a₂ satisfy the partial differential equation
\[ a_{1,1}(x_1, x_2) + a_{2,2}(x_1, x_2) = 0 \quad \text{for all } (x_1, x_2) \in R. \tag{i} \]
Show that (i) holds if and only if there is a function φ(x₁, x₂) such that
\[ a_1(x_1, x_2) = \phi_{,2}(x_1, x_2), \qquad a_2(x_1, x_2) = -\phi_{,1}(x_1, x_2). \tag{ii} \]
Solution: This is simply a restatement of the previous example in a form that we will find useful in what follows.
Example 5.12: Find the most general vector field u(x) which satisfies the differential equation
\[ \tfrac{1}{2}\big(\nabla u + \nabla u^T\big) = O \quad \text{at all } x \in R. \tag{i} \]
Solution: In terms of components, ∇u = −∇uᵀ reads
\[ u_{i,j} = -u_{j,i}. \tag{ii} \]
Differentiating this with respect to x_k, and then changing the order of differentiation, gives
\[ u_{i,jk} = -u_{j,ik} = -u_{j,ki}. \]
However by (ii), u_{j,k} = −u_{k,j}. Using this and then changing the order of differentiation leads to
\[ u_{i,jk} = -u_{j,ki} = u_{k,ji} = u_{k,ij}. \]
Again, by (ii), u_{k,i} = −u_{i,k}. Using this and changing the order of differentiation once again leads to
\[ u_{i,jk} = u_{k,ij} = -u_{i,kj} = -u_{i,jk}. \]
It therefore follows that
\[ u_{i,jk} = 0. \]
Integrating this once gives
\[ u_{i,j} = C_{ij}, \tag{iii} \]
where the C_{ij}'s are constants. Integrating this once more gives
\[ u_i = C_{ij}\, x_j + c_i, \tag{iv} \]
where the c_i's are constants. The vector field u(x) must necessarily have this form if (ii) is to hold.
To examine sufficiency, substituting (iv) into (ii) shows that [C] must be skew-symmetric. Thus in summary, the most general vector field u(x) that satisfies (i) is
\[ u(x) = C x + c, \]
where C is a constant skew-symmetric 2-tensor and c is a constant vector.
Example 5.13: Suppose that a scalar-valued function f(A) is defined for all symmetric tensors A. In terms of components in a fixed basis we have f = f(A₁₁, A₁₂, A₁₃, A₂₁, ..., A₃₃). The partial derivatives of f with respect to A_{ij},
\[ \frac{\partial f}{\partial A_{ij}}, \tag{i} \]
are the components of a 2-tensor. Is this tensor symmetric?
Solution: Consider, for example, the particular function f = A · A = A_{ij} A_{ij} which, when written out in components, reads
\[ f = f_1(A_{11}, A_{12}, A_{13}, A_{21}, \ldots, A_{33}) = A_{11}^2 + A_{22}^2 + A_{33}^2 + 2 A_{12}^2 + 2 A_{23}^2 + 2 A_{31}^2. \tag{ii} \]
Proceeding formally and differentiating (ii) with respect to A₁₂, and separately with respect to A₂₁, gives
\[ \frac{\partial f_1}{\partial A_{12}} = 4 A_{12}, \qquad \frac{\partial f_1}{\partial A_{21}} = 0, \tag{iii} \]
which implies that ∂f₁/∂A₁₂ ≠ ∂f₁/∂A₂₁.
On the other hand, since A_{ij} is symmetric we can write
\[ A_{ij} = \tfrac{1}{2}\,(A_{ij} + A_{ji}). \tag{iv} \]
Substituting (iv) into the formula (ii) for f gives f = f₂(A₁₁, A₁₂, A₁₃, A₂₁, ..., A₃₃):
\[ f_2 = A_{11}^2 + A_{22}^2 + A_{33}^2 + 2\left[\tfrac{1}{2}(A_{12} + A_{21})\right]^2 + 2\left[\tfrac{1}{2}(A_{23} + A_{32})\right]^2 + 2\left[\tfrac{1}{2}(A_{31} + A_{13})\right]^2 = A_{11}^2 + A_{22}^2 + A_{33}^2 + \tfrac{1}{2}A_{12}^2 + A_{12}A_{21} + \tfrac{1}{2}A_{21}^2 + \ldots + \tfrac{1}{2}A_{31}^2 + A_{31}A_{13} + \tfrac{1}{2}A_{13}^2. \tag{v} \]
Note that f₁[A] = f₂[A] for any symmetric matrix [A]. Differentiating f₂ leads to
\[ \frac{\partial f_2}{\partial A_{12}} = A_{12} + A_{21}, \qquad \frac{\partial f_2}{\partial A_{21}} = A_{21} + A_{12}, \tag{vi} \]
and so now ∂f₂/∂A₁₂ = ∂f₂/∂A₂₁.
The source of the original difficulty is the fact that the 9 A_{ij}'s in the argument of f₁ are not independent variables since A_{ij} = A_{ji}; and yet we have been calculating partial derivatives as if they were independent. In fact, the original problem statement itself is ill-posed since we are asked to calculate ∂f/∂A_{ij} but told that [A] is restricted to being symmetric.
Suppose that f₂ is defined by (v) for all matrices [A] and not just symmetric matrices [A]. We see that the values of the functions f₁ and f₂ are equal at all symmetric matrices and so in going from f₁ → f₂, we have effectively relaxed the constraint of symmetry and expanded the domain of definition of f to all matrices [A]. We may differentiate f₂ by treating the 9 A_{ij}'s as independent, and the result can then be evaluated at symmetric matrices. We assume that this is what was meant in the problem statement.
In general, if a function f(A₁₁, A₁₂, ..., A₃₃) is expressed in symmetric form, by changing A_{ij} → ½(A_{ij} + A_{ji}), then ∂f/∂A_{ij} will be symmetric; but not otherwise. Throughout these volumes, whenever we encounter a function of a symmetric tensor, we shall always assume that it has been written in symmetric form; and therefore its derivative with respect to the tensor can be assumed to be symmetric.
Remark: We will encounter a similar situation involving tensors whose determinant is unity. On occasion we will have need to differentiate a function g₁(A) defined for all tensors with det A = 1, and we shall do this by extending the definition of the given function and defining a second function g₂(A) for all tensors; g₂ is defined such that g₁(A) = g₂(A) for all tensors with unit determinant. We then differentiate g₂ and evaluate the result at tensors with unit determinant.
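The contrast between f₁ and f₂ can be seen directly. In the sketch below both forms are differentiated with respect to A₁₂ and A₂₁; only the symmetrized form yields a symmetric derivative.

```python
import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'A{i+1}{j+1}'))

# f1: A.A written out assuming symmetry, as in (ii)
f1 = (A[0,0]**2 + A[1,1]**2 + A[2,2]**2
      + 2*A[0,1]**2 + 2*A[1,2]**2 + 2*A[2,0]**2)

# f2: the symmetrized form (v), defined for ALL matrices
As = (A + A.T) / 2
f2 = sum(As[i, j]**2 for i in range(3) for j in range(3))

print(sp.diff(f1, A[0,1]), sp.diff(f1, A[1,0]))              # 4*A12, 0
print(sp.expand(sp.diff(f2, A[0,1]) - sp.diff(f2, A[1,0])))  # 0 -> symmetric
```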
Chapter 6
Orthogonal Curvilinear Coordinates
6.1 Introductory Remarks
The notes in this section are a somewhat simplified version of notes developed by Professor Eli Sternberg of Caltech. The discussion here, which is a general treatment of orthogonal curvilinear coordinates, is a compromise between a general tensorial treatment that includes oblique coordinate systems and an ad-hoc treatment of special orthogonal curvilinear coordinate systems. A summary of the main tensor-analytic results of this section is given in equations (6.32)–(6.37) in terms of the scale factors h_i defined in (6.17) that relate the rectangular cartesian coordinates (x₁, x₂, x₃) to the orthogonal curvilinear coordinates (x̂₁, x̂₂, x̂₃).
It is helpful to begin by reviewing a few aspects of the familiar case of circular cylindrical coordinates. Let {e₁, e₂, e₃} be a fixed orthonormal basis, and let O be a fixed point chosen as the origin. The point O and the basis {e₁, e₂, e₃}, together, constitute a frame which we denote by {O; e₁, e₂, e₃}. Consider a generic point P in R³ whose position vector relative to this origin O is x. The rectangular cartesian coordinates of the point P in the frame {O; e₁, e₂, e₃} are the components (x₁, x₂, x₃) of the position vector x in this basis.
We introduce circular cylindrical coordinates (r, θ, z) through the mappings
\[ x_1 = r\cos\theta, \quad x_2 = r\sin\theta, \quad x_3 = z, \qquad \text{for all } (r, \theta, z) \in [0, \infty) \times [0, 2\pi) \times (-\infty, \infty). \tag{6.1} \]
The mapping (6.1) is one-to-one except at r = 0 (i.e. x₁ = x₂ = 0). Indeed (6.1) may be explicitly inverted for r > 0 to give
\[ r = \sqrt{x_1^2 + x_2^2}; \qquad \cos\theta = x_1/r, \ \ \sin\theta = x_2/r; \qquad z = x_3. \tag{6.2} \]
For a general set of orthogonal curvilinear coordinates one cannot, in general, explicitly invert the coordinate mapping in this way.
The Jacobian determinant of the mapping (6.1) is
\[ \Delta(r, \theta, z) = \det \begin{pmatrix} \partial x_1/\partial r & \partial x_1/\partial\theta & \partial x_1/\partial z \\ \partial x_2/\partial r & \partial x_2/\partial\theta & \partial x_2/\partial z \\ \partial x_3/\partial r & \partial x_3/\partial\theta & \partial x_3/\partial z \end{pmatrix} = r \ge 0. \]
Note that ∆(r, θ, z) = 0 if and only if r = 0 and is otherwise strictly positive; this reflects the invertibility of (6.1) on (r, θ, z) ∈ (0, ∞) × [0, 2π) × (−∞, ∞), and the breakdown in invertibility at r = 0.
Figure 6.1: Circular cylindrical coordinates (r, θ, z), showing the coordinate surfaces r = constant, θ = constant, z = constant and the r-, θ- and z-coordinate lines.
The circular cylindrical coordinates (r, θ, z) admit the familiar geometric interpretation illustrated in Figure 6.1. In view of (6.2), one has:
r = r₀ = constant : circular cylinders, coaxial with the x₃-axis,
θ = θ₀ = constant : meridional half-planes through the x₃-axis,
z = z₀ = constant : planes perpendicular to the x₃-axis.
The above surfaces constitute a triply orthogonal family of coordinate surfaces; each “regular point” of E³ (i.e. a point at which r > 0) is the intersection of a unique triplet of (mutually perpendicular) coordinate surfaces. The coordinate lines are the pairwise intersections of the coordinate surfaces; thus for example, as illustrated in Figure 6.1, the line along which an r-coordinate surface and a z-coordinate surface intersect is a θ-coordinate line. Along any coordinate line only one of the coordinates (r, θ, z) varies, while the other two remain constant.
In terms of the circular cylindrical coordinates the position vector x can be written as
\[ x = x(r, \theta, z) = (r\cos\theta)\, e_1 + (r\sin\theta)\, e_2 + z\, e_3. \tag{6.3} \]
The vectors
\[ \partial x/\partial r, \quad \partial x/\partial\theta, \quad \partial x/\partial z \]
are tangent to the coordinate lines corresponding to r, θ and z respectively. The so-called metric coefficients h_r, h_θ, h_z denote the magnitudes of these vectors, i.e.
\[ h_r = |\partial x/\partial r|, \quad h_\theta = |\partial x/\partial\theta|, \quad h_z = |\partial x/\partial z|, \]
and so the unit tangent vectors corresponding to the respective coordinate lines r, θ and z are
\[ e_r = \frac{1}{h_r}\,(\partial x/\partial r), \quad e_\theta = \frac{1}{h_\theta}\,(\partial x/\partial\theta), \quad e_z = \frac{1}{h_z}\,(\partial x/\partial z). \]
In the present case one has h_r = 1, h_θ = r, h_z = 1 and
\[ \begin{aligned} e_r &= \frac{1}{h_r}\,(\partial x/\partial r) = \cos\theta\, e_1 + \sin\theta\, e_2, \\ e_\theta &= \frac{1}{h_\theta}\,(\partial x/\partial\theta) = -\sin\theta\, e_1 + \cos\theta\, e_2, \\ e_z &= \frac{1}{h_z}\,(\partial x/\partial z) = e_3. \end{aligned} \]
The triplet of vectors {e_r, e_θ, e_z} forms a local orthonormal basis at the point x. They are local because they depend on the point x; sometimes, when we need to emphasize this fact, we will write {e_r(x), e_θ(x), e_z(x)}.
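These scale factors and unit vectors follow mechanically from (6.3). The short sketch below reproduces h_r = 1, h_θ = r, h_z = 1 and the unit vectors with sympy.

```python
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x = sp.Matrix([r*sp.cos(th), r*sp.sin(th), z])     # the mapping (6.1)

for q in (r, th, z):
    t = x.diff(q)                  # tangent vector along the q-coordinate line
    h = sp.simplify(t.norm())      # scale factor h_q = |dx/dq|
    e = sp.simplify(t / h)         # unit tangent vector
    print(q, h, e.T)
# prints h_r = 1, h_theta = r, h_z = 1 and the vectors e_r, e_theta, e_z
```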
In order to calculate the derivatives of various field quantities it is clear that we will need to calculate quantities such as ∂e_r/∂r, ∂e_r/∂θ, ... etc.; and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form
\[ e_r \cdot (\partial e_r/\partial r), \quad e_\theta \cdot (\partial e_r/\partial r), \quad e_z \cdot (\partial e_r/\partial r), \quad e_r \cdot (\partial e_\theta/\partial r), \quad e_\theta \cdot (\partial e_\theta/\partial r), \quad e_z \cdot (\partial e_\theta/\partial r), \ \ldots \ \text{etc.} \tag{6.4} \]
Much of the analysis in the general case to follow, leading eventually to Equation (6.30) in Sub-section 6.2.4, is devoted to calculating these quantities.
Notation: As far as possible, we will consistently denote the fixed cartesian coordinate system and all components and quantities associated with it by symbols such as x_i, e_i, f(x₁, x₂, x₃), v_i(x₁, x₂, x₃), A_{ij}(x₁, x₂, x₃) etc., and we shall consistently denote the local curvilinear coordinate system and all components and quantities associated with it by similar symbols with “hats” over them, e.g. x̂_i, ê_i, f̂(x̂₁, x̂₂, x̂₃), v̂_i(x̂₁, x̂₂, x̂₃), Â_{ij}(x̂₁, x̂₂, x̂₃) etc.
6.2 General Orthogonal Curvilinear Coordinates
Let {e₁, e₂, e₃} be a fixed right-handed orthonormal basis, let O be the fixed point chosen as the origin and let {O; e₁, e₂, e₃} be the associated frame. The rectangular cartesian coordinates of the point with position vector x in this frame are (x₁, x₂, x₃) where x_i = x · e_i.
6.2.1 Coordinate transformation. Inverse transformation.
We introduce curvilinear coordinates (x̂₁, x̂₂, x̂₃) through a triplet of scalar mappings
\[ x_i = x_i(\hat{x}_1, \hat{x}_2, \hat{x}_3) \quad \text{for all } (\hat{x}_1, \hat{x}_2, \hat{x}_3) \in \hat{R}, \tag{6.5} \]
where the domain of definition R̂ is a subset of E³. Each curvilinear coordinate x̂_i belongs to some linear interval L_i, and R̂ = L₁ × L₂ × L₃. For example, in the case of circular cylindrical coordinates we have L₁ = {x̂₁ | 0 ≤ x̂₁ < ∞}, L₂ = {x̂₂ | 0 ≤ x̂₂ < 2π} and L₃ = {x̂₃ | −∞ < x̂₃ < ∞}, and the “box” R̂ is given by R̂ = {(x̂₁, x̂₂, x̂₃) | 0 ≤ x̂₁ < ∞, 0 ≤ x̂₂ < 2π, −∞ < x̂₃ < ∞}. Observe that the “box” R̂ includes some but possibly not all of its faces.
Equation (6.5) may be interpreted as a mapping of R̂ into E³. We shall assume that (x₁, x₂, x₃) ranges over all of E³ as (x̂₁, x̂₂, x̂₃) takes on all values in R̂. We assume further that the mapping (6.5) is one-to-one and sufficiently smooth in the interior of R̂ so that the inverse mapping
\[ \hat{x}_i = \hat{x}_i(x_1, x_2, x_3) \tag{6.6} \]
exists and is appropriately smooth at all (x₁, x₂, x₃) in the image of the interior of R̂.
Note that the mapping (6.5) might not be uniquely invertible on some of the faces of R̂, which are mapped into “singular” lines or surfaces in E³. (For example, in the case of circular cylindrical coordinates, x̂₁ = r = 0 is a singular surface; see Section 6.1.) Points that are not on a singular line or surface will be referred to as “regular points” of E³.
The Jacobian matrix [J] of the mapping (6.5) has elements
\[ J_{ij} = \frac{\partial x_i}{\partial \hat{x}_j}, \tag{6.7} \]
and by the assumed smoothness and one-to-oneness of the mapping, the Jacobian determinant does not vanish on the interior of R̂. Without loss of generality we can therefore take it to be positive:
\[ \det[J] = \frac{1}{6}\, e_{ijk}\, e_{pqr}\, \frac{\partial x_i}{\partial \hat{x}_p}\, \frac{\partial x_j}{\partial \hat{x}_q}\, \frac{\partial x_k}{\partial \hat{x}_r} > 0. \tag{6.8} \]
The Jacobian matrix of the inverse mapping (6.6) is [J]⁻¹.
The coordinate surface x̂_i = constant is defined by
\[ \hat{x}_i(x_1, x_2, x_3) = \hat{x}_i^{\,o} = \text{constant}, \quad i = 1, 2, 3; \]
the pairwise intersections of these surfaces are the corresponding coordinate lines, along which only one of the curvilinear coordinates varies. Thus every regular point of E³ is the point of intersection of a unique triplet of coordinate surfaces and coordinate lines, as is illustrated in Figure 6.2.
Recall that the tangent vector along an arbitrary regular curve
\[ \Gamma: \ x = x(t), \quad (\alpha \le t \le \beta), \tag{6.9} \]
can be taken to be¹ ẋ(t) = ẋ_i(t) e_i; it is oriented in the direction of increasing t. Thus in the case of the special curve
\[ \Gamma_1: \ x = x(\hat{x}_1, c_2, c_3), \quad \hat{x}_1 \in L_1, \ c_2 = \text{constant}, \ c_3 = \text{constant}, \]
corresponding to an x̂₁-coordinate line, the tangent vector can be taken to be ∂x/∂x̂₁. Generalizing this, the ∂x/∂x̂_i are tangent vectors and
\[ \hat{e}_i = \frac{1}{|\partial x/\partial \hat{x}_i|}\, \frac{\partial x}{\partial \hat{x}_i} \quad \text{(no sum)} \tag{6.10} \]
¹ Here and in the sequel a superior dot indicates differentiation with respect to the parameter t.
Figure 6.2: Orthogonal curvilinear coordinates (x̂₁, x̂₂, x̂₃) and the associated local orthonormal basis vectors {ê₁, ê₂, ê₃}. Here ê_i is the unit tangent vector along the x̂_i-coordinate line, the sense of ê_i being determined by the direction in which x̂_i increases. The proper orthogonal matrix [Q], Q_{ij} = ê_i · e_j, characterizes the rotational transformation relating this basis to the rectangular cartesian basis {e₁, e₂, e₃}.
are the unit tangent vectors along the x̂_i-coordinate lines, each of which points in the sense of increasing x̂_i. Since our discussion is limited to orthogonal curvilinear coordinate systems, we must require
\[ \text{for } i \neq j: \qquad \hat{e}_i \cdot \hat{e}_j = 0 \quad\text{or}\quad \frac{\partial x}{\partial \hat{x}_i} \cdot \frac{\partial x}{\partial \hat{x}_j} = 0 \quad\text{or}\quad \frac{\partial x_k}{\partial \hat{x}_i}\, \frac{\partial x_k}{\partial \hat{x}_j} = 0. \tag{6.11} \]
6.2.2 Metric coefficients, scale moduli.
Consider again the arbitrary regular curve Γ parameterized by (6.9). If s(t) is the arc length of Γ, measured from an arbitrary fixed point on Γ, one has
\[ |\dot{s}(t)| = |\dot{x}(t)| = \sqrt{\dot{x}(t) \cdot \dot{x}(t)}. \tag{6.12} \]
One concludes from (6.12), (6.5) and the chain rule that
\[ \left(\frac{ds}{dt}\right)^2 = \frac{dx}{dt} \cdot \frac{dx}{dt} = \frac{dx_k}{dt}\, \frac{dx_k}{dt} = \left(\frac{\partial x_k}{\partial \hat{x}_i}\, \frac{d\hat{x}_i}{dt}\right)\left(\frac{\partial x_k}{\partial \hat{x}_j}\, \frac{d\hat{x}_j}{dt}\right) = \left(\frac{\partial x_k}{\partial \hat{x}_i}\, \frac{\partial x_k}{\partial \hat{x}_j}\right)\frac{d\hat{x}_i}{dt}\, \frac{d\hat{x}_j}{dt}, \]
where
\[ \hat{x}_i(t) = \hat{x}_i(x_1(t), x_2(t), x_3(t)), \quad (\alpha \le t \le \beta). \]
Thus
\[ \left(\frac{ds}{dt}\right)^2 = g_{ij}\, \frac{d\hat{x}_i}{dt}\, \frac{d\hat{x}_j}{dt} \quad\text{or}\quad (ds)^2 = g_{ij}\, d\hat{x}_i\, d\hat{x}_j, \tag{6.13} \]
in which the g_{ij} are the metric coefficients of the curvilinear coordinate system under consideration. They are defined by
\[ g_{ij} = \frac{\partial x}{\partial \hat{x}_i} \cdot \frac{\partial x}{\partial \hat{x}_j} = \frac{\partial x_k}{\partial \hat{x}_i}\, \frac{\partial x_k}{\partial \hat{x}_j}. \tag{6.14} \]
Note that
\[ g_{ij} = 0, \quad (i \neq j), \tag{6.15} \]
as a consequence of the orthogonality condition (6.11). Observe that in terms of the Jacobian matrix [J] defined earlier in (6.7) we can write g_{ij} = J_{ki} J_{kj}, or equivalently [g] = [J]ᵀ[J]. Because of (6.14) and (6.15) the metric coefficients can be written as
\[ g_{ij} = h_i h_j\, \delta_{ij}, \tag{6.16} \]
where the scale moduli h_i are defined by²˒³
\[ h_i = \sqrt{g_{\underline{i}\underline{i}}} = \sqrt{\frac{\partial x_k}{\partial \hat{x}_i}\, \frac{\partial x_k}{\partial \hat{x}_i}} = \sqrt{\left(\frac{\partial x_1}{\partial \hat{x}_i}\right)^2 + \left(\frac{\partial x_2}{\partial \hat{x}_i}\right)^2 + \left(\frac{\partial x_3}{\partial \hat{x}_i}\right)^2} > 0, \tag{6.17} \]
noting that h_i = 0 is precluded by (6.8). The matrix of metric coefficients is therefore
\[ [g] = \begin{pmatrix} h_1^2 & 0 & 0 \\ 0 & h_2^2 & 0 \\ 0 & 0 & h_3^2 \end{pmatrix}. \tag{6.18} \]
From (6.13), (6.14), (6.17) follows
\[ (ds)^2 = (h_1\, d\hat{x}_1)^2 + (h_2\, d\hat{x}_2)^2 + (h_3\, d\hat{x}_3)^2 \quad \text{along } \Gamma, \tag{6.19} \]
which reveals the geometric significance of the scale moduli, i.e.
\[ h_i = \frac{ds}{d\hat{x}_i} \quad \text{along the } \hat{x}_i\text{-coordinate lines.} \tag{6.20} \]
² Here and henceforth the underlining of one of two or more repeated indices indicates suspended summation with respect to this index.
³ Some authors, such as Love, define h_i as 1/√g_{ii} instead of as √g_{ii}.
It follows from (6.17), (6.10) and (6.11) that the unit vector ê_i can be expressed as
\[ \hat{e}_i = \frac{1}{h_i}\, \frac{\partial x}{\partial \hat{x}_i}, \tag{6.21} \]
and therefore the proper orthogonal matrix [Q] relating the two sets of basis vectors is given by
\[ Q_{ij} = \hat{e}_i \cdot e_j = \frac{1}{h_i}\, \frac{\partial x_j}{\partial \hat{x}_i}. \tag{6.22} \]
6.2.3 Inverse partial derivatives
In view of (6.5), (6.6) one has the identity
\[ x_i = x_i(\hat{x}_1(x_1, x_2, x_3),\ \hat{x}_2(x_1, x_2, x_3),\ \hat{x}_3(x_1, x_2, x_3)), \]
so that from the chain rule,
\[ \frac{\partial x_i}{\partial \hat{x}_k}\, \frac{\partial \hat{x}_k}{\partial x_j} = \delta_{ij}. \]
Multiply this by ∂x_i/∂x̂_m, noting the implied contraction on the index i, and use (6.14), (6.16) to confirm that
\[ \frac{\partial x_j}{\partial \hat{x}_m} = g_{km}\, \frac{\partial \hat{x}_k}{\partial x_j} = h_m^2\, \frac{\partial \hat{x}_m}{\partial x_j}. \]
Thus the inverse partial derivatives are given by
\[ \frac{\partial \hat{x}_i}{\partial x_j} = \frac{1}{h_i^2}\, \frac{\partial x_j}{\partial \hat{x}_i}. \tag{6.23} \]
By (6.22), the elements of the matrix [Q] that relates the two coordinate systems can be written in the alternative form
\[ Q_{ij} = \hat{e}_i \cdot e_j = h_i\, \frac{\partial \hat{x}_i}{\partial x_j}. \tag{6.24} \]
Moreover, (6.23) and (6.17) yield the following alternative expression for h_i:
\[ h_i = 1\Big/\sqrt{\left(\frac{\partial \hat{x}_i}{\partial x_1}\right)^2 + \left(\frac{\partial \hat{x}_i}{\partial x_2}\right)^2 + \left(\frac{\partial \hat{x}_i}{\partial x_3}\right)^2}\,. \]
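Relation (6.23) can be confirmed for any concrete coordinate system. The sketch below does so for circular cylindrical coordinates, comparing ∂x̂_i/∂x_j computed from the explicit inverse map (6.2) with (1/h_i²) ∂x_j/∂x̂_i; the symbol names are of course arbitrary.

```python
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x1, x2, x3 = r*sp.cos(th), r*sp.sin(th), z          # forward map (6.1)
X, Q, h = [x1, x2, x3], [r, th, z], [1, r, 1]       # and its scale moduli

# inverse map (6.2), written in cartesian variables then evaluated on (6.1)
y1, y2, y3 = sp.symbols('y1 y2 y3')
inv = [sp.sqrt(y1**2 + y2**2), sp.atan2(y2, y1), y3]

for i in range(3):
    for j in range(3):
        lhs = sp.diff(inv[i], [y1, y2, y3][j]).subs({y1: x1, y2: x2, y3: x3})
        rhs = sp.diff(X[j], Q[i]) / h[i]**2
        assert sp.simplify(lhs - rhs) == 0          # relation (6.23)
print("(6.23) verified for cylindrical coordinates")
```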
6.2.4 Components of ∂ê_i/∂x̂_j in the local basis (ê₁, ê₂, ê₃)
In order to calculate the derivatives of various field quantities it is clear that we will need to calculate the quantities ∂ê_i/∂x̂_j; and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form ê_k · ∂ê_i/∂x̂_j. Calculating these quantities is an essential prerequisite for the transformation of basic tensor-analytic relations into arbitrary orthogonal curvilinear coordinates, and this subsection is devoted to this calculation.
From (6.21),
\[ \frac{\partial \hat{e}_i}{\partial \hat{x}_j} \cdot \hat{e}_k = \left[ -\frac{1}{h_i^2}\, \frac{\partial h_i}{\partial \hat{x}_j}\, \frac{\partial x}{\partial \hat{x}_i} + \frac{1}{h_i}\, \frac{\partial^2 x}{\partial \hat{x}_i\, \partial \hat{x}_j} \right] \cdot \frac{1}{h_k}\, \frac{\partial x}{\partial \hat{x}_k}, \]
while by (6.14), (6.17),
\[ \frac{\partial x}{\partial \hat{x}_i} \cdot \frac{\partial x}{\partial \hat{x}_j} = g_{ij} = \delta_{ij}\, h_i h_j. \tag{6.25} \]
Therefore
\[ \frac{\partial \hat{e}_i}{\partial \hat{x}_j} \cdot \hat{e}_k = -\frac{\delta_{ik}}{h_i}\, \frac{\partial h_i}{\partial \hat{x}_j} + \frac{1}{h_i h_k}\, \frac{\partial^2 x}{\partial \hat{x}_i\, \partial \hat{x}_j} \cdot \frac{\partial x}{\partial \hat{x}_k}. \tag{6.26} \]
In order to express the second-derivative term in (6.26) in terms of the scale moduli and their first partial derivatives, we begin by differentiating (6.25) with respect to x̂_k. Thus,
\[ \frac{\partial^2 x}{\partial \hat{x}_i\, \partial \hat{x}_k} \cdot \frac{\partial x}{\partial \hat{x}_j} + \frac{\partial^2 x}{\partial \hat{x}_j\, \partial \hat{x}_k} \cdot \frac{\partial x}{\partial \hat{x}_i} = \delta_{ij}\, \frac{\partial}{\partial \hat{x}_k}(h_i h_j). \tag{6.27} \]
If we refer to (6.27) as (a), and let (b) and (c) be the identities resulting from (6.27) when (i, j, k) are replaced by (j, k, i) and (k, i, j), respectively, then ½{(b) + (c) − (a)} is readily found to yield
\[ \frac{\partial^2 x}{\partial \hat{x}_i\, \partial \hat{x}_j} \cdot \frac{\partial x}{\partial \hat{x}_k} = \frac{1}{2}\left[ \delta_{jk}\, \frac{\partial}{\partial \hat{x}_i}(h_j h_k) + \delta_{ki}\, \frac{\partial}{\partial \hat{x}_j}(h_k h_i) - \delta_{ij}\, \frac{\partial}{\partial \hat{x}_k}(h_i h_j) \right]. \tag{6.28} \]
Substituting (6.28) into (6.26) leads to
\[ \frac{\partial \hat{e}_i}{\partial \hat{x}_j} \cdot \hat{e}_k = -\frac{\delta_{ik}}{h_i}\, \frac{\partial h_i}{\partial \hat{x}_j} + \frac{1}{2 h_i h_k}\left[ \delta_{jk}\, \frac{\partial}{\partial \hat{x}_i}(h_j h_k) + \delta_{ki}\, \frac{\partial}{\partial \hat{x}_j}(h_k h_i) - \delta_{ij}\, \frac{\partial}{\partial \hat{x}_k}(h_i h_j) \right]. \tag{6.29} \]
Equation (6.29) provides the explicit expressions for the terms ∂ê_i/∂x̂_j · ê_k that we sought.
Observe the following properties that follow from it:
\[ \begin{aligned} &\frac{\partial \hat{e}_i}{\partial \hat{x}_j} \cdot \hat{e}_k = 0 \quad \text{if } (i, j, k) \text{ distinct}, \qquad && \frac{\partial \hat{e}_i}{\partial \hat{x}_j} \cdot \hat{e}_k = 0 \quad \text{if } k = i, \\ &\frac{\partial \hat{e}_i}{\partial \hat{x}_i} \cdot \hat{e}_k = -\frac{1}{h_k}\, \frac{\partial h_i}{\partial \hat{x}_k} \quad \text{if } i \neq k, \qquad && \frac{\partial \hat{e}_i}{\partial \hat{x}_k} \cdot \hat{e}_k = \frac{1}{h_i}\, \frac{\partial h_k}{\partial \hat{x}_i} \quad \text{if } i \neq k. \end{aligned} \tag{6.30} \]
(The plus sign in the last of these follows from (6.29) with j = k ≠ i; it is consistent, for example, with ∂e_r/∂θ · e_θ = 1 in circular cylindrical coordinates.)
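Relations (6.29) and (6.30) can be checked mechanically for any particular coordinate system. The sketch below tabulates ê_k · ∂ê_i/∂x̂_j directly from the spherical coordinate mapping and compares every component with the formula (6.29).

```python
import sympy as sp

r, th, ph = q = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)])

t = [x.diff(qi) for qi in q]                      # tangent vectors
h = [sp.Integer(1), r, r*sp.sin(th)]              # scale moduli of this system
e = [sp.simplify(ti/hi) for ti, hi in zip(t, h)]  # local orthonormal basis

d = sp.KroneckerDelta
for i in range(3):
    for j in range(3):
        for k in range(3):
            direct = e[i].diff(q[j]).dot(e[k])
            formula = (-d(i, k)/h[i]*sp.diff(h[i], q[j])
                       + (d(j, k)*sp.diff(h[j]*h[k], q[i])
                          + d(k, i)*sp.diff(h[k]*h[i], q[j])
                          - d(i, j)*sp.diff(h[i]*h[j], q[k])) / (2*h[i]*h[k]))
            assert sp.simplify(direct - formula) == 0
print("(6.29) verified for spherical coordinates")
```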
6.3 Transformation of Basic Tensor Relations
Let T be a cartesian tensor field of order N ≥ 1, defined on a region R ⊂ E³, and suppose that the points of R are regular points of E³ with respect to a given orthogonal curvilinear coordinate system.
The curvilinear components T̂_{ij...n} of T are the components of T in the local basis (ê₁, ê₂, ê₃). Thus,
\[ \hat{T}_{ij\ldots n} = Q_{ip} Q_{jq} \cdots Q_{nr}\, T_{pq\ldots r}, \quad \text{where } Q_{ip} = \hat{e}_i \cdot e_p. \tag{6.31} \]
6.3.1 Gradient of a scalar field
Let φ(x) be a scalar-valued function and let v(x) denote its gradient:
\[ v = \nabla\phi \quad\text{or equivalently}\quad v_i = \phi_{,i}. \]
The components of v in the two bases {e₁, e₂, e₃} and {ê₁, ê₂, ê₃} are related in the usual way by v̂_k = Q_{ki} v_i, and so
\[ \hat{v}_k = Q_{ki}\, v_i = Q_{ki}\, \frac{\partial \phi}{\partial x_i}. \]
On using (6.22) this leads to
\[ \hat{v}_k = \left( \frac{1}{h_k}\, \frac{\partial x_i}{\partial \hat{x}_k} \right) \frac{\partial \phi}{\partial x_i} = \frac{1}{h_k}\, \frac{\partial \phi}{\partial x_i}\, \frac{\partial x_i}{\partial \hat{x}_k}, \]
so that by the chain rule
\[ \hat{v}_k = \frac{1}{h_k}\, \frac{\partial \hat{\phi}}{\partial \hat{x}_k}, \tag{6.32} \]
where we have set φ̂(x̂₁, x̂₂, x̂₃) = φ(x₁(x̂₁, x̂₂, x̂₃), x₂(x̂₁, x̂₂, x̂₃), x₃(x̂₁, x̂₂, x̂₃)).
6.3.2 Gradient of a vector field
Let v(x) be a vector-valued function and let W(x) denote its gradient:
\[ W = \nabla v \quad\text{or equivalently}\quad W_{ij} = v_{i,j}. \]
The components of W and v in the two bases {e₁, e₂, e₃} and {ê₁, ê₂, ê₃} are related in the usual way by
\[ \hat{W}_{ij} = Q_{ip} Q_{jq}\, W_{pq}, \qquad v_p = Q_{np}\, \hat{v}_n, \]
and therefore
\[ \hat{W}_{ij} = Q_{ip} Q_{jq}\, \frac{\partial v_p}{\partial x_q} = Q_{ip} Q_{jq}\, \frac{\partial}{\partial x_q}(Q_{np}\, \hat{v}_n) = Q_{ip} Q_{jq}\, \frac{\partial}{\partial \hat{x}_m}(Q_{np}\, \hat{v}_n)\, \frac{\partial \hat{x}_m}{\partial x_q}. \]
Thus by (6.24)⁴
\[ \hat{W}_{ij} = Q_{ip} Q_{jq} \sum_{m=1}^{3} \frac{1}{h_m}\, Q_{mq}\, \frac{\partial}{\partial \hat{x}_m}(Q_{np}\, \hat{v}_n). \]
Since Q_{jq} Q_{mq} = δ_{mj}, this simplifies to
\[ \hat{W}_{ij} = Q_{ip}\, \frac{1}{h_j}\, \frac{\partial}{\partial \hat{x}_j}(Q_{np}\, \hat{v}_n), \]
which, on expanding the terms in parentheses, yields
\[ \hat{W}_{ij} = \frac{1}{h_j}\left( \frac{\partial \hat{v}_i}{\partial \hat{x}_j} + Q_{ip}\, \frac{\partial Q_{np}}{\partial \hat{x}_j}\, \hat{v}_n \right). \]
However by (6.22)
\[ Q_{ip}\, \frac{\partial Q_{np}}{\partial \hat{x}_j}\, \hat{v}_n = Q_{ip}\, \frac{\partial}{\partial \hat{x}_j}(\hat{e}_n \cdot e_p)\, \hat{v}_n = \hat{e}_i \cdot \frac{\partial \hat{e}_n}{\partial \hat{x}_j}\, \hat{v}_n, \]
and so
\[ \hat{W}_{ij} = \frac{1}{h_j}\left[ \frac{\partial \hat{v}_i}{\partial \hat{x}_j} + \left( \hat{e}_i \cdot \frac{\partial \hat{e}_n}{\partial \hat{x}_j} \right) \hat{v}_n \right], \tag{6.33} \]
in which the coefficient in brackets is given by (6.29).
⁴ We explicitly use the summation sign in this equation (and elsewhere) when an index is repeated 3 or more times and we wish to sum over it.
6.3.3 Divergence of a vector field
Let v(x) be a vector-valued function and let W(x) = ∇v(x) denote its gradient. Then
\[ \mathrm{div}\, v = \mathrm{trace}\, W = v_{i,i}. \]
Therefore from (6.33), the invariance of the trace of W, and (6.30),
\[ \mathrm{div}\, v = \mathrm{trace}\, W = \mathrm{trace}\, \hat{W} = \hat{W}_{ii} = \sum_{i=1}^{3} \frac{1}{h_i}\left[ \frac{\partial \hat{v}_i}{\partial \hat{x}_i} + \sum_{n \neq i} \frac{1}{h_n}\, \frac{\partial h_i}{\partial \hat{x}_n}\, \hat{v}_n \right]. \]
Collecting terms involving v̂₁, v̂₂, and v̂₃ alone, one has
\[ \mathrm{div}\, v = \frac{1}{h_1}\, \frac{\partial \hat{v}_1}{\partial \hat{x}_1} + \frac{\hat{v}_1}{h_2 h_1}\, \frac{\partial h_2}{\partial \hat{x}_1} + \frac{\hat{v}_1}{h_3 h_1}\, \frac{\partial h_3}{\partial \hat{x}_1} + \ldots + \ldots \]
Thus
\[ \mathrm{div}\, v = \frac{1}{h_1 h_2 h_3}\left[ \frac{\partial}{\partial \hat{x}_1}(h_2 h_3\, \hat{v}_1) + \frac{\partial}{\partial \hat{x}_2}(h_3 h_1\, \hat{v}_2) + \frac{\partial}{\partial \hat{x}_3}(h_1 h_2\, \hat{v}_3) \right]. \tag{6.34} \]
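Formula (6.34) translates directly into a small utility. The sketch below implements it for arbitrary scale moduli and checks it on the cylindrical field v = r e_r, i.e. the cartesian field (x₁, x₂, 0), whose divergence is 2.

```python
import sympy as sp

def curvi_div(vhat, q, h):
    """Divergence (6.34) for physical components vhat in orthogonal
    curvilinear coordinates q with scale moduli h."""
    H = h[0]*h[1]*h[2]
    return sp.simplify(sum(sp.diff(H/h[i]*vhat[i], q[i]) for i in range(3)) / H)

r, th, z = sp.symbols('r theta z', positive=True)
q, h = (r, th, z), (1, r, 1)          # cylindrical coordinates

print(curvi_div([r, 0, 0], q, h))     # 2
```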
6.3.4 Laplacian of a scalar field
Let φ(x) be a scalar-valued function. Since
\[ \nabla^2 \phi = \mathrm{div}(\mathrm{grad}\ \phi) = \phi_{,kk}, \]
the results from Sub-sections 6.3.1 and 6.3.3 permit us to infer that
\[ \nabla^2 \phi = \frac{1}{h_1 h_2 h_3}\left[ \frac{\partial}{\partial \hat{x}_1}\!\left( \frac{h_2 h_3}{h_1}\, \frac{\partial \hat{\phi}}{\partial \hat{x}_1} \right) + \frac{\partial}{\partial \hat{x}_2}\!\left( \frac{h_3 h_1}{h_2}\, \frac{\partial \hat{\phi}}{\partial \hat{x}_2} \right) + \frac{\partial}{\partial \hat{x}_3}\!\left( \frac{h_1 h_2}{h_3}\, \frac{\partial \hat{\phi}}{\partial \hat{x}_3} \right) \right], \tag{6.35} \]
where we have set φ̂(x̂₁, x̂₂, x̂₃) = φ(x₁(x̂₁, x̂₂, x̂₃), x₂(x̂₁, x̂₂, x̂₃), x₃(x̂₁, x̂₂, x̂₃)).
6.3.5 Curl of a vector field
Let v(x) be a vector-valued field and let w(x) be its curl so that
\[ w = \mathrm{curl}\, v \quad\text{or equivalently}\quad w_i = e_{ijk}\, v_{k,j}. \]
Let
\[ W = \nabla v \quad\text{or equivalently}\quad W_{ij} = v_{i,j}. \]
Then as we have shown in an earlier chapter,
\[ w_i = e_{ijk}\, W_{kj}, \qquad \hat{w}_i = e_{ijk}\, \hat{W}_{kj}. \]
Consequently from Sub-section 6.3.2,
\[ \hat{w}_i = \sum_{j=1}^{3} \frac{1}{h_j}\, e_{ijk}\left[ \frac{\partial \hat{v}_k}{\partial \hat{x}_j} + \left( \hat{e}_k \cdot \frac{\partial \hat{e}_n}{\partial \hat{x}_j} \right) \hat{v}_n \right]. \]
By (6.30), the second term within the brackets sums out to zero unless n = j. Thus, using the third of (6.30), one arrives at
\[ \hat{w}_i = \sum_{j,k=1}^{3} \frac{1}{h_j}\, e_{ijk}\left[ \frac{\partial \hat{v}_k}{\partial \hat{x}_j} - \frac{1}{h_k}\, \frac{\partial h_j}{\partial \hat{x}_k}\, \hat{v}_j \right]. \]
This yields
\[ \begin{aligned} \hat{w}_1 &= \frac{1}{h_2 h_3}\left[ \frac{\partial}{\partial \hat{x}_2}(h_3\, \hat{v}_3) - \frac{\partial}{\partial \hat{x}_3}(h_2\, \hat{v}_2) \right], \\ \hat{w}_2 &= \frac{1}{h_3 h_1}\left[ \frac{\partial}{\partial \hat{x}_3}(h_1\, \hat{v}_1) - \frac{\partial}{\partial \hat{x}_1}(h_3\, \hat{v}_3) \right], \\ \hat{w}_3 &= \frac{1}{h_1 h_2}\left[ \frac{\partial}{\partial \hat{x}_1}(h_2\, \hat{v}_2) - \frac{\partial}{\partial \hat{x}_2}(h_1\, \hat{v}_1) \right]. \end{aligned} \tag{6.36} \]
6.3.6 Divergence of a symmetric 2-tensor field
Let S(x) be a symmetric 2-tensor field and let v(x) denote its divergence:
\[ v = \mathrm{div}\, S, \quad S = S^T, \quad\text{or equivalently}\quad v_i = S_{ij,j}, \quad S_{ij} = S_{ji}. \]
The components of v and S in the two bases {e₁, e₂, e₃} and {ê₁, ê₂, ê₃} are related in the usual way by
\[ \hat{v}_i = Q_{ip}\, v_p, \qquad S_{ij} = Q_{mi} Q_{nj}\, \hat{S}_{mn}, \]
and consequently
\[ \hat{v}_i = Q_{ip}\, S_{pj,j} = Q_{ip}\, \frac{\partial}{\partial x_j}(Q_{mp} Q_{nj}\, \hat{S}_{mn}) = Q_{ip}\, \frac{\partial}{\partial \hat{x}_k}(Q_{mp} Q_{nj}\, \hat{S}_{mn})\, \frac{\partial \hat{x}_k}{\partial x_j}. \]
By using (6.24), the orthogonality of the matrix [Q], (6.30) and Ŝ_{ij} = Ŝ_{ji} we obtain
\[ \begin{aligned} \hat{v}_1 = \frac{1}{h_1 h_2 h_3}\Big[ & \frac{\partial}{\partial \hat{x}_1}(h_2 h_3\, \hat{S}_{11}) + \frac{\partial}{\partial \hat{x}_2}(h_3 h_1\, \hat{S}_{12}) + \frac{\partial}{\partial \hat{x}_3}(h_1 h_2\, \hat{S}_{13}) \Big] \\ & + \frac{1}{h_1 h_2}\, \frac{\partial h_1}{\partial \hat{x}_2}\, \hat{S}_{12} + \frac{1}{h_1 h_3}\, \frac{\partial h_1}{\partial \hat{x}_3}\, \hat{S}_{13} - \frac{1}{h_1 h_2}\, \frac{\partial h_2}{\partial \hat{x}_1}\, \hat{S}_{22} - \frac{1}{h_1 h_3}\, \frac{\partial h_3}{\partial \hat{x}_1}\, \hat{S}_{33}, \end{aligned} \tag{6.37} \]
with analogous expressions for v̂₂ and v̂₃.
Equations (6.32)–(6.37) provide the fundamental expressions for the basic tensor-analytic quantities that we will need. Observe that they reduce to their classical rectangular cartesian forms in the special case x_i = x̂_i (in which case h₁ = h₂ = h₃ = 1).
6.3.7 Differential elements of volume
When evaluating a volume integral over a region D, we sometimes find it convenient to transform it from the form
\[ \int_D \big(\ldots\big)\, dx_1\, dx_2\, dx_3 \]
into an equivalent expression of the form
\[ \int_D \big(\ldots\big)\, d\hat{x}_1\, d\hat{x}_2\, d\hat{x}_3. \]
In order to do this we must relate dx₁ dx₂ dx₃ to dx̂₁ dx̂₂ dx̂₃. By (6.22),
\[ \det[Q] = \frac{1}{h_1 h_2 h_3}\, \det[J]. \]
However, since [Q] is a proper orthogonal matrix its determinant takes the value +1. Therefore det[J] = h₁h₂h₃, and so the basic relation dx₁ dx₂ dx₃ = det[J] dx̂₁ dx̂₂ dx̂₃ leads to
\[ dx_1\, dx_2\, dx_3 = h_1 h_2 h_3\, d\hat{x}_1\, d\hat{x}_2\, d\hat{x}_3. \tag{6.38} \]
6.3.8 Differential elements of area
Let dÂ₁ denote a differential element of (vector) area on an x̂₁-coordinate surface, so that dÂ₁ = (dx̂₂ ∂x/∂x̂₂) × (dx̂₃ ∂x/∂x̂₃). In view of (6.21) this leads to dÂ₁ = (dx̂₂ h₂ ê₂) × (dx̂₃ h₃ ê₃) = h₂h₃ dx̂₂ dx̂₃ ê₁. Thus the differential elements of (scalar) area on the x̂₁-, x̂₂- and x̂₃-coordinate surfaces are given by
\[ d\hat{A}_1 = h_2 h_3\, d\hat{x}_2\, d\hat{x}_3, \qquad d\hat{A}_2 = h_3 h_1\, d\hat{x}_3\, d\hat{x}_1, \qquad d\hat{A}_3 = h_1 h_2\, d\hat{x}_1\, d\hat{x}_2, \tag{6.39} \]
respectively.
6.4 Some Examples of Orthogonal Curvilinear Coordinate Systems
Circular Cylindrical Coordinates (r, θ, z):
\[ x_1 = r\cos\theta, \quad x_2 = r\sin\theta, \quad x_3 = z, \qquad (r, \theta, z) \in [0, \infty) \times [0, 2\pi) \times (-\infty, \infty); \qquad h_r = 1, \quad h_\theta = r, \quad h_z = 1. \tag{6.40} \]
Spherical Coordinates (r, θ, φ):
\[ x_1 = r\sin\theta\cos\phi, \quad x_2 = r\sin\theta\sin\phi, \quad x_3 = r\cos\theta, \qquad (r, \theta, \phi) \in [0, \infty) \times [0, \pi] \times [0, 2\pi); \qquad h_r = 1, \quad h_\theta = r, \quad h_\phi = r\sin\theta. \tag{6.41} \]
Elliptical Cylindrical Coordinates (ξ, η, z):
\[ x_1 = a\cosh\xi\cos\eta, \quad x_2 = a\sinh\xi\sin\eta, \quad x_3 = z, \qquad (\xi, \eta, z) \in [0, \infty) \times (-\pi, \pi] \times (-\infty, \infty); \qquad h_\xi = h_\eta = a\sqrt{\sinh^2\xi + \sin^2\eta}, \quad h_z = 1. \tag{6.42} \]
Parabolic Cylindrical Coordinates (u, v, w):
\[ x_1 = \tfrac{1}{2}(u^2 - v^2), \quad x_2 = uv, \quad x_3 = w, \qquad (u, v, w) \in (-\infty, \infty) \times [0, \infty) \times (-\infty, \infty); \qquad h_u = h_v = \sqrt{u^2 + v^2}, \quad h_w = 1. \tag{6.43} \]
6.5 Worked Examples.
Example 6.1: Let E(x) be a symmetric 2-tensor field that is related to a vector field u(x) through
\[ E = \tfrac{1}{2}\big(\nabla u + \nabla u^T\big). \]
In a cartesian coordinate system this can be written equivalently as
\[ E_{ij} = \frac{1}{2}\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right). \]
Establish the analogous formulas in a general orthogonal curvilinear coordinate system.
Solution: Using the result from Sub-section 6.3.2 and the formulas (6.29) for ê_k · (∂ê_i/∂x̂_j), one finds after elementary simplification that
\[ \begin{aligned} \hat{E}_{11} &= \frac{1}{h_1}\, \frac{\partial \hat{u}_1}{\partial \hat{x}_1} + \frac{1}{h_1 h_2}\, \frac{\partial h_1}{\partial \hat{x}_2}\, \hat{u}_2 + \frac{1}{h_1 h_3}\, \frac{\partial h_1}{\partial \hat{x}_3}\, \hat{u}_3, \qquad \hat{E}_{22} = \ldots, \quad \hat{E}_{33} = \ldots, \\ \hat{E}_{12} &= \hat{E}_{21} = \frac{1}{2}\left[ \frac{h_1}{h_2}\, \frac{\partial}{\partial \hat{x}_2}\!\left( \frac{\hat{u}_1}{h_1} \right) + \frac{h_2}{h_1}\, \frac{\partial}{\partial \hat{x}_1}\!\left( \frac{\hat{u}_2}{h_2} \right) \right], \qquad \hat{E}_{23} = \ldots, \quad \hat{E}_{31} = \ldots. \end{aligned} \tag{i} \]
Example 6.2: Consider a symmetric 2-tensor field S(x) and a vector field b(x) that satisfy the equation
\[ \mathrm{div}\, S + b = o. \]
In a cartesian coordinate system this can be written equivalently as
\[ \frac{\partial S_{ij}}{\partial x_j} + b_i = 0. \]
Establish the analogous formulas in a general orthogonal curvilinear coordinate system.
Solution: From the results in Sub-section 6.3.6 we have
\[ \begin{aligned} \frac{1}{h_1 h_2 h_3}\Big[ & \frac{\partial}{\partial \hat{x}_1}(h_2 h_3\, \hat{S}_{11}) + \frac{\partial}{\partial \hat{x}_2}(h_3 h_1\, \hat{S}_{12}) + \frac{\partial}{\partial \hat{x}_3}(h_1 h_2\, \hat{S}_{13}) \Big] \\ & + \frac{1}{h_1 h_2}\, \frac{\partial h_1}{\partial \hat{x}_2}\, \hat{S}_{12} + \frac{1}{h_1 h_3}\, \frac{\partial h_1}{\partial \hat{x}_3}\, \hat{S}_{13} - \frac{1}{h_1 h_2}\, \frac{\partial h_2}{\partial \hat{x}_1}\, \hat{S}_{22} - \frac{1}{h_1 h_3}\, \frac{\partial h_3}{\partial \hat{x}_1}\, \hat{S}_{33} + \hat{b}_1 = 0, \end{aligned} \tag{i} \]
etc., where b̂_i = Q_{ip} b_p.
Example 6.3: Consider circular cylindrical coordinates (x̂₁, x̂₂, x̂₃) = (r, θ, z), which are related to (x₁, x₂, x₃) through
\[ x_1 = r\cos\theta, \quad x_2 = r\sin\theta, \quad x_3 = z, \qquad 0 \le r < \infty, \quad 0 \le \theta < 2\pi, \quad -\infty < z < \infty. \]

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express
the following quanties,
(a) grad f
6.5. WORKED EXAMPLES. 115
x
1
x
2
x
3
r
θ
z
e
r
e
θ
e
z
e
r
= cos θ e
1
+ sin θ e
2
,
e
θ
= −sin θ e
1
+ cos θ e
2
,
e
z
= e
3
,









Figure 6.3: Cylindrical coordinates (r, θ, z) and the associated local curvilinear orthonormal basis
{e
r
, e
θ
, e
z
}.
(b) ∇
2
f
(c) div u
(d) curl u
(e)
1
2

∇u +∇u
T

and
(f) div S
in this coordinate system.
Solution: We simply need to specialize the basic results established in Section 6.3.
In the present case we have
\[ (\hat{x}_1, \hat{x}_2, \hat{x}_3) = (r, \theta, z) \tag{i} \]
and the coordinate mapping (6.5) takes the particular form
\[ x_1 = r\cos\theta, \quad x_2 = r\sin\theta, \quad x_3 = z. \tag{ii} \]
The matrix [∂x_i/∂x̂_j] therefore specializes to
\[ \begin{pmatrix} \partial x_1/\partial r & \partial x_1/\partial\theta & \partial x_1/\partial z \\ \partial x_2/\partial r & \partial x_2/\partial\theta & \partial x_2/\partial z \\ \partial x_3/\partial r & \partial x_3/\partial\theta & \partial x_3/\partial z \end{pmatrix} = \begin{pmatrix} \cos\theta & -r\sin\theta & 0 \\ \sin\theta & r\cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \]
and the scale moduli are
\[ h_r = \sqrt{\left(\frac{\partial x_1}{\partial r}\right)^2 + \left(\frac{\partial x_2}{\partial r}\right)^2 + \left(\frac{\partial x_3}{\partial r}\right)^2} = 1, \quad h_\theta = \sqrt{\left(\frac{\partial x_1}{\partial\theta}\right)^2 + \left(\frac{\partial x_2}{\partial\theta}\right)^2 + \left(\frac{\partial x_3}{\partial\theta}\right)^2} = r, \quad h_z = \sqrt{\left(\frac{\partial x_1}{\partial z}\right)^2 + \left(\frac{\partial x_2}{\partial z}\right)^2 + \left(\frac{\partial x_3}{\partial z}\right)^2} = 1. \tag{iii} \]
We use the natural notation
\[ (u_r, u_\theta, u_z) = (\hat{u}_1, \hat{u}_2, \hat{u}_3) \tag{iv} \]
for the components of a vector field,
\[ (S_{rr}, S_{r\theta}, \ldots) = (\hat{S}_{11}, \hat{S}_{12}, \ldots) \tag{v} \]
for the components of a 2-tensor field, and
\[ (e_r, e_\theta, e_z) = (\hat{e}_1, \hat{e}_2, \hat{e}_3) \tag{vi} \]
for the unit vectors associated with the local cylindrical coordinate system.
From (ii),
\[ x = (r\cos\theta)\, e_1 + (r\sin\theta)\, e_2 + z\, e_3, \]
and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local cylindrical coordinate system:
\[ e_r = \cos\theta\, e_1 + \sin\theta\, e_2, \qquad e_\theta = -\sin\theta\, e_1 + \cos\theta\, e_2, \qquad e_z = e_3, \]
which, in this case, could have been obtained geometrically from Figure 6.3.
(a) Substituting (i) and (iii) into (6.32) gives
\[ \nabla f = \frac{\partial \hat{f}}{\partial r}\, e_r + \frac{1}{r}\, \frac{\partial \hat{f}}{\partial\theta}\, e_\theta + \frac{\partial \hat{f}}{\partial z}\, e_z, \]
where we have set f̂(r, θ, z) = f(x₁, x₂, x₃).
(b) Substituting (i) and (iii) into (6.35) gives
\[ \nabla^2 f = \frac{\partial^2 \hat{f}}{\partial r^2} + \frac{1}{r}\, \frac{\partial \hat{f}}{\partial r} + \frac{1}{r^2}\, \frac{\partial^2 \hat{f}}{\partial\theta^2} + \frac{\partial^2 \hat{f}}{\partial z^2}, \]
where again f̂(r, θ, z) = f(x₁, x₂, x₃).
(c) Substituting (i) and (iii) into (6.34) gives
\[ \mathrm{div}\, u = \frac{\partial u_r}{\partial r} + \frac{u_r}{r} + \frac{1}{r}\, \frac{\partial u_\theta}{\partial\theta} + \frac{\partial u_z}{\partial z}. \]
(d) Substituting (i) and (iii) into (6.36) gives
\[ \mathrm{curl}\, u = \left( \frac{1}{r}\, \frac{\partial u_z}{\partial\theta} - \frac{\partial u_\theta}{\partial z} \right) e_r + \left( \frac{\partial u_r}{\partial z} - \frac{\partial u_z}{\partial r} \right) e_\theta + \left( \frac{\partial u_\theta}{\partial r} + \frac{u_\theta}{r} - \frac{1}{r}\, \frac{\partial u_r}{\partial\theta} \right) e_z. \]
(e) Set E = ½(∇u + ∇uᵀ). Substituting (i) and (iii) into (6.33) enables us to calculate ∇u, whence we can calculate E. Writing the cylindrical components Ê_{ij} of E as (E_{rr}, E_{rθ}, E_{rz}, ...) = (Ê₁₁, Ê₁₂, Ê₁₃, ...), one finds
\[ \begin{aligned} E_{rr} &= \frac{\partial u_r}{\partial r}, \qquad E_{\theta\theta} = \frac{1}{r}\, \frac{\partial u_\theta}{\partial\theta} + \frac{u_r}{r}, \qquad E_{zz} = \frac{\partial u_z}{\partial z}, \\ E_{r\theta} &= \frac{1}{2}\left( \frac{1}{r}\, \frac{\partial u_r}{\partial\theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r} \right), \qquad E_{\theta z} = \frac{1}{2}\left( \frac{\partial u_\theta}{\partial z} + \frac{1}{r}\, \frac{\partial u_z}{\partial\theta} \right), \qquad E_{zr} = \frac{1}{2}\left( \frac{\partial u_z}{\partial r} + \frac{\partial u_r}{\partial z} \right). \end{aligned} \]
Alternatively these could have been obtained from the results of Example 6.1.
(f) Finally, substituting (i) and (iii) into (6.37) gives
\[ \begin{aligned} \mathrm{div}\, S = \Big( & \frac{\partial S_{rr}}{\partial r} + \frac{1}{r}\, \frac{\partial S_{r\theta}}{\partial\theta} + \frac{\partial S_{rz}}{\partial z} + \frac{S_{rr} - S_{\theta\theta}}{r} \Big)\, e_r \\ + \Big( & \frac{\partial S_{r\theta}}{\partial r} + \frac{1}{r}\, \frac{\partial S_{\theta\theta}}{\partial\theta} + \frac{\partial S_{\theta z}}{\partial z} + \frac{2 S_{r\theta}}{r} \Big)\, e_\theta \\ + \Big( & \frac{\partial S_{zr}}{\partial r} + \frac{1}{r}\, \frac{\partial S_{\theta z}}{\partial\theta} + \frac{\partial S_{zz}}{\partial z} + \frac{S_{zr}}{r} \Big)\, e_z. \end{aligned} \]
Alternatively these could have been obtained from the results of Example 6.2.
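As a spot check of these specializations, the sketch below compares the cylindrical Laplacian in (b) with the cartesian Laplacian for an arbitrarily chosen f.

```python
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3')

f = x1**2*x3 + x2*sp.exp(x3)                       # an arbitrary scalar field
lap_cart = sum(sp.diff(f, xi, 2) for xi in (x1, x2, x3))

# rewrite f and its cartesian Laplacian in cylindrical coordinates
sub = {x1: r*sp.cos(th), x2: r*sp.sin(th), x3: z}
fh = f.subs(sub)

lap_cyl = (sp.diff(fh, r, 2) + sp.diff(fh, r)/r
           + sp.diff(fh, th, 2)/r**2 + sp.diff(fh, z, 2))
print(sp.simplify(lap_cyl - lap_cart.subs(sub)))    # 0
```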
Example 6.4: Consider spherical coordinates (x̂₁, x̂₂, x̂₃) = (r, θ, φ), which are related to (x₁, x₂, x₃) through
\[ x_1 = r\sin\theta\cos\phi, \quad x_2 = r\sin\theta\sin\phi, \quad x_3 = r\cos\theta, \qquad 0 \le r < \infty, \quad 0 \le \theta \le \pi, \quad 0 \le \phi < 2\pi. \]

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express
the following quanties,
118 CHAPTER 6. ORTHOGONAL CURVILINEAR COORDINATES
x
1
x
2
x
3
r
θ
φ
e
r
= (sin θ cos φ) e
1
+ (sin θ sin φ) e
2
+ cos θ e
3
,
e
θ
= (cos θ cos φ) e
1
+ (cos θ sin φ) e
2
− sin θ e
3
,
e
φ
= −sin φ e
1
+ cos φ e
2
,









e
r
e
θ
e
φ
Figure 6.4: Spherical coordinates (r, θ, φ) and the associated local curvilinear orthonormal basis {e
r
, e
θ
, e
φ
}.
(a) grad f
(b) div u
(c) ∇
2
f
(d) curl u
(e)
1
2

∇u +∇u
T

and
(f) div S
in this coordinate system.
Solution: We simply need to specialize the basic results established in Section 6.3.
In the present case we have
\[ (\hat{x}_1, \hat{x}_2, \hat{x}_3) = (r, \theta, \phi), \tag{i} \]
and the coordinate mapping (6.5) takes the particular form
\[ x_1 = r\sin\theta\cos\phi, \quad x_2 = r\sin\theta\sin\phi, \quad x_3 = r\cos\theta. \tag{ii} \]
The matrix [∂x_i/∂x̂_j] therefore specializes to
\[ \begin{pmatrix} \partial x_1/\partial r & \partial x_1/\partial\theta & \partial x_1/\partial\phi \\ \partial x_2/\partial r & \partial x_2/\partial\theta & \partial x_2/\partial\phi \\ \partial x_3/\partial r & \partial x_3/\partial\theta & \partial x_3/\partial\phi \end{pmatrix} = \begin{pmatrix} \sin\theta\cos\phi & r\cos\theta\cos\phi & -r\sin\theta\sin\phi \\ \sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\ \cos\theta & -r\sin\theta & 0 \end{pmatrix}, \]
and the scale moduli are
\[ h_r = \sqrt{\left(\frac{\partial x_1}{\partial r}\right)^2 + \left(\frac{\partial x_2}{\partial r}\right)^2 + \left(\frac{\partial x_3}{\partial r}\right)^2} = 1, \quad h_\theta = \sqrt{\left(\frac{\partial x_1}{\partial\theta}\right)^2 + \left(\frac{\partial x_2}{\partial\theta}\right)^2 + \left(\frac{\partial x_3}{\partial\theta}\right)^2} = r, \quad h_\phi = \sqrt{\left(\frac{\partial x_1}{\partial\phi}\right)^2 + \left(\frac{\partial x_2}{\partial\phi}\right)^2 + \left(\frac{\partial x_3}{\partial\phi}\right)^2} = r\sin\theta. \tag{iii} \]
We use the natural notation
\[ (u_r, u_\theta, u_\phi) = (\hat{u}_1, \hat{u}_2, \hat{u}_3) \tag{iv} \]
for the components of a vector field,
\[ (S_{rr}, S_{r\theta}, S_{r\phi}, \ldots) = (\hat{S}_{11}, \hat{S}_{12}, \hat{S}_{13}, \ldots) \tag{v} \]
for the components of a 2-tensor field, and
\[ (e_r, e_\theta, e_\phi) = (\hat{e}_1, \hat{e}_2, \hat{e}_3) \tag{vi} \]
for the unit vectors associated with the local spherical coordinate system.
From (ii),
\[ x = (r\sin\theta\cos\phi)\, e_1 + (r\sin\theta\sin\phi)\, e_2 + (r\cos\theta)\, e_3, \]
and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local spherical coordinate system:
\[ e_r = (\sin\theta\cos\phi)\, e_1 + (\sin\theta\sin\phi)\, e_2 + \cos\theta\, e_3, \qquad e_\theta = (\cos\theta\cos\phi)\, e_1 + (\cos\theta\sin\phi)\, e_2 - \sin\theta\, e_3, \qquad e_\phi = -\sin\phi\, e_1 + \cos\phi\, e_2, \]
which, in this case, could have been obtained geometrically from Figure 6.4.
(a) Substituting (i) and (iii) into (6.32) gives
\[ \nabla f = \frac{\partial \hat{f}}{\partial r}\, e_r + \frac{1}{r}\, \frac{\partial \hat{f}}{\partial\theta}\, e_\theta + \frac{1}{r\sin\theta}\, \frac{\partial \hat{f}}{\partial\phi}\, e_\phi, \]
where we have set f̂(r, θ, φ) = f(x₁, x₂, x₃).
(b) Substituting (i) and (iii) into (6.34) gives
\[ \mathrm{div}\, u = \frac{1}{r^2\sin\theta}\left[ \frac{\partial}{\partial r}(r^2\sin\theta\, u_r) + \frac{\partial}{\partial\theta}(r\sin\theta\, u_\theta) + \frac{\partial}{\partial\phi}(r\, u_\phi) \right]. \]
(c) Substituting (i) and (iii) into (6.35) gives
\[ \nabla^2 f = \frac{\partial^2 \hat{f}}{\partial r^2} + \frac{2}{r}\, \frac{\partial \hat{f}}{\partial r} + \frac{1}{r^2}\, \frac{\partial^2 \hat{f}}{\partial\theta^2} + \frac{\cot\theta}{r^2}\, \frac{\partial \hat{f}}{\partial\theta} + \frac{1}{r^2\sin^2\theta}\, \frac{\partial^2 \hat{f}}{\partial\phi^2}, \]
where again f̂(r, θ, φ) = f(x₁, x₂, x₃).
(d) Substituting (i) and (iii) into (6.36) gives
\[ \mathrm{curl}\, u = \frac{1}{r^2\sin\theta}\left[ \frac{\partial}{\partial\theta}(r\sin\theta\, u_\phi) - \frac{\partial}{\partial\phi}(r\, u_\theta) \right] e_r + \frac{1}{r\sin\theta}\left[ \frac{\partial u_r}{\partial\phi} - \frac{\partial}{\partial r}(r\sin\theta\, u_\phi) \right] e_\theta + \frac{1}{r}\left[ \frac{\partial}{\partial r}(r\, u_\theta) - \frac{\partial u_r}{\partial\theta} \right] e_\phi. \]
(e) Set E = ½(∇u + ∇uᵀ). We substitute (i) and (iii) into (6.33) to calculate ∇u, from which one can calculate E. Writing the spherical components Ê_{ij} of E as (E_{rr}, E_{rθ}, E_{rφ}, ...) = (Ê₁₁, Ê₁₂, Ê₁₃, ...), one finds
\[ \begin{aligned} E_{rr} &= \frac{\partial u_r}{\partial r}, \qquad E_{\theta\theta} = \frac{1}{r}\, \frac{\partial u_\theta}{\partial\theta} + \frac{u_r}{r}, \qquad E_{\phi\phi} = \frac{1}{r\sin\theta}\, \frac{\partial u_\phi}{\partial\phi} + \frac{u_r}{r} + \frac{\cot\theta}{r}\, u_\theta, \\ E_{r\theta} &= \frac{1}{2}\left( \frac{1}{r}\, \frac{\partial u_r}{\partial\theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r} \right), \qquad E_{\theta\phi} = \frac{1}{2}\left( \frac{1}{r\sin\theta}\, \frac{\partial u_\theta}{\partial\phi} + \frac{1}{r}\, \frac{\partial u_\phi}{\partial\theta} - \frac{\cot\theta}{r}\, u_\phi \right), \\ E_{\phi r} &= \frac{1}{2}\left( \frac{1}{r\sin\theta}\, \frac{\partial u_r}{\partial\phi} + \frac{\partial u_\phi}{\partial r} - \frac{u_\phi}{r} \right). \end{aligned} \]
Alternatively these could have been obtained from the results of Example 6.1.
(f) Finally, substituting (i) and (iii) into (6.37) gives
\[ \begin{aligned} \mathrm{div}\, S = \Big( & \frac{\partial S_{rr}}{\partial r} + \frac{1}{r}\, \frac{\partial S_{r\theta}}{\partial\theta} + \frac{1}{r\sin\theta}\, \frac{\partial S_{r\phi}}{\partial\phi} + \frac{1}{r}\big[ 2S_{rr} - S_{\theta\theta} - S_{\phi\phi} + \cot\theta\, S_{r\theta} \big] \Big)\, e_r \\ + \Big( & \frac{\partial S_{r\theta}}{\partial r} + \frac{1}{r}\, \frac{\partial S_{\theta\theta}}{\partial\theta} + \frac{1}{r\sin\theta}\, \frac{\partial S_{\theta\phi}}{\partial\phi} + \frac{1}{r}\big[ 3S_{r\theta} + \cot\theta\,(S_{\theta\theta} - S_{\phi\phi}) \big] \Big)\, e_\theta \\ + \Big( & \frac{\partial S_{r\phi}}{\partial r} + \frac{1}{r}\, \frac{\partial S_{\theta\phi}}{\partial\theta} + \frac{1}{r\sin\theta}\, \frac{\partial S_{\phi\phi}}{\partial\phi} + \frac{1}{r}\big[ 3S_{r\phi} + 2\cot\theta\, S_{\theta\phi} \big] \Big)\, e_\phi. \end{aligned} \]
Alternatively these could have been obtained from the results of Example 6.2.
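A similar spot check applies here. The sketch below verifies the divergence formula in (b) for the field u = x, whose only nonzero spherical component is u_r = r and whose divergence is 3.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# u = x has spherical physical components (u_r, u_theta, u_phi) = (r, 0, 0)
ur, uth, uph = r, 0, 0

div_u = (sp.diff(r**2*sp.sin(th)*ur, r)
         + sp.diff(r*sp.sin(th)*uth, th)
         + sp.diff(r*uph, ph)) / (r**2*sp.sin(th))
print(sp.simplify(div_u))     # 3
```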
Example 6.5: Show that the matrix [Q] defined by (6.22) is a proper orthogonal matrix.
Proof: From (6.22),
\[ Q_{ij} = \frac{1}{h_i}\, \frac{\partial x_j}{\partial \hat{x}_i}, \]
and therefore
\[ Q_{ik} Q_{jk} = \frac{1}{h_i h_j}\, \frac{\partial x_k}{\partial \hat{x}_i}\, \frac{\partial x_k}{\partial \hat{x}_j} = \frac{1}{h_i h_j}\, g_{ij} = \delta_{ij}, \]
where in the penultimate step we have used (6.14) and in the ultimate step we have used (6.16). Thus [Q] is an orthogonal matrix. Next, from (6.22) and (6.7),
\[ Q_{ij} = \frac{1}{h_i}\, \frac{\partial x_j}{\partial \hat{x}_i} = \frac{1}{h_i}\, J_{ji}, \]
where J_{ij} = ∂x_i/∂x̂_j are the elements of the Jacobian matrix. Thus
\[ \det[Q] = \frac{1}{h_1 h_2 h_3}\, \det[J] > 0, \]
where the inequality is a consequence of the inequalities in (6.8) and (6.17). Hence [Q] is proper orthogonal.
References
1. H. Reismann and P.S. Pawlik, Elasticity: Theory and Applications, Wiley, 1980.
2. L.A. Segel, Mathematics Applied to Continuum Mechanics, Dover, New York, 1987.
3. E. Sternberg, (Unpublished) Lecture Notes for AM 135: Elasticity, California Institute of Technology,
Pasadena, California, 1976.
Chapter 7
Calculus of Variations
7.1 Introduction.
Numerous problems in physics can be formulated as mathematical problems in optimization.
For example in optics, Fermat’s principle states that the path taken by a ray of light in
propagating from one point to another is the path that minimizes the travel time. Most
equilibrium theories of mechanics involve finding a configuration of a system that minimizes
its energy. For example a heavy cable that hangs under gravity between two fixed pegs
adopts the shape that, from among all possible shapes, minimizes the gravitational potential
energy of the system. Or, if we subject a straight beam to a compressive load, its deformed
configuration is the shape which minimizes the total energy of the system. Depending on
the load, the energy minimizing configuration may be straight or bent (buckled). If we dip a
(non-planar) wire loop into soapy water, the soap film that forms across the loop is the one
that minimizes the surface energy (which under most circumstances equals minimizing the
surface area of the soap film). Another common problem occurs in geodesics where, given
some surface and two points on it, we want to find the path of shortest distance joining those
two points which lies entirely on the given surface.
In each of these problems we have a scalar-valued quantity F such as energy or time that
depends on a function φ such as the shape or path, and we want to find the function φ that
minimizes the quantity F of interest. Note that the scalar-valued function F is defined on a
set of functions. One refers to F as a functional and writes F{φ}.
As a specific example, consider the so-called Brachistochrone Problem. We are given two
points (0, 0) and (1, h) in the x, y-plane, with h > 0, that are to be joined by a smooth wire.
A bead is released from rest from the point (0, 0) and slides along the wire due to gravity.
For what shape of wire is the time of travel from (0, 0) to (1, h) least?
Figure 7.1: Curve y = φ(x) joining (0, 0) to (1, h), along which a bead slides under gravity g.
In order to formulate this problem, let y = φ(x), 0 ≤ x ≤ 1, describe a generic curve joining (0, 0) to (1, h). Let s(t) denote the distance traveled by the bead along the wire at time t so that v(t) = ds/dt is its corresponding speed. The travel time of the bead is
\[ T = \int \frac{ds}{v}, \]
where the integral is taken along the entire path. In the question posed to us, we are to find the curve, i.e. the function φ(x), which makes T a minimum. Since we are to minimize T by varying φ, it is natural to first rewrite the formula for T in a form that explicitly displays its dependency on φ.
Note first that, by elementary calculus, the arc length ds is related to dx by
\[ ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + (dy/dx)^2}\; dx = \sqrt{1 + (\phi')^2}\; dx, \]
and so we can write
\[ T = \int_0^1 \frac{\sqrt{1 + (\phi')^2}}{v}\; dx. \]
Next, we wish to express the speed v in terms of φ. If (x(t), y(t)) denote the coordinates of the bead at time t, the conservation of energy tells us that the sum of the potential and kinetic energies does not vary with time:
\[ -mg\,\phi(x(t)) + \tfrac{1}{2}\, m v^2(t) = 0, \]
where the right-hand side is the total energy at the initial instant. Solving this for v gives
\[ v = \sqrt{2 g \phi}. \]
Finally, substituting this back into the formula for the travel time gives
\[ T\{\phi\} = \int_0^1 \sqrt{\frac{1 + (\phi')^2}{2 g \phi}}\; dx. \tag{7.1} \]
Given a curve characterized by y = φ(x), this formula gives the corresponding travel time for the bead. Our task is to find, from among all such curves, the one that minimizes T{φ}.
This minimization takes place over a set of functions φ. In order to complete the formulation of the problem, we should carefully characterize this set of “admissible functions” (or “test functions”). A generic curve is described by y = φ(x), 0 ≤ x ≤ 1. Since we are only interested in curves that pass through the points (0, 0) and (1, h) we must require that φ(0) = 0, φ(1) = h. Finally, for analytical reasons we only consider curves that are continuous and have a continuous slope, i.e. φ and φ′ are both continuous on [0, 1]. Thus the set A of admissible functions that we wish to consider is
\[ A = \Big\{ \phi(\cdot)\ \Big|\ \phi: [0, 1] \to \mathbb{R},\ \phi \in C^1[0, 1],\ \phi(0) = 0,\ \phi(1) = h \Big\}. \tag{7.2} \]
Our task is to minimize T{φ} over the set A.
Remark: Since the shortest distance between two points is given by the straight line that
joins them, it is natural to wonder whether a straight line is also the curve that gives the
minimum travel time. To investigate this, consider (a) a straight line, and (b) a circular arc,
that joins (0, 0) to (1, h). Use (7.1) to calculate the travel time for each of these paths and
show that the straight line is not the path that gives the least travel time.
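This comparison is easily carried out numerically. The sketch below evaluates (7.1) by quadrature for the straight line φ(x) = hx and for the circular arc φ(x) = √(2Rx − x²), R = (1 + h²)/2, which joins (0, 0) to (1, h) with a vertical start; the endpoint singularity of the integrand is integrable, and the value h = 1 is an illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

g, h = 9.81, 1.0

def travel_time(phi, dphi):
    """T{phi} from (7.1), by numerical quadrature."""
    f = lambda x: np.sqrt((1 + dphi(x)**2) / (2*g*phi(x)))
    T, _ = quad(f, 0, 1)
    return T

# (a) straight line phi(x) = h x
T_line = travel_time(lambda x: h*x, lambda x: h)

# (b) circular arc through (0,0) and (1,h) with a vertical start
R = (1 + h**2) / 2
T_arc = travel_time(lambda x: np.sqrt(2*R*x - x**2),
                    lambda x: (R - x)/np.sqrt(2*R*x - x**2))

print(T_line, T_arc)   # the arc beats the straight line (~0.64 s vs ~0.59 s)
```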
Remark: One can consider various variants of the Brachistochrone Problem. For example, the
length of the curve joining the two points might be prescribed, in which case the minimization
is to be carried out subject to the constraint that the length is given. Or perhaps the position
of the left hand end might be prescribed as above, but the right hand end of the wire might
be allowed to lie anywhere on the vertical line through x = 1. Or, there might be some
prohibited region of the x, y-plane through which the path is disallowed from passing. And
so on.
In summary, in the simplest problem in the calculus of variations we are required to find a function φ(x) ∈ C¹[0, 1] that minimizes a functional F{φ} of the form
\[ F\{\phi\} = \int_0^1 f(x, \phi, \phi')\, dx \]
over an admissible set of test functions A. The test functions (or admissible functions) φ are subject to certain conditions including smoothness requirements; possibly (but not necessarily) boundary conditions at both ends x = 0, 1; and possibly (but not necessarily) side constraints of various forms. Other types of problems will also be encountered in what follows.
7.2 Brief review of calculus.
Perhaps it is useful to begin by reviewing the familiar question of minimization in calculus.
Consider a subset A of n-dimensional space Rⁿ and let F(x) = F(x₁, x₂, ..., xₙ) be a real-valued function defined on A. We say that x_o ∈ A is a minimizer of F if¹
\[ F(x) \ge F(x_o) \quad \text{for all } x \in A. \tag{7.3} \]
Sometimes we are only interested in finding a “local minimizer”, i.e. a point x_o that minimizes F relative to all x that are “close” to x_o. In order to speak of such a notion we must have a measure of “closeness”. Thus suppose that the vector space Rⁿ is Euclidean so that a norm is defined on Rⁿ. Then we say that x_o is a local minimizer of F if F(x) ≥ F(x_o) for all x in a neighborhood of x_o, i.e. if
\[ F(x) \ge F(x_o) \quad \text{for all } x \text{ such that } |x - x_o| < r \tag{7.4} \]
for some r > 0.
Define the function F̂(ε) for −ε₀ < ε < ε₀ by
\[ \hat{F}(\varepsilon) = F(x_0 + \varepsilon n), \tag{7.5} \]
where n is a fixed vector and ε₀ is small enough to ensure that x₀ + εn ∈ A for all ε ∈ (−ε₀, ε₀). In the presence of sufficient smoothness we can write
\[ \hat{F}(\varepsilon) - \hat{F}(0) = \hat{F}'(0)\,\varepsilon + \tfrac{1}{2}\, \hat{F}''(0)\,\varepsilon^2 + O(\varepsilon^3). \tag{7.6} \]
Since F(x₀ + εn) ≥ F(x₀) it follows that F̂(ε) ≥ F̂(0). Thus if x₀ is to be a minimizer of F it is necessary that
\[ \hat{F}'(0) = 0, \qquad \hat{F}''(0) \ge 0. \tag{7.7} \]
¹ A maximizer of F is a minimizer of −F, so we don't need to address maximizing separately from minimizing.
It is customary to use the following notation and terminology: we set
\[ \delta F(x_o, n) = \hat{F}'(0), \tag{7.8} \]
which is called the first variation of F, and similarly set
\[ \delta^2 F(x_o, n) = \hat{F}''(0), \tag{7.9} \]
which is called the second variation of F. At an interior local minimizer x₀, one necessarily must have
\[ \delta F(x_o, n) = 0 \quad\text{and}\quad \delta^2 F(x_o, n) \ge 0 \quad \text{for all unit vectors } n. \tag{7.10} \]
In the present setting of calculus, we know from (7.5), (7.8) that δF(x_o, n) = ∇F(x_o) · n and that δ²F(x_o, n) = (∇∇F(x_o))n · n. Here the vector field ∇F is the gradient of F and the tensor field ∇∇F is the gradient of ∇F. Therefore (7.10) is equivalent to the requirements that
\[ \nabla F(x_o) \cdot n = 0 \quad\text{and}\quad (\nabla\nabla F(x_o))\, n \cdot n \ge 0 \quad \text{for all unit vectors } n, \tag{7.11} \]
or equivalently
\[ \sum_{i=1}^{n} \left.\frac{\partial F}{\partial x_i}\right|_{x = x_0} n_i = 0 \quad\text{and}\quad \sum_{i=1}^{n} \sum_{j=1}^{n} \left.\frac{\partial^2 F}{\partial x_i\, \partial x_j}\right|_{x = x_0} n_i n_j \ge 0, \tag{7.12} \]
whence we must have ∇F(x_o) = o and the Hessian ∇∇F(x_o) must be positive semi-definite.
Remark: It is worth recalling that a function need not have a minimizer. For example, the function F₁(x) = x defined on A₁ = (−∞, ∞) is unbounded as x → ±∞. Another example is given by the function F₂(x) = x defined on A₂ = (−1, 1), noting that F₂ ≥ −1 on A₂; however, while the value of F₂ can get as close as one wishes to −1, it cannot actually achieve the value −1 since there is no x ∈ A₂ at which F₂(x) = −1; note that −1 ∉ A₂. Finally, consider the function F₃(x) defined on A₃ = [−1, 1] where F₃(x) = 1 for −1 ≤ x ≤ 0 and F₃(x) = x for 0 < x ≤ 1; the value of F₃ can get as close as one wishes to 0 but cannot achieve it since F₃(0) = 1. In the first example A₁ was unbounded. In the second, A₂ was bounded but open. And in the third example A₃ was bounded and closed but the function was discontinuous on A₃. For a minimum to be guaranteed one needs A to be compact (i.e. bounded and closed) and F continuous: it can be shown that if A is compact and if F is continuous on A, then F assumes both maximum and minimum values on A.
7.3 The basic idea: necessary conditions for a minimum: δF = 0, δ²F ≥ 0.
In the calculus of variations, we are typically given a functional F defined on a function
space A, where F : A → R, and we are asked to find a function φ₀ ∈ A that minimizes F
over A: i.e. to find φ₀ ∈ A for which

    F{φ} ≥ F{φ₀} for all φ ∈ A.
Figure 7.2: Two functions φ₁ and φ₂, joining (0, a) to (1, b), that are “close” in the sense of the norm ||·||₀ but not in the sense of the norm ||·||₁.
Most often, we will be looking for a local (or relative) minimizer, i.e. for a function φ₀
that minimizes F relative to all “nearby functions”. This requires that we select a norm so
that the distance between two functions can be quantified. For a function φ in the set of
functions that are continuous on an interval [x₁, x₂], i.e. for φ ∈ C[x₁, x₂], one can define a
norm by

    ||φ||₀ = max_{x₁≤x≤x₂} |φ(x)|.

For a function φ in the set of functions that are continuous and have continuous first derivatives
on [x₁, x₂], i.e. for φ ∈ C¹[x₁, x₂], one can define a norm by

    ||φ||₁ = max_{x₁≤x≤x₂} |φ(x)| + max_{x₁≤x≤x₂} |φ′(x)|;

and so on. (Of course the norm ||φ||₀ can also be used on C¹[x₁, x₂].)
When seeking a local minimizer of a functional F we might say we want to find φ₀ for
which

    F{φ} ≥ F{φ₀} for all admissible φ such that ||φ − φ₀||₀ < r

for some r > 0. In this case the minimizer φ₀ is being compared with all admissible functions
φ whose values are close to those of φ₀ for all x₁ ≤ x ≤ x₂. Such a local minimizer is called a
strong minimizer. On the other hand, when seeking a local minimizer we might say we want
to find φ₀ for which

    F{φ} ≥ F{φ₀} for all admissible φ such that ||φ − φ₀||₁ < r

for some r > 0. In this case the minimizer is being compared with all functions whose values
and whose first derivatives are close to those of φ₀ for all x₁ ≤ x ≤ x₂. Such a local minimizer
is called a weak minimizer. A strong minimizer is automatically a weak minimizer.
Unless explicitly stated otherwise, in this Chapter we will be examining weak local extrema.
The approach for finding such extrema of a functional is essentially the same as that
used in the more familiar case of calculus reviewed in the preceding sub-section. Consider
a functional F{φ} defined on a function space A and suppose that φ₀ ∈ A minimizes F. In
order to determine φ₀ we consider the one-parameter family of admissible functions

    φ(x; ε) = φ₀(x) + ε η(x)    (7.13)

that are close to φ₀; here ε is a real variable in the range −ε₀ < ε < ε₀ and η(x) is a once
continuously differentiable function. Since φ is to be admissible, we must have φ₀ + εη ∈ A
for each ε ∈ (−ε₀, ε₀). Define a function F̂(ε) by

    F̂(ε) = F{φ₀ + εη},  −ε₀ < ε < ε₀.    (7.14)

Since φ₀ minimizes F it follows that F{φ₀ + εη} ≥ F{φ₀} or equivalently F̂(ε) ≥ F̂(0).
Therefore ε = 0 minimizes F̂(ε). The first and second variations of F are defined by
δF{φ₀, η} = F̂′(0) and δ²F{φ₀, η} = F̂″(0) respectively, and so if φ₀ minimizes F, then
it is necessary that

    δF{φ₀, η} = 0,  δ²F{φ₀, η} ≥ 0.    (7.15)
These are necessary conditions on a minimizer φ₀. We cannot go further in general.
In any specific problem, such as those in the subsequent sections, the necessary condition
δF{φ₀, η} = 0 can be further simplified by exploiting the fact that it must hold for all
admissible η. This allows one to eliminate η, leading to a condition (or conditions) that only
involves the minimizer φ₀.
Remark: Note that when η is independent of ε the functions φ₀(x) and φ₀(x) + εη(x), and
their derivatives, are close to each other for small ε. On the other hand the functions φ₀(x)
and φ₀(x) + ε sin(x/ε) are close to each other but their derivatives are not close to each other.
Throughout these notes we will consider functions η that are independent of ε and so, as
noted previously, we will be restricting attention exclusively to weak minimizers.
7.4 Application of the necessary condition δF = 0 to the basic problem. Euler equation.

7.4.1 The basic problem. Euler equation.
Consider the following class of problems: let A be the set of all continuously differentiable
functions φ(x) defined for 0 ≤ x ≤ 1 with φ(0) = a, φ(1) = b:

    A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1], φ(0) = a, φ(1) = b }.    (7.16)

Let f(x, y, z) be a given function, defined and smooth for all real x, y, z. Define a functional
F{φ}, for every φ ∈ A, by

    F{φ} = ∫₀¹ f(x, φ(x), φ′(x)) dx.    (7.17)
We wish to find a function φ ∈ A which minimizes F{φ}.
Figure 7.3: The minimizer φ₀ and a neighboring admissible function φ₀ + εη, both joining (0, a) to (1, b).
Suppose that φ₀(x) ∈ A is a minimizer of F, so that F{φ} ≥ F{φ₀} for all φ ∈ A. In
order to determine φ₀ we consider the one-parameter family of admissible functions φ(x; ε) =
φ₀(x) + ε η(x), where ε is a real variable in the range −ε₀ < ε < ε₀ and η(x) is a once
continuously differentiable function on [0, 1]; see Figure 7.3. Since φ must be admissible we
need φ₀ + εη ∈ A for each ε. Therefore we must have φ(0; ε) = a and φ(1; ε) = b, which in
turn requires that

    η(0) = η(1) = 0.    (7.18)
Pick any function η(x) with the property (7.18) and fix it. Define the function F̂(ε) =
F{φ₀ + εη} so that

    F̂(ε) = F{φ₀ + εη} = ∫₀¹ f(x, φ₀ + εη, φ₀′ + εη′) dx.    (7.19)
We know from the analysis of the preceding section that a necessary condition for φ₀ to
minimize F is that

    δF{φ₀, η} = F̂′(0) = 0.    (7.20)
On using the chain rule, we find F̂′(ε) from (7.19) to be

    F̂′(ε) = ∫₀¹ [ (∂f/∂y)(x, φ₀ + εη, φ₀′ + εη′) η + (∂f/∂z)(x, φ₀ + εη, φ₀′ + εη′) η′ ] dx,

and so (7.20) leads to

    δF{φ₀, η} = F̂′(0) = ∫₀¹ [ (∂f/∂y)(x, φ₀, φ₀′) η + (∂f/∂z)(x, φ₀, φ₀′) η′ ] dx = 0.    (7.21)
Thus far we have simply repeated the general analysis of the preceding section in the
context of the particular functional (7.17). Our goal is to find φ₀ and so we must eliminate η
from (7.21). To do this we rearrange the terms in (7.21) into a convenient form and exploit
the fact that (7.21) must hold for all functions η that satisfy (7.18).
In order to do this we proceed as follows: Integrating the second term in (7.21) by parts
gives

    ∫₀¹ (∂f/∂z) η′ dx = [ η (∂f/∂z) ]₀¹ − ∫₀¹ d/dx(∂f/∂z) η dx.

However by (7.18) we have η(0) = η(1) = 0 and therefore the first term on the right-hand
side drops out. Thus (7.21) reduces to

    ∫₀¹ [ ∂f/∂y − d/dx(∂f/∂z) ] η dx = 0.    (7.22)
Though we have viewed η as fixed up to this point, we recognize that the above derivation
is valid for all once continuously differentiable functions η(x) which have η(0) = η(1) = 0.
Therefore (7.22) must hold for all such functions.
Lemma: The following is a basic result from calculus: Let p(x) be a continuous function on [0, 1] and suppose
that

    ∫₀¹ p(x) n(x) dx = 0

for all continuous functions n(x) with n(0) = n(1) = 0. Then,

    p(x) = 0 for 0 ≤ x ≤ 1.
In view of this Lemma we conclude that the integrand of (7.22) must vanish, and therefore
obtain the differential equation

    d/dx [ (∂f/∂z)(x, φ₀, φ₀′) ] − (∂f/∂y)(x, φ₀, φ₀′) = 0 for 0 ≤ x ≤ 1.    (7.23)
This is a differential equation for φ₀, which together with the boundary conditions

    φ₀(0) = a,  φ₀(1) = b,    (7.24)

provides the mathematical problem governing the minimizer φ₀(x). The differential equation
(7.23) is referred to as the Euler equation (sometimes referred to as the Euler-Lagrange
equation) associated with the functional (7.17).
Notation: In order to avoid the (precise though) cumbersome notation above, we shall drop
the subscript “0” from the minimizing function φ₀; moreover, we shall write the Euler equation
(7.23) as

    d/dx [ (∂f/∂φ′)(x, φ, φ′) ] − (∂f/∂φ)(x, φ, φ′) = 0,    (7.25)

where, in carrying out the partial differentiation in (7.25), one treats x, φ and φ′ as if they
were independent variables.
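The partial-differentiation recipe in (7.25) is mechanical enough to be automated. The following sketch (assuming SymPy; the integrand f = (φ′)²/2 + φ²/2 is an arbitrary illustration, not one of the examples in these notes) uses SymPy's built-in Euler–Lagrange routine:

import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
phi = sp.Function('phi')

# Illustrative (assumed) integrand: f(x, phi, phi') = (phi')^2/2 + phi^2/2
f = sp.Derivative(phi(x), x)**2/2 + phi(x)**2/2

# euler_equations applies (7.25): treat phi and phi' as independent,
# form d/dx(∂f/∂phi') - ∂f/∂phi, and set the result to zero.
print(euler_equations(f, phi(x), x))
# -> [Eq(phi(x) - Derivative(phi(x), (x, 2)), 0)], i.e. phi'' = phi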
7.4.2 An example. The Brachistochrone Problem.
Consider the Brachistochrone Problem formulated in the first example of Section 7.1. Here
we have

    f(x, φ, φ′) = √( (1 + (φ′)²) / (2gφ) )
and we wish to find the function φ₀(x) that minimizes

    F{φ} = ∫₀¹ f(x, φ(x), φ′(x)) dx = ∫₀¹ √( (1 + [φ′(x)]²) / (2gφ(x)) ) dx

over the class of functions φ(x) that are continuous and have continuous first derivatives on
[0, 1], and satisfy the boundary conditions φ(0) = 0, φ(1) = h. Treating x, φ and φ′ as if they
are independent variables and differentiating the function f(x, φ, φ′) gives:
    ∂f/∂φ = −√( (1 + (φ′)²)/(2g) ) · 1/(2φ^{3/2}),    ∂f/∂φ′ = φ′ / √( 2gφ(1 + (φ′)²) ),
and therefore the Euler equation (7.23) specializes to

    d/dx [ φ′ / √( φ(1 + (φ′)²) ) ] + √(1 + (φ′)²) / (2φ^{3/2}) = 0,  0 < x < 1,    (7.26)

with associated boundary conditions

    φ(0) = 0,  φ(1) = h.    (7.27)
The minimizer φ(x) therefore must satisfy the boundary-value problem consisting of the
second-order (nonlinear) ordinary differential equation (7.26) and the boundary conditions
(7.27).
The rest of this sub-section has nothing to do with the calculus of variations. It is simply
concerned with solving the boundary value problem (7.26), (7.27). We can write the
differential equation as

    [ φ′ / √( φ(1 + (φ′)²) ) ] · d/dx [ φ′ / √( φ(1 + (φ′)²) ) ] + φ′/(2φ²) = 0,

which can be immediately integrated to give

    1/[φ′(x)]² = φ(x) / (c² − φ(x))    (7.28)

where c is a constant of integration that is to be determined.
It is most convenient to find the path of fastest descent in parametric form, x = x(θ), φ =
φ(θ), θ₁ < θ < θ₂, and to this end we adopt the substitution

    φ = (c²/2)(1 − cos θ) = c² sin²(θ/2),  θ₁ < θ < θ₂.    (7.29)
Differentiating this with respect to x gives

    φ′(x) = (c²/2) sin θ θ′(x),

so that, together with (7.28) and (7.29), this leads to

    dx/dθ = (c²/2)(1 − cos θ),

which integrates to give

    x = (c²/2)(θ − sin θ) + c₁,  θ₁ < θ < θ₂.    (7.30)
We now turn to the boundary conditions. The requirement φ(x) = 0 at x = 0, together
with (7.29) and (7.30), gives us θ₁ = 0 and c₁ = 0. We thus have

    x = (c²/2)(θ − sin θ),  φ = (c²/2)(1 − cos θ),  0 ≤ θ ≤ θ₂.    (7.31)
The remaining boundary condition φ(x) = h at x = 1 gives the following two equations for
finding the two constants θ₂ and c:

    1 = (c²/2)(θ₂ − sin θ₂),  h = (c²/2)(1 − cos θ₂).    (7.32)
Once this pair of equations is solved for c and θ₂, then (7.31) provides the solution of the
problem. We now address the solvability of (7.32).
To this end, first, if we define the function p(θ) by

    p(θ) = (θ − sin θ)/(1 − cos θ),  0 < θ < 2π,    (7.33)

then, by dividing the first equation in (7.32) by the second, we see that θ₂ is a root of the
equation

    p(θ₂) = h⁻¹.    (7.34)

One can readily verify that the function p(θ) has the properties

    p → 0 as θ → 0+,  p → ∞ as θ → 2π−,

    dp/dθ = [ cos(θ/2) / sin³(θ/2) ] (tan(θ/2) − θ/2) > 0 for 0 < θ < 2π.
Figure 7.4: A graph of the function p(θ) defined in (7.33) versus θ. Note that given any h > 0 the equation h⁻¹ = p(θ) has a unique root θ = θ₂ ∈ (0, 2π).
Therefore it follows that as θ goes from 0 to 2π, the function p(θ) increases monotonically
from 0 to ∞; see Figure 7.4. Therefore, given any h > 0, the equation p(θ₂) = h⁻¹ can be
solved for a unique value of θ₂ ∈ (0, 2π). The value of c is then given by (7.32)₁.

Thus in summary, the path of minimum descent time is given by the curve defined in (7.31)
with the values of θ₂ and c given by (7.34) and (7.32)₁ respectively. Figure 7.5 shows that
the curve (7.31) is a cycloid – the path traversed by a point on the rim of a wheel that rolls
without slipping.
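Since p(θ) is monotone, the root θ₂ of (7.34) is easily found numerically. A sketch (assuming SciPy; the value h = 0.5 is an arbitrary example) is:

import numpy as np
from scipy.optimize import brentq

def p(theta):
    # p(theta) = (theta - sin theta)/(1 - cos theta), cf. (7.33)
    return (theta - np.sin(theta)) / (1.0 - np.cos(theta))

h = 0.5                                        # assumed end height
theta2 = brentq(lambda t: p(t) - 1.0/h, 1e-6, 2.0*np.pi - 1e-6)
c2 = 2.0 / (theta2 - np.sin(theta2))           # c^2 from (7.32)_1

# Trace the brachistochrone (7.31) parametrically:
theta = np.linspace(0.0, theta2, 200)
x_path = 0.5*c2*(theta - np.sin(theta))
y_path = 0.5*c2*(1.0 - np.cos(theta))
print(theta2, x_path[-1], y_path[-1])          # path ends at x = 1, y = h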
Figure 7.5: A cycloid x = x(θ), y = y(θ) is generated by rolling a circle of radius R along the x-axis as shown, the parameter θ having the significance of being the angle of rolling: x(θ) = R(θ − sin θ), y(θ) = R(1 − cos θ).
7.4.3 A Formalism for Deriving the Euler Equation
In order to expedite the steps involved in deriving the Euler equation, one usually uses the
following formal procedure. First, we adopt the following notation: if H{φ} is any quantity
that depends on φ, then by δH we mean²

    δH = H{φ + εη} − H{φ} up to linear terms,    (7.35)

that is,

    δH = ε [ d/dε H{φ + εη} ]|_{ε=0}.    (7.36)

For example, by δφ we mean

    δφ = (φ + εη) − (φ) = εη;    (7.37)

by δφ′ we mean

    δφ′ = (φ′ + εη′) − (φ′) = εη′ = (δφ)′;    (7.38)
by δf we mean

    δf = f(x, φ + εη, φ′ + εη′) − f(x, φ, φ′)
       = (∂f/∂φ)(x, φ, φ′) εη + (∂f/∂φ′)(x, φ, φ′) εη′
       = (∂f/∂φ) δφ + (∂f/∂φ′) δφ′;    (7.39)
and by δF, or δ∫₀¹ f dx, we mean

    δF = F{φ + εη} − F{φ} = ε [ d/dε F{φ + εη} ]|_{ε=0} = ε ∫₀¹ [ (∂f/∂φ) η + (∂f/∂φ′) η′ ] dx
       = ∫₀¹ [ (∂f/∂φ) δφ + (∂f/∂φ′) δφ′ ] dx = ∫₀¹ δf dx.    (7.40)
We refer to δφ(x) as an admissible variation. When η(0) = η(1) = 0, it follows that
δφ(0) = δφ(1) = 0.
²Note the following minor change in notation: what we call δH here is what we previously would have called ε δH.
We refer to δF as the first variation of the functional F. Observe from (7.40) that

    δF = δ∫₀¹ f dx = ∫₀¹ δf dx.    (7.41)

Finally observe that the necessary condition for a minimum that we wrote down previously
can be written as

    δF{φ, δφ} = 0 for all admissible variations δφ.    (7.42)
For purposes of illustration, let us now repeat our previous derivation of the Euler equation
using this new notation³. Given the functional F, a necessary condition for an extremum
of F is

    δF = 0

and so our task is to calculate δF:

    δF = δ∫₀¹ f dx = ∫₀¹ δf dx.

Since f = f(x, φ, φ′), this in turn leads to⁴

    δF = ∫₀¹ [ (∂f/∂φ) δφ + (∂f/∂φ′) δφ′ ] dx.

From here on we can proceed as before by setting δF = 0, integrating the second term by
parts, and using the boundary conditions and the arbitrariness of an admissible variation
δφ(x) to derive the Euler equation.

³If ever in doubt about a particular step during a calculation, always go back to the meaning of the symbols δφ, etc., or revert to using ε, η, etc.
⁴Note that the variation δ does not operate on x, since it is the function φ that is being varied and not the independent variable x. So in particular, δf = f_φ δφ + f_{φ′} δφ′ and not δf = f_x δx + f_φ δφ + f_{φ′} δφ′.

7.5 Generalizations.

7.5.1 Generalization: Free end-point; Natural boundary conditions.

Consider the following modified problem: suppose that we want to find the function φ(x)
from among all once continuously differentiable functions that makes the functional

    F{φ} = ∫₀¹ f(x, φ, φ′) dx
a minimum. Note that we do not restrict attention here to those functions that satisfy
φ(0) = a, φ(1) = b. So the set of admissible functions A is

    A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1] }.    (7.43)

Note that this class of admissible functions A is much larger than before. The functional
F{φ} is defined for all φ ∈ A by

    F{φ} = ∫₀¹ f(x, φ, φ′) dx.    (7.44)
We begin by calculating the first variation of F:

    δF = δ∫₀¹ f dx = ∫₀¹ δf dx = ∫₀¹ [ (∂f/∂φ) δφ + (∂f/∂φ′) δφ′ ] dx.    (7.45)
Integrating the last term by parts yields

    δF = ∫₀¹ [ ∂f/∂φ − d/dx(∂f/∂φ′) ] δφ dx + [ (∂f/∂φ′) δφ ]₀¹.    (7.46)
Since δF = 0 at an extremum, we must have

    ∫₀¹ [ ∂f/∂φ − d/dx(∂f/∂φ′) ] δφ dx + [ (∂f/∂φ′) δφ ]₀¹ = 0    (7.47)
for all admissible variations δφ(x). Note that the boundary term in (7.47) does not automatically
drop out now because δφ(0) and δφ(1) do not have to vanish. First restrict attention
to all variations δφ with the additional property δφ(0) = δφ(1) = 0; equation (7.47) must
necessarily hold for all such variations δφ. The boundary terms now drop out and by the
Lemma in Section 7.4.1 it follows that

    d/dx(∂f/∂φ′) − ∂f/∂φ = 0 for 0 < x < 1.    (7.48)
This is the same Euler equation as before. Next, return to (7.47) and keep (7.48) in mind.
We see that we must have

    (∂f/∂φ′)|ₓ₌₁ δφ(1) − (∂f/∂φ′)|ₓ₌₀ δφ(0) = 0    (7.49)

for all admissible variations δφ. Since δφ(0) and δφ(1) are both arbitrary (and not necessarily
zero), (7.49) requires that

    ∂f/∂φ′ = 0 at x = 0 and x = 1.    (7.50)
Equation (7.50) provides the boundary conditions to be satisfied by the extremizing function
φ(x). These boundary conditions were determined as part of the extremization; they are
referred to as natural boundary conditions in contrast to boundary conditions that are given
as part of a problem statement.
Example: Reconsider the Brachistochrone Problem analyzed previously, but now suppose
that we want to find the shape of the wire that commences from (0, 0) and ends somewhere
on the vertical line through x = 1; see Figure 7.6. The only difference between this and the first
Brachistochrone Problem is that here the set of admissible functions is

    A₂ = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1], φ(0) = 0 };

note that there is no restriction on φ at x = 1. Our task is to minimize the travel time of
the bead T{φ} over the set A₂.
Figure 7.6: Curve joining (0, 0) to an arbitrary point on the vertical line through x = 1.
The minimizer must satisfy the same Euler equation (7.26) as in the first problem, and
the same boundary condition φ(0) = 0 at the left hand end. To find the natural boundary
condition at the other end, recall that

    f(x, φ, φ′) = √( (1 + (φ′)²) / (2gφ) ).

Differentiating this gives

    ∂f/∂φ′ = φ′ / √( 2gφ(1 + (φ′)²) ),

and so by (7.50), the natural boundary condition is

    φ′ / √( 2gφ(1 + (φ′)²) ) = 0 at x = 1,
which simplifies to

    φ′(1) = 0.
7.5.2 Generalization: Higher derivatives.
The functional F{φ} considered above involved a function φ and its first derivative φ′. One
can consider functionals that involve higher derivatives of φ, for example

    F{φ} = ∫₀¹ f(x, φ, φ′, φ″) dx.
We begin with the formulation and analysis of a specific example and then turn to some
theory.
Figure 7.7: The neutral axis of a beam in reference (straight) and deformed (curved) states. The bold lines represent a cross-section of the beam in the reference and deformed states. In the classical Bernoulli-Euler theory of beams, cross-sections remain perpendicular to the neutral axis, so φ = u′; the bending moment is M = EIφ′ and the elastic energy per unit length is (1/2)Mφ′ = (1/2)EI(u″)².
Example: The Bernoulli-Euler Beam. Consider an elastic beam of length L and bending
stiffness EI, which is clamped at its left hand end. The beam carries a distributed load p(x)
along its length and a concentrated force F at the right hand end x = L; both loads act in
the −y-direction. Let u(x), 0 ≤ x ≤ L, be a geometrically admissible deflection of the beam.
Since the beam is clamped at the left hand end, this means that u(x) is any (smooth enough)
function that satisfies the geometric boundary conditions

    u(0) = 0,  u′(0) = 0;    (7.51)
the boundary condition (7.51)₁ describes the geometric condition that the beam is clamped
at x = 0 and therefore cannot deflect at that point; the boundary condition (7.51)₂ describes
the geometric condition that the beam is clamped at x = 0 and therefore cannot rotate at
the left end. The set of admissible test functions that we consider is

    A = { u(·) | u : [0, L] → R, u ∈ C⁴[0, L], u(0) = 0, u′(0) = 0 },    (7.52)

which consists of all “geometrically possible configurations”.
From elasticity theory we know that the elastic energy associated with a deformed configuration
of the beam is (1/2)EI(u″)² per unit length. Therefore the total potential energy
of the system is

    Φ{u} = ∫₀ᴸ (1/2) EI (u″(x))² dx − ∫₀ᴸ p(x) u(x) dx − F u(L),    (7.53)

where the last two terms represent the potential energy of the distributed and concentrated
loading respectively; the negative sign in front of these terms arises because the loads act
in the −y-direction while u is the deflection in the +y-direction. Note that the integrand
of the functional involves the higher derivative term u″. In addition, note that only two
boundary conditions u(0) = 0, u′(0) = 0 are given, and so we expect to derive additional
natural boundary conditions at the right hand end x = L.
The actual deflection of the beam minimizes the potential energy (7.53) over the set
(7.52). We proceed in the usual way by calculating the first variation δΦ and setting it equal
to zero: δΦ = 0. By using (7.53) this can be written explicitly as

    ∫₀ᴸ EI u″ δu″ dx − ∫₀ᴸ p δu dx − F δu(L) = 0.
Twice integrating the first term by parts leads to

    ∫₀ᴸ EI u″″ δu dx − ∫₀ᴸ p δu dx − F δu(L) − [ EIu‴ δu ]₀ᴸ + [ EIu″ δu′ ]₀ᴸ = 0.
The given boundary conditions (7.51) require that an admissible variation δu must obey
δu(0) = 0, δu′(0) = 0. Therefore the preceding equation simplifies to

    ∫₀ᴸ (EI u″″ − p) δu dx − [ EIu‴(L) + F ] δu(L) + EIu″(L) δu′(L) = 0.
Since this must hold for all admissible variations δu(x), it follows in the usual way that the
extremizing function u(x) must obey

    EI u″″(x) − p(x) = 0 for 0 < x < L,
    EI u‴(L) + F = 0,
    EI u″(L) = 0.    (7.54)

Thus the extremizer u(x) obeys the fourth order linear ordinary differential equation (7.54)₁,
the prescribed boundary conditions (7.51) and the natural boundary conditions (7.54)₂,₃.
The natural boundary condition (7.54)₂ describes the mechanical condition that the beam
carries a concentrated force F at the right hand end; and the natural boundary condition
(7.54)₃ describes the mechanical condition that the beam is free to rotate (and therefore has
zero “bending moment”) at the right hand end.
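For constant EI and a uniform load p(x) = p₀ (an assumed special case; the general problem allows variable coefficients), the boundary value problem (7.51), (7.54) has a closed form, which can be obtained for example with SymPy — a sketch, assuming SymPy's dsolve handles the initial/boundary conditions as written:

import sympy as sp

x, L, EI, p0, F = sp.symbols('x L EI p0 F', positive=True)
u = sp.Function('u')

# (7.54)_1 with constant EI and uniform load p0
ode = sp.Eq(EI*u(x).diff(x, 4) - p0, 0)
sol = sp.dsolve(ode, u(x), ics={
    u(0): 0,                                # (7.51)_1: no deflection
    u(x).diff(x).subs(x, 0): 0,             # (7.51)_2: no rotation
    u(x).diff(x, 3).subs(x, L): -F/EI,      # (7.54)_2: tip force
    u(x).diff(x, 2).subs(x, L): 0})         # (7.54)_3: zero end moment
print(sp.expand(sol.rhs))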
Exercise: Consider the functional

    F{φ} = ∫₀¹ f(x, φ, φ′, φ″) dx

defined on the set of admissible functions A consisting of functions φ that are defined and
four times continuously differentiable on [0, 1] and that satisfy the four boundary conditions

    φ(0) = φ₀,  φ′(0) = φ₀′,  φ(1) = φ₁,  φ′(1) = φ₁′.
Show that the function φ that extremizes F over the set A must satisfy the Euler equation

    ∂f/∂φ − d/dx(∂f/∂φ′) + d²/dx²(∂f/∂φ″) = 0 for 0 < x < 1

where, as before, the partial derivatives ∂f/∂φ, ∂f/∂φ′ and ∂f/∂φ″ are calculated by treating
φ, φ′ and φ″ as if they are independent variables in f(x, φ, φ′, φ″).
7.5.3 Generalization: Multiple functions.
The functional F{φ} considered above involved a single function φ and its derivatives. One
can consider functionals that involve multiple functions, for example a functional

    F{u, v} = ∫₀¹ f(x, u, u′, v, v′) dx
that involves two functions u(x) and v(x). We begin with the formulation and analysis of a
specific example and then turn to some theory.
Example: The Timoshenko Beam. Consider a beam of length L, bending stiffness⁵ EI and
shear stiffness GA. The beam is clamped at x = 0, it carries a distributed load p(x) along its
length which acts in the −y-direction, and carries a concentrated force F at the end x = L,
also in the −y-direction.

In the simplest model of a beam – the so-called Bernoulli-Euler model – the deformed
state of the beam is completely defined by the deflection u(x) of the centerline (the neutral
axis) of the beam. In that theory, shear deformations are neglected and therefore a cross-section
of the beam remains perpendicular to the neutral axis even in the deformed state.
Here we discuss a more general theory of beams, one that accounts for shear deformations.
Figure 7.8: The neutral axis of a beam in reference (straight) and deformed (curved) states. The bold lines represent a cross-section of the beam in the reference and deformed states. The thin line is perpendicular to the deformed neutral axis, so that in the classical Bernoulli-Euler theory of beams, where cross-sections remain perpendicular to the neutral axis, the thin line and the bold line would coincide. The angle between the vertical and the bold line is φ. The angle between the neutral axis and the horizontal, which equals the angle between the perpendicular to the neutral axis (the thin line) and the vertical dashed line, is u′. The decrease in the angle between the cross-section and the neutral axis is therefore u′ − φ.
In the theory considered here, a cross-section of the beam is not constrained to remain
perpendicular to the neutral axis. Thus a deformed state of the beam is characterized by two
fields: one, u(x), characterizes the deflection of the centerline of the beam at a location x,
and the second, φ(x), characterizes the rotation of the cross-section at x. (In the Bernoulli-Euler
model, φ(x) = u′(x) since for small angles, the rotation equals the slope.) The fact
that the left hand end is clamped implies that the point x = 0 cannot deflect and that the
cross-section at x = 0 cannot rotate. Thus we have the geometric boundary conditions

    u(0) = 0,  φ(0) = 0.    (7.55)

Note that the zero rotation boundary condition is φ(0) = 0 and not u′(0) = 0.

⁵E and G are the Young's modulus and shear modulus of the material, while I and A are the second moment of cross-section and the area of the cross-section respectively.
In the more accurate beam theory discussed here, the so-called Timoshenko beam theory,
one does not neglect shear deformations, and so u(x) and φ(x) are (geometrically) independent
functions. Since the shear strain is defined as the change in angle between two fibers that
are initially at right angles to each other, the shear strain in the present situation is

    γ(x) = u′(x) − φ(x);

see Figure 7.8. Observe that in the Bernoulli-Euler theory γ(x) = 0.
Figure 7.9: Basic constitutive relationships for a beam: bending moment M = EIφ′ and shear force S = GAγ.
The basic equations of elasticity tell us that the moment-curvature relation for bending
is

    M(x) = EIφ′(x)

and that the associated elastic energy per unit length of the beam, (1/2)Mφ′, is

    (1/2) EI (φ′(x))².

Similarly, we know from elasticity that the shear force-shear strain relation for a beam is⁶

    S(x) = GAγ(x)

and that the associated elastic energy per unit length of the beam, (1/2)Sγ, is

    (1/2) GA (γ(x))².
The total potential energy of the system is thus

    Φ = Φ{u, φ} = ∫₀ᴸ [ (1/2)EI(φ′(x))² + (1/2)GA(u′(x) − φ(x))² ] dx − ∫₀ᴸ p u(x) dx − F u(L),    (7.56)
where the last two terms in this expression represent the potential energy of the distributed
and concentrated loading respectively (and the negative signs arise because u is the deflection
in the +y-direction while the loadings p and F are applied in the −y-direction). We allow
for the possibility that p, EI and GA may vary along the length of the beam and therefore
might be functions of x.

The displacement and rotation fields u(x) and φ(x) associated with an equilibrium configuration
of the beam minimize the potential energy Φ{u, φ} over the admissible set A of
test functions, where we take

    A = { u(·), φ(·) | u : [0, L] → R, φ : [0, L] → R, u ∈ C²[0, L], φ ∈ C²[0, L], u(0) = 0, φ(0) = 0 }.

Note that all admissible functions are required to satisfy the geometric boundary conditions
(7.55).
To find a minimizer of Φ we calculate its first variation, which from (7.56) is

    δΦ = ∫₀ᴸ [ EIφ′ δφ′ + GA(u′ − φ)(δu′ − δφ) ] dx − ∫₀ᴸ p δu dx − F δu(L).
⁶Since the top and bottom faces of the differential element shown in Figure 7.9 are free of shear traction, we know that the element is not in a state of simple shear. Instead, the shear stress must vary with y such that it vanishes at the top and bottom. In engineering practice, this is taken into account approximately by replacing GA by κGA where the heuristic parameter κ ≈ 0.8–0.9.
Integrating the terms involving δu′ and δφ′ by parts gives

    δΦ = [ EIφ′ δφ ]₀ᴸ − ∫₀ᴸ d/dx(EIφ′) δφ dx
       + [ GA(u′ − φ) δu ]₀ᴸ − ∫₀ᴸ d/dx[ GA(u′ − φ) ] δu dx − ∫₀ᴸ GA(u′ − φ) δφ dx
       − ∫₀ᴸ p δu dx − F δu(L).
Finally, on using the facts that an admissible variation must satisfy δu(0) = 0 and δφ(0) = 0,
and collecting the like terms, the preceding equation leads to

    δΦ = EIφ′(L) δφ(L) + [ GA( u′(L) − φ(L) ) − F ] δu(L)
       − ∫₀ᴸ [ d/dx(EIφ′) + GA(u′ − φ) ] δφ(x) dx
       − ∫₀ᴸ [ d/dx( GA(u′ − φ) ) + p ] δu(x) dx.
    (7.57)
At a minimizer, we have δΦ = 0 for all admissible variations. Since the variations
δu(x), δφ(x) are arbitrary on 0 < x < L and since δu(L) and δφ(L) are also arbitrary,
it follows from (7.57) that the field equations

    d/dx( EIφ′ ) + GA(u′ − φ) = 0,  0 < x < L,
    d/dx( GA(u′ − φ) ) + p = 0,  0 < x < L,    (7.58)

and the natural boundary conditions

    EIφ′(L) = 0,  GA( u′(L) − φ(L) ) = F    (7.59)
must hold.

Thus in summary, an equilibrium configuration of the beam is described by the deflection
u(x) and rotation φ(x) that satisfy the differential equations (7.58) and the boundary
conditions (7.55), (7.59). [Remark: Can you recover the Bernoulli-Euler theory from the
Timoshenko theory in the limit as the shear rigidity GA → ∞? The sketch below explores
this.]
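As a partial answer to the remark, consider the assumed special case of constant EI, GA and p = 0. The field equations (7.58) can then be integrated in closed form, and taking GA → ∞ recovers the Bernoulli-Euler deflection. A SymPy sketch:

import sympy as sp

x, L, EI, GA, F = sp.symbols('x L EI GA F', positive=True)

# (7.58)_2 with p = 0 gives GA*(u' - phi) = const; (7.59)_2 fixes it to F.
# (7.58)_1 then reads d/dx(EI*phi') = -F, so phi' = F*(L - x)/EI by (7.59)_1.
phi = sp.integrate(F*(L - x)/EI, (x, 0, x))    # with phi(0) = 0, cf. (7.55)
u = sp.integrate(phi + F/GA, (x, 0, x))        # u' = phi + F/GA, u(0) = 0
print(sp.expand(u))             # F*(L*x**2/2 - x**3/6)/EI + F*x/GA
print(sp.limit(u, GA, sp.oo))   # shear term vanishes: Bernoulli-Euler limit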
Exercise: Consider a smooth function f(x, y₁, y₂, . . . , yₙ, z₁, z₂, . . . , zₙ) defined for all x, y₁, y₂, . . . , yₙ,
z₁, . . . , zₙ. Let φ₁(x), φ₂(x), . . . , φₙ(x) be n once-continuously differentiable functions on [0, 1]
with φᵢ(0) = aᵢ, φᵢ(1) = bᵢ. Let F be the functional defined by

    F{φ₁, φ₂, . . . , φₙ} = ∫₀¹ f(x, φ₁, φ₂, . . . , φₙ, φ₁′, φ₂′, . . . , φₙ′) dx    (7.60)

on the set of all such admissible functions. Show that the functions φ₁(x), φ₂(x), . . . , φₙ(x)
that extremize F must necessarily satisfy the n Euler equations

    d/dx( ∂f/∂φᵢ′ ) − ∂f/∂φᵢ = 0 for 0 < x < 1,  (i = 1, 2, . . . , n).    (7.61)
7.5.4 Generalization: End point of extremal lying on a curve.
Consider the set A of all functions that describe curves in the x, y-plane that commence from
a given point (0, a) and end at some point on the curve G(x, y) = 0. We wish to minimize a
functional F{φ} over this set of functions.
Figure 7.10: Curve y = φ(x) joining (0, a) to an arbitrary point on the given curve G(x, y) = 0; a neighboring curve y = φ(x) + δφ(x) meets G = 0 at abscissa x_R + δx_R rather than x_R.
Suppose that φ(x) ∈ A is a minimizer of F. Let x = x_R be the abscissa of the point at
which the curve y = φ(x) intersects the curve G(x, y) = 0. Observe that x_R is not known a
priori and is to be determined along with φ. Moreover, note that the abscissa of the point
at which a neighboring curve defined by y = φ(x) + δφ(x) intersects the curve G = 0 is not
x_R but x_R + δx_R; see Figure 7.10.

At the minimizer,

    F{φ} = ∫₀^{x_R} f(x, φ, φ′) dx
and at a neighboring test function

    F{φ + δφ} = ∫₀^{x_R+δx_R} f(x, φ + δφ, φ′ + δφ′) dx.
Therefore on calculating the first variation δF, which equals the linearized form of F{φ +
δφ} − F{φ}, we find

    δF = ∫₀^{x_R+δx_R} [ f(x, φ, φ′) + f_φ(x, φ, φ′) δφ + f_{φ′}(x, φ, φ′) δφ′ ] dx − ∫₀^{x_R} f(x, φ, φ′) dx

where we have set f_φ = ∂f/∂φ and f_{φ′} = ∂f/∂φ′. This leads to
    δF = ∫_{x_R}^{x_R+δx_R} f(x, φ, φ′) dx + ∫₀^{x_R} [ f_φ(x, φ, φ′) δφ + f_{φ′}(x, φ, φ′) δφ′ ] dx

which in turn reduces to

    δF = f( x_R, φ(x_R), φ′(x_R) ) δx_R + ∫₀^{x_R} [ f_φ δφ + f_{φ′} δφ′ ] dx.
Thus setting the first variation δF equal to zero gives

    f( x_R, φ(x_R), φ′(x_R) ) δx_R + ∫₀^{x_R} [ f_φ δφ + f_{φ′} δφ′ ] dx = 0.
After integrating the last term by parts we get

    f( x_R, φ(x_R), φ′(x_R) ) δx_R + [ f_{φ′} δφ ]₀^{x_R} + ∫₀^{x_R} [ f_φ − d/dx f_{φ′} ] δφ dx = 0
which, on using the fact that δφ(0) = 0, reduces to

    f( x_R, φ(x_R), φ′(x_R) ) δx_R + f_{φ′}( x_R, φ(x_R), φ′(x_R) ) δφ(x_R) + ∫₀^{x_R} [ f_φ − d/dx f_{φ′} ] δφ dx = 0.    (7.62)
First limit attention to the subset of all test functions that terminate at the same point
(x_R, φ(x_R)) as the minimizer. In this case δx_R = 0 and δφ(x_R) = 0 and so the first two
terms in (7.62) vanish. Since this specialized version of equation (7.62) must hold for all such
variations δφ(x), this leads to the Euler equation

    f_φ − d/dx f_{φ′} = 0,  0 ≤ x ≤ x_R.    (7.63)
We now return to arbitrary admissible test functions. Substituting (7.63) into (7.62) gives

    f( x_R, φ(x_R), φ′(x_R) ) δx_R + f_{φ′}( x_R, φ(x_R), φ′(x_R) ) δφ(x_R) = 0    (7.64)
which must hold for all admissible δx_R and δφ(x_R). It is important to observe that since
admissible test curves must end on the curve G = 0, the quantities δx_R and δφ(x_R) are not
independent of each other. Thus (7.64) does not hold for all δx_R and δφ(x_R); only for those
that are consistent with this geometric requirement. The requirement that the minimizing
curve and the neighboring test curve terminate on the curve G(x, y) = 0 implies that

    G(x_R, φ(x_R)) = 0,  G( x_R + δx_R, φ(x_R + δx_R) + δφ(x_R + δx_R) ) = 0.
Note that linearization gives

    G( x_R + δx_R, φ(x_R + δx_R) + δφ(x_R + δx_R) )
      = G( x_R + δx_R, φ(x_R) + φ′(x_R)δx_R + δφ(x_R) )
      = G(x_R, φ(x_R)) + G_x(x_R, φ(x_R)) δx_R + G_y(x_R, φ(x_R)) [ φ′(x_R)δx_R + δφ(x_R) ]
      = G(x_R, φ(x_R)) + [ G_x(x_R, φ(x_R)) + φ′(x_R)G_y(x_R, φ(x_R)) ] δx_R + G_y(x_R, φ(x_R)) δφ(x_R),

where we have set G_x = ∂G/∂x and G_y = ∂G/∂y. Setting δG = G( x_R + δx_R, φ(x_R + δx_R) +
δφ(x_R + δx_R) ) − G(x_R, φ(x_R)) = 0 thus leads to the following relation between the variations
δx_R and δφ(x_R):

    [ G_x(x_R, φ(x_R)) + φ′(x_R)G_y(x_R, φ(x_R)) ] δx_R + G_y(x_R, φ(x_R)) δφ(x_R) = 0.    (7.65)
Thus (7.64) must hold for all δx_R and δφ(x_R) that satisfy (7.65). This implies that⁷

    f( x_R, φ(x_R), φ′(x_R) ) − λ[ G_x(x_R, φ(x_R)) + φ′(x_R)G_y(x_R, φ(x_R)) ] = 0,
    f_{φ′}( x_R, φ(x_R), φ′(x_R) ) − λG_y(x_R, φ(x_R)) = 0,

for some constant λ (referred to as a Lagrange multiplier). We can use the second equation
above to simplify the first equation, which then leads to the pair of equations

    f( x_R, φ(x_R), φ′(x_R) ) − φ′(x_R) f_{φ′}( x_R, φ(x_R), φ′(x_R) ) − λG_x(x_R, φ(x_R)) = 0,
    f_{φ′}( x_R, φ(x_R), φ′(x_R) ) − λG_y(x_R, φ(x_R)) = 0.    (7.66)
⁷It may be helpful to recall from calculus that if we are to minimize a function I(ε₁, ε₂), we must satisfy the condition dI = (∂I/∂ε₁)dε₁ + (∂I/∂ε₂)dε₂ = 0. But if this minimization is carried out subject to the side constraint J(ε₁, ε₂) = 0 then we must respect the side condition dJ = (∂J/∂ε₁)dε₁ + (∂J/∂ε₂)dε₂ = 0. Under these circumstances, one finds that one must require the conditions ∂I/∂ε₁ = λ∂J/∂ε₁, ∂I/∂ε₂ = λ∂J/∂ε₂ where the Lagrange multiplier λ is unknown and is also to be determined. The constraint equation J = 0 provides the extra condition required for this purpose.
Equation (7.66) provides two natural boundary conditions at the right hand end x = x_R.

In summary: an extremal φ(x) must satisfy the differential equation (7.63) on 0 ≤
x ≤ x_R, the boundary condition φ = a at x = 0, the two natural boundary conditions
(7.66) at x = x_R, and the equation G(x_R, φ(x_R)) = 0. (Note that the presence of the
additional unknown λ is compensated for by the imposition of the additional condition
G(x_R, φ(x_R)) = 0.)
Example: Suppose that G(x, y) = c₁x + c₂y + c₃ and that we are to find the curve of shortest
length that commences from (0, a) and ends on G = 0.

Since ds = √(dx² + dy²) = √(1 + (φ′)²) dx, we are to minimize the functional

    F = ∫₀^{x_R} √(1 + (φ′)²) dx.
Thus

    f(x, φ, φ′) = √(1 + (φ′)²),  f_φ(x, φ, φ′) = 0 and f_{φ′}(x, φ, φ′) = φ′/√(1 + (φ′)²).    (7.67)
On using (7.67), the Euler equation (7.63) can be integrated immediately to give

    φ′(x) = constant for 0 ≤ x ≤ x_R.

The boundary condition at the left hand end is

    φ(0) = a,
while the boundary conditions (7.66) at the right hand end give

    1/√(1 + (φ′(x_R))²) = λc₁,    φ′(x_R)/√(1 + (φ′(x_R))²) = λc₂.

Finally the condition G(x_R, φ(x_R)) = 0 requires that

    c₁x_R + c₂φ(x_R) + c₃ = 0.
Solving the preceding equations leads to the minimizer

    φ(x) = (c₂/c₁)x + a for 0 ≤ x ≤ −c₁(ac₂ + c₃)/(c₁² + c₂²).
7.6 Constrained Minimization
7.6.1 Integral constraints.
Consider a problem of the following general form: find admissible functions φ₁(x), φ₂(x) that
minimize

    F{φ₁, φ₂} = ∫₀¹ f(x, φ₁(x), φ₂(x), φ₁′(x), φ₂′(x)) dx    (7.68)

subject to the constraint

    G{φ₁, φ₂} = ∫₀¹ g(x, φ₁(x), φ₂(x), φ₁′(x), φ₂′(x)) dx = 0.    (7.69)
For reasons of clarity we shall return to the more detailed approach where we introduce
parameters ε₁, ε₂ and functions η₁(x), η₂(x), rather than following the formal approach using
variations δφ₁(x), δφ₂(x). Accordingly, suppose that the pair φ₁(x), φ₂(x) is the minimizer. By
evaluating F and G on a family of neighboring admissible functions φ₁(x) + ε₁η₁(x), φ₂(x) +
ε₂η₂(x) we have

    F̂(ε₁, ε₂) = F{φ₁(x) + ε₁η₁(x), φ₂(x) + ε₂η₂(x)},
    Ĝ(ε₁, ε₂) = G{φ₁(x) + ε₁η₁(x), φ₂(x) + ε₂η₂(x)} = 0.    (7.70)
If we begin by keeping η₁ and η₂ fixed, this is a classical minimization problem for a function
of two variables: we are to minimize the function F̂(ε₁, ε₂) with respect to the variables
ε₁ and ε₂, subject to the constraint Ĝ(ε₁, ε₂) = 0. A necessary condition for this is that
dF̂(ε₁, ε₂) = 0, i.e. that

    dF̂ = (∂F̂/∂ε₁) dε₁ + (∂F̂/∂ε₂) dε₂ = 0,    (7.71)
for all dε₁, dε₂ that are consistent with the constraint. Because of the constraint, dε₁ and dε₂
cannot be varied independently. Instead the constraint requires that they be related by

    dĜ = (∂Ĝ/∂ε₁) dε₁ + (∂Ĝ/∂ε₂) dε₂ = 0.    (7.72)
If we didn’t have the constraint, then (7.71) would imply the usual requirements ∂
ˆ
F/∂ε
1
=

ˆ
F/∂ε
2
= 0. However when the constraint equation (7.72) holds, (7.71) only requires that

ˆ
F
∂ε
1
= λ

ˆ
G
∂ε
1
,

ˆ
F
∂ε
2
= λ

ˆ
G
∂ε
2
, (7.73)
for some constant λ, or equivalently

    ∂/∂ε₁ (F̂ − λĜ) = 0,    ∂/∂ε₂ (F̂ − λĜ) = 0.    (7.74)

Therefore minimizing F̂ subject to the constraint Ĝ = 0 is equivalent to minimizing F̂ − λĜ
without regard to the constraint; λ is known as a Lagrange multiplier. Proceeding from
here on leads to the Euler equation associated with F − λG. The presence of the additional
unknown parameter λ is balanced by the availability of the constraint equation G = 0. This
equivalence is illustrated in finite dimensions by the sketch below.
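The following sketch (assuming SciPy; the functions F̂ and Ĝ are hypothetical two-variable examples) minimizes F̂ subject to Ĝ = 0 and verifies the proportionality (7.73) of the gradients:

import numpy as np
from scipy.optimize import minimize

# Hypothetical example: Fhat = e1^2 + e2^2, Ghat = e1 + e2 - 1
Fhat = lambda e: e[0]**2 + e[1]**2
Ghat = lambda e: e[0] + e[1] - 1.0

res = minimize(Fhat, x0=[0.0, 0.0],
               constraints={'type': 'eq', 'fun': Ghat})
e = res.x
print(e)      # ~ (0.5, 0.5): the constrained minimizer
# grad Fhat = (2*e1, 2*e2) ~ (1, 1) while grad Ghat = (1, 1),
# so (7.73) holds with Lagrange multiplier lambda = 1.
print(2*e)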
Example: Consider a heavy inextensible cable of mass per unit length m that hangs under
gravity. The two ends of the cable are held at the same vertical height, a distance 2H apart.
The cable has a given length L. We know from physics that the cable adopts a shape that
minimizes the potential energy. We are asked to determine this shape.
Figure: A hanging cable y = φ(x), −H ≤ x ≤ H, suspended under gravity g between supports at x = ±H.
Let y = φ(x), −H ≤ x ≤ H, describe an admissible shape of the cable. The potential
energy of the cable is determined by integrating mgφ with respect to the arc length s along
the cable which, since ds = √(dx² + dy²) = √(1 + (φ′)²) dx, is given by

    V{φ} = ∫₀ᴸ mgφ ds = mg ∫_{−H}^{H} φ √(1 + (φ′)²) dx.    (7.75)
Since the cable is inextensible, its length

    ℓ{φ} = ∫₀ᴸ ds = ∫_{−H}^{H} √(1 + (φ′)²) dx    (7.76)

must equal L. Therefore we are asked to find a function φ(x), with φ(−H) = φ(H), that
minimizes V{φ} subject to the constraint ℓ{φ} = L. According to the theory developed
above, this function must satisfy the Euler equation associated with the functional V{φ} −
λℓ{φ} where the Lagrange multiplier λ is a constant. The resulting boundary value problem
together with the constraint ℓ = L yields the shape of the cable φ(x).
Calculating the first variation of V − λmgℓ, where the constant λ is a Lagrange multiplier,
leads to the Euler equation

    d/dx [ (φ − λ) φ′/√(1 + (φ′)²) ] − √(1 + (φ′)²) = 0,  −H < x < H.

This can be integrated once to yield

    φ′ = √( (φ − λ)²/c² − 1 )

where c is a constant of integration. Integrating this again leads to

    φ(x) = c cosh[(x + d)/c] + λ,  −H < x < H,

where d is a second constant of integration. By symmetry, we must have φ′(0) = 0 and
therefore d = 0. Thus
    φ(x) = c cosh(x/c) + λ,  −H < x < H.    (7.77)

The constant λ in (7.77) is simply a reference height. For example we could take the x-axis
to pass through the two pegs, in which case φ(±H) = 0; then λ = −c cosh(H/c) and so

    φ(x) = c [ cosh(x/c) − cosh(H/c) ],  −H < x < H.    (7.78)

Substituting (7.78) into the constraint condition ℓ{φ} = L, with ℓ given by (7.76), yields

    L = 2c sinh(H/c).    (7.79)

Thus in summary, if equation (7.79) can be solved for c, then (7.78) gives the equation
describing the shape of the cable.
All that remains is to examine the solvability of (7.79). To this end set z = H/c and
µ = L/(2H). Then we must solve µz = sinh z where µ > 1 is a constant. (The requirement
µ > 1 follows from the physical necessity that the distance between the pegs, 2H, be less
than the length of the cable, L.) One can show that as z increases from 0 to ∞, the function
sinh z − µz starts from the value 0, decreases monotonically to some finite negative value at
some z = z* > 0, and then increases monotonically to ∞. Thus for each µ > 1 the function
sinh z − µz vanishes at some unique positive value of z. Consequently (7.79) has a unique
root c > 0. The sketch below carries out this root-finding numerically.
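A numerical sketch of that root-finding (assuming SciPy; H = 1 and L = 3 are arbitrary example values satisfying µ = L/(2H) > 1):

import numpy as np
from scipy.optimize import brentq

H, Lcable = 1.0, 3.0                 # half-span and cable length (assumed)
mu = Lcable / (2.0*H)                # mu > 1 is required

# Solve (7.79) via z = H/c: sinh(z) = mu*z has a unique positive root.
z = brentq(lambda z: np.sinh(z) - mu*z, 1e-6, 20.0)
c = H / z

# Shape (7.78): phi(x) = c*(cosh(x/c) - cosh(H/c)), -H <= x <= H
x = np.linspace(-H, H, 201)
phi = c*(np.cosh(x/c) - np.cosh(H/c))
print(c, phi[100])                   # sag of the cable at mid-span x = 0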
7.6.2 Algebraic constraints
Now consider a problem of the following general type: find a pair of admissible functions
φ₁(x), φ₂(x) that minimizes

    ∫₀¹ f(x, φ₁, φ₂, φ₁′, φ₂′) dx

subject to the algebraic constraint

    g(x, φ₁(x), φ₂(x)) = 0 for 0 ≤ x ≤ 1.

One can show that a necessary condition is that the minimizer should satisfy the Euler
equations associated with f − λg. In this problem the Lagrange multiplier λ may be a function
of x.
Example: Consider a conical surface characterized by

    g(x₁, x₂, x₃) = x₁² + x₂² − R²(x₃) = 0,  R(x₃) = x₃ tan α,  x₃ > 0.

Let P = (p₁, p₂, p₃) and Q = (q₁, q₂, q₃), q₃ > p₃, be two points on this surface. A smooth
wire lies entirely on the conical surface and joins the points P and Q. A bead slides along
the wire under gravity, beginning at rest from P. From among all such wires, we are to find
the one that gives the minimum travel time.
Figure 7.11: A curve that joins the points (p₁, p₂, p₃) and (q₁, q₂, q₃) and lies on the conical surface x₁² + x₂² − x₃² tan²α = 0.
Suppose that the wire can be described parametrically by x₁ = φ₁(x₃), x₂ = φ₂(x₃) for
p₃ ≤ x₃ ≤ q₃. (Not all of the permissible curves can be described this way, and so by using
this characterization we are limiting ourselves to a subset of all the permitted curves.) Since
the curve has to lie on the conical surface it is necessary that

    g(φ₁(x₃), φ₂(x₃), x₃) = 0,  p₃ ≤ x₃ ≤ q₃.    (7.80)
The travel time is found by integrating ds/v along the path. The arc length ds along the
path is given by

    ds = √( dx₁² + dx₂² + dx₃² ) = √( (φ₁′)² + (φ₂′)² + 1 ) dx₃.

The conservation of energy tells us that (1/2)mv²(t) − mgx₃(t) = −mgp₃, or

    v = √( 2g(x₃ − p₃) ).
Therefore the travel time is

    T{φ₁, φ₂} = ∫_{p₃}^{q₃} √( (φ₁′)² + (φ₂′)² + 1 ) / √( 2g(x₃ − p₃) ) dx₃.

Our task is to minimize T{φ₁, φ₂} over the set of admissible functions

    A = { (φ₁, φ₂) | φᵢ : [p₃, q₃] → R, φᵢ ∈ C²[p₃, q₃], φᵢ(p₃) = pᵢ, φᵢ(q₃) = qᵢ, i = 1, 2 },
subject to the constraint

    g(φ₁(x₃), φ₂(x₃), x₃) = 0,  p₃ ≤ x₃ ≤ q₃.

According to the theory developed, the solution is given by solving the Euler equations
associated with f − λ(x₃)g where

    f(x₃, φ₁, φ₂, φ₁′, φ₂′) = √( (φ₁′)² + (φ₂′)² + 1 ) / √( 2g(x₃ − p₃) ) and g(x₁, x₂, x₃) = x₁² + x₂² − x₃² tan²α,

subject to the prescribed conditions at the ends and the constraint g(x₁, x₂, x₃) = 0.
7.6.3 Differential constraints
Now consider a problem of the following general type: find a pair of admissible functions
φ₁(x), φ₂(x) that minimizes

    ∫₀¹ f(x, φ₁, φ₂, φ₁′, φ₂′) dx

subject to the differential equation constraint

    g(x, φ₁(x), φ₂(x), φ₁′(x), φ₂′(x)) = 0 for 0 ≤ x ≤ 1.

Suppose that the constraint is not integrable, i.e. suppose that there does not exist a function
h(x, φ₁(x), φ₂(x)) such that g = dh/dx. (In dynamics, such constraints are called non-holonomic.)
One can show that it is necessary that the minimizer satisfy the Euler equations
associated with f − λg. In these problems, the Lagrange multiplier λ may be a function of
x.
Example: Determine functions φ₁(x) and φ₂(x) that minimize

    ∫₀¹ f(x, φ₁, φ₁′, φ₂′) dx

over an admissible set of functions subject to the non-holonomic constraint

    g(x, φ₁, φ₂, φ₁′, φ₂′) = φ₂ − φ₁′ = 0 for 0 ≤ x ≤ 1.    (7.81)
According to the theory above, the minimizers satisfy the Euler equations

    d/dx( ∂h/∂φ₁′ ) − ∂h/∂φ₁ = 0,    d/dx( ∂h/∂φ₂′ ) − ∂h/∂φ₂ = 0 for 0 < x < 1,    (7.82)

where h = f − λg. On substituting for f and g, these Euler equations reduce to

    d/dx( ∂f/∂φ₁′ + λ ) − ∂f/∂φ₁ = 0,    d/dx( ∂f/∂φ₂′ ) + λ = 0 for 0 < x < 1.    (7.83)
Thus the functions φ₁(x), φ₂(x), λ(x) are determined from the three differential equations
(7.81), (7.83).
Remark: Note, by substituting the constraint into the integrand of the functional, that we can
equivalently pose this problem as one for determining the function φ₁(x) that minimizes

    ∫₀¹ f(x, φ₁, φ₁′, φ₁″) dx

over an admissible set of functions.
7.7 Piecewise smooth minimizers. Weierstrass-Erdmann corner conditions.
In order to motivate the discussion to follow, first consider the problem of minimizing the
functional

    F{φ} = ∫₀¹ ((φ′)² − 1)² dx    (7.84)

over functions φ with φ(0) = φ(1) = 0.
This is apparently a problem of the classical type where, in the present case, we are
to minimize the integral of f(x, φ, φ′) = ((φ′)² − 1)² with respect to x over the interval
[0, 1]. Assuming that the class of admissible functions consists of those that are C¹[0, 1] and satisfy
φ(0) = φ(1) = 0, the minimizer must necessarily satisfy the Euler equation d/dx(∂f/∂φ′) −
(∂f/∂φ) = 0. In the present case this specializes to 2[(φ′)² − 1](2φ′) = constant for 0 ≤ x ≤ 1,
which in turn gives φ′(x) = constant for 0 ≤ x ≤ 1. On enforcing the boundary conditions
φ(0) = φ(1) = 0, this gives φ(x) = 0 for 0 ≤ x ≤ 1. This is an extremizer of F{φ} over
the class of admissible functions under consideration. It is readily seen from (7.84) that the
value of F at this particular function φ(x) = 0 is F = 1.
Note from (7.84) that F ≥ 0. It is natural to wonder whether there is a function φ*(x)
that gives F{φ*} = 0. If so, φ* would be a minimizer. If there is such a function φ*, we know
that it cannot belong to the class of admissible functions considered above, since if it did,
we would have found it from the preceding calculation. Therefore if there is a function φ* of
this type, it does not belong to the set of functions A. The functions in A were required to
be C¹[0, 1] and to vanish at the two ends x = 0 and x = 1. Since φ* ∉ A it must not satisfy
one or both of these two conditions. The problem statement requires that the boundary
conditions must hold. Therefore it must be true that φ* is not as smooth as C¹[0, 1].

If there is a function φ* such that F{φ*} = 0, it follows from the nonnegative character
of the integrand in (7.84) that the integrand itself should vanish almost everywhere in [0, 1].
This requires that φ*′(x) = ±1 almost everywhere in [0, 1]. The piecewise linear function

    φ*(x) = x for 0 ≤ x ≤ 1/2,    φ*(x) = 1 − x for 1/2 ≤ x ≤ 1,    (7.85)

has this property. It is continuous, is piecewise C¹, and gives F{φ*} = 0. Moreover φ*(x)
satisfies the Euler equation except at x = 1/2.
But is it legitimate for us to consider piecewise smooth functions? If so, are there
any restrictions that we must enforce? Physical problems involving discontinuities in certain
physical fields or their derivatives often arise when, for example, the problem concerns an
interface separating two different materials. A specific example will be considered below.

7.7.1 Piecewise smooth minimizer with non-smoothness occurring at a prescribed location.
Suppose that we wish to extremize the functional

    F{φ} = ∫₀¹ f(x, φ, φ′) dx

over some suitable set of admissible functions, and suppose further that we know that the
extremal φ(x) is continuous but has a discontinuity in its slope at x = s: i.e. φ′(s−) ≠ φ′(s+),
where φ′(s±) denotes the limiting value of φ′(s ± ε) as ε → 0. Thus the set of admissible
functions is composed of all functions that are smooth on either side of x = s, that are
continuous at x = s, and that satisfy the given boundary conditions φ(0) = φ₀, φ(1) = φ₁:

    A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹([0, s) ∪ (s, 1]), φ ∈ C[0, 1], φ(0) = φ₀, φ(1) = φ₁ }.

Observe that an admissible function is required to be continuous on [0, 1], required to have
a continuous first derivative on either side of x = s, and its first derivative is permitted to
have a jump discontinuity at the given location x = s.
Figure 7.12: Extremal φ(x) and a neighboring test function φ(x) + δφ(x), both with kinks at x = s.
Suppose that F is extremized by a function φ(x) ∈ A and suppose that this extremal has
a jump discontinuity in its first derivative at x = s. Let δφ(x) be an admissible variation,
which means that the neighboring function φ(x) + δφ(x) is also in A: it is
C¹ on [0, s) ∪ (s, 1] and may have a jump discontinuity in its first derivative at the location
x = s; see Figure 7.12. This implies that

    δφ(x) ∈ C[0, 1],  δφ(x) ∈ C¹([0, s) ∪ (s, 1]),  δφ(0) = δφ(1) = 0.
In view of the lack of smoothness at x = s it is convenient to split the integral into two parts
and write

    F{φ} = ∫₀ˢ f(x, φ, φ′) dx + ∫ₛ¹ f(x, φ, φ′) dx,

and

    F{φ + δφ} = ∫₀ˢ f(x, φ + δφ, φ′ + δφ′) dx + ∫ₛ¹ f(x, φ + δφ, φ′ + δφ′) dx.
Upon calculating δF, which by definition equals F{φ + δφ} − F{φ} up to terms linear in δφ,
and setting δF = 0, we obtain

    ∫₀ˢ ( f_φ δφ + f_{φ′} δφ′ ) dx + ∫ₛ¹ ( f_φ δφ + f_{φ′} δφ′ ) dx = 0.
Integrating the terms involving δφ′ by parts leads to

    ∫₀ˢ [ f_φ − d/dx(f_{φ′}) ] δφ dx + ∫ₛ¹ [ f_φ − d/dx(f_{φ′}) ] δφ dx + [ (∂f/∂φ′) δφ ]_{x=0}^{s−} + [ (∂f/∂φ′) δφ ]_{x=s+}^{1} = 0.
However, since δφ(0) = δφ(1) = 0, this simplifies to

    ∫₀¹ [ ∂f/∂φ − d/dx(∂f/∂φ′) ] δφ(x) dx + [ (∂f/∂φ′)|_{x=s−} − (∂f/∂φ′)|_{x=s+} ] δφ(s) = 0.
First, if we limit attention to variations that are such that δφ(s) = 0, the second term
in the equation above vanishes, and only the integral remains. Since δφ(x) can be chosen
arbitrarily for all x ∈ (0, 1), x ≠ s, this implies that the term within the brackets in the
integrand must vanish at each of these x's. This leads to the Euler equation

    ∂f/∂φ − d/dx(∂f/∂φ′) = 0 for 0 < x < 1, x ≠ s.
Second, when this is substituted back into the equation above it, the integral now disappears.
Since the resulting equation must hold for all variations δφ(s), it follows that we must have

    (∂f/∂φ′)|_{x=s−} = (∂f/∂φ′)|_{x=s+}
at x = s. This is a “matching condition” or “jump condition” that relates the solution on
the left of x = s to the solution on its right. The matching condition shows that even though
φ′ has a jump discontinuity at x = s, the quantity ∂f/∂φ′ is continuous at this point.

Thus in summary an extremal φ must obey the following boundary value problem:

    d/dx( ∂f/∂φ′ ) − ∂f/∂φ = 0 for 0 < x < 1, x ≠ s,
    φ(0) = φ₀,  φ(1) = φ₁,
    (∂f/∂φ′)|_{x=s−} = (∂f/∂φ′)|_{x=s+} at x = s.    (7.86)
Figure 7.13: Ray of light in a two-phase material: the half-planes x < 0 and x > 0 have refractive indices n₁(x, y) and n₂(x, y) respectively; the ray joins (a, A) to (b, B), crossing the interface x = 0 at angles θ₋ and θ₊.
Example: Consider a two-phase material that occupies all of x, y-space. The material occupying
x < 0 is different from the material occupying x > 0, and so x = 0 is the interface
between the two materials. In particular, suppose that the refractive indices of the materials
occupying x < 0 and x > 0 are n₁(x, y) and n₂(x, y) respectively; see Figure 7.13. We are
asked to determine the path y = φ(x), a ≤ x ≤ b, followed by a ray of light travelling from
a point (a, A) in the left half-plane to the point (b, B) in the right half-plane. In particular,
we are to determine the conditions at the point where the ray crosses the interface between
the two media.
According to Fermat's principle, a ray of light travelling between two given points follows
the path that it can traverse in the shortest possible time. Also, we know that light travels
at a speed c/n(x, y) where n(x, y) is the index of refraction at the point (x, y). Thus the
transit time is determined by integrating n/c along the path followed by the light, which,
since ds = √(1 + (φ′)²) dx, can be written as

    T{φ} = ∫ₐᵇ (1/c) n(x, φ(x)) √(1 + (φ′)²) dx.
Thus the problem at hand is to determine the φ that minimizes the functional T{φ} over the
set of admissible functions

    A = { φ(·) | φ ∈ C[a, b], φ ∈ C¹([a, 0) ∪ (0, b]), φ(a) = A, φ(b) = B }.

Note that this set of admissible functions allows the path followed by the light to have a
kink at x = 0 even though the path is continuous.

The functional we are asked to minimize can be written in the standard form

    T{φ} = ∫ₐᵇ f(x, φ, φ′) dx where f(x, φ, φ′) = (n(x, φ)/c) √(1 + (φ′)²).
Therefore

    ∂f/∂φ′ = (n(x, φ)/c) · φ′/√(1 + (φ′)²),

and so the matching condition at the kink at x = 0 requires that

    (n/c) · φ′/√(1 + (φ′)²) be continuous at x = 0.
Observe that, if θ is the angle made by the ray of light with the x-axis at some point along
its path, then tan θ = φ′ and so sin θ = φ′/√(1 + (φ′)²). Therefore the matching condition
requires that n sin θ be continuous, or

    n₊ sin θ₊ = n₋ sin θ₋

where n± and θ± are the limiting values of n(x, φ(x)) and θ(x) as x → 0±. This is Snell's
well-known law of refraction. A numerical sanity check is sketched below.
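Snell's law can be checked by direct minimization of the transit time. In the sketch below (assuming SciPy; the constant indices n₁, n₂ and the end points are arbitrary example values), straight segments are optimal within each homogeneous half-plane, so only the crossing height y₀ on the interface is unknown:

import numpy as np
from scipy.optimize import minimize_scalar

n1, n2 = 1.0, 1.5                 # constant refractive indices (assumed)
a, A = -1.0, 1.0                  # start point in the left half-plane
b, B = 1.0, -1.0                  # end point in the right half-plane

def transit_time(y0):
    # time ∝ n * segment length on each side of the interface x = 0
    return n1*np.hypot(-a, y0 - A) + n2*np.hypot(b, B - y0)

y0 = minimize_scalar(transit_time, bounds=(-5.0, 5.0), method='bounded').x

# sin(theta) = phi'/sqrt(1 + (phi')^2) on each side of the kink:
sin_m = (y0 - A)/np.hypot(-a, y0 - A)
sin_p = (B - y0)/np.hypot(b, B - y0)
print(n1*sin_m, n2*sin_p)         # equal: n- sin(theta-) = n+ sin(theta+)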
7.7.2 Piecewise smooth minimizer with non-smoothness occurring at an unknown location
Suppose again that we wish to extremize the functional

    F{φ} = ∫₀¹ f(x, φ, φ′) dx

over the admissible set of functions

    A = { φ(·) | φ : [0, 1] → R, φ ∈ C[0, 1], φ ∈ C¹ₚ[0, 1], φ(0) = a, φ(1) = b }.

Just as before, the admissible functions are continuous and have a piecewise continuous first
derivative. However in contrast to the preceding case, if there is a discontinuity in the first
derivative of φ at some location x = s, the location s is not known a priori and so is also to
be determined.
Figure 7.14: Extremal φ(x) with a kink at x = s and a neighboring test function φ(x) + δφ(x) with kinks at x = s and x = s + δs.
Suppose that F is extremized by the function φ(x) and that it has a jump discontinuity
in its first derivative at x = s (we shall say that φ has a “kink” at x = s). Suppose further
that φ is C¹ on either side of x = s. Consider a variation δφ(x) that vanishes at the two ends
x = 0 and x = 1, is continuous on [0, 1], and is C¹ on [0, 1] except at x = s + δs where it has a
jump discontinuity in its first derivative:

    δφ ∈ C[0, 1],  δφ ∈ C¹([0, s + δs) ∪ (s + δs, 1]),  δφ(0) = δφ(1) = 0.

Note that φ(x) + δφ(x) has kinks at both x = s and x = s + δs. Note further that we have
varied both the function φ(x) and the location of the kink s. See Figure 7.14.
Since the extremal φ(x) has a kink at x = s it is convenient to split the integral and
express F{φ} as

    F{φ} = ∫₀ˢ f(x, φ, φ′) dx + ∫ₛ¹ f(x, φ, φ′) dx.

Similarly, since the neighboring function φ(x) + δφ(x) has kinks at x = s and x = s + δs,
it is convenient to express F{φ + δφ} by splitting the integral into three terms as follows:

    F{φ + δφ} = ∫₀ˢ f(x, φ + δφ, φ′ + δφ′) dx + ∫ₛ^{s+δs} f(x, φ + δφ, φ′ + δφ′) dx + ∫_{s+δs}¹ f(x, φ + δφ, φ′ + δφ′) dx.
We can now calculate the first variation δF which, by definition, equals F{φ + δφ} − F{φ}
up to terms linear in δφ. Calculating δF in this way and setting the result equal to zero leads,
after integrating by parts, to

    ∫₀¹ A δφ(x) dx + B δφ(s) + C δs = 0,

where

    A = ∂f/∂φ − d/dx( ∂f/∂φ′ ),
    B = (∂f/∂φ′)|_{x=s−} − (∂f/∂φ′)|_{x=s+},
    C = ( f − φ′ ∂f/∂φ′ )|_{x=s−} − ( f − φ′ ∂f/∂φ′ )|_{x=s+}.    (7.87)
By the arbitrariness of the variations above, it follows in the usual way that A, B and C all
must vanish. This leads to the usual Euler equation on (0, s) ∪ (s, 1), and the following two
additional requirements at x = s:

    (∂f/∂φ′)|_{s−} = (∂f/∂φ′)|_{s+},    (7.88)

    ( f − φ′ ∂f/∂φ′ )|_{s−} = ( f − φ′ ∂f/∂φ′ )|_{s+}.    (7.89)

The two matching conditions (or jump conditions) (7.88) and (7.89) are known as the
Weierstrass-Erdmann corner conditions (the term “corner” referring to the “kink” in φ).
Equation (7.88) is the same condition that was derived in the preceding subsection.
Example: Find the extremals of the functional

    F{φ} = ∫₀⁴ f(x, φ, φ′) dx = ∫₀⁴ ((φ′)² − 1)² dx

over the set of piecewise smooth functions subject to the end conditions φ(0) = 0, φ(4) = 2.
For simplicity, restrict attention to functions that have at most one point at which φ′ has a
discontinuity.
Here

f(x, φ, φ′) = ((φ′)² − 1)²     (7.90)

and therefore, on differentiating f,

∂f/∂φ′ = 4φ′((φ′)² − 1),     ∂f/∂φ = 0.     (7.91)
Consequently the Euler equation (at points of smoothness) is

d/dx (f_φ′) − f_φ = d/dx [ 4φ′((φ′)² − 1) ] = 0.     (7.92)
First, consider an extremal that is smooth everywhere. (Such an extremal might not, of course, exist.) In this case the Euler equation (7.92) holds on the entire interval (0, 4) and so we conclude that φ′(x) = constant for 0 ≤ x ≤ 4. On integrating this and using the boundary conditions φ(0) = 0, φ(4) = 2, we find that φ(x) = x/2, 0 ≤ x ≤ 4, is a smooth extremal. In order to compare this with what follows, it is helpful to call this, say, φ₀. Thus

φ₀(x) = x/2 for 0 ≤ x ≤ 4

is a smooth extremal of F.
Next consider a piecewise smooth extremizer of F which has a kink at some location x = s; the value of s ∈ (0, 4) is not known a priori and is to be determined. (Again, such an extremal might not, of course, exist.) The Euler equation (7.92) now holds on either side of x = s and so we find from (7.92) that φ′ = c = constant on (0, s) and φ′ = d = constant on (s, 4) where c ≠ d; (if c = d there would be no kink at x = s and we have already dealt with this case above). Thus

φ′(x) = { c for 0 < x < s,   d for s < x < 4 }.
Integrating this separately on (0, s) and (s, 4), and enforcing the boundary conditions φ(0) = 0, φ(4) = 2, leads to

φ(x) = { cx for 0 ≤ x ≤ s,   d(x − 4) + 2 for s ≤ x ≤ 4 }.     (7.93)

Since φ is required to be continuous, we must have φ(s−) = φ(s+), which requires that cs = d(s − 4) + 2, whence

s = (2 − 4d)/(c − d).     (7.94)

Note that s would not exist if c = d.
All that remains is to find c and d, and the two Weierstrass-Erdmann corner conditions (7.88), (7.89) provide us with the two equations for doing this. From (7.90), (7.91) and (7.93),

∂f/∂φ′ = { 4c(c² − 1) for 0 < x < s,   4d(d² − 1) for s < x < 4 }

and

f − φ′ ∂f/∂φ′ = { −(c² − 1)(1 + 3c²) for 0 < x < s,   −(d² − 1)(1 + 3d²) for s < x < 4 }.

Therefore the Weierstrass-Erdmann corner conditions (7.88) and (7.89), which require respectively the continuity of ∂f/∂φ′ and f − φ′ ∂f/∂φ′ at x = s, give us the pair of simultaneous equations

c(c² − 1) = d(d² − 1),     (c² − 1)(1 + 3c²) = (d² − 1)(1 + 3d²).

Keeping in mind that c ≠ d and solving these equations leads to the two solutions:

c = 1, d = −1,  and  c = −1, d = 1.
Corresponding to the former we find from (7.94) that s = 3, while the latter leads to s = 1. Thus from (7.93) there are two piecewise smooth extremals φ₁(x) and φ₂(x) of the assumed form:

φ₁(x) = { x for 0 ≤ x ≤ 3,   −x + 6 for 3 ≤ x ≤ 4 },

φ₂(x) = { −x for 0 ≤ x ≤ 1,   x − 2 for 1 ≤ x ≤ 4 }.
Figure 7.15: Smooth extremal φ₀(x) and piecewise smooth extremals φ₁(x) and φ₂(x).
Figure 7.15 shows graphs of φ₀, φ₁ and φ₂. By evaluating the functional F at each of the extremals φ₀, φ₁ and φ₂, we find

F{φ₀} = 9/4,     F{φ₁} = F{φ₂} = 0.
Remark: By inspection of the given functional

F(φ) = ∫₀⁴ ((φ′)² − 1)² dx,

it is clear that (a) F ≥ 0, and (b) F = 0 if and only if φ′ = ±1 everywhere (except at isolated points where φ′ may be undefined). The extremals φ₁ and φ₂ have this property and therefore correspond to absolute minimizers of F.
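As a quick numerical sanity check of these values (our own addition, not part of the original notes; the function names and quadrature rule are ours), one can evaluate F at the three extremals by a midpoint rule in Python:

import numpy as np

def F(phi_prime, a=0.0, b=4.0, n=100000):
    # Midpoint-rule approximation of F(phi) = integral of ((phi')^2 - 1)^2 over [a, b].
    x = np.linspace(a, b, n, endpoint=False) + 0.5*(b - a)/n
    return np.sum((phi_prime(x)**2 - 1.0)**2)*(b - a)/n

phi0 = lambda x: 0.5*np.ones_like(x)             # phi_0(x) = x/2
phi1 = lambda x: np.where(x < 3.0, 1.0, -1.0)    # kink at s = 3
phi2 = lambda x: np.where(x < 1.0, -1.0, 1.0)    # kink at s = 1

print(F(phi0), F(phi1), F(phi2))   # approximately 2.25, 0, 0

Only φ′ enters the integrand here, so it suffices to supply the (piecewise constant) derivatives; the output reproduces F{φ₀} = 9/4 and F{φ₁} = F{φ₂} = 0.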
7.8 Generalization to higher dimensional space.
In order to help motivate the way in which we will approach higher-dimensional problems
(which will in fact be entirely parallel to the approach we took for one-dimensional problems)
we begin with some preliminary observations.
First, consider the one-dimensional variational problem of minimizing a functional

F{φ} = ∫₀¹ f(x, φ, φ′, φ″) dx

on a set of suitably smooth functions with no prescribed boundary conditions at either end. The analogous two-dimensional problem would be to consider a set of suitably smooth functions φ(x, y) defined on a domain D of the x, y-plane and to minimize a given functional

F{φ} = ∫∫_D f(x, y, φ, ∂φ/∂x, ∂φ/∂y, ∂²φ/∂x², ∂²φ/∂x∂y, ∂²φ/∂y²) dA

over this set of functions with no boundary conditions prescribed anywhere on the boundary ∂D.
In deriving the Euler equation in the one-dimensional case our strategy was to exploit
the fact that the variation δφ(x) was arbitrary in the interior 0 < x < 1 of the domain. This
motivated us to express the integrand in the form of some quantity A (independent of any
variations) multiplied by δφ(x). Then, the arbitrariness of δφ allowed us to conclude that A
must vanish on the entire domain. We approach two-dimensional problems similarly and our
strategy will be to exploit the fact that δφ(x, y) is arbitrary in the interior of D and so we
attempt to express the integrand as some quantity A that is independent of any variations
multiplied by δφ. Similarly concerning the boundary terms, in the one-dimensional case we
were able to exploit the fact that δφ and its derivative δφ′ are arbitrary at the boundary points x = 0 and x = 1, and this motivated us to express the boundary terms as some quantity B that is independent of any variations multiplied by δφ(0), another quantity C independent of any variations multiplied by δφ′(0), and so on. We approach two-dimensional
problems similarly and our strategy for the boundary terms is to exploit the fact that δφ
and its normal derivative ∂(δφ)/∂n are arbitrary on the boundary ∂D. Thus the goal in
our calculations will be to express the boundary terms as some quantity independent of any
variations multiplied by δφ, another quantity independent of any variations multiplied by
∂(δφ)/∂n etc. Thus in the two-dimensional case our strategy will be to take the first variation
of F and carry out appropriate calculations that lead us to an equation of the form

δF = ∫∫_D A δφ(x, y) dA + ∫_∂D B δφ(x, y) ds + ∫_∂D C [ ∂(δφ(x, y))/∂n ] ds = 0     (7.95)
where A, B, C are independent of δφ and its derivatives and the latter two integrals are on
the boundary of the domain D. We then exploit the arbitrariness of δφ(x, y) on the interior
of the domain of integration, and the arbitrariness of δφ and ∂(δφ)/∂n on the boundary
∂D to conclude that the minimizer must satisfy the partial differential equation A = 0 for
(x, y) ∈ D and the boundary conditions B = C = 0 on ∂D.
Next, recall that one of the steps involved in calculating the minimizer of a one-dimensional problem is integration by parts. This converts a term that is an integral over [0, 1] into terms that are only evaluated at the boundary points x = 0 and 1. The analog of this in higher dimensions is carried out using the divergence theorem, which in two dimensions reads

∫∫_D (∂P/∂x + ∂Q/∂y) dA = ∫_∂D (P n_x + Q n_y) ds     (7.96)

which expresses the left hand side, which is an integral over D, in a form that only involves terms on the boundary. Here n_x, n_y are the components of the unit normal vector n on ∂D that points out of D. Note that in the special case where P = ∂χ/∂x and Q = ∂χ/∂y for some χ(x, y), the integrand of the right hand side is ∂χ/∂n.
Remark: The derivative of a function φ(x, y) in a direction corresponding to a unit vector m is written as ∂φ/∂m and defined by ∂φ/∂m = ∇φ · m = (∂φ/∂x) m_x + (∂φ/∂y) m_y, where m_x and m_y are the components of m in the x- and y-directions respectively. On the boundary ∂D of a two-dimensional domain D we frequently need to calculate the derivative of φ in directions n and s that are normal and tangential to ∂D. In vector form we have

∇φ = (∂φ/∂x) i + (∂φ/∂y) j = (∂φ/∂n) n + (∂φ/∂s) s

where i and j are unit vectors in the x- and y-directions. Recall also that a function φ(x, y) and its tangential derivative ∂φ/∂s along the boundary ∂D are not independent of each other in the following sense: if one knows the values of φ along ∂D one can differentiate φ along the boundary to get ∂φ/∂s; and conversely if one knows the values of ∂φ/∂s along ∂D one can integrate it along the boundary to find φ to within a constant. This is why equation (7.95) does not involve a term of the form E ∂(δφ)/∂s integrated along the boundary ∂D: such a term can be rewritten as the integral of −(∂E/∂s) δφ along the boundary.
Example 1: A stretched membrane. A stretched flexible membrane occupies a regular region D of the x, y-plane. A pressure p(x, y) is applied normal to the surface of the membrane in the negative z-direction. Let u(x, y) be the resulting deflection of the membrane in the z-direction. The membrane is fixed along its entire edge ∂D and so

u = 0 for (x, y) ∈ ∂D.     (7.97)

One can show that the potential energy Φ associated with any deflection u that is compatible with the given boundary condition is

Φ{u} = ∫∫_D ½ |∇u|² dA − ∫∫_D p u dA

where we have taken the relevant stiffness of the membrane to be unity. The actual deflection of the membrane is the function that minimizes the potential energy over the set of test functions

A = {u | u ∈ C²(D), u = 0 for (x, y) ∈ ∂D}.
Figure 7.16: A stretched elastic membrane whose mid-plane occupies a region D of the x, y-plane and whose boundary ∂D is fixed. The membrane surface is subjected to a pressure loading p(x, y) that acts in the negative z-direction.
Since

Φ{u} = ∫∫_D ½ (u,x u,x + u,y u,y) dA − ∫∫_D p u dA,

its first variation is

δΦ = ∫∫_D (u,x δu,x + u,y δu,y) dA − ∫∫_D p δu dA,

where an admissible variation δu(x, y) vanishes on ∂D. Here we are using the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate, for example u,x = ∂u/∂x and u,xy = ∂²u/∂x∂y. In order to make use of the divergence theorem and convert the area integral into a boundary integral we must write the integrand so that it involves terms of the form (. . .),x + (. . .),y; see (7.96).
This suggests that we rewrite the preceding equation as

δΦ = ∫∫_D [ (u,x δu),x + (u,y δu),y − (u,xx + u,yy) δu ] dA − ∫∫_D p δu dA,

or equivalently as

δΦ = ∫∫_D [ (u,x δu),x + (u,y δu),y ] dA − ∫∫_D (u,xx + u,yy + p) δu dA.
By using the divergence theorem on the first integral we get

δΦ = ∫_∂D (u,x n_x + u,y n_y) δu ds − ∫∫_D (u,xx + u,yy + p) δu dA

where n is the unit outward normal along ∂D. We can write this equivalently as

δΦ = ∫_∂D (∂u/∂n) δu ds − ∫∫_D (∇²u + p) δu dA.     (7.98)

Since the variation δu vanishes on ∂D the first integral drops out and we are left with

δΦ = −∫∫_D (∇²u + p) δu dA     (7.99)

which must vanish for all admissible variations δu(x, y). Thus the minimizer satisfies the partial differential equation

∇²u + p = 0 for (x, y) ∈ D
which is the Euler equation in this case, to be solved subject to the prescribed boundary condition (7.97). Note that if some part of the boundary of D had not been fixed, then we would not have δu = 0 on that part, in which case (7.98) and (7.99) would yield the natural boundary condition ∂u/∂n = 0 on that segment.
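To make the membrane example concrete, the boundary value problem ∇²u + p = 0 with u = 0 on ∂D can be solved numerically. The following Python sketch is our own illustration (it assumes a unit square domain, uniform pressure p = 1, and the standard five-point finite-difference Laplacian with Jacobi iteration, none of which are specified in the notes):

import numpy as np

# Solve u,xx + u,yy + p = 0 on the unit square with u = 0 on the boundary.
n = 51                       # grid points per side (assumed)
h = 1.0/(n - 1)
p = np.ones((n, n))          # uniform pressure loading (assumed)
u = np.zeros((n, n))         # boundary rows/columns stay zero throughout

for _ in range(20000):       # Jacobi iteration: average of neighbors plus h^2 p / 4
    u[1:-1, 1:-1] = 0.25*(u[2:, 1:-1] + u[:-2, 1:-1]
                          + u[1:-1, 2:] + u[1:-1, :-2] + h*h*p[1:-1, 1:-1])

print(u[n//2, n//2])         # center deflection; about 0.0737 for p = 1

The computed center deflection tends to about 0.0737 as the grid is refined, which agrees with the classical series solution of this Poisson problem.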
Example 2: The Kirchhoff theory of plates. We consider the bending of a thin plate according to the so-called Kirchhoff theory. Solely for purposes of mathematical simplicity we shall assume that the Poisson ratio ν of the material is zero. A discussion of the case ν ≠ 0 can be found in many books, for example, in “Energy and Finite Element Methods in Structural Mechanics” by I.H. Shames and C.L. Dym. When ν = 0 the plate bending stiffness is D = Et³/12, where E is the Young's modulus of the material and t is the thickness of the plate. The mid-plane of the plate occupies a domain of the x, y-plane and w(x, y) denotes the deflection (displacement) of a point on the mid-plane in the z-direction. The basic constitutive relationships of elastic plate theory relate the internal moments M_x, M_y, M_xy, M_yx (see Figure 7.17) to the second derivatives of the displacement field w,xx, w,yy, w,xy by

M_x = −D w,xx,     M_y = −D w,yy,     M_xy = M_yx = −D w,xy,     (7.100)
where a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate and D is the plate bending stiffness; and the shear forces in the plate are given by

V_x = −D (∇²w),x,     V_y = −D (∇²w),y.     (7.101)
The elastic energy per unit area of the plate mid-plane is given by

½ (M_x w,xx + M_y w,yy + M_xy w,xy + M_yx w,yx) = (D/2) (w,xx² + w,yy² + 2w,xy²).     (7.102)
Figure 7.17: A differential element dx × dy × t of a thin plate. A bold arrow represents a force and thus V_x and V_y are (shear) forces. A bold arrow with two arrow heads represents a moment whose sense is given by the right-hand rule. Thus M_xy and M_yx are (twisting) moments while M_x and M_y are (bending) moments.
Figure 7.18: Left: A thin elastic plate whose mid-plane occupies a region Ω of the x, y-plane. The segment ∂Ω₁ of the boundary is clamped while the remainder ∂Ω₂ is free of loading. Right: A rectangular a × b plate with a load-free edge at its right-hand side x = a, 0 ≤ y ≤ b, on which n_x = 1, n_y = 0 and s_x = 0, s_y = 1.
It is worth noting the following puzzling question. Consider the rectangular plate shown in the right-hand diagram of Figure 7.18. Based on Figure 7.17 we know that there is a bending moment M_x, a twisting moment M_xy, and a shear force V_x acting on any surface x = constant in the plate. Therefore, in particular, since the right-hand edge x = a is free of loading, one would expect to have the three conditions M_x = M_xy = V_x = 0 along that boundary. However we will find that the differential equation to be solved in the interior of the plate requires (and can only accommodate) two boundary conditions at any point on the edge. The question then arises as to what the correct boundary conditions on this edge should be. Our variational approach will give us precisely two natural boundary conditions on this edge. They will involve M_x, M_xy and V_x but will not require that each of them vanish individually.
Consider a thin elastic plate whose mid-plane occupies a domain Ω of the x, y-plane as shown in the left-hand diagram of Figure 7.18. A normal loading p(x, y) is applied on the flat face of the plate in the −z-direction. A part of the plate boundary, denoted by ∂Ω₁, is clamped while the remainder ∂Ω₂ is free of any external loading. Thus if w(x, y) denotes the deflection of the plate in the z-direction we have the geometric boundary conditions

w = ∂w/∂n = 0 for (x, y) ∈ ∂Ω₁.     (7.103)
The total potential energy of the system is

Φ{w} = ∫∫_Ω [ (D/2)(w,xx² + 2w,xy² + w,yy²) − p w ] dA     (7.104)

where the first group of terms represents the elastic energy in the plate and the last term represents the potential energy of the pressure loading (the negative sign arising from the fact that p acts in the minus z-direction while w is the deflection in the positive z-direction). This functional Φ is defined on the set of all kinematically admissible deflection fields, which is the set of all suitably smooth functions w(x, y) that satisfy the geometric requirements (7.103). The actual deflection field is the one that minimizes the potential energy Φ over this set.
We now determine the Euler equation and natural boundary conditions associated with (7.104) by calculating the first variation of Φ{w} and setting it equal to zero:

∫∫_Ω [ w,xx δw,xx + 2w,xy δw,xy + w,yy δw,yy − (p/D) δw ] dA = 0.     (7.105)
To simplify this we begin by rearranging the terms into a form that will allow us to use the divergence theorem, thereby converting part of the area integral on Ω into a boundary integral on ∂Ω. In order to use the divergence theorem we must write the integrand so that it involves terms of the form (. . .),x + (. . .),y; see (7.96). Accordingly we rewrite (7.105) as
0 = ∫∫_Ω [ w,xx δw,xx + 2w,xy δw,xy + w,yy δw,yy − (p/D) δw ] dA

  = ∫∫_Ω [ (w,xx δw,x + w,xy δw,y),x + (w,xy δw,x + w,yy δw,y),y − w,xxx δw,x − w,xxy δw,y − w,xyy δw,x − w,yyy δw,y − (p/D) δw ] dA

  = ∫_∂Ω [ (w,xx δw,x + w,xy δw,y) n_x + (w,xy δw,x + w,yy δw,y) n_y ] ds − ∫∫_Ω [ w,xxx δw,x + w,xxy δw,y + w,xyy δw,x + w,yyy δw,y + (p/D) δw ] dA

  = ∫_∂Ω I₁ ds − ∫∫_Ω I₂ dA.     (7.106)

We have used the divergence theorem (7.96) in going from the second equation above to the third equation. In order to facilitate further simplification, in the last step we have let I₁ and I₂ denote the integrands of the boundary and area integrals respectively.
To simplify the area integral in (7.106) we again rearrange the terms in I₂ into a form that will allow us to use the divergence theorem. Thus

∫∫_Ω I₂ dA = ∫∫_Ω [ w,xxx δw,x + w,xxy δw,y + w,xyy δw,x + w,yyy δw,y + (p/D) δw ] dA

  = ∫∫_Ω [ (w,xxx δw + w,xyy δw),x + (w,xxy δw + w,yyy δw),y − (w,xxxx + 2w,xxyy + w,yyyy − p/D) δw ] dA

  = ∫_∂Ω [ (w,xxx δw + w,xyy δw) n_x + (w,xxy δw + w,yyy δw) n_y ] ds − ∫∫_Ω (∇⁴w − p/D) δw dA

  = ∫_∂Ω (w,xxx n_x + w,xyy n_x + w,xxy n_y + w,yyy n_y) δw ds − ∫∫_Ω (∇⁴w − p/D) δw dA

  = ∫_∂Ω P₁ δw ds − ∫∫_Ω P₂ δw dA,     (7.107)
where we have set

P₁ = w,xxx n_x + w,xyy n_x + w,xxy n_y + w,yyy n_y   and   P₂ = ∇⁴w − p/D.     (7.108)

In the preceding calculation, we have used the divergence theorem (7.96) in going from the second equation in (7.107) to the third equation, and we have set

∇⁴w = ∇²(∇²w) = w,xxxx + 2w,xxyy + w,yyyy.
Next we simplify the boundary term in (7.106) by converting the derivatives of the variation with respect to x and y into derivatives with respect to the normal and tangential coordinates n and s. To do this we use the fact that ∇δw = δw,x i + δw,y j = δw,n n + δw,s s, from which it follows that δw,x = δw,n n_x + δw,s s_x and δw,y = δw,n n_y + δw,s s_y. Thus from (7.106),

∫_∂Ω I₁ ds = ∫_∂Ω [ (w,xx n_x δw,x + w,xy n_x δw,y) + (w,xy n_y δw,x + w,yy n_y δw,y) ] ds

  = ∫_∂Ω [ (w,xx n_x + w,xy n_y) δw,x + (w,xy n_x + w,yy n_y) δw,y ] ds

  = ∫_∂Ω [ (w,xx n_x + w,xy n_y)(δw,n n_x + δw,s s_x) + (w,xy n_x + w,yy n_y)(δw,n n_y + δw,s s_y) ] ds

  = ∫_∂Ω [ (w,xx n_x² + 2w,xy n_x n_y + w,yy n_y²) δw,n + (w,xx n_x s_x + w,xy s_x n_y + w,xy n_x s_y + w,yy n_y s_y) δw,s ] ds

  = ∫_∂Ω [ (w,xx n_x² + 2w,xy n_x n_y + w,yy n_y²) δw,n + I₃ ] ds.     (7.109)
To further simplify this we have set I₃ equal to the δw,s-term in the last expression in (7.109); this term can be written as

∫_∂Ω I₃ ds = ∫_∂Ω [ (w,xx n_x s_x + w,xy s_x n_y + w,xy n_x s_y + w,yy n_y s_y) δw,s ] ds

  = ∫_∂Ω { [ (w,xx n_x s_x + w,xy s_x n_y + w,xy n_x s_y + w,yy n_y s_y) δw ],s − (w,xx n_x s_x + w,xy s_x n_y + w,xy n_x s_y + w,yy n_y s_y),s δw } ds.     (7.110)
If a field f(x, y) varies smoothly along ∂Ω, and if the curve ∂Ω itself is smooth, then

∫_∂Ω (∂f/∂s) ds = 0     (7.111)

since this is an integral over a closed curve⁸. It follows from this that the first term in the last expression of (7.110) vanishes and so

∫_∂Ω I₃ ds = −∫_∂Ω (w,xx n_x s_x + w,xy s_x n_y + w,xy n_x s_y + w,yy n_y s_y),s δw ds.     (7.112)
Substituting (7.112) into (7.109) yields

∫_∂Ω I₁ ds = ∫_∂Ω P₃ [ ∂(δw)/∂n ] ds − ∫_∂Ω [ ∂P₄/∂s ] δw ds,     (7.113)
where we have set

P₃ = w,xx n_x² + 2w,xy n_x n_y + w,yy n_y²,

P₄ = w,xx n_x s_x + w,xy n_y s_x + w,xy n_x s_y + w,yy n_y s_y.     (7.114)
Finally, substituting (7.113) and (7.107) into (7.106) leads to

∫∫_Ω P₂ δw dA − ∫_∂Ω [ P₁ + ∂P₄/∂s ] δw ds + ∫_∂Ω P₃ [ ∂(δw)/∂n ] ds = 0     (7.115)

which must hold for all admissible variations δw. First restrict attention to variations which vanish on the boundary ∂Ω and whose normal derivative ∂(δw)/∂n also vanishes on ∂Ω. This leads us to the Euler equation P₂ = 0:

∇⁴w − p/D = 0 for (x, y) ∈ Ω.     (7.116)
Returning to (7.115) with this gives

∫_∂Ω [ P₁ + ∂P₄/∂s ] δw ds + ∫_∂Ω P₃ [ ∂(δw)/∂n ] ds = 0.     (7.117)

Since the portion ∂Ω₁ of the boundary is clamped we have w = ∂w/∂n = 0 for (x, y) ∈ ∂Ω₁, and so the variations δw and ∂(δw)/∂n must also vanish on ∂Ω₁. Thus (7.117) simplifies to

∫_∂Ω₂ [ P₁ + ∂P₄/∂s ] δw ds + ∫_∂Ω₂ P₃ [ ∂(δw)/∂n ] ds = 0     (7.118)
for variations δw and ∂(δw)/∂n that are arbitrary on ∂Ω₂, where ∂Ω₂ is the complement of ∂Ω₁, i.e. ∂Ω = ∂Ω₁ ∪ ∂Ω₂. Thus we conclude that P₁ + ∂P₄/∂s = 0 and P₃ = 0 on ∂Ω₂:

w,xxx n_x + w,xyy n_x + w,xxy n_y + w,yyy n_y + (∂/∂s)(w,xx n_x s_x + w,xy n_y s_x + w,xy n_x s_y + w,yy n_y s_y) = 0,

w,xx n_x² + 2w,xy n_x n_y + w,yy n_y² = 0,

for (x, y) ∈ ∂Ω₂.     (7.119)
⁸In the present setting one would have this degree of smoothness if there are no concentrated loads applied on the boundary of the plate ∂Ω and the boundary curve itself has no corners.
Thus, in summary, the Kirchhoff theory of plates for the problem at hand requires that one solve the field equation (7.116) on Ω subject to the displacement boundary conditions (7.103) on ∂Ω₁ and the natural boundary conditions (7.119) on ∂Ω₂.
Remark: If we define the moments M_n, M_ns and force V_n by

M_n = −D (w,xx n_x n_x + w,xy n_x n_y + w,yx n_y n_x + w,yy n_y n_y),

M_ns = −D (w,xx n_x s_x + w,xy n_y s_x + w,yx n_x s_y + w,yy n_y s_y),

V_n = −D (w,xxx n_x + w,xyy n_x + w,yxx n_y + w,yyy n_y),     (7.120)
then the two natural boundary conditions can be written as

M_n = 0,     (∂/∂s)(M_ns) + V_n = 0.     (7.121)
As a special case suppose that the plate is rectangular, 0 ≤ x ≤ a, 0 ≤ y ≤ b, and that the right edge x = a, 0 ≤ y ≤ b is free of load; see the right diagram in Figure 7.18. Then n_x = 1, n_y = 0, s_x = 0, s_y = 1 on ∂Ω₂ and so (7.120) simplifies to

M_n = −D w,xx,     M_ns = −D w,yx,     V_n = −D (w,xxx + w,xyy),     (7.122)
which because of (7.100) shows that in this case M_n = M_x, M_ns = M_xy, V_n = V_x. Thus the natural boundary conditions (7.121) can be written as

M_x = 0,     (∂/∂y)(M_xy) + V_x = 0.     (7.123)

This answers the question we posed soon after (7.101) as to what the correct boundary conditions on a free edge should be. We had noted that intuitively we would have expected the moments and forces to vanish on a free edge and therefore that M_x = M_xy = V_x = 0 there; but this is in contradiction to the mathematical fact that the differential equation (7.116) only requires two conditions at each point on the boundary. The two natural boundary conditions (7.123) require that certain combinations of M_x, M_xy, V_x vanish but not that all three vanish.
Example 3: Minimal surface equation. Let C be a closed curve in R³ as sketched in Figure 7.19. From among all surfaces S in R³ that have C as their boundary, we wish to determine the surface that has minimum area. As a physical example, if C corresponds to a wire loop which we dip in a soapy solution, a thin soap film will form across C. The surface that forms is the one that, from among all possible surfaces S that are bounded by C, minimizes the total surface energy of the film; which (if the surface energy density is constant) is the surface with minimal area.
Figure 7.19: The closed curve C in R³ is given. From among all surfaces S in R³ that have C as its boundary, the surface with minimal area is to be sought. The curve ∂D is the projection of C onto the x, y-plane.
Let C be a closed curve in R³. Suppose that its projection onto the x, y-plane is denoted by ∂D and let D denote the simply connected region contained within ∂D; see Figure 7.19. Suppose that C is characterized by z = h(x, y) for (x, y) ∈ ∂D. Let z = φ(x, y) for (x, y) ∈ D describe a surface S in R³ that has C as its boundary; necessarily φ = h on ∂D. Thus the admissible set of functions we are considering is

A = {φ | φ : D → R, φ ∈ C¹(D), φ = h on ∂D}.
Consider a rectangular differential element on the x, y-plane that is contained within D. The vector joining (x, y) to (x + dx, y) is dx = dx i while the vector joining (x, y) to (x, y + dy) is dy = dy j. If du and dv are vectors on the surface z = φ(x, y) whose projections are dx and dy respectively, then we know that

du = dx i + φ_x dx k,     dv = dy j + φ_y dy k.

The vectors du and dv define a parallelogram on the surface z = φ(x, y) and the area of this parallelogram is |du × dv|. Thus the area of a differential element on S is

|du × dv| = | −φ_x dxdy i − φ_y dxdy j + dxdy k | = √(1 + φ_x² + φ_y²) dxdy.
Consequently the problem at hand is to minimize the functional

F{φ} = ∫∫_D √(1 + φ_x² + φ_y²) dA

over the set of admissible functions

A = {φ | φ : D → R, φ ∈ C²(D), φ = h on ∂D}.

It is left as an exercise to show that setting the first variation of F equal to zero leads to the so-called minimal surface equation

(1 + φ_y²) φ_xx − 2φ_x φ_y φ_xy + (1 + φ_x²) φ_yy = 0.
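For the reader who wishes to carry out the exercise, here is a sketch of the omitted step (our own filling-in, following the same pattern as the membrane example): requiring δF = 0 for all variations δφ that vanish on ∂D leads, after an integration by parts via (7.96), to

∂/∂x [ φ_x/√(1 + φ_x² + φ_y²) ] + ∂/∂y [ φ_y/√(1 + φ_x² + φ_y²) ] = 0,

and carrying out the differentiations and multiplying through by (1 + φ_x² + φ_y²)^{3/2} collapses this divergence form to the minimal surface equation displayed above.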
Remark: See en.wikipedia.org/wiki/Soap_bubble and www.susqu.edu/facstaff/b/brakke/ for additional discussion.
7.9 Second variation. Another necessary condition for a minimum.

In order to illustrate the basic ideas of this section in the simplest possible setting, we confine the discussion to the particular functional

F{φ} = ∫₀¹ f(x, φ, φ′) dx

defined over a set of admissible functions A. Suppose that a particular function φ minimizes F, and that for some given function η, the one-parameter family of functions φ + εη is admissible for all sufficiently small values of the parameter ε. Define F̂(ε) by

F̂(ε) = F{φ + εη} = ∫₀¹ f(x, φ + εη, φ′ + εη′) dx,
so that by Taylor expansion,

F̂(ε) = F̂(0) + ε F̂′(0) + (ε²/2) F̂″(0) + O(ε³),

where

F̂(0) = ∫₀¹ f(x, φ, φ′) dx = F{φ},

ε F̂′(0) = δF{φ, η},

ε² F̂″(0) = ε² ∫₀¹ { f_φφ η² + 2f_φφ′ η η′ + f_φ′φ′ (η′)² } dx ≝ δ²F{φ, η}.
Since φ minimizes F, it follows that ε = 0 minimizes F̂(ε), and consequently that

δ²F{φ, η} ≥ 0,

in addition to the requirement δF{φ, η} = 0. Thus a necessary condition for a function φ to minimize a functional F is that the second variation of F be non-negative for all admissible variations δφ:

δ²F{φ, δφ} = ∫₀¹ { f_φφ (δφ)² + 2f_φφ′ (δφ)(δφ′) + f_φ′φ′ (δφ′)² } dx ≥ 0,     (7.124)

where we have set δφ = εη. The inequality is reversed if φ maximizes F.

The condition δ²F{φ, η} ≥ 0 is necessary but not sufficient for the functional F to have a minimum at φ. We shall not discuss sufficient conditions in general in these notes.
Proposition: Legendre Condition: A necessary condition for (7.124) to hold is that

f_φ′φ′(x, φ(x), φ′(x)) ≥ 0 for 0 ≤ x ≤ 1

for the minimizing function φ.
Example: Consider a curve in the x, y-plane characterized by y = φ(x) that begins at (0, φ₀) and ends at (1, φ₁). From among all such curves, find the one that, when rotated about the x-axis, generates the surface of minimum area.

Thus we are asked to minimize the functional

F{φ} = ∫₀¹ f(x, φ, φ′) dx where f(x, φ, φ′) = φ √(1 + (φ′)²),

over a set of admissible functions that satisfy the boundary conditions φ(0) = φ₀, φ(1) = φ₁.
A function φ that minimizes F must satisfy the boundary value problem consisting of the Euler equation and the given boundary conditions:

d/dx [ φφ′/√(1 + (φ′)²) ] − √(1 + (φ′)²) = 0,     φ(0) = φ₀,  φ(1) = φ₁.

The general solution of this Euler equation is

φ(x) = α cosh((x − β)/α) for 0 ≤ x ≤ 1,
where the constants α and β are determined through the boundary conditions. To test the Legendre condition we calculate f_φ′φ′ and find that

f_φ′φ′ = φ / ( √(1 + (φ′)²) )³,

which, when evaluated at the particular function φ(x) = α cosh((x − β)/α), yields

f_φ′φ′ |_{φ = α cosh((x−β)/α)} = α / cosh²((x − β)/α).

Therefore as long as α > 0 the Legendre condition is satisfied.
7.10 Sufficient condition for minimization of convex functionals

Figure 7.20: A convex curve y = F(x) lies above the tangent line y = F(x₂) + F′(x₂)(x − x₂) through any point of the curve: F(x₁) ≥ F(x₂) + F′(x₂)(x₁ − x₂) for all x₁, x₂ ∈ D.
We now turn to a brief discussion of sufficient conditions for a minimum for a special class of functionals. It is useful to begin by reviewing the question of finding the minimum of a real-valued function of a real variable. A function F(x) defined for x ∈ A with continuous first derivatives is said to be convex if

F(x₁) ≥ F(x₂) + F′(x₂)(x₁ − x₂) for all x₁, x₂ ∈ A;
see Figure 7.20 for a geometric interpretation of convexity. If a convex function has a stationary point, say at x₀, then it follows by setting x₂ = x₀ in the preceding equation that x₀ is a minimizer of F. Therefore a stationary point of a convex function is necessarily a minimizer. If F is strictly convex on A, i.e. if F is convex and F(x₁) = F(x₂) + F′(x₂)(x₁ − x₂) if and only if x₁ = x₂, then F can have only one stationary point and so can have only one interior minimum.
This is also true for a real-valued function F with continuous first derivatives on a domain A in Rⁿ, where convexity is defined by⁹

F(x₁) ≥ F(x₂) + δF(x₂, x₁ − x₂) for all x₁, x₂ ∈ A.

If a convex function has a stationary point at, say, x₀, then since δF(x₀, y) = 0 for all y it follows that x₀ is a minimizer of F. Therefore a stationary point of a convex function is necessarily a minimizer. If F is strictly convex on A, i.e. if F is convex and F(x₁) = F(x₂) + δF(x₂, x₁ − x₂) if and only if x₁ = x₂, then F can have only one stationary point and so can have only one interior minimum.
We now turn to a functional F{φ}, which is said to be convex on A if

F{φ + η} ≥ F{φ} + δF{φ, η} for all φ, φ + η ∈ A.

If F is stationary at φ₀ ∈ A, then δF{φ₀, η} = 0 for all admissible η, and it follows that φ₀ is in fact a minimizer of F. Therefore a stationary point of a convex functional is necessarily a minimizer.
For example, consider the generic functional

F{φ} = ∫₀¹ f(x, φ, φ′) dx.     (7.125)

Then

δF{φ, η} = ∫₀¹ [ (∂f/∂φ) η + (∂f/∂φ′) η′ ] dx

and so the convexity condition F{φ + η} − F{φ} ≥ δF{φ, η} takes the special form

∫₀¹ [ f(x, φ + η, φ′ + η′) − f(x, φ, φ′) ] dx ≥ ∫₀¹ [ (∂f/∂φ) η + (∂f/∂φ′) η′ ] dx.     (7.126)
In general it might not be simple to test whether this condition holds in a particular case. It is readily seen that a sufficient condition for (7.126) to hold is that the integrands satisfy the inequality

f(x, y + v, z + w) − f(x, y, z) ≥ (∂f/∂y) v + (∂f/∂z) w     (7.127)

for all (x, y, z), (x, y + v, z + w) in the domain of f. This is precisely the requirement that the function f(x, y, z) be a convex function of y, z at fixed x.

⁹See equation (??) for the definition of δF(x, y).
Thus in summary: if the integrand f of the functional F defined in (7.125) satisfies the convexity condition (7.127), then a function φ that extremizes F is in fact a minimizer of F. Note that this is simply a sufficient condition for ensuring that an extremum is a minimum.

Remark: In the special case where f(x, y, z) is independent of y, one sees from basic calculus that if ∂²f/∂z² > 0 then f(x, z) is a strictly convex function of z at each fixed x.
Example: Geodesics. Find the curve of shortest length that lies entirely on a circular cylinder of radius a, beginning (in circular cylindrical coordinates (r, θ, ξ)) at (a, θ₁, ξ₁) and ending at (a, θ₂, ξ₂) as shown in Figure 7.21.
Figure 7.21: A curve ξ = ξ(θ), θ₁ ≤ θ ≤ θ₂, that lies entirely on a circular cylinder of radius a, beginning (in circular cylindrical coordinates) at (a, θ₁, ξ₁) and ending at (a, θ₂, ξ₂).
We can characterize a curve in R³ parametrically, in circular cylindrical coordinates, by r = r(θ), ξ = ξ(θ), θ₁ ≤ θ ≤ θ₂. When the curve lies on the surface of a circular cylinder of radius a this specializes to

r = a, ξ = ξ(θ) for θ₁ ≤ θ ≤ θ₂.
Since the arc length can be written as

ds = √( dr² + r² dθ² + dξ² ) = √( (r′(θ))² + (r(θ))² + (ξ′(θ))² ) dθ = √( a² + (ξ′(θ))² ) dθ,
our task is to minimize the functional

F{ξ} = ∫_{θ₁}^{θ₂} f(θ, ξ(θ), ξ′(θ)) dθ where f(x, y, z) = √(a² + z²),

over the set of all suitably smooth functions ξ(θ) defined for θ₁ ≤ θ ≤ θ₂ which satisfy ξ(θ₁) = ξ₁, ξ(θ₂) = ξ₂.
Evaluating the necessary condition δF = 0 leads to the Euler equation. This second order differential equation for ξ(θ) can be readily solved, and using the boundary conditions ξ(θ₁) = ξ₁, ξ(θ₂) = ξ₂ leads to

ξ(θ) = ξ₁ + [ (ξ₁ − ξ₂)/(θ₁ − θ₂) ] (θ − θ₁).     (7.128)
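To spell out the step just referred to (our own filling-in): since f = √(a² + z²) does not depend on y, the Euler equation d/dθ (f_z) − f_y = 0 reduces to

d/dθ [ ξ′/√(a² + (ξ′)²) ] = 0,

so that ξ′/√(a² + (ξ′)²), and hence ξ′ itself, is constant; integrating and imposing the end conditions gives (7.128).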
Direct differentiation of f(x, y, z) = √(a² + z²) shows that

∂²f/∂z² = a²/(a² + z²)^{3/2} > 0

and so f is a strictly convex function of z. Thus the curve of minimum length is given uniquely by (7.128) – a helix. Note that if the circular cylindrical surface is cut along a vertical line and unrolled into a flat sheet, this curve unfolds into a straight line.
7.11 Direct method of the calculus of variations and minimizing sequences.
We now turn to a different method of seeking minima, and for purposes of introduction, begin by reviewing the familiar case of a real-valued function f(x) of a real variable x ∈ (−∞, ∞). Consider the specific example f(x) = x². This function is nonnegative and has a minimum value of zero which it attains at x = 0. Consider the sequence of numbers

x₀, x₁, x₂, x₃, . . . , x_k, . . . where x_k = 1/2ᵏ,

and note that

lim_{k→∞} f(x_k) = 0.
Figure 7.22: (a) The function f(x) = x² for −∞ < x < ∞ and (b) the function f(x) = 1 for x ≤ 0, f(x) = x² for x > 0.
The sequence 1/2, 1/2², . . . , 1/2ᵏ, . . . is called a minimizing sequence in the sense that the value of the function f(x_k) converges to the minimum value of f as k → ∞. Moreover, observe that

lim_{k→∞} x_k = 0

as well, and so the sequence itself converges to the minimizer of f, i.e. to x = 0. This latter feature is true because in this example

f( lim_{k→∞} x_k ) = lim_{k→∞} f(x_k).
As we know, not all functions have a minimum value, even if they happen to have a finite greatest lower bound. We now consider an example to illustrate the fact that a minimizing sequence can be used to find the greatest lower bound of a function that does not have a minimum. Consider the function f(x) = 1 for x ≤ 0 and f(x) = x² for x > 0. This function is non-negative, and in fact, it can take values arbitrarily close to the value 0. However it does not have a minimum value since there is no value of x for which f(x) = 0; (note that f(0) = 1). The greatest lower bound or infimum (denoted by “inf”) of f is

inf_{−∞<x<∞} f(x) = 0.

Again consider the sequence of numbers

x₀, x₁, x₂, x₃, . . . , x_k, . . . where x_k = 1/2ᵏ,

and note that

lim_{k→∞} f(x_k) = 0.
In this case the value of the function f(x_k) converges to the infimum of f as k → ∞. However since

lim_{k→∞} x_k = 0,

the limit of the sequence itself is x = 0 and f(0) is not the infimum of f. This is because in this example

f( lim_{k→∞} x_k ) ≠ lim_{k→∞} f(x_k).
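This distinction is easy to see numerically; a small Python sketch (our own illustration) of the two examples:

def f_smooth(x):   # f(x) = x^2
    return x*x

def f_jump(x):     # f(x) = 1 for x <= 0, f(x) = x^2 for x > 0
    return 1.0 if x <= 0 else x*x

xs = [1.0/2**k for k in range(1, 30)]     # minimizing sequence x_k -> 0
print([f_smooth(x) for x in xs[-3:]])     # values tending to 0 = f_smooth(0)
print([f_jump(x) for x in xs[-3:]])       # values tending to 0, yet f_jump(0) = 1

The sequence values f(x_k) tend to 0 in both cases, but only in the first case does f evaluated at the limit point of the sequence agree with the limit of the values.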
Returning now to a functional, suppose that we are to find the infimum (or the minimum if it exists) of a functional F{φ} over an admissible set of functions A. Let

inf_{φ∈A} F{φ} = m (> −∞).

Necessarily there must exist a sequence of functions φ₁, φ₂, . . . in A such that

lim_{k→∞} F{φ_k} = m;

such a sequence is called a minimizing sequence.

If the sequence φ₁, φ₂, . . . converges to a limiting function φ*, and if

F{ lim_{k→∞} φ_k } = lim_{k→∞} F{φ_k},

then it follows that F{φ*} = m and the function φ* is the minimizer of F. The functions φ_k of a minimizing sequence can be considered to be approximate solutions of the minimization problem.

Just as in the second example of this section, in some variational problems the limiting function φ* of a minimizing sequence φ₁, φ₂, . . . does not minimize the functional F; see the last Example of this section.
7.11.1 The Ritz method

Suppose that we are to minimize a functional F{φ} over an admissible set A. Consider an infinite sequence of functions φ₁, φ₂, . . . in A. Let A_p be the subset of functions in A that can be expressed as a linear combination of the first p functions φ₁, φ₂, . . . , φ_p. In order to minimize F over the subset A_p we must simply minimize

F̂(α₁, α₂, . . . , α_p) = F{α₁φ₁ + α₂φ₂ + . . . + α_pφ_p}

with respect to the real parameters α₁, α₂, . . . , α_p. Suppose that the minimum of F on A_p is denoted by m_p. Clearly A₁ ⊂ A₂ ⊂ A₃ . . . ⊂ A and therefore m₁ ≥ m₂ ≥ m₃ ≥ . . .¹⁰. Thus, in the so-called Ritz method, we minimize F over a subset A_p to find an approximate minimizer; moreover, increasing the value of p improves the approximation in the sense of the footnote below.

¹⁰If the sequence φ₁, φ₂, . . . is complete, and the functional F{φ} is continuous in the appropriate norm, then one can show that lim_{p→∞} m_p = m.
Example: Consider an elastic bar of length L and modulus E that is fixed at both ends and carries a distributed axial load b(x). A displacement field u(x) must satisfy the boundary conditions u(0) = u(L) = 0 and the associated potential energy is

F{u} = ∫₀^L ½ E (u′)² dx − ∫₀^L b u dx.
We now use the Ritz method to find an approximate displacement field that minimizes F. Consider the sequence of functions v₁, v₂, v₃, . . . where

v_p = sin(pπx/L);

observe that v_p(0) = v_p(L) = 0 for all integers p. Consider the function

u_n(x) = Σ_{p=1}^{n} α_p sin(pπx/L)

for any integer n ≥ 1 and evaluate

F̂(α₁, α₂, . . . , α_n) = F{u_n} = ∫₀^L ½ E (u′_n)² dx − ∫₀^L b u_n dx.
Since

∫₀^L 2 cos(pπx/L) cos(qπx/L) dx = { 0 for p ≠ q,   L for p = q },

it follows that

∫₀^L (u′_n)² dx = ∫₀^L [ Σ_{p=1}^{n} α_p (pπ/L) cos(pπx/L) ] [ Σ_{q=1}^{n} α_q (qπ/L) cos(qπx/L) ] dx = (1/2) Σ_{p=1}^{n} α_p² p²π²/L.
Therefore

F̂(α₁, α₂, . . . , α_n) = F{u_n} = Σ_{p=1}^{n} [ (1/4) E α_p² p²π²/L − α_p ∫₀^L b sin(pπx/L) dx ].     (7.129)
To minimize F̂(α₁, α₂, . . . , α_n) with respect to α_p we set ∂F̂/∂α_p = 0. This leads to

α_p = [ ∫₀^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ] for p = 1, 2, . . . , n.     (7.130)
Therefore by substituting (7.130) into (7.129) we find that the n-term Ritz approximation of the energy is

− Σ_{p=1}^{n} (1/4) E α_p² p²π²/L where α_p = [ ∫₀^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ],

and the corresponding approximate displacement field is given by

u_n = Σ_{p=1}^{n} α_p sin(pπx/L) where α_p = [ ∫₀^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ].
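As an illustration, the following Python sketch (our own; it assumes E = L = 1 and a uniform load b(x) = 1, neither of which is specified above) carries out the Ritz calculation and compares it with the exact solution u(x) = x(1 − x)/2 of Eu″ + b = 0, u(0) = u(1) = 0:

import numpy as np
from scipy.integrate import quad

E, L, n = 1.0, 1.0, 10
b = lambda x: 1.0                          # assumed uniform load

# Ritz coefficients from (7.130).
alpha = []
for p in range(1, n + 1):
    load_p, _ = quad(lambda x: b(x)*np.sin(p*np.pi*x/L), 0.0, L)
    alpha.append(load_p/(E*p**2*np.pi**2/(2.0*L)))

def u_ritz(x):
    return sum(a*np.sin((p + 1)*np.pi*x/L) for p, a in enumerate(alpha))

print(u_ritz(0.5))        # about 0.125
print(0.5*(L - 0.5)/2.0)  # exact value 0.125

With only the odd terms contributing (the even-p load integrals vanish for a uniform b), ten terms already reproduce the exact midpoint deflection to about three decimal places.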
7.12 Worked Examples.
Example 7.N: Consider two given points (x₁, h₁) and (x₂, h₂), with h₁ > h₂, that are to be joined by a smooth wire. The wire is permitted to have any shape, provided that it does not enter the interior of the circular region (x − x₀)² + (y − y₀)² ≤ R². A bead is released from rest from the point (x₁, h₁) and slides
Figure 7.23: A curve y = φ(x) joining (x₁, h₁) to (x₂, h₂) which is disallowed from entering the forbidden region (x − x₀)² + (φ(x) − y₀)² < R².
along the wire (without friction) due to gravity. For what shape of wire is the time of travel from (x₁, h₁) to (x₂, h₂) least?

Here the wire may not enter the interior of the prescribed circular region. Therefore in considering different wires that connect (x₁, h₁) to (x₂, h₂), we may only consider those that lie entirely outside this region:

(x − x₀)² + (φ(x) − y₀)² ≥ R²,  x₁ ≤ x ≤ x₂.     (i)

The travel time of the bead is again given by (7.1) and the test functions must satisfy the same requirements as in the first example except that, in addition, they must satisfy the (inequality) constraint (i). Our task is to minimize T{φ} over the set A₁ subject to the constraint (i).
Example 7.N: Buckling: Consider a beam whose centerline occupies the interval y = 0, 0 < x < L, in an undeformed configuration. A compressive force P is applied at x = L and the beam adopts a buckled shape described by y = φ(x). Figure 7.24 shows the centerline of the beam in both the undeformed and deformed configurations. The beam is fixed by a pin at x = 0; the end x = L is also pinned but is permitted to move along the x-axis. The prescribed geometric boundary conditions on the deflected shape of the beam are thus φ(0) = φ(L) = 0.

By geometry, the curvature κ(x) of a curve y = φ(x) is given by

κ(x) = φ″(x)/[1 + (φ′(x))²]^{3/2}.

From elasticity theory we know that the bending energy per unit length of a beam is (1/2)Mκ and that the bending moment M is related to the curvature κ by M = EIκ, where EI is the bending stiffness of the beam. Thus the bending energy associated with a differential element of the beam is (1/2)EIκ² ds, where ds
Figure 7.24: An elastic beam in undeformed (lower figure) and buckled (upper figure) configurations.
is arc length along the deformed beam. Thus the total bending energy in the beam is

∫₀^L ½ EI κ²(x) ds

where the arc length s is related to the coordinate x by the geometric relation

ds = √(1 + (φ′(x))²) dx.

Thus the total bending energy of the beam is

∫₀^L ½ EI (φ″)²/[1 + (φ′)²]^{5/2} dx.
Next we need to account for the potential energy associated with the compressive force P on the beam. Since the change in length of a differential element is ds − dx, the amount by which the right hand end of the beam moves leftwards is

∫₀^L ds − ∫₀^L dx = ∫₀^L √(1 + (φ′)²) dx − L.

Thus the potential energy associated with the applied force P is

−P ( ∫₀^L √(1 + (φ′)²) dx − L ).
Therefore the total potential energy of the system is

Φ{φ} = ∫₀^L ½ EI (φ″)²/[1 + (φ′)²]^{5/2} dx − ∫₀^L P ( √(1 + (φ′)²) − 1 ) dx.
The Euler equation, which for such a functional has the general form

d²/dx² (f_φ″) − d/dx (f_φ′) + f_φ = 0,

simplifies in the present case since f does not depend explicitly on φ. The last term above therefore drops out and the resulting equation can be integrated once immediately. This eventually leads to the Euler equation

d/dx { φ″/[1 + (φ′)²]^{5/2} } + [ φ′/(2[1 + (φ′)²]^{1/2}) ] { P/(EI/2) + 5 ( φ″/[1 + (φ′)²]^{3/2} )² } = c

where c is a constant of integration, and the natural boundary conditions are

φ″(0) = φ″(L) = 0.
Example 7.N: Linearize the BVP in the buckling problem above. Also, approximate the energy and derive the Euler equation associated with it.
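A sketch of the requested linearization (our own, included since the exercise is stated without solution): for small deflections, √(1 + (φ′)²) − 1 ≈ ½(φ′)² and the curvature κ ≈ φ″, so the energy is approximated by

Φ{φ} ≈ ∫₀^L [ ½ EI (φ″)² − ½ P (φ′)² ] dx,

whose Euler equation and natural boundary conditions are

EI φ⁗ + P φ″ = 0,  φ(0) = φ(L) = 0,  φ″(0) = φ″(L) = 0.

Nontrivial solutions φ(x) = sin(nπx/L) exist when P = n²π²EI/L²; the smallest such load, P = π²EI/L², is the classical Euler buckling load.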
Example 7.N: Consider u(x, t), where 0 ≤ x ≤ L, 0 ≤ t ≤ T, and the functional

F{u} = ∫₀^T ∫₀^L [ ½ u_t² − ½ u_x² ] dx dt.

The Euler equation is the wave equation

u_tt − u_xx = 0.
Example 7.N: Similarly, consider the functional

F{u} = ∫₀^T ∫₀^L [ ½ u_t² − ½ u_x² − ½ m² u² ] dx dt.

The Euler equation is the Klein-Gordon equation

u_tt − u_xx + m² u = 0.
Example 7.N: Null Lagrangian.
Example 7.2: Soap Film Problem. Consider two circular wires, each of radius R, that are placed coaxially, a distance 2H apart. The planes defined by the two circles are parallel to each other and perpendicular to their common axis. This arrangement of wires is dipped into a soapy bath and taken out. Determine the shape of the soap film that forms.

We shall assume that the soap film adopts the shape with minimum surface energy, which implies that we are to find the shape with minimum surface area. Suppose that the film spans across the two circular wires.
[Figure: two coaxial circular wires of radius R at x = ±H, spanned by a soap film of revolution y = φ(x), −H ≤ x ≤ H.]
By symmetry, the surface must coincide with the surface of revolution of some curve y = φ(x), −H ≤ x ≤ H. By geometry, the surface area of this film is

Area{φ} = 2π ∫ φ(x) ds = 2π ∫_{−H}^{H} φ(x) √(1 + (φ′)²) dx,

where we have used the fact that ds = √(1 + (φ′)²) dx, and this is to be minimized subject to the requirements φ(−H) = φ(H) = R and

φ(x) ≥ 0 for −H < x < H.
In order to determine the shape that minimizes the surface area we calculate its first variation δArea and set it equal to zero. This gives the Euler equation

d/dx [ φφ′/√(1 + (φ′)²) ] − √(1 + (φ′)²) = 0

which we can write as

[ φφ′/√(1 + (φ′)²) ] d/dx [ φφ′/√(1 + (φ′)²) ] − φφ′ = 0

or

d/dx [ φφ′/√(1 + (φ′)²) ]² − d/dx (φ²) = 0.
This can be integrated to give

(φ′)² = (φ/c)² − 1

where c is a constant. Integrating again and using the boundary conditions φ(H) = φ(−H) = R leads to

φ(x) = c cosh(x/c)     (i)

where c is to be determined from the algebraic equation

cosh(H/c) = R/c.     (ii)
Figure 7.25: Intersection of the curve described by ζ = cosh ξ with the straight line ζ = µξ.
Given H and R, if this equation can be solved for c, then the minimizing shape is given by (i) with this value (or values) of c.

To examine the solvability of (ii) set ξ = H/c and µ = R/H, so that this equation can be written as

cosh ξ = µξ.

As seen from Figure 7.25, the graph ζ = cosh ξ intersects the straight line ζ = µξ twice if µ > µ*; once if µ = µ*; and there is no intersection if µ < µ*. Here µ* ≈ 1.50888 is found by solving the pair of algebraic equations cosh ξ = µ*ξ, sinh ξ = µ*, where the latter equation reflects the tangency of the two curves at the contact point in this limiting case.
Thus in summary, if R < µ*H there is no shape of the soap film that extremizes the surface area; if R = µ*H there is a unique shape of the soap film given by (i) that extremizes the surface area; if R > µ*H there are two shapes of the soap film given by (i) that extremize the surface area (and further analysis investigating the stability of these configurations is needed in order to determine the physically realized shape).
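The critical value µ* quoted above is easy to verify numerically. The following Python sketch (our own) eliminates µ* from the two tangency equations and solves the resulting single equation with a standard root finder:

import math
from scipy.optimize import brentq

# Tangency of zeta = cosh(xi) and zeta = mu*xi requires cosh(xi) = mu*xi and
# sinh(xi) = mu; eliminating mu gives xi*sinh(xi) - cosh(xi) = 0.
xi_star = brentq(lambda xi: xi*math.sinh(xi) - math.cosh(xi), 0.1, 3.0)
mu_star = math.sinh(xi_star)
print(xi_star, mu_star)    # about 1.1997 and 1.50888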
Remark: In order to understand what happens when R < µ*H consider the following heuristic argument. There are three possible configurations of the soap film to consider: one, the film bridges across from one circular wire to the other but it does not form on the flat faces of the two circular wires themselves (which is the case analyzed above); two, the film forms on each circular wire but does not bridge the two wires; and three, the film does both of the above. We can immediately discard the third case since it involves more surface area than either of the first two cases. Consider the first possibility: the soap film spans across the two circular wires and, as an approximation, suppose that it forms a circular cylinder of radius R and length 2H. In this case the area of the soap film is 2πR(2H). In the second case, the soap film covers only the two end regions formed by the circular wires and so the area of the soap film is 2 × πR². Since 4πRH < 2πR² for 2H < R, and 4πRH > 2πR² for 2H > R, this suggests that the soap film will span across the two circular wires if R > 2H, whereas the soap will not span across the two circular wires if R < 2H (and would instead cover only the two circular ends).
Example 7.4: Minimum Drag Problem. Consider a space-craft whose shape is to be designed such that the drag on it is a minimum. The outer surface of the space-craft is composed of two segments as shown in Figure 7.26: the inner portion (x = 0 with 0 < y < h₁ in the figure) is a flat circular disk shaped nose of radius h₁, and the outer portion is obtained by rigidly rotating the curve y = φ(x), 0 < x < 1, about the x-axis. We are told that φ(0) = h₁, φ(1) = h₂ with the value of h₂ being given; the value of h₁ however is not prescribed and is to be chosen along with the function φ(x) such that the drag is minimized.
Figure 7.26: The shape of the space craft with minimum drag is generated by rotating the curve y = φ(x) about the x-axis. The space craft moves at a speed V in the −x-direction; the pressure at a point on the surface is p ∼ (V cos θ)² and it acts on a differential area dA = 2πφ ds.
According to the most elementary model of drag (due to Newton), the pressure at some point on a surface is proportional to the square of the normal speed of that point. Thus if the space craft has speed V relative to the surrounding medium, the pressure on the body at some generic point is proportional to (V cos θ)², where θ is the angle shown in Figure 7.26; this acts on a differential area dA = 2πy ds = 2πφ ds. The horizontal component of this force is therefore obtained by integrating dF cos θ = (V cos θ)² × (2πφ ds) × cos θ over the entire body. Thus the drag D is given, in suitable units, by

D = πh₁² + 2π ∫₀¹ φ(φ′)³/[1 + (φ′)²] dx,

where we have used the fact that ds = dx √(1 + (φ′)²) and cos θ = φ′/√(1 + (φ′)²).
To optimize this we calculate the first variation of D, remembering that both the function φ and the parameter h₁ can be varied. Thus we are led to

δD = 2πh₁ δh₁ + 2π ∫₀¹ (φ′)³/[1 + (φ′)²] δφ dx + 2π ∫₀¹ φ { 3(φ′)²/[1 + (φ′)²] − 2(φ′)⁴/[1 + (φ′)²]² } δφ′ dx

   = 2πh₁ δh₁ + 2π ∫₀¹ (φ′)³/[1 + (φ′)²] δφ dx + 2π ∫₀¹ { φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² } δφ′ dx.
Integrating the last term by parts and recalling that δφ(1) = 0 and δφ(0) = δh₁ (since the value of φ(1) is prescribed but the value of φ(0) = h₁ is not) we are led to

δD = 2πh₁ δh₁ + 2π ∫₀¹ (φ′)³/[1 + (φ′)²] δφ dx − 2π ∫₀¹ d/dx { φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² } δφ dx − 2π [ φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² ]_{x=0} δh₁.
The arbitrariness of δφ and δh₁ now yields

d/dx { φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² } − (φ′)³/[1 + (φ′)²] = 0 for 0 < x < 1,

[ (φ′)²(3 + (φ′)²)/[1 + (φ′)²]² ]_{x=0} = 1.     (i)
The differential equation (i)₁ and the natural boundary condition (i)₂ can be readily reduced to

d/dx { φ(φ′)³/[1 + (φ′)²]² } = 0 for 0 < x < 1,     φ′(0) = 1.     (ii)
The differential equation (ii) tells us that

φ(φ′)³/[1 + (φ′)²]² = c₁ for 0 < x < 1,     (iii)

where c₁ is a constant. Together with the given boundary conditions, we are therefore to solve the differential equation (iii) subject to the conditions

φ(0) = h₁,  φ(1) = h₂,  φ′(0) = 1,     (iv)

in order to find the shape φ(x) and the parameter h₁. Since φ(0) = h₁ and φ′(0) = 1, this shows that c₁ = h₁/4.
It is most convenient to write the solution of (iii) with c₁ = h₁/4 parametrically by setting φ′ = ξ. This leads to

φ = (h₁/4)( ξ⁻³ + 2ξ⁻¹ + ξ ),     x = (h₁/4)( (3/4)ξ⁻⁴ + ξ⁻² + log ξ ) + c₂,     1 > ξ > ξ₂,
where c₂ is a constant of integration and ξ is the parameter. On physical grounds we expect that the slope φ′ will decrease with increasing x and so we have supposed that ξ decreases as x increases; thus as x increases from 0 to 1 we have supposed that ξ decreases from ξ₁ to ξ₂ (where we know that ξ₁ = φ′(0) = 1). Since ξ = φ′ = 1 when x = 0, the preceding equation gives c₂ = −7h₁/16. Thus
φ = (h₁/4)( ξ⁻³ + 2ξ⁻¹ + ξ ),     x = (h₁/4)( (3/4)ξ⁻⁴ + ξ⁻² + log ξ − 7/4 ),     1 > ξ > ξ₂.     (v)
The boundary condition φ = h₁, φ′ = 1 at x = 0 has already been satisfied. The boundary condition φ = h₂, x = 1 at ξ = ξ₂ requires that

h₂ = (h₁/4)( ξ₂⁻³ + 2ξ₂⁻¹ + ξ₂ ),     1 = (h₁/4)( (3/4)ξ₂⁻⁴ + ξ₂⁻² + log ξ₂ − 7/4 ),     (vi)

which are two equations for determining h₁ and ξ₂. Dividing the first of (vi) by the second yields a single equation for ξ₂:
ξ₂⁵ − h₂ξ₂⁴ log ξ₂ + (7/4)h₂ξ₂⁴ + 2ξ₂³ − h₂ξ₂² + ξ₂ − (3/4)h₂ = 0.     (vii)
If this can be solved for ξ₂, then either equation in (vi) gives the value of h₁ and (v) then provides a parametric description of the optimal shape. For example if we take h₂ = 1 then the root of (vii) is ξ₂ ≈ 0.521703 and then h₁ ≈ 0.350943.
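These numbers are straightforward to reproduce; the following Python sketch (our own) solves (vii) for ξ₂ with h₂ = 1 and then recovers h₁ from the second equation of (vi):

import math
from scipy.optimize import brentq

h2 = 1.0

def eq_vii(xi):
    # Left hand side of equation (vii).
    return (xi**5 - h2*xi**4*math.log(xi) + 1.75*h2*xi**4
            + 2*xi**3 - h2*xi**2 + xi - 0.75*h2)

xi2 = brentq(eq_vii, 0.1, 0.9)
h1 = 4.0/(0.75*xi2**-4 + xi2**-2 + math.log(xi2) - 1.75)   # from (vi)_2
print(xi2, h1)   # about 0.521703 and 0.350943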
Example 7.5: Consider the variational problem where we are asked to minimize the functional

F{φ} = ∫₀¹ f(φ, φ′) dx

over some admissible set of functions A. Note that this is a special case of the standard problem where the function f(x, φ, φ′) is not explicitly dependent on x. In the present case f depends on x only through φ(x) and φ′(x).
The Euler equation is given, as usual, by

d/dx (f_φ′) − f_φ = 0 for 0 < x < 1.

Multiplying this by φ′ gives

φ′ d/dx (f_φ′) − φ′ f_φ = 0 for 0 < x < 1,

which can be written equivalently as

[ d/dx (φ′ f_φ′) − φ″ f_φ′ ] − [ d/dx f − φ″ f_φ′ ] = 0 for 0 < x < 1.

Since this simplifies to

d/dx [ φ′ f_φ′ − f ] = 0 for 0 < x < 1,

it follows that in this special case the Euler equation can be integrated once to obtain the simplified form

φ′ f_φ′ − f = constant for 0 < x < 1.
Remark: We could have taken advantage of this in, for example, the preceding problem.
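Indeed, as a check (ours, not in the notes): for the drag integrand f = φ(φ′)³/[1 + (φ′)²] of Example 7.4, which has no explicit x-dependence,

φ′ f_φ′ − f = φ(φ′)³ [ (3 + (φ′)²) − (1 + (φ′)²) ] / [1 + (φ′)²]² = 2φ(φ′)³/[1 + (φ′)²]²,

so the first integral φ′f_φ′ − f = constant reproduces equation (iii) of that example immediately.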
Example 7.6: Elastic bar. The following problem arises when examining the equilibrium state of a one-dimensional bar composed of a nonlinearly elastic material. An equilibrium state of the bar is characterized by a displacement field u(x) and the material of which the bar is composed is characterized by a potential Ŵ(u′). It is convenient to denote the derivative of Ŵ by σ̂,

σ̂(u′) = Ŵ′(u′),

so that σ(x) = σ̂(u′(x)) represents the stress at the point x in the bar. The bar has unit cross-sectional area and occupies the interval 0 ≤ x ≤ L in a reference configuration. The end x = 0 of the bar is fixed, so that u(0) = 0, a prescribed force P is applied at the end x = L, and a distributed force per unit length b(x) is applied along the length of the bar.

An admissible displacement field is required to be continuous on [0, L], piecewise continuously differentiable on [0, L] and to conform to the boundary condition u(0) = 0. The total potential energy associated with any admissible displacement field is

V{u} = ∫₀^L Ŵ(u′(x)) dx − ∫₀^L b(x)u(x) dx − Pu(L),

which can be written in the conventional form

V{u} = ∫₀^L f(x, u, u′) dx where f(x, u, u′) = Ŵ(u′) − bu − Pu′.
The actual displacement field minimizes the potential energy V over the admissible set, and so the three basic ingredients of the theory can now be derived as follows:

i. At any point x at which the displacement field is smooth, the Euler equation

d/dx (∂f/∂u′) − ∂f/∂u = 0

takes the explicit form

d/dx Ŵ′(u′) + b = 0,

which can be written in terms of stress as

dσ/dx + b = 0.
ii. The displacement field u(x) satisfies the prescribed boundary condition u = 0 at x = 0. The natural boundary condition at the right hand end is given, according to equation (7.50) in Section 7.5.1, by

∂f/∂u′ = 0 at x = L,

which in the present case reduces to

σ(x) = Ŵ′(u′(x)) = P at x = L.
iii. Finally, suppose that u′ has a jump discontinuity at some location x = s. Then the first Weierstrass-Erdmann corner condition (7.88) requires that ∂f/∂u′ be continuous at x = s, i.e. that the stress σ(x) must be continuous at x = s:

σ|_{x=s−} = σ|_{x=s+}.     (i)

The second Weierstrass-Erdmann corner condition (7.89) requires that f − u′ ∂f/∂u′ be continuous at x = s, i.e. that the quantity W − u′σ must be continuous at x = s:

[W − u′σ]_{x=s−} = [W − u′σ]_{x=s+}.     (ii)
Remark: The generalization of the quantity W − u′σ to 3-dimensions is known as the Eshelby tensor.
Figure 7.27: (a) A nonmonotonic (rising-falling-rising) stress response function σ̂(u′) and (b) the corresponding nonconvex energy Ŵ(u′).
In order to illustrate how a discontinuity in u′ can arise in an elastic bar, observe first that according to the first Weierstrass-Erdmann condition (i), the stress σ on either side of x = s has to be continuous. Thus if the function σ̂(u′) is monotonically increasing, then it follows that σ = σ̂(u′) has a unique solution u′ corresponding to a given σ, and so u′ must also be continuous at x = s. On the other hand if σ̂(u′) is a nonmonotonic function as, for example, shown in Figure 7.27(a), then more than one value of u′ can correspond to the same value of σ, and so in such a case, even though σ(x) is continuous at x = s it is possible for u′ to be discontinuous, i.e. for u′(s−) ≠ u′(s+), as shown in the figure. The energy function Ŵ(u′) sketched in Figure 7.27(b) corresponds to the stress function σ̂(u′) shown in Figure 7.27(a). In particular, the values of u′ at which σ̂ has a local maximum and local minimum correspond to inflection points of the energy function Ŵ(u′), since Ŵ″ = 0 when σ̂′ = 0.
The second Weierstrass-Erdmann condition (ii) tells us that the stress σ at the discontinuity has to have a special value. To see this we write out (ii) explicitly as

Ŵ(u′(s+)) − u′(s+)σ = Ŵ(u′(s−)) − u′(s−)σ

and then use σ̂(u′) = Ŵ′(u′) to express it in the form

∫_{u′(s−)}^{u′(s+)} σ̂(v) dv = σ [ u′(s+) − u′(s−) ].     (iii)
This implies that the value of σ must be such that the area under the stress response curve in Figure 7.27(a) from $u'(s-)$ to $u'(s+)$ equals the area of the rectangle with the same base and height σ; or equivalently, that the two shaded areas in Figure 7.27(a) must be equal.
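This equal-area (Maxwell) construction is easy to carry out numerically. The following sketch is not from the notes: the rising-falling-rising cubic stress response $\hat{\sigma}$ is a hypothetical choice made only for illustration. It finds the stress at which condition (iii) is satisfied:

```python
# A numerical sketch of the equal-area construction (iii); sigma_hat is hypothetical.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def sigma_hat(v):
    return v**3 - 3.0*v**2 + 2.5*v           # local max ~0.64, local min ~0.37

def excess_area(s):
    # the outermost roots of sigma_hat(v) = s play the roles of u'(s-) and u'(s+)
    roots = np.sort(np.roots([1.0, -3.0, 2.5, -s]).real)
    area, _ = quad(lambda v: sigma_hat(v) - s, roots[0], roots[-1])
    return area                               # condition (iii): this must vanish

sigma_star = brentq(excess_area, 0.4, 0.6)    # bracket between local min and max
print(sigma_star)                             # 0.5: the inflection-point value, by symmetry
```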
Example 7.7: Non-smooth extremal. Find a curve that extremizes
$$F\{\phi\} = \int_0^1 f(x, \phi(x), \phi'(x))\,dx,$$
that begins from (0, a) and ends at (1, b) after contacting a given curve y = g(x).
Remark: By identifying the curve y = g(x) with the surface of a mirror and specializing the functional F to
the travel time of light, one can thus derive the law of reflection for light.
Example 7.8: Inequality constraint. Find a curve that extremizes
$$I\{\phi\} = \int_0^a \big(\phi'(x)\big)^3\,dx, \qquad \phi(0) = \phi(a) = 0,$$
that is prohibited from entering the interior of the circle
$$(x - a/2)^2 + y^2 = b^2.$$
Example 7.9: An example to caution against simple-minded discretization (due to John Ball). Let
$$F\{u\} = \int_0^1 \big(u^3(x) - x\big)^2 \big(u'(x)\big)^6\,dx$$
for all functions such that u(0) = 0, u(1) = 1. Clearly $F\{u\} \geq 0$. Moreover $F\{\bar{u}\} = 0$ for $\bar{u}(x) = x^{1/3}$. Therefore the minimizer of $F\{u\}$ is $\bar{u}(x) = x^{1/3}$.
Discretize the interval [0, 1] into N segments and take a continuous test function that is linear on each segment. Calculate the functional F at this test function, minimize it at fixed N, and finally take the limit as N tends to infinity. What do you get? (You will get an answer, but not the correct one.)
To anticipate this difficulty in a different way, consider a two-element discretization and take the continuous test function
$$u_1(x) = \begin{cases} c\,x & \text{for } 0 < x < h, \\ \bar{u}(x) & \text{for } h < x < 1, \end{cases} \tag{i}$$
where continuity at x = h requires $c = h^{-2/3}$. Calculate $F\{u_1\}$ for this function, take the limit as $h \to 0$, and observe that $F\{u_1\}$ does not go to zero (i.e. to $F\{\bar{u}\}$).
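A quick numerical check of this last claim is sketched below; it is not from the notes and simply evaluates $F\{u_1\}$ with the continuity choice $c = h^{-2/3}$, noting that the piece on (h, 1) contributes nothing since $\bar{u}^3 = x$ there:

```python
# Evaluate F{u_1} for the two-piece test function (i) and watch it grow as h -> 0.
import numpy as np
from scipy.integrate import quad

def F_of_u1(h):
    c = h**(-2.0/3.0)                                 # continuity at x = h: c h = h^(1/3)
    integrand = lambda x: ((c*x)**3 - x)**2 * c**6    # (u1^3 - x)^2 (u1')^6 on (0, h)
    val, _ = quad(integrand, 0.0, h)
    return val

for h in [0.1, 0.01, 0.001]:
    print(h, F_of_u1(h))      # grows like (8/105)/h instead of tending to F{u_bar} = 0
```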
Example 7.10: Legendre necessary condition for a local minimum. Let
$$F(\varepsilon) = \int_0^1 f(x, \phi + \varepsilon\eta, \phi' + \varepsilon\eta')\,dx$$
for all functions η(x) with η(0) = η(1) = 0. Show that
$$F''(0) = \int_0^1 \Big[ f_{\phi\phi}\,\eta^2 + 2 f_{\phi\phi'}\,\eta\eta' + f_{\phi'\phi'}\,(\eta')^2 \Big]\,dx.$$
Suppose that $F''(0) \geq 0$ for all admissible functions η. Show that it is necessary that
$$f_{\phi'\phi'}(x, \phi(x), \phi'(x)) \geq 0 \quad \text{for } 0 \leq x \leq 1.$$
Example 7.11: Bending of a thin plate. Consider a thin rectangular plate of dimensions a ×b that occupies
the region A = {(x, y) | 0 < x < a, 0 < y < b} of the x, y-plane. A distributed pressure loading p(x, y)
is applied on the planar face of the plate in the z-direction, and the resulting deflection of the plate in the
z-direction is denoted by w(x, y). The edges x = 0 and y = 0 of the plate are clamped, which implies the geometric restrictions that the plate can neither deflect nor rotate along these edges:
$$w = 0, \quad \frac{\partial w}{\partial x} = 0 \quad \text{on } x = 0,\ 0 < y < b; \qquad
w = 0, \quad \frac{\partial w}{\partial y} = 0 \quad \text{on } y = 0,\ 0 < x < a; \tag{i}$$
the edge y = b is hinged, which means that its deflection must be zero but there is no geometric restriction
on the slope:
$$w = 0 \quad \text{on } y = b,\ 0 < x < a; \tag{ii}$$
and finally the edge x = a is free in the sense that the deflection and the slope are not geometrically restricted
in any way.
The potential energy of the plate and loading associated with an admissible deflection field, i.e. a function w(x, y) that obeys (i) and (ii), is given by
$$\Phi\{w\} = \frac{D}{2}\int_A \left[\left(\frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2}\right)^2 - 2(1-\nu)\left(\frac{\partial^2 w}{\partial x^2}\,\frac{\partial^2 w}{\partial y^2} - \left(\frac{\partial^2 w}{\partial x\,\partial y}\right)^2\right)\right] dx\,dy - \int_A p\,w\;dx\,dy, \tag{iii}$$
where D and ν are constants. The actual deflection of the plate is given by the minimizer of Φ. We are asked
to derive the Euler equation and the natural boundary conditions to be satisfied by the minimizer w.
Answer: The Euler equation is
$$\frac{\partial^4 w}{\partial x^4} + 2\,\frac{\partial^4 w}{\partial x^2 \partial y^2} + \frac{\partial^4 w}{\partial y^4} = \frac{p}{D}, \qquad 0 < x < a,\ 0 < y < b, \tag{iv}$$
and the natural boundary conditions are
$$\frac{\partial^2 w}{\partial y^2} + \nu\,\frac{\partial^2 w}{\partial x^2} = 0 \quad \text{on } y = b,\ 0 < x < a; \tag{v}$$
and
$$\frac{\partial^2 w}{\partial x^2} + \nu\,\frac{\partial^2 w}{\partial y^2} = 0 \quad\text{and}\quad \frac{\partial^3 w}{\partial x^3} + (2-\nu)\,\frac{\partial^3 w}{\partial x\,\partial y^2} = 0 \quad \text{on } x = a,\ 0 < y < b. \tag{vi}$$
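The interior part of this calculation can be verified symbolically: the Euler-Lagrange rule for an integrand depending on second derivatives (the first-derivative terms are absent here), $f_w + \partial_{xx} f_{w_{xx}} + \partial_{xy} f_{w_{xy}} + \partial_{yy} f_{w_{yy}} = 0$, applied to the energy density in (iii) must reproduce (iv). A sketch of this check, not from the notes:

```python
# Symbolic check that the Euler equation of (iii) is the biharmonic equation (iv).
import sympy as sp

x, y, D, nu = sp.symbols('x y D nu')
W, P = sp.Function('w'), sp.Function('p')

# placeholder symbols for w and its second derivatives inside the energy density
w0, wxx, wyy, wxy = sp.symbols('w0 wxx wyy wxy')
f = D/2*((wxx + wyy)**2 - 2*(1 - nu)*(wxx*wyy - wxy**2)) - P(x, y)*w0

reps = {w0: W(x, y), wxx: W(x, y).diff(x, 2),
        wyy: W(x, y).diff(y, 2), wxy: W(x, y).diff(x, y)}
EL = (f.diff(w0).subs(reps)
      + f.diff(wxx).subs(reps).diff(x, 2)
      + f.diff(wxy).subs(reps).diff(x, y)
      + f.diff(wyy).subs(reps).diff(y, 2))

print(sp.expand(EL))   # D*(w_xxxx + 2 w_xxyy + w_yyyy) - p, i.e. equation (iv)
```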
Example 7.12: Consider the functional
$$F\{\phi\} = \int_0^1 \Big[\big((\phi')^2 - 1\big)^2 + \phi^2\Big]\,dx, \qquad \phi(0) = \phi(1) = 0, \tag{i}$$
and determine a minimizing sequence $\phi_1, \phi_2, \phi_3, \ldots$ such that $F\{\phi_k\}$ approaches its infimum as $k \to \infty$. If the minimizing sequence itself converges to a limit function φ, show that $F\{\phi\}$ is not the infimum of F.
Remark: Note that this functional is non-negative. If the functional takes the value zero then, since its integrand is the sum of two non-negative terms, each of those terms must vanish individually. Thus we must have $\phi'(x) = \pm 1$ and $\phi(x) = 0$ on the interval 0 < x < 1. These cannot both be satisfied by a regular function.
Figure 7.28: Sawtooth function $\phi_k(x)$ with k local maxima and linear segments of slope ±1: $\phi_k(x) = x - nh$ on the rising segments and $\phi_k(x) = -x + (n+1)h$ on the falling segments, where h = 1/k.
Let $\phi_k(x)$ be the piecewise linear saw-tooth function with k local maxima shown in Figure 7.28; the slope of each linear segment is ±1. Note that each tooth has base h = 1/k and height h/2. Thus as k increases there are more and more teeth, each of which has a smaller base and smaller height. Observe that the first term in the integrand of (i) vanishes identically for every k; the second term, which equals the area under the square of $\phi_k$, approaches zero as $k \to \infty$. Thus the family of saw-tooth functions is a minimizing sequence of (i). However, note that since $\phi_k(x) \to 0$ at each fixed x as $k \to \infty$, the limiting function to which this sequence converges, i.e. φ(x) = 0, is not a minimizer of (i).
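This behavior is simple to see numerically. The sketch below, not from the notes, evaluates F along the sawtooth sequence and for the pointwise limit φ = 0:

```python
# F{phi_k} -> 0 along the sawtooth sequence, although the limit phi = 0 gives F{0} = 1.
import numpy as np

def phi_k(x, k):
    h = 1.0 / k
    t = np.mod(x, h)                        # position within the current tooth
    return np.where(t < h/2, t, h - t)

def dphi_k(x, k):
    h = 1.0 / k
    return np.where(np.mod(x, h) < h/2, 1.0, -1.0)   # slopes are exactly +-1

x = np.linspace(0.0, 1.0, 100001)
for k in [1, 4, 16, 64]:
    p, dp = phi_k(x, k), dphi_k(x, k)
    Fk = np.mean((dp**2 - 1.0)**2 + p**2)   # mean of the integrand over [0,1] ~ F
    print(k, Fk)                            # approx 1/(12 k^2), tending to 0
print(np.mean((0.0 - 1.0)**2 + 0.0))        # F{0} = 1 for the pointwise limit
```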

2

3

Electronic Publication

Rohan Abeyaratne Quentin Berg Professor of Mechanics Department of Mechanical Engineering 77 Massachusetts Institute of Technology Cambridge, MA 02139-4307, USA Copyright c by Rohan Abeyaratne, 1987 All rights reserved

Abeyaratne, Rohan, 1952Lecture Notes on The Mechanics of Elastic Solids. Volume 1: A Brief Review of Some Mathematical Preliminaries / Rohan Abeyaratne – 1st Edition – Cambridge, MA:

ISBN-13: 978-0-9791865-0-9 ISBN-10: 0-9791865-0-1

QC

Please send corrections, suggestions and comments to abeyaratne.vol.1@gmail.com

Updated June 25 2007

4

i Dedicated with admiration and affection to Matt Murphy and the miracle of science. for the gift of renaissance. .

.

2. . I have been most fortunate to have had the opportunity to apprentice under these inspiring and distinctive scholars. Janet Blume. I am also indebted to the many MIT students who have given me enormous fulfillment and joy to be part of their education. It has been organized as follows: Volume I: A Brief Review of Some Mathematical Preliminaries Volume II: Continuum Mechanics Volume III: Elasticity My appreciation for mechanics was nucleated by Professors Douglas Amarasekara and Munidasa Ranaweera of the (then) University of Ceylon. The material in the current presentation is still meant to be a set of lecture notes. Mechanics of Continuous Media. Finite Element Analysis of Solids and Fluids.071: 2. whose influence on me cannot be overstated. Advanced Mechanical Behavior of Materials. colleague and friend Jim Knowles. refined and expanded on every following occasion that I taught these classes.099: Mechanics of Solid Materials.073: 2.094: 2.iii PREFACE The Department of Mechanical Engineering at MIT offers a series of graduate level subjects on the Mechanics of Solids and Structures which include: 2. Morton E. and was subsequently shaped and grew substantially under the influence of Professors James K.072 and 2. and the current three volumes are comprised of the lecture notes I developed for them.075: 2. I would especially like to acknowledge a great many illuminating and stimulating interactions with my mentor. Molecular Modeling and Simulation for Mechanics. My understanding of elasticity as well as these notes have also benefitted greatly from many useful conversations with Kaushik Bhattacharya. Solid Mechanics: Plasticity and Inelastic Deformation.074 (formerly known as 2. not a text book.072: 2. I have had the opportunity to regularly teach the second and third of these subjects.080: 2. Knowles and Eli Sternberg of the California Institute of Technology. and Computational Mechanics of Materials. Over the years. The first draft of these notes was produced in 1987 and they have been corrected. Structural Mechanics. Eliot Fried. Solid Mechanics: Elasticity.083).074: 2.095: 2.

E. Gelfand and S. Stewart Silling and Nicolas Triantafyllidis. The treatment is concise. results. Academic Press. Pasadena. James. or it illustrates a general concept. Knowles and E. Chapman and Hall. or it establishes a result that will be used subsequently (possibly in a later volume). Linear Vector Spaces and Cartesian Tensors. Stelios Kyriakides. There are a number of Worked Examples at the end of each chapter which are an essential part of the notes. Continuum Mechanics: Concise Theory and Problems. It is most certainly not meant to be a source for learning these topics for the first time. M. J.M. I have drawn on a number of sources over the years as I prepared my lectures. Calculus of Variations. Personal taste has led me to include a few special (but still well-known) topics. 1997.1999.L. Richard D. 1991. For example. and sections on the so-called Eshelby problem and the effective behavior of two-phase materials in Volume III. Parks. Linear Algebra is a far richer subject than the treatment here.V. An Introduction to Continuum Mechanics. Fomin. CA 1978. Dover. New York. I cannot recall every source I have used but certainly they include those listed at the end of each chapter. Ericksen. 1981. designed to review those aspects of mathematics that will be encountered in the subsequent volumes. Volume II: Continuum Mechanics P. or a proof. The content of these notes are entirely classical. Prentice Hall. Phoebus Rosakis. selective and limited in scope. Introduction to the Thermodynamics of Solids. Sternberg.iv Gurtin. (Unpublished) Lecture Notes for AM136: Finite Elasticity. which is limited to real 3-dimensional Euclidean vector spaces. J. K. J. which I gratefully acknowledge. . and none of the material here is original. In a more general sense the broad approach and philosophy taken has been influenced by: Volume I: A Brief Review of Some Mathematical Preliminaries I. more details. Examples of this include sections on the statistical mechanical theory of polymer chains and the lattice theory of crystalline solids in the discussion of constitutive theory in Volume II. and illustrative examples. Chadwick. 1963. Volume I of these notes provides a collection of essential definitions. Knowles. The topics covered in Volumes II and III are largely those one would expect to see covered in such a set of lecture notes.K. David M. in the best sense of the word. California Institute of Technology. Gurtin. Many of these examples either provide. of a result that had been quoted previously in the text. Oxford University Press.

C. c. Thus. 1965. and A. Volume III/3.. Pasadena. CA. edited by C. H. “o” will denote the null vector while “0” will denote the null linear transformation. J. McGraw-Hill.. P. in Mechanics of Solids . The linear theory of elasticity. Goodier. and uppercase boldface Latin letters will denote linear transformations. The nonlinear field theories of mechanics.. The following notation will be used consistently in Volume I: Greek letters will denote real numbers. B. 1944. a. In particular. (Unpublished) Lecture Notes for AM135: Elasticity.E.N. 1976. Love. will denote linear transformations. in Handb¨ch der u Physik. As much as possible this notation will also be used in Volumes II and III though there will be some lapses (for reasons of tradition). . Edited by S. will denote scalars (real numbers). Truesdell and W. A Treatise on the Mathematical Theory of Elasticity. Dover. Noll. 1984. will denote vectors.. S. Knowles.Volume II. b. 1987. α... lowercase boldface Latin letters will denote vectors. . A.v C. Theory of Elasticity. Gurtin. K. Fl¨gge. u Volume IIII: Elasticity M. . Timoshenko and J. Springer. E. Springer-Verlag. for example. Truesdell. California Institute of Technology. γ. β.

vi .

. . . . . . . . . . . . .5 1. . 2 Vectors and Linear Transformations 2. . . . . . . . . . . . . . . . trace. . . . . . . . . . . .2 1.2 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Indicial notation .3 Euclidean point space . . . . . . . . . . . . . . . . . . Determinant. . . . . .4 Components of a vector in a basis. . . . . . . . vii . . . . . . . . . . . . . . .1 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Components of a linear transformation in a basis. . . Worked Examples. . . . . . . . . . . . . . . . .3 3. . . . .1 2. . . . . . . . . .4 1. . . . . . . . . . . . . .Contents 1 Matrix Algebra and Indicial Notation 1. . . scalar-product and norm . . . . . .1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 1. . . . . . . . . . . . . . 1 1 5 7 9 10 11 17 18 20 20 26 41 41 43 45 47 Summation convention . Kronecker delta . . .6 Matrix algebra . . . . . . . . . Worked Examples. . . . . . . . . Cartesian Tensors 3. . . . . . Components in two bases. . . . . 3 Components of Tensors. . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . . . . . Linear Transformations. . . . . . . . . . .1. . . . . . . . . . . . . . . . . . . . . The alternator or permutation symbol . . . . . . . . . . . . . 2. .2 2. . .

. . . . . . . . . . 102 Metric coefficients. . . . . . . .3 6. . . . . . . . . .1 5. .2 Introductory Remarks . . . . . . . . . . . . . . . . . . . . e3 ) . . . .1 4.3. . . . Integral theorems . . . e2 . . . . . . . . . . . . . . . . . . . . . . . .2 4. . . . . Inverse transformation. . . . 108 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 3. . . . . Symmetry of a scalar-valued function . . . . . . . . . . . . . . . . . . . .6 CONTENTS Cartesian Tensors . .1 6. . . . . . . . . . . . . An example in three-dimensions. . . . . . . . . . . . . .4 Coordinate transformation. . . . . .1 6. . . . . . . . . . . . . . . . . Worked Examples. . .2. 50 52 67 68 69 72 73 74 77 85 85 87 88 89 99 99 4 Symmetry: Groups of Linear Transformations 4. . . . . . . . . . . . . . . . .2 5. . . . . .2. .viii 3. . 107 e ˆ e ˆ ˆ 6. . . . . . . . . . . . . . . .3 Transformation of Basic Tensor Relations . . .2. . Localization . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. .1 6. . Worked Examples. . . . . .5 4. . 108 Gradient of a vector field . . . . . .4 4. . . 5 Calculus of Vector and Tensor Fields 5. . . . 109 . . . . . . . . . . . . . . . . .2 Gradient of a scalar field . . . . . . . . 106 Components of ∂ˆi /∂ xj in the local basis (ˆ1 . . . . . . . . . .4 Notation and definitions. . . Lattices. . . . . . . . . . . . . . . . . . . . . . . . General Orthogonal Curvilinear Coordinates . . Worked Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Groups of Linear Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 4. . . . . . . . . . . . . . . . . . . . 102 6. . . . . . . . scale moduli. . . . . . .6 An example in two-dimensions. . . . .2 6. . . . . . . . . . . . . . . . . . . . . . . .3 5. . . . . . . . . . . . . 6 Orthogonal Curvilinear Coordinates 6. . . . . . . . . . . . . . . . . . . . . . . 104 Inverse partial derivatives .

. . . . . . . . . . . . . .5. . . . . . . . .1 7. . . . . .2 7. . . . . . . . . . .3 7. . . . . . . . . .6 6. . . . 123 Brief review of calculus. 136 7. . . . .4 Generalization: Free end-point. . . . . . .1 7. . . . . . . . . . 142 Generalization: End point of extremal lying on a curve. . . . . . . 154 . . . . . . . . . . 113 123 7 Calculus of Variations 7. . . . . . . . . . . 137 7. . . .6. . . . . . . . 110 Divergence of a symmetric 2-tensor field . . 110 Curl of a vector field . . . . . .1 7. . . . . . . . . . . . . . . . .6. 113 Worked Examples. . . . . . . . . . . . . .7 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . .5. 130 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Introduction. . . . . .4 6. . . . . . 140 Generalization: Multiple functions. . . . . .3. . . . . . . . . .3. . . . . . 128 Application of necessary condition δF = 0 . .5 6. . . . . . 112 Examples of Orthogonal Curvilinear Coordinates . . . . . . . . . 126 A necessary condition for an extremum . . . . . . . . . . . . . . . . . . . . . . . . . 111 Differential elements of volume . . . . . .5 Generalizations. .5 ix Divergence of a vector field . . . . . . . . . . . . 147 7. . . . 151 Algebraic constraints . . . . .6 Constrained Minimization . . 110 Laplacian of a scalar field . . . .CONTENTS 6. . . . . .4. . . . . Natural boundary conditions.3.3 6. . . . . 137 Generalization: Higher derivatives. . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . Euler equation. . . . . . . . . .3 7. . . . . . . The Brachistochrone Problem. . . . 130 An example. . . .4. 112 Differential elements of area . . . . . .2 Integral constraints. . . .3. . . . . . . 151 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 7. . . .5. . . . . . . . . . . .8 6. . . . . .3 The basic problem. .2 7. . . . . . . . . . . . .5.4 6. . . . . . .2 7. . . . . . . 132 A Formalism for Deriving the Euler Equation . . .3. . . . . . . . . . . . . .

. . . . . . . . . . .1 The Ritz method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 . . . . . . . . . . .3 7. . . . . .6. . . . .11. . . 185 7. . . . . . . 166 7. . . . . 178 7. . . . . . .1 Piecewise smooth minimizer with non-smoothness occuring at a prescribed location. . . . . . . . . . . . . . . . . . . . . . .9 Generalization to higher dimensional space. . . .8 7. . . . . .7 CONTENTS Differential constraints . . . . . . . . . . . . . . .10 Sufficient condition for convex functionals . . . . . . . . . . . . . . 157 7. . . . 180 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 . . 158 Piecewise smooth minimizer with non-smoothness occuring at an unknown location . . . . . . .12 Worked Examples. . . Second variation . 183 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Weirstrass-Erdman corner conditions .x 7. . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . .11 Direct method . . . . .2 7. . . . . . . . . . . . . .7. . . .

...1) Aij denotes the element located in the ith row and jth column. The column matrix   {x} =   1  x1 x2 .  (1..e... Amn     . Am2 .. i. Aij . a column matrix with m rows and one column element in row-i of the column matrix {a} m × n matrix element in row-i. column-j of the matrix [A] 1. [A] .... m × 1 matrix. Am1 A12 A22 .. xm      (1... ......2) .. for our purposes it is sufficient to consider a matrix to be a rectangular array of real numbers that obeys certain rules of addition and multiplication..1 Matrix algebra Even though more general matrices can be considered.... A1n A2n . A m × n matrix [A] has m rows and n columns:   [A] =    A11 A21 ...... ai ..Chapter 1 Matrix Algebra and Indicial Notation Notation: {a} . ... ..

consider a m × n matrix [A] and a n × 1 (column) matrix {x}. j = 1. and j = 1. . (1. . i = 1. In particular. . their sum is the m × n matrix [C] denoted by [C] = [A] + [B] whose elements are Cij = Aij + Bij . (1. MATRIX ALGEBRA AND INDICIAL NOTATION has m rows and one column. It is worth noting that if two matrices [A] and [B] obey the equation [A][B] = [0] this does not necessarily mean that either [A] or [B] has to be the null matrix [0]. 2. . . . . [B] and [C] obey [A][B] = [A][C] this does not necessarily mean that [B] = [C] (even if [A] = [0]. yn } (1.) The product by a scalar α of a m × n matrix [A] is the m × n matrix [B] with components Bij = αAij . {x}[A] does not exist is general. therefore rather than referring to [A][B] as the product of [A] and [B] we should more precisely refer to [A][B] as [A] postmultiplied by [B].e. . . The transpose of the m × n matrix [A] is the n × m matrix [B] where Bij = Aji for each i = 1. . 2. . 2. Two m×n matrices [A] and [B] are said to be equal if and only if all of their corresponding elements are equal: Aij = Bij . . 2. 2. .7) . p. or [B] premultiplied by [A]. . m. . .5) If [A] is a p × q matrix and [B] is a q × r matrix. y2 . but we cannot premultiply [A] by {x} (unless m=1). one writes [B] = α[A]. j = 1. . In general [A][B] = [B][A]. . (1. n. j = 1. 2. . 2. . . . . . . . . . Similarly if three matrices [A]. m. .2 CHAPTER 1. m. . Then we can postmultiply [A] by {x} to get the m × 1 column matrix [A]{x}. The row matrix {y} = {y1 . Note that a m1 × n1 matrix [A1 ] can be postmultiplied by a m2 × n2 matrix [A2 ] if and only if n1 = m2 . . 2.4) If [A] and [B] are both m × n matrices. m. 2. . (1. i. (1. i = 1. . i = 1. 2. If all the elements of a matrix are zero it is said to be a null matrix and is denoted by [0] or {0} as the case may be. . n. .6) one writes [C] = [A][B]. . . q.8) i = 1.3) has one row and n columns. j = 1. . their product is the p × r matrix [C] with elements q Cij = k=1 Aik Bkj . n. n. .

i.1. n.13) This is referred to as the quadratic form associated with [A]. Aij = 0 for each i.e. [AB]T = [B]T [A]T . Suppose that [A] is a m × n matrix and that {x} is a m × 1 (column) matrix. MATRIX ALGEBRA Usually one denotes the matrix [B] by [A]T . n. and premultiply the resulting matrix by {x}T to get a 1 × 1 square matrix. Note that n n {x}T [A]{x} = Aij xi xj . (1. i i=1 (1. j = 1. and vice versa. i=1 j=1 for each i. the diagonal elements of this matrix are the Aii ’s. A square matrix [A] is said to be symmetrical if Aij = Aji skew-symmetrical if Aij = −Aji for each i.12) Thus for a symmetric matrix [A] we have [A]T = [A]. . If every diagonal element of a diagonal matrix is 1 the matrix is called a unit matrix and is usually denoted by [I]. . One can verify that [A + B]T = [A]T + [B]T . (1. For any n × 1 column matrix {x} note that n {x}T {x} = {x}{x}T = x2 + x2 . . j = 1. + Ann x2 . . {x}T [A]{x}.e. i = j. . (1. the matrix is said to be diagonal. . Then we can postmultiply [A] by {x} to get a n × 1 column matrix [A]{x}. {x}T [A] exists (and is a 1 × n row matrix).11) (1. Suppose that [A] is a n × n square matrix and that {x} is a n × 1 (column) matrix. 2.15) . Observe that each diagonal element of a skew-symmetric matrix must be zero. In the special case of a diagonal matrix [A] {x}T [A]{x} = A11 x2 + A22 x2 + . . Then we can premultiply [A] by {x}T . effectively just a scalar. . j = 1. 2.10) A n × n matrix [A] is called a square matrix. . . + x2 = 1 2 n x2 . (1. .14) 1 1 n The trace of a square matrix is the sum of the diagonal elements of that matrix and is denoted by trace[A]: n trace[A] = i=1 Aii .9) The transpose of a column matrix is a row matrix. 2. 3 (1. . If the off-diagonal elements of a square matrix are all zero. n. i. for a skew-symmetric matrix [A] we have [A]T = −[A]. .1.

and then at least one row of [A] can be expressed as a linear combination of the other rows. The number thus obtained is called the cofactor of Aij . and finally consider the product of that determinant with (−1)i+j . are α1 = α2 = . = αn = 0. For [A] to be non-singular it is necessary and sufficient that det[A] = 0. . then Bij = cofactor of Aji det[A] (1. If the rows of [A] are linearly dependent. . (1. n. . Let det[A] denote the determinant of a square matrix. . . αn {a}n = {0} (1. . First consider the (n−1)×(n−1) matrix obtained by eliminating the ith row and jth column of [A]. the matrix is singular and an inverse matrix does not exist. the rows of [A] are said to be linearly independent. For each i = 1. . MATRIX ALGEBRA AND INDICIAL NOTATION trace([A][B]) = trace([B][A]). One can show that det([A][B]) = (det[A]) (det[B]). Consider a square matrix [A] and suppose that its rows are linearly independent. then consider the determinant of that second matrix. If the only scalars αi for which α1 {a}1 + α2 {a}2 + α3 {a}3 + . . Ai3 .19) Note that trace[A] and det[A] are both scalar-valued functions of the matrix [A]. (1. If at least one of the α’s is non-zero. Then the matrix is said to be non-singular and there exists a matrix [B]. Consider a square matrix [A]. Then for a 2 × 2 matrix det A11 A12 A21 A22 = A11 A22 − A12 A21 . .20) and for a 3 × 3 matrix   A11 A12 A13   det  A21 A22 A23  = A11 det A31 A32 A33 A22 A23 A32 A33 − A12 det A21 A23 A31 A33 + A13 det A21 A22 A31 A32 .18) The determinant of a n × n matrix is defined recursively in a similar manner. . Ain }. Ai2 . 2. they are said to be linearly dependent.17) (1. [B] = [A]−1 . . usually denoted by [B] = [A]−1 and called the inverse of [A].4 One can show that CHAPTER 1.21) .16) (1. for which [B][A] = [A][B] = [I]. a row matrix {a}i can be created by assembling the elements in the ith row of [A]: {a}i = {Ai1 . Consider a n×n square matrix [A]. If [B] is the inverse of [A]. .

.23)  . From here on. 5 (1... n. . 1.  . . Ann xn bn or even more compactly by omitting the statement “with i taking each value in the range 1.       A21 x1 +A22 x2 + . we shall always use the range convention unless explicitly stated otherwise.26) holds for each value of the subscript i in the range i = 1. . .  =     .26) with the understanding that (1..e.. Now consider the matrix equation [A]{x} = {b}:      b1 x1 A11 A12 . . and simply writing Ai1 x1 + Ai2 x2 + .. . . +. .22) then the matrix is said to be orthogonal. this is equivalent to the system of linear algebraic equations  A11 x1 +A12 x2 + . . n.. 2...  An1 An2 ..   . one has [A][A]T = [A]T [A] = [I] and that det[A] = ±1. +A2n xn = b2 . 2. .2.  (1. +.  with i taking each value in the range 1. A2n   x2  (1. if [A]−1 = [A]T . . .24)   ..2 Indicial notation This system of equations can be written more compactly as Ai1 x1 + Ai2 x2 + .. .. n”. . A1n  A  b     2   21 A22 . = . . . The subscript i is called a free subscript because it is free to take on each value in its range. . . ... This understanding is referred to as the range convention. 2.. . i. Ain xn = bi Carrying out the matrix multiplication. . INDICIAL NOTATION If the transpose and inverse of a matrix coincide. +A1n xn = b1 . . Note that for an orthogonal matrix [A].. . . . and let xi and bi denote the elements in the ith row of {x} and {b} respectively. . .1.. .. + Ain xn = bi (1.     An1 x1 +An2 x2 + . +. +Ann xn = bn .25) Consider a n × n square matrix [A] and two n × 1 column matrices {x} and {b}... . . Let Aij denote the element of [A] in its ith row and j th column. (1..

suppose that f (x1 . MATRIX ALGEBRA AND INDICIAL NOTATION Aj1 x1 + Aj2 x2 + . (1.. A1n = x1 xn .24). 2. . .       An1 = xn x1 . . . and only once. . . Ain xn and bi of that equation.. n. − and = signs. x2 . xn ) is a function of x1 . . . . . if we write the equation ∂f = 3xk . . Similarly since the free subscripts p and q appear in the symbol group on the left-hand 1 By a “symbol group” we mean a set of terms contained between +. Ann = xn xn . .. . Thus (1. Then. 2. .31)   . since the index i appears once in the symbol group Ai1 x1 . ∂xn (1. .. .27) is identical to (1. . then it represents 3N scalar equations. . the equation Apq = xp xq (1.  In general. .  (1. .30) corresponds to the nine equations  A11 = x1 x1 . and each. . A22 = x2 x2 .. = .. . Therefore (1. A2n = x2 xn . + Ajn xn = bj (1. .27) and so (1...26).29) As a third example. . takes all values in the range 1. n” and this leads back to (1.         A21 = x2 x1 . xn . . in every group of symbols in an equation.. .26). In order to be consistent it is important that the same free subscript(s) must appear once. For example. . . . in equation (1. x2 . . ∂x2 . .. This illustrates the fact that the particular choice of index for the free subscript in an equation is not important provided that the same free subscript appears in every symbol grouping. if an equation involves N free indices. Ai3 x3 . 2. . A12 = x1 x2 . n.. this is because j is a free subscript in (1..1 As a second example..30) has two free subscripts p and q. ∂x1 ∂f = 3x2 . it must necessarily appear once in each of the remaining symbol groups Ai2 x2 . .27) is required to hold “for all j = 1.28) ∂xk the index k in it is a free subscript and so takes all values in the range 1. . An2 = xn x2 . .28) is a compact way of writing the n equations ∂f = 3x1 . ∂f = 3xn . . .6 Observe that CHAPTER 1. . independently.

11) defining a symmetric matrix as simply Aij = Aji . . Thus the particular choice of index for the dummy subscript is not important.33) is a dummy subscript. . Thus we shall never write. for example. In order to avoid ambiguity.34). no subscript is allowed to appear more than twice in any symbol grouping.30).34) and therefore summation on k in implied in (1. if we did. we would write (1. equation (1. the subscript j in (1.26) can be written as n Aij xj = bi . SUMMATION CONVENTION 7 side of equation (1.3.32) We can simplify the notation even further by agreeing to drop the summation sign and instead imposing the rule that summation is implied over a subscript that appears twice in a symbol grouping. . Aii xi = bi since. equation (1.34) is identical to (1. the index i would appear 3 times in the first symbol group.4) for the equality of two matrices as simply Aij = Bij .1. 1. A subscript that appears twice in a symbol grouping is called a repeated or dummy subscript. this is because k is a dummy subscript in (1. and equation (1. for example. observe that (1. equation (1.n” statements there and written.7) for the scalar multiple of a matrix as Bij = αAij .33). An equation of the form Apq = xi xj would violate this consistency requirement as would Ai1 xi + Aj2 x2 = 0. we would have omitted the various “i=1. j=1 (1.5) for the sum of two matrices as simply Cij = Aij + Bij . .1.33) with summation on the subscript j being implied.3 Summation convention Next.32) as Aij xj = bi (1. With this understanding in force.2. Note that Aik xk = bi (1.8) for the transpose of a matrix as simply Bij = Aji . Note finally that had we adopted the range convention in Section 1..12) defining a skew-symmetric matrix as simply Aij = −Aji . equation (1. equation (1. it must also appear in the symbol group on the right-hand side.

10) for the product of a matrix by its transpose as {x}T {x} = xi xi . MATRIX ALGEBRA AND INDICIAL NOTATION 1.6) for the product of two matrices as Cij = Aik Bkj . . 2. the order in which the terms appear within a symbol group is irrelevant. . . . Lower-case latin subscripts take on values in the range (1. and equivalently write Akq xq = bk . The three preceding equations are identical. If it appears once. for example (1. Likewise we can write (1. .8 Summary of Rules: CHAPTER 1. If it appears twice. and equation (1.33) equivalently as xj Aij = bi . equation (1. it is the location where the repeated subscript appears that tells us whether {x} multiplies [A] or [A] multiplies {x}. involves scalar quantities. The same index may not appear more than twice in the same symbol grouping. (1. and therefore. 3.24).1. in particular. In an indicial equation it is the location of the subscripts that is crucial. and write Aks xs = bk . 2. it is called a free index and it takes on each value in its range. A given index may appear either once or twice in a symbol grouping. n). say k.24)1 is equivalent to x1 A11 + x2 A12 + . . It is important to emphasize that each of the equations in. say s. Free and dummy indices may be changed without altering the meaning of an expression provided that one does not violate the preceding rules. for example. Thus. it is called a dummy index and summation is implied over it. for example.13) for the quadratic form as {x}T [A]{x} = Aij xi xj .15) for the trace as trace [A] = Aii . Note finally that had we adopted the range and summation conventions in Section 1. 4. All symbol groupings in an equation must have the same free subscripts. + xn A1n = b1 . We can also change the repeated subscript q to some other index.35) . (1. Thus. the second equation does not correspond to {x}[A] = {b}.36) (1. we would have written equation (1.37) (1. Note that both Aij xj = bi and xj Aij = bi represent the matrix equation [A]{x} = {b}. we can change the free subscript p in every term of the equation Apq xq = bp to any other index. equation (1.

δ j Ajk is simply the . one simply replaces the Kronecker delta by unity and changes the repeated subscript i → j to obtain Tipq.. This implies.1.40) Thus we have used the facts that (i) since δij is zero unless i = j.z . it follows that all terms on the right-hand side vanish trivially except for the one term for which i = j. Recall that ui δij = u1 δ1j + u2 δ2j + .41) More generally.43) Observe that these results are immediately apparent by using matrix algebra.. + un δnj . In the first example. (1.z . gives ui δij = uj . 0 if i = j. Consider. Similarly in the second example.z δij = Tjpq. then we know that [Q][Q]T = [Q]T [Q] = [I].. suppose that [A] is a square matrix and one wishes to simplify Ajk δ j . If [Q] is an orthogonal matrix. KRONECKER DELTA 9 1. one replaces the Kronecker delta by unity and changes the repeated subscript i → p to obtain Cij k δip = Cpj k . (1. δij is unity. (1. . 2 .. Similarly. (1. Then by the same reasoning. Since [I]{u} = {u} the result follows at once. . is defined by δij = 1 if i = j. any column matrix {u} and suppose that one wishes to simplify the expression ui δij .. for example.4. Since δij is zero unless i = j. (1. (ii) and when i = j. Since [I][A] = [A]. we replace the Kronecker delta by unity and change the repeated subscript j → to obtain2 Ajk δ j = A k . Thus replacing the Kronecker delta by unity. k-element of the matrix [I][A]. note that δji ui (which is equal to the quantity δij ui that is given) is simply the jth element of the column matrix [I]{u}. and changing the repeated subscript i → j.42) The substitution rule applies even more generally: for any quantity or expression Tipq.39) The following useful property of the Kronecker delta is sometimes called the substitution rule. if δip multiplies a quantity Cij k representing n4 numbers. in indicial notation.4 Kronecker delta The Kronecker Delta. δij . the result follows..38) Note that it represents the elements of the identity matrix. that Qik Qjk = Qki Qkj = δij . Thus the term that survives on the right-hand side is uj and so ui δij = uj . the expression being simplified has a non-zero value only if i = j. (1.

  −1 if the subscripts i. are equal. 1). 6 (1. j.5 The alternator or permutation symbol We now limit attention to subscripts that range over 1.(1. directly. 3).44) Observe from its definition that the sign of eijk changes whenever any two adjacent subscripts are switched: eijk = −ejik = ejki . (3.  = +1 for (i. be verified . (1. are in cyclic order. (1. 2). The alternator or permutation symbol is defined by 0 if two or more subscripts i. 2. 1. are in anticyclic order. 2. 3). (2.48) The following relation involving the alternator and the Kronecker delta will be useful in subsequent calculations eijk epqk = δip δjq − δiq δjp . 1. j.   0 if two or more subscripts i.46) . = +1 if the subscripts i. k. k. j. 1). (1. j. (2.49). k.    eijk (1. 2). (1.   −1 for (i.10 CHAPTER 1. k) = (1. MATRIX ALGEBRA AND INDICIAL NOTATION 1. are equal. 2. of course. 3. 3. They can.45) One can show by direct calculation that the determinant of a 3 matrix [A] can be written in either of two forms det[A] = eijk A1i A2j A3k as well as in the form det[A] = 1 eijk epqr Aip Ajq Akr .49) It is left to the reader to develop proofs of these identities. k. j. by simply writing out all of the terms in (1. 3 only. (3.47) or det[A] = eijk Ai1 Aj2 Ak3 . k) = (1. j.46) Another useful identity involving the determinant is epqr det[A] = eijk Aip Ajq Akr .

. or equivalently Dij = Akj Bik . yi = Aip xp + Biq zq . by definition of transposition. yk = Akp xp + Bkp zp .2): The n × n matrices [C]. Ain of the ith row of [A] by the respective elements x1 .1. . .) In order to calculate Eij . . . and this equation holds for each value of the free index i = 1. the order of terms within a symbol group is insignificant since T these are scalar quantities. . . pairwise. . Note that one can alternatively – and equivalently – write the above equation in any of the following forms: yk = Akj xj + Bkj zj . . . 2. [D] and [E] in terms of the elements of [A] and [B]. express the matrix equation {y} = [A]{x} + [B]{z} as a set of scalar equations. by the respective elements of the j th column of [B] and summing. Solution: By the rules of matrix multiplication. . . WORKED EXAMPLES. However. i-element of the matrix T [B]: Bij = Bji and so we can write Eij = Aik Bjk . Ai2 . Bnj and summing. as noted previously. where summation over the dummy index j is implied. [D] = [B][A]. and finally adding the two together. {y}. . the i. we first multiply [A] by [B]T to obtain Eij = Aik Bkj . where the second expression was obtained by simply changing the order in which the terms appear in the first expression (since. j-element of a matrix [B]T equals the j. the element Cij in the ith row and j th column of [C] is obtained by multiplying the elements of the ith row of [A]. n. Solution: By the rules of matrix multiplication.6. . . Express the elements of [C]. xn of {x} and summing. Ain by. [E] = [A][B]T . B2j . So. B1j . Thus yi = Aij xj + Bij zj . 11 1. Example(1. Ai2 . It follows likewise that the equation [D] = [B][A] leads to Dij = Bik Akj . the element yi in the ith row of {y} is obtained by first pairwise multiplying the elements Ai1 . [D] and [E] are defined in terms of the two n × n matrices [A] and [B] by [C] = [A][B]. then doing the same for the elements of [B] and {z}. Observe that all rules for indicial notation are satisfied by each of the three equations above. . {z} are n × 1 column matrices. x2 . respectively.6 Worked Examples. moreover summation is carried out over the repeated index k. Cij is obtained by multiplying the elements Ai1 . Example(1. . Thus Cij = Aik Bkj . note that i and j are both free indices here and so this represents n2 scalar equations. . .1): If [A] and [B] are n × n square matrices and {x}.

On combining. therefore there are summations over each of them. then there would be four j’s and that violates one of our rules. Wij = (Aij − Aji ). Solution: Define matrices [S] and [W ] in terms of the given the matrix [A] as follows: 1 1 (Aij + Aji ). since [S] is symmetric Sji = Sij . Whenever there is a dummy subscript. Therefore Sji Wji = −Sij Wij . note that this [S] is symmetric. the matrix [S] is symmetric and [W ] is skew-symmetric. Using this in the right-hand side of the preceding equation gives Sij Wij = −Sij Wij from which it follows that Sij Wij = 0. Also. By changing the dummy indices i → p and j → q. take Sij = ui uj where {u} is an arbitrary column matrix. Effectively. We can now change dummy indices again. It is worth repeating that the location of the repeated subscript k tells us what term multiplies what term. Example(1. from p → j and q → i which gives Spq Wpq = Sji Wji . we can change i → p and get Sij Wij = Spj Wpj . Thus. Example(1. Remark: As a special case. and we can change it to another index. MATRIX ALGEBRA AND INDICIAL NOTATION All four expressions here involve the ik. there is no free subscript so this is just a single scalar equation. By adding the two equations in above one obtains Sij = Sij + Wij = Aij .4): Show that any matrix [A] can be additively decomposed into the sum of a symmetric matrix and a skew-symmetric matrix.3): If [S] is any symmetric matrix and [W ] is any skew-symmetric matrix. these we get Sij Wij = Sji Wji . 2 2 It may be readily verified from these definitions that Sij = Sji and that Wij = −Wij . Note that we can change i to any other index except j. the choice of the particular index for that dummy subscript is arbitrary. . Next. and since [W ] is skew-symmetric. Solution: Note that both i and j are dummy subscripts here. kj or jk elements of [A] and [B]. we have changed both i and j simultaneously from i → j and j → i. Wji = −Wij . if we did change it to j. we get Sij Wij = Spq Wpq .12 CHAPTER 1. Wij ui uj = 0 for all ui . provided that we change both repeated subscripts to the new symbol (and as long as we do not have any subscript appearing more than twice). for example. since i is a dummy subscript in Sij Wij . Thus. The precise locations of the subscripts vary and the meaning of the terms depend crucially on these locations. show that Sij Wij = 0. It follows that for any skew-symmetric [W ].

Solution: By using the substitution rule. k where in the last step we have used the fact that (Dijk − Dij k ) Ek = 0 since Dijk − Dij in the subscripts k. . while Ek is symmetric in the subscripts k. i. 13 Example (1. . we have δij δik δjk = δjk δjk = δkk = δ11 + δ22 + . one has Aij Bij = 0. k. n.1. . .5): Show that the quadratic form Tij ui uj is unchanged if Tij is replaced by its symmetric part. show that for any matrix [T ]. j.7): Evaluate the expression δij δik δjk . . Tij ui uj = Sij ui uj for all ui where Sij = 1 (Tij + Tji ). . is skew symmetric Example (1. take all values in the range 1. . .8): Given an orthogonal matrix [Q]. D1122 . D112n . 2. first on the repeated index i and then on the repeated index j. that Bij = ui uj is symmetric. 2 (i) Solution: The result follows from the following calculation: Tij ui uj = 1 1 1 1 Tij + Tij + Tji − Tji ui uj 2 2 2 2 1 1 (Tij + Tji ) ui uj + (Tij − Tji ) ui uj 2 2 = Sij ui uj . + δnn = n. . . . and that for any symmetric matrix [A] and any skew-symmetric matrix [B].6. D1112 . use indicial notation to solve the matrix equation [Q]{x} = {a} for {x}. Dnnnn are n4 constants.6): Suppose that D1111 . . . . D1121 . Aij = Dijk Ek = = = 1 1 1 1 Dijk + Dijk + Dij k − Dij k Ek 2 2 2 2 1 1 (Dijk + Dij k ) Ek + (Dijk − Dij k ) Ek 2 2 Cijk Ek . . [A] = [S] + [W ]. Example (1. WORKED EXAMPLES. 2 (i) Solution: In a manner entirely analogous to the previous example. = where in the last step we have used the facts that Aij = Tij − Tji is skew-symmetric. . . . and let Dijk denote a generic element of this set where each of the subscripts i.e. Show that [A] is unchanged if Dijk is replaced by its “symmetric part” Cijk where Cijk = 1 (Dijk + Dij k ). or in matrix form. D111n . Example(1. Let [E] be an arbitrary symmetric matrix and define the elements of a matrix [A] by Aij = Dijk Ek . .

Differentiating f and using the fact that [A] is constant gives ∂ ∂f ∂ ∂xp ∂xq = (Apq xp xq ) = Apq (xp xq ) = Apq xq + xp ∂xi ∂xi ∂xi ∂xi ∂xi Since the xi ’s are independent variables. .e. x2 . ∂xj Using this above gives ∂f = Apq [δpi xq + xp δqi ] = Apq δpi xq + Apq xp δqi ∂xi which. . . . Calculate the partial derivatives ∂f /∂xi . by the substitution rule. ∂xi =  ∂xj 1 if i = j. note that because of the summation on the indices i and j. of course. Thus the preceding equation simplifies to δjk xj = Qik ai . simplifies to ∂f = Aiq xq + Api xp = Aij xj + Aji xj = (Aij + Aji )xj . ∂xi = δij . and for an orthogonal matrix. it follows that   0 if i = j. have written down immediately from the fact that {x} = [Q]−1 {a}. In order to get around this difficulty we make use of the fact that the specific choice of the index in a dummy subscript is not significant and so we can write f = Apq xp xq . observe that if we differentiatiate f with respect to xi and write ∂f /∂xi = ∂(Aij xi xj )/∂xi . it is incorrect to conclude that ∂f /∂xi = Aij xj by viewing this in the same way as differentiating the function A12 x1 x2 with respect to x1 . Since [Q] is orthogonal. First.9): Consider the function f (x1 . Multiplying both sides by Qik gives Qik Qij xj = Qik ai . i. we would violate our rules because the right-hand side has the subscript i appearing three times in one symbol grouping. Example(1. Solution: We begin by making two general observations.39) that Qrp Qrq = δpq . In matrix notation this reads {x} = [Q]T {a} which we could. . reduces further to xk = Qik ai . the equation [Q]{x} = {a} reads Qij xj = ai . xn ) = Aij xi xj where the Aij ’s are constants. we know from (1. which. ∂xi . Second. by the substitution rule.14 CHAPTER 1. [Q]−1 = [Q]T . MATRIX ALGEBRA AND INDICIAL NOTATION Solution: In indicial form.

∂Eij (ii) Keeping this in mind and differentiating W (E11 .6. ∂xk ∂xk ∂xk ∂xk (i) where we have used the fact that ∂xi /∂xj = δij in the last step... 2 . What does this imply about [A]? Solution: We know from a previous example that that if [A] is a skew-symmetric and [S] is symmetric then Aij Sij = 0. Therefore it is necessary and sufficient that [A] be skew-symmetric. . ∂Eij ∂Ekl (i) Solution: First.1.. this simplifies to Akj xj + Aik xi = (Akj + Ajk ) xj = 0. since the Eij ’s are independent variables. Define the function W ([E]) for all matrices [E] by W ([E]) = W (E11 . E12 . WORKED EXAMPLES.E33 ) with respect to Eij gives ∂W ∂ = ∂Eij ∂Eij 1 Cpqrs Epq Ers 2 = 1 Cpqrs 2 ∂Epq ∂Ers Ers + Epq ∂Eij ∂Eij = = = 1 Cpqrs (δpi δqj Ers + δri δsj Epq ) 2 1 1 Cijrs Ers + Cpqij Epq 2 2 1 (Cijpq + Cpqij ) Epq .10): Suppose that {x}T [A]{x} = 0 for all column matrices {x} where the square matrix [A] is independent of {x}. and as a special case of this that Aij xi xj = 0 for all {x}. Therefore. Thus a sufficient condition for the given equation to hold is that [A] be skew-symmetric. Example (1. Calculate ∂W ∂Eij and ∂2W .. ∂xi (iii) Thus [A] must necessarily be a skew symmetric matrix. (ii) Since this also holds for all xi .Enn ) = 1 2 Cijkl Eij Ekl . ∂Epq = δpi δqj . E12 . 15 Example (1. Since this equation holds for all xi .. we may differentiate both sides with respect to xk and proceed as follows: 0= ∂ ∂xi ∂xj ∂ (Aij xi xj ) = Aij (xi xj ) = Aij xj + Aij xi = Aij δik xj + Aij xi δjk . On using the substitution rule. it may be differentiated again with respect to xi to obtain (Akj + Ajk ) ∂xj = (Akj + Ajk ) δji = Aki + Aik = 0.11): Let Cijkl be a set of n4 constants. . it follows that   1 if p = i and q = j. We are given that Aij xi xj = 0 for all xi .. ∂Epq =  ∂Eij 0 otherwise. Now we show that this is also a necessary condition.

1965.) 2 Differentiating this once more with respect to Ekl gives ∂2W ∂ = ∂Eij ∂Ekl ∂Ek 1 (Cijpq + Cpqij ) Epq 2 = 1 (Cijpq + Cpqij ) δpk δql 2 1 (Cijkl + Cklij ) 2 (iii) = (iv) Example (1. Frazer. McGraw-Hill. Cambridge University Press. suppose that [S] is symmetric. Remark: Note as a special case of this result that eijk vj vk = 0 for any arbitrary column matrix {v}. Conversely suppose that (i) holds for some matrix [S]. k element of a 3 × 3 matrix. Collar. Duncan and A.R. In a previous example we showed that Sij Wij = 0 for any symmetric matrix [S] and any skew-symmetric matrix [W ]. we have eijk ekij = −eijk eikj = −(δjk δkj −δjj δkk ) = −(δjj −δjj δkk ) = −(3−3×3) = 6. Pick and fix the free subscript i at any value i = 1. Elementary Matrices. Consequently (i) must hold. Thus Spq = Sqp and so [S] is symmetric. Bellman.45).J. 3. Solution: First. then using the identity (1. Introduction to Matrix Analysis. (ii) (i) References 1. W.12): Evaluate the expression eijk ekij .49).49) leads to eipq eijk Sjk = (δpj δqk − δpk δqj )Sjk = Spq − Sqp = 0 where in the last step we have used the substitutin rule.A. 2. we can think of eijk as the j.16 CHAPTER 1.13): Show that eijk Sjk = 0 if and only if the matrix [S] is symmetric. 1960. Solution: By first using the skew symmetry property (1. 2. Then. R. Multiplying (i) by eipq and using the identity (1. Example(1. R. MATRIX ALGEBRA AND INDICIAL NOTATION where we have made use of the substitution rule. Since eijk = −eikj this is a skew-symmetric matrix. . and finally using the substitution rule. (Note that in the first step we wrote W = 1 Cpqrs Epq Ers 2 1 rather than W = 2 Cijkl Eij Ekl because we would violate our rules for indices had we written ∂( 1 Cijkl Eij Ekl )/∂Eij .

will denote linear transformations. and (b) to linear transformations that carry vectors from one vector space into the same vector space. vector A . a. Linear Algebra is a far richer subject than the very restricted glimpse provided here might suggest... β. Thus. for example... it is not meant to be a source for learning the subject of linear algebra for the first time.. γ.. The discussion in these notes is limited almost entirely to (a) real 3-dimensional Euclidean vector spaces.. c. will denote scalars (real numbers). In particular.. . α. B. “o” will denote the null vector while “0” will denote the null linear transformation. scalar a .. will denote vectors. linear transformation As mentioned in the Preface. The following notation will be consistently used: Greek letters will denote real numbers.. b... 17 . and A.. C. lowercase boldface Latin letters will denote vectors.... These notes are designed to review those aspects of linear algebra that will be encountered in our study of continuum mechanics. and uppercase boldface Latin letters will denote linear transformations..Chapter 2 Vectors and Linear Transformations Notation: α . ..

called vectors. . αk = 0. e2 . Let x1 .1 Vectors A vector space V is a collection of elements. it is assumed that there is a unique vector o ∈ V called the null vector such that x + o = x. .3) A Euclidean vector space is a vector space together with an inner product on that space. From hereon we shall restrict attention to 3-dimensional Euclidean vector spaces and denote such a space by E3 . . the vectors x + y and αx are also in U. . we say that the dimension of V is n. The operation of addition (has certain properties which we do not list here) and associates with each pair of vectors x and y in V.2) the numbers ξ1 . Given any vector x ∈ V there exist a unique set of numbers ξ1 . e3 }. VECTORS AND LINEAR TRANSFORMATIONS 2. If V is a vector space.1) are the numbers α1 = α2 = . Unless stated otherwise. . any set of three linearly independent vectors {e1 . Thus a linear manifold U of V is itself a vector space under the same operations of addition and multiplication by a scalar as in V. a vector denoted by x + y that is also in V. A scalar-product has certain properties which we do not list here except to note that it is required that x·y =y·x for all x. If V contains n linearly independent vectors but does not contain n + 1 linearly independent vectors. for every x. αk for which α1 x1 + α2 x2 · · · + αk xk = o (2. . xk be k vectors in V. Let U be a subset of a vector space V. . ξ3 are called the components of x in the basis {e1 . . x2 . (2. e3 } is said to be a basis for V. In particular. (2. another vector in V denoted by αx. . ξ3 such that x = ξ1 e1 + ξ2 e2 + ξ3 e3 . These vectors are said to be linearly independent if the only real numbers α1 . y ∈ V. together with two operations. addition and multiplication by a scalar. y ∈ U and every real number α. e2 . A scalar-product (or inner product or dot product) on V is a function which assigns to each pair of vectors x. ξ2 . which we denote by x · y. . α2 . The operation of scalar multiplication (has certain properties which we do not list here) and associates with each vector x ∈ V and each real number α.18 CHAPTER 2. ξ2 . y in V a scalar. from hereon we restrict attention to 3-dimensional vector spaces. we say that U is a subspace (or linear manifold) of V if.

Since n is parallel to x × y. this does not necessarily imply that x = o. The angle θ between two vectors x and y is defined by cos θ = x·y . j = 1.9) for all x. The vector-product must have certain properties (which we do not list here) except to note that it is required that y × x = −x × y One can show that x × y = |x| |y| sin θ n. 0 if i = j. to note that if we are given two vectors x and y where x · y = 0 and y = o. For such a basis. It is obvious. e3 ∈ E3 .8) where θ is the angle between x and y as defined by (2.10) . e2 . y ∈ E3 . on the other hand if x · y = 0 for every vector y. which we denote by x × y. y ∈ V. (2. then x must be the null vector. (2.5). A unit vector is a vector of unit length. (2.2. e2 . 3. a vector. e3 } is said to be right-handed if (e1 × e2 ) · e3 > 0. ei · ej = δij for i.5) Two vectors x and y are orthogonal if x · y = 0.1. |x||y| 0 ≤ θ ≤ π. it follows that n = (x × y)/|(x × y)|. and n is a unit vector in the direction x × y which therefore is normal to the plane defined by x and y. A basis {e1 . nevertheless helpful. (2. (2.4) A vector has zero length if and only if it is the null vector. (2. The magnitude |x × y| of the cross-product can be interpreted geometrically as the area of the triangle formed by the vectors x and y. (2.6) where the Kronecker delta δij is defined in the usual way by δij = 1 if i = j. An orthonormal basis is a triplet of mutually orthogonal unit vectors e1 . and since it has unit length. VECTORS 19 by The length (or magnitude or norm) of a vector x is the scalar denoted by |x| and defined |x| = (x · x)1/2 .7) A vector-product (or cross-product) on E3 is a function which assigns to each ordered pair of vectors x. 2.

e2 .11) F is said to be a linear transformation if it is such that F(αx + βy) = αF(x) + βF(y) (2. q ∈ P. y ∈ E3 . (iii) given an arbitrary point p ∈ P and an arbitrary vector x ∈ E3 . Here x is called the position of point q relative to the point p. (2. For . A geometric example of a linear transformation is the “projection operator” Π which projects vectors onto a given plane P. Consider a three-dimensional Euclidean vector space E3 . VECTORS AND LINEAR TRANSFORMATIONS 2. there is a unique point → q ∈ P such that x =pq.. x3 ) are called the coordinates of p in the (coordinate) frame F = {o. Let F be a function (or transformation) which assigns to each vector x ∈ E3 . Pick and fix an arbitrary point o ∈ P (which we call the origin of P) and an arbitrary basis for E3 of unit vectors e1 .1. r ∈ P.1. the coordinate frame {o.2 Linear Transformations. e3 } comprised of the origin o and the basis vectors e1 . y = F(x). e3 . e3 is an orthonormal basis. If e1 . e2 . Corresponding to any point p ∈ P there is a unique vector → op= x = x1 e1 + x2 e2 + x3 e3 ∈ E3 . q) is uniquely associated → with a vector in E3 . Every order pair of points (p. see Figure 2. When F is a linear transformation. e2 . e1 . e3 } is called a rectangular cartesian coordinate frame. A linear transformation is defined by the way it operates on vectors in E3 . 2. e1 . y ∈ E3 . a second vector y ∈ E3 . e2 . such that (i) pq = − qp (ii) pq + qr=pr → → → → → for all p. x ∈ E3 . we usually omit the parenthesis and write Fx instead of F(x). is related to a Euclidean vector space E3 in the following manner. β and all vectors x. e3 . Let P be the plane normal to the unit vector n. q. e2 . for all p. say pq.12) for all scalars α.20 CHAPTER 2.1 Euclidean point space A Euclidean point space P whose elements are called points. and it is the image of x under the transformation F. x2 . Note that Fx is a vector. The triplet (x1 .

y3 = Fx3 . any vector x ∈ E3 . (2. (2. suppose that {y1 . consequently the image Fx of any vector x is given by Fx = ξ1 y1 + ξ2 y2 + ξ3 y3 which is a rule for assigning a unique vector Fx to any given vector x. It can be verified geometrically that P is defined by Πx = x − (x · n)n for all x ∈ E3 . (2. The null linear transformation 0 is the linear transformation that takes every vector x into the null vector o. Ix = x for all x ∈ E3 . (2. The identity linear transformation I takes every vector x into itself. This follows from the fact that {x1 . and αA is the scalar multiple of A by α. Then there is a unique linear transformation F that maps {x1 . x2 . Thus 0x = o. LINEAR TRANSFORMATIONS. y3 } are any three vectors in E3 and that {x1 .13) Linear transformations tell us how vectors are mapped into other vectors.15) (AB)x = A(Bx) (αA)x = α(Ax) for all x ∈ E3 .2. x2 . x3 } are any three linearly independent vectors in E3 . for all x ∈ E3 .1: The projection Πx of a vector x onto the plane P. x2 .14) Let A and B be linear transformations on E3 and let α be a scalar.16) (2. (2. AB = BA. Πx ∈ P is the vector obtained by projecting x onto P.2. In general. In particular. x3 } is a basis for E3 . AB and αA are defined as those linear transformations which are such that (A + B)x = Ax + Bx for all x ∈ E3 . y2 . y2 = Fx2 . A + B is called the sum of A and B. AB the product. The linear transformations A + B. y2 . Therefore any arbitrary vector x can be expressed uniquely in the form x = ξ1 x1 + ξ2 x2 + ξ3 x3 . y3 }: y1 = Fx1 .17) respectively. x3 } into {y1 .18) . P n x 21 Πx Figure 2.

there exists a vector w (called the axial vector of W) which has the property that Wx = w × x for all x ∈ E3 .e. Given any linear transformation A..20) A linear transformation A is said to be symmetric if A = AT . (2. (A + B)T = AT + BT . then we say that A is non-singular. it is known as the null space of A. a non-singular transformation A is a one-to-one transformation in the sense that. corresponding to any non-singular linear transformation . W = (A − AT ). there is one and only one vector x ∈ E3 for which Ax = y. skew-symmetric if A = −AT .25) Given a linear transformation A. for any given y ∈ E3 . (2. Consequently.22 CHAPTER 2. (AB)T = BT AT . The dimension of this particular subspace is known as the rank of A.23) moreover. The set of all vectors x for which Ax = o is also a subspace of E3 .22) (2.21) Every linear transformation A can be represented as the sum of a symmetric linear transformation S and a skew-symmetric linear transformation W as follows: 1 1 A = S + W where S = (A + AT ).19) AT is called the transpose of A. the collection of all vectors Ax as x takes all values in E3 ) is a subspace of E3 . One can show that (αA)T = αAT . (2. (2. Thus. if the only vector x for which Ax = o is the zero vector. it may be shown that Wx · x = 0 for all x ∈ E3 . VECTORS AND LINEAR TRANSFORMATIONS The range of a linear transformation A (i. It follows from this that if A is non-singular then Ax = Ay whenever x = y. (2. 2 2 For every skew-symmetric linear transformation W.24) (2. y ∈ E3 . one can show that there is a unique linear transformation usually denoted by AT such that Ax · y = x · AT y for all x.

If the only vector x for which Ax = o is the zero vector, then we say that A is non-singular. It follows from this that if A is non-singular then Ax ≠ Ay whenever x ≠ y. Thus, a non-singular transformation A is a one-to-one transformation in the sense that, for any given y ∈ E3, there is one and only one vector x ∈ E3 for which Ax = y. Consequently, corresponding to any non-singular linear transformation A, there exists a second linear transformation, denoted by A−1 and called the inverse of A, such that Ax = y if and only if x = A−1 y, or equivalently, such that

AA−1 = A−1A = I.    (2.26)

If two linear transformations A and B are both non-singular, then so is AB; moreover,

(AB)−1 = B−1A−1.    (2.27)

If A is non-singular then so is AT; moreover,

(AT)−1 = (A−1)T,    (2.28)

and so there is no ambiguity in writing this linear transformation as A−T.

If {x1, x2, x3} and {y1, y2, y3} are two sets of linearly independent vectors in E3, then there is a unique non-singular linear transformation F that maps {x1, x2, x3} into {y1, y2, y3}: y1 = Fx1, y2 = Fx2, y3 = Fx3. The inverse of F maps {y1, y2, y3} into {x1, x2, x3}. If both bases {x1, x2, x3} and {y1, y2, y3} are right-handed (or both are left-handed) we say that the linear transformation F preserves the orientation of the vector space.

A linear transformation Q is said to be orthogonal if it preserves length, i.e., if

|Qx| = |x| for all x ∈ E3.    (2.29)

If Q is orthogonal, it follows that it also preserves the inner product:

Qx · Qy = x · y for all x, y ∈ E3.    (2.30)

Thus an orthogonal linear transformation preserves both the length of a vector and the angle between two vectors. If Q is orthogonal, it is necessarily non-singular and

Q−1 = QT.    (2.31)

A linear transformation A is said to be positive definite if

Ax · x > 0 for all x ∈ E3, x ≠ o;    (2.32)

positive-semi-definite if

Ax · x ≥ 0 for all x ∈ E3.    (2.33)
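A brief numerical sketch (ours, with a rotation chosen as a concrete orthogonal Q) illustrating (2.27) and the orthogonality properties (2.29)–(2.31):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # generically non-singular
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)

# (AB)^{-1} = B^{-1} A^{-1}, equation (2.27)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# An orthogonal Q (here a rotation about e3) preserves length, and Q^{-1} = Q^T
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
x = rng.standard_normal(3)
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
assert np.allclose(np.linalg.inv(Q), Q.T)
```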

A positive definite linear transformation is necessarily non-singular. Moreover, A is positive definite if and only if its symmetric part (1/2)(A + AT) is positive definite.

Let A be a linear transformation. A subspace U is known as an invariant subspace of A if Av ∈ U for all v ∈ U. Suppose that there exists an associated one-dimensional invariant subspace U. Since U is one-dimensional, it follows that if v ∈ U then any other vector in U can be expressed in the form λv for some scalar λ. Since U is an invariant subspace we know in addition that Av ∈ U whenever v ∈ U. Combining these two facts shows that Av = λv for all v ∈ U.

A vector v and a scalar λ such that

Av = λv    (2.34)

are known, respectively, as an eigenvector and an eigenvalue of A. Each eigenvector of A characterizes a one-dimensional invariant subspace of A. Every linear transformation A (on a 3-dimensional vector space E3) has at least one eigenvalue.

Every eigenvalue of a positive definite linear transformation must be positive, and no eigenvalue of a non-singular linear transformation can be zero. A symmetric linear transformation is positive definite if and only if all three of its eigenvalues are positive.

If e and λ are an eigenvector and eigenvalue of a linear transformation A, then for any positive integer n, it is easily seen that e and λ^n are an eigenvector and an eigenvalue of A^n, where A^n = AA...A (n times); this continues to be true for negative integers m provided A is non-singular and if by A−m we mean (A−1)m, m > 0.

It can be shown that a symmetric linear transformation A has three real eigenvalues λ1, λ2 and λ3, and a corresponding set of three mutually orthogonal eigenvectors e1, e2 and e3. The particular basis of E3 comprised of {e1, e2, e3} is said to be a principal basis of A.

Finally, according to the polar decomposition theorem, given any non-singular linear transformation F, there exist unique symmetric positive definite linear transformations U and V and a unique orthogonal linear transformation R such that

F = RU = VR.    (2.35)

If λ and r are an eigenvalue and eigenvector of U, then it can be readily shown that λ and Rr are an eigenvalue and eigenvector of V.

Given two vectors a, b ∈ E3, their tensor-product is the linear transformation usually denoted by a ⊗ b, which is such that

(a ⊗ b)x = (x · b)a for all x ∈ E3.    (2.36)

Observe that for any x ∈ E3, the vector (a ⊗ b)x is parallel to the vector a. Thus the range of the linear transformation a ⊗ b is the one-dimensional subspace of E3 consisting of all vectors parallel to a. The rank of the linear transformation a ⊗ b is thus unity.

For any vectors a, b, c and d it is easily shown that

(a ⊗ b)T = b ⊗ a,   (a ⊗ b)(c ⊗ d) = (b · c)(a ⊗ d),    (2.37)
A(a ⊗ b) = (Aa) ⊗ b,   (a ⊗ b)A = a ⊗ (AT b).    (2.38)

Let {e1, e2, e3} be an orthonormal basis. Since this is a basis, any vector in E3, and therefore in particular each of the vectors Ae1, Ae2, Ae3, can be expressed as a unique linear combination of the basis vectors e1, e2, e3. It follows that there exist unique real numbers Aij such that

Aej = Σ_{i=1}^{3} Aij ei,   j = 1, 2, 3,    (2.39)

where Aij is the ith component of the vector Aej. They can equivalently be expressed as Aij = ei · (Aej). One refers to the Aij's as the components of the linear transformation A in the basis {e1, e2, e3}. The linear transformation A can now be represented as

A = Σ_{i=1}^{3} Σ_{j=1}^{3} Aij (ei ⊗ ej).    (2.40)

Note that

Σ_{i=1}^{3} ei ⊗ ei = I,   Σ_{i=1}^{3} (Aei) ⊗ ei = A.    (2.41)
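The component relations (2.39) and the representation (2.40) can be checked directly in a concrete basis. The following sketch (ours; it uses the standard basis, in which the components coincide with the matrix entries) does so with NumPy outer products standing in for ei ⊗ ej:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
e = np.eye(3)                       # rows e[i] form an orthonormal basis

# Components A_ij = e_i . (A e_j), as in (2.39)
comp = np.array([[e[i] @ (A @ e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(comp, A)         # in the standard basis these are the entries of A

# Representation A = sum_ij A_ij (e_i ⊗ e_j), as in (2.40)
A_built = sum(comp[i, j] * np.outer(e[i], e[j]) for i in range(3) for j in range(3))
assert np.allclose(A_built, A)
```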

Let S be a symmetric linear transformation with eigenvalues λ1, λ2, λ3 and corresponding (mutually orthogonal unit) eigenvectors e1, e2, e3. Since Sej = λj ej for each j = 1, 2, 3, it follows from (2.39) that the components of S in the principal basis {e1, e2, e3} are S11 = λ1, S22 = λ2, S33 = λ3, and Sij = 0 for i ≠ j. It then follows from the general representation (2.40) that S admits the representation

S = Σ_{i=1}^{3} λi (ei ⊗ ei);    (2.42)

this is called the spectral representation of a symmetric linear transformation. It can be readily shown that, for any positive integer n,

S^n = Σ_{i=1}^{3} λi^n (ei ⊗ ei);    (2.43)

if S is symmetric and non-singular,

S−1 = Σ_{i=1}^{3} (1/λi) (ei ⊗ ei).    (2.44)

If S is symmetric and positive definite, there is a unique symmetric positive definite linear transformation T such that T² = S. We call T the positive definite square root of S and denote it by T = √S. It is readily seen that

√S = Σ_{i=1}^{3} √λi (ei ⊗ ei).    (2.45)

2.3 Worked Examples.

Example 2.1: Given three vectors a, b, c, show that

a · (b × c) = b · (c × a) = c · (a × b).

Solution: By the properties of the vector-product, the vector (a + b) is normal to the vector (a + b) × c. Thus

(a + b) · [(a + b) × c] = 0.

On expanding this out one obtains

a · (a × c) + a · (b × c) + b · (a × c) + b · (b × c) = 0.

Since a is normal to (a × c), and b is normal to (b × c), the first and last terms in this equation vanish. Finally, recall that a × c = −c × a. Thus the preceding equation simplifies to a · (b × c) = b · (c × a). This establishes the first part of the result. The second part is shown analogously.
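As a quick numerical illustration of Example 2.1 (and a preview of the linear-dependence criterion established in the next example), the following sketch checks the cyclic symmetry of the triple product and its vanishing for dependent vectors:

```python
import numpy as np

rng = np.random.default_rng(11)
a, b, c = (rng.standard_normal(3) for _ in range(3))

t1 = a @ np.cross(b, c)
t2 = b @ np.cross(c, a)
t3 = c @ np.cross(a, b)
assert np.isclose(t1, t2) and np.isclose(t2, t3)

# The triple product vanishes for linearly dependent vectors
d = 2.0 * a - 3.0 * b                 # d lies in the span of a and b
assert np.isclose(a @ np.cross(b, d), 0.0)
```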

Example 2.2: Show that a necessary and sufficient condition for three vectors a, b, c in E3 – none of which is the null vector – to be linearly dependent is that a · (b × c) = 0.

Solution: To show necessity, suppose that the three vectors a, b, c are linearly dependent. It follows that

αa + βb + γc = o

for some real numbers α, β, γ, at least one of which is non-zero. Taking the vector-product of this equation with c and then taking the scalar-product of the result with a leads to βa · (b × c) = 0. Analogous calculations with the other pairs of vectors, keeping in mind that a · (b × c) = b · (c × a) = c · (a × b), lead to

αa · (b × c) = 0,   βa · (b × c) = 0,   γa · (b × c) = 0.

Since at least one of α, β, γ is non-zero, it follows that necessarily a · (b × c) = 0.

To show sufficiency, let a · (b × c) = 0 and assume that a, b, c are linearly independent. We will show that this is a contradiction, whence a, b, c must be linearly dependent. By the properties of the vector-product, the vector b × c is normal to the plane defined by the vectors b and c. By assumption, a · (b × c) = 0, and this implies that a is normal to b × c. Since we are in E3 this means that a must lie in the plane defined by b and c. This means that a, b, c cannot be linearly independent, which is the desired contradiction.

Example 2.3: Interpret the quantity a · (b × c) geometrically in terms of the volume of the tetrahedron defined by the vectors a, b, c.

Solution: Consider the tetrahedron formed by the three vectors a, b, c as depicted in Figure 2.2. Its volume is V0 = (1/3) A0 h0 where A0 is the area of its base and h0 is its height. Consider the triangle defined by the vectors a and b to be the base of the tetrahedron. Its area A0 can be written as 1/2 base × height = (1/2)|a|(|b|| sin θ|) where θ is the angle between a and b. However from the property (2.9) of the vector-product we have |a × b| = |a||b|| sin θ| and so A0 = |a × b|/2. Next, n = (a × b)/|a × b| is a unit vector that is normal to the base of the tetrahedron, and so the height of the tetrahedron is h0 = c · n; see Figure 2.2. Therefore

V0 = (1/3) A0 h0 = (1/3)(|a × b|/2)(c · n) = (1/6)(a × b) · c.    (i)

Figure 2.2: Volume of the tetrahedron defined by the vectors a, b and c.

Observe that this provides a geometric explanation for why the vectors a, b, c are linearly dependent if and only if (a × b) · c = 0.

Example 2.4: Let φ(x) be a scalar-valued function defined on the vector space E3. If φ is linear, i.e. if

φ(αx + βy) = αφ(x) + βφ(y)    (i)

for all scalars α, β and all vectors x, y, show that φ(x) = c · x for some constant vector c.

Solution: Let {e1, e2, e3} be any orthonormal basis for E3. Then an arbitrary vector x can be written in terms of its components as x = x1 e1 + x2 e2 + x3 e3. Therefore φ(x) = φ(x1 e1 + x2 e2 + x3 e3), which because of the linearity of φ leads to

φ(x) = x1 φ(e1) + x2 φ(e2) + x3 φ(e3).

On setting ci = φ(ei), i = 1, 2, 3, we find φ(x) = x1 c1 + x2 c2 + x3 c3 = c · x where c = c1 e1 + c2 e2 + c3 e3.

Remark: This shows that the scalar-product is the most general scalar-valued linear function of a vector.

Example 2.5: If two linear transformations A and B have the property that Ax · y = Bx · y for all vectors x and y, show that A = B.

Solution: Since (Ax − Bx) · y = 0 for all vectors y, we may choose y = Ax − Bx in this, leading to |Ax − Bx|² = 0. Since the only vector of zero length is the null vector, this implies that Ax = Bx for all vectors x, and so A = B.

Example 2.6: Let n be a unit vector, and let P be the plane through o normal to n. Let Π and R be the transformations which, respectively, project and reflect a vector in the plane P. Π is called the "projection linear transformation" while R is known as the "reflection linear transformation".

a. Show that Π and R are linear transformations.
b. Show that R(Rx) = x for all x ∈ E3.
c. Verify that a reflection linear transformation R is non-singular while a projection linear transformation Π is singular. What is the inverse of R?
d. Verify that a projection linear transformation Π is symmetric and that a reflection linear transformation R is orthogonal.
e. Show that the projection linear transformation and the reflection linear transformation can be represented as Π = I − n ⊗ n and R = I − 2(n ⊗ n) respectively.

Solution: Figure 2.3 shows a sketch of the plane P, its unit normal vector n, a generic vector x, its projection Πx and its reflection Rx. By geometry we see that

Πx = x − (x · n)n,   Rx = x − 2(x · n)n.    (i)

These define the images Πx and Rx of a generic vector x under the transformations Π and R.

Figure 2.3: The projection Πx and reflection Rx of a vector x on the plane P.

a. One can readily verify that Π and R, as given by (i), satisfy the requirement (2.12) of a linear transformation.

b. Applying the definition (i)2 of R to the vector Rx gives

R(Rx) = (Rx) − 2((Rx) · n)n.

Replacing Rx on the right-hand side of this equation by (i)2, and expanding the resulting expression, shows that the right-hand side simplifies to x. Thus R(Rx) = x.

c. Applying the definition (i)1 of Π to the vector n gives Πn = n − (n · n)n = n − n = o. Therefore Πn = o and (since n ≠ o) we see that o is not the only vector that is mapped to the null vector by Π. The transformation Π is therefore singular.

Next consider the transformation R, and consider a vector x that is mapped by it to the null vector, i.e. Rx = o. Using (i)2, x = 2(x · n)n. Taking the scalar-product of this equation with the unit vector n yields x · n = 2(x · n), from which we conclude that x · n = 0. Substituting this into the right-hand side of the preceding equation leads to x = o. Therefore Rx = o if and only if x = o, and so R is non-singular.

To find the inverse of R, recall from part (b) that R(Rx) = x. Operating on both sides of this equation by R−1 gives Rx = R−1x. Since this holds for all vectors x it follows that R−1 = R.

d. To show that Π is symmetric we simply use its definition (i)1 to calculate Πx · y and x · Πy for arbitrary vectors x and y. This yields

Πx · y = (x − (x · n)n) · y = x · y − (x · n)(y · n)

and

x · Πy = x · (y − (y · n)n) = x · y − (x · n)(y · n).

Thus Πx · y = x · Πy and so Π is symmetric.

To show that R is orthogonal we must show that RRT = I, or RT = R−1. We begin by calculating RT. Recall from the definition (2.19) of the transpose that it satisfies the requirement x · RT y = Rx · y. Using the definition (i)2 of R on the right-hand side of this equation yields

x · RT y = x · y − 2(x · n)(y · n).

We can rearrange the right-hand side of this equation so it reads

x · RT y = x · (y − 2(y · n)n).

Since this holds for all x it follows that RT y = y − 2(y · n)n. Comparing this with (i)2 shows that RT = R. In part (c) we showed that R−1 = R, and so it now follows that RT = R−1. Thus R is orthogonal.

e. Applying the operation (I − n ⊗ n) on an arbitrary vector x gives

(I − n ⊗ n)x = x − (n ⊗ n)x = x − (x · n)n = Πx,

and so Π = I − n ⊗ n. Similarly (I − 2n ⊗ n)x = x − 2(x · n)n = Rx, and so R = I − 2n ⊗ n.

Example 2.7: If W is a skew symmetric linear transformation, show that Wx · x = 0 for all x.

Solution: By the definition (2.19) of the transpose, we have Wx · x = x · WT x, and since W = −WT for a skew symmetric linear transformation, this can be written as Wx · x = −x · Wx. Finally the property (2.3) of the scalar-product allows this to be written as Wx · x = −Wx · x, from which the desired result follows.

Example 2.8: Show that (AB)T = BT AT.

Solution: First, by the definition (2.19) of the transpose,

(AB)x · y = x · (AB)T y.    (i)

Second, note that (AB)x · y = A(Bx) · y. By the definition of the transpose of A we have A(Bx) · y = Bx · AT y, and by the definition of the transpose of B we have Bx · AT y = x · BT AT y. Therefore combining these three equations shows that

(AB)x · y = x · BT AT y.    (ii)

Equating the two expressions (i) and (ii) for (AB)x · y shows that x · (AB)T y = x · BT AT y for all vectors x, y, which establishes the desired result.

Example 2.9: If o is the null vector, show that Ao = o for any linear transformation A.

Solution: The null vector o has the property that when it is added to any vector, the vector remains unchanged. Therefore x + o = x, and similarly Ax + o = Ax. However operating on the first of these equations by A shows that Ax + Ao = Ax, which when combined with the second equation yields the desired result.

Example 2.10: If A and B are non-singular linear transformations, show that AB is also non-singular and that (AB)−1 = B−1A−1.

Solution: Let C = B−1A−1. We will show that (AB)C = C(AB) = I and therefore that C is the inverse of AB. (Since the inverse would thus have been shown to exist, necessarily AB must be non-singular.) Observe first that

(AB)C = (AB)(B−1A−1) = A(BB−1)A−1 = AIA−1 = I,

and similarly that

C(AB) = (B−1A−1)(AB) = B−1(A−1A)B = B−1IB = I.

Therefore (AB)C = C(AB) = I and so C is the inverse of AB.

Example 2.11: If A is non-singular, show that (A−1)T = (AT)−1.

Solution: Since (AT)−1 is the inverse of AT we have (AT)−1AT = I. Post-operating on both sides of this equation by (A−1)T gives (AT)−1AT(A−1)T = (A−1)T. Recall that (AB)T = BT AT for any two linear transformations A and B. Thus the preceding equation simplifies to (AT)−1(A−1A)T = (A−1)T. Since A−1A = I the desired result follows.

Example 2.12: Show that an orthogonal linear transformation Q preserves inner products, i.e. show that Qx · Qy = x · y for all vectors x, y.

Solution: Since

(x − y) · (x − y) = x · x + y · y − 2x · y,

it follows that

x · y = (1/2)(|x|² + |y|² − |x − y|²).    (i)

Since this holds for all vectors x, y, it must also hold when x and y are replaced by Qx and Qy:

Qx · Qy = (1/2)(|Qx|² + |Qy|² − |Qx − Qy|²).    (ii)

By definition, an orthogonal linear transformation Q preserves length: |Qv| = |v| for all vectors v. Thus the preceding equation simplifies to

Qx · Qy = (1/2)(|x|² + |y|² − |x − y|²).

Since the right-hand sides of the preceding expressions for x · y and Qx · Qy are the same, it follows that Qx · Qy = x · y.

Remark: Thus an orthogonal linear transformation preserves the length of any vector and the inner product between any two vectors. It follows therefore that an orthogonal linear transformation preserves the angle between a pair of vectors as well.

Example 2.13: Let Q be an orthogonal linear transformation. Show that a. Q is non-singular, and that b. Q−1 = QT.

Solution: a. To show that Q is non-singular we must show that the only vector x for which Qx = o is the null vector x = o. Suppose that Qx = o for some vector x. Taking the norm of the two sides of this equation leads to |Qx| = |o| = 0. However an orthogonal linear transformation preserves length and therefore |Qx| = |x|. Consequently |x| = 0. However the only vector of zero length is the null vector and so necessarily x = o. Thus Q is non-singular.

b. Since Q is orthogonal it preserves the inner product: Qx · Qy = x · y for all vectors x and y. However the property (2.19) of the transpose shows that Qx · Qy = x · QT Qy. It follows that x · QT Qy = x · y for all vectors x and y, and therefore that QT Q = I. Thus Q−1 = QT.

Example 2.14: If α1 and α2 are two distinct eigenvalues of a symmetric linear transformation A, show that the corresponding eigenvectors a1 and a2 are orthogonal to each other.

Solution: Recall from the definition of the transpose that Aa1 · a2 = a1 · AT a2, and since A is symmetric, that A = AT. Thus Aa1 · a2 = a1 · Aa2.

Since a1 and a2 are eigenvectors of A corresponding to the eigenvalues α1 and α2, we have Aa1 = α1 a1 and Aa2 = α2 a2. Thus the preceding equation reduces to α1 a1 · a2 = α2 a1 · a2, or equivalently

(α1 − α2)(a1 · a2) = 0.

Since α1 ≠ α2, it follows that necessarily a1 · a2 = 0.

Example 2.15: If λ and e are an eigenvalue and eigenvector of an arbitrary linear transformation A, show that λ and P−1e are an eigenvalue and eigenvector of the linear transformation P−1AP. Here P is an arbitrary non-singular linear transformation.

Solution: Since PP−1 = I it follows that Ae = APP−1e. However we are told that Ae = λe, whence APP−1e = λe. Operating on both sides with P−1 gives P−1APP−1e = λP−1e, which establishes the result.

Example 2.16: If λ is an eigenvalue of an orthogonal linear transformation Q, show that |λ| = 1.

Solution: Let λ and e be an eigenvalue and corresponding eigenvector of Q. Thus Qe = λe and so |Qe| = |λe| = |λ||e|. Since Q preserves length, |Qe| = |e|. Thus |λ| = 1.

Remark: We will show later that +1 is an eigenvalue of a "proper" orthogonal linear transformation on E3. The corresponding eigenvector is known as the axis of Q.

Example 2.17: The components of a linear transformation A in an orthonormal basis {e1, e2, e3} are the unique real numbers Aij defined by

Aej = Σ_{i=1}^{3} Aij ei,   j = 1, 2, 3.    (i)

Show that the linear transformation A can be represented as

A = Σ_{i=1}^{3} Σ_{j=1}^{3} Aij (ei ⊗ ej).    (ii)

Solution: Consider the linear transformation given on the right-hand side of (ii) and operate it on an arbitrary vector x:

(Σ_{i=1}^{3} Σ_{j=1}^{3} Aij ei ⊗ ej) x = Σ_{i=1}^{3} Σ_{j=1}^{3} Aij (x · ej) ei = Σ_{i=1}^{3} Σ_{j=1}^{3} Aij xj ei = Σ_{j=1}^{3} xj (Σ_{i=1}^{3} Aij ei),

where we have used the facts that (p ⊗ q)r = (q · r)p and xi = x · ei. On using (i) in the right-most expression above, we can continue this calculation as follows:

(Σ_{i=1}^{3} Σ_{j=1}^{3} Aij ei ⊗ ej) x = Σ_{j=1}^{3} xj Aej = A(Σ_{j=1}^{3} xj ej) = Ax.

The desired result follows from this since it holds for arbitrary vectors x.

Example 2.18: Let R be a "rotation transformation" that rotates vectors in IE3 through an angle θ, 0 < θ < π, about an axis e (in the sense of the right-hand rule). Show that R can be represented as

R = e ⊗ e + (e1 ⊗ e1 + e2 ⊗ e2) cos θ − (e1 ⊗ e2 − e2 ⊗ e1) sin θ,    (i)

where e1 and e2 are any two mutually orthogonal vectors such that {e1, e2, e} forms a right-handed orthonormal basis for IE3.

Solution: We begin by listing what is given to us in the problem statement. First, since the transformation R simply rotates vectors, it necessarily preserves the length of a vector, and so

|Rx| = |x| for all vectors x.    (ii)

Second, since R rotates vectors about the axis e, it leaves the axis e itself unchanged:

Re = e.    (iii)

Next, since R rotates vectors about the axis e, the angle between any vector x and e must equal the angle between Rx and e:

Rx · e = x · e for all vectors x;    (iv)

moreover, since the angle through which R rotates a vector is θ, the angle between any vector x and its image Rx is θ:

Rx · x = |x|² cos θ for all vectors x.    (v)

And finally, since the rotation is in the sense of the right-hand rule, the vectors x, Rx and e must obey the inequality

(x × Rx) · e > 0 for any vector x that is not parallel to the axis e.    (vi)

Let {e1, e2, e} be a right-handed orthonormal basis. Any vector in E3, and therefore in particular each of the vectors Re1, Re2 and Re, can be expressed as a linear combination of e1, e2 and e:

Re1 = R11 e1 + R21 e2 + R31 e,
Re2 = R12 e1 + R22 e2 + R32 e,    (vii)
Re  = R13 e1 + R23 e2 + R33 e,

for some unique real numbers Rij, i, j = 1, 2, 3.

First, it follows from (iii) and (vii)3 that R13 = 0, R23 = 0, R33 = 1. Second, we conclude from (iv) with the choice x = e1 that Re1 · e = 0; similarly Re2 · e = 0. These together with (vii) imply that R31 = R32 = 0.

Third, it follows from (v) with x = e1 and (vii)1 that R11 = cos θ. One similarly shows that R22 = cos θ. Thus R11 = R22 = cos θ, and we can write (vii) as

Re1 = cos θ e1 + R21 e2,
Re2 = R12 e1 + cos θ e2,    (viii)
Re  = e.

Fourth, (ii) with x = e1 gives |Re1| = 1, which in view of (viii)1 requires that R21 = ± sin θ. Similarly we find that R12 = ± sin θ.

Fifth, the inequality (vi) with the choice x = e1, together with (viii) and the fact that {e1, e2, e} forms a right-handed basis, yields R21 > 0. Similarly the choice x = e2 yields R12 < 0. Since 0 < θ < π, collecting these results shows that R21 = + sin θ and R12 = − sin θ. Thus in conclusion we can write (viii) as

Re1 = cos θ e1 + sin θ e2,
Re2 = − sin θ e1 + cos θ e2,    (ix)
Re  = e.

Finally, recall the representation (2.40) of a linear transformation in terms of its components as defined in (2.39). Applying this to (ix) allows us to write

R = cos θ (e1 ⊗ e1) + sin θ (e2 ⊗ e1) − sin θ (e1 ⊗ e2) + cos θ (e2 ⊗ e2) + (e ⊗ e),    (x)

which can be rearranged to give the desired result (i).

Example 2.19: If F is a nonsingular linear transformation, show that FT F is symmetric and positive definite.

Solution: For any linear transformations A and B we know that (AB)T = BT AT and (AT)T = A. It therefore follows that

(FT F)T = FT (FT)T = FT F;    (i)

this shows that FT F is symmetric. In order to show that FT F is positive definite, we consider the quadratic form FT Fx · x. By using the property (2.19) of the transpose, we can write

FT Fx · x = (Fx) · (Fx) = |Fx|² ≥ 0.    (ii)

Further, equality holds here if and only if Fx = o, which, since F is nonsingular, can happen only if x = o. Thus FT Fx · x > 0 for all vectors x ≠ o, and so FT F is positive definite.
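Both results above invite a quick numerical check. The sketch below (ours; axis, angle and random seed are arbitrary choices) builds the representation (i) of Example 2.18 and also confirms the conclusion of Example 2.19:

```python
import numpy as np

theta = 0.9
e  = np.array([0.0, 0.0, 1.0])                 # rotation axis
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])                 # {e1, e2, e} right-handed orthonormal

# Representation (i) of Example 2.18
R = (np.outer(e, e)
     + np.cos(theta) * (np.outer(e1, e1) + np.outer(e2, e2))
     - np.sin(theta) * (np.outer(e1, e2) - np.outer(e2, e1)))

assert np.allclose(R @ e, e)                     # the axis is left unchanged
assert np.allclose(R.T @ R, np.eye(3))           # R is orthogonal
assert np.isclose((R @ e1) @ e1, np.cos(theta))  # rotation through theta
assert np.cross(e1, R @ e1) @ e > 0              # right-hand-rule sense

# Example 2.19: F^T F is symmetric positive definite for non-singular F
rng = np.random.default_rng(12)
F = rng.standard_normal((3, 3))                  # almost surely non-singular
C = F.T @ F
assert np.allclose(C, C.T) and np.all(np.linalg.eigvalsh(C) > 0)
```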

Example 2.20: Consider a symmetric positive definite linear transformation S. Show that it has a unique symmetric positive definite square root, i.e. show that there is a unique symmetric positive definite linear transformation T for which T² = S.

Solution: Since S is symmetric and positive definite it has three real positive eigenvalues σ1, σ2, σ3 with corresponding eigenvectors s1, s2, s3, which may be taken to be orthonormal. Further, we know that S can be represented as

S = Σ_{i=1}^{3} σi (si ⊗ si).    (i)

If one defines a linear transformation T by

T = Σ_{i=1}^{3} √σi (si ⊗ si),    (ii)

one can readily verify that T is symmetric, positive definite and that T² = S. This establishes the existence of a symmetric positive definite square-root of S.

What remains is to show uniqueness of this square-root. Suppose that S has two symmetric positive definite square roots T1 and T2: S = T1² = T2². Let σ > 0 and s be an eigenvalue and corresponding eigenvector of S. Then Ss = σs, and so T1²s = σs. Thus we have

(T1 + √σ I)(T1 − √σ I)s = o.    (iii)

If we set f = (T1 − √σ I)s, this can be written as

T1 f = −√σ f.    (iv)

Thus either f = o or f is an eigenvector of T1 corresponding to the eigenvalue −√σ (< 0). Since T1 is positive definite it cannot have a negative eigenvalue. Thus f = o and so

T1 s = √σ s.    (v)

It similarly follows that T2 s = √σ s, and therefore that

T1 s = T2 s.    (vi)

This holds for every eigenvector s of S, i.e. T1 si = T2 si, i = 1, 2, 3. Since the triplet of eigenvectors forms a basis for the underlying vector space, this in turn implies that T1 x = T2 x for any vector x. Thus T1 = T2.

Example 2.21: Polar Decomposition Theorem: If F is a nonsingular linear transformation, show that there exists a unique positive definite symmetric linear transformation U and a unique orthogonal linear transformation R such that F = RU.

Solution: It follows from Example 2.19 that FT F is symmetric and positive definite. It then follows from Example 2.20 that FT F has a unique symmetric positive definite square root, say U:

U = √(FT F).    (i)

Define the linear transformation R through

R = FU−1.    (ii)

All we have to do is to show that R is orthogonal. But this follows from

RT R = (FU−1)T (FU−1) = (U−1)T FT F U−1 = U−1 U² U−1 = I.    (iii)

In this calculation we have used the fact that U, and so U−1, are symmetric. This establishes the proposition (except for the uniqueness, which is left as an exercise).

Example 2.22: The polar decomposition theorem states that any nonsingular linear transformation F can be represented uniquely in the forms F = RU = VR, where R is orthogonal and U and V are symmetric and positive definite. Let λi, ri, i = 1, 2, 3 be the eigenvalues and eigenvectors of U. From Example 2.15 it follows that the eigenvalues of V are the same as those of U and that the corresponding eigenvectors ℓi of V are given by ℓi = Rri. Show that

F = Σ_{i=1}^{3} λi ℓi ⊗ ri,   R = Σ_{i=1}^{3} ℓi ⊗ ri.    (i)

Solution: Since U and V are symmetric, they have the spectral decompositions

U = Σ_{i=1}^{3} λi ri ⊗ ri,   V = Σ_{i=1}^{3} λi ℓi ⊗ ℓi.    (ii)

First, since U is positive definite, it is non-singular, and its inverse U−1 exists:

U−1 = Σ_{i=1}^{3} λi−1 ri ⊗ ri.    (iii)

Next, by using the property (2.38)1 and ℓi = Rri, we have

F = RU = R Σ_{i=1}^{3} λi ri ⊗ ri = Σ_{i=1}^{3} λi (Rri) ⊗ ri = Σ_{i=1}^{3} λi ℓi ⊗ ri,

and therefore

R = FU−1 = (Σ_{i=1}^{3} λi ℓi ⊗ ri)(Σ_{j=1}^{3} λj−1 rj ⊗ rj) = Σ_{i=1}^{3} Σ_{j=1}^{3} λi λj−1 (ℓi ⊗ ri)(rj ⊗ rj).

By using the property (2.37)2 and the fact that ri · rj = δij, we have (ℓi ⊗ ri)(rj ⊗ rj) = (ri · rj)(ℓi ⊗ rj) = δij (ℓi ⊗ rj). Therefore

R = Σ_{i=1}^{3} Σ_{j=1}^{3} λi λj−1 δij (ℓi ⊗ rj) = Σ_{i=1}^{3} λi λi−1 (ℓi ⊗ ri) = Σ_{i=1}^{3} ℓi ⊗ ri.
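The constructions of Examples 2.20–2.22 translate almost line for line into a numerical sketch (ours; the random matrix is almost surely non-singular):

```python
import numpy as np

rng = np.random.default_rng(6)
F = rng.standard_normal((3, 3))
assert abs(np.linalg.det(F)) > 1e-12     # non-singular

# U = sqrt(F^T F), built from the spectral representation (Example 2.20)
lam, r = np.linalg.eigh(F.T @ F)         # F^T F is symmetric positive definite
U = sum(np.sqrt(lam[i]) * np.outer(r[:, i], r[:, i]) for i in range(3))
R = F @ np.linalg.inv(U)                 # R = F U^{-1} (Example 2.21)

assert np.allclose(R.T @ R, np.eye(3))   # R is orthogonal
assert np.allclose(R @ U, F)             # F = RU
V = R @ U @ R.T                          # with V = R U R^T one also has F = VR
assert np.allclose(V @ R, F)
```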

Example 2.23: Determine the rank and the null space of the linear transformation C = a ⊗ b where a ≠ o, b ≠ o.

Solution: Recall that the rank of any linear transformation A is the dimension of its range. (The range of A is the particular subspace of E3 comprised of all vectors Ax as x takes all values in E3.) Since Cx = (b · x)a, the vector Cx is parallel to the vector a for every choice of the vector x. Thus the range of C is the set of vectors parallel to a, and its dimension is one. The linear transformation C therefore has rank one.

Recall that the null space of any linear transformation A is the particular subspace of E3 comprised of the set of all vectors x for which Ax = o. Since Cx = (b · x)a and a ≠ o, the null space of C consists of all vectors x for which b · x = 0, i.e. the set of all vectors normal to b.

Example 2.24: Let λ1 ≤ λ2 ≤ λ3 be the eigenvalues of the symmetric linear transformation S. Show that S can be expressed in the form S = (I + a ⊗ b)(I + b ⊗ a) if and only if 0 ≤ λ1 ≤ 1, λ2 = 1, λ3 ≥ 1.

Example 2.25: Calculate the square roots of the identity tensor.

Solution: The identity is certainly a symmetric positive definite tensor. By the result of a previous example on the square-root of a symmetric positive definite tensor, it follows that there is a unique symmetric positive definite tensor which is the square root of I; obviously, this square root is also I. However, there are other square roots of I that are not symmetric positive definite. We are to explore them here: thus we wish to determine a tensor A on E3 such that

A² = I,   A ≠ I,   A ≠ −I.    (i)

First, if Ax = x for every vector x ∈ E3, then, by definition, A = I. Since we are given that A ≠ I, there must exist at least one non-null vector x for which Ax ≠ x; call this vector f1, so that

Af1 ≠ f1.    (ii)

Set

e1 = (A − I) f1.    (iii)

Since Af1 ≠ f1, it follows that e1 ≠ o. Observe that

(A + I) e1 = (A + I)(A − I) f1 = (A² − I) f1 = O f1 = o.

Therefore

Ae1 = −e1,    (iv)

and so −1 is an eigenvalue of A with corresponding eigenvector e1. Without loss of generality we can assume that |e1| = 1.

Second, the fact that A ≠ −I, together with A² = I, similarly implies that there must exist a unit vector e2 for which

Ae2 = e2,    (v)

from which we conclude that +1 is an eigenvalue of A with corresponding eigenvector e2.

Third, suppose that for some scalars ξ1, ξ2 one has ξ1 e1 + ξ2 e2 = o. Operating on this by A yields ξ1 Ae1 + ξ2 Ae2 = o, which on using (iv) and (v) leads to −ξ1 e1 + ξ2 e2 = o. Subtracting and adding the preceding two equations shows that ξ1 e1 = ξ2 e2 = o. Since neither e1 nor e2 is the null vector o, it follows that ξ1 = ξ2 = 0. Therefore e1 and e2 are linearly independent, i.e. {e1, e2} is a linearly independent pair of vectors.

Fourth, let e3 be a unit vector that is perpendicular to both e1 and e2. One can show that the triplet of vectors {e1, e2, e3} is linearly independent and therefore forms a basis for E3.

Fifth, the components Aij of the tensor A in the basis {e1, e2, e3} are given, as usual, by Aej = Aij ei. Since e1 and e2 are eigenvectors of A, comparing this with (iv) yields A11 = −1, A21 = A31 = 0, and similarly comparing it with (v) yields A22 = 1, A12 = A32 = 0. The matrix of components of A in this basis is therefore

        ⎡ −1   0   A13 ⎤
  [A] = ⎢  0   1   A23 ⎥ .    (vi)
        ⎣  0   0   A33 ⎦

Since A² = I, the matrix of components of A² in any basis has to be the identity matrix. However,

                            ⎡ 1   0   −A13 + A13 A33 ⎤
  [A²] = [A]² = [A][A] =    ⎢ 0   1    A23 + A23 A33 ⎥ .    (vii)
                            ⎣ 0   0    A33²          ⎦

(Notation: [A²] is the matrix of components of A² while [A]² is the square of the matrix of components of A. Why is [A²] = [A]²?) Therefore we must have

−A13 + A13 A33 = 0,   A23 + A23 A33 = 0,   A33² = 1,    (viii)

which implies that

either  A13 arbitrary, A23 = 0, A33 = 1,   or   A13 = 0, A23 arbitrary, A33 = −1.    (ix)

Consequently the matrix [A] must necessarily have one of the two forms

  ⎡ −1   0   α1 ⎤         ⎡ −1   0    0 ⎤
  ⎢  0   1    0 ⎥   or    ⎢  0   1   α2 ⎥ ,    (x)
  ⎣  0   0    1 ⎦         ⎣  0   0   −1 ⎦

where α1 and α2 are arbitrary scalars.

Sixth, set

p1 = e1,   q1 = −e1 + (α1/2) e3.    (xi)

Then p1 ⊗ q1 = −e1 ⊗ e1 + (α1/2) e1 ⊗ e3, and therefore

I + 2p1 ⊗ q1 = e1 ⊗ e1 + e2 ⊗ e2 + e3 ⊗ e3 − 2e1 ⊗ e1 + α1 e1 ⊗ e3 = −e1 ⊗ e1 + e2 ⊗ e2 + e3 ⊗ e3 + α1 e1 ⊗ e3.

Note from this that the components of the tensor I + 2p1 ⊗ q1 are given by (x)1. Conversely, one can readily verify that the tensor

A = I + 2p1 ⊗ q1    (xii)

has the desired properties A² = I, A ≠ I, A ≠ −I for any value of the scalar α1.

Alternatively, set

p2 = e2,   q2 = e2 + (α2/2) e3.    (xiii)

Then p2 ⊗ q2 = e2 ⊗ e2 + (α2/2) e2 ⊗ e3, and therefore

−I + 2p2 ⊗ q2 = −e1 ⊗ e1 − e2 ⊗ e2 − e3 ⊗ e3 + 2e2 ⊗ e2 + α2 e2 ⊗ e3 = −e1 ⊗ e1 + e2 ⊗ e2 − e3 ⊗ e3 + α2 e2 ⊗ e3.

Note from this that the components of the tensor −I + 2p2 ⊗ q2 are given by (x)2. Conversely, one can readily verify that the tensor

A = −I + 2p2 ⊗ q2    (xiv)

has the desired properties A² = I, A ≠ I, A ≠ −I for any value of the scalar α2.

Thus the tensors defined in (xii) and (xiv) are both square roots of the identity tensor that are not symmetric positive definite.
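The "readily verify" step for (xii) can be delegated to a few lines of code. This sketch (ours; the value of α1 is arbitrary) confirms that the tensor of (xii) squares to the identity without being symmetric:

```python
import numpy as np

e1, e2, e3 = np.eye(3)
alpha1 = 2.5                            # arbitrary scalar

p1 = e1
q1 = -e1 + 0.5 * alpha1 * e3            # q1 = -e1 + (alpha1/2) e3, as in (xi)
A = np.eye(3) + 2.0 * np.outer(p1, q1)  # the tensor (xii)

assert np.allclose(A @ A, np.eye(3))    # A is a square root of I
assert not np.allclose(A, np.eye(3))    # A != I
assert not np.allclose(A, -np.eye(3))   # A != -I
assert not np.allclose(A, A.T)          # and A is not symmetric
```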

Chapter 3

Components of Vectors and Tensors. Cartesian Tensors.

Notation:

α ..... scalar
{a} ..... 3 × 1 column matrix
a ..... vector
ai ..... ith component of the vector a in some basis, or ith element of the column matrix {a}
[A] ..... 3 × 3 square matrix
A ..... linear transformation
Aij ..... i, j component of the linear transformation A in some basis, or i, j element of the square matrix [A]
Cijkl ..... i, j, k, l component of the 4-tensor C in some basis
Ti1 i2 ...in ..... i1 i2 . . . in component of the n-tensor T in some basis.

3.1 Components of a vector in a basis.

Let IE3 be a three-dimensional Euclidean vector space. A set of three linearly independent vectors {e1, e2, e3} forms a basis for IE3 in the sense that an arbitrary vector v can always be expressed as a linear combination of the three basis vectors. Thus, given any v ∈ IE3, there are unique scalars α, β, γ such that

v = αe1 + βe2 + γe3.    (3.1)

If each basis vector ei has unit length, and if each pair of basis vectors ei, ej are mutually orthogonal, we say that {e1, e2, e3} forms an orthonormal basis for IE3, i.e. for an

orthonormal basis,

ei · ej = δij,    (3.2)

where δij is the Kronecker delta. If the basis is right-handed, one has in addition that

ei · (ej × ek) = eijk,    (3.3)

where eijk is the alternator introduced previously in (1.44). In these notes we shall always restrict attention to orthonormal bases unless explicitly stated otherwise.

The components vi of a vector v in a basis {e1, e2, e3} are defined by

vi = v · ei.    (3.4)

The vector can be expressed in terms of its components and the basis vectors as

v = vi ei.    (3.5)

The components of v may be assembled into a column matrix

        ⎡ v1 ⎤
  {v} = ⎢ v2 ⎥ .    (3.6)
        ⎣ v3 ⎦

Figure 3.1: Components {v1, v2, v3} and {v1', v2', v3'} of the same vector v in two different bases.

Even though this is obvious from the definition (3.4), it is still important to emphasize that the components vi of a vector depend on both the vector v and the choice of basis. Suppose, for example, that we are given two bases {e1, e2, e3} and {e1', e2', e3'} as shown in

Figure 3.1. Then the vector v has one set of components vi in the first basis and a different set of components vi' in the second basis:

vi = v · ei,   vi' = v · ei'.    (3.7)

The components vi and vi' are related to each other (as we shall discuss later) but in general

vi ≠ vi'.    (3.8)

Once a basis {e1, e2, e3} is chosen and fixed, there is a one-to-one correspondence between column matrices and vectors: there is a unique vector x associated with any given column matrix {x} such that the components of x in {e1, e2, e3} are {x}. It follows, for example, that once the basis is fixed, the vector equation z = x + y can be written equivalently as

{z} = {x} + {y} or zi = xi + yi    (3.9)

in terms of the components xi, yi and zi in the given basis.

If ui and vi are the components of two vectors u and v in a basis, then the scalar-product u · v can be expressed as

u · v = ui vi,    (3.10)

and the vector-product u × v can be expressed as

u × v = (eijk uj vk) ei, or equivalently as (u × v)i = eijk uj vk,    (3.11)

where eijk is the alternator introduced previously in (1.44).

3.2 Components of a linear transformation in a basis.

Consider a linear transformation A. Once a basis {e1, e2, e3} is chosen and fixed, any vector in IE3 can be expressed as a linear combination of the basis vectors e1, e2 and e3. In particular this is true of the three vectors Ae1, Ae2 and Ae3. Let Aij be the ith component of the vector Aej, so that

Aej = Aij ei.    (3.12)

We can also write

Aij = ei · (Aej).    (3.13)

The 9 scalars Aij are known as the components of the linear transformation A in the basis {e1, e2, e3}. The components Aij can be assembled into a square matrix:

        ⎡ A11  A12  A13 ⎤
  [A] = ⎢ A21  A22  A23 ⎥ .    (3.14)
        ⎣ A31  A32  A33 ⎦

The linear transformation A can be expressed in terms of its components Aij and the basis vectors ei as

A = Σ_{i=1}^{3} Σ_{j=1}^{3} Aij (ei ⊗ ej).    (3.15)

The components of the linear transformation A = a ⊗ b are Aij = ai bj.

The components Aij of a linear transformation depend on both the linear transformation A and the choice of basis. Suppose, for example, that we are given two bases {e1, e2, e3} and {e1', e2', e3'}. Then the linear transformation A has one set of components Aij in the first basis and a different set of components Aij' in the second basis:

Aij = ei · (Aej),   Aij' = ei' · (Aej').    (3.16)

The components Aij and Aij' are related to each other (as we shall discuss later) but in general

Aij ≠ Aij'.    (3.17)

Once a basis {e1, e2, e3} is chosen and fixed, there is a one-to-one correspondence between square matrices and linear transformations: there is a unique linear transformation M associated with any given square matrix [M] such that the components of M in {e1, e2, e3} are [M]. It follows, for example, that the equation y = Ax relating the linear transformation A and the vectors x and y can be written equivalently as

{y} = [A]{x} or yi = Aij xj    (3.18)

in terms of the components Aij, xi and yi in the given basis. Similarly, if A, B and C are linear transformations such that C = AB, then their component matrices [A], [B] and [C] are related by

[C] = [A][B] or Cij = Aik Bkj.    (3.19)

The component matrix [I] of the identity linear transformation I in any orthonormal basis is the unit matrix; its components are therefore given by the Kronecker delta δij. If [A] and [AT] are the component matrices of the linear transformations A and AT, then [AT] = [A]T and (AT)ij = Aji.

As mentioned in Section 2.2, a symmetric linear transformation S has three real eigenvalues λ1, λ2, λ3 and corresponding orthonormal eigenvectors e1, e2, e3. The eigenvectors are referred to as the principal directions of S, and the particular basis consisting of the eigenvectors is called a principal basis for S. The component matrix [S] of the symmetric linear transformation S in its principal basis is

        ⎡ λ1   0    0 ⎤
  [S] = ⎢  0  λ2    0 ⎥ .    (3.20)
        ⎣  0   0   λ3 ⎦

As a final remark we note that if we are to establish certain results for vectors and linear transformations, we can, if it is more convenient to do so, pick and fix a basis, and then work with the components in that basis; if necessary, we can revert back to the vectors and linear transformations at the end. For example, the first example in the previous chapter asked us to show that a · (b × c) = b · (c × a). In terms of components, the left-hand side of this reads

a · (b × c) = ai (b × c)i = ai eijk bj ck = eijk ai bj ck.

Similarly the right-hand side reads

b · (c × a) = bi (c × a)i = bi eijk cj ak = eijk ak bi cj.

Since i, j, k are dummy subscripts in the right-most expression, they can be changed to any other subscript; thus by changing k → i, i → j and j → k we can write b · (c × a) = ejki ai bj ck. Finally, recalling that the sign of eijk changes when any two adjacent subscripts are switched, we find that

b · (c × a) = ejki ai bj ck = −ejik ai bj ck = eijk ai bj ck,

where we have first switched the ki and then the ji in the subscript of the alternator. The right-most expressions for a · (b × c) and b · (c × a) are identical, and this establishes the desired identity.

3.3 Components in two bases.

Consider a 3-dimensional Euclidean vector space together with two orthonormal bases {e1, e2, e3} and {e1', e2', e3'}. Since {e1, e2, e3} forms a basis, any vector, and therefore in particular each of the vectors ei', can be represented as a linear combination of the basis vectors e1, e2, e3. Let Qij be the jth component of the vector ei' in the basis {e1, e2, e3}:

ei' = Qij ej.    (3.21)

By taking the dot-product of (3.21) with ek, one sees that

Qij = ei' · ej,    (3.22)

and so Qij is the cosine of the angle between the basis vectors ei' and ej. Observe from (3.21) that Qji can also be interpreted as the jth component of ei in the basis {e1', e2', e3'}, whence we also have

ei = Qji ej'.    (3.23)

The 9 numbers Qij can be assembled into a square matrix [Q]; this matrix relates the two bases {e1, e2, e3} and {e1', e2', e3'}. Since both bases are orthonormal, it can be readily shown that [Q] is an orthogonal matrix. If the two bases are related by a rotation, i.e. if one basis can be rotated into the other, which means that both bases are right-handed or both are left-handed, then [Q] is a proper orthogonal matrix and det[Q] = +1. If instead the two bases are related by a reflection, which means that one basis is right-handed and the other is left-handed, then [Q] is an improper orthogonal matrix and det[Q] = −1.

We may now relate the different components of a single vector v in two bases. Let vi and vi' be the ith component of the same vector v in the two bases {e1, e2, e3} and {e1', e2', e3'}. Then one can show that

vi' = Qij vj or equivalently {v'} = [Q]{v}.    (3.24)

Since [Q] is orthogonal, one also has the inverse relationships

vi = Qji vj' or equivalently {v} = [Q]T{v'}.    (3.25)

In general, the component matrices {v} and {v'} of a vector v in two different bases are different. A vector whose components in every basis happen to be the same is called an isotropic vector: {v} = [Q]{v} for all orthogonal matrices [Q]. It is possible to show that the only isotropic vector is the null vector o.

Similarly, we may relate the different components of a single linear transformation A in two bases. Let Aij and Aij' be the ij-components of the same linear transformation A in the two bases {e1, e2, e3} and {e1', e2', e3'}. Then one can show that

Aij' = Qip Qjq Apq or equivalently [A'] = [Q][A][Q]T.    (3.26)

Since [Q] is orthogonal, one also has the inverse relationships

Aij = Qpi Qqj Apq' or equivalently [A] = [Q]T[A'][Q].    (3.27)
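The transformation rules (3.24)–(3.27) are easily exercised numerically. In the sketch below (ours), the primed basis is obtained by rotating the unprimed one about e3, so the rows of [Q] are the components of the primed basis vectors:

```python
import numpy as np

t = 0.6                                        # rotate the basis about e3
Q = np.array([[ np.cos(t), np.sin(t), 0.0],    # Q_ij = e'_i . e_j
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])
assert np.allclose(Q @ Q.T, np.eye(3))         # [Q] is orthogonal

rng = np.random.default_rng(7)
v = rng.standard_normal(3)                     # components {v} in the first basis
A = rng.standard_normal((3, 3))                # components [A] in the first basis

v_p = Q @ v               # {v'} = [Q]{v},       equation (3.24)
A_p = Q @ A @ Q.T         # [A'] = [Q][A][Q]^T,  equation (3.26)

# Inverse relationships (3.25) and (3.27)
assert np.allclose(Q.T @ v_p, v)
assert np.allclose(Q.T @ A_p @ Q, A)
```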

In general, the component matrices [A] and [A'] of a linear transformation A in two different bases are different. A linear transformation whose components in every basis happen to be the same is called an isotropic linear transformation: [A] = [Q][A][Q]T for all orthogonal matrices [Q]. It is possible to show that the most general isotropic symmetric linear transformation is a scalar multiple of the identity, αI, where α is an arbitrary scalar.

3.4 Scalar-valued functions of linear transformations. Determinant, trace, scalar-product and norm.

Let φ([A]) be some real-valued function defined on the set of all square matrices. Equivalently, let A be a linear transformation and let [A] be the components of A in some basis {e1, e2, e3}. If [A'] are the components of A in some other basis {e1', e2', e3'}, then in general φ([A]) ≠ φ([A']). This means that the function φ depends on both the linear transformation A and the underlying basis. Certain functions φ have the property that φ([A]) = φ([A']) for all pairs of bases {e1, e2, e3} and {e1', e2', e3'}; such a function depends on the linear transformation only and not the basis, and for it we may write φ(A).

Equivalently, let Φ(A; e1, e2, e3) be a scalar-valued function that depends on a linear transformation A and a (not-necessarily orthonormal) basis {e1, e2, e3}. Certain such functions are in fact independent of the basis, so that for every two (not-necessarily orthonormal) bases {e1, e2, e3} and {e1', e2', e3'} one has Φ(A; e1, e2, e3) = Φ(A; e1', e2', e3'), and in such a case we can simply write Φ(A). One example of such a function is

Φ(A; e1, e2, e3) = [(Ae1 × Ae2) · Ae3] / [(e1 × e2) · e3]    (3.28)

(though it is certainly not obvious that this function is independent of the choice of basis). An example of a function that does depend on the basis is Φ(A; e1, e2, e3) = Ae1 · e1.

We first consider two important examples here. Since the components [A] and [A'] of a linear transformation A in two bases are related by [A'] = [Q][A][Q]T, if we take the determinant of this matrix equation we get

det[A'] = det([Q][A][Q]T) = det[Q] det[A] det[Q]T = (det[Q])² det[A] = det[A],    (3.29)

since the determinant of an orthogonal matrix is ±1. Therefore without ambiguity we may define the determinant of a linear transformation A to be the (basis independent) scalar-valued function given by

det A = det[A].    (3.30)

Similarly, we may define the trace of a linear transformation A to be the (basis independent) scalar-valued function given by

trace A = tr[A].    (3.31)

In terms of components in a basis one has

det A = eijk A1i A2j A3k = eijk Ai1 Aj2 Ak3,    (3.32)

see (1.46), and

trace A = Aii.    (3.33)

We will see in an example at the end of this chapter that the particular function Φ defined in (3.28) is in fact the determinant det A.

It is useful to note the following properties of the determinant of a linear transformation:

det(AB) = det(A) det(B),   det(αA) = α³ det(A),   det(AT) = det(A).    (3.34)

As mentioned previously, a linear transformation A is said to be non-singular if the only vector x for which Ax = o is the null vector x = o. One can show that A is non-singular if and only if det A ≠ 0. If A is non-singular, then

det(A−1) = 1/det(A).    (3.35)

Suppose that λ and v ≠ o are an eigenvalue and eigenvector of a given linear transformation A. Then by definition Av = λv, or equivalently (A − λI)v = o. Since v ≠ o it follows that A − λI must be singular, and so

det(A − λI) = 0.    (3.36)

The eigenvalues are the roots λ of this cubic equation. The eigenvalues and eigenvectors of a linear transformation do not depend on any choice of basis. Thus the eigenvalues of a linear transformation are also scalar-valued functions of A whose values depend only on A and not the basis: λi = λi(A).

If S is symmetric, its matrix of components in a principal basis is

        ⎡ λ1   0    0 ⎤
  [S] = ⎢  0  λ2    0 ⎥ .    (3.37)
        ⎣  0   0   λ3 ⎦

The particular scalar-valued functions

I1(A) = tr A,   I2(A) = (1/2)[(tr A)² − tr(A²)],   I3(A) = det A    (3.38)

will appear frequently in what follows. It can be readily verified that for any linear transformation A and all orthogonal linear transformations Q,

I1(QT AQ) = I1(A),   I2(QT AQ) = I2(A),   I3(QT AQ) = I3(A),    (3.39)

and for this reason the three functions (3.38) are said to be invariant under orthogonal transformations.

Observe from (3.37) that for a symmetric linear transformation S with eigenvalues λ1, λ2, λ3,

I1(S) = λ1 + λ2 + λ3,   I2(S) = λ1λ2 + λ2λ3 + λ3λ1,   I3(S) = λ1λ2λ3.    (3.40)

The mapping (3.40) between invariants and eigenvalues is one-to-one, and the invariants (3.40) will play an important role in what follows.

In addition one can show that for any linear transformation A and any real number α,

det(A − αI) = −α³ + I1(A)α² − I2(A)α + I3(A).    (3.41)

Note in particular that the cubic equation for the eigenvalues of a linear transformation A can be written as

λ³ − I1(A)λ² + I2(A)λ − I3(A) = 0.    (3.42)

Finally, one can show that

A³ − I1(A)A² + I2(A)A − I3(A)I = O,    (3.43)

which is known as the Cayley-Hamilton theorem.
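The invariance properties (3.39) and the Cayley-Hamilton theorem (3.43) are both easy to confirm numerically, as in the following sketch (ours; the rotation Q is an arbitrary choice of orthogonal transformation):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3))

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# Invariance under orthogonal transformations, (3.39)
t = 1.1
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
B = Q.T @ A @ Q
assert np.isclose(np.trace(B), I1)
assert np.isclose(0.5 * (np.trace(B) ** 2 - np.trace(B @ B)), I2)
assert np.isclose(np.linalg.det(B), I3)

# Cayley-Hamilton, (3.43): A^3 - I1 A^2 + I2 A - I3 I = O
lhs = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * np.eye(3)
assert np.allclose(lhs, np.zeros((3, 3)))
```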

One can similarly define scalar-valued functions of two linear transformations A and B. The particular scalar-valued function φ(A, B) = tr(ABT) is often known as the scalar product of the two linear transformations A and B and is written as A · B:

A · B = tr(ABT).    (3.44)

It is natural then to define the magnitude (or norm) of a linear transformation A, denoted by |A|, as

|A| = √(A · A) = √(tr(AAT)).    (3.45)

Note that in terms of components in a basis, A · B = Aij Bij and

|A|² = Aij Aij.    (3.46)

Observe the useful property that if |A| → 0, then each component Aij → 0. This will be used later when we linearize the theory of large deformations.

3.5 Cartesian Tensors

Consider two orthonormal bases {e1, e2, e3} and {e1', e2', e3'}, related by the orthogonal matrix [Q] as in (3.21):

ei' = Qij ej.    (3.47)

A quantity whose components vi and vi' in these two bases are related by

vi' = Qij vj    (3.48)

is called a 1st-order Cartesian tensor or a 1-tensor. It follows from our preceding discussion that a vector is a 1-tensor. A quantity whose components Aij and Aij' in two bases are related by

Aij' = Qip Qjq Apq    (3.49)

is called a 2nd-order Cartesian tensor or a 2-tensor. It follows from our preceding discussion that a linear transformation is a 2-tensor.

The concept of an nth-order tensor can be introduced similarly: let T be a physical entity which, in a given basis {e1, e2, e3}, is defined completely by a set of 3^n ordered numbers Ti1 i2 ...in. The numbers Ti1 i2 ...in are called the components of T in the basis {e1, e2, e3}. If, for example, T is a scalar, vector or linear transformation, it is represented by 3^0, 3^1 and 3^2

components respectively in the given basis.

Let {e1', e2', e3'} be a second basis related to the first one by the orthogonal matrix [Q], and let Ti1 i2 ...in' be the components of the entity T in the second basis. If, for every pair of such bases, these two sets of components are related by

Ti1 i2 ...in' = Qi1 j1 Qi2 j2 . . . Qin jn Tj1 j2 ...jn,    (3.50)

the entity T is called an nth-order Cartesian tensor, or more simply an n-tensor. In general, the components of a tensor T in two different bases are different: Ti1 i2 ...in' ≠ Ti1 i2 ...in. Note that the components of a tensor in an arbitrary basis can be calculated from (3.50) if its components in any one basis are known.

Two tensors of the same order are added by adding corresponding components.

Recall that the outer-product of two vectors a and b is the 2-tensor C = a ⊗ b whose components are given by Cij = ai bj. This can be generalized to higher-order tensors: given an n-tensor A and an m-tensor B, their outer-product is the (m + n)-tensor C = A ⊗ B whose components are given by

Ci1 i2 ...in j1 j2 ...jm = Ai1 i2 ...in Bj1 j2 ...jm.    (3.51)

Let A be a 2-tensor with components Aij in some basis. Then "contracting" A over its subscripts leads to the scalar Aii. This too can be generalized to higher-order tensors. Contracting over two subscripts involves setting those two subscripts equal, and therefore summing over them: if A is an n-tensor with components Ai1 i2 ...in in some basis, then "contracting" A over two of its subscripts, say the ij-th and ik-th subscripts, leads to the (n − 2)-tensor whose components in this basis are

Ai1 i2 ... ij−1 p ij+1 ... ik−1 p ik+1 ... in.

Let a, b and T be entities whose components in a basis are denoted by ai, bi and Tij respectively. Suppose that the components of T in some basis are related to the components of a and b in that same basis by ai = Tij bj. If it is known that a and b are 1-tensors, then one can readily show that T is necessarily a 2-tensor. This is called the quotient rule, since it has the appearance of saying that the quotient of two 1-tensors is a 2-tensor. The rule generalizes naturally to tensors of more general order: suppose that A, B and T are entities whose components in a basis are related by

Ai1 i2 ...in = Ti1 i2 ...in j1 j2 ...jm Bj1 j2 ...jm,    (3.52)

where some of the subscripts may be repeated. If it is known that A and B are tensors, then T is necessarily a tensor as well.
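Outer products and contractions map directly onto index notation in code. The following sketch (ours; np.einsum is simply a convenient way to write summation over repeated subscripts) illustrates the operations just described:

```python
import numpy as np

rng = np.random.default_rng(9)
a, b = rng.standard_normal(3), rng.standard_normal(3)
A = rng.standard_normal((3, 3))          # a 2-tensor
T4 = rng.standard_normal((3, 3, 3, 3))   # a 4-tensor

# Outer product of two 1-tensors: C_ij = a_i b_j, as in (3.51)
C = np.einsum('i,j->ij', a, b)
assert np.allclose(C, np.outer(a, b))

# Contracting a 2-tensor over its subscripts gives the scalar A_ii
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# Contracting a 4-tensor over its middle pair of subscripts gives a 2-tensor
T2 = np.einsum('ippj->ij', T4)
assert T2.shape == (3, 3)
```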

However, there are certain special tensors whose components in one basis are the same as those in any other basis; such a tensor is said to be isotropic. In general, a tensor T is said to be an isotropic tensor if its components have the same values in all bases, i.e. if

Ti1 i2 ...in' = Ti1 i2 ...in    (3.53)

in all bases {e1, e2, e3} and {e1', e2', e3'}. Equivalently, for an isotropic tensor,

Ti1 i2 ...in = Qi1 j1 Qi2 j2 . . . Qin jn Tj1 j2 ...jn for all orthogonal matrices [Q].    (3.54)

As noted previously, an example of this is the identity 2-tensor I. One can show that (a) the only isotropic 1-tensor is the null vector o; (b) the most general isotropic 2-tensor is a scalar multiple of the identity linear transformation, αI; (c) the most general isotropic 3-tensor is the null 3-tensor o; and (d) the most general isotropic 4-tensor C has components (in any basis)

Cijkl = αδij δkl + βδik δjl + γδil δjk,    (3.55)

where α, β, γ are arbitrary scalars.

3.6 Worked Examples.

In some of the examples below, we are asked to establish certain results for vectors and linear transformations. Whenever it is more convenient, we may pick and fix a basis, and then work using components in that basis; if necessary, we can revert back to the vectors and linear transformations at the end. We shall do this frequently in what follows and will not bother to explain this each time.

It is also worth pointing out that in some of the examples below, calculations involving vectors and/or linear transformations are carried out without reference to their components. One might have expected such examples to have been presented in Chapter 2; they are contained in the present chapter because they all involve either the determinant or trace of a linear transformation, and we chose to define these quantities in terms of components (even though they are basis independent).

Example 3.1: Suppose that A is a symmetric linear transformation. Show that its matrix of components [A] in any basis is a symmetric matrix.

Solution: According to (3.13), the components of A in the basis {e1, e2, e3} are defined by Aji = ej · (Aei). The property (2.19) of the transpose shows that ej · (Aei) = (AT ej) · ei, which, on using the fact that A is symmetric, further simplifies to ej · (Aei) = (Aej) · ei. Finally, since the order of the vectors in a scalar product does not matter, we have ej · (Aei) = ei · (Aej); by (3.13), the right-most term here is the Aij component of A. Thus Aji = Aij, and so [A] = [A]T: the matrix [A] is symmetric.

Remark: Conversely, if it is known that the matrix of components [A] of a linear transformation in some basis is symmetric, then the linear transformation A is also symmetric.

Example 3.2: Consider the scalar-valued function f(A, B) = trace(ABT) and show that, for all linear transformations A, B, C, and for all scalars α, this function f has the following properties:

i) f(A, B) = f(B, A),
ii) f(αA, B) = αf(A, B),
iii) f(A + C, B) = f(A, B) + f(C, B), and
iv) f(A, A) > 0 provided A ≠ 0.

Solution: Let Aij and Bij be the components of A and B in an arbitrary basis. In terms of these components, (ABT)ij = Aik Bjk, and so f(A, B) = trace(ABT) = Aik Bik. It is now trivial to verify that all of the above requirements hold.

Remark: It follows from this that the function f has all of the usual requirements of a scalar product. Therefore we may define the scalar-product of two linear transformations A and B, denoted by A · B, as A · B = trace(ABT) = Aij Bij. Note that, based on this scalar-product, we can define the magnitude of a linear transformation to be |A| = √(A · A) = √(Aij Aij).

Example 3.3: Choose any convenient basis and calculate the components of the projection linear transformation Π and the reflection linear transformation R in that basis.

Solution: Let e3 be a unit vector normal to the plane P, and let e1 and e2 be any two unit vectors in P such that {e1, e2, e3} forms an orthonormal basis. From an example in the previous chapter we know that the projection transformation Π and the reflection transformation R in the plane P can be written as Π = I − e3 ⊗ e3 and R = I − 2(e3 ⊗ e3) respectively. Since the components of e3 in the chosen basis are δ3i, we find that

Πij = δij − (e3)i (e3)j = δij − δ3i δ3j,   Rij = δij − 2δ3i δ3j.
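These component formulas are easy to confirm in code. The sketch below (ours; the basis is the standard one, with e3 normal to P) also re-checks the properties of Π and R established in Example 2.6:

```python
import numpy as np

e3 = np.array([0.0, 0.0, 1.0])              # unit normal to the plane P
Pi = np.eye(3) - np.outer(e3, e3)           # Pi_ij = delta_ij - delta_3i delta_3j
R  = np.eye(3) - 2.0 * np.outer(e3, e3)     # R_ij = delta_ij - 2 delta_3i delta_3j

assert np.allclose(Pi, np.diag([1.0, 1.0, 0.0]))
assert np.allclose(R,  np.diag([1.0, 1.0, -1.0]))

# Properties from Example 2.6: Pi symmetric and singular; R orthogonal with R^{-1} = R
assert np.allclose(Pi, Pi.T) and np.isclose(np.linalg.det(Pi), 0.0)
assert np.allclose(R @ R.T, np.eye(3)) and np.allclose(R @ R, np.eye(3))
```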

Example 3.3: For any two vectors u and v, show that their cross-product u × v is orthogonal to both u and v.
Solution: We are to show, for example, that u · (u × v) = 0. In terms of their components we can write

    u · (u × v) = ui (u × v)i = ui (eijk uj vk) = eijk ui uj vk.   (i)

Since eijk = −ejik and ui uj = uj ui, it follows that eijk is skew-symmetric in the subscripts i, j while ui uj is symmetric in the subscripts i, j. Thus it follows from Example 1.3 that eijk ui uj = 0, and so u · (u × v) = 0. The orthogonality of v and u × v can be established similarly.

Example 3.4: Suppose that a, b, c are any three linearly independent vectors and that F is an arbitrary non-singular linear transformation. Show that

    (Fa × Fb) · Fc = det F (a × b) · c.   (i)

Solution: First consider the left-hand side of (i). In terms of components we can express this as

    (Fa × Fb) · Fc = (Fa × Fb)i (Fc)i = eijk (Fa)j (Fb)k (Fc)i,   (ii)

and consequently

    (Fa × Fb) · Fc = eijk (Fjp ap)(Fkq bq)(Fir cr) = eijk Fir Fjp Fkq ap bq cr.   (iii)

Turning next to the right-hand side of (i), we note that

    det F (a × b) · c = det[F] (a × b)i ci = det[F] eijk aj bk ci = det[F] erpq ap bq cr.   (iv)

Recalling the identity erpq det[F] = eijk Fir Fjp Fkq in (1.48) for the determinant of a matrix and substituting this into (iv) gives

    det F (a × b) · c = eijk Fir Fjp Fkq ap bq cr.   (v)

Since the right-hand sides of (iii) and (v) are identical, it follows that the left-hand sides must also be equal, thus establishing the desired result.
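The identity of Example 3.4 lends itself to a direct numerical check. Here is a minimal numpy sketch (random F, a, b, c of our own choosing; an illustration, not part of the notes):

import numpy as np
rng = np.random.default_rng(1)
F = rng.normal(size=(3, 3))      # an arbitrary (almost surely non-singular) F
a, b, c = rng.normal(size=(3, 3))
lhs = np.dot(np.cross(F @ a, F @ b), F @ c)
rhs = np.linalg.det(F) * np.dot(np.cross(a, b), c)
assert np.isclose(lhs, rhs)      # (Fa x Fb) . Fc = det F (a x b) . c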

Example 3.5: Suppose that a, b and c are three non-coplanar vectors in IE3. Let V0 be the volume of the tetrahedron defined by these three vectors. Next, suppose that F is a non-singular 2-tensor and let V denote the volume of the tetrahedron defined by the vectors Fa, Fb and Fc. Note that the second tetrahedron is the image of the first tetrahedron under the transformation F. Derive a formula for V in terms of V0 and F.

Figure 3.2: Tetrahedron of volume V0 defined by three non-coplanar vectors a, b and c, and its image, of volume V, under the linear transformation F.

Solution: Recall from an example in the previous chapter that the volume V0 of the tetrahedron defined by any three non-coplanar vectors a, b, c is

    V0 = (1/6) (a × b) · c.

The volume V of the tetrahedron defined by the three vectors Fa, Fb, Fc is likewise

    V = (1/6) (Fa × Fb) · Fc.

It follows from the result of the previous example that V/V0 = det F, which describes how volumes are mapped by the transformation F.

Example 3.6: Suppose that a and b are two non-collinear vectors in IE3. Let α0 be the area of the parallelogram defined by these two vectors, and let n0 be a unit vector that is normal to the plane of this parallelogram. Next, suppose that F is a non-singular 2-tensor and let α and n denote the area and unit normal of the parallelogram defined by the vectors Fa and Fb. Derive formulas for α and n in terms of α0, n0 and F.

Figure 3.3: Parallelogram of area α0 with unit normal n0 defined by two non-collinear vectors a and b, and its image under the linear transformation F.

Solution: By the properties of the vector-product we know that

    α0 = |a × b|,   n0 = (a × b)/|a × b|,

and similarly that α = |Fa × Fb| and n = (Fa × Fb)/|Fa × Fb|. Therefore

    α0 n0 = a × b   (i)   and   α n = Fa × Fb.   (ii)

But

    (Fa × Fb)s = esij (Fa)i (Fb)j = esij Fip ap Fjq bq.

Also recall the identity epqr det[F] = eijk Fip Fjq Fkr introduced in (1.48). Multiplying both sides of this identity by F−1rs leads to

    epqr det[F] F−1rs = eijk Fip Fjq Fkr F−1rs = eijk Fip Fjq δks = eijs Fip Fjq = esij Fip Fjq.   (iii)

Substituting (iii) into (ii) gives

    (Fa × Fb)s = det[F] epqr F−1rs ap bq = det[F] erpq ap bq F−Tsr = det F (F−T(a × b))s,

and so, using (i),

    α n = α0 det F (F−T n0).   (iv)

This describes how (vectorial) areas are mapped by the transformation F. Taking the norm of this vector equation gives α = |det F| |F−T n0|, and substituting this result into the preceding equation gives

    n = F−T n0 / |F−T n0|.
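Both mapping rules, V = det F · V0 for volumes (Example 3.5) and αn = α0 det F F−T n0 for vectorial areas (Example 3.6), can be verified numerically. The sketch below is our own construction with random data:

import numpy as np
rng = np.random.default_rng(2)
F = rng.normal(size=(3, 3))
a, b, c = rng.normal(size=(3, 3))
V0 = np.dot(np.cross(a, b), c) / 6.0              # volume of tetrahedron (a,b,c)
V  = np.dot(np.cross(F @ a, F @ b), F @ c) / 6.0  # volume of its image
assert np.isclose(V, np.linalg.det(F) * V0)
alpha0_n0 = np.cross(a, b)                        # alpha0 * n0
alpha_n   = np.cross(F @ a, F @ b)                # alpha * n
nanson = np.linalg.det(F) * np.linalg.inv(F).T @ alpha0_n0
assert np.allclose(alpha_n, nanson)               # alpha n = alpha0 det F F^{-T} n0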

Example 3.5: Let {e1, e2, e3} and {e1′, e2′, e3′} be two bases related by nine scalars Qij through ei′ = Qij ej. Let Q be the linear transformation whose components in the basis {e1, e2, e3} are Qij. Show that ei′ = QTei; thus QT is the transformation that carries the first basis into the second.
Solution: Since Qij are the components of the linear transformation Q in the basis {e1, e2, e3}, it follows from the definition of components that Qej = Qij ei. Operating on both sides of the preceding equation with QT and using the orthogonality of Q leads to ej = Qij QTei. Multiplying both sides of this by Qkj, and noting by the orthogonality of Q that Qkj Qij = δki, we are now led to Qkj ej = QTek, or equivalently QTei = Qij ej. This, together with the given fact that ei′ = Qij ej, yields the desired result. Since [Q] is an orthogonal matrix, one readily sees that Q is an orthogonal transformation.

Example 3.6: Determine the relationship between the components vi and vi′ of a vector v in two bases.
Solution: The components vi of v in the basis {e1, e2, e3} are defined by vi = v · ei, and its components vi′ in the second basis {e1′, e2′, e3′} are defined by vi′ = v · ei′. It follows from this and (NNN) that

    vi′ = v · ei′ = v · (Qij ej) = Qij (v · ej) = Qij vj.

Thus the components of the vector v in the two bases are related by vi′ = Qij vj.

Example 3.7: Determine the relationship between the components Aij and Aij′ of a linear transformation A in two bases.
Solution: The components Aij of the linear transformation A in the basis {e1, e2, e3} are defined by Aij = ei · (Aej), and its components Aij′ in a second basis {e1′, e2′, e3′} are defined by Aij′ = ei′ · (Aej′). By first making use of (NNN), we can write

    Aij′ = ei′ · (Aej′) = Qip ep · (A Qjq eq) = Qip Qjq ep · (Aeq) = Qip Qjq Apq.

Thus the components of the linear transformation A in the two bases are related by Aij′ = Qip Qjq Apq.
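The two transformation rules just derived, {v}′ = [Q]{v} and [A]′ = [Q][A][Q]T, can be checked directly against the defining formulas vi′ = v · ei′ and Aij′ = ei′ · (Aej′). A minimal numpy sketch (our own, with a randomly generated orthogonal matrix) follows:

import numpy as np
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal matrix
e_new = Q                                      # rows of Q are the new basis: e_i' = Q_ij e_j
v = rng.normal(size=3)
A = rng.normal(size=(3, 3))
v_new = np.array([e_new[i] @ v for i in range(3)])          # v_i' = v . e_i'
A_new = np.array([[e_new[i] @ A @ e_new[j] for j in range(3)] for i in range(3)])
assert np.allclose(v_new, Q @ v)               # v_i' = Q_ij v_j
assert np.allclose(A_new, Q @ A @ Q.T)         # [A'] = [Q][A][Q]^T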

Example 3.8: Suppose that the basis {e1′, e2′, e3′} is obtained by rotating the basis {e1, e2, e3} through an angle θ about the unit vector e3; see Figure 3.4. Write out the transformation rule for 2-tensors explicitly in this case.

Figure 3.4: A basis {e1′, e2′, e3′} obtained by rotating the basis {e1, e2, e3} through an angle θ about the unit vector e3.

Solution: In view of the given relationship between the two bases, it follows that

    e1′ =  cos θ e1 + sin θ e2,
    e2′ = −sin θ e1 + cos θ e2,
    e3′ =  e3.

The matrix [Q] which relates the two bases is defined by Qij = ei′ · ej, and so it follows that

            (  cos θ   sin θ   0 )
    [Q] =   ( −sin θ   cos θ   0 )
            (   0        0     1 ).

Substituting this [Q] into [A′] = [Q][A][Q]T and multiplying out the matrices leads to the 9 equations

    A11′ = (A11 + A22)/2 + ((A11 − A22)/2) cos 2θ + ((A12 + A21)/2) sin 2θ,
    A22′ = (A11 + A22)/2 − ((A11 − A22)/2) cos 2θ − ((A12 + A21)/2) sin 2θ,
    A12′ =  (A12 − A21)/2 + ((A12 + A21)/2) cos 2θ − ((A11 − A22)/2) sin 2θ,
    A21′ = −(A12 − A21)/2 + ((A12 + A21)/2) cos 2θ − ((A11 − A22)/2) sin 2θ,
    A13′ = A13 cos θ + A23 sin θ,
    A23′ = A23 cos θ − A13 sin θ,
    A31′ = A31 cos θ + A32 sin θ,
    A32′ = A32 cos θ − A31 sin θ,
    A33′ = A33.

In the special case when [A] is symmetric, and in addition A13 = A23 = 0, these nine equations simplify to

    A11′ = (A11 + A22)/2 + ((A11 − A22)/2) cos 2θ + A12 sin 2θ,
    A22′ = (A11 + A22)/2 − ((A11 − A22)/2) cos 2θ − A12 sin 2θ,
    A12′ = A12 cos 2θ − ((A11 − A22)/2) sin 2θ,

together with A13′ = A23′ = 0 and A33′ = A33. These are the well-known equations underlying Mohr's circle for transforming symmetric 2-tensors in two dimensions.
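The Mohr's-circle formulas can be spot-checked by comparing them against a direct evaluation of [A′] = [Q][A][Q]T for a symmetric [A]. A brief numpy sketch of this comparison, with an arbitrary angle of our own choosing:

import numpy as np
th = 0.37                                    # an arbitrary rotation angle
c, s = np.cos(th), np.sin(th)
Q = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
A = np.array([[2.0, 0.5, 0.0], [0.5, -1.0, 0.0], [0.0, 0.0, 3.0]])  # symmetric, A13 = A23 = 0
Ap = Q @ A @ Q.T
m = (A[0, 0] + A[1, 1]) / 2                  # mean of the in-plane diagonal terms
d = (A[0, 0] - A[1, 1]) / 2
assert np.isclose(Ap[0, 0], m + d * np.cos(2 * th) + A[0, 1] * np.sin(2 * th))
assert np.isclose(Ap[1, 1], m - d * np.cos(2 * th) - A[0, 1] * np.sin(2 * th))
assert np.isclose(Ap[0, 1], A[0, 1] * np.cos(2 * th) - d * np.sin(2 * th))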

Example 3.9: a. Let a, b and T be entities whose components in some arbitrary basis are ai, bi and Tijk. The components of T in any basis are defined in terms of the components of a and b in that basis by Tijk = ai bj bk. If a and b are vectors, show that T is a 3-tensor.
b. Suppose that A and B are 2-tensors and that their components in some basis are related by Aij = Cijkl Bkl. Show that the Cijkl are the components of a 4-tensor.
Solution: a. Let ai, bi, Tijk and ai′, bi′, Tijk′ be the components of a, b and T in two arbitrary bases. We are told that the components of the entity T in these two bases are defined by

    Tijk = ai bj bk,   Tijk′ = ai′ bj′ bk′.   (iii)

Since a and b are known to be vectors, their components transform according to the 1-tensor transformation rule

    ai′ = Qij aj,   bi′ = Qij bj.   (iv)

Combining equations (iii) and (iv) gives

    Tijk′ = ai′ bj′ bk′ = Qip ap Qjq bq Qkr br = Qip Qjq Qkr ap bq br = Qip Qjq Qkr Tpqr.   (v)

Therefore the components of T in two bases transform according to Tijk′ = Qip Qjq Qkr Tpqr, and so T is a 3-tensor.
b. Let Aij, Bij, Cijkl and Aij′, Bij′, Cijkl′ be the components of A, B, C in two arbitrary bases:

    Aij = Cijkl Bkl,   Aij′ = Cijkl′ Bkl′.   (vi)

We are told that A and B are 2-tensors, whence

    Aij′ = Qip Qjq Apq,   Bij′ = Qip Qjq Bpq,   (vii)

and we must show that C is a 4-tensor, i.e. that Cijkl′ = Qip Qjq Qkr Qls Cpqrs. Substituting (vii) into (vi)2 gives

    Qip Qjq Apq = Cijkl′ Qkp Qlq Bpq.   (viii)

Multiplying both sides by Qim Qjn and using the orthogonality of [Q], i.e. the fact that Qip Qim = δpm, leads to

    δpm δqn Apq = Cijkl′ Qim Qjn Qkp Qlq Bpq,   (ix)

which by the substitution rule tells us that

    Amn = Cijkl′ Qim Qjn Qkp Qlq Bpq,   (x)

or, on using (vi)1 in this, that

    Cmnpq Bpq = Cijkl′ Qim Qjn Qkp Qlq Bpq.   (xi)

Since this holds for all matrices [B] we must have

    Cmnpq = Cijkl′ Qim Qjn Qkp Qlq.   (xii)

Finally, multiplying both sides by Qam Qbn Qcp Qdq, using the orthogonality of [Q] and the substitution rule yields the desired result

    Qam Qbn Qcp Qdq Cmnpq = Cabcd′.   (xiii)

Example 3.10: Verify that the alternator eijk has the property that eijk = Qip Qjq Qkr epqr for all proper orthogonal matrices [Q], but that more generally eijk ≠ Qip Qjq Qkr epqr for all orthogonal matrices [Q].
Solution: Applying the identity (1.48) to the matrix [QT] shows that

    Qip Qjq Qkr epqr = det[Q] eijk.

For a proper orthogonal matrix det[Q] = +1, and the right-hand side equals eijk. For an orthogonal matrix with det[Q] = −1, however, the right-hand side equals −eijk ≠ eijk. Note from this that the alternator is not an isotropic 3-tensor.
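The relation Qip Qjq Qkr epqr = det[Q] eijk in Example 3.10 can be confirmed mechanically with einsum. The following sketch (our own) tests both a rotation and a reflection:

import numpy as np
e = np.zeros((3, 3, 3))                       # the alternator e_ijk
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0
Q, _ = np.linalg.qr(np.random.default_rng(4).normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[0] = -Q[0]                              # make Q proper orthogonal
mapped = np.einsum('ip,jq,kr,pqr->ijk', Q, Q, Q, e)
assert np.allclose(mapped, e)                 # det Q = +1: components unchanged
R = np.diag([1.0, 1.0, -1.0]) @ Q             # an improper orthogonal matrix
mapped = np.einsum('ip,jq,kr,pqr->ijk', R, R, R, e)
assert np.allclose(mapped, -e)                # det Q = -1: the sign flips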

Example 3.11: If Cijkl is an isotropic 4-tensor, show that necessarily Ciikl = α δkl for some arbitrary scalar α.
Solution: Since Cijkl is an isotropic 4-tensor, by definition

    Cijkl = Qip Qjq Qkr Qls Cpqrs   for all orthogonal matrices [Q].

On setting i = j in this, then using the orthogonality of [Q], and finally using the substitution rule, we are led to

    Ciikl = Qip Qiq Qkr Qls Cpqrs = δpq Qkr Qls Cpqrs = Qkr Qls Cpprs.

Thus Ciikl obeys Ciikl = Qkr Qls Cpprs for all orthogonal matrices [Q], i.e. it is an isotropic 2-tensor. The desired result now follows since the most general isotropic 2-tensor is a scalar multiple of the identity.

Example 3.12: Show that the most general isotropic vector is the null vector o.
Solution: In order to show this we must determine the most general vector u which is such that

    ui = Qij uj   for all orthogonal matrices [Q].   (i)

Since (i) is to hold for all orthogonal matrices [Q], it must necessarily hold for the special choice [Q] = −[I]. Then Qij = −δij, and so (i) reduces to

    ui = −δij uj = −ui,   (ii)

thus ui = 0 and so u = o. Conversely, u = o obviously satisfies (i) for all orthogonal matrices [Q]. Thus u = o is the most general isotropic vector.

Example 3.13: Show that the most general isotropic symmetric 2-tensor is a scalar multiple of the identity.
Solution: We must find the most general symmetric 2-tensor A whose components have the same values in all bases, i.e. such that

    [A] = [Q][A][Q]T   for all orthogonal matrices [Q].   (i)

First, since A is symmetric, we know that there is some basis in which [A] is diagonal. Since A is also isotropic, it follows that [A] must therefore be diagonal in every basis. Thus [A] has the form

    [A] = diag(λ1, λ2, λ3)   (ii)

in any basis, and (i) takes the form

    diag(λ1, λ2, λ3) = [Q] diag(λ1, λ2, λ3) [Q]T   for all orthogonal matrices [Q].   (iii)

In particular, (iii) must hold for the special choice

            ( 0  0  1 )
    [Q] =   ( 1  0  0 ),
            ( 0  1  0 )

in which case (iii) reduces to diag(λ1, λ2, λ3) = diag(λ3, λ1, λ2). Therefore λ1 = λ2 = λ3 = α, say, and [A] necessarily must have the form [A] = α[I]. Conversely, [A] = α[I] is readily shown, by direct substitution, to obey (i) for any orthogonal matrix [Q]. This establishes the result.

Example 3.14: If W is a skew-symmetric tensor, show that there is a vector w such that Wx = w × x for all x ∈ IE.
Solution: Let Wij be the components of W in some basis, and let w be the vector whose components in this basis are defined by

    wi = −(1/2) eijk Wjk.   (i)

Then we merely have to show that w has the desired property stated above. Multiplying both sides of (i) by eipq, then using the identity eijk eipq = δjp δkq − δjq δkp and the substitution rule, gives

    eipq wi = −(1/2)(δjp δkq − δjq δkp) Wjk = −(1/2)(Wpq − Wqp).

Since W is skew-symmetric we have Wij = −Wji, and we thus conclude that Wij = −eijk wk. Now, for any vector x,

    Wij xj = −eijk wk xj = eikj wk xj = (w × x)i.

Thus the vector w defined by (i) has the desired property Wx = w × x. This establishes the result.
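The construction wi = −(1/2) eijk Wjk of Example 3.14 is readily verified numerically. The sketch below (our own naming) builds a random skew-symmetric W and checks Wx = w × x:

import numpy as np
rng = np.random.default_rng(5)
B = rng.normal(size=(3, 3))
W = 0.5 * (B - B.T)                                   # a random skew-symmetric tensor
w = -0.5 * np.array([W[1, 2] - W[2, 1],
                     W[2, 0] - W[0, 2],
                     W[0, 1] - W[1, 0]])              # w_i = -(1/2) e_ijk W_jk
x = rng.normal(size=3)
assert np.allclose(W @ x, np.cross(w, x))             # Wx = w x x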

Example 3.15: Verify that the 4-tensor

    Cijkl = α δij δkl + β δik δjl + γ δil δjk,   (i)

where α, β, γ are scalars, is isotropic. If this isotropic 4-tensor is to possess the symmetry Cijkl = Cjikl, show that one must have β = γ.
Solution: In order to verify that the Cijkl are the components of an isotropic 4-tensor, we have to show that Cijkl = Qip Qjq Qkr Qls Cpqrs for all orthogonal matrices [Q]. The right-hand side of this can be simplified by using the given form of Cijkl, the substitution rule, and the orthogonality of [Q] as follows:

    Qip Qjq Qkr Qls Cpqrs = Qip Qjq Qkr Qls (α δpq δrs + β δpr δqs + γ δps δqr)
                          = α Qiq Qjq Qks Qls + β Qir Qjs Qkr Qls + γ Qis Qjr Qkr Qls
                          = α (Qiq Qjq)(Qks Qls) + β (Qir Qkr)(Qjs Qls) + γ (Qis Qls)(Qjr Qkr)
                          = α δij δkl + β δik δjl + γ δil δjk
                          = Cijkl.   (ii)

This establishes the desired result. Turning to the second question, enforcing the requirement Cijkl = Cjikl on (i) leads, after some simplification, to

    (β − γ)(δik δjl − δjk δil) = 0.   (iii)

Since this must hold for all values of the free indices i, j, k, l, it must necessarily hold for the special choice i = 1, j = 2, k = 1, l = 2. Therefore (β − γ)(δ11 δ22 − δ21 δ12) = 0, and so β = γ.
Remark: We have shown that β = γ is necessary if C given in (i) is to have the symmetry Cijkl = Cjikl. One can readily verify that it is sufficient as well. It is useful for later use to record here that the most general isotropic 4-tensor C with the symmetry property Cijkl = Cjikl is

    Cijkl = α δij δkl + β (δik δjl + δil δjk),   (v)

where α and β are scalars.
Remark: Observe that Cijkl given by (v) automatically has the symmetry Cijkl = Cklij.
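The isotropy of Cijkl = α δij δkl + β δik δjl + γ δil δjk can also be seen numerically. In the sketch below (arbitrary α, β, γ and a random orthogonal [Q], all of our own choosing), the transformed components coincide with the original ones:

import numpy as np
d = np.eye(3)
al, be, ga = 1.3, -0.7, 2.1                       # arbitrary scalars
C = (al * np.einsum('ij,kl->ijkl', d, d)
     + be * np.einsum('ik,jl->ijkl', d, d)
     + ga * np.einsum('il,jk->ijkl', d, d))
Q, _ = np.linalg.qr(np.random.default_rng(6).normal(size=(3, 3)))
Cq = np.einsum('ip,jq,kr,ls,pqrs->ijkl', Q, Q, Q, Q, C)
assert np.allclose(Cq, C)                         # same components in every basis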

Example 3.16: If A is a tensor such that Ax · x = 0 for all vectors x, show that A is necessarily skew-symmetric.
Solution: By the definition of the transpose and the properties of the scalar product,

    Ax · x = x · ATx = ATx · x.

Therefore A has the properties that

    Ax · x = 0   and   ATx · x = 0   for all vectors x.   (i)

Adding these two equations gives Sx · x = 0, where S = A + AT. Observe that S is symmetric. Therefore, in terms of components in a principal basis of S,

    Sx · x = σ1 x1² + σ2 x2² + σ3 x3² = 0,   (ii)

where the σk are the eigenvalues of S. Since this must hold for all real numbers xk, it follows that every eigenvalue must vanish: σ1 = σ2 = σ3 = 0. Therefore S = O, whence A = −AT, and the desired result now follows.
Remark: An important consequence of this is that if A is a tensor with the property that Ax · x = 0 for all x, it does not follow that A = O necessarily.

Example 2.18: For any orthogonal linear transformation Q, show that det Q = ±1.
Solution: Since QQT = I it follows that

    1 = det I = det(QQT) = det Q det QT = (det Q)².

The desired result now follows.

Example 2.20: If Q is a proper orthogonal linear transformation on the vector space IE3, show that there exists a vector v such that Qv = v. This vector is known as the axis of Q.
Solution: To show that there is a vector v such that Qv = v, i.e. that (Q − I)v = o, it is sufficient to show that det(Q − I) = 0, i.e. that Q has an eigenvalue +1. Recall that for any two linear transformations A and B we have det(AB) = det A det B, that det B = det BT, and that det(−A) = (−1)³ det A for a 3-dimensional vector space. Since QQT = I we have Q(QT − I) = I − Q. Taking the determinant of both sides and using the fact that det(AB) = det A det B, we get

    det Q det(QT − I) = det(I − Q).   (i)

Recall that det Q = +1 for a proper orthogonal linear transformation; moreover det(QT − I) = det(Q − I) and det(I − Q) = −det(Q − I). Therefore (i) leads to

    det(Q − I) = −det(Q − I),   (ii)

whence det(Q − I) = 0, and the desired result now follows.

Example 2.22: For any linear transformation A, show that det(A − μI) = det(QTAQ − μI) for all orthogonal linear transformations Q and all scalars μ.
Solution: This follows readily since

    det(QTAQ − μI) = det(QTAQ − μQTQ) = det(QT(A − μI)Q) = det QT det(A − μI) det Q = det(A − μI).

Remark: Observe from this result that the eigenvalues of QTAQ coincide with those of A, so that in particular the same is true of their product and their sum: det(QTAQ) = det A and tr(QTAQ) = tr A.

Example 2.26: Define a scalar-valued function φ(A; e1, e2, e3) for all linear transformations A and all (not necessarily orthonormal) bases {e1, e2, e3} by

    φ(A; e1, e2, e3) = [Ae1 · (e2 × e3) + e1 · (Ae2 × e3) + e1 · (e2 × Ae3)] / [e1 · (e2 × e3)].

Show that φ(A; e1, e2, e3) = φ(A; e1′, e2′, e3′) for any two bases {e1, e2, e3} and {e1′, e2′, e3′}, i.e. show that φ(A; e1, e2, e3) is in fact independent of the choice of basis. Thus we can simply write φ(A) instead of φ(A; e1, e2, e3); φ(A) is called a scalar invariant of A. Pick any orthonormal basis and express φ(A) in terms of the components of A in that basis, and hence show that φ(A) = trace A.
Solution (sketch of the last part): In an orthonormal basis, e1 · (e2 × e3) = 1, Ae1 · (e2 × e3) = A11, e1 · (Ae2 × e3) = A22 and e1 · (e2 × Ae3) = A33, so that φ(A) = A11 + A22 + A33 = trace A.

Example 3.7: Let F(t) be a one-parameter family of non-singular 2-tensors that depends smoothly on the parameter t. Calculate (d/dt) det F(t).
Solution: From the result of Example 3.NNN we have

    (F(t)a × F(t)b) · F(t)c = det F(t) (a × b) · c.

Differentiating this with respect to t, where we have set Ḟ(t) = dF/dt, gives

    (Ḟa × Fb) · Fc + (Fa × Ḟb) · Fc + (Fa × Fb) · Ḟc = (d/dt det F) (a × b) · c.

We can write this as

    (ḞF−1Fa × Fb) · Fc + (Fa × ḞF−1Fb) · Fc + (Fa × Fb) · ḞF−1Fc = (d/dt det F) (a × b) · c.

In view of the result of Example 3.NNN, the left-hand side equals trace(ḞF−1) (Fa × Fb) · Fc, and now using the result of Example 3.NNN once more we get

    trace(ḞF−1) det F (a × b) · c = (d/dt det F) (a × b) · c,

or

    (d/dt) det F = trace(ḞF−1) det F.
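The formula (d/dt) det F = trace(ḞF−1) det F invites a quick finite-difference test; the following minimal sketch (random F and Ḟ, illustrative step size, all our own choices) does so:

import numpy as np
rng = np.random.default_rng(7)
F0, Fdot = rng.normal(size=(2, 3, 3))             # F(0) and a constant Fdot
F = lambda t: F0 + t * Fdot                       # a smooth one-parameter family
h = 1e-6
numeric = (np.linalg.det(F(h)) - np.linalg.det(F(-h))) / (2 * h)
exact = np.trace(Fdot @ np.linalg.inv(F0)) * np.linalg.det(F0)
assert np.isclose(numeric, exact, rtol=1e-4)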

Example 2.30: For any integer N > 0, show that the polynomial

    PN(A) = c0 I + c1 A + c2 A² + ... + ck A^k + ... + cN A^N

can be written as a quadratic polynomial in A.
Solution: This follows readily from the Cayley-Hamilton theorem (3.41). That theorem shows that A³ can be written as a linear combination of I, A and A². Next, multiplying this by A tells us that A⁴ can be written as a linear combination of A, A² and A³, and therefore, on using the result of the previous step, as a linear combination of I, A and A². This process can be continued an arbitrary number of times to see that, for any integer k, A^k can be expressed as a linear combination of I, A and A². The result thus follows.

Example 2.31: For any linear transformation A, show that

    det(A − αI) = −α³ + I1(A) α² − I2(A) α + I3(A)

for all real numbers α, where I1(A), I2(A) and I3(A) are the principal scalar invariants of A:

    I1(A) = trace A,   I2(A) = 1/2 [(trace A)² − trace(A²)],   I3(A) = det A.

Example 2.32: Calculate the principal scalar invariants I1, I2 and I3 of the linear transformation a ⊗ b.
Solution: Since trace(a ⊗ b) = a · b and (a ⊗ b)² = (a · b)(a ⊗ b), one finds

    I1(a ⊗ b) = a · b,   I2(a ⊗ b) = 1/2 [(a · b)² − (a · b)²] = 0,   I3(a ⊗ b) = det(a ⊗ b) = 0.

Chapter 4

Characterizing Symmetry: Groups of Linear Transformations.

Linear transformations are mappings of vector spaces into vector spaces. When an object is mapped using a linear transformation, certain transformations preserve its symmetry while others don't. One way in which to characterize the symmetry of an object is to consider the collection of all linear transformations that preserve its symmetry, principally rotations and reflections. Intuitively, a "uniform all-around expansion", i.e. a linear transformation of the form αI that rescales the object by changing its size but not its shape, does not affect symmetry either; we are interested in all linear transformations that preserve symmetry. The set of such transformations depends on the object: for example, the set of linear transformations that preserves the symmetry of a cube is different from the set of linear transformations that preserves the symmetry of a tetrahedron. In this Chapter we shall consider those linear transformations that map the object back into itself. The collection of such transformations has certain important and useful properties, and in this chapter we touch briefly on the question of characterizing symmetry by linear transformations.

4.1 An example in two-dimensions.

We begin with an illustrative example. Consider a square, ABCD, whose center is at the origin o and whose sides are parallel to the orthonormal vectors {i, j}; the square lies in a plane normal to the unit vector k. See Figure 4.1. Consider mappings that carry the square into itself. The vertex A can be placed in one of 4 positions. Once the location of A has been determined, the vertex B can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). And once the locations of A and B have been fixed, there is no further flexibility and the locations of the remaining vertices are fixed. Thus there are a total of 4 × 2 = 8 symmetry preserving transformations of the square, 4 of which are rotations and 4 of which are reflections.

Figure 4.1: Mapping a square into itself.

Consider the 4 rotations. In order to determine them, we (a) identify the axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case there is just 1 axis to consider, viz. k, and we note that 0°, 90°, 180° and 270° rotations about this axis map the square back into itself. Thus the following 4 distinct rotations are symmetry transformations: I, R_k^{π/2}, R_k^{π}, R_k^{3π/2}, where we are using the notation introduced previously: R_n^{φ} is a right-handed rotation through an angle φ about the axis n. Let Gsquare denote the set consisting of these 4 symmetry preserving rotations:

    Gsquare = { I, R_k^{π/2}, R_k^{π}, R_k^{3π/2} }.

This collection of linear transformations has two important properties. First, if P1 and P2 are in Gsquare, then so is their product P1P2, i.e. the successive application of any two symmetries yields a third symmetry; for example, R_k^{π/2} R_k^{3π/2} = I, R_k^{π} R_k^{π/2} = R_k^{3π/2}, etc. Second, observe that if P is any member of Gsquare, then so is its inverse P−1; for example, (R_k^{π})−1 = R_k^{π}, (R_k^{π/2})−1 = R_k^{3π/2}, etc. As we shall see in Section 4.4, these two properties endow the set Gsquare with a certain special structure.

Next consider the rotation R_k^{π/2}, and observe that every element of the set Gsquare can be represented in the form (R_k^{π/2})^n for the integer choices n = 0, 1, 2, 3. Therefore we can say that the set Gsquare is "generated" by the element R_k^{π/2}.

Finally observe that G′square = {I, R_k^{π}} is a subset of Gsquare, and that it too has the properties that if P1, P2 ∈ G′square then their product P1P2 is also in G′square, and if P ∈ G′square so is its inverse P−1. We shall generalize all of this in Section 4.4.

4.2 An example in three-dimensions.

Before considering some general theory, it is useful to consider the three-dimensional version of the previous problem.

Figure 4.2: Mapping a cube into itself.

Consider a cube whose center is at the origin o and whose

edges are parallel to the orthonormal vectors {i, j, k}, and consider mappings that carry the cube into itself. Consider a vertex A and its three adjacent vertices B, C, D. The vertex A can be placed in one of 8 positions. Once the location of A has been determined, the vertex B can be placed in one of 3 positions. And once the locations of A and B have been fixed, the vertex C can be placed in one of 2 positions (allowing for reflections, or in just one position if only rotations are permitted). Once the vertices A, B and C have been placed, the locations of the remaining vertices are fixed. Thus there are a total of 8 × 3 × 2 = 48 symmetry preserving transformations of the cube, 24 of which are rotations and 24 of which are reflections.

First, consider the 24 rotations. In order to determine these rotations we again (a) identify all axes of rotational symmetry and then (b) determine the number of distinct rotations about each such axis. In the present case we see that, in addition to the identity transformation I itself, we have the following rotational transformations that preserve symmetry:

1. There are 3 axes that join the center of one face of the cube to the center of the opposite face of the cube, which we can take to be i, j, k (in materials science these are called the {100} directions), and 90°, 180° and 270° rotations about each of these axes map the cube back into the cube. Thus the following 3 × 3 = 9 distinct rotations are symmetry transformations: R^{π/2}, R^{π}, R^{3π/2} about each of i, j and k.

2. There are 4 axes that join one vertex of the cube to the diagonally opposite vertex of the cube, which we can take to be i+j+k, i−j+k, i+j−k, i−j−k (in materials science these are called the {111} directions), and 120° and 240° rotations about each of these axes map the cube back into the cube. Thus the following 4 × 2 = 8 distinct rotations are symmetry transformations: R^{2π/3} and R^{4π/3} about each of these four axes.

3. Finally, there are 6 axes that join the center of one edge of the cube to the center of the diagonally opposite edge of the cube, which we can take to be i+j, i−j, i+k, i−k, j+k, j−k (in materials science these are called the {110} directions), and

180° rotations about each of these axes map the cube back into the cube. Thus the following 6 × 1 = 6 distinct rotations are symmetry transformations: R^{π} about each of i+j, i−j, i+k, i−k, j+k, j−k.

Let Gcube denote the collection of these 24 symmetry preserving rotations:

    Gcube = { I,
              R_i^{π/2}, R_i^{π}, R_i^{3π/2}, R_j^{π/2}, R_j^{π}, R_j^{3π/2}, R_k^{π/2}, R_k^{π}, R_k^{3π/2},
              R_{i+j+k}^{2π/3}, R_{i+j+k}^{4π/3}, R_{i−j+k}^{2π/3}, R_{i−j+k}^{4π/3},
              R_{i+j−k}^{2π/3}, R_{i+j−k}^{4π/3}, R_{i−j−k}^{2π/3}, R_{i−j−k}^{4π/3},
              R_{i+j}^{π}, R_{i−j}^{π}, R_{i+k}^{π}, R_{i−k}^{π}, R_{j+k}^{π}, R_{j−k}^{π} }.   (4.1)

If one considers rotations and reflections, then there are 48 elements in this set, where the 24 reflections are obtained by multiplying each rotation by −I. (It is important to remark that this just happens to be true for the cube. In general, if R is a rotational symmetry of an object then −R is, of course, a reflection, but it need not describe a reflectional symmetry of the object; see the example of the tetrahedron discussed later.)

The collection of linear transformations Gcube has two important properties that one can verify: (i) if P1 and P2 ∈ Gcube, then their product P1P2 is also in Gcube, and (ii) if P ∈ Gcube, then so is its inverse P−1. For example, the rotation R_{i+j+k}^{2π/3} (about a {111} axis) and the rotation R_{i+k}^{π} (about a {110} axis) can be represented as

    R_{i+j+k}^{2π/3} = R_k^{π/2} R_i^{π/2},   R_{i+k}^{π} = R_j^{π/2} (R_k^{π/2})².

(One way in which to verify this is to use the representation of a rotation tensor determined in Example 2.18.) Next, one can verify that every element of the set Gcube can be represented in the form

    (R_i^{π/2})^p (R_j^{π/2})^q (R_k^{π/2})^r

for integer choices of p, q, r. Therefore we can say that the set Gcube is "generated" by the three elements R_i^{π/2}, R_j^{π/2} and R_k^{π/2}.
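The set Gcube can in fact be enumerated on a computer: starting from the three 90° rotations about i, j and k and multiplying repeatedly, one obtains exactly 24 distinct rotation matrices (the signed permutation matrices with determinant +1). A brief sketch of this enumeration (our own code, not part of the notes):

import numpy as np
Ri = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # 90-degree rotations about i, j, k
Rj = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])
Rk = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
group = {tuple(np.eye(3, dtype=int).ravel())}
frontier = [np.eye(3, dtype=int)]
while frontier:                                     # close the set under multiplication
    P = frontier.pop()
    for G in (Ri, Rj, Rk):
        Q = G @ P
        if tuple(Q.ravel()) not in group:
            group.add(tuple(Q.ravel()))
            frontier.append(Q)
print(len(group))                                   # -> 24

That the loop terminates at 24 elements illustrates both the closure property (i) and the claim that the three face rotations generate Gcube.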

4.3 Lattices.

A geometric structure of particular interest in solid mechanics is a lattice, and we now make a few observations on the symmetry of lattices. The simplest lattice, a Bravais lattice L{o; ℓ1, ℓ2, ℓ3}, is an infinite set of periodically arranged points in space generated by the translation of a single point o through three linearly independent lattice vectors {ℓ1, ℓ2, ℓ3}:

    L{o; ℓ1, ℓ2, ℓ3} = { x | x = o + n1 ℓ1 + n2 ℓ2 + n3 ℓ3,  ni ∈ Z },   (4.2)

where Z is the set of integers. Figure 4.3 shows a two-dimensional square lattice and one possible set of lattice vectors ℓ1, ℓ2. (It is clear from the figure that different sets of lattice vectors can correspond to the same lattice.)

Figure 4.3: A two-dimensional square lattice with lattice vectors ℓ1, ℓ2.

Given a lattice, let Glattice be the set of all linear transformations P that map the lattice back into itself. One can show that if P1, P2 ∈ Glattice then their product P1P2 is also in Glattice, and if P ∈ Glattice so is its inverse P−1. The set Glattice is called the symmetry group of the lattice. It can be shown that a linear transformation P maps a lattice back into itself if and only if

    P ℓi = Mij ℓj  (sum over j = 1, 2, 3)   (4.3)

for some 3 × 3 matrix [M] whose elements Mij are integers and where det[M] = 1.

The set of rotations in Glattice is known as the point group of the lattice. For example, the point group of a simple cubic lattice is the set Gcube of 24 rotations given in (4.1). There are seven different types of symmetry that arise in Bravais lattices: triclinic, monoclinic, orthorhombic, tetragonal, cubic, trigonal and hexagonal. Because, for example, a cubic lattice can be body-centered or face-centered, the number of different types of lattices is greater than seven.

4.4 Groups of Linear Transformations.

A collection G of non-singular linear transformations is said to be a group of linear transformations if it possesses the following two properties:

(i)  if P1 ∈ G and P2 ∈ G then P1P2 ∈ G, and
(ii) if P ∈ G then P−1 ∈ G.

Note from this that the identity transformation I is necessarily a member of every group G. Clearly the three sets Gsquare, Gcube and Glattice encountered in the previous sections are groups.

The generators of a group G are those elements P1, P2, ..., Pn which, when they and their inverses are multiplied among themselves in various combinations, yield all the elements of the group. Generators of the groups Gsquare and Gcube were given previously.

One can show that each of the following sets of linear transformations forms a group:
- the set of all orthogonal linear transformations,
- the set of all proper orthogonal linear transformations,
- the set of all unimodular linear transformations (i.e. linear transformations with determinant equal to ±1), and
- the set of all proper unimodular linear transformations (i.e. linear transformations with determinant equal to +1).

(While the determinant of an orthogonal tensor is ±1, the converse is not necessarily true: there are unimodular tensors, e.g. P = I + α i ⊗ j, that are not orthogonal. Thus the unimodular group is not equivalent to the orthogonal group.)

One can readily show that the group of proper orthogonal linear transformations is a subgroup of the group of orthogonal linear transformations, which in turn is a subgroup of the group of unimodular linear transformations. In general, a collection of linear transformations G′ is said to be a subgroup of a group G if (i) G′ ⊂ G and (ii) G′ is itself a group. For example, G′square is a subgroup of Gsquare.

It should be mentioned that the general theory of groups deals with collections of elements (together with certain "rules" including "multiplication") where the elements need not be linear transformations. For example, the set of all integers Z, with "multiplication" defined as the addition of numbers, the identity taken to be zero, and the inverse of x taken to be −x, is a group. Similarly, the set of all matrices of the form

    ( cosh x   sinh x )
    ( sinh x   cosh x ),     −∞ < x < ∞,

with "multiplication" defined as matrix multiplication, the identity being the identity matrix, and the inverse of the above matrix being

    ( cosh(−x)   sinh(−x) )
    ( sinh(−x)   cosh(−x) ),

can be shown to be a group. However, our discussion in these notes is limited to groups of linear transformations.

4.5 Symmetry of a scalar-valued function of symmetric positive-definite tensors.

When we discuss the constitutive behavior of a material in Volume 2, we will encounter a scalar-valued function ψ(C) defined for all symmetric positive-definite tensors C. (This represents the energy in the material and characterizes its mechanical response.) The symmetry of the material will be characterized by a set G of non-singular tensors P which has the property that, for each P ∈ G,

    ψ(C) = ψ(PTCP)   for all symmetric positive-definite C.   (4.4)

It can be readily shown that the set G of non-singular tensors obeying (4.4) is a group; we shall refer to it as the symmetry group of ψ. To see this, first let P1, P2 ∈ G, so that

    ψ(C) = ψ(P1TCP1),   ψ(C) = ψ(P2TCP2)   (4.5)

for all symmetric positive-definite C. Then

    ψ((P1P2)TC(P1P2)) = ψ(P2T(P1TCP1)P2) = ψ(P1TCP1) = ψ(C),   (4.6)

where we have used (4.5)2 and (4.5)1 in the penultimate and ultimate steps respectively. Thus if P1 and P2 are in G, then so is P1P2. Next, suppose that P ∈ G. Since P is non-singular, the equation S = PTCP provides a one-to-one relation between symmetric positive-definite tensors C and S. Since (4.4) holds for all symmetric positive-definite C, it also holds for all symmetric positive-definite S = PTCP; substituting this into (4.4) gives ψ(S) = ψ(P−TSP−1) for all symmetric positive-definite S, and so P−1 is also in G. Observe also from (4.4) that the symmetry group of ψ contains the elements I and −I; indeed, if P ∈ G then −P ∈ G also.

To examine an explicit example, consider the function ψ(C) = ψ̄(det C). It is seen trivially that for this ψ, equation (4.4) holds if and only if det P = ±1. Thus the symmetry group of this ψ consists of all unimodular tensors (i.e. tensors with determinant equal to ±1).

As a second example consider the function

    ψ(C) = ψ̄(Cn · n),   (4.7)

where n is a given fixed unit vector. Let Qn be a rotation about the axis n through an arbitrary angle. Then, since n is the axis of Qn, we know that Qn n = n. Therefore

    ψ(QnTCQn) = ψ̄(QnTCQn n · n) = ψ̄(CQn n · Qn n) = ψ̄(Cn · n) = ψ(C).   (4.8)

The symmetry group of the function (4.7) therefore contains the set of all rotations about n. (Are there any other tensors in G?)

The following result will be useful in Volume 2. Let H be some fixed non-singular linear transformation, and consider two functions ψ1(C) and ψ2(C), each defined for all symmetric positive-definite tensors C. Suppose that ψ1 and ψ2 are related by

    ψ2(C) = ψ1(HTCH)   for all symmetric positive-definite tensors C.   (4.9)

If G1 and G2 are the symmetry groups of ψ1 and ψ2 respectively, then it can be shown that

    G2 = H G1 H−1   (4.10)

in the sense that a tensor P ∈ G1 if and only if the tensor HPH−1 ∈ G2. As a special case of this, if H is a spherical tensor, i.e. if H = αI, then G1 = G2.

Next, note that any non-singular tensor P can be written as the product of a spherical tensor αI and a unimodular tensor T as P = (αI)T, provided that we take α = (|det P|)^{1/3}, since then det T = ±1. This, together with the special case of the result noted in the preceding paragraph, provides a hint of why we might want to limit attention to unimodular tensors rather than consider all non-singular tensors in our discussion of symmetry. It motivates the following slight modification to our original notion of symmetry of a function ψ(C). We characterize the symmetry of ψ by the set G of unimodular tensors P which have the property that, for each P ∈ G,

    ψ(C) = ψ(PTCP)   for all symmetric positive-definite C.   (4.11)

It can be readily shown that this set of tensors G is also a group, necessarily a subgroup of the unimodular group.

A function ψ is said to be isotropic if its symmetry group G contains all orthogonal tensors, i.e. if

    ψ(C) = ψ(PTCP)   for all symmetric positive-definite C and all orthogonal P.   (4.12)

From a theorem in algebra it follows that an isotropic function ψ depends on C only through its principal scalar invariants defined previously in (3.38). Thus, for an isotropic function ψ, there exists a function ψ̄ such that

    ψ(C) = ψ̄(I1(C), I2(C), I3(C)),   (4.13)

where

    I1(C) = trace C,   I2(C) = 1/2 [(trace C)² − trace(C²)],   I3(C) = det C.   (4.14)

If G contains I, −I, and all rotations R_n^{φ}, 0 < φ < 2π, through all angles φ about a fixed axis n, the corresponding symmetry is called transverse isotropy. If G includes the three elements −R_i^{π}, −R_j^{π}, −R_k^{π}, which represent reflections in the planes normal to i, j and k, the symmetry is called orthotropy.

As a second example consider "cubic symmetry", where the symmetry group G coincides with the set of 24 rotations Gcube given in (4.1) plus the corresponding reflections obtained by multiplying these rotations by −I. As noted previously, this group is generated by R_i^{π/2}, R_j^{π/2}, R_k^{π/2} and −I, and contains 24 rotations and 24 reflections. Then, according to a theorem in algebra (see pg. 312 of Truesdell and Noll),

    ψ(C) = ψ̄(i1(C), i2(C), ..., i9(C))   (4.15)

where

    i1(C) = C11 + C22 + C33,
    i2(C) = C22 C33 + C33 C11 + C11 C22,
    i3(C) = C11 C22 C33,
    i4(C) = C23² + C31² + C12²,
    i5(C) = C31² C12² + C12² C23² + C23² C31²,
    i6(C) = C23 C31 C12,
    i7(C) = C22 C12² + C33 C31² + C33 C23² + C11 C12² + C11 C31² + C22 C23²,
    i8(C) = C11 C31² C12² + C22 C12² C23² + C33 C23² C31²,
    i9(C) = C23² C22 C33 + C31² C33 C11 + C12² C11 C22.   (4.16)

4.6 Worked Examples.

Example 4.1: Characterize the set Hsquare of linear transformations that map a square back into a square, including both rotations and reflections.

Figure 4.4: Mapping a square into itself.

Solution: We return to the problem described in Section 4.1 and now consider the set Hsquare of rotations and reflections that map the square back into itself. The set of rotations that do this was determined earlier:

    Gsquare = { I, R_k^{π/2}, R_k^{π}, R_k^{3π/2} }.

As the 4 reflectional symmetries we can pick

    H  = reflection in the horizontal axis i,
    V  = reflection in the vertical axis j,
    D  = reflection in the diagonal with positive slope i + j,
    D′ = reflection in the diagonal with negative slope −i + j,

and so

    Hsquare = { I, R_k^{π/2}, R_k^{π}, R_k^{3π/2}, H, V, D, D′ }.   (i)

One can verify that Hsquare is a group, since it possesses the property that if P1 and P2 are two transformations in Hsquare, then so is their product P1P2 (e.g. D = H R_k^{3π/2}? One can check the products directly, e.g. R_k^{π/2} H = D), and if P is any member of Hsquare, then so is its inverse, e.g. H−1 = H, (R_k^{π/2})−1 = R_k^{3π/2}, etc.

Example 4.2: Find the generators of Hsquare and all subgroups of Hsquare.
Solution: All elements of Hsquare can be represented in the form (R_k^{π/2})^i H^j for the integer choices i = 0, 1, 2, 3 and j = 0, 1:

    I = (R_k^{π/2})⁴,   R_k^{π} = (R_k^{π/2})²,   R_k^{3π/2} = (R_k^{π/2})³,
    V = (R_k^{π/2})² H,   D = (R_k^{π/2}) H,   D′ = (R_k^{π/2})³ H.

Therefore the group Hsquare is generated by the two elements H and R_k^{π/2}; a sketch of this generation by computer is given after this paragraph. One can verify that the following 8 collections of linear transformations are subgroups of Hsquare:

    {I, H},  {I, V},  {I, D},  {I, D′},  {I, R_k^{π}},
    {I, R_k^{π/2}, R_k^{π}, R_k^{3π/2}},  {I, R_k^{π}, H, V},  {I, R_k^{π}, D, D′}.
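The claim that H and R_k^{π/2} generate Hsquare can be confirmed with 2 × 2 matrices: products of powers of the two generators yield exactly 8 distinct elements. A small sketch (our own naming, not from the notes):

import numpy as np
R90 = np.array([[0, -1], [1, 0]])       # rotation through 90 degrees about k
H = np.array([[1, 0], [0, -1]])         # reflection in the horizontal axis i
elements = {tuple((np.linalg.matrix_power(R90, i) @ np.linalg.matrix_power(H, j)).ravel())
            for i in range(4) for j in range(2)}
print(len(elements))                    # -> 8: the dihedral group of the square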

Apart from the trivial subgroups {I} and Hsquare itself, there are no other subgroups of Hsquare. Geometrically, each of these subgroups leaves some aspect of the square invariant: one leaves an axis invariant, another a diagonal invariant, another an axis and a diagonal invariant, and so on.

Example 4.3: Characterize the rotational symmetry of a regular tetrahedron.

Figure 4.5: A regular tetrahedron ABCD, three orthonormal vectors {i, j, k} and a unit vector p. The axis k passes through the vertex A and the centroid of the opposite face BCD, while the unit vector p passes through the center of the edge AD and the center of the opposite edge BC.

Solution:
1. There are 4 axes like k in the figure that pass through a vertex of the tetrahedron and the centroid of the opposite face, and right-handed rotations of 120° and 240° about each of these axes map the tetrahedron back onto itself. Thus these 4 × 2 = 8 distinct rotations, of the form R_k^{2π/3}, R_k^{4π/3}, are symmetry transformations of the tetrahedron.
2. There are three axes like p shown in the figure that pass through the mid-points of a pair of opposite edges, and a right-handed rotation through 180° about each of these axes maps the tetrahedron back onto itself. Thus these 3 × 1 = 3 distinct rotations, of the form R_p^{π}, are symmetry transformations of the tetrahedron.
The group Gtetrahedron of rotational symmetries of a tetrahedron therefore consists of these 11 rotations plus the identity transformation I.

Example 4.4: Are all symmetry preserving linear transformations necessarily either rotations or reflections?
Solution: We began this chapter by considering the symmetry of a square, and examining the different ways in which the square could be mapped back into itself. Now consider the example of a two-dimensional a × a

square lattice, i.e. the infinite set of points

    Lsquare = { x | x = n1 a i + n2 a j,  n1, n2 ∈ Z ≡ integers }   (i)

depicted in Figure 4.6, and examine the different ways in which this lattice can be mapped back into itself.

Figure 4.6: A two-dimensional a × a square lattice.

We first note that the rotational and reflectional symmetry transformations of an a × a square are also symmetry transformations for the lattice, since they leave the lattice invariant. There are, however, other transformations, which are neither rotations nor reflections, that also leave the lattice invariant. For example, the "shearing" of the lattice described by the linear transformation

    P = I + i ⊗ j   (ii)

is also a symmetry preserving transformation: if, for every integer n, one rigidly translates the n-th row of the lattice by precisely the amount n a in the i direction, one recovers the original lattice.

Example 4.5: Show that each of the following sets of linear transformations forms a group: all orthogonal tensors; all proper orthogonal tensors; all unimodular tensors (i.e. tensors with determinant equal to ±1); and all proper unimodular tensors (i.e. tensors with determinant equal to +1).

Example 4.6: Show that the group of proper orthogonal tensors is a subgroup of the group of orthogonal tensors, which in turn is a subgroup of the group of unimodular tensors.

Example 4.7: Suppose that a function ψ(C) is defined for all symmetric positive-definite tensors C and that its symmetry group is the set of all orthogonal tensors. Show that ψ depends on C only through its principal scalar invariants, i.e. show that there is a function ψ̄ such that

    ψ(C) = ψ̄(I1(C), I2(C), I3(C)).

Solution: We are given that ψ has the property that, for all symmetric positive-definite tensors C and all orthogonal tensors Q,

    ψ(C) = ψ(QTCQ).   (i)

In order to prove the desired result it is sufficient to show that, if C1 and C2 are two symmetric positive-definite tensors whose principal invariants are the same,

    I1(C1) = I1(C2),   I2(C1) = I2(C2),   I3(C1) = I3(C2),   (ii)

then ψ(C1) = ψ(C2). Here Ii(C), i = 1, 2, 3, are the principal scalar invariants of C defined previously in (3.38). Recall that the mapping (3.40) between principal invariants and eigenvalues is one-to-one. It follows from this and (ii) that the eigenvalues of C1 and C2 are the same, say λ1, λ2, λ3. Thus we can write

    C1 = Σ λi ei(1) ⊗ ei(1),   C2 = Σ λi ei(2) ⊗ ei(2),   (iii)

where the two sets of orthonormal vectors {e1(1), e2(1), e3(1)} and {e1(2), e2(2), e3(2)} are the respective principal bases of C1 and C2. Since each set of basis vectors is orthonormal, there is an orthogonal tensor R that carries {e1(1), e2(1), e3(1)} into {e1(2), e2(2), e3(2)}:

    R ei(1) = ei(2),   i = 1, 2, 3.   (iv)

Thus

    RT ( Σ λi ei(2) ⊗ ei(2) ) R = Σ λi RT(ei(2) ⊗ ei(2))R = Σ λi (RTei(2)) ⊗ (RTei(2)) = Σ λi ei(1) ⊗ ei(1),   (v)

and so RTC2R = C1. Therefore ψ(C1) = ψ(RTC2R) = ψ(C2), where in the last step we have used (i). This establishes the desired result.
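Example 4.7's conclusion is easy to illustrate numerically: the principal invariants of QTCQ coincide with those of C, so any ψ built from them automatically satisfies ψ(C) = ψ(QTCQ). A short sketch (random symmetric positive-definite C and random orthogonal Q, our own construction):

import numpy as np
rng = np.random.default_rng(8)
B = rng.normal(size=(3, 3))
C = B @ B.T + 3 * np.eye(3)                     # symmetric positive-definite
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # orthogonal
Cq = Q.T @ C @ Q
invariants = lambda C: (np.trace(C),
                        0.5 * (np.trace(C) ** 2 - np.trace(C @ C)),
                        np.linalg.det(C))
assert np.allclose(invariants(C), invariants(Cq))   # I1, I2, I3 are unchanged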

Example 4.8: Consider a scalar-valued function f(x) that is defined for all vectors x. Let G be the set of all non-singular linear transformations P that have the property that, for each P ∈ G,

    f(x) = f(Px)   for all vectors x.   (i)

i) Show that G is a group. ii) Find the most general form of f if G contains the set of all orthogonal transformations.
Solution: i) Suppose that P1 and P2 are in G, i.e. that

    f(x) = f(P1x),   f(x) = f(P2x)   for all vectors x.

Then

    f((P1P2)x) = f(P1(P2x)) = f(P2x) = f(x),

where in the penultimate and ultimate steps we have used (i)1 and (i)2 respectively. Next, suppose that P ∈ G, so that f(x) = f(Px) for all vectors x. Since P is non-singular we can set y = Px and obtain f(P−1y) = f(y) for all vectors y. It thus follows that G has the two defining properties of a group.
ii) If x1 and x2 are two vectors that have the same length, we will show that f(x1) = f(x2). Since x1 and x2 have the same length, there is a rotation tensor R that carries x2 to x1: Rx2 = x1. Therefore

    f(x1) = f(Rx2) = f(x2),

where in the last step we have used the fact that G contains the set of all orthogonal transformations. This establishes the result claimed above, whence f(x) depends on x only through its length |x|, i.e. there exists a function f̄ such that

    f(x) = f̄(|x|)   for all vectors x.

Example 4.9: Consider a scalar-valued function g(C, m ⊗ m) that is defined for all symmetric positive-definite tensors C and all unit vectors m. Suppose that, for some particular unit vector n, one has

    g(C, n ⊗ n) = g(QTCQ, QT(n ⊗ n)Q)   (i)

for all orthogonal Q and all symmetric positive-definite C. Show that there exists a function ĝ such that

    g(C, n ⊗ n) = ĝ(I1(C), I2(C), I3(C), I4(C, n), I5(C, n)),   (ii)

where I1(C), I2(C), I3(C) are the three fundamental scalar invariants of C and

    I4(C, n) = Cn · n,   I5(C, n) = C²n · n.

Remark: Observe that with respect to an orthonormal basis {e1, e2, e3} where e3 = n, one has I4 = C33 and I5 = C31² + C32² + C33².
Solution: As in Example 4.7, it is sufficient to show that if C1 and C2 are two symmetric positive-definite linear transformations whose "invariants" Ii, i = 1, 2, ..., 5, are the same, i.e.

    I1(C1) = I1(C2),   I2(C1) = I2(C2),   I3(C1) = I3(C2),
    I4(C1, n) = I4(C2, n),   I5(C1, n) = I5(C2, n),   (iii)

then g(C1, n ⊗ n) = g(C2, n ⊗ n). From (iii)1,2,3 and the analysis in Example 4.7, it follows that there is an orthogonal tensor R such that RTC2R = C1. It now follows from this, (iii)4,5 and the definitions of I4 and I5 that

    Rn · Rn = n · n,   C2Rn · Rn = C2n · n,   C2²Rn · Rn = C2²n · n.   (iv)

These imply that Rn = ±n, and consequently RTn = ±n, as may be seen, for example, by expressing (iv) in a principal basis of C2. It is readily seen from this that RT(n ⊗ n)R = (RTn) ⊗ (RTn) = n ⊗ n. Consequently

    g(C1, n ⊗ n) = g(RTC2R, RT(n ⊗ n)R) = g(C2, n ⊗ n),

where we have used (i) in the very last step. This establishes the desired result.

References

1. G. Birkhoff and S. MacLane, A Survey of Modern Algebra, MacMillan, 1977.
2. M.A. Armstrong, Groups and Symmetry, Springer-Verlag, 1988.
3. A.J.M. Spencer, Theory of invariants, in Continuum Physics, Volume I, edited by A.C. Eringen, Academic Press, 1971.
4. C. Truesdell and W. Noll, The nonlinear field theories of mechanics, in Handbuch der Physik, Volume III/3, edited by S. Flugge, Springer-Verlag, 1965.


Chapter 5

Calculus of Vector and Tensor Fields

Notation:
α : scalar
{a} : 3 × 1 column matrix
a : vector
ai : i-th component of the vector a in some basis, or i-th element of the column matrix {a}
[A] : 3 × 3 square matrix
A : second-order tensor (2-tensor)
Aij : i, j component of the 2-tensor A in some basis, or i, j element of the square matrix [A]
C : fourth-order tensor (4-tensor)
Cijkl : i, j, k, l component of the 4-tensor C in some basis
Ti1 i2 ... in : i1 i2 ... in component of the n-tensor T in some basis

5.1 Notation and definitions.

Let R be a bounded region of three-dimensional space whose boundary is denoted by ∂R, and let x denote the position vector of a generic point in R + ∂R. We shall consider scalar and tensor fields such as φ(x), v(x), A(x) and T(x) defined on R + ∂R. The region R + ∂R and these fields will always be assumed to be sufficiently regular so as to permit the calculations carried out below. While the subject of the calculus of tensor fields can be dealt with directly, we shall take the more limited approach of working with the components of these fields; the components will always be taken with respect to a single fixed orthonormal basis {e1, e2, e3}. Each component of, say, a vector field v(x) or a 2-tensor field A(x) is then effectively a scalar-valued function of position,

e.g. vi(x1, x2, x3) and Aij(x1, x2, x3), where vi and xi are the i-th components of the vectors v and x in the basis {e1, e2, e3}, and we can use the well-known operations of classical calculus on such fields, such as partial differentiation with respect to xk. In order to simplify writing, we shall use the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding x-coordinate. Thus, for example, we will write

    φ,i = ∂φ/∂xi,   vi,j = ∂vi/∂xj,   φ,ij = ∂²φ/∂xi ∂xj,   (5.1)

and so on.

The gradient of a scalar field φ(x) is a vector field denoted by grad φ (or ∇φ). Its i-th component in the orthonormal basis is

    (grad φ)i = φ,i,   so that   grad φ = φ,i ei.   (5.2)

The gradient of a vector field v(x) is a 2-tensor field denoted by grad v (or ∇v). Its ij-th component in the orthonormal basis is

    (grad v)ij = vi,j,   so that   grad v = vi,j ei ⊗ ej.   (5.3)

The gradient of a scalar field φ in the particular direction of the unit vector n is denoted by ∂φ/∂n and defined by

    ∂φ/∂n = ∇φ · n.   (5.4)

The divergence of a vector field v(x) is a scalar field denoted by div v (or ∇ · v), given by

    div v = vi,i.   (5.5)

The divergence of a 2-tensor field A(x) is a vector field denoted by div A (or ∇ · A). Its i-th component in the orthonormal basis is

    (div A)i = Aij,j,   so that   div A = Aij,j ei.   (5.6)

The curl of a vector field v(x) is a vector field denoted by curl v (or ∇ × v). Its i-th component in the orthonormal basis is

    (curl v)i = eijk vk,j,   so that   curl v = eijk vk,j ei.   (5.7)

The Laplacians of a scalar field φ(x), a vector field v(x) and a 2-tensor field A(x) are the scalar, vector and 2-tensor fields with components

    ∇²φ = φ,kk,   (∇²v)i = vi,kk,   (∇²A)ij = Aij,kk.   (5.8)

5.2 Integral theorems

Let D be an arbitrary regular sub-region of the region R. The divergence theorem allows one to relate a surface integral on ∂D to a volume integral on D. In particular, for a scalar field φ(x),

    ∫∂D φ n dA = ∫D ∇φ dV,   or   ∫∂D φ nk dA = ∫D φ,k dV.   (5.9)

Likewise, for a vector field v(x) one has

    ∫∂D v · n dA = ∫D ∇ · v dV,   or   ∫∂D vk nk dA = ∫D vk,k dV,   (5.10)

as well as

    ∫∂D v ⊗ n dA = ∫D ∇v dV,   or   ∫∂D vi nk dA = ∫D vi,k dV.   (5.11)

More generally, for an n-tensor field T(x) the divergence theorem gives

    ∫∂D Ti1 i2 ... in nk dA = ∫D ∂(Ti1 i2 ... in)/∂xk dV,   (5.12)

where some of the subscripts i1, i2, ..., in may be repeated and one of them might equal k.
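For a concrete check of (5.10), take v(x) = Ax on the unit cube [0, 1]³: then div v = trace A everywhere, while the flux through the six faces can be integrated by quadrature. The sketch below (a midpoint-rule discretization of our own choosing, not part of the notes) compares the two sides:

import numpy as np
rng = np.random.default_rng(9)
A = rng.normal(size=(3, 3))
v = lambda x: A @ x
n = 40
g = (np.arange(n) + 0.5) / n                    # midpoint grid on a face
flux = 0.0
for k in range(3):                              # pair of faces normal to axis k
    for s, xk in ((1.0, 1.0), (-1.0, 0.0)):     # outward normal is s * e_k
        for a in g:
            for b in g:
                x = np.zeros(3); x[k] = xk
                x[(k + 1) % 3], x[(k + 2) % 3] = a, b
                flux += s * v(x)[k] / n**2      # contribution of v . n dA
assert np.isclose(flux, np.trace(A))            # volume integral of div v = trace A

Since the integrand is linear on each face, the midpoint rule is exact here and the two sides agree to machine precision.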

5.3 Localization

Certain physical principles are described to us in terms of equations that hold on an arbitrary portion of a body, i.e. in terms of an integral over a subregion D of R. It is often useful to derive an equivalent statement of such a principle in terms of equations that must hold at each point x in the body. In what follows, we shall frequently have need to do this, i.e. to convert a "global principle" to an equivalent "local field equation". Consider, for example, a scalar field φ(x) that is defined and continuous at all x ∈ R + ∂R, and suppose that

    ∫D φ(x) dV = 0   for all subregions D ⊂ R.   (5.13)

We will show that this "global principle" is equivalent to the "local field equation"

    φ(x) = 0   at every point x ∈ R.   (5.14)

Figure 5.1: The region R, a subregion D and a neighborhood Bε(z) of the point z.

We will prove this by contradiction. Suppose that (5.14) does not hold. This implies that there is a point, say z ∈ R, at which φ(z) ≠ 0; suppose that φ is positive at this point, φ(z) > 0. Since we are told that φ is continuous, φ is necessarily (strictly) positive in some neighborhood of z as well. Let Bε(z) be a sphere with its center at z and radius ε > 0. We can always choose ε sufficiently small so that Bε(z) is a sufficiently small neighborhood of z and

    φ(x) > 0   at all x ∈ Bε(z).   (5.15)

Now pick a region D which is a subset of Bε(z). Then φ(x) > 0 for all x ∈ D, and integrating φ over this D gives

    ∫D φ(x) dV > 0,   (5.16)

thus contradicting (5.13). An entirely analogous calculation can be carried out in the case φ(z) < 0. Thus our starting assumption must be false and (5.14) must hold.

89

5.4

Worked Examples.

In all of the examples below the region R will be a bounded regular region and its boundary ∂R will be smooth. All fields are defined on this region and are as smooth as in necessary. In some of the examples below, we are asked to establish certain results for vector and tensor fields. When it is more convenient, we will carry out our calculations by first picking and fixing a basis, and then working with the components in that basis. If necessary, we will revert back to the vector and tensor fields at the end. We shall do this frequently in what follows and will not bother to explain this strategy each time.

Example 5.1: Calculate the gradient of the scalar-valued function φ(x) = Ax · x where A is a constant 2-tensor. Solution: Writing φ in terms of components φ = Aij xi xj . Calculating the partial derivative of φ with respect to xk yields φ,k = Aij (xi xj ),k = Aij (xi,k xj + xi xj,k ) = Aij (δik xj + xi δjk ) = Akj xj + Aik xi = (Akj + Ajk )xj or equivalently φ = (A + AT )x.

Example 5.2: Let v(x) be a vector field and let v_i(x₁, x₂, x₃) be the i-th component of v in a fixed orthonormal basis {e₁, e₂, e₃}. For each i and j define F_ij = v_i,j. Show that the F_ij are the components of a 2-tensor.

Solution: Since v and x are 1-tensors, their components obey the transformation rules

v′_i = Q_ik v_k,   v_i = Q_ki v′_k,   and   x′_j = Q_jk x_k,   x_l = Q_jl x′_j.

Therefore

F′_ij = ∂v′_i/∂x′_j = (∂v′_i/∂x_l)(∂x_l/∂x′_j) = Q_jl ∂v′_i/∂x_l = Q_jl ∂(Q_ik v_k)/∂x_l = Q_ik Q_jl ∂v_k/∂x_l = Q_ik Q_jl F_kl,

which is the transformation rule for a 2-tensor.

Example 5.3: Let φ(x), u(x) and A(x) be scalar, vector and 2-tensor fields, respectively. Establish the identities

a. div (φu) = u · grad φ + φ div u,
b. grad (φu) = u ⊗ grad φ + φ grad u,
c. div (φA) = A grad φ + φ div A.

Solution:
a. In terms of components we are asked to show that (φu_i),i = u_i φ,i + φ u_i,i. This follows immediately by expanding (φu_i),i using the chain rule.
b. In terms of components we are asked to show that (φu_i),j = u_i φ,j + φ u_i,j. Again, this follows immediately by expanding (φu_i),j using the chain rule.
c. In terms of components we are asked to show that (φA_ij),j = A_ij φ,j + φ A_ij,j. Again, this follows immediately by expanding (φA_ij),j using the chain rule.
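Identity (a), and similarly (b) and (c), can be spot-checked with a computer algebra system; the following sketch assumes sympy and uses hypothetical sample fields:

```python
# Check of identity (a) of Example 5.3: div(phi u) = u . grad(phi) + phi div u.
import sympy as sp

X = sp.symbols('x1 x2 x3')
x1, x2, x3 = X
phi = x1 * x2 + x3**2
u = sp.Matrix([x2, x1 * x3, x1])

div = lambda w: sum(sp.diff(w[i], X[i]) for i in range(3))
grad = lambda f: sp.Matrix([sp.diff(f, xi) for xi in X])

lhs = div(phi * u)
rhs = u.dot(grad(phi)) + phi * div(u)
assert sp.expand(lhs - rhs) == 0
```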

Example 5.4: If φ(x) and v(x) are a scalar and vector field respectively, show that

∇ × (φv) = φ(∇ × v) + ∇φ × v = φ(∇ × v) − v × ∇φ.   (i)

Solution: Recall that the curl of a vector field u can be expressed as ∇ × u = e_ijk u_k,j e_i where e_i is a fixed basis vector. Thus evaluating ∇ × (φv):

∇ × (φv) = e_ijk (φv_k),j e_i = e_ijk φ v_k,j e_i + e_ijk φ,j v_k e_i = φ ∇ × v + ∇φ × v,   (ii)

from which the desired result follows because a × b = −b × a.
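A similar symbolic spot-check of Example 5.4, again assuming sympy and hypothetical sample fields:

```python
# Check of Example 5.4: curl(phi v) = phi curl v + grad(phi) x v.
import sympy as sp

X = sp.symbols('x1 x2 x3')
x1, x2, x3 = X
phi = x1 + x2 * x3
v = sp.Matrix([x3**2, x1, x1 * x2])

def curl(w):
    return sp.Matrix([sp.diff(w[2], X[1]) - sp.diff(w[1], X[2]),
                      sp.diff(w[0], X[2]) - sp.diff(w[2], X[0]),
                      sp.diff(w[1], X[0]) - sp.diff(w[0], X[1])])

grad_phi = sp.Matrix([sp.diff(phi, xi) for xi in X])
lhs = curl(phi * v)
rhs = phi * curl(v) + grad_phi.cross(v)
assert (lhs - rhs).applyfunc(sp.expand) == sp.zeros(3, 1)
```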

Example 5.5: Let u(x) be a vector field and define a second vector field ξ(x) by ξ(x) = curl u(x). Show that

a. ∇ · ξ = 0;
b. (∇u − ∇uᵀ)a = ξ × a for any vector field a(x); and
c. ξ · ξ = ∇u · ∇u − ∇u · ∇uᵀ.

Solution: Recall that in terms of its components, ξ = curl u = ∇ × u can be expressed as

ξ_i = e_ijk u_k,j.   (i)

a. A direct calculation gives

∇ · ξ = ξ_i,i = (e_ijk u_k,j),i = e_ijk u_k,ji = 0,   (ii)

where in the last step we have used the fact that eijk is skew-symmetric in the subscripts i, j, and uk,ji is symmetric in the subscripts i, j (since the order of partial differentiation can be switched) and therefore their product vanishes.

b. Multiplying both sides of (i) by e_ipq gives

e_ipq ξ_i = e_ipq e_ijk u_k,j = (δ_pj δ_qk − δ_pk δ_qj) u_k,j = u_q,p − u_p,q,   (iii)

where we have made use of the identity e_ipq e_ijk = δ_pj δ_qk − δ_pk δ_qj between the alternator and the Kronecker delta introduced in (1.49), as well as the substitution rule. Multiplying both sides of this by a_q and using the fact that e_ipq = −e_piq gives e_piq ξ_i a_q = (u_p,q − u_q,p) a_q, or

ξ × a = (∇u − ∇uᵀ)a.   (iv)

c. Since (∇u)_ij = u_i,j and the inner product of two 2-tensors is A · B = A_ij B_ij, the right-hand side of the equation we are asked to establish can be written as

∇u · ∇u − ∇u · ∇uᵀ = (∇u)_ij (∇u)_ij − (∇u)_ij (∇u)_ji = u_i,j u_i,j − u_i,j u_j,i.

The left-hand side, on the other hand, is ξ · ξ = ξ_i ξ_i. Using (i), the aforementioned identity between the alternator and the Kronecker delta, and the substitution rule leads to the desired result as follows:

ξ_i ξ_i = (e_ijk u_k,j)(e_ipq u_q,p) = (δ_jp δ_kq − δ_jq δ_kp) u_k,j u_q,p = u_q,p u_q,p − u_p,q u_q,p = u_i,j u_i,j − u_i,j u_j,i.   (v)
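Part (c) involves enough index bookkeeping that a symbolic check is reassuring. A sketch, assuming sympy and a hypothetical sample field u:

```python
# Check of Example 5.5(c): with xi = curl u,
# xi . xi = grad u : grad u - grad u : (grad u)^T, where A : B = A_ij B_ij.
import sympy as sp

X = sp.symbols('x1 x2 x3')
x1, x2, x3 = X
u = sp.Matrix([x2 * x3, x1**2, x1 * x3])

gradu = sp.Matrix(3, 3, lambda i, j: sp.diff(u[i], X[j]))   # (grad u)_ij = u_i,j
xi = sp.Matrix([gradu[2, 1] - gradu[1, 2],
                gradu[0, 2] - gradu[2, 0],
                gradu[1, 0] - gradu[0, 1]])

inner = lambda A, B: sum(A[i, j] * B[i, j] for i in range(3) for j in range(3))
assert sp.expand(xi.dot(xi) - (inner(gradu, gradu) - inner(gradu, gradu.T))) == 0
```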

Example 5.6: Let u(x), E(x) and S(x) be, respectively, a vector and two 2-tensor fields. These fields are related by

E = (1/2)(∇u + ∇uᵀ),   S = 2μE + λ trace(E) 1,   (i)

where λ and μ are constants. Suppose that

u(x) = b x/r³   where r = |x|, |x| ≠ 0,   (ii)

and b is a constant. Use (i)₁ to calculate the field E(x) corresponding to the field u(x) given in (ii), and then use (i)₂ to calculate the associated field S(x). Thus verify that the field S(x) corresponding to (ii) satisfies the differential equation

div S = o,  |x| ≠ 0.   (iii)

Solution: We proceed in the manner suggested in the problem statement by first using (i)₁ to calculate the E corresponding to the u given by (ii); substituting the result into (i)₂ gives the corresponding S; and finally we can then check whether or not this S satisfies (iii). In components,

E_ij = (1/2)(u_i,j + u_j,i),   (iv)

and therefore we begin by calculating u_i,j. For this, it is convenient to first calculate ∂r/∂x_j = r,j. Observe by differentiating r² = |x|² = x_i x_i that

2r r,j = 2x_i,j x_i = 2δ_ij x_i = 2x_j,   (v)

and therefore

r,j = x_j/r.   (vi)

Now differentiating the given vector field u_i = b x_i/r³ with respect to x_j gives

u_i,j = b x_i,j/r³ + b x_i (r⁻³),j = b δ_ij/r³ − 3b x_i r⁻⁴ r,j = b δ_ij/r³ − 3b x_i x_j/r⁵.   (vii)

Substituting this into (iv) gives us E_ij:

E_ij = (1/2)(u_i,j + u_j,i) = b (δ_ij/r³ − 3 x_i x_j/r⁵).   (viii)

Next, substituting (viii) into (i)₂ gives us S_ij:

S_ij = 2μ E_ij + λ E_kk δ_ij = 2μb (δ_ij/r³ − 3 x_i x_j/r⁵) + λb (δ_kk/r³ − 3 x_k x_k/r⁵) δ_ij = 2μb (δ_ij/r³ − 3 x_i x_j/r⁵),   (ix)

since δ_kk/r³ − 3 x_k x_k/r⁵ = 3/r³ − 3r²/r⁵ = 0. Finally we use this to calculate ∂S_ij/∂x_j = S_ij,j:

(1/2μb) S_ij,j = δ_ij (r⁻³),j − 3 (x_i x_j r⁻⁵),j
             = −3 δ_ij x_j/r⁵ − 3 (δ_ij x_j + x_i δ_jj)/r⁵ + 15 x_i x_j x_j/r⁷
             = −3 x_i/r⁵ − 3 (x_i + 3x_i)/r⁵ + 15 x_i/r⁵ = 0.   (x)

Thus div S = o and (iii) holds.
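The algebra in Example 5.6 is error-prone, so a symbolic verification of the conclusion div S = o is worthwhile. A sketch assuming sympy, with b, μ and λ kept symbolic:

```python
# Check of Example 5.6: for u = b x / r^3, the field
# S = 2 mu E + lambda tr(E) I is divergence-free away from the origin.
import sympy as sp

x1, x2, x3, b, mu, lam = sp.symbols('x1 x2 x3 b mu lam')
X = sp.Matrix([x1, x2, x3])
r = sp.sqrt(x1**2 + x2**2 + x3**2)
u = b * X / r**3

gradu = sp.Matrix(3, 3, lambda i, j: sp.diff(u[i], X[j]))
E = (gradu + gradu.T) / 2
S = 2 * mu * E + lam * E.trace() * sp.eye(3)

divS = sp.Matrix([sum(sp.diff(S[i, j], X[j]) for j in range(3)) for i in range(3)])
assert divS.applyfunc(sp.simplify) == sp.zeros(3, 1)
```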

Example 5.7: Show that

∫_∂R x ⊗ n dA = V I,   (i)

where V is the volume of the region R and x is the position vector of a typical point in R + ∂R.

Solution: In terms of components in a fixed basis, we have to show that

∫_∂R x_i n_j dA = V δ_ij.   (ii)

The result follows immediately by using the divergence theorem (5.11):

∫_∂R x_i n_j dA = ∫_R x_i,j dV = ∫_R δ_ij dV = δ_ij ∫_R dV = δ_ij V.   (iii)

Example 5.8: Let A(x) be a 2-tensor field with the property that

∫_∂D A(x) n(x) dA = o   (i)

for all subregions D ⊂ R, where n(x) is the unit outward normal vector at a point x on the boundary ∂D. Show that (i) holds if and only if div A = o at each point x ∈ R.

Solution: In terms of components in a fixed basis, we are told that

∫_∂D A_ij(x) n_j(x) dA = 0   (ii)

for all subregions D ⊂ R. By using the divergence theorem (5.12), this implies that

∫_D A_ij,j dV = 0   (iii)

for all subregions D ⊂ R. We are also given that A_ij,j is continuous on R, and so the localization result established in Section 5.3 allows us to conclude that

A_ij,j = 0 at each x ∈ R.   (iv)

Conversely, if (iv) holds, one can easily reverse the preceding steps to conclude that then (i) also holds. This shows that (iv) is both necessary and sufficient for (i) to hold.

Example 5.9: Let A(x) be a 2-tensor field which satisfies the differential equation div A = o at each point in R. Suppose that in addition

∫_∂D x × (An) dA = o   (i)

for all subregions D ⊂ R. Show that A must be a symmetric 2-tensor.

Solution: In terms of components we are given that

∫_∂D e_ijk x_j A_kp n_p dA = 0,

which on using the divergence theorem yields

∫_D e_ijk (x_j A_kp),p dV = ∫_D e_ijk [δ_jp A_kp + x_j A_kp,p] dV = 0.

Since A_kp,p = 0 at each point in R, the preceding equation simplifies, after using the substitution rule, to

∫_D e_ijk A_kj dV = 0.

Since this holds for all subregions D ⊂ R we can localize it to

e_ijk A_kj = 0 at each x ∈ R.

Finally, multiplying both sides by e_ipq and using the identity e_ipq e_ijk = δ_pj δ_qk − δ_pk δ_qj in (1.49) yields

(δ_pj δ_qk − δ_pk δ_qj) A_kj = A_qp − A_pq = 0,

and so A is symmetric.

Example 5.10: Let ε₁(x₁, x₂) and ε₂(x₁, x₂) be defined on a simply connected two-dimensional domain R. Find necessary and sufficient conditions under which there exists a function u(x₁, x₂) such that

u,₁ = ε₁,   u,₂ = ε₂   for all (x₁, x₂) ∈ R.   (i)

Solution: In the presence of sufficient smoothness, the order of partial differentiation does not matter, and so we necessarily have u,₁₂ = u,₂₁. Therefore a necessary condition for (i) to hold is that ε₁, ε₂ obey

ε₁,₂ = ε₂,₁   for all (x₁, x₂) ∈ R.   (ii)

To show that (ii) is also sufficient for the existence of u, we shall provide a formula for explicitly calculating the function u in terms of the given functions ε₁ and ε₂. Let C be an arbitrary regular oriented curve in R that connects (0, 0) to (x₁, x₂). A generic point on the curve is denoted by (ξ₁, ξ₂) and the curve is characterized by the parameterization

ξ₁ = ξ₁(s),   ξ₂ = ξ₂(s),   0 ≤ s ≤ s₀,   (iii)

where s is arc length on C and (ξ₁(0), ξ₂(0)) = (0, 0), (ξ₁(s₀), ξ₂(s₀)) = (x₁, x₂). We will show that the function

u(x₁, x₂) = ∫₀^{s₀} [ε₁(ξ₁(s), ξ₂(s)) ξ₁′(s) + ε₂(ξ₁(s), ξ₂(s)) ξ₂′(s)] ds   (iv)

satisfies the requirement (i) when (ii) holds.

Figure 5.2: (a) A path C from (0, 0) to (x₁, x₂). (b) A closed path C passing through (0, 0) and (x₁, x₂). The components (s₁, s₂) of the unit tangent vector and (n₁, n₂) of the unit outward normal on C are related by s₁ = −n₂, s₂ = n₁.

To see this, we must first show that the integral (iv) does in fact define a function of (x₁, x₂), i.e. that it does not depend on the path of integration. (Note that if a function u satisfies (i), then so does the function u + constant, and so the dependence on the arbitrary starting point of the integral is to be expected.) Thus consider a closed path C that starts and ends at (0, 0) and passes through (x₁, x₂), as sketched in Figure 5.2(b). We need to show that

∮_C [ε₁(ξ₁(s), ξ₂(s)) ξ₁′(s) + ε₂(ξ₁(s), ξ₂(s)) ξ₂′(s)] ds = 0.   (v)

Recall that (ξ₁′(s), ξ₂′(s)) are the components of the unit tangent vector on C at the point (ξ₁(s), ξ₂(s)): s₁ = ξ₁′(s), s₂ = ξ₂′(s). Observe further from the figure that the components of the unit tangent vector s and the unit outward normal vector n are related by s₁ = −n₂ and s₂ = n₁. Thus the left-hand side of (v) can be written as

∮_C (ε₁ s₁ + ε₂ s₂) ds = ∮_C (ε₂ n₁ − ε₁ n₂) ds = ∫_D (ε₂,₁ − ε₁,₂) dA,   (vi)

where we have used the divergence theorem in the last step and D is the region enclosed by C. In view of (ii), this last integral vanishes. Thus the integral (v) vanishes on any closed path C, and so the integral (iv) is independent of path and depends only on the end points. Thus (iv) does in fact define a function u(x₁, x₂).

Finally, it remains to show that the function (iv) satisfies the requirements (i). This is readily seen by writing (iv) in the form

u(x₁, x₂) = ∫_{(0,0)}^{(x₁,x₂)} ε₁(ξ₁, ξ₂) dξ₁ + ε₂(ξ₁, ξ₂) dξ₂   (vii)

and then differentiating this with respect to x₁ and x₂.

Example 5.11: Let a₁(x₁, x₂) and a₂(x₁, x₂) be defined on a simply connected two-dimensional domain R. Suppose that a₁ and a₂ satisfy the partial differential equation

a₁,₁(x₁, x₂) + a₂,₂(x₁, x₂) = 0   for all (x₁, x₂) ∈ R.   (i)

Show that (i) holds if and only if there is a function φ(x₁, x₂) such that

a₁(x₁, x₂) = φ,₂(x₁, x₂),   a₂(x₁, x₂) = −φ,₁(x₁, x₂).   (ii)

Solution: This is simply a restatement of the previous example in a form that we will find useful in what follows.
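The construction (iv) of Example 5.10 can be illustrated concretely. In the sketch below (assuming sympy) the data ε₁, ε₂ are hypothetical fields chosen to satisfy (ii); although (iv) is written in terms of arc length, the line integral is parameterization-invariant, so a parameter s ∈ [0, 1] along the straight path from (0, 0) to (x₁, x₂) is used instead.

```python
# Illustration of the path-integral construction (iv) of Example 5.10.
import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s')
eps1 = 2 * x1 * x2            # eps1 = u,1
eps2 = x1**2 + 3 * x2**2      # eps2 = u,2 ; note eps1,2 = eps2,1 = 2 x1

# straight path (xi1, xi2) = (s x1, s x2), 0 <= s <= 1
xi1, xi2 = s * x1, s * x2
integrand = (eps1.subs({x1: xi1, x2: xi2}) * x1
             + eps2.subs({x1: xi1, x2: xi2}) * x2)
u = sp.integrate(integrand, (s, 0, 1))   # u = x1^2 x2 + x2^3

assert sp.expand(sp.diff(u, x1) - eps1) == 0
assert sp.expand(sp.diff(u, x2) - eps2) == 0
```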

Example 5.12: Find the most general vector field u(x) which satisfies the differential equation

(1/2)(∇u + ∇uᵀ) = O   at all x ∈ R.   (i)

Solution: In terms of components, ∇u = −∇uᵀ reads

u_i,j = −u_j,i.   (ii)

Differentiating this with respect to x_k, and then changing the order of differentiation, gives u_i,jk = −u_j,ik = −u_j,ki. However by (ii), u_j,k = −u_k,j. Using this and then changing the order of differentiation leads to u_i,jk = u_k,ji = u_k,ij. However by (ii), u_k,i = −u_i,k. Using this and changing the order of differentiation once again leads to u_i,jk = −u_i,kj = −u_i,jk. It therefore follows that

u_i,jk = 0.   (iii)

Integrating this once gives u_i,j = C_ij where the C_ij's are constants, and substituting this into (ii) shows that [C] must be skew-symmetric. Integrating once more gives

u_i = C_ij x_j + c_i,   (iv)

where the c_i's are constants. The vector field u(x) must necessarily have this form if (ii) is to hold; conversely, every field of the form (iv) with [C] skew-symmetric satisfies (ii). Thus, in summary, the most general vector field u(x) that satisfies (i) is

u(x) = Cx + c,

where C is a constant skew-symmetric 2-tensor and c is a constant vector.
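The sufficiency direction of Example 5.12 is a one-line symbolic check (sketch assuming sympy; C and c are arbitrary):

```python
# Check that u = C x + c with skew-symmetric C satisfies grad u + (grad u)^T = O.
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3'))
C = sp.Matrix([[0, 1, -2], [-1, 0, 3], [2, -3, 0]])   # arbitrary skew matrix
c = sp.Matrix([1, 2, 3])

u = C * x + c
gradu = u.jacobian(x)          # equals C
assert gradu + gradu.T == sp.zeros(3, 3)
```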

Example 5.13: Suppose that a scalar-valued function f(A) is defined for all symmetric tensors A. Are the partial derivatives

∂f/∂A_ij   (i)

the components of a 2-tensor? Is this tensor symmetric?

Solution: Consider, for example, the particular function f = A · A = A_ij A_ij which, when written out in components, reads

f = f₁(A₁₁, A₁₂, ..., A₃₃) = A₁₁² + A₂₂² + A₃₃² + 2A₁₂² + 2A₂₃² + 2A₃₁².   (ii)

Proceeding formally and differentiating (ii) with respect to A₁₂, and separately with respect to A₂₁, gives

∂f₁/∂A₁₂ = 4A₁₂,   ∂f₁/∂A₂₁ = 0,   (iii)

which implies that ∂f₁/∂A₁₂ ≠ ∂f₁/∂A₂₁. On the other hand, since A_ij is symmetric we can write

A_ij = (1/2)(A_ij + A_ji).   (iv)

Substituting (iv) into the formula (ii) for f gives f = f₂(A₁₁, A₁₂, ..., A₃₃):

f₂ = A₁₁² + A₂₂² + A₃₃² + 2[(1/2)(A₁₂ + A₂₁)]² + 2[(1/2)(A₂₃ + A₃₂)]² + 2[(1/2)(A₃₁ + A₁₃)]²
   = A₁₁² + A₂₂² + A₃₃² + (1/2)A₁₂² + A₁₂A₂₁ + (1/2)A₂₁² + ... + (1/2)A₃₁² + A₃₁A₁₃ + (1/2)A₁₃².   (v)

Note that the values f₁[A] = f₂[A] for any symmetric matrix [A]; in going from f₁ to f₂ we have effectively relaxed the constraint of symmetry and expanded the domain of definition of f to all matrices [A]. Suppose that f₂ is defined by (v) for all matrices [A] and not just symmetric matrices [A]. We may then differentiate f₂ by treating the 9 A_ij's as independent, and the result can then be evaluated at symmetric matrices. Differentiating f₂ leads to

∂f₂/∂A₁₂ = A₁₂ + A₂₁,   ∂f₂/∂A₂₁ = A₂₁ + A₁₂,   (vi)

and so now ∂f₂/∂A₁₂ = ∂f₂/∂A₂₁.

The source of the original difficulty is the fact that the 9 A_ij's in the argument of f₁ are not independent variables, since A_ij = A_ji, and yet we have been calculating partial derivatives as if they were independent. In fact, the original problem statement itself is ill-posed, since we are asked to calculate ∂f/∂A_ij but told that [A] is restricted to being symmetric. We assume that the symmetric form is what was meant in the problem statement. In general, if a function f(A₁₁, A₁₂, ..., A₃₃) is expressed in symmetric form, e.g. by changing A_ij → (1/2)(A_ij + A_ji), then ∂f/∂A_ij will be symmetric, but not otherwise. Throughout these volumes, whenever we encounter a function of a symmetric tensor, we shall always assume that it has been written in symmetric form, and therefore its derivative with respect to the tensor can be assumed to be symmetric.

Remark: We will encounter a similar situation involving tensors whose determinant is unity. On occasion we will have need to differentiate a function g₁(A) defined for all tensors with det A = 1, and we shall do this by extending the definition of the given function and defining a second function g₂(A) for all tensors; g₂ is defined such that g₁(A) = g₂(A) for all tensors with unit determinant. We then differentiate g₂ and evaluate the result at tensors with unit determinant.

References
1. P. Chadwick, Continuum Mechanics, Chapter 1, Sections 10 and 11, Dover, 1999.
2. M.E. Gurtin, An Introduction to Continuum Mechanics, Chapter 2, Academic Press, 1981.
3. L.A. Segel, Mathematics Applied to Continuum Mechanics, Dover, 1987.


Chapter 6

Orthogonal Curvilinear Coordinates

6.1  Introductory Remarks

The notes in this section are a somewhat simplified version of notes developed by Professor Eli Sternberg of Caltech. The discussion here, which is a general treatment of orthogonal curvilinear coordinates, is a compromise between a general tensorial treatment that includes oblique coordinate systems and an ad-hoc treatment of special orthogonal curvilinear coordinate systems. A summary of the main tensor-analytic results of this section is given in equations (6.32) - (6.37) in terms of the scale factors h_i defined in (6.17) that relate the rectangular cartesian coordinates (x₁, x₂, x₃) to the orthogonal curvilinear coordinates (x̂₁, x̂₂, x̂₃).

It is helpful to begin by reviewing a few aspects of the familiar case of circular cylindrical coordinates. Let {e₁, e₂, e₃} be a fixed orthonormal basis, and let O be a fixed point chosen as the origin. The point O and the basis {e₁, e₂, e₃}, together, constitute a frame which we denote by {O; e₁, e₂, e₃}. Consider a generic point P in R³ whose position vector relative to this origin O is x. The rectangular cartesian coordinates of the point P in the frame {O; e₁, e₂, e₃} are the components (x₁, x₂, x₃) of the position vector x in this basis.

We introduce circular cylindrical coordinates (r, θ, z) through the mappings

x₁ = r cos θ,   x₂ = r sin θ,   x₃ = z,   (6.1)

for all (r, θ, z) ∈ [0, ∞) × [0, 2π) × (−∞, ∞). The mapping (6.1) is one-to-one except at r = 0 (i.e. x₁ = x₂ = 0). Indeed, (6.1) may be

explicitly inverted for r > 0 to give

r = √(x₁² + x₂²),   cos θ = x₁/r,   sin θ = x₂/r,   z = x₃.   (6.2)

The Jacobian determinant of the mapping (6.1) is

Δ(r, θ, z) = det [ ∂x₁/∂r  ∂x₁/∂θ  ∂x₁/∂z ;  ∂x₂/∂r  ∂x₂/∂θ  ∂x₂/∂z ;  ∂x₃/∂r  ∂x₃/∂θ  ∂x₃/∂z ] = r ≥ 0.

Note that Δ(r, θ, z) = 0 if and only if r = 0 and is otherwise strictly positive; this reflects the invertibility of (6.1) on (r, θ, z) ∈ (0, ∞) × [0, 2π) × (−∞, ∞), and the breakdown in invertibility at r = 0. For a general set of orthogonal curvilinear coordinates one cannot, in general, explicitly invert the coordinate mapping in this way.

The circular cylindrical coordinates (r, θ, z) admit the familiar geometric interpretation illustrated in Figure 6.1. In view of (6.2), one has:

r = r₀ = constant:  circular cylinders, co-axial with the x₃-axis;
θ = θ₀ = constant:  meridional half-planes through the x₃-axis;
z = z₀ = constant:  planes perpendicular to the x₃-axis.

Figure 6.1: Circular cylindrical coordinates (r, θ, z): the r-, θ- and z-coordinate lines through a generic point.

The above surfaces constitute a triply orthogonal family of coordinate surfaces; each "regular point" of E³ (i.e. a point at which r > 0) is the intersection of a unique triplet of (mutually

perpendicular) coordinate surfaces. The coordinate lines are the pairwise intersections of the coordinate surfaces; thus, for example, as illustrated in Figure 6.1, the line along which an r-coordinate surface and a z-coordinate surface intersect is a θ-coordinate line. Along any coordinate line only one of the coordinates (r, θ, z) varies, while the other two remain constant.

In terms of the circular cylindrical coordinates the position vector x can be written as

x = x(r, θ, z) = (r cos θ) e₁ + (r sin θ) e₂ + z e₃.   (6.3)

The vectors ∂x/∂r, ∂x/∂θ, ∂x/∂z are tangent to the coordinate lines corresponding to r, θ and z respectively. The so-called metric coefficients h_r, h_θ, h_z denote the magnitudes of these vectors,

h_r = |∂x/∂r|,   h_θ = |∂x/∂θ|,   h_z = |∂x/∂z|,

and so the unit tangent vectors corresponding to the respective coordinate lines r, θ and z are

e_r = (1/h_r) ∂x/∂r,   e_θ = (1/h_θ) ∂x/∂θ,   e_z = (1/h_z) ∂x/∂z.

In the present case one has h_r = 1, h_θ = r, h_z = 1 and

e_r = (1/h_r) ∂x/∂r = cos θ e₁ + sin θ e₂,
e_θ = (1/h_θ) ∂x/∂θ = −sin θ e₁ + cos θ e₂,
e_z = (1/h_z) ∂x/∂z = e₃.   (6.4)

The triplet of vectors {e_r, e_θ, e_z} forms a local orthonormal basis at the point x. They are local because they depend on the point x; sometimes, when we need to emphasize this fact, we will write {e_r(x), e_θ(x), e_z(x)}.

In order to calculate the derivatives of various field quantities it is clear that we will need to calculate quantities such as ∂e_r/∂r, ∂e_r/∂θ, etc., and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form e_r · (∂e_r/∂r), e_θ · (∂e_r/∂r), e_z · (∂e_r/∂r), e_r · (∂e_θ/∂r), e_θ · (∂e_θ/∂r), e_z · (∂e_θ/∂r), etc.
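All of these cylindrical-coordinate quantities can be generated mechanically from the mapping (6.1). A sketch assuming sympy, confirming the scale factors and the relation ∂e_r/∂θ = e_θ:

```python
# Scale factors and basis vectors for cylindrical coordinates, from (6.1).
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x = sp.Matrix([r * sp.cos(th), r * sp.sin(th), z])

h = [sp.simplify(sp.diff(x, q).norm()) for q in (r, th, z)]
print(h)                                   # [1, r, 1]

e_r = sp.diff(x, r) / h[0]
e_th = sp.diff(x, th) / h[1]
assert (sp.diff(e_r, th) - e_th).applyfunc(sp.simplify) == sp.zeros(3, 1)
```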

Much of the analysis in the general case to follow, leading eventually to Equation (6.30) in Sub-section 6.2.4, is devoted to calculating these quantities.

Notation: As far as possible, we will consistently denote the fixed cartesian coordinate system and all components and quantities associated with it by symbols such as x_i, e_i, f(x₁, x₂, x₃), v_i(x₁, x₂, x₃), A_ij(x₁, x₂, x₃) etc., and we shall consistently denote the local curvilinear coordinate system and all components and quantities associated with it by similar symbols with "hats" over them, e.g. x̂_i, ê_i, f̂(x̂₁, x̂₂, x̂₃), v̂_i(x̂₁, x̂₂, x̂₃), Â_ij(x̂₁, x̂₂, x̂₃) etc.

6.2  General Orthogonal Curvilinear Coordinates

Let {e₁, e₂, e₃} be a fixed right-handed orthonormal basis, let O be the fixed point chosen as the origin and let {O; e₁, e₂, e₃} be the associated frame. The rectangular cartesian coordinates of the point with position vector x in this frame are (x₁, x₂, x₃) where x_i = x · e_i.

6.2.1  Coordinate transformation. Inverse transformation.

We introduce curvilinear coordinates (x̂₁, x̂₂, x̂₃) through a triplet of scalar mappings

x_i = x_i(x̂₁, x̂₂, x̂₃)   (6.5)

where the domain of definition R̂ is a subset of E³. Each curvilinear coordinate x̂_i belongs to some linear interval L_i, and R̂ = L₁ × L₂ × L₃. For example, in the case of circular cylindrical coordinates we have L₁ = {x̂₁ | 0 ≤ x̂₁ < ∞}, L₂ = {x̂₂ | 0 ≤ x̂₂ < 2π} and L₃ = {x̂₃ | −∞ < x̂₃ < ∞}, and the "box" R̂ is given by R̂ = {(x̂₁, x̂₂, x̂₃) | 0 ≤ x̂₁ < ∞, 0 ≤ x̂₂ < 2π, −∞ < x̂₃ < ∞}. Observe that the "box" R̂ includes some but possibly not all of its faces.

Equation (6.5) may be interpreted as a mapping of R̂ into E³. We shall assume that (x₁, x₂, x₃) ranges over all of E³ as (x̂₁, x̂₂, x̂₃) takes on all values in R̂. We assume further that the mapping (6.5) is one-to-one and sufficiently smooth in the interior of R̂ so that the inverse mapping

x̂_i = x̂_i(x₁, x₂, x₃)   (6.6)

exists and is appropriately smooth at all (x₁, x₂, x₃) in the image of the interior of R̂.

Note that the mapping (6.5) might not be uniquely invertible on some of the faces of R̂, which are mapped into "singular" lines or surfaces in E³. (For example, in the case of circular cylindrical coordinates, x̂₁ = r = 0 is a singular surface; see Section 6.1.) Points that are not on a singular line or surface will be referred to as "regular points" of E³.

The Jacobian matrix [J] of the mapping (6.5) has elements

J_ij = ∂x_i/∂x̂_j,   (6.7)

and by the assumed smoothness and one-to-oneness of the mapping, the Jacobian determinant does not vanish on the interior of R̂. Without loss of generality we can therefore take it to be positive:

det[J] = (1/6) e_ijk e_pqr (∂x_i/∂x̂_p)(∂x_j/∂x̂_q)(∂x_k/∂x̂_r) > 0.   (6.8)

The Jacobian matrix of the inverse mapping (6.6) is [J]⁻¹.

The coordinate surface x̂_i = constant is defined by x̂_i(x₁, x₂, x₃) = x̂ᵢᵒ = constant, i = 1, 2, 3; the pairwise intersections of these surfaces are the corresponding coordinate lines, along which only one of the curvilinear coordinates varies. Thus every regular point of E³ is the point of intersection of a unique triplet of coordinate surfaces and coordinate lines, as is illustrated in Figure 6.2.

Recall that the tangent vector along an arbitrary regular curve

Γ: x = x(t),   (α ≤ t ≤ β),   (6.9)

can be taken to be¹ ẋ(t) = ẋ_i(t) e_i; it is oriented in the direction of increasing t. Thus, in the case of the special curve Γ₁: x = x(x̂₁, c₂, c₃), x̂₁ ∈ L₁, with c₂ = constant, c₃ = constant, corresponding to an x̂₁-coordinate line, the tangent vector can be taken to be ∂x/∂x̂₁. Generalizing this, the ∂x/∂x̂_i are tangent vectors and

ê_i = (1/|∂x/∂x̂_i|) ∂x/∂x̂_i   (no sum).   (6.10)

¹ Here and in the sequel a superior dot indicates differentiation with respect to the parameter t.

Here ê_i is the unit tangent vector along the x̂_i-coordinate line, the sense of ê_i being determined by the direction in which x̂_i increases.

Figure 6.2: Orthogonal curvilinear coordinates (x̂₁, x̂₂, x̂₃): the x̂_i-coordinate surfaces through a point and the associated local orthonormal basis vectors {ê₁, ê₂, ê₃}. The proper orthogonal matrix [Q], with Q_ij = ê_i · e_j, characterizes the rotational transformation relating this basis to the rectangular cartesian basis {e₁, e₂, e₃}.

6.2.2  Metric coefficients, scale moduli.

Consider again the arbitrary regular curve Γ parameterized by (6.9). If s(t) is the arc-length of Γ, measured from an arbitrary fixed point on Γ, one has

|ṡ(t)| = |ẋ(t)| = √(ẋ(t) · ẋ(t)).   (6.11)

Since our discussion is limited to orthogonal curvilinear coordinate systems, we must require for i ≠ j:

ê_i · ê_j = 0,  or  ∂x/∂x̂_i · ∂x/∂x̂_j = 0,  or  (∂x_k/∂x̂_i)(∂x_k/∂x̂_j) = 0   (i ≠ j).   (6.12)

One concludes from (6.5) and the chain rule that, along Γ,

(ds/dt)² = (dx/dt) · (dx/dt) = (∂x_k/∂x̂_i)(∂x_k/∂x̂_j)(dx̂_i/dt)(dx̂_j/dt) = g_ij (dx̂_i/dt)(dx̂_j/dt),   or   (ds)² = g_ij dx̂_i dx̂_j,   (6.13)

where x_i(t) = x_i(x̂₁(t), x̂₂(t), x̂₃(t)) and the g_ij are the metric coefficients of the curvilinear coordinate system under consideration. They are defined by

g_ij = ∂x/∂x̂_i · ∂x/∂x̂_j = (∂x_k/∂x̂_i)(∂x_k/∂x̂_j).   (6.14)

Note that

g_ij = 0   (i ≠ j)   (6.15)

as a consequence of the orthogonality condition (6.12). Because of (6.14) and (6.15) the metric coefficients can be written as

g_ij = h_i h_j δ_ij,   (6.16)

where the scale moduli h_i are defined by²³

h_i = √g_ii = √( (∂x₁/∂x̂_i)² + (∂x₂/∂x̂_i)² + (∂x₃/∂x̂_i)² ) > 0,   (6.17)

noting that h_i = 0 is precluded by (6.8). Observe that in terms of the Jacobian matrix [J] defined earlier in (6.7) we can write g_ij = J_ki J_kj, or equivalently [g] = [J]ᵀ[J]. The matrix of metric coefficients is therefore

[g] = diag(h₁², h₂², h₃²).   (6.18)

From (6.13), (6.16) and (6.17) it follows that

(ds)² = (h₁ dx̂₁)² + (h₂ dx̂₂)² + (h₃ dx̂₃)²   (6.19)

² Here and henceforth the underlining of one of two or more repeated indices indicates suspended summation with respect to this index.
³ Some authors, such as Love, define h_i as 1/√g_ii instead of as √g_ii.

which reveals the geometric significance of the scale moduli:

h_i = ds/dx̂_i   along the x̂_i-coordinate lines.   (6.20)

It follows from (6.10) and (6.17) that the unit vector ê_i can be expressed as

ê_i = (1/h_i) ∂x/∂x̂_i.   (6.21)

Therefore the proper orthogonal matrix [Q] relating the two sets of basis vectors is given by

Q_ij = ê_i · e_j = (1/h_i) ∂x_j/∂x̂_i.   (6.22)

6.2.3  Inverse partial derivatives

In view of (6.5) and (6.6) one has the identity x_i = x_i(x̂₁(x₁, x₂, x₃), x̂₂(x₁, x₂, x₃), x̂₃(x₁, x₂, x₃)), so that from the chain rule

(∂x_i/∂x̂_k)(∂x̂_k/∂x_j) = δ_ij.   (6.23)

Multiply this by ∂x_i/∂x̂_m, noting the implied contraction on the index i, and use (6.14) and (6.16) to confirm that

(∂x_i/∂x̂_m)(∂x_i/∂x̂_k)(∂x̂_k/∂x_j) = g_mk (∂x̂_k/∂x_j) = h_m² (∂x̂_m/∂x_j) = ∂x_j/∂x̂_m.

Thus the inverse partial derivatives are given by

∂x̂_i/∂x_j = (1/h_i²) ∂x_j/∂x̂_i.   (6.24)

Moreover, by (6.22) and (6.24), the elements of the matrix [Q] that relates the two coordinate systems can be written in the alternative form

Q_ij = ê_i · e_j = h_i ∂x̂_i/∂x_j.   (6.25)
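The claim that [Q] is proper orthogonal (proved later in Example 6.5) can be tested for a concrete coordinate system. A sketch assuming sympy, using spherical coordinates together with their scale moduli h = (1, r, r sin θ) listed in (6.41):

```python
# Build Q_ij = (1/h_i) dx_j/dxhat_i for spherical coordinates, per (6.22),
# and confirm Q Q^T = I and det Q = 1.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

J = x.jacobian(sp.Matrix([r, th, ph]))          # J_ij = dx_i/dxhat_j
h = [1, r, r * sp.sin(th)]                      # scale moduli from (6.41)
Q = sp.Matrix(3, 3, lambda i, j: J[j, i] / h[i])

assert (Q * Q.T - sp.eye(3)).applyfunc(sp.trigsimp) == sp.zeros(3, 3)
assert sp.trigsimp(Q.det() - 1) == 0
```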

6.2.4  Components of ∂ê_i/∂x̂_j in the local basis (ê₁, ê₂, ê₃)

In order to calculate the derivatives of various field quantities it is clear that we will need to calculate the quantities ∂ê_i/∂x̂_j, and in order to calculate the components of these derivatives in the local basis we will need to calculate quantities of the form ê_k · ∂ê_i/∂x̂_j. Calculating these quantities is an essential prerequisite for the transformation of basic tensor-analytic relations into arbitrary orthogonal curvilinear coordinates, and this subsection is devoted to this calculation.

From (6.21),

∂ê_i/∂x̂_j = −(1/h_i²)(∂h_i/∂x̂_j) ∂x/∂x̂_i + (1/h_i) ∂²x/∂x̂_i∂x̂_j,

and therefore

∂ê_i/∂x̂_j · ê_k = −(δ_ik/h_i)(∂h_i/∂x̂_j) + (1/(h_i h_k)) ∂²x/∂x̂_i∂x̂_j · ∂x/∂x̂_k.   (6.26)

In order to express the second-derivative term in (6.26) in terms of the scale moduli and their first partial derivatives, we begin by differentiating

∂x/∂x̂_i · ∂x/∂x̂_j = g_ij = δ_ij h_i h_j

with respect to x̂_k. Thus

∂²x/∂x̂_i∂x̂_k · ∂x/∂x̂_j + ∂²x/∂x̂_j∂x̂_k · ∂x/∂x̂_i = ∂/∂x̂_k (δ_ij h_i h_j).   (6.27)

If we refer to (6.27) as (a), and let (b) and (c) be the identities resulting from (6.27) when (i, j, k) are replaced by (j, k, i) and (k, i, j) respectively, then (1/2){(b) + (c) − (a)} is readily found to yield

∂²x/∂x̂_i∂x̂_j · ∂x/∂x̂_k = (1/2) [ δ_jk ∂/∂x̂_i (h_j h_k) + δ_ki ∂/∂x̂_j (h_k h_i) − δ_ij ∂/∂x̂_k (h_i h_j) ].   (6.28)

Substituting (6.28) into (6.26) leads to

∂ê_i/∂x̂_j · ê_k = −(δ_ik/h_i)(∂h_i/∂x̂_j) + (1/(2h_i h_k)) [ δ_jk ∂/∂x̂_i (h_j h_k) + δ_ki ∂/∂x̂_j (h_k h_i) − δ_ij ∂/∂x̂_k (h_i h_j) ].   (6.29)

Equation (6.29) provides the explicit expressions for the terms ∂ê_i/∂x̂_j · ê_k that we sought.


Observe the following properties that follow from it:

∂ê_i/∂x̂_j · ê_k = 0   if (i, j, k) distinct,
∂ê_i/∂x̂_j · ê_k = 0   if k = i,
∂ê_i/∂x̂_i · ê_k = −(1/h_k) ∂h_i/∂x̂_k   if i ≠ k,
∂ê_i/∂x̂_k · ê_k = (1/h_i) ∂h_k/∂x̂_i   if i ≠ k.   (6.30)

6.3  Transformation of Basic Tensor Relations

Let T be a cartesian tensor field of order N ≥ 1, defined on a region R ⊂ E³, and suppose that the points of R are regular points of E³ with respect to a given orthogonal curvilinear coordinate system. The curvilinear components T̂_ijk...n of T are the components of T in the local basis {ê₁, ê₂, ê₃}. Thus

T̂_ij...n = Q_ip Q_jq ... Q_nr T_pq...r,   where   Q_ip = ê_i · e_p.   (6.31)

6.3.1  Gradient of a scalar field

Let φ(x) be a scalar-valued function and let v(x) denote its gradient: v = ∇φ, or equivalently v_i = φ,i. The components of v in the two bases {e₁, e₂, e₃} and {ê₁, ê₂, ê₃} are related in the usual way by v̂_k = Q_ki v_i, and so

v̂_k = Q_ki ∂φ/∂x_i.

On using (6.22) this leads to

v̂_k = (1/h_k)(∂x_i/∂x̂_k)(∂φ/∂x_i),

so that by the chain rule

v̂_k = (1/h_k) ∂φ̂/∂x̂_k,   (6.32)

where we have set φ̂(x̂₁, x̂₂, x̂₃) = φ(x₁(x̂₁, x̂₂, x̂₃), x₂(x̂₁, x̂₂, x̂₃), x₃(x̂₁, x̂₂, x̂₃)).

6.3.2  Gradient of a vector field

Let v(x) be a vector-valued function and let W(x) denote its gradient: W = ∇v, or equivalently W_ij = v_i,j. The components of W and v in the two bases {e₁, e₂, e₃} and {ê₁, ê₂, ê₃} are related in the usual way by Ŵ_ij = Q_ip Q_jq W_pq and v_p = Q_np v̂_n, and therefore

Ŵ_ij = Q_ip Q_jq ∂v_p/∂x_q = Q_ip Q_jq ∂(Q_np v̂_n)/∂x_q = Q_ip Q_jq Σ_m [∂(Q_np v̂_n)/∂x̂_m](∂x̂_m/∂x_q).

Thus by (6.24)⁴

Ŵ_ij = Q_ip Q_jq Σ_m (1/h_m) Q_mq ∂(Q_np v̂_n)/∂x̂_m.

Since Q_jq Q_mq = δ_mj, this simplifies to

Ŵ_ij = Q_ip (1/h_j) ∂(Q_np v̂_n)/∂x̂_j,

which, on expanding the term in parentheses, yields

Ŵ_ij = (1/h_j) [ ∂v̂_i/∂x̂_j + Q_ip (∂Q_np/∂x̂_j) v̂_n ].

However, by (6.22),

Q_ip (∂Q_np/∂x̂_j) v̂_n = Q_ip [∂(ê_n · e_p)/∂x̂_j] v̂_n = ê_i · (∂ê_n/∂x̂_j) v̂_n,

and so

Ŵ_ij = (1/h_j) [ ∂v̂_i/∂x̂_j + ê_i · (∂ê_n/∂x̂_j) v̂_n ],   (6.33)

in which the coefficient ê_i · ∂ê_n/∂x̂_j is given by (6.29).

⁴ We explicitly use the summation sign in this equation (and elsewhere) when an index is repeated 3 or more times and we wish to sum over it.


6.3.3  Divergence of a vector field
Let v(x) be a vector-valued function and let W(x) = ∇v(x) denote its gradient. Then

div v = trace W = v_i,i.

Therefore from (6.33), the invariance of the trace of W, and (6.30),

div v = trace W = trace Ŵ = Σ_i Ŵ_ii = Σ_i (1/h_i) [ ∂v̂_i/∂x̂_i + Σ_{n≠i} (1/h_n)(∂h_i/∂x̂_n) v̂_n ].

Collecting terms involving v̂₁, v̂₂ and v̂₃ alone, one has

div v = (1/h₁) ∂v̂₁/∂x̂₁ + (v̂₁/(h₂h₁)) ∂h₂/∂x̂₁ + (v̂₁/(h₃h₁)) ∂h₃/∂x̂₁ + ... + ...

Thus

div v = (1/(h₁h₂h₃)) [ ∂/∂x̂₁ (h₂h₃ v̂₁) + ∂/∂x̂₂ (h₃h₁ v̂₂) + ∂/∂x̂₃ (h₁h₂ v̂₃) ].   (6.34)
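A quick consistency check of (6.34), assuming sympy: for the radial field v = x, whose spherical components are v̂ = (r, 0, 0), formula (6.34) should recover the cartesian result div x = 3.

```python
# Check (6.34) in spherical coordinates for v = x, i.e. vhat = (r, 0, 0).
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
h1, h2, h3 = 1, r, r * sp.sin(th)       # scale moduli, see (6.41)
v1, v2, v3 = r, 0, 0

div_v = (sp.diff(h2 * h3 * v1, r) + sp.diff(h3 * h1 * v2, th)
         + sp.diff(h1 * h2 * v3, ph)) / (h1 * h2 * h3)
assert sp.simplify(div_v) == 3
```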

6.3.4  Laplacian of a scalar field

Let φ(x) be a scalar-valued function. Since

∇²φ = div(grad φ) = φ,kk,

the results from Sub-sections 6.3.1 and 6.3.3 permit us to infer that

∇²φ = (1/(h₁h₂h₃)) [ ∂/∂x̂₁ ((h₂h₃/h₁) ∂φ̂/∂x̂₁) + ∂/∂x̂₂ ((h₃h₁/h₂) ∂φ̂/∂x̂₂) + ∂/∂x̂₃ ((h₁h₂/h₃) ∂φ̂/∂x̂₃) ],   (6.35)

where we have set φ̂(x̂₁, x̂₂, x̂₃) = φ(x₁(x̂₁, x̂₂, x̂₃), x₂(x̂₁, x̂₂, x̂₃), x₃(x̂₁, x̂₂, x̂₃)).
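An analogous consistency check of (6.35), assuming sympy: the cylindrical Laplacian of f = x₁² + x₂² = r² should equal the cartesian value 4.

```python
# Check (6.35) in cylindrical coordinates for f = r^2.
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
h1, h2, h3 = 1, r, 1                    # cylindrical scale moduli, see (6.40)
f = r**2

lap = (sp.diff(h2 * h3 / h1 * sp.diff(f, r), r)
       + sp.diff(h3 * h1 / h2 * sp.diff(f, th), th)
       + sp.diff(h1 * h2 / h3 * sp.diff(f, z), z)) / (h1 * h2 * h3)
assert sp.simplify(lap) == 4
```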

6.3.5  Curl of a vector field

Let v(x) be a vector-valued field and let w(x) be its curl, so that w = curl v, or equivalently w_i = e_ijk v_k,j. Let W = ∇v, or equivalently W_ij = v_i,j. Then, as we have shown in an earlier chapter,

w_i = e_ijk W_kj,   ŵ_i = e_ijk Ŵ_kj.

Consequently, from Sub-section 6.3.2,

ŵ_i = Σ_j (1/h_j) e_ijk [ ∂v̂_k/∂x̂_j + ê_k · (∂ê_n/∂x̂_j) v̂_n ].

By (6.30), the second term within the brackets sums out to zero unless n = j. Thus, using (6.30) again, one arrives at

ŵ_i = Σ_{j,k} (1/h_j) e_ijk [ ∂v̂_k/∂x̂_j − (1/h_k)(∂h_j/∂x̂_k) v̂_j ].

This yields

ŵ₁ = (1/(h₂h₃)) [ ∂/∂x̂₂ (h₃ v̂₃) − ∂/∂x̂₃ (h₂ v̂₂) ],
ŵ₂ = (1/(h₃h₁)) [ ∂/∂x̂₃ (h₁ v̂₁) − ∂/∂x̂₁ (h₃ v̂₃) ],
ŵ₃ = (1/(h₁h₂)) [ ∂/∂x̂₁ (h₂ v̂₂) − ∂/∂x̂₂ (h₁ v̂₁) ].   (6.36)

6.3.6  Divergence of a symmetric 2-tensor field

Let S(x) be a symmetric 2-tensor field and let v(x) denote its divergence: v = div S, S = Sᵀ, or equivalently v_i = S_ij,j, S_ij = S_ji. The components of v and S in the two bases {e₁, e₂, e₃} and {ê₁, ê₂, ê₃} are related in the usual way by v̂_i = Q_ip v_p and S_ij = Q_mi Q_nj Ŝ_mn, and consequently

v̂_i = Q_ip S_pj,j = Q_ip ∂(Q_mp Q_nj Ŝ_mn)/∂x_j = Q_ip (∂x̂_k/∂x_j) ∂(Q_mp Q_nj Ŝ_mn)/∂x̂_k.

By using (6.24), the orthogonality of the matrix [Q], (6.30) and Ŝ_ij = Ŝ_ji we obtain

v̂₁ = (1/(h₁h₂h₃)) [ ∂/∂x̂₁ (h₂h₃ Ŝ₁₁) + ∂/∂x̂₂ (h₃h₁ Ŝ₁₂) + ∂/∂x̂₃ (h₁h₂ Ŝ₁₃) ]
    + (1/(h₁h₂)) (∂h₁/∂x̂₂) Ŝ₁₂ + (1/(h₁h₃)) (∂h₁/∂x̂₃) Ŝ₁₃ − (1/(h₁h₂)) (∂h₂/∂x̂₁) Ŝ₂₂ − (1/(h₁h₃)) (∂h₃/∂x̂₁) Ŝ₃₃,   (6.37)

with analogous expressions for v̂₂ and v̂₃.

Equations (6.32) - (6.37) provide the fundamental expressions for the basic tensor-analytic quantities that we will need. Observe that they reduce to their classical rectangular cartesian forms in the special case x̂_i = x_i (in which case h₁ = h₂ = h₃ = 1).

6.3.7  Differential elements of volume

When evaluating a volume integral over a region D, we sometimes find it convenient to transform it from the form

∫_D ... dx₁ dx₂ dx₃

into an equivalent expression of the form

∫_D ... dx̂₁ dx̂₂ dx̂₃.

In order to do this we must relate dx₁dx₂dx₃ to dx̂₁dx̂₂dx̂₃. By (6.22),

det[Q] = (1/(h₁h₂h₃)) det[J].

However, since [Q] is a proper orthogonal matrix, its determinant takes the value +1. Therefore det[J] = h₁h₂h₃, and so the basic relation dx₁dx₂dx₃ = det[J] dx̂₁dx̂₂dx̂₃ leads to

dx₁ dx₂ dx₃ = h₁h₂h₃ dx̂₁ dx̂₂ dx̂₃.   (6.38)
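The relation det[J] = h₁h₂h₃ used above can be confirmed directly; a sketch assuming sympy, for spherical coordinates:

```python
# Confirm det[J] = h1 h2 h3 (used in (6.38)) for spherical coordinates.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])
J = x.jacobian(sp.Matrix([r, th, ph]))
assert sp.simplify(J.det() - r**2 * sp.sin(th)) == 0   # = h_r h_theta h_phi
```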

6.3.8  Differential elements of area

Let dÂ₁ denote a differential element of (vector) area on an x̂₁-coordinate surface, so that dÂ₁ = (dx̂₂ ∂x/∂x̂₂) × (dx̂₃ ∂x/∂x̂₃). In view of (6.21) this leads to dÂ₁ = (dx̂₂ h₂ ê₂) × (dx̂₃ h₃ ê₃) = h₂h₃ dx̂₂ dx̂₃ ê₁. Thus the differential elements of (scalar) area on the x̂₁-, x̂₂- and x̂₃-coordinate surfaces are given by

dÂ₁ = h₂h₃ dx̂₂ dx̂₃,   dÂ₂ = h₃h₁ dx̂₃ dx̂₁,   dÂ₃ = h₁h₂ dx̂₁ dx̂₂,   (6.39)

respectively.

6.4  Some Examples of Orthogonal Curvilinear Coordinate Systems

Circular Cylindrical Coordinates (r, θ, z):
x₁ = r cos θ,  x₂ = r sin θ,  x₃ = z,
for all (r, θ, z) ∈ [0, ∞) × [0, 2π) × (−∞, ∞);
h_r = 1,  h_θ = r,  h_z = 1.   (6.40)

Spherical Coordinates (r, θ, φ):
x₁ = r sin θ cos φ,  x₂ = r sin θ sin φ,  x₃ = r cos θ,
for all (r, θ, φ) ∈ [0, ∞) × [0, π] × [0, 2π);
h_r = 1,  h_θ = r,  h_φ = r sin θ.   (6.41)

Elliptical Cylindrical Coordinates (ξ, η, z):
x₁ = a cosh ξ cos η,  x₂ = a sinh ξ sin η,  x₃ = z,
for all (ξ, η, z) ∈ [0, ∞) × (−π, π] × (−∞, ∞);
h_ξ = h_η = a √(sinh²ξ + sin²η),  h_z = 1.   (6.42)

Parabolic Cylindrical Coordinates (u, v, w):
x₁ = (1/2)(u² − v²),  x₂ = uv,  x₃ = w,
for all (u, v, w) ∈ (−∞, ∞) × [0, ∞) × (−∞, ∞);
h_u = h_v = √(u² + v²),  h_w = 1.   (6.43)
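As a check of the scale moduli listed in (6.42), one can compute them directly from the mapping; a sketch assuming sympy (the squares are compared to keep the simplification robust):

```python
# Verify the elliptical cylindrical scale moduli of (6.42).
import sympy as sp

xi, eta, z, a = sp.symbols('xi eta z a', positive=True)
x = sp.Matrix([a * sp.cosh(xi) * sp.cos(eta), a * sp.sinh(xi) * sp.sin(eta), z])

h_xi_sq = sum(sp.diff(c, xi)**2 for c in x)
h_eta_sq = sum(sp.diff(c, eta)**2 for c in x)
target_sq = a**2 * (sp.sinh(xi)**2 + sp.sin(eta)**2)

assert sp.simplify(h_xi_sq - target_sq) == 0
assert sp.simplify(h_eta_sq - target_sq) == 0
```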

6.5  Worked Examples.

Example 6.1: Let E(x) be a symmetric 2-tensor field that is related to a vector field u(x) through E = (1/2)(∇u + ∇uᵀ). In a cartesian coordinate system this can be written equivalently as

E_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i).

Establish the analogous formulas in a general orthogonal curvilinear coordinate system.

Solution: Using the result from Sub-section 6.3.2 and the formulas (6.29) for ê_k · (∂ê_i/∂x̂_j), one finds after elementary simplification that

Ê₁₁ = (1/h₁) ∂û₁/∂x̂₁ + (1/(h₁h₂)) (∂h₁/∂x̂₂) û₂ + (1/(h₁h₃)) (∂h₁/∂x̂₃) û₃,
Ê₁₂ = Ê₂₁ = (1/2) [ (h₁/h₂) ∂/∂x̂₂ (û₁/h₁) + (h₂/h₁) ∂/∂x̂₁ (û₂/h₂) ],   (i)

with the remaining components Ê₂₂, Ê₃₃, Ê₂₃, Ê₃₁ following by cyclic permutation of the indices.

Example 6.2: Consider a symmetric 2-tensor field S(x) and a vector field b(x) that satisfy the equation div S + b = o. In a cartesian coordinate system this can be written equivalently as

∂S_ij/∂x_j + b_i = 0.

Establish the analogous formulas in a general orthogonal curvilinear coordinate system.

Solution: From the results in Sub-section 6.3.6 we have

(1/(h₁h₂h₃)) [ ∂/∂x̂₁ (h₂h₃ Ŝ₁₁) + ∂/∂x̂₂ (h₃h₁ Ŝ₁₂) + ∂/∂x̂₃ (h₁h₂ Ŝ₁₃) ]
 + (1/(h₁h₂)) (∂h₁/∂x̂₂) Ŝ₁₂ + (1/(h₁h₃)) (∂h₁/∂x̂₃) Ŝ₁₃ − (1/(h₁h₂)) (∂h₂/∂x̂₁) Ŝ₂₂ − (1/(h₁h₃)) (∂h₃/∂x̂₁) Ŝ₃₃ + b̂₁ = 0,   (i)

where b̂_i = Q_ip b_p, with analogous expressions for the other two components.

Example 6.3: Consider circular cylindrical coordinates (x̂₁, x̂₂, x̂₃) = (r, θ, z), which are related to (x₁, x₂, x₃) through

x₁ = r cos θ,  x₂ = r sin θ,  x₃ = z,   0 ≤ r < ∞,  0 ≤ θ < 2π,  −∞ < z < ∞.

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express the following quantities in this coordinate system: (a) grad f, (b) ∇²f, (c) div u, (d) curl u, (e) (1/2)(∇u + ∇uᵀ), and (f) div S.

Figure 6.3: Cylindrical coordinates (r, θ, z) and the associated local orthonormal basis {e_r, e_θ, e_z}.

Solution: We simply need to specialize the basic results established in Section 6.3. In the present case we have (x̂₁, x̂₂, x̂₃) = (r, θ, z) and the coordinate mapping (6.5) takes the particular form

x₁ = r cos θ,  x₂ = r sin θ,  x₃ = z.   (i)

The matrix [∂x_i/∂x̂_j] therefore specializes to

[ cos θ   −r sin θ   0 ;
  sin θ    r cos θ   0 ;
  0        0         1 ],   (ii)

and the scale moduli are

h_r = √((∂x₁/∂r)² + (∂x₂/∂r)² + (∂x₃/∂r)²) = 1,
h_θ = √((∂x₁/∂θ)² + (∂x₂/∂θ)² + (∂x₃/∂θ)²) = r,
h_z = √((∂x₁/∂z)² + (∂x₂/∂z)² + (∂x₃/∂z)²) = 1.   (iii)

From (ii), x = (r cos θ) e₁ + (r sin θ) e₂ + z e₃, and therefore on using (6.21) and (iii) we obtain the following expressions for the unit vectors associated with the local cylindrical coordinate system:

e_r = cos θ e₁ + sin θ e₂,
e_θ = −sin θ e₁ + cos θ e₂,
e_z = e₃,   (iv)

which, in this case, could have been obtained geometrically from Figure 6.3. We use the natural notation

(u_r, u_θ, u_z) = (û₁, û₂, û₃)   (v)

for the components of a vector field, (e_r, e_θ, e_z) = (ê₁, ê₂, ê₃) for the unit vectors associated with the local cylindrical coordinate system, and

(S_rr, S_rθ, ...) = (Ŝ₁₁, Ŝ₁₂, ...)   (vi)

for the components of a 2-tensor field.

(a) Substituting (i) and (iii) into (6.32) gives

∇f = (∂f/∂r) e_r + (1/r)(∂f/∂θ) e_θ + (∂f/∂z) e_z,

where we have set f(r, θ, z) = f(x₁, x₂, x₃).

(b) Substituting (i) and (iii) into (6.35) gives

∇²f = ∂²f/∂r² + (1/r) ∂f/∂r + (1/r²) ∂²f/∂θ² + ∂²f/∂z².

(c) Substituting (i) and (iii) into (6.34) gives

div u = ∂u_r/∂r + u_r/r + (1/r) ∂u_θ/∂θ + ∂u_z/∂z.

(d) Substituting (i) and (iii) into (6.36) gives

curl u = [ (1/r) ∂u_z/∂θ − ∂u_θ/∂z ] e_r + [ ∂u_r/∂z − ∂u_z/∂r ] e_θ + [ ∂u_θ/∂r + u_θ/r − (1/r) ∂u_r/∂θ ] e_z.

(e) Set E = (1/2)(∇u + ∇uᵀ). Substituting (i) and (iii) into (6.33) enables us to calculate ∇u, whence we can calculate E. Writing the cylindrical components Ê_ij of E as (E_rr, E_rθ, E_rz, ...) = (Ê₁₁, Ê₁₂, Ê₁₃, ...), one finds

E_rr = ∂u_r/∂r,
E_θθ = (1/r) ∂u_θ/∂θ + u_r/r,
E_zz = ∂u_z/∂z,
E_rθ = (1/2) [ (1/r) ∂u_r/∂θ + ∂u_θ/∂r − u_θ/r ],
E_θz = (1/2) [ ∂u_θ/∂z + (1/r) ∂u_z/∂θ ],
E_zr = (1/2) [ ∂u_r/∂z + ∂u_z/∂r ].

Alternatively these could have been obtained from the results of Example 6.1.

(f) Finally, substituting (i) and (iii) into (6.37) gives

div S = [ ∂S_rr/∂r + (1/r) ∂S_rθ/∂θ + ∂S_rz/∂z + (S_rr − S_θθ)/r ] e_r
      + [ ∂S_rθ/∂r + (1/r) ∂S_θθ/∂θ + ∂S_θz/∂z + 2S_rθ/r ] e_θ
      + [ ∂S_zr/∂r + (1/r) ∂S_zθ/∂θ + ∂S_zz/∂z + S_zr/r ] e_z.

Alternatively these could have been obtained from the results of Example 6.2.

Example 6.4: Consider spherical coordinates (x̂₁, x̂₂, x̂₃) = (r, θ, φ), which are related to (x₁, x₂, x₃) through

x₁ = r sin θ cos φ,  x₂ = r sin θ sin φ,  x₃ = r cos θ,   0 ≤ r < ∞,  0 ≤ θ ≤ π,  0 ≤ φ < 2π.

Let f(x) be a scalar-valued field, u(x) a vector-valued field, and S(x) a symmetric 2-tensor field. Express the following quantities in this coordinate system: (a) grad f, (b) ∇²f, (c) div u, (d) curl u, (e) (1/2)(∇u + ∇uᵀ), and (f) div S.

Figure 6.4: Spherical coordinates (r, θ, φ) and the associated local orthonormal basis {e_r, e_θ, e_φ}.

Solution: We simply need to specialize the basic results established in Section 6.3. In the present case we have (x̂₁, x̂₂, x̂₃) = (r, θ, φ) and the coordinate mapping (6.5) takes the particular form

x₁ = r sin θ cos φ,  x₂ = r sin θ sin φ,  x₃ = r cos θ.   (i)

The matrix [∂x_i/∂x̂_j] therefore specializes to

[ sin θ cos φ   r cos θ cos φ   −r sin θ sin φ ;
  sin θ sin φ   r cos θ sin φ    r sin θ cos φ ;
  cos θ        −r sin θ          0             ],   (ii)

  r sin θ ∂φ r r 1 1 ∂ur ∂uθ uθ    + − . (d) Substituting (i) and (iii) into (6.120 CHAPTER 6.5: Show that the matrix [Q] defined by (6.    r ∂θ r     1 ∂uφ ur cot θ   + + uθ . Erφ . one finds Err Eθθ Eφφ Erθ Eθφ Eφr = = = = = =  ∂ur   . E12 . 2 r sin θ ∂φ ∂r r Alternatively these could have been obtained from the results of Example 6.34) gives div u = ∂ 2 1 ∂ ∂ (r sin θ ur ) + (r sin θuθ ) + (ruφ ) r2 sin θ ∂r ∂θ ∂φ . ORTHOGONAL CURVILINEAR COORDINATES (c) Substituting (i) and (iii) into (6.37) gives div S = + + 1 ∂Srθ 1 ∂Srφ 1 ∂Srr + + + [2Srr − Sθθ − Sφφ + cot θSrθ ] er ∂r r ∂θ r sin θ ∂φ r ∂Srθ 1 ∂Sθθ 1 ∂Sθφ 1 + + + [3Srθ + cot θ(Sθθ − Sφφ )] eθ ∂r r ∂θ r sin θ ∂φ r ∂Srφ 1 ∂Sθφ 1 ∂Sφφ 1 + + + [3Srφ + 2 cot θSθφ ] eφ ∂r r ∂θ r sin θ ∂φ r Alternatively these could have been obtained from the results of Example 6.22).   ∂r     1 ∂uθ ur   + .1.33) to calculate calculate E.22) is a proper orthogonal matrix. .2. u from which one can er + 1 ∂vr ∂ − (r sin θvφ ) r sin θ ∂φ ∂r eθ (e) Set E=(1/2)( u + uT ).) = (E11 . Example 6.    2 r sin θ ∂φ r ∂θ r      1 1 ∂ur ∂uφ uφ   + − .). .   2 r ∂θ ∂r r     1 ∂uθ 1 ∂uφ cot θ 1   + − uφ . . (f ) Finally substituting (i) and (iii) into (6. Erθ . Writing the spherical components Eij of E as (Err . . hi ∂ xi ˆ . E13 . Proof: From (6.36) gives curl u = + 1 ∂ ∂ (r sin θvφ ) − (rvθ ) r2 sin θ ∂θ ∂φ ∂vr 1 ∂ (rvθ ) − r ∂r ∂θ eφ . Qij = 1 ∂xj . We substitute (i) and (iii) into (6. .

6. Hence [Q] is proper orthogonal. 1987. E. Segel. California Institute of Technology.7).8) and (6.S. Sternberg. New York. Pawlik. Qij = 1 ∂xj 1 Jji = hi ∂ xi ˆ hi where Jij = ∂xi /∂ xj are the elements of the Jacobian matrix. (Unpublished) Lecture Notes for AM 135: Elasticity. Mathematics Applied to Continuum Mechanics.16). H. 1980. hi hj ∂ xi ∂ xj ˆ ˆ hi hj where in the penultimate step we have used (6. from (6. Thus [Q] is an orthogonal matrix.14) and in the ultimate step we have used (6. 2.22) and (6. California. . 3.5. Pasadena.A. L.17). Next. References 1. Reismann and P. WORKED EXAMPLES. Dover. Thus ˆ det[Q] = 1 det[J] > 0 h1 h2 h3 where the inequality is a consequence of the inequalities in (6. Elasticity: Theory and Applications. 1976. Wiley. and therefore Qik Qjk = 121 1 ∂xk ∂xk 1 = gij = δij .

.

Chapter 7

Calculus of Variations

7.1  Introduction.

Numerous problems in physics can be formulated as mathematical problems in optimization. For example, in optics, Fermat's principle states that the path taken by a ray of light in propagating from one point to another is the path that minimizes the travel time. Another common problem occurs in geodesics where, given some surface and two points on it, we want to find the path of shortest distance joining those two points which lies entirely on the given surface.

Most equilibrium theories of mechanics involve finding a configuration of a system that minimizes its energy. For example, a heavy cable that hangs under gravity between two fixed pegs adopts the shape that, from among all possible shapes, minimizes the gravitational potential energy of the system. If we dip a (non-planar) wire loop into soapy water, the soap film that forms across the loop is the one that minimizes the surface energy (which under most circumstances equals minimizing the surface area of the soap film). Or, if we subject a straight beam to a compressive load, its deformed configuration is the shape which minimizes the total energy of the system; depending on the load, the energy minimizing configuration may be straight or bent (buckled).

In each of these problems we have a scalar-valued quantity F, such as energy or time, that depends on a function φ, such as the shape or path, and we want to find the function φ that minimizes the quantity F of interest. Note that the scalar-valued function F is defined on a set of functions. One refers to F as a functional and writes F{φ}.

As a specific example, consider the so-called Brachistochrone Problem. We are given two

points (0, 0) and (1, h), with h > 0, in the x, y-plane that are to be joined by a smooth wire. A bead is released from rest from the point (0, 0) and slides along the wire due to gravity. For what shape of wire is the time of travel from (0, 0) to (1, h) least?

Figure 7.1: A curve y = φ(x) joining (0, 0) to (1, h) along which a bead slides under gravity.

In order to formulate this problem, let y = φ(x), 0 ≤ x ≤ 1, describe a generic curve joining (0, 0) and (1, h). We are to find the curve, i.e. the function φ(x), which makes the travel time T a minimum. Let s(t) denote the distance traveled by the bead along the wire at time t, so that v(t) = ds/dt is its corresponding speed. The travel time of the bead is

T = ∫ ds/v,

where the integral is taken along the entire path.

Since we are to minimize T by varying φ, it is natural to first rewrite the formula for T in a form that explicitly displays its dependency on φ. Note first, by elementary calculus, that the arc length ds is related to dx by

ds = √(dx² + dy²) = √(1 + (dy/dx)²) dx = √(1 + (φ′)²) dx,

and so we can write

T = ∫₀¹ √(1 + (φ′)²) / v dx.

Next, we wish to express the speed v in terms of φ. If (x(t), y(t)) denote the coordinates of the bead at time t, the conservation of energy tells us that the sum of the potential and kinetic energies does not vary with time:

−mg φ(x(t)) + (1/2) m v²(t) = 0,

where the right-hand side is the total energy at the initial instant. Solving this for v gives

v = √(2gφ).

Finally, substituting this back into the formula for the travel time gives

T{φ} = ∫₀¹ √( (1 + (φ′)²) / (2gφ) ) dx.   (7.1)

Given a curve characterized by y = φ(x), this formula gives the corresponding travel time for the bead.

This minimization takes place over a set of functions φ. Since we are only interested in curves that pass through the points (0, 0) and (1, h), we must require that φ(0) = 0, φ(1) = h. Moreover, for analytical reasons we only consider curves that are continuous and have a continuous slope, i.e. φ ∈ C¹[0, 1]. Thus the set A of admissible functions (or "test functions") that we wish to consider is

A = { φ(·) | φ : [0, 1] → R, φ ∈ C¹[0, 1], φ(0) = 0, φ(1) = h }.

Our task is to minimize T{φ} over the set A, i.e. to find, from among all such curves, the one that minimizes T{φ}.

Remark: One can consider various variants of the Brachistochrone Problem. For example, the length of the curve joining the two points might be prescribed, in which case the minimization is to be carried out subject to the constraint that the length is given. Or perhaps the position of the left-hand end might be prescribed as above, but the right-hand end of the wire might be allowed to lie anywhere on the vertical line through x = 1. Or, there might be some prohibited region of the x, y-plane through which the path is disallowed from passing. And so on.

Remark: Since the shortest distance between two points is given by the straight line that joins them, it is natural to wonder whether a straight line is also the curve that gives the minimum travel time. To investigate this, consider (a) a straight line, and (b) a circular arc, joining (0, 0) to (1, h). Use (7.1) to calculate the travel time for each of these paths and show that the straight line is not the path that gives the least travel time.
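The comparison requested in the Remark is easily done numerically. The sketch below assumes scipy and numpy, and, for brevity, replaces the circular arc by the hypothetical steep-start path φ(x) = √x (any path that drops steeply near x = 0 makes the point); the integrable singularity of the integrand at x = 0 is handled by the adaptive quadrature.

```python
# Travel times (7.1) for two paths from (0,0) to (1,1), with g = 9.81, h = 1.
import numpy as np
from scipy.integrate import quad

g, h = 9.81, 1.0

def travel_time(phi, dphi):
    integrand = lambda x: np.sqrt((1.0 + dphi(x)**2) / (2.0 * g * phi(x)))
    value, _ = quad(integrand, 0.0, 1.0, limit=200)
    return value

T_line = travel_time(lambda x: h * x, lambda x: h)                       # straight line
T_steep = travel_time(lambda x: np.sqrt(x), lambda x: 0.5 / np.sqrt(x))  # phi = sqrt(x)
print(T_line, T_steep)   # roughly 0.64 s versus 0.59 s: the line is not fastest
```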

In summary, in the simplest problem in the calculus of variations we are required to find a function φ(x) ∈ C¹[0, 1] that minimizes a functional F{φ} of the form

F{φ} = ∫₀¹ f(x, φ, φ′) dx   (7.2)

over an admissible set of test functions A. The test functions (or admissible functions) φ are subject to certain conditions, including smoothness requirements, possibly (but not necessarily) boundary conditions at both ends x = 0, 1, and possibly (but not necessarily) side constraints of various forms. Other types of problems will also be encountered in what follows.

7.2  Brief review of calculus.

Perhaps it is useful to begin by reviewing the familiar question of minimization in calculus. Consider a subset A of n-dimensional space Rⁿ and let F(x) = F(x₁, x₂, ..., xₙ) be a real-valued function defined on A. We say that x₀ ∈ A is a minimizer of F if¹

F(x) ≥ F(x₀) for all x ∈ A.   (7.3)

Sometimes we are only interested in finding a "local minimizer", i.e. a point x₀ that minimizes F relative to all x that are "close" to x₀. In order to speak of such a notion we must have a measure of "closeness". Thus suppose that the vector space Rⁿ is Euclidean so that a norm is defined on Rⁿ. Then we say that x₀ is a local minimizer of F if F(x) ≥ F(x₀) for all x in a neighborhood of x₀, i.e. if

F(x) ≥ F(x₀) for all x such that |x − x₀| < r, for some r > 0.

Define the function F̂(ε) for −ε₀ < ε < ε₀ by

F̂(ε) = F(x₀ + εn),   (7.4)

where n is a fixed vector and ε₀ is small enough to ensure that x₀ + εn ∈ A for all ε ∈ (−ε₀, ε₀). Since F(x₀ + εn) ≥ F(x₀), it follows that

F̂(ε) ≥ F̂(0).   (7.5)

In the presence of sufficient smoothness we can write

F̂(ε) − F̂(0) = F̂′(0) ε + (1/2) F̂″(0) ε² + O(ε³).   (7.6)

Thus if x₀ is to be a minimizer of F it is necessary that

F̂′(0) = 0,   F̂″(0) ≥ 0.   (7.7)

¹ A maximizer of F is a minimizer of −F, so we don't need to address maximizing separately from minimizing.

In the present setting of calculus one finds that F̂′(0) = ∇F(x₀) · n and F̂″(0) = (∇∇F(x₀)) n · n. It is customary to use the following notation and terminology: we set

δF(x₀, n) = F̂′(0),   (7.8)

which is called the first variation of F, and similarly set

δ²F(x₀, n) = F̂″(0),   (7.9)

which is called the second variation of F. At an interior local minimizer x₀, we know from (7.7) that one necessarily must have

δF(x₀, n) = 0 and δ²F(x₀, n) ≥ 0 for all unit vectors n.   (7.10)

Here the vector field ∇F is the gradient of F and the tensor field ∇∇F is the gradient of ∇F. Therefore (7.10) is equivalent to the requirements that

∇F(x₀) · n = 0 and (∇∇F(x₀)) n · n ≥ 0 for all unit vectors n,   (7.11)

or equivalently

Σᵢ (∂F/∂xᵢ)|_{x=x₀} nᵢ = 0 and Σᵢ Σⱼ (∂²F/∂xᵢ∂xⱼ)|_{x=x₀} nᵢ nⱼ ≥ 0,   (7.12)

whence we must have ∇F(x₀) = o and the Hessian ∇∇F(x₀) must be positive semi-definite.

Remark: It is worth recalling that a function need not have a minimizer. For example, the function F₁(x) = x defined on A₁ = (−∞, ∞) is unbounded as x → ±∞. Another example is given by the function F₂(x) = x defined on A₂ = (−1, 1): while the value of F₂ can get as close as one wishes to −1, it cannot actually achieve the value −1, since there is no x ∈ A₂ at which F₂(x) = −1 (note that −1 ∉ A₂). Finally, consider the function F₃(x) defined on A₃ = [−1, 1] where F₃(x) = 1 for −1 ≤ x ≤ 0 and F₃(x) = x for 0 < x ≤ 1; the value of F₃ can get as close as one wishes to 0 but cannot achieve it, since F₃(0) = 1. In order for a minimizer to exist, A must be compact (i.e. bounded and closed) and F continuous: in the first example A₁ was unbounded; in the second, A₂ was bounded but open; and in the third, A₃ was bounded and closed but the function was discontinuous on A₃. It can be shown that if A is compact and if F is continuous on A, then F assumes both maximum and minimum values on A.

7.3  The basic idea: necessary conditions for a minimum: δF = 0, δ²F ≥ 0.

In the calculus of variations, we are typically given a functional F defined on a function space A, where F : A → R, and we are asked to find a function φ₀ ∈ A that minimizes F over A, i.e. to find φ₀ ∈ A for which

F{φ} ≥ F{φ₀} for all φ ∈ A.

Most often, we will be looking for a local (or relative) minimizer, i.e. for a function φ₀ that minimizes F relative to all "nearby functions". This requires that we select a norm so that the distance between two functions can be quantified. For a function φ in the set of functions that are continuous on an interval [x₁, x₂], i.e. for φ ∈ C[x₁, x₂], one can define a norm by

||φ||₀ = max_{x₁ ≤ x ≤ x₂} |φ(x)|.

For a function φ in the set of functions that are continuous and have continuous first derivatives on [x₁, x₂], i.e. for φ ∈ C¹[x₁, x₂], one can define a norm by

||φ||₁ = max_{x₁ ≤ x ≤ x₂} |φ(x)| + max_{x₁ ≤ x ≤ x₂} |φ′(x)|.

(Of course the norm ||φ||₀ can also be used on C¹[x₁, x₂].)

Figure 7.2: Two functions φ₁ and φ₂ that are "close" in the sense of the norm ||·||₀ but not in the sense of the norm ||·||₁.

in this Chapter we will be examining weak local extrema. . A NECESSARY CONDITION FOR AN EXTREMUM 129 When seeking a local minimizer of a functional F we might say we want to find φ0 for which F {φ} ≥ F {φo } for all admissible φ such that ||φ − φ0 ||0 < r for some r > 0. η} = F (0) respectively. −ε0 < ε < ε0 . Define a function F (ε) by ˆ F (ε) = F {φ0 + εη}.15) These are necessary conditions on a minimizer φo . This allows one to eliminate η leading to a condition (or conditions) that only involves the minimizer φ0 . ˆ Therefore ε = 0 minimizes F (ε). ε) = φ0 (x) + ε η(x) (7. On the other hand. we must have φ0 + εη ∈ A ˆ for each ε ∈ (−ε0 . In order to determine φ0 we consider the one-parameter family of admissible functions φ(x. The approach for finding such extrema of a functional is essentially the same as that used in the more familiar case of calculus reviewed in the preceding sub-section.13) that are close to φ0 . then it is necessary that δF {φ0 . (7. Unless explicitly stated otherwise. the necessary condition δF {φo . η} = F (0) and δ 2 F {φ0 . ε0 ).14) ˆ ˆ Since φ0 minimizes F it follows that F {φ0 + εη} ≥ F {φ0 } or equivalently F (ε) ≥ F (0). (7. η} ≥ 0. In any specific problem. Such a local minimizer is called a weak minimizer. such as those in the subsequent sections. We cannot go further in general. In this case the minimizer is being compared with all functions whose values and whose first derivatives are close to those of φ0 for all x1 ≤ x ≤ x2 . when seeking a local minimizer we might say we want to find φ0 for which F {φ} ≥ F {φo } for all admissible φ such that ||φ − φ0 ||1 < r for some r > 0. A strong minimizer is automatically a weak minimizer. The first and second variations of F are defined by ˆ ˆ δF {φ0 . η} = 0. η} = 0 can be further simplified by exploiting the fact that it must hold for all admissible η. Consider a functional F {φ} defined on a function space A and suppose that φo ∈ A minimizes F .7. δ 2 F {φ0 . In this case the minimizer φ0 is being compared with all admissible functions φ whose values are close to those of φ0 for all x1 ≤ x ≤ x2 . and so if φ0 minimizes F . Since φ is to be admissible.3. Such a local minimizer is called a strong minimizer. here ε is a real variable in the range −ε0 < ε < ε0 and η(x) is a once continuously differentiable function.

φ (x)) dx. for every φ ∈ A. Throughout these notes we will consider functions η that are independent of ε and so. (7. y y = φ0(x) + εη(x) (1. CALCULUS OF VARIATIONS Remark: Note that when η is independent of ε the functions φ0 (x) and φ0 (x) + εη(x). z) be a given function. Consider the following class of problems: let A be the set of all continuously differentiable functions φ(x) defined for 0 ≤ x ≤ 1 with φ(0) = a. y.130 CHAPTER 7. 1].1 The basic problem. φ(1) = b: A = φ(·) φ : [0. we will be restricting attention exclusively to weak minimizers. y. . Euler equation. defined and smooth for all real x. z. φ ∈ C 1 [0.3: The minimizer φ0 and a neighboring function φ0 + εη. φ(1) = b . (0 a) y = φ0(x) 0 x Figure 7. are close to each other for small ε. Define a functional F {φ}. 7. as noted previously. 7. b) (1 (0.4.4 Application of the necessary condition δF = 0 to the basic problem. On the other hand the functions φ0 (x) and φ0 (x)+ε sin(x/ε) are close to each other but their derivatives are not close to each other.17) We wish to find a function φ ∈ A which minimizes F {φ}. Euler equation. 1] → R. and their derivatives. φ(x). by 1 F {φ} = f (x.16) Let f (x. 0 (7. φ(0) = a.

(7.21). see Figure 7. ε) = a and φ(1. φ0 )η + (x. Since φ must be admissible we need φ0 + ε η ∈ A for each ε. φ0 + εη. Therefore we must have φ(0. so that F {φ} ≥ F {φ0 } for all φ ∈ A. φ0 + εη ) η + (x.22) .18) ˆ Pick any function η(x) with the property (7. Our goal is to find φ0 and so we must eliminate η from (7.21) into a convenient form and exploit the fact that (7. To do this we rearrange the terms in (7. In order to determine φ0 we consider the one parameter family of admissible functions φ(x. (7.19) We know from the analysis of the preceding section that a necessary condition for φ0 to minimize F is that ˆ δF {φo . and so (7. In order to do this we proceed as follows: Integrating the second term in (7.7.17). φ0 )η ∂y ∂z dx = 0. φ0 + εη.21) reduces to 1 0 ∂f d − ∂y dx ∂f ∂z η dx = 0. 1].21) must hold for all functions η that satisfy (7. ∂z ∂z x=0 ∂z 0 0 dx However by (7. Define the function F (ε) = F {φ0 + εη} so that ˆ F (ε) = F {φ0 + εη} = 1 f (x. η} = F (0) = 1 0 ∂f ∂f (x.18) we have η(0) = η(1) = 0 and therefore the first term on the right-hand side drops out. φ0 . we find F (ε) from (7.18). ε) = φ0 (x) + ε η(x) where ε is a real variable in the range −ε0 < ε < ε0 and η(x) is a once continuously differentiable function on [0.4.20) ˆ On using the chain-rule. APPLICATION OF NECESSARY CONDITION δF = 0 131 Suppose that φ0 (x) ∈ A is a minimizer of F .19) to be ˆ F (ε) = 0 1 ∂f ∂f (x.21) Thus far we have simply repeated the general analysis of the preceding section in the context of the particular functional (7.21) by parts gives x=1 1 1 ∂f ∂f d ∂f η dx = η − η dx . (7. φ0 + εη ) dx. φ0 + εη.20) leads to ˆ δF {φo . ε) = b which in turn requires that η(0) = η(1) = 0. (7.18) and fix it. φ0 + εη )η ∂y ∂z dx. η} = F (0) = 0.3. φ0 . Thus (7. 0 (7.

φ0 (1) = b.132 CHAPTER 7. we shall drop the subscript “0” from the minimizing function φ0 .22) must vanish and therefore obtain the differential equation d ∂f ∂f (x. Then. φ ) = 2gφ . 1] and suppose that 1 p(x)n(x)dx = 0 0 for all continuous functions n(x) with n(0) = n(1) = 0. Consider the Brachistochrone Problem formulated in the first example of Section 7. we shall write the Euler equation (7. moreover. Lemma: The following is a basic result from calculus: Let p(x) be a continuous function on [0.25) dx ∂φ ∂φ where.1. Therefore (7. φ ) = 0.25). φ. Here we have 1 + (φ )2 f (x. φ ) − (x. φ0 ) = 0 for 0 ≤ x ≤ 1.2 An example.4. In view of this Lemma we conclude that the integrand of (7. φ0 ) − (x.23) as d ∂f ∂f (x.17). φ. which together with the boundary conditions φ0 (0) = a.23) is referred to as the Euler equation (sometimes referred to as the Euler-Lagrange equation) associated with the functional (7.24) (7. φ0 . CALCULUS OF VARIATIONS Though we have viewed η as fixed up to this point. The differential equation (7.23) provides the mathematical problem governing the minimizer φ0 (x). we recognize that the above derivation is valid for all once continuously differentiable functions η(x) which have η(0) = η(1) = 0. in carrying out the partial differentiation in (7. φ. p(x) = 0 for 0 ≤ x ≤ 1.22) must hold for all such functions. The Brachistochrone Problem. (7. (7. dx ∂z ∂y This is a differential equation for φ0 . φ0 . 7. φ and φ as if they were independent variables. Notation: In order to avoid the (precise though) cumbersome notation above. one treats x.

φ.27) The minimizer φ(x) therefore must satisfy the boundary-value problem consisting of the second-order (nonlinear) ordinary differential equation (7.1]. φ(x).27). 2 θ1 < θ < θ2 . The rest of this sub-section has nothing to do with the calculus of variations.28) . (7.26). (7. We can write the differential equation as φ φ(1 + (φ )2 ) d dφ φ φ(1 + (φ )2 ) + 1 = 0 2φ2 which can be immediately integrated to give 1 φ(x) = 2 2 (φ (x)) c − φ(x) where c is a constant of integration that is to be determined. (7. φ and φ as if they are independent variables and differentiating the function f (x. φ (x))dx = 0 0 1 + [φ (x)]2 dx 2gφ(x) over the class of functions φ(x) that are continuous and have continuous first derivatives on [0. and satisfy the boundary conditions φ(0) = 0. θ1 < θ < θ2 . APPLICATION OF NECESSARY CONDITION δF = 0 and we wish to find the function φ0 (x) that minimizes 1 1 133 F {φ} = f (x. and to this end we adopt the substitution φ= c2 (1 − cos θ) = c2 sin2 (θ/2).26) and the boundary conditions (7. 2g 2(φ)3/2 ∂f = ∂φ φ 2gφ(1 + (φ )2 ) . φ ) gives: ∂f = ∂φ 1 + (φ )2 1 . x = x(θ). and therefore the Euler equation (7. 2(φ)3/2 0 < x < 1. φ = φ(θ).27). (7. φ(1) = h.29) (7. It is simply concerned with the solving the boundary value problem (7.23) specializes to d dx φ (φ)(1 + (φ )2 ) − 1 + (φ )2 = 0. It is most convenient to find the path of fastest descent in parametric form.4. φ(1) = h.26) with associated boundary conditions φ(0) = 0. Treating x.7.

32). if we define the function p(θ) by p(θ) = θ − sin θ . gives us θ1 = 0 and c1 = 0. φ = 2 Once this pair of equations is solved for c and θ2 then (7.33) We now turn to the boundary conditions. We thus have  c2  x = (θ − sin θ). then.   2 0 ≤ θ ≤ θ2 . (7. The requirement φ(x) = 0 at x = 0.134 Differentiating this with respect to x gives CHAPTER 7. dp cos θ/2 = (tan θ/2 − θ/2) > 0 for 0 < θ < 2π.29). 1 − cos θ 0 < θ < 2π.   2 (7. this leads to φ (x) = dx c2 = (1 − cos θ) dθ 2 which integrates to give x= c2 (θ − sin θ) + c1 . (7.32) by the second. dθ sin3 θ/2 . 2 θ1 < θ < θ2 .31)  c2   (1 − cos θ). p → ∞ as θ → 2π−. first.29) and (7. (7. together with (7. together with (7. We now address the solvability of (7.30) The remaining boundary condition φ(x) = h at x = 1 gives the following two equations for finding the two constants θ2 and c:  c2  1 = (θ2 − sin θ2 ). CALCULUS OF VARIATIONS c2 sin θ θ (x) 2 so that.32)  c2  h = (1 − cos θ2 ).31) provides the solution of the problem. by dividing the first equation in (7.28) and (7.34) One can readily verify that the function p(θ) has the properties p → 0 as θ → 0+. (7.30). we see that θ2 is a root of the equation p(θ2 ) = h−1 .  2 To this end.

33) versus θ. see Figure 7. 2π).32)1 .4. given any h > 0. the equation p(θ2 ) = h−1 can be solved for a unique value of θ2 ∈ (0.5 shows that the curve (7. the parameter θ having the significance of being the angle of rolling. y(θ)) θ A θ=π A R x(θ) = P P − AP sin θ = R(θ − sin θ)   y(θ) = AP − AP cos θ = R(1 − cos θ)  (1 y P Figure 7. the path of minimum descent is given by the curve defined in (7. Note that given any h > 0 the equation h−1 = p(θ) has a unique root θ = θ2 ∈ (0. y = y(θ) is generated by rolling a circle along the x-axis as shown.4. Figure 7.5: A cycloid x = x(θ). P P' x R P A (x(θ).7. The value of c is then given by (7.32)1 respectively. APPLICATION OF NECESSARY CONDITION δF = 0 p(θ) 135 h−1 θ2 2π θ Figure 7. . 2π). Therefore it follows that as θ goes from 0 to 2π. Thus in summary. the function p(θ) increases monotonically from 0 to ∞.34) and (7.31) is a cycloid – the path traversed by a point on the rim of a wheel that rolls without slipping.4: A graph of the function p(θ) defined in (7.31) with the values of θ2 and c given by (7. Therefore.

∂φ ∂φ = 1 (7. we mean δF = F {φ + εη} − F {φ} = ε 1 d F {φ + εη} dε η dx = 0 ε=0 = ε 0 1 ∂f ∂φ η+ ∂f ∂φ dx 1 (7. by δφ we mean δφ = (φ + εη) − (φ) = εη. it follows that δφ(0) = δφ(1) = 0.40) δf dx. by δf we mean δf = f (x. then by δH we mean2 δH = H(φ + εη) − H(φ) up to linear terms.39) and by δF . 2 . = 0 ∂f ∂f δφ + δφ ∂φ ∂φ We refer to δφ(x) as an admissible variation. ε=0 (7. φ. First.4. one usually uses the following formal procedure. When η(0) = η(1) = 0. Note the following minor change in notation: what we call δH here is what we previously would have called ε δH. φ + εη ) − f (x. φ ) εη ∂φ ∂φ ∂f ∂f = δφ + δφ . CALCULUS OF VARIATIONS 7.3 A Formalism for Deriving the Euler Equation In order to expedite the steps involved in deriving the Euler equation.37) (7. φ + εη. φ ) ∂f ∂f (x.35) dH{φ + εη} dε . we adopt the following notation: if H{φ} is any quantity that depends on φ.136 CHAPTER 7. φ. φ ) εη + (x.36) (7. or δ 0 f dx. φ. that is. by δφ we mean δφ = (φ + εη ) − (φ ) = εη = (δφ) .38) (7. δH = ε For example.

φ. 4 Note that the variation δ does not operate on x since it is the function φ that is being varied not the independent variable x.5. (7.40) that 1 1 137 δF = δ 0 f dx = 0 δf dx. η etc.1 Generalizations. Given the functional F . φ ) dx 0 If ever in doubt about a particular step during a calculation. . Observe from (7. δφ} = 0 for all admissible variations δφ. (7. GENERALIZATIONS.5. φ. always go back to the meaning of the symbols δφ. integrating the second term by parts. Generalization: Free end-point. and using the boundary conditions and the arbitrariness of an admissible variation δφ(x) to derive the Euler equation.41) Finally observe that the necessary condition for a minimum that we wrote down previously can be written as δF {φ. etc. Consider the following modified problem: suppose that we want to find the function φ(x) from among all once continuously differentiable functions that makes the functional 1 F {φ} = 3 f (x. a necessary condition for an extremum of F is δF = 0 and so our task is to calculate δF : 1 1 δF = δ 0 f dx = 0 δf dx.7. Natural boundary conditions. or revert to using ε. δf = fφ δφ + fφ δφ and not δf = fx δx + fφ δφ + fφ δφ .5 7. So in particular. let us now repeat our previous derivation of the Euler equation using this new notation3 . From here on we can proceed as before by setting δF = 0. We refer to δF as the first variation of the functional F. φ ). Since f = f (x. 7. this in turn leads to4 1 δF = 0 ∂f ∂φ δφ + ∂f ∂φ δφ dx.42) For purposes of illustration.

50) ∂φ .47) and keep (7.138 CHAPTER 7. return to (7. 0 (7. Since δφ(0) and δφ(1) are both arbitrary (and not necessarily zero). (7.44) We begin by calculating the first variation of F : 1 1 1 δF = δ 0 f dx = 0 δf dx = 0 ∂f ∂φ δφ + ∂f ∂φ δφ dx (7. So the set of admissible functions A is A = φ(·)| φ : [0.48) This is the same Euler equation as before. equation (7. CALCULUS OF VARIATIONS a minimum. The functional F {φ} is defined for all φ ∈ A by 1 F {φ} = f (x. We see that we must have ∂f ∂φ δφ(1) − ∂f ∂φ δφ(0) = 0 x=0 (7. we must have 1 0 ∂f d − ∂φ dx ∂f ∂φ δφ dx + ∂f δφ ∂φ 1 = 0 0 (7. 1] → R.47) for all admissible variations δφ(x).47) does not automatically drop out now because δφ(0) and δφ(1) do not have to vanish.45) Integrating the last term by parts yields 1 δF = 0 d ∂f − ∂φ dx ∂f ∂φ δφ dx + ∂f δφ ∂φ 1 .48) in mind. First restrict attention to all variations δφ with the additional property δφ(0) = δφ(1) = 0. φ ) dx.47) must necessarily hold for all such variations δφ. 1] (7. ∂φ (7. φ(1) = b.46) Since δF = 0 at an extremum.49) requires that ∂f = 0 at x = 0 and x = 1. Next. Note that we do not restrict attention here to those functions that satisfy φ(0) = a. 0 (7. (7. φ. Note that the boundary term in (7.4.43) Note that the class of admissible functions A is much larger than before.49) x=1 for all admissible variations δφ. The boundary terms now drop out and by the Lemma in Section 7.1 it follows that d ∂f dx ∂φ − ∂f = 0 for 0 < x < 1. φ ∈ C 1 [0.

139 Equation (7. the natural boundary coundition is = 0 at x = 1. Our task is to minimize the travel time of the bead T {φ} over the set A2 . and the same boundary condition φ(0) = 0 at the left hand end. 2gφ and so by (7. 1 + (φ )2 .6: Curve joining (0. see Figure 7. GENERALIZATIONS. 0) and ends somewhere on the vertical through x = 1. . The minimizer must satisfy the same Euler equation (7.6.7. Example: Reconsider the Brachistochrone Problem analyzed previously but now suppose that we want to find the shape of the wire that commences from (0. φ ∈ C 1 [0. they are referred to as natural boundary conditions in contrast to boundary conditions that are given as part of a problem statement.50).5. The only difference between this and the first Brachistochrone Problem is that here the set of admissible functions is A2 = φ(·) φ : [0.50) provides the boundary conditions to be satisfied by the extremizing function φ(x). φ(0) = 0 . 0) to an arbitrary point on the vertical line through x = 1. 1]. note that there is no restriction on φ at x = 1. φ. To find the natural boundary condition at the other end. 0 1 x g y = φ(x) y Figure 7. These boundary conditions were determined as part of the extremization. φ ) = Differentiating this gives ∂f = ∂φ φ 2gφ(1 + (φ )2 ) φ 2gφ(1 + (φ )2 ) .26) as in the first problem. 1] → R. recall that f (x.

5. φ. Consider an elastic beam of length L and bending stiffness EI.140 which simplifies to CHAPTER 7.7: The neutral axis of a beam in reference (straight) and deformed (curved) states. 7. CALCULUS OF VARIATIONS φ (1) = 0. φ ) dx. The bold lines represent a cross-section of the beam in the reference and deformed states. In the classical Bernoulli-Euler theory of beams. δφ Energy per unit length = 1 M φ = 1 EI(u )2 er 2 2 M M u M = EIφ φ=u y x u Figure 7. Let u(x). both loads act in the −y-direction. Example: The Bernoulli-Euler Beam. φ . (7. 0 ≤ x ≤ L. 0 We begin with the formulation and analysis of a specific example and then turn to some theory. The beam carries a distributed load p(x) along its length and a concentrated force F at the right hand end x = L.2 Generalization: Higher derivatives. Since the beam is clamped at the left hand end this means that u(x) is any (smooth enough) function that satisfies the geometric boundary conditions u(0) = 0. be a geometrically admissible deflection of the beam. The functional F {φ} considered above involved a function φ and its first derivative φ . which is clamped at its left hand end.51) . One can consider functionals that involve higher derivatives of φ. . cross-sections remain perpendicular to the neutral axis. u (0) = 0. for example 1 F {φ} = f (x.

By using (7. Note that the integrand of the functional involves the higher derivative term u . L] → R. The given boundary conditions (7. Therefore the preceding equation simplifies to L (EI u 0 − p) δu dx − [EIu (L) + F ] δu(L) + EIu (L)δu (L) = 0.52).53) where the last two terms represent the potential energy of the distributed and concentrated loading respectively.51)1 describes the geometric condition that the beam is clamped at x = 0 and therefore cannot deflect at that point. u (0) = 0 are given and so we expect to derive additional natural boundary conditions at the right hand end x = L. (7. u ∈ C 4 [0. (7.53) this can be written explicitly as L 0 L EI u δu dx − 0 p δu dx − F δu(L) = 0. 141 the boundary condition (7. The actual deflection of the beam minimizes the potential energy (7. note that only two boundary conditions u(0) = 0. We proceed in the usual way by calculating the first variation δΦ and setting it equal to zero: δΦ = 0. L]. . GENERALIZATIONS.5. u(0) = 0. The set of admissible test functions that we consider is A = u(·) | u : [0.7.53) over the set (7.51)2 describes the geometric condition that the beam is clamped at x = 0 and therefore cannot rotate at the left end.52) Φ{u} = 0 1 EI(u (x))2 dx − 2 L 0 p(x)u(x) dx − F u(L). u (0) = 0 which consists of all “geometrically possible configurations”. δu (0) = 0. Twice integrating the first term by parts leads to L 0 L EI u δu dx − 0 p δu dx − F δu(L) − EIu δu L 0 L + EIu δu 0 = 0.51) require that an admissible variation δu must obey δu(0) = 0. Therefore the total potential energy of the system is L . From elasticity theory we know that the elastic energy associated with a deformed configuration of the beam is (1/2)EI(u )2 per unit length. the boundary condition (7. the negative sign in front of these terms arises because the loads act in the −y-direction while u is the deflection in the +y-direction. In addition.

54)3 describes the mechanical condition that the beam is free to rotate (and therefore has zero “bending moment”) at the right hand end.54)2 describes the mechanical condition that the beam carries a concentrated force F at the right hand end.54)2. as before.3 Generalization: Multiple functions. One can consider functionals that involve multiple functions.142 CHAPTER 7. 7. φ. and the natural boundary condition (7. u. Show that the function φ that extremizes F over the set A must satisfy the Euler equation d ∂f − ∂φ dx ∂f ∂φ d2 + 2 dx ∂f ∂φ =0 for 0 < x < 1 where. φ and φ as if they are independent variables in f (x.      EI u (L) = 0. φ . φ )dx 0 defined on the set of admissible functions A consisting of functions φ that are defined and four times continuously differentiable on [0. the partial derivatives ∂f /∂φ. v} = f (x. 1] and that satisfy the four boundary conditions φ(0) = φ0 . v. φ . for example a functional 1 F {u.54) EI u (L) + F = 0.      (7. φ(1) = φ1 . CALCULUS OF VARIATIONS Since this must hold for all admissible variations δu(x). φ (0) = φ0 . it follows in the usual way that the extremizing function u(x) must obey  EI u (x) − p(x) = 0 for 0 < x < L. The natural boundary condition (7. φ (1) = φ1 . v ) dx 0 . The functional F {φ} considered above involved a single function φ and its derivatives.51) and the natural boundary conditions (7. φ ).5.54)1 . the prescribed boundary conditions (7. u . Thus the extremizer u(x) obeys the fourth order linear ordinary differential equation (7. Exercise: Consider the functional 1 F {φ} = f (x. ∂f /∂φ and ∂f /∂φ are calculated by treating φ. φ.3 .

φ u -φ u y x Figure 7. Thus a deformed state of the beam is characterized by two E and G are the Young’s modulus and shear modulus of the material. In the theory considered here. The angle between the neutral axis and the horizontal. and carries a concentrated force F at the end x = L. In that theory. the thin line and the bold line would coincide. Consider a beam of length L. while I and A are the second moment of cross-section and the area of the cross-section respectively. The decrease in the angle between the cross-section and the neutral axis is therefore u − φ. is u . also in the −y-direction.5. a cross-section of the beam is not constrained to remain perpendicular to the neutral axis. where cross-sections remain perpendicular to the neutral axis. 143 that involves two functions u(x) and v(x). In the simplest model of a beam – the so-called Bernoulli-Euler model – the deformed state of the beam is completely defined by the deflection u(x) of the centerline (the neutral axis) of the beam. one that accounts for shear deformations. The beam is clamped at x = 0. The bold lines represent a cross-section of the beam in the reference and deformed states. shear deformations are neglected and therefore a crosssection of the beam remains perpendicular to the neutral axis even in the deformed state. We begin with the formulation and analysis of a specific example and then turn to some theory.7. so that in the classical Bernoulli-Euler theory of beams.8: The neutral axis of a beam in reference (straight) and deformed (curved) states. Example: The Timoshenko Beam. Here we discuss a more general theory of beams. The thin line is perpendicular to the deformed neutral axis. 5 . The angle between the vertical and the bold line if φ. it carries a distributed load p(x) along its length which acts in the −y-direction. bending stiffness5 EI and shear stiffness GA. GENERALIZATIONS. which equals the angle between the perpendicular to the neutral axis (the thin line) and the vertical dashed line.

and the second. see Figure 7.144 CHAPTER 7. Since the shear strain is defined as the change in angle between two fibers that are initially at right angles to each other. one does not neglect shear deformations and so u(x) and φ(x) are (geometrically) independent functions. the so-called Timoshenko beam theory. u(x). The basic equations of elasticity tell us that the moment-curvature relation for bending is M (x) = EIφ (x) . the shear strain in the present situation is γ(x) = u (x) − φ(x). CALCULUS OF VARIATIONS fields: one.55) Note that the zero rotation boundary condition is φ(0) = 0 and not u (0) = 0.9: Basic constitutive relationships for a beam. In the more accurate beam theory discussed here. characterizes the rotation of the cross-section at x. (7. δφ S γ M M S S = GAγ GA M = EIφ Figure 7. φ(x) = u (x) since for small angles. φ(0) = 0. characterizes the deflection of the centerline of the beam at a location x. φ(x).) The fact that the left hand end is clamped implies that the point x = 0 cannot deflect and that the cross-section at x = 0 cannot rotate. Thus we have the geometric boundary conditions u(0) = 0. (In the BernoulliEuler model. the rotation equals the slope.8. Observe that in the Bernoulli-Euler theory γ(x) = 0.

u ∈ C 2 ([0. GENERALIZATIONS. (7. L]). (1/2)M φ . . φ} over the admissible set A of test functions where take A = {u(·). and that the associated elastic energy per unit length of the beam. In engineering practice.55). l] → R. Note that all admissible functions are required to satisfy the geometric boundary conditions (7. we know that the element is not in a state of simple shear.5. the shear stress must vary with y such that it vanishes at the top and bottom. this is taken into account approximately by replacing GA by κGA where the heuristic parameter κ ≈ 0. φ} = 0 1 1 EI(φ (x))2 + GA(u (x) − φ(x))2 2 2 L dx − 0 pu(x)dx − F u(L).56) where the last two terms in this expression represent the potential energy of the distributed and concentrated loading respectively (and the negative signs arise because u is the deflection in the +y-direction while the loadings p and F are applied in the −y-direction). u(0) = 0.8 − 0. φ(0) = 0}. EI and GA may vary along the length of the beam and therefore might be functions of x. The displacement and rotation fields u(x) and φ(x) associated with an equilibrium configuration of the beam minimizes the potential energy Φ{u.9 are free of shear traction. we know from elasticity that the shear force-shear strain relation for a beam is6 S(x) = GAγ(x) and that the associated elastic energy per unit length of the beam. φ : [0. is 1 EI(φ (x))2 . We allow for the possibility that p. φ(·) u : [0. l] → R. Instead. To find a minimizer of Φ we calculate its first variation which from (7.56) is L L δΦ = 0 6 EIφ δφ + GA(u − φ)(δu − δφ) dx − 0 p δu dx − F δu(L). L]). is 1 GA(γ(x))2 . 2 145 Similarly.7. (1/2)Sγ. 2 The total potential energy of the system is thus L Φ = Φ{u.9. Since the top and bottom faces of the differential element shown in Figure 7. φ ∈ C 2 ([0.

 dx and the natural boundary conditions EIφ (L) = 0. . . . . CALCULUS OF VARIATIONS Integrating the terms involving δu and δφ by parts gives δΦ = + − EIφ δφ d EIφ δφ dx 0 0 dx L L d GA(u − φ)δu − GA(u − φ) δu dx − 0 0 dx L L − L 0 GA(u − φ)δφ dx L 0 p δu dx − F δu(L). .55). we have δΦ = 0 for all admissible variations. . must hold. . φ2 (x). zn . . z2 .57) that the field equations  d  EIφ + GA(u − φ) = 0. . . y2 . an equilibrium configuration of the beam is described by the deflection u(x) and rotation φ(x) that satisfy the differential equations (7. . yn . . [Remark: Can you recover the Bernoulli-Euler theory from the Timoshenko theory in the limit as the shear rigidity GA → ∞?] Exercise: Consider a smooth function f (x. it follows from (7.58) and the boundary conditions (7. Thus in summary. φn (x) be n once-continuously differentiable functions on [0. dx (7.58)  d  GA(u − φ) + p = 0. and collecting the like terms in the preceding equation leads to δΦ = EIφ (L) δφ(L) + GA u (L) − φ(L) − F δu(L) L − − 0 L 0 d EIφ + GA(u − φ) δφ(x) dx dx d GA(u − φ) + p δu(x) dx.57) At a minimizer. . zn ) defined for all x. . Let φ1 (x).   dx (7. z1 . (7. . y2 . . 1] GA u (L) − φ(L) = F (7. z1 .59). δφ(x) are arbitrary on 0 < x < L and since δu(L) and δφ(L) are also arbitrary. . 0 < x < L. .146 CHAPTER 7. y1 .59) . yn . y1 . . . Finally on using the facts that an admissible variation must satisfy δu(0) = 0 and δφ(0) = 0. Since the variations δu(x). 0 < x < L.

y) = 0 x. φ1 . .4 Generalization: End point of extremal lying on a curve. note that the abscissa of the point at which a neighboring curve defined by y = φ(x) + δφ(x) intersects the curve G = 0 is not xR but xR + δxR . . . . φ. a) and end at some point on the curve G(x. . . . . φn (x) that extremize F must necessarily satisfy the n Euler equations d ∂f ∂f =0 − dx ∂φi ∂φi for 0 < x < 1. . F {φ} = 0 xR f (x. φ2 . At the minimizer. . (i = 1. . φ2 .10: Curve joining (0. Moreover. y) = 0. 2. Suppose that φ(x) ∈ A is a minimizer of F . GENERALIZATIONS. φi (1) = bi . see Figure 7. y-plane that commence from a given point (0.10. φ2 . φ1 . y) = 0. . Show that the functions φ1 (x). . . (7.5. We wish to minimize a functional F {φ} over this set of functions. n). y = φ(x) 0 xR xR + δxR x Figure 7. Consider the set A of all functions that describe curves in the x. . . . a) to an arbitrary point on the given curve G(x. φ2 (x).5. Let x = xR be the abscissa of the point at which the curve y = φ(x) intersects the curve G(x. φ )dx . Let F be the functional defined by 1 147 F {φ1 .61) 7. . .7. . φn } = f (x.60) on the set of all such admissible functions. Observe that xR is not known a priori and is to be determined along with φ. φn . (0 a) G(x. with φi (0) = ai . y) = 0. φn ) dx 0 (7. y y = φ(x) + δφ(x) (0.

φ (xR ) δxR + fφ δφ xR 0 xR + 0 fφ − d fφ δφ dx = 0 dx xR which. φ(xR ).62) must hold for all such variations δφ(x).62) First limit attention to the subset of all test functions that terminate at the same point (xR . Thus setting the first variation δF equal to zero gives xR f xR . φ(xR ). After integrating the last term by parts we get f xR . which equals the linearized form of F {φ + δφ} − F {φ}.63) We now return to arbitrary admissible test functions. φ (xR ) δxR + 0 fφ δφ + fφ δφ dx. φ. Substituting (7. φ ) + fφ (x. Since this specialized version of equation (7. φ )δφ dx which in turn reduces to xR δF = f xR . φ (xR ) δφ(xR ) + 0 fφ − d fφ δφ dx = 0.148 and at a neighboring test function CHAPTER 7. φ(xR )) as the minimizer. φ. φ )dx + 0 fφ (x. φ(xR ).62) vanish. φ(xR ). φ. φ + δφ. φ. φ )δφ + fφ (x. φ. This leads to xR +δxR xR δF = xR f (x. dx 0 ≤ x ≤ xR . this leads to the Euler equation fφ − d fφ = 0. φ(xR ). In this case δxR = 0 and δφ(xR ) = 0 and so the first two terms in (7. φ )δφ + fφ (x. φ(xR ). on using the fact that δφ(0) = 0.62) gives f xR . 0 Therefore on calculating the first variation δF . φ (xR ) δxR + 0 fφ δφ + fφ δφ dx = 0. φ )dx 0 where we have set fφ = ∂f /∂φ and fφ = ∂f /∂φ . φ. φ (xR ) δφ(xR ) = 0 (7. φ(xR ). reduces to f xR . we find xR +δxR xR δF = 0 f (x. dx (7. φ + δφ )dx. CALCULUS OF VARIATIONS xR +δxR F {φ + δφ} = f (x. φ. φ (xR ) δxR + fφ xR . φ (xR ) δxR + fφ xR .64) .63) into (7. φ )δφ dx − f (x. (7.

= G(xR . where we have set Gx = ∂G/∂x and Gy = ∂G/∂x. 149 which must hold for all admissible δxR and δφ(xR ). y) = 0 implies that G(xR . φ(xR )) δxR + Gy (xR . Note that linearization gives G(xR + δxR . we must satisfy the condition dI = (∂I/∂ε1 )dε1 + (∂I/∂ε2 )dε2 = 0. This implies that7  f xR . (7. Under these circumstances. φ(xR )) = 0 thus leads to the following relation between the variations δxR and δφ(xR ): Gx (xR . φ (xR ) − λGy (xR . φ(xR )) + φ (xR )Gy (xR . φ(xR )) + φ (xR )Gy (xR . ∂I/∂ε2 = λ∂J/∂ε2 where the Lagrange multiplier λ is unknown and is also to be determined.65). φ(xR )) = 0.  (7. It is important to observe that since admissible test curves must end on the curve G = 0. 7 G(xR + δxR . φ(xR )) = 0. ε2 ) = 0 then we must respect the side condition dJ = (∂J/∂ε1 )dε1 + (∂J/∂ε2 )dε2 = 0. φ(xR ))δxR + Gy (xR . φ(xR + δxR ) + δφ(xR + δxR )) = 0. one finds that that one must require the conditions ∂I/∂ε1 = λ∂J/∂ε1 . But if this minimization is carried out subject to the side constraint J(ε1 .64) does not hold for all δxR and δφ(xR ). . for some constant λ (referred to as a Lagrange multiplier). ε2 ). φ(xR ). φ(xR )) + Gx (xR . φ(xR )) = 0. The requirement that the minimizing curve and the neighboring test curve terminate on the curve G(x.66)  fφ xR .65)  It may be helpful to recall from calculus that if we are to minimize a function I(ε1 .7. φ(xR )) δφ(xR ) = 0. φ (xR ) − φ (xR )fφ xR . Thus (7. GENERALIZATIONS. φ(xR )) = 0. φ (xR ) − λ Gx (xR . We can use the second equation above to simplify the first equation which then leads to the pair of equations  f xR . Thus (7.5. φ(xR )) δφ(xR ).64) must hold for all δxR and δφ(xR ) that satisfy (7. φ(xR + δxR ) + δφ(xR + δxR )) − G(xR . φ(xR )) φ (xR )δxR + δφ(xR ) . the quantities δxR and δφ(xR ) are not independent of each other. φ(xR ) + φ (xR )δxR + δφ(xR )) = G(xR . φ(xR )) δxR + Gy (xR . φ (xR ) − λGy (xR . φ(xR ). . φ(xR ). Setting δG = G(xR + δxR .  fφ xR . The constrain equation J = 0 provides the extra condition required for this purpose. φ (xR ) − λGx (xR . φ(xR )) = 0. φ(xR )) + Gx (xR . only for those that are consistent with this geometric requirement. φ(xR )) + φ (xR )Gy (xR . φ(xR ). φ(xR + δxR ) + δφ(xR + δxR )) = G(xR + δxR . φ(xR ).

Finally the condition G(xR .63) on 0 ≤ x ≤ xR .150 CHAPTER 7.66) at the right hand end give 1 1 + φ 2 (xR ) = λc1 . φ. y) = c1 x+c2 y +c3 and that we are to find the curve of shortest length that commences from (0. Thus f (x. the boundary condition φ = a at x = 0. φ ) = φ 1 + (φ )2 . while the boundary conditions (7.66) at x = xR .) Example: Suppose that G(x. (7. φ ) = 1 + (φ )2 . for 0 ≤ x ≤ xR . and the equation G(xR . Solving the preceding equations leads to the minimizer φ(x) = (c2 /c1 )x + a for 0 ≤ x ≤ − c1 (ac2 + c3 ) . c2 + c2 1 2 . φ ) = 0 and fφ (x. φ (xR ) 1 + φ 2 (xR ) = λc2 .67) On using (7.67). φ. (Note that the presence of the additional unknown λ is compensated for by the imposition of the additional condition G(xR . Since ds = dx2 + dy 2 = 1 + (φ )2 dx we are to minimize the functional xR F = 0 1 + (φ )2 dx. φ. φ(xR )) = 0.66) provides two natural boundary conditions at the right hand end x = xR . fφ (x. the two natural boundary conditions (7. φ(xR )) = 0. φ(xR )) = 0 requires that c1 xR + c2 φ(xR ) + c3 = 0.63) can be integrated immediately to give φ (x) = constant The boundary condition at the left hand end is φ(0) = a. the Euler equation (7. a) and ends on G = 0. CALCULUS OF VARIATIONS Equation (7. In summary: an extremal φ(x) must satisfy the differential equations (7.

6.71) only requires that ˆ ˆ ∂G ∂F =λ . φ1 (x). φ2 } = subject to the constraint 1 f (x. φ2 (x)) dx 0 (7.e. ε2 ) = 0. φ2 (x). ∂ε1 ∂ε1 ˆ ˆ ∂F ∂G =λ .72) ˆ If we didn’t have the constraint. φ2 (x) that minimizes 1 F {φ1 .7. φ2 ) = 0 f (x. δφ2 (x). (7. dε2 that are consistent with the constraint. ε2 ) = G(φ1 (x) + ε1 η1 (x).72) holds.6. ε2 ) = 0. (7. ∂ε1 ∂ε2 (7. suppose that the pair φ1 (x). ∂ε2 ∂ε2 (7. φ1 (x). CONSTRAINED MINIMIZATION 151 7. However when the constraint equation (7. then (7. Accordingly. (7. ε2 ) = F {φ1 (x) + ε1 η1 (x). φ2 (x) + ε2 η2 (x) we have ˆ F (ε1 .1 Constrained Minimization Integral constraints.69) For reasons of clarity we shall return to the more detailed approach where we introduce parameters ε1 . A necessary condition for this is that ˆ dF (ε1 .71) ∂ε1 ∂ε2 for all dε1 .71) would imply the usual requirements ∂ F /∂ε1 = ˆ ∂ F /∂ε2 = 0. φ2 (x) + ε2 η2 (x)) = 0. φ2 (x) + ε2 η2 (x)}. ε2 ) with respect to the variables ˆ ε1 and ε2 . φ2 (x)) dx = 0. rather than following the formal approach using variations δφ1 (x). ε2 and functions η1 (x). ˆ G(ε1 . φ1 (x). Because of the constraint. i. φ2 (x). dε1 and dε2 cannot be varied independently. η1 (x). that ˆ ˆ ∂F ∂F ˆ dF = dε1 + dε2 = 0.6 7.70) If we begin by keeping η1 and η2 fixed. (7. this is a classical minimization problem for a function ˆ of two variables: we are to minimize the function F (ε1 . Instead the constraint requires that they be related by ˆ dG = ˆ ˆ ∂G ∂G dε1 + dε2 = 0. φ1 (x). φ2 (x) is the minimizer. By evaluating F and G on a family of neighboring admissible functions φ1 (x) + ε1 η1 (x).68) G(φ1 .73) . Consider a problem of the following general form: find admissible functions φ1 (x). subject to the constraint G(ε1 .

λ is known as a Lagrange multiplier. The cable has a given length L. y y = φ(x) g −H 0 H x Let y = φ(x). since ds = dx2 + dy 2 = 1 + (φ )2 dx. Example: Consider a heavy inextensible cable of mass per unit length m that hangs under gravity. or equivalently ∂ ˆ ˆ (F − λG) = 0. The potential energy of the cable is determined by integrating mgφ with respect to the arc length s along the cable which. The presence of the additional unknown parameter λ is balanced by the availability of the constraint equation G = 0.152 for some constant λ. Therefore we are asked to find a function φ(x) with φ(−H) = φ(H).76) must equal L. its length L H {φ} = ds = 0 −H 1 + (φ )2 dx (7. is given by L H V {φ} = mgφ ds = mg 0 −H φ 1 + (φ )2 dx. describe an admissible shape of the cable. Proceeding from here on leads to the Euler equation associated with F − λG. ∂ε1 CHAPTER 7. that minimizes V {φ} subject to the constraint {φ} = L.74) ˆ ˆ ˆ ˆ Therefore minimizing F subject to the constraint G = 0 is equivalent to minimizing F − λG without regard to the constraint. ∂ε2 (7. a distance 2H apart. The two ends of the cable are held at the same vertical height. According to the theory developed . We know from physics that the cable adopts a shape that minimizes the potential energy. We are asked to determine this shape. CALCULUS OF VARIATIONS ∂ ˆ ˆ (F − λG) = 0. (7. −H ≤ x ≤ H.75) Since the cable is inextensible.

−H < x < H. 2H. Thus for each µ > 0 the function sinh z − µz vanishes at some unique positive value of z.) One can show that as z increases from 0 to ∞. For example we could take the x-axis to pass through the two pegs in which case φ(±H) = 0 and then λ = −c cosh(H/c) and so φ(x) = c cosh(x/c) − cosh(H/c) .78) Substituting (7.78) into the constraint condition {φ} = L with L = 2c sinh(H/c). −H < x < H.79). CONSTRAINED MINIMIZATION 153 above. L. Then we must solve µz = sinh z where µ > 1 is a constant. This can be integrated once to yield (φ − λ)2 − 1 c2 where c is a constant of integration.6. Thus φ(x) = c cosh(x/c) + λ. given by (7. where the constant λ is a Lagrange multiplier. For symmetry. Integrating this again leads to φ = φ(x) = c cosh[(x + d)/c] + λ. where d is a second constant of integration. decreases monotonically to some finite negative value at some z = z∗ > 0. and then increases monotonically to ∞. Consequently (7.77) The constant λ in (7. Calculating the first variation of V −λmg . leads to the Euler equation d dx (φ − λ) φ 1 + (φ )2 − 1 + (φ )2 = 0. All that remains is to examine the solvability of (7.76) yields (7.79) Thus in summary. To this end set z = H/c and µ = L/(2H).78) gives the equation describing the shape of the cable. (The requirement µ > 1 follows from the physical necessity that the distance between the pegs.77) is simply a reference height.7.79) has a unique root c > 0. be less than the length of the rope. −H < x < H.79) can be solved for c. if equation (7. the function sinh z − µz starts from the value 0. we must have φ (0) = 0 and therefore d = 0. then (7. (7. . −H < x < H. this function must satisfy the Euler equation associated with the functional V {φ} − λ {φ} where the Lagrange multiplier λ is a constant. (7. The resulting boundary value problem together with the constraint = L yields the shape of the cable φ(x).

In this problem the Lagrange multiplier λ may be a function of x. One can show that a necessary condition is that the minimizer should satisfy the Euler equation associated with f −λg. φ2 . 2 3 Suppose that the wire can be described parametrically by x1 = φ1 (x3 ). beginning at rest from P. A bead slides along the wire under gravity. A smooth wire lies entirely on the conical surface and joins the points P and Q. Let P = (p1 . we are to find the one that gives the minimum travel time. φ1 . p3 ) and Q = (q1 . x3 > 0. x2 = φ2 (x3 ) for p3 ≤ x3 ≤ q3 .11: A curve that joins the points (p1 . x3 ) = x2 + x2 − R2 (x3 ) = 0. CALCULUS OF VARIATIONS 7. φ2 (x)) = 0 for 0 ≤ x ≤ 1.6. q3 > p3 . p2 . be two points on this surface. Example: Consider a conical surface characterized by g(x1 . x1 g x2 P Q x3 Figure 7. p2 . φ2 )dx 0 subject to the algebraic constraint g(x. q3 ). p3 ) to (q2 . q3 ) and lies on the conical surface x2 + 1 x2 − x2 tan2 α = 0. q2 . φ1 (x). 1 2 R(x3 ) = x3 tan α. q2 . (Not all of the permissible curves can be described this way and so by using . From among all such wires. φ2 (x) that minimizes 1 f (x. x2 .2 Algebraic constraints Now consider a problem of the following general type: find a pair of admissible functions φ1 (x). φ1 .154 CHAPTER 7.

The arc length ds along the path is given by ds = dx2 + dx2 + dx2 = 3 2 1 (φ1 )2 + (φ2 )2 + 1 dx3 . (7. x3 ) = 0. φ1 . T {φ1 .6. φi (q3 ) = qi .3 Differential constraints Now consider a problem of the following general type: find a pair of admissible functions φ1 (x). CONSTRAINED MINIMIZATION 155 this characterization we are limiting ourselves to a subset of all the permited curves. x2 .7. subject to the constraint g(φ1 (x3 ). φ2 (x) that minimizes 1 f (x. x3 ) = x2 + x2 − x2 tan2 α. φ2 (x3 ). φ2 ) = (φ1 )2 + (φ2 )2 + 1 2g(x3 − p3 ) and g(x1 . According to the theory developed the solution is given by solving the Euler equations associated with f − λ(x3 )g where f (x3 . 2 . x3 ) = 0. φi ∈ C 2 [p3 . or v= Therefore the travel time is q3 2g(x3 − p3 ). p3 ≤ x3 ≤ q3 . 7. q3 ] → R. φ1 . x3 ) = 0. φ2 )dx 0 . φ2 ) φi : [p3 . φ1 . p3 Our task is to minimize T {φ1 . φ2 (x3 ). φ2 } over the set of admissible functions A = (φ1 . i = 1. p3 ≤ x3 ≤ q3 .80) The travel time is found by integrating ds/v along the path.) Since the curve has to lie on the conical surface it is necessary that g(φ1 (x3 ). 1 The conservation of energy tells us that 2 mv 2 (t) − mgx3 (t) = −mgp3 . φ2 . φ2 } = (φ1 )2 + (φ2 )2 + 1 2g(x3 − p3 ) dx3 . φ2 . φi (p3 ) = pi . φ1 . q3 ]. x2 . 3 1 2 subject to the prescribed conditions at the ends and the constraint g(x1 .6.

CALCULUS OF VARIATIONS subject to the differential equation constraint g(x. (7. suppose that there does not exist a function h(x. In these problems. φ1 (x).82) where h = f − λg. φ2 (x)) such that g = dh/dx. λ(x) are determined from the three differential equations (7. − dx ∂φ1 ∂φ1 d ∂h ∂h =0 − dx ∂φ2 ∂φ2 for 0 < x < 1.83). for 0 ≤ x ≤ 1. φ1 . φ1 . (7. the Lagrange multiplier λ may be a function of x. (7. φ1 . (In dynamics. the minimizers satisfy the Euler equations ∂h d ∂h = 0. . On substituting for f and g.156 CHAPTER 7. φ2 )dx 0 over an admissible set of functions subject to the non-holonomic constraint g(x. φ1 . i.81) According to the theory above. φ2 (x)) = 0. φ1 (x). Example: Determine functions φ1 (x) and φ2 (x) that minimize 1 f (x. φ2 . φ1 )dx 0 over an admissible set of functions. Remark: Note by substituting the constraint into the integrand of the functional that we can equivalently pose this problem as one for determining the function φ1 (x) that minimizes 1 f (x. φ1 . φ1 (x). φ2 (x). dx ∂φ1 ∂φ1 d ∂f +λ=0 dx ∂φ2 for 0 < x < 1. φ2 (x). (7.e.) One can show that it is necessary that the minimizer satisfy the Euler equation associated with f − λg. such constraints are called nonholonomic. Suppose that the constraint is not integrable. these Euler equations reduce to d ∂f ∂f +λ − = 0.83) Thus the functions φ1 (x). for 0 ≤ x ≤ 1. φ1 . φ2 ) = φ2 − φ1 = 0.81).

and gives F {φ∗ } = 0. we know that it cannot belong to the class of admissible functions considered above. 1] and satisfy d φ(0) = φ(1) = 0.84) that F ≥ 0.84) that the integrand itself should vanish almost everywhere in [0. This is an extremizer of F {φ} over the class of admissible functions under consideration. φ ) = (φ )2 − 1 with respect to x over the interval [0. WEIRSTRASS-ERDMAN CORNER CONDITIONS 157 7. Since φ∗ ∈ A it must not satisfy / one or both of these two conditions. The problem statement requires that the boundary conditions must hold. Therefore if there is a function φ∗ of this type. The piecewise linear function   x for 0 ≤ x ≤ 1/2. has this property. 1].84) that the value of F at this particular function φ(x) = 0 is F = 1 Note from (7. Assuming that the class of admissible functions are those that are C 1 [0. is piecewise C 1 . Moreover φ∗ (x) satisfies the Euler equation except at x = 1/2. On enforcing the boundary conditions φ(0) = φ(1) = 0. φ∗ would be a minimizer.7. This requires that φ (x) = ±1 almost everywhere in [0. we would have found it from the preceding calculation. it follows from the nonnegative character of the integrand in (7. In the present case this specializes to 2[ (φ )2 −1](2φ ) = constant for 0 ≤ x ≤ 1. first consider the problem of minimizing the functional 1 F {φ} = over functions φ with φ(0) = φ(1) = 0. this gives φ(x) = 0 for 0 ≤ x ≤ 1. since if it did. 0 ((φ )2 − 1)2 dx (7. 1]. .7. 1]. In order to motivate the discussion to follow. If there is such a function φ∗ . 1]. Therefore it must be true that φ is not as smooth as C 1 [0. it does not belong to the set of functions A.84) This is apparently a problem of the classical type where in the present case we are 2 to minimize the integral of f (x. If there is a function φ∗ such that F {φ∗ } = 0. 1] and to vanish at the two ends x = 0 and x = 1. the minimizer must necessarily satisfy the Euler equation dx (∂f /∂φ ) − (∂f /∂φ) = 0. It is natural to wonder whether there is a function φ∗ (x) that gives F {φ∗ } = 0. The functions in A were required to be C 1 [0.7 Piecewise smooth minimizers. Weirstrass-Erdman corner conditions. φ∗ (x) = (7. It is readily seen from (7. It is continuous. which in turn gives φ (x) = constant for 0 ≤ x ≤ 1.85)  (1 − x) for 1/2 ≤ x ≤ 1. If so. φ.

y φ(x) + δφ(x) φ(x) + δφ(x) φ(x) φ(x) s 1 x Figure 7. φ )dx 0 over some suitable set of admissible functions. Thus the set of admissible functions is composed of all functions that are smooth on either side of x = s. 1] → R. 7. φ ∈ C[0. and suppose further that we know that the extremal φ(x) is continuous but has a discontinuity in its slope at x = s: i. and that satisfy the given boundary conditions φ(0) = φ0 . A specific example will be considered below. φ.1 Piecewise smooth minimizer with non-smoothness occuring at a prescribed location.7. and its first derivative is permitted to have a jump discontinuity at a given location x = s. φ(0) = φ0 .e. Observe that an admissible function is required to be continuous on [0. φ(1) = φ1 } . CALCULUS OF VARIATIONS But is it legitimate for us to consider piecewise smooth functions? If so are there are any restrictions that we must enforce? Physical problems involving discontinuities in certain physical fields or their derivatives often arise when. 1]. the problem concerns an interface separating two different mateirals. required to have a continuous first derivative on either side of x = s.12: Extremal φ(x) and a neighboring test function φ(x) + δφ(x) both with kinks at x = s. Suppose that we wish to extremize the functional 1 F {φ} = f (x. 1]). . φ (s−) = φ (s+) where φ (s±) denotes the limiting values of φ (s ± ε) as ε → 0. that are continuous at x = s. φ ∈ C 1 ([0.158 CHAPTER 7. φ(1) = φ1 : A = {φ(·) φ : [0. for example. s) ∪ (s. 1].

the integral now disappears. which by definition equals F {φ + δφ} − F {φ} upto terms linear in δφ. φ )dx + 0 s f (x. Since the resulting equation must hold for all variations δφ(s).12. if we limit attention to variations that are such that δφ(s) = 0. x=s+ However. we obtain s 1 (fφ δφ + fφ δφ ) dx + 0 s (fφ δφ + fφ δφ ) dx = 0. 1]). see Figure 7. φ )dx. Let δφ(x) be an admissible variation which means that the neighboring function φ(x) + δ(x) is also in A which means that it is C 1 on [0.7. it follows that we must have ∂f ∂φ x=s− = ∂f ∂φ x=s+ . φ + δφ )dx + s f (x. Second. In view of the lack of smoothness at x = s it is convenient to split the integral into two parts and write s 1 F {φ} = and F {φ + δφ} = 0 s f (x. this implies that the term within the brackets in the integrand must vanish at each of these x’s. 1] and (may) have a jump discontinuity in its first derivative at the location x = s. This implies that δφ(x) ∈ C([0. 1 f (x. when this is substituted back into the equation above it. Since δφ(x) can be chosen arbitrarily for all x ∈ (0. x=s+ First. 1).7. s) ∪ (s. φ + δφ. x = s. WEIRSTRASS-ERDMAN CORNER CONDITIONS 159 Suppose that F is extremized by a function φ(x) ∈ A and suppose that this extremal has a jump discontinuity in its first derivative at x = s. φ. φ. s) ∪ (s. and only the integral remains. φ + δφ. δφ(0) = δφ(1) = 0. this simplifies to 1 0 ∂f d − ∂φ dx ∂f ∂φ δφ(x) dx + ∂f ∂φ x=s− − ∂f ∂φ δφ(s) = 0. since δφ(0) = δφ(1) = 0. Upon calculating δF . This leads to the Euler equation ∂f d − ∂φ dx ∂f ∂φ =0 for 0 < x < 1. the second term in the equation above vanishes. and setting δF = 0. Integrating the terms involving δφ by parts leads to s 0 d (fφ ) δφ dx + fφ − dx 1 s d (fφ ) δφ dx + fφ − dx ∂f δφ ∂φ s− + x=0 ∂f δφ ∂φ 1 = 0. x = s. 1]). δφ(x) ∈ C 1 ([0. φ + δφ )dx.

160 CHAPTER 7. The matching condition shows that even though φ has a jump discontinuity at x = s. see Figure 7. We are asked to determine the path y = φ(x). CALCULUS OF VARIATIONS at x = s. n2(x.13. . Example: Consider a two-phase material that occupies all of x.   ∂φ x=s− ∂φ x=s+ y x.    dx ∂φ ∂φ        φ(0) = φ0 . In particular. x = s. This is a “matching condition” or “jump condition” that relates the solution on the left of x = s to the solution on its right. y-space.86)  φ(1) = φ1 . a ≤ x ≤ b. y) and n2 (x. y) b. B) in the right half-plane. the quantity ∂f /∂φ is continuous at this point. B) n1(x. −∞ < y < ∞ Figure 7.   (7. followed by a ray of light travelling from a point (a. The material occupying x < 0 is different to the material occupying x > 0 and so x = 0 is the interface between the two materials. A) a. In particular. suppose that the refractive indices of the materials occupying x < 0 and x > 0 are n1 (x. y) respectively. we are to determine the conditions at the point where the ray crosses the interface between the two media. x = 0. Thus in summary an extremal φ must obey the following boundary value problem:  ∂f d ∂f  − =0 for 0 < x < 1. θ+ θ− 0 x (a. y) x. (b. A) in the left half-plane to the point (b.         ∂f ∂f   = at x = s.13: Ray of light in a two-phase material.

n(x. y). a ray of light travelling between two given points follows the path that it can traverse in the shortest possible time. φ(b) = B}. since ds = 1 + (φ )2 dx can be written as b T {φ} = a 1 n(x. if θ is the angle made by the ray of light with the x-axis at some point along its path. φ(a) = A. φ(x)) 1 + (φ )2 dx. Note that this set of admissible functions allows the path followed by the light to have a kink at x = 0 even though the path is continuous. φ) c 1 + (φ )2 . . 0) ∪ (0. This is Snell’s well-known law of refraction. which. we know that light travels at a speed c/n(x. φ(x)) and θ(x) as x → 0±. b]. or n+ sin θ+ = n− sin θ− where n± and θ± are the limiting values of n(x. then tan θ = φ and so sin θ = φ / 1 + (φ )2 . φ ) dx a where f (x. φ ∈ C 1 ([a. φ. c Thus the problem at hand is to determine φ that minimizes the functional T {φ} over the set of admissible functions A = {φ(·) φ ∈ C[a. φ) ∂f = ∂φ c φ 1 + (φ )2 and so the matching condition at the kink at x = 0 requires that n c φ 1 + (φ )2 be continuous at x = 0. Observe that. φ ) = n(x. y) is the index of refraction at the point (x. Thus the transit time is determined by integrating n/c along the path followed by the light. b]). φ. Therefore the matching condition requires that n sin θ be continuous. The functional we are asked to minimize can be written in the standard form b T {φ} = Therefore f (x. y) where n(x.7. WEIRSTRASS-ERDMAN CORNER CONDITIONS 161 According to Fermat’s principle.7. Also.

. However in contrast to the preceding case. 1] except at x = s + δs where it has a jump discontinuity in its first derivative: δφ ∈ C[0. s + δs) ∪ C 1 (s + δs. φ(1) = b} Just as before. is continuous on [0. Note further that we have varied the function φ(x) and the location of the kink s. 1]. 1] ∪ C 1 [0. if there is discontinuity in the first derivative of φ at some location x = s. 1]. φ(0) = a. Consider a variation δφ(x) that vanishes at the two ends x = 0 and x = 1. (we shall say that φ has a “kink” at x = s).162 CHAPTER 7.7.14: Extremal φ(x) with a kink at x = s and a neighboring test function φ(x) + δφ(x) with kinks at x = s and s + δs. the location s is not known a priori and so is also to be determined. 1] → R. φ ) dx over the admissible set of functions 1 A = {φ(·) : φ : [0. CALCULUS OF VARIATIONS 7. y φ(x) + δφ(x) φ(x) + δφ(x) φ(x) φ(x) s s + δs 1 x Figure 7. Suppose that F is extremized by the function φ(x) and that it has a jump discontinuity in its first derivative at x = s. See Figure 7.14. φ. Suppose further that φ is C 1 on either side of x = s. 1]. φ ∈ C[0. is C 1 on [0. φ ∈ Cp [0. δφ(0) = δφ(1) = 0. Note that φ(x) + δφ(x) has kinks at both x = s and x = s + δs.2 Piecewise smooth minimizer with non-smoothness occuring at an unknown location Suppose again that we wish to extremize the functional 1 F (φ) = 0 f (x. the admissible functions are continuous and have a piecewise continuous first derivative. 1].

(7. φ )dx + 0 s f (x. equals F {φ + δφ} − F {φ} upto terms linear in δφ. ∂f ∂φ B = . by definition. φ + δφ. WEIRSTRASS-ERDMAN CORNER CONDITIONS 163 Since the extremal φ(x) has a kink at x = s it is convenient to split the integral and express F {φ} as s 1 F {φ} = f (x. s) ∪ (s. it follows in the usual way that A.89) The two matching conditions (or jump conditions) (7. ∂f ∂φ (7.87) ∂f ∂φ x=s− C = f −φ ∂f ∂φ x=s− − f −φ . 0 where A = ∂f d − ∂φ dx ∂f ∂φ ∂f ∂φ − .7. φ + δφ. B and C all must vanish. φ + δφ )dx + 0 1 s f (x. 1). φ. and the following two additional requirements at x = s: ∂f ∂φ f −φ ∂f ∂φ = ∂f ∂φ . it is convenient to express F {φ + δφ} by splitting the integral into three terms as follows: s s+δs F {φ + δφ} = + f (x.88) and (7. φ + δφ )dx f (x.88) is the same condition that was derived in the preceding subsection.89) are known as the Wierstrass-Erdmann corner conditions (the term “corner” referring to the “kink” in φ). Equation (7. φ + δφ )dx. s+δs We can now calculate the first variation δF which. .88) s− s+ s− = f −φ s+ . This leads to the usual Euler equation on (0. Similarly since the the neigboring function φ(x) + δ(x) has kinks at x = s and x = s + δs. leads after integrating by parts. φ + δφ. x=s+ By the arbitrariness of the variations above. φ )dx. x=s+ (7.7. Calculating δF in this way and setting the result equal to zero. to 1 A δφ(x)dx + B δφ(s) + C δs = 0. φ.

CALCULUS OF VARIATIONS Example: Find the extremals of the functional 4 4 F (φ) = 0 f (x. Thus 1 φo (x) = x 2 is a smooth extremal of F . On integrating this and using the boundary conditions φ(0) = 0.92) holds on the entire interval (0. φ(4) = 2. φ ) dx = 0 (φ − 1)2 (φ + 1)2 dx over the set of piecewise smooth functions subject to the end conditions φ(0) = 0. say. (Again. the value of s ∈ (0. 0 ≤ x ≤ 4. for 0 ≤ x ≤ 4.90) Consequently the Euler equation (at points of smoothness) is d d fφ − fφ = dx dx 4φ (φ )2 − 1 = 0.) The Euler equation (7. φ. φ(4) = 2. φ0 . ∂φ (7. (Such an extremal might not. exist. restrict attention to functions that have at most one point at which φ has a discontinuity.92) First.92) that φ = c = constant on (0. 4) where c = d. ∂f = 4φ (φ )2 − 1 . Thus   c for 0 < x < s. consider an extremal that is smooth everywhere.) In this case the Euler equation (7. Here f (x. Next consider a piecewise smooth extremizer of F which has a kink at some location x = s. 4) and so we conclude that φ (x) = constant for 0 ≤ x ≤ 4. we find that φ(x) = x/2. exist. (if c = d there would be no kink at x = s and we have already dealt with this case above).164 CHAPTER 7. of course. In order to compare this with what follows. ∂φ ∂f = 0. φ. s) and φ = d = constant on (s. such an extremal might not. φ ) = (φ )2 − 1 and therefore on differentiating f . it is helpful to call this. For simplicity. . is a smooth extremal. 4) is not known a priori and is to be determined. φ (x) =  d for s < x < 4.92) now holds on either side of x = s and so we find from (7.91) 2 (7. (7. of course.

d = 1.89). Therefore the Weirstrass-Erdmann corner conditions (7. and the two Weirstrass-Erdmann corner conditions (7. we must have φ(s−) = φ(s+) which requires that cs = d(s − 4) + 2 whence 2 − 4d . and ∂f f −φ =  ∂φ   −(c2 − 1)(1 + 3c2 ) for 0 < x < s. .88) and (7.  Corresponding to the former we find from (7. Integrating this. From (7.93). Keeping in mind that c = d and solving these equations leads to the two solutions: c = 1.   −x for 0 ≤ x ≤ 1. φ1 (x) =  −x + 6 for 3 ≤ x ≤ 4. ∂f =  ∂φ 4d(d2 − 1) for s < x < 4. and enforcing the boundary conditions φ(0) = 0.94) that s = 3.94) s= c−d Note that s would not exist if c = d. φ(x) = (7. (7. d = −1. which require respectively the continuity of ∂f /∂φ and f −φ ∂f /∂φ at x = s. −(d2 − 1)(1 + 3d2 ) for s < x < 4.91) and (7.93)  d(x − 4) + 2 for s ≤ x ≤ 4. separately on (0.89) provide us with the two equations for doing this.   4c(c2 − 1) for 0 < x < s. 4).93) there are two piecewise smooth extremals φ1 (x) and φ2 (x) of the assumed form:   x for 0 ≤ x ≤ 3. WEIRSTRASS-ERDMAN CORNER CONDITIONS 165 Since φ is required to be continuous. (7. φ(4) = 2.7.90). give us the pair of simultaneous equations   c(c2 − 1) = d(d2 − 1). All that remains is to find c and d. while the latter leads to s = 1. and c = −1. (7. leads to   cx for 0 ≤ x ≤ s.7.88). (c2 − 1)(1 + 3c2 ) = (d2 − 1)(1 + 3d2 ). φ2 (x) =  x − 2 for 1 ≤ x ≤ 4. Thus from (7. s) and (s.

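For a quick numerical check of this algebra, the following sketch (using sympy, an arbitrary choice of computer algebra tool) solves the two simultaneous equations for c and d, discards the kink-free roots c = d, and recovers the kink locations from (7.94):

```python
import sympy as sp

c, d = sp.symbols('c d', real=True)
# continuity of df/dphi' and of f - phi' df/dphi' across the kink:
p1 = sp.factor(c*(c**2 - 1) - d*(d**2 - 1))
p2 = sp.factor((c**2 - 1)*(1 + 3*c**2) - (d**2 - 1)*(1 + 3*d**2))
# both equations carry the trivial factor (c - d); a genuine kink needs c != d
q1 = sp.cancel(p1/(c - d))
q2 = sp.cancel(p2/(c - d))
for cc, dd in sp.solve([q1, q2], [c, d]):
    if cc != dd:
        print(cc, dd, (2 - 4*dd)/(cc - dd))   # -> (1, -1, 3) and (-1, 1, 1)
```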
7. φ.8 Generalization to higher dimensional space.15: Smooth extremal φ0 (x) and piecewise smooth extremals φ1 (x) and φ2 (x). consider the one-dimensional variational problem of minimizing a functional 1 F {φ} = f (x. and (b) F = 0 if and only if φ = ±1 everywhere (except at isolated points where φ may be undefined). φ ) dx 0 . Remark: By inspection of the given functional 4 F (φ) = 0 (φ )2 − 1 dx. Figure 7. φ . φ1 and φ2 . we find F {φo } = 9/4. φ1 and φ2 . In order to help motivate the way in which we will approach higher-dimensional problems (which will in fact be entirely parallel to the approach we took for one-dimensional problems) we begin with some preliminary observations. By evaluating the functional F at each of the extremals φ0 . First.15 shows graphs of φo . 2 it is clear that (a) F ≥ 0.166 CHAPTER 7. CALCULUS OF VARIATIONS y 3 2 1 φ1(x) φ1(x) φ0(x) 1 2 3 4 x φ2(x) φ2(x) Figure 7. F {φ1 } = F {φ2 } = 0. The extremals φ1 and φ2 have this property and therefore correspond to absolute minimizers of F .

on a set of suitably smooth functions with no prescribed boundary conditions at either end. The analogous two-dimensional problem would be to consider a set of suitably smooth functions φ(x, y) defined on a domain D of the x, y-plane and to minimize a given functional

    F{φ} = ∫_D f(x, y, φ, ∂φ/∂x, ∂φ/∂y, ∂²φ/∂x², ∂²φ/∂x∂y, ∂²φ/∂y²) dA

over this set of functions, with no boundary conditions prescribed anywhere on the boundary ∂D.

In deriving the Euler equation in the one-dimensional case our strategy was to exploit the fact that the variation δφ(x) was arbitrary in the interior 0 < x < 1 of the domain. This motivated us to express the integrand in the form of some quantity A (independent of any variations) multiplied by δφ(x); the arbitrariness of δφ then allowed us to conclude that A must vanish on the entire domain. We approach two-dimensional problems similarly: our strategy will be to exploit the fact that δφ(x, y) is arbitrary in the interior of D, and so we attempt to express the integrand as some quantity A that is independent of any variations multiplied by δφ.

Next, in the one-dimensional case we were able to exploit the fact that δφ and its derivative δφ′ are arbitrary at the boundary points x = 0 and x = 1, and this motivated us to express the boundary terms as some quantity B that is independent of any variations multiplied by δφ(0), another quantity C independent of any variations multiplied by δφ′(0), and so on. We approach two-dimensional problems similarly, and our strategy for the boundary terms is to exploit the fact that δφ and its normal derivative ∂(δφ)/∂n are arbitrary on the boundary ∂D. Thus the goal in our calculations will be to express the boundary terms as some quantity independent of any variations multiplied by δφ, another quantity independent of any variations multiplied by ∂(δφ)/∂n, and so on.

Thus in the two-dimensional case our strategy will be to take the first variation of F and carry out appropriate calculations that lead us to an equation of the form

    δF = ∫_D A δφ(x, y) dA + ∮_∂D B δφ(x, y) ds + ∮_∂D C ∂(δφ(x, y))/∂n ds = 0,      (7.95)

where A, B, C are independent of δφ and its derivatives, and the latter two integrals are on the boundary of the domain D. We then exploit the arbitrariness of δφ(x, y) on the interior of the domain of integration, and the arbitrariness of δφ and ∂(δφ)/∂n on the boundary ∂D, to conclude that the minimizer must satisfy the partial differential equation A = 0 for (x, y) ∈ D and the boundary conditions B = C = 0 on ∂D.

Next, recall that one of the steps involved in calculating the minimizer of a one-dimensional problem is integration by parts. This converts a term that is an integral over [0, 1] into terms

that are evaluated only at the boundary points x = 0 and x = 1. The analog of this in higher dimensions is carried out using the divergence theorem, which in two dimensions reads

    ∫_D ( ∂P/∂x + ∂Q/∂y ) dA = ∮_∂D ( P nx + Q ny ) ds,      (7.96)

and which expresses the left-hand side, an integral over D, in a form that only involves terms on the boundary. Here nx, ny are the components of the unit normal vector n on ∂D that points out of D. Note that in the special case where P = ∂χ/∂x and Q = ∂χ/∂y for some χ(x, y), the integrand of the right-hand side is ∂χ/∂n.

Remark: The derivative of a function φ(x, y) in a direction corresponding to a unit vector m is written as ∂φ/∂m and defined by ∂φ/∂m = ∇φ · m = (∂φ/∂x) mx + (∂φ/∂y) my, where mx and my are the components of m in the x- and y-directions. On the boundary ∂D of a two-dimensional domain D we frequently need to calculate the derivative of φ in directions n and s that are normal and tangential to ∂D. In vector form we have

    ∇φ = (∂φ/∂x) i + (∂φ/∂y) j = (∂φ/∂n) n + (∂φ/∂s) s,

where i and j are unit vectors in the x- and y-directions respectively.

Recall also that a function φ(x, y) and its tangential derivative ∂φ/∂s along the boundary ∂D are not independent of each other in the following sense: if one knows the values of φ along ∂D one can differentiate φ along the boundary to get ∂φ/∂s, and conversely, if one knows the values of ∂φ/∂s along ∂D one can integrate it along the boundary to find φ to within a constant. This is why equation (7.95) does not involve a term of the form E ∂(δφ)/∂s integrated along the boundary ∂D: such a term can be rewritten as the integral of −(∂E/∂s) δφ along the boundary.

Example 1: A stretched membrane. A stretched flexible membrane occupies a regular region D of the x, y-plane. The membrane is fixed along its entire edge ∂D, and so

    u = 0 for (x, y) ∈ ∂D.      (7.97)

A pressure p(x, y) is applied normal to the surface of the membrane in the negative z-direction. Let u(x, y) be the resulting deflection of the membrane in the z-direction. One can show that the potential energy Φ associated with any deflection u that is compatible with the given boundary condition is

    Φ{u} = ∫_D ½ |∇u|² dA − ∫_D p u dA,

where we have taken the relevant stiffness of the membrane to be unity.

Figure 7.16: A stretched elastic membrane whose mid-plane occupies a region D of the x, y-plane and whose boundary ∂D is fixed. The membrane surface is subjected to a pressure loading p(x, y) that acts in the negative z-direction.

The actual deflection of the membrane is the function that minimizes the potential energy over the set of test functions

    A = { u | u ∈ C²(D), u = 0 for (x, y) ∈ ∂D }.

Here we are using the notation that a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate; for example u,x = ∂u/∂x and u,xy = ∂²u/∂x∂y. Since

    Φ{u} = ∫_D ½ ( u,x u,x + u,y u,y ) dA − ∫_D p u dA,

its first variation is

    δΦ = ∫_D ( u,x δu,x + u,y δu,y ) dA − ∫_D p δu dA,

where an admissible variation δu(x, y) vanishes on ∂D. In order to make use of the divergence theorem and convert the area integral into a boundary integral, we must write the integrand so that it involves terms of the form (·),x + (·),y ; see (7.96). This suggests that we rewrite the preceding equation as

    δΦ = ∫_D [ (u,x δu),x + (u,y δu),y ] dA − ∫_D ( u,xx + u,yy ) δu dA − ∫_D p δu dA,

or equivalently as

    δΦ = ∫_D [ (u,x δu),x + (u,y δu),y ] dA − ∫_D ( u,xx + u,yy + p ) δu dA.

By using the divergence theorem on the first integral we get

    δΦ = ∮_∂D ( u,x nx + u,y ny ) δu ds − ∫_D ( u,xx + u,yy + p ) δu dA,      (7.98)

where n is a unit outward normal along ∂D. We can write this equivalently as

    δΦ = ∮_∂D (∂u/∂n) δu ds − ∫_D ( ∇²u + p ) δu dA.

Since the variation δu vanishes on ∂D, the first integral drops out and we are left with

    δΦ = −∫_D ( ∇²u + p ) δu dA,      (7.99)

which must vanish for all admissible variations δu(x, y). Thus the minimizer satisfies the partial differential equation

    ∇²u + p = 0 for (x, y) ∈ D,

which is the Euler equation in this case; it is to be solved subject to the prescribed boundary condition (7.97). Note that if some part of the boundary of D had not been fixed, then we would not have δu = 0 on that part, in which case (7.99) would yield the natural boundary condition ∂u/∂n = 0 on that segment.

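The Euler equation ∇²u + p = 0 is a Poisson equation and is easy to solve numerically. The following is a minimal finite-difference sketch under illustrative assumptions that are not part of the example above: a unit square domain, uniform pressure p = 1 and unit membrane stiffness.

```python
import numpy as np

n, h = 65, 1.0/64          # grid and spacing on the unit square
p = 1.0                    # uniform pressure (illustrative)
u = np.zeros((n, n))       # deflection; boundary rows/columns stay zero

for _ in range(8000):      # Jacobi-type sweeps of the 5-point stencil
    u[1:-1, 1:-1] = 0.25*(u[2:, 1:-1] + u[:-2, 1:-1] +
                          u[1:-1, 2:] + u[1:-1, :-2] + h*h*p)

print(u.max())             # ~0.0737, the classical value for this problem
```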
Example 2: The Kirchhoff theory of plates. We consider the bending of a thin plate according to the so-called Kirchhoff theory. The mid-plane of the plate occupies a domain of the x, y-plane, and w(x, y) denotes the deflection (displacement) of a point on the mid-plane in the z-direction. Solely for purposes of mathematical simplicity we shall assume that the Poisson ratio ν of the material is zero; a discussion of the case ν ≠ 0 can be found in many books, for example in "Energy and Finite Element Methods in Structural Mechanics" by I.H. Shames and C.L. Dym. When ν = 0 the plate bending stiffness is D = Et³/12, where E is the Young's modulus of the material and t is the thickness of the plate.

The basic constitutive relationships of elastic plate theory relate the internal moments Mx, My, Mxy, Myx (see Figure 7.17) to the second derivatives of the displacement field w,xx, w,xy, w,yy by

    Mx = −D w,xx,  My = −D w,yy,  Mxy = Myx = −D w,xy,      (7.100)

where a comma followed by a subscript denotes partial differentiation with respect to the corresponding coordinate and D is the plate bending stiffness. The shear forces in the plate are given by

    Vx = −D (∇²w),x,  Vy = −D (∇²w),y.      (7.101)

Figure 7.17: A differential element dx × dy × t of a thin plate. A bold arrow represents a force, and thus Vx and Vy are (shear) forces. A bold arrow with two arrow heads represents a moment whose sense is given by the right-hand rule; thus Mxy and Myx are (twisting) moments while Mx and My are (bending) moments.

Figure 7.18: Left: A thin elastic plate whose mid-plane occupies a region Ω of the x, y-plane. The segment ∂Ω1 of the boundary is clamped while the remainder ∂Ω2 is free of loading. Right: A rectangular a × b plate with a load-free edge at its right-hand side x = a, 0 ≤ y ≤ b; on that edge nx = 1, ny = 0, sx = 0, sy = 1.

The elastic energy per unit area of the plate, obtained from the products ½ (moment × corresponding curvature), is

    (D/2) ( w,xx² + 2 w,xy² + w,yy² ).      (7.102)

It is worth noting the following puzzling question. Consider the rectangular plate shown in the right-hand diagram of Figure 7.18. Based on Figure 7.17 we know that there is a bending moment Mx, a twisting moment Mxy, and a shear force Vx acting on any surface x = constant in the plate. Therefore, since the right-hand edge x = a is free of loading, one would expect to have the three conditions Mx = Mxy = Vx = 0 along that boundary. However, we will find that the differential equation to be solved in the interior of the plate requires (and can only accommodate) two boundary conditions at any point on the edge. The question then arises as to what the correct boundary conditions on this edge should be. Our variational approach will give us precisely two natural boundary conditions on this edge; they will involve Mx, Mxy and Vx, but will not require that each of them vanish individually.

Consider a thin elastic plate whose mid-plane occupies a domain Ω of the x, y-plane, as shown in the left-hand diagram of Figure 7.18. A part of the plate boundary denoted by ∂Ω1 is clamped while the remainder ∂Ω2 is free of any external loading. A normal loading p(x, y) is applied on the flat face of the plate in the −z-direction. Thus if w(x, y) denotes the deflection of the plate in the z-direction we have the geometric boundary conditions

    w = ∂w/∂n = 0 for (x, y) ∈ ∂Ω1.      (7.103)

The total potential energy of the system is

    Φ{w} = ∫_Ω [ (D/2) ( w,xx² + 2 w,xy² + w,yy² ) − p w ] dA,      (7.104)

where the first group of terms represents the elastic energy in the plate and the last term represents the potential energy of the pressure loading (the negative sign arising from the fact that p acts in the minus z-direction while w is the deflection in the positive z-direction). This functional Φ is defined on the set of all kinematically admissible deflection fields, which is the set of all suitably smooth functions w(x, y) that satisfy the geometric requirements (7.103). The actual deflection field is the one that minimizes the potential energy Φ over this set.

We now determine the Euler equation and natural boundary conditions associated with (7.104) by calculating the first variation of Φ{w} and setting it equal to zero:

    ∫_Ω [ D ( w,xx δw,xx + 2 w,xy δw,xy + w,yy δw,yy ) − p δw ] dA = 0.      (7.105)

To simplify this we begin by rearranging the terms into a form that will allow us to use the divergence theorem, thereby converting part of the area integral on Ω into a boundary integral on ∂Ω.

In order to use the divergence theorem we must write the integrand so that it involves terms of the form (·),x + (·),y ; see (7.96). Accordingly we rewrite (7.105) as

    0 = ∫_Ω [ w,xx δw,xx + 2 w,xy δw,xy + w,yy δw,yy − (p/D) δw ] dA
      = ∫_Ω [ (w,xx δw,x),x + (w,xy δw,x),y + (w,xy δw,y),x + (w,yy δw,y),y
              − (w,xxx + w,xyy) δw,x − (w,xxy + w,yyy) δw,y − (p/D) δw ] dA
      = ∮_∂Ω [ (w,xx nx + w,xy ny) δw,x + (w,xy nx + w,yy ny) δw,y ] ds
        − ∫_Ω [ (w,xxx + w,xyy) δw,x + (w,xxy + w,yyy) δw,y + (p/D) δw ] dA
      = ∮_∂Ω I1 ds − ∫_Ω I2 dA,      (7.106)

where we have used the divergence theorem (7.96) in going from the second equation to the third, and in the last step we have let I1 and I2 denote the integrands of the boundary and area integrals respectively.

To simplify the area integral in (7.106) we again rearrange the terms in I2 into a form that will allow us to use the divergence theorem. Thus

    ∫_Ω I2 dA = ∫_Ω [ (w,xxx + w,xyy) δw,x + (w,xxy + w,yyy) δw,y + (p/D) δw ] dA
      = ∫_Ω [ ((w,xxx + w,xyy) δw),x + ((w,xxy + w,yyy) δw),y − ( ∇⁴w − p/D ) δw ] dA
      = ∮_∂Ω [ (w,xxx + w,xyy) nx + (w,xxy + w,yyy) ny ] δw ds − ∫_Ω ( ∇⁴w − p/D ) δw dA,      (7.107)

where we have again used the divergence theorem (7.96), and where we have set

    ∇⁴w = ∇²(∇²w) = w,xxxx + 2 w,xxyy + w,yyyy.      (7.108)

Thus, setting

    P1 = (w,xxx + w,xyy) nx + (w,xxy + w,yyy) ny,  P2 = ∇⁴w − p/D,

equation (7.107) reads ∫_Ω I2 dA = ∮_∂Ω P1 δw ds − ∫_Ω P2 δw dA.

Next we simplify the boundary term I1 in (7.106) by converting the derivatives of the variation with respect to x and y into derivatives with respect to the normal and tangential coordinates n and s. To do this we use the fact that

    ∇(δw) = δw,x i + δw,y j = δw,n n + δw,s s,

from which it follows that

    δw,x = δw,n nx + δw,s sx,  δw,y = δw,n ny + δw,s sy.

Thus from (7.106),

    ∮_∂Ω I1 ds = ∮_∂Ω [ (w,xx nx + w,xy ny) δw,x + (w,xy nx + w,yy ny) δw,y ] ds
      = ∮_∂Ω [ ( w,xx nx² + 2 w,xy nx ny + w,yy ny² ) δw,n
              + ( w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy ) δw,s ] ds.      (7.109)

To further simplify this, let I3 denote the second term in the integrand of (7.109), so that

    ∮_∂Ω I3 ds = ∮_∂Ω ( w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy ) δw,s ds.      (7.110)

If a field f(x, y) varies smoothly along ∂Ω, and if the curve ∂Ω itself is smooth, then

    ∮_∂Ω ∂f/∂s ds = 0,      (7.111)

since this is an integral over a closed curve.⁸ It follows from this, on writing the integrand of (7.110) as ∂/∂s[(· · ·) δw] − ∂(· · ·)/∂s δw, that the first term integrates to zero, and so

    ∮_∂Ω I3 ds = −∮_∂Ω ∂/∂s ( w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy ) δw ds.      (7.112)

Substituting (7.112) into (7.109) yields

    ∮_∂Ω I1 ds = ∮_∂Ω P3 ∂(δw)/∂n ds − ∮_∂Ω (∂P4/∂s) δw ds,      (7.113)

where we have set

    P3 = w,xx nx² + 2 w,xy nx ny + w,yy ny²,  P4 = w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy.      (7.114)

Finally, substituting (7.107) and (7.113) into (7.106) leads to

    ∫_Ω P2 δw dA − ∮_∂Ω ( P1 + ∂P4/∂s ) δw ds + ∮_∂Ω P3 ∂(δw)/∂n ds = 0,      (7.115)

which must hold for all admissible variations δw.

First restrict attention to variations which vanish on the boundary ∂Ω and whose normal derivative ∂(δw)/∂n also vanishes on ∂Ω. This leads us to the Euler equation P2 = 0:

    ∇⁴w − p/D = 0 for (x, y) ∈ Ω.      (7.116)

Returning to (7.115) with this gives

    −∮_∂Ω ( P1 + ∂P4/∂s ) δw ds + ∮_∂Ω P3 ∂(δw)/∂n ds = 0.      (7.117)

Since the portion ∂Ω1 of the boundary is clamped we have w = ∂w/∂n = 0 for (x, y) ∈ ∂Ω1, and so the variations δw and ∂(δw)/∂n must also vanish on ∂Ω1. Thus (7.117) simplifies to

    −∮_∂Ω2 ( P1 + ∂P4/∂s ) δw ds + ∮_∂Ω2 P3 ∂(δw)/∂n ds = 0      (7.118)

for variations δw and ∂(δw)/∂n that are arbitrary on ∂Ω2, where ∂Ω2 is the complement of ∂Ω1, i.e. ∂Ω = ∂Ω1 ∪ ∂Ω2. Thus we conclude that P1 + ∂P4/∂s = 0 and P3 = 0 on ∂Ω2:

    (w,xxx + w,xyy) nx + (w,xxy + w,yyy) ny + ∂/∂s ( w,xx nx sx + w,xy nx sy + w,xy ny sx + w,yy ny sy ) = 0,
    w,xx nx² + 2 w,xy nx ny + w,yy ny² = 0,   for (x, y) ∈ ∂Ω2.      (7.119)

⁸ In the present setting one would have this degree of smoothness if there are no concentrated loads applied on the boundary of the plate ∂Ω and the boundary curve itself has no corners.

Thus, in summary, the Kirchhoff theory of plates for the problem at hand requires that one solve the field equation (7.116) on Ω subject to the displacement boundary conditions (7.103) on ∂Ω1 and the natural boundary conditions (7.119) on ∂Ω2.

Remark: If we define the moments Mn, Mns and force Vn by

    Mn = −D ( w,xx nx nx + w,xy nx ny + w,yx ny nx + w,yy ny ny ),
    Mns = −D ( w,xx nx sx + w,xy nx sy + w,yx ny sx + w,yy ny sy ),      (7.120)
    Vn = −D ( w,xxx nx + w,xyy nx + w,yxx ny + w,yyy ny ),

then the two natural boundary conditions (7.119) can be written as

    Mn = 0,  ∂Mns/∂s + Vn = 0.      (7.121)

As a special case suppose that the plate is rectangular, 0 ≤ x ≤ a, 0 ≤ y ≤ b, and that the right edge x = a, 0 ≤ y ≤ b, is free of load; see the right diagram in Figure 7.18. Then nx = 1, ny = 0, sx = 0, sy = 1 on this edge, and so (7.120) simplifies to

    Mn = −D w,xx,  Mns = −D w,xy,  Vn = −D ( w,xxx + w,xyy ),      (7.122)

which, because of (7.100) and (7.101), shows that in this case Mn = Mx, Mns = Mxy and Vn = Vx. Thus the natural boundary conditions (7.121) can be written as

    Mx = 0,  ∂Mxy/∂y + Vx = 0.      (7.123)

This answers the question we posed soon after (7.101) as to what the correct boundary conditions on a free edge should be. We had noted that intuitively we would have expected the moments and forces to vanish on a free edge, and therefore that Mx = Mxy = Vx = 0 there; but this is in contradiction to the mathematical fact that the differential equation (7.116) only requires two conditions at each point on the boundary. The two natural boundary conditions (7.123) require that certain combinations of Mx, Mxy and Vx vanish, but not that all three vanish individually.

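Numerically, the Euler equation (7.116) can be illustrated as follows. The clamped plate treated above requires a genuine biharmonic solver; the sketch below instead assumes a simply supported unit square (w = ∇²w = 0 on the edge, a different boundary condition than in the example above), for which ∇⁴w = p/D splits into two Poisson problems solved in sequence.

```python
import numpy as np

def poisson(rhs, h, sweeps=8000):
    # Jacobi-type solve of  lap(v) = rhs  with v = 0 on the boundary
    v = np.zeros_like(rhs)
    for _ in range(sweeps):
        v[1:-1, 1:-1] = 0.25*(v[2:, 1:-1] + v[:-2, 1:-1] + v[1:-1, 2:] +
                              v[1:-1, :-2] - h*h*rhs[1:-1, 1:-1])
    return v

n, h = 65, 1.0/64
p_over_D = np.ones((n, n))       # uniform load (illustrative)
m = poisson(p_over_D, h)         # solve lap(m) = p/D, where m = lap(w)
w = poisson(m, h)                # then solve lap(w) = m
print(w[n//2, n//2])             # ~0.00406, the classical center deflection
```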
Example 3: Minimal surface equation. Let C be a closed curve in R³ as sketched in Figure 7.19. From among all surfaces S in R³ that have C as its boundary, we wish to determine the surface that has minimum area. As a physical example, if C corresponds to a wire loop which we dip in a soapy solution, a thin soap film will form across C. The surface that forms is the one that, from among all possible surfaces S that are bounded by C, minimizes the total surface energy of the film, which (if the surface energy density is constant) is the surface with minimal area.

Figure 7.19: The closed curve C: z = h(x, y), (x, y) ∈ ∂D, in R³ is given. From among all surfaces S: z = φ(x, y) in R³ that have C as its boundary, the surface with minimal area is to be sought. The curve ∂D is the projection of C onto the x, y-plane.

Suppose that the projection of C onto the x, y-plane is denoted by ∂D, and let D denote the simply connected region contained within ∂D. Suppose further that C is characterized by z = h(x, y) for (x, y) ∈ ∂D, and let z = φ(x, y) for (x, y) ∈ D describe a surface S in R³ that has C as its boundary; necessarily φ = h on ∂D. Thus the admissible set of functions we are considering is

    A = { φ | φ : D → R, φ ∈ C²(D), φ = h on ∂D }.

Consider a rectangular differential element on the x, y-plane that is contained within D. The vector joining (x, y) to (x + dx, y) is dx = dx i, while the vector joining (x, y) to (x, y + dy) is dy = dy j. If du and dv are vectors on the surface z = φ(x, y) whose projections onto the x, y-plane are dx and dy respectively, then we know that

    du = dx i + φx dx k,  dv = dy j + φy dy k.

The vectors du and dv define a parallelogram on the surface z = φ(x, y), and the area of this parallelogram is |du × dv|. Thus the area of a differential element on S is

    |du × dv| = | −φx dxdy i − φy dxdy j + dxdy k | = √(1 + φx² + φy²) dxdy.

Consequently the problem at hand is to minimize the functional

    F{φ} = ∫_D √(1 + φx² + φy²) dA

over the set of admissible functions A. It is left as an exercise to show that setting the first variation of F equal to zero leads to the so-called minimal surface equation

    (1 + φy²) φxx − 2 φx φy φxy + (1 + φx²) φyy = 0.

Remark: See en.wikipedia.org/wiki/Soap bubble and www.susqu.edu/facstaff/b/brakke/ for additional discussion.

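A direct numerical attack on this problem is straightforward: discretize the area functional and minimize over the interior nodal values. In the sketch below the domain D is taken to be a unit square and the boundary data h is an arbitrary illustrative choice, not something from the notes.

```python
import numpy as np
from scipy.optimize import minimize

n = 21; dx = 1.0/(n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')
phi_bdry = 0.3*np.sin(np.pi*X)*Y          # supplies h on the four edges
inner = np.zeros((n, n), dtype=bool); inner[1:-1, 1:-1] = True

def area(vals):
    phi = phi_bdry.copy()
    phi[inner] = vals
    px = np.diff(phi, axis=0)[:, :-1]/dx  # cell-wise phi_x
    py = np.diff(phi, axis=1)[:-1, :]/dx  # cell-wise phi_y
    return np.sum(np.sqrt(1.0 + px**2 + py**2))*dx*dx

res = minimize(area, phi_bdry[inner], method='L-BFGS-B')
print(area(phi_bdry[inner]), '->', res.fun)   # the area drops to its minimum
```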
7.9 Second variation. Another necessary condition for a minimum.

In order to illustrate the basic ideas of this section in the simplest possible setting, we confine the discussion to the particular functional

    F{φ} = ∫_0^1 f(x, φ, φ′) dx

defined over a set of admissible functions A. Suppose that a particular function φ minimizes F, and that for some given function η the one-parameter family of functions φ + εη is admissible for all sufficiently small values of the parameter ε. Define F̂(ε) by

    F̂(ε) = F{φ + εη} = ∫_0^1 f(x, φ + εη, φ′ + εη′) dx,

so that by Taylor expansion

    F̂(ε) = F̂(0) + ε F̂′(0) + (ε²/2) F̂″(0) + O(ε³),

where

    F̂(0) = ∫_0^1 f(x, φ, φ′) dx = F{φ},
    ε F̂′(0) = δF{φ, η},
    ε² F̂″(0) = ε² ∫_0^1 [ f_φφ η² + 2 f_φφ′ η η′ + f_φ′φ′ (η′)² ] dx = δ²F{φ, η}.

Since φ minimizes F, it follows that ε = 0 minimizes F̂(ε). Thus a necessary condition for a function φ to minimize a functional F, in addition to the requirement δF{φ, η} = 0, is that the second variation of F be non-negative for all admissible variations δφ:

    δ²F{φ, δφ} = ∫_0^1 [ f_φφ (δφ)² + 2 f_φφ′ (δφ)(δφ′) + f_φ′φ′ (δφ′)² ] dx ≥ 0,      (7.124)

where we have set δφ = εη. The inequality is reversed if φ maximizes F.

Proposition (Legendre condition): A necessary condition for (7.124) to hold is that

    f_φ′φ′ (x, φ(x), φ′(x)) ≥ 0

for the minimizing function φ. The condition δ²F{φ, η} ≥ 0 is necessary but not sufficient for the functional F to have a minimum at φ; we shall not discuss sufficient conditions in general in these notes.

Example: Consider a curve in the x, y-plane characterized by y = φ(x) that begins at (0, φ0) and ends at (1, φ1). From among all such curves, find the one that, when rotated about the x-axis, generates the surface of minimum area. Thus we are asked to minimize the functional

    F{φ} = ∫_0^1 f(x, φ, φ′) dx,  where f(x, φ, φ′) = φ √(1 + (φ′)²),

over a set of admissible functions that satisfy the boundary conditions φ(0) = φ0, φ(1) = φ1. A function φ that minimizes F must satisfy the boundary value problem consisting of the Euler equation and the given boundary conditions:

    d/dx [ φ φ′ / √(1 + (φ′)²) ] − √(1 + (φ′)²) = 0,  φ(0) = φ0,  φ(1) = φ1.

The general solution of this Euler equation is

    φ(x) = α cosh((x − β)/α),  0 ≤ x ≤ 1,

where the constants α and β are determined through the boundary conditions.

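The pair of transcendental equations α cosh(−β/α) = φ0, α cosh((1 − β)/α) = φ1 is readily solved numerically. The sketch below uses scipy and illustrative end heights; for some end data no such solution exists, in which case no smooth extremal of this form exists.

```python
import numpy as np
from scipy.optimize import fsolve

phi0, phi1 = 1.0, 1.2                      # illustrative end heights

def residual(v):
    alpha, beta = v
    return [alpha*np.cosh((0.0 - beta)/alpha) - phi0,
            alpha*np.cosh((1.0 - beta)/alpha) - phi1]

alpha, beta = fsolve(residual, [1.0, 0.5])
print(alpha, beta)    # alpha > 0, so the Legendre condition below is met
```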
To test the Legendre condition we calculate f_φ′φ′ and find that

    f_φ′φ′ = φ / [1 + (φ′)²]^{3/2},

which, when evaluated at the particular function φ(x) = α cosh((x − β)/α), yields

    f_φ′φ′ |_{φ = α cosh((x−β)/α)} = α / cosh²((x − β)/α).

Therefore, as long as α > 0, the Legendre condition is satisfied.

7.10 Sufficient condition for minimization of convex functionals

We now turn to a brief discussion of sufficient conditions for a minimum for a special class of functionals. It is useful to begin by reviewing the question of finding the minimum of a real-valued function of a real variable. A function F(x), defined for x ∈ A with continuous first derivatives, is said to be convex if

    F(x1) ≥ F(x2) + F′(x2)(x1 − x2) for all x1, x2 ∈ A;

see Figure 7.20 for a geometric interpretation of convexity.

Figure 7.20: A convex curve y = F(x) lies above the tangent line y = F(x2) + F′(x2)(x − x2) through any point of the curve.

If a convex function has a stationary point, say at xo, then it follows, by setting x2 = xo in the preceding inequality, that xo is a minimizer of F. Therefore a stationary point of a convex function is necessarily a minimizer. If F is strictly convex on A, i.e. if F is convex and F(x1) = F(x2) + F′(x2)(x1 − x2) if and only if x1 = x2, then F can have only one stationary point and so can have only one interior minimum.

This is also true for a real-valued function F with continuous first derivatives on a domain A in Rⁿ, where convexity is defined by⁹

    F(x1) ≥ F(x2) + δF(x2, x1 − x2) for all x1, x2 ∈ A,

and strict convexity by the requirement that F(x1) = F(x2) + δF(x2, x1 − x2) if and only if x1 = x2. If F is stationary at xo, then since δF(xo, y) = 0 for all y, it again follows that xo is a minimizer of F.

⁹ See equation (??) for the definition of δF(x, y).

We now turn to a functional F{φ}, which is said to be convex on A if

    F{φ + η} ≥ F{φ} + δF{φ, η} for all φ, φ + η ∈ A.      (7.125)

If F is stationary at φo ∈ A, then δF{φo, η} = 0 for all admissible η, and it follows from (7.125) that φo is in fact a minimizer of F. Therefore a stationary point of a convex functional is necessarily a minimizer.

In general it might not be simple to test whether the convexity condition holds in a particular case. For example, consider the generic functional

    F{φ} = ∫_0^1 f(x, φ, φ′) dx.

Then

    δF{φ, η} = ∫_0^1 [ (∂f/∂φ) η + (∂f/∂φ′) η′ ] dx,

and so the convexity condition F{φ + η} − F{φ} ≥ δF{φ, η} takes the special form

    ∫_0^1 [ f(x, φ + η, φ′ + η′) − f(x, φ, φ′) ] dx ≥ ∫_0^1 [ (∂f/∂φ) η + (∂f/∂φ′) η′ ] dx.      (7.126)

It is readily seen that a sufficient condition for (7.126) to hold is that the integrands satisfy

    f(x, y + v, z + w) − f(x, y, z) ≥ (∂f/∂y) v + (∂f/∂z) w      (7.127)

for all (x, y, z) and (x, y + v, z + w) in the domain of f. This is precisely the requirement that the function f(x, y, z) be a convex function of y, z at fixed x. Thus, in summary: if the integrand f of the functional F satisfies the convexity condition (7.127), then a function φ that extremizes F is in fact a minimizer of F. Note that this is simply a sufficient condition for ensuring that an extremum is a minimum.

Remark: In the special case where f(x, y, z) is independent of y, one sees from basic calculus that if ∂²f/∂z² > 0 then f(x, z) is a strictly convex function of z at each fixed x.

Example: Geodesics. Find the curve of shortest length that lies entirely on a circular cylinder of radius a, beginning (in circular cylindrical coordinates (r, θ, ξ)) at (a, θ1, ξ1) and ending at (a, θ2, ξ2).

Figure 7.21: A curve that lies entirely on a circular cylinder of radius a, beginning at (a, θ1, ξ1) and ending at (a, θ2, ξ2): x = a cos θ, y = a sin θ, ξ = ξ(θ), θ1 ≤ θ ≤ θ2.

We can characterize a curve in R³ parametrically, using circular cylindrical coordinates, by r = r(θ), ξ = ξ(θ) for θ1 ≤ θ ≤ θ2. When the curve lies on the surface of a circular cylinder of radius a this specializes to

    r = a,  ξ = ξ(θ),  θ1 ≤ θ ≤ θ2.

Since the arc length can be written as

    ds = √( dr² + r² dθ² + dξ² ) = √( (r′(θ))² + (r(θ))² + (ξ′(θ))² ) dθ = √( a² + (ξ′(θ))² ) dθ,

our task is to minimize the functional

    F{ξ} = ∫_{θ1}^{θ2} f(θ, ξ(θ), ξ′(θ)) dθ,  where f(x, y, z) = √(a² + z²),

over the set of all suitably smooth functions ξ(θ) defined for θ1 ≤ θ ≤ θ2 which satisfy ξ(θ1) = ξ1, ξ(θ2) = ξ2.

Evaluating the necessary condition δF = 0 leads to the Euler equation. This second-order differential equation for ξ(θ) can be readily solved, and using the boundary conditions ξ(θ1) = ξ1, ξ(θ2) = ξ2 leads to

    ξ(θ) = ξ1 + [ (ξ1 − ξ2)/(θ1 − θ2) ] (θ − θ1),      (7.128)

a helix. Direct differentiation of f(x, y, z) = √(a² + z²) shows that

    ∂²f/∂z² = a²/(a² + z²)^{3/2} > 0,

and so f is a strictly convex function of z. Thus the curve of minimum length is given uniquely by (7.128). Note that if the circular cylindrical surface is cut along a vertical line and unrolled into a flat sheet, this curve unfolds into a straight line.

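As a cross-check, one can minimize the discretized length functional directly. With the illustrative endpoint data below (an arbitrary choice), the computed minimum agrees with the length of the unrolled straight line, √((aΔθ)² + (Δξ)²).

```python
import numpy as np
from scipy.optimize import minimize

a = 1.0
th1, th2, xi1, xi2 = 0.0, np.pi, 0.0, 2.0   # illustrative endpoint data
n = 101
theta = np.linspace(th1, th2, n)
dth = theta[1] - theta[0]

def length(xi_inner):
    xi = np.concatenate(([xi1], xi_inner, [xi2]))
    return np.sum(np.sqrt(a**2 + (np.diff(xi)/dth)**2))*dth

start = np.linspace(xi1, xi2, n)[1:-1] + 0.3*np.sin(theta[1:-1])
res = minimize(length, start, method='L-BFGS-B')
print(res.fun, np.sqrt((a*(th2 - th1))**2 + (xi2 - xi1)**2))  # both ~3.7242
```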
7.11 Direct method of the calculus of variations and minimizing sequences.

We now turn to a different method of seeking minima, and for purposes of introduction we begin by reviewing the familiar case of a real-valued function f(x) of a real variable x ∈ (−∞, ∞). Consider the specific example f(x) = x². This function is non-negative and has a minimum value of zero, which it attains at x = 0. Consider the sequence of numbers x0, x1, x2, x3, . . . , xk, . . . where xk = 1/2^k, and note that

    lim_{k→∞} f(xk) = 0.

Figure 7.22: (a) The function f(x) = x² for −∞ < x < ∞, and (b) the function f(x) = 1 for x ≤ 0, f(x) = x² for x > 0.

The sequence 1/2, 1/2², . . . , 1/2^k, . . . is called a minimizing sequence, in the sense that the value of the function f(xk) converges to the minimum value of f as k → ∞. Moreover, observe that

    lim_{k→∞} xk = 0

as well, and so the sequence itself converges to the minimizer of f, i.e. to x = 0. This latter feature is true because in this example

    f( lim_{k→∞} xk ) = lim_{k→∞} f(xk).

As we know, not all functions have a minimum value, even if they happen to have a finite greatest lower bound. We now consider an example to illustrate the fact that a minimizing sequence can be used to find the greatest lower bound of a function that does not have a minimum. Consider the function f(x) = 1 for x ≤ 0 and f(x) = x² for x > 0. This function is non-negative and, in fact, it can take values arbitrarily close to the value 0. However it does not have a minimum value, since there is no value of x for which f(x) = 0 (note that f(0) = 1). The greatest lower bound or infimum (denoted by "inf") of f is

    inf_{−∞<x<∞} f(x) = 0.

Again consider the sequence of numbers x0, x1, x2, x3, . . . , xk, . . . where xk = 1/2^k, and note that

    lim_{k→∞} f(xk) = 0.


In this case the value of the function f(xk) converges to the infimum of f as k → ∞. However, since

    lim_{k→∞} xk = 0,

the limit of the sequence itself is x = 0, and f(0) is not the infimum of f. This is because in this example

    f( lim_{k→∞} xk ) ≠ lim_{k→∞} f(xk).

Returning now to a functional, suppose that we are to find the infimum (or the minimum if it exists) of a functional F{φ} over an admissible set of functions A. Let

    inf_{φ∈A} F{φ} = m (> −∞).

Necessarily there must exist a sequence of functions φ1, φ2, . . . in A such that

    lim_{k→∞} F{φk} = m;

such a sequence is called a minimizing sequence. If the sequence φ1, φ2, . . . converges to a limiting function φ*, and if

    F{ lim_{k→∞} φk } = lim_{k→∞} F{φk},

then it follows that F{φ*} = m and the function φ* is the minimizer of F. The functions φk of a minimizing sequence can be considered to be approximate solutions of the minimization problem. Just as in the second example of this section, in some variational problems the limiting function φ* of a minimizing sequence φ1, φ2, . . . does not minimize the functional F; see the last example of this section.

7.11.1 The Ritz method

Suppose that we are to minimize a functional F{φ} over an admissible set A. Consider an infinite sequence of functions φ1, φ2, . . . in A, and let Ap be the subset of functions in A that can be expressed as a linear combination of the first p functions φ1, φ2, . . . , φp. In order to minimize F over the subset Ap we must simply minimize the function

    F̄(α1, α2, . . . , αp) = F{ α1 φ1 + α2 φ2 + . . . + αp φp }


with respect to the real parameters α1, α2, . . . , αp. Suppose that the minimum of F on Ap is denoted by mp. Clearly A1 ⊂ A2 ⊂ A3 ⊂ . . . ⊂ A, and therefore m1 ≥ m2 ≥ m3 ≥ . . . .¹⁰ Thus, in the so-called Ritz method, we minimize F over a subset Ap to find an approximate minimizer; moreover, increasing the value of p improves the approximation in the sense of the preceding footnote.

¹⁰ If the sequence φ1, φ2, . . . is complete, and the functional F{φ} is continuous in the appropriate norm, then one can show that lim_{p→∞} mp = m.

Example: Consider an elastic bar of length L and modulus E that is fixed at both ends and carries a distributed axial load b(x). A displacement field u(x) must satisfy the boundary conditions u(0) = u(L) = 0, and the associated potential energy is

    F{u} = ∫_0^L ½ E (u′)² dx − ∫_0^L b u dx.

We now use the Ritz method to find an approximate displacement field that minimizes F. Consider the sequence of functions v1, v2, v3, . . . where

    vp = sin(pπx/L);

observe that vp(0) = vp(L) = 0 for all integers p. Consider the function

    un(x) = Σ_{p=1}^n αp sin(pπx/L)

for any integer n ≥ 1, and evaluate

    F̄(α1, α2, . . . , αn) = F{un} = ∫_0^L ½ E (un′)² dx − ∫_0^L b un dx.

Since

    ∫_0^L cos(pπx/L) cos(qπx/L) dx = 0 for p ≠ q,  and = L/2 for p = q,

it follows that

    ∫_0^L (un′)² dx = ∫_0^L [ Σ_{p=1}^n αp (pπ/L) cos(pπx/L) ] [ Σ_{q=1}^n αq (qπ/L) cos(qπx/L) ] dx = Σ_{p=1}^n αp² p²π²/(2L).

Therefore

    F̄(α1, α2, . . . , αn) = F{un} = Σ_{p=1}^n [ ¼ E αp² p²π²/L − αp ∫_0^L b sin(pπx/L) dx ].      (7.129)

To minimize F̄(α1, α2, . . . , αn) with respect to αp we set ∂F̄/∂αp = 0. This leads to

    αp = [ ∫_0^L b sin(pπx/L) dx ] / [ E p²π²/(2L) ]  for p = 1, 2, . . . , n.      (7.130)

Therefore, by substituting (7.130) into (7.129), we find that the n-term Ritz approximation of the energy is

    F{un} = −Σ_{p=1}^n ¼ E αp² p²π²/L,  with αp given by (7.130),

and the corresponding approximate displacement field is given by

    un = Σ_{p=1}^n αp sin(pπx/L),  with αp given by (7.130).

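The formulas above are easily put to work. The sketch below uses the illustrative data L = E = 1 and b(x) = x, for which the exact solution of (E u′)′ + b = 0, u(0) = u(L) = 0, is u(x) = (L²x − x³)/(6E), and shows that the Ritz approximation converges to it.

```python
import numpy as np
from scipy.integrate import quad

Lbar, E, nterms = 1.0, 1.0, 8
b = lambda x: x                                  # illustrative load

def alpha(p):                                    # equation (7.130)
    bp, _ = quad(lambda x: b(x)*np.sin(p*np.pi*x/Lbar), 0.0, Lbar)
    return bp/(E*p**2*np.pi**2/(2.0*Lbar))

def u_ritz(x):
    return sum(alpha(p)*np.sin(p*np.pi*x/Lbar) for p in range(1, nterms + 1))

u_exact = lambda x: (Lbar**2*x - x**3)/(6.0*E)   # exact solution for b(x) = x
xs = np.linspace(0.0, Lbar, 11)
print(np.max(np.abs(u_ritz(xs) - u_exact(xs))))  # small; shrinks as nterms grows
```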

7.12 Worked Examples.

Example 7.N: Consider two given points (x1, h1) and (x2, h2), with h1 > h2, that are to be joined by a smooth wire. The wire is permitted to have any shape, provided that it does not enter into the interior of the circular region (x − x0)² + (y − y0)² ≤ R². A bead is released from rest from the point (x1, h1) and slides along the wire (without friction) due to gravity. For what shape of wire is the time of travel from (x1, h1) to (x2, h2) least?

Figure 7.23: A curve y = φ(x) joining (x1, h1) to (x2, h2) which is disallowed from entering the forbidden region (x − x0)² + (φ(x) − y0)² < R².

The travel time of the bead is again given by (7.1), and the test functions must satisfy the same requirements as in the first example except that, in addition, they must satisfy the (inequality) constraint

    (x − x0)² + (φ(x) − y0)² ≥ R²,  x1 ≤ x ≤ x2.      (i)

That is, in considering different wires that connect (x1, h1) to (x2, h2), we may only consider those that lie entirely outside the prescribed circular region. Our task is to minimize T{φ} over the set A1 subject to the constraint (i).

Example 7.N: Buckling. Consider a beam whose centerline occupies the interval y = 0, 0 < x < L, in an undeformed configuration. The beam is fixed by a pin at x = 0; the end x = L is also pinned, but is permitted to move along the x-axis. A compressive force P is applied at x = L and the beam adopts a buckled shape described by y = φ(x). Figure 7.24 shows the centerline of the beam in both the undeformed and deformed configurations. The prescribed geometric boundary conditions on the deflected shape of the beam are thus φ(0) = φ(L) = 0.

By geometry, the curvature κ(x) of a curve y = φ(x) is given by

    κ(x) = φ″(x) / [1 + (φ′(x))²]^{3/2}.

From elasticity theory we know that the bending energy per unit length of a beam is ½ M κ, and that the bending moment M is related to the curvature κ by M = EIκ, where EI is the bending stiffness of the beam. Thus the bending energy associated with a differential element of the beam is ½ EI κ² ds, where ds is arc length along the deformed beam.

Figure 7.24: An elastic beam in undeformed (lower figure) and buckled (upper figure) configurations.

Thus the total bending energy in the beam is

    ∫_0^L ½ EI κ²(x) ds,

where the arc length s is related to the coordinate x by the geometric relation ds = √(1 + (φ′(x))²) dx. Thus the total bending energy of the beam is

    ∫_0^L ½ EI (φ″)² / [1 + (φ′)²]^{5/2} dx.

Next we need to account for the potential energy associated with the compressive force P on the beam. Since the change in length of a differential element is ds − dx, the amount by which the right-hand end of the beam moves leftwards is

    ∫_0^L (ds − dx) = ∫_0^L √(1 + (φ′)²) dx − L.

Thus the potential energy associated with the applied force P is

    −P ( ∫_0^L √(1 + (φ′)²) dx − L ).

Therefore the total potential energy of the system is

    Φ{φ} = ∫_0^L ½ EI (φ″)² / [1 + (φ′)²]^{5/2} dx − P ( ∫_0^L √(1 + (φ′)²) dx − L ).

The Euler equation, which for a functional of the form ∫ f(x, φ, φ′, φ″) dx has the general form

    d²/dx² (∂f/∂φ″) − d/dx (∂f/∂φ′) + ∂f/∂φ = 0,

simplifies in the present case since f does not depend explicitly on φ. The last term therefore drops out, and the resulting equation can be integrated once immediately. This eventually leads to the Euler equation

    EI d/dx [ φ″ / [1 + (φ′)²]^{5/2} ] + (5EI/2) φ′ (φ″)² / [1 + (φ′)²]^{7/2} + P φ′ / [1 + (φ′)²]^{1/2} = c,

where c is a constant of integration, and the natural boundary conditions are φ″(0) = φ″(L) = 0.

Example 7.N: Linearize the boundary value problem in the buckling problem above; alternatively, approximate the energy and derive the Euler equation associated with it.

Example 7.N: Null Lagrangian.

Example 7.N: Let u(x, t), where 0 ≤ x ≤ L, 0 ≤ t ≤ T, and consider the functional

    F{u} = ∫_0^T ∫_0^L ( ½ u_t² − ½ u_x² ) dx dt.

The Euler equation is the wave equation u_tt − u_xx = 0.

Example 7.N: Consider the functional

    F{u} = ∫_0^T ∫_0^L ( ½ u_t² − ½ u_x² − ½ m² u² ) dx dt.

The Euler equation is the Klein-Gordon equation u_tt − u_xx + m² u = 0.

Example 7.2: Soap Film Problem. Consider two circular wires, each of radius R, that are placed coaxially, a distance 2H apart. The planes defined by the two circles are parallel to each other and perpendicular to their common axis. This arrangement of wires is dipped into a soapy bath and taken out; suppose that the film spans across the two circular wires. We shall assume that the soap film adopts the shape with minimum surface energy, which implies that we are to find the shape with minimum surface area. Determine the shape of the soap film that forms.

then the minimizing shape is given by (i) with this value (or values) of c. the soap film covers only the two end regions formed by the circular wires and so the area of the soap film is 2 × πR2 . once if µ = µ∗ . µ < µ∗ 0. There are three possible configurations of the soap film to consider: one. if R = µ∗ H there is a unique shape of the soap film given by (i) that extremizes the surface area. Since 4πRH < 2πR2 for 2H < R. this suggests that the soap film will span across the two circular . as an approximation.5 2 2. sinh ξ = µ∗ where the latter equation reflects the tangency of the two curves at the contact point in this limiting case. Remark: In order to understand what happens when R < µ∗ H consider the following heuristic argument. if R > µ∗ H there are two shapes of the soap film given by (i) that extremize the surface area (and further analysis investigating the stability of these configurations is needed in order to determine the physically realized shape). CALCULUS OF VARIATIONS ζ = cosh ξ 8 ζ = µξ. the film does both of the above. Here µ∗ ≈ 1.25. and 4πRH > 2πR2 for 2H > R.25: Intersection of the curve described by ζ = cosh ξ with the straight line ζ = µξ. the film forms on each circular wire but does not bridge the two wires.5 3 2 ξ Figure 7. Thus in summary. and there is no intersection if µ < µ∗ . In this case the area of the soap film is 2πR(2H). To examine the solvability of (ii) set ξ = H/c and µ = R/H and then this equation can be written as cosh ξ = µξ.192 ζ 10 CHAPTER 7. the film bridges across from one circular wire to the other but it does not form on the flat faces of the two circular wires themselves (which is the case analyzed above). and three. two.50888 is found by solving the pair of algebraic equations cosh ξ = µ∗ ξ. Given H and R. As seen from Figure 7. We can immediately discard the third case since it involves more surface area than either of the first two cases.5 1 1. µ > µ∗ 6 4 ζ = µ∗ ξ ζ = µξ. if R < µ∗ H there is no shape of the soap film that extremizes the surface area. suppose that it forms a circular cylinder of radius R and length 2H. Consider the first possibility: the soap film spans across the two circular wires and. In the second case. the graph ζ = cosh ξ intersects the straight line ζ = µξ twice if µ > µ∗ . if this equation can be solved for c.

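Anticipating the discussion that follows, equation (ii) can be examined numerically; the values of R and H below are illustrative choices, and the critical ratio µ* computed at the end is explained in the next paragraph.

```python
import numpy as np
from scipy.optimize import fsolve

R, H = 1.0, 0.5                      # illustrative: R/H = 2
g = lambda c: np.cosh(H/c) - R/c
roots = sorted(fsolve(g, c0)[0] for c0 in (0.2, 0.9))
print(roots)                         # two roots, ~0.2352 and ~0.8483

# the critical ratio mu* below which no root exists: tangency of
# cosh(xi) with the line mu*xi, i.e. cosh(xi) = mu*xi and sinh(xi) = mu
xi, mu_star = fsolve(lambda z: [np.cosh(z[0]) - z[1]*z[0],
                                np.sinh(z[0]) - z[1]], [1.2, 1.5])
print(mu_star)                       # ~1.50888
```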
If this equation can be solved for c, then the minimizing shape is given by (i) with this value (or values) of c. To examine the solvability of (ii), set ξ = H/c and µ = R/H; then this equation can be written as

    cosh ξ = µξ.

As seen from Figure 7.25, the graph ζ = cosh ξ intersects the straight line ζ = µξ twice if µ > µ*, once if µ = µ*, and there is no intersection if µ < µ*. Here µ* ≈ 1.50888 is found by solving the pair of algebraic equations

    cosh ξ = µ*ξ,  sinh ξ = µ*,

where the latter equation reflects the tangency of the two curves at the contact point in this limiting case.

Figure 7.25: Intersection of the curve ζ = cosh ξ with the straight line ζ = µξ: two intersections for µ > µ*, tangency for µ = µ*, and none for µ < µ*.

Thus, in summary: if R > µ*H there are two shapes of the soap film given by (i) that extremize the surface area (and further analysis investigating the stability of these configurations is needed in order to determine the physically realized shape); if R = µ*H there is a unique shape of the soap film given by (i) that extremizes the surface area; and if R < µ*H there is no shape of the soap film of the assumed form that extremizes the surface area.

Remark: In order to understand what happens when R < µ*H, consider the following heuristic argument. There are three possible configurations of the soap film to consider: one, the film bridges across from one circular wire to the other but does not form on the flat faces of the two circular wires themselves (which is the case analyzed above); two, the film forms on each circular wire but does not bridge the two wires; and three, the film does both of the above. We can immediately discard the third case, since it involves more surface area than either of the first two cases. Consider the first possibility and suppose, as an approximation, that the film forms a circular cylinder of radius R and length 2H; the area of the soap film is then 2πR(2H). In the second case, the soap film covers only the two end regions formed by the circular wires, and so its area is 2 × πR². Since 4πRH < 2πR² for 2H < R, and 4πRH > 2πR² for 2H > R, this suggests that the soap film will span across the two circular wires if R > 2H, whereas it will not span across the two circular wires if R < 2H (and would instead cover only the two circular ends).

Example 7.4: Minimum Drag Problem. Consider a space-craft whose shape is to be designed such that the drag on it is a minimum. The outer surface of the space-craft is composed of two segments, as shown in Figure 7.26: the inner portion (x = 0 with 0 < y < h1 in the figure) is a flat circular disk-shaped nose of radius h1, and the outer portion is obtained by rigidly rotating the curve y = φ(x), 0 < x < 1, about the x-axis. The space-craft moves at a speed V in the −x-direction. We are told that φ(0) = h1, φ(1) = h2, with the value of h2 being given; the value of h1, however, is not prescribed and is to be chosen along with the function φ(x) such that the drag is minimized.

Figure 7.26: The shape of the space-craft with minimum drag is generated by rotating the curve y = φ(x) about the x-axis. A generic point on the surface carries a pressure p ∼ (V cos θ)², acting on a differential area dA = 2πφ ds.

According to the most elementary model of drag (due to Newton), the pressure at some point on a surface is proportional to the square of the normal speed of that point. Thus if the space-craft has speed V relative to the surrounding medium, the pressure on the body at some generic point is proportional to (V cos θ)², where θ is the angle shown in Figure 7.26; this acts on a differential area dA = 2πy ds = 2πφ ds. The horizontal component of this force is therefore obtained by integrating dF cos θ = (V cos θ)² × (2πφ ds) × cos θ over the entire body. Thus the drag D is given, in suitable units, by

    D = πh1² + 2π ∫_0^1 φ(φ′)³ / [1 + (φ′)²] dx,

where we have used the fact that ds = √(1 + (φ′)²) dx and cos θ = φ′/√(1 + (φ′)²).

To optimize this we calculate the first variation of D, remembering that both the function φ and the parameter h1 can be varied. Thus we are led to

    δD = 2πh1 δh1 + 2π ∫_0^1 (φ′)³/[1 + (φ′)²] δφ dx + 2π ∫_0^1 φ [ 3(φ′)²/[1 + (φ′)²] − 2(φ′)⁴/[1 + (φ′)²]² ] δφ′ dx
       = 2πh1 δh1 + 2π ∫_0^1 (φ′)³/[1 + (φ′)²] δφ dx + 2π ∫_0^1 φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² δφ′ dx.

Integrating the last term by parts, and recalling that δφ(1) = 0 and δφ(0) = δh1 (since the value of φ(1) is prescribed but the value of φ(0) = h1 is not), we are led to

    δD = 2πh1 δh1 + 2π ∫_0^1 [ (φ′)³/[1 + (φ′)²] − d/dx ( φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² ) ] δφ dx
         − 2π [ φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² ]_{x=0} δh1.

The arbitrariness of δφ and δh1 now yields

    d/dx [ φ(φ′)²(3 + (φ′)²)/[1 + (φ′)²]² ] − (φ′)³/[1 + (φ′)²] = 0 for 0 < x < 1,
    (φ′)²(3 + (φ′)²)/[1 + (φ′)²]² = 1 at x = 0.      (i)

The differential equation in (i)₁ and the natural boundary condition (i)₂ can be readily reduced to

    d/dx [ φ(φ′)³/[1 + (φ′)²]² ] = 0 for 0 < x < 1,  φ′(0) = 1.      (ii)

The differential equation (ii) tells us that

    φ(φ′)³/[1 + (φ′)²]² = c1 for 0 < x < 1,      (iii)

where c1 is a constant. Since φ(0) = h1 and φ′(0) = 1, this shows that c1 = h1/4. Together with the given boundary conditions, we are therefore to solve the differential equation (iii) subject to the conditions

    φ(0) = h1,  φ′(0) = 1,  φ(1) = h2,      (iv)

in order to find the shape φ(x) and the parameter h1.

It is most convenient to write the solution of (iii) with c1 = h1/4 parametrically, by setting φ′ = ξ. On physical grounds we expect that the slope φ′ will decrease with increasing x, and so we suppose that ξ decreases from ξ1 to ξ2 as x increases from 0 to 1, where we know that ξ1 = φ′(0) = 1. This leads to

    φ = (h1/4) ( ξ⁻³ + 2ξ⁻¹ + ξ ),
    x = (h1/4) ( (3/4)ξ⁻⁴ + ξ⁻² + log ξ ) + c2,   1 ≥ ξ ≥ ξ2,

where c2 is a constant of integration and ξ is the parameter. Since ξ = φ′ = 1 when x = 0, the second of these gives c2 = −7h1/16, and so

    φ = (h1/4) ( ξ⁻³ + 2ξ⁻¹ + ξ ),  x = (h1/4) ( (3/4)ξ⁻⁴ + ξ⁻² + log ξ − 7/4 ),   1 ≥ ξ ≥ ξ2.      (v)

The boundary condition φ = h2 at x = 1, i.e. at ξ = ξ2, requires that

    (h1/4) ( ξ2⁻³ + 2ξ2⁻¹ + ξ2 ) = h2,  (h1/4) ( (3/4)ξ2⁻⁴ + ξ2⁻² + log ξ2 − 7/4 ) = 1,      (vi)

which are two equations for determining h1 and ξ2. Dividing the first of (vi) by the second yields a single equation for ξ2:

    ξ2⁵ + 2ξ2³ − h2 ξ2⁴ log ξ2 + (7/4) h2 ξ2⁴ − h2 ξ2² + ξ2 − (3/4) h2 = 0.      (vii)

If this can be solved for ξ2, then either equation in (vi) gives the value of h1, and (v) then provides a parametric description of the optimal shape. For example, if we take h2 = 1, then the root of (vii) is ξ2 ≈ 0.521703, and then h1 ≈ 0.350943.

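A short calculation reproduces these numbers (the root bracket is an illustrative choice):

```python
import numpy as np
from scipy.optimize import brentq

h2 = 1.0
f = lambda s: (s**5 + 2*s**3 - h2*s**4*np.log(s) + 1.75*h2*s**4
               - h2*s**2 + s - 0.75*h2)          # equation (vii)
xi2 = brentq(f, 0.3, 0.9)                        # bracket chosen by inspection
h1 = 4.0*h2/(xi2**-3 + 2.0/xi2 + xi2)            # from (vi)_1
print(xi2, h1)                                   # ~0.521703 and ~0.350943
```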
Example 7.5: Consider the variational problem where we are asked to minimize the functional

    F{φ} = ∫_0^1 f(φ, φ′) dx

over some admissible set of functions A. Note that this is a special case of the standard problem, in which the function f(x, φ, φ′) is not explicitly dependent on x: here f depends on x only through φ(x) and φ′(x). The Euler equation is given, as usual, by

    d/dx f_φ′ − f_φ = 0 for 0 < x < 1.

Multiplying this by φ′ gives

    φ′ d/dx f_φ′ − φ′ f_φ = 0,

which can be written equivalently as

    d/dx ( φ′ f_φ′ ) − φ″ f_φ′ − φ′ f_φ = 0.

Since d/dx f(φ, φ′) = φ′ f_φ + φ″ f_φ′, this simplifies to

    d/dx ( φ′ f_φ′ − f ) = 0 for 0 < x < 1.

It follows that in this special case the Euler equation can be integrated once to have the simplified form

    φ′ f_φ′ − f = constant for 0 < x < 1.

Remark: We could have taken advantage of this in, for example, the preceding problem.

Example 7.6: Elastic bar. The following problem arises when examining the equilibrium state of a one-dimensional bar composed of a nonlinearly elastic material. The bar has unit cross-sectional area and occupies the interval 0 ≤ x ≤ L in a reference configuration. An equilibrium state of the bar is characterized by a displacement field u(x), and the material of which the bar is composed is characterized by a potential W(u′).

It is convenient to denote the derivative of W by σ, so that

    σ(u′) = W′(u′);

then σ(x) = σ(u′(x)) represents the stress at the point x in the bar. The end x = 0 of the bar is fixed, so that u(0) = 0; a prescribed force P is applied at the end x = L; and a distributed force per unit length b(x) is applied along the length of the bar.

An admissible displacement field is required to be continuous on [0, L], to be piecewise continuously differentiable on [0, L], and to conform to the boundary condition u(0) = 0. The total potential energy associated with any admissible displacement field is

    V{u} = ∫_0^L W(u′(x)) dx − ∫_0^L b(x) u(x) dx − P u(L),

which, since u(L) = ∫_0^L u′ dx when u(0) = 0, can be written in the conventional form

    V{u} = ∫_0^L f(x, u, u′) dx,  where f(x, u, u′) = W(u′) − bu − Pu′.

The actual displacement field minimizes the potential energy V over the admissible set, and so the three basic ingredients of the theory can now be derived as follows:

i. At any point x at which the displacement field is smooth, the Euler equation

    d/dx (∂f/∂u′) − ∂f/∂u = 0

takes the explicit form

    d/dx W′(u′) + b = 0,

which can be written in terms of stress as

    dσ/dx + b = 0.

ii. The natural boundary condition at the right-hand end is given, according to equation (7.50) in Section 7.5.1, by ∂f/∂u′ = 0, which in the present case reduces to

    σ(x) = W′(u′(x)) = P at x = L.

iii. Finally, suppose that u′ has a jump discontinuity at some location x = s. Then the first Weierstrass-Erdmann corner condition (7.88) requires that ∂f/∂u′ be continuous at x = s, i.e. that the stress σ(x) must be continuous at x = s:

    σ |_{x=s−} = σ |_{x=s+}.      (i)

The second Weierstrass-Erdmann corner condition (7.89) requires that f − u′ ∂f/∂u′ be continuous at x = s, i.e. that the quantity W − u′σ must be continuous at x = s:

    ( W − u′σ ) |_{x=s−} = ( W − u′σ ) |_{x=s+}.      (ii)

Remark: The generalization of the quantity W − u′σ to three dimensions is known as the Eshelby tensor.

In order to illustrate how a discontinuity in u′ can arise in an elastic bar, observe first that, according to the first Weierstrass-Erdmann condition (i), the stress σ on either side of x = s has to be continuous. Thus if the function σ(u′) is monotonically increasing, then σ = σ(u′) has a unique solution u′ corresponding to a given σ, and so u′ must also be continuous at x = s. On the other hand, if σ(u′) is a non-monotonic function, as shown for example in Figure 7.27(a), then more than one value of u′ can correspond to the same value of σ; in such a case, even though σ(x) is continuous at x = s, it is possible for u′ to be discontinuous, u′(s−) ≠ u′(s+).

Figure 7.27: (a) A non-monotonic (rising-falling-rising) stress response function σ(u′) and (b) the corresponding non-convex energy W(u′). The values of u′ at which σ has a local maximum and local minimum correspond to inflection points of the energy W(u′), since W″ = σ′ = 0 there.

The second Weierstrass-Erdmann condition (ii) tells us that the stress σ at the discontinuity has to have a special value. To see this, we write out (ii) explicitly as

    W(u′(s+)) − u′(s+) σ = W(u′(s−)) − u′(s−) σ,

and then use σ(u′) = W′(u′) to express it in the form

    ∫_{u′(s−)}^{u′(s+)} σ(v) dv = σ [ u′(s+) − u′(s−) ].      (iii)

This implies that the value of σ must be such that the area under the stress response curve in Figure 7.27(a) from u′(s−) to u′(s+) equals the area of the rectangle which has the same base and height σ, or equivalently, that the two shaded areas in Figure 7.27(a) must be equal.

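The construction (iii) is easy to carry out numerically. In the sketch below, σ(v) = v³ − v is an illustrative non-monotonic stress function (an assumption for the example, not a constitutive law from these notes), with energy W(v) = v⁴/4 − v²/2.

```python
import numpy as np
from scipy.optimize import fsolve

sigma = lambda v: v**3 - v          # illustrative non-monotonic stress
W = lambda v: v**4/4 - v**2/2       # its energy, so that W' = sigma

def corner(z):
    v1, v2, s = z
    return [sigma(v1) - s,                      # stress continuity, (i)
            sigma(v2) - s,
            W(v2) - W(v1) - s*(v2 - v1)]        # equal areas, (iii)

v1, v2, s = fsolve(corner, [-1.2, 1.2, 0.05])
print(v1, v2, s)                    # -> -1, 1, 0 for this particular sigma
```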
Example 7.7: Non-smooth extremal. Find a curve that extremizes

    I(φ) = ∫_0^a (φ′(x))³ dx,  φ(0) = φ(a) = 0.

Find also a curve that extremizes

    F{φ} = ∫_0^1 f(x, φ(x), φ′(x)) dx,

that begins from a given point on the line x = 0 and ends at (1, b) after contacting a given curve y = g(x). Remark: By identifying the curve y = g(x) with the surface of a mirror, and specializing the functional F to the travel time of light, one can thus derive the law of reflection for light.

Example 7.8: Inequality constraint. Find a curve that extremizes a functional of the above form but that is prohibited from entering the interior of the circle (x − a/2)² + y² = b².

Example 7.9: An example to caution against simple-minded discretization (due to John Ball). Let

    F{u} = ∫_0^1 ( u³(x) − x )² ( u′(x) )⁶ dx

for all functions such that u(0) = 0, u(1) = 1. Clearly F{u} ≥ 0. Moreover F{ū} = 0 for ū(x) = x^{1/3}, and therefore the minimizer of F is ū(x) = x^{1/3}. Now take the continuous test function

    u1(x) = cx for 0 < x < h,  u1(x) = ū(x) for h < x < 1,      (i)

where c = h^{−2/3} so that u1 is continuous at x = h. Calculate F{u1} for this function, take the limit as h → 0, and observe that F{u1} does not go to zero (i.e. to F{ū}). To anticipate this difficulty in a different way, consider a 2-element discretization and take a piecewise linear test function that is linear on each segment; calculate the functional F at this test function. Alternatively, discretize the interval [0, 1] into N segments, take a piecewise linear test function that is linear on each segment, next minimize F at fixed N, and finally take the limit as N tends to infinity. What do you get? (You will get an answer, but not the correct one.)

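A numerical sketch makes the trap concrete (the grid and quadrature below are illustrative choices; a left-endpoint rule is used so that the minimizer x^{1/3} evaluates to exactly zero at the nodes):

```python
import numpy as np

def F(u, x):
    # left-endpoint quadrature of \int (u^3 - x)^2 (u')^6 dx
    du = np.diff(u)/np.diff(x)
    return np.sum((u[:-1]**3 - x[:-1])**2 * du**6 * np.diff(x))

x = np.linspace(0.0, 1.0, 200001)
print(F(x**(1/3), x))                            # effectively 0
for h in (0.1, 0.01, 0.001):
    u = np.where(x < h, x/h**(2/3), x**(1/3))    # the test family (i)
    print(h, F(u, x), 8/(105*h))                 # blows up like 8/(105 h)
```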
Example 7.10: Legendre necessary condition for a local minimum. Let

    F̂(ε) = ∫_0^1 f(x, φ + εη, φ′ + εη′) dx

for all functions η(x) with η(0) = η(1) = 0. Show that

    F̂″(0) = ∫_0^1 [ f_φφ η² + 2 f_φφ′ η η′ + f_φ′φ′ (η′)² ] dx.

Suppose that F̂″(0) ≥ 0 for all admissible functions η. Show that it is necessary that

    f_φ′φ′ (x, φ(x), φ′(x)) ≥ 0 for 0 ≤ x ≤ 1.

Example 7.11: Bending of a thin plate. Consider a thin rectangular plate of dimensions a × b that occupies the region A = { (x, y) | 0 < x < a, 0 < y < b } of the x, y-plane. A distributed pressure loading p(x, y) is applied on the planar face of the plate in the z-direction, and the resulting deflection of the plate in the z-direction is denoted by w(x, y).

The edges x = 0 and y = 0 of the plate are clamped, which implies the geometric restrictions that the plate can neither deflect nor rotate along these edges:

    w = 0, ∂w/∂x = 0 on x = 0, 0 < y < b;
    w = 0, ∂w/∂y = 0 on y = 0, 0 < x < a.      (i)

The edge y = b is hinged, which means that its deflection must be zero but that there is no geometric restriction on the slope:

    w = 0 on y = b, 0 < x < a.      (ii)

Finally, the edge x = a is free, in the sense that the deflection and the slope are not geometrically restricted there in any way.

The potential energy of the plate and loading associated with an admissible deflection field, i.e. a function w(x, y) that obeys (i) and (ii), is given by

    Φ{w} = ∫_A (D/2) [ ( ∂²w/∂x² + ∂²w/∂y² )² − 2(1 − ν) ( (∂²w/∂x²)(∂²w/∂y²) − (∂²w/∂x∂y)² ) ] dxdy − ∫_A p w dxdy,      (iii)

where D and ν are constants. The actual deflection of the plate is given by the minimizer of Φ. We are asked to derive the Euler equation and the natural boundary conditions to be satisfied by the minimizing w.

Answer: The Euler equation is

    ∂⁴w/∂x⁴ + 2 ∂⁴w/∂x²∂y² + ∂⁴w/∂y⁴ = p/D,      (iv)

and the natural boundary conditions are

    ∂²w/∂y² + ν ∂²w/∂x² = 0 on y = b, 0 < x < a,      (v)

and

    ∂²w/∂x² + ν ∂²w/∂y² = 0,  ∂³w/∂x³ + (2 − ν) ∂³w/∂x∂y² = 0 on x = a, 0 < y < b.      (vi)

Example 7.12: Consider the functional

    F{φ} = ∫_0^1 [ ((φ′)² − 1)² + φ² ] dx,  φ(0) = φ(1) = 0,      (i)

and determine a minimizing sequence φ1, φ2, φ3, . . . such that F{φk} approaches its infimum as k → ∞. If the minimizing sequence itself converges to a limiting function φ*, show that F{φ*} is not the infimum of F.

Remark: Note that this functional is non-negative, since its integrand is the sum of two non-negative terms. If the functional takes the value zero, then each of those terms must vanish individually; thus we must have φ′(x) = ±1 and φ(x) = 0 on the interval 0 < x < 1. These cannot both be satisfied by a regular function.

Let φk(x) be the piecewise linear saw-tooth function with k local maxima shown in Figure 7.28: the slope of each linear segment is ±1, the base of each tooth is h = 1/k and its height is h/2. Thus as k increases there are more and more teeth, each of which has a smaller base and smaller height.

Figure 7.28: Sawtooth function φk(x) with k local maxima and linear segments of slope ±1: φk(x) = x − nh on the rising segments and φk(x) = −x + (n + 1)h on the falling segments.

Observe that the first term in the integrand of (i) vanishes identically for any k, while the second term, which equals the area under the square of φk, approaches zero as k → ∞. Thus the family of saw-tooth functions is a minimizing sequence of (i). However, note that since φk(x) → 0 at each fixed x as k → ∞, the limiting function to which this sequence converges is φ(x) = 0, which is not a minimizer of (i).
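The following sketch evaluates (i) along the sawtooth sequence on a grid aligned with the kinks; the first term vanishes identically, and F{φk} = 1/(12k²) → 0 (the grid resolution m is an illustrative choice):

```python
import numpy as np

def sawtooth(k, m=200):
    # k teeth on [0, 1]; nodes aligned with the kinks; slopes exactly +-1
    h = 1.0/k
    x = np.linspace(0.0, 1.0, 2*k*m + 1)
    phi = h/2 - np.abs((x % h) - h/2)
    return x, phi

def F(x, phi):
    du = np.diff(phi)/np.diff(x)
    mid = 0.5*(phi[1:] + phi[:-1])
    return np.sum(((du**2 - 1.0)**2 + mid**2)*np.diff(x))

for k in (1, 2, 4, 8, 16):
    x, phi = sawtooth(k)
    print(k, F(x, phi), 1.0/(12*k**2))   # F{phi_k} -> 0 as k grows
```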