
Ruhr-University Bochum

Faculty of Civil and Environmental Engineering


Institute of Mechanics

A Small Compendium on Vector


and Tensor Algebra and Calculus

Klaus Hackl
Mehdi Goodarzi

2010
Contents
Foreword

1 Vector and tensor algebra
1.1 Scalars and vectors
1.2 Geometrical operations on vectors
1.3 Fundamental properties
1.4 Vector decomposition
1.5 Dot product of vectors
1.6 Index notation
1.7 Tensors of second order
1.8 Dyadic product
1.9 Tensors of higher order
1.10 Coordinate transformation
1.11 Eigenvalues and eigenvectors
1.12 Invariants
1.13 Singular value decomposition
1.14 Cross product
1.15 Triple product
1.16 Dot and cross product of vectors: miscellaneous formulas

2 Tensor calculus
2.1 Tensor functions & tensor fields
2.2 Derivative of tensor functions
2.3 Derivatives of tensor fields, Gradient, Nabla operator
2.4 Integrals of tensor functions
2.5 Line integrals
2.6 Path independence
2.7 Surface integrals
2.8 Divergence and curl operators
2.9 Laplace's operator
2.10 Gauss' and Stokes' Theorems
2.11 Miscellaneous formulas and theorems involving nabla
2.12 Orthogonal curvilinear coordinate systems
2.13 Differentials
2.14 Differential transformation
2.15 Differential operators in curvilinear coordinates
Foreword
A quick review of vector and tensor algebra, geometry and analysis is presented. The reader is assumed to have sufficient familiarity with the subject; the material is included as an entry point as well as a reference for the subsequent chapters.
Chapter 1

Vector and tensor algebra


Algebra is concerned with operations defined on sets with certain properties. Tensor and vector algebra deals with the properties of, and operations on, the set of tensors and vectors. Throughout this chapter, together with the algebraic aspects, we also consider the geometry of tensors to obtain further insight.

1.1 Scalars and vectors


There are physical quantities that can be specified by a single real number, like temperature and speed. These are called scalars. Other quantities whose specification requires a magnitude (a positive real number) and a direction, for instance velocity and force, are called vectors. A typical illustration of a vector is an arrow, i.e. a directed line segment. Given an appropriate unit, the length of the arrow reflects the magnitude of the vector. In the present context the precise terminology would be Euclidean vector or geometric vector, to acknowledge that in mathematics, vector has a broader meaning with which we are not concerned here.

It is instructive to have a closer look at the meaning of the direction of a vector. Given a vector v and a directed line L (Fig. 1.1), the projection of the vector onto that direction is a scalar v_L = ‖v‖ cos θ, in which θ is the angle between v and L. Therefore, a vector can be thought of as a function that assigns a scalar to each direction, v(L) = v_L. This notion can be naturally extended to understand tensors. A tensor (of second order) is a function that assigns vectors to directions, T(L) = T_L, in the sense of projection. In other words, the projection of a tensor T on a direction L is a vector T_L. This looks rather abstract, but its meaning will become clear in the sequel when we explain Cauchy's formula, in which the dot product of stress (a tensor) and area (a vector) yields the traction force (a vector).

Fig. 1.1: Projection.

Notation 1. Vectors and tensors are denoted by bold-face letters like A and v, and the magnitude of a vector by ‖v‖ or simply by its normal-face v. In handwriting an underbar is used to denote tensors and vectors.


1.2 Geometrical operations on vectors


Definition 2 (Equality of vectors). Two vectors are equal if they have the same magnitude and direction.

Definition 3 (Summation of vectors). The sum of any two vectors v and w is a vector u = v + w formed by placing the initial point of w on the terminal point of v and joining the initial point of v and the terminal point of w (Fig. 1.2(a,b)). This is equivalent to the parallelogram law for vector addition, as shown in Fig. 1.2(c). Subtraction of vectors, v − w, is defined as v + (−w).

Fig. 1.2: Vector summation.

Definition 4 (Multiplication of a vector by a scalar). For any real number (scalar) α and vector v, their multiplication is a vector αv whose magnitude is |α| times the magnitude of v and whose direction is the same as v if α > 0 and opposite to v if α < 0. If α = 0 then αv = 0.

1.3 Fundamental properties


The set of geometric vectors V together with the summation and multiplication operations just defined is subject to the following properties.

Property 5 (Vector summation).

1. v + 0 = 0 + v = v   Existence of additive identity
2. v + (−v) = (−v) + v = 0   Existence of additive inverse
3. v + w = w + v   Commutative law
4. u + (v + w) = (u + v) + w   Associative law

Property 6 (Multiplication with scalars).

1. α(v + w) = αv + αw   Distributive law
2. (α + β)v = αv + βv   Distributive law
3. α(βv) = (αβ)v   Associative law
4. 1v = v   Multiplication with the scalar identity.

A vector can be rigorously defined as an element of a vector space.

Definition 7 (Real vector space). The mathematical system (V, +, R, ∗) is called a real vector space, or simply a vector space, where V is the set of geometrical vectors, + is vector summation with the four properties mentioned, R is the set of real numbers furnished with summation and multiplication of numbers, and ∗ stands for the multiplication of a vector with a real number (scalar) having the above-mentioned properties.
The reader may wonder why this definition is needed. Remember that a quantity whose expression requires magnitude and direction is not necessarily a vector(!). A well-known example is finite rotation, which has both magnitude and direction but is not a vector because it does not follow the properties of vector summation as defined before. Therefore, having magnitude and direction is not sufficient for a quantity to be identified as a vector; it must also obey the rules of summation and multiplication with scalars.
Definition 8 (Unit vector). A unit vector is a vector of unit magnitude. For any vector v its unit vector is denoted by e_v or v̂ and is equal to v̂ = v/‖v‖. One could say that the unit vector carries the information about direction. Therefore, magnitude and direction as the constituents of a vector are multiplicatively decomposed as v = v v̂.

1.4 Vector decomposition


Writing a vector v as the sum of its components is called decomposition. Components of a
vector are its projections onto the coordinate axes (Fig. 1.3)
v = v1 e1 + v2 e2 + v3 e3 , (1.1)
where {e1 , e2 , e3 } are unit basis vectors of coordinate system, {v1 , v2 , v3 } are components
of v and {v1 e1 , v2 e2 , v3 e3 } are component vectors of v .
Since a vector is specified by its components relative to a given coordinate system, it can alternatively be denoted in array notation,

v ≡ \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = (v_1 \; v_2 \; v_3)^T .   (1.2)

Using the Pythagorean theorem the norm (magnitude) of a vector v in three-dimensional space can be written as

‖v‖ = √(v1² + v2² + v3²) .   (1.3)

Fig. 1.3: Components.
Remark 9. Representation of a geometrical vector by its equivalent array (or tuple) is
the underlying idea of analytic geometry which basically paves the way for an algebraic
treatment of geometry.

1.5 Dot product of vectors


The dot product (also called inner or scalar product) of two vectors v and w is defined as
v · w = v w cos θ , (1.4)
where θ is the angle between v and w.

Property 10 (Dot product).

1. v · w = w · v   Commutativity
2. u · (αv + βw) = α u · v + β u · w   Linearity
3. v · v ≥ 0 and v · v = 0 ⇔ v = 0   Positivity.

Geometrical interpretation of dot product


The definition of the dot product (1.4) can be restated as

v · w = (vv̂) · (wŵ) = v (v̂ · w) = w (v · ŵ) = vw cos θ , (1.5)

which says that the dot product of the vectors v and w equals the projection of w onto the direction of v times the magnitude of v (see Fig. 1.1), or the other way around. Also, dividing the leftmost and rightmost terms of (1.5) by vw gives

v̂ · ŵ = cos θ , (1.6)

which says the dot product of unit vectors (standing for directions) determines the angle
between them. A simple and yet interesting result is obtained for components of a vector
v as
v1 = v · e1 , v2 = v · e2 , v3 = v · e3 . (1.7)

Important results
1. In any orthonormal coordinate system including Cartesian coordinates it holds that

e1 · e1 = 1 , e1 · e2 = 0 , e1 · e3 = 0
e2 · e1 = 0 , e2 · e2 = 1 , e2 · e3 = 0 (1.8)
e3 · e1 = 0 , e3 · e2 = 0 , e3 · e3 = 1 ,

2. therefore*

v · w = v1 w1 + v2 w2 + v3 w3 .   (1.9)

3. For any two nonzero vectors a and b, a · b = 0 if and only if a is normal to b.

4. The norm of a vector v can be obtained by

‖v‖ = (v · v)^{1/2} .   (1.10)

* Note that v · w = (v1 e1 + v2 e2 + v3 e3) · (w1 e1 + w2 e2 + w3 e3) = v1 w1 e1 · e1 + v1 w2 e1 · e2 + ··· .
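As a quick numerical illustration of (1.4), (1.6) and (1.9), the following Python/NumPy sketch (with arbitrarily chosen components, our own example) checks the component formula and recovers the angle between two vectors:

import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 2.0])

# Dot product as in (1.9): component-wise sum v1*w1 + v2*w2 + v3*w3
dot_components = v[0]*w[0] + v[1]*w[1] + v[2]*w[2]
assert np.isclose(np.dot(v, w), dot_components)

# Angle between v and w from (1.4)/(1.6): cos(theta) = v.w / (|v| |w|)
cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(np.degrees(np.arccos(cos_theta)))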

1.6 Index notation


A typical tensorial calculation demands component-wise arithmetic derivations which is
usually a tedious task. As an example let us try to expand the left-hand-side of equation
(1.9)

v · w = (v1 e1 + v2 e2 + v3 e3 ) · (w1 e1 + w2 e2 + w3 e3 )
= v1 w1 e1 · e1 + v1 w2 e1 · e2 + v1 w3 e1 · e3
+ v2 w1 e2 · e1 + v2 w2 e2 · e2 + v2 w3 e2 · e3
+ v3 w1 e3 · e1 + v3 w2 e3 · e2 + v3 w3 e3 · e3 , (1.11)

which together with equation (1.8) yields the right-hand side of (1.9). As can be seen, even this very basic expansion requires considerable effort. The so-called index notation was developed to simplify such manipulations.
Index notation uses parametric index instead of explicit index. If for example the letter
i is used as index in vi it addresses any of the possible values i = 1, 2, 3 in three dimensional
space, therefore representing any of the vector v 's components. Then {vi } is equivalent to
the set of all vi 's, namely {v1 , v2 , v3 }.

Notation 11. When an index appears twice in one term then that term is summed up
over all possible values of the index. This is called Einstein's convention.

To clarify this, let us have a look at the trace of a 3 × 3 matrix, tr(A) = \sum_{i=1}^{3} A_{ii} = A11 + A22 + A33. Based on Einstein's convention one simply writes tr(A) = A_ii, because the index i appears twice and is therefore summed over all its possible values i = 1, 2, 3, so that A_ii = A11 + A22 + A33. As another example, a vector v can be written as v = v_i e_i, because the index i appears twice and v_i e_i = v1 e1 + v2 e2 + v3 e3.
Remark 12. A repeating index (which is summed up over) can be freely renamed. For
example vi wi Amn can be rewritten as vk wk Amn because in fact vi wi Amn = Bmn and index
i does not appear as one of the free indices which can vary among {1, 2, 3}. Note that
renaming i to m or n would not be possible as it changes the meaning.
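Einstein's convention maps directly onto NumPy's einsum. A small sketch (our own example values) illustrating the trace and the free renaming of a repeated index:

import numpy as np

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# Repeated index => summation: trace tr(A) = A_ii
assert np.isclose(np.einsum('ii', A), np.trace(A))

# v_i w_i: a repeated index can be renamed freely ('i' or 'k' gives the same result)
assert np.isclose(np.einsum('i,i', v, w), np.einsum('k,k', v, w))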

1.7 Tensors of second order


Based on the definition of the inner product, length and angle, which are the building blocks of Euclidean geometry, become completely general concepts (Eqs. (1.10) and (1.6)). That is why an n-dimensional vector space (Def. 7) together with an inner product operation (Prop. 10) is called a Euclidean space, denoted by Eⁿ. Since we are basically unable to draw
geometrical scenes in higher than three dimensions (or maybe two in fact!), it may seem just
too abstract to generalize classical geometry to higher dimensions. If you remember that
the physical space-time can be best expressed in a four-dimensional geometrical framework,
then you might agree that this limitation of dimensions is more likely to be a restriction of
our perception rather than a part of physical reality.

Another generalization concerns projection. Projection can be seen as a geometrical transformation that can be defined based on the inner product. Up to three dimensions vector projection is easily understood. Now if we allow vectors to be entities belonging to an arbitrary vector space endowed with an inner product, one can think of the inner product as the means of projecting a vector in the space onto another vector or direction. Note that we have put no restrictions on the vector space, such as its dimension. One might ask: what do we need all these cumbersome generalizations for? The answer is: to understand more complex constructs based on the basic understanding we already have of the components of a geometrical vector.
So far we have seen that a scalar can be expressed by a single real number. This can be seen as follows: a scalar has no components on the three coordinate axes, so it requires 3⁰ = 1 real number to be specified. On the other hand a vector v has three components, its projections onto the three coordinate axes X_i, each obtained by v_i = v · e_i. Therefore, v requires 3¹ = 3 scalars to be specified.

There are more complex constructs, called second-order tensors. The three projections of a second-order tensor A onto the coordinate axes are obtained by the inner product of A with the basis vectors, A_i = A · e_i. These projections are vectors (not scalars!), and since each vector is determined by three scalars (its components), a second-order tensor requires 3 × 3 = 3² real numbers to be specified. In array form the three components of the tensor A are vectors, denoted by

A = ( A_1  A_2  A_3 ) ,   (1.12)

where each vector A_i has three components,

A_i = \begin{pmatrix} A_{1i} \\ A_{2i} \\ A_{3i} \end{pmatrix} ,   (1.13)

therefore A is written in matrix form as

A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} ,   (1.14)

where each vector A_i is written component-wise in one column of the matrix.


In analogy to the notation

v = v_i e_i   (1.15)

for vectors (based on Einstein's summation convention), a tensor can be written as

A = A_ij e_i e_j  or  A = A_ij e_i ⊗ e_j ,   (1.16)

with ⊗ being the dyadic product. Note that we usually use the first notation for brevity.

1.8 Dyadic product


Dyadic product of two vectors v and w is a tensor A such that

Aij = vi wj . (1.17)

Property 13. The dyadic product has the following properties:

1. v ⊗ w ≠ w ⊗ v
2. u ⊗ (αv + βw) = α u ⊗ v + β u ⊗ w
   (αu + βv) ⊗ w = α u ⊗ w + β v ⊗ w
3. (u ⊗ v) · w = (v · w) u
   u · (v ⊗ w) = (u · v) w
From equation (1.16), considering the above properties, we obtain

A_ij = e_i · A · e_j ,   (1.18)

which is the analog of equation (1.7). In equation (1.16) the dyads e_i e_j are basis tensors, which are given in matrix notation as

e_1 e_1 = \begin{pmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&0 \end{pmatrix} ,  e_1 e_2 = \begin{pmatrix} 0&1&0 \\ 0&0&0 \\ 0&0&0 \end{pmatrix} ,  e_1 e_3 = \begin{pmatrix} 0&0&1 \\ 0&0&0 \\ 0&0&0 \end{pmatrix} ,   (1.19)

e_2 e_1 = \begin{pmatrix} 0&0&0 \\ 1&0&0 \\ 0&0&0 \end{pmatrix} ,  e_2 e_2 = \begin{pmatrix} 0&0&0 \\ 0&1&0 \\ 0&0&0 \end{pmatrix} ,  e_2 e_3 = \begin{pmatrix} 0&0&0 \\ 0&0&1 \\ 0&0&0 \end{pmatrix} ,   (1.20)

e_3 e_1 = \begin{pmatrix} 0&0&0 \\ 0&0&0 \\ 1&0&0 \end{pmatrix} ,  e_3 e_2 = \begin{pmatrix} 0&0&0 \\ 0&0&0 \\ 0&1&0 \end{pmatrix} ,  e_3 e_3 = \begin{pmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&1 \end{pmatrix} .   (1.21)
Note that not every second-order tensor can be expressed as the dyadic product of two vectors; however, every second-order tensor can be written as a linear combination of dyadic products of vectors, its most common form being the decomposition with respect to a given coordinate system, A = A_ij e_i e_j.
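The dyadic product and Property 13 can be checked numerically; a minimal NumPy sketch with arbitrarily chosen vectors (our own example):

import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
w = np.array([2.0, 1.0, -1.0])

A = np.outer(v, w)                           # A_ij = v_i w_j, equation (1.17)
assert not np.allclose(A, np.outer(w, v))    # v ⊗ w != w ⊗ v in general

# (u ⊗ v) · w = (v · w) u, Property 13.3
assert np.allclose(np.outer(u, v) @ w, np.dot(v, w) * u)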

1.9 Tensors of higher order


The dyadic product provides a convenient passage to the definition of higher-order tensors. We know that the dyadic product of two vectors (first-order tensors) yields a second-order tensor. The generalization of the idea is immediate for a set of n given vectors v_1, ..., v_n:

T = v_1 v_2 ··· v_n ,   (1.22)

which is a tensor of the nth order. To make sense of it, let us take a look at the inner product of the above tensor with a vector w:

T · w = (v_1 v_2 ··· v_n) · w = (v_n · w)(v_1 v_2 ··· v_{n−1}) ,   (1.23)

where (v_n · w) is of course a scalar and v_1 ··· v_{n−1} a tensor of (n − 1)th order. This means the tensor T maps a first-order tensor (vector) to a tensor of (n − 1)th order. We re-emphasize that not all tensors can be written as dyadic products of vectors; however, tensors can be decomposed on the basis of a coordinate system. The decomposition of an nth-order tensor T is written as

T = T_{i1 i2 ··· in} e_{i1} e_{i2} ··· e_{in} ,   (1.24)

where i_1, ..., i_n ∈ {1, 2, 3} and e_{i1} e_{i2} ··· e_{in} are the basis tensors of the nth order. It should be clear that the visualization of, e.g., third-order tensors would require three-dimensional arrays, and so on.
Remark 14. A vector is a first-order tensor and a scalar is a zeroth-order tensor. The order of a tensor equals the number of its indices in index notation. Furthermore, a tensor of order n requires 3ⁿ scalars to be specified. We usually address a second-order tensor by simply calling it a tensor; the meaning should be clear from the context.

If an mth-order tensor A is multiplied with an nth-order tensor B we get an (m + n)th-order tensor C such that

C = A ⊗ B = (A_{i1···im} e_{i1} ··· e_{im}) ⊗ (B_{j1···jn} e_{j1} ··· e_{jn})
  = A_{i1···im} B_{j1···jn} e_{i1} ··· e_{im} e_{j1} ··· e_{jn}
  = C_{k1···k_{m+n}} e_{k1} ··· e_{k_{m+n}} .   (1.25)

Algebra and properties of higher order tensors


On the set of nth-order tensors a summation operation can be defined as

A + B = (A_{i1···in} + B_{i1···in}) e_{i1} ··· e_{in} ,   (1.26)

and multiplication with a scalar as

αA = (α A_{i1···in}) e_{i1} ··· e_{in} ,   (1.27)

having properties similar to 5 and 6. In fact, tensors of order n are elements of a vector space of dimension 3ⁿ.

Exercise 1. Show that the dimension of the vector space consisting of all nth-order tensors is 3ⁿ. (Hint: find 3ⁿ independent tensors that span all elements of the space.)

Generalization of inner product


The inner (dot) product of two tensors A = A_{i1···im} e_{i1} ··· e_{im} and B = B_{j1···jn} e_{j1} ··· e_{jn} is given as

A · B = A_{i1···im} e_{i1} ··· e_{im} · B_{j1···jn} e_{j1} ··· e_{jn}
      = A_{i1···i_{m−1} k} B_{k j2···jn} e_{i1} ··· e_{i_{m−1}} e_{j2} ··· e_{jn} ,   (1.28)

which is a tensor of (m + n − 2)th order. Note that the innermost index k is common and summed over. A generalization of this idea follows.

Definition 15 (Contraction product). For any two tensors A of the mth order and B of the nth order, their r-contraction product for r ≤ min{m, n} yields a tensor C = A \overset{r}{·} B of order (m + n − 2r) such that

C = A_{i1···i_{m−r} k1···kr} B_{k1···kr j_{r+1}···jn} e_{i1} ··· e_{i_{m−r}} e_{j_{r+1}} ··· e_{jn} .   (1.29)

Note that the r innermost indices are common. The most frequently used case of the contraction product is the so-called double contraction, denoted by A : B ≡ A \overset{2}{·} B.

Exercise 2. Write the identity σ = C : ε in index notation, where C is a fourth-order tensor and σ and ε are second-order tensors. Then expand the components σ12 and σ11 for the case of two-dimensional Cartesian coordinates.

Remark 16. The contraction product of two tensors is not symmetric in general, i.e. A \overset{r}{·} B ≠ B \overset{r}{·} A, except when r = 1 and A and B are first-order tensors (vectors), or when A and B have certain symmetry properties.
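The double contraction σ = C : ε of Exercise 2 can be written as a single einsum call. A sketch with random components (a hypothetical C, not a physical stiffness tensor):

import numpy as np

C = np.random.rand(3, 3, 3, 3)   # hypothetical fourth-order tensor
eps = np.random.rand(3, 3)       # hypothetical second-order tensor

# Double contraction sigma_ij = C_ijkl eps_kl, equation (1.29) with r = 2
sigma = np.einsum('ijkl,kl->ij', C, eps)

# Expanded component sigma_11 = C_11kl eps_kl (cf. Exercise 2)
sigma_11 = sum(C[0, 0, k, l] * eps[k, l] for k in range(3) for l in range(3))
assert np.isclose(sigma[0, 0], sigma_11)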

Kronecker delta
The second-order unity tensor or identity tensor, denoted by I, is defined by the following fundamental property.

Property 17. For every vector v it holds that

I · v = v · I = v .   (1.30)

The representation of the unity tensor in index notation,

δ_ij = \begin{cases} 1 , & \text{if } i = j \\ 0 , & \text{if } i ≠ j , \end{cases}   (1.31)

is called Kronecker's delta. Its expansion for i, j ∈ {1, 2, 3} gives in matrix form

(δ_ij) = \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{pmatrix} .   (1.32)
Some important properties of Kronecker delta are
1. Orthonormality of the basis vectors {ei } in equation (1.8), can be abbreviated as
ei · ej = δij . (1.33)

2. The so called index exchange rule is given as


ui δij = uj or in general Aij···m···z δmn = Aij···n···z . (1.34)

3. Trace of Kronecker delta


δii = 3 . (1.35)

Exercise 3. Proofs are left to the reader as a practice of index notation.



Totally antisymmetric tensor


The so-called totally antisymmetric tensor or permutation symbol, also named the Levi-Civita symbol after the Italian mathematician Tullio Levi-Civita, is a third-order tensor defined as

ε_ijk = \begin{cases} +1 , & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3) \\ −1 , & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3) \\ 0 , & \text{otherwise, i.e. if } i = j \text{ or } j = k \text{ or } k = i. \end{cases}   (1.36)

There are important properties of, and relations involving, the permutation symbol, as follows.

1. It is immediate from the definition that

ε_ijk = −ε_jik = −ε_ikj = −ε_kji = ε_kij = ε_jki .   (1.37)

2. The determinant of a 3 × 3 matrix A can be expressed as

det(A) = ε_ijk A_{1i} A_{2j} A_{3k} .   (1.38)

3. The general relationship between the Kronecker delta and the permutation symbol reads

ε_ijk ε_lmn = det \begin{pmatrix} δ_il & δ_im & δ_in \\ δ_jl & δ_jm & δ_jn \\ δ_kl & δ_km & δ_kn \end{pmatrix} ,   (1.39)

and as special cases

ε_jkl ε_jmn = δ_km δ_ln − δ_kn δ_lm ,   (1.40)
ε_ijk ε_ijm = 2 δ_km .   (1.41)

Exercise 4. Verify the identities in equations (1.40) and (1.41) based on (1.39).
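The ε-δ identities (1.40) and (1.41) are easy to verify numerically. A NumPy sketch that builds ε_ijk directly from its definition (1.36):

import numpy as np

# Build the permutation symbol eps_ijk from its definition (1.36)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

delta = np.eye(3)

# Identity (1.40): eps_jkl eps_jmn = d_km d_ln - d_kn d_lm
lhs = np.einsum('jkl,jmn->klmn', eps, eps)
rhs = np.einsum('km,ln->klmn', delta, delta) - np.einsum('kn,lm->klmn', delta, delta)
assert np.allclose(lhs, rhs)

# Identity (1.41): eps_ijk eps_ijm = 2 d_km
assert np.allclose(np.einsum('ijk,ijm->km', eps, eps), 2 * delta)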

1.10 Coordinate transformation


Whether seen as an empirical fact or a mathematical postulate, it is well accepted that:

Definition 18 (Principle of frame indifference). A mathematical model of any physical phenomenon must be formulated without reference to any particular coordinate system.

It is a merit of vectors and tensors that they provide mathematical models with such generality. However, in solving real-world problems everything must eventually be expressed in terms of scalars for computation, and therefore the introduction of a coordinate system is inevitable. The immediate question is then how to transform vectorial and tensorial quantities from one coordinate system to another.

Suppose that the vector v is decomposed in Cartesian coordinates X1 X2 X3 with bases {e1, e2, e3} (Fig. 1.4):

v = v1 e1 + v2 e2 + v3 e3 .   (1.42)

Since a vector is completely determined by its components, we should be able to find the decomposition of v in a second coordinate system X1' X2' X3' with bases {e1', e2', e3'} in terms of {v1, v2, v3}. Using equation (1.7) we have

v1' = v · e1' ,  v2' = v · e2' ,  v3' = v · e3' .   (1.43)

Fig. 1.4: Transformation of coordinates.

Substitution from (1.42) into the latter equation gives

v1' = (v1 e1 + v2 e2 + v3 e3) · e1' = (e1' · e1) v1 + (e1' · e2) v2 + (e1' · e3) v3
v2' = (v1 e1 + v2 e2 + v3 e3) · e2' = (e2' · e1) v1 + (e2' · e2) v2 + (e2' · e3) v3
v3' = (v1 e1 + v2 e2 + v3 e3) · e3' = (e3' · e1) v1 + (e3' · e2) v2 + (e3' · e3) v3 .   (1.44)

Restated in array notation this reads

\begin{pmatrix} v1' \\ v2' \\ v3' \end{pmatrix} = \begin{pmatrix} e1'·e1 & e1'·e2 & e1'·e3 \\ e2'·e1 & e2'·e2 & e2'·e3 \\ e3'·e1 & e3'·e2 & e3'·e3 \end{pmatrix} \begin{pmatrix} v1 \\ v2 \\ v3 \end{pmatrix} ,   (1.45)

and in index notation

v_j' = (e_j' · e_i) v_i .   (1.46)

Introducing the transformation tensor Q = (e_j' · e_i), we can rewrite the transformation rule as

v' = Q v .   (1.47)
Property 19 (Orthonormal tensor). The transformation Q is geometrically a rotation map with the property

QᵀQ = I  or  Q⁻¹ = Qᵀ ,   (1.48)

which is called orthonormality.

Transformation of tensors

Having the decomposition of an nth-order tensor A in X1 X2 X3 coordinates,

A = A_{i1···in} e_{i1} ··· e_{in} ,   (1.49)

we look for its components in other coordinates X1' X2' X3'. Following the same approach as for vectors, using equation (1.29),

A'_{j1···jn} = A \overset{n}{·} (e'_{j1} ··· e'_{jn})
            = (A_{i1···in} e_{i1} ··· e_{in}) \overset{n}{·} (e'_{j1} ··· e'_{jn})
            = (e_{i1} · e'_{j1}) ··· (e_{in} · e'_{jn}) A_{i1···in} .   (1.50)

For the special case when A is a second-order tensor we can write in matrix notation

A' = Q A Qᵀ ,  Q = (e'_j · e_i) .   (1.51)




Remark 20. The above transformation rules are substantial. They are so fundamental that, as an alternative, we could have defined vectors and tensors based on their transformation properties instead of the geometric and algebraic approach.

Exercise 5. Find the transformation matrix for a rotation by angle θ around X1 in three-
dimensional Cartesian coordinates X1 X2 X3 .

Exercise 6. For a symmetric second-order tensor A derive the transformation formula with the transformation given as

(Q_ij) = \begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix} .
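For a concrete feel of (1.47), (1.48) and (1.51), here is a NumPy sketch using the rotation of Exercise 6 (the angle and the tensor components are arbitrary choices of ours):

import numpy as np

phi = np.radians(30.0)
# Transformation matrix of Exercise 6: rotation about the X3 axis
Q = np.array([[ np.cos(phi), np.sin(phi), 0.0],
              [-np.sin(phi), np.cos(phi), 0.0],
              [ 0.0,         0.0,         1.0]])

# Orthonormality (1.48)
assert np.allclose(Q.T @ Q, np.eye(3))

v = np.array([1.0, 2.0, 3.0])
A = np.diag([1.0, 2.0, 3.0])

v_prime = Q @ v          # vector rule (1.47)
A_prime = Q @ A @ Q.T    # second-order tensor rule (1.51)

# The length of v is invariant under the rotation
assert np.isclose(np.linalg.norm(v), np.linalg.norm(v_prime))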

1.11 Eigenvalues and eigenvectors


The question of transformation from one coordinate system to another is answered. In
practical situations we face a second question which is the choice of coordinate system in
order to simplify the representation of the formulation at hand. A very basic example is
explained here.
Suppose that we want to write a vector v in components. With an arbitrary choice of reference the answer is v = (v1 v2 v3)ᵀ, with v1, v2, v3 ∈ R. However, with a choice of reference coordinates such that, e.g., the X1 axis coincides with the vector v, we come up with v = (v1 0 0)ᵀ. This second representation of the vector is reduced to one component only, which is in fact a scalar, and therefore one-dimensional. Of course, in more complicated situations an appropriate choice of coordinate system can make a much greater difference. We will see that the consequences of our rather practical question turn out to be theoretically subtle. The next step in complexity is how to simplify tensors with a proper choice of coordinate system.

Definition 21. Given a second-order tensor A, every nonzero vector ψ is called an eigenvector of A if there exists a real number λ such that

A · ψ = λψ (1.52)

where λ is the corresponding eigenvalue of ψ .

In general, a tensor multiplied by a vector gives another vector with a different magnitude and direction. Interestingly, however, there are vectors ψ on which the tensor acts like a scalar λ, as laid down by equation (1.52). Now suppose that a coordinate system is chosen so that the X1 axis coincides with ψ. It is clear that

A · e1 = A11 e1 + A21 e2 + A31 e3

and also back to (1.52)


A · e1 = λe1 (1.53)

Comparing both equations gives

A11 = λ , A21 = 0 , A31 = 0 . (1.54)

It is possible to show that for non-degenerate symmetric tensors in R³ there are three orthonormal eigenvectors, which establish an orthonormal coordinate system called the principal coordinates. Following equation (1.54), a symmetric tensor σ in principal coordinates takes the diagonal form

σ = \begin{pmatrix} σ11 & 0 & 0 \\ 0 & σ22 & 0 \\ 0 & 0 & σ33 \end{pmatrix} ,   (1.55)

where its diagonal elements are eigenvalues corresponding to principal directions which are
parallel to eigenvectors. The following theorem generalizes these ideas to n dimensions.

Theorem 22 (Spectral decomposition). Let A be a tensor of second order in n-dimensional


space with ψ 1 , · · · , ψ n being its n linearly independent eigenvectors and their corresponding
eigenvalues λ1 , · · · , λn . Then A can be written as
A = ΨΛΨ−1 (1.56)

where Ψ is the n × n matrix having ψ 's as its columns Ψ = (ψ 1 , · · · , ψ n ), and Λ is an


n × n diagonal matrix with ith diagonal element as λi .

Remark 23. In the case of symmetric tensors equation (1.56) takes the form of
A = ΨΛΨT (1.57)

because for orthonormal eigenvectors, the matrix Ψ is an orthonormal matrix which is in


fact a rotation transformation.
Remark 24. Λ is the representation of A in principal coordinates. Since it is diagonal, one can write

Λ = \sum_{i=1}^{n} λ_i n_i n_i .   (1.58)

Suppose that we want to derive some relationship or develop a model based on tensorial expressions. If the task is accomplished in principal coordinates it takes much less effort, and at the same time the results are completely general, i.e. they hold for any other coordinate system. This is often done in material modeling and in other branches of science as well.
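The spectral decomposition (1.57) for a symmetric tensor can be reproduced with a standard eigensolver. A sketch with an arbitrary symmetric matrix of our choosing:

import numpy as np

# A symmetric second-order tensor (example values)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, Psi = np.linalg.eigh(A)   # eigenvalues and orthonormal eigenvectors

# Spectral decomposition (1.57): A = Psi Lambda Psi^T
assert np.allclose(A, Psi @ np.diag(lam) @ Psi.T)

# Each column of Psi satisfies A psi = lambda psi, equation (1.52)
for i in range(3):
    assert np.allclose(A @ Psi[:, i], lam[i] * Psi[:, i])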
As the final step we would like to solve equation (1.52). It can be recast in the form

(A − λI) ψ = 0 . (1.59)

This equation has nonzero solutions if

det(A − λI) = 0 (1.60)



which is called the characteristic equation of the tensor A. Let us have a look at its expansion,

\begin{vmatrix} A11 − λ & A12 & A13 \\ A21 & A22 − λ & A23 \\ A31 & A32 & A33 − λ \end{vmatrix} = 0 ,   (1.61)

which gives

λ³ − (A11 + A22 + A33) λ² + \left( \begin{vmatrix} A22 & A23 \\ A32 & A33 \end{vmatrix} + \begin{vmatrix} A11 & A13 \\ A31 & A33 \end{vmatrix} + \begin{vmatrix} A11 & A12 \\ A21 & A22 \end{vmatrix} \right) λ − \begin{vmatrix} A11 & A12 & A13 \\ A21 & A22 & A23 \\ A31 & A32 & A33 \end{vmatrix} = 0 ,   (1.62)

a cubic equation with at most three distinct roots.

1.12 Invariants
The components of a vector v change from one coordinate system to another; however, its length remains the same in all coordinates because length is a scalar. This is interesting because one can write the length of a vector in terms of its components in any coordinate system as ‖v‖ = (v_i v_i)^{1/2}, which means there is a combination of components that does not change under coordinate transformation while the components themselves do. Any function of the components f(v1, v2, v3) that does not change under coordinate transformation is called an invariant. This independence of the coordinate system makes invariants physically important; the reason should become clear soon.
Tensors of higher order also have invariants. For a second-order tensor A, looking back at equation (1.62): the values λ, λ² and λ³ are scalars and stay unchanged under coordinate transformation, therefore the coefficients of the equation must remain unchanged as well. They are all invariants of the tensor A, denoted by

I_A = A11 + A22 + A33   (1.63)

II_A = \begin{vmatrix} A22 & A23 \\ A32 & A33 \end{vmatrix} + \begin{vmatrix} A11 & A13 \\ A31 & A33 \end{vmatrix} + \begin{vmatrix} A11 & A12 \\ A21 & A22 \end{vmatrix}   (1.64)

III_A = \begin{vmatrix} A11 & A12 & A13 \\ A21 & A22 & A23 \\ A31 & A32 & A33 \end{vmatrix}   (1.65)

where I, II and III are called the first, second and third principal invariants of A. The above formulas look familiar: the first invariant is the trace, the second is the sum of the principal minors, and the third is the determinant. Of course, we can also express the invariants in terms of the eigenvalues of the tensor:

I_A = tr(A) = λ1 + λ2 + λ3   (1.66)
II_A = ½ [ (tr(A))² − tr(A²) ] = λ1 λ2 + λ2 λ3 + λ3 λ1   (1.67)
III_A = det(A) = λ1 λ2 λ3 .   (1.68)

In material modeling we often look for a scalar-valued function of a tensor, of the form φ(A). Since the function value φ is a scalar, it must be invariant under transformation of coordinates. To fulfill this requirement the function is formulated in terms of the invariants of A; then it is invariant itself. Therefore the most natural form of such a formulation would be

φ = φ(I_A, II_A, III_A) .   (1.69)

This will be used in the sequel when hyper-elastic material models are explained.
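Both routes to the principal invariants, the component formulas (1.63)-(1.65) in the form (1.66)-(1.68) and the eigenvalue expressions, can be compared numerically. A NumPy sketch with an arbitrary symmetric tensor:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam = np.linalg.eigvalsh(A)

I_A   = np.trace(A)
II_A  = 0.5 * (np.trace(A)**2 - np.trace(A @ A))   # equation (1.67)
III_A = np.linalg.det(A)

# Invariants agree with their eigenvalue expressions (1.66)-(1.68)
assert np.isclose(I_A,   lam.sum())
assert np.isclose(II_A,  lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0])
assert np.isclose(III_A, lam.prod())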

1.13 Singular value decomposition


The Euclidean norm of a vector is a non-negative scalar that represents its magnitude. It is also possible to speak of the norm of a tensor. Avoiding the details, we only mention that the question of the norm of a tensor A comes down to finding the maximum eigenvalue of AᵀA. Since the matrix AᵀA is positive semidefinite, its eigenvalues are non-negative.

Definition 25. If λ_i² is an eigenvalue of AᵀA, then λ_i ≥ 0 is called a singular value of A.

When A is a symmetric tensor, the singular values of A equal the absolute values of its eigenvalues.
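A short NumPy check of Definition 25 for an arbitrary (non-symmetric) matrix of our choosing:

import numpy as np

A = np.array([[ 1.0, 2.0, 0.0],
              [ 0.0, 1.0, 3.0],
              [-1.0, 0.0, 2.0]])

s = np.linalg.svd(A, compute_uv=False)    # singular values of A
lam = np.linalg.eigvalsh(A.T @ A)         # eigenvalues of A^T A (non-negative)

# Singular values are the square roots of the eigenvalues of A^T A
assert np.allclose(np.sort(s), np.sort(np.sqrt(np.maximum(lam, 0.0))))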

1.14 Cross product


The cross product of two vectors v and w is a vector u = v × w such that u is perpendicular to v and w, and directed such that the triple (v, w, u) is right-handed (Fig. 1.5). The magnitude of u is given by

u = v w sin θ ,  0 ≤ θ ≤ π .   (1.70)

Fig. 1.5: Cross product.

The cross product of vectors has the following properties.

Property 26 (Cross product).

1. v × w = −w × v   Anti-symmetry
2. u × (αv + βw) = α u × v + β u × w   Linearity.

Important results
1. In any orthonormal coordinate system including Cartesian coordinates it holds that

e1 × e1 = 0 , e1 × e2 = e3 , e1 × e3 = −e2
e2 × e1 = −e3 , e2 × e2 = 0 , e2 × e3 = e1 (1.71)
e3 × e1 = e2 , e3 × e2 = −e1 , e3 × e3 = 0 ,

2. therefore

v × w = \begin{vmatrix} e1 & e2 & e3 \\ v1 & v2 & v3 \\ w1 & w2 & w3 \end{vmatrix} ,   (1.72)

3. which in index notation reads

v × w = ε_ijk v_j w_k e_i  or  [v × w]_i = ε_ijk v_j w_k .   (1.73)

Exercise 7. Using equation (1.71) prove the identity (1.72).
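The index form (1.73) can be evaluated directly with the permutation symbol. A NumPy sketch (our example vectors), compared against the built-in cross product:

import numpy as np

# Permutation symbol from (1.36)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# [v x w]_i = eps_ijk v_j w_k, equation (1.73)
u = np.einsum('ijk,j,k->i', eps, v, w)
assert np.allclose(u, np.cross(v, w))

# u is perpendicular to both factors
assert np.isclose(np.dot(u, v), 0.0) and np.isclose(np.dot(u, w), 0.0)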

Geometrical interpretation of cross product


The norm of the vector v × w is equal to the area of the parallelogram having sides v and w, and since it is normal to the plane of the parallelogram we consider it to be the area vector. It is then clear that the area A of a triangle with sides v and w is given by

A = ½ ‖v × w‖ .   (1.74)

Fig. 1.6: Area of triangle.

Cross product for tensors


The cross product can be naturally extended to the case of a tensor A and a vector v as

A × v = ε_{jmn} A_{im} v_n e_i e_j  and  v × A = ε_{imn} v_m A_{nj} e_i e_j .   (1.75)

1.15 Triple product


For every three vectors u, v and w, the combination of dot and cross products,

u · (v × w) ,   (1.76)

is the so-called triple product. Based on equation (1.72) it can be written in component form as

u · (v × w) = \begin{vmatrix} u1 & u2 & u3 \\ v1 & v2 & v3 \\ w1 & w2 & w3 \end{vmatrix} ,   (1.77)

or in index notation

u · (v × w) = ε_ijk u_i v_j w_k .   (1.78)

Geometrically, the triple product of the three vectors equals the volume of the parallelepiped with sides u, v and w (Fig. 1.7).

Fig. 1.7: Parallelepiped.
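Equation (1.77) identifies the triple product with a determinant; a minimal NumPy check with arbitrary vectors (our example):

import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

# Triple product as a determinant, equation (1.77); rows of the matrix are u, v, w
triple = np.dot(u, np.cross(v, w))
assert np.isclose(triple, np.linalg.det(np.stack([u, v, w])))
# |triple| is the volume of the parallelepiped spanned by u, v, w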

1.16 Dot and cross product of vectors: miscellaneous formulas

u × (v × w) = (u · w) v − (u · v) w   (1.79)
(u × v) × w = (u · w) v − (v · w) u   (1.80)
(u × v) · (w × x) = (u · w)(v · x) − (u · x)(v · w)   (1.81)
(u × v) × (w × x) = (u · (v × x)) w − (u · (v × w)) x
                  = (u · (w × x)) v − (v · (w × x)) u   (1.82)

Exercise 8. Using index notation verify the above identities.

Exercise 9. For a tensor given as T = I + vw, with v and w being orthogonal vectors, i.e. v · w = 0, calculate T², T³, ..., Tⁿ and finally e^T (Hint: e^T = \sum_{n=0}^{\infty} \frac{1}{n!} T^n).
Chapter 2

Tensor calculus
So far, vectors and tensors have been considered based on the operations by which we can combine them. In this chapter we study vector- and tensor-valued functions in three-dimensional space, together with their derivatives and integrals. It is important to keep in mind that we address tensors and vectors both by the term tensor; the reader should already know that a vector is a first-order tensor and a scalar a zeroth-order tensor.

2.1 Tensor functions & tensor fields

A tensor function A(u) assigns a tensor A to a real number u. In formal mathematical notation,

A : R → T ,  u ↦ A(u) ,   (2.1)

where T is the space of tensors of the order in question, for instance second-order tensors T = R^{3×3}, vectors T = R³, or perhaps fourth-order tensors T = R^{3×3×3×3}.

A typical example is the position vector of a moving particle given as a function of time, x(t), where t is a real number and x is the vector with initial point at the origin and terminal point at the moving particle (Fig. 2.1).

Fig. 2.1: Position vector.

On the other hand, a tensor field A(x) assigns a tensor A to a point x in space, say three-dimensional. Again one can formally write

A : R³ → T ,  x ↦ A(x) .   (2.2)

Here, R³ is the usual three-dimensional Euclidean space. For instance, we can think of the temperature distribution in a given domain, T(x), which is a scalar field assigning scalars (temperatures) to points x of the domain. Another example is the strain field ε(x) in a solid body under deformation, which assigns second-order tensors ε to points x of the body.


2.2 Derivative of tensor functions


The derivative of a differentiable tensor function A(u) is defined as

dA/du = lim_{Δu→0} [A(u + Δu) − A(u)] / Δu .   (2.3)

We assume that all derivatives exist unless otherwise stated. For the case of a vector function A(u) = A_i(u) e_i in Cartesian coordinates the above definition becomes

dA/du = (dA_i/du) e_i = (dA_1/du) e_1 + (dA_2/du) e_2 + (dA_3/du) e_3 ,   (2.4)

because the e_i remain unchanged in Cartesian coordinates, so their derivatives are zero.

Exercise 10. For a given second-order tensor function A(u) = Aij (u) ei ej in Cartesian
coordinates write down the derivatives.

Property 27. For tensor functions A(u), B(u), C(u), and a vector function a(u), we have

1. d/du (A · B) = A · dB/du + dA/du · B   (2.5)
2. d/du (A × B) = A × dB/du + dA/du × B   (2.6)
3. d/du [A · (B × C)] = dA/du · (B × C) + A · (dB/du × C) + A · (B × dC/du)   (2.7)
4. a · da/du = a da/du   (2.8)
5. a · da/du = 0  iff  ‖a‖ = const   (2.9)

Exercise 11. Verify the 4th and the 5th properties above.

2.3 Derivatives of tensor fields, Gradient, Nabla operator


Since a tensor field A(x) is a multi-variable function, its differential dA depends on the direction of the differential of the independent variable, dx = dx_i e_i. When dx is parallel to one of the coordinate axes, the partial derivatives of the tensor field A(x) = A(x1, x2, x3) are obtained:

∂A/∂x1 = lim_{Δx1→0} [A(x1 + Δx1, x2, x3) − A(x1, x2, x3)] / Δx1
∂A/∂x2 = lim_{Δx2→0} [A(x1, x2 + Δx2, x3) − A(x1, x2, x3)] / Δx2   (2.10)
∂A/∂x3 = lim_{Δx3→0} [A(x1, x2, x3 + Δx3) − A(x1, x2, x3)] / Δx3

or in compact form

∂A/∂x_i = lim_{Δx_i→0} [A(x + Δx_i e_i) − A(x)] / Δx_i .   (2.11)
In general, dx is not parallel to the coordinate axes. Therefore, based on the chain rule of differentiation,

dA(x1, x2, x3) = (∂A/∂x_i) dx_i = (∂A/∂x1) dx1 + (∂A/∂x2) dx2 + (∂A/∂x3) dx3 .   (2.12)

Remembering the calculus of multi-variable functions, this can also be written as

dA = grad(A) · dx  with  grad(A) = (∂A/∂x_i) e_i .   (2.13)
In the latter formula the gradient of the tensor field, grad(A), appears, which is a tensor of one order higher than A itself. To highlight this fact, we rewrite the above equation as

grad(A) = (∂A/∂x_i) ⊗ e_i .   (2.14)

A proper choice of notation in mathematics is substantial, and the introduction of operator notation in calculus is no exception. Since linear operators follow algebraic rules somewhat similar to the arithmetic of real numbers, writing relations in operator notation is invaluable. Here we introduce the so-called nabla operator,

∇ ≡ e_i ∂/∂x_i = e_1 ∂/∂x_1 + e_2 ∂/∂x_2 + e_3 ∂/∂x_3 .   (2.15)

Then the gradient of the tensor field in equation (2.13) can be written as*

grad(A) = A∇ = A ⊗ ∇ .   (2.16)

* In the recent equations the dyadic ⊗ is sometimes written explicitly for pedagogical reasons.
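For a scalar field, (2.13) is just the familiar vector of partial derivatives. A symbolic sketch with SymPy (the field itself is an arbitrary choice of ours):

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A scalar field and its gradient, equation (2.13): grad(phi)_i = d(phi)/dx_i
phi = x1**2 * x2 + sp.sin(x3)
grad_phi = sp.Matrix([sp.diff(phi, xi) for xi in (x1, x2, x3)])
print(grad_phi)   # Matrix([[2*x1*x2], [x1**2], [cos(x3)]])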

2.4 Integrals of tensor functions


If A(u) and B(u) are tensor functions such that A(u) = dB(u)/du, the indefinite integral of A is

∫ A(u) du = B(u) + C   (2.17)

with C = const, and the definite integral of A over the interval [a, b] is

∫_a^b A(u) du = B(b) − B(a) .   (2.18)

Decomposition of equation (2.17) for (say) a second-order tensor gives

∫ A_ij(u) du = B_ij(u) + C_ij ,   (2.19)

where A_ij is a real-valued function of a real variable. Therefore, the integration of a tensor function comes down to the integration of its components.

2.5 Line integrals


Consider a space curve C joining two points P1(a1, a2, a3) and P2(b1, b2, b3), as shown in Fig. 2.2. If the curve is divided into N parts by subdivision points x_1, ..., x_{N−1}, then the line integral of a tensor field A(x) along C is given by

∫_C A · dx = ∫_{P1}^{P2} A · dx = lim_{N→∞} \sum_{i=1}^{N} A(x_i) · Δx_i ,   (2.20)

where Δx_i = x_i − x_{i−1}, and as N → ∞ the largest of the divisions tends to zero:

max_i ‖Δx_i‖ → 0 .

Fig. 2.2: Space curve.

In Cartesian coordinates the line integral can be written as

∫_C A · dx = ∫_C (A1 dx1 + A2 dx2 + A3 dx3) .   (2.21)

The line integral has the following properties.

Property 28.

1. ∫_{P1}^{P2} A · dx = − ∫_{P2}^{P1} A · dx   (2.22)
2. ∫_{P1}^{P2} A · dx = ∫_{P1}^{P3} A · dx + ∫_{P3}^{P2} A · dx ,  P3 between P1 and P2   (2.23)

Parameterization

A curve in space is a one-dimensional entity which can be specified by a parameterization of the form

x = x(s) ,   (2.24)

which means

x1 = x1(s) ,  x2 = x2(s) ,  x3 = x3(s) .

Therefore a path integral can be parametrized in terms of s by

∫_{x1}^{x2} A · dx = ∫_{s1}^{s2} A · (dx/ds) ds .   (2.25)

2.6 Path independence


A line integral as introduced above depends on its limit points P1 and P2, on the particular choice of the path C, and on its integrand A(x). However, there are cases where the line integral is independent of the path C. Then, having specified the tensor field A(x) and the initial point P1, the line integral is a function of the position of P2 only. If we denote the position of P2 by x, then

φ(x) = ∫_{P1}^{x} A · dx .   (2.26)

Differentiating the tensor field φ(x) yields

dφ = A · dx ,   (2.27)

which according to the definition of the gradient (2.13) means

A = grad(φ) = φ∇ .   (2.28)

This argument can be reversed, which leads to the following theorem.

Theorem 29. The line integral ∫_C A · dx is path independent if and only if there is a function φ(x) such that A = φ∇.

Exercise 12. Prove the reverse argument of the above theorem. That is, starting from the existence of φ(x) such that A = φ∇, prove that the line integral is path independent.

Remark 30. In the particular case of C being a closed path, the line integral is path independent if and only if

∮_C A · dx = 0   (2.29)

for an arbitrary closed path C. The circle on the integral sign indicates that C is closed. This is another statement of path independence, equivalent to the ones mentioned before.

2.7 Surface integrals


Consider a smooth connected surface S in space (Fig. 2.3), subdivided into N elements ΔS_i, i = 1, ..., N, at positions x_i with unit normal vectors n_i. Let the tensor field A(x) be defined over a domain Ω ⊇ S. Then the surface integral of the normal component of A over S is defined as

∫_S A · n dS = lim_{N→∞} \sum_{i=1}^{N} A(x_i) · n_i ΔS_i ,   (2.30)

where as N → ∞ the largest of the elements tends to zero, max_i ‖ΔS_i‖ → 0.

Fig. 2.3: Surface integral.

The surface integral of the tangential component of A over S is defined as

∫_S A × n dS = lim_{N→∞} \sum_{i=1}^{N} A(x_i) × n_i ΔS_i .   (2.31)

Note that the surface integral (2.30) is a tensor of one order lower than A due to the term A · n in the integrand, while the surface integral (2.31) is a tensor of the same order as A due to the term A × n.

Surface integrals in terms of double integrals

Surface integrals can be calculated in Cartesian coordinates. This is usually the case when the surface S does not have any symmetries, such as rotational symmetry. For this, the surface S is projected onto the Cartesian plane X1X2, which gives a plane domain S̄ (Fig. 2.3). Then

∫_S A · n dS = ∬_{S̄} A · n (dx1 dx2)/(n · e3) .   (2.32)

The transformation between the area element dS and its projection dx1 dx2 is given by

dx1 dx2 = (n dS) · e3 = (n · e3) dS ,

where n is the unit normal vector to dS and e3 is the unit normal vector to dx1 dx2.

Exercise 13. Calculate the surface integral of the normal component of the vector field v = (1/r²) e_r over the unit sphere centered at the origin.

2.8 Divergence and curl operators


Among the surface integrals introduced above, the case where the surface S is closed has special importance. In a physical context, the integral of the normal component of a tensor field over a closed surface expresses the overall flux of the field through the surface, which is a measure of the intensity of sources*. On the other hand, the closed surface integral of the tangential component represents the so-called rotation of the field around the surface.

Assume that S is a closed surface with outward unit normal vector n, bounding the domain Ω with volume V (Fig. 2.4). Integrals over closed surfaces are usually denoted by ∮. We can define surface integrals of a given tensor field A(x) over S as in equations (2.30) and (2.31). As mentioned before, in the case of closed surfaces these have special physical meanings. A question that naturally arises is: what are the average intensities of sources and rotations over the domain Ω? The answer is

D = (1/V) ∮_S A · n dS ,  C = (1/V) ∮_S A × n dS ,   (2.33)

Fig. 2.4: Closed surface.

* A source, as the word literally suggests, is what causes the tensor field.

where the integrals are divided by the volume V. Now if the domain Ω shrinks, that is V → 0, these average values reflect the local intensity or concentration of sources and rotations of the tensor field. This is the physical analogue of density, which is the concentration of mass. These local values are called the divergence and curl of the tensor field A, denoted and formally defined by

div(A) = lim_{V→0} (1/V) ∮_S A · n dS ,   (2.34)

curl(A) = lim_{V→0} (1/V) ∮_S A × n dS .   (2.35)

It can be shown that

div(A) = A · ∇ ,   (2.36)
curl(A) = A × ∇   (2.37)

in operator notation. The proof is straightforward in Cartesian coordinates and can be found in most calculus books.

Exercise 14. Using a good calculus book, starting from (2.34) and (2.35), derive equations (2.36) and (2.37) in Cartesian coordinates for a vector field u(x) as

div(u) = u · ∇ = u_i ∂_i  and  curl(u) = u × ∇ = ε_kij u_i ∂_j .

Remark 31. So far the gradient, divergence and curl operators have been applied to their operand from the right-hand side, i.e. grad(A) = A∇, div(A) = A · ∇, curl(A) = A × ∇. This is not a strict requirement of their definition. In fact, the direction from which the operator is applied depends on the context. If we exchange the order of the dyadic, inner and outer products in equations (2.26), (2.34) and (2.35) respectively, the direction of application of nabla is reversed, to wit

φ = ∫ dx · ∇φ ,  lim_{V→0} (1/V) ∮_S n · A dS = ∇ · A ,  lim_{V→0} (1/V) ∮_S n × A dS = ∇ × A ,   (2.38)

which are equally valid. However, keeping a consistent convention should not be overlooked.

Exercise 15. Calculate ∇ × (∇ × v) for a vector field v.

2.9 Laplace's operator


We remember from equation (2.28) that a tensor field A whose line integrals are path independent can be written as the gradient of a lower-order tensor field, A = φ∇. Now if we are interested in the concentration of the sources that generate A, then

div(A) = div(grad(φ)) = φ∇ · ∇ .   (2.39)

This combination of gradient and divergence operators appears so often that it is given a distinct name, the so-called Laplace operator or Laplacian, denoted by

Δ ≡ ∇ · ∇ = ∂_k ∂_k .   (2.40)

The Laplacian of a field, ΔA, is sometimes denoted by ∇²A. Laplace's operator is also called the harmonic operator. That is why two consecutive applications of the Laplacian to a field constitute the so-called biharmonic operator, denoted by

∇⁴A = ΔΔA = Δ(ΔA) ,   (2.41)

which is expanded in Cartesian coordinates as

ΔΔφ = ∂⁴φ/∂x1⁴ + ∂⁴φ/∂x2⁴ + ∂⁴φ/∂x3⁴ + 2 ∂⁴φ/(∂x1²∂x2²) + 2 ∂⁴φ/(∂x2²∂x3²) + 2 ∂⁴φ/(∂x3²∂x1²) .   (2.42)

2.10 Gauss' and Stokes' Theorems

Theorem 32 (Gauss' theorem). Let S be a closed surface bounding a region Ω of volume V with outward unit normal vector n, and A(x) a tensor field (Fig. 2.4). Then we have

∫_V A · ∇ dV = ∮_S A · n dS .   (2.43)

This result is also called Green's theorem or the divergence theorem. If we remember the physical interpretation of divergence, the Gauss theorem means that the flux of a tensor field through a closed surface equals the intensity of the sources bounded by the surface, which physically makes perfect sense.
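Gauss' theorem (2.43) can be checked numerically for a simple vector field on the unit cube. A NumPy sketch (the field v = (x1², x2², x3²) and the grid resolution are our own choices):

import numpy as np

# For v = (x1^2, x2^2, x3^2): div v = 2(x1 + x2 + x3),
# so the volume integral over [0,1]^3 equals 3.
n = 200
g = (np.arange(n) + 0.5) / n                 # midpoint grid on [0,1]
X1, X2, X3 = np.meshgrid(g, g, g, indexing='ij')
vol_integral = np.mean(2 * (X1 + X2 + X3))   # mean times cube volume (= 1)

# Flux: only the faces x_i = 1 contribute, each with v.n = 1 over unit area
flux = 3.0
assert np.isclose(vol_integral, flux, atol=1e-6)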
Theorem 33 (Stokes' theorem). Let S be an open two-sided surface bounded by a closed non-self-intersecting curve C, as in Fig. 2.5. Then

∮_C A · dx = ∫_S (A × ∇) · n dS ,   (2.44)

where n is directed according to the counter-clockwise sense of the path direction, as indicated in the figure.

Fig. 2.5: Stokes' theorem.

The physical interpretation of Stokes' theorem is much like that of Gauss' theorem, except that it involves a closed line integral and the enclosed surface instead of a closed surface integral and the enclosed volume. Namely, the overall rotation of the tensor field along a closed path equals the intensity of the rotations enclosed by the path.

2.11 Miscellaneous formulas and theorems involving nabla

For given vector fields v and w, tensor fields A and B, and scalar fields α and β we have

∇(A + B) = ∇A + ∇B   (2.45)
∇ · (A + B) = ∇ · A + ∇ · B   (2.46)
∇ × (A + B) = ∇ × A + ∇ × B   (2.47)
∇ · (αA) = (∇α) · A + α(∇ · A)   (2.48)
∇ × (αA) = (∇α) × A + α(∇ × A)   (2.49)
∇ × (∇ × A) = ∇(∇ · A) − ∇²A   (2.50)
∇ · (v × w) = (∇ × v) · w − v · (∇ × w)   (2.51)
∇ × (v × w) = (w · ∇)v − (∇ · v)w − (v · ∇)w + (∇ · w)v   (2.52)
∇(v · w) = (w · ∇)v + (v · ∇)w + w × (∇ × v) + v × (∇ × w)   (2.53)
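Identities like (2.51) are straightforward to confirm symbolically. A SymPy sketch with arbitrarily chosen fields and hand-written div/curl helpers (using the standard left-applied ∇ in Cartesian coordinates):

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def curl(a):
    return sp.Matrix([sp.diff(a[2], x2) - sp.diff(a[1], x3),
                      sp.diff(a[0], x3) - sp.diff(a[2], x1),
                      sp.diff(a[1], x1) - sp.diff(a[0], x2)])

def div(a):
    return sum(sp.diff(a[i], X[i]) for i in range(3))

v = sp.Matrix([x1*x2, x3**2, x1 + x3])
w = sp.Matrix([sp.sin(x1), x2*x3, x1*x2*x3])

# Identity (2.51): div(v x w) = (curl v).w - v.(curl w)
lhs = div(v.cross(w))
rhs = curl(v).dot(w) - v.dot(curl(w))
assert sp.simplify(lhs - rhs) == 0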

Theorem 34. For a tensor field A,

∇ × (∇A) = 0 ,  (A∇) × ∇ = 0 ,   (2.54)
∇ · (∇ × A) = 0 ,  (A × ∇) · ∇ = 0 ,   (2.55)

given that A is smooth enough.

Let us rephrase the above theorem as follows:

• The gradient of a tensor field is curl-free, i.e. its curl vanishes.
• The curl of a tensor field is divergence-free, i.e. its divergence vanishes.

Compared with Gauss' and Stokes' theorems, these mean

∮_S (A × ∇) · n dS = ∫_V (A × ∇) · ∇ dV = 0   (2.56)

and

∮_C (A∇) · dx = ∫_S (A∇) × ∇ · n dS = 0 .   (2.57)

Exercise 16. Show that in general ∇ · A × ∇ ≠ 0 and ∇ × A · ∇ ≠ 0.

Remark 35. The latter equation, according to (2.29), is equivalent to path independence.
Theorem 36 (Green's identities). For two scalar fields φ and ψ, the first Green's identity is

∫_V [φ(ψΔ) + (φ∇) · (ψ∇)] dV = ∮_S φ(ψ∇) · n dS ,   (2.58)

and the second Green's identity is

∫_V [φ(ψΔ) − (φΔ)ψ] dV = ∮_S [φ(ψ∇) − (φ∇)ψ] · n dS .   (2.59)

Alternative forms of Gauss' and Stokes' theorems

For a tensor field A and a scalar field φ we have

∫_V A × ∇ dV = ∮_S A × n dS ,  ∮_C φ dx = ∫_S (φ∇) × n dS ,   (2.60)

which can equivalently be stated as

∫_V ∇ × A dV = ∮_S n × A dS ,  ∮_C φ dx = ∫_S n × (∇φ) dS .   (2.61)

Fig. 2.6: Curvilinear coordinates.

2.12 Orthogonal curvilinear coordinate systems


A point P in space (Fig. 2.6) can be specified by its rectangular coordinates (another name for Cartesian coordinates) (x1, x2, x3), or by curvilinear coordinates (u1, u2, u3). Then there must be transformation rules of the form

x = x(u)  and  u = u(x) ,   (2.62)

where the transformation functions are smooth and one-to-one. Smoothness means differentiability, and being one-to-one requires the Jacobian determinant to be non-zero:

det( ∂(u1, u2, u3)/∂(x1, x2, x3) ) = \begin{vmatrix} ∂u_1/∂x_1 & ∂u_1/∂x_2 & ∂u_1/∂x_3 \\ ∂u_2/∂x_1 & ∂u_2/∂x_2 & ∂u_2/∂x_3 \\ ∂u_3/∂x_1 & ∂u_3/∂x_2 & ∂u_3/∂x_3 \end{vmatrix} ≠ 0 .   (2.63)

Definition 37 (Jacobian). For a differentiable function f(x) with f : Rᵐ → Rⁿ, its Jacobian is the n × m matrix J = (∂f_i/∂x_j), with i = 1, ..., n and j = 1, ..., m.

At the point P there are three surfaces, described by

u1 = const ,  u2 = const ,  u3 = const ,   (2.64)

passing through P, called coordinate surfaces. There are also three curves, the intersections of each pair of coordinate surfaces, called coordinate curves. On each coordinate curve only one of the three coordinates u_i varies while the other two are constant. If the position vector is denoted by r, then the vector ∂r/∂u_i is tangent to the u_i coordinate curve, and the unit tangent vectors (Fig. 2.6) are obtained from

e1 = (∂r/∂u1)/‖∂r/∂u1‖ ,  e2 = (∂r/∂u2)/‖∂r/∂u2‖ ,  e3 = (∂r/∂u3)/‖∂r/∂u3‖ .   (2.65)

Introducing the scale factors h_i = ‖∂r/∂u_i‖, we have

∂r/∂u1 = h1 e1 ,  ∂r/∂u2 = h2 e2 ,  ∂r/∂u3 = h3 e3 .   (2.66)

If e1, e2 and e3 are mutually orthogonal, we call (u1, u2, u3) an orthogonal curvilinear coordinate system.

Remark 38. In rectangular coordinates the coordinate surfaces are flat planes and the coordinate curves are straight lines. Also, the scale factors h_i are equal to one due to the straight coordinate curves. Furthermore, the unit vectors e_i are the same for all points in space. These properties do not generally hold for curvilinear coordinate systems.
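The scale factors h_i of (2.66) can be computed symbolically from a given map x(u). A SymPy sketch that anticipates the cylindrical coordinates of Fig. 2.7 (the symbol names are ours):

import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)

# Position vector in Cartesian components for cylindrical coordinates (u1,u2,u3) = (r,phi,z)
pos = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi), z])

# Scale factors h_i = ||dr/du_i||, equation (2.66)
for u in (r, phi, z):
    h = sp.sqrt(sum(c**2 for c in pos.diff(u)))
    print(u, sp.simplify(h))   # -> 1, r, 1, as in (2.85) below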

2.13 Differentials

In three-dimensional space there are three types of geometrical objects (other than points) with regard to dimension:

• one-dimensional objects: lines
• two-dimensional objects: surfaces
• three-dimensional objects: volumes.

Each type has its own differential element, namely line elements, surface elements and volume elements. We assume that the reader has already dealt with the related ideas in rectangular (Cartesian) coordinates. Now we want to generalize those ideas to curvilinear coordinates.

Line elements

An infinitesimal line segment with an arbitrary direction in space can be projected onto the coordinate axes. Consider the position vector r expressed in a curvilinear coordinate system as r(u1, u2, u3). A line element is the differential of the position vector, obtained by the chain rule and equation (2.66):

dr = (∂r/∂u1) du1 + (∂r/∂u2) du2 + (∂r/∂u3) du3 = h1 du1 e1 + h2 du2 e2 + h3 du3 e3 ,   (2.67)

where

dr1 = h1 du1 ,  dr2 = h2 du2 ,  dr3 = h3 du3   (2.68)

are the basis line elements. If the position vector sweeps a curve (which is typical, for example, in kinematics) the arc length element, denoted by ds, is obtained from

(ds)² = dr · dr = h1²(du1)² + h2²(du2)² + h3²(du3)² .   (2.69)

Note that the above formulas are completely general. For instance, in Cartesian coordinates where h_i = 1 they reduce to the familiar forms

dr = dx1 e1 + dx2 e2 + dx3 e3  and  (ds)² = (dx1)² + (dx2)² + (dx3)² .

Area elements
An area element is a vector described by its magnitude and direction

dS = dS n , (2.70)

whose decomposition is obtained by

dS = (dS · e1 ) e1 + (dS · e2 ) e2 + (dS · e3 ) e3 (2.71)

where dS · ei is in fact the projection of dS on the coordinate surface normal to ei .


On the other hand, having the coordinate line elements (2.68), the area elements on
each coordinate surface are obtained by

dS1 e1 = dr2 e2 × dr3 e3 = h2 h3 du2 du3 e1


dS2 e2 = dr3 e3 × dr1 e1 = h3 h1 du3 du1 e2 (2.72)
dS3 e3 = dr1 e1 × dr2 e2 = h1 h2 du1 du2 e3

which are called basis area elements. Note that an area element is a parallelogram whose sides are line elements, and its area is obtained by the cross product of the line element vectors. Comparing the two equations above gives

dS = (h2 h3 du2 du3 ) e1 + (h3 h1 du3 du1 ) e2 + (h1 h2 du1 du2 ) e3 (2.73)

As the simplest case, in Cartesian coordinates the latter formula gives the familiar form

dS = dx2 dx3 e1 + dx3 dx1 e2 + dx1 dx2 e3 .

Volume element
A volume element dV is a parallelepiped built by the three coordinate line elements dr1 e1 ,
dr2 e2 and dr3 e3 . Since the volume of a parallelepiped is obtained by triple product of its
side vectors as in equation (1.77), we have

dV = (dr1 e1 ) · (dr2 e2 ) × (dr3 e3 ) = h1 h2 h3 du1 du2 du3 . (2.74)



2.14 Differential transformation

For two given orthogonal curvilinear coordinate systems (u1, u2, u3) and (q1, q2, q3), we are interested in the transformation of differentials between them. Here we adopt index notation. There are two types of expressions in the aforementioned formulas to be transformed: one is the differentials du_i, and the other is factors of the form ∂r/∂u_i or their norms h_i. We consider the transformation of each separately. Starting from transformations of the form u(q), the chain rule gives

du_i = (∂u_i/∂q_j) dq_j  and  ∂r/∂u_i = (∂r/∂q_j)(∂q_j/∂u_i) ,   (2.75)

and because of the orthogonality of the coordinates,

‖∂r/∂u_i‖ = \left( \sum_j \left( ‖∂r/∂q_j‖ \, ∂q_j/∂u_i \right)^2 \right)^{1/2} .   (2.76)

The transformation of the area element from one curvilinear coordinate system to another is obtained by

dA_q = det( ∂(q1, q2, q3)/∂(u1, u2, u3) ) [ ∂(q1, q2, q3)/∂(u1, u2, u3) ]^{−T} · dA_u ,   (2.77)

and the volume element transforms according to

dV_q = det( ∂(q1, q2, q3)/∂(u1, u2, u3) ) dV_u .   (2.78)

2.15 Differential operators in curvilinear coordinates

Here we briefly address the general formulations of the gradient, divergence, curl and Laplacian in orthogonal curvilinear coordinates. The formulations are given in concrete form for a scalar field φ and a vector field a = a1 e1 + a2 e2 + a3 e3; however, the reader should be able to use them for tensor fields of any order.

∇φ = (1/h1)(∂φ/∂u1) e1 + (1/h2)(∂φ/∂u2) e2 + (1/h3)(∂φ/∂u3) e3   (2.79)

∇ · a = 1/(h1 h2 h3) [ ∂(h2 h3 a1)/∂u1 + ∂(h3 h1 a2)/∂u2 + ∂(h1 h2 a3)/∂u3 ]   (2.80)

∇ × a = 1/(h1 h2 h3) \begin{vmatrix} h1 e1 & h2 e2 & h3 e3 \\ ∂/∂u1 & ∂/∂u2 & ∂/∂u3 \\ h1 a1 & h2 a2 & h3 a3 \end{vmatrix}   (2.81)

Δφ = 1/(h1 h2 h3) [ ∂/∂u1 ( (h2 h3/h1) ∂φ/∂u1 ) + ∂/∂u2 ( (h3 h1/h2) ∂φ/∂u2 ) + ∂/∂u3 ( (h1 h2/h3) ∂φ/∂u3 ) ] .   (2.82)

Let us apply these formulas to the two most commonly used curvilinear coordinate systems, namely cylindrical and spherical coordinates.

Fig. 2.7: Cylindrical coordinates.

Cylindrical coordinates

In cylindrical coordinates a point is determined by (r, ϕ, z) (Fig. 2.7). The transformations to and from rectangular coordinates are given by

x1 = r cos ϕ ,  x2 = r sin ϕ ,  x3 = z ,   (2.83)

r = √(x1² + x2²) ,  ϕ = arctan(x2/x1) ,  z = x3 .   (2.84)

The scale factors are

h_r = 1 ,  h_ϕ = r ,  h_z = 1 ,   (2.85)

and the differential elements

dr = dr e_r + r dϕ e_ϕ + dz e_z   (2.86)
dS = r dϕ dz e_r + dr dz e_ϕ + r dr dϕ e_z   (2.87)
dV = r dr dϕ dz .   (2.88)

Finally, the differential operators can be listed as

∇ ≡ e_r ∂/∂r + e_ϕ (1/r) ∂/∂ϕ + e_z ∂/∂z   (2.89)

∇ · a = (1/r) ∂(r a_r)/∂r + (1/r) ∂a_ϕ/∂ϕ + ∂a_z/∂z   (2.90)

Δφ = (1/r) ∂/∂r ( r ∂φ/∂r ) + (1/r²) ∂²φ/∂ϕ² + ∂²φ/∂z² = ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂ϕ² + ∂²φ/∂z²   (2.91)

∇ × a = ( (1/r) ∂a_z/∂ϕ − ∂a_ϕ/∂z ) e_r + ( ∂a_r/∂z − ∂a_z/∂r ) e_ϕ + (1/r) ( ∂(r a_ϕ)/∂r − ∂a_r/∂ϕ ) e_z   (2.92)

Fig. 2.8: Spherical coordinates.

Spherical coordinates

In spherical coordinates a point is determined by (r, θ, φ) (Fig. 2.8). The transformations to and from rectangular coordinates are given by

x1 = r sin θ cos φ ,  x2 = r sin θ sin φ ,  x3 = r cos θ ,   (2.93)

r = √(x1² + x2² + x3²) ,  θ = arccos(x3/r) ,  φ = arctan(x2/x1) .

The scale factors are

h_r = 1 ,  h_θ = r ,  h_φ = r sin θ ,   (2.94)

and the differential elements

dr = dr e_r + r dθ e_θ + r sin θ dφ e_φ   (2.95)
dS = r² sin θ dθ dφ e_r + r sin θ dr dφ e_θ + r dr dθ e_φ   (2.96)
dV = r² sin θ dr dθ dφ .   (2.97)

Finally, the differential operators can be listed as

∇ ≡ e_r ∂/∂r + e_θ (1/r) ∂/∂θ + e_φ (1/(r sin θ)) ∂/∂φ   (2.98)

∇ · a = (1/r²) ∂(r² a_r)/∂r + (1/(r sin θ)) ∂(sin θ a_θ)/∂θ + (1/(r sin θ)) ∂a_φ/∂φ   (2.99)

Δξ = ∂²ξ/∂r² + (2/r) ∂ξ/∂r + (cos θ/(r² sin θ)) ∂ξ/∂θ + (1/r²) ∂²ξ/∂θ² + (1/(r² sin²θ)) ∂²ξ/∂φ²   (2.100)

∇ × a = (1/(r sin θ)) ( ∂(sin θ a_φ)/∂θ − ∂a_θ/∂φ ) e_r + (1/r) ( (1/sin θ) ∂a_r/∂φ − ∂(r a_φ)/∂r ) e_θ + (1/r) ( ∂(r a_θ)/∂r − ∂a_r/∂θ ) e_φ   (2.101)