Lecture Notes - Math PDF
Klaus Hackl
Mehdi Goodarzi
2010
Contents

Foreword

1 Vectors and tensors algebra
1.1 Scalars and vectors
1.2 Geometrical operations on vectors
1.3 Fundamental properties
1.4 Vector decomposition
1.5 Dot product of vectors
1.6 Index notation
1.7 Tensors of second order
1.8 Dyadic product
1.9 Tensors of higher order
1.10 Coordinate transformation
1.11 Eigenvalues and eigenvectors
1.12 Invariants
1.13 Singular value decomposition
1.14 Cross product
1.15 Triple product
1.16 Dot and cross product of vectors: miscellaneous formulas

2 Tensor calculus
2.1 Tensor functions & tensor fields
2.2 Derivative of tensor functions
2.3 Derivatives of tensor fields, Gradient, Nabla operator
2.4 Integrals of tensor functions
2.5 Line integrals
2.6 Path independence
2.7 Surface integrals
2.8 Divergence and curl operators
2.9 Laplace's operator
2.10 Gauss' and Stokes' Theorems
2.11 Miscellaneous formulas and theorems involving nabla
2.12 Orthogonal curvilinear coordinate systems
2.13 Differentials
2.14 Differential transformation
2.15 Differential operators in curvilinear coordinates
Foreword
A quick review of vector and tensor algebra, geometry and analysis is presented. The reader
is assumed to have sufficient familiarity with the subject; the material is included as
an entry point as well as a reference for what follows.
Chapter 1

Vectors and tensors algebra
Notation 1. Vectors and tensors are denoted by bold-face letters like A and v, and the
magnitude of a vector by ‖v‖ or simply by its normal-face v. In handwriting an underbar is
used to denote tensors and vectors, like v̲.
Definition 3 (Summation of vectors). The sum of any two vectors v and w is a vector
u = v + w formed by placing the initial point of w on the terminal point of v and joining the
initial point of v to the terminal point of w (Fig. 1.2(a,b)). This is equivalent to the
parallelogram law for vector addition, as shown in Fig. 1.2(c). Subtraction of vectors, v − w,
is defined as v + (−w).
(Fig. 1.2: construction of the sum u = v + w by the triangle and parallelogram rules.)
Definition 4 (Multiplication of a vector by a scalar). For any real number (scalar) α and
vector v , their multiplication is a vector αv whose magnitude is |α| times the magnitude
of v and whose direction is the same as v if α > 0 and opposite to v if α < 0. If α = 0
then αv = 0.
1. α (v + w) = αv + αw Distributive law
2. (α + β) v = αv + βv Distributive law
3. α (βv) = (αβ) v Associative law
4. 1v = v Multiplication with scalar identity.
A vector can be rigorously dened as an element of a vector space.
Definition 7 (Real vector space). The mathematical system (V, +, R, ∗) is called a real
vector space or simply a vector space, where V is the set of geometrical vectors, + is vector
summation with the four properties mentioned, R is the set of real numbers furnished with
summation and multiplication of numbers, and ∗ stands for multiplication of a vector with
a real number (scalar) having the above mentioned properties.
The reader may wonder why this definition is needed. Remember that a quantity
whose expression requires magnitude and direction is not necessarily a vector(!). A well-known
example is finite rotation, which has both magnitude and direction but is not a vector
because it does not follow the properties of vector summation as defined before. Therefore,
having magnitude and direction is not sufficient for a quantity to be identified as a vector;
it must also obey the rules of summation and multiplication with scalars.
Definition 8 (Unit vector). A unit vector is a vector of unit magnitude. For any vector v
its unit vector is referred to by ev or v̂, and is equal to v̂ = v/‖v‖. One would say that
the unit vector carries the information about direction. Therefore magnitude and direction,
as constituents of a vector, are multiplicatively decomposed as v = v v̂.
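The multiplicative decomposition above can be illustrated numerically. A minimal NumPy sketch (the helper name unit_vector is ours, not part of the notes):

```python
import numpy as np

def unit_vector(v):
    """Return the unit vector v̂ = v/‖v‖ carrying the direction of v."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        raise ValueError("the zero vector has no direction")
    return v / norm

v = np.array([3.0, 0.0, 4.0])
v_hat = unit_vector(v)            # direction
v_mag = np.linalg.norm(v)         # magnitude

# Multiplicative decomposition v = ‖v‖ v̂
assert np.allclose(v, v_mag * v_hat)
```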
which says the dot product of vectors v and w equals the projection of w on the direction of v
times the magnitude of v (see Fig. 1.1), or the other way around. Also, dividing the leftmost and
rightmost terms of (1.5) by vw gives
v̂ · ŵ = cos θ , (1.6)
which says the dot product of unit vectors (standing for directions) determines the angle
between them. A simple and yet interesting result is obtained for components of a vector
v as
v1 = v · e1 , v2 = v · e2 , v3 = v · e3 . (1.7)
Important results
1. In any orthonormal coordinate system including Cartesian coordinates it holds that
e1 · e1 = 1 , e1 · e2 = 0 , e1 · e3 = 0
e2 · e1 = 0 , e2 · e2 = 1 , e2 · e3 = 0 (1.8)
e3 · e1 = 0 , e3 · e2 = 0 , e3 · e3 = 1 ,
2. therefore∗

v · w = v1 w1 + v2 w2 + v3 w3 ,   (1.9)

‖v‖ = (v · v)^(1/2) .   (1.10)

∗ Note that v · w = (v1 e1 + v2 e2 + v3 e3 ) · (w1 e1 + w2 e2 + w3 e3 ) = v1 w1 e1 · e1 + v1 w2 e1 · e2 + · · · .
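A quick numerical check of equations (1.9) and (1.10), comparing the component formulas with NumPy's built-in routines (the sample vectors are arbitrary):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 2.0])

# Equation (1.9): v·w = v1 w1 + v2 w2 + v3 w3
dot = sum(v[i] * w[i] for i in range(3))
assert np.isclose(dot, np.dot(v, w))

# Equation (1.10): ‖v‖ = (v·v)^(1/2)
assert np.isclose(np.sqrt(np.dot(v, v)), np.linalg.norm(v))
```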
v · w = (v1 e1 + v2 e2 + v3 e3 ) · (w1 e1 + w2 e2 + w3 e3 )
= v1 w1 e1 · e1 + v1 w2 e1 · e2 + v1 w3 e1 · e3
+ v2 w1 e2 · e1 + v2 w2 e2 · e2 + v2 w3 e2 · e3
+ v3 w1 e3 · e1 + v3 w2 e3 · e2 + v3 w3 e3 · e3 , (1.11)
which together with equation (1.8) yields the right-hand side of (1.9). As can be seen,
even for a very basic expansion considerable effort is required. The so-called index notation
was developed for simplification.
Index notation uses a parametric index instead of an explicit one. If, for example, the letter
i is used as the index in vi, it addresses any of the possible values i = 1, 2, 3 in three-dimensional
space, therefore representing any of the components of the vector v. Then {vi} is equivalent to
the set of all vi's, namely {v1 , v2 , v3 }.
Notation 11. When an index appears twice in one term then that term is summed up
over all possible values of the index. This is called Einstein's convention.
To clarify this, let's have a look at the trace of a 3 × 3 matrix, tr(A) = Σᵢ Aii = A11 +
A22 + A33 (sum over i = 1, 2, 3). Based on Einstein's convention one could write tr(A) = Aii, because the index
i appears twice and shall be summed over all possible values of i = 1, 2, 3, so that
Aii = A11 + A22 + A33. As another example, a vector v can be written as v = vi ei because
the index i appears twice and we can write vi ei = v1 e1 + v2 e2 + v3 e3.
Remark 12. A repeating index (which is summed up over) can be freely renamed. For
example vi wi Amn can be rewritten as vk wk Amn because in fact vi wi Amn = Bmn and index
i does not appear as one of the free indices which can vary among {1, 2, 3}. Note that
renaming i to m or n would not be possible as it changes the meaning.
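Einstein's convention maps directly onto NumPy's einsum, where a repeated index letter in one term is summed; this also illustrates the free renaming of dummy indices from Remark 12 (the sample arrays are arbitrary):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# tr(A) = A_ii : the repeated index i is summed
assert np.isclose(np.einsum('ii', A), np.trace(A))

# v_i w_i : a repeated index means summation over i = 1, 2, 3
assert np.isclose(np.einsum('i,i', v, w), np.dot(v, w))

# Renaming a summed (dummy) index changes nothing: v_i w_i == v_k w_k
assert np.isclose(np.einsum('k,k', v, w), np.einsum('i,i', v, w))
```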
A = A1 A2 A3 = A1 ⊗ A2 ⊗ A3 ,   (1.12)

with ⊗ being the dyadic product. Note that we usually use the first notation for brevity.
Aij = vi wj . (1.17)
Aij = ei · A · ej , (1.18)
which is the analog of equation (1.7). In equation (1.16) the dyads ei ej are basis tensors
which are given in matrix notation as
e1 e1 = [1 0 0; 0 0 0; 0 0 0] ,  e1 e2 = [0 1 0; 0 0 0; 0 0 0] ,  e1 e3 = [0 0 1; 0 0 0; 0 0 0]   (1.19)

e2 e1 = [0 0 0; 1 0 0; 0 0 0] ,  e2 e2 = [0 0 0; 0 1 0; 0 0 0] ,  e2 e3 = [0 0 0; 0 0 1; 0 0 0]   (1.20)

e3 e1 = [0 0 0; 0 0 0; 1 0 0] ,  e3 e2 = [0 0 0; 0 0 0; 0 1 0] ,  e3 e3 = [0 0 0; 0 0 0; 0 0 1]   (1.21)
Note that not every second-order tensor can be expressed as the dyadic product of two vectors.
However, every second-order tensor can be written as a linear combination of dyadic
products of vectors; in its most common form this is the decomposition onto a given
coordinate system, A = Aij ei ej.
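Equations (1.17) and (1.18) and the decomposition A = Aij ei ej can be checked with a short NumPy sketch (np.outer computes the dyadic product of two vectors):

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 3.0, 1.0])

# Dyadic product A = v w, with components A_ij = v_i w_j  (eq. 1.17)
A = np.outer(v, w)
assert A[0, 1] == v[0] * w[1]

# Component extraction A_ij = e_i · A · e_j  (eq. 1.18)
e = np.eye(3)                 # rows are the basis vectors e1, e2, e3
assert e[0] @ A @ e[1] == A[0, 1]

# Reassembly from basis dyads: A = A_ij e_i e_j
B = sum(A[i, j] * np.outer(e[i], e[j]) for i in range(3) for j in range(3))
assert np.allclose(A, B)
```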
T = v 1 v2 · · · v n (1.22)
which is a tensor of the nth order. To make sense of it, let's take a look at inner product of
the above tensor with vector w
T · w = (v 1 v 2 · · · v n ) · w = (v n · w) (v 1 v 2 · · · v n−1 ) (1.23)
where (v n · w) is of course a scalar and v 1 · · · v n−1 a tensor of (n − 1)th order. This means
the tensor T maps a rst-order tensor (vector) to a tensor of the (n − 1)th order. We
reemphasize that not all tensors can be written as dyadic product of vectors, however
tensors can be decomposed on the basis of a coordinate system. Decomposition of the
nth -order tensor T is written as
T = Ti1 i2 ···in ei1 ei2 · · · ein . (1.24)
where i1 , . . . , in ∈ {1, 2, 3} and ei1 ei2 · · · ein is the basis tensor of the nth order. It should
be clear that visualization of, e.g., third-order tensors would require three-dimensional arrays,
and so on.
Remark 14. A vector is a first-order tensor and a scalar is a zeroth-order tensor. The order of
a tensor equals the number of its indices in index notation. Furthermore, a tensor of order
n requires 3ⁿ scalars to be specified. We usually address a second-order tensor by simply
calling it a tensor; the meaning should be clear from the context.
If an mth order tensor A is multiplied with an nth order tensor B we get an (m + n)th
order tensor C such that
C = A ⊗ B = (Ai1 ···im ei1 · · · eim ) ⊗ (Bj1 ···jn ej1 · · · ejn )
= Ai1 ···im Bj1 ···jn ei1 · · · eim ej1 · · · ejn
= Ck1 ···km+n ek1 · · · ekm+n . (1.25)
Definition 15 (Contraction product). For any two tensors A of the mth order and B of
the nth order, their r-contraction product for r ≤ min{m, n} yields a tensor C = A ·ʳ B
of order (m + n − 2r) such that

C = Ai1···im−r k1···kr Bk1···kr jr+1···jn ei1 · · · eim−r ejr+1 · · · ejn .   (1.29)

Note that the r innermost indices are common. The frequently used case of the contraction
product is the so-called double contraction, which is denoted by A : B ≡ A ·² B.
Exercise 2. Write the identity σ = C : ε in index notation, where C is a fourth-order
tensor and σ and ε are second-order tensors. Then, expand the components σ12 and σ11
for the case of two-dimensional Cartesian coordinates.
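As a pointer toward Exercise 2, the double contraction σ = C : ε, i.e. σij = Cijkl εkl, can be evaluated with einsum; the random tensors here are placeholders with no physical meaning:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3, 3, 3))   # fourth-order tensor
eps = rng.standard_normal((3, 3))       # second-order tensor

# Double contraction σ_ij = C_ijkl ε_kl: the two innermost index pairs are
# common, so the result has order 4 + 2 − 2·2 = 2.
sigma = np.einsum('ijkl,kl->ij', C, eps)
assert sigma.shape == (3, 3)

# Spot-check one component against the explicit double sum
s00 = sum(C[0, 0, k, l] * eps[k, l] for k in range(3) for l in range(3))
assert np.isclose(sigma[0, 0], s00)
```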
Remark 16. The contraction product of two tensors is not symmetric in general, i.e. A ·ʳ B ≠
B ·ʳ A, except when r = 1 and A and B are first-order tensors (vectors), or when A and B
have certain symmetry properties.
Kronecker delta
The second-order unity tensor or identity tensor, denoted by I, is defined by the following
fundamental property.
Property 17. For every vector v it holds that
I ·v = v·I = v. (1.30)
The representation of the unity tensor in index notation,

δij = 1 , if i = j
δij = 0 , if i ≠ j ,   (1.31)

is called Kronecker's delta. Its expansion for i, j ∈ {1, 2, 3} gives in matrix form

(δij) = [1 0 0; 0 1 0; 0 0 1] .   (1.32)
Some important properties of Kronecker delta are
1. Orthonormality of the basis vectors {ei } in equation (1.8), can be abbreviated as
ei · ej = δij . (1.33)
3. The general relationship between the Kronecker delta and the permutation symbol reads

εijk εlmn = det [δil δim δin; δjl δjm δjn; δkl δkm δkn]   (1.39)

and as special cases

εjkl εjmn = δkm δln − δkn δlm ,   (1.40)

εijk εijm = 2δkm .   (1.41)
Exercise 4. Verify the identities in equations (1.40) and (1.41) based on (1.39).
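The identities of Exercise 4 can at least be checked numerically by tabulating the permutation symbol; the symbolic verification via (1.39) is still left to the reader:

```python
import numpy as np

# Permutation (Levi-Civita) symbol ε_ijk: +1 on even, −1 on odd permutations
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[i, j, k] = -1.0

delta = np.eye(3)

# (1.40): ε_jkl ε_jmn = δ_km δ_ln − δ_kn δ_lm
lhs = np.einsum('jkl,jmn->klmn', eps, eps)
rhs = (np.einsum('km,ln->klmn', delta, delta)
       - np.einsum('kn,lm->klmn', delta, delta))
assert np.allclose(lhs, rhs)

# (1.41): ε_ijk ε_ijm = 2 δ_km
assert np.allclose(np.einsum('ijk,ijm->km', eps, eps), 2 * delta)
```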
v = v1 e1 + v2 e2 + v3 e3 .   (1.42)

Since a vector is completely determined by its components, we should be able to find the
decomposition of vector v on a second coordinate system X1′X2′X3′ with bases {e′1 , e′2 , e′3 }
in terms of {v1 , v2 , v3 } (Fig. 1.4: transformation of coordinates). Using equation (1.7) we have

v′1 = v · e′1 , v′2 = v · e′2 , v′3 = v · e′3 .   (1.43)
Transformation of tensors

Having the decomposition of an nth-order tensor A in X1X2X3 coordinates as
A = Ai1···in ei1 · · · ein (equation (1.24)), we look for its components in another coordinate
system X1′X2′X3′. Following the same approach as for vectors, using equation (1.29),

A′j1···jn = A ·ⁿ (e′j1 · · · e′jn) = Ai1···in (ei1 · · · ein) ·ⁿ (e′j1 · · · e′jn) .
For the special case when A is a second order tensor we can write in matrix notation
Remark 20. The above transformation rules are substantial. They are so fundamental that,
as an alternative, we could have defined vectors and tensors based on their transformation
properties instead of the geometric and algebraic approach.
Exercise 5. Find the transformation matrix for a rotation by angle θ around X1 in three-
dimensional Cartesian coordinates X1 X2 X3 .
Exercise 6. For a symmetric second-order tensor A derive the transformation formula
with a transformation given as

(Qij) = [cos ϕ  sin ϕ  0; −sin ϕ  cos ϕ  0; 0  0  1] .
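The transformation of Exercise 6 can be explored numerically. Here we assume the usual convention Qij = e′i · ej, so that v′ = Qv and A′ = QAQᵀ for a second-order tensor; the sample tensor is arbitrary:

```python
import numpy as np

phi = 0.3
# Transformation matrix of Exercise 6 (rotation about X3),
# assuming the convention Q_ij = e'_i · e_j
Q = np.array([[ np.cos(phi), np.sin(phi), 0.0],
              [-np.sin(phi), np.cos(phi), 0.0],
              [ 0.0,         0.0,         1.0]])

v = np.array([1.0, 2.0, 3.0])
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])   # symmetric second-order tensor

v_new = Q @ v                     # v'_i = Q_ij v_j
A_new = Q @ A @ Q.T               # A'_ij = Q_ik A_kl Q_jl

# Invariance checks: length, trace and determinant are unchanged
assert np.isclose(np.linalg.norm(v_new), np.linalg.norm(v))
assert np.isclose(np.trace(A_new), np.trace(A))
assert np.isclose(np.linalg.det(A_new), np.linalg.det(A))
```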
Definition 21. Given a second-order tensor A, every nonzero vector ψ is called an
eigenvector of A if there exists a real number λ such that

A · ψ = λψ .   (1.52)
In general, a tensor multiplied by a vector gives another vector with a different magnitude
and direction; interestingly, however, there are vectors ψ on which the tensor acts
like a scalar λ, as laid down by equation (1.52). Now suppose that a coordinate system is
chosen so that the X1 axis coincides with ψ. It is clear that
It is possible to show that for non-degenerate symmetric tensors in R3 there are three
orthonormal eigenvectors, which establish an orthonormal coordinate system called the
principal coordinates. Following equation (1.54), a symmetric tensor σ in principal
coordinates takes the diagonal form

σ = [σ11 0 0; 0 σ22 0; 0 0 σ33] ,   (1.55)

where its diagonal elements are the eigenvalues corresponding to the principal directions,
which are parallel to the eigenvectors. The following theorem generalizes these ideas to n
dimensions.
Remark 23. In the case of symmetric tensors equation (1.56) takes the form

A = Ψ Λ Ψᵀ .   (1.57)
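Equation (1.57) is exactly what numpy.linalg.eigh returns for a symmetric tensor; a small check with an arbitrary sample tensor:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # symmetric tensor

lam, Psi = np.linalg.eigh(A)      # eigenvalues and orthonormal eigenvectors
Lambda = np.diag(lam)

# Equation (1.57): A = Ψ Λ Ψᵀ
assert np.allclose(A, Psi @ Lambda @ Psi.T)

# In principal coordinates the tensor is diagonal: Ψᵀ A Ψ = Λ
assert np.allclose(Psi.T @ A @ Psi, Lambda)
```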
Suppose that we want to derive some relationship or develop a model based on tensorial
expressions. If the task is accomplished in principal coordinates it takes much less eort and
at the same time, the results are completely general i.e. they hold for any other coordinate
system. This is often done in material modeling and in other branches of science as well.
As the final step we would like to solve equation (1.52). It can be recast in the form

(A − λI) ψ = 0 .   (1.59)
A nontrivial solution ψ requires det(A − λI) = 0, which is called the characteristic equation of tensor A. Let us have a look at its expansion,
det [A11 − λ  A12  A13; A21  A22 − λ  A23; A31  A32  A33 − λ] = 0 ,   (1.61)

which gives

λ³ − (A11 + A22 + A33) λ² + ( det[A22 A23; A32 A33] + det[A11 A13; A31 A33] + det[A11 A12; A21 A22] ) λ − det[A11 A12 A13; A21 A22 A23; A31 A32 A33] = 0 ,   (1.62)
which is a cubic equation with at most three distinct roots.
1.12 Invariants
Components of a vector v change from one coordinate system to another; however, its
length remains the same in all coordinates because length is a scalar. This is interesting
because one can write the length of a vector in terms of its components in any coordinate
system as ‖v‖ = (vi vi)^(1/2), which means there is a combination of the components that does not
change under coordinate transformation while the components themselves do. Any function of
the components f(v1 , v2 , v3) that does not change under coordinate transformation is called an
invariant. This independence of the coordinate system makes invariants physically important,
and the reason should become clear soon.
Tensors of higher order also have invariants. For a second-order tensor A, looking back
at equation (1.62), the eigenvalues λ1, λ2 and λ3 are scalars and stay unchanged under coordinate
transformation; therefore the coefficients of the equation must remain unchanged as well, and
they are all invariants of the tensor A, denoted by IA, IIA and IIIA,
where I, II and III are called the first, second and third principal invariants of A. The above
formulas look familiar: the first invariant is the trace, the second is the sum of the principal
minors, and the third is the determinant. Of course, we can express the invariants in terms of
the eigenvalues of the tensor:
IA = tr(A) = λ1 + λ2 + λ3   (1.66)

IIA = (1/2) [ (tr(A))² − tr(A²) ] = λ1 λ2 + λ2 λ3 + λ3 λ1   (1.67)

IIIA = det(A) = λ1 λ2 λ3 .   (1.68)
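Equations (1.66)–(1.68) can be verified numerically for an arbitrary symmetric tensor:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])
lam = np.linalg.eigvalsh(A)        # eigenvalues of the symmetric tensor

I_A   = np.trace(A)
II_A  = 0.5 * (np.trace(A)**2 - np.trace(A @ A))   # eq. (1.67)
III_A = np.linalg.det(A)

# The same invariants from the eigenvalues, eqs. (1.66)-(1.68)
assert np.isclose(I_A,   lam.sum())
assert np.isclose(II_A,  lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0])
assert np.isclose(III_A, lam.prod())
```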
In material modeling we often look for a scalar-valued function of a tensor in the form
φ(A). Since the function value φ is a scalar, it must be invariant under transformation
of coordinates. To fulfill this requirement the function is formulated in terms of the invariants
of A; then it will be invariant itself. Therefore the most natural form of such a formulation
would be

φ = φ(IA , IIA , IIIA ) .   (1.69)
This will be used in the sequel when hyper-elastic material models are explained.
When A is a symmetric tensor, the singular values of A equal the absolute values of its
eigenvalues.
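This relation between singular values and eigenvalues of a symmetric tensor is easy to confirm numerically (the sample tensor is chosen to have eigenvalues of mixed sign):

```python
import numpy as np

A = np.array([[ 1.0,  2.0, 0.0],
              [ 2.0, -3.0, 0.0],
              [ 0.0,  0.0, 2.0]])   # symmetric tensor

s = np.linalg.svd(A, compute_uv=False)   # singular values
lam = np.linalg.eigvalsh(A)              # eigenvalues

# For a symmetric tensor the singular values equal |eigenvalues|
assert np.allclose(np.sort(s), np.sort(np.abs(lam)))
```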
u = v w sin θ ,  0 ≤ θ ≤ π .   (1.70)

The cross product of vectors has the following properties (Fig. 1.5: cross product).

Property 26 (Cross product).
1. v × w = −w × v Anti-symmetry
2. u × (αv + βw) = αu × v + βu × w Linearity.
Important results
1. In any orthonormal coordinate system including Cartesian coordinates it holds that
e1 × e1 = 0 , e1 × e2 = e3 , e1 × e3 = −e2
e2 × e1 = −e3 , e2 × e2 = 0 , e2 × e3 = e1 (1.71)
e3 × e1 = e2 , e3 × e2 = −e1 , e3 × e3 = 0 ,
2. therefore

v × w = det [e1 e2 e3; v1 v2 v3; w1 w2 w3] ,   (1.72)
Since the magnitude of v × w equals the area of the corresponding parallelogram, we consider it
to be the area vector. Then it is clear that the area A of a triangle with sides v and w is given
by

A = (1/2) ‖v × w‖ .   (1.74)

(Fig. 1.6: area of triangle.)
u · (v × w) , (1.76)
or in index notation
u · (v × w) = εijk ui vj wk .   (1.78)
Geometrically, the triple product of the three vectors equals the volume of the parallelepiped
with sides u, v and w (Fig. 1.7).
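Equation (1.78) and the index-notation form of the cross product can be checked by tabulating εijk explicitly (the sample vectors are arbitrary):

```python
import numpy as np

# Permutation symbol ε_ijk from the even permutations; odd ones get −1
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

# Cross product in index notation: (v × w)_i = ε_ijk v_j w_k
assert np.allclose(np.einsum('ijk,j,k->i', eps, v, w), np.cross(v, w))

# Triple product (1.78): u·(v × w) = ε_ijk u_i v_j w_k  (parallelepiped volume)
vol = np.einsum('ijk,i,j,k', eps, u, v, w)
assert np.isclose(vol, np.dot(u, np.cross(v, w)))
```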
(u × v) × (w × x) = (u · (v × x)) w − (u · (v × w)) x
= (u · (w × x)) v − (v · (w × x)) u (1.82)
Exercise 9. For a tensor given as T = I + vw with v and w being orthogonal vectors, i.e.
v · w = 0, calculate T², T³, · · · , Tⁿ and finally e^T (Hint: e^T = Σ_{n=0}^{∞} (1/n!) Tⁿ).
Chapter 2
Tensor calculus
So far, vectors and tensors have been considered based on the operations by which we can combine
them. In this chapter we study vector- and tensor-valued functions in three-dimensional
space, together with their derivatives and integrals. It is important to keep in mind that
we address both tensors and vectors by the term tensor; the reader should already
know that a vector is a first-order tensor and a scalar a zeroth-order tensor.
where T is the space of tensors of the same order, for instance second-order tensors T =
R3×3, vectors T = R3, or fourth-order tensors T = R3×3×3×3.
A typical example is the position vector of a moving particle given as a function of time,
x(t), where t is a real number and x is a vector with initial point at the origin and terminal
point at the moving particle (Fig. 2.1: position vector).

On the other hand, a tensor field A(x) assigns a tensor A to a point x in space, say
three-dimensional. Again one could formally write

A : R3 −→ T
x ↦ A(x) .   (2.2)

Here R3 is the usual three-dimensional Euclidean space. For instance, we can think of the
temperature distribution in a given domain, T(x), which is a scalar field that assigns scalars
(temperatures) to points x in the domain. Another example could be the strain field ε(x)
in a solid body under deformation, which assigns second-order tensors ε to points x in the body.
Exercise 10. For a given second-order tensor function A(u) = Aij (u) ei ej in Cartesian
coordinates write down the derivatives.
Property 27. For tensor functions A(u), B(u), C(u), and a vector function a(u) we have

1. (d/du)(A · B) = A · dB/du + dA/du · B   (2.5)

2. (d/du)(A × B) = A × dB/du + dA/du × B   (2.6)

3. (d/du)[A · (B × C)] = dA/du · (B × C) + A · (dB/du × C) + A · (B × dC/du)   (2.7)

4. a · da/du = a da/du   (2.8)

5. a · da/du = 0 iff ‖a‖ = const   (2.9)
Exercise 11. Verify the 4th and the 5th properties above.
or in compact form

∂A/∂xi = lim_{Δxi→0} [A(x + Δxi ei) − A(x)] / Δxi .   (2.11)
In general, dx is not parallel to the coordinate axes. Therefore, based on the chain rule of
differentiation,

dA(x1 , x2 , x3) = (∂A/∂xi) dxi = (∂A/∂x1) dx1 + (∂A/∂x2) dx2 + (∂A/∂x3) dx3 .   (2.12)
Remembering the calculus of multi-variable functions, this can also be written as

dA = grad(A) · dx   with   grad(A) = (∂A/∂xi) ei .   (2.13)

In the latter formula the gradient of the tensor field, grad(A), appears, which is a tensor of
one order higher than A itself. To highlight this fact, we rewrite the above equation as

grad(A) = (∂A/∂xi) ⊗ ei .   (2.14)
A proper choice of notation in mathematics is substantial, and the introduction of operator
notation in calculus is no exception. Since linear operators follow algebraic rules somewhat
similar to those of the arithmetic of real numbers, writing relations in operator notation is
invaluable. Here we introduce the so-called nabla operator as

∇ ≡ ei ∂/∂xi = e1 ∂/∂x1 + e2 ∂/∂x2 + e3 ∂/∂x3 .   (2.15)

Then the gradient of the tensor field in equation (2.13) can be written as∗

grad(A) = A∇ = A ⊗ ∇ .   (2.16)
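The defining relation dA = grad(A) · dx of equation (2.13) can be checked for a scalar field with a finite-difference gradient; the helper grad and the sample field are our own constructions:

```python
import numpy as np

def grad(phi, x, h=1e-6):
    """Numerical gradient (φ∇)_i = ∂φ/∂x_i by central differences."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (phi(x + e) - phi(x - e)) / (2 * h)
    return g

phi = lambda x: x[0]**2 + x[1] * x[2]     # a sample scalar field
x = np.array([1.0, 2.0, 3.0])

# grad(φ) = (2 x1, x3, x2) at x = (1, 2, 3)
assert np.allclose(grad(phi, x), [2.0, 3.0, 2.0], atol=1e-5)

# dφ ≈ grad(φ)·dx for a small displacement dx  (eq. 2.13)
dx = 1e-4 * np.array([1.0, -2.0, 0.5])
assert np.isclose(phi(x + dx) - phi(x), grad(phi, x) @ dx, atol=1e-6)
```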
with C = const, and the definite integral of A over the interval [a, b] is

∫_a^b A(u) du = B(b) − B(a) .   (2.18)
where Aij is a real-valued function of real numbers. Therefore, integration of a tensor
function comes down to integration of its components.

∗ In the preceding equations the dyadic ⊗ is sometimes written explicitly for pedagogical reasons.
Property 28.

1. ∫_{P1}^{P2} A · dx = − ∫_{P2}^{P1} A · dx   (2.22)

2. ∫_{P1}^{P2} A · dx = ∫_{P1}^{P3} A · dx + ∫_{P3}^{P2} A · dx ,  where P3 is between P1 and P2   (2.23)
Parameterization
A curve in space is a one-dimensional entity which can be specied by a parameterization
of the form
x(s) (2.24)
dφ = A · dx , (2.27)
A = grad(φ) = φ∇ . (2.28)
This argument could be reversed, which leads to the following theorem.

Theorem 29. The line integral ∫_C A · dx is path independent if and only if there is a
scalar field φ such that A = grad(φ).
N → ∞ :  maxᵢ ‖ΔSi‖ → 0 .
Note that the surface integral (2.30) is a tensor of one order lower than A, due to the appearance
of A · n in the integrand, while the surface integral (2.31) is a tensor of the same order as A,
due to the term A × n.
the tensor field A(x) over S as in equations (2.30) and (2.31). As mentioned before, in the case
of closed surfaces these have special physical meanings. A question that naturally arises is:
what is the average intensity of sources and rotations over the domain Ω? The answer is

D = (1/V) ∮_S A · n dS ,   C = (1/V) ∮_S A × n dS ,   (2.33)
† A source, as the word literally suggests, is what causes the tensor field.
where the integrals are divided by the volume V. Now, if the domain Ω shrinks, that is V → 0,
these average values reflect the local intensity or concentration of sources and rotations of
the tensor field. This is the physical analogue of density, which is the concentration of mass.
These local values are called the divergence and curl of the tensor field A, denoted and formally
defined by

div(A) = lim_{V→0} (1/V) ∮_S A · n dS   (2.34)

curl(A) = lim_{V→0} (1/V) ∮_S A × n dS .   (2.35)
Remark 31. So far the gradient, divergence and curl operators have been applied to their
operand from the right-hand side, i.e. grad(A) = A∇ , div(A) = A · ∇ , curl(A) = A × ∇. This is
not a strict requirement of their definition; in fact, the direction from which the operator
is applied depends on the context. If we exchange the order of the dyadic, inner and outer
products in equations (2.26), (2.34) and (2.35) respectively, the direction of application of
nabla is reversed, to wit

dφ = dx · ∇φ ,   lim_{V→0} (1/V) ∮_S n · A dS = ∇ · A ,   lim_{V→0} (1/V) ∮_S n × A dS = ∇ × A ,   (2.38)

which are equally valid. However, keeping a consistent convention shall not be overlooked.
Exercise 15. Calculate ∇ × (∇ × v) for the vector eld v .
This combination of the gradient and divergence operators appears so often that it is given a
distinct name, the so-called Laplace operator or Laplacian, which is denoted by

∆ ≡ ∇ · ∇ = ∂k ∂k .   (2.40)
∫_V (A · ∇) dV = ∮_S A · n dS .   (2.43)
This result is also called Green's or the divergence theorem. If we remember the physical
interpretation of divergence, then Gauss' theorem means that the flux of a tensor field through
a closed surface equals the intensity of the sources bounded by the surface, which physically
makes perfect sense.
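Gauss' theorem (2.43) can be verified numerically for a first-order tensor field on the unit cube; the field A = (x1, x2², x3³) is our own choice:

```python
import numpy as np

# Check of Gauss' theorem for A(x) = (x1, x2**2, x3**3) on the cube [0, 1]^3
n = 100
c = (np.arange(n) + 0.5) / n                 # midpoint-rule nodes on [0, 1]
X1, X2, X3 = np.meshgrid(c, c, c, indexing='ij')

# Volume side: ∫_V ∇·A dV with ∇·A = 1 + 2 x2 + 3 x3²
lhs = np.sum(1.0 + 2.0 * X2 + 3.0 * X3**2) / n**3

# Surface side: ∮_S A·n dS. The normal components of A vanish on the faces
# x_i = 0; on x1 = 1, x2 = 1, x3 = 1 the integrands are 1, x2²|₁ = 1 and
# x3³|₁ = 1 respectively, each over a face of unit area:
rhs = 1.0 + 1.0 + 1.0

assert np.isclose(lhs, rhs, atol=1e-3)       # flux equals enclosed sources
```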
Theorem 33 (Stokes' theorem). Let S be an open two-sided surface bounded by a closed
non-self-intersecting curve C as in Fig. 2.5 (Stokes' theorem). Then

∮_C A · dx = ∫_S (A × ∇) · n dS ,   (2.44)

where n is directed such that the path direction appears counter-clockwise when viewed from n,
as indicated in the figure.
∇(A + B) = ∇A + ∇B (2.45)
∇ · (A + B) = ∇ · A + ∇ · B (2.46)
∇ × (A + B) = ∇ × A + ∇ × B (2.47)
∇ · (αA) = (∇α) · A + α(∇ · A) (2.48)
∇ × (αA) = (∇α) × A + α(∇ × A) (2.49)
∇ × (∇ × A) = ∇(∇ · A) − ∇²A   (2.50)
∇ · (v × w) = (∇ × v) · w − v · (∇ × w)   (2.51)
∇ × (v × w) = (w · ∇)v − (∇ · v)w − (v · ∇)w + (∇ · w)v (2.52)
∇(v · w) = (w · ∇)v + (v · ∇)w + w × (∇ × v) + v × (∇ × w) (2.53)
and

∮_C (A∇) · dx = ∫_S ((A∇) × ∇) · n dS = 0 .   (2.57)
(Fig. 2.6: curvilinear coordinates — coordinate surfaces ui = ci with unit tangent vectors e1, e2, e3.)
where the transformation functions are smooth and one-to-one. Smoothness means being
differentiable, and being one-to-one requires the Jacobian determinant to be non-zero:

det ∂(u1 , u2 , u3)/∂(x1 , x2 , x3) = det [∂u1/∂x1 ∂u1/∂x2 ∂u1/∂x3; ∂u2/∂x1 ∂u2/∂x2 ∂u2/∂x3; ∂u3/∂x1 ∂u3/∂x2 ∂u3/∂x3] ≠ 0 .   (2.63)
passing through P, called coordinate surfaces. And there are also three curves which are the
intersections of every two coordinate surfaces; these are called coordinate curves. On each
coordinate curve only one of the three coordinates ui varies and the other two are constant.
If the position vector is denoted by r, then the vector ∂r/∂ui is tangent to the ui coordinate
curve, and the unit tangent vectors (Fig. 2.6) are obtained from
2.13 Differentials

In three-dimensional space there are three types of geometrical objects (other than points)
regarding dimension. Each type has its own differential element, namely line elements, surface
elements and volume elements. We assume that the reader has already dealt with the related
ideas in rectangular (Cartesian) coordinates. Now we want to generalize those ideas to
curvilinear coordinates.
Line elements

An infinitesimal line segment with an arbitrary direction in space can be projected onto the
coordinate axes. Consider the position vector r expressed in a curvilinear coordinate system
by r(u1 , u2 , u3). A line element is the differential of the position vector, obtained by the
chain rule and using equation (2.66):

dr = (∂r/∂u1) du1 + (∂r/∂u2) du2 + (∂r/∂u3) du3 = h1 du1 e1 + h2 du2 e2 + h3 du3 e3 ,   (2.67)

where

dr1 = h1 du1 ,  dr2 = h2 du2 ,  dr3 = h3 du3   (2.68)
are the basis line elements. If the position vector sweeps a curve (which is typical, for
example, in kinematics), the arc-length element, denoted by ds, is obtained by

Note that the above formulas are completely general. For instance, in Cartesian coordinates,
where hi = 1, they reduce to the familiar forms
Area elements
An area element is a vector described by its magnitude and direction
dS = dS n , (2.70)
which are called basis area elements. Note that an area element is a parallelogram with
its sides being line elements, and its area is obtained by the cross product of the line element
vectors. Comparing the two equations above gives

dS = (h2 h3 du2 du3) e1 + (h3 h1 du3 du1) e2 + (h1 h2 du1 du2) e3 .   (2.73)

As the simplest case, in Cartesian coordinates the latter formula gives the familiar form
Volume element

A volume element dV is a parallelepiped built on the three coordinate line elements dr1 e1,
dr2 e2 and dr3 e3. Since the volume of a parallelepiped is obtained by the triple product of its
side vectors, as in equation (1.77), we have
(Fig. 2.7: cylindrical coordinates (r, ϕ, z) of a point x with Cartesian components (x1, x2, x3) and z = x3.)
Cylindrical coordinates
In cylindrical coordinates a point is determined by (r, ϕ, z) (Fig. 2.7). The transformation to
rectangular coordinates is given as
dr = dr er + r dϕ eϕ + dz ez (2.86)
dS = r dϕ dz er + dr dz eϕ + r dr dϕ ez (2.87)
dV = r dr dϕ dz . (2.88)
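The scale factors behind equations (2.86)–(2.88) follow from hi = ‖∂r/∂ui‖; a SymPy sketch (our own, not part of the notes) confirms them for cylindrical coordinates:

```python
import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)
# Position vector in Cartesian components, parameterized by (r, ϕ, z)
x = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi), z])

# Scale factors h_i = ‖∂x/∂u_i‖ for u = (r, ϕ, z)
h = [sp.simplify(sp.sqrt(sum(c**2 for c in x.diff(u)))) for u in (r, phi, z)]
assert h == [1, r, 1]

# Volume-element factor h_r h_ϕ h_z = r, matching dV = r dr dϕ dz  (eq. 2.88)
assert sp.simplify(h[0] * h[1] * h[2] - r) == 0
```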
∇ ≡ er ∂/∂r + eϕ (1/r) ∂/∂ϕ + ez ∂/∂z   (2.89)

∇ · a = (1/r) ∂(r ar)/∂r + (1/r) ∂aϕ/∂ϕ + ∂az/∂z   (2.90)

∆φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂ϕ² + ∂²φ/∂z² = ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂ϕ² + ∂²φ/∂z²   (2.91)

∇ × a = ( (1/r) ∂az/∂ϕ − ∂aϕ/∂z ) er + ( ∂ar/∂z − ∂az/∂r ) eϕ + (1/r) ( ∂(r aϕ)/∂r − ∂ar/∂ϕ ) ez   (2.92)
(Fig. 2.8: spherical coordinates (r, θ, φ) of a point x with Cartesian components (x1, x2, x3).)
Spherical coordinates
In spherical coordinates a point is determined by (r, θ, φ) (Fig. 2.8). The transformation to
rectangular coordinates is given as
x1 = r sin θ cos φ , x2 = r sin θ sin φ , x3 = r cos θ (2.93)
r = (x1² + x2² + x3²)^(1/2) ,  θ = arccos(x3 /r) ,  φ = arctan(x2 /x1 ) .

The scale factors are

hr = 1 ,  hθ = r ,  hφ = r sin θ ,   (2.94)
and the differential elements

dr = dr er + r dθ eθ + r sin θ dφ eφ   (2.95)

dS = r² sin θ dθ dφ er + r sin θ dr dφ eθ + r dr dθ eφ   (2.96)

dV = r² sin θ dr dθ dφ .   (2.97)
Finally the differential operators can be listed as

∇ ≡ er ∂/∂r + eθ (1/r) ∂/∂θ + eφ (1/(r sin θ)) ∂/∂φ   (2.98)

∇ · a = (1/r²) ∂(r² ar)/∂r + (1/(r sin θ)) ∂(sin θ aθ)/∂θ + (1/(r sin θ)) ∂aφ/∂φ   (2.99)

∆ξ = ∂²ξ/∂r² + (2/r) ∂ξ/∂r + (1/r²) ∂²ξ/∂θ² + (cos θ/(r² sin θ)) ∂ξ/∂θ + (1/(r² sin²θ)) ∂²ξ/∂φ²   (2.100)

∇ × a = (1/(r sin θ)) ( ∂(sin θ aφ)/∂θ − ∂aθ/∂φ ) er + (1/r) ( (1/sin θ) ∂ar/∂φ − ∂(r aφ)/∂r ) eθ + (1/r) ( ∂(r aθ)/∂r − ∂ar/∂θ ) eφ   (2.101)