
Chapter 6

Mathematical Preliminaries
In this chapter we introduce tensors; in fact, you have surely encountered tensors in your studies, e.g. the
stress, strain and inertia tensors. The presentation uses direct, indicial and matrix notations so that hopefully
you can rely on your experience with matrices to get through this material without too much effort. Some
results are stated without proof; however, references are provided should you desire, and I hope you do, to
obtain a more thorough understanding of the material.

6.1 Vector Spaces
A set V is a real vector space, whose elements are often called vectors, if it is equipped with two operations:
addition, denoted by the plus sign +, that takes any vector pair (a, b) ∈ V × V and generates the sum a + b,
which is also a vector, and scalar multiplication that takes any scalar vector pair (α, a) ∈ R × V and
generates the scalar multiple α a, which is also a vector, such that the following properties hold for any
vectors a, b, c ∈ V and scalars α, β ∈ R:
V1 : Commutativity with respect to addition, a + b = b + a.
V2 : Associativity with respect to addition, (a + b) + c = a + (b + c).
V3 : Existence of the zero element 0 ∈ V such that a + 0 = a.
V4 : Existence of the negative elements −a ∈ V for each a ∈ V such that (−a) + a = 0.
V5 : Associativity with respect to scalar multiplication, α (β a) = (α β) a.
V6 : Distributivity with respect to scalar addition, (α + β) a = α a + β a.
V7 : Distributivity with respect to vector addition, α (a + b) = α a + α b.
V8 : Existence of the identity element 1 ∈ R such that 1 a = a.

As seen here, we represent scalars as lower case Greek letters and vectors as lower case bold face Latin
letters. And because of your experience with N-dimensional vector arrays we do not mention the obvious
equalities, e.g. −0 = 0, −a = (−1) a, etc., which can be proved using the above axioms.
We say the set of k ≥ 1 vectors {a1 , a2 , · · · , ak } that are elements of the vector space V is linearly
dependent if there exist scalars α1 , α2 , · · · , αk that are not all zero such that
0 = Σ_{i=1}^{k} αi ai = α1 a1 + α2 a2 + · · · + αk ak ,    (6.1)

otherwise we say the set is linearly independent. And we say a vector space V is k-dimensional (with k ≥ 1)
if a set of k linearly independent vectors exists, but no set of l > k linearly independent vectors exists. And
finally, a set of k linearly independent vectors in a k-dimensional vector space V forms a basis of V.
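
For readers who like to experiment, linear independence of arrays in R^N is easy to check numerically: assemble the vectors as columns of a matrix and compare its rank to the number of vectors. The following minimal sketch (Python with NumPy; the particular vectors are an arbitrary illustration, not taken from the text) does exactly that.

    import numpy as np

    # Candidate set {a1, a2, a3} in R^3, assembled as matrix columns.
    a1 = np.array([1.0, 0.0, 0.0])
    a2 = np.array([1.0, 1.0, 0.0])
    a3 = np.array([2.0, 1.0, 0.0])   # a3 = a1 + a2, so the set is linearly dependent

    A = np.column_stack([a1, a2, a3])

    # The set is linearly independent iff no nonzero combination sums to zero,
    # i.e. iff the rank of A equals the number of vectors.
    rank = np.linalg.matrix_rank(A)
    print("linearly independent" if rank == 3 else "linearly dependent")
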
Example 6.1. The idea of vector spaces, linear dependence, linear independence and bases is not limited to “vectors”. Indeed, the set of N-dimensional vector arrays RN , e.g.









    ⎧ a1 ⎫
a = ⎨ a2 ⎬
    ⎪  ⋮ ⎪
    ⎩ aN ⎭

is an N-dimensional vector space, but so too is the set of N × M matrix arrays RN×M , e.g.

    ⎡ A11  A12  · · ·  A1M ⎤
    ⎢ A21  A22  · · ·  A2M ⎥
A = ⎢  ⋮    ⋮            ⋮ ⎥
    ⎣ AN1  AN2  · · ·  ANM ⎦

is an N×M-dimensional vector space. Possible bases for these two vector spaces with N = M = 2 are
⎧ 1 ⎫   ⎧ 0 ⎫
⎨   ⎬ , ⎨   ⎬
⎩ 0 ⎭   ⎩ 1 ⎭

and

⎡ 1  0 ⎤  ⎡ 0  1 ⎤  ⎡ 0  0 ⎤  ⎡ 0  0 ⎤
⎢      ⎥, ⎢      ⎥, ⎢      ⎥, ⎢      ⎥,
⎣ 0  0 ⎦  ⎣ 0  0 ⎦  ⎣ 1  0 ⎦  ⎣ 0  1 ⎦

respectively.
The set of trial functions K(Ω) = {u : Ω ⊂ R → R | u ∈ C1p(Ω) and u(x0) = 0}, cf. Eqn. 4.36, is an infinite-dimensional
vector space whereas the set Kh(Ω) = {uh : Ω ⊂ R → R | ψi ∈ C1p(Ω), Ui ∈ R, uh(x) = Σ_{i=1}^{M} ψi(x) Ui} is
an M-dimensional vector space, cf. Eqn. 5.27.
Note that only in the first case do we refer to the elements as vectors even though matrices and continuous functions
are also elements of vector spaces.
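
To make the finite-dimensional portion of this example concrete, the following sketch (Python/NumPy; the matrix A is an arbitrary choice, not from the text) expands a 2 × 2 array in the four-element basis listed above and reassembles it from its components.

    import numpy as np

    # The four basis "vectors" of the 2x2 matrix space.
    E = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, 1.0], [0.0, 0.0]]),
         np.array([[0.0, 0.0], [1.0, 0.0]]),
         np.array([[0.0, 0.0], [0.0, 1.0]])]

    A = np.array([[3.0, -1.0],
                  [2.0,  5.0]])          # an arbitrary element of the matrix space

    # Components of A relative to this basis are simply its entries, obtained
    # here as the sum of entrywise products with each basis matrix.
    coeffs = [np.sum(A * Ei) for Ei in E]

    # Reassemble A as a linear combination of the basis elements.
    A_rebuilt = sum(c * Ei for c, Ei in zip(coeffs, E))
    assert np.allclose(A, A_rebuilt)
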

A set E is a Euclidean vector space if it is a three-dimensional vector space equipped with the additional
two operations: the inner (dot or scalar) product, denoted by the dot ·, that takes any pair of vectors
a, b ∈ E and generates the real number a · b ∈ R, and the cross product, denoted by the wedge ∧, that takes
any pair of vectors a, b ∈ E and generates the vector a ∧ b ∈ E, such that the following properties hold
for any vectors a, b, c ∈ E and scalars α, β ∈ R:
E1 : Symmetry, a · b = b · a.
E2 : Linearity, (α a + β b) · c = α (a · c) + β (b · c).
E3 : Positive definiteness, a · a ≥ 0 and a · a = 0 if and only if a = 0.
E4 : a ∧ b = −b ∧ a.
E5 : Linearity with respect to the cross product, (α a + β b) ∧ c = α (a ∧ c) + β (b ∧ c).
E6 : a · (a ∧ b) = 0.
E7 : (a ∧ b) · (a ∧ b) = (a · a) (b · b) − (a · b)2 .

Any, not necessarily three dimensional, vector space that also exhibits properties E1 , E2 and E3 is referred
to as an inner product space.
The norm (magnitude, length, or modulus) of a vector a ∈ E is defined as

|a| = (a · a)^{1/2}.    (6.2)

If |a| = 1 then a is a unit vector and if a · b = 0 then a and b are orthogonal.
From property E7 with |a|² = a · a and |b|² = b · b we see that

( |a ∧ b| / (|a| |b|) )² + ( (a · b) / (|a| |b|) )² = 1    (6.3)

to wit we define the angle θ ∈ [0, π] between the vectors a and b such that
cos θ = (a · b) / (|a| |b|),    sin θ = |a ∧ b| / (|a| |b|).    (6.4)

Certainly you have seen the result that the area enclosed by the parallelogram defined by the vectors a and b
is given by a = |a| |b| sin θ = |a∧b|, cf. Fig. 6.1. And from this observation you are comfortable with the fact
that a ∧ b = 0 if and only if a and b are linearly dependent. To show necessity, i.e. a ∧ b = 0 if a and b are
linearly dependent, let a = α b for some α ≠ 0 and apply property E5 to obtain (a ∧ b) = (α b) ∧ b = α (b ∧ b)
and similarly with the help of property E4 we have (a∧b) = −(b∧a) = −α (b∧b). Combining these equalities
we see (a ∧ b) = −(a ∧ b) and hence (a ∧ b) = 0. To show sufficiency, i.e. a ∧ b = 0 only if a and b are
linearly dependent, please complete Exer. 6.1.
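
Eqns. 6.3 and 6.4, and the claim that a ∧ b vanishes for linearly dependent vectors, are easy to verify numerically. The sketch below (Python/NumPy, with arbitrarily chosen vectors) performs the check.

    import numpy as np

    a = np.array([1.0, 2.0, 0.0])
    b = np.array([3.0, -1.0, 1.0])

    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    sin_t = np.linalg.norm(np.cross(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Eqn. 6.3: the two ratios behave as the cosine and sine of the same angle.
    assert np.isclose(cos_t**2 + sin_t**2, 1.0)

    # Linearly dependent vectors have zero cross product.
    assert np.allclose(np.cross(a, 2.5 * a), 0.0)
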
The scalar triple product [d, a, b] of the vectors a, b, d ∈ E is defined such that
[d, a, b] = d · (a ∧ b).

(6.5)

This can be interpreted as the signed volume of the parallelepiped defined by the vectors a, b and d, cf. Fig.
6.1. Indeed, the signed volume is the product of the base area a = |a ∧ b| and the “height” h = d · n = cos α|d|
where n = (1/|a ∧ b|) a ∧ b is the unit vector that is orthogonal to the plane defined by the vectors a and b.
It is not difficult to show that for any α, β ∈ R and any a, b, c, d ∈ E we have
[d, a, b] = [a, b, d] = [b, d, a] = −[d, b, a] = −[a, d, b] = −[b, a, d],
[α a + β b, c, d] = α [a, c, d] + β [b, c, d],
[a, b, c] = 0 if and only if {a, b, c} are linearly dependent.    (6.6)

And thus we see that the “volume” can be negative depending on how we order the vectors in the scalar
triple product. The equality [d, a, b] = −[d, b, a] follows from the definition of the triple product, i.e. Eqn.
6.5, and properties E2 and E4 . The equality [d, a, b] = −[a, d, b] follows from 0 = [a + d, a + d, b] =
[a, a + d, b] + [d, a + d, b] = [a, a, b] + [a, d, b] + [d, a, b] + [d, d, b] = [a, d, b] + [d, a, b], which follows
from properties E6 , E2 , E5 and E6 , respectively. To prove the necessity of the last equality, i.e. [a, b, c] = 0
if {a, b, c} are linearly dependent, note that if {a, b, c} are linearly dependent, then α a + β b + γ c = 0 for
some α, β, γ ∈ R not all zero. Without loss of generality, assume α ≠ 0 so that a = −β/α b − γ/α c and hence
[a, b, c] = [−β/α b − γ/α c, b, c] = −β/α [b, b, c] − γ/α [c, b, c] = −β/α [b, b, c] + γ/α [c, c, b] = 0, which
follows from Eqns. 6.6.2 and 6.6.1 and E6. To show sufficiency see Exer. 6.3.
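
The identities in Eqns. 6.5 and 6.6 can likewise be confirmed numerically; the following sketch (Python/NumPy, arbitrary vectors) computes the scalar triple product and checks the cyclic, sign-swap and linear-dependence properties.

    import numpy as np

    def triple(d, a, b):
        """Scalar triple product [d, a, b] = d . (a ^ b)."""
        return np.dot(d, np.cross(a, b))

    a = np.array([1.0, 0.0, 2.0])
    b = np.array([0.0, 3.0, 1.0])
    d = np.array([2.0, 1.0, 1.0])

    # Cyclic permutations leave the value unchanged; swapping two arguments flips the sign.
    assert np.isclose(triple(d, a, b), triple(a, b, d))
    assert np.isclose(triple(d, a, b), -triple(d, b, a))

    # Linearly dependent arguments give zero (signed) volume.
    assert np.isclose(triple(a, b, a + 2.0 * b), 0.0)
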
The basis {e1 , e2 , e3 } for E is orthonormal if
ei · ej = δij ,    (6.7)

Figure 6.1: Illustrations of c = a ∧ b and [d, a, b] = d · (a ∧ b).
where

      ⎧ 1 if i = j
δij = ⎨
      ⎩ 0 if i ≠ j    (6.8)

is the Kronecker delta. For example e1 · e1 = 1 whereas e1 · e2 = 0, which implies e1 is a unit vector that is
orthogonal to e2 . Using the basis allows us to express any a ∈ E as
a = a1 e1 + a2 e2 + a3 e3 ,

(6.9)

where a1 , a2 and a3 are the components of a relative to the basis {e1 , e2 , e3 }, cf. Fig. 6.2. Introducing the
indicial (or Einstein summation) convention we write the above as
a = ai ei ,

(6.10)

i.e. it is henceforth understood that when any, so called dummy, subscript appears twice in an expression
then it is to be summed from 1 to 3, e.g. ai ei = Σ_{i=1}^{3} ai ei . The dummy terminology is due to the fact that
the result is independent of the index choice, e.g. ai ei = a j e j = ak ek = · · ·
Our use of an orthonormal basis allows us to compute the components quite easily, viz
a · e j = (ai ei ) · e j

= ai (ei · e j )
= ai δi j
= a j,

(6.11)

where we make use of the fact that ei · e j = δi j = 1 only when i = j. And hence we can express any vector
as
a = (a · ei ) ei .

(6.12)

It is emphasized that the appearance of the Kronecker delta in a summation generally allows one to eliminate
an index.
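
The component extraction of Eqns. 6.11 and 6.12 amounts to three dot products; the following sketch (Python/NumPy, standard basis, arbitrary vector) illustrates it.

    import numpy as np

    # Right-handed orthonormal basis {e1, e2, e3} and an arbitrary vector a.
    e = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]
    a = np.array([4.0, -2.0, 7.0])

    # Components a_i = a . e_i  (Eqn. 6.11).
    components = [np.dot(a, ei) for ei in e]

    # Reassembly a = (a . e_i) e_i  (Eqn. 6.12).
    a_rebuilt = sum(ai * ei for ai, ei in zip(components, e))
    assert np.allclose(a, a_rebuilt)
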

Figure 6.2: Illustrations of the righthanded orthonormal basis {e1 , e2 , e3 } and vector components.
To perform our computations we express the vectors by their 3-dimensional array of components, i.e.



    ⎧ a1 ⎫
a = ⎨ a2 ⎬ ,    (6.13)
    ⎩ a3 ⎭

whence

            ⎧ α a1 + β b1 ⎫
α a + β b = ⎨ α a2 + β b2 ⎬ ,    (6.14)
            ⎩ α a3 + β b3 ⎭

which harkens back to your first vector encounter. And referring to Exam. 6.1 we see that
     ⎧ 1 ⎫         ⎧ 0 ⎫         ⎧ 0 ⎫
e1 = ⎨ 0 ⎬ ,  e2 = ⎨ 1 ⎬ ,  e3 = ⎨ 0 ⎬ .    (6.15)
     ⎩ 0 ⎭         ⎩ 0 ⎭         ⎩ 1 ⎭

In general we can show that [e1 , e2 , e3 ] = ±1, however herein we limit ourselves to righthanded bases
for which [e1 , e2 , e3 ] = 1 and this implies
e1 = e2 ∧ e3 ,    e2 = e3 ∧ e1 ,    e3 = e1 ∧ e2 ,    (6.16)

cf. Exer. 6.6. We will henceforth work with a righthanded, i.e. positively oriented, orthonormal basis, i.e.
the basis you are familiar with, cf. Fig. 6.2.
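
The right-handedness conditions [e1 , e2 , e3 ] = 1 and Eqn. 6.16 can be checked with a few lines (Python/NumPy sketch using the standard basis):

    import numpy as np

    e1, e2, e3 = np.eye(3)   # rows of the identity are the standard basis vectors

    # [e1, e2, e3] = e1 . (e2 ^ e3) = 1 for a right-handed orthonormal basis.
    assert np.isclose(np.dot(e1, np.cross(e2, e3)), 1.0)

    # Eqn. 6.16: each basis vector is the cross product of the other two, in cyclic order.
    assert np.allclose(e1, np.cross(e2, e3))
    assert np.allclose(e2, np.cross(e3, e1))
    assert np.allclose(e3, np.cross(e1, e2))
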
It is convenient now to introduce the alternator ϵi jk defined such that
       ⎧  1   if {i, j, k} = {1, 2, 3}, {2, 3, 1} or {3, 1, 2}, i.e. cyclic permutations of 1, 2, 3
ϵijk = ⎨ −1   if {i, j, k} = {1, 3, 2}, {2, 1, 3} or {3, 2, 1}, i.e. non-cyclic permutations of 1, 2, 3
       ⎩  0   otherwise,    (6.17)
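
A direct, if naive, implementation of the alternator, together with a check of the component formula for the cross product that follows from it, might look like the following Python/NumPy sketch; note that the indices here are zero-based, unlike in the text.

    import numpy as np

    def alternator(i, j, k):
        """Permutation symbol for zero-based indices 0, 1, 2."""
        if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
            return 1.0
        if (i, j, k) in [(0, 2, 1), (1, 0, 2), (2, 1, 0)]:
            return -1.0
        return 0.0

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([-1.0, 0.0, 2.0])

    # (a ^ b)_k = eps_kij a_i b_j, summing over the repeated indices i and j.
    cross = np.array([sum(alternator(k, i, j) * a[i] * b[j]
                          for i in range(3) for j in range(3)) for k in range(3)])
    assert np.allclose(cross, np.cross(a, b))
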

In this way property E4 and Eqns. e j . cf. 6. ⎪ ⎪ ⎪ ⎪ ⎩ ⎭ a1 b2 − a2 b1 258 (6.3.19) .18) and hence a ∧ b = (ai ei ) ∧ (b j e j ) = ai b j (ei ∧ e j ) = ϵki j ai b j ek : :: :: e1 e2 e3 ::: = ::: a1 a2 a3 ::: :: b b b :: 1 2 3 = (a2 b3 − a3 b2 ) e1 + (a3 b1 − a1 b3 ) e2 + (a1 b2 − a2 b1 ) e3 ⎧ ⎫ ⎪ a2 b3 − a3 b2 ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ a3 b1 − a1 b3 ⎪ = ⎪ . left: cyclic and right non-cyclic permutations.mathematical preliminaries 1 1 ϵi jk = 1 ϵi jk = −1 3 3 2 2 Figure 6. 6.5 and 6.3: Illustrations of the alternator. ek = ei · (e j ∧ ek ) = ei · ϵm jk em = ϵm jk ei · em = ϵm jk δim = ϵi jk (6.16 yield e j ∧ ek = ϵm jk em . 8 9 ei . Fig.

i. we may express ⎡ ⎤ ⎢⎢⎢ A11 A12 A13 ⎥⎥⎥ ⎢ ⎥ A = ⎢⎢⎢⎢ A21 A22 A23 ⎥⎥⎥⎥ . which eats the angular velocity vector and spits out the angular momentum vector. a. that eats elements from a vector space V and spits out elements into the same vector space V.e. as L. b] = c · (a ∧ b) = cm em · ϵki j ai b j ek = ai b j cm ϵki j (em · ek ) = ai b j cm ϵki j δkm = ϵki j ai b j ck :: : :: a1 a2 a3 ::: = ::: b1 b2 b3 ::: :: c c c :: 1 2 3 = (a1 b2 c3 + a2 b3 c1 + a3 b1 c2 ) − (a1 b3 c2 + a2 b1 c3 + a3 b2 c1 ). (6. β ∈ R and a.23) ⎣ ⎦ A31 A32 A33 259 . linear transformations from E to E. We denote tensors by upper case bold face Latin letters and the set of all tensors. a linear function. 1 |a| = (a · a) 2 1 = (a2i ) 2 1 = (a1 a1 + a2 a2 + a3 a3 ) 2 . Perhaps you have also seen tensors in your dynamics class. e. b ∈ V. Because A is a linear function we have. the inertia tensor. for any α. However.e.linear transformations – tensors where | · | denotes the determinant.22) As seen above. (6. for computational purposes. Combining the previous two results gives [c. A behaves like a matrix and for this reason we usually write the value b = A(a) simply as b = A a.g. i. (6. i. A : V → V. which you are probably familiar. (6.2 Linear Transformations – Tensors Students sometimes panic when they hear the word tensor. A(α a + β b) = α A(a) + β A(b). We are getting a bit ahead of ourselves here.21) 6.20) which are the “usual” results. a tensor A is nothing more than a linear transformation.e. but momentarily we show that indeed. Similarly we have a · b = (ai ei ) · (b j e j ) = ai b j (ei · e j ) = ai b j δi j = ai bi ⎧ ⎪ a1 ⎪ ⎪ ⎨ a2 = ⎪ ⎪ ⎪ ⎩ a ⎫T ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭ ⎧ ⎫ ⎪ b1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ b ⎪ ⎪ 2 ⎪ ⎪ ⎪ ⎩ b ⎪ ⎭ 3 3 ⎧ ⎫ b1 ⎪ ⎪ ⎪ 8 9⎪ ⎪ ⎪ ⎨ ⎬ a1 a2 a3 ⎪ b = ⎪ 2 ⎪ ⎪ ⎪ ⎩ b ⎪ ⎭ 3 = a1 b1 + a2 b2 + a3 b3 . We will encounter strain and stress tensors.

O + A = A.27) For example. (A + B) C = A C + B C. 1 A = A. A composite tensor (function) is defined similarly. (−A) + A = O. the set of all tensors (that maps vectors to vectors). With this we define tensor addition and scalar multiplication as (A + B) a = A a + B a.1. α(A + B) = α A + α B. C ∈ L. we are left with b = A B a and for this reason. to wit (A + B) c = A c + B c = B c + A c = (B + A) c.1. B ∈ L are said to be equal if A a = B a for all vectors a ∈ E. B ∈ L. (α A) a = α (A a) (6. A(B C) = (A B) C. C ∈ L we have A + B = B + A. A (B + C) = A B + A C. A O = O A = O.24) for every α ∈ R and A. Specifically for every α. α(β A) = (α β) A. to prove A + B = B + A we resort to the tensor addition definition. (α + β) A = α A + β A.e.e. property V1 . 6. (6. Exam. a practice we continue henceforth. (A + B) + C = A + (B + C). L. A◦B such that b = A◦B(a) = A(B(a)). However.25) And the identity tensor I ∈ L is defined such that I a = a. B. The zero tensor O ∈ L is defined such that for every a ∈ E O a = 0. cf. i. tensor composition is generally referred to as tensor multiplication.28) . IA = AI = A for every α ∈ R and A. Using the above definitions the obvious identities follow α (A B) = (α A) B = A (α B).mathematical preliminaries where the Ai j are the components of A. (6. 6. You are familiar with composite functions f ◦g defined such that y = f ◦g(x) = f (g(x)). (6. The tensors A. We finally apply the tensor equality definition to (A + B) c = (B + C) c. is itself a vector space. β ∈ R and A. Using the above definitions it is not hard to show that the elements of L satisfy the vector space properties V1 – V8 set forth in Sect. and tensor addition definition. upon dropping the parentheses we have b = A◦B a and upon dropping the ◦. 260 (6. B. Note that in general A B ! B A.26) Hopefully these definitions are familiar looking. i.

b. by using the definition of Eqn.13 and 6.e. Using the vector space properties. and placing the scalar (c · b) = (b · c) to the right of a. f ∈ E.29 and definition of tensor multiplication (composition) it follows that for any a. Using the symmetry of the dot product. ⎦ a3 b2 a3 b3 (6. 6.20.30) Again.linear transformations – tensors 6. 261 (6. and using the vector representation gives (a ⊗ b) c = a (b · c) = {a} ({b}T {c}) = ({a} {b}T ) {c}. d. we are getting a bit ahead of ourselves here. cf. i.29 the arbitrariness of f. (6. And since the above holds for all f we have (a ⊗ b) (c ⊗ d) = (b · c) (a ⊗ d).1 Dyadic Product The dyadic (outer or tensor) product of the vectors a. Eqns. a ⊗ (α b + β c) = α(a ⊗ b) + β (a ⊗ c). (a ⊗ b) (c ⊗ d) f = (a ⊗ b) [(c ⊗ d) f] = (a ⊗ b) [(d · f) c] = (f · d) [(a ⊗ b) c] = (f · d) (b · c) a = (b · c) (f · d) a = (b · c) (a ⊗ d) f.32) . b ∈ E is the tensor a ⊗ b ∈ L defined such that (a ⊗ b) c = (c · b) a (6. β ∈ R.31) Arguing in a similar way as above.2.29) for every vector c ∈ E.e. I = ei ⊗ ei for any α. it can be verified that (α a + β b) ⊗ c = α(a ⊗ c) + β (b ⊗ c). c. i. definition of Eqn. 6. ⎫ ⎧ ⎪ a1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎨ a2 ⎪ a⊗b = ⎪ ⎪ ⎪ ⎪ ⎭ ⎩ a ⎪ 3 ⎧ ⎫ ⎪ a1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ a2 ⎪ = ⎪ ⎪ ⎪ ⎪ ⎩ a ⎪ ⎭ 3 ⎡ ⎢⎢⎢ a1 b1 ⎢ = ⎢⎢⎢⎢ a2 b1 ⎣ a3 b1 ⎫T ⎧ ⎪ b1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎨ b2 ⎪ ⎪ ⎪ ⎪ ⎪ ⎭ ⎩ b ⎪ 3 8 b1 b2 b3 9 ⎤ a1 b2 a1 b3 ⎥⎥⎥ ⎥ a2 b2 a2 b3 ⎥⎥⎥⎥ . 6.

as a matrix of components. we suspect that the tensors ei ⊗ e j form a basis on L and this is indeed the case. ⎡ ⎢⎢⎢ 0 1 ⎢ e1 ⊗ e2 = ⎢⎢⎢⎢ 0 0 ⎣ 0 0 ⎡ ⎢⎢⎢ 0 0 ⎢ e2 ⊗ e2 = ⎢⎢⎢⎢ 0 1 ⎣ 0 0 ⎡ ⎢⎢⎢ 0 0 ⎢ e3 ⊗ e2 = ⎢⎢⎢⎢ 0 0 ⎣ 0 1 0 0 0 0 0 0 0 0 0 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ .37) . e2 .!!!!!!!!!!!!!!!!!!!<=!!!!!!!!!!!!!!!!!!!> = {[ei · (A e j )] (ei ⊗ e j )} b.36) we are able to express any tensor as i.15 and 6. e2 ⊗ e1 . ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . and as expected L is 9 = 32 -dimensional. To see this more formally consider the the operation a = A b for which the following holds a = ai ei = (a · ei ) ei = (ei · a) ei = [ei · (A b)] ei = [ei · (A [b j e j ])] ei = [ei · (A [(b · e j ) e j ])] ei = [ei · (A e j )] (b · e j ) ei A . And hence upon defining the tensor components via Ai j = ei · (A e j ) (6. upon recalling Exam.e. ⎡ ⎢⎢⎢ 0 0 ⎢ e1 ⊗ e3 = ⎢⎢⎢⎢ 0 0 ⎣ 0 0 ⎡ ⎢⎢⎢ 0 0 ⎢ e2 ⊗ e3 = ⎢⎢⎢⎢ 0 0 ⎣ 0 0 ⎡ ⎢⎢⎢ 0 0 ⎢ e3 ⊗ e3 = ⎢⎢⎢⎢ 0 0 ⎣ 0 0 1 0 0 0 1 0 0 0 1 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . 6. Associating the tensor with the matrix array makes computations easy to perform. · · · . ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ (6.!!!!!!!< (6. As seen here the basis {e1 . 6. For example consider a = Ab ai ei = [Ai j ei ⊗ e j ] (bk ek ) = (Ai j bk )(ei ⊗ e j ) ek = (Ai j bk )(ek · e j ) ei = (Ai j bk ) δ jk ei = Ai j b j ei .34) Ai j where we made use of Eqns. Eqn. 6. note that in particular e1 ⊗ e1 e2 ⊗ e1 e3 ⊗ e1 ⎡ ⎢⎢⎢ 1 0 ⎢ = ⎢⎢⎢⎢ 0 0 ⎣ 0 0 ⎡ ⎢⎢⎢ 0 0 ⎢ = ⎢⎢⎢⎢ 1 0 ⎣ 0 0 ⎡ ⎢⎢⎢ 0 0 ⎢ = ⎢⎢⎢⎢ 0 0 ⎣ 1 0 0 0 0 0 0 0 0 0 0 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . e3 ⊗ e3 } on L. 6. 262 (6. ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ .23.mathematical preliminaries Referring back to Eqns.10.12 and 6.29 and the fact that (b· e j ) is a scalar.35) A = Ai j ei ⊗ e j . ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . e3 } on E induces the basis {e1 ⊗ e1 . 6.33) and hence.1. =!!!!!!!>.30. ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . (6. ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . cf.

(α A + β B)T (AT )T T = α AT + β BT . For every tensor A ∈ L there exists a unique tensor AT ∈ L called the transpose of A such that for any a.39) Transpose. 6. 6. (a ⊗ b)T = b ⊗ a.40) It is not hard to show that for any scalars α. b ∈ E and tensors A.38) The other operations involving tensors and vectors are similarly performed. 6.35 and Eqn.e. i.e. b ∈ E we have a · (A b) = (AT a) · b.40. but here we take another approach from your familiar row–column interchange.40 with a = ei and b = e j combine to give Ai j = ei · (A e j ) = (AT ei ) · e j = e j · (AT ei ) = ATji .e. A = Sym(A) + Skew(A). B ∈ L that ATij = A ji . i. your usual row column interchange for the transpose holds true for our.2. (6. Skew and Projection Tensors We relied on the transpose above. A (a ⊗ b) = (A a) ⊗ b.41) For example. And since we have A = 1/2 (A + T A ) + 1/2 (A − AT ) we see that any tensor can be decomposed into symmetric and skew symmetric parts.linear transformations – tensors i. i. 6. ai = Ai j b j and hence we have ⎧ ⎫ ⎧ ⎪ a1 ⎪ A1 j b j ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ ⎨ a A2 j b j = ⎪ ⎪ ⎪ 2 ⎪ ⎪ ⎪ ⎪ ⎩ a ⎪ ⎭ ⎪ ⎩ A b 3 3j j ⎫ ⎡ ⎪ ⎢ A11 A12 A13 ⎪ ⎪ ⎬ ⎢⎢⎢⎢ ⎢⎢ A21 A22 A23 = ⎪ ⎪ ⎪ ⎭ ⎢⎣ A 31 A32 A33 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ ⎧ ⎫ ⎪ b1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ b . β ∈ R. ⎪ ⎪ 2 ⎪ ⎪ ⎪ ⎩ b ⎪ ⎭ 3 (6. (A B) = BT AT .42) . Alternatively the direct method may also be used. 263 (6. notably A B = [Ai j (ei ⊗ e j )] [Bkl (ek ⊗ el )] = Ai j Bkl (ei ⊗ e j ) (ek ⊗ el ) = Ai j Bkl (e j · ek ) (ei ⊗ el ) = Ai j Bkl δ jk (ei ⊗ el ) so (A B)i j = Aik Bk j . Eqn. Symmetric.e. All of the above equalities can be obtained by resorting to similar component manipulations. The tensor A ∈ L is symmetric if AT = A and skew if AT = −A.2 = Aik Bkl (ei ⊗ el ) ⎡ ⎢⎢⎢ A11 A12 A13 ⎢ = ⎢⎢⎢⎢ A21 A22 A23 ⎣ A31 A32 A33 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ ⎡ ⎢⎢⎢ B11 B12 B13 ⎢⎢⎢ ⎢⎢⎣ B21 B22 B23 B31 B32 B33 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ (6. definition. (a ⊗ b) A = a ⊗ (AT b). Utilizing the arbitrariness of a and b gives us (A B)T = BT AT . perhaps more abstract. vectors a. = A. For example the definitions of tensor multiplication and transposition. (6. Eqn. give ((A B)T a) · b = a · ((A B) b) = a · (A (B b)) = AT a · (B b) = BT (AT a) · b = ((BT AT ) a) · b.

45) where w = Axial(W) = W32 e1 + W13 e2 + W21 e3 . (6.1 Hopefully the duplicate use of the Skew notation will not cause confusion as the arguments are either tensors. 0 −(A21 − A12 ) A13 − A31 A21 − A12 0 −(A32 − A23 ) −(A13 + A31 ) A32 − A23 0 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ (6. In general a tensor has 9 independent components. 6. cf.44) for all vectors a ∈ E. for each skew tensor W there exist a unique axial vector w = Axial(W) defined such that Wa = w ∧ a (6.46. Conversely. 264 . the tensor product often appears in perpendicular projections.19 and 6.44. 3 (6. cf. 6. Sect.46) where W = Skew(w) = w1 (e3 ⊗ e2 − e2 ⊗ e3 ) + w2 (e1 ⊗ e3 − e3 ⊗ e1 ) + w3 (e2 ⊗ e1 − e1 ⊗ e2 ). obviously. ⎧ ⎫ ⎪ w2 a3 − w3 a2 ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ w3 a1 − w1 a3 ⎪ w∧a = ⎪ ⎪ ⎪ ⎪ ⎩ w a −w a ⎪ ⎭ 1 2 2 1 ⎡ ⎤⎧ ⎫ −w3 w2 ⎥⎥⎥ ⎪ a1 ⎪ ⎢⎢⎢ 0 ⎪ ⎪ ⎪ ⎪ ⎬ ⎢ ⎥⎨ 0 −w1 ⎥⎥⎥⎥ ⎪ a2 ⎪ = ⎢⎢⎢⎢ w3 ⎪ ⎪ ⎪ ⎣ ⎦⎪ ⎩ −w w 0 a ⎭ 2 1 = Skew(w) a.1 that satisfies Eqn. cf. Eqn. The part of a that lies along e is given by 6.38 shows ⎤⎧ ⎡ ⎫ a1 ⎪ −W21 W13 ⎥⎥⎥ ⎪ ⎢⎢⎢ 0 ⎪ ⎪ ⎪ ⎨ ⎬ ⎥⎥⎥ ⎪ ⎢⎢⎢ a 0 −W32 ⎥⎥ ⎪ W a = ⎢⎢ W21 ⎪ 2 ⎪ ⎪ ⎦⎪ ⎣ ⎩ a ⎪ ⎭ −W13 W32 0 3 ⎫ ⎧ ⎪ −W21 a2 + W13 a3 ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎨ W a − W a = ⎪ ⎪ 21 1 32 3 ⎪ ⎪ ⎪ ⎭ ⎩ −W a + W a ⎪ 13 1 32 2 ⎧ ⎫ ⎧ ⎫ ⎪ ⎪ W32 ⎪ a1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ ⎪ ⎨ ⎬ = ⎪ W ∧ a2 ⎪ ⎪ 13 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ W ⎭ ⎩ a ⎪ ⎭ 21 3 = Axial(W) ∧ a. Consider the vector a and the unit vector e. indeed. 6. Eqn. And for this reason. 6. a simple computation via Eqns.43 or vectors. Sym(A) = 1/2 (A + AT ) = 1 2 ⎡ A12 + A21 A13 + A31 ⎢⎢⎢ 2 A11 ⎢⎢⎢ 2 A22 A23 + A32 ⎢⎢⎣ A12 + A21 A13 + A31 A23 + A32 2 A33 Skew(A) = 1/2 (A − AT ) = 1 2 ⎡ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎣ ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ .2.mathematical preliminaries where. 1. for each vector w there exists a skew tensor W = Skew(w) 6. Indeed. And as seen above we are consistent with this fact since the symmetric tensor has 6 independent components and the skew tensor has 3 independent components.43) are the symmetric and skew parts. Besides being used to define a basis for L. three equal zero and three are the negative of the remaining three.

c] = [a. the first and third invariants are referred to as the trace. second and third principal invariants of a tensor A ∈ L are the scalar valued functions ı1 : E → R. 6.2. A c] + [A a. the part of a that lies along e. c ∈ E. c] is the volume of the transformed parallelepiped defined by the vectors A a. PT = P and P2 = P P = P.e. We encounter the determinant in the change of variable theorem where we relate differential volume elements. It is seen that a defining property of a perpendicular projection P is that it is symmetric and idempotent. c] = [A a. cf. is given by a − (a · e) e = (I − e ⊗ e) a. b.e. Upon recalling that [a.linear transformations – tensors a (e ⊗ e) a e (I − e ⊗ e) a Figure 6. 6. then for all subsequent projections we have a′ = (e ⊗ e) a′ . b. A b. 265 . 6. A b.4. denoted detA. ı3 (A) [a. b and c we see that detA [a. A b. that lies in the plane that is perpendicular to e. b. A c] (6. Fig.47) for every vector a.3 Tensor Invariants. and we encounter the trace when we differentiate the determinant. A c]. ı2 (A) [a. More typically. b. c] + [a. b. c] + [a.5. b. b. i. upon defining the vector a′ = (e ⊗ e) a as part of a that lies along e. A b. Fig. A c] + [A a. ı2 : E → R and ı3 : E → R defined such that ı1 (A) [a. cf. i. c] = [A a. b. Scalar Product and Norm The first. denoted trA and determinant.g. respectively. The latter equality comes from the observation that e. c]. b.4: Illustrations of perpendicular projections. A b and A c. (a · e) e = (e ⊗ e) a and the remaining part. c] is the volume of a parallelepiped defined by the vectors a.

A e2 . Noting that A e1 = (Ai j ei ⊗ e j ) e1 = Ai j (e1 · e j ) ei = Ai j δ1 j ei = Ai1 ei ⎧ ⎪ A11 ⎪ ⎪ ⎨ A21 = ⎪ ⎪ ⎪ ⎩ A 31 ⎡ ⎢⎢⎢ A11 ⎢ = ⎢⎢⎢⎢ A21 ⎣ A31 ⎫ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭ A12 A13 A22 A23 A32 A33 so that in general A e j = Ai j ei we have. b. (6. ek ] = Ai1 A j2 Ak3 ϵi jk = (A11 A22 A33 + A21 A32 A13 + A31 A12 A23 ) − (A11 A32 A23 + A21 A12 A33 + A31 A22 A13 ) :: : :: A11 A12 A13 ::: = ::: A21 A22 A23 ::: . A e3 ] detA ϵ123 = [Ai1 ei .48) detA [e1 .18 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ ⎧ ⎫ ⎪ 1 ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎬ 0 ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 0 ⎪ ⎭ (6. c] and [A a. A j2 e j . A c] = detA [a. A b.5: Illustration of detA: Solid and dashed lines show the parallelepiped with volumes [a. c]. e3 ] = [A e1 . b.mathematical preliminaries c b a Figure 6. Ak3 ek ] detA = Ai1 A j2 Ak3 [ei . via Eqns.49) :: A : : 31 A32 A33 266 . 6. e2 .16 and 6. e j .

A e2 . e2 . A3 j e j ] + [A1i ei . e3 ] = [A e1 . ei . e3 ] ı2 (A) = A2i A3 j [e1 . trI = 3. e3 ] + [e1 . A3 j e j ] + [A1i ei . det(A B) = detA detB. tr(a ⊗ b) = a · b. e3 ] + [e1 . detAT = detA. e2 . And finally a lengthy derivation gives ı2 (A) [e1 . e j .52) Similar to the manner in which any tensor can be expressed as the sum of a symmetric and skew com267 . e3 ] ı2 (A) ϵ123 = [e1 .e. A e3 ] + [A e1 . A e2 . e2 . A3i ei ] trA = A1i [ei . e3 ] = A2i A3 j ϵ1i j + A1i A3 j ϵi2 j + A1i A2 j ϵi j3 = (A22 A33 − A23 A32 ) + (A11 A33 − A13 A31 ) + (A11 A22 − A12 A21 ) 1 2 [A + A222 + A233 + 2 A22 A33 + 2 A11 A33 + 2 A11 A22 ] = 2 11 1 − [A211 + A222 + A233 + A23 A32 + A13 A31 + A12 A21 ] 2 1 1 (A11 + A22 + A33 )2 − Ai j A ji = 2 2 1 = [(trA)2 − trA2 ]. e2 . e j ] + A1i A2 j [ei . e2 . (6. A2i ei .50) i. Likewise for the trace we have trA [e1 . A2 j e j . e3 ] + [e1 .e. e3 ] + A3i [e1 . e2 . A e3 ] trA ϵ123 = [A1i ei . A e3 ] + [A e1 . trAT = trA. detI = 1. e2 . tr(A B) = tr(B A).51) It may also be verified that tr(α A + β B) = α trA + β trB. ei . your usual understanding of the trace as the sum of the diagonal elements also holds true. e2 . e2 . det(α A) = α3 detA. e3 ] + A2i [e1 . e j ] + A1i A3 j [ei .linear transformations – tensors i. e3 ] + [e1 . 2 (6. e2 . e2 . A2i ei . e3 ] = [e1 . A e2 . ei ] = A1i ϵi23 + A2i ϵ1i3 + A3i ϵ12i = A11 + A22 + A33 . your usual understanding of the determinant holds true. (6.

6.mathematical preliminaries ponents we define the deviatoric (traceless) and spherical parts of the tensor A via 1 Sph(A) = trA I 3 ⎤ ⎡ ⎢⎢⎢ 1 0 0 ⎥⎥⎥ 1 ⎥ ⎢⎢⎢ = (A11 + A22 + A33 ) ⎢⎢ 0 1 0 ⎥⎥⎥⎥ .58) which is analogous to the vector norm. Eqns. Using Eqns. C ∈ L. E2 and E3 . which follows from Eqns. W · Skew(A) if W ∈ L is skew 268 (6. β ∈ R and tensors A. ⎦ ⎣ 3 0 0 1 1 Dev(A) = A − trA I 3 ⎡ 2 1 1 A12 A13 ⎢⎢⎢⎢ 3 A11 − 3 A22 − 3 A33 ⎢ = ⎢⎢⎢ A21 − 13 A11 + 32 A22 − 13 A33 A23 ⎣ A31 A32 − 31 A11 − 13 A22 + 23 A33 so that ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ A = Sph(A) + Dev(A). (6. Eqn.59) .41. and that tr(ei ⊗ e j ) = ei · e j = δi j . B.7 and 6. trSym(A). (6. by recognizing that Dev(A)33 = −(Dev(A)11 + Dev(A)22 ) we see that Sph(A) and Dev(A) are defined by 1 and 8 independent components. cf. 6. 6. cf.20. is computed as A · B = tr[Aki Bk j ei ⊗ e j ] = Aki Bk j tr(ei ⊗ e j ) = Aki Bk j ei · e j = Aki Bk j δi j = Aki Bki . And we say two tensors A and B are orthogonal if A · B = 0.55) which. we can show. (6. We define the norm |A| of the tensor A as 1 |A| = (A · A) 2 1 = (Ai j Ai j ) 2 . (6. 6. analogously to properties E1 . these definitions on L are analogous to those on E. then it will be an inner product space. S · Sym(A) if S ∈ L is symmetric.55. Using the definition of the trace.39 and 6.52 and 6. A · (a ⊗ b). (a · c)(b · d). that the following relations hold A · B = B · A. 6. 6. (α A + β B) · C = α (A · C) + β (B · C).17.52. A · B = tr(AT B) = tr((AT B)T ) = tr(BT (AT )T ) = tr(BT A)) = B · A. For example.55 it is also readily verified that trA A · (B C) a · Ab (a ⊗ b) · (c ⊗ d) trA S·A W·A = = = = = = = A · I. Our dream is realized by defining the scalar product A · B = tr(AT B). cf. Again.53) (6.56) which is similar to the vector scalar product.52 and 6. Exer.20.57) A · A ≥ 0 for all A ∈ L and A · A = 0 if and only if A = O for all scalars α. Eqn. 6. (6. If we can define a suitable scalar product on L. upon noting that AT B = Aki Bk j ei ⊗ e j . 6. Eqns. Moreover.41.54) It is clearly seen that trDev(A) = 0 and hence the traceless terminology. cf. (BT A) · C = (ACT ) · B. cf.

e.e. For example. i. Ai j = ei · A e j as Ai j = A · (ei ⊗ e j ). (6. Rotation. 6.linear transformations – tensors for arbitrary tensors A.66) .6 and 6. 6.28.62) Note that by 6. If this is the case. Now suppose A a = 0 for some a ! 0. i. c. i.4 the angle between the transformed vectors.g.e. it is singular.52 and 6. Clearly the condition det A = 0 implies A has no inverse. then if A b = c we also have A (b + a) = A b + A a = c + 0 = c. cf.e. Additional insight into this claim is gained by noting that ⎡ ⎤⎧ ⎫ a1 ⎪ ⎢⎢⎢ A11 A12 A13 ⎥⎥⎥ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ ⎢⎢⎢ ⎥⎥⎥ ⎪ a2 ⎪ A a = ⎢⎢ A21 A22 A23 ⎥⎥ ⎪ ⎪ ⎪ ⎪ ⎣ ⎦⎪ A31 A32 A33 ⎩ a3 ⎭ ⎫ ⎧ ⎫ ⎧ ⎫ ⎧ ⎪ ⎪ ⎪ A13 ⎪ A12 ⎪ A11 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎨ ⎬ ⎨ ⎬ ⎨ A23 ⎪ A22 ⎪ A21 ⎪ a3 (6. b ∈ E. A tensor Q ∈ L is orthogonal if it preserves inner products. Moreover.23. As seen here and from Eqn.11.2.e. if (Q a) · (Q b) = a · b (6. 269 (6.55. (6. This detQ = ±1 result implies Q is invertible so upon manipulating QT Q Q−1 = I Q−1 we find QT = Q−1 .60) which is analogous to Eqn. i. This implies the columns of the matrix [A] are linearly dependent and thus by Eqns. applications of Eqn.65) for all vectors a. 6. Using the definition of the transpose. which follows from Eqns. ai = ei · a. the detA = 0. 6. and this is on agreement with your matrix experience.61) a2 + ⎪ a1 + ⎪ = ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎭ ⎩ A ⎪ ⎩ A ⎭ ⎩ A ⎭ 32 31 33 and hence A a is a linear combination of the columns of the matrix [A].40.52 we see that detA det A−1 = det(A A−1 ) = detI = 1 and hence det A−1 = (det A)−1 . 6. Q a. The third line above allows us to express the tensor components of Eqn. and d. 6.52 give 1 = det I = det(QT Q) = det QT det Q = det Q det Q = (detQ)2 and hence detQ = ±1. Eqn. 6. B and C and arbitrary vectors a. b. B ∈ L (A B)−1 = B−1 A−1 .21 the triple product of these three column vectors is zero. and matrix multiplication we see that (Q a) · (Q b) = a · QT (Q b) = a · (QT Q) b = a · b and hence QT Q = I. is unchanged as well as the length of the transformed vectors since (Q a) · (Q a) = a · a = |a|2 .e. 6. i.35. 6. Exer. Orthogonal. We can also show that the condition det A = 0 implies an a ! 0 exists such A a = 0. (6. (A−1 )T = (AT )−1 .e.63) It can also be shown that the following identities hold for all invertible A. (6.64) And for this reason we sometimes write A−T = (A−1 )T = (AT )−1 . i. If det A ! 0 then A is invertible and we say A−1 ∈ L is the unique inverse of A ∈ L and it satisfies A A−1 = A−1 A = I. cf.4 Inverse. Involution and Adjugate Tensors If detA = 0 we say that A is not invertible. trA = trAT = tr(AT I) = A · I. A is not one-to-one as both A b and A (b + a) equal c and and hence it has no inverse. 6.

orthogonal tensors with determinants equal to 1. cf.g. the product of the area and normal vector of the parallelogram defined by the vectors a and b is a ∧ b whereas the analogous product defined by the transformed vectors A a and A b is (A a) ∧ (A b) = A∗ (a ∧ b). e. If A is invertible. (6. q. q. Re Re a = a.40. 6. e. Notable orthogonal tensors are the central inversion Q = −I and the reflection about the plane with normal vector e Re = I − 2 e ⊗ e. our definition is consistent with your familiar notion that the inverse of an orthogonal matrix equals its transpose. The set of rotations contains all proper orthogonal tensors. These are examples of improper orthogonal tensors as their determinants equal -1. 6.6. Fig. We refer to p as the axis of rotation. 6. Fig. If {p. r} is an orthonormal (right-handed) basis for E then it is readily verified that the tensor Rp (θ) = p ⊗ p + (q ⊗ q + r ⊗ r) cos θ − (q ⊗ r − r ⊗ q) sin θ ⎤ ⎡ 0 0 ⎥⎥⎥ ⎢⎢⎢ 1 ⎥ ⎢ = ⎢⎢⎢⎢ 0 cos θ − sin θ ⎥⎥⎥⎥ ⎦ ⎣ 0 sin θ cos θ (6. i. The central inversion and the reflections are also examples of involutions since they equal their own inverses.e. 6. 6.69) for all a. For every A ∈ L there exists the unique tensor A∗ ∈ L called the adjugate that satisfies A∗ (a ∧ b) = (A a) ∧ (A b) (6. Re Re = I.68) is a rotation and the components of [R] are with respect to the {p. 6. b ∈ E. cf. Fig. We use the adjugate to relate areas and normal vectors in transformed domains. cf.7 and Exer. We encounter orthogonal tensors when we discuss rigid deformation and material symmetry.g.mathematical preliminaries a a e −I a Re a Figure 6. r} basis.67) Illustrations of the actions of these tensors on an arbitrary vector a are seen in Fig. Physically this is not surprising since the twice inverted or reflected object returns to its original state.6.6: Inversion (left) and reflection (right) illustrations. i. Indeed.69 with c ∈ E and using the “trick” that 270 .e. then upon taking the scalar product Eqn.8.

Note the exceptional case here.70) An application of Eqn. (6.linear transformations – tensors Ra θ a p r q Figure 6.71) 6. 6. a.2.72) We call λ the eigenvalue and v the eigenvector.e. i.73) .7: Rotation illustration. 6.72 we see that the eigenpair satisfies (A − λ I) v = 0 271 (6. (6. I = A A−1 we obtain c · A∗ (a ∧ b) = c · {(A a) ∧ (A b)} = (A A−1 c) · {(A a) ∧ (A b)} = [A (A−1 c). A a. (6. v) that satisfies A v = λ v. A b] = detA [(A−1 c). i. Rearranging Eqn. Spectral Representation and Polar Decomposition An eigenpair of a tensor A ∈ L is the nonzero scalar–vector pair (λ.64 and the arbitrariness of a ∧ b and c imply A∗ = detA A−T . b] = detA (A−1 c) · (a ∧ b) = c · detA (A−1 )T (a ∧ b).e the product of A v is parallel to v. A v ∥ v.5 Eigenpairs.

Solid and dashed lines show parallelograms with areas |a ∧ b| and |(A a) ∧ (A b)| = |A∗ (a ∧ b)|.8: Illustration of the adjugate.mathematical preliminaries A∗ (a ∧ b) = (A a) ∧ (A b) a∧b Ab b Aa a Figure 6. 272 .

cf.75) 3 i. Exer. c] + [a. we have the spectral decomposition of S. (A − λ I) b.e.32. i. A (a ⊗ b) = (A a) ⊗ b. c] = [(A − λ I) a. A b. I = vi ⊗ vi . (6. c]) + [A a. b. Similar results hold even if the eigenvalues of the symmetric A are not distinct. (6. In this way. the characteristic equation is an order 3 polynomial and as seen in Eqns. If S is symmetric then the eigenpairs are real. Then using the above arguments we have v1 ·v3 = v2 ·v3 = 0. [7].61 and the following discussion.e. 6. λ3 ] is expressed relative to the {v1 .e. v2 . cf. b. and 6. A b. The set of these three eigenvalues comprises the spectrum of A. c] + ([A a. Eqn. 6. 6. i. i. b.77) where [S] = diag[λ1 . b. of a linear transformation A ∈ L consist of three real numbers or one real number and a complex conjugate pair. Finally using the inequality (λi − λ j ) ! 0 we obtain v j · vi = 0. Subtracting these two equalities gives v j · S vi − vi · S v j = (λi − λ j ) v j · vi . v3 } basis.e. A c] + [A a.74) Substituting A − λ I for A in Eqn. cf. vi ) and (λ j . A b.e.50 and 6. 6. any perpendicular 273 . 6. Now assume that the eigenvalues of the symmetric A are distinct and that the eigenvectors are scaled to be unit vectors (as both vi and α vi satisfy Eqn. It then follows that the eigenvalues are the roots of the characteristic equation det(A − λ I) = 0. i. 6. v2 . the eigenvectors associated with distinct eigenvalues are orthogonal. and v3 form an orthonormal basis for E. 6.76. λ2 .e. v j ) be two eigenpairs of the symmetric tensor S such that λi ! λ j Then v j · S vi = v j · λi vi and vi · S v j = vi · λ j v j . Now using the definition of the transpose and the symmetry of S we obtain v j · S vi − vi · S v j = v j · S vi − ST vi · v j = v j · S vi − S vi · v j = v j · S vi − v j · S vi = 0 and hence 0 = (λi − λ j ) v j · vi . eigenvalues. c]. A c] @ −λ + ı1 (A) λ2 − ı2 (A) λ + ı3 (A) [a. (A − λ I) c] = −λ3 [a. A b. So. we see that the orthogonal unit eigenvectors v1 . i. Moreover. b.47 we see that det(A − λ I) [a.51 the invariants are real and hence the three roots.e.41. As seen in Eqn. To see this let (λi .72.27). c] + [a. the characteristic equation is also given by λ3 − ı1 (A) λ2 + ı2 (A) λ − ı3 (A) = 0 (6. 6. i. A − λ I is singular. b.76) thus offering additional insight into the invariants. upon using Eqns.linear transformations – tensors and thus for v ! 0 we require the columns of A − λ I to be linearly dependent. A c] + [A a. A c]) − = ? ([a. S = SI = S (vi ⊗ vi ) = (S vi ) ⊗ vi ) 3 ! = (λi vi ) ⊗ vi i=1 = 3 ! i=1 λi (vi ⊗ vi ) ⎡ ⎢⎢⎢ λ1 0 0 ⎢ = ⎢⎢⎢⎢ 0 λ2 0 ⎣ 0 0 λ3 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ (6.49. Consider the case for which λ = λ1 = λ2 ! λ3 .

S = 3 ! i=1 λi (vi ⊗ vi ) = λ (v1 ⊗ v1 + v2 ⊗ v2 ) + λ3 v3 ⊗ v3 = λ (I − v3 ⊗ v3 ) + λ3 v3 ⊗ v3 .4. ı2 (A) = λ2 λ3 + λ1 λ3 + λ1 λ2 . 1. ı3 (A) = det A = λ1 λ2 λ3 .49. (6.50 and 6. Sect.78) where we use Eqn. cf. 6.32 and recognize both I − v3 ⊗ v3 and v3 ⊗ v3 as perpendicular projections.80) .79) We say a tensor A is positive definite if a · A a > 0 for every a ! 0 ∈ E and positive semi definite if a · A a ≥ 0 for every a ∈ E . λi > 0 for i = 1. 6. Necessary and sufficient conditions that a symmetric tensor S is positive definite are that its eigenvalues are positive. S is positive semi-definite if λi ≥ 0. Fig. 6. (6.6.47. Lastly. 6. 274 (6. if λ = λ1 = λ2 = λ3 then {v1 . v3 } can be any orthonormal basis and we have S = 3 ! i=1 λi (vi ⊗ vi ) = λ (v1 ⊗ v1 + v2 ⊗ v2 + v3 ⊗ v3 ) = λ I.9: Spectral representation illustration for the λ1 = λ2 ! λ3 case. 6. vectors v1 and v2 that lie in the plane with normal vector v3 can be used in Eqn. 3 and similarly. 6. cf. viz. v2 .77. Using the spectral representation we may express invariants of Eqns.e.51 as ı1 (A) = trA = λ1 + λ2 + λ3 . 2.mathematical preliminaries v1 v2 v1 Figure 6. i.

82) which is again positive definite symmetric. (6.linear transformations – tensors If S is positive definite then its eigenvalues are positive and we can take their square roots giving √ S = 3 A ! λi (vi ⊗ vi ) i=1 ⎡ √ ⎢⎢⎢ λ1 ⎢ = ⎢⎢⎢⎢ 0 ⎣ 0 √0 λ2 0 which is obviously positive definite symmetric and satisfies the inverses of the eigenvalues to obtain 0 √0 λ3 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ .85) for all α. respectively. rather than writing B = C(A) we write B = C[A] and this is in contrast to the 2-tensors where we write b = C a rather than b = C(a). 6. 6. Eqn. which by Eqn. cf.41 is symmetric.83) where the tensors R and U are orthogonal and positive definite symmetric.81.61. cf. which is Eqn. We must now show R is orthogonal.e. i. AT A is positive √ definite symmetric and hence there exists a positive definite (invertible square root) symmetric tensor U = AT A. 6. by property V3 we see that a · (AT A) a = A a · A a ≥ 0 and a · (AT A) a = 0 only if A a = 0. It is readily verified that S−1 S = S S−1 = I. linear functions that eat vectors and spit out vectors are technically called second-order tensors (or 2-tensors). β ∈ R and A. Now we show that any invertible tensor A can be expressed via the (right) polar decomposition A = R U. 6. Moreover. 6. C : L → L such that C[α A + β B] = α C[A] + β C[B] (6. i.e. the [·] notation is used above to indicate the argument of the function. e.83. Using the above argument we can show that V = R U RT is positive definite symmetric and hence we also have the left polar decomposition A = V R. Additionally we can take 3 ! 1 (vi ⊗ vi ) λ i=1 i ⎤ ⎡ 1 ⎢⎢⎢ λ1 0 0 ⎥⎥⎥ ⎥ ⎢⎢⎢ = ⎢⎢⎢ 0 λ12 0 ⎥⎥⎥⎥⎥ . 275 . To these ends note that RT R = (A U−1 )T (A U−1 ) = U−T AT A U−1 = U−1 U2 U−1 = U−1 U U U−1 = I. For our purposes we treat fourth-order tensors (or 4-tensors ) as linear functions that eat 2-tensors and spit out 2-tensors. but A is invertible thusly A a = 0 implies a = 0.2. consider the tensor AT A. Next define the tensor R = A U−1 so that R U = (A U−1 ) U = A. ⎦ ⎣ 0 0 λ13 S−1 = (6. B ∈ L.g.81) B √ C2 √ √ S = S S = S. Consequently a · AT A a > 0 for all a ! 0. which implies R is orthogonal. The brackets are required here because in general C[A] B ! C[A B]. As seen above.g. the discussion following Eqn.6 Fourth-Order Tensors The aforementioned tensors.84) The fact that these decompositions are unique is verified in [3]. (6. Indeed. e. (6.

A + O = A. in our subsequent studies we encounter the elasticity 4-tensor. The analogy follows by defining 4-tensor addition and scalar multiplication as (A + B)[C] = A[C] + B[C].90) for every α ∈ R and A. Eqn. Herein we denote 4-tensors with upper case blackboard bold Latin letters and the set of all 4-tensors as L4 . A (B + C) = A B + A C. α(A + B) = α A + α B. IA = AI = A (6. (A + B) C = A C + B C. 4-tensors share properties analogous to those of 2-tensors. C ∈ L4 . B ∈ L is the 4-tensor A ⊗ B such that (A ⊗ B)[C] = (C · B) A for every C ∈ L. we see that L4 . B. which maps the strain 2-tensor into the stress 2-tensor. is a vector space. α(β A) = (α β) A. i. B ∈ L4 are said to be equal if A[C] = B[C] for all 2-tensors C ∈ L. Indeed.87) I[C] = C.91) .e.29. the analogous “operation” C a b does not appear as the operation a b is not defined. Not surprisingly. C ∈ L4 A + B = B + A. for every α. (6. 276 (6. the 4-tensors A. multiplication. Using the above definitions yields the identities α (A B) = (α A) B = A (α B). Why are we studying 4-tensors? Well. The dyadic product of the 2-tensors A. such that A B[C] = A[B[C]] and note that in general A B ! B A. (α + β) A = α A + β A. 1 A = A. (−A) + A = O. β ∈ R and A. B. i. the set of all 4-tensors (that maps 2-tensors to 2-tensors).89) As with 2-tensor composition we denote 4-tensor composition. cf. (A + B) + C = A + (B + C). which you may have seen by name of the generalized Hooke’s law. the zero 4-tensor O ∈ L4 such that O[C] = O (6.mathematical preliminaries for 2-tensors. (α A)[C] = α (A[C]) (6.e. 6.88) and the identity tensor I ∈ L4 such that In this way. A(B C) = (A B) C.86) for every scalar α ∈ R. A O = O A = O. (6.

C ∈ L4 . e3 ⊗ e3 } on L to induce a basis on L4 .linear transformations – tensors The same way that the the basis {e1 . 6. e2 . 6. 6. =!!!!!!!!!!!!!!!!!!!!!>. · · · . 6. cf. I = (6. B ∈ L.36.34 to obtain for A = C[B] A = Ai j (ei ⊗ e j ) = (A · (ei ⊗ e j )) (ei ⊗ e j ) = ((ei ⊗ e j ) · A) (ei ⊗ e j ) = {(ei ⊗ e j ) · C[B]} (ei ⊗ e j ) = {(ei ⊗ e j ) · C[Bkl (ek ⊗ el )]} (ei ⊗ e j ) = {(ei ⊗ e j ) · C[(B · (ek ⊗ el ))(ek ⊗ el )]} (ei ⊗ e j ) = {(ei ⊗ e j ) · C[ek ⊗ el ]} (B · (ek ⊗ el )) (ei ⊗ e j ) C .37 and 6.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!<=!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!> = {{(ei ⊗ e j ) · C[ek ⊗ el ]} {(ei ⊗ e j ) ⊗ (ek ⊗ el )}[B].92) Ci jkl where we made use of Eqns.32.91.60.94) where are the components relative to the basis {(e1 ⊗ e1 ) ⊗ (e1 ⊗ e1 ). Eqn.95) for any 2-tensor A ∈ L and 4-tensors B. 6. (6. B. e2 ⊗ e1 . e2 ⊗ e1 . D ∈ E we have (A ⊗ B)i jkl = Ai j Bkl . Now as per Eqns. · · · . 6. (α A + β B) ⊗ C = α(A ⊗ C) + β (B ⊗ C).96) Referring to Eqn. · · · (e3 ⊗ e3 ) ⊗ (e3 ⊗ e3 )} and thus we see that L4 is an 81 = 34 dimensional vector space.40 for every 4-tensor C there exists a unique 4-tensor CT called the transpose of C that satisfies A · C[B] = CT [A] · B (6. Thus any 4-tensor can be expressed as C = Ci jkl (ei ⊗ e j ) ⊗ (ek ⊗ el ).93) Ci jkl = (ei ⊗ e j ) · C[ek ⊗ el ] (6. (A ⊗ B) (C ⊗ D) = (B · C)(A ⊗ D). To see this we proceed as in Eqn. e3 ⊗ e3 } on L we use the basis {e1 ⊗ e1 .97) for all 2-tensors A. 6. C. (6.94 (ei ⊗ e j ) · C[(ek ⊗ el )] = CT [(ei ⊗ e j )] · (ek ⊗ el ) Ci jkl = (ek ⊗ el ) · CT [(ei ⊗ e j )] = 277 T Ckli j.98) . (C B)i jkl = Ci jmn Bmnkl (6. e3 } on E induces the basis {e1 ⊗ e1 . A ⊗ (α B + β C) = α(A ⊗ B) + β (A ⊗ C). (e2 ⊗ e1 ) ⊗ (e1 ⊗ e1 ).39 it can be verified that (C[A])i j = Ci jkl Akl . (ei ⊗ e j ) ⊗ (ei ⊗ e j ).59 and 6. Upon letting A = ei ⊗ e j and B = ek ⊗ el we find via Eqns.!!!!!!!!!!!!!!!!!!!!!< (6. This 4-tensor dyadic product shares properties of its second-order counterpart. 6. Namely for any α.57 and 6. β ∈ R and A.

91 and 6. PSkew [A] = Skew(A). B. We have a similar definition for 4-tensor projections and in particular we define four 4-tensor projections such that they remove the skew.41. Summarizing the above and using similar such arguments it can be verified that CiTjkl = Ckli j . Ci jkl = Ckli j .104) .41. 6. 6. Ci jkl = Ci jlk .e.e. Eqn. 6.95 and 6. cf. Ci jkl = C jikl . (6. C = C T.mathematical preliminaries cf.41. 6.100) T = (ei ⊗ e j ) ⊗ (e j ⊗ ei ).101. D ∈ L and 4-tensors A. 6. 6. 6.e.99) A (A ⊗ B) = A[A] ⊗ B.42. B. C ∈ L4 . 6. Eqns. i. Eqn. (A B) (A ⊗ B) (6. T[A] = (ei ⊗ e j ) ⊗ (e j ⊗ ei )[A] = [(e j ⊗ ei ) · A] (ei ⊗ e j ) = A ji (ei ⊗ e j ) = AT .53 and 6.103) The first line follows from Eqn. PSph [A] = Sph(A).36. 6. The major symmetry terminology arises because a 4-tensor can also exhibit minor symmetries. We say C possesses major symmetry if CT = C.54.60. These three symmetries respectively imply C = CT . (α A + β B)T = α AT + β BT . T = BT AT . (6. PSym [A] = Sym(A). i. cf. 6. T = (B ⊗ A).101) for every A and note that Indeed.102) which follows from Eqns. Referring to Fig.100 and that fact the ATij = A ji . C = T C. For convenience we define the transposition 4-tensor T such that T[A] = AT (6. and (6.41. (AT )T = A.4 we view a perpendicular projection as a 2-tensor that removes components of a vector. 6.98 whereas the second and third lines follow from Eqns. i. PDev [A] = Dev(A) 278 (6. deviatoric and spherical parts of any tensor A ∈ L. symmetric. 6. cf. C possesses the first minor symmetry if (C[A])T = C[A] and the second minor symmetry if C[AT ] = C[A] for all A ∈ L. (A ⊗ B)A = A ⊗ AT [B] for all 2-tensors A. Ci jkl = Ckli j . Eqn. C.

B. 6. B ∈ L is the 4-tensor A ! B such that (A ! B)[C] = A C BT (6. A ! (α B + β C) = α (A ! B) + β (A ! C).108) I = I ! I. where the eigentensors are normalized such that Ei · E j = δi j . D ∈ L that (A ! B)i jkl = Aik B jl . if (Q A) · (Q B) = A · B (6. It can be verified for all 2-tensors A. I am sure. (6. = 2 1 1 = I ⊗ I = I ⊗ I.66. (A ! B) (C ! D) = (A C) ! (B D).110) Referring to Eqns. (A ! B) T = T (B ! A). We say C ∈ L4 is invertible if there exists a unique C−1 ∈ L4 .g. 2 1 (I − T). Ei ) where the Ei ∈ L are the eigentensors defined such that C[Ei ] = λi Ei (6.107) for all 2-tensors C ∈ L. (A ! B)−1 = (A−1 ! B−1 ).linear transformations – tensors for all 2-tensors A ∈ L and note that 1 (I + T). where the last equality assumes that A and B are invertible. which makes the scaled identity (1/|I|) I behave like a unit vector e. 3 |I|2 1 1 = I− 2 I⊗I =I− I⊗I 3 |I| PSym = PSkew PSph PDev (6. T (A ! B) = (AT ! BT ).65 a 4-tensor Q ∈ L4 is orthogonal if it preserves inner products. called the inverse of C such that C−1 C = C C−1 = I. (α A + β B) ! C = α (A ! C) + β (B ! C). 279 (6. (6. fiber reinforced composites.106) The conjugation product of the 2-tensors A. i. Analogous to Eqn.77 you suspect.111) and that the spectral representation exists such that C= 9 ! i=1 λi Ei ⊗ Ei . orthogonal 4-tensors satisfy QT = Q−1 . C.105) note the use of the norm |I|. 6. e. Referring to Eqn. that if the fourth-order tensor C possesses major symmetry then it has 9 = 32 eigenpairs of the form (λi .109) for all 2-tensors A. 6.112) .72 and 6. Such 4-tensors arise when describing the constitutive response of anisotropic materials.e. (6. B ∈ L.

.114) vec (A X B) = (BT ⊙ A) vec (X) ..and 4-tensor computations in much the same way that 2-tensor and vector computations are performed in Eqn. ⎣ Ak1 B Ak2 B · · · A1m B A2m B . . . .113) Now we define the vector representation vec (X) of the m × n matrix X..117) .. ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎦ (6. Xm1 . ⎥⎥⎥ ⎦ (6. the Kronecker product is the k n × m l matrix defined as ⎡ ⎢⎢⎢ A11 B A12 B · · · ⎢⎢⎢ A B A B · · · 22 ⎢ 21 A ⊙ B = ⎢⎢⎢⎢⎢ .116) (6. of the 4-tensor C as ⎡ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢ mat (C) = ⎢⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎣ C1111 C2111 C3111 C1211 C2211 C3211 C1311 C2311 C3311 C1121 C2121 C3121 C1221 C2221 C3221 C1321 C2321 C3321 C1131 C2131 C3131 C1231 C2231 C3231 C1331 C2331 C3331 C1112 C2112 C3112 C1212 C2212 C3212 C1312 C2312 C3312 C1122 C2122 C3122 C1222 C2222 C3222 C1322 C2322 C3322 C1132 C2132 C3132 C1232 C2232 C3232 C1332 C2332 C3332 C1113 C2113 C3113 C1213 C2213 C3213 C1313 C2313 C3313 C1123 C2123 C3123 C1223 C2223 C3223 C1323 C2323 C3323 C1133 C2133 C3133 C1233 C2233 C3233 C1333 C2333 C3333 Using this matrix construct. Akm B ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ . 6.. ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ vec (X) = ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ In this way it may be verified that X11 X21 X31 .115) To simplify 4-tensor computations we introduce the matrix representation mat (C). it can be verified the components of the 2-tensor A = C [B] obey vec (A) = mat (C) vec (B) 280 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ . .e.2. i.38. Xmn ⎫ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ . and the n × l matrix B.mathematical preliminaries 6... by stacking the n columns of X to form a single column vector. X1n X2n X3n . (6. ⎢⎢⎢ .7 Matrix Representation Before proceeding we develop a matrix-vector abstraction that can be used to perform 2. ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎭ (6. This is particularly useful when developing finite element programs. . For a k × m matrix A.

(A ⊙ B)T = AT ⊙ BT . e. Tab. X = (0.3 Set Summary To make life easier down the road.4 Differentiation Differentiation arises in numerous places in the sequel. which maps reals into reals. Perhaps what you have not thought about is the fact that the derivative f ′ (x) of the function f at x (if 281 . B C mat AT = mat (A)T . (6.118) 6. 2) ⊂ R. B ∈ L4 A · B = vec (A)T vec (B) .e. mat (A ⊗ B) = vec (A) vec (B)T . For now consider the function f : X ⊂ R → Y ⊂ R.set summary for any 2. is defined as f ′ (x) = lim ϵ→0 f (x + ϵ) − f (x) ϵ (6. Consequently X must be an open subset of R. For example. mat (A B) = mat (A) mat (B) . i.119) and f is differentiable if f ′ (x) exists for all x ∈ X. B) L4 Set Real numbers Real numbers greater than 0 Vectors 2-tensors Invertible 2-tensors Invertible 2-tensors with positive determinant Symmetric 2-tensors Invertible positive definite symmetric 2-tensors Skew 2-tensors Orthogonal 2-tensors Orthogonal 2-tensors with positive determinant. g′ (0) does not exist. It may also be verified that for any 2-tensors A. 6. we differentiate the displacement to obtain the strain tensor.g. rotations Linear mappings from A to B 4-tensors Table 6. 6. mat (A ! B) = B ⊙ A. f ′ (x) = 2 x and for g(x) = |x|. Referring to your first calculus class you also know that for f (x) = x2 .and 4-tensors B ∈ L and C ∈ L4 .1: Set notation. if it exists.1 Notation R R+ E L LInv L+ LSym + LSym LSkew LOrth LRot L(A. Then based on your past experience you know that the derivative f ′ (x) of the function f at x. Our use of f (x+ϵ) for small |ϵ| implies x+ϵ ∈ X and hence x must be an interior point of X. B ∈ L and any 4-tensors A. cf. we list several commonly used sets.

120) and this makes sense. · · · ⎤ . A33 ) ⎥⎥⎥ ⎥⎥⎥ . e2 . And this is the reason that g′ (0) does not exist. x3 ) ∂ fˆ1 ∂x2 (x1 . x3 ) ∂αˆ ∂x2 (x1 . while the one sided limits limϵ→0+ (g(0 + ϵ) − g(0))/ϵ = 1 and limϵ→0− (g(0 + ϵ) − g(0))/ϵ = −1 exist.k. A12 . i.e. A12 . α from α.e.e. · · · ∂βˆ ∂A23 (A11 .mathematical preliminaries it exists) is unique because it is defined via the limit. the limit limϵ→0 (g(0 + ϵ) − g(0))/ϵ does not. A33 ) . · · · ∂βˆ ∂A21 (A11 . the scalar valued component function βˆ is differentiated with respect the 3 × 3 tensor components Ai j . if it exists. e3 } are ∂x i j respectively. x2 . x3 ) ∂ fˆ1 ∂x3 (x1 .e.e. i. · · · . cf. i. · · · . B : L → L at A. x2 . x2 . if it exists. In the above it is understood that 1) x = xi ei . e. x2 . · · · ∂βˆ (A11 . Generalizing your knowledge we define the gradient of a scalar valued function of a 2-tensor. x2 . A33 ) (ei ⊗ e j ) ⊗ (ek ⊗ el ) ∂Akl 282 (6. A12 . x2 . After taking your vector calculus class you know that for a scalar valued function of a vector. · · · ∂βˆ ∂A32 (A11 . 6. A33 ) ∂βˆ ∂A13 (A11 .10. A33 ) ⎥⎥⎥⎥ ⎥⎥⎦ . x3 ) and f(x) |x=xi ei = fˆi (x1 . the 3 component functions fˆi of the vector valued function f are differentiated with respect the 3 vector components x j . as the 2-tensor ∇β(A) ei ⊗ e j ⎡ ⎢⎢⎢ ⎢⎢⎢ = ⎢⎢⎢⎢ ⎢⎢⎣ = ∂βˆ ∂A11 (A11 . 2-tensor Df(x) with respect to the fixed orthonormal basis {e1 . α : E → R the gradient ∇α(x) of α at x. x3 ) ei ∂xi ⎫ ⎪ ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎪ ⎭ (6.a. A12 . the derivative of a 2-tensor valued function of a 2-tensor. Note the care that is taken to distinguish the function from its component function. is defined as the 2-tensor (a. Eqn. 6. x2 . · · · . x3 ) ∂ fˆ3 ∂x2 (x1 . which is unique. x2 . A12 .122) This makes sense. if it exists. x3 ) and ∂x (x1 . β : L → R at A. x2 . ˆ Also note the care that is taken to have consistent variables on each side of the above equations.119 with xi replacing x. · · · ∂βˆ ∂A22 (A11 .e. · · · ∂βˆ ∂A31 (A11 . via Eqn. x2 . x3 ) ei ⊗ e j ∂x j ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎦ (6. x3 ) ∂ fˆ3 ∂x1 (x1 . x2 . ∂Ai j . i. x3 ). x3 ) ∂ fˆ2 ∂x3 (x1 .e. 2) α(x) |x=xi ei = α(x ˆ 1 . x3 ) ∂ fˆ2 ∂x2 (x1 . x3 ) ∂αˆ (x1 .e. A12 . x3 ) ∂ fˆ2 ∂x1 (x1 .121) and this makes sense. x3 ). if it exists. f : E → E the derivative Df(x) of f at x. x2 . x3 ) ei are represented by their component functions 3 αˆ : R → R and fˆi : R3 → R and 3) the partial derivatives are evaluated in the “usual” manner.123) . x2 . x2 . A33 ) . x2 . A12 . A12 . A33 ) . the scalar valued component function αˆ is differentiated with respect the 3 vector components x j . A33 ) ei ⊗ e j . A12 . i. i. A12 . For a vector valued function of a vector. is defined as the vector ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ∇α(x) = ⎪ ⎪ ⎪ ⎪ ⎩ = ∂αˆ ∂x1 (x1 . x2 . x2 . A33 ) (6. x3 ) ∂αˆ ∂x3 (x1 . · · · ∂βˆ ∂A33 (A11 . A33 ) .g. i. And finally note that the components of the vector ∇α(x) and the ∂ fˆi ∂αˆ (x1 .e. And finally. matrix) ⎡ ⎢⎢⎢ ⎢⎢⎢ Df(x) = ⎢⎢⎢⎢⎢ ⎢⎢⎣ = ∂ fˆ1 ∂x1 (x1 . x3 ) ∂ fˆi (x1 . namely (x1 . A33 ) ∂βˆ ∂A12 (A11 . x3 ) ∂ fˆ3 ∂x3 (x1 . is the 4-tensor DB(A) = ∂ Bˆ i j (A11 . x2 . i. x2 . A12 . i.

cf.36. cf.13. for other choices of 283 . ı1 : L → R where from Eqn. Here is it is understood that 1) A = Ai j ei ⊗ e j . 6. ˆ 11 . where the second equality follows from Eqn. 6.3 and 6. 6. Indeed. A33 ) ei ⊗ e j is represented by its component functions Bˆ i j : R9 → R. 6. Eqn.50 ı1 (A) = tr A = A11 + A22 + A33 . lim = = = Example 6.g. Using Eqn.2. Application of Eqn. the 3×3 component functions Bˆ i j of the tensor valued function B are differentiated with respect the 3 × 3 tensor components Akl .e. 6. 6. However. To these ends we revisit the derivative definition of Eqn. It generalizes nicely for other choices of the co-domain Y. · · · .120 gives ⎧ ⎫ ⎪ 2 x1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ 2 x ∇φ(x) = ⎪ = 2 x. Determine the gradient of the trace. A33 ) is represented by its component function βˆ : R9 → R and 3) 6. 6.122 yields ⎡ ⎢⎢⎢ 1 ⎢ ∇ı1 (A) = ⎢⎢⎢⎢ 0 ⎣ 0 0 1 0 0 0 1 ⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ = I.119. The above differentiation via partial differentiation is absolutely fine. Example 6.4 we have to expand the function in terms of its components and this is not always straightforward. e.93. 6. cf.3. i. however there are situations where it is cumbersome. 6.4. Example 6. as seen in Exams. Eqn. Using Eqn.20.differentiation and this makes sense. 2) β(A) |A=Ai j = β(A B(A) |A=Ai j = Bˆ i j (A11 .49 or the tensor inverse function G : L → L such that G(A) = A−1 . Eqn. A12 . rather than Y ⊂ R we could have Y ⊂ E or Y ⊂ L. 6. · · · . For example consider the determinant function det = ı3 : L → R. Determine the derivative of the function f : R → R f (x) = x2 .e. ⎪ 2 ⎪ ⎪ ⎪ ⎩ 2x ⎪ ⎭ 3 where the second equality follows from Eqn. A12 . Determine the derivative of the function φ : E → R such that φ(x) = x · x = x21 + x22 + x23 .119 gives f ′ (x) (x + ϵ)2 − x2 ϵ→0 ϵ 2 x ϵ + ϵ2 lim ϵ→0 ϵ 2 x. i.

is the linear operator D f (x) : U → V that eats any u ∈ U and spits out the unique element D f (x)[u] ∈ V such that D f (x)[u] = f (x + u) − f (x) + o(|u|).121 and whence we have the “usual” result that the components of the derivative Df(x) are equal partial derivatives of the component functions. in Eqn. x2 . i. x3 ) ei ⊗ e j u. the derivative D f (x) considers all u ∈ U whereas the directional derivative only considers a specific u. its value at u ∈ U is denoted by D f (x)[u] ∈ V. e. 6. Letting u = e1 and using the component functions we subsequently gives 1 Df(x)[e1 ] = lim [f(x + ϵ e1 ) − f(x)] ϵ→0 ϵ 1 = lim [ fˆi (x1 + ϵ. i. Finally.119 is ill-defined. 6.124 for the function f : E → E gives 1 Df(x)[u] = lim [f(x + ϵ u) − f(x)] ϵ→0 ϵ (6. x3 ) ei ∂x1 (6. if it exists.62 we see the appearance of the directional derivative of f at x with respect to u. 6. Df(x) is a linear operator meaning Df(x)[α u + β v] = j α Df(x)[u] + β Df(x)[v] so in particular upon expressing u = ui ei we have Df(x)[u] = Df(x)[u j e j ] = u j Df(x)[e j ] ∂ fˆi (x1 . 6. 6. and because it is a linear function we use the square brackets (as we did with 4-tensors) to delineate the argument u. 6. D f (x)[u] = lim [ f (x + ϵ u) − f (x)] = ϵ→0 ϵ dϵ (6. For example.125) that holds for all u ∈ E. 6. δ f (x.124 derivative definition.123. x3 ).128) . Now.126) ˆ ∂ fi and hence Df(x)[e j ] = ∂x (x1 . x3 ) ei = (u · e j ) ∂x j C B ∂ fˆi = (x1 . This linear operator derivative definition is consistent with those appearing in Eqns. ∂x j (6. x2 . x2 . j Some remarks concerning the derivative are worth noting. 6.mathematical preliminaries the domain X this is not the case as the division. x2 . the arbitrariness of u is used to obtain 6. 4. x3 ) ei . is the linear operator D f (x) : U → V that eats any u ∈ U and spits out the unique element D f (x)[u] ∈ V such that d 1 f (x + ϵ u) |ϵ=0 = δ f (x. 6. 284 (6. specializing Eqn. x2 . 6. x3 ) ei ] ϵ→0 ϵ ∂ fˆi = (x1 . we could have equivalently stated that the derivative of the function f : X ⊂ U → Y ⊂ V at x ∈ X. x2 . Again we emphasize that D f (x) : U → V is a function.127) which follows from the partial derivative Eqn. by ϵ ∈ X ⊂ E.121.124) Recalling Eqn. x2 . To remedy this problem we modify Eqn.126.29.20 and the dyadic product Eqn. the inner product Eqn.122 and 6. if it exists. D2 As an alternative to the Eqn. However.120. u). D1 Be careful with the notation: we denote the (derivative) function of the function f : X ⊂ U → Y ⊂ V at x ∈ X ⊂ U as D f (x) : U → V and its value at u ∈ U as D f (x)[u] ∈ V. u).e. x3 ) ei − fˆi (x1 .119 so that the derivative of the function f : X ⊂ U → Y ⊂ V at x ∈ X. ∂ fˆi (Df(x))i j = ∂x (x1 .e.g.

Figure 6.10: Derivative illustration: Df(x)[u] approximates the difference f(x + u) − f(x), with x, x + u in the domain X and f(x), f(x + u) in the co-domain Y.

D3 Since the derivative function Df(x) is linear we have Df(x)[ϵ1 u1 + ϵ2 u2] = ϵ1 Df(x)[u1] + ϵ2 Df(x)[u2] ≈ f(x + ϵ1 u1 + ϵ2 u2) − f(x) for ϵ1, ϵ2 ∈ R and u1, u2 ∈ U.

D4 If f is a function of a real number, i.e. f : X ⊂ R → Y ⊂ V, then we use the ordinary derivative notation, i.e. f′(x) u = Df(x)[u].

D5 If f is a differentiable real valued function, i.e. f : X ⊂ U → Y ⊂ R, then Df(x) is a linear function that eats u ∈ U and spits out real numbers Df(x)[u] ∈ R and hence we can use the vector inner product, cf. Eqn. 6.21, to express

Df(x)[u] = ∇f(x) · u.   (6.129)

We refer to ∇f(x) as the gradient of f at x. Obviously, since u ∈ U we see that ∇f(x) ∈ U, i.e. the gradient ∇f(x) is an element of the vector space U, cf. Eqn. 6.120.

D6 Considering f : X ⊂ U → Y ⊂ V we note that the x + ϵu ∈ X requirement does not imply that u ∈ X; rather it only requires that lim_{ϵ→0}(x + ϵu) is in X. For example, for x = 1 ∈ X = (0, 2) ⊂ R and u = −1 ∉ X we have lim_{ϵ→0}(x + ϵu) ∈ X, so this condition poses no limitation.

D7 Upon similar consideration of f : X ⊂ U → Y ⊂ V, we note that although the value Df(x)[ϵu] approximates the difference f(x + ϵu) − f(x), and both f(x + ϵu), f(x) ∈ Y, we need not have Df(x)[ϵu] ∈ Y. For example, consider the function f : (−1, 1) ⊂ R → (0, π) ⊂ R with f(x) = arccos(x), which gives f′(0.5) = −1/√(1 − (0.5)²) = −1.1547 and f′(0.5)[0.01] = −0.011547 ∉ Y.

D8 The derivative is unique, and this follows from the component representations, because the components are unique.
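The following numerical sketch is not part of the original notes; it is a minimal illustration, assuming NumPy and a hypothetical smooth field f, of the component result (Df(x))ij = ∂f̂i/∂xj and of remark D2, i.e. that Df(x)[u] approximates f(x + u) − f(x) for small u.

```python
import numpy as np

# Hypothetical smooth field f : E -> E used only for illustration.
def f(x):
    return np.array([x[0]**2 * x[1], np.sin(x[2]), x[0] + x[1] * x[2]])

def jacobian_fd(f, x, h=1e-6):
    """Central-difference approximation of the components (Df(x))_ij = d f_i / d x_j."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = 1.0
        J[:, j] = (f(x + h * e) - f(x - h * e)) / (2.0 * h)
    return J

x = np.array([1.0, 2.0, 0.5])
u = np.array([0.01, -0.02, 0.03])      # a "small" increment
J = jacobian_fd(f, x)

print(J @ u)                           # Df(x)[u] = J u ...
print(f(x + u) - f(x))                 # ... approximates f(x + u) - f(x) up to o(|u|)
```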

Example 6.5. Here we repeat the results of Exam. 6.2 using Eqn. 6.128. First, it is worth taking a moment to note what the derivative of f : R → R at x is. Referring to the discussion surrounding Eqn. 6.124, the derivative of f at x eats elements in R and spits out elements in R, and hence Df(x) : R → R; since this is a linear operator it acts on the increment u via scalar multiplication, i.e. Df(x)[u] = Df(x) u, and upon referring to the ordinary derivative remark D4 above we finally have Df(x)[u] = f′(x) u. Now we proceed by applying Eqn. 6.128, viz.

f(x + u) − f(x) = (x + u)² − x² = 2xu + u² = Df(x)[u] + o(|u|) = f′(x) u + o(|u|),

where we again see that f′(x) = 2x and note that lim_{u→0} u²/|u| = 0, which justifies the statement that u² = o(|u|). Moreover, for example, f(2.1) − f(2) = 2.1² − 2² = 0.41 ≈ f′(2) × 0.1 = 0.4, which is the usual result.

Figure 6.11: Gradient illustration ∇φ(x) = 2x for the function φ : E → R such that φ(x) = x · x of Exams. 6.3 and 6.6 (plotted in the x1 – x2 plane).

Example 6.6. Here we repeat the results of Exam. 6.3 using Eqns. 6.124 and 6.129. On this occasion the derivative of φ : E → R at x eats elements in E and spits out elements in R and hence Dφ(x) : E → R, i.e. Dφ(x) is a linear operator that maps to the reals, cf. remark D5 above, and thus it can be described by the dot product: Dφ(x)[u] = ∇φ(x) · u, where the gradient is a vector, ∇φ(x) ∈ E. Now we proceed by applying Eqns. 6.124 and 6.129, viz.

∇φ(x) · u = lim_{ϵ→0} (1/ϵ)[φ(x + ϵu) − φ(x)]
          = lim_{ϵ→0} (1/ϵ)[(x + ϵu) · (x + ϵu) − x · x]
          = lim_{ϵ→0} (1/ϵ)[2ϵ x · u + ϵ² u · u]
          = 2x · u,

so that again we have ∇φ(x) = 2x, cf. Fig. 6.11. With x = e1 + 2e2 and u = 0.1e1 + 0.1e2 + 0.2e3 we have

φ(x + u) − φ(x) = (1.1e1 + 2.1e2 + 0.2e3) · (1.1e1 + 2.1e2 + 0.2e3) − (e1 + 2e2) · (e1 + 2e2) = 0.66
                ≈ ∇φ(x) · u = 2(e1 + 2e2) · (0.1e1 + 0.1e2 + 0.2e3) = 0.6.

Example 6.7. Here we determine the derivatives of the tensor invariants ıj : L → R of Eqn. 6.51, i.e.

ı1(A) = tr A,   ı2(A) = ½{(tr A)² − tr A²},   ı3(A) = det A,

cf. Eqn. 6.47. These derivatives eat elements in L and spit out elements in R and hence Dıj(A) : L → R. And as in Exam. 6.6, since the linear operator Dıj(A) maps to the reals we describe it via the dot product, i.e. Dıj(A)[U] = ∇ıj(A) · U, where the gradients are tensors, ∇ıj(A) ∈ L. Now we proceed by applying Eqns. 6.124 and 6.129. Using Eqns. 6.124 and 6.129 with ı1(A) = tr A = I · A gives

∇ı1(A) · U = lim_{ϵ→0} (1/ϵ)[ı1(A + ϵU) − ı1(A)]
           = lim_{ϵ→0} (1/ϵ)[I · (A + ϵU) − I · A]
           = lim_{ϵ→0} (1/ϵ)[I · ϵU]
           = I · U,

whence

∇ı1(A) = I,   (6.130)

which agrees with our Exam. 6.4 result. The same process with Eqns. 6.124 and 6.129 gives

∇ı2(A) · U = lim_{ϵ→0} (1/ϵ)[ı2(A + ϵU) − ı2(A)]
           = lim_{ϵ→0} (1/ϵ){½[tr(A + ϵU)]² − ½tr[(A + ϵU)²] − ½(tr A)² + ½tr A²}
           = lim_{ϵ→0} (1/ϵ){½[(A + ϵU) · I]² − ½(A + ϵU)ᵀ · (A + ϵU) − ½(A · I)² + ½Aᵀ · A}
           = lim_{ϵ→0} (1/ϵ){ϵ[(A · I)(I · U) − Aᵀ · U] + (ϵ²/2)[(U · I)² − Uᵀ · U]}
           = (A · I)(I · U) − Aᵀ · U
           = (tr A I − Aᵀ) · U,

whence

∇ı2(A) = tr A I − Aᵀ.   (6.131)

In regard to ı3(A) we use Eqn. 6.47 to obtain, for ı3(A) = det A,

[ı3(A + ϵU) − ı3(A)] [a, b, c]
 = ı3(A + ϵU)[a, b, c] − ı3(A)[a, b, c]
 = [(A + ϵU)a, (A + ϵU)b, (A + ϵU)c] − [Aa, Ab, Ac]
 = ϵ {[Ua, Ab, Ac] + [Aa, Ub, Ac] + [Aa, Ab, Uc]} + ϵ² {[Aa, Ub, Uc] + [Ua, Ab, Uc] + [Ua, Ub, Ac]} + ϵ³ [Ua, Ub, Uc]
 = ϵ {[UA⁻¹Aa, Ab, Ac] + [Aa, UA⁻¹Ab, Ac] + [Aa, Ab, UA⁻¹Ac]} + ϵ² {[Aa, UA⁻¹Ab, UA⁻¹Ac] + [UA⁻¹Aa, Ab, UA⁻¹Ac] + [UA⁻¹Aa, UA⁻¹Ab, Ac]} + ϵ³ [Ua, Ub, Uc]
 = ϵ ı1(UA⁻¹) [Aa, Ab, Ac] + ϵ² ı2(UA⁻¹) [Aa, Ab, Ac] + ϵ³ ı3(U) [a, b, c]
 = {ϵ ı1(UA⁻¹) ı3(A) + ϵ² ı2(UA⁻¹) ı3(A) + ϵ³ ı3(U)} [a, b, c],

where we use Eqn. 6.59. Using the arbitrariness of a, b and c we have

ı3(A + ϵU) − ı3(A) = ϵ ı1(UA⁻¹) ı3(A) + ϵ² ı2(UA⁻¹) ı3(A) + ϵ³ ı3(U),

so that Eqns. 6.124 and 6.129 give

∇ı3(A) · U = lim_{ϵ→0} (1/ϵ)[ı3(A + ϵU) − ı3(A)] = ı1(UA⁻¹) ı3(A) = ı3(A) A⁻ᵀ · U,

whence

∇ı3(A) = det A A⁻ᵀ = A*,   (6.132)

where we use Eqn. 6.71. Note that this result assumes A is invertible, i.e. ı3 : LInv ⊂ L → R \ 0 ⊂ R.
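The following numerical sketch is not part of the original notes; assuming NumPy, it compares a central-difference directional derivative of each invariant with the closed-form gradients of Eqns. 6.130 – 6.132.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
U = rng.standard_normal((3, 3))

i1 = lambda B: np.trace(B)
i2 = lambda B: 0.5 * (np.trace(B)**2 - np.trace(B @ B))
i3 = lambda B: np.linalg.det(B)

def dir_deriv(phi, A, U, h=1e-6):
    """Central-difference approximation of the directional derivative, i.e. grad(phi)(A) : U."""
    return (phi(A + h * U) - phi(A - h * U)) / (2.0 * h)

I = np.eye(3)
grads = {                                   # closed-form gradients, Eqns. 6.130-6.132
    "i1": I,
    "i2": np.trace(A) * I - A.T,
    "i3": np.linalg.det(A) * np.linalg.inv(A).T,
}
for name, phi in (("i1", i1), ("i2", i2), ("i3", i3)):
    print(name, dir_deriv(phi, A, U), np.tensordot(grads[name], U))
```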

6.4.1 Product rule

When deriving the governing equations for linear elasticity we need to differentiate functions that are products in nature. You have used the product rule to evaluate derivatives, e.g. for f(x) = g(x) h(x) you compute f′(x) = g′(x) h(x) + g(x) h′(x), where f, g and h are functions from the reals to the reals, i.e. f : R → R. And you have evaluated derivatives of more "complicated" functions, e.g. for the real valued function α : E → R you evaluated the gradient ∇α(x) of Eqn. 6.120 and for f : E → E the derivative Df(x) of Eqn. 6.121. But perhaps you have not evaluated the derivative of products of these "complicated" functions. For example, consider α(x) = g(x) · h(x) where g and h are vector valued functions of vectors. To differentiate α we cannot use the product rule in the naive way, i.e. we do not have ∇α(x) = Dg(x) · h(x) + g(x) · Dh(x) where Dg(x) and Dh(x) are tensors (matrices) similar to Df(x) in Eqn. 6.121; indeed the "inner product" in this derivative expression is nonsense since it is between a tensor (matrix) and a vector.

Our task here is to differentiate such functions, i.e. f(x) = g(x) ⋆ h(x), where the functions f : X ⊂ U → Y, g : X ⊂ U → W and h : X ⊂ U → Z have the same domain X and respective co-domains Y, W and Z, which are, e.g., subsets of R, E and/or L. For example, ⋆ could represent the scalar multiplication between a real valued function and either another real valued function, a vector valued function or a tensor valued function. It could also represent the inner product between a pair of vector valued functions (as just discussed) or tensor valued functions. The dyadic product ⊗ between a pair of vector or tensor valued functions is yet another example. In all cases, the operation ⋆ is bilinear, meaning that for d, g ∈ W, h, k ∈ Z and reals α, β ∈ R we have (α d + β g) ⋆ h = α (d ⋆ h) + β (g ⋆ h) and g ⋆ (α h + β k) = α (g ⋆ h) + β (g ⋆ k). In essence, if we fix h ∈ Z then ⋆ is a linear operator on W and vice versa. For the cases of the inner and dyadic products between a pair of vectors this is clearly the case, as seen through properties E1 and E2 and Eqn. 6.32.

Be that as it may, to differentiate f we use the product rule, which states that if Dg(x) : U → W and Dh(x) : U → Z exist, then Df(x) : U → Y exists and satisfies

Df(x)[u] = Dg(x)[u] ⋆ h(x) + g(x) ⋆ Dh(x)[u].   (6.133)

In the following examples we apply the product rule to obtain results that are required in our subsequent developments.

Example 6.8. Here we determine the derivative of the function φ : E → R where φ(x) = g(x) · h(x) with g : E → E and h : E → E smooth functions. The derivatives of the smooth functions g and h at x linearly map vectors u ∈ E to the vectors Dg(x)[u] ∈ E and Dh(x)[u] ∈ E, and hence Dg(x) ∈ L and Dh(x) ∈ L are 2-tensors, to wit we write Dg(x)[u] = Dg(x) u and Dh(x)[u] = Dh(x) u. On the other hand, Dφ(x) linearly maps vectors u ∈ E to reals Dφ(x)[u] and hence Dφ(x)[u] = ∇φ(x) · u where ∇φ(x) ∈ E is the gradient vector. Forging on, we apply Eqns. 6.129 and 6.133 to obtain

Dφ(x)[u] = ∇φ(x) · u = Dg(x)[u] · h(x) + g(x) · Dh(x)[u]
         = u · ((Dg(x))ᵀ h(x) + (Dh(x))ᵀ g(x))
         = ((Dg(x))ᵀ h(x) + (Dh(x))ᵀ g(x)) · u.

As seen above, ∇φ(x) = (Dg(x))ᵀ h(x) + (Dh(x))ᵀ g(x) ∈ E, cf. Eqn. 6.40.

Example 6.9. Here we determine the derivative of the function a : E → E where a(x) = α(x) h(x) with α : E → R and h : E → E smooth functions. The derivative of α linearly maps vectors u ∈ E to reals Dα(x)[u] and hence we have Dα(x)[u] = ∇α(x) · u where ∇α(x) ∈ E is the gradient vector. On the other hand, the derivatives of a and h linearly map vectors u ∈ E to vectors Da(x)[u] ∈ E and Dh(x)[u] ∈ E and hence we write Da(x)[u] = Da(x) u and Dh(x)[u] = Dh(x) u, as Da(x) ∈ L and Dh(x) ∈ L are 2-tensors. We apply Eqns. 6.129 and 6.133 to obtain

Da(x)[u] = Dα(x)[u] h(x) + α(x) Dh(x)[u],   i.e.
Da(x) u = (∇α(x) · u) h(x) + α(x) Dh(x) u = (h(x) ⊗ ∇α(x) + α(x) Dh(x)) u,

cf. Eqn. 6.32. Here we have Da(x) = h(x) ⊗ ∇α(x) + α(x) Dh(x), which we recognize as a tensor.

Example 6.10. Determine the derivative of the function F : LInv ⊂ L → LInv ⊂ L where F(A) = A⁻¹, cf. Eqn. 6.100. Of course the domain of F is restricted to the subspace of invertible tensors. This is in agreement with your prior knowledge, e.g. for f : R → R with f(x) = 1/x we have f′(x) = −1/x², which is defined at all x ∈ R except x = 0; indeed at x = 0 f is not defined, i.e. it is the element in R with no inverse. Based on the above verbiage, DF(A) linearly maps 2-tensors U ∈ L to 2-tensors DF(A)[U] ∈ L and hence DF(A) ∈ L4 is a 4-tensor; note, however, that the domain of DF(A) is L, i.e. the set of 2-tensors. To find DF(A) we define G : LInv ⊂ L → LInv ⊂ L such that G(A) = F(A) A = I. Since G(A) = I, a constant, we trivially have DG(A) = O. We now use Eqn. 6.133 to obtain

DG(A)[U] = O[U] = O = DF(A)[U] A + F(A) I[U] = DF(A)[U] A + F(A) U,

where we used the fact that for H : L → L such that H(A) = A we trivially have DH(A) = I ∈ L4 and DH(A)[U] = U. Rearranging the above and using Eqn. 6.107 gives

DF(A)[U] = −F(A) U A⁻¹ = −A⁻¹ U A⁻¹ = −(A⁻¹ ! A⁻ᵀ) U,   or   DF(A) = −(A⁻¹ ! A⁻ᵀ),   (6.134)

which is a fourth-order tensor, i.e. an element of L4.

Example 6.11. In this example we determine the derivative of the adjugate A* of A. Noting from Eqn. 6.71 that A* = det A A⁻ᵀ, we define the function H : LInv ⊂ L → LInv ⊂ L such that H(A) = A*. To utilize the results of Exams. 6.7 and 6.10 we define G : LInv ⊂ L → LInv ⊂ L such that G(A) = ı3(A) F(A), so that

H(A) = A* = det A A⁻ᵀ = T[det A A⁻¹] = T[ı3(A) F(A)] = T[G(A)].

From Exams. 6.7 and 6.10 we have Dı3(A)[U] = ∇ı3(A) · U = det(A) A⁻ᵀ · U and DF(A)[U] = −(A⁻¹ ! A⁻ᵀ)[U], and hence Eqns. 6.129, 6.132 and 6.133 give

DG(A)[U] = Dı3(A)[U] F(A) + ı3(A) DF(A)[U]
         = (∇ı3(A) · U) F(A) − ı3(A) (A⁻¹ ! A⁻ᵀ)[U]
         = [F(A) ⊗ ∇ı3(A) − ı3(A) (A⁻¹ ! A⁻ᵀ)][U]
         = det A (A⁻¹ ⊗ A⁻ᵀ − A⁻¹ ! A⁻ᵀ)[U].

The above gives the fourth-order tensor DG(A) = det A (A⁻¹ ⊗ A⁻ᵀ − A⁻¹ ! A⁻ᵀ) and hence, recalling the composition result A B[C] = A[B[C]], we have

DH(A) = det A T (A⁻¹ ⊗ A⁻ᵀ − A⁻¹ ! A⁻ᵀ),   (6.135)

where we used Eqns. 6.71, 6.91, 6.97 and 6.100, amongst others.
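The following numerical sketch is not part of the original notes; assuming NumPy, it checks the Exam. 6.10 and 6.11 results by comparing central finite differences of A⁻¹ and A* with the closed-form derivatives applied to an increment U.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))    # safely invertible
U = rng.standard_normal((3, 3))
h = 1e-6

inv = np.linalg.inv
adj = lambda B: np.linalg.det(B) * inv(B).T           # adjugate B* = det(B) B^{-T}

# DF(A)[U] = -A^{-1} U A^{-1}, Eqn. 6.134
fd_inv = (inv(A + h * U) - inv(A - h * U)) / (2.0 * h)
ex_inv = -inv(A) @ U @ inv(A)

# DH(A)[U] = det(A) [ tr(A^{-1} U) A^{-T} - A^{-T} U^T A^{-T} ], the transpose of DG(A)[U] in Exam. 6.11
fd_adj = (adj(A + h * U) - adj(A - h * U)) / (2.0 * h)
ex_adj = np.linalg.det(A) * (np.trace(inv(A) @ U) * inv(A).T - inv(A).T @ U.T @ inv(A).T)

print(np.max(np.abs(fd_inv - ex_inv)), np.max(np.abs(fd_adj - ex_adj)))   # both ~ 0
```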

Example 6.12. We continue with the previous Exam. 6.11 and now differentiate the function |A*a| with respect to A; here a is an arbitrary constant vector. Again noting from Eqn. 6.71 that A* = det A A⁻ᵀ, we define the function h : LInv ⊂ L → E such that h(A) = A*a = H(A) a, where we utilize the results of Exam. 6.11 so that

Dh(A)[U] = DH(A)[U] a = det A T(A⁻¹ ⊗ A⁻ᵀ − A⁻¹ ! A⁻ᵀ)[U] a.   (6.136)

Next we define the scalar valued function α : LInv ⊂ L → R such that α(A) = (h(A) · h(A))^(1/2) = |A*a| and use the elementary rules of differentiation to obtain

∇α(A) · U = ½ (h(A) · h(A))^(−1/2) 2 h(A) · Dh(A)[U]
          = (1/α(A)) h(A) · DH(A)[U] a
          = (1/α(A)) (h(A) ⊗ a) · DH(A)[U]
          = (1/α(A)) DᵀH(A)[h(A) ⊗ a] · U
          = (1/α(A)) det A [T (A⁻¹ ⊗ A⁻ᵀ − A⁻¹ ! A⁻ᵀ)]ᵀ [(A*a) ⊗ a] · U
          = (1/α(A)) det A (A⁻ᵀ ⊗ A⁻¹ − A⁻ᵀ ! A⁻¹) T[(A*a) ⊗ a] · U
          = (1/α(A)) det A (A⁻ᵀ ⊗ A⁻¹ − A⁻ᵀ ! A⁻¹) [a ⊗ (A*a)] · U
          = (1/α(A)) det A {[A⁻¹ · (a ⊗ (A*a))] A⁻ᵀ − A⁻ᵀ (a ⊗ (A*a)) A⁻ᵀ} · U
          = (1/α(A)) det A {[(A⁻¹A*a) · a] I − (A⁻ᵀa) ⊗ (A*a)} A⁻ᵀ · U
          = (1/α(A)) det A {[(A*a) · (A⁻ᵀa)] I − (A⁻ᵀa) ⊗ (A*a)} A⁻ᵀ · U
          = (1/α(A)) {[(A*a) · (A*a)] I − (A*a) ⊗ (A*a)} A⁻ᵀ · U,

where we used Eqns. 6.40, 6.41, 6.55, 6.57, 6.59, 6.64, 6.71, 6.86, 6.91, 6.97, 6.100, 6.104 and 6.108, amongst others. And hence we have

∇α(A) = (1/α(A)) {[(A*a) · (A*a)] I − (A*a) ⊗ (A*a)} A⁻ᵀ.   (6.137)
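The following numerical sketch is not part of the original notes; assuming NumPy, it checks Eqn. 6.137 by comparing a central-difference directional derivative of α(A) = |A*a| with ∇α(A) · U.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.eye(3) + 0.2 * rng.standard_normal((3, 3))
U = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
h = 1e-6

def adj(B):                                   # adjugate B* = det(B) B^{-T}
    return np.linalg.det(B) * np.linalg.inv(B).T

alpha = lambda B: np.linalg.norm(adj(B) @ a)  # alpha(A) = |A* a|

b = adj(A) @ a
grad = ((b @ b) * np.eye(3) - np.outer(b, b)) @ np.linalg.inv(A).T / alpha(A)   # Eqn. 6.137

fd = (alpha(A + h * U) - alpha(A - h * U)) / (2.0 * h)
print(fd, np.tensordot(grad, U))              # the two values agree
```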

6.4.2 Chain rule

When deriving the governing equations for linear elasticity we also need to differentiate composite functions. You have used the chain rule to evaluate derivatives, e.g. for h(x) = g∘f(x) = g(f(x)) you compute h′(x) = g′(f(x)) f′(x), where f, g and h are functions from the reals to the reals, i.e. f : R → R. However, as seen with the product rule, care must be taken when applying the chain rule to more "complicated" functions. For example, for β(x) = α(g(x)), where α and β are scalar valued functions of vectors, e.g. β : E → R, and g is a vector valued function of vectors, e.g. g : E → E, we cannot merely say that ∇β(x) = ∇α(g(x)) Dg(x); indeed the vector times the tensor operation is nonsense.

In our chain rule presentation we consider the two functions f : X ⊂ U → Y ⊂ V and g : T ⊂ Y → W ⊂ Z, where T is an open subset of Y and U, V and Z are vector spaces. Using these functions we define the composite function h : X ⊂ U → W ⊂ Z such that h(x) = g∘f(x) = g(f(x)) for all x ∈ X. Now to the crux of the matter: if Df(x) : U → Y and Dg(y) : Y → Z exist, where y = f(x), then Dh(x) : U → Z exists and

Dh(x)[u] = Dg(f(x))[Df(x)[u]],   (6.138)

i.e.

Dh(x) = Dg(f(x))∘Df(x).   (6.139)

As seen in Fig. 6.12, we have the equality Dh(x)[u] = Dg(y)[v], where the derivative of g at y = f(x) acts on the increment v = Df(x)[u] that y = f(x) leads to.

Figure 6.12: Composite function derivative illustration: u ∈ U maps to v = Df(x)[u], y = f(x) ∈ Y maps to h(x) = g(y) ∈ W, and Dh(x)[u] = Dg(y)[v].

Example 6.13. We assume the functions F : L → L and G : L → L are differentiable and thus they have derivatives that linearly map 2-tensors U ∈ L into 2-tensors DF(A)[U] ∈ L and DG(A)[U] ∈ L, making DF(A) ∈ L4 and DG(A) ∈ L4 4-tensors. Using these functions we define the composite function H = F∘G : L → L, which is differentiable since F and G are differentiable. The derivative DH(A) ∈ L4 is also a 4-tensor as it also linearly maps 2-tensors U ∈ L into 2-tensors DH(A)[U] ∈ L. We evaluate this 4-tensor by appealing to Eqn. 6.138, i.e. DH(A)[U] = DF(G(A))[DG(A)[U]]. Next we use the composition (multiplication) relation A B[C] = A[B[C]] and the arbitrariness of U, whence DH(A) = DF(G(A)) DG(A).

Example 6.14. Now we assume the real valued function α : L → R is differentiable and thus it has a gradient that linearly maps 2-tensors U ∈ L to scalars ∇α(A) · U ∈ R, making ∇α(A) ∈ L a 2-tensor, and that the tensor valued function G : L → L is differentiable with 4-tensor derivative DG(A) ∈ L4. From this and the above we define the real valued composite function β = α∘G : L → R, which is differentiable since α and G are differentiable. The gradient ∇β(A) ∈ L is a 2-tensor that linearly maps 2-tensors U ∈ L into reals ∇β(A) · U ∈ R. To evaluate this gradient we refer to Eqns. 6.129 and 6.138, which give

Dβ(A)[U] = Dα(G(A))∘DG(A)[U] = Dα(G(A))[DG(A)[U]],   i.e.
∇β(A) · U = ∇α(G(A)) · DG(A)[U] = (DG(A))ᵀ ∇α(G(A)) · U.

The arbitrariness of U implies ∇β(A) = (DG(A))ᵀ ∇α(G(A)).
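The following numerical sketch is not part of the original notes; assuming NumPy and the hypothetical choices G(A) = A² and α(B) = B · B (neither appears in the text), it illustrates the Exam. 6.14 result ∇β(A) = (DG(A))ᵀ ∇α(G(A)) against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
U = rng.standard_normal((3, 3))
h = 1e-6

G = lambda B: B @ B                    # hypothetical inner function G(A) = A^2
alpha = lambda B: np.tensordot(B, B)   # alpha(B) = B : B, so grad(alpha)(B) = 2 B
beta = lambda B: alpha(G(B))

# DG(A)[U] = A U + U A, so (DG(A))^T[V] = A^T V + V A^T; with V = grad(alpha)(G(A)) = 2 A^2:
grad_beta = A.T @ (2 * G(A)) + (2 * G(A)) @ A.T

fd = (beta(A + h * U) - beta(A - h * U)) / (2.0 * h)
print(fd, np.tensordot(grad_beta, U))  # the two values agree
```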

We now refer to Eqn. 6.124, in which f : X ⊂ U → Y ⊂ V, and define the composite function g = f∘h : R → Y ⊂ V where h : R → X ⊂ U such that h(ϵ) = x + ϵu, so that g(ϵ) = f∘h(ϵ) = f(h(ϵ)) = f(x + ϵu). In this way the chain rule gives

Dg(ϵ)[υ] = g′(ϵ) υ = Df(h(ϵ))∘Dh(ϵ)[υ] = Df(h(ϵ))[h′(ϵ) υ] = Df(h(ϵ))[u] υ,

where we used the ordinary derivative notation of remark D4, i.e. Dh(ϵ) = h′(ϵ), the trivial result h′(ϵ) = u, and the fact that the increment υ is a scalar. The arbitrariness of υ gives us g′(ϵ) = Df(h(ϵ))[u] and thus, assuming that Df(x) exists, g′(0) = δf(x, u), i.e. the directional derivative is obtained via the chain rule, which is identical to Eqn. 6.124 for ϵ = 0.

In regard to Eqns. 5.87 – 5.90 we mention the implicit function theorem. In part, it asserts that if f : X ⊂ U → Y ⊂ U has an invertible derivative Df(x) ∈ L(U, U) at x (which linearly maps elements u ∈ U into elements of the same space), then f has a smooth inverse function f↓ : Y ⊂ U → X ⊂ U at x. Moreover, we can define the composite functions f↓∘f : X ⊂ U → X ⊂ U and f∘f↓ : Y ⊂ U → Y ⊂ U such that f↓∘f(x) = x and f∘f↓(y) = y. Differentiating these composite maps via the chain rule reveals

I = D f↓∘f(x) = Df↓(y)∘Df(x)|y=f(x) = Df↓(y) Df(x)|y=f(x),

where I is the identity operator on U. Rearranging the above gives Df↓(y)|y=f(x) = [Df(x)]⁻¹, i.e. the derivative of the inverse function Df↓(y) equals the inverse of the derivative [Df(x)]⁻¹ at the corresponding points y = f(x).

6.4.3 Higher order derivatives

With the risk of beating a dead horse, we state again that the derivative of the function f : X ⊂ U → Y ⊂ V at x, if it exists, is the linear operator Df(x) : U → V that eats elements u ∈ U and spits out elements Df(x)[u] ∈ V, making Df(x) ∈ L(U, V), i.e. a linear transformation from U to V. Now if Df(x) exists for all x ∈ X then we can view Df as a function mapping elements of X ⊂ U into elements of L(U, V), i.e. Df : X ⊂ U → L(U, V); here Df replaces f and L(U, V) replaces V. (Note that this Df : X ⊂ U → L(U, V) is not generally linear, unlike Df(x) : U → V, which by definition is always linear.) In this case we can think about the derivative D²f(x) of the function Df evaluated at x. Such derivatives arise when we discuss the elasticity tensor, i.e. the second derivative of the strain energy function.

This second derivative of f is defined analogously to the first derivative in Eqn. 6.124, but now Df replaces f: the second derivative of f at x, if it exists, is defined as

D²f(x)[u] = lim_{ϵ→0} (1/ϵ)[Df(x + ϵu) − Df(x)] = d/dϵ Df(x + ϵu)|_{ϵ=0} = δDf(x, u)   (6.140)

for all u ∈ U, where we introduce the notation D²f(x) in place of DDf(x). Following the development of Eqn. 6.128 we see that

Df(x + u) − Df(x) = D²f(x)[u] + o(|u|)   (6.141)

for all u ∈ U. Being a derivative, D²f(x) is a linear operator that eats elements u ∈ U and spits out elements D²f(x)[u] ∈ L(U, V), which are themselves linear operators, i.e. D²f(x) : U → L(U, V), and hence for v ∈ U we have (D²f(x)[u])[v] ∈ V, which we express more compactly as

D²f(x)[u, v] = (D²f(x)[u])[v] = (DDf(x)[u])[v] = D(Df(x)[u])[v],   (6.143)

where the second line follows from the notation D²f(x) = DDf(x) and the third by viewing u as a constant, e.g. for the smooth function f : R → R we have d²f/dx²(x) u = d/dx(df/dx(x) u). Based on this discussion we can write either D²f(x) : U → L(U, V) or D²f(x) : U × U → V. In the latter interpretation it can be shown that the map D²f(x) : U × U → V is both bilinear and symmetric in the sense that

D²f(x)[α u + β v, w] = α D²f(x)[u, w] + β D²f(x)[v, w],
D²f(x)[w, α u + β v] = α D²f(x)[w, u] + β D²f(x)[w, v],
D²f(x)[u, v] = D²f(x)[v, u],   (6.144)

where w ∈ U and α, β ∈ R. The first bilinearity equality follows from the linearity of the derivative, cf. Eqn. 6.141; of course the second follows from the other two. To obtain the symmetry equality we use Eqn. 6.141 repeatedly:

D²f(x)[u, v] = D(Df(x)[u])[v]
             = Df(x + v)[u] − Df(x)[u] + o(|v|)
             = {f(x + v + u) − f(x + v) + o(|u|)} − {f(x + u) − f(x) + o(|u|)} + o(|v|)
             = {f(x + u + v) − f(x + u) + o(|v|)} − {f(x + v) − f(x) + o(|v|)} + o(|u|)
             = Df(x + u)[v] − Df(x)[v] + o(|u|)
             = D(Df(x)[v])[u]
             = D²f(x)[v, u].   (6.145)

Continuing in this manner we can define still higher order derivatives, and this progression leads to the function classification discussion of Sect. 6.2.

Example 6.15. Refer to Exam. 6.7 and determine the second derivatives of the tensor invariants. In all cases the second derivatives D²ıj(A) are fourth-order tensors, i.e. D²ıj(A) ∈ L4, which follows from the fact that for the scalar valued ıj : L → R we have ∇ıj : L → L and hence D²ıj(A) linearly maps 2-tensors into 2-tensors. Since ∇ı1(A) = I, a constant, we trivially have D²ı1(A) = O. Since ∇ı2(A) = ı1(A) I − Aᵀ, the product rule and the fact that the derivative of a sum is the sum of the derivatives give

D²ı2(A)[U] = Dı1(A)[U] I − T[U] = (∇ı1(A) · U) I − Uᵀ = (I · U) I − Uᵀ = (I ⊗ I − T)[U],

where we utilized Eqns. 6.59 and 6.91 and the trivial result that for F : L → L such that F(A) = Aᵀ = T[A] we have DF(A) = T. Utilizing the arbitrariness of U gives

D²ı2(A) = I ⊗ I − T.   (6.147)

Equations 6.132 and 6.135 render the last result

D²ı3(A) = det A T (A⁻¹ ⊗ A⁻ᵀ − A⁻¹ ! A⁻ᵀ).   (6.148)
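The following numerical sketch is not part of the original notes; assuming NumPy, it checks Eqns. 6.147 and 6.148 by comparing the second central difference of ı2(A + hU) and ı3(A + hU) with the closed-form values D²ıj(A)[U, U].

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.eye(3) + 0.3 * rng.standard_normal((3, 3))
U = rng.standard_normal((3, 3))
h = 1e-4

i2 = lambda B: 0.5 * (np.trace(B)**2 - np.trace(B @ B))
i3 = lambda B: np.linalg.det(B)

def second_dir(phi, A, U, h):
    """Second directional derivative d^2/dh^2 phi(A + h U) at h = 0, i.e. D^2 phi(A)[U, U]."""
    return (phi(A + h * U) - 2.0 * phi(A) + phi(A - h * U)) / h**2

# D^2 i2(A)[U, U] = (I (x) I - T)[U] : U = (tr U)^2 - tr(U U),  Eqn. 6.147
d2_i2 = np.trace(U)**2 - np.trace(U @ U)
# D^2 i3(A)[U, U] = det A [ (tr(A^-1 U))^2 - tr((A^-1 U)^2) ],  from Eqn. 6.148
B = np.linalg.inv(A) @ U
d2_i3 = np.linalg.det(A) * (np.trace(B)**2 - np.trace(B @ B))

print(second_dir(i2, A, U, h), d2_i2)
print(second_dir(i3, A, U, h), d2_i3)
```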

6.4.4 Partial derivatives

The component results of Eqns. 6.120 – 6.123 utilize partial derivatives. And for functions of two variables f : R × R → R you know that the partial derivative ∂f/∂x1(x1, x2) is just the "usual" derivative obtained by viewing f as a function of only the first variable x1. And this interpretation is absolutely correct. Herein we define the partial derivative of the function f : X ⊂ U → Y ⊂ V, where X = X1 × X2 × ··· × Xn is the n-fold set product of the open subsets Xi ⊂ Ui, i = 1, 2, ···, n. Thus we have f(x) = y ∈ Y where x = (x1, x2, ···, xn) ∈ X with xi ∈ Xi ⊂ Ui. For example, the function f : E × R → E eats the vector – scalar pair (a, α) ∈ E × R and spits out the vector f(a, α) ∈ Y = E; in this case X = E × R, x1 = a ∈ X1 = E and x2 = α ∈ X2 = R.

The i-th partial derivative of f at x, if it exists, is the linear operator Di f(x) : Ui → V that eats elements ui ∈ Ui and spits out elements Di f(x)[ui] ∈ V such that

Di f(x)[ui] = lim_{ϵ→0} (1/ϵ)[f(x1, ···, xi−1, xi + ϵ ui, xi+1, ···, xn) − f(x1, ···, xi, ···, xn)]
            = d/dϵ f(x1, ···, xi−1, xi + ϵ ui, xi+1, ···, xn)|_{ϵ=0},   or
f(x1, ···, xi + ui, ···, xn) − f(x1, ···, xn) = Di f(x)[ui] + o(|ui|)   (6.149)

for all ui ∈ Ui. Note the similarity of the above with Eqns. 6.124 and 6.128. We emphasize that these partial derivatives are not necessarily scalars like those appearing in Eqns. 6.120 – 6.123. Not surprisingly, for our f : E × R → E example we have D1 f(a, α), which linearly maps vectors u ∈ E into vectors D1 f(a, α)[u] ∈ E, making D1 f(a, α) ∈ L a 2-tensor, and D2 f(a, α), which linearly maps scalars u ∈ R into vectors D2 f(a, α)[u] ∈ E, making D2 f(a, α) ∈ E a vector. Of course the components of these partial derivatives can be expressed using expressions such as those appearing in Eqns. 6.120 – 6.123. Indeed, if Df(x) exists then

Di f(x)[ui] = Df(x)[(0, 0, ···, ui, ···, 0)]   (ui in the i-th place)   (6.150)

and conversely, if all of the partial derivatives exist at x, then

Df(x)[u] = Σ_{i=1}^{n} Di f(x)[ui],   (6.151)

where u = (u1, u2, ···, un) ∈ U = U1 × U2 × ··· × Un.

Following the second derivative discussion, if Di f(x) exists for all x ∈ X then we can define the second partial derivative Di j f(x) : Uj → L(Ui, V) such that

Di j f(x)[uj] = lim_{ϵ→0} (1/ϵ)[Di f(x1, ···, xj + ϵ uj, ···, xn) − Di f(x1, ···, xj, ···, xn)]
             = d/dϵ Di f(x1, ···, xj + ϵ uj, ···, xn)|_{ϵ=0},   or
Di f(x1, ···, xj + uj, ···, xn) − Di f(x1, ···, xn) = Di j f(x)[uj] + o(|uj|),   (6.152)

which, if it exists, is the linear map from Uj to L(Ui, V). Not surprisingly, we can also treat the second partial derivative as the bilinear map Di j f(x) : Ui × Uj → V that satisfies

Di j f(x)[u, w] = Dj(Di f(x)[u])[w],
Di j f(x)[α u + β v, w] = α Di j f(x)[u, w] + β Di j f(x)[v, w],
Di j f(x)[u, α w + β z] = α Di j f(x)[u, w] + β Di j f(x)[u, z],
Di j f(x)[u, w] = Dj i f(x)[w, u]   (6.153)

for every u, v ∈ Ui, w, z ∈ Uj and α, β ∈ R. The first equality defines the bilinear map, the second two are linearity results and the last is the symmetry, which follows from an argument similar to that of Eqn. 6.145.

6.4.5 Differentiation summary

As seen from the above examples, it is not always easier to evaluate derivatives via the component route. Rather, to evaluate the derivative of the function f : X ⊂ U → Y ⊂ V at x ∈ X, i.e. the linear operator Df(x) : U → V from U to V, it is recommended to follow the steps in Fig. 6.13:

• Use the definition of the function f : X ⊂ U → Y ⊂ V.
• Note the domain U and codomain V of the linear mapping Df(x) : U → V.
• Use your knowledge of U and V to note what Df(x) ∈ L(U, V) is, i.e. a vector, 2-tensor, etc., i.e. how it acts on arbitrary vectors etc.
• Use the definition of the derivative, product rule and chain-rule to obtain an expression for Df(x)[u].
• Use scalar, dyadic, conjugate products etc. to factor u from your Df(x)[u] expression, and
• Use the arbitrariness of u to define Df(x).

Figure 6.13: Procedure to compute the derivative Df(x) : U → V of the function f : X ⊂ U → Y ⊂ V at x ∈ X.

6.5 The Euclidean Point Space

In our analyses we are interested in quantifying the motion of a body B subjected to loads. The body itself is comprised of particles or points y in a Euclidean point space E. Much ado can and probably should be made about the Euclidean point space; however, we can bypass this discussion and cut to the core of the matter. We define a single rectangular Cartesian coordinate system that consists of the fixed basis {e1, e2, e3} and the fixed point o that we call the origin. In this way, we identify all points y in our body via their position vectors xy ∈ E relative to o and we associate the body B itself with the collection of points, i.e. y ∈ E and B ⊂ E. The collection of points, i.e. position vectors, that comprise the region in which the body resides defines Ω ⊂ E. We assume that Ω is bounded, open and connected with a piecewise smooth boundary. The collection of points, i.e. position vectors, that form the surface defines the set ∂Ω, which has a unit outward normal vector n, and the union of all such points in Ω and ∂Ω defines the closure Ω̄ of Ω. Functions defined on the body, i.e. on Ω ⊂ E, are referred to as fields. Examples of such functions are the vector displacement field u : Ω ⊂ E → E and the tensor stress field T : Ω ⊂ E → L.

6.6 The Localization Theorem

We now formalize the localization argument that we discussed in Eqns. 4.6 – 4.8. If f : R → R is continuous, then the mean value theorem gives ∫_{x−ϵ}^{x+ϵ} f(y) dy = 2ϵ f(x̄), where x̄ ∈ [x − ϵ, x + ϵ]. Rearranging this result as f(x̄) = 1/(2ϵ) ∫_{x−ϵ}^{x+ϵ} f(y) dy and taking the limit as ϵ → 0 gives f(x) = lim_{ϵ→0} 1/(2ϵ) ∫_{x−ϵ}^{x+ϵ} f(y) dy.

To see this, note that

|f(x) − 1/(2ϵ) ∫_{x−ϵ}^{x+ϵ} f(y) dy| = 1/|2ϵ| |∫_{x−ϵ}^{x+ϵ} [f(x) − f(y)] dy|
 ≤ lim_{ϵ→0} 1/|2ϵ| ∫_{x−ϵ}^{x+ϵ} |f(x) − f(y)| dy
 ≤ lim_{ϵ→0} 1/|2ϵ| ∫_{x−ϵ}^{x+ϵ} max_{y∈[x−ϵ,x+ϵ]} |f(x) − f(y)| dy
 = lim_{ϵ→0} max_{y∈[x−ϵ,x+ϵ]} |f(x) − f(y)| = 0,   (6.154)

where the continuity of f is invoked to justify the final limit. This one-dimensional result generalizes to smooth scalar, vector and tensor fields, giving the localization theorem, e.g. for f : E → E we have

f(x) = lim_{ϵ→0} 1/Vol(Nϵ(x)) ∫_{Nϵ(x)} f(y) dvy,   (6.155)

where Nϵ(x) is the sphere with radius ϵ and center x, and Vol(Nϵ(x)) = ∫_{Nϵ(x)} dv is the volume of the subregion Nϵ(x) ⊂ Ω. This theorem is used repeatedly in the sequel. Indeed, if we have ∫_{Ω′} f(y) dv = 0 for all subregions Ω′ ⊂ Ω, then we deduce f(x) = 0 for all x ∈ Ω by selecting Ω′ = Nϵ(x) and taking the limit as ϵ → 0.

6.7 The Divergence and Divergence Theorem

The conservation laws we will discuss are most naturally stated in terms of the entire body, e.g. the sum of the forces acting on a body equals the time rate of change of the linear momentum of that body. While this proves useful, equally useful are the local conservation equations, which must be satisfied for each location in the body. To arrive at these local statements, we make extensive use of the localization (cf. Sect. 6.6) and divergence theorems.

For a smooth vector field f : Ω ⊂ E → E, we define the divergence as

divf(x) = tr Df(x) = (Df(x))ii = ∂f̂i/∂xi (x1, x2, x3) = ∂f̂1/∂x1 (x1, x2, x3) + ∂f̂2/∂x2 (x1, x2, x3) + ∂f̂3/∂x3 (x1, x2, x3),   (6.156)

where we refer to Eqns. 6.50 and 6.121. As seen above, divf is a scalar field, divf : Ω ⊂ E → R. The divergence of a smooth tensor field S : Ω ⊂ E → L is defined through the divergence of a vector field, i.e. it is the vector field divS : Ω ⊂ E → E that satisfies

a · divS(x) = div(Sᵀ(x) a)   for every uniform vector a.   (6.157)

To obtain the component representation of the above we express x = xi ei and a = ak ek, and introduce the component functions Ŝij : R³ → R to write S(x) = Ŝij(x1, x2, x3) ei ⊗ ej. Then Eqn. 6.41 gives

Sᵀ(x) a = (Sᵀ(x))ij aj ei = (S(x))ji aj ei = Ŝji(x1, x2, x3) aj ei,   (6.158)

from which we identify (Sᵀ(x) a)i = Ŝki(x1, x2, x3) ak (where we replaced the dummy index j with k). Equation 6.121 provides

D(Sᵀ(x) a) = ∂Ŝki/∂xj (x1, x2, x3) ak ei ⊗ ej,   (6.159)

where we use the fact that a is uniform. Taking the trace of the above provides the divergence of Sᵀ(x) a, cf. Eqn. 6.50:

div(Sᵀ(x) a) = tr D(Sᵀ(x) a)
             = tr[∂Ŝki/∂xj (x1, x2, x3) ak ei ⊗ ej]
             = ∂Ŝki/∂xj (x1, x2, x3) ak (ei · ej)
             = ∂Ŝki/∂xj (x1, x2, x3) ak δij
             = ∂Ŝki/∂xi (x1, x2, x3) ak
             = ∂Ŝki/∂xi (x1, x2, x3) (a · ek)
             = [∂Ŝki/∂xi (x1, x2, x3) ek] · a,   (6.160)

which follows from Eqns. 6.10, 6.20, 6.37 and 6.52. This fact combined with Eqn. 6.157 and the arbitrariness of a finally gives

divS = ∂Ŝij/∂xj ei
     = { ∂Ŝ1j/∂xj, ∂Ŝ2j/∂xj, ∂Ŝ3j/∂xj }
     = { ∂Ŝ11/∂x1 + ∂Ŝ12/∂x2 + ∂Ŝ13/∂x3,
         ∂Ŝ21/∂x1 + ∂Ŝ22/∂x2 + ∂Ŝ23/∂x3,
         ∂Ŝ31/∂x1 + ∂Ŝ32/∂x2 + ∂Ŝ33/∂x3 },   (6.161)
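The following numerical sketch is not part of the original notes; assuming NumPy and a hypothetical tensor field S, it evaluates the component formula Eqn. 6.161 by central differences and confirms the defining relation a · divS(x) = div(Sᵀ(x) a) of Eqn. 6.157.

```python
import numpy as np

# Hypothetical smooth tensor field S : E -> L used only for illustration.
def S(x):
    return np.array([[x[0] * x[1], x[1]**2,     x[2]],
                     [np.sin(x[0]), x[0] * x[2], x[1]],
                     [x[2]**2,      x[0],        x[0] * x[1] * x[2]]])

def div_tensor_fd(S, x, h=1e-6):
    """(div S)_i = dS_ij/dx_j via central differences, Eqn. 6.161."""
    d = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = 1.0
        d += (S(x + h * e)[:, j] - S(x - h * e)[:, j]) / (2.0 * h)
    return d

def div_vector_fd(f, x, h=1e-6):
    """div f = df_j/dx_j via central differences, Eqn. 6.156."""
    return sum((f(x + h * np.eye(3)[j]) - f(x - h * np.eye(3)[j]))[j] / (2.0 * h)
               for j in range(3))

x = np.array([0.3, -0.7, 1.2])
a = np.array([1.0, 2.0, -1.0])                    # an arbitrary uniform vector

print(a @ div_tensor_fd(S, x))                    # a . div S(x)
print(div_vector_fd(lambda y: S(y).T @ a, x))     # div(S^T(x) a), Eqn. 6.157
```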

where the arguments have been dropped for conciseness. It can also be verified that

div(α f) = α divf + ∇α · f,   (6.162)
div(Sᵀ f) = f · divS + S · Df,   (6.163)

where α : Ω ⊂ E → R is a smooth scalar valued function and the argument x ∈ Ω ⊂ E has been suppressed for conciseness. For example, to verify the second equality we use the above definitions, the product rule (cf. Eqn. 6.133) and the tensor inner product (cf. Eqn. 6.55):

div(Sᵀ f) = div(Sᵀ f̄) + div(S̄ᵀ f) = f · divS + tr D(S̄ᵀ f) = f · divS + tr(S̄ᵀ Df) = f · divS + S · Df,

where the over line ( ¯ ) means the function is treated as uniform for the differentiation.

For g : R → R a smooth function you are comfortable with the result that ∫_{x1}^{x2} g′(x) dx = g(x2) − g(x1), i.e. we transform an integral over the domain (x1, x2) to its boundary {x1, x2}. This divergence theorem result generalizes to the smooth vector and tensor fields f : Ω ⊂ E → E and S : Ω ⊂ E → L defined over the region Ω with boundary ∂Ω having outward unit normal vector n, as follows:

∫_Ω divf dv = ∫_∂Ω f · n da,
∫_Ω divS dv = ∫_∂Ω S n da,   (6.164)

where again the argument x ∈ Ω ⊂ E has been suppressed for conciseness. The first result can be found in any vector calculus book. To obtain the second equality, consider a · ∫_Ω divS(x) dv for a an arbitrary uniform vector and use Eqn. 6.157 and the first equality.

Upon applying the localization theorem, i.e. Eqn. 6.155, to the left hand side of Eqn. 6.164 we obtain the physical interpretation of the divergence fields

divf(x) = lim_{ϵ→0} 1/Vol(Ωϵ,x) ∫_{Ωϵ,x} divf dv = lim_{ϵ→0} 1/Vol(Ωϵ,x) ∫_{∂Ωϵ,x} f · n da,
divS(x) = lim_{ϵ→0} 1/Vol(Ωϵ,x) ∫_{Ωϵ,x} divS dv = lim_{ϵ→0} 1/Vol(Ωϵ,x) ∫_{∂Ωϵ,x} S n da,   (6.165)

where Ωϵ,x is the spherical region centered at x with radius ϵ, boundary ∂Ωϵ,x and outward unit surface normal vector n. We see that the divergence is the limiting value of the net flux f · n or S n over the boundary of the sphere per unit volume. This is seen in Fig. 6.14 for the function f(x) = (e1 · x)³ e1 + (e2 · x)³ e2 = x1³ e1 + x2³ e2, for which divf(x) = 3(e1 · x)² + 3(e2 · x)² = 3(x1² + x2²). At x = 0 the net flux entering the "sphere" is zero because divf(0) = 0, whereas for x = e1 + e2 the net flux is leaving the "sphere" because divf(x) = 6 > 0.

Figure 6.14: Divergence illustration: (a) divf(0) = 0 and (b) divf(e1 + e2) = 6. The vectors depict the values of f(x) = (e1 · x)³ e1 + (e2 · x)³ e2 = x1³ e1 + x2³ e2.
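The following numerical sketch is not part of the original notes; it is a midpoint-rule check, assuming NumPy, of Eqn. 6.164 on the unit cube [0, 1]³ for the Fig. 6.14 field f(x) = x1³ e1 + x2³ e2, for which divf = 3(x1² + x2²).

```python
import numpy as np

n = 400
t = (np.arange(n) + 0.5) / n                           # midpoint nodes on (0, 1)
X1, X2 = np.meshgrid(t, t, indexing="ij")

vol_integral = np.sum(3.0 * (X1**2 + X2**2)) / n**2    # the x3 direction integrates to 1

# Outward flux: f . n = x1^3 = 1 on the face x1 = 1 and f . n = x2^3 = 1 on the face x2 = 1;
# the faces x1 = 0, x2 = 0 and x3 = 0, 1 carry no normal flux, so the boundary integral is 1 + 1.
surf_integral = 1.0 + 1.0

print(vol_integral, surf_integral)                     # both equal 2, up to quadrature error
```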

6.8 The Group and Invariance

The following concepts come in handy when we discuss material symmetry. Recall that if A and B are tensors, then their product A B is also a tensor, i.e. A B ∈ L. Using multiplication as our combination, it can be shown that for any orthogonal tensors A, B ∈ LOrth:

G1 A B is an orthogonal tensor, i.e. A B ∈ LOrth,
G2 A⁻¹ is an orthogonal tensor, i.e. A⁻¹ ∈ LOrth,

and thus we say the set of orthogonal tensors LOrth forms a group in L. The above two group properties are referred to as closure and existence of a reverse element. Note that because A and A⁻¹ are in the group, the identity is necessarily in the group, i.e. I = A A⁻¹ ∈ LOrth. Similarly, the set of 0°, 90°, 180°, 270°, ··· rotations about the e3 axis, i.e. Re3(n π/2) where n is an integer (cf. Eqn. 6.68), is also a group; it is in fact a subgroup of LOrth, which we denote as G ⊂ LOrth (cf. Exer. 6.64).

We say the set X ⊂ L is invariant under the group G ⊂ LOrth if Q A Qᵀ ∈ X for all A ∈ X and all Q ∈ G. A scalar valued function φ : X ⊂ L → R is invariant under G if X is invariant under G and φ(Q A Qᵀ) = φ(A) for all A ∈ X and all Q ∈ G. Similarly, a tensor valued function T : X ⊂ L → L is invariant under G if X is invariant under G and T(Q A Qᵀ) = Q T(A) Qᵀ for all A ∈ X and all Q ∈ G. Differentiating these equalities gives relations for the derivatives of invariant functions:

(Q ! Q)ᵀ ∇φ(Q A Qᵀ) = Qᵀ ∇φ(Q A Qᵀ) Q = ∇φ(A),
(Q ! Q)ᵀ D²φ(Q A Qᵀ) (Q ! Q) = D²φ(A),
DT(Q A Qᵀ) (Q ! Q) = (Q ! Q) DT(A).

6.9 Exercises

6.1. Prove Eqn. 6.65. Hint: use Eqn. 6.1 and the discussion following Eqn. 6.64.

6.2. Prove that a ∧ b = 0 only if a, b ∈ E are linearly dependent. Hint: use property E7 to show that if a ∧ b = 0 then a · b = ±|a| |b|. Next assume a · b = |a| |b| and verify that | |b| a − |a| b | = 0, which implies |b| a − |a| b = 0 and hence that a and b are linearly dependent. Now consider the cases when a · b = −|a| |b|, a = 0 and b = 0.

6.3. Prove that [a, b, c] = 0 only if a, b and c are linearly dependent. Hint: suppose this is not the case, i.e. suppose a, b and c are linearly independent; then from Exer. 6.2 we know that b ∧ c ≠ 0. Next show that a, b and c are all orthogonal to b ∧ c. (Hint: use the given equality [a, b, c] = 0 and properties E2 and E3 and the above orthogonality results.) Now, since a, b and c are linearly independent they form a basis for E and hence b ∧ c = α a + β b + γ c for some α, β, γ ∈ R that are not all zero. Now consider the "equality" (b ∧ c) · (b ∧ c) = (α a + β b + γ c) · (b ∧ c) and show that the left and righthand sides are nonzero and zero, respectively. This contradiction implies a, b and c are linearly dependent.

6.4. In regard to the equality a = a1 e1 + a2 e2 + a3 e3 of Eqn. 6.9:
(a) Prove that this expression exists. Hint: use the fact that {e1, e2, e3} is a basis for E, and hence what does this imply about the four vectors e1, e2, e3, a ∈ E?
(b) Prove the components a1, a2 and a3 are unique. Hint: express a via the different components a′1, a′2 and a′3, subtract the two expressions and use the fact that {e1, e2, e3} is a basis for E.

6.5. Prove e1 = e2 ∧ e3, e2 = e3 ∧ e1, and e3 = e1 ∧ e2 without using Eqn. 6.12.
(a) Express e2 ∧ e3 in terms of its components.
(b) Next show that e2 ∧ e3 = [e1, e2, e3] e1. Hint: use property E7 and the fact that {e1, e2, e3} is an orthonormal basis for E.
(c) Now verify the equality |e2 ∧ e3|² = 1. Hint: use Eqns. 6.5 and 6.6 and properties E4 and E6.
(d) Combine the previous two results to show that [e1, e2, e3] = ±1.
(e) Insert this result into the second to obtain e2 ∧ e3 = ±e1.
(f) Finally restrict yourself to an orthonormal righthanded basis for which [e1, e2, e3] = 1.
(g) Repeat this process to obtain the other equalities.

6.6. Prove Eqn. 6.27.

6.7. Prove Eqn. 6.28.

6.8. Prove Eqn. 6.30.

6.9. Justify the matrix representation of Eqn. 6.41.

6.10. Evaluate δjj.

6.11. Prove that in general A B ≠ B A for A, B ∈ L. Hint: provide a counter example.

6.12. The tensor P is a perpendicular projection if Pᵀ = P and P² = P.
(a) Verify that 0, I, e ⊗ e and I − e ⊗ e, with e a unit vector, are perpendicular projections.
(b) Illustrate the action of the above four tensors acting on the arbitrary vector a, i.e. illustrate the vector a and P a.
(c) Continuing along the same vein, illustrate the vector a and P² a and explain the significance of the requirement that P² = P.
(d) Show that all perpendicular projections take the form of the above four tensors. Hint: represent the symmetric P via the spectral decomposition theorem and use the fact that P² = P to show that the eigenvalues of P equal either zero or one.

6.13. Prove Eqn. 6.32. Hint: evaluate the components (a ⊗ b)ij = ei · (a ⊗ b) ej.

6.14. Express tr(A B C) using indicial (component) notation for A, B, C ∈ L.

6.15. Evaluate the invariants of the tensor A = a ⊗ b where a, b ∈ E.

6.16. Verify the equality tr Dev(A) = 0.

6.17. Prove Eqn. 6.52.

6.18. Prove Eqn. 6.53.

6.19. Prove Eqn. 6.57.

6.20. Refer to Eqn. 6.59 and
(a) Prove that if S · T = 0 for every symmetric tensor S then T is a skew tensor.
(b) Prove that if W · T = 0 for every skew symmetric tensor W then T is a symmetric tensor.

6.21. Given the tensor A ∈ L, prove that there is an a ≠ 0 ∈ E such that A a = 0 if and only if det A = 0.
(a) To prove necessity, i.e. A a = 0 implies det A = 0, use Eqn. 6.6 to show that A e1, A e2, and A e3 are linearly dependent. Next use Eqn. 6.47 to show that [A e1, A e2, A e3] = 0 and then apply Eqn. 6.52.
(b) To prove sufficiency, i.e. det A = 0 implies there exists an a ≠ 0 ∈ E such that A a = 0, see the discussion following Eqn. 6.22 to define a.

6.22. (a) Prove (A B)⁻¹ = B⁻¹ A⁻¹. Hint: replace A with A B in Eqn. 6.62 and refer to the definition of tensor composition.
(b) To prove (A⁻¹)ᵀ = (Aᵀ)⁻¹ you might consider
i. proving Aᵀ is invertible if A is invertible,
ii. taking the transpose of I = A A⁻¹ in the above and applying Eqn. 6.41, and
iii. using Eqn. 6.62 with both A and Aᵀ to obtain A A⁻¹ = I = (Aᵀ)⁻¹ Aᵀ.

6.23. If (λ, v) is an eigenpair of A, prove that (λ, α v) for any (nonzero) scalar α is also an eigenpair, and hence eigenvectors can always be scaled to be unit vectors.

6.24. Some authors define an orthogonal tensor as only preserving length, i.e. Eqn. 6.65 is replaced by (Q a) · (Q a) = a · a. Prove that this condition renders Eqn. 6.65, i.e. (Q a) · (Q b) = a · b. Hint: replace a with a − b and expand the defining equality (Q (a − b)) · (Q (a − b)) = (a − b) · (a − b).

6.25. Verify that the reflection of Eqn. 6.67 is (a) symmetric, (b) orthogonal, and (c) an involution.

6.26. Verify that the tensor R of Eqn. 6.68 is a rotation.

6.27. Prove that if R is a rotation, then (R a) ∧ (R b) = R (a ∧ b) for arbitrary vectors a and b. Finally, prove that R* = R. Hint: use Eqns. 6.61 and 6.71.

6.28. The product P⁻¹ A P is a similarity transformation of the tensor A, where P is an invertible tensor. Such operations appear in coordinate transformations, amongst other things.

(a) Verify that the principal invariants of P⁻¹ A P equal those of A, and hence the terminology invariants.
(b) Determine the relationship between the eigenpairs of P⁻¹ A P and A.

6.29. Verify that the eigenvalues of a positive definite symmetric tensor S are positive. Hint: combine Eqn. 6.72 with the definition of positive definiteness (with the eigenvector replacing ei).

6.30. Verify Eqn. 6.80.

6.31. Evaluate the eigenpairs of the tensor A = [1 3 2; 3 2 −1; 2 −1 2].

6.32. Evaluate the right polar decomposition of the tensor A = [1 3 3; 4 2 −1; 2 6 1].

6.33. Verify Eqn. 6.84.

6.34. In Eqn. 6.43 we used the components of the skew tensor W to show that it only has three distinct components. To these ends:
(a) Consider the arbitrary basis vectors ei and ej and justify the equalities ei · W ej = −ej · W ei and ei · W ei = −ei · W ei = 0.
(b) You have just proved that the 3 components Wii (no sum) are zero and that Wij = −Wji for i ≠ j, and hence there are only three distinct components.

6.35. In Eqn. 6.45 we used the components of the skew tensor W to show that it has an associated axial vector w. We see one obvious component ω and two "hidden" components, i.e. the components of p; there are only two components of p because the third component is determined by the |p| = 1 unit length condition. To find these remaining 3 components:
(a) Find a real eigenpair of the skew tensor W. Hint: try w = Axial(W).
(b) First prove that W has a zero eigenvalue, i.e. λ = 0.
(c) Let p be the unit eigenvector corresponding to this zero eigenvalue, i.e. (0, p) is an eigenpair of W, and use it to define the orthonormal basis {p, q, r}.
(d) Use the above results to show p · W q = p · W r = q · W p = r · W p = 0 and r · W q = −q · W r = ω for some scalar ω.
(e) Finally combine the above results to verify the equality W = ω (r ⊗ q − q ⊗ r).

6.36. Refer to Exer. 6.35 and let w = ω p and verify that W a = w ∧ a for every vector a. Hint: express a via the orthonormal basis {p, q, r} and use Eqns. 6.5, 6.6 and 6.29.

6.37. Prove that v ⊗ u − u ⊗ v is a skew tensor with corresponding axial vector u ∧ v for arbitrary vectors u and v.
(a) Use Eqn. 6.41 to show (v ⊗ u − u ⊗ v)ᵀ = −(v ⊗ u − u ⊗ v).
(b) Use Eqn. 6.6 to show (v ⊗ u − u ⊗ v)(u ∧ v) = 0 and hence (0, u ∧ v) is an eigenpair of (v ⊗ u − u ⊗ v).
(c) Using the results of Exers. 6.35 and 6.36 you now know α u ∧ v is the axial vector corresponding to the skew tensor (v ⊗ u − u ⊗ v). To compute the scalar α evaluate v · [(v ⊗ u − u ⊗ v) u] = v · [(α u ∧ v) ∧ u]. Hint: use Eqn. 6.29 on the left hand side and Eqns. 6.5 and 6.6 on the righthand side.

6.38. Refer to Exer. 6.37 and verify the equalities (v ⊗ u − u ⊗ v) w = (u ∧ v) ∧ w = (u · w) v − (v · w) u for arbitrary vectors u, v and w.

6.39. In Eqn. 6.68 we introduced a particular rotation R = p ⊗ p + (q ⊗ q + r ⊗ r) cos θ − (q ⊗ r − r ⊗ q) sin θ. Prove that every rotation can be defined as such.
(a) From Qᵀ Q = I derive the equality Qᵀ(Q − I) = −(Q − I)ᵀ.
(b) Take the determinant of the above to prove det(Q − I) = 0.
(c) Use this det(Q − I) = 0 result to explain why λ = 1 is an eigenvalue of Q.
(d) Let p be the eigenvector corresponding to this unit eigenvalue, i.e. (1, p) is an eigenpair of Q, and use it to define the orthonormal basis {p, q, r}. Now prove that
i. p = Q p = Qᵀ p,
ii. p · Q p = 1, p · Q q = p · Q r = q · Q p = r · Q p = 0,
iii. Q q · Q q = Q r · Q r = 1, Q q · Q r = 0.
(e) You now have 5 components of Q. To find the remaining 4, first prove the following:
(f) Justify the equalities Q q = α q + β r and Q r = γ q + δ r and provide definitions of α, β, γ and δ.
(g) Justify the equalities α² + β² = 1, γ² + δ² = 1, α γ + β δ = 0 and α δ − β γ = 1. Hint: evaluate the three possible dot products between Q q and Q r and use the equality det Q = 1 = [Q p, Q q, Q r] = [p, α q + β r, γ q + δ r].
(h) Verify that α = δ = cos θ and β = −γ = sin θ is a solution to the above four equations.
(i) Combine steps f and h to evaluate the remaining 4 components.

6.40. Find a real eigenpair of the rotation tensor R of Eqn. 6.68. Hint: try p.

6.41. Verify Eqn. 6.95.

6.42. Verify Eqn. 6.96.

6.43. Verify Eqn. 6.97.

6.44. Verify Eqn. 6.99.

6.45. Verify that Tᵀ = T.

6.46. Verify the following:
(a) C = PSym C if C possesses the first minor symmetry,
(b) C = C PSym if C possesses the second minor symmetry, and that
(c) if C possesses the major and either minor symmetry, then it possesses the other minor symmetry.
How many independent components does C have if it possesses major, first, and second minor symmetries? Hint: refer to Eqns. 6.9 and 6.100 and, as always, use the arbitrariness of A.

6.47. Refer to Eqn. 6.105 and use the Kronecker delta δij to express the components
(a) PSym_ijkl
(b) PSkew_ijkl
(c) PSph_ijkl
(d) PDev_ijkl

6.48. Refer to Eqn. 6.105 and evaluate the following:
(a) PSym PSkew
(b) PSym + PSkew
(c) PSph PDev
(d) PSph + PDev
(e) PSym PSph
(f) Comment on your findings.

6.49. Refer to Eqn. 6.109 and verify that R = R ! R is orthogonal if R ∈ L is a rotation.

6.50. Verify Eqn. 6.108.

6.51. Verify Eqn. 6.118.

6.52. Evaluate the first and second derivatives of the function α : E → R such that α(x) = |x|.

6.53. Evaluate the derivative of the function F : L → L where F(A) = tr(A) A.

6.54. Evaluate the derivative of the function F : L → L where F(A) = tr(A) B Aᵀ, where B is a tensor.

6.55. Evaluate the derivative of the function F : L → L where F(A) = Aᵀ A.

6.56. Use the intermediate result of Exam. 6.7 to verify the equation det(A + a ⊗ b) = det(A) [1 + a · (A⁻¹ b)]. Hint: let ϵ = 1 and U = a ⊗ b, use your Exer. 6.15 results to show ı2((a ⊗ b) A⁻¹) = 0 and det(a ⊗ b) = 0, and apply Eqn. 6.132.

6.57. Use the chain rule to evaluate the gradient and derivative of the maps (a) α∘f : E → R and (b) g∘f : E → E, where α : E → R, g : E → E, and f : E → E are differentiable functions.

6.58. For the tensor field B : E → L and the uniform tensor A ∈ L verify that div(A B(x)) = A divB(x). Hint: refer to Eqns. 6.40 and 6.157.

6.59. Verify Eqn. 6.164. Hint: refer to Eqns. 6.163 and 6.44.

6.60. Verify the equality

w · ∫_∂Ω (x − y) ∧ T(x) n(x) da = ∫_Ω (W (x − y) · divT(x) + W · T(x)) dv,

where w = Axial(W) is a uniform vector and T : Ω ⊂ E → L is differentiable. Hint: apply the transpose and divergence definitions of Eqns. 6.40 and 6.157 multiple times.

6.61. Equation 6.164 assumes that the fields f and S are smooth everywhere in Ω. Now assume they are smooth everywhere except across the singular surface Γ with unit normal vector m (cf. Fig. 6.15). Under these conditions derive the following counterpart to Eqn. 6.164:

∫_Ω divf dv = ∫_∂Ω f · n da − ∫_Γ [[f]] · m da,
∫_Ω divS dv = ∫_∂Ω S n da − ∫_Γ [[S]] m da,   (6.166)

where the jump of the field f is defined [[f(x)]] = f⁺(x) − f⁻(x) with f±(x) = lim_{ϵ→0⁺} f(x ± ϵ m). The jump [[S(x)]] is defined analogously to [[f(x)]]. Hint: split the integrals on the left hand side into the two regions Ω⁺ and Ω⁻ and then apply Eqn. 6.164.

Figure 6.15: Surface of discontinuity: the region Ω with boundary ∂Ω (outward normal n) is split into Ω⁺ and Ω⁻ by the singular surface Γ with unit normal m.

6.62. Verify the group closure property G1 for the set of orthogonal tensors LOrth.

6.63. Show that the set of all rotations LRot is a subgroup of LOrth.

6.64. Show that the set of all rotations Re3(n π/2), where n is an integer, is a subgroup of LOrth.

6.65. Assume that X is invariant under the group G ⊂ LOrth and that φ : X ⊂ L → R and T : X ⊂ L → L are invariant under G, and verify the following derivative relations:
(a) Q ∇φ(Qᵀ A Q) Qᵀ = ∇φ(A),

(b) (Q ! Q) D²φ(Qᵀ A Q) (Q ! Q)ᵀ = D²φ(A),
(c) DT(Qᵀ A Q) (Q ! Q)ᵀ = (Q ! Q)ᵀ DT(A).

