1 Tensors

1.1 Introduction
on space and time. At the end of the chapter we will introduce tensor fields and some field
operators which can be used to interpret these fields.
In this textbook we will work interchangeably with the following notations: tensorial, indicial, and matricial. Additionally, when the tensors are symmetric, it is also possible to represent their components using the Voigt notation.
There now follows a brief review of vectors in the Euclidean vector space $E$, so that we may become acquainted with the nomenclature used in this textbook.
Addition: Let $\vec{a}$, $\vec{b}$ be arbitrary vectors. Their sum, (see Figure 1.1(a)), is a new vector $\vec{c}$ defined by:

$$\vec{c} = \vec{a} + \vec{b} = \vec{b} + \vec{a} \tag{1.1}$$
Figure 1.1: Addition and subtraction of vectors.
Subtraction: The subtraction of two arbitrary vectors $\vec{a}$, $\vec{b}$, (see Figure 1.1(b)), is given by:

$$\vec{d} = \vec{a} - \vec{b} \tag{1.2}$$

Considering three vectors $\vec{a}$, $\vec{b}$ and $\vec{c}$, the following property (associativity) is satisfied:

$$(\vec{a} + \vec{b}) + \vec{c} = \vec{a} + (\vec{b} + \vec{c}) = \vec{a} + \vec{b} + \vec{c} \tag{1.3}$$
Scalar multiplication: Let $\vec{a}$ be a vector and $\lambda$ a scalar. The scalar multiplication $\lambda\vec{a}$ produces another vector with the same direction as $\vec{a}$, whose length and orientation are defined by the scalar $\lambda$, as shown in Figure 1.2.

[Figure 1.2: Scalar multiplication of a vector, showing $\vec{a}$ and $\lambda\vec{a}$ for different values of $\lambda$.]
Scalar Product: The scalar product (also known as the dot product or inner product) of two vectors $\vec{a}$, $\vec{b}$, denoted by $\vec{a} \cdot \vec{b}$, is defined as follows:

$$\gamma = \vec{a} \cdot \vec{b} = \|\vec{a}\|\,\|\vec{b}\|\cos\theta \tag{1.4}$$

where $\theta$ is the angle between the two vectors, (see Figure 1.3(a)), and $\|\bullet\|$ represents the Euclidean norm (or magnitude) of $\bullet$. The result of the operation (1.4) is a scalar. Moreover, we can conclude that $\vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}$. The expression (1.4) also holds when $\vec{a} = \vec{b}$, in which case $\theta = 0°$, therefore:

$$\vec{a} \cdot \vec{a} = \|\vec{a}\|\,\|\vec{a}\|\cos 0° = \|\vec{a}\|^2 \quad\Rightarrow\quad \|\vec{a}\| = \sqrt{\vec{a} \cdot \vec{a}} \tag{1.5}$$

Hence, the norm of a vector is $\|\vec{a}\| = \sqrt{\vec{a} \cdot \vec{a}}$.
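As a quick numerical cross-check (a minimal NumPy sketch, not part of the original text), the definitions (1.4)–(1.5) can be verified directly:

```python
import numpy as np

a = np.array([1.0, -4.0, 0.0])
b = np.array([-1.0, -2.0, 2.0])

# Scalar (dot) product, eq. (1.4)
gamma = np.dot(a, b)

# Norm from eq. (1.5): ||a|| = sqrt(a . a)
norm_a = np.sqrt(np.dot(a, a))
assert np.isclose(norm_a, np.linalg.norm(a))

# Angle between the vectors, recovered from eq. (1.4)
cos_theta = gamma / (np.linalg.norm(a) * np.linalg.norm(b))
print(gamma, norm_a, np.degrees(np.arccos(cos_theta)))
```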
Unit Vector: A unit vector, associated with the direction of $\vec{a}$, is denoted by $\hat{a}$, and has the same direction and orientation as $\vec{a}$. In this textbook, the hat symbol ($\hat{\bullet}$) denotes a unit vector. Thus, the unit vector $\hat{a}$, codirectional with $\vec{a}$, is defined as:

$$\hat{a} = \frac{\vec{a}}{\|\vec{a}\|} \tag{1.6}$$

where $\|\vec{a}\|$ represents the norm (magnitude) of $\vec{a}$. If $\hat{a}$ is a unit vector, then the following must be true:

$$\|\hat{a}\| = 1 \tag{1.7}$$
Zero Vector (or Null Vector): The zero vector is represented by:

$$\vec{0} \tag{1.8}$$
Projection Vector: The projection vector of $\vec{a}$ onto $\vec{b}$, (see Figure 1.3(b)), is defined as:

$$\overrightarrow{\operatorname{proj}}_{\vec{b}}\,\vec{a} = \bigl(\operatorname{proj}_{\vec{b}}\vec{a}\bigr)\,\hat{b} \qquad \text{(projection vector of } \vec{a} \text{ onto } \vec{b}\text{)} \tag{1.9}$$

where $\operatorname{proj}_{\vec{b}}\vec{a}$ is the (scalar) projection of $\vec{a}$ onto $\vec{b}$, and $\hat{b}$ is the unit vector associated with the $\vec{b}$-direction. The magnitude of the projection is obtained by means of the scalar product:

$$\operatorname{proj}_{\vec{b}}\vec{a} = \vec{a} \cdot \hat{b} \qquad \text{(projection of } \vec{a} \text{ onto } \vec{b}\text{)} \tag{1.10}$$

So, taking into account the definition of the unit vector, we obtain:

$$\operatorname{proj}_{\vec{b}}\vec{a} = \vec{a} \cdot \frac{\vec{b}}{\|\vec{b}\|} \tag{1.11}$$

Then, the projection vector can be calculated by:

$$\overrightarrow{\operatorname{proj}}_{\vec{b}}\,\vec{a} = \underbrace{\left(\frac{\vec{a} \cdot \vec{b}}{\|\vec{b}\|}\right)}_{\text{scalar}}\hat{b} = \frac{\vec{a} \cdot \vec{b}}{\|\vec{b}\|}\,\frac{\vec{b}}{\|\vec{b}\|} = \frac{\vec{a} \cdot \vec{b}}{\|\vec{b}\|^2}\,\vec{b} \tag{1.12}$$

[Figure 1.3: (a) Scalar product $\vec{a} \cdot \vec{b} = \|\vec{a}\|\|\vec{b}\|\cos\theta$ with $0 \le \theta \le \pi$; (b) projection of $\vec{a}$ onto $\vec{b}$.]
Orthogonality between vectors: Two vectors $\vec{a}$ and $\vec{b}$ are orthogonal if the scalar product between them is zero, i.e.:

$$\vec{a} \cdot \vec{b} = 0 \tag{1.13}$$
Vector Product (or Cross Product): The vector product of two vectors $\vec{a}$, $\vec{b}$ results in another vector $\vec{c}$, which is perpendicular to the plane defined by the two input vectors, (see Figure 1.4). The vector product has the following characteristics:

Representation:

$$\vec{c} = \vec{a} \wedge \vec{b} = -\vec{b} \wedge \vec{a} \tag{1.14}$$

The vector $\vec{c}$ is orthogonal to the vectors $\vec{a}$ and $\vec{b}$, thus:

$$\vec{a} \cdot \vec{c} = \vec{b} \cdot \vec{c} = 0 \tag{1.15}$$

The magnitude of $\vec{c}$ is defined by the formula:

$$\|\vec{c}\| = \|\vec{a}\|\,\|\vec{b}\|\sin\theta \tag{1.16}$$

where $\theta$ measures the smallest angle between $\vec{a}$ and $\vec{b}$, (see Figure 1.4).

The magnitude of the vector product $\vec{a} \wedge \vec{b}$ is geometrically interpreted as the area of the parallelogram defined by the two vectors, (see Figure 1.4):

$$A = \|\vec{a} \wedge \vec{b}\| \tag{1.17}$$

Therefore, the area of the triangle defined by the points $OCD$, (see Figure 1.4(a)), is:

$$A_T = \tfrac{1}{2}\|\vec{a} \wedge \vec{b}\| \tag{1.18}$$

If $\vec{a}$ and $\vec{b}$ are linearly dependent, i.e. $\vec{a} = \alpha\vec{b}$ with $\alpha$ denoting a scalar, the vector product becomes the zero vector: $\vec{a} \wedge \vec{b} = \alpha\vec{b} \wedge \vec{b} = \vec{0}$.
Scalar Triple Product (or Mixed Product): Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be arbitrary vectors. We can define the scalar triple product as:

$$\vec{a} \cdot (\vec{b} \wedge \vec{c}) = \vec{b} \cdot (\vec{c} \wedge \vec{a}) = \vec{c} \cdot (\vec{a} \wedge \vec{b}) = V \\ V = -\vec{a} \cdot (\vec{c} \wedge \vec{b}) = -\vec{b} \cdot (\vec{a} \wedge \vec{c}) = -\vec{c} \cdot (\vec{b} \wedge \vec{a}) \tag{1.19}$$

where the scalar $V$ represents the volume of the parallelepiped defined by $\vec{a}$, $\vec{b}$, $\vec{c}$, (see Figure 1.5).
[Figure 1.4 shows: (a) the triangle of area $A_T$ over the points $O$, $C$, $D$, spanned by $\vec{a}$ and $\vec{b}$; (b) $\vec{c} = \vec{a} \wedge \vec{b}$, the parallelogram of area $A$, and the opposite product $-\vec{c} = \vec{b} \wedge \vec{a}$.]
Figure 1.4: Vector product.
If two of the vectors are linearly dependent, then the scalar triple product is zero, i.e.:

$$\vec{a} \cdot (\vec{b} \wedge \vec{a}) = 0 \tag{1.20}$$

Let $\vec{a}$, $\vec{b}$, $\vec{c}$, $\vec{d}$ be vectors and $\alpha$, $\beta$ be scalars; the following property is satisfied:

$$(\alpha\vec{a} + \beta\vec{b}) \cdot (\vec{c} \wedge \vec{d}) = \alpha\,\vec{a} \cdot (\vec{c} \wedge \vec{d}) + \beta\,\vec{b} \cdot (\vec{c} \wedge \vec{d}) \tag{1.21}$$
NOTE: Some authors represent the scalar triple product as $[\vec{a}, \vec{b}, \vec{c}] \equiv \vec{a} \cdot (\vec{b} \wedge \vec{c})$, $[\vec{b}, \vec{c}, \vec{a}] \equiv \vec{b} \cdot (\vec{c} \wedge \vec{a})$, $[\vec{c}, \vec{a}, \vec{b}] \equiv \vec{c} \cdot (\vec{a} \wedge \vec{b})$. ■
[Figure 1.5: Scalar triple product; the volume $V$ of the parallelepiped defined by $\vec{a}$, $\vec{b}$, $\vec{c}$, with $\vec{a} \wedge \vec{b}$ normal to the base.]
Vector Triple Product: Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be vectors. We can define the vector triple product as $\vec{w} = \vec{a} \wedge (\vec{b} \wedge \vec{c})$. Then, we can show that the following relationship holds:

$$\vec{w} = \vec{a} \wedge (\vec{b} \wedge \vec{c}) = -(\vec{b} \wedge \vec{c}) \wedge \vec{a} = (\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c} \tag{1.22}$$

whereby it is clear that the result of the vector triple product is another vector $\vec{w}$, belonging to the plane $\Pi_1$ formed by the vectors $\vec{b}$ and $\vec{c}$, (see Figure 1.6).
[Figure 1.6: Vector triple product. $\Pi_1$ is the plane defined by $\vec{b}$, $\vec{c}$; $\Pi_2$ is the plane defined by $\vec{a}$ and $\vec{b} \wedge \vec{c}$; the vector $\vec{w}$ belongs to the plane $\Pi_1$.]
Problem 1.1: Let $\vec{a}$ and $\vec{b}$ be arbitrary vectors. Prove that the following relationship is true:

$$(\vec{a} \wedge \vec{b}) \cdot (\vec{a} \wedge \vec{b}) = (\vec{a} \cdot \vec{a})(\vec{b} \cdot \vec{b}) - (\vec{a} \cdot \vec{b})^2$$

Solution:

$$\begin{aligned}
(\vec{a} \wedge \vec{b}) \cdot (\vec{a} \wedge \vec{b}) &= \|\vec{a} \wedge \vec{b}\|^2 = \bigl(\|\vec{a}\|\|\vec{b}\|\sin\theta\bigr)^2 = \|\vec{a}\|^2\|\vec{b}\|^2\sin^2\theta \\
&= \|\vec{a}\|^2\|\vec{b}\|^2(1 - \cos^2\theta) = \|\vec{a}\|^2\|\vec{b}\|^2 - \|\vec{a}\|^2\|\vec{b}\|^2\cos^2\theta \\
&= \|\vec{a}\|^2\|\vec{b}\|^2 - \bigl(\|\vec{a}\|\|\vec{b}\|\cos\theta\bigr)^2 = \|\vec{a}\|^2\|\vec{b}\|^2 - (\vec{a} \cdot \vec{b})^2 \\
&= (\vec{a} \cdot \vec{a})(\vec{b} \cdot \vec{b}) - (\vec{a} \cdot \vec{b})^2
\end{aligned}$$
Linear Transformation

Let $\vec{u}$ and $\vec{v}$ be arbitrary vectors and $\alpha$ be a scalar. We say that $F$ is a linear transformation if the following hold:

$$F(\vec{u} + \vec{v}) = F(\vec{u}) + F(\vec{v}) \qquad ; \qquad F(\alpha\vec{u}) = \alpha F(\vec{u})$$
Problem 1.2: Given the functions $\sigma(\varepsilon) = E\varepsilon$ and $\psi(\varepsilon) = \frac{1}{2}E\varepsilon^2$, determine whether these functions are linear transformations or not.

Solution:

$$\sigma(\varepsilon_1 + \varepsilon_2) = E[\varepsilon_1 + \varepsilon_2] = E\varepsilon_1 + E\varepsilon_2 = \sigma(\varepsilon_1) + \sigma(\varepsilon_2) \qquad \text{(linear transformation)}$$

[Plot: $\sigma(\varepsilon)$ is a straight line through the origin, so $\sigma(\varepsilon_1 + \varepsilon_2) = \sigma(\varepsilon_1) + \sigma(\varepsilon_2)$.]

The function $\psi(\varepsilon) = \frac{1}{2}E\varepsilon^2$ is not a linear transformation because the condition $\psi(\varepsilon_1 + \varepsilon_2) = \psi(\varepsilon_1) + \psi(\varepsilon_2)$ is not satisfied:

$$\psi(\varepsilon_1 + \varepsilon_2) = \tfrac{1}{2}E[\varepsilon_1 + \varepsilon_2]^2 = \tfrac{1}{2}E\bigl[\varepsilon_1^2 + 2\varepsilon_1\varepsilon_2 + \varepsilon_2^2\bigr] = \tfrac{1}{2}E\varepsilon_1^2 + \tfrac{1}{2}E\varepsilon_2^2 + E\varepsilon_1\varepsilon_2 \\ = \psi(\varepsilon_1) + \psi(\varepsilon_2) + E\varepsilon_1\varepsilon_2 \neq \psi(\varepsilon_1) + \psi(\varepsilon_2)$$

[Plot: the parabola $\psi(\varepsilon)$, where $\psi(\varepsilon_1 + \varepsilon_2) \neq \psi(\varepsilon_1) + \psi(\varepsilon_2)$.]
A tensor, which has physical meaning, must be independent of the adopted coordinate system. Sometimes, for reasons of convenience, we need to represent a tensor in a specific coordinate system; hence we have the concept of tensor components, (see Figure 1.7).

[Figure 1.7: TENSORS — mathematical representation of the physical quantities (independent of the coordinate system); COMPONENTS — representation of the tensor in a coordinate system.]

[Figure 1.8 shows a vector $\vec{a}$ with components $(a_1, a_2, a_3)$ in a general coordinate system $(\xi_1, \xi_2, \xi_3)$.]
Figure 1.8: Vector representation in a general coordinate system.
The Cartesian coordinate system is defined by three unit vectors $\hat{i}$, $\hat{j}$, $\hat{k}$, called the Cartesian basis, which make up an orthonormal basis. The orthonormal basis has the following properties:

1. The vectors that make up this basis are unit vectors:

$$\|\hat{i}\| = \|\hat{j}\| = \|\hat{k}\| = 1 \tag{1.23}$$

or:

$$\hat{i} \cdot \hat{i} = \hat{j} \cdot \hat{j} = \hat{k} \cdot \hat{k} = 1 \tag{1.24}$$

[Figure: right-handed orientations of the basis $(\hat{i}, \hat{j}, \hat{k})$, and the Cartesian components $(a_x, a_y, a_z)$ of a vector $\vec{a}$.]
Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be arbitrary vectors. We can describe some vector operations in the Cartesian coordinate system as follows:

The scalar product $\vec{a} \cdot \vec{b}$ becomes a scalar, which is defined in the Cartesian system as:

$$\vec{a} \cdot \vec{b} = (a_x\hat{i} + a_y\hat{j} + a_z\hat{k}) \cdot (b_x\hat{i} + b_y\hat{j} + b_z\hat{k}) = a_x b_x + a_y b_y + a_z b_z \tag{1.29}$$

Thus, it is true that $\vec{a} \cdot \vec{a} = a_x a_x + a_y a_y + a_z a_z = a_x^2 + a_y^2 + a_z^2 = \|\vec{a}\|^2$.

NOTE: The projection of a vector onto a given direction was established in equation (1.10), thus defining the component concept. For example, if we want to know the vector component along the $y$-direction, all we need to do is calculate $\vec{a} \cdot \hat{j} = (a_x\hat{i} + a_y\hat{j} + a_z\hat{k}) \cdot \hat{j} = a_y$. ■
The norm of $\vec{a}$ is:

$$\|\vec{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2} \tag{1.30}$$

Then, the unit vector codirectional with $\vec{a}$ is:

$$\hat{a} = \frac{\vec{a}}{\|\vec{a}\|} = \frac{a_x}{\sqrt{a_x^2 + a_y^2 + a_z^2}}\hat{i} + \frac{a_y}{\sqrt{a_x^2 + a_y^2 + a_z^2}}\hat{j} + \frac{a_z}{\sqrt{a_x^2 + a_y^2 + a_z^2}}\hat{k} \tag{1.31}$$
The scalar triple product, in the Cartesian coordinate system, is given by the determinant:

$$V(\vec{a}, \vec{b}, \vec{c}) = \vec{a} \cdot (\vec{b} \wedge \vec{c}) = \vec{b} \cdot (\vec{c} \wedge \vec{a}) = \vec{c} \cdot (\vec{a} \wedge \vec{b}) = \begin{vmatrix} a_x & a_y & a_z \\ b_x & b_y & b_z \\ c_x & c_y & c_z \end{vmatrix} \\ = a_x\begin{vmatrix} b_y & b_z \\ c_y & c_z \end{vmatrix} - a_y\begin{vmatrix} b_x & b_z \\ c_x & c_z \end{vmatrix} + a_z\begin{vmatrix} b_x & b_y \\ c_x & c_y \end{vmatrix} = a_x(b_y c_z - b_z c_y) - a_y(b_x c_z - b_z c_x) + a_z(b_x c_y - b_y c_x) \tag{1.37}$$
The vector triple product made up of the vectors $\vec{a}$, $\vec{b}$, $\vec{c}$ is obtained, in the Cartesian coordinate system, as:

$$\vec{a} \wedge (\vec{b} \wedge \vec{c}) = (\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c} = (\lambda_1 b_x - \lambda_2 c_x)\hat{i} + (\lambda_1 b_y - \lambda_2 c_y)\hat{j} + (\lambda_1 b_z - \lambda_2 c_z)\hat{k} \tag{1.38}$$

where $\lambda_1 = \vec{a} \cdot \vec{c} = a_x c_x + a_y c_y + a_z c_z$ and $\lambda_2 = \vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z$.
Problem 1.3: Consider the points $A(1,3,1)$, $B(2,-1,1)$, $C(0,1,3)$ and $D(1,2,4)$, defined in the Cartesian coordinate system. 1) Find the area of the parallelogram defined by $\overrightarrow{AB}$ and $\overrightarrow{AC}$; 2) find the volume of the parallelepiped defined by $\overrightarrow{AB}$, $\overrightarrow{AC}$ and $\overrightarrow{AD}$; 3) find the projection vector of $\overrightarrow{AB}$ onto $\overrightarrow{BC}$.

Solution:

1) First we calculate the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$:

$$\vec{a} = \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA} = (2\hat{i} - 1\hat{j} + 1\hat{k}) - (1\hat{i} + 3\hat{j} + 1\hat{k}) = 1\hat{i} - 4\hat{j} + 0\hat{k} \\ \vec{b} = \overrightarrow{AC} = \overrightarrow{OC} - \overrightarrow{OA} = (0\hat{i} + 1\hat{j} + 3\hat{k}) - (1\hat{i} + 3\hat{j} + 1\hat{k}) = -1\hat{i} - 2\hat{j} + 2\hat{k}$$

With reference to equation (1.36) we can evaluate the vector product as follows:

$$\vec{a} \wedge \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 1 & -4 & 0 \\ -1 & -2 & 2 \end{vmatrix} = -8\hat{i} - 2\hat{j} - 6\hat{k}$$

Then, the parallelogram area can be obtained using the definition (1.17), thus:

$$A = \|\vec{a} \wedge \vec{b}\| = \sqrt{(-8)^2 + (-2)^2 + (-6)^2} = \sqrt{104}$$

2) Next, we can evaluate the vector $\overrightarrow{AD}$ as:

$$\vec{c} = \overrightarrow{AD} = \overrightarrow{OD} - \overrightarrow{OA} = (1\hat{i} + 2\hat{j} + 4\hat{k}) - (1\hat{i} + 3\hat{j} + 1\hat{k}) = 0\hat{i} - 1\hat{j} + 3\hat{k}$$

and using equation (1.37) we can obtain the volume of the parallelepiped:

$$V(\vec{a}, \vec{b}, \vec{c}) = \bigl|\vec{c} \cdot (\vec{a} \wedge \vec{b})\bigr| = \bigl|(0\hat{i} - 1\hat{j} + 3\hat{k}) \cdot (-8\hat{i} - 2\hat{j} - 6\hat{k})\bigr| = |0 + 2 - 18| = 16$$

3) The vector $\overrightarrow{BC}$ can be calculated as:

$$\overrightarrow{BC} = \overrightarrow{OC} - \overrightarrow{OB} = (0\hat{i} + 1\hat{j} + 3\hat{k}) - (2\hat{i} - 1\hat{j} + 1\hat{k}) = -2\hat{i} + 2\hat{j} + 2\hat{k}$$

Hence, it is possible to evaluate the projection vector of $\overrightarrow{AB}$ onto $\overrightarrow{BC}$, (see equation (1.12)), as:

$$\overrightarrow{\operatorname{proj}}_{\overrightarrow{BC}}\,\overrightarrow{AB} = \frac{\overrightarrow{BC} \cdot \overrightarrow{AB}}{\|\overrightarrow{BC}\|^2}\,\overrightarrow{BC} = \frac{(-2\hat{i} + 2\hat{j} + 2\hat{k}) \cdot (1\hat{i} - 4\hat{j} + 0\hat{k})}{(-2\hat{i} + 2\hat{j} + 2\hat{k}) \cdot (-2\hat{i} + 2\hat{j} + 2\hat{k})}\,\overrightarrow{BC} = \frac{(-2 - 8 + 0)}{(4 + 4 + 4)}(-2\hat{i} + 2\hat{j} + 2\hat{k}) = \frac{5}{3}\hat{i} - \frac{5}{3}\hat{j} - \frac{5}{3}\hat{k}$$
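A numerical cross-check of Problem 1.3 (an illustrative NumPy sketch, not part of the original solution):

```python
import numpy as np

A, B, C, D = map(np.array, ([1, 3, 1], [2, -1, 1], [0, 1, 3], [1, 2, 4]))
a, b, c = B - A, C - A, D - A                # AB, AC, AD

area = np.linalg.norm(np.cross(a, b))        # parallelogram area = sqrt(104)
volume = abs(np.dot(c, np.cross(a, b)))      # parallelepiped volume = 16

BC = C - B
proj = (np.dot(BC, B - A) / np.dot(BC, BC)) * BC   # eq. (1.12): (5/3, -5/3, -5/3)
print(area, volume, proj)
```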
Then, we introduce the summation convention, according to which repeated indices indicate summation. So, equation (1.41) can be represented as follows:

$$\vec{a} = a_1\hat{e}_1 + a_2\hat{e}_2 + a_3\hat{e}_3 = a_i\hat{e}_i \qquad (i = 1,2,3) \tag{1.42}$$

NOTE: The summation convention was introduced by Albert Einstein in 1916, and it led to the indicial notation. ■

Using indicial notation, the three axes of the coordinate system are designated by the letter $x$ with a subscript. So, $x_i$ does not denote a single value but $i$ values, i.e. $x_1$, $x_2$, $x_3$ (if $i = 1,2,3$), where the orthonormal basis is represented by $\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$, (see Figure 1.11), and $a_1$, $a_2$, $a_3$ are the vector components. In indicial notation the vector components are represented by $a_i$. If the range of a subscript is not indicated, we assume that it takes the values $1,2,3$. Therefore, the vector components are represented as:

$$(\vec{a})_i = a_i = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \tag{1.44}$$
[Figure 1.11: Correspondence between the two notations: $x \equiv x_1$, $y \equiv x_2$, $z \equiv x_3$; $\hat{i} \equiv \hat{e}_1$, $\hat{j} \equiv \hat{e}_2$, $\hat{k} \equiv \hat{e}_3$; $a_x \equiv a_1$, $a_y \equiv a_2$, $a_z \equiv a_3$.]
Scalar product: Using the definitions (1.4) and (1.29), we can express the scalar product $\vec{a} \cdot \vec{b}$ as follows:

$$\gamma = \vec{a} \cdot \vec{b} = \|\vec{a}\|\|\vec{b}\|\cos\theta = a_1 b_1 + a_2 b_2 + a_3 b_3 = a_i b_i = a_j b_j \qquad (i, j = 1,2,3) \tag{1.47}$$
Some examples of expressions rewritten in indicial notation:

1) $a_1 x_1 x_3 + a_2 x_2 x_3 + a_3 x_3 x_3$
Solution: $a_i x_i x_3 \;\;(i = 1,2,3)$

2) $x_1 x_1 + x_2 x_2$
Solution: $x_i x_i \;\;(i = 1,2)$

3) The system

$$a_{11}x + a_{12}y + a_{13}z = b_x \\ a_{21}x + a_{22}y + a_{23}z = b_y \\ a_{31}x + a_{32}y + a_{33}z = b_z$$

Solution: rewriting $(x, y, z)$ as $(x_1, x_2, x_3)$ and $(b_x, b_y, b_z)$ as $(b_1, b_2, b_3)$:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2 \quad\Rightarrow\quad a_{1j}x_j = b_1,\; a_{2j}x_j = b_2,\; a_{3j}x_j = b_3 \quad\Rightarrow\quad a_{ij}x_j = b_i \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$$

where $j$ is the dummy (summation) index and $i$ is the free index.

As we can appreciate in this problem, the use of indicial notation makes the equations very concise. In many cases, if algebraic operations do not use indicial or tensorial notation, they become almost impossible to deal with due to the large number of terms involved.
The Kronecker delta $\delta_{ij}$ is defined as:

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{1.48}$$

Note also that the scalar product of the orthonormal basis vectors, $\hat{e}_i \cdot \hat{e}_j$, is equal to $1$ if $i = j$ and equal to $0$ if $i \neq j$. Hence, $\hat{e}_i \cdot \hat{e}_j$ can be expressed in matrix form as:

$$\hat{e}_i \cdot \hat{e}_j = \begin{bmatrix} \hat{e}_1 \cdot \hat{e}_1 & \hat{e}_1 \cdot \hat{e}_2 & \hat{e}_1 \cdot \hat{e}_3 \\ \hat{e}_2 \cdot \hat{e}_1 & \hat{e}_2 \cdot \hat{e}_2 & \hat{e}_2 \cdot \hat{e}_3 \\ \hat{e}_3 \cdot \hat{e}_1 & \hat{e}_3 \cdot \hat{e}_2 & \hat{e}_3 \cdot \hat{e}_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \delta_{ij} \tag{1.49}$$
An interesting property of the Kronecker delta is shown in the following example. Let $V_i$ be the components of the vector $\vec{V}$; therefore:

$$\delta_{ij}V_i = \delta_{1j}V_1 + \delta_{2j}V_2 + \delta_{3j}V_3 \tag{1.50}$$

As $j$ $(= 1,2,3)$ is a free index, we have three values to be calculated, namely:

$$j = 1 \Rightarrow \delta_{ij}V_i = \delta_{11}V_1 + \delta_{21}V_2 + \delta_{31}V_3 = V_1 \\ j = 2 \Rightarrow \delta_{ij}V_i = \delta_{12}V_1 + \delta_{22}V_2 + \delta_{32}V_3 = V_2 \quad\Rightarrow\quad \delta_{ij}V_i = V_j \tag{1.51} \\ j = 3 \Rightarrow \delta_{ij}V_i = \delta_{13}V_1 + \delta_{23}V_2 + \delta_{33}V_3 = V_3$$

That is, in the presence of the Kronecker delta we replace the repeated index as follows:

$$\delta_{ij}V_i = V_j \tag{1.52}$$

For this reason, the Kronecker delta is often called the substitution operator. Other examples using the Kronecker delta are presented below:

$$\delta_{ij}A_{ik} = A_{jk} \;,\quad \delta_{ij}\delta_{ji} = \delta_{ii} = \delta_{jj} = \delta_{11} + \delta_{22} + \delta_{33} = 3 \;,\quad \delta_{ji}a_{ji} = a_{ii} = a_{11} + a_{22} + a_{33} \tag{1.53}$$
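These substitution rules are easy to verify numerically; the following NumPy/einsum sketch (our own illustration) checks eqs. (1.52) and (1.53):

```python
import numpy as np

delta = np.eye(3)                     # Kronecker delta as the 3x3 identity
V = np.array([4.0, -1.0, 7.0])
A = np.arange(9.0).reshape(3, 3)

# Substitution operator, eq. (1.52): delta_ij V_i = V_j
assert np.allclose(np.einsum('ij,i->j', delta, V), V)

# delta_ij A_ik = A_jk, and delta_ii = 3, eq. (1.53)
assert np.allclose(np.einsum('ij,ik->jk', delta, A), A)
assert np.einsum('ii->', delta) == 3.0
```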
To obtain the components of the vector $\vec{a}$ in the coordinate system represented by $\hat{e}_i$, it is sufficient to take the scalar product of $\vec{a}$ with $\hat{e}_i$, i.e. $\vec{a} \cdot \hat{e}_i = a_p\hat{e}_p \cdot \hat{e}_i = a_p\delta_{pi} = a_i$. With that, it is also possible to represent the vector as:

$$\vec{a} = a_i\hat{e}_i = (\vec{a} \cdot \hat{e}_i)\,\hat{e}_i \tag{1.54}$$
The permutation symbol $\epsilon_{ijk}$ (also known as the Levi-Civita symbol or alternating symbol) is defined as:

$$\epsilon_{ijk} = \begin{cases} \;\;\,1 & \text{if } (i,j,k) \text{ is an even (cyclic) permutation of } (1,2,3) \\ -1 & \text{if } (i,j,k) \text{ is an odd (anticyclic) permutation of } (1,2,3) \\ \;\;\,0 & \text{if any index is repeated} \end{cases} \tag{1.55}$$

Note the cyclic property $\epsilon_{ijk} = \epsilon_{jki} = \epsilon_{kij}$. The product of two permutation symbols can be expressed in terms of Kronecker deltas:

$$\epsilon_{ijk}\epsilon_{pqr} = \begin{vmatrix} \delta_{1i} & \delta_{2i} & \delta_{3i} \\ \delta_{1j} & \delta_{2j} & \delta_{3j} \\ \delta_{1k} & \delta_{2k} & \delta_{3k} \end{vmatrix}\begin{vmatrix} \delta_{1p} & \delta_{1q} & \delta_{1r} \\ \delta_{2p} & \delta_{2q} & \delta_{2r} \\ \delta_{3p} & \delta_{3q} & \delta_{3r} \end{vmatrix} \tag{1.60}$$
Taking into account that $\det(\mathcal{A}\mathcal{B}) = \det(\mathcal{A})\det(\mathcal{B})$, where $\det(\bullet) \equiv |\bullet|$ is the determinant of the matrix $\bullet$, equation (1.60) can be rewritten as:

$$\epsilon_{ijk}\epsilon_{pqr} = \begin{vmatrix} \delta_{ip} & \delta_{iq} & \delta_{ir} \\ \delta_{jp} & \delta_{jq} & \delta_{jr} \\ \delta_{kp} & \delta_{kq} & \delta_{kr} \end{vmatrix} \tag{1.61}$$

Contracting the indices $r = k$, we obtain:

$$\epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp} \qquad (i, j, k, p, q = 1,2,3) \tag{1.62}$$
Problem 1.7: a) Prove that $\epsilon_{ijk}\epsilon_{pjk} = 2\delta_{ip}$ and $\epsilon_{ijk}\epsilon_{ijk} = 6$. b) Obtain the numerical value of $\epsilon_{ijk}\delta_{2j}\delta_{3k}\delta_{1i}$.

Solution: a) Using equation (1.62), i.e. $\epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}$, and substituting $q$ for $j$, we obtain:

$$\epsilon_{ijk}\epsilon_{pjk} = \delta_{ip}\delta_{jj} - \delta_{ij}\delta_{jp} = 3\delta_{ip} - \delta_{ip} = 2\delta_{ip}$$

Based on the above result, it is straightforward to check that:

$$\epsilon_{ijk}\epsilon_{ijk} = 2\delta_{ii} = 6$$

b) $\epsilon_{ijk}\delta_{2j}\delta_{3k}\delta_{1i} = \epsilon_{123} = 1$
The vector product of two vectors, $\vec{a} \wedge \vec{b}$, leads to a new vector $\vec{c}$, defined in (1.36), and the components of $\vec{c}$, in the Cartesian system, are given by:

$$\vec{c} = \vec{a} \wedge \vec{b} = \begin{vmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} = \underbrace{(a_2 b_3 - a_3 b_2)}_{c_1}\hat{e}_1 + \underbrace{(a_3 b_1 - a_1 b_3)}_{c_2}\hat{e}_2 + \underbrace{(a_1 b_2 - a_2 b_1)}_{c_3}\hat{e}_3 \tag{1.63}$$

Using the definition of the permutation symbol $\epsilon_{ijk}$ given in (1.55), we can express the components of $\vec{c}$ as follows:

$$c_1 = \epsilon_{123}a_2 b_3 + \epsilon_{132}a_3 b_2 = \epsilon_{1jk}a_j b_k \\ c_2 = \epsilon_{231}a_3 b_1 + \epsilon_{213}a_1 b_3 = \epsilon_{2jk}a_j b_k \quad\Rightarrow\quad c_i = \epsilon_{ijk}a_j b_k \tag{1.64} \\ c_3 = \epsilon_{312}a_1 b_2 + \epsilon_{321}a_2 b_1 = \epsilon_{3jk}a_j b_k$$

Then, the vector product $\vec{a} \wedge \vec{b}$ can be represented by means of the permutation symbol as:

$$\vec{a} \wedge \vec{b} = a_j\hat{e}_j \wedge b_k\hat{e}_k = a_j b_k(\hat{e}_j \wedge \hat{e}_k) = a_j b_k\epsilon_{jki}\,\hat{e}_i = \epsilon_{ijk}a_j b_k\,\hat{e}_i \tag{1.65}$$
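The permutation symbol is easy to build and use numerically. A minimal NumPy sketch (our own illustration) of eq. (1.64) and the identity of Problem 1.7:

```python
import numpy as np
from itertools import permutations

# Build the permutation (Levi-Civita) symbol eps_ijk, eq. (1.55)
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    # sign of the permutation: +1 for even, -1 for odd
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

a = np.array([1.0, -4.0, 0.0])
b = np.array([-1.0, -2.0, 2.0])

# c_i = eps_ijk a_j b_k, eq. (1.64), must match np.cross
c = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(c, np.cross(a, b))

# eps_ijk eps_pjk = 2 delta_ip (Problem 1.7)
assert np.allclose(np.einsum('ijk,pjk->ip', eps, eps), 2 * np.eye(3))
```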
The permutation symbol and the orthonormal basis can be interrelated using the scalar triple product as follows:

$$\hat{e}_i \wedge \hat{e}_j = \epsilon_{ijm}\hat{e}_m \quad\Rightarrow\quad (\hat{e}_i \wedge \hat{e}_j) \cdot \hat{e}_k = \epsilon_{ijm}\hat{e}_m \cdot \hat{e}_k = \epsilon_{ijm}\delta_{mk} = \epsilon_{ijk} \tag{1.67}$$
The scalar triple product made up of the vectors $\vec{a}$, $\vec{b}$, $\vec{c}$ is expressed by:

$$\lambda = \vec{a} \cdot (\vec{b} \wedge \vec{c}) = a_i\hat{e}_i \cdot (b_j\hat{e}_j \wedge c_k\hat{e}_k) = a_i b_j c_k\,\hat{e}_i \cdot (\hat{e}_j \wedge \hat{e}_k) = \epsilon_{ijk}a_i b_j c_k \tag{1.68}$$

$$\lambda = \vec{a} \cdot (\vec{b} \wedge \vec{c}) = \epsilon_{ijk}a_i b_j c_k \qquad (i, j, k = 1,2,3) \tag{1.69}$$

or

$$\lambda = \vec{a} \cdot (\vec{b} \wedge \vec{c}) = \vec{b} \cdot (\vec{c} \wedge \vec{a}) = \vec{c} \cdot (\vec{a} \wedge \vec{b}) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} \tag{1.70}$$
Starting from equation (1.69) we can prove that $\vec{a} \cdot (\vec{b} \wedge \vec{c}) = \vec{b} \cdot (\vec{c} \wedge \vec{a}) = \vec{c} \cdot (\vec{a} \wedge \vec{b})$:

$$[\vec{a}, \vec{b}, \vec{c}] \equiv \vec{a} \cdot (\vec{b} \wedge \vec{c}) = \epsilon_{ijk}a_i b_j c_k = \epsilon_{jki}a_i b_j c_k = \vec{b} \cdot (\vec{c} \wedge \vec{a}) \equiv [\vec{b}, \vec{c}, \vec{a}] \\ = \epsilon_{kij}a_i b_j c_k = \vec{c} \cdot (\vec{a} \wedge \vec{b}) \equiv [\vec{c}, \vec{a}, \vec{b}] \\ = -\epsilon_{ikj}a_i b_j c_k = -\vec{a} \cdot (\vec{c} \wedge \vec{b}) \equiv -[\vec{a}, \vec{c}, \vec{b}] \tag{1.71}$$
Problem 1.8: Rewrite the expression $(\vec{a} \wedge \vec{b}) \cdot (\vec{c} \wedge \vec{d})$ without using the vector product symbol.

Solution: The vector product $\vec{a} \wedge \vec{b}$ can be expressed as $\vec{a} \wedge \vec{b} = a_j\hat{e}_j \wedge b_k\hat{e}_k = \epsilon_{ijk}a_j b_k\,\hat{e}_i$. Likewise, it is possible to express $\vec{c} \wedge \vec{d}$ as $\vec{c} \wedge \vec{d} = \epsilon_{nlm}c_l d_m\,\hat{e}_n$, thus:

$$(\vec{a} \wedge \vec{b}) \cdot (\vec{c} \wedge \vec{d}) = (\epsilon_{ijk}a_j b_k\,\hat{e}_i) \cdot (\epsilon_{nlm}c_l d_m\,\hat{e}_n) = \epsilon_{ijk}\epsilon_{nlm}a_j b_k c_l d_m\,\underbrace{\hat{e}_i \cdot \hat{e}_n}_{\delta_{in}} = \epsilon_{ijk}\epsilon_{ilm}a_j b_k c_l d_m$$

Taking into account that $\epsilon_{ijk}\epsilon_{ilm} = \epsilon_{jki}\epsilon_{lmi}$, (see equation (1.57)), and by applying equation (1.62), i.e. $\epsilon_{jki}\epsilon_{lmi} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}$, we obtain:

$$\epsilon_{ijk}\epsilon_{ilm}a_j b_k c_l d_m = (\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl})a_j b_k c_l d_m = a_l b_m c_l d_m - a_m b_l c_l d_m$$

Since $a_l c_l = (\vec{a} \cdot \vec{c})$ and $b_m d_m = (\vec{b} \cdot \vec{d})$ hold, we can conclude that:

$$(\vec{a} \wedge \vec{b}) \cdot (\vec{c} \wedge \vec{d}) = (\vec{a} \cdot \vec{c})(\vec{b} \cdot \vec{d}) - (\vec{a} \cdot \vec{d})(\vec{b} \cdot \vec{c})$$

This is also valid when $\vec{c} = \vec{a}$ and $\vec{d} = \vec{b}$, thus:

$$(\vec{a} \wedge \vec{b}) \cdot (\vec{a} \wedge \vec{b}) = \|\vec{a} \wedge \vec{b}\|^2 = (\vec{a} \cdot \vec{a})(\vec{b} \cdot \vec{b}) - (\vec{a} \cdot \vec{b})(\vec{b} \cdot \vec{a}) = \|\vec{a}\|^2\|\vec{b}\|^2 - (\vec{a} \cdot \vec{b})^2$$
Problem 1.9: Prove that $(\vec{a} \wedge \vec{b}) \wedge (\vec{c} \wedge \vec{d}) = \vec{c}\,[\vec{d} \cdot (\vec{a} \wedge \vec{b})] - \vec{d}\,[\vec{c} \cdot (\vec{a} \wedge \vec{b})]$.

Solution: Expressing the right-hand side in indicial notation we obtain:

$$\bigl\{\vec{c}\,[\vec{d} \cdot (\vec{a} \wedge \vec{b})] - \vec{d}\,[\vec{c} \cdot (\vec{a} \wedge \vec{b})]\bigr\}_p = c_p(d_i\,\epsilon_{ijk}a_j b_k) - d_p(c_i\,\epsilon_{ijk}a_j b_k) = \epsilon_{ijk}a_j b_k\bigl(c_p d_i - c_i d_p\bigr)$$

Using the Kronecker delta, the above equation becomes:

$$\epsilon_{ijk}a_j b_k\bigl(\delta_{pm}c_m d_n\delta_{ni} - \delta_{im}c_m d_n\delta_{np}\bigr) = (\epsilon_{ijk}a_j b_k)\,c_m d_n\bigl(\delta_{pm}\delta_{ni} - \delta_{im}\delta_{np}\bigr)$$

and by applying the equation $\delta_{pm}\delta_{ni} - \delta_{im}\delta_{np} = \epsilon_{pil}\epsilon_{mnl}$, (see eq. (1.62)), the above equation can be rewritten as follows:

$$(\epsilon_{ijk}a_j b_k)\,c_m d_n(\epsilon_{pil}\epsilon_{mnl}) = \epsilon_{pil}\bigl[(\epsilon_{ijk}a_j b_k)(\epsilon_{mnl}c_m d_n)\bigr]$$

Since $\epsilon_{ijk}a_j b_k$ and $\epsilon_{mnl}c_m d_n$ represent the components of $(\vec{a} \wedge \vec{b})$ and $(\vec{c} \wedge \vec{d})$, respectively, we can conclude that:

$$\epsilon_{pil}\bigl[(\epsilon_{ijk}a_j b_k)(\epsilon_{mnl}c_m d_n)\bigr] = \bigl[(\vec{a} \wedge \vec{b}) \wedge (\vec{c} \wedge \vec{d})\bigr]_p$$
Problem 1.10: Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be linearly independent vectors, and let $\vec{v}$ be a vector written as:

$$\vec{v} = \alpha\vec{a} + \beta\vec{b} + \gamma\vec{c}$$

Show that the scalars $\alpha$, $\beta$, $\gamma$ are given by:

$$\alpha = \frac{\epsilon_{ijk}v_i b_j c_k}{\epsilon_{pqr}a_p b_q c_r} \;;\quad \beta = \frac{\epsilon_{ijk}a_i v_j c_k}{\epsilon_{pqr}a_p b_q c_r} \;;\quad \gamma = \frac{\epsilon_{ijk}a_i b_j v_k}{\epsilon_{pqr}a_p b_q c_r}$$

Solution: The scalar product of $\vec{v}$ with $(\vec{b} \wedge \vec{c})$ gives:

$$\vec{v} \cdot (\vec{b} \wedge \vec{c}) = \alpha\,\vec{a} \cdot (\vec{b} \wedge \vec{c}) + \beta\underbrace{\vec{b} \cdot (\vec{b} \wedge \vec{c})}_{=0} + \gamma\underbrace{\vec{c} \cdot (\vec{b} \wedge \vec{c})}_{=0} \quad\Rightarrow\quad \alpha = \frac{\vec{v} \cdot (\vec{b} \wedge \vec{c})}{\vec{a} \cdot (\vec{b} \wedge \vec{c})}$$

which is the same as:

$$\alpha = \frac{\begin{vmatrix} v_1 & v_2 & v_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}}{\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}} = \frac{\epsilon_{ijk}v_i b_j c_k}{\epsilon_{pqr}a_p b_q c_r}$$

One can obtain the parameters $\beta$ and $\gamma$ in a similar fashion.
In indicial notation, the components of the vector triple product $\vec{a} \wedge (\vec{b} \wedge \vec{c})$ are obtained as:

$$\bigl[\vec{a} \wedge (\vec{b} \wedge \vec{c})\bigr]_q = \epsilon_{qsi}a_s(\epsilon_{ijk}b_j c_k) = \epsilon_{qsi}\epsilon_{ijk}a_s b_j c_k = \epsilon_{qsi}\epsilon_{jki}a_s b_j c_k \\ = (\delta_{qj}\delta_{sk} - \delta_{qk}\delta_{sj})a_s b_j c_k = a_k b_q c_k - a_j b_j c_q = b_q(\vec{a} \cdot \vec{c}) - c_q(\vec{a} \cdot \vec{b}) \\ \Rightarrow\quad \bigl[\vec{a} \wedge (\vec{b} \wedge \vec{c})\bigr]_q = \bigl[(\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c}\bigr]_q$$
1.5.1 Dyadic

The tensor product of two vectors $\vec{u}$ and $\vec{v}$ produces a dyad, which is a particular case of a second-order tensor. The dyad is represented by:

$$\vec{u}\vec{v} \equiv \vec{u} \otimes \vec{v} = \mathbf{A} \tag{1.72}$$

where the operator $\otimes$ denotes the tensor product. We then define a dyadic as a linear combination of dyads. Furthermore, as we will see later, any tensor can be represented by means of a linear combination of dyads, (see Holzapfel (2000)).

The tensor product has the following properties:

$$1.\;\; (\vec{u} \otimes \vec{v}) \cdot \vec{x} = \vec{u}(\vec{v} \cdot \vec{x}) \equiv \vec{u} \otimes (\vec{v} \cdot \vec{x}) \tag{1.73}$$

$$2.\;\; \vec{u} \otimes (\alpha\vec{v} + \beta\vec{w}) = \alpha\,\vec{u} \otimes \vec{v} + \beta\,\vec{u} \otimes \vec{w} \tag{1.74}$$

$$3.\;\; (\alpha\,\vec{v} \otimes \vec{u} + \beta\,\vec{w} \otimes \vec{r}) \cdot \vec{x} = \alpha(\vec{v} \otimes \vec{u}) \cdot \vec{x} + \beta(\vec{w} \otimes \vec{r}) \cdot \vec{x} = \alpha\bigl[\vec{v}(\vec{u} \cdot \vec{x})\bigr] + \beta\bigl[\vec{w}(\vec{r} \cdot \vec{x})\bigr] \tag{1.75}$$

where $\alpha$ and $\beta$ are scalars. By definition, the dyad is not commutative, i.e. $\vec{u} \otimes \vec{v} \neq \vec{v} \otimes \vec{u}$.

Equation (1.72) can also be expressed in the Cartesian system as:

$$\mathbf{A} = \vec{u} \otimes \vec{v} = (u_i\hat{e}_i) \otimes (v_j\hat{e}_j) = u_i v_j(\hat{e}_i \otimes \hat{e}_j) = A_{ij}(\hat{e}_i \otimes \hat{e}_j) \qquad (i, j = 1,2,3) \tag{1.76}$$

$$\mathbf{A} = \underbrace{A_{ij}}_{\text{components}}\underbrace{\hat{e}_i \otimes \hat{e}_j}_{\text{basis}} \qquad (i, j = 1,2,3) \tag{1.77}$$

The components of $\mathbf{A}$ can be arranged in matrix form:

$$(\mathbf{A})_{ij} = A_{ij} = \mathcal{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \tag{1.79}$$
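A small NumPy illustration of a dyad and of property (1.73) (our own sketch, not part of the original text):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, -1.0, 4.0])
x = np.array([2.0, 5.0, -1.0])

A = np.outer(u, v)       # dyad u (x) v: components A_ij = u_i v_j, eq. (1.76)

# Property (1.73): (u (x) v) . x = u (v . x)
assert np.allclose(A @ x, u * np.dot(v, x))

# The dyad is not commutative: u (x) v != v (x) u in general
assert not np.allclose(A, np.outer(v, u))
```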
It is easy to identify the tensor order by the number of free indices in the tensor components, i.e.:

$$\text{Second-order tensor: } \mathbf{U} = U_{ij}\,\hat{e}_i \otimes \hat{e}_j \\ \text{Third-order tensor: } \mathbf{T} = T_{ijk}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \qquad (i, j, k, l = 1,2,3) \tag{1.80} \\ \text{Fourth-order tensor: } \mathbf{I} = I_{ijkl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l$$

OBS.: The tensor order is given by the number of free indices in its components.

OBS.: The number of tensor components is given by $a^n$, where the base $a$ is the maximum value in the index range, and the exponent $n$ is the number of free indices.

Problem 1.12: Determine the order of the tensors represented by their Cartesian components: $v_i$, $\Phi_{ijk}$, $F_{ijj}$, $\varepsilon_{ij}$, $C_{ijkl}$, $\sigma_{ij}$. Determine the number of components in tensor $\mathbf{C}$.

Solution: The order of the tensor is given by the number of free indices, so it follows that: first-order tensors (vectors): $\vec{v}$, $\vec{F}$ (note that $F_{ijj}$ has only one free index, $i$); second-order tensors: $\boldsymbol{\varepsilon}$, $\boldsymbol{\sigma}$; third-order tensor: $\boldsymbol{\Phi}$; fourth-order tensor: $\mathbf{C}$.

The number of tensor components is given by the maximum index range value, i.e. $i, j, k, l = 1,2,3$, raised to the number of free indices, which is $4$ in the case of $C_{ijkl}$. Thus, the number of components in $\mathbf{C}$ is:

$$3^4 = (i = 3) \times (j = 3) \times (k = 3) \times (l = 3) = 81$$

The fourth-order tensor $C_{ijkl}$ has 81 components.
Let $\mathbf{A}$ and $\mathbf{B}$ be second-order tensors. We can then define some algebraic operations, including:

Addition: The sum of two tensors of the same order is a new tensor defined as follows:

$$\mathbf{C} = \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \tag{1.81}$$

The components of $\mathbf{C}$ are represented by:

$$(\mathbf{C})_{ij} = (\mathbf{A} + \mathbf{B})_{ij} \qquad\text{or}\qquad C_{ij} = A_{ij} + B_{ij} \tag{1.82}$$

or, in matrix notation:

$$\mathcal{C} = \mathcal{A} + \mathcal{B} \tag{1.83}$$

Multiplication of a tensor by a scalar: The multiplication of a second-order tensor $\mathbf{A}$ by a scalar $\lambda$ defines a new tensor $\mathbf{D}$, such that:

$$\mathbf{D} = \lambda\mathbf{A} \quad\xrightarrow{\text{in components}}\quad (\mathbf{D})_{ij} = \lambda(\mathbf{A})_{ij} \tag{1.84}$$

or, in matrix form:

$$\mathcal{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \quad\rightarrow\quad \lambda\mathcal{A} = \begin{bmatrix} \lambda A_{11} & \lambda A_{12} & \lambda A_{13} \\ \lambda A_{21} & \lambda A_{22} & \lambda A_{23} \\ \lambda A_{31} & \lambda A_{32} & \lambda A_{33} \end{bmatrix} \tag{1.85}$$

It is also true that:

$$(\lambda\mathbf{A}) \cdot \vec{v} = \lambda(\mathbf{A} \cdot \vec{v}) \tag{1.86}$$

for any vector $\vec{v}$.
Scalar Product (or Dot Product): The scalar product (also known as single contraction) between a second-order tensor $\mathbf{A}$ and a vector $\vec{x}$ is another vector (first-order tensor) $\vec{y}$, defined as:

$$\vec{y} = \mathbf{A} \cdot \vec{x} = (A_{jk}\,\hat{e}_j \otimes \hat{e}_k) \cdot (x_l\hat{e}_l) = A_{jk}x_l\underbrace{\delta_{kl}}_{\hat{e}_k \cdot \hat{e}_l}\hat{e}_j = \underbrace{A_{jk}x_k}_{y_j}\hat{e}_j = y_j\hat{e}_j \tag{1.87}$$

The scalar product between two second-order tensors $\mathbf{A}$ and $\mathbf{B}$ is another second-order tensor, which verifies $\mathbf{A} \cdot \mathbf{B} \neq \mathbf{B} \cdot \mathbf{A}$:

$$\mathbf{C} = \mathbf{A} \cdot \mathbf{B} = (A_{ij}\,\hat{e}_i \otimes \hat{e}_j) \cdot (B_{kl}\,\hat{e}_k \otimes \hat{e}_l) = A_{ij}B_{kl}\delta_{jk}\,\hat{e}_i \otimes \hat{e}_l = \underbrace{A_{ik}B_{kl}}_{C_{il}}\,\hat{e}_i \otimes \hat{e}_l \\ \mathbf{D} = \mathbf{B} \cdot \mathbf{A} = (B_{ij}\,\hat{e}_i \otimes \hat{e}_j) \cdot (A_{kl}\,\hat{e}_k \otimes \hat{e}_l) = B_{ij}A_{kl}\delta_{jk}\,\hat{e}_i \otimes \hat{e}_l = \underbrace{B_{ik}A_{kl}}_{D_{il}}\,\hat{e}_i \otimes \hat{e}_l \tag{1.88}$$

The double contraction $\cdot\cdot$ between two second-order tensors gives a scalar:

$$\mathbf{A} \cdot\cdot\, \mathbf{B} = (A_{ij}\,\hat{e}_i \otimes \hat{e}_j) \cdot\cdot\, (B_{kl}\,\hat{e}_k \otimes \hat{e}_l) = A_{ij}B_{kl}\delta_{jk}\delta_{il} = A_{ij}B_{ji} = \gamma \quad (\text{scalar}) \tag{1.92}$$

while the double contraction $:$ is defined as:

$$\mathbf{A} : \mathbf{B} = (A_{ij}\,\hat{e}_i \otimes \hat{e}_j) : (B_{kl}\,\hat{e}_k \otimes \hat{e}_l) = A_{ij}B_{kl}\delta_{ik}\delta_{jl} = A_{ij}B_{ij} = \lambda \quad (\text{scalar}) \tag{1.95}$$
In general, $\mathbf{A} : \mathbf{B} \neq \mathbf{A} \cdot\cdot\, \mathbf{B}$; however, they are equal if at least one of the tensors is symmetric, i.e. $\mathbf{A}^{sym} : \mathbf{B} = \mathbf{A}^{sym} \cdot\cdot\, \mathbf{B}$ and $\mathbf{A} : \mathbf{B}^{sym} = \mathbf{A} \cdot\cdot\, \mathbf{B}^{sym}$, so $\mathbf{A}^{sym} : \mathbf{B}^{sym} = \mathbf{A}^{sym} \cdot\cdot\, \mathbf{B}^{sym}$.

The double contraction of a third-order tensor $\mathbf{S}$ with a second-order tensor $\mathbf{B}$ becomes:

$$\mathbf{S} : \mathbf{B} = (\vec{c} \otimes \vec{d} \otimes \vec{a}) : (\vec{u} \otimes \vec{v}) = (\vec{a} \cdot \vec{v})(\vec{d} \cdot \vec{u})\,\vec{c} \\ \mathbf{B} : \mathbf{S} = (\vec{u} \otimes \vec{v}) : (\vec{c} \otimes \vec{d} \otimes \vec{a}) = (\vec{u} \cdot \vec{c})(\vec{v} \cdot \vec{d})\,\vec{a} \tag{1.96}$$

As we can verify, the result is a vector.
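The two double contractions are one-liners with einsum; a small NumPy sketch (ours) of eqs. (1.92) and (1.95):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

double_dot = np.einsum('ij,ji->', A, B)   # A..B = A_ij B_ji, eq. (1.92)
colon      = np.einsum('ij,ij->', A, B)   # A:B  = A_ij B_ij, eq. (1.95)

# They coincide when one argument is symmetric
As = 0.5 * (A + A.T)
assert np.isclose(np.einsum('ij,ji->', As, B), np.einsum('ij,ij->', As, B))
```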
Using indicial notation, the vector triple product $\vec{a} \wedge (\vec{b} \wedge \vec{a})$ can be rewritten as a tensor acting on $\vec{b}$:

$$\bigl[\vec{a} \wedge (\vec{b} \wedge \vec{a})\bigr]_j = (a_k a_k)b_j - (a_k b_k)a_j = (a_k a_k)b_p\delta_{jp} - (a_k b_p\delta_{kp})a_j \\ = \bigl[(a_k a_k)\delta_{jp} - a_p a_j\bigr]b_p = \bigl\{\bigl[(\vec{a} \cdot \vec{a})\mathbf{1} - \vec{a} \otimes \vec{a}\bigr] \cdot \vec{b}\bigr\}_j \tag{1.104}$$

Thus, the following relationships are valid:

$$\vec{a} \wedge (\vec{b} \wedge \vec{c}) = (\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c} = (\vec{b} \otimes \vec{c} - \vec{c} \otimes \vec{b}) \cdot \vec{a} \\ \vec{a} \wedge (\vec{b} \wedge \vec{a}) = \bigl[(\vec{a} \cdot \vec{a})\mathbf{1} - \vec{a} \otimes \vec{a}\bigr] \cdot \vec{b} \tag{1.105}$$
A second-order tensor $\mathbf{T}$ can be expanded over the Cartesian basis as:

$$\mathbf{T} = T_{ij}\,\hat{e}_i \otimes \hat{e}_j = T_{i1}\,\hat{e}_i \otimes \hat{e}_1 + T_{i2}\,\hat{e}_i \otimes \hat{e}_2 + T_{i3}\,\hat{e}_i \otimes \hat{e}_3 \\ = T_{11}\,\hat{e}_1 \otimes \hat{e}_1 + T_{12}\,\hat{e}_1 \otimes \hat{e}_2 + T_{13}\,\hat{e}_1 \otimes \hat{e}_3 \\ + T_{21}\,\hat{e}_2 \otimes \hat{e}_1 + T_{22}\,\hat{e}_2 \otimes \hat{e}_2 + T_{23}\,\hat{e}_2 \otimes \hat{e}_3 \\ + T_{31}\,\hat{e}_3 \otimes \hat{e}_1 + T_{32}\,\hat{e}_3 \otimes \hat{e}_2 + T_{33}\,\hat{e}_3 \otimes \hat{e}_3 \tag{1.106}$$
A graphical representation of the three vectors $\vec{t}^{(\hat{e}_1)}$, $\vec{t}^{(\hat{e}_2)}$, $\vec{t}^{(\hat{e}_3)}$, in the Cartesian basis, is shown in Figure 1.13. Note also that $\vec{t}^{(\hat{e}_1)}$ is the projection of $\mathbf{T}$ onto $\hat{e}_1$, $\hat{n}^{(1)} = [1, 0, 0]$, which can be verified by:

$$(\mathbf{T} \cdot \hat{n})_i = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} T_{11} \\ T_{21} \\ T_{31} \end{bmatrix} = t_i^{(\hat{e}_1)} \tag{1.109}$$
Figure 1.13: The projection of T in the Cartesian basis.
The same result obtained in (1.109) could have been evaluated via the scalar product of $\mathbf{T}$, given in (1.106), with the basis vector $\hat{e}_1$, i.e.:

$$\mathbf{T} \cdot \hat{e}_1 = \bigl[T_{11}\,\hat{e}_1 \otimes \hat{e}_1 + T_{12}\,\hat{e}_1 \otimes \hat{e}_2 + T_{13}\,\hat{e}_1 \otimes \hat{e}_3 + T_{21}\,\hat{e}_2 \otimes \hat{e}_1 + T_{22}\,\hat{e}_2 \otimes \hat{e}_2 + T_{23}\,\hat{e}_2 \otimes \hat{e}_3 \\ + T_{31}\,\hat{e}_3 \otimes \hat{e}_1 + T_{32}\,\hat{e}_3 \otimes \hat{e}_2 + T_{33}\,\hat{e}_3 \otimes \hat{e}_3\bigr] \cdot \hat{e}_1 = T_{11}\hat{e}_1 + T_{21}\hat{e}_2 + T_{31}\hat{e}_3 = \vec{t}^{(\hat{e}_1)} \tag{1.110}$$
The components normal to each coordinate plane correspond to the diagonal terms of $T_{ij}$ and are called normal components. The components displayed tangentially to the plane are called tangential components, and correspond to the off-diagonal terms of $T_{ij}$.
Figure 1.14: Representation of the second-order tensor components in the Cartesian
coordinate system.
$$\mathbf{A} \cdot \mathbf{B} = (A_{ij}\,\hat{e}_i \otimes \hat{e}_j) \cdot (B_{kl}\,\hat{e}_k \otimes \hat{e}_l) = A_{ij}B_{kl}\delta_{jk}(\hat{e}_i \otimes \hat{e}_l) = \underbrace{A_{ij}B_{jl}}_{\text{indicial notation}}\underbrace{(\hat{e}_i \otimes \hat{e}_l)}_{\text{Cartesian basis}} \tag{1.111}$$

Note that no index is repeated more than twice, either in symbolic notation or in indicial notation. Also note that indicial notation is equivalent to tensor notation only when dealing with scalars, e.g. $\mathbf{A} : \mathbf{B} = A_{ij}B_{ij} = \lambda$, or $\vec{a} \cdot \vec{b} = a_i b_i$. ■
The transpose of a second-order tensor $\mathbf{A}$, denoted by $\mathbf{A}^T$, is defined by its components:

$$(\mathbf{A}^T)_{ij} = A_{ji} \tag{1.113}$$

If $\mathbf{A} = \vec{u} \otimes \vec{v}$, the transpose of the dyad $\mathbf{A}$ is given by $\mathbf{A}^T = \vec{v} \otimes \vec{u}$:

$$\mathbf{A}^T = (\vec{u} \otimes \vec{v})^T = \vec{v} \otimes \vec{u} = (u_i\hat{e}_i \otimes v_j\hat{e}_j)^T = v_j\hat{e}_j \otimes u_i\hat{e}_i = v_i\hat{e}_i \otimes u_j\hat{e}_j \\ = u_i v_j(\hat{e}_i \otimes \hat{e}_j)^T = u_i v_j\,\hat{e}_j \otimes \hat{e}_i = u_j v_i\,\hat{e}_i \otimes \hat{e}_j \tag{1.114}$$

and, likewise, $\mathbf{A}^T = (A_{ij}\,\hat{e}_i \otimes \hat{e}_j)^T = A_{ij}\,\hat{e}_j \otimes \hat{e}_i = A_{ji}\,\hat{e}_i \otimes \hat{e}_j$.

The transpose of the matrix $\mathcal{A}$ is formed by exchanging rows and columns, i.e.:

$$\mathcal{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \quad\xrightarrow{\text{transpose}}\quad \mathcal{A}^T = \begin{bmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{bmatrix} \tag{1.117}$$
Problem 1.13: Let $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ be arbitrary second-order tensors. Demonstrate that:

$$\mathbf{A} : (\mathbf{B} \cdot \mathbf{C}) = (\mathbf{B}^T \cdot \mathbf{A}) : \mathbf{C} = (\mathbf{A} \cdot \mathbf{C}^T) : \mathbf{B}$$

Solution: Expressing the term $\mathbf{A} : (\mathbf{B} \cdot \mathbf{C})$ in indicial notation we obtain:

$$\mathbf{A} : (\mathbf{B} \cdot \mathbf{C}) = A_{ij}\,\hat{e}_i \otimes \hat{e}_j : (B_{lk}\,\hat{e}_l \otimes \hat{e}_k \cdot C_{pq}\,\hat{e}_p \otimes \hat{e}_q) = A_{ij}B_{lk}C_{pq}\,\hat{e}_i \otimes \hat{e}_j : (\delta_{kp}\,\hat{e}_l \otimes \hat{e}_q) \\ = A_{ij}B_{lk}C_{pq}\delta_{kp}\delta_{il}\delta_{jq} = A_{ij}B_{ik}C_{kj}$$

Note that, when dealing with indicial notation, the position of the factors does not matter, i.e.:

$$A_{ij}B_{ik}C_{kj} = B_{ik}A_{ij}C_{kj} = A_{ij}C_{kj}B_{ik}$$

We can now observe that the product $B_{ik}A_{ij}$ represents the components of the second-order tensor $(\mathbf{B}^T \cdot \mathbf{A})_{kj}$, thus $B_{ik}A_{ij}C_{kj} = (\mathbf{B}^T \cdot \mathbf{A})_{kj}C_{kj} = (\mathbf{B}^T \cdot \mathbf{A}) : \mathbf{C}$. Likewise, $A_{ij}C_{kj}B_{ik} = (\mathbf{A} \cdot \mathbf{C}^T)_{ik}B_{ik} = (\mathbf{A} \cdot \mathbf{C}^T) : \mathbf{B}$.
Problem 1.14: Let $\vec{u}$, $\vec{v}$ be vectors and $\mathbf{A}$ be a second-order tensor. Show that the following relationship holds:

$$\vec{u} \cdot \mathbf{A}^T \cdot \vec{v} = \vec{v} \cdot \mathbf{A} \cdot \vec{u}$$

Solution:

$$\vec{u} \cdot \mathbf{A}^T \cdot \vec{v} = u_i\hat{e}_i \cdot (A_{jl}\,\hat{e}_l \otimes \hat{e}_j) \cdot v_k\hat{e}_k = u_i A_{jl}\delta_{il}v_k\delta_{jk} = u_l A_{jl}v_j \\ \vec{v} \cdot \mathbf{A} \cdot \vec{u} = v_k\hat{e}_k \cdot (A_{jl}\,\hat{e}_j \otimes \hat{e}_l) \cdot u_i\hat{e}_i = v_k\delta_{kj}A_{jl}u_i\delta_{il} = v_j A_{jl}u_l$$

and indeed $u_l A_{jl}v_j = v_j A_{jl}u_l$.
A second-order tensor $\mathbf{A}$ is symmetric, i.e. $\mathbf{A} \equiv \mathbf{A}^{sym}$, if the tensor is equal to its transpose:

$$\mathbf{A} = \mathbf{A}^T \quad\xrightarrow{\text{in components}}\quad A_{ij} = A_{ji} \quad\Leftrightarrow\quad \mathbf{A} \text{ is symmetric} \tag{1.118}$$

In matrix form:

$$\mathbf{A} = \mathbf{A}^T \quad\rightarrow\quad \mathcal{A} \equiv \mathcal{A}^{sym} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{12} & A_{22} & A_{23} \\ A_{13} & A_{23} & A_{33} \end{bmatrix} \tag{1.119}$$

From the above it is clear that a symmetric second-order tensor has 6 independent components, namely: $A_{11}$, $A_{22}$, $A_{33}$, $A_{12}$, $A_{23}$, $A_{13}$.

According to equation (1.118), a symmetric tensor satisfies:

$$A_{ij} = A_{ji} \;\Rightarrow\; A_{ij} + A_{ij} = A_{ij} + A_{ji} \;\Rightarrow\; 2A_{ij} = A_{ij} + A_{ji} \;\Rightarrow\; A_{ij} = \tfrac{1}{2}(A_{ij} + A_{ji}) \;\Rightarrow\; \mathbf{A} = \tfrac{1}{2}(\mathbf{A} + \mathbf{A}^T) \tag{1.120}$$
A fourth-order tensor $\mathbf{C}$, whose components are $C_{ijkl}$, may have the following types of symmetry:

Minor symmetry:

$$C_{ijkl} = C_{jikl} = C_{ijlk} = C_{jilk} \tag{1.121}$$

Major symmetry:

$$C_{ijkl} = C_{klij} \tag{1.122}$$

A fourth-order tensor that does not exhibit any kind of symmetry has 81 independent components. If the tensor $\mathbf{C}$ has minor symmetry only, i.e. symmetry in $ij = ji$ (6 independent pairs) and in $kl = lk$ (6 independent pairs), the tensor features 36 independent components. If, besides minor symmetry, it also exhibits major symmetry, the tensor features 21 independent components.
A tensor $\mathbf{A}$ is antisymmetric (also called a skew-symmetric tensor or skew tensor), i.e. $\mathbf{A} \equiv \mathbf{A}^{skew}$, if:

$$\mathbf{A} = -\mathbf{A}^T \quad\xrightarrow{\text{in components}}\quad A_{ij} = -A_{ji} \quad\Leftrightarrow\quad \mathbf{A} \text{ is antisymmetric} \tag{1.123}$$

which, broken down into its components, is the same as:

$$\mathbf{A} = -\mathbf{A}^T \quad\rightarrow\quad \mathcal{A}^{skew} = \begin{bmatrix} 0 & A_{12} & A_{13} \\ -A_{12} & 0 & A_{23} \\ -A_{13} & -A_{23} & 0 \end{bmatrix} \tag{1.124}$$

Therefore, an antisymmetric second-order tensor has 3 independent components, namely: $A_{12}$, $A_{23}$, $A_{13}$.

Under the conditions expressed in (1.123), an antisymmetric tensor satisfies:

$$A_{ij} + A_{ij} = A_{ij} - A_{ji} \;\Rightarrow\; 2A_{ij} = A_{ij} - A_{ji} \;\Rightarrow\; A_{ij} = \tfrac{1}{2}(A_{ij} - A_{ji}) \;\Rightarrow\; \mathbf{A} = \tfrac{1}{2}(\mathbf{A} - \mathbf{A}^T) \tag{1.125}$$
Let us consider an antisymmetric second-order tensor denoted by $\mathbf{W}$, which satisfies the above relationship (1.125):

$$W_{ij} = \tfrac{1}{2}(W_{ij} - W_{ji}) = \tfrac{1}{2}(W_{kl}\delta_{ik}\delta_{jl} - W_{kl}\delta_{jk}\delta_{il}) = \tfrac{1}{2}W_{kl}(\delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il}) \tag{1.126}$$

Using the relation between the Kronecker delta and the permutation symbol given by (1.62), i.e. $\delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il} = -\epsilon_{ijr}\epsilon_{lkr}$, equation (1.126) is rewritten as:

$$W_{ij} = -\tfrac{1}{2}W_{kl}\,\epsilon_{ijr}\epsilon_{lkr} \tag{1.127}$$

Expanding the term $W_{kl}\epsilon_{lkr}$ over the dummy indices $k$, $l$, we can obtain the following nonzero terms:

$$W_{kl}\epsilon_{lkr} = W_{12}\epsilon_{21r} + W_{13}\epsilon_{31r} + W_{21}\epsilon_{12r} + W_{23}\epsilon_{32r} + W_{31}\epsilon_{13r} + W_{32}\epsilon_{23r} \tag{1.128}$$

thus,

$$r = 1 \;\Rightarrow\; W_{kl}\epsilon_{lk1} = -W_{23} + W_{32} = -2W_{23} = 2w_1 \\ r = 2 \;\Rightarrow\; W_{kl}\epsilon_{lk2} = W_{13} - W_{31} = 2W_{13} = 2w_2 \quad\Rightarrow\quad W_{kl}\epsilon_{lkr} = 2w_r \tag{1.129} \\ r = 3 \;\Rightarrow\; W_{kl}\epsilon_{lk3} = -W_{12} + W_{21} = -2W_{12} = 2w_3$$

where $w_r$ are the components of the axial vector $\vec{w}$ associated with $\mathbf{W}$. Applying the relation $\epsilon_{rij}\epsilon_{kij} = 2\delta_{rk}$, which was evaluated in Problem 1.7, we can conclude that:

$$w_k = -\tfrac{1}{2}\epsilon_{kij}W_{ij} \tag{1.135}$$
A graphical representation of the antisymmetric tensor components and the corresponding axial vector $\vec{w} = w_1\hat{e}_1 + w_2\hat{e}_2 + w_3\hat{e}_3$ (with, e.g., $w_3 = -W_{12}$), in the Cartesian system, is shown in Figure 1.15.
Figure 1.15: Antisymmetric tensor components and the axial vector.
Let $\vec{a}$ and $\vec{b}$ be arbitrary vectors and $\mathbf{W}$ be an antisymmetric tensor. It follows that:

$$\vec{b} \cdot \mathbf{W} \cdot \vec{a} = \vec{a} \cdot \mathbf{W}^T \cdot \vec{b} = -\vec{a} \cdot \mathbf{W} \cdot \vec{b} \tag{1.136}$$

When $\vec{a} = \vec{b}$, it holds that:

$$\vec{a} \cdot \mathbf{W} \cdot \vec{a} = \mathbf{W} : (\vec{a} \otimes \vec{a}) = 0 \tag{1.137}$$

NOTE: Note that $(\vec{a} \otimes \vec{a})$ is a symmetric second-order tensor. Later on we will show that the double contraction between a symmetric tensor and an antisymmetric tensor equals zero. ■
Let us consider an antisymmetric tensor $\mathbf{W}$ and an arbitrary vector $\vec{a}$. The components of the scalar product $\mathbf{W} \cdot \vec{a}$ are given by:

$$W_{ij}a_j = W_{i1}a_1 + W_{i2}a_2 + W_{i3}a_3 \quad\Rightarrow\quad \begin{cases} i = 1: & W_{11}a_1 + W_{12}a_2 + W_{13}a_3 \\ i = 2: & W_{21}a_1 + W_{22}a_2 + W_{23}a_3 \\ i = 3: & W_{31}a_1 + W_{32}a_2 + W_{33}a_3 \end{cases} \tag{1.138}$$

Bearing in mind that the normal components of an antisymmetric tensor are zero, i.e. $W_{11} = W_{22} = W_{33} = 0$, the scalar product (1.138) becomes:

$$(\mathbf{W} \cdot \vec{a})_i \;\Rightarrow\; \begin{cases} i = 1: & W_{12}a_2 + W_{13}a_3 \\ i = 2: & W_{21}a_1 + W_{23}a_3 \\ i = 3: & W_{31}a_1 + W_{32}a_2 \end{cases} \tag{1.139}$$

The above components are the same as the result of the operation $\vec{w} \wedge \vec{a}$:

$$\vec{w} \wedge \vec{a} = \begin{vmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 \\ w_1 & w_2 & w_3 \\ a_1 & a_2 & a_3 \end{vmatrix} = (-w_3 a_2 + w_2 a_3)\hat{e}_1 + (w_3 a_1 - w_1 a_3)\hat{e}_2 + (-w_2 a_1 + w_1 a_2)\hat{e}_3 \\ = (W_{12}a_2 + W_{13}a_3)\hat{e}_1 + (W_{21}a_1 + W_{23}a_3)\hat{e}_2 + (W_{31}a_1 + W_{32}a_2)\hat{e}_3 \tag{1.140}$$

where $w_1 = -W_{23} = W_{32}$, $w_2 = W_{13} = -W_{31}$, $w_3 = -W_{12} = W_{21}$. Then, given an antisymmetric tensor $\mathbf{W}$ and the axial vector $\vec{w}$ associated with $\mathbf{W}$, it holds that:

$$\mathbf{W} \cdot \vec{a} = \vec{w} \wedge \vec{a} \tag{1.141}$$

for any vector $\vec{a}$. The property (1.141) could easily have been obtained by taking into account the components of $\mathbf{W}$ given by (1.133), i.e.:

$$(\mathbf{W} \cdot \vec{a})_i = W_{ik}a_k = -w_j\epsilon_{jik}a_k = \epsilon_{ijk}w_j a_k = (\vec{w} \wedge \vec{a})_i \tag{1.142}$$
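A NumPy sketch (ours, not part of the original text) of the axial-vector relations (1.135) and (1.141):

```python
import numpy as np

W = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])   # antisymmetric: W = -W.T

# Axial vector, eq. (1.135): w = (-W23, W13, -W12)
w = np.array([-W[1, 2], W[0, 2], -W[0, 1]])

a = np.array([0.5, -1.0, 2.0])
# Eq. (1.141): W . a = w ^ a
assert np.allclose(W @ a, np.cross(w, a))
```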
The vector $\vec{w}$ can be represented by its magnitude $\|\vec{w}\| = \omega$ and by the unit vector codirectional with $\vec{w}$, i.e. $\vec{w} = \omega\hat{e}_1^*$. Then, equation (1.141) can also be expressed as:

$$\mathbf{W} \cdot \vec{a} = \vec{w} \wedge \vec{a} = \omega\hat{e}_1^* \wedge \vec{a} \tag{1.143}$$

Additionally, we can choose two unit vectors $\hat{e}_2^*$, $\hat{e}_3^*$ which make up an orthonormal basis with the unit vector $\hat{e}_1^*$, (see Figure 1.16), so that:

$$\hat{e}_1^* = \hat{e}_2^* \wedge \hat{e}_3^* \;;\quad \hat{e}_2^* = \hat{e}_3^* \wedge \hat{e}_1^* \;;\quad \hat{e}_3^* = \hat{e}_1^* \wedge \hat{e}_2^* \tag{1.144}$$
Figure 1.16: Orthonormal basis defined by the axial vector.
By representing the vector $\vec{a}$ in this new basis, $\vec{a} = a_1^*\hat{e}_1^* + a_2^*\hat{e}_2^* + a_3^*\hat{e}_3^*$, the relationship shown in (1.143) takes the form:

$$\mathbf{W} \cdot \vec{a} = \omega\hat{e}_1^* \wedge \vec{a} = \omega\hat{e}_1^* \wedge (a_1^*\hat{e}_1^* + a_2^*\hat{e}_2^* + a_3^*\hat{e}_3^*) = \omega\bigl(a_1^*\underbrace{\hat{e}_1^* \wedge \hat{e}_1^*}_{=\vec{0}} + a_2^*\underbrace{\hat{e}_1^* \wedge \hat{e}_2^*}_{=\hat{e}_3^*} + a_3^*\underbrace{\hat{e}_1^* \wedge \hat{e}_3^*}_{=-\hat{e}_2^*}\bigr) \\ = \omega(a_2^*\hat{e}_3^* - a_3^*\hat{e}_2^*) = \bigl[\omega(\hat{e}_3^* \otimes \hat{e}_2^* - \hat{e}_2^* \otimes \hat{e}_3^*)\bigr] \cdot \vec{a} \tag{1.145}$$

Thus, an antisymmetric tensor can be represented, in the space defined by its axial vector, as follows:

$$\mathbf{W} = \omega(\hat{e}_3^* \otimes \hat{e}_2^* - \hat{e}_2^* \otimes \hat{e}_3^*) \tag{1.146}$$
Note that by using the antisymmetric tensor representation shown in (1.146), the projections of the tensor $\mathbf{W}$ along the directions $\hat{e}_1^*$, $\hat{e}_2^*$ and $\hat{e}_3^*$ are, respectively:

$$\mathbf{W} \cdot \hat{e}_1^* = \vec{0} \;;\quad \mathbf{W} \cdot \hat{e}_2^* = \omega\hat{e}_3^* \;;\quad \mathbf{W} \cdot \hat{e}_3^* = -\omega\hat{e}_2^* \tag{1.147}$$

We can also verify that:

$$\hat{e}_3^* \cdot \mathbf{W} \cdot \hat{e}_2^* = \hat{e}_3^* \cdot \bigl[\omega(\hat{e}_3^* \otimes \hat{e}_2^* - \hat{e}_2^* \otimes \hat{e}_3^*)\bigr] \cdot \hat{e}_2^* = \omega \\ \hat{e}_2^* \cdot \mathbf{W} \cdot \hat{e}_3^* = \hat{e}_2^* \cdot \bigl[\omega(\hat{e}_3^* \otimes \hat{e}_2^* - \hat{e}_2^* \otimes \hat{e}_3^*)\bigr] \cdot \hat{e}_3^* = -\omega \tag{1.148}$$

Then, the components of $\mathbf{W}$ in the orthonormal basis $\hat{e}_1^*$, $\hat{e}_2^*$, $\hat{e}_3^*$ are given by:

$$W_{ij}^* = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -\omega \\ 0 & \omega & 0 \end{bmatrix} \tag{1.149}$$
In Figure 1.17 we can see these components and the axial vector representation. Note that if we take any basis obtained by rotation about the $\hat{e}_1^*$-axis, the components of $\mathbf{W}$ in this new basis will be the same as those given in (1.149).
Figure 1.17: Antisymmetric tensor components in the space defined by the axial vector.
Any arbitrary second-order tensor $\mathbf{A}$ can be split additively into a symmetric and an antisymmetric part:

$$\mathbf{A} = \underbrace{\tfrac{1}{2}(\mathbf{A} + \mathbf{A}^T)}_{\mathbf{A}^{sym}} + \underbrace{\tfrac{1}{2}(\mathbf{A} - \mathbf{A}^T)}_{\mathbf{A}^{skew}} = \mathbf{A}^{sym} + \mathbf{A}^{skew} \tag{1.150}$$
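A minimal NumPy sketch (ours) of the additive split (1.150):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
A_sym  = 0.5 * (A + A.T)   # symmetric part, eq. (1.150)
A_skew = 0.5 * (A - A.T)   # antisymmetric part

assert np.allclose(A, A_sym + A_skew)
assert np.allclose(A_sym, A_sym.T) and np.allclose(A_skew, -A_skew.T)
```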
Moreover, for second-order tensors $\mathbf{A}$, $\mathbf{B}$ it holds that:

$$(\mathbf{A}^T \cdot \mathbf{B} \cdot \mathbf{A})^{sym} = \tfrac{1}{2}\bigl[\mathbf{A}^T \cdot \mathbf{B} \cdot \mathbf{A} + (\mathbf{A}^T \cdot \mathbf{B} \cdot \mathbf{A})^T\bigr] = \tfrac{1}{2}\bigl[\mathbf{A}^T \cdot \mathbf{B} \cdot \mathbf{A} + \mathbf{A}^T \cdot \mathbf{B}^T \cdot \mathbf{A}\bigr] \\ = \mathbf{A}^T \cdot \tfrac{1}{2}\bigl[\mathbf{B} + \mathbf{B}^T\bigr] \cdot \mathbf{A} = \mathbf{A}^T \cdot \mathbf{B}^{sym} \cdot \mathbf{A} \tag{1.152}$$
Taking into account the characteristics of a symmetric and an antisymmetric tensor, i.e. $\sigma_{12} = \sigma_{21}$, $\sigma_{31} = \sigma_{13}$, $\sigma_{32} = \sigma_{23}$, and $W_{11} = W_{22} = W_{33} = 0$, $W_{21} = -W_{12}$, $W_{31} = -W_{13}$, $W_{32} = -W_{23}$, the expansion of $\boldsymbol{\sigma} : \mathbf{W}$ reduces to:

$$\boldsymbol{\sigma} : \mathbf{W} = 0$$
Problem 1.16: Show that a) $\vec{M} \cdot \mathbf{Q} \cdot \vec{M} = \vec{M} \cdot \mathbf{Q}^{sym} \cdot \vec{M}$; b) $\mathbf{A} : \mathbf{B} = \mathbf{A}^{sym} : \mathbf{B}^{sym} + \mathbf{A}^{skew} : \mathbf{B}^{skew}$, where $\vec{M}$ is a vector, and $\mathbf{Q}$, $\mathbf{A}$, $\mathbf{B}$ are arbitrary second-order tensors.

Solution:

a) $\vec{M} \cdot \mathbf{Q} \cdot \vec{M} = \vec{M} \cdot (\mathbf{Q}^{sym} + \mathbf{Q}^{skew}) \cdot \vec{M} = \vec{M} \cdot \mathbf{Q}^{sym} \cdot \vec{M} + \vec{M} \cdot \mathbf{Q}^{skew} \cdot \vec{M}$

Since the relation $\vec{M} \cdot \mathbf{Q}^{skew} \cdot \vec{M} = \mathbf{Q}^{skew} : \underbrace{(\vec{M} \otimes \vec{M})}_{\text{symmetric tensor}} = 0$ holds, it follows that:

$$\vec{M} \cdot \mathbf{Q} \cdot \vec{M} = \vec{M} \cdot \mathbf{Q}^{sym} \cdot \vec{M}$$

b)

$$\mathbf{A} : \mathbf{B} = (\mathbf{A}^{sym} + \mathbf{A}^{skew}) : (\mathbf{B}^{sym} + \mathbf{B}^{skew}) = \mathbf{A}^{sym} : \mathbf{B}^{sym} + \underbrace{\mathbf{A}^{sym} : \mathbf{B}^{skew}}_{=0} + \underbrace{\mathbf{A}^{skew} : \mathbf{B}^{sym}}_{=0} + \mathbf{A}^{skew} : \mathbf{B}^{skew} \\ = \mathbf{A}^{sym} : \mathbf{B}^{sym} + \mathbf{A}^{skew} : \mathbf{B}^{skew}$$

Then, it is also valid that:

$$\mathbf{A} : \mathbf{B}^{sym} = \mathbf{A}^{sym} : \mathbf{B}^{sym} \;;\quad \mathbf{A} : \mathbf{B}^{skew} = \mathbf{A}^{skew} : \mathbf{B}^{skew}$$
Problem 1.17: Let $\mathbf{T}$ be an arbitrary second-order tensor and $\vec{n}$ be a vector. Check whether the relationship $\vec{n} \cdot \mathbf{T} = \mathbf{T} \cdot \vec{n}$ is valid.

Solution:

$$\vec{n} \cdot \mathbf{T} = n_i\hat{e}_i \cdot T_{kl}(\hat{e}_k \otimes \hat{e}_l) = n_i T_{kl}\delta_{ik}\hat{e}_l = n_k T_{kl}\hat{e}_l = (n_1 T_{1l} + n_2 T_{2l} + n_3 T_{3l})\hat{e}_l$$

and

$$\mathbf{T} \cdot \vec{n} = T_{lk}(\hat{e}_l \otimes \hat{e}_k) \cdot n_i\hat{e}_i = n_i T_{lk}\delta_{ki}\hat{e}_l = n_k T_{lk}\hat{e}_l = (n_1 T_{l1} + n_2 T_{l2} + n_3 T_{l3})\hat{e}_l$$

With the above we can see that, in general, $n_k T_{kl} \neq n_k T_{lk}$, hence:

$$\vec{n} \cdot \mathbf{T} \neq \mathbf{T} \cdot \vec{n}$$

If $\mathbf{T}$ is a symmetric tensor, then the relationship $\vec{n} \cdot \mathbf{T}^{sym} = \mathbf{T}^{sym} \cdot \vec{n}$ holds.
Problem 1.18: Obtain the axial vector $\vec{w}$ associated with the antisymmetric tensor $(\vec{x} \otimes \vec{a})^{skew}$.

Solution: Let $\vec{z}$ be an arbitrary vector. It then holds that:

$$(\vec{x} \otimes \vec{a})^{skew} \cdot \vec{z} = \vec{w} \wedge \vec{z}$$

where $\vec{w}$ is the axial vector associated with $(\vec{x} \otimes \vec{a})^{skew}$. Using the definition of an antisymmetric tensor:

$$(\vec{x} \otimes \vec{a})^{skew} = \tfrac{1}{2}\bigl[(\vec{x} \otimes \vec{a}) - (\vec{x} \otimes \vec{a})^T\bigr] = \tfrac{1}{2}\bigl[\vec{x} \otimes \vec{a} - \vec{a} \otimes \vec{x}\bigr]$$

and substituting this into $(\vec{x} \otimes \vec{a})^{skew} \cdot \vec{z} = \vec{w} \wedge \vec{z}$, we obtain:

$$\tfrac{1}{2}\bigl[\vec{x} \otimes \vec{a} - \vec{a} \otimes \vec{x}\bigr] \cdot \vec{z} = \vec{w} \wedge \vec{z} \quad\Rightarrow\quad \bigl[\vec{x} \otimes \vec{a} - \vec{a} \otimes \vec{x}\bigr] \cdot \vec{z} = 2\vec{w} \wedge \vec{z}$$

By using the equation $\bigl[\vec{x} \otimes \vec{a} - \vec{a} \otimes \vec{x}\bigr] \cdot \vec{z} = \vec{z} \wedge (\vec{x} \wedge \vec{a})$, (see eq. (1.105)), the above equation becomes:

$$\bigl[\vec{x} \otimes \vec{a} - \vec{a} \otimes \vec{x}\bigr] \cdot \vec{z} = \vec{z} \wedge (\vec{x} \wedge \vec{a}) = (\vec{a} \wedge \vec{x}) \wedge \vec{z} = 2\vec{w} \wedge \vec{z}$$

With the above we can conclude that:

$$\vec{w} = \tfrac{1}{2}(\vec{a} \wedge \vec{x}) \;\text{ is the axial vector associated with } (\vec{x} \otimes \vec{a})^{skew}$$
The trace of the dyad $(\hat{e}_i \otimes \hat{e}_j)$ is defined as:

$$\operatorname{Tr}(\hat{e}_i \otimes \hat{e}_j) = \hat{e}_i \cdot \hat{e}_j = \delta_{ij} \tag{1.158}$$

Thus, we can define the trace of a second-order tensor $\mathbf{A}$ as follows:

$$\operatorname{Tr}(\mathbf{A}) = \operatorname{Tr}(A_{ij}\,\hat{e}_i \otimes \hat{e}_j) = A_{ij}\operatorname{Tr}(\hat{e}_i \otimes \hat{e}_j) = A_{ij}(\hat{e}_i \cdot \hat{e}_j) = A_{ij}\delta_{ij} = A_{ii} = A_{11} + A_{22} + A_{33} \tag{1.159}$$

And the trace of the dyad $(\vec{u} \otimes \vec{v})$ can be evaluated as:

$$\operatorname{Tr}(\vec{u} \otimes \vec{v}) = u_i v_j\operatorname{Tr}(\hat{e}_i \otimes \hat{e}_j) = u_i v_j(\hat{e}_i \cdot \hat{e}_j) = u_i v_j\delta_{ij} = u_i v_i = u_1 v_1 + u_2 v_2 + u_3 v_3 = \vec{u} \cdot \vec{v} \tag{1.160}$$

NOTE: As we will show later, the trace of a tensor is an invariant, i.e. it is independent of the coordinate system. ■
Let $\mathbf{A}$, $\mathbf{B}$ be arbitrary tensors. Then:

The trace of the transposed tensor is equal to the trace of the tensor:

$$\operatorname{Tr}(\mathbf{A}^T) = \operatorname{Tr}(\mathbf{A}) \tag{1.161}$$

The trace of $(\mathbf{A} + \mathbf{B})$ is the sum of the traces:

$$\operatorname{Tr}(\mathbf{A} + \mathbf{B}) = \operatorname{Tr}(\mathbf{A}) + \operatorname{Tr}(\mathbf{B}) \tag{1.162}$$

Proving this is very simple:

$$\bigl[(A_{11} + B_{11}) + (A_{22} + B_{22}) + (A_{33} + B_{33})\bigr] = (A_{11} + A_{22} + A_{33}) + (B_{11} + B_{22} + B_{33}) \tag{1.163}$$

The trace of the scalar product $(\mathbf{A} \cdot \mathbf{B})$ becomes:

$$\operatorname{Tr}(\mathbf{A} \cdot \mathbf{B}) = \operatorname{Tr}\bigl[(A_{ij}\,\hat{e}_i \otimes \hat{e}_j) \cdot (B_{lm}\,\hat{e}_l \otimes \hat{e}_m)\bigr] = A_{ij}B_{lm}\delta_{jl}\underbrace{\operatorname{Tr}(\hat{e}_i \otimes \hat{e}_m)}_{\delta_{im}} = A_{il}B_{li} = \mathbf{A} \cdot\cdot\, \mathbf{B} = \operatorname{Tr}(\mathbf{B} \cdot \mathbf{A}) \tag{1.164}$$

and the double scalar product ($:$) can be expressed in terms of the trace as:

$$\mathbf{A} : \mathbf{B} = A_{ij}B_{ij} = \underbrace{A_{kj}B_{lj}}_{(\mathbf{A} \cdot \mathbf{B}^T)_{kl}}\delta_{kl} = \underbrace{A_{ik}B_{il}}_{(\mathbf{A}^T \cdot \mathbf{B})_{kl}}\delta_{kl} = (\mathbf{A} \cdot \mathbf{B}^T)_{kk} = (\mathbf{A}^T \cdot \mathbf{B})_{kk} = \operatorname{Tr}(\mathbf{A} \cdot \mathbf{B}^T) = \operatorname{Tr}(\mathbf{A}^T \cdot \mathbf{B}) \tag{1.165}$$

Similarly, it is possible to show that:

$$\operatorname{Tr}(\mathbf{A} \cdot \mathbf{B} \cdot \mathbf{C}) = \operatorname{Tr}(\mathbf{B} \cdot \mathbf{C} \cdot \mathbf{A}) = \operatorname{Tr}(\mathbf{C} \cdot \mathbf{A} \cdot \mathbf{B}) = A_{ij}B_{jk}C_{ki} \tag{1.166}$$

$$\operatorname{Tr}(\mathbf{A}) = A_{ii} \;;\quad \bigl[\operatorname{Tr}(\mathbf{A})\bigr]^2 = \operatorname{Tr}(\mathbf{A})\operatorname{Tr}(\mathbf{A}) = A_{ii}A_{jj} \;;\quad \operatorname{Tr}(\mathbf{A} \cdot \mathbf{A}) = \operatorname{Tr}(\mathbf{A}^2) = A_{il}A_{li} \;;\quad \operatorname{Tr}(\mathbf{A} \cdot \mathbf{A} \cdot \mathbf{A}) = \operatorname{Tr}(\mathbf{A}^3) = A_{ij}A_{jk}A_{ki} \tag{1.167}$$
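These trace identities can be checked numerically; a short NumPy sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.random((3, 3)) for _ in range(3))

tr = np.trace
assert np.isclose(tr(A.T), tr(A))                  # eq. (1.161)
assert np.isclose(tr(A @ B), tr(B @ A))            # eq. (1.164)
assert np.isclose(np.sum(A * B), tr(A @ B.T))      # eq. (1.165): A:B = Tr(A.B^T)
assert np.isclose(tr(A @ B @ C), tr(C @ A @ B))    # eq. (1.166), cyclic property
```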
For the second demonstration we can use the trace property $\operatorname{Tr}(\mathbf{T}^T) = \operatorname{Tr}(\mathbf{T})$, thus:

$$\operatorname{Tr}\bigl[(\mathbf{T}^T)^m\bigr] = \operatorname{Tr}\bigl[(\mathbf{T}^m)^T\bigr] = \operatorname{Tr}(\mathbf{T}^m)$$
The second-order unit tensor, also called the identity tensor, is defined as:

$$\mathbf{1} = \delta_{ij}\,\hat{e}_i \otimes \hat{e}_j = \hat{e}_i \otimes \hat{e}_i \tag{1.168}$$

where $\delta_{ij}$ is the Kronecker delta defined in (1.48); the matrix of components of $\mathbf{1}$ is the identity matrix.

Fourth-order unit tensors can be defined as follows (the three products $\overline{\otimes}$, $\underline{\otimes}$, $\otimes$ differ only in the pairing of the indices):

$$\mathbb{I} = \mathbf{1}\,\overline{\otimes}\,\mathbf{1} = \delta_{ik}\delta_{jl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l = I_{ijkl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l \tag{1.169}$$

$$\overline{\mathbb{I}} = \mathbf{1}\,\underline{\otimes}\,\mathbf{1} = \delta_{il}\delta_{jk}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l = \overline{I}_{ijkl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l \tag{1.170}$$

$$\hat{\mathbb{I}} = \mathbf{1} \otimes \mathbf{1} = \delta_{ij}\delta_{kl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l = \hat{I}_{ijkl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l \tag{1.171}$$
Taking into account the fourth-order unit tensors defined above, it holds that:

$$\mathbb{I} : \mathbf{A} = (\delta_{ik}\delta_{jl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l) : (A_{pq}\,\hat{e}_p \otimes \hat{e}_q) = \delta_{ik}\delta_{jl}A_{pq}\delta_{kp}\delta_{lq}\,(\hat{e}_i \otimes \hat{e}_j) = \delta_{ik}\delta_{jl}A_{kl}\,(\hat{e}_i \otimes \hat{e}_j) = A_{ij}\,(\hat{e}_i \otimes \hat{e}_j) = \mathbf{A} \tag{1.172}$$

and

$$\overline{\mathbb{I}} : \mathbf{A} = (\delta_{il}\delta_{jk}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l) : (A_{pq}\,\hat{e}_p \otimes \hat{e}_q) = \delta_{il}\delta_{jk}A_{kl}\,(\hat{e}_i \otimes \hat{e}_j) = A_{ji}\,(\hat{e}_i \otimes \hat{e}_j) = \mathbf{A}^T \tag{1.173}$$

and

$$\hat{\mathbb{I}} : \mathbf{A} = (\delta_{ij}\delta_{kl}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l) : (A_{pq}\,\hat{e}_p \otimes \hat{e}_q) = \delta_{ij}\delta_{kl}A_{kl}\,(\hat{e}_i \otimes \hat{e}_j) = A_{kk}\delta_{ij}\,(\hat{e}_i \otimes \hat{e}_j) = \operatorname{Tr}(\mathbf{A})\,\mathbf{1} \tag{1.174}$$
The symmetric part of the fourth-order unit tensor $\mathbb{I}$ is defined as:

$$\mathbb{I}^{sym} = \tfrac{1}{2}\bigl(\mathbf{1}\,\overline{\otimes}\,\mathbf{1} + \mathbf{1}\,\underline{\otimes}\,\mathbf{1}\bigr) \quad\xrightarrow{\text{in components}}\quad I^{sym}_{ijkl} = \tfrac{1}{2}\bigl(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\bigr) \tag{1.175}$$
The properties of these tensor products are presented below. Consider the second-order unit tensor $\mathbf{1} = \delta_{ij}\,\hat{e}_i \otimes \hat{e}_j$. Then:

$$\mathbf{1}\,\overline{\otimes}\,\mathbf{1} = (\delta_{ij}\,\hat{e}_i \otimes \hat{e}_j)\,\overline{\otimes}\,(\delta_{kl}\,\hat{e}_k \otimes \hat{e}_l) = \delta_{ij}\delta_{kl}\,\hat{e}_i \otimes \hat{e}_k \otimes \hat{e}_j \otimes \hat{e}_l \tag{1.176}$$

$$\mathbf{1}\,\underline{\otimes}\,\mathbf{1} = (\delta_{ij}\,\hat{e}_i \otimes \hat{e}_j)\,\underline{\otimes}\,(\delta_{kl}\,\hat{e}_k \otimes \hat{e}_l) = \delta_{ij}\delta_{kl}\,\hat{e}_i \otimes \hat{e}_l \otimes \hat{e}_k \otimes \hat{e}_j \tag{1.178}$$

or

$$\overline{\mathbb{I}} = \mathbf{1}\,\underline{\otimes}\,\mathbf{1} = \delta_{il}\delta_{jk}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \otimes \hat{e}_l \tag{1.179}$$

The antisymmetric part of $\mathbb{I}$ is defined as:

$$\mathbb{I}^{skew} = \tfrac{1}{2}\bigl(\mathbf{1}\,\overline{\otimes}\,\mathbf{1} - \mathbf{1}\,\underline{\otimes}\,\mathbf{1}\bigr) \quad\xrightarrow{\text{in components}}\quad I^{skew}_{ijkl} = \tfrac{1}{2}\bigl(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}\bigr) \tag{1.180}$$
Given a second-order tensor $\mathbf{A}$ and a vector $\vec{b}$, the following relationships are valid:

$$\vec{b} \cdot \mathbf{1} = \vec{b} \\ \mathbb{I} : \mathbf{A} = \mathbf{A} \;;\quad \mathbb{I}^{sym} : \mathbf{A} = \mathbf{A}^{sym} \\ \mathbf{A} : \mathbf{1} = \operatorname{Tr}(\mathbf{A}) = A_{ii} \\ \mathbf{A}^2 : \mathbf{1} = \operatorname{Tr}(\mathbf{A}^2) = \operatorname{Tr}(\mathbf{A} \cdot \mathbf{A}) = A_{il}A_{li} \\ \mathbf{A}^3 : \mathbf{1} = \operatorname{Tr}(\mathbf{A}^3) = \operatorname{Tr}(\mathbf{A} \cdot \mathbf{A} \cdot \mathbf{A}) = A_{ij}A_{jk}A_{ki} \tag{1.181}$$
The Levi-Civita pseudo-tensor, also known as the permutation tensor, is a third-order pseudo-tensor defined by:

$$\boldsymbol{\epsilon} = \epsilon_{ijk}\,\hat{e}_i \otimes \hat{e}_j \otimes \hat{e}_k \tag{1.182}$$

It is not a "true" tensor in the strict meaning of the word; its components $\epsilon_{ijk}$ were defined in (1.55) as the permutation symbol.
The determinant satisfies:

$$\det(\mathbf{A} \cdot \mathbf{B}) = \det(\mathbf{A})\det(\mathbf{B}) \;;\quad \det(\alpha\mathbf{A}) = \alpha^3\det(\mathbf{A}) \quad\text{where } \alpha \text{ is a scalar} \tag{1.187}$$
Problem 1.22: Show that $|\mathbf{A}| = \tfrac{1}{6}\epsilon_{rjk}\epsilon_{tpq}A_{rt}A_{jp}A_{kq}$.

Solution: Starting with the definition $|\mathbf{A}|\,\epsilon_{tpq} = \epsilon_{rjk}A_{rt}A_{jp}A_{kq}$, (see Problem 1.21), and multiplying both sides of the equation by $\epsilon_{tpq}$, we obtain:

$$|\mathbf{A}|\,\epsilon_{tpq}\epsilon_{tpq} = \epsilon_{rjk}\epsilon_{tpq}A_{rt}A_{jp}A_{kq} \tag{1.190}$$

Using the property defined in expression (1.62), we obtain $\epsilon_{tpq}\epsilon_{tpq} = \delta_{tt}\delta_{pp} - \delta_{tp}\delta_{tp} = 9 - 3 = 6$. Then, the relationship (1.190) becomes:

$$|\mathbf{A}| = \tfrac{1}{6}\epsilon_{rjk}\epsilon_{tpq}A_{rt}A_{jp}A_{kq}$$
Problem 1.23: Let $\vec{a}$, $\vec{b}$ be arbitrary vectors and $\alpha$, $\mu$ be scalars. Show that:

$$\det\bigl(\mu\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b}\bigr) = \mu^3 + \mu^2\alpha\,\vec{a} \cdot \vec{b} \tag{1.191}$$

Solution: The determinant of $\mathbf{A}$ is given by $|\mathbf{A}| = \epsilon_{ijk}A_{i1}A_{j2}A_{k3}$. If we denote $A_{ij} = \mu\delta_{ij} + \alpha a_i b_j$, thus $A_{i1} = \mu\delta_{i1} + \alpha a_i b_1$, $A_{j2} = \mu\delta_{j2} + \alpha a_j b_2$, $A_{k3} = \mu\delta_{k3} + \alpha a_k b_3$, then equation (1.191) can be rewritten as:

$$\det\bigl(\mu\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b}\bigr) = \epsilon_{ijk}(\mu\delta_{i1} + \alpha a_i b_1)(\mu\delta_{j2} + \alpha a_j b_2)(\mu\delta_{k3} + \alpha a_k b_3) \tag{1.192}$$

By developing equation (1.192), we obtain:

$$\det\bigl(\mu\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b}\bigr) = \epsilon_{ijk}\bigl[\mu^3\delta_{i1}\delta_{j2}\delta_{k3} + \mu^2\alpha a_k b_3\delta_{i1}\delta_{j2} + \mu^2\alpha a_j b_2\delta_{i1}\delta_{k3} + \mu^2\alpha a_i b_1\delta_{j2}\delta_{k3} \\ + \mu\alpha^2 a_j a_k b_2 b_3\delta_{i1} + \mu\alpha^2 a_i a_k b_1 b_3\delta_{j2} + \mu\alpha^2 a_i a_j b_1 b_2\delta_{k3} + \alpha^3 a_i a_j a_k b_1 b_2 b_3\bigr]$$

Note that $\mu^3\epsilon_{ijk}\delta_{i1}\delta_{j2}\delta_{k3} = \mu^3\epsilon_{123} = \mu^3$, and

$$\mu^2\alpha\bigl(\epsilon_{ijk}a_k b_3\delta_{i1}\delta_{j2} + \epsilon_{ijk}a_j b_2\delta_{i1}\delta_{k3} + \epsilon_{ijk}a_i b_1\delta_{j2}\delta_{k3}\bigr) = \mu^2\alpha\bigl(\epsilon_{12k}a_k b_3 + \epsilon_{1j3}a_j b_2 + \epsilon_{i23}a_i b_1\bigr) = \mu^2\alpha(a_3 b_3 + a_2 b_2 + a_1 b_1) = \mu^2\alpha\,(\vec{a} \cdot \vec{b})$$

$$\epsilon_{ijk}a_i a_k b_1 b_3\delta_{j2} = \epsilon_{i2k}a_i a_k b_1 b_3 = a_1 a_3 b_1 b_3 - a_3 a_1 b_1 b_3 = 0 \\ \epsilon_{ijk}a_i a_j b_1 b_2\delta_{k3} = \epsilon_{ij3}a_i a_j b_1 b_2 = a_1 a_2 b_1 b_2 - a_2 a_1 b_1 b_2 = 0 \\ \epsilon_{ijk}a_i a_j a_k b_1 b_2 b_3 = 0$$

Notice that there was no need to expand the terms $\epsilon_{ijk}a_i a_k b_1 b_3\delta_{j2}$, $\epsilon_{ijk}a_i a_j b_1 b_2\delta_{k3}$, and $\epsilon_{ijk}a_i a_j a_k b_1 b_2 b_3$ to realize that these terms equal zero, since, for example, $\epsilon_{ijk}a_i a_k b_1 b_3\delta_{j2} = (\vec{a} \wedge \vec{a})_2\,b_1 b_3 = 0$, and similarly for the other terms.

Taking into account the above considerations we can prove that:

$$\det\bigl(\mu\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b}\bigr) = \mu^3 + \mu^2\alpha\,\vec{a} \cdot \vec{b}$$

For the particular case $\mu = 1$ the above equation becomes:

$$\det\bigl(\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b}\bigr) = 1 + \alpha\,\vec{a} \cdot \vec{b}$$

It is then simple to prove that $\det\bigl(\alpha\,\vec{a} \otimes \vec{b}\bigr) = 0$, since:

$$\det\bigl(\alpha\,\vec{a} \otimes \vec{b}\bigr) = \alpha^3\epsilon_{ijk}a_i a_j a_k b_1 b_2 b_3 = \alpha^3 b_1 b_2 b_3\,\bigl[\vec{a} \cdot (\vec{a} \wedge \vec{a})\bigr] = 0$$
More generally, it can be shown that:

$$\det\bigl[\mathbf{1} + \alpha(\vec{a} \otimes \vec{b}) + \beta(\vec{b} \otimes \vec{a})\bigr] = 1 + \alpha(\vec{a} \cdot \vec{b}) + \beta(\vec{a} \cdot \vec{b}) + \alpha\beta\bigl[(\vec{a} \cdot \vec{b})^2 - (\vec{a} \cdot \vec{a})(\vec{b} \cdot \vec{b})\bigr] \tag{1.193}$$

where $\alpha$, $\beta$ are scalars. If $\beta = 0$ we regain the equation $\det\bigl(\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b}\bigr) = 1 + \alpha\,\vec{a} \cdot \vec{b}$, (see Problem 1.23). If $\alpha = \beta$ we obtain:

$$\det\bigl(\mathbf{1} + \alpha\,\vec{a} \otimes \vec{b} + \alpha\,\vec{b} \otimes \vec{a}\bigr) = 1 + 2\alpha(\vec{a} \cdot \vec{b}) + \alpha^2\bigl[(\vec{a} \cdot \vec{b})^2 - (\vec{a} \cdot \vec{a})(\vec{b} \cdot \vec{b})\bigr] = 1 + 2\alpha(\vec{a} \cdot \vec{b}) - \alpha^2\|\vec{a} \wedge \vec{b}\|^2 \tag{1.194}$$

where we have used the property $(\vec{a} \cdot \vec{b})^2 - (\vec{a} \cdot \vec{a})(\vec{b} \cdot \vec{b}) = -\|\vec{a} \wedge \vec{b}\|^2$, (see Problem 1.1).
To achieve this goal we start with the definition of the scalar triple product given in (1.69), i.e. $\vec{a} \cdot (\vec{b} \wedge \vec{c}) = \epsilon_{ijk}a_i b_j c_k$; multiplying both sides of this equation by the determinant of $\mathbf{A}$ we obtain:

$$\vec{a} \cdot (\vec{b} \wedge \vec{c})\,|\mathbf{A}| = \epsilon_{ijk}a_i b_j c_k\,|\mathbf{A}| \tag{1.198}$$

It was proven in Problem 1.21 that $|\mathbf{A}|\,\epsilon_{ijk} = \epsilon_{pqr}A_{pi}A_{qj}A_{rk}$, thus:

$$\vec{a} \cdot (\vec{b} \wedge \vec{c})\,|\mathbf{A}| = \epsilon_{pqr}A_{pi}A_{qj}A_{rk}\,a_i b_j c_k = \epsilon_{pqr}(A_{pi}a_i)(A_{qj}b_j)(A_{rk}c_k) \\ = (\mathbf{A} \cdot \vec{a}) \cdot \bigl[(\mathbf{A} \cdot \vec{b}) \wedge (\mathbf{A} \cdot \vec{c})\bigr] = \bigl[(\mathbf{A} \cdot \vec{b}) \wedge (\mathbf{A} \cdot \vec{c})\bigr] \cdot (\mathbf{A} \cdot \vec{a}) \tag{1.199}$$

In equation (1.199) we demonstrated that $\vec{c} \cdot (\vec{a} \wedge \vec{b})\,|\mathbf{A}| = \bigl[(\mathbf{A} \cdot \vec{a}) \wedge (\mathbf{A} \cdot \vec{b})\bigr] \cdot (\mathbf{A} \cdot \vec{c})$. Writing $\vec{d} = \mathbf{A} \cdot \vec{c}$, this can also be expressed as:

$$\vec{c} \cdot (\vec{a} \wedge \vec{b})\,|\mathbf{A}| = \bigl[(\mathbf{A} \cdot \vec{a}) \wedge (\mathbf{A} \cdot \vec{b})\bigr] \cdot \mathbf{A} \cdot \underbrace{\mathbf{A}^{-1} \cdot \vec{d}}_{=\vec{c}} \tag{1.202}$$

Using the relation $\mathbf{A}^{-1} = \operatorname{adj}(\mathbf{A})/|\mathbf{A}|$ for each tensor, for two tensors $\mathbf{A}$ and $\mathbf{B}$ it follows that:
$$\mathbf{B}^{-1} \cdot \mathbf{A}^{-1} = \frac{[\operatorname{adj}(\mathbf{B})]}{|\mathbf{B}|} \cdot \frac{[\operatorname{adj}(\mathbf{A})]}{|\mathbf{A}|} \;\Rightarrow\; |\mathbf{A}||\mathbf{B}|\,\mathbf{B}^{-1} \cdot \mathbf{A}^{-1} = [\operatorname{adj}(\mathbf{B})] \cdot [\operatorname{adj}(\mathbf{A})] \\ \Rightarrow\; |\mathbf{A}||\mathbf{B}|\,(\mathbf{A} \cdot \mathbf{B})^{-1} = [\operatorname{adj}(\mathbf{B})] \cdot [\operatorname{adj}(\mathbf{A})] \;\Rightarrow\; \underbrace{|\mathbf{A} \cdot \mathbf{B}|\,(\mathbf{A} \cdot \mathbf{B})^{-1}}_{\operatorname{adj}(\mathbf{A} \cdot \mathbf{B})} = [\operatorname{adj}(\mathbf{B})] \cdot [\operatorname{adj}(\mathbf{A})] \\ \Rightarrow\; \operatorname{adj}(\mathbf{A} \cdot \mathbf{B}) = [\operatorname{adj}(\mathbf{B})] \cdot [\operatorname{adj}(\mathbf{A})] \tag{1.208}$$

where we have used the property $|\mathbf{A} \cdot \mathbf{B}| = |\mathbf{A}||\mathbf{B}|$. Similarly, it is possible to show that $\operatorname{cof}(\mathbf{A} \cdot \mathbf{B}) = [\operatorname{cof}(\mathbf{A})] \cdot [\operatorname{cof}(\mathbf{B})]$.
Procedure for obtaining the inverse of the matrix $\mathcal{A}$:

1) Obtain the cofactor matrix $\operatorname{cof}(\mathcal{A})$ as follows. Consider the matrix $\mathcal{A}$:

$$\mathcal{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \tag{1.209}$$

We define the matrix $\mathcal{M}$, where the component $M_{ij}$ is the determinant of the matrix that results from removing the $i$-th row and the $j$-th column, i.e.:

$$\mathcal{M} = \begin{bmatrix} \begin{vmatrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{vmatrix} & \begin{vmatrix} A_{21} & A_{23} \\ A_{31} & A_{33} \end{vmatrix} & \begin{vmatrix} A_{21} & A_{22} \\ A_{31} & A_{32} \end{vmatrix} \\ \begin{vmatrix} A_{12} & A_{13} \\ A_{32} & A_{33} \end{vmatrix} & \begin{vmatrix} A_{11} & A_{13} \\ A_{31} & A_{33} \end{vmatrix} & \begin{vmatrix} A_{11} & A_{12} \\ A_{31} & A_{32} \end{vmatrix} \\ \begin{vmatrix} A_{12} & A_{13} \\ A_{22} & A_{23} \end{vmatrix} & \begin{vmatrix} A_{11} & A_{13} \\ A_{21} & A_{23} \end{vmatrix} & \begin{vmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{vmatrix} \end{bmatrix} \tag{1.210}$$

Thus, we define the cofactor matrix as:

$$\bigl[\operatorname{cof}(\mathcal{A})\bigr]_{ij} = (-1)^{i+j}M_{ij} \tag{1.211}$$

2) Obtain the adjugate matrix:

$$\operatorname{adj}(\mathcal{A}) = \bigl[\operatorname{cof}(\mathcal{A})\bigr]^T \tag{1.212}$$

3) Obtain the inverse matrix:

$$\mathcal{A}^{-1} = \frac{\operatorname{adj}(\mathcal{A})}{|\mathcal{A}|} \tag{1.213}$$

So, the relation $\mathcal{A}\,[\operatorname{adj}(\mathcal{A})] = |\mathcal{A}|\,\mathcal{1}$ holds, where $\mathcal{1}$ is the identity matrix. Taking into account (1.64), we can express the components of the first, second, and third rows of the cofactor matrix (1.211) as $M_{1i} = \epsilon_{ijk}A_{2j}A_{3k}$, $M_{2i} = \epsilon_{ijk}A_{1j}A_{3k}$, $M_{3i} = \epsilon_{ijk}A_{1j}A_{2k}$, respectively.
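A small NumPy sketch (ours, for illustration) implementing the cofactor/adjugate procedure (1.209)–(1.213):

```python
import numpy as np

def inverse_via_adjugate(A):
    """Inverse of a 3x3 matrix via cofactors, eqs. (1.211)-(1.213)."""
    cof = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # eq. (1.211)
    adj = cof.T                                # adjugate, eq. (1.212)
    return adj / np.linalg.det(A)              # eq. (1.213)

A = np.array([[3.0, -1.0, 0.0], [-1.0, 3.0, 0.0], [0.0, 0.0, 1.0]])
assert np.allclose(inverse_via_adjugate(A), np.linalg.inv(A))
```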
Problem 1.24: Let $\mathbf{A}$ be an arbitrary second-order tensor. Show that there is a nonzero vector $\vec{n} \neq \vec{0}$ such that $\mathbf{A} \cdot \vec{n} = \vec{0}$ if and only if $\det(\mathbf{A}) = 0$, Chadwick (1976).

Solution: First we show that, if $\det(\mathbf{A}) \equiv |\mathbf{A}| = 0$, then such a vector $\vec{n} \neq \vec{0}$ exists. Second, we show that, if such $\vec{n} \neq \vec{0}$ exists, then $\det(\mathbf{A}) \equiv |\mathbf{A}| = 0$.

We assume that $\det(\mathbf{A}) \equiv |\mathbf{A}| = 0$, and we choose an arbitrary basis $\{\vec{f}, \vec{g}, \vec{h}\}$ (linearly independent). We apply these vectors to the definition seen in (1.197):

$$\vec{f} \cdot (\vec{g} \wedge \vec{h})\,|\mathbf{A}| = (\mathbf{A} \cdot \vec{f}) \cdot \bigl[(\mathbf{A} \cdot \vec{g}) \wedge (\mathbf{A} \cdot \vec{h})\bigr]$$

Due to the fact that $\det(\mathbf{A}) \equiv |\mathbf{A}| = 0$, the implication is that:

$$(\mathbf{A} \cdot \vec{f}) \cdot \bigl[(\mathbf{A} \cdot \vec{g}) \wedge (\mathbf{A} \cdot \vec{h})\bigr] = 0$$

Thus, the vectors $(\mathbf{A} \cdot \vec{f})$, $(\mathbf{A} \cdot \vec{g})$, $(\mathbf{A} \cdot \vec{h})$ are linearly dependent. This implies that there are scalars $\alpha$, $\beta$, $\gamma$, not all zero, such that:

$$\alpha(\mathbf{A} \cdot \vec{f}) + \beta(\mathbf{A} \cdot \vec{g}) + \gamma(\mathbf{A} \cdot \vec{h}) = \vec{0} \;\Rightarrow\; \mathbf{A} \cdot (\alpha\vec{f} + \beta\vec{g} + \gamma\vec{h}) = \vec{0} \;\Rightarrow\; \mathbf{A} \cdot \vec{n} = \vec{0}$$

where $\vec{n} = \alpha\vec{f} + \beta\vec{g} + \gamma\vec{h} \neq \vec{0}$, since $\{\vec{f}, \vec{g}, \vec{h}\}$ is linearly independent, (see Problem 1.10).

Now we choose two vectors $\vec{k}$, $\vec{m}$ which are linearly independent of $\vec{n}$. We apply definition (1.199) once more:

$$\vec{k} \cdot (\vec{m} \wedge \vec{n})\,|\mathbf{A}| = (\mathbf{A} \cdot \vec{k}) \cdot \bigl[(\mathbf{A} \cdot \vec{m}) \wedge (\mathbf{A} \cdot \vec{n})\bigr]$$

Considering that $\mathbf{A} \cdot \vec{n} = \vec{0}$, and that $\vec{k} \cdot (\vec{m} \wedge \vec{n}) \neq 0$ owing to the fact that $\vec{k}$, $\vec{m}$, $\vec{n}$ are linearly independent, we can conclude that:

$$\underbrace{\vec{k} \cdot (\vec{m} \wedge \vec{n})}_{\neq 0}\,|\mathbf{A}| = 0 \;\Rightarrow\; |\mathbf{A}| = 0$$
An orthogonal tensor $\mathbf{Q}$ satisfies:

$$\mathbf{Q} \cdot \mathbf{Q}^T = \mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{1} \;;\quad Q_{ik}Q_{jk} = Q_{ki}Q_{kj} = \delta_{ij} \tag{1.214}$$

An orthogonal tensor has the following properties:

The inverse of $\mathbf{Q}$ is equal to its transpose (orthogonality):

$$\mathbf{Q}^{-1} = \mathbf{Q}^T \tag{1.215}$$

The tensor $\mathbf{Q}$ is a proper orthogonal (rotation) tensor if:

$$\det(\mathbf{Q}) \equiv |\mathbf{Q}| = +1 \tag{1.216}$$

If $|\mathbf{Q}| = -1$, the orthogonal tensor is said to be improper (a reflection tensor). If $\mathbf{A}$ and $\mathbf{B}$ are orthogonal tensors, the resulting tensor $\mathbf{C} = \mathbf{A} \cdot \mathbf{B}$ is also an orthogonal tensor, i.e.:

$$\mathbf{C}^{-1} = (\mathbf{A} \cdot \mathbf{B})^{-1} = \mathbf{B}^{-1} \cdot \mathbf{A}^{-1} = \mathbf{B}^T \cdot \mathbf{A}^T = (\mathbf{A} \cdot \mathbf{B})^T = \mathbf{C}^T \tag{1.217}$$
Consider two arbitrary vectors $\vec{a}$ and $\vec{b}$. An orthogonal transformation applied to these vectors yields:

$$\tilde{\vec{a}} = \mathbf{Q} \cdot \vec{a} \;;\quad \tilde{\vec{b}} = \mathbf{Q} \cdot \vec{b} \tag{1.218}$$

The dot product of the new vectors $\tilde{\vec{a}}$ and $\tilde{\vec{b}}$ is given by:

$$\tilde{\vec{a}} \cdot \tilde{\vec{b}} = (\mathbf{Q} \cdot \vec{a}) \cdot (\mathbf{Q} \cdot \vec{b}) = \vec{a} \cdot \underbrace{\mathbf{Q}^T \cdot \mathbf{Q}}_{=\mathbf{1}} \cdot \vec{b} = \vec{a} \cdot \vec{b} \\ \tilde{a}_i\tilde{b}_i = (Q_{ik}a_k)(Q_{ij}b_j) = a_k\underbrace{Q_{ik}Q_{ij}}_{\delta_{kj}}b_j = a_k b_k \tag{1.219}$$

This also holds when $\tilde{\vec{a}} = \tilde{\vec{b}}$, thus $\tilde{\vec{a}} \cdot \tilde{\vec{a}} = \|\tilde{\vec{a}}\|^2 = \vec{a} \cdot \vec{a} = \|\vec{a}\|^2$. Therefore, we can conclude that an orthogonal transformation preserves both the magnitudes of the vectors and the angle between them, (see Figure 1.18): $\|\tilde{\vec{a}}\| = \|\vec{a}\|$, $\|\tilde{\vec{b}}\| = \|\vec{b}\|$, $\tilde{\theta} = \theta$, $\tilde{\vec{a}} \cdot \tilde{\vec{b}} = \vec{a} \cdot \vec{b}$.
Problem 1.25: Let $\mathbf{F}$ be an arbitrary second-order tensor. Show that the resulting tensors $\mathbf{C} = \mathbf{F}^T \cdot \mathbf{F}$ and $\mathbf{b} = \mathbf{F} \cdot \mathbf{F}^T$ are symmetric and positive semi-definite tensors. Also check under what condition $\mathbf{C}$ and $\mathbf{b}$ are positive definite tensors.

Solution: Symmetry:

$$\mathbf{C}^T = (\mathbf{F}^T \cdot \mathbf{F})^T = \mathbf{F}^T \cdot (\mathbf{F}^T)^T = \mathbf{F}^T \cdot \mathbf{F} = \mathbf{C} \\ \mathbf{b}^T = (\mathbf{F} \cdot \mathbf{F}^T)^T = (\mathbf{F}^T)^T \cdot \mathbf{F}^T = \mathbf{F} \cdot \mathbf{F}^T = \mathbf{b}$$

Thus, we have shown that $\mathbf{C} = \mathbf{F}^T \cdot \mathbf{F}$ and $\mathbf{b} = \mathbf{F} \cdot \mathbf{F}^T$ are symmetric tensors.

To prove that the tensors are positive semi-definite, we start with the definition: a tensor $\mathbf{A}$ is positive semi-definite if $\hat{x} \cdot \mathbf{A} \cdot \hat{x} \geq 0$ holds for all $\hat{x} \neq \vec{0}$. Thus:

$$\hat{x} \cdot (\mathbf{F}^T \cdot \mathbf{F}) \cdot \hat{x} = (\mathbf{F} \cdot \hat{x}) \cdot (\mathbf{F} \cdot \hat{x}) = \|\mathbf{F} \cdot \hat{x}\|^2 \geq 0 \\ \hat{x} \cdot (\mathbf{F} \cdot \mathbf{F}^T) \cdot \hat{x} = (\mathbf{F}^T \cdot \hat{x}) \cdot (\mathbf{F}^T \cdot \hat{x}) = \|\mathbf{F}^T \cdot \hat{x}\|^2 \geq 0$$

Or in indicial notation:

$$x_i C_{ij}x_j = x_i(F_{ki}F_{kj})x_j = (F_{ki}x_i)(F_{kj}x_j) = \|F_{ki}x_i\|^2 \geq 0 \\ x_i b_{ij}x_j = x_i(F_{ik}F_{jk})x_j = (F_{ik}x_i)(F_{jk}x_j) = \|F_{ik}x_i\|^2 \geq 0$$

Thus, we have proved that $\mathbf{C} = \mathbf{F}^T \cdot \mathbf{F}$ and $\mathbf{b} = \mathbf{F} \cdot \mathbf{F}^T$ are positive semi-definite tensors. Note that $\hat{x} \cdot \mathbf{C} \cdot \hat{x} = \|\mathbf{F} \cdot \hat{x}\|^2$ equals zero, for $\hat{x} \neq \vec{0}$, if $\mathbf{F} \cdot \hat{x} = \vec{0}$. Furthermore, $\mathbf{F} \cdot \hat{x} = \vec{0}$ with $\hat{x} \neq \vec{0}$ is possible if and only if $\det(\mathbf{F}) = 0$, (see Problem 1.24). Then, the tensors $\mathbf{C} = \mathbf{F}^T \cdot \mathbf{F}$ and $\mathbf{b} = \mathbf{F} \cdot \mathbf{F}^T$ are positive definite if and only if $\det(\mathbf{F}) \neq 0$.
Given two arbitrary tensors $\mathbf{S}$, $\mathbf{T} \neq \mathbf{0}$, and a scalar $\alpha$, we can represent $\mathbf{S}$ by means of the following additive decomposition:

$$\mathbf{S} = \alpha\mathbf{T} + \mathbf{U} \quad\text{where}\quad \mathbf{U} = \mathbf{S} - \alpha\mathbf{T} \tag{1.224}$$

Note that, depending on the value of $\alpha$, there are infinitely many such decompositions of $\mathbf{S}$. But if $\operatorname{Tr}(\mathbf{T} \cdot \mathbf{U}^T) = \operatorname{Tr}(\mathbf{U} \cdot \mathbf{T}^T) = 0$, the additive decomposition is unique. From the relationship (1.224), we can evaluate the value of $\alpha$ as follows:

$$\mathbf{S} \cdot \mathbf{T}^T = \alpha\mathbf{T} \cdot \mathbf{T}^T + \mathbf{U} \cdot \mathbf{T}^T \;\Rightarrow\; \operatorname{Tr}(\mathbf{S} \cdot \mathbf{T}^T) = \alpha\operatorname{Tr}(\mathbf{T} \cdot \mathbf{T}^T) + \underbrace{\operatorname{Tr}(\mathbf{U} \cdot \mathbf{T}^T)}_{=0} = \alpha\operatorname{Tr}(\mathbf{T} \cdot \mathbf{T}^T) \tag{1.225}$$

$$\Rightarrow\quad \alpha = \frac{\operatorname{Tr}(\mathbf{S} \cdot \mathbf{T}^T)}{\operatorname{Tr}(\mathbf{T} \cdot \mathbf{T}^T)} \tag{1.226}$$

For example, let us suppose that $\mathbf{T} = \mathbf{1}$. In this case $\alpha$ is evaluated as follows:

$$\alpha = \frac{\operatorname{Tr}(\mathbf{S} \cdot \mathbf{T}^T)}{\operatorname{Tr}(\mathbf{T} \cdot \mathbf{T}^T)} = \frac{\operatorname{Tr}(\mathbf{S} \cdot \mathbf{1})}{\operatorname{Tr}(\mathbf{1} \cdot \mathbf{1})} = \frac{\operatorname{Tr}(\mathbf{S})}{\operatorname{Tr}(\mathbf{1})} = \frac{\operatorname{Tr}(\mathbf{S})}{3} \tag{1.227}$$

We can then define $\mathbf{U}$ as:

$$\mathbf{U} = \mathbf{S} - \alpha\mathbf{T} = \mathbf{S} - \frac{\operatorname{Tr}(\mathbf{S})}{3}\mathbf{1} \equiv \mathbf{S}^{dev} \tag{1.228}$$

Thus:

$$\mathbf{S} = \frac{\operatorname{Tr}(\mathbf{S})}{3}\mathbf{1} + \mathbf{S}^{dev} = \mathbf{S}^{sph} + \mathbf{S}^{dev} \tag{1.229}$$

NOTE: $\mathbf{S}^{sph} = \frac{\operatorname{Tr}(\mathbf{S})}{3}\mathbf{1}$ is the spherical part of the tensor $\mathbf{S}$, and $\mathbf{S}^{dev} = \mathbf{S} - \frac{\operatorname{Tr}(\mathbf{S})}{3}\mathbf{1}$ is known as the deviatoric part (deviatoric tensor). ■
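A NumPy sketch (ours) of the spherical/deviatoric split (1.229):

```python
import numpy as np

S = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 4.0], [0.0, 4.0, 5.0]])

S_sph = (np.trace(S) / 3.0) * np.eye(3)   # spherical part, eq. (1.229)
S_dev = S - S_sph                          # deviatoric part, eq. (1.228)

assert np.isclose(np.trace(S_dev), 0.0)    # the deviator is traceless
assert np.allclose(S, S_sph + S_dev)
```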
Now suppose that $\mathbf{T}$ is given by $\mathbf{T} = \frac{1}{2}(\mathbf{S} + \mathbf{S}^T)$; then $\alpha$ can be evaluated as follows:

$$\alpha = \frac{\operatorname{Tr}(\mathbf{S} \cdot \mathbf{T}^T)}{\operatorname{Tr}(\mathbf{T} \cdot \mathbf{T}^T)} = \frac{\frac{1}{2}\operatorname{Tr}\bigl[\mathbf{S} \cdot (\mathbf{S} + \mathbf{S}^T)^T\bigr]}{\frac{1}{4}\operatorname{Tr}\bigl[(\mathbf{S} + \mathbf{S}^T) \cdot (\mathbf{S} + \mathbf{S}^T)^T\bigr]} = 1 \tag{1.230}$$

We can then define $\mathbf{U}$ as $\mathbf{U} = \mathbf{S} - \alpha\mathbf{T} = \mathbf{S} - \frac{1}{2}(\mathbf{S} + \mathbf{S}^T) = \frac{1}{2}(\mathbf{S} - \mathbf{S}^T)$. We then obtain $\mathbf{S}$ represented by the additive decomposition:

$$\mathbf{S} = \tfrac{1}{2}(\mathbf{S} + \mathbf{S}^T) + \tfrac{1}{2}(\mathbf{S} - \mathbf{S}^T) = \mathbf{S}^{sym} + \mathbf{S}^{skew} \tag{1.231}$$

which is the same as equation (1.150), in which we split the tensor into its symmetric and antisymmetric parts.
[Schematic: TENSORS — mathematical interpretation of physical concepts (independent of the coordinate system); COMPONENTS — representation of a tensor in a coordinate system; the components in coordinate system I and coordinate system II are related by the component transformation law.]

[Figure 1.20 shows the original basis $(\hat{e}_1, \hat{e}_2, \hat{e}_3)$ of the system $(x_1, x_2, x_3)$ and the new basis $(\hat{e}'_1, \hat{e}'_2, \hat{e}'_3)$ of the system $(x'_1, x'_2, x'_3)$, with the angles $\alpha_1$, $\beta_1$, $\gamma_1$ between the $x'_1$-axis and the axes $x_1$, $x_2$, $x_3$.]
Now consider a new coordinate system $(x'_1, x'_2, x'_3)$ represented by the orthonormal basis $(\hat{e}'_1, \hat{e}'_2, \hat{e}'_3)$, (see Figure 1.20). In this new system, the vector $\vec{v}$ is represented by $v'_j\hat{e}'_j$. As mentioned before, a tensor is independent of the adopted system, so:

$$\vec{v} = v'_k\hat{e}'_k = v_j\hat{e}_j \tag{1.234}$$

To obtain the components of a tensor in a given system one need only take the scalar product of the tensor with the basis vectors of that system:

$$v'_k\hat{e}'_k \cdot \hat{e}'_i = (v_j\hat{e}_j) \cdot \hat{e}'_i \;\Rightarrow\; v'_k\delta_{ki} = (v_j\hat{e}_j) \cdot \hat{e}'_i \;\Rightarrow\; v'_i = (v_1\hat{e}_1 + v_2\hat{e}_2 + v_3\hat{e}_3) \cdot \hat{e}'_i \tag{1.235}$$

Or in matrix form:

$$\begin{bmatrix} v'_1 \\ v'_2 \\ v'_3 \end{bmatrix} = \begin{bmatrix} (v_1\hat{e}_1 + v_2\hat{e}_2 + v_3\hat{e}_3) \cdot \hat{e}'_1 \\ (v_1\hat{e}_1 + v_2\hat{e}_2 + v_3\hat{e}_3) \cdot \hat{e}'_2 \\ (v_1\hat{e}_1 + v_2\hat{e}_2 + v_3\hat{e}_3) \cdot \hat{e}'_3 \end{bmatrix} \tag{1.236}$$

Or in indicial notation:

$$v'_i = a_{ij}v_j \tag{1.238}$$

where we have introduced the transformation matrix $\mathcal{A} \equiv a_{ij}$ as:

$$\mathcal{A} \equiv a_{ij} = \begin{bmatrix} \hat{e}'_1 \cdot \hat{e}_1 & \hat{e}'_1 \cdot \hat{e}_2 & \hat{e}'_1 \cdot \hat{e}_3 \\ \hat{e}'_2 \cdot \hat{e}_1 & \hat{e}'_2 \cdot \hat{e}_2 & \hat{e}'_2 \cdot \hat{e}_3 \\ \hat{e}'_3 \cdot \hat{e}_1 & \hat{e}'_3 \cdot \hat{e}_2 & \hat{e}'_3 \cdot \hat{e}_3 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{1.239}$$

The matrix $\mathcal{A}$ is not, in general, symmetric, i.e. $\mathcal{A} \neq \mathcal{A}^T$. With reference to the scalar product $\hat{e}'_i \cdot \hat{e}_j = \|\hat{e}'_i\|\|\hat{e}_j\|\cos(x'_i, x_j) = \cos(x'_i, x_j)$, (see equation (1.4)), the relationship in (1.237) can be expressed by means of the direction cosines as:

$$\mathbf{v}' = \mathcal{A}\mathbf{v}$$

The direction cosines of a vector are the cosines of the angles between the vector and the three coordinate axes. According to Figure 1.20 we can verify that $\cos\alpha_1 = \cos(x'_1, x_1)$, $\cos\beta_1 = \cos(x'_1, x_2)$ and $\cos\gamma_1 = \cos(x'_1, x_3)$.

In equation (1.235) we projected the vector onto $\hat{e}'_i$. Now we can project the vector onto $\hat{e}_i$:

$$v_k\hat{e}_k \cdot \hat{e}_i = v'_j\hat{e}'_j \cdot \hat{e}_i \;\Rightarrow\; v_k\delta_{ki} = v'_j a_{ji} \;\Rightarrow\; v_i = a_{ji}v'_j \quad\Leftrightarrow\quad \mathbf{v} = \mathcal{A}^T\mathbf{v}' \tag{1.241}$$

Therefore, it is also true that:

$$\hat{e}_i = a_{ji}\hat{e}'_j \tag{1.242}$$
Second-order tensor

Consider a coordinate system with orthonormal basis $\hat{e}_i$, and a new one with orthonormal basis $\hat{e}'_i$. The change of basis $\hat{e}_k = a_{ik}\hat{e}'_i$ allows us to represent a second-order tensor $\mathbf{T}$ as follows:

$$\mathbf{T} = T_{kl}\,\hat{e}_k \otimes \hat{e}_l = T_{kl}\,a_{ik}\hat{e}'_i \otimes a_{jl}\hat{e}'_j = T_{kl}a_{ik}a_{jl}\,\hat{e}'_i \otimes \hat{e}'_j = T'_{ij}\,\hat{e}'_i \otimes \hat{e}'_j \tag{1.245}$$

Then, the transformation law of the components between systems for a second-order tensor is given by:

$$T'_{ij} = a_{ik}a_{jl}T_{kl} \quad\xrightarrow{\text{matrix form}}\quad \mathcal{T}' = \mathcal{A}\,\mathcal{T}\mathcal{A}^T \tag{1.246}$$
Third-order tensor
A third-order tensor ( S ) can be shown in two systems represented by orthogonal bases ê i
and eˆ ′i as follows:
S = S lmn eˆ l ⊗ eˆ m ⊗ eˆ n
= S lmn ail eˆ ′i ⊗ a jm eˆ ′j ⊗ a kn eˆ ′k
(1.247)
= S lmn ail a jm a kn eˆ ′i ⊗ eˆ ′j ⊗ eˆ ′k
= S ′ijk eˆ ′i ⊗ eˆ ′j ⊗ eˆ ′k
In conclusion the components of the third-order tensor in the new basis ( eˆ ′i ) are:
S ′ijk = S lmn a il a jm a kn (1.248)
The following table summarizes the transformation law of the components according to
the tensor rank:
3 ′ = a il a jm a kn S lmn
S ijk ′
S ijk = a li a mj a nk S lmn
4 ′ = a im a jn a kp a lq S mnpq
S ijkl ′
S ijkl = a mi a nj a pk a ql S mnpq
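The pattern in the table (one factor of a_ij per index) can be expressed generically; the sketch below (an assumption of this edit, not from the original text) builds the corresponding einsum specification for any rank up to four:

```python
import numpy as np

def transform(tensor, A):
    """Transform Cartesian tensor components to the primed basis:
    one contraction with a_ij per index, e.g. S'_ijk = a_im a_jn a_kp S_mnp."""
    rank = tensor.ndim
    new, old = 'ijkl'[:rank], 'mnpq'[:rank]
    spec = ','.join(n + o for n, o in zip(new, old)) + ',' + old + '->' + new
    return np.einsum(spec, *([A] * rank), tensor)

# Sanity check against the rank-2 matrix form T' = A T A^T
A = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))[0]
T = np.random.default_rng(2).normal(size=(3, 3))
assert np.allclose(transform(T, A), A @ T @ A.T)
```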
[Figure 1.21: Transformation law of the second-order tensor components. The components T_ij in the system (x1, x2, x3) are related to the components T'_ij in the system (x'1, x'2, x'3) by T' = A T A^T and, inversely, T = A^T T' A.]
Consider now four coordinate systems, represented by (x1, x2, x3), (x'1, x'2, x'3), (x''1, x''2, x''3) and (x'''1, x'''2, x'''3), (see Figure 1.22), and consider also the following transformation matrices:

A: transformation matrix from (x1, x2, x3) to (x'1, x'2, x'3);
B: transformation matrix from (x'1, x'2, x'3) to (x''1, x''2, x''3);
C: transformation matrix from (x''1, x''2, x''3) to (x'''1, x'''2, x'''3).

[Figure 1.22: Composition of transformations between the systems X, X', X'' and X'''. From X to X'' the matrix is BA; from X to X''' it is CBA. Since the matrices are orthogonal, A⁻¹ = A^T, B⁻¹ = B^T, C⁻¹ = C^T, and (BA)^T = A^T B^T, (CBA)^T = A^T B^T C^T.]
[Figure 1.23: Transformation of a coordinate system in 2D. The angles between the rotated and original axes are α_{x'x} = α, α_{x'y} = π/2 − α, α_{y'x} = π/2 + α, α_{y'y} = α.]

[x_P; y_P] = [cos α  −sin α; sin α  cos α][x'_P; y'_P]    (1.263)

[Figure 1.24: Geometric interpretation of (1.263): the coordinates (x_P, y_P) of a point P are obtained from (x'_P, y'_P) by projecting with cos α and sin α.]
Problem 1.29: Find the transformation matrix between the systems x, y, z and x''', y''', z'''. These systems are represented in Figure 1.25.

[Figure 1.25: Rotation. Three successive rotations: by α about the z-axis (z = z'), by β about the y'-axis (y' = y''), and by γ about the z''-axis (z'' = z''').]

Solution: The coordinate system x''', y''', z''' can be obtained by the following combination of rotations:

♦ Rotation about the z-axis, from x, y, z to x', y', z', with 0 ≤ α ≤ 360°:

A = [cos α  sin α  0; −sin α  cos α  0; 0  0  1]

♦ Rotation about the y'-axis, from x', y', z' to x'', y'', z'', with 0 ≤ β ≤ 180°:

B = [cos β  0  −sin β; 0  1  0; sin β  0  cos β]

♦ Rotation about the z''-axis, from x'', y'', z'' to x''', y''', z''', with 0 ≤ γ ≤ 360°:

C = [cos γ  sin γ  0; −sin γ  cos γ  0; 0  0  1]

The transformation matrix from (x, y, z) to (x''', y''', z'''), (see Figure 1.22), is given by:

D = C B A

After multiplying the matrices, we obtain:

D = [(cos α cos β cos γ − sin α sin γ)  (sin α cos β cos γ + cos α sin γ)  (−sin β cos γ);
     (−cos α cos β sin γ − sin α cos γ)  (−sin α cos β sin γ + cos α cos γ)  (sin β sin γ);
     (cos α sin β)  (sin α sin β)  (cos β)]

The angles α, β, γ are known as Euler angles; they were introduced by Leonhard Euler to describe the orientation of a rigid body.
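A short numerical sketch (not part of the original text; the angle values are arbitrary) composing D = CBA and confirming it is a proper orthogonal matrix:

```python
import numpy as np

def euler_zyz(alpha, beta, gamma):
    """Compose D = C B A for the z, y', z'' rotations of Problem 1.29."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    A = np.array([[ ca, sa, 0.0], [-sa, ca, 0.0], [0.0, 0.0, 1.0]])
    B = np.array([[ cb, 0.0, -sb], [0.0, 1.0, 0.0], [ sb, 0.0, cb]])
    C = np.array([[ cg, sg, 0.0], [-sg, cg, 0.0], [0.0, 0.0, 1.0]])
    return C @ B @ A

D = euler_zyz(0.3, 0.5, 0.7)
assert np.allclose(D @ D.T, np.eye(3))      # orthogonality: D^{-1} = D^T
assert np.isclose(np.linalg.det(D), 1.0)    # proper rotation
```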
Problem 1.30: Let T be a second-order tensor whose components in the Cartesian system (x1, x2, x3) are given by:

(T)_ij = T_ij = T = [3  −1  0; −1  3  0; 0  0  1]

Given that the transformation matrix between the two systems, (x1, x2, x3)–(x'1, x'2, x'3), is:

A = [0  0  1; √2/2  √2/2  0; √2/2  −√2/2  0]

obtain the tensor components T'_ij in the new coordinate system (x'1, x'2, x'3).
Solution: As defined in equation (1.249), the transformation law for second-order tensor components is:

T'_ij = a_ik a_jl T_kl

To carry out this calculation in matrix form we use [T'_ij] = [a_ik][T_kl][a_lj]^T, thus:

T' = A T A^T = [0  0  1; √2/2  √2/2  0; √2/2  −√2/2  0][3  −1  0; −1  3  0; 0  0  1][0  √2/2  √2/2; 0  √2/2  −√2/2; 1  0  0]

On carrying out the matrix multiplication, we obtain:

T' = [1  0  0; 0  2  0; 0  0  4]
NOTE: As we can verify in the above example, the components of the tensor T , in the
new basis, have one particular feature, i.e. the off-diagonal terms are equal to zero. The
question now is: Given an arbitrary tensor T , is there a transformation which results in the
off-diagonal terms being zero? This type of problem is called the eigenvalue and eigenvector
problem. ■
[Figure: principal direction. n̂ is a principal direction of T, and λ is the eigenvalue of T associated with the direction n̂.]
The previous set of homogeneous equations has a nontrivial solution, i.e. n̂ ≠ 0, if and only if:

det(T − λ1) = 0 ;  |T_ij − λδ_ij| = 0    (1.266)
The determinant in (1.266) is called the characteristic determinant of the tensor T; its expansion yields the characteristic polynomial, whose coefficients are the principal invariants:

I_T = Tr(T) = T_ii

II_T = ½[(Tr T)² − Tr(T²)]
     = ½{Tr(T_ij ê_i⊗ê_j) Tr(T_kl ê_k⊗ê_l) − Tr[(T_ij ê_i⊗ê_j)⋅(T_kl ê_k⊗ê_l)]}
     = ½{T_ij δ_ij T_kl δ_kl − T_ij T_kl δ_jk Tr(ê_i⊗ê_l)}    (1.269)
     = ½{T_ii T_kk − T_ij T_kl δ_jk δ_il}
     = ½{T_ii T_kk − T_ij T_ji} = M_ii = Tr[cof(T)]

where M_ii is the matrix trace defined in equation (1.210), M_ii = M11 + M22 + M33. More explicitly, the invariants are given by:

I_T = T11 + T22 + T33

II_T = |T22  T23; T32  T33| + |T11  T13; T31  T33| + |T11  T12; T21  T22|    (1.270)
     = T22 T33 − T23 T32 + T11 T33 − T13 T31 + T11 T22 − T12 T21

III_T = T11(T22 T33 − T32 T23) − T12(T21 T33 − T31 T23) + T13(T21 T32 − T31 T22)
⇒ (λ1 − λ2) n̂(1) ⋅ n̂(2) = 0    (1.280)

To satisfy equation (1.280) with λ1 ≠ λ2, the following must be true:

n̂(1) ⋅ n̂(2) = 0    (1.281)

Similarly, it is possible to show that n̂(1)⋅n̂(3) = 0 and n̂(2)⋅n̂(3) = 0, and we conclude that the eigenvectors are mutually orthogonal and constitute an orthogonal basis, (see Figure 1.27), where the transformation matrix between systems is:

A = [n̂(1); n̂(2); n̂(3)] = [n̂1(1)  n̂2(1)  n̂3(1); n̂1(2)  n̂2(2)  n̂3(2); n̂1(3)  n̂2(3)  n̂3(3)]    (1.282)
[Figure 1.27: Diagonalization. In the original system (x1, x2, x3) the components are T_ij; in the principal space, spanned by the eigenvectors n̂(1), n̂(2), n̂(3), the representation is diagonal with T1, T2, T3 on the diagonal. The two representations are related by T' = A T A^T and T = A^T T' A.]
where we clearly distinguish the spherical and the deviatoric parts of the tensor in the principal space. Note that if T is a spherical tensor the relationship I_T² = 3 II_T holds, and then S = 0.
Problem 1.33: Find the principal values and directions of the second-order tensor T, where the Cartesian components of T are:

(T)_ij = T_ij = T = [3  −1  0; −1  3  0; 0  0  1]

Solution: We need to find nontrivial solutions of (T_ij − λδ_ij) n̂_j = 0_i, constrained by n̂_j n̂_j = 1 (unit vector). As we have seen, a nontrivial solution requires that:

|T_ij − λδ_ij| = 0

Explicitly:

|T11−λ  T12  T13; T21  T22−λ  T23; T31  T32  T33−λ| = |3−λ  −1  0; −1  3−λ  0; 0  0  1−λ| = 0

Developing the determinant, we obtain the cubic equation:

(1 − λ)[(3 − λ)² − 1] = 0
λ³ − 7λ² + 14λ − 8 = 0

We could also have obtained the characteristic equation directly in terms of invariants:

I_T = Tr(T) = T_ii = T11 + T22 + T33 = 7
II_T = ½(T_ii T_jj − T_ij T_ij) = |T22  T23; T32  T33| + |T11  T13; T31  T33| + |T11  T12; T21  T22| = 14
III_T = det(T) = 8

so that λ³ − I_T λ² + II_T λ − III_T = 0 recovers the same cubic, whose roots are λ1 = 1, λ2 = 2, λ3 = 4.
Principal directions: Each eigenvalue λ_i is associated with a corresponding eigenvector n̂(i). We use equation (1.265), (T_ij − λδ_ij) n̂_j = 0_i, to obtain the principal directions.

For λ1 = 1:

[3−1  −1  0; −1  3−1  0; 0  0  1−1][n1; n2; n3] = [0; 0; 0]

which yields the system:

2n1 − n2 = 0, −n1 + 2n2 = 0 ⇒ n1 = n2 = 0 ;  0⋅n3 = 0

with the constraint n_i n_i = n1² + n2² + n3² = 1 ⇒ n3 = ±1.

Then we conclude: λ1 = 1 ⇒ n̂(1) = [0  0  ±1].

NOTE: This solution could have been determined directly from the specific structure of T. Since T13 = T23 = T31 = T32 = 0, T33 = 1 is already a principal value, and consequently the x3-direction is a principal direction. ■
For λ2 = 2:

[3−2  −1  0; −1  3−2  0; 0  0  1−2][n1; n2; n3] = [0; 0; 0]

n1 − n2 = 0, −n1 + n2 = 0 ⇒ n1 = n2 ;  −n3 = 0

The first two equations are linearly dependent, so we need the additional constraint:

n_i n_i = n1² + n2² + n3² = 1 ⇒ 2n1² = 1 ⇒ n1 = ±1/√2

Thus: λ2 = 2 ⇒ n̂(2) = [±1/√2  ±1/√2  0].
For λ3 = 4:

[3−4  −1  0; −1  3−4  0; 0  0  1−4][n1; n2; n3] = [0; 0; 0]

−n1 − n2 = 0 ⇒ n1 = −n2 ;  −3n3 = 0

n_i n_i = n1² + n2² + n3² = 1 ⇒ 2n2² = 1 ⇒ n2 = ±1/√2

Then: λ3 = 4 ⇒ n̂(3) = [∓1/√2  ±1/√2  0].

Summarizing the eigenvalues and eigenvectors of T:

λ1 = 1 ⇒ n̂(1) = [0  0  ±1]
λ2 = 2 ⇒ n̂(2) = [±1/√2  ±1/√2  0]
λ3 = 4 ⇒ n̂(3) = [∓1/√2  ±1/√2  0]
NOTE: The tensor components of this problem are the same as those used in Problem
1.30. Additionally, we can verify that the eigenvectors make up the transformation matrix,
A , between the original system, (x1 , x 2 , x 3 ) , and the principal space, ( x1′ , x 2′ , x 3′ ) , (see
Problem 1.30). ■
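As a quick check (a minimal NumPy sketch, not part of the original text), the eigendecomposition reproduces the eigenvalues of Problem 1.33 and diagonalizes T exactly as in Problem 1.30:

```python
import numpy as np

T = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 1.0]])

lam, V = np.linalg.eigh(T)    # columns of V are eigenvectors n^(a)
print(lam)                    # [1. 2. 4.]

A = V.T                       # rows of A are the eigenvectors
T_prime = A @ T @ A.T         # diagonal in the principal space
assert np.allclose(T_prime, np.diag(lam), atol=1e-12)
```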
Given that the eigenvectors form a transformation matrix A, we have:

T' = A T A^T    (1.288)

Since A⁻¹ = A^T, the inverse form is:

T = A^T T' A    (1.289)

where

A = [n̂(1); n̂(2); n̂(3)] = [n̂1(1)  n̂2(1)  n̂3(1); n̂1(2)  n̂2(2)  n̂3(2); n̂1(3)  n̂2(3)  n̂3(3)]    (1.290)

Explicitly, the relation in (1.289) is given by:

[T11  T12  T13; T12  T22  T23; T13  T23  T33] = [n̂1(1)  n̂1(2)  n̂1(3); n̂2(1)  n̂2(2)  n̂2(3); n̂3(1)  n̂3(2)  n̂3(3)][T1  0  0; 0  T2  0; 0  0  T3][n̂1(1)  n̂2(1)  n̂3(1); n̂1(2)  n̂2(2)  n̂3(2); n̂1(3)  n̂2(3)  n̂3(3)]

= A^T [T1  0  0; 0  T2  0; 0  0  T3] A    (1.291)

= T1 A^T[1 0 0; 0 0 0; 0 0 0]A + T2 A^T[0 0 0; 0 1 0; 0 0 0]A + T3 A^T[0 0 0; 0 0 0; 0 0 1]A
Whereas:

T = Σ_{a=1}^{3} T_a n̂(a) ⊗ n̂(a)    (1.295)

which is the spectral representation of the tensor. Note that in the above equation we have to resort to the summation symbol, because the dummy index appears three times in the expression.

NOTE: The spectral representation in (1.295) could easily have been obtained from the definition of the second-order unit tensor given in (1.168), 1 = n̂_i ⊗ n̂_i, which can also be represented by means of the summation symbol as 1 = Σ_{a=1}^{3} n̂(a) ⊗ n̂(a). Then, it follows that:

T = T⋅1 = T⋅Σ_{a=1}^{3} n̂(a)⊗n̂(a) = Σ_{a=1}^{3} T⋅n̂(a)⊗n̂(a) = Σ_{a=1}^{3} T_a n̂(a)⊗n̂(a)    (1.296)
The spectral representation is very useful for algebraic operations with tensors. For example, a tensor power in the principal space can be expressed as:

(T^n)_ij = [T1^n  0  0; 0  T2^n  0; 0  0  T3^n]    (1.298)

So, the spectral representation of T^n is given by:

T^n = Σ_{a=1}^{3} T_a^n n̂(a)⊗n̂(a)    (1.299)

Now, if we need the square root of the tensor T, it can easily be obtained from the spectral representation as:

√T = Σ_{a=1}^{3} √T_a n̂(a)⊗n̂(a)    (1.300)
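A minimal sketch of (1.300) in NumPy (not part of the original text), valid for a symmetric positive-definite tensor:

```python
import numpy as np

def tensor_sqrt(T):
    """Square root via the spectral representation (1.300):
    sqrt(T) = sum_a sqrt(T_a) n^(a) (x) n^(a)."""
    lam, V = np.linalg.eigh(T)
    return sum(np.sqrt(l) * np.outer(v, v) for l, v in zip(lam, V.T))

T = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 1.0]])
U = tensor_sqrt(T)
assert np.allclose(U @ U, T)
```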
Next, we show that a positive definite tensor has positive eigenvalues. For this purpose, consider a semi-positive definite tensor T, for which the condition x̂⋅T⋅x̂ ≥ 0 holds for all x̂ ≠ 0. Replacing the tensor by its spectral representation, we obtain:

x̂⋅T⋅x̂ ≥ 0
⇒ x̂⋅(Σ_{a=1}^{3} T_a n̂(a)⊗n̂(a))⋅x̂ ≥ 0    (1.301)
⇒ Σ_{a=1}^{3} T_a x̂⋅n̂(a)⊗n̂(a)⋅x̂ ≥ 0

Σ_{a=1}^{3} T_a (x̂⋅n̂(a))² ≥ 0  ⇒  T1(x̂⋅n̂(1))² + T2(x̂⋅n̂(2))² + T3(x̂⋅n̂(3))² ≥ 0    (1.302)

The above expression must hold for all x̂ ≠ 0. If we take x̂ = n̂(1), it reduces to T1(n̂(1)⋅n̂(1))² = T1 ≥ 0, and the same is true for T2 and T3. Thus, we have demonstrated that if a tensor is semi-positive definite, its eigenvalues are greater than or equal to zero, i.e. T1 ≥ 0, T2 ≥ 0, T3 ≥ 0. Therefore, a tensor is positive definite, i.e. x̂⋅T⋅x̂ > 0, if and only if its eigenvalues are positive, i.e. T1 > 0, T2 > 0, T3 > 0. Consequently, the trace of a positive definite tensor is greater than zero, and if the trace of a semi-positive definite tensor is zero, the tensor is the zero tensor.
The spectral representation of the fourth-order unit tensor I can be obtained starting from the definition in (1.169):

I = δ_ik δ_jl ê_i⊗ê_j⊗ê_k⊗ê_l = ê_i⊗ê_j⊗ê_i⊗ê_j = Σ_{a=1}^{3} Σ_{b=1}^{3} ê_a⊗ê_b⊗ê_a⊗ê_b    (1.303)

As I is an isotropic tensor, (see 1.5.8 Isotropic and Anisotropic Tensors), the representation in (1.303) is also valid in any orthonormal basis n̂(a), so:

I = Σ_{a=1}^{3} Σ_{b=1}^{3} n̂(a)⊗n̂(b)⊗n̂(a)⊗n̂(b)    (1.304)

and

I = δ_ij δ_kl ê_i⊗ê_j⊗ê_k⊗ê_l = ê_i⊗ê_i⊗ê_k⊗ê_k    (1.307)

I = Σ_{a,b=1}^{3} n̂(a)⊗n̂(a)⊗n̂(b)⊗n̂(b)    (1.308)
Solution:
It is true that

W⋅1 = W⋅Σ_{a=1}^{3} n̂(a)⊗n̂(a) = Σ_{a=1}^{3} (W⋅n̂(a))⊗n̂(a) = Σ_{a=1}^{3} (w∧n̂(a))⊗n̂(a) = Σ_{a,b=1}^{3} w_b (n̂(b)∧n̂(a))⊗n̂(a)

where we have applied the antisymmetric tensor property W⋅n̂ = w∧n̂, with w the axial vector associated with W. Expanding the above equation, we obtain:

W = w_b(n̂(b)∧n̂(1))⊗n̂(1) + w_b(n̂(b)∧n̂(2))⊗n̂(2) + w_b(n̂(b)∧n̂(3))⊗n̂(3)
  = w1(n̂(1)∧n̂(1))⊗n̂(1) + w2(n̂(2)∧n̂(1))⊗n̂(1) + w3(n̂(3)∧n̂(1))⊗n̂(1)
  + w1(n̂(1)∧n̂(2))⊗n̂(2) + w2(n̂(2)∧n̂(2))⊗n̂(2) + w3(n̂(3)∧n̂(2))⊗n̂(2)
  + w1(n̂(1)∧n̂(3))⊗n̂(3) + w2(n̂(2)∧n̂(3))⊗n̂(3) + w3(n̂(3)∧n̂(3))⊗n̂(3)

so that W can be written as W = Σ_{a,b=1; a≠b}^{3} w_ab n̂(a)⊗n̂(b). Then:

W⋅V = (Σ_{a,b=1; a≠b}^{3} w_ab n̂(a)⊗n̂(b)) ⋅ (Σ_{b=1}^{3} λ_b n̂(b)⊗n̂(b)) = Σ_{a,b=1; a≠b}^{3} λ_b w_ab n̂(a)⊗n̂(b)

and

V⋅W = (Σ_{a=1}^{3} λ_a n̂(a)⊗n̂(a)) ⋅ (Σ_{a,b=1; a≠b}^{3} w_ab n̂(a)⊗n̂(b)) = Σ_{a,b=1; a≠b}^{3} λ_a w_ab n̂(a)⊗n̂(b)

Then,

W⋅V − V⋅W = Σ_{a,b=1; a≠b}^{3} w_ab (λ_b − λ_a) n̂(a)⊗n̂(b)
Tr(T³) − I_T Tr(T²) + II_T Tr(T) − III_T Tr(1) = 0,  with Tr(1) = 3    (1.312)

⇒ III_T = ⅓[Tr(T³) − I_T Tr(T²) + II_T Tr(T)]

Replacing the values of the invariants I_T, II_T given by equation (1.269), we obtain:

III_T = ⅓ Tr(T³) − ½ Tr(T²) Tr(T) + ⅙ [Tr(T)]³    (1.313)

or, in indicial notation:

III_T = ⅓ T_ij T_jk T_ki − ½ T_ij T_ji T_kk + ⅙ T_ii T_jj T_kk    (1.314)
Problem 1.35: Based on the Cayley-Hamilton theorem, find the inverse of a tensor T in terms of tensor powers.

Solution: The Cayley-Hamilton theorem states that:

T³ − I_T T² + II_T T − III_T 1 = 0

Carrying out the dot product of the previous equation with the tensor T⁻¹, we obtain:

T³⋅T⁻¹ − I_T T²⋅T⁻¹ + II_T T⋅T⁻¹ − III_T 1⋅T⁻¹ = 0⋅T⁻¹
T² − I_T T + II_T 1 − III_T T⁻¹ = 0

⇒ T⁻¹ = (1/III_T)(T² − I_T T + II_T 1)
The Cayley-Hamilton theorem also applies to square matrices of order n. Let A_{n×n} be a square matrix. The characteristic determinant is given by:

|λ1_{n×n} − A| = 0    (1.315)

where 1_{n×n} is the identity matrix. Developing the determinant (1.315) we obtain:

λⁿ − I₁λⁿ⁻¹ + I₂λⁿ⁻² − ⋯ + (−1)ⁿ I_n = 0    (1.316)

where I₁, I₂, …, I_n are the invariants of A. In the particular case n = 3, the invariants are those obtained for a second-order tensor, i.e. I₁ = I_A, I₂ = II_A, I₃ = III_A. Applying the Cayley-Hamilton theorem it is true that:

Aⁿ − I₁Aⁿ⁻¹ + I₂Aⁿ⁻² − ⋯ + (−1)ⁿ I_n 1 = 0    (1.317)

By means of the relationship (1.317) we can obtain the inverse of the matrix A_{n×n} by multiplying all the terms by the inverse A⁻¹, i.e.:

AⁿA⁻¹ − I₁Aⁿ⁻¹A⁻¹ + I₂Aⁿ⁻²A⁻¹ − ⋯ + (−1)ⁿ I_n 1A⁻¹ = 0
⇒ Aⁿ⁻¹ − I₁Aⁿ⁻² + I₂Aⁿ⁻³ − ⋯ + (−1)ⁿ⁻¹ I_{n−1} 1 + (−1)ⁿ I_n A⁻¹ = 0    (1.318)

then

A⁻¹ = [(−1)ⁿ⁻¹ / I_n](Aⁿ⁻¹ − I₁Aⁿ⁻² + I₂Aⁿ⁻³ − ⋯ + (−1)ⁿ⁻¹ I_{n−1} 1)    (1.319)
Problem 1.36: Check the Cayley-Hamilton theorem using the second-order tensor whose Cartesian components are given by:

T = [5  0  0; 0  2  0; 0  0  1]

Solution: The Cayley-Hamilton theorem states that:

T³ − I_T T² + II_T T − III_T 1 = 0

where I_T = 5 + 2 + 1 = 8, II_T = 10 + 2 + 5 = 17, III_T = 10, and

T³ = [125  0  0; 0  8  0; 0  0  1] ;  T² = [25  0  0; 0  4  0; 0  0  1]

By applying the Cayley-Hamilton theorem we verify that it holds:

[125  0  0; 0  8  0; 0  0  1] − 8[25  0  0; 0  4  0; 0  0  1] + 17[5  0  0; 0  2  0; 0  0  1] − 10[1  0  0; 0  1  0; 0  0  1] = [0  0  0; 0  0  0; 0  0  0]
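A minimal NumPy sketch (not from the original text) verifying both the Cayley-Hamilton identity and the inverse formula of Problem 1.35:

```python
import numpy as np

T = np.diag([5.0, 2.0, 1.0])
I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# Cayley-Hamilton: T^3 - I_T T^2 + II_T T - III_T 1 = 0
residual = T @ T @ T - I1 * (T @ T) + I2 * T - I3 * np.eye(3)
assert np.allclose(residual, 0.0)

# Problem 1.35: T^{-1} = (T^2 - I_T T + II_T 1) / III_T
T_inv = (T @ T - I1 * T + I2 * np.eye(3)) / I3
assert np.allclose(T_inv, np.linalg.inv(T))
```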
[Figure 1.28: Norm of a second-order tensor. In the principal space (x'1, x'2, x'3), ‖T‖ = √(T : T) = √(T1² + T2² + T3²) = √(I_T² − 2 II_T).]

An immediate consequence of the isotropy of the unit tensor 1 is that any spherical tensor (α1) is also an isotropic tensor. So, if a second-order tensor is isotropic it is spherical, and vice versa.
Isotropic third-order tensor

An example of a third-order isotropic tensor is the Levi-Civita pseudo-tensor, defined in (1.182), which is not a “real” tensor in the strict meaning of the word. With reference to the transformation law for third-order tensor components, (see equation (1.248)), we can conclude that its components are unchanged by rotations.

Isotropic fourth-order tensor

The most general isotropic fourth-order tensor D can be written as a linear combination of the three fourth-order unit tensors:

D_ijkl = a₀ δ_ij δ_kl + a₁ δ_ik δ_jl + a₂ δ_il δ_jk    (1.329)
Problem 1.37: Let C be a fourth-order tensor whose components are given by:

C_ijkl = λδ_ij δ_kl + μ(δ_ik δ_jl + δ_il δ_jk)

where λ, μ are constant real numbers. Show that C is an isotropic tensor.

Solution:
Applying the transformation law for fourth-order tensor components,

C'_ijkl = a_im a_jn a_kp a_lq C_mnpq

and replacing the relation C_mnpq = λδ_mn δ_pq + μ(δ_mp δ_nq + δ_mq δ_np) in the above equation, we obtain:

C'_ijkl = a_im a_jn a_kp a_lq [λδ_mn δ_pq + μ(δ_mp δ_nq + δ_mq δ_np)]
       = λ a_in a_jn a_kq a_lq + μ(a_ip a_jq a_kp a_lq + a_iq a_jn a_kn a_lq)
       = λ δ_ij δ_kl + μ(δ_ik δ_jl + δ_il δ_jk)
       = C_ijkl

which proves that C is an isotropic tensor.
An immediate result of (1.330) is that the tensor S and its inverse S⁻¹ are coaxial tensors:

S⁻¹⋅S = S⋅S⁻¹ = 1

S = Σ_{a=1}^{3} S_a n̂(a)⊗n̂(a) ;  S⁻¹ = Σ_{a=1}^{3} (1/S_a) n̂(a)⊗n̂(a)    (1.332)

where S_a and 1/S_a are the eigenvalues of S and S⁻¹, respectively.
If S and T are coaxial symmetric tensors, the resulting tensor ( S ⋅ T ) becomes another
symmetric tensor. To prove this we start from the definition of coaxial tensors:
T ⋅S = S ⋅ T ⇒ T ⋅S − S ⋅ T = 0 ⇒ T ⋅ S − ( T ⋅ S ) T = 0 ⇒ 2( T ⋅ S ) skew = 0 (1.333)
Then, if the antisymmetric part of a tensor is a zero tensor, it follows that this tensor is
symmetric:
( T ⋅ S ) skew = 0 ⇒ ( T ⋅ S ) ≡ ( T ⋅ S ) sym (1.334)
since det(F) ≠ 0. After that, given an orthonormal basis N̂(a), we can obtain:

F⁻¹⋅F = 1 = Σ_{a=1}^{3} N̂(a)⊗N̂(a)

⇒ F = F⋅1 = F⋅Σ_{a=1}^{3} N̂(a)⊗N̂(a) = Σ_{a=1}^{3} (F⋅N̂(a))⊗N̂(a)    (1.335)

⇒ F = Σ_{a=1}^{3} λ_a n̂(a)⊗N̂(a)
[Figure 1.29: Projecting F onto N̂(a). Each basis vector is mapped according to f(N̂(a)) = F⋅N̂(a) = |F⋅N̂(a)| n̂(a) = λ_a n̂(a), for a = 1, 2, 3.]

Note that for an arbitrary orthonormal basis N̂(a) the new basis n̂(a) will not necessarily be orthonormal. We seek a basis N̂(a) such that the new basis n̂(a) is orthonormal, (see Figure 1.29), i.e. f(N̂(1))⋅f(N̂(2)) = 0, f(N̂(2))⋅f(N̂(3)) = 0, f(N̂(3))⋅f(N̂(1)) = 0.
3
where we have defined the symmetric second-order tensor V = ∑ λ a nˆ ( a ) ⊗ nˆ ( a ) . By
a =1
comparing the spectral representation of U with V , we can conclude that they have the
same eigenvalues but different eigenvectors, and they are related by nˆ ( a ) = R ⋅ Nˆ ( a ) .
With reference to the above considerations, we can define the polar decomposition:
12⋅3F = F T ⋅ R ⋅ U = (R T ⋅ F ) T ⋅ U = UT ⋅ U = U 2 ⇒ U= ± FT ⋅F = ± C
T
F (1.339)
C
Since det ( F ) ≠ 0 , the tensors C and b are positive definite symmetric tensors, (see
Problem 1.25), which implies that the eigenvalues of C and b are all real and positive.
However, up to now, det ( F ) ≠ 0 is the only restriction imposed on the tensor F .
Therefore, we have the following possibilities:
If det ( F ) > 0
In this scenario, we have det ( F ) = det (R )det(U) = det ( V )det(R ) > 0 , which results in the
following cases:
= 2A_ji (ê_i ⊗ ê_j) = 2A^T

We leave the reader with the following demonstration:

∂[Tr(A³)]/∂A = 3(A²)^T    (1.345)

Then, for a symmetric second-order tensor C, it is true that:

∂[Tr(C)]/∂C = 1 ;  ∂[Tr(C)²]/∂C = 2 Tr(C) 1 ;  ∂[Tr(C²)]/∂C = 2C^T = 2C ;  ∂[Tr(C³)]/∂C = 3(C²)^T = 3C²
Moreover, the derivative of the Frobenius norm of C is given by:

∂‖C‖/∂C = ∂√(C : C)/∂C = ∂√(Tr(C⋅C^T))/∂C = ∂√(Tr(C²))/∂C = ½[Tr(C²)]^(−1/2) ∂[Tr(C²)]/∂C = ½[Tr(C²)]^(−1/2) 2C    (1.346)

or:

∂‖C‖/∂C = C/√(Tr(C²)) = C/‖C‖    (1.347)
Differentiating the identity C⁻¹_iq C_qj = δ_ij with respect to C_kl gives:

⇒ ∂(C⁻¹_ir)/∂C_kl = −C⁻¹_iq [∂(C_qj)/∂C_kl] C⁻¹_jr

whereas, since C is symmetric, C_qj = ½(C_qj + C_jq), so we can conclude that:

∂(C⁻¹_ir)/∂C_kl = −C⁻¹_iq ½[∂(C_qj + C_jq)/∂C_kl] C⁻¹_jr
= −½ C⁻¹_iq [δ_qk δ_jl + δ_jk δ_ql] C⁻¹_jr = −½[C⁻¹_iq δ_qk δ_jl C⁻¹_jr + C⁻¹_iq δ_jk δ_ql C⁻¹_jr]    (1.351)
= −½[C⁻¹_ik C⁻¹_lr + C⁻¹_il C⁻¹_kr]

Or in tensorial notation:

∂C⁻¹/∂C = −½[C⁻¹ ⊗̄ C⁻¹ + C⁻¹ ⊗̲ C⁻¹]    (1.352)

where (A ⊗̄ B)_ijkl = A_ik B_jl and (A ⊗̲ B)_ijkl = A_il B_jk.

NOTE: Note that, if we had not replaced the symmetric part of C_qj in (1.351), we would have obtained the non-symmetrized result ∂(C⁻¹_ir)/∂C_kl = −C⁻¹_ik C⁻¹_lr, which lacks the minor symmetry in (kl).
∂[II_T]/∂T = ∂{½[(Tr T)² − Tr(T²)]}/∂T = ½{2(Tr T) ∂[Tr(T)]/∂T − ∂[Tr(T²)]/∂T}    (1.354)
= ½[2(Tr T)1 − 2T^T] = Tr(T)1 − T^T
Next, we apply the Cayley-Hamilton theorem so as to represent T as:

T³⋅T⁻² − I_T T²⋅T⁻² + II_T T⋅T⁻² − III_T 1⋅T⁻² = 0
T − I_T 1 + II_T T⁻¹ − III_T T⁻² = 0    (1.355)
⇒ T = I_T 1 − II_T T⁻¹ + III_T T⁻²

By substituting (1.355) into the equation in (1.354), we obtain:

∂[II_T]/∂T = Tr(T)1 − T^T = Tr(T)1 − (I_T 1 − II_T T⁻¹ + III_T T⁻²)^T = (II_T T⁻¹ − III_T T⁻²)^T    (1.356)
To find the partial derivative of the third invariant we can start from the definition given in (1.313):

∂[III_T]/∂T = ∂/∂T{⅓ Tr(T³) − ½ Tr(T²) Tr(T) + ⅙ [Tr(T)]³}
= ⅓ 3(T²)^T − ½ (∂[Tr(T²)]/∂T) Tr(T) − ½ Tr(T²) ∂[Tr(T)]/∂T + ⅙ 3[Tr(T)]² 1
= (T²)^T − Tr(T) T^T − ½ Tr(T²) 1 + ½ [Tr(T)]² 1    (1.357)
= (T²)^T − Tr(T) T^T + ½{[Tr(T)]² − Tr(T²)} 1
= (T²)^T − I_T T^T + II_T 1
Once again using the Cayley-Hamilton theorem we obtain:

T³⋅T⁻¹ − I_T T²⋅T⁻¹ + II_T T⋅T⁻¹ − III_T 1⋅T⁻¹ = 0
T² − I_T T + II_T 1 − III_T T⁻¹ = 0    (1.358)
⇒ III_T T⁻¹ = T² − I_T T + II_T 1

and the transpose:

(III_T T⁻¹)^T = (T² − I_T T + II_T 1)^T = (T²)^T − I_T T^T + II_T 1    (1.359)

By comparing (1.357) with (1.359) we find another way to express the derivative of III_T with respect to T, i.e.:

∂[III_T]/∂T = III_T (T⁻¹)^T = III_T T^(−T)    (1.360)
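A quick finite-difference check of (1.360) (a minimal NumPy sketch, not from the original text; the random tensor is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))

# Analytical derivative (1.360): d(det T)/dT = det(T) T^{-T}
dIII = np.linalg.det(T) * np.linalg.inv(T).T

# Central finite differences, component by component
eps, dIII_fd = 1e-6, np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        Tp = T.copy(); Tp[i, j] += eps
        Tm = T.copy(); Tm[i, j] -= eps
        dIII_fd[i, j] = (np.linalg.det(Tp) - np.linalg.det(Tm)) / (2 * eps)

assert np.allclose(dIII, dIII_fd, atol=1e-5)
```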
Let us assume a second-order tensor depends on time t, i.e. T = T(t). Then we define the first and second time derivatives of T, respectively, as:

DT/Dt = Ṫ ;  D²T/Dt² = T̈    (1.361)

The time derivative of a tensor determinant is given by:

D[det(T)]/Dt = (DT_ij/Dt) cof(T_ij)    (1.362)

where cof(T_ij) is the cofactor of T_ij, defined by [cof(T_ij)]^T = det(T) (T⁻¹)_ij.
Problem 1.38: Consider that J = [det(b)]^(1/2) = (III_b)^(1/2), where b is a symmetric second-order tensor, i.e. b = b^T. Obtain the partial derivatives of J and ln(J) with respect to b.

Solution:

∂J/∂b = ∂(III_b)^(1/2)/∂b = ½(III_b)^(−1/2) ∂III_b/∂b = ½(III_b)^(−1/2) III_b b^(−T) = ½(III_b)^(1/2) b⁻¹ = ½ J b⁻¹

∂[ln(J)]/∂b = ∂[ln(III_b)^(1/2)]/∂b = [1/(2 III_b)] ∂III_b/∂b = ½ b⁻¹
T^dev = T − [Tr(T)/3] 1 = T − T_m 1    (1.364)

For the following operations we consider that T is a symmetric tensor, T = T^T; under this condition the deviatoric tensor components become T^dev_ij = T_ij − T_m δ_ij. The first invariant of the deviatoric tensor is:

I_{T^dev} = Tr(T^dev) = Tr[T − (Tr(T)/3)1] = Tr(T) − (Tr(T)/3) Tr(1) = 0    (1.366)

Thus, we can conclude that the trace of any deviatoric tensor is equal to zero.

For simplicity we can use the principal space to obtain the second and third invariants of the deviatoric tensor. In the principal space the components of T are given by:

T_ij = [T1  0  0; 0  T2  0; 0  0  T3]    (1.367)
We could also have obtained the above result by starting directly from the definition of the second invariant of a tensor given in (1.269), i.e.:

II_{T^dev} = ½{(I_{T^dev})² − Tr[(T^dev)²]} = −½{Tr[(T^dev)²]}
= −½{Tr[(T − T_m 1)²]}
= −½{Tr[T² − 2T_m T⋅1 + T_m² 1]}
= ½[−Tr(T²) + 2T_m Tr(T) − T_m² Tr(1)]    (1.370)
= ½[−Tr(T²) + 2(I_T/3) I_T − 3(I_T²/9)]
= −½ Tr(T²) + I_T²/6

Observing that Tr(T²) = T1² + T2² + T3² = I_T² − 2 II_T, (see Problem 1.31), the equation (1.370) becomes:

II_{T^dev} = −½(I_T² − 2 II_T) + I_T²/6 = II_T − I_T²/3 = ⅓(3 II_T − I_T²)    (1.371)
[Figure: Additive split of the tensor into its spherical and deviatoric parts. The components T_ij referred to the axes x1, x2, x3 decompose as T_ij = T_m δ_ij + T^dev_ij.]
II_{T^dev} = −½ Tr[(T^dev)²] = −½ Tr(T^dev⋅T^dev) = −½ T^dev : T^dev = −½ T^dev_ij T^dev_ji    (1.372)

II_{T^dev} = −½[(T11^dev)² + (T22^dev)² + (T33^dev)² + 2(T12^dev)² + 2(T13^dev)² + 2(T23^dev)²]    (1.373)

II_{T^dev} = −½ T^dev_ij T^dev_ji = −½[(T1^dev)² + (T2^dev)² + (T3^dev)²]    (1.374)
or

II_{T^dev} = −½[(T22^dev)² − 2T22^dev T33^dev + (T33^dev)² + (T11^dev)² − 2T11^dev T33^dev + (T33^dev)² + (T11^dev)² − 2T11^dev T22^dev + (T22^dev)²] + (T11^dev)² + (T22^dev)² + (T33^dev)² − (T12^dev)² − (T23^dev)² − (T13^dev)²    (1.376)

Note that, from equation (1.373), we can state that:

(T11^dev)² + (T22^dev)² + (T33^dev)² = −2 II_{T^dev} − 2(T12^dev)² − 2(T13^dev)² − 2(T23^dev)²    (1.377)

Substituting (1.377) into (1.376), we find:

II_{T^dev} = −⅙[(T22^dev − T33^dev)² + (T11^dev − T33^dev)² + (T11^dev − T22^dev)²] − (T12^dev)² − (T23^dev)² − (T13^dev)²    (1.378)
Moreover, if we consider the principal space we obtain:

II_{T^dev} = −⅙[(T2^dev − T3^dev)² + (T1^dev − T3^dev)² + (T1^dev − T2^dev)²]    (1.379)
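A minimal NumPy sketch (not from the original text) computing the deviatoric part and checking the invariant relations (1.366) and (1.371):

```python
import numpy as np

def deviatoric_invariants(T):
    """Deviatoric part (1.364) and its second invariant (1.372),
    II_Tdev = -(1/2) Tdev : Tdev, for a symmetric tensor T."""
    Tdev = T - np.trace(T) / 3.0 * np.eye(3)
    II_dev = -0.5 * np.tensordot(Tdev, Tdev)   # double contraction
    return Tdev, II_dev

T = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 1.0]])
Tdev, II_dev = deviatoric_invariants(T)
assert np.isclose(np.trace(Tdev), 0.0)                 # (1.366)
I_T = np.trace(T)
II_T = 0.5 * (I_T**2 - np.trace(T @ T))
assert np.isclose(II_dev, (3 * II_T - I_T**2) / 3.0)   # (1.371)
```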
s : ∂s/∂σ = s

To show that two tensors are coaxial, we must prove that σ^dev⋅σ = σ⋅σ^dev:

σ⋅σ^dev = σ⋅(σ − σ^sph) = σ⋅σ − σ⋅σ^sph = σ⋅σ − σ⋅(I_σ/3)1
= σ⋅σ − (I_σ/3) σ⋅1 = σ⋅σ − (I_σ/3) 1⋅σ
= [σ − (I_σ/3)1]⋅σ = σ^dev⋅σ

Therefore, we have shown that σ and σ^dev are coaxial tensors. In other words, they have the same principal directions.
Tensor-valued tensor functions can be scalar-valued, vector-valued, or higher-order-tensor-valued. As examples of scalar-valued tensor functions we can list:

Ψ = Ψ(T) = det(T)
Ψ = Ψ(T, S) = T : S    (1.382)

where T and S are second-order tensors. Additionally, as an example of a second-order-tensor-valued function we have:

Π = Π(T) = α1 + βT    (1.383)

where α and β are scalars.
the function at the application point x = a. We can extrapolate that definition for use on tensors. For example, let us suppose we have a scalar-valued tensor function ψ in terms of a second-order tensor E; then we can approximate ψ(E) as:

ψ(E) ≈ ψ(E₀) + [∂ψ(E₀)/∂E_ij](E_ij − E₀ij) + ½[∂²ψ(E₀)/(∂E_ij ∂E_kl)](E_ij − E₀ij)(E_kl − E₀kl) + ⋯
     ≈ ψ₀ + [∂ψ(E₀)/∂E] : (E − E₀) + ½(E − E₀) : [∂²ψ(E₀)/(∂E⊗∂E)] : (E − E₀) + ⋯    (1.384)

A second-order-tensor-valued function S(E) can be approximated as:

S(E) ≈ S(E₀) + [∂S(E₀)/∂E] : (E − E₀) + ½(E − E₀) : [∂²S(E₀)/(∂E⊗∂E)] : (E − E₀) + ⋯    (1.385)
Other tensor algebraic expressions can be represented by series, e.g.:

exp S = 1 + S + S²/2! + S³/3! + ⋯
ln(1 + S) = S − ½S² + ⅓S³ − ⋯    (1.386)
sin S = S − S³/3! + S⁵/5! − ⋯
With reference to the spectral representation of a symmetric second-order tensor S, it also holds that:

exp S = Σ_{a=1}^{3} (1 + λ_a + λ_a²/2! + λ_a³/3! + ⋯) n̂(a)⊗n̂(a) = Σ_{a=1}^{3} exp(λ_a) n̂(a)⊗n̂(a)    (1.387)

ln(1 + S) = Σ_{a=1}^{3} (λ_a − ½λ_a² + ⅓λ_a³ − ⋯) n̂(a)⊗n̂(a) = Σ_{a=1}^{3} ln(1 + λ_a) n̂(a)⊗n̂(a)

where λ_a and n̂(a) are the eigenvalues and eigenvectors, respectively, of the tensor S.
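A minimal sketch (not from the original text) of (1.387): the spectral form of exp S agrees with a general-purpose matrix exponential for symmetric tensors:

```python
import numpy as np
from scipy.linalg import expm

def tensor_exp(S):
    """exp(S) for a symmetric tensor via the spectral form (1.387)."""
    lam, V = np.linalg.eigh(S)
    return sum(np.exp(l) * np.outer(v, v) for l, v in zip(lam, V.T))

S = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.0],
              [0.0, 0.0, 0.5]])
assert np.allclose(tensor_exp(S), expm(S))
```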
Π*(T) = Q⋅Π(T)⋅Q^T = Π(Q⋅T⋅Q^T) = Π(T*)    (1.388)

We can show that Π(T) has the same principal directions as T, i.e. Π(T) and T are coaxial tensors. To demonstrate this, we take the components of T in the principal space:

(T)_ij = [λ1  0  0; 0  λ2  0; 0  0  λ3]    (1.389)
After carrying out the calculation with the matrices in (1.391), we obtain:

Π* = [Π11  −Π12  −Π13; −Π12  Π22  −Π23; −Π13  −Π23  Π33] = [Π11  Π12  Π13; Π12  Π22  Π23; Π13  Π23  Π33] = Π    (1.393)

The two representations can only coincide if the off-diagonal terms vanish, hence:

⇒ Π* = [Π11  0  0; 0  Π22  0; 0  0  Π33]
Note that T and Π have the same principal directions n̂(i). Then, we can put the following set of equations together:

1 = n̂(1)⊗n̂(1) + n̂(2)⊗n̂(2) + n̂(3)⊗n̂(3)
T = λ1 n̂(1)⊗n̂(1) + λ2 n̂(2)⊗n̂(2) + λ3 n̂(3)⊗n̂(3)    (1.397)
T² = λ1² n̂(1)⊗n̂(1) + λ2² n̂(2)⊗n̂(2) + λ3² n̂(3)⊗n̂(3)

Solving the set above for n̂(a)⊗n̂(a) ≡ M(a) as a function of the tensor T, we obtain:

M(1) = λ2λ3/[(λ1−λ3)(λ1−λ2)] 1 − (λ2+λ3)/[(λ1−λ3)(λ1−λ2)] T + T²/[(λ1−λ3)(λ1−λ2)]
M(2) = λ1λ3/[(λ2−λ1)(λ2−λ3)] 1 − (λ1+λ3)/[(λ2−λ1)(λ2−λ3)] T + T²/[(λ2−λ1)(λ2−λ3)]    (1.398)
M(3) = λ1λ2/[(λ3−λ1)(λ3−λ2)] 1 − (λ1+λ2)/[(λ3−λ1)(λ3−λ2)] T + T²/[(λ3−λ1)(λ3−λ2)]
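The projectors (1.398) can equivalently be written as products, M(a) = Π_{b≠a}(T − λ_b 1)/(λ_a − λ_b), which expand to the same expressions; the sketch below (an assumption of this edit, not from the original text, valid for distinct eigenvalues) verifies the defining properties:

```python
import numpy as np

def eigenprojector(T, lams, a):
    """M^(a) of (1.398) in product form, for distinct eigenvalues."""
    M = np.eye(3)
    for b in range(3):
        if b != a:
            M = M @ (T - lams[b] * np.eye(3)) / (lams[a] - lams[b])
    return M

T = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 1.0]])
lams = np.linalg.eigvalsh(T)
Ms = [eigenprojector(T, lams, a) for a in range(3)]
assert np.allclose(sum(Ms), np.eye(3))                        # (1.397), first line
assert np.allclose(sum(l * M for l, M in zip(lams, Ms)), T)   # spectral representation
```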
Π*(T) = Q⋅Π(T)⋅Q^T = Q⋅(Φ₀1 + Φ₁T + Φ₂T²)⋅Q^T
= Φ₀ Q⋅1⋅Q^T + Φ₁ Q⋅T⋅Q^T + Φ₂ Q⋅T²⋅Q^T = Φ₀1 + Φ₁T* + Φ₂T*²    (1.401)
= Π(T*)
Now, by substituting the values ∂I_C/∂C = 1, ∂II_C/∂C = I_C 1 − C and ∂III_C/∂C = III_C C⁻¹ into the equation in (1.407), we obtain:

Ψ,_C = (∂Ψ/∂I_C) 1 + (∂Ψ/∂II_C)(I_C 1 − C) + (∂Ψ/∂III_C) III_C C⁻¹    (1.409)

Ψ,_C = (∂Ψ/∂I_C + I_C ∂Ψ/∂II_C) 1 − (∂Ψ/∂II_C) C + (∂Ψ/∂III_C) III_C C⁻¹    (1.410)
Another way to express the relation (1.410) is obtained by substituting ∂I_C/∂C = 1, ∂II_C/∂C = I_C 1 − C and ∂III_C/∂C = C² − I_C C + II_C 1 into the equation in (1.407), thus:

Ψ,_C = (∂Ψ/∂I_C + I_C ∂Ψ/∂II_C + II_C ∂Ψ/∂III_C) 1 − (∂Ψ/∂II_C + I_C ∂Ψ/∂III_C) C + (∂Ψ/∂III_C) C²    (1.411)

If we now consider the values ∂I_C/∂C = 1, ∂II_C/∂C = II_C C⁻¹ − III_C C⁻², ∂III_C/∂C = III_C C⁻¹ in the equation in (1.407), we obtain:

Ψ,_C = (∂Ψ/∂I_C) 1 + (II_C ∂Ψ/∂II_C + III_C ∂Ψ/∂III_C) C⁻¹ − (∂Ψ/∂II_C) III_C C⁻²    (1.412)
If we now observe both I_C = I_b, II_C = II_b, III_C = III_b, and the equation in (1.410), we can draw the conclusion that:

Ψ,_b = (∂Ψ/∂I_b + I_b ∂Ψ/∂II_b) 1 − (∂Ψ/∂II_b) b + (∂Ψ/∂III_b) III_b b⁻¹    (1.413)
C = F^T⋅F    (1.415)
⇒ F⋅1⋅F^T = F⋅F^T = b
⇒ F⋅C⋅F^T = F⋅F^T⋅F⋅F^T = b⋅b = b²

C⁻¹ = F⁻¹⋅b⁻¹⋅F    (1.416)
⇒ F⋅C⁻¹⋅F^T = F⋅F⁻¹⋅b⁻¹⋅F⋅F^T = b⁻¹⋅b

The equation (1.414) can be rewritten as:

F⋅Ψ,_C⋅F^T = (∂Ψ/∂I_C + I_C ∂Ψ/∂II_C) b − (∂Ψ/∂II_C) b² + (∂Ψ/∂III_C) III_C b⁻¹⋅b
           = [(∂Ψ/∂I_C + I_C ∂Ψ/∂II_C) 1 − (∂Ψ/∂II_C) b + (∂Ψ/∂III_C) III_C b⁻¹]⋅b    (1.417)

In light of the equations in (1.413) and (1.417), we can draw the conclusion that:

F⋅Ψ,_C⋅F^T = [(∂Ψ/∂I_b + I_b ∂Ψ/∂II_b) 1 − (∂Ψ/∂II_b) b + (∂Ψ/∂III_b) III_b b⁻¹]⋅b    (1.418)
           = Ψ,_b⋅b = b⋅Ψ,_b

Therefore, we can draw the conclusion that Ψ,_C and U are coaxial tensors.
Let A be a symmetric second-order tensor, and Ψ = Ψ(A) a scalar-valued tensor function. The following relationships hold:

Ψ,_b = 2b⋅Ψ,_A   for A = b^T⋅b
Ψ,_b = 2Ψ,_A⋅b   for A = b⋅b^T    (1.424)
Ψ,_b = 2b⋅Ψ,_A = 2Ψ,_A⋅b = b⋅Ψ,_A + Ψ,_A⋅b   for A = b⋅b and b = b^T
When dealing with symmetric tensors, it may be advantageous to work only with the independent components. For example, a symmetric second-order tensor has 6 independent components.

As we have seen before, a fourth-order tensor C that presents minor symmetry, i.e. C_ijkl = C_jikl = C_ijlk = C_jilk, has 6 × 6 = 36 independent components: the symmetry in (ij) leaves 6 independent pairs, and the symmetry in (kl) leaves another 6. In Voigt notation we can represent these components in a 6-by-6 matrix as:

[C] = [C1111  C1122  C1133  C1112  C1123  C1113;
       C2211  C2222  C2233  C2212  C2223  C2213;
       C3311  C3322  C3333  C3312  C3323  C3313;
       C1211  C1222  C1233  C1212  C1223  C1213;
       C2311  C2322  C2333  C2312  C2323  C2313;
       C1311  C1322  C1333  C1312  C1323  C1313]    (1.427)

If in addition to minor symmetry the tensor also has major symmetry, i.e. C_ijkl = C_klij, the number of independent components reduces to 21. One can easily memorize the order of the components in the matrix [C] by considering the order of the second-order tensor components in Voigt notation, i.e.:

[(11)  (22)  (33)  (12)  (23)  (13)]    (1.428)
δ_ij ≡ 1 = [1  0  0; 0  1  0; 0  0  1]   →(Voigt)→   {δ} = [1  1  1  0  0  0]^T    (1.429)
In subsection 1.5.2.5.1 (Unit Tensors) we defined three fourth-order unit tensors, with components δ_ik δ_jl, δ_il δ_jk and δ_ij δ_kl, among which only the one with components δ_ij δ_kl defines a symmetric tensor. Its representation in Voigt notation can be evaluated by observing how a symmetric fourth-order tensor is represented in (1.427), thus:

I_ijkl = δ_ij δ_kl   →(Voigt)→   [I] = [1  1  1  0  0  0; 1  1  1  0  0  0; 1  1  1  0  0  0; 0  0  0  0  0  0; 0  0  0  0  0  0; 0  0  0  0  0  0]    (1.430)
In Voigt notation the projection b_i = T_ij n_j reads:

{b} = [N]^T {T}    (1.434)

where

[N]^T = [n1  0  0  n2  0  n3; 0  n2  0  n1  n3  0; 0  0  n3  0  n2  n1] ;  {T} = [T11  T22  T33  T12  T23  T13]^T
{E'} = [N]{E}    (1.440)

where

[N] = [a11²  a12²  a13²  a11a12  a12a13  a11a13;
       a21²  a22²  a23²  a21a22  a22a23  a21a23;
       a31²  a32²  a33²  a31a32  a32a33  a31a33;
       2a21a11  2a22a12  2a23a13  (a11a22 + a12a21)  (a13a22 + a12a23)  (a13a21 + a11a23);
       2a31a21  2a32a22  2a33a23  (a31a22 + a32a21)  (a33a22 + a32a23)  (a33a21 + a31a23);
       2a31a11  2a32a12  2a33a13  (a31a12 + a32a11)  (a33a12 + a32a13)  (a33a11 + a31a13)]    (1.441)

The matrices (1.439) and (1.441) are not orthogonal matrices, i.e. [M]⁻¹ ≠ [M]^T and [N]⁻¹ ≠ [N]^T. However, it is possible to show that [M]⁻¹ = [N]^T.
where A is the transformation matrix between the original system and the principal space, made up of the eigenvectors n̂(a). The above equation can be rewritten in terms of components as follows:

[T11  T12  T13; T12  T22  T23; T13  T23  T33] = A^T[T1  0  0; 0  0  0; 0  0  0]A + A^T[0  0  0; 0  T2  0; 0  0  0]A + A^T[0  0  0; 0  0  0; 0  0  T3]A    (1.443)

or

[T11  T12  T13; T12  T22  T23; T13  T23  T33] = T1[a11²  a11a12  a11a13; a11a12  a12²  a12a13; a11a13  a12a13  a13²] + T2[a21²  a21a22  a21a23; a21a22  a22²  a22a23; a21a23  a22a23  a23²] + T3[a31²  a31a32  a31a33; a31a32  a32²  a32a33; a31a33  a32a33  a33²]    (1.444)
By regarding how second-order tensors are represented in Voigt notation, as in (1.438), the spectral representation of a second-order tensor in Voigt notation becomes:

{T} = [T11; T22; T33; T12; T23; T13] = T1[a11²; a12²; a13²; a11a12; a12a13; a11a13] + T2[a21²; a22²; a23²; a21a22; a22a23; a21a23] + T3[a31²; a32²; a33²; a31a32; a32a33; a31a33]    (1.445)
[T11^dev; T22^dev; T33^dev; T12^dev; T23^dev; T13^dev] = ⅓[2  −1  −1  0  0  0; −1  2  −1  0  0  0; −1  −1  2  0  0  0; 0  0  0  3  0  0; 0  0  0  0  3  0; 0  0  0  0  0  3][T11; T22; T33; T12; T23; T13]    (1.447)
Problem 1.40: Let T(x, t) be a symmetric second-order tensor expressed in terms of position x and time t, whose components along the direction x3 are equal to zero, i.e. T13 = T23 = T33 = 0.

NOTE: In the next section we will define T(x, t) as a tensor field, i.e. the value of T depends on position and time. As we will see later, if the tensor is independent of one direction at all points x, e.g. if T(x, t) is independent of the x3-direction, (see Figure 1.31), the problem becomes a two-dimensional problem (plane state) and is greatly simplified. ■
[Figure 1.31: A two-dimensional problem (2D). The nonzero components T11, T22, T12 act in the x1–x2 plane and are independent of x3.]
[Figure: Components of T at a point P in the original system (x1, x2) and in the system (x'1, x'2) rotated by the angle θ, related by T' = A T A^T and T = A^T T' A.]
[T'11; T'22; T'12] = [cos²θ  sin²θ  2 sinθ cosθ; sin²θ  cos²θ  −2 sinθ cosθ; −sinθ cosθ  sinθ cosθ  cos²θ − sin²θ][T11; T22; T12]    (1.450)

Making use of the trigonometric identities 2 cosθ sinθ = sin 2θ, cos²θ − sin²θ = cos 2θ, sin²θ = (1 − cos 2θ)/2, cos²θ = (1 + cos 2θ)/2, (1.450) becomes:

[T'11; T'22; T'12] = [(1 + cos 2θ)/2  (1 − cos 2θ)/2  sin 2θ; (1 − cos 2θ)/2  (1 + cos 2θ)/2  −sin 2θ; −(sin 2θ)/2  (sin 2θ)/2  cos 2θ][T11; T22; T12]
Explicitly, the above components are given by:

T'11 = (1 + cos 2θ)/2 T11 + (1 − cos 2θ)/2 T22 + T12 sin 2θ
T'22 = (1 − cos 2θ)/2 T11 + (1 + cos 2θ)/2 T22 − T12 sin 2θ
T'12 = −(sin 2θ)/2 T11 + (sin 2θ)/2 T22 + T12 cos 2θ

Reordering the previous equations, we obtain:

T'11 = (T11 + T22)/2 + (T11 − T22)/2 cos 2θ + T12 sin 2θ
T'22 = (T11 + T22)/2 − (T11 − T22)/2 cos 2θ − T12 sin 2θ    (1.451)
T'12 = −(T11 − T22)/2 sin 2θ + T12 cos 2θ
b) Recalling that the principal directions are characterized by the absence of tangential components, i.e. T'_ij = 0 for i ≠ j, in order to find the principal directions in the plane we set T'12 = 0, hence:

T'12 = −(T11 − T22)/2 sin 2θ + T12 cos 2θ = 0 ⇒ (T11 − T22)/2 sin 2θ = T12 cos 2θ
⇒ sin 2θ/cos 2θ = 2T12/(T11 − T22) ⇒ tan(2θ) = 2T12/(T11 − T22)

Then, the angle corresponding to the principal direction is:

θ = ½ arctan[2T12/(T11 − T22)]    (1.452)
To find the principal values (eigenvalues) we must solve the following characteristic equation:

|T11 − T  T12; T12  T22 − T| = 0 ⇒ T² − T(T11 + T22) + (T11 T22 − T12²) = 0

T(1,2) = [(T11 + T22) ± √((T11 + T22)² − 4(T11 T22 − T12²))]/2

By rearranging the above equation we obtain the principal values for the two-dimensional case as:

T(1,2) = (T11 + T22)/2 ± √{[(T11 − T22)/2]² + T12²}    (1.453)
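A minimal sketch of (1.452) and (1.453) in NumPy (not from the original text; the formula for the angle assumes T11 ≠ T22, as in the text):

```python
import numpy as np

def principal_2d(T11, T22, T12):
    """Principal values (1.453) and principal angle (1.452), plane state."""
    center = 0.5 * (T11 + T22)
    radius = np.hypot(0.5 * (T11 - T22), T12)
    theta = 0.5 * np.degrees(np.arctan(2 * T12 / (T11 - T22)))  # assumes T11 != T22
    return center + radius, center - radius, theta

T1, T2, theta = principal_2d(1.0, 2.0, -4.0)
print(T1, T2, theta)   # ~5.5311, ~-2.5311, 41.4375 degrees (Problem 1.40 values)
```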
c) We directly apply equation (1.451) to evaluate the components T'_ij (i, j = 1, 2), where T11 = 1, T22 = 2, T12 = −4 and θ = 45°, i.e.:

T'11 = (1 + 2)/2 + (1 − 2)/2 cos 90° − 4 sin 90° = −2.5
T'22 = (1 + 2)/2 − (1 − 2)/2 cos 90° + 4 sin 90° = 5.5
T'12 = −(1 − 2)/2 sin 90° − 4 cos 90° = 0.5

And the angle corresponding to the principal direction is:

θ = ½ arctan[2T12/(T11 − T22)] = ½ arctan[2 × (−4)/(1 − 2)] ⇒ θ = 41.4375°
The principal values of T(x, t) can be evaluated as follows:

T(1,2) = (T11 + T22)/2 ± √{[(T11 − T22)/2]² + T12²} ⇒ T1 = 5.5311, T2 = −2.5311
d) By referring to equation (1.451) and varying θ from 0° to 360°, we can obtain different values of T'11, T'22, T'12, illustrated in the following graph:

[Graph: components T'11, T'22, T'12 versus θ. The extreme normal values T1 = 5.5311 and T2 = −2.5311 occur at θ = 41.437° and θ = 131.437°, where T'12 = 0; the maximum tangential component TS max = 4.0311 occurs at θ = 86.437°.]
[Figure: Examples of fields at time t = t1: a quantity T(x, t1) takes different values T1, …, T8 at different points x, and a vector field v(x, t1) assigns a vector to each point.]
1.8.2 Gradient

The gradient of a scalar field

The gradient ∇ₓφ, or grad φ, is defined by:

dφ = ∇ₓφ ⋅ dx    (1.460)

where the operator ∇ₓ is known as the Nabla symbol. Expressing the equation (1.460) in the Cartesian basis we obtain:

(∂φ/∂x1)dx1 + (∂φ/∂x2)dx2 + (∂φ/∂x3)dx3 = [(∇ₓφ)_{x1} ê1 + (∇ₓφ)_{x2} ê2 + (∇ₓφ)_{x3} ê3] ⋅ [(dx1)ê1 + (dx2)ê2 + (dx3)ê3]    (1.461)

Evaluating the above scalar product we find:

(∂φ/∂x1)dx1 + (∂φ/∂x2)dx2 + (∂φ/∂x3)dx3 = (∇ₓφ)_{x1}dx1 + (∇ₓφ)_{x2}dx2 + (∇ₓφ)_{x3}dx3    (1.462)

Therefore, we can draw the conclusion that the components of ∇ₓφ in the Cartesian basis are:

(∇ₓφ)1 ≡ ∂φ/∂x1 ;  (∇ₓφ)2 ≡ ∂φ/∂x2 ;  (∇ₓφ)3 ≡ ∂φ/∂x3    (1.463)

Hence, the gradient in terms of components is defined as:

∇ₓφ = (∂φ/∂x1)ê1 + (∂φ/∂x2)ê2 + (∂φ/∂x3)ê3    (1.464)

The Nabla symbol ∇ₓ is defined as:

∇ₓ = (∂/∂x_i)ê_i ≡ ∂,_i ê_i   Nabla symbol    (1.465)

The surface φ = const, called the level surface (or isosurface, or equiscalar surface), is the surface formed by points which all have the same value of φ; if we move along a level surface, the value of the function does not change.
The gradient of a vector field v(x):

grad(v) ≡ ∇ₓv    (1.467)

Using the definition of ∇ₓ given in (1.465), the gradient of the vector field becomes:

∇ₓv = [∂(v_i ê_i)/∂x_j] ⊗ ê_j = (v_i ê_i),_j ⊗ ê_j = v_{i,j} ê_i ⊗ ê_j    (1.468)

[Figure: the gradient ∇ₓφ is normal (n̂) to the level surface φ = const = c1.]

∇ₓ(•) = [∂(•)/∂x_j] ⊗ ê_j   Gradient of a tensor field in the Cartesian basis    (1.469)
As noted, the gradient of a vector field becomes a second-order tensor field, whose components are:

v_{i,j} ≡ ∂v_i/∂x_j = [∂v1/∂x1  ∂v1/∂x2  ∂v1/∂x3; ∂v2/∂x1  ∂v2/∂x2  ∂v2/∂x3; ∂v3/∂x1  ∂v3/∂x2  ∂v3/∂x3]    (1.470)

The gradient of a second-order tensor field T(x):

∇ₓT = [∂(T_ij ê_i⊗ê_j)/∂x_k] ⊗ ê_k = T_{ij,k} ê_i⊗ê_j⊗ê_k    (1.471)

and its components are represented by:

(∇ₓT)_ijk ≡ T_{ij,k}    (1.472)
Problem 1.41: Find the gradient of the function f(x1, x2) = cos(x1) + exp(x1x2) at the point (x1 = 0, x2 = 1).

Solution: By definition, the gradient of a scalar function is given by:

∇ₓf = (∂f/∂x1)ê1 + (∂f/∂x2)ê2

where ∂f/∂x1 = −sin(x1) + x2 exp(x1x2) and ∂f/∂x2 = x1 exp(x1x2). Thus:

∇ₓf(x1, x2) = [−sin(x1) + x2 exp(x1x2)]ê1 + [x1 exp(x1x2)]ê2 ⇒ ∇ₓf(0, 1) = [1]ê1 + [0]ê2 = ê1
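A quick finite-difference verification of this gradient (a minimal NumPy sketch, not from the original text):

```python
import numpy as np

def f(x1, x2):
    return np.cos(x1) + np.exp(x1 * x2)

def grad_f(x1, x2):
    # Analytical gradient of Problem 1.41
    return np.array([-np.sin(x1) + x2 * np.exp(x1 * x2),
                     x1 * np.exp(x1 * x2)])

# Central finite differences at the point (0, 1)
eps = 1e-6
fd = np.array([(f(eps, 1.0) - f(-eps, 1.0)) / (2 * eps),
               (f(0.0, 1.0 + eps) - f(0.0, 1.0 - eps)) / (2 * eps)])
assert np.allclose(grad_f(0.0, 1.0), fd, atol=1e-6)   # -> [1, 0]
```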
Problem 1.42: Let u(x) be a stationary vector field. a) Obtain the components of the differential du. b) Now, consider that u(x) represents a displacement field which is independent of x3. Under these conditions, graphically illustrate the displacement field in the differential area element dx1dx2.

Solution: According to the definitions of the differential and of the gradient, it holds that:

du ≡ u(x + dx) − u(x) ;  du = ∇ₓu ⋅ dx

[Figure: the vectors u(x) at the point x and u(x + dx) at the neighboring point x + dx.]

or:

du1 = (∂u1/∂x1)dx1 + (∂u1/∂x2)dx2 + (∂u1/∂x3)dx3
du2 = (∂u2/∂x1)dx1 + (∂u2/∂x2)dx2 + (∂u2/∂x3)dx3
du3 = (∂u3/∂x1)dx1 + (∂u3/∂x2)dx2 + (∂u3/∂x3)dx3

with

du1 = u1(x1 + dx1, x2 + dx2, x3 + dx3) − u1(x1, x2, x3)
du2 = u2(x1 + dx1, x2 + dx2, x3 + dx3) − u2(x1, x2, x3)
du3 = u3(x1 + dx1, x2 + dx2, x3 + dx3) − u3(x1, x2, x3)

As the field is independent of x3, the displacement field in the differential area element is defined as:

du1 = u1(x1 + dx1, x2 + dx2) − u1(x1, x2) = (∂u1/∂x1)dx1 + (∂u1/∂x2)dx2
du2 = u2(x1 + dx1, x2 + dx2) − u2(x1, x2) = (∂u2/∂x1)dx1 + (∂u2/∂x2)dx2

or:

u1(x1 + dx1, x2 + dx2) = u1(x1, x2) + (∂u1/∂x1)dx1 + (∂u1/∂x2)dx2
u2(x1 + dx1, x2 + dx2) = u2(x1, x2) + (∂u2/∂x1)dx1 + (∂u2/∂x2)dx2

Note that the above equations are equivalent to the Taylor series expansion taking into account only up to the linear terms. The representation of the displacement field in the differential area element is shown in Figure 1.36.
[Figure 1.36: Displacement field in the differential area element dx1dx2. At the corners (x1, x2), (x1 + dx1, x2), (x1, x2 + dx2) and (x1 + dx1, x2 + dx2) the displacement components are (u1, u2), (u1 + ∂u1/∂x1 dx1, u2 + ∂u2/∂x1 dx1), (u1 + ∂u1/∂x2 dx2, u2 + ∂u2/∂x2 dx2) and (u1 + ∂u1/∂x1 dx1 + ∂u1/∂x2 dx2, u2 + ∂u2/∂x1 dx1 + ∂u2/∂x2 dx2), respectively; the element OAB is carried into O'A'B'.]
1.8.3 Divergence

The divergence of a vector field v(x) is denoted as follows:

div(v) ≡ ∇ₓ⋅v    (1.473)

which by definition is:

div(v) ≡ ∇ₓ⋅v = ∇ₓv : 1 = Tr(∇ₓv)    (1.474)

Then:

∇ₓ⋅v = ∇ₓv : 1 = [v_{i,j} ê_i⊗ê_j] : [δ_kl ê_k⊗ê_l] = v_{i,j} δ_kl δ_ik δ_jl = v_{k,k} = ∂v1/∂x1 + ∂v2/∂x2 + ∂v3/∂x3    (1.475)

or

∇ₓ⋅v = ∇ₓv : 1 = [v_{i,j} δ_kl δ_lj ê_i]⋅ê_k = [v_{i,k} ê_i]⋅ê_k = [∂(v_i ê_i)/∂x_k]⋅ê_k    (1.476)

from which we can read off the following operator in the Cartesian basis:

∇ₓ⋅(•) = [∂(•)/∂x_k]⋅ê_k   Divergence of (•) in the Cartesian basis    (1.477)
We can also verify that, when the divergence is applied to a tensor field, its rank decreases by one.

Divergence of a second-order tensor field T(x)

The divergence of a second-order tensor field T is denoted by ∇ₓ⋅T = ∇ₓT : 1, and becomes a vector:

∇ₓ⋅T ≡ div T = [∂(T_ij ê_i⊗ê_j)/∂x_k]⋅ê_k = (∂T_ij/∂x_k) δ_jk ê_i = T_{ik,k} ê_i    (1.478)
NOTE: In this textbook, when dealing with the gradient or divergence of a tensor field, e.g. ∇ₓv (the gradient of a vector field), ∇ₓT (the gradient of a second-order tensor field), or ∇ₓ⋅T (the divergence of a second-order tensor field), this does not indicate a tensor operation between a vector and a tensor, i.e. ∇ₓv ≠ (∇ₓ)⊗(v), ∇ₓT ≠ (∇ₓ)⊗(T), ∇ₓ⋅T ≠ (∇ₓ)⋅(T), and so on. In this textbook, ∇ₓ is an operator which must be applied to the entire tensor field, so the tensor must be inside the operator, (see equations (1.477) and (1.469)). Nevertheless, it is possible to relate ∇ₓv, ∇ₓT and ∇ₓ⋅T to tensor operations between tensors, and it is easy to show that:

∇ₓv = (v)⊗(∇ₓ)
∇ₓT = (T)⊗(∇ₓ)    (1.479)
∇ₓ⋅T = (T)⋅(∇ₓ) = (∇ₓ)⋅(T^T) ■
Once the Nabla symbol is defined, we introduce the Laplacian operator ∇ₓ² as:

∇ₓ² = ∇ₓ⋅∇ₓ = (∂/∂x_j)ê_j ⋅ (∂/∂x_i)ê_i = (∂/∂x_j)(∂/∂x_i)δ_ij = ∂²/(∂x_i∂x_i)    (1.480)

∇ₓ² ≡ ∂²/∂x1² + ∂²/∂x2² + ∂²/∂x3² = ∂,_k ∂,_k = ∂,_kk

Then, the vector Laplacian of a vector field v(x) is given by:

∇ₓ²v = ∇ₓ⋅(∇ₓv)   →(components)→   [∇ₓ²v]_i = [∇ₓ⋅(∇ₓv)]_i = v_{i,kk}    (1.481)
Problem 1.43: Let a and b be vectors. Show that the identity ∇ₓ⋅(a + b) = ∇ₓ⋅a + ∇ₓ⋅b holds.

Solution:
Observing that a = a_j ê_j, b = b_k ê_k and ∇ₓ = (∂/∂x_i)ê_i, we can express ∇ₓ⋅(a + b) as:

[∂(a_j ê_j + b_k ê_k)/∂x_i]⋅ê_i = (∂a_j/∂x_i)ê_j⋅ê_i + (∂b_k/∂x_i)ê_k⋅ê_i = ∂a_i/∂x_i + ∂b_i/∂x_i = ∇ₓ⋅a + ∇ₓ⋅b

Working directly with indicial notation we obtain:

∇ₓ⋅(a + b) = (a_i + b_i),_i = a_{i,i} + b_{i,i} = ∇ₓ⋅a + ∇ₓ⋅b
Problem 1.44: Find the components of (∇ₓa)⋅b.

Solution: Bearing in mind that a = a_j ê_j, b = b_k ê_k and ∇ₓ = (∂/∂x_i)ê_i (i = 1, 2, 3), the following is true:

(∇ₓa)⋅b = {[∂(a_j ê_j)/∂x_i]⊗ê_i}⋅(b_k ê_k) = (∂a_j/∂x_i)(ê_j⊗ê_i)⋅(b_k ê_k) = (∂a_j/∂x_i) b_k δ_ik ê_j = (∂a_j/∂x_k) b_k ê_j

Expanding the dummy index k, we obtain:

(∂a_j/∂x_k) b_k = (∂a_j/∂x1)b1 + (∂a_j/∂x2)b2 + (∂a_j/∂x3)b3

Thus,

j = 1 ⇒ (∂a1/∂x1)b1 + (∂a1/∂x2)b2 + (∂a1/∂x3)b3
j = 2 ⇒ (∂a2/∂x1)b1 + (∂a2/∂x2)b2 + (∂a2/∂x3)b3
j = 3 ⇒ (∂a3/∂x1)b1 + (∂a3/∂x2)b2 + (∂a3/∂x3)b3
Problem 1.45: Prove that the following relationship is valid:

∇ₓ⋅(q/T) = (1/T)∇ₓ⋅q − (1/T²)q⋅∇ₓT

where q(x, t) is an arbitrary vector field, and T(x, t) is a scalar field.

Solution:

∇ₓ⋅(q/T) = ∂(q_i/T)/∂x_i ≡ (q_i/T),_i = (1/T)q_{i,i} − (1/T²)q_i T,_i = (1/T)∇ₓ⋅q − (1/T²)q⋅∇ₓT   (a scalar)
Note that the curl is already a tensor operation between two vectors. Using the definition of the vector product we obtain the curl of a vector field as:

rot(v) = ∇ₓ∧v = (∂/∂x_j)ê_j ∧ (v_k ê_k) = (∂v_k/∂x_j) ê_j∧ê_k = (∂v_k/∂x_j) ϵ_ijk ê_i = ϵ_ijk v_{k,j} ê_i    (1.483)

where ϵ_ijk is the permutation symbol defined in (1.55). Moreover, we have applied the definition ê_j∧ê_k = ϵ_ijk ê_i, and we can also note that:

rot(v) = ∇ₓ∧v = det[ê1  ê2  ê3; ∂/∂x1  ∂/∂x2  ∂/∂x3; v1  v2  v3] = ϵ_ijk v_{k,j} ê_i    (1.484)
= (∂v3/∂x2 − ∂v2/∂x3)ê1 + (∂v1/∂x3 − ∂v3/∂x1)ê2 + (∂v2/∂x1 − ∂v1/∂x2)ê3
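A symbolic sketch of (1.484) (not from the original text; the sample field is an arbitrary choice, picked because it is a gradient field and therefore curl-free):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
v = sp.Matrix([x2 * x3, x1 * x3, x1 * x2])   # v = grad(x1*x2*x3)

def curl(v, x):
    """Components (1.484): rot(v)_i = eps_ijk dv_k/dx_j."""
    return sp.Matrix([v[2].diff(x[1]) - v[1].diff(x[2]),
                      v[0].diff(x[2]) - v[2].diff(x[0]),
                      v[1].diff(x[0]) - v[0].diff(x[1])])

print(curl(v, (x1, x2, x3)))   # Matrix([[0], [0], [0]]): the field is curl-free
```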
We can verify that the antisymmetric part of the gradient of a vector field, denoted by (∇ₓv)^skew ≡ W, has the components:

[(∇ₓv)^skew]_ij ≡ v^skew_{i,j} = [0  ½(∂v1/∂x2 − ∂v2/∂x1)  ½(∂v1/∂x3 − ∂v3/∂x1); ½(∂v2/∂x1 − ∂v1/∂x2)  0  ½(∂v2/∂x3 − ∂v3/∂x2); ½(∂v3/∂x1 − ∂v1/∂x3)  ½(∂v3/∂x2 − ∂v2/∂x3)  0]    (1.485)

= [0  W12  W13; W21  0  W23; W31  W32  0] = [0  W12  W13; −W12  0  W23; −W13  −W23  0] = [0  −w3  w2; w3  0  −w1; −w2  w1  0]

where w1, w2, w3 are the components of the axial vector w associated with W, (see subsection 1.5.2.2.2 Antisymmetric Tensor).
With reference to the definition of the curl in (1.484) and the relationship in (1.485), we can conclude that:

rot(v) ≡ ∇ₓ∧v = (∂v3/∂x2 − ∂v2/∂x3)ê1 + (∂v1/∂x3 − ∂v3/∂x1)ê2 + (∂v2/∂x1 − ∂v1/∂x2)ê3
= 2W32 ê1 + 2W13 ê2 + 2W21 ê3    (1.486)
= 2(w1 ê1 + w2 ê2 + w3 ê3)
= 2w

And, if we use the identity in (1.141), we obtain:

W⋅v = w∧v = ½(∇ₓ∧v)∧v    (1.487)
It is interesting to note that the equation in (1.486) can also be obtained by means of Problem 1.18, in which we showed that ½(a∧x) is the axial vector associated with the antisymmetric tensor (x⊗a)^skew. Therefore, the axial vector associated with the antisymmetric tensor W = (∇ₓv)^skew = [(v)⊗(∇ₓ)]^skew is the vector ½(∇ₓ∧v).

As we can see, the curl describes the rotational tendency of the vector field.
Summary:

Divergence: div(•) ≡ ∇ₓ⋅•   |   Gradient: grad(•) ≡ ∇ₓ•   |   Curl: rot(•) ≡ ∇ₓ∧•

For a scalar field, the gradient yields a vector.

∇ₓ∧(a∧b) = (∇ₓ⋅b)a − (∇ₓ⋅a)b + (∇ₓa)⋅b − (∇ₓb)⋅a    (1.489)
The components of the vector product (a∧b) are given by (a∧b)_k = ϵ_kij a_i b_j, thus:

[∇ₓ∧(a∧b)]_l = ϵ_lpk(ϵ_kij a_i b_j),_p = ϵ_kij ϵ_lpk (a_{i,p} b_j + a_i b_{j,p})    (1.490)

Regarding that ϵ_kij = ϵ_ijk and ϵ_ijk ϵ_lpk = δ_il δ_jp − δ_ip δ_jl, the above equation becomes:

[∇ₓ∧(a∧b)]_l = ϵ_kij ϵ_lpk (a_{i,p} b_j + a_i b_{j,p}) = (δ_il δ_jp − δ_ip δ_jl)(a_{i,p} b_j + a_i b_{j,p})
= δ_il δ_jp a_{i,p} b_j − δ_ip δ_jl a_{i,p} b_j + δ_il δ_jp a_i b_{j,p} − δ_ip δ_jl a_i b_{j,p}    (1.491)
= a_{l,p} b_p − a_{p,p} b_l + a_l b_{p,p} − a_p b_{l,p}

We can also verify that [(∇ₓa)⋅b]_l = a_{l,p} b_p, [(∇ₓ⋅a)b]_l = a_{p,p} b_l, [(∇ₓ⋅b)a]_l = a_l b_{p,p}, [(∇ₓb)⋅a]_l = a_p b_{l,p}.
∇ₓ∧(∇ₓ∧a) = ∇ₓ(∇ₓ⋅a) − ∇ₓ²a    (1.492)

The components of (∇ₓ∧a) are given by (∇ₓ∧a)_i = ϵ_ijk a_{k,j} ≡ c_i, thus:

[∇ₓ∧(∇ₓ∧a)]_q = ϵ_qli c_{i,l} = ϵ_qli(ϵ_ijk a_{k,j}),_l = ϵ_qli ϵ_ijk a_{k,jl}    (1.493)

Once again considering that ϵ_qli ϵ_ijk = ϵ_qli ϵ_jki = δ_qj δ_lk − δ_qk δ_lj, the above equation becomes:

[∇ₓ∧(∇ₓ∧a)]_q = ϵ_qli ϵ_ijk a_{k,jl} = (δ_qj δ_lk − δ_qk δ_lj)a_{k,jl} = δ_qj δ_lk a_{k,jl} − δ_qk δ_lj a_{k,jl} = a_{k,kq} − a_{q,ll}    (1.494)
where φ and ψ are scalar fields. Other interesting equations derived from the above are:

∇ₓ⋅(φ∇ₓψ) = φ∇ₓ²ψ + (∇ₓφ)⋅(∇ₓψ)    (1.497)
∇ₓ⋅(ψ∇ₓφ) = ψ∇ₓ²φ + (∇ₓψ)⋅(∇ₓφ)    (1.498)
⇒ ∇ₓ⋅(φ∇ₓψ − ψ∇ₓφ) = φ∇ₓ²ψ − ψ∇ₓ²φ

If the function φ satisfies the relation (1.499), then φ is a potential function of b(x, t). A necessary but not sufficient condition for b(x, t) to be conservative is that ∇ₓ∧b = 0. In other words, given a conservative field, the curl ∇ₓ∧b equals zero; however, if the curl of a vector field equals zero, this does not necessarily mean that the field is conservative.
Problem 1.46: Let φ be a scalar field and v a vector field. a) Show that ∇ₓ⋅(∇ₓ∧v) = 0 and ∇ₓ∧(∇ₓφ) = 0.
b) Show that ∇ₓ∧[(∇ₓ∧v)∧v] = (∇ₓ⋅v)(∇ₓ∧v) + [∇ₓ(∇ₓ∧v)]⋅v − (∇ₓv)⋅(∇ₓ∧v);
c) Denoting ω = ∇ₓ∧v, show that ∇ₓ∧(∇ₓ²v) = ∇ₓ²(∇ₓ∧v) = ∇ₓ²ω.

Solution:
a) Regarding that ∇ₓ∧v = ϵ_ijk v_{k,j} ê_i:

∇ₓ⋅(∇ₓ∧v) = (∂/∂x_l)(ϵ_ijk v_{k,j}) ê_i⋅ê_l = ϵ_ijk (∂/∂x_l)(v_{k,j}) δ_il = ϵ_ijk v_{k,ji}

The second derivative of v is symmetric in ij, i.e. v_{k,ji} = v_{k,ij}, while ϵ_ijk is antisymmetric in ij, i.e. ϵ_ijk = −ϵ_jik, thus:

ϵ_ijk v_{k,ji} = ϵ_ij1 v_{1,ji} + ϵ_ij2 v_{2,ji} + ϵ_ij3 v_{3,ji} = 0

We can observe that each term, e.g. ϵ_ij1 v_{1,ji}, equals the double scalar product of a symmetric and an antisymmetric tensor, so ϵ_ij1 v_{1,ji} = 0.

Likewise, we can show that:

∇ₓ∧(∇ₓφ) = ϵ_ijk φ,_kj ê_i = 0_i ê_i = 0

b) Denoting ω = ∇ₓ∧v we obtain:

∇ₓ∧[(∇ₓ∧v)∧v] = ∇ₓ∧(ω∧v)

Observing the equation in (1.489), it holds that:

∇ₓ∧(ω∧v) = (∇ₓ⋅v)ω − (∇ₓ⋅ω)v + (∇ₓω)⋅v − (∇ₓv)⋅ω

Note that ∇ₓ⋅ω = ∇ₓ⋅(∇ₓ∧v) = 0. Then, we can draw the conclusion that:

∇ₓ∧(ω∧v) = (∇ₓ⋅v)ω + (∇ₓω)⋅v − (∇ₓv)⋅ω
= (∇ₓ⋅v)(∇ₓ∧v) + [∇ₓ(∇ₓ∧v)]⋅v − (∇ₓv)⋅(∇ₓ∧v)

c) Observing the equation in (1.492) we obtain:

∇ₓ²v = ∇ₓ(∇ₓ⋅v) − ∇ₓ∧(∇ₓ∧v) = ∇ₓ(∇ₓ⋅v) − ∇ₓ∧ω

Applying the curl to the above equation we obtain:

∇ₓ∧(∇ₓ²v) = ∇ₓ∧[∇ₓ(∇ₓ⋅v)] − ∇ₓ∧(∇ₓ∧ω) = −∇ₓ∧(∇ₓ∧ω)

since the curl of a gradient vanishes. Referring once again to the equation in (1.492) to express the term ∇ₓ∧(∇ₓ∧ω):

∇ₓ∧(∇ₓ²v) = −∇ₓ∧(∇ₓ∧ω) = −∇ₓ(∇ₓ⋅ω) + ∇ₓ²ω = −∇ₓ[∇ₓ⋅(∇ₓ∧v)] + ∇ₓ²ω = ∇ₓ²ω = ∇ₓ²(∇ₓ∧v)

since ∇ₓ⋅(∇ₓ∧v) = 0.
∫_a^b u(x)v′(x)dx = [u(x)v(x)]_a^b − ∫_a^b v(x)u′(x)dx    (1.500)

where v′(x) = dv/dx, and the functions u(x), v(x) are differentiable in a ≤ x ≤ b.
[Figure 1.37: The domain B bounded by the surface S, with outward unit normal n̂ and surface element dS = n̂ dS at the point x.]
Let T be a second-order tensor field defined in the domain B. The divergence theorem applied to this field reads:

∫_V ∇ₓ⋅T dV = ∫_S T⋅n̂ dS = ∫_S T⋅dS ;  ∫_V T_{ij,j} dV = ∫_S T_ij n̂_j dS = ∫_S T_ij dS_j    (1.502)
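A minimal numerical illustration of the divergence theorem (not from the original text; the field and domain are arbitrary choices) on the unit cube, for the vector field v = (x1², x2², x3²), whose flux through the boundary equals the volume integral of div v = 2(x1 + x2 + x3):

```python
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n                     # midpoint-rule nodes
X1, X2, X3 = np.meshgrid(x, x, x, indexing='ij')

vol_integral = np.sum(2 * (X1 + X2 + X3)) / n**3  # integral of div v over the cube = 3
flux = 3.0                                        # only the faces x_i = 1 contribute 1 each
assert np.isclose(vol_integral, flux, atol=1e-3)
```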
∫_V x_{k,j} dV = ∫_V (δ_ik x_i),_j dV = ∫_S δ_ik x_i n̂_j dS = ∫_S x_k n̂_j dS    (1.503)

and since x_{k,j} = δ_kj:

δ_kj ∫_V dV = ∫_S x_k n̂_j dS  ⇒  V δ_kj = ∫_S x_k n̂_j dS ;  V 1 = ∫_S x⊗n̂ dS    (1.504)
∫_V (x_i σ_jk),_k dV = ∫_S x_i σ_jk n̂_k dS
∫_V [x_{i,k} σ_jk + x_i σ_{jk,k}] dV = ∫_S x_i σ_jk n̂_k dS    (1.505)
∫_V [δ_ik σ_jk + x_i σ_{jk,k}] dV = ∫_S x_i σ_jk n̂_k dS

∫_V x_i σ_{jk,k} dV = ∫_S x_i σ_jk n̂_k dS − ∫_V σ_ji dV    (1.506)

∫_V x⊗(∇ₓ⋅σ) dV = ∫_S x⊗(σ⋅n̂) dS − ∫_V σ^T dV

or

∫_V x⊗(∇ₓ⋅σ) dV = ∫_S (x⊗σ)⋅dS − ∫_V σ^T dV    (1.507)
Show that:

∫_Ω [m : ∇ₓ(∇ₓω)] dΩ = ∫_Γ [(∇ₓω)⋅m]⋅n̂ dΓ − ∫_Ω [(∇ₓ⋅m)⋅∇ₓω] dΩ

or, in indicial notation:

∫_Ω m_ij ω,_ij dΩ = ∫_Γ (ω,_i m_ij) n̂_j dΓ − ∫_Ω m_{ij,j} ω,_i dΩ

[Figure 1.38: The domain Ω with boundary Γ and outward unit normal n̂.]
Solution: We could directly apply the definition of integration by parts to demonstrate the above relationship, but here we will start with the divergence theorem. That is, given a tensor field v, it is true that:

∫_Ω ∇ₓ⋅v dΩ = ∫_Γ v⋅n̂ dΓ   →(indicial)→   ∫_Ω v_{j,j} dΩ = ∫_Γ v_j n̂_j dΓ

Observing that the field v can be taken as the result of the algebraic operation v = ∇ₓω⋅m, equivalent in indicial notation to v_j = ω,_i m_ij, and substituting it in the above equation, we obtain:

∫_Ω [ω,_i m_ij],_j dΩ = ∫_Γ ω,_i m_ij n̂_j dΓ
⇒ ∫_Ω [ω,_ij m_ij + ω,_i m_{ij,j}] dΩ = ∫_Γ ω,_i m_ij n̂_j dΓ
⇒ ∫_Ω ω,_ij m_ij dΩ = ∫_Γ ω,_i m_ij n̂_j dΓ − ∫_Ω ω,_i m_{ij,j} dΩ

i.e.

∫_Ω [m : ∇ₓ(∇ₓω)] dΩ = ∫_Γ [(∇ₓω)⋅m]⋅n̂ dΓ − ∫_Ω [∇ₓω⋅(∇ₓ⋅m)] dΩ

NOTE: Consider now the domain defined by the volume V, bounded by the surface S with outward unit normal n̂. If N is a vector field and T is a scalar field, it is also true that:

∫_V N_i T,_ij dV = ∫_S N_i T,_i n̂_j dS − ∫_V N_{i,j} T,_i dV
⇒ ∫_V N⋅∇ₓ(∇ₓT) dV = ∫_S (∇ₓT⋅N)⊗n̂ dS − ∫_V ∇ₓT⋅∇ₓN dV

where we have directly applied the definition of integration by parts.
As the field is conservative, the curl of b is the zero vector:

∇ₓ∧b = 0 ⇒ det[ê1  ê2  ê3; ∂/∂x1  ∂/∂x2  ∂/∂x3; b1  b2  b3] = 0_i    (1.511)

We can therefore conclude that:

∂b3/∂x2 − ∂b2/∂x3 = 0 ⇒ ∂b3/∂x2 = ∂b2/∂x3
∂b1/∂x3 − ∂b3/∂x1 = 0 ⇒ ∂b1/∂x3 = ∂b3/∂x1    (1.512)
∂b2/∂x1 − ∂b1/∂x2 = 0 ⇒ ∂b2/∂x1 = ∂b1/∂x2

Therefore, if the above condition is not satisfied, the field is not conservative.
If p̂ denotes the unit vector tangent to the boundary Γ, the Stokes' theorem becomes:

∮_Γ F⋅p̂ dΓ = ∫_Ω (∇ₓ∧F)⋅dS = ∫_Ω (∇ₓ∧F)⋅n̂ dS    (1.514)

With reference to the vector representations in the Cartesian basis, F = F1ê1 + F2ê2 + F3ê3, dS = dS1ê1 + dS2ê2 + dS3ê3, dΓ = dx1ê1 + dx2ê2 + dx3ê3, the components of the curl of F are given by:

(∇ₓ∧F)_i = det[ê1  ê2  ê3; ∂/∂x1  ∂/∂x2  ∂/∂x3; F1  F2  F3] = (∂F3/∂x2 − ∂F2/∂x3)ê1 + (∂F1/∂x3 − ∂F3/∂x1)ê2 + (∂F2/∂x1 − ∂F1/∂x2)ê3    (1.515)

Next, the Stokes' theorem expressed in terms of components becomes:

∮_Γ F1dx1 + F2dx2 + F3dx3 = ∫_Ω (∂F3/∂x2 − ∂F2/∂x3)dS1 + (∂F1/∂x3 − ∂F3/∂x1)dS2 + (∂F2/∂x1 − ∂F1/∂x2)dS3    (1.516)

[Figure: the open surface Ω (part of S) with boundary curve Γ, outward unit normal n̂ and tangent vector p̂.]
which is known as the Stokes' theorem in the plane, or Green's theorem, expressed in terms of components as:

∮_Γ F1dx1 + F2dx2 = ∫_Ω (∂F2/∂x1 − ∂F1/∂x2) dS3    (1.518)

[Figures 1.41 and 1.42 (Green's theorem): the plane domain lies in the x1–x2 plane, so the surface element is dS = dS ê3.]
∇ₓ⋅(φ∇ₓψ) = φ∇ₓ²ψ + (∇ₓφ)⋅(∇ₓψ)    (1.520)
∇ₓ⋅(φ∇ₓψ − ψ∇ₓφ) = φ∇ₓ²ψ − ψ∇ₓ²φ    (1.521)

and also regarding that F = φ∇ₓψ, by substituting (1.520) into (1.519) we obtain:

∫_V [φ∇ₓ²ψ + (∇ₓφ)⋅(∇ₓψ)] dV = ∫_S φ∇ₓψ⋅n̂ dS
⇒ ∫_V (∇ₓφ)⋅(∇ₓψ) dV = ∫_S φ∇ₓψ⋅n̂ dS − ∫_V φ∇ₓ²ψ dV    (1.522)

∫_V (φ∇ₓ²ψ − ψ∇ₓ²φ) dV = ∫_S (φ∇ₓψ − ψ∇ₓφ)⋅n̂ dS    (1.523)
Show that:

∫_S λ b_i n̂_i dS = ∫_V λ,_i b_i dV

where λ = λ(x) and b = ∇ₓ∧v.

Solution: The Cartesian components of b = ∇ₓ∧v are b_i = ϵ_ijk v_{k,j}, and by substituting them in the above surface integral we obtain:

∫_S λ b_i n̂_i dS = ∫_S λ ϵ_ijk v_{k,j} n̂_i dS

Applying the divergence theorem we obtain:

∫_S λ b_i n̂_i dS = ∫_S λ ϵ_ijk v_{k,j} n̂_i dS = ∫_V (ϵ_ijk λ v_{k,j}),_i dV
= ∫_V (ϵ_ijk λ,_i v_{k,j} + ϵ_ijk λ v_{k,ji}) dV = ∫_V (λ,_i ϵ_ijk v_{k,j} + λ ϵ_ijk v_{k,ji}) dV = ∫_V λ,_i b_i dV

since ϵ_ijk v_{k,ji} = 0 (the double contraction of a symmetric and an antisymmetric quantity).