1 Tensors
1.1 Introduction
As seen in the introductory chapter, the goal of continuum mechanics is to establish a set of equations that governs a physical problem from a macroscopic perspective. The physical variables featuring in a problem are represented by tensor fields; in other words, physical phenomena can be described mathematically by means of tensors, while tensor fields indicate how tensor values vary in space and time. One main condition on these physical quantities is that they must be independent of the reference system, i.e. they must be the same for different observers. However, for convenience when solving problems, we need to express a tensor in a given coordinate system; hence the concept of tensor components. While tensors are independent of the coordinate system, their components are not, and change as the system changes.
In this chapter we will learn the language of TENSORS to help us interpret physical phenomena. These tensors can be classified according to their order:
Zeroth-Order Tensors (Scalars): Quantities that have magnitude but no direction, e.g. mass density, temperature, and pressure.

First-Order Tensors (Vectors): Quantities that have both magnitude and direction, e.g. velocity and force. A first-order tensor is symbolized by a boldface letter or by an arrow over the letter, i.e. $\vec{a}$.

Second-Order Tensors: Quantities that have magnitude and two directions, e.g. stress and strain. Second-order and higher-order tensors are symbolized by boldface letters.
In the first part of this chapter we will study several tools to manage tensors (scalars,
vectors, second-order tensors, and higher-order tensors) without heeding their dependence
on space and time. At the end of the chapter we will introduce tensor fields and some field
operators which can be used to interpret these fields.
In this textbook we will work interchangeably with the following notations: tensorial, indicial, and matricial. Additionally, when the tensors are symmetric, it is also possible to represent their components using the Voigt notation.
Addition: Let $\vec{a}$, $\vec{b}$ be arbitrary vectors. Their sum, (see Figure 1.1(a)), is a new vector $\vec{c}$ defined as:

$\vec{c} = \vec{a} + \vec{b} = \vec{b} + \vec{a}$    (1.1)

Figure 1.1: Addition and subtraction of vectors.
Subtraction: The subtraction of two arbitrary vectors $\vec{a}$, $\vec{b}$, (see Figure 1.1(b)), is given by:

$\vec{d} = \vec{a} - \vec{b}$    (1.2)

Considering three vectors $\vec{a}$, $\vec{b}$ and $\vec{c}$, the following property is satisfied:

$(\vec{a} + \vec{b}) + \vec{c} = \vec{a} + (\vec{b} + \vec{c}) = \vec{a} + \vec{b} + \vec{c}$    (1.3)
Scalar multiplication: Let $\vec{a}$ be a vector; we can define the multiplication of $\vec{a}$ by a scalar $\alpha$. The product of this operation is another vector with the same direction as $\vec{a}$, whose length and orientation are defined by the scalar, as shown in Figure 1.2.

Figure 1.2: Scalar multiplication of a vector for the cases $\alpha = 1$, $\alpha > 1$, $0 < \alpha < 1$, and $\alpha < 0$.
Scalar Product: The scalar product (also known as the dot product or inner product) of two vectors $\vec{a}$, $\vec{b}$, denoted by $\vec{a}\cdot\vec{b}$, is defined as follows:

$\gamma = \vec{a}\cdot\vec{b} = \|\vec{a}\|\,\|\vec{b}\|\cos\theta$    (1.4)

where $\theta$ is the angle between the two vectors, (see Figure 1.3(a)), and $\|\cdot\|$ represents the Euclidean norm (or magnitude). The result of the operation (1.4) is a scalar. Moreover, we can conclude that $\vec{a}\cdot\vec{b} = \vec{b}\cdot\vec{a}$. The expression (1.4) also holds when $\vec{a} = \vec{b}$, therefore:

$\vec{a}\cdot\vec{a} = \|\vec{a}\|\,\|\vec{a}\|\cos 0 = \|\vec{a}\|^2 \quad\Rightarrow\quad \|\vec{a}\| = \sqrt{\vec{a}\cdot\vec{a}}$    (1.5)
Unit Vector: A unit vector associated with the $\vec{a}$-direction is denoted by $\hat{a}$, and has the same direction and orientation as $\vec{a}$. In this textbook, the hat symbol ($\hat{\ }$) denotes a unit vector. Thus, the unit vector $\hat{a}$, codirectional with $\vec{a}$, is defined as:

$\hat{a} = \dfrac{\vec{a}}{\|\vec{a}\|}$    (1.6)

where $\|\vec{a}\|$ represents the norm (magnitude) of $\vec{a}$. If $\hat{a}$ is a unit vector, then the following must be true:

$\|\hat{a}\| = 1$    (1.7)
Projection Vector: The projection vector of $\vec{a}$ onto $\vec{b}$, (see Figure 1.3(b)), is defined as:

$\overrightarrow{\mathrm{proj}}_{\vec{b}}\,\vec{a} = (\mathrm{proj}_{\vec{b}}\,\vec{a})\,\hat{b}$    (1.9)

where $\mathrm{proj}_{\vec{b}}\,\vec{a}$ is the (scalar) projection of $\vec{a}$ onto $\vec{b}$, and $\hat{b}$ is the unit vector associated with the $\vec{b}$-direction. The magnitude of the projection is obtained by means of the scalar product:

$\mathrm{proj}_{\vec{b}}\,\vec{a} = \vec{a}\cdot\hat{b}$    (1.10)

So, taking into account the definition of the unit vector, we obtain:

$\mathrm{proj}_{\vec{b}}\,\vec{a} = \vec{a}\cdot\dfrac{\vec{b}}{\|\vec{b}\|}$    (1.11)

$\overrightarrow{\mathrm{proj}}_{\vec{b}}\,\vec{a} = \underbrace{\left(\dfrac{\vec{a}\cdot\vec{b}}{\|\vec{b}\|^{2}}\right)}_{\text{scalar}}\vec{b}$    (1.12)
Figure 1.3: a) Scalar product, $\vec{a}\cdot\vec{b} = \|\vec{a}\|\|\vec{b}\|\cos\theta$; b) Projection vector, $\overrightarrow{\mathrm{proj}}_{\vec{b}}\,\vec{a}$.
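As a quick numerical illustration of equations (1.9)-(1.12), the sketch below uses NumPy; the vectors chosen are arbitrary examples, not taken from the text:

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])
b = np.array([2.0, 0.0, 0.0])

# Scalar projection of a onto b: a . b_hat, eq. (1.10)
b_hat = b / np.linalg.norm(b)
proj_scalar = a @ b_hat

# Projection vector: the scalar (a . b / ||b||^2) times b, eq. (1.12)
proj_vector = (a @ b) / (b @ b) * b
```

Here `proj_scalar` evaluates to 3.0 and `proj_vector` to (3, 0, 0): the shadow of $\vec{a}$ on the $x$-axis.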
Orthogonality between vectors: Two vectors $\vec{a}$ and $\vec{b}$ are orthogonal if the scalar product between them is zero, i.e.:

$\vec{a}\cdot\vec{b} = 0$    (1.13)
Vector Product (or Cross Product): The vector product of two vectors $\vec{a}$, $\vec{b}$ results in another vector $\vec{c}$, which is perpendicular to the plane defined by the two input vectors, (see Figure 1.4). The vector product has the following characteristics:

Representation:

$\vec{c} = \vec{a}\wedge\vec{b} = -\vec{b}\wedge\vec{a}$    (1.14)

The vector $\vec{c}$ is orthogonal to the vectors $\vec{a}$ and $\vec{b}$, thus:

$\vec{a}\cdot\vec{c} = \vec{b}\cdot\vec{c} = 0$    (1.15)

The magnitude of $\vec{c}$ is defined by the formula:

$\|\vec{c}\| = \|\vec{a}\|\,\|\vec{b}\|\sin\theta$    (1.16)

where $\theta$ measures the smallest angle between $\vec{a}$ and $\vec{b}$, (see Figure 1.4).

The magnitude of the vector product, $\|\vec{a}\wedge\vec{b}\|$, is geometrically expressed as the area of the parallelogram defined by the two vectors, (see Figure 1.4):

$A = \|\vec{a}\wedge\vec{b}\|$    (1.17)

Therefore, the area of the triangle defined by the points $OCD$, (see Figure 1.4(a)), is:

$A_T = \dfrac{1}{2}\|\vec{a}\wedge\vec{b}\|$    (1.18)

If $\vec{a}$ and $\vec{b}$ are linearly dependent, i.e. $\vec{a} = \alpha\vec{b}$ with $\alpha$ denoting a scalar, the vector product of the two linearly dependent vectors becomes the zero vector, $\vec{a}\wedge\vec{b} = \alpha\vec{b}\wedge\vec{b} = \vec{0}$.
Scalar Triple Product (or Mixed Product): Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be arbitrary vectors; we can define the scalar triple product as:

$V = \vec{a}\cdot(\vec{b}\wedge\vec{c}) = \vec{b}\cdot(\vec{c}\wedge\vec{a}) = \vec{c}\cdot(\vec{a}\wedge\vec{b})$
$-V = \vec{a}\cdot(\vec{c}\wedge\vec{b}) = \vec{b}\cdot(\vec{a}\wedge\vec{c}) = \vec{c}\cdot(\vec{b}\wedge\vec{a})$    (1.19)
where the scalar $V$ represents the volume of the parallelepiped defined by $\vec{a}$, $\vec{b}$, $\vec{c}$, (see Figure 1.5).
Figure 1.4: Vector product: a) $\vec{c} = \vec{a}\wedge\vec{b}$; b) $\vec{c}^{\,*} = \vec{b}\wedge\vec{a} = -\vec{c}$.
If two of the vectors are linearly dependent, the scalar triple product is zero, i.e.:

$\vec{a}\cdot(\vec{b}\wedge\vec{a}) = 0$    (1.20)

Let $\vec{a}$, $\vec{b}$, $\vec{c}$, $\vec{d}$ be vectors and $\alpha$, $\beta$ be scalars; the following property is satisfied:

$(\alpha\vec{a} + \beta\vec{b})\cdot(\vec{c}\wedge\vec{d}) = \alpha\,\vec{a}\cdot(\vec{c}\wedge\vec{d}) + \beta\,\vec{b}\cdot(\vec{c}\wedge\vec{d})$    (1.21)
NOTE: Some authors represent the scalar triple product as $[\vec{a},\vec{b},\vec{c}] \equiv \vec{a}\cdot(\vec{b}\wedge\vec{c})$, $[\vec{b},\vec{c},\vec{a}] \equiv \vec{b}\cdot(\vec{c}\wedge\vec{a})$, $[\vec{c},\vec{a},\vec{b}] \equiv \vec{c}\cdot(\vec{a}\wedge\vec{b})$.
Figure 1.5: Scalar triple product: the parallelepiped defined by $\vec{a}$, $\vec{b}$, $\vec{c}$.
Vector Triple Product: Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be vectors; we can define the vector triple product as $\vec{w} = \vec{a}\wedge(\vec{b}\wedge\vec{c})$. Then, we can demonstrate that the following relationship holds:

$\vec{w} = \vec{a}\wedge(\vec{b}\wedge\vec{c}) = (\vec{a}\cdot\vec{c})\,\vec{b} - (\vec{a}\cdot\vec{b})\,\vec{c}$    (1.22)

whereby it is clear that the result of the vector triple product is another vector $\vec{w}$, belonging to the plane $\Pi_1$ formed by the vectors $\vec{b}$ and $\vec{c}$, (see Figure 1.6).
Figure 1.6: Vector triple product. $\Pi_1$ is the plane defined by $\vec{b}$, $\vec{c}$; $\Pi_2$ is the plane defined by $\vec{a}$, $\vec{b}\wedge\vec{c}$; the vector $\vec{w}$ belongs to the plane $\Pi_1$.
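The identity (1.22), and the fact that $\vec{w}$ lies in the plane of $\vec{b}$ and $\vec{c}$, can be checked numerically; this is a minimal sketch with arbitrary sample vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.0, 3.0, 1.0])
c = np.array([2.0, -1.0, 4.0])

w = np.cross(a, np.cross(b, c))          # a ^ (b ^ c)
w_expanded = (a @ c) * b - (a @ b) * c   # (a . c) b - (a . b) c, eq. (1.22)

# w is a combination of b and c, hence orthogonal to b ^ c
residual = w @ np.cross(b, c)
```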
Problem 1.1: Let $\vec{a}$ and $\vec{b}$ be arbitrary vectors. Prove that the following relationship is true:

$(\vec{a}\wedge\vec{b})\cdot(\vec{a}\wedge\vec{b}) = (\vec{a}\cdot\vec{a})(\vec{b}\cdot\vec{b}) - (\vec{a}\cdot\vec{b})^2$

Solution:

$(\vec{a}\wedge\vec{b})\cdot(\vec{a}\wedge\vec{b}) = \|\vec{a}\wedge\vec{b}\|^2 = \left(\|\vec{a}\|\,\|\vec{b}\|\sin\theta\right)^2 = \|\vec{a}\|^2\|\vec{b}\|^2\sin^2\theta$
$= \|\vec{a}\|^2\|\vec{b}\|^2\left(1 - \cos^2\theta\right) = \|\vec{a}\|^2\|\vec{b}\|^2 - \|\vec{a}\|^2\|\vec{b}\|^2\cos^2\theta$
$= \|\vec{a}\|^2\|\vec{b}\|^2 - \left(\|\vec{a}\|\,\|\vec{b}\|\cos\theta\right)^2 = \|\vec{a}\|^2\|\vec{b}\|^2 - (\vec{a}\cdot\vec{b})^2 = (\vec{a}\cdot\vec{a})(\vec{b}\cdot\vec{b}) - (\vec{a}\cdot\vec{b})^2$
Linear Transformation: A transformation $F$ is linear if, for arbitrary vectors $\vec{u}$, $\vec{v}$ and an arbitrary scalar $\alpha$, the following conditions hold:

$F(\vec{u} + \vec{v}) = F(\vec{u}) + F(\vec{v})$
$F(\alpha\vec{u}) = \alpha F(\vec{u})$

For example, the function $\Psi(\varepsilon) = \frac{1}{2}E\varepsilon^2$ does not represent a linear transformation, because the condition $\Psi(\varepsilon_1 + \varepsilon_2) = \Psi(\varepsilon_1) + \Psi(\varepsilon_2)$ is not satisfied:

$\Psi(\varepsilon_1+\varepsilon_2) = \dfrac{1}{2}E[\varepsilon_1+\varepsilon_2]^2 = \dfrac{1}{2}E\left(\varepsilon_1^2 + 2\varepsilon_1\varepsilon_2 + \varepsilon_2^2\right) = \dfrac{1}{2}E\varepsilon_1^2 + \dfrac{1}{2}E\varepsilon_2^2 + E\varepsilon_1\varepsilon_2$
$= \Psi(\varepsilon_1) + \Psi(\varepsilon_2) + E\varepsilon_1\varepsilon_2 \neq \Psi(\varepsilon_1) + \Psi(\varepsilon_2)$
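The failure of linearity for $\Psi(\varepsilon) = \frac{1}{2}E\varepsilon^2$ can also be verified numerically. A minimal sketch (the value of E and the strains are illustrative assumptions, not from the text):

```python
E = 200.0  # illustrative modulus value (assumption)

def psi(eps):
    # The quadratic function Psi(eps) = (1/2) E eps^2
    return 0.5 * E * eps**2

eps1, eps2 = 0.1, 0.3
gap = psi(eps1 + eps2) - (psi(eps1) + psi(eps2))
# The gap equals the cross term E * eps1 * eps2, so Psi is not linear
```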
1.3 Tensor Representation in a Coordinate System (Components)

Figure 1.7: Tensor components.
Let $\vec{a}$ be a first-order tensor (vector), as shown in Figure 1.8(a). Its representation in a general coordinate system, defined by the axes $x_1$, $x_2$, $x_3$, is made up of its components $(a_1, a_2, a_3)$, (see Figure 1.8(b)). Some examples of coordinate systems are: the Cartesian coordinate system, the cylindrical coordinate system, and the spherical coordinate system.

Figure 1.8: a) The vector $\vec{a}$; b) its components $(a_1, a_2, a_3)$ in a coordinate system.
1.3.1 The Cartesian Orthonormal Basis

The Cartesian coordinate system is defined by three unit vectors $\hat{i}$, $\hat{j}$, $\hat{k}$, called the Cartesian basis, which make up an orthonormal basis. The orthonormal basis has the following properties:

1. The vectors that make up this basis are unit vectors:

$\|\hat{i}\| = \|\hat{j}\| = \|\hat{k}\| = 1$    (1.23)

or:

$\hat{i}\cdot\hat{i} = \hat{j}\cdot\hat{j} = \hat{k}\cdot\hat{k} = 1$    (1.24)

2. The basis vectors are mutually orthogonal:

$\hat{i}\cdot\hat{j} = \hat{j}\cdot\hat{k} = \hat{k}\cdot\hat{i} = 0$    (1.25)

3. The vector products of the basis vectors satisfy:

$\hat{i}\wedge\hat{j} = \hat{k} \quad ; \quad \hat{j}\wedge\hat{k} = \hat{i} \quad ; \quad \hat{k}\wedge\hat{i} = \hat{j}$    (1.26)

The direction and orientation of the orthonormal basis can be obtained using the right-hand rule, as shown in Figure 1.9.

Figure 1.9: The right-hand rule.
1.3.2 Vector Representation in the Cartesian Basis

The vector $\vec{a}$, (see Figure 1.10), in the Cartesian coordinate system, is represented by its components $(a_x, a_y, a_z)$ and by the Cartesian basis $(\hat{i}, \hat{j}, \hat{k})$ as:

$\vec{a} = a_x\hat{i} + a_y\hat{j} + a_z\hat{k}$    (1.28)

Figure 1.10: The vector $\vec{a}$ and its Cartesian components $(a_x, a_y, a_z)$.
Let $\vec{a}$, $\vec{b}$, $\vec{c}$ be arbitrary vectors; we can describe some vector operations in the Cartesian coordinate system, as follows:

Scalar product:

$\vec{a}\cdot\vec{b} = a_xb_x + a_yb_y + a_zb_z$    (1.29)

NOTE: The projection of a vector onto a given direction was established in equation (1.10), thus defining the component concept. For example, if we want to know the vector component along the $y$-direction, all we need to do is calculate:
$\vec{a}\cdot\hat{j} = (a_x\hat{i} + a_y\hat{j} + a_z\hat{k})\cdot\hat{j} = a_y$
The norm of $\vec{a}$ is:

$\|\vec{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}$    (1.30)

and the unit vector codirectional with $\vec{a}$ is:

$\hat{a} = \dfrac{a_x}{\sqrt{a_x^2+a_y^2+a_z^2}}\,\hat{i} + \dfrac{a_y}{\sqrt{a_x^2+a_y^2+a_z^2}}\,\hat{j} + \dfrac{a_z}{\sqrt{a_x^2+a_y^2+a_z^2}}\,\hat{k}$    (1.31)

The zero vector is represented by:

$\vec{0} = 0\hat{i} + 0\hat{j} + 0\hat{k}$    (1.32)
Addition: The vector sum of $\vec{a}$ and $\vec{b}$ is represented by:

$\vec{a} + \vec{b} = (a_x\hat{i} + a_y\hat{j} + a_z\hat{k}) + (b_x\hat{i} + b_y\hat{j} + b_z\hat{k}) = (a_x+b_x)\hat{i} + (a_y+b_y)\hat{j} + (a_z+b_z)\hat{k}$    (1.33)

Subtraction: The difference between $\vec{a}$ and $\vec{b}$ is:

$\vec{a} - \vec{b} = (a_x\hat{i} + a_y\hat{j} + a_z\hat{k}) - (b_x\hat{i} + b_y\hat{j} + b_z\hat{k}) = (a_x-b_x)\hat{i} + (a_y-b_y)\hat{j} + (a_z-b_z)\hat{k}$    (1.34)

Scalar multiplication: The vector defined by $\alpha\vec{a}$ is:

$\alpha\vec{a} = \alpha a_x\hat{i} + \alpha a_y\hat{j} + \alpha a_z\hat{k}$

Vector product: The vector product $\vec{a}\wedge\vec{b}$ is evaluated as:

$\vec{c} = \vec{a}\wedge\vec{b} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\ a_x & a_y & a_z\\ b_x & b_y & b_z\end{vmatrix} = \begin{vmatrix}a_y & a_z\\ b_y & b_z\end{vmatrix}\hat{i} - \begin{vmatrix}a_x & a_z\\ b_x & b_z\end{vmatrix}\hat{j} + \begin{vmatrix}a_x & a_y\\ b_x & b_y\end{vmatrix}\hat{k}$    (1.35)

$= (a_yb_z - a_zb_y)\hat{i} - (a_xb_z - a_zb_x)\hat{j} + (a_xb_y - a_yb_x)\hat{k}$    (1.36)
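The component formula (1.36) agrees with NumPy's cross product; a small sketch with arbitrary sample vectors:

```python
import numpy as np

a = np.array([1.0, -4.0, 0.0])
b = np.array([-1.0, -2.0, 2.0])

# Components from eq. (1.36)
c_manual = np.array([a[1] * b[2] - a[2] * b[1],      # a_y b_z - a_z b_y
                     -(a[0] * b[2] - a[2] * b[0]),   # -(a_x b_z - a_z b_x)
                     a[0] * b[1] - a[1] * b[0]])     # a_x b_y - a_y b_x
c = np.cross(a, b)
```

Both give (-8, -2, -6) for these sample vectors.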
Scalar triple product: The scalar triple product $[\vec{a},\vec{b},\vec{c}]$ is the determinant of the $3\times 3$ matrix formed by the vector components, defined as:

$V(\vec{a},\vec{b},\vec{c}) = \vec{a}\cdot(\vec{b}\wedge\vec{c}) = \vec{b}\cdot(\vec{c}\wedge\vec{a}) = \vec{c}\cdot(\vec{a}\wedge\vec{b}) = \begin{vmatrix}a_x & a_y & a_z\\ b_x & b_y & b_z\\ c_x & c_y & c_z\end{vmatrix} = a_x\begin{vmatrix}b_y & b_z\\ c_y & c_z\end{vmatrix} - a_y\begin{vmatrix}b_x & b_z\\ c_x & c_z\end{vmatrix} + a_z\begin{vmatrix}b_x & b_y\\ c_x & c_y\end{vmatrix}$    (1.37)

$= a_x(b_yc_z - b_zc_y) - a_y(b_xc_z - b_zc_x) + a_z(b_xc_y - b_yc_x)$
Vector triple product: The vector triple product made up of the vectors $(\vec{a},\vec{b},\vec{c})$ is obtained, in the Cartesian basis, as:

$\vec{a}\wedge(\vec{b}\wedge\vec{c}) = (\vec{a}\cdot\vec{c})\,\vec{b} - (\vec{a}\cdot\vec{b})\,\vec{c} = (\lambda_1 b_x - \lambda_2 c_x)\hat{i} + (\lambda_1 b_y - \lambda_2 c_y)\hat{j} + (\lambda_1 b_z - \lambda_2 c_z)\hat{k}$    (1.38)

where $\lambda_1 = \vec{a}\cdot\vec{c} = a_xc_x + a_yc_y + a_zc_z$, and $\lambda_2 = \vec{a}\cdot\vec{b} = a_xb_x + a_yb_y + a_zb_z$.
Problem 1.3: Consider the points $A(1,3,1)$, $B(2,-1,1)$, $C(0,1,3)$ and $D(1,2,4)$, defined in the Cartesian coordinate system. 1) Find the area of the parallelogram defined by $\overrightarrow{AB}$ and $\overrightarrow{AC}$; 2) Find the volume of the parallelepiped defined by $\overrightarrow{AB}$, $\overrightarrow{AC}$ and $\overrightarrow{AD}$; 3) Find the projection vector of $\overrightarrow{AB}$ onto $\overrightarrow{BC}$.

Solution:

$\vec{a} = \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA} = (2\hat{i} - 1\hat{j} + 1\hat{k}) - (1\hat{i} + 3\hat{j} + 1\hat{k}) = 1\hat{i} - 4\hat{j} + 0\hat{k}$
$\vec{b} = \overrightarrow{AC} = \overrightarrow{OC} - \overrightarrow{OA} = (0\hat{i} + 1\hat{j} + 3\hat{k}) - (1\hat{i} + 3\hat{j} + 1\hat{k}) = -1\hat{i} - 2\hat{j} + 2\hat{k}$

With reference to equation (1.36) we can evaluate the vector product as follows:

$\vec{a}\wedge\vec{b} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\ 1 & -4 & 0\\ -1 & -2 & 2\end{vmatrix} = (-8)\hat{i} - (2)\hat{j} + (-6)\hat{k}$

Then, the parallelogram area can be obtained using the definition (1.17), thus:

$A = \|\vec{a}\wedge\vec{b}\| = \sqrt{(-8)^2 + (-2)^2 + (-6)^2} = \sqrt{104}$

Next,

$\vec{c} = \overrightarrow{AD} = \overrightarrow{OD} - \overrightarrow{OA} = (1\hat{i} + 2\hat{j} + 4\hat{k}) - (1\hat{i} + 3\hat{j} + 1\hat{k}) = 0\hat{i} - 1\hat{j} + 3\hat{k}$

and using the equation (1.37) we can obtain the volume of the parallelepiped:

$V(\vec{a},\vec{b},\vec{c}) = |\vec{c}\cdot(\vec{a}\wedge\vec{b})| = |(0\hat{i} - 1\hat{j} + 3\hat{k})\cdot(-8\hat{i} - 2\hat{j} - 6\hat{k})| = |0 + 2 - 18| = 16$

Finally, with $\overrightarrow{BC} = \overrightarrow{OC} - \overrightarrow{OB} = (0\hat{i} + 1\hat{j} + 3\hat{k}) - (2\hat{i} - 1\hat{j} + 1\hat{k}) = -2\hat{i} + 2\hat{j} + 2\hat{k}$, the projection vector is:

$\overrightarrow{\mathrm{proj}}_{\overrightarrow{BC}}\overrightarrow{AB} = \dfrac{\overrightarrow{BC}\cdot\overrightarrow{AB}}{\overrightarrow{BC}\cdot\overrightarrow{BC}}\,\overrightarrow{BC} = \dfrac{(-2-8+0)}{(4+4+4)}\,(-2\hat{i} + 2\hat{j} + 2\hat{k}) = \dfrac{5}{3}\hat{i} - \dfrac{5}{3}\hat{j} - \dfrac{5}{3}\hat{k}$
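The results of Problem 1.3 can be verified with a few lines of NumPy (a sketch; the variable names are ours):

```python
import numpy as np

A = np.array([1.0, 3.0, 1.0])
B = np.array([2.0, -1.0, 1.0])
C = np.array([0.0, 1.0, 3.0])
D = np.array([1.0, 2.0, 4.0])

a, b, c = B - A, C - A, D - A          # AB, AC, AD
area = np.linalg.norm(np.cross(a, b))  # parallelogram area = sqrt(104)
volume = abs(c @ np.cross(a, b))       # parallelepiped volume = 16

BC = C - B
proj = (BC @ a) / (BC @ BC) * BC       # projection vector of AB onto BC
```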
1.3.3 Einstein Summation Convention

As we saw in equation (1.28), $\vec{a}$ in the Cartesian coordinate system was defined as:

$\vec{a} = a_x\hat{i} + a_y\hat{j} + a_z\hat{k}$    (1.39)

Denoting the Cartesian basis by $(\hat{e}_1, \hat{e}_2, \hat{e}_3)$ and the components by $(a_1, a_2, a_3)$, (see Figure 1.11), the same vector can be written as:

$\vec{a} = a_1\hat{e}_1 + a_2\hat{e}_2 + a_3\hat{e}_3$    (1.40)

$\vec{a} = \sum_{i=1}^{3} a_i\hat{e}_i$    (1.41)

Then, we introduce the summation convention, according to which repeated indices indicate summation. So, equation (1.41) can be represented as follows:

$\vec{a} = a_1\hat{e}_1 + a_2\hat{e}_2 + a_3\hat{e}_3 = a_i\hat{e}_i \quad (i = 1,2,3)$    (1.42)

NOTE: The summation convention was introduced by Albert Einstein in 1916, which led to the indicial notation.
Figure 1.11: Correspondence between notations: $x \equiv x_1$, $y \equiv x_2$, $z \equiv x_3$; $\hat{i} \equiv \hat{e}_1$, $\hat{j} \equiv \hat{e}_2$, $\hat{k} \equiv \hat{e}_3$; $a_x \equiv a_1$, $a_y \equiv a_2$, $a_z \equiv a_3$.
Unit vector components: Let $\vec{a}$ be a vector; the normalized vector $\hat{a}$ is defined as:

$\hat{a} = \dfrac{\vec{a}}{\|\vec{a}\|}$ with $\|\hat{a}\| = 1$    (1.45)

whose components are:

$\hat{a}_i = \dfrac{a_i}{\sqrt{a_1^2 + a_2^2 + a_3^2}} = \dfrac{a_i}{\sqrt{a_ja_j}} = \dfrac{a_i}{\sqrt{a_ka_k}} \quad (i,j,k = 1,2,3)$    (1.46)
Scalar product: Using definitions (1.4) and (1.29), we can express the scalar product $\vec{a}\cdot\vec{b}$ as follows:

$\gamma = \vec{a}\cdot\vec{b} = \|\vec{a}\|\,\|\vec{b}\|\cos\theta = a_1b_1 + a_2b_2 + a_3b_3 = a_ib_i = a_jb_j \quad (i,j = 1,2,3)$    (1.47)
Problem 1.4: Rewrite the following system of equations in indicial notation:

$a_{11}x + a_{12}y + a_{13}z = b_x$
$a_{21}x + a_{22}y + a_{23}z = b_y$
$a_{31}x + a_{32}y + a_{33}z = b_z$

Solution: Using the correspondence $x \to x_1$, $y \to x_2$, $z \to x_3$ and $b_x \to b_1$, $b_y \to b_2$, $b_z \to b_3$, the system becomes:

$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$
$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$
$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$

which, using the summation convention over the dummy index $j$, reduces to:

$a_{1j}x_j = b_1 \quad ; \quad a_{2j}x_j = b_2 \quad ; \quad a_{3j}x_j = b_3$

and finally, introducing the free index $i$:

$a_{ij}x_j = b_i$

As we can appreciate in this problem, the use of indicial notation makes the equations very concise. In many cases, if algebraic operations do not use indicial or tensorial notation, they become almost impossible to deal with due to the large number of terms involved.
Problem 1.5: Expand the expression $A_{ij}x_ix_j$ $(i,j = 1,2,3)$.

Solution: The indices $i$, $j$ are dummy indices and indicate summation; there is no free index in the expression $A_{ij}x_ix_j$, therefore the result is a scalar. So, expanding first the dummy index $i$ and later the index $j$, we obtain:

$A_{ij}x_ix_j = A_{1j}x_1x_j + A_{2j}x_2x_j + A_{3j}x_3x_j$
$= A_{11}x_1x_1 + A_{12}x_1x_2 + A_{13}x_1x_3 + A_{21}x_2x_1 + A_{22}x_2x_2 + A_{23}x_2x_3 + A_{31}x_3x_1 + A_{32}x_3x_2 + A_{33}x_3x_3$
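The nine-term expansion of Problem 1.5 can be cross-checked against `einsum` (arbitrary sample data):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)
x = np.array([1.0, 2.0, 3.0])

# A_ij x_i x_j : both i and j are dummy indices, so the result is a scalar
scalar = np.einsum('ij,i,j->', A, x, x)

# The same sum written out term by term
expanded = sum(A[i, j] * x[i] * x[j] for i in range(3) for j in range(3))
```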
1.4.1 Some Operators

1.4.1.1 Kronecker Delta

The Kronecker delta $\delta_{ij}$ is defined as:

$\delta_{ij} = \begin{cases}1 & \text{if } i = j\\ 0 & \text{if } i \neq j\end{cases}$    (1.48)
Also note that the scalar product of the orthonormal basis vectors, $\hat{e}_i\cdot\hat{e}_j$, is equal to $1$ if $i = j$ and equal to $0$ if $i \neq j$. Hence, $\hat{e}_i\cdot\hat{e}_j$ can be expressed in matrix form as:

$\hat{e}_i\cdot\hat{e}_j = \begin{bmatrix}\hat{e}_1\cdot\hat{e}_1 & \hat{e}_1\cdot\hat{e}_2 & \hat{e}_1\cdot\hat{e}_3\\ \hat{e}_2\cdot\hat{e}_1 & \hat{e}_2\cdot\hat{e}_2 & \hat{e}_2\cdot\hat{e}_3\\ \hat{e}_3\cdot\hat{e}_1 & \hat{e}_3\cdot\hat{e}_2 & \hat{e}_3\cdot\hat{e}_3\end{bmatrix} = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} = \delta_{ij}$    (1.49)
An interesting property of the Kronecker delta is shown in the following example. Let $V_i$ be the components of the vector $\vec{V}$; therefore:

$\delta_{ij}V_i = \delta_{1j}V_1 + \delta_{2j}V_2 + \delta_{3j}V_3$    (1.50)

Since $\delta_{ij}$ is nonzero only when $i = j$, it follows that:

$\delta_{ij}V_i = V_j$    (1.51)

That is, in the presence of the Kronecker delta symbol we simply replace the repeated index. For this reason, the Kronecker delta is often called the substitution operator. Other examples using the Kronecker delta are presented below:

$\delta_{ij}A_{ik} = A_{jk} \quad ; \quad \delta_{ij}\delta_{ji} = \delta_{ii} = \delta_{jj} = \delta_{11}+\delta_{22}+\delta_{33} = 3 \quad ; \quad \delta_{ji}a_{ji} = a_{ii} = a_{11}+a_{22}+a_{33}$    (1.53)
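Numerically, the Kronecker delta is just the 3x3 identity matrix, and the substitution property (1.51) is visible with `einsum`; a short sketch:

```python
import numpy as np

delta = np.eye(3)                 # Kronecker delta delta_ij
V = np.array([7.0, -2.0, 5.0])

Vj = np.einsum('ij,i->j', delta, V)   # delta_ij V_i = V_j, eq. (1.51)
trace = np.einsum('ii->', delta)      # delta_ii = 3
```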
1.4.1.2 Permutation Symbol

The permutation symbol $\epsilon_{ijk}$ (also known as the Levi-Civita symbol or alternating symbol) is defined as:

$\epsilon_{ijk} = \begin{cases}+1 & \text{if } (i,j,k) \text{ is } (1,2,3),\ (2,3,1) \text{ or } (3,1,2)\\ -1 & \text{if } (i,j,k) \text{ is } (1,3,2),\ (3,2,1) \text{ or } (2,1,3)\\ 0 & \text{if any index is repeated}\end{cases}$    (1.55)

NOTE: $\epsilon_{ijk}$ are the components of the Levi-Civita pseudo-tensor, which will be introduced later on.

The values of $\epsilon_{ijk}$ can easily be memorized by means of the mnemonic device shown in Figure 1.12(a): if the index values are arranged in the clockwise (cyclic) direction, the value of $\epsilon_{ijk}$ is equal to $1$; if not, it has the value $-1$. In the same way we can use this mnemonic device to switch indices, (see Figure 1.12(b)).

Figure 1.12: Mnemonic device for the permutation symbol.

Notes on Continuum Mechanics (Springer/CIMNE) - (Chapter 01) - By: Eduardo W.V. Chaves

In addition, the permutation symbol can be expressed in terms of its indices as:

$\epsilon_{ijk} = \dfrac{1}{2}(i-j)(j-k)(k-i)$    (1.56)

Using both the definition seen in (1.55) and Figure 1.12(b), it is possible to verify that the following relations are valid:

$\epsilon_{ijk} = \epsilon_{jki} = \epsilon_{kij} = -\epsilon_{ikj} = -\epsilon_{kji} = -\epsilon_{jik}$    (1.57)

The permutation symbol can also be expressed in terms of the Kronecker delta:

$\epsilon_{ijk} = \epsilon_{lmn}\delta_{li}\delta_{mj}\delta_{nk} = \delta_{1i}\delta_{2j}\delta_{3k} - \delta_{1i}\delta_{3j}\delta_{2k} - \delta_{2i}\delta_{1j}\delta_{3k} + \delta_{3i}\delta_{1j}\delta_{2k} + \delta_{2i}\delta_{3j}\delta_{1k} - \delta_{3i}\delta_{2j}\delta_{1k}$    (1.58)

which can be rewritten as the determinant:

$\epsilon_{ijk} = \begin{vmatrix}\delta_{1i} & \delta_{1j} & \delta_{1k}\\ \delta_{2i} & \delta_{2j} & \delta_{2k}\\ \delta_{3i} & \delta_{3j} & \delta_{3k}\end{vmatrix} = \begin{vmatrix}\delta_{1i} & \delta_{2i} & \delta_{3i}\\ \delta_{1j} & \delta_{2j} & \delta_{3j}\\ \delta_{1k} & \delta_{2k} & \delta_{3k}\end{vmatrix}$    (1.59)

so the product of two permutation symbols becomes:

$\epsilon_{ijk}\,\epsilon_{pqr} = \begin{vmatrix}\delta_{1i} & \delta_{2i} & \delta_{3i}\\ \delta_{1j} & \delta_{2j} & \delta_{3j}\\ \delta_{1k} & \delta_{2k} & \delta_{3k}\end{vmatrix}\begin{vmatrix}\delta_{1p} & \delta_{1q} & \delta_{1r}\\ \delta_{2p} & \delta_{2q} & \delta_{2r}\\ \delta_{3p} & \delta_{3q} & \delta_{3r}\end{vmatrix}$    (1.60)
Taking into account that $\det(\mathcal{A}\mathcal{B}) = \det(\mathcal{A})\det(\mathcal{B})$, where $\det(\cdot)$ is the determinant of the matrix, the equation (1.60) can be rewritten as:

$\epsilon_{ijk}\,\epsilon_{pqr} = \begin{vmatrix}\delta_{ip} & \delta_{iq} & \delta_{ir}\\ \delta_{jp} & \delta_{jq} & \delta_{jr}\\ \delta_{kp} & \delta_{kq} & \delta_{kr}\end{vmatrix}$    (1.61)

Contracting the indices $k$ and $r$ (and noting that $\delta_{kk} = 3$), we obtain the useful identity:

$\epsilon_{ijk}\,\epsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp} \quad (i,j,k,p,q = 1,2,3)$    (1.62)
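The epsilon-delta identity (1.62) is easy to verify exhaustively by building the permutation symbol as a 3x3x3 array; a sketch:

```python
import numpy as np

# Levi-Civita symbol, eq. (1.55), with zero-based indices
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even (cyclic) permutations
    eps[i, k, j] = -1.0   # odd permutations

delta = np.eye(3)

# eps_ijk eps_pqk vs. delta_ip delta_jq - delta_iq delta_jp, eq. (1.62)
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
```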
Problem 1.7: a) Prove that $\epsilon_{ijk}\epsilon_{pjk} = 2\delta_{ip}$ and $\epsilon_{ijk}\epsilon_{ijk} = 6$. b) Obtain the numerical value of $\epsilon_{ijk}\delta_{2j}\delta_{3k}\delta_{1i}$.

Solution: a) Using the equation in (1.62), i.e. $\epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}$, and by substituting $q$ for $j$, we obtain:

$\epsilon_{ijk}\epsilon_{pjk} = \delta_{ip}\delta_{jj} - \delta_{ij}\delta_{jp} = 3\delta_{ip} - \delta_{ip} = 2\delta_{ip}$

Based on the above result, it is straightforward to check that:

$\epsilon_{ijk}\epsilon_{ijk} = 2\delta_{ii} = 6$

b) $\epsilon_{ijk}\delta_{2j}\delta_{3k}\delta_{1i} = \epsilon_{123} = 1$
Using the definition of the permutation symbol $\epsilon_{ijk}$, given in (1.55), we can express the components of $\vec{c} = \vec{a}\wedge\vec{b}$ as follows:

$c_1 = \epsilon_{123}a_2b_3 + \epsilon_{132}a_3b_2 = \epsilon_{1jk}a_jb_k \quad ; \quad c_2 = \epsilon_{2jk}a_jb_k \quad ; \quad c_3 = \epsilon_{3jk}a_jb_k$    (1.63)

i.e. $c_i = \epsilon_{ijk}a_jb_k$, so the vector product can be expressed as:

$\vec{a}\wedge\vec{b} = (a_j\hat{e}_j)\wedge(b_k\hat{e}_k) = a_jb_k\,(\hat{e}_j\wedge\hat{e}_k) = \epsilon_{ijk}a_jb_k\,\hat{e}_i$    (1.65)
The permutation symbol and the orthonormal basis can be interrelated by means of the vector and scalar triple products as follows:

$\hat{e}_i\wedge\hat{e}_j = \epsilon_{ijm}\,\hat{e}_m$    (1.67)

$(\hat{e}_i\wedge\hat{e}_j)\cdot\hat{e}_k = \epsilon_{ijk}$    (1.68)

Then, the scalar triple product can be written in indicial notation as:

$\lambda = \vec{a}\cdot(\vec{b}\wedge\vec{c}) = \epsilon_{ijk}\,a_ib_jc_k$    (1.69)

or

$\lambda = \vec{a}\cdot(\vec{b}\wedge\vec{c}) = \vec{b}\cdot(\vec{c}\wedge\vec{a}) = \vec{c}\cdot(\vec{a}\wedge\vec{b}) = \begin{vmatrix}a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3\end{vmatrix}$    (1.70)
Starting from the equation (1.69) we can prove that $\vec{a}\cdot(\vec{b}\wedge\vec{c}) = \vec{b}\cdot(\vec{c}\wedge\vec{a}) = \vec{c}\cdot(\vec{a}\wedge\vec{b})$:

$[\vec{a},\vec{b},\vec{c}] \equiv \vec{a}\cdot(\vec{b}\wedge\vec{c}) = \epsilon_{ijk}\,a_ib_jc_k$
$= \epsilon_{jki}\,a_ib_jc_k = \vec{b}\cdot(\vec{c}\wedge\vec{a}) \equiv [\vec{b},\vec{c},\vec{a}]$
$= \epsilon_{kij}\,a_ib_jc_k = \vec{c}\cdot(\vec{a}\wedge\vec{b}) \equiv [\vec{c},\vec{a},\vec{b}]$
$= -\epsilon_{ikj}\,a_ib_jc_k = -\vec{a}\cdot(\vec{c}\wedge\vec{b}) \equiv -[\vec{a},\vec{c},\vec{b}]$
$= -\epsilon_{jik}\,a_ib_jc_k = -\vec{b}\cdot(\vec{a}\wedge\vec{c}) \equiv -[\vec{b},\vec{a},\vec{c}]$
$= -\epsilon_{kji}\,a_ib_jc_k = -\vec{c}\cdot(\vec{b}\wedge\vec{a}) \equiv -[\vec{c},\vec{b},\vec{a}]$    (1.71)

where we take into account the property of the permutation symbol as given in (1.57).
Problem 1.8: Rewrite the expression $(\vec{a}\wedge\vec{b})\cdot(\vec{c}\wedge\vec{d})$ without using the vector product symbol.

Solution: The vector product $\vec{a}\wedge\vec{b}$ can be expressed as $\vec{a}\wedge\vec{b} = (a_j\hat{e}_j)\wedge(b_k\hat{e}_k) = \epsilon_{ijk}a_jb_k\hat{e}_i$. Likewise, it is possible to express $\vec{c}\wedge\vec{d} = \epsilon_{nlm}c_ld_m\hat{e}_n$, thus:

$(\vec{a}\wedge\vec{b})\cdot(\vec{c}\wedge\vec{d}) = (\epsilon_{ijk}a_jb_k\hat{e}_i)\cdot(\epsilon_{nlm}c_ld_m\hat{e}_n) = \epsilon_{ijk}\epsilon_{nlm}\,a_jb_kc_ld_m\,\hat{e}_i\cdot\hat{e}_n = \epsilon_{ijk}\epsilon_{nlm}\,a_jb_kc_ld_m\,\delta_{in} = \epsilon_{ijk}\epsilon_{ilm}\,a_jb_kc_ld_m$

Taking into account that $\epsilon_{ijk}\epsilon_{ilm} = \epsilon_{jki}\epsilon_{lmi}$ (see equation (1.57)) and by applying the equation (1.62), i.e. $\epsilon_{jki}\epsilon_{lmi} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}$, we obtain:

$\epsilon_{ijk}\epsilon_{ilm}\,a_jb_kc_ld_m = (\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl})\,a_jb_kc_ld_m = a_lb_mc_ld_m - a_mb_lc_ld_m$

$(\vec{a}\wedge\vec{b})\cdot(\vec{c}\wedge\vec{d}) = (\vec{a}\cdot\vec{c})(\vec{b}\cdot\vec{d}) - (\vec{a}\cdot\vec{d})(\vec{b}\cdot\vec{c})$
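The Lagrange-type identity derived in Problem 1.8 can be spot-checked numerically; a minimal sketch with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))

lhs = np.cross(a, b) @ np.cross(c, d)
rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)
```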
In the particular case $\vec{c} = \vec{a}$ and $\vec{d} = \vec{b}$ we recover the result of Problem 1.1:

$(\vec{a}\wedge\vec{b})\cdot(\vec{a}\wedge\vec{b}) = \|\vec{a}\wedge\vec{b}\|^2 = (\vec{a}\cdot\vec{a})(\vec{b}\cdot\vec{b}) - (\vec{a}\cdot\vec{b})(\vec{b}\cdot\vec{a}) = \|\vec{a}\|^2\|\vec{b}\|^2 - (\vec{a}\cdot\vec{b})^2$
Moreover, in components it holds that:

$\left\{\vec{c}\,[\vec{d}\cdot(\vec{a}\wedge\vec{b})] - \vec{d}\,[\vec{c}\cdot(\vec{a}\wedge\vec{b})]\right\}_p = c_pd_i\,\epsilon_{ijk}a_jb_k - d_pc_i\,\epsilon_{ijk}a_jb_k = \epsilon_{ijk}a_jb_k\,(c_pd_i - c_id_p)$
If the vectors $\vec{a}$, $\vec{b}$, $\vec{c}$ are linearly independent and $\vec{v} = \alpha\vec{a} + \beta\vec{b} + \gamma\vec{c}$, the coefficient $\alpha$ can be obtained as the ratio:

$\alpha = \dfrac{\vec{v}\cdot(\vec{b}\wedge\vec{c})}{\vec{a}\cdot(\vec{b}\wedge\vec{c})} = \dfrac{\begin{vmatrix} v_1 & v_2 & v_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3 \end{vmatrix}}{\begin{vmatrix} a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3 \end{vmatrix}} = \dfrac{\epsilon_{ijk}\,v_ib_jc_k}{\epsilon_{pqr}\,a_pb_qc_r}$
Problem 1.11: Show that $[\vec{a}\wedge(\vec{b}\wedge\vec{c})]_q = b_q(\vec{a}\cdot\vec{c}) - c_q(\vec{a}\cdot\vec{b})$.

Solution: Taking into account that $(\vec{d})_i = (\vec{b}\wedge\vec{c})_i = \epsilon_{ijk}b_jc_k$ and that $[\vec{a}\wedge\vec{d}]_q = \epsilon_{qik}a_id_k$, we obtain:

$[\vec{a}\wedge(\vec{b}\wedge\vec{c})]_q = \epsilon_{qik}\,a_i\,\epsilon_{kjm}b_jc_m = \epsilon_{qik}\epsilon_{jmk}\,a_ib_jc_m = (\delta_{qj}\delta_{im} - \delta_{qm}\delta_{ij})\,a_ib_jc_m$
$= a_kb_qc_k - a_jb_jc_q = b_q(\vec{a}\cdot\vec{c}) - c_q(\vec{a}\cdot\vec{b})$

i.e. $[\vec{a}\wedge(\vec{b}\wedge\vec{c})]_q = [\vec{b}(\vec{a}\cdot\vec{c}) - \vec{c}(\vec{a}\cdot\vec{b})]_q$
1.5 Second-Order Tensors

Dyadic

The tensor product of two vectors $\vec{u}$ and $\vec{v}$ produces a dyad, which is a particular case of a second-order tensor. The dyad is represented by:

$\vec{u}\,\vec{v} \equiv \vec{u}\otimes\vec{v} = \mathbf{A}$    (1.72)

where the operator $\otimes$ denotes the tensor product. Then, we define a dyadic as a linear combination of dyads. Furthermore, as we will see later, any tensor can be represented by means of a linear combination of dyads, (see Holzapfel (2000)).

The tensor product has the following properties:

1. $(\vec{u}\otimes\vec{v})\cdot\vec{x} = \vec{u}\,(\vec{v}\cdot\vec{x})$    (1.73)

2. $\vec{u}\otimes(\alpha\vec{v} + \beta\vec{w}) = \alpha\,\vec{u}\otimes\vec{v} + \beta\,\vec{u}\otimes\vec{w}$    (1.74)

3. $(\alpha\,\vec{v}\otimes\vec{u} + \beta\,\vec{w}\otimes\vec{r})\cdot\vec{x} = \alpha\,(\vec{v}\otimes\vec{u})\cdot\vec{x} + \beta\,(\vec{w}\otimes\vec{r})\cdot\vec{x} = \alpha\,\vec{v}\,(\vec{u}\cdot\vec{x}) + \beta\,\vec{w}\,(\vec{r}\cdot\vec{x})$    (1.75)

where $\alpha$ and $\beta$ are scalars. By definition, the dyad does not possess the commutative property, i.e. $\vec{u}\otimes\vec{v} \neq \vec{v}\otimes\vec{u}$.
The equation (1.72) can also be expressed in the Cartesian system as:

$\mathbf{A} = \vec{u}\otimes\vec{v} = (u_i\hat{e}_i)\otimes(v_j\hat{e}_j) = u_iv_j\,(\hat{e}_i\otimes\hat{e}_j) = A_{ij}\,(\hat{e}_i\otimes\hat{e}_j) \quad (i,j = 1,2,3)$    (1.76)

$\mathbf{A} = \underbrace{A_{ij}}_{\substack{\text{tensor}\\ \text{components}}}\ \underbrace{\hat{e}_i\otimes\hat{e}_j}_{\text{basis}}$    (1.77)

and the dyad components are:

$(\mathbf{A})_{ij} = (\vec{u}\otimes\vec{v})_{ij} = u_iv_j = A_{ij}$    (1.78)
In matrix form:

$(\mathbf{A})_{ij} = A_{ij} = \mathcal{A} = \begin{bmatrix}A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\end{bmatrix}$    (1.79)
It is easy to identify the tensor order by the number of free indices in the tensor components, i.e.:

Second-order tensor: $\mathbf{U} = U_{ij}\,\hat{e}_i\otimes\hat{e}_j$
Third-order tensor: $\mathbf{T} = T_{ijk}\,\hat{e}_i\otimes\hat{e}_j\otimes\hat{e}_k$
Fourth-order tensor: $\mathbf{I} = I_{ijkl}\,\hat{e}_i\otimes\hat{e}_j\otimes\hat{e}_k\otimes\hat{e}_l \quad (i,j,k,l = 1,2,3)$    (1.80)

OBS.: The tensor order is given by the number of free indices in its components.

OBS.: The number of tensor components is given by $a^n$, where the base $a$ is the maximum value in the index range, and the exponent $n$ is the number of free indices.
Problem 1.12: Define the order of the tensors represented by their Cartesian components: $v_i$, $\Phi_{ijk}$, $F_{ijj}$, $\varepsilon_{ij}$, $C_{ijkl}$, $\sigma_{ij}$. Determine the number of components in tensor $\mathbf{C}$.

Solution: The order of each tensor is given by its number of free indices, so it follows that:
First-order tensors (vectors): $\vec{v}$, $\vec{F}$ (in $F_{ijj}$ the index $j$ is a dummy index, so only $i$ is free); Second-order tensors: $\varepsilon$, $\sigma$; Third-order tensor: $\Phi$; Fourth-order tensor: $\mathbf{C}$.

The number of tensor components is given by the maximum index range value, i.e. $i,j,k,l = 1,2,3$, raised to the power of the number of free indices, which is equal to $4$ in the case of $C_{ijkl}$. Thus, the number of independent components in $\mathbf{C}$ is given by:

$3^4 = (i=3)\times(j=3)\times(k=3)\times(l=3) = 81$
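A dyad is simply an outer product of two vectors, and property (1.73) follows from associativity; a sketch in NumPy:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
x = np.array([1.0, 0.0, -1.0])

A = np.outer(u, v)        # dyad: A_ij = u_i v_j, eq. (1.78)

lhs = A @ x               # (u (x) v) . x
rhs = u * (v @ x)         # u (v . x), eq. (1.73)
```

Note also that `np.outer(v, u)` differs from `A`, reflecting the non-commutativity of the dyad.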
Addition: The sum of two tensors of the same order is a new tensor, defined as follows:

$\mathbf{C} = \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}$    (1.81)

or, in components and in matrix form:

$C_{ij} = A_{ij} + B_{ij}$    (1.82)

$\mathcal{C} = \mathcal{A} + \mathcal{B}$    (1.83)

Multiplication by a scalar: $\mathbf{D} = \alpha\mathbf{A}$, in components:

$(\mathbf{D})_{ij} = \alpha(\mathbf{A})_{ij}$    (1.84)
or, in matrix form:

$\alpha\mathcal{A} = \alpha\begin{bmatrix}A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\end{bmatrix} = \begin{bmatrix}\alpha A_{11} & \alpha A_{12} & \alpha A_{13}\\ \alpha A_{21} & \alpha A_{22} & \alpha A_{23}\\ \alpha A_{31} & \alpha A_{32} & \alpha A_{33}\end{bmatrix}$    (1.85)

Moreover, the following is satisfied:

$(\alpha\mathbf{A})\cdot\vec{v} = \alpha(\mathbf{A}\cdot\vec{v})$    (1.86)

for any vector $\vec{v}$.
Scalar Product (or Dot Product): The scalar product (also known as single contraction) between a second-order tensor $\mathbf{A}$ and a vector $\vec{x}$ is another vector (first-order tensor) $\vec{y}$, defined as:

$\vec{y} = \mathbf{A}\cdot\vec{x} = (A_{jk}\,\hat{e}_j\otimes\hat{e}_k)\cdot(x_l\hat{e}_l) = A_{jk}x_l\,\delta_{kl}\,\hat{e}_j = \underbrace{A_{jk}x_k}_{y_j}\,\hat{e}_j = y_j\hat{e}_j$    (1.87)
The scalar product between two second-order tensors $\mathbf{A}$ and $\mathbf{B}$ is another second-order tensor, which verifies $\mathbf{A}\cdot\mathbf{B} \neq \mathbf{B}\cdot\mathbf{A}$ in general:

$\mathbf{C} = \mathbf{A}\cdot\mathbf{B} = (A_{ij}\,\hat{e}_i\otimes\hat{e}_j)\cdot(B_{kl}\,\hat{e}_k\otimes\hat{e}_l) = A_{ij}B_{kl}\,\delta_{jk}\,\hat{e}_i\otimes\hat{e}_l = \underbrace{A_{ik}B_{kl}}_{(\mathbf{A}\cdot\mathbf{B})_{il}}\,\hat{e}_i\otimes\hat{e}_l = C_{il}\,\hat{e}_i\otimes\hat{e}_l$

$\mathbf{D} = \mathbf{B}\cdot\mathbf{A} = (B_{ij}\,\hat{e}_i\otimes\hat{e}_j)\cdot(A_{kl}\,\hat{e}_k\otimes\hat{e}_l) = B_{ij}A_{kl}\,\delta_{jk}\,\hat{e}_i\otimes\hat{e}_l = \underbrace{B_{ik}A_{kl}}_{(\mathbf{B}\cdot\mathbf{A})_{il}}\,\hat{e}_i\otimes\hat{e}_l = D_{il}\,\hat{e}_i\otimes\hat{e}_l$    (1.88)

The scalar product is associative:

$\mathbf{A}\cdot(\mathbf{B}\cdot\mathbf{C}) = (\mathbf{A}\cdot\mathbf{B})\cdot\mathbf{C}$    (1.89)

The scalar product allows us to define the power of second-order tensors, as seen below:

$\mathbf{A}^0 = \mathbf{1} \ ;\ \mathbf{A}^1 = \mathbf{A} \ ;\ \mathbf{A}^2 = \mathbf{A}\cdot\mathbf{A} \ ;\ \mathbf{A}^3 = \mathbf{A}\cdot\mathbf{A}\cdot\mathbf{A}$, and so on,    (1.90)

where $\mathbf{1}$ is the second-order unit tensor (also called the identity tensor), whose components are $(\mathbf{1})_{ij} = \delta_{ij}$.    (1.91)
Double contraction: The double contraction (denoted by $\cdot\cdot$ or by $:$) of two second-order tensors produces a scalar. In components:

$\mathbf{A}\cdot\cdot\,\mathbf{B} = (A_{ij}\,\hat{e}_i\otimes\hat{e}_j)\cdot\cdot(B_{kl}\,\hat{e}_k\otimes\hat{e}_l) = A_{ij}B_{kl}\,\delta_{jk}\delta_{il} = A_{ij}B_{ji} = \gamma \ (\text{scalar})$    (1.92)

Using dyads, the double contraction $:$ is defined by:

$\mathbf{A}:\mathbf{B} = (\vec{c}\otimes\vec{d}):(\vec{u}\otimes\vec{v}) = (\vec{c}\cdot\vec{u})(\vec{d}\cdot\vec{v})$    (1.93)

$\mathbf{B}:\mathbf{A} = (\vec{u}\otimes\vec{v}):(\vec{c}\otimes\vec{d}) = (\vec{u}\cdot\vec{c})(\vec{v}\cdot\vec{d}) = (\vec{c}\cdot\vec{u})(\vec{d}\cdot\vec{v}) = \mathbf{A}:\mathbf{B}$    (1.94)

and in components:

$\mathbf{A}:\mathbf{B} = A_{ij}B_{kl}\,\delta_{ik}\delta_{jl} = A_{ij}B_{ij} = \gamma \ (\text{scalar})$    (1.95)

In general, $\mathbf{A}:\mathbf{B} \neq \mathbf{A}\cdot\cdot\,\mathbf{B}$; however, they are equal if at least one of the tensors is symmetric, i.e. $\mathbf{A}^{sym}:\mathbf{B} = \mathbf{A}^{sym}\cdot\cdot\,\mathbf{B}$ and $\mathbf{A}:\mathbf{B}^{sym} = \mathbf{A}\cdot\cdot\,\mathbf{B}^{sym}$, so $\mathbf{A}^{sym}:\mathbf{B}^{sym} = \mathbf{A}^{sym}\cdot\cdot\,\mathbf{B}^{sym}$.
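The distinction between $\mathbf{A}\cdot\cdot\,\mathbf{B} = A_{ij}B_{ji}$ and $\mathbf{A}:\mathbf{B} = A_{ij}B_{ij}$, and their coincidence for a symmetric argument, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

double_dot = np.einsum('ij,ji->', A, B)   # A .. B = A_ij B_ji, eq. (1.92)
colon = np.einsum('ij,ij->', A, B)        # A : B  = A_ij B_ij, eq. (1.95)

S = 0.5 * (A + A.T)                       # symmetric part of A
```

For the symmetric tensor `S`, both contractions with `B` agree.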
The double contraction with a third-order tensor ($\mathbf{S}$) and a second-order tensor ($\mathbf{B}$) becomes:

$\mathbf{S}:\mathbf{B} = (\vec{c}\otimes\vec{d}\otimes\vec{a}):(\vec{u}\otimes\vec{v}) = (\vec{d}\cdot\vec{u})(\vec{a}\cdot\vec{v})\,\vec{c}$
$\mathbf{B}:\mathbf{S} = (\vec{u}\otimes\vec{v}):(\vec{c}\otimes\vec{d}\otimes\vec{a}) = (\vec{u}\cdot\vec{c})(\vec{v}\cdot\vec{d})\,\vec{a}$    (1.96)

As we can verify, the result is a vector. In symbolic notation, the double contraction $\mathbf{S}:\mathbf{B}$ is represented by:

$(S_{ijk}\,\hat{e}_i\otimes\hat{e}_j\otimes\hat{e}_k):(B_{pq}\,\hat{e}_p\otimes\hat{e}_q) = S_{ijk}B_{pq}\,\delta_{jp}\delta_{kq}\,\hat{e}_i = S_{ijk}B_{jk}\,\hat{e}_i$    (1.97)
Likewise, the double contraction of a fourth-order tensor with a second-order tensor produces a second-order tensor, e.g.:

$\mathbf{C}:\boldsymbol{\varepsilon} = C_{ijkl}\varepsilon_{kl}\,\hat{e}_i\otimes\hat{e}_j = \sigma_{ij}\,\hat{e}_i\otimes\hat{e}_j$    (1.98)
We can also demonstrate that:

$\vec{a}\cdot\mathbf{A}\cdot\vec{b} = a_p\hat{e}_p\cdot(A_{ij}\,\hat{e}_i\otimes\hat{e}_j)\cdot b_r\hat{e}_r = a_pA_{ij}b_r\,\delta_{pi}\delta_{jr} = a_iA_{ij}b_j = A_{ij}(a_ib_j) = \mathbf{A}:(\vec{a}\otimes\vec{b})$    (1.101)
Vector product: Using the definition (1.67), i.e. $\hat{e}_j\wedge\hat{e}_k = \epsilon_{ljk}\hat{e}_l$, the vector product can be written as:

$\vec{a}\wedge\vec{b} = \epsilon_{ljk}\,a_jb_k\,\hat{e}_l$    (1.102)

In Problem 1.11 we have shown that:

$[\vec{a}\wedge(\vec{b}\wedge\vec{c})]_j = (a_kc_k)b_j - (a_kb_k)c_j = (b_jc_k - c_jb_k)\,a_k = \{(\vec{b}\otimes\vec{c} - \vec{c}\otimes\vec{b})\cdot\vec{a}\}_j$    (1.103)

In the particular case when $\vec{c} = \vec{a}$ we obtain:

$[\vec{a}\wedge(\vec{b}\wedge\vec{a})]_j = (a_ka_k)b_j - (a_kb_k)a_j = (a_ka_k)b_p\delta_{jp} - (a_kb_p\delta_{kp})a_j = [(a_ka_k)\delta_{jp} - a_pa_j]\,b_p = \{[(\vec{a}\cdot\vec{a})\mathbf{1} - \vec{a}\otimes\vec{a}]\cdot\vec{b}\}_j$    (1.104)

Thus, the following identities hold:

$\vec{a}\wedge(\vec{b}\wedge\vec{c}) = (\vec{a}\cdot\vec{c})\,\vec{b} - (\vec{a}\cdot\vec{b})\,\vec{c} = (\vec{b}\otimes\vec{c} - \vec{c}\otimes\vec{b})\cdot\vec{a}$
$\vec{a}\wedge(\vec{b}\wedge\vec{a}) = [(\vec{a}\cdot\vec{a})\mathbf{1} - \vec{a}\otimes\vec{a}]\cdot\vec{b}$    (1.105)

1.5.1.1 Graphical Representation of a Second-Order Tensor
A second-order tensor $\mathbf{T}$ can be expanded in the Cartesian basis as:

$\mathbf{T} = T_{ij}\,\hat{e}_i\otimes\hat{e}_j = T_{i1}\,\hat{e}_i\otimes\hat{e}_1 + T_{i2}\,\hat{e}_i\otimes\hat{e}_2 + T_{i3}\,\hat{e}_i\otimes\hat{e}_3$
$= T_{11}\,\hat{e}_1\otimes\hat{e}_1 + T_{12}\,\hat{e}_1\otimes\hat{e}_2 + T_{13}\,\hat{e}_1\otimes\hat{e}_3 + T_{21}\,\hat{e}_2\otimes\hat{e}_1 + T_{22}\,\hat{e}_2\otimes\hat{e}_2 + T_{23}\,\hat{e}_2\otimes\hat{e}_3 + T_{31}\,\hat{e}_3\otimes\hat{e}_1 + T_{32}\,\hat{e}_3\otimes\hat{e}_2 + T_{33}\,\hat{e}_3\otimes\hat{e}_3$    (1.106)

The projection of $\mathbf{T}$ onto one of the basis vectors is:

$\vec{t}^{(\hat{e}_k)} = \mathbf{T}\cdot\hat{e}_k = T_{ik}\,\hat{e}_i$    (1.107)

so that:

$k=1:\ T_{i1}\,\hat{e}_i = T_{11}\hat{e}_1 + T_{21}\hat{e}_2 + T_{31}\hat{e}_3 = \vec{t}^{(\hat{e}_1)}$
$k=2:\ T_{i2}\,\hat{e}_i = T_{12}\hat{e}_1 + T_{22}\hat{e}_2 + T_{32}\hat{e}_3 = \vec{t}^{(\hat{e}_2)}$
$k=3:\ T_{i3}\,\hat{e}_i = T_{13}\hat{e}_1 + T_{23}\hat{e}_2 + T_{33}\hat{e}_3 = \vec{t}^{(\hat{e}_3)}$    (1.108)
The graphical representation of these three vectors, $\vec{t}^{(\hat{e}_1)}$, $\vec{t}^{(\hat{e}_2)}$, $\vec{t}^{(\hat{e}_3)}$, in the Cartesian basis is shown in Figure 1.13. Note also that $\vec{t}^{(\hat{e}_1)}$ is the projection of $\mathbf{T}$ onto $\hat{e}_1$, $n_i^{(1)} = [1,0,0]$:

$\begin{bmatrix}T_{11} & T_{12} & T_{13}\\ T_{21} & T_{22} & T_{23}\\ T_{31} & T_{32} & T_{33}\end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix} = \begin{bmatrix}T_{11}\\T_{21}\\T_{31}\end{bmatrix} = t_i^{(\hat{e}_1)}$    (1.109)

Figure 1.13: The vectors $\vec{t}^{(\hat{e}_1)}$, $\vec{t}^{(\hat{e}_2)}$, $\vec{t}^{(\hat{e}_3)}$ in the Cartesian basis.
The components that act normal to each coordinate plane correspond to the diagonal terms of $T_{ij}$ and are called normal components. The components displayed tangentially to the plane are called tangential components, and correspond to the off-diagonal terms of $T_{ij}$.

Figure 1.14: Graphical representation of the components of the second-order tensor $T_{ij}$ in the Cartesian basis.
In summary, in symbolic notation:

$\mathbf{A}\cdot\mathbf{B} = (A_{ij}\,\hat{e}_i\otimes\hat{e}_j)\cdot(B_{kl}\,\hat{e}_k\otimes\hat{e}_l) = A_{ij}B_{kl}\,\delta_{jk}\,(\hat{e}_i\otimes\hat{e}_l) = A_{ij}B_{jl}\,(\hat{e}_i\otimes\hat{e}_l)$    (1.111)

Note that an index is not repeated more than twice, either in symbolic notation or in indicial notation. Also note that the indicial notation is equivalent to the tensorial notation only when dealing with scalars, e.g. $\mathbf{A}:\mathbf{B} = A_{ij}B_{ij} = \gamma$, or $\vec{a}\cdot\vec{b} = a_ib_i$.
1.5.2 Properties of Tensors

1.5.2.1 Tensor Transpose

The transpose of a second-order tensor $\mathbf{A}$ is denoted by $\mathbf{A}^T$. If $A_{ij}$ are the components of $\mathbf{A}$, it follows that the components of the transpose are:

$(\mathbf{A}^T)_{ij} = A_{ji}$    (1.113)
For a dyad, $\mathbf{A} = \vec{u}\otimes\vec{v}$, the transpose is:

$\mathbf{A}^T = (\vec{u}\otimes\vec{v})^T = (u_i\hat{e}_i\otimes v_j\hat{e}_j)^T = u_iv_j\,\hat{e}_j\otimes\hat{e}_i = u_jv_i\,\hat{e}_i\otimes\hat{e}_j = \vec{v}\otimes\vec{u}$

and in general:

$\mathbf{A}^T = (A_{ij}\,\hat{e}_i\otimes\hat{e}_j)^T = A_{ij}\,\hat{e}_j\otimes\hat{e}_i = A_{ji}\,\hat{e}_i\otimes\hat{e}_j$    (1.114)
The transpose satisfies the following properties:

$(\mathbf{A}^T)^T = \mathbf{A} \quad ; \quad (\mathbf{A}\cdot\mathbf{B})^T = \mathbf{B}^T\cdot\mathbf{A}^T \quad ; \quad (\mathbf{B}\cdot\mathbf{A})^T = \mathbf{A}^T\cdot\mathbf{B}^T$    (1.115)

$\mathbf{A}:\mathbf{B}^T = (A_{ij}\,\hat{e}_i\otimes\hat{e}_j):(B_{kl}\,\hat{e}_l\otimes\hat{e}_k) = A_{ij}B_{kl}\,\delta_{il}\delta_{jk} = A_{ij}B_{ji} = \mathbf{A}\cdot\cdot\,\mathbf{B}$    (1.116)

$\mathbf{A}^T:\mathbf{B} = (A_{ij}\,\hat{e}_j\otimes\hat{e}_i):(B_{kl}\,\hat{e}_k\otimes\hat{e}_l) = A_{ij}B_{kl}\,\delta_{jk}\delta_{il} = A_{ij}B_{ji} = \mathbf{A}\cdot\cdot\,\mathbf{B}$
The transpose of the matrix $\mathcal{A}$ is formed by interchanging rows and columns, i.e.:

$\mathcal{A} = \begin{bmatrix}A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\end{bmatrix} \ \xrightarrow{\ \text{transpose}\ }\ \mathcal{A}^T = \begin{bmatrix}A_{11} & A_{21} & A_{31}\\ A_{12} & A_{22} & A_{32}\\ A_{13} & A_{23} & A_{33}\end{bmatrix}$    (1.117)
Problem 1.13: Show that $\mathbf{A}:(\mathbf{B}\cdot\mathbf{C}) = (\mathbf{B}^T\cdot\mathbf{A}):\mathbf{C} = (\mathbf{A}\cdot\mathbf{C}^T):\mathbf{B}$.

Solution: Expressing the term $\mathbf{A}:(\mathbf{B}\cdot\mathbf{C})$ in indicial notation we obtain:

$\mathbf{A}:(\mathbf{B}\cdot\mathbf{C}) = (A_{ij}\,\hat{e}_i\otimes\hat{e}_j):(B_{lk}\,\hat{e}_l\otimes\hat{e}_k\cdot C_{pq}\,\hat{e}_p\otimes\hat{e}_q) = A_{ij}B_{lk}C_{pq}\,\delta_{kp}\,(\hat{e}_i\otimes\hat{e}_j):(\hat{e}_l\otimes\hat{e}_q) = A_{ij}B_{lk}C_{pq}\,\delta_{kp}\delta_{il}\delta_{jq} = A_{ij}B_{ik}C_{kj}$

Note that, when we are dealing with indicial notation, the position of the factors does not matter, i.e.:

$A_{ij}B_{ik}C_{kj} = B_{ik}A_{ij}C_{kj} = A_{ij}C_{kj}B_{ik}$

We can now observe that $B_{ik}A_{ij}$ are the components of the second-order tensor $(\mathbf{B}^T\cdot\mathbf{A})_{kj}$, thus $B_{ik}A_{ij}C_{kj} = (\mathbf{B}^T\cdot\mathbf{A})_{kj}C_{kj} = (\mathbf{B}^T\cdot\mathbf{A}):\mathbf{C}$. Likewise, $A_{ij}C_{kj}$ are the components of $(\mathbf{A}\cdot\mathbf{C}^T)_{ik}$, so $A_{ij}C_{kj}B_{ik} = (\mathbf{A}\cdot\mathbf{C}^T):\mathbf{B}$.
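The chain of identities in Problem 1.13 can be confirmed numerically with random matrices; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

lhs = np.einsum('ij,ij->', A, B @ C)    # A : (B . C)
mid = np.einsum('ij,ij->', B.T @ A, C)  # (B^T . A) : C
rhs = np.einsum('ij,ij->', A @ C.T, B)  # (A . C^T) : B
```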
Problem 1.14: Let $\vec{u}$, $\vec{v}$ be vectors and $\mathbf{A}$ be a second-order tensor. Show that the following relationship holds:

$\vec{u}\cdot\mathbf{A}^T\cdot\vec{v} = \vec{v}\cdot\mathbf{A}\cdot\vec{u}$

Solution:

$\vec{u}\cdot\mathbf{A}^T\cdot\vec{v} = u_i\hat{e}_i\cdot(A_{jl}\,\hat{e}_l\otimes\hat{e}_j)\cdot v_k\hat{e}_k = u_iA_{jl}v_k\,\delta_{il}\delta_{jk} = u_lA_{jl}v_j$
$\vec{v}\cdot\mathbf{A}\cdot\vec{u} = v_k\hat{e}_k\cdot(A_{jl}\,\hat{e}_j\otimes\hat{e}_l)\cdot u_i\hat{e}_i = v_kA_{jl}u_i\,\delta_{kj}\delta_{li} = v_jA_{jl}u_l$

Both results coincide, which proves the identity.
1.5.2.2 Symmetry and Antisymmetry

1.5.2.2.1 Symmetric tensor

A second-order tensor $\mathbf{A}$ is symmetric, i.e. $\mathbf{A} \equiv \mathbf{A}^{sym}$, if the tensor is equal to its transpose:

$\mathbf{A} = \mathbf{A}^T \quad\Leftrightarrow\quad A_{ij} = A_{ji} \qquad \text{if } \mathbf{A} \text{ is symmetric}$    (1.118)

In matrix form:

$\mathcal{A} = \mathcal{A}^T = \begin{bmatrix}A_{11} & A_{12} & A_{13}\\ A_{12} & A_{22} & A_{23}\\ A_{13} & A_{23} & A_{33}\end{bmatrix}$    (1.119)

From the above it is clear that a symmetric second-order tensor has 6 independent components, namely: $A_{11}$, $A_{22}$, $A_{33}$, $A_{12}$, $A_{23}$, $A_{13}$.

According to equation (1.118), a symmetric tensor can be represented by:

$A_{ij} = A_{ji} \ \Rightarrow\ A_{ij} + A_{ij} = A_{ij} + A_{ji} \ \Rightarrow\ 2A_{ij} = A_{ij} + A_{ji} \ \Rightarrow\ A_{ij} = \dfrac{1}{2}(A_{ij} + A_{ji}) \quad\Leftrightarrow\quad \mathbf{A} = \dfrac{1}{2}(\mathbf{A} + \mathbf{A}^T)$    (1.120)
A fourth-order tensor $\mathbf{C}$, whose components are $C_{ijkl}$, may have the following types of symmetry:

Minor symmetry:

$C_{ijkl} = C_{jikl} = C_{ijlk} = C_{jilk}$    (1.121)

Major symmetry:

$C_{ijkl} = C_{klij}$    (1.122)

A fourth-order tensor that does not exhibit any kind of symmetry has 81 independent components. If the tensor $\mathbf{C}$ has only minor symmetry, i.e. symmetry in $ij = ji$ (6 combinations) and symmetry in $kl = lk$ (6 combinations), the tensor features 36 independent components. If, besides minor symmetry, it also presents major symmetry, the tensor features 21 independent components.
1.5.2.2.2 Antisymmetric tensor

A tensor $\mathbf{A}$ is antisymmetric (also called a skew-symmetric tensor or skew tensor), i.e. $\mathbf{A} \equiv \mathbf{A}^{skew}$, if:

$\mathbf{A} = -\mathbf{A}^T \quad\Leftrightarrow\quad A_{ij} = -A_{ji} \qquad \text{if } \mathbf{A} \text{ is antisymmetric}$    (1.123)

In matrix form:

$\mathcal{A} = \begin{bmatrix}0 & A_{12} & A_{13}\\ -A_{12} & 0 & A_{23}\\ -A_{13} & -A_{23} & 0\end{bmatrix}$    (1.124)

Under the conditions expressed in (1.123), an antisymmetric tensor can be represented by:

$A_{ij} + A_{ij} = A_{ij} - A_{ji} \ \Rightarrow\ 2A_{ij} = A_{ij} - A_{ji} \ \Rightarrow\ A_{ij} = \dfrac{1}{2}(A_{ij} - A_{ji}) \quad\Leftrightarrow\quad \mathbf{A} = \dfrac{1}{2}(\mathbf{A} - \mathbf{A}^T)$    (1.125)

Let $\mathbf{W}$ be an antisymmetric tensor. Then:

$W_{ij} = \dfrac{1}{2}(W_{ij} - W_{ji}) = \dfrac{1}{2}(W_{kl}\,\delta_{ik}\delta_{jl} - W_{kl}\,\delta_{jk}\delta_{il}) = \dfrac{1}{2}W_{kl}\,(\delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il})$    (1.126)
Using the relation between the Kronecker delta and the permutation symbol that follows from (1.62), i.e. $\delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il} = -\epsilon_{ijr}\epsilon_{lkr}$, the equation (1.126) is rewritten as:

$W_{ij} = -\dfrac{1}{2}W_{kl}\,\epsilon_{ijr}\epsilon_{lkr}$    (1.127)

Expanding the term $W_{kl}\epsilon_{lkr}$ for the dummy indices ($k$, $l$), we can obtain the following nonzero terms:

$W_{kl}\epsilon_{lkr} = W_{12}\epsilon_{21r} + W_{13}\epsilon_{31r} + W_{21}\epsilon_{12r} + W_{23}\epsilon_{32r} + W_{31}\epsilon_{13r} + W_{32}\epsilon_{23r}$    (1.128)

thus,

$r=1:\ W_{kl}\epsilon_{lk1} = -W_{23} + W_{32} = -2W_{23} = 2w_1$
$r=2:\ W_{kl}\epsilon_{lk2} = W_{13} - W_{31} = 2W_{13} = 2w_2$
$r=3:\ W_{kl}\epsilon_{lk3} = -W_{12} + W_{21} = -2W_{12} = 2w_3$    (1.129)

where we have introduced $w_1 = -W_{23}$, $w_2 = W_{13}$, $w_3 = -W_{12}$, so that:

$W_{ij} = \begin{bmatrix}0 & W_{12} & W_{13}\\ -W_{12} & 0 & W_{23}\\ -W_{13} & -W_{23} & 0\end{bmatrix} = \begin{bmatrix}0 & -w_3 & w_2\\ w_3 & 0 & -w_1\\ -w_2 & w_1 & 0\end{bmatrix}$    (1.130)

Hence, we introduce the axial vector $\vec{w}$ associated with the antisymmetric tensor $\mathbf{W}$ as:

$\vec{w} = w_1\hat{e}_1 + w_2\hat{e}_2 + w_3\hat{e}_3$    (1.131)

The magnitude of the axial vector $\vec{w}$ is given by:

$\omega = \|\vec{w}\| = \sqrt{\vec{w}\cdot\vec{w}} = \sqrt{w_1^2 + w_2^2 + w_3^2} = \sqrt{W_{23}^2 + W_{13}^2 + W_{12}^2}$    (1.132)

Substituting (1.129) into (1.127), and by considering that $\epsilon_{ijr} = \epsilon_{rij}$, we obtain:

$W_{ij} = -w_r\,\epsilon_{rij}$    (1.133)

Multiplying both sides by $\epsilon_{kij}$ gives:

$\epsilon_{kij}W_{ij} = -w_r\,\epsilon_{rij}\epsilon_{kij} = -2w_r\,\delta_{rk} = -2w_k$    (1.134)

where we have applied the relation $\epsilon_{rij}\epsilon_{kij} = 2\delta_{rk}$, which was evaluated in Problem 1.7; thus we can conclude that:

$w_k = -\dfrac{1}{2}\epsilon_{kij}W_{ij}$    (1.135)
Figure 1.15: The components of the antisymmetric tensor $\mathbf{W}$, i.e. $W_{12}$, $W_{13}$, $W_{23}$, and the components of the associated axial vector, $w_1 = -W_{23}$, $w_2 = W_{13}$, $w_3 = -W_{12}$.
Let $\vec{a}$, $\vec{b}$ be arbitrary vectors and $\mathbf{W}$ an antisymmetric tensor. Then it holds that:

$\vec{b}\cdot\mathbf{W}\cdot\vec{a} = \vec{a}\cdot\mathbf{W}^T\cdot\vec{b} = -\vec{a}\cdot\mathbf{W}\cdot\vec{b}$    (1.136)

and, in the particular case $\vec{b} = \vec{a}$:

$\vec{a}\cdot\mathbf{W}\cdot\vec{a} = \mathbf{W}:(\vec{a}\otimes\vec{a}) = 0$    (1.137)

NOTE: Note that $(\vec{a}\otimes\vec{a})$ is a symmetric second-order tensor. Later on we will show that the result of the double contraction between a symmetric tensor and an antisymmetric tensor equals zero.
Consider now the scalar product $\mathbf{W}\cdot\vec{a}$, whose components are:

$(\mathbf{W}\cdot\vec{a})_i = W_{ij}a_j$    (1.138)

Bearing in mind that the normal components are equal to zero for an antisymmetric tensor, i.e. $W_{11} = 0$, $W_{22} = 0$, $W_{33} = 0$, the scalar product (1.138) becomes:

$(\mathbf{W}\cdot\vec{a})_i = \begin{cases}i=1: & W_{12}a_2 + W_{13}a_3\\ i=2: & W_{21}a_1 + W_{23}a_3\\ i=3: & W_{31}a_1 + W_{32}a_2\end{cases}$    (1.139)
The above components are the same as the result of the algebraic operation $\vec{w}\wedge\vec{a}$:

$\vec{w}\wedge\vec{a} = \begin{vmatrix}\hat{e}_1 & \hat{e}_2 & \hat{e}_3\\ w_1 & w_2 & w_3\\ a_1 & a_2 & a_3\end{vmatrix}$    (1.140)

where $w_1 = -W_{23} = W_{32}$, $w_2 = W_{13} = -W_{31}$, $w_3 = -W_{12} = W_{21}$. Then, given an antisymmetric tensor $\mathbf{W}$ and the axial vector $\vec{w}$ associated with $\mathbf{W}$, it holds that:

$\mathbf{W}\cdot\vec{a} = \vec{w}\wedge\vec{a}$    (1.141)

for any vector $\vec{a}$. The property (1.141) could easily have been obtained by taking into account the components of $\mathbf{W}$ given by (1.133), i.e.:

$(\mathbf{W}\cdot\vec{a})_i = W_{ij}a_j = -w_r\,\epsilon_{rij}a_j = \epsilon_{irj}\,w_ra_j = (\vec{w}\wedge\vec{a})_i$    (1.142)
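The relation $\mathbf{W}\cdot\vec{a} = \vec{w}\wedge\vec{a}$, eq. (1.141), is easy to confirm by building $\mathbf{W}$ from an axial vector as in (1.130); a sketch:

```python
import numpy as np

w = np.array([2.0, -1.0, 3.0])                  # axial vector (arbitrary)

# Antisymmetric tensor built from w, eq. (1.130): W_ij = -w_r eps_rij
W = np.array([[0.0, -w[2], w[1]],
              [w[2], 0.0, -w[0]],
              [-w[1], w[0], 0.0]])

a = np.array([1.0, 4.0, -2.0])
lhs = W @ a
rhs = np.cross(w, a)                             # w ^ a
```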
The vector $\vec{w}$ can be represented by its magnitude, $\|\vec{w}\| = \omega$, and by the unit vector codirectional with $\vec{w}$, i.e. $\vec{w} = \omega\,\hat{e}_1^*$. Then, the equation (1.141) can still be expressed as:

$\mathbf{W}\cdot\vec{a} = \vec{w}\wedge\vec{a} = \omega\,\hat{e}_1^*\wedge\vec{a}$    (1.143)

Additionally, we can choose two unit vectors $\hat{e}_2^*$, $\hat{e}_3^*$, which make up an orthonormal basis with the unit vector $\hat{e}_1^*$, (see Figure 1.16), so that:

$\hat{e}_1^* = \hat{e}_2^*\wedge\hat{e}_3^* \quad ; \quad \hat{e}_2^* = \hat{e}_3^*\wedge\hat{e}_1^* \quad ; \quad \hat{e}_3^* = \hat{e}_1^*\wedge\hat{e}_2^*$    (1.144)

Figure 1.16: The orthonormal basis $\hat{e}_1^*$, $\hat{e}_2^*$, $\hat{e}_3^*$, with $\vec{w} = \omega\,\hat{e}_1^*$.
By representing the vector a in this new basis, a = a1* e 1* + a*2 e *2 + a*3 e *3 , the relationship
shown in (1.143) obtains the form below:
r
r
W a = e 1* a = e 1* (a1* e 1* + a *2 e *2 + a *3 e *3 )
= (a1* e 1* e 1* + a *2 e 1* e *2 + a *3 e 1* e *3 ) = (a *2 e *3 a *3 e *2 )
1
424
3
1
424
3
1
424
3
=0
=e *3
r
= (e *3 e *2 e *2 e *3 ) a
= e *2
(1.145)
Thus, an antisymmetric tensor can be represented, in the space defined by the axial vector,
as follows:
W = (e *3 e *2 e *2 e *3 )
(1.146)
Note that by using the antisymmetric tensor representation shown in (1.146), the
projections of the tensor W according to directions e 1* , e *2 and e *3 are respectively:
40
r
W e 1* = 0
W e *2 = e *3
W e *3 = e *2
[
[ (e
] e
e )] e
(1.147)
*
2
e *2 W e *3 = e *2
*
3
*
3
e *2 e *2
*
3
(1.148)
Then, the tensor components of W in the basis formed by the orthonormal basis e 1* , e *2 ,
e *3 , are given by:
Wij*
0
0 0
= 0 0
0 0
(1.149)
In Figure 1.17 we can see these components and the axial vector representation. Note that if we take any basis that is formed just by rotation about the $\hat e_1^*$-axis, the components of $W$ in this new basis will be the same as those provided in (1.149).

Figure 1.17: Antisymmetric tensor components in the space defined by the axial vector, $\vec w = \omega\hat e_1^*$.
1.5.2.2.3 Additive Decomposition into a Symmetric and an Antisymmetric Part

Any arbitrary second-order tensor $A$ can be split additively into a symmetric and an antisymmetric part:

$$A = \underbrace{\frac{1}{2}(A + A^T)}_{A^{sym}} + \underbrace{\frac{1}{2}(A - A^T)}_{A^{skew}} = A^{sym} + A^{skew} \tag{1.150}$$

or, in components:

$$A_{ij}^{sym} = \frac{1}{2}(A_{ij} + A_{ji})\qquad\text{and}\qquad A_{ij}^{skew} = \frac{1}{2}(A_{ij} - A_{ji}) \tag{1.151}$$
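The split (1.150)–(1.151), together with the orthogonality of the two parts under double contraction (shown shortly), can be sketched numerically as follows (an illustrative example, not from the original text):

```python
# Additive split A = A_sym + A_skew and the orthogonality
# A_sym : B_skew = 0 under the double contraction X:Y = X_ij Y_ij.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

A_sym, A_skew = 0.5 * (A + A.T), 0.5 * (A - A.T)
B_sym, B_skew = 0.5 * (B + B.T), 0.5 * (B - B.T)

dc = lambda X, Y: np.tensordot(X, Y, axes=2)   # double contraction X_ij Y_ij

print(np.allclose(A_sym + A_skew, A),
      abs(dc(A_sym, B_skew)) < 1e-12,
      np.isclose(dc(A, B), dc(A_sym, B_sym) + dc(A_skew, B_skew)))
```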
Let $A$ and $B$ be arbitrary second-order tensors. The symmetric part of $A^T\cdot B\cdot A$ satisfies:

$$\begin{aligned}(A^T\cdot B\cdot A)^{sym} &= \frac{1}{2}\left[A^T\cdot B\cdot A + (A^T\cdot B\cdot A)^T\right] = \frac{1}{2}\left[A^T\cdot B\cdot A + A^T\cdot B^T\cdot A\right]\\ &= A^T\cdot\frac{1}{2}(B + B^T)\cdot A = A^T\cdot B^{sym}\cdot A\end{aligned} \tag{1.152}$$
Let $\Sigma$ be a symmetric tensor and $W$ an antisymmetric tensor. Taking into account the characteristics of a symmetric and an antisymmetric tensor, i.e. $\Sigma_{12} = \Sigma_{21}$, $\Sigma_{31} = \Sigma_{13}$, $\Sigma_{32} = \Sigma_{23}$, and $W_{11} = W_{22} = W_{33} = 0$, $W_{21} = -W_{12}$, $W_{31} = -W_{13}$, $W_{32} = -W_{23}$, the double contraction becomes:

$$\Sigma:W = \Sigma_{ij}W_{ij} = 0$$

It also follows that, for any vector $\vec M$ and any second-order tensor $Q$, since $\vec M\cdot Q^{skew}\cdot\vec M = 0$:

$$\vec M\cdot Q\cdot\vec M = \vec M\cdot Q^{sym}\cdot\vec M$$

Likewise, the double contraction of two arbitrary tensors reduces to:

$$A:B = (A^{sym} + A^{skew}):(B^{sym} + B^{skew}) = A^{sym}:B^{sym} + \underbrace{A^{sym}:B^{skew}}_{=0} + \underbrace{A^{skew}:B^{sym}}_{=0} + A^{skew}:B^{skew}$$
Problem 1.17: Let $T$ be an arbitrary second-order tensor, and $\vec n$ be a vector. Check if the relationship $\vec n\cdot T = T\cdot\vec n$ is valid.
Solution:

$$\vec n\cdot T = n_i\hat e_i\cdot T_{kl}(\hat e_k\otimes\hat e_l) = n_iT_{kl}\delta_{ik}\hat e_l = n_kT_{kl}\hat e_l$$

and

$$T\cdot\vec n = T_{lk}(\hat e_l\otimes\hat e_k)\cdot n_i\hat e_i = n_iT_{lk}\delta_{ki}\hat e_l = n_kT_{lk}\hat e_l$$

Since $n_kT_{kl}\hat e_l\neq n_kT_{lk}\hat e_l$ in general, we conclude that $\vec n\cdot T\neq T\cdot\vec n$. The relationship $\vec n\cdot T = T\cdot\vec n$ holds only if $T$ is symmetric, i.e. $T_{kl} = T_{lk}$.
Problem 1.18: Obtain the axial vector $\vec w$ associated with the antisymmetric tensor $(\vec x\otimes\vec a)^{skew}$.
Solution: Let $\vec z$ be an arbitrary vector, it then holds that:

$$(\vec x\otimes\vec a)^{skew}\cdot\vec z = \vec w\wedge\vec z$$

where $\vec w$ is the axial vector associated with $(\vec x\otimes\vec a)^{skew}$. Using the definition of an antisymmetric tensor:

$$(\vec x\otimes\vec a)^{skew} = \frac{1}{2}\left[(\vec x\otimes\vec a) - (\vec x\otimes\vec a)^T\right] = \frac{1}{2}\left[\vec x\otimes\vec a - \vec a\otimes\vec x\right]$$

and by replacing it into $(\vec x\otimes\vec a)^{skew}\cdot\vec z = \vec w\wedge\vec z$, we obtain:

$$\frac{1}{2}\left[\vec x\otimes\vec a - \vec a\otimes\vec x\right]\cdot\vec z = \vec w\wedge\vec z \quad\Rightarrow\quad \left[\vec x\otimes\vec a - \vec a\otimes\vec x\right]\cdot\vec z = 2\vec w\wedge\vec z$$

By using the equation $[\vec x\otimes\vec a - \vec a\otimes\vec x]\cdot\vec z = \vec z\wedge(\vec x\wedge\vec a)$, (see Eq. (1.105)), the above equation becomes:

$$\vec z\wedge(\vec x\wedge\vec a) = 2\vec w\wedge\vec z \quad\Rightarrow\quad (\vec a\wedge\vec x)\wedge\vec z = 2\vec w\wedge\vec z$$

which holds for any $\vec z$, hence the axial vector is:

$$\vec w = \frac{1}{2}(\vec a\wedge\vec x)$$
1.5.2.3 Cofactor Tensor. Adjugate of a Tensor

Let $A$ be a second-order tensor and $\vec a$, $\vec b$ arbitrary vectors. There is then a unique tensor $cof(A)$, known as the cofactor of $A$, such that:

$$cof(A)\cdot(\vec a\wedge\vec b) = (A\cdot\vec a)\wedge(A\cdot\vec b) \tag{1.153}$$

The adjugate of $A$ is defined as the transpose of the cofactor:

$$adj(A) = [cof(A)]^T \tag{1.154}$$

and it holds that:

$$[adj(A)]^T = adj(A^T) \tag{1.155}$$

The components of $cof(A)$ are obtained by expressing the equation (1.153) in terms of its components, i.e.:

$$[cof(A)]_{it}\,\epsilon_{tpr}\,a_pb_r = \epsilon_{ijk}A_{jp}A_{kr}\,a_pb_r \tag{1.156}$$

Since $\vec a$ and $\vec b$ are arbitrary, $[cof(A)]_{it}\epsilon_{tpr} = \epsilon_{ijk}A_{jp}A_{kr}$. By multiplying both sides of the equation by $\epsilon_{qpr}$ and by also considering that $\epsilon_{tpr}\epsilon_{qpr} = 2\delta_{tq}$, we can conclude that:

$$[cof(A)]_{iq} = \frac{1}{2}\epsilon_{ijk}\epsilon_{qpr}A_{jp}A_{kr} \tag{1.157}$$
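As a numerical sanity check of (1.157), the component formula can be compared with the classical matrix identity $cof(A) = det(A)\,[A^{-1}]^T$, valid for invertible $A$. This is an illustrative sketch; the helper names are ours:

```python
# Compare [cof(A)]_iq = 1/2 eps_ijk eps_qpr A_jp A_kr with det(A)*inv(A).T.
import numpy as np
import itertools

eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2   # permutation symbol

A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

cof = 0.5 * np.einsum('ijk,qpr,jp,kr->iq', eps, eps, A, A)
print(np.allclose(cof, np.linalg.det(A) * np.linalg.inv(A).T))   # True
```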
1.5.2.4 Tensor Trace

The trace of the dyad $(\hat e_i\otimes\hat e_j)$ is defined as:

$$Tr(\hat e_i\otimes\hat e_j) = \hat e_i\cdot\hat e_j = \delta_{ij} \tag{1.158}$$

Then, the trace of the dyad $(\vec a\otimes\vec b)$ is:

$$Tr(\vec a\otimes\vec b) = a_ib_j\,Tr(\hat e_i\otimes\hat e_j) = a_ib_j\delta_{ij} = a_ib_i = \vec a\cdot\vec b \tag{1.159}$$

and the trace of a second-order tensor $A$ is the sum of its diagonal components:

$$Tr(A) = A_{ij}\,Tr(\hat e_i\otimes\hat e_j) = A_{ij}\delta_{ij} = A_{ii} = A_{11} + A_{22} + A_{33} \tag{1.160}$$

NOTE: As we will show later, the tensor trace is an invariant, i.e. it is independent of the coordinate system.

Let $A$, $B$ be arbitrary tensors, then:

$$Tr(A^T) = Tr(A) \tag{1.161}$$

$$Tr(A + B) = Tr(A) + Tr(B) \tag{1.162}$$

which follows from:

$$\left[(A_{11}+B_{11}) + (A_{22}+B_{22}) + (A_{33}+B_{33})\right] = (A_{11}+A_{22}+A_{33}) + (B_{11}+B_{22}+B_{33}) \tag{1.163}$$

The trace of the product $A\cdot B$ is:

$$Tr(A\cdot B) = Tr\left[(A_{ij}\hat e_i\otimes\hat e_j)\cdot(B_{lm}\hat e_l\otimes\hat e_m)\right] = A_{ij}B_{lm}\delta_{jl}\underbrace{Tr(\hat e_i\otimes\hat e_m)}_{\delta_{im}} = A_{il}B_{li} = Tr(B\cdot A) \tag{1.164}$$
and the double scalar product ($:$) can be expressed in trace terms as:

$$A:B = A_{ij}B_{ij} = \underbrace{A_{kj}B_{lj}}_{(A\cdot B^T)_{kl}}\delta_{kl} = (A\cdot B^T)_{kk} = Tr(A\cdot B^T) \tag{1.165}$$

and likewise:

$$A:B = \underbrace{A_{ik}B_{il}}_{(A^T\cdot B)_{kl}}\delta_{kl} = (A^T\cdot B)_{kk} = Tr(A^T\cdot B) \tag{1.166}$$

In particular, contracting with the unit tensor recovers the trace:

$$A:\mathbf{1} = Tr(A) = A_{ii} \tag{1.167}$$
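The trace identities (1.161)–(1.166) can be sketched numerically (an illustrative example, not part of the original text):

```python
# Check Tr(A^T)=Tr(A), Tr(A.B)=Tr(B.A), and A:B = Tr(A.B^T) = Tr(A^T.B).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

double_dot = np.tensordot(A, B, axes=2)        # A_ij B_ij

print(np.isclose(np.trace(A.T), np.trace(A)),
      np.isclose(np.trace(A @ B), np.trace(B @ A)),
      np.isclose(double_dot, np.trace(A @ B.T)),
      np.isclose(double_dot, np.trace(A.T @ B)))
```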
Problem: Show that $(T^m)^T = (T^T)^m$ and that $Tr\left[(T^T)^m\right] = Tr(T^m)$.
Solution:

$$(T^m)^T = (T\cdot T\cdots T)^T = T^T\cdot T^T\cdots T^T = (T^T)^m$$

For the second demonstration we can use the trace property $Tr(T^T) = Tr(T)$, thus:

$$Tr\left[(T^T)^m\right] = Tr\left[(T^m)^T\right] = Tr(T^m)$$
1.5.2.5 Particular Tensors
1.5.2.5.1 Unit Tensors

The second-order unit tensor, also called the identity tensor, is defined as:

$$\mathbf{1} = \delta_{ij}\,\hat e_i\otimes\hat e_j = \hat e_i\otimes\hat e_i \tag{1.168}$$

where $\delta_{ij}$ is the Kronecker delta symbol defined in (1.48). The fourth-order unit tensors are defined as:

$$I = \mathbf{1}\,\bar\otimes\,\mathbf{1} = \delta_{ik}\delta_{jl}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l = I_{ijkl}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l \tag{1.169}$$

$$\bar I = \mathbf{1}\,\underline\otimes\,\mathbf{1} = \delta_{il}\delta_{jk}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l = \bar I_{ijkl}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l \tag{1.170}$$

$$\mathbf{1}\otimes\mathbf{1} = \delta_{ij}\delta_{kl}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l \tag{1.171}$$

Taking into account the fourth-order unit tensors defined above, it holds that:

$$I:A = \delta_{ik}\delta_{jl}(\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l):(A_{pq}\,\hat e_p\otimes\hat e_q) = \delta_{ik}\delta_{jl}A_{pq}\delta_{kp}\delta_{lq}\,\hat e_i\otimes\hat e_j = A_{ij}\,\hat e_i\otimes\hat e_j = A \tag{1.172}$$

and

$$\bar I:A = \delta_{il}\delta_{jk}A_{pq}\delta_{kp}\delta_{lq}\,\hat e_i\otimes\hat e_j = A_{ji}\,\hat e_i\otimes\hat e_j = A^T \tag{1.173}$$

and

$$(\mathbf{1}\otimes\mathbf{1}):A = \delta_{ij}\delta_{kl}A_{pq}\delta_{kp}\delta_{lq}\,\hat e_i\otimes\hat e_j = A_{kk}\delta_{ij}\,\hat e_i\otimes\hat e_j = Tr(A)\,\mathbf{1} \tag{1.174}$$

The symmetric fourth-order unit tensor is defined by:

$$I^{sym} = \frac{1}{2}\left(\mathbf{1}\,\bar\otimes\,\mathbf{1} + \mathbf{1}\,\underline\otimes\,\mathbf{1}\right) \quad\xrightarrow{\text{in components}}\quad I^{sym}_{ijkl} = \frac{1}{2}\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right) \tag{1.175}$$

The property of the tensor products $\bar\otimes$, $\underline\otimes$ is presented below. Consider the second-order unit tensor $\mathbf{1} = \delta_{ij}\hat e_i\otimes\hat e_j$. Then:

$$\mathbf{1}\,\bar\otimes\,\mathbf{1} = \delta_{ij}\delta_{kl}\,\hat e_i\otimes\hat e_k\otimes\hat e_j\otimes\hat e_l \tag{1.176}$$

or, relabelling the basis,

$$I = \mathbf{1}\,\bar\otimes\,\mathbf{1} = \delta_{ik}\delta_{jl}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l \tag{1.177}$$

and

$$\mathbf{1}\,\underline\otimes\,\mathbf{1} = \delta_{ij}\delta_{kl}\,\hat e_i\otimes\hat e_l\otimes\hat e_k\otimes\hat e_j \quad\text{or}\quad \bar I = \mathbf{1}\,\underline\otimes\,\mathbf{1} = \delta_{il}\delta_{jk}\,\hat e_i\otimes\hat e_j\otimes\hat e_k\otimes\hat e_l \tag{1.178}$$

The antisymmetric fourth-order unit tensor is defined by:

$$I^{skew} = \frac{1}{2}\left(\mathbf{1}\,\bar\otimes\,\mathbf{1} - \mathbf{1}\,\underline\otimes\,\mathbf{1}\right) \quad\xrightarrow{\text{in components}}\quad I^{skew}_{ijkl} = \frac{1}{2}\left(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}\right) \tag{1.179–1.180}$$
With a second-order tensor $A$ and a vector $\vec b$, the following relationships are valid:

$$\vec b\cdot\mathbf{1} = \vec b\ ;\qquad I:A = A\ ;\qquad I^{sym}:A = A^{sym}$$
$$A:\mathbf{1} = Tr(A) = A_{ii}$$
$$A^2:\mathbf{1} = Tr(A^2) = Tr(A\cdot A) = A_{il}A_{li}$$
$$A^3:\mathbf{1} = Tr(A^3) = Tr(A\cdot A\cdot A) = A_{ij}A_{jk}A_{ki} \tag{1.181}$$
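The action of the three fourth-order unit tensors (1.172)–(1.174) can be sketched numerically with `einsum` (an illustrative example; the names `I4`, `Ibar`, `oxo` are ours):

```python
# Fourth-order unit tensors by components: I = d_ik d_jl, Ibar = d_il d_jk,
# 1x1 = d_ij d_kl, contracted against a second-order tensor A.
import numpy as np

d = np.eye(3)
I4   = np.einsum('ik,jl->ijkl', d, d)     # I    : A = A
Ibar = np.einsum('il,jk->ijkl', d, d)     # Ibar : A = A^T
oxo  = np.einsum('ij,kl->ijkl', d, d)     # (1x1): A = Tr(A) 1

A = np.arange(9, dtype=float).reshape(3, 3)
contract = lambda T4, M: np.einsum('ijkl,kl->ij', T4, M)

print(np.allclose(contract(I4, A), A),
      np.allclose(contract(Ibar, A), A.T),
      np.allclose(contract(oxo, A), np.trace(A) * np.eye(3)))
```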
1.5.2.5.2 Levi-Civita Pseudo-Tensor

The Levi-Civita pseudo-tensor, also known as the permutation tensor, is a third-order pseudo-tensor denoted by:

$$\boldsymbol\epsilon = \epsilon_{ijk}\,\hat e_i\otimes\hat e_j\otimes\hat e_k \tag{1.182}$$

It is not a true tensor in the strict meaning of the word, and its components $\epsilon_{ijk}$ were defined in (1.55), the permutation symbol.
1.5.2.6 Determinant of a Tensor

The determinant of a tensor is the determinant of its component matrix:

$$det(A) = |A| = \begin{vmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{vmatrix} \tag{1.183}$$

Expanding along the first column:

$$\begin{aligned} det(A) &= A_{11}(A_{22}A_{33}-A_{23}A_{32}) - A_{21}(A_{12}A_{33}-A_{13}A_{32}) + A_{31}(A_{12}A_{23}-A_{13}A_{22})\\ &= A_{11}(\epsilon_{1jk}A_{j2}A_{k3}) + A_{21}(\epsilon_{2jk}A_{j2}A_{k3}) + A_{31}(\epsilon_{3jk}A_{j2}A_{k3})\\ &= \epsilon_{ijk}A_{i1}A_{j2}A_{k3}\end{aligned} \tag{1.184}$$

and, analogously, $det(A) = \epsilon_{ijk}A_{1i}A_{2j}A_{3k}$ (1.185). The following properties hold:

$$det(A^T) = det(A) \tag{1.186}$$
$$det(\alpha A) = \alpha^3\,det(A)\quad\text{where }\alpha\text{ is a scalar} \tag{1.187}$$

If all elements of a row or column equal zero, the determinant is also zero.
If you multiply all the elements of a row or column by a constant $c$ (scalar), the determinant becomes $c\,|A|$.
If two or more rows (or columns) are linearly dependent, the determinant is zero.

The product of two permutation symbols can be expressed as the determinant:

$$\epsilon_{rjk}\epsilon_{tpq} = \begin{vmatrix}\delta_{rt}&\delta_{rp}&\delta_{rq}\\\delta_{jt}&\delta_{jp}&\delta_{jq}\\\delta_{kt}&\delta_{kp}&\delta_{kq}\end{vmatrix} \tag{1.188}$$

$$= \delta_{rt}\delta_{jp}\delta_{kq} + \delta_{rp}\delta_{jq}\delta_{kt} + \delta_{rq}\delta_{jt}\delta_{kp} - \delta_{rq}\delta_{jp}\delta_{kt} - \delta_{jq}\delta_{kp}\delta_{rt} - \delta_{kq}\delta_{jt}\delta_{rp} \tag{1.189}$$

Starting with the definition $|A|\,\epsilon_{tpq} = \epsilon_{rjk}A_{rt}A_{jp}A_{kq}$, (see Problem 1.21), and by multiplying both sides of the equation by $\epsilon_{tpq}$, we obtain:

$$|A|\underbrace{\epsilon_{tpq}\epsilon_{tpq}}_{=6} = \epsilon_{rjk}\epsilon_{tpq}A_{rt}A_{jp}A_{kq} \tag{1.190}$$

thus:

$$det(A) = \frac{1}{6}\epsilon_{rjk}\epsilon_{tpq}A_{rt}A_{jp}A_{kq} \tag{1.191}$$
Problem 1.23: Show that $det(\beta\mathbf{1} + \alpha\,\vec a\otimes\vec b) = \beta^3 + \beta^2\alpha\,(\vec a\cdot\vec b)$.
Solution: Using (1.184) with the components $\beta\delta_{ij} + \alpha a_ib_j$:

$$\begin{aligned} det(\beta\mathbf{1}+\alpha\,\vec a\otimes\vec b) &= \epsilon_{ijk}(\beta\delta_{i1}+\alpha a_ib_1)(\beta\delta_{j2}+\alpha a_jb_2)(\beta\delta_{k3}+\alpha a_kb_3)\\ &= \underbrace{\epsilon_{ijk}\delta_{i1}\delta_{j2}\delta_{k3}}_{=1}\beta^3 + \beta^2\alpha\left(\epsilon_{i23}a_ib_1 + \epsilon_{1j3}a_jb_2 + \epsilon_{12k}a_kb_3\right)\\ &\quad + \beta\alpha^2\left(\epsilon_{ijk}a_ia_jb_1b_2\delta_{k3} + \epsilon_{ijk}a_ia_kb_1b_3\delta_{j2} + \epsilon_{ijk}a_ja_kb_2b_3\delta_{i1}\right) + \alpha^3\epsilon_{ijk}a_ia_ja_kb_1b_2b_3\end{aligned}$$

The terms in $\beta^2\alpha$ give:

$$\beta^2\alpha\left(\epsilon_{i23}a_ib_1 + \epsilon_{1j3}a_jb_2 + \epsilon_{12k}a_kb_3\right) = \beta^2\alpha(a_1b_1 + a_2b_2 + a_3b_3) = \beta^2\alpha\,(\vec a\cdot\vec b)$$

while the terms in $\alpha^2$ and $\alpha^3$ vanish, e.g.:

$$\epsilon_{ijk}a_ia_kb_1b_3\delta_{j2} = \epsilon_{i2k}a_ia_kb_1b_3 = a_1a_3b_1b_3 - a_3a_1b_1b_3 = 0$$
$$\epsilon_{ijk}a_ia_jb_1b_2\delta_{k3} = \epsilon_{ij3}a_ia_jb_1b_2 = a_1a_2b_1b_2 - a_2a_1b_1b_2 = 0$$
$$\epsilon_{ijk}a_ia_ja_kb_1b_2b_3 = 0$$

Notice that there was no need to expand these terms, since the contraction of the permutation symbol with a product that is symmetric in two of its indices is zero. In particular, for $\beta = 1$:

$$det(\mathbf{1}+\alpha\,\vec a\otimes\vec b) = 1 + \alpha\,(\vec a\cdot\vec b) \tag{1.192}$$

Then, it is simple to prove that $det(\alpha\,\vec a\otimes\vec b) = 0$, since:

$$det(\alpha\,\vec a\otimes\vec b) = \alpha^3\,\epsilon_{ijk}a_ia_ja_kb_1b_2b_3 = \alpha^3\,b_1b_2b_3\left[\vec a\cdot(\vec a\wedge\vec a)\right] = 0$$

It can also be shown that:

$$det\left[\mathbf{1} + \alpha(\vec a\otimes\vec b) + \beta(\vec b\otimes\vec a)\right] = 1 + (\alpha+\beta)(\vec a\cdot\vec b) + \alpha\beta\left[(\vec a\cdot\vec b)^2 - (\vec a\cdot\vec a)(\vec b\cdot\vec b)\right] \tag{1.193}$$

where $\alpha$, $\beta$ are scalars. If $\beta = 0$ we regain the equation $det(\mathbf{1}+\alpha\,\vec a\otimes\vec b) = 1+\alpha\,\vec a\cdot\vec b$, (see Problem 1.23). If $\alpha = \beta$ we obtain:
$$\begin{aligned} det\left[\mathbf{1} + \alpha(\vec a\otimes\vec b + \vec b\otimes\vec a)\right] &= 1 + 2\alpha(\vec a\cdot\vec b) + \alpha^2(\vec a\cdot\vec b)^2 - \alpha^2(\vec a\cdot\vec a)(\vec b\cdot\vec b)\\ &= 1 + 2\alpha(\vec a\cdot\vec b) + \alpha^2\left[(\vec a\cdot\vec b)^2 - (\vec a\cdot\vec a)(\vec b\cdot\vec b)\right]\end{aligned} \tag{1.194}$$
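Both the component formula (1.191) and the rank-one update result of Problem 1.23 can be sketched numerically (illustrative example; variable names are ours):

```python
# Check det(A) = 1/6 eps_rjk eps_tpq A_rt A_jp A_kq, and
# det(beta*1 + alpha*u(x)v) = beta^3 + beta^2*alpha*(u.v).
import numpy as np
import itertools

eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

A = np.array([[1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, 1.0]])
det_eps = np.einsum('rjk,tpq,rt,jp,kq', eps, eps, A, A, A) / 6.0

alpha, beta = 0.7, 1.3
u = np.array([1.0, -2.0, 0.5])
v = np.array([0.3, 1.0, 2.0])
lhs = np.linalg.det(beta * np.eye(3) + alpha * np.outer(u, v))

print(np.isclose(det_eps, np.linalg.det(A)),
      np.isclose(lhs, beta**3 + beta**2 * alpha * (u @ v)))
```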
Next we show that, for a second-order tensor $A$ and arbitrary vectors $\vec a$, $\vec b$, $\vec c$:

$$(A\cdot\vec a)\cdot\left[(A\cdot\vec b)\wedge(A\cdot\vec c)\right] = det(A)\;\vec a\cdot(\vec b\wedge\vec c) \tag{1.197}$$

To achieve this goal we start with the definition of the scalar triple product given in (1.69), i.e. $\vec a\cdot(\vec b\wedge\vec c) = \epsilon_{ijk}a_ib_jc_k$, and by multiplying both sides of this equation by the determinant of $A$ we obtain:

$$\left[\vec a\cdot(\vec b\wedge\vec c)\right]|A| = \epsilon_{ijk}a_ib_jc_k\,|A| \tag{1.198}$$

Using $\epsilon_{ijk}|A| = \epsilon_{pqr}A_{pi}A_{qj}A_{rk}$:

$$\begin{aligned}\left[\vec a\cdot(\vec b\wedge\vec c)\right]|A| &= \epsilon_{pqr}A_{pi}A_{qj}A_{rk}\,a_ib_jc_k = \epsilon_{pqr}(A_{pi}a_i)(A_{qj}b_j)(A_{rk}c_k)\\ &= (A\cdot\vec a)\cdot\left[(A\cdot\vec b)\wedge(A\cdot\vec c)\right] = (A\cdot\vec b)\cdot\left[(A\cdot\vec c)\wedge(A\cdot\vec a)\right]\end{aligned} \tag{1.199}$$
1.5.2.7 Inverse of a Tensor

A tensor $A$ is invertible if there is a unique tensor $A^{-1}$, called the inverse of $A$, such that:

$$A\cdot A^{-1} = A^{-1}\cdot A = \mathbf{1} \tag{1.200}$$

Or in indicial notation:

$$\text{if }|A|\neq 0\ \ \exists\,A^{-1}\quad\Rightarrow\quad A_{ik}A^{-1}_{kj} = A^{-1}_{ik}A_{kj} = \delta_{ij} \tag{1.201}$$

To obtain the inverse of a tensor we start from the definition of the adjugate tensor given in (1.153)–(1.155), i.e. $[adj(A)]^T\cdot(\vec a\wedge\vec b) = (A\cdot\vec a)\wedge(A\cdot\vec b)$. Then by applying the dot product with the vector $\vec d = A\cdot\underbrace{A^{-1}\cdot\vec d}_{=\vec c}$ we obtain:

$$\left\{[adj(A)]^T\cdot(\vec a\wedge\vec b)\right\}\cdot\vec d = \left[(A\cdot\vec a)\wedge(A\cdot\vec b)\right]\cdot A\cdot A^{-1}\cdot\vec d \tag{1.202}$$

In equation (1.199) we demonstrated that $\left[(A\cdot\vec a)\wedge(A\cdot\vec b)\right]\cdot(A\cdot\vec c) = |A|\,(\vec a\wedge\vec b)\cdot\vec c$, thus:

$$\left\{[adj(A)]^T\cdot(\vec a\wedge\vec b)\right\}\cdot\vec d = |A|\,(\vec a\wedge\vec b)\cdot(A^{-1}\cdot\vec d) \tag{1.203}$$

Since $\vec a$, $\vec b$, $\vec d$ are arbitrary, it follows that:

$$adj(A) = |A|\,A^{-1} \tag{1.204}$$

Hence:

$$A^{-1} = \frac{1}{|A|}\,adj(A) = \frac{1}{|A|}\,[cof(A)]^T \tag{1.205}$$

The following properties hold:

$$\left(A^{-1}\right)^{-1} = A \tag{1.206}$$

$$(\alpha A)^{-1} = \alpha^{-1}A^{-1}\ ;\qquad det(A^{-1}) = [det(A)]^{-1} \tag{1.207}$$

Next, we prove that the relation $adj(A\cdot B) = adj(B)\cdot adj(A)$ holds. To do this, we take the definition of the inverse of a tensor given in (1.205) as a starting point:

$$B^{-1}\cdot A^{-1} = \frac{adj(B)}{|B|}\cdot\frac{adj(A)}{|A|} \ \Rightarrow\ |A||B|\,(A\cdot B)^{-1} = adj(B)\cdot adj(A) \ \Rightarrow\ adj(A\cdot B) = adj(B)\cdot adj(A) \tag{1.208}$$

where we used $(A\cdot B)^{-1} = B^{-1}\cdot A^{-1}$ and $|A\cdot B| = |A||B|$.
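The relations (1.204)–(1.208) can be sketched numerically; here we simply compute the adjugate through (1.204), $adj(A) = |A|\,A^{-1}$ (an illustrative example, not the book's notation):

```python
# Check A^-1 = adj(A)/det(A) and adj(A.B) = adj(B).adj(A).
import numpy as np

def adj(A):
    return np.linalg.det(A) * np.linalg.inv(A)   # adj(A) = |A| A^-1, eq. (1.204)

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # shifted to keep det != 0
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)

print(np.allclose(np.linalg.inv(A), adj(A) / np.linalg.det(A)),
      np.allclose(adj(A @ B), adj(B) @ adj(A)))
```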
In matrix form, given:

$$A = \begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix} \tag{1.209}$$

We define the matrix $M$, where the component $M_{ij}$ is the determinant of the matrix resulting from removing the $i$-th row and the $j$-th column, i.e.:

$$M = \begin{bmatrix} \begin{vmatrix}A_{22}&A_{23}\\A_{32}&A_{33}\end{vmatrix} & \begin{vmatrix}A_{21}&A_{23}\\A_{31}&A_{33}\end{vmatrix} & \begin{vmatrix}A_{21}&A_{22}\\A_{31}&A_{32}\end{vmatrix}\\[2mm] \begin{vmatrix}A_{12}&A_{13}\\A_{32}&A_{33}\end{vmatrix} & \begin{vmatrix}A_{11}&A_{13}\\A_{31}&A_{33}\end{vmatrix} & \begin{vmatrix}A_{11}&A_{12}\\A_{31}&A_{32}\end{vmatrix}\\[2mm] \begin{vmatrix}A_{12}&A_{13}\\A_{22}&A_{23}\end{vmatrix} & \begin{vmatrix}A_{11}&A_{13}\\A_{21}&A_{23}\end{vmatrix} & \begin{vmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{vmatrix} \end{bmatrix} \tag{1.210}$$

The cofactor matrix attaches the alternating signs:

$$[cof(A)]_{ij} = (-1)^{i+j}M_{ij} \tag{1.211}$$

so that the inverse is given by:

$$A^{-1} = \frac{adj(A)}{|A|} \tag{1.212}$$

with

$$adj(A) = [cof(A)]^T \tag{1.213}$$
Now let $A$ be a singular tensor, i.e. $det(A) = 0$, and let $\vec f$, $\vec g$, $\vec h$ be linearly independent vectors. From (1.199):

$$\left[\vec f\cdot(\vec g\wedge\vec h)\right]|A| = (A\cdot\vec f)\cdot\left[(A\cdot\vec g)\wedge(A\cdot\vec h)\right]$$

Since $|A| = 0$:

$$(A\cdot\vec f)\cdot\left[(A\cdot\vec g)\wedge(A\cdot\vec h)\right] = 0$$

Thus, we can conclude that the vectors $(A\cdot\vec f)$, $(A\cdot\vec g)$, $(A\cdot\vec h)$ are linearly dependent, so there are scalars $\alpha$, $\beta$, $\gamma$, not all zero, such that:

$$\alpha(A\cdot\vec f) + \beta(A\cdot\vec g) + \gamma(A\cdot\vec h) = \vec 0 \ \Rightarrow\ A\cdot(\alpha\vec f + \beta\vec g + \gamma\vec h) = \vec 0 \ \Rightarrow\ A\cdot\vec n = \vec 0$$

where $\vec n = \alpha\vec f + \beta\vec g + \gamma\vec h\neq\vec 0$. Conversely, if $A\cdot\vec n = \vec 0$ for some $\vec n\neq\vec 0$, choosing $\vec m$, $\vec k$ such that $(\vec n,\vec m,\vec k)$ are linearly independent gives:

$$\underbrace{\vec k\cdot(\vec m\wedge\vec n)}_{\neq 0}\,|A| = (A\cdot\vec k)\cdot\left[(A\cdot\vec m)\wedge(A\cdot\vec n)\right] = 0 \quad\Rightarrow\quad |A| = 0$$
1.5.2.8 Orthogonal Tensors

A second-order tensor $Q$ is said to be orthogonal when its transpose coincides with its inverse, i.e.:

$$Q^T = Q^{-1} \quad\Rightarrow\quad Q\cdot Q^T = Q^T\cdot Q = \mathbf{1} \tag{1.214}$$

In indicial notation:

$$Q_{ik}Q_{jk} = Q_{ki}Q_{kj} = \delta_{ij} \tag{1.215}$$

Consider the orthogonal transformations:

$$\tilde{\vec a} = Q\cdot\vec a\ ;\qquad \tilde{\vec b} = Q\cdot\vec b \tag{1.216–1.217}$$

The dot product between these new vectors $(\tilde{\vec a})$ and $(\tilde{\vec b})$ is given by:

$$\tilde{\vec a}\cdot\tilde{\vec b} = (Q\cdot\vec a)\cdot(Q\cdot\vec b) = \vec a\cdot\underbrace{Q^T\cdot Q}_{=\mathbf 1}\cdot\vec b = \vec a\cdot\vec b \tag{1.218}$$

or, in indicial notation:

$$\tilde a_i\tilde b_i = (Q_{ik}a_k)(Q_{ij}b_j) = a_k\underbrace{Q_{ik}Q_{ij}}_{=\delta_{kj}}b_j = a_kb_k \tag{1.219}$$

So, it is also true when $\tilde{\vec a} = \tilde{\vec b}$, thus $\tilde{\vec a}\cdot\tilde{\vec a} = \|\tilde{\vec a}\|^2 = \vec a\cdot\vec a = \|\vec a\|^2$. Therefore, we can conclude that in an orthogonal transformation the vector magnitudes and the angle between vectors are maintained, (see Figure 1.18).

Figure 1.18: Orthogonal transformation: $\|\tilde{\vec a}\| = \|\vec a\|$, $\|\tilde{\vec b}\| = \|\vec b\|$ and $\tilde{\vec a}\cdot\tilde{\vec b} = \vec a\cdot\vec b$.
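The invariance (1.218)–(1.219) can be sketched numerically; here an orthogonal $Q$ is obtained from a QR factorization (an illustrative example, not the book's construction):

```python
# An orthogonal transformation preserves dot products and magnitudes.
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # Q^T Q = 1

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -1.0, 3.0])
at, bt = Q @ a, Q @ b

print(np.allclose(Q.T @ Q, np.eye(3)),
      np.isclose(at @ bt, a @ b),
      np.isclose(np.linalg.norm(at), np.linalg.norm(a)))
```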
A tensor $T$ is said to be positive definite when:

$$\underbrace{\vec x\cdot T\cdot\vec x > 0}_{\text{Tensorial notation}}\ ;\qquad \underbrace{x_iT_{ij}x_j > 0}_{\text{Indicial notation}}\ ;\qquad \underbrace{x^T\,T\,x > 0}_{\text{Matrix notation}} \tag{1.220}$$

for all vectors $\vec x\neq\vec 0$. Conversely, a tensor is said to be negative definite when:

$$\vec x\cdot T\cdot\vec x < 0\ ;\qquad x_iT_{ij}x_j < 0\ ;\qquad x^T\,T\,x < 0$$

for all vectors $\vec x\neq\vec 0$.

The derivative of the quadratic form $x_iT_{ij}x_j$ with respect to $\vec x$ is:

$$\frac{\partial(x_iT_{ij}x_j)}{\partial x_k} = T_{ij}\frac{\partial x_i}{\partial x_k}x_j + T_{ij}x_i\frac{\partial x_j}{\partial x_k} = T_{ij}\delta_{ik}x_j + T_{ij}x_i\delta_{jk} = T_{kj}x_j + T_{ik}x_i = (T_{ki}+T_{ik})x_i \tag{1.221}$$

thus:

$$\frac{\partial(\vec x\cdot T\cdot\vec x)}{\partial\vec x} = 2\,T^{sym}\cdot\vec x \tag{1.222}$$

$$\frac{\partial^2(\vec x\cdot T\cdot\vec x)}{\partial\vec x\otimes\partial\vec x} = 2\,T^{sym} \tag{1.223}$$

Remember that it is also true that $\vec x\cdot T\cdot\vec x = \vec x\cdot T^{sym}\cdot\vec x$, therefore if the symmetric part of a tensor is positive definite, the tensor is too.
NOTE: As we will demonstrate later, the eigenvalues must be positive for $T$ to be positive definite. The proof is in the subsection Spectral Representation of Tensors.
Problem 1.25: Let $F$ be an arbitrary second-order tensor. Show that the resulting tensors $C = F^T\cdot F$ and $b = F\cdot F^T$ are symmetric and semi-positive definite tensors. Also check under what condition $C$ and $b$ are positive definite tensors.
Solution: Symmetry:

$$C^T = (F^T\cdot F)^T = F^T\cdot(F^T)^T = F^T\cdot F = C$$
$$b^T = (F\cdot F^T)^T = (F^T)^T\cdot F^T = F\cdot F^T = b$$

Semi-positive definiteness, in tensorial and indicial notation:

$$\vec x\cdot C\cdot\vec x = \vec x\cdot F^T\cdot F\cdot\vec x = (F\cdot\vec x)\cdot(F\cdot\vec x) = \|F\cdot\vec x\|^2\geq 0\ ;\qquad x_iC_{ij}x_j = x_i(F_{ki}F_{kj})x_j = (F_{ki}x_i)(F_{kj}x_j)\geq 0$$

$$\vec x\cdot b\cdot\vec x = \vec x\cdot F\cdot F^T\cdot\vec x = (F^T\cdot\vec x)\cdot(F^T\cdot\vec x) = \|F^T\cdot\vec x\|^2\geq 0\ ;\qquad x_ib_{ij}x_j = x_i(F_{ik}F_{jk})x_j = (F_{ik}x_i)(F_{jk}x_j)\geq 0$$

Thus $C$ and $b$ are semi-positive definite. Note that $\vec x\cdot C\cdot\vec x = 0$ requires $F\cdot\vec x = \vec 0$, and $F\cdot\vec x = \vec 0$ with $\vec x\neq\vec 0$ holds if and only if $det(F) = 0$, (see Problem 1.24). Then, the tensors $C$ and $b$ are positive definite if and only if $det(F)\neq 0$.
Consider now the additive decomposition of an arbitrary tensor $S$ with respect to a given tensor $T$:

$$S = \alpha T + U \qquad\text{where}\qquad U = S - \alpha T \tag{1.224}$$

Note that, depending on the value of $\alpha$, we have an infinite number of possibilities for representing $S$. But, if $Tr(U\cdot T^T) = Tr(T^T\cdot U) = 0$, the additive decomposition is unique. From the relationship in (1.224), we can evaluate the value of $\alpha$ as follows:

$$S\cdot T^T = \alpha T\cdot T^T + U\cdot T^T \ \Rightarrow\ Tr(S\cdot T^T) = \alpha\,Tr(T\cdot T^T) + \underbrace{Tr(U\cdot T^T)}_{=0} \tag{1.225}$$

thus:

$$\alpha = \frac{Tr(S\cdot T^T)}{Tr(T\cdot T^T)} \tag{1.226}$$

As an example, let $T = \mathbf{1}$. In this case we obtain:

$$\alpha = \frac{Tr(S\cdot\mathbf{1})}{Tr(\mathbf{1}\cdot\mathbf{1})} = \frac{Tr(S)}{Tr(\mathbf{1})} = \frac{Tr(S)}{3} \tag{1.227}$$

$$U = S - \alpha T = S - \frac{Tr(S)}{3}\mathbf{1} = S^{dev} \tag{1.228}$$

Thus:

$$S = \frac{Tr(S)}{3}\mathbf{1} + S^{dev} = S^{sph} + S^{dev} \tag{1.229}$$

NOTE: $S^{sph} = \frac{Tr(S)}{3}\mathbf{1}$ is the spherical part of the tensor $S$, and $S^{dev} = S - \frac{Tr(S)}{3}\mathbf{1}$ is the deviatoric part.

Now let $T = \frac{1}{2}(S + S^T)$. In this case:

$$\alpha = \frac{Tr(S\cdot T^T)}{Tr(T\cdot T^T)} = \frac{Tr\left[S\cdot\frac{1}{2}(S+S^T)^T\right]}{\frac{1}{4}Tr\left[(S+S^T)\cdot(S+S^T)^T\right]} = 1 \tag{1.230}$$

Thus:

$$S = \frac{1}{2}(S+S^T) + \frac{1}{2}(S-S^T) = S^{sym} + S^{skew} \tag{1.231}$$

which is the same as the equation obtained in (1.150), in which we split the tensor into symmetric and antisymmetric parts.
Problem 1.26: Find a fourth-order tensor $P$ so that $P:A = A^{dev}$, where $A$ is a second-order tensor.
Solution: Taking into account the additive decomposition into spherical and deviatoric parts, we obtain:

$$A = A^{sph} + A^{dev} = \frac{Tr(A)}{3}\mathbf{1} + A^{dev} \quad\Rightarrow\quad A^{dev} = A - \frac{Tr(A)}{3}\mathbf{1}$$

Referring to the definition of fourth-order unit tensors seen in (1.172) and (1.174), where the relations $I:A = A$ and $(\mathbf{1}\otimes\mathbf{1}):A = Tr(A)\mathbf{1}$ hold, we can now state:

$$A^{dev} = I:A - \frac{1}{3}(\mathbf{1}\otimes\mathbf{1}):A = \left[I - \frac{1}{3}\mathbf{1}\otimes\mathbf{1}\right]:A \quad\Rightarrow\quad P = I - \frac{1}{3}\mathbf{1}\otimes\mathbf{1}$$
1.5.3 Transformation Law of the Tensor Components

The tensor components depend on the coordinate system; if the coordinate system is rotated, the tensor components change accordingly. The components in two coordinate systems are interrelated by the component transformation law, which is defined below, (see Figure 1.19).

Figure 1.19: Tensors are the mathematical interpretation of physical concepts (independent of the coordinate system); components are the representation of a tensor in a coordinate system, and the components in coordinate system I and coordinate system II are related by the component transformation law.

Consider a Cartesian coordinate system $(x_1,x_2,x_3)$ formed by the orthonormal basis $(\hat e_1,\hat e_2,\hat e_3)$, (see Figure 1.20). In this system, an arbitrary vector $\vec v$ is represented by its components as follows:

$$\vec v = v_i\hat e_i = v_1\hat e_1 + v_2\hat e_2 + v_3\hat e_3 \tag{1.232}$$

or, in matrix form:

$$(\vec v)_i = \begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \tag{1.233}$$

Figure 1.20: The two Cartesian systems, $(x_1,x_2,x_3)$ with basis $(\hat e_1,\hat e_2,\hat e_3)$ and $(x_1',x_2',x_3')$ with basis $(\hat e_1',\hat e_2',\hat e_3')$.
Now consider a new coordinate system $(x_1',x_2',x_3')$ represented by the orthonormal basis $(\hat e_1',\hat e_2',\hat e_3')$, (see Figure 1.20). In this new system, the vector $\vec v$ is represented by $v_j'\hat e_j'$. As mentioned before, a tensor is independent of the adopted system, so:

$$\vec v = v_k\hat e_k = v_j'\hat e_j' \tag{1.234}$$

To obtain the components of a tensor in a given system one need only take the dot product with the corresponding basis vectors:

$$\vec v\cdot\hat e_i' = v_j'\hat e_j'\cdot\hat e_i' = v_j'\delta_{ij} = v_i' \quad\Rightarrow\quad v_i' = (v_1\hat e_1 + v_2\hat e_2 + v_3\hat e_3)\cdot\hat e_i' \tag{1.235}$$

Or in matrix form:

$$\begin{bmatrix}v_1'\\v_2'\\v_3'\end{bmatrix} = \begin{bmatrix}(v_1\hat e_1+v_2\hat e_2+v_3\hat e_3)\cdot\hat e_1'\\(v_1\hat e_1+v_2\hat e_2+v_3\hat e_3)\cdot\hat e_2'\\(v_1\hat e_1+v_2\hat e_2+v_3\hat e_3)\cdot\hat e_3'\end{bmatrix} = \begin{bmatrix}\hat e_1'\cdot\hat e_1 & \hat e_1'\cdot\hat e_2 & \hat e_1'\cdot\hat e_3\\ \hat e_2'\cdot\hat e_1 & \hat e_2'\cdot\hat e_2 & \hat e_2'\cdot\hat e_3\\ \hat e_3'\cdot\hat e_1 & \hat e_3'\cdot\hat e_2 & \hat e_3'\cdot\hat e_3\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \tag{1.236}$$

Introducing the transformation matrix components:

$$a_{ij} = \hat e_i'\cdot\hat e_j \tag{1.237}$$

the transformation in indicial notation reads:

$$v_i' = a_{ij}v_j \tag{1.238}$$

with

$$A\equiv a_{ij} = \begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix} = \begin{bmatrix}\hat e_1'\cdot\hat e_1 & \hat e_1'\cdot\hat e_2 & \hat e_1'\cdot\hat e_3\\ \hat e_2'\cdot\hat e_1 & \hat e_2'\cdot\hat e_2 & \hat e_2'\cdot\hat e_3\\ \hat e_3'\cdot\hat e_1 & \hat e_3'\cdot\hat e_2 & \hat e_3'\cdot\hat e_3\end{bmatrix} \tag{1.239}$$
The matrix $A$ is not symmetric in general, i.e. $A\neq A^T$. With reference to the scalar product $\hat e_i'\cdot\hat e_j = \|\hat e_i'\|\|\hat e_j\|\cos(x_i',x_j) = \cos(x_i',x_j)$, (see equation (1.4)), the relationship in (1.237) is expressed by means of the direction cosines, and (1.238) reads in matrix form:

$$v' = A\,v \tag{1.240}$$

The direction cosines of a vector are the cosines of the angles between the vector and the three coordinate axes. According to Figure 1.20 we can verify that $\cos\alpha_1 = \cos(x_1',x_1)$, $\cos\beta_1 = \cos(x_1',x_2)$ and $\cos\gamma_1 = \cos(x_1',x_3)$.

In the equation (1.235) we have projected the vector onto $\hat e_i'$. Now, we can project the vector onto $\hat e_i$:

$$\vec v\cdot\hat e_i = v_k\hat e_k\cdot\hat e_i = v_j'\hat e_j'\cdot\hat e_i \ \Rightarrow\ v_k\delta_{ki} = v_j'a_{ji} \ \Rightarrow\ v_i = a_{ji}v_j' \tag{1.241}$$

i.e., in matrix form, $v = A^Tv'$; the basis vectors transform likewise, $\hat e_i = a_{ji}\hat e_j'$ (1.242). On the other hand, inverting (1.240) gives:

$$v = A^{-1}v' \tag{1.243}$$

and by comparing the equations (1.243) with (1.241) we can conclude that the matrix $A$ is an orthogonal matrix, i.e.:

$$A^{-1} = A^T\ ;\qquad A^T\,A = \mathbf{1} \quad\xrightarrow{\text{indicial notation}}\quad a_{ki}a_{kj} = \delta_{ij} \tag{1.244}$$
Second-order tensor
Consider a coordinate system formed by the orthonormal basis $\hat e_i$ and a new one represented by the orthonormal basis $\hat e_i'$. The basis changes according to the transformation law $\hat e_k = a_{ik}\hat e_i'$, which allows us to represent a second-order tensor $T$ as follows:

$$T = T_{kl}\,\hat e_k\otimes\hat e_l = T_{kl}\,a_{ik}\hat e_i'\otimes a_{jl}\hat e_j' = \underbrace{T_{kl}\,a_{ik}a_{jl}}_{T_{ij}'}\,\hat e_i'\otimes\hat e_j' = T_{ij}'\,\hat e_i'\otimes\hat e_j' \tag{1.245}$$

Then, the transformation law of the components between systems for a second-order tensor is given by:

$$T_{ij}' = a_{ik}a_{jl}T_{kl} = a_{ik}T_{kl}a_{jl} \quad\xrightarrow{\text{matrix form}}\quad T' = A\,T\,A^T \tag{1.246}$$

Third-order tensor
A third-order tensor $S$ can be represented in the two systems with orthonormal bases $\hat e_i$ and $\hat e_i'$ as follows:

$$S = S_{lmn}\,\hat e_l\otimes\hat e_m\otimes\hat e_n = S_{lmn}\,a_{il}\hat e_i'\otimes a_{jm}\hat e_j'\otimes a_{kn}\hat e_k' = S_{lmn}a_{il}a_{jm}a_{kn}\,\hat e_i'\otimes\hat e_j'\otimes\hat e_k' \tag{1.247}$$

In conclusion, the components of the third-order tensor in the new basis $(\hat e_i')$ are:

$$S_{ijk}' = a_{il}a_{jm}a_{kn}S_{lmn} \tag{1.248}$$
The following table summarizes the transformation law of the components according to the tensor rank:

rank | from $(x_1,x_2,x_3)$ to $(x_1',x_2',x_3')$ | from $(x_1',x_2',x_3')$ to $(x_1,x_2,x_3)$
0 (scalar) | $\lambda' = \lambda$ | $\lambda = \lambda'$
1 (vector) | $S_i' = a_{ij}S_j$ | $S_i = a_{ji}S_j'$
2 | $S_{ij}' = a_{ik}a_{jl}S_{kl}$ | $S_{ij} = a_{ki}a_{lj}S_{kl}'$
3 | $S_{ijk}' = a_{il}a_{jm}a_{kn}S_{lmn}$ | $S_{ijk} = a_{li}a_{mj}a_{nk}S_{lmn}'$
4 | $S_{ijkl}' = a_{im}a_{jn}a_{kp}a_{lq}S_{mnpq}$ | $S_{ijkl} = a_{mi}a_{nj}a_{pk}a_{ql}S_{mnpq}'$
(1.249)
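The rank-1 and rank-2 rows of the table can be sketched numerically, using a rotation about the $x_3$-axis as the transformation matrix (an illustrative example; variable names are ours):

```python
# Component transformation law: v' = A v and T' = A T A^T.
import numpy as np

th = np.deg2rad(30.0)
A = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

v = np.array([1.0, 2.0, 3.0])
T = np.array([[3.0, -1.0, 0.0],
              [-1.0, 3.0, 0.0],
              [0.0,  0.0, 1.0]])

v_new = A @ v                                 # rank 1
T_new = A @ T @ A.T                           # rank 2, matrix form
T_ind = np.einsum('ik,jl,kl->ij', A, A, T)    # rank 2, indicial form

print(np.allclose(A.T @ A, np.eye(3)),        # A is orthogonal, eq. (1.244)
      np.allclose(T_new, T_ind),
      np.allclose(A.T @ v_new, v))            # inverse transformation
```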
Problem: Obtain the component transformation law $T_{ij}' = a_{ip}T_{pq}a_{jq}$ directly in symbolic notation, where the components of $T$ and $A$ are shown, respectively, as $T_{ij}$ and $a_{ij}$. Afterwards, given that $a_{ij}$ are the components of the transformation matrix, represent graphically the components of the tensors $T$ and $T'$ on both systems.
Solution: The expression $T' = A\,T\,A^T$ in symbolic notation is given by:

$$T_{ab}'(\hat e_a'\otimes\hat e_b') = a_{rs}(\hat e_r'\otimes\hat e_s)\cdot T_{pq}(\hat e_p\otimes\hat e_q)\cdot a_{kl}(\hat e_l\otimes\hat e_k') = a_{rs}T_{pq}a_{kl}\,\delta_{sp}\delta_{ql}\,(\hat e_r'\otimes\hat e_k') = a_{rp}T_{pq}a_{kq}\,(\hat e_r'\otimes\hat e_k')$$

To obtain the components of $T'$ one only need take the double scalar product with the basis $(\hat e_i'\otimes\hat e_j')$, the result of which is:

$$T_{ab}'(\hat e_a'\otimes\hat e_b'):(\hat e_i'\otimes\hat e_j') = a_{rp}T_{pq}a_{kq}\,(\hat e_r'\otimes\hat e_k'):(\hat e_i'\otimes\hat e_j')\ \Rightarrow\ T_{ab}'\delta_{ai}\delta_{bj} = a_{rp}T_{pq}a_{kq}\delta_{ri}\delta_{kj}\ \Rightarrow\ T_{ij}' = a_{ip}T_{pq}a_{jq}$$
Figure 1.21: Transformation law of the second-order tensor components: the components $T_{ij}$ in the system $(x_1,x_2,x_3)$ and the components $T_{ij}'$ in the system $(x_1',x_2',x_3')$, related by $T' = A\,T\,A^T$ and $T = A^T\,T'\,A$.
Problem: Show that $I_T = Tr(T)$, $II_T = \frac{1}{2}\left[I_T^2 - Tr(T^2)\right]$ and $III_T = det(T)$ do not change with a change of coordinate system (i.e. they are invariants).
Solution:
a) $I_{T'} = T_{ii}' = a_{ik}a_{il}T_{kl} = \delta_{kl}T_{kl} = T_{kk} = I_T$.
b) It suffices to show that $Tr(T^2)$ is invariant:

$$Tr(T'^2) = T_{ij}'T_{ji}' = (a_{ik}a_{jl}T_{kl})(a_{jp}a_{iq}T_{pq}) = \underbrace{a_{ik}a_{iq}}_{=\delta_{kq}}\underbrace{a_{jl}a_{jp}}_{=\delta_{lp}}T_{kl}T_{pq} = T_{kl}T_{lk} = Tr(T^2)$$

hence $II_{T'} = II_T$.
c) For the determinant:

$$det(T') = det(A\,T\,A^T) = \underbrace{det(A)}_{=1}\,det(T)\,\underbrace{det(A^T)}_{=1} = det(T)$$
60
=A
B 1 = B T
BA
A T B T = (BA ) T
CBA
(CBA ) T = A T B T C T
C
C 1 = C T
(1.250)
v = AT v
(1.251)
v = Bv
(1.252)
v = B T v
(1.253)
v = BA v
(1.254)
v = A T B T v
(1.255)
v = (BA ) v = A 1B 1 v = A T B T v
1
(1.256)
Then, it is easy to find the components of the vector in the coordinate system (x1, x 2, x3) ,
(see Figure 1.22):
v = CBA v
inverse
form
v = A T B T C T v
(1.257)
1.5.3.1 Transformation Matrix in Two Dimensions

Consider a rotation of the coordinate system about the $x_3$-axis ($z$-axis) through an angle $\theta$, so that the new system $(x',y',z')$ is obtained by rotating $(x,y,z)$ about $z = z'$. The transformation matrix is formed by the direction cosines between the axes:

$$A = \begin{bmatrix}\cos(x'\!,x) & \cos(x'\!,y) & 0\\ \cos(y'\!,x) & \cos(y'\!,y) & 0\\ 0 & 0 & 1\end{bmatrix} \tag{1.258}$$

Considering that:

$$\cos(x'\!,y) = \cos\!\left(\tfrac{\pi}{2}-\theta\right) = \sin\theta\ ;\qquad \cos(y'\!,x) = \cos\!\left(\tfrac{\pi}{2}+\theta\right) = -\sin\theta \tag{1.259}$$

we obtain:

$$A = \begin{bmatrix}\cos\theta & \sin\theta & 0\\ -\sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix} \tag{1.260}$$
Another way to prove (1.260) is by considering the position vector of the point $P$ in both systems, (see Figure 1.24). In view of Figure 1.24, said coordinates are interrelated as shown below:

$$\begin{aligned} x_P' &= x_P\cos\theta + y_P\cos\!\left(\tfrac{\pi}{2}-\theta\right) = x_P\cos\theta + y_P\sin\theta\\ y_P' &= x_P\cos\!\left(\tfrac{\pi}{2}+\theta\right) + y_P\cos\theta = -x_P\sin\theta + y_P\cos\theta \end{aligned} \tag{1.261}$$

Or in matrix form:

$$\begin{bmatrix}x_P'\\y_P'\end{bmatrix} = \begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x_P\\y_P\end{bmatrix} \quad\xrightarrow{\text{inverse transformation}}\quad \begin{bmatrix}x_P\\y_P\end{bmatrix} = \begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x_P'\\y_P'\end{bmatrix} \tag{1.262–1.263}$$

Figure 1.24: Components of the position vector of the point $P$ in the systems $(x,y)$ and $(x',y')$.
Problem 1.29: Find the transformation matrix between the systems $(x,y,z)$ and $(x''',y''',z''')$. These systems are represented in Figure 1.25.
Solution: The complete rotation is carried out in three steps.

From $(x,y,z)$ to $(x',y',z')$: rotation through $\alpha$ about the axis $z = z'$:

$$A = \begin{bmatrix}\cos\alpha & \sin\alpha & 0\\ -\sin\alpha & \cos\alpha & 0\\ 0 & 0 & 1\end{bmatrix}\qquad\text{with } 0\leq\alpha\leq 360^\circ$$

From $(x',y',z')$ to $(x'',y'',z'')$: rotation through $\beta$ about the axis $y' = y''$:

$$B = \begin{bmatrix}\cos\beta & 0 & -\sin\beta\\ 0 & 1 & 0\\ \sin\beta & 0 & \cos\beta\end{bmatrix}\qquad\text{with } 0\leq\beta\leq 180^\circ$$

From $(x'',y'',z'')$ to $(x''',y''',z''')$: rotation through $\gamma$ about the axis $z'' = z'''$:

$$C = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\ -\sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{bmatrix}\qquad\text{with } 0\leq\gamma\leq 360^\circ$$

The overall transformation matrix is then $D = C\,B\,A$:

$$D = \begin{bmatrix}\cos\gamma\cos\beta\cos\alpha - \sin\gamma\sin\alpha & \cos\gamma\cos\beta\sin\alpha + \sin\gamma\cos\alpha & -\cos\gamma\sin\beta\\ -\sin\gamma\cos\beta\cos\alpha - \cos\gamma\sin\alpha & -\sin\gamma\cos\beta\sin\alpha + \cos\gamma\cos\alpha & \sin\gamma\sin\beta\\ \sin\beta\cos\alpha & \sin\beta\sin\alpha & \cos\beta\end{bmatrix}$$

The angles $\alpha$, $\beta$, $\gamma$ are known as Euler angles and were introduced by Leonhard Euler to describe the orientation of a rigid body.
Problem 1.30: Let $T$ be a second-order tensor whose components in the Cartesian system $(x_1,x_2,x_3)$ are given by:

$$(T)_{ij} = T_{ij} = T = \begin{bmatrix}3 & -1 & 0\\ -1 & 3 & 0\\ 0 & 0 & 1\end{bmatrix}$$

Given that the transformation matrix between the two systems, $(x_1,x_2,x_3)$–$(x_1',x_2',x_3')$, is:

$$A = \begin{bmatrix}0 & 0 & 1\\ \frac{\sqrt 2}{2} & \frac{\sqrt 2}{2} & 0\\ -\frac{\sqrt 2}{2} & \frac{\sqrt 2}{2} & 0\end{bmatrix}$$

obtain the tensor components $T_{ij}'$ in the new coordinate system $(x_1',x_2',x_3')$.
Solution: As defined in equation (1.249), the transformation law for second-order tensor components is:

$$T_{ij}' = a_{ik}a_{jl}T_{kl} \quad\Leftrightarrow\quad T' = A\,T\,A^T$$

Thus:

$$T' = \begin{bmatrix}0 & 0 & 1\\ \frac{\sqrt 2}{2} & \frac{\sqrt 2}{2} & 0\\ -\frac{\sqrt 2}{2} & \frac{\sqrt 2}{2} & 0\end{bmatrix}\begin{bmatrix}3 & -1 & 0\\ -1 & 3 & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}0 & \frac{\sqrt 2}{2} & -\frac{\sqrt 2}{2}\\ 0 & \frac{\sqrt 2}{2} & \frac{\sqrt 2}{2}\\ 1 & 0 & 0\end{bmatrix} = \begin{bmatrix}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 4\end{bmatrix}$$
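The matrix product above can be checked numerically (an illustrative sketch, not part of the original text):

```python
# Verify T' = A T A^T = diag(1, 2, 4) for the data of Problem 1.30.
import numpy as np

s = np.sqrt(2.0) / 2.0
A = np.array([[0.0, 0.0, 1.0],
              [s,   s,   0.0],
              [-s,  s,   0.0]])
T = np.array([[3.0, -1.0, 0.0],
              [-1.0, 3.0, 0.0],
              [0.0,  0.0, 1.0]])

T_new = A @ T @ A.T
print(np.allclose(T_new, np.diag([1.0, 2.0, 4.0])))   # True
```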
NOTE: As we can verify in the above example, the components of the tensor $T'$ in the new basis have one particular feature: the off-diagonal terms are equal to zero. The question now is: given an arbitrary tensor $T$, is there a transformation which results in the off-diagonal terms being zero? This type of problem is called the eigenvalue and eigenvector problem.

Notes on Continuum Mechanics (Springer/CIMNE) - (Chapter 01) - By: Eduardo W.V. Chaves
1.5.4 Eigenvalues and Eigenvectors of a Tensor

As we have seen, the scalar product between a second-order tensor $T$ and a unit vector $\hat n$ leads to a vector. In other words, projecting a second-order tensor onto a certain direction results in a vector that does not necessarily have the same direction as $\hat n$, (see Figure 1.26(a)).

The aim of the eigenvalue and eigenvector problem is to find a direction $\hat n$ such that the resulting vector $\vec t^{(\hat n)} = T\cdot\hat n$ is collinear with it, (see Figure 1.26(b)):

$$\vec t^{(\hat n)} = T\cdot\hat n = \lambda\hat n \tag{1.264}$$

Figure 1.26: (a) Projection of $T$ onto an arbitrary plane, $\vec t^{(\hat n)} = T\cdot\hat n$; (b) principal direction, $T\cdot\hat n = \lambda\hat n$, where $\hat n$ is a principal direction of $T$ and $\lambda$ is the eigenvalue of $T$ associated with the direction $\hat n$.
The eigenvalue problem can be rewritten as:

$$(T - \lambda\mathbf{1})\cdot\hat n = \vec 0 \quad\xrightarrow{\text{indicial notation}}\quad (T_{ij} - \lambda\delta_{ij})\,n_j = 0_i \tag{1.265}$$

The previous set of homogeneous equations only has a nontrivial solution, i.e. $\hat n\neq\vec 0$, if and only if:

$$det(T - \lambda\mathbf{1}) = 0\ ;\qquad \left|T_{ij} - \lambda\delta_{ij}\right| = 0 \tag{1.266}$$

The determinant (1.266) is called the characteristic determinant of the tensor $T$, explicitly given by:

$$\begin{vmatrix}T_{11}-\lambda & T_{12} & T_{13}\\ T_{21} & T_{22}-\lambda & T_{23}\\ T_{31} & T_{32} & T_{33}-\lambda\end{vmatrix} = 0 \tag{1.267}$$
Expanding this determinant we obtain the characteristic polynomial, a cubic equation in $\lambda$:

$$\lambda^3 - \lambda^2\,I_T + \lambda\,II_T - III_T = 0 \tag{1.268}$$

where $I_T$, $II_T$, $III_T$ are the principal invariants of $T$, defined in component terms as:

$$I_T = Tr(T) = T_{ii}$$

$$\begin{aligned} II_T &= \frac{1}{2}\left[(Tr\,T)^2 - Tr(T^2)\right]\\ &= \frac{1}{2}\left\{Tr(T_{ij}\hat e_i\otimes\hat e_j)\,Tr(T_{kl}\hat e_k\otimes\hat e_l) - Tr\left[(T_{ij}\hat e_i\otimes\hat e_j)\cdot(T_{kl}\hat e_k\otimes\hat e_l)\right]\right\}\\ &= \frac{1}{2}\left[T_{ij}\delta_{ij}T_{kl}\delta_{kl} - T_{ij}T_{kl}\delta_{jk}\delta_{il}\right] = \frac{1}{2}\left[T_{ii}T_{kk} - T_{ij}T_{ji}\right] = M_{ii} = Tr[cof(T)]\end{aligned}$$

$$III_T = det(T) \tag{1.269}$$

where $M_{ii}$ is the trace of the matrix of minors defined in equation (1.210), $M_{ii} = M_{11} + M_{22} + M_{33}$. More explicitly, the invariants are given by:

$$I_T = T_{11} + T_{22} + T_{33}\ ;\qquad II_T = \begin{vmatrix}T_{22}&T_{23}\\T_{32}&T_{33}\end{vmatrix} + \begin{vmatrix}T_{11}&T_{13}\\T_{31}&T_{33}\end{vmatrix} + \begin{vmatrix}T_{11}&T_{12}\\T_{21}&T_{22}\end{vmatrix}\ ;\qquad III_T = det(T) \tag{1.270}$$

For a symmetric tensor ($T_{ij} = T_{ji}$) these reduce to:

$$II_T = T_{11}T_{22} + T_{11}T_{33} + T_{22}T_{33} - T_{12}^2 - T_{13}^2 - T_{23}^2$$
$$III_T = T_{11}T_{22}T_{33} + 2\,T_{12}T_{13}T_{23} - T_{12}^2T_{33} - T_{23}^2T_{11} - T_{13}^2T_{22} \tag{1.271}$$
In the principal space (the space defined by the eigenvectors) the tensor components form a diagonal matrix:

$$T_{ij}' = \begin{bmatrix}T_1 & 0 & 0\\ 0 & T_2 & 0\\ 0 & 0 & T_3\end{bmatrix} \tag{1.272}$$

and the principal invariants take the form:

$$I_T = T_1 + T_2 + T_3\ ;\qquad II_T = T_1T_2 + T_2T_3 + T_1T_3\ ;\qquad III_T = T_1T_2T_3 \tag{1.273}$$

whose values must match the values obtained in (1.270), since they are invariant under a change of basis.

If $T$ is a spherical tensor, i.e. $T_1 = T_2 = T_3 = T$, it holds that $I_T^2 = 3\,II_T$ and $III_T = T^3$.
Let $W$ be an antisymmetric tensor. The principal invariants of $W$ are given by:

$$I_W = Tr(W) = 0$$

$$II_W = \frac{1}{2}\left[(Tr\,W)^2 - Tr(W^2)\right] = -\frac{Tr(W^2)}{2} = W_{12}^2 + W_{13}^2 + W_{23}^2 = \omega^2$$

$$III_W = det(W) = 0 \tag{1.274}$$

where $\omega$ is the magnitude of the axial vector associated with $W$, (see (1.132)). The characteristic equation then becomes:

$$\lambda^3 - \lambda^2\,I_W + \lambda\,II_W - III_W = 0 \quad\Rightarrow\quad \lambda^3 + \lambda\,\omega^2 = 0 \quad\Rightarrow\quad \lambda(\lambda^2 + \omega^2) = 0 \tag{1.275}$$

In this case, one eigenvalue is real and equal to zero and the others are imaginary roots:

$$\lambda_{(1)} = 0\ ;\qquad \lambda_{(2,3)} = \pm i\,\omega \tag{1.276}$$

1.5.4.1 Orthogonality of the Eigenvectors
Let $T$ be a symmetric tensor with eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_3$ and corresponding eigenvectors $\hat n^{(1)}$, $\hat n^{(2)}$, $\hat n^{(3)}$:

$$T\cdot\hat n^{(1)} = \lambda_1\hat n^{(1)}\ ;\qquad T\cdot\hat n^{(2)} = \lambda_2\hat n^{(2)}\ ;\qquad T\cdot\hat n^{(3)} = \lambda_3\hat n^{(3)} \tag{1.277}$$

Applying the dot product between $\hat n^{(2)}$ and $T\cdot\hat n^{(1)} = \lambda_1\hat n^{(1)}$, and the dot product between $\hat n^{(1)}$ and $T\cdot\hat n^{(2)} = \lambda_2\hat n^{(2)}$, we obtain:

$$\hat n^{(2)}\cdot T\cdot\hat n^{(1)} = \lambda_1\,\hat n^{(2)}\cdot\hat n^{(1)}\ ;\qquad \hat n^{(1)}\cdot T\cdot\hat n^{(2)} = \lambda_2\,\hat n^{(1)}\cdot\hat n^{(2)} \tag{1.278–1.279}$$

Since $T$ is symmetric, $\hat n^{(2)}\cdot T\cdot\hat n^{(1)} = \hat n^{(1)}\cdot T\cdot\hat n^{(2)}$, and subtracting the two equations yields:

$$(\lambda_1 - \lambda_2)\,\hat n^{(1)}\cdot\hat n^{(2)} = 0 \tag{1.280}$$

so that, for $\lambda_1\neq\lambda_2$, $\hat n^{(1)}\cdot\hat n^{(2)} = 0$ (1.281). Similarly, it is possible to show that $\hat n^{(1)}\cdot\hat n^{(3)} = 0$ and $\hat n^{(2)}\cdot\hat n^{(3)} = 0$, and then we can conclude that the eigenvectors are mutually orthogonal and constitute an orthonormal basis, (see Figure 1.27), where the transformation matrix between systems is:

$$A = \begin{bmatrix}\hat n^{(1)}\\ \hat n^{(2)}\\ \hat n^{(3)}\end{bmatrix} = \begin{bmatrix}n_1^{(1)} & n_2^{(1)} & n_3^{(1)}\\ n_1^{(2)} & n_2^{(2)} & n_3^{(2)}\\ n_1^{(3)} & n_2^{(3)} & n_3^{(3)}\end{bmatrix} \tag{1.282}$$
The diagonalization is then carried out by $T' = A\,T\,A^T$, with inverse relation $T = A^T\,T'\,A$.

Figure 1.27: Components of $T$ in the original system $(x_1,x_2,x_3)$ and in the principal space $(x_1',x_2',x_3')$ spanned by $\hat n^{(1)}$, $\hat n^{(2)}$, $\hat n^{(3)}$, where $T' = A\,T\,A^T = diag(T_1,T_2,T_3)$.
Problem 1.32: Show that $C_1^4 + C_2^4 + C_3^4$ and $C_1^3 + C_2^3 + C_3^3$, where $C_1$, $C_2$, $C_3$ are the eigenvalues of the second-order tensor $C$, are invariants.
Solution: Any combination of invariants is also an invariant, so, on this basis, we can try to express the above expressions in terms of the principal invariants. Since each eigenvalue satisfies the characteristic equation, $C_a^3 = I_C\,C_a^2 - II_C\,C_a + III_C$, and $C_1^2+C_2^2+C_3^2 = I_C^2 - 2\,II_C$, summing over $a$ gives:

$$C_1^3 + C_2^3 + C_3^3 = I_C(I_C^2 - 2\,II_C) - II_C\,I_C + 3\,III_C = I_C^3 - 3\,I_C\,II_C + 3\,III_C$$

and multiplying $C_a^3$ by $C_a$ and summing expresses $C_1^4+C_2^4+C_3^4$ in terms of the invariants as well.

We can also prove that the eigenvalues themselves do not depend on the adopted coordinate system. Let $E^* = Q\cdot E\cdot Q^T$ be the components of $E$ in a new system, where $Q$ is orthogonal. Starting from the characteristic determinant:

$$\begin{aligned} 0 = det(E^* - \lambda\mathbf{1}) &= det(Q\cdot E\cdot Q^T - \lambda\,Q\cdot\mathbf{1}\cdot Q^T)\\ &= det\left[Q\cdot(E - \lambda\mathbf{1})\cdot Q^T\right]\\ &= \underbrace{det(Q)}_{=1}\,det(E - \lambda\mathbf{1})\,\underbrace{det(Q^T)}_{=1} = det(E - \lambda\mathbf{1})\end{aligned}$$

Hence $E$ and $E^*$ have the same characteristic polynomial and therefore the same eigenvalues.
For a symmetric tensor the eigenvalues can also be obtained in closed form from the characteristic polynomial (1.268):

$$\lambda_1 = \frac{I_T}{3} + 2S\cos\frac{\alpha}{3}\ ;\quad \lambda_2 = \frac{I_T}{3} + 2S\cos\!\left(\frac{\alpha}{3} + \frac{2\pi}{3}\right)\ ;\quad \lambda_3 = \frac{I_T}{3} + 2S\cos\!\left(\frac{\alpha}{3} + \frac{4\pi}{3}\right) \tag{1.283}$$

where

$$R = \frac{I_T^2 - 3\,II_T}{3}\ ;\quad S = \sqrt{\frac{R}{3}}\ ;\quad Q = \frac{I_T\,II_T}{3} - III_T - \frac{2\,I_T^3}{27}\ ;\quad T = \sqrt{\frac{R^3}{27}}\ ;\quad \alpha = \arccos\!\left(\frac{-Q}{2T}\right) \tag{1.284}$$

where $\alpha$ is in radians. In matrix form, in the principal space:

$$\begin{bmatrix}\lambda_1&0&0\\0&\lambda_2&0\\0&0&\lambda_3\end{bmatrix} = \underbrace{\frac{I_T}{3}\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}_{\text{Spherical part}} + \underbrace{2S\begin{bmatrix}\cos\frac{\alpha}{3}&0&0\\0&\cos\left(\frac{\alpha}{3}+\frac{2\pi}{3}\right)&0\\0&0&\cos\left(\frac{\alpha}{3}+\frac{4\pi}{3}\right)\end{bmatrix}}_{\text{Deviatoric part}} \tag{1.285}$$

where we clearly distinguish the spherical and the deviatoric parts of the tensor in the principal space. Note that, if $T$ is a spherical tensor, the relationship $I_T^2 = 3\,II_T$ holds, and then $S = 0$.
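The closed-form solution (1.283)–(1.284) can be sketched numerically and compared against a library eigenvalue routine (illustrative example; variable names are ours):

```python
# Trigonometric eigenvalue formula for a symmetric tensor vs numpy.linalg.
import numpy as np

T = np.array([[3.0, -1.0, 0.0],
              [-1.0, 3.0, 0.0],
              [0.0,  0.0, 1.0]])

I1 = np.trace(T)
I2 = 0.5 * (I1**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

R  = (I1**2 - 3.0 * I2) / 3.0
S  = np.sqrt(R / 3.0)
Q  = I1 * I2 / 3.0 - I3 - 2.0 * I1**3 / 27.0
Tt = np.sqrt(R**3 / 27.0)
alpha = np.arccos(-Q / (2.0 * Tt))

lams = I1 / 3.0 + 2.0 * S * np.cos(alpha / 3.0
                                   + np.array([0.0, 2.0, 4.0]) * np.pi / 3.0)
print(np.allclose(np.sort(lams), np.linalg.eigvalsh(T)))   # True: {1, 2, 4}
```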
Problem 1.33: Find the principal values and directions of the second-order tensor $T$, where the Cartesian components of $T$ are:

$$(T)_{ij} = T_{ij} = T = \begin{bmatrix}3 & -1 & 0\\ -1 & 3 & 0\\ 0 & 0 & 1\end{bmatrix}$$

Solution: We need to find nontrivial solutions for $(T_{ij} - \lambda\delta_{ij})n_j = 0_i$, which are constrained by $n_jn_j = 1$ (unit vector). As we have seen, the nontrivial solution requires that:

$$\left|T_{ij} - \lambda\delta_{ij}\right| = \begin{vmatrix}3-\lambda & -1 & 0\\ -1 & 3-\lambda & 0\\ 0 & 0 & 1-\lambda\end{vmatrix} = 0 \ \Rightarrow\ (1-\lambda)\left[(3-\lambda)^2 - 1\right] = 0 \ \Rightarrow\ \lambda^3 - 7\lambda^2 + 14\lambda - 8 = 0$$

The coefficients are the principal invariants, e.g.:

$$II_T = \frac{1}{2}\left(T_{ii}T_{jj} - T_{ij}T_{ij}\right) = \begin{vmatrix}T_{22}&T_{23}\\T_{32}&T_{33}\end{vmatrix} + \begin{vmatrix}T_{11}&T_{13}\\T_{31}&T_{33}\end{vmatrix} + \begin{vmatrix}T_{11}&T_{12}\\T_{21}&T_{22}\end{vmatrix} = 3 + 3 + 8 = 14$$

Solving the cubic equation $\lambda^3 - 7\lambda^2 + 14\lambda - 8 = 0$ gives the eigenvalues:

$$\lambda_1 = 1\ ;\qquad \lambda_2 = 2\ ;\qquad \lambda_3 = 4$$

and we can check:

$$I_T = \lambda_1+\lambda_2+\lambda_3 = 1+2+4 = 7\ ;\quad II_T = \lambda_1\lambda_2+\lambda_2\lambda_3+\lambda_3\lambda_1 = 2+8+4 = 14\ ;\quad III_T = \lambda_1\lambda_2\lambda_3 = 8$$

Thus, we can see that the invariants are the same as those evaluated previously.

Principal directions: Each eigenvalue $\lambda_i$ is associated with a corresponding eigenvector $\hat n^{(i)}$. We can use the equation in (1.265), i.e. $(T_{ij} - \lambda\delta_{ij})n_j = 0_i$, to obtain the principal directions.

For $\lambda_1 = 1$:

$$\begin{bmatrix}3-1 & -1 & 0\\ -1 & 3-1 & 0\\ 0 & 0 & 1-1\end{bmatrix}\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \ \Rightarrow\ \begin{cases}2n_1 - n_2 = 0\\ -n_1 + 2n_2 = 0\\ 0\,n_3 = 0\end{cases} \ \Rightarrow\ n_1 = n_2 = 0$$

and from $n_in_i = n_1^2 + n_2^2 + n_3^2 = 1$ we obtain $n_3 = \pm 1$.

For $\lambda_2 = 2$:

$$\begin{bmatrix}3-2 & -1 & 0\\ -1 & 3-2 & 0\\ 0 & 0 & 1-2\end{bmatrix}\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \ \Rightarrow\ \begin{cases}n_1 - n_2 = 0 \ \Rightarrow\ n_1 = n_2\\ -n_1 + n_2 = 0\\ -n_3 = 0\end{cases}$$

The first two equations are linearly dependent, after which we need an additional equation:

$$n_in_i = n_1^2 + n_2^2 + n_3^2 = 1 \ \Rightarrow\ 2n_1^2 = 1 \ \Rightarrow\ n_1 = \pm\frac{1}{\sqrt 2}$$

For $\lambda_3 = 4$:

$$\begin{bmatrix}3-4 & -1 & 0\\ -1 & 3-4 & 0\\ 0 & 0 & 1-4\end{bmatrix}\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \ \Rightarrow\ \begin{cases}-n_1 - n_2 = 0 \ \Rightarrow\ n_1 = -n_2\\ -3n_3 = 0\end{cases}$$

$$n_in_i = 1 \ \Rightarrow\ 2n_2^2 = 1 \ \Rightarrow\ n_2 = \pm\frac{1}{\sqrt 2}$$

Summary:

$$\lambda_1 = 1:\ \hat n^{(1)} = [0\ \ 0\ \ \pm 1]\ ;\qquad \lambda_2 = 2:\ \hat n^{(2)} = \left[\pm\tfrac{1}{\sqrt 2}\ \ \pm\tfrac{1}{\sqrt 2}\ \ 0\right]\ ;\qquad \lambda_3 = 4:\ \hat n^{(3)} = \left[\mp\tfrac{1}{\sqrt 2}\ \ \pm\tfrac{1}{\sqrt 2}\ \ 0\right]$$
NOTE: The tensor components of this problem are the same as those used in Problem 1.30. Additionally, we can verify that the eigenvectors make up the transformation matrix, $A$, between the original system, $(x_1,x_2,x_3)$, and the principal space, $(x_1',x_2',x_3')$, (see Problem 1.30).
1.5.5 Spectral Representation of Tensors

Let $T$ be a symmetric tensor with eigenvalues $T_1$, $T_2$, $T_3$ and corresponding unit eigenvectors $\hat n^{(1)}$, $\hat n^{(2)}$, $\hat n^{(3)}$:

$$T\cdot\hat n^{(a)} = T_a\,\hat n^{(a)}\qquad (a = 1,2,3;\ \text{no sum on }a) \tag{1.286}$$

The principal space is formed by the orthonormal basis $\hat n^{(1)}$, $\hat n^{(2)}$, $\hat n^{(3)}$, and the tensor components in it are represented by the eigenvalues:

$$T_{ij}' = T' = \begin{bmatrix}T_1 & 0 & 0\\ 0 & T_2 & 0\\ 0 & 0 & T_3\end{bmatrix} \tag{1.287}$$

With reference to the fact that the eigenvectors form a transformation matrix, $A$, so that:

$$T' = A\,T\,A^T \tag{1.288}$$
$$T = A^T\,T'\,A \tag{1.289}$$

with

$$A = \begin{bmatrix}\hat n^{(1)}\\ \hat n^{(2)}\\ \hat n^{(3)}\end{bmatrix} = \begin{bmatrix}n_1^{(1)} & n_2^{(1)} & n_3^{(1)}\\ n_1^{(2)} & n_2^{(2)} & n_3^{(2)}\\ n_1^{(3)} & n_2^{(3)} & n_3^{(3)}\end{bmatrix} \tag{1.290}$$

the equation (1.289) can be expanded as:

$$T = A^T\begin{bmatrix}T_1&0&0\\0&T_2&0\\0&0&T_3\end{bmatrix}A = T_1\,A^T\begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}A + T_2\,A^T\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}A + T_3\,A^T\begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix}A \tag{1.291}$$
Whereas:

$$A^T\begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}A = \begin{bmatrix}n_1^{(1)}n_1^{(1)} & n_1^{(1)}n_2^{(1)} & n_1^{(1)}n_3^{(1)}\\ n_2^{(1)}n_1^{(1)} & n_2^{(1)}n_2^{(1)} & n_2^{(1)}n_3^{(1)}\\ n_3^{(1)}n_1^{(1)} & n_3^{(1)}n_2^{(1)} & n_3^{(1)}n_3^{(1)}\end{bmatrix} = \hat n^{(1)}\otimes\hat n^{(1)} \tag{1.292}$$

and, analogously:

$$A^T\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}A = \hat n^{(2)}\otimes\hat n^{(2)}\ ;\qquad A^T\begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix}A = \hat n^{(3)}\otimes\hat n^{(3)} \tag{1.293}$$
As we can see, the tensor is represented as a linear combination of dyads, and the above representation in tensorial notation becomes:

$$T = T_1\,\hat n^{(1)}\otimes\hat n^{(1)} + T_2\,\hat n^{(2)}\otimes\hat n^{(2)} + T_3\,\hat n^{(3)}\otimes\hat n^{(3)} \tag{1.294}$$

or:

$$T = \sum_{a=1}^{3}T_a\,\hat n^{(a)}\otimes\hat n^{(a)} \qquad\text{Spectral representation of a second-order tensor} \tag{1.295}$$

which is the spectral representation of the tensor. Note that, in the above equation, we have to resort to the summation symbol, because the dummy index appears three times in the expression.

NOTE: The spectral representation in (1.295) could easily have been obtained from the definition of the second-order unit tensor, given in (1.168), which in the principal basis reads $\mathbf{1} = \sum_{a=1}^{3}\hat n^{(a)}\otimes\hat n^{(a)}$, so that:

$$T = T\cdot\mathbf{1} = T\cdot\sum_{a=1}^{3}\hat n^{(a)}\otimes\hat n^{(a)} = \sum_{a=1}^{3}\left(T\cdot\hat n^{(a)}\right)\otimes\hat n^{(a)} = \sum_{a=1}^{3}T_a\,\hat n^{(a)}\otimes\hat n^{(a)} \tag{1.296}$$
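The spectral representation (1.295) and its use for tensor powers can be sketched numerically (illustrative example; variable names are ours):

```python
# Rebuild a symmetric tensor (and its square) from its spectral representation.
import numpy as np

T = np.array([[3.0, -1.0, 0.0],
              [-1.0, 3.0, 0.0],
              [0.0,  0.0, 1.0]])
lam, N = np.linalg.eigh(T)          # columns of N are the unit eigenvectors

T_rebuilt  = sum(lam[a]    * np.outer(N[:, a], N[:, a]) for a in range(3))
T2_rebuilt = sum(lam[a]**2 * np.outer(N[:, a], N[:, a]) for a in range(3))

print(np.allclose(T_rebuilt, T), np.allclose(T2_rebuilt, T @ T))
```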
In general, for any symmetric second-order tensor $N$ with eigenvalues $N_a$ and unit eigenvectors $\hat N^{(a)}$, it holds that:

$$N = \sum_{a=1}^{3}N_a\,\hat N^{(a)}\otimes\hat N^{(a)} \tag{1.297}$$

The spectral representation is very useful for carrying out algebraic operations with tensors. For example, the tensor power in the principal space can be expressed as:

$$(T^n)_{ij}' = \begin{bmatrix}T_1^n & 0 & 0\\ 0 & T_2^n & 0\\ 0 & 0 & T_3^n\end{bmatrix} \tag{1.298}$$

or, in spectral form:

$$T^n = \sum_{a=1}^{3}T_a^n\,\hat n^{(a)}\otimes\hat n^{(a)} \tag{1.299}$$

and, for a tensor with non-negative eigenvalues:

$$\sqrt{T} = \sum_{a=1}^{3}\sqrt{T_a}\,\hat n^{(a)}\otimes\hat n^{(a)} \tag{1.300}$$
Next, we can show that a positive definite tensor has positive eigenvalues. For this purpose, we can consider a semi-positive definite tensor, $T$, for which the condition $\vec x\cdot T\cdot\vec x\geq 0$ holds for all $\vec x\neq\vec 0$. Replacing the tensor by its spectral representation, we obtain:

$$\vec x\cdot T\cdot\vec x = \vec x\cdot\left(\sum_{a=1}^{3}T_a\,\hat n^{(a)}\otimes\hat n^{(a)}\right)\cdot\vec x = \sum_{a=1}^{3}T_a\,(\vec x\cdot\hat n^{(a)})(\hat n^{(a)}\cdot\vec x) = \sum_{a=1}^{3}T_a\underbrace{(\vec x\cdot\hat n^{(a)})^2}_{\geq 0}\geq 0 \tag{1.301}$$

that is:

$$T_1(\vec x\cdot\hat n^{(1)})^2 + T_2(\vec x\cdot\hat n^{(2)})^2 + T_3(\vec x\cdot\hat n^{(3)})^2\geq 0 \tag{1.302}$$

The above expression must hold for all $\vec x\neq\vec 0$. If we take $\vec x = \hat n^{(1)}$, the above equation is reduced to $T_1(\hat n^{(1)}\cdot\hat n^{(1)})^2 = T_1\geq 0$. The same is true for $T_2$ and $T_3$. Thus, we have demonstrated that if a tensor is semi-positive definite, its eigenvalues are greater than or equal to zero, i.e. $T_1\geq 0$, $T_2\geq 0$, $T_3\geq 0$. Therefore we can conclude that a tensor is positive definite, i.e. $\vec x\cdot T\cdot\vec x > 0$, if and only if its eigenvalues are positive and nonzero, i.e. $T_1>0$, $T_2>0$, $T_3>0$. Consequently, the trace of a positive definite tensor is greater than zero, and if the trace of a semi-positive definite tensor is zero, the tensor is the zero tensor.
The spectral representation of the fourth-order unit tensor, \(\mathbf I\), can be obtained starting from the definition in (1.169), i.e.:
\[
\mathbf I = \delta_{ik}\delta_{jl}\, \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j \otimes \hat{\mathbf e}_k \otimes \hat{\mathbf e}_l
= \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j \otimes \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j
= \sum_{a=1}^{3}\sum_{b=1}^{3} \hat{\mathbf e}_a \otimes \hat{\mathbf e}_b \otimes \hat{\mathbf e}_a \otimes \hat{\mathbf e}_b
\tag{1.303}
\]
As \(\mathbf I\) is an isotropic tensor, (see 1.5.8 Isotropic and Anisotropic Tensors), the representation in (1.303) is also valid in any orthonormal basis \(\hat{\mathbf n}^{(a)}\), so:
\[
\mathbf I = \sum_{a,b=1}^{3} \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)} \otimes \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}
\tag{1.304}
\]
Likewise, starting from the definition of the unit tensor \(\overline{\mathbf I}\):
\[
\overline{\mathbf I} = \delta_{il}\delta_{jk}\, \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j \otimes \hat{\mathbf e}_k \otimes \hat{\mathbf e}_l
= \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j \otimes \hat{\mathbf e}_j \otimes \hat{\mathbf e}_i
\tag{1.305}
\]
\[
\overline{\mathbf I} = \sum_{a,b=1}^{3} \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)} \otimes \hat{\mathbf n}^{(b)} \otimes \hat{\mathbf n}^{(a)}
\tag{1.306}
\]
and
\[
\overline{\overline{\mathbf I}} = \delta_{ij}\delta_{kl}\, \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j \otimes \hat{\mathbf e}_k \otimes \hat{\mathbf e}_l
= \hat{\mathbf e}_i \otimes \hat{\mathbf e}_i \otimes \hat{\mathbf e}_k \otimes \hat{\mathbf e}_k
\tag{1.307}
\]
\[
\overline{\overline{\mathbf I}} = \sum_{a,b=1}^{3} \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)} \otimes \hat{\mathbf n}^{(b)}
\tag{1.308}
\]
Problem: Let \(\mathbf V\) be a symmetric tensor with spectral representation \(\mathbf V = \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}\), and let \(\mathbf W\) be an antisymmetric tensor. Show that \(\mathbf W\) can be written as \(\mathbf W = \sum_{\substack{a,b=1\\ a \ne b}}^{3} \mathbf W_{ab}\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}\), and that:
\[
\mathbf W \cdot \mathbf V - \mathbf V \cdot \mathbf W = \sum_{\substack{a,b=1\\ a \ne b}}^{3} \mathbf W_{ab} (\lambda_b - \lambda_a)\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}
\]
Solution:
It is true that
\[
\mathbf W = \mathbf W \cdot \mathbf 1 = \mathbf W \cdot \sum_{a=1}^{3} \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)} = \sum_{a=1}^{3} \bigl(\mathbf W \cdot \hat{\mathbf n}^{(a)}\bigr) \otimes \hat{\mathbf n}^{(a)} = \sum_{a=1}^{3} \bigl(\vec w \wedge \hat{\mathbf n}^{(a)}\bigr) \otimes \hat{\mathbf n}^{(a)}
\]
where \(\vec w\) is the axial vector associated with \(\mathbf W\). Expanding the cross products with \(\vec w = w_1 \hat{\mathbf n}^{(1)} + w_2 \hat{\mathbf n}^{(2)} + w_3 \hat{\mathbf n}^{(3)}\) we obtain:
\[
\mathbf W = w_3\, \hat{\mathbf n}^{(2)} \otimes \hat{\mathbf n}^{(1)} - w_2\, \hat{\mathbf n}^{(3)} \otimes \hat{\mathbf n}^{(1)} - w_3\, \hat{\mathbf n}^{(1)} \otimes \hat{\mathbf n}^{(2)} + w_1\, \hat{\mathbf n}^{(3)} \otimes \hat{\mathbf n}^{(2)} + w_2\, \hat{\mathbf n}^{(1)} \otimes \hat{\mathbf n}^{(3)} - w_1\, \hat{\mathbf n}^{(2)} \otimes \hat{\mathbf n}^{(3)}
\]
Taking into account that \(w_1 = -\mathbf W_{23} = \mathbf W_{32}\), \(w_2 = \mathbf W_{13} = -\mathbf W_{31}\), \(w_3 = -\mathbf W_{12} = \mathbf W_{21}\), the above representation is \(\mathbf W = \sum_{\substack{a,b=1\\ a \ne b}}^{3} \mathbf W_{ab}\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}\).
The terms \(\mathbf W \cdot \mathbf V\) and \(\mathbf V \cdot \mathbf W\) then become:
\[
\mathbf W \cdot \mathbf V = \Bigl(\sum_{\substack{a,b=1\\ a\ne b}}^{3} \mathbf W_{ab}\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}\Bigr) \cdot \Bigl(\sum_{b=1}^{3} \lambda_b\, \hat{\mathbf n}^{(b)} \otimes \hat{\mathbf n}^{(b)}\Bigr)
= \sum_{\substack{a,b=1\\ a\ne b}}^{3} \lambda_b\, \mathbf W_{ab}\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}
\]
and
\[
\mathbf V \cdot \mathbf W = \sum_{\substack{a,b=1\\ a\ne b}}^{3} \lambda_a\, \mathbf W_{ab}\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}
\]
Then,
\[
\mathbf W \cdot \mathbf V - \mathbf V \cdot \mathbf W = \sum_{\substack{a,b=1\\ a\ne b}}^{3} \mathbf W_{ab}\, (\lambda_b - \lambda_a)\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}
\]
Analogously, it can be shown that:
\[
\mathbf W \cdot \mathbf V^2 - \mathbf V^2 \cdot \mathbf W = \sum_{\substack{a,b=1\\ a\ne b}}^{3} \mathbf W_{ab}\, (\lambda_b^2 - \lambda_a^2)\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(b)}
\]
1.5.6
Cayley-Hamilton Theorem
The Cayley-Hamilton theorem states that any tensor, \(\mathbf T\), satisfies its own characteristic equation, i.e. if the eigenvalues of \(\mathbf T\) satisfy the equation \(\lambda^3 - \lambda^2 I_{\mathbf T} + \lambda\, II_{\mathbf T} - III_{\mathbf T} = 0\), so does the tensor \(\mathbf T\):
\[
\mathbf T^3 - \mathbf T^2 I_{\mathbf T} + \mathbf T\, II_{\mathbf T} - III_{\mathbf T}\, \mathbf 1 = \mathbf 0
\tag{1.309}
\]
One of the applications of the Cayley-Hamilton theorem is to express the power of a tensor, \(\mathbf T^n\), as a combination of \(\mathbf T^{n-1}\), \(\mathbf T^{n-2}\), \(\mathbf T^{n-3}\). For example, \(\mathbf T^4\) is obtained by taking the dot product of (1.309) with \(\mathbf T\):
\[
\mathbf T^3 \cdot \mathbf T - \mathbf T^2 \cdot \mathbf T\, I_{\mathbf T} + \mathbf T \cdot \mathbf T\, II_{\mathbf T} - III_{\mathbf T}\, \mathbf 1 \cdot \mathbf T = \mathbf 0
\quad\Rightarrow\quad
\mathbf T^4 = \mathbf T^3 I_{\mathbf T} - \mathbf T^2 II_{\mathbf T} + III_{\mathbf T}\, \mathbf T
\tag{1.310}
\]
Taking the trace of (1.309) we obtain:
\[
\operatorname{Tr}(\mathbf T^3) - \operatorname{Tr}(\mathbf T^2)\, I_{\mathbf T} + \operatorname{Tr}(\mathbf T)\, II_{\mathbf T} - 3\, III_{\mathbf T} = 0
\tag{1.311}
\]
which allows us to express the third invariant as:
\[
III_{\mathbf T} = \tfrac{1}{3}\bigl[\operatorname{Tr}(\mathbf T^3) - I_{\mathbf T}\, \operatorname{Tr}(\mathbf T^2) + II_{\mathbf T}\, \operatorname{Tr}(\mathbf T)\bigr]
\tag{1.312}
\]
or, in terms of traces alone:
\[
III_{\mathbf T} = \tfrac{1}{3}\operatorname{Tr}(\mathbf T^3) - \tfrac{1}{2}\operatorname{Tr}(\mathbf T)\operatorname{Tr}(\mathbf T^2) + \tfrac{1}{6}\bigl[\operatorname{Tr}(\mathbf T)\bigr]^3
\tag{1.313}
\]
or in indicial notation:
\[
III_{\mathbf T} = \tfrac{1}{3} T_{ij} T_{jk} T_{ki} - \tfrac{1}{2} T_{ii} T_{jk} T_{kj} + \tfrac{1}{6} T_{ii} T_{jj} T_{kk}
\tag{1.314}
\]
Problem 1.35: Based on the Cayley-Hamilton theorem, find the inverse of a tensor T in
terms of tensor power.
Solution: The Cayley-Hamilton theorem states that:
\[
\mathbf T^3 - \mathbf T^2 I_{\mathbf T} + \mathbf T\, II_{\mathbf T} - III_{\mathbf T}\, \mathbf 1 = \mathbf 0
\]
Carrying out the dot product between the previous equation and the tensor \(\mathbf T^{-1}\), we obtain:
\[
\mathbf T^3 \cdot \mathbf T^{-1} - \mathbf T^2 \cdot \mathbf T^{-1} I_{\mathbf T} + \mathbf T \cdot \mathbf T^{-1} II_{\mathbf T} - III_{\mathbf T}\, \mathbf 1 \cdot \mathbf T^{-1} = \mathbf 0
\;\Rightarrow\;
\mathbf T^2 - \mathbf T\, I_{\mathbf T} + \mathbf 1\, II_{\mathbf T} - III_{\mathbf T}\, \mathbf T^{-1} = \mathbf 0
\]
\[
\Rightarrow\quad
\mathbf T^{-1} = \frac{1}{III_{\mathbf T}} \bigl( \mathbf T^2 - I_{\mathbf T}\, \mathbf T + II_{\mathbf T}\, \mathbf 1 \bigr)
\]
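The inverse formula from Problem 1.35 is easy to verify numerically. A minimal NumPy sketch (the test tensor is an arbitrary invertible choice of ours):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

I_T   = np.trace(T)
II_T  = 0.5 * (I_T**2 - np.trace(T @ T))   # second principal invariant
III_T = np.linalg.det(T)                   # third principal invariant

# Cayley-Hamilton: T^{-1} = (T^2 - I_T T + II_T 1) / III_T
T_inv = (T @ T - I_T * T + II_T * np.eye(3)) / III_T

print(np.allclose(T_inv, np.linalg.inv(T)))   # True
```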
The Cayley-Hamilton theorem also applies to square matrices of order \(n\). Let \(\mathcal A_{n \times n}\) be a square \(n\) by \(n\) matrix. The characteristic determinant is given by:
\[
\det\bigl(\lambda\, \mathbf 1_{n \times n} - \mathcal A\bigr) = 0
\tag{1.315}
\]
where \(\mathbf 1_{n \times n}\) is the identity \(n\) by \(n\) matrix. Developing the determinant (1.315) we obtain:
\[
\lambda^n - I_1 \lambda^{n-1} + I_2 \lambda^{n-2} - \cdots + (-1)^n I_n = 0
\tag{1.316}
\]
and, by the Cayley-Hamilton theorem, the matrix satisfies:
\[
\mathcal A^n - I_1 \mathcal A^{n-1} + I_2 \mathcal A^{n-2} - \cdots + (-1)^n I_n\, \mathbf 1 = \mathbf 0
\tag{1.317}
\]
By means of the relationship (1.317), we can obtain the inverse of the matrix \(\mathcal A_{n \times n}\) by multiplying all the terms by the inverse, \(\mathcal A^{-1}\), i.e.:
Notes on Continuum Mechanics(Springer/CIMNE) - (Chapter 01) - By: Eduardo W.V. Chaves
\[
\mathcal A^n \mathcal A^{-1} - I_1 \mathcal A^{n-1} \mathcal A^{-1} + I_2 \mathcal A^{n-2} \mathcal A^{-1} - \cdots + (-1)^n I_n\, \mathbf 1\, \mathcal A^{-1} = \mathbf 0
\tag{1.318}
\]
then
\[
\mathcal A^{n-1} - I_1 \mathcal A^{n-2} + I_2 \mathcal A^{n-3} - \cdots + (-1)^{n-1} I_{n-1}\, \mathbf 1 + (-1)^n I_n\, \mathcal A^{-1} = \mathbf 0
\]
\[
\Rightarrow\quad
\mathcal A^{-1} = \frac{(-1)^{n-1}}{I_n} \bigl[ \mathcal A^{n-1} - I_1 \mathcal A^{n-2} + I_2 \mathcal A^{n-3} - \cdots + (-1)^{n-1} I_{n-1}\, \mathbf 1 \bigr]
\tag{1.319}
\]
Problem 1.36: Check the Cayley-Hamilton theorem by using a second-order tensor whose
Cartesian components are given by:
5 0 0
T = 0 2 0
0 0 1
Solution:
The Cayley-Hamilton theorem states that:
\[
\mathbf T^3 - \mathbf T^2 I_{\mathbf T} + \mathbf T\, II_{\mathbf T} - III_{\mathbf T}\, \mathbf 1 = \mathbf 0
\]
where \(I_{\mathbf T} = 5 + 2 + 1 = 8\), \(II_{\mathbf T} = 10 + 2 + 5 = 17\), \(III_{\mathbf T} = 10\), and
\[
\mathbf T^3 = \begin{bmatrix} 5^3 & 0 & 0\\ 0 & 2^3 & 0\\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 125 & 0 & 0\\ 0 & 8 & 0\\ 0 & 0 & 1 \end{bmatrix}
\;;\qquad
\mathbf T^2 = \begin{bmatrix} 5^2 & 0 & 0\\ 0 & 2^2 & 0\\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 25 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 1 \end{bmatrix}
\]
Substituting into the theorem:
\[
\begin{bmatrix} 125 & 0 & 0\\ 0 & 8 & 0\\ 0 & 0 & 1 \end{bmatrix}
- 8 \begin{bmatrix} 25 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 1 \end{bmatrix}
+ 17 \begin{bmatrix} 5 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{bmatrix}
- 10 \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}
\]
which confirms the theorem.
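The same check can be automated in a few lines (a NumPy sketch reusing the invariants computed in Problem 1.36):

```python
import numpy as np

T = np.diag([5.0, 2.0, 1.0])
I_T, II_T, III_T = 8.0, 17.0, 10.0    # invariants of diag(5, 2, 1)

# Cayley-Hamilton: T^3 - I_T T^2 + II_T T - III_T 1 should vanish
residual = T @ T @ T - I_T * (T @ T) + II_T * T - III_T * np.eye(3)
print(np.allclose(residual, 0.0))     # True
```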
1.5.7
Norms of Tensors
The magnitude (modulus) of a tensor, also known as the Frobenius norm, is given below:
\[
\|\vec v\| = \sqrt{\vec v \cdot \vec v} = \sqrt{v_i v_i} \qquad \text{(vector)}
\tag{1.320}
\]
\[
\|\mathbf T\| = \sqrt{\mathbf T : \mathbf T} = \sqrt{T_{ij} T_{ij}} \qquad \text{(second-order tensor)}
\tag{1.321}
\]
For a symmetric second-order tensor, the norm can also be expressed in terms of the principal values and the principal invariants:
\[
\|\mathbf T\| = \sqrt{T_1^2 + T_2^2 + T_3^2} = \sqrt{I_{\mathbf T}^2 - 2\, II_{\mathbf T}}
\tag{1.322}
\]
Figure: representation of the tensor \(\mathbf T\) in the principal space, with components \(T_1\), \(T_2\), \(T_3\) along the principal axes.
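The invariant form of the norm is worth a quick numerical check. A NumPy sketch (the symmetric test tensor is an arbitrary choice of ours):

```python
import numpy as np

T = np.array([[1.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])   # symmetric

I_T  = np.trace(T)
II_T = 0.5 * (I_T**2 - np.trace(T @ T))

norm_direct     = np.sqrt(np.tensordot(T, T))     # ||T|| = sqrt(T:T), eq. (1.321)
norm_invariants = np.sqrt(I_T**2 - 2.0 * II_T)    # ||T|| via invariants, eq. (1.322)

print(np.isclose(norm_direct, norm_invariants))   # True
```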
1.5.8
Isotropic and Anisotropic Tensors
A tensor is called isotropic when its components are the same in any coordinate system; otherwise the tensor is said to be anisotropic.
Let \(T_{i_1 i_2 \ldots}\) and \(T'_{i_1 i_2 \ldots}\) represent the components of \(\mathbf T\) in the systems \(\hat{\mathbf e}_i\) and \(\hat{\mathbf e}'_i\), respectively. The tensor is isotropic if \(T_{i_1 i_2 \ldots} = T'_{i_1 i_2 \ldots}\) on any arbitrary basis.
Isotropic first-order tensor
The transformation law for a first-order tensor reads:
\[
v'_i = a_{ij} v_j
\tag{1.325}
\]
By definition, \(\vec v\) is an isotropic tensor if it holds that \(v'_i = v_i\), and this is only possible if \(\hat{\mathbf e}'_i = \hat{\mathbf e}_i\), i.e. there is no change of system, or if the tensor is the zero vector, i.e. \(v'_i = v_i = 0_i\). Then, the unique isotropic first-order tensor is the zero vector \(\vec 0\).
Isotropic second-order tensor
The second-order unit tensor \(\mathbf 1\) is isotropic, since under a change of basis:
\[
\mathbf 1' = \mathcal A\, \mathbf 1\, \mathcal A^T = \mathcal A \mathcal A^T = \mathbf 1
\tag{1.326}
\]
An immediate observation of the isotropy of the unit tensor \(\mathbf 1\) is that any spherical tensor (\(\alpha \mathbf 1\)) is also an isotropic tensor. So, if a second-order tensor is isotropic it is spherical and vice versa.
Isotropic third-order tensor
An example of a third-order isotropic tensor is the Levi-Civita pseudo-tensor, defined in (1.182), which is not a real tensor in the strict meaning of the word. With reference to the transformation law for third-order tensor components, (see equation (1.248)), we can conclude that:
\[
\epsilon'_{ijk} = a_{ip} a_{jq} a_{kr}\, \epsilon_{pqr} = \det(\mathcal A)\, \epsilon_{ijk} = \epsilon_{ijk}
\qquad \text{(for proper orthogonal transformations)}
\tag{1.327}
\]
Isotropic fourth-order tensor
The fourth-order unit tensors are isotropic, e.g.:
\[
I_{ijkl} = \delta_{ik}\delta_{jl}
\;;\qquad
\overline I_{ijkl} = \delta_{il}\delta_{jk}
\tag{1.328}
\]
and the most general isotropic fourth-order tensor can be written as:
\[
D_{ijkl} = a_0\, \delta_{ij}\delta_{kl} + a_1\, \delta_{ik}\delta_{jl} + a_2\, \delta_{il}\delta_{jk}
\tag{1.329}
\]
Problem 1.37: Let C be a fourth-order tensor, whose components are given by:
\[
C_{ijkl} = \lambda\, \delta_{ij}\delta_{kl} + \mu\, (\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})
\]
where \(\lambda\), \(\mu\) are constant real numbers. Show that \(\mathbf C\) is an isotropic tensor.
Solution:
Applying the transformation law for fourth-order tensor components:
\[
C'_{ijkl} = a_{im} a_{jn} a_{kp} a_{lq}\, C_{mnpq}
= \lambda\, a_{im} a_{jn} a_{kp} a_{lq}\, \delta_{mn}\delta_{pq} + \mu\, a_{im} a_{jn} a_{kp} a_{lq}\, \delta_{mp}\delta_{nq} + \mu\, a_{im} a_{jn} a_{kp} a_{lq}\, \delta_{mq}\delta_{np}
\]
\[
= \lambda\, a_{in} a_{jn} a_{kq} a_{lq} + \mu\, a_{ip} a_{jq} a_{kp} a_{lq} + \mu\, a_{iq} a_{jn} a_{kn} a_{lq}
= \lambda\, \delta_{ij}\delta_{kl} + \mu\, (\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})
= C_{ijkl}
\]
1.5.9
Coaxial Tensors
Two arbitrary second-order tensors, T and S , are coaxial tensors if they have the same
eigenvectors. It is easy to show that if two tensors are coaxial, this means the dot product
between them is commutative, and vice versa, i.e.:
\[
\mathbf T \cdot \mathbf S = \mathbf S \cdot \mathbf T
\quad\Leftrightarrow\quad
\mathbf S,\ \mathbf T \ \text{are coaxial}
\tag{1.330}
\]
If \(\mathbf T\) and \(\mathbf S\) are coaxial as well as symmetric tensors, the spectral representations of these tensors are given by:
\[
\mathbf T = \sum_{a=1}^{3} T_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\;;\qquad
\mathbf S = \sum_{a=1}^{3} S_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\tag{1.331}
\]
An immediate result of (1.330) is that a tensor \(\mathbf S\) and its inverse \(\mathbf S^{-1}\) are coaxial tensors, since \(\mathbf S^{-1} \cdot \mathbf S = \mathbf S \cdot \mathbf S^{-1} = \mathbf 1\). Their spectral representations are:
\[
\mathbf S = \sum_{a=1}^{3} S_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\;;\qquad
\mathbf S^{-1} = \sum_{a=1}^{3} \frac{1}{S_a}\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\tag{1.332}
\]
where \(S_a\) and \(\dfrac{1}{S_a}\) are the eigenvalues of \(\mathbf S\) and \(\mathbf S^{-1}\), respectively.
If \(\mathbf S\) and \(\mathbf T\) are coaxial symmetric tensors, the resulting tensor \(\mathbf S \cdot \mathbf T\) is another symmetric tensor. To prove this we start from the definition of coaxial tensors:
\[
\mathbf T \cdot \mathbf S = \mathbf S \cdot \mathbf T
\;\Rightarrow\;
\mathbf T \cdot \mathbf S - \mathbf S \cdot \mathbf T = \mathbf 0
\;\Rightarrow\;
\mathbf T \cdot \mathbf S - (\mathbf T \cdot \mathbf S)^T = \mathbf 0
\;\Rightarrow\;
2\, (\mathbf T \cdot \mathbf S)^{\text{skew}} = \mathbf 0
\tag{1.333}
\]
Then, if the antisymmetric part of a tensor is the zero tensor, it follows that this tensor is symmetric:
\[
(\mathbf T \cdot \mathbf S)^{\text{skew}} = \mathbf 0
\quad\Rightarrow\quad
\mathbf T \cdot \mathbf S = (\mathbf T \cdot \mathbf S)^{\text{sym}}
\tag{1.334}
\]
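The statement in (1.330) — coaxial tensors commute — can be illustrated numerically. In the NumPy sketch below we build two tensors sharing the same (random but fixed) eigenbasis, so they are coaxial by construction:

```python
import numpy as np

# Two tensors built from the SAME eigenvectors but different eigenvalues
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthonormal eigenbasis

T = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
S = Q @ np.diag([5.0, 7.0, 11.0]) @ Q.T

# Coaxial tensors commute: T.S = S.T, eq. (1.330)
print(np.allclose(T @ S, S @ T))   # True
```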
1.5.10
Polar Decomposition
Let \(\mathbf F\) be an arbitrary second-order tensor with \(\det(\mathbf F) \ne 0\). As previously seen, it satisfies the condition \(\mathbf F \cdot \hat{\mathbf N} = \vec f(\hat{\mathbf N}) = \|\vec f(\hat{\mathbf N})\|\, \hat{\mathbf n} = \lambda\, \hat{\mathbf n} \ne \vec 0\), since \(\det(\mathbf F) \ne 0\). After that, given an orthonormal basis \(\hat{\mathbf N}^{(a)}\), we can obtain:
\[
\mathbf F = \mathbf F \cdot \mathbf 1 = \mathbf F \cdot \sum_{a=1}^{3} \hat{\mathbf N}^{(a)} \otimes \hat{\mathbf N}^{(a)}
= \sum_{a=1}^{3} \bigl(\mathbf F \cdot \hat{\mathbf N}^{(a)}\bigr) \otimes \hat{\mathbf N}^{(a)}
= \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf N}^{(a)}
\tag{1.335}
\]
with \(\vec f(\hat{\mathbf N}^{(1)}) = \mathbf F \cdot \hat{\mathbf N}^{(1)} = \lambda_1 \hat{\mathbf n}^{(1)}\), \(\vec f(\hat{\mathbf N}^{(2)}) = \mathbf F \cdot \hat{\mathbf N}^{(2)} = \lambda_2 \hat{\mathbf n}^{(2)}\), \(\vec f(\hat{\mathbf N}^{(3)}) = \mathbf F \cdot \hat{\mathbf N}^{(3)} = \lambda_3 \hat{\mathbf n}^{(3)}\).
Figure 1.29: Projecting \(\mathbf F\) onto the basis \(\hat{\mathbf N}^{(a)}\).
Writing \(\hat{\mathbf n}^{(a)} = \mathbf R \cdot \hat{\mathbf N}^{(a)}\), where \(\mathbf R\) is an orthogonal tensor, we obtain:
\[
\mathbf F = \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf N}^{(a)}
= \sum_{a=1}^{3} \lambda_a\, \bigl(\mathbf R \cdot \hat{\mathbf N}^{(a)}\bigr) \otimes \hat{\mathbf N}^{(a)}
= \mathbf R \cdot \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf N}^{(a)} \otimes \hat{\mathbf N}^{(a)}
= \mathbf R \cdot \mathbf U
\tag{1.336}
\]
where we have defined the tensor \(\mathbf U = \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf N}^{(a)} \otimes \hat{\mathbf N}^{(a)}\). Note that \(\mathbf U\) is a symmetric tensor, i.e. \(\mathbf U = \mathbf U^T\). This condition is easily verified by the fact that \(\hat{\mathbf N}^{(a)} \otimes \hat{\mathbf N}^{(a)}\) is also symmetric.
Now, considering that \(\hat{\mathbf n}^{(a)} = \mathbf R \cdot \hat{\mathbf N}^{(a)}\), i.e. \(\hat{\mathbf N}^{(a)} = \mathbf R^T \cdot \hat{\mathbf n}^{(a)} = \hat{\mathbf n}^{(a)} \cdot \mathbf R\), we obtain:
\[
\mathbf F = \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf N}^{(a)}
= \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \bigl(\hat{\mathbf n}^{(a)} \cdot \mathbf R\bigr)
= \Bigl(\sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}\Bigr) \cdot \mathbf R
= \mathbf V \cdot \mathbf R
\tag{1.337}
\]
where \(\mathbf V = \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)} = \mathbf F \cdot \mathbf R^T\). Comparing the spectral representation of \(\mathbf U\) with that of \(\mathbf V\), we conclude that they have the same eigenvalues but different eigenvectors, which are related by \(\hat{\mathbf n}^{(a)} = \mathbf R \cdot \hat{\mathbf N}^{(a)}\).
With reference to the above considerations, we can define the polar decomposition:
\[
\mathbf F = \mathbf R \cdot \mathbf U = \mathbf V \cdot \mathbf R
\qquad \text{Polar Decomposition}
\tag{1.338}
\]
where
\[
\mathbf U = \sqrt{\mathbf F^T \cdot \mathbf F} = \sqrt{\mathbf C}
\tag{1.339}
\]
\[
\mathbf V = \sqrt{\mathbf F \cdot \mathbf F^T} = \sqrt{\mathbf b}
\tag{1.340}
\]
Since \(\det(\mathbf F) \ne 0\), the tensors \(\mathbf C\) and \(\mathbf b\) are positive definite symmetric tensors, (see Problem 1.25), which implies that the eigenvalues of \(\mathbf C\) and \(\mathbf b\) are all real and positive. However, up to now, \(\det(\mathbf F) \ne 0\) is the only restriction imposed on the tensor \(\mathbf F\). Therefore, we have the following possibilities:
If \(\det(\mathbf F) > 0\):
In this scenario, we have \(\det(\mathbf F) = \det(\mathbf R)\det(\mathbf U) = \det(\mathbf V)\det(\mathbf R) > 0\), which results in the following cases:
either \(\mathbf R\) is a proper orthogonal tensor, or \(\mathbf R\) is an improper orthogonal tensor.
If \(\det(\mathbf F) < 0\):
In this situation, we have \(\det(\mathbf F) = \det(\mathbf R)\det(\mathbf U) = \det(\mathbf V)\det(\mathbf R) < 0\), which gives us the analogous cases: \(\mathbf R\) proper orthogonal, or \(\mathbf R\) improper orthogonal.
NOTE: In Chapter 2 we will work with some special tensors where \(\mathbf F\) is a nonsingular tensor, \(\det(\mathbf F) \ne 0\), with \(\det(\mathbf F) > 0\). In that case \(\mathbf U\) and \(\mathbf V\) are positive definite tensors and \(\mathbf R\) is a rotation tensor, i.e. a proper orthogonal tensor.
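Computationally, one convenient route to the polar decomposition is the singular value decomposition — a choice of ours, equivalent to the text's construction \(\mathbf U = \sqrt{\mathbf C}\), \(\mathbf V = \sqrt{\mathbf b}\). A NumPy sketch with an arbitrary \(\mathbf F\) satisfying \(\det(\mathbf F) > 0\):

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.1, 0.9]])
assert np.linalg.det(F) > 0            # restriction assumed in the text

# SVD: F = W diag(S) Vt, with S > 0; then
W, S, Vt = np.linalg.svd(F)
R = W @ Vt                             # rotation (proper orthogonal here)
U = Vt.T @ np.diag(S) @ Vt             # right stretch, U = sqrt(F^T.F)
V = W @ np.diag(S) @ W.T               # left stretch,  V = sqrt(F.F^T)

print(np.allclose(F, R @ U))               # True
print(np.allclose(F, V @ R))               # True
print(np.isclose(np.linalg.det(R), 1.0))   # True
```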
1.5.11
Partial Derivative with Tensors
The derivative of a tensor with respect to itself is the fourth-order unit tensor:
\[
\frac{\partial \mathbf A}{\partial \mathbf A} = \mathbf I
\;;\qquad
\frac{\partial A_{ij}}{\partial A_{kl}} = \delta_{ik}\delta_{jl}
\tag{1.341}
\]
The derivative of the trace of a tensor with respect to the tensor is the second-order unit tensor:
\[
\frac{\partial\, \operatorname{Tr}(\mathbf A)}{\partial \mathbf A} = \mathbf 1
\tag{1.342}
\]
The derivative of the tensor trace squared with respect to the tensor is given by:
\[
\frac{\partial\, [\operatorname{Tr}(\mathbf A)]^2}{\partial \mathbf A}
= 2 \operatorname{Tr}(\mathbf A)\, \frac{\partial\, \operatorname{Tr}(\mathbf A)}{\partial \mathbf A}
= 2 \operatorname{Tr}(\mathbf A)\, \mathbf 1
\tag{1.343}
\]
And, the derivative of the trace of the tensor squared with respect to the tensor is given by:
\[
\frac{\partial\, \operatorname{Tr}(\mathbf A^2)}{\partial \mathbf A}
= \frac{\partial (A_{sr} A_{rs})}{\partial A_{ij}}\, (\hat{\mathbf e}_i \otimes \hat{\mathbf e}_j)
= \Bigl[ \frac{\partial A_{sr}}{\partial A_{ij}} A_{rs} + A_{sr} \frac{\partial A_{rs}}{\partial A_{ij}} \Bigr] (\hat{\mathbf e}_i \otimes \hat{\mathbf e}_j)
\]
\[
= \bigl[ \delta_{si}\delta_{rj} A_{rs} + A_{sr}\, \delta_{ri}\delta_{sj} \bigr] (\hat{\mathbf e}_i \otimes \hat{\mathbf e}_j)
= \bigl[ A_{ji} + A_{ji} \bigr] (\hat{\mathbf e}_i \otimes \hat{\mathbf e}_j)
= 2 A_{ji}\, (\hat{\mathbf e}_i \otimes \hat{\mathbf e}_j)
= 2 \mathbf A^T
\tag{1.344}
\]
Analogously:
\[
\frac{\partial\, \operatorname{Tr}(\mathbf A^3)}{\partial \mathbf A} = 3\, (\mathbf A^2)^T
\tag{1.345}
\]
In the particular case when \(\mathbf C\) is a symmetric tensor, it holds that:
\[
\frac{\partial\, \operatorname{Tr}(\mathbf C)}{\partial \mathbf C} = \mathbf 1
\;;\quad
\frac{\partial\, [\operatorname{Tr}(\mathbf C)]^2}{\partial \mathbf C} = 2 \operatorname{Tr}(\mathbf C)\, \mathbf 1
\;;\quad
\frac{\partial\, \operatorname{Tr}(\mathbf C^2)}{\partial \mathbf C} = 2 \mathbf C^T = 2 \mathbf C
\;;\quad
\frac{\partial\, \operatorname{Tr}(\mathbf C^3)}{\partial \mathbf C} = 3\, (\mathbf C^2)^T = 3 \mathbf C^2
\]
Moreover, we can say that the derivative of the Frobenius norm of \(\mathbf C\) is given by:
\[
\frac{\partial \|\mathbf C\|}{\partial \mathbf C}
= \frac{\partial \sqrt{\mathbf C : \mathbf C}}{\partial \mathbf C}
= \frac{\partial \sqrt{\operatorname{Tr}(\mathbf C \cdot \mathbf C^T)}}{\partial \mathbf C}
= \frac{\partial \sqrt{\operatorname{Tr}(\mathbf C^2)}}{\partial \mathbf C}
= \frac{1}{2} \bigl[\operatorname{Tr}(\mathbf C^2)\bigr]^{-\frac 12}\, 2 \mathbf C
= \bigl[\operatorname{Tr}(\mathbf C^2)\bigr]^{-\frac 12}\, \mathbf C
\tag{1.346}
\]
or:
\[
\frac{\partial \|\mathbf C\|}{\partial \mathbf C} = \frac{\mathbf C}{\|\mathbf C\|}
\tag{1.347}
\]
The derivative of the scalar \(n_i C_{ij} n_j\) with respect to \(\hat{\mathbf n}\) is obtained as:
\[
\frac{\partial (n_i C_{ij} n_j)}{\partial n_k}
= \delta_{ik} C_{ij} n_j + n_i C_{ij} \delta_{jk}
= C_{kj} n_j + n_i C_{ik}
= (C_{kj} + C_{jk})\, n_j
= 2 C^{\text{sym}}_{kj} n_j
= 2 C_{kj} n_j
\tag{1.348}
\]
where the last equality holds for a symmetric tensor \(\mathbf C\).
Next, we evaluate the derivative of the inverse of a symmetric tensor with respect to the tensor itself. Starting from \(\mathbf C^{-1} \cdot \mathbf C = \mathbf 1\):
\[
\frac{\partial (\mathbf C^{-1} \cdot \mathbf C)}{\partial \mathbf C} = \frac{\partial \mathbf 1}{\partial \mathbf C} = \mathbb O
\tag{1.349}
\]
where \(\mathbb O\) is the fourth-order zero tensor, and the above equation in indicial notation becomes:
\[
\frac{\partial (C^{-1}_{iq} C_{qj})}{\partial C_{kl}}
= \frac{\partial C^{-1}_{iq}}{\partial C_{kl}}\, C_{qj} + C^{-1}_{iq}\, \frac{\partial C_{qj}}{\partial C_{kl}}
= \mathbb O_{ijkl}
\quad\Rightarrow\quad
\frac{\partial C^{-1}_{iq}}{\partial C_{kl}}\, C_{qj} = - C^{-1}_{iq}\, \frac{\partial C_{qj}}{\partial C_{kl}}
\tag{1.350}
\]
Multiplying both sides by \(C^{-1}_{jr}\), and taking into account the symmetry of \(\mathbf C\), i.e. \(\dfrac{\partial C_{qj}}{\partial C_{kl}} = \dfrac{1}{2}\bigl(\delta_{qk}\delta_{jl} + \delta_{jk}\delta_{ql}\bigr)\), we can conclude that:
\[
\frac{\partial C^{-1}_{ir}}{\partial C_{kl}}
= - C^{-1}_{iq}\, \frac{1}{2}\bigl(\delta_{qk}\delta_{jl} + \delta_{jk}\delta_{ql}\bigr)\, C^{-1}_{jr}
= - \frac{1}{2} \bigl( C^{-1}_{ik} C^{-1}_{lr} + C^{-1}_{il} C^{-1}_{kr} \bigr)
\tag{1.351}
\]
Or in tensorial notation:
\[
\frac{\partial \mathbf C^{-1}}{\partial \mathbf C}
= - \frac{1}{2} \bigl( \mathbf C^{-1} \,\overline{\otimes}\, \mathbf C^{-1} + \mathbf C^{-1} \,\underline{\otimes}\, \mathbf C^{-1} \bigr)
\tag{1.352}
\]
NOTE: Note that, had we not replaced the symmetric part of \(\dfrac{\partial C_{qj}}{\partial C_{kl}}\) in (1.351), we would have found \(\dfrac{\partial C^{-1}_{ir}}{\partial C_{kl}} = - C^{-1}_{ik} C^{-1}_{lr}\), which is the result that holds when \(\mathbf C\) is not a symmetric tensor.
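Formula (1.351) can be validated against a finite-difference derivative. The NumPy sketch below (with an arbitrary symmetric positive definite \(\mathbf C\) and a symmetric perturbation direction of our choosing) contracts the fourth-order tensor with the perturbation and compares it to a central difference of the inverse:

```python
import numpy as np

C = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])
Ci = np.linalg.inv(C)

# Fourth-order derivative, eq. (1.351)/(1.352):
# D_ijkl = -1/2 (Ci_ik Ci_lj + Ci_il Ci_kj)
D = -0.5 * (np.einsum('ik,lj->ijkl', Ci, Ci) +
            np.einsum('il,kj->ijkl', Ci, Ci))

# Central finite difference along a symmetric direction dC
dC = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 2.0]])
h = 1e-6
fd = (np.linalg.inv(C + h * dC) - np.linalg.inv(C - h * dC)) / (2 * h)

print(np.allclose(np.einsum('ijkl,kl->ij', D, dC), fd, atol=1e-6))  # True
```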
1.5.11.1 Partial Derivative of Invariants
Let \(\mathbf T\) be a second-order tensor. The partial derivative of \(I_{\mathbf T}\) with respect to \(\mathbf T\), (see equation (1.342)), is:
\[
\frac{\partial I_{\mathbf T}}{\partial \mathbf T} = \frac{\partial\, \operatorname{Tr}(\mathbf T)}{\partial \mathbf T} = \mathbf 1
\tag{1.353}
\]
The partial derivative of the second invariant is obtained as:
\[
\frac{\partial\, II_{\mathbf T}}{\partial \mathbf T}
= \frac{\partial}{\partial \mathbf T} \Bigl\{ \frac{1}{2} \bigl[ (\operatorname{Tr}\mathbf T)^2 - \operatorname{Tr}(\mathbf T^2) \bigr] \Bigr\}
= \frac{1}{2} \bigl[ 2\, (\operatorname{Tr}\mathbf T)\, \mathbf 1 - 2\, \mathbf T^T \bigr]
= \operatorname{Tr}(\mathbf T)\, \mathbf 1 - \mathbf T^T
\tag{1.354}
\]
From the Cayley-Hamilton theorem (1.309), multiplying by \(\mathbf T^{-2}\), we obtain:
\[
\mathbf T = I_{\mathbf T}\, \mathbf 1 - II_{\mathbf T}\, \mathbf T^{-1} + III_{\mathbf T}\, \mathbf T^{-2}
\tag{1.355}
\]
so the derivative of the second invariant can also be written as:
\[
\frac{\partial\, II_{\mathbf T}}{\partial \mathbf T} = \operatorname{Tr}(\mathbf T)\, \mathbf 1 - \mathbf T^T
= \bigl( II_{\mathbf T}\, \mathbf T^{-1} - III_{\mathbf T}\, \mathbf T^{-2} \bigr)^T
\tag{1.356}
\]
To find the partial derivative of the third invariant, we can start with the definition given in (1.313), so:
\[
\frac{\partial\, III_{\mathbf T}}{\partial \mathbf T}
= \frac{\partial}{\partial \mathbf T} \Bigl[ \frac{1}{3} \operatorname{Tr}(\mathbf T^3) - \frac{1}{2} \operatorname{Tr}(\mathbf T)\, \operatorname{Tr}(\mathbf T^2) + \frac{1}{6} \bigl[\operatorname{Tr}(\mathbf T)\bigr]^3 \Bigr]
\]
\[
= \frac{1}{3}\, 3\, (\mathbf T^2)^T - \frac{1}{2} \operatorname{Tr}(\mathbf T^2)\, \mathbf 1 - \frac{1}{2} \operatorname{Tr}(\mathbf T)\, 2\, \mathbf T^T + \frac{1}{6}\, 3\, \bigl[\operatorname{Tr}(\mathbf T)\bigr]^2\, \mathbf 1
\]
\[
= (\mathbf T^2)^T - \operatorname{Tr}(\mathbf T)\, \mathbf T^T + \frac{1}{2} \Bigl\{ \bigl[\operatorname{Tr}(\mathbf T)\bigr]^2 - \operatorname{Tr}(\mathbf T^2) \Bigr\}\, \mathbf 1
\tag{1.357}
\]
\[
\frac{\partial\, III_{\mathbf T}}{\partial \mathbf T} = (\mathbf T^2)^T - I_{\mathbf T}\, \mathbf T^T + II_{\mathbf T}\, \mathbf 1
\tag{1.358}
\]
which, for a symmetric tensor, reduces to \(\mathbf T^2 - I_{\mathbf T}\, \mathbf T + II_{\mathbf T}\, \mathbf 1\).
By the Cayley-Hamilton theorem, \(III_{\mathbf T}\, \mathbf T^{-1} = \mathbf T^2 - I_{\mathbf T}\, \mathbf T + II_{\mathbf T}\, \mathbf 1\), thus:
\[
\frac{\partial\, III_{\mathbf T}}{\partial \mathbf T}
= \bigl( \mathbf T^2 - I_{\mathbf T}\, \mathbf T + II_{\mathbf T}\, \mathbf 1 \bigr)^T
= \bigl( III_{\mathbf T}\, \mathbf T^{-1} \bigr)^T
\tag{1.359}
\]
By comparing (1.357) with (1.359) we find another way to express the derivative of \(III_{\mathbf T}\) with respect to \(\mathbf T\), i.e.:
\[
\frac{\partial\, III_{\mathbf T}}{\partial \mathbf T} = III_{\mathbf T}\, \mathbf T^{-T}
\qquad \bigl( = III_{\mathbf T}\, \mathbf T^{-1} \ \text{for symmetric } \mathbf T \bigr)
\tag{1.360}
\]
The derivative of the determinant can also be expressed by means of the cofactor:
\[
\frac{\partial\, III_{\mathbf T}}{\partial T_{ij}} = \operatorname{cof}(T_{ij})
\tag{1.361}
\]
where \(\operatorname{cof}(T_{ij})\) is the cofactor of \(T_{ij}\) and is defined as:
\[
\bigl[\operatorname{cof}(T_{ij})\bigr]^T = \det(\mathbf T)\, (T^{-1})_{ij}
\tag{1.362}
\]
Consider now the scalar \(J = (III_{\mathbf b})^{\frac 12}\), where \(\mathbf b\) is symmetric. Its derivative with respect to \(\mathbf b\) is:
\[
\frac{\partial J}{\partial \mathbf b}
= \frac{\partial (III_{\mathbf b})^{\frac 12}}{\partial \mathbf b}
= \frac{1}{2} (III_{\mathbf b})^{-\frac 12}\, \frac{\partial\, III_{\mathbf b}}{\partial \mathbf b}
= \frac{1}{2} (III_{\mathbf b})^{-\frac 12}\, III_{\mathbf b}\, \mathbf b^{-1}
= \frac{1}{2} (III_{\mathbf b})^{\frac 12}\, \mathbf b^{-1}
= \frac{1}{2} J\, \mathbf b^{-1}
\]
and
\[
\frac{\partial\, [\ln(J)]}{\partial \mathbf b}
= \frac{\partial\, \ln (III_{\mathbf b})^{\frac 12}}{\partial \mathbf b}
= \frac{1}{2\, III_{\mathbf b}}\, \frac{\partial\, III_{\mathbf b}}{\partial \mathbf b}
= \frac{1}{2\, III_{\mathbf b}}\, III_{\mathbf b}\, \mathbf b^{-1}
= \frac{1}{2}\, \mathbf b^{-1}
\]
1.5.12
Spherical and Deviatoric Tensors
Any tensor can be split into a spherical and a deviatoric part:
\[
\mathbf T = \mathbf T^{\text{sph}} + \mathbf T^{\text{dev}} = \frac{\operatorname{Tr}(\mathbf T)}{3}\, \mathbf 1 + \mathbf T^{\text{dev}} = T_m\, \mathbf 1 + \mathbf T^{\text{dev}}
\tag{1.363}
\]
where \(T_m = \frac{1}{3}\operatorname{Tr}(\mathbf T)\) is the mean value. The deviatoric part is then:
\[
\mathbf T^{\text{dev}} = \mathbf T - \frac{\operatorname{Tr}(\mathbf T)}{3}\, \mathbf 1 = \mathbf T - T_m\, \mathbf 1
\tag{1.364}
\]
In components:
\[
T^{\text{dev}}_{ij} =
\begin{bmatrix}
T^{\text{dev}}_{11} & T^{\text{dev}}_{12} & T^{\text{dev}}_{13}\\
T^{\text{dev}}_{12} & T^{\text{dev}}_{22} & T^{\text{dev}}_{23}\\
T^{\text{dev}}_{13} & T^{\text{dev}}_{23} & T^{\text{dev}}_{33}
\end{bmatrix}
=
\begin{bmatrix}
T_{11} - T_m & T_{12} & T_{13}\\
T_{12} & T_{22} - T_m & T_{23}\\
T_{13} & T_{23} & T_{33} - T_m
\end{bmatrix}
=
\begin{bmatrix}
\frac{1}{3}(2T_{11} - T_{22} - T_{33}) & T_{12} & T_{13}\\
T_{12} & \frac{1}{3}(2T_{22} - T_{11} - T_{33}) & T_{23}\\
T_{13} & T_{23} & \frac{1}{3}(2T_{33} - T_{11} - T_{22})
\end{bmatrix}
\tag{1.365}
\]
1.5.12.1 First Invariant of the Deviatoric Tensor
The first invariant of the deviatoric tensor is:
\[
I_{\mathbf T^{\text{dev}}} = \operatorname{Tr}(\mathbf T^{\text{dev}})
= \operatorname{Tr}\Bigl( \mathbf T - \frac{\operatorname{Tr}(\mathbf T)}{3}\, \mathbf 1 \Bigr)
= \operatorname{Tr}(\mathbf T) - \frac{\operatorname{Tr}(\mathbf T)}{3} \underbrace{\operatorname{Tr}(\mathbf 1)}_{=3} = 0
\tag{1.366}
\]
Thus, we can conclude that the trace of any deviatoric tensor is equal to zero.
1.5.12.2 Second Invariant of the Deviatoric Tensor
For simplicity we can use the principal space to obtain the second and third invariants of the deviatoric tensor. In the principal space the components of \(\mathbf T\) are given by:
\[
T_{ij} = \begin{bmatrix} T_1 & 0 & 0\\ 0 & T_2 & 0\\ 0 & 0 & T_3 \end{bmatrix}
\tag{1.367}
\]
and those of the deviatoric tensor by:
\[
T^{\text{dev}}_{ij} = \begin{bmatrix} T_1 - T_m & 0 & 0\\ 0 & T_2 - T_m & 0\\ 0 & 0 & T_3 - T_m \end{bmatrix}
\tag{1.368}
\]
The second invariant then becomes:
\[
\begin{aligned}
II_{\mathbf T^{\text{dev}}}
&= (T_1 - T_m)(T_2 - T_m) + (T_1 - T_m)(T_3 - T_m) + (T_2 - T_m)(T_3 - T_m)\\
&= (T_1 T_2 + T_1 T_3 + T_2 T_3) - 2 T_m (T_1 + T_2 + T_3) + 3 T_m^2\\
&= II_{\mathbf T} - \frac{2 I_{\mathbf T}}{3}\, I_{\mathbf T} + \frac{I_{\mathbf T}^2}{3}
= \frac{1}{3} \bigl( 3\, II_{\mathbf T} - I_{\mathbf T}^2 \bigr)
\end{aligned}
\tag{1.369}
\]
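Both (1.366) and (1.369) can be confirmed numerically. A NumPy sketch with an arbitrary symmetric tensor of our choosing:

```python
import numpy as np

T = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])

I_T  = np.trace(T)
II_T = 0.5 * (I_T**2 - np.trace(T @ T))

T_dev = T - (I_T / 3.0) * np.eye(3)   # deviatoric part, eq. (1.364)

# II of the deviator via the generic invariant definition
II_dev = 0.5 * (np.trace(T_dev)**2 - np.trace(T_dev @ T_dev))

print(np.isclose(np.trace(T_dev), 0.0))                  # True, eq. (1.366)
print(np.isclose(II_dev, (3.0*II_T - I_T**2) / 3.0))     # True, eq. (1.369)
```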
We could also have obtained the above result by directly starting from the definition of the second invariant of a tensor given in (1.269), i.e.:
\[
\begin{aligned}
II_{\mathbf T^{\text{dev}}}
&= \frac{1}{2} \Bigl\{ \bigl(I_{\mathbf T^{\text{dev}}}\bigr)^2 - \operatorname{Tr}\bigl[ (\mathbf T^{\text{dev}})^2 \bigr] \Bigr\}
= - \frac{1}{2} \operatorname{Tr}\bigl[ (\mathbf T^{\text{dev}})^2 \bigr]
= - \frac{1}{2} \operatorname{Tr}\bigl[ (\mathbf T - T_m \mathbf 1)^2 \bigr]\\
&= - \frac{1}{2} \Bigl[ \operatorname{Tr}(\mathbf T^2) - 2 T_m \operatorname{Tr}(\mathbf T) + T_m^2 \operatorname{Tr}(\mathbf 1) \Bigr]
= - \frac{1}{2} \Bigl[ \operatorname{Tr}(\mathbf T^2) - \frac{2 I_{\mathbf T}^2}{3} + \frac{I_{\mathbf T}^2}{3} \Bigr]
= - \frac{1}{2} \Bigl[ \operatorname{Tr}(\mathbf T^2) - \frac{I_{\mathbf T}^2}{3} \Bigr]
\end{aligned}
\tag{1.370}
\]
Observing that \(\operatorname{Tr}(\mathbf T^2) = T_1^2 + T_2^2 + T_3^2 = I_{\mathbf T}^2 - 2\, II_{\mathbf T}\), (see Problem 1.31), the equation in (1.370) becomes:
\[
II_{\mathbf T^{\text{dev}}}
= - \frac{1}{2} \Bigl[ I_{\mathbf T}^2 - 2\, II_{\mathbf T} - \frac{I_{\mathbf T}^2}{3} \Bigr]
= II_{\mathbf T} - \frac{I_{\mathbf T}^2}{3}
= \frac{1}{3} \bigl( 3\, II_{\mathbf T} - I_{\mathbf T}^2 \bigr)
\tag{1.371}
\]
Figure: decomposition of the tensor components into the spherical part (\(T_m\) on the diagonal) and the deviatoric part \(\mathbf T^{\text{dev}}\).
It is interesting to note that:
\[
- II_{\mathbf T^{\text{dev}}} = \frac{1}{2} \operatorname{Tr}\bigl[ (\mathbf T^{\text{dev}})^2 \bigr]
= \frac{1}{2}\, \mathbf T^{\text{dev}} : \mathbf T^{\text{dev}}
= \frac{1}{2} \|\mathbf T^{\text{dev}}\|^2
\tag{1.372}
\]
In components:
\[
- II_{\mathbf T^{\text{dev}}} = \frac{1}{2} \Bigl[ (T^{\text{dev}}_{11})^2 + (T^{\text{dev}}_{22})^2 + (T^{\text{dev}}_{33})^2 + 2 (T^{\text{dev}}_{12})^2 + 2 (T^{\text{dev}}_{13})^2 + 2 (T^{\text{dev}}_{23})^2 \Bigr]
\tag{1.373}
\]
and, in the principal space:
\[
- II_{\mathbf T^{\text{dev}}} = \frac{1}{2}\, T^{\text{dev}}_{ij} T^{\text{dev}}_{ji}
= \frac{1}{2} \Bigl[ (T^{\text{dev}}_1)^2 + (T^{\text{dev}}_2)^2 + (T^{\text{dev}}_3)^2 \Bigr]
\tag{1.374}
\]
The second invariant can also be evaluated as the sum of the principal minors:
\[
II_{\mathbf T^{\text{dev}}} =
\begin{vmatrix} T^{\text{dev}}_{22} & T^{\text{dev}}_{23}\\ T^{\text{dev}}_{23} & T^{\text{dev}}_{33} \end{vmatrix}
+
\begin{vmatrix} T^{\text{dev}}_{11} & T^{\text{dev}}_{13}\\ T^{\text{dev}}_{13} & T^{\text{dev}}_{33} \end{vmatrix}
+
\begin{vmatrix} T^{\text{dev}}_{11} & T^{\text{dev}}_{12}\\ T^{\text{dev}}_{12} & T^{\text{dev}}_{22} \end{vmatrix}
\]
\[
= T^{\text{dev}}_{22} T^{\text{dev}}_{33} + T^{\text{dev}}_{11} T^{\text{dev}}_{33} + T^{\text{dev}}_{11} T^{\text{dev}}_{22} - (T^{\text{dev}}_{12})^2 - (T^{\text{dev}}_{23})^2 - (T^{\text{dev}}_{13})^2
\tag{1.375}
\]
Using \(\operatorname{Tr}(\mathbf T^{\text{dev}}) = 0\), the products of the diagonal terms can be rewritten in terms of squares of differences, leading to:
\[
II_{\mathbf T^{\text{dev}}} = - \frac{1}{6} \Bigl[ (T^{\text{dev}}_{22} - T^{\text{dev}}_{33})^2 + (T^{\text{dev}}_{11} - T^{\text{dev}}_{33})^2 + (T^{\text{dev}}_{11} - T^{\text{dev}}_{22})^2 \Bigr] - (T^{\text{dev}}_{12})^2 - (T^{\text{dev}}_{23})^2 - (T^{\text{dev}}_{13})^2
\tag{1.378}
\]
Moreover, if we consider the principal space we obtain:
\[
II_{\mathbf T^{\text{dev}}} = - \frac{1}{6} \Bigl[ (T^{\text{dev}}_2 - T^{\text{dev}}_3)^2 + (T^{\text{dev}}_1 - T^{\text{dev}}_3)^2 + (T^{\text{dev}}_1 - T^{\text{dev}}_2)^2 \Bigr]
\tag{1.379}
\]
1.5.12.3 Third Invariant of the Deviatoric Tensor
In the principal space, the third invariant of the deviatoric tensor is:
\[
\begin{aligned}
III_{\mathbf T^{\text{dev}}}
&= (T_1 - T_m)(T_2 - T_m)(T_3 - T_m)\\
&= T_1 T_2 T_3 - T_m (T_1 T_2 + T_1 T_3 + T_2 T_3) + T_m^2 (T_1 + T_2 + T_3) - T_m^3\\
&= III_{\mathbf T} - \frac{I_{\mathbf T}}{3}\, II_{\mathbf T} + \frac{I_{\mathbf T}^2}{9}\, I_{\mathbf T} - \frac{I_{\mathbf T}^3}{27}
= III_{\mathbf T} - \frac{I_{\mathbf T}\, II_{\mathbf T}}{3} + \frac{2 I_{\mathbf T}^3}{27}\\
&= \frac{1}{27} \bigl( 2 I_{\mathbf T}^3 - 9\, I_{\mathbf T}\, II_{\mathbf T} + 27\, III_{\mathbf T} \bigr)
\end{aligned}
\tag{1.380}
\]
Problem: Let \(\boldsymbol\sigma\) be a symmetric second-order tensor and \(\mathbf s \equiv \boldsymbol\sigma^{\text{dev}}\) its deviatoric part. Show that:
\[
\mathbf s : \frac{\partial \mathbf s}{\partial \boldsymbol\sigma} = \mathbf s
\tag{1.381}
\]
Also show that \(\boldsymbol\sigma\) and \(\boldsymbol\sigma^{\text{dev}}\) are coaxial tensors.
Solution:
From the definition of the deviatoric part, \(\mathbf s = \boldsymbol\sigma - \dfrac{I_{\boldsymbol\sigma}}{3}\, \mathbf 1\). Afterwards we calculate:
\[
\frac{\partial s_{ij}}{\partial \sigma_{kl}}
= \frac{\partial \sigma_{ij}}{\partial \sigma_{kl}} - \frac{1}{3} \frac{\partial I_{\boldsymbol\sigma}}{\partial \sigma_{kl}}\, \delta_{ij}
= \delta_{ik}\delta_{jl} - \frac{1}{3} \delta_{kl}\delta_{ij}
\]
Therefore:
\[
s_{ij}\, \frac{\partial s_{ij}}{\partial \sigma_{kl}}
= s_{ij} \Bigl( \delta_{ik}\delta_{jl} - \frac{1}{3} \delta_{kl}\delta_{ij} \Bigr)
= s_{kl} - \frac{1}{3} \delta_{kl} \underbrace{s_{ii}}_{=0}
= s_{kl}
\quad\Rightarrow\quad
\mathbf s : \frac{\partial \mathbf s}{\partial \boldsymbol\sigma} = \mathbf s
\]
To show that the two tensors are coaxial, we must prove that \(\boldsymbol\sigma \cdot \boldsymbol\sigma^{\text{dev}} = \boldsymbol\sigma^{\text{dev}} \cdot \boldsymbol\sigma\):
\[
\boldsymbol\sigma \cdot \boldsymbol\sigma^{\text{dev}}
= \boldsymbol\sigma \cdot \Bigl( \boldsymbol\sigma - \frac{I_{\boldsymbol\sigma}}{3}\, \mathbf 1 \Bigr)
= \boldsymbol\sigma \cdot \boldsymbol\sigma - \frac{I_{\boldsymbol\sigma}}{3}\, \boldsymbol\sigma
= \Bigl( \boldsymbol\sigma - \frac{I_{\boldsymbol\sigma}}{3}\, \mathbf 1 \Bigr) \cdot \boldsymbol\sigma
= \boldsymbol\sigma^{\text{dev}} \cdot \boldsymbol\sigma
\]
Therefore, we have shown that \(\boldsymbol\sigma\) and \(\boldsymbol\sigma^{\text{dev}}\) are coaxial tensors. In other words, they have the same principal directions.
1.6
Tensor-Valued Tensor Functions
As an example of a scalar-valued tensor function we have:
\[
\Psi = \Psi(\mathbf T, \mathbf S)
\tag{1.382}
\]
where \(\mathbf T\) and \(\mathbf S\) are second-order tensors. Additionally, as an example of a second-order tensor-valued tensor function we have:
\[
\boldsymbol\Psi = \boldsymbol\Psi(\mathbf T) = \alpha\, \mathbf 1 + \beta\, \mathbf T
\tag{1.383}
\]
1.6.1
Taylor Series Expansion
The function \(f(x)\) can be approximated by the Taylor series as
\[
f(x) = \sum_{n=0}^{\infty} \frac{1}{n!} \frac{\partial^n f(a)}{\partial x^n}\, (x - a)^n
\]
where \(n!\) denotes the factorial of \(n\), and \(f(a)\) is the value of the function at the application point \(x = a\). We can extrapolate that definition for use on tensors. For example, let us suppose we have a scalar-valued tensor function \(\psi\) in terms of a second-order tensor, \(\mathbf E\); then we can approximate \(\psi(\mathbf E)\) as:
\[
\begin{aligned}
\psi(\mathbf E) &\approx \frac{1}{0!}\, \psi(\mathbf E_0) + \frac{1}{1!} \frac{\partial \psi(\mathbf E_0)}{\partial E_{ij}}\, (E_{ij} - E_{ij\,0}) + \frac{1}{2!} \frac{\partial^2 \psi(\mathbf E_0)}{\partial E_{ij} \partial E_{kl}}\, (E_{ij} - E_{ij\,0})(E_{kl} - E_{kl\,0}) + \cdots\\
&= \psi_0 + \frac{\partial \psi(\mathbf E_0)}{\partial \mathbf E} : (\mathbf E - \mathbf E_0) + \frac{1}{2}\, (\mathbf E - \mathbf E_0) : \frac{\partial^2 \psi(\mathbf E_0)}{\partial \mathbf E\, \partial \mathbf E} : (\mathbf E - \mathbf E_0) + \cdots
\end{aligned}
\tag{1.384}
\]
Tensor-valued tensor functions, e.g. \(\mathbf S = \mathbf S(\mathbf E)\), admit an analogous expansion. Two important tensor functions are the tensor exponential and the tensor logarithm, defined by their power series:
\[
\exp \mathbf S = \sum_{n=0}^{\infty} \frac{\mathbf S^n}{n!}
\tag{1.385}
\]
\[
\ln(\mathbf 1 + \mathbf S) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\, \mathbf S^n
\tag{1.386}
\]
For a symmetric tensor \(\mathbf S\), the series can be evaluated in the principal space:
\[
\exp \mathbf S = \sum_{a=1}^{3} \Bigl( 1 + \lambda_a + \frac{\lambda_a^2}{2!} + \frac{\lambda_a^3}{3!} + \cdots \Bigr) \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
= \sum_{a=1}^{3} \exp(\lambda_a)\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\]
\[
\ln(\mathbf 1 + \mathbf S) = \sum_{a=1}^{3} \Bigl( \lambda_a - \frac{\lambda_a^2}{2} + \frac{\lambda_a^3}{3} - \cdots \Bigr) \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
= \sum_{a=1}^{3} \ln(1 + \lambda_a)\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\tag{1.387}
\]
where \(\lambda_a\) and \(\hat{\mathbf n}^{(a)}\) are the eigenvalues and eigenvectors of \(\mathbf S\), respectively.
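The spectral shortcut in (1.387) and the defining power series (1.385) must agree; a NumPy sketch (with an arbitrary symmetric \(\mathbf S\) of our choosing, and the series truncated at 20 terms) checks this:

```python
import numpy as np

def exp_tensor(S):
    """exp(S) = sum_a exp(lambda_a) n^(a) (x) n^(a), eq. (1.387)."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.exp(vals)) @ vecs.T   # scale columns, recompose

S = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.1]])

# Truncated defining series: 1 + S + S^2/2! + ... , eq. (1.385)
series, term = np.eye(3), np.eye(3)
for n in range(1, 20):
    term = term @ S / n
    series = series + term

print(np.allclose(exp_tensor(S), series))   # True
```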
1.6.2
Isotropic Tensor Functions
A tensor-valued tensor function \(\boldsymbol\Psi(\mathbf T)\) is said to be isotropic if, for any orthogonal tensor \(\mathbf Q\):
\[
\boldsymbol\Psi^*(\mathbf T) = \mathbf Q \cdot \boldsymbol\Psi(\mathbf T) \cdot \mathbf Q^T
= \boldsymbol\Psi(\underbrace{\mathbf Q \cdot \mathbf T \cdot \mathbf Q^T}_{\mathbf T^*})
\tag{1.388}
\]
We can show that \(\boldsymbol\Psi(\mathbf T)\) has the same principal directions as \(\mathbf T\), i.e. \(\boldsymbol\Psi(\mathbf T)\) and \(\mathbf T\) are coaxial tensors. To demonstrate this we can regard the components of \(\mathbf T\) in the principal space:
\[
(\mathbf T)_{ij} = \begin{bmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3 \end{bmatrix}
\tag{1.389}
\]
In this space, the identity
\[
\mathbf T^* = \mathbf Q \cdot \mathbf T \cdot \mathbf Q^T = \mathbf T
\tag{1.390}
\]
holds, for instance, for the reflection
\[
(\mathbf Q)_{ij} = \begin{bmatrix} -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}
\tag{1.392}
\]
so isotropy requires:
\[
\boldsymbol\Psi^* = \mathbf Q \cdot \boldsymbol\Psi \cdot \mathbf Q^T = \boldsymbol\Psi
\tag{1.391}
\]
After having done the calculation for the matrices in (1.391), we obtain:
\[
\begin{bmatrix} \Psi_{11} & \Psi_{12} & \Psi_{13}\\ \Psi_{12} & \Psi_{22} & \Psi_{23}\\ \Psi_{13} & \Psi_{23} & \Psi_{33} \end{bmatrix}
= \begin{bmatrix} \Psi_{11} & -\Psi_{12} & -\Psi_{13}\\ -\Psi_{12} & \Psi_{22} & \Psi_{23}\\ -\Psi_{13} & \Psi_{23} & \Psi_{33} \end{bmatrix}
\tag{1.393}
\]
which implies \(\Psi_{12} = \Psi_{13} = 0\). Repeating the procedure with the analogous reflections about the other axes yields \(\Psi_{23} = 0\), so \(\boldsymbol\Psi\) is diagonal in the principal space of \(\mathbf T\), i.e. the two tensors are coaxial.
Consider now the isotropic tensor-valued tensor function:
\[
\boldsymbol\Psi = \boldsymbol\Psi(\mathbf T) = \phi_0\, \mathbf 1 + \phi_1\, \mathbf T + \phi_2\, \mathbf T^2
\tag{1.394}
\]
where \(\phi_0\), \(\phi_1\), \(\phi_2\) are scalars. The spectral representations of \(\mathbf T\) and \(\boldsymbol\Psi\) are:
\[
\mathbf T = \sum_{a=1}^{3} \lambda_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\tag{1.395}
\]
\[
\boldsymbol\Psi = \sum_{a=1}^{3} \Psi_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}
\tag{1.396}
\]
Note that \(\mathbf T\) and \(\boldsymbol\Psi\) have the same principal directions \(\hat{\mathbf n}^{(i)}\). Then, we can put the following set of equations together:
\[
\left\{
\begin{aligned}
\mathbf 1 &= \hat{\mathbf n}^{(1)} \otimes \hat{\mathbf n}^{(1)} + \hat{\mathbf n}^{(2)} \otimes \hat{\mathbf n}^{(2)} + \hat{\mathbf n}^{(3)} \otimes \hat{\mathbf n}^{(3)}\\
\mathbf T &= \lambda_1\, \hat{\mathbf n}^{(1)} \otimes \hat{\mathbf n}^{(1)} + \lambda_2\, \hat{\mathbf n}^{(2)} \otimes \hat{\mathbf n}^{(2)} + \lambda_3\, \hat{\mathbf n}^{(3)} \otimes \hat{\mathbf n}^{(3)}\\
\mathbf T^2 &= \lambda_1^2\, \hat{\mathbf n}^{(1)} \otimes \hat{\mathbf n}^{(1)} + \lambda_2^2\, \hat{\mathbf n}^{(2)} \otimes \hat{\mathbf n}^{(2)} + \lambda_3^2\, \hat{\mathbf n}^{(3)} \otimes \hat{\mathbf n}^{(3)}
\end{aligned}
\right.
\tag{1.397}
\]
Solving this system for the dyads \(\hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}\) yields the eigenprojections:
\[
\begin{aligned}
\mathbf M^{(1)} &= \frac{\mathbf T^2 - (\lambda_2 + \lambda_3)\, \mathbf T + \lambda_2 \lambda_3\, \mathbf 1}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)}\\
\mathbf M^{(2)} &= \frac{\mathbf T^2 - (\lambda_1 + \lambda_3)\, \mathbf T + \lambda_1 \lambda_3\, \mathbf 1}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)}\\
\mathbf M^{(3)} &= \frac{\mathbf T^2 - (\lambda_1 + \lambda_2)\, \mathbf T + \lambda_1 \lambda_2\, \mathbf 1}{(\lambda_3 - \lambda_1)(\lambda_3 - \lambda_2)}
\end{aligned}
\tag{1.398}
\]
Substituting these into (1.396) brings \(\boldsymbol\Psi\) to the form:
\[
\boldsymbol\Psi = \sum_{a=1}^{3} \Psi_a\, \mathbf M^{(a)} = \phi_0\, \mathbf 1 + \phi_1\, \mathbf T + \phi_2\, \mathbf T^2
\tag{1.399}
\]
with the coefficients:
\[
\begin{aligned}
\phi_0 &= \frac{\Psi_1\, \lambda_2 \lambda_3}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)} + \frac{\Psi_2\, \lambda_1 \lambda_3}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)} + \frac{\Psi_3\, \lambda_1 \lambda_2}{(\lambda_3 - \lambda_1)(\lambda_3 - \lambda_2)}\\
\phi_1 &= - \Bigl[ \frac{\Psi_1 (\lambda_2 + \lambda_3)}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)} + \frac{\Psi_2 (\lambda_1 + \lambda_3)}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)} + \frac{\Psi_3 (\lambda_1 + \lambda_2)}{(\lambda_3 - \lambda_1)(\lambda_3 - \lambda_2)} \Bigr]\\
\phi_2 &= \frac{\Psi_1}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)} + \frac{\Psi_2}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)} + \frac{\Psi_3}{(\lambda_3 - \lambda_1)(\lambda_3 - \lambda_2)}
\end{aligned}
\tag{1.400}
\]
We can now show that the tensor function \(\boldsymbol\Psi(\mathbf T)\) given in (1.399) is indeed isotropic:
\[
\boldsymbol\Psi^*(\mathbf T) = \mathbf Q \cdot \boldsymbol\Psi(\mathbf T) \cdot \mathbf Q^T
= \mathbf Q \cdot \bigl( \phi_0\, \mathbf 1 + \phi_1\, \mathbf T + \phi_2\, \mathbf T^2 \bigr) \cdot \mathbf Q^T
= \phi_0\, \mathbf Q \cdot \mathbf 1 \cdot \mathbf Q^T + \phi_1\, \mathbf Q \cdot \mathbf T \cdot \mathbf Q^T + \phi_2\, \mathbf Q \cdot \mathbf T^2 \cdot \mathbf Q^T
= \phi_0\, \mathbf 1 + \phi_1\, \mathbf T^* + \phi_2\, (\mathbf T^*)^2
= \boldsymbol\Psi(\mathbf T^*)
\tag{1.401}
\]
1.6.3
Derivatives of Tensor Functions
The derivative of a scalar-valued tensor function \(\Psi\) with respect to a tensor \(\mathbf A\) is denoted by:
\[
\Psi_{,\mathbf A} \equiv \frac{\partial \Psi}{\partial \mathbf A}
\tag{1.402}
\]
\[
\Psi_{,\mathbf A} = \frac{\partial \Psi}{\partial A_{ij}}\, (\hat{\mathbf e}_i \otimes \hat{\mathbf e}_j)
\tag{1.403}
\]
As an application, consider the tensors:
\[
\mathbf C = \mathbf F^T \cdot \mathbf F
\tag{1.404}
\]
\[
\mathbf b = \mathbf F \cdot \mathbf F^T
\tag{1.405}
\]
where \(\mathbf F\) is an arbitrary second-order tensor with the restriction \(\det(\mathbf F) > 0\) imposed on it. We must also bear in mind that there is a scalar-valued isotropic tensor function, \(\Psi = \Psi(I_{\mathbf C}, II_{\mathbf C}, III_{\mathbf C})\), expressed in terms of the principal invariants of \(\mathbf C\), where \(I_{\mathbf C} = I_{\mathbf b}\), \(II_{\mathbf C} = II_{\mathbf b}\), \(III_{\mathbf C} = III_{\mathbf b}\). Next, we can find the partial derivative of \(\Psi\) with respect to \(\mathbf C\) and with respect to \(\mathbf b\). We must also verify that the following relation holds:
\[
\mathbf F \cdot \Psi_{,\mathbf C} \cdot \mathbf F^T = \Psi_{,\mathbf b} \cdot \mathbf b
\tag{1.406}
\]
Applying the chain rule we obtain:
\[
\Psi_{,\mathbf C} = \frac{\partial \Psi(I_{\mathbf C}, II_{\mathbf C}, III_{\mathbf C})}{\partial \mathbf C}
= \frac{\partial \Psi}{\partial I_{\mathbf C}} \frac{\partial I_{\mathbf C}}{\partial \mathbf C}
+ \frac{\partial \Psi}{\partial II_{\mathbf C}} \frac{\partial II_{\mathbf C}}{\partial \mathbf C}
+ \frac{\partial \Psi}{\partial III_{\mathbf C}} \frac{\partial III_{\mathbf C}}{\partial \mathbf C}
\tag{1.407}
\]
Considering the partial derivatives of the principal invariants of the symmetric tensor \(\mathbf C\), we can state that:
\[
\begin{aligned}
\frac{\partial I_{\mathbf C}}{\partial \mathbf C} &= \mathbf 1\\
\frac{\partial II_{\mathbf C}}{\partial \mathbf C} &= I_{\mathbf C}\, \mathbf 1 - \mathbf C^T = I_{\mathbf C}\, \mathbf 1 - \mathbf C = II_{\mathbf C}\, \mathbf C^{-1} - III_{\mathbf C}\, \mathbf C^{-2}\\
\frac{\partial III_{\mathbf C}}{\partial \mathbf C} &= III_{\mathbf C}\, \mathbf C^{-T} = III_{\mathbf C}\, \mathbf C^{-1} = \mathbf C^2 - I_{\mathbf C}\, \mathbf C + II_{\mathbf C}\, \mathbf 1
\end{aligned}
\tag{1.408}
\]
Substituting \(\dfrac{\partial I_{\mathbf C}}{\partial \mathbf C} = \mathbf 1\), \(\dfrac{\partial II_{\mathbf C}}{\partial \mathbf C} = I_{\mathbf C}\, \mathbf 1 - \mathbf C\) and \(\dfrac{\partial III_{\mathbf C}}{\partial \mathbf C} = III_{\mathbf C}\, \mathbf C^{-1}\) into (1.407), we obtain:
\[
\Psi_{,\mathbf C} = \frac{\partial \Psi}{\partial I_{\mathbf C}}\, \mathbf 1 + \frac{\partial \Psi}{\partial II_{\mathbf C}} \bigl( I_{\mathbf C}\, \mathbf 1 - \mathbf C \bigr) + \frac{\partial \Psi}{\partial III_{\mathbf C}}\, III_{\mathbf C}\, \mathbf C^{-1}
\tag{1.409}
\]
\[
\Psi_{,\mathbf C} = \Bigl( \frac{\partial \Psi}{\partial I_{\mathbf C}} + I_{\mathbf C} \frac{\partial \Psi}{\partial II_{\mathbf C}} \Bigr) \mathbf 1 - \frac{\partial \Psi}{\partial II_{\mathbf C}}\, \mathbf C + III_{\mathbf C}\, \frac{\partial \Psi}{\partial III_{\mathbf C}}\, \mathbf C^{-1}
\tag{1.410}
\]
If instead we substitute \(\dfrac{\partial III_{\mathbf C}}{\partial \mathbf C} = \mathbf C^2 - I_{\mathbf C}\, \mathbf C + II_{\mathbf C}\, \mathbf 1\) into the equation in (1.407), we obtain an expression in terms of \(\mathbf C^2\):
\[
\Psi_{,\mathbf C} = \Bigl( \frac{\partial \Psi}{\partial I_{\mathbf C}} + I_{\mathbf C} \frac{\partial \Psi}{\partial II_{\mathbf C}} + II_{\mathbf C} \frac{\partial \Psi}{\partial III_{\mathbf C}} \Bigr) \mathbf 1 - \Bigl( \frac{\partial \Psi}{\partial II_{\mathbf C}} + I_{\mathbf C} \frac{\partial \Psi}{\partial III_{\mathbf C}} \Bigr) \mathbf C + \frac{\partial \Psi}{\partial III_{\mathbf C}}\, \mathbf C^2
\tag{1.411}
\]
while substituting \(\dfrac{\partial II_{\mathbf C}}{\partial \mathbf C} = II_{\mathbf C}\, \mathbf C^{-1} - III_{\mathbf C}\, \mathbf C^{-2}\) yields an expression in terms of \(\mathbf C^{-2}\):
\[
\Psi_{,\mathbf C} = \frac{\partial \Psi}{\partial I_{\mathbf C}}\, \mathbf 1 + \Bigl( II_{\mathbf C} \frac{\partial \Psi}{\partial II_{\mathbf C}} + III_{\mathbf C} \frac{\partial \Psi}{\partial III_{\mathbf C}} \Bigr) \mathbf C^{-1} - III_{\mathbf C}\, \frac{\partial \Psi}{\partial II_{\mathbf C}}\, \mathbf C^{-2}
\tag{1.412}
\]
If we now observe both \(I_{\mathbf C} = I_{\mathbf b}\), \(II_{\mathbf C} = II_{\mathbf b}\), \(III_{\mathbf C} = III_{\mathbf b}\), and the equation in (1.410), we can draw the conclusion that:
\[
\Psi_{,\mathbf b} = \Bigl( \frac{\partial \Psi}{\partial I_{\mathbf b}} + I_{\mathbf b} \frac{\partial \Psi}{\partial II_{\mathbf b}} \Bigr) \mathbf 1 - \frac{\partial \Psi}{\partial II_{\mathbf b}}\, \mathbf b + III_{\mathbf b}\, \frac{\partial \Psi}{\partial III_{\mathbf b}}\, \mathbf b^{-1}
\tag{1.413}
\]
Next, we evaluate \(\mathbf F \cdot \Psi_{,\mathbf C} \cdot \mathbf F^T\):
\[
\mathbf F \cdot \Psi_{,\mathbf C} \cdot \mathbf F^T
= \Bigl( \frac{\partial \Psi}{\partial I_{\mathbf C}} + I_{\mathbf C} \frac{\partial \Psi}{\partial II_{\mathbf C}} \Bigr) \mathbf F \cdot \mathbf F^T - \frac{\partial \Psi}{\partial II_{\mathbf C}}\, \mathbf F \cdot \mathbf C \cdot \mathbf F^T + III_{\mathbf C}\, \frac{\partial \Psi}{\partial III_{\mathbf C}}\, \mathbf F \cdot \mathbf C^{-1} \cdot \mathbf F^T
\tag{1.414}
\]
Since
\[
\mathbf C = \mathbf F^T \cdot \mathbf F
\tag{1.415}
\]
it holds that:
\[
\mathbf F \cdot \mathbf C \cdot \mathbf F^T = \mathbf F \cdot \mathbf F^T \cdot \mathbf F \cdot \mathbf F^T = \mathbf b \cdot \mathbf b = \mathbf b^2
\;;\qquad
\mathbf C^{-1} = \mathbf F^{-1} \cdot \mathbf b^{-1} \cdot \mathbf F
\;\Rightarrow\;
\mathbf F \cdot \mathbf C^{-1} \cdot \mathbf F^T = \mathbf F \cdot \mathbf F^{-1} \cdot \mathbf b^{-1} \cdot \mathbf F \cdot \mathbf F^T = \mathbf b^{-1} \cdot \mathbf b
\tag{1.416}
\]
Thus:
\[
\mathbf F \cdot \Psi_{,\mathbf C} \cdot \mathbf F^T
= \Bigl( \frac{\partial \Psi}{\partial I_{\mathbf C}} + I_{\mathbf C} \frac{\partial \Psi}{\partial II_{\mathbf C}} \Bigr) \mathbf b - \frac{\partial \Psi}{\partial II_{\mathbf C}}\, \mathbf b^2 + III_{\mathbf C}\, \frac{\partial \Psi}{\partial III_{\mathbf C}}\, \mathbf b^{-1} \cdot \mathbf b
= \Bigl[ \Bigl( \frac{\partial \Psi}{\partial I_{\mathbf C}} + I_{\mathbf C} \frac{\partial \Psi}{\partial II_{\mathbf C}} \Bigr) \mathbf 1 - \frac{\partial \Psi}{\partial II_{\mathbf C}}\, \mathbf b + III_{\mathbf C}\, \frac{\partial \Psi}{\partial III_{\mathbf C}}\, \mathbf b^{-1} \Bigr] \cdot \mathbf b
\tag{1.417}
\]
In light of the equations in (1.413) and (1.417), we can draw the conclusion that:
\[
\mathbf F \cdot \Psi_{,\mathbf C} \cdot \mathbf F^T = \Psi_{,\mathbf b} \cdot \mathbf b = \mathbf b \cdot \Psi_{,\mathbf b}
\tag{1.418}
\]
We can also express \(\Psi_{,\mathbf F}\) by means of the chain rule:
\[
\Psi_{,\mathbf F} = \frac{\partial \Psi(\mathbf C)}{\partial \mathbf F} = \frac{\partial \Psi}{\partial \mathbf C} : \frac{\partial \mathbf C}{\partial \mathbf F}
\;;\qquad
(\Psi_{,\mathbf F})_{kl} = \frac{\partial \Psi}{\partial C_{ij}}\, \frac{\partial C_{ij}}{\partial F_{kl}}
\quad \text{(indicial notation)}
\tag{1.419}
\]
With \(C_{ij} = F_{qi} F_{qj}\):
\[
\frac{\partial C_{ij}}{\partial F_{kl}} = \frac{\partial (F_{qi} F_{qj})}{\partial F_{kl}}
= \delta_{qk}\delta_{il}\, F_{qj} + \delta_{qk}\delta_{jl}\, F_{qi}
= \delta_{il}\, F_{kj} + \delta_{jl}\, F_{ki}
\tag{1.420}
\]
then:
\[
(\Psi_{,\mathbf F})_{kl} = \frac{\partial \Psi}{\partial C_{ij}} \bigl( \delta_{il} F_{kj} + \delta_{jl} F_{ki} \bigr)
= F_{kj}\, \frac{\partial \Psi}{\partial C_{lj}} + F_{ki}\, \frac{\partial \Psi}{\partial C_{il}}
\tag{1.421}
\]
and, using the symmetry of \(\mathbf C\):
\[
(\Psi_{,\mathbf F})_{kl} = 2\, F_{kj}\, \frac{\partial \Psi}{\partial C_{lj}}
\tag{1.422}
\]
\[
\Psi_{,\mathbf F} = 2\, \mathbf F \cdot \Psi_{,\mathbf C}
\tag{1.423}
\]
Since \(\Psi_{,\mathbf C}\) is symmetric and coaxial with \(\mathbf C = \mathbf U^2\), we can draw the conclusion that \(\Psi_{,\mathbf C}\) and \(\mathbf U\) are coaxial tensors.
Let \(\mathbf A\) be a symmetric second-order tensor, and \(\Psi = \Psi(\mathbf A)\) be a scalar-valued tensor function. The following relationships hold:
\[
\begin{aligned}
\Psi_{,\mathbf b} &= 2\, \mathbf b \cdot \Psi_{,\mathbf A} &&\text{for } \mathbf A = \mathbf b^T \cdot \mathbf b\\
\Psi_{,\mathbf b} &= 2\, \Psi_{,\mathbf A} \cdot \mathbf b &&\text{for } \mathbf A = \mathbf b \cdot \mathbf b^T\\
\Psi_{,\mathbf b} &= \mathbf b \cdot \Psi_{,\mathbf A} + \Psi_{,\mathbf A} \cdot \mathbf b &&\text{for } \mathbf A = \mathbf b \cdot \mathbf b \ \text{and}\ \mathbf b = \mathbf b^T
\end{aligned}
\tag{1.424}
\]
1.7
Voigt Notation
A symmetric second-order tensor has six independent components and can therefore be stored in a column matrix:
\[
T_{ij} = \begin{bmatrix} T_{11} & T_{12} & T_{13}\\ T_{12} & T_{22} & T_{23}\\ T_{13} & T_{23} & T_{33} \end{bmatrix}
\quad\xrightarrow{\text{Voigt}}\quad
\{\mathbf T\} = \begin{bmatrix} T_{11}\\ T_{22}\\ T_{33}\\ T_{12}\\ T_{23}\\ T_{13} \end{bmatrix}
\tag{1.425}
\]
This representation is called the Voigt Notation. It is also possible to represent a second-order tensor as:
\[
E_{ij} = \begin{bmatrix} E_{11} & E_{12} & E_{13}\\ E_{12} & E_{22} & E_{23}\\ E_{13} & E_{23} & E_{33} \end{bmatrix}
\quad\xrightarrow{\text{Voigt}}\quad
\{\mathbf E\} = \begin{bmatrix} E_{11}\\ E_{22}\\ E_{33}\\ 2E_{12}\\ 2E_{23}\\ 2E_{13} \end{bmatrix}
\tag{1.426}
\]
As we have seen before, a fourth-order tensor, \(\mathbf C\), that presents minor symmetry, i.e. \(C_{ijkl} = C_{jikl} = C_{ijlk} = C_{jilk}\), has \(6 \times 6 = 36\) independent components. Note that, due to the symmetry in \((ij)\) we have 6 independent components, and due to the symmetry in \((kl)\) we have 6 independent components. In Voigt Notation we can represent these components in a 6-by-6 matrix as:
\[
[\mathbf C] = \begin{bmatrix}
C_{1111} & C_{1122} & C_{1133} & C_{1112} & C_{1123} & C_{1113}\\
C_{2211} & C_{2222} & C_{2233} & C_{2212} & C_{2223} & C_{2213}\\
C_{3311} & C_{3322} & C_{3333} & C_{3312} & C_{3323} & C_{3313}\\
C_{1211} & C_{1222} & C_{1233} & C_{1212} & C_{1223} & C_{1213}\\
C_{2311} & C_{2322} & C_{2333} & C_{2312} & C_{2323} & C_{2313}\\
C_{1311} & C_{1322} & C_{1333} & C_{1312} & C_{1323} & C_{1313}
\end{bmatrix}
\tag{1.427}
\]
If in addition to minor symmetry the tensor also has major symmetry, i.e. \(C_{ijkl} = C_{klij}\), the number of independent components is reduced to 21. One can easily memorize the order of the components in the matrix \([\mathbf C]\) by considering the order of the second-order tensor components in Voigt Notation, i.e.:
\[
(11) \quad (22) \quad (33) \quad (12) \quad (23) \quad (13)
\tag{1.428}
\]
1.7.1
Unit Tensors in Voigt Notation
98
1
1
1
0
0
Voigt
ij 1 = 0 1 0 {} =
0
0 0 1
0
0
(1.429)
In the subsection 1.5.2.5.1 (Unit Tensors) we have defined three fourth-order unit tensors, namely \(I_{ijkl} = \delta_{ik}\delta_{jl}\), \(\overline I_{ijkl} = \delta_{il}\delta_{jk}\) and \(\overline{\overline I}_{ijkl} = \delta_{ij}\delta_{kl}\), among which only \(\overline{\overline I}_{ijkl} = \delta_{ij}\delta_{kl}\) is a symmetric tensor. The representation of \(\overline{\overline I}_{ijkl} = \delta_{ij}\delta_{kl}\) in Voigt notation can be evaluated by observing how a symmetric fourth-order tensor is represented in (1.427), thus:
\[
\overline{\overline I}_{ijkl} = \delta_{ij}\delta_{kl}
\quad\xrightarrow{\text{Voigt}}\quad
[\overline{\overline{\mathbf I}}] = \begin{bmatrix}
1 & 1 & 1 & 0 & 0 & 0\\
1 & 1 & 1 & 0 & 0 & 0\\
1 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\tag{1.430}
\]
The components of the symmetric fourth-order unit tensor, \(\mathbf I^{\text{sym}}\), are represented by \(I^{\text{sym}}_{ijkl} = \frac{1}{2}\bigl(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\bigr)\), which in Voigt notation becomes:
\[
[\mathbf I] = \begin{bmatrix}
I_{1111} & I_{1122} & I_{1133} & I_{1112} & I_{1123} & I_{1113}\\
I_{2211} & I_{2222} & I_{2233} & I_{2212} & I_{2223} & I_{2213}\\
I_{3311} & I_{3322} & I_{3333} & I_{3312} & I_{3323} & I_{3313}\\
I_{1211} & I_{1222} & I_{1233} & I_{1212} & I_{1223} & I_{1213}\\
I_{2311} & I_{2322} & I_{2333} & I_{2312} & I_{2323} & I_{2313}\\
I_{1311} & I_{1322} & I_{1333} & I_{1312} & I_{1323} & I_{1313}
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & \frac 12 & 0 & 0\\
0 & 0 & 0 & 0 & \frac 12 & 0\\
0 & 0 & 0 & 0 & 0 & \frac 12
\end{bmatrix}
\tag{1.431}
\]
The inverse of \([\mathbf I]\) is:
\[
[\mathbf I]^{-1} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 2 & 0 & 0\\
0 & 0 & 0 & 0 & 2 & 0\\
0 & 0 & 0 & 0 & 0 & 2
\end{bmatrix}
\tag{1.432}
\]
1.7.2
The Dot Product in Voigt Notation
The dot product between a symmetric second-order tensor, \(\mathbf T\), and a vector, \(\vec n\), is given by \(\vec b = \mathbf T \cdot \vec n\), where the components of \(\vec b\) can be evaluated as follows:
\[
\begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix}
= \begin{bmatrix} T_{11} & T_{12} & T_{13}\\ T_{12} & T_{22} & T_{23}\\ T_{13} & T_{23} & T_{33} \end{bmatrix}
\begin{bmatrix} n_1\\ n_2\\ n_3 \end{bmatrix}
\tag{1.433}
\]
Rearranging so that the tensor components appear in Voigt form:
\[
\begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix}
= \underbrace{\begin{bmatrix}
n_1 & 0 & 0 & n_2 & 0 & n_3\\
0 & n_2 & 0 & n_1 & n_3 & 0\\
0 & 0 & n_3 & 0 & n_2 & n_1
\end{bmatrix}}_{[\mathcal N]^T}
\begin{bmatrix} T_{11}\\ T_{22}\\ T_{33}\\ T_{12}\\ T_{23}\\ T_{13} \end{bmatrix}
\;;\qquad
\{\mathbf b\} = [\mathcal N]^T \{\mathbf T\}
\tag{1.434}
\]
1.7.3
The Component Transformation Law in Voigt Notation
Consider the component transformation law for a second-order tensor:
\[
\mathcal T' = \mathcal A\, \mathcal T\, \mathcal A^T
\tag{1.435}
\]
or in matrix form:
\[
\begin{bmatrix} T'_{11} & T'_{12} & T'_{13}\\ T'_{12} & T'_{22} & T'_{23}\\ T'_{13} & T'_{23} & T'_{33} \end{bmatrix}
= \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} T_{11} & T_{12} & T_{13}\\ T_{12} & T_{22} & T_{23}\\ T_{13} & T_{23} & T_{33} \end{bmatrix}
\begin{bmatrix} a_{11} & a_{21} & a_{31}\\ a_{12} & a_{22} & a_{32}\\ a_{13} & a_{23} & a_{33} \end{bmatrix}
\tag{1.436}
\]
By multiplying the matrices and by rearranging the result in Voigt notation we obtain:
\[
\{\mathbf T'\} = [\mathcal M] \{\mathbf T\}
\tag{1.437}
\]
where:
\[
\{\mathbf T\} = \begin{bmatrix} T_{11}\\ T_{22}\\ T_{33}\\ T_{12}\\ T_{23}\\ T_{13} \end{bmatrix}
\;;\qquad
\{\mathbf T'\} = \begin{bmatrix} T'_{11}\\ T'_{22}\\ T'_{33}\\ T'_{12}\\ T'_{23}\\ T'_{13} \end{bmatrix}
\tag{1.438}
\]
and \([\mathcal M]\) is the transformation matrix for the second-order tensor components in Voigt Notation. The matrix \([\mathcal M]\) is given by:
\[
[\mathcal M] = \begin{bmatrix}
a_{11}^2 & a_{12}^2 & a_{13}^2 & 2 a_{11} a_{12} & 2 a_{12} a_{13} & 2 a_{11} a_{13}\\
a_{21}^2 & a_{22}^2 & a_{23}^2 & 2 a_{21} a_{22} & 2 a_{22} a_{23} & 2 a_{21} a_{23}\\
a_{31}^2 & a_{32}^2 & a_{33}^2 & 2 a_{31} a_{32} & 2 a_{32} a_{33} & 2 a_{31} a_{33}\\
a_{21} a_{11} & a_{22} a_{12} & a_{23} a_{13} & (a_{11} a_{22} + a_{12} a_{21}) & (a_{13} a_{22} + a_{12} a_{23}) & (a_{13} a_{21} + a_{11} a_{23})\\
a_{31} a_{21} & a_{32} a_{22} & a_{33} a_{23} & (a_{31} a_{22} + a_{32} a_{21}) & (a_{33} a_{22} + a_{32} a_{23}) & (a_{33} a_{21} + a_{31} a_{23})\\
a_{31} a_{11} & a_{32} a_{12} & a_{33} a_{13} & (a_{31} a_{12} + a_{32} a_{11}) & (a_{33} a_{12} + a_{32} a_{13}) & (a_{33} a_{11} + a_{31} a_{13})
\end{bmatrix}
\tag{1.439}
\]
Similarly, for the representation (1.426) we can write:
\[
\{\mathbf E'\} = [\mathcal N] \{\mathbf E\}
\tag{1.440}
\]
where
\[
[\mathcal N] = \begin{bmatrix}
a_{11}^2 & a_{12}^2 & a_{13}^2 & a_{11} a_{12} & a_{12} a_{13} & a_{11} a_{13}\\
a_{21}^2 & a_{22}^2 & a_{23}^2 & a_{21} a_{22} & a_{22} a_{23} & a_{21} a_{23}\\
a_{31}^2 & a_{32}^2 & a_{33}^2 & a_{31} a_{32} & a_{32} a_{33} & a_{31} a_{33}\\
2 a_{21} a_{11} & 2 a_{22} a_{12} & 2 a_{23} a_{13} & (a_{11} a_{22} + a_{12} a_{21}) & (a_{13} a_{22} + a_{12} a_{23}) & (a_{13} a_{21} + a_{11} a_{23})\\
2 a_{31} a_{21} & 2 a_{32} a_{22} & 2 a_{33} a_{23} & (a_{31} a_{22} + a_{32} a_{21}) & (a_{33} a_{22} + a_{32} a_{23}) & (a_{33} a_{21} + a_{31} a_{23})\\
2 a_{31} a_{11} & 2 a_{32} a_{12} & 2 a_{33} a_{13} & (a_{31} a_{12} + a_{32} a_{11}) & (a_{33} a_{12} + a_{32} a_{13}) & (a_{33} a_{11} + a_{31} a_{13})
\end{bmatrix}
\tag{1.441}
\]
The matrices (1.439) and (1.441) are not orthogonal matrices, i.e. \([\mathcal M]^{-1} \ne [\mathcal M]^T\) and \([\mathcal N]^{-1} \ne [\mathcal N]^T\). However, it is possible to show that \([\mathcal M]^{-1} = [\mathcal N]^T\).
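The structure of \([\mathcal M]\) in (1.439) can be generated programmatically and checked against the direct tensor transformation \(\mathcal T' = \mathcal A \mathcal T \mathcal A^T\). A NumPy sketch (helper name and test data are ours; ordering is the Voigt order (11, 22, 33, 12, 23, 13)):

```python
import numpy as np

def voigt_M(a):
    """Build the transformation matrix [M] of eq. (1.439)."""
    pairs = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)]
    M = np.empty((6, 6))
    for I, (i, j) in enumerate(pairs):
        for J, (k, l) in enumerate(pairs):
            # T'_ij = a_ik a_jl T_kl; off-diagonal T_kl contributes twice
            M[I, J] = a[i, k]*a[j, l] + (a[i, l]*a[j, k] if k != l else 0.0)
    return M

th = 0.3                                      # a rotation about x3
A = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])
T = np.array([[1.0, 4.0, 0.5],
              [4.0, 2.0, 0.2],
              [0.5, 0.2, 3.0]])               # symmetric

Tp  = A @ T @ A.T                             # tensor transformation law
tv  = np.array([T[0,0], T[1,1], T[2,2], T[0,1], T[1,2], T[0,2]])
tvp = voigt_M(A) @ tv                         # {T'} = [M]{T}, eq. (1.437)

print(np.allclose(tvp, [Tp[0,0], Tp[1,1], Tp[2,2], Tp[0,1], Tp[1,2], Tp[0,2]]))  # True
```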
1.7.4
Spectral Representation in Voigt Notation
The spectral representation \(\mathbf T = \sum_{a=1}^{3} T_a\, \hat{\mathbf n}^{(a)} \otimes \hat{\mathbf n}^{(a)}\) can be written in matrix form as:
\[
\mathcal T = \mathcal A^T\, \mathcal T'\, \mathcal A
\tag{1.442}
\]
where \(\mathcal A\) is the transformation matrix between the original system and the principal space, made up of the eigenvectors \(\hat{\mathbf n}^{(a)}\). The above equation can be rewritten in terms of components as follows:
\[
\begin{bmatrix} T_{11} & T_{12} & T_{13}\\ T_{12} & T_{22} & T_{23}\\ T_{13} & T_{23} & T_{33} \end{bmatrix}
= \mathcal A^T \begin{bmatrix} T_1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix} \mathcal A
+ \mathcal A^T \begin{bmatrix} 0 & 0 & 0\\ 0 & T_2 & 0\\ 0 & 0 & 0 \end{bmatrix} \mathcal A
+ \mathcal A^T \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & T_3 \end{bmatrix} \mathcal A
\tag{1.443}
\]
or, carrying out the products:
\[
\begin{bmatrix} T_{11} & T_{12} & T_{13}\\ T_{12} & T_{22} & T_{23}\\ T_{13} & T_{23} & T_{33} \end{bmatrix}
= T_1 \begin{bmatrix} a_{11}^2 & a_{11} a_{12} & a_{11} a_{13}\\ a_{11} a_{12} & a_{12}^2 & a_{12} a_{13}\\ a_{11} a_{13} & a_{12} a_{13} & a_{13}^2 \end{bmatrix}
+ T_2 \begin{bmatrix} a_{21}^2 & a_{21} a_{22} & a_{21} a_{23}\\ a_{21} a_{22} & a_{22}^2 & a_{22} a_{23}\\ a_{21} a_{23} & a_{22} a_{23} & a_{23}^2 \end{bmatrix}
+ T_3 \begin{bmatrix} a_{31}^2 & a_{31} a_{32} & a_{31} a_{33}\\ a_{31} a_{32} & a_{32}^2 & a_{32} a_{33}\\ a_{31} a_{33} & a_{32} a_{33} & a_{33}^2 \end{bmatrix}
\tag{1.444}
\]
By regarding how second-order tensors are represented in Voigt Notation as in (1.438), the spectral representation of a second-order tensor in Voigt notation becomes:
\[
\{\mathbf T\} = \begin{bmatrix} T_{11}\\ T_{22}\\ T_{33}\\ T_{12}\\ T_{23}\\ T_{13} \end{bmatrix}
= T_1 \begin{bmatrix} a_{11}^2\\ a_{12}^2\\ a_{13}^2\\ a_{11} a_{12}\\ a_{12} a_{13}\\ a_{11} a_{13} \end{bmatrix}
+ T_2 \begin{bmatrix} a_{21}^2\\ a_{22}^2\\ a_{23}^2\\ a_{21} a_{22}\\ a_{22} a_{23}\\ a_{21} a_{23} \end{bmatrix}
+ T_3 \begin{bmatrix} a_{31}^2\\ a_{32}^2\\ a_{33}^2\\ a_{31} a_{32}\\ a_{32} a_{33}\\ a_{31} a_{33} \end{bmatrix}
\tag{1.445}
\]
1.7.5
Deviatoric Tensor Components in Voigt Notation
The deviatoric part of \(\mathbf T\), (see equation (1.365)), is:
\[
T^{\text{dev}}_{ij} = \begin{bmatrix}
\frac{1}{3}(2T_{11} - T_{22} - T_{33}) & T_{12} & T_{13}\\
T_{12} & \frac{1}{3}(2T_{22} - T_{11} - T_{33}) & T_{23}\\
T_{13} & T_{23} & \frac{1}{3}(2T_{33} - T_{11} - T_{22})
\end{bmatrix}
\tag{1.446}
\]
which in Voigt notation becomes:
\[
\begin{bmatrix} T^{\text{dev}}_{11}\\ T^{\text{dev}}_{22}\\ T^{\text{dev}}_{33}\\ T^{\text{dev}}_{12}\\ T^{\text{dev}}_{23}\\ T^{\text{dev}}_{13} \end{bmatrix}
= \frac{1}{3} \begin{bmatrix}
2 & -1 & -1 & 0 & 0 & 0\\
-1 & 2 & -1 & 0 & 0 & 0\\
-1 & -1 & 2 & 0 & 0 & 0\\
0 & 0 & 0 & 3 & 0 & 0\\
0 & 0 & 0 & 0 & 3 & 0\\
0 & 0 & 0 & 0 & 0 & 3
\end{bmatrix}
\begin{bmatrix} T_{11}\\ T_{22}\\ T_{33}\\ T_{12}\\ T_{23}\\ T_{13} \end{bmatrix}
\tag{1.447}
\]
Problem 1.40: Let \(\mathbf T(\vec x, t)\) be a symmetric second-order tensor field, expressed in terms of the position \(\vec x\) and time \(t\), whose components in the plane \(x_1\)–\(x_2\) are \(T_{11}\), \(T_{22}\), \(T_{12}\) (two-dimensional case). a) Obtain the components \(T'_{11}\), \(T'_{22}\), \(T'_{12}\) after a rotation by an angle \(\theta\) about the \(x_3\)-axis; b) obtain the principal directions and principal values; c) evaluate the components for \(T_{11} = 1\), \(T_{22} = 2\), \(T_{12} = -4\) and \(\theta = 45^\circ\).
Figure: the two-dimensional components \(T_{11}\), \(T_{22}\), \(T_{12}\) in the original system and in the rotated system \(x'_1\)–\(x'_2\).
Solution:
a) For the two-dimensional case, the transformation law (1.437) reduces to:
\[
\begin{bmatrix} T'_{11}\\ T'_{22}\\ T'_{12} \end{bmatrix}
= \begin{bmatrix}
a_{11}^2 & a_{12}^2 & 2 a_{11} a_{12}\\
a_{21}^2 & a_{22}^2 & 2 a_{21} a_{22}\\
a_{21} a_{11} & a_{22} a_{12} & a_{11} a_{22} + a_{12} a_{21}
\end{bmatrix}
\begin{bmatrix} T_{11}\\ T_{22}\\ T_{12} \end{bmatrix}
\tag{1.448}
\]
which is equivalent to \(\mathcal T' = \mathcal A\, \mathcal T\, \mathcal A^T\) with the rotation matrix:
\[
\mathcal A = \begin{bmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{bmatrix}
\tag{1.449}
\]
Explicitly:
\[
\begin{bmatrix} T'_{11}\\ T'_{22}\\ T'_{12} \end{bmatrix}
= \begin{bmatrix}
\cos^2\theta & \sin^2\theta & 2\sin\theta\cos\theta\\
\sin^2\theta & \cos^2\theta & -2\sin\theta\cos\theta\\
-\sin\theta\cos\theta & \sin\theta\cos\theta & \cos^2\theta - \sin^2\theta
\end{bmatrix}
\begin{bmatrix} T_{11}\\ T_{22}\\ T_{12} \end{bmatrix}
\tag{1.450}
\]
Making use of the identities \(\cos^2\theta = \frac{1+\cos 2\theta}{2}\), \(\sin^2\theta = \frac{1-\cos 2\theta}{2}\) and \(2\sin\theta\cos\theta = \sin 2\theta\), the above becomes:
\[
\begin{aligned}
T'_{11} &= \frac{T_{11} + T_{22}}{2} + \frac{T_{11} - T_{22}}{2} \cos 2\theta + T_{12} \sin 2\theta\\
T'_{22} &= \frac{T_{11} + T_{22}}{2} - \frac{T_{11} - T_{22}}{2} \cos 2\theta - T_{12} \sin 2\theta\\
T'_{12} &= - \frac{T_{11} - T_{22}}{2} \sin 2\theta + T_{12} \cos 2\theta
\end{aligned}
\tag{1.451}
\]
b) Recalling that the principal directions are characterized by the lack of any tangential components, i.e. \(T'_{ij} = 0\) if \(i \ne j\), in order to find the principal directions in the plane, we let \(T'_{12} = 0\), hence:
\[
T'_{12} = - \frac{T_{11} - T_{22}}{2} \sin 2\theta + T_{12} \cos 2\theta = 0
\;\Rightarrow\;
\tan(2\theta) = \frac{2 T_{12}}{T_{11} - T_{22}}
\;\Rightarrow\;
\theta = \frac{1}{2} \arctan\Bigl( \frac{2 T_{12}}{T_{11} - T_{22}} \Bigr)
\tag{1.452}
\]
To find the principal values (eigenvalues) we must solve the following characteristic equation:
\[
\begin{vmatrix} T_{11} - T & T_{12}\\ T_{12} & T_{22} - T \end{vmatrix} = 0
\;\Rightarrow\;
T^2 - (T_{11} + T_{22})\, T + \bigl(T_{11} T_{22} - T_{12}^2\bigr) = 0
\]
By rearranging the above equation we obtain the principal values for the two-dimensional case as:
\[
T_{(1,2)} = \frac{T_{11} + T_{22}}{2} \pm \sqrt{ \Bigl( \frac{T_{11} - T_{22}}{2} \Bigr)^2 + T_{12}^2 }
\tag{1.453}
\]
c) We directly apply equation (1.451) to evaluate the components \(T'_{ij}\), \((i, j = 1, 2)\), where \(T_{11} = 1\), \(T_{22} = 2\), \(T_{12} = -4\) and \(\theta = 45^\circ\), i.e.:
\[
\begin{aligned}
T'_{11} &= \frac{1 + 2}{2} + \frac{1 - 2}{2} \cos 90^\circ - 4 \sin 90^\circ = -2.5\\
T'_{22} &= \frac{1 + 2}{2} - \frac{1 - 2}{2} \cos 90^\circ + 4 \sin 90^\circ = 5.5\\
T'_{12} &= - \frac{1 - 2}{2} \sin 90^\circ - 4 \cos 90^\circ = 0.5
\end{aligned}
\]
The principal directions follow from (1.452):
\[
\theta = \frac{1}{2} \arctan\Bigl( \frac{2\,(-4)}{1 - 2} \Bigr) = 41.437^\circ
\]
and the principal values from (1.453):
\[
T_{(1,2)} = \frac{T_{11} + T_{22}}{2} \pm \sqrt{ \Bigl( \frac{T_{11} - T_{22}}{2} \Bigr)^2 + T_{12}^2 }
\quad\Rightarrow\quad
T_1 = 5.5311\,, \qquad T_2 = -2.5311
\]
Figure: variation of the components with the angle \(\theta\); the curve of \(T'_{11}\) attains \(T_2 = -2.5311\) at \(\theta = 41.437^\circ\) and \(T_1 = 5.5311\) at \(\theta = 131.437^\circ\), while the maximum tangential component \(T_{S\,\max} = 4.0311\) occurs at \(\theta = 86.437^\circ\).
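The numbers of part (c) can be reproduced with a few lines of Python (variable names are ours):

```python
import numpy as np

T11, T22, T12 = 1.0, 2.0, -4.0
th = np.radians(45.0)

# Transformed components, eq. (1.451)
T11p = 0.5*(T11 + T22) + 0.5*(T11 - T22)*np.cos(2*th) + T12*np.sin(2*th)
T22p = 0.5*(T11 + T22) - 0.5*(T11 - T22)*np.cos(2*th) - T12*np.sin(2*th)
T12p = -0.5*(T11 - T22)*np.sin(2*th) + T12*np.cos(2*th)

# Principal values, eq. (1.453)
rad = np.sqrt((0.5*(T11 - T22))**2 + T12**2)
T1, T2 = 0.5*(T11 + T22) + rad, 0.5*(T11 + T22) - rad

print(round(T11p, 4), round(T22p, 4), round(T12p, 4))  # -2.5 5.5 0.5
print(round(T1, 4), round(T2, 4))                      # 5.5311 -2.5311
```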
1.8
Tensor Fields
A tensor field indicates how the tensor \(\mathbf T(\vec x, t)\) varies in space (\(\vec x\)) and time (\(t\)). In this section, we regard the tensor field as a differentiable function of position and time. To characterize these fields, we need to define some operators, e.g. gradient, divergence, curl, which we can use as indicators of how these fields vary in space.
A tensor field which is independent of time is called a stationary or steady-state tensor field, i.e. \(\mathbf T = \mathbf T(\vec x)\). However, if the field is only dependent on \(t\) then it is said to be homogeneous or uniform; that is, \(\mathbf T(t)\) has the same value at every position \(\vec x\).
Tensor fields can be classified according to their order as scalar, vector, second-order tensor fields, etc. As an example of a scalar field we can quote the temperature \(T(\vec x, t)\), and in Figure 1.34(a) we can see the temperature distribution at time \(t = t_1\). Then, as an example of a vector field we can quote the velocity \(\vec v(\vec x, t)\), and Figure 1.34(b) shows the velocity distribution, in which each point is associated with a vector \(\vec v\) at time \(t = t_1\).
Figure 1.34: a) Scalar field; b) Vector field.
\[
\text{Scalar Field:} \quad \phi = \phi(\vec x, t)
\tag{1.454}
\]
\[
\text{Vector Field:} \quad \vec v = \vec v(\vec x, t) \ \ \text{(tensorial notation)}
\;;\qquad
v_i = v_i(\vec x, t) \ \ \text{(indicial notation)}
\tag{1.455}
\]
\[
\text{Second-Order Tensor Field:} \quad \mathbf T = \mathbf T(\vec x, t) \ \ \text{(tensorial notation)}
\;;\qquad
T_{ij} = T_{ij}(\vec x, t) \ \ \text{(indicial notation)}
\tag{1.456}
\]
1.8.1
Scalar Fields
The next analysis is carried out with reference to a stationary scalar field, i.e. \(\phi = \phi(\vec x)\), with continuous derivatives \(\partial\phi/\partial x_1\), \(\partial\phi/\partial x_2\) and \(\partial\phi/\partial x_3\). Then, observe that the value of the scalar function at the point \(\vec x\) is \(\phi(\vec x)\), and if we observe a second point located at \(\vec x + d\vec x\), the total derivative (differential) of the function is defined as:
\[
d\phi \doteq \phi(\vec x + d\vec x) - \phi(\vec x)
= \phi(x_1 + dx_1,\ x_2 + dx_2,\ x_3 + dx_3) - \phi(x_1, x_2, x_3)
\tag{1.457}
\]
\[
d\phi = \frac{\partial \phi}{\partial x_1} dx_1 + \frac{\partial \phi}{\partial x_2} dx_2 + \frac{\partial \phi}{\partial x_3} dx_3
\tag{1.458}
\]
\[
d\phi = \phi_{,i}\, dx_i
\;;\qquad
\phi_{,i} \equiv \frac{\partial \phi}{\partial x_i}
\tag{1.459}
\]
1.8.2
Gradient
The differential \(d\phi\) can be written as the scalar product of a vector, the gradient of \(\phi\), with \(d\vec x\):
\[
d\phi = \nabla_{\vec x} \phi \cdot d\vec x
\tag{1.460}
\]
where the operator \(\nabla_{\vec x}\) is known as the Nabla symbol. Expressing the equation (1.460) in the Cartesian basis we obtain:
\[
\frac{\partial \phi}{\partial x_1} dx_1 + \frac{\partial \phi}{\partial x_2} dx_2 + \frac{\partial \phi}{\partial x_3} dx_3
= \bigl[ (\nabla_{\vec x} \phi)_1 \hat{\mathbf e}_1 + (\nabla_{\vec x} \phi)_2 \hat{\mathbf e}_2 + (\nabla_{\vec x} \phi)_3 \hat{\mathbf e}_3 \bigr] \cdot \bigl[ dx_1 \hat{\mathbf e}_1 + dx_2 \hat{\mathbf e}_2 + dx_3 \hat{\mathbf e}_3 \bigr]
\tag{1.461}
\]
\[
\frac{\partial \phi}{\partial x_1} dx_1 + \frac{\partial \phi}{\partial x_2} dx_2 + \frac{\partial \phi}{\partial x_3} dx_3
= (\nabla_{\vec x} \phi)_1\, dx_1 + (\nabla_{\vec x} \phi)_2\, dx_2 + (\nabla_{\vec x} \phi)_3\, dx_3
\tag{1.462}
\]
Therefore, we can draw the conclusion that the \(\nabla_{\vec x} \phi\) components in the Cartesian basis are:
\[
(\nabla_{\vec x} \phi)_1 = \frac{\partial \phi}{\partial x_1}
\;;\quad
(\nabla_{\vec x} \phi)_2 = \frac{\partial \phi}{\partial x_2}
\;;\quad
(\nabla_{\vec x} \phi)_3 = \frac{\partial \phi}{\partial x_3}
\tag{1.463}
\]
\[
\nabla_{\vec x} \phi = \frac{\partial \phi}{\partial x_1} \hat{\mathbf e}_1 + \frac{\partial \phi}{\partial x_2} \hat{\mathbf e}_2 + \frac{\partial \phi}{\partial x_3} \hat{\mathbf e}_3
\tag{1.464}
\]
\[
\nabla_{\vec x} \equiv \frac{\partial}{\partial x_i}\, \hat{\mathbf e}_i \equiv \partial_{,i}\, \hat{\mathbf e}_i
\qquad \text{Nabla symbol}
\tag{1.465}
\]
The direction of \(\nabla_{\vec x} \phi\) is normal to the level surfaces of \(\phi\), and its magnitude measures the rate of change of \(\phi\) in that direction (1.466). The surface \(\phi = \text{const}\), called the level surface, or isosurface or equiscalar surface, is the surface formed by points which all have the same value of \(\phi\); so, if we move along the level surface, the values of the function do not change.
Figure: level surfaces \(\phi = c_1\), \(\phi = c_2\), \(\phi = c_3\), with \(c_1 > c_2 > c_3\); the gradient \(\nabla_{\vec x} \phi\) is normal \((\hat{\mathbf n})\) to the level surface.
The differential of a vector field is defined analogously, \(d\vec v = \nabla_{\vec x} \vec v \cdot d\vec x\) (1.467). Using the definition of \(\nabla_{\vec x}\), given in (1.465), the gradient of the vector field becomes:
\[
\nabla_{\vec x} \vec v = \frac{\partial (v_i \hat{\mathbf e}_i)}{\partial x_j} \otimes \hat{\mathbf e}_j
= (v_i \hat{\mathbf e}_i)_{,j} \otimes \hat{\mathbf e}_j
= v_{i,j}\, \hat{\mathbf e}_i \otimes \hat{\mathbf e}_j
\tag{1.468}
\]
In general, we can define the gradient of a tensor field \((\bullet)(\vec x, t)\) in the Cartesian basis as:
\[
\nabla_{\vec x} (\bullet) = \frac{\partial (\bullet)}{\partial x_k} \otimes \hat{\mathbf e}_k
\tag{1.469}
\]
As noted, the gradient of a vector field becomes a second-order tensor field, whose
components are:
$$v_{i,j} = \frac{\partial v_i}{\partial x_j} = \begin{bmatrix} \dfrac{\partial v_1}{\partial x_1} & \dfrac{\partial v_1}{\partial x_2} & \dfrac{\partial v_1}{\partial x_3} \\[2mm] \dfrac{\partial v_2}{\partial x_1} & \dfrac{\partial v_2}{\partial x_2} & \dfrac{\partial v_2}{\partial x_3} \\[2mm] \dfrac{\partial v_3}{\partial x_1} & \dfrac{\partial v_3}{\partial x_2} & \dfrac{\partial v_3}{\partial x_3} \end{bmatrix} \qquad (1.470)$$
Likewise, the gradient of a second-order tensor field is a third-order tensor field:

$$\nabla_{\vec{x}}\mathbf{T} = \frac{\partial(T_{ij}\,\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j)}{\partial x_k}\otimes\hat{\mathbf{e}}_k = T_{ij,k}\,\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j\otimes\hat{\mathbf{e}}_k \qquad (1.471)$$

$$(\nabla_{\vec{x}}\mathbf{T})_{ijk} = T_{ij,k} \qquad (1.472)$$
Problem 1.41: Find the gradient of the function $f(x_1,x_2) = \cos(x_1) + e^{x_1x_2}$ at the point $(x_1=0,\ x_2=1)$.
Solution: By definition, the gradient of a scalar function is given by:

$$\nabla_{\vec{x}}f = \frac{\partial f}{\partial x_1}\hat{\mathbf{e}}_1 + \frac{\partial f}{\partial x_2}\hat{\mathbf{e}}_2$$

where:

$$\frac{\partial f}{\partial x_1} = -\sin(x_1) + x_2\,e^{x_1x_2} \qquad ; \qquad \frac{\partial f}{\partial x_2} = x_1\,e^{x_1x_2}$$

Then, at the point $(x_1=0,\ x_2=1)$:

$$\nabla_{\vec{x}}f\big|_{(0,1)} = \left(-\sin(0) + 1\cdot e^{0}\right)\hat{\mathbf{e}}_1 + \left(0\cdot e^{0}\right)\hat{\mathbf{e}}_2 = \hat{\mathbf{e}}_1$$
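The result can be cross-checked numerically with central finite differences; the sketch below is only an illustration (the standard library `math` module suffices), and the partial-derivative functions are the analytic expressions derived above:

```python
import math

def f(x1, x2):
    return math.cos(x1) + math.exp(x1 * x2)

# Analytic partial derivatives from the solution above.
def df_dx1(x1, x2):
    return -math.sin(x1) + x2 * math.exp(x1 * x2)

def df_dx2(x1, x2):
    return x1 * math.exp(x1 * x2)

x1, x2, h = 0.0, 1.0, 1e-6
fd1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)   # numerical df/dx1
fd2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)   # numerical df/dx2
grad = (df_dx1(x1, x2), df_dx2(x1, x2))           # -> (1.0, 0.0)
```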
Problem 1.42: Let $\vec{u}(\vec{x})$ be a stationary vector field. a) Obtain the components of the differential $d\vec{u}$. b) Now, consider that $\vec{u}(\vec{x})$ represents a displacement field, and is independent of $x_3$. With these conditions, graphically illustrate the displacement field in the differential area element $dx_1\,dx_2$.
Solution: According to the differential and gradient definitions, it holds that:

$$d\vec{u} \equiv \vec{u}(\vec{x}+d\vec{x}) - \vec{u}(\vec{x}) \qquad \Rightarrow \qquad d\vec{u} = \nabla_{\vec{x}}\vec{u}\cdot d\vec{x}$$

[Figure: the position vectors $\vec{x}$ and $\vec{x}+d\vec{x}$, and the field values $\vec{u}(\vec{x})$ and $\vec{u}(\vec{x}+d\vec{x})$, referred to the axes $x_1,x_2,x_3$.]

In components:

$$du_i = \frac{\partial u_i}{\partial x_j}dx_j \qquad \Rightarrow \qquad \begin{bmatrix} du_1 \\ du_2 \\ du_3 \end{bmatrix} = \begin{bmatrix} \dfrac{\partial u_1}{\partial x_1} & \dfrac{\partial u_1}{\partial x_2} & \dfrac{\partial u_1}{\partial x_3} \\[2mm] \dfrac{\partial u_2}{\partial x_1} & \dfrac{\partial u_2}{\partial x_2} & \dfrac{\partial u_2}{\partial x_3} \\[2mm] \dfrac{\partial u_3}{\partial x_1} & \dfrac{\partial u_3}{\partial x_2} & \dfrac{\partial u_3}{\partial x_3} \end{bmatrix}\begin{bmatrix} dx_1 \\ dx_2 \\ dx_3 \end{bmatrix}$$
or:

$$du_1 = \frac{\partial u_1}{\partial x_1}dx_1 + \frac{\partial u_1}{\partial x_2}dx_2 + \frac{\partial u_1}{\partial x_3}dx_3$$
$$du_2 = \frac{\partial u_2}{\partial x_1}dx_1 + \frac{\partial u_2}{\partial x_2}dx_2 + \frac{\partial u_2}{\partial x_3}dx_3$$
$$du_3 = \frac{\partial u_3}{\partial x_1}dx_1 + \frac{\partial u_3}{\partial x_2}dx_2 + \frac{\partial u_3}{\partial x_3}dx_3$$

with

$$du_1 = u_1(x_1+dx_1,\,x_2+dx_2,\,x_3+dx_3) - u_1(x_1,x_2,x_3)$$
$$du_2 = u_2(x_1+dx_1,\,x_2+dx_2,\,x_3+dx_3) - u_2(x_1,x_2,x_3)$$
$$du_3 = u_3(x_1+dx_1,\,x_2+dx_2,\,x_3+dx_3) - u_3(x_1,x_2,x_3)$$
As the field is independent of $x_3$, the displacement field in the differential area element is defined as:

$$du_1 = u_1(x_1+dx_1,\,x_2+dx_2) - u_1(x_1,x_2) = \frac{\partial u_1}{\partial x_1}dx_1 + \frac{\partial u_1}{\partial x_2}dx_2$$
$$du_2 = u_2(x_1+dx_1,\,x_2+dx_2) - u_2(x_1,x_2) = \frac{\partial u_2}{\partial x_1}dx_1 + \frac{\partial u_2}{\partial x_2}dx_2$$

or:

$$u_1(x_1+dx_1,\,x_2+dx_2) = u_1(x_1,x_2) + \frac{\partial u_1}{\partial x_1}dx_1 + \frac{\partial u_1}{\partial x_2}dx_2$$
$$u_2(x_1+dx_1,\,x_2+dx_2) = u_2(x_1,x_2) + \frac{\partial u_2}{\partial x_1}dx_1 + \frac{\partial u_2}{\partial x_2}dx_2$$

Note that the above equation is equivalent to the Taylor series expansion taking into account only up to the linear terms. The representation of the displacement field in the differential area element is shown in Figure 1.36.
[Figure 1.36: Displacement field in the differential area element $dx_1\,dx_2$. Each corner of the element — $(x_1,x_2)$, $(x_1+dx_1,x_2)$, $(x_1,x_2+dx_2)$, $(x_1+dx_1,x_2+dx_2)$ — displaces by $u_1$, $u_2$ plus the corresponding increments $\frac{\partial u_i}{\partial x_1}dx_1$ and $\frac{\partial u_i}{\partial x_2}dx_2$, referred to the axes $(x_1,u_1)$ and $(x_2,u_2)$.]
1.8.3 Divergence
Consider a vector field $\vec{v} = \vec{v}(\vec{x})$ (1.473). The divergence of $\vec{v}$ is defined as:

$$div(\vec{v}) \equiv \nabla_{\vec{x}}\cdot\vec{v} = \nabla_{\vec{x}}\vec{v}:\mathbf{1} = Tr(\nabla_{\vec{x}}\vec{v}) \qquad (1.474)$$

$$\nabla_{\vec{x}}\cdot\vec{v} = \nabla_{\vec{x}}\vec{v}:\mathbf{1} = v_{i,j}\,(\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j):\delta_{kl}\,(\hat{\mathbf{e}}_k\otimes\hat{\mathbf{e}}_l) = v_{i,j}\,\delta_{kl}\,\delta_{ik}\,\delta_{jl} = v_{k,k} = \frac{\partial v_1}{\partial x_1} + \frac{\partial v_2}{\partial x_2} + \frac{\partial v_3}{\partial x_3} \qquad (1.475)$$

or

$$\nabla_{\vec{x}}\cdot\vec{v} = \frac{\partial(v_i\hat{\mathbf{e}}_i)}{\partial x_k}\cdot\hat{\mathbf{e}}_k = \frac{\partial v_i}{\partial x_k}\,\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}_k = v_{i,k}\,\delta_{ik} = v_{k,k} \qquad (1.476)$$

which allows us to introduce the following operator in the Cartesian basis:

$$\nabla_{\vec{x}}\cdot(\bullet) = \frac{\partial(\bullet)}{\partial x_k}\cdot\hat{\mathbf{e}}_k \qquad \text{Divergence of }(\bullet)\text{ in the Cartesian basis} \qquad (1.477)$$
We can also verify that, when the divergence is applied to a tensor field, its rank decreases by one order. For a second-order tensor field:

$$\nabla_{\vec{x}}\cdot\mathbf{T} = \frac{\partial(T_{ij}\,\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j)}{\partial x_k}\cdot\hat{\mathbf{e}}_k = \frac{\partial T_{ij}}{\partial x_k}\,\delta_{jk}\,\hat{\mathbf{e}}_i = T_{ik,k}\,\hat{\mathbf{e}}_i \qquad (1.478)$$
NOTE: In this textbook, when dealing with the gradient or divergence of a tensor field, e.g. $\nabla_{\vec{x}}\vec{v}$ (the gradient of a vector field), $\nabla_{\vec{x}}\mathbf{T}$ (the gradient of a second-order tensor field), $\nabla_{\vec{x}}\cdot\mathbf{T}$ (the divergence of a second-order tensor field), this does not indicate that we are making a tensor operation between a vector and a tensor, i.e. $\nabla_{\vec{x}}\vec{v} \neq (\nabla_{\vec{x}})\otimes(\vec{v})$, $\nabla_{\vec{x}}\mathbf{T} \neq (\nabla_{\vec{x}})\otimes(\mathbf{T})$ and $\nabla_{\vec{x}}\cdot\mathbf{T} \neq (\nabla_{\vec{x}})\cdot(\mathbf{T})$, and so on. In this textbook, $\nabla_{\vec{x}}$ is an operator which must be applied to the entire tensor field, so, the tensor must be inside the operator, (see equations (1.477) and (1.469)). Nevertheless, it is possible to relate $\nabla_{\vec{x}}\vec{v}$, $\nabla_{\vec{x}}\mathbf{T}$ or $\nabla_{\vec{x}}\cdot\mathbf{T}$ to tensor operations between tensors, and it is easy to show that:

$$\nabla_{\vec{x}}\vec{v} = (\vec{v})\otimes(\nabla_{\vec{x}}) \qquad ; \qquad \nabla_{\vec{x}}\mathbf{T} = (\mathbf{T})\otimes(\nabla_{\vec{x}}) \qquad ; \qquad \nabla_{\vec{x}}\cdot\mathbf{T} = (\mathbf{T})\cdot(\nabla_{\vec{x}}) = (\nabla_{\vec{x}})\cdot(\mathbf{T}^T) \qquad (1.479)$$
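A minimal numerical sketch of (1.474)–(1.475), with an arbitrary illustrative vector field and NumPy assumed, confirming that the divergence equals the trace of the gradient:

```python
import numpy as np

def v(x):
    # Arbitrary smooth vector field (illustration only).
    return np.array([x[0] * x[1], x[1]**2 + x[2], np.sin(x[0]) * x[2]])

def grad_v(x, h=1e-6):
    """(grad v)_{ij} = dv_i/dx_j by central differences."""
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (v(x + e) - v(x - e)) / (2 * h)
    return G

x = np.array([0.3, -1.2, 0.7])
div_v = np.trace(grad_v(x))            # div v = Tr(grad v) = v_{k,k}
div_exact = 3 * x[1] + np.sin(x[0])    # x2 + 2*x2 + sin(x1), computed by hand
```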
Once the Nabla symbol is defined we introduce the Laplacian operator $\nabla_{\vec{x}}^2$ as:

$$\nabla_{\vec{x}}^2 \equiv \nabla_{\vec{x}}\cdot\nabla_{\vec{x}} = \frac{\partial}{\partial x_i}\hat{\mathbf{e}}_i\cdot\frac{\partial}{\partial x_j}\hat{\mathbf{e}}_j = \frac{\partial^2}{\partial x_i\partial x_j}\delta_{ij} = \frac{\partial^2}{\partial x_i\partial x_i}$$

Applied to a scalar field $\phi$ this gives:

$$\nabla_{\vec{x}}^2\phi = \frac{\partial^2\phi}{\partial x_1^2} + \frac{\partial^2\phi}{\partial x_2^2} + \frac{\partial^2\phi}{\partial x_3^2} = \phi_{,k}{}_{,k} = \phi_{,kk} \qquad (1.480)$$

Then, the vector Laplacian of a vector field $\vec{v}(\vec{x})$ is given by:

$$\nabla_{\vec{x}}^2\vec{v} = \nabla_{\vec{x}}\cdot(\nabla_{\vec{x}}\vec{v}) \qquad \text{with components} \qquad [\nabla_{\vec{x}}^2\vec{v}]_i = v_{i,kk} \qquad (1.481)$$
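The Laplacian definitions above can be checked with a quick finite-difference sketch. The fields below are arbitrary illustrative choices (not from the text), and NumPy is assumed; for $\phi = x_kx_k$ the Laplacian is $6$, and the vector Laplacian is applied component by component:

```python
import numpy as np

def laplacian_fd(f, x, h=1e-4):
    """f_{,kk} by second-order central differences (f scalar-valued)."""
    lap = 0.0
    for k in range(3):
        e = np.zeros(3); e[k] = h
        lap += (f(x + e) - 2 * f(x) + f(x - e)) / h**2
    return lap

x = np.array([0.4, -0.3, 1.1])
phi = lambda y: y @ y                      # phi = x_k x_k  =>  lap phi = 6
lap_phi = laplacian_fd(phi, x)

# Vector Laplacian, component by component: [lap v]_i = v_{i,kk}
v = lambda y: np.array([y[0]**2, y[1] * y[2], y[2]**2])
lap_v = np.array([laplacian_fd(lambda y, i=i: v(y)[i], x) for i in range(3)])
# analytically: [2, 0, 2]
```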
Observing that $\vec{a} = a_j\hat{\mathbf{e}}_j$, $\vec{b} = b_k\hat{\mathbf{e}}_k$ and $\nabla_{\vec{x}} = \hat{\mathbf{e}}_i\frac{\partial}{\partial x_i}$, the gradient of a sum of vector fields satisfies:

$$\nabla_{\vec{x}}(\vec{a}+\vec{b}) = \frac{\partial(a_j\hat{\mathbf{e}}_j + b_k\hat{\mathbf{e}}_k)}{\partial x_i}\otimes\hat{\mathbf{e}}_i = \frac{\partial a_j}{\partial x_i}\hat{\mathbf{e}}_j\otimes\hat{\mathbf{e}}_i + \frac{\partial b_k}{\partial x_i}\hat{\mathbf{e}}_k\otimes\hat{\mathbf{e}}_i = \nabla_{\vec{x}}\vec{a} + \nabla_{\vec{x}}\vec{b}$$

and likewise for the divergence:

$$\nabla_{\vec{x}}\cdot(\vec{a}+\vec{b}) = (a_i+b_i)_{,i} = a_{i,i} + b_{i,i} = \nabla_{\vec{x}}\cdot\vec{a} + \nabla_{\vec{x}}\cdot\vec{b}$$

It is also true that:

$$(\nabla_{\vec{x}}\vec{a})\cdot\vec{b} = \left[\frac{\partial(a_j\hat{\mathbf{e}}_j)}{\partial x_i}\otimes\hat{\mathbf{e}}_i\right]\cdot(b_k\hat{\mathbf{e}}_k) = b_k\,\delta_{ik}\,\frac{\partial a_j}{\partial x_i}\,\hat{\mathbf{e}}_j = b_k\frac{\partial a_j}{\partial x_k}\,\hat{\mathbf{e}}_j$$

Thus:

$$j=1: \quad b_1\frac{\partial a_1}{\partial x_1} + b_2\frac{\partial a_1}{\partial x_2} + b_3\frac{\partial a_1}{\partial x_3}$$
$$j=2: \quad b_1\frac{\partial a_2}{\partial x_1} + b_2\frac{\partial a_2}{\partial x_2} + b_3\frac{\partial a_2}{\partial x_3}$$
$$j=3: \quad b_1\frac{\partial a_3}{\partial x_1} + b_2\frac{\partial a_3}{\partial x_2} + b_3\frac{\partial a_3}{\partial x_3}$$
Problem: Show that:

$$\nabla_{\vec{x}}\cdot\left(\frac{\vec{q}}{T}\right) = \frac{1}{T}\nabla_{\vec{x}}\cdot\vec{q} - \frac{1}{T^2}\vec{q}\cdot\nabla_{\vec{x}}T$$

where $\vec{q}(\vec{x},t)$ is an arbitrary vector field, and $T(\vec{x},t)$ is a scalar field.
Solution:
$$\nabla_{\vec{x}}\cdot\left(\frac{\vec{q}}{T}\right) = \left(\frac{q_i}{T}\right)_{,i} = \frac{q_{i,i}}{T} - \frac{1}{T^2}q_i\,T_{,i} = \frac{1}{T}\nabla_{\vec{x}}\cdot\vec{q} - \frac{1}{T^2}\vec{q}\cdot\nabla_{\vec{x}}T \qquad \text{(scalar)}$$
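The identity can be probed with the finite-difference sketch below, using arbitrary illustrative fields (NumPy assumed); the scalar field is chosen strictly positive so that $\vec{q}/T$ is well defined:

```python
import numpy as np

def q(x):   # arbitrary smooth vector field (illustration only)
    return np.array([x[0] * x[2], x[1]**2, np.cos(x[0])])

def T(x):   # arbitrary, strictly positive scalar field
    return 2.0 + x[0]**2 + x[1] * x[2]

def grad_fd(f, x, h=1e-6):
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def div_fd(f, x, h=1e-6):
    d = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        d += (f(x + e)[i] - f(x - e)[i]) / (2 * h)
    return d

x = np.array([0.5, 1.0, -0.2])
lhs = div_fd(lambda y: q(y) / T(y), x)                      # div(q/T)
rhs = div_fd(q, x) / T(x) - q(x) @ grad_fd(T, x) / T(x)**2  # right-hand side
```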
1.8.4 The Curl

The curl (or rotor) of a vector field $\vec{v}(\vec{x})$ is denoted by $curl(\vec{v}) \equiv rot(\vec{v}) \equiv \nabla_{\vec{x}}\wedge\vec{v}$, and is defined in the Cartesian basis by means of the operator:

$$\nabla_{\vec{x}}\wedge(\bullet) = \hat{\mathbf{e}}_j\wedge\frac{\partial(\bullet)}{\partial x_j} \qquad \text{The curl (rotor) of a tensor field in the Cartesian basis} \qquad (1.482)$$
Note that the curl is already a tensor operation between two vectors. Using the definition of the vector product we obtain the curl of a vector field as:

$$rot(\vec{v}) = \nabla_{\vec{x}}\wedge\vec{v} = \hat{\mathbf{e}}_j\wedge\frac{\partial(v_k\hat{\mathbf{e}}_k)}{\partial x_j} = \frac{\partial v_k}{\partial x_j}\,\hat{\mathbf{e}}_j\wedge\hat{\mathbf{e}}_k = \frac{\partial v_k}{\partial x_j}\,\epsilon_{ijk}\,\hat{\mathbf{e}}_i = \epsilon_{ijk}\,v_{k,j}\,\hat{\mathbf{e}}_i \qquad (1.483)$$

where $\epsilon_{ijk}$ is the permutation symbol defined in (1.55). Moreover, we have applied the definition $\hat{\mathbf{e}}_j\wedge\hat{\mathbf{e}}_k = \epsilon_{ijk}\,\hat{\mathbf{e}}_i$, and we can also note that:
$$rot(\vec{v}) = \nabla_{\vec{x}}\wedge\vec{v} = \begin{vmatrix} \hat{\mathbf{e}}_1 & \hat{\mathbf{e}}_2 & \hat{\mathbf{e}}_3 \\ \dfrac{\partial}{\partial x_1} & \dfrac{\partial}{\partial x_2} & \dfrac{\partial}{\partial x_3} \\ v_1 & v_2 & v_3 \end{vmatrix} = \epsilon_{ijk}\,v_{k,j}\,\hat{\mathbf{e}}_i = \left(\frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}\right)\hat{\mathbf{e}}_1 + \left(\frac{\partial v_1}{\partial x_3} - \frac{\partial v_3}{\partial x_1}\right)\hat{\mathbf{e}}_2 + \left(\frac{\partial v_2}{\partial x_1} - \frac{\partial v_1}{\partial x_2}\right)\hat{\mathbf{e}}_3 \qquad (1.484)$$
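The two expressions in (1.484) — the indicial contraction $\epsilon_{ijk}v_{k,j}$ and the determinant expansion — can be compared numerically. The sketch below is an illustration only (arbitrary field and point, NumPy assumed), building the permutation symbol explicitly:

```python
import numpy as np

def v(x):
    # Arbitrary smooth vector field (illustration only).
    return np.array([x[1] * x[2], x[0]**2, x[0] * x[1] * x[2]])

def grad_v(x, h=1e-6):
    G = np.zeros((3, 3))            # G[i, j] = dv_i/dx_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (v(x + e) - v(x - e)) / (2 * h)
    return G

# Permutation symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

x = np.array([1.0, 2.0, 3.0])
G = grad_v(x)
curl_indicial = np.einsum('ijk,kj->i', eps, G)   # eps_{ijk} v_{k,j}
curl_det = np.array([G[2, 1] - G[1, 2],          # dv3/dx2 - dv2/dx3
                     G[0, 2] - G[2, 0],          # dv1/dx3 - dv3/dx1
                     G[1, 0] - G[0, 1]])         # dv2/dx1 - dv1/dx2
```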
We can verify that the antisymmetric part of the vector field gradient, which is denoted by $(\nabla_{\vec{x}}\vec{v})^{skew} \equiv \mathbf{W}$, has the components:

$$[(\nabla_{\vec{x}}\vec{v})^{skew}]_{ij} = \begin{bmatrix} 0 & W_{12} & W_{13} \\ W_{21} & 0 & W_{23} \\ W_{31} & W_{32} & 0 \end{bmatrix} = \begin{bmatrix} 0 & \frac{1}{2}\left(\frac{\partial v_1}{\partial x_2} - \frac{\partial v_2}{\partial x_1}\right) & \frac{1}{2}\left(\frac{\partial v_1}{\partial x_3} - \frac{\partial v_3}{\partial x_1}\right) \\ \frac{1}{2}\left(\frac{\partial v_2}{\partial x_1} - \frac{\partial v_1}{\partial x_2}\right) & 0 & \frac{1}{2}\left(\frac{\partial v_2}{\partial x_3} - \frac{\partial v_3}{\partial x_2}\right) \\ \frac{1}{2}\left(\frac{\partial v_3}{\partial x_1} - \frac{\partial v_1}{\partial x_3}\right) & \frac{1}{2}\left(\frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}\right) & 0 \end{bmatrix} = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix} \qquad (1.485)$$
where w1 , w2 , w3 are the components of the axial vector w associated with W , (see
subsection: 1.5.2.2.2. Antisymmetric Tensor).
With reference to the definition of the curl in (1.484) and the relationship in (1.485), we can conclude that:

$$rot(\vec{v}) \equiv \nabla_{\vec{x}}\wedge\vec{v} = \left(\frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}\right)\hat{\mathbf{e}}_1 + \left(\frac{\partial v_1}{\partial x_3} - \frac{\partial v_3}{\partial x_1}\right)\hat{\mathbf{e}}_2 + \left(\frac{\partial v_2}{\partial x_1} - \frac{\partial v_1}{\partial x_2}\right)\hat{\mathbf{e}}_3 \qquad (1.486)$$

$$\mathbf{W}\cdot\vec{v} = \vec{w}\wedge\vec{v} = \frac{1}{2}(\nabla_{\vec{x}}\wedge\vec{v})\wedge\vec{v} \qquad (1.487)$$

It could be interesting to note that the equation in (1.486) can be obtained by means of Problem 1.18, in which we showed that $\frac{1}{2}(\vec{a}\wedge\vec{x})$ is the axial vector associated with the antisymmetric tensor $(\vec{x}\otimes\vec{a})^{skew}$. Therefore, the axial vector associated with the antisymmetric tensor $[\nabla_{\vec{x}}\vec{v}]^{skew}$ is the vector $\frac{1}{2}\nabla_{\vec{x}}\wedge\vec{v}$.
As we can see, the curl describes the rotational tendency of the vector field.
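The relationship (1.487) between the skew part $\mathbf{W}$, its axial vector $\vec{w} = \frac{1}{2}\nabla_{\vec{x}}\wedge\vec{v}$, and the vector product can be checked numerically; the field, point and test vector below are arbitrary illustrative choices (NumPy assumed):

```python
import numpy as np

def v(x):
    # Arbitrary smooth vector field (illustration only).
    return np.array([x[1]**2, x[0] * x[2], x[0] + x[2]**2])

def grad_v(x, h=1e-6):
    G = np.zeros((3, 3))            # G[i, j] = dv_i/dx_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (v(x + e) - v(x - e)) / (2 * h)
    return G

x = np.array([0.7, -0.4, 1.3])
G = grad_v(x)
W = 0.5 * (G - G.T)                 # antisymmetric part (grad v)^skew
curl = np.array([G[2, 1] - G[1, 2], G[0, 2] - G[2, 0], G[1, 0] - G[0, 1]])
w = 0.5 * curl                      # axial vector of W, eq. (1.487)

a = np.array([1.0, 2.0, -1.0])      # any vector
lhs = W @ a                         # W . a
rhs = np.cross(w, a)                # w ^ a
```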
Summary

  Gradient     grad(•) ≡ ∇(•):   scalar → vector ; vector → second-order tensor ; second-order tensor → third-order tensor
  Divergence   div(•) ≡ ∇·(•):   vector → scalar ; second-order tensor → vector
  Curl         rot(•) ≡ ∇∧(•):   vector → vector
The curl of the product of a scalar field $\phi$ and a vector field $\vec{a}$ satisfies:

$$rot(\phi\vec{a}) = \nabla_{\vec{x}}\wedge(\phi\vec{a}) = \phi(\nabla_{\vec{x}}\wedge\vec{a}) + (\nabla_{\vec{x}}\phi)\wedge\vec{a}$$

The result of the algebraic operation $\nabla_{\vec{x}}\wedge(\phi\vec{a})$ is a vector, whose components are given by:

$$[\nabla_{\vec{x}}\wedge(\phi\vec{a})]_i = \epsilon_{ijk}(\phi a_k)_{,j} = \epsilon_{ijk}(\phi_{,j}a_k + \phi a_{k,j}) = \phi\,\epsilon_{ijk}a_{k,j} + \epsilon_{ijk}\phi_{,j}a_k = \phi(\nabla_{\vec{x}}\wedge\vec{a})_i + (\nabla_{\vec{x}}\phi\wedge\vec{a})_i \qquad (1.488)$$
$$\nabla_{\vec{x}}\wedge(\vec{a}\wedge\vec{b}) = (\nabla_{\vec{x}}\cdot\vec{b})\vec{a} - (\nabla_{\vec{x}}\cdot\vec{a})\vec{b} + (\nabla_{\vec{x}}\vec{a})\cdot\vec{b} - (\nabla_{\vec{x}}\vec{b})\cdot\vec{a} \qquad (1.489)$$

The components of the vector product $(\vec{a}\wedge\vec{b})$ are given by $(\vec{a}\wedge\vec{b})_k = \epsilon_{kij}a_ib_j$, thus:

$$[\nabla_{\vec{x}}\wedge(\vec{a}\wedge\vec{b})]_l = \epsilon_{lpk}(\epsilon_{kij}a_ib_j)_{,p} = \epsilon_{kij}\epsilon_{lpk}(a_{i,p}b_j + a_ib_{j,p}) \qquad (1.490)$$

Regarding that $\epsilon_{kij} = \epsilon_{ijk}$ and $\epsilon_{ijk}\epsilon_{lpk} = \delta_{il}\delta_{jp} - \delta_{ip}\delta_{jl}$, the above equation becomes:

$$[\nabla_{\vec{x}}\wedge(\vec{a}\wedge\vec{b})]_l = (\delta_{il}\delta_{jp} - \delta_{ip}\delta_{jl})(a_{i,p}b_j + a_ib_{j,p}) = \delta_{il}\delta_{jp}a_{i,p}b_j - \delta_{ip}\delta_{jl}a_{i,p}b_j + \delta_{il}\delta_{jp}a_ib_{j,p} - \delta_{ip}\delta_{jl}a_ib_{j,p} = a_{l,p}b_p - a_{p,p}b_l + a_lb_{p,p} - a_pb_{l,p} \qquad (1.491)$$

where we recognize $[(\nabla_{\vec{x}}\vec{a})\cdot\vec{b}]_l = a_{l,p}b_p$, $[(\nabla_{\vec{x}}\cdot\vec{a})\vec{b}]_l = a_{p,p}b_l$, $[(\nabla_{\vec{x}}\cdot\vec{b})\vec{a}]_l = a_lb_{p,p}$ and $[(\nabla_{\vec{x}}\vec{b})\cdot\vec{a}]_l = a_pb_{l,p}$.
Another useful identity is:

$$\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\wedge\vec{a}) = \nabla_{\vec{x}}(\nabla_{\vec{x}}\cdot\vec{a}) - \nabla_{\vec{x}}^2\vec{a} \qquad (1.492)$$

The components of $(\nabla_{\vec{x}}\wedge\vec{a})$ are given by $(\nabla_{\vec{x}}\wedge\vec{a})_i = \epsilon_{ijk}a_{k,j} \equiv c_i$, thus:

$$[\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\wedge\vec{a})]_q = \epsilon_{qli}\,c_{i,l} = \epsilon_{qli}(\epsilon_{ijk}a_{k,j})_{,l} = \epsilon_{qli}\epsilon_{ijk}\,a_{k,jl} \qquad (1.493)$$

Once again considering that $\epsilon_{qli}\epsilon_{ijk} = \epsilon_{qli}\epsilon_{jki} = \delta_{qj}\delta_{lk} - \delta_{qk}\delta_{lj}$, the above equation becomes:

$$[\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\wedge\vec{a})]_q = (\delta_{qj}\delta_{lk} - \delta_{qk}\delta_{lj})a_{k,jl} = \delta_{qj}\delta_{lk}a_{k,jl} - \delta_{qk}\delta_{lj}a_{k,jl} = a_{k,kq} - a_{q,ll} \qquad (1.494)$$
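The identity (1.492) can be checked with nested finite differences. The field below is an arbitrary illustrative choice (NumPy assumed) that happens to have zero divergence, so the right-hand side of (1.492) reduces to $-\nabla_{\vec{x}}^2\vec{a}$, which was computed by hand:

```python
import numpy as np

def a(x):
    # div a = 0 for this field, so (1.492) reduces to curl(curl a) = -lap a.
    return np.array([x[1]**2 * x[2], x[0] * x[2]**2, x[0]**2 * x[1]])

def curl_fd(f, x, h=1e-4):
    G = np.zeros((3, 3))            # G[i, j] = df_i/dx_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return np.array([G[2, 1] - G[1, 2], G[0, 2] - G[2, 0], G[1, 0] - G[0, 1]])

x = np.array([0.5, -1.0, 2.0])
curl_curl = curl_fd(lambda y: curl_fd(a, y), x)     # curl(curl a)
rhs = np.array([-2 * x[2], -2 * x[0], -2 * x[1]])   # grad(div a) - lap a, by hand
```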
For scalar fields $\phi$ and $\psi$ it also holds that:

$$\nabla_{\vec{x}}\cdot(\phi\nabla_{\vec{x}}\psi) = \phi\nabla_{\vec{x}}^2\psi + (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\psi) \qquad (1.495)$$

since

$$\nabla_{\vec{x}}\cdot(\phi\nabla_{\vec{x}}\psi) = (\phi\,\psi_{,i})_{,i} = \phi\,\psi_{,ii} + \phi_{,i}\psi_{,i} = \phi\nabla_{\vec{x}}^2\psi + (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\psi) \qquad (1.496)$$

where $\phi$ and $\psi$ are scalar fields. Other interesting equations derived from the above are:

$$\nabla_{\vec{x}}\cdot(\psi\nabla_{\vec{x}}\phi) = \psi\nabla_{\vec{x}}^2\phi + (\nabla_{\vec{x}}\psi)\cdot(\nabla_{\vec{x}}\phi) \qquad (1.497)$$

$$\nabla_{\vec{x}}\cdot(\phi\nabla_{\vec{x}}\phi) = \phi\nabla_{\vec{x}}^2\phi + (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\phi) \qquad (1.498)$$

$$\nabla_{\vec{x}}\cdot(\phi\nabla_{\vec{x}}\psi - \psi\nabla_{\vec{x}}\phi) = \phi\nabla_{\vec{x}}^2\psi - \psi\nabla_{\vec{x}}^2\phi \qquad (1.499)$$
1.8.5 Conservative Fields

A vector field $\vec{b}$ is said to be conservative if there is a scalar field $\phi$ such that $\vec{b} = \nabla_{\vec{x}}\phi$; in other words, given a conservative field, the curl $\nabla_{\vec{x}}\wedge\vec{b}$ equals zero. However, if the curl of a vector field equals zero, this does not necessarily mean that the field is conservative.
Problem 1.46: Let $\phi$ be a scalar field, and $\vec{v}$ be a vector field.
a) Show that $\nabla_{\vec{x}}\cdot(\nabla_{\vec{x}}\wedge\vec{v}) = 0$ and $\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\phi) = \vec{0}$.
b) Show that $\nabla_{\vec{x}}\wedge[(\nabla_{\vec{x}}\wedge\vec{v})\wedge\vec{v}] = (\nabla_{\vec{x}}\cdot\vec{v})(\nabla_{\vec{x}}\wedge\vec{v}) + [\nabla_{\vec{x}}(\nabla_{\vec{x}}\wedge\vec{v})]\cdot\vec{v} - (\nabla_{\vec{x}}\vec{v})\cdot(\nabla_{\vec{x}}\wedge\vec{v})$;
c) Denoting $\vec{\omega} = \nabla_{\vec{x}}\wedge\vec{v}$, show that $\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}^2\vec{v}) = \nabla_{\vec{x}}^2(\nabla_{\vec{x}}\wedge\vec{v}) = \nabla_{\vec{x}}^2\vec{\omega}$.
Solution:
a) Regarding that $\nabla_{\vec{x}}\wedge\vec{v} = \epsilon_{ijk}v_{k,j}\hat{\mathbf{e}}_i$, we obtain:

$$\nabla_{\vec{x}}\cdot(\nabla_{\vec{x}}\wedge\vec{v}) = \frac{\partial(\epsilon_{ijk}v_{k,j})}{\partial x_l}\,\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}_l = \epsilon_{ijk}\,v_{k,jl}\,\delta_{il} = \epsilon_{ijk}\,v_{k,ji} = 0$$

The second derivative of $\vec{v}$ is symmetrical in $ij$, i.e. $v_{k,ji} = v_{k,ij}$, while $\epsilon_{ijk}$ is antisymmetric in $ij$, so the contraction vanishes. The same reasoning shows that $\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\phi) = \epsilon_{ijk}\,\phi_{,kj}\,\hat{\mathbf{e}}_i = \vec{0}$.
b) Denoting $\vec{\omega} = \nabla_{\vec{x}}\wedge\vec{v}$ we obtain:

$$\nabla_{\vec{x}}\wedge[(\nabla_{\vec{x}}\wedge\vec{v})\wedge\vec{v}] = \nabla_{\vec{x}}\wedge(\vec{\omega}\wedge\vec{v})$$

and, using the identity (1.489):

$$\nabla_{\vec{x}}\wedge(\vec{\omega}\wedge\vec{v}) = (\nabla_{\vec{x}}\cdot\vec{v})\vec{\omega} - (\nabla_{\vec{x}}\cdot\vec{\omega})\vec{v} + (\nabla_{\vec{x}}\vec{\omega})\cdot\vec{v} - (\nabla_{\vec{x}}\vec{v})\cdot\vec{\omega}$$

Note that $\nabla_{\vec{x}}\cdot\vec{\omega} = \nabla_{\vec{x}}\cdot(\nabla_{\vec{x}}\wedge\vec{v}) = 0$. Then, we can draw the conclusion that:

$$\nabla_{\vec{x}}\wedge(\vec{\omega}\wedge\vec{v}) = (\nabla_{\vec{x}}\cdot\vec{v})\vec{\omega} + (\nabla_{\vec{x}}\vec{\omega})\cdot\vec{v} - (\nabla_{\vec{x}}\vec{v})\cdot\vec{\omega} = (\nabla_{\vec{x}}\cdot\vec{v})(\nabla_{\vec{x}}\wedge\vec{v}) + [\nabla_{\vec{x}}(\nabla_{\vec{x}}\wedge\vec{v})]\cdot\vec{v} - (\nabla_{\vec{x}}\vec{v})\cdot(\nabla_{\vec{x}}\wedge\vec{v})$$
c) From the identity (1.492) we have:

$$\nabla_{\vec{x}}^2\vec{v} = \nabla_{\vec{x}}(\nabla_{\vec{x}}\cdot\vec{v}) - \nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\wedge\vec{v}) = \nabla_{\vec{x}}(\nabla_{\vec{x}}\cdot\vec{v}) - \nabla_{\vec{x}}\wedge\vec{\omega}$$

Applying the curl and noting that the curl of a gradient is zero:

$$\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}^2\vec{v}) = \underbrace{\nabla_{\vec{x}}\wedge[\nabla_{\vec{x}}(\nabla_{\vec{x}}\cdot\vec{v})]}_{=\vec{0}} - \nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}\wedge\vec{\omega})$$

and, applying (1.492) once again, now to $\vec{\omega}$:

$$\nabla_{\vec{x}}\wedge(\nabla_{\vec{x}}^2\vec{v}) = -\nabla_{\vec{x}}(\underbrace{\nabla_{\vec{x}}\cdot\vec{\omega}}_{=0}) + \nabla_{\vec{x}}^2\vec{\omega} = \nabla_{\vec{x}}^2\vec{\omega} = \nabla_{\vec{x}}^2(\nabla_{\vec{x}}\wedge\vec{v})$$
1.9.2 Integration by Parts

$$\int_a^b u(x)\,v'(x)\,dx = \big[u(x)\,v(x)\big]_a^b - \int_a^b v(x)\,u'(x)\,dx \qquad (1.500)$$

where $v'(x) = \dfrac{dv}{dx}$, and the functions $u(x)$, $v(x)$ are differentiable in $a \leq x \leq b$.
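Equation (1.500) can be verified by numerical quadrature. The functions below are arbitrary illustrative choices (standard library only), with both integrals evaluated by the composite trapezoidal rule:

```python
import math

def u(x):  return x**2
def du(x): return 2 * x
def v(x):  return math.sin(x)
def dv(x): return math.cos(x)

def integrate(f, a, b, n=10000):
    """Composite trapezoidal rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

a, b = 0.0, 1.0
lhs = integrate(lambda x: u(x) * dv(x), a, b)     # int u v' dx
rhs = u(b) * v(b) - u(a) * v(a) - integrate(lambda x: du(x) * v(x), a, b)
```

Here $\int_0^1 x^2\cos x\,dx = 2\cos 1 - \sin 1$ exactly, so both sides can also be compared against the closed form.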
Given a domain $B$ with a volume $V$, and bounded by the surface $S$, (see Figure 1.37), the divergence theorem, also called the Gauss theorem, applied to a vector field $\vec{v}$ states that:

$$\int_V \nabla_{\vec{x}}\cdot\vec{v}\,dV = \int_S \vec{v}\cdot\hat{\mathbf{n}}\,dS = \int_S \vec{v}\cdot d\vec{S} \qquad ; \qquad \int_V v_{i,i}\,dV = \int_S v_i\,n_i\,dS = \int_S v_i\,dS_i \qquad (1.501)$$

[Figure 1.37: The domain $B$ of volume $V$, bounded by the surface $S$ with outward unit normal $\hat{\mathbf{n}}$, referred to the axes $x_1,x_2,x_3$; $d\vec{S} = \hat{\mathbf{n}}\,dS$.]
Let $\mathbf{T}$ be a second-order tensor field defined in the domain $B$. The divergence theorem applied to this field is defined as:

$$\int_V \nabla_{\vec{x}}\cdot\mathbf{T}\,dV = \int_S \mathbf{T}\cdot\hat{\mathbf{n}}\,dS = \int_S \mathbf{T}\cdot d\vec{S} \qquad ; \qquad \int_V T_{ij,j}\,dV = \int_S T_{ij}\,n_j\,dS = \int_S T_{ij}\,dS_j \qquad (1.502)$$
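The divergence theorem (1.501) can be verified on a unit cube. The sketch below is an illustration only (arbitrary polynomial field, NumPy assumed): the volume integral of $v_{i,i}$ is computed by the midpoint rule, and the flux is summed over the six faces:

```python
import numpy as np

def v(x1, x2, x3):                        # v = (x1, x2^2, x3^3)
    return np.array([x1, x2**2, x3**3])

def div_v(x1, x2, x3):                    # div v = 1 + 2 x2 + 3 x3^2
    return 1 + 2 * x2 + 3 * x3**2

n = 50
t = (np.arange(n) + 0.5) / n              # midpoint abscissas on [0, 1]
w = 1.0 / n

# Volume integral of div v over the unit cube (analytically = 3).
vol = sum(div_v(a, b, c) * w**3 for a in t for b in t for c in t)

# Flux of v through the six faces (outward normals +-e1, +-e2, +-e3).
flux = 0.0
for a in t:
    for b in t:
        flux += (v(1, a, b)[0] - v(0, a, b)[0]) * w**2   # faces x1 = 1, x1 = 0
        flux += (v(a, 1, b)[1] - v(a, 0, b)[1]) * w**2   # faces x2 = 1, x2 = 0
        flux += (v(a, b, 1)[2] - v(a, b, 0)[2]) * w**2   # faces x3 = 1, x3 = 0
```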
As an application, consider the integral of $x_kn_j$ over the surface $S$:

$$\int_S x_k\,n_j\,dS = \int_S \delta_{ik}\,x_i\,n_j\,dS = \int_V (\delta_{ik}\,x_i)_{,j}\,dV = \int_V \delta_{ik}\,x_{i,j}\,dV = \int_V x_{k,j}\,dV = \int_V \delta_{kj}\,dV \qquad (1.503)$$

so that:

$$V\delta_{kj} = \int_S x_k\,n_j\,dS \qquad \Leftrightarrow \qquad V\mathbf{1} = \int_S \vec{x}\otimes\hat{\mathbf{n}}\,dS \qquad (1.504)$$
Similarly, for the product $x_iT_{jk}$ the divergence theorem gives:

$$\int_V (x_i\,T_{jk})_{,k}\,dV = \int_S x_i\,T_{jk}\,n_k\,dS \qquad (1.505)$$

$$\int_V [x_{i,k}\,T_{jk} + x_i\,T_{jk,k}]\,dV = \int_S x_i\,T_{jk}\,n_k\,dS \quad \Rightarrow \quad \int_V [\delta_{ik}\,T_{jk} + x_i\,T_{jk,k}]\,dV = \int_S x_i\,T_{jk}\,n_k\,dS$$

$$\int_V x_i\,T_{jk,k}\,dV = \int_S x_i\,T_{jk}\,n_k\,dS - \int_V T_{ji}\,dV \qquad (1.506)$$

or, in tensorial notation:

$$\int_V \vec{x}\otimes(\nabla_{\vec{x}}\cdot\mathbf{T})\,dV = \int_S \vec{x}\otimes(\mathbf{T}\cdot\hat{\mathbf{n}})\,dS - \int_V \mathbf{T}^T\,dV \qquad (1.507)$$

$$\int_V (\nabla_{\vec{x}}\cdot\mathbf{T})\otimes\vec{x}\,dV = \int_S (\mathbf{T}\cdot\hat{\mathbf{n}})\otimes\vec{x}\,dS - \int_V \mathbf{T}\,dV \qquad (1.508)$$
Problem: Let $\phi$ be a scalar field and $\mathbf{m}$ a second-order tensor field defined in a domain $\Omega$ bounded by $\Gamma$, (see Figure 1.38). Show that:

$$\int_\Omega \mathbf{m}:\nabla_{\vec{x}}(\nabla_{\vec{x}}\phi)\,d\Omega = \int_\Gamma [(\nabla_{\vec{x}}\phi)\cdot\mathbf{m}]\cdot\hat{\mathbf{n}}\,d\Gamma - \int_\Omega (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\cdot\mathbf{m})\,d\Omega$$

or, in indicial notation:

$$\int_\Omega m_{ij}\,\phi_{,ij}\,d\Omega = \int_\Gamma \phi_{,i}\,m_{ij}\,n_j\,d\Gamma - \int_\Omega m_{ij,j}\,\phi_{,i}\,d\Omega$$

[Figure 1.38: The domain $\Omega$, bounded by $\Gamma$, referred to the axes $x_1,x_2$.]
Solution: We could directly apply the definition of integration by parts to demonstrate the above relationship. But, here we will start with the definition of the divergence theorem. That is, given a vector field $\vec{v}$, it is true that:

$$\int_\Omega \nabla_{\vec{x}}\cdot\vec{v}\,d\Omega = \int_\Gamma \vec{v}\cdot\hat{\mathbf{n}}\,d\Gamma \qquad \text{(indicial)} \qquad \int_\Omega v_{j,j}\,d\Omega = \int_\Gamma v_j\,n_j\,d\Gamma$$

Observing that the vector $\vec{v}$ can be chosen as the result of the algebraic operation $\vec{v} = \nabla_{\vec{x}}\phi\cdot\mathbf{m}$, equivalent in indicial notation to $v_j = \phi_{,i}\,m_{ij}$, and by substituting it in the above equation we obtain:

$$\int_\Omega (\phi_{,i}\,m_{ij})_{,j}\,d\Omega = \int_\Gamma \phi_{,i}\,m_{ij}\,n_j\,d\Gamma \quad \Rightarrow \quad \int_\Omega [\phi_{,ij}\,m_{ij} + \phi_{,i}\,m_{ij,j}]\,d\Omega = \int_\Gamma \phi_{,i}\,m_{ij}\,n_j\,d\Gamma$$

$$\int_\Omega \phi_{,ij}\,m_{ij}\,d\Omega = \int_\Gamma \phi_{,i}\,m_{ij}\,n_j\,d\Gamma - \int_\Omega \phi_{,i}\,m_{ij,j}\,d\Omega$$

which in tensorial notation reads:

$$\int_\Omega \mathbf{m}:\nabla_{\vec{x}}(\nabla_{\vec{x}}\phi)\,d\Omega = \int_\Gamma [(\nabla_{\vec{x}}\phi)\cdot\mathbf{m}]\cdot\hat{\mathbf{n}}\,d\Gamma - \int_\Omega (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\cdot\mathbf{m})\,d\Omega$$
NOTE: Consider now the domain defined by the volume $V$, which is bounded by the surface $S$ with the outward unit normal to the surface $\hat{\mathbf{n}}$. If $\vec{N}$ is a vector field and $T$ is a scalar field, it is also true that:

$$\int_V N_i\,T_{,ij}\,dV = \int_S N_i\,T_{,i}\,n_j\,dS - \int_V N_{i,j}\,T_{,i}\,dV$$

$$\int_V \vec{N}\cdot\nabla_{\vec{x}}(\nabla_{\vec{x}}T)\,dV = \int_S (\nabla_{\vec{x}}T\cdot\vec{N})\,\hat{\mathbf{n}}\,dS - \int_V \nabla_{\vec{x}}T\cdot\nabla_{\vec{x}}\vec{N}\,dV$$
1.9.3 Independence of Path
A curve which connects two points $A$ and $B$ is called a path from $A$ to $B$, (see Figure 1.39). We can then establish the condition under which a line integral is independent of the path, (see Figure 1.39).
[Figure 1.39: Two curves $C_1$ and $C_2$ joining the points $A$ and $B$, referred to the axes $x_1,x_2,x_3$. If $\int_{C_1}\vec{b}\cdot d\vec{r} = \int_{C_2}\vec{b}\cdot d\vec{r}$, then $\vec{b}$ is a conservative field.]

Let $\vec{b}(\vec{x})$ be a continuous vector field. Then the integral $\int\vec{b}\cdot d\vec{r}$ is independent of the path if and only if $\vec{b}$ is a conservative field. This means that there is a scalar field $\phi$ so that $\vec{b} = \nabla_{\vec{x}}\phi$.
Regarding the above, we can draw the conclusion that:

$$\int_A^B \vec{b}\cdot d\vec{r} = \int_A^B \nabla_{\vec{x}}\phi\cdot d\vec{r} \quad \Rightarrow \quad (b_1\hat{\mathbf{e}}_1 + b_2\hat{\mathbf{e}}_2 + b_3\hat{\mathbf{e}}_3)\cdot d\vec{r} = \left(\frac{\partial\phi}{\partial x_1}\hat{\mathbf{e}}_1 + \frac{\partial\phi}{\partial x_2}\hat{\mathbf{e}}_2 + \frac{\partial\phi}{\partial x_3}\hat{\mathbf{e}}_3\right)\cdot d\vec{r} \qquad (1.509)$$

Thus:

$$b_1 = \frac{\partial\phi}{\partial x_1} \ ; \qquad b_2 = \frac{\partial\phi}{\partial x_2} \ ; \qquad b_3 = \frac{\partial\phi}{\partial x_3} \qquad (1.510)$$
Since the curl of a gradient is zero, a conservative field must satisfy:

$$\nabla_{\vec{x}}\wedge\vec{b} = \begin{vmatrix} \hat{\mathbf{e}}_1 & \hat{\mathbf{e}}_2 & \hat{\mathbf{e}}_3 \\ \dfrac{\partial}{\partial x_1} & \dfrac{\partial}{\partial x_2} & \dfrac{\partial}{\partial x_3} \\ b_1 & b_2 & b_3 \end{vmatrix} = 0_i \qquad (1.511)$$

that is:

$$\frac{\partial b_3}{\partial x_2} - \frac{\partial b_2}{\partial x_3} = 0 \ ; \quad \frac{\partial b_1}{\partial x_3} - \frac{\partial b_3}{\partial x_1} = 0 \ ; \quad \frac{\partial b_2}{\partial x_1} - \frac{\partial b_1}{\partial x_2} = 0 \qquad \Leftrightarrow \qquad \frac{\partial b_3}{\partial x_2} = \frac{\partial b_2}{\partial x_3} \ ; \quad \frac{\partial b_1}{\partial x_3} = \frac{\partial b_3}{\partial x_1} \ ; \quad \frac{\partial b_2}{\partial x_1} = \frac{\partial b_1}{\partial x_2} \qquad (1.512)$$
Therefore, if the above condition is not satisfied, the field is not conservative.
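Path independence for a conservative field can be illustrated numerically. The sketch below (arbitrary potential and endpoints, NumPy assumed) integrates $\vec{b}\cdot d\vec{r}$ along a straight path and along a curved path with the same endpoints; both should give $\phi(B) - \phi(A)$:

```python
import numpy as np

phi = lambda x: x[0] * x[1] + x[2]**2
b   = lambda x: np.array([x[1], x[0], 2 * x[2]])    # b = grad phi: conservative

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 2.0, 1.0])

def line_integral(path, n=2000):
    """Approximate int b . dr along path(s), s in [0, 1], by midpoint sums."""
    s = np.linspace(0.0, 1.0, n)
    pts = np.array([path(si) for si in s])
    mid = 0.5 * (pts[1:] + pts[:-1])
    dr  = pts[1:] - pts[:-1]
    return sum(b(m) @ d for m, d in zip(mid, dr))

straight = lambda s: A + s * (B - A)
# sin(pi s) vanishes at s = 0 and s = 1, so both curves join A to B.
curved = lambda s: A + s * (B - A) + 0.5 * np.sin(np.pi * s) * np.array([1.0, 0.0, 0.0])

I1 = line_integral(straight)
I2 = line_integral(curved)
delta_phi = phi(B) - phi(A)          # = 3.0
```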
Notes on Continuum Mechanics(Springer/CIMNE) - (Chapter 01) - By: Eduardo W.V. Chaves
1.9.4 Kelvin-Stokes' Theorem

Let $S$ be a regular surface, (see Figure 1.40), and $\vec{F}(\vec{x},t)$ be a vector field. According to the Kelvin-Stokes theorem:
$$\oint_\Gamma \vec{F}\cdot d\vec{r} = \int_S (\nabla_{\vec{x}}\wedge\vec{F})\cdot d\vec{S} = \int_S (\nabla_{\vec{x}}\wedge\vec{F})\cdot\hat{\mathbf{n}}\,dS \qquad (1.513)$$

If $\hat{\mathbf{p}}$ denotes the unit vector tangent to the boundary $\Gamma$, the Stokes theorem becomes:

$$\oint_\Gamma \vec{F}\cdot\hat{\mathbf{p}}\,d\Gamma = \int_S (\nabla_{\vec{x}}\wedge\vec{F})\cdot d\vec{S} = \int_S (\nabla_{\vec{x}}\wedge\vec{F})\cdot\hat{\mathbf{n}}\,dS \qquad (1.514)$$
$$\nabla_{\vec{x}}\wedge\vec{F} = \begin{vmatrix} \hat{\mathbf{e}}_1 & \hat{\mathbf{e}}_2 & \hat{\mathbf{e}}_3 \\ \dfrac{\partial}{\partial x_1} & \dfrac{\partial}{\partial x_2} & \dfrac{\partial}{\partial x_3} \\ F_1 & F_2 & F_3 \end{vmatrix} = \left(\frac{\partial F_3}{\partial x_2} - \frac{\partial F_2}{\partial x_3}\right)\hat{\mathbf{e}}_1 + \left(\frac{\partial F_1}{\partial x_3} - \frac{\partial F_3}{\partial x_1}\right)\hat{\mathbf{e}}_2 + \left(\frac{\partial F_2}{\partial x_1} - \frac{\partial F_1}{\partial x_2}\right)\hat{\mathbf{e}}_3 \qquad (1.515)$$
In terms of components, with $dS_i = n_i\,dS$ (1.516), the Stokes theorem (1.514) becomes:

$$\oint_\Gamma (F_1dx_1 + F_2dx_2 + F_3dx_3) = \int_S \left[\left(\frac{\partial F_3}{\partial x_2} - \frac{\partial F_2}{\partial x_3}\right)dS_1 + \left(\frac{\partial F_1}{\partial x_3} - \frac{\partial F_3}{\partial x_1}\right)dS_2 + \left(\frac{\partial F_2}{\partial x_1} - \frac{\partial F_1}{\partial x_2}\right)dS_3\right] \qquad (1.517)$$
When the curve and the surface lie in the $x_1$-$x_2$ plane, the above reduces to what is known as the Stokes theorem in the plane, or Green's theorem, which is expressed in terms of components as:

$$\oint_\Gamma (F_1dx_1 + F_2dx_2) = \int_S \left(\frac{\partial F_2}{\partial x_1} - \frac{\partial F_1}{\partial x_2}\right)dS_3 \qquad (1.518)$$
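Green's theorem (1.518) can be verified with a simple sketch (standard library only, and the field is an arbitrary illustrative choice): for $\vec{F} = (-x_2,\,x_1)$ the integrand $\partial F_2/\partial x_1 - \partial F_1/\partial x_2 = 2$, so the circulation around the unit circle should equal twice the disc area, $2\pi$:

```python
import math

# F = (-x2, x1): dF2/dx1 - dF1/dx2 = 2, so the area integral is 2 * (disc area).
n = 10000
circ = 0.0
for k in range(n):
    t0 = 2 * math.pi * k / n
    t1 = 2 * math.pi * (k + 1) / n
    tm = 0.5 * (t0 + t1)
    xm, ym = math.cos(tm), math.sin(tm)     # field evaluated at arc midpoint
    dx = math.cos(t1) - math.cos(t0)
    dy = math.sin(t1) - math.sin(t0)
    circ += -ym * dx + xm * dy              # F1 dx1 + F2 dx2
rhs = 2 * math.pi                           # 2 * area of the unit disc
```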
[Figure 1.41: A plane surface lying in the $x_1$-$x_2$ plane, with $d\vec{S} = dS\,\hat{\mathbf{e}}_3$, and its boundary curve, referred to the axes $x_1,x_2,x_3$.]
1.9.5 Green's Identities
Starting from the divergence theorem:

$$\int_V \nabla_{\vec{x}}\cdot\vec{F}\,dV = \int_S \vec{F}\cdot\hat{\mathbf{n}}\,dS \qquad (1.519)$$

and from the identity (1.495):

$$\nabla_{\vec{x}}\cdot(\phi\nabla_{\vec{x}}\psi) = \phi\nabla_{\vec{x}}^2\psi + (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\psi) \qquad (1.520)$$

$$\nabla_{\vec{x}}\cdot(\phi\nabla_{\vec{x}}\psi - \psi\nabla_{\vec{x}}\phi) = \phi\nabla_{\vec{x}}^2\psi - \psi\nabla_{\vec{x}}^2\phi \qquad (1.521)$$

and also regarding that $\vec{F} = \phi\nabla_{\vec{x}}\psi$, and by substituting (1.520) into (1.519), we obtain Green's first identity:

$$\int_V [\phi\nabla_{\vec{x}}^2\psi + (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\psi)]\,dV = \int_S \phi\nabla_{\vec{x}}\psi\cdot\hat{\mathbf{n}}\,dS \quad \Leftrightarrow \quad \int_V (\nabla_{\vec{x}}\phi)\cdot(\nabla_{\vec{x}}\psi)\,dV = \int_S \phi\nabla_{\vec{x}}\psi\cdot\hat{\mathbf{n}}\,dS - \int_V \phi\nabla_{\vec{x}}^2\psi\,dV \qquad (1.522)$$

Similarly, taking $\vec{F} = \phi\nabla_{\vec{x}}\psi - \psi\nabla_{\vec{x}}\phi$ and using (1.521), we obtain Green's second identity:

$$\int_V (\phi\nabla_{\vec{x}}^2\psi - \psi\nabla_{\vec{x}}^2\phi)\,dV = \int_S (\phi\nabla_{\vec{x}}\psi - \psi\nabla_{\vec{x}}\phi)\cdot\hat{\mathbf{n}}\,dS \qquad (1.523)$$
Problem: Let $\vec{b} = \nabla_{\vec{x}}\wedge\vec{v}$ and $\phi = \phi(\vec{x})$ be a scalar field. Show that:

$$\int_S \phi\,b_i\,n_i\,dS = \int_V \phi_{,i}\,b_i\,dV$$

Solution: Regarding that $b_i = \epsilon_{ijk}v_{k,j}$, and applying the divergence theorem:

$$\int_S \phi\,b_i\,n_i\,dS = \int_S \phi\,\epsilon_{ijk}v_{k,j}\,n_i\,dS = \int_V (\phi\,\epsilon_{ijk}v_{k,j})_{,i}\,dV = \int_V (\epsilon_{ijk}\phi_{,i}v_{k,j} + \phi\underbrace{\epsilon_{ijk}v_{k,ji}}_{=0})\,dV = \int_V \phi_{,i}\underbrace{\epsilon_{ijk}v_{k,j}}_{b_i}\,dV = \int_V \phi_{,i}\,b_i\,dV$$

where $\epsilon_{ijk}v_{k,ji} = 0$ because $\epsilon_{ijk}$ is antisymmetric in $ij$ while $v_{k,ji}$ is symmetric in $ij$.