A.1
Introductory Remarks
In the study of particle mechanics and the mechanics of rigid bodies, vector notation provides
a convenient means for describing many physical quantities and laws.
In studying the mechanics of solid deformable media, physical quantities of a more complex
nature, such as stress and strain, assume importance.¹ Mathematically, such physical quantities are
represented by matrices.
In the analysis of general problems in continuum mechanics, the physical quantities encountered
can be somewhat more complex than vectors and matrices. Like vectors and matrices these physical
quantities are independent of any particular coordinate system that may be used to describe them.
At the same time, these physical quantities are very often specified most conveniently by referring to
an appropriate system of coordinates. Tensors, which are a generalization of vectors and matrices,
offer a suitable way of mathematically representing these quantities.
As an abstract mathematical entity, tensors have an existence independent of any coordinate
system or frame of reference, yet are most conveniently described by specifying their components
in an appropriate system of coordinates. Specifying the components of a tensor in one coordinate
system determines the components in any other system. Indeed, the law of transformation of tensor
components is often used as a means for defining the tensor.
The objective of this appendix is to present a brief overview of tensors. Further details pertaining
to this subject can be found in standard books on the subject such as [2, 4], or in books dealing
with Continuum Mechanics such as [1, 3].
¹ We recall that in describing stresses and strains one must specify not only the magnitude of the quantity, but also
the orientation of the face upon which this quantity acts.
A.2
General Characteristics
A.3
Indicial Notation
A tensor of any order, its components, or both may be represented clearly and concisely by the
use of indicial notation. This convention is believed to have been introduced by Einstein. In
this notation, letter indices, either subscripts or superscripts, are appended to the generic or kernel
letter representing the tensor quantity of interest; e.g., $A_{ij}$, $B_{ijk}$, $a_{kl}$, etc. Some benefits of using
indicial notation include: (1) economy in writing; and (2) compatibility with computer languages
(e.g., easy correlation with do loops). Some rules for using indicial notation follow.
Index rule
In a given term, a letter index may occur no more than twice.
Range Convention
When an index occurs unrepeated in a term, that index is understood to take on the values
$1, 2, \ldots, N$, where $N$ is a specified integer that, depending on the space considered, determines
the range of the index.
Summation Convention
When an index appears twice in a term, that index is understood to take on all the values of its
range, and the resulting terms are summed. For example, $A_{kk} = A_{11} + A_{22} + \cdots + A_{NN}$.
Free Indices
By virtue of the range convention, unrepeated indices are free to take on values over the range, that
is, $1, 2, \ldots, N$. These indices are thus termed free. The following items apply to free indices:

Any equation must have the same free indices in each term.

The tensorial rank of a given term is equal to the number of free indices.

A term represents $N^{(\text{no. of free indices})}$ separate components or equations.
Dummy Indices
In the summation convention, repeated indices are often referred to as dummy indices, since their
replacement by any other letter not appearing as a free index does not change the meaning of the
term in which they occur.
In the following equations, the repeated indices are thus dummy indices: $A_{kk} = A_{mm}$ and
$a_{ik} b_{kl} = a_{in} b_{nl}$. In the equation $E_{ij} = e_{im} e_{mj}$, $i$ and $j$ represent free indices and $m$ is a dummy
index. Assuming $N = 3$ and using the range convention, it follows that $E_{ij} = e_{i1} e_{1j} + e_{i2} e_{2j} + e_{i3} e_{3j}$.
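As an aside, the summation convention maps directly onto array operations in code. The following sketch (NumPy and the array values are illustrative choices, not part of the text) evaluates $E_{ij} = e_{im} e_{mj}$ with an explicit dummy index:

```python
import numpy as np

# Arbitrary 3x3 array standing in for the components e_ij (N = 3);
# the values are illustrative, not from the text.
e = np.arange(9.0).reshape(3, 3)

# E_ij = e_im e_mj: m is the dummy (summed) index, i and j are free.
E = np.einsum('im,mj->ij', e, e)

# Writing out the implied sum for one component:
assert E[0, 1] == e[0, 0]*e[0, 1] + e[0, 1]*e[1, 1] + e[0, 2]*e[2, 1]
```

The subscript string `'im,mj->ij'` is a literal transcription of the indicial expression: the repeated `m` is summed, the free `i` and `j` survive in the result.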
Care must be taken to avoid breaking grammatical rules in the indicial language. For example,
the expression $\mathbf{a} \cdot \mathbf{b} = (a_k \hat{e}_k) \cdot (b_k \hat{e}_k)$ is erroneous, since the summation on the dummy indices is
ambiguous. To avoid such ambiguity, a dummy index can only be paired with one other dummy
index in an expression. A good rule to follow is to use separate dummy indices for each implied
summation in an expression; e.g., $\mathbf{a} \cdot \mathbf{b} = (a_k \hat{e}_k) \cdot (b_m \hat{e}_m)$.
Contraction of Indices
Contraction refers to the process of summing over a pair of repeated indices. This reduces the order
of a tensor by two.
For example:
Contracting the indices of Aij (a second-order tensor) leads to Akk (a zeroth-order tensor or
scalar).
Contracting the indices of Bijk (a third-order tensor) leads to Bikk (a first-order tensor).
Contracting the indices of Cijkl (a fourth-order tensor) leads to Cijmm (a second-order tensor).
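The contractions above can be checked numerically. In this sketch (NumPy and the array values are assumptions for illustration), each einsum subscript string repeats the contracted index, and the order of the result drops by two each time:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)          # A_ij, second order
B = np.arange(27.0).reshape(3, 3, 3)      # B_ijk, third order
C = np.arange(81.0).reshape(3, 3, 3, 3)   # C_ijkl, fourth order

A_contracted = np.einsum('kk->', A)        # A_kk: order 0 (scalar)
B_contracted = np.einsum('ikk->i', B)      # B_ikk: order 1 (vector)
C_contracted = np.einsum('ijmm->ij', C)    # C_ijmm: order 2

assert A_contracted == np.trace(A)
assert B_contracted.shape == (3,)
assert C_contracted.shape == (3, 3)
```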
Comma Subscript Convention
A subscript comma followed by a subscript index $i$ indicates partial differentiation with respect
to the coordinate $x_i$. Thus,

$(\;)_{,m} \equiv \dfrac{\partial (\;)}{\partial x_m}, \qquad a_{i,j} \equiv \dfrac{\partial a_i}{\partial x_j}, \qquad C_{ij,kl} \equiv \dfrac{\partial^2 C_{ij}}{\partial x_k \, \partial x_l}, \quad \text{etc.} \qquad (A.1)$
If $i$ remains a free index, differentiation of a tensor with respect to $x_i$ produces a tensor of order
one higher. For example,

$A_{j,i} = \dfrac{\partial A_j}{\partial x_i} \qquad (A.2)$

If instead the differentiation index is repeated, the order is reduced by one; for example,

$V_{m,m} = \dfrac{\partial V_m}{\partial x_m} = \dfrac{\partial V_1}{\partial x_1} + \dfrac{\partial V_2}{\partial x_2} + \cdots + \dfrac{\partial V_N}{\partial x_N} \qquad (A.3)$
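As a numerical illustration of $V_{m,m}$ (the divergence), the following sketch uses NumPy finite differences on a hypothetical field $V = (x_1^2,\, 2x_2,\, x_3)$, whose contracted derivative is $2x_1 + 3$; the field and grid are illustrative choices, not from the text:

```python
import numpy as np

# Finite-difference check of V_{m,m} for the illustrative field
# V = (x1**2, 2*x2, x3); analytically V_{m,m} = 2*x1 + 3.
n, h = 41, 0.05
x1, x2, x3 = np.meshgrid(*([np.arange(n) * h] * 3), indexing='ij')
V1, V2, V3 = x1**2, 2.0 * x2, x3

div = (np.gradient(V1, h, axis=0) +
       np.gradient(V2, h, axis=1) +
       np.gradient(V3, h, axis=2))

# Central differences are exact for this field away from the boundary.
interior = (slice(1, -1),) * 3
assert np.allclose(div[interior], (2.0 * x1 + 3.0)[interior])
```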
A.4
Coordinate Systems
The definition of geometric shapes of bodies is facilitated by the use of a coordinate system. With respect to a particular coordinate system, a vector may be defined by specifying the scalar components
of that vector in that system.
A rectangular Cartesian coordinate (RCC) system is represented by three mutually perpendicular
axes in the manner shown in Figure A.1.
[Figure A.1: A rectangular Cartesian coordinate system.]

The base vectors for this system are the unit vectors $\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$, directed parallel to the $x_1$, $x_2$ and $x_3$ coordinate axes, respectively.
Remark

1. The summation convention is very often employed in connection with the representation of
vectors and tensors by indexed base vectors written in symbolic notation. In Euclidean space any
vector is completely specified by its three components. The range on indices is thus 3 (i.e., $N = 3$).
A point with coordinates $(q_1, q_2, q_3)$ is thus located by a position vector $\mathbf{x}$, where

$\mathbf{x} = q_1 \hat{e}_1 + q_2 \hat{e}_2 + q_3 \hat{e}_3 \qquad (A.4)$

or, more compactly,

$\mathbf{x} = q_i \hat{e}_i \qquad (A.5)$

where $i$ is a summed index (i.e., the summation convention applies even though the repeated index
does not appear on the same kernel letter).
The base vectors constitute a right-handed unit vector triad, or right orthogonal triad, satisfying
the following relations:

$\hat{e}_i \cdot \hat{e}_j = \delta_{ij} \qquad (A.6)$

and

$\hat{e}_i \times \hat{e}_j = \epsilon_{ijk} \hat{e}_k \qquad (A.7)$

A set of base vectors satisfying the above conditions is often called an orthonormal basis.
In equation (A.6), $\delta_{ij}$ denotes the Kronecker delta (a second-order tensor typically denoted by
$I$), defined by

$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \qquad (A.8)$
In equation (A.7), $\epsilon_{ijk}$ is the permutation symbol or alternating tensor (a third-order tensor),
that is defined in the following manner:

$\epsilon_{ijk} = \begin{cases} +1 & \text{if } i, j, k \text{ are an even permutation of } 1, 2, 3 \\ -1 & \text{if } i, j, k \text{ are an odd permutation of } 1, 2, 3 \\ \phantom{+}0 & \text{if } i, j, k \text{ are not a permutation of } 1, 2, 3 \end{cases} \qquad (A.9)$
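Both the Kronecker delta and the permutation symbol are easy to tabulate in code. This sketch (NumPy is an assumption; the construction is standard) builds them explicitly and verifies equations (A.6) and (A.7) for the standard basis:

```python
import numpy as np

# Base vectors of the rectangular Cartesian system as rows of the identity.
e = np.eye(3)

# Kronecker delta and permutation symbol, built explicitly.
delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations of 1,2,3
    eps[i, k, j] = -1.0   # odd permutations of 1,2,3

# Equation (A.6): e_i . e_j = delta_ij
for i in range(3):
    for j in range(3):
        assert np.dot(e[i], e[j]) == delta[i, j]

# Equation (A.7): e_i x e_j = eps_ijk e_k (summed over k)
for i in range(3):
    for j in range(3):
        assert np.allclose(np.cross(e[i], e[j]),
                           np.einsum('k,kl->l', eps[i, j], e))
```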
The Kronecker delta acts as a substitution operator; e.g.,

$\delta_{ij} C_{jk} = C_{ik} \qquad (A.11)$

The magnitude of a vector $\mathbf{a}$ follows from the scalar product; viz.,

$|\mathbf{a}| = (\mathbf{a} \cdot \mathbf{a})^{1/2} = (a_k a_k)^{1/2} \qquad (A.14)$

The determinant of a $3 \times 3$ matrix $A$ may be written as

$\det A = \begin{vmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{vmatrix} \qquad (A.16)$

In terms of the permutation symbol,

$\epsilon_{ijk} \det A = \begin{vmatrix} A_{i1} & A_{i2} & A_{i3} \\ A_{j1} & A_{j2} & A_{j3} \\ A_{k1} & A_{k2} & A_{k3} \end{vmatrix} = \begin{vmatrix} A_{1i} & A_{1j} & A_{1k} \\ A_{2i} & A_{2j} & A_{2k} \\ A_{3i} & A_{3j} & A_{3k} \end{vmatrix}$

and the product of two permutation symbols is given by a determinant of Kronecker deltas

$\epsilon_{ijk} \epsilon_{rst} = \begin{vmatrix} \delta_{ir} & \delta_{is} & \delta_{it} \\ \delta_{jr} & \delta_{js} & \delta_{jt} \\ \delta_{kr} & \delta_{ks} & \delta_{kt} \end{vmatrix}$
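The determinant identity for the product of two permutation symbols can be verified by brute force over all index combinations; in this sketch NumPy and the helper name `eps_eps` are illustrative assumptions:

```python
import numpy as np
import itertools

delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def eps_eps(i, j, k, r, s, t):
    # 3x3 determinant of Kronecker deltas from the identity above.
    M = np.array([[delta[i, r], delta[i, s], delta[i, t]],
                  [delta[j, r], delta[j, s], delta[j, t]],
                  [delta[k, r], delta[k, s], delta[k, t]]])
    return np.linalg.det(M)

# Check eps_ijk eps_rst against the determinant for all 3^6 index choices.
for idx in itertools.product(range(3), repeat=6):
    i, j, k, r, s, t = idx
    assert np.isclose(eps[i, j, k] * eps[r, s, t], eps_eps(i, j, k, r, s, t))
```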
A.5
Coordinate Transformations
Consider a point P in space referred to two rectangular Cartesian coordinate systems. The base vectors
for one coordinate system are unprimed, while for the second one they are primed. The origins of
both coordinate systems are assumed to coincide. The position vector to this point is given by

$\mathbf{x} = x_i \hat{e}_i = x'_j \hat{e}'_j \qquad (A.17)$

To obtain a relation between the two coordinate systems, form the scalar product of the above
equation with either set of base vectors; viz.,

$\hat{e}'_k \cdot (x_i \hat{e}_i) = \hat{e}'_k \cdot (x'_j \hat{e}'_j) \qquad (A.18)$

Upon expansion,

$x_i (\hat{e}'_k \cdot \hat{e}_i) = x'_j \delta_{kj} = x'_k \qquad (A.19)$

Since $\hat{e}_i$ and $\hat{e}'_k$ are unit vectors, it follows from the definition of the scalar product that

$\hat{e}'_k \cdot \hat{e}_i = (1)(1) \cos (\hat{e}'_k, \hat{e}_i) \equiv R_{ki} \qquad (A.20)$

The $R_{ki}$ are computed by taking (pairwise) the cosines of the angles between the $x'_k$ and $x_i$ axes.
For a prescribed pair of coordinate axes, the elements of $R_{ki}$ are thus constants that can easily be
computed. From equation (A.19) it follows that the coordinate transformation for first-order tensors (vectors) is thus

$x'_k = R_{ki} x_i \qquad (A.21)$
Similarly, forming the scalar product of equation (A.17) with $\hat{e}_k$ gives

$\hat{e}_k \cdot (x_i \hat{e}_i) = \hat{e}_k \cdot (x'_j \hat{e}'_j) \qquad (A.22)$

or

$x_i \delta_{ki} = x'_j (\hat{e}_k \cdot \hat{e}'_j) \qquad (A.23)$

so that

$x_k = R_{jk} x'_j \qquad (A.24)$
Substituting equation (A.24) into equation (A.21) gives

$x'_k = R_{ki} R_{ji} x'_j \qquad (A.25)$

Thus,

$R_{ki} R_{ji} = \delta_{kj} \qquad (A.26)$

or, in matrix form,

$[R][R]^T = [R]^T[R] = [I] \qquad (A.27)$

implying that

$[R]^{-1} = [R]^T \qquad (A.28)$

that is, the $R$ are orthogonal tensors. Linear transformations such as those
given by equations (A.21) and (A.24), whose direction cosines satisfy the above equations, are thus
called orthogonal transformations.
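As a numerical check of the orthogonal transformation rules, the sketch below (NumPy and the 30-degree rotation angle are illustrative choices) applies equation (A.21), confirms orthogonality, and recovers the original components via equation (A.24):

```python
import numpy as np

# Rotation of the x1-x2 axes through 30 degrees about x3; R_ki is the
# cosine of the angle between the x'_k and x_i axes.
th = np.radians(30.0)
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

x = np.array([1.0, 2.0, 3.0])   # arbitrary sample components

# Equation (A.21): x'_k = R_ki x_i
x_prime = np.einsum('ki,i->k', R, x)

# Orthogonality: R R^T = I, so lengths are preserved, and
# equation (A.24), x_k = R_jk x'_j, recovers the original components.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.norm(x_prime), np.linalg.norm(x))
assert np.allclose(np.einsum('jk,j->k', R, x_prime), x)
```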
The transformation rules for second-order Cartesian tensors are derived in the following manner.
Let $S$ be a second-order Cartesian tensor, and let

$\mathbf{u} = S \mathbf{v} \qquad (A.29)$

In the primed coordinate system the same relation reads

$\mathbf{u}' = S' \mathbf{v}' \qquad (A.30)$

Next we desire to relate $S'$ to $S$. Using equation (A.21), substitute for $\mathbf{u}'$ and $\mathbf{v}'$ to give

$R \mathbf{u} = S' R \mathbf{v} \qquad (A.31)$

From equation (A.29),

$R \mathbf{u} = R S \mathbf{v} \qquad (A.32)$

so that

$R S \mathbf{v} = S' R \mathbf{v} \qquad (A.33)$

implying that

$S' = R S R^T \quad \text{or} \quad S'_{ij} = R_{ik} R_{jl} S_{kl} \qquad (A.34)$

In a similar manner,

$S = R^T S' R \quad \text{or} \quad S_{ij} = R_{mi} R_{nj} S'_{mn} \qquad (A.35)$
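The second-order rule can be exercised the same way. In this sketch (NumPy, the rotation angle, and the sample tensor are assumptions for illustration), the indicial form of equation (A.34) is compared against its matrix form, and equation (A.35) recovers the original tensor:

```python
import numpy as np

th = np.radians(45.0)
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

# A hypothetical symmetric second-order tensor (e.g. a stress state).
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

# Equation (A.34): S'_ij = R_ik R_jl S_kl  (matrix form S' = R S R^T)
S_prime = np.einsum('ik,jl,kl->ij', R, R, S)
assert np.allclose(S_prime, R @ S @ R.T)

# Equation (A.35): S_ij = R_mi R_nj S'_mn recovers S.
assert np.allclose(np.einsum('mi,nj,mn->ij', R, R, S_prime), S)
```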
The transformation rules for higher-order tensors are obtained in a similar manner. For example,
for tensors of rank three,

$A'_{ijk} = R_{il} R_{jm} R_{kn} A_{lmn} \qquad (A.36)$

and

$A_{ijk} = R_{li} R_{mj} R_{nk} A'_{lmn} \qquad (A.37)$

For tensors of rank four,

$C'_{ijkl} = R_{ip} R_{jq} R_{kr} R_{ls} C_{pqrs} \qquad (A.38)$

and

$C_{ijkl} = R_{pi} R_{qj} R_{rk} R_{sl} C'_{pqrs} \qquad (A.39)$
A.6
Principal Values and Principal Directions
In the present discussion, only symmetric second-order tensors with real components are considered.
For every symmetric tensor $A$, defined at some point in space, there is associated with each direction
(specified by the unit normal $\hat{n}$) at the point, a vector given by the inner product

$\mathbf{v} = A \hat{n} \qquad (A.40)$

If $\hat{n}$ is a direction for which $\mathbf{v}$ is parallel to $\hat{n}$, the inner product may be expressed as a scalar
multiple of $\hat{n}$; viz.,

$\mathbf{v} = A \hat{n} = \lambda \hat{n} \qquad (A.41)$

or

$(A_{ij} - \lambda \delta_{ij}) n_j = 0 \qquad (A.42)$

For a non-trivial solution, the determinant of the coefficients must be zero; viz.,

$\det (A - \lambda I) = 0 \quad \text{or} \quad \left| A_{ij} - \lambda \delta_{ij} \right| = 0 \qquad (A.43)$
This is called the characteristic equation of $A$. In light of the symmetry of $A$, the expansion of
equation (A.43) gives

$\begin{vmatrix} (A_{11} - \lambda) & A_{12} & A_{13} \\ A_{12} & (A_{22} - \lambda) & A_{23} \\ A_{13} & A_{23} & (A_{33} - \lambda) \end{vmatrix} = 0 \qquad (A.44)$

The evaluation of this determinant leads to a cubic polynomial in $\lambda$, known as the characteristic
polynomial of $A$; viz.,

$\lambda^3 - I_1 \lambda^2 + I_2 \lambda - I_3 = 0 \qquad (A.45)$
where

$I_1 = A_{ii} \qquad (A.46)$

$I_2 = \frac{1}{2} \left( A_{ii} A_{jj} - A_{ij} A_{ij} \right) \qquad (A.47)$

$I_3 = \det (A) \qquad (A.48)$

The scalar coefficients $I_1$, $I_2$ and $I_3$ are called the first, second and third invariants, respectively,
derived from the characteristic equation of $A$.
The three roots $\lambda^{(i)}$, $i = 1, 2, 3$ of the characteristic polynomial are called the principal values
or eigenvalues of $A$. Associated with each eigenvalue is an eigenvector $\mathbf{n}^{(i)}$. For a symmetric tensor
with real components, the principal values are real. If the three principal values are distinct, the
three principal directions are mutually orthogonal. When referred to principal axes, $A$ assumes a
diagonal form; viz.,

$A = \begin{bmatrix} \lambda^{(1)} & 0 & 0 \\ 0 & \lambda^{(2)} & 0 \\ 0 & 0 & \lambda^{(3)} \end{bmatrix} \qquad (A.49)$
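In code, the principal values and directions of a symmetric tensor are exactly what a symmetric eigensolver returns. This sketch (NumPy and the sample tensor are illustrative assumptions) checks equation (A.41) and the diagonal form of equation (A.49):

```python
import numpy as np

# Hypothetical symmetric tensor with real components.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Principal values (eigenvalues) and principal directions (eigenvectors).
lam, n = np.linalg.eigh(A)

# Each pair satisfies A n = lambda n, equation (A.41).
for i in range(3):
    assert np.allclose(A @ n[:, i], lam[i] * n[:, i])

# Referred to principal axes, A is diagonal, equation (A.49).
assert np.allclose(n.T @ A @ n, np.diag(lam))
```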
Remark

1. Eigenvalues and eigenvectors have a useful geometric interpretation in two- and three-dimensional
space. If $\lambda$ is an eigenvalue of $A$ corresponding to $\mathbf{v}$, then $A\mathbf{v} = \lambda\mathbf{v}$, so that, depending
on the value of $\lambda$, multiplication by $A$ dilates $\mathbf{v}$ (if $\lambda > 1$), contracts $\mathbf{v}$ (if $0 < \lambda < 1$), or reverses
the direction of $\mathbf{v}$ (if $\lambda < 0$).
Example 1: Invariants of First-Order Tensors
Consider a vector v. If the coordinate axes are rotated, the components of v will change.
However, the length (magnitude) of v remains unchanged. As such, the length is said to be invariant.
In fact a vector (first-order tensor) has only one invariant, its length.
Example 2: Invariants of Second-Order Tensors

A second-order tensor possesses three invariants. Denoting the tensor by $A$, its invariants are
(these differ from the ones derived from the characteristic equation of $A$)

$I_1 = \operatorname{tr} (A) = A_{11} + A_{22} + A_{33} = A_{kk} \qquad (A.50)$

$I_2 = \frac{1}{2} \operatorname{tr} (A^2) = \frac{1}{2} A_{ik} A_{ki} \qquad (A.51)$

$I_3 = \frac{1}{3} \operatorname{tr} (A^3) = \frac{1}{3} A_{ik} A_{kj} A_{ji} \qquad (A.52)$

Any function of the invariants is also an invariant. To verify that the first invariant is unchanged
under coordinate transformation, recall that

$A'_{ij} = R_{ik} R_{jl} A_{kl} \qquad (A.53)$

Setting $j = i$ and summing gives

$A'_{ii} = R_{ik} R_{il} A_{kl} \qquad (A.54)$

Since the direction cosines satisfy $R_{ik} R_{il} = \delta_{kl}$, it follows that

$A'_{ii} = \delta_{kl} A_{kl} = A_{kk} \qquad (A.55)$

Thus,

$I'_1 = A'_{ii} = A_{kk} = I_1 \qquad (A.56)$
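The invariance of all three quantities under an orthogonal transformation can be confirmed numerically; in this sketch NumPy, the rotation angle, and the sample tensor are illustrative choices:

```python
import numpy as np

th = np.radians(20.0)
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 0.0],
              [2.0, 0.0, 6.0]])

def invariants(T):
    # I1 = tr(T), I2 = tr(T^2)/2, I3 = tr(T^3)/3.
    return (np.trace(T),
            0.5 * np.trace(T @ T),
            np.trace(T @ T @ T) / 3.0)

A_prime = R @ A @ R.T
assert np.allclose(invariants(A), invariants(A_prime))
```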
A.7
Tensor Calculus
In a rectangular Cartesian coordinate system, the vector differential operator $\nabla$ is written as

$\nabla = \hat{e}_1 \dfrac{\partial}{\partial x_1} + \hat{e}_2 \dfrac{\partial}{\partial x_2} + \hat{e}_3 \dfrac{\partial}{\partial x_3} = \hat{e}_i \dfrac{\partial}{\partial x_i} \qquad (A.57)$

Applied to a scalar field $\phi$, it produces the gradient vector

$\nabla \phi = \dfrac{\partial \phi}{\partial x_i} \hat{e}_i = \phi_{,i} \hat{e}_i \qquad (A.58)$

If $\hat{n} = n_i \hat{e}_i$ is a unit vector, the scalar operator

$\hat{n} \cdot \nabla = n_i \dfrac{\partial}{\partial x_i} \qquad (A.59)$

gives the derivative in the direction of $\hat{n}$. The divergence of a vector field $\mathbf{v}$ is the scalar

$\nabla \cdot \mathbf{v} = \dfrac{\partial v_i}{\partial x_i} = v_{i,i} \qquad (A.60)$

The curl of a vector field $\mathbf{u}$ is the vector

$\nabla \times \mathbf{u} = \epsilon_{ijk} \dfrac{\partial u_k}{\partial x_j} \hat{e}_i = \epsilon_{ijk} u_{k,j} \hat{e}_i \qquad (A.61)$

whereas $\mathbf{u} \times \mathbf{v} = \epsilon_{kij} u_i v_j \hat{e}_k$.
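The permutation-symbol form of the cross product translates directly into an einsum call. In this sketch (NumPy and the vector values are illustrative assumptions), $(\mathbf{u} \times \mathbf{v})_k = \epsilon_{kij} u_i v_j$ is compared against the built-in cross product:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Illustrative vectors (values are arbitrary).
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# (u x v)_k = eps_kij u_i v_j
w = np.einsum('kij,i,j->k', eps, u, v)
assert np.allclose(w, np.cross(u, v))
```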
The Laplacian of a scalar field $\phi$ is given by

$\nabla^2 \phi = \dfrac{\partial^2 \phi}{\partial x_i \, \partial x_i} = \phi_{,ii} \qquad (A.63)$

The same result follows from taking the divergence of the gradient; viz.,

$\nabla^2 \phi = \nabla \cdot (\nabla \phi) = \hat{e}_i \dfrac{\partial}{\partial x_i} \cdot (\phi_{,j} \hat{e}_j) = \dfrac{\partial^2 \phi}{\partial x_j \, \partial x_i} \left( \hat{e}_i \cdot \hat{e}_j \right) = \phi_{,ji} \delta_{ij} = \phi_{,ii} \qquad (A.64)$

Let $\mathbf{v}(x_1, x_2, x_3)$ be a vector field. The Laplacian of $\mathbf{v}$ is the following vector quantity:

$\nabla^2 \mathbf{v} = \dfrac{\partial^2 v_k}{\partial x_i \, \partial x_i} \hat{e}_k = v_{k,ii} \hat{e}_k \qquad (A.65)$
Remark

1. An alternate statement of the Laplacian of a vector is

$\nabla^2 \mathbf{v} = \nabla (\nabla \cdot \mathbf{v}) - \nabla \times (\nabla \times \mathbf{v}) \qquad (A.66)$
References
[1] Fung, Y. C., A First Course in Continuum Mechanics, Second Edition. Englewood Cliffs, NJ:
Prentice Hall (1977).
[2] Joshi, A. W., Matrices and Tensors in Physics, 2nd Edition. A Halsted Press Book, New York:
J. Wiley and Sons (1984).
[3] Mase, G. E., Continuum Mechanics, Schaum's Outline Series. New York: McGraw-Hill Book Co.
(1970).
[4] Sokolnikoff, I. S., Tensor Analysis, Theory and Applications. New York: J. Wiley and Sons
(1958).