
Lecture Note 1

Mathematical Preliminaries
AE51003/AE61009: Applied Elasticity and Plasticity

Prasun Jana
Assistant Professor, Aerospace Engineering
IIT Kharagpur

1 Introduction
The aim of this section is to present the fundamental rules and standard results of tensor
algebra used in the discussion of the concepts of elasticity and plasticity. Many of the
statements are given without proof. For a more detailed exposition, see a standard book on
vectors and tensors.

2 Algebra of vectors
Scalar- A physical quantity completely described by a single number. Examples: temperature,
density, mass, etc.
Vector- A directed line element in space. It is a model of physical quantities having both
direction and length. Examples: force, velocity, acceleration, etc.

2.1 Properties of vectors


Some useful properties of vectors are discussed below. To know more details, please read
about ‘linear vector space’, dot product, cross product, basis vectors, etc. from a standard
Engineering Mathematics book, e.g. Kreyszig.

Note: This lecture note should be used for reading purposes only. Many texts of this note
may not be original; they are taken directly from some reference materials.

Dot product (or scalar product or inner product):

u · v = |u||v| cos θ

Projection of a vector (u) along the direction of another vector (v):

u · êv = |u| cos θ


where êv is the unit vector along vector v, defined by êv = v/|v|. See Fig. 1.

Figure 1: Projection of vector u along the direction of vector v.

Cross product (or vector product):

|u × v| = |u||v| sin θ with 0 ≤ θ ≤ π.

This represents the area of a parallelogram spanned by the vectors u and v (see Fig. 2).
Triple scalar product (or box product): This represents the volume (V ) of a parallelepiped
spanned by u, v and w forming a right-handed triad.

V = (u × v) · w = (v × w) · u = (w × u) · v.
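These identities are easy to check numerically. The sketch below (the vector values are arbitrary illustrative choices, not from the notes) verifies the projection formula and the cyclic symmetry of the box product with NumPy:

```python
# Numerical check of the dot, cross, and box products with NumPy.
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 3.0, 1.0])
w = np.array([2.0, 0.0, 2.0])

# Dot product: u . v = |u||v| cos(theta)
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Projection of u along v: u . e_v = |u| cos(theta)
e_v = v / np.linalg.norm(v)                 # unit vector along v
proj = np.dot(u, e_v)
assert np.isclose(proj, np.linalg.norm(u) * cos_theta)

# Box product: the volume is the same for all cyclic permutations
V1 = np.dot(np.cross(u, v), w)
V2 = np.dot(np.cross(v, w), u)
V3 = np.dot(np.cross(w, u), v)
assert np.isclose(V1, V2) and np.isclose(V2, V3)
```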

2.2 Basis vectors


In elasticity, it is essential to refer vector (or tensor) quantities to a basis and work with their
components along these basis vectors.

Figure 2: Cross product of vectors u and v; note that u × v = −v × u.

A basis of a vector space Rn is defined as a subset v1 , v2 , v3 , ..., vn of vectors in Rn
that are linearly independent and span Rn . With this basis,
every vector v ∈ Rn can be uniquely written as

v = a1 v1 + a2 v2 + a3 v3 + · · · + an vn .

Examples:
(1) Is the collection {ê1 , ê1 + ê2 , 2ê2 } a basis for R2 ?

Solution:
No. Although they span R2 , they are not linearly independent.
(2) Is the collection {ê1 + ê2 , ê2 + ê3 } a basis for R3 ?
Solution:
No. Although they are linearly independent, they don’t span all of R3 . For example,
ê1 + ê2 + ê3 cannot be expressed as a linear combination of these two vectors.

In this course, we will work with a right-handed orthonormal (RHON) system, as shown
in Fig. 3, with basis vectors ê1 , ê2 , ê3 (sometimes î, ĵ, k̂) called Cartesian basis. Vectors of
unit length which are mutually orthogonal form a so-called orthonormal system.

u = u1 ê1 + u2 ê2 + u3 ê3 ,

where, u1 , u2 , u3 are the components of vector u along ê1 , ê2 , ê3 , respectively.

Figure 3: Right-hand orthogonal Cartesian coordinate system êi in three-dimensional space.

3 Index notation
In terms of the basis system êi , an arbitrary vector u is given in component form by
u = u1 ê1 + u2 ê2 + u3 ê3 = Σ_{i=1}^{3} ui êi = ui êi .

Here, we adopt the summation convention invented by Einstein. It says that whenever
an index is repeated (only once) in the same term, then, a summation over the range of this
index is implied unless otherwise indicated. Therefore,

u = ui êi = uj êj = um êm .

The index i that is summed over is said to be a dummy index (since its replacement by
another letter does not affect the sum). An index that is not summed over is called a free
(or live) index.

Example:

bi = Aij xj ,

Here, i is a free index and j is a dummy index. This expression in index notation actually
represents a system of three equations:

b1 =A11 x1 + A12 x2 + A13 x3


b2 =A21 x1 + A22 x2 + A23 x3
b3 =A31 x1 + A32 x2 + A33 x3

OR

| b1 |   | A11 A12 A13 | | x1 |
| b2 | = | A21 A22 A23 | | x2 |
| b3 |   | A31 A32 A33 | | x3 |

OR

b = Ax (matrix form)
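The equivalence between the index equation bi = Aij xj and the matrix form b = Ax can be checked with NumPy's einsum, which spells out the summation over the dummy index j explicitly (the component values below are arbitrary):

```python
# The index equation b_i = A_ij x_j versus the matrix product b = A x.
import numpy as np

A = np.arange(9.0).reshape(3, 3)    # arbitrary 3x3 array of components
x = np.array([1.0, -2.0, 3.0])

b_index = np.einsum('ij,j->i', A, x)   # explicit sum over the dummy index j
b_matrix = A @ x                       # matrix form b = Ax

assert np.allclose(b_index, b_matrix)
```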

More examples:
b = Σ_{i=1}^{3} Σ_{j=1}^{3} aij xi xj = aij xi xj

The above equation gives nine terms (i and j are each repeated only once).

b = Σ_{i=1}^{3} ai xi xi ≠ ai xi xi .

Here, i is repeated more than once, so the summation convention does not apply and the
summation sign must be written explicitly.

Tij = Aim Bmj

Here, m is a dummy index and i, j are free indices. This equation represents a set of nine
equation. The same equation in matrix form is written as.

T = AB.

Where, T, A and B are 3 × 3 square matrices.

Few more examples of equations:


Some equations have no meaning. Examples:

ai = bj

Tij = Tik .

These are not valid equations.

Equations that have meaning:


ai + bi = ci

ai + bi cj dj = 0.

These are valid equations.

3.1 Kronecker delta



δij = { 1, if i = j,
      { 0, if i ≠ j.
Dot product of basis vectors:
êi · êj = δij .

Useful properties:
a) δii = 3
b) δij ui = uj
δij Aik = Ajk
c) δij δjk = δik
Note that in the two equations shown in point b) above, δij acts as a replacement operator.
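A small numerical sketch of these properties, with δij represented by the 3 × 3 identity matrix (the vector and matrix values are arbitrary choices):

```python
# Kronecker delta as the 3x3 identity; checks of delta_ii = 3 and the
# "replacement operator" properties from the notes.
import numpy as np

delta = np.eye(3)
u = np.array([2.0, -1.0, 5.0])
A = np.random.rand(3, 3)    # arbitrary components; identities hold for any A

assert np.einsum('ii->', delta) == 3.0                    # delta_ii = 3
assert np.allclose(np.einsum('ij,i->j', delta, u), u)     # delta_ij u_i = u_j
assert np.allclose(np.einsum('ij,ik->jk', delta, A), A)   # delta_ij A_ik = A_jk
assert np.allclose(np.einsum('ij,jk->ik', delta, delta), delta)  # delta_ij delta_jk = delta_ik
```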

Projection of u on the basis vector êi yields the ith components of u.

u · êi = (um êm ) · êi


= um êm · êi
= um δmi
= ui

Dot product of two arbitrary vectors:

u · v = (ui êi ) · (vj êj )
      = ui vj êi · êj
      = ui vj δij
      = ui vi
      = u1 v1 + u2 v2 + u3 v3

3.2 Permutation or Levi-Civita (ε) symbol



1,

 for even permutation of i, j, k (i.e. 123, 231, 312),

εijk = −1, for odd permutation of i, j, k (i.e. 132, 213, 321),


0,

if there is any repeated index.
Properties:
εijk = εjki = εkij = −εikj = −εjik = −εkji
Cross product of basis vectors:

êi × êj = εijk êk

Cross product of two arbitrary vectors:

u × v = (ui êi ) × (vj êj )


= ui vj êi × êj
= εijk ui vj êk
= wk êk with wk = εijk ui vj .
This gives w1 = u2 v3 − u3 v2 , w2 = u3 v1 − u1 v3 , w3 = u1 v2 − u2 v1 .
Triple product:
V = (u × v) · w
= εijk ui vj êk · wm êm
= εijk ui vj wm δkm
= εijk ui vj wk .
Note: Three vectors u, v, and w are said to be linearly dependent if and only if their triple
product vanishes (the parallelepiped has no volume).

3.3 Manipulation with index notation
Substitution:
If ai = uim bm and bi = vim cm , then what is ai in terms of u, v and c?
Solution: bm = vmn cn . Therefore, ai = uim vmn cn .

Multiplication:
If a = pm qm and b = rm sm , then what is ab?
Solution: ab = pm qm rn sn .
Factoring:
How to factorize Tij nj − λni = 0?
Solution: We know that ni = δij nj . Therefore,

Tij nj − λδij nj = 0

OR
(Tij − λδij )nj = 0.

The above equation can be written in matrix form as [T − λI]n = 0, eigenvalue problem!

4 Definition of a second-order tensor


A second-order tensor A is a linear operator that acts on a vector u generating a vector v.
We write
v = A u.

This defines a linear transformation that assigns a vector v to each vector u. For a linear
transformation:
A(αu + βv) = αA u + βA v.

Examples:
(1) Let A be a nonzero transformation that transforms every vector into a fixed nonzero
vector n. Is this transformation a tensor?

Solution:
According to the problem statement, A u = n and A v = n. Since u + v is also a vector

A (u + v) = n. Therefore,
A (u + v) ≠ A u + A v.

Thus, this transformation is not a linear one. A is not a tensor.

(2) Let A be a nonzero transformation that transforms every vector into a vector that is k
times the original vector. Is this transformation a tensor?

Solution:
According to the problem statement, A u = ku and A v = kv. Also,

A (αu + βv) = k(αu + βv)


= αku + βkv
= αA u + βA v

This transformation is a linear transformation. Therefore, A is a tensor.

4.1 Components of a tensor


The components of a vector depend on the basis vector used to describe the components.
This will also be true for tensors as well.
Let, under a transformation A, the basis vectors ê1 , ê2 , ê3 become Aê1 , Aê2 , Aê3 . Each
of these Aêi , being a vector, can be written as:

Aê1 = A11 ê1 + A21 ê2 + A31 ê3


Aê2 = A12 ê1 + A22 ê2 + A32 ê3
Aê3 = A13 ê1 + A23 ê2 + A33 ê3

In compact form, the above equations are written as Aêi = Aji êj .
Here Aij , which can be expressed as Aij = êi · Aêj , are defined as the components of the
tensor A. In matrix form, we write
 
      | A11 A12 A13 |
[A] = | A21 A22 A23 | .
      | A31 A32 A33 |

Examples:
1) Obtain the matrix for the tensor A that transforms the base vectors as follows:

Aê1 = 4ê1 + ê2


Aê2 = 2ê1 + 3ê3
Aê3 = −ê1 + 3ê2 + ê3

Solution:
      | 4 2 −1 |
[A] = | 1 0  3 | .
      | 0 3  1 |

2) Let R correspond to a right-hand rotation of a rigid body about the x3 -axis by an angle
θ (see Fig. 4). Find the matrix of R.

Figure 4: Right-hand rotation about the x3 -axis.

Solution:
Rê1 = cos θê1 + sin θê2
Rê2 = − sin θê1 + cos θê2
Rê3 = ê3

Therefore,
      | cos θ  − sin θ  0 |
[R] = | sin θ   cos θ   0 | .
      | 0       0       1 |
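The construction Aij = êi · Aêj means the columns of the matrix are the images of the basis vectors; this can be illustrated numerically for the rotation example (θ = 30° below is an arbitrary choice):

```python
# Rotation tensor about e3: columns of [R] are R e1, R e2, R e3.
import numpy as np

theta = np.deg2rad(30.0)            # arbitrary angle
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

e1, e2, e3 = np.eye(3)              # Cartesian basis vectors
assert np.allclose(R @ e1, c * e1 + s * e2)    # R e1 = cos(t) e1 + sin(t) e2
assert np.allclose(R @ e2, -s * e1 + c * e2)   # R e2 = -sin(t) e1 + cos(t) e2
assert np.allclose(R @ e3, e3)                 # R e3 = e3
assert np.isclose(R[0, 1], e1 @ (R @ e2))      # A_ij = e_i . (A e_j)
```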

4.2 Components of a transformed vector


Given a vector u and a tensor A, we would like to find the components of v = A u. We
know

u = u1 ê1 + u2 ê2 + u3 ê3 ,


v = Au
= A(u1 ê1 + u2 ê2 + u3 ê3 )
= u1 Aê1 + u2 Aê2 + u3 Aê3
Now to get components of v

v1 = ê1 · v
= ê1 · (u1 Aê1 + u2 Aê2 + u3 Aê3 )
= u1 (ê1 · Aê1 ) + u2 (ê1 · Aê2 ) + u3 (ê1 · Aê3 )
= A11 u1 + A12 u2 + A13 u3

Similarly,
v2 = A21 u1 + A22 u2 + A23 u3
v3 = A31 u1 + A32 u2 + A33 u3
In matrix form, we can write
| v1 |   | A11 A12 A13 | | u1 |
| v2 | = | A21 A22 A23 | | u2 |
| v3 |   | A31 A32 A33 | | u3 |

OR

v = Au (matrix form)

Here, A is a 3 × 3 matrix representing the tensor A, and u and v are 3 × 1 column matrices
representing the vectors.

What is the form of the above equation in indicial notation?
Ans: From observation, we can find
vi = Aij uj .

A more formal way to get the above relation:

u = ui êi

v = Au
= Aui êi
= ui Aêi
Now for the components of v,
vi = êi · v
= êi · uj Aêj
= (êi · Aêj )uj
= Aij uj

Example:
Let R correspond to a right-hand rotation of a rigid body about the x3 -axis by an angle θ.
Consider some vector u = r cos φê1 + r sin φê2 . When R acts on u, it generates a transformed
vector v. See Fig. 5. Find v.

Solution:
According to the tensor algebra,
v = Ru

Figure 5: Transformed vector v generated through the transformation of u by a right-hand
rotation about the x3 -axis.

In matrix form,

    | cos θ  − sin θ  0 | | r cos φ |
v = | sin θ   cos θ   0 | | r sin φ |
    | 0       0       1 | |    0    |

    | r cos θ cos φ − r sin θ sin φ |
  = | r sin θ cos φ + r cos θ sin φ |
    |               0               |

    | r cos(θ + φ) |
  = | r sin(θ + φ) | .
    |       0      |

Therefore,
v = r cos(θ + φ)ê1 + r sin(θ + φ)ê2 .

4.3 Product of two tensors
Let A and B be two tensors and u be an arbitrary vector. Then, A B and B A are defined
to be the transformations

(A B)u = A(Bu) and (B A)u = B(Au).

Components:
(A B)ij = eˆi · (A B)eˆj
= eˆi · A(B eˆj )
= eˆi · A(Bmj eˆm )
= eˆi · Aeˆm Bmj
= Aim Bmj
Similarly, we can get
(B A)ij = Bim Amj .

In matrix form
[AB] = [A][B]

[BA] = [B][A]

Similar to matrix product, tensor product is also non-commutative.

A B ≠ B A as [A][B] ≠ [B][A] (physical example - rotation of a die)

Index notation to matrix form:

Cij = Aim Bmj =⇒ [C] = [A][B]


Cij = Ami Bmj =⇒ [C] = [A]T [B] as C11 = A11 B11 + A21 B21 + A31 B31
Cij = Ami Bnj Dmn =⇒ [C] = [A]T [D][B]
Note that (A B) is a second-order tensor, as (A B)(αu + βv) = A(B(αu + βv)) = A(αB u + βB v) =
α(A B)u + β(A B)v.
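The index-to-matrix translations listed above can be verified with random components; a sketch using einsum for the index expressions:

```python
# Index-to-matrix translations:
#   C_ij = A_im B_mj  -> [A][B]
#   C_ij = A_mi B_mj  -> [A]^T [B]
#   C_ij = A_mi B_nj D_mn -> [A]^T [D][B]
import numpy as np

rng = np.random.default_rng(0)   # arbitrary reproducible components
A, B, D = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))

assert np.allclose(np.einsum('im,mj->ij', A, B), A @ B)
assert np.allclose(np.einsum('mi,mj->ij', A, B), A.T @ B)
assert np.allclose(np.einsum('mi,nj,mn->ij', A, B, D), A.T @ D @ B)

# Tensor (matrix) product is non-commutative in general
assert not np.allclose(A @ B, B @ A)
```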

4.4 Transpose of a tensor
The transpose of a tensor A, denoted by AT , is defined to be the tensor which satisfies the
following identity for all vectors u and v:

u · A v = v · AT u.

From above, we have


êi · Aêj = êj · AT êi
Aij = (AT )ji    (or (AT )ij = Aji )

4.5 Trace of a tensor


tr(A) = Aii

4.6 Identity tensor


The linear transformation which transforms every vector into itself is called an identity
tensor.
I u = u

4.7 Orthogonal tensor


An orthogonal tensor is a linear transformation under which the transformed vectors preserve
their lengths and angles. Let Q denote an orthogonal tensor (see Fig. 6), then

|Qu| = |u|

and
cos(u, v) = cos(Qu, Qv).

Therefore,
Qu · Qv = u · v

=⇒ [Qu]T [Qv] = [u]T [v]

=⇒ [[Q][u]]T [Q][v] = [u]T [v]

=⇒ [u]T [Q]T [Q][v] = [u]T [I][v]

Figure 6: Lengths of the vectors and angles between them remain unchanged during an
orthogonal transformation.

=⇒ [Q]T [Q] = [I] = [Q][Q]T

=⇒ Qim Qjm = δij = Qmi Qmj

Thus,
[Q]T = [Q]−1

Determinant of orthogonal tensor:

det[[Q][Q]T ] = (det[Q])2 = det[I] = 1

Hence,
det[Q] = ±1

If det[Q] = +1, Q is said to be a proper orthogonal tensor, which corresponds to a rotation.

If det[Q] = −1, Q is said to be an improper orthogonal tensor, which corresponds to a reflection.
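A numerical illustration (with an arbitrary angle) of the orthogonality conditions and of the proper/improper distinction:

```python
# Orthogonal tensors: Q^T Q = I, lengths preserved, det Q = +1 (rotation)
# or det Q = -1 (reflection).
import numpy as np

t = 0.7   # arbitrary rotation angle in radians
Q_rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0, 0.0, 1.0]])
Q_ref = np.diag([1.0, 1.0, -1.0])   # mirror about the e1-e2 plane

u = np.array([1.0, 2.0, 3.0])
for Q in (Q_rot, Q_ref):
    assert np.allclose(Q.T @ Q, np.eye(3))                        # orthogonality
    assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))   # |Qu| = |u|

assert np.isclose(np.linalg.det(Q_rot), 1.0)    # proper: rotation
assert np.isclose(np.linalg.det(Q_ref), -1.0)   # improper: reflection
```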

5 Tensor transformation laws


5.1 Transformation law for basis vectors

Let êi and ê′i be two right-handed orthogonal Cartesian coordinate systems (see Fig []).

As both are right-handed, êi can be made to coincide with ê′i through a suitable rotation.

Hence, êi and ê′i can be related by an orthogonal tensor Q:

ê′i = Qêi = Qji êj

Therefore,

ê′1 = Q11 ê1 + Q21 ê2 + Q31 ê3
ê′2 = Q12 ê1 + Q22 ê2 + Q32 ê3
ê′3 = Q13 ê1 + Q23 ê2 + Q33 ê3
The elements of the tensor Q can be obtained using the following relation:

Qij = êi · Qêj
    = êi · ê′j
    = cos(êi , ê′j ) → cosine of the angle between êi and ê′j .

As Q is orthogonal, the following relationships hold (written in tensorial, index and
matrix form, respectively):

QQT = QT Q = I

Qim Qjm = Qmi Qmj = δij

[Q][Q]T = [Q]T [Q] = [I].

5.2 Transformation law for components of vectors (primed components)

Consider two RHON bases êi and ê′i , and an arbitrary vector u. The vector u can be
expressed using either of these basis systems. Hence, we can write

u = ui êi or u = u′i ê′i .

To find the primed components,

u′i = u · ê′i
    = u · Qji êj
    = Qji u · êj
    = Qji uj .

In matrix form,

[u′] = [Q]T [u].

The above is the transformation law relating components of the same vector with respect to
different RHON systems. Using the orthogonality relation, we can also write

[u] = [Q][u′]

or

ui = Qij u′j .

Example:

Let the primed coordinate system ê′i correspond to a right-hand rotation of a rigid body
about the x3 -axis by an angle θ. Consider some vector u = r cos φê1 + r sin φê2 . Find the
components of u with respect to the primed system ê′i .

Solution:
Here, the transformation Q is given by

      | cos θ  − sin θ  0 |
[Q] = | sin θ   cos θ   0 |
      | 0       0       1 |

We know that [u′] = [Q]T [u]. Therefore,

       | cos θ    sin θ  0 | | r cos φ |
[u′] = | − sin θ  cos θ  0 | | r sin φ |
       | 0        0      1 | |    0    |

       | r cos θ cos φ + r sin θ sin φ |
     = | r sin θ cos φ − r cos θ sin φ |
       |               0               |

       | r cos(φ − θ) |
     = | r sin(φ − θ) | .
       |       0      |

Hence,
u = r cos(φ − θ)ê′1 + r sin(φ − θ)ê′2 .
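The example can be confirmed numerically; the values of r, φ and θ below are arbitrary choices:

```python
# Primed components of u = r cos(phi) e1 + r sin(phi) e2 under a rotation
# by theta about e3: u' = Q^T u = (r cos(phi - theta), r sin(phi - theta), 0).
import numpy as np

theta, phi, r = 0.4, 1.1, 2.0   # arbitrary values
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
u = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])

u_prime = Q.T @ u
expected = np.array([r * np.cos(phi - theta), r * np.sin(phi - theta), 0.0])
assert np.allclose(u_prime, expected)
```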

5.3 Transformation law for components of tensors (primed components)

Consider two RHON bases êi and ê′i , and an arbitrary tensor A. The components of
tensor A can be expressed using either of these two basis systems. Hence, we can write

Aij = êi · Aêj and A′ij = ê′i · Aê′j , with ê′i = Qji êj .

Therefore, to find the primed components of tensor A,

A′ij = Qmi êm · A(Qnj ên )
     = Qmi Qnj êm · Aên
     = Qmi Qnj Amn
     = Qmi Amn Qnj .

In matrix form, we write

[A′] = [Q]T [A][Q].

Using the orthogonality relation, we can also write

[A] = [Q][A′][Q]T

OR

Aij = Qim A′mn Qjn .

Example:
 
The matrix of tensor A with respect to êi is

      | 0 1 −3 |
[A] = | 1 2  0 | .
      | 0 0  1 |

Find [A′] with respect to ê′i , where ê′i is obtained by rotating êi about ê3 through 90°.

Solution:
Here, the transformation Q is given by

      | 0 −1 0 |
[Q] = | 1  0 0 |
      | 0  0 1 |


We know that [A′] = [Q]T [A][Q]. Therefore,

       |  0 1 0 | | 0 1 −3 | | 0 −1 0 |   |  2 −1 0 |
[A′] = | −1 0 0 | | 1 2  0 | | 1  0 0 | = | −1  0 3 | .
       |  0 0 1 | | 0 0  1 | | 0  0 1 |   |  0  0 1 |

Therefore, the representation of A with respect to ê′i is the matrix on the right.
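The same computation can be reproduced in NumPy as a check of the transformation law:

```python
# Worked example: [A'] = [Q]^T [A][Q] for a 90-degree rotation about e3.
import numpy as np

A = np.array([[0.0, 1.0, -3.0],
              [1.0, 2.0,  0.0],
              [0.0, 0.0,  1.0]])
Q = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

A_prime = Q.T @ A @ Q
expected = np.array([[ 2.0, -1.0, 0.0],
                     [-1.0,  0.0, 3.0],
                     [ 0.0,  0.0, 1.0]])
assert np.allclose(A_prime, expected)

# The trace and determinant are invariants, unchanged by the rotation
assert np.isclose(np.trace(A_prime), np.trace(A))
assert np.isclose(np.linalg.det(A_prime), np.linalg.det(A))
```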

5.4 Defining tensors by transformation laws


Through the use of transformation laws relating components of a tensor with respect to
different bases, the types of tensor can be defined. Confining ourselves to only rectangular
Cartesian coordinate systems and using unit vectors along positive coordinate directions as
base vectors, we now define Cartesian components of tensors of different orders in terms of
their transformation laws in the following, where the primed quantities are referred to basis
ê′i and the unprimed quantities to basis êi , where ê′i and êi are related by ê′i = Qêi , Q being
an orthogonal transformation:


α′ = α =⇒ Zeroth-order tensor (or scalar)

u′i = Qmi um =⇒ First-order tensor (or vector)

A′ij = Qmi Qnj Amn =⇒ Second-order tensor (or tensor)

S′ijk = Qmi Qnj Qrk Smnr =⇒ Third-order tensor

C′ijkl = Qmi Qnj Qrk Qsl Cmnrs =⇒ Fourth-order tensor.

6 Eigenvalues and eigenvectors of a tensor


Consider a tensor A. If u is a nonzero vector which transforms under A into a vector parallel to itself,
i.e.,
A u = λu ,

then u is an eigenvector and λ is the corresponding eigenvalue.

Key points:

1. Any vector parallel to u is also an eigenvector with eigenvalue λ.

2. Any scalar multiple of u is also an eigenvector with eigenvalue λ.

A(αu) = αA u = αλu = λ(αu)

For definiteness, we generally take all eigenvectors of unit length.

3. A tensor may have infinitely many eigenvectors. Example:

I u = u → any vector is an eigenvector with eigenvalue 1.

4. Some tensors have eigenvectors in only one direction. Example:


 
      | cos θ  − sin θ  0 |
[R] = | sin θ   cos θ   0 |   with θ ≠ nπ, n = 1, 2, 3, ...
      | 0       0       1 |

Here, only those vectors which are parallel to the axis of rotation (ê3 -axis) will remain
parallel to themselves.

Characteristic equation:
Let n̂ be a unit eigenvector, then for the eigenvalue problem we write

An̂ = λn̂ = λI n̂

(A − λ I)n̂ = 0.

In matrix form, as An̂ = [A]{n}, we can write

[[A] − λ[I]]{n} = [0].

If n̂ = n1 ê1 + n2 ê2 + n3 ê3 , then


    
| A11 − λ   A12       A13     | | n1 |   | 0 |
| A21       A22 − λ   A23     | | n2 | = | 0 |
| A31       A32       A33 − λ | | n3 |   | 0 |

We know that n1 = n2 = n3 = 0 is a trivial solution of the above equation.

However, for a non-trivial solution,

det[[A] − λ[I]] = 0

=⇒ λ³ − I1 λ² + I2 λ − I3 = 0.
This is called the characteristic equation of the eigenvalue problem and the roots of this
equation are the eigenvalues. Here,
I1 = A11 + A22 + A33 = Aii = tr[A]

I2 = det | A11 A12 | + det | A22 A23 | + det | A11 A13 |
         | A21 A22 |       | A32 A33 |       | A31 A33 |

   = (1/2)(Aii Ajj − Aij Aji )

   = (1/2)((tr[A])² − tr[A²])

         | A11 A12 A13 |
I3 = det | A21 A22 A23 | = det[A].
         | A31 A32 A33 |


These three terms, I1 , I2 and I3 , are called the principal scalar invariants of A. They remain
invariant under a change of basis vectors. (Note that the eigenvalues of A also do not depend
on the choice of basis vectors.)

Example:
Let the matrix of a tensor be

      | 2 1 0 |
[A] = | 1 3 0 | .
      | 0 0 4 |

To determine the eigenvalues,

                  | 2 − λ    1      0    |
det[[A] − λ[I]] = |   1    3 − λ    0    | = (4 − λ)((2 − λ)(3 − λ) − 1) = 0
                  |   0      0    4 − λ  |

So the eigenvalues are 4 and (5 ± √5)/2.
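The eigenvalues and the principal invariants of this example can be confirmed with NumPy:

```python
# Eigenvalues and principal invariants of the worked example.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])

lam = np.sort(np.linalg.eigvalsh(A))   # A is symmetric, so eigenvalues are real
expected = np.sort([4.0, (5 + np.sqrt(5)) / 2, (5 - np.sqrt(5)) / 2])
assert np.allclose(lam, expected)

# Principal scalar invariants I1, I2, I3 ...
I1 = np.trace(A)
I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# ... agree with the symmetric functions of the eigenvalues
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0] * lam[1] + lam[1] * lam[2] + lam[2] * lam[0])
assert np.isclose(I3, lam.prod())
```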

Homework: Find the three eigenvectors corresponding to these eigenvalues.

Principal values and principal directions of real symmetric tensors:


In this course, we will mostly deal with real symmetric tensors. We know that ‘the
eigenvalues of any real symmetric tensor are all real’ (proof omitted; refer to any
mathematics book).

For a real symmetric tensor, there always exist at least three real eigenvectors which
are mutually perpendicular. They are called the principal directions and the corresponding
eigenvalues are called the principal values.

Proof:

• Case 1: All eigenvalues are distinct (λ1 ≠ λ2 ≠ λ3 ).


Let us first consider two eigenvalues λ1 and λ2 . We have,

Anˆ1 = λ1 nˆ1 =⇒ nˆ2 · Anˆ1 = λ1 nˆ2 · nˆ1

and
Anˆ2 = λ2 nˆ2 =⇒ nˆ1 · Anˆ2 = λ2 nˆ1 · nˆ2 .

As both sides of these equations are scalars (dot products of two vectors), we can write

λ1 n̂2 · n̂1 = n̂2 · An̂1
           = [n2 ]T [A][n1 ]
           = [[n2 ]T [A][n1 ]]T    (as it is a scalar)
           = [n1 ]T [A]T [n2 ]
           = [n1 ]T [A][n2 ]       ([A]T = [A] as A is symmetric)
           = n̂1 · An̂2
           = λ2 n̂1 · n̂2

Therefore,

λ1 n̂2 · n̂1 = λ2 n̂1 · n̂2
=⇒ (λ1 − λ2 )n̂1 · n̂2 = 0
=⇒ n̂1 · n̂2 = 0 (since λ1 ≠ λ2 ). Hence n̂1 and n̂2 are perpendicular.

Adopting similar procedure, we can show that nˆ1 , nˆ2 and nˆ3 are mutually perpendic-
ular.

• Case 2: Eigenvalues are not distinct; there are repeated roots of the characteristic
  equation. Let
  λ1 = λ2 = λ ≠ λ3 .

An̂1 = λn̂1 , An̂2 = λn̂2 , An̂3 = λ3 n̂3 .

By the previous method, we can prove that

n̂1 · n̂3 = n̂2 · n̂3 = 0.

Therefore, n̂3 is perpendicular to both n̂1 and n̂2 .


Now,
A(αnˆ1 + β nˆ2 ) = αAnˆ1 + βAnˆ2
= λ(αnˆ1 + β nˆ2 )

If nˆ1 and nˆ2 are two eigenvectors corresponding to the same eigenvalue λ, then αnˆ1 +β nˆ2
is also an eigenvector with the same eigenvalue λ.
Therefore, there are infinitely many eigenvectors lying on the plane whose normal is nˆ3 .

• Case 3: No distinct eigenvalues; all are repeated. Say, λ1 = λ2 = λ3 = λ.


Then, it can be shown that any vector will be an eigenvector.

Thus, for every real symmetric tensor, there always exists at least one set of principal
directions which are orthogonal to each other (mutually perpendicular).

Matrix of a symmetric tensor with respect to principal directions

Let n̂1 , n̂2 and n̂3 be the three principal directions, which form an orthonormal basis.
The primed components w.r.t. these basis vectors are

A′ij = n̂i · An̂j = n̂i · (λj n̂j )    (no sum on j)
     = λj n̂i · n̂j
     = λj δij .

Therefore,
A′ij = { λi , for i = j,
       { 0,  for i ≠ j.
Hence, the representation of A with respect to the principal directions turns out to be

 
         | λ1  0   0  |
[A]n̂i = |  0  λ2  0  |
         |  0  0   λ3 |
Note that the above matrix is diagonal and the diagonal elements are the eigenvalues of A.

Principal scalar invariants in terms of λ1 , λ2 and λ3 :

I1 = λ 1 + λ 2 + λ 3
I2 = λ 1 λ 2 + λ 2 λ 3 + λ 3 λ 1
I3 = λ 1 λ 2 λ 3
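NumPy's eigh routine, designed for symmetric matrices, illustrates these results: its eigenvector columns form an orthonormal set of principal directions that diagonalizes the matrix, and the invariants reduce to the symmetric functions of the eigenvalues (the sketch below reuses the symmetric matrix of Exercise 14):

```python
# Diagonalization of a real symmetric tensor by its principal directions.
import numpy as np

A = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 1.0]])

lam, Q = np.linalg.eigh(A)   # columns of Q are the unit principal directions
assert np.allclose(Q.T @ Q, np.eye(3))          # mutually perpendicular
assert np.allclose(Q.T @ A @ Q, np.diag(lam))   # diagonal, with the eigenvalues

# Invariants in terms of lambda_1, lambda_2, lambda_3
assert np.isclose(np.trace(A), lam.sum())         # I1 = l1 + l2 + l3
assert np.isclose(np.linalg.det(A), lam.prod())   # I3 = l1 l2 l3
```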

7 Exercises
1. Is the collection {ê1 + ê2 , ê1 − ê3 } a basis for R2 ?

2. Prove the following relations


 
               | δi1 δi2 δi3 |
(a) εijk = det | δj1 δj2 δj3 |
               | δk1 δk2 δk3 |

                    | δip δiq δir |
(b) εijk εpqr = det | δjp δjq δjr |
                    | δkp δkq δkr |


(c) εijk εpqk = δip δjq − δiq δjp
(d) εijk εpjk = 2δpi ,
(e) εijk εijk = 6.

3. Let u, v, and w be three arbitrary vectors. Making use of the ε-δ relation: εijk εpqk =
δip δjq − δiq δjp , show that

u × (v × w) = (w · u)v − (v · u)w.

4. Given an arbitrary vector v and an arbitrary unit vector ê, show that

v = (v · ê)ê + ê × (v × ê)

and give physical interpretations of (v · ê)ê and ê × (v × ê).

 
                                                       | 1 4  5 |
5. Given a = 3ê1 + 2ê2 + ê3 , b = 2ê1 + 3ê2 , and [S] = | 3 2  9 | . Evaluate
                                                       | 6 0 10 |

(a) Aij = εijk ak


(b) hi = εijk Sjk
(c) dk = εijk ai bj and show that this result is the same as dk = a × b · êk .

6. Consider a transformation A that transforms every vector into its mirror image with
respect to a fixed plane. Is this transformation a tensor?

7. Find the matrix of the tensor R which transforms any vector u into a vector v = m(u · n),
   where
   m = (1/√2)(ê1 + ê2 ) and n = (1/√2)(ê1 − ê3 ).

8. (a) Let R correspond to the right-hand rotation of angle θ about the ê1 -axis. Find the
       matrix of R.
   (b) Do part (a) if the rotation is about the ê2 -axis.

9. Show that, if A is a symmetric tensor then εijk Aij = 0.


 
                                                                   |  1  3 −1 |
10. (a) The matrix of the tensor R with respect to the basis êi is | −2  0  1 | . Find
                                                                   |  2 −1  4 |
        R′11 , R′12 and R′31 with respect to the right-hand basis ê′i , where ê′1 = −ê2 + 2ê2
        and ê′2 = ê1 .
    (b) For the tensor R, find [R′] if ê′i is obtained by a 60° right-hand rotation about
        the ê3 -axis.
    (c) Compare both the sum of the diagonal elements and the determinants of [R] and
        [R′].

11. Let the tensor R correspond to a right-hand rotation of angle θ about the ê3 -axis.

(a) Find the matrix of R2 = RR.


(b) Show that R2 corresponds to a rotation of angle 2θ about the same axis.

(c) What will be the matrix of Rn for any integer n? (Note: for this question,
derivation is not necessary. Only the final answer is needed!)
 
                              | 5  4 1 |
12. A tensor A has the matrix | 4 −1 2 | .
                              | 1  2 4 |

    (a) Find the scalar invariants, the principal values and the corresponding principal
        directions of the tensor A.

    (b) Find the matrix of the transformed tensor A along the basis formed by the three
        principal directions.

13. Prove the Cayley-Hamilton theorem, which says that a symmetric second-order tensor
A satisfies its own characteristic equation, that is, that

A3 − I1 A2 + I2 A − I3 I = 0,

where I1 , I2 , and I3 are the principal scalar invariants of A.


 
                                                             |  3 −1 0 |
14. The matrix of a tensor A with respect to the basis êi is | −1  3 0 | . The three
                                                             |  0  0 1 |
eigenvalues of A are given as λ1 = 1, λ2 = 2 and λ3 = 4. It is also given that the
eigenvectors corresponding to λ1 and λ2 are n̂1 = ê3 and n̂2 = (1/√2)(ê1 + ê2 ),
respectively.

(a) Consider the tensor A2 = AA and find its principal values (one is given for you;
λ1 = 1) and corresponding principal directions.
(b) What will be the matrix of the transformed tensor A2 along the right-hand or-
thonormal basis formed by the three principal directions?
(c) Comment on the eigenvalues and eigenvectors of An for any integer n.
