
Introductory Tensor Analysis

Mathematics, rightly viewed, possesses not only truth, but
supreme beauty - a beauty cold and austere, like that of a
sculpture.
- Bertrand Russell
Dyadic Algebra
Consider two vectors, a and b. As we saw in chapter 3, we can
write them as follows:

a = a_1 u_x + a_2 u_y + a_3 u_z
b = b_1 u_x + b_2 u_y + b_3 u_z

where each vector has three components in our Cartesian space. If we
multiply them in the 'normal' distributive fashion:

ab = (a_1 u_x + a_2 u_y + a_3 u_z)(b_1 u_x + b_2 u_y + b_3 u_z)
   = a_1 b_1 u_x u_x + a_1 b_2 u_x u_y + a_1 b_3 u_x u_z
   + a_2 b_1 u_y u_x + a_2 b_2 u_y u_y + a_2 b_3 u_y u_z
   + a_3 b_1 u_z u_x + a_3 b_2 u_z u_y + a_3 b_3 u_z u_z

This is the direct product of a and b, referred to briefly in chapter
2, and the resulting object is called a dyad. Note that, in our
Cartesian space, there are now nine scalar coefficients, a_i b_j, that
is, 3x3 from the vectors a and b. One represents this compactly as¹:

D = ab

Just as u_x is termed a unit vector, u_x u_x is a unit dyad.
Is this product commutative, you ask? Let's see:

¹ Do not confuse this notation with the outer product notation of chapter 3. Recall that
the outer product of vectors a and b as we have defined it is the product of a and the
transpose of b. Note also that we use an underscore here to represent dyadic (and higher)
products.

ba = (b_1 u_x + b_2 u_y + b_3 u_z)(a_1 u_x + a_2 u_y + a_3 u_z)
   = b_1 a_1 u_x u_x + b_1 a_2 u_x u_y + b_1 a_3 u_x u_z
   + b_2 a_1 u_y u_x + b_2 a_2 u_y u_y + b_2 a_3 u_y u_z
   + b_3 a_1 u_z u_x + b_3 a_2 u_z u_y + b_3 a_3 u_z u_z

Now we subtract the dyadic products:

ab - ba = (a_1 b_1 - b_1 a_1) u_x u_x + (a_1 b_2 - b_1 a_2) u_x u_y + (a_1 b_3 - b_1 a_3) u_x u_z
        + (a_2 b_1 - b_2 a_1) u_y u_x + (a_2 b_2 - b_2 a_2) u_y u_y + (a_2 b_3 - b_2 a_3) u_y u_z
        + (a_3 b_1 - b_3 a_1) u_z u_x + (a_3 b_2 - b_3 a_2) u_z u_y + (a_3 b_3 - b_3 a_3) u_z u_z

The terms with identical subscripts are all zero; however, the terms
with non-identical subscripts are not necessarily zero. Therefore the
dyadic product is not commutative in general.
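As a quick numerical check (a sketch using NumPy, which is not part of the text's development; the component values are arbitrary), we can build ab and ba as outer products and verify that the diagonal of their difference vanishes while the off-diagonal terms survive:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The dyad ab as an outer product: nine coefficients a_i b_j
D = np.outer(a, b)
E = np.outer(b, a)          # the reversed product ba

diff = D - E
print(D.shape)                            # (3, 3)
print(np.allclose(np.diag(diff), 0.0))    # True: identical-subscript terms cancel
print(np.allclose(diff, 0.0))             # False: off-diagonal terms do not
```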
Now, what would be the result of, say, the inner product of
vector c with dyad D? We define this operation by 'associating' the
vector c with the vector 'beside' it in D. Thus, if we premultiply by
c:

c·D = (c·a) b = α b

Postmultiplication gives:

D·c = a (b·c) = β a

where α and β are the scalars produced by the inner products. Thus,
this type of inner product is not commutative, i.e.

c·D ≠ D·c

unlike the inner product of two vectors. We see that the inner
product of a vector with a dyad gives back one of the vectors that
make up the dyad, multiplied by a constant. We will use this property
to construct a set of very general Hamiltonian operators.
The astute reader will by now be saying to herself, "Huh? What is
this?" And well she might. To rationalize this in terms of previous
discussions of vectors, let's switch to our matrix representations and
construct our dyad again:

a = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}
b = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

In order to construct the dyad such that an inner product of
the dyad with a vector makes sense in terms of matrices, we must use
the transpose of b:

b^T = [ b_1  b_2  b_3 ]
D = a b^T
This is, of course, the outer product of chapter 3. We could write
this equivalently using the ⊗ operator, as was mentioned in
chapter 3. The convention when writing a dyadic product of vectors is
not to explicitly indicate that the second vector is actually a
transposed vector.
Now, if we take the inner product of D with vector c explicitly in
terms of matrices:

D·c = (a b^T) c = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} [ b_1  b_2  b_3 ] \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = (b·c) \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}

we see that the inner product of b with c appears, in terms of
matrices, just as we have already seen in the chapter on matrices. We
might perhaps make this a bit clearer using Dirac notation:

D = |a><b|
D·c = |a><b|c> = <b|c> |a>
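A short NumPy sketch (again an illustration external to the text; the values are arbitrary) confirms both inner products of a vector with a dyad return a scaled copy of one of the component vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

D = np.outer(a, b)          # D = a b^T

post = D @ c                # D·c: postmultiplication
print(np.allclose(post, (b @ c) * a))   # True: D·c = (b·c) a

pre = c @ D                 # c·D: c enters as a row vector
print(np.allclose(pre, (c @ a) * b))    # True: c·D = (c·a) b
```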

What about premultiplication? A little thought will show that this
must require the use of the transpose of vector c. Again, in Dirac
notation:

D = |a><b|
c^T D = <c|a><b| = <c|a> b^T

so premultiplication returns the transposed (row) vector b^T scaled
by the scalar <c|a>.

We can do the same type of analysis with cross products:

c × D = (c × a) b = d b = N
D × c = a (b × c) = a f = O

where d = c × a and f = b × c are new vectors, and again we find that
the product is not commutative:

c × D ≠ D × c

but this time the result of the product of a dyad with a vector is a
new dyad.
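A minimal NumPy sketch of the cross-product case (the basis-vector choices here are my own, picked so the results are easy to read):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])

# Pre-cross: c x D = (c x a) b -- a new dyad N
N = np.outer(np.cross(c, a), b)
# Post-cross: D x c = a (b x c) -- a different dyad O
O = np.outer(a, np.cross(b, c))

print(N.shape, O.shape)     # (3, 3) (3, 3): dyads again, not vectors
print(np.allclose(N, O))    # False: the product is not commutative
```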
The third type of product that we will consider is the same type that
we started with ... the normal distributive multiplication. Thus we
will multiply dyad D by vector c:

c D = c a b

Long multiplication will produce:

c a b = (c_1 u_x + c_2 u_y + c_3 u_z)(a_1 u_x + a_2 u_y + a_3 u_z)(b_1 u_x + b_2 u_y + b_3 u_z)
      = c_1 a_1 b_1 u_x u_x u_x + c_1 a_1 b_2 u_x u_x u_y + c_1 a_1 b_3 u_x u_x u_z
      + ... + c_3 a_3 b_3 u_z u_z u_z

in which there are now 27, or 3x3x3, coefficients. This product of
three vectors is called a triad. Hopefully, you can see that we can
take this as far as we wish to produce tetrads, pentads, etc. We can
consider a way to calculate the number of terms or coefficients in
each of these objects. Our vectors have three terms, our dyads have 9
terms and our triads have 27 terms. Let us say that a vector has a
rank of 1, a dyad a rank of 2 and a triad a rank of 3. Using
these numbers we can now say that the number of coefficients in one
of these objects is:

n_coefficients = 3^rank
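The counting rule is easy to verify numerically; the sketch below (NumPy, arbitrary components) builds a dyad and a triad with `einsum` and compares each object's size against 3^rank:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

dyad  = np.einsum('i,j->ij',    a, b)     # rank 2: coefficients a_i b_j
triad = np.einsum('i,j,k->ijk', c, a, b)  # rank 3: coefficients c_i a_j b_k

for rank, obj in [(1, a), (2, dyad), (3, triad)]:
    print(rank, obj.size, 3 ** rank)      # sizes 3, 9, 27 each match 3^rank
```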

Now, consider the inner product of a vector with a dyad that we
just discussed. The same operation on a triad will produce a dyad
times a constant (try it for yourself). Thus, the inner product
reduces the rank by one to produce a lower-ranked object. With this
in mind we can see that a scalar will be an object of rank 0, since
the inner product of a rank-1 object with a vector (or simply the
inner product of a vector with a vector) is a scalar, as we have seen
in chapter 3.
Our determination of the number of coefficients is a little
artificial since we have been using Cartesian space for our
deliberations. To be completely general we would write:

n_coefficients = d^rank

where 'd' represents the number of dimensions of the space under
consideration. For clarity, however, we will continue to work with
Cartesian space.
Let's take a closer look at the dyad, D. Our longhand
representation of it is:

D = ab = (a_1 u_x + a_2 u_y + a_3 u_z)(b_1 u_x + b_2 u_y + b_3 u_z)
       = a_1 b_1 u_x u_x + a_1 b_2 u_x u_y + a_1 b_3 u_x u_z
       + a_2 b_1 u_y u_x + a_2 b_2 u_y u_y + a_2 b_3 u_y u_z
       + a_3 b_1 u_z u_x + a_3 b_2 u_z u_y + a_3 b_3 u_z u_z

with coefficients and unit dyads, very similar to the longhand
representation of vectors. Recall from chapter 3 that a simple
Euclidean vector can be represented using a 3x1 column matrix. Thus
vector a has the components a_1, a_2 and a_3, which we include in a
matrix representing the vector:

a = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}

In a very similar manner we can represent the dyad as a square matrix
using the coefficients of the unit dyads:

D = \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix}
  = \begin{bmatrix} d_{11} & d_{12} & d_{13} \\ d_{21} & d_{22} & d_{23} \\ d_{31} & d_{32} & d_{33} \end{bmatrix}
This makes sense, considering our earlier discussion of the formation
of a dyad from two vectors using the Dirac notation. This involves an
outer product which, as we have seen in chapter 3, results in a
matrix. Since we can represent the dyad as a square matrix we expect
that the algebra of the dyad will be identical to that of matrices:

A + B = B + A                  (commutative)
A + (B + C) = (A + B) + C      (associative)
A + 0 = A                      (identity)
A + (-A) = 0                   (additive inverse)
α(A + B) = αA + αB             (scalar distributive)
(α + β)A = αA + βA             (matrix distributive)
(αβ)A = α(βA)                  (associative law for multiplication)

The dyad 0 represents the zero dyad with all zero coefficients, as you
no doubt already suspected.
So, all dyads can be represented by matrices. How about the
reverse? Are all square matrices representations of dyads? From our
previous discussion this would require that we be able to factor a
dyad into two vectors. In matrix notation this is:

D = a b^T

or

\begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} [ b_1  b_2  b_3 ]

Any matrix can have any values that we want to put into it, so if we
have the matrix:

\begin{bmatrix} a_1 b_1 & a_2 b_2 & a_3 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix}

in which the first row differs from the previous matrix, we cannot
construct this matrix from the direct product of vectors a and b
(except in the trivial case of zero vectors), nor can it be factored
into a and b. Therefore we cannot say that, in general, all square
matrices are dyads.
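In matrix language, the text's factorability criterion is exactly the condition that the matrix have rank 1. A hedged NumPy check (the tampered first row mirrors the altered matrix above; values are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

D = np.outer(a, b)
print(np.linalg.matrix_rank(D))   # 1: an outer product always has matrix rank 1

M = D.copy()
M[0] = [a[0] * b[0], a[1] * b[1], a[2] * b[2]]  # first row a_1b_1, a_2b_2, a_3b_3
print(np.linalg.matrix_rank(M))   # 2: no longer factorable as a single a b^T
```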
There is an operation that we can do on a dyad called
contraction. As we have learned, the dyad can be constructed from the
direct product of two vectors. The dyad is said to be contracted if
the inner product is taken of the two component vectors (using Dirac
notation):

D(contracted) = <a|b> = a·b

This reduces dyad D to a scalar. Of course, for higher-rank objects
there are multiple contractions; in general there will be (rank - 1)
possible ways to do a contraction. Also, note that the contraction
operation reduces the rank by two.
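For a dyad written as a matrix, the contraction is just the trace, the sum of the diagonal elements a_i b_i. A small NumPy sketch (arbitrary values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

D = np.outer(a, b)

# Contracting pairs up the two component vectors: <a|b> = a·b
contracted = np.trace(D)       # sum of the diagonal terms a_i b_i
print(contracted == a @ b)     # True: 4 + 10 + 18 = 32 either way
```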
We must point out some potential notational problems before
proceeding. First, in our discussion of matrices we distinguished
between the matrix product of two matrices and the direct product of
two matrices (equation [2-3]). We must also be careful to do so here
for dyads. 'Regular' multiplication is the same as matrix
multiplication:

A·B = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}
    = \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32} & a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32} & a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33} \\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31} & a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32} & a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33} \end{bmatrix}
    = C

The direct product of two dyads is:

A⊗B = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} ⊗ \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}
    = \begin{bmatrix} a_{11}B & a_{12}B & a_{13}B \\ a_{21}B & a_{22}B & a_{23}B \\ a_{31}B & a_{32}B & a_{33}B \end{bmatrix}
    = \begin{bmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{11}b_{13} & \cdots & a_{13}b_{13} \\ \vdots & & & & \vdots \\ a_{31}b_{11} & \cdots & & & a_{33}b_{33} \end{bmatrix}    (81 terms)

where each block a_{ij}B is the full 3x3 matrix B multiplied by the
scalar a_{ij}.
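This block layout is precisely the Kronecker product, which NumPy provides directly; a sketch (the arange-filled matrices are arbitrary placeholders):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)
B = np.arange(10.0, 19.0).reshape(3, 3)

matrix_product = A @ B            # 'regular' multiplication: still 3x3
direct_product = np.kron(A, B)    # block layout a_ij * B, written out above

print(matrix_product.size)        # 9
print(direct_product.size)        # 81 terms for the rank-4 object
print(direct_product[0, 0] == A[0, 0] * B[0, 0])   # True: top-left is a_11 b_11
```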

Also, in linear algebra it is common to write the
premultiplication of a vector by a matrix as:

M x = y

the result of which is a new vector. However, in the context of dyads
this juxtaposition would denote a direct product, producing a triad
and increasing the rank of the object:

M x = O

To produce a vector we must use the inner product notation:

M·x = y

We must take care not to confuse the two. Our notation here has been
to use M to denote a matrix and an underscored M to denote a dyad.
Usually the context will tell us which is which; however, in other
texts the distinction may not be so clear. Thus A·B is used in this
text for standard matrix multiplication and A⊗B (or simply the
juxtaposition AB) for dyad (or triad, tetrad, etc.) direct product
multiplication.
So, to recap, we have some new mathematical objects developed
from the application of the direct product of vectors with each
other. Each of these objects has a 'rank' associated with it, which is
the power to which the dimensionality of the space of the vector(s)
is raised in order to generate the number of coefficients of the
object. Thus, the dyad results from the direct product of two 3D
vectors and has rank 2, the exponent in 3^2. Three vectors give a
triad of rank 3 and four vectors give a tetrad of rank 4. Scalars are
ranked 0 since they consist of no vectors.
The Gradient of a Vector
In chapter 4 we alluded to the gradient calculation:

∇a or grad a

and made the assertion that the result is a dyad. We now show that
this is so via the direct product of ∇ and a:

∇a = (u_x ∂/∂x + u_y ∂/∂y + u_z ∂/∂z)(a_x u_x + a_y u_y + a_z u_z)
   = (∂a_x/∂x) u_x u_x + (∂a_y/∂x) u_x u_y + (∂a_z/∂x) u_x u_z
   + (∂a_x/∂y) u_y u_x + (∂a_y/∂y) u_y u_y + (∂a_z/∂y) u_y u_z
   + (∂a_x/∂z) u_z u_x + (∂a_y/∂z) u_z u_y + (∂a_z/∂z) u_z u_z

The u_i u_j are the unit dyads as above and the partial derivatives
are the components of the dyad. We can compact this a bit using
matrix notation:

∇a = D = \begin{bmatrix} ∂a_x/∂x & ∂a_y/∂x & ∂a_z/∂x \\ ∂a_x/∂y & ∂a_y/∂y & ∂a_z/∂y \\ ∂a_x/∂z & ∂a_y/∂z & ∂a_z/∂z \end{bmatrix}
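A numerical sketch of this dyad (the vector field a = (xy, yz, zx) is my own example, not from the text; central differences approximate the partial derivatives, with entry (i, j) laid out as ∂a_j/∂x_i to match the matrix above):

```python
import numpy as np

def field(r):
    """Sample vector field a(x, y, z) = (x*y, y*z, z*x), chosen for illustration."""
    x, y, z = r
    return np.array([x * y, y * z, z * x])

def grad_field(r, h=1e-6):
    """Central differences: entry (i, j) is da_j/dx_i, matching the dyad layout."""
    D = np.zeros((3, 3))
    for i in range(3):
        step = np.zeros(3)
        step[i] = h
        D[i] = (field(r + step) - field(r - step)) / (2 * h)
    return D

r = np.array([1.0, 2.0, 3.0])
x, y, z = r
analytic = np.array([[y,   0.0, z],
                     [x,   z,   0.0],
                     [0.0, y,   x]])
print(np.allclose(grad_field(r), analytic, atol=1e-5))   # True
```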

Transformations

We have seen in chapter 3 that the norm of a vector or, more
generally, the inner product of a pair of vectors, is invariant to
rotations. Rotation operators are orthogonal, which in visual terms
means that an operator and its transpose rotate in opposite
directions by the same amount. Also, we have seen that the rotation
operation may be considered a rotation of coordinates with a
consequent change of basis set. One can also envision a change of
coordinates involving translation, or perhaps both translation and
rotation together. In magnetic resonance spectroscopy we are
primarily concerned with rotations. Intuitively, we expect that the
norm of the vector will remain the same in the new coordinate system
as in the old one, as it did for rotations alone.

Thus, a vector in coordinate system A is considered to be the
same vector in coordinate system B, assuming the beginning and ending
points of the vector do not move with the coordinate system change.
The components of the vector in each coordinate system will generally
not be equal; however, we expect the norm (the length) to remain
constant. Let's suppose, then, that we have a 2D vector,
a = a_1 u_x + a_2 u_y, in coordinate system A. To transform to
coordinate system B we use a function of some type:

b_1 = b_1(a_1, a_2)
b_2 = b_2(a_1, a_2)

and our vector is now:

b = b_1 v_x + b_2 v_y

However, we have just said that we expect the norm of the new vector
to be the same as that of the old vector, since the vector itself
doesn't change as a result of the transformation. An observer in
coordinate system A must see the same vector that an observer in B
will see. Thus, to indicate that these are the same vector, we write:

{a = b} ⟺ <a|a> = <b|b>

which is meant to indicate that (although their components are
different) their norms are the same and that they are in reality the
same vector.
Are there any vectors to which this reasoning might not apply?
Yes, the position vector that locates a point in space is one
example. The head of the vector is at the point in space and the tail
is located at the origin of the coordinate system. Moving the
coordinate system (as in translational motion) will potentially move
the origin and very likely change the length of the position vector.
Thus, our condition for equivalence of vectors in different
coordinate systems is not, in general, satisfied for this type of
vector. This will not, however, be a problem for us, as all of our
considerations of coordinate changes will involve rotations in which
the origins of the old and new coordinates will be at the same point
in the space.
Let's apply this idea to our higher-rank objects, starting with
the dyad. Thus, we assert that in transforming from coordinate system
A to coordinate system B, the dyad in question will remain the same
dyad in both coordinate systems, much the same as is the case with
vectors. An observer in A sees dyad D and an observer in B sees dyad
E. In the case of the vectors we used the inner product operation to
reduce them to scalars that were invariant with respect to coordinate
changes, so let's try to do the same type of thing with dyads. Our
tool for doing so is the dyad contraction. We will contract dyads D
and E to scalars d and e and compare them. We begin by supposing that
the dyads are, in fact, equal²:

D = ab,  a·b = d
E = cd,  c·d = e
{D = E} ⟹ ab = cd

Taking the left inner product with a:

a·(ab) = a·(cd)
a² b = (a·c) d
b = [(a·c)/a²] d

The term in brackets is the scalar result of an inner product
calculation, divided by a², another scalar. For convenience we
replace this with a single scalar variable:

let λ = (a·c)/a², so that b = λ d

Now we do the right inner product with b:

(ab)·b = (cd)·b
a b² = c (d·b)
a = c (d·b) / b²

Using the result of our left inner product calculation, d = b/λ, so
d·b = (b·b)/λ = b²/λ and:

a = c (b²/λ) / b² = c / λ

² This exposition is that of J.C. Kolecki. See the references.

Now, we have:

a·b = (c/λ)·(λ d) = c·d

or

d = e

Thus, if the dyads are equivalent, so are their associated scalars.
Presumably the reverse is true as well: if the contractions of each
dyad are equal to each other, then so are the dyads. We mean this in
the same sense in which we discussed vectors. In other words,
although the components of D and E may not be the same, they
represent the same dyad if their contractions are equivalent.
A Tensor Definition
We can now define a tensor. We mean by this term a mathematical
object which is invariant to a transformation of basis. We have
already seen that the scalar object is invariant to a change of
basis, as are vectors and dyads. In other words, a scalar such as the
temperature of a cup of tea is the same whether the coordinate
system's origin is on the earth or on the moon. Formally, if the
temperature in coordinate system A is T and in coordinate system A'
it is T', then the transformation from T to T' is:

T' = a T

where a is always unity; T' is therefore equal to T and is said to
be a tensor of rank 0.

The vector object is also a tensor if it too can be said to be
the same in any basis. In terms of the coefficients of a 3D vector,
the transformation from coordinate basis u to u' is, for vector a:

a = a_1 u_1 + a_2 u_2 + a_3 u_3
a' = a_1' u_1' + a_2' u_2' + a_3' u_3'
a_i' = Σ_{j=1}^{3} cos(u_i', u_j) a_j

using equation 3-27. Intuitively, we know that the vector itself does
not change even though its coefficients may do so. Thus a calculation
of the norm of the vector will be the same in the new basis as in the
old basis. If we have two vectors, a and b, and b has been produced
by a change of basis from u to u', then:

a = a_1 u_1 + a_2 u_2 + a_3 u_3
b = b_1 u_1' + b_2 u_2' + b_3 u_3'
|a|² = a_1² + a_2² + a_3²
|b|² = b_1² + b_2² + b_3²

and

|a| = |b|
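A NumPy sketch of this invariance (the rotation angle and components are arbitrary choices of mine): rotating the components changes them individually but leaves the norm fixed.

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about z

a = np.array([1.0, 2.0, 3.0])
b = R @ a                     # components change under the basis rotation

print(np.allclose(a, b))                                 # False: new components
print(np.isclose(np.linalg.norm(a), np.linalg.norm(b)))  # True: same norm
```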

or equivalently, as we showed in Dirac notation in the last section:

{a = b} ⟺ <a|a> = <b|b>

So, if two vectors are equivalent after a basis transformation, then
they are said to be tensors of rank 1.
We have seen that the dyad, as well, can be invariant to
transformation, such that dyad D is equivalent to dyad E if E is
produced by a change of basis from D. The dyad D produced from a pair
of 3D vectors can conveniently be represented by a 3x3 matrix with 9
(or 3²) components. In order to transform D to E we must perform a
set of operations that is similar to the vector transformation. In
the case of the vector (or rank-1 tensor in our present context) each
new component of the vector is a linear combination of all of the old
components. The case with tensors is much the same; each component of
the new rank-2 tensor is a linear combination of all of the
components of the old tensor and is expressed in a similar fashion to
vectors:

D = \begin{bmatrix} d_{11} & d_{12} & d_{13} \\ d_{21} & d_{22} & d_{23} \\ d_{31} & d_{32} & d_{33} \end{bmatrix}
E = \begin{bmatrix} e_{11} & e_{12} & e_{13} \\ e_{21} & e_{22} & e_{23} \\ e_{31} & e_{32} & e_{33} \end{bmatrix}

To transform from D to E:

e_ij' = Σ_{k,l=1}^{3} cos(u_i', u_k) cos(u_j', u_l) d_kl

where, as with the vector case, cos(u_i', u_k) is the direction
cosine of the angle between axes i' in the new set of coordinates and
k in the old coordinates (see Appendix II). We can simplify the
equation a bit:

e_ij' = Σ_{k,l=1}^{3} λ_{i'k} λ_{j'l} d_kl

where the cosine terms have been replaced with λ for brevity.
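In matrix form this double sum is E = Λ D Λ^T, with Λ the matrix of direction cosines. A NumPy sketch (a rotation about z serves as the direction-cosine matrix; the dyad components are arbitrary), which also confirms that the contraction survives the transformation:

```python
import numpy as np

theta = 0.3
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # direction-cosine matrix

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
D = np.outer(a, b)

# e_ij = sum_kl lambda_ik lambda_jl d_kl, i.e. E = L D L^T
E = np.einsum('ik,jl,kl->ij', L, L, D)
print(np.allclose(E, L @ D @ L.T))           # True: same transformation law
print(np.isclose(np.trace(E), np.trace(D)))  # True: the contraction is invariant
```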
An alternate way to look at how we might define rank-2 tensors
is to say that the action of the tensor T on a vector v is to produce
a new vector times a scalar. We have already encountered this in
projection operators (see eq. 3-22). Recall that we defined the
projection operator as:

P_i = u_i u_i^T

or

P_i = |u_i><u_i|

and its action on vector v is:

P_i v = v_i u_i

or

P_i v = |u_i><u_i|v> = v_i |u_i>

We could say, then, that the projection operator is a tensor, and in
terms of our notation for tensors it is the dyad:

P_i = u_i u_i
P_i·v = v_i u_i

where we emphasize the tensor character of the operator.


What would happen if we were to change our basis from u to u'?
Would the new tensor still give the same results when it operates on
the new vector? Intuitively, we would suspect that the answer is yes
but let us explore it a little.
We start with a vector a in coodinate system A and do a basis
transformation to coordinate system B in which we measure the vector
as b :
a =a 1 u1 +a2 u2+a3 u3
b=b u '+b u '+b u '
1 1
2 2
3 3
Since we are familiar with the effect of the projection
operator/tensor on a vector we will use it in our example. We apply
the operator in coordinate system A and then in coordinate system B,
remembering that the operator must be transformed as well as the
vector:
P i a =ai ui
P ' b=b u '
i

The transformations are:

b_i = Σ_{j=1}^{3} λ_{ij} a_j
p_ij' = Σ_{k,l=1}^{3} λ_{i'k} λ_{j'l} p_kl

where, as before, the λ's are the direction cosines for the
indicated axes in the old and new coordinate systems. Let us look
closely at one coordinate of the projection operator:

p_11' = Σ_{k,l=1}^{3} λ_{1'k} λ_{1'l} p_kl

Problems

References
1. J.C. Kolecki, Foundations of Tensor Analysis for Students of
Physics and Engineering With an Introduction to the Theory of
Relativity, NASA Scientific and Technical Information, TP-2005-213115.
2. A.I. Borisenko and I.E. Tarapov, Vector and Tensor Analysis with
Applications, Dover Publications Inc., 1968.
