Introduction to Tensors

Sumeet Khatri

Table of Contents

Introduction

1 The Index Notation and Einstein Summation Convention

2 Covariant and Contravariant Vectors

3 Introducing Tensors

3.1 The Inner Product and the First Tensor

3.2 Creating Tensors from Vectors

4 Tensor Definition and Properties

4.1 Symmetry and Anti-Symmetry

4.2 Contraction of Indices


Introduction

Tensors are geometric objects that describe linear relations between vectors, scalars, and other

tensors. Elementary examples of such relations include the dot product, the cross product, and

linear mappings. We will see that in fact vectors and scalars are also tensors.

Tensors are important in physics because they provide a concise mathematical framework for

formulating and solving physics problems in areas such as elasticity, fluid mechanics, and special

and general relativity.


1 The Index Notation and Einstein Summation Convention

Let us first introduce a new notation for vectors and matrices and their algebraic manipulations,

called the index notation.

Let us take a manifold of dimension $n$. We will denote the components of a vector $\mathbf{v}$ by the numbers $v_1, v_2, \ldots, v_n$ in some basis $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$. If one modifies the basis in which the components of $\mathbf{v}$ are expressed, then these components will also change. Such a transformation is described by a "change-of-basis matrix", say $A$, in which the columns are the old basis vectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ expressed in the new basis, say $\{\mathbf{e}'_1, \mathbf{e}'_2, \ldots, \mathbf{e}'_n\}$. So we have

$$
\begin{pmatrix} v'_1 \\ v'_2 \\ \vdots \\ v'_n \end{pmatrix}
=
\begin{pmatrix} A_{11} & \ldots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \ldots & A_{nn} \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},
\tag{1.1}
$$

taking note of the fact that the ﬁrst index denotes the row of the matrix A and the second index

the column.

According to the rules of matrix multiplication, the above matrix equation is the system of

equations

$$
\begin{aligned}
v'_1 &= A_{11} v_1 + A_{12} v_2 + \cdots + A_{1n} v_n, \\
&\ \ \vdots \\
v'_n &= A_{n1} v_1 + A_{n2} v_2 + \cdots + A_{nn} v_n,
\end{aligned}
\tag{1.2}
$$

or equivalently,

$$
v'_1 = \sum_{\nu=1}^{n} A_{1\nu} v_\nu,
\qquad \ldots, \qquad
v'_n = \sum_{\nu=1}^{n} A_{n\nu} v_\nu,
\tag{1.3}
$$

or even more succinctly,

$$
v'_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} v_\nu
\qquad (\forall \mu \in \mathbb{N},\ 1 \le \mu \le n).
\tag{1.4}
$$


Each of the three systems above is written in index notation. In (1.4), we call ν a dummy index and µ a running index or a free index. Keep in mind that µ and ν are merely labels; we could equally well have called them anything else, say α and β.

Usually, the conditions on µ in (1.4) are not explicitly stated because they should be obvious from the context. We therefore have

$$
\begin{aligned}
\mathbf{v} = \mathbf{y} &\ \Leftrightarrow\ v_\mu = y_\mu, \\
\mathbf{v} = A\mathbf{y} &\ \Leftrightarrow\ v_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} y_\nu.
\end{aligned}
\tag{1.5}
$$

The index notation is also applicable to operations such as the dot product (and indeed inner products in general), so that if v and w are any two vectors, then

$$
\mathbf{v} \bullet \mathbf{w} = v_1 w_1 + v_2 w_2 + \cdots + v_n w_n = \sum_{\mu=1}^{n} v_\mu w_\mu.
\tag{1.6}
$$

We also have

$$
\begin{aligned}
C = A + B &\ \Leftrightarrow\ C_{\mu\nu} = A_{\mu\nu} + B_{\mu\nu}, \\
\mathbf{z} = \mathbf{v} + \mathbf{w} &\ \Leftrightarrow\ z_\mu = v_\mu + w_\mu.
\end{aligned}
\tag{1.7}
$$
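For readers who like to experiment, the correspondence between these index expressions and the familiar matrix operations can be checked numerically. The following is a minimal sketch in Python with NumPy (a tool the notes do not otherwise use; all names are illustrative):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
v = rng.normal(size=n)
w = rng.normal(size=n)

# (1.4): v'_mu = sum_nu A_{mu nu} v_nu, written as an explicit sum
v_prime = np.array([sum(A[mu, nu] * v[nu] for nu in range(n)) for mu in range(n)])
assert np.allclose(v_prime, A @ v)

# (1.6): the dot product as a single sum over a repeated index
s = sum(v[mu] * w[mu] for mu in range(n))
assert np.isclose(s, v @ w)

# (1.7): addition is componentwise, index by index
z = np.array([v[mu] + w[mu] for mu in range(n)])
assert np.allclose(z, v + w)
```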

Example 1.0.1: Working with index notation.

1. A, B, and C are matrices of appropriate dimensions. Assume that A = BC. Write

out this matrix multiplication using index notation.

2. A and B are matrices and x is a vector. Show that
$$
\sum_{\nu=1}^{n} A_{\mu\nu} \left( \sum_{\alpha=1}^{n} B_{\nu\alpha} x_\alpha \right)
= \sum_{\nu=1}^{n} \sum_{\alpha=1}^{n} \left( A_{\mu\nu} B_{\nu\alpha} x_\alpha \right)
= \sum_{\alpha=1}^{n} \sum_{\nu=1}^{n} \left( A_{\mu\nu} B_{\nu\alpha} x_\alpha \right)
= \sum_{\alpha=1}^{n} \left( \sum_{\nu=1}^{n} A_{\mu\nu} B_{\nu\alpha} \right) x_\alpha.
$$

3. Which of the following statements is true?

(a) The summation signs in an expression can always be moved to the far left without

changing the meaning of the expression.

(b) If all summation signs are on the far left of an expression, you can exchange their

order without changing the meaning of the expression.


(c) If all summation signs are on the far left of an expression, you cannot just change the order of the variables in the expression, because this changes the order in which matrices are multiplied, and generally AB ≠ BA for two arbitrary matrices A and B of appropriate dimensions.

(d) $A_{\mu\nu} = \left( A^T \right)_{\nu\mu}$.

(e) $A_{\mu\nu} = \left( A^T \right)_{\mu\nu}$.

Solution:

1. Let
$$
B = \begin{pmatrix} B_{11} & \ldots & B_{1n} \\ \vdots & \ddots & \vdots \\ B_{m1} & \ldots & B_{mn} \end{pmatrix}
\quad \text{and} \quad
C = \begin{pmatrix} C_{11} & \ldots & C_{1m} \\ \vdots & \ddots & \vdots \\ C_{n1} & \ldots & C_{nm} \end{pmatrix}.
$$
Then,
$$
A = \begin{pmatrix}
B_{11} C_{11} + \cdots + B_{1n} C_{n1} & B_{11} C_{12} + \cdots + B_{1n} C_{n2} & \ldots & B_{11} C_{1m} + \cdots + B_{1n} C_{nm} \\
\vdots & \vdots & & \vdots \\
B_{m1} C_{11} + \cdots + B_{mn} C_{n1} & B_{m1} C_{12} + \cdots + B_{mn} C_{n2} & \ldots & B_{m1} C_{1m} + \cdots + B_{mn} C_{nm}
\end{pmatrix},
$$
so that
$$
A_{\mu\nu} = \sum_{i=1}^{n} B_{\mu i} C_{i\nu}.
$$

2. We have
$$
\begin{aligned}
\sum_{\nu=1}^{n} A_{\mu\nu} \left( \sum_{\alpha=1}^{n} B_{\nu\alpha} x_\alpha \right)
&= A_{\mu 1} \left( \sum_{\alpha=1}^{n} B_{1\alpha} x_\alpha \right) + \cdots + A_{\mu n} \left( \sum_{\alpha=1}^{n} B_{n\alpha} x_\alpha \right) \\
&= \sum_{\alpha=1}^{n} A_{\mu 1} B_{1\alpha} x_\alpha + \sum_{\alpha=1}^{n} A_{\mu 2} B_{2\alpha} x_\alpha + \cdots + \sum_{\alpha=1}^{n} A_{\mu n} B_{n\alpha} x_\alpha \\
&= \sum_{\nu=1}^{n} \sum_{\alpha=1}^{n} A_{\mu\nu} B_{\nu\alpha} x_\alpha,
\end{aligned}
$$

proving the first equality. The second equality follows from the commutativity of addition (i.e., elements can be added in different orders without altering the result), and

and

$$
\sum_{\alpha=1}^{n} \left( \sum_{\nu=1}^{n} \left( A_{\mu\nu} B_{\nu\alpha} \right) x_\alpha \right)
= \sum_{\alpha=1}^{n} \sum_{\nu=1}^{n} \left( A_{\mu\nu} B_{\nu\alpha} \right) x_\alpha
= \sum_{\alpha=1}^{n} \sum_{\nu=1}^{n} A_{\mu\nu} B_{\nu\alpha} x_\alpha.
$$

3.


(a) True: doing so merely changes the order in which terms are added, and since addition of numbers is commutative, this is no problem.

(b) True, since this also merely changes the order of addition.

(c) False, as long as the summand contains expressions involving commutative operators, like multiplication. As for the matrix multiplication mentioned, it is not the order of the multiplication/addition of elements that makes matrix multiplication non-commutative, but rather the definition of matrix multiplication itself.

(d) True

(e) False

We have seen in the above example that the summation symbol can always be put at the start of

any expression and that if there is more than one summation sign then their order is irrelevant.

It is therefore convenient to omit the summation signs as long as we make it clear in advance

which index is being summed over, for instance, by putting it beside the formula as shown below:

$$
\begin{aligned}
\sum_{\nu=1}^{n} A_{\mu\nu} v_\nu &\ \to\ A_{\mu\nu} v_\nu \quad \{\nu\}, \\
\sum_{\beta=1}^{n} \sum_{\gamma=1}^{n} A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} &\ \to\ A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} \quad \{\beta, \gamma\}.
\end{aligned}
\tag{1.8}
$$

The above example also indicates to us the following:

• It appears that if an index only appears once in a summand then this index is not summed;

• It appears that if an index appears at least twice in a summand then that index is summed.

Once we make routine use of index notation, however, indicating the summation index every time becomes tedious. Our two observations above will eventually allow us to determine the summation index (or indices) easily, so that indicating it is no longer necessary.

This leads to the Einstein Summation Convention:

Einstein Summation Convention

In a summation over one or more indices, the summation sign and the

index of summation may be omitted with the following conventions:

• A summation is assumed over all indices that appear twice in a

summand; and

• No summation is assumed over indices that appear only once in the

summand.


We will use index notation with Einstein summation convention from now on. So we will write

$$
\begin{aligned}
\sum_{\nu=1}^{n} A_{\mu\nu} v_\nu &\ \to\ A_{\mu\nu} v_\nu, \\
\sum_{\beta=1}^{n} \sum_{\gamma=1}^{n} A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} &\ \to\ A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta}.
\end{aligned}
\tag{1.9}
$$

Also,

$$
\sum_{i=1}^{n} A_i A_i \ \to\ A_i A_i,
\qquad
\sum_{i=1}^{n} \sum_{j=1}^{n} A_{ijk} B_{ij} \ \to\ A_{ijk} B_{ij},
\tag{1.10}
$$

and by extension we shall also understand summation in such expressions as

$$
\frac{\partial u_i}{\partial x_i},
\qquad
\frac{\partial q}{\partial x_i} \frac{dx_i}{dt},
\quad \text{etc.}
\tag{1.11}
$$
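The summation convention is, incidentally, exactly the rule implemented by NumPy's einsum: subscripts repeated in a summand are summed, and free subscripts survive. A small illustrative sketch (Python/NumPy, not part of the original notes):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))
v = rng.normal(size=n)

# A_{mu nu} v_nu : nu repeated -> summed; mu free -> survives
assert np.allclose(np.einsum('mn,n->m', A, v), A @ v)

# A_{alpha beta} B_{beta gamma} C_{gamma delta} : beta and gamma summed
D = np.einsum('ab,bg,gd->ad', A, B, C)
assert np.allclose(D, A @ B @ C)

# A_i A_i from (1.10) is just the squared norm of the vector
assert np.isclose(np.einsum('i,i->', v, v), v @ v)
```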

Example 1.0.2: Working with the Einstein summation convention.

1. Write as matrix multiplication:

(a) $D_{\alpha\beta} = A_{\alpha\mu} B_{\mu\nu} C_{\beta\nu}$;

(b) $D_{\alpha\beta} = A_{\alpha\mu} B_{\beta\gamma} C_{\mu\gamma}$;

(c) $D_{\alpha\beta} = A_{\alpha\gamma} \left( B_{\gamma\beta} + C_{\gamma\beta} \right)$.

2. Consider a vector field in an n-dimensional space, $\mathbf{F}(\mathbf{x})$. We perform a coordinate transformation $\mathbf{x}' = A\mathbf{x}$, where $A$ is an $n \times n$ change-of-basis matrix. Show that $\mathbf{F}' = A\mathbf{F}$.

3. For a change of basis, we have $\mathbf{x}' = A\mathbf{x}$. This corresponds to $x'_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} x_\nu$. Can you understand the expression $\sum_{\nu=1}^{n} x_\nu A_{\mu\nu}$, and how can you construct the matrix multiplication equivalent of $\sum_{\mu=1}^{n} x_\mu A_{\mu\nu}$?

Solution:

1. (a) We have by the summation convention
$$
\begin{aligned}
D_{\alpha\beta} = \sum_{\nu=1}^{n} \sum_{\mu=1}^{n} A_{\alpha\mu} B_{\mu\nu} C_{\beta\nu}
&= \sum_{\nu=1}^{n} (AB)_{\alpha\nu} C_{\beta\nu} \quad \text{(previous example)} \\
&= \sum_{\nu=1}^{n} (AB)_{\alpha\nu} \left( C^T \right)_{\nu\beta} \quad \text{(previous example)},
\end{aligned}
$$
so that $D = ABC^T$.


(b) By the summation convention, and since the order of elements in a summand is irrelevant (as multiplication is commutative), we have
$$
D_{\alpha\beta} = \sum_{\gamma=1}^{n} \sum_{\mu=1}^{n} A_{\alpha\mu} C_{\mu\gamma} B_{\beta\gamma}
= \sum_{\gamma=1}^{n} (AC)_{\alpha\gamma} B_{\beta\gamma}
= \sum_{\gamma=1}^{n} (AC)_{\alpha\gamma} \left( B^T \right)_{\gamma\beta},
$$
so that $D = ACB^T$.

(c) We have
$$
D_{\alpha\beta} = A_{\alpha\gamma} B_{\gamma\beta} + A_{\alpha\gamma} C_{\gamma\beta} = (AB)_{\alpha\beta} + (AC)_{\alpha\beta},
$$
so that $D = AB + AC = A(B + C)$.

2. At each point, $\mathbf{F}(\mathbf{x})$ is a vector, so under the change of basis its components transform exactly as in (1.4), with the same matrix $A$ as the coordinates: $F'_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} F_\nu$, which is precisely $\mathbf{F}' = A\mathbf{F}$.

3. Because we can change the order of elements in the summand, we have simply
$$
\sum_{\nu=1}^{n} x_\nu A_{\mu\nu} = \sum_{\nu=1}^{n} A_{\mu\nu} x_\nu = x'_\mu.
$$

And if we let
$$
\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= \begin{pmatrix} x_1 & x_2 & \ldots & x_n \end{pmatrix}^T,
$$

then
$$
\mathbf{x}^T A = \begin{pmatrix} x_1 & x_2 & \ldots & x_n \end{pmatrix}
\begin{pmatrix} A_{11} & \ldots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \ldots & A_{nn} \end{pmatrix}
= \begin{pmatrix}
x_1 A_{11} + x_2 A_{21} + \cdots + x_n A_{n1} \\
x_1 A_{12} + x_2 A_{22} + \cdots + x_n A_{n2} \\
\vdots \\
x_1 A_{1n} + x_2 A_{2n} + \cdots + x_n A_{nn}
\end{pmatrix}^T,
$$

so that the matrix multiplication equivalent of $\sum_{\mu=1}^{n} x_\mu A_{\mu\nu}$ is $\mathbf{x}^T A$.
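The matrix forms found in solutions 1 and 3 are easy to check numerically. The following sketch (Python/NumPy, illustrative and not part of the notes) evaluates each index expression directly with einsum and compares it with the closed matrix form:

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))
x = rng.normal(size=n)

# (a) D_{ab} = A_{am} B_{mn} C_{bn}  ->  D = A B C^T
assert np.allclose(np.einsum('am,mn,bn->ab', A, B, C), A @ B @ C.T)

# (b) D_{ab} = A_{am} B_{bg} C_{mg}  ->  D = A C B^T
assert np.allclose(np.einsum('am,bg,mg->ab', A, B, C), A @ C @ B.T)

# (c) D_{ab} = A_{ag} (B_{gb} + C_{gb})  ->  D = A (B + C)
assert np.allclose(np.einsum('ag,gb->ab', A, B + C), A @ (B + C))

# solution 3: sum_mu x_mu A_{mu nu} has the free index nu, i.e. the row vector x^T A
assert np.allclose(np.einsum('m,mn->n', x, A), x @ A)
```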


2 Covariant and Contravariant Vectors

In this chapter we will describe how vectors change under a coordinate transformation, i.e.,

under a change of basis. Doing this will allow us to make a distinction between two types of

vectors, which we will call contravariant vectors and covariant vectors (the latter sometimes shortened to "covectors").

In physics, a vector typically arises as the outcome of a measurement or series of measurements and is represented as a tuple of numbers, such as $(v_1, v_2, v_3)$. This tuple of numbers, each of which is called a coordinate, depends on the choice of coordinate system. Let us assume that we use a linear coordinate system, so that we can use linear algebra to describe it. The position of a physical object, for example, can be specified using a Cartesian coordinate system and is often represented as an arrow from the origin. We can then use a chosen set of basis vectors belonging to the coordinate system, for example, the standard basis $\{\mathbf{i} = (1, 0, 0),\ \mathbf{j} = (0, 1, 0),\ \mathbf{k} = (0, 0, 1)\}$ in $\mathbb{R}^3$, to specify the location of the object as $r_1 \mathbf{i} + r_2 \mathbf{j} + r_3 \mathbf{k}$, so that $(r_1, r_2, r_3)$ is the 3-tuple of the coordinates of the object.

In such a description of objects with coordinates, we must be fully aware that the coordinates

themselves have no meaning. Only with the corresponding basis vectors do these numbers

acquire meaning. Remember that the object one describes is independent of the coordinate

system (and hence the set of the basis vectors) chosen. We are thus interested in how the

coordinates of an object transform when the original coordinate system is changed to a new

one.

Now, suppose we have two bases in a three-dimensional vector space $V$ (indeed, we could generalise this to $n$ dimensions, but three dimensions keeps the subsequent equations less cumbersome to write down), $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ and $\{\mathbf{e}'_1, \mathbf{e}'_2, \mathbf{e}'_3\}$. Suppose every basis vector in the primed basis can be written as a linear combination of the basis vectors of the unprimed basis, i.e.,

$$
\begin{aligned}
\mathbf{e}'_1 &= a_{11} \mathbf{e}_1 + a_{12} \mathbf{e}_2 + a_{13} \mathbf{e}_3, \\
\mathbf{e}'_2 &= a_{21} \mathbf{e}_1 + a_{22} \mathbf{e}_2 + a_{23} \mathbf{e}_3, \\
\mathbf{e}'_3 &= a_{31} \mathbf{e}_1 + a_{32} \mathbf{e}_2 + a_{33} \mathbf{e}_3,
\end{aligned}
\tag{2.1}
$$

or

$$
\begin{pmatrix} \mathbf{e}'_1 \\ \mathbf{e}'_2 \\ \mathbf{e}'_3 \end{pmatrix}
= \Lambda \begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix},
\quad \text{where} \quad
\Lambda = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},
\tag{2.2}
$$

where we assume that Λ is non-singular (and hence invertible).

Now, let us take a vector $\mathbf{v} = (v_1, v_2, v_3)$ in the unprimed basis and see how its coordinates transform when written in terms of the primed basis. From linear algebra, we know that the change-of-basis matrix is constructed by letting the columns be the old basis vectors expressed


in terms of the new ones, i.e., the columns will be unprimed basis vectors expressed in terms of

the primed basis vectors. Now, since Λ above is invertible, we have

$$
\begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix}
= \Lambda^{-1} \begin{pmatrix} \mathbf{e}'_1 \\ \mathbf{e}'_2 \\ \mathbf{e}'_3 \end{pmatrix},
$$

and hence the columns of $\left( \Lambda^{-1} \right)^T$ will contain the unprimed basis vectors expressed in terms of the primed basis vectors.

Remark: To see why we must take the transpose, note that
$$
\Lambda^T = \begin{pmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{pmatrix},
$$
so that the columns of $\Lambda^T$ are the primed basis vectors expressed in terms of the unprimed basis vectors. In the same way, $\Lambda^{-1}$ will have as its rows the coordinates of the unprimed basis vectors in terms of the primed basis vectors, so that its transpose will contain these coordinates as its columns, as required.

Therefore,
$$
\mathbf{v}' = \left( \Lambda^{-1} \right)^T \mathbf{v}
\quad \Rightarrow \quad
v'_\mu = \left[ \left( \Lambda^{-1} \right)^T \right]_{\mu\nu} v_\nu.
\tag{2.3}
$$

Vectors that transform in this manner are called contravariant vectors, and the transformation $\left( \Lambda^{-1} \right)^T$ represents a contravariant transformation. These are the vectors that we typically deal with, which is why we almost always simply call them "vectors". Observe that the coordinates of $\mathbf{v}$ transform in the opposite way (i.e., "contrary") to the basis vectors that describe

it. This means that the vector itself does not change, i.e., if we use an arrow to indicate our

vector, then physically this arrow will be unchanged, as we require. Instead, the components of

the vector make a change that cancels the change in the basis vectors, resulting in a change of coordinates. In other words, if the basis vectors were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way (with the effect seen in the

values of the coordinates). Similarly, if the basis vectors were stretched in one direction, the

components of the vector, like the coordinates, would reduce in an exactly compensating way.
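This compensation can be made explicit with a quick numerical sketch (Python/NumPy, illustrative; a random Λ is assumed invertible, which holds with probability 1): the basis transforms with Λ per (2.1), the components with (Λ⁻¹)ᵀ per (2.3), and the geometric vector Σ_µ v_µ e_µ is unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
Lam = rng.normal(size=(3, 3))       # Lambda from (2.2); assumed invertible
E = np.eye(3)                       # rows: old basis vectors e_1, e_2, e_3
E_new = Lam @ E                     # rows: new basis e'_i = a_{ij} e_j, per (2.1)

v = rng.normal(size=3)              # components in the old basis
v_new = np.linalg.inv(Lam).T @ v    # contravariant rule (2.3)

# the geometric vector sum_mu v_mu e_mu is unchanged by the basis change
assert np.allclose(v @ E, v_new @ E_new)
```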

Let us now look at vectors that transform in the same way as the basis vectors. Consider an $n$-dimensional manifold $V$ with coordinates $x_1, x_2, \ldots, x_n$. Let $f$ be some scalar function. Then the gradient of $f(x_1, x_2, \ldots, x_n)$ is

$$
\left( \nabla f \right)_\mu = \frac{\partial f}{\partial x_\mu},
\quad \text{i.e.,} \quad
\nabla f = \frac{\partial f}{\partial x_1} \mathbf{e}_1 + \frac{\partial f}{\partial x_2} \mathbf{e}_2 + \cdots + \frac{\partial f}{\partial x_n} \mathbf{e}_n.
\tag{2.4}
$$

Suppose we have a vector field defined on this manifold V, V = V(x). Let us perform a homogeneous linear transformation of the coordinates:

$$
x'_\mu = A_{\mu\nu} x_\nu.
\tag{2.5}
$$


As we saw in the previous example, we thus have a corresponding change in the vector field V:

$$
V'_\mu(\mathbf{x}') = A_{\mu\nu} V_\nu(\mathbf{x}),
\tag{2.6}
$$

where A is the same matrix as in (2.5). Note that this matrix describes the transformation of

the vector components, while previously our matrix Λ described the transformation of the basis

vectors, so that $A = \left( \Lambda^{-1} \right)^T$.

Now, take the function $f(x_1, x_2, \ldots, x_n)$ and the gradient $w_\alpha$ at a point $P$, as follows:
$$
w_\alpha = \frac{\partial f}{\partial x_\alpha};
\tag{2.7}
$$

and in the new coordinate system,

$$
w'_\alpha = \frac{\partial f}{\partial x'_\alpha}.
\tag{2.8}
$$

(i.e., the $w'_\alpha$ are the components of the gradient vector in the new coordinate system). Then, by the chain rule,

$$
\frac{\partial f}{\partial x'_1}
= \frac{\partial f}{\partial x_1} \frac{\partial x_1}{\partial x'_1}
+ \frac{\partial f}{\partial x_2} \frac{\partial x_2}{\partial x'_1}
+ \cdots
+ \frac{\partial f}{\partial x_n} \frac{\partial x_n}{\partial x'_1},
$$

that is,

$$
\frac{\partial f}{\partial x'_\mu} = w'_\mu
= \frac{\partial f}{\partial x_\nu} \frac{\partial x_\nu}{\partial x'_\mu}
= w_\nu \frac{\partial x_\nu}{\partial x'_\mu}
\quad \Rightarrow \quad
w'_\mu = \left( \frac{\partial x_\nu}{\partial x'_\mu} \right) w_\nu.
\tag{2.9}
$$

Now, take (2.5) and rewrite it as

$$
x_\mu = \left( A^{-1} \right)_{\mu\nu} x'_\nu.
$$

Then,

$$
\frac{\partial x_\mu}{\partial x'_\alpha}
= \frac{\partial \left[ \left( A^{-1} \right)_{\mu\nu} x'_\nu \right]}{\partial x'_\alpha}
= \left( A^{-1} \right)_{\mu\nu} \frac{\partial x'_\nu}{\partial x'_\alpha}
+ \frac{\partial \left( A^{-1} \right)_{\mu\nu}}{\partial x'_\alpha} x'_\nu.
\tag{2.10}
$$

Because in this case $A$ does not depend on $x'_\alpha$, the last term on the right-hand side of the above equation vanishes. Also,

$$
\frac{\partial x'_\nu}{\partial x'_\alpha} = \delta_{\nu\alpha},
\qquad
\delta_{\nu\alpha} =
\begin{cases}
1 & \text{when } \nu = \alpha, \\
0 & \text{when } \nu \neq \alpha.
\end{cases}
\tag{2.11}
$$

Therefore, what remains is

$$
\frac{\partial x_\mu}{\partial x'_\alpha}
= \left( A^{-1} \right)_{\mu\nu} \delta_{\nu\alpha}
= \left( A^{-1} \right)_{\mu\alpha}.
\tag{2.12}
$$


Finally, in combination with (2.9), we get the following transformation of the components of the

gradient:

$$
w'_\mu = \left[ \left( A^{-1} \right)^T \right]_{\mu\nu} w_\nu
\quad \Rightarrow \quad
\left( \nabla f \right)' = \left( A^{-1} \right)^T \nabla f.
\tag{2.13}
$$

But remember that $A = \left( \Lambda^{-1} \right)^T$, so that $\left( A^{-1} \right)^T = \Lambda$. Therefore,

$$
\left( \nabla f \right)' = \Lambda \left( \nabla f \right),
\tag{2.14}
$$

i.e., we have shown that the components of the gradient vector transform in exactly the same

way as the basis vectors. Vectors that transform in this manner are called covariant vectors

or simply covectors, and the matrix Λ represents a covariant transformation.
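This covariant behaviour can also be checked numerically. Here is a minimal sketch (Python/NumPy, illustrative; a random matrix is assumed invertible) using a linear function $f(\mathbf{x}) = \mathbf{c} \cdot \mathbf{x}$, whose gradient can be written down exactly:

```python
import numpy as np

rng = np.random.default_rng(5)
Lam = rng.normal(size=(3, 3))        # Lambda, assumed invertible
A = np.linalg.inv(Lam).T             # A = (Lambda^{-1})^T relates the coordinates

c = rng.normal(size=3)
w = c                                # f(x) = c . x, so grad f = c in old coordinates
# in the new coordinates x' = A x, f = c . (A^{-1} x'), so the new gradient is
w_new = np.linalg.inv(A).T @ c

# (2.14): the gradient components transform with Lambda, like the basis vectors
assert np.allclose(w_new, Lam @ w)
```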

To distinguish contravariant vectors from covariant vectors, we will write the indices of contravariant vectors as superscripts and the indices of covariant vectors as subscripts:

$y^\alpha$ : contravariant vector

$w_\alpha$ : covariant vector

In addition, we will denote contravariant vectors in boldface ($\mathbf{v}$) and write them explicitly as column matrices, as we have been doing all along,

$$
\mathbf{v} = \begin{pmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{pmatrix}
\quad \text{(contravariant vector)},
\tag{2.15}
$$

and we will denote covariant vectors in boldface with a tilde (˜ v) and write them explicitly as

row matrices,

$$
\tilde{\mathbf{v}} = \begin{pmatrix} v_1 & v_2 & \ldots & v_n \end{pmatrix}
\quad \text{(covariant vector)}.
\tag{2.16}
$$

(Note the position of the index in both cases, following the convention in the box above.) We

also introduce a similar notation convention for matrices, which we will regard as an extension

of the Einstein summation convention. Instead of the usual index notation $A_{mn}$, used to refer to the $m$th row and $n$th column of a matrix $A$, we will write $A^m{}_n$, which means that the transpose is $\left( A^T \right)^m{}_n = A^n{}_m$. Then the transformation rules for contravariant vectors and covariant vectors, respectively, are

$$
\begin{aligned}
v'^\mu &= A^\mu{}_\nu v^\nu \quad \text{(contravariant vectors)}, \\
w'_\mu &= \left( A^{-1} \right)^\nu{}_\mu w_\nu \quad \text{(covariant vectors)}.
\end{aligned}
\tag{2.17}
$$

As for matrix multiplication, we get

$$
(AB)^i{}_k = A^i{}_j B^j{}_k.
\tag{2.18}
$$


This new notation will be useful later because it will indicate that such matrices have mixed

contravariant and covariant transformation properties.


3 Introducing Tensors

3.1 The Inner Product and the First Tensor

The dot product is very important in physics. In classical mechanics, for example, we have that

the work that is done when an object is moved equals the dot product of the force F acting on

the object and the displacement vector x of the object: W = F • x. As we know from linear

algebra, the dot product is just a special case of the inner product (the dot product is often

called the standard inner product on $\mathbb{R}^n$), so we might also write $W = \langle \mathbf{F}, \mathbf{x} \rangle$. The work must

of course be independent of the coordinate system in which the vectors F and x are expressed.

However, the dot product
$$
s = \langle \mathbf{a}, \mathbf{b} \rangle = a^\mu b^\mu
$$
does not in general have this invariance property for arbitrary vectors $\mathbf{a}$ and $\mathbf{b}$ and arbitrary linear transformations $a'^\mu = A^\mu{}_\alpha a^\alpha$ and $b'^\mu = A^\mu{}_\beta b^\beta$:

$$
s' = \langle \mathbf{a}', \mathbf{b}' \rangle
= A^\mu{}_\alpha a^\alpha A^\mu{}_\beta b^\beta
= \left( A^T \right)^\alpha{}_\mu A^\mu{}_\beta \, a^\alpha b^\beta.
$$

So we see that $s' = s$ if and only if $A^{-1} = A^T$, i.e., if and only if we are dealing with an orthogonal transformation (i.e., $A$ is an orthogonal matrix). However, we would like $s = s'$ for any transformation matrix $A$. To try to accomplish this, notice that the dot product between a (contravariant) vector $\mathbf{x}$ and a covector $\mathbf{y}$, $s = x^\mu y_\mu$, is invariant under all transformations, since for all transformation matrices $A$

$$
s' = x'^\mu y'_\mu
= A^\mu{}_\alpha x^\alpha \left( A^{-1} \right)^\beta{}_\mu y_\beta
= \left( A^{-1} \right)^\beta{}_\mu A^\mu{}_\alpha \, x^\alpha y_\beta
= \delta^\beta_\alpha \, x^\alpha y_\beta
= s.
$$
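Numerically, this invariance is easy to confirm. An illustrative sketch (Python/NumPy, not part of the notes; a random A is assumed invertible):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(3, 3))        # any transformation, assumed invertible
x = rng.normal(size=3)             # contravariant components
y = rng.normal(size=3)             # covariant components

x_new = A @ x                      # x'^mu = A^mu_alpha x^alpha
y_new = np.linalg.inv(A).T @ y     # w'_mu = (A^{-1})^nu_mu w_nu, as in (2.17)

# s = x^mu y_mu is the same number in both coordinate systems
assert np.isclose(x @ y, x_new @ y_new)
```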

With the help of this dot product, we can introduce a new inner product between two contravariant vectors that also has the invariance property. Let us define this inner product as

$$
s = g_{\mu\nu} x^\mu y^\nu,
\tag{3.1}
$$

where, in $\mathbb{R}^3$,
$$
g = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix}.
\tag{3.2}
$$

Now, we must make sure that this object g is chosen so that our new inner product reproduces

the old one if we choose an orthonormal coordinate system. So, in $\mathbb{R}^3$, we should get

$$
s = g_{\mu\nu} x^\mu y^\nu
= \begin{pmatrix} x^1 & x^2 & x^3 \end{pmatrix}
\begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix}
\begin{pmatrix} y^1 \\ y^2 \\ y^3 \end{pmatrix}
= x^1 y^1 + x^2 y^2 + x^3 y^3
\quad \text{(in an orthonormal system)}.
$$


This implies that

$$
g_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\quad \text{in an orthonormal coordinate system}.
\tag{3.3}
$$
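In einsum form, the inner product (3.1) with the orthonormal metric (3.3) indeed reproduces the ordinary dot product; a tiny illustrative sketch (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=3)
y = rng.normal(size=3)

g = np.eye(3)                       # metric in an orthonormal system, (3.3)
s = np.einsum('mn,m,n->', g, x, y)  # s = g_{mu nu} x^mu y^nu, (3.1)
assert np.isclose(s, x @ y)         # reproduces the ordinary dot product
```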

Note, however, that $g_{\mu\nu}$ does not have the transformation properties of an ordinary matrix. Remember that the matrix $A$ of the previous chapter had one index up and one index down, like $A^\mu{}_\nu$, indicating that it has mixed contravariant and covariant transformation properties. This new object $g_{\mu\nu}$, however, has been written with both indices down, so it transforms in a purely covariant way. This object, which looks like a matrix but does not transform like one, is an example of a tensor. A matrix is also a tensor, as are vectors and covectors. Matrices, vectors, and covectors are special cases of the more general class of objects called "tensors". The object $g_{\mu\nu}$ is a kind of tensor that is neither a matrix nor a vector nor a covector. It is a new kind of object for which only tensor mathematics has a proper description. It is called a metric tensor or simply a metric.

3.2 Creating Tensors from Vectors

We have seen that the inner product of a vector with a covector is

$$
s = x^\mu y_\mu.
$$

In this case the indices are paired, indicating by the Einstein convention a summation over all

possible values of the index. We can also multiply vectors and covectors without pairing the

indices, and therefore without summation. For example, in three dimensions, we get

$$
s^\mu{}_\nu = x^\mu y_\nu =
\begin{pmatrix}
x^1 y_1 & x^1 y_2 & x^1 y_3 \\
x^2 y_1 & x^2 y_2 & x^2 y_3 \\
x^3 y_1 & x^3 y_2 & x^3 y_3
\end{pmatrix}.
$$

This object still looks very much like a matrix, since a matrix is also nothing more or less than an

array of numbers labelled with two indices. To check if this is a true matrix, or something else, we

need to see how it transforms. From linear algebra, we know that if A is a matrix representing a

linear mapping, and $S$ is a change-of-basis matrix (from the unprimed to the primed coordinate system), then $A' = SAS^{-1}$, where $A'$ represents the matrix $A$ in the primed coordinate system. Now,

$$
\left( s' \right)^\alpha{}_\beta = x'^\alpha y'_\beta
= A^\alpha{}_\mu x^\mu \left( A^{-1} \right)^\nu{}_\beta y_\nu
= A^\alpha{}_\mu \left( x^\mu y_\nu \right) \left( A^{-1} \right)^\nu{}_\beta
= A^\alpha{}_\mu \, s^\mu{}_\nu \left( A^{-1} \right)^\nu{}_\beta,
$$

so that $s^\alpha{}_\beta$ transforms like an ordinary matrix, which means that $s^\mu{}_\nu$ is indeed an ordinary matrix. But if we instead use two covectors,

$$
t_{\mu\nu} = x_\mu y_\nu =
\begin{pmatrix}
x_1 y_1 & x_1 y_2 & x_1 y_3 \\
x_2 y_1 & x_2 y_2 & x_2 y_3 \\
x_3 y_1 & x_3 y_2 & x_3 y_3
\end{pmatrix},
$$

then we get a tensor with diﬀerent transformation properties,

$$
t'_{\alpha\beta} = x'_\alpha y'_\beta
= \left( A^{-1} \right)^\mu{}_\alpha x_\mu \left( A^{-1} \right)^\nu{}_\beta y_\nu
= \left( A^{-1} \right)^\mu{}_\alpha \left( x_\mu y_\nu \right) \left( A^{-1} \right)^\nu{}_\beta
= \left( A^{-1} \right)^\mu{}_\alpha \, t_{\mu\nu} \left( A^{-1} \right)^\nu{}_\beta.
$$


The difference here lies in the first matrix of the transformation equation. For s it is the transformation matrix for contravariant vectors, while for t it is the transformation for covariant vectors. The tensor t is clearly not a matrix, so we indeed created something new here. The

metric tensor g of the previous section is of the same type as t.
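The two transformation behaviours can be compared directly in code. A sketch (Python/NumPy, illustrative; a random A is assumed invertible) builds both outer products from the same component arrays and applies the two rules derived above:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(3, 3))           # assumed invertible
Ainv = np.linalg.inv(A)
x = rng.normal(size=3)                # contravariant for s; covariant for t
y = rng.normal(size=3)                # covariant components

s = np.outer(x, y)                    # s^mu_nu = x^mu y_nu
t = np.outer(x, y)                    # t_{mu nu} = x_mu y_nu (same numbers, different type)

# s transforms like a matrix: s' = A s A^{-1}
s_new = np.outer(A @ x, Ainv.T @ y)
assert np.allclose(s_new, A @ s @ Ainv)

# t transforms with two covariant factors: t' = (A^{-1})^T t (A^{-1})
t_new = np.outer(Ainv.T @ x, Ainv.T @ y)
assert np.allclose(t_new, Ainv.T @ t @ Ainv)
```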

The beauty of tensors is that they can have an arbitrary number of indices. One can also produce, for instance, a tensor with three indices,
$$
A_{\alpha\beta\gamma} = x_\alpha y_\beta z_\gamma.
$$

In three dimensions, this gives an ordered array of 27 elements, a kind of “super matrix”.

Let us now introduce some terminology.

• The tensor $A_{\alpha\beta\gamma}$ is a rank 3 tensor. Tensors of rank 0 are scalars, tensors of rank 1 are vectors and covectors, and tensors of rank 2 are matrices and other types of tensors (such as the metric tensor). In general, in $n$-dimensional space, a tensor of rank $r$ has $n^r$ elements.

• We can distinguish between the contravariant rank and covariant rank of a tensor. $A_{\alpha\beta\gamma}$ is a tensor of covariant rank 3 and contravariant rank 0. Its total rank is 3. One can also produce tensors of, for instance, contravariant rank 2 and covariant rank 3, $B^{\alpha\beta}{}_{\mu\nu\varphi}$, with total rank 5.

Typically, when tensor mathematics is applied, the meaning of each index has been defined beforehand: the first index means this, the second means that, etc. As long as this is well-defined, one can have covariant and contravariant indices in any order.

Remark: Although a multiplication (without summation) of n vectors and m covectors produces a tensor of rank n + m, not every tensor of rank n + m can be constructed as such a product. Tensors are much more general than these simple products of vectors and covectors. It is therefore important to step away from this picture of combining vectors and covectors into a tensor, and consider this construction as nothing more than a simple example.

Remark: We have said that tensors of rank 2 are matrices. It is not true, however, that all matrices are tensors (of rank 2), as we have seen already with the object $s^\mu{}_\nu$.


4 Tensor Deﬁnition and Properties

Let us now formally define a tensor.

Definition of Tensor

An $(n, m)$ tensor $t^{\mu_1 \cdots \mu_n}{}_{\nu_1 \cdots \nu_m}$ at a given point in space can be described by an array of numbers with $n + m$ indices that transforms, upon coordinate transformation by a given matrix $A$, in the following way:
$$
t'^{\alpha_1 \cdots \alpha_n}{}_{\beta_1 \cdots \beta_m}
= A^{\alpha_1}{}_{\mu_1} \cdots A^{\alpha_n}{}_{\mu_n}
\left( A^{-1} \right)^{\nu_1}{}_{\beta_1} \cdots \left( A^{-1} \right)^{\nu_m}{}_{\beta_m}
\, t^{\mu_1 \cdots \mu_n}{}_{\nu_1 \cdots \nu_m}.
$$
An $(n, m)$ tensor in a $k$-dimensional manifold therefore has $k^{n+m}$ elements. It is contravariant in $n$ components and covariant in $m$ components.
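The defining transformation law is again a single einsum. A sketch for the case n = m = 1 (Python/NumPy, illustrative), where the law reduces to the familiar matrix rule $A\,t\,A^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.normal(size=(3, 3))        # coordinate transformation, assumed invertible
Ainv = np.linalg.inv(A)
t = rng.normal(size=(3, 3))        # a (1,1) tensor t^mu_nu

# t'^a_b = A^a_mu (A^{-1})^nu_b t^mu_nu  -- the definition for n = m = 1
t_new = np.einsum('am,nb,mn->ab', A, Ainv, t)
assert np.allclose(t_new, A @ t @ Ainv)
```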

4.1 Symmetry and Anti-Symmetry

In practice it often happens that tensors display a certain amount of symmetry, like what we know from matrices. Such symmetries have a strong effect on the properties of tensors. Often, many of these properties or even tensor equations can be derived solely on the basis of these symmetries.

A tensor t is called symmetric in the indices µ and ν if the elements are equal upon exchange of the index values. So, for a second-rank contravariant tensor,

$$
t^{\mu\nu} = t^{\nu\mu} \quad \text{(symmetric (2,0) tensor)}.
\tag{4.1}
$$

A tensor t is called anti-symmetric in the indices µ and ν if the elements are equal in absolute

value but opposite in sign upon exchange of the index values. So, for a second-rank contravariant

tensor,

$$
t^{\mu\nu} = -t^{\nu\mu} \quad \text{(anti-symmetric (2,0) tensor)}.
\tag{4.2}
$$

It is not useful to speak of symmetry in a pair of indices that are not of the same type (either covariant or contravariant), i.e., we can only consider symmetry of a tensor with respect to two indices that are either both covariant or both contravariant. The reason for this is that the properties of symmetry only remain invariant upon basis transformation if the indices are of the same type.
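A small numerical sketch of this last point (Python/NumPy, illustrative; a random A is assumed invertible): symmetry of a (0,2) tensor survives the covariant transformation of both indices, while the same array treated as a mixed object $s^\mu{}_\nu$ generally loses its symmetry.

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(size=(3, 3))            # assumed invertible
Ainv = np.linalg.inv(A)

t = rng.normal(size=(3, 3))
t = t + t.T                            # a symmetric (0,2) tensor: t_{mu nu} = t_{nu mu}

# covariant transformation of both indices, as for the tensor t of section 3.2
t_new = Ainv.T @ t @ Ainv
assert np.allclose(t_new, t_new.T)     # symmetry survives the basis change

# by contrast, for the mixed object s^mu_nu ~ A s A^{-1}, symmetry is generally lost
s = t.copy()
s_new = A @ s @ Ainv
print(np.allclose(s_new, s_new.T))     # False in general
```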


4.2 Contraction of Indices

With tensors of at least one covariant and at least
