
Tensors

For vector calculus

Review

• Vectors
• Summation representation of an n by n array
• Gradient, Divergence and Curl
• Spherical Harmonics (maybe)

Motivation

If you tape a book shut and try to spin it in the air about each independent axis, you will notice that it spins fine about two of the axes but not the third. That is the inertia tensor in your hands. Similar are the polarization tensor, the index of refraction tensor and the stress tensor. But tensors also show up in all sorts of places that don't connect to an anisotropic material property; in fact even spherical harmonics are tensors. What are the similarities and differences between such a plethora of tensors?

"The mathematics of tensors is particularly useful for describing properties of substances which vary in direction – although that's only one example of their use." -Feynman II 31-1

Definitions

• q = Scalar: a.k.a. Rank 0 tensor. One real number, invariant under rotation.
• Ai = Vector: a.k.a. Rank 1 tensor, where i = 1, 2, 3, ... Components will transform under rotations. Simply put, an arrow does not need to start at the origin or point in any specific orientation, but it's still the same arrow with the same magnitude.
• Aij = Rank 2 tensor: a.k.a. tensor. Can be written out as a square array. Not all square arrays are tensors; there are some specific requirements. In cartesian space they must be an orthogonal, norm-preserving matrix.
• In N-dimensional space a tensor of rank n has N^n components.

Rank 1 Tensors (Vectors)

The definitions for contravariant and covariant tensors are inevitably given at the beginning of every discussion of tensors. Their definitions are invariably offered without explanation. In that spirit we begin our discussion of rank 1 tensors. The main difference between contravariant and covariant tensors is in how they transform, for example under a rotation of a vector.

Contravariant

In general we can define a contravariant vector as one that transforms as Aj does in the following:

A′i = Σ_j (∂x′i/∂xj) Aj

Here x′i is the transformed x-axis and xj is the old x-axis.
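As a concrete numpy sketch (example values, not from the handout): for a rotation of cartesian axes the coefficients ∂x′i/∂xj are just the entries of a rotation matrix Rij, so A′i = Rij Aj and the magnitude is unchanged, as claimed in the Definitions above.

import numpy as np

theta = np.pi / 6                        # example rotation angle about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])   # R_ij plays the role of dx'_i/dx_j

A = np.array([1.0, 2.0, 3.0])            # components A_j on the old axes
A_prime = np.einsum('ij,j->i', R, A)     # A'_i = R_ij A_j, summed over the repeated index j

print(A_prime)
print(np.linalg.norm(A), np.linalg.norm(A_prime))    # same magnitude before and after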
Covariant

The partial derivative above may have you thinking of a gradient. The gradient is the prototype for a covariant vector, which is defined as

∂φ′/∂x′i = Σ_j (∂φ/∂xj)(∂xj/∂x′i)

Covariant vectors are actually a linear form and not a vector. The linear form is a mapping of vectors into scalars which is additive and homogeneous under multiplication by scalars.¹

¹ http://xxx.lanl.gov/PS cache/gr-qc/pdf/9807/9807044.pdf

Notation

• contravariant: denoted by a superscript, A^i (took a vector and gave us a vector)
• covariant: denoted by a subscript, A_i (took a scalar and gave us a vector)

To avoid confusion, in cartesian coordinates both types are the same, so we just opt for the subscript. Thus a vector x would be x1, x2, x3.

As it turns out, in R3 cartesian space and other rectilinear coordinate systems there is no difference between contravariant and covariant vectors. This will not be the case for other coordinate systems, such as curvilinear coordinate systems, or in 4 dimensions. These definitions are closely related to the Jacobian.

Definitions for Tensors of Rank 2

Rank 2 tensors can be written as a square array. They have contravariant, mixed, and covariant forms. As we might expect, in cartesian coordinates these are the same.

Vector Calculus and Identities

Tensor analysis extends deep into coordinate transformations of all kinds of spaces and coordinate systems. At a rather shallow level it can be used to significantly simplify the operations and identities of vector calculus.

Scalar Product

Take for example the scalar or dot product:

A·B = Σ_i Ai Bi = Ai Bi

Clearly i is just a general way of specifying the components to multiply. Here we sum the products of the two components where i is the same. The equation is also written without the summation sign; the two forms are equivalent because of the Einstein summation convention.
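As a quick numpy illustration (example vectors, not from the handout), the repeated index i in Ai Bi is exactly what einsum sums over:

import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

print(np.einsum('i,i->', A, B))   # A_i B_i, summed over the repeated index i
print(np.dot(A, B))               # the same number, 32.0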
Scalar Product with an invariant tensor (Kronecker delta)

In a more thorough treatment we can also take the scalar product using a mixed tensor of rank 2, δ_l^k, more commonly recognized as the Kronecker delta δij:

δij = 1 if i = j
      0 if i ≠ j

or, as a matrix,

δ_l^k = ( 1 0 )
        ( 0 1 )

Clearly, the Kronecker delta is a rank two tensor that can be expressed as a matrix, but it rarely takes such an inherently dangerous form. The Kronecker delta is a so-called invariant tensor. Now we can write our dot product as

A·B = Σ Ai δij Bj = Ai δij Bj = Ai Bi

The sum is, of course, omitted because of the summation convention. Clearly the Kronecker delta facilitates the scalar product. Furthermore, the Kronecker delta can be applied to any situation where the product of orthogonal components must be zero.
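A numpy sketch with the same example vectors as above: representing δij by the identity matrix np.eye(3) and contracting reproduces the dot product.

import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
delta = np.eye(3)                              # delta_ij as a matrix

print(np.einsum('i,ij,j->', A, delta, B))      # A_i delta_ij B_j
print(np.einsum('i,i->', A, B))                # equals A_i B_i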


Summation Convention and Contraction²

Summation Convention: When a subscript (letter, not number) appears twice on one side of an equation, summation with respect to that subscript is implied.

Contraction: Consists of setting two unlike indices (subscripts) equal to each other and then summing as implied by the summation convention.

² From Arfken, p. 138
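As a minimal numpy illustration (example array, not from the handout), contracting the two indices of a rank 2 tensor Tij, i.e. forming Tii, collapses it to a rank 0 scalar: the trace.

import numpy as np

T = np.arange(9.0).reshape(3, 3)       # an example 3x3 array T_ij
print(np.einsum('ii->', T))            # T_ii : contraction over i
print(np.trace(T))                     # the same scalar, 12.0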

Cross Product and the invariant Levi-Civita Symbol

The rank three analog of the Kronecker delta is known as the Levi-Civita symbol, εijk:

εijk =  1 if (ijk) is an even permutation: ε123 = ε231 = ε312
       −1 if (ijk) is an odd permutation: ε132 = ε213 = ε321
        0 if any two of the indexes are the same

It's a three dimensional, 27-element array with only six non-zero components. The Levi-Civita symbol is a completely antisymmetric unit tensor. We can use it to perform some amazing operations.
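A small numpy sketch (not from the handout): building εijk as a 3×3×3 array and confirming the 27-element, six-non-zero-component count and the complete antisymmetry.

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1     # even permutations
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1    # odd permutations

print(eps.size, np.count_nonzero(eps))             # 27 6
print(np.allclose(eps, -np.swapaxes(eps, 0, 1)))   # antisymmetric under swapping an index pair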
Cross product

A = B × C
Ai = εijk Bj Ck

Let's take a closer look:

Ax = εxjk Bj Ck = εxyz By Cz + εxzy Bz Cy = By Cz − Bz Cy
Ay = εyjk Bj Ck = εyxz Bx Cz + εyzx Bz Cx = Bz Cx − Bx Cz
Az = εzjk Bj Ck = εzxy Bx Cy + εzyx By Cx = Bx Cy − By Cx

Now you can see why we just write Ai = εijk Bj Ck.
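A numpy sketch with example vectors B and C (not from the handout): the single line Ai = εijk Bj Ck, written as an einsum over the εijk array built as above, reproduces the familiar cross product.

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

B = np.array([1.0, 2.0, 3.0])
C = np.array([4.0, 5.0, 6.0])

A = np.einsum('ijk,j,k->i', eps, B, C)   # A_i = eps_ijk B_j C_k
print(A)                                 # [-3.  6. -3.]
print(np.cross(B, C))                    # the same components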
Partial derivative, Grad, Div and Curl

We can write a partial derivative in general form as follows:

∂i ≡ ∂/∂xi

Grad

Thus

∇φ = ∂i φ

We know this will generate a vector since there is nothing to sum over.

Divergence

Thus

∇·A = ∇·Ai = ∂i Ai = ∂i δij Aj

Since two indexes are the same we sum, which generates a scalar.

Curl

Thus

∇×A = ∇×Ai = εijk ∂j Ak = ci

Now we have three indexes, so we sum twice to get a vector. This is a prime example of contraction.
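A small symbolic sketch of these index expressions (Python with sympy; the fields φ and A below are arbitrary examples, not from the handout): the free index in ∂iφ produces a vector, the repeated index in ∂iAi produces a scalar, and εijk ∂jAk contracts twice to give the curl.

import sympy as sp

x = sp.symbols('x0 x1 x2')                        # coordinates x_i
phi = x[0] * x[1] * x[2]                          # example scalar field
A = [x[0]**2, x[1] * x[2], sp.sin(x[2])]          # example vector field A_i

grad = [sp.diff(phi, x[i]) for i in range(3)]     # (grad phi)_i = d_i phi : one free index, a vector
div = sum(sp.diff(A[i], x[i]) for i in range(3))  # div A = d_i A_i : repeated index, a scalar
curl = [sum(sp.LeviCivita(i, j, k) * sp.diff(A[k], x[j])
            for j in range(3) for k in range(3))  # (curl A)_i = eps_ijk d_j A_k
        for i in range(3)]

print(grad, div, curl)
# d_i eps_ijk d_j A_k vanishes identically (the div-of-curl identity further below)
print(sp.simplify(sum(sp.diff(curl[i], x[i]) for i in range(3))))   # 0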
Rank 2 Tensors

The Kronecker delta was previously mentioned to be a rank 2 mixed tensor, but without further explanation. There are three types of rank 2 tensors:

A′^ij = Σ_kl (∂x′i/∂xk)(∂x′j/∂xl) A^kl        contravariant

B′^i_j = Σ_kl (∂x′i/∂xk)(∂xl/∂x′j) B^k_l      mixed

C′_ij = Σ_kl (∂xk/∂x′i)(∂xl/∂x′j) C_kl        covariant
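A numpy sketch (example values, not from the handout): in cartesian coordinates the coefficients ∂x′i/∂xk for a rotation are the entries of the rotation matrix R, so a rank 2 tensor transforms with one factor of R per index, C′ij = Rik Rjl Ckl, which is the matrix statement C′ = R C Rᵀ.

import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

C = np.diag([1.0, 2.0, 3.0])                     # an example rank 2 tensor C_kl
C_prime = np.einsum('ik,jl,kl->ij', R, R, C)     # C'_ij = R_ik R_jl C_kl

print(C_prime)
print(np.allclose(C_prime, R @ C @ R.T))         # True: same as the matrix form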

Symmetry

Any rank 2 tensor can be split into a symmetric and an antisymmetric part:

Aij = (1/2)(Aij + Aji) + (1/2)(Aij − Aji)

where the first term is symmetric and the second is antisymmetric.
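A quick numpy illustration with an arbitrary example array (not from the handout): the two pieces defined above really are symmetric and antisymmetric, and they add back up to Aij.

import numpy as np

A = np.arange(9.0).reshape(3, 3)          # an arbitrary example A_ij
S = 0.5 * (A + A.T)                       # symmetric part
W = 0.5 * (A - A.T)                       # antisymmetric part

print(np.allclose(S, S.T), np.allclose(W, -W.T))   # True True
print(np.allclose(S + W, A))                        # True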


As used in P220

The functions of position are given by

f = x′i
g = x′j
h = x′k (not given in class)

See Griffiths, Appendix A on Vector Calculus and Curvilinear Coordinates.

Vector identity examples

The basic epsilon-delta identity is

εijk εlmk = δil δjm − δim δjl

Contracting it with δjm (setting m = j and summing) gives

εijk εljk = δjm (δil δjm − δim δjl) = δil δjj − δim δml = 3δil − δil = 2δil

Okay, now let's derive identities.

d = a × (b × c)

Writing e = b × c, so that ek = εklm bl cm,

di = εijk aj ek = εijk aj εklm bl cm
   = (δil δjm − δim δjl) aj bl cm
   = bi (aj cj) − ci (aj bj)

d = b(a·c) − c(a·b)

And now for another identity:

∇ × (∇φ) = 0
εijk ∂j (∂k φ) = 0

And another:

∇ · (∇ × A) = 0
∂i εijk ∂j Ak = 0

And now a really complicated one:

c = ∇ × (a × b)
ci = εijk ∂j (εklm al bm)
ci = (δil δjm − δim δjl) [(∂j al) bm + al (∂j bm)]
ci = (∂j ai) bj + ai (∂j bj) − (∂j aj) bi − aj (∂j bi)
c = (b·∇)a + a(∇·b) − b(∇·a) − (a·∇)b
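A brute-force numerical check of the epsilon-delta identity and of the BAC-CAB rule above (numpy, random example vectors; a sketch, not part of the original handout):

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
delta = np.eye(3)

# eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,lmk->ijlm', eps, eps)
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
print(np.allclose(lhs, rhs))              # True

# d = a x (b x c) = b (a.c) - c (a.b)
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))
d = np.cross(a, np.cross(b, c))
print(np.allclose(d, b * np.dot(a, c) - c * np.dot(a, b)))   # True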

References and Further Reading

Arfken and Weber, Mathematical Methods for Physicists, 5th ed. (2001), Section 2.6 Tensor Analysis
Feynman, Lectures on Physics, Lecture II 31

Related topics

• Spherical Harmonics
• Group Theory

Author: W. Schlotter    Last Modified: May 20, 2003    One-pager 1.02
wschlott@stanford.edu
