
peeter joot

GEOMETRIC ALGEBRA FOR ELECTRICAL ENGINEERS.

Electromagnetism applications.
April 2018 – version V0.1.2
© Peeter Joot: Geometric Algebra for Electrical Engineers., Electromagnetism applications., April 2018
COPYRIGHT

Copyright © 2018 Peeter Joot. All Rights Reserved.
This book may be reproduced and distributed in whole or in part, without fee, subject to the
following conditions:

• The copyright notice above and this permission notice must be preserved complete on all
complete or partial copies.

• Any translation or derived work must be approved by the author in writing before distri-
bution.

• If you distribute this work in part, instructions for obtaining the complete version of this
document must be included, and a means for obtaining a complete version provided.

• Small portions may be reproduced as illustrations for reviews or quotes in other works
without this permission notice if proper citation is given.

Exceptions to these rules may be granted for academic purposes: Write to the author and ask.

Disclaimer: I confess to violating somebody’s copyright when I copied this copyright state-
ment.

DOCUMENT VERSION

Version 0.1.2
Sources for this notes compilation can be found in the git repository
git@gitlab.com:peeterjoot/GAelectrodynamics.git
The last commit (Apr/5/2018), associated with this pdf was
5b6b4f13c948c0b1dd7d6b97c5d15bdf7ca52203

Dedicated to:
Aurora and Lance, my awesome kids, and
Sofia, who not only tolerates and encourages my studies, but is also awesome enough to think
that math is sexy.
P R E FA C E

This book introduces the fundamentals of geometric algebra and calculus, and applies those
tools to the study of electromagnetism.
Geometric algebra provides a structure that can represent oriented point, line, plane, and vol-
ume segments. Vectors, which can be thought of as a representation of oriented line segments,
are generalized to multivectors. A full-fledged, but non-commutative (i.e. order matters), multiplication operation will be defined for products of vectors. Namely, the square of a vector is
the square of its length. This simple rule, along with a requirement that we can sum vectors and
their products, essentially defines geometric algebra. Such sums of scalars, vectors and vector
products are called multivectors.
The reader will see that familiar concepts such as the dot and cross product are related to
a more general vector product, and that algebraic structures such as complex numbers can be
represented as multivectors. We will be able to utilize generalized complex exponentials to do
rotations in arbitrarily oriented planes in space, and will find that simple geometric algebra
representations of many geometric transformations are possible.
Generalizations of the divergence and Stokes’ theorems are required once we choose to work
with multivector functions. There is an unfortunate learning curve required to express this gen-
eralization, but once overcome, we will be left with a single powerful multivector integration
theorem that has no analogue in conventional vector calculus. This fundamental theorem of geo-
metric calculus incorporates Green’s (area) theorem, the divergence theorem, Stokes’ theorems,
and complex residue calculus. Multivector calculus also provides the opportunity to define a few
unique and powerful Green’s functions that almost trivialize solutions of Maxwell’s equations.
Instead of working separately with electric and magnetic fields, we will work with a hybrid
multivector field that includes both electric and magnetic field contributions, and with a mul-
tivector current that includes both charge and current densities. The natural representation of
Maxwell’s equations is a single multivector equation that is easier to solve and manipulate than the conventional mess of divergence and curl equations that are familiar to the reader.
This book is aimed at graduate or advanced undergraduates in electrical engineering or
physics. While all the fundamental results of electromagnetism are derived from Maxwell’s
equations, there will be no attempt to motivate Maxwell’s equations themselves, so existing
familiarity with the subject is desirable.

These notes contain:

• An introduction to Geometric Algebra (GA).

• Geometric calculus, and Green’s function solutions of differential equations.

• Application of Geometric Algebra to electromagnetism.

Prerequisites: It will be assumed that the reader is familiar with rotation matrices, complex
numbers, dot and vector products and vector spaces, coordinate representation, linear transfor-
mations, determinants, Stokes, Green’s and divergence theorems.

Thanks: I’d like to thank Steven De Keninck, and Dr. Wolfgang Lindner for their review of
some of the earlier drafts of this book. Their suggestions significantly improved the quality and
readability of the text.
Peeter Joot peeterjoot@protonmail.com

CONTENTS

Preface xi
1 geometric algebra. 1
1.1 Prerequisites. 1
1.1.1 Vector space. 1
1.1.2 Basis, span and dimension. 2
1.1.3 Standard basis, length and normality. 3
1.2 Definitions 6
1.2.1 Multivector space. 6
1.2.2 Nomenclature. 14
1.3 Colinear vectors. 16
1.4 Normal vectors. 17
1.5 2D multiplication table. 18
1.6 Plane rotations. 20
1.7 Duality. 25
1.8 Vector product, dot product and wedge product. 27
1.9 Reverse. 34
1.10 Complex representations. 35
1.11 Multivector dot product. 37
1.11.1 Dot product of a vector and bivector 39
1.11.2 Bivector dot product. 41
1.11.3 Problems. 42
1.12 Permutation within scalar selection. 42
1.13 Multivector wedge product. 43
1.13.1 Problems. 45
1.14 Projection and rejection. 45
1.15 Normal factorization of the wedge product. 50
1.16 The wedge product as an oriented area. 51
1.17 General rotation. 53
1.18 Symmetric and antisymmetric vector sums. 56
1.19 Reflection. 57
1.20 Linear systems. 58
1.21 A summary comparision. 62
1.22 Problem solutions. 64
2 multivector calculus. 65


2.1 Reciprocal frames. 65
2.1.1 Motivation and definition. 65
2.1.2 R2 reciprocal frame. 69
2.1.3 R3 reciprocal frame. 71
2.1.4 Problems. 72
2.2 Curvilinear coordinates. 73
2.2.1 Two parameters. 73
2.2.2 Three (or more) parameters. 77
2.2.3 Gradient. 77
2.2.4 Vector derivative. 79
2.2.5 Examples. 80
2.2.6 Problems. 89
2.3 Integration theory. 90
2.3.1 Hypervolume integral 90
2.3.2 Fundamental theorem. 91
2.3.3 Stokes’ theorem. 92
2.3.4 Line integral. 93
2.3.5 Surface integral. 96
2.3.6 Two parameter Stokes’ theorem. 99
2.3.7 Green’s theorem. 101
2.3.8 Volume integral. 103
2.3.9 Three parameter Stokes’ theorem. 106
2.3.10 Divergence theorem. 108
2.4 Multivector Fourier transform and phasors. 109
2.5 Green’s functions. 111
2.5.1 Motivation. 111
2.5.2 Green’s function solutions. 114
2.5.3 Helmholtz equation. 117
2.5.4 First order Helmholtz equation. 118
2.5.5 Spacetime gradient. 120
2.6 Helmholtz theorem. 122
2.7 Problem solutions. 125
3 electromagnetism. 129
3.1 Conventional formulation. 129
3.1.1 Problems. 132
3.2 Maxwell’s equation. 132
3.3 Wave equation and continuity. 135
3.4 Plane waves. 137
3.5 Statics. 141

3.5.1 Inverting the Maxwell statics equation. 141


3.5.2 Enclosed charge. 143
3.5.3 Enclosed current. 144
3.5.4 Example field calculations. 147
3.6 Dynamics. 159
3.6.1 Inverting Maxwell’s equation. 159
3.7 Energy and momentum. 160
3.7.1 Field energy and momentum density and the energy momentum tensor. 160
3.7.2 Poynting’s theorem (prerequisites.) 165
3.7.3 Poynting theorem. 169
3.7.4 Examples: Some static fields. 172
3.7.5 Complex energy and power. 176
3.8 Lorentz force. 178
3.8.1 Statement. 178
3.8.2 Constant magnetic field. 180
3.9 Polarization. 182
3.9.1 Phasor representation. 182
3.9.2 Transverse plane pseudoscalar. 182
3.9.3 Pseudoscalar imaginary. 188
3.10 Transverse fields in a waveguide. 190
3.11 Multivector potential. 193
3.11.1 Definition. 193
3.11.2 Gauge transformations. 196
3.11.3 Far field. 199
3.12 Dielectric and magnetic media. 202
3.12.1 Statement. 202
3.12.2 Alternative form. 205
3.12.3 Gauge like transformations. 206
3.12.4 Boundary value conditions. 208
a distribution theorems. 211
b proof sketch for the fundamental theorem of geometric calculus. 215
c green’s functions. 219
c.1 Helmholtz operator. 219
c.2 Delta function derivatives. 222
d ga electrodynamics in the literature. 225
e energy momentum tensor for vector parameters. 227
f mathematica notebooks. 229

index 234

bibliography 237
LIST OF FIGURES

Figure 1.1 Vector illustration. 7


Figure 1.2 Graphical vector addition. 7
Figure 1.3 Unit bivectors for R3 9
Figure 1.4 Circular representation of two bivectors. 9
Figure 1.5 Graphical representations of e1 e2 . 10
Figure 1.6 Graphical representation of bivector addition in plane. 11
Figure 1.7 Bivector addition. 11
Figure 1.8 e1 + e2 . 17
Figure 1.9 Multiplication by e1 e2 . 20
Figure 1.10 π/2 rotation in the plane using pseudoscalar multiplication. 21
Figure 1.11 Rotation in a plane. 22
Figure 1.12 Radial vector in polar coordinates. 23
Figure 1.13 R3 duality illustration. 26
Figure 1.14 Two vectors in a plane. 27
Figure 1.15 Equilateral triangle in R3. 29
Figure 1.16 Projection and rejection illustrated. 46
Figure 1.18 Parallelogram representations of wedge products. 52
Figure 1.19 Different shape representations of a wedge product. 53
Figure 1.20 Rotation with respect to the plane of a pseudoscalar. 56
Figure 1.21 Reflection. 59
Figure 1.22 Intersection of two lines. 61
Figure 1.17 Two equivalent bivector factorizations. 64
Figure 2.1 Oblique and reciprocal bases. 68
Figure 2.2 Contours for an elliptical region. 74
Figure 2.3 Differentials for an elliptical parameterization. 74
Figure 2.4 Two parameter manifold. 76
Figure 2.5 Polar coordinates. 81
Figure 2.6 Toroidal parameterization. 88
Figure 2.7 One parameter manifold. 94
Figure 2.8 Two parameter manifold differentials. 96
Figure 2.9 Contour for two parameter surface boundary. 99
Figure 2.10 Sum of infinitesimal loops. 100
Figure 2.11 Three parameter volume element. 103
Figure 2.12 Differential surface of a volume. 107


Figure 3.1 Line charge density. 147


Figure 3.2 Circular line charge. 151
Figure 3.3 Field due to a circular distribution. 153
Figure 3.4 (a) A(z̃, ρ̃). (b) B(z̃, ρ̃). 155
Figure 3.5 Electric field direction for circular charge density distribution near z = 0. 156
Figure 3.6 Magnetic field direction for circular current density distribution near z = 0. 156
Figure 3.7 Magnetic field for larger z. 157
Figure 3.8 Magnetic field between two current sources. 157
Figure 3.9 Linear polarization. 184
Figure 3.10 Electric field with elliptical polarization. 186
Figure 3.11 Vertical infinitesimal dipole and selected propagation direction. 201
Figure 3.12 Pillbox integration volume. 209
Figure C.1 Neighbourhood ‖x − x′‖ < ε. 220
1 GEOMETRIC ALGEBRA.
1.1 prerequisites.

Geometric algebra (GA for short) generalizes and extends vector algebra. The following section contains a lightning review of some foundational concepts, including vector space, basis,
orthonormality, and metric. If you are inclined to skip this, please at least examine the stated
dot product definition, since the conventional positive definite property is not assumed.

1.1.1 Vector space.

Vectors have many generalizations in mathematics, where a number of disparate mathematical


objects can all be considered vectors. A vector space is an enumeration of the properties and
operations that are common to a set of vector-like objects, allowing them to be treated in a
unified fashion, regardless of their representation and application.

Definition 1.1: Vector space.

A (real) vector space is a set V = {x, y, z, · · ·}, the elements of which are called vectors,
which has an addition operation designated + and a scalar multiplication operation desig-
nated by juxtaposition, where the following axioms are satisfied for all vectors x, y, z ∈ V
and scalars a, b ∈ R


Vector space axioms.


Addition is closed. x+y ∈ V
(Scalar) multiplication is closed. ax ∈ V
Addition is associative. (x + y) + z = x + (y + z)
Addition is commutative. y+x = x+y
There exists a zero element 0 ∈ V . x+0 = x
For any x ∈ V there exists a negative x + (−x) = 0
additive inverse −x ∈ V .
(Scalar) multiplication is distributive. a(x + y) = ax + ay, (a + b)x = ax + bx
(Scalar) multiplication is associative. (ab)x = a(bx)
There exists a multiplicative identity 1. 1x = x

Despite the generality of this definition, the vector spaces used in GA are fairly restricted. In
particular, electrodynamic applications of GA require only two, three or four dimensional real
vector spaces. No vector spaces with matrix, polynomial, or complex tuple elements will be
required, nor will any infinite dimensional vector spaces. The only unconventional vector space
of interest will be a “space-time” vector space containing a time like “direction”, 1-3 spatial
directions, and a generalized length operation that can be negative.

Exercise 1.1 RN
Define RN as the set of tuples {(x1 , x2 , · · ·) | xi ∈ R}. Show that RN is a vector space when
the addition operation is defined as x + y ≡ (x1 + y1 , x2 + y2 , · · ·) , and scalar multiplication is
defined as ax ≡ (ax1 , ax2 , · · ·) for any x = (x1 , x2 , · · ·) ∈ RN , y = (y1 , y2 , · · ·) ∈ RN , and a ∈ R.
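Exercise 1.1 asks for a proof over arbitrary tuples. As a complement, here is a small Python spot check of several of the axioms for sample R3 tuples, using the stated definitions of addition and scalar multiplication. This is illustration only, not a proof: assertions test only the samples given, while a proof must hold for arbitrary elements.

```python
# Numeric spot check of some vector space axioms for R^3 tuples.

def add(x, y):
    # x + y = (x1 + y1, x2 + y2, ...), per the exercise statement.
    return tuple(xi + yi for xi, yi in zip(x, y))

def smul(a, x):
    # a x = (a x1, a x2, ...), per the exercise statement.
    return tuple(a * xi for xi in x)

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 9)
a, b = 2, 5
zero = (0, 0, 0)

assert add(x, y) == add(y, x)                             # addition commutative
assert add(add(x, y), z) == add(x, add(y, z))             # addition associative
assert add(x, zero) == x                                  # zero element
assert add(x, smul(-1, x)) == zero                        # additive inverse
assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))  # distributive
assert smul(a * b, x) == smul(a, smul(b, x))              # multiplication associative
print("sampled axioms hold")
```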

1.1.2 Basis, span and dimension.

Definition 1.2: Linear combination

Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. A linear combination of vectors in


S is any sum

a1 x1 + a2 x2 + · · · + ak xk .

Definition 1.3: Linear dependence.

Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. This set S is linearly dependent if


any equation

0 = a1 x1 + a2 x2 + · · · + ak xk ,

can be constructed for which not all of the coefficients ai are zero.

Definition 1.4: Linear independence.

Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. This set is linearly independent if


there are no equations with ai ≠ 0 such that

0 = a1 x1 + a2 x2 + · · · + ak xk .

Definition 1.5: Span.

Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. The span of this set is the set of all
linear combinations of these vectors, denoted

span(S ) = {a1 x1 + a2 x2 + · · · + ak xk } .

Definition 1.6: Subspace.

Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. This subset is a subspace if S is a


vector space under the multiplication and addition operations of the vector space V.

Definition 1.7: Basis and dimension

Let S = {x1 , x2 , · · · , xn } be a linearly independent subset of V. This set is a basis if


span(S ) = V. The number of vectors n in this set is called the dimension of the space.

1.1.3 Standard basis, length and normality.



Definition 1.8: Dot product.

Let x, y be vectors from a vector space V. A dot product x · y is a mapping V × V → R


with the following properties
Dot product properties.
Symmetric in both arguments: x · y = y · x
Bilinear: (ax + by) · (a′x′ + b′y′) = aa′(x · x′) + bb′(y · y′) + ab′(x · y′) + ba′(y · x′)
(Optional) Positive length: x · x > 0, x ≠ 0

Because the dot product is bilinear, it is specified completely by the dot products of a set of
basis elements for the space. For example, given a basis {e1 , e2 , · · · , eN }, and two vectors
x = ∑_{i=1}^N xi ei,
y = ∑_{i=1}^N yi ei,    (1.1)

the dot product of the two is


x · y = (∑_{i=1}^N xi ei) · (∑_{j=1}^N yj ej)
      = ∑_{i,j=1}^N xi yj (ei · ej).    (1.2)

Such an expansion in coordinates can be written in matrix form as


x · y = xᵀGy,    (1.3)

where G is the symmetric matrix with elements gij = ei · ej. This matrix G, or its elements gij, is also called the metric for the space.
For Euclidean spaces, which are the primary focus of this book, the metric is not only diag-
onal, but is the identity matrix. Slightly more general metrics are of interest in electrodynam-
ics. In particular, a four dimensional (relativistic) vector space, where the metric has diagonals
(1, −1, −1, −1) or (−1, 1, 1, 1) allows for the construction of a geometric algebra that allows
Maxwell’s equations to take their very simplest form. Such a metric does not have the (Euclidean) positive definite property xᵀGx > 0, x ≠ 0.
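As a concrete illustration of the coordinate form eq. (1.3), the following sketch (assuming numpy is available) computes xᵀGy for both the Euclidean identity metric and a (1, −1, −1, −1) diagonal metric, showing that the latter drops the positive definite property:

```python
import numpy as np

def dot(x, y, G):
    # Coordinate form of the dot product, x . y = x^T G y, where G is
    # the symmetric metric matrix with elements g_ij = e_i . e_j.
    return float(x @ G @ y)

# Euclidean R^3: G is the identity, and squared norms are positive.
G3 = np.eye(3)
x = np.array([1.0, 2.0, 2.0])
print(dot(x, x, G3))   # 9.0, so ||x|| = 3

# "Spacetime" metric with diagonals (1, -1, -1, -1): squared norms
# may be negative, so the positive length property is optional.
G4 = np.diag([1.0, -1.0, -1.0, -1.0])
t = np.array([1.0, 2.0, 0.0, 0.0])
print(dot(t, t, G4))   # 1 - 4 = -3.0
```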

Definition 1.9: Length

The squared norm of a vector x is defined as

kxk2 = x · x,

a quantity that need not be positive. The length of a vector x is defined as


p
kxk = |x · x|.

Definition 1.10: Unit vector

A vector x is called a unit vector if its absolute squared norm is one (|x · x| = 1).

Definition 1.11: Normal

Two vectors x, y from a vector space V are normal, or orthogonal, if their dot product is zero, x · y = 0.

Definition 1.12: Orthonormal

Two vectors x, y are orthonormal if they are both unit vectors and normal to each other
(x · y = 0, |x · x| = |y · y| = 1).
A set of vectors {x, y, · · · , z} is an orthonormal set if all pairs of vectors in that set are
orthonormal.

Definition 1.13: Standard basis.

A basis {e1 , e2 , · · · , eN } is called a standard basis if that set is orthonormal.

Definition 1.14: Euclidean space.



A vector space with basis {e1 , e2 , · · · , eN } is called Euclidean if all the dot product pairs
between the basis elements are not only orthonormal, but positive definite. That is

ei · e j = δi j .

1.2 definitions

1.2.1 Multivector space.

Geometric algebra takes a vector space and adds two additional operations, a vector multipli-
cation operation, and a generalized addition operation that extends vector addition to include
addition of scalars and products of vectors. Multiplication of vectors is indicated by juxtaposi-
tion, for example, if x, y, e1 , e2 , e3 , · · · are vectors, then some vector products are

xy, xyx, xyxy,


e1 e2 , e2 e1 , e2 e3 , e3 e2 , e3 e1 , e1 e3 ,
(1.4)
e1 e2 e3 , e3 e1 e2 , e2 e3 e1 , e3 e2 e1 , e2 e1 e3 , e1 e3 e2 ,
e1 e2 e3 e1 , e1 e2 e3 e1 e3 e2 , · · ·

Vector multiplication is constrained by a rule, called the contraction axiom, that gives a mean-
ing to the square of a vector

xx ≡ x · x. (1.5)

The square of a vector, by this definition, is the squared length of the vector, and is a scalar.
This may not appear to be a useful way to assign meaning to the simplest of vector products,
since the product and the vector live in separate spaces. If we want a closed algebraic system that
includes both vectors and their products, we have to allow for the addition of scalars, vectors,
or any products of vectors. Such a sum is called a multivector, an example of which is
1 + 2e1 + 3e1 e2 + 4e1 e2 e3 . (1.6)

In this example, we have added a scalar (or 0-vector) 1, to a vector (or 1-vector) 2e1 , to a
bivector (or 2-vector) 3e1 e2 , to a trivector (or 3-vector) 4e1 e2 e3 . Geometric algebra uses vector
multiplication to build up a hierarchy of geometrical objects, representing points, lines, planes,
volumes and hypervolumes (in higher dimensional spaces.)

Scalar. A scalar, which we will also call a 0-vector, is a zero-dimensional object with sign,
and a magnitude. We may geometrically interpret a scalar as a (signed) point in space.

Vector. A vector, which we will also call a 1-vector, is a one-dimensional object with a sign, a
magnitude, and a rotational attitude within the space it is embedded. When vectors are negated
the relative placement of the “head” vs. the “tail” are toggled as illustrated in fig. 1.1.

Figure 1.1: Vector illustration.

Vectors can be added graphically by connecting them head to tail in sequence, and joining
the first tail point to the final head, as illustrated in fig. 1.2.

Figure 1.2: Graphical vector addition.

Bivector. We now wish to define a bivector, or 2-vector, as a 2 dimensional object repre-


senting a signed plane segment with magnitude and orientation. Formally, assuming a vector
product, the algebraic description of a bivector is

Definition 1.15: Bivector.

A bivector, or 2-vector, is a sum of products of pairs of normal vectors. Given an N dimen-


sional vector space V with an orthonormal basis {e1 , e2 , · · · , eN }, a general bivector can be
expressed as
∑_{1≤i<j≤N} Bij ei ej,

where Bij is a scalar. The vector space V is said to be a generator of the bivector space.

The bivector provides a structure that can encode plane oriented quantities such as torque,
angular momentum, or a general plane of rotation. A quantity like angular momentum can be
represented as a magnitude times a quantity that represents the orientation of the plane of rota-
tion. In conventional vector algebra we use the normal of the plane to describe this orientation,
but that is problematic in higher dimensional spaces where there is no unique normal. Use of
the normal to represent a plane is also logically problematic in two dimensional spaces, which
have to be extended to three dimensions to use normal-centric constructs like the cross product. A bivector representation of a plane can eliminate the requirement to utilize a third (normal)
dimension, which may not be relevant in the problem, and can allow some concepts (like the
cross product) to be generalized to dimensions other than three when desirable.
One of the implications of the contraction axiom eq. (1.5), to be discussed in more detail a
bit later, is a linear dependence between bivectors formed from normal products. For example,
given any pair of orthonormal basis vectors ei, ej, where i ≠ j, we have
ei ej + ej ei = 0. (1.7)

This is why the sum in definition 1.15 was over only half the possible pairs of i ≠ j indexes. The
reader can check that the set of all bivectors is a vector space per definition 1.1, so we will call
the set of all bivectors a bivector space. In R2 a basis for the bivector space is {e1 e2 }, whereas
in R3 a basis for the bivector space is {e1 e2 , e2 e3 , e3 e1 }. The unit bivectors for two possible R3
bivector space bases are illustrated in fig. 1.3.
We interpret the sign of a vector as an indication of the sense of the vector’s “head” vs
“tail”. For a bivector, we can interpret the sign as a representation of a “top” vs. “bottom”,
or equivalently a left or right “handedness”, as illustrated using arrows around a plane segment
in fig. 1.4. For a product like e1 e2 , the sense of the handedness follows the path 0 → e1 →
e1 + e2 → e2 → 0 around the unit square in the x-y plane. This is illustrated for all the unit
bivectors in fig. 1.3. In R3 we can use the right hand rule to visualize such a handedness. You
could say that we are using the direction of the fingers around the normal to indicate the sign of
the bivector, but without actually drawing that normal.

Figure 1.3: Unit bivectors for R3.

Figure 1.4: Circular representation of two bivectors.



Similar to the interpretation of the magnitude of a vector as the length of that vector, we
interpret the magnitude of a bivector (to be defined more exactly later), as the area of the bivector.
Other than having a boundary that surrounds a given area, a graphical bivector representation
as a plane segment need not have any specific geometry, which is illustrated in fig. 1.5 for a set
of bivectors all representing e1 e2 .

Figure 1.5: Graphical representations of e1 e2.

An oriented plane segment can always be represented as a bivector in any number of dimen-
sions, however, when the generating vector space has dimension N ≥ 4 not all bivectors defined
by definition 1.15 necessarily represent oriented plane segments. The restrictions required for
a bivector to have an associated oriented plane segment interpretation in higher dimensional
spaces will be defined later.
Vector addition can be performed graphically by connecting vectors head to tail, and joining
the first tail to the last head. A similar procedure can be used for bivector addition as well, but
gets complicated if the bivectors lie in different planes. Here is a simple bivector sum
3e1 e2 − 2e1 e2 + 5e1 e2 = 6e1 e2 , (1.8)

which can be interpreted as taking a 3 unit area, subtracting a 2 unit area, and adding a 5 unit
area. This sum is illustrated in fig. 1.6. A visualization of arbitrarily oriented bivector addition can be found in fig. 1.7, where blue + red = green. The moral of the story is that we will almost exclusively be adding bivectors algebraically, but can interpret the sum geometrically after the fact.

Trivector. Again, assuming a vector product



Figure 1.6: Graphical representation of bivector addition in plane.

Figure 1.7: Bivector addition.



Definition 1.16: Trivector.

A trivector, or 3-vector, is a sum of products of triplets of mutually normal vectors. Given


an N dimensional vector space V with an orthonormal basis {e1 , e2 , · · · , eN }, a trivector is
any value
∑_{1≤i<j<k≤N} Tijk ei ej ek,

where Tijk is a scalar. The vector space V is said to generate a trivector space.

In R3 all trivectors are scalar multiples of e1 e2 e3 . Like scalars, there is no direction to such
a quantity, but like scalars trivectors may be signed. The magnitude of a trivector may be in-
terpreted as a volume. We will defer interpreting the sign of a trivector geometrically until we
tackle integration theory.

K-vector.

Definition 1.17: K-vector and grade.

A k-vector is a sum of products of k mutually normal vectors. Given an N dimensional


vector space with an orthonormal basis {e1 , e2 , · · · , eN }, a general k-vector can be expressed
as
∑_{1≤i<j<···<m≤N} Kij···m ei ej · · · em,

where Ki j···m is a scalar, indexed by k indexes i, j, · · · , m.


The number k of normal vectors that generate a k-vector is called the grade.
A 1-vector is defined as a vector, and a 0-vector is defined as a scalar.
The vector space V is said to generate the k-vector space.

We will see that the highest grade for a k-vector in an N dimensional vector space is N.

Multivector space.

Definition 1.18: Multivector space.

Given an N dimensional (generating) vector space V with an orthonormal basis {e1 , e2 , · · · , eN },


and a vector multiplication operation represented by juxtaposition, a multivector is a sum
of k-vectors, k ∈ [0, N], such as

a0 + ∑_i ai ei + ∑_{i,j} aij ei ej + ∑_{i,j,k} aijk ei ej ek + · · · ,

where a0, ai, aij, · · · are scalars.
The multivector space generated by V is a set M = {x, y, z, · · ·} of multivectors, where
the following axioms are satisfied
Multivector space axioms.
Contraction. x2 = x · x, ∀x ∈ V
Addition is closed. x+y ∈ M
Multiplication is closed. xy ∈ M
Addition is associative. (x + y) + z = x + (y + z)
Addition is commutative. y+x = x+y
There exists a zero element 0 ∈ M . x+0 = x
There exists a negative additive inverse x + (−x) = 0
−x ∈ M .
Multiplication is distributive. x(y + z) = xy + xz, (x + y)z = xz + yz
Multiplication is associative. (xy)z = x(yz)
There exists a multiplicative identity 1. 1x = x

Compared to the vector space, def’n. 1.1, the multivector space

• presumes a vector multiplication operation, which is not assumed to be commutative


(order matters),

• generalizes vector addition to multivector addition,

• generalizes scalar multiplication to multivector multiplication (of which scalar multipli-


cation and vector multiplication are special cases),

• and most importantly, specifies a rule providing the meaning of a squared vector (the
contraction axiom).

The contraction axiom is arguably the most important of the multivector space axioms, as
it allows for multiplicative closure without an infinite dimensional multivector space. The re-
maining set of non-contraction axioms of a multivector space are almost that of a field 1 (as
encountered in the study of complex inner products), as they describe most of the properties
1 A mathematician would call a multivector space a non-commutative ring with identity [22], and could state the
multivector space definition much more compactly without listing all the properties of a ring explicitly as done
above.

one would expect of a “well behaved” set of number-like quantities. However, a field also re-
quires a multiplicative inverse element for all elements of the space, which exists for some
multivector subspaces, but not in general.

1.2.2 Nomenclature.

The workhorse operator of geometric algebra is called grade selection, defined as

Definition 1.19: Grade selection operator

Given a set of k-vectors Mk, k ∈ [0, N], and a multivector M formed from their sum

M = ∑_{i=0}^N Mi,

the grade selection operator is defined as

⟨M⟩k ≡ Mk.

Due to its importance, selection of the (scalar) zero grade is given the shorthand

⟨M⟩ ≡ ⟨M⟩0 = M0.

The grade selection operator will be used to define a generalized dot product between multi-
vectors, and the wedge product, which generalizes the cross product (and is related to the cross
product in R3 ).
To illustrate grade selection by example, given a multivector M = 3 − e3 + 2e1 e2 , then

⟨M⟩ = 3
⟨M⟩1 = −e3
⟨M⟩2 = 2e1 e2    (1.9)
⟨M⟩3 = 0.
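Grade selection is mechanical enough to model in a few lines of code. The sketch below uses a hypothetical dict representation of a multivector (keys are tuples of basis indexes, values are scalar coefficients, so the grade of a term is the length of its key); this representation is an illustration device, not anything defined by the text:

```python
def grade(M, k):
    # <M>_k : select only the grade-k terms of the multivector M,
    # where each key is a tuple of basis indexes (grade = key length).
    return {blade: coeff for blade, coeff in M.items() if len(blade) == k}

# M = 3 - e3 + 2 e1 e2, the multivector of eq. (1.9):
# () is the scalar part, (3,) the e3 part, (1, 2) the e1 e2 part.
M = {(): 3, (3,): -1, (1, 2): 2}

print(grade(M, 0))   # {(): 3}
print(grade(M, 1))   # {(3,): -1}
print(grade(M, 2))   # {(1, 2): 2}
print(grade(M, 3))   # {}
```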

Definition 1.20: Orthonormal product shorthand.



Given an orthonormal basis {e1 , e2 , · · ·}, a multiple indexed quantity ei j···k should be inter-
preted as the product (in the same order) of the basis elements with those indexes

ei j···k = ei e j · · · ek .

For example,

e12 = e1 e2
e123 = e1 e2 e3 (1.10)
e23121 = e2 e3 e1 e2 e1 .

Definition 1.21: Pseudoscalar.

If {x1 , x2 , · · · , xk } is a normal basis for a k-dimensional (sub)space, then the product x1 x2 · · · xk


is called a pseudoscalar for that (sub)space. A pseudoscalar that squares to ±1 is called a
unit pseudoscalar.

A pseudoscalar is the highest grade k-vector in the algebra, so in R2 any bivector is a pseu-
doscalar, and in R3 any trivector is a pseudoscalar. In R2 , e1 e2 is a pseudoscalar, as is 3e2 e1 ,
both of which are related by a constant factor. In R3 the trivector e3 e1 e2 is a pseudoscalar,
as is −7e3 e1 e2 , and both of these can also be related by a constant factor. For the subspace
span{e1, e2 + e3}, one pseudoscalar is e1 (e2 + e3).
If all the vector factors of a pseudoscalar are not just normal but orthonormal, then it is a unit
pseudoscalar. It is conventional to refer to

e12 = e1 e2 , (1.11)

as “the pseudoscalar” for R2 , and to

e123 = e1 e2 e3 , (1.12)

as “the pseudoscalar” for a three dimensional space.


We will see that geometric algebra allows for many quantities that have a complex imaginary
nature, and that the pseudoscalars of eq. (1.11) and eq. (1.12) both square to −1.
For this reason, it is often convenient to use an imaginary notation for the R2 and R3 pseudoscalars
i = e12
(1.13)
I = e123 .

For three dimensional problems in this book, i will often be used as the unit pseudoscalar for
whatever planar subspace is relevant to the problem, which may not be the x-y plane. The
meaning of i in any such cases will always be defined explicitly.

Exercise 1.2 Permutations of the R3 pseudoscalar


Show that all the cyclic permutations of the R3 pseudoscalar are equal

I = e2 e3 e1 = e3 e1 e2 = e1 e2 e3 .

Unless otherwise stated, a Euclidean vector space with an orthonormal basis {e1 , e2 , · · ·} is
assumed for the remainder of this chapter. Generalizations required for non-Euclidean spaces
will be discussed when (if?) spacetime vectors are introduced.

1.3 colinear vectors.

It was pointed out that the vector multiplication operation was not assumed to be commutative
(order matters). The only condition for which the product of two vectors is order independent is when those vectors are colinear.
Theorem 1.1: Vector commutation.

Given two vectors x, y, if y = αx for some scalar α, then x and y commute

xy = yx.

The proof is simple.


yx = αxx
(1.14)
xy = xαx = αxx.
The contraction axiom ensures that the product of two colinear vectors is a scalar. In particu-
lar, the square of a unit vector, say u is unity. This should be highlighted explicitly, because this
property will be used again and again

u2 = 1. (1.15)

For example, the squares of any orthonormal basis vectors are unity (e1 )2 = (e2 )2 = (e3 )2 = 1.
A corollary of eq. (1.15) is that we can factor 1 into the square of any unit vector u

1 = uu. (1.16)

This factorization trick will be used repeatedly in this book.

1.4 normal vectors.

An interchange of the order of the factors in a product of two normal vectors results in a change of sign, for
example e2 e1 = −e1 e2 . This is a consequence of the contraction axiom, and can be demonstrated
by squaring the vector e1 + e2 (fig. 1.8).

Figure 1.8: e1 + e2 .

By the contraction axiom, the square of this vector is 2, so we have

2 = (e1 + e2 )2
= (e1 + e2 )(e1 + e2 ) (1.17)
= e21 + e22 + e2 e1 + e1 e2
= 2 + e2 e1 + e1 e2 .

We conclude immediately that


e2 e1 + e1 e2 = 0, (1.18)

or

e2 e1 = −e1 e2 . (1.19)

The same computation could have been performed for any two orthonormal vectors, so we
conclude that any interchange of two orthonormal vectors changes the sign. In general this is
true of any normal vectors.

Theorem 1.2: Anticommutation

Let u and v be two normal vectors, the product of which, uv, is a bivector. Changing the
order of this product toggles the sign of the bivector.

uv = −vu.

This sign change on interchange is called anticommutation.

1.5 2d multiplication table.

The multiplication table for the R2 geometric algebra can be computed with relative ease. Many
of the interesting products involve i = e1 e2 , the unit pseudoscalar. Using eq. (1.19) the imaginary nature of the pseudoscalar, mentioned earlier, can now be demonstrated explicitly

(e1 e2 )2 = (e1 e2 )(e1 e2 )
= −(e1 e2 )(e2 e1 )
= −e1 (e22 )e1 (1.20)
= −e21
= −1.

Like the (scalar) complex imaginary, this bivector also squares to −1. The only non-trivial
products left to fill in the R2 multiplication table are those of the unit vectors with i, products
that are order dependent

e1 i = e1 ( e1 e2 )
= ( e1 e1 ) e2
= e2
ie1 = (e1 e2 ) e1
= (−e2 e1 ) e1
= −e2 (e1 e1 )
= −e2
(1.21)
e2 i = e2 ( e1 e2 )
= e2 (−e2 e1 )
= − (e2 e2 ) e1
= −e1
ie2 = (e1 e2 ) e2
= e1 ( e2 e2 )
= e1 .

The multiplication table for the R2 multivector basis can now be tabulated
2D multiplication table.

            1         e1        e2        e1 e2
  1         1         e1        e2        e1 e2
  e1        e1        1         e1 e2     e2
  e2        e2        −e1 e2    1         −e1
  e1 e2     e1 e2     −e2       e1        −1

It is important to point out that the pseudoscalar i does not commute with either basis vector,
but anticommutes with both, since ie1 = −e1 i, and ie2 = −e2 i. By superposition i anticommutes
with any vector in the x-y plane.
More generally, if u and v are orthonormal, and x ∈ span {u, v} then the bivector uv anticom-
mutes with x, or any other vector in this plane.
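The entries in this table, and the anticommutation just described, are easy to verify mechanically. The following minimal sketch implements the geometric product for an orthonormal Euclidean basis, representing a multivector as a dict mapping a sorted tuple of basis indices (a blade) to its coefficient; the helper names `gp` and `blade_times_vector` are ad hoc choices for this illustration, not notation from the text.

```python
def blade_times_vector(blade, i):
    """Multiply basis blade e_blade by e_i on the right (Euclidean metric:
    e_i e_i = 1). Returns (sign, resulting sorted blade tuple)."""
    sign = (-1) ** sum(1 for a in blade if a > i)  # anticommute e_i leftwards
    if i in blade:  # a repeated index contracts away
        return sign, tuple(a for a in blade if a != i)
    return sign, tuple(sorted(blade + (i,)))

def gp(A, B):
    """Geometric product of multivectors given as {blade tuple: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            sign, blade = 1, ba
            for i in bb:
                s, blade = blade_times_vector(blade, i)
                sign *= s
            out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

e1, e2, i = {(1,): 1}, {(2,): 1}, {(1, 2): 1}

assert gp(i, i) == {(): -1}     # i^2 = -1
assert gp(e1, i) == {(2,): 1}   # e1 i = e2
assert gp(i, e1) == {(2,): -1}  # i e1 = -e2
assert gp(e2, i) == {(1,): -1}  # e2 i = -e1
assert gp(i, e2) == {(1,): 1}   # i e2 = e1
```

The same kernel works unchanged in any dimension, since only the contraction ei ei = 1 and the anticommutation sign are needed.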

1.6 plane rotations.

Plotting eq. (1.21), as in fig. 1.9, shows that multiplication by i rotates the R2 basis vectors by
π/2 radians, with the rotation direction dependent on the order of multiplication.

Figure 1.9: Multiplication by e1 e2 .

Using a polar vector representation


x = ρ (e1 cos θ + e2 sin θ) , (1.22)

it can be demonstrated directly that unit pseudoscalar multiplication of an arbitrary vector will
induce a π/2 rotation.
Multiplying the vector from the right by the pseudoscalar gives

xi = xe1 e2
= ρ (e1 cos θ + e2 sin θ) e1 e2 (1.23)
= ρ (e2 cos θ − e1 sin θ) ,

a counterclockwise rotation of π/2 radians, and multiplying the vector by the pseudoscalar from
the left gives

ix = e1 e2 x
= ρe1 e2 (e1 cos θ + e2 sin θ) (1.24)
= ρ (−e2 cos θ + e1 sin θ) ,

a clockwise rotation by π/2 radians (exercise 1.3).



The transformed vector x0 = xe1 e2 = e2 e1 x(= xi = −ix) has been rotated in the direction that
takes e1 to e2 , as illustrated in fig. 1.10.

Figure 1.10: π/2 rotation in the plane using pseudoscalar multiplication.

In complex number theory the complex exponential eiθ can be used as a rotation operator.
Geometric algebra puts this rotation operator into the vector algebra toolbox, by utilizing Euler’s
formula
eiθ = cos θ + i sin θ, (1.25)

which holds for this pseudoscalar imaginary representation too (exercise 1.4). By writing e2 =
e1 e1 e2 , a complex exponential can be factored directly out of the polar vector representation
eq. (1.22)

x = ρ (e1 cos θ + e2 sin θ)


= ρ (e1 cos θ + (e1 e1 )e2 sin θ)
= ρe1 (cos θ + e1 e2 sin θ) (1.26)
= ρe1 (cos θ + i sin θ)
= ρe1 eiθ .

We end up with a complex exponential multivector factor on the right. Alternatively, since
e2 = e2 e1 e1 , a complex exponential can be factored out on the left

x = ρ (e1 cos θ + e2 sin θ)


= ρ (e1 cos θ + e2 (e1 e1 ) sin θ)
= ρ (cos θ − e1 e2 sin θ) e1 (1.27)
= ρ (cos θ − i sin θ) e1
= ρe−iθ e1 .

Left and right exponential expressions have now been found for the polar representation

ρ (e1 cos θ + e2 sin θ) = ρe−iθ e1 = ρe1 eiθ . (1.28)

This is essentially a recipe for rotation of a vector in the x-y plane. Such rotations are illus-
trated in fig. 1.11.

Figure 1.11: Rotation in a plane.
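Equations (1.23) and (1.24) can be spot-checked numerically (stdlib only): the components of xi and ix, read off from those equations, should agree with applying an ordinary 2x2 rotation matrix through ±π/2. The values of ρ and θ below are arbitrary test choices.

```python
import math

def rot(phi, x):
    """Apply the 2x2 rotation matrix R(phi) to the point x."""
    c, s = math.cos(phi), math.sin(phi)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

rho, theta = 2.0, 0.7
x = (rho * math.cos(theta), rho * math.sin(theta))

# components of x i and i x, from eq. (1.23) and eq. (1.24) respectively
xi = (-rho * math.sin(theta), rho * math.cos(theta))
ix = (rho * math.sin(theta), -rho * math.cos(theta))

assert all(math.isclose(p, q) for p, q in zip(xi, rot(math.pi / 2, x)))   # CCW
assert all(math.isclose(p, q) for p, q in zip(ix, rot(-math.pi / 2, x)))  # CW
```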

This generalizes to rotations of RN vectors constrained to a plane. Given orthonormal vectors u, v and any vector in the plane of these two vectors (x ∈ span {u, v}), this vector is rotated θ radians in the direction of rotation that takes u to v by

x0 = xeuvθ = e−uvθ x. (1.29)

The sense of rotation for the rotation euvθ is opposite that of evuθ , which provides a first hint
that bivectors can be characterized as having an orientation, somewhat akin to thinking of a
vector as having a head and a tail.

Example 1.1: Velocity and acceleration in polar coordinates.

Complex exponential representations of rotations work very nicely for describing vectors
in polar coordinates. A radial vector can be written as
r = rr̂, (1.30)

as illustrated in fig. 1.12. The polar representations of the radial and azimuthal unit vectors are simply

r = r r

θ

e2

r

θ e1

Figure 1.12: Radial vector in polar coordinates.

r̂ = e1 eiθ = e1 (cos θ + e1 e2 sin θ) = e1 cos θ + e2 sin θ
(1.31)
θ̂ = e2 eiθ = e2 (cos θ + e1 e2 sin θ) = e2 cos θ − e1 sin θ,

where i = e12 is the unit bivector for the x-y plane. We can show that these unit vectors are
orthogonal easily
r̂ · θ̂ = ⟨r̂θ̂⟩
= ⟨(e1 eiθ )(e−iθ e2 )⟩ (1.32)
= ⟨e1 e2 ⟩
= 0,

since the scalar grade selection of a bivector is zero.


We can find the velocity and acceleration by taking time derivatives

v = r0 r̂ + rr̂0
(1.33)
a = r00 r̂ + 2r0 r̂0 + rr̂00 ,

but to make these more meaningful we want to evaluate the r̂, θ̂ derivatives explicitly. Those are

r̂0 = (e1 eiθ )0 = e1 e1 e2 eiθ θ0 = e2 eiθ θ0 = θ̂ω
(1.34)
θ̂0 = (e2 eiθ )0 = e2 e1 e2 eiθ θ0 = −e1 eiθ θ0 = −r̂ω,

where ω = dθ/dt. The velocity and acceleration vectors can now be written explicitly in
terms of radial and azimuthal components. The velocity is

v = r0 r̂ + rωθ̂, (1.35)

and the acceleration is

a = r00 r̂ + 2r0 ωθ̂ + r(ωθ̂)0
(1.36)
= r00 r̂ + 2r0 ωθ̂ + rω0 θ̂ − rω2 r̂,

or

a = r̂ (r00 − rω2 ) + θ̂ (1/r)(r2 ω)0 . (1.37)
Using eq. (1.31), we also have the option of factoring out the rotation operation from
the position vector or any of its derivatives

r = (re1 ) eiθ
v = (r0 e1 + rωe2 ) eiθ (1.38)
a = ((r00 − rω2 )e1 + (1/r)(r2 ω)0 e2 ) eiθ .

In particular, for uniform circular motion, each of the position, velocity and acceleration
vectors can be represented by a vector that is fixed in space, subsequently rotated by an
angle θ.
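Equation (1.37) can be sanity checked by differentiating a Cartesian trajectory numerically and comparing against the radial/azimuthal decomposition; the particular r(t), θ(t) below are arbitrary choices made for this test, not anything from the text.

```python
import math

def pos(t):
    """Cartesian position for the test trajectory r = 1 + 0.1 t^2, theta = 0.5 t."""
    r, th = 1.0 + 0.1 * t * t, 0.5 * t
    return (r * math.cos(th), r * math.sin(th))

t, h = 1.3, 1e-3

# acceleration by a second central difference of the position vector
p0, pm, pp = pos(t), pos(t - h), pos(t + h)
a_num = tuple((pp[k] - 2 * p0[k] + pm[k]) / h ** 2 for k in range(2))

# the same acceleration from eq. (1.37): a = rhat (r'' - r w^2) + thetahat (1/r)(r^2 w)'
r, rp, rpp = 1.0 + 0.1 * t * t, 0.2 * t, 0.2
th, w, wp = 0.5 * t, 0.5, 0.0
rhat = (math.cos(th), math.sin(th))       # e1 cos(theta) + e2 sin(theta), eq. (1.31)
thetahat = (-math.sin(th), math.cos(th))  # e2 cos(theta) - e1 sin(theta)
radial = rpp - r * w * w
azimuthal = (r * r * wp + 2 * r * rp * w) / r  # (1/r)(r^2 w)' expanded
a_formula = tuple(radial * rhat[k] + azimuthal * thetahat[k] for k in range(2))

assert all(abs(u - v) < 1e-5 for u, v in zip(a_num, a_formula))
```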

Exercise 1.3 R2 rotations.


Using familiar methods, such as rotation matrices, show that the counterclockwise and clock-
wise rotations of eq. (1.22) are given by eq. (1.23) and eq. (1.24) respectively.

Exercise 1.4 Multivector Euler’s formula and trig relations.



For a multivector x assume an infinite series representation of the exponential, sine and cosine
functions and their hyperbolic analogues

ex = Σk=0..∞ xk /k!

cos x = Σk=0..∞ (−1)k x2k /(2k)!        sin x = Σk=0..∞ (−1)k x2k+1 /(2k + 1)!

cosh x = Σk=0..∞ x2k /(2k)!             sinh x = Σk=0..∞ x2k+1 /(2k + 1)!

a. Show that for scalar θ, and any multivector J that satisfies J 2 = −1

cosh(Jθ) = cos θ
sinh(Jθ) = J sin θ.

b. Show that the trigonometric and hyperbolic Euler formulas

e Jθ = cos θ + J sin θ
eKθ = cosh θ + K sinh θ,

hold for multivectors J, K satisfying J 2 = −1 and K 2 = 1 respectively.


c. Given multivectors X, Y, show that splitting a multivector exponential into factors of the
form eX+Y = eX eY requires X and Y to commute.

1.7 duality.

In eq. (1.23) and eq. (1.24) we saw that multiplication by a unit pseudoscalar for a given plane
generates normal vectors. For the x-y plane with i = e12 , these normal relations can be summa-
rized in polar form as

r = re1 eiθ
ri = re2 eiθ (1.39)
ir = −re2 eiθ .

Both ir and ri are perpendicular to r with the same magnitude. The direction they point
depends on the orientation of the pseudoscalar chosen. For example, had we used an oppositely
oriented pseudoscalar i = e2 e1 , these normal vectors would each have the opposite direction.

In three dimensions, pseudoscalar multiplication maps bivectors to their normals, or vectors to the bivectors representing the planes that they are normal to. For example, the unit vectors and bivectors are related in the following fashion

e2 e3 = Ie1      Ie2 e3 = −e1
e3 e1 = Ie2      Ie3 e1 = −e2 (1.40)
e1 e2 = Ie3      Ie1 e2 = −e3 .

For example, with r = ae1 + be2 , we have

rI = (ae1 + be2 ) e123
= ae23 + be31 (1.41)
= e3 (−ae2 + be1 ) .

Here e3 was factored out of the resulting bivector, leaving two factors both perpendicular to
the original vector. Every vector that lies in the span of the plane represented by this bivector is
perpendicular to the original vector. This is illustrated in fig. 1.13.

Figure 1.13: R3 duality illustration.

Using pseudoscalar multiplication to produce normal subspaces is referred to as a duality transformation.

Definition 1.22: Dual

Given a multivector M, its dual is designated M ∗ , and has the value M ∗ = MI, where I is
a unit pseudoscalar for the space.

Some notes on duality

• If A is a non-scalar k-vector, then A and its dual A∗ have no common vector factors.

• In a sense that can be defined more precisely once the general dot product operator is
defined, the dual to a given k-vector A represents a (N − k)-vector object that is normal to
A.

• The dual of any scalar is a pseudoscalar, whereas the dual of a pseudoscalar is a scalar.

• Some authors use different sign conventions for duality, in particular, designating the dual
as M ∗ = MI −1 .

• For non-Euclidean spaces, in particular “null spaces” where the square of vectors can be
null, a different definition of duality may be required.

1.8 vector product, dot product and wedge product.

The product of two colinear vectors is a scalar, and the product of two normal vectors is a
bivector. The product of two general vectors is a multivector with structure to be determined.
A powerful way to examine this structure is to compute the product of two vectors in a polar
representation with respect to the plane that they span. Let u and v be an orthonormal pair of
vectors in the plane of a and b, oriented in a positive rotational sense as illustrated in fig. 1.14.

Figure 1.14: Two vectors in a plane.

With respect to the orthonormal vectors u and v, a polar representation of a, b is


a = ‖a‖ ueiab θa = ‖a‖ e−iab θa u
(1.42)
b = ‖b‖ ueiab θb = ‖b‖ e−iab θb u,

where iab = uv is a unit pseudoscalar for the planar subspace spanned by a and b. The vector
product of these two vectors is

ab = ‖a‖ e−iab θa u ‖b‖ ueiab θb
= ‖a‖ ‖b‖ e−iab θa (uu)eiab θb (1.43)
= ‖a‖ ‖b‖ eiab (θb −θa )
= ‖a‖ ‖b‖ (cos(θb − θa ) + iab sin(θb − θa )) .
We see that the product of two vectors is a multivector that has only grades 0 and 2. This can
be expressed symbolically as
ab = ⟨ab⟩ + ⟨ab⟩2 . (1.44)
We recognize the scalar grade of the vector product as the RN dot product, but the grade
2 component of the vector product is something new that requires a name. We respectively
identify and define operators for these vector grade selection operations
Definition 1.23: Dot and wedge products of two vectors.

Given two vectors a, b ∈ RN the dot product is identified as the scalar grade of their
product

⟨ab⟩ = a · b.

A wedge product of the vectors is defined as a grade-2 selection operation

a ∧ b ≡ ⟨ab⟩2 .

Given this notation, the product of two vectors can be written

ab = a · b + a ∧ b.

Scalar grade selection of a product of two vectors is an important new tool. There will be
many circumstances where the easiest way to compute a dot product is using scalar grade selec-
tion.
The split of a vector product into dot and wedge product components is also important. How-
ever, to utilize it, the properties of the wedge product have to be determined.
Summarizing eq. (1.43) with our new operators, we write

ab = ‖a‖ ‖b‖ exp (iab (θb − θa ))
a · b = ‖a‖ ‖b‖ cos(θb − θa ) (1.45)
a ∧ b = iab ‖a‖ ‖b‖ sin(θb − θa ),

Example 1.2: Products of two unit vectors.

To develop some intuition about the vector product, let’s consider product of two unit
vectors a, b in the equilateral triangle of fig. 1.15, where
a = (1/√2) (e3 + e1 ) = e3 exp (e31 π/4)
(1.46)
b = (1/√2) (e3 + e2 ) = e3 exp (e32 π/4) .

Figure 1.15: Equilateral triangle in R3 .

The product of these vectors is

ab = (1/2) (e3 + e1 ) (e3 + e2 )
= (1/2) (1 + e32 + e13 + e12 ) (1.47)
= 1/2 + (√3/2) (e32 + e13 + e12 )/√3 .
Let the bivector factor be designated

j = (e32 + e13 + e12 )/√3 . (1.48)

The reader can check (exercise 1.8) that j is a unit bivector (i.e. it squares to −1), allowing us to write

ab = 1/2 + (√3/2) j
= cos(π/3) + j sin(π/3) (1.49)
= exp ( jπ/3) .

Since both vector factors were unit length, this “complex” exponential has no leading
scalar factor contribution from ‖a‖ ‖b‖.
From the above analysis, we can also see that in this case, the product of two 0,2 multi-
vectors was itself a 0,2 multivector. We can see that by forming the product

ab = (exp (−e31 π/4) e3 )(e3 exp (e32 π/4)) (1.50)
= exp (−e31 π/4) exp (e32 π/4) ,

so
exp (e13 π/4) exp (e32 π/4) = exp ( jπ/3) . (1.51)

Composition of two rotation operators has resulted in another rotation operator, but that
result is in a different rotational plane, and through a different rotation angle.

Two wedge product properties can be immediately deduced from the polar representation of
eq. (1.45)

1. b ∧ a = −a ∧ b.

2. a ∧ (αa) = 0, ∀α ∈ R.

We have now had a few hints that the wedge product might be related to the cross product.
Given two vectors a, b both the wedge and the cross product contain a ‖a‖ ‖b‖ sin ∆θ factor, and
both the wedge and cross product are antisymmetric operators. The cross product is a bilinear
operator (a + b) × (c + d) = a × c + a × d + b × c + b × d. To see whether this is the case for the wedge
product, let’s examine the coordinate expansion of the wedge product. Let

a = Σi ai ei
(1.52)
b = Σi bi ei .

The product of these vectors is

ab = (Σi ai ei )(Σ j b j e j )
= Σi, j ai b j ei e j (1.53)
= Σi= j ai b j ei e j + Σi≠ j ai b j ei e j .

Since ei ei = 1, we see again that the scalar component of the product is the dot product Σi ai bi . The remaining grade 2 components are the wedge product, for which the coordinate expansion can be simplified further
a ∧ b = Σi≠ j ai b j ei e j
= Σi< j ai b j ei e j + Σ j<i ai b j ei e j
(1.54)
= Σi< j ai b j ei e j + Σi< j a j bi e j ei
= Σi< j (ai b j − a j bi )ei e j .

The scalar factors can be written as 2x2 determinants

a ∧ b = Σi< j | ai  a j |
              | bi  b j | ei e j . (1.55)

It is now straightforward to show that the wedge product is distributive and bilinear (exer-
cise 1.7). It is also simple to use eq. (1.55) to show that b ∧ a = −a ∧ b and a ∧ a = 0.
For R2 there is only one term in eq. (1.55)

a ∧ b = | a1  a2 |
        | b1  b2 | e1 e2 . (1.56)

We are used to writing the cross product as a 3x3 determinant, which can also be done with the coordinate expansion of the R3 wedge product

a ∧ b = Σi j∈{12,13,23} | ai  a j |
                        | bi  b j | ei e j

      = | e2 e3   e3 e1   e1 e2 |
        |  a1      a2      a3   | . (1.57)
        |  b1      b2      b3   |


Let’s summarize the wedge product properties and relations we have found so far, comparing
the R3 wedge product to the cross product
Cross product and R3 wedge product comparison.

Property                Cross product                        Wedge product
Same vectors            a × a = 0                            a ∧ a = 0
Antisymmetry            b × a = −a × b                       b ∧ a = −a ∧ b
Linear                  a × (αb) = α(a × b)                  a ∧ (αb) = α(a ∧ b)
Distributive            a × (b + c) = a × b + a × c          a ∧ (b + c) = a ∧ b + a ∧ c
Determinant expansion   a × b = | e1 e2 e3 |                 a ∧ b = | e2 e3  e3 e1  e1 e2 |
                                | a1 a2 a3 |                         |  a1     a2     a3   |
                                | b1 b2 b3 |                         |  b1     b2     b3   |
Polar form              n̂ab ‖a‖ ‖b‖ sin(θb − θa )            iab ‖a‖ ‖b‖ sin(θb − θa )

All the wedge properties except the determinant expansion above are valid in any dimension.
Comparing eq. (1.57) to the determinant representation of the cross product, and referring to
eq. (1.40), shows that the R3 wedge product is related to the cross product by a duality transfor-
mation iab = I n̂ab , or

a ∧ b = I(a × b). (1.58)

The direction of the cross product a × b is normal to the plane represented by the bivector
a ∧ b. The magnitude of both (up to a sign) is the area of the parallelogram spanned by the two
vectors.

Example 1.3: Wedge and cross product relationship.

To take some of the abstraction from eq. (1.58) let’s consider a specific example. Let

a = e1 + 2e2 + 3e3
(1.59)
b = 4e1 + 5e2 + 6e3 .

The reader should check that the cross product of these two vectors is
a × b = −3e1 + 6e2 − 3e3 . (1.60)

By direct computation, we find that the wedge and the cross products are related by an R3
pseudoscalar factor

a ∧ b = (e1 + 2e2 + 3e3 ) ∧ (4e1 + 5e2 + 6e3 )
= 5e12 + 6e13 + 8e21 + 12e23 + 12e31 + 15e32
= 3e21 + 6e31 + 3e32
= −3e12 − 6e13 − 3e23 (1.61)
= e123 (−3e3 ) + e132 (−6e2 ) + e231 (−3e1 )
= e123 (−3e1 + 6e2 − 3e3 )
= I(a × b).
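This example is easy to recompute in coordinates: the wedge coefficients of eq. (1.55) on the e12 , e13 , e23 basis, together with the duality map of eq. (1.40), should reproduce the cross product components. The short stdlib-only script below does exactly that.

```python
# a and b from eq. (1.59)
a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)

cross = (a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0])

# wedge coefficients from eq. (1.55)
c12 = a[0] * b[1] - a[1] * b[0]  # e1 e2 coefficient
c13 = a[0] * b[2] - a[2] * b[0]  # e1 e3 coefficient
c23 = a[1] * b[2] - a[2] * b[1]  # e2 e3 coefficient

assert (c12, c13, c23) == (-3.0, -6.0, -3.0)  # a ^ b = -3 e12 - 6 e13 - 3 e23
# duality map of eq. (1.40): e2 e3 = I e1, e3 e1 = I e2, e1 e2 = I e3, so
# a ^ b = I (a x b) means (c23, -c13, c12) must equal the cross product
assert (c23, -c13, c12) == cross
assert cross == (-3.0, 6.0, -3.0)
```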

The relationship between the wedge and cross products allows us to express the R3 vector
product as a multivector combination of the dot and cross products

ab = a · b + I(a × b). (1.62)

This is a very important relationship.


In particular, for electromagnetism, eq. (1.62) can be used with a = ∇ to combine (the scalar)
Gauss’s law with (the vector) Maxwell-Faraday equation, and to combine (the scalar) Gauss’s
law for magnetism with (the vector) Ampere-Maxwell equation. Such dot plus cross product
sums will yield a pair of multivector equations that can be further merged. The resulting multivector equation will be called the Maxwell equation (singular), and will be the starting point of all
our electromagnetic analysis.

Exercise 1.5 Wedge product of colinear vectors.


Given b = αa, use eq. (1.55) to show that the wedge product of any pair of colinear vectors
is zero.
Exercise 1.6 Wedge product antisymmetry.
Prove that the wedge product is antisymmetric using eq. (1.55).

Exercise 1.7 Wedge product distributivity and linearity.


For vectors a, b, c and d, and scalars α, β use eq. (1.55) to show that
a. the wedge product is distributive

(a + b) ∧ (c + d) = a ∧ c + a ∧ d + b ∧ c + b ∧ d,

b. and show that the wedge product is bilinear

(αa) ∧ (βb) = (αβ)(a ∧ b).

Note that these imply the wedge product also has the cross product filtering property
a ∧ (b + αa) = a ∧ b.

Exercise 1.8 Unit bivector.


Verify by explicit multiplication that the bivector of eq. (1.48) squares to −1.

1.9 reverse.

Definition 1.24: Reverse

Let A be a multivector with j multivector factors, A = B1 B2 · · · B j , not necessarily normal.


The reverse A† , or reversion, of this multivector A is

A† = B†j B†j−1 · · · B†1 .

Scalars and vectors are their own reverse, and the reverse of a sum of multivectors is the
sum of the reversions of its summands.

Examples:

(1 + 2e12 + 3e321 )† = 1 + 2e21 + 3e123
(1.63)
((1 + e1 )(e23 − e12 ))† = (e32 + e12 )(1 + e1 ).

A useful relation for k-vectors that are composed entirely of products of normal vectors exists.
We call such k-vectors blades

Definition 1.25: Blade.

A product of k normal vectors is called a k-blade, or a blade of grade k. A grade zero blade
is a scalar.
The notation F ∈ ⋀k is used in the literature to indicate that F is a blade of grade k.

Any k-blade is also a k-vector, but not all k-vectors are k-blades. For example in R4 the
bivector
e12 + e34 , (1.64)

is not a 2-blade, since it cannot be factored into normal products. This will be relevant when
formulating rotations since bivectors that are blades can be used to simply describe rotations
or Lorentz boosts 2 , whereas it is not as easy to compute the exponential of a non-blade
bivector argument.
The reverse of a blade is
Theorem 1.3: Reverse of k-blade.

The reverse of a k-blade Ak = a1 a2 · · · ak is given by

A†k = (−1)k(k−1)/2 Ak .

This can be proven by successive interchange of the factors


A†k = ak ak−1 · · · a1
= (−1)k−1 a1 ak ak−1 · · · a2
= (−1)k−1 (−1)k−2 a1 a2 ak ak−1 · · · a3 (1.65)
..
.
= (−1)k−1 (−1)k−2 · · · (−1)1 a1 a2 · · · ak = (−1)k(k−1)/2 a1 a2 · · · ak . 
A special, but important case, is the reverse of the R3 pseudoscalar, which is negated by
reversion

I † = −I. (1.66)
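The sign in theorem 1.3 is just the parity of the permutation that reverses k anticommuting factors, so it can be confirmed by brute-force inversion counting; the helper name `reversal_sign` is an ad hoc choice for this sketch.

```python
def reversal_sign(k):
    """Sign accumulated by reversing k mutually anticommuting vector factors,
    computed by counting the pairwise transpositions (inversions) directly."""
    perm = list(reversed(range(k)))
    inversions = sum(1 for i in range(k) for j in range(i + 1, k)
                     if perm[i] > perm[j])
    return (-1) ** inversions

# agrees with the closed form of theorem 1.3
for k in range(8):
    assert reversal_sign(k) == (-1) ** (k * (k - 1) // 2)

assert reversal_sign(3) == -1  # the R^3 pseudoscalar is negated on reversion
assert reversal_sign(2) == -1  # bivectors are negated
assert reversal_sign(4) == 1   # grade 4 blades are unchanged
```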

1.10 complex representations.

We’ve seen that bivectors like e12 square to minus one. Geometric algebra has infinitely many
such imaginary numbers, which can be utilized to introduce problem specific “complex planes”
as desired. In three dimensional and higher spaces, imaginary representations (such as the R3
pseudoscalar) with grades higher than two are also possible.
Using the reversion relationship of eq. (1.66), we can see that I behaves as an imaginary

I 2 = I(−I † )
= −(e1 e2 e3 )(e3 e2 e1 )
= −e1 e2 e2 e1 (1.67)
= −e1 e1
= −1.
2 A rotation in spacetime.

Given many possible imaginary representations, complex and complex-like numbers can be
represented in GA for any k-vector i that satisfies i2 = −1 since the multivector
z = x + iy, (1.68)

will have all the required properties of a complex number.


For example, in Euclidean spaces we could use either of
i = (u ∧ v)/√(−(u ∧ v)2 )
(1.69)
i = (u ∧ v ∧ w)/√(−(u ∧ v ∧ w)2 ) ,

provided u, v(, w) are linearly independent. Given a set of orthonormal vectors u, v(, w), then
i = uv
(1.70)
i = uvw,
are also suitable as imaginaries.
Other complex number like representations are also possible with GA. Quaternions, which
are often used in computer graphics to represent rotations, are the set q ∈ {a + xi + yj + zk | a, x, y, z ∈ R}
where

i2 = j2 = k2 = −1
ij = k = −ji
(1.71)
jk = i = −kj
ki = j = −ik.

Like complex numbers, quaternions can be represented in GA as 0,2 multivectors, but require
three imaginaries instead of one.
Other complex like representations are also possible in GA, provided suitable conjugation
operations are defined. For example, an operation called Clifford conjugation (or spatial reversal), designated Ā, is introduced in [4] that toggles the sign of any multivector components with grade g mod 4 = 1, 2. Illustrating by example, given a multivector A = 1 + e1 + e12 + e123 , the Clifford conjugate is

Ā = 1 − e1 − e12 + e123 , (1.72)

leaving the sign of the scalar and pseudoscalar components untouched (much like the reversion
operator † toggles the sign of any grade 2 or 3 multivector components). Such a complex conju-
gation like operation allows (0,1), (0,2), (1,3) or (2,3) multivectors to encode relativistic “proper
length” using geometric algebras built from real Euclidean vector spaces.

Exercise 1.9 Quaternions.


Show that the relations eq. (1.71) are satisfied by the unit bivectors i = e32 , j = e13 , k =
e21 , demonstrating that quaternions, like complex numbers, may be represented as multivector
subspaces.

1.11 multivector dot product.

In general the product of two k-vectors is a multivector, with a selection of different grades. For
example, the product of two bivectors may have grades 0, 2, or 4
e12 (e21 + e23 + e34 ) = 1 + e13 + e1234 . (1.73)

Similarly, the product of a vector and bivector generally has grades 1 and 3
e1 (e12 + e23 ) = e2 + e123 . (1.74)

We’ve identified the vector dot product with scalar grade selection of their vector product,
the selection of the lowest grade of their product. This motivates the definition of a general
multivector dot product

Definition 1.26: Multivector dot product

The dot (or inner) product of two multivectors A = Σi=0..N ⟨A⟩i , B = Σi=0..N ⟨B⟩i is defined as

A · B ≡ Σi, j=0..N ⟨Ai B j ⟩|i− j| .

If A, B are k-vectors with equal grade, then the dot product is just the scalar selection of their product

A · B = ⟨AB⟩, (1.75)

and if A, B are k-vectors with grades r ≠ s respectively, then their dot product is a single grade selection

A · B = ⟨AB⟩|r−s| . (1.76)

Example 1.4: Multivector dot products.



The most common and useful multivector dot products are for pairs of multivectors that
are each entirely a specific grade, such as a vector-bivector dot product

(e1 + 2e2 ) · (e12 + e23 ) = ⟨(e1 + 2e2 ) (e12 + e23 )⟩1 (1.77)
= e2 − 2e1 + 2e3 ,

or a vector-trivector dot product

(e1 + 2e2 ) · e123 = ⟨(e1 + 2e2 ) e123 ⟩2 (1.78)
= e23 + 2e31 .

Should the products be of mixed grade, then we sum all the individual dot products

(1 + e1 + 2e23 ) · (e2 − e31 ) = ⟨1e2 ⟩1 + ⟨e1 e2 ⟩ + 2⟨e23 e2 ⟩1 − ⟨1e31 ⟩2 − ⟨e1 e31 ⟩1 − 2⟨e23 e31 ⟩
= e2 − 2e3 + e13 + e3 (1.79)
= e2 − e3 + e13 .
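The computation of eq. (1.77) can be reproduced mechanically with a dict-of-blades geometric product plus a grade selection helper; `gp`, `blade_times_vector` and `grade` are ad hoc names for this sketch, and a Euclidean metric is assumed throughout.

```python
def blade_times_vector(blade, i):
    """Multiply basis blade e_blade by e_i on the right (Euclidean metric)."""
    sign = (-1) ** sum(1 for a in blade if a > i)
    if i in blade:
        return sign, tuple(a for a in blade if a != i)
    return sign, tuple(sorted(blade + (i,)))

def gp(A, B):
    """Geometric product of multivectors given as {blade tuple: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            sign, blade = 1, ba
            for i in bb:
                s, blade = blade_times_vector(blade, i)
                sign *= s
            out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def grade(A, k):
    """Select the grade-k part of a multivector."""
    return {b: c for b, c in A.items() if len(b) == k}

a = {(1,): 1, (2,): 2}      # e1 + 2 e2
B = {(1, 2): 1, (2, 3): 1}  # e12 + e23

# the vector-bivector dot product is the grade |1 - 2| = 1 selection
assert grade(gp(a, B), 1) == {(1,): -2, (2,): 1, (3,): 2}  # -2 e1 + e2 + 2 e3
```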

Unfortunately, the notation for the multivector dot product is not standardized. In particular,
some authors [7] prefer left and right contraction operations that omit the absolute value in the
grade selections. A dot product like operator for scalar selection is also common.

Definition 1.27: Alternate dot products.

The left and right contraction operations are respectively defined as

A ⌋ B = Σi, j=0..N ⟨Ai B j ⟩ j−i
A ⌊ B = Σi, j=0..N ⟨Ai B j ⟩i− j ,

where any selection of a negative grade is taken to be zero. The scalar product is defined as

A ∗ B = Σi, j=0..N ⟨Ai B j ⟩0 .

In an attempt to avoid inundating the reader with too many new operators, this book will stick
to the dot, wedge and grade selection operators. However, these are common enough that they
deserve mentioning.

1.11.1 Dot product of a vector and bivector

An important example of the generalized dot product is the dot product of a vector and bivector.
Unlike the dot product of two vectors, a vector-bivector dot product is order dependent.
The vector dot product is zero when the two vectors are normal. This is also true if the vector
and bivector are normal, that is, having no common factor, as in

e1 · e23 = ⟨e123 ⟩1 = 0. (1.80)

On the other hand, a non-zero vector-bivector dot product requires the vector to have some
overlap with the bivector. A bivector formed from the product of two normal vectors B =
ab, a · b = 0, will have a non-zero dot product with any vector that lies in span {a, b}

(αa + βb) · (ab) = α ‖a‖2 b − β ‖b‖2 a. (1.81)

It is often useful to be able to expand a vector-bivector dot product. A useful identity for such
an expansion is

Theorem 1.4: Dot product of vector and wedge product.

The dot product of a vector and a wedge product of two vectors distributes as

a · (b ∧ c) = (c ∧ b) · a = (a · b)c − (a · c)b.

Before proving this theorem, let’s take a look at what it implies. This shows that only vectors
that lie in the span of the plane represented by the bivector will result in a non-zero vector-
bivector dot product. We also know that when a vector that lies entirely in the span of a bivector
is multiplied by that bivector, the result is rotated by ±π/2. This means that a vector-bivector
dot product is normal to the vector that is dotted with the bivector. This can also be seen alge-
braically since

a · (a · (b ∧ c)) = a · ((a · b)c − (a · c)b)
= (a · c) (a · b) − (a · c) (a · b) (1.82)
= 0.

The net effect of a vector-bivector dot product is to select only components of the vector that
lie in the plane of the bivector, and then to rotate those by ±π/2 in that plane, plus scale that
rotated vector by the magnitude of the bivector.

There are (somewhat tricky) coordinate free ways to prove theorem 1.4, but a simple expansion in coordinates also does the job.
a · (b ∧ c) = Σi; j≠k ai b j ck ei · (e j ∧ ek ) = Σi; j≠k ai b j ck ⟨ei e j ek ⟩1
(1.83)
(c ∧ b) · a = Σi; j≠k ai b j ck (ek ∧ e j ) · ei = Σi; j≠k ai b j ck ⟨ek e j ei ⟩1 ,

where the j = k terms are dropped since e j ∧ ek vanishes for those. If all of i, j, k are unique then ⟨ei e j ek ⟩1 = 0, so the vector selection is non-zero only when i equals one of j, k. For example

⟨e1 e1 e2 ⟩1 = e2
(1.84)
⟨e1 e2 e1 ⟩1 = −e1 .

Given j ≠ k, and i = j or i = k, then it is simple to show (exercise 1.10) that

⟨ei e j ek ⟩1 = ⟨ek e j ei ⟩1 , (1.85)

so a · (b ∧ c) = (c ∧ b) · a. Additionally, for j ≠ k (exercise 1.11)

⟨ei e j ek ⟩1 = ek (e j · ei ) − e j (ek · ei ) . (1.86)

Plugging eq. (1.86) back into eq. (1.83) proves the theorem

a · (b ∧ c) = Σi; j≠k ai b j ck (ek (e j · ei ) − e j (ek · ei ))
(1.87)
= (a · b) c − (a · c) b.

The RHS of eq. (1.87) shows that the vector-bivector dot product has the following relation
to the R3 triple cross product

Theorem 1.5

For vectors in R3 , the dot product of a vector and vector wedge product can be expressed
as a triple cross product

a · (b ∧ c) = (b × c) × a.

This can be proven by invoking the well known identity for the triple cross product ([14])
a × (b × c) = (a · c)b − (a · b)c. (1.88)

Alternatively, it can be proven directly by applying the identity a ∧ b = I(a × b) to the vector-
bivector product, and then selecting the vector grade

a (b ∧ c) = aI (b × c)
= I (a · (b × c)) + I (a ∧ (b × c)) (1.89)
= I (a · (b × c)) + I 2 a × (b × c)
= I (a · (b × c)) + (b × c) × a.

This multivector has a pseudoscalar (grade 3) component, and a vector component, so selecting the grade one component proves the theorem

⟨a (b ∧ c)⟩1 = a · (b ∧ c) = (b × c) × a. (1.90)
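Theorems 1.4 and 1.5 are easy to spot-check numerically on a triple of R3 vectors (stdlib only; the specific vectors below are arbitrary test values, and `dot`/`cross` are ad hoc helpers).

```python
def dot(u, v):
    """Euclidean dot product."""
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    """R^3 cross product."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = (1.0, -2.0, 3.0), (0.5, 4.0, -1.0), (2.0, 1.0, 1.0)

# a . (b ^ c) expanded per theorem 1.4
lhs = tuple(dot(a, b) * ck - dot(a, c) * bk for bk, ck in zip(b, c))
# the triple cross product form of theorem 1.5
rhs = cross(cross(b, c), a)

assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```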

1.11.2 Bivector dot product.

Being able to compute the generalized dot product of two bivectors will also have a number of
applications. When those bivectors are wedge products, there is a useful distribution identity for
this dot product.

Theorem 1.6: Dot product distribution over wedge products.

Given two sets of wedge products a ∧ b, and c ∧ d, their dot product is

(a ∧ b) · (c ∧ d) = ((a ∧ b) · c) · d = (b · c)(a · d) − (a · c)(b · d).

To prove this, select the scalar grade of the product (a ∧ b)(c ∧ d)

(a ∧ b)(c ∧ d) = (a ∧ b)(cd − c · d) (1.91)
= (a ∧ b)cd − (a ∧ b)(c · d).

The second term, a bivector, is not of interest since it will be killed by the scalar selection
operation. The remainder can be expanded in grades, first making use of the fact that a bivector-
vector product has only grade 1 and 3 components
(a ∧ b)c = (a ∧ b) · c + ⟨(a ∧ b)c⟩3 . (1.92)

Multiplication of the trivector term by d produces a grade 2,4 multivector which can be
ignored. The product of (a ∧ b) · c, a vector, with d is a grade 0,2 multivector, of which only the
scalar grade is of interest. That is

(a ∧ b) · (c ∧ d) = ⟨(a ∧ b)(c ∧ d)⟩ (1.93)
= ((a ∧ b) · c) · d.

To complete the proof, we apply theorem 1.4

((a ∧ b) · c) · d = (a(b · c) − b(a · c)) · d (1.94)
= (a · d)(b · c) − (b · d)(a · c).

In R3 this identity also has a cross product equivalent

Theorem 1.7: Dot products of wedges as cross products.

The dot product of two R3 wedge products can be expressed as cross products

(a ∧ b) · (c ∧ d) = −(a × b) · (c × d).

This follows by scalar grade selection

(a ∧ b) · (c ∧ d) = ⟨(a ∧ b)(c ∧ d)⟩
                  = ⟨I(a × b)I(c × d)⟩     (1.95)
                  = −(a × b) · (c × d).
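Both theorem 1.6 and theorem 1.7 are easy to spot check numerically. A small Python/numpy sketch (an illustration, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

# theorem 1.6: (a ^ b) . (c ^ d) = (b . c)(a . d) - (a . c)(b . d)
dot_of_wedges = np.dot(b, c) * np.dot(a, d) - np.dot(a, c) * np.dot(b, d)
# theorem 1.7: the same quantity expressed with cross products
dot_of_crosses = -np.dot(np.cross(a, b), np.cross(c, d))

assert np.isclose(dot_of_wedges, dot_of_crosses)
```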

1.11.3 Problems.

Exercise 1.10 Index permutation in vector selection.


Prove eq. (1.85).

Exercise 1.11 Dot product of unit vector with unit bivector.


Prove eq. (1.86).

1.12 permutation within scalar selection.

As scalar selection is at the heart of the generalized dot product, it is worth knowing some of
the ways that such a selection operation can be manipulated.

Theorem 1.8: Permutation of multivector products in scalar selection.



The factors within a scalar grade selection of a pair of multivector products may be per-
muted or may be cyclically permuted

⟨AB⟩ = ⟨BA⟩
⟨AB ··· YZ⟩ = ⟨ZAB ··· Y⟩.

It is sufficient to prove just the two multivector permutation case. One simple, but inelegant
method, is to first expand the pair of multivectors in coordinates. Let
A = a0 + Σ_i a_i e_i + Σ_{i<j} a_{ij} e_{ij} + ···
B = b0 + Σ_i b_i e_i + Σ_{i<j} b_{ij} e_{ij} + ···     (1.96)

Only the products of equal unit k-vectors e_{ij}, e_{ijk}, ··· can contribute scalar components to the sum, so the scalar selection of the products must have the form

⟨AB⟩ = a0 b0 + Σ_i a_i b_i e_i² + Σ_{i<j} a_{ij} b_{ij} e_{ij}² + ···     (1.97)

This sum is also clearly equal to hBAi, completing the proof.

1.13 multivector wedge product.

We’ve identified the vector wedge product of two vectors with the selection of the highest
grade of their product. Looking back to the multivector products of eq. (1.73), and eq. (1.74) as
motivation, a generalized wedge product can be defined that selects the highest grade terms of
a given multivector product

Definition 1.28: Multivector wedge product.

For the multivectors A, B defined in definition 1.26, the wedge (or outer) product is defined
as
A ∧ B ≡ Σ_{i,j=0}^{N} ⟨A_i B_j⟩_{i+j}.

If A, B are k-vectors with grades r, s respectively, then their wedge product is a single grade selection
A ∧ B = hABir+s . (1.98)

The most important example of the generalized wedge is the wedge product of a vector with
wedge of two vectors

Theorem 1.9: Wedge of three vectors.

The wedge product of three vectors is associative

(a ∧ b) ∧ c = a ∧ (b ∧ c),

so can be written simply as a ∧ b ∧ c.

The proof follows directly from the definition

(a ∧ b) ∧ c = ⟨(a ∧ b)c⟩₃
            = ⟨(ab − a · b)c⟩₃     (1.99)
            = ⟨abc⟩₃ − (a · b)⟨c⟩₃
            = ⟨abc⟩₃,

where the grade-3 selection of a vector is zero by definition. Similarly

a ∧ (b ∧ c) = ⟨a(b ∧ c)⟩₃
            = ⟨a(bc − b · c)⟩₃     (1.100)
            = ⟨abc⟩₃ − (b · c)⟨a⟩₃
            = ⟨abc⟩₃,

which proves the theorem. It is simple to show that the wedge of three vectors is completely anti-
symmetric (any interchange of vectors changes the sign), and that cyclic permutation a → b →
c → a of the vectors leaves it unchanged (exercise 1.12). These properties are also common to
the triple product of R3 vector algebra, a commonality associated with the determinant structure of the triple wedge product, which can be shown by direct expansion in coordinates
a ∧ b ∧ c = Σ_{i,j,k} a_i b_j c_k ⟨e_i e_j e_k⟩₃
          = Σ_{i<j<k} det[ a_i a_j a_k ; b_i b_j b_k ; c_i c_j c_k ] e_{ijk}.     (1.101)

This shows that the R3 wedge of three vectors is the triple product times the pseudoscalar

a ∧ b ∧ c = (a · (b × c)) I. (1.102)
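The scalar coefficient in eq. (1.102) is exactly the determinant of the coordinate matrix, which can be spot checked numerically. A Python/numpy sketch (an illustration, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))

triple = np.dot(a, np.cross(b, c))        # a . (b x c)
det = np.linalg.det(np.array([a, b, c]))  # determinant of the row-stacked coordinates

assert np.isclose(triple, det)
```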

Note that the wedge of n vectors is also associative. A full proof is possible by induction,
which won’t be done here. Instead, as a hint of how to proceed if desired, consider the coordinate
expansion of a trivector wedged with a vector
(a ∧ b ∧ c) ∧ d = Σ_{i,j,k,l} a_i b_j c_k d_l ⟨e_i e_j e_k e_l⟩₄.     (1.103)

This can be rewritten with any desired grouping: ((a ∧ b) ∧ c) ∧ d = (a ∧ b) ∧ (c ∧ d) = a ∧ (b ∧ c ∧ d) = ···. Observe that this can also be put into a determinant form like that of
eq. (1.101). Whenever the number of vectors matches the dimension of the underlying vector
space, this will be a single determinant of all the coordinates of the vectors multiplied by the
unit pseudoscalar for the vector space.

1.13.1 Problems.

Exercise 1.12 Properties of the wedge of three vectors.


a. Show that the wedge product of three vectors is completely antisymmetric.
b. Show that the wedge product of three vectors a ∧ b ∧ c is invariant with respect to cyclic
permutation.

Exercise 1.13 R4 wedge of a non-blade with itself.


While the wedge product of a blade with itself is always zero, this is not generally true of the
wedge products of arbitrary k-vectors in higher dimensional spaces. To demonstrate this, show
that the wedge of the bivector B = e1 e2 + e3 e4 with itself is non-zero. Why is this bivector not
a blade?

1.14 projection and rejection.

Let’s now look at how the dot plus wedge product decomposition of the vector product can be
applied to compute vector projection and rejection, which are defined as

Definition 1.29: Vector projection and rejection.

Given a vector x and vector u the projection of x onto the direction of u is defined as

Proju (x) = (x · û)û,

where û = u/kuk. The rejection of x by u is defined as the component of x that is normal


to u

Reju (x) = x − Proju (x).

An example of projection and rejection with respect to a direction vector u is illustrated in


fig. 1.16.
Figure 1.16: Projection and rejection illustrated.

Computation of the projective and rejective components of a vector x relative to a direction


û requires little more than a multiplication by 1 = ûû, and some rearrangement

x = xûû
  = (xû) û
  = (x · û + x ∧ û) û     (1.104)
  = (x · û) û + (x ∧ û) û.

The vector x is split nicely into its projection and rejective components in a complementary
fashion
Proj_u(x) = (x · û) û     (1.105a)

Rej_u(x) = (x ∧ û) û.     (1.105b)

By construction, (x ∧ û) û must be a vector, despite any appearance of a multivector nature.


The utility of this multivector rejection formula is not for hand or computer algebra cal-
culations, where it will generally be faster and simpler to compute x − (x · û)û, than to use
eq. (1.105b). Instead this will come in handy as a new abstract algebraic tool.
When it is desirable to perform this calculation explicitly, it can be done more efficiently using a no-op grade selection operation. In particular, a vector can be written as its own grade-1 selection

x = ⟨x⟩₁,     (1.106)

so the rejection can be re-expressed using definition 1.26 as a generalized bivector-vector dot
product

Rej_u(x) = ⟨(x ∧ û) û⟩₁ = (x ∧ û) · û.     (1.107)

In R3 , using theorem 1.5, the rejection operation can also be expressed as a triple cross
product
Reju (x) = û × (x × û) . (1.108)
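The projection, rejection, and the triple cross product form of eq. (1.108) can all be verified with conventional vector algebra. A Python/numpy sketch (an illustration, not part of the original text):

```python
import numpy as np

def proj(x, u):
    """Projection of x onto the direction of u: (x . uhat) uhat."""
    uhat = u / np.linalg.norm(u)
    return np.dot(x, uhat) * uhat

def rej(x, u):
    """Rejection of x by u: the component of x normal to u."""
    return x - proj(x, u)

rng = np.random.default_rng(3)
x, u = rng.standard_normal((2, 3))
uhat = u / np.linalg.norm(u)

assert np.allclose(rej(x, u), np.cross(uhat, np.cross(x, uhat)))  # eq. (1.108)
assert np.isclose(np.dot(proj(x, u), rej(x, u)), 0)               # normality
assert np.allclose(proj(x, u) + rej(x, u), x)                     # decomposition
```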

To help establish some confidence with these new additions to our toolbox, here are a pair of
illustrative examples using eq. (1.105b), and eq. (1.107) respectively.

Example 1.5: An R2 rejection.

Let x = ae1 + be2 and u = e1 for which the wedge is x ∧ û = be2 e1 . Using eq. (1.105b) the
rejection of x by u is

Reju (x) = (x ∧ û) û


= (be2 e1 )e1 (1.109)
= be2 (e1 e1 )
= be2 ,

as expected.

This example provides some guidance about what is happening geometrically in eq. (1.105b).
The wedge operation produces a pseudoscalar for the plane spanned by {x, u} that is scaled as
sin θ where θ is the angle between x and u. When that pseudoscalar is multiplied by û, û is
rotated in the plane by π/2 radians towards x, yielding the normal component of the vector x.
Here’s a slightly less trivial R3 example

Example 1.6: An R3 rejection.


Let x = ae2 + be3 and û = (e1 + e2)/√2, for which the wedge product is

x ∧ û = (1/√2) det[ e23 e31 e12 ; 0 a b ; 1 1 0 ]
      = (1/√2) (e23(−b) − e31(−b) + e12(−a))     (1.110)
      = (1/√2) (b(e32 + e31) + a e21).
Using eq. (1.107) the rejection of x by u is

(x ∧ û) · û = (1/2) (b(e32 + e31) + a e21) · (e1 + e2).     (1.111)

Each of these bivector-vector dot products has the form e_{rs} · e_t = ⟨e_{rst}⟩₁, which is zero whenever the indexes r, s, t are unique, and is a vector whenever one of the indexes is repeated (r = t, or s = t). This leaves

(x ∧ û) · û = (1/2) (b e3 + a e2 + b e3 − a e1)     (1.112)
            = b e3 + (a/2)(e2 − e1).

Example 1.7: Velocity and acceleration in polar coordinates.

In eq. (1.35), and eq. (1.36) we found the polar representation of the velocity and acceler-
ation vectors associated with the radial parameterization r(r, θ) = rr̂(θ).
We can alternatively compute the radial and azimuthal components of these vectors in
terms of their projective and rejective components

v = v r̂ r̂ = (v · r̂ + v ∧ r̂) r̂
a = a r̂ r̂ = (a · r̂ + a ∧ r̂) r̂,     (1.113)

so

v · r̂ = r′
v ∧ r̂ = rω θ̂ ∧ r̂ = ω θ̂ ∧ r
a · r̂ = r″ − rω²     (1.114)
a ∧ r̂ = (1/r)(r²ω)′ θ̂ ∧ r̂.
We see that it is natural to introduce angular velocity and acceleration bivectors. These
both lie in the θ̂ ∧ r̂ plane. Of course, it is also possible to substitute the cross product for
the wedge product, but doing so requires the introduction of a normal direction that may
not intrinsically be part of the problem (i.e. two dimensional problems).

In the GA literature the projection and rejection operations are usually written using the
vector inverse

Definition 1.30: Vector inverse.

Define the inverse of a vector x, when it exists, as

x⁻¹ = x/‖x‖².

This inverse satisfies x⁻¹ x = x x⁻¹ = 1.

The vector inverse may not exist in a non-Euclidean vector space where x2 can be zero for
non-zero vectors x.
In terms of the vector inverse, the projection and rejection operations with respect to u can
be written without any reference to the unit vector û = u/kuk that lies along that vector

Proj_u(x) = (x · u) (1/u)     (1.115)
Rej_u(x) = (x ∧ u) (1/u) = (x ∧ u) · (1/u).

It was claimed in the definition of rejection that the rejection is normal to the projection. This
can be shown trivially without any use of GA (exercise 1.14). This also follows naturally using
the grade selection operator representation of the dot product

Rej_u(x) · Proj_u(x) = ⟨Rej_u(x) Proj_u(x)⟩
                     = ⟨(x ∧ û) û (x · û) û⟩
                     = (x · û) ⟨(x ∧ û) û²⟩     (1.116)
                     = (x · û) ⟨x ∧ û⟩
                     = 0.

This is zero because the scalar grade of a wedge product, a bivector, is zero by definition.

Exercise 1.14 Rejection normality


Prove, without any use of GA, that x − Proju (x) is normal to u, as claimed in definition 1.29.

Exercise 1.15 Rejection example.

a. Repeat example 1.6 by calculating (x ∧ û)û and show that all the grade three components
of this multivector product vanish.
b. Explicitly calculate x − (x · û)û and show that this matches eq. (1.112).

1.15 normal factorization of the wedge product.

A general bivector has the form


B = Σ_{i,j} a_{ij} e_{ij},     (1.117)

which is not necessarily a blade. On the other hand, a wedge product is always a blade³.

Theorem 1.10: Wedge product normal factorization

The wedge product of any two non-colinear vectors a, b always has a normal (2-blade)
factorization

a ∧ b = uv, u · v = 0.

3 In R3 any bivector is also a blade [1]



This can be proven by construction. Pick u = a and v = Reja (b), then

a Rej_a(b) = a · Rej_a(b) + a ∧ Rej_a(b)
           = a ∧ Rej_a(b)
           = a ∧ (b − (b · a)/‖a‖² a)     (1.118)
           = a ∧ b,

since a · Rej_a(b) = 0, and a ∧ (αa) = 0 for any scalar α.


The significance of theorem 1.10 is that the square of any wedge product is negative

(uv)² = (uv)(−vu)
      = −u(v²)u     (1.119)
      = −‖u‖²‖v‖²,

which in turn means that exponentials with wedge product arguments can be used as rotation
operators.
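The construction used in the proof of theorem 1.10 (u = a, v = Rej_a(b)) is easy to verify numerically: the factors are orthogonal, and the squared magnitude ‖u‖²‖v‖² equals the squared area ‖a × b‖² of the parallelogram spanned by a, b. A Python/numpy sketch (an illustration, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = rng.standard_normal((2, 3))

u = a
v = b - (np.dot(b, a) / np.dot(a, a)) * a  # Rej_a(b)

assert np.isclose(np.dot(u, v), 0)         # a normal (orthogonal) factorization
# -(uv)^2 = |u|^2 |v|^2 matches |a x b|^2, the squared parallelogram area
cross = np.cross(a, b)
assert np.isclose(np.dot(u, u) * np.dot(v, v), np.dot(cross, cross))
```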

Exercise 1.16 R3 bivector factorization.


Find some normal factorizations for the R3 bivector e12 + e23 + e31 .

1.16 the wedge product as an oriented area.

The coordinate representation of the R2 wedge product (eq. (1.56)) had a single e12 bivector fac-
tor, whereas the expansion in coordinates for the general RN wedge product was considerably
messier (eq. (1.55)). This difference can be eliminated by judicious choice of basis.
A simpler coordinate representation for the RN wedge product follows by choosing an or-
thonormal basis for the planar subspace spanned by the wedge vectors. Given vectors a, b, let
{û, v̂} be an orthonormal basis for the plane subspace P = span {a, b}. The coordinate represen-
tations of a, b in this basis are
a = (a · û)û + (a · v̂)v̂
(1.120)
b = (b · û)û + (b · v̂)v̂.
The wedge of these vectors is
   
a ∧ b = ((a · û)û + (a · v̂)v̂) ∧ ((b · û)û + (b · v̂)v̂)
      = ((a · û)(b · v̂) − (a · v̂)(b · û)) ûv̂     (1.121)
      = det[ a · û  a · v̂ ; b · û  b · v̂ ] ûv̂.

We see that this basis allows for the most compact (single term) coordinate representation of
the wedge product.
If a counterclockwise rotation by π/2 takes û to v̂ the determinant will equal the area of the
parallelogram spanned by a and b. Let that area be designated

A = det[ a · û  a · v̂ ; b · û  b · v̂ ].     (1.122)

A given wedge product may have any number of other wedge or normal product representa-
tions
a ∧ b = (a + βb) ∧ b
      = a ∧ (b + αa)
      = (Aû) ∧ v̂
      = û ∧ (Av̂)     (1.123)
      = (αAû) ∧ (v̂/α)
      = (βAû′) ∧ (v̂′/β).
These equivalencies can be thought of as different geometrical representations of the same object, since the spanned area and relative ordering of the wedged vectors remains constant.
Some different parallelogram representations of a wedge products are illustrated in fig. 1.18.

Figure 1.18: Parallelogram representations of wedge products.

As there are many possible normal factorizations for a given wedge product, and also many
possible wedge products that produce the same value bivector, we can say that a wedge product

represents an area with a specific cyclic orientation, but any such area is a valid representation.
This is illustrated in fig. 1.19.

Figure 1.19: Different shape representations of a wedge product.

Exercise 1.17 Parallelogram area.


Show that the area A of the parallelogram spanned by vectors

a = a1 e1 + a2 e2
b = b1 e1 + b2 e2 ,

is

A = ± det[ a1 a2 ; b1 b2 ],

and that the sign is positive if the rotation angle θ that takes â to b̂ is positive, θ ∈ (0, π).

1.17 general rotation.

Equation (1.21) showed that the R2 pseudoscalar anticommutes with any vector x ∈ R2 ,
xi = −ix, (1.124)

and that the sign of the bivector exponential argument must be negated to maintain the value of
the vector x ∈ R2 on interchange

xeiθ = e−iθ x. (1.125)

The higher dimensional generalization of these results are



Theorem 1.11: Wedge and exponential commutation and conjugation rules.

Given two non-colinear vectors a, b, let the planar subspace formed by their span be des-
ignated S = span {a, b}.

(a) Any vector p ∈ S anticommutes with the wedge product a ∧ b

p(a ∧ b) = −(a ∧ b)p.

(b) Any vector n normal to this plane (n · a = n · b = 0) commutes with this wedge product

n(a ∧ b) = (a ∧ b)n.

(c) Reversing the order of multiplication of a vector p ∈ S with an exponential ea∧b ,


requires the sign of the exponential argument to be negated

pea∧b = e−a∧b p.

This sign change on interchange will be called conjugation.

(d) Any normal vector n commutes with such a complex exponential

nea∧b = ea∧b n.

The proof relies on the fact that a normal factorization of the wedge product is possible. If p is one of those factors, then the other is uniquely determined by the multivector equation a ∧ b = pq, for which we must have q = (1/p)(a ∧ b) ∈ S and p · q = 0⁴. Then

p(a ∧ b) = p(pq)
= p(−qp) (1.126)
= −(pq)p
= −(a ∧ b)p.

4 The identities required to show that q above has no trivector grades, and to evaluate it explicitly in terms of a, b, x,
will be derived later.

Any normal n must also be perpendicular to the factors p, q, with n · p = n · q = 0, so

n(a ∧ b) = n(pq)
= (−pn)q
= −p(−qn) (1.127)
= (pq)n
= (a ∧ b)n.

For the complex exponentials, introduce a unit pseudoscalar for the plane i = p̂q̂ satisfying
i2 = −1 and a scalar rotation angle θ = (a ∧ b)/i, then for vectors p ∈ S

pea∧b = peiθ
= p (cos θ + i sin θ)
= (cos θ − i sin θ) p (1.128)
= e−iθ p
= e−a∧b p,

and for vectors n normal to S

nea∧b = neiθ
= n (cos θ + i sin θ)
= (cos θ + i sin θ) n (1.129)
= eiθ n
= ea∧b n,

which completes the proof.


The point of this somewhat abstract seeming theorem is to prepare for the statement of a
general RN rotation, which is

Definition 1.31: General rotation

Let B = {p̂, q̂} be an orthonormal basis for a planar subspace with unit pseudoscalar i = p̂q̂
where i2 = −1. The rotation of a vector x through an angle θ with respect to this plane is

Rθ (x) = e−iθ/2 xeiθ/2 .

Here the rotation sense is that of the π/2 rotation from p̂ to q̂ in the subspace S = span B.

This statement did not make any mention of a normal direction. Such a normal direction is
not unique for dimensions higher than 3, nor defined for two dimensions. Instead the rotational
sense is defined by the ordering of the factors in the bivector i.
To check that this operation has the desired semantics, let x = xk + x⊥ , where xk ∈ S and
x⊥ · p = 0 ∀p ∈ S . Then

R_θ(x) = e^{−iθ/2} x e^{iθ/2}
       = e^{−iθ/2} (x∥ + x⊥) e^{iθ/2}     (1.130)
       = x∥ e^{iθ} + x⊥ e^{−iθ/2} e^{iθ/2}
       = x∥ e^{iθ} + x⊥.
As desired, this rotation operation rotates components of the vector that lie in the planar
subspace S by θ, while leaving the normal components of the vector unchanged, as illustrated
in fig. 1.20. This is what we can call rotation around a normal in R3 .

Figure 1.20: Rotation with respect to the plane of a pseudoscalar.

1.18 symmetric and antisymmetric vector sums.



Theorem 1.12: Symmetric and antisymmetric vector products.

1. The dot product of vectors x, y can be written as

x · y = (1/2) (xy + yx).

This sum, including all permutations of the products of x and y is called a completely
symmetric sum. A useful variation of this relationship is

yx = 2(x · y) − xy.

2. The wedge product of vectors x, y can be written as

x ∧ y = (1/2) (xy − yx).

This sum, including all permutations of the products x and y, with a sign change for
any interchange, is called a completely antisymmetric sum.

These identities highlight the symmetric and antisymmetric nature of the respective dot and wedge products in a coordinate free form, and will be useful in the manipulation of various identities. The proof follows by direct computation, after first noting that the respective vector products are
xy = x · y + x ∧ y (1.131a)

yx = y · x + y ∧ x
(1.131b)
= x · y − x ∧ y.
In eq. (1.131b) the interchange utilized the respective symmetric and antisymmetric nature
of the dot and wedge products.
Adding and subtracting eq. (1.131) proves the result.

1.19 reflection.

Geometrically the reflection of a vector x across a line directed along u is the difference of the
projection and rejection
x′ = (x · u)(1/u) − (x ∧ u)(1/u)     (1.132)
   = (x · u − x ∧ u)(1/u).

Using the symmetric and antisymmetric sum representations of the dot and wedge products
from theorem 1.12 the reflection can be expressed as vector products
x′ = (1/2)(xu + ux − xu + ux)(1/u),     (1.133)

yielding a remarkably simple form in terms of vector products

x′ = ux (1/u).     (1.134)

As an illustration, here is a sample CliffordBasic reflection computation

In[1]:= ClearAll[u, x, uu, invu, i, o, proj, rej, ux, uxu]


u = 4 e[1] + 2 e[2];
x = 3 e[1] + 3 e[2];
uu = InnerProduct[u, u];
invu = u / uu;
i = InnerProduct[x, u];
o = OuterProduct[x, u];
proj = i invu // N // Simplify
rej = GeometricProduct[o, invu] // N // Simplify
ux = GeometricProduct[u, x]
uxu = GeometricProduct[ux, invu] // N // Simplify

Out[1]= 3.6 e[1] + 1.8 e[2]

Out[2]= -0.6 e[1] + 1.2 e[2]

Out[3]= 18 + 6 e[1,2]

Out[4]= 4.2 e[1] + 0.6 e[2]

the results of which are plotted in fig. 1.21.
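The same computation can be cross checked with conventional vector algebra, reproducing the projection, rejection, and reflection values printed above. A Python/numpy sketch (an illustration, not part of the original text):

```python
import numpy as np

u = np.array([4.0, 2.0])
x = np.array([3.0, 3.0])
uhat = u / np.linalg.norm(u)

projection = np.dot(x, uhat) * uhat
rejection = x - projection
reflection = projection - rejection  # equivalently 2 (x . uhat) uhat - x

assert np.allclose(projection, [3.6, 1.8])   # matches Out[1]
assert np.allclose(rejection, [-0.6, 1.2])   # matches Out[2]
assert np.allclose(reflection, [4.2, 0.6])   # matches Out[4]
```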

1.20 linear systems.

Linear systems can be solved using the wedge product. Illustrating by example, consider the
following two variable equation in RN
ax + by = c. (1.135)

To solve for x simply wedge with b, and to solve for y wedge with a
(ax + by) ∧ b = c ∧ b
a ∧ (ax + by) = a ∧ c,     (1.136)


Figure 1.21: Reflection.

so, if the system has a solution, it is given by


x = (1/(a ∧ b)) c ∧ b     (1.137)
y = (1/(a ∧ b)) a ∧ c.
Higher order systems can be solved in the same fashion, but the equation to be solved must
be wedged more times.
For example, if the n variable system
a1 x1 + a2 x2 + · · · + an xn = b,     (1.138)

has a solution, it is given by


x1 = (1/(a1 ∧ a2 ∧ · · · ∧ an)) b ∧ a2 ∧ · · · ∧ an
x2 = (1/(a1 ∧ a2 ∧ · · · ∧ an)) a1 ∧ b ∧ · · · ∧ an     (1.139)
..
xn = (1/(a1 ∧ a2 ∧ · · · ∧ an)) a1 ∧ a2 ∧ · · · ∧ b.
If this system has no solution, then these n-vector ratios will not be scalars.
For the solution of xk , the numerator in each case is the denominator wedge product with
the ak replaced by the solution vector b. If this sounds like Cramer’s rule, that is because the
two are equivalent when the dimension of the vector equals the number of variables in the

linear system. For example, consider the solution for x1 of eq. (1.138) for an R3 system, with
a1 = u, a2 = v, a3 = w

x1 = (b ∧ v ∧ w)/(u ∧ v ∧ w)
   = (det[ b1 v1 w1 ; b2 v2 w2 ; b3 v3 w3 ] e1 e2 e3) / (det[ u1 v1 w1 ; u2 v2 w2 ; u3 v3 w3 ] e1 e2 e3),     (1.140)
which is exactly the ratio of determinants found in the Cramer’s rule solution of this problem.
We get Cramer’s rule for free due to the antisymmetric structure of the wedge product.
Cramer’s rule doesn’t apply to cases where the dimension of the space exceeds the number of
variables, but a wedge product solution does not have that restriction. As an example, consider
the two variable system eq. (1.135) for vectors in R4 as follows
     
1 1  1 
     
1 0  2 
a =   , b =   , c =   . (1.141)
0 0  0 
     
0 1 −1

Here’s a (Mathematica) computation of the wedge products for the solution 5

In[5]:= ClearAll[a, b, c, iab, aWedgeB, cWedgeB, aWedgeC, x, y]


a = e[1] + e[2];
b = e[1] + e[4];
c = e[1] + 2 e[2] - e[4];

aWedgeB = OuterProduct[a, b];


cWedgeB = OuterProduct[c, b];
aWedgeC = OuterProduct[a, c];

(* 1/aWedgeB *)
iab = aWedgeB / GeometricProduct[aWedgeB, aWedgeB];
x = GeometricProduct[iab, cWedgeB];
y = GeometricProduct[iab, aWedgeC];

{{"a ∧ b = ", aWedgeB}, {"c ∧ b = ", cWedgeB},
{"a ∧ c = ", aWedgeC}, {"x = ", x}, {"y = ", y}
} // Grid

5 Using the CliffordBasic.m geometric algebra module from https://github.com/jlaragonvera/Geometric-Algebra.

Out[5]= a ∧ b = -e[1,2] + e[1,4] + e[2,4]


c ∧ b = -2 e[1,2] + 2 e[1,4] + 2 e[2,4]
a ∧ c = e[1,2] - e[1,4] - e[2,4]
x = 2
y = -1

which shows that 2a − b = c.
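The same overdetermined R4 system can be cross checked with ordinary linear algebra; since an exact solution exists, a least squares solve recovers it with zero residual. A Python/numpy sketch (an illustration, not part of the original text):

```python
import numpy as np

a = np.array([1.0, 1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0, 1.0])
c = np.array([1.0, 2.0, 0.0, -1.0])

# solve x a + y b = c, a 4x2 system with more equations than unknowns
coeffs, *_ = np.linalg.lstsq(np.column_stack([a, b]), c, rcond=None)
x, y = coeffs

assert np.isclose(x, 2) and np.isclose(y, -1)
assert np.allclose(x * a + y * b, c)  # the solution is exact
```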

1.20.0.1 Example: intersection of two lines.

As a concrete example, let’s solve the intersection of two lines problem illustrated in fig. 1.22.

Figure 1.22: Intersection of two lines.

In parametric form, the lines in this problem are


r1 (s) = a0 + s(a1 − a0 )
(1.142)
r2 (t) = b0 + t(b1 − b0 ),
so the solution, if it exists, is found at the point satisfying the equality
a0 + s(a1 − a0 ) = b0 + t(b1 − b0 ). (1.143)

With
u1 = a1 − a0
u2 = b1 − b0 (1.144)
d = a0 − b0 ,

the desired equation to solve is


d + su1 = tu2 . (1.145)

As with any linear system, we can solve for s or t by wedging both sides with one of u1 or u2

d ∧ u1 = tu2 ∧ u1
(1.146)
d ∧ u2 + su1 ∧ u2 = 0.

In R2 these equations have a solution if u1 ∧ u2 ≠ 0, and in RN these have solutions if the bivectors on each side of the equations describe the same plane (i.e. the bivectors on each side of eq. (1.146) are related by a scalar factor). Put another way, these have solutions when s and t are scalars with the values
s = (u2 ∧ d)/(u1 ∧ u2)
t = (u1 ∧ d)/(u1 ∧ u2).     (1.147)

Exercise 1.18 Intersection of a line and plane.


Let a line be parameterized by

r(a) = p + aa,

and a plane be parameterized by

r(b, c) = q + bb + cc.

a. State the vector equation to be solved, and its solution for a in terms of a ratio of wedge
products.
b. State the conditions for which the solution exist in R3 and RN .
c. In terms of coordinates in R3 write out the ratio of wedge products as determinants and
compare to the Cramer’s rule solution.

1.21 a summary comparison.

Here is a summary that compares various GA expressions with close equivalents in traditional
vector (or complex) algebra.

Comparison of traditional and geometric algebra identities.


In each entry below the geometric algebra expression is listed first, followed by its traditional equivalent.

• Norm squared: x² ↔ x · x
• Complex imaginary: e12, e123, ··· ↔ i
• Complex number: x + e12 y ↔ x + iy
• Vector products: ab = ‖a‖‖b‖ exp(i_ab ∆θ), a · b = ‖a‖‖b‖ cos(∆θ), a ∧ b = i_ab ‖a‖‖b‖ sin(∆θ) ↔ a · b = ‖a‖‖b‖ cos(∆θ), a × b = n̂ ‖a‖‖b‖ sin(∆θ), (i_ab n̂ = e123)
• Wedge/cross product antisymmetry: a ∧ b = −b ∧ a ↔ a × b = −b × a
• Wedge/cross filtering: a ∧ (αa + b) = a ∧ b ↔ a × (αa + b) = a × b
• Wedge/cross coordinate expansion: a ∧ b = Σ_{i<j} det[ a_i a_j ; b_i b_j ] e_i e_j ↔ a × b = Σ_{i<j} det[ a_i a_j ; b_i b_j ] e_i × e_j
• Wedge/cross determinant expansion: a ∧ b = det[ e2e3 e3e1 e1e2 ; a1 a2 a3 ; b1 b2 b3 ] ↔ a × b = det[ e1 e2 e3 ; a1 a2 a3 ; b1 b2 b3 ]
• Vector-bivector dot product (triple cross product): a · (b ∧ c) ↔ (b × c) × a
• Bivector dot product (dot product of cross products): (a ∧ b) · (c ∧ d) ↔ −(a × b) · (c × d)
• Triple wedge (triple product): a ∧ b ∧ c = det[ a_i a_j a_k ; b_i b_j b_k ; c_i c_j c_k ] I ↔ a · (b × c) = det[ a_i a_j a_k ; b_i b_j b_k ; c_i c_j c_k ]
• Normal to vectors a, b: I(a ∧ b) ↔ a × b
• Plane normal to a vector n: (In) ∧ x = 0 ↔ n · x = 0
• Plane normal to vectors a, b: (a ∧ b) ∧ x = 0 ↔ (a × b) · x = 0
• Projection: (x · û)û ↔ (x · û)û
• Rejection: (x ∧ û)û ↔ (û × x) × û
• 2D rotation: x e^{iθ}, i = e12 ↔ the matrix [ cos θ  sin θ ; −sin θ  cos θ ] applied to (x, y)ᵀ
• 3D rotation in the plane of û, v̂, where û · v̂ = 0: e^{−ûv̂θ/2} x e^{ûv̂θ/2} ↔ (x · û)(û cos θ + v̂ sin θ) + (x · v̂)(v̂ cos θ − û sin θ) + (û × x) × û
• Reflection: ûxû ↔ (x · û)û + û × (û × x)

1.22 problem solutions.

Answer for Exercise 1.16


e12 + e23 + e31 = (e1 + e2 − 2e3) (e2 − e1)/2
                = ((e3 − e2)/2) (2e1 − e2 − e3).
The respective parallelogram representations of these bivector factorizations are illustrated
in fig. 1.17, showing the bivectors face on, and from a side view that shows they are coplanar.

Figure 1.17: Two equivalent bivector factorizations.
2 MULTIVECTOR CALCULUS.
2.1 reciprocal frames.

2.1.1 Motivation and definition.

The end goal of this chapter is to be able to integrate multivector functions along curves and
surfaces, known collectively as manifolds. For our purposes, a manifold is defined by a parame-
terization, such as the vector valued function x(a, b) where a, b are scalar parameters. With one
parameter the vector traces out a curve, with two a surface, three a volume, and so forth. The
respective partial derivatives of such a parameterized vector define a local basis for the surface
at the point at which the partials are evaluated. The span of such a basis is called the tangent
space, and the partials that constitute it are not necessarily orthonormal, or even normal.
Unfortunately, in order to work with the curvilinear non-orthonormal bases that will be en-
countered in general integration theory, some additional tools are required.

• We introduce a reciprocal frame which partially generalizes the notion of normal to non-
orthonormal bases.

• We will borrow the upper and lower index (tensor) notation from relativistic physics that
is useful for the intrinsically non-orthonormal spaces encountered in that study, as this
notation works well to define the reciprocal frame.

Definition 2.1: Reciprocal frame

Given a basis for a subspace {x_1, x_2, ···, x_n}, where the vectors x_i are not necessarily orthonormal, the reciprocal frame is defined as the set of vectors {x^1, x^2, ···, x^n} satisfying

x_i · x^j = δ_i^j,

where the vector x^j is not the j-th power of x, but is a superscript index, the conventional way of denoting a reciprocal frame vector, and δ_i^j is the Kronecker delta.

Note that any orthonormal basis is also its reciprocal basis.


The definition above introduces mixed index variables for the first time in this text, which
may be unfamiliar. These are most often used in tensor algebra, where any expression that


has pairs of upper and lower indexes implies a sum, and is called the summation (or Einstein)
convention. For example:
a^i b_i ≡ Σ_i a^i b_i
A_{ij} B^i C^j ≡ Σ_{i,j} A_{ij} B^i C^j.     (2.1)

Summation convention will not be used explicitly in this text, as it deviates from normal practice in electrical engineering¹.

2.1.1.1 Vector coordinates.

The most important application of a reciprocal frame is for the computation of the coordinates
of a vector with respect to a non-orthonormal frame. Let a vector a have coordinates ai with
respect to a basis {xi }
a = Σ_j a^j x_j,     (2.2)

where the j in a^j is an index, not a power².


Dotting with the reciprocal frame vectors x^i provides these coordinates a^i

a · x^i = (Σ_j a^j x_j) · x^i
        = Σ_j a^j δ_j^i     (2.3)
        = a^i.

The vector can also be expressed with coordinates taken with respect to the reciprocal frame.
Let those coordinates be ai , so that
a = Σ_i a_i x^i.     (2.4)

1 Generally, when summation convention is used, explicit summation is only used explicitly when upper and lower
indexes are not perfectly matched, but summation is still implied. Readers of texts that use summation convention
can check for proper matching of upper and lower indexes to ensure that the expressions make sense. Such matching
is the reason a mixed index Kronecker delta has been used in the definition of the reciprocal frame.
2 In tensor algebra, any index that is found in matched upper and lower index pairs, is known as a dummy summation
index, whereas an index that is unmatched is known as a free index. For example, in a j bi j (summation implied) j is
a summation index, and i is a free index. We are free to make a change of variables of any summation index, so for
the same example we can write ak bik . These index tracking conventions are obvious when summation symbols are
included explicitly, as we will do.

Dotting with the basis vectors x_i provides the reciprocal frame relative coordinates a_i

a · x_i = (Σ_j a_j x^j) · x_i
        = Σ_j a_j δ^j_i     (2.5)
        = a_i.

We can summarize eq. (2.3) and eq. (2.5) by stating that a vector can be expressed in terms
of coordinates relative to either the original or reciprocal basis as follows
a = Σ_j (a · x^j) x_j = Σ_j (a · x_j) x^j.     (2.6)
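Eq. (2.6) is easy to verify numerically: with the basis vectors stacked as rows of a matrix, the reciprocal frame vectors are the rows of the inverse transpose, and either coordinate expansion reconstructs the vector. A Python/numpy sketch with a hypothetical oblique basis (an illustration, not part of the original text):

```python
import numpy as np

# hypothetical oblique basis for R2, one basis vector x_i per row
X = np.array([[2.0, 0.0],
              [1.0, 1.0]])
R = np.linalg.inv(X).T  # rows are the reciprocal vectors x^1, x^2

# defining property: x^i . x_j = delta_ij
assert np.allclose(R @ X.T, np.eye(2))

a = np.array([4.0, 2.0])
recon1 = sum(np.dot(a, R[j]) * X[j] for j in range(2))  # sum_j (a . x^j) x_j
recon2 = sum(np.dot(a, X[j]) * R[j] for j in range(2))  # sum_j (a . x_j) x^j
assert np.allclose(recon1, a) and np.allclose(recon2, a)
```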

In tensor algebra the basis is generally implied3 .


An example of a 2D oblique Euclidean basis and a corresponding reciprocal basis is plotted in fig. 2.1. Also plotted are the superposition of the projections required to arrive at a given point (4, 2) along the e_1, e_2 directions or the e^1, e^2 directions. In this plot, neither of the reciprocal frame vectors e^i is normal to the corresponding basis vectors e_i. When one of e_i is increased (decreased) in magnitude, there will be a corresponding decrease (increase) in the magnitude of e^i, but if the orientation remains fixed, the corresponding direction of the reciprocal frame vector stays the same.

2.1.1.2 Bivector coordinates.

Higher grade multivector objects may also be represented in curvilinear coordinates. Illustrating
by example, we will calculate the coordinates of a bivector constrained to a three parameter
manifold span {x1 , x2 , x3 } which can be represented as

B = (1/2) Σ_{i,j} B^{ij} x_i ∧ x_j = Σ_{i<j} B^{ij} x_i ∧ x_j.     (2.7)

3 In tensor algebra, a vector identified by the coordinates \( a^i \) is called a contravariant vector. When that vector is identified by the coordinates \( a_i \) it is called a covariant vector. These labels relate to how the coordinates transform with respect to norm preserving transformations. We have no need of this nomenclature, since we never transform coordinates in isolation, but will always transform the coordinates along with their associated basis vectors.
68 multivector calculus.

Figure 2.1: Oblique and reciprocal bases.



The coordinates \( B^{ij} \) can be determined by dotting \( B \) with \( \mathbf{x}^j \wedge \mathbf{x}^i \), where \( i \ne j \), yielding
\begin{equation}
\begin{aligned}
B \cdot \left( \mathbf{x}^j \wedge \mathbf{x}^i \right)
&= \frac{1}{2} \sum_{r,s} B^{rs} \left( \mathbf{x}_r \wedge \mathbf{x}_s \right) \cdot \left( \mathbf{x}^j \wedge \mathbf{x}^i \right) \\
&= \frac{1}{2} \sum_{r,s} B^{rs} \left( \left( \mathbf{x}_r \wedge \mathbf{x}_s \right) \cdot \mathbf{x}^j \right) \cdot \mathbf{x}^i \\
&= \frac{1}{2} \sum_{r,s} B^{rs} \left( \mathbf{x}_r \delta_s{}^j - \mathbf{x}_s \delta_r{}^j \right) \cdot \mathbf{x}^i \\
&= \frac{1}{2} \sum_{r,s} B^{rs} \left( \delta_r{}^i \delta_s{}^j - \delta_s{}^i \delta_r{}^j \right) \\
&= \frac{1}{2} \left( B^{ij} - B^{ji} \right).
\end{aligned}
\tag{2.8}
\end{equation}
We see that the coordinates of a bivector, even with respect to a non-orthonormal basis, are antisymmetric, so eq. (2.8) is just \( B^{ij} \) as claimed. That is
\begin{equation}
B^{ij} = B \cdot \left( \mathbf{x}^j \wedge \mathbf{x}^i \right).
\tag{2.9}
\end{equation}

Just as the reciprocal frame was instrumental for computation of the coordinates of a vector with respect to an arbitrary (i.e. non-orthonormal) frame, we use the reciprocal frame to calculate the coordinates of a bivector, and could do the same for higher grade k-vectors as well.
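Eq. (2.9) also submits to a numerical spot check. The sketch below is plain Python (an editorial addition, not the book's Mathematica); it represents a bivector by the antisymmetric array of its \( \mathbf{e}_i \mathbf{e}_j \) components, uses the blade identity \( (\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) = (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) - (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) \) for the bivector dot product (which in this representation is \( -\frac{1}{2} \sum_{ij} A_{ij} C_{ij} \)), and builds the reciprocal frame with the equivalent \( \mathbb{R}^3 \) cross product construction:

```python
# Spot check of eq. (2.9): the bivector coordinates B^{ij} are recovered by
# dotting with x^j ^ x^i.  Plain Python sketch; the oblique basis and the
# chosen coordinates are arbitrary illustration values.
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
def wedge(u, v):   # antisymmetric array of e_i e_j components of u ^ v
    return [[u[i]*v[j] - u[j]*v[i] for j in range(3)] for i in range(3)]
def bdot(A, C):    # grade-0 (scalar) product of two bivectors
    return -0.5 * sum(A[i][j] * C[i][j] for i in range(3) for j in range(3))

x = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]   # oblique basis
vol = dot(x[0], cross(x[1], x[2]))
r = [[c / vol for c in cross(x[1], x[2])],
     [c / vol for c in cross(x[2], x[0])],
     [c / vol for c in cross(x[0], x[1])]]                # reciprocal frame

Bij = {(0, 1): 3.0, (0, 2): -2.0, (1, 2): 5.0}            # chosen B^{ij}, i < j
B = [[sum(c * wedge(x[i], x[j])[a][b] for (i, j), c in Bij.items())
      for b in range(3)] for a in range(3)]               # B = sum B^{ij} x_i ^ x_j
recovered = {k: bdot(B, wedge(r[k[1]], r[k[0]])) for k in Bij}
print(recovered)   # reproduces the chosen B^{ij} values
```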

2.1.2 R2 reciprocal frame.

How are the reciprocal frame vectors computed? While these vectors have a natural GA representation, this is not intrinsically a GA problem, and it can be solved with standard linear algebra, using a matrix inversion. For example, given a 2D basis \( \{\mathbf{x}_1, \mathbf{x}_2\} \), the reciprocal basis can be assumed to have a coordinate representation in the original basis
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= a \mathbf{x}_1 + b \mathbf{x}_2 \\
\mathbf{x}^2 &= c \mathbf{x}_1 + d \mathbf{x}_2.
\end{aligned}
\tag{2.10}
\end{equation}

Imposing the constraints of definition 2.1 leads to a pair of 2x2 linear systems that are easily solved to find
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= \frac{1}{ (\mathbf{x}_1)^2 (\mathbf{x}_2)^2 - (\mathbf{x}_1 \cdot \mathbf{x}_2)^2 } \left( (\mathbf{x}_2)^2 \mathbf{x}_1 - (\mathbf{x}_1 \cdot \mathbf{x}_2) \mathbf{x}_2 \right) \\
\mathbf{x}^2 &= \frac{1}{ (\mathbf{x}_1)^2 (\mathbf{x}_2)^2 - (\mathbf{x}_1 \cdot \mathbf{x}_2)^2 } \left( (\mathbf{x}_1)^2 \mathbf{x}_2 - (\mathbf{x}_1 \cdot \mathbf{x}_2) \mathbf{x}_1 \right).
\end{aligned}
\tag{2.11}
\end{equation}
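A quick numerical sanity check of these expressions, applied to the oblique basis of fig. 2.1 (plain Python sketch, an editorial addition):

```python
# Reciprocal frame for a 2D basis via eq. (2.11).  Plain Python sketch.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reciprocal2d(x1, x2):
    denom = dot(x1, x1) * dot(x2, x2) - dot(x1, x2) ** 2
    r1 = [(dot(x2, x2) * a - dot(x1, x2) * b) / denom for a, b in zip(x1, x2)]
    r2 = [(dot(x1, x1) * b - dot(x1, x2) * a) / denom for a, b in zip(x1, x2)]
    return r1, r2

x1, x2 = [1.0, 1.0], [1.0, 2.0]     # the fig. 2.1 basis
r1, r2 = reciprocal2d(x1, x2)
# defining property: x^i . x_j = delta^i_j
print(r1, r2, dot(r1, x1), dot(r1, x2), dot(r2, x1), dot(r2, x2))
```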

The reader may notice that for \( \mathbb{R}^3 \) the denominator is related to the norm of the cross product \( \mathbf{x}_1 \times \mathbf{x}_2 \). More generally, this can be expressed as the square of the bivector \( \mathbf{x}_1 \wedge \mathbf{x}_2 \)
\begin{equation}
\begin{aligned}
- (\mathbf{x}_1 \wedge \mathbf{x}_2)^2
&= - (\mathbf{x}_1 \wedge \mathbf{x}_2) \cdot (\mathbf{x}_1 \wedge \mathbf{x}_2) \\
&= - \left( (\mathbf{x}_1 \wedge \mathbf{x}_2) \cdot \mathbf{x}_1 \right) \cdot \mathbf{x}_2 \\
&= (\mathbf{x}_1)^2 (\mathbf{x}_2)^2 - (\mathbf{x}_1 \cdot \mathbf{x}_2)^2.
\end{aligned}
\tag{2.12}
\end{equation}

Additionally, the numerators are each dot products of \( \mathbf{x}_1, \mathbf{x}_2 \) with that same bivector
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= \frac{ \mathbf{x}_2 \cdot (\mathbf{x}_1 \wedge \mathbf{x}_2) }{ (\mathbf{x}_1 \wedge \mathbf{x}_2)^2 } \\
\mathbf{x}^2 &= \frac{ \mathbf{x}_1 \cdot (\mathbf{x}_2 \wedge \mathbf{x}_1) }{ (\mathbf{x}_1 \wedge \mathbf{x}_2)^2 },
\end{aligned}
\tag{2.13}
\end{equation}
or
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= \mathbf{x}_2 \cdot \frac{1}{ \mathbf{x}_1 \wedge \mathbf{x}_2 } \\
\mathbf{x}^2 &= \mathbf{x}_1 \cdot \frac{1}{ \mathbf{x}_2 \wedge \mathbf{x}_1 }.
\end{aligned}
\tag{2.14}
\end{equation}

Recall that dotting with the unit bivector of a plane (or its inverse) rotates a vector in that plane by \( \pi/2 \). In a planar subspace, such a rotation is exactly the transformation that ensures \( \mathbf{x}^1 \cdot \mathbf{x}_2 = \mathbf{x}^2 \cdot \mathbf{x}_1 = 0 \). This shows that the reciprocal frame for the basis of a two dimensional subspace is found by a duality transformation of each of the curvilinear coordinates, plus a subsequent scaling operation. As \( \mathbf{x}_1 \wedge \mathbf{x}_2 \), the pseudoscalar for the subspace spanned by \( \{\mathbf{x}_1, \mathbf{x}_2\} \), is not generally a unit bivector, the dot product with its inverse also has a scaling effect.

Numerical example: Here is a Mathematica calculation of the reciprocal frame depicted in fig. 2.1

In[6]:= ClearAll[x1, x2, inverse]
        x1 = e[1] + e[2]; x2 = e[1] + 2 e[2];
        x12 = OuterProduct[x1, x2];
        inverse[a_] := a / GeometricProduct[a, a];
        x12inverse = inverse[x12];
        s1 = InnerProduct[x2, x12inverse];
        s2 = InnerProduct[x1, -x12inverse];
        s1
        s2
        dots[a_, b_] := {a, "·", b, " = ",
           InnerProduct[a // ReleaseHold, b // ReleaseHold]};
        MapThread[dots, {{x1 // HoldForm, x2 // HoldForm,
            x1 // HoldForm, x2 // HoldForm},
           {s1 // HoldForm, s1 // HoldForm,
            s2 // HoldForm, s2 // HoldForm}}] // Grid

Out[6]= 2 e[1] - e[2]

Out[7]= -e[1] + e[2]

Out[8]= x1 · s1 = 1
        x2 · s1 = 0
        x1 · s2 = 0
        x2 · s2 = 1

This shows the reciprocal vector calculations using eq. (2.14), and that the defining property \( \mathbf{x}^i \cdot \mathbf{x}_j = \delta^i{}_j \) of the reciprocal frame vectors is satisfied.

Example: \( \mathbb{R}^2 \). Given a pair of arbitrarily oriented vectors in \( \mathbb{R}^2 \), \( \mathbf{x}_1 = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 \), \( \mathbf{x}_2 = b_1 \mathbf{e}_1 + b_2 \mathbf{e}_2 \), the pseudoscalar associated with the basis \( \{\mathbf{x}_1, \mathbf{x}_2\} \) is
\begin{equation}
\mathbf{x}_1 \wedge \mathbf{x}_2
= (a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2) \wedge (b_1 \mathbf{e}_1 + b_2 \mathbf{e}_2)
= (a_1 b_2 - a_2 b_1) \mathbf{e}_{12}.
\tag{2.15}
\end{equation}
The inverse of this pseudoscalar is
\begin{equation}
\frac{1}{\mathbf{x}_1 \wedge \mathbf{x}_2} = \frac{\mathbf{e}_{21}}{a_1 b_2 - a_2 b_1}.
\tag{2.16}
\end{equation}
So for this fixed oblique \( \mathbb{R}^2 \) basis, the reciprocal frame is just
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= \mathbf{x}_2 \cdot \frac{\mathbf{e}_{21}}{a_1 b_2 - a_2 b_1} \\
\mathbf{x}^2 &= \mathbf{x}_1 \cdot \frac{\mathbf{e}_{12}}{a_1 b_2 - a_2 b_1}.
\end{aligned}
\tag{2.17}
\end{equation}

The vector \( \mathbf{x}^1 \) is obtained by rotating \( \mathbf{x}_2 \) by \( -\pi/2 \), and scaling it by the inverse of the area of the parallelogram spanned by \( \mathbf{x}_1, \mathbf{x}_2 \). The vector \( \mathbf{x}^2 \) is obtained with the same scaling, plus a rotation of \( \mathbf{x}_1 \) by \( \pi/2 \).
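This rotate-and-rescale description can be verified directly. A plain Python sketch (editorial addition) for the fig. 2.1 basis, where \( a_1, a_2 = 1, 1 \) and \( b_1, b_2 = 1, 2 \):

```python
# The duality view of eq. (2.17): x^1 is x_2 rotated by -pi/2 and divided by
# the signed area a1 b2 - a2 b1; x^2 is x_1 rotated by +pi/2, same scaling.

def rot90(v, sign):               # rotate a 2D vector by sign * pi/2
    return [-sign * v[1], sign * v[0]]

x1, x2 = [1.0, 1.0], [1.0, 2.0]            # a1, a2 = 1, 1 ; b1, b2 = 1, 2
area = x1[0] * x2[1] - x1[1] * x2[0]       # a1 b2 - a2 b1 = 1
r1 = [c / area for c in rot90(x2, -1)]
r2 = [c / area for c in rot90(x1, +1)]
print(r1, r2)    # [2.0, -1.0] [-1.0, 1.0], matching the Mathematica output
```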

2.1.3 R3 reciprocal frame.

In this section we generalize eq. (2.14) to R3 vectors, which will illustrate the general case by
example.

Given a subspace spanned by a three vector basis \( \{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3\} \), the reciprocal frame vectors can be written as dot products
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= (\mathbf{x}_2 \wedge \mathbf{x}_3) \cdot \left( \mathbf{x}^3 \wedge \mathbf{x}^2 \wedge \mathbf{x}^1 \right) \\
\mathbf{x}^2 &= (\mathbf{x}_3 \wedge \mathbf{x}_1) \cdot \left( \mathbf{x}^1 \wedge \mathbf{x}^3 \wedge \mathbf{x}^2 \right) \\
\mathbf{x}^3 &= (\mathbf{x}_1 \wedge \mathbf{x}_2) \cdot \left( \mathbf{x}^2 \wedge \mathbf{x}^1 \wedge \mathbf{x}^3 \right).
\end{aligned}
\tag{2.18}
\end{equation}

Each of those trivector terms equals \( -\mathbf{x}^1 \wedge \mathbf{x}^2 \wedge \mathbf{x}^3 \), and can be related to the (known) pseudoscalar \( \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 \) by observing that
\begin{equation}
\begin{aligned}
\left( \mathbf{x}^1 \wedge \mathbf{x}^2 \wedge \mathbf{x}^3 \right) \cdot (\mathbf{x}_3 \wedge \mathbf{x}_2 \wedge \mathbf{x}_1)
&= \mathbf{x}^1 \cdot \left( \mathbf{x}^2 \cdot \left( \mathbf{x}^3 \cdot (\mathbf{x}_3 \wedge \mathbf{x}_2 \wedge \mathbf{x}_1) \right) \right) \\
&= \mathbf{x}^1 \cdot \left( \mathbf{x}^2 \cdot (\mathbf{x}_2 \wedge \mathbf{x}_1) \right) \\
&= \mathbf{x}^1 \cdot \mathbf{x}_1 \\
&= 1,
\end{aligned}
\tag{2.19}
\end{equation}
which means that
\begin{equation}
-\mathbf{x}^1 \wedge \mathbf{x}^2 \wedge \mathbf{x}^3
= -\frac{1}{\mathbf{x}_3 \wedge \mathbf{x}_2 \wedge \mathbf{x}_1}
= \frac{1}{\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3},
\tag{2.20}
\end{equation}
and
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= (\mathbf{x}_2 \wedge \mathbf{x}_3) \cdot \frac{1}{\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3} \\
\mathbf{x}^2 &= (\mathbf{x}_3 \wedge \mathbf{x}_1) \cdot \frac{1}{\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3} \\
\mathbf{x}^3 &= (\mathbf{x}_1 \wedge \mathbf{x}_2) \cdot \frac{1}{\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3}.
\end{aligned}
\tag{2.21}
\end{equation}

Geometrically, dotting with this trivector is a duality transformation within the subspace spanned by the three vectors \( \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \), which also scales the result so that the \( \mathbf{x}^i \cdot \mathbf{x}_j = \delta^i{}_j \) condition is satisfied. The scaling factor is determined by the volume of the parallelepiped spanned by \( \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \).
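For \( \mathbb{R}^3 \) the dot products of eq. (2.21) are equivalent to the familiar reciprocal vector construction \( \mathbf{x}^1 = (\mathbf{x}_2 \times \mathbf{x}_3)/(\mathbf{x}_1 \cdot (\mathbf{x}_2 \times \mathbf{x}_3)) \), and cyclic permutations. A plain Python sketch of that computation (editorial addition; the basis chosen here is arbitrary):

```python
# Reciprocal frame in R^3 via the cross product form equivalent to eq. (2.21).

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reciprocal3d(x1, x2, x3):
    vol = dot(x1, cross(x2, x3))          # volume of the parallelepiped
    return ([c / vol for c in cross(x2, x3)],
            [c / vol for c in cross(x3, x1)],
            [c / vol for c in cross(x1, x2)])

basis = ([1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0])
recip = reciprocal3d(*basis)
# check the defining property x^i . x_j = delta^i_j
deltas = [[round(dot(r, x), 12) for x in basis] for r in recip]
print(deltas)    # identity matrix
```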

2.1.4 Problems.

Exercise 2.1 Reciprocal frame for two dimensional subspace.
Prove eq. (2.11).

Exercise 2.2 Two vector reciprocal frame.
Calculate the reciprocal frame for the \( \mathbb{R}^3 \) subspace spanned by \( \{\mathbf{x}_1, \mathbf{x}_2\} \), where
\begin{equation}
\begin{aligned}
\mathbf{x}_1 &= \mathbf{e}_1 + 2 \mathbf{e}_2 \\
\mathbf{x}_2 &= \mathbf{e}_2 - \mathbf{e}_3.
\end{aligned}
\tag{2.25}
\end{equation}

2.2 curvilinear coordinates.

2.2.1 Two parameters.

Curvilinear coordinates can be defined for any subspace spanned by a parameterized vector into that space. As an example, consider a two parameter planar subspace parameterized by the following continuous vector function
\begin{equation}
\mathbf{x}(u_1, u_2) = u_1 \frac{\sqrt{3}}{2} \mathbf{e}_1 \cosh\left( \operatorname{atanh}(1/2) + \mathbf{e}_{12} u_2 \right),
\tag{2.31}
\end{equation}
where \( u_1 \in [0, 1] \) and \( u_2 \in [0, \pi/2] \). This parameterization spans the first quadrant of the ellipse with semi-major axis length 1, and semi-minor axis length 1/2.\(^4\) Contours for this parameterization are plotted in fig. 2.2. The radial contours are for fixed values of \( u_2 \), and the elliptical contours fix the value of \( u_1 \), depicting a set of elliptic curves with a semi-minor to semi-major axis ratio of 1/2.
We define a curvilinear basis associated with each point in the region by the partials
\begin{equation}
\begin{aligned}
\mathbf{x}_1 &= \frac{\partial \mathbf{x}}{\partial u_1} \\
\mathbf{x}_2 &= \frac{\partial \mathbf{x}}{\partial u_2}.
\end{aligned}
\tag{2.32}
\end{equation}
For the function eq. (2.31) our curvilinear basis elements are
\begin{equation}
\begin{aligned}
\mathbf{x}_1 &= \frac{\sqrt{3}}{2} \mathbf{e}_1 \cosh\left( \operatorname{atanh}(1/2) + \mathbf{e}_{12} u_2 \right) \\
\mathbf{x}_2 &= u_1 \frac{\sqrt{3}}{2} \mathbf{e}_2 \sinh\left( \operatorname{atanh}(1/2) + \mathbf{e}_{12} u_2 \right).
\end{aligned}
\tag{2.33}
\end{equation}
4 A parameterization of an elliptic area may or may not be of much use in electrodynamics. It does, however, provide a fairly simple but non-trivial example of a non-orthonormal parameterization.

Figure 2.2: Contours for an elliptical region.

We form vector valued differentials for each parameter
\begin{equation}
\begin{aligned}
d\mathbf{x}_1 &= \mathbf{x}_1 \, du_1 \\
d\mathbf{x}_2 &= \mathbf{x}_2 \, du_2.
\end{aligned}
\tag{2.34}
\end{equation}
For eq. (2.31), the values of these differentials \( d\mathbf{x}_1, d\mathbf{x}_2 \) with \( du_1 = du_2 = 0.1 \) are plotted in fig. 2.3 for the points \( (u_1, u_2) = (0.7, 5\pi/20), (0.9, 3\pi/20), (1.0, 5\pi/20) \) in (dark-thick) red, blue and purple respectively.
Figure 2.3: Differentials for an elliptical parameterization.

In this case and in general there is no reason to presume that there is any orthonormality
constraint on the basis {x1 , x2 } for a given two parameter subspace.

Should we wish to calculate the reciprocal frame for eq. (2.31), we would find (exercise 2.3) that
\begin{equation}
\begin{aligned}
\mathbf{x}^1 &= \sqrt{3} \mathbf{e}_1 \sinh\left( \operatorname{atanh}(1/2) + \mathbf{e}_{12} u_2 \right) \\
\mathbf{x}^2 &= \frac{\sqrt{3}}{u_1} \mathbf{e}_2 \cosh\left( \operatorname{atanh}(1/2) + \mathbf{e}_{12} u_2 \right).
\end{aligned}
\tag{2.35}
\end{equation}
These are plotted (scaled by \( da = 0.1 \) so that they fit nicely in the image) in fig. 2.3 using thin light arrows.
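These results can be spot checked numerically. In coordinates, eq. (2.31) reduces to \( \mathbf{x}(u_1, u_2) = u_1 \left( \cos u_2, \frac{1}{2} \sin u_2 \right) \), so a plain Python sketch (editorial addition) can differentiate the parameterization by central differences and build the reciprocal frame with the duality construction of eq. (2.14):

```python
# Numerical spot check of the elliptic curvilinear basis (eq. (2.33)) and its
# reciprocal frame (eq. (2.35)), using the coordinate form of eq. (2.31).
import math

def x(u1, u2):
    return [u1 * math.cos(u2), 0.5 * u1 * math.sin(u2)]

def partial(f, u1, u2, which, h=1e-6):
    if which == 1:
        a, b = f(u1 + h, u2), f(u1 - h, u2)
    else:
        a, b = f(u1, u2 + h), f(u1, u2 - h)
    return [(p - q) / (2 * h) for p, q in zip(a, b)]

u1, u2 = 0.7, 0.9
x1 = partial(x, u1, u2, 1)
x2 = partial(x, u1, u2, 2)

# reciprocal frame via the 2D duality construction (rotate and rescale)
area = x1[0] * x2[1] - x1[1] * x2[0]
r1 = [x2[1] / area, -x2[0] / area]
r2 = [-x1[1] / area, x1[0] / area]

dots = [r1[0]*x1[0] + r1[1]*x1[1], r1[0]*x2[0] + r1[1]*x2[1],
        r2[0]*x1[0] + r2[1]*x1[1], r2[0]*x2[0] + r2[1]*x2[1]]
print([round(d, 6) + 0.0 for d in dots])   # [1.0, 0.0, 0.0, 1.0]
```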
When evaluating surface integrals, we will form oriented (bivector) area elements from the wedge product of the differentials
\begin{equation}
d^2 \mathbf{x} \equiv d\mathbf{x}_1 \wedge d\mathbf{x}_2.
\tag{2.36}
\end{equation}
The absolute value of this area element, \( \sqrt{ -(d^2 \mathbf{x})^2 } \), is the area of the parallelogram spanned by \( d\mathbf{x}_1, d\mathbf{x}_2 \). In this example, all such area elements lie in the x-y plane, but that need not be the case. Also note that we will only perform integrals for those parameterizations for which the area element \( d^2 \mathbf{x} \) is non-zero.
Exercise 2.3 Elliptic curvilinear and reciprocal basis.
From eq. (2.31), compute the curvilinear coordinates eq. (2.33), and the reciprocal frame vectors eq. (2.35). Check using scalar grade selection that \( \mathbf{x}^i \cdot \mathbf{x}_j = \delta^i{}_j \). Hints: Given \( \mu = \operatorname{atanh}(1/2) \),

• \( \cosh(\mu + i\theta) \mathbf{e}_2 = \mathbf{e}_2 \cosh(\mu - i\theta) \).

• \( \operatorname{Re}\left( \cosh(\mu - i\theta) \sinh(\mu + i\theta) \right) = 2/3 \).

fixme: don’t introduce the idea of tangent space until a 3D example. Remove the R3 reference
above, and keep this first example planar.
At the point of evaluation, the span of these differentials is called the tangent space. In this particular case the tangent space at all points in the region is the entire x-y plane, and the partials \( \mathbf{x}_1, \mathbf{x}_2 \) locally span the tangent space at each point of the surface.

2.2.1.1 Curved two parameter surfaces.

Continuing to illustrate by example, let's now consider a non-planar two parameter surface
\begin{equation}
\mathbf{x}(u_1, u_2) = (u_1 - u_2)^2 \mathbf{e}_1 + \left( 1 - (u_2)^2 \right) \mathbf{e}_2 + u_1 u_2 \mathbf{e}_3.
\tag{2.37}
\end{equation}
The curvilinear basis elements, and the area element, are
\begin{equation}
\begin{aligned}
\mathbf{x}_1 &= 2 (u_1 - u_2) \mathbf{e}_1 + u_2 \mathbf{e}_3 \\
\mathbf{x}_2 &= 2 (u_2 - u_1) \mathbf{e}_1 - 2 u_2 \mathbf{e}_2 + u_1 \mathbf{e}_3 \\
\mathbf{x}_1 \wedge \mathbf{x}_2 &= -4 u_2 (u_1 - u_2) \mathbf{e}_{12} + 2 (u_2)^2 \mathbf{e}_{23} + \left( 2 (u_2)^2 - 2 (u_1)^2 \right) \mathbf{e}_{31}.
\end{aligned}
\tag{2.38}
\end{equation}
Two examples of these vectors and the associated area elements (rescaled to fit) are plotted in fig. 2.4, evaluated at \( (u_1, u_2) = (0.5, 0.5) \) and \( (0.35, 0.75) \). The plane spanned by the basis vectors at each point is called the tangent space at that point.

Figure 2.4: Two parameter manifold.

The results of eq. (2.38) can be calculated easily by hand for this particular parameterization, but they also submit to symbolic calculation software. Here's a complete example using CliffordBasic.
In[9]:= << CliffordBasic`;
        $SetSignature = {3, 0};

In[10]:= ClearAll[xp, x, x1, x2]
        (* Use dummy parameter values for the derivatives,
           and then switch them to function parameter values. *)
        xp := (a - b)^2 e[1] + (1 - b^2) e[2] + b a e[3];
        x[u_, v_] := xp /. {a → u, b → v};
        x1[u_, v_] := D[xp, a] /. {a → u, b → v};
        x2[u_, v_] := D[xp, b] /. {a → u, b → v};
        x1[u, v]
        x2[u, v]
        OuterProduct[x1[u, v], x2[u, v]] // GFactor

Out[10]= 2 (u - v) e[1] + v e[3]

Out[11]= -2 (u - v) e[1] - 2 v e[2] + u e[3]

Out[12]= (-4 u v + 4 v^2) e[1,2] + (2 u^2 - 2 v^2) e[1,3] + 2 v^2 e[2,3]

2.2.2 Three (or more) parameters.

We can extend the previous two parameter subspace ideas to higher dimensional (or one dimensional) subspaces associated with a parameterization.

Definition 2.2: Curvilinear coordinates and volume element

Given a parameterization \( \mathbf{x}(u_1, u_2, \cdots, u_k) \) with k degrees of freedom, we define the curvilinear basis elements \( \mathbf{x}_i \) by the partials
\begin{equation*}
\mathbf{x}_i = \frac{\partial \mathbf{x}}{\partial u_i}.
\end{equation*}
The span of \( \{\mathbf{x}_i\} \) at the point of evaluation is called the tangent space. A subspace associated with a parameterization of this sort is also called a manifold. The volume element for the subspace is
\begin{equation*}
d^k \mathbf{x} = du_1 du_2 \cdots du_k \, \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \cdots \wedge \mathbf{x}_k.
\end{equation*}
Such a volume element is a k-vector. The volume of the hyper-parallelepiped bounded by \( \{\mathbf{x}_i\} \) is \( \sqrt{ \left| (d^k \mathbf{x})^2 \right| } \).

We will assume that the parameterization is non-degenerate. This means that the volume element \( d^k \mathbf{x} \) is non-zero in the region of interest. Note that a zero volume element implies a linear dependency in the curvilinear basis elements \( \mathbf{x}_i \).

Given a parameterization \( \mathbf{x} = \mathbf{x}(u, v, \cdots, w) \), we may also write \( \mathbf{x}_u, \mathbf{x}_v, \cdots, \mathbf{x}_w \) for the curvilinear basis elements, and \( \mathbf{x}^u, \mathbf{x}^v, \cdots, \mathbf{x}^w \) for the reciprocal frame. When doing so, sums over numeric indexes like \( \sum_i \mathbf{x}^i \mathbf{x}_i \) should be interpreted as sums over all the parameter labels, i.e. \( \mathbf{x}^u \mathbf{x}_u + \mathbf{x}^v \mathbf{x}_v + \cdots \).

2.2.3 Gradient.

With the introduction of the ideas of reciprocal frame and curvilinear coordinates, we are getting closer to being able to formulate the geometric algebra generalizations of vector calculus. The next step in the required mathematical preliminaries for geometric calculus is to determine the form of the gradient with respect to curvilinear coordinates and the parameters associated with those coordinates.

Suppose we have a vector parameterization of \( \mathbb{R}^N \)
\begin{equation}
\mathbf{x} = \mathbf{x}(u_1, u_2, \cdots, u_N).
\tag{2.39}
\end{equation}
We can employ the chain rule to express the gradient in terms of derivatives with respect to \( u_i \)
\begin{equation}
\begin{aligned}
\boldsymbol{\nabla}
&= \sum_i \mathbf{e}_i \frac{\partial}{\partial x_i} \\
&= \sum_{i,j} \mathbf{e}_i \frac{\partial u_j}{\partial x_i} \frac{\partial}{\partial u_j} \\
&= \sum_j \left( \sum_i \mathbf{e}_i \frac{\partial u_j}{\partial x_i} \right) \frac{\partial}{\partial u_j} \\
&= \sum_j \left( \boldsymbol{\nabla} u_j \right) \frac{\partial}{\partial u_j}.
\end{aligned}
\tag{2.40}
\end{equation}

It turns out that the gradients of the parameters are in fact the reciprocal frame vectors

Theorem 2.1: Reciprocal frame vectors

Given a curvilinear basis with elements \( \mathbf{x}_i = \partial \mathbf{x}/\partial u_i \), the reciprocal frame vectors are given by
\begin{equation*}
\mathbf{x}^i = \boldsymbol{\nabla} u_i.
\end{equation*}

This can be proven by direct computation
\begin{equation}
\begin{aligned}
\mathbf{x}^i \cdot \mathbf{x}_j
&= (\boldsymbol{\nabla} u_i) \cdot \frac{\partial \mathbf{x}}{\partial u_j} \\
&= \sum_{r,s=1}^n \left( \mathbf{e}_r \frac{\partial u_i}{\partial x_r} \right) \cdot \left( \mathbf{e}_s \frac{\partial x_s}{\partial u_j} \right) \\
&= \sum_{r,s=1}^n (\mathbf{e}_r \cdot \mathbf{e}_s) \frac{\partial u_i}{\partial x_r} \frac{\partial x_s}{\partial u_j} \\
&= \sum_{r,s=1}^n \delta_{rs} \frac{\partial u_i}{\partial x_r} \frac{\partial x_s}{\partial u_j} \\
&= \sum_{r=1}^n \frac{\partial u_i}{\partial x_r} \frac{\partial x_r}{\partial u_j} \\
&= \frac{\partial u_i}{\partial u_j} \\
&= \delta_{ij}.
\end{aligned}
\tag{2.41}
\end{equation}
This shows that \( \mathbf{x}^i = \boldsymbol{\nabla} u_i \) has the properties required of the reciprocal frame, proving the theorem. We are now able to define the gradient with respect to an arbitrary set of parameters

Theorem 2.2: Curvilinear representation of the gradient

Given an N-parameter vector parameterization \( \mathbf{x} = \mathbf{x}(u_1, u_2, \cdots, u_N) \) of \( \mathbb{R}^N \), with curvilinear basis elements \( \mathbf{x}_i = \partial \mathbf{x}/\partial u_i \), the gradient can be expressed as
\begin{equation*}
\boldsymbol{\nabla} = \sum_i \mathbf{x}^i \frac{\partial}{\partial u_i}.
\end{equation*}
It is often convenient to define \( \partial_i \equiv \partial/\partial u_i \), so that the gradient can be expressed in mixed index representation
\begin{equation*}
\boldsymbol{\nabla} = \sum_i \mathbf{x}^i \partial_i.
\end{equation*}

2.2.4 Vector derivative.

Given curvilinear coordinates defined on a subspace (definition 2.2), we don't have enough parameters to define the gradient. For calculus on the k-dimensional subspace, we define the vector derivative

Definition 2.3: Vector derivative

Given a k-parameter vector parameterization \( \mathbf{x} = \mathbf{x}(u_1, u_2, \cdots, u_k) \) of \( \mathbb{R}^N \) with \( k \le N \), and curvilinear basis elements \( \mathbf{x}_i = \partial \mathbf{x}/\partial u_i \), the vector derivative \( \boldsymbol{\partial} \) is defined as
\begin{equation*}
\boldsymbol{\partial} = \sum_{i=1}^k \mathbf{x}^i \partial_i.
\end{equation*}

When the dimension of the subspace (number of parameters) equals the dimension of the
underlying vector space, the vector derivative equals the gradient. Otherwise we can write
∇ = ∂ + ∇⊥ , (2.42)

and can think of the vector derivative as the projection of the gradient onto the tangent space at
the point of evaluation.
Please see [18] for an excellent introduction to the reciprocal frame, the gradient, and the vector derivative, and for details about the connectivity of the manifold ignored here.

2.2.5 Examples.

We’ve just blasted through a few abstract ideas:

• The curvilinear representation of the gradient.

• The gradient representation of the reciprocal frame.

• The vector derivative.

This completes the mathematical preliminaries required to formulate geometric calculus, the
multivector generalization of line, surface, and volume integrals. Before diving into the calculus
let’s consider some example parameterizations to illustrate how some of the new ideas above fit
together.

2.2.5.1 Example parameterization: Polar coordinates.

We will now consider a simple concrete example of a vector parameterization, that of polar
coordinates in R2
x(ρ, φ) = ρe1 exp (e12 φ) , (2.43)

as illustrated in fig. 2.5.

Using this example we will

Figure 2.5: Polar coordinates.

• Compute the curvilinear coordinates. We will refer to these as xρ , xφ , instead of x1 , x2 .

• Find the squared lengths of \( \mathbf{x}_\rho, \mathbf{x}_\phi \), and show that they are perpendicular (but not orthonormal).

• Perform a first bivector valued integral.

• Compute the reciprocal frame vectors with geometric arguments.

• Compute the reciprocal frame explicitly from the gradients of the coordinates.

• Find the polar form of the gradient with respect to this parameterization.

Curvilinear coordinates. The curvilinear coordinate basis can be computed directly
\begin{equation}
\mathbf{x}_\rho = \frac{\partial}{\partial \rho} \rho \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right)
= \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right)
\tag{2.44a}
\end{equation}
\begin{equation}
\mathbf{x}_\phi = \frac{\partial}{\partial \phi} \rho \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right)
= \rho \mathbf{e}_1 \mathbf{e}_{12} \exp\left( \mathbf{e}_{12} \phi \right)
= \rho \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right).
\tag{2.44b}
\end{equation}

Normality. To show that these vectors are perpendicular, we can select the scalar grade of their product, and use theorem 1.11, property (c), to swap the vector and complex exponential, conjugating the exponential
\begin{equation}
\begin{aligned}
\mathbf{x}_\rho \cdot \mathbf{x}_\phi
&= \left\langle \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \rho \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right) \right\rangle \\
&= \rho \left\langle \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \exp\left( -\mathbf{e}_{12} \phi \right) \mathbf{e}_2 \right\rangle \\
&= \rho \left\langle \mathbf{e}_1 \mathbf{e}_2 \right\rangle \\
&= 0.
\end{aligned}
\tag{2.45}
\end{equation}

Length of basis elements. We can use the same method to find the (squared) lengths of the vectors
\begin{equation}
\begin{aligned}
\mathbf{x}_\rho^2
&= \left\langle \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \right\rangle \\
&= \left\langle \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \exp\left( -\mathbf{e}_{12} \phi \right) \mathbf{e}_1 \right\rangle \\
&= \left\langle \mathbf{e}_1^2 \right\rangle \\
&= 1,
\end{aligned}
\tag{2.46}
\end{equation}
and
\begin{equation}
\begin{aligned}
\mathbf{x}_\phi^2
&= \left\langle \rho \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right) \rho \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right) \right\rangle \\
&= \rho^2 \left\langle \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right) \exp\left( -\mathbf{e}_{12} \phi \right) \mathbf{e}_2 \right\rangle \\
&= \rho^2 \left\langle \mathbf{e}_2^2 \right\rangle \\
&= \rho^2.
\end{aligned}
\tag{2.47}
\end{equation}

A bivector integral. One of our goals is to understand the multivector generalization of Stokes' theorem and the divergence theorem, but even before that, we can evaluate some simple multivector integrals. In particular, we can calculate the (oriented) area of a circle, given a bivector representation of the area element.

\begin{equation}
\begin{aligned}
\int_{\rho=0}^r \int_{\phi=0}^{2\pi} d\mathbf{x}_\rho \wedge d\mathbf{x}_\phi
&= \int_{\rho=0}^r \int_{\phi=0}^{2\pi} d\rho d\phi \, \mathbf{x}_\rho \wedge \mathbf{x}_\phi \\
&= \int_{\rho=0}^r \int_{\phi=0}^{2\pi} d\rho d\phi \left\langle \mathbf{x}_\rho \mathbf{x}_\phi \right\rangle_2 \\
&= \int_{\rho=0}^r \int_{\phi=0}^{2\pi} d\rho d\phi \left\langle \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \rho \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right) \right\rangle_2 \\
&= \int_{\rho=0}^r \int_{\phi=0}^{2\pi} \rho d\rho d\phi \left\langle \mathbf{e}_1 \mathbf{e}_2 \exp\left( -\mathbf{e}_{12} \phi \right) \exp\left( \mathbf{e}_{12} \phi \right) \right\rangle_2 \\
&= \int_{\rho=0}^r \int_{\phi=0}^{2\pi} \rho d\rho d\phi \, \mathbf{e}_{12} \\
&= \pi r^2 \mathbf{e}_{12}.
\end{aligned}
\tag{2.48}
\end{equation}
Integrating the bivector area over a circular region gives us the area of that region, but weighted by the \( \mathbb{R}^2 \) pseudoscalar. This is an oriented area.
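The \( \mathbf{e}_{12} \) component of \( \mathbf{x}_\rho \wedge \mathbf{x}_\phi \) is just \( \rho \), so the result of eq. (2.48) can also be checked by brute force numerical integration. A plain Python midpoint-rule sketch (editorial addition):

```python
# Midpoint-rule check of the e_12 component of the bivector area integral of
# eq. (2.48): integrating the wedge coefficient over the disc gives pi r^2.
import math

def area_e12(r, n=400):
    total, dr, dp = 0.0, r / n, 2 * math.pi / n
    for i in range(n):
        rho = (i + 0.5) * dr
        for k in range(n):
            p = (k + 0.5) * dp
            xr = (math.cos(p), math.sin(p))               # x_rho in coordinates
            xp = (-rho * math.sin(p), rho * math.cos(p))  # x_phi in coordinates
            # wedge coefficient x_rho,x * x_phi,y - x_rho,y * x_phi,x = rho
            total += (xr[0] * xp[1] - xr[1] * xp[0]) * dr * dp
    return total

r = 1.3
a12 = area_e12(r)
print(abs(a12 - math.pi * r * r) < 1e-8)   # True
```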

Reciprocal basis. Because \( \mathbf{x}_\rho, \mathbf{x}_\phi \) are mutually perpendicular, we have only to rescale them to determine the reciprocal basis, and can do so by inspection
\begin{equation}
\begin{aligned}
\mathbf{x}^\rho &= \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) \\
\mathbf{x}^\phi &= \frac{1}{\rho} \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right).
\end{aligned}
\tag{2.49}
\end{equation}
According to theorem 2.1 we should be able to find eq. (2.49) by computing the gradients of \( \rho \) and \( \phi \) respectively. If we do so using the \( \mathbb{R}^2 \) standard basis representation of the gradient, we must first solve for \( \rho = \rho(x, y) \), \( \phi = \phi(x, y) \), inverting
\begin{equation}
\begin{aligned}
x &= \rho \cos \phi \\
y &= \rho \sin \phi.
\end{aligned}
\tag{2.50}
\end{equation}
An implicit solution to this inversion problem is
\begin{equation}
\begin{aligned}
\rho^2 &= x^2 + y^2 \\
\tan \phi &= y/x,
\end{aligned}
\tag{2.51}
\end{equation}

which we can implicitly differentiate to evaluate the components of the desired gradients
\begin{equation}
\begin{aligned}
2 \rho \frac{\partial \rho}{\partial x} &= 2x \\
2 \rho \frac{\partial \rho}{\partial y} &= 2y \\
\frac{1}{\cos^2 \phi} \frac{\partial \phi}{\partial x} &= -\frac{y}{x^2} \\
\frac{1}{\cos^2 \phi} \frac{\partial \phi}{\partial y} &= \frac{1}{x}.
\end{aligned}
\tag{2.52}
\end{equation}
The gradients are therefore
\begin{equation}
\begin{aligned}
\boldsymbol{\nabla} \rho
&= \frac{1}{\rho} (x, y) \\
&= (\cos \phi, \sin \phi) \\
&= \mathbf{e}_1 e^{\mathbf{e}_{12} \phi} \\
&= \mathbf{x}^\rho
\end{aligned}
\tag{2.53a}
\end{equation}
\begin{equation}
\begin{aligned}
\boldsymbol{\nabla} \phi
&= \cos^2 \phi \left( -\frac{y}{x^2}, \frac{1}{x} \right) \\
&= \frac{1}{\rho} (-\sin \phi, \cos \phi) \\
&= \frac{\mathbf{e}_2}{\rho} \left( \cos \phi + \mathbf{e}_{12} \sin \phi \right) \\
&= \frac{\mathbf{e}_2}{\rho} e^{\mathbf{e}_{12} \phi} \\
&= \mathbf{x}^\phi,
\end{aligned}
\tag{2.53b}
\end{equation}
which is consistent with eq. (2.49), as expected.
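A finite-difference spot check of eq. (2.53) in plain Python (editorial addition):

```python
# Central-difference check of eq. (2.53): grad rho = (cos phi, sin phi) and
# grad phi = (-sin phi, cos phi)/rho.
import math

def grad(f, x, y, h=1e-6):
    return [(f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h)]

rho = lambda x, y: math.hypot(x, y)
phi = lambda x, y: math.atan2(y, x)

x0, y0 = 1.2, 0.7
r0, p0 = rho(x0, y0), phi(x0, y0)

g_rho = grad(rho, x0, y0)
g_phi = grad(phi, x0, y0)
expect_rho = [math.cos(p0), math.sin(p0)]
expect_phi = [-math.sin(p0) / r0, math.cos(p0) / r0]
err = max(abs(a - b) for a, b in zip(g_rho + g_phi, expect_rho + expect_phi))
print(err < 1e-8)   # True
```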

Gradient. The polar form of the \( \mathbb{R}^2 \) gradient is
\begin{equation}
\boldsymbol{\nabla}
= \mathbf{x}^\rho \frac{\partial}{\partial \rho} + \mathbf{x}^\phi \frac{\partial}{\partial \phi}
= \hat{\boldsymbol{\rho}} \frac{\partial}{\partial \rho} + \hat{\boldsymbol{\phi}} \frac{1}{\rho} \frac{\partial}{\partial \phi},
\tag{2.54}
\end{equation}
where
\begin{equation}
\begin{aligned}
\hat{\boldsymbol{\rho}} &= \mathbf{x}_\rho = \mathbf{e}_1 \exp\left( \mathbf{e}_{12} \phi \right) = \mathbf{x}^\rho \\
\hat{\boldsymbol{\phi}} &= \frac{1}{\rho} \mathbf{x}_\phi = \mathbf{e}_2 \exp\left( \mathbf{e}_{12} \phi \right) = \rho \mathbf{x}^\phi.
\end{aligned}
\tag{2.55}
\end{equation}

Should we extend this vector space to \( \mathbb{R}^3 \), the parameterization of eq. (2.43) covers the subspace of the x-y plane, and for that subspace, the vector derivative is
\begin{equation}
\boldsymbol{\partial}
= \mathbf{x}^\rho \frac{\partial}{\partial \rho} + \mathbf{x}^\phi \frac{\partial}{\partial \phi}
= \hat{\boldsymbol{\rho}} \frac{\partial}{\partial \rho} + \hat{\boldsymbol{\phi}} \frac{1}{\rho} \frac{\partial}{\partial \phi}.
\tag{2.56}
\end{equation}

2.2.5.2 Example parameterization: Spherical coordinates.

The spherical vector parameterization admits a compact GA representation. From the coordinate representation, some factoring gives
\begin{equation}
\begin{aligned}
\mathbf{x}
&= r \left( \mathbf{e}_1 \sin\theta \cos\phi + \mathbf{e}_2 \sin\theta \sin\phi + \mathbf{e}_3 \cos\theta \right) \\
&= r \left( \sin\theta \, \mathbf{e}_1 \left( \cos\phi + \mathbf{e}_{12} \sin\phi \right) + \mathbf{e}_3 \cos\theta \right) \\
&= r \left( \sin\theta \, \mathbf{e}_1 e^{\mathbf{e}_{12} \phi} + \mathbf{e}_3 \cos\theta \right) \\
&= r \mathbf{e}_3 \left( \cos\theta + \sin\theta \, \mathbf{e}_3 \mathbf{e}_1 e^{\mathbf{e}_{12} \phi} \right).
\end{aligned}
\tag{2.57}
\end{equation}
With
\begin{equation}
\begin{aligned}
i &= \mathbf{e}_{12} \\
j &= \mathbf{e}_{31} e^{i\phi},
\end{aligned}
\tag{2.58}
\end{equation}
this is
\begin{equation}
\mathbf{x} = r \mathbf{e}_3 e^{j\theta}.
\tag{2.59}
\end{equation}

The curvilinear basis can easily be computed in software

In[13]:= ClearAll[i, j, ej, x, xr, xt, xp]
        i = e[1, 2];
        j[phi_] = GeometricProduct[e[3, 1], Cos[phi] + i Sin[phi]];
        ej[t_, p_] = Cos[t] + j[p] Sin[t];
        x[r_, t_, p_] = r GeometricProduct[e[3], ej[t, p]];
        xr[r_, theta_, phi_] = D[x[a, theta, phi], a] /. a → r;
        xt[r_, theta_, phi_] = D[x[r, t, phi], t] /. t → theta;
        xp[r_, theta_, phi_] = D[x[r, theta, p], p] /. p → phi;
        {x[r, θ, φ],
         xr[r, θ, φ],
         xt[r, θ, φ],
         xp[r, θ, φ]} // Column

Out[13]= r (Cos[θ] e[3] + Cos[φ] e[1] Sin[θ] + e[2] Sin[θ] Sin[φ])
        Cos[θ] e[3] + Cos[φ] e[1] Sin[θ] + e[2] Sin[θ] Sin[φ]
        r (Cos[θ] Cos[φ] e[1] - e[3] Sin[θ] + Cos[θ] e[2] Sin[φ])
        r (Cos[φ] e[2] Sin[θ] - e[1] Sin[θ] Sin[φ])

Unfortunately the compact representation is lost doing so. Computing the basis elements manually, we find
\begin{equation}
\mathbf{x}_r = \mathbf{e}_3 e^{j\theta}
\tag{2.60a}
\end{equation}
\begin{equation}
\begin{aligned}
\mathbf{x}_\theta
&= r \mathbf{e}_3 j e^{j\theta} \\
&= r \mathbf{e}_3 \mathbf{e}_{31} e^{i\phi} e^{j\theta} \\
&= r \mathbf{e}_1 e^{i\phi} e^{j\theta}
\end{aligned}
\tag{2.60b}
\end{equation}
\begin{equation}
\begin{aligned}
\mathbf{x}_\phi
&= \frac{\partial}{\partial \phi} \left( r \mathbf{e}_3 \sin\theta \, \mathbf{e}_{31} e^{i\phi} \right) \\
&= r \sin\theta \, \mathbf{e}_1 \mathbf{e}_{12} e^{i\phi} \\
&= r \sin\theta \, \mathbf{e}_2 e^{i\phi}.
\end{aligned}
\tag{2.60c}
\end{equation}
These are all mutually normal, which can be verified by computing dot products.

In[14]:= ClearAll[x1, x2, x3]
        x1 = xr[r, θ, φ];
        x2 = xt[r, θ, φ];
        x3 = xp[r, θ, φ];
        MapThread[InnerProduct, {{x1, x2, x3}, {x2, x3, x1}}] // Simplify

Out[14]= {0, 0, 0}

An algebraic computation of these dot products is left as a problem for the student (exercise 2.4). Orthonormalization of the curvilinear basis is now possible by inspection
\begin{equation}
\begin{aligned}
\hat{\mathbf{r}} &= \mathbf{x}_r = \mathbf{e}_3 e^{j\theta} \\
\hat{\boldsymbol{\theta}} &= \frac{1}{r} \mathbf{x}_\theta = \mathbf{e}_1 e^{i\phi} e^{j\theta} \\
\hat{\boldsymbol{\phi}} &= \frac{1}{r \sin\theta} \mathbf{x}_\phi = \mathbf{e}_2 e^{i\phi},
\end{aligned}
\tag{2.61}
\end{equation}
so
\begin{equation}
\begin{aligned}
\mathbf{x}^r &= \hat{\mathbf{r}} = \mathbf{e}_3 e^{j\theta} \\
\mathbf{x}^\theta &= \frac{1}{r} \hat{\boldsymbol{\theta}} = \frac{1}{r} \mathbf{e}_1 e^{i\phi} e^{j\theta} \\
\mathbf{x}^\phi &= \frac{1}{r \sin\theta} \hat{\boldsymbol{\phi}} = \frac{1}{r \sin\theta} \mathbf{e}_2 e^{i\phi}.
\end{aligned}
\tag{2.62}
\end{equation}

In particular, this shows that the spherical representation of the gradient is
\begin{equation}
\boldsymbol{\nabla}
= \mathbf{x}^r \frac{\partial}{\partial r} + \mathbf{x}^\theta \frac{\partial}{\partial \theta} + \mathbf{x}^\phi \frac{\partial}{\partial \phi}
= \hat{\mathbf{r}} \frac{\partial}{\partial r} + \hat{\boldsymbol{\theta}} \frac{1}{r} \frac{\partial}{\partial \theta} + \hat{\boldsymbol{\phi}} \frac{1}{r \sin\theta} \frac{\partial}{\partial \phi}.
\tag{2.63}
\end{equation}
The spherical (oriented) volume element can also be computed in a compact fashion
\begin{equation}
\begin{aligned}
\frac{d^3 \mathbf{x}}{dr d\theta d\phi}
&= \mathbf{x}_r \wedge \mathbf{x}_\theta \wedge \mathbf{x}_\phi \\
&= \left\langle \mathbf{x}_r \mathbf{x}_\theta \mathbf{x}_\phi \right\rangle_3 \\
&= \left\langle \mathbf{e}_3 e^{j\theta} \, r \mathbf{e}_1 e^{i\phi} e^{j\theta} \, r \sin\theta \, \mathbf{e}_2 e^{i\phi} \right\rangle_3 \\
&= r^2 \sin\theta \left\langle \mathbf{e}_3 e^{j\theta} \mathbf{e}_1 e^{i\phi} e^{j\theta} \mathbf{e}_2 e^{i\phi} \right\rangle_3 \\
&= r^2 \sin\theta \, \mathbf{e}_{123}.
\end{aligned}
\tag{2.64}
\end{equation}
The scalar factor is in fact the Jacobian with respect to the spherical parameterization
\begin{equation}
\frac{dV}{dr d\theta d\phi}
= \frac{\partial(x_1, x_2, x_3)}{\partial(r, \theta, \phi)}
= \begin{vmatrix}
\sin\theta \cos\phi & \sin\theta \sin\phi & \cos\theta \\
r \cos\theta \cos\phi & r \cos\theta \sin\phi & -r \sin\theta \\
-r \sin\theta \sin\phi & r \sin\theta \cos\phi & 0
\end{vmatrix}
= r^2 \sin\theta.
\tag{2.65}
\end{equation}
The final reduction of eq. (2.64), and the expansion of the Jacobian eq. (2.65), are both easily verified with software

In[15]:= OuterProduct[xr[r, θ, φ], xt[r, θ, φ], xp[r, θ, φ]]
        {e1, e2, e3} = IdentityMatrix[3];
        jacobian = {xr[r, θ, φ], xt[r, θ, φ], xp[r, θ, φ]} /.
           {e[1] → e1, e[2] → e2, e[3] → e3};
        Det[jacobian] // Simplify

Out[15]= r^2 e[1,2,3] Sin[θ]

Out[16]= r^2 Sin[θ]

Performing these calculations manually is left as a problem for the student (exercise 2.6, exercise 2.5).
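As an additional cross check, the Jacobian value \( r^2 \sin\theta \) of eq. (2.65) can be confirmed by differentiating the coordinate form of the spherical parameterization numerically. A plain Python sketch (editorial addition):

```python
# Numerical check that the spherical Jacobian of eq. (2.65) is r^2 sin(theta).
import math

def x(r, t, p):   # standard spherical parameterization, in coordinates
    return [r * math.sin(t) * math.cos(p),
            r * math.sin(t) * math.sin(p),
            r * math.cos(t)]

def jac_det(r, t, p, h=1e-6):
    rows = []
    for i in range(3):                       # rows are dx/dr, dx/dtheta, dx/dphi
        u, v = [r, t, p], [r, t, p]
        u[i] += h
        v[i] -= h
        rows.append([(a - b) / (2 * h) for a, b in zip(x(*u), x(*v))])
    (a, b, c), (d, e, f), (g, hh, i) = rows
    return a * (e * i - f * hh) - b * (d * i - f * g) + c * (d * hh - e * g)

r, t, p = 1.5, 0.8, 2.1
d = jac_det(r, t, p)
print(abs(d - r * r * math.sin(t)) < 1e-6)   # True
```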

2.2.5.3 Example parameterization: Toroidal coordinates.

Figure 2.6: Toroidal parameterization.

Here is a 3D example of a parameterization with a non-orthonormal curvilinear basis, that of a toroidal subspace specified by two angles and a radial distance to the center of the toroid, as illustrated in fig. 2.6. The position vector on the surface of a toroid of radius \( \rho \) within the torus can be stated directly
\begin{equation}
\mathbf{x}(\rho, \theta, \phi) = e^{-j\theta/2} \left( \rho \mathbf{e}_1 e^{i\phi} + R \mathbf{e}_3 \right) e^{j\theta/2}
\tag{2.66a}
\end{equation}
\begin{equation}
i = \mathbf{e}_1 \mathbf{e}_3
\tag{2.66b}
\end{equation}
\begin{equation}
j = \mathbf{e}_3 \mathbf{e}_2.
\tag{2.66c}
\end{equation}
The unit bivectors \( i \) and \( j \) used in this construction happen to have the quaternionic properties \( ij = -ji \) and \( i^2 = j^2 = -1 \), which can be verified easily.
After some regrouping, the curvilinear basis is found to be
\begin{align}
\mathbf{x}_\rho &= \frac{\partial \mathbf{x}}{\partial \rho} = e^{-j\theta/2} \mathbf{e}_1 e^{i\phi} e^{j\theta/2}
\tag{2.67a} \\
\mathbf{x}_\theta &= \frac{\partial \mathbf{x}}{\partial \theta} = e^{-j\theta/2} \left( R + \rho \sin\phi \right) \mathbf{e}_2 e^{j\theta/2}
\tag{2.67b} \\
\mathbf{x}_\phi &= \frac{\partial \mathbf{x}}{\partial \phi} = e^{-j\theta/2} \rho \mathbf{e}_3 e^{i\phi} e^{j\theta/2}.
\tag{2.67c}
\end{align}

The oriented volume element can be computed using a trivector selection operation, which conveniently wipes out a number of the interior exponentials
\begin{equation}
\frac{\partial \mathbf{x}}{\partial \rho} \wedge \frac{\partial \mathbf{x}}{\partial \theta} \wedge \frac{\partial \mathbf{x}}{\partial \phi}
= \rho \left( R + \rho \sin\phi \right) \left\langle e^{-j\theta/2} \mathbf{e}_1 e^{i\phi} \mathbf{e}_2 \mathbf{e}_3 e^{i\phi} e^{j\theta/2} \right\rangle_3.
\tag{2.68}
\end{equation}
Note that \( \mathbf{e}_1 \) commutes with \( j = \mathbf{e}_3 \mathbf{e}_2 \), so also with \( e^{-j\theta/2} \). Also \( \mathbf{e}_2 \mathbf{e}_3 = -j \) anticommutes with \( i \), so there is a conjugate commutation effect \( e^{i\phi} j = j e^{-i\phi} \). This gives
\begin{equation}
\begin{aligned}
\left\langle e^{-j\theta/2} \mathbf{e}_1 e^{i\phi} \mathbf{e}_2 \mathbf{e}_3 e^{i\phi} e^{j\theta/2} \right\rangle_3
&= -\left\langle \mathbf{e}_1 e^{-j\theta/2} j e^{-i\phi} e^{i\phi} e^{j\theta/2} \right\rangle_3 \\
&= -\left\langle \mathbf{e}_1 e^{-j\theta/2} j e^{j\theta/2} \right\rangle_3 \\
&= -\left\langle \mathbf{e}_1 j \right\rangle_3 \\
&= I.
\end{aligned}
\tag{2.69}
\end{equation}
Together, the trivector grade selection reduces almost magically to just
\begin{equation}
\frac{\partial \mathbf{x}}{\partial \rho} \wedge \frac{\partial \mathbf{x}}{\partial \theta} \wedge \frac{\partial \mathbf{x}}{\partial \phi}
= \rho \left( R + \rho \sin\phi \right) I.
\tag{2.70}
\end{equation}
Thus the (scalar) volume element is
\begin{equation}
dV = \rho \left( R + \rho \sin\phi \right) d\rho d\theta d\phi.
\tag{2.71}
\end{equation}

As a check, it should be the case that the volume of the complete torus computed using this volume element has the expected value \( V = (2\pi R)(\pi r^2) \). That volume is
\begin{equation}
V = \int_{\rho=0}^r \int_{\theta=0}^{2\pi} \int_{\phi=0}^{2\pi} \rho \left( R + \rho \sin\phi \right) d\rho d\theta d\phi.
\tag{2.72}
\end{equation}
The sine term conveniently vanishes over the \( 2\pi \) interval, leaving just
\begin{equation}
V = \frac{1}{2} r^2 R (2\pi)(2\pi),
\tag{2.73}
\end{equation}
as expected.
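The volume integral eq. (2.72) is also easy to check numerically against \( 2\pi^2 r^2 R \). A plain Python midpoint-rule sketch (editorial addition):

```python
# Midpoint-rule check of the toroidal volume integral of eq. (2.72):
# V = int rho (R + rho sin phi) drho dtheta dphi = 2 pi^2 r^2 R.
import math

def torus_volume(r, R, n=60):
    total = 0.0
    dr, dph = r / n, 2 * math.pi / n
    for i in range(n):
        rho = (i + 0.5) * dr
        for k in range(n):
            phi = (k + 0.5) * dph
            total += rho * (R + rho * math.sin(phi)) * dr * dph
    return total * 2 * math.pi       # the theta integral contributes 2 pi

r, R = 0.5, 2.0
v = torus_volume(r, R)
exact = 2 * math.pi ** 2 * r ** 2 * R
print(abs(v - exact) / exact < 1e-6)   # True
```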

2.2.6 Problems.

Exercise 2.4 Spherical coordinate basis normality.
Using scalar grade selection, show that the spherical curvilinear basis vectors of eq. (2.60) are all mutually normal.
Exercise 2.5 Spherical volume element pseudoscalar.
Using geometric algebra, perform the reduction of the grade three selection made in the final
step of eq. (2.64).

Exercise 2.6 Spherical volume Jacobian.
Without software, expand and simplify the determinant of eq. (2.65).

2.3 integration theory.

2.3.1 Hypervolume integral.

We wish to generalize the concepts of line, surface and volume integrals to hypervolumes and
multivector functions, and define a hypervolume integral as

Definition 2.4: Multivector integral.

Given a hypervolume parameterized by k parameters, a k-volume element \( d^k \mathbf{x} \), and multivector functions F, G, a k-volume integral with the vector derivative acting to the right on G is written as
\begin{equation*}
\int_V d^k \mathbf{x} \, \overrightarrow{\boldsymbol{\partial}} G,
\end{equation*}
a k-volume integral with the vector derivative acting to the left on F is written as
\begin{equation*}
\int_V F d^k \mathbf{x} \, \overleftarrow{\boldsymbol{\partial}},
\end{equation*}
and a k-volume integral with the vector derivative acting bidirectionally on F, G is written as
\begin{equation*}
\int_V F d^k \mathbf{x} \, \overleftrightarrow{\boldsymbol{\partial}} G
\equiv \int_V \left( F d^k \mathbf{x} \, \overleftarrow{\boldsymbol{\partial}} \right) G + \int_V F d^k \mathbf{x} \left( \overrightarrow{\boldsymbol{\partial}} G \right).
\end{equation*}

The explicit meaning of these directionally acting derivative operations is given by the chain rule coordinate expansion
\begin{equation*}
\begin{aligned}
F d^k \mathbf{x} \, \overleftrightarrow{\boldsymbol{\partial}} G
&= F d^k \mathbf{x} \left( \sum_i \mathbf{x}^i \overleftrightarrow{\partial_i} \right) G \\
&= \sum_i (\partial_i F) d^k \mathbf{x} \, \mathbf{x}^i G + \sum_i F d^k \mathbf{x} \, \mathbf{x}^i (\partial_i G) \\
&\equiv \left( F d^k \mathbf{x} \, \overleftarrow{\boldsymbol{\partial}} \right) G + F d^k \mathbf{x} \left( \overrightarrow{\boldsymbol{\partial}} G \right),
\end{aligned}
\end{equation*}
with \( \boldsymbol{\partial} \) acting on F and G, but not on the volume element \( d^k \mathbf{x} \), which may also be a function of the implied parameterization.

The vector derivative may not commute with F, G, nor with the volume element \( d^k \mathbf{x} \), so we are forced to use some notation to indicate what the vector derivative (or gradient) acts on. In conventional right acting cases, where there is no ambiguity, arrows will usually be omitted, but braces may also be used to indicate the scope of derivative operators. This bidirectional notation will also be used for the gradient, especially for volume integrals in \( \mathbb{R}^3 \), where the vector derivative is identical to the gradient.

Some authors use the Hestenes dot notation, with overdots or primes indicating the exact scope of multivector derivative operators, as in
\begin{equation}
\dot{F} d^k \mathbf{x} \dot{\boldsymbol{\partial}} \dot{G}
= \dot{F} d^k \mathbf{x} \dot{\boldsymbol{\partial}} G + F d^k \mathbf{x} \dot{\boldsymbol{\partial}} \dot{G}.
\tag{2.77}
\end{equation}
The dot notation has the advantage of emphasizing that the action of the vector derivative (or gradient) is on the functions F, G, and not on the hypervolume element \( d^k \mathbf{x} \). However, in this book, where primed operators such as \( \boldsymbol{\nabla}' \) are used to indicate that derivatives are taken with respect to primed \( \mathbf{x}' \) variables, a mix of dots and ticks would have been confusing.

2.3.2 Fundamental theorem.

The fundamental theorem of geometric calculus is a generalization of many conventional scalar and vector integral theorems, and relates a hypervolume integral to its boundary. This is a powerful theorem, which we will use with Green's functions to solve Maxwell's equation, but also to derive the geometric algebra form of Stokes' theorem, from which most of the familiar integral calculus results follow.

Theorem 2.3: Fundamental theorem of geometric calculus

Given multivectors F, G, and a parameterization \( \mathbf{x} = \mathbf{x}(u_1, u_2, \cdots) \) with hypervolume element \( d^k \mathbf{x} = d^k u \, I_k \), where \( I_k = \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \cdots \wedge \mathbf{x}_k \), the hypervolume integral is related to the boundary integral by
\begin{equation*}
\int_V F d^k \mathbf{x} \, \overleftrightarrow{\boldsymbol{\partial}} G = \oint_{\partial V} F d^{k-1} \mathbf{x} \, G,
\end{equation*}
where \( \partial V \) represents the boundary of the volume, and \( d^{k-1} \mathbf{x} \) is the hypersurface element. The hypersurface element and boundary integral are defined for \( k > 1 \) as
\begin{equation*}
\oint_{\partial V} F d^{k-1} \mathbf{x} \, G \equiv \sum_i \int_{\Delta u_i} d^{k-1} u_i \, F \left( I_k \cdot \mathbf{x}^i \right) G,
\end{equation*}
where \( d^{k-1} u_i \) is the product of all \( du_j \) except for \( du_i \). For \( k = 1 \) the hypersurface element and associated boundary "integral" is really just convenient general shorthand, and should be taken to mean the evaluation of the FG multivector product over the range of the parameter
\begin{equation*}
\oint_{\partial V} F d^0 \mathbf{x} \, G \equiv \left. F G \right|_{\Delta u_1}.
\end{equation*}

The geometry of the hypersurface element \( d^{k-1} \mathbf{x} \) will be made more clear when we consider the specific cases of \( k = 1, 2, 3 \), representing generalized line, surface, and volume integrals respectively. Instead of terrorizing the reader with a general proof of theorem 2.3, which requires some unpleasant index gymnastics, this book will separately state and prove the fundamental theorem of calculus for each of the \( k = 1, 2, 3 \) cases that are of interest for problems in \( \mathbb{R}^2 \) and \( \mathbb{R}^3 \). For the interested reader, a sketch of the general proof of theorem 2.3 is available in appendix B.
Before moving on to the line, surface, and volume integral cases, we will state and prove the
general Stokes’ theorem in its geometric algebra form.

2.3.3 Stokes' theorem.

An important consequence of the fundamental theorem of geometric calculus is the geometric algebra generalization of Stokes' theorem. The Stokes' theorem that we know from conventional vector calculus relates \( \mathbb{R}^3 \) surface integrals to the line integral around the bounding curve of the surface. The geometric algebra form of Stokes' theorem is equivalent to Stokes' theorem from the theory of differential forms, which relates hypervolume integrals of blades\(^5\) to the integrals over their hypersurface boundaries, a much more general result.

5 Blades are isomorphic to the k-forms found in the theory of differential forms.

Theorem 2.4: Stokes' theorem

Given a k-volume element \( d^k \mathbf{x} \) and an s-blade F, with \( s < k \),
\begin{equation*}
\int_V d^k \mathbf{x} \cdot \left( \boldsymbol{\partial} \wedge F \right) = \int_{\partial V} d^{k-1} \mathbf{x} \cdot F.
\end{equation*}

We will see that most of the well known scalar and vector integral theorems can easily be derived as direct consequences of theorem 2.4, itself a special case of theorem 2.3.

We can prove Stokes' theorem from theorem 2.3 by setting F = 1, and requiring that G is an s-blade, with grade \( s < k \). The proof follows by selecting the \( k - (s+1) \) grade, the lowest grade of \( d^k \mathbf{x} (\boldsymbol{\partial} \wedge G) \), from both sides of theorem 2.3. For the grade selection of the hypervolume integral we have
\begin{equation}
\left\langle \int_V d^k \mathbf{x} \, \boldsymbol{\partial} G \right\rangle_{k-(s+1)}
= \left\langle \int_V d^k \mathbf{x} \left( \boldsymbol{\partial} \cdot G \right) \right\rangle_{k-(s+1)}
+ \left\langle \int_V d^k \mathbf{x} \left( \boldsymbol{\partial} \wedge G \right) \right\rangle_{k-(s+1)},
\tag{2.78}
\end{equation}
however, the lowest grade of \( d^k \mathbf{x} \left( \boldsymbol{\partial} \cdot G \right) \) is \( k - (s-1) = k - s + 1 > k - (s+1) \), so the divergence integral is zero. As \( d^{k-1} \mathbf{x} \) is a \( k - 1 \) blade
\begin{equation}
\begin{aligned}
\int_V d^k \mathbf{x} \cdot \left( \boldsymbol{\partial} \wedge G \right)
&= \left\langle \int_{\partial V} d^{k-1} \mathbf{x} \, G \right\rangle_{k-(s+1)} \\
&= \int_{\partial V} d^{k-1} \mathbf{x} \cdot G,
\end{aligned}
\tag{2.79}
\end{equation}
proving the theorem.

2.3.4 Line integral.

A single parameter curve, and the corresponding differential with respect to that parameter, is plotted in fig. 2.7. The differential with respect to the parameter a is
\begin{equation}
d\mathbf{x}_a = \frac{\partial \mathbf{x}}{\partial a} da = \mathbf{x}_a da.
\tag{2.80}
\end{equation}
The vector derivative has just one component
\begin{equation}
\begin{aligned}
\boldsymbol{\partial}
&= \sum_i \mathbf{x}^i \left( \mathbf{x}_i \cdot \boldsymbol{\nabla} \right) \\
&= \mathbf{x}^a \frac{\partial}{\partial a} \\
&\equiv \mathbf{x}^a \partial_a.
\end{aligned}
\tag{2.81}
\end{equation}

Figure 2.7: One parameter manifold.

The line integral specialization of definition 2.4 can now be stated

Definition 2.5: Multivector line integral.

Given a connected curve C parameterized by a single parameter, and multivector functions F, G, we define the line integral as
\begin{equation*}
\int_C F d^1 \mathbf{x} \, \overleftrightarrow{\boldsymbol{\partial}} G
\equiv \int_C \left( F d^1 \mathbf{x} \, \overleftarrow{\boldsymbol{\partial}} \right) G + \int_C F d^1 \mathbf{x} \left( \overrightarrow{\boldsymbol{\partial}} G \right),
\end{equation*}
where the one parameter differential form \( d^1 \mathbf{x} = da \, \mathbf{x}_a \) varies over the curve.

The line integral specialization of theorem 2.3 is

Theorem 2.5: Multivector line integral.

Given a connected curve $C$ parameterized by a single parameter, and multivector functions $F, G$, the line integral

$$\int_C F\, d^1 x\, \overleftrightarrow{\partial}\, G = FG\big|_{\Delta a}.$$

The proof follows by expansion. For the (single variable) parameterization $a$ above

$$\begin{aligned}
\int_C F\, d^1 x\, \overleftrightarrow{\partial}\, G &= \int_C F\, d^1 x\, \overleftarrow{\partial}\, G + \int_C F\, d^1 x\, \overrightarrow{\partial}\, G \\
&= \int_C da\, \frac{\partial F}{\partial a} x_a x^a G + \int_C da\, F x_a x^a \frac{\partial G}{\partial a} \\
&= \int_C da\, \frac{\partial F}{\partial a} G + \int_C da\, F \frac{\partial G}{\partial a} \\
&= \int_C da\, \frac{\partial}{\partial a} (FG) \\
&= F(a_1) G(a_1) - F(a_0) G(a_0),
\end{aligned} \tag{2.82}$$

where the boundaries of the parameterization have been assumed to be $C: a \in [a_0, a_1]$. We have a perfect cancellation of the reciprocal frame vector $x^a$ with the vector $x_a$ that lies along the curve, since $x_a x^a = 1$. This leaves a perfect derivative of the product $FG$, which can be integrated over the length of the curve, yielding the difference of the product evaluated at the end points of the curve.
For a single parameter subspace the reciprocal frame vector $x^a$ is trivial to calculate, as it is just the inverse of $x_a$, that is $x^a = x_a/\|x_a\|^2$. Observe that we did not actually have to calculate it, but instead only require that the vector is invertible.
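The same reciprocal frame construction carries over to the multi-parameter surfaces and volumes of the following sections. As a numerical sketch (the tangent vectors below are arbitrary illustrative choices, not taken from the text), a reciprocal frame can be computed from the inverse of the Gram matrix of the tangent frame:

```python
import numpy as np

# Numerical sketch of a reciprocal frame computation of the kind used
# throughout this section.  The two tangent vectors below are arbitrary
# illustrative choices; the reciprocal frame is obtained from the
# inverse of the Gram matrix, and satisfies x^i . x_j = delta^i_j.

x_a = np.array([1.0, 0.5, 0.0])      # tangent vectors at a point of a
x_b = np.array([-0.2, 1.0, 0.3])     # two parameter surface in R^3

T = np.vstack([x_a, x_b])            # rows: the tangent frame
G = T @ T.T                          # Gram matrix, G_ij = x_i . x_j
R = np.linalg.inv(G) @ T             # rows: the reciprocal frame x^a, x^b

print(R @ T.T)                       # identity matrix, x^i . x_j = delta^i_j
```

For a single tangent vector the Gram matrix is the scalar $\|x_a\|^2$, and this construction reduces to the inverse $x^a = x_a/\|x_a\|^2$ noted above.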
An important (and familiar) special case of theorem 2.5 is the fundamental theorem of calculus for line integrals, which can be obtained by using a single scalar function $f$.

Theorem 2.6: Line integral of a scalar function (Stokes').

Given a scalar function $f$, its line integral is given by

$$\int_C d^1 x \cdot \partial f = \int_C d^1 x \cdot \nabla f = f\big|_{\Delta a}.$$

Writing out theorem 2.5 with $F = 1$, $G = f(x(a))$, we have

$$\int_C d^1 x\, \partial f = f\big|_{\Delta a}. \tag{2.83}$$

This is a multivector equation with scalar and bivector grades on the left hand side, but only scalar grades on the right. Equating grades yields two equations

$$\int_C d^1 x \cdot \partial f = f\big|_{\Delta a} \tag{2.84a}$$

$$\int_C d^1 x \wedge \partial f = 0. \tag{2.84b}$$

Because $d^1 x \cdot \partial = d^1 x \cdot \nabla$, we can replace the vector derivative with the gradient in eq. (2.84a), which yields the conventional line integral result, proving the theorem.
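A quick numerical sanity check of theorem 2.6 can be made with an arbitrary scalar function and curve (both illustrative choices, not from the text):

```python
import numpy as np

# Numerical check of theorem 2.6, the fundamental theorem for line
# integrals: the integral of dx . grad f along a curve equals the
# difference of f evaluated at the curve's end points.  The function f
# and the helical curve used here are arbitrary illustrative choices.

def f(p):
    x, y, z = p
    return x**2 * y + np.sin(z)

def grad_f(p):
    x, y, z = p
    return np.array([2 * x * y, x**2, np.cos(z)])

def curve(a):  # helix, parameterized by a in [0, 1]
    return np.array([np.cos(2 * np.pi * a), np.sin(2 * np.pi * a), a])

N = 5000
a = np.linspace(0.0, 1.0, N + 1)
pts = np.array([curve(t) for t in a])
mids = 0.5 * (pts[:-1] + pts[1:])             # segment midpoints
dx = np.diff(pts, axis=0)                     # segment vectors dx
grads = np.array([grad_f(p) for p in mids])
integral = np.sum(dx * grads)                 # sum of dx . grad f

exact = f(curve(1.0)) - f(curve(0.0))         # difference of end point values
print(integral, exact)
```

The midpoint rule converges quadratically here, so even modest segment counts reproduce the end point difference to high accuracy.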

2.3.5 Surface integral.

A two parameter surface, and the corresponding differentials with respect to those parameters, are plotted in fig. 2.8.

Figure 2.8: Two parameter manifold differentials.

Given parameters $a, b$, the differentials along each of the parameterization directions are

$$dx_a = \frac{\partial x}{\partial a}\, da = x_a\, da, \qquad dx_b = \frac{\partial x}{\partial b}\, db = x_b\, db. \tag{2.85}$$

The bivector valued surface area element for this parameterization is

$$d^2 x = dx_a \wedge dx_b = da\, db\, (x_a \wedge x_b). \tag{2.86}$$



The vector derivative, the projection of the gradient onto the surface at the point of integration (also called the tangent space), now has two components

$$\partial = \sum_i x^i (x_i \cdot \nabla) = x^a \frac{\partial}{\partial a} + x^b \frac{\partial}{\partial b} \equiv x^a \partial_a + x^b \partial_b. \tag{2.87}$$

The surface integral specialization of definition 2.4 can now be stated

Definition 2.6: Multivector surface integral.

Given a connected surface $S$ parameterized by two parameters, and multivector functions $F, G$, we define the surface integral as

$$\int_S F\, d^2 x\, \overleftrightarrow{\partial}\, G \equiv \int_S F\, d^2 x\, \overleftarrow{\partial}\, G + \int_S F\, d^2 x\, \overrightarrow{\partial}\, G,$$

where the two parameter differential form $d^2 x = da\, db\, x_a \wedge x_b$ varies over the surface.

The surface integral specialization of theorem 2.3 is

Theorem 2.7: Multivector surface integral.

Given a connected surface $S$ parameterized by two parameters, and multivector functions $F, G$, the surface integral

$$\int_S F\, d^2 x\, \overleftrightarrow{\partial}\, G = \int_{\partial S} F\, dx\, G,$$

where $\partial S$ is the boundary of the surface $S$.

In $\mathbb{R}^2$ with $d^2 x = i\, dA$, $i = e_{12}$, this may be written

$$\int_S F\, i \nabla G\, dA = \int_{\partial S} F\, dx\, G. \tag{2.88}$$

As $i$ does not commute with all $\mathbb{R}^2$ multivectors, unless $F = 1$ we cannot generally pull $i$ out of the integral. In $\mathbb{R}^3$, with $d^2 x = I \hat{n}\, dA$, we may use a scalar area element, but cannot generally replace the vector derivative with the gradient.

To see why this works, we would first like to reduce the product of the area element and the vector derivative

$$d^2 x\, \partial = da\, db\, (x_a \wedge x_b) \left( x^a \partial_a + x^b \partial_b \right). \tag{2.89}$$

Since $x^a, x^b \in \operatorname{span}\{x_a, x_b\}$, this multivector product has only a vector grade. That is

$$\begin{aligned}
(x_a \wedge x_b)\, x^a &= (x_a \wedge x_b) \cdot x^a + \cancel{(x_a \wedge x_b) \wedge x^a} \\
&= x_a (x_b \cdot x^a) - x_b (x_a \cdot x^a) \\
&= -x_b.
\end{aligned} \tag{2.90}$$

Similarly

$$\begin{aligned}
(x_a \wedge x_b)\, x^b &= (x_a \wedge x_b) \cdot x^b + \cancel{(x_a \wedge x_b) \wedge x^b} \\
&= x_a (x_b \cdot x^b) - x_b (x_a \cdot x^b) \\
&= x_a,
\end{aligned} \tag{2.91}$$

so

$$d^2 x\, \partial = da\, db\, (x_a \partial_b - x_b \partial_a). \tag{2.92}$$

Inserting this into the surface integral, we find

$$\begin{aligned}
\int_S F\, d^2 x\, \overleftrightarrow{\partial}\, G &= \int_S F\, d^2 x\, \overleftarrow{\partial}\, G + \int_S F\, d^2 x\, \overrightarrow{\partial}\, G \\
&= \int_S da\, db \left( (\partial_b F) x_a - (\partial_a F) x_b \right) G + \int_S da\, db\, F \left( x_a \partial_b G - x_b \partial_a G \right) \\
&= \int_S da\, db \left( \frac{\partial F}{\partial b} \frac{\partial x}{\partial a} - \frac{\partial F}{\partial a} \frac{\partial x}{\partial b} \right) G + \int_S da\, db\, F \left( \frac{\partial x}{\partial a} \frac{\partial G}{\partial b} - \frac{\partial x}{\partial b} \frac{\partial G}{\partial a} \right) \\
&= \int_S da\, db\, \frac{\partial}{\partial b} \left( F \frac{\partial x}{\partial a} G \right) - \int_S da\, db\, \frac{\partial}{\partial a} \left( F \frac{\partial x}{\partial b} G \right) - \int_S da\, db\, F \left( \frac{\partial}{\partial b} \frac{\partial x}{\partial a} - \frac{\partial}{\partial a} \frac{\partial x}{\partial b} \right) G \\
&= \int_S da\, db\, \frac{\partial}{\partial b} \left( F \frac{\partial x}{\partial a} G \right) - \int_S da\, db\, \frac{\partial}{\partial a} \left( F \frac{\partial x}{\partial b} G \right),
\end{aligned} \tag{2.93}$$

where the last term vanishes by equality of mixed partials. This leaves two perfect differentials, which can both be integrated separately. That gives

$$\begin{aligned}
\int_S F\, d^2 x\, \overleftrightarrow{\partial}\, G &= \int_{\Delta a} da \left( F \frac{\partial x}{\partial a} G \right) \bigg|_{\Delta b} - \int_{\Delta b} db \left( F \frac{\partial x}{\partial b} G \right) \bigg|_{\Delta a} \\
&= \int_{\Delta a} (F\, dx_a\, G)\big|_{\Delta b} - \int_{\Delta b} (F\, dx_b\, G)\big|_{\Delta a}.
\end{aligned} \tag{2.94}$$

Figure 2.9: Contour for two parameter surface boundary.

Suppose we are integrating over the unit parameter space $(a, b) \in [0, 1] \times [0, 1]$, as illustrated in fig. 2.9.
Comparing to the figure we see that we've ended up with a clockwise line integral around the boundary of the surface. For a given subset of the surface, the bivector area element can be chosen small enough that it lies in the tangent space to the surface at the point of integration. In that case, a larger bounding loop can be conceptualized as the sum of a number of smaller ones, as sketched in fig. 2.10, in which case the contributions of the interior loop segments cancel out.

2.3.6 Two parameter Stokes’ theorem.

Two special cases of theorem 2.7 arise when scalar and vector functions are integrated over a surface. For scalar functions we have

Theorem 2.8: Surface integral of scalar function (Stokes').

Given a scalar function $f(x)$ its surface integral is given by

$$\int_S d^2 x \cdot \partial f = \int_S d^2 x \cdot \nabla f = \int_{\partial S} dx\, f.$$

Figure 2.10: Sum of infinitesimal loops.

In $\mathbb{R}^3$ this can be written as

$$\int_S dA\, \hat{n} \times \nabla f = -\int_{\partial S} dx\, f,$$

where $\hat{n}$ is the outwards normal specified by $d^2 x = I \hat{n}\, dA$, and the sign reflects the clockwise boundary orientation found for eq. (2.94).

To show the first part, we can split the (multivector) surface integral into vector and trivector grades

$$\int_S d^2 x\, \partial f = \int_S d^2 x \cdot \partial f + \int_S d^2 x \wedge \partial f. \tag{2.95}$$

Since $x^a, x^b$ both lie in the span of $\{x_a, x_b\}$, $d^2 x \wedge \partial = 0$, killing the second integral in eq. (2.95). If the gradient is decomposed into its projection along the tangent space (the vector derivative) and its perpendicular components, only the vector derivative components of the gradient contribute to its dot product with the area element. That is

$$d^2 x \cdot \nabla = d^2 x \cdot \left( x^a \partial_a + x^b \partial_b + \cdots \right) = d^2 x \cdot \left( x^a \partial_a + x^b \partial_b \right) = d^2 x \cdot \partial. \tag{2.96}$$

This means that for a scalar function

$$\int_S d^2 x\, \partial f = \int_S d^2 x \cdot \nabla f. \tag{2.97}$$

The second part of the theorem follows by grade selection, and application of a duality transformation for the area element

$$\begin{aligned}
d^2 x \cdot \nabla f &= \left\langle d^2 x\, \nabla f \right\rangle_1 \\
&= dA \left\langle I \hat{n} \nabla f \right\rangle_1 \\
&= dA \left\langle I (\hat{n} \cdot \nabla f + I\, \hat{n} \times \nabla f) \right\rangle_1 \\
&= -dA\, \hat{n} \times \nabla f.
\end{aligned} \tag{2.98}$$

Back substitution of eq. (2.98) completes the proof of theorem 2.8.


For vector functions we have

Theorem 2.9: Surface integral of a vector function (Stokes').

Given a vector function $f(x)$ the surface integral is given by

$$\int_S d^2 x \cdot (\nabla \wedge f) = \int_{\partial S} dx \cdot f.$$

In $\mathbb{R}^3$ this can be written as

$$\int_S dA\, \hat{n} \cdot (\nabla \times f) = -\int_{\partial S} dx \cdot f,$$

where $\hat{n}$ is the outwards normal specified by $d^2 x = I \hat{n}\, dA$, and the sign again reflects the clockwise boundary orientation.

2.3.7 Green’s theorem.

Theorem 2.9, when stated in terms of coordinates, is another well known result

Theorem 2.10: Green's theorem.

Given a vector $f = \sum_i f_i x^i$ in $\mathbb{R}^N$, and a surface parameterized by $x = x(u_1, u_2)$,

$$\int_S du_1 du_2 \left( \frac{\partial f_1}{\partial u_2} - \frac{\partial f_2}{\partial u_1} \right) = \int_{\partial S} \left( du_1\, f_1 + du_2\, f_2 \right).$$

This is often stated for $\mathbb{R}^2$ with a Cartesian $x, y$ parameterization, such as $f = P e_1 + Q e_2$. In that case

$$\int_S dx\, dy \left( \frac{\partial P}{\partial y} - \frac{\partial Q}{\partial x} \right) = \int_{\partial S} \left( P\, dx + Q\, dy \right).$$

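Theorem 2.10 is easy to check numerically. The fields $P, Q$ below are arbitrary illustrative choices, and the boundary of the unit square is traversed clockwise, matching the boundary orientation found for eq. (2.94):

```python
import numpy as np

# Numerical check of theorem 2.10 (Green's theorem).  P and Q are
# arbitrary illustrative fields; for them dP/dy - dQ/dx = -x^2 - y^2.
# The boundary of the unit square is traversed clockwise, matching
# this section's orientation convention.

P = lambda x, y: -x**2 * y
Q = lambda x, y: x * y**2

# area integral of dP/dy - dQ/dx over the unit square (midpoint rule)
n = 400
c = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(c, c)
area = np.sum(-X**2 - Y**2) / n**2

# clockwise boundary integral of P dx + Q dy
def segment(p0, p1, m=4000):
    s = (np.arange(m) + 0.5) / m
    x = p0[0] + (p1[0] - p0[0]) * s
    y = p0[1] + (p1[1] - p0[1]) * s
    dx = (p1[0] - p0[0]) / m
    dy = (p1[1] - p0[1]) / m
    return np.sum(P(x, y) * dx + Q(x, y) * dy)

corners = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]   # clockwise loop
line = sum(segment(corners[i], corners[i + 1]) for i in range(4))

print(area, line)  # both approach -2/3
```

Traversing the boundary counterclockwise instead flips the sign of the line integral, recovering the more common statement of Green's theorem.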
The first equality in theorem 2.10 holds in $\mathbb{R}^N$ for vectors expressed in terms of an arbitrary curvilinear basis. Only the (curvilinear) coordinates of the vector $f$ contribute to this integral, and only those that lie in the tangent space. The reciprocal basis vectors $x^i$ are also nowhere to be seen. This is because they are either obliterated in dot products with $x_j$, or cancel due to mixed partial equality.
To see how this occurs let's look at the area integrand of theorem 2.9

$$\begin{aligned}
d^2 x \cdot (\nabla \wedge f) &= du_1 du_2\, (x_1 \wedge x_2) \cdot \left( \sum_{ij} x^i \partial_i \wedge f_j x^j \right) \\
&= du_1 du_2 \sum_{ij} \left( (x_1 \wedge x_2) \cdot x^i \right) \cdot \partial_i (f_j x^j) \\
&= du_1 du_2 \sum_{ij} \left( (x_1 \wedge x_2) \cdot x^i \right) \cdot x^j\, \partial_i f_j + du_1 du_2 \sum_{ij} f_j \left( (x_1 \wedge x_2) \cdot x^i \right) \cdot (\partial_i x^j).
\end{aligned} \tag{2.99}$$

With a bit of trouble, we will see that the second integrand is zero. On the other hand, the first integrand simplifies without too much trouble

$$\begin{aligned}
\sum_{ij} \left( (x_1 \wedge x_2) \cdot x^i \right) \cdot x^j\, \partial_i f_j &= \sum_{ij} \left( x_1 \delta_2^i - x_2 \delta_1^i \right) \cdot x^j\, \partial_i f_j \\
&= \sum_j \left( x_1 \cdot x^j\, \partial_2 f_j - x_2 \cdot x^j\, \partial_1 f_j \right) \\
&= \partial_2 f_1 - \partial_1 f_2.
\end{aligned} \tag{2.100}$$

For the second integrand, we have

$$\begin{aligned}
\sum_{ij} f_j \left( (x_1 \wedge x_2) \cdot x^i \right) \cdot (\partial_i x^j) &= \sum_j f_j \sum_i \left( x_1 \delta_2^i - x_2 \delta_1^i \right) \cdot (\partial_i x^j) \\
&= \sum_j f_j \left( x_1 \cdot (\partial_2 x^j) - x_2 \cdot (\partial_1 x^j) \right).
\end{aligned} \tag{2.101}$$

We can apply the chain rule (backwards) to the portion in brackets to find

$$\begin{aligned}
x_1 \cdot (\partial_2 x^j) - x_2 \cdot (\partial_1 x^j) &= \cancel{\partial_2 (x_1 \cdot x^j)} - (\partial_2 x_1) \cdot x^j - \cancel{\partial_1 (x_2 \cdot x^j)} + (\partial_1 x_2) \cdot x^j \\
&= x^j \cdot (\partial_1 x_2 - \partial_2 x_1) \\
&= x^j \cdot \left( \frac{\partial}{\partial u_1} \frac{\partial x}{\partial u_2} - \frac{\partial}{\partial u_2} \frac{\partial x}{\partial u_1} \right) \\
&= 0.
\end{aligned} \tag{2.102}$$

In this reduction the derivatives of $x_i \cdot x^j = \delta_i^j$ were killed since those are constants (either zero or one). The final step relies on the assumption that our vector parameterization is well behaved enough that the mixed partials are equal.
Substituting these results into theorem 2.9 we find

$$\begin{aligned}
\int_{\partial S} dx \cdot f &= \int_{\partial S} (du_1\, x_1 + du_2\, x_2) \cdot \left( \sum_i f_i x^i \right) \\
&= \int_{\partial S} (du_1\, f_1 + du_2\, f_2) \\
&= \int_S du_1 du_2 \left( \partial_2 f_1 - \partial_1 f_2 \right),
\end{aligned} \tag{2.103}$$

which completes the proof.

2.3.8 Volume integral.

A three parameter volume, and the corresponding differentials with respect to those parameters, are sketched in fig. 2.11.

Figure 2.11: Three parameter volume element.



Given parameters $u_1, u_2, u_3$, we can denote the differentials along each of the parameterization directions as

$$dx_1 = \frac{\partial x}{\partial u_1}\, du_1 = x_1\, du_1, \qquad dx_2 = \frac{\partial x}{\partial u_2}\, du_2 = x_2\, du_2, \qquad dx_3 = \frac{\partial x}{\partial u_3}\, du_3 = x_3\, du_3. \tag{2.104}$$

The trivector valued volume element for this parameterization is

$$d^3 x = dx_1 \wedge dx_2 \wedge dx_3 = d^3 u\, (x_1 \wedge x_2 \wedge x_3), \tag{2.105}$$

where $d^3 u = du_1 du_2 du_3$. The vector derivative, the projection of the gradient onto the volume at the point of integration (also called the tangent space), now has three components

$$\partial = \sum_i x^i (x_i \cdot \nabla) = x^1 \frac{\partial}{\partial u_1} + x^2 \frac{\partial}{\partial u_2} + x^3 \frac{\partial}{\partial u_3} \equiv x^1 \partial_1 + x^2 \partial_2 + x^3 \partial_3. \tag{2.106}$$

The volume integral specialization of definition 2.4 can now be stated

Definition 2.7: Multivector volume integral.

Given a connected volume $V$ parameterized by three parameters, and multivector functions $F, G$, we define the volume integral as

$$\int_V F\, d^3 x\, \overleftrightarrow{\partial}\, G \equiv \int_V F\, d^3 x\, \overleftarrow{\partial}\, G + \int_V F\, d^3 x\, \overrightarrow{\partial}\, G,$$

where the three parameter differential form $d^3 x = d^3 u\, x_1 \wedge x_2 \wedge x_3$, $d^3 u = du_1 du_2 du_3$ varies over the volume, and $\partial$ acts on $F, G$, but not on the volume element $d^3 x$.

The volume integral specialization of theorem 2.3 is

Theorem 2.11: Multivector volume integral.

Given a connected volume $V$ parameterized by three parameters for which $dx_1, dx_2, dx_3$ is a right handed triple, and multivector functions $F, G$, a volume integral can be reduced to a surface integral as follows

$$\int_V F\, d^3 x\, \overleftrightarrow{\partial}\, G = \int_{\partial V} F\, d^2 x\, G,$$

where $\partial V$ is the boundary of the volume $V$, and $d^2 x$ is the counterclockwise oriented area element on the boundary of the volume. In $\mathbb{R}^3$ with $d^3 x = I\, dV$, $d^2 x = I \hat{n}\, dA$, this can be written

$$\int_V dV\, F\, \overleftrightarrow{\partial}\, G = \int_{\partial V} dA\, F \hat{n} G.$$

To see why this works, and to define $d^2 x$ more precisely, we would first like to reduce the product of the volume element and the vector derivative

$$d^3 x\, \partial = d^3 u\, (x_1 \wedge x_2 \wedge x_3) \left( x^1 \partial_1 + x^2 \partial_2 + x^3 \partial_3 \right). \tag{2.107}$$

Since all $x^i$ lie within $\operatorname{span}\{x_1, x_2, x_3\}$, this multivector product has only a vector grade. That is

$$(x_1 \wedge x_2 \wedge x_3)\, x^i = (x_1 \wedge x_2 \wedge x_3) \cdot x^i + \cancel{(x_1 \wedge x_2 \wedge x_3) \wedge x^i}, \tag{2.108}$$

for all $x^i$. These products reduce to

$$\begin{aligned}
(x_2 \wedge x_3 \wedge x_1)\, x^1 &= x_2 \wedge x_3 \\
(x_3 \wedge x_1 \wedge x_2)\, x^2 &= x_3 \wedge x_1 \\
(x_1 \wedge x_2 \wedge x_3)\, x^3 &= x_1 \wedge x_2.
\end{aligned} \tag{2.109}$$
Inserting eq. (2.109) into the volume integral, we find

$$\begin{aligned}
\int_V F\, d^3 x\, \overleftrightarrow{\partial}\, G &= \int_V F\, d^3 x\, \overleftarrow{\partial}\, G + \int_V F\, d^3 x\, \overrightarrow{\partial}\, G \\
&= \int_V d^3 u \left( (\partial_1 F)\, x_2 \wedge x_3\, G + (\partial_2 F)\, x_3 \wedge x_1\, G + (\partial_3 F)\, x_1 \wedge x_2\, G \right) \\
&\quad + \int_V d^3 u \left( F\, x_2 \wedge x_3\, (\partial_1 G) + F\, x_3 \wedge x_1\, (\partial_2 G) + F\, x_1 \wedge x_2\, (\partial_3 G) \right) \\
&= \int_V d^3 u \left( \partial_1 (F\, x_2 \wedge x_3\, G) + \partial_2 (F\, x_3 \wedge x_1\, G) + \partial_3 (F\, x_1 \wedge x_2\, G) \right) \\
&\quad - \int_V d^3 u\, F \left( \partial_1 (x_2 \wedge x_3) + \partial_2 (x_3 \wedge x_1) + \partial_3 (x_1 \wedge x_2) \right) G.
\end{aligned} \tag{2.110}$$

The sum within the second integral is, writing $I_k = x_1 \wedge x_2 \wedge x_3$,

$$\begin{aligned}
\sum_{i=1}^3 \partial_i \left( I_k \cdot x^i \right) &= \partial_1 \left( (x_2 \wedge x_3 \wedge x_1) \cdot x^1 \right) + \partial_2 \left( (x_3 \wedge x_1 \wedge x_2) \cdot x^2 \right) + \partial_3 \left( (x_1 \wedge x_2 \wedge x_3) \cdot x^3 \right) \\
&= \partial_1 (x_2 \wedge x_3) + \partial_2 (x_3 \wedge x_1) + \partial_3 (x_1 \wedge x_2) \\
&= (\partial_1 x_2) \wedge x_3 + x_2 \wedge (\partial_1 x_3) + (\partial_2 x_3) \wedge x_1 + x_3 \wedge (\partial_2 x_1) + (\partial_3 x_1) \wedge x_2 + x_1 \wedge (\partial_3 x_2) \\
&= x_2 \wedge (\partial_1 x_3 - \partial_3 x_1) + x_3 \wedge (\partial_2 x_1 - \partial_1 x_2) + x_1 \wedge (\partial_3 x_2 - \partial_2 x_3) \\
&= x_2 \wedge \left( \frac{\partial^2 x}{\partial u_1 \partial u_3} - \frac{\partial^2 x}{\partial u_3 \partial u_1} \right) + x_3 \wedge \left( \frac{\partial^2 x}{\partial u_2 \partial u_1} - \frac{\partial^2 x}{\partial u_1 \partial u_2} \right) + x_1 \wedge \left( \frac{\partial^2 x}{\partial u_3 \partial u_2} - \frac{\partial^2 x}{\partial u_2 \partial u_3} \right),
\end{aligned} \tag{2.111}$$

which is zero by equality of mixed partials. This leaves three perfect differentials, which can be integrated separately, giving
$$\begin{aligned}
\int_V F\, d^3 x\, \overleftrightarrow{\partial}\, G &= \int du_2 du_3\, (F\, x_2 \wedge x_3\, G)\big|_{\Delta u_1} + \int du_3 du_1\, (F\, x_3 \wedge x_1\, G)\big|_{\Delta u_2} + \int du_1 du_2\, (F\, x_1 \wedge x_2\, G)\big|_{\Delta u_3} \\
&= \int (F\, dx_2 \wedge dx_3\, G)\big|_{\Delta u_1} + \int (F\, dx_3 \wedge dx_1\, G)\big|_{\Delta u_2} + \int (F\, dx_1 \wedge dx_2\, G)\big|_{\Delta u_3}.
\end{aligned} \tag{2.112}$$

This proves the theorem from an algebraic point of view. With the aid of a geometrical model, such as that of fig. 2.12 (assuming that $dx_1, dx_2, dx_3$ is a right handed triple), it is possible to convince oneself that the two parameter integrands describe an integral over a counterclockwise oriented surface.
We obtain the RHS of theorem 2.11 if we introduce a mnemonic for the bounding oriented surface of the volume

$$d^2 x \equiv dx_1 \wedge dx_2 + dx_2 \wedge dx_3 + dx_3 \wedge dx_1, \tag{2.113}$$

where it is implied that each component of this area element, and anything that it multiplies, is evaluated on the boundaries of the integration volume (for the parameter omitted), as detailed explicitly in eq. (2.112).

2.3.9 Three parameter Stokes’ theorem.

Three special cases of theorem 2.11 can be obtained by integrating scalar, vector or bivector
functions over the volume, as follows


Figure 2.12: Differential surface of a volume.

Theorem 2.12: Volume integral of scalar function (Stokes').

Given a scalar function $f(x)$ its volume integral is given by

$$\int_V d^3 x \cdot \partial f = \int_V d^3 x \cdot \nabla f = \int_{\partial V} d^2 x\, f.$$

In $\mathbb{R}^3$ this can be written as

$$\int_V dV\, \nabla f = \int_{\partial V} dA\, \hat{n} f,$$

where $\hat{n}$ is the outwards normal specified by $d^2 x = I \hat{n}\, dA$, and $d^3 x = I\, dV$.

Theorem 2.13: Volume integral of vector function (Stokes').

Given a vector function $f(x)$ the volume integral of the (bivector) curl is related to a surface integral by

$$\int_V d^3 x \cdot (\partial \wedge f) = \int_V d^3 x \cdot (\nabla \wedge f) = \int_{\partial V} d^2 x \cdot f.$$

In $\mathbb{R}^3$ this can be written as

$$\int_V dV\, \nabla \times f = \int_{\partial V} dA\, \hat{n} \times f,$$

or, with a duality transformation $f = I B$, where $B$ is a bivector,

$$\int_V dV\, \nabla \cdot B = \int_{\partial V} dA\, \hat{n} \cdot B,$$

where $\hat{n}$ is the outwards normal specified by $d^2 x = I \hat{n}\, dA$, and $d^3 x = I\, dV$.

Theorem 2.14: Volume integral of bivector function (Stokes', divergence).

Given a bivector function $B(x)$, the volume integral of the (trivector) curl is related to a surface integral by

$$\int_V d^3 x \cdot (\partial \wedge B) = \int_V d^3 x \cdot (\nabla \wedge B) = \int_{\partial V} d^2 x \cdot B.$$

In $\mathbb{R}^3$ this can be written as

$$\int_V dV\, \nabla \wedge B = \int_{\partial V} dA\, \hat{n} \wedge B,$$

or, making a duality transformation $B(x) = I f(x)$, where $f$ is a vector, by

$$\int_V dV\, \nabla \cdot f = \int_{\partial V} dA\, \hat{n} \cdot f,$$

where $\hat{n}$ is the outwards normal specified by $d^2 x = I \hat{n}\, dA$, and $d^3 x = I\, dV$.

2.3.10 Divergence theorem.

Observe that for $\mathbb{R}^3$ there are dot product relations in each of theorem 2.12, theorem 2.13 and theorem 2.14, which can be summarized as

Theorem 2.15: Divergence theorem.

Given an $\mathbb{R}^3$ multivector $M$ containing only grades 0, 1, or 2,

$$\int_V dV\, \nabla \cdot M = \int_{\partial V} dA\, \hat{n} \cdot M,$$

where $\hat{n}$ is the outwards normal to the surface bounding $V$.
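The vector grade case of theorem 2.15 is easy to spot check numerically; the field below is an arbitrary illustrative choice, integrated over the unit cube:

```python
import numpy as np

# Numerical spot check of theorem 2.15 (divergence theorem) on the unit
# cube, for the arbitrary illustrative field f = (x y^2, y z^2, z x^2),
# with div f = x^2 + y^2 + z^2.  f vanishes on the x = 0, y = 0, z = 0
# faces, so only three faces contribute to the surface integral.

n = 100
c = (np.arange(n) + 0.5) / n                  # midpoint samples of [0, 1]
X, Y, Z = np.meshgrid(c, c, c, indexing='ij')
volume_integral = np.sum(X**2 + Y**2 + Z**2) / n**3

U, V = np.meshgrid(c, c, indexing='ij')
surface_integral = (
    np.sum(U**2) / n**2     # x = 1 face: n . f = f_x = y^2, U plays y
    + np.sum(V**2) / n**2   # y = 1 face: n . f = f_y = z^2, V plays z
    + np.sum(U**2) / n**2   # z = 1 face: n . f = f_z = x^2, U plays x
)

print(volume_integral, surface_integral)  # both approach 1
```

Both quadratures use the midpoint rule, so they agree with each other and with the exact value of 1 to high precision.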

2.4 multivector fourier transform and phasors.

It will often be convenient to utilize time harmonic (frequency domain) representations. This
can be achieved by utilizing Fourier transform pairs or with a phasor representation.
We may define Fourier transform pairs of multivector fields and sources in the conventional
fashion

Definition 2.8: Multivector Fourier transform pairs

The Fourier transform pair for a multivector valued function $f(x, t)$ will be written as

$$\begin{aligned}
f(x, t) &= \int f_\omega(x)\, e^{j\omega t}\, d\omega \\
f_\omega(x) &= \frac{1}{2\pi} \int f(x, t)\, e^{-j\omega t}\, dt,
\end{aligned}$$

where $j$ is an arbitrary scalar imaginary that commutes with all multivectors.

In these transform pairs, the imaginary j need not be represented by any geometrical imagi-
nary such as e12 . In particular, we need not assume that the representation of j is the R3 pseu-
doscalar I, despite the fact that I does commute with all R3 multivectors. We wish to have the
freedom to assume that non-geometric real and imaginary operations can be performed without
picking or leaving out any specific grade pseudoscalar components of the multivector fields or
sources, so we won't impose any a-priori restrictions on the representations of $j$. In particular, this provides the freedom to utilize phasor (fixed frequency) representations of our multivector functions. We will use the engineering convention for our phasor representations, where a complex exponential time dependence of the following form is assumed

Definition 2.9: Multivector phasor representation.

The phasor representation $f(x)$ of a multivector valued (real) function $f(x, t)$ is defined implicitly as

$$f(x, t) = \operatorname{Re}\left( f(x)\, e^{j\omega t} \right),$$

where $j$ is an arbitrary scalar imaginary that commutes with all multivectors.

The complex valued multivector $f(x)$ is still generated from the real Euclidean basis for $\mathbb{R}^3$, so there will be no reason to introduce complex inner product spaces into the mix.
The reader must take care when reading any literature that utilizes Fourier transforms or phasor representation, since the conventions vary. In particular, the physics representation of a phasor typically uses the opposite sign convention $f(x, t) = \operatorname{Re}\left( f(x)\, e^{-i\omega t} \right)$, which toggles the sign of all the imaginaries in derived results.
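The engineering phasor convention can be sketched numerically for a single scalar component; the frequency and amplitude below are arbitrary illustrative values:

```python
import numpy as np

# Sketch of the engineering phasor convention of definition 2.9 for a
# single scalar component: the complex amplitude f represents the real
# signal Re(f e^{jwt}), and can be recovered from one period of that
# signal.  The frequency and amplitude are arbitrary illustrations.

omega = 2 * np.pi * 60.0                    # 60 Hz
f_phasor = 3.0 - 4.0j                       # phasor amplitude, |f| = 5

T = 2 * np.pi / omega
N = 4096
t = (np.arange(N) + 0.5) * T / N            # midpoint samples of one period
signal = np.real(f_phasor * np.exp(1j * omega * t))

# recover the phasor: f = (2/T) * integral over one period of
# signal(t) e^{-jwt} dt
recovered = (2.0 / T) * np.sum(signal * np.exp(-1j * omega * t)) * (T / N)
print(recovered)  # approximately 3 - 4j
```

With the physics convention $e^{-i\omega t}$ the recovered amplitude would instead be the complex conjugate, which is the sign toggling mentioned above.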

2.5 green’s functions.

2.5.1 Motivation.

Every engineer’s toolbox includes Laplace and Fourier transform tricks for transforming dif-
ferential equations to the frequency domain. Here the space and time domain equivalent of
the frequency and time domain linear system response function, called the Green’s function, is
introduced.
Everybody’s favorite differential equation, the harmonic oscillator, can be used as an illustra-
tive example
x00 + kx0 + (ω0 )2 x = f (t). (2.114)
Here derivatives are with respect to time, ω0 is the intrinsic angular frequency of the oscillator,
k is a damping factor, and f (t) is a forcing function. If the oscillator represents a child on a
swing at the park (a pendulum system), then k represents the friction in the swing pivot and
retardation due to wind, and the forcing function represents the father pushing the swing. The
forcing function f (t) could include an initial impulse to get the child up to speed, or could have
a periodic aspect, such as the father running underdogs6 as the child gleefully shouts “Again,
again, again!”
The full solution of this problem is $x(t) = x_s(t) + x_0(t)$, where $x_s(t)$ is a particular solution of eq. (2.114) and $x_0$ is any solution of the homogeneous equation $x_0'' + k x_0' + \omega_0^2 x_0 = 0$, picked to satisfy the boundary value constraints of the problem.
Let's attack this problem initially with Fourier transforms (definition 2.8). We can assume zero position and velocity for the non-homogeneous problem, since we can adjust the boundary conditions with the homogeneous solution $x_0(t)$. With zero boundary conditions on $x, x'$, the transform of eq. (2.114) is

$$\left( (j\omega)^2 + j\omega k + \omega_0^2 \right) X(\omega) = F(\omega), \tag{2.115}$$
6 The underdog is a non-passive swing pushing technique, where you run behind and under the swing, giving a push
as you go. Before my kids learned to “pump their legs”, and even afterwards, this was their favorite way of being
pushed on the swing. With two or more kids this forcing function tires quickly, as it is split between the oscillating
children.

so the system is solved in the frequency domain by the system response function $G(\omega)$

$$X(\omega) = G(\omega) F(\omega), \tag{2.116}$$

where

$$G(\omega) = -\frac{1}{\omega^2 - j\omega k - \omega_0^2}. \tag{2.117}$$
We can apply the inverse transformation to find the time domain solution for the forced oscillator problem

$$\begin{aligned}
x(t) &= \int d\omega\, G(\omega) F(\omega) e^{j\omega t} \\
&= \int d\omega\, G(\omega) \left( \frac{1}{2\pi} \int dt'\, f(t') e^{-j\omega t'} \right) e^{j\omega t} \\
&= \int dt'\, f(t') \left( \frac{1}{2\pi} \int d\omega\, G(\omega) e^{j\omega(t - t')} \right).
\end{aligned} \tag{2.118}$$

The frequency domain integral is the Green's function. We'll write this as

$$G(t, t') = \frac{1}{2\pi} \int d\omega\, G(\omega) e^{j\omega(t - t')}. \tag{2.119}$$

If we can evaluate this integral, then the system can be considered solved, where a solution is given by the convolution integral

$$x(t) = \int dt'\, f(t') G(t, t') + x_0(t). \tag{2.120}$$

The Green’s function is the weighting factor that determines how much of f (t0 ) for each time
t0 contributes to the motion of the system that is explicitly due to the forcing function. Green’s
functions for physical problems are causal, so only forcing events in the past contribute to the
current state of the system (i.e. if you were to depend on only future pushes of the swing, you
would have a very bored child.)
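For this damped oscillator the causal, underdamped Green's function works out to $G(t - t') = e^{-k(t-t')/2} \sin\left(\omega_1 (t - t')\right)/\omega_1$ with $\omega_1 = \sqrt{\omega_0^2 - k^2/4}$, a standard result quoted here without derivation. A numerical sketch verifies that convolving it with a unit step drive reproduces the classic step response:

```python
import numpy as np

# Check of the causal underdamped Green's function for the damped
# oscillator x'' + k x' + w0^2 x = f(t): G(tau) = e^{-k tau/2}
# sin(w1 tau)/w1 for tau > 0, with w1 = sqrt(w0^2 - k^2/4) (a standard
# result, quoted without derivation).  Convolving it with a unit step
# drive must match the closed form step response (zero initial
# conditions).  k, w0 and t are arbitrary illustrative values.

k, w0 = 0.5, 2.0
w1 = np.sqrt(w0**2 - k**2 / 4)

def G(tau):
    return np.where(tau > 0, np.exp(-k * tau / 2) * np.sin(w1 * tau) / w1, 0.0)

t = 5.0
n = 20000
tp = (np.arange(n) + 0.5) * t / n           # t' samples, midpoint rule
x_conv = np.sum(G(t - tp)) * (t / n)        # convolution with f(t') = 1

# closed form step response for the underdamped oscillator
x_exact = (1 - np.exp(-k * t / 2)
           * (np.cos(w1 * t) + (k / (2 * w1)) * np.sin(w1 * t))) / w0**2

print(x_conv, x_exact)
```

Note that only past times $t' < t$ contribute, which is the causality property described above.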
An alternate way of viewing a linear systems problem is to assume that a convolution solution of the form eq. (2.120) must exist. Since the equation is linear, it is reasonable to assume that a linear weighted sum of all the forcing function values must contribute to the solution. If such a solution is assumed, then we can operate on that solution with the differential operator for the problem. For our harmonic oscillator problem that operator is

$$L = \frac{\partial^2}{\partial t^2} + k \frac{\partial}{\partial t} + \omega_0^2. \tag{2.121}$$
2.5 green’s functions. 113

We find

$$\begin{aligned}
f(t) &= \left( \frac{\partial^2}{\partial t^2} + k \frac{\partial}{\partial t} + \omega_0^2 \right) x(t) \\
&= \left( \frac{\partial^2}{\partial t^2} + k \frac{\partial}{\partial t} + \omega_0^2 \right) \int dt'\, f(t') G(t, t') \\
&= \int dt'\, f(t') \left( \frac{\partial^2}{\partial t^2} + k \frac{\partial}{\partial t} + \omega_0^2 \right) G(t, t').
\end{aligned} \tag{2.122}$$

We see that the Green's function, when acted on by the differential operator, must have the characteristics of a delta function

$$\left( \frac{\partial^2}{\partial t^2} + k \frac{\partial}{\partial t} + \omega_0^2 \right) G(t, t') = \delta(t - t'). \tag{2.123}$$

The problem of determining the Green's function, implicitly determining the solution of any forced system, can be viewed as seeking the solution of distribution equations of the form

$$L\, G(t, t') = \delta(t - t'). \tag{2.124}$$

Framing the problem this way is independent of whatever techniques (transform or other) that we may choose to use to determine the structure of the Green's function itself. Observe that the Green's function itself is not unique. In particular, we may add any solution of the homogeneous problem $L\, G_0(t, t') = 0$ to the Green's function, just as we can do so for the forced system itself.
We will see that Green’s functions provide a general method of solving many of the linear
differential equations that will be encountered in electromagnetism.

2.5.1.1 Time domain problems in electromagnetism

Examples of the PDEs that we can apply Green's function techniques to include

$$\left( \nabla + \frac{1}{c} \frac{\partial}{\partial t} \right) F(x, t) = J(x, t) \tag{2.125a}$$

$$\left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) F(x, t) = \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) \left( \nabla + \frac{1}{c} \frac{\partial}{\partial t} \right) F(x, t) = B(x, t). \tag{2.125b}$$

The reader is no doubt familiar with the wave equation (eq. (2.125b)), where F is the waving
function, and B is the forcing function. Scalar and vector valued wave equations are encountered

in scalar and vector forms in conventional electromagnetism. We will see multivector variations
of the wave equation, so it should be assumed that F and B are multivector valued.
Equation (2.125a) is actually the geometric algebra form of Maxwell’s equation (singular),
where F is a multivector with grades 1 and 2, and J is a multivector containing all the charge and
current density contributions. We will call the operator in eq. (2.125a) the spacetime gradient7 .
Armed with Fourier transform or phasor representations, the frequency domain representations of eq. (2.125) are found to be

$$(\nabla + jk)\, F(x) = J(x) \tag{2.126a}$$

$$\left( \nabla^2 + k^2 \right) F(x) = (\nabla - jk)(\nabla + jk)\, F(x) = B(x), \tag{2.126b}$$

where $k = \omega/c$, and any explicit frequency dependence in our transform pairs has been suppressed. We will call these equations the first and second order Helmholtz equations respectively. The first order equation applies a multivector differential operator to a multivector field, which must equal the multivector forcing function (the sources).
For statics problems (k = 0), we may work with real fields and sources, dispensing with any
need to take real parts.

2.5.2 Green’s function solutions.

2.5.2.1 Unbounded.

The operators in eq. (2.125) and eq. (2.126) all have a similar linear structure. Abstracting that structure, all these problems have the form

$$L\, F(x) = J(x), \tag{2.127}$$

where $L$ is an operator formed from a linear combination of the linear operators $1, \nabla, \nabla^2, \partial_t, \partial_{tt}$. Given the linear structure of the PDE that we wish to solve, it makes sense to assume that the solutions also have a linear structure. The most general such solution we can assume has the form

$$F(x, t) = \int G(x, x'; t, t')\, J(x', t')\, dV' dt' + F_0(x, t), \tag{2.128}$$

7 A slightly different operator is also called the spacetime gradient in STA (Space Time Algebra) [6], which employs a
non-Euclidean basis to generate a four dimensional relativistic geometric algebra. Our spacetime gradient is related
to the STA spacetime gradient by a constant factor.
2.5 green’s functions. 115

where $F_0(x, t)$ is any solution to the equivalent homogeneous equation $L F_0 = 0$, and $G(x, x'; t, t')$ is the Green's function (to be determined) associated with eq. (2.127). Operating on the presumed solution eq. (2.128) with $L$ yields

$$\begin{aligned}
J(x, t) &= L\, F(x, t) \\
&= L \left( \int G(x, x'; t, t')\, J(x', t')\, dV' dt' + F_0(x, t) \right) \\
&= \int \left( L\, G(x, x'; t, t') \right) J(x', t')\, dV' dt',
\end{aligned} \tag{2.129}$$

which shows that we require the Green's function to have delta function semantics, satisfying

$$L\, G(x, x'; t, t') = \delta(x - x')\, \delta(t - t'). \tag{2.130}$$

The scalar valued Green’s functions for the Laplacian and the (2nd order) Helmholtz equa-
tions are well known. The Green’s functions for the spacetime gradient and the 1st order
Helmholtz equation (which is just the gradient when k = 0) are multivector valued and will
be derived here.

2.5.2.2 Green’s theorem.

When the presumed solution is a superposition of only states in a bounded region then life gets a bit more interesting. For instance, consider a problem for which the differential operator is a function of space only, with a presumed solution such as

$$F(x) = \int_V dV'\, B(x') G(x, x') + F_0(x). \tag{2.131}$$

This sort of problem requires different treatment for operators that are first and second order in the gradient.
For the second order problems, we require Green’s theorem, which must be generalized
slightly for use with multivector fields.
The basic idea is that we can relate $F(x')\, (\nabla')^2 G(x, x')$ and $G(x, x')\, (\nabla')^2 F(x')$, which differ only by boundary terms. That relationship can be expressed as the integral of an antisymmetric sandwich of the two functions

Theorem 2.16: Green's theorem

Given a multivector function $F$ and a scalar function $G$

$$\int_V \left( F \nabla^2 G - G \nabla^2 F \right) dV = \int_{\partial V} \left( F\, \hat{n} \cdot \nabla G - G\, \hat{n} \cdot \nabla F \right) dA,$$

where ∂V is the boundary of the volume V.

A straightforward, but perhaps inelegant way of proving this theorem is to expand the antisymmetric product in coordinates

$$\begin{aligned}
F \nabla^2 G - G \nabla^2 F &= \sum_k \left( F \partial_k \partial_k G - G \partial_k \partial_k F \right) \\
&= \sum_k \left( \partial_k \left( F \partial_k G - G \partial_k F \right) - (\partial_k F)(\partial_k G) + (\partial_k G)(\partial_k F) \right).
\end{aligned} \tag{2.132}$$

Since $G$ is a scalar, the last two terms cancel, and we can integrate

$$\int_V \left( F \nabla^2 G - G \nabla^2 F \right) dV = \sum_k \int_V \partial_k \left( F \partial_k G - G \partial_k F \right) dV. \tag{2.133}$$

Each integral above involves one component of the gradient. From theorem 2.3 we know that

$$\int_V \nabla Q\, dV = \int_{\partial V} \hat{n} Q\, dA, \tag{2.134}$$

for any multivector $Q$. Equating components gives

$$\int_V \partial_k Q\, dV = \int_{\partial V} \hat{n} \cdot e_k\, Q\, dA, \tag{2.135}$$

which can be substituted into eq. (2.133) to find

$$\begin{aligned}
\int_V \left( F \nabla^2 G - G \nabla^2 F \right) dV &= \sum_k \int_{\partial V} \hat{n} \cdot e_k \left( F \partial_k G - G \partial_k F \right) dA \\
&= \int_{\partial V} \left( F (\hat{n} \cdot \nabla) G - G (\hat{n} \cdot \nabla) F \right) dA,
\end{aligned} \tag{2.136}$$

which proves the theorem.

2.5.2.3 Bounded solutions to first order problems.

For first order problems we will need an intermediate result similar to Green’s theorem.

Lemma 2.1: Normal relations for a gradient sandwich.

Given multivector functions $F(x')$, $G(x, x')$, and a gradient $\nabla'$ acting bidirectionally on functions of $x'$, we have

$$-\int_V \left( G(x, x') \overleftarrow{\nabla}' \right) F(x')\, dV' = \int_V G(x, x') \left( \overrightarrow{\nabla}' F(x') \right) dV' - \int_{\partial V} G(x, x')\, \hat{n}' F(x')\, dA'.$$

This follows directly from theorem 2.3

$$\begin{aligned}
\int_{\partial V} G(x, x')\, \hat{n}' F(x')\, dA' &= \int_V G(x, x')\, \overleftrightarrow{\nabla}' F(x')\, dV' \\
&= \int_V \left( G(x, x') \overleftarrow{\nabla}' \right) F(x')\, dV' + \int_V G(x, x') \left( \overrightarrow{\nabla}' F(x') \right) dV',
\end{aligned} \tag{2.137}$$

which can be rearranged to prove lemma 2.1.

2.5.3 Helmholtz equation.

2.5.3.1 Unbounded superposition solutions for the Helmholtz equation.

The specialization of eq. (2.130) to the Helmholtz equation eq. (2.126b) is

$$\left( \nabla^2 + k^2 \right) G(x, x') = \delta(x - x'). \tag{2.138}$$

While it is possible [20] to derive the Green's function using Fourier transform techniques, we will state the result instead, which is well known

Theorem 2.17: Green's function for the Helmholtz operator.

The advancing (causal), and the receding (acausal) Green's functions satisfying eq. (2.138) are respectively

$$\begin{aligned}
G_{\mathrm{adv}}(x, x') &= -\frac{e^{-jk \left\lVert x - x' \right\rVert}}{4\pi \left\lVert x - x' \right\rVert} \\
G_{\mathrm{rec}}(x, x') &= -\frac{e^{jk \left\lVert x - x' \right\rVert}}{4\pi \left\lVert x - x' \right\rVert}.
\end{aligned}$$

We will use the advancing (causal) Green's function, and refer to this function as $G(x, x')$ without any subscript. Because it may not be obvious that these Green's function representations are valid in a multivector context, a demonstration of this fact can be found in appendix C.1.
Observe that as a special case, the Helmholtz Green's function reduces to the Green's function for the Laplacian when $k = 0$

$$G(x, x') = -\frac{1}{4\pi \left\lVert x - x' \right\rVert}. \tag{2.139}$$
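Away from the source point the theorem is easy to spot check with finite differences; the test point and wavenumber below are arbitrary illustrative choices, with $x'$ placed at the origin:

```python
import numpy as np

# Spot check of theorem 2.17 away from the source point: the advancing
# Green's function G = -exp(-jk r)/(4 pi r) should satisfy
# (Laplacian + k^2) G = 0 for x != x'.  The Laplacian is approximated
# by second order central differences; the test point and k are
# arbitrary illustrative choices (with x' at the origin).

k = 2.0

def G(x):
    r = np.linalg.norm(x)
    return -np.exp(-1j * k * r) / (4 * np.pi * r)

x = np.array([0.3, -0.2, 0.5])
h = 1e-3
laplacian = sum(
    (G(x + h * e) - 2 * G(x) + G(x - h * e)) / h**2 for e in np.eye(3)
)
residual = laplacian + k**2 * G(x)
print(abs(residual))  # small relative to |k^2 G(x)| ~ 0.5
```

The delta function behavior at $x = x'$ itself is of course not visible to such a pointwise check; that requires the distributional argument of appendix C.1.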

2.5.3.2 Bounded superposition solutions for the Helmholtz equation.

For our application of theorem 2.17 to the Helmholtz problem, we are actually interested in an antisymmetric sandwich of the Helmholtz operator by the function $F$ and the scalar (Green's) function $G$, but that reduces to an antisymmetric sandwich of our functions around the Laplacian

$$F \left( \nabla^2 + k^2 \right) G - G \left( \nabla^2 + k^2 \right) F = F \nabla^2 G + \cancel{k^2 F G} - G \nabla^2 F - \cancel{k^2 G F} = F \nabla^2 G - G \nabla^2 F, \tag{2.140}$$

so

$$\begin{aligned}
\int_V F(x') \left( (\nabla')^2 + k^2 \right) G(x, x')\, dV' &= \int_V G(x, x') \left( (\nabla')^2 + k^2 \right) F(x')\, dV' \\
&\quad + \int_{\partial V} \left( F(x') (\hat{n}' \cdot \nabla') G(x, x') - G(x, x') (\hat{n}' \cdot \nabla') F(x') \right) dA'.
\end{aligned} \tag{2.141}$$

This shows that if we assume the Green's function satisfies the delta function condition eq. (2.138), then the general solution of eq. (2.126b) formed from a bounded superposition of sources is

$$\begin{aligned}
F(x) &= \int_V G(x, x') B(x')\, dV' \\
&\quad + \int_{\partial V} \left( F(x') (\hat{n}' \cdot \nabla') G(x, x') - G(x, x') (\hat{n}' \cdot \nabla') F(x') \right) dA'.
\end{aligned} \tag{2.142}$$

We are also free to add in any specific solution $F_0(x)$ that satisfies the homogeneous Helmholtz equation. There is also freedom to add any solution of the homogeneous Helmholtz equation to the Green's function itself, so it is not unique. For a bounded superposition we generally desire that the solution $F$ and its normal derivative, or the Green's function $G$ and its normal derivative, or an appropriate combination of the two, are zero on the boundary, so that the surface integral is killed.

2.5.4 First order Helmholtz equation.

The specialization of eq. (2.130) to the first order Helmholtz equation eq. (2.126a) is
(∇ + jk) G(x, x0 ) = δ(x − x0 ). (2.143)

This Green’s function is multivector valued


2.5 green’s functions. 119

Theorem 2.18: Green's function for the first order Helmholtz operator.

The Green's function satisfying

$$ \left(\overrightarrow{\nabla} + jk\right) G(\mathbf{x}, \mathbf{x}') = G(\mathbf{x}, \mathbf{x}')\left(-\overleftarrow{\nabla}' + jk\right) = \delta(\mathbf{x} - \mathbf{x}'), $$

is

$$ G(\mathbf{x}, \mathbf{x}') = \frac{e^{-jkr}}{4\pi r}\left( jk\left(1 + \hat{\mathbf{r}}\right) + \frac{\hat{\mathbf{r}}}{r} \right), $$

where r = x − x′, r = ‖r‖ and r̂ = r/r.

A special but important case is the k = 0 condition, which provides the Green's function for the gradient, which is vector valued

$$ G(\mathbf{x}, \mathbf{x}'; k = 0) = \frac{1}{4\pi} \frac{\hat{\mathbf{r}}}{r^2}. \tag{2.144} $$
If we denote the (advanced) Green's function for the 2nd order Helmholtz operator of theorem 2.17 as φ(x, x′), we must have

$$ \left(\overrightarrow{\nabla} + jk\right) G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}') = \left(\overrightarrow{\nabla} + jk\right)\left(\overrightarrow{\nabla} - jk\right) \phi(\mathbf{x}, \mathbf{x}'), \tag{2.145} $$

so we see that the Green's function is given by

$$ G(\mathbf{x}, \mathbf{x}') = \left(\overrightarrow{\nabla} - jk\right) \phi(\mathbf{x}, \mathbf{x}'). \tag{2.146} $$

This can be computed directly

$$ \begin{aligned} G(\mathbf{x}, \mathbf{x}') &= \left(\overrightarrow{\nabla} - jk\right)\left(-\frac{e^{-jkr}}{4\pi r}\right) \\ &= \left(\hat{\mathbf{r}} \frac{\partial}{\partial r} - jk\right)\left(-\frac{e^{-jkr}}{4\pi r}\right) \\ &= -\frac{e^{-jkr}}{4\pi}\left( \hat{\mathbf{r}}\left(-\frac{jk}{r} - \frac{1}{r^2}\right) - \frac{jk}{r} \right) \\ &= \frac{e^{-jkr}}{4\pi r}\left( jk\left(1 + \hat{\mathbf{r}}\right) + \frac{\hat{\mathbf{r}}}{r} \right), \end{aligned} \tag{2.147} $$

as claimed. Observe that since φ is scalar valued, we can also rewrite eq. (2.146) in terms of a right acting operator

$$ G(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x}, \mathbf{x}')\left(\overleftarrow{\nabla} - jk\right) = \phi(\mathbf{x}, \mathbf{x}')\left(-\overleftarrow{\nabla}' - jk\right), \tag{2.148} $$

so

$$ G(\mathbf{x}, \mathbf{x}')\left(-\overleftarrow{\nabla}' + jk\right) = \phi(\mathbf{x}, \mathbf{x}')\left((\nabla')^2 + k^2\right) = \delta(\mathbf{x} - \mathbf{x}'). \tag{2.149} $$

This is relevant for bounded superposition states, which we will discuss next now that the proof of theorem 2.18 is complete. In particular, addition of ∫_V G(x, x′) jk F(x′) dV′ to both sides of lemma 2.1 gives

$$ \int_V \left( G(\mathbf{x}, \mathbf{x}')\left(-\overleftarrow{\nabla}' + jk\right) \right) F(\mathbf{x}')\, dV' = \int_V G(\mathbf{x}, \mathbf{x}') \left( \left(\overrightarrow{\nabla}' + jk\right) F(\mathbf{x}') \right) dV' - \int_{\partial V} G(\mathbf{x}, \mathbf{x}')\, \hat{\mathbf{n}}' F(\mathbf{x}')\, dA'. \tag{2.150} $$

Utilizing theorem 2.18, and substituting J(x′) from eq. (2.126a), we find that one solution to the first order Helmholtz equation is

$$ F(\mathbf{x}) = \int_V G(\mathbf{x}, \mathbf{x}')\, J(\mathbf{x}')\, dV' - \int_{\partial V} G(\mathbf{x}, \mathbf{x}')\, \hat{\mathbf{n}}' F(\mathbf{x}')\, dA'. \tag{2.151} $$

We are free to add any specific solution F₀ that satisfies the homogeneous equation (∇ + jk) F₀ = 0.
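The radial derivative evaluation in the proof above can be double checked symbolically. The following sketch (an added illustration using sympy, not code from the text) verifies that the scalar and r̂ components of (r̂ ∂/∂r − jk)φ match the theorem 2.18 statement:

```python
# With phi = -exp(-jkr)/(4 pi r), the multivector Green's function
# G = (e^{-jkr}/(4 pi r)) ( jk (1 + rhat) + rhat/r ) should have
# scalar part -jk phi and rhat-coefficient d(phi)/dr.
import sympy as sp

r, k = sp.symbols('r k', positive=True)
j = sp.I
phi = -sp.exp(-j * k * r) / (4 * sp.pi * r)

scalar_part = -j * k * phi       # the -jk phi piece of (rhat d/dr - jk) phi
radial_part = sp.diff(phi, r)    # the coefficient of rhat

prefactor = sp.exp(-j * k * r) / (4 * sp.pi * r)
claimed_scalar = prefactor * j * k               # jk term of the theorem
claimed_radial = prefactor * (j * k + 1 / r)     # rhat (jk + 1/r) term

assert sp.simplify(scalar_part - claimed_scalar) == 0
assert sp.simplify(radial_part - claimed_radial) == 0
print("Green's function components verified")
```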

2.5.5 Spacetime gradient.

We want to find the Green's function that solves spacetime gradient equations of the form eq. (2.125a). For the wave equation operator, it is helpful to introduce a d'Alembertian (wave equation) operator, defined as follows.

Definition 2.10: d'Alembertian (wave equation) operator.

Let

$$ \square = \left(\nabla - \frac{1}{c}\frac{\partial}{\partial t}\right)\left(\nabla + \frac{1}{c}\frac{\partial}{\partial t}\right) = \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}. $$
We will be able to derive the Green's function for the spacetime gradient from the Green's function for the d'Alembertian. The Green's function for the spacetime gradient is multivector valued and given by the following.

Theorem 2.19: Green's function for the spacetime gradient.

The Green's function for the spacetime gradient, satisfying

$$ \left(\nabla + \frac{1}{c}\frac{\partial}{\partial t}\right) G(\mathbf{x} - \mathbf{x}', t - t') = \delta(\mathbf{x} - \mathbf{x}')\delta(t - t'), $$

is

$$ G(\mathbf{x} - \mathbf{x}', t - t') = \frac{1}{4\pi}\left( -\frac{\hat{\mathbf{r}}}{r}\frac{\partial}{\partial r} + \frac{\hat{\mathbf{r}}}{r^2} + \frac{1}{c r}\frac{\partial}{\partial t} \right) \delta(-r/c + t - t'), $$

where r = x − x′, r = ‖r‖ and r̂ = r/r.

With the help of eq. (C.18) it is possible to further evaluate the delta function derivatives, however, we will defer doing so until we are ready to apply this Green's function in a convolution integral to solve Maxwell's equation.

To prove this result, let φ(x − x′, t − t′) be the retarded time (causal) Green's function for the wave equation, satisfying

$$ \square \phi(\mathbf{x} - \mathbf{x}', t - t') = \left(\nabla + \frac{1}{c}\frac{\partial}{\partial t}\right)\left(\nabla - \frac{1}{c}\frac{\partial}{\partial t}\right) \phi(\mathbf{x} - \mathbf{x}', t - t') = \delta(\mathbf{x} - \mathbf{x}')\delta(t - t'). \tag{2.152} $$

This function has the value

$$ \phi(r, t - t') = -\frac{1}{4\pi r}\, \delta(-r/c + t - t'), \tag{2.153} $$

where r = x − x′, r = ‖r‖. Derivations of this Green's function, and its acausal advanced time friend, can be found in [20], [14], and use the usual Fourier transform and contour integration tricks.
Comparing eq. (2.152) to the defining statement of theorem 2.19, we see that the spacetime gradient Green's function is given by

$$ G(\mathbf{x} - \mathbf{x}', t - t') = \left(\nabla - \frac{1}{c}\frac{\partial}{\partial t}\right) \phi(r, t - t') = \left(\hat{\mathbf{r}}\frac{\partial}{\partial r} - \frac{1}{c}\frac{\partial}{\partial t}\right) \phi(r, t - t'), \tag{2.154} $$
where r̂ = r/r. Evaluating the derivatives gives

$$ \begin{aligned} G(\mathbf{r}, t - t') &= -\frac{1}{4\pi}\left( \hat{\mathbf{r}}\frac{\partial}{\partial r} - \frac{1}{c}\frac{\partial}{\partial t} \right) \frac{\delta(-r/c + t - t')}{r} \\ &= -\frac{1}{4\pi}\left( \frac{\hat{\mathbf{r}}}{r}\frac{\partial}{\partial r}\delta(-r/c + t - t') - \frac{\hat{\mathbf{r}}}{r^2}\delta(-r/c + t - t') - \frac{1}{c r}\frac{\partial}{\partial t}\delta(-r/c + t - t') \right), \end{aligned} \tag{2.155} $$

which completes the proof after some sign cancellation and minor rearrangement.

2.6 helmholtz theorem.

In conventional electromagnetism Maxwell's equations are posed in terms of separate divergence and curl equations. It is therefore desirable to show that the divergence and curl of a function, and its normal characteristics on the boundary of an integration volume, determine that function uniquely. This is known as the Helmholtz theorem.

Theorem 2.20: Helmholtz first theorem.

A vector M is uniquely determined by its divergence

$$ \nabla \cdot \mathbf{M} = s, $$

and curl

$$ \nabla \times \mathbf{M} = \mathbf{C}, $$

and its value over the boundary.

The conventional proof of Helmholtz’s theorem uses the Green’s function for the (second
order) Helmholtz operator. Armed with a vector valued Green’s function for the gradient, a first
order proof is also possible. As illustrations of the geometric integration theory developed in
this chapter, both strategies will be applied here to this problem.
In either case, we start by forming an even grade multivector (gradient) equation containing both the dot and cross product contributions

$$ \nabla \mathbf{M} = \nabla \cdot \mathbf{M} + I \nabla \times \mathbf{M} = s + I \mathbf{C}. \tag{2.156} $$

First order proof. For the first order case, we perform a grade one selection of lemma 2.1, setting F = M, where G is the Green's function for the gradient given by eq. (2.144). The proof follows directly

$$ \begin{aligned} \mathbf{M}(\mathbf{x}) &= -\int_V \left( G(\mathbf{x}, \mathbf{x}')\, \overleftarrow{\nabla}' \right) \mathbf{M}(\mathbf{x}')\, dV' \\ &= \int_V \left\langle G(\mathbf{x}, \mathbf{x}') \left( \overrightarrow{\nabla}' \mathbf{M}(\mathbf{x}') \right) \right\rangle_1 dV' - \int_{\partial V} \left\langle G(\mathbf{x}, \mathbf{x}')\, \hat{\mathbf{n}}' \mathbf{M}(\mathbf{x}') \right\rangle_1 dA' \\ &= \int_V \frac{1}{4\pi \lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left\langle (\mathbf{x} - \mathbf{x}') \left( s(\mathbf{x}') + I\mathbf{C}(\mathbf{x}') \right) \right\rangle_1 dV' - \int_{\partial V} \frac{1}{4\pi \lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left\langle (\mathbf{x} - \mathbf{x}')\, \hat{\mathbf{n}}' \mathbf{M}(\mathbf{x}') \right\rangle_1 dA' \\ &= \int_V \frac{1}{4\pi \lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left( (\mathbf{x} - \mathbf{x}')\, s(\mathbf{x}') - (\mathbf{x} - \mathbf{x}') \times \mathbf{C}(\mathbf{x}') \right) dV' - \int_{\partial V} \frac{1}{4\pi \lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left\langle (\mathbf{x} - \mathbf{x}')\, \hat{\mathbf{n}}' \mathbf{M}(\mathbf{x}') \right\rangle_1 dA'. \end{aligned} \tag{2.157} $$

If M is well behaved enough that the boundary integral vanishes on an infinite surface, we see that M is completely specified by the divergence and the curl. In general, the divergence and the curl must also be supplemented by the value of the vector valued function on the boundary.

Observe that the boundary integral has a particularly simple form for a spherical surface of radius R centered on x. Switching to spherical coordinates r = x′ − x = R r̂(θ, φ), where r̂ = (x′ − x)/‖x′ − x‖ is the outwards normal, we have

M(x0 )
Z Z
1
(x − x0 )n̂0 M(x0 ) 1 dA0 = dA0


− 3
∂V 4π kx − x0 k 4π kx − x0 k2
∂V
(2.158)
Z π Z 2π
1
= M(R, θ, φ) sin θdθdφ.
4π θ=0 φ=0

This is an average of M over the surface of the radius-R sphere surrounding the point x where
the field M is evaluated.
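This averaging interpretation can be illustrated numerically (an added sketch, not from the text): for a source-free field, the analogous scalar statement is the mean value property of harmonic functions, checked here by quadrature over a sphere:

```python
# Mean value property check: for the harmonic function f(x) = 1/|x - a|,
# with the singularity a outside the sphere, the average of f over the
# sphere equals f at the centre.
import numpy as np

a = np.array([3.0, 0.0, 0.0])        # singularity kept outside the sphere
x0 = np.zeros(3)                     # sphere centre (the evaluation point)
R = 1.0

f = lambda x: 1.0 / np.linalg.norm(x - a)

n_theta, n_phi = 400, 400
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
T, P = np.meshgrid(theta, phi, indexing='ij')
points = x0 + R * np.stack(
    [np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)

vals = 1.0 / np.linalg.norm(points - a, axis=-1)
avg = np.sum(vals * np.sin(T)) * (np.pi / n_theta) * (2 * np.pi / n_phi) / (4 * np.pi)
print(avg, f(x0))    # both ~ 1/3
```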

Second order proof. Again, we use eq. (2.156) to discover the relation between the vector M and its divergence and curl. The vector M can be expressed at the point of interest as a convolution with the delta function at all other points in space

$$ \mathbf{M}(\mathbf{x}) = \int_V dV'\, \delta(\mathbf{x} - \mathbf{x}')\, \mathbf{M}(\mathbf{x}'). \tag{2.159} $$

The Laplacian representation of the delta function in R³ is

$$ \delta(\mathbf{x} - \mathbf{x}') = -\frac{1}{4\pi} \nabla^2 \frac{1}{\lVert \mathbf{x} - \mathbf{x}' \rVert}, \tag{2.160} $$
so M can be represented as the following convolution

$$ \mathbf{M}(\mathbf{x}) = -\frac{1}{4\pi} \int_V dV'\, \nabla^2 \frac{1}{\lVert \mathbf{x} - \mathbf{x}' \rVert}\, \mathbf{M}(\mathbf{x}'). \tag{2.161} $$
Using this relation and proceeding with a few applications of the chain rule, plus the fact that ∇ 1/‖x − x′‖ = −∇′ 1/‖x − x′‖, we find

$$ \begin{aligned} -4\pi \mathbf{M}(\mathbf{x}) &= \int_V dV'\, \nabla^2 \frac{1}{\lVert \mathbf{x} - \mathbf{x}' \rVert}\, \mathbf{M}(\mathbf{x}') \\ &= \left\langle \int_V dV'\, \nabla^2 \frac{1}{\lVert \mathbf{x} - \mathbf{x}' \rVert}\, \mathbf{M}(\mathbf{x}') \right\rangle_1 \\ &= -\left\langle \int_V dV'\, \nabla \left( \nabla' \frac{1}{\lVert \mathbf{x} - \mathbf{x}' \rVert} \right) \mathbf{M}(\mathbf{x}') \right\rangle_1 \\ &= -\left\langle \nabla \int_V dV' \left( \nabla' \frac{\mathbf{M}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} - \frac{\nabla' \mathbf{M}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} \right) \right\rangle_1 \\ &= -\left\langle \nabla \int_{\partial V} dA'\, \hat{\mathbf{n}}' \frac{\mathbf{M}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} \right\rangle_1 + \left\langle \nabla \int_V dV'\, \frac{s(\mathbf{x}') + I\mathbf{C}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} \right\rangle_1 \\ &= -\left\langle \nabla \int_{\partial V} dA'\, \hat{\mathbf{n}}' \frac{\mathbf{M}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} \right\rangle_1 + \nabla \int_V dV'\, \frac{s(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} + \nabla \cdot \int_V dV'\, \frac{I\mathbf{C}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert}. \end{aligned} \tag{2.162} $$

By inserting a no-op grade selection operation in the second step, the trivector terms that would show up in subsequent steps are automatically filtered out. This leaves us with a boundary term dependent on the surface and the normal and tangential components of M. Added to that is a pair of volume integrals that provide the unique dependence of M on its divergence and curl. When the surface is taken to infinity, which requires ‖M‖/‖x − x′‖ → 0, the dependence of M on its divergence and curl is unique.

In order to express the final result in traditional vector algebra form, a couple of transformations are required. The first is that

$$ \left\langle \mathbf{a} I \mathbf{b} \right\rangle_1 = I^2 (\mathbf{a} \times \mathbf{b}) = -\mathbf{a} \times \mathbf{b}. \tag{2.163} $$

For the grade selection in the boundary integral, note that

$$ \begin{aligned} \left\langle \nabla \hat{\mathbf{n}} X \right\rangle_1 &= \left\langle \nabla (\hat{\mathbf{n}} \cdot X) \right\rangle_1 + \left\langle \nabla (\hat{\mathbf{n}} \wedge X) \right\rangle_1 \\ &= \nabla (\hat{\mathbf{n}} \cdot X) + \left\langle \nabla I (\hat{\mathbf{n}} \times X) \right\rangle_1 \\ &= \nabla (\hat{\mathbf{n}} \cdot X) - \nabla \times (\hat{\mathbf{n}} \times X). \end{aligned} \tag{2.164} $$
These give

$$ \begin{aligned} \mathbf{M}(\mathbf{x}) &= \frac{1}{4\pi} \nabla \int_{\partial V} dA'\, \frac{\hat{\mathbf{n}} \cdot \mathbf{M}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} - \frac{1}{4\pi} \nabla \times \int_{\partial V} dA'\, \frac{\hat{\mathbf{n}} \times \mathbf{M}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} \\ &\quad - \frac{1}{4\pi} \nabla \int_V dV'\, \frac{s(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert} + \frac{1}{4\pi} \nabla \times \int_V dV'\, \frac{\mathbf{C}(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert}. \end{aligned} \tag{2.165} $$

2.7 problem solutions.

Answer for Exercise 2.1

Assuming the representation of eq. (2.10), the dot products are

$$ \begin{aligned} 1 &= x^1 \cdot x_1 = a x_1^2 + b x_1 \cdot x_2 \\ 0 &= x^1 \cdot x_2 = a x_2 \cdot x_1 + b x_2^2 \\ 0 &= x^2 \cdot x_1 = c x_1^2 + d x_1 \cdot x_2 \\ 1 &= x^2 \cdot x_2 = c x_2 \cdot x_1 + d x_2^2. \end{aligned} \tag{2.22} $$

This can be written out as a pair of matrix equations

$$ \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} x_1^2 & x_1 \cdot x_2 \\ x_2 \cdot x_1 & x_2^2 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} x_1^2 & x_1 \cdot x_2 \\ x_2 \cdot x_1 & x_2^2 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix}. \tag{2.23} $$

The matrix inverse is

$$ \begin{pmatrix} x_1^2 & x_1 \cdot x_2 \\ x_2 \cdot x_1 & x_2^2 \end{pmatrix}^{-1} = \frac{1}{x_1^2 x_2^2 - (x_1 \cdot x_2)^2} \begin{pmatrix} x_2^2 & -x_1 \cdot x_2 \\ -x_2 \cdot x_1 & x_1^2 \end{pmatrix}. \tag{2.24} $$

Multiplying by the (1, 0), and (0, 1) vectors picks out the respective columns, and gives eq. (2.11).

Answer for Exercise 2.2

The bivector for the plane spanned by this basis is

$$ \begin{aligned} x_1 \wedge x_2 &= (e_1 + 2 e_2) \wedge (e_2 - e_3) \\ &= e_{12} - e_{13} - 2 e_{23} \\ &= e_{12} + e_{31} + 2 e_{32}. \end{aligned} \tag{2.26} $$

This has the square

$$ \begin{aligned} (x_1 \wedge x_2)^2 &= (e_{12} + e_{31} + 2 e_{32}) \cdot (e_{12} + e_{31} + 2 e_{32}) \\ &= -1 - 1 - 4 \\ &= -6. \end{aligned} \tag{2.27} $$

Dotting −x₁ with the bivector gives

$$ \begin{aligned} x_1 \cdot (x_2 \wedge x_1) &= -(e_1 + 2 e_2) \cdot (e_{12} + e_{31} + 2 e_{32}) \\ &= -(e_2 - e_3 - 2 e_1 - 4 e_3) \\ &= 2 e_1 - e_2 + 5 e_3. \end{aligned} \tag{2.28} $$

For x₂ the dot product with the bivector is

$$ \begin{aligned} x_2 \cdot (x_1 \wedge x_2) &= (e_2 - e_3) \cdot (e_{12} + e_{31} + 2 e_{32}) \\ &= -e_1 - 2 e_3 - e_1 - 2 e_2 \\ &= -2 e_1 - 2 e_2 - 2 e_3, \end{aligned} \tag{2.29} $$

so

$$ \begin{aligned} x^1 &= \frac{1}{3}(e_1 + e_2 + e_3) \\ x^2 &= \frac{1}{6}(-2 e_1 + e_2 - 5 e_3). \end{aligned} \tag{2.30} $$

It is easy to verify that this has the desired semantics.
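That verification is easily mechanized (an added numerical sketch, not from the text): building the reciprocal frame from the Gram matrix inverse of exercise 2.1 reproduces the x¹, x² above, and the duality relations hold:

```python
# Reciprocal frame for x1 = e1 + 2 e2, x2 = e2 - e3, computed from the
# 2x2 Gram matrix inverse of eq. (2.24), and checked against eq. (2.30).
import numpy as np

e1, e2, e3 = np.eye(3)
x1 = e1 + 2 * e2
x2 = e2 - e3

gram = np.array([[x1 @ x1, x1 @ x2],
                 [x2 @ x1, x2 @ x2]])
coeffs = np.linalg.inv(gram)               # rows give (a, b) and (c, d)
r1 = coeffs[0, 0] * x1 + coeffs[0, 1] * x2
r2 = coeffs[1, 0] * x1 + coeffs[1, 1] * x2

assert np.allclose(r1, [1/3, 1/3, 1/3])    # x^1 = (1/3)(e1 + e2 + e3)
assert np.allclose(r2, [-1/3, 1/6, -5/6])  # x^2 = (1/6)(-2 e1 + e2 - 5 e3)
assert np.allclose([r1 @ x1, r1 @ x2, r2 @ x1, r2 @ x2], [1, 0, 0, 1])
print("reciprocal frame verified")
```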

Answer for Exercise 2.4

Computing the various dot products is made easier by noting that e₃ and e^{iφ} commute, whereas e^{jθ} e₃ = e₃ e^{−jθ}, e₁ e^{iφ} = e^{−iφ} e₁, e₂ e^{iφ} = e^{−iφ} e₂ (since e₃ and j, e₁ and i, and e₂ and i all anticommute)

$$ \begin{aligned} x_r \cdot x_\theta &= \left\langle e_3 e^{j\theta} e_1 e^{i\phi} e^{j\theta} \right\rangle \\ &= \left\langle e^{j\theta} e_3 e^{j\theta} e_1 e^{i\phi} \right\rangle \\ &= \left\langle e_3 e^{-j\theta} e^{j\theta} e_1 e^{i\phi} \right\rangle \\ &= \left\langle e_3 e_1 e^{i\phi} \right\rangle \\ &= 0 \end{aligned} \tag{2.74a} $$

$$ \begin{aligned} x_r \cdot x_\phi &= \left\langle e_3 e^{j\theta}\, r \sin\theta\, e_2 e^{i\phi} \right\rangle \\ &= r \sin\theta \left\langle e_3 \left( \cos\theta + e_{31} \sin\theta\, e^{i\phi} \right) e_2 e^{i\phi} \right\rangle \\ &= r \sin^2\theta \left\langle e_1 e^{i\phi} e_2 e^{i\phi} \right\rangle \\ &= r \sin^2\theta \left\langle e_1 e_2 \right\rangle \\ &= 0 \end{aligned} \tag{2.74b} $$

$$ \begin{aligned} x_\theta \cdot x_\phi &= r \sin\theta \left\langle e_1 e^{i\phi} e^{j\theta} e_2 e^{i\phi} \right\rangle \\ &= r \sin\theta \left\langle e_2 e_1 e^{j\theta} \right\rangle \\ &= r \sin\theta \left\langle e_2 e_1 \left( \cos\theta + e_{31} \sin\theta\, e^{i\phi} \right) \right\rangle \\ &= r \sin^2\theta \left\langle e_{32} e^{i\phi} \right\rangle \\ &= 0. \end{aligned} \tag{2.74c} $$

Answer for Exercise 2.5

Using the commutation relations from the previous problem, first note that

$$ \begin{aligned} e_1 e^{i\phi} e^{j\theta} &= e_1 \left( \cos\theta\, e^{i\phi} + \sin\theta\, e_{31} e^{-i\phi} e^{i\phi} \right) \\ &= e_1 \left( \cos\theta + \sin\theta\, e_{31} e^{-i\phi} \right) e^{i\phi} \\ &= \left( \cos\theta - \sin\theta\, e_{31} e^{i\phi} \right) e_1 e^{i\phi} \\ &= e^{-j\theta} e_1 e^{i\phi}. \end{aligned} \tag{2.75} $$

This gives

$$ \begin{aligned} \left\langle e_3 e^{j\theta} e_1 e^{i\phi} e^{j\theta} e_2 e^{i\phi} \right\rangle_3 &= \left\langle e_3 e_1 e^{i\phi} e_2 e^{i\phi} \right\rangle_3 \\ &= \left\langle e_3 e_1 e_2 \right\rangle_3 \\ &= e_{123}. \end{aligned} \tag{2.76} $$
3 electromagnetism.
3.1 conventional formulation.

Maxwell’s equations provide an abstraction, the field, that aggregates the effects of an arbi-
trary electric charge and current distribution on a “test” charge distribution. The test charge is
assumed to be small and isolated enough that it does not also appreciably change the fields
themselves. Once the fields are determined, the Lorentz force equation can be used to determine the dynamics of the test particle. These dynamics can be determined without having to compute all the interactions of that charge with all the charges and currents in space, nor having to continually account for the interactions of those charges with each other.
We will use the vector differential form of Maxwell's equations with antenna theory extensions (fictitious magnetic sources) as our starting point

$$ \nabla \times \mathbf{E} = -\mathbf{M} - \frac{\partial \mathbf{B}}{\partial t} \tag{3.1a} $$

$$ \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t} \tag{3.1b} $$

$$ \nabla \cdot \mathbf{D} = \rho \tag{3.1c} $$

$$ \nabla \cdot \mathbf{B} = \rho_{\mathrm{m}}. \tag{3.1d} $$

These equations relate the primary electric and magnetic fields

• E(x, t) : Electric field intensity [V/m] (Volts/meter)

• H(x, t) : Magnetic field intensity [A/m] (Amperes/meter),

and the induced electric and magnetic fields

• D(x, t) : Electric flux density (or displacement vector) [C/m²] (Coulombs/square meter)

• B(x, t) : Magnetic flux density [Wb/m²] (Webers/square meter),

to the charge densities

• ρ(x, t) : Electric charge density [C/m³] (Coulombs/cubic meter)

• ρ_m(x, t) : Magnetic charge density [Wb/m³] (Webers/cubic meter),

and the current densities

• J(x, t) : Electric current density [A/m²] (Amperes/square meter),

• M(x, t) : Magnetic current density [V/m²] (Volts/square meter).

All of the fields and sources can vary in space and time, and are specified here in SI units. The sources M, ρ_m can be considered fictional, representing physical phenomena such as infinitesimal current loops.
In general, the relationship between the electric and magnetic fields (constitutive relationships) may be complicated non-isotropic tensor operators, functions of all of E, D, B and H. In this book, we will assume that the constitutive relationships between the electric and magnetic fields are independent

$$ \mathbf{B} = \mu \mathbf{H} \tag{3.2a} $$

$$ \mathbf{D} = \epsilon \mathbf{E}, \tag{3.2b} $$

where ε = ε_r ε₀ is the permittivity of the medium [F/m] (Farads/meter), and µ = µ_r µ₀ is the permeability of the medium [H/m] (Henries/meter). The permittivity and permeability may be functions of both time and position, and model the materials that the fields are propagating through. In free space µ_r = 1 and ε_r = 1, so these relationships are simply

$$ \mathbf{B} = \mu_0 \mathbf{H} \tag{3.3a} $$

$$ \mathbf{D} = \epsilon_0 \mathbf{E}, \tag{3.3b} $$

where

• ε₀ = 8.85 × 10⁻¹² C²/N/m² : Permittivity of free space (Coulombs squared/Newton/square meter)

• µ₀ = 4π × 10⁻⁷ N/A² : Permeability of free space (Newtons/Ampere squared).

These constants are related to the speed of light, c = 3.00 × 10⁸ m/s, by µ₀ε₀ = 1/c².
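This relation is easy to confirm numerically (an added check, not part of the text):

```python
# Verify mu_0 eps_0 = 1/c^2 using the free space constants quoted above.
import numpy as np

eps0 = 8.85e-12          # F/m
mu0 = 4e-7 * np.pi       # H/m
c = 1.0 / np.sqrt(mu0 * eps0)
print(c)                 # ~3.00e8 m/s
```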
Antenna theory extends Maxwell's equations with fictional magnetic charge and current densities that are useful to model real phenomena such as infinitesimal current loops. Antenna related problems are usually tackled in the frequency domain. We will use the engineering conventions for the frequency domain described in section 2.4.
Continuous models for charge and current distributions are used in Maxwell's equations, despite the fact that charges (i.e. electrons) are particles, and are not distributed in space. The discrete nature of electronic charge can be modelled using a delta function representation of the charge and current densities

$$ \begin{aligned} \rho(\mathbf{x}, t) &= \sum_a q_a \delta(\mathbf{x} - \mathbf{x}_a(t)) \\ \mathbf{J}(\mathbf{x}, t) &= \sum_a q_a \mathbf{v}_a \delta(\mathbf{x} - \mathbf{x}_a(t)). \end{aligned} \tag{3.4} $$

This model is inherently non-quantum mechanical, as it assumes that it is possible to simultaneously measure the position and velocity of an electron.

The dynamics of particle interaction with the fields are provided by the Lorentz force and power equations

$$ \frac{d\mathbf{p}}{dt} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) \tag{3.5a} $$

$$ \frac{d\mathcal{E}}{dt} = q \mathbf{E} \cdot \mathbf{v}. \tag{3.5b} $$
Both the energy and the momentum relations of eq. (3.5) are stated, since the simplest (relativistic) form of the Lorentz force equation directly encodes both. For readers unfamiliar with eq. (3.5b), exercise 3.1 provides a derivation method.
The quantities involved in the Lorentz equations are

• p(x, t) : Test particle momentum [kg m/s] (Kilogram meters/second)

• 𝓔(x, t) : Test particle kinetic energy [J] (Joules, kilogram meters²/second²)

• q : Test particle charge [C] (Coulombs)

• v : Test particle velocity [m/s] (Meters/second)

The task of extracting all the physical meaning from the Maxwell and Lorentz equations is a difficult one. Our attempt to do so will use the formalism of geometric algebra.

3.1.1 Problems.

Exercise 3.1 Lorentz power and force relationship. (§17 [17])

Using the relativistic definitions of momentum and energy

$$ \mathbf{p}(\mathbf{x}, t) = \frac{m \mathbf{v}}{\sqrt{1 - \mathbf{v}^2/c^2}} $$

$$ \mathcal{E}(\mathbf{x}, t) = \frac{m c^2}{\sqrt{1 - \mathbf{v}^2/c^2}}, $$

show that d𝓔/dt = v · dp/dt, and use this to derive eq. (3.5b) from eq. (3.5a).
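The identity requested in this exercise can be checked symbolically (an added sketch, not a solution from the text), using the one dimensional reduction d𝓔/dt = (d𝓔/dv)(dv/dt):

```python
# In one dimension, dE/dt = (dE/dv) v' and v dp/dt = v (dp/dv) v', so the
# identity dE/dt = v dp/dt reduces to checking dE/dv = v dp/dv.
import sympy as sp

v, m, c = sp.symbols('v m c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
p = m * v * gamma        # relativistic momentum
E = m * c**2 * gamma     # relativistic energy

difference = sp.simplify(sp.diff(E, v) - v * sp.diff(p, v))
assert difference == 0
print("dE/dt = v . dp/dt verified")
```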

3.2 maxwell’s equation.

We will work with a multivector representation of the fields in isotropic media satisfying the constitutive relationships from eq. (3.2), and define a multivector field that includes both electric and magnetic components

Definition 3.1: Electromagnetic field strength.

The electromagnetic field strength ([V/m] (Volts/meter)) is defined as

$$ F = \mathbf{E} + I \eta \mathbf{H} \quad (= \mathbf{E} + I c \mathbf{B}), $$

where

• η = √(µ/ε) ([Ω] Ohms) is the impedance of the media.

• c = 1/√(µε) ([m/s] meters/second) is the group velocity of a wave in the media. When ε = ε₀, µ = µ₀, c is the speed of light.

F is called the Faraday by some authors.

The factors of η (or c) that multiply the magnetic fields are for dimensional consistency, since [√ε E] = [√µ H] = [B/√µ]. The justification for imposing a dual (or complex) structure on the electromagnetic field strength can be found in the historical development of Maxwell's equations, but we will also see such a structure arise naturally in short order.
No information is lost by imposing the complex structure of definition 3.1, since we can always obtain the electric field vector E and the magnetic field bivector IH by grade selection from the electromagnetic field strength when desired

$$ \mathbf{E} = \left\langle F \right\rangle_1, \qquad I \mathbf{H} = \frac{1}{\eta} \left\langle F \right\rangle_2. \tag{3.6} $$
We will also define a multivector current containing all charge densities and current densities.

Definition 3.2: Multivector current.

The current ([A/m²] (Amperes/square meter)) is defined as

$$ J = \eta \left( c \rho - \mathbf{J} \right) + I \left( c \rho_{\mathrm{m}} - \mathbf{M} \right). $$

When the fictitious magnetic source terms (ρ_m, M) are included, the current has one grade for each possible source (scalar, vector, bivector, trivector). With only conventional electric sources, the current is still a multivector, but contains only scalar and vector grades.
Given the multivector field and current, it is now possible to state Maxwell's equation (singular) in its geometric algebra form.

Theorem 3.1: Maxwell's equation.

Maxwell's equation is a multivector equation relating the change in the electromagnetic field strength to charge and current densities, and is written as

$$ \left( \nabla + \frac{1}{c} \frac{\partial}{\partial t} \right) F = J. $$

Maxwell's equation in this form will be the starting place for all the subsequent analysis in this book. As mentioned in section 2.5, the operator ∇ + (1/c)∂_t will be called the spacetime gradient.¹

1 This form of spacetime gradient is given a special symbol by a number of authors, but there is no general agreement on what to use. Instead of entering the fight, it will be written out in full in this book.
To prove theorem 3.1 we first insert the isotropic constitutive relationships from eq. (3.2) into eq. (3.1), so that we are working with two field variables instead of four

$$ \begin{aligned} \nabla \cdot \mathbf{E} &= \frac{1}{\epsilon} \rho \\ \nabla \times \mathbf{E} &= -\mathbf{M} - \mu \frac{\partial \mathbf{H}}{\partial t} \\ \nabla \cdot \mathbf{H} &= \frac{1}{\mu} \rho_{\mathrm{m}} \\ \nabla \times \mathbf{H} &= \mathbf{J} + \epsilon \frac{\partial \mathbf{E}}{\partial t}. \end{aligned} \tag{3.7} $$

Inserting a = ∇ into eq. (1.62) gives the vector product of the gradient with another vector

$$ \nabla \mathbf{b} = \nabla \cdot \mathbf{b} + I \nabla \times \mathbf{b}. \tag{3.8} $$

The respective dot and cross products for E and H in eq. (3.7) can be grouped using eq. (3.8) into multivector gradient equations

$$ \begin{aligned} \nabla \mathbf{E} &= \frac{1}{\epsilon} \rho + I \left( -\mathbf{M} - \mu \frac{\partial \mathbf{H}}{\partial t} \right) \\ \nabla \mathbf{H} &= \frac{1}{\mu} \rho_{\mathrm{m}} + I \left( \mathbf{J} + \epsilon \frac{\partial \mathbf{E}}{\partial t} \right). \end{aligned} \tag{3.9} $$

Multiplying the gradient equation for the magnetic field by ηI, so that both equations have the same dimensions, and so that the electric field appears in both equations as E and not IE, we find

$$ \begin{aligned} \nabla \mathbf{E} + \frac{1}{c} \frac{\partial}{\partial t} (I \eta \mathbf{H}) &= \frac{1}{\epsilon} \rho - I \mathbf{M} \\ \nabla (I \eta \mathbf{H}) + \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t} &= I c \rho_{\mathrm{m}} - \eta \mathbf{J}, \end{aligned} \tag{3.10} $$

where µ/η = ηε = 1/c was used to simplify things slightly, and all the field contributions have been moved to the left hand side. The first multivector equation has only scalar and bivector grades, whereas the second has only vector and trivector grades. This means that if we add these equations, we can recover each by grade selection, and no information is lost. That sum is

$$ \left( \nabla + \frac{1}{c} \frac{\partial}{\partial t} \right) (\mathbf{E} + I \eta \mathbf{H}) = \eta (c \rho - \mathbf{J}) + I (c \rho_{\mathrm{m}} - \mathbf{M}). \tag{3.11} $$

Application of definition 3.1 and definition 3.2 to eq. (3.11) proves the theorem, verifying the assertion that Maxwell's equations can be consolidated into a single multivector equation. There is a lot of information packed into this single equation. Where possible, we want to work with the multivector form of Maxwell's equation, either in the compact form of theorem 3.1 or the explicit form of eq. (3.11), and not decompose Maxwell's equation into the conventional representation by grade selection operations.
3.2.0.1 Problems.

Exercise 3.2 Dot and cross product relation to vector product.


Using coordinate expansion, convince yourself of the validity of eq. (3.8).
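Exercise 3.2 asks for a coordinate expansion; the identity can also be checked numerically with a minimal Cl(3) geometric product (an added illustrative sketch — the blade-bitmask representation below is an assumption of this example, not machinery from the book):

```python
# Cl(3) multivectors as length-8 arrays indexed by basis-blade bitmasks
# (bit m set means a factor of e_{m+1}).  Used to verify that for vectors
# a, b the geometric product satisfies ab = a . b + I (a x b).
import numpy as np

def reorder_sign(a, b):
    """Sign from reordering basis blade a (bitmask) past blade b."""
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count('1')
        a >>= 1
    return -1.0 if total % 2 else 1.0

def gp(x, y):
    """Geometric product of multivectors given as length-8 blade arrays."""
    z = np.zeros(8)
    for p in range(8):
        for q in range(8):
            if x[p] and y[q]:
                z[p ^ q] += reorder_sign(p, q) * x[p] * y[q]
    return z

def vec(a):
    """Embed a 3-vector: e1 -> blade 0b001, e2 -> 0b010, e3 -> 0b100."""
    z = np.zeros(8)
    z[0b001], z[0b010], z[0b100] = a
    return z

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

ab = gp(vec(a), vec(b))
expected = np.zeros(8)
expected[0] = a @ b                   # scalar part: a . b
cx = np.cross(a, b)                   # I (a x b) -> e23, e31, e12 components
expected[0b110], expected[0b101], expected[0b011] = cx[0], -cx[1], cx[2]
assert np.allclose(ab, expected)
print("ab = a . b + I (a x b) verified")
```

(Here the e₃₁ coefficient appears with a minus sign on the canonically ordered e₁₃ blade.)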

Exercise 3.3 Extracting the conventional Maxwell’s equations.


Apply grade 0,1,2, and 3 selection operations to eq. (3.11). Determine the multiplicative
(scalar or trivector) constants required to obtain eq. (3.7) from the equations that result from
such grade selection operations.

3.3 wave equation and continuity.

Some would argue that the conventional form eq. (3.1) of Maxwell's equations has built in redundancy, since continuity equations on the charge and current densities couple some of these equations. We will take an opposing view, and show that such continuity equations are necessary consequences of Maxwell's equation in its wave equation form, and derive those conditions. This amounts to a statement that the multivector current J is not completely unconstrained.

Theorem 3.2: Electromagnetic wave equation and continuity conditions.

The electromagnetic field is a solution to the non-homogeneous wave equation

$$ \square F = \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J. $$

In source free conditions, this reduces to a homogeneous wave equation, with group velocity c, the speed of the wave in the media. When expanded explicitly in terms of electric and magnetic fields, and charge and current densities, this single equation resolves to a non-homogeneous wave equation for each of the electric and magnetic fields

$$ \begin{aligned} \square \mathbf{E} &= \frac{1}{\epsilon} \nabla \rho + \mu \frac{\partial \mathbf{J}}{\partial t} + \nabla \times \mathbf{M} \\ \square \mathbf{H} &= \frac{1}{\mu} \nabla \rho_{\mathrm{m}} + \epsilon \frac{\partial \mathbf{M}}{\partial t} - \nabla \times \mathbf{J}, \end{aligned} $$

as well as a pair of continuity equations coupling the respective charge and current densities

$$ \begin{aligned} \nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} &= 0 \\ \nabla \cdot \mathbf{M} + \frac{\partial \rho_{\mathrm{m}}}{\partial t} &= 0. \end{aligned} $$

To prove, we operate on theorem 3.1 with ∇ − (1/c)∂_t, one of the factors, along with the spacetime gradient, of the d'Alembertian (wave equation) operator, which gives

$$ \square F = \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J. \tag{3.12} $$

Since the left hand side has only grades 1 and 2, eq. (3.12) splits naturally into two equations, one for grades 1,2 and one for grades 0,3

$$ \begin{aligned} \square F &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J \right\rangle_{1,2} \\ 0 &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J \right\rangle_{0,3}. \end{aligned} \tag{3.13} $$

Unpacking these further, we find that there is information carried in the requirement that the grade 0,3 selection of eq. (3.13) is zero. In particular, grade 0 selection gives

$$ \begin{aligned} 0 &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J \right\rangle \\ &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) \left( \eta (c \rho - \mathbf{J}) + I (c \rho_{\mathrm{m}} - \mathbf{M}) \right) \right\rangle \\ &= -\eta \left( \nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} \right), \end{aligned} \tag{3.14} $$

which demonstrates the continuity condition on the electric sources. Similarly, grade three selection gives

$$ \begin{aligned} 0 &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J \right\rangle_3 \\ &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) \left( \eta (c \rho - \mathbf{J}) + I (c \rho_{\mathrm{m}} - \mathbf{M}) \right) \right\rangle_3 \\ &= -I \left( \nabla \cdot \mathbf{M} + \frac{\partial \rho_{\mathrm{m}}}{\partial t} \right), \end{aligned} \tag{3.15} $$

which demonstrates the continuity condition on the (fictitious) magnetic sources if included in the current.
For the non-homogeneous wave equation of theorem 3.2, the current derivatives may be expanded explicitly. For the wave equation for the electric field, this is

$$ \begin{aligned} \square \mathbf{E} &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J \right\rangle_1 \\ &= \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) \left( \frac{\rho}{\epsilon} - \eta \mathbf{J} + I (c \rho_{\mathrm{m}} - \mathbf{M}) \right) \right\rangle_1 \\ &= \frac{1}{\epsilon} \nabla \rho - I (\nabla \wedge \mathbf{M}) + \frac{\eta}{c} \frac{\partial \mathbf{J}}{\partial t} \\ &= \frac{1}{\epsilon} \nabla \rho + \mu \frac{\partial \mathbf{J}}{\partial t} + \nabla \times \mathbf{M}, \end{aligned} \tag{3.16} $$

as claimed. The forced magnetic field equation is

$$ \begin{aligned} \square \mathbf{H} &= \frac{1}{\eta I} \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) J \right\rangle_2 \\ &= \frac{1}{\eta I} \left\langle \left( \nabla - \frac{1}{c} \frac{\partial}{\partial t} \right) \left( \frac{\rho}{\epsilon} - \eta \mathbf{J} + I (c \rho_{\mathrm{m}} - \mathbf{M}) \right) \right\rangle_2 \\ &= \frac{1}{\eta I} \left( -\eta (\nabla \wedge \mathbf{J}) + I c \nabla \rho_{\mathrm{m}} + \frac{I}{c} \frac{\partial \mathbf{M}}{\partial t} \right) \\ &= \frac{1}{I} \left( -I (\nabla \times \mathbf{J}) + I \frac{1}{\mu} \nabla \rho_{\mathrm{m}} + I \epsilon \frac{\partial \mathbf{M}}{\partial t} \right) \\ &= \frac{1}{\mu} \nabla \rho_{\mathrm{m}} + \epsilon \frac{\partial \mathbf{M}}{\partial t} - \nabla \times \mathbf{J}, \end{aligned} \tag{3.17} $$

completing the proof.

3.4 plane waves.

With all sources zero, the free space Maxwell's equation as given by theorem 3.1 for the electromagnetic field strength reduces to just

$$ \left( \nabla + \frac{1}{c} \frac{\partial}{\partial t} \right) F(\mathbf{x}, t) = 0. \tag{3.18} $$

Utilizing a phasor representation of the form definition 2.9, we will define the phasor representation of the field as

Definition 3.3: Plane wave.

We represent the electromagnetic field strength plane wave solution of Maxwell's equation in phasor form as

$$ F(\mathbf{x}, t) = \mathrm{Re}\left( F(\mathbf{k})\, e^{j \omega t} \right), $$

where the complex valued multivector F(k) also has a presumed exponential dependence

$$ F(\mathbf{k}) = \tilde{F} e^{-j \mathbf{k} \cdot \mathbf{x}}. $$
We will now show that solutions of the electromagnetic field wave equation have the following form.

Theorem 3.3: Plane wave solutions to Maxwell's equation.

Single frequency plane wave solutions of Maxwell's equation have the form

$$ F(\mathbf{x}, t) = \mathrm{Re}\left( \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) e^{-j \mathbf{k} \cdot \mathbf{x} + j \omega t} \right), $$

where ‖k‖ = ω/c, k̂ = k/‖k‖ is the unit vector pointing along the propagation direction, and E is any complex-valued vector variable. When an E · k = 0 constraint is imposed on the vector variable E, that variable can be interpreted as the electric field, and the solution reduces to

$$ F(\mathbf{x}, t) = \mathrm{Re}\left( \left( 1 + \hat{\mathbf{k}} \right) \mathbf{E}\, e^{-j \mathbf{k} \cdot \mathbf{x} + j \omega t} \right), $$

showing that the field phasor F(k) = E(k) + IηH(k) splits naturally into electric and magnetic components

$$ \begin{aligned} \mathbf{E}(\mathbf{k}) &= \mathbf{E}\, e^{-j \mathbf{k} \cdot \mathbf{x}} \\ \eta \mathbf{H}(\mathbf{k}) &= \left( \hat{\mathbf{k}} \times \mathbf{E} \right) e^{-j \mathbf{k} \cdot \mathbf{x}}, \end{aligned} $$

where the directions k̂, E, H form a right handed triple.

We wish to act on F(k)e^{−jk·x+jωt} with the spacetime gradient ∇ + (1/c)∂_t, but must take care of order when applying the gradient to a non-scalar valued function. In particular, if A is a multivector, then

$$ \nabla A e^{-j \mathbf{k} \cdot \mathbf{x}} = \sum_{m=1}^3 \mathbf{e}_m \partial_m A e^{-j \mathbf{k} \cdot \mathbf{x}} = \sum_{m=1}^3 \mathbf{e}_m A (-j k_m) e^{-j \mathbf{k} \cdot \mathbf{x}} = -j \mathbf{k} A e^{-j \mathbf{k} \cdot \mathbf{x}}. \tag{3.19} $$
If F(k) has a left multivector factor
 ω
F(k) = k + F̃, (3.21)
c
where F̃ is a multivector to be determined, then
 ω  ω  ω
k− F(k) = k − k+ F̃
c c c (3.22)
 ω 2 !
= k2 − F̃,
c

which is zero if if kkk = ω/c. Let kkk F̃ = F0 + F1 + F2 + F3 , where F0 , F1 , F2 , and F3 respec-


tively have grades 0,1,2,3, so that
 
F(k) = 1 + k̂ (F0 + F1 + F2 + F3 )
= F0 + F1 + F2 + F3 + k̂F0 + k̂F1 + k̂F2 + k̂F3 (3.23)
= F0 + F1 + F2 + F3 + k̂F0 + k̂ · F1 + k̂· F2 + k̂ · F3 + k̂ ∧ F1 + k̂ ∧ F2 
= F0 + k̂ · F1 + F1 + k̂F0 + k̂ · F2 + F2 + k̂ · F3 + k̂ ∧ F1 + F3 + k̂ ∧ F2 .

Since the field F has only vector and bivector grades, the grades zero and three components of
the expansion above must be zero, or

F0 = −k̂ · F1
(3.24)
F3 = −k̂ ∧ F2 ,
so

$$ \begin{aligned} F(\mathbf{k}) &= \left( 1 + \hat{\mathbf{k}} \right) \left( F_1 - \hat{\mathbf{k}} \cdot F_1 + F_2 - \hat{\mathbf{k}} \wedge F_2 \right) \\ &= \left( 1 + \hat{\mathbf{k}} \right) \left( F_1 - \hat{\mathbf{k}} F_1 + \hat{\mathbf{k}} \wedge F_1 + F_2 - \hat{\mathbf{k}} F_2 + \hat{\mathbf{k}} \cdot F_2 \right). \end{aligned} \tag{3.25} $$

The multivector 1 + k̂ has the projective property of gobbling any leading factors of k̂

$$ \left( 1 + \hat{\mathbf{k}} \right) \hat{\mathbf{k}} = \hat{\mathbf{k}} + 1 = 1 + \hat{\mathbf{k}}, \tag{3.26} $$

so for F_i ∈ {F₁, F₂}

$$ \left( 1 + \hat{\mathbf{k}} \right) \left( F_i - \hat{\mathbf{k}} F_i \right) = \left( 1 + \hat{\mathbf{k}} \right) (F_i - F_i) = 0, \tag{3.27} $$

leaving

$$ F(\mathbf{k}) = \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \cdot F_2 + \hat{\mathbf{k}} \wedge F_1 \right). \tag{3.28} $$

For k̂ · F₂ to be non-zero, F₂ must be a bivector that lies in a plane containing k̂, and k̂ · F₂ is a vector in that plane that is perpendicular to k̂. On the other hand, k̂ ∧ F₁ is non-zero only if F₁ has a non-zero component that does not lie along the k̂ direction, but k̂ ∧ F₁, like F₂, describes a plane containing k̂. This means that having both bivector and vector free variables F₂ and F₁ provides more degrees of freedom than required. For example, if E is any vector, and F₂ = k̂ ∧ E, then

$$ \begin{aligned} \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \cdot F_2 \right) &= \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \cdot \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) \right) \\ &= \left( 1 + \hat{\mathbf{k}} \right) \left( \mathbf{E} - \hat{\mathbf{k}} \left( \hat{\mathbf{k}} \cdot \mathbf{E} \right) \right) \\ &= \left( 1 + \hat{\mathbf{k}} \right) \hat{\mathbf{k}} \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) \\ &= \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right), \end{aligned} \tag{3.29} $$

which has the form (1 + k̂)(k̂ ∧ F₁), so the electromagnetic field strength phasor may be generally written

$$ F(\mathbf{k}) = \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) e^{-j \mathbf{k} \cdot \mathbf{x}}. \tag{3.30} $$
 
Expanding the multivector factor (1 + k̂)(k̂ ∧ E), we find

$$ \begin{aligned} \left( 1 + \hat{\mathbf{k}} \right) \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) &= \hat{\mathbf{k}} \cdot \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) + \hat{\mathbf{k}} \wedge \left( \hat{\mathbf{k}} \wedge \mathbf{E} \right) + \hat{\mathbf{k}} \wedge \mathbf{E} \\ &= \mathbf{E} - \hat{\mathbf{k}} \left( \hat{\mathbf{k}} \cdot \mathbf{E} \right) + \hat{\mathbf{k}} \wedge \mathbf{E}, \end{aligned} \tag{3.31} $$

where the wedge of the wedge term is zero. The vector grade has the component of E along the propagation direction removed (i.e. it is the rejection), so there is no loss of generality should an E · k = 0 constraint be imposed. Such a constraint lets us write the bivector as a vector product k̂ ∧ E = k̂E, and then use the projective property eq. (3.26) to gobble the leading k̂ factor, leaving

$$ F(\mathbf{k}) = \left( 1 + \hat{\mathbf{k}} \right) \mathbf{E}\, e^{-j \mathbf{k} \cdot \mathbf{x}} = \left( \mathbf{E} + I \hat{\mathbf{k}} \times \mathbf{E} \right) e^{-j \mathbf{k} \cdot \mathbf{x}}. \tag{3.32} $$

It is also noteworthy that the directions k̂, Ê, Ĥ form a right handed triple, which can be seen by computing their product

$$ ( \hat{\mathbf{k}} \hat{\mathbf{E}} ) \hat{\mathbf{H}} = ( -\hat{\mathbf{E}} \hat{\mathbf{k}} )( -I \hat{\mathbf{k}} \hat{\mathbf{E}} ) = I \hat{\mathbf{E}}^2 \hat{\mathbf{k}}^2 = I. \tag{3.33} $$

These vectors must all be mutually orthonormal for their product to be a pseudoscalar multiple. Should there be doubt, explicit dot products may be computed with ease using grade selection operations

$$ \begin{aligned} \hat{\mathbf{k}} \cdot \hat{\mathbf{H}} &= \left\langle \hat{\mathbf{k}} ( -I \hat{\mathbf{k}} \hat{\mathbf{E}} ) \right\rangle = -\left\langle I \hat{\mathbf{E}} \right\rangle = 0 \\ \hat{\mathbf{E}} \cdot \hat{\mathbf{H}} &= \left\langle \hat{\mathbf{E}} ( -I \hat{\mathbf{k}} \hat{\mathbf{E}} ) \right\rangle = \left\langle I \hat{\mathbf{k}} \right\rangle = 0, \end{aligned} \tag{3.34} $$

where the zeros follow by noting that IÊ, Ik̂ are both bivectors. The conventional representation of the right handed triple relationship between the propagation direction and fields is stated as a cross product, not as a pseudoscalar relationship as in eq. (3.33). These are easily seen to be equivalent

$$ \hat{\mathbf{k}} = I \hat{\mathbf{H}} \hat{\mathbf{E}} = I ( \hat{\mathbf{H}} \wedge \hat{\mathbf{E}} ) = I^2 ( \hat{\mathbf{H}} \times \hat{\mathbf{E}} ) = \hat{\mathbf{E}} \times \hat{\mathbf{H}}. \tag{3.35} $$
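The right handed triple relations are easy to confirm numerically with ordinary cross products (an added sketch, not from the text):

```python
# For a plane wave propagating along khat with transverse E, the field
# directions satisfy eta H = khat x E and Ehat x Hhat = khat.
import numpy as np

rng = np.random.default_rng(1)
khat = rng.standard_normal(3)
khat /= np.linalg.norm(khat)

E = rng.standard_normal(3)
E -= (E @ khat) * khat            # impose the E . k = 0 constraint
H = np.cross(khat, E)             # eta H = khat x E

Ehat, Hhat = E / np.linalg.norm(E), H / np.linalg.norm(H)
assert np.allclose(np.cross(Ehat, Hhat), khat)
assert abs(khat @ Hhat) < 1e-12 and abs(Ehat @ Hhat) < 1e-12
print("khat, Ehat, Hhat form a right handed triple")
```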

3.5 statics.

3.5.1 Inverting the Maxwell statics equation.

Similar to electrostatics and magnetostatics, we can restrict attention to time invariant fields (∂_t F = 0) and time invariant sources (∂_t J = 0), but consider both electric and magnetic sources. In that case Maxwell's equation is reduced to an invertible first order gradient equation

$$ \nabla F(\mathbf{x}) = J(\mathbf{x}). \tag{3.36} $$
Theorem 3.4: Maxwell's statics solution.

The solution to the Maxwell statics equation is given by

$$ F(\mathbf{x}) = \frac{1}{4\pi} \int_V \frac{\left\langle (\mathbf{x} - \mathbf{x}') J(\mathbf{x}') \right\rangle_{1,2}}{\lVert \mathbf{x} - \mathbf{x}' \rVert^3}\, dV' + F_0, $$

where F₀ is any function for which ∇F₀ = 0. The explicit expansion in electric and magnetic fields and charge and current densities is given by

$$ \begin{aligned} \mathbf{E}(\mathbf{x}) &= \frac{1}{4\pi} \int_V \frac{dV'}{\lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left( \frac{1}{\epsilon} (\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') + (\mathbf{x} - \mathbf{x}') \times \mathbf{M}(\mathbf{x}') \right) \\ \mathbf{H}(\mathbf{x}) &= \frac{1}{4\pi} \int_V \frac{dV'}{\lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left( \mathbf{J}(\mathbf{x}') \times (\mathbf{x} - \mathbf{x}') + \frac{1}{\mu} (\mathbf{x} - \mathbf{x}') \rho_{\mathrm{m}}(\mathbf{x}') \right). \end{aligned} $$

We see that the solution incorporates both a Coulomb's law contribution and a Biot-Savart law contribution, as well as their magnetic source analogues if applicable.
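The Biot-Savart contribution can be exercised numerically (an added sketch, not from the text): discretizing the current integral for a circular loop and comparing with the classic on-axis closed form H_z = I R²/(2(R² + z²)^{3/2}):

```python
# On-axis H field of a circular current loop, via the discretized
# (1/4 pi) integral of J x (x - x') / |x - x'|^3 from theorem 3.4.
import numpy as np

I_current = 2.0      # loop current [A]
R = 0.5              # loop radius [m]
z = 0.3              # axial height of the field point [m]

n = 2000
phi = 2 * np.pi * (np.arange(n) + 0.5) / n
loop = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n)], axis=1)
dl = I_current * np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros(n)],
                          axis=1) * (2 * np.pi / n)   # J dV' -> I dl'

x = np.array([0.0, 0.0, z])
s = x - loop                                          # s = x - x'
H = np.sum(np.cross(dl, s) /
           (4 * np.pi * np.linalg.norm(s, axis=1, keepdims=True) ** 3),
           axis=0)

H_exact = I_current * R**2 / (2 * (R**2 + z**2) ** 1.5)
assert np.allclose(H, [0, 0, H_exact], atol=1e-9)
print(H[2], H_exact)
```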
To prove theorem 3.4, we utilize the Green's function for the (first order) gradient eq. (2.144), finding immediately

$$ \begin{aligned} F(\mathbf{x}) &= \int_V dV'\, G(\mathbf{x}, \mathbf{x}')\, J(\mathbf{x}') \\ &= \left\langle \int_V dV'\, G(\mathbf{x}, \mathbf{x}')\, J(\mathbf{x}') \right\rangle_{1,2} \\ &= \frac{1}{4\pi} \int_V dV'\, \left\langle \frac{(\mathbf{x} - \mathbf{x}')\, J(\mathbf{x}')}{\lVert \mathbf{x} - \mathbf{x}' \rVert^3} \right\rangle_{1,2}. \end{aligned} \tag{3.37} $$

Here a no-op grade selection has been inserted to simplify subsequent manipulation.² We are also free to add any grade 1,2 solution of the homogeneous gradient equation, which provides the multivector form of the solution.

To unpack the multivector result, let s = x − x′, and expand the grade 1,2 selection

$$ \begin{aligned} \left\langle \mathbf{s} J \right\rangle_{1,2} &= \eta \left\langle \mathbf{s} (c \rho - \mathbf{J}) \right\rangle_{1,2} + \left\langle \mathbf{s} I (c \rho_{\mathrm{m}} - \mathbf{M}) \right\rangle_{1,2} \\ &= \eta c \mathbf{s} \rho - \eta (\mathbf{s} \wedge \mathbf{J}) + c I \mathbf{s} \rho_{\mathrm{m}} - I (\mathbf{s} \wedge \mathbf{M}) \\ &= \frac{1}{\epsilon} \mathbf{s} \rho + \eta I (\mathbf{J} \times \mathbf{s}) + c \mathbf{s} \rho_{\mathrm{m}} I + \mathbf{s} \times \mathbf{M}, \end{aligned} \tag{3.38} $$

so the field is

$$ \begin{aligned} F(\mathbf{x}) &= \frac{1}{4\pi} \int_V \frac{dV'}{\lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left( \frac{1}{\epsilon} \mathbf{s} \rho + \mathbf{s} \times \mathbf{M} \right) \\ &\quad + I \frac{1}{4\pi} \int_V \frac{dV'}{\lVert \mathbf{x} - \mathbf{x}' \rVert^3} \left( c \mathbf{s} \rho_{\mathrm{m}} + \eta \mathbf{J} \times \mathbf{s} \right). \end{aligned} \tag{3.39} $$

Comparing this expansion to the field components F = E + ηIH, we complete the proof.

2 If this grade selection filter is omitted, it is possible to show that the scalar and pseudoscalar contributions to the (x − x′)J product are zero on the boundary of the Green's integration volume. [16]

3.5.2 Enclosed charge.

In conventional electrostatics we obtain a relation between the normal electric field component and the enclosed charge by integrating the electric field divergence. The geometric algebra generalization of this relates the product of the normal with the electromagnetic field strength to the enclosed multivector current.

Theorem 3.5: Enclosed multivector current.

The total multivector current in the volume is related to the surface integral of n̂F over the boundary of the volume by

$$ \int_{\partial V} dA\, \hat{\mathbf{n}} F = \int_V dV\, J. $$

This is a multivector equation, carrying information for each grade in the multivector current, and after explicit expansion is equivalent to

$$ \begin{aligned} \int_{\partial V} dA\, \hat{\mathbf{n}} \cdot \mathbf{E} &= \frac{1}{\epsilon} \int_V dV\, \rho \\ \int_{\partial V} dA\, \hat{\mathbf{n}} \times \mathbf{H} &= \int_V dV\, \mathbf{J} \\ \int_{\partial V} dA\, \hat{\mathbf{n}} \times \mathbf{E} &= -\int_V dV\, \mathbf{M} \\ \int_{\partial V} dA\, \hat{\mathbf{n}} \cdot \mathbf{H} &= \frac{1}{\mu} \int_V dV\, \rho_{\mathrm{m}}. \end{aligned} $$

To prove theorem 3.5 simply evaluate the volume integral of the gradient of the field using
theorem 2.11
Z Z
dV∇F = dAn̂F, (3.40)
V ∂V
144 electromagnetism.

and note that


Z Z
dV∇F = dV J. (3.41)
V V

This is a multivector relationship, containing a substantial amount of information, which can be extracted by expanding \( \hat{n} F \)
\[
\begin{aligned}
\hat{n} F &= \hat{n} \left( E + I \eta H \right) \\
&= \hat{n} \cdot E + I (\hat{n} \times E) + I \eta \left( \hat{n} \cdot H + I \, \hat{n} \times H \right) \\
&= \hat{n} \cdot E - \eta (\hat{n} \times H) + I (\hat{n} \times E) + I \eta (\hat{n} \cdot H).
\end{aligned}
\tag{3.42}
\]

Inserting this into theorem 3.5, and equating grades, we find
\[
\begin{aligned}
\int_{\partial V} dA \, \hat{n} \cdot E &= \frac{1}{\epsilon} \int_V dV \rho \\
-\int_{\partial V} dA \, \eta (\hat{n} \times H) &= -\int_V dV \eta J \\
I \int_{\partial V} dA \, (\hat{n} \times E) &= -I \int_V dV M \\
I \int_{\partial V} dA \, \eta (\hat{n} \cdot H) &= I \int_V dV c \rho_m,
\end{aligned}
\tag{3.43}
\]

which completes the proof after cancelling common factors and making some minor adjustments of the multiplicative constants. Of course eq. (3.43) could have been obtained directly from Maxwell's equations in their conventional form eq. (3.1). However, had we integrated the conventional Maxwell's equations, it would not have been obvious that the crazy mix of fields, sources, dot and cross products in eq. (3.43) had a hidden structure as simple as \( \int_{\partial V} dA \, \hat{n} F = \int_V dV \, J \).
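The grade split of \( \hat{n} F \) used above is easy to spot-check numerically. The following is a quick standalone sketch (mine, not from the book) that implements the \( \mathbb{R}^3 \) geometric product with a standard bitmask blade representation, and verifies the four-term expansion of \( \hat{n} F \) for arbitrary field values; all helper names are my own, and \( \eta \) is an arbitrary constant chosen for the check.

```python
import math

# Multivector = 8 coefficients indexed by bitmask blades:
# 1, e1, e2, e12, e3, e13, e23, e123 (bit k set means a factor of e_{k+1}).
def blade_sign(a, b):
    # sign incurred reordering the basis factors of blade a past those of b
    s = 0
    a >>= 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s % 2 else 1.0

def gp(x, y):
    # geometric product of two multivectors
    z = [0.0] * 8
    for i in range(8):
        for j in range(8):
            if x[i] and y[j]:
                z[i ^ j] += blade_sign(i, j) * x[i] * y[j]
    return z

def vec(a):
    m = [0.0] * 8
    m[1], m[2], m[4] = a
    return m

def add(*ms):
    return [sum(cs) for cs in zip(*ms)]

def smul(s, m):
    return [s * c for c in m]

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

I = [0.0] * 8; I[7] = 1.0          # pseudoscalar e123
eta = 2.5                           # arbitrary impedance for the check
E = (1.0, -2.0, 0.5)
H = (0.25, 3.0, -1.0)
n = (1/math.sqrt(3), 1/math.sqrt(3), 1/math.sqrt(3))

F = add(vec(E), gp(I, smul(eta, vec(H))))      # F = E + I eta H
lhs = gp(vec(n), F)                             # n F
scalar = [0.0] * 8; scalar[0] = dot(n, E)
rhs = add(scalar,
          smul(-eta, vec(cross(n, H))),         # -eta (n x H)
          gp(I, vec(cross(n, E))),              # I (n x E)
          smul(eta * dot(n, H), I))             # I eta (n . H)
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-12
```

The same 40 lines of bitmask machinery suffice to check any of the grade-selection identities in this chapter.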

3.5.3 Enclosed current.

In this section we will present the generalization of Ampere’s law to line integrals of the total
electromagnetic field strength.

Theorem 3.6: Line integral of the field.

The line integral of the electromagnetic field strength is
\[
\int_{\partial A} dx \, F = I \int_A dA \left( \hat{n} J - \frac{\partial F}{\partial n} \right),
\]
where \( \partial F/\partial n = (\hat{n} \cdot \nabla) F \). Expressed in terms of the conventional constituent fields and sources, this multivector relationship expands to four equations, one for each grade
\[
\begin{aligned}
\int_{\partial A} dx \cdot E &= \int_A dA \, \hat{n} \cdot M \\
\int_{\partial A} dx \times H &= \int_A dA \left( -\hat{n} \times J + \frac{\hat{n} \rho_m}{\mu} - \frac{\partial H}{\partial n} \right) \\
\int_{\partial A} dx \times E &= \int_A dA \left( \hat{n} \times M + \frac{\hat{n} \rho}{\epsilon} - \frac{\partial E}{\partial n} \right) \\
\int_{\partial A} dx \cdot H &= -\int_A dA \, \hat{n} \cdot J.
\end{aligned}
\]

The last of the scalar equations in theorem 3.6 is Ampere's law
\[
\int_{\partial A} dx \cdot H = -\int_A dA \, \hat{n} \cdot J = I_{\text{enc}},
\tag{3.44}
\]
and the first is the dual of Ampere's law for (fictitious) magnetic current density³. In eq. (3.44) the flux of the electric current density equals the enclosed current flowing through an open surface, up to a sign fixed by the relative orientation of \( \hat{n} \) and the boundary. This enclosed current equals the line integral of the magnetic field around the boundary of that surface.
To prove theorem 3.6 we compute the surface integral of the current \( J = \nabla F \)
\[
\int_A d^2 x \, J = \int_A d^2 x \, \nabla F.
\tag{3.45}
\]

As we are working in R3 not R2 , the gradient may not be replaced by the vector derivative in
eq. (3.45). Instead we must split the gradient into its vector derivative component, the projection
of the gradient onto the tangent plane of the integration surface, and its normal component
∇ = ∂ + n̂(n̂ · ∇). (3.46)

The surface integral form eq. (2.89) of the fundamental theorem of geometric calculus may be applied to the vector derivative portion of the field integral
\[
\int_A d^2 x \, \nabla F = \int_A d^2 x \, \partial F + \int_A d^2 x \, \hat{n} (\hat{n} \cdot \nabla) F,
\tag{3.47}
\]

3 Even without the fictitious magnetic sources, neither the name nor applications of the two cross product line integrals
with the normal derivatives are familiar to the author.

so
\[
\begin{aligned}
\int_{\partial A} dx \, F
&= \int_A d^2 x \left( J - \hat{n} (\hat{n} \cdot \nabla) F \right) \\
&= \int_A dA \left( I \hat{n} J - (\hat{n} \cdot \nabla) I F \right) \\
&= \int_A dA \, I \left( \hat{n} J - \frac{\partial F}{\partial n} \right),
\end{aligned}
\tag{3.48}
\]

where the surface area bivector has been written in its dual form d2 x = I n̂dA in terms of a
scalar area element, and the directional derivative has been written in scalar form with respect
to a parameter n that represents the length along the normal direction. This proves the first part
of theorem 3.6.
Observe that the \( dx \, F \) product has all possible grades
\[
\begin{aligned}
dx \, F &= dx \left( E + I \eta H \right) \\
&= dx \cdot E + I \eta \, dx \cdot H + dx \wedge E + I \eta \, dx \wedge H \\
&= dx \cdot E - \eta (dx \times H) + I (dx \times E) + I \eta (dx \cdot H),
\end{aligned}
\tag{3.49}
\]

as does the \( I \hat{n} J \) product (in general)
\[
\begin{aligned}
I \hat{n} J &= I \hat{n} \left( \frac{\rho}{\epsilon} - \eta J + I \left( c \rho_m - M \right) \right) \\
&= \hat{n} I \frac{\rho}{\epsilon} - \eta \hat{n} I J - \hat{n} c \rho_m + \hat{n} M \\
&= \hat{n} \cdot M + \eta (\hat{n} \times J) - \hat{n} c \rho_m + I (\hat{n} \times M) + \hat{n} I \frac{\rho}{\epsilon} - \eta I (\hat{n} \cdot J).
\end{aligned}
\tag{3.50}
\]

On the other hand \( I F = I E - \eta H \) has only grades 1,2, like \( F \) itself. This allows the line integrals to be split by grade selection into components with and without a normal derivative
\[
\begin{aligned}
\int_{\partial A} \left\langle dx \, F \right\rangle_{0,3} &= \int_A dA \left\langle I \hat{n} J \right\rangle_{0,3} \\
\int_{\partial A} \left\langle dx \, F \right\rangle_{1,2} &= \int_A dA \left( \left\langle I \hat{n} J \right\rangle_{1,2} - (\hat{n} \cdot \nabla) I F \right).
\end{aligned}
\tag{3.51}
\]

The first of eq. (3.51) contains Ampere's law and its dual as one multivector equation, which can be seen more readily by explicit expansion in the constituent fields and sources using eq. (3.49) and eq. (3.50)
\[
\begin{aligned}
\int_{\partial A} \left( dx \cdot E + I \eta (dx \cdot H) \right) &= \int_A dA \left( \hat{n} \cdot M - \eta I (\hat{n} \cdot J) \right) \\
\int_{\partial A} \left( -\eta (dx \times H) + I (dx \times E) \right) &= \int_A dA \left( \eta (\hat{n} \times J) - \hat{n} c \rho_m + I (\hat{n} \times M) + \hat{n} I \frac{\rho}{\epsilon} - \frac{\partial}{\partial n} \left( I E - \eta H \right) \right).
\end{aligned}
\tag{3.52}
\]

Further grade selection operations, and minor adjustments of the leading constants, complete the proof.
It is also worth pointing out that for pure magnetostatics problems, where the multivector current is \( J = -\eta J \) and \( F = I \eta H \), Ampere's law can be written in a trivector form
\[
\int_{\partial A} dx \wedge F = -I \eta \int_A dA \, \hat{n} \cdot J.
\tag{3.53}
\]

This encodes the fact that the magnetic field component of the total electromagnetic field
strength is most naturally expressed in geometric algebra as a bivector.

3.5.4 Example field calculations.

Having seen a number of theoretical applications of the geometric algebra framework, let’s now
see how some of our new tools can be used to calculate the fields for specific static electromag-
netism charge and current configurations.

3.5.4.1 Line segment.

In this example the (electric) field is calculated at a point on the z-axis, due to a finite line charge
density of λ along a segment [a, b] of the x-axis. This is illustrated in fig. 3.1.

Figure 3.1: Line charge density.

Introducing a unit imaginary \( i = e_{13} \) for the rotation from the x-axis to the z-axis, the observation point is
\[
x = r e_1 e^{i\theta}.
\tag{3.54}
\]



The charge element point is \( x' = x e_1 \), so the difference can be written with \( e_1 \) factored to the left or to the right
\[
x - x' = e_1 \left( r e^{i\theta} - x \right) = \left( r e^{-i\theta} - x \right) e_1.
\tag{3.55}
\]

This allows the squared vector length to be calculated as a product of complex conjugates
\[
\begin{aligned}
\left( x - x' \right)^2
&= \left( r e^{-i\theta} - x \right) e_1 e_1 \left( r e^{i\theta} - x \right) \\
&= \left( r e^{-i\theta} - x \right) \left( r e^{i\theta} - x \right) \\
&= r^2 + x^2 - r x \left( e^{i\theta} + e^{-i\theta} \right) \\
&= r^2 + x^2 - 2 r x \cos\theta.
\end{aligned}
\tag{3.56}
\]
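Because the GA "complex" exponentials behave exactly like ordinary complex numbers here, this identity can be checked with the standard library's complex arithmetic. A minimal sketch (mine, not from the book):

```python
import cmath
import math

# Check (r e^{-i theta} - x)(r e^{i theta} - x) = r^2 + x^2 - 2 r x cos(theta)
# for a few arbitrary sample values.
errs = []
for r, x, theta in [(1.0, 0.3, 0.7), (2.5, -1.2, 2.0), (0.8, 4.0, -0.4)]:
    prod = (r * cmath.exp(-1j * theta) - x) * (r * cmath.exp(1j * theta) - x)
    expected = r * r + x * x - 2 * r * x * math.cos(theta)
    errs.append(abs(prod - expected))   # product is real, and matches
assert max(errs) < 1e-12
```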

The total (electric) field is therefore
\[
\begin{aligned}
F &= \frac{\lambda}{4\pi\epsilon} \int_a^b dx \, \frac{r e_1 e^{i\theta} - x e_1}{\left( r^2 + x^2 - 2 x r \cos\theta \right)^{3/2}} \\
&= \frac{\lambda e_1}{4\pi\epsilon r} \int_{a/r}^{b/r} du \, \frac{e^{i\theta} - u}{\left( 1 + u^2 - 2 u \cos\theta \right)^{3/2}}.
\end{aligned}
\tag{3.57}
\]
This integral can be evaluated by table lookup or using tools like Mathematica. For \( \theta = \pi/2 \)
\[
\int du \, \frac{e^{i\theta} - u}{\left( 1 + u^2 - 2 u \cos\theta \right)^{3/2}} = \frac{1 + i u}{\sqrt{1 + u^2}},
\tag{3.58}
\]


and for other angles \( \theta \ne n \pi \)
\[
\int du \, \frac{e^{i\theta} - u}{\left( 1 + u^2 - 2 u \cos\theta \right)^{3/2}} = \frac{i \left( u - e^{i\theta} \right)}{\sin\theta \, \sqrt{1 + u^2 - 2 u \cos\theta}},
\tag{3.59}
\]
which reduces to eq. (3.58) at \( \theta = \pi/2 \).

The interesting takeaway is not the form of the solution, but the fact that GA allows the introduction of a "complex plane" for many problems that have polar representations in a plane. When such a complex plane is introduced, existing Computer Algebra Systems (CAS), like Mathematica, can be utilized for the grunt work of the evaluation.
Observe that numerator factors like \( e_1 (1 + i u) \) and \( e_1 (1 - u e^{-i\theta}) \) compactly describe the direction of the vector field at the observation point. Either of these can be expanded explicitly in sines and cosines if desired
\[
\begin{aligned}
e_1 (1 + i u) &= e_1 + u e_3 \\
e_1 (1 - u e^{-i\theta}) &= e_1 (1 - u \cos\theta) + u e_3 \sin\theta.
\end{aligned}
\tag{3.60}
\]

3.5.4.2 Infinite line current.

Given a static line charge density and current density along the z-axis
\[
\begin{aligned}
\rho(x) &= \lambda \delta(x) \delta(y) \\
J(x) &= v \rho(x) = v \lambda e_3 \delta(x) \delta(y),
\end{aligned}
\tag{3.61}
\]
the total multivector current is
\[
J = \eta \left( c \rho - J \right) = \eta \left( c - v e_3 \right) \lambda \delta(x) \delta(y) = \frac{\lambda}{\epsilon} \left( 1 - \frac{v}{c} e_3 \right) \delta(x) \delta(y).
\tag{3.62}
\]
We can find the field for this current using theorem 3.4. To do so, let the field observation point be \( x = x_\perp + z e_3 \), so the total field is
\[
\begin{aligned}
F(x) &= \frac{\lambda}{4\pi\epsilon} \int_V dx' dy' dz' \, \frac{\left\langle (x - x') \left( 1 - (v/c) e_3 \right) \right\rangle_{1,2}}{\left\lVert x - x' \right\rVert^3} \delta(x') \delta(y') \\
&= \frac{\lambda}{4\pi\epsilon} \int_{-\infty}^{\infty} dz' \, \frac{\left\langle \left( x_\perp + (z - z') e_3 \right) \left( 1 - (v/c) e_3 \right) \right\rangle_{1,2}}{\left( x_\perp^2 + (z - z')^2 \right)^{3/2}} \\
&= \frac{\lambda \left( x_\perp - (v/c) x_\perp e_3 \right)}{4\pi\epsilon} \int_{-\infty}^{\infty} \frac{dz'}{\left( x_\perp^2 + (z - z')^2 \right)^{3/2}}
+ \frac{\lambda e_3}{4\pi\epsilon} \int_{-\infty}^{\infty} \frac{(z - z') \, dz'}{\left( x_\perp^2 + (z - z')^2 \right)^{3/2}}.
\end{aligned}
\tag{3.63}
\]
The first integral is \( 2/x_\perp^2 \), whereas the second is zero (an odd function over an even interval). The bivector term of the grade selection above has an \( x_\perp \wedge e_3 = x_\perp e_3 \) factor, which can be further reduced using cylindrical coordinates \( x = R \hat{\rho} + z e_3 \), since \( x_\perp = R \hat{\rho} \), which leaves

\[
F(x) = \frac{\lambda}{2\pi\epsilon R} \hat{\rho} \left( 1 - \frac{v}{c} \right) = E \left( 1 - \frac{v}{c} \right) = E + I \left( \frac{v}{c} \times E \right),
\tag{3.64}
\]
where \( v = v e_3 \). The vector component of this is the electric field, which is therefore directed radially, whereas the (dual) magnetic field \( I \eta H \) is a set of oriented planes spanning the radial and z-axis directions. We can also see that there is a constant proportionality factor that relates the electric and magnetic field components, namely
\[
I \eta H = -E v/c,
\tag{3.65}
\]
or
\[
H = v \times D.
\tag{3.66}
\]
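The elementary integral quoted above, \( \int_{-\infty}^{\infty} dz'/(x_\perp^2 + z'^2)^{3/2} = 2/x_\perp^2 \), is worth a quick numeric sanity check, since it sets the \( 1/R \) fall-off of the line-source field. A sketch of my own, using the substitution \( z' = x_\perp \tan t \) to make the domain finite:

```python
import math

def integral(x_perp, n=200000):
    # With z' = x_perp tan(t), the integrand dz'/(x_perp^2 + z'^2)^{3/2}
    # becomes cos(t)/x_perp^2 dt over t in (-pi/2, pi/2); midpoint rule.
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        total += math.cos(t) / (x_perp ** 2)
    return total * h

for xp in (0.5, 1.0, 3.0):
    assert abs(integral(xp) - 2.0 / xp ** 2) < 1e-8
```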

Exercise 3.4 Linear magnetic density and currents.


Given magnetic charge density \( \rho_m = \lambda_m \delta(x) \delta(y) \), and current density \( M = v e_3 \rho_m = v \rho_m \), show that the field is given by
\[
F(x) = \frac{\lambda_m c}{4\pi R} I \hat{\rho} \left( 1 - \frac{v}{c} \right),
\]
or with \( B = \lambda_m \hat{\rho}/(4\pi R) \),
\[
F = B \times v + c I B.
\]

3.5.4.3 Infinite planar current.

A variation on the above example puts a uniform charge density ρ(x) = σδ(z) in a plane, along
with an associated current density J(x) = ve1 eiθ ρ(x), i = e12 . Letting v = ve1 eiθ , the multivec-
tor current is
J(x) = ση (c − v) δ(z), (3.67)

so the field off the plane is
\[
F(x) = \frac{\sigma}{4\pi\epsilon} \iiint dz' dA' \, \frac{\left\langle (x - x') \left( 1 - v/c \right) \right\rangle_{1,2}}{\left\lVert x - x' \right\rVert^3} \delta(z').
\tag{3.68}
\]
If \( x_\parallel = (x \wedge e_3) e_3 \), and \( x'_\parallel = (x' \wedge e_3) e_3 \), are the components of the vectors \( x, x' \) in the x-y plane, then integration over \( z' \) and a change of variables \( x'_\parallel - x_\parallel = r' e_1 e^{i\theta'} \) yields
\[
F(x) = \frac{\sigma}{4\pi\epsilon} \iint \frac{r' dr' d\theta'}{\left( z^2 + r'^2 \right)^{3/2}} \left\langle \left( z e_3 - r' e_1 e^{i\theta'} \right) \left( 1 - v/c \right) \right\rangle_{1,2}.
\tag{3.69}
\]

The \( e^{i\theta'} \) integrands are killed, so for \( z \ne 0 \), the field is
\[
F(x) = \frac{\sigma z}{2 \epsilon |z|} \left\langle e_3 \left( 1 - v/c \right) \right\rangle_{1,2}.
\tag{3.70}
\]
Since \( v \in \mathrm{span}\{e_1, e_2\} \) the product \( e_3 v \) is a bivector and the grade selection can be dropped, leaving
\[
F(x) = \frac{\sigma \, \mathrm{sgn}(z)}{2 \epsilon} e_3 \left( 1 - \frac{v}{c} \right),
\tag{3.71}
\]
recovering the conventional \( \sigma/(2\epsilon) \) magnitude for the field of an infinite charged sheet.
This field toggles sign when crossing the plane, but is constant otherwise. The electric and
magnetic field components are once again related by eq. (3.66).

Exercise 3.5 Planar magnetic density and currents.


Given magnetic charge density \( \rho_m = \sigma_m \delta(z) \), and current density \( M = v \rho_m \), with \( v = v e_1 e^{i\theta} \), \( i = e_{12} \), show that the field is given by
\[
F(x) = \frac{\sigma_m c \, \mathrm{sgn}(z)}{2} \, i \left( 1 - \frac{v}{c} \right).
\]

3.5.4.4 Arc line charge.

In this example we will examine the (electric) field due to a line charge density of \( \lambda \) along a circular arc segment \( \phi' \in [a, b] \), of radius \( r \) in the x-y plane. The field will be evaluated at the spherical coordinate point \( (R, \theta, \phi) \), as illustrated in fig. 3.2.

Figure 3.2: Circular line charge.

Using the GA spherical parameterization of eq. (2.58) and eq. (2.59), the observation point has the simple representation
\[
x = R e_3 e^{j\theta},
\tag{3.72}
\]

and is the product of a polar directed vector with a complex exponential whose argument is the polar rotation angle. The bivector \( j \) is a function of the azimuthal angle \( \phi \), and encodes all the geometry of the rotation. To sum the contributions of the charge elements we need the distance between the charge element and the observation point. That vector difference is
\[
x - x' = R e_3 e^{j\theta} - r e_1 e^{i\phi'}.
\tag{3.73}
\]
Compare this to the tuple representation
\[
x - x' = \left( R \sin\theta \cos\phi - r \cos\phi', \; R \sin\theta \sin\phi - r \sin\phi', \; R \cos\theta \right),
\tag{3.74}
\]

which is considerably less attractive to work with. The squared length of eq. (3.73) is
\[
(x - x')^2 = R^2 + r^2 - 2 R r \left( e_3 e^{j\theta} \right) \cdot \left( e_1 e^{i\phi'} \right).
\tag{3.75}
\]
The dot product of unit vectors in eq. (3.75) can be reduced using scalar grade selection
\[
\left( e_3 e^{j\theta} \right) \cdot \left( e_1 e^{i\phi'} \right)
= \left\langle e_1 \sin\theta \, e^{i\phi} e_1 e^{i\phi'} \right\rangle
= \sin\theta \left\langle e^{-i\phi} e^{i\phi'} \right\rangle
= \sin\theta \cos(\phi' - \phi),
\tag{3.76}
\]
so
\[
\left\lVert x - x' \right\rVert = \sqrt{ R^2 + r^2 - 2 R r \sin\theta \cos(\phi' - \phi) }.
\tag{3.77}
\]

The electric field is
\[
F = \frac{1}{4\pi\epsilon_0} \int_a^b \lambda r d\phi' \, \frac{R e_3 e^{j\theta} - r e_1 e^{i\phi'}}{\left( R^2 + r^2 - 2 R r \sin\theta \cos(\phi' - \phi) \right)^{3/2}}.
\tag{3.78}
\]
Non-dimensionalizing eq. (3.78) with \( u = r/R \), a change of variables \( \alpha = \phi' - \phi \), and noting that \( i \hat{\phi} = e_1 e^{i\phi} \), the field is
\[
\begin{aligned}
F &= \frac{\lambda r}{4\pi\epsilon_0 R^2} \int_{a - \phi}^{b - \phi} d\alpha \, \frac{e_3 e^{j\theta} - u e_1 e^{i\phi} e^{i\alpha}}{\left( 1 + u^2 - 2 u \sin\theta \cos\alpha \right)^{3/2}} \\
&= \frac{\lambda r}{4\pi\epsilon_0 R^2} \int_{a - \phi}^{b - \phi} d\alpha \, \frac{\hat{r} + \hat{\phi} u i e^{i\alpha}}{\left( 1 + u^2 - 2 u \sin\theta \cos\alpha \right)^{3/2}},
\end{aligned}
\tag{3.79}
\]
or
\[
F = \hat{r} \left( \frac{\lambda r}{4\pi\epsilon_0 R^2} \int_{a - \phi}^{b - \phi} \frac{d\alpha}{\left( 1 + u^2 - 2 u \sin\theta \cos\alpha \right)^{3/2}} \right)
+ \hat{\phi} \left( \frac{\lambda r u i}{4\pi\epsilon_0 R^2} \int_{a - \phi}^{b - \phi} \frac{e^{i\alpha} \, d\alpha}{\left( 1 + u^2 - 2 u \sin\theta \cos\alpha \right)^{3/2}} \right).
\tag{3.80}
\]

Without CAS support for GA, this pair of integrals has to be evaluated separately. The first integral scales the radial component of the electric field. The second integral scales and rotates \( \hat{\phi} \) within the azimuthal plane, producing an electric field component in a \( \hat{\phi}' = \hat{\phi} e^{i\Phi} \) direction.

3.5.4.5 Field of a ring current.

Let’s now compute the field due to a static charge and current density on a ring of radius R as
illustrated in fig. 3.3.

Figure 3.3: Field due to a circular distribution.

A static charge distribution on a ring at \( z = 0 \) has the form
\[
\rho(x) = \lambda \delta(z) \delta(r - R).
\tag{3.81}
\]
As always the current distribution is of the form \( J = v \rho \), and in this case the velocity is azimuthal \( v = v e_2 e^{i\phi} \), \( i = e_{12} \). The total multivector current is
\[
J = \frac{\lambda}{\epsilon} \delta(z) \delta(r - R) \left( 1 - \frac{v}{c} \right).
\tag{3.82}
\]
Let the observation point and the integration variables be
\[
x = z e_3 + r \hat{\rho}, \qquad x' = z' e_3 + r' \hat{\rho}'.
\tag{3.83}
\]
The field is
\[
\begin{aligned}
F(x) &= \frac{\lambda}{4\pi\epsilon} \iiint dz' r' dr' d\phi' \, \delta(z') \delta(r' - R) \,
\frac{\left\langle \left( (z - z') e_3 + r \hat{\rho} - r' \hat{\rho}' \right) \left( 1 - \frac{v}{c} e_2 e^{i\phi'} \right) \right\rangle_{1,2}}{\left( (z - z')^2 + (r \hat{\rho} - r' \hat{\rho}')^2 \right)^{3/2}} \\
&= \frac{\lambda}{4\pi\epsilon} \int R d\phi' \, \frac{\left\langle \left( z e_3 + r \hat{\rho} - R \hat{\rho}' \right) \left( 1 - \frac{v}{c} e_2 e^{i\phi'} \right) \right\rangle_{1,2}}{\left( z^2 + (r \hat{\rho} - R \hat{\rho}')^2 \right)^{3/2}}.
\end{aligned}
\tag{3.84}
\]

Without loss of generality, we can align the axes so that \( \hat{\rho} = e_1 \), and introduce dimensionless variables
\[
\tilde{z} = z/R, \qquad \tilde{r} = r/R,
\tag{3.85}
\]
which gives
\[
F = \frac{\lambda}{4\pi\epsilon R} \int_0^{2\pi} d\phi' \, \frac{\left\langle \left( \tilde{z} e_3 + \tilde{r} e_1 - e_1 e^{i\phi'} \right) \left( 1 - \frac{v}{c} e_2 e^{i\phi'} \right) \right\rangle_{1,2}}{\left( \tilde{z}^2 + (\tilde{r} e_1 - e_1 e^{i\phi'})^2 \right)^{3/2}}.
\tag{3.86}
\]

In the denominator, the vector square expands as
\[
(\tilde{r} e_1 - e_1 e^{i\phi'})^2 = \left( \tilde{r} - e^{-i\phi'} \right) e_1^2 \left( \tilde{r} - e^{i\phi'} \right) = \tilde{r}^2 + 1 - 2 \tilde{r} \cos\phi',
\tag{3.87}
\]
and the grade selection in the numerator is
\[
\left\langle \left( \tilde{z} e_3 + \tilde{r} e_1 - e_1 e^{i\phi'} \right) \left( 1 - \frac{v}{c} e_2 e^{i\phi'} \right) \right\rangle_{1,2}
= \tilde{z} e_3 + \tilde{r} e_1 - e_1 e^{i\phi'} - \frac{v}{c} \left( \tilde{z} e_{32} e^{i\phi'} + \tilde{r} i \cos\phi' - i \right).
\tag{3.88}
\]
Any of the exponential integral terms are of the form
\[
\int_0^{2\pi} d\phi' \, e^{i\phi'} f(\cos\phi') = \int_0^{2\pi} d\phi' \, \cos\phi' \, f(\cos\phi'),
\tag{3.89}
\]
since the \( i \sin\phi' f(\cos\phi') \) contributions are odd functions around \( \phi' = \pi \).
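This even/odd argument is easy to confirm numerically for the specific \( f \) that appears in the ring integrals. A sketch of mine, using a midpoint rule (which is spectrally accurate for periodic integrands):

```python
import cmath
import math

def f(c, zt=0.4, rt=0.7):
    # the denominator function appearing in the ring-field integrals
    return 1.0 / (1 + zt * zt + rt * rt - 2 * rt * c) ** 1.5

n = 4096
h = 2 * math.pi / n
phis = [(k + 0.5) * h for k in range(n)]
lhs = sum(cmath.exp(1j * p) * f(math.cos(p)) for p in phis) * h
rhs = sum(math.cos(p) * f(math.cos(p)) for p in phis) * h
# the imaginary (sin phi') part integrates to zero
assert abs(lhs - rhs) < 1e-10
```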


For general \( \tilde{z}, \tilde{r} \) the integrals above require numeric evaluation or special functions. Let
\[
A = \int_0^{2\pi} \frac{d\phi'}{\left( 1 + \tilde{z}^2 + \tilde{r}^2 - 2 \tilde{r} \cos\phi' \right)^{3/2}}
= \frac{4 E\!\left( -\frac{4 \tilde{r}}{(\tilde{r} - 1)^2 + \tilde{z}^2} \right)}{\sqrt{\tilde{z}^2 + (\tilde{r} - 1)^2} \left( \tilde{z}^2 + (\tilde{r} + 1)^2 \right)}
\tag{3.90a}
\]
\[
B = \int_0^{2\pi} \frac{\cos\phi' \, d\phi'}{\left( 1 + \tilde{z}^2 + \tilde{r}^2 - 2 \tilde{r} \cos\phi' \right)^{3/2}}
= \frac{2 \left( \left( \tilde{z}^2 + \tilde{r}^2 + 1 \right) E\!\left( -\frac{4 \tilde{r}}{(\tilde{r} - 1)^2 + \tilde{z}^2} \right) - \left( \tilde{z}^2 + (\tilde{r} + 1)^2 \right) K\!\left( -\frac{4 \tilde{r}}{(\tilde{r} - 1)^2 + \tilde{z}^2} \right) \right)}{\tilde{r} \sqrt{\tilde{z}^2 + (\tilde{r} - 1)^2} \left( \tilde{z}^2 + (\tilde{r} + 1)^2 \right)},
\tag{3.90b}
\]


Figure 3.4: (a) \( A(\tilde{z}, \tilde{\rho}) \). (b) \( B(\tilde{z}, \tilde{\rho}) \).

where \( K(m), E(m) \) are complete elliptic integrals of the first and second kind respectively. As seen in fig. 3.4, these functions are similar, both tailing off quickly with \( z, \rho \), with the largest values near the ring.
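The closed forms eq. (3.90) can be validated against direct quadrature, computing \( K(m) \) and \( E(m) \) from their defining integrals (which remain well behaved for the negative parameter appearing here). A standalone sketch of mine:

```python
import math

def quad(g, a, b, n=20000):
    # midpoint-rule quadrature
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

def K(m):
    return quad(lambda t: 1.0 / math.sqrt(1 - m * math.sin(t) ** 2), 0, math.pi / 2)

def E(m):
    return quad(lambda t: math.sqrt(1 - m * math.sin(t) ** 2), 0, math.pi / 2)

zt, rt = 0.6, 0.8
den = 1 + zt * zt + rt * rt
A_num = quad(lambda p: (den - 2 * rt * math.cos(p)) ** -1.5, 0, 2 * math.pi)
B_num = quad(lambda p: math.cos(p) * (den - 2 * rt * math.cos(p)) ** -1.5, 0, 2 * math.pi)

m = -4 * rt / ((rt - 1) ** 2 + zt * zt)
dm = math.sqrt(zt * zt + (rt - 1) ** 2) * (zt * zt + (rt + 1) ** 2)
A_cf = 4 * E(m) / dm
B_cf = 2 * ((zt * zt + rt * rt + 1) * E(m) - (zt * zt + (rt + 1) ** 2) * K(m)) / (rt * dm)
assert abs(A_num - A_cf) < 1e-6
assert abs(B_num - B_cf) < 1e-6
```

A useful limiting check: as \( \tilde{r} \rightarrow 0 \) the closed form for \( A \) collapses to \( 2\pi/(1 + \tilde{z}^2)^{3/2} \), the on-axis result, and \( B \rightarrow 0 \).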
Finally, restoring generality by making the transformation \( e_1 \rightarrow e_1 e^{i\phi} = \hat{\rho} \), \( e_2 \rightarrow e_2 e^{i\phi} = \hat{\phi} \), the field is now fully determined
\[
F = \frac{\lambda}{4\pi\epsilon R} \left( \left( \tilde{z} e_3 + \tilde{r} \hat{\rho} + \frac{v}{c} i \right) A - \left( \hat{\rho} + \frac{v}{c} \left( \tilde{z} e_3 \hat{\phi} + \tilde{r} i \right) \right) B \right).
\tag{3.91}
\]
The field directions are nicely parameterized as multivector expressions, with the relative
weightings in different directions scaled by the position dependent integral coefficients of eq. (3.90).
The multivector field can be separated into its respective electric and magnetic components by inspection
\[
\begin{aligned}
E &= \left\langle F \right\rangle_1 = \frac{\lambda}{4\pi\epsilon R} \left( \tilde{z} A e_3 + \hat{\rho} \left( \tilde{r} A - B \right) \right) \\
H &= \frac{1}{\eta} \left\langle -I F \right\rangle_1 = \frac{\lambda v}{4\pi R} \left( e_3 \left( A - \tilde{r} B \right) + \hat{\rho} \tilde{z} B \right),
\end{aligned}
\tag{3.92}
\]
which, as expected, shows that the static charge distribution \( \rho \propto \lambda \) only contributes to the electric field, and the static current distribution \( J \propto v \lambda \) only contributes to the magnetic field. As a check, at the center of the ring (\( \tilde{z} = \tilde{r} = 0 \), where \( A = 2\pi, B = 0 \)) this reduces to the familiar \( H = \lambda v e_3 / (2R) \). See fig. 3.5 and fig. 3.6 for plots of the electric and magnetic field directional variation near \( \tilde{z} = 0 \), and fig. 3.7 for larger \( z \) where the azimuthal component of the field dominates.

Exercise 3.6 Magnetic sources on a ring.


Given a constant (magnitude) multivector current on a ring \( J = I \lambda_m \delta(z) \delta(r - R) \left( c - v e_2 e^{i\phi} \right) \), \( i = e_{12} \), show that the field is
\[
F = \frac{\lambda_m c}{4\pi R} \left( \left( \tilde{z} i + \tilde{r} \hat{\phi} e_3 - \frac{v}{c} e_3 \right) A - \left( \hat{\phi} e_3 + \frac{v}{c} \left( \tilde{z} \hat{\rho} - \tilde{r} e_3 \right) \right) B \right).
\]

Figure 3.5: Electric field direction for circular charge density distribution near z = 0.

Figure 3.6: Magnetic field direction for circular current density distribution near z = 0.

Figure 3.7: Magnetic field for larger z.

3.5.4.6 Ampere’s law. Two current sources.

Let's try using Ampere's law as stated in theorem 3.6 to compute the field at a point in the blue region illustrated in fig. 3.8. This represents a pair of z-axis electric currents of magnitude \( I_1, I_2 \) flowing through the \( z = 0 \) points \( p_1, p_2 \) on the x-y plane.

Figure 3.8: Magnetic field between two current sources.

Solving the system with superposition, let's first consider one source flowing through \( p = (p_x, p_y, 0) \) with current \( J = e_3 I_e \delta(x - p_x) \delta(y - p_y) \), and evaluate the field due to this source at the point \( r \). With only an electric current density source in the multivector current, Ampere's law takes the form
\[
\int_{\partial A} dx \, F = -I \int_A dA \, e_3 \left( -\eta J \right) = I \eta I_e.
\tag{3.93}
\]

The field \( F \) must be a bivector satisfying \( dx \cdot F = 0 \). The circle is parameterized by
\[
r = p + R e_1 e^{i\phi},
\tag{3.94}
\]
so
\[
dx = R e_2 e^{i\phi} d\phi = R \hat{\phi} d\phi.
\tag{3.95}
\]

With the line element having only a \( \hat{\phi} \) component, \( F \) must be a bivector proportional to \( e_3 \hat{r} \). Let \( F = F_0 e_{31} e^{i\phi} \), where \( F_0 \) is a scalar, so that \( dx \, F \) is a constant multiple of the unit pseudoscalar
\[
\int_0^{2\pi} dx \, F
= \int_0^{2\pi} R F_0 e_2 e^{i\phi} d\phi \, e_{31} e^{i\phi}
= R F_0 e_{231} \int_0^{2\pi} e^{-i\phi} e^{i\phi} d\phi
= 2\pi I R F_0,
\tag{3.96}
\]
so
\[
F_0 = \frac{1}{2\pi I R} I \eta I_e = \frac{\eta I_e}{2\pi R}.
\tag{3.97}
\]
The field strength relative to the point \( p \) is
\[
F = \frac{\eta I_e}{2\pi R} e_3 \hat{r}.
\tag{3.98}
\]
Switching to an origin relative coordinate system, removing the \( z = 0 \) restriction for \( r \) and \( p_k \), and summing over both currents, the total field at any point \( r \) strictly between the currents is
\[
F = \sum_{k=1,2} \frac{\eta I_k}{2\pi} e_3 \frac{1}{e_3 \left( e_3 \wedge (r - p_k) \right)}
= \sum_{k=1,2} \frac{\eta I_k}{2\pi} \frac{1}{(r - p_k) \wedge e_3}.
\tag{3.99}
\]

The bivector nature of a field with only electric current density sources is naturally represented by the wedge product of \( e_3 \) with \( r - p_k \), a product of \( e_3 \) and the projection of \( r - p_k \) onto the x-y plane.
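The circulation computation of eq. (3.96) also makes a nice end-to-end numeric check: the discretized line integral of the single-source field eq. (3.98) around the circle should produce the pseudoscalar \( I \eta I_e \). A sketch of my own, reusing a bitmask implementation of the \( \mathbb{R}^3 \) geometric product:

```python
import math

def blade_sign(a, b):
    s = 0
    a >>= 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s % 2 else 1.0

def gp(x, y):
    z = [0.0] * 8
    for i in range(8):
        for j in range(8):
            if x[i] and y[j]:
                z[i ^ j] += blade_sign(i, j) * x[i] * y[j]
    return z

def vec(a):
    m = [0.0] * 8
    m[1], m[2], m[4] = a
    return m

eta, Ie, R = 2.0, 1.5, 0.75      # arbitrary values for the check
n = 360
total = [0.0] * 8
for k in range(n):
    phi = 2 * math.pi * (k + 0.5) / n
    rhat = (math.cos(phi), math.sin(phi), 0.0)
    phat = (-math.sin(phi), math.cos(phi), 0.0)
    F = [eta * Ie / (2 * math.pi * R) * t for t in gp(vec((0.0, 0.0, 1.0)), vec(rhat))]
    dx = [R * (2 * math.pi / n) * t for t in vec(phat)]
    total = [a + b for a, b in zip(total, gp(dx, F))]

# only the pseudoscalar (index 7) component survives, with value eta * Ie
assert abs(total[7] - eta * Ie) < 1e-9
assert max(abs(total[k]) for k in range(7)) < 1e-9
```

Since \( \hat{\phi} e_3 \hat{r} = I \) at every angle, each summand is a constant multiple of the pseudoscalar and the discrete sum is exact up to floating point rounding.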

3.6 dynamics.

3.6.1 Inverting Maxwell’s equation.

Maxwell's equation (theorem 3.1) is invertible using the Green's function for the spacetime gradient of theorem 2.19. That solution is

Theorem 3.7: Jefimenko's solution.

The solution of Maxwell's equation is given by
\[
F(x, t) = F_0(x, t) + \frac{1}{4\pi} \int dV' \left( \frac{\hat{r}}{r^2} J(x', t_r) + \frac{1}{c r} \left( 1 + \hat{r} \right) \dot{J}(x', t_r) \right),
\]
where \( F_0(x, t) \) is any specific solution of the homogeneous equation \( \left( \nabla + (1/c) \partial_t \right) F_0 = 0 \), time derivatives are denoted by overdots, and all times are evaluated at the retarded time \( t_r = t - r/c \). When expanded in terms of the electric and magnetic fields (ignoring magnetic sources), the non-homogeneous portion of this solution is known as Jefimenko's equations [8]
\[
\begin{aligned}
E &= \frac{1}{4\pi} \int dV' \left( \frac{\hat{r}}{r} \left( \frac{\rho(x', t_r)}{\epsilon r} + \frac{\dot{\rho}(x', t_r)}{\epsilon c} \right) - \frac{\eta}{c r} \dot{J}(x', t_r) \right) \\
H &= \frac{1}{4\pi} \int dV' \left( \frac{1}{c r} \dot{J}(x', t_r) + \frac{1}{r^2} J(x', t_r) \right) \times \hat{r}.
\end{aligned}
\tag{3.100}
\]

The full solution is
\[
\begin{aligned}
F(x, t) &= F_0(x, t) + \int dV' dt' \, G(x - x', t - t') J(x', t') \\
&= F_0(x, t) + \frac{1}{4\pi} \int dV' dt' \left( \frac{\hat{r}}{r^2} - \frac{\hat{r}}{r} \frac{\partial}{\partial r} + \frac{1}{c r} \frac{\partial}{\partial t} \right) \delta(-r/c + t - t') \, J(x', t'),
\end{aligned}
\tag{3.101}
\]
where \( r = x - x' \), \( r = \lVert r \rVert \) and \( \hat{r} = r/r \). With the help of eq. (C.18), the derivatives in the Green's function eq. (3.101) can be evaluated, and the convolution reduces to
\[
\int dt' \, G(x - x', t - t') J(x', t')
= \left. \frac{1}{4\pi} \left( \frac{\hat{r}}{r^2} J(x', t_r) + \frac{1}{c r} \left( \hat{r} + 1 \right) \frac{d}{d t_r} J(x', t_r) \right) \right|_{t_r = t - r/c}.
\tag{3.102}
\]

There have been lots of opportunities to mess up with signs and factors of \( c \), so let's expand this out explicitly for a non-magnetic current source \( J = \rho/\epsilon - \eta J \). Neglecting the contribution of the homogeneous solution \( F_0 \), and utilizing our freedom to insert a no-op grade 1,2 selection operation that removes any scalar and pseudoscalar terms that are necessarily killed over the full integration range, we find
\[
\begin{aligned}
F &= \frac{1}{4\pi} \int dV' \left\langle \frac{\hat{r}}{r^2} \left( \frac{\rho}{\epsilon} - \eta J \right) + \frac{1}{c r} \left( 1 + \hat{r} \right) \left( \frac{\dot{\rho}}{\epsilon} - \eta \dot{J} \right) \right\rangle_{1,2} \\
&= \frac{1}{4\pi} \int dV' \left( \frac{\hat{r}}{r^2} \frac{\rho}{\epsilon} - \eta \frac{\hat{r}}{r^2} \wedge J + \frac{\hat{r}}{c r} \frac{\dot{\rho}}{\epsilon} - \frac{\eta}{c r} \dot{J} - \frac{\eta}{c r} \hat{r} \wedge \dot{J} \right) \\
&= \frac{1}{4\pi} \int dV' \left( \frac{\hat{r}}{r^2} \frac{\rho}{\epsilon} + \frac{\hat{r} \dot{\rho}}{\epsilon c r} - \frac{\eta \dot{J}}{c r} - I \frac{\eta}{c r} \hat{r} \times \dot{J} - I \frac{\eta}{r^2} \hat{r} \times J \right).
\end{aligned}
\tag{3.103}
\]

As \( F = E + I \eta H \), the respective electric and magnetic fields may be read off by inspection. After re-inserting the space and time parameters that were temporarily suppressed, the proof is complete.
The disadvantage of separating the field and current into their constituent components is made painfully obvious by the complexity of the conventional statement of the solution compared to the equivalent multivector form.

3.7 energy and momentum.

3.7.1 Field energy and momentum density and the energy momentum tensor.

It is assumed here that the conventional definitions of the field energy and momentum density
are known to the reader, as well as the conservation equations relating their space and time
derivatives. For reference, the conventional definitions of those densities follow.

Definition 3.4: (Conventional) Energy and momentum density and Poynting vector.

The quantities \( \mathcal{E} \) and \( \mathcal{P} \) defined as
\[
\begin{aligned}
\mathcal{E} &= \frac{1}{2} \left( \epsilon E^2 + \mu H^2 \right) \\
\mathcal{P} c &= \frac{1}{c} E \times H,
\end{aligned}
\]
are known respectively as the field energy and momentum densities. \( S = c^2 \mathcal{P} = E \times H \) is called the Poynting vector.

We will derive the conservation relationships that justify calling E, P the energy and momentum
densities, and will also show that the Poynting vector represents the energy flux through a
surface per unit time.
In geometric algebra, it is arguably more natural to write the Poynting vector as a bivector-vector dot product such as
\[
S = \frac{1}{\eta} \left( I \eta H \right) \cdot E,
\tag{3.104}
\]
since this involves only components of the total electromagnetic field strength \( F = E + I \eta H \).
However, we can do better, representing both E and S in terms of F directly. The key to doing so
is making use of the fact that the energy and momentum densities are themselves components of
a larger symmetric rank-2 energy momentum tensor, which can in turn be represented compactly
in geometric algebra.

Definition 3.5: (Conventional) energy momentum and Maxwell stress tensors.

The rank-2 symmetric tensor \( \Theta^{\mu\nu} \), with components
\[
\begin{aligned}
\Theta^{00} &= \frac{\epsilon}{2} \left( E^2 + \eta^2 H^2 \right) \\
\Theta^{0i} &= \frac{1}{c} \left( E \times H \right) \cdot e_i \\
\Theta^{ij} &= -\epsilon \left( E_i E_j + \eta^2 H_i H_j - \frac{1}{2} \delta^{ij} \left( E^2 + \eta^2 H^2 \right) \right),
\end{aligned}
\]
is called the energy momentum tensor. The spatial index subset of this tensor is known as the Maxwell stress tensor, and is often represented in dyadic notation
\[
a \cdot \overleftrightarrow{T} \cdot b = \sum_{i,j} a_i T^{ij} b_j,
\]
or
\[
a \cdot \overleftrightarrow{T} \equiv \sum_{i,j} a_i T^{ij} e_j,
\]
where \( T^{ij} = -\Theta^{ij} \).

Here we use the usual convention of Greek indices such as \( \mu, \nu \) ranging over both time (0) and spatial \( \{1,2,3\} \) indexes, and Latin letters such as \( i, j \) for the spatial indexes \( \{1,2,3\} \). The names and notation for the tensors vary considerably⁴.
In geometric algebra the energy momentum tensor, and the Maxwell stress tensor may be
represented as linear grade 0,1 multivector valued functions of a grade 0,1 multivector.

Definition 3.6: Energy momentum and Maxwell stress tensors.

We define the energy momentum tensor as
\[
T(a) = \frac{\epsilon}{2} F a F^\dagger,
\]
where \( a \) is a grade 0,1 multivector parameter. We introduce a shorthand notation for grade one selection with vector valued parameters
\[
\mathbf{T}(a) = \left\langle T(a) \right\rangle_1,
\]
and call this the Maxwell stress tensor.

Theorem 3.8: Expansion of the energy momentum tensor.

Given a scalar parameter \( \alpha \), and a vector parameter \( a = \sum_k a_k e_k \), the energy momentum tensor of definition 3.6 is a grade 0,1 multivector, and may be expanded in terms of \( \mathcal{E}, S \) and \( \mathbf{T}(a) \) as
\[
T(\alpha + a) = \alpha \left( \mathcal{E} + \frac{S}{c} \right) - a \cdot \frac{S}{c} + \mathbf{T}(a),
\]
where \( \mathbf{T}(e_i) \cdot e_j = -\Theta^{ij} \), or \( \mathbf{T}(a) = a \cdot \overleftrightarrow{T} \).

4 Θµν in definition 3.5 is called the symmetric stress tensor by some authors [14], and the energy momentum tensor
by others, and is sometimes written T µν ([17], [6]). The sign conventions and notation for the spatial components
Θi j vary as well, but all authors appear to call this subset the Maxwell stress tensor. The Maxwell stress tensor may
be written as σi j (= −Θi j ) [17], or as T i j (= −Θi j ) ([8], [14].)

Theorem 3.8 relates the geometric algebra definition of the energy momentum tensor to the quantities found in the conventional electromagnetism literature. Because \( T \) is a linear function of its parameter, we may prove this in parts, starting with \( \alpha = 1, a = 0 \), which gives
\[
\begin{aligned}
T(1) &= \frac{\epsilon}{2} F F^\dagger \\
&= \frac{\epsilon}{2} \left( E + I \eta H \right) \left( E - I \eta H \right) \\
&= \frac{\epsilon}{2} \left( E^2 + \eta^2 H^2 \right) + \frac{\epsilon}{2} I \eta \left( H E - E H \right) \\
&= \frac{1}{2} \left( \epsilon E^2 + \mu H^2 \right) + \frac{I}{c} H \wedge E \\
&= \frac{1}{2} \left( \epsilon E^2 + \mu H^2 \right) + \frac{1}{c} E \times H \\
&= \mathcal{E} + \frac{S}{c}.
\end{aligned}
\tag{3.105}
\]
An immediate takeaway from this expansion is that we may dispense with any requirement to refer to electric or magnetic field components in isolation, and can express the energy and momentum densities (and Poynting vector) in terms of only the total electromagnetic field strength
\[
\begin{aligned}
\mathcal{E} &= \frac{\epsilon}{2} \left\langle F F^\dagger \right\rangle \\
\mathcal{P} c &= \frac{\epsilon}{2} \left\langle F F^\dagger \right\rangle_1 \\
S &= \frac{1}{2\eta} \left\langle F F^\dagger \right\rangle_1.
\end{aligned}
\tag{3.106}
\]

The power of this simple construction will be illustrated later when we compute the field energy
and momentum densities for a number of Maxwell equation solutions in their geometric algebra
form.
An expansion of \( T(e_k) \) is harder to do algebraically than eq. (3.105), but doing so will demonstrate that \( T(a) \) is a grade 0,1 multivector for any grade 0,1 parameter⁵. Cheating a bit, here are the results of a brute force expansion of \( T(a) \) using a Mathematica GA computer algebra package
\[
\begin{aligned}
T(1) = {}& \frac{\epsilon}{2} \left( E_1^2 + E_2^2 + E_3^2 \right) + \frac{\epsilon \eta^2}{2} \left( H_1^2 + H_2^2 + H_3^2 \right) \\
& + e_1 \epsilon \eta \left( E_2 H_3 - E_3 H_2 \right) + e_2 \epsilon \eta \left( E_3 H_1 - E_1 H_3 \right) + e_3 \epsilon \eta \left( E_1 H_2 - E_2 H_1 \right)
\end{aligned}
\tag{3.107a}
\]
\[
\begin{aligned}
T(e_1) = {}& \epsilon \eta \left( E_3 H_2 - E_2 H_3 \right)
+ e_1 \left( \frac{\epsilon}{2} \left( E_1^2 - E_2^2 - E_3^2 \right) + \frac{\epsilon \eta^2}{2} \left( H_1^2 - H_2^2 - H_3^2 \right) \right) \\
& + e_2 \epsilon \left( E_1 E_2 + \eta^2 H_1 H_2 \right) + e_3 \epsilon \left( E_1 E_3 + \eta^2 H_1 H_3 \right)
\end{aligned}
\tag{3.107b}
\]
\[
\begin{aligned}
T(e_2) = {}& \epsilon \eta \left( E_1 H_3 - E_3 H_1 \right)
+ e_1 \epsilon \left( E_1 E_2 + \eta^2 H_1 H_2 \right) \\
& + e_2 \left( \frac{\epsilon}{2} \left( -E_1^2 + E_2^2 - E_3^2 \right) + \frac{\epsilon \eta^2}{2} \left( -H_1^2 + H_2^2 - H_3^2 \right) \right)
+ e_3 \epsilon \left( E_2 E_3 + \eta^2 H_2 H_3 \right)
\end{aligned}
\tag{3.107c}
\]
\[
\begin{aligned}
T(e_3) = {}& \epsilon \eta \left( E_2 H_1 - E_1 H_2 \right)
+ e_1 \epsilon \left( E_1 E_3 + \eta^2 H_1 H_3 \right) + e_2 \epsilon \left( E_2 E_3 + \eta^2 H_2 H_3 \right) \\
& + e_3 \left( \frac{\epsilon}{2} \left( -E_1^2 - E_2^2 + E_3^2 \right) + \frac{\epsilon \eta^2}{2} \left( -H_1^2 - H_2^2 + H_3^2 \right) \right)
\end{aligned}
\tag{3.107d}
\]

5 Such an expansion is a worthwhile problem to develop GA manipulation skills. The reader is encouraged to try this independently first, and to refer to appendix E for hints if required.
Comparison to definition 3.5 shows that the multivector energy momentum tensor is related to the conventional tensor representation by
\[
\begin{aligned}
\left\langle T(1) \right\rangle &= \Theta^0{}_0 = \Theta^{00} \\
\left\langle T(1) \right\rangle_1 \cdot e_i &= \Theta^0{}_i = \Theta^{0i} \\
\left\langle T(e_i) \right\rangle &= \Theta_i{}^0 = -\Theta^{i0} \\
\mathbf{T}(e_i) \cdot e_j &= \Theta_i{}^j = -\Theta^{ij} = T^{ij}.
\end{aligned}
\tag{3.108}
\]

The only thing left to show is how \( \mathbf{T}(a) \) is equivalent to the dyadic notation found in ([8], [14])
\[
\mathbf{T}(a) = \sum_i a_i \mathbf{T}(e_i)
= \sum_{i,j} a_i \left( \mathbf{T}(e_i) \cdot e_j \right) e_j
= \sum_{i,j} a_i T^{ij} e_j
= a \cdot \overleftrightarrow{T}.
\tag{3.109}
\]

The dyadic notation is really just a clumsy way of expressing the fact that T(a) is a linear vector
valued function of a vector, which naturally has a matrix representation.

3.7.2 Poynting’s theorem (prerequisites.)

Poynting's theorem is a set of conservation relationships between space and time derivatives of the energy and momentum densities, or more generally between related components of the energy momentum tensor. The most powerful way of stating Poynting's theorem using geometric algebra requires a few new concepts: differential operator valued linear functions, and the adjoint.

Definition 3.7: Differential operator valued multivector functions.

Given a multivector valued linear function of the form \( f(x) = A x B \), where \( A, B, x \) are multivectors, and a linear operator \( D \) such as \( \nabla, \partial_t \), or \( \nabla + (1/c)\partial_t \), the operator valued linear function \( f(D) \) is defined as
\[
f(\overleftrightarrow{D}) = A \overleftrightarrow{D} B = (A \overleftarrow{D}) B + A (\overrightarrow{D} B),
\]
where \( \overleftrightarrow{D} \) indicates that \( D \) acts bidirectionally, to the left and to the right.

Perhaps counter intuitively, using operator valued parameters for the energy momentum ten-
sor T or the Maxwell stress tensor T will be particularly effective to express the derivatives
of the tensor. There are a few cases of interest, all related to evaluation of the tensor with a
parameter value of the spacetime gradient.

Theorem 3.9: Energy momentum tensor operator parameters.

\[
\begin{aligned}
T\!\left( (1/c) \partial_t \right) &= \frac{1}{c} \frac{\partial T(1)}{\partial t} = \frac{1}{c} \frac{\partial}{\partial t} \left( \mathcal{E} + \frac{S}{c} \right) \\
\left\langle T(\nabla) \right\rangle &= -\nabla \cdot \frac{S}{c} \\
\left\langle T(\nabla) \right\rangle_1 &= \mathbf{T}(\nabla) = \sum_{k=1}^3 \left( \nabla \cdot \mathbf{T}(e_k) \right) e_k.
\end{aligned}
\]

We will proceed to prove each of the results of theorem 3.9 in sequence, starting with the time partial, which is a scalar operator
\[
T(\partial_t) = \frac{\epsilon}{2} F \overleftrightarrow{\partial_t} F^\dagger
= \frac{\epsilon}{2} \left( (\partial_t F) F^\dagger + F (\partial_t F^\dagger) \right)
= \frac{\epsilon}{2} \partial_t \left( F F^\dagger \right)
= \partial_t T(1).
\tag{3.110}
\]

To evaluate the tensor at the gradient we have to take care of order. This is easiest in a scalar selection, where we may cyclically permute any multivector factors
\[
\left\langle T(\nabla) \right\rangle = \frac{\epsilon}{2} \left\langle F \overleftrightarrow{\nabla} F^\dagger \right\rangle
= \frac{\epsilon}{2} \left\langle \nabla F^\dagger F \right\rangle
= \frac{\epsilon}{2} \nabla \cdot \left\langle F^\dagger F \right\rangle_1,
\tag{3.111}
\]
but
\[
F^\dagger F = \left( E - I \eta H \right) \left( E + I \eta H \right)
= E^2 + \eta^2 H^2 + I \eta \left( E H - H E \right)
= E^2 + \eta^2 H^2 - 2 \eta E \times H.
\tag{3.112}
\]
Plugging eq. (3.112) into eq. (3.111) proves the result.



Finally, we want to evaluate the Maxwell stress tensor of the gradient
\[
\mathbf{T}(\nabla)
= \sum_{k=1}^3 e_k \left( \mathbf{T}(\nabla) \cdot e_k \right)
= \sum_{k,m=1}^3 e_k \partial_m \left( \mathbf{T}(e_m) \cdot e_k \right)
= \sum_{k,m=1}^3 e_k \partial_m \left( \mathbf{T}(e_k) \cdot e_m \right)
= \sum_{k=1}^3 e_k \left( \nabla \cdot \mathbf{T}(e_k) \right),
\tag{3.113}
\]
as claimed, where the symmetry \( \mathbf{T}(e_m) \cdot e_k = \mathbf{T}(e_k) \cdot e_m \) was used in the third step.
We will want to integrate \( \mathbf{T}(\nabla) \) over a volume, which is essentially a divergence operation.

Theorem 3.10: Divergence integral for the Maxwell stress tensor.

\[
\int_V dV \, \mathbf{T}(\nabla) = \int_{\partial V} dA \, \mathbf{T}(\hat{n}).
\]

To prove theorem 3.10, we make use of the symmetric property of the Maxwell stress tensor
\[
\begin{aligned}
\int_V dV \, \mathbf{T}(\nabla)
&= \sum_k \int_V dV \, e_k \nabla \cdot \mathbf{T}(e_k) \\
&= \sum_k \int_{\partial V} dA \, e_k \hat{n} \cdot \mathbf{T}(e_k) \\
&= \sum_{k,m} \int_{\partial V} dA \, e_k n_m \mathbf{T}(e_k) \cdot e_m \\
&= \sum_{k,m} \int_{\partial V} dA \, e_k n_m \mathbf{T}(e_m) \cdot e_k \\
&= \sum_k \int_{\partial V} dA \, e_k \mathbf{T}(\hat{n}) \cdot e_k \\
&= \int_{\partial V} dA \, \mathbf{T}(\hat{n}),
\end{aligned}
\tag{3.114}
\]

as claimed.
Finally, before stating Poynting’s theorem, we want to introduce the concept of an adjoint.

Definition 3.8: Adjoint.

The adjoint \( \bar{A}(x) \) of a linear operator \( A(x) \) is defined implicitly by the scalar selection
\[
\left\langle y \bar{A}(x) \right\rangle = \left\langle x A(y) \right\rangle.
\]

The adjoint of the energy momentum tensor is particularly easy to calculate.

Theorem 3.11: Adjoint of the energy momentum tensor.

The adjoint of the energy momentum tensor is
\[
\bar{T}(x) = \frac{\epsilon}{2} F^\dagger x F.
\]
The adjoint \( \bar{T} \) and \( T \) satisfy the following relationships
\[
\begin{aligned}
\left\langle \bar{T}(1) \right\rangle &= \left\langle T(1) \right\rangle = \mathcal{E} \\
\left\langle \bar{T}(1) \right\rangle_1 &= -\left\langle T(1) \right\rangle_1 = -\frac{S}{c} \\
\left\langle \bar{T}(a) \right\rangle &= -\left\langle T(a) \right\rangle = a \cdot \frac{S}{c} \\
\left\langle \bar{T}(a) \right\rangle_1 &= \left\langle T(a) \right\rangle_1 = \mathbf{T}(a).
\end{aligned}
\]

Using the cyclic scalar selection permutation property \( \left\langle A B C \right\rangle = \left\langle C A B \right\rangle \) we form
\[
\left\langle x T(y) \right\rangle = \frac{\epsilon}{2} \left\langle x F y F^\dagger \right\rangle = \frac{\epsilon}{2} \left\langle y F^\dagger x F \right\rangle.
\tag{3.115}
\]
Referring back to definition 3.8, we see that the adjoint must have the stated form. Proving the grade selection relationships of theorem 3.11 has been left as an exercise for the reader. A brute force symbolic algebra proof using Mathematica is also available in appendix F.
As in theorem 3.9, the adjoint may also be evaluated with differential operator parameters.

Theorem 3.12: Adjoint energy momentum tensor operator parameters.



\[
\begin{aligned}
\left\langle \bar{T}\!\left( (1/c) \partial_t \right) \right\rangle &= \frac{1}{c} \frac{\partial \left\langle \bar{T}(1) \right\rangle}{\partial t} = \frac{1}{c} \frac{\partial \mathcal{E}}{\partial t} \\
\left\langle \bar{T}\!\left( (1/c) \partial_t \right) \right\rangle_1 &= -\frac{1}{c^2} \frac{\partial S}{\partial t} \\
\left\langle \bar{T}(\nabla) \right\rangle &= \nabla \cdot \frac{S}{c} \\
\left\langle \bar{T}(\nabla) \right\rangle_1 &= \mathbf{T}(\nabla).
\end{aligned}
\]

The proofs of theorem 3.12 are all fairly simple
\[
\bar{T}\!\left( (1/c) \partial_t \right) = \frac{\epsilon}{2 c} \frac{\partial}{\partial t} \left( F^\dagger F \right) = \frac{1}{c} \frac{\partial}{\partial t} \left( \mathcal{E} - \frac{S}{c} \right).
\tag{3.116}
\]
\[
\left\langle \bar{T}(\nabla) \right\rangle
= \left\langle 1 \bar{T}(\nabla) \right\rangle
= \left\langle \nabla T(1) \right\rangle
= \nabla \cdot \left\langle T(1) \right\rangle_1
= \nabla \cdot \frac{S}{c}.
\tag{3.117}
\]
\[
\left\langle \bar{T}(\nabla) \right\rangle_1
= \sum_k e_k \left( e_k \cdot \left\langle \bar{T}(\nabla) \right\rangle_1 \right)
= \sum_k e_k \left\langle e_k \bar{T}(\nabla) \right\rangle
= \sum_k e_k \left\langle \nabla T(e_k) \right\rangle
= \sum_k e_k \nabla \cdot \left\langle T(e_k) \right\rangle_1
= \sum_k e_k \nabla \cdot \mathbf{T}(e_k)
= \mathbf{T}(\nabla).
\tag{3.118}
\]

3.7.3 Poynting theorem.

All the prerequisites for stating Poynting’s theorem are now finally complete.

Theorem 3.13: Poynting’s theorem (differential form.)



The adjoint energy momentum tensor of the spacetime gradient satisfies the following multivector equation
\[
\bar{T}\!\left( \nabla + (1/c) \partial_t \right) = \frac{\epsilon}{2} \left( F^\dagger J + J^\dagger F \right).
\]
Note that the multivector \( F^\dagger J + J^\dagger F \) can only have scalar and vector grades, since it equals its reverse. This equation can be put into a form that is more obviously a conservation law by stating it as a set of scalar grade identities
\[
\nabla \cdot \left\langle T(a) \right\rangle_1 + \frac{1}{c} \frac{\partial}{\partial t} \left\langle T(a) \right\rangle = \frac{\epsilon}{2} \left\langle a \left( F^\dagger J + J^\dagger F \right) \right\rangle.
\]
These may be written as respective scalar and vector grade equations
\[
\begin{aligned}
\frac{1}{c} \frac{\partial \mathcal{E}}{\partial t} + \nabla \cdot \frac{S}{c} &= -\frac{1}{c} \left( E \cdot J + H \cdot M \right) \\
-\frac{1}{c^2} \frac{\partial S}{\partial t} + \mathbf{T}(\nabla) &= \rho E + \epsilon E \times M + \rho_m H + \mu J \times H.
\end{aligned}
\]
Conventionally, only the scalar grade, relating the time rate of change of the energy density to the flux of the Poynting vector, is called Poynting's theorem.

The conservation relationship of theorem 3.13 follows from
\[
F^\dagger \left( \overleftrightarrow{\nabla} + \frac{1}{c} \overleftrightarrow{\partial_t} \right) F
= \left( \left( \nabla + \frac{1}{c} \partial_t \right) F \right)^\dagger F + F^\dagger \left( \nabla + \frac{1}{c} \partial_t \right) F
= J^\dagger F + F^\dagger J.
\tag{3.119}
\]

The scalar form of theorem 3.13 follows from
\[
\left\langle a \bar{T}\!\left( \nabla + (1/c) \partial_t \right) \right\rangle
= \left\langle \left( \nabla + (1/c) \partial_t \right) T(a) \right\rangle
= \nabla \cdot \left\langle T(a) \right\rangle_1 + \frac{1}{c} \frac{\partial}{\partial t} \left\langle T(a) \right\rangle.
\tag{3.120}
\]
We may use the scalar form of the theorem to extract the scalar grade, by setting a = 1, for
which the right hand side can be reduced to a single term since scalars are reversion invariant
D E D E† D E
F† J = F† J = J† F , (3.121)
so

\begin{aligned}
\nabla \cdot \left\langle T(1) \right\rangle_1 + \frac{1}{c}\frac{\partial}{\partial t}\left\langle T(1) \right\rangle &= \nabla \cdot \frac{S}{c} + \frac{1}{c}\frac{\partial E}{\partial t} \\
&= \frac{\epsilon}{2}\left\langle F^\dagger J + J^\dagger F \right\rangle \\
&= \epsilon \left\langle F^\dagger J \right\rangle \\
&= \epsilon \left\langle (E - I\eta H)\left( \eta(c\rho - J) + I(c\rho_m - M) \right) \right\rangle \\
&= \epsilon \left( -\eta E \cdot J - \eta H \cdot M \right) \\
&= -\frac{1}{c} E \cdot J - \frac{1}{c} H \cdot M,
\end{aligned} \qquad (3.122)
which proves the claimed explicit expansion of the scalar grade selection of Poynting's theorem.
The left hand side of the vector grade selection follows by linearity using theorem 3.12

\left\langle \bar{T}\left( \nabla + \frac{1}{c}\partial_t \right) \right\rangle_1 = \left\langle \bar{T}(\nabla) \right\rangle_1 + \left\langle \bar{T}\left( \frac{1}{c}\partial_t \right) \right\rangle_1 = \mathbf{T}(\nabla) - \frac{1}{c^2}\frac{\partial S}{\partial t}. \qquad (3.123)
The right hand side is a bit messier to simplify. Let's do this in pieces by superposition, first considering just electric sources

\begin{aligned}
\left\langle e_k \frac{\epsilon}{2}\left( F^\dagger J + J^\dagger F \right) \right\rangle &= \frac{\epsilon\eta}{2}\left\langle e_k \left( (E - I\eta H)(c\rho - J) + (c\rho - J)(E + I\eta H) \right) \right\rangle \\
&= \frac{1}{2c}\, e_k \cdot \left\langle (E - I\eta H)(c\rho - J) + (c\rho - J)(E + I\eta H) \right\rangle_1 \\
&= \frac{1}{c}\, e_k \cdot \left( c\rho E + I\eta H \wedge J \right) \\
&= \frac{1}{c}\, e_k \cdot \left( c\rho E - \eta H \times J \right) \\
&= e_k \cdot \left( \rho E + \mu J \times H \right),
\end{aligned} \qquad (3.124)
and then magnetic sources

\begin{aligned}
\left\langle e_k \frac{\epsilon}{2}\left( F^\dagger J + J^\dagger F \right) \right\rangle &= \frac{\epsilon}{2}\left\langle e_k \left( (E - I\eta H) I (c\rho_m - M) - I (c\rho_m - M)(E + I\eta H) \right) \right\rangle \\
&= \frac{\epsilon}{2}\, e_k \cdot \left\langle (I E + \eta H)(c\rho_m - M) + (c\rho_m - M)(-I E + \eta H) \right\rangle_1 \\
&= \epsilon\, e_k \cdot \left( \eta c \rho_m H - I E \wedge M \right) \\
&= e_k \cdot \left( \rho_m H + \epsilon E \times M \right).
\end{aligned} \qquad (3.125)
Jointly, eq. (3.123), eq. (3.124), eq. (3.125) complete the proof.
The integral form of theorem 3.13 submits nicely to physical interpretation.
Theorem 3.14: Poynting's theorem (integral form.)

\frac{\partial}{\partial t} \int_V E\, dV = -\oint_{\partial V} dA\, \hat{n} \cdot S - \int_V dV \left( J \cdot E + M \cdot H \right)

\int_V dV \left( \rho E + J \times B \right) + \int_V dV \left( \rho_m H - \epsilon M \times E \right) = -\frac{\partial}{\partial t} \int_V dV\, P + \oint_{\partial V} dA\, \mathbf{T}(\hat{n}). \qquad (3.126)
Proof of theorem 3.14 requires only the divergence theorem, theorem 3.10, and definition 3.4.
The scalar integral in theorem 3.14 relates the rate of change of total energy in a volume to the flux of the Poynting vector through the surface bounding the volume. If the energy in the volume increases (decreases), then in a current free region, there must be Poynting flux into (out of) the volume. The direction of the Poynting vector is the direction that the energy is leaving the volume, but only the projection of the Poynting vector along the normal direction contributes to this energy loss.
The right hand side of the vector integral in theorem 3.14 is a continuous representation of the Lorentz force (or dual Lorentz force for magnetic charges), the mechanical force on the charges in the volume. This can be seen by setting J = \rho v (or M = \rho_m v_m)

\int_V dV \left( \rho E + J \times B \right) = \int_V dV\, \rho \left( E + v \times B \right) = \int_V dq \left( E + v \times B \right). \qquad (3.127)
As the field in the volume carries the (electromagnetic) momentum p_\text{em} = \int_V dV\, P, we can identify the sum of the Maxwell stress tensor normal component over the bounding surface as the time rate of change of the mechanical and electromagnetic momentum

\frac{d p_\text{mech}}{dt} + \frac{d p_\text{em}}{dt} = \oint_{\partial V} dA\, \mathbf{T}(\hat{n}). \qquad (3.128)
3.7.4 Examples: Some static fields.

We've found solutions for a number of static charge and current distributions.

(a) For constant electric sources along the z-axis (eq. (3.64)), with current J moving with velocity v = v e_3, the field had the form F = E \hat{\rho} (1 - v/c).

(b) For constant magnetic sources along the z-axis (exercise 3.4), with current M moving with velocity v = v e_3, the field had the form F = \eta H I \hat{\rho} (1 - v/c).

(c) For constant electric sources in the x-y plane (eq. (3.71)), with current J moving with velocity v = v e_1 e^{i\theta}, i = e_{12}, the field had the form F = E e_3 (1 - v/c).

(d) For constant magnetic sources in the x-y plane (exercise 3.5), with current M moving with velocity v = v e_1 e^{i\theta}, i = e_{12}, the field had the form F = \eta H i (1 - v/c).
In all cases the field has the form F = A(1 - v/c), where A is either a vector or a bivector that anticommutes with the current velocity v, so the energy momentum tensor T(1) has the form

\begin{aligned}
T(1) &= \frac{\epsilon}{2} A (1 - v/c)^2 A^\dagger \\
&= \frac{\epsilon}{2} A A^\dagger (1 + v/c)^2 \\
&= \frac{\epsilon}{2} A A^\dagger \left( 1 + \left( \frac{v}{c} \right)^2 + 2 \frac{v}{c} \right).
\end{aligned} \qquad (3.129)
For the electric sources this is

E + \frac{S}{c} = \frac{\epsilon}{2} E^2 \left( 1 + \left( \frac{v}{c} \right)^2 + 2 \frac{v}{c} \right), \qquad (3.130)

or

E = \frac{\epsilon}{2} E^2 \left( 1 + \left( \frac{v}{c} \right)^2 \right), \qquad
S = \epsilon E^2 v. \qquad (3.131)
For the magnetic sources this is

E + \frac{S}{c} = \frac{\mu}{2} H^2 \left( 1 + \left( \frac{v}{c} \right)^2 + 2 \frac{v}{c} \right), \qquad (3.132)

or

E = \frac{\mu}{2} H^2 \left( 1 + \left( \frac{v}{c} \right)^2 \right), \qquad
S = \mu H^2 v. \qquad (3.133)
There are three terms in the multivector (1 + v/c)^2 = 1 + (v/c)^2 + 2v/c. For electric sources, the first scalar term is due to the charge distribution, and provides the electric field contribution to the energy density. The second scalar term is due to the current distribution, and provides the magnetic field contribution to the energy density. The final vector term, proportional to the current velocity, contributes to the Poynting vector, showing that the field momentum travels along the direction of the current in these static configurations.
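These grade structures are easy to check numerically. The sketch below (all helper names hypothetical, not from the text) implements just enough of Cl(3,0), blades as bitmasks with a Euclidean metric, to verify eq. (3.129) and eq. (3.131) for case (a), where A = E \hat{\rho} with \hat{\rho} = e_1 and v = v e_3:

```python
# Sketch verifying eq. (3.129)/(3.131) for case (a) with a minimal Cl(3,0)
# implementation.  A blade is a bitmask (1 -> e1, 2 -> e2, 4 -> e3); a
# multivector is a {blade: coefficient} dict; every basis vector squares to +1.

def reorder_sign(a, b):
    """Sign incurred reordering the product of blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps % 2 else 1.0

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + reorder_sign(a, b) * ca * cb
    return out

def rev(x):
    """Reversion: a grade g blade picks up (-1)^(g(g-1)/2)."""
    out = {}
    for b, c in x.items():
        g = bin(b).count("1")
        out[b] = c * (-1.0) ** (g * (g - 1) // 2)
    return out

E, beta, eps = 2.0, 0.5, 1.0            # arbitrary field magnitude, v/c, permittivity
e1 = {1: 1.0}

# F = E rho^ (1 - v/c), with rho^ = e1 and v = v e3
one_minus_v = {0: 1.0, 4: -beta}
F = gp({0: E}, gp(e1, one_minus_v))

T1 = gp({0: eps / 2}, gp(F, rev(F)))    # T(1) = (eps/2) F F^dagger

energy = T1.get(0, 0.0)                 # expect (eps/2) E^2 (1 + (v/c)^2)
S3_over_c = T1.get(4, 0.0)              # expect eps E^2 (v/c), i.e. S = eps E^2 v
print(energy, S3_over_c)
```

With E = 2 and v/c = 0.5 the scalar grade is 2.5 and the e_3 coefficient is 2.0, matching eq. (3.131) up to the factor of c in S.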
Calculation of the T(e_k) tensor components is generally more involved. Let's do this calculation for each of the fields above in turn to illustrate.

(a): To calculate T(e_3) we can reduce the following products

\begin{aligned}
F e_3 F^\dagger &= E^2 \hat{\rho} (1 - v/c) e_3 (1 - v/c) \hat{\rho} \\
&= -E^2 e_3 \hat{\rho} (1 - v/c)^2 \hat{\rho} \\
&= -E^2 e_3 \hat{\rho} \left( 1 + v^2/c^2 - 2v/c \right) \hat{\rho} \\
&= -E^2 e_3 \hat{\rho}^2 \left( 1 + v^2/c^2 + 2v/c \right) \\
&= -E^2 e_3 \left( 1 + v^2/c^2 + 2v/c \right).
\end{aligned} \qquad (3.134)
Since

T(e_k) = -\frac{S}{c} \cdot e_k + \mathbf{T}(e_k), \qquad (3.135)

this means that S \cdot e_3 = \epsilon E^2 v, as already found. The vector component of this tensor element is

\mathbf{T}(e_3) = -\frac{\epsilon}{2} E^2 e_3 \left( 1 + v^2/c^2 \right). \qquad (3.136)
This component of the stress tensor is aligned along the same axis as the velocity. Calculation of the other stress tensor components is easiest in cylindrical coordinates. Along the radial direction

\hat{\rho} (1 - v/c) \hat{\rho} (1 - v/c) \hat{\rho} = \hat{\rho}^2 (1 + v/c)(1 - v/c) \hat{\rho} = \left( 1 - v^2/c^2 \right) \hat{\rho}, \qquad (3.137)

and along the azimuthal direction

\hat{\rho} (1 - v/c) \hat{\theta} (1 - v/c) \hat{\rho} = \hat{\rho} \hat{\theta} (1 + v/c)(1 - v/c) \hat{\rho} = -\hat{\theta} \hat{\rho}^2 \left( 1 - v^2/c^2 \right) = -\hat{\theta} \left( 1 - v^2/c^2 \right). \qquad (3.138)
Since T(a) is a linear operator for any vector parameter a, it can have no grade zero component for any direction e satisfying e \cdot e_3 = 0. No grade zero component of T(e_1), T(e_2) implies that the Poynting vector is zero along the e_1 and e_2 directions respectively, as we saw above in eq. (3.131).
In summary,

\begin{aligned}
\mathbf{T}(\hat{\rho}) &= \frac{\epsilon}{2} E^2 \left( 1 - v^2/c^2 \right) \hat{\rho} \\
\mathbf{T}(\hat{\theta}) &= -\frac{\epsilon}{2} E^2 \left( 1 - v^2/c^2 \right) \hat{\theta} \\
\mathbf{T}(e_3) &= -\frac{\epsilon}{2} E^2 \left( 1 + v^2/c^2 \right) e_3.
\end{aligned} \qquad (3.139)

For this field, \mathbf{T}(\hat{\rho}) is entirely radial, whereas \mathbf{T}(\hat{\theta}) is entirely azimuthal.
In terms of an arbitrary vector in cylindrical coordinates

a = a_\rho \hat{\rho} + a_\theta \hat{\theta} + a_z e_3, \qquad (3.140)

the grade one component of the tensor is

\mathbf{T}(a) = \frac{\epsilon}{2} E^2 \left( 1 - v^2/c^2 \right) \left( a_\rho \hat{\rho} - a_\theta \hat{\theta} \right) - \frac{\epsilon}{2} E^2 \left( 1 + v^2/c^2 \right) a_z e_3. \qquad (3.141)
(b): For F = \eta H I \hat{\rho} (1 - v/c), and v = v e_3, we have

F a F^\dagger = \eta^2 H^2 I \hat{\rho} (1 - v/c)\, a\, (1 - v/c) \hat{\rho} (-I) = \eta^2 H^2 \hat{\rho} (1 - v/c)\, a\, (1 - v/c) \hat{\rho}. \qquad (3.142)

We can write the tensor components immediately, since eq. (3.142) has exactly the same structure as the tensor components computed in part (a) above. That is

\mathbf{T}(a) = \frac{\mu}{2} H^2 \left( 1 - v^2/c^2 \right) \left( a_\rho \hat{\rho} - a_\theta \hat{\theta} \right) - \frac{\mu}{2} H^2 \left( 1 + v^2/c^2 \right) a_z e_3. \qquad (3.143)
(c): For F = E e_3 (1 - v/c), and v = v \hat{\rho}, we have

F a F^\dagger = E^2 e_3 \left( 1 - (v/c)\hat{\rho} \right) a \left( 1 - (v/c)\hat{\rho} \right) e_3, \qquad (3.144)

so we need the following grade selections

\begin{aligned}
\left\langle e_3 \left( 1 - (v/c)\hat{\rho} \right) \hat{\rho} \left( 1 - (v/c)\hat{\rho} \right) e_3 \right\rangle_1 &= \left\langle e_3 \hat{\rho} \left( 1 - (v/c)\hat{\rho} \right)^2 e_3 \right\rangle_1 \\
&= \left\langle e_3 \hat{\rho} \left( 1 + (v^2/c^2) - 2(v/c)\hat{\rho} \right) e_3 \right\rangle_1 \\
&= \left( 1 + (v^2/c^2) \right) e_3 \hat{\rho} e_3 \\
&= -\left( 1 + (v^2/c^2) \right) \hat{\rho} \\
\left\langle e_3 \left( 1 - (v/c)\hat{\rho} \right) \hat{\theta} \left( 1 - (v/c)\hat{\rho} \right) e_3 \right\rangle_1 &= \left\langle e_3 \hat{\theta} \left( 1 + (v/c)\hat{\rho} \right)\left( 1 - (v/c)\hat{\rho} \right) e_3 \right\rangle_1 \\
&= \left\langle e_3 \hat{\theta} \left( 1 - (v^2/c^2) \right) e_3 \right\rangle_1 \\
&= -\left( 1 - (v^2/c^2) \right) \hat{\theta} \\
\left\langle e_3 \left( 1 - (v/c)\hat{\rho} \right) e_3 \left( 1 - (v/c)\hat{\rho} \right) e_3 \right\rangle_1 &= \left\langle \left( 1 + (v/c)\hat{\rho} \right)\left( 1 - (v/c)\hat{\rho} \right) e_3 \right\rangle_1 \\
&= \left( 1 - (v^2/c^2) \right) e_3.
\end{aligned} \qquad (3.145)

So the Maxwell stress tensor components of interest are

\mathbf{T}(a) = -\frac{\epsilon}{2} E^2 \left( 1 + (v^2/c^2) \right) a_\rho \hat{\rho} + \frac{\epsilon}{2} E^2 \left( 1 - (v^2/c^2) \right) \left( a_z e_3 - a_\theta \hat{\theta} \right). \qquad (3.146)
(d): For F = \eta H i (1 - v/c), i = e_{12}, and v = v \hat{\rho}, we can use a duality transformation for the unit bivector i

F = \eta H I e_3 (1 - v/c), \qquad (3.147)

so

F a F^\dagger = \eta^2 H^2 e_3 (1 - v/c)\, a\, (1 - v/c) e_3. \qquad (3.148)

Equation (3.148) has the structure found in part (c) above, so

\mathbf{T}(a) = -\frac{\mu}{2} H^2 \left( 1 + (v^2/c^2) \right) a_\rho \hat{\rho} + \frac{\mu}{2} H^2 \left( 1 - (v^2/c^2) \right) \left( a_z e_3 - a_\theta \hat{\theta} \right). \qquad (3.149)
3.7.5 Complex energy and power.

Theorem 3.15: Complex power representation.

Given a time domain representation of a phasor based field F = F(\omega)

F(t) = \mathrm{Re}\left( F e^{j\omega t} \right),

the energy momentum tensor multivector T(1) has the representation

T(1) = E + \frac{S}{c} = \frac{\epsilon}{4} \mathrm{Re}\left( F^* F^\dagger + F F^\dagger e^{2 j \omega t} \right).

With the usual definition of the complex Poynting vector

\mathbf{S} = \frac{1}{2} E \times H^* = \frac{1}{2} \left( I H^* \right) \cdot E,

the energy and momentum components of T(1), for real \mu, \epsilon, are

E = \frac{1}{4}\left( \epsilon |E|^2 + \mu |H|^2 \right) + \frac{1}{4} \mathrm{Re}\left( \left( \epsilon E^2 + \mu H^2 \right) e^{2 j \omega t} \right)

S = \mathrm{Re}\, \mathbf{S} + \frac{1}{2} \mathrm{Re}\left( \left( E \times H \right) e^{2 j \omega t} \right).
To prove theorem 3.15 we start by expanding the real part operation explicitly

F(t) = \mathrm{Re}\left( F e^{j\omega t} \right) = \frac{1}{2}\left( F e^{j\omega t} + F^* e^{-j\omega t} \right). \qquad (3.150)
The energy momentum multivector for the field is therefore

\frac{\epsilon}{2} F(t) F(t)^\dagger = \frac{\epsilon}{8} \left( F e^{j\omega t} + F^* e^{-j\omega t} \right) \left( F^\dagger e^{j\omega t} + F^{*\dagger} e^{-j\omega t} \right) = \frac{\epsilon}{8} \left( F F^\dagger e^{2 j \omega t} + \left( F F^\dagger e^{2 j \omega t} \right)^* + F^* F^\dagger + \left( F^* F^\dagger \right)^* \right), \qquad (3.151)

so we have

E + \frac{S}{c} = \frac{\epsilon}{2} F(t) F(t)^\dagger = \frac{\epsilon}{4} \mathrm{Re}\left( F^* F^\dagger + F F^\dagger e^{2 j \omega t} \right), \qquad (3.152)

which proves the first part of the theorem.
Next, we'd like to expand T(1)

\begin{aligned}
\frac{\epsilon}{4} F^* F^\dagger &= \frac{\epsilon}{4} \left( E^* + I \eta H^* \right) \left( E - I \eta H \right) \\
&= \frac{\epsilon}{4} \left( E^* E + \eta^2 H^* H + I \eta \left( H^* E - E^* H \right) \right) \\
&= \frac{1}{4}\left( \epsilon |E|^2 + \mu |H|^2 \right) + \frac{I}{4 c}\left( H^* E - E^* H \right).
\end{aligned} \qquad (3.153)
The scalar terms are already real, but the real part of the vector term is

\begin{aligned}
\mathrm{Re}\left( \frac{I}{4 c}\left( H^* E - E^* H \right) \right) &= \frac{I}{8 c}\left( H^* E - E^* H + H E^* - E H^* \right) \\
&= \frac{I}{8 c}\left( 2 H^* \wedge E + 2 H \wedge E^* \right) \\
&= \frac{1}{4 c}\left( E \times H^* + E^* \times H \right) \\
&= \frac{1}{2 c} \mathrm{Re}\left( E \times H^* \right).
\end{aligned} \qquad (3.154)
The F F^\dagger factor of e^{2 j \omega t} above was expanded in eq. (3.105), so the energy momentum multivector is

E + \frac{S}{c} = \frac{1}{4}\left( \epsilon |E|^2 + \mu |H|^2 \right) + \frac{1}{2 c} \mathrm{Re}\left( E \times H^* \right) + \mathrm{Re}\left( \left( \frac{1}{4}\left( \epsilon E^2 + \mu H^2 \right) + \frac{1}{2 c} E \times H \right) e^{2 j \omega t} \right). \qquad (3.155)

Expressing eq. (3.155) in terms of the complex Poynting vector \mathbf{S} completes the proof.
Observe that averaging over one period T kills any sinusoidal contributions, so the steady state energy and Poynting vectors are just

\frac{1}{T} \int_\tau^{\tau+T} E(t)\, dt = \frac{1}{4}\left( \epsilon |E|^2 + \mu |H|^2 \right)

\frac{1}{T} \int_\tau^{\tau+T} S(t)\, dt = \mathrm{Re}\, \mathbf{S}. \qquad (3.156)
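The period average in eq. (3.156) is easy to illustrate numerically. The following sketch uses arbitrary made-up phasor values (not anything from the text) and shows that averaging the instantaneous energy density (\epsilon/2) E(t)^2 + (\mu/2) H(t)^2 over one period leaves the steady state value (1/4)(\epsilon |E|^2 + \mu |H|^2):

```python
# Numeric illustration of eq. (3.156): the sinusoidal e^{2 j omega t} terms
# average to zero over a full period, leaving only the DC energy density.
import cmath, math

eps = mu = 1.0
omega = 2 * math.pi * 3.0          # arbitrary angular frequency
T = 2 * math.pi / omega
E = [1 + 2j, 0.5j, 0.0]            # arbitrary phasor components
H = [0.3, 1 - 1j, 0.0]

def energy_density(t):
    Et = [(z * cmath.exp(1j * omega * t)).real for z in E]
    Ht = [(z * cmath.exp(1j * omega * t)).real for z in H]
    return 0.5 * eps * sum(x * x for x in Et) + 0.5 * mu * sum(x * x for x in Ht)

N = 10000
avg = sum(energy_density(k * T / N) for k in range(N)) / N
steady = 0.25 * (eps * sum(abs(z) ** 2 for z in E) + mu * sum(abs(z) ** 2 for z in H))
print(avg, steady)   # the two agree
```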

3.8 lorentz force.

3.8.1 Statement.

We now wish to express the Lorentz force equation eq. (3.5a) in its geometric algebra form. A
few definitions are helpful.

Definition 3.9: Energy momentum multivector.

For a particle with energy E and momentum p, we define the energy momentum multivector
as

T = E + cp.

Definition 3.10: Multivector charge.

We may define a multivector charge that includes both the magnitude and velocity (relative to the speed of light) of the charged particle

Q = \int_V \left( \rho_e + \frac{J}{c} + I \left( \rho_m + \frac{M}{c} \right) \right) dV,

where J = \rho_e v_e, M = \rho_m v_m. For electric charges this is

Q = q_e \left( 1 + v_e/c \right),

and for magnetic charges

Q = I q_m \left( 1 + v_m/c \right),

where q_e = \int_V \rho_e\, dV, q_m = \int_V \rho_m\, dV.
With a multivector charge defined, the Lorentz force equation can be stated in terms of the total electromagnetic field strength.

Theorem 3.16: Lorentz force and power.

The respective power and force experienced by particles with electric (and/or magnetic) charges as described by definition 3.10 is

\frac{1}{c}\frac{dT}{dt} = \left\langle F Q^\dagger \right\rangle_{0,1} = \frac{1}{2}\left( F Q^\dagger + Q F^\dagger \right),

where \left\langle dT/dt \right\rangle = dE/dt is the power and \left\langle dT/dt \right\rangle_1 = c\, dp/dt is the force on the particle, and Q^\dagger is the electric or magnetic charge/velocity multivector of definition 3.10. The conventional representation of the Lorentz force/power equations

\left\langle F Q^\dagger \right\rangle_1 = \frac{dp}{dt} = q \left( E + v \times B \right)

c \left\langle F Q^\dagger \right\rangle = \frac{dE}{dt} = q E \cdot v

may be recovered by grade selection operations. For magnetic particles, such a grade selection gives

\left\langle F Q^\dagger \right\rangle_1 = \frac{dp}{dt} = q_m \left( c B - \frac{1}{c} v_m \times E \right)

c \left\langle F Q^\dagger \right\rangle = \frac{dE}{dt} = c\, q_m B \cdot v_m.
To prove theorem 3.16, we can expand the multivector product F q (1 + v/c) into its constituent grades

\begin{aligned}
q F \left( 1 + \frac{v}{c} \right) &= q \left( E + I c B \right) \left( 1 + \frac{v}{c} \right) \\
&= q E + q c I B + \frac{q}{c} E v + q I B v \\
&= \frac{q}{c} E \cdot v + q \left( E + v \times B \right) + q \left( c I B + \frac{1}{c} E \wedge v \right) + q (I B) \wedge v.
\end{aligned} \qquad (3.157)

We see the (c-scaled) particle power relationship eq. (3.5b) in the grade zero component, and the Lorentz force eq. (3.5a) in the grade one component. A substitution q \rightarrow -I q_m, v \rightarrow v_m, and subsequent grade 0,1 selection gives

\left\langle -I q_m F \left( 1 + \frac{v_m}{c} \right) \right\rangle_{0,1} = -I q_m \left( c I B + \frac{1}{c} E \wedge v_m \right) - I q_m (I B) \wedge v_m = q_m \left( c B - \frac{1}{c} v_m \times E \right) + q_m B \cdot v_m. \qquad (3.158)

The grade one component of this multivector has the required form for the dual Lorentz force equation from theorem 3.14. Scaling the grade zero component by c completes the proof.
3.8.2 Constant magnetic field.

The Lorentz force equation that determines the dynamics of a charged particle in an external field F has been restated as a multivector differential equation, but how to solve such an equation is probably not obvious. Given a constant external magnetic field bivector F = I c B, the Lorentz force equation is reduced to

m \frac{dv}{dt} = q F \cdot \frac{v}{c}, \qquad (3.159)

or

\Omega = -\frac{q F}{m c}, \qquad \frac{dv}{dt} = v \cdot \Omega, \qquad (3.160)

where \Omega is a bivector containing all the constant factors.
This can be solved by introducing a multivector integration factor R and its reverse R^\dagger on the left and right respectively

\begin{aligned}
R \frac{dv}{dt} R^\dagger &= R\, v \cdot \Omega\, R^\dagger \\
&= \frac{1}{2} R \left( v \Omega - \Omega v \right) R^\dagger \\
&= \frac{1}{2} R v \Omega R^\dagger - \frac{1}{2} R \Omega v R^\dagger,
\end{aligned} \qquad (3.161)

or

0 = R \frac{dv}{dt} R^\dagger + \frac{1}{2} R \Omega v R^\dagger - \frac{1}{2} R v \Omega R^\dagger. \qquad (3.162)
Let

\dot{R} = R \Omega / 2. \qquad (3.163)

Since \Omega is a bivector, \dot{R}^\dagger = -\Omega R^\dagger / 2, so by the chain rule

0 = \frac{d}{dt}\left( R v R^\dagger \right). \qquad (3.164)

The integrating factor has solution

R = e^{\Omega t / 2}, \qquad (3.165)

a "complex exponential", so the solution of eq. (3.159) is

v(t) = e^{-\Omega t / 2}\, v(0)\, e^{\Omega t / 2}. \qquad (3.166)
The velocity of the charged particle traces out a helical path. Any component of the initial velocity v(0)_\perp perpendicular to the \Omega plane is untouched by this rotation operation, whereas components of the initial velocity v(0)_\parallel that lie in the \Omega plane will trace out a circular path. If \hat{\Omega} is the unit bivector for this plane, that velocity is

\begin{aligned}
v(0)_\parallel &= \left( v(0) \cdot \hat{\Omega} \right) \hat{\Omega}^{-1} = \left( v(0) \wedge \hat{B} \right) \cdot \hat{B} \\
v(0)_\perp &= \left( v(0) \wedge \hat{\Omega} \right) \hat{\Omega}^{-1} = \left( v(0) \cdot \hat{B} \right) \hat{B} \\
v(t) &= v(0)_\parallel e^{\Omega t} + v(0)_\perp \\
&= v(0)_\parallel \cos(q B t / m) + v(0)_\parallel \times \hat{B} \sin(q B t / m) + v(0)_\perp,
\end{aligned} \qquad (3.167)

where B = B \hat{B}.
A multivector integration factor method for solving the Lorentz force equation in constant
external electric and magnetic fields can be found in [9]. Other examples, solved using a rela-
tivistic formulation of GA, can be found in [6], [11], and [12].
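Equation (3.167) can also be sanity checked against a direct numerical integration of the equation of motion. This is a hypothetical sketch in plain Python with arbitrary parameters, not anything from the text:

```python
# Compare the rotor solution eq. (3.167) against RK4 integration of
# dv/dt = (q/m) v x B for a constant magnetic field along e3.
import math

q, m, B = 1.0, 2.0, 3.0
Bhat = (0.0, 0.0, 1.0)
v0 = (1.0, -0.5, 2.0)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def accel(v):
    # dv/dt = (q/m) v x B
    return tuple((q / m) * c for c in cross(v, tuple(B * x for x in Bhat)))

# decompose v(0) as in eq. (3.167): v_perp along B^, v_par in the Omega plane
d0 = sum(a * b for a, b in zip(v0, Bhat))
v_perp = tuple(d0 * c for c in Bhat)
v_par = tuple(a - b for a, b in zip(v0, v_perp))

def rotor_solution(t):
    w = q * B / m
    vxB = cross(v_par, Bhat)
    return tuple(p * math.cos(w * t) + x * math.sin(w * t) + s
                 for p, x, s in zip(v_par, vxB, v_perp))

# classic RK4 integration of the same equation of motion, out to t = 1
v, steps = v0, 10000
dt = 1.0 / steps
for _ in range(steps):
    k1 = accel(v)
    k2 = accel(tuple(a + 0.5 * dt * b for a, b in zip(v, k1)))
    k3 = accel(tuple(a + 0.5 * dt * b for a, b in zip(v, k2)))
    k4 = accel(tuple(a + dt * b for a, b in zip(v, k3)))
    v = tuple(a + dt / 6 * (b + 2 * c + 2 * d + e)
              for a, b, c, d, e in zip(v, k1, k2, k3, k4))

err = max(abs(a - b) for a, b in zip(v, rotor_solution(1.0)))
print(err)   # tiny: the rotor solution matches the integrated trajectory
```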

3.8.2.1 Problems.

Exercise 3.7 Constant magnetic field.

In eq. (3.167), each of \left( v(0) \cdot \hat{\Omega} \right) \hat{\Omega}^{-1}, \left( v(0) \wedge \hat{\Omega} \right) \hat{\Omega}^{-1}, and v(0)_\parallel e^{\Omega t} + v(0)_\perp was expanded by setting \hat{\Omega} = I \hat{B}. Perform those calculations.
3.9 polarization.

3.9.1 Phasor representation.

In a discussion of polarization, it is convenient to align the propagation direction along a fixed direction, usually the z-axis. Setting \hat{k} = e_3, \beta z = k \cdot x in a plane wave representation from theorem 3.3, the field is

F(x, \omega) = (1 + e_3) E e^{-j \beta z}, \qquad F(x, t) = \mathrm{Re}\left( F(x, \omega) e^{j \omega t} \right), \qquad (3.168)

where E \cdot e_3 = 0 (i.e. E is an electric field, and not just a free parameter). Here the imaginary j has no intrinsic geometrical interpretation, E = E_r + j E_i is allowed to have complex values, and all components of E are perpendicular to the propagation direction (E_r \cdot e_3 = E_i \cdot e_3 = 0). Stated explicitly, this means that the electric field phasor may have real or complex components in either of the transverse plane basis directions, as in

E = \left( \alpha_1 + j \beta_1 \right) e_1 + \left( \alpha_2 + j \beta_2 \right) e_2. \qquad (3.169)

The total time domain field for this general phasor field is easily found to be

F(x, t) = (1 + e_3) \left( \left( \alpha_1 e_1 + \alpha_2 e_2 \right) \cos(\omega t - \beta z) - \left( \beta_1 e_1 + \beta_2 e_2 \right) \sin(\omega t - \beta z) \right). \qquad (3.170)
Different combinations of \alpha_1, \alpha_2, \beta_1, \beta_2 lead to linear, circular, or elliptically polarized plane wave states, to be discussed shortly. Before doing so, we want to find natural multivector representations of eq. (3.170). Such representations are possible using either the pseudoscalar for the transverse plane e_{12}, or the R^3 pseudoscalar I.

3.9.2 Transverse plane pseudoscalar.

3.9.2.1 Statement.

In this section the pseudoscalar of the transverse plane, written i = e_{12}, is used as an imaginary.

Definition 3.11: Phase angle.

Define the total phase as

\phi(z, t) = \omega t - \beta z.

We seek a representation of the field utilizing complex exponentials of the phase, instead of sines and cosines. It will be helpful to define the coordinates of the Jones vector to state that representation.
Definition 3.12: Jones vectors.

The coordinates of the Jones vector, conventionally defined as a tuple of complex values (c_1, c_2), are

c_1 = \alpha_1 + i \beta_1, \qquad c_2 = \alpha_2 + i \beta_2.

In this definition we have used i = e_{12}, the pseudoscalar of the transverse plane, as the imaginary.

We will not use the Jones vector as a tuple, but will use c_1, c_2 as stated above.
Theorem 3.17: Circular polarization coefficients.

The time domain representation of the field in eq. (3.170) can be stated in terms of the total phase as

F = (1 + e_3) e_1 \left( \alpha_R e^{i\phi} + \alpha_L e^{-i\phi} \right),

where

\alpha_R = \frac{1}{2}\left( c_1 + i c_2 \right), \qquad \alpha_L = \frac{1}{2}\left( c_1 - i c_2 \right)^\dagger,

where c_1, c_2 are the 0,2 grade multivector representation of the Jones vector coordinates from definition 3.12.

To prove theorem 3.17, we have only to factor e_1 out of eq. (3.170) and then substitute complex exponentials for the sine and cosine

\begin{aligned}
\left( \alpha_1 e_1 + \alpha_2 e_2 \right) \cos\phi - \left( \beta_1 e_1 + \beta_2 e_2 \right) \sin\phi &= e_1 \left( \left( \alpha_1 + \alpha_2 i \right) \cos\phi - \left( \beta_1 + \beta_2 i \right) \sin\phi \right) \\
&= \frac{e_1}{2} \left( \left( \alpha_1 + \alpha_2 i \right)\left( e^{i\phi} + e^{-i\phi} \right) + \left( \beta_1 + \beta_2 i \right) i \left( e^{i\phi} - e^{-i\phi} \right) \right) \\
&= \frac{e_1}{2} \left( \left( \alpha_1 + i\beta_1 + i(\alpha_2 + i\beta_2) \right) e^{i\phi} + \left( (\alpha_1 + i\beta_1)^\dagger + i(\alpha_2 + i\beta_2)^\dagger \right) e^{-i\phi} \right) \\
&= \frac{e_1}{2} \left( \left( c_1 + i c_2 \right) e^{i\phi} + \left( c_1 - i c_2 \right)^\dagger e^{-i\phi} \right),
\end{aligned} \qquad (3.171)

which completes the proof.
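Because c_1, c_2, \alpha_R, \alpha_L all live in the grade 0,2 subalgebra generated by i = e_{12}, they behave exactly like ordinary complex numbers, with reversion acting as conjugation. That makes theorem 3.17 easy to spot check numerically; the following is a sketch (made-up variable names, not book code):

```python
# Spot check of theorem 3.17: the circular coefficients reproduce the
# transverse field of eq. (3.170), modeling i = e12 as the Python complex j.
import cmath, math

# arbitrary phasor coefficients from eq. (3.169)/(3.170)
a1, b1, a2, b2 = 0.7, -1.2, 0.4, 2.0
c1 = complex(a1, b1)
c2 = complex(a2, b2)

alpha_R = (c1 + 1j * c2) / 2
alpha_L = ((c1 - 1j * c2) / 2).conjugate()   # reversion of a 0,2-grade multivector

def E_from_theorem(phi):
    # e1 (alpha_R e^{i phi} + alpha_L e^{-i phi}), with e1 (x + i y) = x e1 + y e2
    w = alpha_R * cmath.exp(1j * phi) + alpha_L * cmath.exp(-1j * phi)
    return (w.real, w.imag)

def E_from_eq_3_170(phi):
    return (a1 * math.cos(phi) - b1 * math.sin(phi),
            a2 * math.cos(phi) - b2 * math.sin(phi))

err = max(abs(x - y)
          for phi in [k * 0.1 for k in range(63)]
          for x, y in zip(E_from_theorem(phi), E_from_eq_3_170(phi)))
print(err)   # tiny: the two representations agree for all phases
```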
3.9.2.2 Linear polarization.

Linear polarization is described by

\alpha_R = \frac{1}{2} \lVert E \rVert e^{i(\psi + \theta)}, \qquad \alpha_L = \frac{1}{2} \lVert E \rVert e^{i(\psi - \theta)}, \qquad (3.172)

so the field is

F = (1 + e_3) \lVert E \rVert e_1 e^{i\psi} \cos(\omega t - \beta z + \theta). \qquad (3.173)

Here \theta is an arbitrary initial phase. The electric field E traces out all the points along the line spanning the points between \pm e_1 e^{i\psi} \lVert E \rVert, whereas the magnetic field H traces out all the points along \pm e_2 e^{i\psi} \lVert E \rVert / \eta, as illustrated (with \eta = 1) in fig. 3.9.

i ψ
e1 E e

E i ψ e2
e2 e
η
ψ e1

E ψ
-e2 i
e
η

ψ
-e1 E e
i

Figure 3.9: Linear polarization.

3.9.2.3 Circular polarization.

A field for which the change in phase results in the electric field tracing out a (clockwise, counterclockwise) circle

\begin{aligned}
E_R &= \lVert E \rVert \left( e_1 \cos\phi + e_2 \sin\phi \right) = \lVert E \rVert e_1 \exp\left( e_{12} \phi \right) \\
E_L &= \lVert E \rVert \left( e_1 \cos\phi - e_2 \sin\phi \right) = \lVert E \rVert e_1 \exp\left( -e_{12} \phi \right),
\end{aligned} \qquad (3.174)

is referred to as having (right, left) circular polarization, so the choice \alpha_R = \lVert E \rVert, \alpha_L = 0 results in a right polarized wave

F = (1 + e_3) \lVert E \rVert e_1 e^{i(\omega t - k z)}, \qquad (3.175)

and \alpha_L = \lVert E \rVert, \alpha_R = 0 results in a left polarized wave

F = (1 + e_3) \lVert E \rVert e_1 e^{-i(\omega t - k z)}. \qquad (3.176)

There are different conventions for the polarization orientation; here the IEEE antenna convention discussed in [2] is used.
3.9.2.4 Elliptical parameterization.

An elliptically polarized electric field can be parameterized as

E = E_a e_1 \cos\phi + E_b e_2 \sin\phi, \qquad (3.177)

which corresponds to circular polarization coefficients with values

\alpha_R = \frac{1}{2}\left( E_a + E_b \right), \qquad \alpha_L = \frac{1}{2}\left( E_a - E_b \right). \qquad (3.178)

Therefore an elliptically polarized field can be represented as

F = \frac{1}{2} (1 + e_3) e_1 \left( (E_a + E_b) e^{i\phi} + (E_a - E_b) e^{-i\phi} \right). \qquad (3.179)

An interesting variation of the elliptical polarization uses a hyperbolic parameterization. If a, b are the semi-major/minor axes of the ellipse (i.e. a > b), and a = a e_1 e^{i\psi} is the vectoral representation of the semi-major axis (not necessarily placed along e_1), and e = \sqrt{1 - (b/a)^2} is the eccentricity of the ellipse, then it can be shown ([9]) that an elliptic parameterization can be written in the compact form

r(\phi) = e a \cosh\left( \tanh^{-1}(b/a) + i \phi \right). \qquad (3.180)

When the bivector imaginary i = e_{12} is used, this parameterization is real and has only vector grades, so the electromagnetic field for a general elliptic wave has the form

F = e E_a (1 + e_3) e_1 e^{i\psi} \cosh\left( m + i\phi \right), \qquad m = \tanh^{-1}\left( E_b / E_a \right), \qquad e = \sqrt{1 - (E_b/E_a)^2}, \qquad (3.181)

where E_a (E_b) are the magnitudes of the electric field components lying along the semi-major (minor) axes, and the propagation direction e_3 is normal to both the major and minor axis directions. An elliptic electric field polarization is illustrated in fig. 3.10, where the vectors representing the major and minor axes are E_a = E_a e_1 e^{i\psi}, E_b = E_b e_2 e^{i\psi}. Observe that setting E_b = 0 results in the linearly polarized field of eq. (3.173).
Figure 3.10: Electric field with elliptical polarization.
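The hyperbolic form eq. (3.180) can be checked numerically by modeling the bivector imaginary i with the Python complex j. With m = \tanh^{-1}(b/a) one finds e a \cosh m = a and e a \sinh m = b, so the curve is just a \cos\phi + i\, b \sin\phi. A sketch (not book code):

```python
# Check that r(phi) = e a cosh(atanh(b/a) + i phi) traces the ellipse
# with semi-axes a and b.
import cmath, math

a, b = 3.0, 1.5
e = math.sqrt(1 - (b / a) ** 2)   # eccentricity
m = math.atanh(b / a)

err = max(abs(e * a * cmath.cosh(m + 1j * phi)
              - complex(a * math.cos(phi), b * math.sin(phi)))
          for phi in [k * 0.1 for k in range(63)])
print(err)   # tiny: eq. (3.180) is exactly a cos(phi) + i b sin(phi)
```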

3.9.2.5 Energy and momentum.

Each of the polarizations considered above (linear, circular, elliptical) has the same general form

F = (1 + e_3) e_1 e^{i\psi} f(\phi), \qquad (3.182)

where f(\phi) is a complex valued function (i.e. grade 0,2). The structure of eq. (3.182) could be more general than considered so far. For example, a Gaussian modulation could be added into the mix with f(\phi) = e^{i\phi - (\phi/\sigma)^2/2}. Independent of the form of f, we may compute the energy, momentum and Maxwell stress tensor for the plane wave given by eq. (3.182).
Theorem 3.18: Energy momentum tensor components for a plane wave.

The energy momentum tensor components for the plane wave given by eq. (3.182) are

T(1) = -T(e_3) = \epsilon (1 + e_3) f f^\dagger = E + \frac{S}{c}

T(e_1) = T(e_2) = 0.

Only the propagation direction of a plane wave, regardless of its polarization (or even whether or not there are Gaussian or other damping factors), carries any energy or momentum, and only the propagation direction component of the Maxwell stress tensor \mathbf{T}(a) is non-zero.
To prove theorem 3.18, we may compute T(a) separately for each of a = 1, e_1, e_2, e_3. Key to all of these computations is the fact that e_3 commutes with scalars and with i, whereas e_1, e_2 both anticommute with i; more generally e_{1,2} (a + i b) = (a - i b) e_{1,2}. For T(1) we need the product of the field and its reverse

F F^\dagger = (1 + e_3) e_1 e^{i\psi} f f^\dagger e^{-i\psi} e_1 (1 + e_3) = (1 + e_3)^2 f f^\dagger = 2 (1 + e_3) f f^\dagger, \qquad (3.183)

since f f^\dagger is a scalar, so T(1) = \epsilon (1 + e_3) f f^\dagger. For T(e_3) we have

\begin{aligned}
F e_3 F^\dagger &= (1 + e_3) e_1 e^{i\psi} f e_3 f^\dagger e^{-i\psi} e_1 (1 + e_3) \\
&= -(1 + e_3) e_3 e_1 e^{i\psi} f f^\dagger e^{-i\psi} e_1 (1 + e_3) \\
&= -(1 + e_3) e_1 e^{i\psi} f f^\dagger e^{-i\psi} e_1 (1 + e_3) \\
&= -2 (1 + e_3) f f^\dagger,
\end{aligned} \qquad (3.184)

so T(e_3) = -T(1). For T(e_1), we have

\begin{aligned}
F e_1 F^\dagger &= (1 + e_3) e_1 e^{i\psi} f e_1 f^\dagger e^{-i\psi} e_1 (1 + e_3) \\
&= (1 + e_3) e_1 e^{i\psi} f^2 e^{i\psi} e_1^2 (1 + e_3) \\
&= (1 + e_3) e_1 f^2 e^{2 i \psi} (1 + e_3) \\
&= (1 + e_3)(1 - e_3) e_1 f^2 e^{2 i \psi} \\
&= \left( 1 - e_3^2 \right) e_1 f^2 e^{2 i \psi} \\
&= 0.
\end{aligned} \qquad (3.185)

Clearly F e_2 F^\dagger = 0 as well, so T(e_1) = T(e_2) = 0, completing the proof.
Using theorem 3.18, the energy momentum vector for the linearly polarized wave of eq. (3.173) is

T(1) = \epsilon (1 + e_3) \lVert E \rVert^2 \cos^2(\phi + \theta), \qquad (3.186)

and for the circularly polarized wave of eq. (3.175), or eq. (3.176), is

T(1) = \epsilon (1 + e_3) \lVert E \rVert^2. \qquad (3.187)

A circularly polarized wave carries maximum energy and momentum, whereas the energy and momentum of a linearly polarized wave oscillates with the phase angle.
For the elliptically polarized wave of eq. (3.181) we have f(\phi) = e E_a \cosh(m + i\phi). The squared magnitude of f is

\begin{aligned}
f f^\dagger &= e^2 E_a^2 \cosh(m + i\phi) \left( \cosh(m + i\phi) \right)^\dagger \\
&= \frac{e^2 E_a^2}{2} \left( \cosh(2m) + \cos(2\phi) \right) \\
&= E_a^2 \left( \frac{E_b^2}{E_a^2} + \left( 1 - \frac{E_b^2}{E_a^2} \right) \cos^2\phi \right).
\end{aligned} \qquad (3.188)

The simplification above made use of the identity \left( 1 - (b/a)^2 \right) \cosh\left( 2 \tanh^{-1}(b/a) \right) = 1 + (b/a)^2. The energy momentum for an elliptically polarized wave is therefore

T(1) = \epsilon (1 + e_3) E_a^2 \left( \frac{E_b^2}{E_a^2} + \left( 1 - \frac{E_b^2}{E_a^2} \right) \cos^2\phi \right). \qquad (3.189)

As expected, the phase dependent portion of the energy momentum tensor vanishes as the wave function approaches circular polarization.

3.9.3 Pseudoscalar imaginary.

In this section we use the R^3 pseudoscalar as an imaginary. As before, we seek a representation of the field utilizing complex exponentials of the phase, instead of sines and cosines, and as before we wish to define Jones vector coordinates as a go-between.

Definition 3.13: Jones vectors.

The coordinates of the Jones vector, conventionally defined as a tuple of complex values (c_1, c_2), are

c_1 = \alpha_1 + I \beta_1, \qquad c_2 = \alpha_2 + I \beta_2.

In this definition we have used the R^3 pseudoscalar I as the imaginary.

We will not use the Jones vector as a tuple, but will use c_1, c_2 as stated above.
Theorem 3.19: Circular polarization coefficients.

The time domain representation of the field in eq. (3.170) can be stated in terms of the total phase as

F = (1 + e_3) e_1 \left( \alpha_R e^{-I\phi} + \alpha_L e^{I\phi} \right),

where

\alpha_R = \frac{1}{2}\left( c_1 + I c_2 \right)^\dagger, \qquad \alpha_L = \frac{1}{2}\left( c_1 - I c_2 \right),

where c_1, c_2 are the 0,3 grade multivector representation of the Jones vector coordinates from definition 3.13.
Notice that the signs of the exponentials have flipped for the left and right handed circular polarizations. It may not be obvious that the electric and magnetic fields in this representation have the desired transverse properties. To see why that is still the case, and to understand the conjugation in the complex exponentials, consider the right circular polarization case with \alpha_R = \lVert E \rVert, \alpha_L = 0

F = (1 + e_3) e_1 \lVert E \rVert e^{-I\phi} = (1 + e_3) \lVert E \rVert \left( e_1 \cos\phi - e_{23} \sin\phi \right) = (1 + e_3) \lVert E \rVert \left( e_1 \cos\phi + e_{32} \sin\phi \right), \qquad (3.190)

but since (1 + e_3) e_3 = 1 + e_3, we have

F = (1 + e_3) \lVert E \rVert \left( e_1 \cos\phi + e_2 \sin\phi \right), \qquad (3.191)

which has the claimed right circular polarization.
To prove theorem 3.19 itself, the sine and cosine in eq. (3.170) can be expanded in complex exponentials

\begin{aligned}
2 \left( \alpha_1 e_1 + \alpha_2 e_2 \right) \cos\phi - 2 \left( \beta_1 e_1 + \beta_2 e_2 \right) \sin\phi &= \left( \alpha_1 e_1 + \alpha_2 e_2 \right) \left( e^{I\phi} + e^{-I\phi} \right) + \left( \beta_1 e_1 + \beta_2 e_2 \right) I \left( e^{I\phi} - e^{-I\phi} \right) \\
&= \left( \alpha_1 e_1 - I \alpha_2 (I e_2) \right) \left( e^{I\phi} + e^{-I\phi} \right) + \left( \beta_1 e_1 - I \beta_2 (I e_2) \right) I \left( e^{I\phi} - e^{-I\phi} \right).
\end{aligned} \qquad (3.192)

Since the leading 1 + e_3 gobbles any e_3 factors, its action on the dual of e_2 is

(1 + e_3) I e_2 = (1 + e_3) e_{31} = (1 + e_3) e_1. \qquad (3.193)

This allows us to unconditionally factor out e_1 from eq. (3.192), so the field is

\begin{aligned}
F &= \frac{1}{2} (1 + e_3) e_1 \left( \left( \alpha_1 - I \alpha_2 \right) \left( e^{I\phi} + e^{-I\phi} \right) + \left( \beta_1 - I \beta_2 \right) I \left( e^{I\phi} - e^{-I\phi} \right) \right) \\
&= \frac{1}{2} (1 + e_3) e_1 \left( \left( \alpha_1 + I \beta_1 - I \left( \alpha_2 + I \beta_2 \right) \right) e^{I\phi} + \left( \alpha_1 - I \beta_1 - I \left( \alpha_2 - I \beta_2 \right) \right) e^{-I\phi} \right) \\
&= \frac{1}{2} (1 + e_3) e_1 \left( \left( c_1 - I c_2 \right) e^{I\phi} + \left( c_1 + I c_2 \right)^\dagger e^{-I\phi} \right) \\
&= (1 + e_3) e_1 \left( \alpha_R e^{-I\phi} + \alpha_L e^{I\phi} \right),
\end{aligned} \qquad (3.194)

which completes the proof.
Observe that there are some advantages to the pseudoscalar plane wave form, especially for computing energy momentum tensor components, since I commutes with all grades. For example, we can see practically by inspection that

T(1) = E + \frac{S}{c} = \epsilon (1 + e_3) \left( |\alpha_R|^2 + |\alpha_L|^2 \right), \qquad (3.195)

where the absolute value is computed using the reverse as the conjugation operation, |z|^2 = z z^\dagger.
3.10 transverse fields in a waveguide.

We now wish to consider more general solutions to the source free Maxwell's equation than the plane wave solutions derived in section 3.4. One way of tackling this problem is to assume the solution exists, but ask how the field components that lie strictly along the propagation direction are related to the transverse components of the field. Without loss of generality, it can be assumed that the propagation direction is along the z-axis.

Theorem 3.20: Transverse and propagation field components.

If e_3 is the propagation direction, the components of a field F in the propagation direction and in the transverse plane are respectively

F_z = \frac{1}{2}\left( F + e_3 F e_3 \right), \qquad F_t = \frac{1}{2}\left( F - e_3 F e_3 \right),

where F = F_z + F_t.
To determine the components of the field that lie in the propagation direction and transverse planes, we state the field in the propagation direction, building it from the electric and magnetic field projections along the z-axis

\begin{aligned}
F_z &= (E \cdot e_3) e_3 + I \eta (H \cdot e_3) e_3 \\
&= \frac{1}{2}\left( E e_3 + e_3 E \right) e_3 + I \eta \frac{1}{2}\left( H e_3 + e_3 H \right) e_3 \\
&= \frac{1}{2}\left( E + e_3 E e_3 \right) + I \eta \frac{1}{2}\left( H + e_3 H e_3 \right) \\
&= \frac{1}{2}\left( F + e_3 F e_3 \right).
\end{aligned} \qquad (3.196)

The difference F - F_z is the transverse component

F_t = F - F_z = F - \frac{1}{2}\left( F + e_3 F e_3 \right) = \frac{1}{2}\left( F - e_3 F e_3 \right), \qquad (3.197)

as claimed.
We wish to split the gradient into transverse and propagation direction components.

Definition 3.14: Transverse and propagation direction gradients.

Define the propagation direction gradient as e_3 \partial_z, and the transverse gradient by

\nabla_t = \nabla - e_3 \partial_z.
Given this definition, we seek to show that

Theorem 3.21: Transverse and propagation field solutions.

Given a field propagating along the z-axis (either forward or backwards), with angular frequency \omega, represented by the real part of

F(x, y, z, t) = F(x, y) e^{j \omega t \mp j k z},

the field components that solve the source free Maxwell's equation are related by

F_t = j \frac{1}{\frac{\omega}{c} \mp k e_3} \nabla_t F_z, \qquad
F_z = j \frac{1}{\frac{\omega}{c} \mp k e_3} \nabla_t F_t.

Written out explicitly, the transverse field component expands as

E_t = \frac{j}{\frac{\omega^2}{c^2} - k^2} \left( \mp k \nabla_t E_z + \frac{\omega \eta}{c} e_3 \times \nabla_t H_z \right)

\eta H_t = \frac{j}{\frac{\omega^2}{c^2} - k^2} \left( \mp k \eta \nabla_t H_z - \frac{\omega}{c} e_3 \times \nabla_t E_z \right).
To prove this, we first insert the assumed phasor representation into Maxwell's equation, which gives

\left( \nabla_t + j \left( \frac{\omega}{c} \mp k e_3 \right) \right) F(x, y) = 0. \qquad (3.198)

Dropping the x, y dependence for now (i.e. F(x, y) \rightarrow F), we find a relation between the transverse gradient of F and the propagation direction gradient of F

\nabla_t F = -j \left( \frac{\omega}{c} \mp k e_3 \right) F. \qquad (3.199)

From this we now seek to determine the relationships between F_t and F_z. Since \nabla_t has no \hat{x}, \hat{y} components, e_3 anticommutes with the transverse gradient

e_3 \nabla_t = -\nabla_t e_3, \qquad (3.200)

but commutes with j \left( \omega/c \mp k e_3 \right). This means that

\frac{1}{2}\left( \nabla_t F \pm e_3 \left( \nabla_t F \right) e_3 \right) = \frac{1}{2}\left( \nabla_t F \mp \nabla_t e_3 F e_3 \right) = \nabla_t \frac{1}{2}\left( F \mp e_3 F e_3 \right), \qquad (3.201)

or

\frac{1}{2}\left( \nabla_t F + e_3 \left( \nabla_t F \right) e_3 \right) = \nabla_t F_t, \qquad
\frac{1}{2}\left( \nabla_t F - e_3 \left( \nabla_t F \right) e_3 \right) = \nabla_t F_z, \qquad (3.202)

so Maxwell's equation eq. (3.199) becomes

\nabla_t F_t = -j \left( \frac{\omega}{c} \mp k e_3 \right) F_z, \qquad
\nabla_t F_z = -j \left( \frac{\omega}{c} \mp k e_3 \right) F_t. \qquad (3.203)

Provided \omega^2 \ne (k c)^2, these can be inverted. Such an inversion allows an application of the transverse gradient to whichever one of F_z, F_t is known, to compute the other, as stated in theorem 3.21.
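The inversion is straightforward because e_3^2 = 1, so \omega/c \mp k e_3 behaves like a hyperbolic complex number. A short check of the inverse used in the next step (notation as in theorem 3.21):

```latex
\left( \frac{\omega}{c} \mp k e_3 \right)\left( \frac{\omega}{c} \pm k e_3 \right)
= \frac{\omega^2}{c^2} - k^2 e_3^2
= \frac{\omega^2}{c^2} - k^2,
\qquad \text{so} \qquad
\left( \frac{\omega}{c} \mp k e_3 \right)^{-1}
= \frac{\frac{\omega}{c} \pm k e_3}{\frac{\omega^2}{c^2} - k^2}.
```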
The relation for F_t in theorem 3.21 is usually stated in terms of the electric and magnetic fields. To perform that expansion, we must first evaluate the multivector inverse explicitly

F_z = j \frac{\frac{\omega}{c} \pm k e_3}{\frac{\omega^2}{c^2} - k^2} \nabla_t F_t, \qquad
F_t = j \frac{\frac{\omega}{c} \pm k e_3}{\frac{\omega^2}{c^2} - k^2} \nabla_t F_z, \qquad (3.204)

so that we are in position to expand most of the terms in the numerator

\begin{aligned}
\left( \frac{\omega}{c} \pm k e_3 \right) \nabla_t F_z &= \left( \mp k - \frac{\omega}{c} e_3 \right) \nabla_t\, e_3 F_z \\
&= \left( \mp k - \frac{\omega}{c} e_3 \right) \nabla_t \left( E_z + I \eta H_z \right) \\
&= \left( \mp k \nabla_t E_z + \frac{\omega \eta}{c} e_3 \times \nabla_t H_z \right) + I \left( \mp k \eta \nabla_t H_z - \frac{\omega}{c} e_3 \times \nabla_t E_z \right),
\end{aligned} \qquad (3.205)

from which the transverse electric and magnetic fields stated in theorem 3.21 can be read off. A similar expansion for E_z, H_z in terms of E_t, H_t is also possible.
Exercise 3.8 Transverse electric and magnetic field components.

Fill in the missing details in the steps of eq. (3.205).

Exercise 3.9 Propagation direction components.

Perform an expansion like eq. (3.205) to find E_z, H_z in terms of E_t, H_t.
3.11 multivector potential.

3.11.1 Definition.

Conventional electromagnetism utilizes scalar and vector potentials, so it is reasonable to expect that the desired multivector representation of the potential is a grade 0,1 multivector. A potential representation with grades 2,3 works for (fictitious) magnetic sources, so we may generally allow a multivector potential to have any grades. Such a potential is related to the field as follows.

Definition 3.15: Multivector potential.

The electromagnetic field strength F for a multivector potential A is

F = \left\langle \left( \nabla - \frac{1}{c}\frac{\partial}{\partial t} \right) A \right\rangle_{1,2}.

Before unpacking \left( \nabla - \frac{1}{c}\frac{\partial}{\partial t} \right) A, we want to label the different grades of the multivector potential, and do so in a way that is consistent with the conventional potential representation of the electric and magnetic fields.
Definition 3.16: Multivector potential representation.

Let

A = -\phi + c A + \eta I \left( -\phi_m + c F \right),

where

1. \phi is the scalar potential, V (Volts).

2. A is the vector potential, Wb/m (Webers/meter).

3. \phi_m is the scalar potential for (fictitious) magnetic sources, A (Amperes).

4. F is the vector potential for (fictitious) magnetic sources, C (Coulombs).

This specific breakdown of A into scalar and vector potentials, and dual (pseudoscalar and bivector) potentials, has been chosen to match SI conventions, specifically those of [3] (which includes fictitious magnetic sources.)
We can now express the fields in terms of the potentials.

Theorem 3.22: Fields and the potential wave equations.

In terms of the potential components, the electric field vector and the magnetic field bivector are

E = \left\langle \left( \nabla - \frac{1}{c}\frac{\partial}{\partial t} \right) A \right\rangle_1 = -\nabla \phi - \frac{\partial A}{\partial t} - \frac{1}{\epsilon} \nabla \times F

I \eta H = \left\langle \left( \nabla - \frac{1}{c}\frac{\partial}{\partial t} \right) A \right\rangle_2 = I \eta \left( -\nabla \phi_m - \frac{\partial F}{\partial t} + \frac{1}{\mu} \nabla \times A \right).

The potentials are related to the sources by

\begin{aligned}
\square \phi &= -\frac{\rho}{\epsilon} - \frac{\partial}{\partial t}\left( \nabla \cdot A + \frac{1}{c^2}\frac{\partial \phi}{\partial t} \right) \\
\square A &= -\mu J + \nabla \left( \nabla \cdot A + \frac{1}{c^2}\frac{\partial \phi}{\partial t} \right) \\
\square F &= -\epsilon M + \nabla \left( \nabla \cdot F + \frac{1}{c^2}\frac{\partial \phi_m}{\partial t} \right) \\
\square \phi_m &= -\frac{\rho_m}{\mu} - \frac{\partial}{\partial t}\left( \nabla \cdot F + \frac{1}{c^2}\frac{\partial \phi_m}{\partial t} \right),
\end{aligned}

where \square = \nabla^2 - (1/c^2)\partial_{tt} is the wave equation (d'Alembertian) operator.
To prove theorem 3.22 we start by expanding (∇ − (1/c)∂t) A using definition 3.16 and then
group by grade to find

    (∇ − (1/c)∂t) A
      = (∇ − (1/c)∂t) (−φ + cA + ηI (−φm + cF))
      = −∇φ + c∇ · A + c∇ ∧ A + (1/c) ∂φ/∂t − ∂A/∂t
        + Iη (−∇φm + c∇ · F + c∇ ∧ F + (1/c) ∂φm/∂t − ∂F/∂t)
      = c∇ · A + (1/c) ∂φ/∂t                                            (3.206)
        + (−∇φ − ∂A/∂t − (1/ε) ∇ × F)                                   [= E]
        + Iη (−∇φm − ∂F/∂t + (1/µ) ∇ × A)                               [= IηH]
        + Iη (c∇ · F + (1/c) ∂φm/∂t),

which shows the claimed field split.


In terms of the potentials, Maxwell's equation (∇ + (1/c)∂t) F = J is

    (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) A⟩_{1,2} = J,                          (3.207)

or

    □A = J + (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) A⟩_{0,3}.                     (3.208)

This is almost a wave equation. Inserting eq. (3.206) into eq. (3.208) and selecting each grade
gives four almost-wave equations

    −□φ = ρ/ε + (1/c) (∂/∂t) (c∇ · A + (1/c) ∂φ/∂t)
    c□A = −ηJ + ∇ (c∇ · A + (1/c) ∂φ/∂t)
    ηcI□F = −IM + ∇ · (Iη (c∇ · F + (1/c) ∂φm/∂t))
    −Iη□φm = Icρm + Iη (1/c) (∂/∂t) (c∇ · F + (1/c) ∂φm/∂t).

Using η = µc, ηc = 1/ε, and ∇ · (Iψ) = I∇ψ for scalar ψ, a bit of rearrangement completes the
proof.

3.11.2 Gauge transformations.

Clearly it is desirable if potentials can be found for which ∇ · A + (1/c²) ∂φ/∂t = ∇ · F + (1/c²) ∂φm/∂t =
0. Finding such potentials relies on the fact that the potential representation is not unique. In
particular, we have the freedom to add any spacetime gradient of any scalar or pseudoscalar
potential without changing the field.

Theorem 3.23: Gauge invariance.

The spacetime gradient of a grade 0,3 multivector Ψ may be added to a multivector poten-
tial

    A′ = A + (∇ + (1/c)∂t) Ψ,

without changing the field. That is

    F = ⟨(∇ − (1/c)∂t) A⟩_{1,2} = ⟨(∇ − (1/c)∂t) A′⟩_{1,2}.

To prove theorem 3.23 let

    A′ = A + (∇ + (1/c)∂t) (ψ + Iφ),                                    (3.209)

where ψ and φ are scalar functions. The field for potential A′ is

    F′ = ⟨(∇ − (1/c)∂t) A′⟩_{1,2}
       = ⟨(∇ − (1/c)∂t) (A + (∇ + (1/c)∂t) (ψ + Iφ))⟩_{1,2}
       = ⟨(∇ − (1/c)∂t) A⟩_{1,2} + ⟨(∇ − (1/c)∂t) (∇ + (1/c)∂t) (ψ + Iφ)⟩_{1,2}   (3.210)
       = F + ⟨□(ψ + Iφ)⟩_{1,2},

which is just F, since the d'Alembertian operator □ is a scalar operator and ψ + Iφ has no
vector or bivector grades.
We say that we are working in the Lorenz gauge if the 0,3 grades of (∇ − (1/c)∂t) A are zero, or
a transformation that kills those grades is made.

Theorem 3.24: Lorenz gauge transformation.

Given any multivector potential A solution of Maxwell's equation, the transformation

    A′ = A − (∇ + (1/c)∂t) Ψ,

where

    □Ψ = ⟨(∇ − (1/c)∂t) A⟩_{0,3},

allows Maxwell's equation to be written in wave equation form

    □A′ = J.

To prove theorem 3.24, let

    A = A′ + (∇ + (1/c)∂t) Ψ,                                           (3.211)

so Maxwell's equation becomes

    J = (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) A⟩_{1,2}
      = □A − (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) A⟩_{0,3}
      = □A′ + □(∇ + (1/c)∂t) Ψ − (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) A⟩_{0,3}   (3.212)
      = □A′ + (∇ + (1/c)∂t) (□Ψ − ⟨(∇ − (1/c)∂t) A⟩_{0,3}).

Requiring

    □Ψ = ⟨(∇ − (1/c)∂t) A⟩_{0,3},                                       (3.213)

completes the proof. Observe that Ψ has only grades 0,3 as required of a gauge function.
Such a transformation completely decouples Maxwell's equation, providing one scalar wave
equation for each grade of □A′ = J, relating each grade of the potential A′ to exactly one grade
of the source multivector current J. We are free to immediately solve for A′ using the (causal)
Green's function for the d'Alembertian

    A′(x, t) = −∫ dV′ dt′ (δ(‖x − x′‖ − c(t − t′)) / (4π ‖x − x′‖)) J(x′, t′)
             = −(1/4π) ∫ dV′ J(x′, t − (1/c) ‖x − x′‖) / ‖x − x′‖,      (3.214)

which is the sum of all the current contributions relative to the point x at the retarded time
tr = t − (1/c) ‖x − x′‖. The field follows immediately by differentiation and grade selection

    F = ⟨(∇ − (1/c)∂t) A′⟩_{1,2}.                                       (3.215)

Again, using the Green's function for the d'Alembertian, the explicit form of the gauge func-
tion Ψ is

    Ψ = −(1/4π) ∫ dV′ ⟨(∇ − (1/c)∂t) A(x′, tr)⟩_{0,3} / ‖x − x′‖,       (3.216)

however, we don't actually need to compute this. Instead, we only have to know we are free to
construct a field from any solution A′ of □A′ = J using eq. (3.215).

3.11.3 Far field.

Theorem 3.25: Far field magnetic vector potential.

Given a vector potential with a radial spherical wave representation

    A = (e^{−jkr}/r) A(θ, φ),

the far field (r ≫ 1) electromagnetic field is

    F = −jω (1 + r̂) (r̂ ∧ A).

If A⊥ = r̂ (r̂ ∧ A) represents the non-radial component of the potential, the respective
electric and magnetic field components are

    E = −jωA⊥
    H = (1/η) r̂ × E.

To prove theorem 3.25, we will utilize a spherical representation of the gradient


    ∇ = r̂ ∂r + ∇⊥
    ∇⊥ = (θ̂/r) ∂θ + (φ̂/(r sin θ)) ∂φ.                                  (3.217)

The gradient of the vector potential is

    ∇A = (r̂ ∂r + ∇⊥) (e^{−jkr}/r) A(θ, φ)
       = r̂ (−jk − 1/r) (e^{−jkr}/r) A(θ, φ) + (e^{−jkr}/r) ∇⊥ A(θ, φ)   (3.218)
       = (−jk − 1/r) r̂ A + O(1/r²)
       ≈ −jk r̂ A.

Here, all the O(1/r²) terms, including the action of the non-radial component of the gradient
on the 1/r potential, have been neglected. From eq. (3.218) the far field divergence and the
(bivector) curl of A are

    ∇ · A = −jk r̂ · A
    ∇ ∧ A = −jk r̂ ∧ A.                                                 (3.219)

Finally, the far field gradient of the divergence of A is

    ∇ (∇ · A) = (r̂ ∂r + ∇⊥) (−jk r̂ · A)
              ≈ −jk r̂ ∂r (r̂ · A)
              = −jk r̂ (−jk − 1/r) (r̂ · A)                              (3.220)
              ≈ −k² r̂ (r̂ · A),

again neglecting any O(1/r²) terms. The field is

    F = −jω A − j (c²/ω) ∇ (∇ · A) + c ∇ ∧ A
      = −jω A + jω r̂ (r̂ · A) − jkc r̂ ∧ A                              (3.221)
      = −jω (A − r̂ (r̂ · A)) − jω r̂ ∧ A
      = −jω r̂ (r̂ ∧ A) − jω r̂ ∧ A
      = −jω (r̂ + 1) (r̂ ∧ A),

which completes the first part of the proof. Extraction of the electric and magnetic fields can be
done by inspection and is left to the reader to prove.
One interpretation of this is that the (bivector) magnetic field is represented by the plane
perpendicular to the direction of propagation, and the electric field by a vector in that plane.

Theorem 3.26: Far field electric vector potential.

Given a vector potential with a radial spherical wave representation

    F = (e^{−jkr}/r) F(θ, φ),

the far field (r ≫ 1) electromagnetic field is

    F = −jωηI (r̂ + 1) (r̂ ∧ F).

If F⊥ = r̂ (r̂ ∧ F) represents the non-radial component of the potential, the respective
electric and magnetic field components are

    E = jωη r̂ × F
    H = −jωF⊥.

The proof of theorem 3.26 is left to the reader.



Example 3.1: Vertical dipole potential.

We will calculate the far field along the propagation direction vector k̂ in the z-y plane

    k̂ = e3 e^{iθ}
    i = e32,                                                            (3.222)

for the infinitesimal dipole potential

    A = e3 (µI0l/(4πr)) e^{−jkr},                                       (3.223)

as illustrated in fig. 3.11.

Figure 3.11: Vertical infinitesimal dipole and selected propagation direction.

The wedge of k̂ with A is proportional to

    k̂ ∧ e3 = ⟨k̂ e3⟩_2
            = ⟨e3 e^{iθ} e3⟩_2
            = ⟨e3² e^{−iθ}⟩_2                                           (3.224)
            = −i sin θ,

so from theorem 3.25 the field is

    F = jω (1 + e3 e^{iθ}) i sin θ (µI0l/(4πr)) e^{−jkr}.               (3.225)

The electric and magnetic fields can be found from the respective vector and bivector
grades of eq. (3.225)

    E = (jωµI0l/(4πr)) e^{−jkr} e3 e^{iθ} i sin θ
      = (jωµI0l/(4πr)) e^{−jkr} e2 e^{iθ} sin θ                         (3.226)
      = (jkηI0l sin θ/(4πr)) e^{−jkr} (e2 cos θ − e3 sin θ),

and

    H = (1/(Iη)) jω i sin θ (µI0l/(4πr)) e^{−jkr}
      = (1/η) e321 e32 jω sin θ (µI0l/(4πr)) e^{−jkr}                   (3.227)
      = −e1 (jk sin θ I0l/(4πr)) e^{−jkr}.

The multivector electrodynamic field expression eq. (3.225) for F is more algebraically
compact than the separate electric and magnetic field expressions, but this comes with the
complexity of dealing with different types of imaginaries. There are two explicit unit imag-
inaries in eq. (3.225): the scalar imaginary j used to encode the time harmonic nature of the
field, and i = e32 used to represent the plane that the far field propagation direction vector
lies in. Additionally, when the magnetic field component was extracted, the pseudoscalar
I = e123 entered into the mix. Care is required to keep these all separate, especially since
I and j commute with all grades, but i does not.
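The claim H = (1/η) r̂ × E of theorem 3.25 can be spot checked numerically for the dipole fields eq. (3.226) and eq. (3.227). With i = e32, the propagation direction is k̂ = e3 cos θ + e2 sin θ. The numeric values below (frequency, dipole moment, observation point) are arbitrary choices for the check, not values from the text:

```python
import numpy as np

theta = 0.7                      # arbitrary polar angle
k, eta, I0l, r = 2*np.pi, 377.0, 1e-3, 10.0
phase = np.exp(-1j*k*r)

rhat = np.array([0.0, np.sin(theta), np.cos(theta)])   # khat = e3 cos + e2 sin
E = (1j*k*eta*I0l*np.sin(theta)/(4*np.pi*r))*phase * np.array([0.0, np.cos(theta), -np.sin(theta)])
H = (-1j*k*np.sin(theta)*I0l/(4*np.pi*r))*phase * np.array([1.0, 0.0, 0.0])

# theorem 3.25: H = (1/eta) rhat x E
assert np.allclose(np.cross(rhat, E)/eta, H)
# E is transverse to the propagation direction
assert np.isclose(np.vdot(rhat, E), 0)
```

The cross product of k̂ with the e2 cos θ − e3 sin θ polarization direction lands exactly on −e1, as found algebraically in eq. (3.227).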

3.12 dielectric and magnetic media.

3.12.1 Statement.

Without imposing the constitutive relationships eq. (3.2), the geometric algebra form of Maxwell's
equations requires a pair of equations, with multivector fields and multivector sources, instead
of one of each.

Theorem 3.27: Maxwell’s equations in media.



Maxwell's equations in media are

    ⟨(∇ + (1/c)∂t) F⟩_{0,1} = Je
    ⟨(∇ + (1/c)∂t) G⟩_{2,3} = I Jm,

where c is the group velocity of F, G in the medium, the fields are grade 1,2 multivectors

    F = D + (I/c) H
    G = E + IcB,

and the sources are grade 0,1 multivectors

    Je = ρ − (1/c) J
    Jm = cρm − M.

To prove theorem 3.27 we may simply expand the spacetime gradients and grade selection
operations, and compare to eq. (3.1), the conventional representation of Maxwell's equations.
For F we have

    ρ − J/c = ⟨(∇ + (1/c)∂t) F⟩_{0,1}
            = ⟨(∇ + (1/c)∂t) (D + (I/c) H)⟩_{0,1}
            = ⟨∇ · D + ∇ ∧ D + (I/c) ∇ · H + (I/c) ∇ ∧ H + (1/c) ∂D/∂t + (I/c²) ∂H/∂t⟩_{0,1}   (3.228)
            = ∇ · D + (1/c) ∂D/∂t − (1/c) ∇ × H,

and for G

    I (cρm − M) = ⟨(∇ + (1/c)∂t) G⟩_{2,3}
                = ⟨(∇ + (1/c)∂t) (E + IcB)⟩_{2,3}
                = ⟨∇ · E + ∇ ∧ E + Ic ∇ · B + Ic ∇ ∧ B + (1/c) ∂E/∂t + I ∂B/∂t⟩_{2,3}           (3.229)
                = ∇ ∧ E + Ic ∇ · B + I ∂B/∂t
                = I (∇ × E + c ∇ · B + ∂B/∂t).

Applying further grade selection operations, rescaling (cancelling all factors of c and I), and a
bit of rearranging, gives

    ∇ · D = ρ
    ∇ × H = J + ∂D/∂t
    ∇ · B = ρm                                                          (3.230)
    ∇ × E = −M − ∂B/∂t,

which are Maxwell's equations, completing the proof.

Exercise 3.10 Maxwell’s equations in media.


The proof above is somewhat unfriendly, as it works backwards from the answer. No motiva-
tion was given for why the particular multivector fields were chosen, nor why grade selection
operations were required. To obtain some insight on why this works, prove theorem 3.27 from
eq. (3.2) directly as follows:
1. Eliminate cross products using ∇ × f = I(∇ ∧ f).

2. Introduce a scalar constant c with dimensions of velocity and redimensionalize any time
derivatives ∂f/∂t = (1/c)∂(cf)/∂t, so that [(1/c)∂/∂t] = [∇].

3. If required, multiply each of Maxwell’s equations by a factor of I, to obtain a scalar and


vector equation for D, H, and a bivector and pseudoscalar equation for E, B.

4. Sum the pairs of equations to form a multivector equation for each of D, H and E, B.

5. Factor the terms in each equation into a product of the spacetime gradient and the respec-
tive fields F, G, and show the result may be simplified by grade selection.

3.12.2 Alternative form.

Theorem 3.28: Grade selection free equations.

Given multivector solutions F′, G′ to

    Je = (∇ + (1/c)∂t) F′
    I Jm = (∇ + (1/c)∂t) G′,

these can be related to solutions F, G of Maxwell's equations given by theorem 3.27 by

    F = ⟨F′⟩_{1,2}
    G = ⟨G′⟩_{1,2},

if

    ⟨(∇ + (1/c)∂t) ⟨F′⟩_0⟩_{0,1} = 0
    ⟨(∇ + (1/c)∂t) ⟨G′⟩_3⟩_{2,3} = 0.

To prove, we select the grade 0,1 and grade 2,3 components from the spacetime gradient equa-
tions of theorem 3.28. For the electric sources, this gives

    Je = ⟨(∇ + (1/c)∂t) F′⟩_{0,1}
       = ⟨(∇ + (1/c)∂t) ⟨F′⟩_{1,2}⟩_{0,1} + ⟨(∇ + (1/c)∂t) ⟨F′⟩_0⟩_{0,1} + ⟨(∇ + (1/c)∂t) ⟨F′⟩_3⟩_{0,1},   (3.231)

however (∇ + (1/c)∂t) ⟨F′⟩_3 has only grade 2,3 components, leaving just

    Je = ⟨(∇ + (1/c)∂t) ⟨F′⟩_{1,2}⟩_{0,1} + ⟨(∇ + (1/c)∂t) ⟨F′⟩_0⟩_{0,1},                                  (3.232)

as claimed. For the magnetic sources, we have

    I Jm = ⟨(∇ + (1/c)∂t) G′⟩_{2,3}
         = ⟨(∇ + (1/c)∂t) ⟨G′⟩_{1,2}⟩_{2,3} + ⟨(∇ + (1/c)∂t) ⟨G′⟩_0⟩_{2,3} + ⟨(∇ + (1/c)∂t) ⟨G′⟩_3⟩_{2,3},  (3.233)

however (∇ + (1/c)∂t) ⟨G′⟩_0 has only grade 0,1 components, leaving just

    I Jm = ⟨(∇ + (1/c)∂t) ⟨G′⟩_{1,2}⟩_{2,3} + ⟨(∇ + (1/c)∂t) ⟨G′⟩_3⟩_{2,3},                                (3.234)

completing the proof.


Theorem 3.28 is probably a more effective geometric algebra form for solution of Maxwell's
equations in matter, as the grade selection free spacetime gradients can be solved for F′, G′
directly using Green's function convolution. However, we have an open question of how to
impose a zero scalar grade constraint on F′ and a zero pseudoscalar grade constraint on G′.

Question: Is the solution as simple as grade selection of the convolution?

    F = ⟨∫ dt′ dV′ G(x − x′, t − t′) Je⟩_{1,2}
    G = ⟨∫ dt′ dV′ G(x − x′, t − t′) I Jm⟩_{1,2},                       (3.235)

where G(x − x′, t − t′) is the Green's function for the spacetime gradient of theorem 2.19, not to
be confused with G = E + IcB.

3.12.3 Gauge like transformations.

Because of the grade selection operations in theorem 3.27, we cannot simply solve for F, G
using the Green’s function for the spacetime gradient. However, we may make a gauge-like
transformation of the fields. Additional exploration is required to determine if such transforma-
tions can be utilized to solve theorem 3.27.

Theorem 3.29: Multivector transformation of the fields.

If F, G are solutions to theorem 3.27, then so are

    F′ = F + ⟨(∇ − (1/c)∂t) Ψ_{2,3}⟩_{1,2}
    G′ = G + ⟨(∇ − (1/c)∂t) Ψ_{0,1}⟩_{1,2},

where Ψ_{2,3} is any multivector with grades 2,3 and Ψ_{0,1} is any multivector with grades 0,1.

To prove theorem 3.29 we need to show that

    ⟨(∇ + (1/c)∂t) F′⟩_{0,1} = ⟨(∇ + (1/c)∂t) F⟩_{0,1}
    ⟨(∇ + (1/c)∂t) G′⟩_{2,3} = ⟨(∇ + (1/c)∂t) G⟩_{2,3}.                 (3.236)

Let's start with F

    ⟨(∇ + (1/c)∂t) F′⟩_{0,1}
      = ⟨(∇ + (1/c)∂t) F⟩_{0,1} + ⟨(∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) Ψ_{2,3}⟩_{1,2}⟩_{0,1}
      = ⟨(∇ + (1/c)∂t) F + □Ψ_{2,3}⟩_{0,1}                              (3.237)
        − ⟨(∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) Ψ_{2,3}⟩_{0,3}⟩_{0,1}.

The second term is killed since Ψ_{2,3} has no grade 0,1 components by definition, so neither
does □Ψ_{2,3}. To see that the last term is zero, note that (∇ − (1/c)∂t) Ψ_{2,3} can have only grades
1,2,3, so ⟨(∇ − (1/c)∂t) Ψ_{2,3}⟩_{0,3} is a trivector. This means that (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) Ψ_{2,3}⟩_{0,3}
has only grades 2,3, which are obliterated by the final grade 0,1 selection operation, leaving just
⟨(∇ + (1/c)∂t) F⟩_{0,1}. For G we have

    ⟨(∇ + (1/c)∂t) G′⟩_{2,3}
      = ⟨(∇ + (1/c)∂t) G⟩_{2,3} + ⟨(∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) Ψ_{0,1}⟩_{1,2}⟩_{2,3}
      = ⟨(∇ + (1/c)∂t) G + □Ψ_{0,1}⟩_{2,3}                              (3.238)
        − ⟨(∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) Ψ_{0,1}⟩_{0,3}⟩_{2,3}.

As before the d'Alembertian term is killed as it has no grades 2,3. To see that the last term is
zero, note that (∇ − (1/c)∂t) Ψ_{0,1} can have only grades 0,1,2, so ⟨(∇ − (1/c)∂t) Ψ_{0,1}⟩_{0,3} is a scalar.
This means that (∇ + (1/c)∂t) ⟨(∇ − (1/c)∂t) Ψ_{0,1}⟩_{0,3} has only grades 0,1, which are obliterated
by the final grade 2,3 selection operation, leaving ⟨(∇ + (1/c)∂t) G⟩_{2,3}, completing the proof.
An additional variation of theorem 3.29 is also possible.

Theorem 3.30: Multivector transformation of the fields.



If F, G are solutions to theorem 3.27, then so are

    F′ = F + (∇ − (1/c)∂t) Ψ_{2,3}
    G′ = G + (∇ − (1/c)∂t) Ψ_{0,1},

where Ψ_{2,3} is any multivector with grades 2,3 and Ψ_{0,1} is any multivector with grades 0,1.

Theorem 3.30 can be proven by direct substitution. For F

    ⟨(∇ + (1/c)∂t) (F + (∇ − (1/c)∂t) Ψ_{2,3})⟩_{0,1} = ⟨(∇ + (1/c)∂t) F + □Ψ_{2,3}⟩_{0,1}   (3.239)
                                                      = ⟨(∇ + (1/c)∂t) F⟩_{0,1},

and for G

    ⟨(∇ + (1/c)∂t) (G + (∇ − (1/c)∂t) Ψ_{0,1})⟩_{2,3} = ⟨(∇ + (1/c)∂t) G + □Ψ_{0,1}⟩_{2,3}   (3.240)
                                                      = ⟨(∇ + (1/c)∂t) G⟩_{2,3},

which completes the proof.

3.12.4 Boundary value conditions.

Theorem 3.31: Boundary value relations.

The difference in the normal and tangential components of the electromagnetic field span-
ning a surface on which there are surface charge or current densities
Je = Jes δ(n), Jm = Jms δ(n) can be related to those surface sources as follows

    ⟨n̂(F2 − F1)⟩_{0,1} = Jes
    ⟨n̂(G2 − G1)⟩_{2,3} = I Jms,

where Fk = Dk + IHk/c, Gk = Ek + IcBk, k = 1, 2 are the fields in the respective media, and
n̂ = n̂2 = −n̂1 is the outwards facing normal in the second medium. In terms of the conventional
constituent fields, these may be written

    n̂ · (D2 − D1) = ρs
    n̂ × (H2 − H1) = Js
    n̂ · (B2 − B1) = ρms
    n̂ × (E2 − E1) = −Ms.

Figure 3.12 illustrates a surface where we seek to find the fields above the surface (region
2), and below the surface (region 1). These fields will be determined by integrating Maxwell's
equation over the pillbox configuration, allowing the height n of that pillbox above or below the
surface to tend to zero, and the area of the pillbox top to also tend to zero.

Figure 3.12: Pillbox integration volume.

We will work with theorem 3.27, Maxwell's equations in media, in their frequency domain
form

    ⟨∇F⟩_{0,1} + jkD = Jes δ(n)
    ⟨∇G⟩_{2,3} + jkIcB = I Jms δ(n),                                    (3.241)

and integrate these over the pillbox volume in the figure. That is

    ∫ dV ⟨∇F⟩_{0,1} + jk ∫ dV D = ∫ dn dA Jes δ(n)
    ∫ dV ⟨∇G⟩_{2,3} + jkIc ∫ dV B = I ∫ dn dA Jms δ(n).                 (3.242)

The gradient integrals can be evaluated with theorem 2.11. Evaluating the delta functions
leaves an area integral on the surface. Additionally, we assume that we are making the pillbox
volume small enough that we can employ the mean value theorem for the D, B integrals

    ∫_∂V dA ⟨n̂F⟩_{0,1} + jk∆A (n1 D̃1 + n2 D̃2) = ∆A Jes
    ∫_∂V dA ⟨n̂G⟩_{2,3} + jkIc∆A (n1 B̃1 + n2 B̃2) = I∆A Jms.             (3.243)

We now let n1, n2 tend to zero, which kills off the D, B contributions, and also kills off the side
wall contributions in the first pillbox surface integral. This leaves

    ⟨n̂2 F2⟩_{0,1} + ⟨n̂1 F1⟩_{0,1} = Jes
    ⟨n̂2 G2⟩_{2,3} + ⟨n̂1 G1⟩_{2,3} = I Jms.                              (3.244)

Inserting n̂ = n̂2 = −n̂1 completes the first part of the proof.
Expanding the grade selection operations, we find

    n̂ · (D2 − D1) = ρs
    I n̂ ∧ (H2/c − H1/c) = −Js/c
    n̂ ∧ (E2 − E1) = −IMs                                               (3.245)
    Ic n̂ · (B2 − B1) = Icρms,

and expansion of the wedges as crosses using eq. (1.58) completes the proof.
In the special case where there are surface charge and current densities along the interface
surface, but the media is uniform (ε1 = ε2, µ1 = µ2), then the field and current relationship has
a particularly simple form [5]

    n̂(F2 − F1) = Js.                                                   (3.246)

Exercise 3.11 Uniform media with currents and densities.

Prove that eq. (3.246) holds when ε1 = ε2, µ1 = µ2.
A DISTRIBUTION THEOREMS.
Theorem A.1: Vector Blade dot and wedge product relations.

Given a k-blade B and a vector a, the dot and wedge products have the following commu-
tation relationships

    B · a = (−1)^{k−1} a · B
    B ∧ a = (−1)^k a ∧ B,                                              (A.1)

and can be expressed as symmetric and antisymmetric sums depending on the grade of the
blade

    a ∧ B = (1/2) (aB + (−1)^k Ba)
    a · B = (1/2) (aB − (−1)^k Ba).                                    (A.2)

To prove these, split the blade into components that intersect with and are disjoint from a as
follows

    B = (1/a) n1 n2 · · · n_{k−1} + m1 m2 · · · mk,                    (A.3)

where the ni are all mutually normal and normal to a, and the mi are all normal to a. The
products of B with a are

    aB = a (1/a) n1 n2 · · · n_{k−1} + a m1 m2 · · · mk
       = n1 n2 · · · n_{k−1} + a m1 m2 · · · mk,                       (A.4)

and

    Ba = (1/a) n1 n2 · · · n_{k−1} a + m1 m2 · · · mk a
       = (−1)^{k−1} n1 n2 · · · n_{k−1} + (−1)^k a m1 m2 · · · mk      (A.5)
       = (−1)^k (−n1 n2 · · · n_{k−1} + a m1 m2 · · · mk),

or

    (−1)^k Ba = −n1 n2 · · · n_{k−1} + a m1 m2 · · · mk.               (A.6)

Respective addition and subtraction of eq. (A.4) and eq. (A.6) gives

    aB + (−1)^k Ba = 2 a m1 m2 · · · mk
                   = 2 a ∧ B,                                          (A.7)

and

    aB − (−1)^k Ba = 2 n1 n2 · · · n_{k−1}
                   = 2 a · B,                                          (A.8)

proving eq. (A.2). Grade selection from eq. (A.6) gives

    (−1)^k B · a = −n1 n2 · · · n_{k−1}
                 = −a · B,                                             (A.9)

and

    (−1)^k B ∧ a = a m1 m2 · · · mk
                 = a ∧ B,                                              (A.10)

which proves eq. (A.1).
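These identities can be spot checked with a brute force geometric product. The following minimal Cl(3,0) implementation (our own helper code, not from the text) multiplies orthonormal basis blades by sorting indices and cancelling repeated factors, then verifies eq. (A.1) and eq. (A.2) for a random vector and 2-blade:

```python
import random

def blade_mul(a, b):
    # product of two orthonormal basis blades in Cl(3,0): returns (sign, blade)
    s, lst = 1, list(a) + list(b)
    for i in range(len(lst)):               # bubble sort; each swap flips sign
        for j in range(len(lst) - 1):
            if lst[j] > lst[j+1]:
                lst[j], lst[j+1] = lst[j+1], lst[j]
                s = -s
    out = []
    for e in lst:                           # e_i e_i = 1
        if out and out[-1] == e:
            out.pop()
        else:
            out.append(e)
    return s, tuple(out)

def gp(x, y):
    # geometric product of multivectors stored as {basis-blade: coeff}
    r = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = blade_mul(ba, bb)
            r[b] = r.get(b, 0.0) + s*ca*cb
    return r

def grade(x, g):
    return {b: c for b, c in x.items() if len(b) == g}

def eq(x, y):
    keys = set(x) | set(y)
    return all(abs(x.get(b, 0.0) - y.get(b, 0.0)) < 1e-12 for b in keys)

random.seed(1)
a = {(i,): random.uniform(-1, 1) for i in (1, 2, 3)}    # a vector
u = {(i,): random.uniform(-1, 1) for i in (1, 2, 3)}
v = {(i,): random.uniform(-1, 1) for i in (1, 2, 3)}
B = grade(gp(u, v), 2)                                  # a 2-blade: u wedge v
k = 2

aB, Ba = gp(a, B), gp(B, a)
keys = set(aB) | set(Ba)
wedge = {b: 0.5*(aB.get(b, 0.0) + (-1)**k * Ba.get(b, 0.0)) for b in keys}
dot = {b: 0.5*(aB.get(b, 0.0) - (-1)**k * Ba.get(b, 0.0)) for b in keys}

assert eq(grade(aB, 3), wedge)                                          # (A.2), wedge part
assert eq(grade(aB, 1), dot)                                            # (A.2), dot part
assert eq(grade(Ba, 1), {b: (-1)**(k-1)*c for b, c in dot.items()})     # (A.1), B.a
assert eq(grade(Ba, 3), {b: (-1)**k*c for b, c in wedge.items()})       # (A.1), B^a
```

Changing the grade of B (for example, using the pseudoscalar grade 3 blade) exercises the sign alternation in (−1)^k.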

Theorem A.2: Vector-trivector dot product.

Given a vector a and a blade b ∧ c ∧ d formed by wedging three vectors, the dot product
of the two can be expanded as bivectors like

    a · (b ∧ c ∧ d) = (b ∧ c ∧ d) · a                                  (A.11)
                    = (a · b)(c ∧ d) − (a · c)(b ∧ d) + (a · d)(b ∧ c).

The proof follows by expansion in coordinates

    a · (b ∧ c ∧ d) = Σ_{i, j,k,l} ai bj ck dl ⟨ei ej ek el⟩_2.        (A.12)

The products within the grade two selection operator can be of either grade two or grade four,
so only the terms where one of i = j, i = k, or i = l contributes. Repeated anticommutation of
the normal unit vectors can put each such pair adjacent, where they square to unity. Those are
respectively

    ⟨ei ei ek el⟩_2 = ek el
    ⟨ei ej ei el⟩_2 = −⟨ej ei ei el⟩_2 = −ej el                        (A.13)
    ⟨ei ej ek ei⟩_2 = −⟨ej ei ek ei⟩_2 = +⟨ej ek ei ei⟩_2 = ej ek.

Substitution back into eq. (1.83) gives

    a · (b ∧ c ∧ d) = Σ_{i, j,k,l} ai bj ck dl ((ei · ej)(ek el) − (ei · ek)(ej el) + (ei · el)(ej ek))   (A.14)
                    = (a · b)(c ∧ d) − (a · c)(b ∧ d) + (a · d)(b ∧ c).
Theorem A.2 is a specific case of the more general identity

Theorem A.3: Vector blade dot product distribution.

A vector dotted with an n-blade distributes as

    x · (y1 ∧ y2 ∧ · · · ∧ yn) = Σ_{i=1}^{n} (−1)^{i−1} (x · yi) (y1 ∧ · · · ∧ y_{i−1} ∧ y_{i+1} ∧ · · · ∧ yn).

This dot product is symmetric (antisymmetric) when the grade of the blade the vector is
dotted with is odd (even).

For a proof of theorem A.3 (valid for all metrics) see [6].

Theorem A.4: Distribution of inner products

Given two blades As, Br with grades subject to s > r > 0, and a vector b, the inner product
distributes according to

    As · (b ∧ Br) = (As · b) · Br.

The proof is straightforward, but also mechanical. Start by expanding the wedge and dot prod-
ucts within a grade selection operator

    As · (b ∧ Br) = ⟨As (b ∧ Br)⟩_{s−(r+1)}
                  = (1/2) ⟨As (bBr + (−1)^r Br b)⟩_{s−(r+1)}.          (A.15)

Solving for Br b in

    2 b · Br = bBr − (−1)^r Br b,                                      (A.16)

we have

    As · (b ∧ Br) = (1/2) ⟨As bBr + As (bBr − 2 b · Br)⟩_{s−(r+1)}     (A.17)
                  = ⟨As bBr⟩_{s−(r+1)} − ⟨As (b · Br)⟩_{s−(r+1)}.

The last term above is zero since we are selecting the s − r − 1 grade element of a multivector
with grades s − r + 1 and s + r − 1, which has no such terms for r > 0. Now we can expand the
As b multivector product, for

    As · (b ∧ Br) = ⟨(As · b + As ∧ b) Br⟩_{s−(r+1)}.                  (A.18)

The latter multivector (with the wedge product factor) above has grades s + 1 − r and s + 1 + r,
so this selection operator finds nothing. This leaves

    As · (b ∧ Br) = ⟨(As · b) · Br + (As · b) ∧ Br⟩_{s−(r+1)}.         (A.19)

The first dot product term has grade s − 1 − r and is selected, whereas the wedge term has
grade s − 1 + r ≠ s − r − 1 (for r > 0).
B PROOF SKETCH FOR THE FUNDAMENTAL THEOREM OF GEOMETRIC CALCULUS.
We start with expanding the hypervolume integral, by separating the geometric product of the
volume element and vector derivative direction vectors into dot and wedge contributions

    ∫_V F d^k x ∂↔ G = Σ_i ∫_V d^k u F Ik x^i ∂↔_i G
                     = Σ_i ∫_V d^k u F (Ik · x^i + Ik ∧ x^i) ∂↔_i G.    (B.1)

Because x^i lies in span{xj}, the wedge product above is zero, leaving

    ∫_V F d^k x ∂↔ G = Σ_i ∫_V d^k u F (Ik · x^i) ∂↔_i G
                     = Σ_i ∫_V d^k u (∂i F) (Ik · x^i) G + Σ_i ∫_V d^k u F (Ik · x^i) (∂i G)   (B.2)
                     = Σ_i ∫_V d^k u ∂i (F (Ik · x^i) G) − ∫_V d^k u F (Σ_i ∂i (Ik · x^i)) G.

The sum in the second integral turns out to be zero, but is somewhat messy to show in general.
The k = 1 case is special, as it is trivial

    ∂1 (x^1 · x1) = ∂1 1 = 0.                                          (B.3)

The k = 2 case is illustrative

    Σ_{i=1}^{2} ∂i (I2 · x^i) = ∂1 ((x1 ∧ x2) · x^1) + ∂2 ((x1 ∧ x2) · x^2)
                              = ∂1 (−x2) + ∂2 x1                       (B.4)
                              = −∂²x/∂u1∂u2 + ∂²x/∂u2∂u1,

which is zero by equality of mixed partials. To show that this sums to zero in general, observe
that cyclic permutation of the wedge factors in the pseudoscalar only changes the sign

    x1 ∧ x2 ∧ · · · ∧ xk = x2 ∧ x3 ∧ · · · ∧ xk ∧ x1 (−1)^{1(k−1)}
                         = x3 ∧ x4 ∧ · · · ∧ xk ∧ x1 ∧ x2 (−1)^{2(k−1)}   (B.5)
                         = x_{i+1} ∧ x_{i+2} ∧ · · · ∧ xk ∧ x1 ∧ x2 ∧ · · · ∧ xi (−1)^{i(k−1)}.

The pseudoscalar dot product Ik · x^i is therefore

    Ik · x^i = (x1 ∧ x2 ∧ · · · ∧ xk) · x^i                            (B.6)
             = x_{i+1} ∧ x_{i+2} ∧ · · · ∧ xk ∧ x1 ∧ x2 ∧ · · · ∧ x_{i−1} (−1)^{i(k−1)},

and the sum is

    Σ_i ∂i (Ik · x^i)
      = (∂_{i,i+1} x) ∧ x_{i+2} ∧ · · · ∧ xk ∧ x1 ∧ x2 ∧ · · · ∧ x_{i−1} (−1)^{i(k−1)}
        + x_{i+1} ∧ (∂_{i,i+2} x) ∧ · · · ∧ xk ∧ x1 ∧ x2 ∧ · · · ∧ x_{i−1} (−1)^{i(k−1)}   (B.7)
        + ...
        + x_{i+1} ∧ x_{i+2} ∧ · · · ∧ xk ∧ x1 ∧ x2 ∧ · · · ∧ (∂_{i,i−1} x) (−1)^{i(k−1)}.

For each i ≠ j there will be one partial ∂_{i,j} x and one partial ∂_{j,i} x in this sum. Consider, for
example, the 1,2 pair, which comes from the i = 1, 2 terms in the sum

    ∂1 (x2 ∧ x3 ∧ · · · ∧ x_{k−1} ∧ xk) (−1)^{1(k−1)} + ∂2 (x3 ∧ x4 ∧ · · · ∧ xk ∧ x1) (−1)^{2(k−1)}
      = (∂_{1,2} x) ∧ x3 ∧ · · · ∧ x_{k−1} ∧ xk (−1)^{1(k−1)} + x3 ∧ x4 ∧ · · · ∧ xk ∧ (∂_{2,1} x) (−1)^{2(k−1)} + · · ·
      = (−1)^{k−1} (x3 ∧ · · · ∧ x_{k−1} ∧ xk) ∧ ((−1)^{k−2} ∂_{1,2} x + (−1)^{k−1} ∂_{2,1} x) + · · ·   (B.8)
      = (x3 ∧ · · · ∧ x_{k−1} ∧ xk) ∧ (−∂²x/∂u1∂u2 + ∂²x/∂u2∂u1) + · · ·.

By equality of mixed partials this difference of 1,2 partials is killed. The same argument holds
for all other indexes, proving that Σ_i ∂i (Ik · x^i) = 0.
Equation (B.2) is left with a sum of perfect differentials, each separately integrable

    ∫_V F d^k x ∂↔ G = Σ_i ∫_{∂V} d^{k−1} u_i ∫_{∆u_i} du_i (∂/∂u_i) (F (Ik · x^i) G)
                     = Σ_i ∫_{∂V} d^{k−1} u_i (F (Ik · x^i) G) |_{∆u_i},   (B.9)

which completes the sketch of the proof.


While much of the theoretical heavy lifting was carried by the reciprocal frame vectors, the
final result does not actually require computing those vectors. When k equals the dimension of
the space, as in R³ volume integrals, the vector derivative ∂ is identical to ∇, in which case
we do not even require the reciprocal frame vectors to express the gradient.
For a full proof of theorem 2.3, additional mathematical subtleties must be considered. Issues
of connectivity of the hypervolumes (and integration theory in general) are covered very nicely
in [18]. For other general issues required for a complete proof, like the triangulation of the
volume and its boundary, please see [13], [6], and [21].
C GREEN'S FUNCTIONS.
c.1 helmholtz operator.

The goal. The Helmholtz equation to solve is

    (∇² + k²) f(x) = u(x).                                             (C.1)

To solve using the Green's function of theorem 2.17, we require

    (∇² + k²) G(x, x′) = δ³(x − x′).                                    (C.2)

Verifying this requires two steps, first considering points x ≠ x′, and then considering an
infinitesimal neighbourhood around x′.

Case I. x ≠ x′. We will absorb the sign associated with the causal and acausal Green's
function variations by writing i = ±j, so that for points x ≠ x′, (i.e. r = ‖x − x′‖ ≠ 0), working
in spherical coordinates, we find

    −4π (∇² + k²) G(x, x′) = (∇² + k²) (e^{ikr}/r)
                           = (1/r²) (d/dr) (r² (ik/r − 1/r²) e^{ikr}) + (k²/r) e^{ikr}
                           = (1/r²) (d/dr) ((rik − 1) e^{ikr}) + (k²/r) e^{ikr}   (C.3)
                           = (1/r²) (ik + (rik − 1) ik) e^{ikr} + (k²/r) e^{ikr}
                           = (1/r²) (−rk²) e^{ikr} + (k²/r) e^{ikr}
                           = 0.
Case II. In the neighbourhood of ‖x − x′‖ < ε. Having shown that we end up with zero ev-
erywhere that x ≠ x′, we are left to consider an infinitesimal neighbourhood of the volume
surrounding the point x in our integral. Following the Coulomb treatment in §2.2 of [19] we use
a spherical volume element centered around x of radius ε, and then convert a divergence to a
surface area to evaluate the integral away from the problematic point.

    ∫ (∇² + k²) G(x, x′) f(x′) dV′ = −(1/4π) ∫_{‖x−x′‖<ε} (∇² + k²) (e^{ik‖x−x′‖}/‖x − x′‖) f(x′) dV′
                                   = −(1/4π) ∫_{‖x″‖<ε} f(x + x″) (∇² + k²) (e^{ik‖x″‖}/‖x″‖) dV″,   (C.4)

where a change of variables x″ = x′ − x, as illustrated in fig. C.1, has been performed.

Figure C.1: Neighbourhood ‖x − x′‖ < ε.

We assume that f(x) is sufficiently continuous and "well behaved" that it can be pulled
out of the integral, replaced with a mean value f(x*) in the integration neighbourhood around
x″ = 0.

    ∫ (∇² + k²) G(x, x′) f(x′) dV′ = lim_{ε→0} (−f(x*)/4π) ∫_{‖x″‖<ε} (∇² + k²) (e^{ik‖x″‖}/‖x″‖) dV″.   (C.5)
The k² term of eq. (C.5) can be evaluated with a spherical coordinate change of variables

    ∫_{‖x″‖<ε} k² (e^{ik‖x″‖}/‖x″‖) dV″ = ∫_{r=0}^{ε} ∫_{θ=0}^{π} ∫_{φ=0}^{2π} k² (e^{ikr}/r) r² dr sin θ dθ dφ
                                        = 4πk² ∫_{r=0}^{ε} r e^{ikr} dr
                                        = 4π ∫_{u=0}^{kε} u e^{iu} du                (C.6)
                                        = 4π (−iu + 1) e^{iu} |_0^{kε}
                                        = 4π ((−ikε + 1) e^{ikε} − 1).

To evaluate the Laplacian term of eq. (C.5), we can make a change of variables for the Lapla-
cian

    ∇² (e^{ik‖x″‖}/‖x″‖) = ∇_{x″}² (e^{ik‖x″‖}/‖x″‖) = ∇_{x″} · (∇_{x″} (e^{ik‖x″‖}/‖x″‖)),   (C.7)

and then employ the divergence theorem

    ∫_{‖x″‖<ε} ∇² (e^{ik‖x″‖}/‖x″‖) dV″ = ∫_{‖x″‖<ε} ∇_{x″} · (∇_{x″} (e^{ik‖x″‖}/‖x″‖)) dV″
                                        = ∫_{∂V} (∇_{x″} (e^{ik‖x″‖}/‖x″‖)) · n̂ dA″,   (C.8)

where ∂V represents the surface of the ‖x″‖ < ε neighbourhood, and n̂ is the unit vector directed
along x″ = x′ − x. To evaluate this surface integral we will require only the radial portion of the
gradient. With r = ‖x″‖, that is

    (∇_{x″} (e^{ik‖x″‖}/‖x″‖)) · n̂ = (n̂ (∂/∂r) (e^{ikr}/r)) · n̂
                                   = (∂/∂r) (e^{ikr}/r)
                                   = (ik/r − 1/r²) e^{ikr}                           (C.9)
                                   = (ikr − 1) e^{ikr}/r².
Using spherical coordinates again, with an area element dA″ = r² sin θ dθ dφ, we obtain

    ∫_{‖x″‖<ε} ∇² (e^{ik‖x″‖}/‖x″‖) dV″ = ∫_{θ=0}^{π} ∫_{φ=0}^{2π} (ikr − 1) (e^{ikr}/r²) r² sin θ dθ dφ |_{r=ε}   (C.10)
                                        = 4π (ikε − 1) e^{ikε}.

Putting everything back together we have

    −(1/4π) ∫ (∇² + k²) (e^{ik‖x−x′‖}/‖x − x′‖) f(x′) dV′
      = lim_{ε→0} −f(x*) ((−ikε + 1) e^{ikε} − 1 + (ikε − 1) e^{ikε})
      = lim_{ε→0} −f(x*) ((−ikε + 1 + ikε − 1) e^{ikε} − 1)             (C.11)
      = lim_{ε→0} f(x*).

Observe the perfect cancellation of all the explicitly ε dependent terms. The mean value point
x* is also ε dependent, but tends to x in the limit, leaving

    f(x) = −(1/4π) ∫ (∇² + k²) (e^{ik‖x−x′‖}/‖x − x′‖) f(x′) dV′.       (C.12)

This proves the delta function property that we claimed the Green's function had.

c.2 delta function derivatives.

The Green's function for the spacetime gradient ends up with terms like

    (d/dr) δ(−r/c + t − t′)
    (d/dt) δ(−r/c + t − t′),                                           (C.13)

where t′ is the integration variable of the test function that the delta function will be applied to.
If these were derivatives with respect to the integration variable, then we could use

    ∫_{−∞}^{∞} ((d/dt′) δ(t′)) φ(t′) dt′ = −φ′(0),                      (C.14)

which follows by integration by parts, and an assumption that φ(t′) is well behaved at the points
at infinity. It is not clear how, if at all, this could be applied to either of eq. (C.13).
Let's go back to square one, and figure out the meaning of these delta functions by their
action on a test function. We wish to compute

    ∫_{−∞}^{∞} ((d/du) δ(au + b − t′)) f(t′) dt′.                       (C.15)

Let's start with a change of variables t″ = au + b − t′, for which we find

    t′ = au + b − t″
    dt″ = −dt′                                                         (C.16)
    d/du = (dt″/du) (d/dt″) = a (d/dt″).

Substitution back into eq. (C.15) gives

    a ∫_{∞}^{−∞} ((d/dt″) δ(t″)) f(au + b − t″) (−dt″) = a ∫_{−∞}^{∞} ((d/dt″) δ(t″)) f(au + b − t″) dt″
                                                       = a δ(t″) f(au + b − t″) |_{−∞}^{∞}
                                                         − a ∫_{−∞}^{∞} δ(t″) (d/dt″) f(au + b − t″) dt″   (C.17)
                                                       = −a (d/dt″) f(au + b − t″) |_{t″=0}
                                                       = a (d f(s)/ds) |_{s=au+b}.

This shows that the action of the derivative of the delta function (with respect to a non-
integration variable parameter u) is

    (d/du) δ(au + b − t′) ∼ a (d/ds) |_{s=au+b}.                        (C.18)
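Equation (C.18) can be sanity checked numerically by standing in a narrow Gaussian for the delta function: differentiating the smoothed integral with respect to u should reproduce a f′(au + b). The parameter values and test function below are arbitrary choices for the check:

```python
import numpy as np

a_, b_, u = 1.7, 0.3, 0.25      # arbitrary parameters of delta(a u + b - t')
f = np.cos                      # test function
sigma = 1e-3                    # Gaussian width standing in for the delta
t = np.linspace(-10, 10, 2_000_001)
dt = t[1] - t[0]

def delta(x):
    return np.exp(-x**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))

def I(u):
    # smoothed version of integral delta(a u + b - t') f(t') dt'
    return np.sum(delta(a_*u + b_ - t)*f(t))*dt

h = 1e-4
numeric = (I(u + h) - I(u - h))/(2*h)     # d/du of the smoothed integral
exact = a_*(-np.sin(a_*u + b_))           # a f'(s) at s = a u + b
assert abs(numeric - exact) < 1e-3
```

The Gaussian smoothing and the central difference each contribute small errors, but both vanish as σ and h tend to zero, consistent with the scaling factor a in eq. (C.18).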
D GA ELECTRODYNAMICS IN THE LITERATURE.
The notation and nomenclature used to express Maxwell’s equation in the GA literature, much
of which has a relativistic focus, has not been standardized. Here is an overview of some of the
variations that will be encountered in readings.

Space Time Algebra (STA). [6] Maxwell's equation is written

    ∇F = J
    F = E + IB
    I = γ0 γ1 γ2 γ3                                                    (D.1)
    J = γµ J^µ = γ0 (ρ − J)
    ∇ = γ^µ ∂µ = γ0 (∂t + ∇).

STA uses a relativistic basis {γµ} and its dual {γ^µ}, for which γ0² = −γk² = 1, k ∈ {1, 2, 3}, and
γ^µ · γν = δ^µ_ν. Spatial vectors are expressed in terms of the Pauli basis σi = γi γ0, which are bivectors
that behave as Euclidean basis vectors (squaring to unity, and all mutually anticommutative). F
is called the electromagnetic field strength (and is not a 1,2 multivector, but a bivector), ∇ is
called the vector derivative operator, ∇ called the three-dimensional vector derivative operator,
and J is called the spacetime current (and is a vector, not a multivector). The physicist's "natural
units" c = ε0 = µ0 = 1 are typically used in STA. The d'Alembertian in STA is □ = ∇² = ∂t² − ∇²,
although the earliest formulation of STA [10] used □ for the vector derivative. Only spatial
vectors are in bold, and all other multivectors are non-bold. STA is inherently relativistic, and
can be used to obtain many of the results in this book more directly. STA can easily be related
to the tensor formulation of electrodynamics.
Maxwell's equations as expressed in theorem 3.1 can be converted to their STA form (in SI
units) by setting ei = γi γ0 and by left multiplying both sides by γ0.

225
226 ga electrodynamics in the literature.

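The STA basis relations quoted above can be checked concretely with a matrix representation. This sketch (an illustration of mine, not from the STA literature itself) uses the Dirac representation of the γµ to confirm the signature γ0² = −γk² = 1 and the Euclidean behaviour of the Pauli bivectors σi = γi γ0:

```python
import numpy as np

I2, Z = np.eye(2), np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation of the STA frame vectors gamma_mu.
g0 = np.block([[I2, Z], [Z, -I2]])
g = [np.block([[Z, sk], [-sk, Z]]) for sk in (s1, s2, s3)]

I4 = np.eye(4)
assert np.allclose(g0 @ g0, I4)                    # gamma_0^2 = 1
assert all(np.allclose(gk @ gk, -I4) for gk in g)  # gamma_k^2 = -1

# Pauli basis sigma_i = gamma_i gamma_0: bivectors with Euclidean behaviour.
sig = [gk @ g0 for gk in g]
for i in range(3):
    assert np.allclose(sig[i] @ sig[i], I4)        # squares to unity
    for j in range(i + 1, 3):
        anticommutator = sig[i] @ sig[j] + sig[j] @ sig[i]
        assert np.allclose(anticommutator, 0)      # mutually anticommute
print("STA basis relations verified")
```

The same representation can be used to spot check any of the STA manipulations above, since matrix multiplication faithfully reproduces the geometric product.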
Algebra of Physical Space (APS). [4] Maxwell's equation is written as
\begin{equation}
\begin{aligned}
\bar{\partial} F &= \frac{1}{\epsilon_0 c} \bar{j} \\
F &= \mathbf{E} + i c \mathbf{B} \\
i &= e_{123} \\
\partial &= \frac{1}{c} \partial_t - \boldsymbol{\nabla} \\
j &= \rho c + \mathbf{j}.
\end{aligned}
\tag{D.2}
\end{equation}
F is called the Faraday, ∂ the gradient, j the current density, and 0,1 multivectors are called paravectors. A Euclidean spatial basis {e1, e2, e3} is used, and e0 = 1 is used as the time-like basis "vector". In APS, where e0 = 1 is not a vector grade object, there is no standard GA dot product for which eµ · eν = δµν with which to express proper length. Instead, APS uses inner products based on grade selection from the multivector z z̄, where z̄ is the Clifford conjugate, an operation that changes the sign of the vector and bivector grades of a multivector z. This conjugation operation is also used to express Lorentz transformations, and is seen in Maxwell's equation, operating on the current density and gradient. The d'Alembertian is written as □ = ∂̄∂ = (1/c²)∂t² − ∇². While APS is only cosmetically different from theorem 3.1, the treatment in [4] is inherently relativistic.
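The role of Clifford conjugation in the APS inner product can be illustrated with the Pauli matrix representation, in which e0 = 1 maps to the 2×2 identity. For a real paravector z = p0 + p, the product z z̄ is a scalar equal to the Minkowski length p0² − p². A minimal sketch (the helper names are mine, not Baylis's):

```python
import numpy as np

# Pauli matrices: the matrix representation of the APS basis {e1, e2, e3},
# with e0 = 1 represented by the 2x2 identity.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def paravector(p0, p):
    return p0 * np.eye(2) + sum(pk * sk for pk, sk in zip(p, s))

def conj(p0, p):
    # Clifford conjugation negates the vector grade of a real paravector.
    return p0 * np.eye(2) - sum(pk * sk for pk, sk in zip(p, s))

rng = np.random.default_rng(42)
p0, p = rng.normal(), rng.normal(size=3)
z, zbar = paravector(p0, p), conj(p0, p)

# z zbar is a scalar multiple of the identity: the Minkowski length p0^2 - p.p
assert np.allclose(z @ zbar, (p0**2 - p @ p) * np.eye(2))
print("paravector length:", p0**2 - p @ p)
```

This is the sense in which APS recovers proper length from grade selection of z z̄ even though e0 is not a vector grade object.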
Jancewicz. [15] Maxwell's equation in linear isotropic media is written as
\begin{equation}
\begin{aligned}
D f + \mathbf{e}\, D \ln \sqrt{\epsilon} + \hat{\mathbf{b}}\, D \ln \sqrt{\mu} &= \tilde{j} \\
D &= \boldsymbol{\nabla} + \sqrt{\epsilon \mu}\, \frac{\partial}{\partial t} \\
f &= \mathbf{e} + \hat{\mathbf{b}} \\
\mathbf{e} &= \sqrt{\epsilon}\, \mathbf{E} \\
\hat{\mathbf{b}} &= \frac{1}{\sqrt{\mu}} I \mathbf{B} \\
I &= e_{123} \\
\tilde{j} &= \frac{1}{\sqrt{\epsilon}} \rho - \sqrt{\mu}\, \mathbf{j}.
\end{aligned}
\tag{D.3}
\end{equation}
Jancewicz works with fields that have been re-dimensionalized to the same units, and uses an overhat bold notation for bivectors (which are sometimes called volutors). D is called the cliffor differential operator, f the electromagnetic cliffor, and j̃ the density of electric sources. In media for which µ, ε are constant in space and time, his Maxwell equation reduces to D f = j̃. The d'Alembertian is written as □ = D*D = ∇² − εµ∂t², where D* = ∇ − √(εµ)∂t. Unlike Baylis, who uses a "paravector" approach extensively for his relativistic treatment, this book ends with a relativistic treatment using STA.
E    ENERGY MOMENTUM TENSOR FOR VECTOR PARAMETERS.
Did you cry "cheater" because of the reliance on brute force computer assisted symbolic algebra to find the T(a) relations of eq. (3.108)? Or did you try this as a problem, and need some assistance? If so, here is an expansion of the energy momentum tensor for vector parameters. We start with
\begin{equation}
F e_i = \left( \mathbf{E} + I \eta \mathbf{H} \right) e_i = E_i + \mathbf{E} \wedge e_i + I \eta \left( H_i + \mathbf{H} \wedge e_i \right).
\tag{E.1}
\end{equation}
To show that the scalar grades are related to the Poynting vector as ⟨T(ei)⟩ = −S · ei/c, we need the scalar grades of the tensor for vector parameters
\begin{equation}
\begin{aligned}
\frac{\epsilon}{2} \left\langle F e_i F^\dagger \right\rangle
&= \frac{\epsilon}{2} \left\langle \left( E_i + \mathbf{E} \wedge e_i + I \eta H_i + I \eta \left( \mathbf{H} \wedge e_i \right) \right) \left( \mathbf{E} - I \eta \mathbf{H} \right) \right\rangle \\
&= \frac{\epsilon}{2} \left( I^2 \eta \left( \mathbf{H} \times e_i \right) \cdot \mathbf{E} - \eta \left( \mathbf{E} \times e_i \right) \cdot I^2 \mathbf{H} \right) \\
&= - \left( \mathbf{E} \times \mathbf{H} \right) \cdot e_i / c,
\end{aligned}
\tag{E.2}
\end{equation}
which is the Poynting relationship that was asserted. For the vector grades we have
\begin{equation}
\begin{aligned}
\frac{\epsilon}{2} \left\langle F e_i F^\dagger \right\rangle_1
&= \frac{\epsilon}{2} \left\langle \left( E_i + \mathbf{E} \wedge e_i + I \eta H_i + I \eta \left( \mathbf{H} \wedge e_i \right) \right) \left( \mathbf{E} - I \eta \mathbf{H} \right) \right\rangle_1 \\
&= \frac{\epsilon}{2} \left( E_i \mathbf{E} + \left( \mathbf{E} \wedge e_i \right) \cdot \mathbf{E} + \eta^2 \left( \mathbf{H} \wedge e_i \right) \cdot \mathbf{H} + \eta^2 H_i \mathbf{H} \right) \\
&= \frac{\epsilon}{2} \left( E_i \mathbf{E} + \mathbf{E} E_i - \mathbf{E}^2 e_i + \eta^2 \mathbf{H} H_i - \eta^2 \mathbf{H}^2 e_i + \eta^2 H_i \mathbf{H} \right) \\
&= \frac{\epsilon}{2} \left( 2 E_i \mathbf{E} - \mathbf{E}^2 e_i + \eta^2 \left( 2 H_i \mathbf{H} - \mathbf{H}^2 e_i \right) \right).
\end{aligned}
\tag{E.3}
\end{equation}
Assembling all the results, we have
\begin{equation}
\left\langle T(e_i) \right\rangle_1 \cdot e_j = \frac{\epsilon}{2} \left( 2 E_i E_j - \mathbf{E}^2 \delta_{ij} + \eta^2 \left( 2 H_i H_j - \mathbf{H}^2 \delta_{ij} \right) \right),
\tag{E.4}
\end{equation}
which proves that ⟨T(ei)⟩1 · ej = −Θij as stated in definition 3.5.
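If you would rather check these identities numerically than wrestle with the symbolic algebra, the Pauli matrix representation of the R³ geometric algebra makes a compact test: reversion maps to the Hermitian conjugate, and grade selection to real traces. This sketch (my own check, independent of the Mathematica notebooks of appendix F, and assuming T(a) = (ε/2) F a F† as in the computations above) verifies eq. (E.2) and eq. (E.4) for random fields:

```python
import numpy as np

# Pauli matrix representation of the R^3 GA: e_i -> sigma_i, I -> 1j,
# reversion -> Hermitian conjugate, <M> = Re tr(M) / 2.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    return sum(ak * sk for ak, sk in zip(a, s))

def scalar(M):
    return np.trace(M).real / 2

eps, mu = 2.0, 3.0              # arbitrary constitutive constants
eta = np.sqrt(mu / eps)
c = 1 / np.sqrt(eps * mu)

rng = np.random.default_rng(1)
E, H = rng.normal(size=3), rng.normal(size=3)
F = vec(E) + 1j * eta * vec(H)  # F = E + I eta H

for i in range(3):
    T = (eps / 2) * F @ s[i] @ F.conj().T        # T(e_i) = (eps/2) F e_i F^dagger
    # eq. (E.2): scalar grade is -(E x H) . e_i / c
    assert np.isclose(scalar(T), -np.cross(E, H)[i] / c)
    for j in range(3):
        # eq. (E.4): vector grade components recover the Maxwell stress tensor
        expected = (eps / 2) * (2 * E[i] * E[j] - E @ E * (i == j)
                                + eta**2 * (2 * H[i] * H[j] - H @ H * (i == j)))
        assert np.isclose(scalar(T @ s[j]), expected)
print("energy momentum tensor relations verified")
```

Here scalar(M) extracts the scalar grade, and scalar(M σj) the vector components, since tr σi = 0 and tr(σi σj) = 2δij.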
F    MATHEMATICA NOTEBOOKS.
These Mathematica notebooks, some just trivial ones used to generate figures, others more
elaborate, and perhaps some even polished, can be found in
https://github.com/peeterjoot/mathematica/tree/master/.
The free Wolfram CDF player is capable of read-only viewing of these notebooks to some extent.
• Dec 18, 2016 GAelectrodynamics/projectionAndRejectionDiagram.nb
  Figure illustrating projection and rejection.

• Dec 19, 2016 GAelectrodynamics/oneParameterDifferentialFig.nb
  One parameter differential figure.

• Dec 20, 2016 GAelectrodynamics/twoParameterDifferentialFig.nb
  Two parameter differential figure.

• Sep 26, 2017 GAelectrodynamics/parallelograms.nb
  Overlapping parallelograms with fixed areas. Figure: parrallelogramsFig1.eps.

• Sep 26, 2017 GAelectrodynamics/parallelogram.nb
  This is the notebook for two rotation figures. One is for a rotation of a vector lying in a plane (but that plane is viewed from a 3D vantage point), and the other is for a rotation with respect to a plane through an angle, applied to a vector out of the plane.

• Oct 23, 2017 GAelectrodynamics/circularLineChargeDensity.nb
  Some messy evaluation of integrals that end up expressed in terms of elliptic E() and F() functions. Was associated with the evaluation of the charge of a circular segment of line charge.

• Oct 23, 2017 GAelectrodynamics/LineChargeIntegralsAndFigure.nb
  Integrals for the line charge problem, including some of the special angle cases that seem to require separate evaluation. Also has a plot, linechargeFig1.eps, and some plots (not in the book) of the integrands.
• Nov 2, 2017 GAelectrodynamics/lineChargeArcFigure.nb
  Figure for a circular arc of line charge. One arc of charge on the x-y plane at a fixed radius. Field point, azimuthal angles for the range of the line charge.

• Nov 19, 2017 GAelectrodynamics/zcapPotential.nb
  Figure for the (magnetic) vector potential: vectorPotentialFig1.eps.

• Nov 19, 2017 GAelectrodynamics/gaToroid.nb
  Cool toroidal segment figure for the book: toroidFig1.eps.

• Nov 19, 2017 GAelectrodynamics/pillboxIntegrationVolumeFig1.nb
  This is the figure for the pillbox integration volume that was used in the boundary value analysis of Maxwell's equations.

• Nov 23, 2017 GAelectrodynamics/circularBasisCoordinatesInTermsOfJonesVector.nb
  Verify a hand calculation from polarization.tex (also set as a problem). Got my hand calculation wrong, so try this with Mathematica instead. Has some additional checking of the solution.

• Nov 23, 2017 GAelectrodynamics/ellipticalPolarizationFig1.nb
  A plot of a rotated ellipse showing the major and minor axes, and the angle of rotation. This was related to an elliptically polarized plane wave.

• Nov 23, 2017 GAelectrodynamics/linearPolarizationFig1.nb
  Figure (linearPolarizationFig1.eps) showing the electric and magnetic field directions for a linearly polarized field propagating at a fixed angle to the horizontal in the transverse plane.

• Nov 24, 2017 GAelectrodynamics/pseudoscalarPolarizedRelationtoJones.nb
  Jones vector related calculations for the GA representation of a plane wave.

• Nov 28, 2017 GAelectrodynamics/vectorOrientationAndAdditionFigures.nb
  Vector addition and vector (and scalar) sign figures: VectorsWithOppositeOrientationFig1.eps, vectorAdditionFig1.eps, scalarOrientationFig1.eps.

• Dec 3, 2017 GAelectrodynamics/ellipticalPolarizationEnergyMomentumSimplification.nb
  Hyperbolic cosine and arctan double angle reductions. Probably for the cosh parameterization of an ellipse.
• Dec 3, 2017 GAelectrodynamics/amperesLawMultiplePoints.nb
  Figure for amperesLawBetweenTwoCurrents.eps. Circles surrounding two currents, with respective phicap vectors around those sources.

• Dec 5, 2017 GAelectrodynamics/stressEnergyTensorValues.nb
  Uses my GA30.m package to compute the values of the energy momentum tensor multivectors, and relate those to the conventional tensor description of the same. Calculates the expansion of the adjoint of the energy momentum tensor, and also the expansion of some of the adjoint energy momentum tensor terms for the Poynting vector.

• Dec 13, 2017 GAelectrodynamics/planeCurrentIntegrals.nb
  Simple integrals for plane current distributions.

• Dec 14, 2017 GAelectrodynamics/ringIntegral2.nb
  Elliptic integrals for charge and current distribution on a ring.

• Dec 14, 2017 GAelectrodynamics/currentRingIntegrals.nb
  Some integrals related to circular current/charge distributions. A Manipulate that plots the magnitude of one of the integrands. A plot (chargeAndCurrentOnRingFig1.eps) that shows the geometry of the current ring and the coordinate system used to solve or express the problem.

• Dec 17, 2017 GAelectrodynamics/cylinderFieldIntegrals.nb
  Symbolic evaluation of integrals for a cylindrical field distribution of finite and infinite length.

• Dec 17, 2017 GAelectrodynamics/ringFieldEllipticIntegralEquivalents.nb
  This notebook has transformation techniques to translate a couple of circular charge distribution integrals into their elliptic integral form. It also has plots of some of the electric and magnetic fields obtained from solving one such problem.

• Jan 24, 2018 GAelectrodynamics/stressTensorSymmetricDemo.nb
  A CliffordBasic calculation of the strain portion of the stress tensor, and an explicit demonstration that it is symmetric.

• Jan 28, 2018 GAelectrodynamics/exponentialFormOfVectorProductInR3.nb
  A somewhat random seeming complex exponential evaluation using CliffordBasic, and an R3 bivector argument.
• Jan 28, 2018 GAelectrodynamics/exponentialFormOfVectorProductInR2.nb
  Some R2 complex exponential calculations using CliffordBasic.

• Jan 28, 2018 GAelectrodynamics/compositionOfExponentials.nb
  This is a figure that has an equilateral triangle in the corner of the first quadrant. This was used to illustrate that the product of two complex exponentials is another complex exponential (in R3), but the bivector argument for that resulting exponential describes (in general) a different plane.

• Jan 29, 2018 GAelectrodynamics/radialVectorCylindricalFig1.nb
  Figure: radialVectorCylindricalFig1.eps. The notebook uses a dynamic (Manipulate) to generate the figure at a desirable angle and radius.

• Feb 1, 2018 GAelectrodynamics/dualityInR3Fig1.nb
  Figure (dualityInR3Fig1.eps) showing the R3 dual plane to a vector graphically. The scaling of the dual plane was only for illustration purposes and did not match the length of the vector.

• Feb 3, 2018 GAelectrodynamics/factorizationProblemVerification.nb
  Verify answers for normal factorization problem 2.16.

• Feb 3, 2018 GAelectrodynamics/bivectorFactorizationFigures.nb
  Figures that illustrate two rectangular factorizations of a bivector in R3.

• Feb 5, 2018 GAelectrodynamics/neighbourhoodFig1.nb
  A nice little figure illustrating an infinitesimal neighbourhood around a given point. This was used as a figure in the somewhat tedious verification of a Green's function, done in one of the appendixes.

• Feb 14, 2018 GAelectrodynamics/FourierTransformMathematicaParamsExploration.nb
  The purpose of this notebook is to show (i.e. decode) visually the meaning of the various Mathematica FourierTransform FourierParameters options available. Shows all the conventions (modern physics, pure math, signal processing, classical physics).

• Feb 20, 2018 GAelectrodynamics/ellipticalContoursFigures.nb
  Hyperbolic parameterization of an ellipse, and contours for the associated curvilinear coordinates. ellipticalContoursFig1.eps, and ellipticalContoursFig1.eps figures.
• Feb 22, 2018 GAelectrodynamics/2dmanifoldPlot.nb
  Figure: 2dmanifoldFig1.eps. CliffordBasic calculation of the basis elements above and the area element. Same calculation using my GA30.m package. Generation of mmacell text for the book showing the input and output cells for the CliffordBasic calculation.

• Feb 24, 2018 GAelectrodynamics/reciprocalBasicCalculationFromFigure.nb
  Reciprocal basis computation with conventional vector algebra. Same calculation using bivectors. Display the cells for the book latex.

• Feb 24, 2018 GAelectrodynamics/linearSystemR4.nb
  A CliffordBasic solution to an R4 linear system a x + b y = c, using wedge products to solve. Also includes mmacell output to embed the solution in the book as Mathematica input and output.

• Feb 24, 2018 GAelectrodynamics/reflectionFigureGAcalculation.nb
  CliffordBasic calculations for the Figure 1.20 reflection (reflectionFig1.eps), but not the figure itself. Also has mmacell output for the input and output cells for this calculation.

• Feb 24, 2018 GAelectrodynamics/curvilinearPolarFig1.nb
  Plot (curvilinearPolarFig1.eps) that shows a 2d vector in polar coordinates, the radial vector, and the angle relative to the horizon.

• Feb 25, 2018 GAelectrodynamics/sphericalPolar.nb
  Spherical polar basis and volume element. Calculation of the curvilinear basis elements done with conventional vector algebra, and with CliffordBasic. Also includes mmacell output for the book.

• Feb 28, 2018 GAelectrodynamics/orientedAreas.nb
  Bivector square and parallelogram figures. Figures for 90 degree rotations. Figure for line intersection. Figure for vector addition, showing scaled multiples of orthonormal basis elements.

• Feb 28, 2018 GAelectrodynamics/unitBivectorsFig.nb
  Unit bivector figures in R3: unitBivectorsFig1.eps, unitBivectorsFig2.eps.

• Feb 28, 2018 GAelectrodynamics/unitBivectorAreaRepresentationsFig.nb
  A figure that shows different shape representations of unit bivectors in R2. Includes parallelogram, square, circle and ellipse representations. Also includes an inscribed arc to show the orientation of the bivectors. That was done using Arrow in combination with BSplineCurve, where the points on the curve come from evaluating CirclePoints.
• Feb 28, 2018 GAelectrodynamics/circularBivectorsIn3DFig1.nb
  This is the notebook for a few bivector related illustrations. The first is two circular representations of a bivector viewed from a 3D vantage point. Another is a bivector addition figure, with two bivectors summed in 3D. That figure was confusing (but cool), and has been left out of the book. The last figure separates the space between those bivectors summed in the second figure, showing the summands and the sum all distinct. The current draft of the book includes this figure, but it is still a bit confusing.

• Apr 2, 2018 GAelectrodynamics/polarizationCircular.nb
  A hand calculation seemed to show that I had the wrong expressions for alphaL, alphaR in my polarization notes. Here's a check of the correction of those expressions.
INDEX

eij, eijk, · · · , 14
□, 120
R^N, 2
0-vector, 6
1-vector, 6
2-vector, 7
3-vector, 10
anticommutation, 17
antisymmetric sum, 56
area element, 96
basis, 3
bivector, 7
blade, 34
boundary values, 208
circular polarization, 184
colinear vectors
    wedge, 33
commutation, 16, 53
complex exponential, 21
complex imaginary, 18, 35
complex plane, 148
complex power, 176
conjugation, 53
convolution, 123
Cramer's rule, 60
cross product, 31
curvilinear coordinates, 73, 77, 80, 85
delta function, 123
determinant
    wedge product, 31
differential form, 88, 93, 96, 103
dimension, 3
divergence theorem, 108
dot product, 3, 28, 42
dual, 26
electric charge density, 129
electric current density, 129
energy density, 160
energy flux, 160
energy momentum tensor, 161
Euclidean space, 5
Euler's formula, 21
far field, 199
Fourier transform, 109
frequency domain, 109
fundamental theorem of geometric calculus, 91
gauge transformation, 196
grade, 12, 34
grade selection, 14, 28
gradient
    spherical, 87
Gradient of the coordinates., 83
Green's function, 111, 114
    Helmholtz, 117
    Laplacian, 117
Helmholtz
    Green's function, 117
Helmholtz operator, 114
Helmholtz's theorem, 122
Jones vector, 182, 188
k-vector, 12
Laplacian, 123
    Green's function, 117
left circular polarization, 184
length, 4
line charge, 147, 151
linear combination, 2
linear dependence, 2
linear independence, 3
linear system, 58
Lorenz gauge, 197
magnetic charge density, 129
magnetic current density, 129
Mathematica, 229
Maxwell stress tensor, 161
Maxwell's equation, 132
momentum density, 160
multivector, 12
multivector dot product, 37
multivector space, 12
multivector wedge product, 43
normal, 5
oriented volume element, 77
orthonormal, 5
parallelogram, 53
plane wave, 182
polar coordinates, 80
polar representation, 30
polarization, 182
Poynting vector, 160
projection, 45
pseudoscalar, 15, 19, 20, 35
reciprocal basis
    polar, 83
reciprocal frame, 65, 72, 73
reflection, 57
rejection, 45
reverse, 34
right circular polarization, 184
rotation, 20, 55
scalar, 6
scalar selection, 42
span, 3
spherical coordinates, 85, 90, 151
standard basis, 5
subspace, 3
symmetric sum, 56
tangent space, 77
time harmonic, 109
toroid, 88
trivector, 10
unit pseudoscalar, 15
unit vector, 5
vector, 6
vector derivative, 79
vector inverse, 49
vector product, 27
vector space, 1
volume element, 77, 103
volume parameterization, 103
wedge factorization, 50
wedge product, 28
    linear solution, 58
BIBLIOGRAPHY
[1] Rafal Ablamowicz and Garret Sobczyk. Lectures on Clifford (geometric) algebras and applications, chapter Introduction to Clifford Algebras. Springer, 2004. (Cited on page 50.)
[2] Constantine A Balanis. Advanced engineering electromagnetics. Wiley New York, 1989.
(Cited on page 185.)
[3] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd
edition, 2005. (Cited on page 194.)
[4] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer
Science & Business Media, 2004. (Cited on pages 36 and 226.)
[5] James M Chappell, Samuel P Drake, Cameron L Seidel, Lachlan J Gunn, and Derek Abbott. Geometric algebra for electrical and electronic engineers. Proceedings of the IEEE, 102(9), 2014. (Cited on page 210.)
[6] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University
Press New York, Cambridge, UK, 1st edition, 2003. (Cited on pages 114, 162, 181, 213,
217, and 225.)
[7] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan
Kaufmann, San Francisco, 2007. (Cited on page 38.)
[8] David Jeffrey Griffiths. Introduction to electrodynamics. Prentice Hall, Upper Saddle River, NJ, 3rd edition, 1999. (Cited on pages 159, 162, and 165.)
[9] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers,
1999. (Cited on pages 181 and 185.)
[10] David Hestenes. Space-time algebra, volume 1. Springer, 1966. (Cited on page 225.)
[11] David Hestenes. Proper dynamics of a rigid point particle. Journal of Mathematical
Physics, 15(10):1778–1786, 1974. (Cited on page 181.)
[12] David Hestenes. Proper particle mechanics. Journal of Mathematical Physics, 15(10):
1768–1777, 1974. (Cited on page 181.)
[13] David Hestenes, Garret Sobczyk, and James S Marsh. Clifford Algebra to Geometric Calculus. A Unified Language for Mathematics and Physics. AAPT, 1985. (Cited on page 217.)
[14] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975. (Cited
on pages 40, 121, 162, and 165.)

[15] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scien-
tific, 1988. (Cited on page 226.)

[16] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics, chapter Appendix I. World Scientific, 1988. (Cited on page 142.)

[17] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann,
1980. ISBN 0750627689. (Cited on pages 132 and 162.)

[18] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012. (Cited on pages 80 and 217.)

[19] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987. (Cited on page 219.)

[20] J Schwinger, LL DeRaad Jr, KA Milton, and W-Y Tsai. Classical Electrodynamics. Perseus, Reading, 1998. (Cited on pages 117 and 121.)

[21] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221–231, 2011. URL http://arxiv.org/abs/0809.4526. (Cited on page 217.)

[22] Bartel Leendert Van Der Waerden, E. Artin, and E. Noether. Modern algebra. Frederick Ungar, 1943. (Cited on page 13.)