
Chapter 2

N-dimensional geometric algebra

In N-dimensional space there are N linearly independent vectors. In other words, any vector a in N-dimensional space can be expressed as a linear combination of N linearly independent vectors. If the space is Euclidean, the square of any vector a in the space is positive, and we can always expand a in terms of N mutually orthonormal basis vectors {u1, u2, ..., uN}. However, the algebraic rules for adding and multiplying vectors are independent of the signature of the vectors, i.e. they do not depend on whether a² is a positive or negative scalar. Thus we can relax the requirement that a² > 0, and include vectors with negative or zero squares as well. The sign of the square (plus, minus, or zero) of a vector is commonly referred to as the signature of the vector.
We denote the geometric algebra for a space which includes p linearly indepen-
dent vectors with positive (> 0) square and q linearly independent vectors with
negative square as G(p, q). The three-dimensional Euclidean space E(3) can be
denoted using this notation as G(3, 0) and the five-dimensional Minkowski space
M(5) can be written as G(4, 1). In terms of mutually orthogonal basis vectors
{u1 , u2 , ..., uN }, there are p basis vectors {u1 , u2 , ..., up } so that ηi = ui ·ui = 1 for
i ≤ p, and q = N −p basis vectors {up+1 , up+2 , ..., uN } so that ηi = ui ·ui = −1 for
i > p. Often, we denote the basis vectors with negative signature by an overbar as
{up+1 = ū1 , up+2 = ū2 , ..., uN = ūq }. The N -dimensional space G(p, q) spanned
by the orthonormal vectors {u1 , u2 , ..., uN } (fulfilling ui · uj = ηi δij where δij is
the Kronecker delta, which is one if i = j and zero otherwise) is characterized by the condition u1 ∧ u2 ∧ ... ∧ uN ∧ a = 0 for any vector a in the space. All the N-blades are directly proportional to the unit pseudoscalar IN̄ = u1 ∧ u2 ∧ ... ∧ uN.¹


In spaces of odd dimensions, IN̄ commutes with all the multivectors. In spaces of
even dimensions, IN̄ anticommutes with all odd-grade multivectors and commutes
with all even-grade multivectors. This can be summarized by

IN̄ Ak̄ = (−1)^{k(N−1)} Ak̄ IN̄    (2.1)

Multiplying a k-blade Ak̄ (k ≤ N) by IN̄ produces an (N − k)-blade AN−k = IN̄ Ak̄. The product of the unit pseudoscalar with itself produces either plus or minus one, i.e.,

IN̄² = (−1)^{N(N−1)/2} (−1)^q    (2.2)
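As a quick check of Eq. (2.2): in E(3) = G(3, 0) we have N = 3 and q = 0, so I3̄² = (−1)^{3·2/2} = −1, in agreement with i² = −1; in the five-dimensional Minkowski space G(4, 1), N = 5 and q = 1, so I5̄² = (−1)^{10} (−1)^{1} = −1 as well.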

Any two blades Ak̄ and Bl̄ with k + l ≤ N fulfill

Ak̄c(Bl̄ IN̄) = (Ak̄ ∧ Bl̄) IN̄    (2.3)

¹ The unit pseudoscalar is printed in boldface to emphasize that it is an N-blade. In E(3), we denote I3̄ = i, as usual.

In addition to vectors with positive and negative signature, we allow the existence of null-vectors, i.e., vectors which square to zero. The reader should be aware that there are different kinds of null-vectors. For example, by multiplying any finite vector a, which belongs to G(p, q), by a nilpotent infinitesimal scalar ε (ε² = 0), we obtain a null-vector of the form

εa    (2.4)

Since any null-vector of the form εa can be written as εa = Σ_i ε [(a · ui) ui − (a · ūi) ūi] using the mutually orthogonal basis vectors ui with positive signature and ūi with negative signature (i.e., ui · uj = −ūi · ūj = δij and ui · ūj = 0), we can still argue that the underlying geometric algebra has the structure G(p, q).
If the dot product between two null-vectors is a finite scalar differing from
zero, the null-vectors can not be obtained by multiplying any finite vectors by a
nilpotent infinitesimal (why not?). Consider now the two linearly independent
null-vectors 01 and 02, which obey 0i² = 0 but 01 · 02 = 2. Any vector a = a(1) 01 + a(2) 02 which lies in the plane 01 ∧ 02 (a(1) and a(2) are real-valued scalars) can be written as

a = Σ_{i=1}^{2} (0(i) · a) 0i    (2.5)

where {0(1) = 02/2, 0(2) = 01/2} is the reciprocal basis associated with the basis {01, 02}. Since a² = 4 a(1) a(2), it is evident that the vector a need not be null: a² can be any real scalar. Now we can express

01 = e + ē (2.6)
02 = e − ē (2.7)

where e · ē = 0 and e² = 1 = −ē². Thus in some sense the mutually orthogonal but oppositely signed vectors e and ē span the space where the null-vectors 01 and 02 reside. They in turn can be constructed from the null basis as
e = (01 + 02)/2 = 01/2 + 0(1)    (2.8)
ē = (01 − 02)/2 = −02/2 + 0(2)    (2.9)
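The signature bookkeeping above is easy to verify numerically. The following sketch (plain numpy, with the metric diag(1, −1) representing e² = 1 and ē² = −1; the variable names are mine) checks that 01 = e + ē and 02 = e − ē are null and that 01 · 02 = 2:

    import numpy as np

    eta = np.diag([1.0, -1.0])           # metric of G(1,1): e.e = 1, ebar.ebar = -1
    dot = lambda x, y: x @ eta @ y       # indefinite dot product

    e    = np.array([1.0, 0.0])
    ebar = np.array([0.0, 1.0])
    o1 = e + ebar                        # 0_1 = e + ebar
    o2 = e - ebar                        # 0_2 = e - ebar

    print(dot(o1, o1), dot(o2, o2))      # 0.0 0.0 : both are null vectors
    print(dot(o1, o2))                   # 2.0     : as required by Eqs. (2.6)-(2.7)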
Consider now a set of null-vectors {01, 02, ..., 0r}, for which the real-valued inner products 0i · 0j = dij differ from zero for i ≠ j (and naturally dii = 0). Generally, if

(01 ∧ 02 ∧ ... ∧ 0r) · (0r ∧ 0r−1 ∧ ... ∧ 01) = d12...r    (2.10)

differs from zero, the vectors {01, 02, ..., 0r} are linearly independent. In that case we can modify our previous approach and construct a standard orthonormal basis

ei = (1/2) 0i + 0(i)    (2.11)
ēi = −(1/2) 0i + 0(i)    (2.12)
using the reciprocal basis {0(1), 0(2), ..., 0(r)} defined via 0i · 0(j) = δij and 0(i) · 0(i) = 0. The existence of the reciprocal basis is guaranteed by Eq. (2.10), and in the next section we present an explicit way of constructing it. Evidently now ei · ēj = 0 and ei · ej = δij = −ēi · ēj, so the vectors {e1, e2, ..., er, ē1, ē2, ..., ēr} are mutually orthogonal and can be (at least formally) considered to span a 2r-dimensional space with the pseudoscalar I2r = e1 ∧ e2 ∧ ... ∧ er ∧ ē1 ∧ ē2 ∧ ... ∧ ēr, whose square is, by Eq. (2.2), I2r² = (−1)^{r(2r−1)} (−1)^r = +1. The problem can be reversed and the space
characterized by the condition in Eq. (2.10) can be regarded as being embedded
in the space characterized by the pseudoscalar I2r . Then we can write

0i = ei + ēi (2.13)

so we see that the space spanned by the r null-vectors {01, 02, ..., 0r} can be regarded as being embedded in a 2r-dimensional space spanned by r positively signed basis vectors {e1, e2, ..., er} and r negatively signed basis vectors {ē1, ē2, ..., ēr}. Hence we can claim that the underlying geometric algebra has the structure G(r, r).

2.1 Outer products


Definition 3 (Outer product) The outer product between a k-blade Ak̄ = a1 ∧ a2 ∧ ... ∧ ak and an l-blade Bl̄ = b1 ∧ b2 ∧ ... ∧ bl is the grade-(k + l) part of the geometric product Ak̄ Bl̄, i.e.

Ak̄ ∧ Bl̄ = ⟨Ak̄ Bl̄⟩k+l    (2.14)

where ⟨Ak̄ Bl̄⟩m̄ denotes the grade-m part of Ak̄ Bl̄.

Because the outer product of any two vectors is antisymmetric, the outer
product Bl̄ ∧ Ak̄ differs from Ak̄ ∧ Bl̄ at most by sign, i.e.

Ak̄ ∧ Bl̄ = (−1)^{kl} Bl̄ ∧ Ak̄    (2.15)

The outer product of two blades is associative with respect to all its vector factors,
i.e. (a1 ∧ a2 ∧ . . . ∧ ak ) ∧ (b1 ∧ b2 ∧ . . . ∧ bl ) = (a1 ∧ a2 ) ∧ . . . ∧ ak ∧ b1 ∧ b2 ∧
. . . ∧ bl = ... .

2.1.1 Reshaping blades through Schmidt orthogonalization
Because a ∧ a = 0, any blade Ak̄ = a1 ∧ a2 ∧ ... ∧ ak can be rewritten using mutually orthogonal vectors {a′1, a′2, ..., a′k} as

Ak̄ = a′1 a′2 ... a′k    (2.16)



If a′i² ≠ 0 for all i = 1, 2, ..., k, the vectors are obtained as

a′i+1 = ai+1 − Σ_{j=1}^{i} ((a′j)⁻¹ · ai+1) a′j    (2.17)

where it is understood that a′1 = a1. The vectors {a′1, a′2, ..., a′k} are orthogonal, i.e. a′i · a′j = 0 for i ≠ j, because the vector a′i+1 is created from ai+1 by subtracting its components along the previous vectors {a′1, a′2, ..., a′i}. This is known as the Schmidt orthogonalization procedure.
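Equation (2.17) is an algorithm, and a direct transcription is short. The sketch below (plain numpy, Euclidean signature, so (a′j)⁻¹ = a′j/a′j²; the function name is my own) orthogonalizes a list of vectors and checks the result:

    import numpy as np

    def schmidt(vectors):
        """Orthogonalize vectors following Eq. (2.17): subtract from each new
        vector its components along the already orthogonalized ones."""
        primed = []
        for a in vectors:
            ap = a.astype(float).copy()
            for b in primed:
                ap -= (np.dot(b, a) / np.dot(b, b)) * b
            primed.append(ap)
        return primed

    a1, a2, a3 = np.array([1., 1., 0.]), np.array([0., 1., 1.]), np.array([1., 0., 2.])
    p = schmidt([a1, a2, a3])
    print(np.round([[np.dot(x, y) for y in p] for x in p], 10))  # off-diagonal terms vanish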

2.2 Contractions
The contraction formulas can be condensed into two definitions:

Definition 4 (Left contraction) The left contraction (or contraction onto) of two arbitrary multivectors A and B is given by

AcB = Σ_{k,l} ⟨⟨A⟩k̄ ⟨B⟩l̄⟩l−k    (2.18)

Definition 5 (Right contraction) The right contraction (or contraction by) is given by

AbB = Σ_{k,l} ⟨⟨A⟩k̄ ⟨B⟩l̄⟩k−l    (2.19)

If A is a pure k-blade Ak̄, and B is a pure l-blade Bl̄, the contractions are given by

Ak̄cBl̄ = ⟨Ak̄ Bl̄⟩l−k    (2.20)

and

Ak̄bBl̄ = ⟨Ak̄ Bl̄⟩k−l    (2.21)

Because the left contraction of two k-blades Ak̄ and Bk̄ is equivalent to the right contraction, we often denote the contraction of two k-blades simply by the dot², i.e.

Ak̄cBk̄ = Ak̄bBk̄ = Ak̄ · Bk̄ for k > 0    (2.22)

² We require that k > 0 to avoid a conflict with the well-established generalized inner product Ak̄ · Bl̄ (Ref. [34]), which is zero by definition if k = 0 or l = 0. Otherwise, Ak̄ · Bl̄ = ⟨Ak̄ Bl̄⟩|k−l| for k, l > 0. However, we favor the right and left contractions, because their geometric interpretation is straightforward. Also, algebraic manipulations are simpler using contractions, because their definition does not involve the “zero grade exception”.

The contractions of two blades are related through the reversion as

(Ak̄bBl̄)† = Bl̄†cAk̄†    (2.23)

Because the reversion and the grade involution

Ăk̄ = (−1)^k Ak̄    (2.24)

can be generalized via the linearity of addition to general multivectors as A† = Σ_k Ak̄† and Ă = Σ_k Ăk̄, we obtain the result

Abx = −xcĂ    (2.25)

and one can write

xcA = (xA − Ăx)/2    (2.26)
Abx = (Ax − xĂ)/2    (2.27)

2.2.1 Expansion rules


It is often necessary to be able to evaluate contractions between multivectors in
terms of the vector factors contained in them. We introduce some very useful
expansion rules for this purpose.
Because we can write the geometric product ab as ab = 2a · b − ba, we may
write the product ab1 b2 . . . bp as

ab1 b2 ... bp = 2(a · b1) b2 ... bp − b1 a b2 ... bp
             = 2(a · b1) b2 ... bp − 2(a · b2) b1 b3 ... bp + b1 b2 a b3 ... bp    (2.28)

This produces the expansion rule

ab1 b2 ... bp = (−1)^p b1 b2 ... bp a + 2 Σ_{k=1}^{p} (−1)^{k+1} (a · bk) b1 b2 ... b̌k ... bp    (2.29)

where b̌k means that the vector bk is omitted from the product. The geometric product b1 b2 ... bp is always an even (odd) multivector if p is even (odd), because each successive pair of vectors (b1 b2), (b3 b4), ..., (bp−1 bp) for even p is an even multivector. One more vector in the product makes the result odd. Thus, by taking advantage of Eq. (2.25),

ac(b1 b2 ... bp) = −(−1)^p (b1 b2 ... bp)ba    (2.30)

and we obtain

ac(b1 b2 ... bp) = Σ_{k=1}^{p} (−1)^{k+1} (a · bk) b1 b2 ... b̌k ... bp    (2.31)

We may extract more expansion rules by taking the grade p − 1 part on both sides of Eq. (2.31). Because ⟨ac(b1 b2 ... bp)⟩p−1 = ac⟨b1 b2 ... bp⟩p = ac(b1 ∧ ... ∧ bp), and ⟨b1 b2 ... b̌k ... bp⟩p−1 = b1 ∧ ... ∧ b̌k ∧ ... ∧ bp, this produces

ac(b1 ∧ ... ∧ bp) = Σ_{k=1}^{p} (−1)^{k+1} (a · bk)(b1 ∧ ... ∧ b̌k ∧ ... ∧ bp)    (2.32)
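For p = 2, Eq. (2.32) reads ac(b1 ∧ b2) = (a · b1) b2 − (a · b2) b1, which in E(3) is the familiar double cross product −a × (b1 × b2). A quick numerical spot check (numpy sketch; the random vectors are my own choice):

    import numpy as np

    rng = np.random.default_rng(0)
    a, b1, b2 = rng.standard_normal((3, 3))

    lhs = -np.cross(a, np.cross(b1, b2))            # -a x (b1 x b2)
    rhs = np.dot(a, b1) * b2 - np.dot(a, b2) * b1   # (a.b1) b2 - (a.b2) b1
    print(np.allclose(lhs, rhs))                    # True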

By using Eqs. (2.25) and (2.32), we can easily extract the contraction rules in
Eqs. (1.30), (1.34) and (1.37). A valuable generalization of Eq. (2.32) is the
formula

Ar̄c(b1 ∧ ... ∧ bp) = Ar̄ · (b1 ∧ ... ∧ br)(br+1 ∧ br+2 ∧ ... ∧ bp)
                    − Ar̄ · (b2 ∧ ... ∧ br+1)(b1 ∧ br+2 ∧ ... ∧ bp) + ...
                  = Σ_{j1<j2<...<jr} ε(j1 j2 ... jp) Ar̄ · (bj1 ∧ ... ∧ bjr)(bjr+1 ∧ ... ∧ bjp)    (2.33)

where Ar̄ = a1 ∧ ... ∧ ar is a simple r-blade, r ≤ p, and the permutation symbol ε(j1 j2 ... jp) is 1 (−1) if (j1 j2 ... jp) is an even (odd) permutation of (1, 2, ..., p). The number of terms in the expansion is the binomial coefficient (p choose r) = p!/[r!(p − r)!], which is the number of ways to choose r objects from a collection of p objects. The right-hand side of Eq. (2.33) can be further simplified by expressing the dot product of the two r-blades as a linear combination of the contractions of (r − 1)-blades as

(a1 ∧ ... ∧ ar) · (b1 ∧ ... ∧ br)
   = Σ_{k=1}^{r} (−1)^{k+1} (ar · bk) (a1 ∧ ... ∧ ar−1) · (b1 ∧ ... ∧ b̌k ∧ ... ∧ br)    (2.34)

This expansion rule may be used repeatedly to reduce any dot product of two
r-blades to a linear combination of dot products of vectors.

Example 11 By the direct application of Eq. (2.34), we see that

(a ∧ b) · (c ∧ d) = (a · d)(b · c) − (a · c)(b · d)

Example 12 By the successive use of Eq. (2.34), the square of the volume
spanned by the vectors x1 , x2 and x3 can be expressed as

|x1 ∧ x2 ∧ x3|² = (x1 ∧ x2 ∧ x3)† · (x1 ∧ x2 ∧ x3)
 = (x3 ∧ x2 ∧ x1) · (x1 ∧ x2 ∧ x3)
 = (x1 · x1) (x3 ∧ x2) · (x2 ∧ x3) − (x1 · x2) (x3 ∧ x2) · (x1 ∧ x3) + (x1 · x3) (x3 ∧ x2) · (x1 ∧ x2)
 = (x1 · x1) [(x2 · x2)(x3 · x3) − (x2 · x3)(x3 · x2)]
 − (x1 · x2) [(x2 · x1)(x3 · x3) − (x2 · x3)(x3 · x1)]
 + (x1 · x3) [(x2 · x1)(x3 · x2) − (x2 · x2)(x3 · x1)]

By expanding the inner products, this reads as

|x1 ∧ x2 ∧ x3|² = x1² x2² x3² (1 − cos²θ12 − cos²θ13 − cos²θ23 + 2 cos θ12 cos θ13 cos θ23)


where θij is the angle between xi and xj .
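The final expression is just the Gram determinant det(xi · xj), which makes a numerical check straightforward (numpy sketch; the three vectors are arbitrary choices of mine):

    import numpy as np

    x = np.array([[1., 2., 0.],
                  [0., 1., 1.],
                  [2., 0., 1.]])             # rows: x1, x2, x3
    gram = x @ x.T                           # matrix of dot products xi . xj
    vol_sq = np.linalg.det(gram)             # |x1 ^ x2 ^ x3|^2

    norms = np.linalg.norm(x, axis=1)
    cos = gram / np.outer(norms, norms)      # cos(theta_ij)
    c12, c13, c23 = cos[0, 1], cos[0, 2], cos[1, 2]
    rhs = np.prod(norms**2) * (1 - c12**2 - c13**2 - c23**2 + 2*c12*c13*c23)
    print(np.isclose(vol_sq, rhs))           # True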

2.2.2 Rewriting rules


The rewriting rule
(A ∧ B)cC = Ac (BcC) (2.35)

is valid for any multivectors A, B, and C. To prove this rule in the general case, it is best to start by proving the equivalent rule

(Ak̄ ∧ Bl̄ )cCm̄ = Ak̄ c (Bl̄ cCm̄ ) (2.36)

valid for any blades Ak̄, Bl̄, Cm̄. If l > m or k + l > m (i.e. k > m − l), both sides are trivially zero. If k + l ≤ m, we can repeatedly use the Laplace expansion rule in Eq. (2.34) to show that the left-hand side is equal to the right-hand side.
The validity of the general formula follows from the fact that any multivector can
be written as a sum of blades.

Example 13 Let A = 2 + u1 + u1 ∧ u2 , B = 1 + u1 , and C = u2 . Then


A ∧ B = (2 + u1 + u1 ∧ u2 ) ∧ (1 + u1 ) = 2 + u1 + u1 ∧ u2 + (2 + u1 + u1 ∧ u2 ) ∧ u1 =
2 + 3u1 + u1 ∧ u2 , so (A ∧ B)cC = (2 + 3u1 + u1 ∧ u2 )cu2 = 2u2 . On the other
hand, (BcC) = (1 + u1 )cu2 = u2 and Ac (BcC) = (2 + u1 + u1 ∧ u2 )cu2 = 2u2 .
Clearly, these two results are equal.

2.3 Geometric product


Generally, one can write for the vector x and arbitrary multivector A

xA = xcA + x ∧ A (2.37)
Ax = Abx + A ∧ x (2.38)

However, the geometric product AB between two arbitrary multivectors A and


B is not related to contractions and outer products by a formula analogous to
Eqs. (2.37) and (2.38). Instead, the geometric product Ak̄ Bl̄ of two blades Ak̄
and Bl̄ results in terms of intermediate grade from |k − l| to k + l ≤ N in steps of two, i.e.

Ak̄ Bl̄ = Σ_{m=0}^{(k+l−|k−l|)/2} ⟨Ak̄ Bl̄⟩|k−l|+2m    (2.39)

2.4 Basis representation


Because the number of independent k-blades in the N-dimensional space (with k ≤ N) is (N choose k) = N!/[k!(N − k)!], any multivector A = A0̄ + A1̄ + ... + AN̄ can be written as a sum of Σ_{k=0}^{N} (N choose k) = 2^N basis elements {1, {g1, g2, ..., gN}, {g1 ∧ g2, g1 ∧ g3, ..., gN−1 ∧ gN}, ..., g1 ∧ g2 ∧ ... ∧ gN} as
A = Σ_{k=0}^{N} Σ_{i=1}^{(N choose k)} Ak̄^(i) Gi^(k̄) = Σ_{k=0}^{N} Σ_{i=1}^{(N choose k)} Ai^(k̄) Gk̄^(i)    (2.40)

where Gi^(k̄) = gi1 ∧ gi2 ∧ ... ∧ gik (k vector factors) is the covariant ith k-blade basis element, Gk̄^(i) = g(i1) ∧ g(i2) ∧ ... ∧ g(ik) (k vector factors) is the contravariant ith k-blade basis element, Ak̄^(i) = Ak̄ · Gk̄^(i)†, and Ai^(k̄) = Ak̄ · Gi^(k̄)†. The vectors gi1, gi2, ..., gik are selected from the set {g1, g2, ..., gN} by requiring i1 < i2 < ... < ik.
The reciprocal basis vectors {g(1), g(2), ..., g(N)} are obtained as

g(i) = (−1)^{i−1} (g1 ∧ ... ∧ ǧi ∧ ... ∧ gN) / (g1 ∧ ... ∧ gi ∧ ... ∧ gN)    (2.41)
where ǧi means that the vector gi is omitted from the product. As the name
implies, the basis and reciprocal basis are related by

gi · g(j) = δij (2.42)

where δij is one if i = j and zero otherwise. Apart from being reciprocal, the co- and contravariant basis vectors are related as

Σ_{i=1}^{N} gi g(i) = Σ_{i=1}^{N} g(i) gi = N    (2.43)

Proof. By the reciprocality condition in Eq. (2.42), it follows that Σ_{i=1}^{N} gi · g(i) = Σ_{i=1}^{N} g(i) · gi = N. On the other hand, because g(i) = Σ_j (g(i) · g(j)) gj, it follows that Σ_{i=1}^{N} gi ∧ g(i) = Σ_{ij} (g(i) · g(j)) gi ∧ gj = 0 (note that for any given term (g(i) · g(j)) gi ∧ gj in the double sum there is also its negative (g(i) · g(j)) gj ∧ gi = −(g(i) · g(j)) gi ∧ gj). This concludes the proof.
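In coordinates, the reciprocal frame of Eqs. (2.41)-(2.42) can equivalently be computed from the Gram matrix Gij = gi · gj as g(i) = Σ_j (G⁻¹)ij gj. A small numpy sketch (the frame below is an arbitrary invertible choice of mine, Euclidean signature assumed):

    import numpy as np

    g = np.array([[1., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 2.]])                      # rows: g1, g2, g3
    G = g @ g.T                                       # Gram matrix G_ij = gi . gj
    g_recip = np.linalg.inv(G) @ g                    # rows: g^(1), g^(2), g^(3)

    print(np.allclose(g @ g_recip.T, np.eye(3)))      # gi . g^(j) = delta_ij, Eq. (2.42)
    print(np.isclose(np.sum(g * g_recip), 3.0))       # sum_i gi . g^(i) = N, the scalar part of Eq. (2.43)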
A generalization of Eq. (2.43) is

Σ_{i=1}^{N} gi Ar̄ g(i) = Σ_{i=1}^{N} g(i) Ar̄ gi = (−1)^r (N − 2r) Ar̄    (2.44)

or

Σ_{i=1}^{N} g(i) (gi c Ar̄) = Σ_{i=1}^{N} gi (g(i) c Ar̄) = r Ar̄    (2.45)
Σ_{i=1}^{N} g(i) (gi ∧ Ar̄) = Σ_{i=1}^{N} gi (g(i) ∧ Ar̄) = (N − r) Ar̄    (2.46)

where Ar̄ is some arbitrary r-vector.


Proof. Obviously a = Σ_{i=1}^{N} (gi · a) g(i) = Σ_{i=1}^{N} (g(i) · a) gi. This, in turn, together with the expansion formula in Eq. (1.30) implies that Σ_{i=1}^{N} g(i) (gi c (a ∧ b)) = Σ_{i=1}^{N} g(i) (gi · a) b − Σ_{i=1}^{N} g(i) (gi · b) a = ab − ba = 2 a ∧ b. A moment of thought should reveal that the extension to arbitrary r-blades produces Eq. (2.45) (the second equality follows by interchanging the roles of a frame and its reciprocal).
On the other hand, Σ_{i=1}^{N} g(i) (gi ∧ Ar̄) = Σ_{i=1}^{N} g(i) (gi Ar̄ − gi c Ar̄), from which Eq. (2.46) can be read off.
The result in Eq. (2.44) follows by combining Eqs. (2.45) and (2.46) and using the fact that gi ∧ Ar̄ = (−1)^r Ar̄ ∧ gi.

2.4.1 Plücker coordinates

The ratios of the coefficients Ak̄^(i) are independent of the magnitude of Ak̄. They are called the Plücker coordinates. If A is a simple k-blade a1 ∧ a2 ∧ ... ∧ ak, the coordinates Ak̄^(i) are related by the condition A ∧ A = 0.

Example 14 Let the dimension of space be N = 4. Then, if A is any bivector A = u ∧ v,

A = A2̄^(12) g1 ∧ g2 + A2̄^(13) g1 ∧ g3 + A2̄^(14) g1 ∧ g4 + A2̄^(23) g2 ∧ g3 + A2̄^(24) g2 ∧ g4 + A2̄^(34) g3 ∧ g4

By using A ∧ A = 0, we obtain the condition

A2̄^(12) A2̄^(34) − A2̄^(13) A2̄^(24) + A2̄^(14) A2̄^(23) = 0
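A numerical illustration of Example 14 (numpy sketch; u and v are arbitrary vectors of mine, the frame is taken orthonormal, and the Plücker coordinates are the 2×2 minors A^(ij) = ui vj − uj vi):

    import numpy as np
    from itertools import combinations

    u = np.array([1., 2., 0., -1.])
    v = np.array([0., 1., 3., 2.])

    # Plucker coordinates of the simple bivector A = u ^ v
    A = {(i, j): u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2)}

    # Quadratic relation of Example 14 (indices 0..3 instead of 1..4)
    lhs = A[(0, 1)]*A[(2, 3)] - A[(0, 2)]*A[(1, 3)] + A[(0, 3)]*A[(1, 2)]
    print(np.isclose(lhs, 0.0))   # True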

2.5 Duality

Because the number of independent k-blades in the N-dimensional space, (N choose k) (with k ≤ N), is equal to the number of independent (N − k)-blades, we can define the dual Ãk̄ of a blade Ak̄ as

Ãk̄ = Ak̄ IN̄⁻¹ = Ak̄cIN̄⁻¹    (2.47)

We then define a new ⋆-product through

Ã ⋆ B̃ = (AB)~    (2.48)

and

A ⋆ B = (Ã B̃)~    (2.49)

As the reader can verify using Eqs. (2.48) and (2.49), the ⋆-product is associative, i.e.

A ⋆ (B ⋆ C) = (A ⋆ B) ⋆ C    (2.50)

and distributive with respect to addition, i.e.,

A ⋆ (B + C) = A ⋆ B + A ⋆ C    (2.51)
(B + C) ⋆ A = B ⋆ A + C ⋆ A    (2.52)

and the pseudoscalar 1̃ = IN̄⁻¹ plays the role of the unit element, i.e.

1̃ ⋆ A = IN̄⁻¹ ⋆ A = A    (2.53)

We can say that the ⋆-product represents a dual to the geometric product. While the geometric product generates the N-dimensional geometric algebra G(p, q) from the subspace G1̄(p, q) of the vectors {u1, u2, ..., uN}, the ⋆-product generates the same geometric algebra from the subspace GN−1(p, q) of the (N − 1)-blades {ũ1, ũ2, ..., ũN}. We can stretch this analogy further, and define the “outer” product ∨ as

Ãk̄ ∨ B̃l̄ = (⟨Ãk̄ ⋆ B̃l̄⟩k+l)~    (2.54)

and the “contractions” d and e as

Ãk̄ e B̃l̄ = (⟨Ãk̄ ⋆ B̃l̄⟩l−k)~    (2.55)
Ãk̄ d B̃l̄ = (⟨Ãk̄ ⋆ B̃l̄⟩k−l)~    (2.56)

2.6 Join and meet


The join J = Ak̄ ∪ Bl̄ is the smallest blade containing both Ak̄ and Bl̄ as factors.
If the blades Ak̄ and Bl̄ are disjoint (i.e. if Ak̄ cBl̄ = Ak̄ bBl̄ = 0), their join is
obviously proportional to Ak̄ ∧ Bl̄ . Assume now for a while that l ≥ k, and that
Ak̄ = a1 ∧ a2 ∧ ... ∧ ak and Bl̄ are not disjoint. Then the blade Bl̄ can be written,

up to scale and orientation, as Bl̄ =̇ ai1 ∧ ai2 ∧ ... ∧ aij ∧ (b1 ∧ b2 ∧ ... ∧ bl−j),

where the overdotted equality sign signifies equality up to a scalar factor. The indices {i1, i2, ..., ij} can be any j-permutation from {1, 2, ..., k}, and ai · bj = 0 for any value of the indices i and j. In this case, the join is J =̇ a1 ∧ a2 ∧ ... ∧ ak ∧ b1 ∧ b2 ∧ ... ∧ bl−j ≠ Ak̄ ∧ Bl̄. The unknown scaling factor remains fundamentally unresolvable due to the reshapable nature of the blades. Fortunately, it appears that in all geometrically relevant entities this ambiguity cancels.
The meet M = Ak̄ ∩ Bl̄ is the largest common factor of the blades Ak̄
and Bl̄ (the intersection of Ak̄ and Bl̄ ). If Ak̄ = a1 ∧ a2 ∧ ... ∧ ak and Bl̄ =

ai1 ∧ ai2 ∧ ... ∧ aij ∧ (b1 ∧ b2 ∧ ... ∧ bl−j ), the meet is proportional to ai1 ∧ ai2 ∧
... ∧ aij. If the sum of the grades of the blades Ak̄ and Bl̄ (where k ≤ l) in the n-dimensional space is k + l ≥ n, the meet of the two blades is M = (Ak̄ In̄⁻¹)cBl̄, where In̄ is a unit n-blade. The join and meet are related (up to orientation and scale) as

J =̇ Ak̄ ∧ (M⁻¹cBl̄) = (Ak̄bM⁻¹) ∧ Bl̄    (2.57)
M =̇ (Bl̄cJ⁻¹)cAk̄ = Bl̄b(J⁻¹bAk̄)    (2.58)

2.7 Orthogonal projections

In short, an orthogonal projection operator P̂A∥(X) returns the part of X which belongs (in some sense) to the subspace of A. Both A and X can (in principle) be arbitrary multivectors, although defining the projection operator is by no means evident in the most general case (but see Exercise 52). However, some of the preferable characteristics of P̂A∥(X) can be listed immediately. Any operator called a projection operator must fulfill the following (intuitive) criteria:

Criterion 6 (Identity projection) Projection of A onto itself is an identity operation:

P̂A∥(A) = A    (2.59)

This criterion is sufficiently self-evident, so it need not be further justified.

Criterion 7 (Projection of complement) The projection of the complement X − P̂A∥(X) to A is zero, i.e.

P̂A∥(X − P̂A∥(X)) = 0    (2.60)

Criterion 7 is especially crucial in the development of multivector calculus on manifolds, where the part of the multivector derivative differentiating in the direction normal to the manifold must be annihilated. It naturally suggests that the rejection operator should be defined as

P̂A⊥(X) = X − P̂A∥(X)    (2.61)

Note, on the other hand, that projections need not commute. For example, the orthogonal projection P̂A∥(a) of a vector a to a plane A at right angles to it is zero. Consequently, the projection P̂B∥(P̂A∥(a)) of P̂A∥(a) to the plane B, which is at an angle of 45° both to a and A, is also zero. On the other hand, both P̂B∥(a) and P̂A∥(P̂B∥(a)) differ from zero, as the reader can easily verify.

2.7.1 Projections of blade to blade

Consider now the orthogonal projection P̂a∥(x) of vector x to vector a. It can be extracted from the product of x with 1 = aa⁻¹, which results in

x a a⁻¹ = (xa) a / a² = (x · a + x ∧ a) a / a²    (2.62)

We immediately recognize the (parallel) component (x · a) a/a² of x in a as the projection P̂a∥(x) (see Fig. 2.1). Sometimes the rejection (x ∧ a) a/a² (the part of x which is orthogonal to a) is denoted as P̂a⊥(x).
Similarly, we obtain the projection P̂A∥(x) of vector x to bivector A as the parallel component in the product

x = x A A⁻¹ = (xcA) A⁻¹ + (x ∧ A) A⁻¹    (2.63)

i.e., P̂A∥(x) = (xcA) A⁻¹.
We can generalize this pattern and define the projection P̂Ak̄∥(Xl̄) of any l-blade Xl̄ to an invertible k-blade Ak̄ as

P̂Ak̄∥(Xl̄) = (Xl̄cAk̄) Ak̄⁻¹    (2.64)

Example 15 Let x = u1 + u2 and A = u1 ∧ u2 − 2u3 ∧ u1. Then |A|² = 1² + 2² = 5, A⁻¹ = A†/|A|² = (2/5) u3 ∧ u1 − (1/5) u1 ∧ u2, and xcA⁻¹ = (u1 + u2)c((2/5) u3 ∧ u1 − (1/5) u1 ∧ u2) = −(2/5) u3 − (1/5) u2 + (1/5) u1, and

P̂A∥(x) = (xcA⁻¹)cA = (−(2/5) u3 − (1/5) u2 + (1/5) u1)c(u1 ∧ u2 − 2u3 ∧ u1)
        = (4/5) u1 + (1/5) u1 + (1/5) u2 + (2/5) u3 = u1 + (1/5) u2 + (2/5) u3
It is worth emphasizing that this rule holds without exceptions, unlike those utilizing the “symmetric” inner product (with its zero grade exception). The projection P̂Ak̄∥(Xl̄) of an l-blade Xl̄ to the blade Ak̄ fulfills

Ak̄ ∧ P̂Ak̄∥(Xl̄) = 0    (2.65)

In other words, the blade P̂Ak̄∥(Xl̄) can be expanded in terms of a set of k vectors {a1, a2, ..., ak} which span the subspace whose pseudoscalar is Ak̄ = a1 ∧ a2 ∧ ... ∧ ak. By simple inspection we see that the projection of a blade onto a blade preserves the grade of the blade it operates on, or produces a zero result. In other words, ⟨P̂A∥(Xl̄)⟩l̄ = P̂A∥(Xl̄) or P̂A∥(Xl̄) = 0. The projection of Xl̄ to A is zero only if Xl̄ has no projection on A. Incidentally, the zero exception does not necessarily conflict with the grade preservation, because the resulting projection could as well be regarded as a null l-blade 0l̄ (i.e. an l-blade with magnitude zero). It follows from this and the grade-preserving property that the projection of a blade onto a blade of lower grade is always zero, i.e.

P̂Ak̄∥(Xl̄) = 0 if l > k    (2.66)

By using the definition in Eq. (2.64), we obtain some special relations for com-
posite projections:

1. AB = A ∧ B ⇒
   (a) P̂A∥ ◦ P̂B∥ = 0
   (b) P̂B∥ ◦ P̂A∥ = 0

2. AB = AcB ⇒
   (a) P̂A∥ ◦ P̂B∥ = P̂A∥
   (b) P̂B∥ ◦ P̂A∥ = P̂A∥

3. P̂A∥ ◦ P̂A∥ = P̂A∥

Figure 2.1: Decomposition of vector a to components along and perpendicular


to vector b.

2.8 Cross product


Can one define a cross product of vectors in N -dimensional space E(N )? The
answer to that question depends on which properties of the cross product in
three-dimensional space are used to define the cross product in N -dimensional
space. We might require that

1. a × b is a bilinear vector function of a and b.

2. a × b is perpendicular to both a and b, i.e. (a × b) · a = 0 and (a × b) · b = 0.

3. |a × b| is equal to |a ∧ b|.

The second requirement excludes the generalization of cross-product to one-


and two-dimensional spaces: in those spaces there is no direction perpendicular to a and b. On the other hand, the perpendicular direction to a ∧ b is ambiguous for spaces with N > 3. For example, in four-dimensional space all the directions κu3 + λu4 with either of the scalars κ or λ differing from zero are perpendicular to the plane u1 ∧ u2 ({u1, u2, u3, u4} are four orthonormal vectors spanning the space). The concept of cross product can, however, be generalized


to seven-dimensional space. In dimension 7 one can define for u1 , u2 , ..., u7 the
multiplication rules

u1 × u2 = u4 u2 × u4 = u1 u4 × u1 = u2
u2 × u3 = u5 u3 × u5 = u2 u5 × u2 = u3
..
. (2.67)
u7 × u1 = u3 u1 × u3 = u7 u3 × u7 = u1

and ui × uj = −uj × ui . The above multiplication table can be condensed into


the form ui × ui+1 = ui+3 where the indices i, i + 1 and i + 3 are permuted
cyclically among themselves and computed modulo 7. Contrary to the usual
three-dimensional cross product this 7-dimensional cross product does not satisfy
the Jacobi identity

(a × b) × c + (b × c) × a + (c × a) × b = 0 (2.68)
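The multiplication rule ui × ui+1 = ui+3 (mod 7) is concrete enough to tabulate and test numerically. The sketch below (plain numpy; the structure-constant construction and the test vectors are mine) checks orthogonality, the magnitude requirement |a × b|² = |a|²|b|² − (a · b)², and the failure of the Jacobi identity (2.68) for generic vectors:

    import numpy as np

    # Structure constants built from u_i x u_{i+1} = u_{i+3} (indices mod 7),
    # with cyclic products inside each triple (i, i+1, i+3).
    eps = np.zeros((7, 7, 7))
    for i in range(7):
        t = (i, (i + 1) % 7, (i + 3) % 7)
        for a_, b_, c_ in (t, (t[1], t[2], t[0]), (t[2], t[0], t[1])):
            eps[a_, b_, c_] = 1.0
            eps[b_, a_, c_] = -1.0

    def cross7(x, y):
        return np.einsum('i,j,ijk->k', x, y, eps)

    rng = np.random.default_rng(1)
    a, b, c = rng.standard_normal((3, 7))
    print(np.isclose(cross7(a, b) @ a, 0.0), np.isclose(cross7(a, b) @ b, 0.0))   # orthogonality
    print(np.isclose(cross7(a, b) @ cross7(a, b), (a @ a)*(b @ b) - (a @ b)**2))  # |a x b| = |a ^ b|
    jacobi = cross7(cross7(a, b), c) + cross7(cross7(b, c), a) + cross7(cross7(c, a), b)
    print(np.allclose(jacobi, 0.0))   # False: the Jacobi identity fails in 7 dimensions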

A cross product of two vectors satisfying the above rules is unique to dimensions 3 and 7; it does not exist in any other dimension. It can also be defined by the conventional quaternion or octonion products as “a × b = ⟨ab⟩1̄”, where the product ab is interpreted as consisting of scalar and vector parts. Of course, in our viewpoint ab is a scalar plus a bivector, so taking the “1-vector” part in E(3) should really be interpreted as taking the dual of the bivector part of ab, i.e., a × b = (a ∧ b)ci, as before. In E(7) the cross product is given in our viewpoint as

a × b = (a ∧ b)c(i124 + i235 + i346 + i457 + i561 + i672 + i713)    (2.69)

where iijk = ui uj uk. From this it follows that there are vectors, say c and d, which do not lie in the plane a ∧ b, but whose cross product c × d defined via Eq. (2.69) is in the direction of a × b. This can be understood by considering the number of linearly independent bivectors and vectors in E(7): there are (7 choose 2) = 21 linearly independent bivectors, but only (7 choose 1) = 7 linearly independent vectors. Thus, if we map a bivector to a vector, this mapping cannot be a one-to-one correspondence in E(7), but just a method of associating a vector to a simple bivector.

The three-dimensional cross product is invariant under all rotations of SO(3),


while the 7-dimensional cross product is not invariant under all of SO(7), but
only under the exceptional Lie group G2 , a subgroup of SO(7).
Interestingly enough, the conclusion that the cross product has an analogue only in seven-dimensional space is not changed even if the requirements are weakened to

1. a × b is a continuous function of a and b.

2. a × b is perpendicular to both a and b, i.e. (a × b) · a = 0 and (a × b) · b = 0.

3. If a and b are linearly independent, a × b ≠ 0.

If one is looking for a vector-valued product of k factors, not just two, then one should first modify or formalize the requirements for k factors. A natural thing to do is to consider a vector-valued product a1 × a2 × ... × ak satisfying (a1 × a2 × ... × ak) · ai = 0 (orthogonality) and (a1 × a2 × ... × ak)² = det(ai · aj) (Gram determinant). In this context an answer to the question “in what dimensions is there a generalization of the cross product” is that there are cross products in 3 dimensions with 2 factors, in 7 dimensions with 2 factors, in N dimensions with N − 1 factors, in 8 dimensions with 3 factors, and no others (except if one allows trivial answers; then there would also be, in all even dimensions, a vector product with only 1 factor and, in 1 dimension, an identically vanishing cross product of 2 factors).

2.9 Further reading


The axiomatic approach to N-dimensional geometric algebra is given in Clifford Algebra to Geometric Calculus by Hestenes and Sobczyk [34].
Null-vectors and signatures of spaces are discussed in an article by Pozo and
Sobczyk [65].
Duality in N -dimensional geometric algebra is studied in the paper by Conradt
[15].
2.10. EXERCISES 85

The generalization of the cross product to N dimensions is discussed in Lounesto's book Clifford Algebras and Spinors [54] (see also his paper [55]). It is also studied in the papers by Massey [56] and Silagadze [66].

2.10 Exercises
Exercise 46 Prove that the product AB of two even (odd) multivectors A and
B is always even.

Exercise 47 Prove that ⟨a1 a2 ... ar⟩s̄ = 0 if r + s is an odd integer.

Exercise 48 Why is the scalar exception (Footnote 2) needed, if one defines


aA = a · A + a ∧ A for an arbitrary vector a and an arbitrary multivector A?

Exercise 49 Let Ak̄ = a1 ∧ a2 ∧ ... ∧ ak and Bl̄ = a1 ∧ a2 ∧ ... ∧ aj−1 ∧ c ∧ b1 ∧ b2 ∧


... ∧ bl−j , where l ≥ k. What is the meet M = Ak̄ ∩ Bl̄ and the join J = Ak̄ ∪ Bl̄ ,
if a) c = aj b) c = aj + a1 c) c = aj + b1 d) c = aj + d, where d · ai = d · bi = 0
for all indices i.

Exercise 50 If AB = A ∧ B, it is often claimed that P̂AB∥ = P̂A∥ + P̂B∥. Consider the four-dimensional space E(4), and choose A = u1 ∧ u2, B = u3 ∧ u4. Evidently, in E(4), A ∧ B = u1 ∧ u2 ∧ u3 ∧ u4 = u1 u2 u3 u4 = AB (see Sect. 2). Calculate P̂A∥(u2 ∧ u3), P̂B∥(u2 ∧ u3), and P̂AB∥(u2 ∧ u3). Does the result agree with the above “rule”? Can you invent other exceptions to the “rule” above?

Exercise 51 If AB = AcB, it is often claimed that P̂AB∥ = P̂B∥ + P̂A∥. Can you find counterexamples to this “rule”? (Hint: see the previous Exercise.)

Exercise 52 Define

ÔA(B) = Σ_{j=1}^{3} Σ_{i=j}^{3} P̂⟨A⟩ī∥(⟨B⟩j̄) − Σ_{j=0}^{2} Σ_{k=j}^{2} Σ_{i=k+1}^{3} P̂⟨A⟩ī∥(⟨B⟩j̄) |⟨A⟩k̄|
        + Σ_{j=0}^{1} Σ_{k=j}^{1} Σ_{l=k+1}^{2} Σ_{i=l+1}^{3} [P̂⟨A⟩ī∥(⟨B⟩j̄) |⟨A⟩k̄| |⟨A⟩l̄| − P̂⟨A⟩ī∥(⟨B⟩0̄) |⟨A⟩0̄| |⟨A⟩1̄| |⟨A⟩2̄|]


where we now assume that B is an arbitrary multivector in E(3) and A is a sum of unit k-vectors (i.e., either |⟨A⟩k̄| = 1 or ⟨A⟩k̄ = 0) in E(3). Does the operator ÔA fulfill Criteria 6 and 7? Is it linear? Does it preserve grades, if B is some pure k-vector? Taking all this into account, is ÔA a valid generalization of the projection operator in E(3)? Do you think it is unique? Can you give any straightforward geometric interpretation for its action in E(3)? [This Exercise is due to my student, Elina Sälli]

Exercise 53 Prove that the cross product in N-dimensional space E(N) does not generally satisfy the triple cross product identity a × (b × c) = b (a · c) − c (a · b) for N ≠ 1, 3. Hint: define a ternary product

{a, b, c} = a × (b × c) − b (a · c) + c (a · b)    (2.70)

which is zero if the identity is valid, and show that Σ_{ijk}^{N} {ui, uj, uk}² = N(N − 1)(N − 3), where {u1, u2, ..., uN} is an orthonormal basis for N-dimensional Euclidean space. Is the case N = 1 mathematically interesting?

Exercise 54 Show that the ternary product in Eq. (2.70) satisfies the identity
2a × {b, c, d} = {a, b, c × d} + {a, c, d × b} + {a, d, b × c}.
Exercise 55 Calculate Σ_{ijkl}^{N} |ui × {uj, uk, ul}|². What does the result imply, if one tries to define cross products for N-dimensional space assuming the identity 2a × {b, c, d} = {a, b, c × d} + {a, c, d × b} + {a, d, b × c} but not the validity of a × (b × c) = b (a · c) − c (a · b)?
